Documentation Index
Fetch the complete documentation index at: https://claude.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
Research Preview. Cowork on 3P is under active development. These docs will be updated with new product features. We will share the timeline for GA when it is available.
The data-residency, compliance, and “no conversation data sent to Anthropic” statements throughout these pages apply when inferenceProvider is vertex or bedrock. They also apply when inferenceProvider is gateway, provided your gateway does not route inference to Anthropic infrastructure (directly or via Microsoft Foundry). They do not apply when using Microsoft Foundry.

Microsoft Foundry (preview): In this preview platform integration, Claude models run on Anthropic’s infrastructure; Foundry is a commercial integration for billing and access through Azure. Because Anthropic acts as an independent processor for Microsoft, customers using Claude through Microsoft Foundry are subject to Anthropic’s data use terms. Anthropic continues to provide its industry-leading safety and data commitments, including zero-data-retention availability.

Who it’s for
Cowork on 3P is designed for organizations whose security, regulatory, or contractual requirements prevent them from sending data to Anthropic’s first-party infrastructure. Typical deployments include:
- Highly regulated enterprises on 3P only — organizations that use third-party inference for regulatory or security reasons
- International enterprises with data residency requirements — organizations that require in-region data residency and cannot send conversation data to the United States
- Public sector and defense — agencies and contractors operating under FedRAMP, ITAR, or sovereign-cloud mandates
Architecture
Cowork on 3P keeps the standard Cowork feature set and relocates inference to the provider you configure.

| Component | Standard Cowork | Cowork on 3P |
|---|---|---|
| Model inference | Anthropic API | Your Vertex AI / Bedrock / Foundry / gateway endpoint |
| Web application | Loaded from claude.ai | Bundled inside the desktop app |
| User identity | Anthropic account | Local device identity only |
| Conversation storage | Anthropic backend | Local disk on the user’s machine |
| Code execution sandbox | Local VM | Local VM (identical) |
| Configuration | Admin console at claude.ai | OS-native configuration (MDM-managed or per-user) |
Security posture
- No conversation egress to Anthropic (Vertex AI and Bedrock only). Prompts, responses, files, and tool outputs are sent only to your configured inference endpoint and stored only on the local machine.
- Sandboxed tool execution. Shell commands run in the hardened Cowork VM; file access is scoped to your allowed folders and web fetches to your egress allowlist.
- Auditable telemetry. Crash reports and product analytics are scrubbed of conversation and user data before being sent to Anthropic, and can be fully disabled via configuration keys. Independently, you can export full session activity (prompts, tool calls, token counts) to your own OpenTelemetry collector.
- Centrally managed. All configuration is delivered via your existing MDM (Jamf, Intune, Workspace ONE, Group Policy) and cannot be overridden by end users when an admin profile is present.
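To make the bullets above concrete, here is a minimal sketch of what an MDM-delivered managed configuration could contain. Only inferenceProvider and inferenceVertexRegion are key names taken from these docs; the three telemetry keys and the region value are hypothetical placeholders, and the on-disk format will depend on your MDM and OS (e.g. a plist on macOS, a registry policy on Windows). See the Configuration reference for the actual key names and recommended profiles.

```json
{
  "inferenceProvider": "vertex",
  "inferenceVertexRegion": "us-east5",
  "crashReportingEnabled": false,
  "productAnalyticsEnabled": false,
  "autoUpdateEnabled": false
}
```

With a profile like this in place, the app routes all inference to your Vertex AI region and, per the telemetry bullet above, sends nothing to Anthropic; end users cannot override any of these values while the admin profile is present.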
Data residency and international deployment
This section applies when using Vertex AI or Bedrock. Inference requests go directly from the user’s machine to the regional endpoint you configure (inferenceVertexRegion or inferenceBedrockRegion). Because conversation data goes only to that endpoint and to local disk, residency is determined entirely by:
- The cloud region you select for inference
- The physical location of the user’s device, where conversations are persisted
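As an illustrative sketch of the first point, an EU deployment that must keep inference in-region could pin the endpoint to a European region via the documented region key (the region value is an example only; confirm model availability in your chosen region with your provider):

```json
{
  "inferenceProvider": "vertex",
  "inferenceVertexRegion": "europe-west1"
}
```

The second point, device location, is outside the app’s control: conversations persist to the local disk of whatever machine the user runs Cowork on.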
Public sector and highly regulated environments
This section applies when using Vertex AI or Bedrock. Because inference runs in your cloud tenant, Cowork on 3P operates inside whatever compliance boundary your provider and region give you. The desktop application itself contacts Anthropic only for crash reporting, product analytics, and auto-updates, and each of these can be disabled independently via managed configuration. With Anthropic-bound telemetry and updates disabled, the compliance posture of your deployment is determined entirely by your inference provider. If you run inference in a FedRAMP High–authorized region of Vertex AI or Bedrock, model traffic stays within that boundary and the FedRAMP relationship is between your organization and that provider. See Telemetry and egress for the full set of network paths and how to lock them down.

Next steps
Installation and setup
Roll out Cowork on 3P to your organization with MDM, or configure a single machine for evaluation.
Configuration reference
Every managed-configuration key, what it does, and recommended security profiles.
Extensions
Deploy MCP servers, plugins, skills, and hooks across your fleet.
Telemetry and egress
What the app sends to Anthropic, how to turn it off, and the firewall allowlist you’ll need.