Data & AI Privacy

Compare how leading AI providers handle your data.

OpenAI, Anthropic, Google, and local LLM workflows all handle data differently. This page summarizes the main operating differences for teams evaluating AI workflows.

OpenAI

API and business data controls

OpenAI separates consumer controls from business and API controls. For API usage, customer data is not used for model training unless the customer explicitly opts in.

Training use

API inputs and outputs are not used to train OpenAI models unless the organization opts in.

Retention

Abuse monitoring logs are retained for up to 30 days by default, with additional controls available to eligible customers.

Controls

Eligible organizations can request Modified Abuse Monitoring or Zero Data Retention for supported API usage.

Official references: OpenAI API data controls and OpenAI business data privacy.

Anthropic

Commercial Claude and API controls

Anthropic documents different handling for consumer and commercial usage. Team, Enterprise, API, third-party platform, and Claude Gov usage follow commercial data policies.

Training use

Prompts, outputs, and code from commercial usage are not used to train Anthropic's generative models unless the customer opts in.

Retention

Commercial Claude Code and API usage has a standard 30-day retention period.

Controls

Zero data retention is available for supported commercial configurations and is enabled at the organization level.

Official references: Anthropic Claude data usage and Anthropic commercial data processor guidance.

Google

Vertex AI and Gemini on Google Cloud

Google Cloud publishes data governance guidance for generative AI on Vertex AI, including training restrictions and configuration steps for zero data retention.

Training use

Google states that customer data is not used to train or fine-tune its managed AI/ML models without prior permission or instruction from the customer.

Retention

Gemini inputs may be cached for up to 24 hours by default unless project-level caching is disabled.

Controls

Achieving zero data retention requires explicit configuration choices, such as disabling project-level data caching, and some features carry their own retention behavior.

Official reference: Google Cloud Vertex AI data governance.

Local LLM

Ollama on your own hardware

Ollama runs open-weight models locally on macOS, Windows, or Linux. For teams with strict privacy needs, this can keep prompts, files, and generated responses on controlled machines.
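As a concrete illustration of that boundary, Ollama's documented HTTP API listens on localhost port 11434 by default, so inference requests never leave the machine unless you change the host. A minimal Python sketch (the model name `llama3.2` is an assumption; substitute any model you have pulled locally):

```python
import json
from urllib import request

# Ollama's default local endpoint (see the Ollama API docs);
# traffic stays on this machine unless you point it elsewhere.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Sending requires a running Ollama server with the model pulled,
    # e.g. `ollama pull llama3.2` beforehand.
    req = build_generate_request("llama3.2", "Summarize our retention policy.")
    with request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Because the endpoint is local, the same privacy review that covers the machine covers the model traffic.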

Training use

Local model runs do not send prompts or outputs to a provider for model training.

Retention

Retention depends on your own device, logs, local app settings, and any tools connected to Ollama.

Controls

Teams control model choice, local storage, network access, machine permissions, and integration boundaries.

Official references: Ollama documentation and Ollama API introduction.