QualiRise AI
Data & AI Privacy
OpenAI, Anthropic, Google, and local LLM setups each handle data differently. This page summarizes the main operating differences for teams evaluating AI tooling.
OpenAI
OpenAI separates consumer privacy controls from business and API controls; the points below apply to API usage.
API inputs and outputs are not used to train OpenAI models unless the organization opts in.
Abuse monitoring logs are retained up to 30 days by default, with additional controls for eligible customers.
Eligible organizations can request Modified Abuse Monitoring or Zero Data Retention for supported API usage.
Official references: OpenAI API data controls and OpenAI business data privacy.
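Beyond the organization-level training opt-in and retention controls above, the OpenAI Chat Completions API also exposes a per-request store parameter that controls whether a completion is persisted for later retrieval in the dashboard. This is separate from abuse-monitoring retention and training opt-in, but teams often want to set it explicitly anyway. A minimal sketch that builds such a request body without sending it (the model name is illustrative):

```python
import json

def build_request(model: str, prompt: str) -> str:
    """Build a Chat Completions request body with storage disabled.

    Note: "store" controls whether the completion is kept for later
    retrieval; it does not change abuse-monitoring retention or the
    training opt-in, which are organization-level settings.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "store": False,  # do not persist this completion for retrieval
    }
    return json.dumps(body)

payload = json.loads(build_request("gpt-4o-mini", "Summarize our retention policy."))
```

Verify the current parameter name and default against the OpenAI API reference before relying on it.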
Anthropic
Anthropic documents different handling for consumer and commercial usage. Team, Enterprise, API, third-party platform, and Claude Gov usage follow commercial data policies.
Commercial code and prompts are not used to train generative models unless the customer opts in.
Commercial Claude Code and API usage has a standard 30-day retention period.
Zero data retention is available for supported commercial configurations and is enabled at the organization level.
Official references: Anthropic Claude data usage and Anthropic commercial data processor guidance.
Google
Google Cloud publishes data governance guidance for generative AI on Vertex AI, including training restrictions and configuration steps for zero data retention.
Google states that customer data is not used to train or fine-tune its managed AI/ML models without the customer's prior permission or instruction.
Gemini inputs may be cached for up to 24 hours by default unless project-level caching is disabled.
Zero data retention must be configured explicitly, and some features have their own retention behavior.
Official reference: Google Cloud Vertex AI data governance.
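The Vertex AI data-governance guidance describes a project-level cache configuration with a flag that disables the 24-hour input caching mentioned above. The sketch below only assembles that request; the endpoint, HTTP verb, resource path, and field names are assumptions to be verified against the official Google Cloud reference before use.

```python
import json

def cache_optout_request(project_id: str) -> dict:
    """Assemble a hypothetical Vertex AI cache opt-out request.

    Assumption: a project-level cacheConfig resource accepts a
    disableCache flag, per the Vertex AI data-governance docs.
    This function does not perform any network call.
    """
    resource = f"projects/{project_id}/cacheConfig"
    return {
        "url": f"https://us-central1-aiplatform.googleapis.com/v1beta1/{resource}",
        "body": json.dumps({"name": resource, "disableCache": True}),
    }

req = cache_optout_request("my-gcp-project")  # project ID is illustrative
```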
Local LLM
Ollama runs open-weight models locally on macOS, Windows, or Linux. For teams with strict privacy needs, this can keep prompts, files, and generated responses on controlled machines.
Local model runs do not send prompts or outputs to a provider for model training.
Retention depends on your own device, logs, local app settings, and any tools connected to Ollama.
Teams control model choice, local storage, network access, machine permissions, and integration boundaries.
Official references: Ollama documentation and Ollama API introduction.
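To make the local boundary concrete, the sketch below sends a prompt to Ollama's documented /api/generate endpoint on the default local port, using only the standard library. It assumes Ollama is installed, the server is running, and the named model has been pulled; the model name is illustrative.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_body(model: str, prompt: str) -> bytes:
    # stream=False asks for one JSON object instead of a token stream
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server.

    The request targets localhost, so prompts and outputs stay on the
    machine unless your own logging or network setup forwards them.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_body(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Auditing retention here means auditing your own machine: the model file, any local logs, and whatever tools you connect to the server.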