
Cloud AI logs your mind

Type into ChatGPT, Gemini, or Claude and you create a surveillance record. The company gets your questions, fears, symptoms, money problems, and stray thoughts. It ties that record to your account, IP address, and device fingerprint.

Key points

  • ChatGPT stores prompts by default. "Temporary Chat" still leaves data on OpenAI servers for review and legal compliance.
  • AI chat logs carry no legal privilege. A subpoena to OpenAI, Google, or Anthropic can produce the whole history.
  • A local model run through Ollama is the only clean fix. Prompts stay on your device and cannot be subpoenaed from a cloud provider.
  • 30 days — ChatGPT log retention after opting out of training (source: OpenAI privacy policy)
  • 18 months — Gemini default retention (source: Google Gemini settings)
  • 100K+ — Google law enforcement requests per year (source: Google Transparency Report)
  • 0 — Local model logging: with Ollama, prompts never leave your device

This is one of the most intimate surveillance systems ever sold as a convenience tool.

ChatGPT keeps more than you think

OpenAI's privacy policy says it plainly. By default, ChatGPT stores:

  • your prompts and the model's responses
  • your account details, IP address, and device fingerprint

OpenAI trains future models on that data unless you opt out. The switch hides in Settings → Data Controls → "Improve the model for everyone." Opting out of training does not erase the logs. OpenAI still keeps them for 30 days. If your account keeps chat history on, storage can last indefinitely.

In 2023, a bug in a Redis client library exposed other users' chat histories, payment data, and partial card numbers. Not a planned leak. Just a bug. The lesson stays the same: the data exists, sits in one place, and people can reach it.

Gemini gets worse through integration

Gemini carries a different problem. Integration. Use it with Google Workspace or a linked Google account and your prompts connect to your wider Google identity: search history, location history, Gmail, YouTube.

Google keeps Gemini conversations for 18 months by default unless you change it. Human reviewers can read some of them. Law enforcement can request them. Google gets over 100,000 such requests per year, according to Google's Transparency Report.

People tell chatbots what they hide from doctors

People feed AI chatbots things they would never tell a doctor, boss, or spouse. Addiction. Sexual health. Mental illness. Suicidal thoughts. Hidden conditions. Debt. Panic. Unlike a doctor visit, none of this carries patient privilege.

If a US court, the FBI, or a divorce lawyer subpoenas OpenAI, OpenAI must comply. No privilege. No special shield. Just a log with your name on it. The Electronic Frontier Foundation has tracked the surveillance risk of cloud AI for years.

Do not type anything into a cloud AI chatbot that you would not post on a public forum under your real name. That includes symptoms, financial details, relationship problems, business plans, or anything that could surface in court or in an insurance dispute.

Training data leaks back out

Models memorize pieces of training data. In 2023, researchers at Google DeepMind, ETH Zurich, and elsewhere pulled verbatim training data from ChatGPT by forcing repetition until the model broke pattern. It then spilled memorized text, including names, addresses, phone numbers, and other private data scraped from the web.

If your private data landed in a training set, some of it may live inside a deployed model. Not in a normal database. In the model's parameters. Sometimes that still leaks.

What the major providers actually do

AI Provider Data Practices

| Provider | Trains on prompts? | Human review? | Jurisdiction | Default retention |
| --- | --- | --- | --- | --- |
| ChatGPT (OpenAI) | Yes (opt-out available) | Yes | US (Five Eyes) | 30 days+ if opted out; indefinite if history on |
| Google Gemini | Yes (opt-out available) | Yes | US (Five Eyes) | 18 months |
| Microsoft Copilot | Enterprise: no. Consumer: yes | Yes | US (Five Eyes) | 6 months (consumer) |
| Claude (Anthropic) | Yes, unless API/Enterprise | Yes (safety review) | US (Five Eyes) | Not specified |
| Mistral (Le Chat) | No by default | No | France (EU/GDPR) | 30 days |
| Local (Ollama) | Never; offline | Never | Your device | Your device only |

"Private mode" is not private

ChatGPT's "Temporary Chat" and similar features do not wipe data from company systems. They only keep the conversation out of your visible history. OpenAI can still hold it for abuse monitoring, safety review, and legal compliance.

If you want a real guarantee, do not send the prompt to someone else's server.

What you can do

1. Run a Local Model. This is the only real privacy fix. Ollama with Llama 3.2, Mistral 7B, or Phi-4 Mini runs on your device. No prompts leave it. No logs sit on a vendor server. No cloud subpoena reaches them. See How to Run AI Privately.
2. Disable Training and History. In ChatGPT, open Settings → Data Controls and turn off both "Improve the model" and "Chat history & training." In Gemini, open myactivity.google.com → Gemini Apps Activity and turn it off. This will not erase old logs. It does reduce what gets added next.
3. Use Tor or a VPN for Anonymity. If you still need cloud AI, reach it through Mullvad VPN or Tor Browser. That stops the provider from tying prompts to your home IP and location. It does not stop prompt logging.
4. Use Non-US Providers for Sensitive Queries. Mistral Le Chat in France, or Perplexity used anonymously over a VPN, gives you better legal footing than a US provider. GDPR grants a right of erasure. US law does not.
5. Compartmentalise. Do not use one account for both sensitive prompts and everyday work. Split them: use a burner identity, a separate browser profile or device, and a VPN.
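Step 1 above can be sketched in a few lines. Ollama exposes an HTTP API on localhost (port 11434 by default), so a stdlib-only client never touches a cloud endpoint. This is a minimal sketch, not an official client; the function names are illustrative, and the model name must match one you have already pulled with `ollama pull`:

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing here leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3.2") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage: with the Ollama daemon running, `ask_local("Summarise GDPR's right of erasure.")` returns the model's reply. The entire round trip happens over the loopback interface, so there is no vendor log to subpoena.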
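Step 3's Tor routing can also be scripted rather than done through Tor Browser. The sketch below assumes a local Tor daemon listening on its default SOCKS port 9050 and the third-party `requests` library (with its `requests[socks]` extra installed); `tor_session` is an illustrative helper name, not a library function:

```python
import requests  # third-party; SOCKS support needs the requests[socks] extra

# Tor's default SOCKS port is 9050 (Tor Browser uses 9150). The "socks5h"
# scheme resolves DNS through Tor as well, so neither your ISP nor the AI
# provider sees your real IP or your DNS lookups.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def tor_session() -> requests.Session:
    """A requests session whose traffic exits through Tor.

    This hides only the network origin. The provider still logs the
    prompt itself, tied to whatever account you are signed in with.
    """
    s = requests.Session()
    s.proxies.update(TOR_PROXIES)
    return s
```

Usage: `tor_session().post(api_url, json=payload)` reaches a cloud API through a Tor exit node instead of your home connection. Pair it with a burner account, or the IP protection is wasted.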

The business model tells the truth

Cloud AI answers questions. It also harvests thought at scale. The big providers chase as much human data as they can get because that data builds models worth hundreds of billions.

Private AI exists. It just asks you to do a little setup. That friction is not an accident.


Cunicula receives no funding or sponsorship from any AI company mentioned in this article.

Follow the money behind AI

OpenAI, Anthropic, and Google AI draw money from the same capital networks that fund state surveillance tools. Your prompts train models worth hundreds of billions, then sit on US servers that can face NSL gag orders. No legal privilege protects them.

AI provider investment chain and government data access exposure:

  • OpenAI — $10B Microsoft investment · Azure AI → US gov contracts · projected ARR $3.4B (2024)
  • Anthropic — $2B Amazon + $300M Google · Amazon Bedrock Government Cloud · US jurisdiction, NSL/FISA exposure
  • Meta AI — free service trains the ad model · Meta ad revenue $134B/yr; your prompts feed targeting
  • Palantir AIP — LLMs integrated with government databases · NSA, FBI, ICE contracts · surveillance data and AI combined

Frequently Asked Questions

Does ChatGPT store my conversations?

Yes. OpenAI stores prompts and responses by default, tied to your account, IP address, and device fingerprint. If you opt out of model training in Settings → Data Controls, logs still stay for 30 days. If chat history stays on, storage can last indefinitely. "Temporary Chat" only hides the conversation from your visible history. OpenAI still keeps it for safety review and legal compliance.

Can law enforcement access my AI chat history?

Yes. OpenAI is a US company and must answer US court orders, subpoenas, and national security letters. AI chat logs carry no legal privilege. If the FBI, a divorce lawyer, or an employer subpoenas OpenAI, the company must comply. Google Gemini works the same way. Google receives over 100,000 law enforcement requests each year.

How do I use AI without my prompts being logged?

Run a local model. Tools like Ollama let you run Llama, Mistral, or Phi on your own device, so no prompt leaves the machine. Mistral 7B runs on 8GB RAM. Llama 3.2 3B runs on 4GB RAM. That is the only setup that gives you zero logging, zero training contribution, and zero subpoena risk. See cunicula.com/articles/private-ai-local-llm.

Is Mistral or Claude safer than ChatGPT for privacy?

Mistral Le Chat, based in France, gives you better legal footing than a US provider. GDPR includes a right of erasure that US law does not. French courts also demand more before forcing disclosure. Claude is US-based and carries the same legal risk as ChatGPT. No cloud AI tool matches an offline local model.