gptAnon
AI Privacy Blog

Claude Now Wants Your Passport: Why Anonymous AI Access Has Never Mattered More

April 19, 2026 · 5 min read

Anthropic now requires government ID to use Claude. With AI surveillance growing and the FBI reporting $893 million in losses from AI-enabled scams, anonymous AI access has never been more important.

The Privacy Paradox in AI Just Got Worse

In a move that stunned the AI community, Anthropic — the company millions turned to as a privacy-conscious alternative to OpenAI — quietly rolled out identity verification requirements for its Claude chatbot this week. Some users are now being asked to hand over a government-issued photo ID and a live selfie just to access certain features.

The irony is hard to overstate. Just weeks ago, users fled to Claude in droves after OpenAI signed a controversial agreement with the Department of Defense. Now, those same privacy-conscious users face a new demand: prove who you are before you can use AI.

What Anthropic Is Asking For

According to Anthropic's help center page, which went live on April 14, the company has partnered with Persona Identities — the same KYC infrastructure used across the financial services industry — to verify users. The requirements include a physical, undamaged passport, driver's license, or national identity card, plus a live selfie in some cases.

Photocopies, mobile IDs, and student credentials are not accepted. Reports indicate that users in China whose national ID cards are not supported have been effectively locked out of the platform entirely.

Anthropic says this verification will trigger during access to "certain capabilities," during "routine platform integrity checks," and as part of broader safety and compliance measures. The company insists the identity data goes to Persona's servers, not Anthropic's, and will not be used for model training.

A Pattern of Broken Promises

This development does not exist in a vacuum. It follows a cascade of events that have systematically eroded trust in every major AI provider's commitment to user privacy.

The Anthropic-DOD saga is the most dramatic example. After refusing to give the Department of Defense unrestricted access to its AI for mass surveillance purposes, Anthropic saw its $200 million Pentagon contract terminated and became the first American company ever designated a "supply chain risk" by the Department of War. While Anthropic's stand on surveillance was admirable, the subsequent rollout of government ID requirements sends a contradictory signal to the privacy-focused users who rallied behind the company.

Meanwhile, the Electronic Privacy Information Center (EPIC) published a warning on April 14 that the U.S. government is actively using AI to process and analyze billions of data points purchased from data brokers — information about Americans' gun ownership, religious beliefs, political leanings, and medical histories, all acquired without a warrant through what privacy advocates call the "data broker loophole."

The Threat Landscape Is Real

The FBI's 2025 Internet Crime Report, released this month, adds another dimension to the crisis. For the first time in the report's nearly 25-year history, AI earned its own dedicated section: 22,364 complaints referencing artificial intelligence, with adjusted losses exceeding $893 million. Investment fraud alone accounted for $632 million of those losses.

AI-generated synthetic content is powering business email compromise, romance scams, and employment fraud at unprecedented scale. The technology creates convincing personas, personalized conversations, and fraudulent documents faster than any human could.

These threats are real, but the solution is not more surveillance, more identity verification, or more data collection. History has shown repeatedly that centralized identity databases become targets themselves: the 2015 breach of the U.S. Office of Personnel Management exposed the background-check records, including fingerprints, of more than 21 million people.

Why Architecture Matters More Than Policy

Every major AI company now operates on a model that fundamentally conflicts with user privacy. They require accounts. They log conversations. They store metadata. And increasingly, they want to know exactly who is behind every prompt.

This is where architectural decisions matter more than corporate promises. A company can promise not to share your data today and change that policy tomorrow. A company can promise not to train on your conversations and face a court order next month. But a platform that never collects your identity in the first place has nothing to hand over, nothing to breach, and nothing to weaponize.

This is the principle behind GPTAnon. No accounts. No login. No government ID. No selfie verification. No conversation logging. No data retention. Access to the same frontier AI models — ChatGPT, Claude, Grok, and Gemini — without the surveillance infrastructure that every other platform now demands.

The Road Ahead

The convergence of government surveillance ambitions, corporate identity requirements, and AI-powered cybercrime creates a landscape where anonymous access to AI is not a luxury but a necessity. By some industry estimates, 77% of employees already paste corporate data into AI chatbots, and they do so on platforms that record everything. The dissidents, journalists, researchers, and ordinary people who need AI assistance without creating a dossier of their queries have fewer options every day.

The EU AI Act becomes fully applicable in August 2026. New state-level AI laws are passing across the United States. Regulation is catching up — but it is focused on AI developers, not on protecting users at the point of interaction.

Until regulation catches up, architecture is the only reliable privacy guarantee. And right now, anonymous AI access is the architecture that matters most.

---

GPTAnon provides anonymous access to leading AI models including ChatGPT, Claude, Grok, and Gemini. No account required. No data retained. Visit gptanon.com to start chatting privately.
