300 Million AI Chat Messages Exposed: Chat & Ask AI Leak Is the Year's Biggest Consumer AI Breach
April 14, 2026 · 2 min read
An exposed database at Chat & Ask AI spilled 300 million messages from over 25 million users — the full chat history, the model used, and account metadata, all sitting on the open internet.
The Leak
An independent security researcher discovered an unsecured database belonging to Chat & Ask AI, a popular mobile AI assistant with more than 50 million installs. The database contained roughly 300 million messages from over 25 million users, along with users' uploaded files, the models they queried, and their app settings.
Translation: every candid question, every uploaded document, every "don't tell anyone I asked this" query — publicly accessible to anyone who stumbled across the endpoint.
What Was In There
- Full chat transcripts tied to user identifiers
- File uploads (resumes, medical notes, legal drafts, you name it)
- Model selection and settings
- Device and session metadata
There was no encryption at rest and no authentication on the database. This is not a sophisticated zero-day. It is the digital equivalent of leaving the office filing cabinets on the sidewalk.
Why Consumer AI Apps Keep Doing This
Consumer AI apps are in a land grab. Growth wins funding, and privacy engineering is the first thing cut when founders optimize for install counts. The result is a predictable pipeline: rapid release, rapid data accumulation, and an eventual breach.
The 2026 pattern so far — Chat & Ask AI, OmniGPT, multiple "AI companion" apps — is consistent enough that regulators should stop treating these as isolated incidents.
What It Means For Users
Assume that anything you have typed into a consumer AI app on your phone in the last three years may be recoverable by a motivated attacker. Passwords, side businesses, health struggles, custody disputes — if you wrote them to an AI, they are sitting in a data store somewhere, protected by whatever the cheapest engineer could ship in a sprint.
The Fix
Choose AI tools that do not keep identified logs of your conversations. Look for products that require no account at all (or at least no identifying credentials), that do not stitch queries to persistent identifiers, and that make their retention policies legible.
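To make "do not stitch queries to persistent identifiers" concrete: one common approach is epoch-keyed pseudonymization, where any identifier that must touch a log is derived with a key that is rotated and discarded. The sketch below is a minimal illustration of that idea, not a description of how any app named in this article actually works; the function name and rotation scheme are our own.

```python
import hmac
import hashlib
import secrets

def pseudonymize(user_id: str, epoch_key: bytes) -> str:
    """Derive a pseudonymous ID. Without the epoch key the raw user ID
    cannot be recovered, and once the key is rotated and destroyed,
    logs from different epochs cannot be joined on the same user."""
    return hmac.new(epoch_key, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# Each retention epoch gets a fresh random key. Deleting the old key
# severs the link between stored logs and real identities.
key_week_1 = secrets.token_bytes(32)
key_week_2 = secrets.token_bytes(32)

alias_1 = pseudonymize("user-42", key_week_1)
alias_2 = pseudonymize("user-42", key_week_2)

# Same user, different epochs: the aliases do not match, so leaked
# logs from different weeks cannot be stitched into one profile.
assert alias_1 != alias_2
```

The design point is that a breach of the log store alone yields unlinkable aliases; only a breach that also captures a live epoch key exposes that epoch, and nothing before it.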
We built GPTAnon precisely because "please trust our promise" is not a privacy model. No accounts, no logs, no leak to have.