gptAnon
AI Privacy Blog

20 AI Apps. 14 Months. Tens of Millions of Leaked Chats. The Breach Pattern Nobody's Connecting.

April 8, 2026 · 6 min read

From Chat & Ask AI's 300 million exposed messages to Sears' 4-hour ambient audio recordings — it's not bad luck. It's a broken industry. We mapped every major AI data breach since January 2025.

> This isn't a rash of bad luck. It's an industry built wrong.

---

Between January 2025 and February 2026, security researchers documented at least 20 separate data breaches across AI-powered apps. Twenty. In fourteen months. Each one exposing the private conversations, health questions, financial fears, and personal secrets of millions of people who thought they were talking to a machine — not to the entire internet.

We went through every incident. And what we found isn't surprising. It's infuriating.

---

🗓️ The Breach Timeline

Here are the biggest hits from a very bad 14 months for AI privacy:

---

💥 Chat & Ask AI — 300 Million Messages Exposed

February 2026 | Firebase misconfiguration

The top-ranked AI chat app on Google Play and the App Store — 50+ million installs — left its entire database wide open to anyone who went looking. No authentication. No access controls. Just 300 million messages from 25 million users, sitting there.

The conversations included discussions of illegal activity, mental health crises, and deeply personal confessions. The kind of things people only say because they believe they're in private.

> The root cause: a Firebase database left with public read/write permissions. A misconfiguration so common it has its own documentation entry.
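For reference, here is what that misconfiguration typically looks like in Firebase Realtime Database security rules (a generic illustration, not Chat & Ask AI's actual configuration):

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

With `.read` and `.write` set to `true` at the root, any client on the internet can download or modify the entire database, no authentication required. The minimum fix is to gate access on auth, e.g. `".read": "auth != null"`, though production rules should scope each user to their own records.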

---

🎙️ Sears Home Services — 3.7M Chat Logs + 1.4M Audio Files

March 2026 | Misconfigured cloud storage

Security researcher Jeremiah Fowler found three publicly accessible databases containing the complete interaction history of Sears' AI chatbot "Samantha." That included 3.7 million text transcripts and 1.4 million audio recordings — some up to four hours long.

Four hours. Of ambient audio. From a home appliance chatbot.

Those recordings captured conversations happening in customers' homes that had nothing to do with the service call. Names, addresses, and personal details — all accessible to anyone who found the buckets.

Sears' parent company Transformco still hasn't commented publicly.

---

📊 The Scorecard: 20 Breaches, 4 Root Causes

Every single incident in this 14-month stretch traced back to a handful of mistakes that have been documented, warned about, and ignored for years:

`
ROOT CAUSE                          | # INCIDENTS
------------------------------------|------------
🔴 Firebase misconfiguration        | 9
🟠 Missing database access controls | 5
🟡 Hardcoded API keys in apps       | 4
🔵 Exposed cloud storage buckets    | 2
`

These aren't exotic vulnerabilities. These aren't zero-day exploits discovered by nation-state hackers. These are beginner-level security mistakes — the kind that get you a failing grade in an intro security course.

The AI industry is moving fast. It's just not moving safely.

---

🔍 Why Does This Keep Happening?

Three reasons, and none of them are technical:

1. Speed over security.

Every AI app is racing to ship. "Move fast" culture hasn't changed — it's just moved into a space where the data is far more sensitive than a social network post. Your therapy conversation isn't a tweet.

2. Cloud backends are invisible until they're not.

Backend services like Firebase, Supabase, and AWS S3 are powerful and easy to set up wrong. Leave a Firebase database running on permissive "test mode" rules, and your entire user table is reachable from the public internet. Most developers don't check. Many never find out.
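This is also why researchers find open databases so quickly: the Realtime Database's REST API serves its entire tree at the path `/.json`, so a single unauthenticated GET reveals whether a project is exposed. A minimal probe sketch, using only the standard library (the project ID shown is a placeholder):

```python
import urllib.request
import urllib.error

def firebase_root_url(project_id: str) -> str:
    """Firebase Realtime Database exposes its whole tree at /.json."""
    return f"https://{project_id}.firebaseio.com/.json"

def probe(project_id: str) -> str:
    """Unauthenticated GET against the database root.

    An open database returns its contents; a locked-down one
    returns HTTP 401 with a "Permission denied" error body.
    """
    try:
        with urllib.request.urlopen(firebase_root_url(project_id), timeout=10) as resp:
            return f"EXPOSED: {len(resp.read())} bytes publicly readable"
    except urllib.error.HTTPError as err:
        return f"locked down (HTTP {err.code})"

# Example (placeholder project ID):
# print(probe("example-app"))
```

Only run checks like this against projects you own or are authorized to test.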

3. There's no penalty for getting hacked.

Outside of a few states with strong breach notification laws, the consequences for exposing millions of user conversations are... a news cycle. A brief embarrassment. No serious regulatory consequence. So companies invest in features, not infrastructure.

---

> The safest data is data that was never collected. GPTAnon never stores your conversations — there's nothing to breach →

🧩 The Real Problem: The Data Exists at All

Here's the thing every one of these companies has in common: they stored your data.

That's the prerequisite for every breach on this list. The conversation was saved. The audio was logged. The database existed. And then, at some point, someone left the door unlocked.

The security community calls this the "data minimization" principle — don't collect what you don't need. It's been a best practice for decades.

The AI industry largely ignored it.

`
Traditional AI app architecture (the problem):

  [You] → [Your words] → [Server saves everything] → [Training database] → [Potential breach]

GPTAnon's architecture (no breach surface):

  [You] → [Encrypted query via MIT Tiptoe] → [AI answers] → [Nothing stored]
`

---

🔒 The GPTAnon Difference

While these 20 apps were leaking user data, GPTAnon was designed from the ground up so there's nothing to leak. No server-side conversation storage means no breach target.

We didn't add a privacy checkbox to an existing chatbot. We built the entire product around a single question: what if we never stored anything?

GPTAnon uses MIT's Tiptoe protocol — which means your queries are cryptographically separated from your identity before they ever leave your device. The AI sees a question. It never sees you. There's no log of your conversations. No audio file. No database row that can be misconfigured or breached.

You can't leak what you never had.

---

💡 What You Should Know Before Using Any AI Chat App

Before you type anything sensitive into an AI app, ask these questions:

  • Where are my conversations stored? (Almost always: their servers, indefinitely)
  • Who can access them? (Employees, contractors, and — as we've now seen — sometimes the entire internet)
  • What happens if they get breached? (You find out weeks later in a press release)
  • Is there an alternative? (Yes. You're reading about it.)

---

Twenty breaches. Tens of millions of people. The same few mistakes over and over again.

This isn't an accident. It's what happens when an industry prioritizes shipping over safety. We built GPTAnon because we believed you deserve better than hoping the next company got their Firebase permissions right.

Try GPTAnon — no logs, no leaks →

---

Sources: Malwarebytes — Chat & Ask AI Breach | Cybernews — Sears Chatbot Leak | Barrack AI — 20 Breach Investigation

---

20 AI apps. 20 breaches. One lesson: if they store it, it will leak. GPTAnon takes the opposite approach — your conversations are never stored on our servers. Access GPT-5, Claude, Gemini, DeepSeek, and 25+ models with zero data retention. Chat without the risk →
