Perplexity AI Sued for Secretly Sharing Your Private Chats with Meta and Google
April 13, 2026 · 3 min read
A 135-page class action lawsuit alleges Perplexity AI embedded Meta and Google ad trackers that secretly transmitted user conversations, search prompts, and personal data — even in Incognito mode.
A bombshell class action lawsuit filed on March 31, 2026, in San Francisco federal court has exposed what privacy advocates are calling one of the most brazen betrayals of user trust in the AI industry. The 135-page complaint alleges that Perplexity AI — a company that markets itself as a privacy-respecting AI search engine — secretly embedded Meta Pixel, Google Ads, Google DoubleClick, and Meta's Conversions API trackers directly into its platform code.
According to the lawsuit, these tracking tools silently transmitted user conversations, search prompts, AI responses, email addresses, IP addresses, and device information straight to Meta and Google for advertising purposes. Most damning of all: the data sharing allegedly continued even when users activated Perplexity's "Incognito" mode — a feature explicitly designed to give users the impression their activity was private.
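To see why embedded trackers are so invasive, it helps to look at the basic mechanism. A web "pixel" is typically a request to a third-party endpoint with data packed into the URL's query string, so anything the page chooses to include rides along in plain view. The sketch below is purely illustrative: the endpoint, parameter names, and payload are hypothetical stand-ins, not details taken from the complaint or from any real Meta or Google product.

```python
from urllib.parse import urlencode, urlparse

# Illustrative only: how a generic tracking "pixel" smuggles data out.
# The endpoint and parameter names are hypothetical, not from the lawsuit.
def build_pixel_url(endpoint: str, event: str, payload: dict) -> str:
    """Encode an event and its payload as query parameters on a pixel URL."""
    params = {"ev": event, **payload}
    return f"{endpoint}?{urlencode(params)}"

url = build_pixel_url(
    "https://tracker.example.com/px",  # hypothetical collector endpoint
    "search_submitted",
    {"prompt": "is my condition serious", "uid": "abc123"},
)

# The sensitive text travels as ordinary query parameters:
print(urlparse(url).query)
```

The point is that nothing about this requires a data breach or sophisticated interception: if the tracker is in the page, whatever the page hands it leaves the device by design.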
The Scale of the Betrayal
With millions of users over three-plus years of operation, the potential exposure is staggering. Under California privacy law, penalties can exceed $5,000 per individual violation, meaning total damages could reach into the billions.
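The back-of-the-envelope math behind that "billions" figure is simple statutory multiplication. The class size below is an assumed placeholder for illustration; the complaint does not specify an exact number of affected users.

```python
# Illustrative statutory-damages arithmetic.
# $5,000 is the per-violation figure cited from California privacy law;
# the class size is a hypothetical assumption, not a number from the suit.
PENALTY_PER_VIOLATION = 5_000          # dollars per individual violation
hypothetical_class_size = 10_000_000   # assumed for illustration only

total = PENALTY_PER_VIOLATION * hypothetical_class_size
print(f"${total:,}")  # $50,000,000,000
```

At even a fraction of that assumed class size, exposure lands comfortably in the billions, which is why the per-violation structure of the statute matters more than any single headline number.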
The lawsuit names not only Perplexity AI but also Meta Platforms and Alphabet, Google's parent company, as defendants, arguing that they knowingly received and profited from the illegally collected data.
What This Means for AI Privacy
This case strikes at the heart of a fundamental question: can you trust any AI company with your private thoughts and conversations? When an AI chatbot promises privacy but secretly funnels your most intimate queries to the world's largest advertising companies, it doesn't just violate its terms of service — it violates the basic social contract between technology and its users.
For users of anonymous AI platforms like GPTAnon, this lawsuit validates the core premise that true privacy requires architectural guarantees, not just corporate promises. At GPTAnon, we don't collect user data, we don't track conversations, and we don't embed ad trackers — because privacy isn't a feature we offer, it's the foundation we're built on.
What Happens Next
The case is in its earliest stages. No class has been certified, no settlement has been proposed, and Perplexity has stated it hasn't been formally served. But regardless of the legal outcome, the allegations have already sent shockwaves through the AI industry and should serve as a wake-up call for anyone who assumes their AI conversations are private.
If you're concerned about AI privacy, the lesson is clear: don't trust promises — trust architecture. Choose AI platforms that are built from the ground up to protect your anonymity.