gptAnon
AI Privacy Blog

How a Supply-Chain Attack on an Open-Source LLM Tool Exposed 40,000 People at a $10B AI Startup

April 11, 2026 · 3 min read

A compromised open-source LLM proxy gave attackers access to 4 terabytes of data from Mercor, an AI hiring startup valued at $10 billion. Five class-action lawsuits have followed — and the fallout is just beginning.

On March 27, 2026, attackers compromised LiteLLM — a widely used open-source proxy that lets developers route requests across multiple large language models — and used it to breach Mercor, an AI-powered hiring startup valued at $10 billion.

The result: 4 terabytes of stolen data, more than 40,000 contractors exposed, and a cascade of lawsuits that is reshaping how the industry thinks about AI supply-chain security.

Why It Matters

This wasn't a sophisticated zero-day exploit. The attackers used stolen developer credentials to publish two malicious versions of the LiteLLM package. Thousands of companies downloaded the tainted software before the compromise was detected. Mercor — whose clients include OpenAI and Anthropic — was among them.

The breach is a wake-up call: the AI ecosystem's reliance on open-source tooling creates a single point of failure that threat actors are now actively targeting.

What Happened

The attack chain was textbook supply-chain compromise:

  • Attackers obtained legitimate developer credentials for the LiteLLM project
  • They published two malicious package versions containing backdoor code
  • Companies running automated dependency updates pulled the compromised versions
  • The backdoor gave attackers a foothold inside Mercor's infrastructure
  • Attackers exfiltrated 4 TB of data and listed it for auction on the dark web
The Fallout

The first class-action lawsuit landed on April 1, filed by Lisa Gill in the U.S. District Court for the Northern District of California. The complaint alleges Mercor failed to implement basic security controls: no multi-factor authentication, no encryption of sensitive data at rest or in transit, no access controls limiting employee visibility into sensitive records, and no real-time monitoring for suspicious activity.

Four more contractor lawsuits followed within a week. The complaints paint a picture of a company that moved fast on AI innovation but neglected foundational cybersecurity hygiene.

Meta has reportedly frozen AI data work tied to the breach, a sign that the ripple effects extend well beyond Mercor itself.

The Bigger Picture

The Mercor breach exposes a structural vulnerability in how AI companies build and deploy software. Modern AI infrastructure depends on a web of open-source tools — model routers, inference engines, vector databases, embedding pipelines — many maintained by small teams with limited security resources.

When one of those components is compromised, the blast radius can be enormous. Mercor is the cautionary tale, but it won't be the last.

The Bottom Line

If your company uses open-source AI tooling (and most do), the Mercor breach should trigger an immediate review of your software supply chain. Key questions to ask:

  • Are you pinning dependency versions and verifying package integrity?
  • Do you have multi-factor authentication on all developer accounts that publish packages?
  • Are you monitoring for anomalous data exfiltration in real time?
  • Do you encrypt sensitive data at rest and in transit?
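The first question — pinning versions and verifying package integrity — comes down to comparing every artifact you install against a known-good hash. Here is a minimal sketch in Python; the package name and "trusted" contents are hypothetical stand-ins for what a real lockfile entry would record:

```python
import hashlib

# Hypothetical pinned-hash registry. In practice these hashes come from a
# lockfile committed to your repo, not from the package registry itself.
PINNED_HASHES = {
    "example-pkg-1.0.0.tar.gz": hashlib.sha256(b"trusted package contents").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Accept an artifact only if its SHA-256 matches the pinned hash."""
    expected = PINNED_HASHES.get(name)
    if expected is None:
        return False  # unknown artifact: fail closed
    return hashlib.sha256(data).hexdigest() == expected

# Same name and version, different contents -> rejected.
print(verify_artifact("example-pkg-1.0.0.tar.gz", b"trusted package contents"))  # True
print(verify_artifact("example-pkg-1.0.0.tar.gz", b"backdoored contents"))       # False
```

Real-world equivalents include pip's hash-checking mode (`--require-hashes`) and the `integrity` field in npm lockfiles; the point is that a backdoored release fails the check even when its name and version look legitimate.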

The era of "move fast and break things" in AI needs a security reckoning. The attackers aren't waiting.

---

Sources: Fortune, TechCrunch, StrikeGraph
