Mercor Breach Tied to LiteLLM Supply-Chain Attack, Contractors Face Exposure
In a terse company statement, Mercor acknowledged that it was among thousands of organizations swept up in a recent supply-chain compromise of the open-source library LiteLLM. The announcement centered on containment and a third-party forensics probe, even as outside actors claimed to have extracted large troves of internal files.
What happened to Mercor in the LiteLLM supply-chain attack?
The incident began when malicious code was planted inside LiteLLM, an open-source tool used to connect applications to AI services. Security firm Snyk explained the library is typically downloaded millions of times per day, and the altered code was designed to harvest credentials and spread across developer environments before it was removed within hours of discovery.
Mercor confirmed it was “one of thousands of companies” affected by the compromise and said its security team had “moved promptly” to contain and remediate the incident. A Mercor spokesperson, Heidi Hagberg, said a third-party forensics investigation was underway and stressed the company would devote resources to resolving the matter.
Who was affected and what data may be exposed?
Mercor provides training data and recruits subject-matter experts for AI companies, counting Anthropic, OpenAI, and Meta among its customers. Lapsus$, an extortion-focused hacking group, claimed it had targeted Mercor and accessed its data. The group has published samples it says were taken, including what appeared to be Slack data, internal ticketing information, and two videos purportedly showing conversations between Mercor’s AI systems and contractors on its platform.
Lapsus$ claimed as much as four terabytes of data were obtained, encompassing source code and database records. The initial supply-chain operation is linked to a hacking group known as TeamPCP, which specializes in embedding malware inside widely used codebases so that malicious code propagates as developers download and use those libraries.
What steps are being taken to contain the damage and investigate?
Mercor emphasized rapid containment and the engagement of outside forensics experts. “The privacy and security of our customers and contractors is foundational to everything we do at Mercor,” Hagberg said, adding that the company would continue direct communications with affected parties and devote the resources necessary to resolving the matter.
LiteLLM posted that it was “investigating a suspected supply chain attack involving unauthorized PyPI package publishes” and said evidence indicated a compromised PyPI account had been used to distribute malicious code. Security researchers at the cybersecurity firm Wiz noted that TeamPCP has been associated with engineered supply-chain intrusions and has recently begun collaborating with extortion-focused groups in operations that blend malware insertion and credential theft.
Industry defenders removed the malicious LiteLLM code within hours of discovery and released a clean version of the library. Mercor said containment remained a priority while the forensic review continues, and the company highlighted its ongoing direct outreach to customers and contractors.
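Attacks of this kind succeed because developers install whatever the package index serves. One common defense, pinning artifacts to known-good cryptographic digests so a tampered release is rejected before it runs, can be sketched in a few lines. This is a generic illustration, not Mercor's or LiteLLM's actual tooling; the file contents and digest here are placeholders.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Illustrative only: a "known-good" digest recorded before the compromise.
clean = b"clean package contents"
pinned_digest = hashlib.sha256(clean).hexdigest()

print(verify_artifact(clean, pinned_digest))                  # matches the pin
print(verify_artifact(b"tampered contents", pinned_digest))   # rejected
```

In practice this is what pip's hash-checking mode (`pip install --require-hashes`) automates: each requirement carries a pinned digest, and a maliciously republished version of the same package fails the check instead of installing silently.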
Mercor is a three-year-old startup valued at $10 billion that raised $350 million in a Series C round led by Felicis Ventures. Its platform recruits experts across medicine, law, and literature to help improve AI model training. The company and outside security teams are working to understand what elements of datasets or proprietary project information may have been exposed.
The cautious language of the company statement underscored competing pressures: halt further spread, keep customers informed, and determine the scope of any data loss. For contractors whose work feeds AI systems, and for organizations that rely on such data pipelines, the incident is a concrete reminder of the reach of supply-chain attacks and of the cascading effects when widely used developer libraries are hijacked.
The investigation continues, and Mercor says it will update affected parties as appropriate while remediation proceeds. The public claims by external groups and the scale of the alleged data extraction leave open urgent questions about resilience and oversight in AI training ecosystems.