Anthropic Accuses Chinese Labs of Mining Claude Amid US AI Export Debate

Anthropic, an AI research company, has leveled serious accusations against three Chinese laboratories for allegedly exploiting its Claude AI model. The companies in question—DeepSeek, Moonshot AI, and MiniMax—are said to have created more than 24,000 fake accounts to generate over 16 million interactions with Claude. The alleged goal was “distillation,” a technique in which one model’s outputs are harvested and used as training data to improve another model.
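To make the concept concrete, here is a minimal, illustrative sketch of how distillation works in principle — a student model is trained to match a teacher model’s output probabilities. This is a toy example with hand-picked logits, not a description of any lab’s actual pipeline.

```python
# Toy sketch of knowledge distillation: a "teacher" model's output
# probabilities serve as soft training targets for a "student" model.
# Real distillation queries a large model at scale; here both models
# are represented by small hand-written logit vectors.
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Teacher outputs (e.g. gathered by querying a large model) vs. an untrained student.
teacher_logits = [2.0, 1.0, 0.1]
student_logits = [0.5, 0.5, 0.5]

# The distillation loss measures the mismatch between the two softened
# distributions; training would minimize it by gradient descent.
T = 2.0
loss = kl_divergence(softmax(teacher_logits, T), softmax(student_logits, T))
print(f"distillation loss: {loss:.4f}")
```

The loss is positive while the student disagrees with the teacher and falls to zero once their distributions match, which is why large volumes of teacher interactions are valuable training data.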

Accusations and Distillation Techniques

According to Anthropic, these Chinese firms specifically targeted Claude’s advanced capabilities, which include agentic reasoning, coding, and tool usage. The recent allegations come during ongoing discussions regarding export controls on advanced AI chips, a measure aimed at limiting China’s AI growth.

  • DeepSeek: Recorded more than 150,000 exchanges aimed at improving foundational logic, especially around sensitive queries.
  • Moonshot AI: Generated over 3.4 million exchanges focusing on agentic reasoning, coding, and computer vision.
  • MiniMax: Conducted around 13 million exchanges that targeted coding and orchestration capabilities.

Context of Export Control Debates

The accusations arise amidst fierce discussions on U.S. chip exports to China. The Trump administration recently allowed companies like Nvidia to supply advanced AI chips to China. Critics argue that loosening restrictions could enhance China’s AI computational capabilities at a crucial time in the global AI race.

In Anthropic’s assessment, the scale of data extraction by these companies suggests they have, or are seeking, access to advanced chips. The organization argues that such distillation practices underscore the need for stringent export controls to curb illicit AI development.

Industry Responses

Dmitri Alperovitch, chairman of Silverado Policy Accelerator, commented on the situation, expressing little surprise regarding these accusations. He indicated that the rapid advancement of Chinese AI technology is often tied to the illicit appropriation of U.S. models through distillation. Alperovitch advocates for restricting AI chip sales to Chinese firms as a deterrent.

Moreover, Anthropic warns that distillation poses risks beyond undermining AI leadership. It could lead to national security threats, as models developed through unauthorized means are unlikely to contain important safeguards against misuse. The potential for authoritarian regimes to exploit these technologies for offensive cyber operations and mass surveillance is particularly alarming.

In light of these events, Anthropic has pledged to enhance its defenses against such distillation attacks. They are also calling for a unified response from the AI community, cloud providers, and policymakers to address the ongoing challenges posed by these practices.
