Anthropic Criticizes Chinese AI Labs’ Data Extraction Efforts
Anthropic, the US-based creator of Claude AI models, has voiced concerns about potential data theft by Chinese AI laboratories. The company has accused three China-based firms—DeepSeek, Moonshot AI, and MiniMax—of large-scale efforts to extract knowledge from its models through a technique known as model distillation.
Understanding Model Distillation
Model distillation is a deep learning technique in which a large “teacher” model transfers its learned behavior to a smaller “student” model. The student is trained to reproduce the teacher’s outputs, yielding a more compact, efficient model that retains much of the teacher’s performance. Distillation is a legitimate compression method, but it can also be used to replicate a proprietary model’s capabilities without authorization.
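To make the idea concrete, here is a minimal sketch of the classic distillation loss (in the style of Hinton et al.): the student is penalized both for diverging from the teacher’s temperature-softened output distribution and for missing the true label. The function names, temperature, and weighting are illustrative assumptions, not a description of any method used by the companies named in this article.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution,
    # exposing more of the teacher's "dark knowledge" about non-top classes.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, hard_label,
                      T=2.0, alpha=0.5):
    # Soft component: KL divergence between the temperature-softened
    # teacher and student distributions (scaled by T^2, as is conventional).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    soft = float(np.sum(p * (np.log(p) - np.log(q)))) * T * T
    # Hard component: ordinary cross-entropy against the true label.
    hard = float(-np.log(softmax(student_logits)[hard_label]))
    return alpha * soft + (1 - alpha) * hard

# Example: the student is close to, but not matching, the teacher.
teacher = [4.0, 1.0, 0.2]
student = [3.0, 1.5, 0.5]
loss = distillation_loss(teacher, student, hard_label=0)
```

During training, gradients of this loss with respect to the student’s parameters pull the student toward the teacher’s full output distribution, which is why access to a model’s outputs at scale is sufficient to extract much of its learned knowledge.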
Legal Background
Anthropic has encountered several lawsuits accusing it of copyright infringement and unauthorized data scraping. Key cases include:
- Bartz v. Anthropic
- Carreyrou v. Anthropic
- Concord Music Group, Inc. v. Anthropic
- MacKinnon v. Anthropic (Canada)
- Reddit, Inc. v. Anthropic
The unresolved question is whether training AI on copyrighted material without authorization breaches legal standards.
Widespread Data Extraction Efforts
According to Anthropic, the three companies conducted more than 16 million exchanges with its Claude models through approximately 24,000 fraudulent accounts, violating the company’s structured access rules and terms of service.
Concerns Over National Security
Anthropic fears that the unauthorized distillation may empower authoritarian regimes to launch cyberattacks, disinformation campaigns, and mass surveillance efforts. The company warns that if these models become open-sourced, the risk could escalate significantly, allowing unrestricted access to dangerous capabilities.
Industry Response and Competitor Insights
OpenAI, a major competitor, recently alerted US congressional committees to increased data extraction activity by Chinese entities. The company indicated that these entities are evolving from basic model extraction tactics to more sophisticated techniques that combine data generation and optimization strategies.
National AI Industry Concerns
Both Anthropic and OpenAI emphasize the importance of safeguarding the national AI sector. Anthropic highlighted that models gained through unauthorized distillation may lack critical protections, posing significant national security threats.
Future Predictions
A report from the Forecasting Research Institute indicates that experts foresee the performance gap between US and Chinese AI models narrowing by 2031. Parity in model capabilities is expected by 2041.
In light of these developments, the actions of DeepSeek, Moonshot AI, and MiniMax remain under scrutiny; none of the three has yet commented on Anthropic’s allegations.