Anthropic alleges Chinese firms mined Claude outputs to train their own models

Anthropic has accused DeepSeek, Moonshot, and MiniMax of running large-scale distillation campaigns to extract Claude's capabilities, saying they generated more than 16 million exchanges through about 24,000 fraudulent accounts. The company says the activity violated its terms of service and regional access restrictions.
Distillation itself is a common training method, and labs use it to make smaller, cheaper versions of their own models. But these were coordinated extraction efforts using fake accounts and proxy services to pull out the most valuable parts of Claude for training. Anthropic says it linked the campaigns to specific labs using technical indicators and, in some cases, partner corroboration.
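To make the distillation concept concrete, here is a minimal sketch of the classic soft-target technique: a student model is trained to match the teacher's temperature-softened output distribution rather than hard labels. All the numbers here (teacher logits, temperature, learning rate) are illustrative stand-ins, not anything from Anthropic's report.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives a softer distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical teacher logits for one input (stand-ins for a large model's output).
teacher_logits = np.array([4.0, 1.0, 0.5])
T = 2.0  # temperature softens the teacher's distribution
soft_targets = softmax(teacher_logits, T)

# Student: trained to match the teacher's soft targets by gradient
# descent on the cross-entropy loss between the two distributions.
student_logits = np.zeros(3)
lr = 1.0
for _ in range(300):
    p = softmax(student_logits, T)
    grad = p - soft_targets  # cross-entropy gradient w.r.t. logits (up to a 1/T factor)
    student_logits -= lr * grad

# After training, the student's softened distribution approximates the teacher's.
```

In real distillation the student sees many inputs and also has its own hard-label loss; the point of this toy loop is only that the teacher's output distribution, not its weights, is what gets copied, which is why generating millions of exchanges against an API is enough to transfer capability.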
The company breaks the activity into three campaigns.
DeepSeek ran more than 150,000 exchanges, using prompts aimed at generating chain-of-thought style reasoning data at scale while also creating alternatives for politically sensitive queries.
Moonshot ran more than 3.4 million exchanges across different account types and later tried to extract and reconstruct reasoning traces.
MiniMax ran more than 13 million exchanges and shifted traffic within 24 hours of a new Claude release, sending nearly half of it to the latest model.
Anthropic also explains how this worked in practice. It says proxy resellers keep traffic moving even when accounts are banned, and in one case a single network managed more than 20,000 fraudulent accounts while mixing traffic with unrelated customer requests. That is why the company closes by discussing safeguards and national security alongside detection systems, tighter access controls, and countermeasures.
As the AI race speeds up and major labs push to close gaps, this reads as a warning. And honestly, that is the part that should worry people most. When a company lays out this much detail in public, it usually means the abuse is already large, organized, and hard to dismiss. Anthropic is clearly trying to make the scale visible, much as Google cited large prompt-injection volumes to show that attacks had become a real threat category rather than a rare edge case.
Y. Anush Reddy is a contributor to this blog.


