OpenAI and Google employees back Anthropic in Pentagon blacklist fight

Dozens of employees from OpenAI and Google, including Google chief scientist Jeff Dean, have filed a court brief backing Anthropic’s lawsuit against the Pentagon. The filing does more than add another voice to a company dispute. It sharpens the real question underneath the case: can the US government punish an AI company for refusing to loosen its limits on war and surveillance?
The employees say they are speaking for themselves, not for their companies, but they are not hedging much beyond that. In the brief, they call the government’s supply-chain-risk label against Anthropic an improper and arbitrary use of power. They also warn that the move could hurt US competitiveness and make researchers think twice before speaking plainly about what frontier AI systems can and cannot safely do.
Anthropic’s fight with the Pentagon started after it refused to drop two restrictions on military use of its models. One barred domestic mass surveillance. The other blocked fully autonomous weapons.
The two sides had been at odds for months before the administration moved to brand Anthropic a supply-chain risk, even as rival AI firms stepped in with broader lawful-use terms. This suggests the government is not only looking for AI tools. It is looking for vendors willing to give it fewer boundaries.
The designation does not just threaten Anthropic’s own Pentagon business. It also reaches into the wider contractor network. Companies doing military work could be forced to remove Anthropic’s tools from defense workflows if they want to keep their contracts. Claude was not some fringe experiment in that world. Anthropic had already become part of serious national security work. The fight therefore reaches beyond future contracts, asking whether a company can be frozen out for drawing a line after it is already inside the system.
Why does the brief matter? Because it comes from people building these systems at rival labs. Their point is not that Anthropic should get special treatment. It is that the red lines themselves are grounded in real technical and civic concerns.
On surveillance, they argue the real danger is not scattered data on its own, but an AI layer that could stitch together cameras, location trails, financial records and social graphs into something much closer to live, mass monitoring. On autonomous weapons, they argue today’s systems are still too brittle in ambiguous conditions and too prone to error to be trusted without human judgment in the loop.
The employees make a further point: if the Pentagon no longer liked Anthropic’s contract terms, it could have walked away and bought from someone else. What it should not be able to do is use government power to make an example of a company that refused the most extreme uses of its technology. That is why this case matters beyond Anthropic.
It is turning into an early fight over who gets to set the outer limits of military AI — the companies building it, or the state demanding more.
Y. Anush Reddy is a contributor to this blog.


