Anthropic wins first court round against Trump and Pentagon

On March 26, a federal judge in San Francisco stopped the Pentagon from branding Anthropic a national security “supply chain risk” and put on hold President Donald Trump’s order telling federal agencies to stop using the company’s technology. Judge Rita Lin’s ruling is temporary and comes with a seven-day stay while the government decides whether to appeal.
And it gives Anthropic the one thing it needed most right now: a court order preventing a contract fight from hardening into a full federal blacklist.
Lin said the Pentagon was free to stop using Claude if it no longer wanted the product. What it could not do, on the record before her, was answer a public disagreement by treating an American supplier like a threat. Lin wrote that Anthropic had shown a strong likelihood of success on its First Amendment retaliation claim, called the government’s conduct “classic illegal First Amendment retaliation,” and rejected what she described as the “Orwellian notion” that a company could be cast as a potential saboteur simply for disagreeing with the government. The opinion also points directly to the role Defense Secretary Pete Hegseth played in issuing the designation.
In other words, the judge did not just pause the blacklist; she questioned the reasoning behind the Trump administration’s move and Hegseth’s role in it.
That sting lands harder once you remember where Anthropic stood before this blew up. It was not a company trying to keep the Pentagon at arm’s length. Anthropic says it was the first frontier AI lab to deploy models on classified US government networks, and the Defense Department had already given it a two-year agreement worth up to $200m as part of its broader AI push.
And the court record says the damage from the standoff has already been real. Three deals worth more than $180m fell apart, and Anthropic’s finance chief warned that the hit to 2026 revenue could run from hundreds of millions of dollars to multiple billions.
The break came over two uses Anthropic would not approve. The Pentagon wanted access for “all lawful uses.” Anthropic said Claude would not be available for mass domestic surveillance of Americans and would not be used for fully autonomous lethal weapons. That is the part of the story that matters most, because it turns this from a procurement spat into a fight over who gets to draw the red lines once military AI becomes important enough that nobody wants to walk away from it.
Anthropic’s position was that the government could choose another vendor. Instead, after the disagreement spilled into public view, the administration moved to label the company a security risk.
Lin’s order blocks the supply-chain-risk designation and pauses Trump’s broader directive against Anthropic while the case continues. Microsoft and other outside groups backed Anthropic in court, arguing that the government’s move would reach beyond a single vendor dispute. For now, the blacklist is on hold, but the underlying contract fight remains in court.
Y. Anush Reddy is a contributor to this blog.
