Trump Orders Federal Agencies to Stop Using Anthropic Technology as OpenAI Steps In

Donald Trump said Friday that federal agencies must stop using Anthropic’s technology, turning what had been a Pentagon deadline into something much bigger. Trump’s order would hit most agencies immediately, though the Pentagon was given a six-month window to untangle Claude from systems already using it. That pushes the story past a contract fight and into something closer to a federal blacklist.
At the center of the dispute is the Pentagon’s push to use Claude without Anthropic’s remaining guardrails. Anthropic would not drop its limits on fully autonomous weapons and mass surveillance. Pentagon spokesman Sean Parnell put the conflict in plain terms late Thursday, saying “we will not let ANY company dictate the terms regarding how we make operational decisions” and that Anthropic had “until 5:01PM ET on Friday to decide.”
By Friday, the fight had spilled far beyond the Defense Department. Trump stepped in a little more than an hour before the deadline and wrote that Anthropic had made a “DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.” He pressed further with a second line that made clear the fight was now as political as it was operational, saying “WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company.”
Anthropic had already made clear it was not backing off. Anthropic CEO Dario Amodei said the company “cannot in good conscience accede” to the Pentagon’s demand and argued that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” That remains the heart of the dispute. Anthropic is not trying to stop all military use. It is trying to preserve limits in the places it sees as most dangerous.
And now Anthropic says it will challenge the Pentagon’s supply-chain-risk designation in court, moving the fight from contract pressure into a legal battle. At the same time, OpenAI has reached a deal to deploy models on the Pentagon’s classified networks. Anthropic is being pushed out, even as a rival is being brought in under what AP says are guardrails similar to Claude’s.
The bigger question now is whether this ends with Anthropic. OpenAI CEO Sam Altman said Friday that OpenAI shares the same red lines on surveillance and autonomous weapons, and hundreds of OpenAI and Google employees have backed Anthropic’s position. So this no longer looks like one company being singled out over one disagreement. It looks more like a test of whether any AI company is allowed to keep meaningful limits once military access is on the table.
Y. Anush Reddy is a contributor to this blog.


