Pentagon considers cutting Anthropic deal amid Claude limits dispute

February 15, 2026 · News
#AI in Law

The Pentagon is weighing whether to scale back or end its work with Anthropic after a dispute over how Claude can be used for lawful military missions, according to Reuters and Axios. The tension sits inside a reported $200 million ceiling deal, and it is part of a broader push to get frontier AI tools from OpenAI, Google, and xAI onto classified networks with fewer restrictions.

The argument is about who sets the boundaries once the government says a use is legal. Reports say the Pentagon wants access for all lawful purposes, including intelligence and battlefield support, while Anthropic has held to stricter limits. Those reported red lines include refusing to support fully autonomous weapons targeting and refusing mass domestic surveillance of Americans, even when a customer claims the request fits inside the law.

The story caught fire after The Wall Street Journal reported that Claude was used in a U.S. operation tied to Nicolás Maduro and that the model was accessed through Palantir Technologies. Reuters reported it could not independently verify the claim and that the Pentagon and other parties did not publicly confirm operational details.

Anthropic’s public response has been to draw a clean line between policy talks and mission specifics. As quoted in reporting, the company has said it has not discussed using Claude for specific operations and that its conversations with the government have focused on usage policies rather than operational deployment. That lets the company contest the most explosive interpretation while still acknowledging the underlying fight over what guardrails should exist inside national security work.

This started when Anthropic and Palantir publicly announced a path for government customers to use Claude through Palantir software on Amazon Web Services, with Anthropic framing its Defense Department work as a responsible AI effort for defense operations.

If Claude is reaching sensitive workflows through an integrator channel, then the debate stops being theoretical and turns into contract language, access terms, and who is allowed to override a refusal when the stakes are high. The next move to watch is whether the Pentagon accepts a constrained deployment model from Anthropic or shifts more of this work toward vendors willing to sign broader terms.

Y. Anush Reddy

Y. Anush Reddy is a contributor to this blog.