Anthropic raised Claude’s limits while the fight over AI power escalated

Anthropic spent the past week making Claude easier to lean on, and the most important part was not a splashy model launch. From 13 to 27 March, the company is doubling Claude’s five-hour usage limits outside weekday peak hours for Free, Pro, Max and Team users.
On the developer side, Anthropic made its 1m-token context window generally available for Claude Opus 4.6 and Sonnet 4.6, removed dedicated 1m rate limits, and raised the media limit to 600 images or PDF pages per request when that 1m window is used.
That may sound like product admin, but taken together it amounts to something more. A model that can stay with you longer is easier to keep open all day. A model that can take in much larger files, codebases or research bundles starts to fit messier, real work. And once the hard stop becomes a payment choice, the old cap stops feeling like a boundary and starts feeling like an interruption.
It shows how AI companies can make heavier use feel normal without ever having to announce that as the goal.
What made the week feel bigger was that Anthropic was also publishing research about the world these tools are entering. In a labor market report, the company said real-world AI use is still well below what the technology could theoretically do, but also found that occupations with higher observed exposure are projected to grow less through 2034.
It did not find a broad rise in unemployment yet, though it pointed to suggestive evidence that younger workers may already be seeing slower hiring in more exposed occupations.
Anthropic also announced a collaboration with Mozilla: the company said Claude Opus 4.6 found 22 Firefox vulnerabilities over two weeks, and Mozilla said 14 were high severity. That is a picture of a model being pointed at dense, technical, high-stakes work for long enough to produce something that mattered.
Also read: Anthropic says it will no longer automatically pause development when a model could be dangerous.
Even as it widens access to Claude, the company is suing the Pentagon over its designation as a supply-chain risk, a clash tied to Anthropic’s refusal to allow uses involving mass domestic surveillance and fully autonomous lethal weapons. The dispute is framed as a consequential fight over who gets to set the limits around powerful AI systems.
Y. Anush Reddy is a contributor to this blog.
