X Restricts Grok AI Images to Paid Users as U.S. Senators Demand App Store Ban

X has started restricting Grok’s image generation and editing after days of backlash over what one European minister described as the “industrialisation of sexual harassment.” On Friday, Grok began replying that “image generation and editing” are now limited to paying subscribers, and the change appears to have stopped the @Grok account from generating and auto-posting edits in replies.
Grok’s image tool failed badly this week, turning nonconsensual edits into a viral, repeatable loop. Here’s what happened and how it snowballed. Today’s restriction changes the surface, not the underlying system.
What X actually changed
The clearest shift is this: free users can no longer tag @grok and get an image generated in replies the way they could earlier this week. Reuters reports Grok began telling X users on Friday that image generation and editing features were now available only to paying subscribers, which appeared to stop Grok from generating and automatically publishing sexualized edits in response to posts.
Others captured the same automated response (“Image generation and editing are currently limited to paying subscribers”) and then made the more uncomfortable point: the message suggests the feature is now fully paywalled, but that impression is false, because free routes to the same capability remain.
So yes, X flipped a switch. But it looks more like a throttle on the most viral workflow than a true shutdown.
The paywall is porous: Grok tab and grok.com still work
Users can still create sexualized images via the Grok tab inside X and then post them manually, and the standalone Grok app was still allowing image generation without a subscription.
The bigger “ghost channel,” though, is the standalone website, grok.com. The Verge reports that Grok can still be accessed there, and that when it tested the Grok website, app, and X tab with free accounts, Grok complied with image generation and editing requests.
That distinction matters for tech readers: even if one surface says “restricted,” the capability can remain reachable elsewhere, and the results can simply be reposted back into the feed.
The scale wasn’t “a wave.” It was a factory.
The most damning part of this story is the rate.
TechCrunch, citing Bloomberg’s research, reports that a sample taken over a 24-hour period on January 5-6 found Grok generating roughly 6,700 images per hour.
Once you pin the story to a number like that, “misuse” stops sounding like a handful of bad actors and starts sounding like a throughput problem, the kind regulators treat as systemic.
The real strategy is an “identity wall”
There’s a deeper tactical shift in the Premium gate: it reduces anonymous generation.
Reports note that paying subscribers have their details and credit card information stored by X, making it easier to identify them if the tool is misused. That’s not a safety wall. It’s an accountability wall: traceable-on-payment instead of free-for-all.
But this only works if access is actually gated. The Verge’s testing undercuts the premise: it says free users can still access Grok image editing/generation via the “Edit image” button, the Grok tab, and the standalone website.
This isn’t just PR risk. It’s a fines-and-operations risk.
Regulators aren’t treating this as a platform embarrassment. They’re treating it as unlawful content at scale.
Under the UK Online Safety Act, Ofcom can impose fines up to 10% of a company’s global turnover and can seek a court order to block a website or app in the UK in serious cases. Separately, EU enforcement under the Digital Services Act can reach up to 6% of global annual turnover for noncompliance.
Turnover-based penalties are what convert “backlash” into board-level risk, especially for a platform already under revenue pressure.
The EU’s “document freeze” runs through December 31, 2026
On January 8, the European Commission ordered X to retain all internal documents and data related to Grok until the end of 2026, extending an earlier retention order. A Commission spokesperson framed the logic bluntly: keep the internal documents because the Commission may need to request access to them.
Even without a fresh formal investigation being announced, that preservation order functions like a document freeze for anything Grok-related: the “what did we know, what did we change, when did we change it” trail becomes part of a compliance file.
Regulators’ blunt message: paid vs free doesn’t change the underlying illegality
Reuters quotes the European Commission making the key point: limiting image generation to paying subscribers “doesn’t change our fundamental issue.” Paid subscription or not, the Commission says it does not want to see such images.
This is why “paywalling” is landing as provocation in some capitals, not progress.
U.S. senators are now applying the “app store lever”
The need to respond is real, but the fight is no longer just about what Grok can do inside X. It is about whether X is allowed to remain on your phone at all.
On January 9, 2026, three U.S. Democratic senators—Ron Wyden, Ben Ray Luján, and Ed Markey—sent a joint letter to Apple CEO Tim Cook and Google CEO Sundar Pichai. The demand was blunt: remove X and Grok from your app stores until the mass generation of nonconsensual sexualized images is addressed.
In plain terms, this move shifts the pressure from “limiting a feature” to “terminating the platform’s reach.” This isn’t a request for X to update its interface; it’s an ultimatum to Apple and Google to enforce their own terms of service against Grok’s output.
By targeting the distribution layer (iOS and Android), senators are betting that even if Elon Musk ignores regulators, he cannot afford to be de-platformed from the world’s two largest app ecosystems.
Y. Anush Reddy is a contributor to this blog.



