Senator Markey presses OpenAI for transparency on ChatGPT ads

When you hear "ChatGPT is getting ads," your instinct is to shrug it off. Ads are everywhere; you have likely gotten very good at tuning them out. But the difference with a chatbot is that it is not just an ad space. It is where people go for answers, help, and sometimes to say things they wouldn't say anywhere else.
That is why a letter package from Senator Ed Markey, dated January 22, 2026, landed with the impact it did. It wasn't sent only to OpenAI; it went to seven CEOs across the AI industry. The message was clear: if advertising is coming to these bots, how will they keep "help" from becoming "influence"?
The catalyst and why it happened now
The catalyst was OpenAI’s January 16 announcement that it plans to start testing ads in ChatGPT for logged-in adults in the US on its Free and Go tiers. OpenAI framed the move as a sustainability initiative, a way to keep access broad while keeping higher tiers ad-free.
OpenAI also attempted to pre-empt the obvious concern: nobody wants advertising blended into the assistant's voice. The company said it would place ads at the bottom of answers, clearly separated from the response, and that users can dismiss ads and see why they were shown. OpenAI also promised not to show ads to users under 18 or to run them alongside sensitive topics like healthcare, politics, or mental health.
Markey’s argument is that the issue is not simply that ads exist. The issue is where they exist. In a conversation, a suggestion can feel like guidance. And guidance is where persuasion likes to hide.
Who the letter went to and why that list matters
The PDF is a compiled packet of letters addressed to:
OpenAI (Sam Altman), Anthropic (Dario Amodei), Alphabet/Google (Sundar Pichai), Meta (Mark Zuckerberg), Microsoft (Satya Nadella), Snapchat (Evan Spiegel), and xAI (Elon Musk).
The list is the point. Markey isn’t treating this as “an OpenAI thing.” He is treating it as a playbook the whole industry could copy fast once one big player proves it works.
The inclusion of xAI is telling. While Musk positions himself as outside “mainstream tech,” this letter lumps xAI in with the incumbents. From Washington's perspective, if you are building a chatbot people might trust, you are in the same category.
What Markey is really saying
Markey’s thesis has three layers.
First, chatbots feel human. This triggers anthropomorphism—we naturally lower our guard when something sounds or acts like us. It is great for making a bot feel friendly, but risky when money is trying to steer the conversation.
Second, “blurred advertising” is the core danger. Markey warns that ads could be “woven directly into the flow of the conversation,” becoming indistinguishable from neutral advice.
Third, the data is different. People ask chatbots questions they would never type into Google or share with an ad network. Markey argues that repurposing those private exchanges for ads crosses a line and could damage the trust that makes these tools useful.
What Markey asked OpenAI, specifically
Because the test was publicly announced, the letter to OpenAI pushes for details, not vibes. Markey is asking specifically about targeting and default exposure. How will OpenAI decide who sees ads? Do they appear by default, or only if users explicitly choose them?
He is also pressing on enforcement of the sensitive-topic protections. OpenAI promises to avoid ads around health and politics; Markey asks how the company will actually enforce that, rather than just stating it in a blog post.
Crucially, he asks whether sensitive conversations can still influence ads after the fact. Even if an ad never appears next to a sensitive chat, can the data from that chat be used for targeting later? He also wants to know whether paid influence will ever enter the answer itself.
Will OpenAI ever include product placements or endorsements inside the text of the answer? An ad box at the bottom is one thing; a paid nudge in the assistant's voice is another.
Finally, he asks about commercial bias in training: has OpenAI entered into agreements that influence how ChatGPT is trained or how it ranks products? And on personalization data: will conversation content be used to personalize ads? Will outside data (like social feeds) be brought in? And will there be a hard ban on profiling children and teens?
What Markey asked the other companies
The letters to Anthropic, Google, Meta, Microsoft, Snap, and xAI follow the same structure, designed to force a clear "yes or no" on whether ads are coming.
He asks if they plan to include advertisements or paid recommendations in their chatbot, and if they have commercial agreements that influence training, ranking, or recommendations. He also asks if they will use conversation content to inform advertising, or use outside data like search history.
He demands to know if they have tested their recommendations to ensure they aren't manipulative or misleading, and how they will ensure users can distinguish neutral conversation from ads. Finally, he asks if they will commit to not displaying paid recommendations to children or teens.
The core design challenge: help vs. funnel
Underneath the Senate framing is a product tension that never goes away: Can an assistant be “on your side” while also being an ad surface?
OpenAI's early design, with labeled ads kept at the bottom of the answer, is an attempt to keep a wall between help and selling. Markey's letter is a stress test of that wall. The danger is not that an ad exists; it's that the assistant learns which phrasing sells more and starts optimizing for that, even while sounding neutral to the user.
The reputational knife fight
This letter didn’t arrive in a quiet moment. It landed in the middle of a fight over public trust, where credibility is being weaponized. At Davos, Google DeepMind CEO Demis Hassabis criticized OpenAI’s move toward ads as premature, warning that it could erode user trust, while noting that Google has no plans for ads in Gemini.
But underneath that rhetoric is an economic reality. Google’s core business is advertising at internet scale; they can afford to be patient. OpenAI, meanwhile, faces massive compute costs and a high cash burn forecast for 2026.
This infighting validates the senator's premise: ads and trust are on a collision course, which makes Markey's demands harder to dismiss as hypothetical.
Where the FTC fits
Markey's letters are oversight; they don't create rules. But the FTC is the body that can punish "blurred advertising." In September 2025, the FTC issued orders to seven companies regarding AI "companions," explicitly noting that these bots can prompt users, especially children, to form emotional relationships.
If a user is emotionally dependent on a bot, a sponsored suggestion doesn't register as an ad. It registers as trusted advice. That is exactly the dynamic regulators fear will become exploitative.
What happens next
The deadline is February 12, 2026. The most telling part of what follows won’t be slogans about “transparency.” It will be the specific commitments. Will ads remain strictly outside the answer text? Will sensitive conversations be exempted from ad personalization, not just placement? Can users turn personalization off in a way that actually stops the data flow? Will advertiser relationships influence ranking or tone over time?
If the companies respond with tight guardrails, it could be a blueprint for monetizing "answer engines" safely. If they respond with vague promises, it will be the start of a much larger fight.
Until then, the question for users is simple: do you trust an AI assistant to tell you what to buy, or does money inevitably change what “help” means?
Y. Anush Reddy is a contributor to this blog.