The rise of AI-generated fake artists is haunting Spotify's discovery feeds

Sienna Rose reads like a streaming-era glitch. She racks up millions of monthly listeners, and her catalog feels engineered for playlists, yet there is barely any trace of her in the real world for fans to follow. When listeners run into an artist like that inside Spotify's recommendations, the question hits fast and uncomfortable: how do you spot AI artists on Spotify, and how did they end up here?
That mystery sits underneath the louder argument now spreading across social feeds.
By mid-January 2026, vague discomfort hardened into specific complaints about AI-generated song recommendations sneaking onto discovery playlists like Discover Weekly and Release Radar. Users demanded labels, filters, or an opt-out. Spotify pushed back on the idea that it was actively boosting synthetic music, claiming that it neither creates nor owns content and that it neither promotes nor penalizes tracks made with AI tools.
Spotify wants that to sound like a clean, reasonable stance.
Neutral platform, open marketplace, let the audience decide. The trouble is, Spotify makes the call every time it suggests something. An algorithm never stays neutral once it starts ranking. If a track retains listeners, even through trickery, the system rewards it with distribution. To the user, distribution means promotion, regardless of how Spotify phrases the policy.
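To make that concrete, here is a minimal sketch of a retention-weighted ranker. Everything in it is invented for illustration, including the Track fields, the weights, and the engagement_score formula; it says nothing about how Spotify's engine actually works. The point is structural: the score never asks about provenance, so sorting by it hands distribution to whatever holds attention, AI-made or not.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    completion_rate: float   # share of plays listened to the end (0.0 to 1.0)
    saves_per_play: float    # library saves divided by plays
    is_ai_generated: bool    # known to us for the demo, invisible to the ranker

def engagement_score(track: Track) -> float:
    # Toy retention-weighted score. Note what is absent: nothing here asks
    # who, or what, made the track -- only how well it holds attention.
    return 0.7 * track.completion_rate + 0.3 * track.saves_per_play

catalog = [
    Track("Human single", completion_rate=0.62, saves_per_play=0.04, is_ai_generated=False),
    Track("Playlist-tuned filler", completion_rate=0.71, saves_per_play=0.05, is_ai_generated=True),
    Track("Album deep cut", completion_rate=0.55, saves_per_play=0.08, is_ai_generated=False),
]

# "Neutral" ranking: sort purely by engagement. Whatever tops the metric
# wins distribution, which listeners experience as promotion.
for track in sorted(catalog, key=engagement_score, reverse=True):
    print(f"{engagement_score(track):.3f}  {track.title}  (AI: {track.is_ai_generated})")
```

Swap in any metric you like; as long as the sort key is engagement, "we do not promote" reduces to "we promote whatever scores well."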
Spotify has already told the world what it wants to fight.
In September 2025, Spotify described stronger protections aimed at AI impersonation and abuse, including policing content that pretends to be someone else and tightening defenses against music spam engineered for royalty farming. That posture does not reject AI tools outright. It tries to separate legitimate creation from marketplace fraud.
Then AI made the economics ugly.
Cheap scale fueled an explosion of low-effort uploads, duplicates, and misleading content designed to game payouts. Reporting highlighted how Spotify removed tens of millions of spam tracks in a year, a signal of how industrialized the problem has become. Spotify also tightened monetization rules, including a minimum of 1,000 streams before a track earns royalty payouts, trying to make mass junk less profitable.
But even perfect enforcement would not soothe the listener experience.
A spam filter catches obvious garbage, and an impersonation policy targets clear identity theft. The trust crisis lives in the gray zone where content can be technically allowed and still poison discovery. A track might not impersonate anyone and might not trigger an anti-spam rule, yet still sound like filler optimized for algorithmic placement, and that is when listeners start asking each other the same thing. Is my Release Radar full of AI tracks, or did I just get unlucky this week?
Sienna Rose became a focal point because she fits that gray-zone narrative so perfectly. Recent coverage framed her rise as suspicious because the music appears everywhere while the artist appears nowhere, with minimal public presence relative to the scale of listening.
Whether the account ultimately proves human, synthetic, or something in between, the case exposes the real vulnerability. Streaming can manufacture familiarity faster than it can produce credibility.
This is where Spotify’s neutrality line starts to crack.
Spotify can say it does not promote AI music, but the recommendation engine promotes outcomes. If the system rewards a track because it holds attention, it will ship that track to more people. If the attention comes from bots, fake streams, or a coordinated push, Spotify still ships it until detection catches up. That gap between distribution and detection fuels the suspicion that Spotify is letting synthetic content ride the same rails as human work while asking listeners to treat the difference as irrelevant.
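That lag is easy to model. The sketch below is a toy simulation, not a description of any real fraud pipeline: the stream counts, the fixed detection window, and the claw-back behavior are all assumptions. It just shows that for the length of the lag, fake streams are indistinguishable from demand, so the recommender distributes on them.

```python
import random

random.seed(7)  # reproducible toy numbers

def simulate(days: int, fake_per_day: int, detection_lag_days: int) -> None:
    """Toy model of the distribution-vs-detection gap. Fake streams count
    toward what the recommender sees immediately; a hypothetical fraud
    check only claws them back after a fixed lag."""
    visible = 0    # stream count the recommender acts on right now
    pending = []   # fake streams awaiting detection: (day_injected, amount)
    for day in range(days):
        organic = random.randint(50, 150)     # made-up organic listening
        visible += organic + fake_per_day     # fakes land instantly
        pending.append((day, fake_per_day))
        # Detection finally catches fake batches older than the lag window.
        while pending and day - pending[0][0] >= detection_lag_days:
            _, amount = pending.pop(0)
            visible -= amount
        print(f"day {day:2d}: recommender sees {visible:6d} streams")

simulate(days=10, fake_per_day=1_000, detection_lag_days=4)
```

For the first four simulated days, the inflated count is the only count there is; everything recommended in that window was recommended on fabricated demand.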
Spotify has nodded toward transparency, but listeners want something more concrete than a promise.
Voluntary disclosure sounds polite until you remember who benefits from staying vague. The platform already knows how to add labels when it wants to, whether for explicit content, podcasts, or audiobooks. When users ask for AI tags, they are asking Spotify to make discovery legible again, not just safe from fraud.
The uncomfortable truth is that Spotify cannot win this with takedowns alone. It can remove 75 million spam tracks and still lose the room if listeners keep stumbling into artists they cannot place in reality. It can tighten its fake-stream policy and still watch trust erode if recommendations feel like a black box that favors whatever performs best in the machine.
Spotify built its edge on discovery that felt personal. AI did not just add more music, it added ambiguity at scale, and ambiguity kills confidence. If Spotify wants to stop this from turning into a lasting trust crisis, it needs to ship controls as product features, not moral statements. Give listeners a way to understand what they are hearing, and the argument cools down.
Y. Anush Reddy is a contributor to this blog.