Spotify’s AI Cleanup: 75M Tracks Gone, What It Means for You

Ever had your Discover Weekly slip in a random 31-second “song” that feels like filler or a track by an artist you’ve never heard of that sounds suspiciously generic? You’re not imagining it. Spotify says it’s started cleaning that up.
The company says it removed over 75 million “spammy” tracks in the past 12 months. This isn’t a ban on creativity or on AI. It’s a move against catalog flooding, voice clones, and micro-tracks designed to skim royalties from the pool that pays real artists.
The Two Drivers Behind the Cleanup
Protecting the money. Streaming pays from a shared pool, so fake streams on junk uploads shrink everyone else’s share. Rival services have reported the same pattern of large volumes of AI-generated uploads and heavy stream fraud, which is why platforms across the industry are tightening their rules.
Protecting trust. Low-effort uploads and unauthorized voice clones erode trust in recommendations and in the platform itself. Spotify’s recent updates focus on impersonation takedowns, stricter spam filters, and clear AI labels: less “ban AI,” more “be transparent, don’t deceive.”
A Creator’s Checklist for Spotify’s New Rules
Get consent for voices. If a track imitates a recognizable singer without permission, it’s at risk of removal under impersonation rules.
Disclose AI use. Expect more emphasis on metadata and labels; hiding tools or origins is more likely to trigger moderation.
Skip the 30-second games. Mass micro-tracks, near-duplicates, and noise uploads are explicit spam targets. Catalog quality > catalog size.
Avoid botted promo. Fraud filtering is getting stricter; manipulated streams can jeopardize payouts and catalog standing.
What This Means for Your Daily Mix (Listeners)
Recommendations should feel cleaner as spam filtering and labeling mature: fewer obvious clones or junk uploads in autoplay, and clearer signals when a track uses AI tools.
What’s Next: From Cleanup to Licensing
Detection is hard, and mistakes (false positives) happen. The next step is to go beyond takedowns and allow licensed, transparent AI use. Several reports say Universal Music and Warner Music are only weeks away from major AI licensing deals with tech firms and startups. These aim for consented training, standard labels, and micropayments for each use, similar to streaming royalties. If and when those deals land, the system moves from whack-a-mole enforcement to a clear, governed setup where AI-assisted music can exist and pay the right people.
Bottom line
This isn’t anti-AI; it’s anti-abuse. The cleanup aims to protect royalties and listener trust while the industry builds a labeled, licensed path for AI-assisted music that pays the right people.
Y. Anush Reddy
Y. Anush Reddy is a contributor to this blog.