Child-Safety Groups Demand YouTube Curb Rising AI Slop for Kids

Pressure on YouTube over AI-made children’s videos intensified this week as more than 200 child-safety groups and experts led by Fairplay urged Google and YouTube to crack down on what they call “AI slop” for kids.
In an April 1 letter to Google CEO Sundar Pichai and YouTube CEO Neal Mohan, the coalition said the platforms are feeding young viewers a growing stream of low-quality AI-generated videos. It asked YouTube to label all AI-generated content, ban it from YouTube Kids, stop recommending it to minors, add parental controls that block it by default, and halt all investment in the creation of AI-generated videos for children.
YouTube says it already has safeguards in place. The company told AP it limits AI-generated videos in YouTube Kids to a “small set of high-quality channels,” lets parents block channels, and is working on labels for YouTube Kids. But the gap critics are targeting is still visible in YouTube’s own policy: the platform requires disclosure when content is meaningfully altered or synthetically generated and appears realistic, while unrealistic or clearly animated material generally does not require disclosure, creating a loophole through which much of this content slips.
Reports described a flood of AI-generated “educational” videos aimed at small children, including road-safety clips and nursery-style videos packed with errors and strange visuals. An outlet said one channel, Jo Jo Funland, posted more than 10,000 videos in about seven months, or roughly 50 a day.
The numbers driving this fight are still coming mostly from outside reporting rather than YouTube’s own disclosures, but they are hard to ignore.
Fairplay’s letter says that after a viewer watches popular preschool shows on YouTube such as Cocomelon, 40% of the videos recommended next contain AI-generated content. The same letter says the top-watched AI-slop channels targeting kids have so far earned more than $4.25 million in annual revenue.
Even YouTube’s own rules hint at the enforcement problem. Its monetization policy deems repetitive or mass-produced “inauthentic content” ineligible for monetization, yet critics argue that child-directed AI slop still racks up huge reach before any action is taken.
Y. Anush Reddy is a contributor to this blog.