YouTube prepares crackdown on ‘mass-produced’ and ‘repetitive’ videos, as concern over AI slop grows
YouTube is preparing to update its policies to crack down on creators’ ability to generate revenue from “inauthentic” content, including mass-produced videos and other types of repetitive content — things that have become easier to generate with the help of AI technology.
On July 15, the company will update its YouTube Partner Program (YPP) Monetization policies with more detailed guidelines around what type of content can earn creators money and what cannot.
The exact policy language has not yet been released, but a page in YouTube’s Help documentation notes that creators have always been required to upload “original” and “authentic” content. According to that page, the new language will help creators better understand what “inauthentic” content looks like today.
Some YouTube creators were concerned that the update would limit their ability to monetize certain types of videos, like reaction videos or those featuring clips, but a post from YouTube’s Head of Editorial & Creator Liaison, Rene Ritchie, says that’s not the case.
In a video update published on Tuesday, Ritchie says that the change is just a “minor update” to YouTube’s longstanding YPP policies and is designed to better identify when content is mass-produced or repetitive.
Plus, Ritchie adds, this type of content has been ineligible for monetization for years, as it’s content that viewers often consider spam.
What Ritchie is not saying, however, is how much easier it has become to create such videos these days.
With the rise of AI technology, YouTube has become flooded with AI slop, a term for low-quality media or content made with generative AI. For instance, it’s common to find an AI voice overlaid on photos, video clips, or other repurposed content, thanks to text-to-speech and text-to-video AI tools. Some channels filled with AI music have millions of subscribers. Fake, AI-generated videos about news events, like the Diddy trial, have racked up millions of views.
In another example, a true crime murder series on YouTube that went viral was found to be entirely AI-generated, 404 Media reported earlier this year. Even YouTube CEO Neal Mohan’s likeness was used in an AI-generated phishing scam on the site, despite YouTube having tools in place that allow users to report deepfake videos.
While YouTube may downplay the coming changes as a “minor” update or clarification, the reality is that allowing this type of content to grow and its creators to profit could ultimately damage YouTube’s reputation and value. It’s no surprise, then, that the company wants clear policies in place that allow it to enact mass bans of AI slop creators from YPP.