YouTube has announced new rules for content generated or manipulated by artificial intelligence, including labeling requirements. The Google-owned video platform said in a blog post that it will roll out a series of updates over the coming months, including requiring creators to disclose at upload whether their content is AI-generated; that disclosure will add a label to the video alerting viewers.
The new policy aims to address the potential for AI-generated content to mislead viewers, particularly in cases involving sensitive topics such as elections, ongoing conflicts, public health crises, or public officials. YouTube emphasized the importance of ensuring transparency and preventing the dissemination of misleading information through AI-generated content.
Creators who repeatedly fail to disclose AI-generated content under the new rules risk having their content removed from YouTube. The platform also clarified that not every removal request will be granted: when evaluating requests to take down content, it will weigh factors such as whether the content is parody or satire, whether the person making the request can be uniquely identified in it, and whether it features public officials or other well-known individuals.
The platform also highlighted its commitment to “building responsibility” into its AI tools, reflecting its dedication to maintaining a healthy ecosystem of information on YouTube. YouTube’s approach to responsible AI innovation involves a combination of human reviewers and machine learning technologies to enforce community guidelines and ensure the integrity of the content available on the platform.
YouTube’s new policy requiring disclosure and labeling of AI-generated content marks a significant step toward curbing the spread of misleading information. The platform’s proactive approach aims to enhance transparency, maintain viewer trust, and promote responsible content creation.