YouTube adds label for authentic content to increase AI transparency

16 Oct 2024


The C2PA verification labelling system currently works with only some devices, so it may be a while before the label becomes widespread.

YouTube is slowly rolling out labels that indicate whether a video was captured with a real camera, with genuine sound and footage, or whether it has been digitally altered using generative AI.

However, triggering these labels requires specific tools, which means users are unlikely to see them used widely for some time.

The first video on YouTube to showcase this label was uploaded yesterday (15 October) by Truepic, a digital content authentication service. Truepic said its “secure capture camera, Lens, was used to create the first authentic video with C2PA Content Credentials on YouTube”.

In the video description, users can see a new label titled ‘How this content was made’ and an explanation that says “this content was captured using a camera or other recording device”.

The Coalition for Content Provenance and Authenticity (C2PA) verification technology only works on specific cameras, software or mobile apps with built-in support for C2PA version 2.1 or higher, which attach secure metadata to a piece of content. That metadata records the content’s origin, confirming whether or not it has been altered.
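To make the mechanism concrete, here is a minimal Python sketch of the general idea behind tamper-evident provenance metadata: a capture device hashes the footage and signs a claim about it, and a verifier later recomputes the hash to confirm the content is unchanged. This is not the C2PA spec or SDK; the function names, the manifest layout and the use of an HMAC key in place of real certificate-based signatures are all simplifying assumptions for illustration.

```python
# Minimal sketch of tamper-evident provenance metadata, loosely
# inspired by the C2PA idea. NOT the actual C2PA spec or SDK:
# the names, the manifest layout and the shared HMAC key are
# simplifying assumptions for illustration only.
import hashlib
import hmac
import json

DEVICE_KEY = b"secret-key-held-by-the-capture-device"  # assumption: shared key

def capture(content: bytes, device: str) -> dict:
    """Simulate a device attaching signed metadata at capture time."""
    digest = hashlib.sha256(content).hexdigest()
    claim = {"device": device, "content_sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify(content: bytes, manifest: dict) -> bool:
    """Check the signature, then check the content still matches its hash."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # metadata itself was forged or altered
    return manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

video = b"...raw footage bytes..."
manifest = capture(video, device="Truepic Lens")
print(verify(video, manifest))              # True: original footage
print(verify(video + b"edited", manifest))  # False: content was altered
```

In the real C2PA scheme, the claim is signed with a certificate-backed key embedded in the manifest, so any viewer can verify the credential without holding a secret; the principle of recomputing the content hash to detect alteration is the same.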

Google told The Verge that it has been “exploring” how to relay C2PA information to YouTube viewers.

Google, which owns YouTube, joined the C2PA as a steering committee member earlier this year, alongside OpenAI, Meta, Intel, Amazon and the BBC, among others.

“At Google, a critical part of our responsible approach to AI involves working with others in the industry to help increase transparency around digital content,” Laurie Richardson, the vice-president of trust and safety at Google, said at the time. She added that this builds on the company’s work on Google DeepMind’s SynthID, Search’s ‘About this Image’ and YouTube’s labels denoting content that is altered or synthetic.

In an announcement this March, YouTube introduced a new tool in its Creator Studio that requires content creators to disclose when something is “realistic content”: content a viewer might mistake for real but which was actually made with altered media, including generative AI. This, however, does not include animation, special effects or content that used generative AI only for production assistance.

Under these guidelines, content creators need to use the label ‘Altered or synthetic content’ if they digitally alter the likeness of a real person, alter footage of real events or places, or generate realistic scenes.

Other social media platforms, including TikTok, have also started labelling AI-generated content, while OpenAI’s Dall-E has started including metadata that informs users if content was generated by the AI model.


Suhasini Srinivasaragavan is a sci-tech reporter for Silicon Republic

editorial@siliconrepublic.com