The new tools let developers build custom apps and displays for verifying the history and credentials of digital content.
Software giant Adobe has been working on ways to help users trace the full history of digital content, in a bid to tackle the spread of visual misinformation online.
It is part of the Content Authenticity Initiative (CAI), a cross-industry group first announced by Adobe in 2019 to address digital misinformation and the authenticity of content. The CAI has grown to around 750 members, with collaborators including The New York Times, Twitter, Truepic, Qualcomm, Witness, CBC and the BBC.
In the latest step to help authenticate digital content, the CAI has released a suite of open-source tools to help developers “integrate content provenance” across web, mobile or desktop projects.
The new tools include a JavaScript SDK and a Rust SDK. The JavaScript SDK is designed to let developers display content credentials in the browser, while the Rust SDK supports custom desktop and mobile apps that can create, verify and display content credentials. There is also a command-line tool for exploring content credentials from the terminal. A sketch of the browser workflow follows below.
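As a rough illustration of that browser workflow, the following sketch reads the content credentials attached to an image using the JavaScript SDK. The entry points shown (createC2pa, read and the manifestStore result) are assumptions based on the SDK's published examples rather than a verbatim recipe, and the paths to the SDK's WebAssembly and worker assets will vary with your build setup.

```typescript
import { createC2pa } from 'c2pa';

// Assumed initialisation: the SDK performs verification in
// WebAssembly, so it is pointed at its wasm and worker assets.
// These URLs are placeholders for whatever your bundler emits.
const c2pa = await createC2pa({
  wasmSrc: '/assets/toolkit_bg.wasm',
  workerSrc: '/assets/c2pa.worker.js',
});

// Read any content credentials embedded in an image. An image
// with no C2PA manifest should yield a null manifestStore.
const { manifestStore } = await c2pa.read('https://example.com/photo.jpg');

if (manifestStore) {
  // The active manifest describes the most recent signed edit:
  // which tool produced it and who signed the claim.
  const active = manifestStore.activeManifest;
  console.log('Claim generator:', active?.claimGenerator);
  console.log('Signed by:', active?.signatureInfo?.issuer);
} else {
  console.log('No content credentials found.');
}
```

The Rust SDK and the command-line tool surface the same manifest data for native apps and scripted workflows respectively.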
“Being active in open-source communities enables CAI to empower developers around the world to create interoperable solutions, native to their applications, that will help advance the adoption of content provenance across a wide array of use cases,” said CAI senior director Andy Parsons in a blogpost.
“We can’t wait to see what the community builds with these raw materials.”
The underlying standard for these tools comes from the Coalition for Content Provenance and Authenticity (C2PA), which aims to address the prevalence of misleading information online through the development of technical standards. Organisations involved in C2PA include Intel, Microsoft, Sony and Arm.
Tackling deepfakes
One of the main goals of the CAI is to tackle the spread of digital misinformation by making it easier to access the history of content and see when and how it was created. One area where this could help is the rise of deepfake imagery.
Deepfakes use a form of artificial intelligence to combine and superimpose existing images and videos, creating fake images of people or making it look like a person has said or done something they have not.
An investigation earlier this year uncovered more than 1,000 LinkedIn profiles using what appeared to be AI-generated facial images.
A wave of AI-generated images has also spread across the internet recently from DALL-E Mini, an open-source AI model inspired by OpenAI’s tech that can create images from text prompts.
While DALL-E Mini does not create realistic images, OpenAI’s DALL-E 2, along with its Google competitor Imagen, has been shown to create very realistic AI-generated images from text descriptions, raising fears that this technology could be used to spread visual misinformation.
The European Commission is also taking steps to tackle misinformation online. According to an EU document seen by Reuters, tech companies such as Google, Meta and Twitter will have to take steps to counter deepfake content and fake accounts, or face the risk of fines.