Meta and Microsoft move to protect elections from AI

8 Nov 2023

Microsoft is developing a watermarking service to show the origins of political content, while Meta will require advertisers to disclose if their content has been digitally altered.

Both Meta and Microsoft have announced new measures to deal with the risk of AI and deepfakes disrupting elections worldwide.

Microsoft said its new measures respond to concerns that authoritarian nations will interfere in elections by combining traditional techniques “with AI and other new technologies”.

One of the tech giant’s measures is a tool to digitally sign and authenticate media using a type of digital watermark from the Coalition for Content Provenance and Authenticity (C2PA). This watermark is a set of metadata that encodes details about the content’s origin, including whether it was generated by AI.

“These watermarking credentials empower an individual or organisation to assert that an image or video came from them while protecting against tampering by showing if content was altered after its credentials were created,” Microsoft said in a blogpost.
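To make that tamper-evidence mechanism concrete, the Python sketch below shows how a content credential of this kind can bind provenance metadata to a cryptographic hash of the media, so any alteration after signing is detectable. It is a simplified illustration only, not the C2PA standard: real C2PA manifests use certificate-based (asymmetric) signatures and a richer claim format, and every name and the signing key here are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for illustration; C2PA actually uses
# X.509 certificate chains and asymmetric signatures.
SIGNING_KEY = b"example-signing-key"

def create_credential(media: bytes, issuer: str, ai_generated: bool) -> dict:
    """Bind provenance metadata to a hash of the media content."""
    claim = {
        "issuer": issuer,
        "ai_generated": ai_generated,
        "content_hash": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_credential(media: bytes, credential: dict) -> bool:
    """True only if the metadata is untampered AND the media is unaltered."""
    claim = {k: v for k, v in credential.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False  # the credential itself was modified
    return claim["content_hash"] == hashlib.sha256(media).hexdigest()

original = b"<video bytes>"
cred = create_credential(original, issuer="Example Campaign", ai_generated=False)
assert verify_credential(original, cred)         # authentic and unaltered
assert not verify_credential(b"<edited>", cred)  # altered after signing
```

The key property, as in the quote above, is that the credential asserts who issued the content while making any post-signing edit detectable, since the stored hash no longer matches the altered media.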

Microsoft said this service will launch in spring 2024 as a private preview, available to political campaigns first.

The company has also set up a ‘campaign success team’ to support political campaigns as they “navigate the world of AI” and help “protect the authenticity of their own content and images”.

Microsoft also said it will ensure Bing has “authoritative election information” and will support legislative changes that add to “the protection of campaigns and electoral processes from deepfakes and other harmful uses of new technologies”.

“No one person, institution or company can guarantee elections are free and fair,” Microsoft said. “But, by stepping up and working together, we can make meaningful progress in protecting everyone’s right to free and fair elections.”

In September, Microsoft claimed that state-sponsored hackers in China were using AI-generated images to spread misinformation and influence US voters.

Meta

Meanwhile, Meta has announced new requirements for political or social issue ads that appear on its platforms.

The company said advertisers will have to disclose when they digitally create or alter a political or social issue ad “in certain cases”.

Under this rule, a disclosure has to be made if the ad depicts a real person doing something they did not do or saying something they did not say.

This rule also applies if the ad shows a realistic-looking person or event that does not exist, alters footage of a real event, or depicts a realistic event that “allegedly occurred” but is not a true image, video or audio recording.

“Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered,” the company said in a blogpost. “If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser.”

Meanwhile, the EU provisionally agreed new rules this week that aim to rein in targeted political advertising by placing limits on targeting and ad-delivery techniques.

Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com