The social media giant said that its teams took down some 20 new influence operations around the world this year.
Meta, the parent company of Instagram, WhatsApp and Facebook, has claimed that artificial intelligence (AI) content made up less than 1pc of election-related misinformation on its apps this year.
Earlier this year, Meta revealed plans to set up a dedicated team to combat disinformation and AI misuse ahead of the EU elections, which took place in June.
And now, the social media company has said in a blogpost that it ran several election operations centres around the world to “monitor and react swiftly to issues that arose” in relation to major worldwide elections. Countries and regions that held elections this year include the US, EU, Bangladesh, Indonesia, India, Pakistan, France, the UK, South Africa, Mexico and Brazil.
Meta’s latest announcement follows the contentious US presidential election, where AP reported that “a flood of misinformation” sought to undermine trust in voting.
Commenting on the US election, Nick Clegg, president of global affairs at Meta, noted that its Imagine AI image generator rejected 590,000 requests to create images of president-elect Donald Trump, vice-president-elect JD Vance, governor Tim Walz, current vice-president Kamala Harris and current president Joe Biden in the month leading up to election day, in an effort to prevent people from creating deepfakes.
The company claimed that while there were instances of confirmed or suspected use of AI to spread misinformation, “the volumes remained low, and our existing policies and processes proved sufficient” to reduce the risk around generative AI content. It added that ratings on AI content related to elections, politics and social topics represented less than 1pc of all fact-checked misinformation.
Meta further said that its teams took down around 20 new “covert influence” operations around the world this year.
In addition, Meta took aim at social media sites X and Telegram, saying that fake videos about the US election linked to Russia-based influence operations were posted on these platforms. X’s owner Elon Musk attracted controversy this year when he shared a deepfake ad of Kamala Harris.
However, it should be noted that Meta has also found itself in hot water over misinformation and disinformation: in April, the European Commission opened an investigation into the company for allegedly violating EU rules through “deceptive advertising and political content” on Facebook and Instagram.
Tackling misinformation, including AI-generated misinformation, has become a priority in recent years – in the US, for instance, the Pew Research Center has reported public concerns about inaccuracy in news coverage.
And while regulation slowly creeps in around the world, groups continue to use social media to spread disinformation and misinformation, influence elections and promote violence.
In July, Meta said that it had “never thought about news” as a way to counter misleading content on Facebook and Instagram.
A recent study published in the Nature journal Scientific Reports found that source-credibility information and social norms help to improve truth discernment and thereby also reduce engagement with misinformation online.