Is disinformation on social media running rampant?

16 Aug 2024

Image: © Syifa5610/Stock.adobe.com

While regulation slowly creeps in around the world, groups continue to use social media to spread disinformation, influence elections and promote violence.

It’s not exactly news that you can’t trust everything you see online, but that doesn’t stop false narratives from causing harm in the real world.

From the Rohingya genocide in 2017 being fuelled by posts on Facebook to more recent examples of hate speech spreading on X, many of us are well aware that social media can be used as a tool for misinformation and to promote violence.

But awareness has not stopped the problem, and examples keep emerging of people believing false claims spread online with malicious agendas. In Ireland, a Red C poll found that 22pc of people believe the Government is working to replace white people.

Governments have also made it clear that they are aware of the various issues on social media – from the spread of hate speech to the impacts on children’s mental health. Regulation is slowly coming to address these concerns, but it remains to be seen how effective these efforts will be.

Dangerous developments

While we wait for change, it seems that in some cases, social media companies are doing less instead of more to address the spread of disinformation.

One of the most notable examples has been X, formerly Twitter, which has changed significantly since Elon Musk took it over in 2022. The loss of the blue bird logo is only the tip of the iceberg – verification has become a paid subscription, bot accounts remain an issue and the platform’s owner himself shares controversial posts.

Musk recently commented on riots in the UK and said “civil war is inevitable”, leading to a backlash from UK officials. Social media sites such as X were used to propagate disinformation about a tragic stabbing incident in Southport in which three young girls were killed.

The rumours that spread claimed the killer had an Islamic connection or that he was an asylum seeker – UK police said neither of these claims is true.

Musk has also taken to using his social media presence to influence the US presidential election, giving his full support (and massive donations) to Donald Trump’s campaign while being critical of Kamala Harris. Musk also shared a deepfake video of Harris without noting that it was altered.

But X is not the only site that seems lax in tackling offensive and deceptive content. Meta was recently criticised for its decision to shut down its analytics tool CrowdTangle, despite various reports that this tool was “invaluable” for social media research and for spotting disinformation campaigns on Meta’s platforms.

Apps that focus more on private messaging are not immune to these issues. A recent Financial Times report found that the user base of encrypted messaging app Telegram surged amid the UK riots. The report claims that Telegram – known for its “hands-off” approach to content moderation – was one of the main platforms used to organise those riots, along with TikTok and X.

Tech can make it worse

There are discussions worldwide about encrypted messaging services – supporters say encryption is vital for privacy and serves important purposes, while opponents say these apps can be used by extremist groups to spread harmful content.

But that is only one area where technology can be abused to spread disinformation and fuel violence. Another technology that can serve as a dangerous tool for these purposes is AI.

Advanced generative-AI models are capable of engaging with large numbers of users and replying to queries in moments. But their capabilities also pose a problem – studies suggest they can be better at spreading misinformation than humans.

Various reports have described AI as a disinformation amplifier, capable of quickly creating massive amounts of fake content that gets distributed online. US officials recently urged Musk to fix his AI chatbot Grok, after it spread the false narrative that Harris isn’t eligible to appear on some 2024 US presidential ballots.

AI image generators also present a problem, as they can be quickly used to create detailed, realistic images based on a text prompt from a user. Without sufficient guardrails, these tools can be used to spread deepfake images of real people.

But maybe one of the biggest threats AI poses is simply how much content it can throw into a discussion online. Wasim Khaled of Blackbird.AI spoke last year about the danger of “denial-of-trust” attacks – drowning out truth and any trust on a certain topic by flooding the discussion with conspiracy theories and polarising viewpoints.

Could we see this coming?

Despite the recent wave of misinformation online, this is not a new issue. In 2018, for example, UN investigators noted the role Facebook played in the Rohingya genocide.

But detailed reports since then have only confirmed the issues that exist on social media. An online moderation report from the EU Agency for Fundamental Rights last year found that more than half of the social media posts it analysed had content considered “hateful” by human coders.

The report noted the difficulties that exist in detecting and removing hate speech from social media – there is no commonly agreed definition of online hate speech, while online content moderation systems are “not open to researchers’ scrutiny”.

A report this year from Dutch researchers found that coordinated networks of accounts were used to spread disinformation across social media prior to the European elections.

A struggle for regulation

Governments are slowly taking steps to address the issues on social media and to tackle the spread of disinformation online. The EU has been investigating multiple platforms under its Digital Services Act (DSA), which is designed to put more responsibilities on these sites to tackle fake and harmful content.

Meanwhile, the UK approved its controversial Online Safety Act last year, which aims to protect children by making tech companies monitor and remove harmful content. UK regulator Ofcom recently said this act will give it new powers when dealing with platforms, but not all of these powers are in effect yet. Ofcom told CNBC that the new rules won’t fully come into force until 2025.

In the US, a group of senators recently introduced the No Fakes Act, which would make the creation of voice and visual likenesses of people, such as AI deepfakes, illegal without their consent. The bill aims to hold individuals or companies liable for damages for producing, hosting or sharing AI deepfakes of people in audiovisual content that they “never actually appeared in or otherwise approved”.

Meanwhile, some groups believe technology such as AI may also be useful in tackling the spread of fake content online.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com