DCU’s Dr Eileen Culloty thinks that content moderation is the only way to rein in Big Tech.
Social media platforms have long claimed neutrality when it comes to moderating content.
Elon Musk comes to mind: in 2022, he said that Twitter, now X, must remain “politically neutral”. Musk is currently clashing with Brazilian authorities over what he claims are their attempts to “censor” X.
More recently, Meta’s Mark Zuckerberg said he had been pressured by the Biden administration in the US to censor Covid-19 content on his platforms, something he expressed frustration over. “I feel strongly that we should not compromise our content standards due to pressure from any administration in either direction,” he said.
Dr Eileen Culloty, the deputy director of Dublin City University’s (DCU) Institute for Media, Democracy and Society, co-authored Disinformation and Manipulation in Digital Media, a book that explains processes of disinformation, involving platforms, bad actors and audiences, and suggests possible countermeasures.
Speaking with SiliconRepublic.com, Culloty says that many of these social media platforms aren’t neutral. “The problem with most of the social media companies is that they claim to be neutral platforms – but they just aren’t.
“[Platforms] decide what content gets promoted – they set rules about what people are and aren’t allowed to do, and those things are often completely arbitrary.”
Unmoderated platforms and illegal activity
However, in all the talk of content moderation and censorship, it is important to remember that the internet often reflects beliefs held in the offline world.
“I think there’s a misconception that if social media didn’t exist, none of these things would be happening and that’s not true,” says Culloty. “There is a lot of offline hatred and offline organisation [of groups] and so social media is a tool.
“[Digital media] has made access to creating and distributing media more open to all of us and that’s a good thing. But one of the consequences of that is that people can use it for bad purposes. And that’s what we see with the spread of disinformation and conspiracy theories.”
The messaging platform Telegram has been praised by some for building a secure space for private conversations. But that same privacy has also attracted bad actors, who use the platform for illegal activities.
Last month, Telegram CEO Pavel Durov was arrested over this seemingly hands-off approach to content moderation, accused of allowing the platform to become a space for spreading illegal and extremist content.
A Financial Times report found that Telegram’s user base surged amid the recent UK riots. The report claimed that Telegram was one of the main platforms used to organise those riots, along with TikTok and X.
Similarly, WhatsApp’s end-to-end encryption made it hard for Meta to remove dangerous content, according to the UK’s terror watchdog.
This is one of the challenges of supposed platform neutrality: where is the line between protecting free speech on the one hand, and protecting other rights and preventing illegality on the other?
How can social media effectively be moderated?
In a nutshell, misinformation is false information spread unintentionally, while disinformation is false information spread intentionally.
Spreading misinformation or disinformation is not in itself illegal, nor is believing falsehoods. And proving a user’s intent to cause harm is often difficult, especially given the sheer volume of misinformation on social media.
Culloty says that content moderation is the solution. But, she says, social media companies aren’t doing a good job of it, especially in non-English-speaking markets.
“If people think those platforms are bad in English, they should really consider what it would be like if you’re in another language in a smaller country where the resources are almost non-existent.”
Culloty gives the example of the dangerous misinformation spread on Facebook about Rohingya Muslims amid their ongoing genocide.
“They did not have moderators who spoke the relevant languages. So, what were those people moderating?” Companies should have to show they have adequate resources to operate in a particular country before being allowed to do so, Culloty says.
“[Platforms] can’t be perfect but they’re so far away from even trying properly to moderate better,” she says.
Added to this, the widespread use of AI is causing disinformation to spread much faster. Culloty says AI has become the biggest tool for spreading disinformation on social media. Even though the quality of most AI-generated content is still poor, there is so much of it, and it can be created and spread so quickly, that it floods platforms. And, of course, “the tools are getting better and better all the time,” she says.
Increasing regulations
Up until now, regulations have seemingly failed to hold platforms accountable, and some platforms are still fighting against them. X recently won a case blocking parts of a California content moderation law that the company said “violates free speech”.
However, the European Union’s Digital Services Act, which came into force in 2023, aims to hold Big Tech more accountable. The Act legally obliges large online platforms to enforce strict content moderation rules.
“The regulatory model is to say you can’t just pretend that you have no responsibility. When you bring millions and millions of people to your platform, when you design it a particular way or when you promote particular content, you can’t wash your hands of it,” explains Culloty.
“I think the EU model is a good one.”
Fines under the Digital Services Act for platforms with more than 45m users can be up to 6pc of their annual global turnover. For a behemoth like Meta, whose revenue for 2023 was $134bn, that could mean a fine of as much as $8bn.
Australia is also growing more confident in taking on the social media behemoths. Legislation was introduced in the country’s parliament this month to tackle “seriously harmful and verifiably false” misinformation and disinformation spread online. The country is also trying to ban children from social media, although that pledge currently lacks detail.
“Australia is very interesting,” Culloty says, because it is willing to take a stance on its own. “They’re one of the first countries to have a really well-resourced child safety commissioner that just rejected the idea [that] platforms [can’t do] much about bullying and harassment.” Perhaps Australia can be a lesson for other jurisdictions.
Amid all of the “nonsense”, Culloty asks whether it is even worth being on these platforms anymore.
“It’s just nonsense, fake products, fake reviews and just really cheap kind of stuff. So I think the big challenge for the tech companies is: how do they actually convince people that it’s worthwhile being on their platforms?”