Based on the activity of millions of Twitter accounts, a study analysed the algorithmic amplification of different voices on the political spectrum.
It has been almost exactly a year since the 6 January riots at the US Capitol that shocked the world and highlighted the potential dangers of misinformation spreading on social media.
Now, a team of researchers has suggested that Twitter’s algorithm disproportionately favours politically right-wing content over left-wing content.
Twitter uses an algorithm to personalise the content seen by users on their social feed. Figures across the political spectrum often allege that their opponents’ voices get more amplification on social media – a claim that is hard to verify.
Ferenc Huszár of the University of Cambridge led a research team in a large-scale study involving a randomised control group of nearly 2m daily active Twitter users. This group received content from Twitter in reverse chronological order, without personalisation. The team also studied a separate treatment group, a 4pc sample of all other accounts, which received algorithmically personalised timelines.
Using these two groups, the team analysed the algorithmic amplification effect on tweets from 3,634 elected politicians from major political parties in seven countries that have strong representation on Twitter. Researchers also measured the algorithmic amplification of 6.2m political news articles shared on Twitter in the US.
Based on these analyses, they found that in six of the seven countries studied, the mainstream political right enjoyed significantly higher levels of amplification compared to the mainstream political left – indicating a clear bias in the algorithm.
However, contrary to popular belief, there was no evidence to suggest that the algorithm amplified far-right and far-left groups over more moderate ones.
The role of personalisation
The study was organised by Twitter and first reported at the end of last year.
In a blog post published in October, software engineering director Rumman Chowdhury and machine learning researcher Luca Belli said the purpose of the study was to “better understand the amplification of elected officials’ political content” on an algorithmically ranked timeline compared to a chronological timeline.
They wrote that certain political content is amplified on the platform but added that “further root cause analysis is required” in order to determine what changes could reduce adverse impacts.
The study was published yesterday (4 January) in the Proceedings of the National Academy of Sciences of the United States of America. Huszár, a senior lecturer in machine learning at Cambridge, led a team of researchers working for Twitter including Belli, Sofia Ira Ktena, Conor O’Brien, Andrew Schlaikjer and Moritz Hardt.
“This study carries out the most comprehensive audit of an algorithmic recommender system and its effects on political content,” the study read. “We hope our findings will contribute to an evidence-based debate on the role personalisation algorithms play in shaping political content consumption.”
In an editorial accompanying the article, Susan Fiske, a social psychologist at Princeton University, wrote that the findings raise ethical questions about Twitter’s impact on democracy.
“On the lofty assessment of Twitter’s corporate responsibility, this article will prompt much debate. Holiday dinners just got more interesting,” she wrote.