Trinity professor Abeba Birhane named in Time 100 AI list

7 Sep 2023


Birhane rose to fame in 2020 when she helped uncover racist and misogynistic terms in an MIT image library that was used to train AI.

Time Magazine has named Trinity College Dublin’s Abeba Birhane as one of the world’s 100 most influential people in the field of AI.

A cognitive scientist by trade, Birhane is currently a senior advisor in AI accountability at the Mozilla Foundation and an adjunct assistant professor at Trinity's School of Computer Science and Statistics, where she works with the Complex Software Lab.

Published today (7 September), the first-ever Time 100 AI list highlights the world’s top leaders, innovators, shapers and thinkers in the field of artificial intelligence and how they are influencing the way the technology is advancing.

Others on the list include OpenAI CEO Sam Altman, Black Mirror writer Charlie Brooker and EU commissioner Margrethe Vestager.

Birhane was born in Ethiopia and studied at University College Dublin before taking up her current role at Trinity.

“The current AI landscape, especially developments around generative AI, is simply unpredictable,” Birhane said upon being named in the Time 100 list.

“There is currently so much noise, which makes deciphering hype from reality difficult. One current interesting development is the emergence of numerous lawsuits around data harvesting practices. Such developments, if persistent, could be key to responsible and accountable data creation, curation and management practices.”

Much of Birhane's research has revolved around auditing publicly accessible AI training datasets, which are often insufficiently checked for harmful material that could make AI structurally racist, sexist and discriminatory in other ways.

“Most of the time, my screen is not safe for work,” she told Time Magazine. “I used to love working in a café; now I can’t.”

In a recent paper, currently under peer review, Birhane and her co-authors concluded that AI models trained on larger datasets are more likely to display harmful biases and stereotypes.

“We wanted to test the hypothesis that as you scale up, your problems disappear. We found that as datasets scale, hateful content also scales,” she went on.

“When a dataset consists of billions of pieces of data, it’s chaos. It’s just impossible to delve in and look at it, to diagnose the problems and find solutions.”


Vish Gain was a journalist with Silicon Republic
