The lab will be led by Dr Abeba Birhane, named by Time Magazine as one of the 100 most influential people in AI last year.
A new Artificial Intelligence Accountability Lab (AIAL) aimed at addressing the structural inequalities and transparency issues related to AI deployment is launching this evening (28 November) at Trinity College Dublin (TCD).
Supported by a nearly €1.5m grant from the AI Collaborative (an initiative of the Omidyar Group), Luminate and the John D and Catherine T MacArthur Foundation, the AIAL will examine the technology’s broader impact, aiming to hold “powerful entities” accountable for technological harms, outline a “justice-driven” AI evaluation model, as well as audit already deployed models – specifically those used on vulnerable groups.
Housed in the School of Computer Science and Statistics at Trinity, the AIAL will be led by Dr Abeba Birhane, a research fellow in the ADAPT Research Ireland Centre and one of Time Magazine’s 100 most influential people in AI last year.
According to the Lab – which cited examples of structural issues caused by AI – a UK National Health Service liver allocation algorithm for transplant patients was found to discriminate by age. Meanwhile, a decision support algorithm used by the Danish child protection services was deployed without formal evaluation and was found to suffer from numerous issues, including information leakage, inconsistent risk scores and age-based discrimination.
Discussing issues around the current limitations and dangers of generative AI with SiliconRepublic.com last year, Birhane said that the media reporting around generative AI tends to be “extremely super-hyped” when it comes to its abilities. However, she added that we hear “very little about its problems”.
Speaking about the new lab today, Birhane said it “aims to foster transparency and accountability in the development and use of AI systems”.
“And we have a broad and comprehensive view of AI accountability. This includes better understanding and critical scrutiny of the wider AI ecology – for example via systematic studies of possible corporate capture, to the evaluation of specific AI models, tools and training datasets.”
The Lab’s initial goals include leveraging empirical evidence to inform evidence-driven policies, to challenge and dismantle “harmful” technologies, and to pave the way for just and equitable AI. It also plans to collaborate with research and policy organisations across Europe and Africa to strengthen international accountability measures and policy recommendations.
Prof John D Kelleher, the director of ADAPT and the chair of AI at Trinity, said: “We are proud to welcome the AI Accountability Lab to ADAPT’s vibrant community of multidisciplinary experts, all dedicated to addressing the critical challenges and opportunities that technology presents.
“By integrating the AIAL within our ecosystem, we reaffirm our commitment to advancing AI solutions that are transparent, fair and beneficial for society, industry and government.
“With the support of ADAPT’s collaborative environment, the Lab will be well positioned to drive impactful research that safeguards individuals, shapes policy and ensures AI serves society responsibly.”