AI that claims to promote objectivity in recruitment is ‘pseudoscience’, study finds

17 Oct 2022


A Cambridge study that replicated a commercial AI model used in HR found that it can further entrench discrimination in hiring.

Some companies have been looking to improve diversity and inclusion in their hiring processes by using artificial intelligence. But a new study from the University of Cambridge has found that AI can, in fact, be antithetical to these goals.

Published in the journal Philosophy & Technology last week, the study set out to examine the claim that recruitment AI can objectively assess candidates by removing gender and race from its systems, making the process fairer and more meritocratic.

The researchers replicated a commercial AI model used in industry to study how recruitment software predicts people’s personalities from pictures of their faces. They concluded that such software is based on no more than “pseudoscience”.

“Rather than removing gender and race from their systems altogether, AI-powered tools are part of a much longer lineage of sorting, taxonomising and classifying voices and bodies along gendered and racialised lines,” co-authors Dr Eleanor Drage and Dr Kerry Mackereth wrote.

They argued that attempts to strip gender and race from AI systems often misunderstand what they are, “casting them as isolatable attributes rather than broader systems of power”. This can lead to an unintentional entrenchment of inequality and discrimination within organisations.

A significant aspect of the study involved analysing candidates’ faces to score them on the ‘big five’ personality traits of interest to recruiters: agreeableness, conscientiousness, extroversion, neuroticism and openness.

Metrics with no bearing on ability, such as facial expressions, image brightness and contrast, lighting and even clothing, were found to affect the software’s predictions. The software also favoured candidates who resembled previously successful hires, which can be problematic.
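To make that concrete, here is a minimal, hypothetical sketch of how a pixel-based personality scorer of this kind can be swayed by brightness and contrast alone. The toy model, weights and image below are invented for illustration; this is not the commercial tool the researchers replicated.

```python
import numpy as np

# Hypothetical sketch of a pixel-based "personality" scorer, illustrating the
# study's finding that superficial image properties sway predictions. The
# weights, image and model are invented for demonstration only.

TRAITS = ["agreeableness", "conscientiousness", "extroversion",
          "neuroticism", "openness"]

rng = np.random.default_rng(0)
weights = rng.normal(size=(5, 64 * 64))  # one linear scorer per trait

def predict_personality(face: np.ndarray) -> dict:
    """Score a 64x64 greyscale 'photo' on the big five (toy linear model)."""
    scores = weights @ face.ravel()
    return dict(zip(TRAITS, np.round(scores, 2)))

face = rng.uniform(0, 1, size=(64, 64))          # stand-in for a photo
brighter = np.clip(face + 0.2, 0, 1)             # same face, brighter image
more_contrast = np.clip((face - 0.5) * 1.5 + 0.5, 0, 1)

print(predict_personality(face))
print(predict_personality(brighter))       # all five scores shift with brightness
print(predict_personality(more_contrast))  # and again with contrast
```

Because every pixel contributes to the score, a global change such as a brighter photo or a different shirt shifts all five trait predictions, even though nothing about the candidate has changed.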

“Machine learning models are understood as predictive; however, since they are trained on past data, they are re-iterating decisions made in the past, not the future,” Mackereth told The Register.

“As the tools learn from this pre-existing dataset, a feedback loop is created between what the companies perceive to be an ideal employee and the criteria used by automated recruitment tools to select candidates.”
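That feedback loop is straightforward to reproduce in miniature. The sketch below is a hypothetical illustration using synthetic data and scikit-learn (our assumptions, not the study’s code): a model trained on past hiring decisions that favoured one group on a job-irrelevant attribute goes on to score equally skilled candidates differently on that attribute alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of the feedback loop described above, on synthetic data:
# past hiring decisions that favoured one group on a job-irrelevant attribute
# train a model that then repeats the pattern.

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)            # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)    # job-irrelevant attribute (0 or 1)

# Historical decisions: skilled candidates were hired, but only from group 1.
hired = ((skill > 0) & (group == 1)).astype(int)

model = LogisticRegression()
model.fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates, differing only on the irrelevant attribute.
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # group 1 scores far higher
```

Trained this way, the model’s notion of an ‘ideal employee’ simply encodes the irrelevant attribute, and selecting candidates on its scores feeds the same pattern back into the next round of training data.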

The UK’s data watchdog recently revealed plans to investigate whether the use of AI in recruitment leads to bias. The move followed accusations that recruitment software was discriminating against minority groups by screening them out of the hiring process.

The study concluded by calling on HR professionals to be more cognisant of the limitations of today’s AI recruitment tools, and drew attention to the need for greater regulation in this space.

“Only through this heightened awareness of the AI capabilities of new HR tools can the field of HR seriously grapple with both the benefits and the risks posed by new and emerging AI technologies,” the study noted.


Vish Gain was a journalist with Silicon Republic
