Fidelity Investments’ Karen Conway explains the importance of diversity within AI teams and how AI professionals can work towards non-biased systems.
Through our focus on AI and analytics this week, we’ve looked at the exponential growth of AI and the challenges that come with it.
The expansion of and interest in the industry will no doubt give rise to many jobs, as companies will increasingly need AI engineers and architects, as well as data and machine learning experts, to meet future demand.
But alongside the popularity of AI tools such as facial recognition, computer vision and chatbots sits an important conversation about ethical AI, free from bias. And such AI can only exist if the professionals working on this technology are thinking about how to safeguard it against bias.
To find out more about this, SiliconRepublic.com spoke to Karen Conway, director of software engineering at Fidelity Investments in Galway.
Conway spoke about the issues that can arise when AI professionals don’t pay attention to the outcomes of this technology from an ethical perspective.
“We see recommender systems that are proposing self-harm content. We see credit card limits being changed dependent on gender. We see facial recognition is not working as good for certain races.”
She said that regulators and lawmakers, including the European Commission and the White House in the US, are pushing ethical AI to the forefront, but there's still more work to do. More importantly, those working in the AI industry need to ensure they understand what these future regulations will entail.
“Everybody involved must understand AI outcomes are built on data that may not reflect reality and may not reflect society, and that models are designed by humans so they could have human bias embedded in them,” she said.
“The designers behind the systems need to ensure that it behaves in a fair and just manner to all of society.”
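Conway's point about bias embedded in data can be made concrete with a simple fairness check. The sketch below, with hypothetical loan-approval decisions and group labels, measures the "demographic parity gap": the largest difference in approval rates between any two groups. It is an illustration of one common bias metric, not a method attributed to Fidelity.

```python
# Minimal sketch of a demographic parity check over a hypothetical model's
# decisions. Data, group labels and the loan-approval framing are all
# illustrative assumptions, not from the article.

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative outcomes: the model approves one group far more often.
decisions = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]
gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap near zero suggests the two groups are treated similarly on this metric; a large gap, as here, is the kind of outcome disparity, such as credit limits varying by gender, that Conway warns about.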
The European Commission has been working hard at fine-tuning legislation around the use of AI, from a list of guidelines on trustworthy AI published by an expert group in 2019 to the AI Act, which aims to rein in 'high-risk' AI.
Among these regulations is the need for transparency and explicability, which Conway said is vital and needs to be at the forefront of AI professionals’ minds.
“It’s really important for people to trust the system, and they can if they can explain it. It’s not good enough to say the system said ‘no’. At the end of the day, the company needs to be in a position to understand, explain and own the outcome of the AI they’ve designed.”
She added that in order to create unbiased, ethical AI systems, companies need to start with diverse teams.
“Although we have very talented and experienced designers and developers creating the AI, the problem is if they’re all too similar, the data that’s being used is built on bias and it leads to unintended bias,” she said. “If we can get a more diverse base working in STEM, we have access to much more perspectives.”
While the challenge of biased or unethical AI systems can be seen as a difficult hurdle to overcome, Conway remained optimistic about the future of the sector.
“When AI is done right, without bias and discrimination and following ethical principles, it can positively augment businesses and society and create a brighter world for everyone.”