Microsoft has noted some of the issues around facial recognition and is limiting public features as part of a broader push for ethical AI use.
Microsoft is limiting access to parts of its facial recognition technology and removing capabilities that infer contested attributes such as a person’s age, gender and emotional state.
The tech giant noted that experts “inside and outside the company” have highlighted issues when it comes to the definition of emotions, the way AI detects them and privacy concerns around this type of capability.
As a result, some aspects of Microsoft’s facial recognition tech are being retired, including the ability to detect emotion, gender or age. These features are no longer available for new customers and will be discontinued for existing customers within one year.
New customers who want to use Microsoft’s facial recognition service, called Azure Face, will have to apply for access and explain how they intend to use the system, while existing customers have one year to apply and receive approval for continued access.
The decision is part of a broader push by Microsoft to tighten the usage of its AI products. The tech company has updated its Responsible AI Standard, a 27-page document that sets out requirements for accountability in AI systems and their impact on society.
“We recognise that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” said Microsoft chief responsible AI officer Natasha Crampton in a blogpost.
“As part of our work to align our Azure Face service to the requirements of the Responsible AI Standard, we are also retiring capabilities that infer emotional states and identity attributes such as gender, age, smile, facial hair, hair and makeup.”
While emotional detection features won’t be available for public use, Microsoft’s principal group product manager for Azure AI, Sarah Bird, said the company recognises that “these capabilities can be valuable when used for a set of controlled accessibility scenarios”.
Concerns around facial recognition
In recent years, concerns have been raised about facial recognition tech in terms of surveillance, privacy, consent, accuracy and bias.
Other tech companies have been taking a step back in this area. Meta announced last November that it will delete face recognition data from more than 1bn users collected over a decade, and IBM announced plans the previous year to scrap its facial recognition software over concerns of mass surveillance or racial profiling.
The EU published proposals for regulating AI last year, but these were criticised by EU watchdogs for not going far enough when it comes to live facial recognition in public places and MEPs called for a ban on biometric mass surveillance technologies.
Despite the concerns raised, the technology is still being deployed in many areas. Last month, The Irish Times reported that the Garda Síochána is expected to get new powers to use facial recognition for criminal identification in Ireland.
Facial recognition technology company Clearview AI has also been facing criticism and pressure from watchdogs around the world.
In February, it was reported that the company told investors it is on track to have 100bn facial photos in its database within a year. This would be enough to identify “almost everyone in the world”, according to documents obtained by The Washington Post.
The American Civil Liberties Union of Illinois filed a lawsuit against Clearview AI in 2020, alleging it violated the privacy rights of citizens. That lawsuit reached a settlement last month, when Clearview agreed to a new set of restrictions including a permanent ban in the US on making its faceprint database available to most businesses and other private entities.
But despite being hit with a £7.5m fine in the UK for multiple data protection breaches, the company’s database still appears to be growing based on its claims.
Dr Kris Shrishak of the Irish Council for Civil Liberties recently told SiliconRepublic.com that it may be difficult for regulators to enforce rulings against Clearview AI as it is headquartered in the US. He said it would be easier to have “enforcement teeth” if a US authority cracked down on the technology.