UK fines Clearview AI £7.5m for privacy breaches with facial recognition

24 May 2022


The US facial recognition tech company was facing a potential £17m fine for gathering data without the knowledge or consent of UK residents.

The UK’s data protection watchdog has hit controversial facial recognition company Clearview AI with a £7.5m fine for multiple data protection breaches.

The Information Commissioner’s Office (ICO) has also ordered Clearview AI to stop obtaining and using the personal data of UK residents and to delete any existing UK data from its growing database.

Clearview AI, which describes itself as “the world’s largest facial network”, has built a database with more than 20bn images of people’s faces. The US-based company collects these images from publicly available sites such as social media platforms.

It then works with customers such as law enforcement agencies to compare facial data against its database.

But the ICO described Clearview AI’s practices as “unacceptable” as it collected the information of UK residents without their consent or knowledge.

“The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service,” UK information commissioner John Edwards said in a statement yesterday (23 May).

“People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement.”

The ICO said Clearview AI breached UK data laws by failing to have a lawful reason for collecting people’s data, failing to use this information in a way that is fair and transparent, not having a process to stop the data being retained indefinitely, and not meeting GDPR standards for biometric data.

The watchdog said Clearview AI also asked for additional personal information, including photos, when members of the public enquired whether they were on the company’s database. “This may have acted as a disincentive to individuals who wish to object to their data being collected and used,” the ICO said.

The fine is considerably lower than the £17m penalty the company potentially faced when the ICO announced its provisional intent to fine Clearview AI last November.

Regulatory pressure

In February, it was reported that Clearview AI told investors it was on track to have 100bn facial photos in its database within a year. This would be enough to identify “almost everyone in the world”, according to company documents obtained by The Washington Post.

While Clearview AI has ambitious goals for expanding its services, it has been facing pressure from organisations and watchdogs around the world.

In 2020, the American Civil Liberties Union (ACLU) of Illinois filed a lawsuit against the company, alleging it violated the privacy rights of citizens. The ACLU said the case was filed after a New York Times investigation revealed details of the company’s tracking and surveillance tools.

That lawsuit reached a settlement earlier this month, when Clearview AI agreed to a new set of restrictions, including a permanent ban in the US on making its faceprint database available to most businesses and other private entities.

Last November, Australia’s top information authority ordered Clearview AI to stop collecting facial images and biometric templates of Australian citizens, and to delete the data it already held. This came after Canada’s federal privacy commissioner deemed the company’s practices illegal, saying it collected facial images of Canadians without their consent.

Dr Kris Shrishak of the Irish Council for Civil Liberties recently told SiliconRepublic.com that it may be difficult for regulators to enforce rulings against Clearview AI as it is headquartered in the US and does not appear to have offices in other countries. He said it would be easier to have “enforcement teeth” if a US authority cracked down on the technology.

In recent years, concerns have been raised about facial recognition tech in terms of surveillance, privacy, consent, accuracy and bias.

Last year, EU proposals for regulating AI were criticised by EU watchdogs for not going far enough when it comes to live facial recognition in public places. MEPs then called for a ban on biometric mass surveillance technologies, such as facial recognition tools, citing the threat these technologies can present to human rights.

Some companies have also been taking a step back on facial recognition. Facebook parent company Meta announced last November that it would delete the face recognition data of more than 1bn users collected over a decade, and that those who had opted in to the face recognition feature would no longer be automatically recognised in photos and videos on the platform.

In July 2020, IBM said it would scrap its facial recognition and analysis software, saying it opposed the use of technology for mass surveillance or racial profiling.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com