Clearview AI claims to have a database of more than 50bn images of people, a number that keeps growing despite multiple fines from regulators worldwide.
Facial recognition company Clearview AI has once again been hit with a major fine for its data-collection practices, but that doesn’t seem to concern the US company.
The Dutch Data Protection Authority (DPA) has fined Clearview €30.5m, with an additional penalty of more than €5m if the company fails to comply. The DPA has accused Clearview of creating an illegal database that contains billions of images of people worldwide.
Clearview AI has created a massive facial recognition network by scraping images from the internet. It then provides facial recognition services to various intelligence and law enforcement agencies. The size of this database has grown dramatically over the years – the company said it had more than 10bn images in 2022 and now says it has more than 50bn images.
The company previously told investors that 100bn images would be enough to identify “almost everyone in the world”, according to documents obtained by The Washington Post. Clearview has faced cease and desist letters from social media companies for its image-scraping practices.
The Dutch DPA said Clearview has breached GDPR on “several points” and that its database of photos and biometric data is illegal. The authority also said Clearview took images without people’s knowledge or consent.
“Facial recognition is a highly intrusive technology that you cannot simply unleash on anyone in the world,” said Dutch DPA chair Aleid Wolfsen. “If there is a photo of you on the internet – and doesn’t that apply to all of us? – then you can end up in the database of Clearview and be tracked. This is not a doom scenario from a scary film.”
But these allegations don’t seem to concern Clearview AI, which has faced scrutiny for years over its practices. A Clearview spokesperson told The Verge that the company is not subject to GDPR as it does not have a place of business or customers in the Netherlands or the EU.
The Dutch DPA said it is looking for ways to make sure Clearview AI stops its violations, including investigating whether the company's directors can be held personally responsible.
Controversy worldwide
Clearview became a controversial company in 2020 after a New York Times investigation revealed details of its surveillance tools and its customers. This was followed by a lawsuit from the American Civil Liberties Union (ACLU) of Illinois, which accused the company of violating privacy rights.
That lawsuit reached a settlement in 2022, when Clearview AI agreed to restrictions including a permanent ban in the US on making its faceprint database available to most businesses and other private entities.
This was arguably the only lawsuit that caused Clearview to adjust its business – it now focuses more on law enforcement clients in the US instead of commercial entities. Clearview AI has also faced regulatory pressure from multiple countries including Australia and Canada, but these hurdles have not hindered the growth of the company’s database.
Last year, Clearview AI successfully appealed a £7.5m fine it received in the UK. This fine was issued by the country's Information Commissioner's Office in 2022, but a UK judge ruled that the office did not have the jurisdiction to issue an enforcement notice and monetary penalty.
Dr Kris Shrishak of the Irish Council for Civil Liberties previously told SiliconRepublic.com that it may be difficult for regulators to enforce rulings against Clearview AI because it is headquartered in the US and does not appear to have offices in other countries. He said it would be easier to have “enforcement teeth” if a US authority cracked down on the technology.