How will the AI Act regulate biometric systems?

13 Aug 2024


William Fry’s Barry Scannell explains how the AI Act aims to mitigate risks around biometric technology and the challenge of distinguishing between prohibited and high-risk systems.

Advanced technology can bring both positive and negative results when deployed, and biometric systems are a key example.

These systems can both identify individuals based on certain physical traits and group them into categories based on physical or behavioural characteristics. The concept is not new – fingerprint scanning is a well-known example – but the technology has grown far more sophisticated over the years.

There are clear benefits to these systems – law enforcement can use them to quickly identify dangerous criminals, for example. But there are also valid concerns about how they can be abused.

To address these concerns, the EU’s AI Act is bringing in rules around biometric systems, banning certain uses and listing others as “high risk”. The Act will have wide-ranging effects, but what exactly will it do to regulate biometric systems and what is it trying to protect against?

Barry Scannell, a partner in William Fry’s Technology Group, spoke to SiliconRepublic.com to explain how these systems are defined under the AI Act and what businesses will need to consider when using them.

What will the AI Act cover?

Scannell said the Act aims to address multiple risks posed by biometric systems, particularly “the potential for discrimination and privacy violations”.

In a recent blogpost, William Fry said the AI Act places significant emphasis on the sensitivity and potential misuse of personal biometric data, such as physical, physiological or behavioural characteristics. Examples of biometric identifiers listed in the AI Act include the face, eyes, body shape, voice, heart rate, blood pressure and keystroke characteristics.

Scannell noted that the AI Act makes a “clear distinction” between biometric identification and categorisation systems.

“Identification systems aim to establish a person’s identity, while categorisation systems assign individuals to specific groups,” Scannell said. “Notably, the Act prohibits systems that deduce or infer sensitive attributes. However, high-risk systems that categorise non-sensitive attributes are heavily regulated but not outright banned.”

Scannell said biometric categorisation systems that deduce sensitive information such as race, political opinions or sexual orientation are banned under the AI Act, but in scenarios where “deduction isn’t being used and it’s straightforward categorisation, it is considered high risk”.

“Technical inaccuracies could unfairly impact protected groups, and there’s a risk these systems could unduly influence important decisions about people’s lives,” Scannell said.
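To make that distinction concrete, the decision rule Scannell describes could be sketched roughly as follows. This is an illustrative simplification only, not a legal test – the attribute list and function below are hypothetical.

```python
# Hypothetical, simplified illustration of the prohibited vs high-risk
# distinction described above -- not a legal test.

# Sensitive attributes whose deduction or inference the Act prohibits
# (illustrative subset only)
SENSITIVE_ATTRIBUTES = {"race", "political_opinions", "sexual_orientation"}

def classify_biometric_system(attribute: str, uses_inference: bool) -> str:
    """Sketch of the rule: inferring a sensitive attribute is prohibited;
    straightforward categorisation of other traits is high risk."""
    if uses_inference and attribute in SENSITIVE_ATTRIBUTES:
        return "prohibited"
    return "high risk (regulated, not banned)"

print(classify_biometric_system("political_opinions", uses_inference=True))
# -> prohibited
print(classify_biometric_system("age_group", uses_inference=False))
# -> high risk (regulated, not banned)
```

In practice, of course, deciding whether a system is “deducing” a sensitive attribute is precisely the interpretive challenge Scannell highlights below.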

The AI Act has faced criticism for how it plans to address biometric systems – the EU Pirate Party claims it will let member states easily use biometric surveillance. Germany is reportedly preparing draft laws that will allow police to search the web using facial recognition technology.

The European Digital Rights network has also been critical of the AI Act and claims there are “wide exceptions” to the “in-principle” ban on live mass facial recognition and other public biometric surveillance by law enforcement.

The Act also allows the use of emotion recognition technology in specific cases, despite the criticism this type of technology has faced over the years.

The AI Act states that using AI systems to detect the emotional state of individuals in situations related to the workplace and education should be prohibited, but that this prohibition should not cover AI systems being used “strictly for medical or safety reasons, such as systems intended for therapeutical use”.

What do businesses need to know?

Scannell said companies that use biometric categorisation systems will face several obligations under the AI Act as its rules take effect. The “obvious” obligation is to avoid using any banned biometric systems, but there are more specific requirements too.

“Companies must inform individuals when they’re subject to these systems and ensure all data processing complies with GDPR and other EU regulations,” Scannell said. “Transparency is key, as is implementing safeguards and regularly assessing risks.

“Companies are also required to keep detailed records and undergo independent audits to ensure ongoing compliance.”

The path to compliance won’t be easy, however – Scannell said a key hurdle for businesses will be distinguishing between prohibited and high-risk systems, which will require “careful interpretation of the Act”. He added that businesses will have to navigate the “nuances between sensitive inferences and lawful categorisations”.

“Ensuring compliance with both the AI Act and relevant national laws adds another layer of complexity,” he said. “Implementing necessary safeguards and transparency measures while staying updated on legal developments presents ongoing challenges for businesses in this space.”

What are the penalties for failure?

The AI Act imposes “substantial” penalties on businesses that fail to comply with its rules. Businesses found using prohibited AI systems face fines of up to €35m or 7pc of global annual turnover – whichever figure is higher. Meanwhile, violating the obligations that apply to high-risk AI systems can lead to fines of up to €15m or 3pc of turnover.

“Even providing incorrect or misleading information can result in fines of up to €7.5m or 1pc of turnover,” Scannell said. “The Act does provide some leniency for SMEs and start-ups, capping their fines at the lower of the specified amounts.”
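Put as arithmetic, each tier caps the fine at the higher of a fixed amount and a share of global annual turnover, with SMEs instead capped at the lower figure. The short sketch below illustrates that logic using the figures quoted above; the function name and inputs are hypothetical simplifications, not the Act’s full legal test.

```python
# Minimal sketch of the fine caps quoted above. The tiers and the
# "higher of"/"lower of" logic come from the figures in the article;
# the function name and inputs are hypothetical.

FINE_TIERS = {
    "prohibited_system": (35_000_000, 0.07),      # up to €35m or 7pc of turnover
    "high_risk_violation": (15_000_000, 0.03),    # up to €15m or 3pc of turnover
    "misleading_information": (7_500_000, 0.01),  # up to €7.5m or 1pc of turnover
}

def max_fine(violation: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum applicable fine: the higher of the fixed cap and
    the turnover-based amount, except for SMEs, which get the lower figure."""
    fixed_cap, pct = FINE_TIERS[violation]
    turnover_based = pct * global_turnover_eur
    return min(fixed_cap, turnover_based) if is_sme else max(fixed_cap, turnover_based)

# A firm with €1bn global turnover using a prohibited system: 7pc (€70m) > €35m
print(max_fine("prohibited_system", 1_000_000_000))            # 70000000.0
# An SME with €20m turnover: the lower of €35m and €1.4m applies
print(max_fine("prohibited_system", 20_000_000, is_sme=True))  # 1400000.0
```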


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com