EU’s AI Act enters into force – what does this mean for businesses?

1 Aug 2024


The AI Act is finally here and big changes are on the way. Here are the key details of the Act and the tips businesses should heed before its full arrival.

The EU’s AI Act – its landmark regulation to rein in the growing power of artificial intelligence – has officially entered into force today (1 August), heralding big changes for Big Tech.

The Act has been years in development: it was first proposed in 2021 and was reworked in recent years following the sudden rise of generative AI technology. It has also come under heavy scrutiny – challenges from member states towards the end of 2023 made it seem like the Act could collapse before coming to fruition.

But after delays, adjustments and multiple landslide votes, the AI Act is finally here. The changes won’t be felt immediately – it will be years until all of the rules come into effect – but this will give businesses and member states time to prepare for the Act’s full arrival.

The AI Act in brief

Simply put, the AI Act is an attempt to balance managing the risks of this technology against letting the EU benefit from its potential. It has been argued that this is the most robust and detailed form of AI regulation in the world, and it could influence legislation elsewhere.

The Act is designed to regulate AI technology through a risk-based approach – the riskier an AI application is, the more rules apply to it. Minimal-risk systems such as spam filters and recommender systems face no obligations under the AI Act.

Meanwhile, high-risk applications such as AI systems used for recruitment, AI-based loan assessments or autonomous robots will face much stricter requirements, including human oversight, high-quality datasets and cybersecurity measures. Some systems are banned entirely, such as emotion recognition systems used in the workplace.

The AI Act also introduces rules for “general-purpose AI models”, which are highly capable AI models that are designed to perform a wide variety of tasks such as generating human-like text – think ChatGPT and similar chatbots.
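To make the tiering concrete, here is a minimal sketch of how a business might keep an internal inventory of its AI systems mapped to the Act’s risk tiers. The tier names follow the risk-based approach described above, but the example systems, the obligation lists and the structure itself are illustrative assumptions, not a legal classification tool.

```python
# Illustrative sketch only: a simplified internal inventory mapping AI systems
# to the AI Act's risk tiers. The example systems and obligation summaries are
# assumptions for demonstration, not legal advice.

from dataclasses import dataclass

OBLIGATIONS = {
    "unacceptable": ["prohibited - must not be deployed"],
    "high": ["human oversight", "high-quality datasets", "cybersecurity measures"],
    "general-purpose": ["transparency and documentation duties"],
    "minimal": ["no obligations under the AI Act"],
}

@dataclass
class AISystem:
    name: str
    risk_tier: str  # one of the keys in OBLIGATIONS

    def obligations(self) -> list[str]:
        return OBLIGATIONS[self.risk_tier]

# Hypothetical inventory a business might maintain ahead of the Act's deadlines
inventory = [
    AISystem("spam filter", "minimal"),
    AISystem("CV-screening tool for recruitment", "high"),
    AISystem("workplace emotion recognition", "unacceptable"),
]

for system in inventory:
    print(f"{system.name}: {', '.join(system.obligations())}")
```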

Time to prepare

The AI Act’s impact won’t be felt for another six months, when the prohibitions on unacceptable-risk AI applications begin to apply. The rules for general-purpose AI models will apply one year from now, while the majority of the AI Act’s rules will start applying on 2 August 2026.

Meanwhile, EU member states have until 2 August 2025 to designate “national competent authorities”, which will oversee the application of the AI Act and carry out market surveillance activities.

With AI making its way into so many use cases, it will be important for businesses of all sizes to consider the types of AI systems they are using and where these fall within the AI Act’s risk tiers. Phil Burr, head of product at Lumai, said the biggest risk businesses face is ignoring the Act.

“The good news is that the Act takes a risk-based approach and, given that the vast majority of AI will be minimal or low-risk, the requirements on businesses using AI will be relatively small,” Burr said. “It’s likely to be far less than the effort required to implement the GDPR regulations, for example.

“The biggest problem for compliance is the need to document and then perform regular assessments to ensure that the AI risks – and therefore requirements – haven’t changed. For the majority of businesses there won’t be a change in risk, but businesses at least need to remember to perform these.”

While businesses have plenty of time to prepare, the road ahead is not clear for them. Forrester principal analyst Enza Iannopollo noted that firms don’t have any pre-existing experience of complying with these types of rules, which adds “complexity to the challenge”.

“Right now, it’s crucial that organisations ensure they understand what their and their providers’ obligations are in order to be compliant on time,” Iannopollo said. “This is the time for organisations to map their AI projects, classify their AI systems and risk assess their use cases.

“They also need to execute a compliance roadmap that is specific to the amount and combination of use cases they have. Once this work is done, every company will have a compliance roadmap that is unique to them.”

To bridge the period between now and the full implementation of the Act, the European Commission has launched the AI Pact, which is an initiative for AI developers to voluntarily adopt key obligations of the Act ahead of its legal deadlines.

The risk of fines

The EU has been introducing stronger penalties for breaches in its more recent legislation, with the Digital Markets Act and Digital Services Act carrying heavy fines for non-compliance.

The AI Act is no exception to this approach: companies that breach the Act could face fines of up to 7pc of their global annual turnover for violations involving banned AI applications. They also face fines of up to 3pc for breaches of other obligations and up to 1.5pc for supplying incorrect information.
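For a rough sense of scale, the sketch below applies those percentage caps to a hypothetical annual turnover figure. The turnover is invented and the calculation uses only the percentage caps cited above, so it is an illustration of relative size rather than how a fine would actually be set.

```python
# Rough illustration only: applying the AI Act's maximum fine percentages
# (as cited above) to a hypothetical global annual turnover.

annual_turnover_eur = 500_000_000  # hypothetical turnover, not a real company

fine_caps = {
    "banned AI applications": 0.07,            # up to 7pc of turnover
    "other obligations": 0.03,                 # up to 3pc
    "supplying incorrect information": 0.015,  # up to 1.5pc
}

for violation, cap in fine_caps.items():
    print(f"{violation}: up to EUR {annual_turnover_eur * cap:,.0f}")
```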

Maria Koskinen, AI policy manager at Saidot, noted how significant these fines are in comparison to other EU regulation.

“For reference, GDPR caps maximum fines to 4pc of annual turnover, whereas EU competition law caps this at 10pc,” Koskinen said. “This comparison shows a clear movement in regulatory enforcement for the AI Act, as the maximum fines inch closer to those imposed on anticompetitive behaviour.

“As businesses around the world look to Europe, the AI Act’s requirements will lead the way in responsible AI innovation and governance, while ensuring organisations are prepared for its rapidly approaching enforcement.”

Issues with the Act?

The AI Act is a very detailed piece of legislation, covering a wide range of AI applications. But it did not come out of years of negotiations with a squeaky-clean reputation; there have been criticisms of the Act from members of the European Parliament and AI experts.

For example, the EU’s Pirate Party was vocal for months about the Act allowing member states to use biometric surveillance in specific circumstances – such as facial recognition technology for certain law enforcement purposes.

Dr Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties, told SiliconRepublic.com earlier this year that the AI Act “does not set a high bar for protection of people’s rights”. He also claimed that the Act relies too much on “self-assessments” when it comes to risk.

And while others praise the AI Act, there are concerns around how AI regulation will be adopted in other countries, such as the UK. Eleanor Lightbody, Luminance CEO, said a one-size-fits-all approach to AI regulation “risks being rigid and, given the pace of AI development, quickly outdated”.

“With the passing of the Act, all eyes are now on the new Labour government to signpost the UK’s intentions for regulation in this crucial sector,” Lightbody said. “Implementing a flexible, adaptive regulatory system will be key, and this involves close collaboration with leading AI companies of all sizes.

“Only by striking the right balance between innovation, regulation and collaboration can the UK maintain its long heritage of technological brilliance and achieve the type of AI-driven growth that the Labour Party is promising.”


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com