ELI president Pascal Pichonnaz says that the AI Act is ‘flexible enough potentially to adapt to new risks’.
Top legal experts from all over Europe gathered in Dublin last week to discuss, among other issues, perhaps the biggest tech-related debate of the 2020s – regulations around safeguarding privacy in the booming era of artificial intelligence (AI).
The European Law Institute’s (ELI) annual conference took place in Ireland for the first time and saw participation from big names, including Marko Bošnjak, president of the European Court of Human Rights, and Emma Redmond, assistant general counsel for privacy and data protection at OpenAI. Attendees gathered at the Law Society of Ireland in Dublin and discussed the future of regulation in the region as AI technology continues to develop apace.
Regulators are taking a careful step forward in targeting AI technologies, with the aim of creating flexible guardrails that ensure protection whilst allowing for innovation in the sector.
The new AI Act, which entered into force this August, regulates models based on use case and risk, and ELI president Pascal Pichonnaz tells SiliconRepublic.com that the Act’s “flexibility” should satisfy regulators’ concerns even though some Big Tech players are still not convinced that it is the answer.
The AI Act is “flexible enough potentially to adapt to new risks”, Pichonnaz says.
Privacy issues with AI
AI is a fast-growing, pervasive technology that has already cemented its position as a staple in society. Large language models (LLMs) are trained on vast amounts of data – from social media posts to literary works and even government documents.
While some AI models are trained for specific tasks using structured data, many of the generative AI tools available are trained on unstructured data – which may contain personal data.
“If it is unstructured, you don’t know what the content is,” Pichonnaz says. “You don’t know whether the AI is using personal data.”
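To see why unstructured data is so hard to audit, consider a minimal sketch of the kind of pattern-based scan one might run over scraped text before it enters a training set. This is a toy illustration, not any vendor’s actual pipeline: the sample text, patterns and function name are invented here, and real personal-data detection covers far more categories (names, addresses, national identifiers) than two regular expressions can catch.

```python
import re

# Two toy patterns for common kinds of personal data. Real PII
# detection is far broader (names, addresses, national IDs)
# and far harder than a pair of regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scan_for_pii(text: str) -> dict:
    """Return any substrings of `text` matching the toy patterns."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

# The kind of scraped, unstructured post an LLM might be trained on,
# with personal data embedded mid-sentence (invented sample).
sample = "Great talk! Email me at jane.doe@example.com or call +353 1 234 5678."
print(scan_for_pii(sample))
# {'email': ['jane.doe@example.com'], 'phone': ['+353 1 234 5678']}
```

The gap the sketch leaves is the point: anything that doesn’t match a known pattern slips through unnoticed, which is precisely the concern Pichonnaz raises about unstructured corpora.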
These LLMs can be used for almost anything: profiling job candidates, aiding insurance companies and developing targeted advertising – or, in the hands of bad actors, powering spear-phishing campaigns and victim impersonation attacks.
Protecting privacy becomes paramount, especially given the risk of data breaches that could put users’ personal data in the hands of bad actors.
AI-assisted criminality is particularly high in Southeast Asia, a recent UN report found; however, Europe isn’t safe from harm, Pichonnaz points out. “It all depends on the data security that is in place,” he says. “There are certainly issues also in Europe. I don’t see why we would be better protected in terms of practical measures.”
In May, the European Parliament was targeted in a large cyberattack affecting its recruitment system and exposing the personal data of more than 8,000 current and former employees. Data privacy advocacy group noyb pointed out in a complaint that the Parliament had long been aware of its own cybersecurity vulnerabilities.
GDPR and the AI Act
The General Data Protection Regulation (GDPR), the premier legislation protecting user privacy in the EU, predates the AI boom and does not explicitly address AI models. However, some of its provisions are relevant to the technology – especially those governing data processing and consent.
Article 22 of the GDPR, in particular, gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects – unless an exception applies, such as the user’s explicit consent.
The ELI conference last week discussed the role of the GDPR for AI. Pichonnaz says that while the GDPR addresses “individual decision-making”, it is up to the courts to interpret how it might apply to AI models that process groups of data subjects rather than individuals.
While the GDPR focuses on individual privacy and data processing, the newly passed AI Act assesses AI models based on implementation and possible risk.
The AI Act is arguably the most robust and detailed AI regulation in the world. It takes a risk-based approach: higher-risk applications of AI are met with stricter rules. Certain AI systems – such as social scoring or behavioural manipulation – are banned entirely, while strict rules surround the use of biometric data.
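As a rough illustration of that tiering, the Act sorts systems into four risk levels – unacceptable, high, limited and minimal. The mapping below is a simplified sketch for orientation only: the example use cases are illustrative, and the Act itself assigns categories through detailed legal criteria, not a lookup table.

```python
# Illustrative mapping of example use cases to the AI Act's four
# risk tiers. Simplified for orientation; the Act assigns categories
# via detailed legal criteria, not a lookup table like this one.
RISK_TIERS = {
    "social scoring": "unacceptable - banned outright",
    "behavioural manipulation": "unacceptable - banned outright",
    "CV screening for recruitment": "high - strict obligations apply",
    "customer-service chatbot": "limited - transparency duties",
    "spam filtering": "minimal - largely unregulated",
}

for use_case, tier in RISK_TIERS.items():
    print(f"{use_case}: {tier}")
```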
To ensure organisations use AI properly, various governing bodies are being set up, including an AI Office to help implement the Act, a scientific panel of independent AI experts and an AI Board consisting of member state representatives.
While obligations under the Act are still being phased in, Big Tech players have had a critical response so far.
In an open letter signed by 36 organisations including Meta, Spotify, Ericsson, Klarna and SAP, the signatories said that Europe is becoming less competitive and innovative than other regions due to “inconsistent regulatory decision-making”.
“Europe can’t afford to miss out on the widespread benefits from responsibly built open AI technologies that will accelerate economic growth and unlock progress in scientific research,” the letter read. “For that we need harmonised, consistent, quick and clear decisions under EU data regulations that enable European data to be used in AI training for the benefit of Europeans.”
Meanwhile, Dr Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties, told SiliconRepublic.com earlier this year that the AI Act “does not set a high bar for protection of people’s rights”. He also claimed that the Act relies too much on “self-assessments” when it comes to identifying risks.
However, Pichonnaz says that the Act’s strength depends on enforcement. “The question now is to what extent the enforcement mechanisms that are in place ensure that this risk assessment is made in a proper way so that higher risk algorithms are properly assessed.”
In Ireland, the Department of Enterprise, Trade and Employment is leading the national implementation of the AI Act and will designate a notifying authority on AI by mid-2025.
Updated, 2:40pm, 15 October 2024: This article was amended to clarify that Pascal Pichonnaz did not refer to the AI Act as a “soft law”.