Abstract, ubiquitous and opaque: The challenges of AI regulation

27 Sep 2023


Prof Cecilia Danesi discusses the ethical governance of AI and the difficulty of striking a balance in AI regulation.

Prof Cecilia Danesi first became interested in AI while working on her master’s degree. While she was initially focused on AI and tort law, she began to look into other areas of law and to bring in perspectives from other disciplines, such as social science.

“I have a clear memory of what impacted me the most: algorithmic bias.

“For that reason, I decided to specialise in AI, gender equality and human rights.”

Danesi went on to establish the first module on AI and law at the University of Buenos Aires, and worked with her team to “raise awareness, study the phenomenon and contribute by helping different sectors (such as the academic, the private and the public sectors) to reach an ethical governance of AI”.

Currently, Danesi is a researcher at the Institute for European Studies and Human Rights at the University of Salamanca in Spain. She is also a professor at the University of Buenos Aires School of Law and the author of a book on the ethics of AI and law, El Imperio de Los Algoritmos (The Empire of Algorithms).

‘We have to train humans with the necessary skills of the present, but even more, of the future’

AI and its impact on law

According to Danesi, AI has an impact on almost every area of law. As a result, she says, each area has to rethink either its foundations or some aspect of its regulation.

“The first and huge dilemma around it is to determine if it is necessary to create a new regulation specialised for the AI phenomenon, or if it is enough to reinterpret or amend an existing law. There is no consensus about this.”

And despite the introduction of initiatives like the EU AI Act, which Danesi describes as a “breakthrough” for AI regulation, she still believes that more needs to be done.

“The main issue is the difficulty to regulate something that we do not know and that is exponentially growing each and every day,” she says.

Danesi believes that “the big challenge we need to face” is the issue of introducing a law to limit and control something as “abstract and ubiquitous as AI”.

Despite this, Danesi says the AI Act does address some key issues, including data-related algorithmic bias, cybersecurity, transparency and explainability. She also highlights the Act’s proposal to supervise high-risk algorithms, which she believes will become mandatory in the future.

Balance and scale

Since the AI Act proposes different levels of regulatory scrutiny based on the risk that AI systems pose, how can a balance be struck between fostering innovation and safeguarding societal interests?

“First of all,” says Danesi, “risk-based division is the right way to classify systems and to create obligations depending on each of them. This allows us to focus on the social impact of the system, independently of technological advancements.

“Secondly, to reach that balance, we need to work on a regulation with an interdisciplinary perspective. That will create a law which can be applied to the facts, and not on utopian and bureaucratic mandates.”

Another issue in AI ethics is ensuring transparency and providing understandable explanations for systems’ decisions. Danesi points out that not every AI system is explainable; deep learning algorithms, for example, are what she calls “opaque”.

As a result, Danesi says we need to decide in which areas we want to use explainable AI and in which areas we can accept unexplainable systems. This, she adds, leads to another point of analysis: what kind of explanation do we need?

“We do not need to know the mathematical formula of the algorithm,” she says, pointing out that this could cause conflict with intellectual property regulations.

“We need to know what the variables and weights of the algorithm are. For example, if the variable to assign a university scholarship is race, we can realise that the system is discriminatory.”
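The article includes no code, but a minimal sketch can make this concrete. The example below, assuming scikit-learn’s LogisticRegression and an entirely synthetic scholarship scenario (the feature names and data are illustrative, not from the article), shows how listing each variable alongside its learned weight can surface a discriminatory input:

    # A minimal sketch of inspecting a model's variables and weights.
    # The data, feature names and scenario are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = ["grades", "income", "race_encoded"]  # hypothetical inputs

    # Synthetic applicants: 500 rows, 3 features
    X = rng.normal(size=(500, 3))
    # Biased labels: the outcome leans heavily on the protected attribute
    y = (2.0 * X[:, 2] + 0.5 * X[:, 0] + rng.normal(size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # We do not need the "mathematical formula" of the model: listing
    # each variable with its learned weight is enough to spot that a
    # protected attribute is driving decisions.
    for name, weight in zip(features, model.coef_[0]):
        print(f"{name:>12}: {weight:+.2f}")

In this sketch, the large weight printed for the protected attribute flags the problem without revealing anything that would conflict with intellectual property protections.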

Among the major issues that have plagued AI development are bias and discrimination in the design of AI systems. Danesi believes these issues can be remedied and prevented through awareness, training and education from “different angles”.

“First, we need to create awareness of AI impacts and its implications for human rights. Then, training in human rights, gender equality and diversity has to be mandatory for programmers. Finally, high-risk AI systems must be audited.”
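The article does not specify what such an audit involves, but a common building block is a group fairness check on a system’s outcomes. Below is a minimal sketch, assuming binary approve/reject decisions and a single protected attribute; the four-fifths (0.8) threshold is a widely used rule of thumb, not a standard taken from the AI Act:

    # A minimal sketch of one step in an algorithmic audit: checking
    # selection rates per group. Decisions and threshold are illustrative.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, approved) pairs."""
        approved, total = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            total[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / total[g] for g in total}

    # Hypothetical audit log: group A approved 80pc of the time, group B 40pc
    decisions = (
        [("A", True)] * 80 + [("A", False)] * 20
        + [("B", True)] * 40 + [("B", False)] * 60
    )

    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    print(rates, f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # four-fifths rule of thumb
        print("Potential disparate impact -- flag for human review")

A real audit of a high-risk system would go further, covering data provenance, documentation and human oversight, but outcome checks of this kind are a typical starting point.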

The future of AI in law

As AI develops at an unprecedented pace, concerns have arisen regarding its impact on jobs and the possibility of replacing humans in certain roles. How can these concerns be addressed?

“Like all industrial revolutions,” says Danesi, “it implies the transformation of jobs and, consequently, the readaptation of human skills for the labour market.

“The key is in education. We have to train humans with the necessary skills of the present, but even more, of the future.”

As for the future, Danesi concludes by stressing the importance of taking action now to establish legal boundaries and regulations for AI, rather than waiting to see what happens.

“Now, it is late,” she warns. “But if we keep waiting, it will be too late.

“Artificial intelligence is growing exponentially. Beyond ridiculous letters that propose slowing its progress, the reality is that it is one of the industries that has the highest levels of investment. We need to reach agreements and ethical principles of AI that can then be translated into a law.”


Colin Ryan is a copywriter/copyeditor at Silicon Republic

editorial@siliconrepublic.com