The importance of humans in the loop for AI cannot be overstated


24 Jun 2024


The team at William Fry examine the legal challenges around automation and AI and the regulations that ensure humans are part of the process.


With the accelerating advancement of AI technologies in recent years, businesses are increasingly looking to capitalise on the technology to automate their processes. Automation brings the potential for significant benefits for organisations: increasing efficiency and productivity, cutting costs and supporting employees.

Moreover, modern customers now expect businesses to automate routine processes and implement AI systems where possible to reduce end costs. From agriculture to financial services, AI has broad applications and potential, and businesses across a wide range of industries are keen to get ahead of the curve and introduce the technology in increasingly innovative ways.

Challenges

Despite the potential benefits that AI can bring to those who adopt it, like most other ground-breaking technologies, it does not come without challenges. AI has not yet overcome its ‘black box’ problem: we often cannot see what happens inside a large language model as it produces an output, so in some cases there is a lack of transparency as to how exactly its outputs are created. This is why guardrails and risk management are so important when using this technology.

The black box problem can cause uncertainty as to whether an AI system is compatible with fundamental rights and operates within the boundaries of the law. For example, AI systems may illegally process personal data, subject humans to unlawful automated decision-making or infringe upon their intellectual property rights.

Because AI models are generally trained on massive volumes of data, such as crawls of the entire internet, the training data can include copyrighted material, and it can also carry humanity’s inherent biases and flaws.

The problem is that AI systems can amplify issues such as bias, and without proper guardrails or human supervision this can cause real harm. AI systems’ tendency to ‘hallucinate’ can also undermine the accuracy of outputs, which may lead to significant reputational damage for the companies deploying them.

Notwithstanding these challenges and concerns, AI will likely be one of the most transformative technologies created by human civilisation and any downsides will be eclipsed by its benefits, provided the technology is deployed carefully and responsibly.

To benefit from the technology while minimising risk, it is best practice to take a human-centric approach to its design, implementation and ongoing use; in certain instances, such an approach is even mandated by law. This helps organisations identify and mitigate the risks associated with deploying AI.

Legal obligations to keep humans in the loop

The AI Act

Organisations using AI systems as part of their automation must comply with the EU’s regulation of the technology under the incoming AI Act. The AI Act imposes obligations of varying stringency on developers and users of AI technologies depending on the risk level involved, with certain AI systems that pose an unacceptable risk banned outright.

A key provision of the AI Act in relation to high-risk systems is the requirement for organisations to identify risks stemming from the use of the AI system and to implement risk management strategies to mitigate them. Humans should be heavily involved in this work, with collaboration between senior management and ICT personnel to ensure an adequate understanding of how the AI system in question operates, its potential risks and how best to protect against them.

The AI Act further lays down rules for human oversight of the design and use of high-risk AI systems, requiring that providers identify, and where possible build in, measures to ensure human oversight of the system during its use. The aim is to allow the person overseeing the system to identify problems, correctly interpret the system’s output and intervene, halt or override the system where necessary, helping to prevent misuse or unintended consequences of the AI system.
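To make this concrete, here is a minimal sketch in Python of one way such an oversight gate might work. It is an illustration only, not a compliance recipe: the names (`Decision`, `human_review`, `decide`) and the confidence-threshold trigger are hypothetical assumptions, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    output: str        # what the AI system proposes
    confidence: float  # the model's self-reported confidence, 0.0-1.0


def human_review(decision: Decision) -> str:
    """Hypothetical review step: a trained operator inspects the
    system's proposed output and chooses what happens next."""
    print(f"AI proposes: {decision.output} (confidence {decision.confidence:.2f})")
    choice = input("approve / override / halt? ").strip().lower()
    if choice == "override":
        # The human substitutes their own decision for the model's.
        return input("Corrected decision: ")
    if choice == "halt":
        # The human stops the system entirely.
        raise SystemExit("System halted by human overseer")
    return decision.output


def decide(decision: Decision, review_threshold: float = 0.9) -> str:
    # Low-confidence outputs are escalated to a person, so that no
    # consequential decision rests solely on the model.
    if decision.confidence < review_threshold:
        return human_review(decision)
    return decision.output
```

In practice the escalation trigger could just as easily be random sampling, the sensitivity of the decision or an affected person’s request, rather than model confidence; the point is simply that a competent human sits between the system’s output and its effect.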

It is worth noting that under the AI Act, an AI system shall always be considered high-risk if the AI system performs profiling of people (defined as any automated processing of personal data for evaluating certain personal aspects, such as behaviour or location).

Where organisations operate high-risk AI systems, they should ensure that they delegate human oversight of the AI system to a natural person with the necessary competence, training and authority as well as the necessary support.

Depending on an organisation’s activities, existing EU legislation may also apply to its use of AI, including the GDPR and the Digital Services Act (DSA).

A common theme across these regimes and the AI Act is the safeguarding of human involvement in automated processes. These laws mandate transparency and accountability and impose obligations for human oversight of automated processes in certain circumstances. For example, these laws require that a human be kept ‘in the loop’ to oversee and manage automated decision making and profiling activities to maintain accountability and safeguard individuals’ rights.

The GDPR

Under the GDPR, individuals have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them. In essence, a decision is automated where there is no meaningful human involvement in the decision-making process.

There are limited exceptions, including where the decision is necessary for entering into or performing a contract, where the individual has given explicit consent, or where it is authorised by law.

Where organisations carry out automated decision making in the context of one of these exceptions, it is vital that they ensure that they have safeguards in place to protect individuals’ rights. Such safeguards can include offering individuals the right to obtain human intervention and enabling them to express their point of view and to contest the decision.
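As an illustration of what such a safeguard might look like in software, the sketch below (in Python, with hypothetical names throughout) records an automated decision, lets the individual contest it and express their point of view, and forces a human reviewer to set the final outcome. This is an assumption-laden sketch, not a statement of what the GDPR requires any particular system to look like.

```python
from dataclasses import dataclass, field


@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                      # e.g. "loan_refused" (hypothetical)
    contested: bool = False
    subject_comments: list[str] = field(default_factory=list)
    final_outcome: str | None = None  # set only after human review


def contest(decision: AutomatedDecision, comment: str) -> None:
    """The individual exercises the right to contest the decision and
    to express their point of view; this flags it for human review."""
    decision.contested = True
    decision.subject_comments.append(comment)


def human_review(decision: AutomatedDecision, reviewer_outcome: str) -> None:
    """A person with the authority to change the result records the
    final outcome, so the decision is no longer solely automated."""
    decision.final_outcome = reviewer_outcome


# Usage: an automated refusal is contested and then reviewed by a person.
d = AutomatedDecision(subject_id="A123", outcome="loan_refused")
contest(d, "My income details were out of date.")
human_review(d, "loan_approved")
```

The design point is that the contest path leads to a human who can genuinely change the outcome, not merely rubber-stamp the system’s original result.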

The DSA

The DSA contains a requirement similar to the GDPR’s to keep a human in the loop for automated processes. It requires providers of online platforms to give recipients of their services access to an effective internal complaint-handling system, where they can lodge complaints against certain decisions by the platform (for example, a decision to suspend or terminate a user’s access to the service).

Providers must ensure that such decisions are taken under the supervision of qualified staff and not made solely on the basis of automated means.

The importance of human involvement in automating processes and implementing AI cannot be overstated. Businesses incorporating AI should devise an AI governance framework with human oversight at its core to mitigate the risks of the technology.

Human oversight ensures that the potential risks associated with AI can be identified, planned for and mitigated, preventing legal and regulatory compliance breaches; it promotes transparency, accuracy, security and ethics, and in turn fosters public trust in a company’s use of the technology.

The AI Act, the GDPR and the DSA all prescribe significant fines for non-compliance, meaning that organisations’ AI strategies should be a board-level agenda item, which will further ensure adequate human involvement in an organisation’s AI and automation journey.

Even where an organisation is not subject to the human-oversight requirements under the laws discussed above, it is good practice to draw inspiration from them when implementing AI responsibly.

By Barry Scannell, Louisa Muldowney and India Delaney

Barry Scannell is a partner in William Fry’s technology department. Louisa Muldowney and India Delaney are both associates in William Fry’s technology department.
