The EU is looking to ensure that AI is trustworthy, transparent and explainable. Now it’s time for organisations to get on board too, writes EY’s Ciarán Hickey.
There is widespread understanding of the strong contribution that artificial intelligence (AI) can make to societal benefit, economic growth and innovation. After decades of being relegated to science fiction, AI is now very much part of our everyday lives.
We see the technology in everyday use: completing our words as we type them, providing driving directions when we ask, vacuuming our floors to a schedule (we have all seen a video of a cat riding one of these robotic hoovers), and recommending what we should buy or binge-watch next.
With AI now being adopted across some of the most sensitive aspects of our lives, it is essential that the technology is trustworthy, transparent and explainable.
AI Act: The gold standard for trustworthy AI?
That is the focal point for the European AI Strategy, which is aimed at making the EU a global hub for AI development and deployment. The approach is firmly grounded in excellence and trust, with humans placed at the centre.
In April 2021, the European Commission presented its proposal for a regulation setting out harmonised rules on AI and its uses. The proposal, now known as the AI Act, is the first-ever legal framework on AI.
The act aims to address the risks associated with specific uses of AI through a set of complementary and proportionate rules. The legal framework proposes a clear, easy-to-understand approach, based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk. Taken together, the new rules will help Europe set the gold standard for trustworthy AI.
In many ways, the AI Act is similar to GDPR in that organisations need to become familiar with its provisions as quickly as possible and understand how it might affect them and their AI strategies.
Fortunately, there is a lot of good information available online on each article in the AI Act. A good starting point is the European Commission website.
Impact on AI deployment
In that context, organisations that are in the process of rolling out AI strategies should probably pause if they haven't already taken the potential impact of the AI Act into consideration. That impact could be very far-reaching and carries financial penalties for infringement of up to €30m or 6pc of annual turnover.
The EU is taking a risk-based approach firmly grounded on transparency, trustworthiness and explainability. These must become the guiding principles underlying every organisation’s AI strategy. It can’t be said often enough – if users don’t trust or understand the technology, they won’t adopt it.
The first step for most organisations will be to understand the risk level associated with their existing or proposed AI deployments. For example, certain AI uses are prohibited under the act. These include real-time biometric identification systems used for surveillance and social scoring, with only some important national security exceptions for surveillance.
Placing humans firmly at the centre
High-risk activities include uses in areas such as transport infrastructure, CV-screening software for recruitment procedures, and scoring of exams, to name just a few.
The act lays down specific rules for these high-risk activities, and organisations must put their AI systems through a conformity assessment before deploying them for such uses. There is also a requirement to have appropriate risk management and data governance systems in place, along with protocols that ensure full transparency and place humans firmly at the centre.
It should be noted that the majority of AI applications fall into the limited or minimal risk categories. Those associated with limited risk have specific transparency obligations: users should be made aware that they are interacting with a machine (eg a chatbot).
The legal proposal has no specific requirements for minimal-risk systems. However, I would encourage organisations to employ the same ethics-based approach regardless of the level of risk involved. When it comes to AI, there can be no such thing as too much transparency or governance.
Creating additional value through robust trust
By quantifying the risk associated with each AI activity, and setting up processes to manage that risk and ensure continuing compliance with the act, organisations can create additional value from their AI investments, as these actions will help accelerate adoption.
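To make that first quantification step a little more concrete, the sketch below shows one way an organisation might keep an internal inventory of its AI use cases mapped to the act's four risk tiers. It is a minimal, hypothetical illustration: the class names, use cases and owners are assumptions for the example, not anything prescribed by the act.

```python
# Illustrative only: a minimal sketch of an internal AI-use-case inventory,
# mapped to the AI Act's four risk tiers. The use cases, owners and notes
# below are hypothetical assumptions, not terms defined by the act itself.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment required
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations


@dataclass
class AIUseCase:
    name: str
    owner: str
    tier: RiskTier
    notes: str = ""


# A hypothetical inventory an organisation might maintain and review.
inventory = [
    AIUseCase("CV-screening for recruitment", "HR", RiskTier.HIGH,
              "Needs conformity assessment, risk management and data governance"),
    AIUseCase("Customer-support chatbot", "Service", RiskTier.LIMITED,
              "Must disclose to users that they are interacting with a machine"),
    AIUseCase("Spam filtering", "IT", RiskTier.MINIMAL),
]

# Surface the systems that need attention before deployment.
for use_case in inventory:
    if use_case.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        print(f"Review before deployment: {use_case.name} ({use_case.tier.value})")
```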
Many organisations are not yet taking a sufficiently enterprise-wide approach to trust and ethics. They are finding it difficult and are not prioritising the potential consequences of AI, such as whether someone has been unfairly disadvantaged, or why a solution made the decision or prediction that led to a particular action. While there is existing legislation that provides some protection, it is not enough.
The AI Act sets out a number of mandatory requirements to protect people’s fundamental rights in this regard. National authorities will have access to records and other relevant material held by the organisations responsible for the activity to ensure compliance with the law.
However, the strength of this enforcement will rely on national competent market surveillance authorities supervising the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation.
Ensuring AI ethics enters the conversation
Will the national authorities have the experience and skillsets needed to ensure these new measures are adopted?
While the AI Act sets out these requirements, it is also time for society generally to engage in a conversation around AI ethics and the values which underpin the solutions being implemented by organisations.
For example, bias is currently the second most common cause of AI failure, with privacy the most common. AI can systematically disadvantage or exclude a group based on identifiers such as age, gender, health, sexual orientation, ethnicity or other grounds.
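To show what that kind of group-level disadvantage can look like in practice, the hypothetical sketch below compares the rate of positive outcomes across two groups and flags a possible disparity using the common 'four-fifths' rule of thumb. The data, group labels and threshold are illustrative assumptions rather than anything mandated by the AI Act.

```python
# Illustrative only: one simple way to check whether a model's positive
# outcomes are skewed across groups, by comparing selection rates.
# The data, threshold and group labels are assumptions for the example.
from collections import defaultdict

# (group, model_decision) pairs - hypothetical screening outcomes.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, selected in decisions:
    totals[group] += 1
    if selected:
        positives[group] += 1

rates = {group: positives[group] / totals[group] for group in totals}
print("Selection rates:", rates)

# A common rule of thumb (the 'four-fifths rule') flags a possible
# disparity if one group's rate falls below 80pc of the highest rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Possible adverse impact for {group}: {rate:.0%} vs {best:.0%}")
```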
As part of its risk-based approach, the AI Act prohibits certain practices as a matter of principle or authorises them subject to specific conditions.
Earning trust among users
Systems need to be designed to minimise the risk of unfair bias with high levels of traceability, auditability and transparency built in to allow for ongoing testing and review. One of the biggest barriers to trust in AI has been the ‘black box’ nature of many of the systems – where it is nearly impossible to trace the process by which the system arrived at a decision.
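As one concrete illustration of what traceability and auditability can mean at the system level, the hypothetical sketch below records each automated decision together with its inputs, model version and a short explanation in an append-only log, so that it can be traced and reviewed later. The field names, storage format and example values are assumptions for the illustration, not requirements set out in the act.

```python
# Illustrative only: a minimal sketch of recording each automated decision
# with its inputs, model version and outcome, so it can be traced and
# reviewed later. The field names and storage choice are assumptions.
import json
import time
import uuid


def log_decision(model_version: str, inputs: dict, decision, explanation: str,
                 log_path: str = "decision_audit_log.jsonl") -> str:
    """Append one decision record to an append-only audit log."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "explanation": explanation,
    }
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return record["decision_id"]


# Hypothetical usage: a screening model declines an application.
decision_id = log_decision(
    model_version="credit-model-1.4",
    inputs={"income": 42000, "existing_loans": 2},
    decision="declined",
    explanation="debt-to-income ratio above configured limit",
)
print("Logged decision", decision_id)
```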
That may not be a particular problem when it comes to search engine rankings, but it is of critical importance when it comes to decisions about people's future careers, or about the type of medical treatment they should receive and their eligibility and suitability for it.
Ultimately, it all comes down to trust. And to be trustworthy, the technology needs to be traceable, transparent and explainable.
That is where organisations need to focus their efforts when it comes to AI strategies and why humans must be placed at the centre at all times. It is not simply a matter of compliance with legislation, it is about earning the trust of the people who will use the technology.
Ciarán Hickey is a director in data & analytics at EY Ireland, specialising in data science and artificial intelligence, leading the AI team across the island of Ireland.