UN AI body: Global AI governance gaps must be addressed

The final report from the UN’s AI advisory body pointed to a ‘global governance deficit with respect to AI’ and made several recommendations to address this.

Entire parts of the world have been completely left out of conversations around AI governance.

That’s according to a report from the United Nations (UN) Secretary-General’s High-level Advisory Body on Artificial Intelligence (HLAB-AI), which was released today (19 September).

The Governing AI for Humanity report discussed in detail the risks and challenges around governing AI and created key recommendations for addressing these risks.

“As experts, we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place,” HLAB-AI stated in its report.

“We also need to be realistic about international suspicions that could get in the way of the global collective action needed for effective and equitable governance. The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.”

The AI advisory body was launched in October 2023 and includes well-known cognitive scientist Dr Abeba Birhane and OpenAI’s chief technology officer Mira Murati.

At the time of its launch, UN secretary-general António Guterres said he set it up to maximise AI’s benefits to humanity while containing and diminishing the risks.

Today’s report said that while there is no shortage of documents and dialogues focused on AI governance, none of them are “truly global in reach”, adding that while seven countries are parties to all the sampled AI governance efforts, 118 are parties to none – particularly in the global south.

“In terms of representation, whole parts of the world have been left out of international AI governance conversations,” it stated.

“Equity demands that more voices play meaningful roles in decisions about how to govern technology that affects us. The concentration of decision-making in the AI technology sector cannot be justified.”

Risks and challenges

While problems such as bias in AI systems, questionable AI-enabled facial recognition and AI-generated disinformation are undoubtedly a threat to society, the report stated that putting together a comprehensive list of AI risks for all time is “a fool’s errand” given how quickly the tech is evolving.

“We believe that it is more useful to look at risks from the perspective of vulnerable communities and the commons,” it said. “Framing risks based on vulnerabilities can shift the focus of policy agendas from the ‘what’ of each risk to ‘who’ is at risk and ‘where’, as well as who should be accountable in each case.”

In an interview with SiliconRepublic.com, international lawyer and AI advisory body member Jimena Sofía Viveros Álvarez said she believes the biggest threats are the weaponised uses of AI and autonomous weapons systems.

“Without proper governance, these technologies pose existential risks to humanity, not to mention they threaten human dignity as the decision over life or death is reduced to a set of zeros and ones,” she said.

“Additionally, an immediate threat is the dual-use nature of AI, as it allows for virtually any system to be easily repurposed to accommodate other use cases. In this regard, ‘civilian’ technology could be easily misused by non-State actors, such as organised crime and terrorist groups, who could weaponise it and scale their unlawful operations.”

Among the challenges that need to be addressed is closing the global AI governance gap, particularly by ensuring that countries across the world have a voice.

The report said that no global framework currently exists to govern AI and that, with the technology’s development in the hands of a few multinational companies in a few countries, the risks that come with AI could be imposed on most people without their having any say in the decision-making process.

“AI governance regimes must be global to be effective. Through a global dialogue we expect global governance will have meaningful and harmonised impact,” said Viveros Álvarez.

“Our recommendations are aimed at the UN’s member states and would need to be taken forward in cooperation with the private sector, technical community, civil society and academic actors, as well as existing international AI governance initiatives.”

Key recommendations

In the report, the HLAB-AI members made several recommendations to address the concerns and challenges that come with the evolution of AI, including the establishment of an international scientific panel on AI, an AI standards exchange and an AI capacity development network to link up a set of collaborating, United Nations-affiliated capacity development centres.

These would all be designed to bring together or source expertise on AI-related initiatives, with the scientific panel to issue annual reports identifying further risks as well as opportunities and trends.

The AI standards exchange would also be responsible for developing and maintaining a register of definitions and applicable standards for measuring and evaluating AI systems and identifying gaps where new standards are needed.

The members also recommended creating a suite of online educational opportunities on AI targeted at university students, young researchers, social entrepreneurs and public sector officials as well as a fellowship programme for promising individuals to spend time in academic institutions or technology companies. The report also suggested the creation of a global fund for AI to “put a floor under the AI divide”.

The final recommendation was to create an AI office within the UN Secretariat, which would report to the secretary-general and act as the ‘glue’ that brings the report’s proposals together.

Viveros Álvarez told SiliconRepublic.com that, while the body does not yet recommend an AI agency with enforcement functions at international levels, “plans for such an organisation should continue to be discussed” so that the governance regime can be fully effective and operational.

A sign of hope

The stark report showed that there is much work to do to ensure AI is deployed safely and fairly across the world. And while industry players with vested interests sometimes complain that regulation stifles innovation, Viveros Álvarez said this is false. “Establishing safe guidelines and facilitating active dialogues are vital for an innovative environment that ensures AI can be safely used for the benefit and protection of all of humanity,” she said.

And while the advisory body said that there are divergences across countries and sectors when it comes to AI governance discussions, the “strong desire for dialogue” has given cause for hope.

“When we look back in five years, the technology landscape could appear drastically different from today. However, if we stay the course and overcome hesitation and doubt, we can look back in five years at an AI governance landscape that is inclusive and empowering for individuals, communities and States everywhere.

“It is not technological change itself, but how humanity responds to it, that ultimately matters.”

Jenny Darmody is the editor of Silicon Republic

editorial@siliconrepublic.com