Global AI safety treaty signed by US, UK and EU

5 Sep 2024


The Framework Convention on AI aims to progress this technology while managing the various risks it may pose to human rights, democracy and the rule of law.

The Council of Europe has today (5 September) opened for signatures a global treaty that aims to protect human rights and democracy as AI systems evolve.

This treaty provides a legal framework covering the entire lifecycle of AI systems. It aims to promote the progress of AI technology while managing the risks the technology may pose to human rights, democracy and the rule of law – similar to regulations such as the EU’s AI Act.

The Framework Convention was adopted by the Council of Europe Committee of Ministers on 17 May 2024 and has been signed by the UK, the US, the EU, Israel, Andorra, Georgia, Iceland, Norway, the Republic of Moldova and San Marino.

Countries from all over the world are eligible to join this treaty and comply with its provisions. It will enter into force roughly three months after five signatories – including at least three Council of Europe member states – have ratified it. It is the first legally binding international treaty of its kind.

“We must ensure that the rise of AI upholds our standards, rather than undermining them,” said Council of Europe secretary general Marija Pejčinović Burić. “The Framework Convention is designed to ensure just that. It is a strong and balanced text – the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives.

“The Framework Convention is an open treaty with a potentially global reach. I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible.”

Countries from around the world have been looking into regulation to rein in the sudden growth of AI in recent years. Australia recently revealed it is working on its own rules on the use of AI systems and is considering making them mandatory for high-risk settings, Reuters reports.

Other efforts have been made to ensure a global effort in creating safe AI. Earlier this year, the UN adopted a landmark resolution on the promotion of “safe, secure and trustworthy” AI systems. At the AI Seoul Summit in May, leading AI companies including Anthropic, Google, Meta, Microsoft and OpenAI agreed to an expanded set of safety commitments relating to the development of AI.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com