The UN has received a stark warning from the Future of Life Institute, in an open letter signed by more than 100 leaders in the AI industry.
Artificial intelligence (AI) is advancing at a rapid rate, and, while it is being used in innovative and helpful ways, there are just as many dangers and ethical quandaries to consider.
In an open letter published today (August 21), a group of AI specialists and experts from 26 countries implored the United Nations (UN) to ban the development and use of autonomous weapons.
The signatories called for autonomous weapons systems to be added to the list of banned weapons under the UN’s Convention on Certain Conventional Weapons, which entered into force in 1983.
‘We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.’
– FUTURE OF LIFE INSTITUTE
Tesla’s Elon Musk was one of many experts to sign the letter, in a list that also included Mustafa Suleyman of Google DeepMind.
Musk has been vocal about his concerns when it comes to AI. On July 15 this year, he brought up the subject at the National Governors Association summer meeting in Rhode Island.
He said: “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because, I think, by the time we are reactive in AI regulation, it’s too late.
“Normally, the way regulations are set up is when a bunch of bad things happen, there’s a public outcry and, after many years, a regulatory agency is set up to regulate that industry. It takes forever.
“That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation.”
The dangers of an ‘AI arms race’
Today’s letter set out the reservations that tech experts have about these ‘killer robots’, and just how dangerous they could be if deployed.
“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.
“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
Experts have previously said that AI has reached a point where these autonomous weapons could feasibly be in use within just a few years, and there are worries that such weapons would lower the threshold for going to war.
This has been a growing talking point in the tech world as AI has progressed rapidly. A similar letter published by the Future of Life Institute on July 28, 2015 flagged the dangers of a potential “AI arms race”, and was signed by Musk, Stephen Hawking and Apple co-founder Steve Wozniak.
The UN group that will be examining how to regulate autonomous weapons is set to meet in November of this year.