Industry leaders in AI research have vowed not to play a role in the creation of autonomous weapons.
Artificial intelligence (AI) is changing the world as we know it, with innovations in the field arriving at a rapid pace.
While the technology certainly has a wealth of positive applications, many people are worried about its use in military systems.
Now, a coalition of concerned organisations and individuals has emerged, including some of the world’s leading AI researchers and companies involved in its development.
Concerns over AI weapons
Many prominent tech figures have signed a new pledge promising not to take part in the development of lethal autonomous weapons (LAWs). The signatories include the three co-founders of Google’s DeepMind (Shane Legg, Mustafa Suleyman and Demis Hassabis), SpaceX founder Elon Musk and Skype co-founder Jaan Tallinn, as well as respected AI researchers Yoshua Bengio and Stuart Russell.
The pledge was organised by the Future of Life Institute and published at the International Joint Conference on Artificial Intelligence in Stockholm.
While the Future of Life Institute has previously helped publish letters from the same companies and researchers calling on the UN to consider regulating LAWs, this pledge marks the first time the signatories have individually promised not to be involved in their development.
Signatory Anthony Aguirre, a theoretical cosmologist and professor at the University of California, Santa Cruz, said: “We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody.”
Government regulation needed
The pledge itself calls on governments to agree on laws and regulations that would stigmatise the development of so-called ‘killer robots’. As no such framework currently exists, the signatories pledged to “neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons”.
More than 150 companies involved in AI development were among the 2,400 signatories of the pledge. It said: “There is a moral component to this position: that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.
“There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilising for every country and individual.”
26 UN member states call for a ban
Separately from this pledge, 26 UN member states, including China, Cuba, Austria and Brazil, have called for a ban on autonomous weapons systems.
Collective action on AI ethics has been building for some time, the prime example being the unrest at Google over its involvement in Project Maven, a US military project that used AI to analyse drone footage with the aim of improving targeting accuracy.
While the pledge from these companies and researchers stops short of an outright ban (it does not cover military AI tools with non-lethal aims), it is certainly a step in the right direction. Now it just needs to gain enough momentum.
A US military Reaper drone. Image: BlueBarronPhoto/Shutterstock