One of DARPA’s leading AI figures was in Dublin to talk about solving the ‘black box’ problem and how AI technology works in the US military.
Whenever the topic of artificial intelligence (AI) enters mainstream conversation, the talking point of ‘killer AI’ and ‘killer robots’ isn’t far behind. While films such as The Terminator obviously have a large part to play in that, there are genuine fears that an AI with the right access to weaponry could suddenly decide that humanity needs to be eliminated.
The idea that AI makes decisions beyond the understanding of its human masters is not new, and computer scientists commonly refer to it as the ‘black box’ problem. In the early days of AI research, the approach was for humans to specify the rules and ask the machine to follow them.
When that approach failed to take off – it proved too complex and time-consuming – machine learning, and later deep learning, became the name of the game. Instead of humans determining the AI’s logic, vast amounts of data are fed into the system and it works out its own course of action. But how do we know what led it to a particular decision?
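To make that contrast concrete, here is a toy sketch in Python – purely illustrative, and not drawn from any DARPA system – comparing a hand-written rule with a model that learns its own decision logic from a handful of made-up examples. The spam-filter framing, the scikit-learn pipeline and the training data are all assumptions for the sake of the example.

```python
# 1) Early, rule-based AI: a human writes the decision logic explicitly,
#    so the reasoning is visible but laborious to author and maintain.
def is_spam_by_rules(subject: str) -> bool:
    banned = {"free", "winner", "prize"}
    return any(word in subject.lower() for word in banned)

# 2) Machine learning: the model infers its own decision logic from examples,
#    which is why its reasoning can be hard to inspect afterwards.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

subjects = ["free prize inside", "meeting at noon", "you are a winner", "quarterly report"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (made-up training data)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(subjects, labels)

print(is_spam_by_rules("claim your free prize"))  # rule fires: True
print(model.predict(["claim your free prize"]))   # decision comes from learned weights
```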
What’s in the box?
For David Gunning, programme manager at the Defense Advanced Research Projects Agency (DARPA) in the US, this is one of his biggest concerns. Gunning, who spoke last month at a meeting of the Institute of International and European Affairs (IIEA), was one of the early developers of what we know as AI today, having moved from cognitive psychology to computer science via the US Air Force more than 30 years ago.
“Some of the earliest people in AI were psychologists as well as mathematicians, trying to model a computer system that solves problems the way a person does,” Gunning said in conversation with Siliconrepublic.com.
Where the ‘black box’ issue really becomes problematic is in the much-discussed area of ethics in AI. If a facial recognition AI designed to predict future crime is proven to be biased against people of colour, who is at fault? Has the AI developed biases on its own, or did its creator – consciously or subconsciously – program them in?
At DARPA, Gunning said that he and 12 different research groups are in the middle of a four-year programme to modify machine-learning technology so that it surfaces features that can be explained to its creators.
“If you happen to, say, feed it an image and say it’s a group of goldfish and you ask why, it will highlight exactly the pixels in the image that made it goldfish. You can retrofit that on any system,” he explained.
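What Gunning describes maps onto gradient-based saliency techniques. As a rough sketch only – an assumed stand-in rather than DARPA’s actual XAI tooling – the PyTorch example below takes the gradient of the top class score with respect to the input pixels; the pretrained ResNet model and the random placeholder image are assumptions for illustration.

```python
import torch
import torchvision.models as models

# ImageNet-pretrained classifier, used purely as an illustrative model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder 224x224 RGB input; a real use would load and normalise a photo.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)                    # class scores for the image
top_class = scores.argmax(dim=1).item()  # the most likely class, e.g. "goldfish" for a goldfish photo
scores[0, top_class].backward()          # gradient of that score with respect to every pixel

# Pixels with the largest gradient magnitude are the ones that most influenced
# the prediction: exactly the pixels an explanation overlay would highlight.
saliency = image.grad.abs().max(dim=1)[0]  # shape: (1, 224, 224)
```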
Your new PAL constantly listens
With more than 30 years of experience in the development of AI, Gunning has seen it all when it comes to the technology. In fact, he was among researchers working on the Personalised Assistant that Learns (PAL) project that started in 2003. The plan was to build a “really ambitious” AI personal assistant that could digitally do everything a person would need, from responding to emails to planning meetings.
One group involved in this project will probably be familiar to any iPhone user – or anyone with a passing interest in technology, for that matter: SRI International. “They built this app called Siri and it was just a little sliver of the research we were doing, as is often the case with DARPA,” Gunning said. “They built a streamlined version of this assistant, put it in the App Store, and three days later Steve Jobs rings them up and says: ‘I’d like you to come over for dinner.’”
However, data privacy advocates may recoil a little in horror when realising what PAL could have been. “One of the projects would have you install it on your laptop – as it was before iPhones – and it would read all of your email, look at all your meetings … every person who you met with, it would look that person up on the internet and download all documents about that person,” Gunning said.
Despite this, elements of the programme were integrated into the US Army’s Command Post of the Future, which integrates data from different communication feeds into a single display.
‘We don’t want to be left behind’
Within the AI community, increased cooperation between Silicon Valley giants and the US military has been something of a sensitive subject. The most obvious recent example was the US Department of Defense (DoD) partnering with Google on Project Maven, which aimed to use AI to select and prioritise enemy targets.
In that instance, about a dozen staff members announced their intention to leave the company over the project, prompting Google to say it would draw up ethical guidelines for future collaboration with the US military.
With an extensive history in and around the military, Gunning doesn’t exactly share their sentiment. “Intellectually I understand the point of view, but I don’t agree with it. I’ve worked enough with the US military to know that they’re the good guys and they need to be helped,” he said.
“We don’t want there to be a military arms race on creating the most vicious AI system around … But, to some extent, I’m not sure how you avoid it. Like any technology arms race, as soon as our enemies use it, we don’t want to be left behind. We certainly don’t want to be surprised.”
He does add that the DoD’s actual policy is not to build any fully autonomous lethal weapons system; any AI involved in such a call would have to have a human in the loop making the final decision.
Whatever your opinion, with the US military steadfast in its determination to remain the world’s foremost superpower – especially when it comes to the latest weapons technology – you would certainly expect overcoming the ‘black box’ problem and other AI hurdles to be a major priority for the developers involved.