Dr Marie Postma spoke to SiliconRepublic.com about misconceptions around AI, as well as its relationship with human consciousness.
AI and robots are getting smarter all the time. From Irish-made care robot Stevie to Spot the robot dog from Boston Dynamics, these helpers are popping up everywhere with a wide range of uses.
The software behind the hardware is getting smarter too. Earlier this year, researchers at MIT developed a simpler way to teach robots new skills after only a few physical demonstrations. And just this week, Google revealed how it’s combining large language models with its Everyday Robots to help them better understand humans.
However, advances in these areas have fuelled recent discussions around the idea of sentient AI. While this idea has been largely rebuffed by the AI community, understanding the relationship between cognitive science and AI remains important.
Dr Marie Postma is head of the department of cognitive science and artificial intelligence at Tilburg University’s School of Humanities and Digital Sciences in the Netherlands. The department is mainly financed by three education programmes, with around 100 staff and between 900 and 1,000 students.
‘Technology is not the problem; people are the problem’
– MARIE POSTMA
The team focuses on different research themes that combine cognitive science and AI, such as computational linguistics with a big focus on deep learning solutions, autonomous agents and robotics, and human-AI interaction, which is mainly focused on VR and its use in education.
Postma was a speaker at the latest edition of the Schools for Female Leadership in the Digital Age, run by Huawei’s European Leadership Academy. At the event in Prague, she spoke to students about cognitive science and machine learning, starting with the history of AI and bringing it up to modern-day challenges such as how we can model trust in robots and the role empathy could play in AI.
“We have research where we are designing first-person games where people can experience the world from the perspective of an animal – not a very cuddly animal, it’s actually a beaver,” she told me later that day. That choice was deliberate, to see whether AI could help users feel empathy for the animal.
Sentient AI
Postma’s talk brought about a lot of discussion around AI and consciousness, a timely topic following the news that Google engineer Blake Lemoine claimed that an AI chatbot had become sentient.
She said much of the media coverage around this story had muddied the waters. “The way it was described in the media was more focused on the Turing test – interacting with an AI system that comes across as being human-like,” she explained.
“But then at some point they mention consciousness, and consciousness is really a different story.”
Postma said that most people who research consciousness would agree that it rests on a number of factors. Firstly, it’s about having a perceptual basis: the ability to perceive both the world around us and what’s happening inside us, and to be self-aware.
Secondly, the purpose of consciousness is being able to interpret yourself as someone who has feelings, needs, agency in the world and a need to stay alive. “AI systems are not worried about staying alive, at least the way we construct them now. They don’t reflect on their battery life and think, ‘Oh no, I should go plug myself in.’”
Possibilities and limitations
While AI and robots don’t have consciousness, they can be programmed to understand humans in ways that are highly beneficial.
For example, Postma’s department has been conducting research that concerns brain-computer interaction, with a focus on motor imagery. “[This is] trying to create systems where the user, by focusing on their brain signal, can move objects in virtual reality or on computer screens using [electroencephalography].”
This has a lot of potential applications for people with paralysis or in the advancements of prosthetic limbs. Last year, researchers at Stanford University successfully implanted a brain-computer interface (BCI) capable of interpreting thoughts of handwriting in a 65-year-old man paralysed below the neck.
However, Postma said there is still a long way to go with this technology and it’s not just about the AI itself. “The issue with that is there are users who are able to [use BCI] and others who are not, and we don’t really know what the reasons are,” she said.
“There is some research that suggests that being able to do spatial rotation might be one of the factors, but what we’re trying to discover is how we can actually train users so that they can use BCI.”
And in the interest of quelling any lingering fears around sentient AI, she also said people should not worry about this kind of technology being able to read their thoughts because the BCI is very rudimentary. “For the motor imagery BCI, it’s typically about directions, you know, right, left, etc.”
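To give a sense of just how coarse that decoding is, here is a deliberately simplified, hypothetical sketch of a motor-imagery classifier. It is not the Tilburg lab’s pipeline: the data is synthetic, and the sampling rate, channel layout and signal strengths are invented for illustration.

```python
# Hypothetical sketch of a motor-imagery BCI decoder. Not Postma's lab
# pipeline: the EEG data is synthetic and all parameters are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
FS = 250  # assumed EEG sampling rate in Hz

def mu_band_power(trial):
    """Mean power in the 8-12Hz mu band for each channel of one trial."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS, axis=-1)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[:, band].mean(axis=-1)

def make_trial(label):
    """Fake a 2-channel, 2-second EEG trial (channels roughly C3/C4).

    Imagining a hand movement suppresses the mu rhythm over the opposite
    motor cortex, so we plant a weaker 10Hz wave on that channel.
    """
    trial = rng.normal(size=(2, 2 * FS))
    t = np.arange(2 * FS) / FS
    mu = np.sin(2 * np.pi * 10 * t)
    weak, strong = 1 - label, label  # label: 0 = left, 1 = right
    trial[weak] += 0.1 * mu
    trial[strong] += 0.8 * mu
    return trial

labels = rng.integers(0, 2, size=200)
X = np.array([mu_band_power(make_trial(y)) for y in labels])

# A linear classifier maps the two band-power features to left/right.
clf = LinearDiscriminantAnalysis().fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```

Even in a real system with many more channels and careful artefact rejection, the decoded output is essentially a binary left/right contrast derived from band power, which is why Postma can be so confident that this is nowhere near reading thoughts.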
Other misconceptions about AI
Aside from exactly how smart the robots around us really are, one of the biggest misconceptions Postma wants to correct is the idea that the technology itself causes the problems that surround it.
“What I repeat everywhere I go is that the technology is not the problem, people are the problem. They’re the ones who create the technology solutions and use them in a certain way and who regulate them or don’t regulate them in a certain way,” she said.
“The bias in some AI solutions is not there because the algorithms themselves are biased; they’re biased because the data that’s used to create the solutions is biased, so there is human bias going in.”
While bias in AI has been a major discussion topic for several years, Postma takes an optimistic view, saying these biased systems are actually helping to uncover biases in data that would previously have stayed hidden inside opaque human decision-making.
“It becomes explicit because all the rules are there, all the predictive features are there, even for deep learning architecture, we have techniques to simplify them and to uncover where the decision is made.”
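A small, hypothetical sketch makes the point concrete. The data and feature names below are entirely invented; the idea is only to show how bias travels from historical decisions into a model, and how inspectable weights make it explicit.

```python
# Hypothetical illustration of bias travelling from data into a model,
# and of how inspectable rules make it explicit. All data is synthetic
# and the feature names are invented for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(size=n)          # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Historical hiring decisions carry a human bias against group 1,
# layered on top of the legitimate skill signal.
hired = skill + 1.5 * (group == 0) + rng.normal(scale=0.5, size=n) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The learned weights make the inherited bias explicit: a large
# negative coefficient on `group` that has nothing to do with skill.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```

The model has simply learned the bias that was always present in the historical decisions, but here it surfaces as an explicit predictive feature that can be spotted and challenged, which is the upside Postma describes.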
While Postma is a major advocate for all the good AI can do, she is also concerned about how certain AI systems and data are used, particularly their potential to influence human decision-making in politics.
“What Cambridge Analytica did – just because you can, doesn’t mean you should. And I don’t think they’re the only company that are doing that,” she said.
“I’m [also] concerned about algorithms that make things addictive, whether it’s social media or gaming, that really try to satisfy the user. I’m concerned about what it’s doing to kids.”