Why ‘sentient AI’ claims are actually very damaging


20 Jun 2022


Stoking fears of artificial intelligence only hampers innovation, and the LaMDA ‘child’ hasn’t helped, writes Prof Noel O’Connor.

Google’s LaMDA system and its conversation with engineer Blake Lemoine got everyone talking, from tabloids to daytime TV, about the possibility of AI becoming sentient and threatening our way of life. The subject of AI rarely makes it off the science and business pages, but when it does, it often comes with the theme from 2001: A Space Odyssey or a picture of the Terminator. This development should indeed alarm the tech community, but not because LaMDA has feelings.

We need to worry because stories like these damage public trust in AI applications that have the potential to transform how we deliver everything from financial services to healthcare. The intellectual property rights to a variety of potentially groundbreaking AI systems are increasingly being retained in Ireland, as Irish companies grow bolder about developing bespoke AI solutions in tandem with researchers based here. A significant threat to this progress is the perception that these systems cannot be trusted, that they are designed to replace humans entirely, or both.

The reality is that AI is a human creation, and the words that came from the ‘mouth’ of LaMDA were drawn from human-generated text online, in a search directed by Lemoine’s questions. The system ‘felt’ nothing.

AI is only as good as the humans driving it. Much of it promises enormous benefits to humanity.

Take Irish spin-out Digital Gait Labs, for example. Its AI can ‘look’ at the way we walk and predict how likely we are to lose balance. An avoidable fall can often spell the end of independent living for otherwise healthy older adults, and this intelligence gives clinicians the tools to support patients in avoiding life-limiting accidents.

Elderly people who have fallen at home and are admitted to the emergency department are often categorised as ‘frail’. Research tells us that such patients will have more hospital admissions, longer stays and higher use of healthcare resources. Digital Gait Labs has developed a gait analysis and assessment tool that can be used as a measure of frailty.

The analysis is powered by state-of-the-art AI that can assess a person’s gait using a mobile phone app and cloud-based data processing. Digital Gait Labs’ clinically validated technology means that gait analysis can be performed more easily and more efficiently by a clinician in a less intrusive manner for the patient, compared to existing systems.

What’s important here is the phrase ‘by a clinician’, not ‘instead of a clinician’. The system doesn’t ‘know’ anything. It simply helps the human to know.

Digital Gait Labs’ AI, like many AI technologies, was developed to help humans make better decisions – decisions that are more accurate, more efficient and less intrusive. In this case, the company has done so with the participation and engagement of clinicians, patients and hospitals from the start.

We need the trust of users to make these technologies fly. We shouldn’t be concerned about machines becoming sentient, as is claimed of LaMDA. But we should be concerned that such technologies are developed in ethically sound ways that involve not just the technologists with the means to build them, but input and constant engagement from the citizens and non-tech professionals who will use and ultimately benefit from the technology.

The academic research community in particular understands this and is increasingly putting this ethos into practice via publicly engaged research programmes. Digital Gait Labs is just one example of best practice in this regard.

Beyond the research and innovation community, we also need to have complex and sometimes difficult conversations within our society, with a view to building awareness and understanding of this multifaceted issue, informing legislative frameworks and, most importantly, bringing the public along with the technology to the betterment of all. A fundamental right of humans in relation to AI is that they are told when they are dealing with it: no trickery, deepfake images flagged, and conversations with AI clearly signposted.

Renegade hard drives and digital Frankensteins have no place in this conversation. As we all know, it was Shelley’s doctor, and not her monster, who was responsible.

By Prof Noel O’Connor

Noel O’Connor is the CEO of Insight, the Science Foundation Ireland research centre for data analytics, and a professor in the School of Electronic Engineering at Dublin City University.
