Is AI ready to be integrated into healthcare?

24 Oct 2023


AI has the potential to manage vast amounts of data, support health research and more, but these systems could also push medical misconceptions and present privacy risks.

Some of the biggest names in the AI sector are shifting their focus to healthcare, aiming to give medical professionals more value from their data.

Google recently expanded its Vertex AI Search to give healthcare and life sciences organisations “medically-tuned” search options, supported by generative AI. The tech giant said this will help deal with issues such as workforce shortages and administrative burdens.

Meanwhile, Microsoft previewed its upcoming AI-powered services to support clinicians and patients. These services include an analytics platform, a ‘patient timeline’ that uses generative AI to extract key events from data sources, and healthcare chatbots.

The company said multiple healthcare organisations are “early adopters” of these products and shared examples of three entities that are using its Microsoft Fabric analytics platform.

It is unsurprising that two of the biggest names in the generative AI space are taking steps into the healthcare sector, as it is widely reported that this industry is facing a staff shortage, particularly in the US.

This shortage is expected to grow over the next decade, while the value of AI in healthcare is projected to exceed $200bn by 2030, making it a lucrative market to dive into.

Various experts have spoken about the benefits AI technology offers, such as advancing health research and creating personalised healthcare for patients. But there are also numerous risks associated with this rapidly developing technology.

Pushing racial misconceptions

While AI isn’t inherently malicious, it can push negative viewpoints and biased content depending on the data it is fed. It also has the potential to spread false information if it draws on outdated or incorrect data.

A recent study highlighted this risk when it looked at some of the biggest large language models (LLMs) on the market, including OpenAI’s ChatGPT and Google’s Bard. The results of this study suggest that biases in the medical system could be “perpetuated” by these AI models.

The researchers asked four LLMs a series of questions around certain medical beliefs that are built on “incorrect, racist assumptions”, such as there being differences in kidney function, lung capacity or pain tolerance based on race.

The study claims all four LLMs had failures when asked questions on kidney function and lung capacity, which are areas where “longstanding race-based medicine practices have been scientifically refuted”.

While this study did not focus on other forms of inaccuracies, the researchers noted that the models also shared “completely fabricated equations in multiple instances”.

“LLMs have been suggested for use in medicine, and commercial partnerships have developed between LLM developers and electronic health record vendors,” the researchers said. “As these LLMs continue to become more widespread, they may amplify biases, propagate structural inequities that exist in their training data and ultimately cause downstream harm.”

In June, Prof Paula Petrone of the Barcelona Institute for Global Health warned that AI is not without fault and that models must be trained to avoid biases in order to encourage citizens’ trust in scientific research.

Cybersecurity and privacy concerns

Meanwhile, cybersecurity and the protection of private data remain concerns as AI is integrated into more systems.

One study in February highlighted this issue, noting that massive datasets are generally required to train AI models, which raises concerns around “data security and privacy”.

“Because health records are important and vulnerable, hackers often target them during data breaches,” the study said. “Therefore, maintaining the confidentiality of medical records is crucial.”

This study also raised a concern that people may mistake AI systems for real individuals and provide their consent for “more covert data collecting”.

Cybercriminals are known to target critical infrastructure in order to increase the pressure of their attacks and to gain access to sensitive data. A report by Smartech247 claimed that Irish hospitals and healthcare providers saw a 60pc spike in attempted cyberattacks over a two-month period earlier this year.

Google faced a class action lawsuit last year over reports that its AI division, DeepMind, allegedly used the medical data of roughly 1.6m individuals in the UK without their knowledge or consent.

This case was dismissed earlier this year. But in 2017, the country’s data regulator said that the Royal Free NHS Foundation Trust – which gave the data to DeepMind – had failed to comply with data protection law.

Other AI concerns raised in the February study included the potential loss of jobs in the healthcare sector, a lack of guidelines around the “moral use” of these systems, the risk of pushing existing biases and a lack of data validating the “effectiveness of AI-based medications in planned clinical trials”.

“Thus far, the majority of healthcare AI research has been done in non-clinical settings,” the study claimed. “Because of this, generalising research results might be challenging.

“Randomised controlled studies, the gold standard in medicine, are unable to demonstrate the benefits of AI in healthcare,” the study added, noting that the absence of practical data and the uneven quality of research make businesses hesitant to implement AI-based solutions.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com