Is the EU doing enough to protect health data from AI-powered Big Tech?


15 Oct 2024

Image: © Toowongsa/Stock.adobe.com

While AI opens up technical solutions in healthcare, giving Big Tech unregulated access to health data is a big mistake, argues Dr Nicole Gross.

Generative artificial intelligence (AI) in healthcare is an ethical minefield. In a recent report, the World Health Organization (WHO) made it very clear that generative AI holds a lot of promise for applications in healthcare, including better diagnosis and clinical care, patient-centred applications, clerical and administrative tasks, medical and nursing education, scientific and medical research, and drug development. However, the same report warned that generative AI systems raise serious ethical concerns, which pose risks to both healthcare systems and society.

Some of these ethical concerns relate to just how much Big Tech has been encroaching on healthcare to unlock this $11tn market opportunity.

The problem here is that companies such as Amazon, Alphabet, Apple, Meta and Microsoft have spent the past two decades staking a claim on personal, behavioural and biometric data, often in pervasive, ubiquitous and mundane ways.

Big Tech firms have built select ‘data enclaves’ to ensure that markets for ‘future behaviour’ keep growing and that virtually any type of data can be turned into a valuable asset.

Given how attractive healthcare is to Big Tech, we should be very worried about the launch and scale of generative AI into this field.

Health is a fundamental human right as well as a public good, and thus it needs to be protected.

However, many of the generative AI tools and applications found in healthcare have been built around Big Tech’s foundational AI models, associated services and/or data ecosystems. Not only does this solidify Big Tech’s powerful market position by integrating its technologies and data ecosystems into public and private healthcare, but it also helps these companies collect even more data as they move from surveillance that ‘curates’ to surveillance capitalism that ‘creates’ new information, knowledge and data. And the truth is that no user is ever safe from being manipulated by data.

As we now live in a society where Big Tech has gained a lot of market power, and digital surveillance and data capitalism have become pervasive features of life, social justice concerns have become increasingly pressing. Data justice is social justice in an increasingly digitalised, datafied and AI-ified world, and the issues at stake include fairness, access, beneficence, democracy, solidarity, inclusion and harms to society.

And when it comes to healthcare, AI poses many risks, harms and inequities related to age, gender, sexual orientation, cultural identity, racialised characteristics, literacy, disability and health status. Yet the people and communities affected remain powerless and without choice against the strategies, technologies and companies that create those social injustices – rarely do they even know who creates these injustices from ‘above’, how it is done or where this ‘above’ even is.

How do the regulations measure up?

Through a series of ‘landmark’ regulations and measures, namely the EU AI Act, the Digital Services Act and the European Health Data Space, the EU has tried to intervene and get the balance right.

On the one hand, the EU wants to get the best value out of digital health and AI, and let (more) data flow across Europe for the purposes of research, innovation and business. On the other hand, it has implemented certain bans and restrictions on AI systems to protect its citizens from the risks and dangers of AI – mostly by prohibiting unacceptable uses and imposing restrictions on high-risk categories.

However, the Act left far too many loopholes for healthcare. What is more, Big Tech is already leveraging its connections to secure a seat at the table and ‘help to draft’ the very codes of practice that will govern general-purpose AI models going forward. Co-creation will thus set the new EU AI Office up for a false start from the get-go.

Then there is the Digital Services Act, which gives EU citizens better protection of their fundamental rights, particularly when it comes to control and choice. Though the Act could work to restore democracy and save Europe from surveillance capitalism, no one really knows what impact it will have in reality.

Advocacy groups including Amnesty International and the Center for Democracy & Technology have already called out the fundamental flaws in these new regulations, stating that these steps will not stop surveillance or data capitalism, nor will they safeguard data justice in all circumstances, situations and contexts.

And if the history of GDPR violations in the EU is anything to go by – tech companies deploying deceptive designs to get around the legitimate-interest problem, or the fact that only 2,086 fines had been issued as of 1 March 2024 (amounting to a total of €4.48bn), fines that are easily paid off – Big Tech arguably has little to worry about when it comes to pursuing AI-backed surveillance capitalism with speed and voracity.

As the EU’s new regulations come into full effect over the coming years, a wave of data justice issues will thus inevitably emerge. No field or market, including healthcare, is ever static, however. The misfires created in this process will, at the very least, give rise to the need for new regulations or amendments to existing ones.

To protect the health data of 450 million citizens and restore health as a human right and a public good, European regulators must punch harder and faster and move precious health data out of the powerful clutches and away from the capitalistic interests of Big Tech.

While the WHO is right – generative AI holds a lot of promise for healthcare – this ethical minefield needs to be navigated with more political will and less input from Big Tech.

By Dr Nicole Gross

Dr Nicole Gross is an associate professor at the National College of Ireland. Her research interests include market shaping in healthcare markets, practice-research and market innovation to build more moral markets. She is an active member of various research groups and is involved in advocacy through Health Action International, and her recent work on generative AI in healthcare and data justice has received funding from the Irish Research Council.
