What EU digital reforms mean for healthcare deep tech

26 Oct 2023

We speak to William Fry partner David Kirton about the EU Digital Reforms Package and its impact on various aspects of deep-tech companies in the healthcare and life science sectors.

If there’s one thing the European Union is known for, it is regulation. In tech especially, EU lawmakers have crafted some of the most comprehensive sets of rules and guidelines across a range of areas relevant to the consumer economy, from data protection and advertising to social media and, most recently, AI.

But what do these regulations mean for Europe’s burgeoning life sciences and healthcare industries? As new rules and regulations are created and updated every month, organisations need to keep track of how these rules affect their business and their industry at large.

Particularly for businesses incorporating emerging or advanced technologies that would qualify them as deep-tech enterprises, the EU’s package of digital reforms has significant relevance to the use of people’s data and its impact on privacy.

Data protection and privacy

According to David Kirton, a partner in the technology group at William Fry, EU regulations such as the Data Act, the Data Governance Act and the European Health Data Space Regulation have a “significant impact” on current data protection and privacy laws and are “highly relevant” to organisations operating in the life science and healthcare industries.

“For patients and other end-users of healthcare services, the Data Act will reinforce the GDPR’s right to data portability, making it easier for patients and other healthcare end-users to switch providers by facilitating the transfer of data gathered through smart objects and connected devices such as wearable tech,” he explains.

Similarly, Kirton argues, the Data Governance Act applies to both personal and non-personal data and imposes obligations on public sector bodies, including those operating in the life science and healthcare sector, to share non-personal data.

“Such organisations will need to establish processes to enable data-sharing, while ensuring that any personal data is anonymised for the purposes of GDPR,” he says.

“While imposing a burden on public sector bodies, this presents huge opportunities for private sector organisations and research organisations to have access to vast quantities of healthcare data for use in research, product development and the delivery of services.”
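To make that anonymisation step concrete, here is a minimal Python sketch of preparing a record for sharing. It is illustrative only, not anything prescribed by the legislation: the record fields and helper are hypothetical, and salted hashing is pseudonymisation rather than anonymisation in the strict GDPR sense, so a fuller re-identification risk assessment would still be required.

```python
import hashlib

# Hypothetical patient record; field names are illustrative only.
record = {
    "patient_id": "P-10234",
    "name": "Jane Doe",
    "date_of_birth": "1984-03-12",
    "diagnosis_code": "E11.9",
    "hba1c_percent": 7.2,
}

# Fields that directly identify a person and must not be shared.
DIRECT_IDENTIFIERS = {"name", "date_of_birth"}

def prepare_for_sharing(rec: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted hash.

    Caution: this is pseudonymisation, not anonymisation -- re-identification
    risk via quasi-identifiers (e.g. rare diagnoses) still needs assessing,
    for instance with k-anonymity or aggregation.
    """
    shared = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + rec["patient_id"]).encode()).hexdigest()[:16]
    shared["patient_id"] = token
    return shared

print(prepare_for_sharing(record, salt="rotate-this-per-release"))
```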

New e-privacy rules set by the EU will also complement existing data protection rules as they apply to electronic communications such as direct marketing.

“Businesses will need to ensure that their websites – and all ways in which they communicate electronically with their customers – are compliant with new e-privacy rules.”

High-risk AI systems

In June, EU lawmakers overwhelmingly voted in favour of the AI Act, a long-awaited batch of rules to prohibit dangerous technology and monitor the application of artificial intelligence.

The act will also bring in a uniform definition for AI that will be relatively neutral, so it can be applied to the tech as it evolves over time. “We must reconsider how we legislate and think in the face of unlimited access to artificial intelligence because a new age of scrutiny has begun,” EU Parliament president Roberta Metsola said at the time.

As in all sectors, this means more vigilance for those operating in life sciences and healthcare who use AI systems.

“At a basic level, organisations using high-risk AI systems will need to establish, implement, document and maintain a risk management system in relation to that AI system. In addition, so-called ‘training data’ used in the development of AI systems will also need to comply with stringent ‘quality’ criteria set out in the act,” Kirton explains.

“This means that training, validation and testing datasets should be sufficiently relevant, representative, free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used.”

Kirton gives the example of a healthcare organisation that uses an AI system to triage patients presenting with certain symptoms.

“[The organisation] would be under a legal obligation to ensure that the data used to train, validate and test that AI system are sufficiently representative of all population groups – for example, different ethnicities – in respect of which that AI system is used.”
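As a rough illustration of what checking that obligation could involve, the Python sketch below compares group proportions in a hypothetical triage training set against reference population shares and flags material gaps. The group labels, shares and tolerance are invented for the example; a real conformity assessment under the AI Act would be considerably more rigorous.

```python
from collections import Counter

def representativeness_gaps(train_groups, population_shares, tolerance=0.05):
    """Flag groups whose share of the training set deviates from the
    reference population share by more than `tolerance` (absolute)."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical group labels for a triage training set, plus census-style shares.
train_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
population_shares = {"A": 0.60, "B": 0.25, "C": 0.15}

print(representativeness_gaps(train_groups, population_shares))
# Flags all three groups here: A is over-represented, B and C under-represented.
```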

Transparency v confidentiality

Another EU regulation that will have an impact on deep-tech companies in life sciences and healthcare is the proposed European Health Data Space Regulation, which aims to empower EU patients to access their health data and support the use of said data for healthcare research, innovation and policymaking.

“The proposed regulation will give patients the right to access their health data immediately, free of charge and in an easily readable, consolidated and accessible form, and an electronic copy of the same in a specified format,” says Kirton.
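As a toy illustration of what such a consolidated electronic copy might look like, the Python sketch below assembles record fragments from different systems into a single machine-readable document. The fragments and field names are hypothetical, and plain JSON stands in for whatever exchange format the final regulation specifies.

```python
import json
from datetime import datetime, timezone

# Hypothetical fragments of one patient's data held by different systems.
gp_record = {"allergies": ["penicillin"], "medications": ["metformin 500mg"]}
lab_results = [{"test": "HbA1c", "value": 7.2, "unit": "%", "date": "2023-09-14"}]
wearable_data = [{"metric": "resting_heart_rate", "value": 61, "date": "2023-10-01"}]

def export_patient_copy(patient_ref: str) -> str:
    """Assemble a consolidated, machine-readable copy of a patient's data.

    Illustrative only: the proposal points towards common European exchange
    formats for electronic health records; JSON is used here for readability.
    """
    document = {
        "patient": patient_ref,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sections": {
            "general_practice": gp_record,
            "laboratory": lab_results,
            "connected_devices": wearable_data,
        },
    }
    return json.dumps(document, indent=2)

print(export_patient_copy("P-10234"))
```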

He explains that the regulation will apply to “data holders” – any entity or body in the healthcare sector which has the “right, obligation or ability” to make health data available in accordance with the framework of the regulation.

“It is therefore most obviously targeted at healthcare providers such as hospitals but is also very likely to include providers of many kinds of medical devices and digital health technologies, such as wearable technologies which gather data.”

Kirton also argues that there exists a “core tension” between the transparency and accountability promoted by the EU Digital Reforms Package, which includes the acts mentioned above, and the confidentiality and secrecy espoused by intellectual property rights.

“One of the most exciting use-cases for AI within the life sciences industry is the use of generative AI in pharmaceutical and medical device development,” he goes on.

“However, one of the transparency requirements imposed by the AI Act is that any provider which places on the market or puts into service an AI system (including for its own use, for example in pharmaceutical development) must make publicly available a sufficiently detailed summary of the use of training data protected by copyright law.

“This means that any users of such systems need to ‘open the black box’ and show the public – and their competitors – how such models were trained. In practice, how big a concern this is to deployers and users of these systems will depend on how granular and detailed that information needs to be.”

Vish Gain was a journalist with Silicon Republic
