The Irish DPC welcomed the new guidelines, which were issued after its request in September.
The European Data Protection Board (EDPB) has introduced guidelines around the use of personal data for the development and deployment of artificial intelligence (AI) models.
The guidelines – requested by the Irish Data Protection Commission (DPC) in September – set out how and in what situations AI models can be considered anonymous, how the “legitimate interest” argument applies when developing or using AI models, and the consequences for an AI model developed using personal data that was processed unlawfully.
The measures, which also consider the use of first- and third-party data, mandate that the anonymity of AI models should be assessed on a case-by-case basis by national data protection watchdogs.
The guidelines detail that an AI model can be considered anonymous only if it is “very unlikely” to directly or indirectly identify the individuals whose data was used in the model, or to allow their personal data to be extracted through search queries made to the model.
Moreover, the EDPB said that these guidelines provide “general considerations” that national watchdogs should take into account when making decisions relating to the legitimate interest of AI models processing personal data.
The EDPB guidelines set out a number of criteria, including whether the personal data in question was publicly available, the nature of the service the model provides and the source from which the personal data was collected.
“As the lead supervisory authority of many of the world’s largest tech companies, we have a deep awareness and understanding of the complexities associated with regulating the processing of personal data in an AI context,” said DPC chairperson Des Hogan.
“In having made this request for an opinion, the DPC triggered a discussion, in which we participated, that led to this agreement at EDPB level, on some of the core issues that arise in the context of processing personal data for the development and deployment of AI models, thereby bringing some much needed clarity to this complex area.”
Meanwhile, the watchdog’s commissioner Dale Sunderland said that the guidelines will enable “proactive, effective and consistent regulation across the EU/EEA, giving greater clarity and guidance to industry, while promoting responsible innovation.”
Commenting on the new guidelines, EDPB’s chair Anu Talus said: “AI technologies may bring many opportunities and benefits to different industries and areas of life. We need to ensure these innovations are done ethically, safely, and in a way that benefits everyone.
“The EDPB wants to support responsible AI innovation by ensuring personal data are protected and in full respect of the GDPR”.
The Irish DPC has been at the forefront of the battle for data privacy. Earlier this year, the Commission opened an investigation into Google to determine whether it complied with EU data laws when developing its PaLM2 AI model, while it also opened an inquiry into Ryanair, looking into how the airline processes personal data – including potentially biometric data.
Also this year, the DPC concluded a number of cases against Big Tech companies. These included an investigation into X, which was “struck out” after the company agreed to permanently suspend its processing of the personal data of its EU and EEA users, and a €310m fine imposed on LinkedIn after the DPC found that the company’s data processing practices infringed multiple articles of the GDPR.