© All rights reserved to Barnea Jaffa Lande Law offices


New EDPB Opinion on Personal Data Protection in AI

Several days ago, the European Data Protection Board (EDPB) adopted an opinion addressing key data protection concerns arising from the use of Artificial Intelligence (AI) models. The opinion specifically focuses on how GDPR principles apply to AI, emphasizing issues such as anonymization of data within AI models, the legal basis for processing personal data during AI development and deployment, and the consequences of unlawful data processing in AI systems.

From a practical point of view, the EDPB takes the position that processing carried out using an AI model may lawfully rely on legitimate interest as a legal basis.

Legitimate Interest as a Legal Basis

Relying on legitimate interest as a basis for processing allows companies greater flexibility and freedom. In particular, it enables companies to avoid seeking explicit user consent for processing and to rely instead on their own risk and necessity assessments. In effect, the EDPB is saying that AI does not inherently mandate consent, and that legitimate interest may be relied on as a legal basis.

The EDPB provides a detailed framework for assessing the legitimacy of relying on legitimate interest as a legal basis for data processing within the context of AI.

First, the interest pursued must comply with legal standards and demonstrate tangible benefits, such as improving AI-based fraud detection, enhancing conversational agents that assist users, or strengthening threat detection in an information system.

Second, the processing must be strictly necessary for achieving the stated purpose. When assessing the necessity of processing, controllers should also consider the reasonable expectations of data subjects as to the use of their personal data, which are influenced by the complexity and explainability of the AI system. In this context, controllers are encouraged to implement anonymizing and privacy preserving practices.

Finally, a balancing test should be completed to ensure that the rights and freedoms of data subjects are not overridden. Within this context, companies may consider how the processing benefits the data subject, as well as the possible risks to the rights to private life, data privacy, freedom of expression, non-discrimination, work engagement, mental health, and more.

To complete the analysis, mitigating measures should be considered. For AI applications, companies should consider anonymization; transparency measures; facilitating data subject rights (deletion, opt-out, additional requests to examine data memorization by the model, setting a “holding period” before training, and more); the context of data collection (e.g., whether the data was scraped from public sources); and specific additional strategies (such as excluding certain data sources or avoiding collection of certain categories of data).

Such an analysis paves the way for significant applications of AI in business, allowing companies freedom of action using AI applications. However, the responsibility to ensure user rights still lies with the company.

Anonymization of AI Models

In its opinion, the EDPB emphasizes that AI models trained on personal data are rarely inherently anonymous, calling for a case-by-case evaluation of the possibility of extraction or inference of personal data.

Some models will never be considered anonymous, such as voice modeling tools, or models used to provide information about specific individuals (e.g., in HR contexts). For other models, the possibility of anonymization varies based on their design, their use, the nature of the training data, their susceptibility to attack, and other factors. These considerations should be taken into account when evaluating the identification risk and reviewing the steps implemented by the data controller.

As to the models themselves, the EDPB notes that even if personal data was unlawfully used during training, this may not affect further use of the model, provided the model has been anonymized and the anonymization process is sufficiently documented.

Organizations deploying AI are advised to:

  1. Adopt privacy-preserving techniques such as differential privacy or pseudonymization during training to minimize risks of re-identification.
  2. Thoroughly test and document anonymization measures to demonstrate compliance with GDPR requirements.
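To make the first recommendation concrete, below is a minimal, illustrative sketch of pseudonymizing records before they enter a training pipeline, using keyed hashing (HMAC-SHA256) so that direct identifiers cannot be reversed without a separately stored secret key. The function name, field names, and record schema are illustrative assumptions, not part of the EDPB opinion; real deployments should be designed with legal and security review.

```python
# Illustrative sketch only: pseudonymize direct identifiers before training.
# Field names ("email", "name") and the schema are hypothetical examples.
import hashlib
import hmac
import secrets

def pseudonymize(record: dict, secret_key: bytes, id_fields=("email", "name")) -> dict:
    """Replace direct identifiers with keyed HMAC-SHA256 tokens.

    The secret key must be stored separately from the training data;
    without it, the tokens cannot be linked back to individuals.
    """
    out = dict(record)
    for field in id_fields:
        if field in out:
            token = hmac.new(secret_key, str(out[field]).encode(), hashlib.sha256)
            out[field] = token.hexdigest()[:16]  # truncated token as a stand-in ID
    return out

key = secrets.token_bytes(32)  # kept outside the training pipeline
record = {"email": "jane@example.com", "name": "Jane Doe", "purchases": 3}
safe = pseudonymize(record, key)
```

Note that under the GDPR, pseudonymized data is still personal data as long as the key exists; the technique reduces re-identification risk but does not by itself achieve anonymization.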

Practical Guidance for Businesses Utilizing AI

For businesses developing or deploying AI systems, we recommend that organizations:

  1. Identify the processing activities using AI.
  2. Assess the applicability of the GDPR to the AI activity and conduct a Data Protection Impact Assessment (DPIA) for the relevant activities.
  3. Conduct detailed assessments to verify the legality of data collection for those activities.
  4. Perform a detailed Legitimate Interest Assessment (LIA) for processing activities that rely on this legal basis.
  5. Ensure the AI systems facilitate the exercise of data subject rights.
  6. Keep relevant documentation to demonstrate compliance.

 

The Privacy, AI and Cybersecurity department in our office will be happy to help and accompany you in understanding the specific risks of implementing artificial intelligence tools, creating a compliance program in the field, and mapping your responsibilities under GDPR related to the activity.

________

Dr. Avishay Klein is the Head of the Privacy, Artificial Intelligence and Cybersecurity Department

Masha Yudashkin is a lawyer in the Privacy, Artificial Intelligence and Cybersecurity Department

 

Tags: AI | Personal information