Israeli Privacy Authority Targets AI: Business Implications Ahead
The Israeli Privacy Protection Authority (PPA) recently published a draft directive clarifying its position on how businesses and organizations should apply the provisions of the Privacy Protection Law and the privacy regulations to artificial intelligence (AI) systems.
The directive covers the entire life cycle of AI systems, from the training of AI models to their deployment in the business. It presents the PPA’s interpretation of the Law and serves as a basis for the PPA to exercise its powers, including the imposition of the significant sanctions permitted under Amendment 13 to the Law, which is expected to come into effect in August 2025.
In this article, we review the highlights of the directive in question-and-answer form and, at the end, provide a list of practical measures that businesses should take.
Does the Privacy Protection Law apply to artificial intelligence systems?
Yes. According to the PPA, the provisions of the Law fully apply to AI systems used to process personal data, both during model training and during use. The PPA’s position is that data an AI system generates about a person (such as estimates, predictions or categorizations) is also considered personal data pursuant to the Law.
How can personal data be processed legally using AI systems?
In most instances, in order to process personal data using AI systems, the data subjects’ informed consent must be obtained.
Data subjects must be provided with detailed information about the processing and must give their informed consent. Such information should include the purposes of the processing (such as training the AI model), the uses of the personal data, the additional types of data collected and their sources, and the risks of processing using AI. The PPA also requires that data subjects be informed when they are interacting with an AI system.
Is active and separate consent required?
In many instances, yes. When the purposes of the processing are complex and deviate from data subjects’ reasonable expectations, or if they pose a high risk to data subjects’ rights, the PPA believes that active and separate consent is required.
What do organizations that want to implement “responsible AI” need to do?
The PPA emphasizes the importance of accountability when developing and using AI. Inter alia, following its previous directive, the PPA states that boards of directors and management must oversee compliance, set clear organizational policies and, where required, appoint a Data Protection Officer (DPO). The PPA also recommends that organizations perform a Data Protection Impact Assessment (DPIA) before deploying AI systems, especially systems that process data at a large scale, include sensitive data, or pose a high risk.
How can organizations ensure data accuracy and protect data subjects’ rights?
Data subjects are entitled to access their personal data and to request the correction of incorrect, incomplete, unclear or outdated personal data. The PPA’s position is that this also applies to the output of AI systems and that, in certain circumstances, correction may extend to the underlying algorithm itself in order to prevent the continued generation of erroneous data. In our view, this position is far-reaching and can be expected to arouse controversy.
Are there any special data security requirements for AI systems?
Yes. The PPA states that AI gives rise to unique data security risks, such as attempts to extract training data from the model. In light of this, businesses should adopt security controls tailored to the specific AI model and to the anticipated level of risk.
What are the requirements in relation to employees who use external AI tools?
The PPA recommends that organizations set clear policies for the use of AI tools, including an assessment of the security and privacy risks, a definition of which employees are authorized to use the tools, the permitted purposes of use, the data that may be used, and so forth. The PPA also recommends that organizations examine the terms of use and the privacy policy of the external AI tool provider to ascertain whether personal data may be used to train the AI model, and that they provide relevant training to their employees.
Is scraping personal data from the web (e.g. from social-media profiles) to train an AI model permitted?
The PPA’s position is that, as a rule, this is prohibited without the data subjects’ explicit informed consent. Such consent may be inferred only if the platform’s terms of use do not restrict the use of the data to a specific purpose and provided that the data subjects have not restricted access to their data. The PPA adds that, even in these instances, data subjects’ consent cannot be inferred for complex uses of their data or for uses that pose a high risk to data subjects’ rights (such as training a facial recognition system).
Do website operators (such as social networks) have a responsibility to prevent data scraping?
Yes. The PPA’s position is that owners of databases that allow personal data sharing on the Internet are obligated to take reasonable measures to prevent prohibited data scraping from their platforms. The PPA also states that prohibited data scraping may be considered a data security incident that requires reporting.
Implications and recommendations
The draft directive is a significant step towards regulating the use of AI in Israel. The PPA intends to enforce the directive’s key provisions, and enforcement will become more stringent in light of Amendment 13 to the Law, including the potential imposition of heavy sanctions.
However, the draft directive also includes dramatic and sometimes even far-reaching requirements, such as the requirements regarding data scraping and the correction of algorithms, which can be expected to arouse objections from businesses.
Although the directive is still a draft, because the PPA has clearly spelled out its interpretation of the existing law, we recommend that companies developing or using artificial intelligence systems in Israel already take measures to implement the requirements specified in the directive, including:
- Map your AI landscape – Identify the AI tools you have in use, noting which ones handle personal data.
- Build governance & run DPIAs – Establish an internal AI-governance framework and conduct privacy impact assessments where warranted.
- Draft policies and train staff – Issue clear procedures for developing and using AI and roll out tailored employee training.
- Notice and consent mechanisms – Implement transparent, detailed notices and consent flows wherever personal data powers an AI system.
- Refresh legal documents – Update terms of use, privacy policies and contracts to meet the new disclosure and consent standards.
- Vendor onboarding – Create a procurement workflow that screens AI vendors for privacy and security risks before engagement.
- Facilitate data-subject rights – Ensure individuals can easily exercise their rights in relation to every AI system you deploy.
***
The firm’s privacy protection, cybersecurity and artificial intelligence department is at your service to provide legal assistance in the development and implementation of AI systems, the design of related internal governance frameworks, and their adaptation to the requirements of regulators in Israel and abroad.
Dr. Avishay Klein is a partner and heads the firm’s privacy protection, cybersecurity and artificial intelligence department.
Adv. Masha Yudashkin is an associate in the firm’s privacy protection, cybersecurity and artificial intelligence department.