2025 Year-End Review: Privacy Protection and Artificial Intelligence
Summary
- Privacy Law: Amendment 13 and Heightened Standards – Amendment 13 to the Israeli Privacy Protection Law (August 2025) expanded organizational obligations and enforcement powers, including enhanced notice requirements, mandatory DPO appointments, and active board oversight of data processing and security. The Privacy Protection Authority has started market-wide enforcement.
- Artificial Intelligence: Enforcement, IP, and Legal Liability – In 2025, local and international case law clarified AI legal boundaries, covering training on copyrighted works, marketing claims, data privacy, and scraping. Legal liability rests with AI companies and their management, particularly for commercial or high-risk uses.
- AI in the Financial Sector – Israel’s final recommendations for financial AI emphasize responsible innovation, transparency, risk management, data controls, and fraud prevention, with focus on GenAI, Deepfake technologies, and chatbots.
- Outlook for 2026 and Practical Recommendations – Organizations must fully implement Amendment 13 and comply with AI rules: appoint a DPO, adopt comprehensive information governance, manage AI risks throughout system lifecycles, monitor suppliers and data sources, and report regularly to the board. Compliance is a strategic asset that protects business continuity and reputation.
2025 marked a major regulatory turning point in the fields of artificial intelligence (AI) and privacy protection. In Israel, Amendment 13 to the Privacy Protection Law forced Israeli companies to make significant changes in their routine operations and granted regulatory authorities unprecedented enforcement powers.
At the same time, the regulatory and judicial approach to innovative technologies like AI underwent a dramatic shift. Enforcement authorities and courts made it clear that they can rely on existing law and do not have to wait for specific legislation to impose legal liability for misuse of technologies.
Privacy Law: Amendment 13 and Tightening Standards
Amendment 13 to the Privacy Protection Law:
In August 2025, Amendment 13 to the Privacy Protection Law came into effect, substantially reshaping privacy protection law in Israel. The amendment updates terminology, expands data subjects’ rights, and imposes significant obligations and responsibilities on organizations, demanding material changes to routine operations. In parallel, it grants broad oversight and enforcement powers to the Privacy Protection Authority (PPA).
In addition, the PPA published a draft statement clarifying a significant change in the interpretation of, and requirements for, obtaining data subjects’ consent to the processing of their personal information. The PPA now expects organizations to prioritize explicit, informed, and freely given consent. Mechanisms such as pre-marked checkboxes, general notices about data use, or inferring consent merely from the use of a product or service may be considered “suspect consent.” In such instances, the data controller may be required to prove the data subject provided informed, freely given consent.
Beyond granting the PPA oversight and enforcement powers, key changes under Amendment 13 include the following:
- Expanded obligation to inform data subjects: When collecting data, organizations must specify whether the provision of data is compulsory, the purposes of use, the data recipients, and data subjects’ rights of access and correction. This impacts how organizations must inform customers, users, and even employees about personal information they collect.
- Obligation to appoint a data protection officer (DPO): The amendment obligates public bodies, data traders, organizations performing wide-scale systematic monitoring, and entities processing a high volume of sensitive data (such as banks and hospitals) to appoint a DPO. The DPO must have in-depth knowledge of law and technology and serve as a liaison with the PPA. The PPA also published a draft directive explaining the scope of this obligation.
- Board responsibilities: In companies where data processing is a key component of their operations, the board of directors bears an active duty to oversee compliance with the law, adopt a privacy protection policy, and be regularly and fully informed of security incidents in the organization.
The PPA recently announced that it has begun cross-sector supervision of local authorities, e-commerce companies, commonly used applications, and other companies, underscoring the seriousness of its supervisory proceedings and the shift toward stepped-up enforcement in the market.
You are invited to review our guide on Amendment 13, which offers tips to implement in your organization that will significantly improve compliance with the Israeli Privacy Protection Law.
AI: Law Enforcement, Intellectual Property, and Legal Liability
A central issue in 2025 was the legality of using copyrighted works to train AI models. US case law began to consolidate a clearer trend in an effort to strike a balance between copyright protection and AI training needs—the vanguard of technological progress.
In the Anthropic case, a landmark USD 1.5 billion settlement was reached following allegations that the company used pirated copies of books. The US district court held that while the AI training process itself may constitute transformative “fair use,” the fact that the company obtained the copies from unauthorized sources constitutes infringement.
Another relevant case is Thomson Reuters v. Ross Intelligence, in which a Delaware court ruled that commercial use intended to compete directly with the original product (a legal search engine) is not protected under the fair use doctrine. Europe has taken an even harsher stance. In a precedent-setting ruling, a German court held that training AI models on song lyrics without a license constitutes copyright infringement. The court further determined that when copyrighted lyrics appear in AI outputs, liability for the infringement rests with the AI companies and cannot be shifted to users.
The US Federal Trade Commission (FTC) made clear that no new legislation is needed to impose liability for misuse of AI. The FTC banned the Rite Aid chain from using AI-based facial recognition technology for five years due to high error rates and discrimination, and ordered Workado LLC to stop making unsubstantiated claims of 98.3% accuracy rates. In Texas, a settlement was reached with Pieces Technologies over unsubstantiated marketing claims that its GenAI system for medical uses had low “hallucination” rates.
The message is clear: marketing claims about AI performance are subject to consumer protection laws.
AI in the financial sector:
In 2025, the Israeli Money Laundering and Terrorist Financing Prohibition Authority (IMPA) published a requirement for proactive reporting of suspicious activity carried out through GenAI and deepfakes, such as forging KYC documents or impersonating senior officials to commit financial fraud. The Israel Securities Authority also permitted the use of chatbots on financial platforms to make analytics accessible, provided that financial entities maintain transparency, perform no further data processing, and enable access to the full report.
A final report of recommendations on AI in the financial sector came out in December. It reflects an approach that encourages innovation and the adoption of AI in the financial sector, coupled with responsible risk management. Israeli regulation focuses on supervising high-risk uses of AI and removing unnecessary regulatory obstacles, with the aim of encouraging a competitive, innovative, and efficient market aligned with international standards.
Privacy protection in AI systems:
In 2025, the PPA published a new draft directive clarifying how current Israeli law, particularly the Privacy Protection Law, applies to the development, training, and use of AI systems. Compliance with the law is mandatory at every stage of a system’s lifecycle. Key obligations include obtaining informed consent from users, maintaining full transparency about how data is processed, and imposing responsibility on organizations’ management echelon.
Case law in Israel and abroad has also prohibited data harvesting, data scraping, and the collection of biometric data without consent, and ruled that website operators bear responsibility for preventing such data scraping. In addition, courts have stated that, in certain instances, there may be a right to demand “correction of the algorithm” if it generates erroneous information.
For example, the Clearview AI case in the United States was one of the most significant precedents this year. The company was ordered to pay about USD 51 million as part of a class action settlement for scraping more than 10 billion images from social networks without consent to train facial recognition systems. The settlement, which also awarded 23% of the company’s shares to the plaintiffs, serves as a warning sign for companies using biometric data and scraping technologies without explicit consent mechanisms, especially considering US state privacy protection laws (such as the Illinois Biometric Information Privacy Act).
Looking Ahead to 2026 and Practical Recommendations
2026 will focus on the implementation of these new regulatory powers. The preparatory period given to the Israeli economy has ended, and the PPA is likely to step up active enforcement and the exercise of its expanded powers. Amendment 13 signals a fundamental change in market approach, from “technical privacy protection” to comprehensive information governance. Companies that continue to rely on outdated practices like sweeping consents and vague data sources are likely to face significant legal and economic exposure.
Practical recommendations for 2026 to ensure AI and privacy compliance:
- Immediate fulfillment of Amendment 13 obligations (see link above): Appoint a DPO (internal or external); ensure database definition documents are in full compliance; and update information security procedures, customer and supplier agreements, and other documentation.
- AI regulatory compliance (EU, US, and Israel): Compile a regulatory map that addresses the development and use of AI systems, including system risk classification, transparency and documentation obligations, data management and data security requirements, and unique obligations for general AI models and those using highly sensitive information. Monitor the entry into force of the EU AI Act, regulatory developments and standards in the United States (state and federal legislation and regulatory directives), and directives issued in Israel (including relevant regulatory positions on privacy protection, data security, and responsible use of AI).
- Implementation of AI risk management mechanism: Adopt an orderly methodology for managing AI risks throughout the system lifecycle, according to degree of risk and legal need. This should include mapping AI uses and prohibited/restricted uses; assessing privacy protection and data security risks; performing bias and fairness tests; implementing accuracy/robustness controls; implementing human-in-the-loop mechanisms if needed; justifying and documenting uses of data; performing supplier and external component examinations; implementing post-rollout monitoring mechanisms; implementing a process for rectifying malfunctions and AI incidents; and preparing a continuous improvement program.
- Corporate governance for privacy and AI risks: Establish binding organizational procedures that define functionaries, spheres of responsibility, authorities, and decision-making processes regarding the use of AI systems and privacy protection risks.
- Strict oversight of data supply chain: Segregate licensed databases from pirated databases; map all systems and verify their legality; verify rights in supplier contracts; and implement filtering mechanisms to prevent “hallucinations” and uses of third-party trademarks.
- Board involvement: Define periodic reporting to the board on privacy risk management and security incidents as part of the board’s statutory oversight duties.
The bottom line: in 2026, compliance with privacy protection and AI legislation is no longer merely a checkbox on a legal to-do list, but a strategic asset that ensures business continuity and safeguards organizational reputation.
***
Dr. Avishay Klein is a partner and the head of our firm’s Privacy, Cyber and AI Department.
Barnea Jaffa Lande’s Privacy, Cyber and AI Department is one of the leading and most prominent practices in Israel. We provide comprehensive and innovative legal counsel to technology companies, institutional bodies, and corporations from diverse sectors in Israel and abroad. The department specializes in the practical implementation of privacy law in Israel, including DPO services with a business-oriented focus, mitigating legal risks, and designing programs tailored to clients’ specific needs. The department also provides comprehensive legal counsel on all regulatory issues related to the development, use, and assimilation of AI tools in organizations.

