Israeli Supreme Court Steps Up Sanctions for Unchecked Use of AI Products in Court Pleadings
Summary
Amidst the growing use of artificial intelligence, 2025 was marked by a series of rulings in which courts sharply criticized references to erroneous sources in pleadings—a byproduct of unchecked and unverified use of AI tools.
In a recent Supreme Court ruling, an unrepresented litigant was ordered to pay ILS 5,000 to the State Treasury after citing non-existent legal provisions, the result of using AI without verifying its output.
This ruling is a direct continuation of the gradual escalation in similar sanctions over the course of the year. Will this trend continue, and what can we expect in 2026?
AI Hallucinations in Pleadings
This past year, we have witnessed a concerning and growing phenomenon: pleadings containing references to nonexistent case law, fabricated legal quotes, and, at times, even fictitious provisions of law. In many instances, the “original sin” was use of GenAI systems to locate legislation and case law, without independent, professional, or adequate verification of the sources.
The problem is not the use of AI—which case law itself has recognized as offering potential benefit—but rather blind reliance on AI outputs that may be “hallucinations”: content that sounds legal and appears credible, yet is incorrect or does not exist.
The Supreme Court sharply criticized this phenomenon and set a clear norm: lawyers and unrepresented litigants alike may not shirk their responsibility by submitting court filings that contain AI-generated legal precedents and references without verifying their reliability and accuracy.
Development of Case Law in 2025: From Principled Criticism to Increasingly Harsher Sanctions
Throughout 2025, as instances of unverified use of AI tools multiplied, courts began to take a harsher stance against litigants who cited nonexistent “legal” sources. What began as normative clarification and principled criticism evolved into moderate financial sanctions, and later into penalties clearly intended to have a deterrent effect.
In February 2025, two Supreme Court rulings explicitly clarified the obligation imposed on filers of court pleadings to verify the accuracy and reliability of the data contained in them. A lawyer who files pleadings containing fabricated content, while relying on nonexistent supporting evidence, breaches obligations to the client, the court, the counterparty, and the legal profession. Nevertheless, no sanctions were imposed in those cases.
In May 2025, for the first time, the Supreme Court imposed a personal sanction on a litigant who cited non-existent case law after relying on AI. Because the litigant was not represented by a lawyer, the court set the fine at only ILS 500.
Throughout the year, trial courts also began adopting the approach set by the Supreme Court. Beyond criticism, they imposed financial sanctions on lawyers who blindly relied on AI results that cited nonexistent case law or legislation. Labor courts, magistrates’ courts, and even district courts ordered lawyers to pay sanctions to the State Treasury at sums between ILS 1,000 and ILS 8,000.
In late December 2025, the sanctions intensified further. The President of the Supreme Court ordered an unrepresented litigant to pay ILS 5,000 to the State Treasury. The message is clear: reckless use of AI tools without verifying the facts will no longer be tolerated. These are no longer symbolic fines, but sanctions designed to deter.
The Supreme Court is signaling to the public that its judicial response is not static. The longer the phenomenon persists, the more severe the punishment will be. The Supreme Court is also stressing that AI is nothing more than a work tool, and it cannot replace human discernment and fact-checking.
Approaching 2026: Between Technological Professionalization and Stiffer Judicial Sanctions
The phenomenon of pleadings referring to nonexistent case law or legislation, or citing fabricated legal precedents, is particularly concerning. It reflects negligent conduct that undermines lawyers' professional integrity, compromises the integrity of the judicial process, and breaches fundamental duties owed to clients.
What’s more, lawyers know that case law is not freely available online, but is contained in legal databases requiring a paid subscription. It is therefore obvious that general AI models have no direct access to them. While there are legal databases currently available on the market that offer AI-based search capabilities, even those results are not infallible and require meticulous scrutiny and verification.
What Can We Expect in 2026?
Lawyers who have not yet internalized how to use AI responsibly are likely to realize that they cannot rely on “open” searches that are not grounded in legal databases, that AI generates content but does not substantiate its sources, and that orderly practices for verifying data and supporting evidence are essential.
We can also expect Israeli search engines focusing on legislation and case law to improve their models and deliver more accurate and reliable results.
And the courts? The trend that began in 2025 is likely to continue and intensify in 2026. In a world where artificial intelligence is no longer novel, and after a series of explicit warnings across all judicial levels, we can assume courts will have less patience for such lapses and impose higher personal sanctions on lawyers.
***
Adv. Gal Livshits is a partner in the firm’s Litigation Department.
Barnea’s Litigation Department is one of Israel’s leading practices, ranked in the top tiers by both local and international legal directories. We represent some of the country’s most prominent clients in complex legal disputes, locally and internationally. Our firm has extensive experience in representing leading clients in the Israeli market, including public and private companies, government entities and local authorities, banks and financial institutions, and directors and senior executives.

