
Landmark Settlement in AI Copyright Infringement Lawsuit – Anthropic Case

A significant development at the intersection of artificial intelligence and intellectual property occurred last week, following the settlement agreement reached by Anthropic in the class action lawsuit filed against it in the US District Court for the Northern District of California.

 

The lawsuit, Bartz v. Anthropic, was brought by a group of authors whose copyrighted works Anthropic allegedly used without authorization to train its generative AI model, Claude. The court examined whether the use of these works for AI training qualifies as fair use, as well as Anthropic’s potential liability for infringement.

 

In line with previous rulings, the court determined that the use of the works for training may itself constitute fair use, as the training process is considered transformative.

 

However, it also ruled that mere possession of copies obtained from unauthorized sources constitutes infringement, regardless of the use made of them. Thus, the case focused not only on the application of the fair use doctrine to training processes but, primarily, on the legality of data collection itself.

 

The settlement allowed Anthropic to avoid a precedent-setting ruling that could have established direct liability for possessing unauthorized datasets. The total compensation was set at USD 1.5 billion, with the company paying approximately USD 3,000 for each work covered by the settlement. Additionally, Anthropic agreed to delete its database of illegally obtained works.

 

What the Settlement Means for the AI Industry

 

The Anthropic settlement marks a turning point in the AI industry for several reasons:

  • This is the first time a leading AI company has agreed not only to pay damages but also to change its data collection and retention practices.
  • The settlement was reached at an early stage of the proceedings, reflecting Anthropic’s assessment of the high risk it faced and its recognition that the focus of legal scrutiny has shifted. The question is no longer whether the training itself constitutes fair use, but whether it is permissible to possess unlawfully obtained datasets.
  • The commitment to implement transparency mechanisms and remove copyrighted works sets a precedent likely to influence new industry standards.

 

While other AI companies, such as OpenAI and Meta, continue to face similar lawsuits, Anthropic’s settlement may serve as a model for resolving these disputes and accelerate the development of ethical and legal standards for AI model training.

 

Practical Recommendations for Companies

 

In light of Anthropic’s landmark settlement and its implications for the AI industry, the following practical recommendations are suggested for companies developing and training AI models:

 

  • Ensure the legality of data sources – Avoid possessing or using datasets obtained unlawfully or lacking proper documentation. As demonstrated in the Anthropic case, mere possession of pirated datasets may constitute infringement, even if the use itself might qualify as fair use.
  • Work only with reliable suppliers – Require clear documentation of terms of use and distribution rights for all collected content. Anthropic’s commitment to remove specific works and implement filtering mechanisms underscores the importance of partnering with trusted, authorized content providers.
  • Develop a structured policy for training with copyrighted works – Companies should establish comprehensive policies addressing not only how data is used, but also how it is collected and documented.
  • Include strong contractual protections with suppliers – Considering the increased legal risk highlighted by the Anthropic case, contracts with content suppliers should include robust legal safeguards, including indemnification provisions in case of infringement claims.
  • Develop technical capabilities for filtering and identifying copyrighted content – Following Anthropic’s commitment to implement technical measures to prevent the use of copyrighted works, companies should consider developing similar tools to identify, filter, and remove copyrighted material from their training datasets.

 

 

***

 

Dr. Avishay Klein is a partner and head of the firm’s Privacy, Cyber & AI Department.

 

The firm’s Privacy, Cyber & AI Department is one of the most prominent and leading practices in Israel, providing comprehensive and innovative legal counsel to technology companies, institutional entities, and multinational corporations. The department specializes in the practical implementation of privacy laws in Israel, with a focus on catering to clients’ specific business needs, identifying unique legal exposures, and designing privacy programs tailored to each situation.

 

Tags: AI | AI Regulation | AI Ruling | Artificial Intelligence | Data Protection | Intellectual Property | IP | Personal data | Privacy