
US Senate Rejects AI Enforcement Moratorium: Legal Liability Applies to AI Companies

Earlier this month, the US Senate voted 99-1 to strike a key clause in a sweeping Republican domestic policy bill that sought to impose a ten-year moratorium on states’ enforcement of artificial intelligence regulations. Had it passed, the clause would have prevented states such as California, New York and Texas from enforcing existing laws or enacting new legislation on topics such as deepfakes, algorithmic discrimination and oversight of medical and automated systems, at a time when no uniform federal AI regulation has yet been enacted.

 

The Republican draft policy bill triggered strong bipartisan opposition from lawmakers, state regulatory authorities, academia and civil society organizations, who warned that a moratorium would deprive the public of vital protections precisely when technological risks are intensifying and federal AI legislation is still in its infancy. The near-unanimous vote reflects the firm position that states should be allowed to continue actively overseeing uses of artificial intelligence.

 

Regulatory authorities are increasing enforcement, even without federal law

The Senate’s position, that states’ authority to enforce AI regulation should not be frozen, reflects a clear trend among federal and state regulators toward increased enforcement: exercising their existing powers over uncontrolled uses of AI technologies even in the absence of a specific federal legislative framework. Below are three recent examples of such enforcement measures.

 

  1. Rite Aid: federal order restricting the use of AI facial recognition technology

In December 2023, the Federal Trade Commission (FTC) issued an unprecedented order against the Rite Aid pharmacy chain banning it from using AI-based facial recognition technology for five years. The order also obligates the company to delete the databases of images and the algorithms it developed, to issue clear disclosures to consumers, to implement strict oversight, reporting and data security procedures, and to implement an internal compliance program under senior management’s supervision.

According to the FTC’s findings, Rite Aid installed the systems in dozens of branches without verifying their reliability or accuracy and without oversight mechanisms. As a result, many customers were mistakenly flagged as criminals or suspected shoplifters without any supporting grounds. Acting on these false matches, store employees ordered customers to leave, followed them around the store, detained and searched them, or called the police to confront them. Rite Aid’s facial recognition system also disproportionately misidentified women and people of color, including members of Asian communities, creating systematic discrimination as well as considerable humiliation and emotional distress.

 

The Rite Aid case reflects the FTC’s stance of not waiting for federal AI regulation. The FTC treats irresponsible uses of AI technologies – particularly biometric identification technologies – as violations of consumer protection, privacy and data security laws, and exercises its enforcement authority against new technologies accordingly.

 

  2. Workado: The FTC rules that unsubstantiated claims of accuracy violate consumer protection laws

In April 2025, the FTC issued an administrative order against Workado LLC (formerly Content at Scale AI), which developed and marketed a tool for detecting AI-generated texts. According to the FTC’s findings, Workado advertised that its AI detection tool was 98.3% accurate, but failed to provide reliable empirical evidence and failed to clarify that the claimed accuracy rate was based solely on academic studies. Independent testing performed by the FTC found that the actual accuracy rate ranged between only 53% and 74% – a discrepancy that raises serious concerns that Workado misled consumers, especially since it marketed the tool for diverse uses, including to lecturers, students and authors. In short, Workado violated consumer protection laws by making false, misleading or unsubstantiated claims of accuracy.

 

The administrative order obligates Workado to remove all unsubstantiated claims about the product’s accuracy rate, to refrain from publishing any further advertisements containing claims that cannot be reliably verified, and to ensure that any future claims are based on solid empirical evidence.

 

The Workado case illustrates that advertising claims about the accuracy of AI tools are also subject to consumer protection laws. The FTC ruled that presenting an accuracy rate of 98.3% without reliable empirical evidence constitutes misrepresentation, and ordered that any material claim be either substantiated or removed. In short, AI companies are obligated to advertise transparently, accurately and responsibly – even in the absence of specific regulations – and violations trigger law enforcement measures.

 

  3. Pieces Technologies: first-of-its-kind regulatory arrangement with a generative AI healthcare technology company

In June 2025, the Texas Attorney-General reached a first-of-its-kind settlement with Pieces Technologies, which developed a generative AI system for medical uses in hospitals. The system produces summaries of patients’ clinical status based on real-time information streamed directly from hospital systems. The Attorney-General’s investigation found that the company made exaggerated marketing claims about the system’s accuracy, including an error rate or “severe hallucination rate” of <1 per 100,000, without any real evidence or reliable data. These claims could mislead medical teams and create a real risk of undue reliance on automated outputs as part of the treatment protocol.

 

Under the settlement, Pieces Technologies is obligated to disclose the limitations of its tools accurately and to ensure that medical teams understand the extent to which they should or should not rely on such tools. The Pieces Technologies case shows that, even without specific federal AI legislation, state law enforcement authorities treat misleading statements about the accuracy and safety of AI systems as clear violations of existing consumer and health protection laws. It also emphasizes that AI companies must be fully transparent about the use of such models, disclose empirical validation of error rates and provide explicit guidance to end-customers.

 

US law enforcement authorities are wielding their existing powers against misrepresentation and deception in commerce, and are imposing high standards of compliance on AI companies, including due disclosure, control documentation and avoidance of risks to human life, even when the products at issue are innovative AI-based tools.

 

Conclusion: the US is enforcing general laws on uses of AI technologies

The four news items we reviewed here – the rejection of the proposed AI regulatory enforcement moratorium, the FTC’s enforcement order against Rite Aid, the FTC’s administrative order against Workado and the Texas Attorney-General’s settlement with Pieces Technologies – reflect a consistent and clear regulatory trend.

 

Public law enforcement agencies in the United States are not waiting for specific federal legislation in order to enforce standards of accountability, transparency and due caution in the use of AI technologies. Federal and state authorities are wielding their powers by virtue of existing general laws, such as consumer protection, privacy protection, anti-discrimination, data security and professional ethics laws, and are making it crystal clear that AI technology companies are also expected to uphold fundamental legal principles.

For Israeli companies operating in the US market, the message is that there are no regulatory grey areas. Rather than wait until federal AI regulations are enacted, companies should manage legal risks, document product limitations, issue accurate disclosures to users and implement control mechanisms now, so as not to expose themselves to enforcement measures.

 

Recommendations to Israeli companies in light of the emerging enforcement trend in the United States

Israeli AI technology companies operating in the United States should:

  • comprehensively map all uses of their technology, including internal uses and product and marketing uses;
  • ensure that they have methodical mechanisms in place for vetting AI tools, whether procured from third-party vendors or developed in-house;
  • ensure that they comply with existing regulatory requirements in the United States pertaining to privacy protection, consumer protection, fairness and data security;
  • revise internal policies to include documentation, disclosure and transparency procedures;
  • incorporate accountability, algorithmic-bias and non-discrimination considerations as early as the development and implementation stages;
  • conduct legal and regulatory risk assessments in relation to every AI-based product or service; and
  • revise their use agreements, privacy policies and marketing statements accordingly.

 

Our firm is at your service to provide advice and assistance in ensuring AI regulatory compliance in the United States and in other markets.

 

***

Dr. Avishay Klein is a partner and heads the firm’s Privacy, Cybersecurity and Artificial Intelligence Department.

Dr. Nadine Liv is an associate in the firm’s Privacy, Cybersecurity and Artificial Intelligence Department.

 

Barnea Jaffa Lande’s Privacy, Cybersecurity and Artificial Intelligence Department is one of the leading departments in Israel, providing comprehensive, innovative legal advice to technology companies, institutional entities and corporations in various sectors in Israel and abroad. The department helps the firm’s diverse clients contend with all statutory and regulatory aspects, including compliance with international laws (such as the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA)), external Data Protection Officer services, legal support during the development and implementation of AI technologies, formulating and implementing privacy protection and data security policies, representation before regulatory authorities, cyberattack prevention and response solutions, and legal solutions for technology transactions and ventures tailored to the local and international market, while minimizing exposures and risks.

 
