© All rights reserved to Barnea Jaffa Lande Law offices

Together is powerful

New Israeli Regulatory Requirement: Proactive Reporting of Suspicious Use of GenAI

Summary

  • Advanced GenAI and Deepfake technologies are creating new and rapidly evolving threats to the financial sector, including the automated fabrication of identification documents, the production of falsified media, and sophisticated impersonation of officials or senior corporate executives.
  • These capabilities enable the opening of fictitious accounts, the circumvention of KYC and AML controls, and the execution of social engineering schemes that induce victims to disclose sensitive information or authorize fraudulent transfers.
  • Global trends show a sharp rise in the criminal use of these tools, and the accessibility of such technologies significantly lowers the barrier to producing convincing falsified media, increasing the likelihood of large-scale incidents.
  • Effective mitigation requires reinforcing identification and prevention mechanisms, updating controls and reporting procedures, integrating red-flag indicators into monitoring systems, enhancing document and media verification, reassessing the technological capabilities of KYC vendors, and delivering targeted employee training to identify GenAI- and Deepfake-enabled fraud.

The Israeli Money Laundering and Terrorist Financing Prohibition Authority (IMPA) recently published a document to raise awareness of the new challenges that abuse of Generative AI and Deepfake technologies poses to the financial sector. The document reviews the potential risks of these technologies, presents case studies from Israel and around the world, discusses emerging threats, and provides a list of “red flags” to help identify suspicious activities.

 

The key immediate message to all reporting entities is a new requirement: IMPA is now calling for proactive filing of “exceptional reports” about any activity perceived as irregular, or that raises concerns of money laundering or terrorist financing, related to the use of GenAI and Deepfake technologies. The suspicious activity may arise during direct interactions with the reporting entity (such as fabricating documents during a KYC process or impersonating another customer) or during a customer’s activities with any third party (such as a customer who fell victim to a Deepfake-based investment fraud or money transfer).

 

Main Risks Identified by IMPA

The document explains that GenAI is primarily abused to falsify identification documents, while Deepfake (a GenAI application that alters pictures and produces images, audio clips, or videos so authentic-looking that they are difficult to identify as fakes) is used for impersonation. The report focuses on the threat posed by GenAI applications, particularly Deepfake, for committing fraud and money laundering across two main channels:

 

  1. Fraud against the financial sector (circumventing KYC and AML processes)

The main risk is abuse of these technologies to bypass know your customer (KYC), customer due diligence (CDD), and verification processes, namely to open fictitious bank accounts for use as a conduit for money laundering. The document presents an actual case from Hong Kong (April 2025) as an example. A fraud network used Deepfake to replace criminals’ photos with photos from stolen IDs, successfully opening accounts in 30 different banks while falsifying “selfie” verification checks. These accounts were used to launder more than USD 1.2 million.

 

  2. Fraud against the public (impersonating senior officials)

Another risk, relevant to all companies, is social engineering scams. The document presents a case study in which criminals used a Deepfake image of a CFO in a video conference call to convince an employee to transfer USD 25 million. In another case, criminals used a Deepfake clone of a well-known CEO’s voice to convince a bank manager to transfer USD 35 million.

 

Israeli Context: Calm Before the Storm

IMPA analyzed the gap between global trends and the situation in Israel and stressed that these technologies are “reshaping the face of financial crime” so rapidly that entities worldwide are having a hard time catching up. However, the document emphasizes that “evidence of widespread abuse of these technologies has not yet been identified in Israel … and no evidence of fraud in KYC processes has been identified.”

 

The fact that no phenomenon of similar magnitude has been identified in Israel is precisely why IMPA published the document. IMPA is taking preventive measures by informing the market that it foresees attempts to commit similar crimes in Israel and expects entities to prepare accordingly.

 

Red Flags and IMPA’s Guidance

The document contains a list of “red flags” to help financial and business entities detect suspicious activity. IMPA emphasizes that a single red flag does not necessarily attest to illegal activities and that entities should assess the overall circumstances. Red flags include audio or visual anomalies in a conversation (such as poor lip synchronization or inconsistent facial movements), suspicious camera glitches during live verification, discrepancies between identification documents (for example, the date of birth indicates a different age than that of the person in the photo), and “urgent” requests for money transfers when the audio or video is of poor quality or seems abnormal.

 

Recommended Actions

Considering the new reporting requirement and the risks described, we recommend the following:

To financial entities (banks, fintech companies, payment service providers, and crypto companies):

  • Update policies – Revise internal suspicious transaction reporting (STR) policies to include explicit reference to the detection of suspected GenAI and Deepfake abuses, as required by IMPA.
  • Incorporate IMPA’s aforementioned list of red flags into monitoring systems and manual review processes.
  • Query technology providers – Verify with KYC and biometric authentication technology providers that their systems have the latest Deepfake detection capabilities.

To all companies:

  • Strengthen financial controls – Update procedures so that irregular money transfers (especially requests from “executives”) are never approved based solely on video conference calls, telephone calls, or emails. Enforce an additional, independent (out-of-band) authentication channel.
  • Provide specific training to employees – Provide training to financial and management staff on identifying signs of Deepfake-based social engineering.

 

***

 

Adv. Andrey Yanai is a partner in the Regulation Department. 

 

Adv. Yasmin Omari is an associate in the Regulation Department. 

 

The firm’s Regulation Department is one of Israel’s leading legal teams in this practice. Our services to foreign companies include the provision of legal opinions, assistance in obtaining licenses and approvals, and guidance during transactions and when commencing activities in Israel.

Tags: AI Regulation | Regulation