AI Use in the Financial Sector – Final Report of the Regulatory Workgroup
Summary
- Final Report on the Use of AI in the Financial Sector: The final report on AI in the financial sector was published by an interministerial committee following a public consultation process, and it constitutes a continuation of the interim report issued at the end of 2024. The report promotes innovation while removing Israel-specific regulatory barriers.
- Key Risks: The report identifies key risks, including financial stability, cybersecurity, fraud, competition, discrimination, and privacy concerns, emphasizing the need for a systemic approach and preparedness by financial institutions.
- Principles for Responsible Implementation: The report’s recommendations include a risk-based approach, internal corporate governance, system explainability, human oversight where necessary, end-user disclosure, and prevention of bias and discrimination. It also proposes a new legal basis for processing personal data for AI training purposes.
- Application Areas: Regarding investment advisory, banking credit, and insurance underwriting, the report recommends controlled AI implementation, ensuring clear explanations for automated decisions and maintaining full accountability of financial institutions.
- Required Actions for Financial Institutions: Institutions are advised to map and identify existing AI uses, assess associated risks, develop internal policies, ensure user notice and transparency, and implement measures to guarantee responsible and fair operation of AI systems.
Following an extensive public consultation process, a final report on the use of AI in the financial sector was published last week. The report was prepared by a committee comprising representatives of the Ministry of Justice, the Ministry of Finance, the Competition Authority, the Israel Securities Authority, the Capital Market, Insurance and Savings Authority, and the Bank of Israel.
The report follows the interim report published at the end of 2024, the main points of which we reviewed in a previous client update. It reflects the conclusions reached after considering feedback from public and professional bodies, including submissions by our firm.
The report sets forth recommendations for the use of AI in the financial sector and presents a proposed action framework for financial regulators in Israel, while encouraging innovation and the adoption of AI. The recommendations aim to avoid creating barriers unique to Israel and to remove existing obstacles, thereby fostering a competitive, innovative, and efficient market.
The report reviews the general challenges associated with AI, the unique risks relevant to the financial sector, and the implications in three areas of activity: investment advising and portfolio management, credit in the banking system, and insurance underwriting.
AI Challenges in the Financial Sector
The report identifies several key challenges associated with the use of AI tools that financial entities must prepare for:
Financial stability risks – Reliance on a limited number of common AI models by multiple players in the financial sector could result in many of them operating similarly, potentially impairing the ability to make diverse decisions. A failure in these systems could trigger a shock across the entire sector and endanger numerous players due to their significant reliance on AI systems.
Cyber and fraud risks – AI may be used as a tool to spread disinformation and commit financial fraud, including through technological impersonation methods such as deepfakes and other techniques.
Risk to competition – Implementing AI models in the financial sector may raise competitive concerns on several levels: the centralization or creation of monopolies in the supply of AI services to financial entities (e.g., by large technology companies), increased market concentration and higher entry barriers for entities lacking sufficient data, the strengthening of financial entities’ power over captive customers, and the facilitation of coordination or cartel-like behavior enabled by AI models.
Discrimination and individual harm – Errors in AI-driven data processing may cause significant harm to customers, for example through the rejection of credit applications, the incorrect flagging of suspicious transactions, or AI-based trading preferences.
Principles for AI Implementation in the Financial Sector
As noted above, the report reviews the general challenges associated with the use of AI systems, particularly in the financial sector. For each issue, the report offers specific recommendations:
- Defining artificial intelligence – The report proposes establishing a uniform definition of AI for regulators to use. However, it also recommends maintaining flexibility so that financial entities may exclude certain systems from the scope of the definition. Thus, a financial entity may determine, in accordance with regulatory guidelines, that a certain system is not, by its nature, an AI system. This approach is intended to prevent overregulation of simple tools and to focus resources and supervision on complex and significant systems.
- Risk-based approach and internal governance – The report recommends that the implementation of AI systems follow a risk-oriented approach. Financial entities should determine each system’s risk level, considering the risks to customers, investors, the public, and the financial entity itself. Consequently, financial entities should establish internal corporate governance frameworks that ensure effective risk assessment and management for each system. The report proposes two risk levels—low-medium and high—with high-risk systems requiring enhanced risk mitigation measures.
- Explainability in AI systems – The report distinguishes between general explainability (understanding a system’s overall activity) and specific explainability (explaining how the system made a specific decision). It recommends requiring specific explainability whenever the AI system acts as a material component in the decision-making or recommendation process (e.g., automatic investment recommendations). In other cases, specific explainability may not be required, or a partial explanation may suffice.
- Human oversight – To ensure safe and ethical operation of AI systems, the report recommends implementing human oversight processes, though not necessarily at every stage of the system’s operation. It proposes ensuring human oversight in decision-making systems that pose a high risk to individuals, especially where no other compensating measures exist. The report emphasizes that human oversight should be considered in relation to all regulatory frameworks applicable to the AI system, and that in systems supporting human decisions, the requirement may be significantly reduced. Furthermore, and in contrast to European legislation, the report does not mandate a human alternative to AI and permits decisions executed exclusively by AI.
- Notice and disclosure – The report recommends notifying end users about the actual use of AI systems to enable informed choice and strengthen trust in the service. In particular, it recommends ensuring such disclosure in cases where users might mistakenly believe they are interacting with a human agent. Regarding notice, the report encourages regulators to adapt the wording and manner of disclosure to the nature of each financial entity’s activities. The report also recommends integrating information about the characteristics of the AI system’s use into the information accessible to users.
- Legal basis for data processing – The report includes a significant and innovative recommendation to establish a new legal basis in Israel for AI training use, beyond reliance on consent under the Privacy Protection Law. This marks a material shift in the current legal landscape, aimed at removing significant barriers to AI development. The report further proposes permitting the use of previously collected personal data for the new purpose of AI model training, effectively waiving the requirement for renewed consent. These recommendations currently conflict with provisions of the Privacy Protection Law, and their implementation may require a legislative change.
- Bias and discrimination – The report emphasizes the need to ensure that AI systems are free from bias and discrimination. Among other things, it recommends that regulators exercise supervisory powers regarding the prohibition on discrimination under the law. To address this risk, the report recommends implementing processes that examine these aspects throughout the system’s lifecycle, including outcome-based tests to determine if a certain tool is free from bias.
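The outcome-based tests mentioned above can be sketched in code. The following is a minimal, hypothetical illustration of one common outcome metric (a disparate impact ratio comparing approval rates between two applicant groups); the function names, example data, and any pass/fail threshold are assumptions for illustration, not taken from the report.

```python
# Hedged sketch: an outcome-based bias check for a hypothetical
# credit-approval tool. Group labels and decisions are illustrative.

def approval_rate(decisions):
    """Fraction of approved applications (True == approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.

    A ratio well below 1.0 indicates that outcomes differ materially
    between groups, warranting further review of the model.
    """
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: decisions the tool produced for two applicant groups.
group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
```

In practice such checks would run across the system's full lifecycle (training, validation, and ongoing monitoring) and over all legally protected characteristics, as the report recommends.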
Recommendations for Specific Sectors
In addition to a broad review of cross-sector risks, the report includes specific recommendations for three key areas:
Investment advising and portfolio management
The report views AI as a means to reduce costs and make investment advising and portfolio management services accessible to a broader audience. It recommends:
- Encouraging licensed investment advisors and portfolio managers to integrate AI technologies, with the aim of broadening public access to financial services.
- Updating the online services directive to include specific provisions addressing modern AI technologies, including the use of chatbots for investment advising and portfolio management.
- Promoting research into user interactions with AI systems to better understand the technology’s impact on investor behavior.
The report emphasizes that liability for the use of AI systems, including for recommendations generated by AI tools, remains with the investment house. The guidelines are intended to supplement, not diminish, existing responsibilities.
Banking credit
AI applications in the credit field are used, among other things, for identifying customer needs, underwriting, and credit scoring. In some cases, this represents an evolution of existing models. However, the use of AI raises concerns about discrimination and aggressive credit marketing activities. The report recommends:
- Minimizing discrimination risk by emphasizing that the use of AI does not alter existing legal obligations, and by having financial institutions ensure, prior to deployment, that they can strictly adhere to those obligations.
- Integrating AI into institutions’ corporate governance frameworks and ensuring that management and the board effectively exercise their oversight responsibilities regarding these unique risks.
- Implementing data review processes for credit risk determination to ensure that only relevant and lawful data is used during the model training process.
- Ensuring the ability to provide a clear explanation for decisions made by these systems.
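The last recommendation above can be illustrated with a minimal sketch of specific explainability for a credit decision. The feature names, weights, baseline, and approval threshold below are hypothetical; real underwriting models are far more complex, but the principle of attributing a decision to per-feature contributions is the same.

```python
# Hedged sketch: explaining a decision of a hypothetical linear
# credit-scoring model by breaking the score into per-feature
# contributions. All numbers here are illustrative assumptions.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "payment_history": 0.5}
BASELINE = 0.1   # intercept of the scoring model
THRESHOLD = 0.5  # minimum score for approval

def score(applicant):
    """Linear score: baseline plus weighted feature values."""
    return BASELINE + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Per-feature contributions, ordered by impact on the decision."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                  reverse=True)

applicant = {"income": 0.7, "debt_ratio": 0.9, "payment_history": 0.6}
decision = "approved" if score(applicant) >= THRESHOLD else "declined"
print(decision)  # "declined": score is 0.14, below the threshold
for feature, contribution in explain(applicant):
    print(f"{feature}: {contribution:+.2f}")
```

Here the explanation would show that a high debt ratio was the dominant negative factor, which is the kind of clear, decision-specific reasoning the report expects institutions to be able to provide.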
Insurance underwriting
While AI can streamline underwriting processes, premium pricing, and insurance coverage management, it raises risks related to transparency, privacy, and bias in automated decisions. The report recommends:
- Strict adherence to the general principles of existing regulation, with updates where necessary.
- Providing explanations for decisions on granting insurance coverage and pricing.
What Should Financial Institutions Do Now?
To ensure responsible integration of AI in financial services, the report recommends several important steps:
- Strengthen explainability for models used in decisions requiring reasoned justification, where significant risk to consumers exists, or where the degree of AI involvement in the decision is substantial.
- Ensure human oversight in decision-making processes, where appropriate.
- Provide notice and disclosure regarding the actual use of AI tools and their impact on the service or product for end consumers.
- Increase protection of personal data in AI applications.
- Prevent discrimination in the provision of services, credit, and insurance.
- Expand the responsibility of financial entities toward their customers to include activities performed through AI.
- Develop risk assessment and corporate governance mechanisms that enable fair and transparent AI activity, including the establishment of supervision mechanisms, creation of relevant policy documents, and assessment of appropriate vendors.
We anticipate that financial regulators will adopt at least some of the report’s recommendations and implement them in binding directives for supervised entities.
Practical Steps
We recommend taking the following steps now to ensure responsible and safe use of AI:
- Map and identify AI processes used within the organization.
- Conduct a risk assessment regarding these uses in all relevant areas (including privacy, cyber, financial regulation, and competition).
- Formulate an internal policy for AI use and development, including risk assessment and mitigation processes, and appoint stakeholders responsible for its implementation.
- Implement protective measures to ensure responsible and fair system activity.
- Ensure that privacy and data security programs address risks and challenges related to artificial intelligence.
***
Dr. Avishay Klein is a partner and head of the Privacy, Cyber and AI Department.
Adv. Gal Rozent is a partner and head of the Antitrust and Competition Department.
Adv. Andrey Yanai is a partner in the Regulation Department.
Adv. Masha Yudashkin is an associate in the Privacy, Cyber and AI Department.
Adv. Yasmin Omari is an associate in the Regulation Department.

