Call For Public Comment: A Report on AI Use in the Financial Sector
This month, an interim report on artificial intelligence in the financial sector was published for public comment. The report was prepared by the Ministry of Justice, the Ministry of Finance, the Competition Authority, the Securities Authority, the Capital Market Authority, and the Bank of Israel.
The report includes first-of-their-kind recommendations on regulating the use of AI technology in the financial sector, and it is open for comments until December 15th.
General Approach and Principles for Use of Artificial Intelligence in the Financial Sector
The report centers on encouraging innovation and AI adoption, while addressing the associated risks and establishing appropriate regulation as needed.
It proposes adopting flexible regulatory tools that can respond to technological developments, such as administrative directives and temporary regulations, alongside granting supervised entities the discretion to implement self-regulation tools.
According to the report’s authors, Israeli regulation should align with global regulation in the field, while avoiding the creation of a unique local standard. The report advocates a risk-based approach and the removal of unnecessary regulatory barriers, with the aim of encouraging a competitive, innovative, and efficient market.
Additionally, the report reviews the challenges of artificial intelligence generally, risks unique to the financial sector, and impacts in three activity areas: investment consulting and portfolio management, credit in the banking system, and insurance underwriting. We previously outlined the general risks of AI use by companies in the legal guide published this year.
Artificial Intelligence Challenges in the Financial Sector
The report outlines the unique challenges of AI in the financial sector:
- Financial Stability Risks – Reliance by many players in the financial sector on a limited number of common artificial intelligence models could lead to a situation in which many actors operate similarly, impairing independent and diverse decision-making. It also means that an operational failure in these systems could compromise many actors at once, creating a widespread impact on the sector.
Report Recommendations: continuously monitor AI use and map sensitive activities that rely on AI, while ensuring relevant regulation is in place.
- Cyber and Fraud Risks – AI could serve as a tool for spreading disinformation and committing financial fraud, for example through deepfakes or other impersonation methods.
Report Recommendations: expand regulatory monitoring and oversight of these risks, guide financial entities in mapping them, and increase public awareness.
- Risk to Competition – The use of artificial intelligence systems may increase market concentration and create entry barriers for new actors, especially where a few suppliers control the development of base models.
Report Recommendations: strengthen fair competition practices in accordance with competition laws and improve access to the information required for AI model training, in order to ensure equal opportunity and reduce market concentration.
- Discrimination and Individual Harm – Errors resulting from AI use could cause significant harm to customers who rely on AI data processing, for example wrongful rejection of credit requests, incorrect identification of suspicious transactions, AI-based trading preferences, and more.
Report Recommendations: promote solutions for examining AI outputs in order to reduce biases and discriminatory results.
Financial Sector Recommendations
The report focuses on three financial domains: investment consulting and portfolio management, banking system credit, and insurance underwriting.
Investment Consulting and Portfolio Management
AI can reduce costs and make consulting and portfolio management services accessible to a broader audience.
Report Recommendations:
- Encourage investment consulting and portfolio management license holders to integrate AI technology into their activities in order to increase the accessibility of financial services.
- Add a chapter to the online services guidelines addressing contemporary AI technology, and examine the activity of chatbots in investment consulting and portfolio management.
- Promote research examining the behavior of AI system users, in order to understand the technology’s impact on investor behavior.
Banking Credit
AI applications in credit are used, among other things, for identifying customer needs, underwriting, and credit rating. In some cases this represents an evolution of existing models; however, given the way AI operates, there are concerns about discrimination, as well as about aggressive credit marketing to customers.
Report Recommendations:
- To minimize the risks of discrimination and exclusion, the report recommends clarifying that AI uses are not exempt from existing law, and that financial institutions must ensure, before using AI tools, that they have the means to strictly adhere to regulatory requirements.
- The report recommends that companies address AI issues in their corporate governance processes, so that management and the board fulfill their responsibility for the unique risks involved.
- Implementing review processes for the data used to determine credit risk is essential to ensure data privacy and the use of only relevant data in training.
Insurance Underwriting
AI can streamline underwriting processes, premium pricing, and insurance coverage management, but it raises transparency, privacy, and bias risks in automated decisions. The report recommends adhering to the general principles of existing regulation and updating it as needed.
Next Steps
To ensure responsible integration of AI in financial services, the report recommends several important steps:
- Strengthen explainability for models that are required to provide reasoning for their decisions, that create significant consumer risks, or where reliance on AI in the decision is significant.
- Ensure human involvement in decision-making processes where relevant.
- Ensure notification and disclosure about the use of AI tools and their impact on the services or products provided to consumers.
- Enhance personal data protection in AI uses.
- Prevent discrimination in service, credit, and insurance provision.
- Expand financial entities’ responsibility towards their customers to include activities performed through AI.
- Develop risk assessment and corporate governance mechanisms that enable fair and transparent AI activity, including establishing oversight mechanisms, creating relevant policy documents, and conducting appropriate vendor assessments.
Recommendations for Financial Sector Companies
We recommend that all financial sector companies not wait, and instead take steps now to ensure responsible and safe AI use, including:
- Map and identify AI processes in use within the organization.
- Conduct a risk assessment regarding these uses.
- Formulate an internal policy for AI use and identify the stakeholders responsible for its implementation.
- Implement protective measures to ensure the responsible and fair operation of these systems.
***
The Privacy, AI and Cybersecurity Department at our firm will be happy to assist and accompany you in understanding the specific risks of implementing artificial intelligence tools, building a compliance program in this area, and mapping the significant risks related to your activity.
Dr. Zvi Gabbay is the Head of the Capital Market Department
Dr. Avishay Klein is the Head of the Privacy, Artificial Intelligence and Cybersecurity Department
Adv. Masha Yudashkin is a lawyer in the Privacy, Artificial Intelligence and Cybersecurity Department