
AI and Regulation: The Future Is Already Here

Since the internet became ubiquitous in the 1990s, only a handful of technologies have emerged far-reaching enough to put ethical and legal questions to the test. Artificial intelligence is among these select few. As with any disruptive innovation, the enormous potential for developing and applying AI-based technologies also raises many questions about the need for legal regulation and for defining the rules of the game governing the uses of artificial intelligence. It is not for nothing that Mira Murati, the CTO of OpenAI (the company behind ChatGPT), has called on governments and regulatory authorities to regulate the field.

 

While complex debates about this topic are underway, it is important to examine the most significant regulatory developments in the field, both in Israel and in the international arena. It is also vital to shine a spotlight on the key principles to scrutinize when weighing these developments, some of which already exist and are legally binding today.

 

Regulation in the International Arena

 

 

Europe 

The European Union has undertaken a significant initiative by developing and promoting comprehensive regulation of the various AI fields. The proposed European law, the AI Act, first published in April 2021, imposes material obligations on developers and users of AI technologies and provides for extremely stiff fines for companies that fail to comply (up to EUR 30 million or 6% of annual turnover per violation, whichever is higher). Like the European data protection regulation, the GDPR, the AI Act will also apply to non-European companies that develop or use AI-based products intended for use in Europe. Given its extraterritorial scope, the AI Act, the first comprehensive legislation in this field, shows clear potential to become the standard for AI-specific regulation.

 

European law divides artificial intelligence systems into four categories, depending on the level of risk they pose to individual rights, and adapts the requirements to the relevant risk level. The law stipulates, inter alia, that there are high-risk systems (such as an AI-based system used to screen job applicants or an AI-based credit rating system), as well as systems posing unacceptable risk, such as social scoring systems of the type used to rate citizens in China, which European law will prohibit outright.

 

For the law to come into effect, the European Parliament must approve it, after which final discussions will take place among the member states, the Parliament, and the European Commission. These discussions are expected to begin in April, with the goal of final approval of this groundbreaking law by the end of this year.

 

United States

 

The United States does not yet have a comprehensive regulatory initiative, but we can see initial indications both in policy guidelines issued by the Federal Trade Commission (FTC) and in state-level legislative initiatives.

 

The FTC published directives and policy guidelines regarding the development and use of AI technologies as far back as 2020. These directives, most of which the European AI Act also incorporates, have been bolstered in recent years by the FTC's enforcement proceedings, relying, inter alia, on various provisions of federal law, including the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the FTC Act.

 

At the state and local level, in addition to legislation regarding privacy protection and automated decision-making, legislatures are also addressing the use of AI in hiring procedures via automated employment decision tools (AEDT). For example, New York City's AEDT law prescribes, inter alia, that AEDT systems must undergo an annual audit of their potential for bias and discrimination, and that the results of this audit must be made publicly available.

 

Regulation in Israel

 

Israel has taken a different approach. At the end of 2022, the Ministry of Innovation, Science and Technology published a document entitled “Draft Policy for Regulations and Ethics in Artificial Intelligence” for public comments. This document is the Israeli government’s first comprehensive examination of the repercussions of AI systems entering the public domain and the challenges they pose. It promotes a policy of “soft regulation,” calling on industry and government authorities to adopt a framework of broad, non-binding ethical rules. The document also calls on the various regulatory authorities, each in its own field, to examine the need to promote concrete regulations.

 

While there are material differences between the various initiatives in Europe, the United States, and Israel, all of these initiatives share several significant principles pertaining to the development and use of artificial intelligence.

 

Key Principles of AI Regulations

 

 

Transparency

 

Decision-making processes rely heavily on AI-based algorithms, for example when authorizing financial transactions or screening job applicants. The principle of transparency seeks to inform anyone coming into contact with artificial intelligence, or affected by its use, in advance that decisions are being reached by an AI system and which parameters the system uses as the basis for its decisions.

 

Fairness and nondiscrimination

 

The principle of fairness obligates companies to ensure that the development and use of artificial intelligence take into account the need for gender, racial, and religious equality, among others, and that bias is eliminated from AI-based systems in order to minimize the risk of unjust discrimination against individuals or groups.

 

Accuracy and reliability

 

In order to rely on information and decisions produced using artificial intelligence, it is of the utmost importance that the technology be accurate. One of the main concerns in this regard is that the artificial intelligence will not be trained adequately, resulting in an AI system that performs poorly and whose reliability is diminished. Another risk arises when the information the AI system relies on has changed, but the system’s training has not been updated accordingly. This may lead to incorrect outputs and decisions.

 

Privacy protection and data security


AI-based systems often require wide-scale use of personal information. This use is subject to privacy and data protection regulations, during both the training stage and the usage stage. Consequently, companies must ensure they comply with privacy laws, including the need to obtain the data subject’s consent, transparency, data minimization, data deletion, etc. Furthermore, information security and maintaining the technology’s reliability are also essential, both to ensure compliance with the regulatory provisions and to ensure no unauthorized party gains access to personal information or disrupts the AI system’s operation.

 

Accountability and risk management

 

Who should we hold accountable for AI decisions when hiring procedures, credit ratings, and perhaps even legal proceedings are at stake? Is it justified to impose accountability on companies that developed or used the technology even in situations where it was impossible to foresee the decision or its impact on the relevant person? To begin addressing this complex issue, it is clear that, at the very least, internal risk management commensurate with the technology’s risk potential is necessary. Accordingly, AI uses that pose significant risks and repercussions (for example, a credit rating that could prevent people from executing transactions, or law enforcement authorities’ use of biometric information to combat crime) will require internal controls and constant monitoring processes to ensure, inter alia, accuracy, reliability, and the absence of bias and discrimination. These processes require documentation, sometimes even vis-à-vis the relevant regulatory authority. In this context, we note that the proposed European law will obligate relevant companies to register high-risk AI systems in an EU database managed by the European Commission, for the purpose of enhancing transparency and public oversight.

 

There is no doubt regulatory discussions about artificial intelligence will continue with greater intensity as the technology evolves and permeates even more spheres of life. However, it is important to be aware that legislative initiatives and extensive regulations with international implications are already in place. Furthermore, besides the specific laws and regulations referred to above, various provisions of existing laws in Israel and internationally, including privacy protection laws, contract and tort laws, consumer protection laws, and labor laws, are relevant and applicable to various uses of artificial intelligence.

 

Given this situation, companies that develop or purchase AI-based products, as well as investors considering investing in such companies, should examine, even at this early stage, whether they can comply with the relevant regulations, especially the five principles underpinning the artificial intelligence regulations likely to be enacted in the near future.

 

***

 

Our firm’s privacy, data protection and cyber practice is at your service if you have any questions about AI, privacy protection, and data security regulations.

 

Dr. Avishay Klein heads the Privacy and Cyber practice at Barnea Jaffa Lande.

Tags: AI | Privacy | Regulation