Israeli Court Criticizes the Israel Police over its Use of AI Tools
The District Court recently criticized the Israel Police’s use of AI tools, in light of the police’s claim that the AI system operates like a “black box” without human supervision or monitoring. The ruling demonstrates the importance of using these systems responsibly, coupled with an adequate understanding of their operating principles. The rules evolving in the technological and judicial arenas for examining and auditing AI systems apply to the business world and, as this ruling demonstrates, to the criminal-law arena as well.
The Petition and the Ruling
In his petition for disclosure of evidence, the defendant sought to expose how the Israel Police compiles the list of potential suspects entering the country, the list that justified the search of his belongings (the search uncovered GBL drugs in the defendant’s belongings, and an indictment was filed against him). During the petition proceedings, it was revealed that the list is compiled by a computerized police system that analyzes statistical information from various sources and uses AI and machine learning (ML) to issue statistical predictions of the likelihood that incoming passengers are involved in drug crimes.
“Black Box”
Police representatives testified before the court that the system has been in operation since 2014 and learns continuously from the data available to it through AI and ML, in a process not subject to human supervision. The list of characteristics the system analyzes is extensive and includes various details that bear no clear relation to drug offenses. The court also noted that the threshold criteria for including citizens on the list the system generates are very low and could lead to searches of people with no connection to any offense at all. The police officers testified that the police have no way of knowing the weights the AI system attributes to the various parameters and that it operates as a kind of “black box.” Regarding supervision of the system’s function, the officers testified that the supervision and operation processes are not set out in any procedure and that the system’s output is not checked.
Judicial Review
The District Court ruled that, in light of the testimony and evidence, the court has no possibility of exercising effective judicial review over the police’s AI system, and that the use of the system therefore raises difficulties, even though such use is not prohibited under Israeli law.
The judge also saw fit to refer to a ruling of the Court of Justice of the European Union (CJEU) that dealt with a similar issue. The CJEU ruled that AI systems are not to be used to identify potential drug couriers at airports, because such systems’ decision-making is opaque, making it impossible to explain the system’s rationale for identifying a particular person as a suspect.
The Significance of the Ruling
This ruling supports the need to regulate the use of AI systems. AI systems need not be “black boxes”; rather, they are significant and powerful technological tools that, when used correctly, can provide solutions to major difficulties in both the business arena and law enforcement.
The regulation of artificial intelligence, which is being developed by leading technology companies and by various government authorities, requires several fundamental principles to ensure legal and fair use:
- Transparency of the system’s operation, including the ability to examine the weight attributed to each parameter.
- Explainability of the system’s operation, including the risks inherent in it.
- Human involvement at decision-making junctures when systems are used to make significant decisions.
- Control processes over the system’s learning and conclusions.
- Ongoing analysis of the system’s activity to detect biases and errors during its operation.
- Training personnel and implementing orderly procedures.
Observing these fundamental principles when developing and using AI systems should improve their safety and effectiveness and reduce instances of erroneous profiling and civil rights violations.
We recommend that companies using AI tools, or considering using them, analyze the tools before deployment and implement an orderly AI management policy in their organizations.
***
Barnea Jaffa Lande’s Privacy, Data Protection and Cyber Department is at your service to answer any questions about the use, implementation or development of AI tools.
Dr. Avishay Klein is a partner and heads the department.
Adv. Masha Yudashkin is an associate in the department.