AI technologies are changing the way we work and are likely to impact the labor market for years to come. Many employers understand that alongside the many advantages of generative AI, the use of this technology also entails significant legal challenges and risks. Consequently, it is clear that employers must prepare for increased use of this technology.
The 10 Guidelines
Verify the veracity of the information provided by AI systems
AI systems may provide outdated, erroneous, or entirely fabricated answers (AI hallucinations). One such case occurred recently in New York, when an attorney submitted to the court fake rulings fabricated by ChatGPT. AI systems also create a gap between the perceived credibility of the information they provide and the actual facts. Therefore, employers must implement work procedures instructing employees not to rely blindly on information provided by AI systems.
Implement technological and organizational measures to protect personal and confidential information
In independent tests, ChatGPT candidly stated: “As an AI language model, I do not have the ability to guarantee the confidentiality of any information shared with me.” Indeed, the use of AI could lead to the loss of critical trade secrets and the leaking of personal information, thereby causing regulatory and contractual exposures. Employers must set clear policies to ensure the protection of trade secrets whenever AI tools are used, add clauses to employment agreements and employee covenants regarding the safeguarding of trade secrets, implement technological control measures, and provide training in information protection and data security.
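By way of illustration, the following is a minimal sketch of one such technological control measure: masking obvious personal identifiers in a prompt before it is sent to an external AI service. The patterns and the redact_prompt helper are illustrative assumptions written for this example only and are not tied to any particular vendor's tools.

    import re

    # A minimal sketch of one technological control measure: masking obvious
    # personal identifiers before a prompt leaves the organization for an
    # external AI service. The patterns below are illustrative assumptions.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ID_NUMBER": re.compile(r"\b\d{9}\b"),       # e.g., a 9-digit national ID
        "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),  # loose phone-number pattern
    }

    def redact_prompt(text: str) -> str:
        """Replace likely personal identifiers with placeholder tags."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    if __name__ == "__main__":
        prompt = "Summarize the dispute with employee X, x@example.com, ID 123456789."
        print(redact_prompt(prompt))
        # -> Summarize the dispute with employee X, [EMAIL REDACTED], ID [ID_NUMBER REDACTED].

In practice, such filtering would typically be combined with access controls, logging, and data loss prevention tools, and reviewed together with legal and information security advisors.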
Ascertain the AI tool’s liability terms for the specific information before using it
Use AI tools only after analyzing the legal risks arising from the use of the specific technology. In this context, it is imperative to verify that the technology’s usage agreements contain express covenants of liability for compliance with legal and regulatory standards, as well as indemnification and liability obligations for damages resulting from reliance on the technology.
Ascertain the ethical impacts of using AI tools on labor relations
Using AI to manage organizational changes, such as layoffs, raises ethical problems. Employers must ensure the algorithm does not rely on biased criteria (race, gender, marital status, etc.) when reaching hiring, layoff, or other decisions. Failure to address these aspects could expose employers to lawsuits and to violations of labor laws.
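For illustration only, the following is a minimal sketch of one possible internal check: comparing selection rates across groups and flagging ratios that fall below the commonly cited “four-fifths” benchmark. The data format, the helper functions, and the 0.8 threshold are illustrative assumptions, not a legal standard applicable in every jurisdiction.

    from collections import Counter

    # A minimal sketch of one possible bias check: comparing selection rates
    # across groups and flagging ratios below an illustrative "four-fifths"
    # benchmark. Not a substitute for legal or statistical review.
    def selection_rates(records):
        """records: iterable of (group, selected) pairs -> {group: selection rate}."""
        totals, hits = Counter(), Counter()
        for group, was_selected in records:
            totals[group] += 1
            if was_selected:
                hits[group] += 1
        return {group: hits[group] / totals[group] for group in totals}

    def adverse_impact_flags(records, threshold=0.8):
        """Flag groups whose selection rate falls below threshold * the highest rate."""
        rates = selection_rates(records)
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

    if __name__ == "__main__":
        decisions = ([("group_a", True)] * 40 + [("group_a", False)] * 60
                     + [("group_b", True)] * 20 + [("group_b", False)] * 80)
        print(adverse_impact_flags(decisions))  # -> {'group_b': 0.5}

Such a check does not replace a proper bias audit of the underlying model or legal review; it merely illustrates the kind of internal control mechanism employers can put in place.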
When it comes to labor relations, AI must never replace human involvement
AI systems have a hard time weighing all relevant factors when making decisions and tend to focus solely on numerical and measurable data, thereby overlooking other critical aspects of labor relations. Any use of AI in labor relations should also account for intangible aspects not captured by the dry data, such as teamwork, leadership ability, motivation, and empathy. Many countries are revising their privacy protection and labor laws and regulations to require human review of AI-generated findings and decisions, and for good reason.
Accordingly, whenever using AI tools to reach decisions about employees and candidates, employers must weigh “human” considerations before finalizing those decisions.
Inform employees about the use of AI systems
Employers must ensure their employees understand the decision-making process carried out by AI systems, including the criteria on which decisions concerning them are based. Israel’s National Labor Court has recognized employees’ right to privacy in the workplace and has ruled that, under particular circumstances, employers must obtain their employees’ consent and must inform them about the information collected about them. Therefore, any employer looking to use an AI system to monitor employee performance or workflows, for example, must inform its employees and check whether their consent is necessary.
Verify that use of AI systems does not adversely impact business continuity or learning and knowledge acquisition processes in the organization
Full reliance on an AI system for a company’s critical operations creates an enormous operational risk, since an AI system failure could bring the company to a halt, whether temporarily or for a prolonged period. Just as IDF soldiers learn how to navigate even though they are equipped with sophisticated GPS systems, it is imperative to consider business continuity whenever AI is embedded in critical business infrastructure.
The company’s risk management policy should address the risk of not being able to rely on AI technologies, whether temporarily or permanently.
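As a hedged illustration, the sketch below shows one way a business process can degrade gracefully when an AI service is unavailable: the AI-assisted path is wrapped in a fallback to a documented rule-based path, so the process keeps running instead of stopping. The functions ai_classify and rule_based_classify are hypothetical placeholders for whatever the organization actually uses.

    # A minimal sketch of a business-continuity fallback. ai_classify() and
    # rule_based_classify() are hypothetical placeholders.
    def ai_classify(text: str) -> str:
        # The call to the external AI service would go here.
        raise ConnectionError("AI service unavailable")

    def rule_based_classify(text: str) -> str:
        # Simplified manual/rule-based alternative kept as a fallback.
        return "urgent" if "urgent" in text.lower() else "routine"

    def classify_request(text: str) -> str:
        try:
            return ai_classify(text)
        except Exception:
            # The AI path failed: fall back so the business process keeps running.
            return rule_based_classify(text)

    if __name__ == "__main__":
        print(classify_request("Urgent: payroll system is down"))  # -> urgent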
Ensure separation between personal and professional uses of AI accounts
Employers should set a clear policy separating employees’ personal and professional use of AI accounts. First, similar to the separation between personal email accounts and corporate email accounts, the contents of AI accounts may also have personal and private characteristics. Second, with respect to intellectual property, it is important to stipulate that outputs generated through work-related use of AI tools are the intellectual property of the employer and not of the employee.
Monitor regulatory developments
Regulatory discussions about the use of AI have accelerated considerably in recent months, both in Israel and internationally. In fact, numerous far-reaching legislative and regulatory initiatives are already underway at the international level.
Employers should closely monitor regulatory developments and prepare an action plan aided by legal advisors to ensure compliance with the emerging standards.
Institute an organizational policy governing the use of AI tools
Employers should formulate a clear and orderly plan, including internal control mechanisms, to ensure, inter alia, transparency, credibility, and the absence of discrimination.
We recommend that employers adopt a clear organizational policy governing the use of AI tools, assign a dedicated team to oversee compliance with the policy, and provide employees with training on all aspects of this policy.
AI systems can offer numerous advantages when used wisely and correctly. However, employers must exercise the utmost caution when using AI and must implement organizational and technological measures to minimize the risks of its use, especially in the context of HR.
***
Barnea Jaffa Lande is at your service for questions regarding the implementation of AI in the workplace and other AI regulation issues.
Adv. Netta Bromberg heads the firm’s Employment Department.
Dr. Avishay Klein heads the firm’s Privacy and Cyber Department.