As we approach the end of the year, here are the Top 10 Artificial Intelligence (“AI”) posts on the Debevoise Data Blog in 2022 by page views. If you are not already a Blog subscriber, click here to sign up.


1. New York's AI Employment Law, Proposed Rules, and Similar Regulations

September 26, 2022

In July 2022, we wrote about New York City's Automated Employment Decision Tool Law (the "AEDT Law" or the "Law"), which requires employers to conduct an independent bias audit of their AI employment tools by January 1, 2023. On September 23, 2022, New York City's Department of Consumer and Worker Protection ("DCWP") released proposed rules (the "Proposed Rules") that would implement the Law and clear up some, but not all, of its ambiguities. The Proposed Rules were subject to a comment period that ended on the day of DCWP's public hearing in November 2022. In December 2022, DCWP announced that, due to the high volume of public comments, it is planning a second public hearing and will not enforce the AEDT Law until April 15, 2023.


2.  New Automated Decision-Making Laws: Four Tips for Compliance

June 25, 2022

With the widespread adoption of AI and other complex algorithms across industries, many business decisions that used to be made by humans are now being made (either solely or primarily) by algorithms or models. Across the globe, lawmakers and regulators have adopted laws and rules aimed at reducing risks related to automated decision-making ("ADM") technologies, including operational, transparency, explainability, legal process, and discrimination risks. In this post, we discuss several new laws focused on ADM that are either already in effect or will take effect in 2023, as well as circumstances in which litigants have used these laws to challenge companies' use of ADM tools. In light of these trends, we have also included four tips for companies seeking to establish practical compliance and governance programs for their ADM systems.


3.  Connecticut Requires Non-Discrimination Certifications from Insurers Using AI

May 2, 2022

One of the most significant trends in insurance regulation involves regulators requiring insurers to demonstrate that their use of alternative data ("Big Data") and AI is not discriminatory. On April 20, 2022, the Connecticut Insurance Department (the "Department") released a notice titled "The Usage of Big Data and Avoidance of Discriminatory Practices" (the "Notice") addressed to all entities and persons licensed by the Department ("Licensees"). In the Notice, the Department raises concerns about the expanding role of Big Data in the insurance process and the potential for its use to result in unfair discrimination. In light of those concerns, the Notice reminds all Licensees of their obligation to ensure that their use of Big Data and AI complies with applicable anti-discrimination laws and requires all Connecticut domestic insurers to complete an annual data certification by September 1, 2022. We have previously discussed and predicted regulatory developments and trends regarding insurance and AI, including in our webcasts here and here. The Notice potentially positions Connecticut as the new front-runner in AI insurance regulation in the United States.


4.  California Restricts Insurers’ Use of AI and Big Data

July 5, 2022

On June 30, 2022, the California Department of Insurance (the “Department”) released Bulletin 2022-5 (the “Bulletin”), which places several limitations on the use of AI and Big Data by the insurance industry. The Bulletin states that the Department is aware of recent allegations of racial discrimination in marketing, rating, underwriting and claims practices by insurance companies and reminds all insurance companies of their obligations to conduct their businesses “in a manner that treats all similarly situated persons alike.” In this post, we discuss the six most significant aspects of the Bulletin and key takeaways for insurance companies seeking to comply with these new developments.


5.  The EU AI Liability Directive Will Change Artificial Intelligence Legal Risks

October 24, 2022

On September 28, 2022, the European Commission released a proposal to change the legal landscape for companies developing and implementing AI in EU Member States. This AI Liability Directive would require Member States to implement rules that would significantly lower evidentiary hurdles for victims injured by AI-related products or services to bring civil liability claims. Most importantly, the Directive would create a “presumption of causality” against the AI system’s developer, provider, or user. The proposed AI Liability Directive should be seen as part of a broader package of EU legal reforms aimed at regulating AI and other emerging technologies. The other parts of that package include the draft EU AI Act, which aspires to establish the world’s first comprehensive regulatory scheme for AI, and the Digital Services Act (“DSA”), which is set to transform the regulation of online intermediaries. In this post, we explore the key elements of the proposed AI Liability Directive, as well as steps that businesses should consider to enhance their AI governance and compliance programs in anticipation of these changes.


6. The White House’s Blueprint for an AI Bill of Rights: What It Gets Right and What It Gets Wrong About Artificial Intelligence Regulation

October 26, 2022

On October 4, 2022, the White House released the Blueprint for an AI Bill of Rights (the “Blueprint”), which provides nonbinding “principles” for organizations in both the public and private sectors to use when developing or deploying AI or other automated systems. In this post, we discuss the Blueprint’s five principles and provide a checklist of actions that the White House believes will advance each principle.


7. Protecting AI Models and Data – The Latest Cybersecurity Challenge

September 22, 2022

One of the most difficult challenges for cybersecurity professionals is the increasing complexity of corporate systems. Mergers, vendor integrations, new software tools and remote work all expand the footprint of companies’ information systems, creating a larger attack surface for hackers. The adoption of AI presents additional and, in some ways, unique cybersecurity challenges for protecting the AI models themselves, as well as the sensitive data that are used to train and operate the AI systems. On August 31, 2022, in recognition of these growing challenges, the UK National Cyber Security Centre (“NCSC”) released its Principles for the Security of Machine Learning, which are designed to help companies protect AI systems from exploitation. In this post, we discuss the NCSC’s recommendations, and we examine the growing cybersecurity threats to AI systems and how companies can prepare for and respond to these attacks.


8. The Value of AI Incident Response Plans and Tabletop Exercises

April 27, 2022

Today, it is widely accepted that most large organizations benefit from maintaining a written cybersecurity incident response plan ("CIRP") to guide their responses to cyberattacks. For businesses that have invested heavily in AI, however, the risks of AI-related incidents, and the value of an AI incident response plan ("AIRP") in mitigating their impact, are often underestimated. In this post, we discuss the value of CIRPs and tabletop exercises for both cybersecurity and AI, as well as critical tasks and responsibilities that companies could consider including in an AIRP.


9. AI Oversight Is Becoming a Board Issue

April 7, 2022

As more businesses adopt AI, directors on many corporate boards are starting to consider their oversight obligations. Part of this interest is related to directors’ increasing focus on Environmental, Social and Governance (“ESG”) issues. There is a growing recognition that, for all its promise, AI can present serious risks to society, including invasion of privacy, carbon emissions and perpetuation of discrimination. But there is also a more traditional basis for the recent interest of corporate directors in AI: as algorithmic decision-making becomes part of many core business functions, it creates the kind of enterprise risks to which boards need to pay attention. In this post, we discuss key considerations for boards and companies where AI has become (or is likely to become in the near future) a mission-critical regulatory compliance risk.


10. Why Ethical AI Initiatives Need Help from Corporate Compliance

April 4, 2022

AI is becoming part of the core business operations at many companies. This widespread adoption of AI has led to a proliferation of corporate "ethical AI" principles and programs, as companies seek to ensure that they are using AI fairly and responsibly, and in a manner consistent with the growing expectations of customers, employees, investors, regulators, and the public. But ethical AI programs at many companies are struggling. Recent reports of AI ethics leaders being fired, resigning, or bringing whistleblower claims illustrate the friction that commonly arises between ethical AI teams and executives who are trying to gain efficiencies and competitive advantages through the adoption of AI. In this post, we discuss the recent emergence of a regulatory compliance approach to AI, as well as frameworks and key considerations for companies developing and enhancing AI programs.


To subscribe to the Data Blog, please click here.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.