Allowing employees to use generative AI (“GenAI”) comes with significant risks—such as the loss of confidentiality over sensitive firm and client information, mistakes in important documents or decisions, the erosion of critical skills, and potential violations of contractual obligations and regulatory requirements. That said, one of the biggest AI risks comes from not letting employees use GenAI tools at all. Back in 2023, when GenAI tools were new, their benefits for most businesses were still speculative, and their risks were not well understood, many firms reasonably concluded that the lowest-risk option was withholding GenAI access from most employees. But in 2025, many GenAI tools are well developed for enterprise use, have demonstrated that they can provide enormous value, and carry risks that are better understood—all of which leads to a different cost-benefit analysis. Today, powerful GenAI tools are available to employees on their personal phones. As a result, the risk of off-platform GenAI use is growing—and for many firms, prohibiting employees from using GenAI for work is no longer the lowest-risk option.

In this Debevoise Client Update, we discuss lessons for AI adoption from our experience with off-channel text communications, using AI tools for recording, transcribing, and summarizing meetings (“AI meeting tools”) as an illustrative example of the risks of being too conservative with GenAI adoption.

The Growing Value and Risks of AI Meeting Tool Adoption

AI meeting tools are becoming ubiquitous, both in existing video conferencing platforms (e.g., Zoom AI Companion, Microsoft Teams Copilot) and through built-in or downloadable mobile applications (e.g., Apple Intelligence). The widespread use of these tools creates many risks, including lack of adequate notice or consent, the creation of a large number of potentially inaccurate documents, inconsistent compliance with record-keeping and legal hold obligations, and significant discovery burdens. Because of these risks, many firms have imposed blanket bans on AI meeting tools—much like the prohibitions imposed on mobile messaging for business purposes once smartphones became commonplace. But the SEC and CFTC off-channel communications sweep suggests that outright bans on useful technology can carry significant risk, and many firms are now choosing to manage the enterprise risk of AI meeting tools by allowing limited, controlled use within a practical framework that balances business needs with compliance safeguards.

Off-Channel Risk Is Real

From 2021 until early 2025, the SEC and CFTC conducted investigations into off-channel communications at dozens of asset managers and broker-dealers, resulting in more than $3 billion in total penalties. These firms were charged with failing to supervise, detect, and prevent off-channel communications on personal mobile messaging platforms such as iMessage, WhatsApp, and Signal.[1] The government alleged that because firms lacked control over these messages—including the ability to identify and produce them to regulators—this conduct obstructed its investigative and examination efforts to safeguard market integrity in violation of the federal securities laws.[2] The failure to preserve and produce off-channel communications has also resulted in significant adverse consequences in civil litigation, where courts have imposed sanctions for improperly preserved or produced communications.[3]

As these matters reflect, mobile messaging is a widespread and routine method of business communication. Yet firms often imposed blanket bans on mobile messaging without providing an enterprise mobile messaging platform, unintentionally driving employees to unmonitored channels beyond firm surveillance, oversight, and control. Moreover, the government enforcement matters and civil litigation discovery disputes demonstrate that off-channel communications can impede regulatory and litigation response. When firms become aware of off-channel communications, they must rely on individual employees to identify and produce the relevant messages—often without reliable verification mechanisms. Retrieving off-channel communications from employees’ personal devices is also a complex and burdensome process that often implicates challenging cross-jurisdictional employment law and privacy considerations.

Let’s Not Do That Again

GenAI tools are booming in popularity because they offer significant employee efficiency and productivity benefits. While many firms have resisted adopting enterprise AI tools because of legitimate concerns, wholesale prohibitions on AI meeting tools and other GenAI applications pose risks similar to those of off-channel communications. For example, just as outright bans on text messaging historically resulted in unsupervised off-channel communications, prohibitions on AI meeting tools could drive employees toward unapproved AI meeting tools on personal devices—and the creation of meeting summaries, transcripts, and recordings that reside beyond the reach of firm supervision. This issue will only grow as AI tools gain popularity and users come to rely on their productivity benefits. As with off-channel communications, off-channel AI meeting tool content may also raise discovery concerns.

For this reason, firms facing considerable demand for AI meeting tools and other GenAI applications should consider a pragmatic approach to adoption that responds to business needs within a robust compliance and oversight framework. To integrate AI meeting tools successfully into the enterprise environment, firms should consider first identifying the AI meeting tools that meet their business needs and then testing those tools through carefully crafted and monitored pilot programs. Such pilot programs enable firms to determine which features offer the greatest value from a risk-reward perspective and to establish tailored compliance protocols.

Once a firm has selected an enterprise AI meeting tool, it should consider implementing guardrails that effectively balance the risks and benefits of such tools (which may differ across functions and meeting types), incorporating the following considerations:

  • Clear Usage Guidelines: Define which groups can use AI meeting tools and for which types of meetings. It may be easier to start with low-risk meeting categories, like technology trainings or routine internal update meetings that do not involve sensitive client, firm, or personal information.
  • Accuracy: Before retaining content from AI meeting tools in a firm’s records, establish a protocol for (1) adding disclaimers to the output of AI meeting tools that label the content as AI generated, and/or (2) as appropriate, reviewing the content for accuracy.
  • Data Confidentiality and Cybersecurity: Confirm that AI meeting tool providers, and the configurations implemented by the firm, adhere to the necessary confidentiality, security, and privacy standards.
  • Notice, Consent, and Transparency: Ensure that the required consent and notification practices are in place for meeting participants, including parties joining from outside the firm.
  • Circulation, Retention, and Preservation: Develop protocols for retaining, searching, and producing AI-generated content in compliance with regulatory and litigation-related obligations.
  • Monitoring and Oversight: Establish systems for evaluating AI meeting tool usage and compliance to ensure that the benefits of their use continue to outweigh the risks.

To subscribe to the Data Blog, please click here.

The Debevoise STAAR (Suite of Tools for Assessing AI Risk) is a monthly subscription service that provides Debevoise clients with an online suite of tools to help them fast track their AI adoption. Please contact us at STAARinfo@debevoise.com for more information.

The cover art used in this blog post was generated by ChatGPT 4o.


[1] See, e.g., Press Release, Sec. & Exch. Comm’n, Twelve Firms to Pay More than $63 Million Combined to Settle SEC’s Charges for Recordkeeping Failures (Jan. 13, 2025), https://www.sec.gov/newsroom/press-releases/2025-6; Press Release, Sec. & Exch. Comm’n, Eleven Firms to Pay More than $88 Million Combined to Settle SEC’s Charges for Widespread Recordkeeping Failures (Sept. 24, 2024), https://www.sec.gov/newsroom/press-releases/2024-144; Press Release, Sec. & Exch. Comm’n, Twenty-six Firms to Pay More than $390 Million Combined to Settle SEC’s Charges for Widespread Recordkeeping Failures (Aug. 14, 2024), https://www.sec.gov/newsroom/press-releases/2024-98; Press Release, Sec. & Exch. Comm’n, Sixteen Firms to Pay More than $81 Million Combined to Settle Charges for Widespread Recordkeeping Failures (Feb. 9, 2024), https://www.sec.gov/newsroom/press-releases/2024-18; Press Release, Sec. & Exch. Comm’n, SEC Charges 11 Wall Street Firms with Widespread Recordkeeping Failures (Aug. 8, 2023), https://www.sec.gov/news/press-release/2023-149; Press Release, Commodity Futures Trading Comm’n, CFTC Orders Four Financial Institutions to Pay Total of $260 Million for Recordkeeping and Supervision Failures for Widespread Use of Unapproved Communication Methods (Aug. 8, 2023), https://www.cftc.gov/PressRoom/PressReleases/8762-23; Press Release, Sec. & Exch. Comm’n, SEC Charges 16 Wall Street Firms with Widespread Recordkeeping Failures (Sept. 27, 2022), https://www.sec.gov/news/press-release/2022-174; Press Release, Commodity Futures Trading Comm’n, CFTC Orders 11 Financial Institutions to Pay Over $710 Million for Recordkeeping and Supervision Failures for Widespread Use of Unapproved Communication Methods (Sept. 27, 2022), https://www.cftc.gov/PressRoom/PressReleases/8599-22.

[2] See, e.g., Press Release, Sec. & Exch. Comm’n, Twelve Firms to Pay More than $63 Million Combined to Settle SEC’s Charges for Recordkeeping Failures (Jan. 13, 2025), https://www.sec.gov/newsroom/press-releases/2025-6 (“In order to effectively carry out their oversight responsibilities, the Commission’s Examinations and Enforcement Divisions must, and indeed do, rely heavily on registrants complying with the books and records requirements of the federal securities laws.”) (quoting Sanjay Wadhwa, Acting Dir., Div. of Enforcement); Press Release, Sec. & Exch. Comm’n, SEC Charges 11 Wall Street Firms with Widespread Recordkeeping Failures (Aug. 8, 2023), https://www.sec.gov/news/press-release/2023-149 (“Compliance with the books and records requirements of the federal securities laws is essential to investor protection and well-functioning markets.”) (quoting Gurbir S. Grewal, Dir., Div. of Enforcement); Press Release, Sec. & Exch. Comm’n, JPMorgan Admits to Widespread Recordkeeping Failures and Agrees to Pay $125 Million Penalty to Resolve SEC Charges (Dec. 17, 2021), https://www.sec.gov/newsroom/press-releases/2021-262 (“Since the 1930s, recordkeeping and books-and-records obligations have been an essential part of market integrity and a foundational component of the SEC’s ability to be an effective cop on the beat.”) (quoting Gary Gensler, Chair).

[3] See, e.g., Safelite Grp., Inc. v. Lockridge, No. 2:21-cv-4558 (S.D. Ohio Sept. 30, 2024) (imposing sanctions under Rule 37(e)(1) for failure to preserve text messages, resulting in permissive adverse-inference jury instruction and attorneys’ fees); Maziar v. City of Atlanta, No. 1:21-cv-02172-SDG (N.D. Ga. June 10, 2024) (awarding attorneys’ fees and costs and denying defendant’s motion for summary judgment as a sanction for grossly negligent failure to preserve text messages); Goldstein v. Denner, C.A. No. 2020-1061-JTL, 2024 WL 303638 (Del. Ch. Jan. 26, 2024) (granting motion for sanctions where defendants recklessly failed to preserve relevant text messages).


Author

Andrew J. Ceresney is a partner in the New York office and Co-Chair of the Litigation Department. Mr. Ceresney represents public companies, financial institutions, asset management firms, accounting firms, boards of directors, and individuals in federal and state government investigations and contested litigation in federal and state courts. Mr. Ceresney has many years of experience prosecuting and defending a wide range of white collar criminal and civil cases, having served in senior law enforcement roles at both the United States Securities and Exchange Commission and the U.S. Attorney’s Office for the Southern District of New York. Mr. Ceresney also has tried and supervised many jury and non-jury trials and argued numerous appeals before federal and state courts of appeal.

Author

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense and cybersecurity regulatory counseling and defense. Charu can be reached at cchandra@debevoise.com.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Julie M. Riewe is a litigation partner and a member of Debevoise’s White Collar & Regulatory Defense Group. Her practice focuses on securities-related enforcement and compliance issues and internal investigations, and she has significant experience with matters involving private equity funds, hedge funds, mutual funds, business development companies, separately managed accounts and other asset managers. She can be reached at jriewe@debevoise.com.

Author

Kristin Snyder is a litigation partner and member of the firm’s White Collar & Regulatory Defense Group. Her practice focuses on securities-related regulatory and enforcement matters, particularly for private investment firms and other asset managers.

Author

Suchita Mandavilli Brundage is an associate in the Debevoise Data Strategy & Security Group. She can be reached at smbrundage@debevoise.com.