Balancing Legal Ethics With Rapid AI Innovation

The promised efficiency gains offered by these tools come with their own set of challenges, including doubts over result accuracy and concerns about client confidentiality.

There’s no doubt about it — we’ve entered the generative artificial intelligence (AI) age and advancements are occurring at a rapid clip. Whether you follow technology news or not, you’ve likely encountered many recent headlines about the release of new generative AI products.

The vast majority of legal technology generative AI products (50-plus companies and counting) are in beta, but just this week two legal technology companies announced the general release of generative AI tools into their products.

On Tuesday, NetDocuments, a company that provides document management software, announced the global rollout of PatternBuilder Max. This new generative AI capability includes nine out-of-the-box apps designed to automate legal workflows involving documents. Key apps include summarize, draft, timeline, translate, and compare.

On Wednesday, LexisNexis announced the general availability of Lexis+ AI for U.S. customers. According to the press release, the generative AI solution “features conversational search, intelligent legal drafting, insightful summarization, and document upload capabilities.”

The rapid release of both of these products is indicative of the general demand and thirst for access to legal generative AI technology. I’ve been covering the intersection of law and technology for nearly two decades, and I have never seen legal professionals express this level of interest in an emerging technology. Every time I speak about this topic, the room is packed with attendees who are incredibly attentive and curious about generative AI and how they can use it in their law firms.

Not surprisingly, one of the primary blockers to lawyers embracing tools like ChatGPT is legal ethics rules. The efficiency gains these tools promise come with challenges of their own, including doubts about the accuracy of results and concerns about client confidentiality. In the absence of clear ethical guidance, adoption by lawyers is anything but straightforward.

Fortunately, ethics committees in several states are already hard at work on these issues. For example, in July, the New York State Bar Association announced that it was forming a task force to address emerging issues related to artificial intelligence. A few weeks later, the Texas State Bar also disclosed the formation of a group that would “examine the ethical pitfalls and practical uses of AI and report back within the year.” In May, the California Bar created a committee tasked with examining the impact of AI on the profession.

Most recently, the Florida Bar threw its hat into the ring. At the direction of the Florida Bar Board of Governors, and prompted by an inquiry from the Special Committee on Artificial Intelligence (AI) Tools and Resources, the Board Review Committee on Professional Ethics plans to consider drafting a proposed advisory opinion. The committee is seeking comments from Bar members on the following questions that may be considered in the opinion:

1) Whether a lawyer is required to obtain a client’s informed consent to use generative AI in the client’s representation;

2) Whether a lawyer is required to supervise generative AI and other similar large language model-based technology pursuant to the standard applicable to non-lawyer assistants;

3) The ethical limitations and conditions that apply to a lawyer’s fees and costs when a lawyer uses generative AI or other similar large language model-based technology in providing legal services, including whether a lawyer must revise their fees to reflect an increase in efficiency due to the use of AI technology and whether a lawyer may charge clients for the time spent learning to use AI technology more effectively;

4) May a law firm advertise that its private and/or in-house generative AI technology is objectively superior or unique when compared to those used by other lawyers or providers; and

5) May a lawyer instruct or encourage clients to create and rely upon due diligence reports generated solely by AI technology?

As we head into a new era where AI innovation and legal ethics may collide, it’s clear that we’re in for some interesting times. Generative AI tools are already here and lawyers are using them, despite the unsettled ethical landscape.

Demand for these tools is high, and so are the stakes when compliance falls short. It’s anyone’s guess how the ethical chips may fall, but one thing is certain: the collision of AI and legal ethics will make for some compelling developments that could significantly impact the future of legal practice.


Nicole Black is a Rochester, New York attorney and Director of Business and Community Relations at MyCase, web-based law practice management software. She’s been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York. She’s easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter at @nikiblack and she can be reached at niki.black@mycase.com.
