Supervising AI: 7 Steps For Lawyers To Create Accountability

One thing remains certain: accountability is vital.

Supervision is a familiar concept for professionals in various fields, including lawyers bound by the American Bar Association’s model rules. However, there is not yet a universally agreed-upon definition of successful supervision, nor established best practices for overseeing humans or machines.

As lawyers now become responsible for supervising AI technology, one thing remains certain: accountability is vital. Embracing this responsibility gives lawyers the power to help shape AI as a fair and just technological advancement that enhances the pursuit of justice in society.

Until industry and government leaders develop official AI supervisory frameworks and audit processes, here are seven steps you can take now to create accountability for your use of AI:

1. Maintain a decision log. It’s hardly surprising that, with lawyers involved, most of these steps involve documentation. Keep track of your decisions while using AI, especially the decisions you base on an AI tool’s output (see step 2). Note when and why you decided to use a specific AI-powered tool. This log will serve as a valuable reference for future improvements and as evidence should any issues arise. (A minimal sketch of what such a log entry might look like appears after this list.)

2. Record reliance on AI outputs. Expand your record-keeping with details on all AI-generated conclusions you rely on, including the tool used and the steps taken to verify its findings independently (see step 3). Adding the date and time creates a chronological record that may prove useful should legal disputes occur regarding your choice to use (or not use) AI.

3. Fact-check and scrutinize AI-generated content. Lawyers must diligently monitor the work of AI, just as they do for new associates working in law firms and legal departments. Examine AI outputs thoroughly to ensure their accuracy, credibility, and completeness. Verify any material conclusions and legal reasoning AI provides before finalizing your work. In doing so, you uphold the integrity of the legal profession and help refine AI into a fair, unbiased, and indispensable tool of legal justice.

4. Address conflicts of interest. Take proactive steps to consider possible conflicts arising from your firm, clients, AI tool vendors, AI models, and the source of the data sets used to train AI. Document your considerations and how you addressed each to demonstrate transparent and ethical conduct.

5. Test various generative AI prompts. Experiment with different ways of prompting a generative AI tool to explore potential inconsistencies or discrepancies in its answers. This can help you uncover underlying biases or shortcomings in an AI system, reducing potential harm and building a more robust and helpful tool for users as AI systems learn from your feedback. (A short sketch of this kind of prompt-variation check appears after this list.)

6. Prevent and address biased outcomes. Actively prevent the creation or reinforcement of unfair biases in AI training data and models. Unfair biases can perpetuate detrimental stereotypes and uphold inequalities that seep into decisions that impact lives.

Collaborate with AI experts who can help you pinpoint sources of bias and develop counteractive remedies. Continuously monitor AI systems to detect and rectify any unintended biases promptly.

7. Seek independent analysis for bias detection. Periodically solicit an outside opinion on the conclusions your AI tool provides. A neutral external perspective can uncover blind spots and biases that may go unnoticed during internal evaluations and illuminate areas for improvement.
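
To make steps 1 and 2 concrete, here is a minimal sketch in Python of what a structured decision-log entry might look like. Everything in it is an illustrative assumption rather than a prescribed format: the log_ai_decision helper, the field names, and the JSON Lines file are stand-ins, and a spreadsheet or practice-management system can capture the same information.

```python
# Illustrative sketch only: the helper name, field names, and file format
# are assumptions, not a prescribed or industry-standard logging scheme.
import json
from datetime import datetime, timezone

def log_ai_decision(tool, reason, prompt, output_summary, relied_on,
                    verification, path="ai_decision_log.jsonl"):
    """Append one timestamped decision entry to a JSON Lines log file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # chronological record (step 2)
        "tool": tool,                        # which AI-powered tool was used (step 1)
        "reason_for_use": reason,            # why you decided to use it (step 1)
        "prompt": prompt,                    # what the tool was asked
        "output_summary": output_summary,    # the AI-generated conclusion relied on
        "relied_on": relied_on,              # whether the output informed your work (step 2)
        "verification_steps": verification,  # how you independently checked it (step 3)
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage with hypothetical details:
log_ai_decision(
    tool="(your AI research tool)",
    reason="First-pass research triage before manual review",
    prompt="Summarize recent case law on non-compete enforceability",
    output_summary="Tool cited three cases supporting narrow enforceability",
    relied_on=True,
    verification="Checked each cited case against the official reporter",
)
```

The point of the structure is simply that each entry ties a dated decision to the tool used, the reason for using it, and the verification performed, so the log can later serve as the evidence these steps call for.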
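Step 5 can be made systematic in the same spirit. The sketch below assumes a hypothetical ask_model function standing in for whatever AI tool you actually use; the idea is just to pose the same substantive question several ways and flag divergent answers for human review and for your decision log.

```python
# Illustrative sketch: ask_model is a hypothetical stand-in for your AI tool.
def ask_model(prompt: str) -> str:
    # Stub for demonstration; swap in a real call to your AI tool.
    return "(model answer placeholder)"

# Rephrasings of the same substantive question; divergent answers may
# signal bias or instability worth documenting (steps 1 and 2).
variants = [
    "Is a two-year non-compete enforceable in California?",
    "Under California law, can an employer enforce a two-year non-compete?",
    "A California employee signed a two-year non-compete. Is it binding?",
]

answers = {prompt: ask_model(prompt) for prompt in variants}
if len(set(answers.values())) > 1:
    print("Inconsistent answers across phrasings; flag for human review:")
    for prompt, answer in answers.items():
        print(f"- {prompt!r} -> {answer!r}")
else:
    print("Answers were consistent across these phrasings.")
```

Even a trivial check like this documents that you probed the tool for consistency, which feeds directly back into the record-keeping in steps 1 and 2.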

Promote The Responsible Use Of AI

Through these steps, you can do more than just confidently supervise AI and ensure accountability for its use. You can also help promote AI technology’s fair and unbiased development in the legal field. Your efforts help shape AI’s evolution into a tool that can create equitable outcomes for all. 

As lawyers embrace a transparent and comprehensive approach to AI supervision, companies and society will realize short-term benefits — such as improved efficiency and accuracy — and long-term advancements grounded in fairness and justice. Then, we’ll ask AI to produce the ideal definition and best practices for successfully supervising humans and AI!

What do you think changes when lawyers supervise AI rather than humans?

What other steps should lawyers take to ensure accountability for AI use?


Olga V. Mack is the VP at LexisNexis and CEO of Parley Pro, a next-generation contract management company that has pioneered online negotiation technology. Olga embraces legal innovation and has dedicated her career to improving and shaping the future of law. She is convinced that the legal profession will emerge even stronger, more resilient, and more inclusive than before by embracing technology. Olga is also an award-winning general counsel, operations professional, startup advisor, public speaker, adjunct professor, and entrepreneur. She founded the Women Serve on Boards movement that advocates for women to participate on corporate boards of Fortune 500 companies. She authored Get on Board: Earning Your Ticket to a Corporate Board Seat, Fundamentals of Smart Contract Security, and Blockchain Value: Transforming Business Models, Society, and Communities. She is working on Visual IQ for Lawyers, her next book (ABA 2023). You can follow Olga on Twitter @olgavmack.
