Webinar replay: Artificial Intelligence for Legal Professionals Demystified

This webinar with senior thought leaders from Thomson Reuters delved into the use of artificial intelligence in the context of legal practice, with the aim of demystifying some of the technological complexities and misconceptions around generative AI, including the concept of ‘AI hallucinations’.  

We spoke with Zachary Warren, who leads technology and innovation content and research for the Thomson Reuters Institute, based in Minneapolis, and Dr Andrew Fletcher, director of AI Strategy & Partnerships at Thomson Reuters Labs, based in London. Fletcher founded the London Labs team, comprising data scientists, designers, and engineers, in 2016.

Within the Institute, Warren has helmed multiple research projects on technology in professional services, including a series of Generative AI reports for the legal and tax industries. Commenting at the outset of the webinar on where the sector sits on the adoption curve, Warren said: “Legal, for better or worse, had a reputation of being slow to adopt. I don’t think that’s the case with generative AI.”

Fletcher added: “There is a huge spectrum as to where people are at with generative AI. There have been some real trailblazer law firms and corporate legal teams, but 2024 is the year of moving from POC to production and that’s certainly the case for us in TR, where gen AI capabilities are becoming embedded in our core products. That helps with adoption – it’s less a case of having to be a trailblazer. We are seeing more ways in which you can use this capability in existing products like the TR ones, but you also mentioned Copilot, and bringing those capabilities into the tools that people are already using is the next step to enabling you to explore further.”

Fletcher pointed out that the barrier to AI has been reduced, and he observed: “It used to be that AI was spoken about as a mystical thing where you would come up with an idea and need to go off for a few weeks or months to cook up something that you could use to test, whereas now you can just say ‘write me an email about X’ and you’ll get something back, which has driven a real curiosity among people.”

During the webinar the live, global audience asked multiple and varied questions, kicking off with: ‘How can AI be used to improve PII compliance and contract and policy drafting? Can it be used to verify compliance in other jurisdictions?’

Fletcher said: “One of the things you can deploy generative AI to do is extract certain language from the documents that you provide. So you could put in your contracts and ask it to identify the relevant PII or language around that. That’s not a new capability that only gen AI can do; people will be familiar with contract analysis tools and the various ways you can do that. But what generative AI does is enable you to tackle some of the long tail of agreements that you might not have pre-trained AI capabilities for already. So, to answer the question, gen AI doesn’t remove the need for you to interrogate that detail to make sure it’s meeting your needs, but it is definitely a tool that adds value to that.”
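As a rough illustration of the kind of extraction Fletcher describes, the sketch below prompts a generative model to flag PII-related clauses in a contract. The model name, prompt wording, input file and use of the OpenAI Python client are illustrative assumptions only; they are not details from the webinar or from any Thomson Reuters product.

```python
# Illustrative sketch only: the model, prompt and input file are assumptions,
# not details of any product discussed in the webinar.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical contract text supplied by the user
contract_text = open("supplier_agreement.txt").read()

instructions = (
    "You are reviewing a commercial contract. "
    "List every clause that references personally identifiable information (PII), "
    "quoting the clause number and the relevant wording. "
    "If no clause mentions PII, say so explicitly."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": contract_text},
    ],
    temperature=0,  # keep the extraction as deterministic as possible
)

print(response.choices[0].message.content)
```

As Fletcher notes, output like this still needs to be interrogated by a lawyer; the tool surfaces candidate language rather than certifying compliance.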

Another listener asked: “Assuming we’re past peak hype and into the trough of disillusionment, how long until we get to the plateau of productivity?” Warren said: “I don’t think we’re at the trough of disillusionment yet, just because this is a relatively new technology and people are still exploring it. The hype cycle is still up. But in my conversations I’m finding people taking a sceptical eye. In our research, we asked: ‘What is your sentiment towards generative AI?’ and it largely skewed more positive than negative, but the number one answer was ‘hesitant’. People just wanted to see where this goes. As a result, I don’t think that the trough of disillusionment will necessarily be that much of a trough compared with some of the past implementations of AI in legal.

“As to how long before we get to the plateau of productivity, that depends on adoption rates and crucially the client side as well – how much this starts to get back into RFPs and is demanded from law firms, which will drive adoption and an understanding of what it can do.”

On hallucinations, Fletcher observed: “Over the last year and a half, this notion that we’re talking about machines hallucinating has taken hold. It’s a very sci-fi concept, and it shows that our choice of language can sometimes humanise machines when they are machines, not humans. But when we’re thinking about it from our product perspective, it’s about the work we do in the build, which comes from the Labs team and our partners across Thomson Reuters, and about how we’re prompting the model, and that includes using our content to ground the answers. It also includes other pre- and post-processing steps, such as: ‘If you cannot find an answer from Practical Law, then come back and say there is not an answer in Practical Law,’ so you know that you’re controlling for hallucinations.”
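Fletcher’s point about grounding and refusal instructions can be sketched as a generic retrieval-augmented prompting pattern. The example below is an assumption-laden illustration: the function name, the passages argument and the prompt wording are hypothetical, and it is not the Thomson Reuters implementation.

```python
# Generic grounding/refusal pattern; the function, prompt text and passages
# are hypothetical illustrations, not the Thomson Reuters implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_from_passages(question: str, passages: list[str]) -> str:
    """Answer only from the supplied source passages; refuse if they lack the answer."""
    context = "\n\n".join(passages)
    system_prompt = (
        "Answer the user's question using ONLY the passages provided. "
        "If the passages do not contain the answer, reply exactly: "
        "'There is not an answer in the provided source.'"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Passages:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content
```

Grounding the model in a trusted corpus and instructing it to refuse when the corpus is silent is what turns an open-ended generator into something whose hallucinations can be controlled for, which is the product-design point Fletcher is making.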

During the webinar we answered many more questions and looked at where gen AI can and should be used in today’s legal practice, for both legal and non-legal work; what the key difference is between public gen AI systems like ChatGPT and proprietary systems; and why that distinction matters for legal professionals.

To hear those questions and answers, plus some future gazing, listen to the webinar replay HERE