Exclusive: Addleshaw Goddard undertakes five gen AI trials with cross-firm engagement 

Addleshaw Goddard is conducting trials or proofs of concept with five generative AI vendors in the drafting, legal review and productivity space, led by a working group set up specifically to investigate and research the technology, with engagement from around 140 people across the UK top 50 firm.

The Generative AI Decision Group, officially formed in January, is made up of general counsel David Handy; head of innovation and legal tech Kerry Westland; head of innovation and legal tech operations Elliot White; IT director Scott Foley; head of technology/deputy IT director Peter Fardy; and Katie Kinloch, a data protection lawyer in the risk team.

Westland told Legal IT Insider: “We’re investigating and researching the possibilities of generative AI including what are the practical applications and how do we address the questions around privacy and data confidentiality. The working group is always communicating and speaking to companies re whether they meet our risk profile. It means we’re having much better discussions around what this means to us as a firm and enabling us to move slightly quicker with some of the tools.”  

She added: “A lot of it is about whether we can get these new technologies signed off for the work that we do. Previously we were working on the guidance. You need everyone to be comfortable. The interesting thing about this space is that some of these companies are new, so not where our normal risk profile would be, and we have to agree where on the risk scale we want to be and when a decision needs to get escalated.”

From a long list of around 40 platforms, Addleshaw Goddard is taking five forward, and White told us: “We are either testing betas, trialling or running a fuller POC. They are all using generative AI and mostly with OpenAI but they are doing it in very different ways.” 

Westland added: “The POC is not controlled because we have learned that if it is controlled, we won’t learn the full potential. We’re saying to people ‘this is what this tool does but think about how else it could help you during your day.’ We hope that they will come up with more use cases, and that changes the value you get from that tool.”

Whereas in the past the firm’s innovation and technology teams have identified a problem and gone to find technology to solve it, Westland said that generative AI has the potential to change that, commenting: “If this does what we think it does, it will change almost how and why we work. At the moment it’s a bit cool to talk about gen AI but what we’re seeing is genuinely interesting in terms of what it can do.”

She added: “We review a lot of documents in M&A, for example, and you might use machine learning tools to pull out rows of data to help our lawyers identify the risks and issues for clients. These tools, from what we’re seeing, turn that on its head. With the right prompt, you can almost say ‘tell me from these contracts where the issues are’. That almost changes the nature of due diligence.”

Having reached out to ask anyone interested in testing the new technologies to come forward, the working group now has 140 people, mostly fee-earners, engaged across the firm. Westland said: “The engagement from the firm is incredible. In all the years I have been doing this role, I haven’t had partners coming to us and asking what we’re doing with a technology.” 

The innovation team is focussing heavily on trialling the new technologies and White said: “For us one challenge is sifting through the companies that have said they have incorporated generative AI but haven’t, while others are much further ahead than expected. This is now taking up around 60% of myself and Kerry’s time; we have found ourselves having to focus on it heavily.” 

While law firms are nervous about the potential risks involved in using generative AI, and much of the work that Westland, White and the working group are doing is around assessing risk, Westland said: “It has to be a balance. Do you hold back on this? If we leave it too long, we will be behind.”

Alongside the trials and the possibility of licensing vendor-specific AI tools, the innovation team and working group are looking at building their own private ChatGPT application powered by OpenAI, and building their own large language model using open-source tools. In addition to providing training to its lawyers, the firm is running education and talking point sessions with clients, sharing what it is doing while also looking to understand its clients’ views on generative AI. 

Addleshaw Goddard has taken the decision not to block ChatGPT; instead the innovation team, together with Handy, has put together guidance and policies governing its usage. Westland said: “If you are going to use ChatGPT, there is a specific way we expect you to do that and you aren’t to use confidential data, including client data. We encourage people to try ChatGPT and understand how it can help with more basic tasks. There are some great productivity uses, and if you are trying to write something, it can help you to form a paragraph around what you are trying to say and beyond.”

White added: “There is an element of trust involved and we expect people to follow the guidance. These are really smart people, who we feel we can trust with the education and guidance we’re giving them on generative AI. You can block ChatGPT and other technologies but that is no guarantee that it won’t be used. We are moving ahead on the basis of trying to understand how much it is used. ChatGPT is not the only technology that people will be interested in; there’s Anthropic’s Claude, Microsoft Bing and Google Bard. The genie is out of the bottle.”

caroline@legaltechnology.com