Maybe We've Got The Artificial Intelligence In Law 'Problem' All Wrong

Perhaps artificial intelligence could stand to be a little less human.

When some hapless NY lawyers submitted a brief riddled with case citations hallucinated by consumer-facing artificial intelligence juggernaut ChatGPT and then doubled down on the error, we figured the resulting discipline would serve as a wake-up call to attorneys everywhere. But there would be more. And more. And more.

We’ve repeatedly balked at declaring this an “AI problem,” because nothing about these cases really turned on the technology. Lawyers have an obligation to check their citations, and if they’re firing off briefs without bothering to read the underlying cases, that’s a professional problem whether ChatGPT spat out the case or their summer associate inserted the wrong cite. Regulating “AI” because an advocate fell down on the job seemed, at best, to miss the point and, at worst, to poison the well against a potentially powerful legal tool before it’s even gotten off the ground.

Another popular defense of AI against the slings and arrows of grandstanding judges is that the legal industry needs to remember that AI isn’t human: “It’s just like every other powerful — but ultimately dumb — tool, and you can’t just trust it like you can a human.” Conceived this way, AI fails because it’s not human enough. Detractors get their human egos stroked, and AI champions can market a bold future where AI creeps ever closer to humanity.

But maybe we’ve got this all backward.

“The problem with AI is that it’s more like humans than machines,” David Rosen, co-founder and CEO of Catylex, told me offhandedly the other day. “With all the foibles, and inaccuracies, and idiosyncratic mistakes.” It’s a jarring perspective to hear after months of legal tech chitchat about generative AI. Every conversation I’ve had over the last year frames itself around making AI more like a person, more able to parse what’s important from what’s superfluous. But the more I thought about it, the more I saw something to this idea. It reminded me of my issue with AI research tools trying to find the “right” answer when that might not be in the lawyer’s — or the client’s — best interest.

How might the whole discourse around AI change if we flipped the script?

If we started talking about AI as “too human,” we could worry less about figuring out how it makes a dangerous judgment call between two conclusions and worry more about a tool that tries too hard to please its bosses, makes sloppy errors when it jumps to conclusions, and holds out the false promise that it can generate the insights instead of leaving that work to the lawyers themselves. Reorient the pitch around a tool that ruthlessly and mechanically processes tons more information than a human ever could and delivers it to the lawyer in a format the humans can digest and evaluate themselves.

Make AI Artificial Again… if you will.


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.