Firm Submits Fee Request Based On ChatGPT Search... Judge Is Less Than Impressed

'Utterly and unusually unpersuasive,' he wrote.

Count Judge Paul Engelmayer of the Southern District of New York as an artificial intelligence skeptic. At least at this juncture. Describing the party’s use of the tool as “misbegotten at the jump,” the judge just shot down a fee request that relied — in part — upon ChatGPT inquiries.

Specifically, the Cuddy Law Firm applied for fees after prevailing in an Individuals with Disabilities Education Act matter. While the reasonable fees routinely awarded for IDEA matters in the SDNY would seem the most relevant basis for determining fees, the firm argued that these awards low-balled the compensation that their practice area required, citing: “(1) the Real Rate Report conducted by Wolters Kluwer; (2) the 2022 Litigation Hourly Rate Survey and Report conducted by the National Association of Legal Fee Analysis (“NALFA”); (3) the 50th Annual Survey of Law Firm Economics (“ASLFE”); and (4) the Laffey Matrix,” a series of reports that weren’t specific to IDEA cases. For instance, the Real Rate Report compiled rates charged “in general by litigation partners and associates at New York law firms.” Judge Engelmayer balked at assuming administrative actions involving IDEA reasonably play in the same price point as representing Goldman Sachs in a tussle with JPMC.

But the firm also offered supplementary data based on a ChatGPT search to “cross-check” that data:

It suffices to say that the Cuddy Law Firm’s invocation of ChatGPT as support for its aggressive fee bid is utterly and unusually unpersuasive. As the firm should have appreciated, treating ChatGPT’s conclusions as a useful gauge of the reasonable billing rate for the work of a lawyer with a particular background carrying out a bespoke assignment for a client in a niche practice area was misbegotten at the jump.

The firm argued that the ChatGPT search was relevant because it shows what a parent searching for representation would expect to pay. Parents aren’t going to read all these reports before deciding to hire a lawyer, but they will — increasingly — just ask ChatGPT how much it costs to hire a lawyer.

Which is not as utterly and unusually unpersuasive as the judge asserts.

On the other hand, “cross-check” is an apt term for what generative AI does because it’s just going to cull the same reports cited above — and gloss over all the same distinguishing factors that Judge Engelmayer notes — and spit out the exact same answer. Which is fairly unpersuasive.


Unfortunately, the opinion carries the rationale for rejecting this reasoning a little too far:

In two recent cases, courts in the Second Circuit have reproved counsel for relying on ChatGPT, where ChatGPT proved unable to distinguish between real and fictitious case citations.

How does this have anything to do with hallucinating caselaw? First of all, these caselaw hallucination stories have nothing to do with the technology and everything to do with lawyers who don’t bother to read the cases they cite. That’s a problem regardless of the tech that scrounged up the cite.

Unlike those matters, the firm wasn’t trying to assert that there’s a black letter answer out there, it was trying to say “this is a search parents would use and it would say X,” which isn’t a hallucination anyway. The fact that the bot was going to tell families that their IDEA administrative proceeding lawyer was going to charge Cravath rates makes it wrong, but not hallucinatory.

We have a lot of fun at the expense of hallucinating AI, but it’s crucial that we stop treating all of AI’s drawbacks as hallucination problems. Sometimes it gives bad answers because it draws from inappropriate or incomplete — but very, very real — data, and if we keep framing its issues solely through the lens of hallucinations, we’re going to overlook the times when its answers are verifiably real.


But just bad.

Earlier: For The Love Of All That Is Holy, Stop Blaming ChatGPT For This Bad Brief


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
