Is AI in the Courtroom Worth a Second Look? At Least One Judge Thinks So

The legal industry was abuzz last year when a federal judge in the Southern District of New York, in Mata v. Avianca, imposed monetary sanctions on two attorneys who had submitted a brief written by the artificial intelligence tool ChatGPT, which had hallucinated non-existent cases. Other courts took notice, and a spate of new or modified court rules followed, requiring attorneys to certify that they have reviewed and verified any AI-generated content submitted to the court.

A recent concurrence by U.S. Circuit Judge Kevin Newsom of the U.S. Court of Appeals for the Eleventh Circuit in an insurance case, Snell v. United Specialty Insurance Company, 102 F.4th 1208 (11th Cir. 2024), took a somewhat different view, dipping a very tentative toe in the water to suggest that there might actually be a way for judges to use generative AI in adjudicating certain cases.

The case turned on the “ordinary meaning” of language used in an insurance policy: the availability of an insurance defense and coverage depended on whether the construction of an in-ground trampoline could be deemed “landscaping.”  Judge Newsom wrote:

Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance.  Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider – consider – whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might – might – inform the interpretive analysis.  There, having thought the unthinkable, I’ve said the unsayable.  Now let me explain myself. 

Id. at 1221.  Judge Newsom went on to make the case that “ordinary-meaning interpretation aims to capture how normal people use language in their everyday lives—and the bulk of the LLMs’ training data seem to reflect exactly that.”  Id. at 1227.  LLMs “understand” context because they are essentially “high-octane language-prediction machines capable of probabilistically mapping, among other things, how ordinary people use words and phrases in context.”  Id. at 1228.  LLMs are accessible to judges, lawyers, and ordinary citizens, and the use of LLMs to facilitate ordinary-meaning interpretation “may actually enhance the transparency and reliability of the interpretive enterprise itself. . . .”  Id.  Judge Newsom acknowledged the potential downsides of his suggestion: that LLMs hallucinate (though he posited that such hallucinations would become fewer and farther between as the technology improved); that LLMs don’t capture offline speech, and thus might not fully account for underrepresented populations’ usages; that lawyers, judges, and would-be litigants might try to manipulate LLMs; and that reliance on LLMs will lead us into dystopia.  Id. at 1230-32.  But he viewed each of these downsides as surmountable.  See id.

The judge concluded by saying that he was “not, not, not” suggesting that AI should be used for rendering judgment:

My only proposal – and again, I think it’s a pretty modest one – is that we consider whether LLMs might provide additional datapoints to be used alongside dictionaries, canons, and syntactical context in the assessment of terms’ ordinary meaning.  That’s it. 

Id. at 1232.

Judge Newsom’s concurrence (which, by the way, was not necessary to the Court’s decision in the case) may come as a bit of a surprise, given U.S. courts’ current wariness of AI.  But those in the world of alternative dispute resolution are already evaluating products that will help neutrals identify next steps, such as offers and counteroffers in mediation, case strengths and weaknesses, and even likely outcomes.  Meanwhile, an early application of AI in China, known as the “smart court,” uses AI technology to offer advice, conduct hearings, and make decisions based on case law.  And there is no question that AI-assisted decision-making is already taking hold in other industries as well, including retail, medicine, customer service, risk assessment, and strategic planning.

The legal system is notoriously slow to adapt to technological change, and Judge Newsom may simply be ahead of the curve on this one.  But his concurrence has started a discussion of whether AI can and should be incorporated into certain types of judicial decision-making.  That debate will no doubt continue as the technology improves and some of Judge Newsom’s concerns are addressed.

Lisa A. Ferrari is the co-chair of Cozen O’Connor’s Copyright Practice.
