Circuit Judge Proposes Using AI-Powered LLMs to Interpret Legal Texts

U.S. Circuit Judge Kevin Newsom issued a concurring opinion on Tuesday with a “modest proposal” to use AI-powered large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini to help analyze and interpret the ordinary meaning of words and phrases in legal texts.

The underlying case involved a dispute between a landscaper and his insurance company over whether his installation of an in-ground trampoline fell under the term “landscaping” in his insurance policy. Because the policy did not define “landscaping,” the district court noted, coverage hinged on whether installing the trampoline fell within the common, everyday meaning of the word. After reviewing assorted dictionary definitions, the court determined that the work was not “landscaping.” The parties, however, continued to debate the plain meaning of the word on appeal.

Though the appellate court resolved the appeal without determining the ordinary meaning of the term, Judge Newsom took the opportunity to express his thoughts about artificial intelligence’s possible role in future disputes after “hours and hours (and hours) laboring over the question [of the ordinary meaning of ‘landscaping’ in the context].”

Judge Newsom recounted that querying ChatGPT about the ordinary meaning of “landscaping” produced an explanation that “squared with [his] own impression.” ChatGPT defined “landscaping” in part as “the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes,” and listed several activities that could qualify, including planting trees and installing paths and water features. Newsom noted that dictionary definitions took a more varied approach: some referenced only modification of “plant cover,” and some confined the activity to an aesthetic rather than practical pursuit.

Judge Newsom and his clerk went on to ask ChatGPT whether installing an in-ground trampoline was “landscaping,” and got an affirmative answer. Google’s Bard (now Gemini) gave a similar result.

In evaluating the pros and cons of using LLMs to interpret legal texts, Judge Newsom found one of the strongest pros to be that LLMs are trained on a vast range of ordinary-language data, “from Hemmingway [sic] novels and Ph.D. dissertations to gossip rags and comment threads.” However, he noted as one of the cons that LLMs cannot capture “pure offline” usages, meaning usages that do not occur online or are not eventually digitized online.

Other pros he listed were that LLMs can understand context, are accessible, are relatively transparent to research, and hold advantages over other empirical interpretive methods, such as dictionary research. Other cons were that LLMs can “hallucinate” (meaning they sometimes make things up), that lawyers, judges, and would-be litigants may try to manipulate them, and that “reliance on LLMs will lead us into dystopia.”

Finally, Judge Newsom offered some suggestions for using LLMs to determine the ordinary meanings of words and phrases: emphasizing more data “representing a more representative cross-section of Americans,” framing the proper question, giving thought to and documenting multiple prompts and responses, querying different models, specifying the desired output, and taking timeframes of usage into account.

Additional Reading

US judge makes ‘unthinkable’ pitch to use AI to interpret legal texts, Reuters (May 29, 2024)

Snell v. United Specialty Ins. Co. May 28 Opinion, Newsom Concurrence

Image Credit: Tada Images /