Among the damages sought in his pro se federal lawsuit: monetary sanctions in the amount of “$355.69 quintillion ($355,687,428,096,000,000,000)” — a request that Senior U.S. District Judge Judith Herrera characterized as “quite simply ludicrous.”
In the end, he himself had to pay $8,640 in sanctions after the judge determined other aspects of his legal filings were afflicted by a 21st-century phenomenon: AI hallucinations.
As artificial intelligence, particularly generative AI, becomes a widespread and transformative part of daily life, federal and state courts in New Mexico are finding that self-represented litigants and some attorneys are filing court documents containing false or misleading information.
For example, one attorney last year filed a pleading that cited at least six nonexistent cases to support his arguments.
“The six cases were fake and likely the handiwork of a ChatGPT or similar artificial intelligence (AI) program’s hallucinations,” wrote U.S. Magistrate Judge Damian Martínez of Las Cruces in a ruling in the case.
An out-of-state lawyer wrote the legal brief, which the New Mexico attorney didn’t read before filing.
Martínez’s ruling states that a “hallucination occurs when an AI database generates fake sources of information. To explain how this occurs, AI models are trained on data, and they learn to make predictions by finding patterns in the data.”
“However, the accuracy of these predictions often depends on the quality and completeness of the training data. If the training data is incomplete, biased, or otherwise flawed, the AI model may learn incorrect patterns, leading to inaccurate predictions or hallucinations,” the judge wrote.
Martínez fined the attorney $1,500, required him to report the incident to state and federal bar disciplinary committees and ordered him to take an hourlong course in legal ethics on the use of AI in writing.
Courts in New Mexico have detected AI-generated hallucinations in at least seven lawsuits since 2023, sometimes imposing sanctions but more often issuing warnings.
“A lot of self-represented litigants, especially, are relying heavily on AI and they don’t know how to check these citations or the statutes, and so they’re filing a lot of pleadings that have a lot of earmarks of hallucinations,” said state District Judge John P. Sugg of Carrizozo last week.
“I’ll read a motion that they filed, and I can’t find anything that they’ve cited to. We’ve had it with attorneys, too. I think that it’s concerning because we’ve got very limited judicial resources, very limited time and when we’re chasing down a bunch of stuff that doesn’t actually exist, it wastes a lot of our time.”
Last summer, a federal judge in Colorado ordered two attorneys representing MyPillow CEO Mike Lindell in a defamation case to pay $3,000 each after they used artificial intelligence to prepare a court filing riddled with mistakes and citations to cases that didn’t exist.
Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed the document in February. It contained more than two dozen mistakes, including hallucinated cases (fake cases invented by AI tools), according to National Public Radio.
In October 2023, then-Chief U.S. District Judge William Johnson of New Mexico discovered a filing in a case before him that cited fake or nonexistent judicial opinions. He wrote that it appeared to be only the second time a federal court had dealt with a pleading involving nonexistent judicial opinions.
“Quite obviously, many harms flow from such deception — including wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system (to name a few),” Johnson wrote.
Sugg and other judges aren’t opposed to the use of AI, so long as the information it produces is accurate.
While the New Mexico Supreme Court is looking into creating a formal policy on AI use in the state judiciary, Sugg said he imposed his own order two weeks ago.
He is requiring any attorney or self-represented litigant who relies on generative AI to draft, edit or modify any pleading, motion or other written document filed with the court to disclose the use of AI at the top of the document.
Filers must also certify that the language drafted by AI was checked for accuracy using traditional methods, such as legal databases, “or by a human being.”
“I think that AI is a good tool for a lot of people,” Sugg said. “It just needs to be something that we’re careful using.”
©2026 the Albuquerque Journal. Distributed by Tribune Content Agency, LLC.