AI Fabricated 20 Court Cases and Ended a Lawyer's Career.

Greg Lake walked into the Nebraska Supreme Court on February 18th to argue a divorce appeal. The justices stopped him 37 seconds in.
Not because he stumbled. Not because he was unprepared. Because one of them had read his brief.
Of the 63 citations Lake had submitted, 57 had problems. Twenty weren't just wrong; they didn't exist. Cases with real-sounding names, plausible dates, proper formatting. Take "Kennedy v. Kennedy," a 2019 Nebraska Court of Appeals decision Lake used to argue parenting time standards. Opposing counsel checked the citation. The real Kennedy v. Kennedy says nothing about parenting time, and the quote Lake pulled from it doesn't appear anywhere in Nebraska case law. The AI had written it from scratch.
A justice asked him directly: "The elephant in the room is whether or not you used artificial intelligence. Did you?"
Lake said no. Blamed a broken laptop. A wedding anniversary. The wrong draft.
The court didn't believe him. By April 15th, he was suspended from practicing law indefinitely.
Greg Lake's case was treated like a scandal. But it's just a Tuesday.
Damien Charlotin, a researcher at HEC Paris, maintains a global database of AI hallucination incidents in legal proceedings. He's tracked more than 1,200 cases, around 800 of them from US courts alone. He noted recently that the pace had hit ten cases from ten different courts on a single day.
A federal court ordered an Oregon attorney to pay $109,700 for AI-generated errors across multiple briefs in a family dispute over a vineyard. It wasn't isolated: the Sixth Circuit hit two Tennessee attorneys with $30,000, the highest federal appellate penalty linked to fabricated citations. That's at least $145,000 in sanctions in the first three months of 2026.
Lake's suspension is different in kind, not degree. It's the first time a US attorney has lost the right to practice entirely over AI hallucinations. Financial penalties are recoverable. A career suspension is not. Not cleanly, anyway.
None of this touched the company whose product fabricated the citations. That's a design choice, not an oversight.
OpenAI's terms of service prohibit using ChatGPT for "provision of tailored advice that requires a license, such as legal or medical advice." Notice the framing: they're not saying the tool can't do it. They're saying you agreed it shouldn't. If something goes wrong, you agreed. Every AI company does a version of this. The liability flows one direction, toward the user, and it's buried in a click-through agreement nobody reads.
AI tools carry no legal liability for their hallucinations. There is no AI entity to sue. No professional license to revoke. The tool that wrote 20 fictional court cases with confident, plausible specificity is still available, still being sold.
Nippon Life Insurance sued OpenAI in March after a woman used ChatGPT as a legal adviser and filed a wave of frivolous motions against the insurer. The lawsuit accuses OpenAI of practicing law without a license — the first time the liability wall has faced a serious legal challenge. OpenAI's response: "This complaint lacks any merit whatsoever."
A Northwestern University study published in the Sedona Conference Journal this year surveyed 502 federal judges. Of those who responded, 61.6% reported using AI tools in their judicial work, most commonly for legal research and document review. The exact same categories that get attorneys sanctioned when the AI makes a mistake. And 45.5% had received no AI training from their courts.
The institution handing down $109,700 penalties to lawyers for AI errors is itself using AI, without oversight, without the verification standards it imposes on everyone else who appears before it.
One analysis puts it precisely: if you frame AI as a product with foreseeable failure modes, the sanctions start to look less like ethics enforcement and more like a pricing mechanism, one that makes AI adoption expensive enough to be survivable only for large firms while solo practitioners face career-ending consequences for using the same tools.
Greg Lake made real mistakes. He used AI carelessly. He lied about it to a supreme court. He deserved consequences.
But the tool that produced 20 fictional cases and presented them with total confidence, formatted correctly, indistinguishable from real law? That tool faces no consequences. Its makers wrote the right disclaimers. Their lawyers checked. They're fine.
Someone always pays. Usually the one who clicked "agree."
Y. Anush Reddy is a contributor to this blog.