Your Next Lawyer Might Be AI, But Can You Trust It?

AI in law isn’t theory anymore; it’s happening. In 2023, about one in five lawyers used AI. By 2024, that figure jumped to almost four in five. Reports from Thomson Reuters show similar uptake in law firms and in-house teams. Clio estimates that up to 74% of billable work could be automated.
This isn’t vague “adoption.” Real tools are now common. Lexis+ AI and Westlaw Precision AI let lawyers ask case-law questions in plain English. Casetext’s CoCounsel drafts briefs and deposition outlines quickly. Harvey, used by firms like Allen & Overy, speeds up contract review and due diligence. Outside BigLaw, tools like PainWorth and EvenUp help people price claims and prepare demand letters. Courts are testing AI too: the Calcutta High Court launched a filing chatbot in 2025, and Alaska is piloting tools for self-represented litigants. In short, AI has arrived.
The Promise: Democratizing the Scales of Justice
Supporters say this is about access. New, well-funded AI startups aim to give regular people power they lacked before. Eve, a litigation AI for plaintiffs’ firms, reached a $1 billion valuation in 2025 by drafting demand letters and scoring claims at scale. EvenUp became a unicorn by standardizing strong demand packages for injury victims.
These tools aren’t gimmicks. PainWorth helps people estimate injury claims without paying experts. Court chatbots guide pro se litigants through complex forms. For the first time, many ordinary people can generate filings that are clear, professional, and correctly formatted. In a system where small mistakes can sink a good claim, that is real progress.
The Peril: Hallucinations and Humiliation
But there are serious failures too. The same technology that helps can also spread errors fast, and the costs are high. In February 2024, a Missouri appeals court fined a self-represented litigant $10,000 for a brief packed with AI-fabricated citations. A year later, a federal judge in Puerto Rico fined lawyers over $24,000 in a FIFA case after finding 55 defective citations, warning against filing unverified AI output. In April 2025, a New York appellate court stopped a litigant from arguing through an AI-generated avatar. The lesson is clear: in court, AI mistakes don’t just embarrass you; they can bring sanctions, fines, and a total loss of credibility.
A Deeper Divide: The Rich Get Smarter AI
Even if AI stopped hallucinating tomorrow, another risk remains: inequality. Regular people often use free or basic chatbots. Big companies use powerful, private systems. Firms like Allen & Overy run Harvey, trained on huge proprietary datasets, to transform due diligence and discovery. Work that took teams weeks can take hours. The gap is obvious: a tenant using a free chatbot is not on equal footing with a Fortune 500 client backed by BigLaw’s AI. So while AI can open doors, it can also make the walls higher for those already outside.
The Verdict: AI Is Also on Trial
From drafting demand letters for renters to producing sanction-worthy hallucinations, AI is no longer a theoretical concept in law; it is an active, unpredictable participant. The solution isn’t to banish it from the courtroom but to hold it to the same standard as any other evidence: when questioned, it must hold up under scrutiny. Every citation must be real. Every data flow must be secure. Every litigant must understand the system’s limits as well as its powers.
Y. Anush Reddy
Y. Anush Reddy is a contributor to this blog.