If the past few years have taught attorneys one thing about artificial intelligence (AI), it’s this: do not trust AI for case citations.
In December 2025, a friendly neighborhood judge asked if the undersigned had heard about the latest decision imposing sanctions for faulty citations. To be honest, I wasn’t sure which case he meant, simply because there have been so many in recent years.
For example, CAAA wrote about one such case in September, and WorkCompAcademy wrote about a similar but different case in December. Long before those articles, a court in 2023 sanctioned New York attorneys for citing as many as six imaginary cases, prompting state bar associations to take notice. To be clear, the judge I spoke to was not referring to any of these cases – he was talking about an even newer one.
WHY?
Why is this happening? AI-powered engines occasionally fabricate information out of thin air, and those fabrications are known as “hallucinations.”
Attorneys rushing to prepare pleadings are simply asking AI chatbots for case citations and legal summaries, then copypasta-ing the results straight into their trial briefs, petitions, and appellate briefs. (Copypasta is internet slang for a block of text that gets copied and pasted verbatim, often ridiculous in nature.)
The judges who are assigned to these attorneys’ cases, and their clerks, must attempt to vet these citations. (Interestingly enough, these folks are now similarly situated to high school and college teachers, who must determine whether the work before them is real, or simply plagiarized garbage.)
When the judges cannot find those citations anywhere and ask the attorneys to produce the cases, the attorneys cannot do so, because the cases simply do not exist.
Yikes! That’s not a good spot to be in.
HOW TO AVOID THIS PROBLEM
There’s a simple way to avoid this problem, whether you’re using AI, Google, or your favorite legal treatise to do your legal research: when you see a citation, pull up the specific case and actually read it.
LexisNexis should have most of those cases, and if that service is too expensive or a case is difficult to find there, many opinions can also be located elsewhere online.
By reading the actual opinion, you may find nuances in that opinion that either support or undermine your argument.
The last time I read a case that an opponent cited, I saw that the case handed the defendant a victory on my precise issue. As a result, I was happy to see opposing counsel make that argument, and I was more than happy to point that out to our trial judge.
(The cited case featured multiple nuanced sub-issues on the same topic, and opposing counsel thought it favored them. However, a close reading showed that the defendant won on my sub-issue. One wouldn’t know that without reading the case, as the summaries didn’t mention my sub-issue.)
Reading the cases also offers another advantage – it gives you the opportunity to pull-quote language from the opinion. There’s no need to reinvent the wheel when a wise, savvy predecessor judge, commissioner, or appellate justice said it better than anyone else could years ago. And of course, you can paraphrase the quote before and after it appears, just for dramatic effect.
If you are quoting a prior opinion, we would simply advise that you cite to it and give attribution where attribution is due. When I was a journalist and wasn’t quite sure how to summarize a complex legal analysis, I would often resort to an exact quote, because it was usually the most accurate way to go.
In summary, read the cases you’re citing, or at the very least verify that they exist and generally stand for the proposition you’re citing them for. There are a million ways reading them can help you; not reading them can only hurt you.
WHAT NOT TO DO
One tactic that has failed repeatedly is blaming the AI chatbot when asked to explain hallucinated citations. Blaming a bot for work that an attorney is billing for has failed in every jurisdiction where the undersigned has seen this fact pattern arise.
Judges view that explanation as simply lazy – and sanctionable.
In a scenario like that, one should take the least-offensive path, even if it means eating a little AI crow.
CONCLUSION
Beware: AI chatbots may “hallucinate” false citations that can land you in a heap of trouble, and that trouble can damage your reputation.
Don’t fall into that trap – verify the cases you’re citing, and read them – you’ll be more knowledgeable for it.
Got a question about workers’ compensation defense issues or pending legislation? Feel free to contact John P. Kamin. Mr. Kamin is a workers’ compensation defense attorney and partner at Bradford & Barthel’s Woodland Hills location, where he monitors the recent legislative affairs as the firm’s Director of the Editorial Board. Mr. Kamin previously worked as a journalist for WorkCompCentral, where he reported on work-related injuries in all 50 states. Please feel free to contact John at jkamin@bradfordbarthel.com or at (818) 654-0411.
Viewing this website does not form an attorney/client relationship between you and Bradford & Barthel, LLP or any of its attorneys. This website is for informational purposes only and does not contain legal advice. Please do not act or refrain from acting based on anything you read on this site. This document is not a substitute for legal advice and may not address every factual scenario. If you have a legal question, we encourage you to contact your favorite Bradford & Barthel, LLP attorney to discuss the legal issues applicable to your unique case. No website is entirely secure, so please be cautious with information provided through the contact form or email. Do not assume confidentiality exists in anything you send through this website or email, until an attorney/client relationship is formed.


