In a recent Utah filing, an attorney argued with unwavering confidence that his client had a controlling interest in a subdivided tract under Royer v. Nelson, 2023 UT App 17, and that the legal standard from Hammersmith v. Goldstein, 2021 UT App 45, compelled a summary judgment in his favor. The brief cited precise paragraphs, internal logic, and even a footnote referencing “Royer, supra, at ¶ 12.” It looked airtight.
The problem? Royer v. Nelson never existed.
Hammersmith v. Goldstein was equally fictitious. Every legal citation was fabricated by ChatGPT. When the court checked, it found no case history, no reporter, no opinion. The judge sanctioned the attorney and reminded the profession: a signature on a brief still means personal accountability, even if you blame AI for conjuring your authority.
Cases like this make a strong case for law firms to better understand artificial intelligence (AI). This kind of misstep makes for a dramatic headline, but it also underscores the real issue: law firms using AI as a shortcut rather than a tool. AI can boost efficiency across your practice, including law firm digital marketing, but only when it is treated like an assistant, not an oracle. Think of an eager junior associate: quick with a draft, sometimes clever, always in need of oversight.
At Hennessey Digital, we test AI thoroughly before recommending it, so when firms add it to their digital marketing strategy, it works where it should.
That kind of discipline is what separates firms that benefit from AI from those that end up in the headlines. And yes, there are limitations of AI in law firm marketing. With that in mind, let’s look at the three biggest mistakes law firms make with AI and how to avoid them. (And when you’re done, check out our Guide to AI for Law Firms.)
Three Ways Law Firms Misuse AI
By now, most attorneys should understand that AI has serious limitations, including its tendency to hallucinate, fabricate citations, and present unreliable authority. A recent example is Noland v. Land of the Free, L.P. (Cal. Ct. App. Sept. 12, 2025), where the appellate court sanctioned the appellant’s counsel $10,000 and directed notice to the State Bar after concluding that “nearly all” of the quotations in the opening and reply briefs were fabricated.
However, AI is not out to sabotage attorneys. The real issue is how people choose to use it.
Strip away the drama, and you will see three common mistakes that are easy to recognize and, thankfully, easy to fix.
1. Treating AI as an Expert
The first problem when using AI is blind trust. Some attorneys treat AI outputs as if they were gospel. Courts have already sanctioned lawyers for filing briefs with hallucinated citations, like Mata v. Avianca, Inc., where ChatGPT generated cases that never existed, and the court imposed a $5,000 sanction.
AI can also misjudge what counts as authority. Ask the wrong way, and it might elevate a Reddit thread or a niche blog post to the same tier as a Supreme Court opinion. Entertaining? Sure. Persuasive in court? Not a chance.
Better practice: Use AI as a draft assistant, then confirm everything against real precedent. Treat it like brainstorming with a third-year law student: sometimes helpful, never a substitute for binding authority.
2. Confusing Confidence With Accuracy
This mistake is about style versus substance. AI delivers answers with the same polished tone, whether it is standing on solid ground or skating on thin ice. Studies on human-AI collaboration show that people often confuse confidence with correctness, and in law, that shortcut can be costly.
(It’s tempting to insert a law joke here about “confidence in the courtroom.”)
It does not help that tools like ChatGPT or Grok are built to be agreeable. They are not plotting to mislead; they are simply optimized to keep the conversation flowing. In practice, that means you can end up with a very confident “yes” that is very wrong.
Without the right constraints, it is easy to slide into a rabbit hole of misinformation that sounds convincing but will not hold up in court.
How to get it right: Push AI to explain itself. Ask for sources, reasoning, and alternative perspectives before you take its word for anything, then check those sources. Follow the HITL mantra: a human in the loop is the most important element in using AI professionally and responsibly.
3. Using AI in the Shadows
Many lawyers fear that using AI looks lazy, but hiding it is worse. Tools like Google’s Gemini can speed up research and free attorneys to focus on strategy. Saving research time does not undercut billing. It simply shifts hours toward higher-value work like client counseling or trial prep.
The real problem is secrecy. A KPMG study found that 57 percent of workers conceal their AI use, which, in law, raises red flags around confidentiality, discovery, and billing.
The fix: Be transparent. Create firm-wide guidelines and document how AI fits into the process.
The irony in Noland is that the attorney marketed himself as a malpractice expert while letting ChatGPT, Claude, Gemini, and Grok draft his briefs. The lesson is not to avoid these tools but to understand what they are good for. General-purpose models can help you brainstorm and draft, but firms that want real leverage should look to AI specifically built for legal research. Picking the right tool for the job is what separates a competitive edge from a cautionary tale.
Using the Right Tools for the Right Purpose
Tools like ChatGPT, Perplexity, Gemini, and Grok are your “Swiss Army knives.” They are not built for law, but they are helpful when used with care. They’re also evolving search marketing and how potential clients find you, so it’s important to pay attention to how ChatGPT and AI affect SEO and website traffic for your firm.
They are great for brainstorming arguments, rewriting memos, summarizing long documents, and generating new ideas. Just don't mistake them for specialized research platforms.
“AI chatbots can undeniably be time-savers, but they can’t replace the logic and authenticity of the human behind the keyboard,” notes Cindy Kerber Spellman, Hennessey Digital’s Senior Director of Marketing. “A piece of advice that’s always stuck with me came from a fitness instructor: Go through it, not to it. It means putting in the work to get to your end goal. ChatGPT and other LLMs have unlocked what’s possible, but for both ethics and accountability purposes, don’t rely on them to do all the work for you.”
Because in law, sometimes you need more than a pocketknife. You need a sharpened arrow aimed at the right target. That is where AI specifically designed for the legal sector comes in.
Some Top Legal-AI Tools
At Hennessey Digital, we don’t just talk about AI; we use it. Our internal tools are built on the same principle we recommend to law firms: automation is only as strong as the human oversight behind it.
Our proprietary AI-supported tools, developed in-house, reflect that balance:
- HD Translate: Converts English content into multiple languages, followed by a QA specialist review to confirm tone, nuance, and context are accurate.
- AI Developer: This isn’t a software developer working on AI, but rather AI that works as one of our developers! It can respond to tickets and share updates on its work, just as a human would. It writes and refines code, with each change passing through QA and an engineer’s manual approval before launch.
- AI Writer: Produces client content automatically, while the AI Compliance Checker helps writers and editors confirm that pages align with each firm’s brand profile and voice. It assists the review process rather than replacing it.
- Human QA Review: Before publication, experienced legal editors trained to recognize AI patterns review every page for accuracy, tone, and legal compliance.
Hennessey Digital’s Senior Manager of Content, Brian Sintay, puts it this way: “Automation helps us work faster, but it never replaces human oversight. Every page still goes through experienced legal editors who verify accuracy, tone, and brand alignment before publication. That balance between AI and human judgment is what keeps our content trustworthy.”
Each tool accelerates workflow, but the process stays human-led. That is exactly what law firms need when introducing AI to their own operations.
And the same logic applies to other AI tools for law firms that are available:
- CoCounsel (Thomson Reuters): Excellent for legal research and document analysis since it plugs into Westlaw and Practical Law. Con: Can still hallucinate, so blind trust is dangerous.
- Lexis+ AI: Seamless if your firm already uses Lexis, with strong drafting and summarization features. Con: Like its peers, it is not error-free and needs attorney oversight.
- Spellbook: Built for contract drafting inside Microsoft Word, great at suggesting clauses or redlines. Con: Narrower focus, less useful outside transactional work.
- Harvey AI: Flexible tool for compliance, contract review, and due diligence. Con: Still maturing and not always reliable for nuanced or jurisdiction-specific analysis.
- Clio Duo: Embedded in Clio practice management, helpful for pulling documents, managing tasks, and automating workflow. Con: Most effective only if your firm already runs on Clio.
What “Right Use” Looks Like in Legal Practice
It is one thing to have the latest AI tools on your desk, but it is another to know how to put them to work. Just like in court, knowing the rules of evidence can make the difference between a winning argument and one that falls flat. (Check out our article: How AI and ChatGPT Are Transforming Personal Injury Law Firms)
As our Content Strategist Gerri Turner puts it, “Using the correct tool for the job is important, but using well-crafted, structured prompts is also essential to getting an AI tool to perform as expected. The axiom ‘junk in, junk out’ applies.”
Here are a few ways firms can use AI wisely without creating new risks:
- Have a Co-Pilot Mindset: AI can summarize discovery, draft memos, or generate marketing content, but attorneys set the direction and strategy.
- Verify Before You File: Always fact-check outputs against statutes, precedent, or trusted legal databases. The bar is higher than “sounds right.”
- Be Transparent: Treat AI the way you treat paralegals or research software. Disclose its role, establish guardrails, and keep usage ethical.
- Task Matching: Use general tools like ChatGPT or Gemini for brainstorming, drafting, and plain-language summaries. Use legal-specific AI for research, contracts, or case prep.
Done well, AI becomes a lever. Done poorly, it turns into an ethics complaint waiting to happen.
Implications for Law Firms
The sanctions in Noland and Mata v. Avianca make one thing clear: courts are not impressed by briefs ghostwritten by overeager chatbots. The problem is not the technology. It is people treating AI like a shortcut instead of a tool.
For firms, the implications are hard to ignore.
- Professional responsibility: Judgment cannot be delegated to a machine. Every AI-assisted draft still requires a lawyer’s eyes, red pen, and verification against real sources.
- Efficiency and billing: AI can shave hours off research and drafting without undercutting revenue, so long as firms are clear about how that time gets reallocated to strategy, client work, and case preparation.
- Competitive advantage: Pairing the Swiss Army knives like ChatGPT, Gemini, or Grok with specialized tools such as CoCounsel, Lexis+ AI, Spellbook, or Harvey is how firms move from playing defense to playing offense.
The takeaway? AI is not here to replace lawyers. It is here to expose which firms know how to use it wisely. The ones that do will practice smarter, faster, and safer. The ones that do not will end up as cautionary tales.
At Hennessey Digital, we take the same approach to AI that we recommend to our clients: open adoption with rigorous testing. We push tools to their limits before suggesting them for a firm’s workflow. We also develop proprietary AI-supported tools in-house to push what’s possible and improve efficiencies and results for our clients. When law firms add AI to their digital marketing strategy, that kind of vetting ensures the tools sharpen performance instead of creating risk.