Is AI a double-edged sword for lawyers?
by Jordan Turk · BetaNews

The legal industry is not traditionally recognized as one that is quick to embrace change, but recently, some professionals have been adopting emerging technology maybe a little too quickly, leading to all kinds of problems. The use of generative AI tools has exploded in popularity since OpenAI’s ChatGPT debuted in late 2022, and some lawyers have turned to this generative AI (GenAI) technology to help them with everything from legal research to contract drafting.
However, these GenAI models aren’t foolproof. In fact, they’re likely to “hallucinate” information that seems accurate but is actually entirely made up. If lawyers using this tech don’t take the time to double-check their outputs, they run the risk of working with factually incorrect information, which is embarrassing at best and grounds for legal repercussions at worst.
While AI hallucinations pose significant risks for legal professionals, potentially leading to malpractice and ethical violations, the responsible adoption of AI remains crucial for law firms that want to stay competitive and efficient in an evolving legal landscape. Attorneys who take the time to adapt to this new technology won’t regret it.
Misleading machines
Generative AI is a huge step forward for workplace efficiency. If you need help drafting an email to a client or want to brainstorm a good opening statement, GenAI can produce first-draft content for you to work with. The key to using generative AI, however, is understanding that it is only a first draft. Any GenAI output, though it may seem eloquent and full of useful information, runs the risk of being a hallucination. You can’t even trust AI to own up to its mistakes -- in fact, it will often double down on a hallucination and invent additional information to back up its original claim. GenAI will confidently put forth misinformation as fact, and this machine version of the Dunning-Kruger effect can “trick” users into trusting it outright because the output seems correct. That is dangerous, especially for people in high-stakes roles like lawyers: the model will proffer convincing case citations, but some, or all, of them could be fictitious.
If a lawyer uses bad information that came from an AI output, the potential ramifications are dire. Last year, a law firm in California was sanctioned for using ChatGPT to write briefs that were full of fabricated cases, and a lawyer in Colorado was suspended after presenting a motion with multiple hallucinated cases and then lying about it. AI is not a cure-all for lawyers; leveraging it irresponsibly will only lead to negative consequences.
GenAI for good
One factor that can affect the accuracy of AI’s outputs is the precision of the prompts it is fed. The clearer and more thorough the prompt, the better the output will be. For example, prompt chaining, or building the next input off the previous output, gives the AI more context to answer more accurately (the sketch below illustrates the pattern). As AI tools continue to proliferate, effective prompt writing will become an integral skill for lawyers looking to leverage this tech successfully.
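As a rough illustration of prompt chaining, here is a minimal sketch in Python. It assumes the OpenAI Python SDK; the model name, the fact pattern, and the prompts are hypothetical placeholders for illustration, not a recommendation of any particular vendor or workflow.

```python
# Minimal prompt-chaining sketch. Assumes the OpenAI Python SDK
# (pip install openai); the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages):
    """Send the running conversation and return the assistant's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content


# Step 1: a clear, specific first prompt (hypothetical fact pattern).
history = [{"role": "user", "content": (
    "Summarize, in plain English, the key issues in a dispute over a "
    "residential lease where the tenant withheld rent due to unrepaired "
    "water damage."
)}]
summary = ask(history)

# Step 2: chain the next prompt off the previous output, so the model
# answers with that accumulated context rather than from scratch.
history += [
    {"role": "assistant", "content": summary},
    {"role": "user", "content": (
        "Using only the issues you just listed, draft three questions I "
        "should ask the client at intake."
    )},
]
print(ask(history))
# Chaining improves context, not trustworthiness: every output still
# needs human verification before it goes anywhere near a filing.
```

The design point is simply that each step feeds the prior answer back in, so later prompts are grounded in earlier ones; it does nothing to prevent hallucination, which is why the verification step discussed below remains essential.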
Despite the risk of hallucinations, lawyers shouldn’t be scared of GenAI tools. Leveraged correctly and ethically, this technology can deliver incredible benefits, like dramatically increased efficiency and better brainstorming.
Extracting value from AI hinges on a marriage between this cutting-edge technology and the irreplaceable human touch. Only a person can verify whether the information GenAI outputs is true, and it is mission-critical that every single output be checked by a human to ensure accuracy. Some judges have actually made this a requirement in their courts. The goal of GenAI is to supplement human brainpower, not replace it.
Though AI hallucinations pose genuine risks that demand vigilance and thorough verification, particularly in a field like law, where inaccuracy carries profound consequences, the benefits can’t be ignored. Law firms that thoughtfully integrate AI tools will have a leg up on competitors that don’t. This tech can be a powerful assistant to, rather than a replacement for, uniquely human legal expertise. As the technology and the legal landscape continue to evolve, mastering the human-AI balance will set law practitioners up for sustained, future-proofed success.
Jordan Turk is a practicing attorney in Texas and the Legal Technology Advisor at Smokeball. Jordan’s family law expertise includes complex property division and contentious custody cases, as well as appeals and prenuptial agreements. In addition to her family law practice, Jordan is passionate about legal technology and how it can revolutionize firms.