Law firm leaders are increasingly adopting Legal AI (Gen AI tools trained for the legal profession), but as with any breakthrough technology, there is a healthy amount of skepticism that must be overcome.

Building trust in Legal AI will mean resolving concerns in several areas, from issues involving accuracy and security to ensuring clients are comfortable with how firms are using AI to deliver legal services.

There are some simple things that firms can do to build their lawyers' and their clients' confidence in Legal AI tools, opening the door to new workflows that enable attorneys to work faster and smarter. One key step is to proactively address concerns about accuracy and confidentiality in the use of Legal AI tools.

Avoiding hallucinated answers

We've all heard of the risks inherent in using open-web AI tools for legal research, illustrated by headline-grabbing news stories about lawyers who relied on results provided by ChatGPT without verifying the accuracy of the case citations it produced. These stories have understandably made some lawyers wary of Gen AI's potential to produce "hallucinated" answers.

According to the recent report, Gen AI in Law: A Guide to Building Trust, bolstering confidence in the accuracy of AI-generated content is crucial if lawyers are to trust the answers they receive to their legal research inquiries. This is where specialized Legal AI tools, combined with each lawyer's own legal acumen, can help close the trust gap.

"Companies like LexisNexis ensure that answers are generated with the appropriate source citations and references," says Jeff Pfeifer, chief product officer at LexisNexis. "Doing so allows an individual to trust the answer quality and that the answers are backed by appropriate legal authority."

For example, Lexis+ AI grounds its answers in an underlying legal content database that understands and optimizes prompts. The tool retrieves and ranks relevant source content, then generates answers based on that authoritative material. References are included in the text so that users can check the sources themselves.

Ensuring data security and confidentiality

Another key consideration for law firms is making sure their Gen AI solution follows strict data security protocols and upholds all client confidentiality requirements. As discussed in the Gen AI report, this means ensuring that contracts with third-party Gen AI providers do not allow firm data to be shared with the provider — a provision that general-purpose Gen AI companies often include in their terms and conditions on the rationale that sharing data will improve their service.

"Gen AI are data intensive tools, they're like the ravenous plant from 'The Little Shop of Horrors' — they always need to be fed," says Tod Cohen, a partner at Steptoe. "For a law firm, that's really a question of what we are feeding the tools with and how do we make sure that the tool isn't being fed with confidential and proprietary data that is then reused by other clients and potentially by downstream users inside and outside of the firm? That's really the most difficult part."

Ensuring that data-sharing clauses are removed from contracts can provide reassurance of confidentiality, while improvements in commercial-grade cloud infrastructure have made using Gen AI far more secure than earlier generations of the technology.

For example, LexisNexis has made data security and privacy for customers a priority by opting out of certain Microsoft AI monitoring features to ensure OpenAI cannot access or retain confidential customer data.

"We spend significant time working with our clients to help them understand the technology infrastructure and the extensive steps that we've taken to ensure that their experiences are highly secure and highly confidential," says Pfeifer.

Law firms can also increase confidence in Legal AI tools by putting in place policies and guidelines that govern how the technology can be used. Aside from prohibiting the entry of client information that could compromise confidentiality, those policies should require that attorneys fact-check any content the tools provide. This is no different from using other legal research tools: if an attorney is citing a case, they must read and understand it to avoid the risk of committing malpractice.

We interviewed a variety of AI leaders from the legal profession to explore how law firms and corporations that embrace Legal AI are building trust in the use of this new technology. In addition to the section of the report unpacked here, which focuses on the importance of addressing concerns about accuracy and confidentiality, other sections of the report include:

  • Key factors that drive trust with Gen AI;
  • The steps to building trust; and
  • Rethinking workflow, skills and culture.

Read the full report now: Gen AI in Law: A Guide to Building Trust.