Artificial Intelligence

 

To some, artificial intelligence (AI) tools are still the shiny new object in legal. For others, they are proven solutions helping legal teams gain efficiencies and a competitive edge, especially in the contract analysis arena. Wherever a legal team is on its AI journey, everyone wants to reap the rewards: increased productivity, more automation, cost savings and streamlined workflows.

The big challenge: How does a law firm or corporate law department compare different AI tools and offerings without the necessary subject matter expertise? How do you ask the right questions? And how do you do this cost-effectively, comprehensively, methodically and collaboratively?

Over an eighteen-month period, I conducted a comprehensive comparative trial of nine well-known machine-learning and AI contract analysis software solutions; the universe of tested tools has since grown to 12. The main objective of the trials was to thoroughly evaluate the capabilities of each of these technologies. The end goal was to help a consortium of six global legal departments make informed buying decisions based on each provider's performance. Since the original trial, law firms (including some already using one of the tools) and other players (including large corporates interested in acquiring the technologies) have looked to the trial for guidance.

The results were in some cases surprising. Far fewer solutions than we expected had developed to the point that they could learn to recognize new clause types in real time. The members of our consortium benefited from basing their decisions on the outcome of standardized tests rather than simply opting for a tool that seemed 'safe.' One of the best solutions even identified clauses in our test set that we, the researchers, had initially overlooked!

Setting up a comparative trial of this magnitude was no easy feat. An individual law firm or legal department might easily spend upwards of 200 hours running a comparable trial: sourcing documents, developing the RFI and selecting vendors, setting up and training the solutions, trialing them, compiling results and making the final decision. What I ended up with was a long list of 'lessons learned' and 'best practices,' which I have managed to whittle down to a top five:

Put thought into your RFI: An RFI should be crafted to solicit interest from vendors who are qualified to participate, so think through your RFI from the vendor's perspective before you hit send. Approach vendors in a collaborative and collegial fashion. Avoid 'gotcha'-type questions and requests that commit vendor resources beyond what is really necessary. Focus on questions that will yield the information you actually need, and make sure vendors clearly understand what you are asking; clear questions produce accurate RFI responses.

Build bridges, not barriers: In my experience, RFIs crafted by teams of lawyers are often far more complex than they need to be to achieve the desired result. Provide a concise, simple RFI that encourages collaboration with vendors and transparency about trial costs and resource commitments.

More is not always more: Since the trial process is cumbersome and time-consuming to begin with, it's tempting to cram in as many testing criteria as possible. But testing too many software applications for too many features and functions overcomplicates the process and unnecessarily extends the bake-off period. Instead, avoid scope creep by focusing on a shorter list of the key outcomes that matter most to your firm.

Test and test again: In our comparative trial, we devised a four-part testing protocol for each of the dozen machine-learning tools: understanding the technology; going 'under the hood' with the respective technology stacks; reconciling our testing with what we had deemed vital during the RFI process; and lastly, running comprehensive performance and training tests on each of the AI/ML tools and scoring them against standardized criteria. While testing protocols will vary, it is critical that testing criteria are uniform and consistent so that the end result is a true 'apples-to-apples' comparison, as the scoring sketch following this list illustrates.

Report results … then act: It is paramount that trial results are properly reported and shared with the appropriate constituents, and that recommendations are then implemented. Above all, running a comprehensive comparative trial with the above lessons in mind allows firms to base their decisions on actual vendor performance instead of following the crowd or acting on educated guesses.
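To make the 'apples-to-apples' comparison from 'Test and test again' concrete: one common way to standardize performance scoring is to measure every tool's precision, recall and F1 score against the same reviewer-labeled 'gold standard' document set. The sketch below is a minimal, hypothetical illustration of that approach, not the consortium's actual rubric; the function name, clause IDs and figures are invented for the example.

```python
# A minimal sketch of scoring clause extraction against a
# reviewer-labeled "gold standard" set. Hypothetical example only;
# the trial's actual scoring criteria are not reproduced here.

def score_tool(predicted: set[str], gold: set[str]) -> dict[str, float]:
    """Compare the clause IDs a tool flagged against reviewer-labeled IDs."""
    hits = len(predicted & gold)  # clauses the tool correctly found
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: reviewers labeled five clauses; the tool flagged four,
# three of them correctly.
gold = {"c1", "c2", "c3", "c4", "c5"}
predicted = {"c1", "c2", "c3", "c9"}
print(score_tool(predicted, gold))
# {'precision': 0.75, 'recall': 0.6, 'f1': 0.666...}
```

Running every tool through the same scoring function over the same gold-standard set is what keeps the comparison uniform from vendor to vendor.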

 

Friedrich is an entrepreneur, business builder and legal industry leader, with expertise in improving the competitiveness and financial performance of legal service providers. Over the past 20 years, Friedrich has consulted with and worked for law firms, legal departments and alternative legal service providers in 17 countries spanning four continents. He can be reached at [email protected].