With all the hype around AI (artificial intelligence), it's easy to think that AI can solve all your business problems. Want to do something faster and more efficiently? AI may seem like the obvious answer; it's certainly being sold that way. But marketing isn't always right. That's why you need to be cautious when you consider deploying AI. If it doesn't deliver the actual value you are seeking, it can be an expensive mistake. (This is why Mark Myslin, a data scientist from Ravel Law, and I wrote an article called "AI in the Right Places").

I've spent the last decade working with engineers, data scientists and computational linguists teaching computers to structure unstructured legal data. Our efforts range from teaching a computer legal vocabulary to the harder challenges of parts of speech, grammar and even syntax. I have seen well-conceived algorithms replicate certain tasks otherwise performed by humans, including lawyers, at scale, quickly and accurately. But I have also learned that AI should only be deployed in appropriate use cases where there is ample evidence it will succeed.

Before we designate AI as the answer to all legal challenges, we need to ask three fundamental questions:

  • Does the solution provide user value?
  • Does the solution provide business value?
  • Will the approach continue to be feasible over time?

AI is not always the right choice. There are always at least two other options: keep using existing workflows and solutions that rely heavily on human labor, or go hybrid by combining advanced technology with human expertise. For example, faced with the challenge of creating an analytics platform spanning 46 million pages of litigation data—the entire corpus of U.S. federal case law—a company I founded concluded it would be best to go the hybrid route. We realized the tools we were developing would provide the greatest value and remain technically feasible over the long term only if we retained a small group of human subject matter experts (SMEs) to constantly review and update our training data, rather than relying on AI exclusively. This was to ensure the high level of accuracy lawyers require: 70% or 80% was simply not good enough for our users. We needed ongoing input from SMEs to get that number higher.

In the end, we did create a successful AI-based platform that is now used by law firms across the country, but it was done with careful thought, continual testing and multiple iterations. As the legal world continues to be excited by new AI tools, we need to consider how AI will actually deliver by asking questions related to user value, business value and feasibility.


Assess User Value

Will users get anything special out of an "AI" solution? Answering this question requires you to think about whether users are frustrated by the problem you're proposing to solve. Sadly, a lot of companies are trying to solve problems that don't really matter to the end user.

Other questions to consider:

  • Is it easy for users to understand how to use your solution?
  • Will the impact of solving the problem justify the expense and the effort?
  • Will AI necessarily produce something more valuable to users than existing processes and workflows?
  • Does AI have the potential to create a "virtuous cycle" in which outputs get more accurate with additional data and ongoing user input?

Assess Business Value

Assess whether your proposed solution will provide your business with a crucial edge in the marketplace. For example, will your product generate actionable insights? A well-trained and maintained AI solution should continue to get better and provide new business value as the data set grows.

  • Does your AI strategy become more valuable over time in the context of the rest of your business?
  • Will the solution allow you to save money on staff, or use existing staff in different and more productive ways?
  • Will the solution you are considering generate enough value among your customers to merit the effort and expense of implementation, taking into account the risk of failure?
  • Is your training data expanding and getting better in a way that will be difficult for the competition to replicate?
  • Does your product expose you to the risk of being objectively wrong? What are the stakes of getting something wrong?

Assess Feasibility

Most companies struggle to maintain and grow a product after launch. At some point, they move teams on to the next one. An algorithm, like a product, needs to be maintained. As data expands, training data will likely change. You will need to continuously measure accuracy and ensure you are delivering quality results. You may need more human labor than you realize to develop and maintain a useful AI-based solution.

Implementing AI comes with some uncertainties. For example, it's relatively easy to predict how long it will take a human to do something like annotate a dataset, but it's difficult to predict how long it will take to build effective models or how accurate they will be. With ML, you are always taking the risk that the solution won't work as planned. Answering the following questions can help you manage the uncertainty by forcing you to develop and test hypotheses, accurately communicate risk and consider alternatives.
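The continuous accuracy measurement described above can be sketched as a periodic check of model output against SME-labeled gold data. The function names, label values and 95% threshold below are illustrative assumptions, not details from this article:

```python
# Minimal sketch: tracking model accuracy against an SME-labeled gold set.
# Names, labels and the 95% threshold are illustrative assumptions.

def accuracy(predictions, gold_labels):
    """Fraction of predictions that match SME-reviewed gold labels."""
    matches = sum(p == g for p, g in zip(predictions, gold_labels))
    return matches / len(gold_labels)

def needs_retraining(predictions, gold_labels, minimum=0.95):
    """Flag the model for retraining when accuracy drops below target.
    As noted above, 70% or 80% was not good enough for lawyers."""
    return accuracy(predictions, gold_labels) < minimum

# Example: a periodic check against freshly reviewed SME labels
preds = ["motion", "order", "motion", "opinion"]
gold  = ["motion", "order", "answer", "opinion"]
print(accuracy(preds, gold))          # 0.75
print(needs_retraining(preds, gold))  # True
```

Running a check like this on every batch of newly labeled data is one way to notice early that expanding data has drifted away from the original training set.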

  • Can you afford to invest in upfront training time to develop a viable ML model?
  • Would it be safer to rely on more predictable but slower human labor? Must the humans be domain experts?
  • How much would you pay a human team to accomplish the same work you plan to delegate to an ML platform?
  • How much training data will be required for a successful algorithm? How does the cost of acquiring sufficient training data compare to the cost of training humans?
  • Can you populate your data model easily, clearly and consistently?
  • How will you maintain and upgrade your AI system and ensure extensibility, and who will be responsible for the ongoing work?
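As a back-of-envelope illustration of the cost questions above, one might compare an all-human workflow against an ML workflow that still requires labeled training data and ongoing expert review. All figures and function names here are hypothetical:

```python
# Illustrative cost comparison (hypothetical numbers, not from the article):
# an all-human workflow vs. an ML workflow that still needs SME-labeled
# training data, a build effort, and ongoing human review of a sample.

def human_only_cost(documents, minutes_per_doc, hourly_rate):
    """Cost of having humans process every document."""
    return documents * (minutes_per_doc / 60) * hourly_rate

def ml_cost(training_docs, minutes_per_doc, hourly_rate,
            build_cost, review_fraction, documents):
    """Cost of labeling training data, building the model,
    and human-reviewing a fraction of the output."""
    labeling = human_only_cost(training_docs, minutes_per_doc, hourly_rate)
    review = human_only_cost(int(documents * review_fraction),
                             minutes_per_doc, hourly_rate)
    return labeling + build_cost + review

docs = 100_000
human = human_only_cost(docs, minutes_per_doc=6, hourly_rate=150)
ml = ml_cost(training_docs=5_000, minutes_per_doc=6, hourly_rate=150,
             build_cost=250_000, review_fraction=0.10, documents=docs)
print(human, ml)  # 1500000.0 475000.0
```

Even a rough model like this forces the comparison the questions above ask for, and makes the break-even volume explicit: below some document count, the predictable human workflow wins.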

For AI to be feasible in the long run, you need to be able to manage uncertainty, have access to highly skilled data scientists, have high quality training data—and know when to involve experts in the review process. A "lawyer in the loop" process for structuring contract data delivers the quality our clients need, a level that ML on its own cannot reach. ML combined with expert legal review can provide not only the quality but also the value.
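A "lawyer in the loop" workflow of this kind is often implemented by routing low-confidence model outputs to human reviewers. A minimal sketch, with an assumed confidence threshold and record shape:

```python
# Minimal sketch of a "lawyer in the loop" workflow: model outputs below a
# confidence threshold are routed to human experts for review. The threshold
# and record structure are illustrative assumptions.

REVIEW_THRESHOLD = 0.90  # assumed cutoff; tuned against SME feedback

def route(extraction):
    """Accept confident ML output; send uncertain output to a lawyer."""
    if extraction["confidence"] >= REVIEW_THRESHOLD:
        return "auto_accept"
    return "lawyer_review"

batch = [
    {"clause": "termination", "confidence": 0.97},
    {"clause": "indemnification", "confidence": 0.62},
]
print([route(x) for x in batch])  # ['auto_accept', 'lawyer_review']
```

The design choice here is the virtuous cycle mentioned earlier: every lawyer correction can be fed back as new training data, so the share of documents needing review should shrink over time.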

There is a lot to consider as you assess whether an AI solution is right for your organization. Answering the questions above can help you avoid getting swept up in technology hype cycles and focus instead on your specific requirements and use cases.

Nik Reed is senior vice president of product and R&D at Knowable.