It may be getting easier to teach an AI new tricks, but don't expect the lessons to stop coming any time soon.

Laura van Wyngaarden, COO and co-founder of the contract review platform Diligen, has seen the time it takes to successfully train an AI system drop significantly over the past few years. But the challenges left on the table may have less to do with the density of a given practice area than the fluid conditions governing ever-evolving regulatory frontiers like privacy.

“We make sure that we're responsive to regulatory change so that when we see regulatory change on the horizon, that's going to mean that our clients and others out there are going to be forced to do an enormous amount contractually,” van Wyngaarden said.

Applying AI toward those ends isn't necessarily a process that can be undertaken on a whim. Like a small child, an AI has to be taught right from wrong, in other words, how to recognize or identify specific terms, actions or parameters related to the task at hand. This requires both a data set (think of that as a textbook) and a solid algorithm.

Which isn't to say that there aren't shortcuts on the market. Last week, Diligen expanded its library with the addition of more than 250 pre-trained clause models that can be used to teach an AI to identify corresponding clauses within a contract.

The idea is to make it easier for companies to review huge volumes of contracts for specific clauses without having to go through the time-consuming effort of training the AI to locate said clauses themselves. Still, even if an organization does elect to bite the bullet and teach their own AI, the requirements of keeping up with an increasingly global marketplace likely mean that the education will be ongoing.

“If you train a contract tool on a U.K. contract and you want to use it in the U.S., you have to adapt it to the U.S. context. Or even within the U.S., different jurisdictions have different sorts of signs and so on,” said Khalid Al-Kofahi, vice president of research and development at Thomson Reuters and head of the company's Center for Artificial Intelligence and Cognitive Computing.

Regulatory pressures that stretch across a variety of industries make it seem likely that a given contract tool will have to do some overseas work at one point or another. For example, one such development that Diligen moved to address after speaking with customers was a decision made by the U.K.'s Financial Conduct Authority in 2017 to discontinue the London Interbank Offered Rate (LIBOR).

Clients were feeling the pinch of having to sort through thousands of contracts looking for references to LIBOR that needed to be reviewed. “And you can imagine reviewing hundreds of thousands of contracts is immensely time consuming if you're going to do it manually,” van Wyngaarden said.

Beyond the finance industry, privacy in particular could also present a challenge to AI training simply because the patchwork of evolving laws varies not just in language but in content across the globe.

“If you're using certain training and testing data sets to build up your machine learning, if you're changing the parameters—i.e. if you're changing the law—then what used to be a highly optimized algorithm and performing well in a certain area may not any longer,” said Michael Riesen, a partner at Smith Gambrell & Russell and executive director of its innovation team SGR Labs.

Riesen thinks that it would be easy to reteach an AI tool to identify potential red flags in a contract that need to be addressed in light of changes to the law. However, teaching the same technology to mine for deeper insights, such as trends in how laws are actually being enforced, may require a body of data that is only now starting to take shape.

“As we start seeing these fines being handed out for [GDPR] violations, that would be an interesting data set to start to learn and identify,” Riesen said.
