That lawyers are generally hostile to technology has become something of a tired line. While the legal industry may not always deserve such a reputation, it is true that the law tends to move slowly, making rules for technological advances after they have happened.

When it comes to AI, there are many important questions that are yet to receive considered answers from the courts or Parliament. For lawyers deciding whether or not to embrace AI, this poses a dilemma: do you risk becoming a Luddite, or becoming liable for the consequences of the new technology?

While there are unfortunately more questions than answers in this area, at least being aware of those questions can help practitioners tease out the risks and factors they should consider when deciding whether and when to deploy new technology.

Take the example of predictive coding software which, in the context of technology assisted review, has become increasingly effective at identifying relevant documents in a disclosure set. This technology has proved so effective that the CPR Disclosure Pilot more or less obliges litigants to discuss using it when the disclosure process will include searches.

This technology moves beyond the traditional 'if/then' logic of computer programming and instead revises its own logic as it 'learns' during the disclosure process, being 'trained' by a lawyer who is also reviewing some of the documents.
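To make that contrast concrete, the sketch below (in Python, using the widely available scikit-learn library) is purely illustrative and does not reflect any particular vendor's product: a fixed 'if/then' rule encodes its logic in advance, whereas a predictive coding-style classifier derives its 'logic' from the reviewing lawyer's training decisions and is then applied to the wider disclosure set. The document texts and relevance labels are invented for the example.

    # Illustrative sketch only; the documents, labels and model choice are
    # assumptions for this example, not any vendor's actual product.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Traditional 'if/then' logic: the rule is fixed before review begins.
    def rule_based_relevant(text: str) -> bool:
        return "termination" in text.lower() or "indemnity" in text.lower()

    # Predictive coding style: the model's 'logic' comes from the reviewing
    # lawyer's decisions on a training set, not from hand-written rules.
    training_documents = [
        "Notice of termination of the supply agreement",    # lawyer marked relevant
        "Indemnity provisions in the draft contract",       # lawyer marked relevant
        "Lunch menu for the office canteen",                # lawyer marked not relevant
        "Travel booking confirmation for the conference",   # lawyer marked not relevant
    ]
    training_labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(training_documents, training_labels)

    # The trained model is then applied to the remaining disclosure set,
    # scoring documents the lawyer has not personally reviewed.
    unreviewed = [
        "Email discussing early termination of the contract",
        "Invitation to the team's summer party",
    ]
    print(model.predict(unreviewed))        # predicted relevance (1 or 0)
    print(model.predict_proba(unreviewed))  # confidence scores per document

The point for present purposes is that the output of such a tool depends on both the developer's code and the lawyer's training decisions, which is precisely what complicates the liability questions that follow.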

Now consider a scenario where a loss has been caused by a negligent failure to pick up on a key document during the disclosure process, and that failure is, as a matter of fact, the sole fault of (i) a lawyer; (ii) software operating on traditional 'if/then' logic; or (iii) more advanced software trained by the lawyer and then applied to the disclosure.

In circumstance (i), it is clearly the lawyer who is at fault. In circumstance (ii), it is the software developer, who created a defective program and sold it as a bespoke solution to these sorts of disclosure problems.

However, liability in circumstance (iii) is more complex and raises a number of questions:

  • As a matter of principle, can it be said that the loss was caused by the developer who created the software which incorporated the capacity to develop reactively during the review process? Or was it caused by the lawyer who developed that same capacity by training the software? Should the answer differ depending on the facts of a particular case?
  • As a matter of fact, how can you prove whose fault the failure is? It may be impossible to say how or why the software missed the particular document. In these circumstances, can you rely on the general range of accuracy reasonably expected from the software to show a culpable failure in the particular circumstances? Where should the burden of proof lie in these circumstances?
  • It is likely that an alleged failure of this type of software cannot easily be explained in the conventional terms used for assessing mechanical fault (if X happens, Y should properly then happen) or human error (in these circumstances, a reasonable person would do Z). How then do you assess liability? Do you compare the software to a hypothetical reasonable human lawyer? What would this mean for justifications for the use of such software due to its improved efficiency over human review? Alternatively, do you compare it to another program? Would this mean that only the market-leading option can escape liability?
  • How can the duty to exercise reasonable care to the standard of a reasonably competent solicitor be discharged in this context? Will it be sufficient as a defence to rely on statistics showing a particular software's general rates of accuracy versus a traditional human review? Or will each case involve a forensic examination of the human training inputs to try to achieve some 'best guess' as to whether it was the lawyer's defective training of the software that led to the failure? Should the Disclosure Pilot's tacit encouragement of such technology play any part in assessing liability? At the very least, a comprehensive written advice documenting all of these considerations should offer some protection for law firms.
  • From a more practical perspective, how might insurance policies cover (or refuse to cover) liability in this context? This is a conversation worth having with a provider before any issues arise.

These are just a few examples of the complex issues that one particular application of AI might raise. Unfortunately, given that there are as yet no clear answers, any disputes arising from such technology are likely to be protracted, with lawyers and developers seeking to blame one another.

While we are still waiting for the law to be clarified on these issues, keeping them at the forefront of one's mind when considering or deploying such technology will at least help firms stay alive to the potential risks involved and identify common-sense ways to minimise them in their particular circumstances.

Sinead O'Callaghan, Michael Cumming-Bruce and Andrew Flynn are partner, senior associate and associate respectively within the partnership disputes team of CYK