With copious apologies to Joni Mitchell, we've looked at risk from all sides now, from win and lose and still somehow, we really don't know risk at all.

So far in this series, we've looked at risk from the perspectives of clients, law firms and insurers. Since all of those corners of the industry have their own particular view of tech, we saved that perspective for last.

As followers of Lean Law will instantly recognize, efficiency is a key pillar of lean. Technology creates precisely that efficiency, but the question is always what to do with the won-back time. There are plenty of good answers to this, including: i) spend it on strategic or creative work (which is of value to clients and satisfying to the attorney); ii) spend it on client research and business development; or even iii) spend it with your kids and your dog.

When it comes to business development, don't forget that tech is your friend. It can help lawyers and marketing teams put together not just research, but content for blogs, newsletters and social media posts. As we'll discuss below, AI-generated content will need review and editing. Not just oversight, but also insight, in the form of your own opinion as a thought leader.

The industry has pretty much come to terms with most areas of tech. When it comes to AI and its newer sibling, Gen AI, the risks are viewed differently. Conventional AI, for want of a better term, has been with us for years, and every law firm and in-house department uses it, wittingly or unwittingly. Arguably, for as long as law firms have used searchable precedents, they have been using AI. The extension of this into document review and the electronic storage of information was inevitable and happened without undue trauma.

As new efficiency opportunities were identified through AI, there was some skepticism from law firm leaders, particularly those wedded to the time-spent business model. But the enlightened view is that the time-spent model is fundamentally a perverse incentive, and one that is almost universally disliked by clients.

As anybody who has ever lost a document on their server, or emailed one to an opponent by mistake, will know, all technology carries an element of risk. But where that risk becomes acute and highly sensitive is around Gen AI. Curating precedents and searching documents is one thing; generating text, graphics and complete documents is quite another.

Jump in, say thought leaders, the water is lovely. Murtuza Vohra of Morae Global, for example, said at a recent webinar hosted by Buying Legal Council on practical Gen AI use cases for legal departments: "The biggest risk is in doing nothing. The more you procrastinate, the more you're hesitant, the bigger the opportunity cost of not essentially moving the needle. Don't be hesitant. Don't be fearful. This is a breakthrough, but it is nothing that you cannot grasp."

But law firms aren't buying it. Not yet, at least. In 2023, the Thomson Reuters Institute published a report on the adoption of Gen AI in law firms, which noted that "just 3% of respondents said they are already using generative AI or ChatGPT for law firm operations, and an additional 2% said they are actively planning for its use."

It doesn't help that law firm leaders are risk-averse by nature. Now add in the messaging they get from their insurers and clients: some professional indemnity insurers won't provide coverage for work produced by Gen AI, while others say they won't pay for research that could have been created by Gen AI.

The slowly emerging view within law firms is that Gen AI might, one day, be used selectively. The quality has been described as being at the level of a trainee's work product, including, yes, the susceptibility to inventing cases. Unverified Gen AI work product clearly poses unacceptable risks. But if it is treated as a raw first cut and then given a proper review by a senior set of eyes, and if clients buy in to this process, there's no reason it can't play a role.

The in-house community has a slightly different risk appetite for Gen AI. This is unsurprising given that in-house lawyers are less risk-averse (because they can't be sued by their own internal clients) and more inclined to embrace efficiencies (because their job is always to do more with less). From that starting point, in-house legal departments now cover the full spectrum of risk appetite. At one end, some departments have gone all in, with big investments and roll-outs. At the other end are departments that still won't touch Gen AI because they can't assess the risk.

Where does the answer lie? Usually, where there are opposing ends of a spectrum, the answer lies somewhere in the middle. However, given the efficiencies that Gen AI promises, it seems inevitable that adoption is the way to go.

As we have mentioned in previous lessons, it's not that AI will replace lawyers, but rather that lawyers who use AI will replace those who don't. So yes, there are risks, and no, they're not fully understood. But as generative AI develops, the risks will decrease, and the benefits of carefully implemented, monitored and reviewed AI will outweigh them.