Lawyers and their clients are increasingly becoming aware of the benefits of artificial intelligence, but the risks of the burgeoning technology have left some clients wary of implementing AI.

In fact, four out of 10 executives had a “high degree of concern about the legal and regulatory risks associated with AI systems,” according to Deloitte's recently released survey “State of AI in the Enterprise.” 

Artificial intelligence regulations and the technology's risks cut across many practice areas, according to the lawyers contacted by Legaltech News. Lawyers suggested that clients should be fully aware of the data used by their AI and should keep an eye on any results it provides.

Reed Smith's Wendell Bartnick suggested companies may be wary of implementing AI if the program's results have broad applicability, are difficult to reverse or aren't predictable. For example, Bartnick noted the possibly difficult position a financial institution may face if it uses AI when issuing a loan. In the event the software makes a discriminatory or incorrect decision, detecting, correcting or stopping the result may be difficult, he said.

Roseyna Jahangir, a London-based attorney at Womble Bond Dickinson, said if the results of an AI program's algorithm cause a “detrimental outcome” to customers that a business may not be aware of, U.K. regulators won't allow an enterprise to use “'Oh well, I didn't know the computer would do that'” as a defense.

Jahangir suggested AI developers should know the intricacies of their algorithms and what triggers a given outcome.

Natalie Pierce, co-chair of Littler Mendelson's robotics, AI and automation industry group, said the slow adoption of AI may stem from the lack of regulation regarding AI, uncertainty about how an AI implementation could be challenged and a reluctance to change the status quo. Pierce said AI is an opportunity that could garner great outcomes, but caution can't be thrown out entirely.

“If you do [use AI], clients will see a huge return because these programs can do a better job than we humans can,” said Pierce. “[But] you can't take yourself out of the equation.”

Pierce said it's important, when using AI, to know the data it draws on and the science behind it. She also said cross-validation should be performed to determine which data should be used, and that users should constantly retest the algorithm.

Some organizations are only tepidly embracing AI because of the amount of data needed to train artificial intelligence and machine learning systems, Reed Smith's Bartnick added.

“We have smaller clients that are struggling with companies or fear they won't be able to compete because they don't have the trove of data [like] the big compan[ies],” Bartnick said.


Regulations

There isn't a single law regulating artificial intelligence, lawyers said, and AI touches a myriad of legal issues. However, a few attorneys cited provisions in the European Union's General Data Protection Regulation as targeting AI.

Bartnick explained that the GDPR's AI provisions are geared toward preventing programs from being able “to run out of control and make substantial effects without human intervention and monitoring.”

Covington & Burling partner Lee Tiedrich suggested taking a global perspective when assessing which jurisdictions an AI program is subject to. Often, Tiedrich said, artificial intelligence is not limited to one jurisdiction.

Tiedrich said clients seek advice on how to develop their products, and Covington provides product counseling to minimize the clients' legal liability. Clients also tend to ask for advice on the legislative outlook for AI and how to manage their risk, she said.