Artificial Intelligence

Showbiz wisdom dictates that you give the people what they want, and apparently there's some crossover appeal to the practice of law as well. On Monday, the law firm Paul Hastings launched a new practice group devoted specifically to issues surrounding artificial intelligence (AI).

According to the group's co-chair, Robert Silvers, the practice was formed in response to popular demand from clients, many of whom are just beginning to dip a toe into the AI waters and want to make sure there are no sharks lurking nearby.

“What I think is going to evolve in the coming years really is the enforcement and the price to pay for getting these things wrong. I think there's just no question that we're at the dawn of an era of class action,” Silvers said.

Whenever that era arrives, the consequences have the potential to be felt across a variety of industries and disciplines. Paul Hastings' clients hail from backgrounds that include fintech, privacy and cybersecurity.

Some are interested in deploying AI-based solutions that are capable of screening job candidates or evaluating the performance of employees who are up for promotions. If at all possible, they would very much like for their attorneys to help them do so without running afoul of discrimination or privacy laws. After all, algorithms have been found to contain inherent biases in the past.

The trick for attorneys is navigating the ill-defined legal space surrounding AI, which, like most new technologies, is evolving faster than the law.

“It's not for the faint of heart. You need to be comfortable on unsettled ground. You need to have enough confidence to deliver crisp advice to the client, which means really understanding the issue areas from all angles,” Silvers said.

In the absence of much case law or precedent to fall back upon, Silvers framed the underlying challenge as something that sounds a lot like a game of “What If?” For example, what if a hiring decision involving the application of AI went to court? What is the full spectrum of possible outcomes and the risk associated with each?

“And then what [clients] really value is you can say, 'Here's what other companies are doing in the space and how they're approaching it,'” Silvers said.

To be sure, how companies approach AI is subject to change, especially if the consequences for a misstep continue to grow steeper. Silvers believes that in addition to class action suits brought by consumers or employees who believe they were treated unfairly by the technology, businesses may eventually also have to contend with increased attention from regulators.

There are even provisions in the European Union's General Data Protection Regulation (GDPR) geared toward limiting the extent to which automated systems can make consequential decisions without some form of human oversight in place.

“I think that's going to be the next ticket as the technology starts being more widely adopted. More people are going to start feeling consequences,” Silvers said.