Artificial Intelligence

Less than five minutes into Tuesday's "Tempering Innovation: The Riskier Side of Digital Transformation" session at Legalweek New York, panelists attempted to rechristen the whole affair as something more along the lines of "Managing the Risk Associated with Innovation." The implication: new technologies don't have to be tamped down so much as handled with care.

Of course, when it comes to artificial intelligence, there are precious few instructions to draw from in that pursuit. Lee Tiedrich, a partner and co-chair of the Global Artificial Intelligence Initiative at Covington & Burling, indicated that much like the internet in the 1990s, the growth of technologies such as AI continues to outstrip the development of the law. The lack of absolute regulatory clarity raises the stakes for organizations.

"You need to be very careful as you go forward with AI because even if there's no legal risk, bad headlines can result in bad reputational harm for companies," Tiedrich said.

Ron Peppe, vice president of legal and human resources at Canam Steel Corp., could speak to some of those challenges directly. Peppe said that tech companies frequently pitch him on AI solutions related to the hiring, training and even termination of employees. But as much as companies may be looking to streamline or expedite certain processes within their organizations, these tools aren't an easy sell.

As in most business practices these days, privacy concerns abound. Peppe indicated that some of these hiring solutions can scrape a candidate's social media accounts for data, but certain information that recruiters aren't permitted to ask about, such as someone's religion, could get swept up in the search.

"When you do stuff like that you tend to learn a lot of stuff about people you're not supposed to know," Peppe said.

Many of those issues inevitably lead into questions around bias, a problem that in the AI world extends well beyond the human resources sphere. Tiedrich referenced State v. Loomis, a case out of the Wisconsin Supreme Court in which the defendant claimed that the use of an algorithmic risk assessment tool at sentencing violated his due process rights. The court ultimately upheld the sentence because the judge had relied on factors beyond the assessment.

Still, the question of AI bias persists. "How can we humans use [AI] in connection with the other types of functions that we use?" Tiedrich asked.

AI's imperfections can be even trickier to navigate for lawyers. David Kessler, public sector counsel for Verizon, spoke to the ethical obligations that attorneys have to consider when AI is brought into their organization, including how to best obtain permission to use a data subject's information.

"You have an obligation to put in proper safeguards for whatever technology you adopt," Kessler said.