While artificial intelligence is powering the next generation of legal technology and potentially changing the foundation of legal work, its use is not without controversy. For attorneys, who are bound by strict ethical rules of conduct, using AI can sometimes feel like a dereliction of duty.

“I think when we are talking about ethics and AI it comes down to the fact that AI is basically replacing human decision-making,” said Jose Lazares, vice president of Product Strategy and Business Management at Intapp. “We're really talking about machines and algorithms replacing human decision-making and the implications of that.”

Lazares was one of several legal tech speakers discussing the problems with relying on AI at an ILTACON 2018 session Tuesday titled “Code of Ethics and How Does the ABA Fit into Artificial Intelligence?”

For Johannes Stiehler, CTO at e-discovery and analytics company Ayfie, the ethical concerns with AI mostly relate back to the issue of accountability. “The question of culpability is going to be key to these ethical discussions, in law as much as in medicine,” he said.

Lazares, however, argues that such concerns can be addressed by ensuring humans always have some control over how an AI platform comes to a decision. At Intapp, the approach is to make sure there is “a human decision point” in AI processes so there is “a chance to review and understand” how choices are made.

But such transparency is not always possible. “There are some AI methods that are much more of a 'black box', like deep learning, while other methods are much more explainable,” said Daniel Katz, an associate professor of law at Chicago-Kent College of Law.

Once trained, such AI tools can continue to learn autonomously, meaning their inner workings can be exceedingly difficult to untangle and understand.

What's more, some AI companies may not be too eager to unveil the inner workings of their technology for fear it will show their products in a negative light. “Most software providers don't like to do that,” Stiehler said. “We don't want to tell you [our product] is only 70 percent accurate. The software tends to hide those errors, which makes human intervention much more difficult.”

Concerns over how AI programs make decisions have been heightened in recent years, given the use of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) analytics tool in some U.S. state courts to predict inmate recidivism. A 2016 investigation by ProPublica found that COMPAS showed bias against African-American prisoners.

Katz, however, noted that bias in the criminal justice system would exist with or without AI. “There are plenty of complaints of biases of judges and police,” he said, adding that it is striking how much focus falls on fixing AI bias and how little on addressing human biases.

But Stiehler believes it is only right that technological biases are more heavily scrutinized. AI software “is being held to a higher standard because it is repeatable, always making the same decisions,” he said.

Stiehler added that what is needed is to “educate people about the limits of AI systems, which are beyond what a human can do, but still, there are some limits. People tend to expect pure awesomeness from AI without any effort, and that's just not true.”