Artificial intelligence won't mean the end of the legal industry (or the world) as we know it. But neither will it create a utopia in which new heights of insight and efficiency close the access-to-justice gap and take much of the guesswork out of the legal profession. The reality of AI's future, and its present, is far more ambiguous and complicated than that.

At the "Emerging Technologies in Litigation" at New York State Bar Association's Annual Meeting, local and federal judges, an e-discovery researcher and an emerging technology attorney came together to discuss the way AI is, and likely will be, used in today's courtrooms.

While some use cases presented potential benefits, others were troublesome, and one was downright frightening. Here's a look at the highlights from the panel:

AI's Place in Judicial Decisions

The use of AI in legal research is one area that has gotten a fair amount of attention in the legal world, both for better and worse. Gail Gottehrer, founder of an eponymous law firm focusing on emerging technologies, explained that such research platforms use past judicial decisions to "predict behavior and outcomes that different legal strategies will produce."

And to some extent, she said, this shouldn't be contentious. "Law is based on precedent, [and] if your case is similar and has similar factors to another case, the results shouldn't be too surprising."

Still, Gottehrer noted there is a limit to how effective these predictions can be. "Cases vary based on facts, the facts people view as significant, and that's judgment, which is what AI does not do. … So will it guarantee a result to predict what a judge is going to do? I would say no."

Even so, some were optimistic that this use of AI could ultimately prove beneficial. "I would love to know how I'm going to rule on any case because I'm very busy," joked Judge Melissa Crane of New York City Civil Court.

While completely accurate predictions may be a far-off proposition, such legal research tools can offer insight into how a judge has ruled in the past, which Maura Grossman, professor at the University of Waterloo in Ontario, noted can be helpful in "bringing explicit biases to attention."

"I think this can be a check on bias," she said. "Wouldn't it be helpful to know if you decide [certain] cases exclusively for plaintiffs?"

Of course, today's AI doesn't just collect and predict judicial decisions; in some cases, it makes those decisions itself. Katherine Forrest, former U.S. district judge for the Southern District of New York, pointed to the holographic judges currently in use in China. "They rolled out the utilization of a couple of internet courts where they have AI judges and have litigated to verdict thousands of cases, and Estonia just announced it is following suit for small claims."

While Forrest expressed concern over just how much discretion AI has in these judgments, Gottehrer noted there can be some cases where AI judges could make sense. "I think there is a place for it, something that is very rule-heavy," she said, pointing to traffic courts where a certain infraction could automatically incur a penalty as an example. "If you're driving that fast over the speed limits, excuses don't matter."
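In code terms, Gottehrer's traffic-court scenario is closer to a lookup table than to an exercise of judgment. A minimal sketch, with speed tiers and fine amounts invented purely for illustration, might look like this:

```python
def speeding_penalty(speed_mph: int, limit_mph: int) -> int:
    """Return a fine based only on how far over the limit the driver was.

    The tiers and dollar amounts are hypothetical; the point is that a
    rule-heavy determination leaves no room for excuses or discretion.
    """
    over = speed_mph - limit_mph
    if over <= 0:
        return 0
    if over <= 10:
        return 100
    if over <= 20:
        return 250
    return 500

print(speeding_penalty(82, 65))  # 17 mph over the limit -> 250
```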

'Black Box' AI-Based Risk Assessment

One of the most controversial deployments of AI within courts is the use of risk assessment tools. While many of these tools do not leverage AI and are primarily used to determine what programming and supervision an offender receives, some, like Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), are a different story. Crane noted that COMPAS uses AI to calculate risk scores: it compares "defendant answers to questions and personal factors against a nationwide data group and comes up with a score."
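COMPAS' actual model is not public, so any code can only gesture at the general process Crane describes: score a defendant's answers and personal factors, then place that score relative to a nationwide reference group. The features, weights, and reference data below are entirely invented for illustration:

```python
import bisect

# Invented features and weights; COMPAS' real factors and weights are proprietary.
WEIGHTS = {"prior_arrests": 2.0, "age_at_first_arrest": -0.5, "unstable_housing": 3.0}

def raw_score(answers):
    """Weighted sum of a defendant's questionnaire answers and personal factors."""
    return sum(WEIGHTS[name] * value for name, value in answers.items())

def risk_decile(score, reference_scores):
    """Place a raw score within a (hypothetical) nationwide reference group,
    returning a 1-10 decile the way COMPAS-style tools report risk."""
    ordered = sorted(reference_scores)
    percentile = bisect.bisect_left(ordered, score) / len(ordered)
    return min(10, int(percentile * 10) + 1)

# A small made-up "national" sample and one made-up defendant.
national_sample = [raw_score({"prior_arrests": p, "age_at_first_arrest": a, "unstable_housing": h})
                   for p in range(6) for a in (16, 20, 30) for h in (0, 1)]
defendant = {"prior_arrests": 3, "age_at_first_arrest": 19, "unstable_housing": 1}
print(risk_decile(raw_score(defendant), national_sample))
```

In a sketch like this, the decile is only as meaningful as the reference group it is computed against, which is precisely the concern the panelists raise about relying on national data.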

The tool's risk scores have been used to inform sentencing decisions, most famously in Wisconsin, where the state Supreme Court's decision in State v. Loomis allowed judges to continue using the tool so long as they understood its limits and the risk scores were not determinative of a sentence.

But the case did little to stem concerns around COMPAS' use. "No one knows how COMPAS weighs the risk factors or determines the scores," Crane said, noting that the algorithms behind the tool are proprietary and cannot be verified independently of COMPAS' developer.

Still, while it is not known how the AI behind COMPAS works, it is known that the tool calculates its risk scores from national data. Gottehrer said this raises its own concerns given the social and economic differences among demographic groups across the country.

She explained that a risk assessment tool may associate homelessness with a higher risk of failing to appear in court for a hearing. But a homeless person in New York City, where housing costs are higher, could still have a phone and be reachable, and could have greater access to public transportation than someone in other parts of the country, she said.

Forrest also argued that it's concerning to use national arrest rate data as a standard because such data can "vary significantly by time frame." As an example, she noted that "some would argue the stop-and-frisk time in New York resulted in the over-arrests of black men … so if you're using a data set that's running across a period of time … it's going to be picking that up as normative."

The AI Best Left Out of Courts

There is one use of AI that judges and attorneys likely don't want coming to a court near them: deepfakes. These fraudulent but convincing video and audio clips, created with AI editing tools, aren't just a concern for companies and elected officials. They can also leave court officials doubting the veracity of certain multimedia evidence.

"Deepfakes are an evidentiary nightmare," said Forrest, adding, "imagine what that is going to do to [our ability to] utilize video, [now we'll] say, did that really happen? … That's AI, AI has enabled that."