AI in the Courts: The Good, the Concerning and the Frightening
A panel at the New York State Bar Association's Annual Meeting looked at the ways AI can benefit, complicate and stifle the work of judges and their courtrooms.
January 29, 2020 at 02:32 PM
Artificial intelligence won't mean the end of the legal industry (or world) as we know it. But neither will it create a utopia in which new heights of insight and efficiency close the access-to-justice gap and take much of the guesswork out of the legal profession. The reality of AI's future, and present, is far more ambiguous and complicated than that.
At the "Emerging Technologies in Litigation" panel at the New York State Bar Association's Annual Meeting, local and federal judges, an e-discovery researcher and an emerging technology attorney came together to discuss the ways AI is, and likely will be, used in today's courtrooms.
While some use cases presented potential benefits, others were troublesome, and one was downright frightening. Here's a look at the highlights from the panel:
AI's Place in Judicial Decisions
The use of AI in legal research is one area that has gotten a fair amount of attention in the legal world, both for better and worse. Gail Gottehrer, founder of an eponymous law firm focusing on emerging technologies, explained that such research platforms use past judicial decisions to "predict behavior and outcomes that different legal strategies will produce."
And to some extent, she said, this shouldn't be contentious. "Law is based on precedent, [and] if your case is similar and has similar factors to another case, the results shouldn't be too surprising."
Still, Gottehrer noted there is a limit to how effective these predictions can be. "Cases vary based on facts, the facts people view as significant, and that's judgment, which is what AI does not do. … So will it guarantee a result to predict what a judge is going to do? I would say no."
Even so, some were optimistic that this use case of AI could ultimately prove beneficial. "I would love to know how I'm going to rule on any case because I'm very busy," joked Judge Melissa Crane of New York City Civil Court.
While completely accurate predictions may be a far-off proposition, such legal research tools can offer insight into how a judge has ruled in the past, which Maura Grossman, professor at the University of Waterloo in Ontario, noted can be helpful in "bringing explicit biases to attention."
"I think this can be a check on bias," she said. "Wouldn't it be helpful to know if you decide [certain] cases exclusively for plaintiffs?"
Of course, today's AI doesn't just collect and predict judicial decisions, but in some cases, makes those decisions itself. Katherine Forrest, former U.S. district judge for the Southern District of New York, pointed to the holographic judges currently in use in China. "They rolled out the utilization of a couple of internet courts where they have AI judges and have litigated to verdict thousands of cases, and Estonia just announced it is following suit for small claims."
While Forrest expressed concern over just how much discretion AI has in these judgments, Gottehrer noted there can be some cases where AI judges could make sense. "I think there is a place for it, something that is very rule-heavy," she said, pointing to traffic courts where a certain infraction could automatically incur a penalty as an example. "If you're driving that fast over the speed limits, excuses don't matter."
'Black Box' AI-Based Risk Assessment
One of the most controversial deployments of AI within courts is the use of risk assessment tools. While many of these tools do not leverage AI and are primarily used to determine what programming and supervision an offender receives, some like Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) are a different story. Crane noted that COMPAS uses AI to calculate risk scores by comparing "defendant answers to questions and personal factors against a nationwide data group and comes up with a score."
The tool's risk scores have been used to inform sentencing decisions, most famously in Wisconsin, where the state Supreme Court, in State v. Loomis, allowed judges to continue using the tool so long as they understood its limits and risk scores were not determinative of sentencing.
But the case did little to stem concerns around COMPAS' use. "No one knows how COMPAS weighs the risk factors or determines the scores," Crane said, noting that the algorithms behind the tool are proprietary and cannot be verified independently of COMPAS' developer.
Still, while it is not known how the AI behind COMPAS works, it is known that the risk assessment tool calculates risk scores based on national data. Gottehrer said this also raises concerns given the social and economic differences of various demographics across the country.
She explained that a risk assessment tool may associate homelessness with higher risk of failure to appear in court for a hearing. But someone in New York City, where housing costs are higher, could still have a phone and be reachable, and have greater access to public transportation, than someone in other areas of the country, she said.
Forrest also argued that it's concerning to use national arrest rate data as a standard because such data can "vary significantly by time frame." As an example, she noted that "some would argue the stop-and-frisk time in New York resulted in the over-arrests of black men … so if you're using a data set that's running across a period of time … it's going to be picking that up as normative."
The AI Best Left Out of Courts
There is one instance of AI that judges and attorneys likely don't want coming to a court near them: deepfakes. These fraudulent but convincing video and audio clips created by AI editing tools aren't just a concern for companies and elected officials. They can also plague court officials with doubts about the veracity of certain multimedia evidence.
"Deepfakes are an evidentiary nightmare," said Forrest, adding, "imagine what that is going to do to [our ability to] utilize video, [now we'll] say, did that really happen? … That's AI, AI has enabled that."