The Holographic Judge
AI playing a role in decision-making is not the stuff of science fiction. Focusing on the theories behind decision-making is an exercise for today and not tomorrow.
December 30, 2019 at 12:15 PM
What if you no longer had to go to a courthouse for hearings or trial? What if your case could be handled at any time of the day or night, from any place at all, completely online? And what if the judge presiding over your case was a hologram? That is, a 3D image that appeared on a screen—an AI hologram who could answer questions, preside over proceedings, issue verdicts and then explain them?
Are you saying to yourself, perhaps with a smirk, "not in my lifetime!"? If so, you're wrong. It's happening right now, in China.
China's first AI-powered court opened in Hangzhou in 2017 and has handled more than three million cases; on the auspicious date "9/9/18", the Beijing Internet Court opened for business. In the past year, it has handled tens of thousands of cases. Estonia has announced its own plan to deploy AI judges this year or next to hear smaller cases.
The Chinese Internet courts handle a variety of disputes that share a common characteristic: they relate to businesses or conduct occurring online. The vast majority of such cases involve intellectual property; the rest include contract disputes relating to e-commerce, financial disputes arising from online conduct, loans made or performed online, domain name issues, property and civil rights cases involving the Internet, product liability cases relating to products acquired online, and certain administrative agency disputes.
A few statistics relating to the Beijing Internet Court tell an interesting story: the average case lasts 40 days; the average dispositive hearing lasts 37 minutes; almost 80% of the litigants before the Chinese Internet Courts are individuals and 20% are corporate entities; and 98% of rulings have been accepted without appeal (appeal rights and proclivities obviously differ between the U.S. and Chinese legal systems).
The judges that "appear" by hologram are artificial creations—there is no actual judge sitting in a courtroom whose image is beamed to a mobile device. The hologram-judge looks like a real person but is in fact a synthesized, 3D image of different judges, sort of like the "Mash Up" toys that combine parts of different superheros. Instead of engaging in child's play, though, this hologram-judge sets schedules, asks litigants questions, takes evidence, and issues dispositive rulings.
In one back-and-forth, a robed AI judge asked, "Does the defendant have any objection to the nature of the blockchain evidence submitted by the plaintiff?" The defendant responded, "No objection."
In addition to providing these in-person-like services, the Chinese Internet court system also generates a wide array of legal documents. Using AI capabilities, it autonomously drafts indictments, investigative demands and written rulings.
In designing and training the AI system underlying the Chinese Internet courts, developers drew on data sets containing vast numbers of prior decisions, together with databases of regulations and protocols. The courts have used this foundation to adopt standards for determining the authenticity, relevance and validity of certain evidence, to issue a variety of routine documents that move cases along, to conduct online mediations, and to issue rulings.
China has publicized these courts as demonstrating "achievements in the field of Internet justice … at the forefront of the world" (White Paper on the Application of Internet Technology in Judicial Practice, Beijing Internet Court, Aug. 17, 2019, p. 21), as building a "court without a fence" (p. 6), and as allowing closer adherence to "neutral judgment."
Is the Chinese system one that we will move towards, or run away from? Is it utopian or dystopian?
When we consider the implications of hologram-judges and AI-dispensed justice, concerns about fairness should be front and center. In prior articles I have written about how AI is trained to accomplish a task based on data sets, and how a data set is only as good as the data within it. AI that learns from a data set accepts the data as useful; the data is its teacher. If we don't like the teacher, we should be careful about letting it teach our AI.
Unless instructed to ignore or discount certain data, AI uses that data to learn. If a data set includes judgments that reflect discriminatory practices, outdated concepts of what constitutes a crime, or outmoded theories of justice, AI trained on that data will reflect that information back. AI has no supernatural ability to understand how or why a human judge's prior decisions may have rested on an incorrect understanding of the facts or law, reflected implicit or explicit bias, or embodied the mores of a different era. AI cannot clean up our history to make it less discriminatory.
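To make the point concrete, here is a minimal sketch in Python, built on an entirely hypothetical data set rather than any real court records: an "AI judge" trained only to imitate historical rulings faithfully reproduces whatever disparity those rulings contain.

```python
import random

random.seed(0)

# Synthetic "historical rulings" (made-up data, not real court records).
# Each case has a merit score in [0, 1) and a group label. The biased history
# rules against group B at a harsher threshold than the merits alone justify.
def historical_ruling(merit, group):
    threshold = 0.50 if group == "A" else 0.65  # the embedded bias against B
    return "against" if merit < threshold else "for"

history = [
    (merit, group, historical_ruling(merit, group))
    for merit, group in ((random.random(), random.choice("AB")) for _ in range(10_000))
]

# A naive "AI judge" trained only to imitate history: for a new case, it
# looks at how past cases with the same group and similar merit were decided,
# and follows the majority outcome.
def ai_judge(merit, group):
    similar = [r for m, g, r in history if g == group and abs(m - merit) < 0.05]
    return max(set(similar), key=similar.count)

# Identical merit profiles, different groups: the learned judge reproduces
# the historical disparity instead of correcting it.
for group in "AB":
    adverse = sum(ai_judge(m, group) == "against" for m in (i / 100 for i in range(100)))
    print(f"group {group}: ruled against in {adverse}% of identical merit profiles")
```

Nothing in the sketch gives the model any way to know that the harsher threshold for group B was unjust; it simply learns it.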
Before we deploy AI judges, we need agreement on what they should be taught and how. A critical piece of this is the theory of justice that we teach the AI in our justice system to apply. Judicial decisions necessarily reflect a judge's application of a theory of justice, even if the judge does not consciously realize that he or she is applying one. In law school, some of us may have taken classes in which the Rawlsian "justice-as-fairness" theory was pitted against "utilitarian" or "libertarian" theories. Even within these overarching theories, there are different views regarding the purposes of criminal justice: rehabilitative, retributive, just deserts, etc. Should justice be distributive? Based on divine command, natural law or justice as virtue?
What if we ignore all of these complexities and simply treat our amalgamated history of judicial decision-making as reflecting enough different sorts of justice to teach AI a little something about all of them? In other words, can't we assume that any theory of justice we care about is already embedded in judicial decision-making, sufficient to teach AI what we think constitutes "justice"? Maybe. But maybe we don't want a decision maker of our future to make all of the same mistakes we made in our past; maybe we think we can strive towards a more just society, and we don't want AI to be caught in whatever theory happens to have risen to the top at the time it is let loose on a data set. In all events, there is a serious concern that algorithmic AI is most easily oriented towards quantitative decision-making, which corresponds most closely with a utilitarian theory of justice that many view as happily left in the dustbin of history. We certainly don't want AI to pursue whatever makes the most people happy by making decisions that are unfair to the minority.
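That worry can also be made concrete with a toy sketch, again using purely hypothetical numbers: a decision rule that simply maximizes total utility will happily choose an outcome that badly harms a small minority, so long as the majority gains enough in aggregate.

```python
# Purely illustrative utilities for two possible rulings on a recurring
# dispute; none of these numbers comes from any real case.
MAJORITY, MINORITY = 90, 10  # group sizes

utility = {
    "X": {"majority": 1.0, "minority": -5.0},  # majority gains a little; minority loses a lot
    "Y": {"majority": 0.0, "minority": 3.0},   # majority unaffected; minority protected
}

def total_utility(ruling):
    u = utility[ruling]
    return MAJORITY * u["majority"] + MINORITY * u["minority"]

# A purely utilitarian "judge" maximizes the sum of utilities and therefore
# picks X (total 40 vs. 30), even though every member of the minority is
# significantly harmed by that choice.
for ruling in utility:
    print(ruling, total_utility(ruling))
print("utilitarian choice:", max(utility, key=total_utility))
```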
AI playing a role in decision-making is not the stuff of science fiction. Focusing on the theories behind decision-making is an exercise for today and not tomorrow.
Katherine B. Forrest is a partner in Cravath, Swaine & Moore's litigation department. She most recently served as a U.S. District Judge for the Southern District of New York and previously served as Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice.