Artificial Intelligence

What if you no longer had to go to a courthouse for hearings or trial? What if your case could be handled at any time of the day or night, from any place at all, completely online? And what if the judge presiding over your case was a hologram, that is, a 3D image that appeared on a screen: an AI hologram that could answer questions, preside over proceedings, issue verdicts and then explain them?

Are you saying to yourself, perhaps with a smirk, "not in my lifetime!"? If so, you're wrong. It's happening right now, in China.

China's first AI-powered court opened in Hangzhou in 2017 and has handled more than three million cases; on the auspicious date of 9/9/18, the Beijing Internet Court opened for business. In the past year, it has handled tens of thousands of cases. Estonia has announced its own plan to deploy AI judges this year or next to hear smaller cases.

The Chinese Internet courts handle a variety of disputes that share a common characteristic: they relate to businesses or conduct occurring online. The vast majority of such cases involve intellectual property, but the docket also includes contract disputes relating to e-commerce, financial disputes arising from online conduct, loans acquired or performed online, domain name issues, property and civil rights cases involving the Internet, product liability cases relating to products acquired online, and certain administrative agency disputes.

A few statistics relating to the Beijing Internet Court tell an interesting story: the average duration of a case is 40 days; the average dispositive hearing lasts 37 minutes; almost 80% of the litigants before the Chinese Internet Courts are individuals and 20% are corporate entities; and 98% of the rulings have been accepted without appeal (appeal rights and proclivities are obviously different between the U.S. and Chinese legal systems).

The judges that "appear" by hologram are artificial creations; there is no actual judge sitting in a courtroom whose image is beamed to a mobile device. The hologram-judge looks like a real person but is in fact a synthesized, 3D composite image of different judges, sort of like the "Mash Up" toys that combine parts of different superheroes. Instead of engaging in child's play, though, this hologram-judge sets schedules, asks litigants questions, takes evidence, and issues dispositive rulings.

In one back-and-forth, a robed AI judge asked, "Does the defendant have any objection to the nature of the blockchain evidence submitted by the plaintiff?"; the defendant responded, "No objection."

In addition to providing in personam-like services, the Chinese Internet court system also creates a wide array of legal documents. Using AI capabilities, the court system autonomously creates indictments, investigative demands and written rulings.

In designing and training the AI system underlying the Chinese Internet courts, the designers drew on data sets containing vast numbers of prior decisions, along with databases of regulations and protocols. The courts have used this training to adopt standards for determining the authenticity, relevance and validity of certain evidence, to issue a variety of routine documents that assist with the progression of a case, to conduct online mediations, and to issue rulings.

China has publicized these courts as demonstrating "achievements in the field of Internet justice … at the forefront of the world" (White Paper on the Application of Internet Technology in Judicial Practice, Beijing Internet Court, Aug. 17, 2019, p. 21), as building a "court without a fence" (p. 6), and allowing closer adherence to "neutral judgment."

Is the Chinese system one that we will move towards, or run away from? Is it utopian or dystopian?

When we consider the implications of hologram-judges and AI-dispensed justice, concerns about fairness should be front and center. In prior articles I have written about how AI is trained to accomplish a task based on data sets, and how those data sets are only as good as the data within them. AI that learns from a data set accepts the data as useful; the data is its teacher. If we don't like the teacher, we need to be careful about using it to teach our AI.

Unless instructed to ignore or decrease the weight given to certain data, AI uses that data to learn. If a data set includes judgments that reflect discriminatory practices, outdated concepts of what constitutes a crime, or outmoded theories of justice, AI trained on that data will reflect that information back. AI does not have some supernatural ability to understand how or why a human judge's prior decisions may have been based on an incorrect understanding of the facts or law, may reflect implicit or explicit bias, or may rest on the mores of a different era. AI cannot clean up our history to make it less discriminatory.
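To make the training dynamic concrete, here is a minimal, hypothetical sketch in Python. The data, group labels and weights are all invented for illustration; this is not a description of any system the Internet courts actually use. It shows how a naive "judge" model simply memorizes the rates in its historical record, and how a human choice to down-weight suspect decisions is what changes the outcome.

```python
# Hypothetical illustration: a "judge" model that only memorizes historical rates.
# The data, group labels and weights below are invented for demonstration.

# Synthetic historical rulings: (group, ruled_against) pairs.
# Group "B" was historically ruled against far more often than group "A".
history = [("A", 0), ("A", 0), ("A", 1), ("A", 0),
           ("B", 1), ("B", 1), ("B", 1), ("B", 0)]

def train(records, weights=None):
    """Learn each group's rate of adverse rulings, optionally weighting records."""
    weights = weights or [1.0] * len(records)
    totals, adverse = {}, {}
    for (group, outcome), w in zip(records, weights):
        totals[group] = totals.get(group, 0.0) + w
        adverse[group] = adverse.get(group, 0.0) + w * outcome
    return {g: adverse[g] / totals[g] for g in totals}

# Trained on raw history, the model simply reflects the historical disparity back.
print(train(history))            # {'A': 0.25, 'B': 0.75}

# Down-weighting suspect decisions (here, adverse rulings against group "B")
# changes what is learned -- but a human must decide to do that, and by how much.
weights = [0.25 if (g == "B" and y == 1) else 1.0 for g, y in history]
print(train(history, weights))   # the disparity shrinks, by construction
```

The sketch has no view about whether the historical rates it memorizes were just; it learns whatever it is fed, at whatever weight it is fed, which is exactly the worry.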

Before we deploy AI judges, we need agreement on what they should be taught and how. A critical piece of this relates to the theory of justice that we teach the AI deployed in our justice system to apply. Judicial decisions necessarily reflect a judge's application of a theory of justice, even if a judge does not consciously understand that he or she is doing so. In law school some of us may have taken classes in which the Rawlsian "justice-as-fairness" theory was pitted against "utilitarian" or "libertarian" theories. Even within these overarching theories, there are different views regarding the purposes of criminal justice: rehabilitative, retributive, just deserts, etc. Should justice be distributive? Based on divine command, natural law or justice as virtue?

What if we ignore all of these complexities and simply treat our amalgamated history of judicial decision-making as adequately reflective of enough different sorts of justice to teach AI a little something about all of it? In other words, can't we assume that any theory of justice we care about is already embedded in judicial decision-making, sufficiently to teach AI what we think constitutes "justice"? Maybe. But maybe we don't want a decision maker of our future to make all of the same mistakes we made in our past; maybe we think we can strive towards a more just society, and that we don't want AI to be caught in whatever theory happens to have risen to the top at the time it is let loose on a data set. And in all events, there is a serious concern that algorithmic AI is most easily oriented towards quantitative decision-making, which corresponds most closely with a utilitarian theory of justice that many view as happily left in the dustbin of history. We certainly don't want AI to decide cases simply to achieve what might make the most people happy, at the cost of decisions unfair to the minority.
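A toy calculation makes that last worry concrete. The population sizes, rule names and payoffs below are invented for illustration only: a rule that maximizes aggregate "utility" can still leave every member of a small minority worse off, and a purely aggregate-maximizing system will pick it anyway.

```python
# Hypothetical illustration of the utilitarian worry: a rule that maximizes
# total "utility" can still leave a small minority systematically worse off.
# All population sizes and payoffs below are invented for demonstration.

MAJORITY, MINORITY = 90, 10  # population sizes

# Two candidate rules and the per-person utility each gives to each group.
rules = {
    "rule_favoring_majority": {"majority": 1.0, "minority": -2.0},
    "more_even_rule":         {"majority": 0.6, "minority": 0.6},
}

def total_utility(payoffs):
    return MAJORITY * payoffs["majority"] + MINORITY * payoffs["minority"]

for name, payoffs in rules.items():
    print(f"{name}: total utility = {total_utility(payoffs):.0f}, "
          f"minority per-person utility = {payoffs['minority']}")

# A purely aggregate-maximizing decision-maker picks the rule with the larger
# total (70 vs. 60), even though it makes every member of the minority worse off.
best = max(rules, key=lambda name: total_utility(rules[name]))
print("utilitarian choice:", best)
```

The arithmetic is trivial, which is the point: nothing in the objective itself asks whether the minority has been treated fairly.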

AI playing a role in decision-making is not the stuff of science fiction. Focusing on the theories behind decision-making is an exercise for today and not tomorrow.

Katherine B. Forrest is a partner in Cravath, Swaine & Moore's litigation department. She most recently served as a U.S. District Judge for the Southern District of New York and is a former Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice.