Artificial Intelligence

Do witnesses against an accused have to be human? Reading this in 2019, most would think the question ridiculous and assume that implicit in the category of “witness” is an assumption of human-ness, of “personhood.” What we seek from witnesses is truthful information: What did they see, hear, touch, smell? An ideal witness is one with an acute set of senses and an impeccable memory, one not prone to exaggeration and lacking a personal agenda.

But where is it written that witnesses must be human? And what does it mean, anyhow? Are humans really better positioned than machines endowed with advanced AI capabilities to be truth-tellers? Can't we all agree that humans are more likely than digital memory to forget or misremember details, or to tailor testimony to avoid a harsh truth—in short, to lie?

We are approaching a time—rapidly, some in the AI field would argue; less rapidly, others would say, though few would be entirely dismissive—when AI capabilities will allow for testimonial functionality. Put differently, devices (whether some form of robot, a digital personal assistant such as Alexa, Echo or Siri, or some other “smart” device) are acquiring functionality that allows them to gather, absorb and create information, as well as convey it orally. These devices can be queried today, and soon the ability to query a device will result in conversational answers—that is, testimony. Devices already have significant capability to provide evidence—our cellphones (carrying our emails and texts, among other things) are example A. But we will soon be able to ask a device for oral answers to questions such as who was in a particular location, what were they doing, what time of day was it, what were the weather conditions, what was the line of sight … who said what to whom, who drew first blood …

More than this, predictive functionality embedded in many AI devices will allow answers that exceed the information human actors could provide. Queries may include “Could the bullet have been fired from behind that tree?”, “Were the voices angry, tense, sad, frustrated?”, or even “Is it more likely that A shot B or that B shot A?”

Some responses to these questions would be factual—percipient testimony—while others would be in the nature of “expert” testimony.

Referred to as the “confrontation clause,” the Sixth Amendment to the U.S. Constitution provides an accused with a right to confront any witness against him or her. The clause traces its roots beyond old English common law to principles applicable in Roman judicial proceedings. It is premised on the belief that the crucible of truth—that is, the uncovering of lies—is cross-examination. The Sixth Amendment provides an accused with the right to have a witness testify in his or her presence, to look an accuser in the eye, and to reveal inconsistencies, lack of recollection, or bias. Cross-examination serves the twin goals of uncovering truth and promoting confidence in the fairness of the judicial process.

There is little doubt that in the relatively near future (Five years? Seven years? Ten years? Not longer), ubiquitous AI machines will not only possess information relevant to judicial proceedings but will be able to convey it in a testimonial manner. That is, AI machines will be capable of orally responding to inquiries—answering straightforward questions such as “Can you tell me whether John Doe was at home on the day in question?”, or “Can you tell me what John Doe said?”, or “Can you tell me whether John Doe ordered fertilizer and the other products used to make an explosive device?”, or “Can you tell me whether John Doe had a regular practice of viewing the following videos on the website?”

But more than these routine matters, AI machines will be able to answer predictive questions, akin to an expert witness: “Could a bullet fired from location X have hit the decedent?”, “Could the accused's medication have led to psychosis?”, “Are the bloody shoe-prints consistent with those owned by the accused?”

Should testimony by a non-human be allowed? Would it raise confrontation clause issues? There is an almost immediate temptation to roll one's eyes at such questions—most lawyers would quickly laugh and answer, “Of course witnesses have to be human.” But we need to ask whether that is true, because that question is going to be put to us before long.

The function of a percipient witness is to provide facts—to provide as accurate and as truthful a version of events as possible. It is technically clear that AI machines will be capable of providing facts—the same facts as humans on a given issue, or more—and of doing so more reliably than humans. But how do we test that reliability? How do we know why the machine gave a particular answer, and whether it is “telling the truth”?

The confrontation clause provides the accused with a right to test truth by cross-examination. To the same extent that AI will have the ability to answer a question in the first instance, it will have the ability to respond to questions on cross-examination regarding that answer. For instance, in response to the question “How do you know he was home at X-time?”, AI will be able to refer to a variety of cameras, smart devices that record ingress and egress to a dwelling, use of a car, use of a smartphone associated with a GPS location, and use of a computer with a unique IP address, and to put all of this information together as a responsive answer. The veracity of an AI machine's answer to a fact question could be further tested by reference to the underlying data: the actual video, the recordation of the IP addresses, and so on. The AI's underlying programming might also be subject to review and study for bias or for instructions that could lead to error.

In terms of more predictive, expert-like questions, an AI's “opinions” could be queried and the basis for its predictions elicited. With regard to the bloody footprints, for instance, it might refer to mapping software comparing the prints to known measurements of shoes previously purchased over the Internet.

If an AI machine is more likely than a human to provide an accurate answer, and the basis for its answer can be queried—or “cross-examined”—do we lose or gain anything by its serving as a witness? Certainly there are lawyers who have been able to undermine truthful testimony through skillful questioning, and that art would no longer have the value or utility it may have today. Do we think there is something “special” about a human witness other than truth-telling? Perhaps we sometimes depend upon human emotion to demonstrate to a fact finder the importance and weight of an issue; can a machine replicate that? But aren't there contexts in which emotion may cause a fact finder to lean away from pure facts into a realm driven more by empathy?

“Robot” witnesses are in our future; they are not the stuff of science fiction—they are around the corner. And their ability to replace human witnesses is real. Our task—a task for today and not the future—is to consider the complicated questions of whether we want or would accept that replacement.

Katherine B. Forrest is a partner in Cravath, Swaine & Moore's litigation department. She most recently served as a U.S. District Judge for the Southern District of New York and was the former Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice. She has a forthcoming book titled “When Machines Can Be Judge, Jury and Executioner: Artificial Intelligence and the Law.”