Artificial intelligence

Do witnesses against an accused have to be human? Reading this in 2019, most would think this a ridiculous question and assume that implicit in the category of “witness” is an assumption of human-ness, of “personhood.” What we seek from witnesses is truthful information: what did they see, hear, touch, or smell? The ideal witness is one with an acute set of senses, an impeccable memory, no tendency to exaggerate, and no personal agenda.

But where is it written that witnesses must be human? And what would such a requirement mean, anyhow? Are humans really better positioned than machines endowed with advanced AI capabilities to be truth-tellers? Can't we all agree that humans are more likely than digital memory to forget or misremember details, or to tailor testimony to avoid a harsh truth; in short, to lie?

We are approaching a time, rapidly in the view of some in the AI field, less rapidly in the view of others, though few are entirely dismissive, when AI capabilities will allow for testimonial functionality. Put differently, devices (whether some form of robot, a digital personal assistant such as Alexa or Siri, an Echo, or some other “smart” device) are acquiring the functionality to gather, absorb, and create information, as well as to convey it orally. These devices can already be queried, and soon querying a device will yield conversational answers, that is, testimony. Devices already have significant capability to provide evidence; our cellphones (carrying our emails and texts, among other things) are Exhibit A. But we will soon be able to ask a device for oral answers to questions such as who was in a particular location, what were they doing, what time of day was it, what were the weather conditions, what was the line of sight … who said what to whom, who drew first blood …