AI and the Confrontation Clause
This second article in a series on Artificial Intelligence and the Law addresses the question: Do witnesses against an accused have to be human?
May 03, 2019 at 12:45 PM
7 minute read
Do witnesses against an accused have to be human? Reading this in 2019, most would think this a ridiculous question and assume that implicit in the category of “witness” is an assumption of human-ness, of “personhood.” What we seek from witnesses is truthful information: What did they see, hear, touch, smell; an ideal witness is one with an acute set of senses, an impeccable memory, one not prone to exaggeration and lacking a personal agenda.
But where is it written that witnesses must be human? And what does it mean, anyhow? Are humans really better positioned than machines endowed with advanced AI capabilities to be truth-tellers? Can't we all agree that humans are more likely to forget or misremember details than digital memory or tailor testimony to avoid a harsh truth—in short, to lie?
We are approaching a time, some in the AI field would argue rapidly—others would argue less so, but few would be entirely dismissive—when AI capabilities will allow for testimonial functionality. Put differently, devices (whether some form of robot, a digital personal assistant such as Amazon's Alexa-enabled Echo or Apple's Siri, or some other “smart” device) are acquiring functionality that allows them to gather, absorb and create information, as well as convey it orally. These devices can be queried today, and soon the ability to query a device will result in conversational answers, that is, testimony. Devices already have significant capability to provide evidence—our cellphones (carrying our emails and texts, among other things) are example A. But we will soon be able to ask a device for oral answers to questions such as who was in a particular location, what were they doing, what time of day was it, what were the weather conditions, what was the line of sight … who said what to whom, who drew first blood …
More than this, predictive functionality embedded in many AI devices will allow answers that exceed information provided by human actors. Queries may include “Could the bullet have been fired from behind that tree?”, “Were the voices angry, tense, sad, frustrated?”, or even “Is it more likely that A shot B or that B shot A?”
Some responses to these questions would be factual, in the nature of percipient testimony; others would be in the nature of “expert” testimony.
In what is referred to as the “confrontation clause,” the Sixth Amendment to the U.S. Constitution provides an accused with a right to confront any witness against him or her. The clause traces its roots beyond old English common law to principles applicable in Roman judicial proceedings. It is premised on the belief that the crucible of truth—that is, uncovering lies—is cross-examination. The Sixth Amendment provides an accused with the right to have a witness testify in his or her presence, to look an accuser in the eye, and to reveal inconsistencies, lack of recollection, or bias. Cross-examination serves the twin goals of finding truth and promoting confidence in the fairness of the judicial process.
There is little doubt that in the relatively near future (Five years? Seven? Ten? Not longer), ubiquitous AI machines will not only possess information relevant to judicial proceedings but will be able to convey it in a testimonial manner. That is, AI machines will be capable of orally responding to inquiries—answering straightforward questions such as “Can you tell me whether John Doe was at home on the day in question?”, or “Can you tell me what John Doe said?”, or “Can you tell me whether John Doe ordered fertilizer and the other products used to make an explosive device?”, or “Can you tell me whether John Doe had a regular practice of viewing the following videos on the website?”
But more than these routine matters, AI machines will be able to answer predictive questions, akin to an expert witness: “Could a bullet fired from location X have hit the decedent?”, “Could the accused's medication have led to psychosis?”, “Are the bloody shoe-prints consistent with those owned by the accused?”
Should testimony by a non-human be allowed? Would it raise confrontation clause issues? There is an almost immediate temptation to roll one's eyes at such questions—most lawyers would quickly laugh and answer, “Of course witnesses have to be human.” But we need to ask whether that is true, because that question is going to be put to us before long.
The function of a percipient witness is to provide facts; to provide as accurate and as truthful a version of events as possible. It is technically clear that AI machines will be capable of providing facts—the same or more facts than humans on a given issue—and do so more reliably than humans. But how do we test that reliability? How do we know why the machine gave a particular answer, whether it is “telling the truth”?
The confrontation clause provides the accused with a right to test truth by cross examination. To the same extent that AI will have the ability to answer a question in the first instance, it will have an ability to respond to questions on cross-examination regarding that answer. For instance, in response to the question “How do you know he was home at X-time?” AI will have the ability to refer to a variety of cameras, smart devices that record ingress and egress to a dwelling, use of a car, use of a smartphone associated with a GPS location, use of a computer with a unique IP address, and to put all of this information together as a responsive answer. The veracity of an AI machine's answer to a fact question could be further tested by reference to the underlying data: the actual video, the recordation of the IP addresses, etc. The AI's underlying programming might also be subject to review and study for bias or instructions that could lead to error.
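The kind of multi-source synthesis described above can be illustrated with a short sketch. Everything here is hypothetical—the event records, the source names, and the `was_home` logic are invented for illustration, not drawn from any real device API; the point is only that an answer can be returned together with the underlying observations that a cross-examiner could probe:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    source: str          # e.g., "door camera", "smartphone GPS" (hypothetical)
    timestamp: datetime  # when the observation was recorded
    at_home: bool        # whether this observation places the person at home

def was_home(observations, start, end):
    """Answer 'was the person home between start and end?' and return
    the corroborating observations -- the 'basis' for the answer."""
    relevant = [o for o in observations if start <= o.timestamp <= end]
    supporting = [o for o in relevant if o.at_home]
    return (len(supporting) > 0, supporting)

# Hypothetical records drawn from several devices:
obs = [
    Observation("door camera", datetime(2019, 5, 1, 18, 5), True),
    Observation("smartphone GPS", datetime(2019, 5, 1, 18, 30), True),
    Observation("car telematics", datetime(2019, 5, 1, 22, 0), False),
]
answer, basis = was_home(obs, datetime(2019, 5, 1, 18, 0),
                         datetime(2019, 5, 1, 19, 0))
# answer is True; basis holds the two corroborating observations
```

The design choice worth noting is that the function never returns a bare conclusion: it returns the conclusion plus the records behind it, which is precisely what would make the answer testable on cross-examination.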
In terms of more predictive, expert-like questions, AI's “opinions” could be queried and its basis for predictions elicited. For instance, with regard to the bloody footprints it might refer to mapping software for the footprint compared to known measurements of shoes previously purchased through the Internet.
If an AI machine is more likely than a human to provide an accurate answer, and the basis for its answer can be queried—or “cross-examined”—are we losing or gaining anything by its role as a witness? Certainly there are lawyers who have been able to undermine truthful testimony through skillful questioning, and this art would no longer have the value or utility it may have today. Do we think there is something “special” about a human witness other than truth-telling? Perhaps sometimes we depend upon human emotion to demonstrate to a fact finder the importance and weight of an issue; can a machine replicate that? But aren't there contexts when the emotion may cause a fact finder to lean away from pure facts into a realm driven more by empathy?
“Robot” witnesses are in our future; they are not the stuff of science fiction—they are around the corner. And their ability to replace human witnesses is real. Our task—a task for today and not the future—is to consider the complicated questions of whether we want or would accept that replacement.
Katherine B. Forrest is a partner in Cravath, Swaine & Moore's litigation department. She most recently served as a U.S. District Judge for the Southern District of New York and was the former Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice. She has a forthcoming book titled “When Machines Can Be Judge, Jury and Executioner: Artificial Intelligence and the Law.”