Witnesses come in all shapes and sizes. Some are truth-tellers, some are not; some have good memories, others bad; many have known personal motivations, and others harbor secret agendas. Throughout American history, witnesses have shared little other than a single characteristic: They have been human. That fact, assumed and relied upon, is changing.

Today, artificial intelligence is able to provide much of the same evidence that human witnesses can, with a higher degree of accuracy, unerring memory and an absence of personal motivation. These characteristics are only potential, of course. At least today, humans are behind the design of intelligent machines: determining the inputs into and the weighting of the algorithms that power a variety of functionalities, designing the layers of neural networks, and setting goals and objectives. Human innovators of artificial intelligence have human biases—and human limitations. They have personal motivations that inform how they approach their work, the questions they ask and the issues they seek to address.

There is a tendency to assume that information collected by machines embodies a degree of “truth” that is more certain than the truth a human could or would provide. But it would be a mistake to assume that, in the design of intelligent software, machines or devices, the biases of their human inventors have been magically washed away.

The use of evidence harvested from artificial intelligence implicates important constitutional considerations. This is the first of three articles exploring those considerations, which focus in turn on the Fourth Amendment's protections against unreasonable searches and seizures, the Fifth Amendment's due process rights and the Sixth Amendment's confrontation guarantees. The third and final article will examine the use of the First Amendment in attempts to obtain or limit access to information resident within smart devices.

Our homes and workplaces are populated with dozens of smart devices that collect a variety of information about how we live and work in order to assist us with specialized tasks. Referred to as the Internet of Things (or, in the workplace, the “industrial Internet of Things”), these devices listen, watch, record, learn, predict and speak. Devices know our recent Internet browser searches, remember “erased” searches, know when we have turned the lights on or off and whether that was consistent with our normal pattern, know whether we have watched certain movies or viewed certain clips, and predict whether we will or would want to. There are machines that monitor whether we have driven more or less than usual and whether we have varied our route home; machines that know whom we have dated and can predict whom we should have dated or wanted to date; machines that make and cancel appointments, order new products and know when we are in need of refills; machines that recognize emotion in a voice and respond with soothing words; and machines that vacuum on whatever schedule we ask, picking up fibers from our floors and learning the normal layout of our furniture.

There is a robust body of law governing when we may be required to turn over to third parties information resident on computers, telephones and tablets we own, possess or control. In the civil context, state and federal procedural rules govern disclosure of hard copies, emails and now (increasingly) texts. In sum, if relevant to a matter (and subject to limited exceptions and a burden/expense analysis), such materials are fair game. In the criminal context, warrants that set forth a basis for believing that the same universe of materials has been or is about to be used in connection with a crime, and that describe such materials with requisite specificity, also allow access by law enforcement.

Are smart devices really that different? Similar to emails and texts, they may record and store (in allocated and unallocated space) a large amount of information relevant to a lawsuit or investigation. Indeed, they have the potential to more accurately record who came to or left a location, where and when lights were turned on or off, and whether patterns of behavior (product acquisitions, movements) occurred at all, or were normal or abnormal; they can indicate voice patterns reflecting stress, tension, sadness and elation.

The fact that certain information exists on such devices is starting to be understood and sought. In one relatively recent case, law enforcement served Amazon with a warrant to search for and seize certain recordings and other records made by an Amazon Echo in a murder suspect's home—the scene of the suspected crime. Amazon initially turned over only the subscriber's information and objected to the demand for the recordings. Ultimately, the suspect agreed to voluntarily hand over the information, and Amazon dropped its motion to quash the warrant, eliminating the need for a judicial ruling. Alex Hern, Murder Defendant Volunteers Echo Recordings Amazon Fought to Protect, The Guardian (March 7, 2017). More recently, in November 2018, a New Hampshire judge ordered Amazon to turn over recordings made by an Echo present at the scene of a double murder. Mark Osborne, Judge Orders Amazon To Hand Over Echo Recordings in Double Murder Case, ABC News (Nov. 10, 2018).

There are dueling arguments for and against disclosure of whatever information these devices possess. On the one hand, there is little doubt that most people have a reasonable expectation of privacy in devices sprinkled throughout their homes. On the other hand, that expectation does not prevent disclosure; it may simply require a warrant before seizure and search in the criminal context, or a higher showing of need and relevance in the civil context. In other words, the Fourth Amendment does not prevent access or disclosure, but it does require compliance with procedural safeguards specific to the context.

The Fourth Amendment thus does not prevent our home-based smart devices from providing evidence against us (or, in some instances, providing helpful information, such as support for an alibi). Constitutional law recognizes a legitimate expectation of privacy even in our cellphones, devices understood to store vast quantities of personal information, as the U.S. Supreme Court recently held in Riley v. California, 573 U.S. 373 (2014). There is no legal basis to find differently for information stored on other smart devices. While a warrant may be required, once one is carefully drawn and judicially authorized, disclosure should follow.

Today, disclosure of information on these devices may elicit a “ho hum, how useful can it be?” reaction. But important precedent is being established. Today's devices provide but an inkling of the capabilities of the devices we will all bring into our homes tomorrow. The legal precedent we establish today will, in the absence of legislation, govern disclosures tomorrow. It is important that we ask key questions: What is the appropriate balance between public and private interests? How accurate is the information stored? (The latter question raises particular issues with regard to a device's predictive tools, which may undertake unilateral actions to achieve what the device has been programmed to assume is an objective.)

Human choice has determined whether a machine listens and for how long, whether it records and the quality of the recording, whether it watches and the quality of any video, whether its understanding of something as mundane as a “typical route home” is accurate, and whether its emotional intelligence is correctly calibrated.

Our devices can be repositories of what may be considered evidence; we must scrutinize whether it truly is evidence, and whether it is an accurate reflection of events. We need to understand the devices' capabilities and limitations in this regard, and ask ourselves whether the world we are moving into is truly inevitable, or whether we want it constructed differently.

As we move further into AI (advances in current devices and altogether new machines) that can be queried and can provide the equivalent of testimony, these issues become even more important and the answers less certain. Due process and confrontation issues come into play. These and other questions are explored in the next article in this series: “Artificial Intelligence and the Confrontation Clause.”

Katherine B. Forrest is a partner in Cravath, Swaine & Moore's litigation department. She most recently served as a U.S. District Judge for the Southern District of New York and previously served as Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice. She is the author of a forthcoming book, “When Machines Can Be Judge, Jury and Executioner: Artificial Intelligence and the Law.”