When AI Speaks, Is It Protected?
As Americans, we are used to certain protections for speech—rights arising from the First Amendment to the U.S. Constitution. Does or will a “talking AI machine” have the same rights? Do we want them to? Do we need them to?
June 03, 2019 at 01:00 PM
Artificially intelligent devices are starting to talk. We hear them all the time, a rising cacophony: Siri and Alexa devices have been joined by all manner of talking AI performing the role of customer service representatives, routinely asking us for information, responding to questions and solving problems. Google Translate can take a natural English-language sentence and translate it for us out loud. AI machines' conversational capabilities are increasing dramatically every year.
Today, the speech in which AI machines engage is generally agency-based: acting as an assistant to a human. Siri and Alexa devices “hear” our commands and speak to us about the task we have asked them to do, or ask us questions about a task they “think” we may want them to do (“Would you like me to schedule the dentist appointment?” “Would you like me to call an Uber?” “Would you like me to reorder Tide pods?”). Customer service “representatives” that are in fact AI machines ask us to “describe the issue” we would like to talk to our bank or cable company about, and they can then respond with certain information. In both instances, the AI machines are operating within the bounds of the human “task master.”
Function and design create device characteristics with significant speech implications. To name just two: (1) memory of varying scope, depth and duration (which allows for ex post discovery of speech content); and (2) remote, third-party access (which allows for speech control). Over time, as digital assistants increasingly speak to us and for us, technical capabilities to monitor and control such speech will increase. Home-based AI assistants designed to interact with cloud applications are also interacting with the third parties responsible for those operations.
What does this mean, practically? It means that speaking with digital assistants is not private in any traditional sense. When we speak to Siri, she hears us, but she may not be the only one: Technology can be implemented to allow others to hear us as well. Today, we assume (usually correctly) that what we say to Siri or Alexa stays with them; whatever memory they have of us, our current or prior commands, they keep to themselves. We also assume they forget what we have told them, when their memories are in fact a function of human design choices. We don't worry that our home devices will “tell on us” to some third party we can't even see—thereby revealing what we have said, done or tried to do. Indeed, the very concept of Siri as a tattle-tale seems the stuff of a fictional Orwellian world. But, pause for a moment and consider the technology—we are closer to this world than many choose to imagine.
AI monitoring can lead to alterations in Siri's assigned tasks; it can determine her method of accomplishing them, or just keep a record of what you asked her to do. There are dystopian versions of AI speech interference already operational in parts of the world today: Tasks given to AI machines in parts of China are reviewed for social acceptability and advisability.
The First Amendment protects our ability to speak. In America, we are accustomed to a broad concept of “free speech” that provides an ability to state the facts as we perceive them, to express opinions or viewpoints—even unpopular ones, perhaps particularly the unpopular ones. Our 2019 version of free speech allows us to speak (whether orally or by way of the written word) without prior restraint, and to be free from penalty for what we have said unless certain limited circumstances are present—for instance, penalties may be imposed for speech that incites violence, or that slanders or defames another. Private institutions have long been able to create their own rules regarding acceptable speech—but without a doubt, we are used to freedom of speech in our homes and on street corners.
Antithetical to principles underlying the First Amendment is conduct that chills speech. Conduct that chills speech impacts discourse; an impacted discourse inhibits our ability to have a truly discursive democracy. That is, once we cannot freely speak, we do not freely speak, and we do not freely share ideas, needs, thoughts and concerns.
In the context of AI machines that can do the talking for us, or who act as our audience, does the First Amendment apply? Acting as a human agent, does the AI (that increasingly thinks for itself and decides what to say and how to say it), simply assume the rights of its human task master? Or, because it is machine-based, do we assume none of our very human constitutional rights apply to its actions? What actions vis-à-vis AI devices can in fact chill human speech? And do we need First Amendment protections for AI-based speech to continue to provide broad protections to human speech?
Let us pause for a moment to reflect on a few of the implications of simply categorizing AI machines as just another device that can store information (ignoring for a moment that it can do much more, including creating the very information that is stored). By categorizing AI machines in this way, we allow for disclosure to third parties.
The First Amendment is not a doctrine that addresses third-party access to our speech; it is not a doctrine that provides a developed legal basis to avoid complying with (for instance) a subpoena for speech recorded in the memory of Siri or Alexa. Siri and Alexa have unique utility and ubiquity. We have an implicit assumption that what we say to them, and in front of them, remains private. It is reasonable to assume that known third-party monitoring and access would, at least sometimes, chill speech. But how is what is resident in the memory of Siri and Alexa any different from what is recorded in a text or written in an email, both of which have long been assumed to be subject to legal disclosure rules? (And there is an additional complexity inherent in the fact that Siri is itself a cellphone-based application.)
Among the differences between a text and what Siri and Alexa “know” is that for the former, there is an expectation of transmittal to a third party at the time of the speech initiation (that is, when the text is written or the statement to Siri or Alexa is made). There are certainly vast differences between the expectation that a text will be seen by one person (the intended recipient) versus a courtroom; but what is said to Siri and Alexa (or in front of them) may not be intended for external transmittal (this is different in instances in which a device is tasked with drafting a dictated text or email). Another difference is the basic concept that Siri or Alexa is simply an extension of oneself—doing that which we could do for ourselves if we chose to. At the moment, well-known doctrines concerning disclosure of digital information apply to disclosures of these early AI devices. We need to consider whether we should be making changes in the legal framework governing disclosure for digital devices, for the relatively rudimentary AI devices as they exist today and the far more sophisticated AI devices that will be with us very shortly.
A near-term issue is, therefore, whether requiring production of our speech resident within Siri or Alexa, or their speech on our behalf to third parties, has a chilling impact on speech antithetical to First Amendment protections.
And while we are at it, it is worth pausing to reflect on a few questions only a little further down the road: Do we want free speech protections to extend to devices that communicate on their own? As AI machines take on additional functions in our society, do we want to ensure that our First Amendment framework prohibiting most prior restraints is available for these devices? Do we want to ensure that they are able to say what they need to say, on our behalf? Perhaps the answer lies in a basic principle implicit in the First Amendment: Allowing free expression of ideas from a source—human or AI machine—seeking to communicate for and about our society, is fundamental to our discursive democracy.
Katherine B. Forrest is a partner in Cravath, Swaine & Moore's litigation department. She most recently served as a U.S. District Judge for the Southern District of New York and previously served as Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice. She has a forthcoming book titled “When Machines Can Be Judge, Jury and Executioner: Artificial Intelligence and the Law.”