Does Artificial Intelligence Need a General Counsel?
This first installment of a three-part series examines how the Law of Unintended Consequences affects general counsel dealing with the evolution of AI, machine learning, and autonomous decision making.
December 18, 2018 at 07:00 AM
5 minute read
In this three-part series, Alan Brill and Elaine Wood look at the evolution of artificial intelligence, machine learning and autonomous decision making, and at how the skills of the general counsel are likely to be critical in protecting the organization from avoidable risks. Part 2 explores what happens to developers of software when AI evolves in a way that results in an unintended violation of national laws, while Part 3 examines where management enters the picture in AI applications.
Machine learning and autonomous decision making are the hallmarks of artificial intelligence (AI). We see autonomous decision making at work in today's cars with automated crash avoidance systems. Using a combination of sensors and computers, the system “learns” and evolves to be better and better at solving the problems it is programmed to respond to.
The system evolves quickly and can respond and adapt faster than a human. AI, it is argued, can respond to challenges such as cyber attacks faster and more effectively than humans. In certain circumstances, human operators would simply be too slow to react to the velocity of attacks that characterizes modern hacking and state-sponsored cyber warfare. This raises the specter of a machine deciding on its own to launch a weapon: shades of "Dr. Strangelove."
Worrying About Artificial Intelligence is Not New
For a long time, we have recognized the need to put limits on what an AI system can do. The most famous statement of these limitations was written by Isaac Asimov for his short story "Runaround," published in 1942. These have become known as Asimov's Three Laws of Robotics, which can be stated simply as:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In a later novel, Robots and Empire, Asimov added an additional and more basic “Zeroth Law”—a robot may not harm humanity, or, by inaction, allow humanity to be harmed.
Consider an AI system dedicated to carrying out the cyber defense of an organization. It is likely that the system would, as part of its processing, attempt to determine the source of an attack, or, in other words, to seek attribution. But what if the system determines (based on inquiries from managers following prior attacks) that a way to stop the attack, or perhaps to better determine exactly what happened or what data was stolen, is to take action against what it believes to be the perpetrator of the attack? Over time, the AI system could optimize this offensive capability as part of its machine-learning-based evolution to better carry out its cybersecurity mission.
Cyberspace is Not Real
So what's the problem? The AI system seems to be following Asimov's laws:
- It is working in cyberspace and dealing with other systems and data, not human beings, so apparently there's no problem with the First Law.
- It seems to be following the Second Law, because it is following its developer's instruction to protect the system.
- And it seems to be following the Third and Zeroth Laws, because the AI system is protecting itself without a perceived risk to humans or humanity.
Unfortunately, the three (or four if you count the Zeroth) laws of robotics aren't the only laws that apply. We talk about “cyberspace” as if it were a physical reality. It isn't! It is, at best, a way of thinking about where the interactions between systems take place. But as much as the concept of cyberspace is widely accepted and understood, it doesn't exist. The fact is that nothing actually happens in cyberspace—it happens in the real world. There are no computers in cyberspace. They are all in the real world. Signals travel through wires, cables or wireless signals in the real world. They pass through real nation-states. And those nation-states have real laws.
The laws of the country where the AI is operating or is controlled are in force. And the laws of countries that the AI's communication passes through or where an attacker is located may also be relevant. There is no “free pass” because of the notion of cyberspace. Artificial intelligence is not above the law.
For example, while it would be legal to identify the Internet Protocol (IP) address associated with an attack, going further could be a crime under national laws: trying to break into the attacker's system, attempting to take the attacker's system offline, running software designed to defeat the attacker's security in order to see what is stored on its servers, or planting any form of software or anything else that could be interpreted as an act of cyber offense.
While striking back against a cyberspace attacker seems like a natural reaction to a cyber attack, it doesn't nullify real-world laws.
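The line the authors draw, that passive attribution is lawful while "hacking back" is not, can be made concrete as a policy guardrail in the defense system itself. The sketch below is purely illustrative (every action name is hypothetical, not drawn from any real product): it shows one way an autonomous cyber-defense agent might be constrained to an allowlist of defensive actions, so that machine-learning-driven "optimization" cannot escalate into an offensive act.

```python
# Illustrative guardrail for an autonomous cyber-defense agent.
# All action names are hypothetical examples for this sketch.

# Actions that stay inside the defender's own perimeter: observing,
# logging, blocking inbound traffic, and recording a source IP for
# attribution are generally defensive in character.
DEFENSIVE_ACTIONS = {
    "log_event",
    "record_source_ip",
    "block_inbound_ip",
    "alert_analyst",
}

# Actions that reach into another party's systems and could violate
# national computer-crime laws, regardless of "cyberspace" framing.
OFFENSIVE_ACTIONS = {
    "scan_attacker_host",
    "exploit_attacker_host",
    "take_attacker_offline",
    "plant_software",
}

def vet_action(proposed: str) -> bool:
    """Allow a proposed response only if it is on the defensive allowlist."""
    if proposed in OFFENSIVE_ACTIONS:
        return False
    # Unknown actions are rejected too: the default is "deny".
    return proposed in DEFENSIVE_ACTIONS

# A learning agent might propose escalating responses over time;
# the guardrail rejects anything that leaves the defender's perimeter.
assert vet_action("record_source_ip") is True
assert vet_action("exploit_attacker_host") is False
assert vet_action("some_new_learned_action") is False
```

The design point of this sketch is the default-deny posture: because the system evolves, any action its learning process invents that a human has not reviewed is refused, which is one way to keep real-world laws, not just Asimov's, in the loop.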
Taking an action that seems reasonable, but which actually causes harm (damage to or loss of data, financial loss, reputational damage, even criminal violations), can occur without intent. This is known as "The Law of Unintended Consequences," and forgetting about this "law" can have significant consequences, as we will see in Part 2 of this series.
Alan Brill is a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School. Elaine Wood is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor.