Does Artificial Intelligence Need a General Counsel? Management in the Age of AI
Whether we are discussing autonomous cars, drones or cyber defense systems, recognizing the need to understand and comply with existing laws and regulations is not something that is a future problem or a science fiction fantasy.
February 11, 2019 at 07:00 AM
8 minute read
In this three-part series, Alan Brill, who is a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School, and Elaine Wood, who is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor, look at the evolution of artificial intelligence, machine learning and autonomous decision making and how the skills of the General Counsel are likely to be critical in protecting the organization from avoidable risks.
Part 1 examined how The Law of Unintended Consequences affects general counsel dealing with the evolution of AI, machine learning, and decision making, while Part 2 explored what happens to developers of software when AI evolves in a way that results in an unintended violation of national laws. The final part is below.
Do we need a fourth law (or fifth, if you accept the "zeroth law") to complement Asimov's laws of robotics, that a robot (or AI system) cannot, without human permission, violate the laws or regulations of a nation in which it operates or through which it transmits information?
In looking at a range of systems and incidents over the past few years, we believe that just as developers have come to understand that privacy and cybersecurity have to be "baked into" a system from its initial design, a set of guidelines based on relevant laws and regulations has to be developed and baked into the system design as well.
Doing this will require a different skill set than most other aspects of artificial intelligence. After all, how useful would it be to give a copy of a law to a programmer? Laws and regulations need to be reviewed by counsel and management to determine how best to implement them.
For a global AI application, there may be dozens of laws and regulations written in multiple languages that require compliance, particularly if the application relates to a highly regulated industry like financial services or critical infrastructure. Do we really expect—or want—a programmer interpreting how laws should be built into a specific company's system specifications? Probably not, except in the unlikely event that the programmer also happens to hold a law degree and is an expert on international cyber-law.
Why Artificial Intelligence Needs a General Counsel
In a very real sense, AI systems need the attention of what we think of as a general counsel not only to interpret, but to resolve conflicting laws and regulations. At a minimum, there is a need for a lawyer to translate applicable laws into a set of requirements (or prohibitions) to be supplied to the AI system designers consisting of the following categories of information:
Requirements: Actions that must be undertaken by a system to comply with a law or regulation. For example, a law might require a system to maintain specific elements of documentation of specific kinds of actions that it takes, in the form of detailed log files, and to do so in a form that can be shown to be effectively immutable. Or the law may be written more generally, requiring interpretation of phrases like “reasonable documentation.”
Prohibitions: Actions that are prohibited regardless of the calculations and rationale arrived at by AI. For example, an autonomous vehicle might be prohibited from traveling more than a specific maximum speed. Or a cyber defense system might be prohibited from initiating a denial of service attack on someone identified as having launched an attack against the system that the AI is defending.
Checkpoints: Non-prohibited actions that cannot be undertaken without the specific approval of an authorized human decision maker. For example, an AI system might be able to recognize an attack that resulted in the unauthorized transmittal of data from the system, but might be required first to alert company management and get human permission before filing an automated report with law enforcement agencies. Or, continuing the autonomous vehicle example above, the vehicle might require permission from the human driver/passenger before exceeding a predetermined speed limit in an emergency, say, to rush a pregnant passenger in labor to the hospital in time to deliver her baby.
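To make the three categories concrete, here is a minimal sketch of how a rules layer might mediate an AI system's proposed actions. All names (ComplianceGuard, Action, the sample action names) are hypothetical illustrations, not drawn from any real framework: prohibitions are always refused, checkpoints are routed to a human approver, and every outcome is logged to satisfy documentation requirements.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    """An action an AI system proposes to take (names are illustrative)."""
    name: str
    params: dict

@dataclass
class ComplianceGuard:
    prohibited: set            # Prohibitions: never allowed, regardless of AI rationale
    checkpoints: set           # Checkpoints: allowed only with human approval
    audit_log: list = field(default_factory=list)  # Requirements: log every decision

    def review(self, action: Action, ask_human: Callable[[Action], bool]) -> bool:
        """Return True if the proposed action may proceed."""
        if action.name in self.prohibited:
            self.audit_log.append(f"BLOCKED {action.name}")
            return False
        if action.name in self.checkpoints:
            approved = ask_human(action)
            self.audit_log.append(f"CHECKPOINT {action.name} approved={approved}")
            return approved
        self.audit_log.append(f"ALLOWED {action.name}")
        return True
```

For instance, a cyber defense system configured with `prohibited={"counterattack"}` and `checkpoints={"report_to_law_enforcement"}` could never launch a retaliatory denial-of-service attack, and could file an automated report only after a human signed off.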
The devil, of course, is in the details. Each pause or stop in the action for a human checkpoint, by its very nature, limits the speed and automatic processing that AI brings to the table. This pause to stop, look and listen, as the saying goes, has been part of the general counsel's job since long before machine learning came into the equation.
The general counsel takes time to consider not only conflicting laws but also company goals and values that don't always neatly align, and to evaluate risks unforeseen at the time that an initial course of action is set. Issues can arise from how to handle an internal fraud or attack or threat. Questions also form when a clash of corporate goals and objectives arise as a course of action unfolds. A pause could also be necessary when a clash of cultures occurs, if the company merges with a rival or embarks into a new geography, or even from the launch of a new product that gets an unexpected reaction from clients or regulators.
AI should be used to enhance, not replace, human judgment. Isn't that the essence of Asimov's laws of robotics? Each checkpoint must be defined by counsel and by company management. Each set of rules should align with the organization's values and goals. The pause for human judgment will allow the company to re-evaluate risk and balance competing interests. This is the job of the general counsel—a dynamic challenge that can be assisted, but not wholly undertaken, by AI systems.
The technology enabling self-driving cars is getting a lot of attention these days. So are the potential legal issues—if an autonomous car hits someone, is it the responsibility of the car's owner, passenger or programmer? What evidence will the vehicle have collected, and will that evidence be properly preserved? What if the vehicle is being operated in a country other than that where the vehicle is registered?
One thing that hasn't changed is the importance of human judgment. Domino's Pizza grew rapidly in the 1980s on the strength of its 30-minute promise: pizzas not delivered within half an hour were free. The promotion was stopped after a series of car crashes by teenage delivery drivers racing to meet the promised deadline.
Developers of artificial intelligence systems have recognized the importance of defining the role of human decision makers, and there have been a number of commentaries on the need for laws relating to artificial intelligence systems. But while all of that is important, it's too easy to forget that AI operates in the real world, and that the real world is a world of laws. Ignoring that fact represents a vulnerability for AI systems and for the companies that create and use them. Whether we are discussing autonomous cars, drones or cyber defense systems, recognizing the need to understand and comply with existing laws and regulations is not something that is a future problem or a science fiction fantasy.
An AI system has to be designed to be controllable. Some of that control might be imposed using what might be called "E-GC" or "E-Compliance" modules. These would, in real time, review the activity of the system, determine whether it remains within legal and regulatory boundaries, and ensure that appropriate logging is done to make it possible to understand why a system made a specific decision (something for which blockchain-type recordkeeping might be well suited).
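The "blockchain-type recordkeeping" idea can be illustrated with a simple hash-chained log, in which each entry incorporates the hash of the previous one, so any after-the-fact alteration of a recorded decision becomes detectable. This is a hypothetical sketch (the DecisionLog class and its method names are our own illustration, not a real product), showing only the tamper-evidence property, not a full distributed ledger:

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only decision log. Each entry embeds the hash of the
    previous entry, so modifying any past record breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, decision: str, rationale: str) -> dict:
        """Append a decision with its rationale and chain it to the log."""
        entry = {
            "ts": time.time(),
            "decision": decision,
            "rationale": rationale,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-hash every entry; return False if any link was tampered with."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A regulator or litigator reviewing an incident could then confirm that the recorded rationale for each AI decision is the one written at the time, not one reconstructed afterward.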
If AI systems are to succeed, the general counsel must have a seat at the table at which decisions about AI systems are being made. Without the input of legal and compliance specialists, the risk of a rogue system to an organization's operations and reputation may be too high.
Alan Brill is a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School. Elaine Wood is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor.