Does Artificial Intelligence Need a General Counsel? Management in the Age of AI
Whether we are discussing autonomous cars, drones or cyber defense systems, the need to understand and comply with existing laws and regulations is neither a future problem nor a science fiction fantasy.
February 11, 2019 at 07:00 AM
8 minute read
In this three-part series, Alan Brill, who is a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School, and Elaine Wood, who is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor, look at the evolution of artificial intelligence, machine learning and autonomous decision making, and how the skills of the general counsel are likely to be critical in protecting the organization from avoidable risks.
Part 1 examined how The Law of Unintended Consequences affects general counsel dealing with the evolution of AI, machine learning, and decision making, while Part 2 explored what happens to developers of software when AI evolves in a way that results in an unintended violation of national laws. The final part is below.
Do we need a fourth law (or fifth, if you accept the “zeroth law”) to complement Asimov's laws of robotics: that a robot (or AI system) cannot, without human permission, violate the laws or regulations of a nation in which it operates or through which it transmits information?
In looking at a range of systems and incidents over the past few years, we believe that just as developers have come to understand that things like privacy and cybersecurity have to be “baked into” a system from its initial design, a set of guidelines based on relevant laws and regulations have to be developed and baked into the system design as well.
Doing this will require a different skill set than most other aspects of artificial intelligence. After all, how useful would it be to give a copy of a law to a programmer? Laws and regulations need to be reviewed by counsel and management to determine how best to implement them.
For a global AI application, there may be dozens of laws and regulations written in multiple languages that require compliance, particularly if the application relates to a highly regulated industry like financial services or critical infrastructures. Do we really expect—or want—a programmer interpreting how laws should be built into a specific company's system specifications? Probably not, except in the unlikely event that the programmer also happens to hold a law degree and is an expert on international cyber-law.
Why Artificial Intelligence Needs a General Counsel
In a very real sense, AI systems need the attention of what we think of as a general counsel not only to interpret, but to resolve conflicting laws and regulations. At a minimum, there is a need for a lawyer to translate applicable laws into a set of requirements (or prohibitions) to be supplied to the AI system designers consisting of the following categories of information:
Requirements: Actions that must be undertaken by a system to comply with a law or regulation. For example, a law might require a system to maintain documentation of specific kinds of actions that it takes, in the form of detailed log files, and to do so in a form that can be shown to be effectively immutable. Or the law may be written more generally, requiring interpretation of phrases like “reasonable documentation.”
Prohibitions: Actions that are prohibited regardless of the calculations and rationale arrived at by AI. For example, an autonomous vehicle might be prohibited from traveling more than a specific maximum speed. Or a cyber defense system might be prohibited from initiating a denial of service attack on someone identified as having launched an attack against the system that the AI is defending.
Checkpoints: Non-prohibited actions that cannot be undertaken without the specific approval of an authorized human decision maker. For example, an AI system might be able to recognize an attack that resulted in the unauthorized transmittal of data from the system, but might be required first to alert company management and get human permission before filing an automated report with law enforcement agencies. Or, to continue the earlier example, an autonomous vehicle might require permission from the human driver/passenger before exceeding a predetermined speed limit in an emergency, say, to rush a pregnant passenger in labor to the hospital in time to deliver her baby.
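The three categories above can be sketched as a simple rule engine that screens each proposed action before the AI executes it. This is a minimal illustrative sketch, not an implementation from the article; the rule names, predicates and speed thresholds are hypothetical examples.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Verdict(Enum):
    ALLOW = "allow"                          # no rule triggered
    DENY = "deny"                            # a Prohibition triggered
    NEEDS_HUMAN_APPROVAL = "needs_approval"  # a Checkpoint triggered

@dataclass
class Rule:
    name: str
    triggered: Callable[[dict], bool]  # predicate over a proposed action
    kind: str                          # "prohibition" or "checkpoint"

def evaluate(action: dict, rules: list[Rule]) -> Verdict:
    """Screen a proposed action: prohibitions override checkpoints,
    and checkpoints override plain allowance."""
    verdict = Verdict.ALLOW
    for rule in rules:
        if rule.triggered(action):
            if rule.kind == "prohibition":
                return Verdict.DENY  # hard stop, no human override here
            if rule.kind == "checkpoint":
                verdict = Verdict.NEEDS_HUMAN_APPROVAL
    return verdict

# Hypothetical rules for the autonomous-vehicle example:
SPEED_CAP = Rule("statutory speed cap",
                 lambda a: a.get("speed_mph", 0) > 85, "prohibition")
EMERGENCY_SPEED = Rule("emergency overspeed needs driver consent",
                       lambda a: a.get("speed_mph", 0) > 65 and a.get("emergency", False),
                       "checkpoint")
```

Note that a Requirement (such as mandatory logging) would sit outside this screen: it is an obligation attached to every action rather than a condition that blocks one.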
The devil, of course, is in the details. Each pause for a human checkpoint, by its very nature, limits the speed and automation that AI brings to the table. But this pause to stop, look and listen, as the saying goes, has been part of the general counsel's job since long before machine learning came into the equation.
The general counsel takes time to consider not only conflicting laws but also company goals and values that don't always neatly align, and to evaluate risks unforeseen at the time an initial course of action is set. Issues can arise over how to handle an internal fraud, attack or threat. Questions also form when a clash of corporate goals and objectives arises as a course of action unfolds. A pause could also be necessary when a clash of cultures occurs, as when the company merges with a rival or embarks into a new geography, or even when the launch of a new product gets an unexpected reaction from clients or regulators.
AI should be used to enhance, not replace, human judgment. Isn't that the essence of Asimov's laws of robotics? Each checkpoint must be defined by counsel and by company management. Each set of rules should align with the organization's values and goals. The pause for human judgment will allow the company to re-evaluate risk and balance competing interests. This is the job of the general counsel—a dynamic challenge that can be assisted, but not wholly undertaken, by AI systems.
The technology enabling self-driving cars is getting a lot of attention these days. So are the potential legal issues—if an autonomous car hits someone, is it the responsibility of the car's owner, passenger or programmer? What evidence will the vehicle have collected, and will that evidence be properly preserved? What if the vehicle is being operated in a country other than that where the vehicle is registered?
One thing that hasn't changed is the importance of human judgment. Domino's Pizza grew rapidly in the 1980s on the strength of a 30-minute guarantee: pizzas not delivered within half an hour would be free. The promotion was stopped after a series of car crashes by teenage delivery drivers who were racing to meet the promised deadline.
Developers of artificial intelligence systems have recognized the importance of defining the role of human decision makers, and there have been a number of commentaries on the need for laws relating to artificial intelligence systems. But while all of that is important, it's too easy to forget that AI operates in the real world, and that the real world is a world of laws. Ignoring that fact represents a vulnerability for AI systems and for the companies that create and use them. Whether we are discussing autonomous cars, drones or cyber defense systems, the need to understand and comply with existing laws and regulations is neither a future problem nor a science fiction fantasy.
An AI system has to be designed to be controllable. Some of that control might be imposed using what might be called “E-GC” or “E-Compliance” modules. These would, in real time, review the activity of the system, determine whether it remains within legal and regulatory boundaries, and ensure that appropriate logging is performed to provide the ability to understand why a system made a specific decision (something for which blockchain-type recordkeeping might be well suited).
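The logging half of such a module can be approximated without a full blockchain: a hash-chained, append-only decision log makes after-the-fact tampering detectable, which is the property the article's “effectively immutable” documentation requirement calls for. The class and method names below are illustrative assumptions, not a reference to any real compliance product.

```python
import hashlib
import json
import time

class ComplianceLog:
    """Append-only decision log. Each entry embeds the hash of the previous
    entry, so altering any past record breaks the chain on verification
    (a lightweight stand-in for blockchain-type recordkeeping)."""

    GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def record(self, decision: dict) -> str:
        """Append one decision; return its hash."""
        entry = {"decision": decision, "prev": self._prev_hash, "ts": time.time()}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute every hash and link; False if any entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("decision", "prev", "ts")}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In use, the module would call `record` for every consequential decision the AI takes, and counsel or an auditor could later run `verify` before relying on the log as evidence.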
If AI systems are to succeed, the general counsel must have a seat at the table at which decisions about AI systems are being made. Without the input of legal and compliance specialists, the risk of a rogue system to an organization's operations and reputation may be too high.
Alan Brill is a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School. Elaine Wood is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor.