
In this three-part series, Alan Brill, a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School, and Elaine Wood, a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor, look at the evolution of artificial intelligence, machine learning and autonomous decision making, and at how the skills of the general counsel are likely to be critical in protecting the organization from avoidable risks.

Part 1 examined how the Law of Unintended Consequences affects general counsel dealing with the evolution of AI, machine learning and autonomous decision making, while Part 2 explored what happens to software developers when AI evolves in a way that results in an unintended violation of national laws. The final part is below.

Do we need a fourth law (or a fifth, if you accept the “zeroth law”) to complement Asimov's laws of robotics: that a robot (or AI system) cannot, without human permission, violate the laws or regulations of a nation in which it operates or through which it transmits information?

In looking at a range of systems and incidents over the past few years, we believe that just as developers have come to understand that privacy and cybersecurity have to be “baked into” a system from its initial design, a set of guidelines based on relevant laws and regulations has to be developed and baked into the system design as well.

Doing this will require a different skill set than most other aspects of artificial intelligence development. After all, how useful would it be to hand a programmer a copy of a law? Laws and regulations need to be reviewed by counsel and management to determine how best to implement them.

For a global AI application, there may be dozens of laws and regulations, written in multiple languages, that require compliance, particularly if the application relates to a highly regulated industry such as financial services or critical infrastructure. Do we really expect—or want—a programmer interpreting how laws should be built into a specific company's system specifications? Probably not, except in the unlikely event that the programmer also happens to hold a law degree and to be an expert on international cyber-law.


Why Artificial Intelligence Needs a General Counsel

In a very real sense, AI systems need the attention of what we think of as a general counsel, not only to interpret but to resolve conflicting laws and regulations. At a minimum, there is a need for a lawyer to translate applicable laws into a set of requirements (or prohibitions) to be supplied to the AI system designers, consisting of the following categories of information (a sketch of how these categories might be encoded follows the list):

Requirements: Actions that must be undertaken by a system to comply with a law or regulation. For example, a law might require a system to document specific kinds of actions that it takes, in the form of detailed log files, and to do so in a form that can be shown to be effectively immutable. Or the law may be written more generally, requiring interpretation of phrases like “reasonable documentation.”

Prohibitions: Actions that are prohibited regardless of the calculations and rationale arrived at by the AI. For example, an autonomous vehicle might be prohibited from traveling above a specific maximum speed. Or a cyber defense system might be prohibited from initiating a denial of service attack on someone identified as having launched an attack against the system that the AI is defending.

Checkpoints: Non-prohibited actions that cannot be undertaken without the specific approval of an authorized human decision maker. For example, an AI system might be able to recognize an attack that resulted in the unauthorized transmittal of data from the system, but might be required to alert company management and get human permission before filing an automated report with law enforcement agencies. Or, to continue the example above, an autonomous vehicle might require permission from the human driver or passenger before exceeding a predetermined speed limit in an emergency, say, to rush a passenger in labor to the hospital in time to deliver her baby.
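
To make these three categories concrete, here is a minimal sketch, in Python, of how counsel-supplied rules might be encoded as a machine-checkable policy. Everything in it is hypothetical: the names (Policy, Action, Verdict), the 90 mph cap and the rules themselves are illustrations of the taxonomy above, not an implementation from any real system.

```python
# Hypothetical sketch only: names, rules and thresholds are invented to
# illustrate the requirements / prohibitions / checkpoints taxonomy above.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable

class Verdict(Enum):
    ALLOW = auto()        # no rule applies; the system may proceed
    PROHIBITED = auto()   # blocked regardless of the AI's own rationale
    NEEDS_HUMAN = auto()  # checkpoint: pause for an authorized human decision

@dataclass
class Action:
    name: str
    params: dict

@dataclass
class Policy:
    # Prohibitions: predicates that, if true, block the action outright.
    prohibitions: list[Callable[[Action], bool]] = field(default_factory=list)
    # Checkpoints: predicates that, if true, require human sign-off first.
    checkpoints: list[Callable[[Action], bool]] = field(default_factory=list)
    # Requirements: obligations performed for every action (e.g., logging).
    requirements: list[Callable[[Action, Verdict], None]] = field(default_factory=list)

    def evaluate(self, action: Action) -> Verdict:
        if any(rule(action) for rule in self.prohibitions):
            verdict = Verdict.PROHIBITED
        elif any(rule(action) for rule in self.checkpoints):
            verdict = Verdict.NEEDS_HUMAN
        else:
            verdict = Verdict.ALLOW
        for obligation in self.requirements:  # e.g., write an audit-log entry
            obligation(action, verdict)
        return verdict

audit_log: list[tuple[str, Verdict]] = []

policy = Policy(
    prohibitions=[
        # An autonomous vehicle may never exceed an absolute speed cap
        # (the 90 mph figure is invented for illustration).
        lambda a: a.name == "set_speed" and a.params.get("mph", 0) > 90,
        # A cyber defense system may never launch a retaliatory DoS attack.
        lambda a: a.name == "counterattack",
    ],
    checkpoints=[
        # Filing a report with law enforcement first needs management approval.
        lambda a: a.name == "file_report" and a.params.get("to") == "law_enforcement",
    ],
    requirements=[lambda a, v: audit_log.append((a.name, v))],
)

print(policy.evaluate(Action("counterattack", {})))                       # PROHIBITED
print(policy.evaluate(Action("file_report", {"to": "law_enforcement"})))  # NEEDS_HUMAN
print(policy.evaluate(Action("set_speed", {"mph": 55})))                  # ALLOW
```

The design point is the order of evaluation: prohibitions are absolute and checked first, checkpoints route the decision to a human, and requirements (here, the audit log) run no matter what the verdict is.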

The devil, of course, is in the details. Each pause or stop in the action for a human checkpoint, by its very nature, limits the speed and automation that AI brings to the table. This pause to stop, look and listen, as the saying goes, has been part of the general counsel's job since long before machine learning came into the equation.

The general counsel takes time to consider not only conflicting laws but also company goals and values that don't always neatly align, and to evaluate risks unforeseen at the time an initial course of action is set. Issues can arise over how to handle an internal fraud, attack or threat. Questions also form when corporate goals and objectives clash as a course of action unfolds. A pause could also be necessary when a clash of cultures occurs, whether because the company merges with a rival, expands into a new geography, or launches a new product that gets an unexpected reaction from clients or regulators.

AI should be used to enhance, not replace, human judgment. Isn't that the essence of Asimov's laws of robotics? Each checkpoint must be defined by counsel and by company management. Each set of rules should align with the organization's values and goals. The pause for human judgment will allow the company to re-evaluate risk and balance competing interests. This is the job of the general counsel—a dynamic challenge that can be assisted, but not wholly undertaken, by AI systems.

The technology enabling self-driving cars is getting a lot of attention these days. So are the potential legal issues. If an autonomous car hits someone, is it the responsibility of the car's owner, passenger or programmer? What evidence will the vehicle have collected, and will that evidence be properly preserved? What if the vehicle is being operated in a country other than the one where it is registered?

One thing that hasn't changed is the importance of human judgment. Domino's Pizza grew rapidly in the 1980s on the strength of a 30-minute promise: pizzas not delivered within half an hour were free. The promotion was stopped after a series of car crashes by teenage delivery drivers racing to meet the promised deadline.

Developers of artificial intelligence systems have recognized the importance of defining the role of human decision makers, and there have been a number of commentaries on the need for laws relating to artificial intelligence systems. But while all of that is important, it's too easy to forget that AI operates in the real world, and the real world is a world of laws. Ignoring that fact is a vulnerability for AI systems and for the companies that create and use them. Whether we are discussing autonomous cars, drones or cyber defense systems, the need to understand and comply with existing laws and regulations is not a future problem or a science fiction fantasy.

An AI system has to be designed to be controllable. Some of that control might be imposed using what might be called “E-GC” or “E-Compliance” modules. These would, in real time, review the activity of the system, determine whether it remains within legal and regulatory boundaries, and ensure that appropriate logging is done to make it possible to understand why the system made a specific decision (something for which blockchain-type recordkeeping might be well suited).
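
As a thought experiment, here is a minimal sketch of the logging half of such a module: an append-only log in which each entry carries a SHA-256 hash of its contents and of the entry before it, so that any after-the-fact edit breaks the chain. The class name and fields are hypothetical; this is a lightweight stand-in for the blockchain-type recordkeeping mentioned above, not a production design.

```python
# Hypothetical "E-Compliance" log sketch: each entry is chained to the
# previous one by a SHA-256 hash, so after-the-fact tampering is detectable.
import hashlib
import json
import time

class ComplianceLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record(self, decision: str, rationale: dict) -> None:
        """Append a tamper-evident record of a system decision."""
        entry = {
            "timestamp": time.time(),
            "decision": decision,
            "rationale": rationale,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later hash."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = ComplianceLog()
log.record("blocked_counterattack", {"rule": "no retaliatory DoS"})
log.record("paused_for_approval", {"rule": "law-enforcement report checkpoint"})
assert log.verify()  # True unless an entry has been altered
```

The point of the chain is evidentiary: a regulator or court asking why the system acted as it did can be shown records whose integrity is verifiable, which is the kind of effectively immutable documentation contemplated in the requirements category above.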

If AI systems are to succeed, the general counsel must have a seat at the table where decisions about AI systems are made. Without the input of legal and compliance specialists, the risk that a rogue system poses to an organization's operations and reputation may simply be too high.

Alan Brill is a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School. Elaine Wood is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor.