Does Artificial Intelligence Need a General Counsel? The Unintended Consequences of AI
As AI software evolves, it learns, and sometimes that evolution is not what the developers expect. What happens to developers of software when AI evolves in a way that results in an unintended violation of national laws?
January 10, 2019 at 07:00 AM
In this three-part series, Alan Brill, who is a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School, and Elaine Wood, who is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor, look at the evolution of artificial intelligence, machine learning and autonomous decision making, and at why the skills of the General Counsel are likely to be critical in protecting the organization from avoidable risks.
Part 1 examined how the Law of Unintended Consequences affects general counsel dealing with the evolution of AI, machine learning, and decision making. Part 2 is below. Part 3 will examine where management enters the picture in AI applications.
Well-Intended Actions Can Still Be Illegal or Unwise
Imagine that a well-known hospital in the United States has a server compromised by criminals located in Asia, in a way the hospital might not notice. The hackers would almost certainly have committed U.S. federal crimes in gaining unauthorized access to the hospital's computer (e.g., 18 U.S.C. § 1030, fraud and related activity in connection with computers). Let's say that the criminals then use the hijacked hospital computer to carry out an attack on their ultimate target, Company X, and that they succeed in extracting files, which they store on the hospital's system. An AI-based cyber defense system at Company X detects the attack and identifies that it originated at a specific IP address—the IP address of the hospital. The system has evolved to take action on its own: to locate the files that its logs show were transmitted to the criminals, it reaches out and runs software designed to let it enter the system it associates with the breach.
The hospital's cybersecurity system detects the hack-back activities. It notifies the hospital that it is under attack, and the hospital begins to execute its breach response plan. Following the plan's protocols, the hospital notifies local police and FBI cyber units, notifies the broker and carrier of its cyber insurance policy, and engages a pre-arranged computer forensics company and outside law firm. Significant internal technology resources are committed to defending the hospital's network and identifying any information removed in connection with the attack. The hospital, after all, is subject to the HIPAA and HITECH laws, which require prompt notification of federal regulators should there be evidence that 500 or more patient records were compromised. Within hours, the hospital's in-house experts and forensic consultants track the attack back to Company X. All of this happens quickly. In the first 24 hours, the hospital has probably spent or committed to spend $50,000 to $100,000 responding to the attack.
Several bills have been introduced in Congress to provide immunity from U.S. criminal laws in connection with hacking back. But these bills, to date, leave two issues outstanding. First, while they address U.S. criminal law, they do not appear to address civil liability. In our example, the hospital has expended as much as $100,000 implementing its breach-response program and hiring experts. Should the hospital or its insurer have to absorb those costs? Or should they be able to bring a lawsuit for damages against the source of the hack-back, Company X?
Second, such immunity stops at the border. Let's now assume that the hospital is located in France and that Company X is in the U.S. Let's also assume that a law is in place immunizing an organization like Company X from criminal sanctions under U.S. law for hacking back. But in hacking back, Company X violates Article 323 of the French Penal Code (France's computer crimes law). The United States government can't pass a law exempting a person or entity from liability for violating the laws of another country.
So if an AI system's activities violate the criminal laws of one or more nations, who gets prosecuted? While corporations can be defendants in criminal cases in the U.S., our laws—and those of every other country—were not designed to cover the criminal actions of a piece of software whose actions result from decisions made through machine learning or AI.
It would be easy to suggest that the crime is the responsibility of the person or persons who programmed the AI software that “committed” the crime. And that certainly is appropriate for more conventional software, like malware, where we can find a direct link between the intent of the person planting the software and the destructive act itself. But is the same true in the case of self-evolving artificial intelligence software?
As AI software evolves, it learns, and sometimes that evolution is not what the developers expect. While it is certainly possible to understand that AI software can be developed specifically to support criminal activity—a case where the developers should be responsible for the crimes “committed” by their software—what about developers of software that evolves in a way that results in an unintended violation of national laws?
The problem is that developers of AI don't necessarily think in terms of imposing boundaries on AI action that are defined by laws and regulations in the real world. They may not think in terms of specific limitations imposed by laws of their home country, let alone laws of foreign countries. Of course, while we don't expect our AI system architects, designers and programmers to be experts on international law, those laws (including cyber laws) exist and can be enforced whether or not a company's AI development team is aware of them. As Thomas Jefferson wrote in a letter from Paris in 1787, “… ignorance of the law is no excuse in any country. If it were, the laws would lose their effect, because it can always be pretended.”
Consider another hypothetical. A bank hires a “big data” analytics firm to develop a system for using AI to approve or reject personal loan applications. The analysts are provided with 10 years of loan applications, loan decisions, and payment records for all loans made. The team determines that one of the objectives for the new system is to minimize bad loans—those that are made but never repaid.
The deep learning component of the AI essentially cross-tabulates all of the available data against the payment history of the loans to determine which elements best identify the loans most likely to default. The system finds a correlation between loan defaults and the postal code of a loan applicant's residence. Though the correlation is not especially strong, the system determines that it is one of the strongest available, and it recommends against offering loans based on where an applicant lives. The result is loan denial to all of the residents of several inner-city areas and, almost immediately, reputation-damaging headlines: “Bank Invents E-Redlining” and “Bank's Artificial Stupidity Denies Loans to Customers in Minority Neighborhoods.” While this outcome was never intended, the failure to understand and control the actions of the system has created a serious crisis for the bank.
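The mechanics of this failure are easy to sketch. The short Python example below is a minimal, hypothetical illustration (the column names, the synthetic data, and the approval threshold are our assumptions, not any bank's actual system) of how a model trained solely to minimize defaults will seize on a geographic feature, and how a simple approval-rate audit can surface the problem before deployment.

```python
# Hypothetical sketch only: synthetic data and illustrative column names,
# not any real lending system.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000

# Ten years of (synthetic) loan history. Postal codes 0-2 stand in for
# the inner-city areas in the hypothetical.
loans = pd.DataFrame({
    "income": rng.normal(55_000, 15_000, n),
    "debt_ratio": rng.uniform(0.05, 0.60, n),
    "postal_code": rng.integers(0, 10, n),
})
# Defaults are driven mostly by debt ratio, with a weak geographic signal.
p_default = 0.10 + 0.30 * loans["debt_ratio"] + 0.05 * (loans["postal_code"] < 3)
loans["defaulted"] = rng.random(n) < p_default

features = ["income", "debt_ratio", "postal_code"]
model = GradientBoostingClassifier(random_state=0).fit(
    loans[features], loans["defaulted"]
)
# The objective was only "minimize bad loans," so the model freely uses
# geography; the importance it assigns to postal_code makes that visible.
print(dict(zip(features, model.feature_importances_.round(3))))

# Basic disparate-impact audit: approve when predicted default risk is low,
# then compare approval rates inside and outside the flagged postal codes.
loans["approved"] = model.predict_proba(loans[features])[:, 1] < 0.25
rates = loans.groupby(loans["postal_code"] < 3)["approved"].mean()
print("approval rate, flagged areas:  ", round(rates.loc[True], 3))
print("approval rate, everywhere else:", round(rates.loc[False], 3))
print("disparate-impact ratio:", round(rates.loc[True] / rates.loc[False], 3))
```

In this sketch the technical fix is mechanical: drop postal_code (and any feature that proxies for it) from the inputs and re-run the audit. The harder problem for the bank is organizational: someone had to know that such an audit needed to run before the system went live.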
How could this have been avoided? We'll try to answer that question in Part 3 of this series, to be published in February.
Alan Brill is a Senior Managing Director in Kroll's Cyber Risk unit and an Adjunct Professor at Texas A&M Law School. Elaine Wood is a Managing Director at Duff & Phelps specializing in compliance and regulatory consulting and a former federal prosecutor.