An Introduction to the Risks of AI for General Counsel
October 11, 2018 at 03:10 PM
A humorous commercial played during the 2018 Super Bowl presented a fictional crisis in artificial intelligence: Amazon's digital assistant Alexa lost her voice. Her “human stand-ins,” including Anthony Hopkins in character as Hannibal Lecter, unnerved the people asking Alexa for help. Fortunately, Alexa regained her voice and restored order.
The consequences to Amazon of a real failure of Alexa would be obvious—a loss of revenue and, more significantly, a loss of reputation. Indeed, the financial and reputational risks artificial intelligence poses generally to companies like Amazon, Facebook and Google have been widely discussed, as have potential new regulations they might face.
Many other companies, especially those licensing artificial intelligence technology from others and incorporating it into their products, services or operations, are less aware of the risks. Those companies may be overlooking three risks in particular: that artificial intelligence may yield biased outcomes, that it may yield unexpected outcomes, and that it may not be able to explain its outcomes.
This article aims to assist general counsel in evaluating the risks artificial intelligence poses to their companies by providing a working definition of artificial intelligence and the means commonly employed to achieve it, by describing the particular technical challenges artificial intelligence presents, and by identifying key questions general counsel should be able to answer about their company's reliance on artificial intelligence.
Artificial Intelligence
Human intelligence is generally understood to involve the ability to analyze data collected by the senses in light of prior experience, to reach conclusions and make decisions, and to learn from experience. Artificial intelligence is the ability of a computing device to perform those functions.
A commonly employed branch of artificial intelligence is machine learning. Rather than being explicitly programmed with millions of lines of code embodying complex rules and decision trees, a machine learning system obtains and applies knowledge on its own. For example, a model (or series of models) can be trained to recognize objects within images: the model is fed huge amounts of images containing known, labeled objects, and from those examples it learns to classify objects in images it subsequently encounters.
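For readers who want to see what such training looks like in practice, the following is a minimal sketch in Python; the choice of scikit-learn and its bundled handwritten-digit images is illustrative, not a description of any particular vendor's system.

```python
# Minimal sketch of supervised machine learning: the model is never
# given explicit classification rules, only labeled examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small images of handwritten digits, with known labels

# Hold some images back so the model can be tested on data it has not seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # "training": learning from labeled examples

# The trained model now classifies images it was never programmed to handle.
print("accuracy on previously unseen images:", model.score(X_test, y_test))
```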
One type of machine learning that has received a lot of recent attention is deep learning. Deep learning is loosely modeled on the structure and function of the brain, namely the interconnection of neurons. One type of deep learning model is a neural network, in which each neuron in the input layer receives an input (such as a pixel in an image), performs a calculation and outputs a new signal. Those outputs are fed through successive layers of neurons until the network produces its result. Each layer of “neurons” learns to detect a particular kind of feature, such as curves and edges in image recognition. For example, in a model designed to recognize dogs, the lower layers of neurons recognize shapes and outlines, the middle layers recognize component features of a dog, and the highest layer recognizes the image as an image of a dog. Each connection between neurons carries a numeric weight that is modified as the network learns.
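The layered computation described above can be sketched in a few lines of Python. Everything here (the layer sizes, the random weights, the dog/not-dog framing) is an illustrative stand-in for a real trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of neurons: weighted sums of the inputs, then a nonlinearity."""
    return np.maximum(0, inputs @ weights + biases)

pixels = rng.random(64)  # a flattened 8x8 image: one input value per pixel

# Three layers of "neurons." In a trained image model, lower layers tend to
# respond to simple shapes and edges, middle layers to component features,
# and the highest layer to whole objects.
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)  # lower layer
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)  # middle layer
w3, b3 = rng.normal(size=(16, 2)), np.zeros(2)    # output: dog vs. not-dog

h1 = layer(pixels, w1, b1)
h2 = layer(h1, w2, b2)
scores = h2 @ w3 + b3  # training would repeatedly adjust the weights above
print("class scores (dog, not-dog):", scores)
```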
Risks Posed by Artificial Intelligence
1. The Risk of Biased Outcomes
Despite the best intentions of programmers, who may believe the factors they select are objective, artificial intelligence may still yield biased outcomes. One field in which the risk of biased outcomes exists is employment. Under the disparate impact theory of state and federal antidiscrimination laws, decision-making that treats everyone the same, but that results in decisions having a disproportionate impact on persons based on race, gender, age, national origin, religion or disability status, is prohibited.
In the employment context, data is frequently used to create algorithms or statistical models that classify workers based on variables such as job tenure, turnover, satisfaction, performance appraisals, absenteeism and culture fit. The algorithm is then fed a training dataset with information about a group of people, from which it determines characteristics that correlate with some measure of job success. These factors of success could be traditional, such as education or previous work experience, or nontraditional, such as information about an individual's social media activity.
Risks of unlawful practices may arise, for example, when an algorithm looks for applicants with the same characteristics as a company's existing managers or highly successful programmers, but minorities or other protected groups are underrepresented in the current workforce.
Another source of unintentional discrimination can be the data used. Algorithms used for recruiting often incorporate data obtained by searching publicly available databases, whose accuracy and completeness may be questionable, as well as social media. Individuals without social media or an online presence are more likely to be of lower socioeconomic status, which could implicate protected characteristics such as race or national origin.
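One widely used numerical screen for this kind of bias is the EEOC's "four-fifths rule," under which a selection rate for any group that is less than 80 percent of the highest group's rate may indicate adverse impact. The sketch below applies that rule to hypothetical applicant counts:

```python
# Four-fifths (80%) rule check, using hypothetical hiring data.
selected = {"group_a": 48, "group_b": 18}   # applicants selected, by group
applied = {"group_a": 100, "group_b": 60}   # total applicants, by group

rates = {group: selected[group] / applied[group] for group in applied}
highest_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest_rate
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```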
2. The Risk of Unexpected Outcomes
The risk of unanticipated outcomes exists, for example, in the automotive industry. Autonomous driving programs will establish certain general rules, such as obeying traffic signs and speed limits, and will eliminate risks posed by human drivers' bad habits, such as drinking and driving or texting while driving. At the same time, autonomous cars will introduce an element of uncertainty, because their developers will not be able to anticipate, and pre-program the cars to address, every driving scenario that could arise. Rather, autonomous cars will have to decide how to handle unanticipated dangerous situations based not only on the data they were trained on but also on the data they have accumulated, and what they have learned, while in operation.
One can easily imagine situations in which there is no good outcome: for example, a child chasing a ball suddenly darts out from between two parked cars, and an autonomous car is faced with the choice of hitting the child or swerving into another car and injuring the occupants of one or both vehicles.
Obviously, these situations raise ethical issues. In June 2017, Germany's Federal Ministry of Transport and Digital Infrastructure tried to address them in the world's first ethical guidelines for autonomous vehicles, which include the following:
- The protection of human life enjoys top priority in a balancing of legally protected interests. Thus, within the constraints of what is technologically feasible, the systems must be programmed to accept damage to animals or property in a conflict if this means that personal injury can be prevented.
- Any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another. General programming to reduce the number of personal injuries may be justifiable. Those parties involved in the generation of mobility risks must not sacrifice noninvolved parties.
3. The Risk of Unexplainable Outcomes
Traditionally, one has been able to determine a great deal about how a computing device works by examining its source code, which reflects the rules, decision trees and logic that enable the device to perform a particular function in a particular way.
Artificial intelligence is different. As stated above, a neural network's “reasoning” is embedded in the behavior of thousands of simulated neurons arranged in perhaps hundreds of intricately connected layers. The interplay of calculations needed for high-level pattern recognition and complex decision-making in a deep neural network is a quagmire of mathematical functions and variables.
Moreover, over the course of the neural network's training, the operations performed by the individual neurons in the network are continually modified to improve results across the whole set of training samples. Given this iterative process, precisely explaining how a neural network reached an outcome can be difficult, if not impossible, even for a senior data scientist. These difficulties in explaining artificial intelligence are only going to be magnified as it is deployed to handle even more complex tasks in many different industries.
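A small experiment illustrates the opacity. The model and dataset below are illustrative choices; the point is that a fitted network's "reasoning" is recoverable only as unlabeled matrices of numbers:

```python
# Train a small neural network, then inspect what it actually "knows."
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

digits = load_digits()
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(digits.data, digits.target)

# The fitted model exposes no rules or decision trees, only numeric weights.
weights = net.coefs_  # one weight matrix per layer-to-layer connection
total = sum(w.size for w in weights) + sum(b.size for b in net.intercepts_)
print("weight matrix shapes:", [w.shape for w in weights])
print("total learned parameters:", total)
```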
Regulation of Artificial Intelligence
Given the above-described risks, general counsel will want to know whether and to what extent artificial intelligence is regulated. At present, no general regulation governs artificial intelligence. As the capabilities of artificial intelligence improve and as artificial intelligence takes over regulated activities, however, particular applications are being regulated. For example, on April 11, 2018, the U.S. Food and Drug Administration permitted marketing of the first medical device to use artificial intelligence alone to review retinal scans and screen for diabetic retinopathy.
Moreover, particular outcomes generated by artificial intelligence will be regulated. In the employment situations discussed above, for example, various statutes protect persons against discrimination based on race, gender, religion, disability, national origin and marital status.
Artificial intelligence will also be regulated indirectly through regulation of the data it uses in various industries. For example, the Family Educational Rights and Privacy Act protects student records, the Health Insurance Portability and Accountability Act (HIPAA) protects medical information and the Gramm-Leach-Bliley Act protects consumer financial information.
Questions GCs Should Be Able to Answer About Their Company's Use of AI
Finally, general counsel need to know how their company is using or considering using artificial intelligence. Here are 10 questions to help you get started.
1. Are we developing artificial intelligence and, if so, for what purpose, and will it be used internally, externally or both?
2. Are we acquiring or licensing artificial intelligence from others, or using artificial intelligence-enabled services from a vendor, and, if so, from whom and on what terms?
3. Have we thoroughly evaluated the potential product liability and our warranties for artificial intelligence-enabled offerings?
4. Have we taken into consideration the inability of artificial intelligence to explain the reasons for its conclusions?
5. In what countries are we using artificial intelligence?
6. Are we subject to the General Data Protection Regulation and, if so, are we using artificial intelligence to make decisions that affect the legal rights of individuals, and can we explain how those decisions are made?
7. What data are we, or the vendors from whom we acquire or license artificial intelligence, using to train it?
8. Have we properly evaluated and addressed the risks of bias in our use of artificial intelligence?
9. Are we protecting the intellectual property rights we may have in our artificial intelligence innovations?
10. How could our Legal Department use artificial intelligence while still complying with our ethical obligations?
Obviously, good due diligence, contract, product liability, intellectual property and employment lawyers are needed, but the team(s) must include lawyers who have experience in evaluating the above-described risks of artificial intelligence.
Robert Kantner and Carl Kukkonen are partners at Jones Day. This article represents the personal views and opinions of the authors and not necessarily those of the law firm with which they are associated.