A humorous commercial played during the last Super Bowl presented a fictional crisis in artificial intelligence: Amazon's digital assistant Alexa lost her voice. Her “human stand-ins,” including Anthony Hopkins in character as Hannibal Lecter, unnerved the people asking Alexa for help. Fortunately, Alexa regained her voice and restored order.

The consequences to Amazon of a real failure of Alexa would be obvious—a loss of revenue and, more significantly, a loss of reputation. Indeed, the financial and reputational risks artificial intelligence poses generally to companies like Amazon, Facebook and Google have been widely discussed, as have potential new regulations they might face.

Many other companies—especially those licensing artificial intelligence technology from others and incorporating it into products, services or operations—are less aware of the risks. Among the risks those companies may be overlooking are the risk that artificial intelligence may yield biased outcomes, the risk that it may yield unexpected outcomes and the risk that it may not be able to explain its outcomes.

This article aims to assist general counsel in evaluating the risks artificial intelligence poses to their companies by providing a working definition of artificial intelligence and the means commonly employed to achieve it, by describing particular technical challenges posed by artificial intelligence and by identifying key questions general counsel should be able to answer about their company's reliance upon artificial intelligence.


Artificial Intelligence

Human intelligence is generally understood to involve the ability to analyze data collected by the senses in light of prior experience, to reach conclusions and make decisions, and to learn from experience. Artificial intelligence is the ability of a computing device to perform those functions.

A commonly employed branch of artificial intelligence is machine learning. In contrast to programming a device with millions of lines of code embodying complex rules and decision trees, machine learning, once implemented, enables a computer to acquire and apply knowledge without being explicitly programmed. For example, a machine learning system may be trained to recognize objects within images. Training for such an application involves feeding the model(s) huge amounts of data containing images with known objects, which, in turn, allows the model(s) to learn how to classify objects in images they have not previously seen.
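
To make this concrete, below is a minimal sketch of supervised machine learning using the open-source scikit-learn library and its small bundled dataset of handwritten-digit images. The dataset and the choice of model are illustrative assumptions, not a description of any particular commercial system; the point is that the model is never given explicit rules and instead infers them from labeled examples.

```python
# A minimal supervised-learning sketch: the model is never told what each
# digit looks like; it learns that from labeled example images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with known labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)  # a simple classifier; deep networks follow the same pattern
model.fit(X_train, y_train)                # "training": learn from the labeled examples

# The trained model can now classify images it has never seen before.
print("accuracy on unseen images:", model.score(X_test, y_test))
```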

One type of machine learning that has received a lot of recent attention is deep learning. Deep learning is modeled on the structure and function of the brain, namely the interconnection of neurons. One type of deep learning model is a neural network, in which each neuron in the input layer receives an input—such as a pixel in an image—and performs a calculation before outputting a new signal. Those outputs are fed forward through successive layers of neurons until a final output is produced. Each layer of “neurons” learns to detect a particular kind of feature, such as curves or edges in image recognition. For example, in a model designed to recognize dogs, the lower layers of neurons recognize the shape or outline of a dog, the middle layers recognize individual features of a dog and the highest layer recognizes the image as an image of a dog. Each connection between neurons carries a numerical weight that is modified as the network learns.
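
The toy sketch below, using only the NumPy library, shows those basic mechanics: pixel values enter an input layer, each subsequent layer applies simple calculations to the previous layer's outputs, and a final score is produced. The layer sizes, random weights and “dog”/“not dog” framing are purely illustrative assumptions.

```python
# A toy feed-forward network (NumPy only). Real image models have millions
# of learned weights; the structure here is the same in miniature.
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(64)             # the "input layer": one value per pixel of an 8x8 image

W1 = rng.standard_normal((32, 64))  # weights connecting the input layer to the first hidden layer
W2 = rng.standard_normal((16, 32))  # first hidden layer to second hidden layer
W3 = rng.standard_normal((2, 16))   # second hidden layer to the output layer ("dog" vs. "not dog")

relu = lambda x: np.maximum(0, x)   # each neuron applies a simple calculation to its inputs

hidden1 = relu(W1 @ pixels)         # lower layer: responds to simple features such as edges
hidden2 = relu(W2 @ hidden1)        # middle layer: combinations of simple features
scores = W3 @ hidden2               # output layer: a score for each possible label

print("predicted class:", int(np.argmax(scores)))
# Training consists of nudging the numbers in W1, W2 and W3 so that predictions
# on labeled examples improve; no human writes those numbers by hand.
```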


Risks Posed by Artificial Intelligence

  1. The Risk of Biased Outcomes

Despite the best intentions of programmers, who believe the factors they select are objective, artificial intelligence may still yield biased outcomes. One field in which the risk of biased outcomes exists is employment. Under the disparate impact theory of state and federal antidiscrimination laws, decision-making that treats everyone the same but that results in decisions having a disproportionate impact on persons because of their race, gender, age, national origin, religion or disability status is prohibited.

In the employment context, data is frequently used to create algorithms or statistical models which classify workers based on variables like job tenure, turnover, satisfaction, performance appraisals, absenteeism and culture fit. The algorithm is then fed a training dataset with information about a group of people from which it determines characteristics that can be correlated with some measure of job success. These factors of success could be traditional, such as education or previous work experience, or they could be nontraditional, such as information about an individual's social media activity.

Risks of unlawful practices may arise, for example, when an algorithm looks for applicants with the same characteristics as those possessed by a company's existing managers or highly successful programmers, but minorities or other protected groups are underrepresented in the current workforce.

Another cause of unintentional discrimination can be the data used. Algorithms used for recruiting often incorporate data obtained by searching publicly available databases, where the accuracy or completeness of the data may be questionable, and social media. Individuals without social media or an online presence are more likely to be of lower socio-economic status, which could implicate protected characteristics such as race or national origin.
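
As one illustration of how such bias might be surfaced, the sketch below applies a common screening heuristic, the “four-fifths rule” drawn from the EEOC's Uniform Guidelines on Employee Selection Procedures, to hypothetical hiring counts. The group names and numbers are invented for illustration, and the heuristic is a rough screening tool, not a legal test of disparate impact.

```python
# A simple disparate-impact screen using the "four-fifths rule" heuristic:
# a group's selection rate below 80% of the highest group's rate is commonly
# treated as evidence of adverse impact. Groups and counts are hypothetical.
hired = {"group_a": 45, "group_b": 12}     # applicants selected by the model
applied = {"group_a": 100, "group_b": 60}  # applicants screened by the model

rates = {g: hired[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```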

  2. The Risk of Unexpected Outcomes

The risk of unanticipated outcomes exists, for example, in the automotive industry. Autonomous driving programs will establish certain general rules, such as obeying traffic signs and speed limits, and will eliminate risks posed by human drivers' bad habits, such as drinking and driving and texting while driving. At the same time, autonomous cars will introduce an element of uncertainty because their developers will not be able to anticipate, and pre-program the cars to address, every driving scenario that could arise. Rather, autonomous cars will have to decide how to handle unanticipated dangerous situations based not only on the data they were trained on but also on the data they have accumulated, and what they have learned, since being placed in operation.

One can easily imagine situations in which there is no good outcome: a child chasing a ball suddenly darts out from between two parked cars, and an autonomous car is faced with the choice of hitting the child or swerving and colliding with another car, injuring the occupants of one or both cars.

Obviously these situations raise ethical issues. In June 2017, Germany's Federal Ministry of Transport and Digital Infrastructure tried to address these issues in the world's first ethical guidelines for autonomous vehicles, which include the following:

  • The protection of human life enjoys top priority in a balancing of legally protected interests. Thus, within the constraints of what is technologically feasible, the systems must be programmed to accept damage to animals or property in a conflict if this means that personal injury can be prevented.
  • Any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another. General programming to reduce the number of personal injuries may be justifiable. Those parties involved in the generation of mobility risks must not sacrifice noninvolved parties.
  3. The Risk of Unexplainable Outcomes

Traditionally, one has been able to determine a great deal about how a computing device works from examining source code, which reflects the rules, decision trees and logic used to enable a computing device to perform a particular function in a particular way.

Artificial intelligence is different. As stated above, a neural network's “reasoning” is embedded in the behavior of thousands of simulated neurons arranged in perhaps hundreds of intricately connected layers. The interplay of calculations in a deep neural network needed for high-level pattern recognition and complex decision-making is a quagmire of mathematical functions and variables.

Moreover, over the course of the neural network's training, the operations performed by the individual neurons in the network are continually modified to improve results across the whole set of training samples. Given this iterative process, precisely explaining how a neural network reached an outcome can be difficult, if not impossible, even for a senior data scientist. These difficulties in explaining artificial intelligence are only going to be magnified as it is deployed to handle even more complex tasks in many different industries.
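
The sketch below illustrates the point. After training a small neural network (again on scikit-learn's bundled digits dataset, an illustrative choice, with arbitrary layer sizes), all that can be inspected are arrays of learned numbers; unlike conventional source code, nothing states why a particular image was classified a particular way.

```python
# A sketch of why a trained network is hard to "read": its knowledge lives
# in large arrays of learned numbers, not in inspectable rules.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

digits = load_digits()
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(digits.data, digits.target)

for i, weights in enumerate(net.coefs_):
    print(f"layer {i}: {weights.size} learned weights, e.g. {weights.flat[0]:.4f}")
# The "reasoning" is distributed across thousands of such numbers, each of
# which was adjusted repeatedly during training.
```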


Regulation of Artificial Intelligence

Given the above-described risks, general counsel will want to know whether and to what extent artificial intelligence is regulated. At present, no general regulation governs artificial intelligence. As the capabilities of artificial intelligence improve and as artificial intelligence takes over regulated activities, however, particular applications of artificial intelligence are being regulated. For example, on April 11, 2018, the U.S. Food and Drug Administration permitted marketing of the first medical device to use artificial intelligence alone to review retinal scans and screen for diabetic retinopathy.

Moreover, particular outcomes generated by artificial intelligence will be regulated—for example, in the employment situations discussed above, because various statutes protect persons against discrimination based on race, gender, religion, disability, national origin and marital status.

Artificial intelligence will also be regulated indirectly because of the regulation of the data artificial intelligence uses in various industries. For example, the Family Educational Rights and Privacy Act protects student records, the Health Insurance Portability and Accountability Act (HIPAA) protects medical information and the Gramm-Leach-Bliley Act protects consumer financial information.


Questions GCs Should Be Able to Answer About Their Company's Use of AI

Finally, general counsel need to know how their company is using or considering using artificial intelligence. Here are 10 questions to help you get started.

  1. Are we developing artificial intelligence and, if so, for what purpose and will it be used both internally and externally?
  2. Are we acquiring or licensing artificial intelligence from others or using artificial intelligence-enabled services from a vendor and, if so, from whom and on what terms?
  3. Have we thoroughly evaluated the potential product liability and our warranties for artificial intelligence-enabled offerings?
  4. Have we taken into consideration the inability of artificial intelligence to explain the reasons for its conclusions?
  5. In what countries are we using artificial intelligence?
  6. Are we subject to the General Data Protection Regulation and, if so, are we using artificial intelligence to make decisions that impact the legal rights of individuals and are we able to provide an explanation of how those decisions are made?
  7. What data are we, or those from whom we acquire or license artificial intelligence, using to train the artificial intelligence?
  8. Have we properly evaluated and addressed the risks of bias in our use of artificial intelligence?
  9. Are we protecting intellectual property rights we may have in our artificial intelligence innovations?
  10. How could our Legal Department use artificial intelligence—while still complying with our ethical obligations?

Obviously, good due diligence, contract, product liability, intellectual property and employment lawyers are needed, but the team(s) must include lawyers who have experience in evaluating the above-described risks of artificial intelligence.

Robert Kantner and Carl Kukkonen are partners at Jones Day. This article represents the personal views and opinions of the authors and not necessarily those of the law firm with which they are associated.