Artificial Intelligence

In over 100 million homes around the world, people wake up and ask, "Alexa, tell me the weather forecast." For the millions of homes that have installed the Amazon Echo device, artificial intelligence (AI) has arrived and is in daily use. The widespread use of AI is an inflection point in the evolution of information technology, and understanding the basics of this technology is essential for every area of legal practice.

AI technology has powered much more than table-top entertainment devices. For example, mobile mechanical systems, i.e., mobile robots, are being developed to assist in caring for elderly humans, and autonomous vehicle technology already in use has been shown to reduce accidents. AI is also in current use in less obvious ways that touch the lives of most individuals on a daily basis. Internet search engines, such as Google and Bing, use AI to improve search results. With the help of IBM's Watson technology, doctors are using AI to improve medical outcomes. The same AI technology assists weather forecasters with predicting when rain will arrive with minute-to-minute accuracy.

In these iterations, AI technology is being reframed under the terms Cognitive Computing and Augmented Intelligence, which describe a group of technologies that combine and interact to assist humans in performing many different tasks. These and other use cases for AI will inevitably challenge social norms and legal rules, raising new issues for legislators, lawyers, scientists and the public.


Defining Artificial Intelligence

AI is not a sentient agent or a consciousness; it is simply a man-made system that behaves intelligently. In the book Artificial Intelligence, authors and computer science professors David L. Poole and Alan K. Mackworth define an AI system as having the following characteristics:

  • the system performs appropriately for its circumstances and its goals;
  • the system is flexible and can adjust to changing environments and changing goals;
  • the system learns from experience; and
  • the system makes appropriate choices given its perceptual and computational limitations. An AI system typically cannot observe the state of the world directly because it has only a finite memory and it does not have unlimited time to act.

It should be noted that the term "artificial" is only being used to mean something that is created and developed by people, not to indicate that the "intelligence" achieved is, in any sense, "fake."


The Elements of an AI System

How is a computer system able to mimic human intelligence? AI systems utilize a number of software and related hardware technologies that do not, of themselves, constitute a working AI system. These technologies interact and overlap to provide AI capability, depending upon the manner in which they are brought to bear on a problem.


Big Data

The term Big Data refers to large sets of structured and unstructured data and to the software and hardware technologies that enable effective use of the information they contain. In most applications, these large data sets are accessed remotely, making cloud computing technologies an important aspect of Big Data systems. Big Data technology, in conjunction with AI, is a particularly powerful combination. Among many other use cases, it helps retailers target customers by examining their past sales data; it helps sports teams track the performance of their athletes; and it can be brought to bear on issues of practical importance to attorneys, such as analyzing large databases of text documents (statutes, legal opinions and regulations) more quickly and effectively. It also enables the National Security Agency to sift through raw data on millions of phone calls to target possible terrorist activity.


Machine Learning and Deep Learning

One of the key aspects of an intelligent system is the ability to learn from experience. "Machine learning" is a term that refers to algorithms that can improve with the application of more data. Machine learning can be achieved in many different ways.
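The idea that an algorithm "improves with the application of more data" can be made concrete with a minimal sketch. The example below is illustrative only (the data, class names and thresholds are invented for the demonstration): a trivially simple classifier estimates the average value of each of two classes from labeled examples, and its estimates, and therefore its accuracy, tend to improve as it sees more data.

```python
import random

random.seed(42)

def make_data(n):
    """Generate n labeled points per class: class 0 clusters near 2.0,
    class 1 near 8.0 (hypothetical data for illustration)."""
    data = [(random.gauss(2.0, 2.0), 0) for _ in range(n)]
    data += [(random.gauss(8.0, 2.0), 1) for _ in range(n)]
    return data

def train(data):
    """'Learning' here is simply estimating each class's mean from examples."""
    means = {}
    for label in (0, 1):
        points = [x for x, y in data if y == label]
        means[label] = sum(points) / len(points)
    return means

def predict(means, x):
    # Assign x to whichever class mean it is closest to.
    return min(means, key=lambda label: abs(x - means[label]))

def accuracy(means, test_data):
    correct = sum(predict(means, x) == y for x, y in test_data)
    return correct / len(test_data)

test_set = make_data(500)
small_model = train(make_data(5))     # trained on few examples
large_model = train(make_data(500))   # trained on many examples
print(accuracy(small_model, test_set), accuracy(large_model, test_set))
```

Real machine-learning models are far more elaborate, but the principle is the same: the program's behavior is derived from data rather than written out rule by rule, so more (good) data generally means better behavior.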

The term Artificial Neural Network refers to networks of simple interconnected units that conceptually (though not literally) emulate the neural structure of the human brain. Neural networks excel at the unguided discovery of patterns, a quality that is very useful for such tasks as image recognition, prediction of future trends, and audio signal processing. Neural networks learn to identify objects by a series of what the average person might describe as "guesstimates." The system is taught the basic characteristics of target information, for example a chair. After being taught that most chairs have four legs, a seat and a back, the system is fed thousands of images of various chairs. These images "teach" the system the various iterations of furniture used as chairs. In this way networks "learn" to recognize different objects. When a network is then set loose on fresh unlabeled data, it should be able to correctly identify the chairs or other objects within.
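The "guesstimate" loop described above can be sketched with the simplest possible network: a single artificial neuron (a perceptron). This toy example is not how production image recognizers work; the feature names (legs, back, seat) and training examples are hypothetical stand-ins for the thousands of labeled chair images the text describes. The neuron starts with no knowledge, and each wrong guess nudges its internal weights toward the correct answer.

```python
# A minimal single-neuron "network" trained on labeled examples,
# analogous to showing a system many labeled chair images.
# Hypothetical features: [number_of_legs, has_back, has_seat].

def predict(weights, bias, features):
    activation = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if activation > 0 else 0   # 1 = "chair", 0 = "not a chair"

def train(examples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(weights, bias, features)
            # A wrong guess (error != 0) nudges the weights; a correct
            # guess (error == 0) leaves them unchanged.
            weights = [w + lr * error * f for w, f in zip(weights, features)]
            bias += lr * error
    return weights, bias

labeled_examples = [
    ([4, 1, 1], 1),  # four legs, back, seat -> chair
    ([3, 1, 1], 1),  # three-legged chair
    ([4, 0, 1], 0),  # stool: seat but no back
    ([0, 0, 0], 0),  # not furniture at all
]
weights, bias = train(labeled_examples)
print(predict(weights, bias, [4, 1, 1]))  # classify an unseen chair-like object
</antml_hidden>```

Deep networks stack many layers of such units, but the training idea, adjusting weights in response to errors on labeled data, is the same.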

Deep Learning is a special type of Machine Learning that involves a multifaceted level of automation. Machine Learning requires a programmer to tell the algorithm what kinds of things it should be looking for in order to make a decision. Merely feeding the algorithm with raw data is rarely effective. This "feature extraction" places a burden on the programmer, especially in complex problems, and the algorithm's effectiveness relies heavily on the programmer's skill. Deep Learning models address this problem because they are capable of learning to focus on the right features by themselves. The system requires little guidance from the programmer, and the features it learns on its own can outperform those a human engineer would think to specify.
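The feature-extraction burden that Deep Learning removes can be seen in a small sketch. In classical machine learning, the programmer must write code like the following by hand, deciding in advance which properties of the raw data matter (the "image" and the chosen features below are invented for illustration); in a deep learning system, this step is learned from the data instead of written by a person.

```python
# Hand-crafted feature extraction: the programmer, not the system,
# decides which properties of the raw data are worth measuring.
# The raw "image" here is an illustrative tiny grid of 0/1 pixels.

def extract_features(image):
    """Return hand-chosen features; the downstream model is only as
    good as the programmer's choices here."""
    flat = [pixel for row in image for pixel in row]
    return {
        "ink": sum(flat),                          # count of dark pixels
        "width": max(sum(row) for row in image),   # widest row of dark pixels
        "symmetric": int(all(row == row[::-1] for row in image)),
    }

image = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
print(extract_features(image))
```

If "ink," "width" and symmetry turn out to be the wrong properties for the task, the model fails no matter how much data it sees; a deep network sidesteps this by discovering its own internal features from the raw pixels.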

Deep learning works very well on complex tasks such as image and speech recognition. But the technique is very computationally intensive, and deep learning systems require a great deal of data for their training. Another issue is "explainability," or the "black box" problem. The ability of Deep Learning systems to learn in an "unsupervised" manner (i.e., not dependent upon a programmer) can make it difficult to understand and describe the process that the system is using to make classifications and accomplish tasks. This creates practical difficulties (how do you fix such a system, or assign responsibility if it fails or does harm?) as well as philosophical and societal issues.


Out of the AI Lab and Onto the Road

Consumer vehicles with autonomous features have rolled out of automobile factories and onto the roads in the past decade. While the Tesla brand is most frequently mentioned, most auto manufacturers have incorporated some intelligent features in their vehicles and are working to create fully autonomous cars. Efforts to develop autonomous vehicles incorporating AI go back to the 1990s, but they received a boost in 2003 when the U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA) issued the first of several "challenges," with monetary prizes for top-placing teams, to spur the development of autonomous ground vehicles. While no one won the first DARPA Grand Challenge, subsequent iterations of the contest were instrumental in encouraging efforts that led to the first tests of self-driving auto prototypes on public streets. Using a combination of advanced sensors and AI learning, autonomous vehicles can navigate streets without human interaction. However, to "learn" how to drive, the vehicles require millions of miles of driving time to identify other cars, motorcycles, bicycles, stop signs, children, dogs and the many other situations found on a public road.

Experience has shown that AI-controlled autonomous vehicles have not yet been perfected to drive without human intervention. Tesla has reported accidents involving its vehicles. The first known fatal accident involving the Tesla Model S occurred in May 2016, when a vehicle with the autopilot engaged failed to stop before crashing into a tractor-trailer making a turn in front of the vehicle. In a statement on its blog following the accident, Tesla reiterated that purchasers of the vehicle are instructed to keep their hands on the wheel when the autopilot is engaged. On the other hand, the marketing of the product as an "autopilot" suggests otherwise.

The National Highway Traffic Safety Administration (NHTSA) launched an investigation of the accident. The agency concluded that it was not caused by a malfunction of the Tesla technology but rather by a known limitation in the abilities of its AI features. The accident presented a situation (a "crossing path collision") that the Tesla autopilot system was not designed to handle. The NHTSA report reiterated the position taken by Tesla that a Tesla vehicle with the autopilot engaged is not a fully self-driving vehicle; it requires the "continual and full attention of the driver to monitor the traffic environment and be prepared to take action to avoid crashes." The agency also cited statistics showing that the crash rate for Tesla vehicles dropped 40% after its Autosteer technology was installed.

The Tesla accident demonstrates one of the challenges for future deployment of AI technologies in autonomous vehicles as well as other "mission-critical" applications. There is no doubt that there will be accidents involving the use of AI technology, even as the technology improves safety overall. Whether the public will accept the tradeoff remains to be seen.

Peter Brown is the principal at Peter Brown & Associates. He is a co-author of "Computer Law: Drafting and Negotiating Forms and Agreements" (Law Journal Press). Jonathan Kaplan of Kaplan IP Law PLLC in Camas, Washington, contributed to the preparation of this article.