Waking Up to Artificial Intelligence
In his Technology Law column, Peter Brown writes: The widespread use of AI is an inflection point in the evolution of information technology, and understanding the basics of this technology is essential for every area of legal practice.
February 10, 2020 at 12:15 PM
In over 100 million homes around the world, people wake up and ask, "Alexa, tell me the weather forecast." For the millions of homes that have installed the Amazon Echo device, artificial intelligence (AI) has arrived and is in use daily. The widespread use of AI is an inflection point in the evolution of information technology, and understanding the basics of this technology is essential for every area of legal practice.
AI technology has powered much more than table-top entertainment devices. For example, mobile mechanical systems, i.e., mobile robots, are being developed to assist in caring for elderly humans, and autonomous vehicle technology already in use has been shown to reduce accidents. AI is also in current use in less obvious ways that touch the lives of most individuals on a daily basis. Internet search engines, such as Google and Bing, use AI to improve search results. With the help of IBM's Watson technology, doctors are using AI to improve medical outcomes. The same AI technology assists weather forecasters with predicting when rain will arrive with minute-to-minute accuracy.
In these iterations, AI technology is being reframed by the terms Cognitive Computing and Augmented Intelligence, to describe a group of technologies that combine or interact to assist humans in the performance of many different tasks. These and other use cases for AI will inevitably challenge social norms and legal rules, raising new issues for legislators, lawyers, scientists and the public.
Defining Artificial Intelligence
AI is not a sentient agent and has no consciousness; it is simply a man-made system that behaves intelligently. In the book Artificial Intelligence, authors and computer science professors David L. Poole and Alan K. Mackworth define an AI system as having the following characteristics:
- the system performs appropriately for its circumstances and its goals;
- the system is flexible and can adjust to changing environments and changing goals;
- the system learns from experience; and
- the system makes appropriate choices given its perceptual and computational limitations.

An AI system typically cannot observe the state of the world directly, because it has only a finite memory and it does not have unlimited time to act.
It should be noted that the term "artificial" is only being used to mean something that is created and developed by people, not to indicate that the "intelligence" achieved is, in any sense, "fake."
The Elements of an AI System
How is a computer system able to mimic human intelligence? AI systems utilize a number of software and related hardware technologies that do not, of themselves, constitute a working AI system. These technologies interact and overlap to provide AI capability, depending upon the manner in which they are brought to bear on a problem.
Big Data
The term Big Data refers to large sets of structured and unstructured data and to the software and hardware technologies that enable effective use of the information they contain. In most applications, these large data sets are accessed remotely, making cloud computing technologies an important aspect of Big Data systems. Big Data technology, in conjunction with AI, is a particularly powerful combination. Among many other use cases, it helps retailers target customers by examining their past sales data; it helps sports teams track the performance of their athletes; and it can be brought to bear on issues of practical importance to attorneys, such as analyzing large databases of text documents (statutes, legal opinions and regulations) more quickly and effectively. It also enables the National Security Agency to sift through raw data on millions of phone calls to target possible terrorist activity.
Machine Learning and Deep Learning
One of the key aspects of an intelligent system is the ability to learn from experience. "Machine learning" is a term that refers to algorithms that can improve with the application of more data. Machine learning can be achieved in many different ways.
The term "artificial neural network" refers to networks of simple interconnected units that emulate the neural structure of the human brain (conceptually, not literally). Neural networks excel at the unguided discovery of patterns, a quality that is very useful for tasks such as image recognition, prediction of future trends, and audio signal processing. Neural networks learn to identify objects through a series of what the average person might describe as "guesstimates." The system is taught the basic characteristics of target information, for example a chair. After being taught that most chairs have four legs, a seat and a back, the system is fed thousands of images of various chairs. These images "teach" the system the many iterations of furniture used as chairs. In this way, networks "learn" to recognize different objects. When a network is then set loose on fresh, unlabeled data, it should be able to correctly identify the chairs or other objects within.
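The "guesstimate" learning loop described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration (a single artificial neuron with made-up yes/no features such as "has legs," "has a seat," "has a back"), not the large image-recognition networks discussed in this column, but it shows the same idea: guess, compare against the label, and adjust.

```python
# A toy "single neuron" that learns to recognize chairs from labeled examples.
# Each example is three hypothetical 0/1 features: [has_legs, has_seat, has_back].

def predict(weights, bias, features):
    """Return 1 ("chair") or 0 ("not a chair") for one set of features."""
    total = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if total > 0 else 0

def train(examples, labels, epochs=50):
    """Perceptron-style learning: nudge the weights whenever a guess is wrong."""
    weights, bias = [0, 0, 0], 0
    for _ in range(epochs):
        for features, label in zip(examples, labels):
            error = label - predict(weights, bias, features)  # -1, 0, or +1
            weights = [w + error * f for w, f in zip(weights, features)]
            bias += error
    return weights, bias

# Labeled training data: only the object with legs, a seat AND a back is a chair.
examples = [[1, 1, 1], [1, 1, 0], [0, 1, 1], [0, 0, 0], [1, 0, 0], [0, 0, 1]]
labels   = [1,         0,         0,         0,         0,         0]

weights, bias = train(examples, labels)
print(predict(weights, bias, [1, 1, 1]))  # prints 1: a fresh chair-like object
```

Real neural networks use many layers of such units and continuous weights, but the training principle, adjusting internal numbers in response to labeled examples, is the same.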
Deep Learning is a special type of Machine Learning that automates more of the process. Traditional Machine Learning requires a programmer to tell the algorithm what kinds of features it should look for in order to make a decision; merely feeding the algorithm raw data is rarely effective. This feature extraction places a burden on the programmer, especially in complex problems, and the algorithm's effectiveness relies heavily on the programmer's skill. Deep Learning models address this problem because they are capable of learning to focus on the right features by themselves. The system requires little guidance from the programmer and, in some tasks, can produce analyses that surpass human performance.
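To make the "feature extraction" burden concrete, here is a hypothetical sketch of the classic approach: a programmer hand-picks the features (the function and feature names below are invented for illustration) and the system can only be as good as those choices. A deep learning system would instead learn its own features from raw data.

```python
# Hypothetical hand-crafted feature extraction for a toy spam filter.
# The programmer, not the algorithm, decides what matters.

def extract_features(email_text):
    """Turn raw text into the programmer-chosen signals."""
    lowered = email_text.lower()
    return {
        "has_money_words": any(w in lowered for w in ("free", "winner", "prize")),
        "exclamation_count": email_text.count("!"),
        "shouted_words": sum(1 for w in email_text.split()
                             if w.isupper() and len(w) > 1),
    }

def looks_like_spam(features):
    """A hand-written scoring rule standing in for a trained classifier."""
    score = (2 * features["has_money_words"]
             + features["exclamation_count"]
             + features["shouted_words"])
    return score >= 3

print(looks_like_spam(extract_features("FREE prize!!! CLICK now")))   # True
print(looks_like_spam(extract_features("Lunch tomorrow at noon?")))   # False
```

If spammers change tactics, the programmer must invent new features; a deep learning model, given enough raw examples, can adapt its internal features on its own.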
Deep learning works very well in complex tasks such as image and speech recognition. But the technique is very computationally intensive, and deep learning systems require a great deal of data for their training. Another issue is "explainability," or the "black box" problem. The ability of Deep Learning systems to learn in an "unsupervised" manner (i.e., without step-by-step guidance from a programmer) can make it difficult to understand and describe the process that the system is using to make classifications and accomplish tasks. This creates practical difficulties (how do you fix such a system, or assign responsibility, if it fails or does harm?) as well as philosophical and societal issues.
Out of the AI Lab and Onto the Road
Consumer vehicles with autonomous features have rolled out of automobile factories and onto the roads in the past decade. While the Tesla brand is most frequently mentioned, most auto manufacturers have incorporated some intelligent features in their vehicles and are working to create fully autonomous cars. Efforts to develop autonomous vehicles incorporating AI go back to the 1990s, but they received a boost in 2003, when the U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA) issued the first of several "challenges," with monetary prizes for top-placing teams, to spur the development of autonomous ground vehicles. While no one won the first DARPA Grand Challenge, subsequent iterations of the contest were instrumental in encouraging efforts that led to the first tests of self-driving auto prototypes on public streets. Using a combination of advanced sensors and AI learning, autonomous vehicles can navigate streets without human interaction. However, to "learn" how to drive, the vehicles require millions of miles of driving time to identify other cars, motorcycles, bicycles, stop signs, children, dogs and the many other situations found on a public road.
Experience has shown that AI-controlled autonomous vehicles have not yet been perfected to drive without human intervention. Tesla has reported accidents involving its vehicles. The first known fatal accident involving the Tesla Model S was reported in March 2016, when a vehicle with the autopilot engaged failed to stop before crashing into a tractor-trailer making a turn in front of the vehicle. In a statement on its blog following the accident, Tesla reiterated that purchasers of the vehicle are instructed to keep their hands on the wheel when the autopilot is engaged. On the other hand, the marketing of the product as an "autopilot" suggests otherwise.
The National Highway Traffic Safety Administration (NHTSA) launched an investigation of the accident. The agency concluded that it was not caused by a malfunction of the Tesla technology but rather, by a known limitation in the abilities of its AI features. The accident presented a situation (a "crossing path collision") that the Tesla autopilot system was not designed to handle. The NHTSA report reiterated the position taken by Tesla that a Tesla vehicle with the autopilot engaged is not a fully self-driving vehicle; it requires the "continual and full attention of the driver to monitor the traffic environment and be prepared to take action to avoid crashes." The agency also cited statistics showing that the crash rate for Tesla vehicles dropped 40% after its Autosteer technology was installed.
The Tesla accident demonstrates one of the challenges for future deployment of AI technologies in autonomous vehicles as well as other "mission-critical" applications. There is no doubt that there will be accidents involving the use of AI technology, even as the technology improves safety overall. Whether the public will accept the tradeoff remains to be seen.
Peter Brown is the principal at Peter Brown & Associates. He is a co-author of "Computer Law: Drafting and Negotiating Forms and Agreements" (Law Journal Press). Jonathan Kaplan of Kaplan IP Law PLLC in Camas, Washington, contributed to the preparation of this article.