The 'Driverless' Car Era: Liability Considerations
In his Complex Litigation column, Michael Hoenig writes: Through huge advances in computer technology (hardware and software), artificial intelligence, sensors, cameras, radar, and mirrors, a car can be transformed into a platform “intelligent” enough to “self-drive” safely.
November 09, 2017 at 02:30 PM
Most readers have heard something about the advent of so-called “self-driving” or “driverless” cars. Some of the more technical terms used by safety regulators, scientists, the motor vehicle industry and others are “automated” or “autonomous vehicles” (AVs), “connected” cars (i.e., cars that feature vehicle-to-vehicle communications, or “V2V,” as well as vehicles that can “communicate” with infrastructure, or “V2I”). In such vehicles, depending on the level of car autonomy, some (or even all) functions of the traditional driver's tasks are handled by features built into the vehicle.
How is this possible? Through huge advances in computer technology (hardware and software), artificial intelligence, sensors, cameras, radar, and mirrors, the car itself can be transformed into a platform “intelligent” enough to “self-drive” safely. At the highest levels of car autonomy, the “driver” can, in effect, become a “passenger,” free to do things other than drive. A while back, the Society of Automotive Engineers (SAE) identified six levels of car autonomy, functional categories adopted by the National Highway Traffic Safety Administration (NHTSA), the federal agency that regulates car safety by promulgating safety standards, policing industry compliance, identifying defects and ordering recalls.
The six levels of automation proceed from Level 0 (no autonomy; the human driver performs all driving tasks), to Level 1 (the driver controls the vehicle, but the design may include features that sometimes assist with steering or with braking/accelerating), to Level 2 (partial automation; the car has combined automated functions, but the driver must remain engaged and monitor the environment at all times).
Levels 3 to 5 are much more advanced when it comes to incorporating autonomous features. In Level 3, an Automated Driving System (ADS) can itself perform all aspects of the driving task under some conditions but the human driver must be ready to take back control at any time the ADS requests the driver to do so. In all other circumstances, the driver performs the driving task. In Level 4, the automated system can itself perform all driving tasks (and monitor the environment) in certain circumstances. The human driver need not pay attention in those circumstances. At Level 5, the vehicle's automated system can do all the driving in all circumstances. The human occupants are just passengers and need never be involved in driving.
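For readers who think in code, the short Python sketch below restates this taxonomy as an enumeration. It is purely illustrative: the level descriptions track the summary above, but the class, constant and function names are my own assumptions and are not defined by SAE or NHTSA.

```python
from enum import IntEnum


class SAELevel(IntEnum):
    """SAE driving-automation levels as summarized above (illustrative names only)."""
    NO_AUTOMATION = 0           # human driver performs all driving tasks
    DRIVER_ASSISTANCE = 1       # driver in control; occasional steering or braking/accelerating assist
    PARTIAL_AUTOMATION = 2      # combined automated functions; driver must monitor at all times
    CONDITIONAL_AUTOMATION = 3  # ADS drives under some conditions; driver must take over when asked
    HIGH_AUTOMATION = 4         # ADS drives and monitors in certain circumstances; no attention needed then
    FULL_AUTOMATION = 5         # ADS drives in all circumstances; occupants are passengers only


def driver_must_monitor(level: SAELevel) -> bool:
    """At Levels 0-2 the human driver must monitor the environment at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION


print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))   # True
print(driver_must_monitor(SAELevel.HIGH_AUTOMATION))      # False
```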
NHTSA, the safety agency, has been pushing the industry, in stages, to get to the self-driving era expeditiously. In September 2016, the agency issued its vision for “highly automated vehicles” (HAVs) in the form of a “Federal Automated Vehicles Policy,” along with an “Enforcement Guidance Bulletin on Safety-Related Defect and Automated Safety Technologies.” The two publications are not binding on the industry but furnish guidelines the agency expects will be followed. Readers can readily access these items via the NHTSA.gov website. Congressional committees, too, have acted with haste. Already, House and Senate draft bills have proposed legislation regarding regulation of AVs. These drafts emerged amidst heavy lobbying by interest groups (including lawyers).
Lifesaving Benefits
Why this push? Why the rush towards autonomous vehicles? From the regulatory perspective, the paramount rationale is safety. Automated vehicles' potential to save lives and reduce injuries is rooted in traffic facts: 94 percent of serious crashes are due to human error; more than 35,000 persons died in motor vehicle-related crashes in the United States in 2015; and more than 2.4 million injuries occur per year. Removing accident-causing human error from the traffic equation would yield lifesaving benefits for drivers, passengers, pedestrians and bicyclists.
NHTSA also foresees additional economic and social benefits. A study showed that motor vehicle crashes in 2010 cost $242 billion in economic activity, including $57.6 billion in lost workplace productivity, as well as some $594 billion due to loss of life and decreased quality of life resulting from injuries. Were the vast majority of motor vehicle crashes eliminated, such costs could be erased.
NHTSA also believes that highly automated vehicles will smooth traffic flow and reduce traffic congestion. Americans spent an estimated 6.9 billion hours in traffic delays in 2014, cutting into time at work or with family, increasing fuel costs and vehicle emissions. A recent study stated that automated vehicles could free up some 50 minutes each day that previously was dedicated to driving. Further, self-driving vehicles may provide new mobility options to millions more Americans. There are some 49 million Americans over age 65 and some 53 million with some form of disability who could benefit. One study suggested that automated vehicles could create new employment opportunities for approximately two million people with disabilities.
Automated vehicles will have to react to other vehicles' movements on the roadway. As the number of autonomous vehicles increases, their V2V “communications” will help inform appropriate safety-related maneuvers. In effect, the cars will “talk to each other.” Similarly, since roadway infrastructure, signage, traffic controls, utility poles, guardrails and the like are part of the environment, “connectivity” of automated vehicles to roadway signals and controls will have to be implemented. This will come at considerable cost. Nevertheless, many states have begun to take concrete steps to join the AV world. This means, for example, revising motor vehicle codes, allowing automated vehicles to be tested on actual roadways and devising insurance programs that satisfy the needs of the new AV era.
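To give a flavor of what cars “talking to each other” involves, here is a minimal, hypothetical sketch in Python of a V2V status broadcast and a toy rule for reacting to it. The field names and the rule are illustrative assumptions only; they do not reproduce the actual SAE J2735 message set or any particular standard.

```python
from dataclasses import dataclass


@dataclass
class V2VStatusMessage:
    """A simplified, hypothetical V2V broadcast; real standards carry many more fields."""
    vehicle_id: str       # temporary, rotating identifier
    latitude: float       # degrees
    longitude: float      # degrees
    speed_mps: float      # meters per second
    heading_deg: float    # 0-360, clockwise from north
    hard_braking: bool    # flag nearby vehicles can use to begin slowing early


def should_pre_brake(own_speed_mps: float, msg: V2VStatusMessage) -> bool:
    """Toy rule: start slowing if a nearby vehicle reports hard braking and we are moving."""
    return msg.hard_braking and own_speed_mps > 0.0


msg = V2VStatusMessage("tmp-4821", 40.7580, -73.9855, 12.0, 270.0, hard_braking=True)
print(should_pre_brake(own_speed_mps=15.0, msg=msg))   # True
```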
One huge complexity is that automated vehicles will, for a substantial period of time, have to share roadways with enormous populations of vehicles that are not automated. In effect, the AV cars will not be “connected” to Level 0 vehicles. Accordingly, even if the potential for human error is largely eliminated in the AV cars, it will not be eliminated in the scores of millions of non-AV cars and trucks. We can therefore expect a sharper focus on the non-AV driver as a potential accident-causing liability target. Similarly, because the new technology likely will be viewed with suspicion by many, we can expect the owner of the AV car or the vehicle manufacturer (and the suppliers of its hardware and software) to be targets for suit.
Space limitations here preclude a detailed discussion of the complicated liability and regulatory picture in the brave new world of AVs. (See generally, e.g., M.A. Geistfeld, “A Roadmap For Autonomous Vehicles: State Tort Liability, Automobile Insurance And Federal Safety Regulation,” 105 California L. Rev. 101 (2017); S.P. Wood, et al., “The Potential Regulatory Challenges of Increasingly Autonomous Vehicles,” 52 Santa Clara L. Rev. 1423 (2012); M.I. Krauss, “What Should Tort Law Do When Autonomous Vehicles Crash?,” Forbes (April 7, 2017); J. Villasenor, “Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation,” Brookings Institution (April 2014); “Autonomous Car Liability,” Wikipedia). Therefore, only some abbreviated highlights regarding liability considerations are identified here. Indeed, there also are serious policy questions as to whether traditional liability doctrines ought to fully control when an emerging technology that promises vast lifesaving and injury-preventing benefits is in its early or interim stages.
Liability Considerations
Imposing, at the outset, crushing liability costs or explosive class action exposures upon manufacturers of AVs or suppliers of their software could stunt the development and improvement of self-driving vehicles. That could threaten achieving the lifesaving benefits that motivated NHTSA to push for the new technology frontier in the first place. Likewise, steep liability costs early on could force manufacturers to pass those costs to their purchasers, many of whom will decline to absorb the increased purchase price, thereby dooming the increased infusion of AVs onto U.S. roadways and frustrating the overall objective of saving lives. Therefore, some scholars and experts have suggested that federal preemption of certain kinds of lawsuits should govern at least the early stages of AV use. Alternatively, a victim compensation fund could be established by Congress, much like the compensation program created by the National Childhood Vaccine Injury Act of 1986, when liability concerns threatened public health by jeopardizing access to vaccines. A third approach is to adopt a no-fault insurance program. See V. Schwartz, “Driverless Cars: The Legal Landscape” (June 14, 2017) (Panel 3: Liability & Insurance, in C. Silverman, et al., “Torts of the Future”, etc. (U.S. Chamber Inst. for Legal Reform 2017)).
To begin with, let's understand that self-driving cars will rely on algorithms that program the vehicle to “optimize” its “decision-making” when confronted with a set of circumstances. Some scenarios inevitably will force the automated vehicle to “choose” between avoiding/minimizing injury to the driver versus killing or injuring others by the AV's automated evasive maneuvers. For example, let's say the AV confronts a truck speeding head-on towards the AV in a school zone. At the same time, school children come running out of school and congregate on the sidewalk. If the AV stays where it is or merely brakes, the truck likely will smash the car and kill or maim the driver. If the AV, however, maneuvers to the right to evade the truck, it will mount the sidewalk and kill or injure many students.
This scenario (numerous others can be hypothesized) presents an ethical quandary for the AV programmers. Should the AV's algorithms be designed to always favor what's best for the driver (and the other car occupants) or should the algorithm “choose” the course of least overall harm, including those outside the AV? Indeed, will purchasers readily buy an AV that may treat its owner unfavorably? NHTSA has called for industry members and others to coordinate transparently on such ethical dilemmas and come to a consensus. However, we can visualize the legal “field day” the school children's lawyers would have in court. Programmed AV “behavior” decisions can easily be criticized by lawyers in hindsight. Arguably, were NHTSA to approve algorithmic AV “decision-making” when such ethical quandaries are presented, that regulatory approval ought to immunize the car maker from claims that second guess the algorithmic choice.
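To see why such programmed choices are so easy to second-guess in hindsight, consider the toy sketch below in Python. It scores each candidate maneuver by a weighted estimate of harm to occupants and to people outside the vehicle and picks the lowest total. The function names, weights and harm numbers are purely hypothetical assumptions for illustration; no manufacturer's actual algorithm is being described.

```python
def choose_maneuver(options, occupant_weight=1.0, bystander_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm (hypothetical illustration).

    `options` maps a maneuver name to a pair (expected_occupant_harm, expected_bystander_harm),
    each on an arbitrary 0-10 scale.
    """
    def total_harm(harms):
        occupant, bystander = harms
        return occupant_weight * occupant + bystander_weight * bystander

    return min(options, key=lambda name: total_harm(options[name]))


# The school-zone hypothetical from the column, with made-up harm estimates:
scenario = {
    "brake_in_lane": (9.0, 0.0),   # truck likely strikes the AV; occupants at grave risk
    "swerve_right": (1.0, 9.0),    # AV mounts the sidewalk toward the children
}

print(choose_maneuver(scenario))                            # brake_in_lane (total 9.0 vs. 10.0)
print(choose_maneuver(scenario, occupant_weight=2.0))       # swerve_right once occupant harm counts double
```

The point of the sketch is that the “decision” turns entirely on the weights the programmers choose, which is exactly the ethical dilemma, and exactly the design choice a plaintiff's lawyer can attack after the fact.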
The self-driving car era will trigger a host of other liability considerations. Should routine products liability rules (design, manufacturing defect, warnings, warranty, misrepresentation, consumer fraud) apply to such advanced, software-intensive technology? Certainly, we can expect that the consumer must be fully informed and warned about the product and its limitations. Other questions abound. Will pre-trial discovery into complex technical, trade-secret topics become a quagmire that drives up litigation costs? After a crash between AV and non-AV vehicles, how is liability, if any, to be apportioned? Will individual trials become too complex, too lengthy and too expensive for court systems to handle en masse?
Then there will be liability concerns and challenges about vulnerability of AVs to hacking, invasions of privacy and cybercrime by third parties. Anti-hacking specialists have demonstrated that increased computer portals can allow bad actors to “break into” or “take over” operative functions of an AV and cause damaging or injurious mischief. Similarly, since AV car features will include advanced electronic communication capabilities, car occupants are likely to become vulnerable to privacy breaches and cybercrime incursions. Will such threats trigger loads of individual litigations and class actions? Will traditional liability rules handle such challenges, or will the rules have to adapt to the premise that AV technology, for all its miracles, may not be perfect? Perhaps some legal slack will have to be given in exchange for preserving the lifesaving and other benefits envisioned by NHTSA.
Michael Hoenig is a member of Herzfeld & Rubin.