Book Review: Ryan Abbott, 'The Reasonable Robot: Artificial Intelligence and the Law'
Womble Bond's Christian Mammen says that Abbott's proposal for AI legal neutrality might be problematic, but his wake-up call for rethinking the law around AI should be heeded.
June 25, 2020 at 08:03 PM
Ryan Abbott is one of the humans behind the recent, unsuccessful effort to get patent offices in the U.S., U.K. and Europe to issue patents in the name of DABUS, an AI algorithm. Abbott, a law professor at the University of Surrey and an adjunct professor at the University of California, Los Angeles, medical school, has now written a book arguing more broadly that "the law should not discriminate between AI and human behavior"—as he puts it, a principle of "AI legal neutrality."
Behind that provocative lead-in—perhaps all the more provocative given current events and the public clamor for true equal justice under law among humans—Abbott offers a smorgasbord of policy arguments that he has drawn together under this umbrella of "AI legal neutrality."
A skeptic who read only as far as the introduction might be tempted to respond to Abbott that we should good and well wait for the AI-powered robot overlords to demand rights for themselves before we volunteer to hand over the keys to our legal system.
But, reading more deeply, it does appear that Abbott's work is motivated by a concern for the well-being of humans and of (human) society. The same is true of a 2016 report to the European Parliament. Although that report was much derided for suggesting the creation of legal personhood for robots and AI, it is clear from the report's opening words (invoking Frankenstein, Prague's Golem and other fearsome monsters) that its intent was to ensure the protection of human interests.
Consistent with this apparent pro-social focus, Abbott has touched upon a number of issues that warrant further discussion, though they are not fully resolved in the book, "The Reasonable Robot: Artificial Intelligence and the Law." Abbott focuses in particular on four areas of the law: tax, tort, patent and criminal. The policy considerations underlying "AI legal neutrality" in each of these areas are slightly different, and one is left with the overall impression that it is something of a stretch to connect them as Abbott has done.
Abbott's argument about tax policy is hardly groundbreaking. As with prior industrial revolutions, the advancement of technology, now including AI, promises to displace human workers. Compared with prior industrial revolutions, the current one is marked by its speed and its threat to knowledge workers as well as manual laborers. Abbott's core argument is that, at the margins, tax policy should not favor replacing humans with automation, but should be in some respect neutral. But then he acknowledges that the efficiencies gained by automation may well outstrip the marginal benefits of leveling the tax incentives between human labor and automation. This leads Abbott to discussions of overall levels of taxation, and of guaranteed basic income, recently championed by former presidential candidate Andrew Yang.
His discussion of tort law takes several interesting left turns to end up at an unexpected conclusion. The opening premise is simple enough, and is well-illustrated through the example of autonomous vehicles (AVs): AVs overall are safer than human-driven cars. But humans are held to the lower standard of negligence, while AVs are held to a higher standard, approaching strict liability. To incentivize the adoption of the safer AV technology, AVs should be held only to the (human) negligence standard: the level of care that would be taken by an ordinary person. Over time, Abbott postulates, the level of ordinary care would evolve to an expectation that humans would always let the AV do the driving. If a human does drive, and gets in a crash, the failure to have let the AI drive would essentially be negligence per se. In this chain of reasoning, Abbott has turned the liability analysis upside-down. We would start with humans measured against an ordinary care standard and AI measured against a higher standard, and end up with humans measured against a strict liability standard, with the AI measured against … what exactly? Still an ordinary person standard? Or, ultimately, an "ordinary robot" standard?
He argues that AIs should be subject to criminal law. Mainly, he analogizes to the observation that corporations, as nonhumans, can be held criminally liable in certain circumstances. But he also acknowledges that corporate responsibility is generally derived from the culpability of the humans through whom the corporation is said to act. It's not clear how this analogy extends to AIs, particularly autonomous AIs. Nor is it clear what forms of criminal punishment would even be available. And this is to say nothing of how to apply (or not) the protections afforded to criminal defendants under the U.S. Constitution. Is an AI entitled to a jury trial? Is an AI protected from "cruel and unusual" punishment?
Finally, we end up where Abbott started, with a discussion of patent law. Abbott devotes two full chapters to the subject: one on inventorship and one on obviousness. The issue of AI inventorship has been resolved, for now, by the patent offices of the U.S., the U.K. and Europe: Inventors must be human. U.S. courts reached this conclusion years before DABUS was a twinkle in some programmer's eye.
The chapter on obviousness seems oddly focused on the level of skill in the art—a shift, perhaps, from a person of skill in the art (POSITA) to an AI of skill in the art (AISITA). But he includes no discussion of what, in the U.S. at least, is the main focus of obviousness: whether some combination of published prior art references fully discloses the claimed invention, as understood by a POSITA. Perhaps Abbott merely takes for granted that AIs, omniscient enough to win at "Jeopardy," will always find combinable prior art and will always see some path to combining the references to arrive at the claimed invention. In other words, as Abbott puts it, "everything is obvious." This is reductive, incompletely explored, and would spell the end of the patent system.
To summarize, Abbott's principle of AI legal neutrality would, at first, protect some human jobs via modified tax policy; make us all safer by incentivizing adoption of less-dangerous AI products and potentially holding transgressive AIs criminally liable; and encourage innovation by rewarding investment in AI inventors. But in the long run, the efficiency gains would become insurmountable and the jobs would be lost anyway, along with governments' tax revenue; humans would be held to an AI standard of care; and the patent system would implode. In the end, I agree with Abbott on one point: we need to reckon, probably urgently, with the challenges that AI will pose to our legal system. But a future based on his principle of AI legal neutrality is uncertain and potentially outright dystopian.
Christian E. Mammen is an IP litigation partner with Womble Bond Dickinson in Palo Alto, California. He has practiced in San Francisco and Silicon Valley for over 20 years, and has held visiting faculty positions at a number of universities, including Oxford University, the University of California, Berkeley Law School, and the University of California, Hastings College of the Law. He holds a J.D. from Cornell Law School, and a D.Phil. in legal philosophy from Oxford University.