Book Review: Ryan Abbott, 'The Reasonable Robot: Artificial Intelligence and the Law'
Womble Bond's Christian Mammen says that Abbott's proposal for AI legal neutrality might be problematic, but his wake-up call for rethinking the law around AI should be heeded.
June 25, 2020 at 08:03 PM
Ryan Abbott is one of the humans behind the recent, unsuccessful effort to get patent offices in the U.S., U.K. and Europe to issue patents in the name of DABUS, an AI algorithm. Abbott, a law professor at the University of Surrey and an adjunct professor at the University of California, Los Angeles, medical school, has now written a book arguing more broadly that "the law should not discriminate between AI and human behavior"—as he puts it, a principle of "AI legal neutrality."
Behind that provocative lead-in—perhaps all the more provocative given current events and the public clamor for true equal justice under law among humans—Abbott's arguments are a smorgasbord of policy arguments that he has drawn together under this umbrella of "AI legal neutrality."
A skeptic who read only as far as the introduction might be tempted to respond to Abbott that we should good and well wait for the AI-powered robot overlords to demand rights for themselves before we volunteer to hand over the keys to our legal system.
But, reading more deeply, it does appear that Abbott's work is motivated by a concern for the well-being of humans and of (human) society. The same is true of a 2016 report to the European Parliament. Although that report was much derided for suggesting the creation of legal personhood for robots and AI, it is clear from the report's opening words (invoking Frankenstein, Prague's Golem and other fearsome monsters) that its intent was to ensure the protection of human interests.
Consistent with this apparent pro-social focus, Abbott touches upon a number of issues that warrant further discussion, though they are not fully resolved in the book, "The Reasonable Robot: Artificial Intelligence and the Law." Abbott focuses in particular on four areas of the law: tax, tort, patents and criminal. The policy considerations underlying "AI legal neutrality" in each of these areas are slightly different, and one is left with the overall impression that connecting them as Abbott has done is something of a stretch.
Abbott's argument about tax policy is hardly groundbreaking. As with prior industrial revolutions, the advancement of technology, now including AI, promises to displace human workers. Compared with prior industrial revolutions, the current one is marked by its speed and its threat to knowledge workers as well as manual laborers. Abbott's core argument is that, at the margins, tax policy should not favor replacing humans with automation, but should be in some respect neutral. But then he acknowledges that the efficiencies gained by automation may well outstrip the marginal benefits of leveling the tax incentives between human labor and automation. This leads Abbott to discussions of overall levels of taxation, and of guaranteed basic income, recently championed by former presidential candidate Andrew Yang.
His discussion of tort law takes several interesting left turns to end up at an unexpected conclusion. The opening premise is simple enough, and is well-illustrated through the example of autonomous vehicles (AVs): AVs overall are safer than human-driven cars. But humans are held to the lower standard of negligence, while AVs are held to a higher standard, approaching strict liability. To incentivize the adoption of the safer AV technology, AVs should be held only to the (human) negligence standard: the level of care that would be taken by an ordinary person. Over time, Abbott postulates, the level of ordinary care would evolve to an expectation that humans would always let the AV do the driving. If a human does drive, and gets in a crash, the failure to have let the AI drive would essentially be negligence per se. In this chain of reasoning, Abbott has turned the liability analysis upside-down. We would start with humans measured against an ordinary care standard and AI measured against a higher standard, and end up with humans measured against a strict liability standard, with the AI measured against … what exactly? Still an ordinary person standard? Or, ultimately, an "ordinary robot" standard?
He argues that AIs should be subject to criminal law. Mainly, he analogizes to the observation that corporations, as nonhumans, can be held criminally liable in certain circumstances. But he also acknowledges that corporate responsibility is generally derived from the culpability of the humans through whom the corporation is said to act. It's not clear how this analogy extends to AIs, particularly autonomous AIs. Nor is it clear what forms of criminal punishment would even be available. And this is to say nothing of how to apply (or not) the protections afforded to criminal defendants under the U.S. Constitution. Is an AI entitled to a jury trial? Is an AI protected from "cruel and unusual" punishment?
Finally, we end up where Abbott started, with a discussion of patent law. Abbott devotes two full chapters to his discussion of patent law: one on inventorship and one on obviousness. The issue of AI inventorship has been resolved, for now, by the patent offices of the U.S., the U.K. and the E.U. Inventors must be human. U.S. courts reached this conclusion years before DABUS was a twinkle in some programmer's eye.
The chapter on obviousness seems oddly focused on the level of skill in the art—a shift, perhaps, from a person of skill in the art (POSITA) to an AI of skill in the art (AISITA). But, oddly, he includes no discussion of what, in the U.S. at least, is the main focus of obviousness: whether some combination of published prior art references fully discloses the claimed invention, as understood by a POSITA. Perhaps Abbott merely takes for granted that AIs, omniscient enough to win at "Jeopardy," will always find combinable prior art references and will always see some path to combining them to arrive at the claimed invention. In other words, as Abbott puts it, "everything is obvious." This conclusion is reductive, incompletely explored, and would spell the end of the patent system.
To summarize, Abbott's principle of AI legal neutrality would, at first: protect some human jobs via modified tax policy, make us all safer by incentivizing adoption of less-dangerous AI products and potentially holding transgressive AIs criminally liable, and encourage innovation by rewarding investment in AI-inventors. But in the long run: the efficiency gains would become insurmountable and the jobs lost anyway along with governments' tax revenue, humans would be held to an AI standard of care, and the patent system would implode. In the end, I agree with Abbott on one point: we need to reckon, probably urgently, with the challenges to our legal system that AI will pose. But a future based on his principle of AI legal neutrality is uncertain and potentially outright dystopian.
Christian E. Mammen is an IP litigation partner with Womble Bond Dickinson in Palo Alto, California. He has practiced in San Francisco and Silicon Valley for over 20 years, and has held visiting faculty positions at a number of universities, including Oxford University, the University of California, Berkeley Law School, and the University of California, Hastings College of the Law. He holds a J.D. from Cornell Law School, and a D.Phil. in legal philosophy from Oxford University.