Ryan Abbott, law professor at the University of Surrey and adjunct assistant professor of medicine at the David Geffen School of Medicine at UCLA. (Courtesy photo)

Ryan Abbott is one of the humans behind the recent, unsuccessful effort to get patent offices in the U.S., U.K. and Europe to issue patents in the name of DABUS, an AI algorithm. Abbott, a law professor at the University of Surrey and an adjunct professor at the University of California, Los Angeles medical school, has now written a book arguing more broadly that "the law should not discriminate between AI and human behavior"—as he puts it, a principle of "AI legal neutrality."

Behind that provocative lead-in—perhaps all the more provocative given current events and the public clamor for true equal justice under law among humans—Abbott's book offers a smorgasbord of policy arguments that he has drawn together under the umbrella of "AI legal neutrality."

A skeptic who reads only as far as the introduction might be tempted to respond to Abbott that we should good and well wait for the AI-powered robot overlords to demand rights for themselves before we volunteer to hand over the keys to our legal system.

But, reading more deeply, it does appear that Abbott's work is motivated by a concern for the well-being of humans and of (human) society. The same is true of a 2016 report to the European Parliament. Although that report was much derided for suggesting the creation of legal personhood for robots and AI, it is clear from the report's opening words (invoking Frankenstein, Prague's Golem and other fearsome monsters) that its intent was to ensure the protection of human interests.

Consistent with this apparent pro-social focus, Abbott has touched upon a number of issues that warrant further discussion, though they are not fully resolved in the book, "The Reasonable Robot: Artificial Intelligence and the Law." Abbott focuses in particular on four areas of the law: tax, tort, patent and criminal law. The policy considerations underlying "AI legal neutrality" differ slightly in each of these areas, and one is left with the overall impression that it is something of a stretch to connect them as Abbott has done.

Abbott's argument about tax policy is hardly groundbreaking. As with prior industrial revolutions, the advancement of technology, now including AI, promises to displace human workers. Compared with prior industrial revolutions, the current one is marked by its speed and its threat to knowledge workers as well as manual laborers. Abbott's core argument is that, at the margins, tax policy should not favor replacing humans with automation, but should be in some respect neutral. But then he acknowledges that the efficiencies gained by automation may well outstrip the marginal benefits of leveling the tax incentives between human labor and automation. This leads Abbott to discussions of overall levels of taxation and of a universal basic income, recently championed by former presidential candidate Andrew Yang.

His discussion of tort law takes several interesting left turns to end up at an unexpected conclusion. The opening premise is simple enough, and is well illustrated through the example of autonomous vehicles (AVs): AVs overall are safer than human-driven cars. But humans are held to the lower standard of negligence, while AVs are held to a higher standard approaching strict liability. To incentivize adoption of the safer AV technology, Abbott argues, AVs should be held only to the (human) negligence standard: the level of care that would be taken by an ordinary person. Over time, Abbott postulates, the level of ordinary care would evolve to an expectation that humans would always let the AV do the driving. If a human does drive and gets into a crash, the failure to have let the AI drive would essentially be negligence per se. In this chain of reasoning, Abbott has turned the liability analysis upside down. We would start with humans measured against an ordinary-care standard and AI measured against a higher standard, and end up with humans measured against what amounts to a strict liability standard, with the AI measured against … what exactly? Still an ordinary-person standard? Or, ultimately, an "ordinary robot" standard?

Abbott next argues that AIs should be subject to criminal law. Mainly, he analogizes to the observation that corporations, as nonhumans, can be held criminally liable in certain circumstances. But he also acknowledges that corporate criminal responsibility is generally derived from the culpability of the humans through whom the corporation is said to act. It is not clear how this analogy extends to AIs, particularly autonomous AIs. Nor is it clear what forms of criminal punishment would even be available. And this is to say nothing of how to apply (or not) the protections afforded to criminal defendants under the U.S. Constitution. Is an AI entitled to a jury trial? Is an AI protected from "cruel and unusual" punishment?

Finally, we end up where Abbott started, with patent law. Abbott devotes two full chapters to the subject: one on inventorship and one on obviousness. The issue of AI inventorship has been resolved, for now, by the patent offices of the U.S. and the U.K. and by the European Patent Office: inventors must be human. U.S. courts reached this conclusion years before DABUS was a twinkle in some programmer's eye.

The chapter on obviousness focuses, somewhat oddly, on the level of skill in the art—a shift, perhaps, from a person of ordinary skill in the art (POSITA) to an AI of skill in the art (AISITA). But it includes no discussion of what, in the U.S. at least, is the main focus of obviousness: whether some combination of published prior art references fully discloses the claimed invention, as understood by a POSITA. Perhaps Abbott simply takes for granted that AIs, omniscient enough to win at "Jeopardy," will always find combinable prior art references and will always see some path to combining them to arrive at the claimed invention. In other words, as Abbott puts it, "everything is obvious." This is reductive, incompletely explored, and would spell the end of the patent system.

To summarize, Abbott's principle of AI legal neutrality would, at first, protect some human jobs via modified tax policy; make us all safer by incentivizing adoption of less-dangerous AI products and potentially holding transgressive AIs criminally liable; and encourage innovation by rewarding investment in AI inventors. But in the long run, the efficiency gains would become insurmountable and the jobs would be lost anyway, along with governments' tax revenue; humans would be held to an AI standard of care; and the patent system would implode. In the end, I agree with Abbott on one point: we need to reckon, probably urgently, with the challenges that AI will pose to our legal system. But a future based on his principle of AI legal neutrality is uncertain and potentially outright dystopian.


Christian E. Mammen is an IP litigation partner with Womble Bond Dickinson in Palo Alto, California. He has practiced in San Francisco and Silicon Valley for over 20 years, and has held visiting faculty positions at a number of universities, including Oxford University, the University of California, Berkeley Law School, and the University of California, Hastings College of the Law. He holds a J.D. from Cornell Law School, and a D.Phil. in legal philosophy from Oxford University.