It's the stuff of science fiction made real: robot law.

If your robot hurts someone, are you liable? Legally speaking, is a robot like a pet? An employee? Should robots one day have a form of legal status that makes them responsible for their own actions?

The U.S. Chamber Institute for Legal Reform on Wednesday released a 93-page report, Torts of the Future II: Addressing the Liability and Regulatory Implications of Emerging Technologies. It dives deep into future liability trends involving virtual and augmented reality, wearable devices and 3D printing. But the section that struck me as most interesting—and most immediate—concerns robots and artificial intelligence.

“As autonomous robots and other products with AI make their way into the workplace, provide medical care in hospitals, operate on public highways, and serve us in our homes, hotels, and stores, they will be involved in incidents that result in personal injuries and other harms,” the report states.

What's less clear is how the legal system will assign fault when compensating people who are injured.

An early—and terrifying—case shows how difficult it may be.

Last year, I wrote about Wanda Holbrook, a Michigan woman who worked in an auto parts factory. She was killed by a robot that inexplicably left its section and came into hers, where it “hit and crushed [her] head between a hitch assembly.”

Why did it do that? No one seems to know.

On behalf of her estate, her husband sued the five companies that designed, built and monitored the robot. (He did not sue her employer, Ventra Ionia: Michigan law requires plaintiffs to prove the employer made an intentional effort to harm the worker, and no one suggests that happened here.)

More than a year later, the case in the U.S. District Court for the Western District of Michigan hasn't gotten far. In motions to dismiss, the defendants all (unsurprisingly) claim that they did nothing wrong, that Holbrook herself was negligent, and that any problem with the robot was someone else's fault.

There's another line of defense as well: “At the time of the alleged injuries, [auto parts maker] Flex-N-Gate did not have control over the subject products,” wrote Pepper Hamilton of counsel James VandeWyngearde.

That may increasingly be a point of contention. As the Chamber report notes, “Robots imbued with AI will have functionality far beyond that of automated equipment and machines. They will move and act autonomously, make decisions and learn from experience, and grow in capability beyond their initial programming.”

The report continues, “In the future, a key overriding issue with respect to robotics and AI will be whether a designer's or manufacturer's conduct can continue to be evaluated under product liability principles when a product is learning and changing after its sale.”

Manufacturing defects are subject to strict liability—but that's problematic if the product goes on to develop its own unique behavior.

“Whether a product has a manufacturing flaw is evaluated based on its condition at the time of sale. This would preclude a manufacturing defect claim when an AI product was manufactured to design specifications but later changed,” the report states.

But that doesn't mean robot makers would be off the hook. The key question instead could be negligence—was the product's action reasonably foreseeable?


The report also contemplates treating robots akin to employees. For example, if a drone delivering a pizza crashes into something, the pizza restaurant might be liable, just as it would be if a human driver got into an accident.

Or perhaps the law as it applies to pets would be a good fit. As in, you've got a general duty to stop your dog from biting anyone, but if the dog never before showed signs of being vicious, or was provoked, or you posted a “Beware of dog” sign, liability can be more nuanced.

Also, there are laws against animal cruelty—which means pets have some rights.

This approach “might appropriately balance owner responsibility, robot unpredictability, the level of risk of the particular robot based on its function, and the conduct of the person who was injured. It also opens the door to providing legal protections for AI entities, when warranted.”

This strikes me as vaguely disturbing. Maybe it's because I've seen The Terminator movies too many times. (“Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. Skynet fights back.”) Maybe it's Elon “AI-poses-a-fundamental-risk-to-the-existence-of-human-civilization” Musk getting to me.

But the Chamber of Commerce emerges as an unexpected champion of robot rights—though not in a Bicentennial Man/Battlestar Galactica, free-the-robots kind of way.

The Chamber takes a strictly pragmatic view. After all, corporations are “persons” under the law, and that suits the group just fine. So perhaps robots should be able to enter into contracts and even own intellectual property for creating software code, art, music, or books.

As the report notes, corporate owners “benefit financially from the corporation's intellectual property and other property rights. Like corporations, which possess legal rights, it seems likely that AI entities with property and other legal rights will also be subject to ownership, and that their owners will also be the ultimate financial beneficiaries.”