Liability in robotics: inside the legal debate

I was dumbfounded by the XPONENTIAL 2018 keynote speech by Professor Zeynep Tufekci of the University of North Carolina. To paraphrase, ‘In the future, we will no longer need two pilots, planes will have just one captain and a dog. The dog will be there to bite the human in case he touches anything.’

Walking the exhibit floor of drones and robots, I kept returning to the image of a chomping canine sinking its teeth into the arm of a pilot anxiously reaching to prevent a crash.

The authors suggest, “As robots and other products become more capable of making decisions on their own, courts may look to alternative models of liability.” According to the study, robots may already be covered under “agency law,” whereby employers would be held responsible for injuries caused by their machines, just as they are for their employees. Alternatively, courts could treat robots like pets for liability purposes: “In each of these areas, the person sued does not fully control the actions of the third party or animal that led to an injury, but, in some circumstances, is liable for the consequences.”

Fatal Accidents Involving Robot Arms

The Occupational Safety and Health Administration (OSHA) reports that in the past 30 years there have been only about 30 fatalities caused by robots, against the roughly 5,000 workplace deaths that occur annually. The first known robo-killing occurred in 1979, when the arm of a one-ton industrial robot, part of a five-story parts-retrieval system, fatally struck Robert Williams in the head. Williams’ family was awarded $10 million in a jury verdict against the manufacturer. Afterward, the plaintiff’s attorney declared, “The question, I guess, is, ‘Who serves who?’”

Three years ago, a “rogue” Fanuc robot crushed the skull of maintenance technician Wanda Holbrook. Her husband subsequently filed a lawsuit against five companies involved in designing, building, and installing the industrial machine. The case is still pending; the five defendants recently appeared in the U.S. District Court for the Western District of Michigan, uniformly claiming that the negligence lies with Holbrook, not their product.

The defendants’ case might, unfortunately, have merit, since current product liability statutes evaluate manufacturing defects based on a product’s condition at the time of sale. The US Chamber Institute appropriately questions whether present regulations remain relevant for deep learning products: “In the future, a key overriding issue with respect to robotics and AI will be whether a designer’s or manufacturer’s conduct can continue to be evaluated under product liability principles when a product is learning and changing after its sale.”

Who’s to Blame for Autonomous Vehicle Accidents?

The question of the accountability of intelligent machines could soon be decided by the courts, given the recent spate of autonomous driving accidents. Since 2016 there have been four deaths involving autonomous cars, leading many to question whether humans are too quick to trust computers with their lives. In fact, last March there were two tragic deaths just five days apart, each offering insights into how liability might be assigned.

The first incident occurred on March 18 in Tempe, Arizona, when an autonomous Uber car struck a pedestrian at full speed. Dashcam footage reveals that the “safety driver” was looking down in the seconds before impact. In 2017, Uber had made a strategic decision to reduce the number of safety drivers in each vehicle to a single operator.

Richard Wallace of the Center for Automotive Research explains, “For more hardcore testing, it’s common to see two or even three operators in a vehicle. It’s clearly cheaper to have just one person. It’s a fairly dull job most of the time. This crash may get companies to take a look at how these safety drivers are trained.” Safety drivers commonly experience fatigue and disengage from the road; they are often alone in the cars for up to eight hours at a time, staring blankly ahead in silence while the computer steers.

The second accident involved Tesla’s Autopilot system, which had already been associated with two prior fatalities. Walter Huang, an engineer at Apple, engaged semi-autonomous driving during his commute; minutes later his SUV plowed full speed into a highway median. Tesla was quick to respond with a statement: “The crash happened on a clear day with several hundred feet of visibility ahead, which means that the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so. The fundamental premise of both moral and legal liability is a broken promise, and there was none here. Tesla is extremely clear that Autopilot requires the driver to be alert and have hands on the wheel.”

The San Francisco Business Times disclosed in interviews with the victim’s family that “Huang had reported an issue with the car’s Autopilot mode to his dealership between seven and ten times.” Last month, National Transportation Safety Board (NTSB) Chairman Robert Sumwalt publicly shared his experience with Tesla CEO Elon Musk when discussing the Huang accident: “Best I remember, he hung up on us.” Sumwalt specifically questioned Tesla’s quick statement blaming Huang. After investigating previous Autopilot failures, Sumwalt noted in 2017 that “Tesla allowed the driver to use the system outside of the environment for which it was designed and the system gave far too much leeway to the driver to divert his attention.” The NTSB is still investigating both tragic accidents; the outcome is destined to shape tort legislation that could eventually view autonomous vehicles, in terms of liability, similar to human drivers.

Nestled on the show floor between spinning drone rotors and autonomous trucks, aviation insurance companies were selling coverage for operators planning future beyond-line-of-sight (BLOS) jobs. Many of these underwriters packaged custom policies on a per-mission basis, priced in distance and minutes rather than as annual premiums. SkyWatch, an Israeli insurance-tech startup, demonstrated a novel platform that ties liability to GPS coordinates and telemetry data. As its website promotes, “Don’t settle for ‘pay-when-you-fly’ when you can get ‘pay-how-you-fly.’” This type of technology could, in theory, provide investigators with the empirical data to accurately assess liability.
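To make the per-mission pricing model concrete, here is a minimal sketch of how a quote could be priced in distance and minutes and then adjusted against logged telemetry. SkyWatch’s actual rating algorithm is proprietary, so every rate, field name, and function below is a hypothetical illustration, not their API:

```python
from dataclasses import dataclass

@dataclass
class FlightPlan:
    distance_km: float    # planned route length
    duration_min: float   # planned flight time
    over_people: bool     # route passes over populated areas

# Hypothetical base rates: priced per kilometre and per minute flown,
# not as an annual premium.
RATE_PER_KM = 0.10           # dollars per planned kilometre
RATE_PER_MIN = 0.05          # dollars per planned minute
POPULATED_MULTIPLIER = 1.5   # surcharge for flying over people

def quote_premium(plan: FlightPlan) -> float:
    """Quote a per-mission premium from the flight plan alone."""
    base = plan.distance_km * RATE_PER_KM + plan.duration_min * RATE_PER_MIN
    if plan.over_people:
        base *= POPULATED_MULTIPLIER
    return round(base, 2)

def adjust_for_telemetry(premium: float, telemetry: list) -> float:
    """'Pay-how-you-fly': adjust the quote using logged telemetry.

    Each record is a GPS fix, e.g. {"lat": ..., "lon": ...,
    "speed_mps": ...}. As a stand-in for a real underwriter's risk
    model, we surcharge for the fraction of fixes above a speed
    threshold.
    """
    if not telemetry:
        return premium
    risky = sum(1 for fix in telemetry if fix["speed_mps"] > 20.0)
    risk_fraction = risky / len(telemetry)
    return round(premium * (1.0 + risk_fraction), 2)
```

The same telemetry log that prices the policy is what would give investigators an empirical record of how the aircraft was actually flown.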

No one in the 1980s could have predicted the amount of new legislation drafted to regulate Internet privacy, social media, and mobile phones. Legal questions relating to robots, autonomous cars, and drones are an indication that the cognitive industrial age is starting to affect the lives of everyday citizens. Part of the responsibility of government and industry is to protect the victims injured by technology. As the US Chamber Institute explains, “There is no one-size-fits-all approach to addressing liability and regulatory issues associated with emerging technology. The key is to strike the right balance between promoting innovation and entrepreneurship and addressing legitimate safety and privacy concerns.”

Assigning legal liability to machines will be explored further at the next RobotLab event on “The Politics Of Automation,” with New York Assemblyman Clyde Vanel and Democratic Presidential Candidate Andrew Yang on June 13th @ 6pm in NYC – RSVP Today!

About The Author

Oliver Mitchell

Oliver Mitchell is the founding partner at Autonomy Ventures, a venture firm focused on early stage investments in business and industrial automation technologies, including robotics, smart mobility, remote sensing and machine intelligence. Oliver’s portfolio has produced six exits in the past five years, including two IPOs. Previous transactions have included selling Holmes Protection to ADT/Tyco, Americash to American Express, and launching RobotGalaxy, a national EdTech brand. He is an active member of New York Angels and holds 14 patents. Oliver speaks often at international trade shows and writes syndicated articles on his Robot Rabbi blog, which reaches thousands of weekly readers.