Time for Driverless Safety Standards

On March 18, in Tempe, Arizona, one of Uber’s self-driving cars struck and killed a pedestrian. In response, the company announced it was suspending all testing of autonomous vehicles on public roads pending an investigation. Toyota followed suit.

In 2016, over 37,000 people were killed in traffic accidents in the United States. To proponents of autonomous vehicles, then, it might seem strange that one fatal accident involving a self-driving car could impact their deployment. And, indeed, it is hard to extrapolate any larger lessons from this tragic event, given the limited amount of data we have on driverless vehicles. Nevertheless, real-world problems, coupled with available performance statistics, cast doubt on the claim that autonomous vehicles are ready for the spotlight. A premature deployment that exempts driverless cars from the safety standards expected of human drivers would only increase fatalities at substantial public cost.

The Tempe incident raises general questions about how regulatory authorities should deal with single, catastrophic events caused by emerging technologies. One approach, popular in the European Union, holds that regulatory approval should be strictly curtailed if “potentially dangerous effects deriving from a phenomenon, product, or process have been identified” and “scientific evaluation does not allow the risk to be determined with sufficient certainty.” This seemingly simple statement leaves plenty to the imagination, and gives authorities leeway to restrict products and technologies that may have a net beneficial impact but carry the potential for dangerous side effects.

A classic American example, cited by Case Western Reserve University legal scholar Jonathan Adler, concerns misoprostol, a drug used for the treatment of stomach ulcers. Despite ample evidence that the drug was effective in treating an affliction that kills as many as 20,000 people per year, the Food and Drug Administration (FDA) held off approval for years due to concerns about reproductive health, birth defects, and miscarriages.

But technologies like autonomous vehicles threaten to muddy the waters of sound regulatory decision-making. Regulators and transportation officials are not being asked to weigh proven performance against a potential unforeseen consequence. Rather, the performance of the driverless fleet itself — its ability to transport passengers safely from Point A to Point B — serves as the focal point of decision-making. This supposed ability is endlessly trumpeted, even though the only evidence we have points in the opposite direction.

As the Taxpayers Protection Alliance has previously noted, comparing Waymo’s reported “simulated contacts” (projected crashes, reported only in California) with National Highway Traffic Safety Administration (NHTSA) data on human crashes implies that autonomous vehicles are around five times more likely to get into an accident than human drivers. Even if we generously assume that an additional 10 million human crashes go unreported each year (National Safety Council data suggests lower figures), robot drivers are still at least twice as accident-prone as their human counterparts.

Moreover, this rough comparison is extremely generous to autonomous fleet operators. Waymo, the only AV company to publicly report simulated contact data, drives under extraordinarily safe conditions on boring, flat roads. GM subjects its vehicles to more difficult conditions on a routine basis and posts correspondingly higher disengagement rates. It stands to reason that GM Cruise vehicles would have an even higher accident rate if test drivers weren’t around to intervene. In other words, excluding the GM fleet from the comparison makes the AV fleet look safer than it actually is. The public has even less information on Uber, which has not been required to submit disengagement data, having only received its California permit last year.

In Arizona, where the fatal accident occurred, no companies operating autonomous fleets are required to submit safety data. But the paucity of data hasn’t stopped motorists from reporting instances of robot malfeasance on the not-so-mean streets of suburban Phoenix. During last year’s testing phase, driverless vehicles proved unable to make left turns and proceeded slower than the flow of traffic.

Now, as Arizona continues to permit autonomous vehicles on the road without test drivers, and California stands on the cusp of adopting that same policy, reevaluation is sorely needed. Together, the limited data and the demonstrated inability of robot drivers to perform the basic driving functions expected of a 16-year-old show that more private testing is needed. Before taxpayers sign over a blank check and governments bend over backward to accommodate driverless vehicles, operators should have to meet “simulated contact” thresholds in the elaborate test villages designed by companies such as Waymo. At the very least, an extended test-driving phase on public roads and more data reporting would provide more transparency for motorists and pedestrians.

Tough questions need to be asked now to find out if and when driverless cars will be ready for deployment.