Deadly crashes are raising many questions about self-driving vehicles sharing the roads with the public

It’s been a bad couple of weeks for autonomous automobiles. Two weeks ago — March 18th, to be exact — an Uber self-driving vehicle struck and killed a pedestrian walking her bicycle in Tempe, Arizona, a suburb of Phoenix. More recently — last Friday, in fact — the US National Transportation Safety Board (NTSB) revealed that the Tesla Model X involved in a fiery crash on March 23rd was being Autopiloted. This, on top of the Joshua Brown incident — the driver whose Model S Autopiloted him right into (and under) a semi almost two years ago — is creating a significant amount of consternation, not only amongst the companies developing what has, until now, been deemed the future of mobility, but also amongst the safety organizations tasked with monitoring their development. Among the questions being asked: How safe are self-driving cars right now? Will the self-driving cars of the future really be safer than current human drivers? And finally, what are the various safety agencies doing — or, more accurately, what should they be doing — about the problem?

The numbers trotted out to explain why the future’s computer-controlled cars will be an advancement are always daunting. For instance, according to the National Highway Traffic Safety Administration, some 37,461 people were killed either in or by automobiles in the United States in 2016. For context, that’s more than were killed by handguns but less than those killed by opioid overdoses.

More critically, when you factor in the number of miles Americans drive, those 37,461 fatalities work out to 1.18 deaths per 100 million miles. No number of fatalities should ever be considered “acceptable,” but one death per 100 million miles doesn’t sound quite as, shall we call it, dramatic as thirty-seven thousand.
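As a sanity check on that rate, here is the arithmetic spelled out. The fatality count comes from the article; the total vehicle-miles-traveled figure of roughly 3.17 trillion is an assumption on my part (it is approximately the federal estimate for 2016), not a number the article supplies.

```python
# Rough check of the "1.18 deaths per 100 million miles" figure.
# Assumption: 2016 US vehicle-miles traveled was about 3.174 trillion miles;
# only the fatality count (37,461) is given in the article.
fatalities = 37_461
miles_traveled = 3.174e12  # assumed total miles driven in 2016

rate_per_100m = fatalities / (miles_traveled / 1e8)
print(f"{rate_per_100m:.2f} deaths per 100 million miles")  # → 1.18
```

Any total-mileage figure in the neighborhood of 3.2 trillion yields the same rounded rate, so the quoted 1.18 holds up.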

More importantly, according to the New York Times, Waymo and Uber, which have been responsible for most of the testing of self-driving cars, only have about eight million miles of driverless, er, driving between them since Google started testing autonomous automobiles in 2009. One of anything, even a tragic death, is never statistically relevant, but suffice it to say that Uber, at least, has not yet proven that its computers will be more reliable than humans. As for Tesla, now with two deaths — actually three, since a Chinese driver was killed when his Model S Autopiloted right into a street sweeper — on its hands, it’s impossible to tell how many miles its Model Xs and Ss have driven themselves. Nonetheless, I think we can all agree that, so far, Autopilot has been a disappointment.
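To see why those eight million miles are so far short of statistical relevance, apply the human-driver fatality rate from above to the autonomous mileage the Times reports. Both inputs come from the article; the comparison itself is my own back-of-the-envelope illustration.

```python
# How many deaths would the human-driver rate predict over the
# combined Waymo/Uber autonomous mileage quoted in the article?
human_rate = 1.18 / 1e8   # deaths per mile (2016 figure from above)
autonomous_miles = 8e6    # ~8 million miles, per the New York Times

expected_deaths = human_rate * autonomous_miles
print(f"{expected_deaths:.3f} expected deaths")  # → 0.094
```

In other words, human drivers would be expected to cause roughly a tenth of a death over that distance. One actual fatality is therefore well above the human baseline, but with a sample this small, no firm conclusion can be drawn in either direction.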

The lessons learned

If the deaths of Elaine Herzberg (the 49-year-old killed by Uber’s Volvo XC90 as she walked her bicycle across the road), Walter Huang (the 38-year-old Apple engineer who died in the Model X) and Joshua Brown (the 40-year-old former Navy SEAL killed while Autopiloting) are to have any meaning, then it’s imperative that we glean as much knowledge from their passing as possible. And, while the three accidents had little in common — other than that all three vehicles were being steered by a computer — there is some commonality to the lessons learned.

The Uber crash

Two things are obvious from the aforementioned Tempe crash. The first is that, since the video clearly shows Ms. Herzberg was crossing from the driver’s left, Uber’s Volvo should have seen her. Never mind that it was dark out; recent Lidar sensors are more than capable of discerning objects — be they human or animal — at night. Waymo CEO John Krafcik’s statement that his cars would have been able to stop for Ms. Herzberg, as crass and as opportunistic as it may seem, is probably true.

Secondly, the in-car footage of the “safety driver” should once and for all put paid to the delusion that having a human at the wheel in case of emergency is a cure; at best, it is a palliative. The issue with “semi” autonomous automobiles has always been the same: expecting someone to sit behind the wheel completely without duties, yet always at the ready to instantly take control in case of computer malfunction, is hopelessly optimistic. No one should be blaming Rafaela Vasquez for her inability to react quickly enough to save Ms. Herzberg’s life; chances are if you were behind the wheel you would have been bored to tears as well.

The Tesla Model X crash

Mr. Huang’s Tesla appears to have navigated itself right into a highway median. While the NTSB has not filed an official report, Tesla has said that the Model X was being guided by its computer at the time.

Two things stand out from initial reporting. The first is that the highway barrier Mr. Huang hit had already been subjected to a collision within two weeks of his accident and had not been repaired. One possible cause of Autopilot’s failure in this case, then, is that while it may have recognized an intact barrier as something it needed to avoid, the shape of the damaged one might have been something its database could not recognize. In other words, it’s very possible that self-driving cars will never be safe without the ability to “learn” new obstacles by themselves, and that requires — cue Tesla’s Mr. Musk — the spectre of artificial intelligence controlling our cars.

The second troubling thing about the Tesla crash is that, according to San Francisco’s KGO-TV, Mr. Huang had already complained that his Model X had a propensity to steer itself towards that very barrier. Not barriers like the one he crashed into, but the exact one that ended up taking his life. That speaks to a very specific software glitch, and it is an indication of how tricky it is to get the software right. Worse yet, Tesla is saying — despite statements from Mr. Huang’s family to the contrary — that no complaint was ever made. Contrast that with Uber’s decision to pull all its self-driving cars off the road.

The conundrum

So where does any of this lead us? The quandary is this: We need, as is currently happening, to test self-driving cars in real-world conditions. If the Uber crash tells us anything, however, it’s that having “safety drivers” behind the wheel in case of emergencies will not prevent accidents. The alternative — as Waymo has long contended — is to simply let autonomous cars out on public streets with no one in the driver’s seat. With the recent spate of accidents, I am not sure how popular this idea will remain.

Perhaps more importantly, we are sending these self-driving cars out on the road without their being tested by any agency. Drivers around the world are required to pass certain basic tests to prove they are fit to drive on public roads. But, as far as I can see, self-driving cars are self-certified. Oh, many jurisdictions require that the companies involved alert officials whenever a human takes over the wheel in one of those aforementioned emergencies, but as for standardized testing to check whether these computerized cars are up to the hazards of traffic and human capriciousness, there seems to be none.