
Waymo’s cars drive 10 million miles a day in a perilous virtual world

A simulation lets autonomous cars experience situations that are too dangerous to try in reality.

You could argue that Waymo, the self-driving subsidiary of Alphabet, has the safest autonomous cars around. It’s certainly covered the most miles. But in recent years, serious accidents involving early systems from Uber and Tesla have eroded public trust in the nascent technology. To win it back, putting in the miles on real roads just isn’t enough.

So today Waymo not only announced that its vehicles have clocked more than 10 million miles since 2009. It also revealed that its software now drives the same distance inside a sprawling simulated version of the real world every 24 hours—the equivalent of 25,000 cars driving 24/7. Waymo has covered more than 6 billion virtual miles in total.

This virtual test track is incredibly important to Waymo’s efforts to demonstrate that its cars are safe, says Dmitri Dolgov, the firm’s CTO. It lets engineers test the latest software updates on a wide variety of new scenarios, including situations that haven’t been seen on real roads. It also makes it possible to test scenarios that would be too risky to set up for real, like other vehicles driving recklessly at high speed.

“Let’s say you’re testing a scenario where there’s a jaywalker jumping out from a vehicle,” Dolgov says. “At some point it becomes dangerous to test it in the real world. This is where the simulator is incredibly powerful.”

Unlike human drivers, autonomous cars rely on training data rather than real knowledge of the world, so they can easily be confused by unfamiliar scenarios.

But it is not easy to test and prove machine-learning systems that are complex and can behave in ways that are hard to predict (see “The dark secret at the heart of AI”). Letting the cars gather vast amounts of usable training data from a virtual world helps train these systems.

“The question is whether simulation-based testing truly contains all the difficult corner cases that make driving challenging,” says Ramanarayan Vasudevan, an assistant professor at the University of Michigan who specializes in autonomous-vehicle simulation.

To explore as many of these rare cases as possible, the Waymo team uses an approach known as “fuzzing,” a term borrowed from computer security. Fuzzing involves running through the same simulation repeatedly while adding random variations each time, to see whether the perturbations cause crashes or other failures. Waymo has also developed software that ensures the vehicles don’t depart too much from comfortable behavior in the simulation—by braking too violently, for example.
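The idea behind fuzzing a driving scenario can be sketched in a few lines. The following is a toy illustration, not Waymo’s actual system: the scenario parameters, the `simulate()` stand-in, and the comfort threshold are all invented here to show how random perturbations of one base scenario can surface both collisions and uncomfortable braking.

```python
import random

# Hypothetical sketch of scenario fuzzing: re-run one base scenario
# (a jaywalker stepping out) with randomly perturbed parameters and
# flag any run that crashes or brakes harder than a comfort limit.
# All numbers and the simulator itself are invented for illustration.

MAX_COMFORT_DECEL = 3.0  # m/s^2; assumed comfort limit for braking

def simulate(pedestrian_speed, pedestrian_delay, car_speed):
    """Toy stand-in for a driving simulator: returns (collision, peak_decel)."""
    # Time until the pedestrian reaches the car's lane, vs. time to stop.
    time_to_lane = pedestrian_delay + 3.5 / pedestrian_speed
    stop_time = car_speed / 6.0  # assume 6 m/s^2 maximum braking
    collision = stop_time > time_to_lane
    # Harder scenarios force harder braking in this toy model.
    peak_decel = min(6.0, car_speed / max(time_to_lane, 0.1))
    return collision, peak_decel

def fuzz_scenario(trials=1000, seed=42):
    """Run the same scenario with random perturbations; collect failures."""
    rng = random.Random(seed)
    failures = []
    for i in range(trials):
        params = dict(
            pedestrian_speed=rng.uniform(0.5, 3.0),  # m/s
            pedestrian_delay=rng.uniform(0.0, 2.0),  # s before stepping out
            car_speed=rng.uniform(5.0, 20.0),        # m/s
        )
        collision, peak_decel = simulate(**params)
        if collision or peak_decel > MAX_COMFORT_DECEL:
            failures.append((i, params, collision, peak_decel))
    return failures

failures = fuzz_scenario()
print(f"{len(failures)} of 1000 fuzzed runs violated safety or comfort limits")
```

Each flagged run is a candidate corner case: engineers can replay it, inspect why the system braked hard or collided, and fold the scenario back into testing.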

Besides analyzing real and simulated driving data, Waymo tries to trip its cars up by engineering odd driving scenarios. At a test track at Castle Air Force Base, in central California, testers throw all sorts of stuff at the cars to confuse them: everything from people crossing the road dressed in wild Halloween costumes to objects falling from the backs of passing trucks. Its engineers have also tried cutting the power lines to the main control system to make sure the fallback will step in correctly.

Waymo is making progress. In October last year, it became the first company to remove safety drivers from some of its vehicles. Around 400 people in Phoenix, Arizona, have been using these truly autonomous robo-taxis for their daily drives.

However, Phoenix is a fairly straightforward environment for autonomous vehicles. Moving to less temperate and more chaotic places, like downtown Boston in a snowstorm, will be a huge step up for the technology.

“I’d say the Waymo deployment in Phoenix is more like Sputnik rather than full self-driving in Michigan or San Francisco, which I’d argue would be closer to an Apollo mission,” says Vasudevan.

The situation facing Waymo and other self-driving-car companies remains, in fact, a neat reminder of the big gap that still exists between real and artificial intelligence. Without many billions more miles of real and virtual testing, or some deeper level of intelligence, self-driving cars are always liable to trip up when they come across something unexpected. And firms like Waymo cannot afford that kind of uncertainty.



Will Knight is MIT Technology Review’s Senior Editor for Artificial Intelligence. He covers the latest advances in AI and related fields, including machine learning, automated driving, and robotics. Will joined MIT Technology Review in 2008 from the UK science weekly New Scientist magazine.
