
Urmson’s recent “Perspectives on Self-Driving Cars” lecture at CMU was particularly interesting because he has had time to absorb the lessons from his long tenure at Google and translate those into his next moves at Aurora. He was also in a thoughtful space at his alma mater, surrounded by mentors, colleagues and students. And, it is early enough in his new start-up’s journey that he seemed truly in “perspective” rather than “pitch” mode.

Much of the carnage due to vehicle accidents is easy to measure. In 2015, in just the U.S., there were 35,092 people killed and 2.44 million injured in 6.3 million police-reported vehicle accidents. Urmson estimates, however, that the real accident rate is two to 10 times greater.

Over more than two million test miles during his Google tenure, Google’s SDCs were involved in about 25 accidents. Most were not severe enough to warrant a regular police report (they were reported to the California DMV). The accidents mostly looked like this: “Self-driving car does something reasonable. Comes to a stop. Human crashes into it.” Fender bender results.

While we talk a lot about fatalities or police-reported accidents, Urmson said, “there is a lot of property damage and loss that can be cleaned up relatively easily” with driverless technology.

2. Human intent is the fundamental challenge for driverless cars.

The choices made by driverless cars depend critically on understanding and matching the expectations of human drivers. This includes both humans in operational control of the cars themselves and human drivers of other cars. For Urmson, the difficulty of doing this is “the heart of the problem” going forward.

To illustrate the “human factors” challenge, Urmson dissected three high-profile accidents. (He cautioned that, in the case of the Uber and Tesla crashes, he had no inside information and was piecing together what probably happened based on public information.)

Google Car Crashes With Bus

[Image: Google SDC loses fender to bus. Credit: Santa Clara Transportation Authority]

In the only accident where Google’s SDC was partially at fault, Google’s car was partially blocking the lane of a bus behind it (due to sandbags in its own lane). The car had to decide whether to wait for the bus to pass or merge fully into the lane. The car predicted that the remaining space in the bus’s lane was too narrow and the bus driver would have to stop. The bus driver looked at the situation and thought “I can make it,” and didn’t stop. The car went. The bus did, too. Crunch.

The Uber SDC was in the leftmost of three lanes. The traffic in the two lanes to its right was stopped due to congestion. The Uber car’s lane was clear, so it continued to move at a good pace.

A human driver wanted to turn left across the three lanes. The turning car pulled out in front of the cars in the two stopped lanes. Its driver probably could not see across the blocked lanes to the Uber car’s lane and, given the stopped traffic, probably expected that anything driving down that lane would be moving at a slower speed. The car pulled into the Uber car’s lane to make the turn, and the result was a sideways parked car.

In the Tesla case, the driver had been using Autopilot for a long time, and he trusted it—despite Tesla saying “Don’t trust it.” Tesla’s user manuals told drivers to keep their hands on the wheel, eyes on the road, etc. The vehicle expected that the driver was paying attention and would act as the safety check. The driver thought that Autopilot worked well enough on its own. A big truck pulled in front of the car. Autopilot did not see it. The driver did not intervene. Fatal crash.

Tesla, to its credit, has made modifications to improve the car’s understanding about whether the driver is paying attention. To Urmson, however, the crash highlights the fundamental limitation of relying on human attentiveness as the safety mechanism against car inadequacies.

Urmson characterized “one of the big open debates” in the driverless car world as Tesla’s (and other automakers’) approach versus Google’s. The former is “let’s just keep on making incremental systems and, one day, we’ll turn around and have a self-driving car.” The latter is “No, no, these are two distinct problems. We need to apply different technologies.”

Urmson is still “fundamentally in the Google camp.” He believes there is a discrete step in the design space where you have to turn your back on human intervention and trust that the car will have no one to take control. The incremental approach, he argues, will lead developers toward a selection of technologies that limits their ability to bridge over to fully driverless capabilities.

4. Don’t let the “Trolley Car Problem” make the perfect into the enemy of the great.

The “trolley car problem” is a thought experiment that asks how driverless cars should handle no-win, life-threatening scenarios—such as when the only possible choices are between killing the car’s passenger or an innocent bystander. Some argue that driverless cars should not be allowed to make such decisions.

Urmson, on the other hand, described this as an interesting philosophical problem that should not be driving the question of whether to bring the technology to market. To let it do so would be “to let the perfect be the enemy of the great.”

Urmson offered a two-fold pragmatic approach to this ethical dilemma. First, cars should never get into such situations. “If you got there, you’ve screwed up.” Driverless cars should be conservative, safety-first drivers that can anticipate and avoid such situations. “If you’re paying attention, they don’t just surprise and pop out at you,” he said. Second, if the eventuality arose, a car’s response should be predetermined and explicit. Tell consumers what to expect and let them make the choice. For example, tell consumers that the car will prefer the safety of pedestrians and will put passengers at risk to protect pedestrians. Such an explicit choice is better than what occurs with human drivers, Urmson argues, who react instinctually because there is not enough time to make any judgment at all.

5. The “mad rush” is justified.

Urmson reminisced about the early days when he would talk to automakers and tier 1 suppliers about the Google program and he “literally got laughed at.” A lot has changed in the last five years, and many of those skeptics have since invested billions in competing approaches.

Urmson points to the interaction between automation, environmental standards, electric vehicles and ride sharing as the driving forces behind the rush toward driverless. Is it justified? He thinks so, and points to one simple equation to support his position.

3 Trillion VMT * $0.10 per mile = $300B per year

In 2016, vehicles in the U.S. traveled about 3.2 trillion miles. If you could bring technology to bear to reduce the cost and/or increase the quality of those miles and charge 10 cents per mile, that would add up to roughly $300 billion in annual revenues—just in the U.S.
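As a quick sanity check, the back-of-the-envelope math behind Urmson's equation can be sketched in a few lines (the 3.2 trillion-mile figure and the 10-cents-per-mile charge are from the article; the exact result rounds to the $300B headline number):

```python
# Back-of-the-envelope check of Urmson's TaaS revenue estimate.
annual_vmt = 3.2e12    # U.S. vehicle miles traveled in 2016, per the article
price_per_mile = 0.10  # hypothetical charge of 10 cents per mile

annual_revenue = annual_vmt * price_per_mile
print(f"${annual_revenue / 1e9:.0f}B per year")  # ≈ $320B, i.e. the ~$300B headline figure
```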

This equation, he points out, is driving the market infatuation with Transportation as a Service (TaaS) business models. The leading contenders in the emerging space, Uber, Lyft and Didi, have a combined market valuation of about $110 billion—roughly equal to the market value of GM, Ford and Chrysler. Urmson predicts that one of these clusters will see its market value double in the next four years. The race is to see who reaps this increased value.

6. Deployment will happen “relatively quickly.”

To the inevitable question of “when,” Urmson is very optimistic. He predicts that self-driving car services will be available in certain communities within the next five years.

You won’t get them everywhere. You’re certainly not going to get them in incredibly challenging weather or incredibly challenging cultural regions. But you’ll see neighborhoods and communities where you’ll be able to call a car, get in it, and it will take you where you want to go.

Then, over the next 20 years, Urmson believes we’ll see a large portion of the transportation infrastructure move over to automation.

Urmson concluded his presentation by calling it an exciting time for roboticists. “It’s a pretty damn good time to be alive. We’re seeing fundamental transformations to the structure of labor and the structure of transportation. To be a part of that and have a chance to be involved in it is exciting.”


I'm a futurist and advisor on strategy and innovation. I focus on innovation strategies where societal benefit is a first-order goal rather than a secondary consequence. I am also the author of four books on strategy and innovation including the award winning "Billion Dollar Lessons: What You Can Learn From The Most Inexcusable Business Failures of the Last 25 Years."