Consider the issue of safety, given the number of cars on the road, said Steven Shladover, a research engineer at the University of California, Berkeley. "[Self-driving car] technology can't be less good than today's traffic safety, and in the United States, there are 3.3 million vehicle-hours per fatal accident and 64,400 vehicle-hours per injury," he explained. Developing fully autonomous vehicles that can achieve similar traffic-safety levels is "not a hard problem," he said; "that's a superhard problem."
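
Those quoted figures can be turned into rates with a few lines of arithmetic. The two vehicle-hour constants below are the ones Shladover cites; the 30 mph average speed used to convert hours into miles is an assumption for illustration, not something he stated:

```python
# Rates implied by the safety figures quoted above. The two
# HOURS_PER_* constants are from the article; the 30 mph average
# speed used to convert hours into miles is an assumption.

HOURS_PER_FATAL = 3.3e6    # vehicle-hours between fatal accidents
HOURS_PER_INJURY = 64_400  # vehicle-hours between injury accidents

fatal_rate = 1 / HOURS_PER_FATAL    # fatal accidents per vehicle-hour
injury_rate = 1 / HOURS_PER_INJURY  # injuries per vehicle-hour

print(f"fatal accidents per vehicle-hour:  {fatal_rate:.1e}")
print(f"injuries per vehicle-hour:         {injury_rate:.1e}")

# At an assumed 30 mph average, the fatality figure works out to
# roughly 100 million vehicle-miles between fatal accidents.
AVG_MPH = 30
miles_per_fatal = HOURS_PER_FATAL * AVG_MPH
print(f"miles between fatal accidents: {miles_per_fatal:.1e}")
```

That last number, on the order of 10^8 miles between fatal accidents, is the bar any replacement technology would have to clear.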

Shladover went down a list of technical challenges, ranging from the "potentially manageable" (handling glare from fallen snow or the erratic behavior of small children) to others that he considers "much harder," such as "system design errors, specification errors, coding bugs."

Threat assessment is another problem Shladover says won't go away even as laser detection systems get cheaper and processors get faster. "You have to discriminate between conspicuous but innocuous problems"—like metallized balloons in the road—"and negative obstacles"—like a pothole filled with water whose depth can't be gauged. These are just a few examples of situations that could prove to be literal roadblocks for self-driving cars.

In fact, Shladover argued that developing autonomous cars is "much harder than commercial aircraft automation." The number of vehicles you have to track is four orders of magnitude greater on the road than in the air. The accuracy with which you must measure speed: one order of magnitude. Reaction time in an emergency: two orders. Acceptable cost: two orders. Take everything together, Shladover contends, and the back-of-the-envelope calculation shows that full automation is some 10 orders of magnitude harder for cars than for commercial aircraft. "It will be many decades before we have totally automated vehicles."
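
Because orders of magnitude multiply, the exponents simply add. Tallying the four factors actually listed above gives nine combined orders, just shy of Shladover's round figure of 10, which suggests his list here is illustrative rather than exhaustive:

```python
# Orders of magnitude multiply, so the exponents add.
# The four difficulty factors listed in the article:
factors = {
    "vehicles to track":          4,
    "speed-measurement accuracy": 1,
    "emergency reaction time":    2,
    "acceptable cost":            2,
}

total = sum(factors.values())
print(f"combined difficulty: 10^{total} times harder")  # 10^9
```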

How we will know when we can have them is another problem he and other members of the panel kept returning to. It's not easy to prove that a vast, interacting system is safe before it even exists. Software for ever-smarter cars will be big and complicated, and it will have to interact with existing systems that weren't designed with robotic cars in mind.

"The technology to validate software systems this complex does not exist today," said Raj Rajkumar, a professor in the robotics department at Carnegie Mellon University. "If we were to switch to completely automated vehicles [everywhere], that would be one thing," he said. "But it's another when they have to coexist with many vehicles of varying capabilities."

Testing by trial and error won't get around the problem, either, added Michael Wagner, also of Carnegie Mellon. "Because software doesn't wear out, it will not be stressed by being put through more miles on the road." He proposed that researchers instead turn software validation into a series of scientific experiments designed not to confirm a hypothesis but to falsify it.
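
Wagner's falsification idea resembles what software engineers call property-based testing: rather than confirming hand-picked scenarios, a harness actively searches for a counterexample to a stated safety property. The sketch below is purely illustrative; the braking model, the deliberately naive "two-second rule" controller, and every name in it are invented for this example, not drawn from any panelist's work:

```python
import random

# Falsification-style testing: instead of confirming chosen test
# cases, randomly search for a scenario that violates a safety
# property. Everything here is an invented illustration.

def stopping_distance(speed_mps, reaction_s=1.0, decel_mps2=6.0):
    """Distance covered during driver reaction plus braking."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

def safe_following(gap_m, speed_mps):
    """Controller under test: deliberately naive two-second rule."""
    return gap_m > speed_mps * 2.0

def falsify(trials=10_000, seed=0):
    """Search for a gap the controller calls safe but that is
    shorter than the physical stopping distance."""
    rng = random.Random(seed)
    for _ in range(trials):
        speed = rng.uniform(0, 40)   # m/s
        gap = rng.uniform(0, 200)    # m
        if safe_following(gap, speed) and gap < stopping_distance(speed):
            return speed, gap        # counterexample found
    return None                      # hypothesis survived, for now

print(falsify())
```

With these toy parameters the search reliably finds counterexamples above roughly 12 m/s, where a two-second gap is shorter than the stopping distance; the point is that the experiment is designed to break the claim, not to confirm it.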

Even that may be a stretch, suggested panel moderator Ryan Lamm, an electrical engineer and a senior manager at the Southwest Research Institute, in San Antonio, Texas. "It's hard to test fully automated driving on a lab bench," he said.

How, asked a member of the audience, could the panel explain the very different take of some people in and around the auto industry, like Nissan's chairman, Carlos Ghosn, who has said we'll have self-driving cars by 2018, and Google's Sergey Brin, who has said it'll happen in 2017? The panel avoided making actionable accusations regarding sanity or probity. It must be that those honorable people define "self-driving" to mean "driving itself under many, but not all, conditions."

Getting things to work perfectly in one fell swoop would seem to be impossible, said another member of the audience. "The best way to understand automated vehicles," she argued, "is to field them."