I can't see everyday driving being automated, because there will always be a scenario that isn't covered, an odd bug that no one expected, or a mechanical failure that the computer can't compensate for in a sensible way.

Commercial flight systems on aircraft must be so thoroughly tested, yet we still have pilots to take over if the machine fails. How would that work with cars? If you have to be there paying close attention in case you need to take over, doesn't that negate the purpose of the automated system in the first place?

I would assert that the population of terrible, drunk, and drug-addled drivers on the road today is far more dangerous than the odd software error we will see. Especially since each failure of automation will result in improvements over time.

If you have to be there paying close attention in case you need to take over, doesn't that negate the purpose of the automated system in the first place?

If a car "autopilot" detects a problem, it can just slow down, pull off the road, and shut off. You do not have such an option in an aircraft.
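That "slow down, pull off the road, and shut off" behavior can be sketched as a tiny state machine. This is purely illustrative — the states, inputs, and function name are mine, not taken from any real autopilot:

```python
# Illustrative sketch of a fail-safe "pull over" maneuver: on a fault,
# the car degrades to a stop instead of abruptly handing control back.
from enum import Enum, auto

class Mode(Enum):
    CRUISING = auto()
    PULLING_OVER = auto()
    STOPPED = auto()

def next_mode(mode: Mode, fault_detected: bool,
              on_shoulder: bool, speed: float) -> Mode:
    """Decide the next driving mode (all names here are hypothetical)."""
    if mode is Mode.CRUISING:
        # Any detected fault triggers the pull-over maneuver.
        return Mode.PULLING_OVER if fault_detected else Mode.CRUISING
    if mode is Mode.PULLING_OVER:
        # Keep slowing and steering toward the shoulder until stopped.
        return Mode.STOPPED if on_shoulder and speed == 0.0 else Mode.PULLING_OVER
    return Mode.STOPPED
```

The point of structuring it this way is that there is no transition that returns control to an inattentive human mid-failure; every path ends in a safe stop.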

In a self-driving car, you only need to "be there paying close attention" if you have reason to believe the navigation system will NOT properly detect when it is operating in conditions beyond its capabilities.

Timeline: within 5 years, the first (partially) self-driving cars will become available. Within the next 5 years, it will be a novelty for early adopters (rich people). The 10 years after that will see increasing consumer adoption (upper middle class only), as well as the first commercial applications (trucks, maybe Greyhound), but in developed nations only (even if self-driving cars don't cost much more than manually driven cars, most people don't buy new cars very often). In the 10 years after that (i.e. 30 years from now), self-driving cars will start becoming the majority in developed nations, but the bulk of the world population will still not have seen one with their own eyes (look up a video of what streets look like in Bangalore, and tell me how a self-driving car would perform there). I don't dare predict when self-driving will be the majority in the world, because that requires predicting when the majority of the world will become "developed", which may very well be never.

Actually, it's not Honda and Toyota et al. — it's the insurance companies. As soon as this technology is approved and certified by the NHTSA here in the States and its equivalent elsewhere, you can be sure that insurance companies will offer steeply discounted rates to drivers who have fully automated cars, since in the event of an accident involving an automated vehicle they can sue the automobile manufacturer or software manufacturer rather than collecting from another insurer or individual.

When people see that they can get a good break on insurance if they buy an automated car, the mainstream manufacturers will be tripping over themselves to get their own automated vehicles out to the mass market, both to capitalize on that and to stay competitive.

The biggest threat to automated driving algorithms will be erratic drivers not using automated cars. As more automated vehicles take over the population of drivers, you'll see a reduction of problems on the road.

Most attempts at computer-driven cars so far basically just put cars on an electronic track and throw in a little bit of sensor data and pre-programmed logic.

Yes, 99.9% of driving is just dull "stay between the lines," but it's that 0.1% of unexpected shit that requires a brain. (Not that too many people these days have those.)

What happens when some crap falls off a truck and the sensors don't quite pick it up? What happens when someone runs out in the road in front of it? Can it tell the difference between a rectangular dark black flat piece of dangerous metal on the road and dark shadow from an overhead roadway sign? Would a computer even realize you might need to drive off the road in the event of an emergency or special circumstances? There are an almost infinite number of possibilities that you just can't pre-program in or plan for.

Throwing control back to the user probably wouldn't work well in practice, because once a car can drive itself sometimes, people will expect it to drive itself ALL the time.

And when something does go wrong, who will people sue? Right now it is the individual driver of the vehicle. I think car manufacturers would prefer to avoid the risk of being sued and leave responsibility where it is.

So what if someone creates a computer that is really intelligent enough to do that? Well, I'd suggest getting out of its way as it kills all humans. :P

I don't think the semi-inebriated drivers showing up on the roads here in Austin for ACL weekend have been tested exhaustively either.

One of the things about computers: if someone gets in front of an autopiloted car and randomly slams on the brakes, the autopilot just adjusts its speed up and down. Do that to a person and, after a bit, someone is going to get run off the road. Road rage isn't an issue with a computer, nor is alcohol, texting, marijuana, bath salts, Molly, LSD, PCP, Ativan, or whatever substances a person is on.
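That "adjusts speed up and down" behavior is essentially gap-keeping, which can be sketched as a simple proportional controller. The constants and function name here are illustrative assumptions, not from any production adaptive cruise system:

```python
# Toy gap-keeping sketch: steer own speed toward the lead car's speed,
# nudged by the error between the actual and desired following gap.
# All constants are illustrative, not from any real vehicle.

def adjust_speed(own_speed: float, lead_speed: float, gap_m: float,
                 desired_gap_m: float = 30.0, k: float = 0.2) -> float:
    """Return a new target speed in m/s; never negative."""
    # Too close -> drop below the lead's speed; too far -> close the gap.
    target = lead_speed + k * (gap_m - desired_gap_m)
    return max(0.0, target)
```

The computer applies this same correction every time, no matter how often the car in front brake-checks it — which is exactly the point.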

Don't forget what can be done with automated vehicles that can't be done now. Four-way intersections where cars can move through at speed, each slowed down or sped up slightly to get one group across before the next group arrives.
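That "slowed down or sped up slightly" idea amounts to scheduling crossing slots. Here's a deliberately oversimplified sketch (one car in the intersection at a time; real reservation-based intersection-management proposals are far more involved, and every name below is hypothetical):

```python
# Oversimplified slot-based intersection scheduling: each arriving car
# gets a crossing window, and a car is delayed only if the intersection
# is still occupied when it would otherwise arrive.

def schedule_crossings(arrivals, crossing_time=2.0):
    """arrivals: list of (car_id, earliest_arrival_s) tuples.
    Returns {car_id: scheduled_start_s}, serving cars in arrival order."""
    schedule = {}
    free_at = 0.0  # time at which the intersection next becomes free
    for car_id, earliest in sorted(arrivals, key=lambda a: a[1]):
        start = max(earliest, free_at)   # wait only if occupied
        schedule[car_id] = start
        free_at = start + crossing_time
    return schedule
```

In this toy model, a car arriving while the box is clear crosses with no delay at all — no stop sign, no traffic light.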

Pedestrians? Punch the crosswalk signal, and the vehicles WILL stop. No slowing down and honking, no beer bottles thrown at the person crossing.

Parking? Vehicles can park themselves a ways away.

Oil change time? The vehicle goes off and takes care of it in the middle of the night, and is back before work.

Moving and have two cars? Toss your crap into one car and tell it to go to your new residence, where someone can unload it. That way, only the large furniture pieces need the U-Haul truck.

There are just so many issues and bottlenecks that would be eased by removing the person from the driving equation.

Firstly, the *most* driverless miles have been put on cars by Google, driving at highway speeds around the Bay Area. They're out testing this in real-world conditions. When the car doesn't know what to do, it squawks at the driver and the person takes over. Understandably, that has its own issues. However, the more they drive these cars, the better they can make these systems. It's a feedback loop.

Secondly, the hypothetical situation you describe is exceedingly rare, and I suspect human drivers would fare just as badly at avoiding it. No matter how good your autonomous car is, if you're driving along and a 1,000 lb piece of cast iron pipe tumbles off the back of a flatbed, you're fucked. Then again, having a car with reaction times that exceed a human's might save your life. Your autonomous car won't be tailgating the semi to begin with, and it will be going the speed limit or lower depending on traffic conditions.
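The reaction-time point can be made concrete with a back-of-envelope stopping-distance calculation. The deceleration figure below is a rough dry-road assumption, not data from any actual vehicle:

```python
# Back-of-envelope stopping distance: reaction distance plus braking
# distance. 7 m/s^2 is a rough dry-road deceleration assumption.

def stopping_distance(speed_ms: float, reaction_s: float,
                      decel_ms2: float = 7.0) -> float:
    """Total distance in meters: d = v*t_react + v^2 / (2*a)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

# At highway speed (30 m/s, roughly 67 mph), cutting reaction time from
# a human-ish 1.5 s to 0.1 s saves over 40 m of stopping distance.
human = stopping_distance(30.0, 1.5)
computer = stopping_distance(30.0, 0.1)
```

The braking physics is identical for both drivers; the entire saving comes from the reaction term, which is exactly where a computer wins.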

Thirdly, I can only imagine that these cars will be recording all the telemetry and video they possibly can. To complete my scenario: when a piece of cast iron pipe falls off the truck and lands in the road, the lidar system fails to identify it properly, and the car runs into it, the insurance company is going to analyze the telemetry and the dashboard video and sue the truck driver for not properly securing the load. They might also go after the carmaker for a faulty system, but more likely the car company is just going to want the telemetry so it can improve the system.

As for knowing the difference between a shadow and a piece of black metal: these systems currently use lidar, so I don't see this as much of an issue.

Also, as for your scenario of someone running out in front of the car... Mercedes already has an automatic braking system.