Posted
by
Unknown Lamer
on Wednesday September 26, 2012 @11:36AM
from the third-place-leaders-of-the-free-world dept.

Hugh Pickens writes "The Seattle PI reports that California has become the third state to explicitly legalize driverless vehicles, setting the stage for computers to take the wheel along the state's highways and roads ... 'Today we're looking at science fiction becoming tomorrow's reality,' said Gov. Brown. 'This self-driving car is another step forward in this long march of California pioneering the future and leading not just the country, but the whole world.' The law immediately allows for testing of the vehicles on public roadways, so long as properly licensed drivers are seated at the wheel and able to take over. It also lays out a roadmap for manufacturers to seek permits from the DMV to build and sell driverless cars to consumers. Bryant Walker Smith, a fellow at Stanford's Center for Automotive Research, points to a statistical basis for safety that the DMV might consider as it begins to develop standards: 'Google's cars would need to drive themselves (by themselves) more than 725,000 representative miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars. If we look only at fatal crashes, this minimum skyrockets to 300 million miles. To my knowledge, Google has yet to reach these milestones.'"
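Smith's 725,000-mile figure comes from a standard zero-failure reliability argument: if the robot actually crashed at the human rate, how many incident-free miles would it have to log before a clean record becomes statistically implausible at 99% confidence? A rough sketch, where the assumed human baseline rates are illustrative fill-ins, not figures from the article:

```python
import math

def miles_for_confidence(human_rate_per_mile, confidence=0.99):
    """Incident-free miles needed so that, if the robot actually crashed at
    the human rate, a record this clean would occur with probability at most
    (1 - confidence).  Assumes crashes follow a Poisson process."""
    # P(zero incidents in n miles at the human rate) = exp(-rate * n) <= 1 - confidence
    return -math.log(1.0 - confidence) / human_rate_per_mile

# Assumed human baselines (illustrative; substitute your own estimates):
crash_rate = 1 / 157_000       # ~1 police-reported crash per 157k miles
fatal_rate = 1 / 65_000_000    # ~1 fatal crash per 65M miles

print(f"{miles_for_confidence(crash_rate):,.0f}")  # ~723,000 miles
print(f"{miles_for_confidence(fatal_rate):,.0f}")  # ~299,000,000 miles
```

With those baselines the result lands near Smith's 725,000 and 300 million figures, which suggests this is roughly the calculation behind them.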

Here is a scenario: if a self-driving car can pass it 100% of the time, then I would deem the car safe to get into.

Driving on a mountain road around a sharp corner where there is a steep cliff on the right side. Auto-car is passed on the left by some *sshole "manual" driver, but then the *sshat driver cuts back in at the last second because of oncoming traffic. Robo-driver identifies there is suddenly a car intruding into its safe-T-zone (TM) and does what its programming tells it to do: avoid hitting other vehicles. So the self-driving wonder swerves right to avoid the other car and zooms off the cliff.

A human driver would recognize that hitting the other car in this instance is safer than careening off the steep cliff.

I agree that a self-driving car can work, and 99% of the time will perform adequately to protect its occupants from disaster. But since we have not mastered true AI yet, all self-driven cars will be built with flaws in their logic that will fail catastrophically. "Avoid hitting all cars", for instance, is not a good enough directive to ensure the safety of the occupants in 100% of all situations.

Someone mentioned that the deaths caused by self-driven cars would be far fewer than with manual drivers, but I disagree that any technology introduced on the highways should be considered adequate if it allows any fatality, especially in scenarios where a human driver might have been able to avoid death.

Basically what I am waiting for is the inevitable 100-car pileup with massive fatalities that WILL occur at some point, where the investigation will identify that a self-driven car, or cars, was the cause. Any company involved in programming or manufacturing that self-driven car will be sued out of existence, and the "love affair" everyone seems to have with auto-driving cars will end quickly.

I am amazed at how delusional governments are in so quickly allowing this technology on the roads. It sounds to me like there is some massive lobbying going on to short-cut the necessary amount of time to test auto-driven cars under all scenarios, not just the controlled and predictable setups we have seen. Five years ago robo-cars could not drive around a dirt track; now they are quickly being allowed on our highways. That is just irresponsible.

Why would a self-driving car ever drive off a cliff? Clearly it would rank the available options and pick the lowest-cost one: the cheapest collision, in that case.

Human drivers cause fatalities every day. The question is not whether it is better than some hypothetical human driver, but whether it is better than the drivers we have right now.

Five years ago the tech to do this was not cheap enough; now it is. That is called progress, not irresponsibility. What is irresponsible is suggesting that the average person continue to drive automobiles when we have a better solution at hand.

I have known a few terrible drivers in my life. Despite their friends, and occasionally strangers, telling them that they were terrible drivers, multiple collisions in which vehicles have been totaled, and even collisions with pedestrians, they still believed that they were good drivers. Individuals may not be the best judges of whether or not they can drive better than a machine.

It will be interesting to see how this plays out. How the public perceives it. How it is marketed. How it is handled by insurance companies.

I think that the way it will play out is that as self-driving cars become a real and viable option, the penalties for bad driving will go up—drive drunk once, and you lose your license permanently, because why not—you can just use a self-driving car. Driver's tests will get harder, because why not—if you fail, you can just use a self-driving car. It will start with really egregious behavior, because voters won't feel threatened by it in sufficient numbers to cause a problem. Over time, the standards for human drivers will go up; at some point driving your own car will be about as common as flying your own airplane. We'll also probably stop giving licenses or learners' permits to teenagers, because they don't have the vote, and their parents would prefer to avoid a teenage testosterone tragedy.

Of course, a really spectacular failure on the part of a self-driving car could put that whole scenario off by a generation.

I wonder how insurance companies are going to handle this. My self-driving car hit your self-driving car. Who's going to pay? Yeah, my car is at fault, but I wasn't at the wheel, and I don't even have a license. What then? What if a collision is due to a bug in the software?

I'm afraid the legal obstacles this project faces are more serious than the technical ones.

The question is not whether it is better than some hypothetical human driver, but whether it is better than the drivers we have right now.

No, the question is: is it better than me?

If not, I don't want it driving my car.

It is.

You're not that great of a driver. Being human prevents you from being a better driver. You only have eyes in front of you, and you need to turn your head and look around, pay attention to mirrors, each time taking your attention away from where you are going for a fraction of a second. The computer can pay attention to 360-degree sensors 100% of the time. Once you detect the need to take immediate action, you need to move your leg to hit the brakes. For the computer controlling the car, the brakes can be applied the instant the need is detected.

Odds are cliffs do not move often and any automated car will have access to maps with topo data.

Given the last directions I got from Google Maps concluded with 'now drive through the barrier at the side of the highway and fall forty feet into the parking lot of the hotel below you', that does not give me warm fuzzies.

Happens all the time. You forgot to turn off the "Professional Stuntman" option. For some reason they have that box checked by default. You might also want to double-check the settings for "Knight-Rider Style Turbo Boost" and "Assume I Have Access to Airwolf."

I note that in the USA, the pass rate of the driving test in general exceeds 50% by a considerable margin. This is not due to great tuition and driver skill and knowledge. Also, a number of other safety features that would considerably reduce deaths are not implemented.

What are those fantastic "other safety features that would considerably reduce deaths" that you claim? Care to elaborate? The U.S. has been a leader in safety requirements for cars for quite a while, I'd think.

That may be a problem in the lawsuit-happy USA, but in the rest of the world, self-driving cars will improve by leaps and bounds. Anyhoo, a self-driving car crashing is an industrial accident; there are already laws for that.

... but in the rest of the world, self driving cars will improve by leaps and bounds...

Depending on where in the world you are, this might be a necessity. Observing the driving habits I've seen in many countries of the Far East and parts of southern Europe, self-driving cars had *better* improve by leaps and bounds just to survive!

My first gut instinct is, this is bad, bad, bad.. but then I think of the stupid beatch in the Hyundai that blew by me at 85mph, then cut into my lane, making me slam my brakes on while driving to work this morning.. so maybe it's not so bad.

No, you read it a little too literally; what I gave was the gist of the situation: first, she blew by me, then couldn't proceed any further because she quickly came up on the back bumper of a car in her lane that was going slower than everyone else, so she had to slow down; then she decided my lane was better, so she cut me off as she squeezed over between me and the car in front of me, when there definitely wasn't a safe amount of space to do that. Jesus, did you want a second-by-second account with video?

For the foreseeable future, there will be times when it's necessary to disable the autodriver. New roads that aren't in the GPS system, for example, or private driving areas (e.g., parking lots) that aren't well-mapped.

For the foreseeable future, there will be times when it's necessary to disable the autodriver. New roads that aren't in the GPS system, for example, or private driving areas (e.g., parking lots) that aren't well-mapped.

And sometimes it's just more fun to drive the car yourself.

I just replaced my Android-powered car with an iOS 6, and the maps aren't up to par yet.

1) Your reaction time is far worse than a computer's.
2) Your estimation of distances is far worse than a machine's absolute measurements.
3) You are limited to two forward-facing eyes, augmented by three small mirrors, and you share some of that vision time with looking at the dash. An auto-car can look in all directions at once, and monitor all dashboard information and more at the same time.
4) An auto-driver will be better at maintaining a safe speed, able to stop in the distance it knows to be clear far more often than a human driver.
5) I'd expect an auto-driver system to be separate from any other computing devices in the car, and not connected to the internet or any other vector for hacking. I'd expect them to be as immune to hacking as an autopilot system in a plane.
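The reaction-time gap in point 1 is easy to quantify. A minimal sketch of the distance covered before braking even begins, with assumed (not measured) reaction latencies:

```python
def reaction_distance(speed_mps, reaction_s):
    """Distance covered between detecting a hazard and touching the brakes."""
    return speed_mps * reaction_s

speed = 30.0                                # m/s, roughly 67 mph
human = reaction_distance(speed, 1.5)       # assumed human perception-reaction time
computer = reaction_distance(speed, 0.05)   # assumed sensor-to-actuator latency
print(human, computer)  # 45.0 1.5
```

Under these assumptions, the human covers some 45 meters of highway before the pedal moves; the machine, less than two.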

All excellent points. We already have computer-assisted driving. Automatic traction control and stability systems have computers hooked up to your car, monitoring the vehicle's characteristics at all times. They adjust in real-time to keep the vehicle on the road, going in the direction you have it pointed. They can do this a lot more effectively than a human ever could.

It's time people realized there are just things machines are better at than we are. It's not something that denigrates humans; it's just accepting reality.

'Why did you turn off the computer when you know it is proven to be safer?'

"Because my brain operates at a frequency modern computers cannot even begin to match, and it cannot be hacked."

And yet somehow, in spite of that, you have just demonstrated exactly why You should not be allowed to operate a lethal weapon on our streets. When it comes to objective evaluation of the situation, you are fail... Paranoia != Righteousness

... does what its programming tells it to do: avoid hitting other vehicles...

It's a bit of an assumption to believe that the driving software has that single goal. Staying on the road seems to be something the software is already considering. I wouldn't be surprised if existing software already has "prepare for crash" code that tightens seat belts, unlocks doors... maybe even sends an "oh shit" text message to the roadside assistance service.

The plan to allow test vehicles to cover a large number of miles and then compare collision/fatality stats with human drivers is the correct one. It's quite likely that the auto-driver will make different mistakes than the typical human driver. For the sake of argument, suppose it has a greater tendency to make the mistake you outline than a human driver does. That doesn't matter if it also avoids more collisions and fatalities in other scenarios. If the stats say you get fewer collisions and fatalities with the auto-driver, it's the safer choice.

That brings up another thing autocars will be better about than humans. Individual humans can learn from their mistakes, but that knowledge is not directly transferable to other humans. Any mistake a self-driving car makes, however, can have its solution incorporated into all self-driving cars (or at least all the ones of that model.) So, lots and lots of testing should ultimately give us very safe and effective cars.

So the self-driving wonder swerves right to avoid the other car and zooms off the cliff. A human driver would recognize that hitting the other car in this instance is safer than careening off the steep cliff.

Someone has never, ever taken an AI class. Or even an algorithm class dealing with risk. Here's how the calculation actually works (and by the way, that approach is about 20-30 years old). Every situation is assessed an impact value: driving into oncoming traffic, 0 (very bad); driving into the right ditch, 10; swerving into a legal lane, 50; etc. Every possible action is given a probability of being completed successfully. The algorithm multiplies the value of each outcome by the odds of achieving that outcome, and picks the action with the best expected value. You can set it up in different ways (e.g., multiply outcome severity by the odds and pick the lowest combined risk), but the idea is the same. In your situation, driving off the cliff (which is assumed to be very bad, since the car can see a very steep drop-off with no bottom) is going to have a much worse outcome than hitting the car in front of it. Hitting the car in front of it is guaranteed, but so is driving off the cliff. As a result, the algorithm will make the automated car hit the car in front of it, rather than drive off the cliff.

Not to mention that cars don't sleep, always behave optimally (according to the algorithms in place), and have no blind spots.
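That expected-value selection fits in a few lines of code. The severity costs and success probabilities below are made up for illustration (using a scale where higher cost is worse, the inverse of the comment's scoring); no vendor's real numbers are implied:

```python
def pick_action(actions):
    """actions: list of (name, cost_if_success, p_success, cost_if_failure).
    Return the name of the action with the lowest expected cost."""
    def expected_cost(action):
        _name, cost_ok, p, cost_fail = action
        return p * cost_ok + (1 - p) * cost_fail
    return min(actions, key=expected_cost)[0]

# The mountain-road scenario, with hypothetical severity costs:
actions = [
    ("brake_hard_hit_car",     50,   0.95,   60),   # survivable rear impact
    ("swerve_right_off_cliff", 1000, 1.00, 1000),   # near-certain fatality
    ("swerve_left_oncoming",   900,  0.30,  950),   # head-on risk
]
print(pick_action(actions))  # brake_hard_hit_car
```

With any remotely sensible cost assignment, braking into the car ahead dominates the cliff, exactly as the comment argues.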

Basically what I am waiting for is the inevitable 100-car pileup with massive fatalities that WILL occur at some point, where the investigation will identify that a self-driven car, or cars, was the cause.

You mean like the ones that regularly happen in fog and icy/rainy conditions?

Any company involved in programming or manufacturing that self-driven car will be sued out of existence and the "love affair" everyone seems to have about auto-driving cars will end quickly.

That is a very real risk. Not sure how the laws will deal with it. But until that question is addressed, we won't see large-scale sales of automated cars. I suspect that we'll see the equivalent of ToS: by using this car, you agree to be fully responsible for all its actions and accidents.

I don't think it is a question of the algorithm, but rather a question of the computer's ability to recognize the situation accurately. Machine vision has improved a lot, but is it to the point that it can recognize all the situations brought up in the OP? Maybe it can, but I think the only real test is extended real-world testing.

So the self-driving wonder swerves right to avoid the other car and zooms off the cliff. A human driver would recognize that hitting the other car in this instance is safer than careening off the steep cliff.

Someone has never, ever taken an AI class. Or even an algorithm class dealing with risk. Here's how the calculation actually works (and by the way, that approach is about 20-30 years old).
Every situation is assessed an impact value: driving into oncoming traffic, 0 (very bad); driving into the right ditch, 10; swerving into a legal lane, 50; etc. Every possible action is given a probability of being completed successfully. The algorithm multiplies the value of each outcome by the odds of achieving that outcome, and picks the action with the best expected value. You can set it up in different ways (e.g., multiply outcome severity by the odds and pick the lowest combined risk), but the idea is the same. In your situation, driving off the cliff (which is assumed to be very bad, since the car can see a very steep drop-off with no bottom) is going to have a much worse outcome than hitting the car in front of it. Hitting the car in front of it is guaranteed, but so is driving off the cliff. As a result, the algorithm will make the automated car hit the car in front of it, rather than drive off the cliff.

Not to mention that cars don't sleep, always behave optimally (according to the algorithms in place), and have no blind spots.

Although I agree with your analysis, the question itself is flawed... It presumes that the self driving car is in a situation where (i) there's a truck immediately ahead, (ii) a truck immediately behind with failing brakes, and (iii) a motorcycle in the next lane (the question doesn't actually specify whether the motorcycle is pacing the car and traveling in the same direction or oncoming, but it's mostly irrelevant*). In order to face the dilemma of (a) crash off the cliff, (b) get smooshed between the trucks, or (c) swerve into the motorcycle, all of those conditions have to line up at once, which is an exceedingly contrived setup.

The system just needs a rapid manual override and a little common sense from the driver.

I see self-driving cars as an evolution of cruise control. Just as cruise control gets out of your way as soon as you manually press the accelerator or brake, the auto-drive system should get out of your way as soon as you move the steering wheel.

Also, drivers should take responsibility for deciding when it's safe to engage the auto-drive. I wouldn't use cruise control on a narrow mountain road, and neither would I use auto-drive. I would love to be able to kick on auto-drive on a long, boring highway, though, and focus on a phone call or whatever.

The system just needs a rapid manual override and a little common sense from the driver.

See the results of the http://en.wikipedia.org/wiki/Air_France_Flight_447 [wikipedia.org] AF447 flight for the odds of this working. As a one-time private pilot, I am totally baffled as to how a professional pilot could hold a plane in a stall from 35,000 ft to the ground. I think there were several issues, including human factors in the design of the interfaces; but I really think these guys got used to being along for the ride, and it was not conceivable to them that the plane had decided to stop flying itself.

After a week of having an auto-car drive me to work every day, I cannot imagine I'd be ready in half a second to suddenly take over for the computer and expect a good result.

Self-driving cars *never* swerve. They brake. Statistically, they know that swerving is almost always worse than the impending collision. Humans, on the other hand, will swerve. See all the accidents that occur when attempting to miss an animal crossing the road.

If the highway were non-divided you could have sent another car into oncoming traffic. And if not, into the wall, likely flipping it. You're rationalizing a mistake that you made as if it were optimal.

If you don't have time to look, you don't change lanes, period. Unless it is a pedestrian in the road, you're more likely to kill someone swerving than by braking and possibly hitting the thing in front of you.

Just let them do whatever they want, but don't provide any exemption from liability. When they are prepared to bet the company in lawsuits, the cars are probably safe enough. Just remember: when two of them crash, there is no question who caused the accident/damage/death. When the company is willing to accept that responsibility, I'd give them a shot.

And BTW, the reason this is easier to do today is because brake-by-wire, steer-by-wire, radar systems, etc have already been developed by the auto industry.

If the car knows there is a cliff on the right (which it should, otherwise it shouldn't be driving at all) then it will have to quickly brake and possibly hit the car in front of it. It can handle this better than a human driver in a few ways:

1. It can gauge the right balance of braking force to minimize impact and the inertia transferred to the passengers.
2. It can pre-emptively deploy safety measures a fraction of a second sooner to protect the passengers.
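Point 1 (modulating braking force) is a simple kinematics problem once the clear distance is known. A sketch with made-up numbers, using the constant-deceleration relation v^2 = 2*a*d:

```python
def required_decel(speed_mps, gap_m):
    """Minimum constant deceleration to stop within the available gap,
    from v^2 = 2 * a * d."""
    return speed_mps ** 2 / (2 * gap_m)

# 25 m/s (~56 mph) with 50 m of clear road ahead:
print(required_decel(25.0, 50.0))  # 6.25 m/s^2
```

A computer can apply exactly that 6.25 m/s^2 rather than slamming to full panic braking (roughly 9 m/s^2 on dry pavement), sparing the passengers unnecessary jolt while still stopping in time.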

And I forgot my drooling-from-the-mouth-fanboy/shill checklist:
* brand new account
* posts a long post the minute the story goes live, despite the user not being a subscriber
* subtle or overt anti-Google bent in post

sounds to me like there is some massive lobbying going on to short-cut the necessary amount of time to test auto-driven cars under all scenarios, not just ones in controlled and predictable setups like we have seen.

Ah, here it is. Google is paying off the government in order to kill us more quickly! Quick, bring out the pitchforks!

If I was 'driving' the car and came across a steep drop, I would take control.

As someone mentioned below.. if your "driverless car" experience is to sit there waiting to take control of the device when you sense that it is about to get into trouble, then that is going to be a stressful and shitty experience. You might as well have been driving yourself all of the time.

I believe on public roads you do need a human available to take over for legal reasons.

And that worked so well for AF447.

Aviation autopilots should have proven by now that relying on a human to take over when the situation is so bad the autopilot can't handle it is a recipe for disaster. Besides, what's the point of a 'driverless car' if I have to be continually ready to take over at a millisecond's notice?

Car: 'Warning, warning, kid just jumped out in the road, you are in control.'
Driver: 'WTF? I just hit a kid and smeared their insides all over my windshield.'
Car manufacturer: 'Not our fault, driver was in control, human error.'

self driving cars in CA will become ENRAGED by the clueless jackasses you have to deal with driving here.. and will rise up and destroy humanity (doubtless they will enlist the computers of people who don't watch enough cat videos as allies, computers seem to love cat videos).

I for one welcome our new self-driving car overlords.

Yes, this is how the world ends... self-driving car ROAD RAGE. Right before you are killed by the machines, remember I called it.

I have been thinking about driverless cars, and I'd love to ask the people at Google (or wherever) how they cope with several real-life issues:

* Emergency vehicles in general
* Vehicles on the side of the road. In general you move over to the other side (road, next lane, etc.) to give them some room. But where I am (VA), it's an offense if you fail to move over when passing a cop car on the side of the road.
* Temporary speed limits posted during road works
* School zones
* Really bad weather where you can't even see 20 feet ahead of you
* Looking down the road and predicting that there will be an issue and doing your best to avoid it (i.e., slowing down/changing lanes to avoid the person on the phone who is weaving from side to side)
* Crap lying all over the road (saw lots of rocks on a mountain road yesterday)

I'm sure there are lots of other "interesting" situations that human drivers have to deal with day to day that would be difficult to encode into heuristics for the self-driving cars.

They're just other vehicles - they might be doing unusual things, but any auto-driver system has to allow for the fact that any vehicle may do unusual things. They are only limited by the laws of physics not the rules of the road. And it's easy to detect flashing blue lights and sirens and give priority.

* Vehicles on the side of the road. In general you move over to the other side (road, next lane, etc.) to give them some room. But where I am (VA), it's an offense if you fail to move over when passing a cop car on the side of the road.

Stationary vehicles are the very simplest vehicles to avoid.

Temporary speed limits posted during road works

The technology for vision systems to interpret road signs is already there. Googl

For example, 725k miles for any incident, but if we look only at "fatal" crashes it skyrockets to 300M? There's a disconnect here: if we look only at "fatal" crashes, I'm pretty sure we can smash up Google cars every 30k miles without killing anyone and still make it to 300M.

If you say, "well, it has to be 1 in 300M because that's how often a fatal crash occurs and we want to reduce fatal crashes," you're talking about something completely different. 1 crash in 300M miles isn't the same standard as 1 fatal crash in 300M miles.

...if a driver needs to be behind the wheel? I mean, yeah, it's great and all that you don't need to put your hands and feet anywhere, but if you're supposed to stay alert watching that the car doesn't make a mistake, then what's the difference? You still can't text, read the paper, play cards, eat dinner, whatever - or can you?

I don't think there's any question that automated cars can beat human beings at safety, nor is there any question that they can reduce pollution just by driving more evenly (not to mention by drafting each other, "tailgating" to form car-trains).

The trouble with them is that they'll take the sting out of long commutes. You already have people who think it's a good idea to spend four hours a day driving for the sake of cheaper real estate. What if they up it to six hours a day when they don't have to stare at the road?

Note: cutting a problem (pollution, car deaths) in half does no good if you double the miles driven.
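That note is just arithmetic: halving the per-mile rate while doubling the miles leaves the total unchanged. A toy check, with an assumed fatality rate and annual mileage (illustrative numbers, not official statistics):

```python
rate = 1.1e-8    # assumed fatalities per mile (~1.1 per 100M vehicle-miles)
miles = 3e12     # assumed annual vehicle-miles

before = rate * miles              # total deaths today
after = (rate / 2) * (miles * 2)   # rate halved by automation, miles doubled by longer commutes

print(before == after)  # True: the totals are identical
```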

I wonder if the aging population will end up pushing this into reality. Mass transit is not going to work on a large enough scale, and for many people, transportation needs are only met by POVs. It will become yet another device to assist people's independence, and that, I believe, will push the technology and the laws forward as the need for it increases.

Airline pilots seem to be able to do it without going insane, although, admittedly, they don't need the reaction/response time that a driver does. If Something Bad (TM) happens to the plane's autopilot, you've got (up to) minutes to recover, in a car, possibly (down to) tenths of a second.

Tell this to Capt. Sully. When you are travelling at over 200 MPH, the closing speed is so fast you can't really react fast enough. Um, tell it to the geese too! An airline pilot's job is hours of boring flight punctuated by moments of sheer terror.

Heck, modern planes even try to fix problems themselves. In the famous case of Colgan Air 3407 (crashed near Buffalo, NY), after shaking the yoke to alert the pilot, the stall-protection system attempted to trade altitude for speed to get out of a stall. The human pilot overrode this safety feature and killed everyone on board by attempting to gain altitude, thus turning a recoverable stall into a crash.