Posted by Soulskill on Saturday August 30, 2014 @01:01PM
from the not-looking-forward-to-denial-of-driving-attacks dept.

Paul Fernhout writes: Lee Gomes at MIT's Technology Review wrote an article on the current limits of Google self-driving car technology: "Would you buy a self-driving car that couldn't drive itself in 99 percent of the country? Or that knew nearly nothing about parking, couldn't be taken out in snow or heavy rain, and would drive straight over a gaping pothole? If your answer is yes, then check out the Google Self-Driving Car, model year 2014. Google often leaves the impression that, as a Google executive once wrote, the cars can 'drive anywhere a car can legally drive.' However, that's true only if intricate preparations have been made beforehand, with the car's exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It's vastly more effort than what's needed for Google Maps. ... Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages. ... Pedestrians are detected simply as moving, column-shaped blurs of pixels — meaning, Urmson agrees, that the car wouldn't be able to spot a police officer at the side of the road frantically waving for traffic to stop."

Paul continues, 'A deeper issue I wrote about in 2001 is whether such software and data will be FOSS or proprietary. As I wrote there: "We are about to see the emergence of companies licensing that publicly funded software and selling modified versions of such software as proprietary products. There will eventually be hundreds or thousands of paid automotive software engineers working on such software no matter how it is funded, because there will be great value in having such self-driving vehicles, given that America's horrendous urban planning policies have left the car as generally the most efficient means of transport in the suburbs. The question is, will the results of the work be open for inspection and contribution by the public? Essentially, will those engineers and their employers be "owners" of the software, or will they instead be "stewards" of a larger free and open community development process?"'

The technology is in its infancy. The way the media keeps hounding Google on all these issues seems immature. I don't see any other company attempting to do the same thing, and if there are any, they are definitely staying clear of the media spotlight.

I see Google making some great progress in this area, but give it time, people: they will work out the kinks. It won't be done in a year. Realistically, maybe 5-10 years from now we might see a car that's safe enough for whatever weather and situations we can throw at it.

If a new stop light appeared overnight, for example, the car wouldn't know to obey it.

Got it. So the cars cannot handle changes in traffic markers.

Google's cars can detect and respond to stop signs that aren't on its map, a feature that was introduced to deal with temporary signs used at construction sites.

So they cannot deal with new stop LIGHTS but they can deal with new stop SIGNS. WTF?

But in a complex situation like at an unmapped four-way stop the car might fall back to slow, extra cautious driving to avoid making a mistake.

And it would be "unmapped" for the first attempt. Right? Because the cars should be sending back data on road conditions and such to HQ. Right?

Maps have so far been prepared for only a few thousand miles of roadway, but achieving Google's vision will require maintaining a constantly updating map of the nation's millions of miles of roads and driveways.

And the car needs the map to drive, right?

Google's cars have safely driven more than 700,000 miles.

So they just drove over the same "few thousand miles of roadway" again and again and again and again? Until they got to 700,000 miles?

The car's sensors can't tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either.

As it should. Because you don't know if that piece of paper is covering a rock or a pothole or whatever.

For example, John Leonard, an MIT expert on autonomous driving, says he wonders about scenarios that may be beyond the capabilities of current sensors, such as making a left turn into a high-speed stream of oncoming traffic.

Isn't that one of the easier problems? The car waits until it detects a gap of X size where X is dependent upon the speed of oncoming vehicles and the distance it needs to cross PLUS a pre-set "safety margin".
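That gap-of-X-size rule is simple enough to sketch in a few lines. Everything below (the function names, the 4 m/s crossing speed, the 2-second margin) is an invented illustration of the idea, not anything from Google's actual system:

```python
def required_gap_m(oncoming_speed_mps: float,
                   crossing_distance_m: float,
                   crossing_speed_mps: float = 4.0,
                   safety_margin_s: float = 2.0) -> float:
    """Minimum gap (in meters) needed to turn safely.

    Time to clear the oncoming lane, plus a fixed safety margin,
    converted into distance at the oncoming traffic's speed.
    """
    time_to_cross_s = crossing_distance_m / crossing_speed_mps
    return oncoming_speed_mps * (time_to_cross_s + safety_margin_s)


def safe_to_turn(gap_m: float, oncoming_speed_mps: float,
                 crossing_distance_m: float) -> bool:
    """Turn only when the detected gap exceeds the required one."""
    return gap_m >= required_gap_m(oncoming_speed_mps, crossing_distance_m)


# With 20 m/s oncoming traffic and 7 m of lane to cross:
# required gap = 20 * (7/4 + 2) = 75 m
```

The hard part, of course, is not this arithmetic but reliably measuring the gap and the oncoming speeds from the sensors.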

Put another way, if autonomous cars started off working on 0% of roads and you want them to eventually work on 100% of roads, then somewhere in between you have to pass through 1%, 5%, 10%, 25%, 50%, 75%, and 90%. It's rather disingenuous to criticize them for not getting all the way to 100% in one fell swoop. I'm shopping for a new car right now, and the new semi-autonomous features like adaptive cruise control, lane change assist, and parking assist are really nice (I haven't gotten to play with lane departure warning or assist yet). By themselves, no, they don't make a 100% autonomous car. But each gets you a small fraction of the way there.

It will be decades before these vehicles can handle real life situations. You will need AI that can improvise as well as a human. Good luck with that.

I see that problem mostly being attacked from the opposite direction. With cars getting radar and proximity sensors, and being able to electronically communicate their intent with each other before actually moving, you reduce the need for the AI to improvise. If an autonomous car wants to pull in front of your car, the two car AIs will communicate with each other and work out a plan to make it happen before changing lanes. No improvisation required. Sure, you might get the stray deer hopping through traffic that requires a human to take control and improvise. But the vast majority of improvisation situations can be eliminated before they ever happen with better communication. That is, after all, the whole idea behind brake lights and turn signals: to allow you to communicate your intent to the drivers behind/beside you so they don't have to improvise in response to your sudden moves.
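A toy version of that negotiate-before-moving idea, with a made-up message format and agreement rule (real vehicle-to-vehicle protocols are far more involved than this sketch):

```python
from dataclasses import dataclass


@dataclass
class LaneChangeRequest:
    """Broadcast by a car that wants to change lanes."""
    requester_id: str
    target_lane: int
    needed_gap_m: float


@dataclass
class CarState:
    car_id: str
    lane: int
    gap_ahead_m: float  # free space in front of this car


def respond(request: LaneChangeRequest, me: CarState) -> bool:
    """Grant the request if we're unaffected or can leave enough room."""
    if me.lane != request.target_lane:
        return True  # not in the target lane; no objection
    return me.gap_ahead_m >= request.needed_gap_m


# The requester only moves once every nearby car agrees to the plan:
req = LaneChangeRequest("car_A", target_lane=2, needed_gap_m=10.0)
others = [CarState("car_B", lane=2, gap_ahead_m=15.0),
          CarState("car_C", lane=1, gap_ahead_m=3.0)]
plan_agreed = all(respond(req, c) for c in others)
```

The point of the sketch is just that the decision happens before the wheel ever turns, so nobody downstream has to improvise.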

The way the media keeps hounding Google on all these issues seems immature.

It is to counter Google's skewed data that make it look like autonomous cars are just around the corner. For example, why come out with a vehicle that has no steering wheel if it is not viable for another 5-10 years (by your estimate)? Do you ever see a Google press release mention any of these limitations? All you hear from Google is a rising tally of miles driven and the fact that there have been no accidents. The fact that the miles are driven on carefully selected, heavily scanned roads under optimal conditions never seems to make it into the reports. Driving down the same roads thousands of times is not progress.

I'm going to map my drive to work, by driving it a few dozen times. Then the car can take over. I don't care if it's no good in parking garages or my own driveway. I'll spend 3 minutes driving from my house, let the car take over, let the car do the boring freeway driving, and it can alert me when I'm 3 minutes from work. Then I'll take over and get into the parking garage and park my car.

Are we really whining because a brand new technology can't do EVERYTHING for us? Because it only takes care of MOST of the drudgery?

"It is to counter Google's skewed data that make it look like autonomous cars are just around the corner." Google has never said that. And this guy doesn't have all the data, nor does he know what's in development.

"Why come out with a vehicle that has no steering wheel if it is not viable for another 5-10 years (by your estimate)?" For the same reason world's fairs showed tech that wouldn't come out for another 5-10 years: it's fun, it's cool. It also shows they are thinking long term and not quarterly, and that the company is spending money on R&D. I consider all of that a good thing.

"All you hear from Google is a rising tally of miles driven and the fact that there have been no accidents." Which is pretty important.

"The fact that the miles are driven on carefully selected, heavily scanned roads under optimal conditions never seems to make it into the reports." That is the smart way to start, but they are moving past that.

"Driving down the same roads thousands of times is not progress." Of course it is. Same roads, different traffic. The same roads can have tens of thousands of changing variables at any given time. The team members are using them; one took a car from the Google campus to Tahoe on a trip.

Do you lay awake at night just trying to think of ways to hate cool new things?

It will be decades before these vehicles can handle real life situations. You will need AI that can improvise as well as a human. Good luck with that.

I'm sure that there will always be a few situations where a skilled human driver will make better decisions, and produce better outcomes, than standard automation.

I'm equally sure that there will be exponentially more situations where standard automation will make better decisions, and produce better outcomes, than average (or even well above-average) human drivers.

I'm sorry, but "there will always be situations where a human performs better than AI" sounds an awful lot like "I won't wear a seat belt because it might trap me in a burning car". It's not wrong, but it is foolish, and it's a poor decision.

I'm not saying you're part of the conspiracy. I do think a lot of the excitement for Google cars comes from the "privileged white driver" mindset in which there are no pedestrians, no bikes, no transit. Nothing but people like them in their single-occupancy vehicles.

That sort of thing is trivial for computers, as it's basically a simple physics question; what's not trivial is predicting behavior. The point is that a GoogleCar probably wouldn't need to predict behavior in the same sort of way.

People are acting like a GoogleCar needs to have the exact same senses and responses as a human driver, which is not true; it doesn't have the same limitations (field of view, ~200ms minimum reaction time, distractions, imperfect data from the car), so it can operate differently.

For instance, a person driving a car on an icy winter night has all sorts of unknowns to deal with: limited vision, glare from ice and oncoming traffic, not knowing how slippery the roads are, etc. An automated car will have much better vision, a better sense of how well the tires are gripping, and won't be affected by glare. Saying "how will the car know if there's snow in the forecast" is completely missing the point.

Give me a fscking break. You obviously don't actually ride around cities on a bike.

I spend a huge amount of time on a bike. I'd be happy if 75% of drivers paid attention. Simply put, human drivers DO NOT pay attention at the best of times and don't see cyclists a large percentage of the time.

One of the reasons I want to see only Google cars on the road is BECAUSE I'm a cyclist and figure my chances of staying alive will improve dramatically.

It's obvious you've never actually ridden a bicycle in a busy city. I have to deal with drivers making lethal mistakes every single day I commute on two wheels. Given the number of idiotic drivers yacking on their phones I'd take my chances with half a pound of silicon any day of the week.