Google Self-Driving Cars Get Smarter

Google admits it still has many problems to solve before self-driving cars become commonplace on our streets.


Google's quest to build and deploy cars that drive themselves has been going well. On Monday, Chris Urmson, director of Google's self-driving car project, said the company's autonomous vehicles have logged more than 700,000 miles.

"With every passing mile we're growing more optimistic that we're heading toward an achievable goal -- a vehicle that operates fully without human intervention," Urmson said in a blog post.

Since its last progress report on the project in August 2012, Google has improved the software that controls its cars to enable the detection of hundreds of distinct objects at once. Urmson said Google's technology can track pedestrians, road signs, cyclists, and a variety of other objects and entities without human liabilities like fatigue or distraction.

What looks like chaos on a city street is fairly predictable to a computer, said Urmson; Google has trained its software by driving thousands of miles on the streets of Mountain View, Calif., where the company is based. Google has built software models for a wide variety of scenarios, from cars stopping at red lights to cars ignoring stoplights, he said. Thousands of situations that would have stumped Google's cars two years ago -- such as the unexpected placement of orange construction cones in a road -- can now be navigated without human aid.

To teach its computers to drive, Google sends employees out to ride with its cars and document anomalous conditions. These scenarios are presented to engineers who then have to implement an appropriate response. Beyond creating algorithms to navigate through areas with road work, Google's cars now slow down when approaching large objects, like a truck parked on a road's shoulder.

Google has also trained its cars to handle railroad crossings, where last year there were 2,087 train-vehicle collisions, 251 fatalities, and 929 injuries, according to the Federal Railroad Administration. When its autonomous vehicles detect train tracks and crossing signs, Google's software waits to make sure the tracks are clear of other vehicles before driving across, so the car isn't caught waiting on the tracks behind another vehicle as a train approaches.

Cyclists have been accorded special status in Google's software. When an object identified as a cyclist uses a hand signal, Google's cars have been trained to slow down in anticipation of the cyclist's impending lane change. The car will continue to yield to cyclists even when it gets mixed signals from the cyclist, a Google video explains.
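The yielding behavior described above boils down to a simple decision rule: slow for a signaling cyclist, and default to yielding whenever the cyclist's intent is ambiguous. A toy sketch (not Google's actual code; the function name, inputs, and action labels are all invented for illustration) might look like this:

```python
def plan_around_cyclist(hand_signal_seen: bool, signals_conflict: bool) -> str:
    """Decide the car's action when a tracked object is classified as a cyclist.

    Hypothetical inputs: whether a hand signal was detected, and whether the
    cyclist's recent signals contradict each other (e.g., signaled left but
    drifted right).
    """
    if signals_conflict:
        # Mixed signals: keep yielding rather than guess the cyclist's intent.
        return "yield"
    if hand_signal_seen:
        # Anticipate a lane change and slow down early.
        return "slow_down"
    # No signal: pass with extra clearance rather than at normal speed.
    return "proceed_with_caution"
```

The key design point the article attributes to Google is the ambiguity branch: when the evidence conflicts, the conservative action wins unconditionally.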

Such deference, while advisable for safety, may make Google self-driving cars the object of scorn among human drivers, who tend to prize speed over caution. Imagine how you would react if, during your commute, every other driver stuck to the speed limit and yielded to everything. It would feel as if the road had been taken over by the elderly. What's more, human drivers may not feel the need to drive politely around robot cars.

Then there's the question of how to deal with rage against the machine, something Google already confronts as it buses employees to and from work. In a town like San Francisco, where the relationship between cars and cyclists is often antagonistic, cyclists might be delighted to bring autonomous cars to heel with hand gestures. In a similar vein, imagine how easy it would be for vandals to blind autonomous car sensors using stickers, spray paint, or some other opaque substance.

Google may be able to teach its cars how to behave around people, but judging by the way people abuse computers online, by the tech-hostile climate in San Francisco, and by the social issues facing wearers of Google Glass, the company will have a much harder time teaching people how to behave around its cars.

Anthony Levandowski, manager of the self-driving car project, said last year that Google was aiming to deploy its self-driving car technology in some form by 2018.


Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television.

Ugh... if they replace taxis in an Uber-style city context, in lieu of some other public transportation, fine. Why is it that these initiatives are sold to us under safety, like the 55 mph speed limit (which has been completely debunked by years of statistics)? What happens if something not programmed into the system occurs, like a major weather event, a carjacking, a terrorist event, or you forgot your wallet? Thank you, I still like making my own choices. Feel free to give me more information to make those choices, however.

Self-driving cars are a very interesting and amazing thing for everybody, and people are drawn to them, but how well they perform is what matters. In this type of car, they add new technology and features for better use, and safety is the most necessary thing.

If all cars become driverless, more safety and far fewer accidents should be expected. But it really depends on how reliable the system is -- even manually driven cars today have problems and defects from time to time that need retrofits. Furthermore, the transition period will not be easy. Imagine the mix of manually driven and driverless cars on the road -- will it create more problems? If an accident happens between a manually driven car and a driverless one, how should responsibility be decided?

These would be the same human brains that regularly make poor decisions and cause accidents? Even in a scenario where there is a tail percentage of situations in which the car does not make the right decision, the total number of accidents would still likely go down. However, addressing liability for those remaining accidents will be interesting.

I bike a lot, and I see how enraged a very few drivers get if they need to slow down for 15 or 30 seconds navigating alongside bicycles. Add to the mix a driverless car that's being extra careful around those cyclists, and we're talking purple rage.

They contend it is better in some ways, and I think they may well be right. Humans lack the ability to concentrate for any real length of time, and repetitive tasks ease us into mindlessness. Computers can hold speed and distance with far more accuracy than a human, never get bored or angry or drunk, and can recognise patterns. The patterns are actually there in the highway code.

My one question is this - Would the passengers be liable for any accidents? Would Google be liable? It seems like a mess. I have a good driving record and enjoy pretty cheap insurance rates ($26/month from Insurance Panda.. woohoo!). I also enjoy taking my car out for a spin and enjoying the 'freedom' of being able to drive anywhere. Will the driverless car allow all this? If not, I'll have to pass.

IMO.. Until they can ensure that there are no humans taking control of the wheel, insurance will be needed... at least uninsured motorist. Who knows? Maybe insurance as we know it will go away, replaced by any number of models that would more accurately represent the new risk distribution.

The idea of self-driving cars is all very great, but I still believe it will never be entirely practical. No matter how hard Google tries and no matter how many test drives it runs, it is simply not possible to prepare the car for all possible types of encounters that may take place on the road. Some of these need a working human brain to handle without making mistakes. I therefore believe a more practical model would be a self-driving car that can also be controlled by a human, so that in situations like these the human driver can make a judgment call and save the day.

We do have some red-light cameras in the busier areas, but since it is the older population with slow reflexes who do things like run through lights late, there's less emphasis put on the cameras than you might see if it was younger groups racing through them. Everyone jokes about being hit by some little old lady who can't see over the wheel, but a camera isn't going to stop that. In one area with a lot of pedestrian traffic, they have speed bumps that pop up out of the ground when people are crossing, flashing lights everywhere, and red-light cameras, but people still get hit crossing the road there. I'd love to see robot cars for anyone over 80.

"Behaving with a degree of propriety has been on the wane for the past several decades."


Amusing answer ;-) I think that trust will be the biggest factor initially, and having 1 Google Car per 10,000 on the roads makes its behavior an oddity and thus a target. Make it 1 in 10, and now it starts changing road behavior in bulk.

It's a bit like these cars that can travel in convoy on highways using computer-controlled adaptive cruise control. Hooray, so long as every car involved supports that technology. It's a bit trickier when you have to mix the old and the new in the same place.
