Thursday, 14 October 2010

Google's Fleet Of Self-Driving Cars

Hattie Tinfoil reports:

Google has been developing self-driving cars. Bonus points for anyone who saw that one coming - it wasn’t a natural progression from being good at search engines, the last time I checked. I amuse myself by imagining the reception to the news in the Altavista offices: “they’ve done WHAT, now?!”.

As a tech blogger, I say search moguls should spend more millions off-piste to make grist for my mill. But until such time as Yahoo! open a string of vodka bars and the Bing boys inaugurate their first school of modern dance, I’d better get musing on Google’s fleet of inorganic chauffeurs.

Obviously, obtaining the legislation to allow your Hyundai to mooch down the high street as it sees fit is going to be really, really hard. But the first question which popped into my head was “who wants it?”. Well, if you’re as fond of the Cabernet Sauvignon as I am, being able to drive somewhere and have the car drive you back is the first smile raised. But there are far more interesting benefits than that in store.

Fact is, computers don’t make mistakes. Once the software is perfected, a computer can drive faster than a human, closer to the vehicle in front and with far fewer accidents. This is huge. If you can drive closer to the car in front, congestion - on the open road, at least - becomes a lot less of a problem. Motorway gridlock should be a thing of the past, as phantom traffic jams (popular on the M25) disappear along with the human error that causes them. The aerodynamic benefit of sitting safely in another car’s slipstream could easily reduce motorway fuel consumption by 30% or more. At quieter periods, speed limits would become redundant and journey times would tumble.
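To see why tighter following matters, here’s a crude back-of-envelope sketch in Python. The headway figures are purely my own illustrative assumptions (nothing Google has published): a lane carries roughly one vehicle per headway interval, so throughput scales inversely with the gap you leave.

```python
# Rough lane-throughput arithmetic: flow ~= 3600 / headway_seconds
# (vehicles per hour per lane, ignoring vehicle length for simplicity).

def lane_capacity(headway_seconds: float) -> float:
    """Approximate vehicles per hour per lane for a given time headway."""
    return 3600.0 / headway_seconds

human = lane_capacity(2.0)      # the classic two-second rule
computer = lane_capacity(0.5)   # a hypothetical machine reaction gap

print(f"Human drivers:    {human:.0f} vehicles/hour")    # 1800
print(f"Computer drivers: {computer:.0f} vehicles/hour") # 7200
```

On those (made-up but plausible) numbers, halving the headway doubles the lane’s capacity, and a quarter-length headway quadruples it - which is the whole congestion argument in one division.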

The biggest gain, though, would be safety. Google are saying that road deaths could be cut by half, but once everybody has a self-driving car and there’s no such thing as driver error, road safety would be near-total. And the cost of retail items would fall as driverless lorries slashed labour and fuel bills.

There are two problems. The first is that teaching a car to drive, in traffic, is really, really difficult. But while we’ve been mildly impressed by Chryslers line-astern in an empty banked oval and Volkswagens trundling round an almost-deserted car park, Google have taken off the lab coat and kid gloves and done it for real, in the real world. Their fleet have driven 140,000 miles around the San Francisco Bay area, plus a jaunt onto Hollywood Blvd in the middle of Los Angeles.

They trusted the tech among the taxi drivers, the cyclists and those funny everyone-give-way-at-once junctions the Americans seem to like. If there’d been an accident, Google would have been in a lot of trouble. They must be very sure that this stuff works. In fact, they’ve surely gone public now to start work on that legislation I mentioned.

Ah yes, legislation. That leads me directly to the second problem - public acceptance. As I waxed on the subject of computers’ infallibility and safety gains, how many of you were spluttering and warming up your keyboards to tell me how wrong I was? How many of you are about to tell me how often your PC crashes and that you’re glad your car crashes less often? Well, you’re wrong. Computers don’t make mistakes. Software developers do, and their mistakes lead to bugs, and bugs lead to computers doing things we don’t want.

And, yes, if the computer which is driving your car has a bug in it, “something you don’t want it to do” might involve accelerating directly through a school playground. Or reversing off a cliff. Or releasing its handbrake overnight and rolling gently over your cat.

But anyone selling a self-driving car will know that any and all accidents which could possibly happen would be not just a moral disaster but a financial one, too, as sales disappear and the lawsuits pile up. It could be the end of any company, and they won’t risk that.

Safety-critical software already exists and people know how to make it bombproof (sometimes literally).

Software controls aeroplanes, life-support machines, nuclear reactors, early warning systems, missile guidance. If you have a modern luxury car, it probably has a drive-by-wire throttle, and a software bug already has the potential to accelerate you as long and as hard as the fuel lasts, after which you’d be in a state referred to by doctors as “dead”.

We already live in a world where we trust software with our lives; robbing ourselves of such a great good as the self-driving car would have no basis in reason.

5 comments:

Hattie, I take serious umbrage at your assertion that any company is capable of developing software that is both safe and capable.

Have you seen how relatively simple most of the existing "safety critical" systems are? Much of the drive-by-wire stuff you mention is probably classed at the lower level of "safety related". And then it's on the bottom of the complexity scale, labelled "Noddy".

Companies have a knack for paring down regulatory requirements to spend the bare minimum. In fact, even though I'm not usually one to don a silver hat, I suspect time may show this stunt by Google is a practical joke on a par with the assertion by the BBC back in 1957 that spaghetti grows on trees.

A remote controlled car? Definitely! It won't be hard to find the bandwidth to stream multiple angles of live video back to base, and steering, brake and accelerator controls t'other.

Not that I didn't enjoy this post. I think everyone shares your surprise and curiosity that the company who brought us such technical feats as wave* and Priority Inbox were working - apparently - quietly behind the scenes on this.

IMHO: I don't think Google announced it now to start work on legislation, I think they announced it now because they released their financial results yesterday and someone would have noticed it and started asking questions.

I normally live near James, but am out in Mountain View at Google right now (I work for Google in London). I cannot speak professionally about this project (which I have nothing to do with), but I can say the cars are real. What I can help clarify is why this sort of work does relate closely to Google's core strengths, although it might not be immediately obvious. But that's going to take a blog posting ...

Baldy, I accept that making safe software doesn't happen "just like that" and that many safety-critical systems in the consumer domain (including drive-by-wire throttles) are largely "safe-able" because they're so simple. My post is aimed at addressing the kneejerk reaction which the concept of self-driving cars will bring from (I imagine) a majority of people. But I stand by my point that complex software can be made safe - some of my other examples (weapons guidance, for instance) are far from trivial. Therefore, I believe that Google may or may not get it right but we should let them try - in a controlled environment first, of course.