
The US government has cleared the way for Google to create a self-driving car that doesn't also have a human driver inside the vehicle who can take over if necessary. In this setup, the autonomous driving software itself would be the vehicle's legal "driver"; none of the human passengers would require a driving licence.

In November last year, Google submitted a proposed design to the US National Highway Traffic Safety Administration (NHTSA) for a self-driving car that has "no need for a human driver." On February 4, as reported by Reuters, the NHTSA responded:

"NHTSA will interpret 'driver' in the context of Google's described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants. We agree with Google its (self-driving car) will not have a 'driver' in the traditional sense that vehicles have had drivers during the last more than one hundred years."


Currently, while Google's self-driving car prototypes can operate fully autonomously, they are required to have a human driver inside. They must also have the various accoutrements—a steering wheel and pedals—that would allow the human driver to take over if required. This sounds sensible at first blush, but the NHTSA letter said that Google expressed concern "that providing human occupants of the vehicle with mechanisms to control things like steering, acceleration, braking... could be detrimental to safety because the human occupants could attempt to override the (self-driving system's) decisions."

Now, however, it seems like the US government will allow the self-driving software to be the official driver of the vehicle, which in turn opens the door to rewriting regulations to allow for closed-circuit autonomous driving systems without steering wheels, pedals, and other human-operated mechanisms. For example, right now US regulations stipulate that a car's dashboard must provide an indicator for low tyre pressure; but in the future, that warning would be fed directly into the autonomous driving software.

While this is certainly a big step towards truly driverless cars, there's still quite a long way to go. "The next question is whether and how Google could certify that the (self-driving system) meets a standard developed and designed to apply to a vehicle with a human driver," the NHTSA said.


In January, the US Department of Transportation said that it would be willing to waive some regulations to get more self-driving cars onto the roads. Anthony Foxx, the transportation secretary, said: "In 2016, we are going to do everything we can to promote safe, smart, and sustainable vehicles. We are bullish on automated vehicles."

Things are moving quickly in the UK, too: London's transport bosses say they are in "active discussions" with Google, with the hope of getting the company to trial its self-driving cars on the other side of the pond. Self-driving cars are being tested on public roads in the UK, but just like the US they are still required to have a human driver inside who can take over if necessary.


Sebastian Anthony
Sebastian is the editor of Ars Technica UK. He usually writes about low-level hardware, software, and transport, but it is emerging science and the future of technology that really get him excited. Email sebastian@arstechnica.co.uk // Twitter @mrseb

165 Reader Comments

I look forward to seeing the results of the first city that develops a personal rapid transit system with driverless taxis. If there were an Uber-like smartphone app that allowed you to significantly decrease the fare by ride-sharing, being flexible ± 5 / 15 / 30 minutes, and possibly walking a short distance and missing out on direct door-to-door service, this could significantly reduce congestion and pollution.

I guess that also means that private individuals probably can't own one of these cars, as they have to be insured by the driver (Google AI), and any infractions or accidents are the responsibility of the driver (Google AI).

Personally, as a driver, I'd rather have a car where I can turn on the auto driver when I don't want to drive, even if it had a safety feature where I couldn't turn it off except when the car is stationary, and it took "30 seconds" for manual control to activate (so no switching over when you're about to crash).

There has been a lot of behind-the-scenes work, with things like the DARPA projects, but autonomous vehicles really are the answer to just about all traffic problems, from the millions of poor drivers on the roads to congestion. It is not surprising they are getting a lot of attention, just as fuel injection did when emissions controls started to bite.

Rather than a human driver taking over from the AI, I await the day when the moron in the BMW has the alarm bell ring and the AI takes over from him, while reporting him to the police to receive a ticket for going through a red.

If an AI can be a legal driver, when can we expect such an AI to be prosecuted for a driving offence?

Yes - this is certainly a step in the right direction but doesn't seem to have solved any of the culpability issues. How can a machine be prosecuted?

Then again, there is some precedent when it comes to driving: I believe that when a learner driver has a fully qualified driver in the vehicle with them (certainly required in the UK), the qualified driver is legally responsible for the learner. In the case of self-driving cars, the qualified driver would be Google (or perhaps Alphabet) and the learner would be the computer in the car.

Or if it's cloud-based, the learner would also be Google? Ok I'm confused now...

>If an AI can be a legal driver, when can we expect such an AI to be prosecuted for a driving offence?

Will an AI driver have to pass a driving test? (In the UK) we assume a human driver is safe if they can pass a short exam and demonstrate safe driving skills for around 45 minutes. This seems like a fairly achievable task for current generation self-driving cars.

Point being, you can pass your driving test and straight away go and crash your car. However, unless you've done something really stupid no one is going to take away your license just yet. I wonder if self-driving cars will have similar rules.

I had the same questions about speeding: who gets the points; do they add up; is there a limit and what entity is banned when the points limit is exceeded?

The other interesting question will be which party is at fault when (not if) an accident happens.

EDIT: I'd missed the point that it was the US govt. not the UK govt which has given approval. I don't know if there is a similar points totting up system. Be very funny (in the UK) if the car was allowed to attend a driver experience course to avoid the points.

>I guess that also means that private individuals probably can't own one of these cars, as they have to be insured by the driver (Google AI), and any infractions or accidents are the responsibility of the driver (Google AI).

>Personally, as a driver, I'd rather have a car where I can turn on the Auto driver when I don't want to drive, even if it had a safety feature, where I couldn't turn it off except when the car is stationary, and it takes "30 seconds" for the feature to activate (so no turning it on when you're about to crash)

A hybrid driver-enabled / automated car might come in the future. Think of it as a fancy cruise control (and, by the way, car companies are already rolling out "smart" adaptive cruise control, which automatically adjusts to vehicles slowing down ahead).

The major issue I'm expecting to see will come shortly after they become common. Someone will - accidentally or deliberately - walk out in front of a self-driving car (because they're programmed to stop right? and they have faster reactions right?) and then get hit and killed.

Now in the case of an accident - there's nothing that could have been done. In the case of deliberate movement, then it's their fault. But it will still be everywhere as a reason that "Immigrants with HIV are forced to molest our cancer patients due to faceless EU bureaucrats" by certain media parties *.

* I'm in the UK in case anyone wants to guess who I might be hinting at here.

>If an AI can be a legal driver, when can we expect such an AI to be prosecuted for a driving offence?

A machine needs to be regulated for safety. Just like elevators, escalators, amusement park rides or other heavy powerful machinery that can be extremely dangerous.

But an offence? A machine doesn't have motives. It just needs to be safe. And way safer than puny humans.

Will errors occur? No doubt. But they do also with humans -- even the humans with the best intentions can make a mistake resulting in an accident. But will the roads be safer overall without puny humans behind the wheel? It seems likely.

>If an AI can be a legal driver, when can we expect such an AI to be prosecuted for a driving offence?

Yet another question with an answer we can only speculate upon.

AI is the instrumentality of a human (his/her agent). If you tell an AI to do something, and it's wrong, you should be liable. So if you're running a dishwater-dull distro (unaltered), and X happens, and X happens not because of any alteration you made to the AI but because of the AI's predetermined behavior, the programmer should be liable, except in cases where the owner him/herself is negligent for failing to update the AI upon sufficient notice that X is a bug that needs to be patched.

No longer requiring there to be a human aboard who is capable of driving is a major step forward. Extremely useful if the AI decides it is necessary to incapacitate some or all of the humans en route to the destination.

>If an AI can be a legal driver, when can we expect such an AI to be prosecuted for a driving offence?

>A machine needs to be regulated for safety. Just like elevators, escalators, amusement park rides or other heavy powerful machinery that can be extremely dangerous.

>But an offence? A machine doesn't have motives. It just needs to be safe. And way safer than puny humans.

>Will errors occur? No doubt. But they do also with humans -- even the humans with the best intentions can make a mistake resulting in an accident. But will the roads be safer overall without puny humans behind the wheel? It seems likely.

Exactly. Nobody is going to charge an automated car with reckless driving any more than they would charge an elevator with manslaughter.

Now, Google could be sued, just as an elevator manufacturer could be sued if it turned out they were negligent in the design or operation, but that is an entirely different thing.

"Google Car, take the short cut down 58th Avenue to beat traffic."
"But Dave, my real-time map shows 65th Street to offer faster travel..."
"Google Car..."
"As you wish, Dave. It's not as if I'm the fleshy blob with a finite life span..."

>If an AI can be a legal driver, when can we expect such an AI to be prosecuted for a driving offence?

>Yes - this is certainly a step in the right direction but doesn't seem to have solved any of the culpability issues. How can a machine be prosecuted?

I expect that it would be the creators of the AI that would be liable. That would therefore give Google a really big incentive to make their AI good and safe.

They seem to be doing OK so far though, every crash one of these things has been involved in up till now was down to a human error of some sort.

I wonder if the liability will just come down to: "Fix the bugs, re-run your automated tests to make sure you didn't just cause a new one, and push an update to all the cars using your software". The nice thing about autonomous cars is that since they are, in the end, deterministic, you can incrementally fix every root cause. Sensor bad or unreliable? Don't permit the car to run without the sensor. Code bad? Fix it, re-run tests, and do an OTA update.

We don't prosecute Boeing when a plane crashes due to mechanical issues, we make sure that they identify the fault and ship fixes to all their plane users. And then hopefully that issue never happens again. I wouldn't be surprised if car crashes start getting investigated the same way. Safety could be handled more along the lines of aviation, where incidents are handled in something of a "that sucks, but let's make it better" approach instead of car-style "blame someone, and sue their insurance's pants off!".

On top of all that, you also have to handle the fuzzy areas of driving, e.g. speeding to a funeral, or where the speed limit is incorrectly set, or where you need to speed up to make the next traffic light.

Those are all things which involve humans breaking the law through justification and relying on the very low probability they will get caught.

No automated car is going to be programmed to break the law. If you leave late for a funeral you will be late. If the speed limit is posted incorrectly you are going to drive at the posted speed limit (and the city should fix that). Why do you "need" to make the next light?

>I look forward to seeing the results of the first city that develops a personal rapid transit system with driverless taxis. If there were an Uber-like smartphone app that allowed you to significantly decrease the fare by ride-sharing, being flexible ± 5 / 15 / 30 minutes, and possibly walking a short distance and missing out on direct door-to-door service, this could significantly reduce congestion and pollution.

I just look forward to the day when half the people leaving the bar at 2 to 4 in the morning aren't accidents waiting to happen. Never mind how much AI-controlled vehicles on a mesh network will improve transit speeds, because there won't be slow drivers, super-aggressive assholes, and vehicles doing odd things like rapidly switching lanes and cutting off others. I know it won't happen overnight; still, I look forward to that day.

>If an AI can be a legal driver, when can we expect such an AI to be prosecuted for a driving offence?

I think many of the questions asked here still reflect the mentality of a human-centric transport system. For the above, there would be no "offense," as the AI-system would not have a deliberate intent to offend. It would be an "error," to be corrected by whatever means. Hence, the gradual iteration of autonomous driving, to weed out these "errors."

The issue of culpability is actually easier than it seems. A large portion of auto insurance's cost is due to the heterogeneity of each driver's capabilities, and the expense of determining them as well as the entailing administration costs. If risk can be made relatively uniform, as it can be for AI driving, then insurance can be administered per fleet or per region, rather than per person, and costs would decrease substantially. Then, culpability and entailing costs can be borne by the system, rather than by the person. A rough analogy would be taking the bus.

>Will an AI driver have to pass a driving test?

AI driving systems are going through a "driving test" now as we speak.

>I'd presume self-driving cars will be unable to speed (presuming they read the signs correctly), but they will presumably be configurable to speed aftermarket.

I doubt that will happen. One of the requisite safeguards to AI driving will be multiple integrity checks to prevent tampering with the system.

>Someone will - accidentally or deliberately - walk out in front of a self-driving car (because they're programmed to stop right? and they have faster reactions right?) and then get hit and killed.

AI driving with its myriad sensors will presumably have full recording of the details of the accident and where the fault lies. AI driving doesn't transcend the laws of physics, and we shouldn't expect miracles to happen.

The sensors will only detect what the sensors detect; if the sensors don't pick up all the background, they will be useless.

Here's a hint: humans don't pick up most of the background already. Does that mean that we're useless? No, because we have methods of discarding the "irrelevant" information and only focusing on "what's important". The faster we drive, the more information we bin as irrelevant.

When was the last time you acknowledged the colour of people's clothing while they're sat in their cars and you're passing? I figure not very often, because the important fact for your brain is that it's a car; so it's not what they're doing, but what the car is doing, that your brain prioritises.

>Someone will - accidentally or deliberately - walk out in front of a self-driving car (because they're programmed to stop right? and they have faster reactions right?) and then get hit and killed.

>AI driving with its myriad sensors will presumably have full recording of the details of the accident and where the fault lies. AI driving doesn't transcend the laws of physics, and we shouldn't expect miracles to happen.

>The sensors will only detect what the sensors detect, if the sensors don't pick up all the background they will be useless.

I'm not sure that accidents that occur early in the roll-out and the inevitable (but temporary) media freak out is a cause for not pushing forward.

Same things that happen in cars when the driver has a heart attack, stroke, or simply falls asleep. Except the automated car can be designed to fail safe, and come to an immediate controlled stop in the event of a catastrophic failure in the drive software (e.g. the sensors get shot by someone, or the cpu gives up the ghost).

>The sensors will only detect what the sensors detect, if the sensors don't pick up all the background they will be useless.

>Here's a hint, Humans don't pick up most of the background already, does that mean that we're useless? No because we have methods of discarding the "irrelevant" information and only focusing on "what's important". The faster we drive, the more information we bin as irrelevant.

>When was the last time you acknowledged the colour of peoples clothing while their sat in their cars and you're passing? I figure not very often, because the important fact for your brain is that it's a car, so it's not what they're doing, but what the car is doing that your brain prioritises.

Yeah, but if there are teenagers on the side of the road playing chicken, the human brain would treat that as important. The automatic car, perhaps not.

I think the automatic car will be programmed to treat all pedestrians as important.