Uber Technologies Inc suspended its pilot program for driverless cars on Saturday after a vehicle equipped with the nascent technology crashed on an Arizona roadway, the ride-hailing company and local police said.

Are they crashing so much, or is it just that they get national news when they do? And in this case, it was the human driver in the other vehicle who was responsible for the crash.

ETA: I know of three recent accidents where a self-driving car was involved: this one, which seems to be the fault of the human in the other car; the Tesla crash, which was the self-driving system's fault; and the bus-car accident in California, which would seem to be the bus driver's fault (the self-driving car had moved to the far right side of the lane to make a right turn, and the bus was attempting to pass in the same lane).

I'm curious: is there any outward marking that distinguishes an automated car from a non-automated one?

Unless there are going to be dedicated lanes for automated cars, then for the foreseeable future, while automated and human-controlled vehicles co-exist, the automation is going to have to be robust enough to handle the unpredictable situations that arise when human-controlled cars are around.

Most of the ones I've seen have a large sensor package on the roof. Google's says "Self Driving Car" on the side as well. They are programmed to attempt to avoid crashes by other drivers, but, as with human drivers, there is only so much that can be done to avoid a crash.

Also, such avoidance behavior can cause its own set of problems. IMS, the first versions of the cars had issues with four-way stop intersections because they kept waiting for the other cars to stop completely. Since many drivers creep forward, the self-driving car wouldn't go through the intersection.
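The deadlock described above can be sketched in a few lines. This is purely illustrative; the function names and the creep-speed threshold are made up, not from any real system:

```python
# Hypothetical sketch of the four-way-stop problem: a policy that waits
# for other cars to reach a *complete* stop never proceeds against
# drivers who "creep" forward at low speed. All thresholds are invented.

CREEP_SPEED = 0.5  # m/s; a rolling creep rather than a full stop (assumed)

def strict_policy_may_go(other_car_speeds):
    """Strict rule: go only once every other car has fully stopped."""
    return all(speed == 0.0 for speed in other_car_speeds)

def relaxed_policy_may_go(other_car_speeds):
    """Relaxed rule: treat slow creeping as effectively stopped."""
    return all(speed <= CREEP_SPEED for speed in other_car_speeds)

# Two drivers creeping forward at 0.3 m/s, as many human drivers do:
creeping = [0.3, 0.3]
print(strict_policy_may_go(creeping))   # False: waits forever
print(relaxed_policy_may_go(creeping))  # True: can take its turn
```

Under the strict rule the car is deadlocked as long as anyone creeps; the fix is not to abandon caution but to loosen what counts as "stopped".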

It seems like one thing human drivers do a lot better than computers is anticipating other drivers breaking the rules. The article lacks details, but it sounds as if the self-driving car had the right of way, so it proceeded and didn't anticipate that the other driver might not yield. Technically it was in the right, but a human driver might have been able to judge, based on the other driver's behavior, that he wasn't going to yield, and react accordingly.

Are they crashing so much, or is it just that they get national news when they do? And in this case, it was the human driver in the other vehicle who was responsible for the crash.

ETA: I know of three recent accidents where a self-driving car was involved: this one, which seems to be the fault of the human in the other car; the Tesla crash, which was the self-driving system's fault; and the bus-car accident in California, which would seem to be the bus driver's fault (the self-driving car had moved to the far right side of the lane to make a right turn, and the bus was attempting to pass in the same lane).

The Tesla crash was not, or not entirely, the Tesla's fault, since it is not a system in which the human driver is supposed to take their hands off the wheel, let alone stop paying attention to a semi pulling across the road.

The Google car's accident with the bus was, last I read, admitted by Google to be the fault of the self-driving car. The car had recently "learned" that it could drive in the far right of the lane to make a right turn when other traffic ahead had stopped. But there was a sandbag in the road toward the curb, and the car couldn't get by. When traffic started moving again, it tried to merge back into the lane, and it apparently "expected" the bus in that lane to give way. The Google car literally (at very slow speed) drove into the side of the bus.

For the Tesla, "fault" is too strong, because you are correct that it is not supposed to be used as a fully autonomous driving system. Tesla has said that a technical issue contributed to the accident.

Google said that its system was partially at fault because it assumed the bus would yield. You are correct that the Google car moved over to make a right-hand turn. But the Google car was being passed by the bus in the same lane. IOW, the bus was going to move itself into the same lane space that another vehicle already occupied. I would say that meant the bus was the vehicle with the majority of the responsibility to avoid a crash with the vehicle already in the lane.

...Technically it was in the right, but a human driver might have been able to judge, based on the other driver's behavior, that he wasn't going to yield, and react accordingly.

It's hard to say, though. Technically it was in the right, but is it possible it paused longer than a human driver would, implying that it was giving up its right of way, the same way human drivers seem to do at four-way stops all the time?

It doesn't sound like an accident that would have happened if both drivers were human.

For the Tesla, "fault" is too strong, because you are correct that it is not supposed to be used as a fully autonomous driving system. Tesla has said that a technical issue contributed to the accident.

Google said that its system was partially at fault because it assumed the bus would yield. You are correct that the Google car moved over to make a right-hand turn. But the Google car was being passed by the bus in the same lane. IOW, the bus was going to move itself into the same lane space that another vehicle already occupied. I would say that meant the bus was the vehicle with the majority of the responsibility to avoid a crash with the vehicle already in the lane.

Google said that it is the norm in that spot for right-turning cars to pull to the right and for cars proceeding forward to drive past them. The lane is two car widths across. The Google car had also come to a complete stop while pulled to the curb. I would be surprised if most drivers would think they were required to yield to the Google car in those circumstances. I'm not sure about the legality, but it certainly happens constantly. Many drivers would let the car back over, but not all, and I'm not sure they would be required to.

The legal aspect might depend on how far over the Google car was, and how much it had already nosed back into the center of the lane.

If a car has pulled to the curb and stopped, it may at times be regarded as an obstruction to be driven past or around, and not a car occupying a "lane space" in the normal sense. If the car then decides to pull back into traffic, it may be required to merge rather than expect other cars to yield.

Google said this as part of its report:

Quote:

This is the social norm because a turning vehicle often has to pause and wait for pedestrians; hugging the curb allows other drivers to continue on their way by passing on the left. It’s vital for us to develop advanced skills that respect not just the letter of the traffic code but the spirit of the road.

On February 14, our vehicle was driving autonomously and had pulled toward the right-hand curb to prepare for a right turn. It then detected sandbags near a storm drain blocking its path, so it needed to come to a stop. After waiting for some other vehicles to pass, our vehicle, still in autonomous mode, began angling back toward the center of the lane at around 2 mph – and made contact with the side of a passing bus traveling at 15 mph. Our car had detected the approaching bus, but predicted that it would yield to us because we were ahead of it.
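The failure mode in Google's report, predicting "the bus will yield because we are ahead of it" instead of checking whether the gap actually allows the merge, can be sketched roughly as follows. The function names, positions, and the 3-second merge-time assumption are all mine, purely for illustration (2 mph is about 0.9 m/s, 15 mph about 6.7 m/s):

```python
# Hypothetical contrast between two merge rules. Nothing here is from
# Google's actual software; the numbers and logic are illustrative only.

def assumes_yield(our_position_m, their_position_m):
    """Naive rule: whoever is ahead expects the vehicle behind to give way."""
    return our_position_m > their_position_m

def merge_is_safe(gap_m, closing_speed_ms, merge_time_s=3.0):
    """Gap check: the approaching vehicle must not close the gap
    before the merge completes (merge_time_s is an assumed constant)."""
    return gap_m > closing_speed_ms * merge_time_s

# The car is slightly ahead of the bus, but the bus is closing fast:
print(assumes_yield(12.0, 10.0))      # True: the naive rule says merge
print(merge_is_safe(2.0, 6.7 - 0.9))  # False: the gap closes almost at once
```

The point is that "being ahead" and "having room to merge" are different predicates, and only the second one prevents this kind of low-speed contact.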

It's hard to say, though. Technically it was in the right, but is it possible it paused longer than a human driver would, implying that it was giving up its right of way, the same way human drivers seem to do at four-way stops all the time?

If you are referring to the Uber accident, it sounds like the Uber car was driving down the road, not stopped at an intersection, and that the other driver turned in front of it. I can't find detailed descriptions, but the accident happened at speed, which wouldn't be the case if the Uber car had stopped at an intersection. Having a human driver in the other car probably wouldn't have made a difference.

It was at an intersection. The other car made a left turn into the Uber vehicle. If they hit it hard enough to put it on its side, it's almost as if they didn't see it at all. Unless I'm missing something, the other driver (non-Uber) was clearly at fault.

It seems like one thing human drivers do a lot better than computers is anticipating other drivers breaking the rules. The article lacks details, but it sounds as if the self-driving car had the right of way, so it proceeded and didn't anticipate that the other driver might not yield. Technically it was in the right, but a human driver might have been able to judge, based on the other driver's behavior, that he wasn't going to yield, and react accordingly.

If I'm in a situation where I'm depending on the other person to yield, I only go if I can see the driver looking at me. I'm not sure a self-driving car can check another driver's level of attentiveness.

It's very hard to tell from the vague reports, but it seemed to me that the Uber car was proceeding through an intersection that was not controlled in its direction. So the car that hit the Uber car was on a side street, and the Uber car had no reason to stop. If I'm right, it is the kind of accident some human drivers would avoid, based on seeing the other car approaching and correctly predicting that it was not going to stop (or seeing it begin its turn, or whatever other clues there might have been), but not all humans would predict that level of inattention on the part of the other driver.

Sometimes we all have to rely on other drivers to follow the rules; if we didn't, we would never get anywhere.

Are we really so obsessed with making real life like Star Trek meets The Jetsons that we need self-driving cars, like, tomorrow? Maybe if we weren't in such a rush, they wouldn't crash so much.

I think you can attribute much of the rush toward driverless cars to competition between the various companies involved as much as anything else. They all want to be the first to be able to declare, "Eureka, we've done it!"

Are they crashing so much, or is it just that they get national news when they do?

We don't know, because none of them have released any detailed information about how much they've been driving and (the next parts are extremely important) with what kind of operators, what kind of traffic, what weather, what road conditions, etc. Without that kind of detailed info, all comparisons are bogus.* For the most part, the comparisons that have been made are definitely skewed in favour of a low accident rate, because of conditions they will practically never see in real life: dry weather, extremely well-known roads, engineers who know in detail how the cars are supposed to work, and so on.

Also, what about near-miss data? We hear about accidents. I've taught a few people how to drive; if I waited for them to have accidents, it would be too late. What they taught me when I learned to drive is to notice when something goes even a little wrong. Ten near misses is very different from two. We have no idea what the real stats are, because every company is holding its cards way too close on an issue that affects everyone.

* It doesn't help that some of these groups have been using those bogus stats in PR. For example, claiming that an intelligent cruise control is safer than a human because it has fewer accidents on average, while failing to recognise that cruise control is only used in very specific situations. My gosh, even an order of magnitude fewer accidents than the general road average wouldn't make it a safe cruise control system, given that it only ever drives the easiest miles!
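The base-rate problem in that footnote is easy to show with toy numbers. Everything below is invented for illustration (not real accident data): a system used only on easy highway miles can post a lower overall accident rate than humans while being worse on exactly the miles it drives.

```python
# Hypothetical (accidents, miles driven) per road type. The cruise
# control system is never engaged in the city, so its overall rate
# reflects only the easiest driving conditions.
human = {"highway": (2, 1_000_000), "city": (20, 1_000_000)}
cruise = {"highway": (4, 1_000_000)}

def overall_rate(data):
    """Accidents per mile across all conditions in the data."""
    accidents = sum(a for a, m in data.values())
    miles = sum(m for a, m in data.values())
    return accidents / miles

def rate(acc_miles):
    """Accidents per mile for a single road type."""
    a, m = acc_miles
    return a / m

print(overall_rate(cruise) < overall_rate(human))        # True: looks "safer" overall
print(rate(cruise["highway"]) > rate(human["highway"]))  # True: worse like-for-like
```

This is why the per-condition breakdown the poster asks for matters: the overall averages compare different mixes of driving, not the same driving.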

Just to be clear, with apologies for double posting: I do think that under good conditions, and compared to humans under the same conditions, they will look OK, in many if not most cases as good as or better than humans. But there are certain conditions under which they cannot yet drive at all, let alone as well as a human. If we knew all that, I think more states would be willing to draw up rules for how and when these cars can be used, and we would have some place to start. Just be completely honest and open; most people want this. Instead, these companies have opted to keep pounding the media with what amounts to a campaign of promises of a better world. So whenever the slightest thing goes wrong, people reject those slogans and bogus stats. That is a reasonable reaction when all they have are articles written from press releases, full of ad copy rather than actual data.

Let's have some kind of standard for keeping records on self-driving and intelligent assisted driving, and agreements to publish that data. Then independent researchers can make more or less valid comparisons. I think there will be some big surprises; I don't think things are as far along as many claim. Yet I think people will see that it's a good idea to start making rules and laws that make it easier to safely develop and operate these vehicles.

I've always figured that the biggest challenge for driverless cars is the liability issue, not any technical challenge. Carmakers really don't want the liability for crashes. Even if the driverless car is not at fault 99.9% of the time, the liability for the other 0.1% is huge because of the carmaker's deep pockets.

A related issue is one of insurance. Will insurers cover self driving (full time or part time) cars? What will the rates be? Where are the accident statistics the insurers need to set the rates?

I don't know that it would be that different from existing liability laws. I believe most states hold that the parent of a 16- or 17-year-old is still ultimately financially responsible for damages, even though the child is deemed to have sufficient decision-making capability to drive. The owner of an animal is responsible for any damages the animal does, even though an animal has a great deal of autonomy and decision-making capability. The manufacturer would probably be held liable only if the car did not meet whatever ends up being the industry standard for self-driving cars, or if there was a defect in the car.

For example, drum brakes have long been accepted on the rear wheels of a vehicle, despite the fact that disc brakes usually provide better stopping power. A car manufacturer that uses drum brakes is generally not going to be held liable even if an accident could have been avoided had the car had disc brakes. A similar standard would probably hold for autonomous cars: if the car meets the standard for avoidance of objects approaching at oblique angles x to x′, then an accident involving an approach at angle x′ + 20° will probably not be considered the manufacturer's fault.
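That "certified envelope" idea can be made concrete with a trivial check. No such standard exists yet; the bounds and function name below are entirely hypothetical:

```python
# Hypothetical sketch: liability attaches only when a crash falls inside
# the tested envelope of approach angles (x to x'). The bounds are
# made-up numbers, not from any actual standard.

TESTED_MIN_DEG = -60.0  # assumed lower bound of certified coverage (x)
TESTED_MAX_DEG = 60.0   # assumed upper bound of certified coverage (x')

def within_certified_envelope(approach_angle_deg):
    """True if the object's approach angle is one the standard covers."""
    return TESTED_MIN_DEG <= approach_angle_deg <= TESTED_MAX_DEG

print(within_certified_envelope(45.0))  # True: inside the tested range
print(within_certified_envelope(80.0))  # False: 20 degrees past the envelope
```

In the poster's example, a crash from 80° would be outside the tested range, so under this scheme it would presumably not be the manufacturer's fault.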