
On Sunday night, an Uber self-driving car killed 49-year-old Elaine Herzberg in Tempe, Arizona. A key argument in Uber's defense has been that the road was so dark that even an attentive driver would not have spotted Herzberg in the seconds before the crash.

Herzberg "came from the shadows right into the roadway," Tempe police chief Sylvia Moir told the San Francisco Chronicle on Monday. "The driver said it was like a flash."

This is one of the first frames of the Uber dashcam video where Elaine Herzberg is visible. Notice that you can faintly see a white sign to the far right of the image. This sign is much more clearly visible in videos taken by YouTubers.

A frame from Brian Kaufman's video shows the spot where Elaine Herzberg was killed.

Brian Kaufman

A frame from Dana Black's video shows a spot slightly further back (Black's video becomes unfocused at the exact spot corresponding to the other two screenshots).

When police released footage from the Uber vehicle's onboard camera on Wednesday, it seemed to somewhat support this view. In the video, Herzberg's feet become visible only about 1.4 seconds before the final frame of the video. Prior to that point, she appears shrouded in shadow.

But then people in the Tempe area started making their own videos—videos that give a dramatically different impression of that section of roadway.

Mill Ave. at night.

In this nighttime video, posted to YouTube by Brian Kaufman on Wednesday, the scene of the crash can be seen around 0:33. Features at the sides of the road—including curbs, signs, and bushes—are clearly visible. No pedestrians walk into the road during the video, but it seems clear that Herzberg would have been visible much earlier if the Uber video had been taken with this camera.

Mill Ave. at night.

Another YouTuber, Dana Black, posted this video. His camera work isn't as good as Kaufman's—the video is blurry and he doesn't hold his camera steady. But his video supports the same basic conclusion. "It's not as dark as that video made it look," Black says in the video as he drives past the point in the road where Herzberg was hit (around 0:33). "My footage is from my Pixel XL and looks pretty similar to real life," he writes in the YouTube description.


To be fair, there are a few other cars on the road in Black's video, which might be adding some illumination. But Kaufman's car appears to be the only vehicle on the road, and visibility is still much better than in Uber's dashcam video.

Headlights are supposed to illuminate more than two seconds ahead of the car

It's not surprising that the road was actually more brightly lit than the Uber video makes out. Think about it: the Uber car was going 38 miles per hour (61 km/h), and people on pitch-black country roads drive faster than that all the time. That would be extremely reckless if—as the video implies—headlights can't illuminate the road two seconds ahead at that speed.

The video implies that the Uber car's headlights had a range under 110 feet (33 meters). For comparison, here's a diagram from the Insurance Institute for Highway Safety showing headlight ratings for the car in question, a Volvo XC90:

IIHS shows the XC90 with a range just under 250 feet (76 meters) with "low beams" on. The car's headlights are rated poorly by the IIHS compared with other cars on the market. Still, 250 feet is more than 4 seconds of illumination for a car driving 38 miles per hour. If the Uber car's headlights really didn't illuminate Herzberg until less than two seconds before the crash, there was something seriously wrong with them.
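To make the arithmetic explicit, here's a quick back-of-the-envelope check using the speeds and distances cited above:

```python
# Back-of-the-envelope check of the headlight-range arithmetic above.
# Assumes straight-line travel at constant speed; figures are from the article.

def seconds_of_illumination(range_ft: float, speed_mph: float) -> float:
    """Time until the car covers the distance its headlights reach."""
    feet_per_second = speed_mph * 5280 / 3600  # 38 mph is about 55.7 ft/s
    return range_ft / feet_per_second

# What the Uber video implies: Herzberg lit less than ~2 s before impact.
video_implied = seconds_of_illumination(110, 38)   # ~2.0 seconds
# What the IIHS low-beam rating implies for the XC90.
iihs_rated = seconds_of_illumination(250, 38)      # ~4.5 seconds
print(round(video_implied, 1), round(iihs_rated, 1))
```

At 38 mph, 110 feet of headlight range really does work out to barely two seconds of warning, while the IIHS-rated 250 feet gives about four and a half.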

The more likely explanation is that the Uber vehicle's dashcam was poorly configured for nighttime recording, and so the video gives a misleading impression of how bright the scene was and how much warning the driver had.

And even if it's true that the road was poorly lit, it's not clear that this would exonerate Uber. Uber's cars have lidar and radar sensors in addition to cameras, and those sensors don't require ambient light to function. So the vehicle should have spotted Herzberg even if the road was pitch black.

Moreover, interior dashcam footage shows the driver looking down for nearly five seconds just before the accident—so she likely would have missed Herzberg no matter how good the illumination on the road was.

Update: Originally this story featured a generic IIHS diagram on headlight distance. But after it went up my colleague Jonathan Gitlin pointed me to an XC90-specific diagram. So I replaced the diagram and updated the paragraphs on either side accordingly.

Thank you for writing this article. It was immediately obvious to me that the Uber dashcam video was misleading. Why? Because I've driven at night. It's really that simple. I know what it's like to drive in pitch black with low beams on. You can still safely drive 40 mph with the amount of illumination you get. I also knew that the Volvo XC90 would have decent headlights (although apparently not great) because it's a modern luxury car. Reading accounts of people saying "I would have hit that pedestrian too" had me bashing my head against the wall. Do these people really think nighttime driving is as bare-knuckle as the Uber dashcam video depicted? If so, then they shouldn't be driving at night.

820 Reader Comments

There wasn't a crosswalk there or nearby, therefore the car did have right of way.

In most states, and I'll assume AZ is one of them, it is explicitly illegal to take right-of-way if it causes a collision. You have a duty to avoid an accident regardless of whether you had right-of-way.

So yes, the vehicle clearly had right-of-way, but the question of whether the accident was avoidable will come up...and the video of the driver will not help matters.

Those exceptions don't quite mean what you are implying they do. For the most part, exceptions like those are intent based. Someone who asserts their perceived right-of-way and causes a collision would be found at fault. But it requires an action, or clear evidence that avoidance was ignored, that led to the accident.

Here, it is unlikely to matter that the car and the safety driver failed, because there is clear evidence that they weren't trying to assert their right of way; they just failed to avoid someone who didn't yield to theirs. Unless Uber is found to have broken the law or some other contractual obligations in other ways that contributed to the accident, it is extremely likely the pedestrian will remain at fault.

Now that doesn't mean that they couldn't be found liable in civil court if someone wanted to sue, but that's a different matter.

The issue here is that the LIDAR system didn't detect the object in the road AND that the human supervisor also didn't see and react in time. Since we've ruled out bad lighting conditions, the human supervisor obviously got lulled into a state of inattention because they didn't need to be driving.

However, that's not the case for the LIDAR. What I noticed though is that there's all sorts of radiation reflection off that building... could the pedestrian have been in a LIDAR blind spot due to too much other radiation in its operating range?

And if that's the case, what are Uber and other autonomous vehicle companies doing to handle similar situations? It seems like Uber is either falling back to something that also doesn't work, or is depending on visible light as processed by the dashcam.

This incident points out why the more expensive spread spectrum systems are extremely important; just like with human driving, you can't always depend on your main collision avoidance sensors being effective.

What it seems to me *should* have happened here is the car should have emitted a warning and signaled for manual takeover when it lost LIDAR resolution. But if the tech Uber is using is continually dropping out for short bursts, it's possible that they instead suppress that data unless it continues for more than a few milliseconds.

If the Uber AV cannot even handle such close-to-perfect conditions (dry air, wide open road, no other vehicles in sight, only one pedestrian as an obstacle, buildings sparse and far away) and got confused to the extent that it could not even recognize the pedestrian, then what the hell was it doing driving around at night on public roads in the first place??

I think the scrutiny on AVs is completely unfair. We tolerate an extreme amount of risk from normal cars. It is entirely plausible that AVs are already at least as safe as normal vehicles and with improvements will become far safer. The important facts are:

1.) The primary fault here is with the pedestrian. She had no business where she was.

2.) This accident would be very likely to occur with a normal vehicle.

3.) The fact that an AV could/should have been able to avoid this accident is cause for optimism. Trashing AVs and Uber because one didn't succeed in this instance, so early in the development of AVs, is ridiculous in my view.

People die in car accidents every day. The notion that a few deaths involving AVs is somehow unacceptable is ridiculous. Zero fatalities is not a reasonable expectation for AVs or any other transportation system.

This was a pathetically simple accident for the Uber AV to avoid. In fact this could have been a great PR situation for AV in general. Uber could have released video of how their AV saw someone breaking the law and avoiding an accident where a human couldn't (even though I think a human could). They could have shown a side by side of the LIDAR and visual cameras and shown how the LIDAR allowed the AV to detect and avoid the pedestrian long before they were even visible.

Instead Uber utterly failed at the most basic of AV tests. We should be happy about that? Statistically speaking, based on the relative track records, there is a less than 4% chance that an Uber AV is as safe as the average human. We can't say with absolute certainty that it is less safe, but a 96% chance is pretty damning. The average human is a pretty shitty driver, and there is a 96% chance that Uber is WORSE than that.

You of all people should know that statistics (even if programs allow you to run them) on a single data point do not have any kind of value or meaning. A single data point is a single data point, no matter how much statistics you wrongly throw at it.

It isn't a single data point. It is 2 million data points. That is, however, a small sample, which is why we can't say the Uber AV is definitely worse than a human driver, but there is a 96% probability that it is. So the stats, combined with Uber's dudebro culture, the failure to handle a simple collision-avoidance situation, and the fact that AVs have safety drivers (how many times did the safety driver prevent a collision and thus "boost" Uber's stats?) make me lean towards the Uber AV being worse than the average human driver.

You are ignoring that the fatality is a single data point. You have no way to estimate the probability distribution of those, so you are assuming that it is linear, i.e. one fatality every 2M miles driven which is clearly a dangerous and hardly scientific assumption to make.

There is no need to use shoddy statistics to prove that Uber is bad as that is completely clear from this accident.
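For what it's worth, the "less than 4%" figure in this exchange is consistent with a simple Poisson back-of-the-envelope calculation along these lines. This is a sketch only; the human fatality rate of roughly 1.16 deaths per 100 million vehicle miles is an assumed US figure, not something stated in the thread:

```python
import math

# If Uber's fleet were exactly as safe as the average human driver,
# how likely is at least one fatality in its first 2 million miles?
# Assumes a Poisson model and a US human fatality rate of about
# 1.16 deaths per 100 million vehicle miles (an assumption, not from the thread).

human_rate_per_mile = 1.16 / 100_000_000
uber_miles = 2_000_000
lam = human_rate_per_mile * uber_miles   # expected deaths, about 0.023
p_at_least_one = 1 - math.exp(-lam)      # probability of one or more deaths
print(f"{p_at_least_one:.1%}")           # a few percent
```

As the replies note, this only says that a fatality in the first 2 million miles would be unlikely if Uber matched the human rate; a single event is far too little data to pin down Uber's actual rate.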

If the copilot is fucking off, then the pilot should actually pay attention, pull over, slow down, do something?

That's what a Tesla does... it instructs the driver to place their hands on the wheel, and if they don't, it brings the vehicle to a stop. So Uber could build a system that makes sure the co-pilot is paying attention. But they didn't, because they are Uber.

Hands on the wheel does not mean that the driver is paying attention. There is essentially no sure way to test that a driver is paying attention other than letting them drive, which kinda defeats the purpose of "driver assistance" of the autopilot type and of a "safety driver" for AVs.

Tesla and the rest of the car manufacturers are exactly as sleazy as Uber when it comes to this, as everyone knows humans are incapable of doing what is being asked of them in these cases, and no amount of EULA or instructions in the manual is going to change that.

Tesla? If anything, they're the only ones doing it right -- hundreds of thousands of hours of humans driving while the computer watches and compares its decisions against what the human does*. Aside from a couple of suicidal idiots who decided to hand over control to the computer (eg while watching a movie), I'm not aware of Tesla doing any on-road driverless vehicle testing yet.

*(How much of this data gets relayed to the mother ship and incorporated into updates I don't know, but at least some of it is, per comments in Tesla owner forums.)

Self-driving vehicles are already many times safer than human drivers. Does the author really expect something to be 100% perfect? Absolutely nothing in life reaches that standard.

Not per mile driven... not after this little shit show. Fortunately, they will continue to improve, whereas we're pretty much at peak driving skill.


The thing that pisses me off about this story the most is that the chief was dumb enough (or weak enough under pressure from industry reps) to believe that shadows exonerated the car in the first place.

There could even be political pressure for this to go away, or at least manage the potential public backlash, as someone had to be OK with autonomous car testing in the first place.

There wasn't a crosswalk there or nearby, therefore the car did have right of way.

In most states, and I'll assume AZ is one of them, it is explicitly illegal to take right-of-way if it causes a collision. You have a duty to avoid an accident regardless of whether you had right-of-way.

So yes, the vehicle clearly had right-of-way, but the question of whether the accident was avoidable will come up...and the video of the driver will not help matters.

Those exceptions don't quite mean what you are implying they do. For the most part, exceptions like those are intent based. Someone who asserts their perceived right-of-way and causes a collision would be found at fault. But it requires an action, or clear evidence that avoidance was ignored, that led to the accident.

Here, it is unlikely to matter that the car and the safety driver failed, because there is clear evidence that they weren't trying to assert their right of way; they just failed to avoid someone who didn't yield to theirs. Unless Uber is found to have broken the law or some other contractual obligations in other ways that contributed to the accident, it is extremely likely the pedestrian will remain at fault.

Now that doesn't mean that they couldn't be found liable in civil court if someone wanted to sue, but that's a different matter.

I think the main issue will be the lack of attention by the safety driver, and any laws or agreements governing that. I swear I look up more at where we're going on the bus while playing Bayonetta on my Switch than the "safety driver" did. If the reality of the street is closer to the other videos shown as far as visibility goes, I'd have a hard time buying Uber's line here.

Which would definitely mean liability in civil court, but more importantly is whether the safety driver's constant use of a distracting device while safety driving was actually legal, given that it seems to have contributed directly to a death. That could mean criminal liability as well, though I am the furthest thing from a lawyer.

The issue here is that the LIDAR system didn't detect the object in the road AND that the human supervisor also didn't see and react in time. Since we've ruled out bad lighting conditions, the human supervisor obviously got lulled into a state of inattention because they didn't need to be driving.

However, that's not the case for the LIDAR. What I noticed though is that there's all sorts of radiation reflection off that building... could the pedestrian have been in a LIDAR blind spot due to too much other radiation in its operating range?

And if that's the case, what are Uber and other autonomous vehicle companies doing to handle similar situations? It seems like Uber is either falling back to something that also doesn't work, or is depending on visible light as processed by the dashcam.

This incident points out why the more expensive spread spectrum systems are extremely important; just like with human driving, you can't always depend on your main collision avoidance sensors being effective.

What it seems to me *should* have happened here is the car should have emitted a warning and signaled for manual takeover when it lost LIDAR resolution. But if the tech Uber is using is continually dropping out for short bursts, it's possible that they instead suppress that data unless it continues for more than a few milliseconds.

If the Uber AV cannot even handle such close-to-perfect conditions (dry air, wide open road, no other vehicles in sight, only one pedestrian as an obstacle, buildings sparse and far away) and got confused to the extent that it could not even recognize the pedestrian, then what the hell was it doing driving around at night on public roads in the first place??

Because Uber is dudebro corp, man. They are going to disrupt their way through pedestrians. Arizona is run by "the free market can never be wrong" GOP, so they said sure, if you want to do it we can't stand in the way of the free market with regulations or oversight. That is Obama communism nonsense.

Put them together and you have Uber running primitive grossly incapable vehicles on public roads with no oversight. Who could have foreseen that would get someone killed?

What I see is a bunch of people trying to Zapruder something all to pieces when the simple fact of the matter is, well, simple:

The driver should have seen her. The driver didn't. The car should have seen her, it appears not to have done so.

If you remove the self-driving tech from the equation -- which we can do because the tech is KNOWN to be early, incomplete, and totally untrustworthy -- what you have is someone operating a motor vehicle without paying attention to the road, made possible by a system that could have been, but was not, designed to force the driver to pay attention, even though that attention was absolutely (obviously) required for the safe operation of the vehicle.

They can Zapruder the film all to hell and back, and it won't change that simple fact.

It's actually more complex than that, because we have to ask ourselves: CAN you design a safety-driver system that actually works? If that driver had been looking at the road, would she have been able to determine the AV isn't reacting properly, decide to act on her own, put her hands on the steering wheel, put her foot on the brake, and act, all in the span of 2 seconds? Are you going to tell me that in this situation, that is humanly possible?

And, of course, if we arrive at that conclusion, how in the WORLD can we train drivers and set them up in an AV in order to actually respond to a situation like this??


When I first heard, it sounded like the pedestrian had walked out from the outer lane and could have been hidden by a post or something. But if they were walking across several lanes in this kind of road then it doesn't seem possible that the LIDAR could have reasonably missed it.

I think Uber is at fault here, and should have some serious penalties.

None of the self-driving companies (the ones with real test products on the road) are using LIDAR; even Musk won't use it, as it's too expensive. At best Uber is using the same Logitech webcams you have at home.

Where on earth do you people get this stuff from? Tesla is the *only* AV program I know of not to use lidar. Everyone else is, including Uber.

Probably from their reputation firm's guidebook, "How to create confusion and havoc in discussion forums."

No, it's not. Just look at the Uber video. You can clearly see a street light in the upper part of the frame (very low exposure, but still visible). So there were active street lights there, yet based on Uber's video no effect of the street light is visible on the ground, which is clearly impossible. There should have been a lot more light, and that Uber video is not representative at all.

You can't even claim that you could have been blinded before, because the overpass doesn't have illumination, so if anything you were driving from a less illuminated part to a more illuminated part.

Finally, the park being dark has nothing to do with conditions on the street that is clearly illuminated.

How many times have you yourself biked, walked, or driven down this road? It's a freeway access route for me and has been for 11 years.

I love that park. I also walk from this intersection home from the lightrail. There's also some curves in the road at this part. Looping Papago on my motorcycle is the only place I can find some neighborhood twisties.

I know this area very well, I live there.

Exactly zero times since I live in Europe. However the video evidence from both Uber and others mentioned in this article clearly contradict you, as do several other posters that claim to live there.

Someone is lying or confused here and I have a feeling it is you as nobody else is talking about a dark park when this accident happened on a (well) lit road.

Without seeing the video, how do we know she did not just move into the path of the vehicle at the same time she became visible?

Why don't you just watch the video instead of speculating what it might show?

Isn't it surprising that all these people suddenly cropping up to propose this or that funny hypothesis don't seem to have actually watched the videos?

What I see is a bunch of people trying to Zapruder something all to pieces when the simple fact of the matter is, well, simple:

The driver should have seen her. The driver didn't. The car should have seen her, it appears not to have done so.

If you remove the self-driving tech from the equation -- which we can do because the tech is KNOWN to be early, incomplete, and totally untrustworthy -- what you have is someone operating a motor vehicle without paying attention to the road, made possible by a system that could have been, but was not, designed to force the driver to pay attention, even though that attention was absolutely (obviously) required for the safe operation of the vehicle.

They can Zapruder the film all to hell and back, and it won't change that simple fact.

Offtopic, but congratulations on coining a new verb, to zapruder. (Seriously.)

But, just to be clear, you're saying there was another Uber vehicle behind the grassy knoll?

A few of these points are much cleaner than people want to make them, and only one is fuzzy.

1. The pedestrian shouldn't have been in the middle of the road with oncoming traffic. This is a highly unusual situation that human drivers would not be prepared for, would have been cursing about at best, and might have had trouble working around at worst.

Not sure where in the world you live, but in the city where I live, as in most of the world's urban areas, pedestrians crossing roads are not a highly unusual situation. All human drivers where I live (in one of the world's biggest cities) are prepared for pedestrians crossing roads.

This area isn't so urban actually. For the record I live there.

Interesting how you keep claiming to live in the area but your statements have been contradicted by others who also claim to live in the area.

You claim the road was as pitch black as the Uber video shows, which is vehemently contradicted by the others who also live in the area.

There's a concert venue on the southwest corner of Mill and Curry. There's a light rail train that runs north-south on Mill and heads west on Curry. You'd see the train in the median if this was in the well-lit part of this area; also, there are crosswalks so close to the lights that crossing the road elsewhere would be nonsense. Obviously no one would walk across the train tracks...

As there is no light rail in the dashcam footage, this is either on Mill north of Curry or on Curry east of Mill.

The videos you see are deceiving.

Again, I am not defending uber but rather the tempe police on their assessment. There is a 1200 acre park in this area where my friend in a local astronomy club goes with his crew to set up scopes for kids.

I don't know all the details, but I'm confused as to why the person decided that they have the right of way when crossing a street.

They didn't. People break the law all the time. I can't count the number of times in my life I had to take evasive action to prevent an accident despite having the right of way. If an AV is going to work without killing people on a weekly basis it will have to handle the reality that <gasp> not everyone follows the law all the time. That includes pedestrians, bicyclists, and other vehicle operators.

I hate that this is a body count argument, but if AV only killed a person a week that would be a 630,000% improvement over what we have now.

I'm really worried that this is going to turn into such a liability issues for all AV companies that pursuing the technology will be economically unfeasible.

Well, no, they wouldn't. There are nearly a million times as many non-AVs out there. Based on the low number of miles driven before the first fatality, there is a less than 4% chance that Uber is equal to the average human driver. We all know just how bad the average human driver is, and there is a 96% probability that Uber is worse.

If all human drivers were replaced tomorrow with Uber AV we would be seeing a skyrocketing vehicle fatality rate.

I was specifically referring to the 1 AV death per week statement, not Uber in specific.

But let's look at Uber self-driving cars in specific. As of Dec 2017 Uber had 2 million miles on their self-driving cars. (http://triblive.com/business/technology ... n-100-days) That would put its death/injury rate at 0.5 deaths/injuries per million miles. (1 death, no serious injuries)

For 2016 there were 40,000 deaths + 4.6 million serious injuries in 3,170,000 million miles for human-driven cars. That puts the human rate at 1.46 deaths/injuries per million miles.
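The commenter's arithmetic does check out, using the figures as stated (whether deaths and serious injuries should be lumped together is a separate question):

```python
# Reproducing the commenter's per-million-mile arithmetic.
# Figures are as stated in the comment (2016 US totals; Uber's ~2M AV miles).

uber_incidents, uber_miles_m = 1, 2.0         # 1 death over 2 million miles
uber_rate = uber_incidents / uber_miles_m     # 0.5 per million miles

human_incidents = 40_000 + 4_600_000          # deaths + serious injuries
human_miles_m = 3_170_000                     # 3.17 trillion miles, in millions
human_rate = human_incidents / human_miles_m  # about 1.46 per million miles
print(round(uber_rate, 2), round(human_rate, 2))
```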

Something about the Uber footage is really not right. If you look at the right-hand side of the road about 50 meters beyond the accident, there's a streetlight that is casting light on the ground. That should give you a good idea of how bright it should be underneath a streetlight given the ISO of that particular camera.

Now look at the left side of the road in the Uber image about 25 ft. beyond the accident. No pool of light of similar intensity. Yet in the other images it’s clear that there’s a street light on the left hand side of the street at that location. Where is the pool of light from that street light in the Uber video?

IIHS shows the XC90 with a range just under 250 feet (76 meters) with "low beams" on.

This is true for the right lane, but the range in the left lane is only about 110 ft.

At the moment Herzberg becomes visible in the Uber video, she's already in the Uber car's lane.

Without seeing the video, how do we know she did not just move into the path of the vehicle at the same time she became visible?

Why don't you just watch the video instead of speculating what it might show?

I found the video. Not sure why it's not embedded in the article and just linked to instead (oh ad revenue, right). Anyway, she is moving pretty quickly but not paying attention to the oncoming car at all. At 00:02 her feet are visible but that would be in the plane of the road and not detected as a hazard. At that point she is on the left edge of the lane just barely in the lane. By 00:03 she is fully in the lane and also fully visible. This however does not give enough time to avoid the collision. My point is that both the car, and the pedestrian were moving while also in a low visibility scenario. Should the car have better visibility at night? Yes. However I can see how it might not have detected this. I can also see why it might have detected it but classified the object as not an obstruction initially.

This just shows that humans can't be trusted as instant backup for a near fully autonomous car.

I don't know that this shows that. Cadillac's Super Cruise semi-autonomous cruise control has driver-attentiveness monitors. No reason Uber couldn't use something similar. Waymo and Cruise AV have driven more miles than Uber's fleet, and done so in California where they are required to report manual disengagements, and in fact they have reported a (decreasing) number of disengagements -- so it works at least some of the time if you actually make safety a priority. I don't know if the other players have attentiveness monitors for safety drivers or not. Waymo for sure used to always have two drivers -- one paying attention to the computer and one paying attention to the road. Only after they collected a lot of data and gained confidence in the system did they reduce to one driver, and then an operator. I assume Cruise AV followed a similar path.

What I see is a bunch of people trying to Zapruder something all to pieces when the simple fact of the matter is, well, simple:

The driver should have seen her. The driver didn't. The car should have seen her, it appears not to have done so.

If you remove the self-driving tech from the equation -- which we can do because the tech is KNOWN to be early, incomplete, and totally untrustworthy -- what you have is someone operating a motor vehicle without paying attention to the road, made possible by a system that could have been, but was not, designed to force the driver to pay attention, even though that attention was absolutely (obviously) required for the safe operation of the vehicle.

They can Zapruder the film all to hell and back, and it won't change that simple fact.

It's actually more complex than that, because we have to ask ourselves: CAN you design a safety-driver system that actually works? If that driver had been looking at the road, would she have been able to determine the AV isn't reacting properly, decide to act on her own, put her hands on the steering wheel, put her foot on the brake, and act, all in the span of 2 seconds? Are you going to tell me that in this situation, that is humanly possible?

And, of course, if we arrive at that conclusion, how in the WORLD can we train drivers and set them up in an AV in order to actually respond to a situation like this??

I don't know. You know why? Because the driver wasn't looking at the road, as the driver should have been.

Maybe this, maybe that, right? But the driver wasn't maybe not paying attention, the driver was absolutely not paying attention, although the driver absolutely should have been.

And the pedestrian is absolutely dead, whether she might not have been or not.

Pretty sure that's both the logical and legal definition of negligence.

There's a concert venue on the southwest corner of Mill and Curry. There's a light rail train that runs north-south on Mill and heads west on Curry. You'd see the train in the median if this was in the well-lit part of this area; also, there are crosswalks so close to the lights that crossing the road elsewhere would be nonsense. Obviously no one would walk across the train tracks...

Nowhere does the light rail run in the Mill Avenue median. You're full of it.

IIHS shows the XC90 with a headlight range just under 250 feet (76 meters) with "low beams" on.

This is true for the right lane, but the range in the left lane is only about 110 ft.
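The headlight-range figures above can be sanity-checked with basic stopping-distance arithmetic. A rough sketch, assuming a 40 mph travel speed, a 1.5-second perception-reaction time, and 7 m/s² of braking deceleration (none of which come from this thread; they are generic textbook values):

```python
# Rough check: can a driver stop within the quoted headlight ranges?
# Assumed values (not from the thread): 40 mph, 1.5 s reaction, 7 m/s^2 braking.

def stopping_distance_m(speed_mps, reaction_s=1.5, decel_mps2=7.0):
    """Reaction distance plus braking distance, in meters."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

MPH_TO_MPS = 0.44704
FT_TO_M = 0.3048

speed = 40 * MPH_TO_MPS          # ~17.9 m/s
needed = stopping_distance_m(speed)  # ~50 m total

for label, range_ft in [("low beams, right lane", 250), ("left lane", 110)]:
    range_m = range_ft * FT_TO_M
    verdict = "can stop in time" if needed <= range_m else "cannot stop in time"
    print(f"{label}: see {range_m:.0f} m, need {needed:.0f} m -> {verdict}")
```

Under those assumptions, the ~76 m right-lane range leaves room to stop, while the ~34 m left-lane range does not.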

At the moment Herzberg becomes visible in the Uber video, she's already in the Uber car's lane.

Without seeing the video, how do we know she did not just move into the path of the vehicle at the same time she became visible?

Why don't you just watch the video instead of speculating what it might show?

I found the video. Not sure why it's not embedded in the article and just linked to instead (oh ad revenue, right). Anyway, she is moving pretty quickly but not paying attention to the oncoming car at all. At 00:02 her feet are visible but that would be in the plane of the road and not detected as a hazard. At that point she is on the left edge of the lane just barely in the lane. By 00:03 she is fully in the lane and also fully visible. This however does not give enough time to avoid the collision. My point is that both the car, and the pedestrian were moving while also in a low visibility scenario. Should the car have better visibility at night? Yes. However I can see how it might not have detected this. I can also see why it might have detected it but classified the object as not an obstruction initially.

These cars have to be able to detect things and drive defensively. It has to be able to see a toddler that is not currently in danger but could potentially be in danger. It cannot simply assume anything out of its way will stay out of its way. It should have seen her prior to being in the way and noted where she was headed or could potentially head.

Like I mentioned in a previous thread, it should drive defensively when pedestrians are around.

A passenger taking over for a production autonomous vehicle wouldn't happen fast enough due to the expected inattention, we are talking about a safety driver -- they are being paid and expected to sit at the ready to take over, as the prototype won't catch all hazard scenarios.

And how effective could they be? As a regular driver, I can certainly put my foot on the brake to disengage the cruise control when traffic has stopped, so there is no reason a safety driver can't be incredibly effective. And there should be metrics to track when safety drivers are getting less effective, so they get a break or end their shift. [Of course, a question is how closely Uber is monitoring any of this.]
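The "metrics" idea above could be as simple as logging the reaction time of each takeover and flagging the driver for a break when a rolling average drifts upward. A hypothetical sketch (the class name, window size, and thresholds are all invented for illustration):

```python
# Hypothetical fatigue monitor for a safety driver: record each takeover's
# reaction time and flag the driver when the rolling average slows down.
from collections import deque

class SafetyDriverMonitor:
    def __init__(self, window=10, baseline_s=0.9, slowdown_factor=1.5):
        self.times = deque(maxlen=window)   # most recent reaction times
        self.baseline_s = baseline_s        # expected alert reaction time
        self.slowdown_factor = slowdown_factor  # how much drift triggers a break

    def record_takeover(self, reaction_s: float) -> None:
        """Log one takeover's reaction time, in seconds."""
        self.times.append(reaction_s)

    def needs_break(self) -> bool:
        """True once a full window of data averages well above baseline."""
        if len(self.times) < self.times.maxlen:
            return False  # not enough data yet
        avg = sum(self.times) / len(self.times)
        return avg > self.baseline_s * self.slowdown_factor
```

In practice the takeovers could be real disengagements or periodic staged alertness checks; either way, the point is that degradation is measurable.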

Paying you, or writing in a EULA (in the case of driver-assistance systems), that you have to be bulletproof will not change how bulletproof you actually are (not at all).

It doesn't matter if it is their job, or if the EULA says it, or if God himself says it. Humans are incapable of doing that. Every piece of experimental evidence clearly shows that we are not able to maintain that level of attention without any interaction.

The safety driver is there to take the blame and satisfy politicians, who also need someone to take the blame. What caused this accident is Uber's bad technology, which should not be on the roads, and nothing else!

A few of these points are far more clear-cut than people want to make them, and only one is fuzzy.

1. The pedestrian shouldn't have been in the middle of the road with oncoming traffic. This is a highly unusual situation that human drivers would not be prepared for, would have been cursing about at best, and might have had trouble working around at worst.

Not sure where in the world you live, but in the city where I live, as in most of the world's urban areas, pedestrians crossing roads are not a highly unusual situation. All human drivers where I live (in one of the world's biggest cities) are prepared for pedestrians crossing roads.

This area isn't actually that urban. For the record, I live there.

Interesting how you keep claiming to live in the area but your statements have been contradicted by others who also claim to live in the area.

You claim the road was as pitch black as the Uber video shows, which is vehemently contradicted by others who also live in the area.

As there is no light rail in the dashcam footage, this is either on Mill north of Curry or on Curry east of Mill.

The videos you see are deceiving.

Again, I am not defending Uber but rather the Tempe police on their assessment. There is a 1,200-acre park in this area where a friend of mine in a local astronomy club goes with his crew to set up scopes for kids.

I'm confused. Was the Uber AV driving through the concert venue, or along the light rail track, or through the park where your friend in the local astronomy club goes?

Or was the Uber car driving in that exact location of the street where it hit the pedestrian?

Isn't it interesting how you are trying so hard to give the impression that you know the area, and yet don't even know the exact location of the accident.

With Uber's track record, it wouldn't even surprise me if that apparent bad calibration was not an oversight.

Their approach of going fast, ignoring oversight, and only bothering to ask permission after they're caught may have worked for their taxi service. But for a self-driving car, that's a lethal combination. I'm surprised there haven't been more deaths because of them.

This is horrible for self-driving car programs overall, because now every company working on them has to take a step back and prove they have been doing due diligence the entire time by showing they have been taking steps for pedestrian safety.

What really bugs me is, as I've said, the technology exists right now to force driver engagement and attention. Tesla does it, Cadillac does it even better. Even my Lincoln will nag at me if I take my hands off the wheel while it's in adaptive cruise / lane keeping mode.

Uber didn't bother. They didn't think they had to. That's very Uber of them.

Last year, Waymo safety drivers took control in 21 incidents that could have resulted in a collision (the highest-risk category of disengagements). Waymo later ran simulations on those incidents and found that in 13 of them there would have been a collision if the safety driver had not taken control.

Is a safety driver flawless? No. There will be AV fatalities even with safety drivers, but to pretend they can't improve the overall safety of the vehicle is bogus. They can; Waymo's public driving record shows that. We know this because all of that is required to be public record in CA. Of course, Uber intentionally left CA to avoid having to provide public disclosures on its AV program on public roads.

This incident perfectly illustrates the fallacy of driver/pilot automation.

Humans, being human and not computers, get bored. You can't have a computer take over the driving and expect the human to supervise it with the same level of attention, the speed of analysis, and the speed of reaction, as would be expected if the human were driving.

This has been shown in aircraft where computer controlled systems fail, the humans simply aren't able to cope fast enough because they fail to get the constant feedback needed to stay both sharp and to gain experience. For example, Air France 447 that crashed over the Atlantic.

The solution is that either the computer is driving, or the human is driving. In this case, the computer was driving and it failed. It is simply naive to expect the human supervisor not to fail here. Her brain got bored and decided to do something else.

I've read about Cadillac's new driver-augmentation system, and it scares the hell out of me. These drivers are going to let the car drive itself and switch their attention to other things. It is human nature. Tesla's system permits such inattention by the driver, and as we have seen before, it sometimes fails with fatal results.

The answer is that we either have to accept that computers fail like humans do and that there will be accidents.

Or we have to ban computer driving and limit it to training only. Perhaps another 10 to 20 years of collecting training data and rerunning simulations is required to get their failure rate down to a fraction of the human failure rate.

Bottom line: this hybrid solution of a prototype driving computer plus an inattentive human supervisor is just a recipe for disaster.

At first I thought this happened down on the old Mill St. It kind of made sense that on a busy night you could step out from a line of parked cars and surprise a car coming down the adjacent lane. But the actual area is up north of the bridge: totally different and really hard to believe.

I'm honestly surprised that anyone thinks, of all places, that Mill Ave at night on a weekend is a good place to test unproven autonomous driving technology. Granted this incident occurred on the other side of the river from where all the ASU nightlife happens, but the car was just coming from there, so it's obvious Uber has no problems "testing" their vehicle down a street full of club-hopping drunk college kids.

Seriously, I watched the video and could have pointed, to within a hundred feet, exactly where in Tempe this happened. Because unlike some people who make such claims, I actually did live there and actually did drive, walk, run, and bike on Mill Avenue for years and years.

That's the sort of familiarity you have with a place you have actually lived in.

It sounds like maybe the answer (or rather, a mitigation tactic for inattention while these technologies are developed) is to have dual controls like an airplane, and two professionals on the level of a commercial airline pilot (in terms of training and pay) in each test vehicle, who can take short shifts of "driving." Both sets of controls would work all the time unless conflicting inputs were given, in which case the current primary would override the secondary's inputs to avoid scenarios where both drivers react, but react differently.

It isn't a perfect idea, but it's better than what I assume is a low-level employee plunked down alone for long hours with few breaks in these things.
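The override rule described above (both stations live, the current primary winning on conflict) can be sketched as a simple input arbiter. Everything here, from the class and field names to the conflict tolerance, is hypothetical:

```python
# Hypothetical dual-control arbiter: both control stations are always live,
# but when their inputs materially conflict, the designated primary wins.
from dataclasses import dataclass

@dataclass
class ControlInput:
    steering: float  # -1.0 (full left) to 1.0 (full right)
    brake: float     # 0.0 to 1.0
    throttle: float  # 0.0 to 1.0

def conflicting(a: ControlInput, b: ControlInput, tol: float = 0.1) -> bool:
    """Inputs conflict when any axis differs by more than the tolerance."""
    return (abs(a.steering - b.steering) > tol
            or abs(a.brake - b.brake) > tol
            or abs(a.throttle - b.throttle) > tol)

def arbitrate(primary: ControlInput, secondary: ControlInput) -> ControlInput:
    """Defer to the primary on conflict; otherwise blend conservatively."""
    if conflicting(primary, secondary):
        return primary
    # No conflict: take the more cautious command on each axis.
    return ControlInput(
        steering=primary.steering,
        brake=max(primary.brake, secondary.brake),
        throttle=min(primary.throttle, secondary.throttle),
    )
```

A real system would need latched primary/secondary roles, fault detection, and actuator-level safeguards, but the arbitration logic itself is this small.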

Last year, Waymo safety drivers took control in 21 incidents. Waymo later did simulation analysis and found that 13 of those would have resulted in a collision if the safety driver had not taken control.

I didn't say they can't provide benefit, but that it is irresponsible and dishonest to depend on them for safety. Waymo is obviously doing it differently than Uber, but even they have said that a safety driver is at best an extra safety system, not something that can be relied on.

Does the report say whether they use single drivers or teams, and how long the driving stretches are? It would be very interesting to see how they handle it, as they seem to be aware of the limitations of safety drivers.