Wonder if they will hold the software programmer liable for the death.

In reality it will be the first piece of much evidence used to ensure that, in the not-too-far-off future, no pedestrians or cyclists will be allowed on roads with autonomous vehicles ‘for their own safety’, and that people will only be able to cross at crossings by law, even in the UK.

Will be watching this with great interest. Having read about Arizona relaxing the rules on self-driving cars much further than other states, I wonder if the victim’s family might sue the state government.

I do get the impression that “jaywalking” carries much more stigma over there though.

Well, jaywalking is an offence in some jurisdictions. I don’t know about that one, though. And of course it shouldn’t affect the software design (there are many reasons why a pedestrian or other obstacle might be in the road).

The liability stuff is way more complicated than “sue the programmer”, as there are a lot of options depending on where they are and what might be considered reasonable for the vehicle to have anticipated.

e.g. https://en.wikipedia.org/wiki/Autonomous_car_liability

The references section there gives you an idea of just how much work is going in to it.

On that note, it did say there was a driver in the car to take control, so assuming the human in charge of the car was paying attention, it may have been an unavoidable accident. But without more details it’s all just conjecture.

I work in the industry and IMO people have got wwaayy more confidence in this technology than they should have at the moment. Governments all over the world seem to be rushing to hand out licences to test autonomous cars on the roads to attract the R&D centres, way before the technology is ready. The civil service should be doing work to make sure the legislation is up to scratch and everything is tested properly, but they are hopelessly outmatched in terms of salaries, skills and the sheer pace of change of the companies and technology they are trying to regulate. It seems like they are unable to tell how mature a particular company’s technology is. Personally, I wouldn’t trust anyone other than Google.

Spot on, northwind. If you want to keep killing people until you hit perfection, then you are missing the point.

in the not-too-far-off future no pedestrians or cyclists will be allowed on roads with autonomous vehicles ‘for their own safety’ and that people will only be able to cross at crossings by law even in the UK.

The UK has these already: we call them motorways and, in places, dual carriageways (see also tunnels and some bridges). It is a logical step to create higher-speed, closed, uninterrupted transit lanes so that cars etc. can move quickly without having to deal with merging and pedestrians.

Is it just me who thinks these things shouldn’t be on the road yet?

In the 2 months since I’ve been back in the UK, I reckon I have seen a couple of thousand drivers that should be off the road before any driverless car.

With a human driver overseeing things with complete manual override under strict conditions? I don’t see a problem with it. To take issue would be the same as blaming ABS or cruise control for having an accident.

It wouldn’t surprise me if this was driver error, or pedestrian error, or even a mechanical failure, but I’m reserving judgment.

It’s a bit sensationalist just to assume the automation aspect of the car is at fault when there are so many variables and no specific information has been released.

Google cars could have killed dozens, you don’t know about it because they’d remove it from their search results. (said in jest but when you hold the keys to the eyeball castle it’s possible to guide opinions)

It’s a statistical certainty autonomous cars will kill given enough road time, though it might be a lot less than meatbags do. What level is acceptable to society? If I was investing a billion or two in this tech it’s a question I would have already asked.

TBH, in autonomous mode you can’t really expect the “driver” to take over in an emergency. In some slower-emerging situation like a breakdown or congestion, then sure, but in a collision it’s just not going to happen: the driver will be switched off and unready.

The bottom line is that real-world testing is required to make these things better, because it’s only by connecting with the countless idiot things that happen in the real world that you’ll get a result that works there. In exactly the same way, we instruct learners on the street and then let them out when they’re only barely capable. Self-driving clearly isn’t there yet, but if we don’t let it out in the wild then it probably never will be.

TBH in autonomous mode you can’t really expect the “driver” to take over in an emergency

A good point, but in a controlled test or proof-of-concept test (which this hopefully was, as it was on a real road with real hazards), you’d expect the human in the car to be ready to brake or take evasive action at any second if things didn’t look right.

Until recently, they have required a real person to be sat in the front of the car and ready to take over – but recently California officials approved the testing of such vehicles without humans in the front seats.

I wonder if Uber were trying to run before they could walk, considering the first company to market with a viable autonomous vehicle stands to make a (forgive the pun) killing.

It will be interesting to see if the human in the car was actually ready to intervene, or if there even was a human in the car.

Either way it looks very bad for Uber, and Arizona I guess. There’s some criminal negligence there for sure.

I wonder whether any of those calling for such cars to be banned as a result of this will manage the logical connection that exactly the same arguments could be used for banning cars with human drivers.