Here's one way to consider this situation. Human error is the main cause of many traffic accidents. And those human errors are mostly attributable to lack of attention or poor judgement. An automated system may be able to prevent many of these accidents. But can an automated system prevent accidents due to external factors such as road hazards, errant pedestrians, etc.? Automated controls are mostly reactive in nature, and they would not be able to foresee potential problems as well as a good driver could.

Turning right across traffic is a major cause of fatalities and injuries; perhaps we should redesign our traffic systems to be turn-left-only, though this might be a problem in places like the Nullarbor Plain and Montana...

Why must autonomous cars be self-driving 100% of the time? In an urban/dense environment with lots of variables (people, driveways, cross streets, parking spots, etc) it makes no sense to talk of implementing driverless cars any time soon. So why bother talking about such unrealistic situations, which are primarily low speed, where there is little/no improvement in convenience or safety? Where appropriate, like expressways, where the intent is for all traffic to be moving at the same speed, in the same direction, for long periods of time, driverless makes sense and would be easy to implement in the short term, while capturing a large proportion of the convenience and safety benefit.

And the airplane talk is not relevant. Plane crashes are a result of the fact that a plane must maintain some minimum speed to stay airborne. If planes could just stop, figure out the issue and not move again until that issue was sorted or overridden, there would be almost no crashes. Cars can have any velocity between 0 and Vmax. Problem? Stop the car.
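To make the "just stop" point concrete: the fail-safe being described is essentially a tiny state machine in which any fault forces a controlled stop. A minimal sketch (the states and the sensor interface here are hypothetical, purely illustrative of the idea):

```python
from enum import Enum

class Mode(Enum):
    DRIVE = "drive"
    SAFE_STOP = "safe_stop"  # fault detected: brake to a standstill
    HALTED = "halted"        # absorbing safe state until cleared

def next_mode(mode: Mode, fault_detected: bool, speed: float) -> Mode:
    """One step of a fail-safe supervisor: any fault while driving
    forces a controlled stop, and the car stays halted afterwards."""
    if mode is Mode.DRIVE:
        return Mode.SAFE_STOP if fault_detected else Mode.DRIVE
    if mode is Mode.SAFE_STOP:
        return Mode.HALTED if speed == 0 else Mode.SAFE_STOP
    # An aircraft has no equivalent of this absorbing safe state --
    # which is the asymmetry the post above is pointing at.
    return Mode.HALTED
```

A car's zero-speed state is what makes this trivial; the whole argument is that aircraft lack it.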


I disagree about the convenience and the safety aspect in urban driving. Driving in urban areas can easily become hell during rush hour, and while the drivers aren't in as much danger, pedestrians certainly are. That said, it probably would be a lot harder logistically to automate urban driving than it would be to automate limited access highway driving.


I think the aerospace parallels are highly relevant, not least because autonomous flight has been available for many years, and the most complex part of flying, the landing, has also been available as an automated procedure at many airports for many years. Also, drones have become a significant part of the US Air Force's combat strength.

Despite this long existence of autonomous aircraft, there is a great unwillingness by the FAA in the USA and the CAA in the UK not only to remove the pilot but even to see him or her as less in charge of the aircraft than the computers. In aviation there is still, it seems, a presumption that, ultimately, the pilot knows best.

I am not in any part of aviation safety, so I can only guess, but I think this is driven by two factors. Firstly, the automated systems cannot understand and react to every possible event, and human brains are still seen as superior to computers in this respect in flying. Secondly, pilots who make mistakes will die from them, so keeping final control with the pilot remains a key safety principle.

In fact, a lot of concern has been raised by aviation safety bodies about the degradation of manual piloting skills caused by constant use of auto-landing and the like. Some quite strong words have been used, such as "lack of basic flying practice". It would seem that in aviation, flight automation is very much welcomed, as are sensors (e.g. radar), but there is a matching reluctance to cede ultimate on-board human responsibility and control.

So there does seem to be a very different attitude between autonomous car developers like Google and the FAA/CAA. As somebody has already pointed out, will a driver be able to avoid any legal liability for a crash as long as he or she was in the car?


More than 15 years ago American Airlines conducted these training seminars.

.........I don't like the idea of flying in aircraft that are solely under the control of computers, especially when, as is apparently the case, the humans up the sharp end have not been trained to take over and fly it at high altitude..........

Tony Matthews,

All modern commercial aircraft are now fly-by-wire, and thus are 100% computer controlled. There is no longer any direct mechanical connection from the pilot's inputs to the engines or aircraft control surfaces. The human pilot can only override the autopilot software; the engines and control surface actuators will still only respond to electronic signals. But rest assured, all flight-critical control systems have triple redundancy, which, if you think about it, is a much safer arrangement than having only two pilots.
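For what it's worth, triple redundancy of this kind is usually realised as 2-out-of-3 majority voting between independent channels. A toy sketch of the idea (real avionics voting logic is far more involved; this is only an illustration of the principle):

```python
def vote(a: float, b: float, c: float, tol: float = 1e-3) -> float:
    """2-out-of-3 voter: return a value that at least two channels
    agree on (within tol); raise if all three channels disagree."""
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tol:
            return (x + y) / 2  # average of the agreeing pair
    raise ValueError("no two channels agree")

# One faulty channel is simply outvoted by the other two:
faulty_reading = vote(10.0, 10.0, 99.0)
```

The point of the arrangement is that a single channel failure, however wrong its output, cannot win the vote.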


slider

I refer you back to the previous posting about the Qantas flight on which the triply redundant computers cocked up.

And playing Jeopardy! better than a human is not that impressive; there's no death penalty if the computer gets it wrong...


Let's make an OHS assessment of driving:

We're proposing to let anyone operate heavy machinery at high speed, frequently within less than a metre from others doing the same with the only mitigations being a small amount of training and experience.


Nobody is proposing that - That is what we already do.

Suggested reading: Mind Over Mind by Chris Berdik, particularly the chapters about the part the subconscious plays in expectancy, where Berdik shows that unconscious expectations can also influence behaviour when people have to react quickly and conscious deliberation may get in the way - As far as I am aware, no computer has yet developed a subconscious...

When we walk, run or drive we react to things we are unaware of. For instance, we may, in the process of ducking, realise why we ducked: there was a ball coming at us at high speed. We don't look, say to ourselves "Oh, there's a ball coming at my head" and then duck; it is a subconscious reaction to imminent danger and one we are not in control of. I entirely agree that a computer-controlled car can handle the mundane, the ordinary flow of traffic. What I dispute is that it can necessarily handle the extraordinary, that it can make the value judgements required in extremis: do I run over the child or into the car coming the other way; is that a dog or a fox (there are legal ramifications in the UK if I run over a dog, so do I take avoiding action or not); is that a child or a monkey ambling into the road, and if not a child, how long to check the database to confirm that it is a monkey; there's a car on the wrong side of the road coming towards me, do I attempt to stop in a straight line or try to take avoiding action, and if so, which of the several options? These decisions are usually taken by humans without conscious thought; computers only work in the conscious domain.

There are no doubt instances of the kind of thing you describe. For every such instance there are thousands of deaths caused by simple driver error, including intoxication, distraction, lack of sleep, heart attack...

It will not be long before AI is a match for human intelligence. Long before that day arrives, driverless cars will be much safer than those with drivers.


You are slightly missing the point: we do a lot of things when driving that we are not aware of. On ice and snow we are aware that we might start sliding before it actually happens, so subconsciously we have our correction prepared and carry it out. Computers are reactive; they cannot yet make value judgements - I don't care what they can be programmed to recognise, whether they can tell a male from a female; they have no real intelligence, they have the eponymous artificial intelligence, they are not truly aware. Take real awareness, and therefore the subconscious avoidance of potential accidents, away from the driving of cars, and will the accident rate actually increase in some areas?

Don't come back and make bland assertions about how good computers are getting; provide evidence of how they are dealing with these grey areas. I have given you sources for how the brain works; many posters seem to think that because it is being done by Google, everything is perfect and there is no need to provide evidence.

It seems we are torn asunder by two totally incompatible schools of thought: that computers are stupid and couldn't possibly replace human drivers for another 20 years, and that computers are limitless in potential and could possibly replace human drivers in another 20 years.


I hadn't noticed that. What I had noticed was assertions that computers will have no trouble replacing humans, and a few who think that it has yet to be proved, as there are some areas where parts of the human psyche cannot be 100% replicated.


At the risk of repeating myself for the 20th time (though I won't be the first one here to commit such an offense), the fact that you demand the human psyche be replicated is a big part of the problem with your thinking. Why does the human psyche need to be replicated? A lot of the heuristics our brains use are just workarounds for shortcomings in our thought processes that computers don't have.

At the root, you have a severe case of failure of imagination. Instead of thinking about how computers can replicate the result of human driving, you're thinking about how computers can replicate the actions of humans driving cars. That's a highly sub-optimal approach to automation, because from the get-go you set as a goal mimicking something that is already a big compromise, and as a result limit the potential for improvement.


In this thread there have been plenty of examples given of computers doing things you claim they can't do. Every time the example was brought up, you either ignored it and continued playing the broken record, or moved the goal posts.

Well, in the last decade for US airlines, the fatality rate due to accidents was 1/10th of the fatality rate of the preceding decade. Of course, there is a lot of statistical noise with data like that, given how rarely planes crash in general, but I'd still say these numbers look like a big improvement.
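On the "statistical noise" point: with rare events, even a tenfold drop in the count carries wide error bars. A rough illustration with made-up counts (not real accident data), using the usual sqrt(N) Poisson approximation for the uncertainty on a count:

```python
import math

def rate_with_error(events: int, exposure: float):
    """Approximate a Poisson rate and its 1-sigma error.

    events   -- observed count of rare events (e.g. fatal accidents)
    exposure -- amount of activity observed (e.g. millions of flights)
    """
    rate = events / exposure
    err = math.sqrt(events) / exposure if events else 1.0 / exposure
    return rate, err

# Hypothetical: 25 fatal accidents in one decade, 3 in the next,
# over the same exposure (say, 100 million departures each).
r1, e1 = rate_with_error(25, 100.0)
r2, e2 = rate_with_error(3, 100.0)
```

With counts this small the fractional uncertainty on the later decade is large, which is exactly why single-decade comparisons of rare-event rates deserve caution.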

You are slightly missing the point, we do a lot of things when driving that we are not aware of; on ice and snow we are aware that we might start sliding before it actually happens so subconsciously we have our correction prepared and carry it out, computers are reactive, they cannot yet make value judgements

You clearly do not appreciate the irony of this assertion, which actually demonstrates that human drivers do best when they act in the same way as computers - reacting automatically without conscious thought.

But-- and this is significant psychologically-- if accidents are the result of computer error they are frightening in a way that accidents involving human error are not. Even if we are significantly safer under automated control, we thus feel less safe. Given our strong tendency to overestimate our own competence it's easy to tell oneself that one's own far above average skills will make the difference in those crucial split seconds.

The resistance to automated control and the death and injury toll that will result from this irrationality must be added to the column of human error induced harm. Human error extends far beyond the physical operation of the vehicle.


Computers are neither conscious nor unconscious, and they do not "think"; neither do they have prescience. They compute; they cross-reference a set of pre-programmed parameters. If the information is not in the computer, I want to know how it makes the illogical leap to a course of action it has no knowledge of. It is a question no one seems able or prepared to answer; all you do is tell me how clever computers are, which of course they are not, any more than a ruler is clever. Computers are only as good as the people who program them, and the geniuses who program them do not know the awkward turn half-way down my street; that is to say, they cannot predict every situation that any one car anywhere in the world will face in the next 5 minutes, let alone all the cars over the next 5 years. So stop telling me how clever computers are and show me how the programmers are dealing with the random. I have no interest in the predictable; the predictable presents no problem - if it did, it wouldn't be predictable, would it?

There you go again, making claims that have been refuted before, but pretending they weren't. Google backgammon computer algorithms, which beat the best humans through superior strategy they learned on their own, if you have any interest in learning about this subject rather than repeating false claims time and again. And before you ask how backgammon strategy is applicable (not that this question is necessary to defeat such a false blanket claim), backgammon algorithms were developed with artificial neural networks, which were also employed in coming up with computer algorithms that could fly planes on their own.


Ah! Backgammon, that 3-dimensional game where the shape and dimensions of the board and some of the rules change every few seconds. I am not disputing that computers can adapt and change their own algorithms within a fixed set of positions and rules, and you keep offering examples of them being able to do that, and once again I say that there are no lives at risk if the computer gets it wrong. To be self-adaptive the program has to run many times, remember the consequences of the various plays it makes and use them for future reference - correct? The program learns by experience, but it is not actually "thinking"; it wouldn't suddenly stop and decide that the game would be more satisfying if it altered a few of the rules. It wouldn't see the need to; why would it, when it knows exactly what it is doing? If the world's best computer was driving a car, would it be able to change its own rules instantaneously in order to deal with a problem it had never encountered, would it stop to think about it until some time after the accident had happened, or would it go berserk like the robot in the Siemens factory when it found that all its sensors were reading 0,0,0,0,0,0,0,0,0 and it had no idea what that was or, more importantly, what to do about it? You see, nobody had told it that might happen; the programmers never took it into consideration - after all, why would that happen?

As I said before, I will happily accept autonomous vehicles driving down my road when the manufacturers accept a strict liability clause in the sales contract - as far as I am concerned, they can't assert that their self-driving cars can do it better than us and then, when there is an accident, dump the blame on the owner of the vehicle. I am waiting for the first court case involving the self-parking Fords with interest; it will inform us as to the future legal implications and driver liabilities of autonomous cars.

You continue to point out the advantages of human drivers over AI, and I grant you are correct - for now. You continue to ignore the other side of the coin: the vast majority of road fatalities would have been avoided by a moderately smart computer, compared to the handful of cases where a human might have done a better job.


Good thing you've out-thought all the legal people. I'm surprised you have time to waste on the internet; a man capable of such great insights should be designing autogyros or fan-powered drag racers. With beam axles. And seamless gearshifts. I did a very bad search for self-parking cars and lawsuits, and once I got rid of the hot air and armchair lawyers and science fiction fans, sadly not much was left. Which is odd, since self-parking cars have been on sale for more than a year and have already generated a lot of customer feedback (mainly variations on "why is my goddamn car refusing to park in a space so big I could get a bus in").

Have there been situations where a human could have done a better job? Yes

Ross Stonefeld,

With commercial aircraft travel in the US, the safety record regarding fatalities is now almost perfect. In several of the past few years there have been no fatalities on US commercial flights. In fact, you are statistically far safer flying on a modern commercial aircraft than you are sitting in your own living room. Most of the in-flight problems that now occur are due to human error.

Sure, computers can read terrain, but can they read subtle clues? Just last week, I was driving in town, and I saw a pedestrian right in front of me with her eyes and mouth wide open, screaming at the top of her lungs. Immediately I intuitively concluded that I was driving on the sidewalk again, and tragedy was narrowly averted.

Will computers have advanced face and voice recognition software to perceive the same clues as I did? Until they're capable of such human ingenuity, I'm going to be very skeptical that they can drive safer than me.

Fortunately, where I live the pedestrians have finally realised that the pavement (sidewalk) is not their sole preserve - most cyclists use it, and it is a convenient place for motorists to park. The downside is that they wander onto the road (pavement). The old ones are the worst, as they set off at a reasonable pace - albeit on the diagonal, hence increasing the distance - but then slowing in the middle of the opposite lane as if in thought, or fighting for breath. I admire the young mothers who test the traffic by pushing their toddler-laden pram or buggy out first. Highly commendable. I sometimes congratulate them with a lengthy blast on my horn.


If you habitually drive on the sidewalk, then I have no doubt that they are just around the corner...

I don't like the idea of flying in aircraft that are solely under the control of computers, especially when, as is apparently the case, the humans up the sharp end have not been trained to take over and fly it at high altitude. I don't like the idea of being in a car that is solely under the control of a computer, unless I am in a position to take control the instant that I feel impending doom and it doesn't look like the computer is doing a good job, in which case I might just as well be driving.

However, I have learned to accept that computers, sensors and related gizmos are going to get better and better, to the point where I will be happy to give up control most of the time. I may not really enjoy that scenario, assuming I'm still here to experience it, but having had several red-face moments with technology - auto-focus? Impossible! Oh... - I now have faith in engineers to do the supposed impossible, given time.

At least I didn't proclaim, just before the event, that man would never walk on the moon, like one eminent British astronomer.

Ah yes but did they? Or was it a conspiracy acted out in the Nevada desert? I HAD to put that up!!

Airplane "talk" is actually fully relevant, and indeed, statistically, in aviation "there are almost no crashes" with respect to car transportation. That accident in particular has opened up critical issues about safety systems, humans and their interaction, but after a quote like the one above, who cares to discuss any further.

Actually, considering the ratio of cars to aircraft, aircraft are quite prone to accidents. This is not just crashes but electronic failures and mechanical failures where the plane lands without undue drama - still a major problem. Listen to the evening news, where just about daily you hear reports of planes landing after a cockpit fire, a (generally faulty) alarm, an engine problem or some other mechanical problem. And the ratio of aircraft to cars? 0.05%, being generous. And far too many of the things do crash, mostly without major damage to occupants. Talk to any commercial pilot.


Aircraft are far more complex things than cars. And if an aircraft breaks down, you have a serious problem; if your car packs up, you are probably just stuck on the hard shoulder. All of which is nothing to do with computerised controls. An engine failure, a hydraulic or electrical fault, or some other mechanical fault could happen whether under human or computer control.

Alongside fleshbot operators? In America, the legal issues to get approval for mixed roads will be pretty big. I mean in the sense of insurance companies agreeing to cover, etc. Not because they'll be pro or anti, but because they'll want to know what's what. I'd almost expect 100% driverless to come before mixed-class.

It's already occurring: Google have done more than 300,000 accident-free miles in normal city/suburban conditions. A driver is sitting in the car, but the car is doing all the driving.

If you mean which year will it be for sale to the general public, I'd go for 2020.

As I read the literature from circa 2010, around 140,000 miles with "occasional human intervention" -- the human on board touching the brake or steering wheel. Without human intervention, around 1,000 miles. Do I have that wrong?


Just slightly out of date I think, I knew I'd read the 300,000 figure somewhere.

To date, they have logged 300,000 miles with only one accident – caused by a human-controlled car that ran into one of them. And they have now logged 50,000 miles without a human having to take the wheel.

When the autopilot tripped out, there was a small indicated altitude loss (90 m), to which the PF (pilot flying) reacted with a dramatic rearward input on his sidestick. The vertical acceleration imparted was 1.6 g, the nose pitched up, reaching an attitude of +12°, and the aircraft's speed began to drop fast. Exactly 46 seconds after the loss of reliable airspeed information, states the report, the stall warning sounded. Then, 2 seconds later, the aircraft exited its flight envelope, with buffet indicating a full stall. Meanwhile, the crew rapidly lost situational awareness and control of the aircraft, never to recover it.

It's a pity one of us wasn't flying that plane with our obviously superior situational awareness.