
That's job ONE when you are the "Pilot in Command": fly the aircraft, always. If you are driving a car, you are still very much the "Pilot in Command". Can we put proximity detectors on vehicles, coupled to the braking system and the accelerator? Yes. Can we put a speed limiter on vehicles? Definitely, and we probably should.
This technology does not replace a pilot in command, though. Its intention is to catch an unrecognized condition and make an autonomous correction: say you are in stop-and-go traffic, absentmindedly start rolling while fiddling with your CD player, and tap the vehicle in front of you. You meet all kinds of new people. Now, yes, the car will stop itself. It should probably nag you to pay attention as well. This is altogether different at 80 MPH with a stopped car in your lane, in four inches of wet snow, when everything is white. Answer: see you in church.
Technology will help drivers avoid more trouble, like limiting your ability to do really stupid things such as tailgating cars in a merging situation or speeding in a school zone.
But nothing is going to replace the "Pilot in Command", which is you.

Yes, the vigilance problem is a tricky one. I'm a fan of the Tesla but don't own one yet.

In my fevered imagination I'm driving the thing along the road on autopilot, but it has occurred to me that I'm unlikely to sustain vigilance without the constant prompting of needing to adjust speed, lane position, and so on that normal driving entails.

I suppose NOT watching a DVD while doing it would be a good start...

I don't know if it's just me, but I have found that even with just cruise control managing the speed, I tend to do essentially silly things. For instance, if the traffic slows ahead I take no action, hoping that it clears enough to still maintain that cruise speed. I end up much closer than I would normally be to the car ahead, simply because I was reluctant to intervene.

Mostly my intervention would be a frantic clicking-down of the speed control, hoping to get it to cruise at the new lower speed without having to disengage the system by braking. There is no rational reason for this behaviour of mine; it seems that after engaging the tech solution to speed control, I am reluctant to abort the process.

Although it doesn't directly translate to the Tesla system, I suspect I will be similarly reluctant to intervene when I have it engaged. Obviously, when a truck has filled the windscreen I'll abandon my fascination with the tech in favour of a robust braking procedure... but will it be too late by then, because I was hoping for too long that the radar collision avoidance would do it for me?

I think there is a human factor involved here that will be hard to mitigate until the systems achieve full autonomy; until then, people will probably continue to do non-rational, silly things with their toy.

I think you raise a good point there: the reluctance to interfere with an automated system. This will increase with time as drivers come to trust the system. Accidents happen after very long periods of apparently safe driving. The mix of normal and automated systems on the same roads is a big challenge, but it's equivalent to the mix of learner or inexperienced drivers. In the future, all drivers will be inexperienced compared with the current ageing generation. Will they be quick to override the autopilot and face later censure from the courts?

sicut vis videre esto
Originally Posted by Ken G: "When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense."

If they don't override the autopilot, they may wind up expired. This is serious stuff here. Even a washing machine requires supervision (a leaking supply valve soaks the entire house, etc.). In view of this, why should drivers become less experienced?
Drive or drive not. "Sure, and you should be thinking about your future." Mrs. Martin McFly

Self-driving advances, and I noticed a subtle idea.
Lawyers worry that accidents will lead to lawsuits blaming the software designers, or the manufacturers.
So there is a suggestion to have a dial in the car that varies how the system makes decisions, from one extreme ("always try to save my life") to the other ("always try to save the lives of third parties"). This would put the first port of call onto any owner who chose the first option.
This is no joke: the legal blame game will otherwise inhibit the technology. There are more complex ways to choose the optimum strategy when the system detects an impending danger. Humans will have to think about it and choose.
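To make the dial idea concrete, here is a minimal sketch of how such a setting could weight a decision. Everything here is invented for illustration (the `Maneuver` type, the risk numbers, the 0-to-1 dial); no real vehicle exposes an interface like this, and real systems would estimate risk from sensor data, not hand-set constants.

```python
# Hypothetical sketch of the "ethics dial": a 0..1 knob that trades off
# occupant risk against third-party risk when scoring candidate maneuvers.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float     # estimated harm to people in the car (0..1)
    third_party_risk: float  # estimated harm to people outside (0..1)

def choose_maneuver(options, dial):
    """dial = 0.0 -> 'always try to save my life';
       dial = 1.0 -> 'always try to save third parties'."""
    def cost(m):
        return (1.0 - dial) * m.occupant_risk + dial * m.third_party_risk
    return min(options, key=cost)

options = [
    Maneuver("brake straight", occupant_risk=0.1, third_party_risk=0.8),
    Maneuver("swerve into ditch", occupant_risk=0.6, third_party_risk=0.05),
]

print(choose_maneuver(options, dial=0.0).name)  # owner prioritises occupants
print(choose_maneuver(options, dial=1.0).name)  # owner prioritises third parties
```

The point of the sketch is the liability shift the post describes: the weighting is the owner's documented choice, not the manufacturer's.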

I haven't heard that suggestion, but it seems impossible to me. There isn't any way to know whether a person is going to be killed or not. The better option, to me, seems to be to avoid the most immediate danger and then deal with subsequent ones: you try to drive around the closest person, then try to avoid the next obstacle.
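A toy illustration of that "most immediate danger first" idea: rank hazards by time-to-collision and address the nearest one first. The hazard list, labels, and numbers below are invented for illustration; a real system would compute these from tracked objects.

```python
# Greedy hazard triage: handle whichever hazard we would hit soonest.

def time_to_collision(distance_m, closing_speed_ms):
    """Seconds until impact; infinite if we are not closing on the hazard."""
    if closing_speed_ms <= 0:
        return float("inf")
    return distance_m / closing_speed_ms

def most_immediate(hazards):
    """hazards: list of (label, distance_m, closing_speed_ms) tuples."""
    return min(hazards, key=lambda h: time_to_collision(h[1], h[2]))

hazards = [
    ("pedestrian", 12.0, 6.0),    # 2.0 s to impact
    ("stopped car", 40.0, 25.0),  # 1.6 s to impact
    ("parked van", 15.0, 0.0),    # not closing
]

print(most_immediate(hazards)[0])
```

Note the farther hazard can still be the most urgent one if the closing speed is higher, which is why time-to-collision rather than raw distance is the natural sort key.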

No, it is a real thing. The algorithm can be faced with a suddenly developing situation where a choice has to be made. There is a well-developed set of experiments (the "trolley problem") in which subjects must choose whether to divert a runaway truck to save lives but kill someone else; these experiments delve into human motivation and morality. The suggestion here is to force a human owner to decide in advance how the algorithm should prioritise actions. For example, a pedestrian suddenly walks out in front; the car can just run them down, or swerve into a ditch, risking injury to the passengers. By adding the dial, the designers hope to avoid being in court for taking the wrong decision: the blame rests with the owner of the car, just as it does if she is driving traditionally. This issue is central to the development of driverless technology.

There is probably an even easier way to force the owner of a self-driving car to be responsible (rather than the "ethics" switch): when you purchase the car, you take legal responsibility for accidents involving the vehicle.

The problem, however, with either such a legal agreement or an algorithm-adjusting switch is that neither will stop lawsuits, and neither will cover all possible ways that things might go wrong. For example, the "protect me" versus "protect others" selector will not help with the choice between "run over 3 adults" and "run over 2 babies".

I have a suspicion that for the legal reasons being discussed, we will not see fully automatic cars for a long time (outside of some very limited uses). What we will see (and are already seeing) are augmented cars: the cars will not self-drive, but they will "assist" the driver in many ways. We already have cars that will brake if you get too close to the vehicle in front of you, that adjust the steering (or at least warn you) if you drift out of your lane or toward a hazard, and that automate certain tasks, like parking the car. I suspect we will see more and more of that.

My colleague who alerted me to this works on software debugging in this field, and he agrees with you; he suspects this ethics issue is much bigger than the technical challenge. He discussed the issues professionally with potential insurers, and they were very reluctant, anticipating lawsuits that human drivers do not face. Humans panic and only get blamed if they were obviously negligent, but robots have actionable manufacturers.

I went to a session about self-driving cars a few years ago at (I think) the annual AAAS meeting, and one of the panelists was a lawyer working in this area. Somebody asked a question about how we would decide who has responsibility in the case of an accident, and the lawyer basically said, "we will do it just as we do now. The reason we have juries is that every case is different and it is never possible to simply lay responsibility by a rule." His argument was that self-driving cars don't require any change in our legal system (though of course they do require regulatory changes).

It seems to me that regulation would be key. If a manufacturer's self driving system meets regulatory standards then the manufacturer should not be able to be held responsible. At some point when all actors, including the self driving system manufacturer, have done the best that can be reasonably expected the verdict should be that no one is at fault. Of course the tricky part is figuring out what can / should be reasonably expected.

Another problem with this is that the software running the cars would not be static: just as Microsoft, Apple, Android, and other software platform developers push updates onto our various computers and devices, the computers that run the cars would also be subject to pushed updates. Most of these would be for the better, applying lessons learned from accidents or fixing previously undiscovered flaws. Still, the software the car came with when you bought it might not be the same software it runs a few years later, and the consumer might not have full control over that. You would be taking responsibility for something that could be changed without your explicit knowledge or permission, covered only under the more general agreements signed at purchase.

Originally Posted by Darrell
"It seems to me that regulation would be key. If a manufacturer's self-driving system meets regulatory standards, then the manufacturer should not be able to be held responsible."

Regulation is probably the key indeed. A problem with that is that the current developers seem to be taking very different routes in programming and in the amount of AI-ish self-adjustment that goes into it. It can be hard for an agency to accredit a software package that is constantly changing and updating itself, as some current auto-driving software packages (such as Tesla's) do.

My guess is that self-driving is coming, but fairly gradually, until a tipping point is reached a few decades from now. Most of the first level 4 and 5 self-driving platforms will be fleet vehicles limited to certain areas, such as shuttle buses and delivery vans. Some level 4 cars will be sold to the general public, but the added sensors and liability issues will push the price high enough to limit them to wealthy technophiles. Meanwhile, collision avoidance and lane following will make their way into more everyday cars, which will be good for most everyone.

ETA Later: Very obviously, the ride sharing companies want to be early adopters as well. Uber already has level 4-ish cars operating in one area.

Yes, I find it ironic that the automation will almost certainly reduce overall accidents. It will combine the hub-to-hub nature of air travel with the point-to-point nature of road traffic, with, I would think, automated high-speed road trains: very closely packed, automating the gaps required for traffic to join, and saving energy in the slipstream. Any software error might then cause mega pile-ups. Hacking vulnerability is also a worry.

Hacking is indeed a concern.

My sister has had her identity stolen seven times since she got a government job, and she and all her friends and family were deeply investigated on record by Homeland Security. Then the company that secured DHS accounts against hacking was itself hacked. So her, my, and our family's personal details (and millions of others') are now online and for sale.

"I'm planning to live forever. So far, that's working perfectly." Steven Wright

At present, there are occasional accidents involving elevators, which are automated vehicles. When an accident happens, it is usually found to be either a design failure, a maintenance failure, or a passenger doing something wrong. So building to regulation doesn't absolve the manufacturer if there is, for example, a flaw not covered by regulation.