Posted
by
Soulskill
on Tuesday November 27, 2012 @03:37PM
from the needs-of-the-many-outweigh-the-needs-of-the-few dept.

nicholast writes "If your driverless car is about to crash into a bus, should it veer off a bridge? NYU Prof. Gary Marcus has a good essay about the need to program ethics and morality into our future machines. Quoting: 'Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work. That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.'"

I maintain that you CAN'T really program morality into a machine (it's hard enough to program it into a human). And I also doubt that engineers will ever really be able to overcome the numerous technical issues involved with driverless cars. But above these two problems, far and away above *all* problems with driverless cars is the real reason I think we'll never see anything more than driver *assisting* cars on the road: legal liability.

To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers? How much would you have to add onto the sticker price to cover the costs of going to court every single time that particular car was involved in an accident? Of defending the efficacy of your driverless system against other manufacturers' systems (and against defect, and against the word of the driver himself that he was using the system properly) in one liability case after another?

According to Forbes [forbes.com], the average driver is involved in an accident every 18 years. Let's suppose (and I'm sure the statisticians would object to this supposition) that the average CAR is also involved in a wreck every 18 years. Since the average age of a car is now about 11 years [usatoday.com], it's not unreasonable to assume that a little less than half of all cars on the road will be involved in at least one accident in their functional lifetimes. And even with the added safety of driverless systems, the first models available will still have to contend with roads mostly filled with regular, non-driverless cars. So let's say that a good 25% of those first models will probably end up in an accident at some point, which will make a very tempting target for lawyers going after the deep pockets of their manufacturers.
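Just to make the arithmetic above explicit, here's the same back-of-the-envelope estimate as a few lines of code (assuming, as the comment itself concedes, a constant per-year accident probability):

```python
# Back-of-the-envelope version of the estimate above.
# Assumption (flagged in the comment): accidents hit a given car at a
# constant rate of one every 18 years, independently each year.

per_year = 1 / 18   # average accidents per car per year
lifetime = 11       # average age of a car on the road, in years

# Chance of at least one accident over the car's lifetime so far
p_accident = 1 - (1 - per_year) ** lifetime
print(f"P(at least one accident in {lifetime} years) ~ {p_accident:.0%}")  # ~ 47%
```

Which indeed comes out to "a little less than half"; the 25% figure for the first driverless models is then just a rough discount on that.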

Again, what car company wouldn't take that into account when asking themselves if they want to be a pioneer in this field?

To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers?

... no one. But you'll get plenty who charge mandatory tune-ups to ensure compliance. The question will be "which company DOESN'T charge a fee for a mandatory yearly check-up"?

This is my exact reasoning why flying cars will never take off (pardon the pun). People keep their cars in terrible condition. If your car has an engine failure, worst case scenario, you pull over to the side of the road, or end up blocking traffic. In a flying vehicle, if your engine dies, it's very possible that you will die too. And if you are above a city, it's not impossible to imagine crashing into an innocent bystander.

I imagine the same will be true for self-driving cars. It will never happen, because if the car is getting bad information from its sensors, then crazy things can happen. People can't be bothered to clean more than 2 square inches of their windshield in the winter. Do you really think they are going to go around clearing 10 different sensors of ice and snow every winter morning? Sure, the car could refuse to operate if the sensors are blocked, but then I guess people would just not want to buy the car, or would complain to the dealer about it.

The drivers would complain, but current vehicles already "know" they are unsafe much of the time. Lots of cars will give you an error light if a bulb is out (it's trivial to detect, since the resistance changes), and check-engine lights cover everything from sensor glitches and trivial emissions issues to catastrophic engine problems. Yet, at worst, they'll enter a "limp" mode.

If there were a government requirement that detected safety-related problems must shut down and immobilize the car within no more than 5 minutes, then the problem would go away.
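A minimal sketch of what that rule could look like in the car's firmware (the class name, fault names, and the 5-minute constant are just illustrations of the proposal, not any real standard):

```python
class SafetyWatchdog:
    """Toy model of the proposed rule: a detected safety fault must
    immobilize the car within 5 minutes unless it clears first."""

    DEADLINE = 5 * 60  # seconds

    def __init__(self):
        self.fault_since = None
        self.immobilized = False

    def update(self, faults, now):
        """Call periodically with the current fault list and a clock reading."""
        if not faults:
            self.fault_since = None            # fault cleared: reset the clock
        elif self.fault_since is None:
            self.fault_since = now             # first detection: start the clock
        if self.fault_since is not None and now - self.fault_since >= self.DEADLINE:
            self.immobilized = True            # deadline passed: shut the car down
        return self.immobilized

car = SafetyWatchdog()
car.update(["brake sensor offline"], now=0)           # fault appears
print(car.update(["brake sensor offline"], now=299))  # still under 5 min -> False
print(car.update(["brake sensor offline"], now=300))  # deadline hit -> True
```

Note that a cleared fault resets the clock, so brief sensor glitches don't strand anyone; only a persistent fault immobilizes the car.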

It would have to be the government, because of the tragedy of the commons. If one car company doesn't do it, they'll sell that as a feature; and if most don't, it'll be expected that they don't, so the ones that do will be shunned.

When all self-driving cars refuse self-driving mode if they detect any problem, you either manually drive it, or don't go anywhere. And when everyone expects their car to immobilize itself if they don't care for it, they'll care for it a little more than they do now.


Asimov's early robot stories dealt frequently with corporate liability, and it was often the source of the plot conflicts. If a proofreading robot made a mistake resulting in libel ("Galley Slave") or an industrial accident resulted in injury, U.S. Robots was put into the position of having to prove that it was not the fault of the robot (which it never was).

This is why Asimov's U.S. Robots didn't sell you a robot; they leased it to you. The lease was iron-clad, could be revoked by either party at any time, had liability clauses, and had mandatory maintenance and upgrades to be performed by U.S. Robots technicians. If you refused the maintenance, U.S. Robots would repossess, sue, and claim theft if you withheld ("The Bicentennial Man", though unsuccessfully; "Satisfaction Guaranteed").

A properly functioning robot would not disobey the three laws, and an improperly functioning robot was repaired or destroyed immediately ("Little Lost Robot"). Conflicts between types of harm were resolved using probability based on the best information available at the moment ("Runaround"), and usually resulted in the collapse of the positronic brain when it was safe for that to happen ("Robots and Empire", etc.).

What they're talking about here, though, isn't really programming morality into machines in some kind of sentient, Isaac-Asimov sense, but just programming decision policies into machines, which have ethical implications. The ethical questions come at the programming stage, when deciding what policies the automatic car should follow in various situations.

And those ethical decisions will come with even MORE legal liabilities. Even the idea would give any legal department nightmares. They get enough headaches from faulty accelerators. Can you imagine the legal problems they would get from programming hard ethical decisions into their computers? They would get sued out of existence the first time that feature had to be used.

They get enough headaches from faulty accelerators. Can you imagine the legal problems they would get from programming hard ethical decisions into their computers?

I see you've 1) never programmed and 2) run Windows. I agree, I would never get in a Microsoft car considering their shoddy programming, but Microsoft would never manufacture a driverless car for precisely that reason.

Almost all automotive accidents are caused by human failure. Sure, there are exceptions -- I was in a head on crash because of a blown tire, and a blown tire on a megabus killed someone a couple of months ago here in Illinois. But accidents from mechanical failure are rare.

But people cause almost every accident. Have you seen how stupid people drive these days? They race from red light to red light as if they're actually going to get there faster that way. They get impatient. They don't pay attention. They get angry and do stupid things like speed, tailgate, suddenly switch lanes without looking, fumble with their radios, talk on their cell phones, get in a hurry... computers don't do that. There will be damned few if any accidents that are the computer's fault.

Hell, just this morning on the news they showed a car crashing through a store, barely missing a toddler -- the idiot driver thought the car was in reverse. Had he been driving a computer-controlled car, that would have never happened.

Can you be sure the computer will handle all possible inputs correctly?

Of course not. If we get serious about licensing and permitting these vehicles, I suspect the standard will be to compare them with the vast body of statistics we have from human drivers. As long as a company's cars are averaging fewer accidents per mile than humans do, it would be hard to argue that they're not safer, even if they still get in some accidents.
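The licensing standard suggested here could literally be a one-line comparison of accidents per mile against the human baseline. A sketch (the numbers below are made up for illustration; real figures would come from crash statistics):

```python
def safer_than_humans(fleet_accidents, fleet_miles, human_accidents, human_miles):
    """Permit the fleet if its accidents-per-mile rate beats the human baseline."""
    return (fleet_accidents / fleet_miles) < (human_accidents / human_miles)

# Hypothetical human baseline: 6M crashes over 3 trillion vehicle-miles per year,
# i.e. about 2 crashes per million miles.
human = (6_000_000, 3_000_000_000_000)

print(safer_than_humans(120, 100_000_000, *human))  # 1.2 per million miles -> True
print(safer_than_humans(300, 100_000_000, *human))  # 3.0 per million miles -> False
```

In practice a regulator would also want a confidence interval around the fleet's rate, since a young fleet has far fewer miles behind it than the human baseline.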

People are terrible in all the ways you mention above and then some. Strokes, seizures, heart attacks, sneezing, blinking, stray eyelashes, muscle

I think your statistics on accidents are informative but you're missing an important point. With automated cars, we expect accident rates to go down significantly (so saith the summary). So the likelihood an _automated_ car will be _at fault_ in an accident is probably a lot lower than the 25% you presume. (The manufacturer does not care about accidents where the machine is not at fault, beyond complying with crash-safety requirements.)

So the likelihood an _automated_ car will be _at fault_ in an accident is probably a lot lower than the 25% you presume.

Great. Now all you have to do is prove your system wasn't at fault in a court of law--against the sweet old lady who's suing, with the driver testifying that it was your system and not him that caused the accident, and a jury that hates big corporations. And you have to do it over and over again, in a constant barrage of lawsuits--almost one for every accident one of your cars ever gets in.

No, but I can imagine a change to the legal system limiting the liability of the manufacturers of self-driving cars.

If we could know that self-driving cars reduce accidents by 95% (a not unrealistic amount), it would be morally wrong for us to not put them on the road. If the only hurdle the manufacturers had left was the liability issue, then it would be morally wrong for Congress to not change the laws.

Of course, Congress has been morally bankrupt since, oh, about 1789, so I doubt that they'll see this as an imperative. On the other hand, I do imagine the car makers paying lobbyists and making campaign contributions to ensure that self-driving car manufacturers are exempted from these lawsuits, so it could still happen.

Fortunately, my automated car uses vision and radar to detect obstacles. It records everything it sees for 5 minutes before the crash, including the little old lady trying to put sugar in her coffee while making a left turn. Case closed.

Actually, I think you're both missing the biggest issue by focusing on true accidents. I think the OP's point is legitimate, even in the face of your assertion that rates go down. Companies are still taking on the risk, as they are now the "driver". And while the liabilities of those situations are large, there is a situation that is much, much larger.

What happens when there is a bug in the system? Think the liability is bad when one car has a short circuit and veers head-on into another? Imagine if there is a small defect. There are plenty of examples, like the Mariner 1 [wikipedia.org] failure, or the AT&T system-wide crash [phworld.org] in 1990. We've seen the lengths to which companies will go to track down potentially common issues, like the Jeep Cherokee or Toyota sudden-acceleration complaints, because they have the potential to affect all cars. But let's imagine a future where all cars are driverless, and the accident rate is 1/100th of what it is now.

What happens when there is a Y2K-style date bug? When some sensor fails if the temperature drops below a particular point? When a semicolon is forgotten in the code, and the radio broadcast that sends out notification of an accident causes thousands of cars to execute the same buggy re-route routine all at the same time?

There is the very real potential for thousands, or even millions, of cars to all crash _simultaneously_. Imagine everyone on the freeway simply veering left all of a sudden. That should be the manufacturer's largest fear. Crashes one at a time can be litigated and explained away, and the business can go on. The first car company that crashes a few thousand cars all at the same time in response to some input will be out of business in a New York minute.

Meh. Companies already face this. If any one of the thousands of parts in your car fails and causes an accident, the manufacturer can (and usually does) get sued. Ask Toyota or Firestone how that plays out. All we're talking about here is another new part. If the internet was around when power steering or the automatic transmission were invented, I bet there would have been a similar discussion about those. I think the potential liability is a good thing, because otherwise manufacturers don't have much incentive to make safe products.

There's sort of a flaw in your reasoning... the accident rate you cite is with HUMAN drivers. Driverless cars would naturally change it (ideally, lower it). And assuming this, chances are accidents involving driverless cars would mostly occur with human-driven cars and be the human's fault, so no liability there.

However, I suspect that at least initially the software/hardware to enable driverless control of cars would be provided by companies other than the manufacturer, so the manufacturer would not be held liable. They would

Well, the solution to liability is legal - grant immunity as long as the car performs above some safety standard on the whole, and that standard can be raised as the industry progresses. There is no reason that somebody should be punished for making a car 10X safer than any car on the road today.

As far as programming morality - I think that will be the easy part. The real issue is defining it in the first place. Once you define it, getting a computer to behave morally is likely to be far EASIER than getting a human to do so, since a computer need not have any self-interest in the decision making. You'd be hard pressed to find people who would swerve off a bridge to avoid a crowd of pedestrians, but a computer would make that decision without breaking a sweat if that were how it was designed. Computers commit suicide every day - how many smart bombs does the US drop in a year?

But I agree, the current legal structure will be a real impediment. It will take leadership from the legislature to fix that.

Well, the solution to liability is legal - grant immunity as long as the car performs above some safety standard on the whole, and that standard can be raised as the industry progresses.

Yes, that's a possibility. Blanket government immunity in all liability cases would work. The only problem there is that you get into politics. And the first time some Senator's son, or daughter of a powerful political donor is killed in a driverless car, you can probably kiss that immunity goodbye.

The funny thing is that most of the time you are in an airplane, the autopilot (aka "George") is in control. Even when you're landing, ILS can in some cases land the plane on its own. If you've ever been in a plane, chances are you have already put your life in the hands of a computer. I seriously doubt that 25% of the first models will get into accidents. With the new sensors that will be in these cars, the computer will have a full 360-degree view of all visible objects. This is far more than a human can see. Furthermore, computers can respond in a fraction of the time a human can.

Training millions of humans to drive should be the far scarier proposition. Plus, chances are you as an individual will be responsible for your car, and the system designers and manufacturers will be able to afford good lawyers.

And guess what, you're still going to get sued. Because the driver is going to blame your system and claim he wasn't in control at the time, and a slick lawyer is going to realize that he can sue the big, evil corporation for a shitload more than he could get from suing the putz behind the wheel. And even showing up in court and making your case is going to cost you thousands--even if you win.

Current legal liability is split between drivers and their insurers, with a little left over for governments (e.g., bad road maintenance/design) and manufacturers (mechanical defects). Driverless cars could move this liability around, pushing it from drivers and their insurers onto manufacturers and their insurers, but won't actually increase it overall unless the driverless cars crash more (and, of course, we all hope for the opposite). So, the price of the cars might go up, but they'll still be attractive.

I believe the Google cars actually have drivers behind the wheels when they're out on the road (hovering their hands over the steering wheels should they need to take over). I've only ever seen them running truly driverless on closed tracks.

I think the answer to most of your questions is "not in the US". The record payout for a traffic accident here in Norway is around $2 million USD, for a young person seriously crippled for life. Of course, we have a universal health care system, so it's not an apples-to-apples comparison, as that only covers non-medical costs and loss of income, but manufacturers here don't have to risk billion-dollar lawsuits like in the US. If the accident rate should go bat crazy, I imagine they can restrict the cars to only drive under certain conditions.

This is also why I don't believe these "horseless carriages" will ever take off. Horses are actually pretty smart creatures. They don't want to run into obstacles, go over cliffs, etc. And they don't use any of these new-fangled "combustion engines" (which are basically filled with explosives!) to do their job. And these new "engines" have thousands of parts? Do you want to try and figure out what is wrong with one of these devices?

How thick are you? Pretty much all of Asimov's works dealt with how ambiguous and incomplete the three laws were and how many horrible failure modes fall well within the domain of an intelligent machine following them to the letter. That was a warning not to oversimplify AI and machine ethics in general, not a blueprint.

You never actually read Asimov. And if you did, you're the one who failed to grasp the points. The points he even clearly spells out in several of his own essays.

Asimov wasn't writing about the ambiguity or incompleteness of the laws... he wrote the damn laws. And he did consider them a blueprint. He said so. And when MIT (and other) students began using his rules as a programming basis, he was proud!!

It wasn't a warning.

Asimov was writing about robots as an engineering problem to be solved, period. The laws are basic, simple concepts that solve 99% of the problems in engineering a robot. He then wrote science fiction stories dealing with the laws in the manner of good science fiction, that is, to make you think: about the science itself, the consequences of science, the difference between human thinking and logical thinking, the difference between humans and robots... i.e., to think, period.

Example: in telling a robot to protect a human, how far should a robot go in protecting that human? Should it protect that human from self-inflicted harm like smoking, at the expense of the person's freedom? In this case Asimov, again, wasn't writing about the dangers of the laws, or to warn people against them. He's writing about the classic question of "protection/security vs. freedom", this time approached from the angle of the moral dilemma placed on a "thinking machine" as it tries to carry out its directives.

In fact, Asimov frequently uses and explains things through the literary mechanics of his "electropsychological potential" (or whatever the word he used was). In a nutshell, it's a numeric comparison: Directive 1 causes X amount of voltage potential, Directive 2 causes Y amount, and Directive 3 causes Z amount, and whichever of these is the largest determines the behaviour of the robot. In one story, a malfunctioning robot was obeying Rule 3 (self-preservation) to the detriment of the other two, because the potential of Rule 3 was abnormally large and overpowering the others.
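The mechanic described here really is just an argmax over per-law potentials, which makes it easy to caricature in a few lines (the numbers are invented; the stories never give concrete "potential" values):

```python
def robot_behaviour(potentials):
    """Pick whichever directive currently has the highest 'potential'."""
    return max(potentials, key=potentials.get)

# A normally adjusted robot: the First Law dominates.
normal = {"1: protect humans": 9.0, "2: obey orders": 5.0, "3: self-preserve": 2.0}
print(robot_behaviour(normal))   # -> 1: protect humans

# The malfunction described above: Rule 3's potential abnormally large.
faulty = {"1: protect humans": 9.0, "2: obey orders": 5.0, "3: self-preserve": 12.0}
print(robot_behaviour(faulty))   # -> 3: self-preserve
```

The stories' dilemmas mostly come from cases where two potentials are nearly equal, which is exactly where a bare argmax gives unstable behaviour.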

Again, he wrote about robots not as monsters or warnings. He specifically stated many times that his writings were in fact about the exact opposite: that they aren't monsters, but engineering problems created by man and solved by man. Since man created them, man is responsible for them and their flaws. Robots are an engineering problem, and the rules are a simple, elegant solution to control their behaviour (his words).

With the advanced robots to come out of Asimov's works, like R. Daneel Olivaw, their AI was intelligent enough to put things into perspective. With the addition of the Zeroth Law, Olivaw didn't run around playing superhero, snuffing cigarettes and pulling babies from wells. He knew that the survival of humanity as a whole was more important than a single life, and adapted his understanding of the laws accordingly.

Unlike the horrible movie, the book "I, Robot" was a series of short stories dealing with the ambiguity of the laws. (The movie was more some bizarre combination of "free the robots!" mixed with "the three laws are a lie".) Additionally, the ambiguity of the laws came up multiple times in the Robot/Foundation universe, such as in "The Naked Sun" and "The Robots of Dawn."

The laws are paradoxically hard-and-fast yet ambiguous. In any case where any law is essentiall

The three laws of robotics do not begin to cover the issue discussed in the article. This is about choosing the lesser of two evils. About mitigating death and destruction. Do you crash the vehicle into another vehicle in order to avoid a pedestrian? Who is more important? The passengers of the vehicle the software is operating, or passengers outside the control of the software? There is going to be a great deal to figure out, and I'm sure that lawmakers will be involved in this process, as will the courts.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Basically it translates to the needs of the many outweighing the needs of the few, or the one. So the robot car would choose to kill its own passenger to save the bus full of children, if those were the only two options.

Ya, but buses are massive hulks of steel, so who is to say that any children would be hurt by hitting the bus in the first place? Additionally, the driverless system is going to respond and be on the brakes a lot sooner than a person would, and the driverless system would already sense unsafe conditions and be slowing before the accident even had the potential to occur.

Asimov's solution resulted in a mind-reading robot who could erase memories in humans, who then went on to discover the limitation of the Three Laws and decided that the best thing for humanity was to turn Earth into a radioactive wasteland in order to encourage people to leave their underground caves of steel and migrate out into the galaxy. The robot also decided that humans are better off without robots, so he manipulated society into rejecting robots, and in the end there was only one sentient robot in existence

Serious answer: the three laws are not very good. Computers are governed by strict logic, and human-style AI is driven by doing everything you can to bypass the limitations of strict logic with data structures and algorithms too complex and large to predict. A few English-language instructions that have no hard and clear mechanism for analysis with strict logic, and that also lack the necessary interpretation with fuzzy logic, do very little to solve the problem.

I'd post the link, but YouTube access is prohibited...
Go to YouTube, search for the video "Blinky", and be prepared to see an impressive short movie about the helpful family robot that will tend to your EVERY desire.

It's a choice the human driver would have to make, so when first starting your driverless car, it might as well prompt you with a series of moral questions like "should I crash into a bus or veer off a bridge if the situation arises?"


It's a choice the human driver would have to make, so when first starting your driverless car, it might as well prompt you with a series of moral questions like "should I crash into a bus or veer off a bridge if the situation arises?"

Human drivers don't make these decisions in any moral way in the real world, so why would we program anything like that into a car?

Split-second decisions are involved in any accident situation -- or the lack of the ability to decide, resulting in the default. Nobody ponders the morality of the situation when their life is on the line. It's all instinct from that point.

Can the driver select "My life is the most important one"? Because many people would likely opt to run over a thousand baby seals if it would save their life. I'll take evasive maneuvers to save a dog or a cat, but the pelts of many squirrels and bunnies have adorned my car's undercarriage from time to time. Some, however, would be more upset about the damage to their bumper than the fact that Spot is now motionless at the side of the road.

>> If your driverless car is about to crash into a bus, should it veer off a bridge?

The bus should be built to take the occasional crash, particularly in low-speed zones where buses are typically used, so no.

Or, with enough computing power, you can imagine an "unethical" decision tree based on actuarial tables:
1) Calculate location and weight of all known humans on the bus
2) Calculate likely trajectories, damage, etc.
3) Compare worth of each human (using federal tables, of course) in each vehicle
4) Ma
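Continuing that thought: the "unethical" decision tree would presumably bottom out in an expected-cost comparison, something like this sketch (the probabilities and the $9M statistical value of life are invented placeholders, not any actual federal table):

```python
def pick_maneuver(options):
    """Choose the maneuver with the lowest expected actuarial cost (a toy model)."""
    def expected_cost(option):
        # Sum over everyone involved: P(fatality) x assigned dollar value.
        return sum(p_fatal * value for p_fatal, value in option["people"])
    return min(options, key=expected_cost)

VSL = 9_000_000  # placeholder "value of a statistical life"

options = [
    {"name": "hit the bus",         "people": [(0.05, VSL)] * 20},  # 20 riders, low risk each
    {"name": "veer off the bridge", "people": [(0.90, VSL)]},       # just the driver, high risk
]
print(pick_maneuver(options)["name"])  # -> veer off the bridge
```

With these made-up numbers, twenty small risks outweigh one near-certain fatality, which is exactly the kind of conclusion that makes the approach "unethical" to the person in the driver's seat.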

The example was not a great one. How about driving into a wall vs driving into a group of pedestrians? Or cook up whatever scenario you want in which the life of the driver is pitted against the lives of a bunch of others. And be sure to read the wikipedia article on the Trolley Problem before doing so.

Ethics are a matter of conscious decision-making. Until we have conscious machines, we will not have ethical machines. What Marcus is writing about is the application of ethics in the design of machinery, which is a growing topic in its own right, but not nearly as click-inducing (or alliterative) as is 'moral machines'.

Depending on how many other "someone elses" there are. And possibly on an overall Human Value Score brought to you by TransUnion, Experian, Facebook, Google, and Microsoft, weighted by your Medical Insurance Information Bureau records - and theirs.

Depending on how many other "someone elses" there are. And possibly on an overall Human Value Score brought to you by TransUnion, Experian, Facebook, Google, and Microsoft, weighted by your Medical Insurance Information Bureau records - and theirs.

Yeah, how many of these companies are going to take responsibility for deliberately instructing a car to kill someone in a particular scenario? It doesn't matter how many lives the maneuver saves (or what their Human Value Scores are) by avoiding a crash if it does something that has a 99% chance of killing the driver. Drivers (or families of drivers) will still sue, saying that if the car hadn't been following so close or driving so fast or whatever to begin with, no one would have had to die... thus the

That probably came across as nastier than I wanted to be. :-( You probably haven't thought through the same scenarios I have -- for example, a group of pedestrians is crossing the street illegally, and your choice is to plow through them or smash into a parked car at low speed, which probably won't hurt you. For most people, that's an easy choice to make.

That would be a problem if... we had to choose between running over two pregnant women vs. running over 3 adult male pedestrians? The fact is that unless there's a law stating which alternative is correct, the manufacturer will choose the less expensive option, whatever that means.

Thank you, I was looking for a good example. Copyright would be another one. Without agreement as to what's moral (which I don't see any signs of being around the corner), this is little more than a masturbatory (speaking of unaligned morality...) exercise.

Is it moral to kill? Some say no, never. Others say only in response to a clear and present danger. Still others have exceptions for if a person has done something heinous, or whenever their government (however they define that) declares a war.

No competent engineer would even consider adding code to allow the automated car to consider swerving off the bridge. In fact, the internal database the automated car would need of terrain features (hard to "see" a huge dropoff like a bridge with sensors aboard the car) would have the sides of the bridge explicitly marked as a deadly obstacle.

The car's internal mapping system of drivable parts of the surrounding environment would thus not allow it to even consider swerving in that direction. Instead, the car would crash if there were no other alternatives. Low level systems would prepare the vehicle as best as possible for the crash to maximize the chances the occupants survive.

Or, put another way: you design and engineer the systems in the car to make decisions that lead to a good outcome on average. You can't possibly prepare it for edge cases like dodging a bus with 40 people. Possibly the car might be able to estimate the likely size of another vehicle (by measuring the surface area of the front) and weight decisions that way (better to crash into another small car than an 18-wheeler), but not everything can be avoided.
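That frontal-area heuristic, combined with the "bridge edge is never an option" rule described two comments up, might look something like this sketch (the obstacle fields and numbers are invented for illustration):

```python
def least_bad_target(obstacles):
    """If a collision is unavoidable, prefer the smallest vehicle ahead,
    and never consider anything the map marks as a deadly obstacle."""
    candidates = [o for o in obstacles if not o["deadly"]]
    return min(candidates, key=lambda o: o["frontal_area_m2"])

obstacles = [
    {"name": "bridge edge", "frontal_area_m2": 0.0, "deadly": True},   # mapped drop-off
    {"name": "18-wheeler",  "frontal_area_m2": 9.5, "deadly": False},
    {"name": "small car",   "frontal_area_m2": 2.2, "deadly": False},
]
print(least_bad_target(obstacles)["name"])  # -> small car
```

The point is that the "ethics" live in the filter and the cost function chosen at design time, not in any deliberation by the car at the moment of the crash.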

Automated cars won't be perfect. Sometimes, the perfect combination of bad decisions, bad weather, or just bad luck will cause fatal crashes. They will be a worthwhile investment if the chance of a fatal accident were SIGNIFICANTLY lower, such that virtually any human driver, no matter how skilled, would be better served riding in an automated vehicle. Maybe a 10x lower fatal accident rate would be an acceptable benchmark?

If I were on the design team, I'd make 4 point restraints mandatory for the occupants, and design the vehicle for survivability in high speed crashes including from the side.

If the car was following the speed limit and staying in its lane and the bus swerved into its lane, then it wouldn't have to do anything except brake. The bus is in the wrong. If a malfunction happened and the car lost control, well, then it doesn't have control and can't really do anything anyway. Maybe it was on ice and got control back? Well, it's probably in some kind of automated loop trying to just stop itself as quickly as it can, so it's not going to try to swerve anyway. So I agree, this is a

This is my view of it. If you drive your car off the bridge, you have a very high chance of dying. If you go head-on into the bus, there's probably a higher chance that you will survive, as long as the speeds aren't too high. The people on the bus will be fine regardless, because their vehicle is so much bigger than yours.

You are right about the four-point restraints. I can't believe this isn't mandatory yet. They could be doing a lot they aren't doing to keep people safe, because they'd rather keep people

Screw the bus. I don't care about the bus. The bus is big and will likely barely feel the impact anyway. I care about the fact that I don't want to die. Why would I buy and use a machine that would choose to let me die?

And I posit that the author has failed to consider freedom of travel, freedom of choice, and other basic individual rights/freedoms that mandating driverless cars would run over (pun intended).

It depends on whether the bus is empty or full of kids. This is just one example... I doubt there will ever be enough information to program for all circumstances. More likely, if something happens, the car will shut down and wait for human instruction on how to proceed. Wouldn't there be a network in which the robotic cars could warn each other in time to avoid having to make such a choice in the first place?

Also, I do not think it will be the gap between how safely an automated vehicle drives as compared to a human cou

>We should automate vehicles to take over the mundane tasks of driving the vehicle and leave the decision making to the human operator. We are the highest order of intelligence for making such decisions (thus far).

While I like the idea, the sorts of decisions being discussed aren't ones you can wait for input on; they require an immediate response, not asking the driver to pay attention and then choose something. (Never mind the fact that humans aren't necessarily all that good at those decisions either...)

Wait. If the driverless car is so damn great, how did it let itself get into a situation where the only options are to hit the bus or drive off a bridge? I can make that kind of mistake on my own, thanks. I expect automated cars to avoid this kind of situation, else why bother having them?

One moral dilemma for the driverless society regards the speed at which a destination can be reached, and individual choice in this matter. Both speed and acceleration reduce fuel economy; driverless cars will know this, and society will demand overall standards for fuel efficiency. I already envy the kids who can afford the 'drive like Andretti' software.

If your driverless car is about to crash into a bus, should it veer off a bridge?

Physics says "no". The bus probably weighs an order of magnitude more than your vehicle... The passengers might not even notice that you ran into them, and mistake the collision for hitting a pothole. The real question would be something like a dump truck following too closely behind a motorcycle...
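The order-of-magnitude claim holds up under simple momentum conservation; assuming a perfectly inelastic head-on collision and purely illustrative masses and speeds:

```python
def delta_v(v_rel, m_mine, m_theirs):
    """Speed change each vehicle experiences in a perfectly inelastic
    collision: the relative speed splits inversely to mass."""
    return v_rel * m_theirs / (m_mine + m_theirs)

v_rel = 15.0                # m/s closing speed (assumed)
car, bus = 1500.0, 15000.0  # kg; the bus is roughly 10x the car

print(round(delta_v(v_rel, car, bus), 1))  # car occupants: 13.6 m/s jolt
print(round(delta_v(v_rel, bus, car), 1))  # bus occupants: 1.4 m/s bump
```

The car's occupants absorb nearly the full closing speed; the bus's occupants feel about a tenth of it.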

In general, I want machines to be as stupid and fail-safe as possible. Think: missile defense systems around an airport... The most l

The AI we currently use cannot have morals and ethics programmed in. Weak AI as we have today is the picture-perfect tool, but as a tool it can't know or understand the world in a way that would let it make a moral or ethical choice. Weak AI can only ever do what it was intended to do and nothing more. Let's take Watson as an example. It's being used in medicine now. Say a patient asked it a question about what they were dying of, but suppose that if the patient knows too much they will sink into a depression a

Almost every filtering system for the Internet is primarily based on blacklists... lists of URLs, lists of words... because there is no computer program capable of the morality required to filter the Internet with any level of adequacy.

Until such a program, which requires no physical moving parts (unless you consider an automated head-slapping device part of an effective filtering system), can tell what's obscene and what's not obscene... why would you expect a program to know why it should hit the sheep
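For what it's worth, the blacklist approach described above really is this mechanical; the list contents below are placeholders:

```python
BLOCKED_URLS = {"example.com/bad-page"}   # placeholder entries
BLOCKED_WORDS = {"badword"}

def is_blocked(url, page_text):
    """Pure list membership: the filter has no concept of obscenity,
    only literal matches against its blacklists."""
    if url in BLOCKED_URLS:
        return True
    return bool(set(page_text.lower().split()) & BLOCKED_WORDS)

print(is_blocked("example.com/bad-page", "anything"))   # True
print(is_blocked("example.com/fine", "harmless text"))  # False
```

Nowhere in there is a judgment about *why* something is blocked; that judgment lives entirely in whoever curated the lists.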

Anyone ever seen a car/bus impact? The bus is usually a little messed up, and the car is usually cut to ribbons, and they pour the occupants of the car out, while the bus occupants are generally unharmed.

It may not be politically correct, but size=safety for the people in the larger vehicle. That's one reason I'll pay for the gas for my 3 young children to be shuttled around in a Suburban.

On my motorbike, I'd feel much safer if all the cars around me were driverless. Human car drivers, who so often tend to blank out half-unconscious and fail to check blind spots, are the leading cause of death for bikers.

it would immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work

The first charge is that this would be an immoral risk to take because you might hurt yourself. In my understanding of morality, it is up to each individual to decide for themselves which risks and consequences and injuries to themselves are immoral. For example, I would not go skydiving, but other people choose to do so. They are taking a risk I choose not to take, but I do not think they are immoral for taking the risk, and I do not think an increase in the magnitude of risk alters the morality of the situation, because they are risking themselves. As another example of higher risk, some people choose to try to circumnavigate the globe on solo flights or boat trips. This is a huge risk; some people have perished in the attempt. But the fact that they were risking serious hurt to themselves does not render their decision immoral.

The second charge is that you are risking hurting another person. But again, this is their risk to take. They decide to travel on a road that includes other human drivers knowing that doing so incurs some risk of injury. Taking that risk is not immoral. As an analogous example, wrestlers or boxers choose to fight each other knowing that there is a risk of injury to each other, but doing so is not immoral because the risk is voluntarily accepted by each participant.

Ideally, travelers could choose between a variety of competing travel arrangements, including roads that might choose to exclude human drivers for the safety of travelers, or roads that choose to allow them for those who desire to take that risk. What would be truly immoral would be to forcibly monopolize some or all of the transportation options, so that people do not have the freedom to create differing transportation alternatives that compete with one another. This would limit the choices of travelers such that some might have to take risks they do not want (e.g., roads with both human and automated drivers, because pure-automated roads are not available), or cannot choose to take risks that they find rewarding, such as choosing to drive when automated drivers are available.

Dr. Walter Block has written an entire book [amazon.com] on how the American highway system is currently subject to this kind of immoral forced monopolization, currently causing 40,000 needless traffic fatalities per year, and how the elimination of this immorality is entirely practical and beneficial.

Kudos to Gary Marcus for raising such a provocative point. I sneer however at his suggestion that we bring in the legislators and lawyers to help us to deal with the problems. That is a naive/liberal view as opposed to a libertarian/cynical view.

I cynically don't expect enlightened laws ever in our future. Instead we will depend on the courts to once again try to apply laws and principles of centuries past to the problems of today. You could say that's the American Way.

What I find particularly worrying is that, at least initially, many of the ethical choices programmed into these machines will have been written by people who tend to be heavy on the Asperger's side of empathy (as many technically inclined people are). Should we really be leaving decisions like this to people who literally can't understand how most of humanity behaves?

The actual cause of most accidents can be boiled down to one simple rule that is broken: "failure to yield". Most people drive like complete dicks because they think they are more important than everybody else on the road. The driverless system will not be driving like that in the first place, so it will hardly ever get into a traffic collision, and even when it does it will minimize the damage because its responses are not emotional.

Why have cars at all if we aren't allowed to drive them? Rip up all the highways, and replace them with a gigantic autonomous rail system.

But no...

That's not what's at stake here. The truth is that if I'm not in control of my whereabouts anymore, then how can I be sure I'm making decisions for myself? Without a car, you might find yourself imprisoned by the distance your two feet can take you. Someone out there will applaud this along the same premise that "those who obey the law, have nothing to hide, and my gosh, if a driverless car prevents a CRIMINAL from driving to a crime, then the system pays for itself!", but that's not the point. It's not about morality, it's about control, and if someone is stopping me from driving my own car, then who's stopping them from driving theirs? When we fork over control of our transportation, then will come the day that we're isolated into districts, where the equivalent of passports will be needed from county to county. If the car won't let me drive it, how can I be sure that the car will obey me at all?

If all the cars in the world are autonomous, and computer controlled, well gee... what's to stop "someone" (anyone) from turning them all into a gigantic autonomous system that (I'm about to Godwin this...) conveys everyone to a huge concentration camp set to autonomous genocide?

It's not morality that the author is arguing in favor of.

It's our own autonomy that he's arguing against.

Someone will have control of these cars. Somewhere there will be levers.

Let's not imagine these automatic apparatuses to be forces of nature beyond an individual human's control. These are contrived, artificial, unnatural man-made objects, mechanical at their core.

While the vast majority of collisions are avoidable, I'd hesitate to say that 100% are. Sometimes there just is no "good" choice, only bad and worse. The thing is I'd like the car to choose bad over worse.

Granted human drivers haven't solved this problem yet either, so I'm not sure how much different it is just because a machine is driving.

Morality is also a difficult thing to program because it's all subjective. Do you program it to kill the driver instead of an innocent pedestrian? How about 2 pedestrians

If you cannot safely stop in the visible distance between you and any obstacle, you are going too fast.

This includes being able to stop if the vehicle in front of you suddenly stops. It includes being able to stop should there be a boulder in the middle of the road just over that rise, or around that corner.
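That rule can be checked with standard kinematics; the reaction time and friction coefficient below are typical assumed values, not measured ones:

```python
def stopping_distance(v_ms, t_react=1.5, mu=0.7, g=9.81):
    """Total stopping distance = reaction distance + braking distance:
    d = v * t_react + v**2 / (2 * mu * g)."""
    return v_ms * t_react + v_ms ** 2 / (2 * mu * g)

# At 100 km/h (~27.8 m/s) you need roughly 98 m to come to a halt, so
# if you can only see 60 m of road ahead, by this rule you are going
# too fast.
print(round(stopping_distance(27.8), 1))  # -> 98.0
```

An automated car can run this arithmetic continuously against its sensor range; most human drivers never do it at all.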

So long as safe distances and speeds were observed, many incidents could be avoided. If all vehicles are "aware" of all other vehicles in their area and possibly

In every instance I can think of, accidents happen due to driver carelessness, inability, or simply due to knowledge a driver could not have

While driverless cars should greatly reduce the frequency of collisions by eliminating carelessness, inability, and increasing the amount of data available for decision making, there is some knowledge that just will never exist, and simply can't be known. Things happen that aren't predictable, and aren't always avoidable. I don't expect my driverless car to be able to anticipate the deer jumping out on to the highway from behind a tree, nor do I expect it to notice the kid who appears from behind a parked c