When facing an unavoidable accident, self-driving cars should save the lives of their passengers regardless of the harm caused to others.

Pros

In collisions between multiple vehicles, it is unlikely that each self-driving car could both identify and carry out the truly optimal action that minimises overall harm for everyone. In the face of this uncertainty, each car is probably better off acting only in the interest of its own passengers.

The designers of self-driving cars have a legal responsibility to minimise harm to the primary users (i.e. passengers) of their product.

The adoption of self-driving cars will be limited if people fear that their car may deliberately choose to harm them.

When people purchase a product, they are purchasing the ability to use that product for their own ends and desires. To the extent that a product can decide between outcomes, it should always decide in a manner that matches the desires of its owner.

Behaviour that does not prioritise the owner creates an incentive for owners to hack or modify their car's programming.

Most drivers would instinctively act to save their own lives in an unavoidable accident; a passenger-first rule simply replicates this behaviour.

Cons

It is immoral for self-driving cars to deliberately favour one human life over another. In situations where they must, the only moral option is to choose at random.

It is immoral for a self-driving car not to try to save the greatest number of lives and/or prevent the greatest amount of harm.

Adding additional safety features, such as always favouring the safety of passengers, enables offsetting behaviours on the part of car designers and car passengers that, at the margins, increase risk in other areas.

If harm must be caused by a self-driving car, it should be inflicted on those most at fault for the accident.

Saving passengers' lives at all costs is not necessarily what those passengers would want, given time to consider the alternatives.

If the passengers in question were instead the third party at risk (i.e. pedestrians or passengers in another vehicle), they would want the self-driving car to take their lives into account. It is moral hypocrisy for passengers to favour only their own lives while refusing to extend the same consideration to others.

Giving a machine the ability to make moral decisions about greater or lesser harm is dangerous.

Owners whose cars harm others in order to protect them expose themselves to liability for the harm their car causes.