27 March 2019

Driverless Car Ethics

Welcome back. When my son, Noah, was working his way through a B.S. in Mechanical Engineering, we seldom spoke about his technical courses. But I do remember discussing one assignment he had in an elective humanities course; it pertained to what I now know to be labeled the trolley problem.

Although this thought experiment in ethics has been around for over a century with any number of variants, consider the following: A runaway trolley is going to hit several people. You can pull the lever, directing the trolley to a sidetrack, where it will strike only one person. Should you pull the lever? Is there a moral difference between doing harm and allowing harm to happen?

Noah graduated and moved on; the trolley problem, however, now comes up in studies of how to program driverless vehicles (aka self-driving or autonomous vehicles). I thought you’d find examples of that work of interest.

The Social Dilemma

A 2016 study by researchers from France’s University of Toulouse Capitole, the University of Oregon and MIT examined the trolley problem in six online surveys of 182 to 393 U.S. participants.

Overall, the participants seemed to agree that autonomous vehicles (AVs) should be programmed to be utilitarian, minimizing the number of casualties. Yet given the incentive for self-protection, few would be willing to ride in utilitarian AVs. Further, they would not approve of regulations mandating self-sacrifice, and such regulations would make them less willing to purchase an AV.
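As a purely illustrative sketch (my own, not any research team’s actual system), a strictly utilitarian controller would simply pick whichever maneuver minimizes expected casualties, counting the vehicle’s occupants the same as anyone else. All names and numbers here are hypothetical:

```python
# Illustrative sketch only: a strictly utilitarian decision rule that
# picks the maneuver with the fewest expected casualties. The occupants
# of the AV itself are weighted the same as pedestrians, which is exactly
# the self-sacrifice property survey respondents said they would not buy.

def choose_maneuver(maneuvers):
    """maneuvers: dict mapping a maneuver name to expected casualties."""
    # min() over a dict iterates its keys; ties go to the first-listed option.
    return min(maneuvers, key=maneuvers.get)

# A trolley-style dilemma: stay in lane (several casualties) vs. swerve
# into one bystander vs. sacrifice the AV's own occupant.
options = {
    "stay_in_lane": 5,    # hit the group ahead
    "swerve": 1,          # hit a single bystander
    "self_sacrifice": 1,  # occupant of the AV
}
print(choose_maneuver(options))  # → "swerve"
```

The sketch makes the survey’s tension concrete: the rule is indifferent between "swerve" and "self_sacrifice" (both cost 1), yet respondents strongly preferred riding in a car that would never choose the latter.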

German Guidelines

Report of Germany’s Ethics Commission on Automated and Connected Driving (see link in P.S.).

Trying to stay ahead of the issue, Germany’s Federal Minister of Transport and Digital Infrastructure appointed an Ethics Commission on Automated and Connected Driving.

The commission’s 2017 report included 20 ethical rules. One, for example, states in part: In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited…

Study Results at Odds with Guidelines

A 2018 study by researchers from Germany’s Osnabrück University found that what is morally justified may not be socially acceptable. The study had 189 participants (average age 24) complete several virtual reality simulations of driving alone on different two-lane roads. Obstacles emerged in both lanes, giving participants four seconds to switch or not switch lanes before hitting someone (the collision itself wasn’t shown).

The study found that nearly all participants would change lanes to hit fewer people; over half would sacrifice themselves to save others, especially as the number saved grew; most would hit an elderly person rather than an adult, and even more readily rather than a child; and most would swerve onto the sidewalk to save a greater number of people.

Introducing Probabilities

A more recent study by a research team from Germany’s Max Planck Institute for Human Development and the University of Göttingen examined trolley-problem options when the probabilities of hitting the pedestrian or bystander were known or unknown (872 U.S. participants). The team also considered how people retrospectively evaluate those options after a road user has been harmed (766 U.S. participants).

They found that participants placed particular weight on staying in the lane. This tendency was seen when probabilities were known or uncertain and in hindsight after accidents occurred. Staying in the lane was considered more morally acceptable, particularly for autonomous vehicles.
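One hypothetical way to model that lane-keeping preference (again my own illustration, not the study’s model) is to charge any lane-leaving maneuver a fixed moral penalty on top of its expected harm:

```python
# Illustrative sketch: expected-harm minimization with a hypothetical
# "lane-keeping" penalty added to any maneuver that leaves the current
# lane. The penalty value is an assumption, not a figure from the study.

def choose_with_lane_bias(maneuvers, lane_penalty=0.5):
    """maneuvers: dict mapping name -> (expected_casualties, leaves_lane)."""
    def cost(name):
        casualties, leaves_lane = maneuvers[name]
        return casualties + (lane_penalty if leaves_lane else 0.0)
    return min(maneuvers, key=cost)

# Under uncertainty, a modest lane-keeping penalty can tip the choice
# toward staying put even when swerving has slightly lower expected harm.
options = {
    "stay_in_lane": (1.2, False),  # cost 1.2
    "swerve": (1.0, True),         # cost 1.0 + 0.5 = 1.5
}
print(choose_with_lane_bias(options))  # → "stay_in_lane"
```

With the penalty set to zero the rule collapses back to pure casualty minimization, so the single parameter captures the gap the study observed between utilitarian calculus and what people judged acceptable.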

International Survey

In the most recently published study, collaborators from MIT, Harvard, the University of British Columbia and the University of Toulouse Capitole reported on nearly 40 million trolley-problem decisions made by people from 233 countries and territories.

Two key findings were:

- Globally, the strongest preferences are for saving humans over animals, saving more lives and saving young lives.
- Three distinct moral clusters of countries could be identified, suggesting that groups of territories might converge on shared preferences, while between-cluster differences may pose problems.

Wrap Up

Research on programming driverless cars continues. The MIT-led study noted that, even with the large sample obtained, it could not do justice to the complexity of autonomous vehicle dilemmas. We need a global conversation to express our preferences to those who will program the vehicles and to those who will regulate them.