Conference: "Moral Algorithms: the Ethics of Autonomous Vehicles"

The Center for Ethics and Human Values and its co-sponsors present a one-day conference on Moral Algorithms and Self-Driving Cars.

The development of autonomous vehicles requires us to operationalize moral judgment. If vehicles are to make decisions that minimize harm in crash-imminent situations, we need to address vexing questions about what minimizing harm actually requires. Under what conditions is it permissible to cause harm to some in order to avoid harm to others? Are the number of victims and the severity of harm all that matter morally, or does it also matter whether we cause the harm or merely allow it to occur? Does it matter whether the harm we cause is the means by which we avoid a greater harm, or merely an unintended side effect? These questions can no longer be confined to the seminar room. They arise in the laboratory as we design intelligent systems to make decisions formerly left to humans. To address these situations, we need to develop “moral algorithms”: algorithms that resolve “tragic choices” in morally defensible ways.