AI systems can process large quantities of data, detect regularities in them, draw inferences from them, and determine effective courses of action — sometimes faster and better than humans and sometimes as part of hardware that is able to perform many different, versatile, and potentially dangerous actions. AI systems can be used to generate new insights, support human decision making, or make autonomous decisions. The behavior of AI systems can be difficult to validate, predict, or explain: AIs are complex, reason in ways different from humans, and can change their behavior through learning. Their behavior can also be difficult for humans to monitor in the case of fast decisions, such as buy-and-sell decisions in stock markets. AI systems thus raise a variety of questions (some of which are common to other information-processing or automation technologies) that can be discussed with the students, such as the following:

Do we need to worry about their reliability, robustness, and safety?

Do we need to provide oversight or monitoring of
their operation?

How do we guarantee that their behavior is consistent
with social norms and human values?

How do we determine when an AI has made the
“wrong” decision? Who is liable for that decision?

How should we test them?

For which applications should we use them?

Who benefits from them with regard to standard of
living, distribution and quality of work, and other
social and economic factors?

Rather than discussing these questions abstractly, one can ground them in concrete examples: under which conditions, if any, should AI
systems be used as part of weapons? Under which
conditions, if any, should AI systems be used to care
for the handicapped, elderly, or children? Should
they be allowed under any conditions to pretend to
be human (UK Engineering and Physical Sciences
Research Council 2011; Walsh 2016)?

Case Studies
Choices for case studies include anecdotes constructed to illustrate ethical tensions, actual events (for example, in the form of news stories), or science fiction movies and stories.

News headlines can be used to illuminate ethical issues that are current and visible and that may affect the students directly in their daily lives. Examples include “Man killed in gruesome Tesla autopilot crash was saved by his car’s software weeks earlier” from The Register (Thomson 2016) and “Microsoft’s racist chatbot returns with drug-smoking Twitter meltdown” from The Guardian (Gibbs 2016).

Science fiction stories and movies can also be used to illuminate ethical issues. They are a good source for case studies since they often “stand out in their effort to grasp what is puzzling today seen through the lens of the future. The story lines in sci-fi movies often reveal important philosophical questions regarding moral agency and patiency, consciousness, identity, social relations, and privacy to mention just a few” (Gerdes 2014). Fictional examples can often be more effective than historical or current events, because they explore ethical issues in a context that students often find interesting and that is independent of current political or economic considerations. As Nussbaum puts it, a work of fiction “frequently places us in a position that is both like and unlike the position we occupy in life; like, in that we are emotionally involved with the characters, active with them, and aware of our incompleteness; unlike, in that we are free of the sources of distortion that frequently impede our real-life deliberations” (Nussbaum 1990).

Science fiction movies and stories also allow one to
discuss not only ethical issues raised by current AI
technology but also ethical issues raised by futuristic
AI technology, some of which the students might
face later in their careers. One such question is whether we should treat AI systems like
humans or machines in the perhaps unlikely event
that the technological singularity happens and AI
systems develop broadly intelligent and humanlike
behavior. Movies such as Robot & Frank, Ex Machina,
and Terminator 2 can be used to discuss questions
about the responsibilities of AI systems, the ways in
which relationships with AI systems affect our experience of the world (using, for example, Turkle [2012] to guide the discussion), and who is responsible for solving the ethical challenges that AI systems encounter (using, for example, Bryson [2016] to guide the discussion). The creation of the robot in
Ex Machina can be studied through utilitarianism or
virtue ethics.

Teaching Resources
The third edition of the textbook by Stuart Russell
and Peter Norvig (2009) gives a brief overview of the
ethics and risks of developing AI systems (section
26.3). A small number of courses on AI ethics have
been taught, such as by Jerry Kaplan at Stanford University (CS122: Artificial Intelligence — Philosophy,
Ethics, and Impact) and by Judy Goldsmith at the
University of Kentucky (CS 585: Science Fiction and
Computer Ethics). Other examples can be found in
the literature (Bates et al. 2012; Bates et al. 2014; Burton, Goldsmith, and Mattei 2015, 2016a). Burton,
Goldsmith, and Mattei are currently working on a
textbook for their course and have already provided
a sample analysis (Burton, Goldsmith, and Mattei
2016b) of E. M. Forster’s The Machine Stops (Forster
1909). A number of workshops have recently been
held on the topic as well, such as the First Workshop
on Artificial Intelligence and Ethics at AAAI 2015, the
Second Workshop on Artificial Intelligence, Ethics,