the old age home reinforces this) Frank’s friend Robot
isn’t gone, because he planted a garden, and it’s still
growing, and its “fruits” — the stolen jewels, which
Frank is able to pass on successfully to his kids
because he had that place to hide them — are in a
sense the legacy of that relationship and collaboration. So the movie may also be making an argument
about a kind of selfhood that exists in the legacy we
leave in the world, and that Robot’s legacy is real,
even though he himself is gone.

This movie has a very strong virtue ethics focus:
whether one considers the plan to conduct the jewel
heist Robot’s, or Frank’s, or a jointly derived plan, the
terms on which Robot agrees to let the heist go forward push Frank to new levels of excellence at the
particular skill set required to be a jewel thief. On
multiple occasions, Frank’s experience and his well-established habitus as an observer of potential targets lead him to be better than Robot at assessing a
given situation. When Frank reevaluates, late in the
movie, whether that’s the right sort of excellence to
strive for, that readjustment seems to take place in
terms of virtue ethics — What sort of self do I want
to be? What sort of legacy do I want to leave? —
rather than remorse for having broken the law.

Does Utilitarianism Help?

Utilitarianism can offer us
a new way of contextualizing why Frank’s criminal
tendencies should be understood as ethically wrong.
Utilitarianism is a form of consequentialism; one variant, “rule utilitarianism,” justifies a social norm
against theft in terms of the long-term consequences
for society. If people typically respect each other’s
property rights, everyone is better off: there is less
need to account for unexpected losses, and less need
to spend resources on protecting one’s property.
When some people steal, everyone is worse off in
these ways, though the thief presumably feels that
his ill-gotten gains compensate for these losses.

Although a major plot theme of the movie is their
struggle to avoid capture and punishment for the
theft, Robot and Frank show little concern for the
long-term social consequences of their actions. Frank
justifies his career in jewel theft by saying that he
“deals in diamonds and jewels, the most value by the
ounce, lifting that high-end stuff, no one gets hurt,
except those insurance company crooks.” This quote
is later echoed by Robot, who quotes Frank’s words
back to him to justify its own actions. This raises questions about
what an ethical design of an eldercare robot would
entail — should it have preprogrammed ethics, or
should it allow the humans around it to guide it in its
reasoning? There are some basic, high-level decisions
a designer will have to make about how the robot
should act.

Conclusions and Additional Questions

The movie raises a number of important questions about how an eldercare robot should behave, in relating to the individual person being cared for, and in relating to the rest of society. Based on what we see of Robot’s behavior, we can make some guesses about how Robot’s ethical system, or perhaps just its goal structure, has been engineered. These projections can and should lead to a serious discussion, either in class or in writing, about whether this is how we think that eldercare robots should decide how to act. Some possible questions for discussion about eldercare bots:

If an elderly person wishes to behave in ways that violate common social norms, should a caretaker robot intervene, and if so, how?

If the elderly person seriously wants to die, should the
robot help them to die?

If the elderly person asks the robot to help make
preparations for taking his or her own life, does the
robot have an obligation to inform other family members?

If the elderly person wants to walk around the house,
in spite of some risk of falling, should the robot prevent it?

Extrapolating into other domains, a caretaker robot
for a child raises many additional issues, since a child
needs to be taught how to behave in society as well,
and a child’s instructions need not be followed, for a
variety of different reasons.

Many of these questions touch on earlier fields of
ethical inquiry including medical ethics: Should
there be limits on patient autonomy? What do we do
when two different kinds of well-being seem to conflict with each other? They also converge with some
key questions in education ethics: How do we train
young people to take part in society, and to weigh
their own concerns against the good of others? What
methods of informing/shaping them are most effective? These very general questions are important, but
they become easier to talk about in the context of a
particular story and set of characters.

Teaching Ethics in AI Classes
Since AI technologies and their applications raise ethical issues, it makes sense to devote one or more lectures of an introductory AI class (or even a whole
course) to them. Students should (1) think about the
ethical issues that AI technologies and systems raise,
(2) learn about ethical theories (deontology, utilitarianism, and virtue ethics) that provide frameworks
that enable them to think about the ethical issues,
and (3) apply their knowledge to one or more case
studies, both to describe what is happening in them
and to think about possible solutions to the ethical
problems they pose; (1) and (2) could be covered in
one lecture or two separate lectures. In case of time
pressure, (1) through (3) could all be covered in one
lecture. An additional case study could be assigned as
homework, ideally a group-based one. AI ethics is a
rich topic that can also support a full-semester
course, with additional readings and case studies.