This chapter begins our serious discussion of ethics, which is
(roughly speaking) the study of "the good," or of "right and wrong." It turns
out that there are many different ways to define ethics, many different
theories of ethics, and in fact, many different kinds of theories of ethics,
as well as many different application areas. We will be able to explore only
a little of this vast territory, and therefore we must make some tough choices
about what to leave out; we will put some emphasis on very recent
approaches that are based on science, in particular, socio- and evolutionary
biology, and cognitive science.

In chapter 4, we saw that narratives, and even physical objects like
bridges, chairs and mugs, embody values in
definite, natural ways. Values in this sense are assessments of what is
valuable or important for some individual or social group at
some time. Such values can be relatively explicit, but they can also be
implicit, and even rather hidden. In a story, evaluative material plays the
crucial role of connecting the events reported to the values of the social
group within which the story is being told, and the same relation holds (but
perhaps in less obvious ways) for physical objects, as well as for
non-physical objects like standards, programs, and even theorems.

We have also previously argued that the technical and the social are
inseparable, and also that the social and the ethical are inseparable, so that
both social and ethical issues are inseparable from technical issues. A
deeper level of analysis notices that it is mainly through values that the
technical connects to the social. This chapter will clarify the relation
between ethics and values, and the nature of ethical theories, and will
consider some examples in a bit of detail.

Since values determine what is considered valuable, they also
determine what is considered "good" by an individual or group. Thus
values are closely connected with ethics. However, the approach taken here is
different from that of much of what is usually called ethics. One relevant
distinction is as follows: a descriptive approach tries to accurately
describe what some phenomenon is actually like, whereas a normative
approach tries to say what it should be like. Descriptive
theories are essentially models of some kind, which can be compared to
actual phenomena for validation; if the model does not match the phenomena
well enough, then the model is wrong. However, normative theories cannot be
validated in this way: if there is a discrepancy, then one says that the
phenomenon is wrong, rather than the model! Thus descriptive theories try to
be scientific, while normative theories do not. For example, a descriptive
grammar tells how people actually speak (or write) some language, whereas
a normative grammar tells how (someone thinks) people ought to speak
(or write) it.

Similarly, descriptive ethics describes what people actually value,
whereas normative ethics tells what they ought to value. This raises a
second relevant distinction, between approaches that are based on
behavior and those that are based on intention; most Western
approaches are based on behavior, but I am more inclined to consider the
underlying values, and hence intentions; some reasons for this are
discussed in Section 5.3 below. Notice that our
analyses of narrative are descriptive,
and are intended to bring out implicit values; these analyses are not
normative, since we do not say what values should be there. A more
conventional version of this distinction would say that descriptive ethics
says how people actually behave, whereas normative ethics says how people
should behave; however, I question this on the grounds that "how they actually
behave" is too broad, and fails to capture the kind of behavior that is of
interest for ethics.

It should not be thought that normative approaches are arbitrary and
useless, despite the fact that some of them have been, to a greater or lesser
extent. For example, if you want to be accepted as an educated person, you
need to have certain habits of speech, and you need to avoid others; this can
be described by a normative grammar, and many such grammars can be found in
ESL (English as a Second Language) schools around the world. On the other
hand, some normative grammar that is still taught today can have a
negative effect, e.g., use of the subjunctive mood in hypothetical clauses, as
in

If it were to rain, one might become wet.

To talk this way is to risk seeming an elitist snob, though of course this was
not always the case, and there are still some places where such talk is the
norm, e.g., among Oxford dons.

Most ethical systems (and most grammars) have been normative, and there
have been many different attempts to justify such systems; we will explore why
some of these justifications are significantly better than others. In fact,
we will seek to use descriptive ethics to justify some normative ethical
principles. But first, let's look at some ethical issues that are of special
interest to computer science students.

Universities, including UCSD, are places where important ethical issues
constantly arise. The one that has perhaps captured the most attention is
cheating by students, and we will consider this in some detail, although we
will also mention some others. It is interesting to notice that the use of
information technology raises new ethical issues about cheating, or at least
raises old issues in new ways.

UCSD, and CSE, have a variety of resources regarding student cheating, many
of which are online. I will assume that it is already relatively clear what
should and should not be done, and instead will focus on arguments given in
favor of and against various behaviors, not just their content, but also their
form, and their presuppositions. This will help us see the values that are
involved, and will also help us to prepare for material on theories of ethics to come later, where we will be concerned
with the quality or "strength" of various arguments.

We will consider five documents. In a scientific approach, documents are
regarded as data, as neutral texts to be examined critically and
objectively, rather than as having some special status arising from the
authority of their authors; we will seek to discover their underlying values,
just as we did with narratives. This is called textual analysis (or, to
be a bit fancy, hermeneutics); when documents are treated this way, they
are often called texts, giving that common word a special nuance. These
are the documents:

Prof. Scott Baden's Integrity of Scholarship Agreement, a handout
specifically for use in computer science classes, which has been copied,
edited, and used by several other CSE professors; it includes excerpts from
the UCSD Policy on Integrity of Scholarship.

The UCSD Policy on Integrity of Scholarship itself, which appears in the
"Academic Regulations" section of the UCSD General Catalog.

A document on the use and acknowledgement of outside sources, which is based
on an earlier document from Dartmouth College.

An email from Gillespie, which gives specific reasons why students should not
cheat.

An email from Baden, which concerns how faculty feel about cheating.

The Baden handout gives a pretty clear statement of what actions are
prohibited, but says almost nothing about why they are prohibited.

The UCSD policy document has a little more: it says that academic ethical
standards are needed to maintain "the integrity of scholarship" and to
"protect the validity of university grading;" these phrases occur in its first
and second sentences, respectively, but are vague.

The document on sources has much more explanation, which interestingly
has a very different focus, mostly on the development of scholarly skills, and
on traditions of mutual help and fairness within the academic community. It
also admits quite openly (in its third sentence) that the proper treatment of
outside sources can raise "perplexing problems." It is based on an earlier
document from Dartmouth College, has a highly intellectual tone, is slanted
towards the humanities, and (being from 1972) says nothing about the enormous
possibilities for cheating that arise in connection with the web.

The Gillespie email is very interesting for us, because it gives many
specific reasons why students should not cheat, which can be analyzed for the
ethical theories upon which they rely.

The Baden email introduces a different issue, which is how faculty feel
about cheating, and how it affects them.

The first three documents appear to refer in some indirect way to the
possibility of sanctions. The first uses the word "self-destructive" in the
second clause of its first sentence, but this is not further explained, and
can be interpreted in many ways. The second document does not mention
sanctions, but within the "Academic Regulations" section of the UCSD
General Catalog, it is followed by material that gives very detailed
descriptions of responsibilities and procedures for dealing with academic
dishonesty; however, there is nothing there about the reasons for having these
very complex procedures. The third document repeatedly refers to various
sorts of "trouble" that one might get into, but does not say more about their
nature. The Gillespie email contains some explicit references to sanctions
for students who cheat. (By the way, I obtained permission from the authors
to use their emails).

These texts do not explicitly explain why UCSD would care about cheating.
A good hint appears in a 1990 Wall Street Journal article on problems
in prep schools, which said that prep schools (which are expensive secondary
schools intended to prepare students for admission to Ivy League colleges like
Harvard) are in trouble because fewer of their students are actually being
admitted to these universities, and parents are complaining. They are
therefore being driven to morally dubious practices, such as inflating grades
and (in at least one case) even falsifying student records. But if there is
an article about such practices in the Wall Street Journal, then they
are in much more trouble, because their reputation with these universities is
likely to be diminished. To use a business metaphor, the bottom line is that
these schools need to protect the reputation of their product (which is
students) or they will lose customers (see also the movie review, Emperor's Club has Wall Street Lessons).

The same reasoning applies to UCSD: if student cheating causes the quality
of students to appear higher than it actually is, then employers who hire
students, and universities that accept them for graduate school, will become
suspicious and the university's reputation will be damaged, which in turn will
hurt its ability to attract good faculty, good students, and funding from
government and industry; these three are the lifeblood of a modern university.
Schools of all kinds are very careful about their reputations, for just such
reasons.

Notice that we have not used the same analysis techniques on these
documents as we did on narratives, since these documents lack narrative
structure.
Instead, we looked at and compared multiple related documents, interviewed
participants, stakeholders, etc. Actually, such techniques are also valid for
the analysis of narratives, and if you have a serious need for a good
understanding of some situation, it is advisable to use as many techniques as
you can; for example, an accurate understanding of values is often a crucial
part of a good requirements analysis.

It is often said that any scientific approach to ethics must be descriptive
rather than normative. However, we will see that it is possible, given some
reasonable assumptions, to draw normative conclusions from descriptive
theories. But before that, we review some of the older work on normative
ethical systems.

Writings on ethical theory go back thousands of years, some high points of
traditional thought being the teachings of Plato, Aristotle, Aquinas, Buddha,
and Christ. Ethical theory has long been a central concern of Western
culture, and there is a vast literature, which we cannot possibly cover in any
detail, but can only hope to hit a few key points. A relatively readable
survey of some areas of ethics, oriented towards the concerns of this course,
can be found in the book by Deborah Johnson that is in our List of Recommended Books. It is usual to divide
ethical theories into four main categories, called absolutist, relativist,
utilitarian, and deontological. A somewhat unusual concern will be to try to
uncover some of the main presuppositions behind the various kinds of ethical
theory. Note that there is no uniformity of opinion in ethics, and in
particular, that many different versions can be found within each category of
ethical theory.

Absolutist theories say that there are absolute moral laws, which
can be applied in any situation to determine what actions are right and wrong.
Theories of this form are also called moral law theories. The example
most likely to be familiar to many readers is fundamentalist interpretations
of the Christian Bible, and in particular, of the Ten Commandments.

Relativist theories say that what is right depends on the situation,
but there are different views on how broadly to define "the situation."
Weak relativist theories say that the situation includes the local
social and physical context of an act. Strong relativist theories
broaden this to include the entire social context, and say that right and
wrong are relative to the standards of the society in which an individual
lives. Radical relativist theories say that it is impossible to decide
what is right, and that there are no standards. All relativist
approaches agree that there is no single absolute standard for right and
wrong. For example, it may be right to kill in one situation, but not in
another.

Utilitarian theories claim that what is best can be determined by
some approach that is similar to what we now call a cost-benefit analysis,
i.e., maximizing some measure of utility. The name most closely associated
with this approach is Jeremy Bentham
(1748-1832). Utilitarian approaches are of course very widely used in
business, and they are supported by information technology such as
spreadsheets and various kinds of economic models. Utilitarianism is a kind of
weak relativism. One problem with utilitarianism is determining how to
measure utility. For example, Bentham wanted to maximize overall "happiness"
- but how can we measure happiness, and how can we be sure that it is even the
right thing to measure? Businesses want to maximize profits, but they too
have difficulties in measuring the costs and benefits of their actions.
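
To make the idea of maximizing a measure of utility concrete, here is a
minimal sketch in Python of the kind of cost-benefit calculation that
spreadsheets support; the actions, factors, and weights below are invented
purely for illustration, and as just noted, the hard problem lies in choosing
and measuring the factors, not in the arithmetic.

    # A toy cost-benefit analysis: utility is a weighted sum of factor
    # scores, and the "best" action maximizes it. All numbers are invented.
    actions = {
        "ship the product now": {"profit": 8, "user benefit": 3, "risk of harm": 6},
        "delay and test more":  {"profit": 4, "user benefit": 6, "risk of harm": 2},
        "cancel the project":   {"profit": 0, "user benefit": 1, "risk of harm": 0},
    }

    # The weights encode what is taken to be valuable; a negative weight
    # marks a cost. Choosing them is itself an ethical decision.
    weights = {"profit": 1.0, "user benefit": 1.5, "risk of harm": -2.0}

    def utility(scores):
        return sum(weights[factor] * value for factor, value in scores.items())

    for action, scores in actions.items():
        print(f"{action:22s} utility = {utility(scores):6.1f}")
    print("chosen:", max(actions, key=lambda a: utility(actions[a])))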

There is an interesting blend of utilitarianism with moral law theory: one
can try to justify absolute moral laws by determining their utility. For
example, one can argue for the moral law which says that lying is wrong, by
examining the consequences if it were followed by all individuals. Approaches
of this kind are called rule utilitarianism, in contrast to act
utilitarianism, which denies that general rules can be given, and instead
calls for examining the consequences of particular actions. Some variants of
rule utilitarianism call for examining the consequences for a whole society
instead of a single individual; indeed, one of Bentham's original intentions
was to justify parts of the English legal system in this way.
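
The contrast between act and rule utilitarianism can be sketched in the same
style: an act utilitarian scores a particular act by its consequences in the
situation at hand, while a rule utilitarian scores a general rule by the total
consequences if everyone follows it. The payoff model below is again invented
purely for illustration:

    def act_utility(act, situation):
        # Act utilitarianism: judge this particular act by its
        # consequences in this particular situation.
        return situation["benefit"][act] - situation["harm"][act]

    def member_payoff(rule, member):
        # Toy model: lying yields a small private gain, but erodes the
        # trust that every member of the group depends on.
        if rule == "lying permitted":
            return member["gain_from_lying"] - member["loss_from_distrust"]
        return member["baseline"]

    def rule_utility(rule, society):
        # Rule utilitarianism: judge a general rule by summing the
        # consequences of everyone in society following it.
        return sum(member_payoff(rule, m) for m in society)

    # An act utilitarian weighs one concrete choice:
    situation = {"benefit": {"white lie": 3, "hard truth": 1},
                 "harm":    {"white lie": 1, "hard truth": 2}}
    print(max(situation["benefit"], key=lambda a: act_utility(a, situation)))

    # A rule utilitarian weighs the universalized rule instead:
    society = [{"gain_from_lying": 2, "loss_from_distrust": 5, "baseline": 1}] * 100
    for rule in ("lying permitted", "lying prohibited"):
        print(rule, "->", rule_utility(rule, society))

Note that in this toy model the act utilitarian endorses the white lie, while
the rule utilitarian endorses prohibiting lying, which is exactly the kind of
divergence between the two positions described above.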

Consequentialist theories say that an act should be judged right or
wrong depending on its consequences. This is at least a weak relativist
position, and possibly a strong relativist position, depending on how the
consequences are evaluated. Utilitarian approaches are also necessarily
consequentialist.

Deontological theories (sorry about the awkward Greek-based
terminology; it is unfortunately typical in philosophy) focus on the
principle of an act, rather than on the act itself or its immediate
consequences. The most important name here is Immanuel Kant
(1724-1804), and his approach has probably been the most influential on
today's "common sense" understanding of morality, and on most philosophy of
ethics, except perhaps the most recent. Perhaps the two most famous ideas due
to Kant were both called the categorical imperative by him (Kant actually
had three versions):

People should never be treated merely as means to some end, but rather
should always be treated as ends in themselves.

Rules or actions should be judged by whether or not they can be
universalized, i.e., by whether or not it would be good if everyone
behaved that way.

These principles can be used to justify or criticize many ethical
principles; for example, the first has been used to argue that killing another
person is always wrong. The second principle is similar to the so-called
golden rule of Christianity, which is often stated in the form "Do
unto others as you would have them do unto you." This is not an accident,
because Kant's goal was to give a philosophical justification for the
Judeo-Christian ethical tradition without any appeal to theology, and in
particular, without involving God in any way.

It is interesting to notice that it is not easy to use the categorical
imperative either to justify or to refute the Old Testament principle of
"an eye for an eye and a tooth for a tooth." It is also difficult to
justify or refute its converse, which is the New Testament principle to "turn
the other cheek." (It is a good exercise to try to do these arguments and see
what difficulties arise.)

Kant's categorical imperative is an absolute moral law, but it has the
unusual character of being a meta-law or second order law, that
is, a law about laws; moreover, any moral law that it justifies will be an
absolute moral law. The chief problems with this approach are that it is
difficult to apply such meta-laws to concrete situations, and it is impossible
to do so in a rigorous manner, as Kant himself recognized. Kant did suggest a
way to deal with this problem, which is to regard a proposed moral law as a
natural law (analogously to those of physics) and ask whether one would wish
to live in a universe governed by that law; this is a more concrete version of
Kant's principle of universalization. It is interesting to notice (following
Mark Johnson in his book on our List of
Recommended Books) that this is a suggestion to use a metaphor, relating
moral laws to physical laws.

We will now try to uncover some basic presuppositions or values of the
above approaches to ethical theory. All of these approaches focus on actions,
and they also (except for some versions of rule utilitarianism) focus on
individual agents. Moreover, they all presuppose the autonomy of
individual agents, in the sense that each agent has his or her own goals and
plans, the freedom to carry them out, and can be held responsible for the
resulting actions. All of them also presuppose the rationality of
individual agents, i.e., the capacity to think and to act rationally; in fact,
they assume unlimited rationality, in the sense of placing no upper
bound on the complexity of the reasoning that may be required. Real agents of
course have limited rationality, i.e., a finite capacity for reasoning,
and (for example) are unlikely to be able to apply the categorical imperative
in real time-limited situations; actually, most people have some trouble
understanding how to apply the categorical imperative in any kind of situation
at all. Utilitarianism presupposes that a rational, objective, and preferably
numerical value can be assigned to each relevant factor in such a way that
they can be integrated and maximized. A more general and less obvious
presupposition is a split between mind and body, so that rational agents are
presumed to be free from the constraints of embodiment, i.e., of having
a body; these constraints would include emotional attachment, bias, prejudice,
selfishness, and so on.

All of these presuppositions are to some degree untrue of real agents in
the real world. For example, our minds and bodies are not separate; research
in many areas has shown a variety of important ways in which they are
integrated, such as the fact that emotional associations play a key role in
creating and retrieving memories. And real agents do not behave (purely)
rationally, let alone possess unbounded rationality.

An alternative to the focus on the actions of agents in all the above
approaches is to consider the cognitive states of agents. This often
occurs in real world moral reasoning, as when someone decides to tell a lie in
order to avoid hurting the feelings of someone else; for example, you may not
want to tell your spouse that you are dying of cancer until some time after
you learn of it yourself. Let us call such approaches cognitivist, as
opposed to the behavioralist approaches that focus on actions. Our
concern with the values of agents instead of their behavior is cognitivist in
this sense, since it considers the intentions of agents; let us call
such approaches intentionalist. One argument for taking an
intentionalist approach is that bad intentions can have a negative effect on
the agent; for example, merely having the intention to murder decreases one's
sensitivity and capacity for being human. Intentionalist theories assume
(limited) autonomy but do not assume rationality. Buddhist ethics, which goes
back over 2,500 years, tends to take such an approach. Since this reflects my
own view, I want to emphasize that you are by no means required to agree
with me; indeed, one main point of this course is to get you to think for
yourself about such issues.

Notice that the intention of an agent plays an important role in some areas
of the law. For example, the three main kinds of unlawful killing, usually
called first degree murder, second degree murder, and manslaughter, are
distinguished by the intention of the killer: first degree murder is
premeditated; second degree murder is unpremeditated (i.e., spontaneous); and
manslaughter is accidental; i.e., the agent has the intention to kill in the
first two
cases, but not in the third, and has the intention well in advance in the
first, but not in the second. Thus murder law is partially intentionalist,
rather than purely behavioralist, which would require focusing on only the act
of murder itself.
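
A small sketch can make this vivid. The function below follows the simplified
scheme just given (not actual statutes); a purely behavioralist version would
have to ignore both arguments, since the act itself, a killing, is the same in
all three cases:

    def classify_killing(intended, premeditated):
        # Partially intentionalist: only the agent's cognitive state
        # distinguishes the three cases. Simplified; not actual law.
        if intended and premeditated:
            return "first degree murder"
        if intended:
            return "second degree murder"
        return "manslaughter"

    print(classify_killing(True, True))    # planned in advance
    print(classify_killing(True, False))   # spontaneous
    print(classify_killing(False, False))  # accidental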

We can find examples of the use of various approaches to ethics in the
documents on student cheating that we considered earlier. Threats (such as
references to sanctions) are of course consequentialist. Discussions of loss
and gain, and of "win-win" (as in the Gillespie
email) or "lose-lose" situations are utilitarian. The references to
faculty morale in the Baden email are
consequentialist and cognitivist. Real world examples of moral reasoning often
involve combining several different approaches to ethics.

We have reviewed the standard philosophical theories of ethics above, plus
intentionalism, but not the recent work that draws on socio- and evolutionary
biology, and on cognitive science. Socio-biology is concerned with
discovering the biological origins of, or at least influences on, human (and
sometimes primate) social behavior; it is closely related to evolutionary
biology because its arguments are in general based on neo-Darwinian evolution.
Since evolution is so basic to modern biology, we will speak of just
socio-biology instead of something like socio-evolutionary-biology.
Cognitive science concerns how humans (and perhaps primates, etc.)
think, but the branch that has been most closely concerned with ethics is
cognitive linguistics, which is concerned with the cognitive abilities
and structures that lie behind language; researchers in this area have been
especially concerned with metaphor, as illustrated by Mark Johnson's
observation, discussed earlier, about Kant's use of metaphor in applying the
categorical imperative.

The socio-biological approach to ethics argues that some fact about human
behavior is as it is because it has been selected through evolution,
due to its ability to increase the chances of survival (or more precisely, of
reproduction); therefore it is considered good. Notice that this depends on
the "bridging assumption" that survival of the human species is a good thing,
which cannot be proved scientifically. One very clear example is the
prohibition of incest, which has survival value because inbreeding degrades
the gene pool; therefore avoiding incest is good. Almost all
societies have had an incest taboo, and those that didn't had trouble
surviving, such as the Egyptian pharaohs and European royalty. This approach
to ethics is still rather controversial, though it is gaining in acceptance.
Some reasons for doubting it are substantial, but others are ideological, or
are based on misunderstandings. One misunderstanding is to believe that
behavioral traits arising through evolution are necessarily binding, whereas
in fact many are dispositional, i.e., they manifest as dispositions to
certain behaviors, which can however be overridden. For example, if your body
needs food, you become hungry; this has obvious survival value, but it does
not require you to eat - e.g., you might be on a hunger strike,
voluntarily not eating even though you are very hungry.

Arguments from socio-biology can also take the form of asserting that some
behavior has survival value because it helps groups work together to
accomplish valuable tasks that no individual could do alone; this argument
form can be applied to larger and larger groups, up to the level of whole
societies. For example, lying can have a negative effect on the survival of a
group engaged in hunting; therefore lying is bad (i.e., prohibiting lying is
good) for such groups. Similarly, effective leadership in a group has
survival value, because the group can perform better; therefore it is good.
Notice that such arguments involve the assumption that culture can be
considered to evolve (or rather, to co-evolve) with its biological
basis; this assumption is still somewhat controversial, but it is increasingly
accepted. Notice that socio-biological arguments start with the empirical
scientific theory of evolution, and then use it to justify an ethical norm.
Much more could be said here, but we don't have time for it; for more
information, see the books authored by E.O. Wilson and edited by Leonard Katz
in our List of Recommended Books.
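
The form of such group-level arguments can be illustrated with a deliberately
crude simulation; everything in it, from the band size to the rule that a hunt
succeeds when most of the consulted reports are honest, is an invented
assumption, but it shows how an empirical model might support a claim about
the survival value of prohibiting lying:

    import random

    def hunt_succeeds(band, consulted=3):
        # A hunt succeeds if most of the members consulted report
        # game sightings honestly (True = honest, False = liar).
        reports = random.sample(band, consulted)
        return sum(reports) >= 2

    def success_rate(honest, liars, hunts=10000):
        band = [True] * honest + [False] * liars
        return sum(hunt_succeeds(band) for _ in range(hunts)) / hunts

    for liars in (0, 2, 5):
        print(f"band of 10 with {liars} liars:",
              f"success rate {success_rate(10 - liars, liars):.2f}")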

The best known name in cognitive linguistics is probably George
Lakoff, who is responsible for a considerable deepening of our understanding
of metaphor. A basic insight in this area is that certain metaphors,
which tend to be the most powerful and pervasive, are embodied, i.e.,
grounded in an essential way in the human body. For example, in connection
with our earlier discussion of the "myth of progress," it is interesting to
consider metaphorical expressions for progress, such as the following (which
you might hear in relation to some computer programming project):
We are moving forward.
It's advancing piece by piece.
We've cleared some major hurdles.
It's dead in the water.
These all have an underlying scheme of movement from the current location of
the speaker along some path towards a goal (or for the last one,
non-movement). Lakoff calls such families of metaphors basic image
schemas, and gives very many examples in several books. For another
example, "up" has a basic spatial meaning, but by metaphorical
extension also has meanings that relate to administrative hierarchy, mood, and
much more (e.g., "He's way up in management" and "I'm feeling really up this
morning"), all of which have a positive connotation. Metaphor analysis is
another tool that we can put in our methodological toolbox for analyzing texts
that relate to socio-technical systems.
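
As a toy illustration of putting this tool in the toolbox, here is a
first-pass screening sketch that flags the progress-as-motion schema in the
example sentences above by looking for source-domain vocabulary; real metaphor
analysis is of course far subtler than keyword matching:

    # Flag expressions whose vocabulary comes from the source domain of
    # movement along a path toward a goal. A crude screening aid only.
    MOTION_VOCABULARY = ["moving", "forward", "advancing", "hurdles",
                         "ahead", "stalled", "dead in the water"]

    def motion_metaphors(sentence):
        s = sentence.lower()
        return [w for w in MOTION_VOCABULARY if w in s]

    for line in ["We are moving forward.",
                 "It's advancing piece by piece.",
                 "We've cleared some major hurdles.",
                 "It's dead in the water."]:
        print(f"{line:35s} -> {motion_metaphors(line)}")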

The thrust of work on ethics within cognitive linguistics is to ask what
kind of moral reasoning is natural to humans, given what we now know
about their cognitive capabilities, which includes work on the structure of
concepts, as well as work on metaphor. Mark Johnson (in his book in our List of Recommended Books) argues that
moral reasoning in real life is often based on metaphor, whose source domain
is some very basic human situation. This work is of course descriptive and
not normative; but (like the socio-biological approach) it does give very
strong grounds for rejecting radical moral relativism, in that actions can be
argued to be right or wrong based on similarity to prototypical situations.
Similarly, it gives strong grounds for rejecting moral law theory, in that
humans do not seem to have the cognitive capabilities needed to use such
theories. Cognitive linguistics seems better suited to critiquing general
ethical theories than to supporting particular moral decisions.

A key insight from cognitive science is that (real human) concepts are not
defined by sets of predicates, as is often assumed in artificial intelligence
research. Instead, careful experiments by Eleanor Rosch and others have shown
that most concepts are defined by prototypes, and that the more a
percept looks like the prototype, the more it is considered to be an instance
of that concept; thus (real human) concepts are inherently fuzzy, in
the sense of not having well-defined boundaries. For example, the concept of
"bird" (for most Americans) is defined by the prototype of a robin.
Moreover, many concepts have extensions from some "core" meaning that are
given by metaphors; these are called metaphorical extensions. The
concepts used in (real world) moral reasoning are no exception to this general
pattern, again as shown in Mark Johnson's book. These observations constitute
an argument against any kind of absolute moral law theory, based on the
"bridging assumption" that the actual use of a good moral theory should be
natural to ordinary humans. It is interesting to notice that case law, in the
US legal system, is essentially arguing from prototypes, called
precedents in law, by metaphorical extension to other cases.
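
Prototype-based concepts are easy to render computationally: membership is a
matter of degree, measured by similarity to the prototype, rather than a
yes-or-no test against defining predicates. In the sketch below, the features
and their values are invented for illustration:

    # Graded membership by similarity to a prototype. The features and
    # numbers are invented; a robin fits the American "bird" prototype
    # closely, while a penguin is a bird but an atypical one.
    BIRD_PROTOTYPE = {"flies": 1.0, "sings": 1.0, "small": 1.0, "lays eggs": 1.0}

    def membership(instance, prototype=BIRD_PROTOTYPE):
        # Degree in [0, 1]: overlap with the prototype's features.
        shared = sum(min(instance.get(f, 0.0), v) for f, v in prototype.items())
        return shared / sum(prototype.values())

    robin   = {"flies": 1.0, "sings": 1.0, "small": 1.0, "lays eggs": 1.0}
    penguin = {"flies": 0.0, "sings": 0.0, "small": 0.0, "lays eggs": 1.0}

    print("robin:  ", membership(robin))     # 1.0, highly typical
    print("penguin:", membership(penguin))   # 0.25, member but atypical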

Much more can be said about cognitive and socio-biological approaches to
morality, but unfortunately, we don't have the time. The material in this
section has not yet appeared in any book on computer ethics, and I would guess
it will be some time before it does, due to its inherent complexity and
difficulty, despite its strong affinities with information technology. So you
are getting a sneak preview of material that I think will be increasingly
important in the twenty-first century.

The (extract from the) essay "Codes of professional ethics" by Ronald
Anderson, Deborah Johnson et al., in Computerization and Controversy,
ed. Rob Kling, pp. 876-877, is important for the way it puts codes of ethics
into historical perspective, and for its information about the official
motivation of the ACM Code. It begins as follows:

Historically, professional associations have viewed codes of ethics as
mechanisms to establish their status as a profession or as a means to regulate
their membership and thereby convince the public that they deserve to be
self-regulating. Self-regulation depends on ways to deter unethical behavior
of the members, and a code, combined with an ethics review board, was seen as
the solution. Codes of ethics have tended to list possible violations and
threaten sanctions for such violations.

Anderson, Johnson et al. note that the original 1972 ACM Code of Ethics took a
similar approach, but "had difficulties implementing an ethics review system."
In fact, the new (1992) code is based on voluntary compliance, with no
sanctions except possible termination of membership in the ACM, which has very
little bite; moreover, there does not appear to be any procedure for
initiating such a termination. However, I consider that their arguments for
an (essentially) unenforced code are very weak. How can the public have much
trust in a profession that produces products that are so full of technical
errors and ethical problems? One need only compare the ACM approach with that
of the AMA (American Medical Association) to see what can be done to improve
things: requiring a license to practice, requiring an internship and an
exam to get the license, and enforcing technical and ethical standards with
review boards that have the power to revoke licenses; the legal profession has
taken a similar approach.

It is interesting to ask why the ACM's approach differs so significantly
from the approach that has been so successful for other
professional societies. A good starting point can be found in the fine
article "Confronting ethical issues of systems design in a web of social
relationships" by Ina Wagner (in Computerization and Controversy,
ed. Rob Kling, pp. 889-902), in her discussion of the differences in workplace
power between doctors and nurses. Somewhat like nurses, computer scientists
may have little control over their schedules, and little say in what they work
on, or how they do it (except insofar as they become managers instead).
Likewise, our professional societies have little political influence or legal
power. Of course, the medical and legal professions have roots going back
thousands of years, and have been well organized for centuries. This seems to
suggest that the near term outlook is dim for significant improvements in the
enforcement of professional ethics in computer science.

The ACM Code of
Ethics has a clearly Kantian tone. First, notice that it uses the Kantian
term "imperative" for its rules. Second, notice that most of the rules have
exactly the kind of form that could be justified by Kantian universalization.
This is especially apparent with the first group of rules (1.1 - 1.8),
particularly the first four, and the Kantian influence can also be seen in the
brief justifications for the rules often given at the very beginning of each
corresponding section of the so-called Guidelines. The introduction to the
code also refers to derivation of rules from more general principles; possibly
there is an ACM document somewhere that makes this more explicit (please let
me know if you find one).

Ina Wagner's article "Confronting ethical issues of systems design in a web
of social relationships" (pp. 889-902 in Kling's book) is a case study
concerning the design of an automated scheduling system for use of operating
theatres in a hospital. Many interesting observations are made about ethical
issues, some of which may be a bit shocking. For example, Wagner found that
it was the values of surgeons that controlled scheduling, and that among these
values was performing as many maximally complex operations as possible, since
that confers status within their social group. The value system of nurses,
which includes spending time consoling patients, plays no role in scheduling.
Wagner also raises questions about her own ethical responsibilities as a
computer scientist doing a requirements analysis in this situation: for
example, is it her responsibility to write requirements based on a democratic
vision of how the hospital should operate, in which the interests of nurses
have equal weight with those of surgeons? Although such an approach is
recommended by the participatory design methodology to which she subscribes,
it is highly unlikely to be implemented in this setting. Also, there are
important but difficult privacy issues involved in running an effective
scheduling system. The surgeons do not want to make their personal schedules
available, whereas nurses have no choice at all in scheduling
their time. Also, please notice the strong political dimension in this social
situation, the gender issue, the point that computerization involves making at
least some priorities explicit, and that this may be against the wishes of
some stakeholders; it is also interesting to note the observation that
self-deception in the use of time is very common. This excellent article is
well worth reading very carefully, and preferably more than once.

The article "Power in systems design," by Bo Dahlbom and Lars Mathiassen
(in Kling's book, pp. 903-906) is a brief case study of an ethical issue,
namely the unauthorized inclusion of a feature into a system by a programmer
who wanted to protect the privacy of a particular class of users. This
article raises some important questions but fails to provide definitive
answers. It also appears to advocate what we call social determinism, in a
variant that the authors call social constructionism; they also rail
against what they call technological somnambulism (this term is due to
Langdon Winner). There is also a cute paradox in this paper: whereas
technologists are often technological determinists, who believe that they have
little or no moral responsibility for the work they do, these authors argue
that technological determinism in fact entails great moral responsibility; on
the other hand, scholars in the humanities are often social constructionists,
and these authors argue that this position naturally leads to a diminished
moral responsibility. They also argue that, although technological
determinism is ultimately false, our collective technological somnambulism
renders it effectively true to a great extent, since we ignore our
responsibilities to consider social consequences.

"Information and computer scientists as moral philosophers and social
analysts," by Rob Kling, in Computerization and Controversy, pp 32-38,
provides good motivation for having material like that in this course as a
part of every computer professional's education; it should be required reading
for every computer science department chairperson.