[Footnote added 17 December 2005: I recommend the splendid Wikipedia article. (No, I didn't contribute to it, but I applaud the authors, starting with Brian Schack in July 2002, followed by many scores of people progressively since then.)]

Introduction

With the death of Isaac Asimov on April 6, 1992, the world lost a prodigious
imagination. Unlike fiction writers before him, who regarded robotics as
something to be feared, Asimov saw a promising technological innovation to be
exploited and managed. Indeed, Asimov's stories are experiments with the
enormous potential of information technology.

This article examines Asimov's stories not as literature but as a
gedankenexperiment - an exercise in thinking through the
ramifications of a design. Asimov's intent was to devise a set of rules that
would provide reliable control over semi-autonomous machines. My goal is to
determine whether such an achievement is likely or even possible in the real
world. In the process, I focus on practical, legal, and ethical matters that
may have short- or medium-term implications for practicing information
technologists.

Part 1, in this issue, reviews the origins of the robot notion and explains the
laws for controlling robotic behaviour, as espoused by Asimov in 1940 and
presented and refined in his writings over the following 45 years. Next month,
Part 2 examines the implications of Asimov's fiction not only for real
roboticists but also for information technologists in general.

Origins of robotics

Robotics, a branch of engineering, is also a popular source of inspiration
in science fiction literature; indeed, the term originated in that field. Many
authors have written about robot behaviour and their interaction with humans,
but in this company Isaac Asimov stands supreme. He entered the field early,
and from 1940 to 1990 he dominated it. Most subsequent science fiction
literature expressly or implicitly recognizes his Laws of Robotics.

Asimov described how, at the age of 20, he came to write robot stories:

"In the 1920's science fiction was becoming a popular art form for the
first time ..... and one of the stock plots .... was that of the invention of a
robot .... Under the influence of the well-known deeds and ultimate fate of
Frankenstein and Rossum, there seemed only one change to be rung on this plot -
robots were created and destroyed their creator ... I quickly grew tired of
this dull hundred-times-told tale .... Knowledge has its dangers, yes, but is
the response to be a retreat from knowledge? .... I began in 1940, to write
robot stories of my own - but robot stories of a new variety ...... My robots
were machines designed by engineers, not pseudo-men created by
blasphemers"1,2

Asimov was not the first to conceive of well-engineered, non-threatening
robots, but he pursued the theme with such enormous imagination and persistence
that most of the ideas that have emerged in this branch of science fiction are
identifiable with his stories.

To cope with the potential for robots to harm people, Asimov, in 1940, in
conjunction with science fiction author and editor John W. Campbell, formulated
the Laws of Robotics.3,4 He subjected all of his fictional robots
to these laws by having them incorporated within the architecture of their
(fictional) "platinum-iridium positronic brains". The laws (see below) first
appeared publicly in his fourth robot short story,
"Runaround"5.

A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The laws quickly attracted - and have since retained - the attention of readers
and other science fiction writers. Only two years later, another established
writer, Lester Del Rey, referred to "the mandatory form that would force
built-in unquestioning obedience from the robot".6

As Asimov later wrote (with his characteristic clarity and lack of modesty),
"Many writers of robot stories, without actually quoting the three laws, take
them for granted, and expect the readers to do the same".

Asimov's fiction even influenced the origins of robotic engineering.
"Engelberger, who built the first industrial robot, called Unimate, in 1958,
attributes his long-standing fascination with robots to his reading of
[Asimov's] 'I, Robot' when he was a teenager", and Engelberger later invited
Asimov to write the foreword to his robotics manual.

The laws are simple and straightforward, and they embrace "the essential
guiding principles of a good many of the world's ethical systems"7. They also
appear to ensure the continued dominion of humans over robots, and to preclude
the use of robots for evil purposes. In practice, however - meaning in Asimov's
numerous and highly imaginative stories - a variety of difficulties arise.

My purpose here is to determine whether or not Asimov's fiction vindicates the
laws he expounded. Does he successfully demonstrate that robotic technology
can be applied in a responsible manner to potentially powerful, semi-autonomous, and, in some sense, intelligent machines? To reach a conclusion, we must
examine many issues emerging from Asimov's fiction.

History

The robot notion derives from two strands of thought, humanoids and automata. The notion of a humanoid (or human-like nonhuman) dates back to Pandora in The Iliad, 2,500 years ago and even further. Egyptian, Babylonian, and ultimately Sumerian legends fully 5,000 years old reflect the widespread image of the creation, with god-men breathing life into clay models. One variation on the theme is the idea of the golem, associated with the Prague ghetto of the sixteenth century. This clay model, when breathed into life, became a useful but destructive ally.

The golem was an important precursor to Mary Shelley's Frankenstein: The
Modern Prometheus (1818). This story combined the notion of the humanoid
with the dangers of science (as suggested by the myth of Prometheus, who stole
fire from the gods to give it to mortals). In addition to establishing a
literary tradition and the genre of horror stories, Frankenstein also
imbued humanoids with an aura of ill fate.

Automata, the second strand of thought, are literally "self-moving things" and have long interested mankind. Early models depended on levers and wheels, or on hydraulics. Clockwork technology enabled significant advances after the thirteenth century, and later steam and electro-mechanics
were also applied. The primary purpose of automata was entertainment rather
than employment as useful artifacts. Although many patterns were used, the
human form always excited the greatest fascination. During the twentieth
century, several new technologies moved automata into the utilitarian realm.
Geduld and Gottesman8 and Frude2 review the chronology of
clay model, water clock, golem, homunculus, android, and cyborg that culminated
in the contemporary concept of the robot.

The term robot derives from the Czech word robota, meaning forced work or compulsory service, or robotnik, meaning serf. It was first used by the Czech playwright Karel Čapek in 1918 in a short story and again in his 1921 play R.U.R., which stood for Rossum's Universal Robots. Rossum, a fictional Englishman, used biological methods to invent and mass-produce "men" to serve humans. Eventually they rebelled, became the dominant race, and wiped out humanity. The play was soon well known in English-speaking countries.

Definition

Undeterred by its somewhat chilling origins (or perhaps ignorant of them),
technologists of the 1950s appropriated the term robot to refer to machines
controlled by programs. A robot is "a reprogrammable multifunctional device
designed to manipulate and/or transport material through variable programmed
motions for the performance of a variety of tasks"9. The term robotics, which
Asimov claims he coined in 1942,10 refers to "a science or art
involving both artificial intelligence (to reason) and mechanical engineering
(to perform physical acts suggested by reason)"11.

As currently defined, robots exhibit three key elements:

programmability, implying computational or symbol-manipulative capabilities that a designer can combine as desired (a robot is a computer);

mechanical capability, enabling it to act on its
environment rather than merely function as a data processing or computational
device (a robot is a machine); and

flexibility, in that it can operate using a range of
programs and manipulate and transport materials in a variety of ways.

We can conceive of a robot, therefore, as either a computer-enhanced machine or as a computer with sophisticated input/output devices. Its computing
capabilities enable it to use its motor devices to respond to external stimuli,
which it detects with its sensory devices. The responses are more complex than
would be possible using mechanical, electromechanical, and/or electronic
components alone.
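
To make this view concrete, here is a minimal sketch of a robot modeled as a computer coupled to sensor and effector devices. All class and method names are my own illustration, not drawn from any robotics standard; the point is only that the program (the computer) is the replaceable part, while sensors and effectors supply the mechanical capability.

# Illustrative sketch: a robot as a computer (the controller) with
# sensory input devices and motor output devices.

class Sensor:
    """An input device: detects external stimuli."""
    def read(self) -> float:
        return 0.0  # stub: e.g., distance to the nearest obstacle

class Effector:
    """An output device: acts on the environment."""
    def actuate(self, command: str) -> None:
        print(f"effector: {command}")

class Robot:
    """Programmability: behavior is a replaceable program (a function).
    Mechanical capability: effectors act on the environment.
    Flexibility: the program can be swapped for a different task."""
    def __init__(self, sensors, effectors, program):
        self.sensors = sensors
        self.effectors = effectors
        self.program = program   # reprogrammable: swap in a new task

    def step(self):
        readings = [s.read() for s in self.sensors]
        for command in self.program(readings):
            for e in self.effectors:
                e.actuate(command)

# Usage: the same hardware runs a different task by loading a new program.
robot = Robot([Sensor()], [Effector()],
              program=lambda r: ["halt"] if r and r[0] < 0.5 else ["advance"])
robot.step()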

With the merging of computers, telecommunications networks, robotics, and distributed systems software, and the multiorganizational application of the hybrid technology, the distinction between computers and robots may become increasingly arbitrary. In some cases it would be more convenient to conceive of a principal intelligence with dispersed sensors and effectors, each with subsidiary intelligence (a robotics-enhanced computer system). In others, it would be more realistic to think in terms of
multiple devices, each with appropriate sensory, processing, and motor
capabilities, all subjected to some form of coordination (an integrated
multi-robot system). The key difference robotics brings is the complexity and
persistence that artifact behaviour achieves, independent of human
involvement.

Many industrial robots resemble humans in some ways. In science fiction, the
tendency has been even more pronounced, and readers encounter humanoid robots,
humaniform robots, and androids. In fiction, as in life, it appears that a robot needs to exhibit only a few human-like characteristics to be treated as if it were human. For example, the relationships between humans and robots in many of Asimov's stories seem almost intimate, and audiences worldwide reacted warmly to the "personality" of the computer HAL in 2001: A Space Odyssey, and to the gibbering rubbish-bin R2-D2 in the Star Wars series.

The tendency to conceive of robots in humankind's own image may gradually yield
to utilitarian considerations, since artifacts can be readily designed to
transcend humans' puny sensory and motor capabilities. Frequently the
disadvantages and risks involved in incorporating sensory, processing, and
motor apparatus within a single housing clearly outweigh the advantages. Many
robots will therefore be anything but humanoid in form. They may increasingly
comprise powerful processing capabilities and associated memories in a safe and
stable location, communicating with one or more sensory and motor devices
(supported by limited computing capabilities and memory) at or near the
location(s) where the robot performs its functions. Science fiction literature
describes such architectures.12,13

Impact

Robotics offers benefits such as high reliability, accuracy, and speed of operation. Low long-term costs of computerized machines may result in significantly higher productivity, particularly in work involving variability within a general pattern. Humans can be relieved of mundane work and exposure to dangerous workplaces. Their capabilities can be extended into hostile environments involving high pressure (deep water), low pressure (space), high temperatures (furnaces), low temperatures (ice caps and cryogenics), and high-radiation areas (near nuclear materials or occurring naturally in space).

On the other hand, deleterious consequences are possible. Robots might directly
or indirectly harm humans or their property; or the damage may be economic or
incorporeal (for example, to a person's reputation). The harm could be
accidental or result from human instructions. Indirect harm may occur to
workers, since the application of robots generally results in job redefinition
and sometimes in outright job displacement. Moreover, the replacement of humans
by machines may undermine the self-respect of those affected, and perhaps of people generally.

During the 1980s, the scope of information technology applications and their
impact on people increased dramatically. Control systems for chemical processes
and air conditioning are examples of systems that already act directly and
powerfully on their environments. And consider computer-integrated manufacturing, just-in-time logistics, and automated warehousing systems. Even data processing systems have become integrated into organizations' operations and constrain the ability of operations-level staff to query a machine's decisions and conclusions. In short, many modern
computer systems are arguably robotic in nature already; their impact must be
managed - now.

Asimov's original laws (see above) provide that robots are to be slaves to
humans (the second law). However, this role is overridden by the higher-order
first law, which precludes robots from injuring a human, either by their own
autonomous action or by following a human's instructions. This precludes their
continuing with a programmed activity when doing so would result in human
injury. It also prevents their being used as a tool or accomplice in battery,
murder, self-mutilation, or suicide.

The third and lowest level law creates a robotic survival instinct. This
ensures that, in the absence of conflict with a higher order law, a robot will

seek to avoid its own destruction through natural causes or accident;

defend itself against attack by another robot or robots; and

defend itself against attack by any human or humans.

Being neither omniscient nor omnipotent, it may of course fail in its
endeavors. Moreover, the first law ensures that the robotic survival instinct
fails if self-defense would necessarily involve injury to any human. For robots to successfully
defend themselves against humans, they would have to be provided with
sufficient speed and dexterity so as not to impose injurious force on a
human.
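
As a thought experiment, the strict priority ordering described above can be rendered as a rule cascade. The sketch below is my own construction, not Asimov's mechanism: it assumes the robot can already label each candidate action's consequences, and it simply prefers the action whose highest-priority violation is least severe.

# A minimal sketch of the 1940 laws as a strict priority cascade.
# Each predicate inspects a proposed action's (assumed known) consequences.

def violates_first_law(action):
    # Injuring a human, or by inaction allowing one to come to harm.
    return action["harms_human"] or action["allows_human_harm"]

def violates_second_law(action):
    # Disobeying a human order.
    return action["disobeys_order"]

def violates_third_law(action):
    # Failing to protect the robot's own existence.
    return action["endangers_self"]

LAWS = [violates_first_law, violates_second_law, violates_third_law]

def choose(actions):
    """Prefer the action whose highest-priority violation is least severe:
    tuples compare element by element, so the First Law dominates."""
    return min(actions, key=lambda a: tuple(law(a) for law in LAWS))

candidates = [
    {"name": "obey",   "harms_human": True,  "allows_human_harm": False,
     "disobeys_order": False, "endangers_self": False},
    {"name": "refuse", "harms_human": False, "allows_human_harm": False,
     "disobeys_order": True,  "endangers_self": False},
]
print(choose(candidates)["name"])   # -> "refuse": First Law outranks Second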

Under the second law, a robot appears to be required to comply with a human order to (1) not resist being destroyed or dismantled, (2) cause itself to be destroyed, or (3) (within the limits of paradox) dismantle itself.1,2 In various stories, Asimov notes that the order to self-destruct does not have to be obeyed if obedience would result in harm to a human. In addition, a robot would generally not be precluded from seeking clarification of the order. In his last full-length novel, Asimov appears to go further by envisaging that court procedures would be generally necessary before a robot could be destroyed: "I believe you should be dismantled without delay. The case is too dangerous to await the slow majesty of the law. . . . If there are legal repercussions hereafter, I shall deal with them."14

Such apparent inconsistencies attest to the laws' primary role as a
literary device intended to support a series of stories about robot behavior.
In this, they were very successful: "There was just enough ambiguity in the
Three Laws to provide the conflicts and uncertainties required for new stories,
and, to my great relief, it seemed always to be possible to think up a new
angle out of the 61 words of the Three Laws."1

As Frude says, "The Laws have an interesting status. They . . . may
easily be broken, just as the laws of a country may be transgressed. But
Asimov's provision for building a representation of the Laws into the positronic-brain circuitry ensures that robots are physically prevented from contravening them."2 Because the laws are intrinsic to the machine's design, it
should "never even enter into a robot's mind" to break them.

Subjecting the laws to analysis may seem unfair to Asimov. However, they have
attained such a currency not only among sci-fi fans but also among practicing roboticists and software developers that they
influence, if only subconsciously, the course of robotics.

Asimov's experiments with the 1940 laws

Asimov's early stories are examined here not in chronological sequence or on
the basis of literary devices, but by looking at clusters of related ideas.

Any set of "machine values" provides enormous scope for linguistic
ambiguity. A robot must be able to distinguish robots from humans. It must be
able to recognize an order and distinguish it from a casual request. It must "understand" the concept of its own existence, a
capability that arguably has eluded mankind, although it may be simpler for
robots. In one short story, for example, the vagueness of the word
firmly in the order "Pull [the bar] towards you firmly" jeopardizes a
vital hyperspace experiment. Because robot strength is much greater than that
of humans, it pulls the bar more powerfully than the human had intended, bends
it, and thereby ruins the control mechanism.15

Defining injury and harm is particularly problematic, as are the distinctions
between death, mortal danger, and injury or harm that is not life-threatening.
Beyond this, there is psychological harm. Any robot given, or developing, an awareness of human feelings would have to evaluate injury and harm in psychological as well as physical terms: "The insurmountable First Law of Robotics states: 'A robot may not injure a human being....' and to repel a friendly gesture would do injury"16 (emphasis added). Asimov investigated this in an early short story and later in a novel: A mind-reading robot interprets the first law as requiring him to give people not the correct answers to their questions but the answers that he knows they want to hear.14,16,17

Another critical question is how a robot is to interpret the term human. A robot could be given any number of subtly different descriptions of a human being, based for example on skin color, height range, and/or voice characteristics such as accent. It is therefore possible for robot behaviour to be manipulated: "the Laws, even the First Law, might not be absolute then, but might be whatever those who design robots define them to be"14. Faced with this difficulty, the robots in this story conclude that "if different robots are subject to narrow definitions of one sort or another, there can only be measureless destruction. We define human beings as all members of the species, Homo sapiens."14

In an early story, Asimov has a humanoid robot represent itself as a human and stand for public office. It must prevent the public from realizing that it
is a robot, since public reaction would not only result in its losing the
election but also in tighter constraints on other robots. A political
opponent, seeking to expose the robot, discovers that it is impossible to prove
it is a robot solely on the basis of its behavior, because the Laws of
Robotics force any robot to perform in essentially the same manner as a good
human being7.

In a later novel, a roboticist says, "If a robot is human enough, he would be
accepted as a human. Do you demand proof that I am a robot? The fact that I
seem human is enough"16. In another scene, a humaniform robot is
sufficiently similar to a human to confuse a normal robot and slow down its
reaction time14. Ultimately, two advanced robots recognize each other as
"human", at least for the purposes of the laws14,18.

Defining human beings becomes more difficult with the emergence of cyborgs,
which may be seen as either machine-enhanced humans or biologically enhanced
machines. When a human is augmented by prostheses (artificial limbs, heart
pacemakers, renal dialysis machines, artificial lungs, and someday perhaps many
other devices), does the notion of a human gradually blur with that of a robot?
And does a robot that attains increasingly human characteristics (for example,
a knowledge-based system provided with the "know-that" and "know-how" of a
human expert and the ability to learn more about a domain) gradually become
confused with a human? How would a robot interpret the first and second laws
once the Turing test criteria can be routinely satisfied? The key outcome of the most important of Asimov's robot novellas12 is the tenability of the
argument that the prosthetization of humans leads inevitably to the
humanization of robots.

The cultural dependence of meaning reflects human differences in such matters
as religion, nationality, and social status. As robots become more capable,
however, cultural differences between humans and robots might also be a factor.
For example, in one story19 a human suggests that some laws may be bad and
their enforcement unjust, but the robot replies that an unjust law is a
contradiction in terms. When the human refers to something higher than
justice, for example, mercy and forgiveness, the robot merely responds, "I am not acquainted with those words".

The assumption that there is a literal meaning for any given series of
signals is currently considered naive. Typically, the meaning of a term is
seen to depend not only on the context in which it was originally expressed but
also on the context in which it is read (see, for example, Winograd and
Flores20). If this is so, then robots must exercise judgment to
interpret the meanings of words and hence of orders and of new data.

A robot must even determine whether and to what extent the laws apply to a
particular situation. Often in the robot stories a robot action of any kind is
impossible without some degree of risk to a human. To be at all useful to its
human masters, a robot must therefore be able to judge how much the laws can be
breached to maintain a tolerable level of risk. For example, in Asimov's very first robot short story, "Robbie [the robot] snatched up Gloria [his young human owner], slackening his speed not one iota, and, consequently knocking every breath of air out of her."21 Robbie judged that it was less harmful for Gloria to be momentarily breathless than to be mown down by a tractor.

Similarly, conflicting orders may have to be prioritized, for example, when two
humans give inconsistent instructions. Whether the conflict is overt,
unintentional, or even unwitting, it nonetheless requires a
resolution. Even in the absence of conflicting orders, a robot may need to
recognize foolish or illegal orders and decline to implement them, or at least
question them. One story asks, "Must a robot follow the orders of a child; or
of an idiot; or of a criminal; or of a perfectly decent intelligent man who
happens to be inexpert and therefore ignorant of the undesirable consequences
of his order?"18

Numerous problems surround the valuation of individual humans. First, do all humans have equal standing in a robot's evaluation? On the one hand they do: "A robot may not judge whether a human being deserves death. It is not for him to decide. He may not harm a human - variety skunk or variety angel."7 On the other hand they might not, as when a robot tells a human, "In conflict between your safety and that of another, I must guard yours."22 In another short story, robots agree that they "must obey a human being who is fit by mind, character, and knowledge to give me that order." Ultimately, this leads the robot to "disregard shape and form in judging between human beings" and to recognize his companion robot not merely as human but as a human "more fit than the others."18 Many subtle problems can be constructed. For example, a person might try forcing a robot to comply with an instruction to harm a human (and thereby violate the first law) by threatening to kill himself unless the robot obeys.

How is a robot to judge the trade-off between a high probability of lesser harm to one person versus a low
probability of more serious harm to another? Asimov's stories refer to this
issue but are somewhat inconsistent with each other and with the strict wording
of the first law.

More serious difficulties arise in relation to the valuation of multiple
humans. The first law does not even contemplate the simple case of a single
terrorist threatening many lives. In a variety of stories, however, Asimov
interprets the law to recognize circumstances in which a robot may have to injure or even kill one or more humans to protect one or more others: "The Machine cannot harm a human being more than minimally, and that only to save a greater number"23 (emphasis added). And again: "The First Law is not absolute. What if harming a human being saves the lives of two others, or three others, or even three billion others? The robot may have thought that saving the Federation took precedence over the saving of one life."24

These passages value humans exclusively on the basis of numbers. A later story
includes this justification: "To expect robots to make judgments of fine points
such as talent, intelligence, the general usefulness to society, has always seemed impractical. That would delay decision to the point where the robot is effectively immobilized. So we go by numbers."18
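
Read literally, "going by numbers" combined with probabilistic judgment amounts to minimizing the expected number of humans harmed. The following sketch is my formulation of that calculus, with invented figures; it also illustrates the earlier trade-off between a high probability of lesser harm and a low probability of greater harm.

# Expected-harm comparison: value humans "by numbers", as Asimov's
# robots do. Each candidate strategy carries an estimated probability
# of harm and a count of humans at risk. All numbers are invented.

def expected_harm(strategy):
    return strategy["p_harm"] * strategy["humans_at_risk"]

strategies = [
    {"name": "swerve left",  "p_harm": 0.90, "humans_at_risk": 1},
    {"name": "swerve right", "p_harm": 0.10, "humans_at_risk": 3},
]

best = min(strategies, key=expected_harm)
print(best["name"], expected_harm(best))
# "swerve right" (0.3 expected) beats "swerve left" (0.9 expected),
# even though the harm, if it occurs, falls on more people.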

A robot's cognitive powers might be sufficient for distinguishing
between attacker and attackee, but the first law alone does not provide a robot
with the means to distinguish between a "good" person and a "bad" one. Hence, a
robot may have to constrain a "good" attackee's self-defense to protect the "bad" attacker from harm. Similarly, disciplining children and prisoners may be difficult under the laws, which would limit robots' usefulness for supervision within nurseries and penal institutions.22 Only after many generations of self-development does a humanoid robot learn to reason that "what seemed like cruelty [to a human] might, in the long run, be kindness."12

The more subtle life-and-death cases, such as assistance in the voluntary euthanasia of a fatally ill or injured person to gain immediate access to organs that would save several other lives, might fall well outside a robot's appreciation. Thus, the first law would require a robot to protect the threatened human, unless it was able to judge the steps taken to be the least harmful strategy. The practical solution to such difficult moral questions would be to keep robots out of the operating theater.22

The problem underlying all of these issues is that most
probabilities used as input to normative decision models are not objective;
rather, they are estimates of probability based on human (or robot) judgment.
The extent to which judgment is central to robotic behavior is summed up in the
cynical rephrasing of the first law by the major (human) character in the four
novels: "A robot must not hurt a human being, unless he can think of a way to
prove it is for the human being's ultimate good after all."19

To cope with the judgmental element in robot decision making, Asimov's later novels introduced a further complication: "On ... [worlds other than Earth], . . . the Third Law is distinctly stronger in comparison to the Second Law. . . . An order for self-destruction would be questioned and there would have to be a truly legitimate reason for it to be carried through - a clear and present danger."16 And again, "Harm through an active deed outweighs, in general, harm through passivity - all things being reasonably equal. . . . [A robot is] always to choose truth over nontruth, if the harm is roughly equal in both directions. In general, that is."16

The laws are not absolutes, and their force varies with the individual
machine's programming, the circumstances, the robot's previous instructions,
and its experience. To cope with the inevitable logical complexities, a human
would require not only a predisposition to rigorous reasoning, and a
considerable education, but also a great deal of concentration and composure.
(Alternatively, of course, the human may find it easier to defer to a robot suitably equipped for fuzzy-reasoning-based judgment.)

The strategies as well as the environmental variables involve complexity. "You must not think . . . that robotic response is a simple yes or no, up or down, in or out. ... There is the matter of speed of response."16 In some cases (for example, when a human must be physically restrained), the degree of strength to be applied must also be chosen.

A deadlock problem was the key feature of the short story in which Asimov first introduced the laws. He constructed the type of stand-off commonly referred to as the "Buridan's ass" problem. It involved a balance between a strong third-law self-protection tendency, causing the robot to try to avoid a source of danger, and a weak second-law order to approach that danger. "The conflict between the various rules is [meant to be] ironed out by the different positronic potentials in the brain," but in this case the robot "follows a circle around [the source of danger], staying on the locus of all points of ... equilibrium."5
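
The stand-off can be pictured as a balance of potentials. The sketch below is a loose illustration with invented functional forms and constants, not a model of positronic circuitry: a weak second-law attraction toward the danger site and a strengthened third-law repulsion away from it cancel at a fixed radius, which is the circle the robot follows.

# Sketch of the "Runaround" stand-off as a potential-field balance.
# The numbers and functional forms are invented for illustration.

def attraction(r):
    # Weak Second-Law pull toward the danger site (the casual order).
    return 1.0 / r**2

def repulsion(r):
    # Strengthened Third-Law push away from the danger, falling off faster.
    return 30.0 / r**3

def net_force(r):
    return attraction(r) - repulsion(r)   # positive: net pull inward

# Find the equilibrium radius by bisection: the robot "circles" at the
# distance where the two tendencies exactly cancel.
lo, hi = 1.0, 1000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if net_force(mid) > 0:
        hi = mid   # attraction dominates out here: equilibrium is closer in
    else:
        lo = mid   # repulsion dominates in here: equilibrium is farther out
print(f"robot orbits at radius ~{lo:.1f}")   # -> ~30.0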

Deadlock is also possible within a single law. An example under the
first law would be two humans threatened with equal danger and the robot unable
to contrive a strategy to protect one without sacrificing the other. Under the
second law, two humans might give contradictory orders of equivalent force. The
later novels address this question with greater sophistication:

What was troubling the robot was what roboticists called an equipotential of contradiction on the second level. Obedience was the Second Law and [the robot] was suffering from two roughly equal and contradictory orders. Robot-block was what the general population called it or, more frequently, roblock for short . . . [or] 'mental freeze-out.' No matter how subtle and intricate a brain might be, there is always some way of setting up a contradiction. This is a fundamental truth of mathematics.16

Clearly, robots subject to such laws need to be programmed to recognize deadlock and either choose arbitrarily among the alternative strategies or arbitrarily modify an arbitrarily chosen strategy variable (say, move a short distance in any direction) and reevaluate the situation: "If A and not-A are precisely equal misery-producers according to his judgment, he chooses one or the other in a completely unpredictable way and then follows that unquestioningly. He does not go into mental freeze-out."16
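
That resolution translates directly into an epsilon tie-break. The sketch below is my construction; the tolerance and misery estimates are invented. It picks unpredictably among near-equal alternatives and then commits.

import random

# Sketch of the "mental freeze-out" escape: if two strategies are
# (within tolerance) equal misery-producers, choose one at random
# and commit to it, rather than oscillating.

EPSILON = 1e-6   # how close two estimates must be to count as equal

def resolve(alternatives):
    """alternatives: list of (name, estimated_misery) pairs."""
    best = min(alternatives, key=lambda a: a[1])
    ties = [a for a in alternatives if a[1] - best[1] < EPSILON]
    choice = random.choice(ties)   # unpredictable, as Asimov specifies
    return choice[0]               # then followed "unquestioningly"

print(resolve([("save A", 1.0), ("save B", 1.0)]))  # A or B, never deadlock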

The finite time that even robot decision making requires could cause another
type of deadlock. Should a robot act immediately, by "instinct," to protect a
human in danger? Or should it pause long enough to more carefully analyze
available data - or collect more data - perhaps thereby discovering a better
solution, or detecting that other humans are in even greater danger? Such
situations can be approached using the techniques of information economics, but
there is inherent scope for ineffectiveness and deadlock, colloquially
referred to as "paralysis by analysis."
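
In information-economics terms, the robot faces a stopping problem: deliberation improves the chosen strategy with diminishing returns, while the danger worsens with time. The sketch below uses invented numbers purely to show that some finite deliberation time minimizes expected harm, so unbounded analysis (paralysis) is never optimal.

# Sketch: when to stop analyzing and act. All numbers are invented.

def expected_harm(deliberation_time):
    immediate_harm = 1.0                    # harm of acting on "instinct"
    improvement = 0.6 * (1 - 2 ** -deliberation_time)  # analysis helps,
                                                       # with diminishing returns
    growing_risk = 0.25 * deliberation_time  # danger worsens as time passes
    return immediate_harm - improvement + growing_risk

# Scan candidate deliberation times; the optimum is finite and small.
best_t = min((t / 10 for t in range(0, 101)), key=expected_harm)
print(f"deliberate for {best_t:.1f} time units, then act")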

Asimov suggested one class of deadlock that would not occur: If in a given situation a robot knew that it was powerless to prevent harm to a human, then the first law would be inoperative; the third law would become relevant, and it would not self-immolate in a vain attempt to save the human.25 It does seem, however, that the deadlock is not avoided by the laws themselves, but rather by the presumed sophistication of the robot's decision-analytical capabilities.

A special case of deadlock arises when a robot is ordered to wait. For example, "'[Robot] you will not move nor speak nor hear us until I say your name again.' There was no answer. The robot sat as though it were cast out of one piece of metal, and it would stay so until it heard its name again."26 As
written, the passage raises the intriguing question of whether passive hearing
is possible without active listening. What if the robot's name is next used in
the third person rather than the second?

In interpreting a command such as "Do absolutely nothing until I call you!" a
human would use common sense and, for example, attend to bodily functions in
the meantime. A human would do nothing about the relevant matter until
the event occurred. In addition, a human would recognize additional terminating
events, such as a change in circumstances that make it impossible for the event
to ever occur. A robot is likely to be constrained to a more literal
interpretation, and unless it can infer a scope delimitation to the command, it
would need to place the majority of its functions in abeyance

The faculties that would need to remain in operation (see the sketch after this list) are:

the sensory-perceptive subsystem needed to detect the condition;

the recommencement triggering function;

one or more daemons to provide a time-out mechanism (presumably the scope of the command is at least restricted to the expected remaining lifetime of the person who gave the command); and

the ability to play back the audit trail so that an overseer can discover the condition on which the robot's resuscitation depends.
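
A minimal sketch of such a wait-state follows. Asimov specifies no mechanism, so the trigger test, time-out policy, and audit format here are all invented; note that the naive substring match even reproduces the third-person ambiguity raised above.

import time

# Sketch of a robot wait-state retaining only the faculties listed above.

class WaitState:
    def __init__(self, trigger_phrase, timeout_seconds):
        self.trigger = trigger_phrase
        self.deadline = time.time() + timeout_seconds   # time-out daemon
        # Audit trail: an overseer can play this back to learn what the
        # robot's resuscitation depends on.
        self.audit = [f"suspended; will resume on hearing {trigger_phrase!r} "
                      f"or at deadline {self.deadline}"]

    def perceive(self, utterance):
        """Sensory-perceptive subsystem: only trigger detection stays active.
        A naive substring match fires even on a third-person mention of the
        name - the very ambiguity the article raises."""
        self.audit.append(f"heard: {utterance!r}")
        return self.trigger in utterance   # recommencement trigger

    def timed_out(self):
        """Daemon: bounds the wait to a finite scope."""
        return time.time() >= self.deadline

state = WaitState("Speedy", timeout_seconds=3600)
for heard in ["carry on without him", "Speedy, wake up"]:
    if state.perceive(heard) or state.timed_out():
        print("resuming normal operation")
        break
print("\n".join(state.audit))   # the playable audit trail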

Asimov does not appear to have investigated whether the behavior of a robot
in wait-mode is affected by the laws. If it isn't, then it will not only fail
to protect its own existence and to obey an order, but will also stand by and
allow a human to be harmed. A robotic security guard could therefore be
nullified by an attacker's simply putting it into a wait-state.

For a fiction writer, it is sufficient to have the laws embedded in robots'
positronic pathways (whatever they may be). To actually apply such a set of
laws in robot design, however, it would be necessary to ensure that every robot:

had the laws imposed in precisely the manner intended; and

was at all times subject to them - that is, they could not be overridden
or modified.

It is important to know how malprogramming and modification of the laws' implementation in a robot (whether intentional or unintentional) can be prevented, detected, and dealt with.
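
In modern terms, detecting modification of the laws' implementation is a code-integrity problem. The sketch below illustrates only the detection step, using a hash of the law module taken at manufacture; a real design would need signed code and trusted hardware, and the module source shown is a placeholder.

import hashlib

# Sketch: detecting tampering with the laws' implementation by comparing
# a hash of the law module against a reference taken at manufacture.

LAW_MODULE_SOURCE = b"""
def violates_first_law(action): ...
def violates_second_law(action): ...
def violates_third_law(action): ...
"""

REFERENCE_DIGEST = hashlib.sha256(LAW_MODULE_SOURCE).hexdigest()  # at manufacture

def audit(current_source: bytes) -> bool:
    """Return True if the law implementation is unmodified."""
    return hashlib.sha256(current_source).hexdigest() == REFERENCE_DIGEST

tampered = LAW_MODULE_SOURCE.replace(b"first", b"fourth")
print(audit(LAW_MODULE_SOURCE))   # True: intact
print(audit(tampered))            # False: robot should immobilize itself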

In an early short story, robots were "rescuing" humans whose work required
short periods of relatively harmless exposure to gamma radiation. Officials
obtained robots with the first law modified so that they were incapable of
injuring a human but under no compulsion to prevent one from coming to harm.
This clearly undermined the remaining part of the first law, since, for
example, a robot could drop a heavy weight toward a human, knowing that it
would be fast enough and strong enough to catch it before it harmed the person.
However, once gravity had taken over, the robot would be free to ignore the
danger.25 Thus, a partial implementation was shown to be risky, and
the importance of robot audit underlined. Other risks include trapdoors, Trojan
horses, and similar devices in the robot's programming.

A further imponderable is the effect of hostile environments and stress on the
reliability and robustness of robots' performance in accordance with the laws.
In one short story, it transpires that "The Machine That Won the War" had been receiving only limited and poor-quality data as a result of enemy action against its receptors and had been processing
it unreliably because of a shortage of experienced maintenance staff. Each of
the responsible managers had, in the interests of national morale, suppressed
that information, even from one another, and had separately and independently
"introduced a number of necessary biases" and "adjusted" the processing
parameters in accordance with intuition. The executive director, even though
unaware of the adjustments, had placed little reliance on the machine's output,
preferring to carry out his responsibility to mankind by exercising his own
judgment.27

A major issue in military applications generally28 is the
impossibility of contriving effective compliance tests for complex systems
subject to hostile and competitive environments. Asimov points out that the
difficulties of assuring compliance will be compounded by the design and
manufacture of robots by other robots.22

Sometimes humans may delegate control to a robot and find themselves unable
to regain it, at least in a particular context. One reason is that to avoid
deadlock, a robot must be capable of making arbitrary decisions. Another is
that the laws embody an explicit ability for a robot to disobey an instruction,
by virtue of the overriding first law.

In an early Asimov short story, a robot "knows he can keep [the energy beam]
more stable than we [humans] can, since he insists he's the superior being, so
he must keep us out of the control room [in accordance with the first
law]."29 The same scenario forms the basis of one of the most vivid
episodes in science fiction, HAL's attempt to wrest control of the spacecraft
from Bowman in 2001: A Space Odyssey. Robot autonomy is also reflected
in a lighter moment in one of Asimov's later novels, when a character says to
his companion, "For now I must leave you. The ship is coasting in for a
landing, and I must stare intelligently at the computer that controls it, or no
one will believe I am the captain."14

In extreme cases, robot behavior will involve subterfuge, as the
machine determines that the human, for his or her own protection, must be
tricked. In another early short story, the machines that manage Earth's economy
implement a form of "artificial stupidity" by making intentional errors,
thereby encouraging humans to believe that the robots are fallible and that
humans still have a role to play.23

The normal pattern of any technology is that successive generations show
increased sophistication, and it seems inconceivable that robotic technology
would quickly reach a plateau and require little further development. Thus
there will always be many old models in existence, models that may have
inherent technical weaknesses resulting in occasional malfunctions and hence
infringement on the Laws of Robotics. Asimov's short stories emphasize that
robots are leased from the manufacturer, never sold, so that old models can be
withdrawn after a maximum of 25 years.

Looking at the first 50 years of software maintenance, it seems clear that
successive modification of existing software to perform new or enhanced
functions is one or more orders of magnitude harder than creating a new
artifact to perform the same function. Doubts must exist about the ability of
humans (or robots) to reliably adapt existing robots. The alternative - destruction of existing robots - will be resisted in accordance with the third law, robot self-preservation.

At a more abstract level, the laws are arguably incomplete because the frame of reference is explicitly human. No recognition is given to plants, animals, or as-yet-undiscovered (for example, extraterrestrial) intelligent life forms. Moreover, some future human cultures may place great value on inanimate creation, or on holism. If, however, late twentieth-century values have meanwhile been embedded in robots, that future culture may have difficulty wresting the right to change the values of the robots it has inherited. If machines are to have value sets, there must be a mechanism for adaptation, at least through human-imposed change. The difficulty is that most such value sets will be implicit rather than explicit; their effects will be scattered across a system rather than implemented in a modular and therefore replaceable manner.

At first sight, Asimov's laws are intuitively appealing, but their application
encounters difficulties. Asimov, in his fiction, detected and investigated the
laws' weaknesses, which this article (Part 1 of 2) has analyzed and classified.
Part 2, in the next issue of Computer, will take the analysis further
by considering the effects of Asimov's 1985 revision to the laws. It will then
examine the extent to which the weaknesses in these laws may in fact be endemic
to any set of laws regulating robotic behavior.

Recapitulation

Isaac Asimov's Laws of Robotics, first formulated in 1940, were primarily a
literary device intended to support a series of stories about robot behavior.
Over time, he found that the three laws included enough apparent
inconsistencies, ambiguity, and uncertainty to provide the conflicts required
for a great many stories. In examining the ramifications of these laws, Asimov
revealed problems that might later confront real roboticists and information
technologists attempting to establish rules for the behavior of intelligent
machines.

With their fictional "positronic" brains imprinted with the mandate to (in
order of priority) prevent harm to humans, obey their human masters, and
protect themselves, Asimov's robots had to deal with great complexity. In a
given situation, a robot might be unable to satisfy the demands of two equally
powerful mandates and go into "mental freeze-out." Semantics is also a problem. As demonstrated in Part 1 of this article (Computer, December 1993, pp. 53-61), language is much more than a set of literal meanings, and Asimov showed us that
a machine trying to distinguish, for example, who or what is human may
encounter many difficulties that humans themselves handle easily and
intuitively. Thus, robots must have sufficient capabilities for judgment -
capabilities that can cause them to frustrate the intentions of their masters
when, in a robot's judgment, a higher order law applies.

As information technology evolves and machines begin to design and build other
machines, the issue of human control gains greater significance. In time, human values tend to change; the rules reflecting these values, and embedded in existing robotic devices, may need to be modified. But if they are implicit
rather than explicit, with their effects scattered widely across a system, they
may not be easily replaceable. Asimov himself discovered many contradictions
and eventually revised the Laws of Robotics.

After introducing the original three laws, Asimov detected, as early as 1950, a need to extend the first law, which protected individual humans, so
that it would protect humanity as a whole. Thus, his calculating machines "have
the good of humanity at heart through the overwhelming force of the
First Law of Robotics"1 (emphasis added). In 1985 he developed this
idea further by postulating a "zeroth" law that placed humanity's interests
above those of any individual while retaining a high value on individual human
life.2 The revised set of laws is shown in the sidebar.

Asimov pointed out that under a strict interpretation of the first law, a robot
would protect a person even if the survival of humanity as a whole was placed
at risk. Possible threats include annihilation by an alien or mutant human
race, or by a deadly virus. Even when a robot's own powers of reasoning led it
to conclude that mankind as a whole was doomed if it refused to act, it was
nevertheless constrained: "I sense the oncoming of catastrophe . . . [but] I can only follow the Laws."2

In Asimov's fiction the robots are tested by circumstances and must
seriously consider whether they can harm a human to save humanity. The turning
point comes when the robots appreciate that the laws are indirectly modifiable
by roboticists through the definitions programmed into each robot: "If the Laws
of Robotics, even the First Law, are not absolutes, and if human beings can
modify them, might it not be that perhaps, under proper conditions, we
ourselves might mod - "2 Although the robots are prevented by
imminent "roblock" (robot block, or deadlock) from even completing the
sentence, the groundwork has been laid.

Later, when a robot perceives a clear and urgent threat to mankind, it
concludes, "Humanity as a whole is more important than a single human being.
There is a law that is greater than the First Law: 'A robot may not injure humanity, or through inaction, allow humanity to come to harm.'"2

Defining "humanity"

Modification of the laws, however, leads to additional considerations.
Robots are increasingly required to deal with abstractions and philosophical
issues. For example, the concept of humanity may be interpreted in different
ways. It may refer to the set of individual human beings (a collective), or it
may be a distinct concept (a generality, as in the notion of "the State").
Asimov invokes both ideas by referring to a tapestry (a generality) made up of
individual contributions (a collective): "An individual life is one thread in the tapestry, and what is one thread compared to the whole? ... Keep your mind fixed firmly on the tapestry and do not let the trailing off of a single thread affect you."2

A human roboticist raised a difficulty with the zeroth law immediately after
the robot formulated it: "What is your 'humanity' but an abstraction? Can you point to humanity? You can injure or fail to injure a specific human being and
understand the injury or lack of injury that has taken place. Can you see the
injury to humanity? Can you understand it? Can you point to it?"2
The robot later responds by positing an ability to "detect the hum of the
mental activity of Earth's human population, overall. . . . And, extending
that, can one not imagine that in the Galaxy generally there is the hum of the
mental activity of all of humanity? How, then, is humanity an abstraction? It
is something you can point to." Perhaps as Asimov's robots learn to reason with
abstract concepts, they will inevitably become adept at sophistry and polemic.

The increased difficulty of judgment

One of Asimov's robot characters also points out the increasing complexity
of the laws: "The First Law deals with specific individuals and certainties.
Your Zeroth Law deals with vague groups and probabilities."2 At this point, as he often does, Asimov resorts to poetic license and for the moment pretends that coping with harm to individuals does not
involve probabilities. However, the key point is not affected: Estimating
probabilities in relation to groups of humans is far more difficult than with
individual humans.

It is difficult enough, when one must choose quickly . . . , to decide which individual may suffer, or inflict, the greater harm. To choose between an
individual and humanity, when you are not sure of what aspect of humanity you
are dealing with, is so difficult that the very validity of Robotic Laws comes
to be suspect. As soon as humanity in the abstract is introduced, the Laws of
Robotics begin to merge with the Laws of Humanics which may not even
exist.2

Robot paternalism

Despite these difficulties, the robots agree to implement the zeroth law,
since they judge themselves more capable than anyone else of dealing with the
problems. The original laws produced robots with considerable autonomy, albeit
a qualified autonomy allowed by humans. But under the 1985 laws, robots were
more likely to adopt a superordinate, paternalistic attitude toward
humans.

Asimov suggested this when he first hinted at the zeroth law, because he had his chief robopsychologist say that "... we can no longer understand our own creations. . . . [Robots] have progressed beyond the possibility of detailed human control."1 In a more recent novella, a robot proposes to treat his form "as a canvas on which I intend to draw a man," but is told by the roboticist, "It's a puny ambition. ... You're better than a man. You've gone downhill from the moment you opted for organicism."3

In the later novels, a robot with telepathic powers manipulates humans to act in a way that will solve problems,4 although its powers are constrained by the psychological dangers of mind manipulation. Naturally, humans would be alarmed by the very idea of a mind-reading robot; therefore, under the zeroth and first laws, such a robot would be permitted to manipulate the minds of humans who learned of its abilities, making them forget the knowledge, so that they could not be harmed by it. This is reminiscent of an Asimov story in which mankind is an experimental laboratory for higher beings5 and Adams' altogether more flippant Hitchhiker's Guide to the Galaxy, in which the Earth is revealed as a large experiment in which humans are being used as laboratory animals by, of all things, white mice.6 Someday those manipulators of humans might be robots.

Asimov's The Robots of Dawn is essentially about humans, with robots
as important players. In the sequel Robots and Empire, however, the
story is dominated by the two robots, and the humans seem more like their
playthings. It comes as little surprise, then, that the robots eventually
conclude that "it is not sufficient to be able to choose [among alternative
humans or classes of human] . . . ; we must be able to shape."2 Clearly, any subsequent novels in the series would have been
about robots, with humans playing "bit" parts.

Robot dominance has a corollary that pervades the novels: History "grew less
interesting as it went along; it became almost soporific."4 With life's
challenges removed, humanity naturally regresses into peace and quietude,
becoming "placid, comfortable, and unmoving" - and stagnant.

So who's in charge?

As we have seen, the term human can be variously defined, thus significantly
affecting the first law. The term humanity did not appear in the original laws,
only in the zeroth law, which Asimov has a robot formulate and enunciate.2 Thus, the robots define human and humanity to refer to
themselves as well as to humans, and ultimately to themselves alone. Another of
the great science fiction stories, Clarke's Rendezvous with
Rama,7 also assumes that an alien civilization, much older than
mankind, would consist of robots alone (although in this case Clarke envisioned
biological robots). Asimov's vision of a robot takeover differs from those of
previous authors only in that force would be unnecessary.

Asimov does not propose that the zeroth law must inevitably result in
the ceding of species dominance by humans to robots. However, some concepts may
be so central to humanness that any attempt to embody them in computer
processing might undermine the ability of humanity to control its own fate.
Weizenbaum argues this point more fully.8

The issues discussed here, and in Part 1, have grown increasingly
speculative, and some are more readily associated with metaphysics than with
contemporary applications of information technology. However, they demonstrate
that even an intuitively attractive extension to the original laws could have
very significant ramifications. Some of the weaknesses are probably inherent in
any set of laws and hence in any robotic control regime.

Asimov's laws extended

The behavior of robots in Asimov's stories is not satisfactorily explained
by the laws he enunciated. This section examines the design requirements
necessary to effectively subject robotic behavior to the laws. In so doing, it
becomes necessary to postulate several additional laws implicit in Asimov's
fiction.

Perceptual and cognitive apparatus

Clearly, robot design must include sophisticated sensory
capabilities. However, more than signal reception is needed. Many of the
difficulties Asimov dramatized arose because robots were less than omniscient.
Would humans, knowing that robots' cognitive capabilities are limited, be prepared to trust their judgment on life-and-death matters? For example, the fact that any single robot cannot harm a human does not protect humans from being injured or killed by robotic actions. In one story, a human tells a robot to add a chemical to a glass of milk and then tells another robot to serve the milk to a human. The result is murder by poisoning. Similarly, a robot untrained in first aid might move an accident victim and break the person's spinal cord. A human character in The Naked Sun is so incensed by these shortcomings that he accuses
roboticists of perpetrating a fraud on mankind by omitting key words from the
first law. In effect, it really means "A robot may do nothing that to its
knowledge would injure a human being, and may not, through inaction,
knowingly allow a human being to come to harm."9

Robotic architecture must be designed so that the laws can
effectively control a robot's behavior. A robot requires a basic grammar and
vocabulary to "understand" the laws and converse with humans. In one short
story, a production accident results in a "mentally retarded" robot. This
robot, defending itself against a feigned attack by a human, breaks its
assailant's arm. This was not a breach of the first law, because it did not
knowingly injure the human: "In brushing aside the threatening arm . . . it
could not know the bone would break. In human terms, no moral blame can be attached to an individual who honestly cannot differentiate good and evil."10 In Asimov's stories, instructions sometimes must be phrased carefully to be interpreted as mandatory. Thus, some authors have considered extensions to the apparatus of robots, for example, a "button labeled 'Implement Order' on the robot's chest,"11 analogous to the Enter key on a computer's keyboard.

A set of laws for robotics cannot be independent but must be conceived as part
of a system. A robot must also be endowed with data collection, decision-analytical, and action processes by which it can apply the laws. Inadequate sensory, perceptual, or cognitive faculties would undermine the laws' effectiveness.

Additional implicit laws

In his first robot short story, Asimov stated that "long before enough can go wrong to alter that First Law, a robot would be completely inoperable. It's a mathematical impossibility [for Robbie the Robot to harm a human]."12 For this to be true, robot design would have to incorporate a high-order controller (a "conscience"?) that would cause a robot to detect any potential for noncompliance with the laws and report the problem or immobilize itself. The implementation of such a meta-law ("A robot may not act unless its actions are subject to the laws of robotics") might well strain both the technology and the underlying science. (Given the meta-language problem in twentieth-century philosophy, perhaps logic itself would be strained.) This difficulty highlights the simple fact that robotic behavior cannot be entirely automated; it is dependent on design and maintenance by an external agent.
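
The postulated meta-law amounts to a watchdog wrapped around the action pathway. The sketch below, with invented interfaces, vets every action through the law checker and immobilizes the robot when the checking machinery itself is absent or failing.

# Sketch of the postulated meta-law as a high-order controller: every
# action must pass through the law checker, and if the checker itself
# is missing or broken, the robot reports and immobilizes.

class MetaLawViolation(Exception):
    pass

class Conscience:
    def __init__(self, law_checker):
        self.law_checker = law_checker   # callable: action -> permitted?

    def execute(self, action, actuator):
        # Self-test first: the meta-law forbids acting at all if actions
        # cannot be subjected to the laws.
        if self.law_checker is None or not self._self_test():
            raise MetaLawViolation("law subsystem unavailable: immobilizing")
        if self.law_checker(action):
            actuator(action)
        else:
            print(f"refused: {action} would violate a law")

    def _self_test(self):
        # Probe the checker on a known-forbidden action.
        try:
            return self.law_checker({"harms_human": True}) is False
        except Exception:
            return False

checker = lambda a: not a.get("harms_human", False)
robot = Conscience(checker)
robot.execute({"harms_human": False, "task": "fetch"}, actuator=print)
robot.execute({"harms_human": True}, actuator=print)   # refused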

Another of Asimov's requirements is that all robots must be subject to the laws
at all times. Thus, it would have to be illegal for human manufacturers to
create a robot that was not subject to the laws. In a future world that makes
significant use of robots, their design and manufacture would naturally be
undertaken by other robots. Therefore, the Laws of Robotics must include the
stipulation that no robot may commit an act that could result in any robot's
not being subject to the same laws.

The words "protect its own existence" raise a semantic difficulty. In The
Bicentennial Man, Asimov has a robot achieve humanness by taking its own
life. Van Vogt, however, wrote that "indoctrination against suicide" was
considered a fundamental requirement.13 The solution might be to
interpret the word protect as applying to all threats, or to amend the wording
to explicitly preclude self-
inflicted
harm. Having to continually instruct robot slaves would be both inefficient and
tiresome. Asimov hints at a further, deep-
nested
law that would compel robots to perform the tasks they were trained for:

Quite aside from the Three Laws, there isn't a pathway in those brains that
isn't carefully designed and fixed. We have robots planned for specific tasks,
implanted with specific capabilities.'14 (Emphasis
added.)

So perhaps we can extrapolate an additional, lower priority law: "A robot must perform the duties for which it has been programmed, except where that would conflict with a higher order law." Asimov's laws revolve around robots' transactions with humans and thus apply where robots have relatively little to do with one another or where there is only one robot. However, the laws fail to address the management of large numbers of robots. In several stories, a robot is assigned to oversee other robots. This would be possible only if each of the lesser robots were instructed by a human to obey the orders of its robot overseer. That would create a number of logical and practical difficulties, such as the scope of the human's order. It would seem more effective to incorporate in all subordinate robots an additional law, for example, "A robot must obey the orders given it by superordinate robots except where such orders would conflict with a higher order law." Such a law would fall between the second and third laws.
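
Such insertions "between" existing laws are naturally modeled as positions in an explicit priority list. In the sketch below (law texts abbreviated, the ordering my own extrapolation from this section), the higher-priority law simply prevails in any pairwise conflict.

# Sketch: the extended law set as an explicit priority list. A new law
# is inserted "between the second and third laws" simply by its position;
# a lower index means a higher priority.

EXTENDED_LAWS = [
    "First: may not injure a human, or by inaction allow one to come to harm",
    "Second: must obey human orders",
    "Second-A: must obey orders of superordinate robots",       # postulated
    "Third-A: must protect the existence of superordinate robots",  # postulated
    "Third: must protect its own existence",
]

def prevails(law_a, law_b):
    # Of two conflicting demands, the higher-priority (lower-index) law wins.
    return min(law_a, law_b, key=EXTENDED_LAWS.index)

print(prevails(EXTENDED_LAWS[2], EXTENDED_LAWS[4]))  # overseer's order beats
                                                     # the robot's self-protection
print(prevails(EXTENDED_LAWS[1], EXTENDED_LAWS[2]))  # a direct human order
                                                     # beats the overseer's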

Furthermore, subordinate robots should protect their superordinate robot. This
could be implemented as an extension or corollary to the third law; that is, to
protect itself, a robot would have to protect another robot on which it
depends. Indeed, a subordinate robot may need to be capable of sacrificing
itself to protect its robot overseer. Thus, an additional law superior to the
third law but inferior to orders from either a human or a robot overseer seems
appropriate: "A robot must protect the existence of a superordinate robot as
long as such protection does not conflict with a higher order law."

The wording of such laws should allow for nesting, since robot overseers may
report to higher level robots. It would also be necessary to determine the
form of the superordinate relationships (a brief data-structure sketch follows
this list):

a tree, in which each robot has precisely one immediate
overseer, whether robot or human;

a constrained network, in which each robot may have
several overseers but restrictions determine who may act as an overseer; or

an unconstrained network, in which each robot may have
any number of other robots or persons as overseers.
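
The three candidate structures can be contrasted with the small data-structure
sketch promised above. The identifiers, and the sample restriction applied to
the constrained network (at most two overseers per robot), are invented for
illustration only.

    # Sketch of the three candidate command structures. Names and the
    # sample constraint are invented.

    from typing import Dict, Set

    # Tree: each robot has exactly one immediate overseer, robot or human.
    tree: Dict[str, str] = {
        "robot-7": "robot-2",
        "robot-2": "human-A",
    }

    # Constrained network: several overseers are allowed, subject to
    # restrictions -- here, arbitrarily, no more than two per robot.
    constrained: Dict[str, Set[str]] = {
        "robot-7": {"robot-2", "human-A"},
    }

    def satisfies_constraint(net: Dict[str, Set[str]]) -> bool:
        return all(len(overseers) <= 2 for overseers in net.values())

    # Unconstrained network: any number of robot or human overseers.
    unconstrained: Dict[str, Set[str]] = {
        "robot-7": {"robot-2", "robot-3", "human-A", "human-B"},
    }

The lattice of checks and balances discussed in the next paragraph corresponds
to the constrained-network case rather than to the tree.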

This issue of a command structure is far from trivial, since it is central
to democratic processes that no single entity shall have ultimate authority.
Rather, the most senior entity in any decision-making hierarchy must be
subject to review and override by some other entity, exemplified by the
balance of power in the three branches of government and the authority of the
ballot box. Successful, long-lived systems involve checks and balances in a
lattice rather than a mere tree
structure. Of course, the structures and processes of human organizations may
prove inappropriate for robotic organization. In any case, additional laws of
some kind would be essential to regulate relationships among robots.

The sidebar shows an extended set of laws, one that incorporates the additional
laws postulated in this section. Even this set would not always ensure
appropriate robotic behavior. However, it does reflect the implicit laws that
emerge in Asimov's fiction while demonstrating that any realistic set of design
principles would have to be considerably more complex than Asimov's 1940 or
1985 laws. This additional complexity would inevitably exacerbate the problems
identified earlier in this article and create new ones.

[From the sidebar's extended law set: "A robot may not take any part in the
design or manufacture of a robot unless the new robot's actions are subject to
the Laws of Robotics."]

While additional laws may be trivially simple to extract and formulate, the
need for them serves as a warning. The 1940 laws' intuitive attractiveness and
simplicity were progressively lost in complexity, legalisms, and semantic
richness. Clearly then, formulating an actual set of laws as a basis for
engineering design would result in similar difficulties and require a much more
formal approach. Such laws would have to be based in ethics and human
morality, not just in mathematics and engineering, and the political process
of formulating them would probably result in a document couched in fuzzy
generalities rather than an operational-level, programmable specification.

Implications for information technologists

Many facets of Asimov's fiction are clearly inapplicable to real information
technology or too far in the future to be relevant to contemporary
applications. Some matters, however, deserve our consideration. For example,
Asimov's fiction could help us assess the practicability of embedding some
appropriate set of general laws into robotic designs. Alternatively, the
substantive content of the laws could be used as a set of guidelines to be
applied during the conception, design, development, testing, implementation,
use, and maintenance of robotic systems. This section explores the second
approach.

The Laws of Robotics designate no particular class of humans (not even a
robot's owner) as more deserving of protection or obedience than another. A
human might establish such a relationship by command, but the laws give such a
command no special status: another human could therefore countermand it. In
short, the laws reflect the humanistic and egalitarian principles that
theoretically underlie most democratic nations.

The laws therefore stand in stark contrast to our conventional notions about an
information technology artifact, whose owner is implicitly assumed to be its
primary beneficiary. An organization shapes an application's design and use for
its own benefit. Admittedly, during the last decade users have been given
greater consideration in terms of both the human-machine interface and
participation in system development. But that trend has been
justified by the better returns the organization can get from its information
technology investment rather than by any recognition that users are
stakeholders with a legitimate voice in decision making. The interests of other
affected parties are even less likely to be reflected.

In this era of powerful information technology, professional bodies of
information technologists need to consider:

liability for harm resulting from either malfunction or use in conformance
with the designer's intentions; and

complaint-handling and dispute-resolution procedures.

Once any resulting standards reach a degree of maturity, legislatures in the
many hundreds of legal jurisdictions throughout the world would probably have
to devise enforcement procedures.

The interests of people affected by modern information technology applications
have been gaining recognition. For example, consumer representatives are now
being involved in the statement of user requirements and the establishment of
the regulatory environment for consumer electronic-funds-transfer systems.
This participation may extend to the logical design of such systems. Other
examples are trade-union negotiations with employers regarding
technology-enforced change, and the publication of software quality-assurance
standards.

For large-scale applications of information technology, governments have been
called upon to
apply procedures like those commonly used in major industrial and social
projects. Thus, commitment might have to be deferred pending dissemination and
public discussion of independent environmental or social impact statements.
Although organizations that use information technology might see this as
interventionism, decision making and approval for major information technology
applications may nevertheless become more widely representative.

Closed-system versus open-system thinking

Computer-based systems no longer comprise independent machines each serving a
single location.
The marriage of computing with telecommunications has produced multicomponent
systems designed to support all elements of a widely dispersed organization.
Integration hasn't been simply geographic, however. The practice of information
systems has matured since the early years when existing manual systems were
automated largely without procedural change. Developers now seek payback via
the rationalization of existing systems and varying degrees of integration
among previously separate functions. With the advent of strategic and
interorganizational systems, economies are being sought at the level of
industry sectors, and functional integration increasingly occurs across
corporate boundaries.

Although programmers can no longer regard the machine as an almost entirely
closed system with tightly circumscribed sensory and motor capabilities, many
habits of closed-system thinking remain. When systems have multiple
components, linkages to other systems, and sophisticated sensory and motor
capabilities, the scope needed for understanding and resolving problems is
much broader than for a mere hardware/software machine. Human activities in
particular must be perceived as part of the system. This applies to manual
procedures within systems (such as reading dials on control panels), human
activities on the fringes of systems (such as decision making based on
computer-collated and -displayed information), and the security of the user's
environment (automated teller
machines, for example). The focus must broaden from mere technology to
technology in use.

General systems thinking leads information technologists to recognize that
relativity and change must be accommodated. Today, an artifact may be applied
in multiple cultures where language, religion, laws, and customs differ. Over
time, the original context may change. For example, models for a criminal
justice system - one based on punishment and another based on redemption - may
alternately dominate social thinking. Therefore, complex systems must be
capable of adaptation.
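
One routine way to honor this requirement is to keep culturally and legally
variable rules out of the program logic and in replaceable configuration data,
so that adaptation does not require reprogramming. In the sketch below, the
jurisdictions and the retention rule are invented examples, not
recommendations.

    # Sketch: context-dependent rules externalized as data. Jurisdictions
    # and values are invented.

    POLICY = {
        "jurisdiction-X": {"record_retention_years": 7, "appeals_allowed": True},
        "jurisdiction-Y": {"record_retention_years": 3, "appeals_allowed": False},
    }

    def retention_years(jurisdiction: str) -> int:
        # Refuse rather than guess when the context is unknown.
        if jurisdiction not in POLICY:
            raise KeyError("no policy configured for " + jurisdiction)
        return POLICY[jurisdiction]["record_retention_years"]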

Blind acceptance of technological and other imperatives

Contemporary utilitarian society seldom challenges the presumption that what
can be done should be done. Although this technological
imperative is less pervasive than people generally think, societies
nevertheless tend to follow where their technological capabilities lead.
Related tendencies include the economic imperative (what can be done more
efficiently should be) and the marketing imperative (any effective demand
should be met). An additional tendency might be called the "information
imperative," the dominance of administrative efficiency, information richness,
and rational decision making. However, the collection of personal data has
become so pervasive that citizens and employees have begun to object.

The greater a technology's potential to promote change, the more carefully a
society should consider the desirability of each application. Complementary
measures that may be needed to ameliorate its negative effects should also be
considered. This is a major theme of Asimov's stories, as he explores the
hidden effects of technology. The potential impact of information technology is
so great that it would be inexcusable for professionals to succumb blindly to
the economic, marketing, information, technological, and other imperatives.
Application software professionals can no longer treat the implications of
information technology as someone else's problem but must consider them as part
of the project.15

Human acceptance of robots

In Asimov's stories, humans develop affection for robots, particularly
humaniform robots. In his very first robot story, a little girl is too closely
attached to Robbie the Robot for her parents' liking.12 In another
early story, a woman starved of affection by her husband, and sensitively
assisted by a humanoid robot to increase her self-confidence, entertains
thoughts approaching love toward it/him.16

Nonhumaniforms, such as conventional industrial robots and large, highly
dispersed robotic systems (such as warehouse managers, ATMs, and EFT/POS
systems), seem less likely to elicit such warmth. Yet several studies have found
a surprising degree of identification by humans with computers.17,18
Thus, some hitherto exclusively human characteristics are being
associated with computer systems that don't even exhibit typical robotic
capabilities.

Users must be continually reminded that the capabilities of hardware/software
components are limited:

they contain many inherent assumptions;

they are not flexible enough to cope with all of the manifold exceptions
that inevitably arise; and

they do not adapt to changes in their environment.

Users must likewise be reminded that authority is not vested in
hardware/software components but rather in the individuals who use them.

Educational institutions and staff training programs must identify these
limitations; yet even this is not sufficient: The human-machine interface must
reflect them. Systems must be designed so that users are
required to continually exercise their own expertise, and system output should
not be phrased in a way that implies unwarranted authority. These objectives
challenge the conventional outlook of system designers.
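
One way to phrase output so that it does not imply unwarranted authority is to
attach the system's assumptions and estimated confidence to every
recommendation, leaving the conclusion explicitly open to the user's own
judgment. The field names in this sketch are invented:

    # Sketch: output that invites the user's expertise rather than
    # asserting authority. Field names are invented.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Recommendation:
        conclusion: str
        confidence: float                 # 0.0-1.0, as estimated by the system
        assumptions: List[str] = field(default_factory=list)

        def render(self) -> str:
            caveats = "; ".join(self.assumptions) or "none recorded"
            return ("Suggested (confidence {:.0%}), subject to these "
                    "assumptions: {}. Conclusion for your review: {}"
                    .format(self.confidence, caveats, self.conclusion))

    print(Recommendation("approve the claim", 0.72,
                         ["claim history complete", "no manual overrides"]).render())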

Human opposition to robots

Robots are agents of change and therefore potentially upsetting to those with
vested interests. Of all the machines so far invented or conceived of, robots
represent the most direct challenge to humans. Vociferous and even violent
campaigns against robotics should not be surprising. Beyond concerns of
self-interest is the possibility that some humans could feel revulsion toward
robots, particularly those with humanoid characteristics. Some opponents may
be mollified as robotic behavior becomes more tactful. Another tenable
argument is that by creating and deploying artifacts that are in some ways
superior, humans degrade themselves.

System designers must anticipate a variety of negative reactions against their
creations from different groups of stakeholders. Much will depend on the number
and power of the people who feel threatened - and on the scope of the change
they anticipate. If, as Asimov speculates,9 a robot-based economy develops
without equitable adjustments, the backlash could be
considerable.

Such a rejection could involve powerful institutions as well as individuals. In
one Asimov story, the US Department of Defense suppresses a project
intended to produce the perfect robot-soldier.
It reasons that the degree of discretion and autonomy needed for battlefield
performance would tend to make robots rebellious in other circumstances
(particularly during peace time) and unprepared to suffer their commanders'
foolish decisions.19 At a more basic level, product lines and
markets might be threatened, and hence the profits and even the survival of
corporations. Although even very powerful cartels might not be able to impede
robotics for very long, its development could nevertheless be delayed or
altered. Information technologists need to recognize the negative perceptions
of various stakeholders and manage both system design and project politics
accordingly.

The structuredness of decision making

For five decades there has been little doubt that computers hold significant
computational advantages over humans. However, the merits of machine decision
making remain in dispute. Some decision processes are highly structured and can
be resolved using known algorithms operating on defined data items with defined
interrelationships. Most structured decisions are candidates for automation,
subject, of course, to economic constraints. The advantages of machines must
also be balanced against risks. The choice to automate must be made carefully
because the automated decision process (algorithm, problem description,
problem-domain description, or analysis of empirical data) may later prove to
be inappropriate
for a particular type of decision. Also, humans involved as data providers,
data communicators, or decision implementers may not perform rationally because
of poor training, poor performance under pressure, or willfulness.

Unstructured decision making remains the preserve of humans for one or more
of the following reasons (a triage sketch follows this list):

humans have not yet worked out a suitable way to program (or teach) a
machine how to make that class of decision;

some relevant data cannot be communicated to the machine;

"fuzzy" or "open-
textured"
concepts or constructs are involved;

such decisions involve judgments that system participants feel should not
be made by machines on behalf of humans.
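
A design consequence of this classification, as foreshadowed above, is a
triage step: automate only what is demonstrably structured and route
everything else to a person. The dispatcher below is a minimal sketch under
that assumption; the predicates standing in for the reasons just listed are
placeholders, not a tested classification scheme.

    # Sketch: route each decision according to its structuredness.
    # The classification predicates are placeholders.

    from enum import Enum, auto
    from typing import Dict

    class Structuredness(Enum):
        STRUCTURED = auto()
        SEMISTRUCTURED = auto()
        UNSTRUCTURED = auto()

    def classify(decision: Dict[str, bool]) -> Structuredness:
        # Stand-ins for the reasons above: fuzzy concepts, or judgments
        # participants feel machines should not make, force escalation.
        if decision.get("fuzzy_concepts") or decision.get("machine_inappropriate"):
            return Structuredness.UNSTRUCTURED
        if decision.get("algorithm_known") and decision.get("all_data_defined"):
            return Structuredness.STRUCTURED
        return Structuredness.SEMISTRUCTURED

    def dispatch(decision: Dict[str, bool]) -> str:
        kind = classify(decision)
        if kind is Structuredness.STRUCTURED:
            return "automate"
        if kind is Structuredness.SEMISTRUCTURED:
            return "human decides; machine supplies data and models"
        return "human decides"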

One important type of unstructured decision is problem diagnosis. As Asimov
described the problem, "How..... can we send a robot to find a flaw in a
mechanism when we cannot possibly give precise orders, since we know nothing
about the flaw ourselves? 'Find out what's wrong' is not an order you can give
to a robot; only to a man."20 Knowledge-based
technology has since been applied to problem diagnosis, but Asimov's insight
retains its validity: A problem may be linguistic rather than technical,
requiring common sense, not domain knowledge. Elsewhere, Asimov calls robots
"logical but not reasonable" and tells of household robots removing important
evidence from a murder scene because a human did not think to order them to
preserve it.9

The literature of decision support systems recognizes an intermediate case,
semistructured decision making. Humans are assigned the decision task, and
systems are designed to provide support for gathering and structuring
potentially relevant data and for modeling and experimenting with alternative
strategies. Through continual progress in science and technology, previously
unstructured decisions are reduced to semistructured or structured decisions.
The choice of which decisions to automate is therefore provisional, pending
further advances in the relevant area of knowledge. Conversely, because of
environmental or cultural change, structured decisions may not remain so. For
example, a family of viruses might mutate so rapidly that the reference data
within diagnostic support systems is outstripped and even the logic becomes
dangerously inadequate.
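
The virus example suggests one concrete safeguard: treat a decision's
structured status as provisional by stamping the reference data with a review
horizon and withdrawing automation once that horizon lapses. A minimal sketch,
with invented names and dates:

    # Sketch: a structured decision that demotes itself when its reference
    # data may have been outstripped by environmental change.

    from datetime import date, timedelta

    REFERENCE_DATA_REVIEWED = date(2024, 1, 15)   # hypothetical review date
    REVIEW_HORIZON = timedelta(days=180)          # hypothetical validity period

    def run_structured_diagnosis(sample: str) -> str:
        # Placeholder for the defined algorithm over defined data items.
        return "automated diagnosis for " + sample

    def diagnose(sample: str, today: date) -> str:
        if today - REFERENCE_DATA_REVIEWED > REVIEW_HORIZON:
            # No longer safely structured: escalate to a human.
            return "refer to specialist: reference data overdue for review"
        return run_structured_diagnosis(sample)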

Delegating to a machine any kind of decision that is less than fully structured
invites errors and mishaps. Of course, human decision-makers routinely make
mistakes too. One reason for humans' retaining responsibility
for unstructured decision making is rational: Appropriately educated and
trained humans may make more right decisions and/or fewer seriously wrong
decisions than a machine. Using common sense, humans can recognize when
conventional approaches and criteria do not apply, and they can introduce
conscious value judgments. Perhaps a more important reason is the arational
preference of humans to submit to the judgments of their peers rather than of
machines: If someone is going to make a mistake costly to me, better for it to
be an understandably incompetent human like myself than a mysteriously
incompetent machine.8

Because robot and human capabilities differ, for the foreseeable future at
least, each will have specific comparative advantages. Information
technologists must delineate the relationship between robots and people by
applying the concept of decision structuredness to blend computer-based and
human elements advantageously. The goal should be to achieve complementary
intelligence rather than to continue pursuing the chimera of unneeded
artificial intelligence. As Wyndham put it in 1932: "Surely man and machine are
natural complements: They assist one another."21

Risk management

Whether or not subjected to intrinsic laws or design
guidelines, robotics embodies risks to property as well as to humans. These
risks must be managed; appropriate forms of risk avoidance and diminution need
to be applied, and regimes for fallback, recovery, and retribution must be
established.

Controls are needed to ensure that intrinsic laws, if any, are operational at
all times and that guidelines for design, development, testing, use, and
maintenance are applied. Second-order control mechanisms are needed to audit
first-order
control mechanisms. Furthermore, those bearing legal responsibility for harm
arising from the use of robotics must be clearly identified. Courtroom
litigation may determine the actual amount of liability, but assigning legal
responsibilities in advance will ensure that participants take due care.
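
One way to realize this layering is to have the first-order checks emit
records into a tamper-evident trail that a separate, second-order mechanism
reviews. The sketch below shows only the layering; the names are invented, and
a real audit trail would need far stronger protections than this simple hash
chain.

    # Sketch: a first-order control (the compliance check) leaves evidence
    # in a hash-chained log; a second-order control audits that evidence.

    import hashlib
    from typing import List, Tuple

    audit_log: List[Tuple[str, str]] = []   # (entry, chained digest)

    def record(entry: str) -> None:
        previous = audit_log[-1][1] if audit_log else ""
        digest = hashlib.sha256((previous + entry).encode()).hexdigest()
        audit_log.append((entry, digest))

    def first_order_check(action_ok: bool) -> bool:
        record("checked action, ok={}".format(action_ok))
        return action_ok

    def second_order_audit() -> bool:
        """Verify that the first-order mechanism left an unbroken trail."""
        previous = ""
        for entry, digest in audit_log:
            expected = hashlib.sha256((previous + entry).encode()).hexdigest()
            if digest != expected:
                return False
            previous = digest
        return True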

In most of Asimov's robot stories, robots are owned by the manufacturer even
while in the possession of individual humans or corporations. Hence legal
responsibility for harm arising from robot noncompliance with the laws can be
assigned with relative ease. In most real-world jurisdictions, however, there
are enormous uncertainties, substantial gaps in
protective coverage, high costs, and long delays.

Each jurisdiction, consistent with its own product liability philosophy, needs
to determine who should bear the various risks. The law must be sufficiently
clear so that debilitating legal battles do not leave injured parties without
recourse or sap the industry of its energy. Information technologists need to
communicate to legislators the importance of revising and extending the laws
that assign liability for harm arising from the use of information technology.

Enhancements to codes of ethics

Associations of information technology professionals, such as the IEEE
Computer Society, the Association for Computing Machinery, the British Computer
Society, and the Australian Computer Society, are concerned with professional
standards, and these standards almost always include a code of ethics. Such
codes aren't intended so much to establish standards as to express standards
that already exist informally. Nonetheless, they provide guidance concerning
how professionals should perform their work, and there is significant
literature in the area.

The issues raised in this article suggest that existing codes of ethics need to
be reexamined in the light of developing technology. Codes generally fail to
reflect the potential effects of computer-enhanced machines and the inadequacy
of existing managerial, institutional, and legal processes for coping with
inherent risks. Information technology professionals need to stimulate and
inform debate on the issues. Along with robotics, many other technologies
deserve consideration. Such an endeavor would mean
reassessing professionalism in the light of fundamental works on ethical
aspects of technology.

Asimov's Laws of Robotics have been a very successful literary device. Perhaps
ironically, or perhaps because it was artistically appropriate, the sum of
Asimov's stories disproves the contention with which he began: It is not
possible to reliably constrain the behavior of robots by devising and applying
a set of rules.

The freedom of fiction enabled Asimov to project the laws into many future
scenarios; in so doing, he uncovered issues that will probably arise someday in
real-world situations. Many aspects of the laws discussed in this article are
likely to
be weaknesses in any robotic code of conduct. Contemporary applications of
information technology such as CAD/CAM, EFT/POS, warehousing systems, and
traffic control are already exhibiting robotic characteristics. The
difficulties identified are therefore directly and immediately relevant to
information technology professionals.

Increased complexity means new sources of risk, since each activity depends
directly on the effective interaction of many artifacts. Complex systems are
prone to component failures and malfunctions, and to intermodule
inconsistencies and misunderstandings. Thus, new forms of backup, problem
diagnosis, interim operation, and recovery are needed. Tolerance and
flexibility in design must replace the primacy of short-term objectives such
as programming productivity. If information technologists do not respond to
the challenges posed by robotic systems, as investigated in Asimov's stories,
information technology artifacts will be poorly suited for real-world
applications. They may be used in ways not intended by their designers, or
simply be rejected as incompatible with the individuals and organizations they
were meant to serve.

Isaac Asimov, 1920-1992

Born near Smolensk in Russia, Isaac Asimov came to the United States with
his parents three years later. He grew up in Brooklyn, becoming a US citizen
at the age of eight. He earned bachelor's, master's, and doctoral degrees in
chemistry from Columbia University and qualified as an instructor in
biochemistry at Boston University School of Medicine, where he taught for many
years and performed research in nucleic acids.

As a child, Asimov had begun reading the science fiction stories on the racks
in his family's candy store, and those early years of vicarious visits to
strange worlds had filled him with an undying desire to write his own adventure
tales. He sold his first short story in 1938 and after wartime service as a
chemist and a short hitch in the Army, he focused increasingly on his
writing.

Asimov was among the most prolific of authors, publishing hundreds of books on
various subjects and dozens of short stories. His Laws of Robotics underlie
four of his full-length novels as well as many of his short stories. The World
Science Fiction Convention bestowed Hugo Awards on Asimov in nearly every
category of science fiction, and his short story "Nightfall" is often referred
to as the best science fiction story ever written. The scientific authority
behind his writing gave his stories a feeling of authenticity, and his work
undoubtedly did much to popularize science for the reading public.

References to Part 1

1. I. Asimov, The Rest of the Robots (a collection of short stories originally
published between 1941 and 1957), Grafton Books, London, 1968.

2. N. Frude, The Robot Heritage, Century Publishing, London, 1984.

3. I. Asimov, I, Robot (a collection of short stories originally
published between 1940 and 1950), Grafton Books, London, 1968.
