No technology has ever had a greater influence on philosophy
than modern information and communication technology.

First, this technology has profoundly altered the ways in which
philosophers carry out their daily work. Philosophers use computers
to write their papers and books; they use email to keep in contact
with their colleagues; they use the intranets to which they are
connected to borrow books from the university library and to
consult invaluable resources such as the Philosopher's
Index; and, last but certainly not least, they have not lagged
behind the rest of the educated world in discovering the World Wide
Web as a rich source of information.

Secondly, the computer has taken a firm grip on the
philosophical imagination. Are people computers? Are their minds
comparable to computer software? Is the whole universe a kind of
computer? Should we not try to formulate our philosophical theories
(for example in epistemology) as computer programs, so as to make
them precise and testable? (Formerly, formal logic was the only
tool we had to regiment our philosophical thinking.)

Thirdly, the widespread use of information and communication
technology raises ethical problems of its own. One need only think
of issues such as privacy protection, the gap between the
information rich and the information poor, and the continual
well-intentioned attempts of governments to limit freedom of speech
on the Internet to get a feeling for what is at stake here. These
issues have given rise to a new field of ethics, computer ethics,
which has a status similar to that of other fields of applied
ethics such as medical ethics and business ethics.

Bynum and Moor's book The Digital Phoenix is a
collection of 26 essays (plus an introduction) by 27 authors which
gives a fine survey of the many ways in which information and
communication technology is currently influencing philosophy.

The theme of The Digital Phoenix is not completely new.
More than twenty years ago, Aaron Sloman wrote The Computer
Revolution in Philosophy (Harvester Press, 1978), in which he
stated: "I am prepared to go so far as to state that within a few
years, if there remain any philosophers who are not familiar with
some of the main developments in artificial intelligence, it will
be fair to accuse them of professional incompetence, and that to
teach courses in philosophy of mind, epistemology, aesthetics,
philosophy of science, philosophy of language, ethics, metaphysics,
and other main areas of philosophy, without discussing the relevant
aspects of artificial intelligence will be as irresponsible as
giving a degree course in physics which includes no quantum theory"
(p. 5).

Even more closely related to The Digital Phoenix is
Leslie Burkholder, ed., Philosophy and the Computer
(Westview Press, 1992). This collection of 16 essays by 28 authors
is similarly devoted to the "computational turn" in philosophy, as
Burkholder put it.

Although there is some overlap between The Digital
Phoenix and Sloman's and Burkholder's earlier books, the new
volume nevertheless contains much new material. How could it be
otherwise in such a rapidly developing new field? Books like
The Digital Phoenix should appear every five years or so.
(And indeed, Moor told me he is already working on a successor
volume.)

There is one thing about The Digital Phoenix which is
definitely not new: its title. In 1995, Bjørn Lynne brought
out a CD, Dreamstate (Centaur Discs, CENCD009), whose
sixth track has the same title. Bynum and Moor seem to have been
unaware of this fact.

The Digital Phoenix is divided into two parts. Part I,
the largest part of the book, is concerned with the impact of the
computer on philosophical issues. It shows that the computer is
having a great influence on the content of philosophy. The
much smaller Part II discusses the more mundane issue of the
computer's influence on the ways in which professional philosophers
carry out their daily activities. I will discuss these two parts of
the book in the same order.

Before I begin, one remark about Bynum and Moor's
introduction: on p. 2, they claim that "the widely accepted
Church-Turing thesis states that whatever is computable is
computable by a Turing Machine." The Church-Turing thesis states no
such thing. It only states that whatever is "effectively
computable" (i.e., computable by a human being who mechanically
follows some set of instructions and uses only pencil and paper) is
computable by a Turing Machine. This is well explained in Jack
Copeland's article on the Church-Turing thesis (and its
perversions) in the Stanford Encyclopedia of Philosophy at
http://plato.stanford.edu/.

Part I begins with two articles on epistemology, a
discipline often regarded as the heart of philosophy.

In his "Procedural Epistemology", John L. Pollock describes his
OSCAR project, which aims at the construction of a general theory
of rationality and its implementation in an artificial rational
agent. Human reasoning is defeasible in the
sense that new information may cause us to retract previously held
beliefs. Pollock's objective is to develop precise rules about how
this can and should be done. He tries to formulate such theories in
terms of computer programs for the following three reasons. First,
this allows us to test whether the theory actually works. "As
mundane as this constraint may seem, I am convinced that most
epistemological theories fail to satisfy it." Secondly, in order to
make a computer model, we have to make the theory precise and work
out the details. "That can have a very therapeutic effect on a
profession that is overly fond of handwaving." Thirdly, it is often
difficult to foresee the consequences of one's theories when
applied to complicated situations. If they are formulated in terms
of computer programs, one may simply run the program and see what
happens.
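To give a feel for what Pollock means by formulating an epistemological theory as a program, here is a toy sketch of my own (far simpler than OSCAR, and not from the book): a defeasible rule licenses its conclusion unless a defeater is believed, so adding new information can make a previously justified conclusion disappear.

```python
# Toy defeasible reasoner (illustrative only; not Pollock's OSCAR).
# A rule licenses its conclusion unless one of its defeaters is believed.

def conclusions(facts, rules):
    """Return the defeasibly justified beliefs given the current facts.

    rules: list of (premise, conclusion, defeaters) triples.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion, defeaters in rules:
            if (premise in derived
                    and conclusion not in derived
                    and not any(d in derived for d in defeaters)):
                derived.add(conclusion)
                changed = True
    return derived

# "Tweety is a bird, so presumably Tweety flies" -- defeated by "penguin".
rules = [("bird", "flies", ["penguin"])]

print("flies" in conclusions({"bird"}, rules))             # True
print("flies" in conclusions({"bird", "penguin"}, rules))  # False
```

Running such a sketch is exactly the kind of test Pollock has in mind: one sees immediately whether the rules yield (and retract) the beliefs the theory says they should.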

Pollock's work on defeasible reasoning is remarkable because it
belongs to both philosophy and AI. There is certainly more work of
this type. Much current work on conditional reasoning, epistemic
logic, deontic logic and the logic of action similarly belongs both
to philosophy and AI. It is regrettable that Bynum and Moor do not
even mention this work.

In "Epistemology and Computing", Henry Kyburg agrees with
Pollock that "fast digital computers are a wonderful boon to doing
certain kinds of philosophy, for example epistemology, in the sense
that they provide a kind of philosophical laboratory." However, he
criticizes Pollock's approach on philosophical grounds. He sketches
a different model, FLORENCE, that embodies some different
principles. FLORENCE does not yet exist, so Kyburg is certainly
doing some "handwaving" of the type described by Pollock. However,
suppose that it existed and worked as well as OSCAR. Which model
should one prefer in such a case? Interestingly, Kyburg suggests
that "the answer to this question may well call for philosophical
experimentation of a kind that can only be done by computers."

From epistemology, we move to a related field, the philosophy
of science.

In "Computation and the Philosophy of Science", Paul Thagard
briefly describes the computational approach to the philosophy of
science of which he was one of the pioneers. In this field, one
uses AI techniques to model, amongst other things, the context of
discovery, a subject that was more or less taboo in traditional
philosophy of science. In her "Anomaly-Driven Theory Redesign:
Computational Philosophy of Science Experiments", Lindley Darden
presents an example of a program of this type. Her TRANSGENE
program addresses the problem of how a scientific theory (Mendelian
genetic theory in this case) is properly modified given an anomaly.
Her project is different from traditional philosophy of science in
that she cannot afford to be vague about the details of the growth
of scientific theories.

Next, we encounter two papers on reason and argument.

In his "Representation of Philosophical Argumentation", Theodore
Scaltsas describes the Archelogos project, in which many scholars
cooperate to generate a hypertext database which will contain
analyses of the arguments used in ancient Greek philosophical
texts, including commentaries on these arguments made by ancient
commentators. This is a useful project which, however, uses the
computer in a totally routine way.

In their "Computers, Visualization, and the Nature of
Reasoning", Jon Barwise and John Etchemendy describe their
well-known programs Turing's World, Tarski's World and Hyperproof,
which are used in logic courses all over the world. The authors
describe how the graphical output of their programs inspired them
to explore a wholly new area of logic, namely the study of
inference processes involving non-sentential
representations (such as diagrams).

Even metaphysics is affected by the computer. The first
paper in this category is Eric Steinhart's "Digital Metaphysics".
This title sounds interesting, but it is not clear to me what the
author wants to say. First, he says that all physically possible
worlds consist of "universal computers" (p. 118). Right after
this, he says that the basic components of these worlds "are not
classical computers (i.e., not Turing or von Neumann machines), but
are more powerful in ways not yet clear" (p. 119). A few pages
later, however, we read that "actual infinities entail paradoxes"
(p. 121), that "there are no infinitely complex things in
nature" (p. 123) and that nature therefore consists of "finite
state-machines" (p. 125, p. 129). It is hard to make
sense of this inconsistent set of remarks. And even if it could be
made sense of, it seems clear that these are empirical
rather than metaphysical issues. Only physics can decide whether
space-time is discrete, whether nature can be adequately modeled by
finite automata, and so on.

Mark A. Bedau's "Philosophical Content and Method of Artificial
Life", on the other hand, delivers exactly what its title promises.
Artificial life studies computational structures and processes
which exhibit lifelike behavior. One of its most fascinating
discoveries is that complex global behavior may sometimes emerge
from very simple rules. Philosophers should take notice of this. As
Bedau puts it, "It is hard to avoid the fallacy of putting too much
stock on our a priori intuitions when contemplating
complex systems" (p. 147). He vividly illustrates this point
by means of Daniel Dennett's Darwin's Dangerous Idea
(Simon and Schuster, 1995). "Dennett assumes that evolution by
natural selection can explain human concerns like mind, language,
and morals. But Dennett's assumption is only an article of faith.
He never attempts to construct an evolutionary explanation for
mind, language, and morality; he never 'puts his model where his
mouth is' and checks whether natural selection really could explain
these phenomena, even in principle... He's only guessing... Maybe
natural selection can explain [these phenomena], maybe it can't; we
just don't know yet" (p. 147). In other words, Darwin's
Dangerous Idea is a good example of the typical behavior of
the armchair philosopher which Pollock referred to--handwaving.
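Bedau's point about emergence can be made concrete with the most familiar example of artificial life, Conway's Game of Life (my example, not the book's): two simple local rules, yet structures such as gliders emerge whose behavior is hard to anticipate a priori.

```python
# Conway's Game of Life on an unbounded grid of (x, y) cells.
# Rules: a dead cell with exactly 3 live neighbours is born;
# a live cell with 2 or 3 live neighbours survives; all others die.

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation; live is the set of currently live cells."""
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

# A glider: nothing in the two rules mentions "motion", yet this
# five-cell pattern travels across the grid.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```

That a "moving object" falls out of two bookkeeping rules is a small instance of the emergence Bedau warns our a priori intuitions are so bad at anticipating.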

The section on the philosophy of mind contains a paper by Paul
M. Churchland, "The Neural Representation of the Social World", and
a paper by William G. Lycan, "Qualitative Experience in Machines".
Churchland's paper, interesting though it is, is just an excerpt
from chapters 6 and 10 of his The Engine of Reason, the Seat of
the Soul (MIT Press, 1995). It describes how neural networks
can be taught to recognize emotional facial expressions and how
such networks can learn to make moral judgments without knowing any
ethical rule. This is clearly relevant for moral philosophy.
Lycan's very well-written article shows that there is no reason at
all to maintain that machines cannot have qualitative experiences.
Every undergraduate student should read this paper!
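Churchland's claim that networks can learn judgments without being given any explicit rule can be illustrated with the simplest possible case, a single perceptron (my own toy, a far cry from the networks Churchland discusses): it is shown only labeled examples, yet it ends up classifying correctly.

```python
# A single perceptron trained on labeled examples rather than given a
# rule (a toy ancestor of the networks Churchland describes).

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Examples labeled "acceptable" (1) vs "unacceptable" (0) -- here
# simply the AND pattern, which is linearly separable.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

No rule "output 1 only when both inputs are 1" is ever written down; the classification is encoded implicitly in the learned weights, which is the point Churchland generalizes to moral judgment.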

In the section on the philosophy of artificial intelligence, we
first encounter the lifelong critic of AI, Hubert L. Dreyfus. His
"Response to My Critics" is difficult to understand because it
presupposes knowledge of the critics' original critiques. Dreyfus
is inspired by Merleau-Ponty and Heidegger, although he seems to
have some difficulties reconciling these two. I think we may safely
ignore these two figures from the past. Merleau-Ponty is based on
antiquated neuroscience and Heidegger's work has led to so many
conflicting interpretations that it makes one think of a Rorschach
blot.

James H. Moor's "Assessing Artificial Intelligence: Chess and
the Turing Test" is more interesting and informative. He discusses
both Deep Blue's encounters with Kasparov and the annual Loebner
Prize Contest (also known as the Turing tournament). Both series of
events make it clear that there are huge differences between
current artificial intelligence and human intelligence.

Under philosophy of computation, we first find a paper by
Selmer Bringsjord entitled "Philosophy and 'Super' Computation". In
this paper, Bringsjord describes some abstract devices which are
more powerful than the universal Turing machine. He argues that
people are such devices. Unfortunately, the crucial passage of his
argument is missing! It should have been at the bottom of
p. 246, but it has disappeared as a result of erroneous
cutting and pasting. Bringsjord's conclusion that people have
"super-minds" seems to be based on his thesis that the set of
interesting stories is decidable but not enumerable. I think this
basis is too weak to support such a strong conclusion.

The second paper in the philosophy of computation section is
James H. Fetzer's "Philosophy and Computer Science: Reflections on
the Program Verification Debate". Here he recounts the early
history of the program verification debate, which started with his
observation that formal methods cannot guarantee that real life
computers behave as they should.

The last section of Part I is devoted to ethics and
creativity.

Terrell Ward Bynum's "Global Information Ethics" is a very fine
introduction to computer ethics. It first discusses some historical
milestones (such as Norbert Wiener's pioneering work), then
discusses several alternative definitions of the field, next
presents sample topics in computer ethics (computers in the
workplace, computer security, software ownership, professional
responsibility), and ends with a brief look at the future.
According to Bynum, computer ethics is rapidly evolving into a
broader field, namely Global Information Ethics. This
transformation is due to the advent of global networks like the
World Wide Web. Sample topics of study include global laws, global
cyber-business, global education and the gap between the
information rich and the information poor. Bynum's essay is highly
recommended to anyone who wants a brief survey of computer ethics
and a preview of developments in the near future.

In "How Computers Extend Artificial Morality", Peter Danielson
briefly describes his work on the emergence of moralized
interaction in populations of artificial agents. According to
Danielson, "important parts of morality are artificial cognitive
and social devices... which allow cooperation unattainable
otherwise" and which thus indirectly benefit individual agents. He
uses the computer to test his ideas because "Ethics is so charged
with prejudice--intuition--that we need powerful tools to keep our
theories honest and open to surprising--i.e.
counter-intuitive--ideas. Ethics is an area where we should expect
informal tools--such as the thought experiment--to be unreliable,
because the equipment we run them on--our morally shaped minds,
constrained by principle and norm--isn't up to the task of
following out unwanted consequences" (p. 292). Danielson's
chapter is perhaps too short to do him justice. It is better to
read his book, Artificial Morality (Routledge, 1992).

Finally, in her "Computing and Creativity", Margaret A. Boden
argues that we have gained much insight into creativity by trying
to make creative computer programs.

Part II of The Digital Phoenix (which occupies only
one-fifth of the book) has a completely different character from
Part I. It contains the following chapters: "Teaching Philosophy in
Cyberspace" (Ron Barnette); "Philosophy Teaching on the World Wide
Web" (Jon Dorbolo); "Multimedia and Research in Philosophy" (Robert
Cavalier); "Teaching of Philosophy with Multimedia" (John L.
Fodor); "Resources in Ethics on the World Wide Web" (Lawrence M.
Hinman); "The APA Internet Bulletin Board and Web Site" (Saul
Traiger); "Using Computer Technology for Philosophical Research: An
APA Report" (Robert Cavalier); "Using Computer Technology for
Teaching Philosophy: An APA Report" (Ron Barnette); "Using Computer
Technology for Professional Cooperation: An APA Report" (Lawrence
M. Hinman). Developments are occurring so fast in these areas that
much of this material is already outdated. As far as ethics is
concerned, the following points may be worth mentioning. First,
multimedia CD-ROMs are ideal for the presentation of ethical cases
in all their details. Secondly, the World Wide Web is a continually
surprising source of information. For example, there is much more
legislation and case law available on the Web than one might
expect.

Bynum and Moor's book does not present a complete survey of
the interface between philosophy and computer technology. I missed
the following topics. (1) The enormous influence which the
computer metaphor has had in the philosophy of mind. (2) The
interaction between logic and artificial intelligence. Modal logic,
epistemic logic, deontic logic, non-monotonic logic, temporal
logic, conditional logic and the logic of action are topics which
originated within philosophy but which are nowadays being studied
by both philosophers and artificial intelligence researchers and
computer scientists. There is a lively exchange of ideas. There is
much more in this area than just Pollock's work. (3) The
philosophy of virtual reality and hypertext as described, for
example, in Michael Heim's The Metaphysics of Virtual
Reality (Oxford U.P., 1993). (4) The work by Patrick Grim
and his collaborators on philosophical computer modeling.

Nevertheless, Bynum and Moor have done a fine job. Their book
certainly delivers what its subtitle promises.

I have only one worry. At the present time, the whole world
seems to be under the spell of information and communication
technology. Is it wise for philosophers to follow this trend?

Take, for example, the case of Arthur Prior. The tense logic
which he invented in the 1950s was the result of a love of ancient
and medieval logic and a concern to make conceptual room for
freedom of the human will. It is nowadays being used in formal
reasoning about the behavior of concurrent programs. (See Jack
Copeland's article on Prior in the Stanford Encyclopedia of
Philosophy at http://plato.stanford.edu/.) It
seems safe to say that Prior's thinking about ancient and medieval
philosophy and the freedom of the will was in the long run more
useful to computer science than any thinking of his about the
computer would have been!
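For readers who wonder what tense logic has to do with concurrent programs: Prior's operators F ("it will at some point be the case that") and G ("it will always be the case that") reappear as the temporal operators of the logics used in program verification. A toy evaluator over a finite execution trace makes the connection visible (my own simplification; real model checkers reason about infinite behaviors):

```python
# Prior-style tense operators evaluated over a finite execution trace.
# A trace is a list of states; a property is a predicate on a state.

def F(prop, trace):
    """'It will (at some point) be the case that prop.'"""
    return any(prop(s) for s in trace)

def G(prop, trace):
    """'It will always be the case that prop.'"""
    return all(prop(s) for s in trace)

def until(p, q, trace):
    """p holds at every state until q becomes true."""
    for s in trace:
        if q(s):
            return True
        if not p(s):
            return False
    return False

# A mutual-exclusion style trace of (a_in_critical, b_in_critical):
trace = [(False, False), (True, False), (False, False), (False, True)]
mutex = lambda s: not (s[0] and s[1])

print(G(mutex, trace))           # True: never both in the critical section
print(F(lambda s: s[1], trace))  # True: process b eventually gets in
```

Safety ("never both inside") and liveness ("b eventually gets in") are exactly the properties of concurrent programs that Prior's operators, invented for medieval logic and free will, turned out to express.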

Now contrast this with the Mission Statement of the Philosophy
Department at Carnegie Mellon, which states that they are
interested in "automated theorem proving, machine learning,
language technology, game and decision theory" and that "the
teaching and learning of this material is supported by appropriate
technology (computer tutors, interactive multimedia software)"
(cited in The Digital Phoenix, p. 390). One wonders
whether anything as truly original as Prior's tense logic can ever
come out of such an environment.

As a second case in point, consider the answer of Donald Knuth,
the acknowledged father of computer science, to the question of why he
does not use email. "I have been a happy man ever since January 1,
1990, when I no longer had an email address. I'd used email since
about 1975, and it seems to me that 15 years of email is plenty for
one lifetime. Email is a wonderful thing for people whose role in
life is to be on top of things. But not for me; my role is to be on
the bottom of things. What I do takes long hours of studying and
uninterruptible concentration. I try to learn certain areas of
computer science exhaustively; then I try to digest that knowledge
into a form that is accessible to people who don't have time for
such study" (Donald E. Knuth, "Email (Let's Drop the Hyphen)", at
http://Sunburn.Stanford.EDU/~knuth/email.html).

The same seems to apply with even more force to philosophers.
Aren't they precisely the people who are expected by society to be
"on the bottom of things"? Shouldn't they turn their backs on these
new technologies and work in relative isolation, just like Prior
did and Knuth does?

Related to this is the worry, expressed by many, about the
possibly stifling influence of the Internet on human creativity. As
Tsichritzis put it, "The global networks help propagate innovation,
but they breed conformity. How can researchers get something new
and significant if they are in constant communication?" (Dennis
Tsichritzis, "The Dynamics of Innovation", in Peter J. Denning and
Robert M. Metcalfe, eds., Beyond Calculation: The Next Fifty
Years of Computing, Springer-Verlag, 1997, quotation from
p. 261.)

This brings me to my last point. Philosophers are supposed to be
critical. It would have been nice if The Digital Phoenix
had included at least one paper by a dissenter who is not at all
happy with the current computer-related developments in
philosophy.