Thursday, July 17, 2014

Costica Bradatan is an associate professor in the Honors College at Texas Tech University and the religion and comparative studies editor for The Los Angeles Review of Books. He is the author of the forthcoming "Dying for Ideas: The Dangerous Lives of the Philosophers."

If there was ever a time to think seriously about failure,
it is now.

We are firmly in an era of accelerated progress. We are
witness to advancements in science, the arts, technology, medicine and nearly
all forms of human achievement at a rate never seen before. We know more about
the workings of the human brain and of distant galaxies than our ancestors
could imagine. The design of a superior kind of human being – healthier,
stronger, smarter, more handsome, more enduring – seems to be in the works.
Even immortality may now appear feasible, a possible outcome of better and better
biological engineering.

Certainly the promise of continual human progress and
improvement is alluring. But there is a danger there, too — that in this more
perfect future, failure will become obsolete.

Why should we care? And more specifically, why should
philosophy care about failure? Doesn’t it have better things to do? The answer
is simple: Philosophy is in the best position to address failure because it
knows it intimately. The history of Western philosophy at least is nothing but
a long succession of failures, if productive and fascinating ones. Any major
philosopher typically asserts herself by addressing the “failures,” “errors,”
“fallacies” or “naiveties” of other philosophers, only to be, in turn,
dismissed by others as yet another failure. Every new philosophical generation
takes it as its duty to point out the failures of the previous one; it is as
though, no matter what it does, philosophy is doomed to fail. Yet from failure
to failure, it has thrived over the centuries. As Emmanuel Levinas memorably
put it (in an interview with Richard Kearney), “the best thing about philosophy
is that it fails.” Failure, it seems, is what philosophy feeds on, what keeps
it alive. As it were, philosophy succeeds only in so far as it fails.

So, allow me to make a case for the importance of failure.

Failure is significant for several reasons. I’d like to
discuss three of them.

Failure allows us to see our existence in its naked
condition.

Whenever it occurs, failure reveals just how close our
existence is to its opposite. Out of our survival instinct, or plain
sightlessness, we tend to see the world as a solid, reliable, even
indestructible place. And we find it extremely difficult to conceive of that
world existing without us. “It is entirely impossible for a thinking being to
think of its own non-existence, of the termination of its thinking and life,”
observed Goethe. Self-deceived as we are, we forget how close to not being we
always are. The failure of, say, a plane engine could be more than enough to
put an end to everything; even a falling rock or a car’s faulty brakes can do
the job. And while it may not always be fatal, failure always carries with it a
certain degree of existential threat.

Failure is the sudden irruption of nothingness into the midst
of existence. To experience failure is to start seeing the cracks in the fabric
of being, and that’s precisely the moment when, properly digested, failure
turns out to be a blessing in disguise. For it is this lurking, constant threat
that should make us aware of the extraordinariness of our being: the miracle
that we exist at all when there is no reason that we should. Knowing that gives
us some dignity.

In this role, failure also possesses a distinct therapeutic
function. Most of us (the most self-aware or enlightened excepted) suffer
chronically from a poor adjustment to existence; we compulsively fancy
ourselves much more important than we are and behave as though the world exists
only for our sake; in our worst moments, we place ourselves like infants at the
center of everything and expect the rest of the universe to be always at our
service. We insatiably devour other species, denude the planet of life and fill
it with trash. Failure could be a medicine against such arrogance and hubris,
as it often brings humility.

Our capacity to fail is essential to what we are.

We need to preserve, cultivate, even treasure this capacity.
It is crucial that we remain fundamentally imperfect, incomplete, erring
creatures; in other words, that there is always a gap left between what we are
and what we can be. Whatever human accomplishments there have been in history,
they have been possible precisely because of this empty space. It is within
this interval that people, individuals as well as communities, can accomplish
anything. Not that we’ve turned suddenly into something better; we remain the
same weak, faulty material. But the spectacle of our shortcomings can be so
unbearable that sometimes it shames us into doing a little good. Ironically, it
is the struggle with our own failings that may bring out the best in us.

The gap between what we are and what we can be is also the
space in which utopias are conceived. Utopian literature, at its best, may
document in detail our struggle with personal and societal failure. While often
constructed in worlds of excess and plenitude, utopias are a reaction to the
deficits and precariousness of existence; they are the best expression of what
we lack most. Thomas More’s book is not so much about some imaginary island,
but about the England of his time. Utopias may look like celebrations of human
perfection, but read in reverse they are just spectacular admissions of
failure, imperfection and embarrassment.

And yet it is crucial that we keep dreaming and weaving
utopias. If it weren’t for some dreamers, we would live in a much uglier world
today. But above all, without dreams and utopias we would dry out as a species.
Suppose one day science solves all our problems: We will be perfectly healthy,
live indefinitely, and our brains, thanks to some enhancement, will work like a
computer. On that day we may be something very interesting, but I am not sure
we will have anything left to live for. We will be virtually perfect and essentially
dead.

Ultimately, our capacity to fail makes us what we are; our
being as essentially failing creatures lies at the root of any aspiration.
Failure, fear of it and learning how to avoid it in the future are all part of
a process through which the shape and destiny of humanity are decided. That’s
why, as I hinted earlier, the capacity to fail is something that we should
absolutely preserve, no matter what the professional optimists say. Such a
thing is worth treasuring, even more so than artistic masterpieces, monuments
or other accomplishments. For, in a sense, the capacity to fail is much more
important than any individual human achievements: It is that which makes them
possible.

We are designed to fail.

No matter how successful our lives turn out to be, how
smart, industrious or diligent we are, the same end awaits us all: “biological
failure.” The “existential threat” of that failure has been with us all along,
though in order to survive in a state of relative contentment, most of us have
pretended not to see it. Our pretense, however, has never stopped us from
moving toward our destination; faster and faster, “in inverse ratio to the
square of the distance from death,” as Tolstoy’s Ivan Ilyich expertly describes
the process. Yet Tolstoy’s character is not of much help here. The more
essential question is rather how to approach the grand failure, how to face it and
embrace it and own it — something poor Ivan fails to do.

A better model may be Ingmar Bergman’s Antonius Block, from
the film “The Seventh Seal.” A knight returning from the Crusades and plunged
into a crisis of faith, Block is faced with the grand failure in the form of a
man. He does not hesitate to engage Death head-on. He doesn’t flee, doesn’t beg
for mercy — he just challenges him to a game of chess. Needless to say, he
cannot succeed in such a game — no one can — but victory is not the point. You
play against the grand, final failure not to win, but to learn how to fail.

Bergman the philosopher teaches us a great lesson here. We
will all end in failure, but that’s not the most important thing. What really
matters is how we fail and what we gain in the process. During the brief time
of his game with Death, Antonius Block must have experienced more than he did
all his life; without that game he would have lived for nothing. In the end, of
course, he loses, but accomplishes something rare. He not only turns failure
into an art, but manages to make the art of failing an intimate part of the art
of living.

Monday, April 14, 2014

Dictionary of the Social Sciences, Edited by Craig Calhoun, Oxford
University Press

Instrumentalism. Refers (especially following Max Weber) to action conceived as a
means to a separate and distinct end, as opposed to action conceived as an end
in itself. The difference between instrumental and noninstrumental action has
been a recurring subject of philosophical interest and debate since Aristotle,
who recognized it as fundamental to considerations of human action. It has
consequently been defined and redefined in a number of ways, and enlisted in a
variety of competing and sometimes incompatible contexts. In the twentieth
century, the term itself was strongly associated with the pragmatism of John
Dewey, who argued that ideas should be judged not on the basis of truth and
falsehood, but rather in terms of the ends they serve. Even where Dewey is
concerned, however, certain kinds of ideas escape instrumental
reasoning—quintessentially art, the noninstrumental object par excellence of
the Western philosophical tradition since Immanuel Kant. Certain activities
straddle the instrumental–noninstrumental divide; Hannah Arendt's defense of
the intrinsic value of democratic action over the various specific ends that it
serves is a prominent example.

As an approach to science, instrumentalism contrasts with theories
of knowledge that regard objects as possessing a true or intrinsic nature that
science can quantify and categorize. Other users of the term, however, strongly
associate it with the scientific and technical mastery of the world rooted in
subject–object relations. Instrumentalism, in this context, is deeply embedded
in the Western idea of self. The dominance of instrumental reason has
consequently been a subject of profound concern for the critical theorists of
the Frankfurt school and other critics of Western modernity, such as Martin
Heidegger. Here, instrumental reason appears not as a liberatory alternative to
static or metaphysical conceptions of truth, but as a pervasive logic of
existence, linked to capitalism, the marketplace, and technology, which
destroys other sources and forms of value.

An approach to any relationship, practice, or object that
prioritizes ends over means. Instrumentalists will use whatever methods or
resources are expedient in order to realize their goals. At best, they are
pragmatists able to adapt to the opportunities and constraints of a situation.
At worst, however, they can act immorally or with impunity to achieve their
ends—hence the critics’ motto ‘The ends can never justify the means’. In human
geography and many other social sciences, for example, modern capitalist
society has been criticized for using nature as a mere means to the end of
amassing more wealth and improving standards of living. The result has been a
failure both to treat the non-human world as an end in itself and to respect
its needs or rights.

Monday, March 24, 2014

Emrys Westacott, Department of Philosophy, Alfred University in
Western New York. Philosophy Now, Issue 79, June/July 2010

Imagine that right after briefing Adam about which fruit was
allowed and which forbidden, God had installed a closed-circuit television
camera in the garden of Eden, trained on the tree of knowledge. Think how this
might have changed things for the better. The serpent sidles up to Eve and
urges her to try the forbidden fruit. Eve reaches her hand out – in paradise
the fruit is always conveniently within reach – but at the last second she
notices the CCTV and thinks better of it. Result: no sin, no Fall, no expulsion
from paradise. We don’t have to toil among thorns and thistles for the rest of
our lives, earning our bread by the sweat of our brows; childbirth is painless;
and we feel no need to wear clothes.

So why didn’t God do that and save everyone a lot of grief? True,
surveillance technology was in its infancy back then, but He could have managed
it, and it wouldn’t have undermined Eve’s free will. She still has a choice to
make; but once she sees the camera she’s more likely to make the right choice.
The most likely explanation would be that God doesn’t just want Adam and Eve to
make the right choices; he wants them to make the right choices for the right
reasons. Not eating the forbidden fruit because you’re afraid you’ll be caught
doesn’t earn you moral credit. After all, you’re only acting out of
self-interest. If paradise suffered a power cut and the surveillance was
temporarily down, you’d be in there straight away with the other looters.

So what would be the right reason for not eating the fruit? Well,
God is really no different than any other parent. All he wants is absolute,
unquestioning obedience (which, by an amazing coincidence, also happens to be
exactly what every parent wants from their children). But God wants this
obedience to be voluntary. And, very importantly, He wants it to flow from the
right motive. He wants right actions to be driven not by fear, but by love for
Him and reverence for what is right. (Okay, He did say to Adam, “If you eat
from the tree of knowledge you will die” – which can sound a little like a
threat – but grant me some literary license here.)

Moral philosophers will find themselves on familiar ground here. On
this interpretation, God is a follower of the eighteenth century German
philosopher Immanuel Kant. (This would, of course, come as no surprise to
Kant.) According to Kant, our actions are right when they conform to the moral
rules dictated to us by our reason, and they have moral worth insofar as they
are motivated by respect for that moral law. In other words, my actions have
moral worth if I do what is right because I want to do the right thing. If I
don’t steal someone’s iPod (just another kind of Apple, really) because I think
it would be wrong to do so, then I get a moral pat on the back and am entitled
to polish my halo. If I don’t steal the iPod because I’m afraid of getting
caught, then I may be doing the right thing, and I may be applauded for being
prudent, but I shouldn’t be given any moral credit.

Highway Star

These musings are intended to frame a set of questions: What is the
likely impact of ubiquitous surveillance on our moral personalities? How might
the advent of the surveillance society affect a person’s moral education and
development? How does it alter the opportunities for moral growth? Does it
render obsolete the Kantian emphasis on acting from a sense of duty as opposed
to acting out of self-interest? Such questions fall under the rubric of a new
field of research called Surveillance Impact Assessment.

Here is one way of thinking: surveillance edifies – that is, it
builds moral character – by bringing duty and self-interest closer together.
This outlook would probably be favoured by philosophers such as Plato and
Thomas Hobbes. The reasoning is fairly simple: the better the surveillance, the
more likely it is that moral transgressions will be detected and punished.
Knowing this, people are less inclined to break the rules, and over time they form
ingrained rule-abiding habits. The result is fewer instances of moral failure,
and patterns of behaviour conducive to social harmony. A brief history of
traffic surveillance illustrates the idea nicely:

Stage One (‘the state of nature’): Do whatever you please – it’s a
free for all. Drive as fast as you want, in whatever condition you happen to be
in. Try to avoid head-on collisions. Life is fast, fun and short.

Stage Two: The government introduces speed limits, but since they
are not enforced they’re widely ignored.

Stage Three: Cops start patrolling the highways to enforce the
speed limits. This inhibits a few would-be tearaways, but if you’re clever you
can still beat the rap; for instance, by knowing where the police hang out, by
tailing some other speedster, or by souping up your car so the fuzz can’t catch
you.

Stage Four: More cops patrol the highways, and now they have radar
technology. Speeding becomes decidedly imprudent, especially on holiday
weekends or if you’re driving past small rural villages that need to raise
revenue.

At this point you have three options:

A) Give up speeding altogether;

B) Buy a car with cruise control and effortlessly avoid
transgression;

C) Carry on as before, monitoring your speed continually and
keeping an eye out at all times for likely police hiding spots. Those who
choose this option are less likely than the cruise controllers to doze off, but
they’ll find driving more stressful.

Stage Five: To outflank the fuzz-busters, police use cameras, and
eventually satellite monitors, which become increasingly hard to evade.
Detection and prosecution become automated, so speeding becomes just stupid.
The majority now obey the law and drive more safely.

Stage Six: Cars are equipped by law with devices that read the
speed limit on any stretch of road they’re on. The car’s computer then acts as
a governor, preventing the car from exceeding the limit. Now virtually every
driver is an upstanding law-abiding citizen. If you want to speed you have to
really go out of your way and tamper with the mechanism – an action analogous
to what Kant would call ‘radical evil’, which is where a person positively
desires to do wrong.

It’s easy to see the advantages of each successive stage in this
evolution of traffic surveillance. At the end of the process, there are no more
tearaways or drunk drivers endangering innocent road users. Driving is more
relaxing. There are fewer accidents, less pain, less grief, less guilt, reduced
demands on the health care system, lower insurance premiums, fewer days lost at
work, a surging stock market, and so on. A similar story could be told with
respect to drunk driving, with breathalyzers performing the same function as
speed radar, and the ideal conclusion being a world in which virtually every
car is fitted with a lock that shuts the engine off if the driver’s blood
alcohol concentration is above a certain limit. With technology taking over,
surveillance becomes cheaper, and the police are freed up to catch crooked
politicians and bankers running dubious schemes. Lawbreaking moves from being
risky, to being foolish, to being almost inconceivable.

But there is another perspective – the one informed by Kantian
ethics. On this view, increased surveillance may carry certain utilitarian
benefits, but the price we pay is a diminution of our moral character. Yes, we
do the wrong thing less often; in that sense, surveillance might seem to make
us better. But it also stunts our growth as moral individuals.

From this point of view, moral growth involves moving closer to the
saintly ideal of being someone who only ever wants to do what is right. Kant
describes such an individual as having (or being) a ‘holy will’, suggesting
thereby that this condition is not attainable for ordinary human beings. For
us, the obligation to be moral always feels like a burden. Wordsworth captures
this well when he describes moral duty as the “stern daughter of the voice of
God.” Why morality feels like a burden is no mystery: there is always something
we (or at least some part of us) would sooner be doing than being virtuous. We
always have inclinations that conflict with what we know our duty to be. But
the saintly ideal is still something we can and should aim at. Ubiquitous
surveillance is like a magnetic force that changes the trajectory of our moral
aspiration. We give up pursuing the holy grail of Kant’s ideal, and settle for
a functional but uninspiring pewter mug. Since we rarely have to choose between
what’s right and what’s in our self-interest, our moral selves become not so
much worse as smaller, withered from lack of exercise. Our moral development is
arrested, and we end up on moral autopilot.

Purity vs Pragmatism?

Now I expect many people’s response to this sort of anxiety about
moral growth will be scathing. Here are four possible reasons for not losing
sleep over it:

1) It is a merely abstract academic concern. Surely, no matter how
extensive and intrusive surveillance becomes, everyday life will still yield
plenty of occasions when we experience growth-promoting moral tension: for
instance, in the choices we have to make over how to treat family, friends, and
acquaintances.

2) The worry is perfectly foolish – analogous to Nietzsche’s
complaint that long periods of peace and prosperity shrink the soul since they
offer few opportunities for battlefield heroics and sacrifice. Our ideal should
be a world in which people live pleasanter lives, and where the discomfort of
moral tension is largely a thing of the past. We might draw an analogy with the
religious experience of sinfulness. The sense of sin may have once helped
deepen human self-awareness, but that doesn’t mean we should try to keep it
alive today. The sense of sin has passed its sell-by date; and the same can be
said of the saintly ideal.

3) The saintly ideal is and always was misguided anyway. What
matters is not what people desire, but what they do. Excessive concern for
people’s appetites and desires is a puritan hangover. Surveillance improves
behaviour, period. That is all we need to concern ourselves with.

4) Kantians should welcome surveillance, since ultimately it leads
to the achievement of the very ideal they posit: the more complete the
surveillance, the more duty and self-interest coincide. Surveillance technology
replaces the idea of an all-seeing God who doles out just rewards and
punishments, and it is more effective, since its presence, and the bad
consequences of ignoring it, are much more tangibly evident. Consequently, it
fosters good habits, and these habits are internalized to the point where
wrongdoing becomes almost inconceivable.

That is surely just what parents and teachers aim for much of the
time. As I send my kids out into the world, I don’t say to myself, ‘I do hope
they remember they have a duty not to kill, kidnap, rape, steal, torture
animals or mug old ladies.’ I assume that for them, as for the great majority
in a stable, prosperous society, such wrongdoings are inconceivable: they
simply don’t appear on the horizon of possible actions; and that is what I
want. This inconceivability of most kinds of wrongdoing is a platform we want
to be able to take for granted, and surveillance is a legitimate and effective
means of building it. So, far from undermining the saintly ideal, surveillance
offers a fast track to it.

Scrutiny vs Probity?

This would be a nice place to end. A trend is identified, an
anxiety is articulated, but in the end the doubts are convincingly put to rest.
Hobbes and Kant link arms and head off to the bar to drink a toast to their
newly-discovered common ground.

But matters are not that simple. Wittgenstein warns philosophers
against feeding on a diet of one-sided examples, and we need to be wary of that
danger here. Indeed, I think that some other examples indicate not just that
Kant may have a point, but that most of us implicitly recognize this point.

For instance, imagine you are visiting two colleges. At Scrutiny
College, the guide proudly points out that each examination room is equipped
with several cameras, all linked to a central monitoring station. Electronic
jammers can be activated to prevent examinees from using cell phones or
Blackberries. The IT department writes its own cutting-edge plagiarism-detection
software. And there is zero tolerance for academic dishonesty: one strike and
you’re out on your ear. As a result, says the guide, there is less cheating at
Scrutiny than on any other campus in the country. Students quickly see that
cheating is a mug’s game, and after a while no-one even considers it.

By contrast, Probity College operates on a straightforward honour
system. Students sign an integrity pledge at the beginning of each academic
year. At Probity, professors commonly assign take-home exams, and leave rooms
full of test takers unproctored. Nor does anyone bother with
plagiarism-detecting software such as Turnitin.com. The default assumption is
that students can be trusted not to cheat.

Which college would you prefer to attend? Which would you recommend
to your own kids?

Or compare two workplaces. At Scrutiny Inc., all computer activity
is monitored, with regular random audits to detect and discourage any
inappropriate use of company time and equipment, such as playing games,
emailing friends, listening to music, or visiting internet sites that cause
blood to flow rapidly from the brain to other parts of the body. At Probity
Inc., on the other hand, employees are simply trusted to get their work done.
Scrutiny Inc. claims to have the lowest rate of time-theft and the highest
productivity of any company in its field. But where would you choose to work?

One last example. In the age of cell phones and GPS technology, it
is possible for parents to monitor their child’s whereabouts at all times. They
have cogent reasons for doing so. It slightly reduces certain kinds of risk to
the teenager, and significantly reduces parental anxiety. It doesn’t scar the
youngster’s psyche – after all, they were probably first placed under
electronic surveillance in their crib when they were five days old! Most
pertinently, it keeps them on the straight and narrow. If they go somewhere
other than where they’ve said they’ll go, or if they lie afterwards about where
they’ve been, they’ll be found out, and suffer the penalties – like, their cell
phone plan will be downgraded from platinum to regular (assuming they have real
hard-ass parents). But how many parents really think that this sort of
surveillance of their teenage kids is a good idea?

Surveillance Suggestions

What do these examples tell us? I think they suggest a number of
things.

First, the Kantian ideal still resonates with us. If we regarded
the development of moral character as completely empty, misguided or
irrelevant, we would be less troubled by the practices of Scrutiny College or
Mom and Pop Surveillance.

Second, the fear that surveillance can actually become so extensive
as to threaten an individual’s healthy moral development is reasonable, for the
growth of surveillance is not confined to small, minor or contained areas of
our lives: it seems to be irresistibly spreading everywhere, percolating into
the nooks and crannies of everyday existence, which is where much of a person’s
moral education occurs.

Third, our attitude to surveillance is obviously different in
different settings, and this tells us something important about our hopes,
fears, expectations and ideals regarding the relationship between scrutinizer
and scrutinizee. The four relationships we have discussed are: state and
citizen; employer and employee; teacher and student; and parent and child. In
the first two cases, we don’t worry much about the psychological effect of
surveillance. For instance, I expect most of us would readily support improved
surveillance of income in order to reduce tax evasion. But we generally assume
that government, like employers, should stay out of the moral edification
business.

It is possible to regard colleges in the same way. On this view,
college is essentially a place where students expand their knowledge and
develop certain skills. As in the workplace, surveillance levels should be
determined according to what best promotes these institutional goals. However,
many people see colleges as having a broader mission – as not just a place to
acquire some technical training and a diploma. This broader mission includes
helping students achieve personal growth, a central part of which is moral
development. Edification is then seen not just as a happy side-effect of the
college experience, but as one of its important and legitimate purposes. This,
I think, is the deeper reason why we are perturbed by the resemblance between
Scrutiny College and a prison. Our concern is not just that learning will
suffer in an atmosphere of distrust: it is also that the educational mission of
the college has become disappointingly narrow.

Finally, most of us agree that the moral education of children is
and should be one of the goods a family secures. If not there, then where? So
one good reason for parents not to install a camera over the cookie jar is that
children need to experience the struggle between obligation and inclination.
They even need to experience what it feels like to break the rules and get away
with it; to break the rules and get caught; to break the rules, lie about it
and not get caught; and so on. To reference Wordsworth again, in his
autobiographical poem ‘The Prelude’, the emergence of the young boy’s moral
awareness is connected to an incident when Wordsworth stole a rowing boat one
evening to go for the eighteenth century equivalent of a joy ride. No-one
catches him, but he becomes aware that his choices have a moral dimension.

This is not the only reason to avoid cluttering up the house with
disobedience detectors, of course. Another purpose served by the family is to
establish mutually-satisfying loving relationships. Moreover, the family is not
simply a means to this end; the goal is internal to the practice of family
life. Healthy relationships are grounded on trust, yet surveillance often
signifies a lack of trust. For this reason, its effect on any relationship is
corrosive. And the closer the relationship, the more objectionable we find it.
Imagine how you’d feel if your spouse wanted to monitor your every coming and
going.

These two objections to surveillance within the family – it
inhibits moral development, and it signifies distrust – are connected, since
the network of reasonably healthy relationships provided by a reasonably
functional family is a primary setting for most people’s moral education. The
positive experience of trusting relationships, in which the default expectation
is that everyone will fulfill their obligations to one another, is in itself
edifying. It is surely more effective at fostering the internalization of
cherished values than intimidation through surveillance. Everyone who strives
to create such relationships within their family shows by their practice that
they believe this to be so.

Conclusions

The upshot of these reflections is that the relation between
surveillance and moral edification is complicated. In some contexts,
surveillance helps keep us on track and thereby reinforces good habits that
become second nature. In other contexts, it can hinder moral development by
steering us away from or obscuring the saintly ideal of genuinely disinterested
action. And that ideal is worth keeping alive.

Some will object that the saintly ideal is utopian. And it is. But
utopian ideals are valuable. It’s true that they do not help us deal with
specific, concrete, short-term problems, such as how to keep drunk drivers off
the road, or how to ensure that people pay their taxes. Rather, like a distant
star, they provide a fixed point that we can use to navigate by. Ideals help us
to take stock every so often of where we are, of where we’re going, and of
whether we really want to head further in that direction.

Ultimately, the ideal college is one in which every student is
genuinely interested in learning and needs neither extrinsic motivators to
encourage study, nor surveillance to deter cheating. Ultimately, the ideal
society is one in which, if taxes are necessary, everyone pays them as freely
and cheerfully as they pay their dues to some club of which they are devoted
members – where citizen and state can trust each other perfectly. We know our
present society is a long way from such ideals, yet we should be wary of
practices that take us ever further from them. One of the goals of moral
education is to cultivate a conscience – the little voice inside telling us
that we should do what is right because it is right. As surveillance becomes
increasingly ubiquitous, however, the chances are reduced that conscience will
ever be anything more than the little voice inside telling us that someone,
somewhere, may be watching.

Wednesday, March 12, 2014

Jan Narveson is Distinguished Professor Emeritus of Philosophy at the University of Waterloo in Ontario, Canada.

Encyclopedia of Applied Ethics (Second Edition) 2012, Pages 51–55

Egoism and altruism come in two forms: psychological and normative
– theories about what we do and what we ought to do. Psychological egoism
succumbs to the distinction between interests in ourselves, strictly, and
interests that are ours but not directed at ourselves. The first, egoism
proper, is clearly false. The second allows altruism. Ethical egoism falls
before the familiar observation that exclusive devotion to ourselves does not
make us as happy as a life of love and involvement. Moral egoism is irrational,
asking us to sanction actions by others that are strongly against our own
interests.

Psychological versus Ethical Egoism

Egoism and altruism are subject, even more than is usual in philosophy, to
crucial ambiguities that must be sorted out before one can say anything
helpful. There are two types of egoism – psychological and ethical. The first
is a theory about human motivations: What makes us tick? According to egoism,
we are actuated entirely by self-interest, even when it appears to be
otherwise. Altruism, of course, denies this. However, the second is a normative
theory – a theory about what we should do. It says that we should, or ought to,
act only in our own interest. How these are related is perhaps the central
question for the subject.

Substantive versus Tautological Psychological Egoism

Just what is self-interest? Here, the groundbreaking work of Joseph
Butler (1692–1752) paves the way for us. Does ‘self-interest’ mean interests in
ourselves or only interests of ourselves? Everything hangs on that one-word
difference.

Interests in ourselves are interests that one’s own self have
certain things, held to be good by the agent – things that can be identified
and had independently of the goods of others. For example, hunger is a desire
for food in one’s own stomach; desires for physical comfort, for optimal
temperature, or for the absence of pain in one’s own body are further examples.
The desire to feel pleasure and avoid pain is the most widely held candidate
for the status of fundamental motivator of all action. Interests in others, by
contrast, would be such desires as love or hatred, where we are essentially
directing our actions toward the production of certain states of other people.
The kind of psychological egoism holding that we are exclusively motivated by
the first kind of interests may be called strict or substantive.

Interests of ourselves, on the other hand, simply refer to whose
interests they are but not at all to what the interests are taken in – what
they are aimed at affecting, either in or outside of ourselves. As Butler
pointed out, it is trivial to say that all action is motivated by interests or
desires of the agent – that is what makes them the agent’s actions at all. We
might call this tautologous psychological egoism. By contrast, it is not at all
trivial to say that the object of our actions is only to produce certain
conditions of ourselves. Of course, the situation is complicated by the fact
that we can get pleasure, for example, from our perception of the condition of
certain other people, or even of all other people. When we do so, are we
motivated by the prospect of pleasure from our relations to those people? If
so, is that to be accounted egoism, despite the fact that the source of our
pleasure in these cases is the pleasures – or perhaps the pains – of others? On
the face of it, we need yet another distinction. This would be between theories
that deny that we even get pleasure from other-regarding acts and theories that
accept this but insist that it is only because of the pleasure we get from them
that we perform them.

Superficial versus Deep Theories

The question whether psychological egoism may be true becomes very
difficult, however, when we distinguish between what we may call superficial
or, in the terminology of contemporary philosophers of mind, folk-psychological
versions of the theory and what we may call deep theories.

At the superficial level, psychological egoism is, as Butler also
pointed out, overwhelmingly implausible. People make huge sacrifices for those
they love, including sacrifice of their very lives. Moreover, they sometimes go
out of their way to do evils to other people even at the cost of ruining their
own lives in the process.

Deep theories are quite another matter. No one can claim to have
refuted the view that there is some variable within the soul – or the nervous
system – such that all actions really do maximize it, regardless of all else in
the environment. To make this work, of course, it would have to be noted that
other factors in the environment would certainly interact with that variable,
whatever it is.

The most popular candidate for what we may be trying to maximize,
no doubt, is net pleasure: the quantity of pleasure minus pain or, more
plausibly, of positive affect minus negative affect (or plus negative affect if
we think of the latter as a negative quantity). Perhaps the man who falls on
the grenade to save his comrades reckons that he could not live with himself if
he did not do it. Perhaps, as Kant supposed, our self-interested motivations
are hidden from our own view, so that despite our pretensions to altruism, we
are really always acting in our own interest after all. However, once the
common-sense idea that pleasure and pain are things we are aware of is
abandoned, the situation changes radically. We then need a good theory
explaining just what the quantity in question is really supposed to be –
electrical magnitudes in brain cells, perhaps? We would need to explain how the
theory is to be assessed in empirical terms. Clearly, appraising any such view
will be very difficult. In this article, we will not further discuss such
possibilities. In terms of commonsense distinctions, psychological egoism may
surely be dismissed as simply wrong and based entirely on the confusion between
egoism as a substantive view and egoism in its tautological form.

Varieties of Motivation

A major complication in the discussion of this issue is just how we
are to count certain human motivations in relation to the egoism versus
altruism categories. There are many cases in which someone acts without
obviously intending to promote his or her own pleasure but also without
obviously intending to benefit anyone else. Suppose that Henry conceives a
passion for building a replica of the Great Pyramid in his backyard. He labors
mightily for years, often in considerable pain, and damages his health badly
in the process. Is he here pursuing pleasure? If we say so, then
the notion of pleasure has become very broad. Is he doing it for anyone else’s
benefit? Not obviously, at least.

For the purposes of this article, we account motivations that seem
to be for the pursuit of states of affairs having no evident connection with
either the agent’s or anyone else’s benefit as being, nevertheless, for the
agent’s benefit inasmuch as they are attempts to do something that he or
she wants, or at least feels impelled, to do. If this is not very much like
satisfying hunger or sex, that is the point. The category of the
self-interested is hugely broad so that the explanatory value of appeals to
self-interest is rather low.

Altruism – Psychological Version

If psychological egoism has its difficulties, psychological
altruism is also problematic. Again, discussion requires more care with
definitions. Altruism may be understood as concern with others – but how much,
and which others? At the opposite extreme from egoism is the view that we only act
for the sake of others, and we do so for the sake of all others. To the
author’s knowledge, no one has ever seriously advocated such a theory. The
Scottish philosopher David Hume (1711–76) hypothesized that some altruism is to
be found in every human – a general feeling for all other humans, at the least.
Taken as a superficial-level generalization about people, this is fairly
plausible, and it would be even more so if we said that nearly everyone is at
least slightly altruistic regarding most other humans. But it is, of course,
not very precise, and how to describe it more accurately, and account for it,
is an interesting question. Interaction with one’s mother in infancy, for
example, and later with peers, may play a role in the genesis of such dispositions.

Ethical Incompatibility with Psychological Egoism

We now turn to the ethical versions of egoism. Here, we do well to
begin by recognizing that for ethical egoism to be meaningful at all, strict
psychological egoism, at least in its superficial forms, must be false. If
Jones is unable to seek anyone’s well-being but his own, then there is
obviously no point in telling him that he ought to be seeking someone else’s.
Ought, as philosophers say, implies can. (Even here, the distinction of
shallow from deep theories is essential because it is by no means clear that
ethics is incompatible with deep self-interest. Possibly, totally altruistic
behavior is best for oneself, and the truly selfish person is the saint or the
hero who devotes his or her life to helping others.)

Ethical versus Moral Egoism

For the purposes of this article, we are assuming the plain or
superficial level of discussion, which we all understand fairly well, and ask
whether ethical egoism is plausible in those terms. However, now we must make
still another distinction, and again a crucial one – this time with the word
‘ethical.’ By ethical, do we refer to the general theory of how to live? Or do
we mean, much more narrowly, rules for the group? Egoism will look very
different in these two very different contexts. If the question is “Should I
aim to live the best life I can?”, it is extremely difficult to answer in the
negative. Each one of us should, surely, try to live the best life we can – the
most satisfying, most rewarding, most pleasant, etc., life we can manage. Of
course, this leads to the very large question of what the ultimate values of
life are and how to maximize them – what sort of life will do that. For
example, as already noted, it is quite possible that a life of self-sacrifice
is nevertheless the most fulfilling or rewarding. Perhaps it is even the most
pleasant, although this seems to strain the idea. Furthermore, perhaps there
are other values more important than pleasure.

Egoism Not Necessarily Selfish

As the previous discussion suggests, egoism as a theory of life may
be very different from what the word at first suggests. When we think of
egoists, we think, first, of people who are highly self-centered, who tend to
ignore others and their needs, even their rights. Another word for this is
egotism, which is a personality trait rather than a theory. At the extreme, we
think of the psychopath, who will kill, rape, steal, lie, and cheat without
compunction in order to achieve certain narrow ends for himself – usually the
increment of his monetary income, but by no means always. However, does the
psychopath live a good life? Would someone setting out to live the happiest,
most rewarding life he or she can become psychopathic? That is extremely
implausible. The wisdom of philosophers through the millennia has been uniform
on this point: If you want to be happy yourself, then you need friends, loved
ones, and associates, and you need to treat all these people with respect, most
of them with kindness as well, and some of them with real love.

This last discussion shows the difficulty of contrasting egoism and
altruism at this level. Should we literally sacrifice our own overall happiness
or well-being for others? Which others? And especially, of course, why? To
suggest that one should do such a thing – if it is possible – seems to be to
suggest that those other persons are somehow superior to you. Why should we
believe any such thing?

Notice that as a universal theory, this last would run into logical
difficulties. Whatever ‘worthy’ means, if A is more worthy than B, then B is
less worthy than A, by definition. Thus, it is impossible for everyone to be
more worthy than everyone else.

Egoism and Morality

At this point, let us turn to the other member of the distinction I
have made – morality. To talk about morality is to talk of what should be the
rules governing the general behavior of the group. One question this
immediately raises is, which group? Without getting too involved in the
question of cultural relativism, let us supply two answers to this question:
(1) the group in question – that is, the group of which the persons we are
addressing are members and with whom they interact fairly frequently; and (2) the
group consisting of literally everybody – all humans. Again, most classic
philosophers assumed the latter, and the assumption is certainly not an
unrealistic one.

What matters, however, is that we are now addressing the question
not simply what to do in life generally but what to do in relation to our
fellows: How do we carry on our dealings with them, our interactions? It is
when we address this question that egoism, in any sense of that word in which
it is meaningful, becomes enormously implausible. For if we think of the egoist
as the one who pursues only his or her own interest, regardless of others, so
that our image is of near-psychopathic behavior, then to recommend that as the
rule for a group seems completely absurd, even unintelligible. Egoism, again
speaking at the superficial level, must address the question of conflicts of
interest. If A’s interests are incompatible with B’s – meaning, simply, that if
A achieves what he is after, then B is frustrated in his pursuit of what he is
after – then a rule addressed to the two of them, telling them both to ignore
the other and go for it, is silly. If both try to follow it, at least one will
fail. In fact, most likely both of them will, especially if we take into
account any aspects of their values that extend beyond the narrow one that was
the subject of conflict. For example, if the two come to blows, then at least one
and probably both will suffer injuries that they would prefer not to have
inflicted on them. A rule for a group, if it is to be even remotely plausible,
will have to do better than that. When there are conflicts, it is going to have
to tell us who is in the right and who is in the wrong – who gets to go ahead
and who has to back down.

Trying to incorporate real egoism, of the first kind identified at
the outset, into the very matter of moral rules is an invitation to conceptual
disaster. That Jones ought to do x and Smith ought to do y, even though Jones’s
doing x entails Smith’s not doing y, is rightly regarded as a nonsense rule.
Two boxers in the ring ought both, of course, to try to win, but to say that
both ought to win is nonsense because by definition that is impossible. A
morality for all, therefore, cannot look like directions to the cheering
section for one of the fighters but, rather, like the rule book for boxing,
which tells both of them that they have only so many minutes between breaks, that
they may not hit below the belt, and so forth. Morality in our second and
narrower sense of the term consists of the rules for large, natural groups –
that is, groups of people who happen, for whatever reason, to come in contact
with each other rather than groups that come together intentionally for
specific purposes. For such groups, the rules are going to have to be impartial
rather than loaded in favor of one person or set of persons as against another.
Thus, such rules simply cannot be egoistic.

Altruism and Morality

Can they be altruistic? That gets us to our last question. Should
the rules for groups tell everyone to love everyone else, as fully as if
everyone were one’s dear sibling or spouse? The answer to this is surely in the
negative, as Nietzsche pointed out. One or at most a very few lovers or loved
ones is all any of us can handle. Truly to love someone, we must elevate that
person well above the crowd, pay more attention to him or her than to others –
not merely an equal amount – and so on. Altruism construed as the general love
of humankind, therefore, simply cannot be using the term ‘love’ in its full
normal sense. A doctrine of general altruism must retreat very far from that.
Indeed, it is clear that general altruism has exactly the same problem as
general egoism: Both are shipwrecked as soon as we see the inevitable
asymmetries and partialities necessarily involved in love, whether of oneself
or anyone else.

How far, then, do we retreat? Here, a variety of answers have been
given, and we need to make one last distinction – between two departments or
branches of morals. One branch is stern and associated with justice – rules
that are to be enforced by such heavy-duty procedures as punishment; the other
is associated with commendations and praise, warm sentiments, and so forth.
Following Kant, we may call these respectively the theory of justice and the
theory of virtue or, in a slightly different vein, justice and charity. One is,
in short, the morality of the stick, whereas the other is the morality of the
carrot. That is, it is appropriate to reinforce the rules of justice by such
methods as punishment, including incarceration, or even death. However, the
other is to be encouraged – we cheer for those who do well but we do not resort
to punishment for lesser performance of those actions that are merely virtuous
rather than downright required.

Now, our question may be put thus: Is altruism to be regarded as
figuring prominently in, or perhaps even constituting the basis of, either of
these, both, or neither of these?

Again, it is highly plausible to deny altruism any significant role
in the first. If you and I are enemies, it is pretty pointless to tell us to
love each other, but it is not at all pointless to tell us to draw some lines
and then stay on our side of them. For example, we are to refrain from killing,
stealing from, lying to, cheating, maiming, or otherwise damaging one another, even if we
hate each other. We shall all do better if we simply rule out such actions.
This is negative morality, the morality of thou shalt not, and it applies
between absolutely everyone and absolutely everyone else, be they friends or
enemies or strangers, with the sole qualification being that those who
themselves are guilty of transgressing one of those restrictions may be
eligible for punishment. It is absurd to point to love as the basis of such
rules. Rather, it is our interests, considered in relation to how things are
and how other people are, that undergird these vital rules of social life.

On the other hand, when it comes to helping those in need, showing
kindness to people, being thoughtful, helpful, supportive, and so on – in
short, being, in the words of Hume, agreeable and useful to others – it is
more plausible to ascribe this to sentiment – to some degree of altruism. Even
so, it is by no means clear that it is necessary because in being nice to
others, we inspire them to be nice to us, and so even considerations of
self-interest fairly narrowly construed will teach us the value of these
other-regarding virtues. However, it is also plausible to suggest that we
admire and praise people who go well beyond the minimum in these respects: We
put the Mother Teresas of the world, the heroic life-savers, those who go the
extra mile and then some, on pedestals, and rightly so. As Hume conjectures, it
seems implausible to confine those tendencies to self-interest in any narrow
sense of that word.

Conclusion

To get a clear view of this long-discussed subject, we need to make
several important distinctions. First, we must distinguish the normative or
ethical from the psychological version of egoism. Second, in both cases, we
need to distinguish between egoism in the narrow sense in which it entails an
interest exclusively in the self, defined as independent of all others, and the
much broader – really vacuous – sense in which it essentially means that one
acts only on one’s own desires, whatever their objects. Third, we must
distinguish shallow or superficial from deep versions. Finally, we must
distinguish ethics in the very broad sense of one’s general view of life from
morality, in the much narrower sense of a canon or set of rules for the conduct
of everyone in society. It is then seen that ethical egoism says essentially
nothing, whereas moral egoism proposes what is obviously unacceptable.

Thursday, March 6, 2014

ADDRESS OF POPE FRANCIS TO THE NEW NON-RESIDENT AMBASSADORS
TO THE HOLY SEE: KYRGYZSTAN, ANTIGUA AND BARBUDA, LUXEMBOURG AND BOTSWANA

Your Excellencies,

I am pleased to receive you for the presentation of the Letters
accrediting you as Ambassadors Extraordinary and Plenipotentiary to the Holy
See on the part of your respective countries: Kyrgyzstan, Antigua and Barbuda,
the Grand Duchy of Luxembourg and Botswana. The gracious words which you have
addressed to me, for which I thank you heartily, have testified that the Heads
of State of your countries are concerned to develop relations of respect and
cooperation with the Holy See. I would ask you kindly to convey to them my
sentiments of gratitude and esteem, together with the assurance of my prayers
for them and their fellow citizens.

Ladies and Gentlemen, our human family is presently experiencing
something of a turning point in its own history, if we consider the advances
made in various areas. We can only praise the positive achievements which
contribute to the authentic welfare of mankind, in fields such as those of
health, education and communications. At the same time, we must also
acknowledge that the majority of the men and women of our time continue to live
daily in situations of insecurity, with dire consequences. Certain pathologies
are increasing, with their psychological consequences; fear and desperation
grip the hearts of many people, even in the so-called rich countries; the joy
of life is diminishing; indecency and violence are on the rise; poverty is
becoming more and more evident. People have to struggle to live and,
frequently, to live in an undignified way. One cause of this situation, in my
opinion, is in our relationship with money, and our acceptance of its power
over ourselves and our society. Consequently the financial crisis which we are
experiencing makes us forget that its ultimate origin is to be found in a
profound human crisis. In the denial of the primacy of human beings! We have
created new idols. The worship of the golden calf of old (cf. Ex 32:15-34) has
found a new and heartless image in the cult of money and the dictatorship of an
economy which is faceless and lacking any truly humane goal.

The worldwide financial and economic crisis seems to highlight
their distortions and above all the gravely deficient human perspective, which
reduces man to one of his needs alone, namely, consumption. Worse yet, human
beings themselves are nowadays considered as consumer goods which can be used
and thrown away. We have started a throw-away culture. This tendency is seen on
the level of individuals and whole societies; and it is being promoted! In
circumstances like these, solidarity, which is the treasure of the poor, is
often considered counterproductive, opposed to the logic of finance and the
economy. While the income of a minority is increasing exponentially, that of
the majority is crumbling. This imbalance results from ideologies which uphold
the absolute autonomy of markets and financial speculation, and thus deny the
right of control to States, which are themselves charged with providing for the
common good. A new, invisible and at times virtual, tyranny is established, one
which unilaterally and irremediably imposes its own laws and rules. Moreover,
indebtedness and credit distance countries from their real economy and citizens
from their real buying power. Added to this, as if it were needed, is
widespread corruption and selfish fiscal evasion which have taken on worldwide
dimensions. The will to power and of possession has become limitless.

Concealed behind this attitude is a rejection of ethics, a
rejection of God. Ethics, like solidarity, is a nuisance! It is regarded as counterproductive:
as something too human, because it relativizes money and power; as a threat,
because it rejects manipulation and subjection of people: because ethics leads
to God, who is situated outside the categories of the market. God is thought to
be unmanageable by these financiers, economists and politicians, God is
unmanageable, even dangerous, because he calls man to his full realization and
to independence from any kind of slavery. Ethics – naturally, not the ethics of
ideology – makes it possible, in my view, to create a balanced social order
that is more humane. In this sense, I encourage the financial experts and the
political leaders of your countries to consider the words of Saint John
Chrysostom: "Not to share one’s goods with the poor is to rob them and to
deprive them of life. It is not our goods that we possess, but theirs"
(Homily on Lazarus, 1:6 – PG 48, 992D).

Dear Ambassadors, there is a need for financial reform along
ethical lines that would produce in its turn an economic reform to benefit
everyone. This would nevertheless require a courageous change of attitude on
the part of political leaders. I urge them to face this challenge with
determination and farsightedness, taking account, naturally, of their
particular situations. Money has to serve, not to rule! The Pope loves
everyone, rich and poor alike, but the Pope has the duty, in Christ’s name, to
remind the rich to help the poor, to respect them, to promote them. The Pope
appeals for disinterested solidarity and for a return to person-centred ethics
in the world of finance and economics.

For her part, the Church always works for the integral development
of every person. In this sense, she reiterates that the common good should not
be simply an extra, simply a conceptual scheme of inferior quality tacked onto
political programmes. The Church encourages those in power to be truly at the
service of the common good of their peoples. She urges financial leaders to
take account of ethics and solidarity. And why should they not turn to God to
draw inspiration from his designs? In this way, a new political and economic
mindset would arise that would help to transform the absolute dichotomy between
the economic and social spheres into a healthy symbiosis.

Finally, through you, I greet with affection the Pastors and the
faithful of the Catholic communities present in your countries. I urge them to
continue their courageous and joyful witness of faith and fraternal love in
accordance with Christ’s teaching. Let them not be afraid to offer their
contribution to the development of their countries, through initiatives and
attitudes inspired by the Sacred Scriptures! And as you inaugurate your
mission, I extend to you, dear Ambassadors, my very best wishes, assuring you
of the assistance of the Roman Curia for the fulfilment of your duties. To this
end, upon you and your families, and also upon your Embassy staff, I willingly
invoke abundant divine blessings. Thank you.

Saturday, March 1, 2014

This article is the text of a lecture given by Richard Rorty in
April 2004 at the Centre for Cultural Studies in Tehran. The lecture was
presented in the series of lectures by Western intellectuals in Tehran,
organized by Ramin Jahanbegloo. Besides Rorty, speakers included Jürgen
Habermas, Noam Chomsky, Ágnes Heller, Timothy Garton Ash, Michael Ignatieff,
Adam Michnik, and Paul Ricoeur. Ramin Jahanbegloo was arrested on 30 March 2006
by Iranian police and held in custody despite international outcry. After five
months of investigation he was released from custody in August 2006. This
happened after he admitted under duress that he had cooperated with western
diplomats in plotting a "velvet revolution" in Iran that would
overthrow the current regime and replace it with a western-type democracy.

Philosophy is a ladder that Western political thinking climbed up,
and then shoved aside. Starting in the seventeenth century, philosophy played
an important role in clearing the way for the establishment of democratic
institutions in the West. It did so by secularizing political thinking –
substituting questions about how human beings could lead happier lives for
questions about how God's will might be done. Philosophers suggested that
people should just put religious revelation to one side, at least for political
purposes, and act as if human beings were on their own – free to shape their
own laws and their own institutions to suit their felt needs, free to make a
fresh start.

In the eighteenth century, during the European Enlightenment,
differences between political institutions, and movements of political opinion,
reflected different philosophical views. Those sympathetic to the old regime
were less likely to be materialistic atheists than were the people who wanted
revolutionary social change. But now that Enlightenment values are pretty much
taken for granted throughout the West, this is no longer the case. Nowadays
politics leads the way, and philosophy tags along behind. One first decides on
a political outlook and then, if one has a taste for that sort of thing, looks
for philosophical backup. But such a taste is optional, and rather uncommon.
Most Western intellectuals know little about philosophy, and care still less.
In their eyes, thinking that political proposals reflect philosophical
convictions is like thinking that the tail wags the dog.

I shall be developing this theme of the irrelevance of philosophy
to democracy in my remarks. Most of what I shall say will be about the
situation in my own country, but I think that most of it applies equally well
to the European democracies. In those countries, as in the US, the word
"democracy" has gradually come to have two distinct meanings. In its
narrower, minimalist meaning it refers to a system of government in which power
is in the hands of freely elected officials. I shall call democracy in this
sense "constitutionalism". In its wider sense, it refers to a social
ideal, that of equality of opportunity. In this second sense, a democracy is a
society in which all children have the same chances in life, and in which
nobody suffers from being born poor, or being the descendant of slaves, or
being female, or being homosexual. I shall call democracy in this sense
"egalitarianism".

Suppose that, at the time of the US presidential election of 2004,
you had asked voters who were wholeheartedly in favour of re-electing President
Bush whether they believed in democracy. They would have been astonished by the
question, and have replied that of course they did. But all they would have
meant by this is that they believe in constitutional government. Because of
this belief, they were prepared to accept the outcome of the election, whatever
it turned out to be. If John Kerry had won, they would have been angry and disgusted.
But they would not have dreamt of trying to prevent his taking office by going
out into the streets. They would have been utterly horrified by the suggestion
that the generals in the Pentagon should mount a military coup in order to keep
Bush in the White House.

The voters who in 2004 regarded Bush as the worst American
president of modern times, and who desperately hoped for Kerry's success, were
also constitutionalists. When Kerry lost, they were sick at heart. But they did
not dream of fomenting a revolution. Leftwing Democrats are as committed to
preserving the US constitution as are rightwing Republicans.

But if, instead of asking these two groups whether they believed in
democracy, you had asked them what they meant by the term "democracy",
you might have received different replies. The Bush voters would usually have been
content to define democracy simply as government by freely elected officials.
But many of the Kerry voters – and especially the intellectuals – would have said that
America – despite centuries of free elections and the gradual expansion of the
franchise to include all adult citizens – is not yet a full-fledged democracy.
Their point is that although it obviously is a democracy in the constitutional
sense, it is not yet a democracy in the egalitarian sense. For equality of
opportunity has not yet been attained. The gap between the rich and the poor is
widening rather than narrowing. Power is becoming more concentrated in the
hands of the few.

These leftwing Democrats will remind you of the likely fate of
the children of badly educated Americans, both black and white, raised in a
home in which the full-time labour of both mother and father brings in only
about $40,000 a year. This sounds like a lot of money, but in America children
of parents at that income level are deprived of many advantages, will probably
be unable to go to college, and will be unlikely to get a good job. For
Americans who think of themselves as on the political Left, these inequalities
are outrageous. They demonstrate that even though America has a democratically
elected government, it still does not have a democratic society.

Ever since Walt Whitman wrote his essay "Democratic
Vistas" in the middle of the nineteenth century, a substantial sector of
educated public opinion in the US has used "democracy" to mean
"social egalitarianism" rather than simply "representative
government". Using the term in this way became common in the Progressive
Era and still more common under the New Deal. That usage permitted the civil
rights movement led by Martin Luther King, the feminist movement, and the gay
and lesbian rights movement to portray themselves as successive attempts to
"realize the promise of American democracy".

So far I have said nothing about the relation of religion to
American democracy. But for an understanding of the ongoing contest between
constitutionalist and egalitarian understandings of democracy it is important
to realize that Americans on the political Left tend to be less religiously
committed and religiously active than people on the political Right. The
leftists who are religious believers do not try very hard to bring their
religious convictions and their political preferences together. They treat
religion as a private matter, endorse the Jeffersonian tradition of religious
tolerance, and are emphatic in their preference for the strict separation of
church and state.

On the political Right, however, religious and political
convictions are often interwoven. The hardcore Bush voters are not only
considerably more likely to go to church than the hardcore Kerry voters, but
are considerably more likely to sympathize with Bush's insistence on the need
to elect officials who take God seriously. They often describe the United
States of America as a nation especially blessed by the Christian God. They
like to say that theirs is "a Christian country", and fail to realize
that this phrase is offensive to their Jewish and Muslim fellow citizens. They
tend to see America's emergence as the only superpower left standing not just
as an accident of history, but as evidence of divine favour.

Because of this different stance toward religious belief, one might
be tempted to think of the opposition between the political Right and the
political Left as reflecting a difference between those who think of democracy
as built upon religious foundations and those who think of it as built upon
philosophical foundations. But, as I have already suggested, that would be
misleading. Except for a few professors of theology and philosophy, neither
rightist nor leftist American intellectuals think of democracy in the sense of
constitutionalism as having either sort of foundation.

If asked to justify their preference for constitutional government,
both sides would be more likely to appeal to historical experience than
to either religious or philosophical principles. Both would be likely to
endorse Winston Churchill's much-quoted remark that "Democracy is the
worst form of government imaginable, except for all the others that have been
tried so far." Both agree that a free press, a free judiciary, and free
elections are the best safeguard against the abuse of governmental power
characteristic of the old European monarchies, and of fascist and communist
regimes.

The arguments between leftists and rightists about the need for
egalitarian social legislation are also matters neither of opposing religious
beliefs nor of opposing philosophical principles. The disagreement between
those who think of a commitment to democracy as a commitment to an egalitarian
society and those who have no use for the welfare state and for government
regulations designed to ensure equality of opportunity is not fought out on
either philosophical or religious grounds. Even the most fanatic
fundamentalists do not try to argue that the Christian scriptures provide
reasons why the American government should not redistribute wealth by using
taxpayers' money to send the children of the poor to college. Their leftist opponents
do not claim that the need to use taxpayers' money for this purpose is somehow
dictated by what Kant called "the tribunal of pure reason".

Typically the arguments between the two camps are much more
pragmatic. The Right claims that imposing high taxes in order to benefit the
poor will lead to "big government", rule by bureaucrats, and a
sluggish economy. The Left concedes that there is a danger of
over-bureaucratization and of over-centralized government. But, they argue,
these dangers are outweighed by the need to make up for the injustices built
into a capitalist economy – a system that can throw thousands of people out of
work overnight and make it impossible for them to feed, much less educate,
their children. The Right argues that the Left is too inclined to impose
its own tastes on society as a whole. The Left replies that what the Right
calls a "matter of taste" is really a matter of justice.

Such arguments proceed not by appeals to universally valid moral
obligations but by appeals to historical experience – the experience of
over-regulation and over-taxation on the one hand and the experience of poverty
and humiliation on the other. The rightists accuse the leftists of being
sentimental fools – bleeding-heart liberals – who do not understand the need to
keep government small so that individual freedom can flourish. The leftists
accuse the rightists of heartlessness – of being unable or unwilling to imagine
themselves in the situation of a parent who cannot make enough money to clothe his
daughter as well as her schoolmates are clothed. Such polemical exchanges are
pursued at a pragmatic level, and no theological or philosophical
sophistication is required to conduct them. Nor would such sophistication do
much to strengthen either side.

So far I have been talking about the form that contemporary
American political disagreements take, and emphasizing the irrelevance of
philosophy to such disputes. I have been arguing that neither the agreement
between Left and Right on the wisdom of retaining constitutional government nor
the disagreement between them about what laws to pass has much to do with
either religious conviction or philosophical opinion. You can be a very
intelligent and useful participant in political discussion in contemporary democratic
societies such as the US even though you have no interest whatever in either
religion or philosophy.

Despite this fact, one still occasionally comes across debates
among philosophers about whether democracy has "philosophical
foundations", and about what these might be. I do not regard these debates
as very useful. To understand why they are still conducted, it helps to
remember the point I made at the outset: that when the democratic revolutions
of the eighteenth century broke out, the quarrel between religion and
philosophy had an importance it now lacks. For those revolutions were not able
to appeal to the past. They could not point to the successes enjoyed by
democratic and secularist regimes. For few such regimes had ever existed, and
those that had, had not always fared well. So their only recourse was to justify
themselves by reference to principle, philosophical principle. Reason, they
said, had revealed the existence of universal human rights, so a revolution was
required to put society on a rational basis.

"Reason" in the eighteenth century was supposed to be
what the anti-clericalists had to compensate for their lack of what the clergy
called "faith". For the revolutionaries of those times were
necessarily anti-clerical. One of their chief complaints was the assistance
that the clergy had rendered to feudal and monarchical institutions. Diderot,
for example, famously looked forward to seeing the last king strangled with the
entrails of the last priest. In that period, the work of secularist
philosophers such as Spinoza and Kant was very important in creating an
intellectual climate conducive to revolutionary political activity. Kant argued
that even the words of Christ must be evaluated by reference to the dictates of
universally shared human reason. For Enlightenment thinkers such as Jefferson,
it was important to argue that reason is a sufficient basis for moral and
political deliberation, and that revelation is unnecessary.

The author of both the Virginia Statute for Religious Freedom and
the American Declaration of Independence, Jefferson was a typical leftist
intellectual of his time. He read a lot of philosophy and took it very
seriously indeed. He wrote in the Declaration that "We hold these truths
to be self-evident: that all men are created equal, that they are endowed by
their creator with certain inalienable rights, that among them are life,
liberty and the pursuit of happiness". As a good Enlightenment
rationalist, he agreed with Kant that reason was the source of such truths, and
that reason was sufficient to provide moral and political guidance.

Many contemporary Western intellectuals (among them Juergen
Habermas, the most influential and distinguished living philosopher) think that
there was something importantly right about Enlightenment rationalism. Habermas
believes that philosophical reflection can indeed provide moral and political
guidance, for it can disclose principles that have what he calls
"universal validity". Foundationalist philosophers like Habermas see
philosophy as playing the same role in culture that Kant and Jefferson assigned
to it. Simply taking thought will reveal what Habermas calls
"presuppositions of rational communication", and thereby provide
criteria which can guide moral and political choice.

Many leftist intellectuals in America and in the West generally
would agree that democracy has such a foundation. They too think that certain
central moral and political truths are, if not exactly self-evident,
nonetheless transcultural and ahistorical – the product of human reason as
such, not simply of a certain sequence of historical events. They are annoyed
and disturbed by the writings of anti-foundationalist philosophers like myself
who argue that there is no such thing as "human reason".

We anti-foundationalists, however, regard Enlightenment rationalism
as an unfortunate attempt to beat religion at religion's own game – the game of
pretending that there is something above and beyond human history that can sit
in judgment on that history. We argue that although some cultures are better
than others, there are no transcultural criteria of "betterness" that
we can appeal to when we say that modern democratic societies are better than
feudal societies, or that egalitarian societies are better than racist or
sexist ones. We are sure that rule by officials freely elected by literate and
well-educated voters is better than rule by priests and kings, but we would not
try to demonstrate the truth of this claim to a proponent of theocracy or of
monarchy. We suspect that if the study of history cannot convince such a
proponent of the falsity of his views, nothing else can do so.

Anti-foundationalist philosophy professors like myself do not think
that philosophy is as important as Plato and Kant thought it. This is because
we do not think that the moral world has a structure that can be discerned by
philosophical reflection. We are historicists because we agree with Hegel's
thesis that "philosophy is its time, held in thought". What Hegel
meant, I take it, was that human social practices in general, and political
institutions in particular, are the product of concrete historical situations,
and that they have to be judged by reference to the needs created by those
situations. There is no way to step outside of human history and look at things
under the aspect of eternity.

Philosophy, on this view, is ancillary to historiography. The
history of philosophy should be studied in the context of the social situations
that created philosophical doctrines and systems, in the same way that we study
the history of art and literature. Philosophy is not, and never will be, a
science – in the sense of a progressive accumulation of enduring truths.

Most philosophers in the West prior to the time of Hegel were
universalist and foundationalist. As Isaiah Berlin has put it, before the end
of the eighteenth century Western thinkers viewed human life as the attempt to
solve a jigsaw puzzle. Berlin describes what I have designated as their hope
for universal philosophical foundations for culture as follows:

There must be some way of putting the pieces together. The all-wise
being, the omniscient being, whether God or an omniscient earthly creature –
whichever way you like to conceive of it – is in principle capable of fitting
all the pieces together into one coherent pattern. Anyone who does this will
know what the world is like: what things are, what they have been, what they
will be, what the laws are that govern them, what man is, what the relation of
man is to things, and therefore what man needs, what he desires, and how to
obtain it.[1]

The idea that the intellectual world, including the moral world, is
like a jigsaw puzzle, and that philosophers are the people charged with getting
all the pieces to fit together, presupposes that history does not really matter:
that there has never been anything new under the sun. That assumption was
weakened by three events. The first was the spate of democratic revolutions at
the end of the eighteenth century, especially those in America and in France.
The second was the Romantic Movement in literature and the arts – a movement
that suggested that the poet, rather than the philosopher, was the figure who
had most to contribute to social progress. The third, which came along a little
later, was the general acceptance of Darwin's evolutionary account of the
origin of the human species.

One of the effects of these three events was the emergence of
anti-foundationalist philosophy – of philosophers who challenge the jigsaw
puzzle view of things. The Western philosophical tradition, these philosophers
say, was wrong to think that the enduring and stable was preferable to the
novel and contingent. Plato, in particular, was wrong to take mathematics as a
model for knowledge.

On this view, there is no such thing as human nature, for human
beings make themselves up as they go along. They create themselves, as poets
create poems. There is no such thing as the nature of the state or the nature
of society to be understood – there is only an historical sequence of
relatively successful and relatively unsuccessful attempts to achieve some
combination of order and justice.

To further illustrate the difference between foundationalists and
anti-foundationalists, let me return to Jefferson's claim that the rights to
life, liberty and the pursuit of happiness are self-evident. Foundationalists
urge that the existence of such rights is a universal truth, one that has
nothing in particular to do with Europe rather than Asia or Africa, or with
modern history rather than ancient history. The existence of such rights, they
say, is like the existence of irrational numbers such as the square root of two
– something that anybody who thinks hard about the topic can be brought to
recognize. Such philosophers agree with Kant's claim that "the common
moral consciousness" is not an historical product but part of the
structure of human rationality. Kant's categorical imperative, dictating that
we must not use other human beings as mere means – must not treat them as mere things
– is translated into concrete political terms by Jefferson and by the authors
of the Helsinki Declaration of Human Rights. Such translations simply
reformulate moral convictions that should have seemed as self-evidently true in
the days of Plato and Alexander as they are now. It is the business of
philosophy to remind us of what, somehow, deep in our hearts, we have always
known to be true. Plato was, in this sense, right when he said that moral
knowledge is a matter of recollection – an a priori matter, not a result of
empirical experimentation.

In contrast, anti-foundationalists like myself agree with Hegel
that Kant's categorical imperative is an empty abstraction until it is filled
up with the sort of concrete detail that only historical experience can provide.
We say the same about Jefferson's claim about self-evident human rights. On our
view, moral principles are never more than ways of summing up a certain body of
experience. To call them "a priori" or "self-evident" is to
persist in using Plato's utterly misleading analogy between moral certainty and
mathematical certainty. No statements can both have revolutionary political
implications and be self-evidently true.

To say that a statement is self-evident is, we
anti-foundationalists believe, merely an empty rhetorical gesture. The
existence of the rights that the revolutionaries of the eighteenth century
claimed for all human beings had not been evident to most European thinkers in
the previous thousand years. That their existence seems self-evident to Americans
and Europeans two hundred-odd years after they were first asserted is to be
explained by culture-specific indoctrination rather than by a sort of
connaturality between the human mind and moral truth.

To make our case, we anti-foundationalists point to unpleasant
historical facts such as the following: The words of the Declaration were
taken, by the supposedly democratic government of the US, to apply only to
people of European origin. The American Founding Fathers applied them only to
the immigrants who had come across the Atlantic to escape from the monarchical
governments of Europe. The idea that Native Americans – the Indian tribes who
were the aboriginal inhabitants – had such rights was rarely taken seriously.
Recalcitrant Indians were massacred.

Again, it was only a hundred years after the Declaration of
Independence that the citizenry of the US began to take women's rights
seriously – began to ask themselves whether American females were being given
the same opportunities for the pursuit of happiness as were American males. It
took almost a hundred years, and an enormously costly and cruel civil war,
before black Americans were given the right not to be held as slaves. It took
another hundred years before black Americans began to be treated as
full-fledged citizens, entitled to all the same opportunities as whites.

These facts of the history of my country are sometimes cited to
show that America is an utterly hypocritical nation, and that it has never
taken seriously its own protestations about human rights. But I think that this
dismissal of the US is unfair and misleading. One reason it became a much
better, fairer, more decent, more generous country in the course of two
centuries was that democratic freedoms – in particular freedom of the press and
freedom of speech – made it possible for public opinion to force the white
males of European ancestry to consider what they had done, and were doing, to
the Indians, the women, and the blacks.

The role of public opinion in the gradual expansion of the scope of
human rights in the Western democracies is, to my mind, the best reason for
preferring democracy to other systems of government that one could possibly
offer. The history of the US illustrates the way in which a society that
concerned itself largely with the happiness of property-owning white males
could gradually and peacefully change itself into one in which impoverished
black females have become senators, cabinet officers, and judges of the higher
courts. Jefferson and Kant would have been bewildered at the changes that have
taken place in the Western democracies in the last two hundred years. For they
did not think of equal treatment for blacks and whites, or of female suffrage,
as deducible from the philosophical principles they enunciated. Their
hypothetical astonishment illustrates the anti-foundationalist point that moral
insight is not, like mathematics, a product of rational reflection. It is
instead a matter of imagining a better future, and observing the results of
attempts to bring that future into existence. Moral knowledge, like scientific
knowledge, is mostly the result of making experiments and seeing how they work
out. Female suffrage, for example, has worked well. Centralized control of a
country's economy, on the other hand, has not.

The history of moral progress since the Enlightenment illustrates
the fact that the important thing about democracy is as much a matter of
freedom of speech and of the press as of the ability of angry citizens to
replace bad elected officials with better elected officials. A country can have
democratic elections but make no moral progress if those who are being
mistreated have no chance to make their sufferings known. In theory, a country could
remain a constitutional democracy even if its government never instituted any
measures to increase equality of opportunity. In practice, the freedom to
debate political issues and to put forward political candidates will ensure
that democracy in the sense of egalitarianism will be a natural consequence of
democracy as constitutional government.

The moral of the anti-foundationalist sermon I have been preaching
to you is that for countries that have not undergone the secularization that
was the most important effect of the European Enlightenment, or that are only
now seeing the emergence of constitutional government, the history of Western
philosophy is not a particularly profitable area of study. The history of the
successes and failures of various social experiments in various countries is
much more profitable. If we anti-foundationalists are right, the attempt to
place society on a philosophical foundation should be replaced by the attempt
to learn from the historical record.
