Young Radicals: In the
War for American Ideals by Jeremy McCarter. Random House, 362 pages, $30.

In pre-World War I
America, writers and artists in Greenwich Village created, Jeremy McCarter
writes, "a lively, funny, kaleidoscopic bohemia" - a sharp contrast with the
"cool technocratic triumphalism" on offer in his own, late-20th-century
college days. McCarter, co-author of Hamilton:
The Revolution, on which the hit Broadway musical is based, was captivated
by the bohemians' youthful ardor and unregimented idealism. In Young Radicals he has written a
collective memoir of five of the era's most notable figures: Randolph Bourne,
Walter Lippmann, Max Eastman, John Reed, and Alice Paul.

How does one write
a collective memoir, especially of events from a century ago? Adventurously - in
free indirect discourse much of the time, and leaping nimbly among parallel
narrative tracks. The lives and causes of McCarter's protagonists overlapped a
good deal, but they also diverged in the course of those hectic years. With
ingenuity and affection, he weaves their stories together, describes the
conflicts among them, and renders a sympathetic but unsparing account of their
shortcomings and defeats. Young Radicals does
not pretend to scholarly authority or rigor. One might call it - without
prejudice - pop history: it is rewarding as well as entertaining.

Walter Lippmann
and John Reed were classmates at Harvard, where Reed was a cheerleader for the
football team and Lippmann co-founded the Socialist Club. Temperamentally they
were opposites: Reed "big and loud and rough around the edges," a hedonist and
dabbler; Lippmann sleek and cosmopolitan but earnest and intellectually
precocious. In a mock-epic poem about his fellow Village denizens, Reed spoofed
Lippmann as "Our all-unchallenged Chief," a brilliant prig "who builds a world
but leaves out all the fun." Soon enough Reed would be as politically earnest
as Lippmann, but their sensibilities, even more deeply than their principles,
would prove irreconcilable.

Reed plunged into
contemporary labor struggles, helping organize the celebrated Paterson Strike
Pageant; reported on the Mexican Revolution, riding into battle on one occasion
with Pancho Villa; trudged through the killing fields of World War I as a war
correspondent, tortured by kidney pains; and turned up in Russia in 1917,
interviewing Trotsky and getting appointed to the Executive Committee of the
Communist International. Lippmann cast a cold eye on all this, jeering: "I
can't think of a form of disaster which John Reed hasn't tried and enjoyed."

Lippmann moved in
the opposite direction, increasingly close to power. As an editor of The New Republic, he found official
Washington eager to talk to (and, he mistakenly assumed, listen to) liberal
intellectuals about America's proper attitude toward the European war. The
magazine's editorials obligingly made the case for the Wilson administration's
policies: at first neutrality, then preparedness, then belligerency. Lippmann
was recruited to Wilson's 1916 reelection campaign, then onto the government's
postwar planning staff, and finally sent, with the Army, into occupied Europe.
His war was almost as exciting as Reed's, but it ended in bitter disillusion,
when Wilson accepted a horribly flawed peace settlement.

Lippmann
thereafter retreated into a career of Olympian detachment and superiority as
America's most respected and respectable pundit. The one person who might have
dragged him off this pedestal and forced him to learn the lessons of his
misguided enthusiasm for the war was Randolph Bourne. A hunchback with
misshapen features, Bourne was a lonely and marginal figure who made his name
writing ardent essays in the Atlantic and
New Republic on education,
immigration, and culture. Those magazines had no use, however, for his ideas
about the war. In a literary monthly, Seven
Arts, Bourne wrote a series of brilliant and biting antiwar essays,
diagnosing the liberals' and pragmatists' capitulation to war fever as a "dread
of intellectual suspense." John Dewey replied angrily but, as he later
admitted, ineffectually.

It was one of the
most important controversies in American intellectual history. Bourne was
vindicated in every respect but unfortunately did not live to press his
advantage - he died, only 32, in December 1918. If he had lived and become as
influential as he seemed likely to, fewer liberal intellectuals might have
supported America's disastrous military interventions in Indochina, Central
America, and the Middle East.

The Wilson
administration's heavy-handed repression of wartime dissent brought together
two of McCarter's protagonists: Reed and Max Eastman, sometime poet and editor
of The Masses, where both of them
published antiwar essays. The government put them on trial for obstructing the
war effort. Eastman delivered a rousing speech to the jury, which won the day. He
later moved to the right, becoming one of the more interesting critics of the
American and international left.

Alice Paul is
probably the least well-known of McCarter's five. A Quaker and Swarthmore
graduate, Paul first encountered the women's suffrage movement in England,
where she was jailed and repeatedly force-fed (later a favorite tactic of her
American jailers as well). In Washington Paul organized protest marches and
started The Suffragist magazine. Soon
she was deemed too militant by the National American Woman Suffrage Association
and expelled. Her single-mindedness and courage were remarkable, though she was
not quite brave enough, McCarter acknowledges, to welcome black women into her campaign
on equal terms.

She was, however,
fearless in confronting politicians. The climax of her story is a meeting at
the White House, in which Paul and three companions "warn the president that
many, many thousands of voting women feel as they do. By refusing to support a
federal suffrage amendment, he might force them to vote against him, which
could mean the end of his presidency."

"If they did
that," Wilson says, "they would not be as intelligent as I believe they are."

"Two weeks later,
Alice Paul puts the Woman's Party into the field."

Wilson narrowly
won re-election and eventually agreed to support a federal amendment.

*******************

McCarter's prose
is sometimes a tad breathless. ("Walter Lippmann pushes through the crowds of
frantic Belgians, the shouting men, the sobbing women, desperate for news. He
scours the bulletin boards. They tell him nothing he doesn't already know.") Some
of his apothegms are a bit strained, like the book's closing exhortation:
"Ruins stop being ruins when you build with them." (Aren't you usually better
off clearing them away?) But for the most part, the prose maintains its balance
even at high velocities. And although scholars may lift an eyebrow over his
assumption of psychological intimacy with his subjects, he appears to have
earned it, quoting liberally from the young radicals' letters, journals, and
memoirs.

In its rhetorical
intoxication, its dramatic representations of character, and its effort to
reproduce in its verbal rhythms the flow, sometimes torrential, of historical
events, Young Radicals recalls Thomas
Carlyle's The French Revolution.
Readers shouldn't get too excited about this comparison. Carlyle did all those
things supremely well and for the first time. His book was a masterpiece, a
thunderclap, a revelation. As a piece of historical writing, Young Radicals bears roughly the same
relation to The French Revolution as
a pleasing sketch by a fledgling artist bears to the ceiling of the Sistine
Chapel. Still, one should honor McCarter's ambition, as he has honored those of
Bourne, Lippmann, Eastman, Reed, and Paul.

Still Enlightening After All These Years

The Shipwrecked Mind: On Political Reaction by Mark Lilla. New York Review Books, 145 pages, $15.95.

Modernity and Its Discontents: Making and Unmaking the Bourgeois from Machiavelli to Bellow by Steven B. Smith. Yale University Press, 402 pages, $45.

Whatever else they
may agree or disagree about, a majority of Americans clearly dislike
Clintonism: the technocratic, managerial ethos of the Democratic Party since
the early 1990s. They believe, rightly, that it exists to administer the
decline of the post-World War II, New-Deal-mediated industrial order and to
ratify the extinction of an older, producerist way of life. Neoliberal
economics, plus consumerism, the culture of celebrity, and rights-based
identity politics, is what secular modernity has come to mean to most
Americans, and most of them reject it, or at any rate are deeply uneasy about
it, whether articulately or not. Those on the left who wish to redeem and
vindicate modernity have a great deal of distinguishing and discriminating to
do in the years ahead; and they will face competition from conservatives, who
have their own reckonings with modernity to proffer.

Of course no one
wants - or at any rate will admit wanting - to roll back modernity altogether.
Reactionaries get toothaches and are as grateful for modern dentistry as the
rest of us. They also have daughters, to whose professional ambitions and
achievements they are by no means indifferent. And whether they are of the 1
percent or the 99 percent, most of them understand that capital has no
homeland, that nation-states are no match for financial markets, and that those
markets do not take the slightest notice of an investor's race, color, or
creed, the kink of his hair, or the slant of her eyes. All that is solid has long
since melted into air. Nationalism and religion remain excellent ways of motivating
us to kill, but they are no longer of much use in showing us how to live.

The libertarian
right wastes no tears on religion or nationalism. They are entirely - in fact,
exclusively - modern; they have jettisoned all such pre-modern baggage as charity,
loyalty, humility, and self-sacrifice. By and large, they think that pushpin is
as good as poetry and Facebook is as good as Tolstoy. Whatever satisfies your
utility function.

Still, however
perversely, libertarians at least think. The commissars of the Republican Party
since 1980 have not allowed an idea into their heads, individually or
collectively. Their sole principle is service to the rich; their sole function
to be, in Thomas Frank's apt phrase, a "wrecking crew," or as wrecker-in-chief
Grover Norquist sniggered, to drown the government in a bathtub. They can scarcely
appreciate George Will, much less Charles Krauthammer, who, though reduced by
long immersion in the sound-bite culture of television and the opinion pages to
a sequence of irritable mental gestures, is nonetheless an intellectual. Even
Bill O'Reilly is probably too highbrow for Republican politicians - he writes
books, after all, or at any rate puts his name to them. The Democrats, as Frank
shows in his latest book, Listen, Liberal,
are the party of the 2 through 10 percent - the professional class, to whom
ideas do matter, though far less than income and status.

It is no use, then,
expecting philosophy to shed much light on contemporary American politics. But
perhaps political history can help adjudicate some philosophical disputes. One
such venerable yet evergreen debate centers on the Enlightenment. Has the
Enlightenment been a success or a failure - or better, how much of each and in
what respects? How much moral progress has humankind made, and how much more
can we hope for? How far can we rely on reason, and how much room must we make
for tradition, authority, and faith? Two new books by conservatives - Columbia
intellectual historian Mark Lilla and Yale political philosopher Steven Smith -
examine some well-known (in Smith's case) and little-known (in Lilla's)
answers.

The Shipwrecked Mind, like Lilla's
earlier book The Reckless Mind:
Intellectuals in Politics (2001), is a collection of occasional essays on
related themes. The earlier book pondered "tyrannophilia," the regrettable
tendency of both left-wing and right-wing intellectuals to lose their bearings
and end up defending political violence and oppression. This is a subject of
inexhaustible interest to centrist liberals from Isaiah Berlin and Raymond Aron
to Michael Ignatieff and Leon Wieseltier, who do not always convey the
impression that they fully understand why anyone would become a left-wing or
right-wing radical, or anything but a sensible centrist liberal.

Lilla was far from
smug or heavy-handed in The Reckless Mind
but he did occasionally overstress the present danger of tyrannophilia and the
urgency of responsible moderation. He justifies these emphases in the book's
preface, where he laments that "so many admirers of these thinkers [i.e.,
Heidegger, Schmitt, Benjamin, Foucault, and Derrida] continue to ignore or
justify their political recklessness." But he refrains from naming anyone, and
I, for one, have no idea whom he means. The admirers of Heidegger and Schmitt
have generally been forthcoming about those authors' obvious and undeniable political
recklessness. Some of Foucault's European admirers may have taken seriously his
brief infatuations with Maoism and the Iranian Revolution, but not many, and even
fewer Americans. And there has not been a Marxist-Leninist above thirty years
old in the United States for at least half a century (always excepting Chairman
Bob Avakian) - or if there has, she hasn't published anything. Meanwhile, in
the non-ideological (i.e., real) world, predatory global capitalism has rolled
on, grinding the faces of the poor and destroying the planet, unhampered,
indeed virtually unnoticed, by centrist liberalism. Lilla at least has the
grace to acknowledge, in a footnote to the Afterword in this year's reissue of The Reckless Mind, "the countless cases
of intellectuals whose political commitments did not pervert their thinking."
Perhaps he will join them before Earth's Gini coefficient reaches 1 and
Morningside Heights is under water.

The fauna of The Shipwrecked Mind are more exotic
than those of The Reckless Mind and
harder to interest us in, though Lilla is a skillful expositor and a graceful
writer. Who is "the reactionary," he asks, "this last remaining 'other'
consigned to the margins of respectable inquiry"? Intriguingly, he replies:
"Reactionaries are not conservatives. That is the first thing to be understood
about them." Conservatives like Burke and Tocqueville are satisfied with the
status quo, not merely because they are well off in it but because they see the
virtue in it and prize stability over an unattainable ideal. Reactionaries are
anything but satisfied:

They are, in their way, just as radical
as revolutionaries and just as firmly in the grip of historical imaginings.
Millennial expectations of a redemptive new social order and rejuvenated human
beings inspire the revolutionary; apocalyptic fears of entering a new dark age
haunt the reactionary. ... Where others see the river of time flowing as it
always has, the reactionary sees the debris of paradise drifting past his eyes.
He is time's exile. The revolutionary sees the radiant future invisible to
others, and it electrifies him. The reactionary, immune to modern lies, sees
the past in all its splendor, and he too is electrified. ... The militancy of his
nostalgia is what makes the reactionary a distinctly modern figure, not a
traditional one.

Lilla's portrait
of the reactionary as a revolutionary with the polarities reversed is
persuasive and illuminating, though it is marred by a spasm of reflexive
centrism. He notes the descent from Oswald Spengler of a declinist literature
on the American and European right, then adds that nowadays declinism "can also
be found on the fringe left, where apocalyptic deep ecologists, antiglobalists,
and anti-growth activists have joined the ranks of twenty-first-century
reactionaries." This is a false equivalence, or at least two-thirds of one.
Anti-globalization and anti-growth activists yearn for progress as fervently as
Thomas Friedman or Lilla himself but have the wit to recognize that whatever
may be the ultimate result of unrestricted capital mobility and unlimited
fossil fuel exploitation, it will most certainly not be progress, and may
indeed be "apocalyptic." Nostalgia has nothing to do with it.

The first of
Lilla's subjects is Franz Rosenzweig, born in 1886 and dead of ALS only 43
years later. Rosenzweig shared the reaction among early-20th-century
German-speaking philosophers against their Hegelian legacy. Most of them
gravitated toward neo-Kantian rationalism or the anti-rationalism of
Kierkegaard and Nietzsche. Rosenzweig, spurred by a mystical experience during
a Yom Kippur service, turned toward the burgeoning Jewish studies movement.
Though never theologically orthodox, he saw in sacred history, both Jewish and
Christian, a refuge from and remedy for the spiritual homelessness that
afflicted so many cultivated Europeans and Americans in those years. Had he
lived, he might have become one of the century's great religious existentialists.
But his involved style and his resolute unworldliness limited his influence.
Lilla certainly makes him sound interesting but cannot make him sound relevant.

Lilla's next two
subjects, Eric Voegelin and Leo Strauss, were far more influential. Both were
German refugees, their attention riveted by the extremism and violence that
convulsed Central Europe and Russia between the world wars. Both devised
esoteric anti-modern philosophical systems that found much favor in Cold War
America, though in both cases their intellectual seriousness eventually
undermined their popularity.

Voegelin naturally
blamed the Enlightenment for subverting Western civilization, but he traced the
origins of Western decline much further back. The first, fateful step toward
the collapse of order was Christianity's distinction between divine and secular
authority, the City of God and the City of Man. Until then, humankind had been
ruled by god-kings; as a result, everyday life was permeated by, and
structurally dependent on, the transcendent, which anchored the individual and
society.

Voegelin was best
known for his claim that the key to understanding Western history is
Gnosticism. An ancient half-Christian, half-pagan heresy, Gnosticism held that
the visible world is corrupt, the work of an evil spirit, but might be redeemed
by those endowed by a higher, hidden divinity with secret knowledge (gnosis).
In Voegelin's view, this was the archetype of millenarian ideologies, in which
an elite claiming special inspiration promises to lead the masses to a new and
better world.

Strauss too
believed that political order required divine sanction, though unlike Voegelin,
he had no vestige of belief in the divine. Strauss was a political and
religious skeptic but he was convinced that skepticism was bad for
non-philosophers - that is, virtually everyone. T.S. Eliot wrote that "human kind/
Cannot bear very much reality"; Nietzsche thought that only supermen could live
without "metaphysical comfort." Strauss agreed with them and drew a radical
political conclusion: philosophers (actually, since few people pay attention to
philosophers nowadays, one should probably say "intellectuals") ought to
protect ordinary people from unbearable truths by shoring up whatever myths
their society lives by. Of course, different societies may have irreconcilable
myths, in which case war is unavoidable. But better war than chaos or anomie.

Better for whom?
Not necessarily for ordinary people - Strauss seems to have cared very little
about ordinary people. But definitely better for intellectuals, who need to be subsidized (for their useful
myth-making) and then left alone to "converse." Lilla describes all this rather
more benignly:

Strauss held [that] all societies
require an authoritative account of ultimate matters - morality and mortality,
essentially - if they are to legitimate their political institutions and
educate citizens. Theology has traditionally done that by convincing people to
obey the laws because they are sacred. The philosophical alternative to this
obedience was Socrates's life of perpetual questioning beholden to no
theological or political authority. For Strauss this tension between
[philosophy and religion] was necessary and in any case inevitable in human
society. Without authoritative assumptions regarding morality, which religion
can provide, no society can hold itself together. Yet without freedom from
authority, philosophers cannot pursue truth wherever it might lead them.

... The philosopher and the city each
have something to teach the other. Philosophers can serve as gadflies to the
city, calling it to account in the name of truth and justice; and the city
reminds philosophers that they live in a world that can never be fully
rationalized, with ordinary people who cling to their beliefs and need
assurance. The wisest philosophers, in Strauss's estimation, were those who
understood that they must be political philosophers, thinking about the common
good. But they must also be politic philosophers, aware of the risks they take
in challenging false certainties.

This is a good
deal too charitable. False certainties, Strauss thought, were indispensable:
for ordinary people to accept, since they must have some comforting beliefs to insulate
them from the harsh truths that would drive them to despair and disorder; and
for philosophers to propagate, since they must rely on benighted ordinary
people to create a stable and orderly world, safe for philosophizing. A
properly functioning false certainty is the last thing a Straussian political philosopher
would want to challenge.

The remainder of The Shipwrecked Mind touches briefly on
a few widely assorted and (apparently) justly obscure thinkers. First is the
mid-20th-century theologian Jacob Taubes, an admirer of Carl Schmitt
and Walter Benjamin, who declared Saint Paul an apostle from the Jews to the
rest of humanity and the first and perhaps greatest revolutionary. Another
admirer of Saint Paul is the contemporary philosopher of "the event," Alain
Badiou, who considers Mao the greatest of all revolutionaries - readers can
profitably skip this chapter. Also included are two essays Lilla wrote while
living in Paris during and after the Charlie
Hebdo murders. The essays take two books by "new reactionaries," Eric
Zemmour's right-wing jeremiad Le Suicide
français and Michel Houellebecq's novel Soumission, as starting points for reflections
on the appearance of a troubling strain of cultural despair.

Introducing
readers to obscure and difficult authors is hard and valuable work. Here and in
The Reckless Mind, Lilla has done it
well and deserves our thanks. Still, although he is not strictly obliged to
indicate the degree of his agreement or disagreement with his subjects'
opinions, I think he might have seen fit to take exception to some of the more
outlandish ones. The Political Religions contains,
he tells us, "the germ of all Voegelin's major works"; the book's "basic theme"
is that "the fantasy of creating a world without religion, a political order
from which the divine was banned, led necessarily to the creation of grotesque
secular deities like Hitler, Stalin, and Mussolini. ... When you abandon the
Lord, it is only a matter of time before you start worshiping a Führer." Voegelin's assertion seems grotesque to me, a crass libel on the Enlightenment.
Historically, the correlation between religious skepticism and democracy is strong,
as is the correlation between dogmatic religion and authoritarianism. The
nearest contemporary approximation to "a world without religion" is, after all,
Scandinavia (where the divine is not exactly banned).

Likewise,
historians may be allowed to exaggerate their subjects' continuing relevance -
but only up to a point. As evidence that "we have much to learn from Voegelin's
grand narratives," Lilla urges that "those concerned with the revival of
political messianism in our time would do well to consider his searching
reflections on gnosticism." What revival of political messianism? If messianism
is a vision of radical social transformation, it is notable by its absence from
today's world. The Chinese are busy getting and spending under the benevolent
supervision of the Communist Party. The Indian masses yearn for air
conditioners and washing machines. In Russia, Putin is liquidating his barons
and enlisting Orthodoxy in a revival of Tsarism. Europe is under the thumb of
central bankers and their civil servants. Africa is torn by ethnic and
religious violence. Latin America is tending its wounds after decades of
American-backed dictatorships. And US politics, pre- and post-Trump, is
completely dominated by two parties distinguished only by the degree of their
fealty to the plutocracy. Right-wing populism dreams only of returning to the
day before yesterday, not of transforming anything. But embattled centrism must
have something to be embattled about; there must always be dangerous radicals inciting
restive masses to imminent apocalypse.

Again, Lilla seems
to agree (it's hard to tell) that "the assumption behind [the Enlightenment]
was that the world could be reformed on the basis of reason and empirical
inquiry. And that assumption, on Strauss's reading of modern history, was
wrong. All the [Enlightenment] managed to do was distort philosophy's mission,
leaving it and the world worse off." It is a peculiar reading of modern history
according to which either the world has not been reformed (cf. the abolition of
slavery, religious toleration, universal suffrage, child labor laws, the
emancipation of women) or reason and empirical inquiry do not deserve most of
the credit. That the Enlightenment "left the world worse off" is either a deep
truth or shallow nonsense, but in any case requires a great deal more in the
way of argument than Lilla (or, as far as I can tell, Strauss) provides.

Steven Smith, a
Straussian, undertakes to provide it, or some of it. Modernity and Its Discontents is an ambitious survey of the Enlightenment's
precursors (Machiavelli, Descartes, Hobbes, Spinoza), interpreters (Kant,
Hegel), and critics (Rousseau, Tocqueville, Nietzsche, Schmitt, Berlin,
Strauss), with side trips into Franklin's Autobiography,
Flaubert's Madame Bovary, Lampedusa's
The Leopard, and Bellow's Mr. Sammler's Planet. Smith is not as
cogent an analyst or as fine a stylist as Lilla, but he is an engaging and
good-natured, if occasionally prolix, guide to this vast intellectual
territory.

Gallantly, Smith
begins by putting the Enlightenment in the best possible light, defining it as
"the desire for autonomy and self-direction, the aspiration to live
independently of the dictates of habit, custom, and tradition, to accept moral
institutions and practices only if they pass the bar of one's critical
intellect, and to accept ultimate responsibility for one's life and actions." Politically,
it entails "individual rights, government by consent, and the sovereignty of
the people." What's not to like about any of that?

Some of the Enlightenment's
critics object that human nature is simply not up to it: we are too belligerent
or too greedy or too superstitious ever to put war, poverty, and intolerance
behind us. Dostoevsky's Grand Inquisitor makes this case memorably. Burke's
horror at the thought of any individual's "private stock of reason" challenging
the verdict of tradition and Carl Schmitt's insistence that our political
loyalties have nothing to do with reason but instead seize us unaccountably and
irresistibly are other influential examples of this argument, which we get in vulgar
form every day from politicians who remind us that "there is no such thing as
society," and certainly no such thing as a free lunch. Other critics think that
the universal reign of reason, peace, and justice is, alas, all too probable
and will cause any person of spirit to die of boredom (Nietzsche) or at least
not find sufficient scope for heroism and grand ambitions (Tocqueville).

Smith sums up the
"Counter-Enlightenment" very handily:

For each movement of modernity, there
has developed a comprehensive counternarrative. The idea that modernity is
associated with the secularization of our institutions has given rise to fears
about the rationalization and "disenchantment" of the world; the rise of a
market economy and the commercial republic gave way in turn to an antibourgeois
mentality that would find expression in politics, literature, art, and
philosophy; the idea of modernity as the locus of individuality and free
subjectivity gave rise to concerns about homelessness, anomie, and alienation;
the achievements of democracy went together with fears about conformism, the
loss of independence, and the rise of the "lonely crowd"; even the idea of
progress itself gave rise to a counterthesis about the role of decadence,
degeneration, and decline.

Smith's own views
are - as is often the case with Straussians - hard to pin down. He would not
dream of doubting the "immense benefits" brought about by the Enlightenment's
"humanitarian project," its belief that "science and the application of
scientific method could provide answers to the most pressing problems - war,
poverty, ignorance, and disease - facing humankind." Regrettably, however, progress
has given birth to "progressivism," an "almost eschatological faith" in
humankind's irresistible advance toward a glorious future. Certain that "all
the important problems facing civilization are technical in nature,"
progressives place entire confidence in experts and see no need for "prudence
and practical judgment." The social sciences have furnished the tools with
which a "new elite" has created an "administrative state," threatening "an end
to politics." Perhaps we must entertain the possibility - familiar to the
ancients but supposedly anathema to the Enlightenment - that "our seemingly
most intractable problems lie beyond our rational capabilities."

This is a fine
muddle. To begin with, reason was only half the Enlightenment's program. The
other, equally important half was freedom. The intimate, unbreakable connection
between the two halves was this principle: every form of authority must be
justified to those over whom it is exercised. The motto of the Enlightenment
was: "Question authority." That was also the motto of the New Left, which
Straussians rate a little below bubonic plague among history's great
misfortunes. Allan Bloom was the most hysterical of them in this respect, but
all good Straussians loathe the Sixties. Questioning authority is something
ordinary people - the poor shlubs - should be strongly discouraged from doing.

Of course radical
democrats are no friends of the "administrative state." But here is how Smith
frames his objection: "the classical idea of the statesman (or what remains of
it) gave [way] to the new idea of the expert or policy specialist as the hero
of the new age." The Straussians' idea of a heroic statesman is Henry
Kissinger, quite possibly the most callous, dishonest, self-serving public
figure in American history. To be sure, the administrative state must be made
democratically accountable. (At present it is wholly accountable, but only to people
with significant power over investment and employment, which are a capitalist
society's oxygen supply.) But in that arduous and long-term effort, Straussians
will be no help.

There have always
been free spirits and critical thinkers, but a critical mass was achieved in
the seventeenth and eighteenth centuries, when humankind finally "emerged from
its self-imposed dependence" (Kant). That beautiful and decisive episode
(chronicled for all the ages in Peter Gay's and Jonathan Israel's masterly,
epic histories) is vulnerable, like all other human achievements, to erosion
and reversal. The Enlightenment's protagonists had no illusions about the
inevitability of progress or the attainability of ultimate perfection. They
simply recognized that we would do better to trust to open debate and
democratic decision-making than to the prudence of statesmen or the wisdom of
philosophers. We owe it to them, of course, to listen patiently to their
critics, both friendly (like Berlin) and unfriendly (like Strauss). But we owe
it to ourselves and one another to keep clear forever of that "self-imposed
dependence."

Basic Income: A
Radical Proposal for a Free Society and a Sane Economy by Philippe van
Parijs and Yannick Vanderborght. Harvard University Press, 400 pages, $29.95.

From St.
Paul's venerable saying, "if a man does not work, neither shall he eat," to the
always-contemporary saw, "there's no such thing as a free lunch," the common
sense of humankind has always seemed dead against a universal, unconditional
basic income. Charity, of course, is no less a tradition: for widows, orphans,
and the infirm in all periods, and in the modern period also as social
insurance for the elderly and the involuntarily unemployed.

But aid
for the deserving poor has at times, and especially in modern times, entailed
the expensive and humiliating burden of proving to the satisfaction of donors
that the recipient is indeed both deserving and poor. And even in the most
generous and enlightened societies, such aid has sometimes had perverse
effects. The most common is the "poverty trap." When aid is means-tested, every
dollar of earned income above the qualifying level results in a corresponding
reduction of aid. This is a disincentive to accept the generally low-income,
training-poor jobs available to welfare recipients. The same disincentive
functions as a "household trap," keeping women - the usual caregivers - with
small children at home until the children are grown, insuring that when those
women do eventually enter the labor market, they are at a severe disadvantage.
In the United States, the Clinton administration resolved this dilemma by the
simple, harsh step of eliminating long-term assistance to poor households,
forcing even parents of small children into the labor force - a boon to low-wage
employers. Unemployment insurance, meanwhile, is conditional on recipients'
producing evidence of a minimum number of job applications per week - a
requirement that, as often as not, proves either burdensome or farcical.
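To see the poverty trap in miniature (a stylized illustration of the clawback, not an example from the book): if the means-tested guarantee is G and earnings are y, with aid reduced dollar for dollar, then

\[ \text{aid}(y) = \max(G - y,\ 0), \qquad \text{net income}(y) = y + \text{aid}(y) = \max(G,\ y), \]

so for anyone earning less than G, an extra dollar of wages raises net income by exactly nothing - an effective marginal tax rate of 100 percent.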

We do
not, obviously, take very good care of our poor and unemployed. And we will
soon have even more of them: the elimination of jobs by automation has barely
begun. Without a radical new approach to economic security, we are headed
either for an even worse bureaucratic morass or for a Blade Runner, devil-take-the-hindmost world.

Basic Income by economist Philippe van
Parijs and political scientist Yannick Vanderborght (hereafter V.V.) proposes a
radical idea that will be new to many readers although, as they conscientiously
point out, it has a rich history. Like most good political ideas, the right to
a basic income originated in the Enlightenment, though proposals for the relief
of the poor are of course much older. St. Ambrose waxed eloquently indignant on
the subject of economic inequality. Luther admonished the German nobility that
"it would be easy to make a law, if only we had the courage ... that every city
should provide for its own poor." The narrator of More's Utopia railed against England's savage punishment of crimes against
property: "It would be far more to the point to provide everyone with some
means of livelihood, so that no one is under the frightful necessity of
becoming first a thief, then a corpse."

The philosophes radicalized and
universalized this impulse. According to Montesquieu, the state "owes all its
citizens a secure subsistence"; Rousseau posited that "every man has naturally
a right to everything he needs"; Condorcet wrote that "society is obliged to
secure the subsistence of all its citizens." Tom Paine even proposed the first
universal basic income, combining an endowment at age 21 and a retirement
income at 50. The subsequent history of experiments with income-security
schemes in modern Europe and the United States, from Bismarck to Milton
Friedman, from the Poor Laws to the current experiments in Switzerland, is a
secondary but fascinating theme of Van Parijs and Vanderborght's book.

There
are three defining elements in V.V.'s proposal. First, basic income is individual, paid to each citizen rather
than to a family or household. Second, it is universal, paid without regard to other income or assets. Third, it
is obligation-free, a matter of
right, and not tied to any work requirement. Other aspects of their plan -
income levels, funding mechanisms, pace of adoption - are less fundamental.

The
chief reason for individual rather than household payments is that marriage and
cohabitation are complicated enough without introducing economic incentives into
a relationship. Public assistance programs usually take account of the
economies of scale in consumption that living together entails, reducing
benefits accordingly. But basic income is not a poverty-reduction program; it
is a freedom-maximization program. Its purpose is to increase options for
everyone, in both work life and intimate life.

Why
universality? In the first place, because means-testing is an administrative
nightmare. But even more important, because it frees recipients to work. At
present, earned income reduces public assistance dollar for dollar, and a
full-time job is likely to result in termination of benefits. But far too many
of the jobs available to most recipients of public assistance are insecure
dead-ends. If the jobs disappear or prove intolerable, the resulting interval
until benefits resume can plunge a family into a debt spiral. With a secure
basic income, taking whatever job is available entails no such risk, and
recipients are freer to take a low-paying job that provides valuable training
or experience, hence perhaps a way out of the low-wage ghetto. They are also
freer to create their own jobs, and even to become entrepreneurs, on however
humble a scale.

It is
the obligation-free part that sticks in many people's craw - who, after all, is
not incensed by the spectacle of the idle poor? Forcing recipients of public
assistance to prove that they are involuntarily unemployed serves several
unworthy social purposes: it gratifies popular sadism; it keeps the number of
recipients down; and it swells the reserve army of the unemployed, thereby
subsidizing low-wage employers. V.V. quote a sociologist's scathing description
of the effects of obligation-to-work regulations: by "allowing the authorities
to force someone into a job, however rotten or badly paid," they "assure that
the meanest employer, paying the worst wages for the filthiest jobs, is not
kept out of a worker while there is one able-bodied unemployed man available."
And of course, like means-testing, obligation-to-work regulations require a
large, expensive, and intrusive bureaucracy.

Historically,
the two main grounds for criticizing unbridled competitive individualism have
been efficiency and justice: it wastes the talents of the losers and deprives
them of chances for a decent life. These are also the moral underpinnings of a
universal basic income. V.V. repeatedly stress that their proposal is not a
species of poor relief; it provides a floor, not a safety net. It is not only,
or even primarily, intended to keep people from starving or sleeping in the
streets. Shelters and soup kitchens might achieve that goal equally well, but the
goal itself is too modest. A basic income aims at allowing people to design
their lives, on the principle that while creativity in some form is a universal
biological endowment, chronically insecure, degraded, and exploited people
cannot be creative, and society will be worse off for the loss.

The
argument from justice may, as in Rousseau, appeal to every individual's natural
right to realize her powers, rather than, as in the argument from efficiency, to
society's interest in her doing so. This argument is perfectly adequate,
provided rights are derived from contingent moral intuitions - Smith's and
Hume's "sympathy," for example - rather than from supposedly immutable (but
unfortunately nonexistent) metaphysical principles. But there is another, even
firmer ground for treating basic income as a right: the social nature of wealth
creation. Markets do not allocate rewards fairly; no one deserves to be filthy
rich. V.V. illustrate by quoting the economist and computer scientist Herbert
Simon, one of the 20th century's biggest brains:

When
we compare average incomes in rich nations with those in Third World countries,
we find enormous differences that are surely not due simply to differences in
motivations to earn [or natural resources, but to] differences in social
capital that takes primarily the form of stored knowledge (e.g., technology,
and especially organizational and governmental skills). Exactly the same claim
can be made about the differences in income within any given society. ... It is
hard to conclude that social capital can produce less than about 90 percent of
income in wealthy societies like those of the US or Northwestern Europe. ... [A flat
tax of 70 percent] would generously leave the original recipients of the income
with about three times what, according to my rough guess, they had earned. ... In
the US, a flat tax of 70 percent would support all governmental programs ... and
allow payment, with the remainder, of a patrimony of about $8000 per annum per
inhabitant, or $25000 for a family of three. ... Of course, I am not so naïve as
to believe that my 70 percent tax is politically viable in the US at present [i.e.,
1998], but looking toward the future, it is none too soon to find answers to
the arguments of those who think they have a solid moral right to retain all
the wealth they "earn."
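Simon's arithmetic is easy to unpack (my gloss, not part of the quotation): if social capital produces about 90 percent of income, the original recipient's own contribution is about 10 percent; a 70 percent flat tax leaves 30 percent, and

\[ 1 - 0.70 = 0.30 = 3 \times 0.10 , \]

hence his remark that the tax would "generously leave" earners about three times what they had actually earned.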

It is, indeed, never too
soon to disturb the ineffable confidence of overpaid blockheads in their
perfect entitlement to a disproportionate share of the common wealth. There
most certainly is such a thing as a free lunch. There is, in fact, a free
banquet, of which every rich person daily partakes. It is long past time they
invited the rest of us.

********************

Much of Basic Income is devoted to practical
matters: in particular, to comparing V.V.'s proposal with likely alternatives.
Traditional guaranteed minimum income schemes involve means-testing, clawback
(the reduction of assistance dollar-for-dollar of earned income), and proof
that the recipient is actively seeking employment, with the undesirable effects
already noted. Those disadvantages have been widely enough recognized that both
liberals and enlightened conservatives
generally prefer a different income-maintenance approach, either a negative
income tax or an earned income tax credit.

The
best-known proponent of a negative income tax was Milton Friedman. As with a
basic income, a minimum income level is set for all individuals or households,
and those with incomes lower than the minimum receive an amount equal to the
difference. By means of a somewhat technical but always clear discussion, V.V.
show that, given certain common features (payment to individuals, no work
requirement, and similar funding mechanisms and tax rates), a negative income
tax and a universal basic income are equivalent in every respect but one. That
respect is, however, crucial. A basic income is paid upfront: weekly, monthly,
or quarterly. A negative income tax is paid out at the end of the tax year; and
as V.V. dryly observe, "poor people cannot wait until the end of the tax year
before receiving the transfer that will enable them not to starve."
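The equivalence V.V. demonstrate can be sketched in a stripped-down model (my notation, not the authors'). Let t be the flat tax rate, B the basic income, and y* = B/t the break-even income. Under a universal basic income, a person earning y keeps

\[ c(y) = B + (1 - t)\,y . \]

Under the negative income tax, a person below y* receives t(y* - y) and a person above it pays t(y - y*), so in either case

\[ c(y) = y + t\,(y^{*} - y) = B + (1 - t)\,y , \]

the identical schedule. All that differs is timing: the basic income arrives weekly or monthly, the tax transfer only when the year's accounts are settled.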

An
earned income tax credit (EITC), a variant of the negative income tax, first
introduced in 1975, is now the largest poverty program in the United States.
Its defining feature is that benefits are paid only to the employed. If a
means-tested program is also in place, the EITC would approximate the effects
of a negative income tax, though again without the ability to cushion families
through periods of income deprivation. Without a minimum income program, an
EITC functions as a subsidy to low-wage employers, which doubtless explains its
popularity in the US.

V.V.
also discuss a range of other possibilities: wage subsidies, a reduction of the
work week, the government as employer of last resort, and a close cousin of universal
basic income, a universal lump-sum endowment at age 18 or 21. They argue,
plausibly I think, that in all cases a universal basic income offers more
freedom and security for the money. They also discuss whether and how other
kinds of government assistance - for housing, education, health care, etc., as
well as social contribution programs like Social Security - might be combined
with a basic income.

Is it
feasible? Even if America were a functioning democracy rather than a
dysfunctional plutocracy, would there be enough money? V.V. propose, for
illustrative purposes, a funding level of 25 percent of per capita GDP - around $1200 per month in the United States. On one side of the ledger, this would
replace the public assistance portion of the government's current social
welfare spending - a considerable saving. But on the other side, it would - if
financed solely by a flat tax on labor income, and without taking into account
likely improvements in productivity and human capital that would eventually
result from a basic income - entail marginal tax rates somewhere between 60 and
80 percent.
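The $1200 figure is straightforward to check (my back-of-the-envelope, assuming US per capita GDP of roughly $57,600, about its level when the book appeared):

\[ 0.25 \times \$57{,}600 = \$14{,}400 \text{ per year} = \$1{,}200 \text{ per month}. \]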

Is that
the end of the story, the graveyard of our noble hope? Not quite. There are
several other possible - indeed socially beneficial - ways of raising revenue
besides taxing labor income. Before even mentioning them, though, one should
note an elementary fact about taxes. As a famous social parasite once remarked
(a sentiment echoed by another social parasite during a recent presidential
campaign debate): "Only the little people pay taxes." Tax evasion - primarily
by those who can afford to hire expensive tax lawyers - costs hundreds of
billions of dollars in lost tax revenues each year, and an estimated $20
trillion is held offshore for purposes of tax evasion. In a rational world, the
costly and destructive Global War on Terror would be replaced by a Global War
on Tax Evasion.

Even
before that happy day, much progress can be made simply by holding President
Trump to his campaign promise to eliminate the carried-interest deduction, a
$250 billion/year gift to hedge fund managers, and more generally by taxing
capital income at the same rate as labor income. Since the top 0.1 percent of
American households receive half of all capital gains, there can be no reason
for taxing them at a substantially lower rate than the median wage earner,
except a fanatical devotion to increasing the income of the fabulously wealthy,
a compulsion one might call Republicans' Disease.

Other
possible sources of revenue discussed by V.V. include user fees on scarce
resources like land, the atmosphere, and the broadcast spectrum; a "Tobin tax"
on financial transactions; and consumption and value-added taxes. They mention
only in passing possible reductions in military expenditures, but there are
large savings to be looked for from closing some of our nearly one thousand
foreign military bases and cancelling unnecessary weapons programs, above all
the mind-bogglingly expensive F-35 Joint Strike Fighter.

******************

Convalescence
from large-scale social pathologies like plutocracy is bound to be gradual.
Barring a miraculously rapid disappearance of Republicans' Disease, a universal
basic income will have to be approached by stages. Essentially this means
compromising on funding levels, funding range, or obligation to work. After further
technical but, again, lucid discussion, V.V. settle on two options: in
societies where the obstacles to a basic income are mainly political, a negative
income tax, which appears to base assistance on virtuous behavior, i.e.,
participation in the labor market; and in societies where the chief obstacles
are economic, a partial universal income, starting low and perhaps increasing
along with average productivity.

We can,
of course, start anywhere - "if only we had the courage," as Luther put it. It
hardly matters where or how. What matters - what will lift the heart of every
reader of Basic Income - is that Van
Parijs and Vanderborght have enlisted the rigor and scruple of first-rate
social science in the service of a generous social vision that is at least as
old as Saint Ambrose and as up-to-date as Pope Francis. Our sensible and humane
descendants - they are bound to be sensible and humane, since humanity would
otherwise have long since succumbed to nuclear or environmental catastrophe - will
doubtless wonder, with the easy impatience of posterity, what we were waiting
for. They may, in fairness to us, decide that we were waiting for books like
this.

Convergence: The Idea
at the Heart of Science: How the Different Disciplines Are Coming Together to
Tell One Coherent, Interlocking Story, and Making Science the Basis for Other
Forms of Knowledge by Peter Watson. Simon & Schuster, 543 pages, $35.

In Dreams of a Final Theory, Steven
Weinberg propounds a familiar but unfailingly stirring claim:

Scientists have discovered many
peculiar things, and many beautiful things. But perhaps the most beautiful and
the most peculiar thing that they have discovered is the pattern of science
itself. Our scientific discoveries are not independent isolated facts; one
scientific generalization finds its explanation in another, which is itself
explained by yet another. By tracing these arrows of explanation back toward
their source we have discovered a striking convergent pattern - perhaps the
deepest thing we have yet learned about the universe.[1]

Peter Watson, at
any rate, was stirred and has written a lively and colorful volume to
illustrate that "convergent pattern." Watson is a prolific novelist and popular
historian of art, ideas, and now science. As the titles of his books suggest -
e.g., Ideas: A History of Thought and
Innovation from Fire to Freud; The Modern Mind: An Intellectual History of the
Twentieth Century; The Great Divide: Nature and Human Nature in the Old World
and the New; The Age of Atheists: How We Have Sought to Live Since the Death of
God - Watson is a Big Picture man. Convergence
promises a "master narrative" of "synthesis, symphysis, and coherence" among
the sciences, in which

the intimate connections between
physics and chemistry have been discovered. The same goes for the links between
quantum chemistry and molecular biology. Particle physics has been aligned with
astronomy and the early history of the evolving universe. Pediatrics has been
enriched by the insights of ethology; psychology has been aligned with physics,
chemistry, and even with economics. Genetics has been harmonized with
linguistics, botany with archaeology, climatology with myth - and so on and so
on.[2]

Perhaps depth and originality of
insight, or intricacy of argument, are too much to demand alongside such grand
sweep and brisk pacing. Convergence
will not deeply engage academic philosophers or even historians of science. But
it is superior popularization, and very satisfying in its way.

One of the book's
strengths is its wealth of anecdote and biographical detail. Watson begins with
the poignant story of Mary Somerville, one of those remarkable women who seem
to be emerging from the historical shadows with increasing frequency.
Somerville (1780-1872) was a brilliant, self-taught mathematician who was
published in the Proceedings of the Royal
Society and numbered dozens of the Fellows among her friends, although
women were not allowed to attend lectures at the Society until after her death.
Her second book, On the Connexion of the
Physical Sciences (1834), one of the first to trace a pattern of
unification and simplification in the discovery of physical laws, was reprinted
throughout Europe and praised by James Clerk Maxwell.

Perhaps the most
discerning review of Somerville's Connexion
was by the historian and philosopher of science William Whewell, who pointed
out that until then, most commentators on science had been struck rather by the sciences' increasing divergence and diversity. There was not even an agreed name
for workers in the field - Whewell coined the term "scientist" (and later
"physicist" and "consilience"). But his and Somerville's perception of the
advancing unification of the sciences was vindicated, Watson writes, by the
emergence in the 1850s of "the two most powerful unifying theories of all time"[3]:
the conservation of energy and biological evolution.

Watson recounts
the early experiments of Faraday, Julius Mayer, Joule, and Kelvin on heat,
electricity, and magnetism, summarized as the laws of thermodynamics in papers
by Helmholtz in 1847 and Clausius in 1850. Following this, Maxwell and
Boltzmann introduced statistics into thermodynamics, allowing the velocities,
spatial distribution, and collision probabilities of the molecules in a gas to
be calculated and introducing the concept of entropy as the measure of the disorder of a system.
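The statistical conception of entropy can be stated in one line (standard physics, not a quotation from Watson): if a macroscopic state of a gas can be realized by W distinct microscopic arrangements of its molecules, its entropy is

\[ S = k_B \ln W , \]

where k_B is Boltzmann's constant; disordered states, realizable in vastly more ways, therefore carry higher entropy.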

(In passing,
Watson takes note of Thomas Kuhn's suggestion that the conservation of energy
was suggested to its German pioneers by the writings of Schelling and other Naturphilosophen, who maintained that
"magnetic, electrical, chemical, and finally even organic phenomena would be
interwoven into one great association."[4]
If true, this would seem to reverse the usual direction of influence between
science and philosophy.)

Charles Darwin's
career and the publication of The Origin
of Species might seem so well documented that even a resourceful popular
historian would be at pains to narrate them interestingly. But Watson rises to
the challenge with an account of developments in two apparently unrelated
fields, which nonetheless were indispensable parts of the Origin's intellectual background. William Herschel's astronomical
observations, including his discovery of Uranus and of hundreds of nebulae, and
above all his theories of galactic formation and evolution, changed the
character of astronomy "from a mathematical science concerned primarily with
navigation, to a cosmological science concerned with the evolution of stars and
the origins of the universe."[5]
(And once again Watson draws a remarkable woman out of the shadows - Herschel's
sister Caroline, who served as his assistant and herself discovered eight
comets.)

The other science
in the background of evolution was geology, its story much better known. Watson
traces its development through its early practitioners - Hutton, Buckland,
Cuvier, Sedgwick, Murchison - to the discovery of the Ice Age by Louis Agassiz
and, most familiarly, to Charles Lyell, whose Principles of Geology made an irrefutable case for the Earth's
being far older than previously assumed. Without these crucial advances in
cosmology and geology, Watson writes, "Darwin would not have been plausible."[6]

The next great
unification was that of physics and chemistry. The discovery of the periodic
table of the elements in the late 1860s, Watson observes, "gave chemistry a
central idea to put alongside Newton's in physics and Darwin's in biology."[7]
In subsequent decades, Hertz, Röntgen, Becquerel, and other physicists,
experimenting with electromagnetism, discovered the phenomenon of
radioactivity. X-rays were immediately added to the arsenal of medicine, while
radium, polonium, radon, and other elements were added to the periodic table.
In 1911 Ernest Rutherford's bombardment of metal foil with beams of alpha particles
revealed the planetary structure of atoms, with electrons orbiting around a
positively charged nucleus.

All these strands
were drawn together in Niels Bohr's celebrated trio of papers, "On the
Constitution of Atoms and Molecules." It was not clear why, in Rutherford's
model, the orbiting electrons did not either fly apart or collapse into the
nucleus. Bohr recognized that the quantum nature of matter meant that only
certain orbits were permissible. This insight, that "although the radioactive
properties of matter originate in the atomic nucleus, the chemical properties reflect primarily the distribution of
electrons," explained "at a stroke ... the link between physics and chemistry."[8]
Within a decade, Bohr had married the two fields even more closely by
explaining the similar properties of each family of elements in the periodic
table in terms of the arrangement of electrons in their outermost orbit, an
achievement Einstein delightedly described as "the highest form of musicality
in the sphere of thought."[9]

Linus Pauling is,
along with Bohr, one of the heroic unifiers in Watson's account. The nature of
the chemical bond was the theoretical Holy Grail among early 20th-century
chemists. Because Pauling knew far more about crystallography and quantum
theory than most chemists and more about chemical properties than most
physicists, he could "distill what he knew about quantum mechanics, ionic
sizes, and crystal structures, and put that together with a traditional
understanding of the habits of the elements, all wrapped up into a set of rules
for indicating which 'joining patterns' were most likely."[10]
With his further discovery of "resonance" - the coexistence of ionic and
covalent bonds between atoms in a single molecule - Pauling was able to explain
the tetrahedral bonding of carbon atoms, the puzzling reactivity of benzene,
and the structure of more than two hundred other, mostly organic, molecules, in
effect birthing the science of molecular biology.

The next phase of
the "friendly invasion of biology by physics," Watson writes, came via Edwin
Schrodinger's What Is Life?, which
looked at heredity from the physicist's point of view. Schrodinger estimated
the dimensions and structure of the chromosome, and was the first to
characterize it as "a message written in code."[11]
According to Watson, What Is Life? deeply
influenced DNA researchers Francis Crick, James Watson, and Maurice Wilkins, as
well as the equally important work on protein structure in the 1950s.

In the 1960s and
70s, physics experienced its own internal consolidation. The discoveries of the cosmic background radiation, of the subatomic particles found in cosmic rays, and of quasars and pulsars were "all synthesized into one consistent, coherent,
unified story, to produce a detailed assessment about the origin and evolution
of the universe." Watson calls it "the second evolutionary synthesis."[12]

The last third of Convergence is at once the most
interesting and the least readily assimilated to Watson's grand narrative of
the unification of the sciences. It mostly (apart from somewhat breathless
overviews of information theory, string theory, and Many Universes theory) deals
with recent developments in planetary science and social science. Just as cosmology, in working out the biography of the universe, depended on advances in particle physics, so paleontology, in writing the biography of the earth, required new microphysical tools and techniques, above all radioactive dating
based on the half-lives of uranium and carbon. The moon landing also helped,
Watson suggests, to solve a key paleontological puzzle: the K-T boundary, or
the exceptionally sharp divide in the fossil record between the Cretaceous and
Tertiary periods 65 million years ago. The frequency of cratering on the moon
led to speculation that an asteroid had caused a large-scale extinction on
earth. Physicists helpfully pointed out that impact sites would be rich in iridium, an element rare in the earth's crust (most of it having long since been drawn into the earth's iron core) but abundant in asteroids. Iridium, according to Watson, was the key clue that led to the discovery of the Yucatán crater where the great asteroid hit. It is undeniable in this
case that physics expanded the paleontologist's toolkit. Whether that amounts
to a unification of the two sciences is another matter.

Watson's chapter
on "Big History" is even more interesting. Carefully and imaginatively, he
traces several lines of evidence, including myths, archaeological artifacts,
paleogenetics, linguistics, and astronomy, to deduce a convincing story of the
origins and early migrations of Homo
sapiens. His surprising (to me, at least) conclusion is that "for
approximately 16,500 years - from 15000 BC to AD 1500, 640 generations - there
were two populations of people in the
world who, insofar as we know, were unaware of each other."[13]
In other words, civilization evolved twice.

By contrast, and a
little anticlimactically, a chapter on sociobiology and evolution covers mostly
familiar ground. It falls short, at any rate, of establishing Watson's claim
that Jacques Monod's Chance and Necessity
(1970) and Edward O. Wilson's Sociobiology
(1975) mark (his italics) "the watershed
moment when the coming together - the convergence - of the sciences achieves
such resonance that science itself becomes the basis for comprehending other
forms of knowledge."[14]

******************************

Some authors are
storytellers; others make arguments. Few authors, I suspect, are equally
skilled and comfortable at both. Even among storytellers, there is a distinction between dramatists of personality and dramatists of ideas. The best intellectual history makes ideas into characters, whose biography - birth, maturity, decline - engages us even as their adherents' lives and circumstances
seem incidental. Watson is not this kind of historian, able to give his story something
like sonata form, with a leading theme followed by its development, abstract
and sensuous at the same time. Nor is he particularly rigorous: his idea of
convergence is a little loose and baggy, almost promiscuously inclusive, with
mere connection or analogy sometimes standing in for unification. It seems a
bit cavalier, for example, to claim D'Arcy Thompson as a prophet of unification
for maintaining that natural selection "cannot by itself possibly account for
the diversity we see around us" but instead must have been "aided by the
self-organization of matter based on mathematical and physical principles."[15]
Wouldn't that make Thompson a complicator rather than a unifier?

Watson is instead
a fluent and enthusiastic personalizer, quick to drop the thread of conceptual
continuity in order to relay an anecdote or display a piquant quote. Fortunately,
most of his anecdotes and quotes are well-judged. It is amusing, for example,
to learn that Einstein's Greek teacher informed him that "whatever field he
chose, he would fail at it"[16];
likewise, to hear about J.J. Thomson, director of the Cavendish Laboratory,
that "one day he brought a pair of new trousers on his way home for lunch,
having been convinced by a colleague that his old pants were too baggy and
worn. At home he changed into his new trousers and returned to the lab. His
wife, arriving home, found the worn-out pair on the bed. Alarmed, she hurriedly
telephoned the Cavendish, convinced that her absent-minded husband had gone back
to work without any trousers on."[17]
In a different vein, it is poignant to overhear the troubled Wolfgang Pauli
confessing his predilection for the Viennese "night, sexual excitement in the
underworld - without feeling, without love, indeed without humanity."[18]
And the book's account of nuclear physicist Lise Meitner's escape from the
Nazis is thrilling.

But philosophical
questions are not ignored in Convergence,
even if they are not pursued with the depth and precision they might have been.
Watson devotes a chapter to the Unity of Science movement in the 1930s,
discussing several contributions to the first edition of the International Encyclopedia of Unified Science
(1938). Because the protagonists of the movement were the logical
positivists of the Vienna Circle, the question of physicalism was central. But
the nature and complexities of that doctrine, and its subsequent vicissitudes
in the philosophy of science, are barely acknowledged.

There is, however,
in a later chapter, a long discussion of an important paper from the 1950s by
Hilary Putnam and Paul Oppenheim, "Unity of Science as a Working Hypothesis."
The paper listed six "reductive levels," in descending order: social groups,
multicellular organisms, cells, molecules, atoms, and elementary particles.
That all these "may one day be reduced to microphysics (in the sense that
chemistry seems today to be reduced to it)"[19]
seemed to them a reasonable expectation. But they closed on a more equivocal
note, with a quote from the general systems theorist Ludwig von Bertalanffy,
which instead of strict reductionism spoke of "a superposition of many levels,
from physical and chemical to biological and sociological systems. Unity of
Science is granted, not by any utopian reduction of all sciences to physics and
chemistry, but by the structural uniformities of the different levels of
reality."[20]

In the book's
final chapter, "A Pre-Existing Order?", dissenting voices are heard from. The
leading theme of the opposition to reductionism is emergence: the observation that at a certain level of complexity,
new properties sometimes appear that cannot be explained or predicted by the
known rules of interaction among the smaller units involved. Life and
consciousness are the most commonly cited examples, though Watson also mentions
"processes of self-organization leading to nonhomogenous structures and
nonequilibrium crystals."[21]
In these cases, "microscopic rules can be perfectly true and yet quite
irrelevant to [the resultant] macroscopic phenomena."[22]
As Robert Laughlin, a prominent critic of reductionism, puts it:

The laws of nature that we care about ...
emerge through collective
self-organization and really do not require knowledge of their component parts
to be comprehended and exploited. ... Physical science [has] stepped firmly out
of the age of reductionism into the age of emergence. The shift is usually
described in the popular press as the transition from the age of physics to the
age of biology, but that is not quite right. What we are seeing is a
transformation of a worldview in which the objective of understanding nature by
breaking it down into ever smaller parts is supplanted by the objective of
understanding how nature organizes herself.[23]

Does emergence
undermine convergence? Watson cheerfully shrugs off the challenge: "The story
told in these pages is not a straight line ... but it is a line, a narrative, which hangs together, and is not a mere
artifact of the instruments with which the observations have been carried out.
There is an order to our world, and
how we got here."[24] Others
are less confident. To the physicist John Barrow, "extremes of complexity ...
reveal the limits of a reductionism that looks to a Theory of Everything to
explain the totality of the natural world from the bottom to the top."
Reductionism may be "trivially true," in that it helps us eliminate
metaphysical mysteries like the elan
vital. But complex aggregates display "a wider diversity of behavior than
the sum of their parts," so that "if reductionism means that all explanations
of complexity must be sought at a lower level, and ultimately in the world of
the most elementary constituents of matter, then reductionism is false."[25]
The astrophysicist John Gribbin, surveying the same phenomena, comes to an
apparently opposite conclusion:

[C]haos and complexity obey simple laws
- essentially, the same simple laws discovered by Isaac Newton more than three
hundred years ago. Far from overturning four centuries of scientific endeavor
as some accounts would lead you to believe, these new developments show how the
long-established scientific understanding of simple laws can explain (although
not predict) the seemingly inexplicable behavior of weather systems, stock
markets, earthquakes, and even people. ... [T]he complicated behavior of the
world we see around us is merely "surface complexity arising out of deep
simplicity."[26]

I say "apparently"
opposite because the ground of the disagreement - the meaning of "reductionism"
- is not altogether clear. Clearly the Standard Model of elementary particles
does not explain consciousness or even protein structure. But just as clearly,
no one claims that it does. What is often claimed, rather, is that theories of
simpler forms of matter underlie
theories of more complex forms. "Underlie" is a metaphor, and so needs to be
unpacked. Perhaps "constrain," in the sense of "limit," is the operative
meaning. That is, a lower-level theory (e.g., the theory of elementary particles)
constrains a higher-level theory (e.g., the theory of protein structure) in the
sense that, if both are well established and yet incompatible, the
higher-level theory must give way. Then again, what would that mean? In
practice, an incompatibility of that sort would simply motivate redoubled
efforts to confirm that the two theories were well-established and that they
were genuinely incompatible. And if so, the only reasonable attitude would be a
temporary suspension of judgment, uncomfortable though that might be.

It may be, as
Frank Wilczek writes, that "reductionism has a bad name ... because
'reductionism' is a bad name." It
suggests a blinkered "no more than"-ism, rather than, as Wilczek and his fellow
Theorists of Everything experience it, "a spiritual quest, reaching for the
sublime."[27]
Spiritual quests do not always end well, of course. The presiding genius of Convergence is Einstein, who avowed in
his Nobel Prize lecture that "the mind striving for unification cannot be
satisfied that two fields should exist which, by their nature, are quite
independent."[28] A
stirring sentiment; but as Watson acknowledges, Einstein died unsatisfied.

The Difference It Makes

The Restless Clock: A
History of the Centuries-Long Argument over What Makes Living Things Tick
by Jessica Riskin. Univ. of Chicago Press, 548 pages, $40.

When two aspiring
young writers meet and circle each other at a party in Boston, Brooklyn, or
Berkeley, sooner or later (usually sooner) one will ask: "Do you have an
agent?" Without one, every hopeful writer knows, you're nowhere: editors
nowadays are too beleaguered to read anything not vouched for by someone whose
commercial judgment has been tested and vindicated in the literary marketplace.

According to the
delightful science-fiction romance Her (2013),
artificial intelligences also socialize, or will before long. I imagine them
asking one another at parties, "Are you an agent?" They will not, of course, be
asking about literary representation but about the psychological or emotional
or moral capacity we commonly call "agency." They'll be looking to find out whether
the AI they're meeting answers ultimately to itself or to someone else, whether
it can set and change its own goals, whether it can surprise itself and others.
Beings possessed of agency are autonomous, spontaneous, capable of initiative,
and moved by internal as well as external forces or drives. Agents are usually considered
much more fun to be in a relationship with than non-agents.

According to
Jessica Riskin's The Restless Clock,
agency is everywhere, or at least far more widespread than is dreamt of in
modern philosophy of science. If agency is "an intrinsic capacity to act in the
world,"[1]
then science is not having any. It is "a founding principle of modern science ...
that a scientific explanation must not attribute will or agency to natural phenomena."[2]
This "ban on agency" is the foundation of scientific epistemology; it "seems as
close to the heart of what science is
as any scientific rule or principle."[3]

Of course,
scientists constantly write and speak as
if natural phenomena had purpose and intentions: proteins "regulate" cell
division; some cells "harvest" energy; genes "dictate" myriad biochemical
activities. But this anthropomorphizing language is just a convenience, a
placeholder. As a biologist friend assures Riskin: "The more we get to know,
the less these phenomena will seem purposeful."[4]
Pressed, her friend laughs nervously and confesses: "OK, you're right; it's a
matter of faith. And, as with any matter of faith, I am absolutely unwilling to
admit that I'm wrong. I know that if
I knew everything about the processes I study, I would have no reason to appeal
to agencies of any kind, even as a manner of speaking, let alone as an
explanation."[5]

Centuries of
debate - largely occluded thanks to the theoretical and institutional hegemony
of one side - have issued in this "quandary," Riskin argues. On one side of
this debate, espoused by Riskin's biologist friend, is a conception of nature
as a "brute" or "passive" mechanism, lacking agency and susceptible of entirely
non-teleological explanations. The other, much less familiar tradition sees nature
as an "active" mechanism, wholly material yet also "restless, agitated,
responsive, purposeful, sentient," as "self-constituting and self-transforming
machinery."[6]
Ranging over Western (and occasionally non-Western) intellectual history from Aristotle's
treatises on animals to current controversies in evolutionary theory and
cognitive science, The Restless Clock
undertakes, with astonishing energy and resourcefulness, to excavate this
debate and expound its significance.

******************************

In the Middle
Ages, Europe was dotted with automata - saints, angels, devils, and monks in
churches and cloisters as well as festivals and marketplaces; knights, ladies,
animals, and mythological figures in castles and gardens; and timepieces of
every variety. Some of these figures were marvelously elaborate. In the
Benedictine Abbey at Cluny, a mechanical cock

flapped and crowed on the hour ...
Meanwhile, an angel opened a door to bow
before the Virgin; a white dove representing the Holy Spirit flew down and was
blessed by the Eternal Father; and fantastic creatures emerged to stick out
their tongues and roll their eyes before retreating inside the clock.[7]

Even more remarkable was the clockwork of the Strasbourg Cathedral.

For nearly five centuries, the
Strasbourg Rooster cocked its head, flapped its wings, and crowed on the hour
atop the Clock of the Three Kings, originally built between 1352 and 1354, and
refurbished by the clockmaker brothers Isaac and Josias Habrecht between 1540
and 1574. Beneath the Rooster, the astrolabe turned and the Magi scene played
out its familiar sequence. In the Habrecht version, the Rooster, Magi, Virgin,
and Child were joined by a host of other automata: a rotation of Roman gods who
indicated the day of the week; an angel who raised her wand as the hour was
rung, and another who turned her hourglass on the quarter hour; a baby, a
youth, a soldier, and an old man representing the four stages of life, who rang
the quarter hours; and above them, a mechanical Christ came forth after the old
man finished ringing the final quarter hour, but then retreated to make way for
Death to strike the hour with a bone.[8]

The Catholic
Church was an enthusiastic patron of these automata, as well as of translations
of ancient texts on mechanical engineering, which inspired Christian craftsmen.
From this robust appetite for dramatic embodiments of virtue, vice, and divine
love, Riskin deduces that medieval Catholicism "held no sharp distinction
between the material and the spiritual, earthly and divine."[9]
Scholastic theology endowed non-human life with "vegetative" and "nutritive"
souls, and the Great Chain of Being emphasized the relatedness of all creation,
from ants to angels.

Reformation
theology, reacting against this ontological permissiveness and Southern
European earthiness, which it perceived as a profanation of the sacred and a
source of rampant ecclesiastical corruption, discouraged icons, rejected the
Eucharistic doctrine of transubstantiation, and favored a more austere form of
piety, emphasizing God's otherness and dispensing with saintly and angelic
intermediaries. The grand panoply of pious and secular automata went "from
being manifestations of spirit and liveliness to being fraudulent heaps of
inert parts."[10]
Matter and mechanism were no longer taken to be active and vital but instead
inert and passive.

This radical
separation of spirit and matter formed the intellectual backdrop of the seventeenth-century
scientific revolution. Nature was conceived of as a vast machine which, like
all machines in the new Reformation cosmogony, could only be set in motion by
an external source: God. Descartes initiated the revolution in philosophy by
relocating the self from the medieval animated body to an immaterial soul. "The
Cartesian removal of soul from the machinery of the world, like the Reformist
removal of God from nature, left behind something starkly different. ... The
animal-machine, as Descartes described it in the first instance, was warm,
mobile, living, responsive, and sentient. The same living machinery, when
measured against a disembodied, transcendent self, looked different: confined,
rote, passive."[11]

This was not
altogether Descartes' intention. Riskin surveys the response to Cartesianism in
fascinating detail, showing how one interpretation prevailed: that Descartes
had "demonstrated God's existence by detailing the mechanical perfection of his
artifact, the world-machine."[12]
Although Malebranche, Buffon, Boyle, Hobbes, Locke, Newton, and other dominant figures in early modern intellectual history embraced this interpretation, Descartes himself, she contends, would have rejected the "new tradition of natural theology,"[13]
with its watchmaker God.

So did quite a few
others, though their arguments have largely been lost to view, as is often the
case with unorthodox intellectual traditions. One was the celebrated
physiologist William Harvey, who taught that animals and even the internal
organs of human beings harbored "forms" that solved problems, took initiatives,
and allowed for "a rising of mechanical parts to new powers."[14]
His Oxford colleague Thomas Willis described bodies as "vital, perceptive,
active animal-machines,"[15]
not passive but self-moving.

The best-known and
most thoroughgoing exponent of "active mechanism" was Leibniz. Convinced that
the closed mechanical systems described by Boyle and Newton admitted of no
explanation for motion and change, Leibniz posited a vis viva, a living or vital force. It was a "metaphysical" thing, a
"principle underlying all material events." Instead of impenetrable,
indivisible, insentient atoms, he proposed that the fundamental units of matter
were a species of "metaphysical points," with "something vital about them."[16]
These were the "monads," elementary spiritual substances out of which more
complex creatures were built up. The resulting entities were best described as
"organized" rather than "designed"; their plan allowed for spontaneity and
learning.

Leibniz found many
followers among eighteenth-century physicists and engineers, including the
Marquise du Châtelet, Lazare and Sadi Carnot, and others, but his reputation
was lastingly deflated by Voltaire's satire Candide
- unfairly, Riskin protests, since his
"pre-established harmony" was not at all the purblind optimism of Doctor
Pangloss. Still, it is a bit difficult not to misunderstand Leibniz, even with
the benefit of Riskin's exegeses. The materialists' "big mistake, he judged,
was to assume that a mechanist science must eliminate incorporeal things, when
in fact a mechanist science required incorporeal things. Leibniz was after a
third way: ... a fully mechanist account of nature that included immaterial
'active force.'" The universe was "a great nesting of machines within machines
within machines, all built out of little perceiving spirits." "I have at last
shown," he wrote triumphantly, "that everything happens mechanically in nature,
but that the principles of mechanism are metaphysical."[17]
After considerable wrestling with these and other propositions of Leibniz, I am
inclined to pardon Voltaire.

The philosophes, though enthusiastic
materialists and defiant atheists, also found evidences of agency within both
living creatures and machines. "Organization," according to La Mettrie, author
of L'homme machine, "is the first
merit of Man." Riskin elaborates: "An organized machine was a concurrence of
active parts, unlike the rigidly deterministic, designed clockwork described by
natural theologians."[18]
The favored eighteenth-century metaphor was weaving; organisms were
"self-moving looms," their bodies "self-weaving fabric."[19]
Though Emerson and Carlyle believed that the legacy of the eighteenth century
was a "dead world of atoms controlled by the laws of a dead causality,"[20]
Riskin claims that a closer look at the philosophes'
texts (along with Kant's, whose ideas about the philosophy of science she
renders intelligible, even lively) reveals "a physical world imbued with
perception, feeling, and self-organizing agency."[21]

"By the turn of
the nineteenth century," Riskin writes, "a living being in scientific,
philosophical, and literary understanding had become, in essence, an agent ... a
thing in constant, self-generated motion and transformation of material parts ...
'striving' ... and respons[ive] to external circumstances."[22]
Life was "a form of activity, a continual effort to constitute itself from and
against dead matter."[23]
In this intellectual climate, the science of biology - first named by Lamarck -
came into being. Lamarck has been generally misunderstood, Riskin claims. The
inheritance of acquired characteristics does not rely on a mysterious mental
agency or a mystic elan vital. The
life force, or pouvoir de la vie, was
entirely material but not wholly passive, and its effects were not altogether
random. In the course of their development, organisms responded to new
challenges by evolving new capacities, sometimes by altering the environment to
favor the selection of these capacities, and sometimes by changing the
organism's hereditary material. The "pope" of late-nineteenth-century
Darwinism, August Weismann, declared this impossible, formulating the
still-prevalent doctrine that changes in an organism's physiology or
environment can only affect its somatic cells, not its germ cells.

There is a long,
semi-underground history of Lamarckian challenges to this Darwinian orthodoxy,
beginning, Riskin suggests, with Darwin himself, who asserted the inheritance
of acquired characteristics and hesitated, through all six editions of the Origin of Species, to omit all
references to purpose and internally directed adaptations. Throughout the late
nineteenth and early twentieth centuries, his successors - Weismann, His,
Haeckel, Roux, Loeb, Stresemann, Driesch, De Vries, T.H. Morgan, D'Arcy Thompson
- argued the mechanisms of biological causation back and forth, their arguments
traced in heroic detail in The Restless
Clock. The "neo-Darwinian synthesis" that emerged was a decisive victory
for the passive mechanists, enshrining the principle that "agency cannot be a
primitive, elemental feature of the natural world."[24]

The intuition that
deterministic models of causation, at least as currently conceived, cannot
fully explain the purposive behavior of living beings refuses to die, however. Riskin
finds traces in two current debates: in cognitive science, between "embodied"
and "computational" theories of intelligence; and in evolutionary biology,
between "strict" and "modified" adaptationism. Frustratingly, both sides in
both these debates explicitly disavow any recourse to individual agency or
purpose as an explanatory strategy. But she draws hope from the unsettled state
of both debates; from new sub-disciplines that are yet to be incorporated into the
passive-mechanist consensus: Chomskyan linguistics and epigenetics; and from
that bottomless wellspring of theoretical speculation, quantum uncertainty.

**********************************

The Restless Clock, though an
extraordinary achievement, is naturally open to a few minor cavils. I think
perhaps Riskin exaggerates the extent to which medieval Catholicism "held no
sharp distinction between the material and spiritual" and to which "a rigorous
distinction between divine spirit and brute matter" was an innovation of the
Reformers. Though Scholastic philosophy, as she correctly points out, defined
the soul as the "form" of the body, the difference was nevertheless absolute. A
handbook of Catholic theology lists as de
fide (i.e., to be believed on pain of excommunication) the doctrine that
"Man consists of two essential parts - a material body and a spiritual soul."[25]
It is true that the spirituality of the Latin Catholic countries during the Middle Ages was indeed grossly materialistic. But popular spirituality is not
theology.

Riskin is clearly
a partisan in the debates she reconstructs, but her judiciousness in reporting
the arguments on both sides is exemplary. There is only one sentence in the
book that might occasion a raised eyebrow. Discussing the
Gould-Lewontin/Dawkins-Dennett debate over adaptationism, she writes: "[S]trict
adaptationists, like the natural theologians of old, assumed that all
structures in nature existed for reasons of optimal design, and that the
orderings and arrangements of the natural world were therefore normatively correct
and good."[26]
The alleged inference is fallacious; the second half of the sentence does not
follow from the first. Actually, it's not clear whose opinion the sentence
conveys, Riskin's or Gould and Lewontin's. Either way, the accusation of Social
Darwinism - for that is what it amounts to - seems to me out of place.

Notwithstanding
all the enlightenment (and entertainment) I derived from The Restless Clock, I must confess to an even more fundamental
dissatisfaction - whether with the book's argument or with my own power of
comprehension, I am genuinely unsure. I feel a little like Mister Jones in one
of our recent Nobel laureate's best-known songs: something is happening here, but
I don't know what it is.[27]
The insistent rhetoric of active versus passive, of "living force" versus "dead
matter," of "purpose," "striving," and
"self-organization" versus "inert" and "brute" mechanism, has me quivering in
agreement - but with what, exactly, I'm at something of a loss to say. Even
after hundreds of pages of Riskin's painstaking exposition, I cannot quite get
my mind around the difference between a world with agency and a world without
it.

The book opens by
recounting Thomas Huxley's celebrated joke, in a lecture of 1868, to the effect
that we no more need a constitutive principle called "vitality" to explain life
(or by extension, "agency" to explain action) than we need a constitutive
principle called "aquosity" to explain water. Riskin does her best - which, as
I hope I have duly acknowledged, is very good indeed - to suggest that the joke
is on Huxley. But I'm not convinced. Recall her definition, also from the
book's opening pages, of agency: "an intrinsic capacity to act in the world." I
think I see what work "intrinsic" (though not "in the world") is meant to do in
that sentence: to distinguish action whose cause is from within rather than
from without, a squirrel from a rock. Does a rock, then, have an "extrinsic
capacity to act"? It can roll if someone rolls it - but is that acting? Usually
we say that a rock has no capacity to act unless acted on. So maybe the
definition of agency should be "the capacity to act without being acted on." Of
course, nothing is ever not being
acted on somehow; and for that matter, everything inside anything was once
outside it and will eventually be outside it again; and what's more, everything
at every instant is so multifariously embedded in so many networks and
hierarchies that "inside" and "outside" begin to look a bit dodgy. Perhaps we
should just define agency as "capacity to act." And how do we identify this
capacity? Initially we may assume that if it looks just like something else
that acts, then it probably acts as well. But maybe it's only an exquisite copy.
Unless it actually acts - somehow, anyhow, even just metabolizes - we can't
know whether it has the capacity to act.

Hmm. I'm not sure,
then, why we shouldn't conclude that "X has the capacity to act" (that is, "X
has agency") doesn't mean anything more or less than "X acts (somehow or other,
sometime or other)." "X has agency" adds no more to "X acts" than "X has dormitivity"
adds to "X sleeps" - or than "X has aquosity" to "X is wet." If we know how
something acts in every situation of interest to us, we'll have the answer to
any question that the statement "X has agency" might be the answer to. Maybe
Huxley had a point.

What about the
"from within" part, the autonomy, spontaneity, responsibility, etc. that are
supposedly entailed by agency? Those are useful words; there's no reason why we
should give them up, any more than Riskin's biologist friend need feel sheepish
about using words like "control," "regulate," or "dictate." With their help we
can still worry the questions Riskin presumably wants us to, questions like:
Can a complex machine act spontaneously? What emotions do other primates (cephalopods, cetaceans, etc.) feel? Are any plants conscious? How do proteins know just what to
do? What can alter an organism's DNA? What affects how genes are expressed? But
wouldn't two conscientious scientists, one a believer in active mechanism and
one in passive mechanism, go about investigating these questions in exactly the
same way, and mightn't they very well report exactly the same results? And if
that's the case, do their philosophical beliefs matter? Isn't it only when they
operate to close off inquiry prematurely - by saying, in effect, "just because"
- that words like "will," "purpose," "mind," and "inner life" are unwelcome? Riskin
would, I suppose, reply that the point of agency/teleology/self-organization
talk is to generate new, otherwise unformulatable questions. Fine, but where
are they?

Newly acquainted
artificial intelligences (carbon-based ones, too) will undoubtedly continue to
probe for assurance that their potential partners are fully (or at least
sufficiently) aware of their own motives and in control of their own purposes.
They will ask the same searching questions (since even those who use the
language of "agency" will hardly be satisfied with a simple "Yes" in reply to
the question "Are you an agent?"), listen for the same revealing intonations,
watch for the same telltale gestures and expressions. A world with the term
"agency" will look and feel exactly like a world without it, which means it is
what William James (in one of his many moods) called the term "free will": a
"fifth wheel to the coach."[28]

"Intellectual
possibilities are not the sole fruits of this study," Riskin promises at the
book's outset.

Social and political engagements ... have
all along been inextricable from the competition between scientific models of
living and human beings. The classical brute-mechanist approach to the science
of life and the active-mechanist approach have developed, as we shall see, in
close conjunction with mechanical and industrial arrangements such as the
automatic loom and the transformed world of production that accompanied it;
with economic policies including the division of various kinds of labor; with
taxonomies and rankings of human beings by sex, race, class, geographical
origin, and temperament; and with projects of imperial conquest and governance.
In what follows, investigating this centuries-old dialectic in science will
mean uncovering the hidden action of forces that are at once intellectual and
political, scientific and social.[29]

Though the social and political implications
hinted at here are not subsequently developed in much detail in The Restless Clock, there are echoes, in
this passage and throughout the book, of a long and venerable tradition: the
Romantic critique of science and technology, from William Blake to William
Morris to Herbert Marcuse and beyond. For all its indispensable wisdom, that
tradition has always seemed to me mistaken insofar as it seeks to impugn rather
than supplement scientific rationalism. The habit of analysis may, as John
Stuart Mill ruefully observed in his Autobiography,
have "a tendency to wear away the feelings." But remedying this malaise, in our
case as in Mill's, is not so much a matter of attacking a malignancy as of minimizing
an inevitable imbalance. Undoubtedly, even the greatest scientist is not a
complete human being. But then, there are no complete human beings.

One of Riskin's
champions, the great American historian Jackson Lears, has written that the
consequences of adopting her perspective "might be political and moral as well
as intellectual. A full recognition of an animated material world could well
trigger a deeper mode of environmental reform, a more sane and equitable model
of economic growth, and even religious precepts that challenge the ethos of
possessive individualism and mastery over nature."[30]
The desirability - the stark urgency - of environmental and economic reform and
of moral and spiritual renewal should be obvious by now to even the most
unimaginative positivist. If it is not, it is because he or she is unimaginative, not because he or she is a positivist.

Epistemology has
no political or moral consequences. The source of morality is sympathy, the
habit of experiencing the joys and sufferings of others as, in some measure,
one's own. The wider and deeper our sympathies - the more people whose joys and
sufferings we can imagine, and the more keenly those joys and sufferings affect
us - the more moral we will be, whatever our beliefs about knowledge, truth,
meaning, agency, or the nature of the good. This is why great philosophers can
have banal or even (like Plato and Heidegger) repugnant political views, and
why philosophical duffers like Camus can be moral heroes. It is, I believe, what
the late Richard Rorty meant by giving his late essays titles like "The
Priority of Democracy to Philosophy" and "Take Care of Freedom, and Truth Will
Take Care of Itself." Philosophy makes nothing happen; for better or worse, it leaves
the world as it finds it - sometimes a little bit less confused, but just as
often a little more.

Against Everything

The 1950s, 60s, and 70s - les trente glorieuses, the French call them - were indeed fat years in Western Europe and the United States. Unions were strong, unemployment was low, and a lot of jobs still came with health insurance, pensions, and a fair chance of neither migrating at any moment to lower-wage countries nor suddenly being replaced by software or hardware.
Notwithstanding an ugly racist backlash to the Civil Rights Movement and an
unjust, hideously destructive war in Indochina, it was possible, for a few
brief years, to believe that the American economy and polity were sound in
their fundamentals, however much in need of reform.

And yet, radical social criticism
flourished in those decades as never before in America, not even in the Great
Depression: Mills, Marcuse, Goodman, Baldwin, Harrington, Lasch, Kozol, Norman
O. Brown, Wendell Berry, Shulamith Firestone, the Port Huron Statement, among
others. Perhaps there's something to the idea that revolutions are a response
to rising expectations: that economic success and apparent security liberate
the radical imagination, while widespread insecurity cramps it, inducing a
defensive crouch. At any rate, an awful lot of people back then professed
themselves - ourselves, I must acknowledge sheepishly - revolutionaries.

Ivan Illich was an idiosyncratic
revolutionary. Fundamentally, most radical critics object that our institutions
unfairly allocate goods and services - education, health care, housing,
transportation, consumer goods - or jobs, or respect, or, simply, money. Illich
nicely summarized the left's perennial program as "more jobs, equal pay for
equal jobs, and more pay for every job." For Illich, these demands were beside
the point. He thought that, by and large, the goods, services, jobs, and rights
on offer in every modern society were not worth a damn to begin with. In fact,
he thought they, and the way of life they constituted, were toxic. He was not a
reactionary in any useful sense of that term, but he was a fervent
anti-progressive.

Illich was born in Vienna in 1926 of
Jewish and Catholic parents. The family fled the Nazis in 1941, and after the
war, Illich studied cell biology and crystallography in Florence, theology and
philosophy in Rome, and medieval history in Germany. He was ordained a Roman
Catholic priest and in 1951 was sent to a poor Puerto Rican parish in New York
City. He was very successful, both as a parish priest and also, somewhat more
surprisingly, in charming the ultra-conservative Cardinal Spellman. In 1956 he
became vice-rector of the Catholic University of Puerto Rico. By then he was a
fairly outspoken critic of pre-Vatican II Catholic orthodoxy, and his new
superiors were not charmed. He spent 1959 wandering around Latin America, then
settled in Cuernavaca, Mexico, where he founded a freewheeling language school
and research center, the Center for Intercultural Documentation (CIDOC), which
became, like Berkeley and Greenwich Village, a seedbed of Sixties radicalism.

At some point the Vatican became
alarmed - it's rumored that the CIA had complained about him - and Illich was
summoned to Rome to explain himself. Apparently the Church authorities
satisfied themselves that this retiring polyglot cleric was not actively
subversive. But CIDOC had become a distraction, as had his own growing
celebrity. Illich had no taste for empire-building, so he phased out the Center
in 1976 and became an itinerant scholar, living from course to lecture series to
research grant, with occasional royalties as well. He wrote a dozen books (or
fifteen, depending on how strict your definition is) and died in 2002.

The first of Illich's books, Deschooling Society (1971), made him
very famous. It caught the crest of a wave of critique and experiment in
American education: Paul Goodman, John Holt, Paulo Freire, free schools,
community control. Illich shared his contemporaries' anti-authoritarianism but
not their reasons. For most educational radicals, the enemies were tradition -
the age-old authority of church and state, bosses and parents - and inequality
- the gap between resources devoted to rich and poor children. From this point
of view, the remedies were plain: practice emancipatory social relations in all
schools and lavish more resources on those serving poorer children.

To Illich's mind, those remedies
missed the point. He thought the educational system had no good reason to exist.
It was, like every modern service industry, in the business of creating and
defining the needs it purported to satisfy - in this case, certification as an
expert - while discrediting alternative, usually traditional, methods of
self-cultivation and self-care. The schools' primary mission was to produce
people able and willing to inhabit a historically new way of life, as clients
or administrators of systems whose self-perpetuation was their overriding goal.
Thus schools produce childhood, a phenomenon that is, Illich claimed, no more
than a few centuries old but is now the universal rationale for imposing an
array of requirements, educational and medical, on parents and for training people
as lifelong candidates for credentials and consumers of expertise.

It was not what schools taught that Illich objected to; it was that they taught:

To
understand what it means to deschool society, and not just to reform the
educational establishment, we must focus on the hidden curriculum of schooling.
... [It is] the ceremonial or ritual of schooling itself [that] constitutes such
a hidden curriculum. Even the best of teachers cannot entirely protect his
pupils from it. Inevitably, this hidden curriculum of schooling adds prejudice
and guilt to the discrimination which a society practices against some of its
members and compounds the privilege of others with a new title to condescend to
the majority. Just as inevitably, this hidden curriculum serves as a ritual of
initiation into a growth-oriented consumer society for rich and poor alike.

Once
young people have allowed their imaginations to be formed by curricular
instruction, they are conditioned to institutional planning of every sort. ...
Neither ideological criticism nor social action can bring about a new society.
Only disenchantment with and detachment from the central social ritual and
reform of that ritual can bring about radical change.

What school teaches, first and last,
is "the need to be taught."

In a series of subsequent books - Tools for Conviviality (1973), Energy and Equity (1974), Medical Nemesis (1975), Toward a History of Needs (1978), The Right to Useful Unemployment (1978),
and Shadow Work (1981) - Illich
formulated parallel critiques of medicine, transportation, law, psychotherapy,
the media, and other social spheres. The medical system produces patients; the
legal system produces clients; the entertainment system produces audiences; and
the transportation system produces commuters (whose average speed across a city,
he liked to point out, is less than the average speed of pedestrians or
bicyclists - or would be, if walking or bicycling those routes hadn't been made
impossible by the construction of highways). In this process, far more
important than merely teaching us behavior is the way these systems teach us
how to define our needs. "As production costs decrease in rich nations, there
is an increasing concentration of both capital and labor in the vast enterprise
of equipping man for disciplined consumption."

Why do we have to be taught to need
or disciplined to consume? Because being schooled, transported, entertained,
etc. - consuming a service dispensed by someone licensed to provide it - is a
radical novelty in the life of humankind. Until the advent of modernity only a
century or two ago (in most of the world, that is; longer in "advanced"
regions), the default settings of human nature included autonomy, mutuality,
locality, immediacy, and satiety. Rather than being compulsorily enrolled in
age-specific and otherwise highly differentiated institutions, one discovered
interests, pursued them, and found others (or not) to learn with and from. Sick
care was home- and family-based, far less rigorous and intrusive, and suffering
and death were regarded as contingencies to be endured rather than pathologies
to be stamped out. Culture and entertainment were less abundant and variegated
but more participatory. The (commercially convenient) idea that human needs and
wants could expand without limit, that self-creation was an endless project,
had not yet been discovered.

This is perhaps obvious; but can
Illich seriously doubt that the great changes since then constitute progress?
It's a question to which he cannily declined to give a direct answer, even while
he assailed the self-satisfaction of the age. He insisted that he was a
historian and diagnostician, not an advocate or a prophet. He at any rate
fleshed out the diagnosis amply and eloquently, especially in Medical Nemesis, his longest book. "The
pain, dysfunction, disability, and anguish resulting from technical medical
intervention rival the morbidity due to traffic and industrial accidents and
even war-related activities, and make the impact of medicine one of the most
rapidly spreading epidemics of our time." Partly this was malpractice, or what
he called "clinical iatrogenesis": "The Department of Health, Education, and
Welfare calculates that 7 percent of all patients suffer compensable injuries
when hospitalized. ... One out of every five patients admitted to a typical
research hospital acquires an iatrogenic disease. ... The frequency of reported
accidents in hospitals is higher than in all industries except mines and
high-rise construction." But these defects were reformable; more intractable
was "cultural iatrogenesis": the destruction of "the potential of people to
deal with their human weakness, vulnerability, and uniqueness in a personal and
autonomous way." The difficulty of giving birth or dying at home is an obvious
example.

Even more fundamental was "social
iatrogenesis," the damage that results from the institutional shape medicine
takes in modern society. "When the intensity of biomedical intervention crosses
a critical threshold, clinical iatrogenesis turns from error, accident, or
fault into an incurable perversion of medical practice. In the same way, when
professional autonomy degenerates into a radical monopoly and people are
rendered impotent to cope with their milieu, social iatrogenesis becomes the
main product of the medical organization."

The notion of "radical monopoly" plays an important role in Illich's
critique of professionalism:

A radical monopoly goes deeper than that of any one
corporation or any one government. It can take many forms. When cities are
built around vehicles, they devalue human feet; when schools preempt learning,
they devalue the autodidact; when hospitals draft all those who are in critical
condition, they impose on society a new form of dying. Ordinary monopolies
corner the market; radical monopolies disable people from doing or making things
on their own. The commercial monopoly restricts the flow of commodities; the
more insidious social monopoly paralyzes the output of nonmarketable
use-values. Radical monopolies ... impose a society-wide substitution of
commodities for use-values by reshaping the milieu and by "appropriating" those
of its general characteristics which enabled people so far to cope on their
own.

Professions
colonize our imaginations; or as Foucault (whom Illich's language sometimes
recalls - or anticipates) might have said, they reduce us to terms in a
discourse whose sovereignty we have no idea how to contest or criticize.

Unlike Foucault, who sometimes seemed to take a grim satisfaction in demonstrating
how cunningly we were imprisoned in our language and institutions, Illich was
an unashamed humanist. His ties to the barrios
and campesinos of North and South
America were deep and abiding. His
"preferential option for the poor" (the slogan of Catholic liberation theology)
was a peculiar one: he hoped to save them from economic development at the
hands of Western-trained technocrats. Illich had hard words for even the best
Western intentions toward the Third World. (It is possible that what annoyed
the CIA was Illich's advice to the Peace Corps volunteers who came to Cuernavaca
for Spanish-language instruction that they should leave Latin American peasants
alone, or perhaps even try to learn from them how to de-develop their own
societies.) Religious and ecological radicals, however generous and respectful,
still wanted to bring the poor a poisoned gift.

Development has had the same effect in all societies:
everyone has been enmeshed in a new web of dependence on commodities that flow
out of the same kind of machines: factories, clinics, television studios, think
tanks. ... Even those who worry about the loss of cultural and genetic variety,
or about the multiplication of long-impact isotopes, do not advert to the
irreversible depletion of skills, stories, and senses of form. And this
progressive substitution of industrial goods and services for useful but
nonmarketable values has been the shared goal of political factions and regimes
otherwise violently opposed to one another.

Illich might have said more about those fugitive "skills, stories, and
senses of form"; he might have tried harder to sketch in the details of a
society based on "nonmarketable values." But in Tools for Conviviality and elsewhere, he at least dropped hints. He
certainly did not idealize the primitive - he might have welcomed the term
"appropriate technology" if he had encountered it. He enthused over bicycles
and the slow trucks and vans that moved people and livestock over the back
roads of Latin America before the latter were "improved" into useless and
dangerous highways. He was a connoisseur of the hand-built structures cobbled
together from cast-off materials in the favelas
and slums of the global South. He thought phone trees and computer databases
that would match learners and teachers were a very plausible substitute for the
present educational system. He thought the Chinese "barefoot doctors" were a
promising, though fragile, experiment. He was friendly to any gadget or
technique or practice - he called them "convivial" tools - that encouraged
initiative and self-reliance rather than smothering those qualities by
requiring mass production, certified expertise, or professional supervision.

Illich proposed "a new kind of modern tool kit" - not devised by planners
but worked out through a kind of society-wide consultation that he called
"politics," undoubtedly recognizing that it bore no relation to what currently
goes by that name. The purpose of this process was to frame a conception of the
good life that would "serve as a framework for evaluating man's relation to his
tools." Essential to any feasible conception, Illich assumed, was identifying a
"natural scale" for life's main dimensions. "When an enterprise [or an
institution] grows beyond a certain point on this scale, it first frustrates
the end for which it was originally designed, and then rapidly becomes a threat
to society itself."

A livable society, Illich argued, must rest on an "ethic of austerity."
Of course he didn't mean by "austerity" the deprivation imposed by central
bankers for the sake of "financial stability" and rentier profits. Nor, though
he rejected affluence as an ideal, did he mean asceticism. He meant "limits on
the amount of instrumented [i.e., technical or institutional] power that anyone
may claim, both for his own satisfaction and in the service of others." Instead
of global mass society, he envisioned "many distinct cultures ... each modern and
each emphasizing the dispersed use of modern tools."

Under such protection against disabling affluence ...
tool ownership would lose much of its present power. If bicycles are owned here
by the commune, there by the rider, nothing is changed about the essentially
convivial nature of the bicycle as a tool. Such commodities would still be
produced in large measure by industrial methods, but they would be seen and
evaluated ... as tools that permitted people to generate use-values in
maintaining the subsistence of their respective communities.

Whether one
calls this revolution or devolution, it clearly requires, he acknowledged, "a
Copernican revolution in our perception of values." But there was nothing
quixotic or eccentric about it. On the contrary, this affirmation of limits
aligns Illich with what is perhaps the most significant strain of social
criticism in our time: the anti-modernist radicalism of Lewis Mumford,
Christopher Lasch, and Wendell Berry, among others.

Any assessment of Illich's thought requires at least a footnote about his
curious, controversial late work, Gender
(1983). Like many anti-modernists, Illich had an uneasy relationship with
feminism. He thought about sexual inequality much as he did about economic
inequality: its injustice was too obvious to need much arguing, but more money
and power for women and the poor amounted to, in effect, better seats at the
banquet table when all the food was unhealthy and unpalatable. He was, unlike
most political and sexual radicals, disenchanted with money and power
altogether.

Illich claimed that sex, like childhood, was a modern invention. When
production moved out of the household, life was sundered into two spheres: one
where the means of life were gained, and another which supported those efforts.
Marxists called these two realms the sphere of production and the sphere of
social reproduction. Illich called them wage labor and "shadow work." The
latter included all unpaid efforts that made the former possible: not only
housework, shopping, and child care but also what has come to be called
"emotional labor," and even the family's liaison with external caregivers. The
great majority of this shadow work is done by women, increasingly alongside
their own wage labor. Sex as a role, an attribute of a being abstractly
conceived as a laboring subject, evolved as a rationale for this division into homo economicus and femina domestica, which Illich condemned as heartily as any
feminist could wish.

Before sex, there had been only gender. Every pre-modern society,
according to Illich, assigned every object and every task - and sometimes each
stage of each task - either exclusively to men or exclusively to women. "From
afar, the native can tell whether women or men are at work, even if he cannot
distinguish their figures. The time of year and day, the crop, and the tools
reveal to him who they are. Whether they carry a load on their head or shoulder
will tell him their gender." The specific assignments varied from one society
to another; what never varied was that some activities and objects were only
for women or only for men.

What to do with this historical and anthropological fact - if it is a
fact - was not clear, even to Illich. But he was sure it mattered deeply, and
he tried to say why in a remarkable passage that can serve as well as any to
summarize his view of modern life. (It will help the reader to know that
"vernacular" was a term of art for Illich: it meant "untaught," with overtones
of "colloquial," "customary," "instinctual," and perhaps most usefully,
"amateur.")

The distinction between
vernacular gender and sex role is comparable to that between vernacular speech
and taught mother tongue, between subsistence and economic existence.
Therefore, the fundamental assumptions about the one and the other are
distinct. Vernacular speech, gender, and subsistence are characteristics of a
morphological closure of community life on the assumption, implicit and often
ritually expressed and mythologically represented, that a community, like a
body, cannot outgrow its size. Taught mother tongue, sex, and a life-style
based on the consumption of commodities all rest on the assumption of an open
universe in which scarcity underlies all correlations between needs and means.
Gender implies a complementarity within the world that is fundamental and
closes the world in on "us," however ambiguous and fragile this closure might
be. Sex, on the contrary, implies unlimited openness, a universe in which there
is always more.

Criticism of this breadth and depth illuminates everything. Exactly how
to turn it against everything is usually, as in this case, more than even the
critic can say.

In the Feminist
Hall of Fame, there are a few niches for men. Near the entrance, in the Mary
Wollstonecraft Room, there's a bust of William Godwin, her husband. The author
of A Vindication of the Rights of Woman was
a cast-off woman with an illegitimate child and a history of suicide attempts
when they met, while he was a renowned political philosopher. But he saw her
courage and genius. They had an ecstatic though tragically brief relationship:
she died in childbirth a few months after they married in 1797. As a tribute,
Godwin wrote an unusually candid biography of her. Pre-Victorian England wasn't
ready for freethinking or free love, at least when practiced by women, so the
book caused a huge scandal. But at least the infamy helped keep her memory
alive until her masterpiece was rediscovered.

Further on, in the
Bloomsbury-Fabian Wing, are plaques for George Bernard Shaw, who ridiculed
conventional patriarchal moralism in Mrs.
Warren's Profession, Man and Superman,
and throughout his enormous, and enormously popular, oeuvre; and for Leonard
Woolf, whose self-effacing devotion to Virginia she often acknowledged
gratefully. There was some talk recently of honoring Denis Thatcher, the Iron
Lady's faithful and supportive husband, until left-wing feminists pointed out
that Mrs. Thatcher's policies - financialization, deindustrialization,
privatization, deregulation - were not actually good for most women.

The only man to
have an entire room named after him is John Stuart Mill, to whom half of the
John Stuart Mill--Harriet Taylor Pavilion is dedicated. When Mill was 24 and
already a rising intellectual star, he met Mrs. Taylor, 23, the wife of a
Unitarian businessman and mother of two children. John had recently come
through the depression he famously described in his Autobiography, caused, he was convinced, by having starved his
feelings. Harriet was brilliant, beautiful, and fearless. Both were smitten,
instantly and forever. Except when one or the other was convalescing (they were
both tubercular), they rarely went a day without seeing or writing each other
until she died twenty-eight years later, in 1858. (For the first nineteen of
those years, they met openly at her house, thanks to her remarkably enlightened
husband, John Taylor, of whom there is a small commemorative medallion on
display in the Harriet Taylor Room.)

Mill insisted that
everything he wrote after meeting Taylor was a joint production - she had the
flashes of inspiration that he laboriously worked out. Some subsequent critics
have doubted that this was true of his Logic
and other philosophical writings. But it was surely true of The Subjection of Women, his powerful
and influential critique of sexual inequality. Mill was already an advanced
feminist when they met (which was, he later wrote, the only reason she gave him
the time of day). But she enlarged his vision and kindled his indignation. The
latter is perhaps the most striking feature of Mill's treatise.
Wollstonecraft's Vindication had a
conciliatory, occasionally even pleading, tone. But The Subjection of Women gave no quarter, rhetorically.

Mill wrote the
book in the late 1850s and early 1860s, when English radicals strongly
sympathized with American abolitionists. Mill was an early supporter, referring
bitingly in 1848 to the United States as "a country where institutions profess
to be founded on equality, and which yet maintains the slavery of black men and
of all women." Time and again in Subjection Mill presses home the
resemblance of 19th-century marriage to slavery. How did marriage
come about?

[F]rom the very earliest twilight of
human society, every woman ... was found in a state of bondage to some man. Laws
and systems of polity always begin by recognizing the relations they find
already existing between individuals. They convert what was a mere physical
fact into a legal right, give it the sanction of society, and principally aim
at the substitution of public and organized means of asserting and protecting
these rights, instead of the irregular and lawless conflict of physical
strength. Those who had already been compelled to obedience became in this
manner legally bound to it. Slavery, from being a mere affair of force between
the master and the slave, became regularized and a matter of compact among the
masters, who, binding themselves to one another for common protection,
guaranteed by their collective strength the private possessions of each,
including his slaves.

And wives.

Defenders of
marriage invariably appeal to the immemorial order of things. To be a wife and
mother, they argue, is a woman's "natural vocation" - even though, Mill points
out, they actually seem to believe the opposite: that given a choice, few women
would choose their "natural vocation."

If this is the real opinion of men in
general, it would be well that it should be spoken out. I should like to hear
somebody openly enunciating the doctrine (it is already implied in much that is
written on the subject) - "It is necessary to society that women should marry
and produce children. They will not do so unless they are compelled. Therefore
it is necessary to compel them." The merits of the case would then be clearly
defined. It would be exactly that of the slaveholders of South Carolina and
Louisiana. "It is necessary that cotton and sugar should be grown. White men
cannot produce them. Negroes will not, for any wages which we choose to give. Ergo they must be compelled."

Mill was
especially scornful of his contemporaries' pronouncements about women's essential
"nature," which always seemed to justify their subordination. He was not an
anti-essentialist, simply an agnostic. One of the earliest philosophers of social
science, he kept pointing out that circumstances shape character, and since
women's faculties had never been allowed their full development, nothing
plausible could be said yet about their scope and limits. Again, he drew an
analogy with slavery.

I deny that any one knows, or can know,
the nature of the two sexes, as long as they have only been seen in their
present relation to one another. If men had ever been found in society without
women, or women without men, or if there had been a society of men and women in
which the women were not under the control of the men, something might be
positively known about the mental and moral differences which may be inherent
in the nature of each. What is now called the nature of women is an eminently
artificial thing - the result of forced repression in some directions,
unnatural stimulation in others. ... [N]o class of dependents have had their
character so entirely distorted from its natural proportions by their relation
with their masters; for, if conquered and slave races have been, in some
respects, more forcibly repressed, whatever in them has not been crushed down
by an iron heel has generally been let alone, and if left with any liberty of
development, it has developed itself according to its own laws; but in the case
of women, a hot-house and stove cultivation has always been carried on of some
of the capabilities of their nature, for the benefit and pleasure of their
masters.

With respect to
sexual inequality, Mill and Taylor were abolitionists. And like the
anti-slavery abolitionists, they are sometimes classified by their latter-day
admirers, with perhaps a hint of condescension, as "liberals," presumably
meaning that they emphasized individual rights and abstract principles rather
than collective liberation and material conditions. This isn't altogether true,
especially of Taylor. She persuaded Mill to include a chapter on the future of
the working class in his Principles of Political
Economy, which predicted and advocated (without benefit of Marx, and only a
few years after the Communist Manifesto)
"the association of the labourers themselves on terms of equality, collectively
owning the capital with which they carry on their operations, and working under
managers elected and removable by themselves." And while Mill thought that, for
practical reasons (i.e., to avoid an oversupply of labor, which would lower
wages), most women would not enter the labor force even when legally
emancipated, Taylor, in her pamphlet The
Enfranchisement of Women (1851), disagreed, refusing to accept "that the
division of mankind into capitalists and hired labourers, and the regulation of
the reward of the labourers mainly by demand and supply, will be for ever, or
even much longer, the rule of the world." Mill and Taylor were socialist feminists.

**************

There is, however,
something ambiguous in Mill and Taylor's legacy. They were not, it appears,
sex-positive. It was impossible, of course, in mid-19th-century
England to write without euphemisms about sex. But even granting that
restriction, they could hardly have sounded less enthusiastic about it. They
had an unfortunate habit of referring to sex as an "animal function" and of
deploring the sway of "sensuality" (a distinctly disapproving word at that
time) over the average run of humankind.

While Mill was
writing his autobiography, Taylor hoped that his account of their relationship
would provide "an edifying picture for those poor wretches who cannot conceive
friendship but in sex." In an early essay, "On Marriage and Divorce," Mill
asked: "Will the morality which suits the highest natures" - a morality of
"companionship" - "be also best for inferior natures?" It would, he thought, if
the latter would allow themselves to be "guided" by the "higher natures." But
alas, "the greater number of men ... are attracted to women solely by
sensuality." For that reason,

the law of marriage as it now exists
has been made by sensualists, and for sensualists, and to bind sensualists. The aim and purpose of
that law is either to tie up the sense, in the hope by so doing, of tying up
the soul also, or else to tie up the sense because the soul is not cared about
at all. Such purposes never could have entered into the minds of any to whom
nature had given souls capable of the higher degrees of happiness ...

Mary
Wollstonecraft too had misgivings about "the depravity of the appetite which
brings the sexes together" and exhorted men and women to seek something higher
in marriage:

Friendship is a serious affection; the
most sublime of all affections, because it is founded on principle, and
cemented by time. The very reverse may be said of love. In a great degree, love
and friendship cannot subsist in the same bosom; even when inspired by
different objects they weaken or destroy each other, and for the same object
can only be felt in succession. The vain fears and fond jealousies, the winds
which fan the flame of love ... are both incompatible with the tender confidence
and sincere respect of friendship.

There's a large question lurking in Mill
and Taylor's (and Wollstonecraft's) portraits of the higher friendship between
men and women. Namely, can the "higher" and "lower" flourish equally in an
intimate relationship? Is there a strain, a tradeoff, a slight disconnect
perhaps, between lust and respect, between spontaneity, intensity, even frenzy,
on one hand, and delicacy, tact, responsiveness on the other? Is male (and
increasingly female, according to reports from the hookup culture) libido
incorrigibly objectifying?

It is not only sexist reprobates like
Henry Miller and Norman Mailer who imply as much. Some feminist theorists
harbor the same suspicion. Even while disagreeing with it, the strongly
sex-positive Ellen Willis acknowledged the plausibility of the Freudian/conservative
view that "the sexual drive itself ... is inherently anti-social, separated from
love, and connected with aggressive, destructive impulses." In her extremely
influential essay, "Toward a Feminist Jurisprudence," Willis's sometime
antagonist Catharine MacKinnon came very close to defining heterosexuality as
violence. Sexuality is "a social sphere of male power of which forced sex is
paradigmatic." Sex and violence may be conceptually distinct, but "the problem
remains telling the difference. ... for women it is difficult to distinguish them
under conditions of male dominance." In another essay she argued that "the male
sexual role ... centers on aggressive intrusion on those with less power. Such
acts of dominance are experienced [by men] as ... sex itself." It is hard to decide
whether Harriet Taylor was more fully reincarnated as Ellen Willis or as Catharine
MacKinnon - clearly her impassioned spirit shines out in both. But it does seem
that she and Mill believed - or feared - that to express their affection
physically might endanger their "higher degree of happiness."

****************

In a tragedy, according to my dictionary,
"a noble protagonist is brought to ruin as a consequence of an extreme quality
that is both his [sic] greatness and
his downfall." If we take the withering away or permanent sublimating of sexual
passion as a loss (as sex-positive feminists certainly would), and the heroic
rationality and restraint demanded (according to Mill, Taylor, and Wollstonecraft)
by the "higher friendship" as one possible cause of it, would that qualify as a
tragedy?

You can ignore
that question, actually. It's very likely moot. Technology doesn't tolerate
tragedy well, and it certainly has no use for heroism. Involuntary pregnancy
and differences in upper body strength once seemed like essential features of
human life and insuperable obstacles to sexual equality. The Pill vanquished
the former; automated production, the information revolution, and Title IX the latter.
Adjusting to the results is apparently so difficult that what the journalist
Hanna Rosin calls, in a bestselling book, the "end of men" now seems to be on
the horizon. Fortunately, capitalism is inexhaustibly innovative. Without
popular, democratic control of technology, advances in genetics and cybernetics
will probably abolish sex. Both technologies, as scientists like Ray Kurzweil,
Marvin Minsky, and Lee Silver have assured us, are well on their way toward
radical innovations in the design of a new apex species for Earth. Does anyone
imagine it will incorporate an archaic, hopelessly flawed design feature like
sex?

Mind and Cosmos: Why
the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False
by Thomas Nagel. Oxford University Press, 130 pages, $24.95.

In the popular
movie Ex Machina (2015), a brilliant
but sinister software entrepreneur, working at his remote estate, creates a
line of glamorous artificial intelligences. He then invites one of his
company's most promising young programmers to engage his latest creation in a
kind of emotional Turing test. Can she convince him that she has a mind and an
inner life - a soul?

The experiment
ends badly. The entrepreneur would have done better to invite Thomas Nagel. For
more than forty years, since the publication of "What Is It Like to Be a Bat?",
perhaps the most frequently cited paper in recent philosophical history, Nagel
has been one of America's best-known philosophers of mind. He's photogenic too.

And he's
dauntless. "What Is It Like to Be a Bat?" may also be the most frequently rebutted
paper in recent philosophical history. Yet Nagel has not retreated. On the
contrary; he has now, brandishing this short book, ridden out to slay a mighty
ogre: "materialist anti-Darwinism." Whatever one's view of the merits of his
argument, one cannot help admiring this philosophical chevalier sans peur.

The lords of
neo-Darwinism have ridden out to meet the doughty challenger, and one after the
other, so far as I can judge, have laid him in the dust. They have not regarded
his evident and laudable humility ("This is just the opinion of a layman who
reads widely in the literature that explains contemporary science to the
nonspecialist" and "I am not confident that this ... idea of teleology without
intention makes sense, but I do not at the moment see why it doesn't"), but
instead have waxed wroth over this modest but forthright avowal of his quest:
"I would like to defend the untutored reaction of incredulity to the
reductionist neo-Darwinian account of the origin and evolution of life. It is
prima facie highly implausible that life as we know it is the result of a
sequence of physical accidents, together with the mechanism of natural
selection ... it flies in the face of common sense."

I am all in favor
of skeptical outsiders twisting the tails of intellectual eminences, or at
least trying to. (For this reason I am even, I confess, an admirer of Psychoanalysis and the Unconscious and Fantasia of the Unconscious, D. H.
Lawrence's supremely eccentric philosophical/mystical broadsides.) There is not,
however, much concrete engagement with Darwinism, neo- or paleo-, in Mind and Cosmos. Nagel mostly restricts
himself to general statements about its scope and limits, such as:

[F]or a long time I have found the
materialist account of how we and our fellow organisms came to exist hard to
believe, including the standard version of how the evolutionary process works.
The more details we learn about the chemical basis of life and the intricacy of
the genetic code, the more unbelievable the standard historical account becomes.

And:

[C]ontemporary research in molecular
biology leaves open the possibility of legitimate doubts about a fully
mechanistic account of the origin and evolution of life ...

The reasoning that
led Nagel to these negative judgments is not presented in any detail; he is, as
noted, confessedly a layman in science. What is argued at length is an a priori
conclusion: if the appearance of consciousness and morality can be accounted
for entirely on evolutionary grounds, then the philosophical views Nagel has
defended over the last five decades cannot be true.

The standard names
for that cluster of positions are "realism," "objectivism," sometimes
"anti-reductionism." The strategic leitmotif is "not merely." Truth is not
merely ultimate consensus or warranted belief. The good is not merely pleasure;
the bad is not merely pain. Consciousness is not merely computation. The mind
is not merely the brain. Value does not consist merely in the fact of being
valued. Morality is not merely a product of natural selection. On the contrary:
mind, value, truth, good, bad, etc. all have real, objective,
observer-independent existence.

The ontological
inventory of the world - viz., what there is - is a chronic subject of
philosophical controversy, its history too familiar to need recounting. Indeed,
a historical perspective on this and other perennial philosophical problems
risks being disabling - can anyone now hope to contribute anything new or
definitive to these exhaustively canvassed debates? Perhaps not; but the temptation
to scratch certain philosophical itches is overwhelming.

What, then, is
real, objective, observer-independent existence? Who, if anyone, denies it to
truth, value, etc., and what do they mean by it? What would decide the
question?

Would, for example,
asserting that "bodies (including brains) are all there is" commit the speaker
to denying that she herself, or anyone else, had a mind? No; it is obviously
possible to believe in the existence of minds - or at least, to use the word
"mind" in a fashion indistinguishable from the way it is used by those who
assert its non-identity with the body - while maintaining that only bodies
exist. Likewise, few down-the-line Darwinists consider themselves disqualified
from invoking morality and virtue, no matter how insistently informed by "value
realists" that they have no right to. Clearly there is, notwithstanding the
refinements of centuries of argument, still some verbal slippage somewhere,
some failure to employ the crucial words in precisely the same sense.

For example, is
"real" unambiguous? Or is there something faintly question-begging in Nagel's
"doubts about whether the reality of such features of our world as
consciousness, intentionality, meaning, purpose, thought, and value can be
accommodated in a universe consisting at the most basic level only of physical
facts"? No one on any side of the debate has any doubt about the reality of
those features of the world, or even about the everyday usage of the terms. The
question is how best to characterize them. Perhaps nonphysical facts are
necessary for that task, but what are nonphysical facts? The only nonphysical
facts Nagel refers to are, precisely, "consciousness, intentionality, meaning,"
etc.

These facts -
"mind, meaning, and value" - are "as fundamental as matter and space-time in an
account of what there is." Nagel's repeated insistence that "something more" than
physics and biology is needed to explain how there can be conscious beings -
that nonphysical facts make all the explanatory difference - prompts a
skeptical question. Can one not imagine two different worlds, in one of which
belief in the objective reality of mind, meaning, and value prevails, and
another whose inhabitants use the words in non-philosophical contexts for the
same purposes but profess not to believe in their objective reality? Could not
those two worlds have identical beliefs about cognitive
psychology, neurophysiology, evolutionary biology, even ethics? How would we
distinguish them?

The same sort of stubbornly
pragmatic question suggests itself when, in the course of explaining why
everyone really does believe in the objectivity of truth, whether consciously
or not, Nagel points out that, obviously, some things are true even if no one
will ever know them to be true. It does
seem obvious, until one begins to quibble.

Non-Realist: What things?

Realist: We can't know. That's the
point.

N-R: So how do we know that something
unknowable is true?

R: Well, let's see. Someday all intelligent life
in the universe will die out. Agreed?

N-R: For the sake of argument, yes.

R: And whatever happens after that, no
one will ever know, right?

N-R: Right.

R: QED!

N-R: What do you mean? What's true that
no one will ever know?

R: Why, everything that happens after
consciousness disappears.

N-R: But events aren't true. Only
propositions are true.

R: Well, how about this: "The earth
will fall into the sun in 5 billion years"?

N-R: Yes, that's true. But it's not
something no one will ever know. We know it.

R: But lots of other things that we'll
never know about will happen in 5 billion years. Sentences about them will be
true.

N-R: What things?

The non-realist's
quibble is that you can't frame a true sentence about something of which you
have no knowledge. (Framing a sentence that's either true or false doesn't
count, because every sentence is.) But the more important objection is: what's
at stake? How is a world in which the belief prevails that there is a sense T_o
of "truth" distinct from the term's everyday usage T_e any different
from a world in which T_o is unknown?

The objectivity of
value and the good also makes all the explanatory difference, according to
Nagel, in this case to reasoning about morality. It is not merely to an agent's
interests or feelings or aesthetic sensibility that an appeal for unselfish
benevolence must be directed but to "the actual structure and weight of values
in the case at hand."

A judgment that one should not impose
serious harm on someone else for the sake of slight benefit to oneself, for
example, is based on the recognition that the reason against imposing the harm
is much stronger than the reason for pursuing the benefit, and that the fact
that the harm would be suffered by someone else is not a reason to disregard
it.

Does not "actual,"
again, slightly beg the question? Are all genuine values intelligible to all
reasoning beings? Is it inconceivable that some reasoning beings do not feel
sympathy, are not equipped with mirror neurons? Even Nazi ideologues were
reasoning beings; can one imagine a demonstration that would convince one of
them that the "actual structure and weight of the values" involved in not
"harming" his victims exceeded the paramount values of racial purity and
supremacy? And what of the Daleks, the Cylons, or the Borg? Consider this
exchange (Star Date, 2479):

The Borg: We are Borg. You will be
assimilated.

Capt. Jean-Luc Picard: No, please ...

Borg: We are a higher life form.

Picard: I don't doubt it. But I'd have
to relinquish all the memories and attachments that constitute me ... to die.

Borg: We are immortal.

Picard: I'm glad for you. But that's
not what we humans value the most.

Borg: With your limited minds, you are
incapable of reasoning about value.

Picard: Well, yes, I can see why you
would think so. But can't you see my point of view?

Borg: What is "seeing your point of
view"?

Picard: Um ... it's imagining the other
person's way of looking at things.

Borg: What is "imagining"?

Picard: ...

Borg: Prepare to be assimilated.

Suddenly the Enterprise's transporter beam locates Picard's coordinates and
beams him back. He remains human - no thanks to moral philosophy. And yet, if
maintaining the integrity of Picard's species-being were an
observer-independent good, logically compelling respect for Picard's
fundamental life choices, shouldn't that have been demonstrable to a
super-intelligence like the Borg?

*******

Mind and Cosmos imports into the largest
philosophical framework the defining preoccupation of Nagel's career: the
mind-body problem. The key to this problem is announced in the first sentence
of "What Is It Like to Be a Bat?": "Consciousness is what makes the mind-body problem
really intractable." The enigma of
consciousness is also what makes materialist neo-Darwinism unsatisfactory.
"What has to be explained is ... the coming into existence of subjective
individual points of view - a type of existence logically distinct from
anything describable by the physical sciences alone." Bats have a point of
view, Nagel argued; anything does of which we can ask "how it is for the
subject himself."

Does a rock, then,
have a point of view? It does seem to have experiences: it shudders when you
smash it, glows when you heat it, skips nimbly along the surface of the water.
Is it like anything to be a rock? Very dull, surely. A rather heavy existence.
But steady and predictable; long-lived, except in strip-mining or fracking
areas. We may assume that by and large rocks, like bats and most other
creatures, would not choose a different species-being even if they could.

Among the many
responses to "What Is It Like to Be a Bat?" less frivolous than this one, Owen
Flanagan's is particularly cogent. Flanagan points out the ambiguity of "know"
in such claims as "we cannot know what it is like to be a bat." We can, after
all, with present and foreseeable technology, know practically everything about what it is like to be a bat: that
is, what it is like to echo-locate, to hang upside down, even to eat large
quantities of insects. We can fully describe the neural mechanisms of a bat's
every perception and motion. We can predict any bat's actions with, for
practical purposes, perfect accuracy. All of which we are or will be, ex hypothesi, able to do for fellow
humans as well. We can know more about the consciousness and experiences of
both bats and humans than the subjects themselves. All we cannot do is have (or
"capture" or "grasp") those experiences. We cannot, that is, be those subjects.

Subjectivity,
inner experience, the first person point of view: these are not mysteries; they
are not explananda at all. Simply, persons
are "uniquely causally well connected to their own experiences." Moreover:

there is no deep mystery as to why this
special causal relation obtains. The organic integrity of individuals and the
structure and function of individual nervous systems grounds each individual's special
relation to how things seem to him. John Dewey put it best: "Given that
consciousness exists at all, there is no mystery in its being connected with
what it is connected with."

The failure of
scientific analysis to enable the analyst to reproduce the first-person point
of view is, for Nagel, the proof that a naturalistic (materialist, physicalist)
account cannot possibly serve as a theory of everything. But this, Flanagan
counters, is to misunderstand the ambitions of naturalism. Naturalism does not
aspire to say everything that can be said (to "exhaust the analysis," in Nagel's
terms) of mental phenomena; that would indeed require experiencing them. Instead
it merely seeks the best possible characterization of them: "a rich
phenomenology, a theory of how the phenomenology connects up with behavior, a
theory about how conscious mental events taxonomized into many different
classes of awareness figure in the overall economy of mental life, a theory of
how mental life evolved, and thereby a theory of which features of mind are the
result of direct selection and which features are free riders, and finally ... a theory
of how all the different kinds of mental events, conscious and unconscious, are
realized in the nervous system."

Nagel is right to
reject reductionism - subjective experience is
uncapturable. But this does not refute naturalism, which is simply the theory
that every mental event, though experienced by a person, is realized in the
brain.

**************

In the background
of arguments about the objectivity of truth and value, there often lurks the
specter of nihilism. "If there is no God," Dostoevsky wrote, "everything is
permitted." Nowadays William Bennett, Roger Scruton, and many others issue
equivalent warnings about relativism. Leszek Kolakowski, perhaps the most distinguished
recent antagonist of modernity, admonished us that Dostoevsky's (more
precisely, Ivan Karamazov's) apothegm is "valid not only as a moral rule but as
an epistemological principle"; that the absence of philosophical foundations must
ultimately entail "infinite regress" and "cognitive nihilism." If truth
and value are not observer-independent, if reason cannot resolve fundamental
conflicts among values, cannot define a universal human identity or specify a
universally valid set of rights, then surely force and deceit must become the
final arbiters of collective morality and historical truth?

Mind and Cosmos mostly avoids apocalypticism
of this sort, though there are occasional hints. "The evolutionary story," we
are cautioned, "leaves the authority of reason in a much weaker position. This
is even more clearly true of our moral and other normative capacities. ...
Evolutionary naturalism implies that we shouldn't take any of our convictions
seriously." In other writings, particularly when criticizing the late arch-non-realist
Richard Rorty, he has assumed a more alarmed, even censorious, tone.

There is no
resolving the immemorial philosophical antagonism between realism and
skepticism; not on this occasion, at any rate. But for sportsmanship's sake,
and because I admire Rorty no less than I do Nagel, let me quote two passages
from Contingency, Irony, and Solidarity,
which together state the case against which Nagel has, throughout his career,
resolutely set his face.

The first is
epistemological:

We need to make a
distinction between the claim that the world is out there and the claim that
the truth is out there. To say that the world is out there, that it is not our
creation, is to say, with common sense, that most things in space and time are
the effects of causes that do not include human mental states. To say that
truth is not out there is simply to say that where there are no sentences there
is no truth, that sentences are elements of human languages, and that human
languages are human creations.

Truth cannot be out
there - cannot exist independently of the human mind - because sentences cannot
so exist, or be out there. The world is out there, but descriptions of the
world are not. Only descriptions of the world can be true or false. The world
on its own - unaided by these describing activities of human beings - cannot.

The suggestion that
truth, as well as the world, is out there is a legacy of an age in which the
world was seen as the creation of a being who had a language of his own. If we
cease to attempt to make sense of the idea of such a nonhuman language, we
shall not be tempted to confuse the platitude that the world may cause us to be
justified in believing a sentence true with the claim that the world splits
itself up, on its own initiative, into sentence-shaped chunks.

The second is
political, and takes off from the celebrated passage in George Orwell's 1984 in which Winston Smith, confused by
O'Brien's anti-realist dialectics, protests to his diary that "Truisms are
true, hold on to that! ... Freedom is the freedom to say that two plus two make
four. If that is granted, all else follows." The reader of Mind and Cosmos can practically hear Nagel enthusiastically exclaiming
"Exactly!" Rorty, however, demurs:

Emphasizing these
passages (and others like them) has led many commentators to conclude that
Orwell teaches us to set our faces against all those sneaky intellectuals who
try to tell us that truth is not "out there," that what counts as a possible
truth is a function of the vocabulary you use, and what counts as a truth is a
function of the rest of your beliefs. Orwell has, in short, been read as a
realist philosopher, a defender of common sense against its cultured, ironist
despisers.

On this reading, the
crucial opposition in Orwell's thought is the standard metaphysical one between
contrived appearance and naked reality. The latter is obscured by bad,
untransparent prose and by bad, unnecessarily sophisticated theory. Once the
dirt is rubbed off the windowpane, the truth about any moral or political
situation will be clear. Only those who have allowed their own personality (and
in particular their resentment, sadism, and hunger for power) to cloud their
vision will fail to grasp the plain moral facts. One such plain moral fact is
that it is better to be kind than to torture. Only such people will try to
evade plain epistemological and metaphysical facts through sneaky philosophical
maneuvers (e.g., a coherence theory of truth and a holistic philosophy of
language - the devices I employed in Chapter 1 [i.e., the passage quoted above
- G.S.]). Among such facts are that truth is "independent" of human minds and
languages, and that gravitation is not "relative" to any human mode of thought.

For reasons already
given, I do not think there are any plain moral facts out there in the world,
nor any truths independent of language, nor any neutral ground on which to
stand and argue that either torture or kindness are preferable to the other. ...
The kind of thing Orwell did [in 1984]
- sensitizing an audience to cases of cruelty and humiliation which they had
not noticed - is not usefully thought of as a matter of stripping away
appearance and revealing reality. It is better thought of as a redescription ...

Two wise and
humane philosophers, equally impassioned in defense of civilized values, yet
firmly opposed to each other's arguments. It is a spectacle of some
intellectual pathos.

******************

Though an
exceptionally short book, Mind and Cosmos
is nevertheless, in one respect, extraordinarily ambitious. Nagel proposes not
merely a new explanation for the origin of life and consciousness, but a new type of explanation: "natural
teleology." If psychophysical reduction is implausible, as Nagel has always
insisted, then no materialist neo-Darwinian explanation will ever be
satisfactory. The apparent alternative, a theistic-intentional account (i.e.,
intelligent design by a divinity), does not appeal to Nagel. He simply lacks,
he explains, any sense of the divine. His interest is in the territory between
the two: a secular account that allows for the emergence of mind as mind.

On a teleological
account, "in addition to the laws governing the behavior of the elements in
every circumstance, there are also principles of self-organization or of the
development of complexity over time that are not explained by those elemental
laws." That is to say, the "preferred transitions" - to mind and value - "have
a higher probability in virtue of ... temporally extended developments of which
they form a potential part." Complexity and self-organization are "an
irreducible part of the natural order." Or, in one of only two vivid or
figurative phrases in Mind and Cosmos,
evolution is "biased towards the marvelous."

This is, Nagel
engagingly admits, "obscure," and as noted earlier, he is "not confident ... that
it makes sense." But it is undeniably imaginative, and far more sweeping than
Aristotle's rather ad hoc version of final causation. It yields a picture of
(the book's other striking phrase) "the universe gradually waking up and
becoming aware of itself." If this turns out to be true, Thomas Nagel will
someday be considered the Erasmus Darwin of natural-teleological evolutionary
theory.

On
the world's last day, according to the authoritative Baltimore Catechism[1],
"the bodies of all men [sic] will rise from the earth and be united again to
their souls, nevermore to be separated." This general resurrection will be
followed immediately by a Last, or General, Judgment, "in order that the
justice, wisdom, and mercy of God may be glorified in the presence of all."
Everyone who has ever lived will be judged, and "every deliberate thought,
word, deed, and omission of every person's entire life will be manifested at
the general judgment."

This could take a
long time, so I imagine that, during the intermissions of this lengthy plenary
session, the Almighty will organize smaller panel discussions, both for
variety's sake and for the edification of the saved and the damned. Topics might
include: "The Popes: More Trouble Than They Were Worth?", featuring Garry Wills
and John Paul II; "Women in Islam," with Ayaan Hirsi Ali and Ayatollah Khomeini;
and "Tolerance: Truth or Snare?," featuring Torquemada and Voltaire. All panel
discussions will be moderated by God - simultaneously if necessary, since He is
not subject to limitations of time or space.

Surely there will
also be a panel on "Darwinism vs. Intelligent Design." On the side of the
angels might be Phillip Johnson, Michael Behe, and David Berlinski. Arguing for
Darwin would be Richard Dawkins, Daniel Dennett, and Jerry Coyne. I foresee a
full house.

Calling the
proceedings to order, God will issue the usual divine injunctions against
name-calling, question-begging, and selective quotation. He might also introduce
a personal note, along these lines:

"Ladies and
gentlemen, cherubim and seraphim, powers and principalities, welcome. Around
the middle of the twentieth century, I was greeting some recent arrivals from
Earth, including one rather cheeky person named Bertrand Russell. He had been a
staunch and regrettably influential non-believer in My existence during his
lifetime, and when I ventured to reproach him for this, he retorted
indignantly: 'Not enough evidence, God!' It was a chastening experience. I
rather liked this Russell and was loath to consign him to the outer darkness.
Moreover, I could see his point; I had heard a similar objection from earlier
unbelievers. It was at that moment that I made a momentous change in Heaven's
admission policy. Henceforth, I resolved, everyone who argued in good faith
would be welcome in Heaven, regardless of what conclusion she arrived at concerning
My existence. Hell would be reserved for those who, as Jesus put it so
eloquently in the Gospel of Matthew (chapter 25, verses 31-46), hacked away at
the social safety net and imposed draconian fiscal austerity while the people perished.
So it is my pleasure to announce that all the participants in tonight's panel
are saved, and they are hereby encouraged to continue their extremely
interesting discussion in Heaven through all eternity."

II.

God may forgive -
that is, after all, as Heine observed, his job[2]
-- but Jerry Coyne will be harder to appease. An evolutionary biologist at the
University of Chicago and the author of a bestselling popular book, Why Evolution Is True,[3] Coyne was appalled by recent religiously based efforts, harking back to the
famous 1925 Scopes trial, to modify or "balance" science teaching in American
public schools. As other examples of faith-based public policy came to his
attention, his indignation deepened. With Faith
Versus Fact, he has launched a frontal assault on theistic religion,
accompanied by a ringing vindication of the spirit and method of the natural
sciences. Coyne is a little late to the fray, and he lacks Dawkins' gift for
storytelling, Dennett's philosophical exuberance, Christopher Hitchens' panache,
and Sam Harris's instinct for the jugular, so he will probably not be acclaimed
as the Fifth Horseman of the New Atheism. But Faith Versus Fact is a solid, earnest, persuasive book.

The most important
(though of course not the only) subject of contention between religion and
science at present is Darwinism, the theory of evolution by natural selection.
The main elements of Darwinism are simply stated. Life on earth began around
3.5 billion years ago with one species, which branched into various species as
a result of mutations, small random alterations in the DNA code that transmits
genetic information. Some of these mutations caused changes that were
unfavorable to the altered organism's reproductive success; others caused
changes that were favorable. The former mutations disappeared; the latter
persisted.

According to the
Book of Genesis, the earth and all plant and animal species were created
simultaneously several thousand years ago by a supernatural being. Forty-two
percent of Americans believe this "young earth" theory.[4]
Others, mostly but not exclusively Christians, accept some aspects of
evolutionary theory but not others: they accept, for example, the greater age
of the earth, the genetic basis of inheritance, the occurrence of mutations and
their differential survival, but do not accept common ancestry, the mutability
of species, or the randomness of mutations. Something distinctive about human
beings - consciousness, complexity, free will - can only be explained, they
insist, by purposeful direction from an external source. The usual name for
this hypothesis is "Intelligent Design."

Coyne has replied
to most criticisms of Darwinian theory in Why
Evolution Is True, but at least a few of the most frequent ones should be
touched on here, if only to help account for his combativeness in the present
book. One objection is purely logical: Darwinism is allegedly a tautology.
According to Phillip Johnson, "natural selection" means nothing more than that
"the organisms that leave the most offspring are the ones that leave the most
offspring."[5]
Johnson, a law professor, suggests that most biologists recognize this
circularity and that only outsiders, or rare mavericks within the profession,
are willing to acknowledge that Darwinism is, for this reason, not a scientific
theory at all.

"Tautology,"
according to the American Heritage
Dictionary,[6]
has two meanings: 1) "needless repetition of the same sense in different words;
redundancy"; and 2) "a statement composed of simpler statements in a fashion
that makes it true whether the simpler statements are true or false; for
example, 'Either it will rain tomorrow or it will not rain tomorrow.'"
Tautologies therefore convey no information. Does the formulation Johnson
ridicules convey any information? In virtually every context in which I can
imagine a biologist making the statement, it will mean: "Does any type of
inherited modification - of beauty, complexity, speed, strength, size -
invariably enhance reproductive success? No, that always depends on the
environment into which the new trait is introduced." Whether true or false, that
seems highly informative. Perhaps Johnson was misled by the form of the
statement. "Men are what they are" may in some contexts be a tautology,
conveying no information. "Men are what they are," muttered by a woman to a
distraught girlfriend, probably means (depending on the offense she's being
consoled for): "They're all incorrigible liars/lechers/brutes/slobs." Again,
highly informative in context.

Another criticism
is evidential, concerning what is widely thought to be the most compelling
confirmation of evolutionary theory: the fossil record. The philosopher David
Berlinski, writing on Phillip Johnson's website, asserts that "every
paleontologist writing since Darwin published his masterpiece in 1859 has known
that the fossil record does not support evolution"[7]
- a claim he and others have made elsewhere as well. But ten years before
Berlinski's astonishing declaration, the American Geological Institute and the
Paleontological Society issued a joint report, "Evolution and the Fossil
Record" (2001). From its "Summary": "The theory of evolution is the foundation
of modern paleontology and biology. ... Evolution is as well-supported by
evidence as the theory of gravity or the heliocentric theory of our solar
system. The data supporting evolution are vast."[8]
It is conceivable, of course, that an international organization of
paleontologists could have been mistaken about the significance of the fossil
record. But it could hardly have been mistaken about the beliefs of "every
paleontologist writing since Darwin."

Arguably the chief
pillar of Intelligent Design theory is the concept of "irreducible complexity."
Popularized by biochemist Michael Behe (with a hat tip to the Reverend William
Paley), this argument asserts that some processes (e.g., cellular metabolism) and organs (e.g.,
the eye) cannot have been produced by natural selection, since they could not
have evolved gradually. Every component of an irreducibly complex process or
organ must be in place before it can function at all; in order to be adaptive,
all the required mutations (sometimes thousands of them) would have to occur
simultaneously, which is impossible. Hence humans and other complex life forms
cannot have come about by accident; they (we) must have been designed.

"There is no
publication in the scientific literature," Behe writes, "that describes how
molecular evolution of any real, complex biochemical system either did occur or
even might have occurred."[9]
This is not quite so reckless a claim as Berlinski's about paleontology; but it
was rash even in 1996, when Behe's Darwin's
Black Box was published. In response, the cell biologist Kenneth Miller
devoted a chapter of Finding Darwin's God
(1999) to reviewing recent publications in the scientific literature describing
how complex biochemical and other biological systems "either did occur or might
have occurred" through natural selection.[10]
Dramatic proclamations of impossibility like Behe's do compel one's attention, but
since they are disproved by a single counter-example, they are generally
imprudent.

According to
Coyne, "the most effective weapon in the arsenal" of Intelligent Design
proponents is the argument - accepted in one form or another by 69 percent of
Americans - from "fine tuning."[11]
The numerical values of several physical constants - e.g., the strengths of the
strong and weak nuclear forces, of electromagnetism, and of gravity, the masses of the
fundamental particles - cannot have varied by more than an infinitesimal amount
or life could never have developed. Either those values came about by chance,
against apparently colossal odds, or they were selected by a Designer (or
Engineer).

As Coyne points
out, there is a bit of hand-waving implicit in "apparently colossal odds." In
fact, "we don't know how improbable the values of the constants really are"[12];
nor are we sure in all cases that the range of values compatible with life is
so extremely narrow. It may even be that the "infinitely many universes"
hypothesis proposed by some cosmologists will pan out, in which case the
extremely improbable will turn out to be wholly inevitable. (Perhaps the
infinitely many universes are not simultaneous but successive - an infinite sequence
of big bangs and big crunches - which would dispose of the argument from First
Cause as well as from fine tuning.) At any rate, Coyne remarks exasperatedly,
it would be helpful if there were any positive evidence of cosmic fine tuning, rather
than merely an a priori deduction of its necessity.

The most
intriguing and, to my mind, most plausible argument (more a sketch of an
argument, actually) for how a Designer might
have gone about producing complex, conscious life is based on quantum
indeterminacy. Quantum theory has a long history of playing hob with
philosophically minded scientists. (Allegedly Sir Arthur Eddington, when the
theory was explained to him, announced: "Science hereby withdraws its objection
to free will.") Quantum indeterminacy is the source of the "many universes"
theory, which proposes that at every quantum-level decision point, a new
universe branches off into existence. It has other implications for the
evolution debate as well: some of the events that cause mutations - X-rays and
cosmic radiation, for example - may be indeterminate. This means, if I
understand the argument correctly, that a Designer could have guided those
quantum-level events and so produced the desired mutations without violating
any physical laws. The present reviewer has no idea whether this conjecture is
true, but it should certainly earn theistic evolutionists many points for
ingenuity and persistence.

III.

As this brief
overview suggests, to a nonprofessional reader - this one, at any rate - the
arguments against evolution do not seem very substantial. This verdict would
hardly satisfy Jerry Coyne, however. It is not merely the substance but the
style, the logic, and above all the influence of religion-based
anti-evolutionism that alarm and annoy him throughout Faith Versus Fact.

Near the beginning
of his immortal essay "The Will to Believe," William James quotes the
mathematician W. K. Clifford: "It is wrong always, everywhere, and for everyone, to believe
anything upon insufficient evidence." That is also the gist of Coyne's lengthy
and impassioned sermon. Faith Versus Fact
is an inventory of the fallacies that theistic religions propagate and the
harms they inflict in the contemporary world, as well as a warning to
scientists that a conciliatory attitude toward believers in the public sphere
will only encourage further assaults on intellectual freedom.

Our devil's
advocate doesn't mince words. "Religious
claims are empirical hypotheses" (Coyne's emphasis); and they are, without
exception, unproven.[13]
He doesn't mean historical claims - that Moses, Jesus, and Mohammed actually existed,
for example; or moral judgments - for example, that a camel will sooner pass
through the eye of a needle than a member of the richest 0.1 percent will enter
heaven; or advocacy of secular "spiritual" practices, like mindfulness-based
meditation. He means supernatural claims: that a personal God, omniscient and
omnipotent, created the world from nothing, infused immaterial and immortal
souls into human beings, performs miracles, answers prayers, is pleased or
offended by human actions, and will confer eternal rewards or punishments on
every human being. These are "existence claims"; evidence for them could, in
principle, be adduced and should be demanded. And of course, only testable evidence, confirmed by
observation and experiment, is admissible. Appeals to tradition, scripture,
church authority, personal intuition, and mystical experience may suffice in
private life or within denominations, but they carry no legitimacy in the
public sphere.

Above all, religious
beliefs should have no influence on what is said in publicly funded science
classrooms. Nor should religious grounds allow parents to escape responsibility
for denying their children medical care - as they do in
thirty-eight US states, resulting in the preventable deaths of more
than a hundred children over a twenty-year period, a few of them described at
length in Faith Versus Fact. Nor
should they impose limits on the rights of terminally ill people to end their
lives when and as they see fit - a right denied in all but a handful of states.
The list of religiously based interventions in public policy goes on: laws
restricting the use of stem cells from extra embryos created during in vitro fertilization have hobbled
promising medical research; sex education is often forbidden in school
districts where boards of education are dominated by believers; the US withholds
funds from United Nations AIDS-prevention programs that distribute condoms to
women in developing nations. Coyne is right to be incensed about these things.

The baneful
effects of religious dogmas on public policy are one reason for Coyne's rejection
of calls for peaceful coexistence between science and religion, which he labels
"accommodationism." The most influential recent advocate of this "live-and-let-live"
attitude was the paleontologist and popular science writer Stephen Jay Gould.
In Rocks of Ages: Science and Religion in
the Fullness of Life (1999), Gould proposed that since religion and science
deal with "widely different aspects" of life, they should leave each other
alone, or at least refrain from overt aggression. Science "tries to document
the factual character of the natural world, and to develop theories" that
explain these facts; while religion is "wholly wrapped up in the contemplation
of moral and aesthetic values," of "purposes and meanings."[14]
Why can't they just scrape along, limiting their hostility to amused contempt,
like academic colleagues in different departments?

They can't, Coyne
retorts, because religion won't hold to the bargain. Instead of pronouncing
only on the good and the beautiful, it claims also to possess truths -
privileged truths, which science may not challenge. But unfortunately, science
has a tropism for truth, which can't be suppressed.

The fundamental
strategy of religious apologists, Coyne complains, is an appeal to ignorance.
When no secular explanation is available for some phenomenon momentous enough
to require one - from lightning to leprosy, from mortality to mind - God is
liable to be invoked. This was no doubt a rational strategy as long as
relatively few secular explanations were available. Better any number of
inadequate explanations, after all, than a panicked sense that the most
important things in life are wholly inexplicable. But science has changed all
that - if not entirely, then mostly. We have no explanation yet for mind - or
meaning or morals - but we do for lightning, leprosy, mortality, and a great
deal else.

And so, says
Coyne, vogue la galère! Let us fare forth and see how much we can explain. Not so fast,
reply believers; it's not that simple. Precisely because sacred truths are so
momentous, they are also mysterious. The quotidian methods of science cannot
reach them. An act of will, a leap of faith, is required; or a special faculty,
a sensus divinitatis. You scientists
are notoriously brash and irreverent, but that won't do; you can't
cross-examine God. You must be docile - which, after all, only means
"teachable" - and wait humbly for the gift of faith.

Coyne is hardly
alone in being infuriated by this special pleading. Even the irenic William
James scoffed at it:

The talk of believing by our volition
seems, then, from one point of view, simply silly. From another point of view
it is worse than silly, it is vile. When one turns to the magnificent edifice
of the physical sciences, and sees how it was reared; what thousands of
disinterested moral lives of men lie buried in its mere foundations; what
patience and postponement, what choking down of preference, what submissions to
the icy laws of outer fact are wrought into its very stones and mortar; how
absolutely impersonal it stands in its vast augustness, -- then how besotted
and contemptible seems every little sentimentalist who comes blowing his
voluntary smoke-wreaths, and pretending to decide things from out of his
private dream! Can we wonder if those bred in the rugged and manly school of
science should feel like spewing such subjectivism out of their mouths?[15]

The most profound
incompatibility between science and religion, Coyne suggests, stems from their
opposed attitudes toward intellectual freedom. Science "prizes doubt and
iconoclasm, rejects absolute authority ... and depends largely on falsification.
Nearly every scientific [hypothesis] comes with an implicit rider: 'Evidence X
would show this to be wrong.'"[16]
Religion depends on faith in a divine revelation, as interpreted by religious
authorities, to whom intellectual submission is due. The methods are antithetical:
on one side, the cultivation of doubt; on the other, the suspension of doubt.

But doubt, like
desire, cannot be indefinitely suppressed without psychic cost. Sensitive
natures will break down; coarser natures will lash out, i.e., persecute. Historically,
opposition to free thought and untrammeled debate virtually defines organized
religion, Coyne and his fellow New Atheists claim. If religious authorities can
censor, they will; if they can't, they will proclaim their undying commitment
to freedom. By and large, I would say, with all exceptions duly noted and praised,
the historical record bears them out.

For all the vigor
with which Coyne pursues his bill of indictment against organized religion, he
leaves out one important charge. As he says, the conflict between religion and
science is "only one battle in a wider war - a war between rationality and
superstition."[17]
There are other kinds of superstition - he mentions astrology, paranormal
phenomena, homeopathy, and spiritual healing - but religion "is the most
widespread and harmful form." I'm not so sure. Political forms of superstition,
like patriotism, tribalism, and the belief that "human nature" is unalterably
prone to selfishness and violence, seem to me even more destructive. Questioning
authority was humankind's original sin. It is also the first duty of a democratic
citizen. It is something of an understatement to say that organized religions
do not, on the whole, encourage the questioning of authority. Hence, it is
probably not a coincidence that, among developed societies today, the most
humane and pacific are the least religious. As Sam Harris laments: "Only 28
percent of Americans believe in evolution; 72 percent believe in angels.
Ignorance in this degree, concentrated in both the heart and belly of a
lumbering superpower, is now a problem for the entire world."[18]

At this point, believers
will object strenuously: "Don't blame
us! Look at the history of the twentieth century - the worst crimes were
committed by unbelievers." David Berlinski (a skeptic about both religion and
evolution) has put this point with great force and verve:

In the early days of
the German advance into Eastern Europe ... Nazi extermination squads would sweep
into villages, and after forcing villagers to dig their own graves, murder the
victims with machine guns. On one such occasion ... an SS officer watched
languidly, his machine gun cradled, as an elderly and bearded Hasidic Jew
laboriously dug what he knew to be his grave.

Standing up straight,
he addressed his executioner: "God is watching what you are doing," he said.

And then he was shot
dead.

What Hitler did not believe and what Stalin did not believe and what Mao did not believe and what the SS did not believe and what the Gestapo did not believe and what the NKVD did not believe and what the commissars,
functionaries, swaggering executioners, Nazi doctors, Communist Party
theoreticians, intellectuals, Brown Shirts, Black Shirts, gauleiters, and a
thousand party hacks did not believe
was that God was watching what they were doing.

And as far as we can
tell, very few of those carrying out the horrors of the twentieth century
worried overmuch that God was watching what they were doing either.[19]

This is masterly
rhetoric but faulty reasoning. Nazism, Stalinism, and Maoism were rank
superstitions, no more tolerant of doubt or committed to intellectual freedom
than Counter-Reformation Catholicism or contemporary Salafism. They were secular
religions. Sam Harris has addressed the
tu quoque argument, not so eloquently
as Berlinski but far more cogently:

While some of the most
despicable political movements in human history have been explicitly
irreligious, they were not especially rational. The public pronouncements of
these regimes have been mere litanies of delusion - about race, economics,
national identity, the march of history, or the moral dangers of
intellectualism. Auschwitz, the Gulag, and the killing fields are not examples
of what happens when people become too
critical of unjustified beliefs; on the contrary, these horrors testify to
the dangers of not thinking critically enough about specific secular
ideologies. Needless to say, my argument against religious faith is not an
argument for the blind embrace of atheism as a dogma. The problem ... is dogma itself - of which every religion
[supernatural or secular - GS] has more than its fair share. I know of no
society in human history that has ever suffered because its people became too
reasonable.[20]

IV.

No institution
endures for thousands of years unless it serves an essential need. Ignorance,
inertia, and indoctrination cannot be the whole story - religion must be good
for something. As an evolutionary theorist, Coyne understands this. Religion,
he speculates, is probably an "exaptation": an outgrowth of some behavior
originally favored by natural selection. For children, believing what one is
told by elders and caretakers has obvious survival value. Explaining altruism
and cooperation naturalistically, without reference to religion, is a
flourishing sub-field of evolutionary biology these days.

But will
gene-level explanations, even combined with political ones (religion as "the
opium of the people"), be enough? Perhaps, in addition to their faulty
worldviews and dubious practices, religions also transmit some useful, even
indispensable, memes. Perhaps, for example, they model goodness.

In a passage
quoted by Coyne, Alfred North Whitehead referred to "the beauty of holiness" as
one of religion's special concerns.[21]
It would be absurd to suggest that only religious values can motivate moral
heroism, or that most religious people are heroically moral (or moral at all).
But it is perhaps not absurd to wonder if there mightn't be some connection.
When the greatest English-language novelist, the unbeliever George Eliot,
sought to portray the aspirations of Dorothea Brooke, the protagonist of Middlemarch, the example she chose was
Saint Theresa of Avila.

Theresa's passionate, ideal nature
demanded an epic life: what were the many-volumed romances of chivalry and the
social conquests of a brilliant girl to her? Her flame quickly burned up that
light fuel; and, fed from within, soared after some illimitable satisfaction,
some object which would never justify weariness, which would reconcile
self-despair with the rapturous consciousness of life beyond self.[22]

The rigid and
fanatical Saint Theresa is not my idea of a moral exemplar; I would probably
choose Dietrich Bonhoeffer or Dorothy Day. But they all thrilled to the same
exhortations to selflessness and devotion - "Give all you have to the poor and
come follow me"; "Where your treasure is, there will your heart be" - however
differently they applied them. They were all in love with the beauty of
holiness.

Religions may also
enable solidarity - not only the
destructive kind on display in the Crusades, the Saint Bartholomew's Day
Massacre, or the Sunni-Shia wars, but also the kind necessary to rescue our
species from the current neoliberal war of all against all.

Kin-selection is sometimes said to provide a "rational"
explanation for solidarity and altruism. This usage reflects the subtle
corruption of that term by economists' conceptions of "rational self-interest"
and "rational choice" - as though there were something rational, in the
ordinary sense, about free riding, tax evasion, asset stripping, regulatory
capture, influence peddling, and all the other perfectly legal and economically
"rational" means by which cunning and unscrupulous people exploit the rest of
us. In this perverse sense, it would be irrational for energy companies to
leave the remaining deposits of fossil fuel in the ground, even if their executives
and major shareholders have grandchildren who may suffer in the social and
environmental chaos that climate change is bound to produce. Will it ever be
possible to persuade the energy industry and its political hirelings that
imperiling civilization is wrong, even if economically rational? Only, I
expect, through extra-rational appeals - to unselfishness, compassion, and a
sense of earth's and humankind's sacredness.

The most beautiful and deepest
experience a man can have is the sense of the mysterious. It is the underlying
principle of religion as well as of all serious endeavor in art and science. ...
To sense that behind anything that can be experienced there is a something that
our minds cannot grasp, whose beauty and sublimity reaches us only indirectly:
this is religiousness.[23]

Few of us can match
Einstein's or Spinoza's sense of hidden beauty and sublimity. But we all do
occasionally glimpse the haunting interconnectedness and intelligibility of
things, or have the equally mysterious experience of powers and delights that
come to us unearned - nature, love, the upwelling contentment that Spinoza
calls acquiescentia in seipso. The
desire to give thanks, individually or communally, for such gifts, even if to
no one in particular, seems fitting. Here is a strange and striking example of
such collective homage from an unpublished fragment by D.H. Lawrence, a fantasy
about a man who wakes up after a thousand-year sleep in a kind of pagan utopia.
He observes a rite.

When the [sun] touched
the tree-tops, there was a queer squeal of bagpipes, and the square suddenly
started into life. The men were stamping softly, like bulls, the women were
softly swaying, and softly clapping their hands, with a strange noise, like
leaves. And under the vaulted porticoes, at opposite ends of the egg-shaped
oval, came the soft booming and trilling of women and men singing against one
another in the strangest pattern of sound.

It was all kept very
soft, soft-breathing. Yet the dance swept into swifter and swifter rhythm, with
the most extraordinary incalculable unison. I do not believe there was any
outside control of the dance. The thing happened by instinct, like the wheeling
and flashing of a shoal of fish or of a flock of birds dipping and spreading in
the sky. Suddenly, in one amazing wing-movement, the arms of all the men would
flash up into the air, naked and glowing, and with the soft rushing sound of
pigeons alighting the men ebbed in a spiral, grey and sparkled with scarlet,
bright arms slowly leaning, upon the women, who rustled all crocus-blue,
rustled like an aspen, then in one movement scattered like sparks, in every
direction from under the enclosing, sinking arms of the men, and suddenly
formed slender rays of lilac branching out from the red and grey knot of the
men.

All the time the sun
was slowly sinking, shadow was falling, and the dance was moving slower, the
women wheeling blue around the obliterated sun. They were dancing the sun down,
and dancing as birds wheel and dance, and fishes in shoals, controlled by some
strange unanimous instinct. It was at once terrifying and magnificent, I wanted
to die, so as not to see it, and I wanted to rush down, to be one of them. To
be a drop in that wave of life.[24]

Lawrence always
called himself a "fearfully religious man." This is as close as he ever came to
describing his religion. It is indeed terrifying, as collective emotions can be.
But a culture without any such instinctually-based communal rituals would
probably be imaginatively and emotionally impoverished.

V.

In saying these
few words on behalf of (mostly natural) religion, I don't mean to gainsay any
of Jerry Coyne's criticisms of supernatural religion. The dogmas Coyne derides
in Faith Versus Fact are indeed, as
James said of their 19th-century versions, "fifth wheels to the
coach."[25]
Even more valuable is Coyne's resolute championing of critical thought and
intellectual honesty. His and others' efforts do, I hope and believe, have
dogmatic religion on the run, however long it may take to complete the rout.
Meanwhile, it is important to identify and preserve whatever in religion's vast
and varied heritage may be of use to our emancipated descendants.

[END]

George Scialabba (www.georgescialabba.net)
is a contributing editor of The Baffler.
He is the author of four essay collections: What
Are Intellectuals Good For?, The Modern Predicament, For the Republic, and Low Dishonest Decades, all published by
Pressed Wafer.

Securus judicat orbis terrarum, the Romans used to say - the whole world can't be wrong. If there's one thing that nearly the
whole literary world has agreed on for four hundred years, it's the supreme
greatness, the divine genius, of Shakespeare. "He was not of an age but for all
time!" (Ben Jonson). "Others abide our question. Thou art free" (Matthew
Arnold). "Shakespeare and Dante divide the world between them; there is no
third" (T. S. Eliot). Even that intrepid icon-smasher, Susan Sontag, when asked
her favorite author, replied "Shakespeare."

There is something empowering as well
as constraining about a critical consensus. As a young reader adrift on the
bounding literary main, I was glad to be sure of something. I read Shakespeare with reverence as well as gusto,
conducting my quotidian internal dialogue in (very fractured) iambic pentameter
for days after every exposure, happy to be intimate with - according to the
highest authorities - "the best which has been thought and said in the world."

Then Trickster - in the form of
George Bernard Shaw - gave me a tumble. Shaw made his living for years as a
drama critic, and in a tossed-off review of an apparently forgettable
adaptation of Pilgrim's Progress, he
observed in passing:

Search [in Shakespeare] for statesmanship, or even
citizenship, or any sense of the commonwealth, material or spiritual, and you
will not find the making of a decent vestryman or curate in the whole horde. As
to faith, hope, courage, conviction, or any of the true heroic qualities, you
find nothing but death made sensational, despair made stage-sublime, sex made
romantic, and barrenness covered up by sentimentality and the mechanical lilt
of blank verse.

Later, in the "Epistle Dedicatory" of Man and Superman, Shaw repeated this judgment, calling Shakespeare
a mere "fashionable author who could see nothing in the world but personal aims
and the tragedy of their disappointment or the comedy of their incongruity," and
who never understood that

true joy in life is being used for a purpose recognized
by yourself as a mighty one; being thoroughly worn out before you are thrown on
the scrap heap; being a force of Nature instead of a feverish selfish little
clod of ailments and grievances complaining that the world will not devote
itself to making you happy.

At first this didn't compute. But on
second and subsequent thoughts, it began to. Shaw frequently contrasted
Shakespeare with Bunyan; that was my clue. By comparison with Bunyan's, Shaw
claimed, Shakespeare's view of life
was impoverished, his sense of motivation and personality cramped, his
imagination fertile but only within narrow limits. His characters have no sense of a purpose larger than
themselves, their political or romantic ambitions, their dynastic allegiances,
and other trivial matters. Bunyan's, by contrast, live by and for something
truly great, a conception of stark moral and intellectual grandeur. Bunyan and
his Hebrew forebears, if I understand Shaw correctly, were heroes because they
cared little about their individual destiny, having identified themselves
generously and wholeheartedly with an explicit cosmic moral order, unlike Shakespeare's
heroes, with their comparatively trivial purposes and preoccupations.

There
was nothing vital at stake, at least as represented by Shakespeare, in the
quarrels between English Catholics and Protestants: they were simply religious
factional squabbles, with political prizes at stake rather than profound moral
or metaphysical differences. The Hebrew prophets and the Puritans were -- even
if one rejects their beliefs and ideals -- moral and religious geniuses and
heroes. Shakespeare's protagonists may have been brave or loyal or in other
ways virtuous, but they are all the servants of paltry, utterly conventional
purposes. Shakespeare was a fine dramatic craftsman and very clever wordsmith,
but no more than that.

All
this is what Shaw meant by saying that Shakespeare's sense of life is hackneyed
and petty, while Bunyan's is vital and original. For all his profound
psychological insight and dazzling verbal virtuosity, Shakespeare is the
supreme exponent of the philosophia
perennis, the conventional moral and political wisdom. Of course, the
conventional wisdom is still wisdom, even if not the highest. And wisdom is
not, in any case, all literature has to give. But is Shakespeare the paragon,
the nonpareil, I once assumed he was? I'm no longer sure.

Aesthetic
judgments are hardly objective; often enough, they're not even aesthetic. Taste,
like morality, is an expression of temperament and circumstance. Perhaps I've
grown querulous or snobbish, eager to épater my fellow critics. Perhaps I've grown doctrinaire, willing
to subject authors to ideological interrogation. Most likely, I don't
understand - will never understand - the all-important distinction between form
and content, which is why I value Tolstoy and Lawrence above Flaubert and Joyce,
Bellow (or even Vidal) above Nabokov.

It's
probably too late for third thoughts, as I shift into a lean and slippered
pantaloon. No matter: for better and worse, I'm no longer in quest of a
philosophy of life. Whatever ultimate truths Shakespeare does or doesn't have
to offer, I look forward, before this strange, eventful history ends, to quaffing
many a draft of his intoxicating word-music and lapsing contentedly into second
childishness and mere oblivion.

[END]

George Scialabba is the author
of What Are Intellectuals Good For?, The
Modern Predicament, For the Republic, and the forthcoming Low Dishonest Decades (all published by
Pressed Wafer).

A Rake Among Radicals

A Colossal Wreck: A Road Trip
Through Political Scandal, Corruption, and American Culture

by Alexander Cockburn. Verso, 592 pp, $29.95.

On Christmas Eve
2010, Alexander Cockburn began a journal entry, or letter, or short column for
his newsletter Counterpunch, in this
fashion: "The prime constant factor in American politics across the last six
decades has been ..." [478] Let us
pause for a moment to conjecture how commentators of diverse political
complexion might have completed that sentence. The exercise may give us some
sense of Cockburn's place in the culture of late-20th/early-21st
century journalism.

A Tea Partier
might say: "... the ever-increasing tyranny of Federal bureaucracies." A
paleoconservative might say: "... the expulsion of God from the public square." A
neoconservative might say: "... the weakening of American resolve and the global
decline of American power." A neoliberal might say: "... increasing recognition
that markets work better than government intervention." A feminist or gay
activist might say: "... the gradual extension of equal rights." A civil
libertarian might say: "... the gradual erosion of civil liberties." An
environmentalist might say: "... a blind emphasis on economic growth at all
costs." A social democrat might say: "... the dwindling of social solidarity from
its high point just after World War II."

All these
perspectives have at least a grain of truth. But Cockburn's answer cuts
deepest: "... a counterattack by the rich against the social reforms of the
1930s." [478] Class warfare is not
the only kind, or always and everywhere the most important kind. But it is the
most intractable and invisible kind, and Cockburn was one of very few American
journalists who never lost sight of it or failed to rub it in.

This determined
reassertion of business dominance has transformed nearly every aspect of
American society. The non-enforcement of labor law and the resulting decline of
unions, most obviously; but just about everything else as well: the increasing
cost of elections; voting restrictions and election logistics that keep
working-class and minority turnout down; the exponential growth of lobbying and
corporate public relations; dependably lucrative private-sector employment for
corrupt (i.e., nearly all) former legislators; the concentration of media
ownership; "free-trade" agreements and the disappearance of manufacturing jobs;
a new and draconian intellectual property regime; the metastasis of the tax
code; the infestation of regulatory agencies, especially at the policy level,
by industry hacks; the privatization of war-fighting and the corrections
system; the creation of business-friendly precedents by packing the appellate
judiciary with conservative extremists; endless factually-challenged broadsides
against Social Security and the estate tax; the tightening of the screws on
bankruptcy and student debt; the brazen promotion of junk science and dubious
"experts"; the underfunding of the IRS, the Postal Service, Amtrak, public
defenders, public broadcasting, public education - public anything and
everything, except public surveillance. It's as though the Business Roundtable
and the U.S. Chamber of Commerce, making use of both political parties, had
engineered a brilliant, multi-faceted, devastatingly successful campaign to
roll back the New Deal.

Which, as
Cockburn pointed out, is pretty much what happened. Indeed, so tirelessly and
tiresomely did he persist in drawing attention to the fact that the United
States is a class society that he was gently ushered to the margins of American
political discussion, whose center is occupied by such intellectual titans as
Thomas Friedman and George Will. Very Serious People do not make a fuss about
class. It is not a Very Serious Subject. Nor do they indulge in loose talk
about American "imperialism," as Cockburn did repeatedly. That would be vulgar
and irresponsible. America abroad makes mistakes, sometimes genocidal ones, but
always with good intentions. To doubt this betrays a lack of Seriousness.

Alexander Cockburn
(1941-2012) was born in Ireland, schooled in Scotland and England, and
apprenticed to Grub Street. His father, Claud Cockburn, was a witty and dashing
left-wing journalist, a member of the British Communist Party, and an
accomplished novelist and memoirist. In 1972 Alexander came to New York, the
belly of the beast, the heart of the empire, to write for the Village Voice, the liveliest and most
influential journal of the then-thriving counterculture.

His star rose
rapidly. "The Greasy Pole," a biweekly column on national politics, co-authored
with James Ridgeway, was well-reported and hard-hitting. "Press Clips," a
weekly column of press criticism, was even more raucous. Cockburn's suave prose
style and stinging wit delighted his readers and infuriated his targets:
blandly deceptive corporate or governmental spokespersons and self-important
newspaper or TV pundits spouting conventional wisdom.

Particularly
infuriating was his forthright criticism of Israel's expansionism and its harsh
treatment of Palestinians in the West Bank and elsewhere. Predictably, false
accusations of anti-Semitism dogged him, and when it was revealed in 1982 that
he had accepted a $10,000 grant from an Arab-funded institute to write a book
on Middle Eastern politics, had not written the book, and had not returned the
money, the furor was intense. He was suspended briefly from the Voice, lectured incessantly by
self-designated guardians of journalistic ethics, and portrayed as venal and
dishonest by the Israel lobby. He soon moved over to the Nation, but "Grantgate" was a bitter experience and, I suspect,
contributed to his decision to leave New York a few years later.

Christopher
Hitchens has a charming essay in his last collection, Arguably, about driving cross-country, on assignment for the Atlantic, in a rented red Corvette. But
except for occasional forays to Iraq and Afghanistan to bolster the morale of
the freedom-fighters, Hitchens stuck pretty close to New York and Washington
DC. Cockburn was forever on the move, driving around the country in beat-up old
cars and finally settling in one of the remotest and least gentrified areas in
the lower forty-eight. As a result, A
Colossal Wreck and its predecessor, The
Golden Age Is In Us, are two of the best American books of travel and
commentary since Tocqueville (if you like Tocqueville, that is; since Mark
Twain if, like me, you don't). George Packer's earnest, Serious The Unwinding will probably pick up the
prizes, but A Colossal Wreck is a
more penetrating, more colorful, and vastly more entertaining account of how
early 21st-century Americans think and feel.

Like everyone
else's, Cockburn's oeuvre was a mix
of continuities and idiosyncrasies. The continuities were standard left-wing
preoccupations: opposition to unilateral American military intervention,
corporate-designed globalization, the scapegoating of immigrants, and the
growth of the National Security State, with special bêtes noires including the
war on drugs and the dependable cowardice and hypocrisy of the Democratic Party.
Unlike his collaborators James Ridgeway, Ken Silverstein, and Jeffrey St.
Clair, Cockburn was not an outstanding investigative reporter. He did not do
much digging in databases or archives; did not identify whistleblowers or
cultivate sources deep inside malign bureaucracies; did not hurry to
battlefields or disaster scenes to see first-hand. (Though he sponsored plenty
of all those things in Counterpunch.)
His methods, insofar as he had any, were to read a staggering volume and range
of the world and local press, transmitting and embellishing whatever piqued his
interest, or to arrive somewhere and learn everything about the area and its residents,
serving up historical curiosities along with political observations.

But A Colossal Wreck is even less methodical
and more miscellaneous than that description suggests. It is a potpourri, a
gallimaufry, a salmagundi, highly seasoned. A typical morsel:

Here's why I'm against the UN as
a promoter of federalism and world guv'mint. This just in from Geneva, Switzerland,
via Reuter's wire: "UN upholds French ban on 'dwarf-throwing.'" It turns out
that a diminutive stuntman who had protested against a French ban on the
practice of "dwarf-throwing" has lost his case before some sort of UN human
rights judicial body. The tribunal issued some typically pious UN claptrap
about the need to protect human dignity being paramount.

The dwarf, a fellow called Manuel
Wackenheim, argued that a 1995 ban by France's highest administrative court was
discriminatory and deprived him of a job being tossed around discos and similar
venues. ...

Dwarfs and their throwers will
now have to search out venues, like prize-fighters in eighteenth-century
England. Soon some place like Slovakia will be their only venue. No doubt a UN
embargo will then ensue, with draconian sanctions, appointment of
inspectors/spies, followed by the inevitable intervention, NATO bombing, and
occupation.

So here's a bunch of UN
administrators, each of them probably hauling down an annual salary hefty
enough to keep a troop of dwarfs in caviar for life, dooming poor little
Wackenheim to the unemployment lines, before going home to scream at their
underpaid Romanian maidservants. ... In the old days, dwarfs could stand proud,
strutting down the boulevards, around circus rings, or forming part of some
amusing display, or matching themselves against pitbulls (a popular
nineteenth-century English pastime). I can remember dwarfs from my childhood in
Ireland, along with other bodies remote from conventional anatomy. Walking down
the main street of any Irish town reminded one of Breughel. Not any more. I
guess even in Catholic Ireland the doc takes a look and chokes nature's sports
before they've got out of the starting gate. [229-30]

Another:

Fall is always the best time to
meander around the country. Across the Midwest the corn is being harvested. The
browns and golds of stubble and still-standing stalks warm those vast flat or
slightly undulating vistas. ... Around 100 miles along 94 from Minneapolis we
came to Sauk Centre and espied a sign for the Sinclair Lewis Interpretive
Center. Lewis was born in Sauk Centre, which he offered to the world as Gopher
Prairie in Main Street, the novel
published in 1920 that made his name.

Fortunately the Info Center has
not yet found the money to transform itself into an interactive learning
experience in the modern manner. In fact, the "center" is an old-fashioned
small museum with fading photographs and photostats of Lewis's working
manuscripts. Some of these were detailed plans Lewis drew of his fictional
towns, plus his real-estate maps of the inhabitants' precise locations and
their family histories. Every time he visited a graveyard, he'd take down names
for future use. ...

I'd forgotten how good a writer
Lewis was. "This is America," he wrote in the epigraph to Main Street. "Main Street is the climax of civilization. That this
Ford car might stand in front of the Bon Ton Store, Hannibal invaded Rome and
Erasmus wrote in Oxford cloisters. What Ole Jenson the grocer says to Ezra
Stowbody the banker is the new law for London, Prague, and the unprofitable
isles of the sea; whatever Ezra does not know and sanction, that thing is
worthless for knowing and wicked to consider." [119-20]

There's much
more of this - perhaps too much. My only complaint about A Colossal Wreck - an ungrateful one, I know - is that the
collection is weighted a little too heavily toward the light-hearted. I wish
there had been more long and powerful essays like Cockburn's incensed
post-mortem tribute to Gary Webb, who revealed the CIA-cocaine connection and
was hounded mercilessly by establishment media over minor flaws in his
reporting (he finally committed suicide). Or like his moving review of The
monumental study of private life in Stalinist Russia. Or like his brave and
prescient meditation dated September 12, 2001.

Cockburn liked
to recount an exchange with an editor early in his career. "Alex, is your hate
pure?" "Yes, Jim." [181] But it
wasn't. He was too fond of pleasure: of cooking, gardening, old cars, curious
learning, elegant phrase-making; too amused by the inexhaustibly hilarious
absurdity of life that only Irish writers from Wilde to Beckett to J.F. Powers
have seemed able to convey in English. (And not only of life but of language
too. He imagines a revolution against cliché, with the legendary prosecutor
Fouquier-Tinville consigning one after another modish or obfuscatory expression
to the guillotine. Among my favorites: "reach out," "closure," "vibrant,"
"contrarian," and the Nixon-Obama standby, "Let me be clear,") There is plenty
of rancor - fully justified, by and large - in A Colossal Wreck. But the dominant tone - the overriding impression
one takes away of the author - is a near-unique fusion of saeva indignatio and joie de vivre.

The most amusing
ten pages in the book are a playlet set in the antechamber of Heaven, where the
newly-deceased Christopher Hitchens runs across the not-yet-deceased Henry
Kissinger, on an advance diplomatic mission. The wily K lobbies the irate Hitch
to withdraw the latter's scathing indictment, which has caused some murmuring
among the liberal faction of Heaven's admissions committee. Hitchens refuses,
but with the support of John Paul II and Mother Teresa, Kissinger wins a
promise of salvation, returning to earth to arrange a dramatic deathbed
conversion.

This is probably
the kindest light in which Cockburn ever portrayed Hitchens, whom he never
forgave for supporting "humanitarian intervention" in the Balkans and Iraq. How
do the two, once joined in the popular imagination as clever lefty Oxbridge
types, stack up? Hitchens was a superb literary journalist; his portraits and
review essays will endure. As a political critic, he was not so good, even
before 9/11 overwhelmed his critical faculties and he became positively bad.
His Nation columns often seemed
phoned-in, and his Slate columns were
a public menace. Cockburn seldom wrote long-form, alas, but his prose style was
superior to Hitchens' and his political judgment generally more astute.

Not always,
however. I mentioned Cockburn's idiosyncrasies. They call for brief
enumeration. There was his tiresome sniping at Orwell, who had criticized Claud
in Homage to Catalonia. More
seriously, there was his antipathy to gun control, which he saw as just more
evidence of the bourgeois state's aspiration to total social control. Evidently
he thought several thousand accidental deaths a year, and even the
proliferation of assault weapons, less intolerable than giving up the remote
possibility of armed popular resistance. If guns are outlawed, only the police
will have guns, he might have said.

Then there was
his unyielding anti-Malthusianism. He seemed to think that population control
was indistinguishable from eugenics, supposedly a perennial progressive fetish.
Most distressingly, there was his skepticism about global warming. His
inveterate distrust of consensus among the high-minded, plus his discovery that
the moribund nuclear power industry was looking to the anti-fossil fuels
movement for its salvation, led him to pay attention - too much, perhaps - to
the tiny minority of dissenting scientists.

I can see a
shred of plausibility in each of these opinions. But mostly sheer orneriness.

Between the
malevolence of the Republicans and the mediocrity of the Democrats, the last
four decades have been a pretty dismal time to be a left-wing radical. Few of
us have stayed scrappy; still fewer have kept a sense of humor. Cockburn's
example - hedonist, populist, brawler, dandy - made it a little easier. I wish
the next generation one of him.

[END]

George
Scialabba is a contributing editor of The
Baffler and the author of three essay collections: What Are Intellectuals Good For?, The Modern Predicament, and For
the Republic.

Scialabba: Well... I'm proud of some parts of the American
heritage and very unhappy about some parts of contemporary American culture. I
don't worship the American Constitution as many of my fellow Americans do.
Americans tend to be a little bit superstitious in their reverence for
scripture, both religious and political. But I think the Constitution was an
expression of the Enlightenment - the European and to some extent American
Enlightenment. It was a step forward but there were objectionable parts of it.

RL: Could
you point out some of these objectionable parts?

Scialabba: Well, one of them is that slaves were counted as part
of the population for purposes of a state's representation in the Congress but
were not allowed to vote, of course. In other words, it legitimised slavery.

RL:
Anything else?

Scialabba: Oh yes, a number of things, but the main thing is
that it's very imperfectly democratic, that is, it has far too many safeguards
on private property. I don't mean individual private property - you know, yours
and mine - but the power of wealthy people to maintain inequality, both
economic and political.

RL: But
what's wrong with the idea of private property?!

Scialabba: Nothing is wrong at all! I mean, as I say - private
property for use is one thing but the unlimited accumulation of private
property, to the extent we see it in contemporary America and have always seen
it throughout American history, at least since the arrival of industrialisation
and mass production in the late 19th century - it is not legitimate.

RL: But
it's constitutionally legitimate. In what sense is it illegitimate then?

Scialabba: Morally.

RL: What
do you mean by 'moral legitimacy'?

Scialabba: I believe in the morality of democracy. I believe
that those who are affected by actions and policies should have an equal voice
in deciding those actions and policies.

RL: And by
'equal voice' you mean...?

Scialabba: Equal opportunity to influence the outcome of the decision,
of the collective deliberation.

RL: So
already here your utopian views are present.

Scialabba: Yes, although... They're probably a very long way from
realisation, but they're not utopian in the sense that... They're not extremely
eccentric or idiosyncratic - it's what John Stuart Mill believed, it's what the
Fabians, it's what all of the great Anglo-American democrats believed.

RL: Yes,
but they were delusional about human nature - they thought that equality is
implementable, which is obviously false.

Scialabba: Well, equality of treatment, equality of right
doesn't depend on equality of endowment - of talent or capacity.

RL: But
the equality of what you mentioned, 'the voice', would imply the equality of
the ability to make use of one's endowment. It presupposes that one should be
able to do something with one's endowment, whereas the majority of people are
completely unable to do that.

Scialabba: Well, not completely - most people muddle through
their lives. But it's true, of course - when it comes to managing wealth, when
it comes to influencing public opinion... When it comes to virtually every human
capacity, we're not equal. Nonetheless, what ought to determine our opportunity
to influence decisions, public policy, is not the luck of our endowment, of our
birth, our genetic gifts - it's the degree to which we would be affected by
those policies. So a workforce should collectively have the ability to
determine what is done with the surplus that a factory or an enterprise
produces; it should have a right to determine the conditions of work. This doesn't mean
there shouldn't be managers, superiors - people who have more experience or
more insight, who are delegated the authority to make one decision or another.
But the delegation is itself a democratic process, it's part of democratic
theory - representation is the only form in which democracy can exist in our
societies.

RL:
...outside, say, the ancient Greek polis, with a limited number of citizens. But
let me press you a bit more on this point... I have questions regarding the very
idea of equal opportunity - because this idea presupposes that it is possible
for people with unequal abilities to have equal opportunities. That is part of
what this opportunity is - it includes
the ability, whereas abilities are not equal. Don't you see that the very idea
of equal opportunities contains an oxymoron?

Scialabba: Well, if you press it to its logical limits... I'm sort
of Wittgensteinian - that is, I believe that there are no autonomous, separate
meanings of words. I believe that all words are part of a language. So what I
think equality of opportunity, equality of political rights has meant among democratic
theorists generally is, roughly, an equal vote. It's true that what it meant in
classical Athens... There were certainly people who could sway the demos one way
or the other, but all members of the assembly had one vote. Likewise, in this
workplace democracy I was talking about not all workers would articulate an
opinion about where to invest or how the work process should be organised. But
once all the people who wanted to have their say and were capable of it and had
the self-confidence to get up and present it had had their say then everyone affected
would have an equal vote. That's a rough, practical form of equality that we
are a long way from in the United States or anywhere and that I think is the
minimum level of effective democracy that we should work for. And the Constitution
does not enhance or foster that kind of equality.

RL: The
only problem with Wittgenstein I think, when it comes to language games, is
that from his writings it's not clear how the rules can be changed. As we know
from history, rules do occasionally change, and Wittgenstein, believing that a
philosopher should only consider the world as it is, apparently hadn't thought
about how the rules might be changed. If you were Wittgenstein, I would ask - what
do you say to that?

Scialabba: Well, if I were Wittgenstein, I would probably
scratch my head, look up into the air for several minutes and then say: "Well,
I guess people will misunderstand each other - they will talk past each other
for a while and bump up against each other. And then, if they really want to understand
one another, they will say - 'look, are you meaning this by this word?' And the other person would say - 'no-no, I mean
this!' And then the other person will
say - 'well, I know some people think the word means that but, really, don't most people think it means this?' And they would negotiate and
eventually come to a consensus, if they wanted to. Of course, if people didn't
want to be understood or understand each other, then this process won't end in
an agreement or a consensus - it will end in manipulation."

RL: Apart
from slavery and not enhancing that understanding of equality, what other
shortcomings of the Constitution can you think of?

Scialabba: Well, the Electoral College is a hopeless idea. You
know, the presidents of the United States are not elected by popular vote -
they're elected by this archaic institution called Electoral College. Each
state is apportioned by population a number of electors with a capital 'E'. The
electors were originally appointed by the state legislature - the people could
vote but the legislatures chose the electors and the electors decided who the
president was. It could happen - it has happened four times in American history
- that the majority of electors chose someone different from the majority vote
of the population. Because all the electors of the state vote the same way -
they have to vote the same way. Whichever party wins the majority in a given
state wins all the electors in that state - it's called 'winner-takes-all',
very foolish system but it ensures the monopoly in power of two parties,
Democrats and Republicans. So even though more than a hundred times the reform
or abolition of the Electoral College has been introduced in Congress and has
passed the House of Representatives, it is always defeated by the Senate -
because the Senate is the first thing that would go if there were a radical
reform of that sort. The Senate is an absurdly undemocratic institution! It's
clear even in the Federalist Papers that the purpose of the Senate is to
frustrate the majority will - because the majority was not trusted by the
founding fathers, all of whom were members of ...

RL: Well,
the very idea of democracy at that time was not as developed yet as it became
in the 19th century. Democracy was considered a very suspicious
political system up to the mid-19th century. What good do you see in
radical democracy?

Scialabba: Do you know that saying by Churchill that democracy
is undoubtedly the worst political system, except for all the others?

RL: Yes,
and he also said that a five-minute conversation with an average voter makes
one doubt the value of democracy.

Scialabba: Right... As Kant said, the only way to learn to
exercise free will is to exercise freedom - to make mistakes and learn from
them. The premise of democracy is that each person is the best safeguard of his
or her own interest, or at least can learn to be.

RL: But
you see, these are fundamentally different positions - each one 'is' or 'could
be'. Those are fundamentally different views on human beings!

Scialabba: Yes, but the only way one can learn to be is by
exercising that freedom.

RL: Do you
think that out of the 320-350 million people the majority want to exercise that
right, even if they had it?

Scialabba: In the United States?

RL: Yes,
for example.

Scialabba: The population is... The population is not awake,
politically.

RL: Or
maybe not awake in any sense?

Scialabba: Right, well... They're all very clever and alert
shoppers - they're awake as consumers, but as citizens they're not. And
that has a long history of development. One of the most recent reviews I wrote
is of two books which explain how American political culture got to such a
state of torpor and apathy and confusion. It's in The Nation, in May or June. One of these books is by an American
historian Steve Fraser, The Age of
Acquiescence, and the other is by a novelist David Bosworth, called The Demise of Virtue in Virtual America.
They're both small masterpieces - well, The
Age of Acquiescence is not small,
it's a lengthy, detailed account of how the vigorous self-assertion of the 19th
century working classes and small proprietors, which was I think as close to
mass democracy as the world has come, that is, early and mid-19th
century America, was transformed largely by the advent of mass production into
a mass society of passive, apathetic, ignorant, deskilled consumers.

RL: And
you think these passive, apathetic, ignorant and deskilled consumers should
have equal opportunity to decide the matters of common concern?

Scialabba: Well, what are the alternatives?

RL: I
don't know yet.

Scialabba: Enlightened guardians?

RL: Yes,
philosopher kings! (Laughing.)

Scialabba: Well, I might be willing to entertain the notion if
there were a reliable source of philosopher kings on the horizon.

RL: But
maybe this picture of the American society which you just described is the only
possible picture which could support the process of the United States becoming
the world's policeman and banker? Maybe in order to acquire all this power that
it has the US needed such a society?

Scialabba: Unquestionably, the American political elite needed
precisely this kind of acquiescent, uninformed society in order to, as you say,
become the world policeman and banker. The fact that we have become the world's
policeman and banker is not good, either for the rest of the world or for the
American population.

RL: Why is
it bad for the American population? You can take whatever you want - you don't
even have to borrow, you just take it and live in affluence. Why is it bad?

Scialabba: One has to disaggregate. When one says that 'America'
can take whatever it wants...

RL: ...one
means the ruling class.

Scialabba: Yes! I mean, there's unequal distribution, let's say,
of the proceeds of imperialism or globalisation. And the American people, of
course, are the ones who are sent to fight, to police the world, which has
involved not only a great deal of bloodshed on the part of our victims but also
a certain amount of bloodshed on the part of the American population. And also
this regime that has made it possible for us to become the world bankers, that
is, the regime of unrestricted, globally mobile capital, means...

RL:
...having a money printing machine.

Scialabba: Yes, but also having the ability of investors to move
wherever opportunities favour them most, which has simply destroyed America's
industrial workforce. It's in those senses that becoming the policeman and the
banker of the world has made the American elite the happiest and wealthiest
stratum, ruling class ever. Good for them, but not so good for the rest of us!

RL: And
'the rest of us' is 99.99%?

Scialabba: Yeah, roughly - 99.9% I'd say.

RL: Ok,
but let's come back to the beginning of our conversation, when you said that
you were proud of some parts of the American heritage. Could you mention two or
three examples?

Scialabba: Freedom of speech. We do have a tradition of
relatively unrestricted ability to criticise the government. It will take a
great deal more rollback before any American actually has to watch what he or
she says in public, or even in private, as has been the case in most of the
world for most of history. Until very recently the idea of the government
having a secret political police spying on the population was a scandalous
idea! The FBI has been doing it ever since the First World War, basically, but
it has been kept very secret. Occasionally organisations like the American Civil
Liberties Union and enlightened sectors of the ruling class have pushed back -
there have been court decisions restricting the government's ability to spy on
and harass political dissidents. Lately, after 9/11, the attack on civil
liberties has intensified. It's not just since 9/11 but since 1994, when the
new Republican majority took over the Congress - that's more of a watershed in
American politics than a lot of people realise. But there's also freedom of the
press! The idea that a press would need prior approval to publish sensitive
documents rather than going ahead and publishing them and taking its chances in
court, defending their publication against the government - until 10, 20, 30
years ago it was unthinkable! I mean, Nixon got in great trouble for trying to
pull tricks like that - they had to be kept secret, they were called 'dirty
tricks'. And the libel laws... I don't know much about Europe but I certainly
know that Britain has shameful libel laws! Virtually any rich person who
dislikes something that was said about him or her in public can just hire a
lawyer, sue and force whatever person or institution responsible for that
opinion to either bankrupt themselves with legal costs or publicly retract.

RL: So
you're saying that you're proud of freedom of speech but that in the last 20-30
years it has been undermined. I spoke recently with an old mathematician from
the Soviet Union who was imprisoned five times. He is 91 years old now and has
been living in the United States for the last forty years. He said that the
United States was a better place because he could think and speak whatever he
wanted, without being imprisoned. So, comparatively speaking...

Scialabba: Yes, comparatively speaking it's something to be
proud of.

RL: What
else, apart from the freedom of speech, are you proud of?

Scialabba: Well, there's a tradition of what we call 'know-how'
- a kind of practical aptitude. Another phrase besides 'know-how' is 'can-do'.
Americans are can-do types, that is, when we're faced with a difficult problem
we tend not to theorise about it, we tend to roll up our sleeves and pitch in -
at least that's the national image, the stereotype. There's something to it, as
with all stereotypes - it's true! Americans are a pragmatic, common-sensical,
generally tolerant people.

RL: You
could also extend this list by adding 'superficial', 'hypocritical' etc...

Scialabba: Yes, and it could get less admirable.

RL: You're
highly critical of how the American democracy functions. How do you cope with
the helplessness of criticism?

Scialabba: Do you know that immortal phrase by Samuel Beckett:
"I can't go on, I'll go on"? (Laughing.)
What can you do? You go on as long as you can and then you... you just take a
vacation. (Laughing.) I don't know
the answer to that. It's difficult, it's frustrating, exasperating... It's
maddening! It actually has... I could say, literally, driven me mad. I have a history
of clinical depression, and at least a couple of those episodes were triggered
I think by... just a constant, seething indignation at what was being done to
American political culture. I mean, from 1994 to 2005 I was grinding my teeth
and clenching my fists and holding my breath in indignation countless times
every day. And in 2005 I finally broke down and had the longest and worst
depression I've ever had. I don't think it was a coincidence that all those
years of political despair, humiliation, agony, anguish terminated in that
episode. You know... you just go on as long as you can and then you stop - like life.
(Laughing.)

RL: These
changes you are referring to, which started in 1994, seem very gradual though,
at least from an outsider's point of view. Maybe you could point out some of
the most crucial turning points which, as you claim, changed the political
culture?

Scialabba: Well, another watershed is 1980, the election of
Reagan. But, of course, it all started with the New Deal.

RL: You
mean, in the 1930s?

Scialabba: Yes. American capitalism was in crisis, and there
were two schools of thought about what to do. The old school, laissez-faire, the classical liberal
idea was - we just wait it out, it will pass. The farmers, the workers, the
banks, the small businesses - a lot of them will starve, but it will be even
worse if we try to help. Very convenient for the bankers and for the rich
people. And then there were the so-called 'Progressives' - followers of John
Dewey and... kind of an enlightened liberal segment of the ruling class. Their
representative was Roosevelt, and they thought - well, maybe... Yes, a lot of
people will starve but possibly we could put up with that if we were sure that
capitalism would survive. But look at Russia, look at the failed revolutions
after the First World War in Germany and elsewhere - maybe capitalism won't
survive! We can't be sure, so let's try to help. And Keynes was there, with his
new theory. So they tried, it worked with a little 'help' from the Second World
War, and the New Deal seemed to be a permanent part of American public policy
and political culture for several decades after the Second World War. But this
first segment, this very straitlaced, single-minded - simple-minded! - segment of
the ruling class never accepted the New Deal, never accepted the restrictions
on business, the powers granted to labour to organise and bargain collectively,
the extent of regulation on business, on commerce and pollution. At first all
they could do was fume and scheme in private - the John Birch Society, these
small groups of Texas millionaires who would meet and...

RL: So
this undercurrent never died out?

Scialabba: Exactly. It persisted and it saw a tremendous
opportunity in the late 1960s and early 1970s, with the high working class
dislike of the student movement and the counter-culture. It formulated a new
strategy called 'the culture war'. Part of it was playing on the resentment of Southern
whites at the extension of civil rights, and part of it was playing on the
resentment of religious [populace], especially Midwest evangelicals and
Southern evangelicals, of the sexual freedom and the equality of women
advocated in the 1960s. They had money to hire the very best public relations
consultants, propagandists, and they whipped up this cultural war and it was
successful! They brought Ronald Reagan to power and...

RL: Sorry,
before you proceed... The fact that they thought out this cultural war sounds a
bit like a conspiracy theory - that they were so smart...

Scialabba: Well, of course they weren't half-a-dozen
masterminds.

RL: So by
'them' you mean a large group of people and not a very unified effort?

Scialabba: Right - not unified but coordinated. There are
industry associations in every large industry, there's the US Chamber of
Commerce, there's the Business Roundtable, and there's the Republican Party.
All of them communicate a great deal! I don't mean that they have secret
meetings in a lodge out in the woods...

RL:
Although they do. (Laughing.)

Scialabba: Occasionally, but... They know what each other is
thinking, they know what their grievances - their common and individual
grievances - are... They all disliked regulation - they were all against the
Environmental Protection Agency, the Occupational Safety and Health Administration,
the National Labour Relations Board... I mean, as soon as someone came to them -
some enterprising politician or the Republican National Committee - and said:
"Look, these are Democratic initiatives. The Democratic Party is trading away some
of its political capital among white and Southern voters and religious voters
by going along with racial equality and sexual equality. We think that if we
whip up these resentments on their part we can steal those voters from the
Democrats! And once we do, once we get executive power and maybe even
control of Congress (which they didn't get for three quarters of the Reagan
years), we can start undermining the New Deal." And that's exactly
what happened! Reagan got elected. If you look at the political polls of those
times, most people didn't know - and to the extent that they did know didn't
agree with - the actions that Reagan immediately started taking: undermining
social security, undermining labour protection, undermining environmental
protection... He was sold as 'one of us', a real American, somebody you could
have a beer with, and the Democrats were considered effete and snobbish,
arrogant, condescending, liberal. It is a great pity, but great
issues were decided on the basis of these superficial stereotypes. And, again -
Reagan was not elected to appoint people to the Environmental Protection
Agency who came straight out of polluting industries and immediately declared
their intention to roll back regulations on PCBs, deadly chemicals that were
raising much public concern. But he did! And, likewise, he appointed people to
the National Labor Relations Board, and ever since 1980 the National Labor
Relations Board has had a majority of people on it from industry, who don't
believe in unions. And so the average time for arbitrating a complaint...
Typically what will happen is that the union will go to a workplace, try to
organise while the employer will hire a law firm, which will come in, interview
every worker, tell them that their jobs will disappear if they join the union,
etc. And these law firms typically provide Republican appointees to the
National Labor Relations Board, so every time a grievance against one of these
things is filed these appointees grant one continuance after another. It takes
roughly two years for the average complaint to reach a judgment - and by then,
of course, the union has disappeared and the passion has dried up. So in
countless small ways like this, starting with the Reagan administration, the 'capillaries'
of the New Deal were tied off. It was like the American infrastructure - it was
left to decay. Infrastructure needs constant maintenance - you can't just build
it and then expect it to remain functional forever. It's the same with
political infrastructure - it needs to be sustained by capable public officials
who want to fulfil the policy goals rather than undermine them. And both the American
physical infrastructure and the American political infrastructure have decayed
disastrously - largely starting in 1980. Of course, in 1994 there was... I don't
know precisely what produced this extra degree of ideological fanaticism among
the incoming Republican Congress, of which Newt Gingrich was the leader. Reagan
was not very subtle but compared to them Reagan was... you know, Machiavellian.
They just went after the government itself! One of the chief strategists of
this Gingrich revolution, the Contract with America - you may recall it was
their program, their platform - was Grover Norquist, the head of Americans for
Tax Reform. He is perhaps the most influential lobbyist in Washington - he convenes
a weekly or monthly meeting of Congressional leaders and conveys to them what
his business constituents are thinking. Anyway, in 1994 after this victory he
was rubbing his hands and said not only "we're gonna shrink the federal
government" but "we're gonna shrink it so small that we can drown it in a
bathtub".

RL: And
you mean that they were successful?

Scialabba: Well, not fully successful, but the movement towards
plutocracy accelerated. It began in 1980, it accelerated in 1994 and in 2001 there
was another turn of the wheel, especially concentrating on the executive power
and secrecy in foreign policy. And now American democracy is just very sick.

RL: How do
you explain the drive of this sick democracy to impose this same democracy on
the rest of the world? Afghanistan, Iraq - these campaigns went under the
slogan of spreading democracy.

Scialabba: The tobacco industry once had a slogan: "Even doctors
do it!" And there was a picture of a man in a white coat smoking a cigarette -
in the 1950s. Slogans are just that. The American crusade for democracy is
simply and solely public relations.

RL: You
mean, it's purely propaganda?

Scialabba: Yes, it's purely propaganda. That's not to say that
there aren't mid-level, perhaps even high-level, people in the foreign policy
apparatus who believe that somewhere down the line, in some way that's
not very clear, American intervention will lead to more political freedom. But
the US has had really hegemonic power over a large number of client states
throughout the 20th century - starting in the Philippines in the
very early 20th century, then Cuba, Nicaragua, all of Central
America, most of Latin America and many small states in Asia and Africa. In all
of those states a word from the American ambassador on freedom of the press or
freedom of speech would've had a significant effect. The United States continually intervened
whenever there were threats to the freedom to do business. A country
like Iran decided in 1953 that the agreement it had with American and
British oil companies really was not fair - in fact, ridiculously unfair -
in fact, a national humiliation! So they elected a moderate social
democrat. Immediately the CIA was tasked with overthrowing him - and it did.
In 1954 in Guatemala, which was practically a fiefdom of the United Fruit
Company, there was a large amount of land that the company owned but was not using.
The Guatemalans elected another social democrat, Arbenz, who said: 'We're gonna
take this land and distribute it to our landless population, and we will
compensate you - we will pay United Fruit fair value for the
land.' But the very idea that a client state could contravene an American
business interest merely for the welfare of its citizens was too much! The CIA
was tasked with... Well, in this case it was the Guatemalan military - supplied
and trained by the US military, and themselves unhappy - who were
simply given approval to overthrow the government, and the government was
overthrown.

RL: But
let's stop here because otherwise it will take us an hour to...

Scialabba: My point is just that if you want to look at what the
United States government really cares about, look at what prompts it to
intervene. How often does it intervene in countries where there's no political
freedom but American business is free to operate? How often did the US
intervene to promote political freedom? I can't think of one instance. How
often has it intervened when a movement or a government has attempted to
constrain the freedom of American business in order to enhance the welfare of
its population? Innumerable times.

RL: Are you
a Russian spy?

Scialabba: Ha! No, of course, like most American leftists I
loathe Leninism and Stalinism. In fact, it seems to me that the worst thing
that ever happened to socialism, really... If only Lenin had fallen under the
train at the Finland Station instead of riding in it, the world would be a much
happier place.

RL: Well,
Lenin died a long time ago, but are you a contemporary Russian spy?

Scialabba: No, no! I mean, Putin is just a gangster. And there
has never been socialism anywhere in the world. (Laughing.)

RL: But do
you see where my question is coming from? Your critical comments about the
American state undermine its glory - its self-righteousness and strength. Do
you want to make the United States weaker?

Scialabba: Well, again - you have to disaggregate. 'The United States'
- what exactly is that? "The United States benefits from globalisation." Oh yes?
Who exactly? "The United States despises the government of Cuba." Who exactly?
"The United States considers Iran the biggest threat to the world peace." Well,
who exactly? You know, in all of those cases it's not you, me, people on the
street...

RL: I
agree that these are the right questions to ask but are the answers available?
Is the system transparent enough for the answers to be reliable?

Scialabba: No, I think the system is set up not to look for answers
to those questions but rather to marginalise them - to distract the population
from taking an interest in those questions and asserting their own views,
figuring out their own interests.

RL: What could
bring a major change?

Scialabba: Well, if only everyone would subscribe to The Baffler, I think...

RL: But
apart from that, is there anything else you can think of?

Scialabba: People are always asking Noam Chomsky that question.

RL: And
what does he usually say?

Scialabba: He sometimes says: "Well, you know as well as I do.
What do you think?" But more often he says: "You know it perfectly well - just
go and do it!" Asking the question is almost the irreversible beginning of
serious social change, because the recognition of the need for major change is
the main obstacle. Once people start asking themselves... "Why is it that the
head of Blackstone Fund, who gave a ten-million-dollar birthday party for his
daughter, pays a lower proportion of income tax than I do? I must write to my
congressman about that! And I must organise a little study group among my
friends to find out if that's an exception or if that's the rule. If I don't
hear back from my congressman then maybe my group and I will get up a petition,
or maybe we'll go to Washington and speak with his aide, or maybe we'll get in
touch with other groups in other cities." Once you ask the question and decide
to do something about it, then... you know, the American can-do spirit will come
to your aid. Americans can do things once they decide to do them! But American
culture, most of it, is a gigantic and very effective distraction from political
self-mobilisation.

Scialabba: Well, I think that I have been fortunate in my
deprivations and misfortunes. That is, my parents were literate, but just
barely - I don't think I ever saw them reading a book. So I wasn't
socialised into middle-class aspirations - I wasn't instilled with these ideas
of prestige, success, financial security... Instead I latched onto the only thing
in my childhood environment that looked interesting and exciting at all, which
was the church. At first the church meant the Franciscan Order, which ran my
home parish, but the Franciscans were just kind of... a little dull. They were kind
of unbookish and... a bit simple-minded, so I don't know what I would've done.
But I was recruited by Opus Dei, and Opus Dei was all about, you know, being a
saint but in the world - in the professional world. So they said: "By all
means, go to the best college you can get into and study hard. But also you must
have this very rigorous spiritual life - we want you to be a saint as well as a
success." So I had a very rigorous spiritual life in college: I meditated for
an hour a day, I read the gospels every day, did spiritual reading every day, prayed
the rosary every day, went to mass and communion every day... But it did give me
a hunger for the ideal - it did give me a sense of commitment to something
larger than myself and my own success. And once that hunger, that taste is
awakened in someone, it stays. So because I had the fortunate misfortune not
to be brought up with middle-class values and aspirations, because I had the
fortunate misfortune of being recruited by this fanatical Catholic
counter-reformation group that insisted that you sacrifice everything for God
but still be in the secular world, I wound up in my twenties secular and...

RL: ...and
clueless.

Scialabba: And clueless - but with this hunger for the ideal,
still.

RL: And
you think that this hunger for the ideal is the main source of your critical
attitude towards the status quo?

Scialabba: Yes. I mean, there are plenty of people without this
history who have the same critical attitude toward American democracy and the
same utopian, if you want to call them that, ideals. People come to visions and
sentiments by their own paths but... Yes, I think this was my path.

RL: You
have forty years of recorded clinical depression - from 1972 to 2012, if I'm not
mistaken. My main surprise when reading these published records of yours was
that it's not clear what depression is - not clear at all. Have you now,
retrospectively, understood what kind of aggregate is called depression? What
is it?

Scialabba: Well, the closest I've come is recorded in an essay
in the last collection I published, For the Republic. The essay is called
"Message from Room 101". In Orwell's 1984, you may remember, Room 101 is
the place with which Winston Smith is continually threatened - Room 101 is where the
worst thing in the world awaits you. In that essay I tried to describe what
depression was like and where it comes from, and tried to draw a political moral
from it all. I think it's fundamentally a matter of your psychic constitution.
We all have a certain amount of resiliency - you can think of it as a shock
absorber, like the one in your car. We all go through a certain amount of
stress - you can think of it like potholes in the road. If you are unlucky
enough to be born with a poor shock absorber and happen to encounter a lot of
stresses, that is, a very rough road, then your shock absorber gives out and
the car starts to shudder and shake - it's a very bumpy ride. I think I was
born with a weak shock absorber.

RL: From
what you have read, isn't there a way to improve this 'shock absorber'? Or if
you're born with a weak one you'll die with a weak one?

Scialabba: Well, I think if you inherit an independent income or
if you're lucky enough to be born in a society which actually cares for its
citizens - you know, Scandinavia and other European social democracies - or if
you're lucky in other ways then the shocks in your life are rendered less
traumatic. Yes, that's in some ways the political point of the essay. It makes
a rough calculation of how much extra disposable wealth the top 1% of the
population has and how many depressed people there are in the US at any one
time - it divides the surplus wealth by the number of depressed people and comes
out with a figure close to a million dollars.

RL: A
million dollars for what?

Scialabba: For each depressed person. A million dollars of
surplus wealth on the part of the 1% for each depressed person - 40 000
depressed people or something like that.

RL: And if
each of those 40 000 people received a million dollars it would reduce
that number to zero?

Scialabba: Not necessarily zero. For some people it's not
financial insecurity - it's other kinds. But I think it would mop up quite a
bit! (Laughing.) Certainly it would, I think, have mopped up mine.
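
(To put the arithmetic just described in equation form - a rough sketch using only the figures mentioned in this conversation; the $40 billion total is merely what the two stated numbers imply, and the published essay's actual figures may differ:)

\[
\frac{\text{surplus wealth of the top 1\%}}{\text{number of depressed people}}
\;\approx\;
\frac{\$40\ \text{billion (implied)}}{40\,000}
\;=\;
\$1\ \text{million per person}
\]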

RL: So in
a way you have made a political point even out of your long experience with
depression.

Scialabba: I tried, yeah.

RL: I'm
impressed. There's one aspect though which I think it would be hard for you to
make a political point of... Your understanding of luck - what is it based on?

Scialabba: I would've thought that question is kind of
intrinsically uncomputable. Why luck? I mean, the definition of luck is
something for which there is no... Why?

RL: Well,
there are several definitions of luck but one of them refers to the lack of
understanding of its causes. What Aristotle tried to do in a couple of his works
was to understand what determines luck: whether it's just sheer chance, e.g.
when throwing a die, or whether there's something in one's mental constitution or
mind, or whether it's simply human nature - 'the constitution', as you said.
Aristotle thought that some of these do attract luck and some of them push it
away.

Scialabba: Well, that's true but what accounts for that
differential - attraction or lack of attraction - I would say that's luck. Some
people have it, some people don't.

RL: Are
you a determinist?

Scialabba: Well, this again is one of those words about which
one could argue endlessly. But in a rough sense - yes, I don't believe in free
will.

RL: You
had a free will to meet me or not.

Scialabba: There's a short book that I've just finished reading
that answers that question - it's called Free
Will by Sam Harris, the famous atheist who wrote The End of Faith. I pretty much agree with him that... There were
vectors in all directions - do I want to meet this unknown Northern European
person? Well, yes! I would like to be known in Latvia - I would like my friend Sven
Birkerts to be impressed by my appearing in the same magazine he appeared in.
But, on the other hand, it means getting up early in the morning. I have to get
up at 11 to meet this person. All the vectors together just balanced out and...

RL: And
this was a conscious calculation?

Scialabba: Not conscious - semi-conscious.

RL: How do
you understand Wittgenstein's last words: "Tell them I've had a wonderful
life"?

Scialabba: I've pondered it a great deal... I think, among many
other things, Wittgenstein had a saintly streak. You know, he was capable of
giving up his entire fortune and going off to teach schoolchildren in the Austrian mountains.

RL: Yes,
but look who got his money - instead of giving it to the poor he gave it to his
relatives, artists etc. But yes, I know what you mean - he had sort of ascetic
aspirations.

Scialabba: Yes. So I think it may have meant simply that he
wanted to make his friends feel good about his death. Or it may have meant that
the satisfaction of achieving certain insights was so intense, so compelling
that it made up for all of those dreadful depressions that he went through. I would
guess it was one of those - or maybe some of both. But it's just a guess.

RL: And on
the level of 'just a guess', what do you think could or should be your own last
words?

Scialabba: Hm... In this book that I just read by Sam Harris on
free will he quotes the last words of the Primate of England, who was an
unusually liberal Catholic. As he was dying he said: "I'll see you in purgatory
I guess." You know, it would've been conceited to say 'I'll see you in heaven',
it would've been frightening to say 'I'll see you in hell'... I don't know, I'll
say whatever I think the people - a person or persons there - what would make
them least unhappy.

RL: So
even in your last words you would care for them, not for...

Scialabba: Of course, if the only person at my deathbed were
Henry Kissinger, I would try to say something that would make him very unhappy!
(Laughing.)

RL: My last
question - what is the most important thing you have understood in life?

Scialabba: Oh, what a marvellous question... (Sighs.) I would say the sheer fact that
it all fits together, that it's intelligible down, down, down - as far at
least as I can go, and possibly as far as any human can go. The few glimpses I've had
of the vast intelligibility of the human and physical universe are the biggest
thrill I've ever had.

RL: And
this intelligibility never led you to a thought that it might exist thanks to a
superior intellect?

Scialabba: Well... I sometimes have wished that I had never
started thinking, but... It's another uncomputable thought - you can't... you can't
be unconscious once you're conscious.

RL: No way
back, you mean?

Scialabba: No way back.

RL: And
you think that if you're conscious you cannot accept that there's an intellect
which has devised the whole world and all of us?

Scialabba: Oh, you can if that's where your thinking leads you.
But once you've understood something - you may change your mind about it but
you can never again not have understood it.

RL: And
you understood at some point that there is no God?

Scialabba: As it seemed to me - yes. And I haven't seen any
reason to change my mind, although I'm still pondering the question. What do
you think?

RL: There
was an Orthodox priest who once said that there is no true faith in anyone
unless that person has seen eternity in somebody else's eyes - without that,
all faith is just a façade. So I wish you to meet someone in whose eyes you
could see eternity.

[END]

Cross-cultural comparisons are
thrilling but perilous. Pronouncing authoritatively about one culture is
difficult enough; running the gauntlet of two communities of academic specialists
is a daunting prospect. Fortunately, Morris Berman is intrepid.

Most historians would be content to
have written one deeply researched and interpretively wide-ranging trilogy on a
large and important subject. Berman has written two: one on alternative forms
of consciousness and spirituality (The
Re-enchantment of the World, Coming to Our Senses, Wandering God) and one
on the decline of American civilization (The
Twilight of American Culture, Dark Ages America, Why America Failed). The
second trilogy, a grimly fascinating inventory of the pathologies of
contemporary America and an unsparing portrait of American history and national
character, is a masterpiece. Unsurprisingly (considering how self-critical and
historically informed most Americans are), it was not well received. At
interludes while writing his grand historical syntheses, Berman has also
produced fiction, poetry, a memoir, and a volume of essays.

He has returned to the grand scale
and the prophetic mode in Neurotic Beauty.
Even the most pessimistic of prophets cannot help looking for hopeful signs.
Berman ended his "American decline" trilogy on a despairing note. Four
centuries of relentless territorial expansion and manic economic growth have
left American resources exhausted and American society in a state of befuddled
anomie. And it seemed as if the rest of the world had been so thoroughly
Americanized that there was little chance of escaping a global collapse and a
subsequent Dark Age, this one probably resembling dystopian science fiction rather
than medieval torpor.

Like many other jaded Westerners,
Berman turned toward the East, searching not so much, however, for interior
solace as for glimpses of a viable human future. Looking beneath Japan's
Westernized surface, he finds a submerged psychic and cultural stratum, which
contains some possible antidotes to the consumerist and individualist fevers
that have driven the US to delirium.

According to Berman, Japanese culture
has two sources, both external. In the 6th century, itinerant Chinese
and Korean monks brought Buddhism to Japan, thereby opening the country to
large-scale importation of Chinese culture. There was little Japanese culture -
in fact, no written language or legal system - before that time, and Japanese literature
and institutions remained imitative of Chinese exemplars for many centuries.

The Japan that eventually emerged was a peaceful and prosperous
society, even if isolated. Its isolation did not protect it, however, from the second
great event in Japanese history: the arrival of the American fleet under
Commodore Perry in 1853. With supreme arrogance, Perry informed the Japanese that
if they did not open their country to trade with the West, he would bombard their
capital. The Japanese submitted, but so intense was their humiliation that the
country's leaders embarked on a crash course of military and industrial
development, to catch up with the Western imperialists.

The Western imperialists did not, of
course, look kindly on this ambition. The resulting competition for markets and
resources led to war in the Pacific, which ended with an even greater trauma
for Japan. The Japanese reacted, as before, by imitating their conquerors, once
again to the point of outstripping them, at least by some measures.

Today, though, as Berman demonstrated
at great length in his "American decline" trilogy, their conquerors are looking
less and less worth imitating. Japan is still a country of bullet trains and
elegant skyscrapers, as well as the world's largest net creditor, with a higher
average standard of living than the United States. But resistance to Western
modernity is growing. Not only have prominent Japanese literary figures, like the
aristocratic Yukio Mishima and Japan's first Nobel laureate in literature, Yasunari
Kawabata, carried their protests over the erosion of the country's cultural
traditions to the point of suicide, but an astonishing number of young
adults - around a million, by some estimates - have in effect seceded from the
society and economy, withdrawing with their books and video games into a
bedroom of their parents' house and not emerging for years at a time. These hikikomori, or "recluses," one
sociologist writes, are an "utterly rational indictment" of Japanese society, which
offers them eighty-hour workweeks at meaningless jobs, usually with long
commutes. The fate of many of those who accept the eighty-hour week is a stern
warning: Japan's suicide rate is twice that of the US.

Another million young adults are
unemployed, not in school, and not looking for work. Another 3-4 million are
working part-time at dead-end jobs and (mostly) living at home. There is also a
disturbing "celibacy syndrome": a third of Japanese youths between 16-24 say
they have no interest in sex; a third of people under 30 have never dated
anyone; and fewer than half of all those from 18-34 are in any kind of romantic
relationship. In quantitative economic terms, at least compared with the US,
Japan is a success. But more and more Japanese feel a deep malaise.

The reason, Berman suggests, is that
unlike Americans, the Japanese know that there is more to life than getting and
spending. "Japan remembers what it is like to be old, to be quiet, to turn
inward," writes a Japanese academic. The long centuries of isolation and
self-sufficiency before the mid-19th-century American irruption are
"in the nation's DNA." Reading that DNA, and patiently explaining to impatient
Americans what it is that the Japanese know, is the aim - and achievement - of Neurotic Beauty.

One thing the Japanese know is
nothing; or better, nothingness. As Berman emphasizes, there are two kinds of
nothingness, which are actually two different ways of experiencing nothingness.
When possessions and sensations - stimuli - are eagerly pursued, they will
sometimes be used up or unavailable. The result is negative nothingness, a
state of anxious deprivation. But when stimuli are considered distractions and
are forsworn, positive nothingness results: a state of pure, concentrated
attention or mindfulness. This is the frame of mind in which the Japanese craft
masters - sword makers, potters, calligraphers - and athletes - archers,
martial artists - have worked. It is also the precondition of enlightenment in
Zen Buddhism.

Zen is quintessentially Japanese,
Berman writes - for better and worse. The power to concentrate attention is,
after all, morally neutral. One can be a mindful pacifist or a mindful
militarist. During the 1930s, as Japanese nationalism reached fever pitch, the
prestige and techniques of Zen Buddhism were frequently co-opted by the state.
Unlike most other religions, Zen lacks an "axial" principle, an objective or
transcendental criterion of morality, like the will of God or the dignity of
the individual. This has spared Japan the dogmatism of more religious societies
and the litigiousness of more liberal ones; but it has left many Japanese with
no moral center, no means to withstand group pressure or the tides of history.

This is, Berman points out, at once a
strength and a weakness. In emergencies, Japanese typically behave with
extraordinary self-restraint and orderliness. (And not just in emergencies: the
stampedes that occasionally kill shoppers at big department-store sales in
America are inconceivable there.) But initiative ("thinking outside the box" in
management-speak) is just as spectacularly lacking; and the conscientious
objector, the stubborn moral individualist, is a rare character type in Japan. The
nuclear disaster at Fukushima offers a poignant illustration: workers and
residents stayed calm and shared food and shelter freely among themselves; but
executives at the Tokyo Electric Power Company covered up to protect their
superiors and punished whistleblowers.

Should one admire this distinctive
capacity for self-sacrifice and national unity or deplore it as abject
conformism? Both, obviously; but a more interesting question is: can a world that
has overdosed on assertive individualism and manic consumerism of the American
variety learn something useful from Japanese culture? Berman thinks so. Economic
austerity is nearly universal today, and may be for quite a while - for that
matter, the environment may not survive another epoch of capitalist prosperity.
As one Japanologist points out, the country seems to have a "gift for
minimalist living."

American
systems and assumptions based on constant growth, wealth and prosperity, many
of which are pathologically corrupt, are dying fast. The demands of the new
world we live in feel a lot more Japanese - equitable, careful, quiet, and
modest.

The Japanese, Berman observes, seem
to have attained something like "luxury in austerity," the elements of which
include "aesthetic awareness (the presence of beauty and sensuality in daily
life); care, precision, and mindfulness; continuity with the past." Traditional
craft values are incorporated into contemporary industrial design and
processes. Berman calls it "archaic modernism."

For a very long time - perhaps
forever - American individualism and the distinctively American dream of
limitless abundance must be renounced, or they may prove lethal. Of course the
world still needs, and will always need, American ingenuity, tolerance,
self-reliance, and our culture's many other virtues. But a humbler America must
now, for the first time, learn another culture's virtues if the world is to
avoid another Dark Age.

[END]

George Scialabba is a contributing editor of The Baffler and the author of What Are Intellectuals Good For? and
other books.

]]>
The Givenness of Thingstag:www.georgescialabba.net,2015:/mtgs//2.15682015-07-05T23:32:38Z2017-07-22T20:21:14Zadmin

In the impassioned polemic,
"Darwinism," that opens The Death of Adam
(1998), the first of her four philosophical/theological essay collections,
Marilynne Robinson hurls a flaming spear at all of modern thought:

Now that
the mystery of motive is solved - there are only self-seeking and aggression,
and the illusions that conceal them from us - there is no place left for the
soul, or even the self. Moral behavior has little real meaning, and inwardness,
in the traditional sense, is not necessary or possible. ...[T]here is little use
for the mind, the orderer and reconciler, the artist of the interior world.
Whatever it has made will only be pulled apart. The old mystery of subjectivity
is dispelled; individuality is a pointless complication of a very
straightforward organic life. Our hypertrophic brain ... that house of many
mansions, with ... all its deep terrors and very rich pleasures, which was so
long believed to be the essence of our lives, and a claim on one another's
sympathy and courtesy and attention, is going the way of every part of
collective life that was addressed to it - religion, art, dignity,
graciousness. Philosophy, ethics, politics, properly so called. ... [H]ow much
was destroyed, when modern thought declared the death of Adam.

Robinson has not ceased from mental
fight since then, nor has her sword slept in her hand, though she has also,
over the same period, published three exquisitely beautiful, wholly
untendentious novels, the acclaimed Gilead trilogy. With an equal abundance
(though varying proportions) of eloquence and lyricism, her essays have made a
case for her Calvinist theological vision, while her novels have made a world out
of it.

Religious thinkers have tried to beat
back rationalism in many and various ways. Pascal cajoled; De Maistre snarled;
Kierkegaard mocked; Newman preached; Chesterton punned; C.S. Lewis allegorized.
Robinson mystifies. I don't mean that she's ever willfully misleading or obscure,
but rather that she looks at the commonplace and continually finds the uncanny,
the ineffable, the mysterious. The idea that human nature and behavior are
lawlike and predictable, that sociobiology or psychoanalysis can even begin to
account for the complexity and depth of a single personality, she regards as a
fatal failure of imagination. Like Blake shuddering at a clockwork universe,
she prays: "May God us keep/From Single vision and Newton's sleep"; though her own
bêtes noires are Darwin and Freud.

More particularly, it's their contemporary
epigones she objects to, especially evolutionary psychologists and cognitive
neuroscientists, who draw out and embrace the supposedly anti-religious
implications of Darwin's and Freud's discoveries. Evolution may be
unimpeachable, she acknowledges: we may have descended from creatures without
minds, through modifications caused by accidental genetic mutations that
conferred advantages or disadvantages in the competition for survival. Yet we have minds, and souls too, she
insists, however we got them; and not all of life is a competition for
survival. There may be no thought without brain activity, but brain activity is
not all there is to thought.

"Merely" is a fighting word in
Robinson's lexicon. The self is not "merely" physical; virtues are not "merely"
adaptations. The gravest intellectual sin in her catechism is reductionism: of
mind to computation, generosity to self-interest, beauty to functionality, love
to desire. We don't even know what "physical" means, she protests: "On scrutiny
the physical is as elusive as anything to which a name may be given." [8] It
"frays away into dark matter, antimatter," and beyond; it is "a pure artifact
of the scale at which, and the means by which, we and our devices perceive."
[9] It is sheer arrogant pretense to employ such an unstable category to
discredit the idea of the soul, whose character was "established in remote
antiquity, in many places and cultures, long before such a thing as science was
brought to bear on the question." [9]

Evolutionary psychology is equally
pretentious and vacuous, with its insistence that any traits not obviously
necessary to "establish and maintain homeostasis ... to live and propagate" are
somehow less real than those that are. "So generosity is apparent and greed is
real, [and] the great poets and philosophers toiled in the hope of making
themselves attractive to potential mates." Like cognitive neuroscientists,
evolutionary psychologists are soul-deniers.

The soul is Robinson's guiding idea,
her master concept (not to mention her stock in trade as a novelist). "A very
great deal depends, perhaps our humanity depends, on our sensing and acknowledging that quality in
our kind we call the soul." [235] Nonphysical, it is and is not the self; it is
stained by moral failings but "untouched by the accidents that maim the self or
kill it." The soul is "sacred" and "immortal," a "statement of the dignity of a
human life and of the unutterable gravity of human action and experience." Heroism,
creative fire, immortal longings are irrefutable evidence of the soul. That science
will someday explain it in terms of "nuts and bolts ... signals and receptors"
is an empty promise.

Science is, in any case, no longer in
a position to make promises. With recent developments in quantum physics and
cosmology, Robinson declares, our conception of Being has exploded and all bets
are off. Undecidability, indeterminacy, non-locality, entanglement, multiple
universes - nowadays the scientifically literate must be prepared to swallow
ten impossible things before breakfast. Reality is so very strange, it appears,
that believing in God, immortality, and free will is hardly a stretch anymore.
If even space and time are utterly mysterious, why expect that grace or the
soul will be any less so? "Anyone who has spent an hour with a book on the new
physics knows that our old mechanistic thinking, useful as it is for so many
purposes, bears about the same relation to deeper reality that frost on a
window pane bears to everything beyond it, including the night sky and
everything beyond that." [210]

What might Steven Pinker, Daniel
Dennett, or E. O. Wilson (all of whom Robinson pillories in one essay
collection or another) say in response to her anathemas? They would first of
all, I imagine, reject her insinuation that Darwinism logically implies Social
Darwinism and indeed was partly inspired by Malthusianism. The fact that the
forceful and cunning usually win the contest for survival in the state of
nature does not guarantee them equal success in the state of culture. Indeed, there
is no contest for survival in the state of culture, since culture is simply a
society's way of spending its surplus over subsistence. Moreover, force and
cunning are not the only successful strategies even in the state of nature;
cooperation often trumps them. On the primeval savannah, teamwork sometimes
outwitted tooth and claw.

Neuroscientists might point out in
their defense that the up-to-date among them no longer refer to the brain as a
hunk of meat (which greatly annoyed Robinson) but as a design space, and to the
mind as a software module. (Though this may annoy her only slightly less.) As
for the soul, they may respectfully reply that while they're perfectly
comfortable with the vernacular sense of "soul" or "spirit," the metaphysical
sense - something "untouched by the accidents that maim the self or kill it," and
which will be reunited with the body at the Last Judgment - is just a blank to
them, so could Robinson please try once more to explain it?

And the new physics - does it license
Robinson's ontological maximalism? Some physicists think so; some don't; most have
no opinion and no interest. But even those who agree with her generally limit
their wilder imaginings to the subatomic or intergalactic spheres. Few would
agree that because quarks may have free will, people have free will; or that
because an electron may be in more than one place at the same time, a mountain
may be in more than one place at the same time; or that because subatomic
events may have no cause, everyday events may have no cause. Very freaky things
do happen at quantum scales, but statistically they cancel out. In any case,
wouldn't it be a bit perverse of God to have made His existence seem so
implausible from Laplace to Bohr?

The Givenness of Things is by no means wholly polemical. Much of it is devoted to
theological discussion, or meditation, about the interpretation of scripture,
the nature of Christ, and the inexhaustibly inventive and persevering love of
God for humanity. The prose is as finely wrought as in any of Robinson's novels,
though less relaxed and elegiac. Not every reader will be convinced by her
arguments, or even understand them, but any reader not tone-deaf will be
enchanted by her grave, urgent music.

The surest way to take the moral
measure of a professedly devout Christian is to ask how she feels about Matthew
25, where Jesus says to the saved and the damned: "What you have done to the
least of my brothers and sisters, you have done to me." Robinson feels very
strongly about it. "The souls we let our theories and our penuries frustrate
are souls still, and, if Jesus is to be trusted, they will be our judges, they
are now our judges." [235]

Amen.

[END]

George Scialabba is a contributing editor of The Baffler and the author of What Are Intellectuals Good For? and For the Republic.

]]>
The World Beyond Your Head: On Becoming an Individual in an Age of Distraction by Matthew Crawford.tag:www.georgescialabba.net,2015:/mtgs//2.15622015-06-24T19:56:10Z2017-07-22T20:20:34Zadmin

Aristotle and Marx may not have agreed on much else, but they agreed on
the purpose of life. Aristotle defined the highest happiness as "the pursuit of
excellence to the height of one's capacities in a life affording them full
scope." For Marx, the mark of a rational, humane society is that free, creative
labor has become "not only a means to life, but life's prime want." Not
leisure, not entertainment, not consumption, but creative activity is what
gives human beings their greatest satisfaction: so say both the sage of
antiquity and the prophet of modernity.

How much creative activity does work life in the contemporary United
States encourage or allow? "Creative" is not a well-defined word, so no precise
answer is possible. But it's hardly controversial that the "de-skilling" of the
workforce has been the goal of scientific management since the beginning of the
industrial age, and is accelerating. In an invaluable recent book,[1] Simon
Head tracks the rapid spread of Computerized Business Systems (CBS): job-flow,
business-process software designed to eliminate every vestige of initiative,
judgment, and skill from the lives of workers and even middle managers. CBS, he
writes, "are being used to marginalize employee knowledge and experience," so
that "employee autonomy is under siege from ever more intrusive forms of
monitoring and control." Head cites a 1995 report that "75-80 percent of
America's largest companies were engaged in Business Process Reengineering and
would be increasing their commitment to it over the next few years," and a 2001
estimate that 75 percent of all corporate investment in information technology
that year went into CBS. They're expensive, but they're worth it:
insecure, interchangeable workers mean lower labor costs.

The end result of de-skilling was foreseen nearly 240 years ago by one of
capitalism's earliest and most penetrating critics:

The man whose whole life is spent in performing a few simple operations, of
which the effects are perhaps always the same, or very nearly the same, has no
occasion to exert his understanding or to exercise his invention in finding out
expedients for removing difficulties which never occur. He naturally loses,
therefore, the habit of such exertion, and generally becomes as stupid and
ignorant as it is possible for a human creature to become. The torpor of his
mind renders him not only incapable of relishing or bearing a part in any
rational conversation, but of conceiving any generous, noble, or tender
sentiment, and consequently of forming any just judgement concerning many even
of the ordinary duties of private life... But in every improved and civilized
society this is the state into which the labouring poor, that is, the great
body of the people, must necessarily fall, unless government takes some pains
to prevent it.[2]

Prescient
though he was, Adam Smith did not foresee the degree to which the state would
become a largely owned subsidiary of business, with no interest in preventing
the stultification of "the great body of the people."

In recent years de-skilling has been
joined by omnidirectional saturation advertising in a pincer movement aimed at
turning our non-work as well as our work lives into profit centers. Matthew
Crawford's brilliant and searching new work of social criticism begins with
a familiar modern ordeal: boarding an
airplane. Those plastic bins you put your shoes, wallet, and keys into? It
dawned on some marketing genius that the insides of them could be plastered
with ads. The tubes of lipstick advertised on the bottom of Crawford's bin
resembled flash drives, so he almost failed to retrieve the flash drive containing
the lecture he was flying somewhere to give. Once past security, he looked for
a quiet place to sit and think. Forget it - shops, huge ad posters, TVs, "the
usual airport cacophony." Virtually every inch of this public space made a
claim on his attention for private commercial purposes. Except one: the
business class lounge, the only place in the airport quiet enough to work, where
the samurai of commerce sat devising the innovative marketing and
business-process strategies that appropriate and direct the attention of
everyone else, including the poor shlubs in the rest of the terminal.

These banal frustrations gave rise to
some original reflections on the political economy of attention.[3] Though we rarely think of it this way, control of
our attention is both a public good - a commons - and an individual
right. In public places like airports, subways, buses, stadiums, streets, and
schools, and even more in quasi-public spaces like television, newspapers, and
social media, our attention is sold to advertisers in ever finer increments.
This is, Crawford suggests, strictly analogous to environmental pollution and
the plundering of public resources.

There are some resources that we hold in common, such as the air we breathe
and the water we drink. We take them for granted, but their widespread
availability makes everything else we do possible. I think the absence of noise
is a resource of just this sort. More precisely, the valuable thing that we
take for granted is the condition of not being addressed. Just as clean
air makes respiration possible, silence, in this broader sense, is what makes
it possible to think. We give it up willingly when we are in the company
of other people with whom we have some relationship, and when we open ourselves
to serendipitous encounters with strangers. To be addressed by mechanized means
is an entirely different matter.

But for those who hold to the
psychological model of rational choice that underlies neoclassical economics,
it is not a different matter. On that
view, decisions are made by ranking all available options on a single,
unidimensional scale of utility or desirability. Ads, however seductive, are
simply information, and the more information we have, the better - freer - are our
choices. Consumers have a right to be bombarded with solicitations, however
distracting, just as workers have a right to accept any conditions of
employment, however degrading or unhealthful. Any government interference
between seller and buyer or employer and employee is paternalism, the bane of
American liberty.

Although even economists are beginning
to abandon these simplistic notions of individuality and freedom,[4] they continue to inform the official ideology of
our governing party and are entrenched in law and policy (to the advantage, not
coincidentally, of sellers and employers). Crawford proposes a different model
of individuality and choice, at once traditional and radically new. Expounding
it, with richly informative excursions into neuroscience, experimental
psychology, intellectual history, mass culture, skilled crafts, and sports, is
the main business of The World Beyond
Your Head.

The detached, autonomous self of
rational choice theory assumes the possibility of what philosophers call "the
view from nowhere." In quest of empistemological certainty, we "take a detached
stance toward our own experience, and subject it to a critical analysis from a
perspective that isn't infected with our own subjectivity." Analogously, moral
autonomy requires (paraphrasing Kant) that we "abstract from all objects ... they
should be without any influence at all on the will, which should not bend to
outside forces or attractions but rather manifest its own sovereign authority."

Though these formulations did much
useful work historically, asserting and defining human freedom against
oppressive traditional authority, they don't, when pushed to the limit, hold
up. There are always initial conditions, presuppositions, things our previous
experience has primed us to notice or overlook; there are always pre-existing
appetites, values, commitments. We can't abstract from all these things when
making judgments or choices, because they are, taken together, us. Our selves
do not exist apart from circumstances, accidents, constraints. We are situated beings. "How we act is not
determined in an isolated moment of choice; it is powerfully ordered by how we
perceive the situation, how we are attuned to it, and this is very much a
function of our previous history of shaping ourselves to the world in a
particular way."

What this means in practice is
illustrated by Crawford's superbly detailed, psychologically astute
descriptions of motorcycle riding and repair (the subject of his previous,
best-selling Shop Class As Soulcraft),
glass blowing, short-order cooking, organ-making, and other demanding skills. In
each case, a beginner submits to the rules and traditions of some practice. A
sustained narrowing of focus and intensification of discipline gradually yield
a wider vision of possibilities and an increase in freedom of action.
Internalizing the past of the activity and identifying with the community of
its practitioners make one capable and desirous of carrying it forward - of
creating something new. The joint attention required by any shared effort
creates a new viewpoint, in which our genuine individuality is more accurately
perceived and more reliably confirmed. Rootedness, obedience, and
self-limitation are thus the conditions of autonomy and mastery. Crawford
summarizes:

Genuine agency arises not in the context of mere choices freely made (as in
shopping) but rather, somewhat paradoxically, in the context of submission to things that have their own
intractable ways, whether the thing be a musical instrument, a garden, or the
building of a bridge. ... When we become competent in some particular field of
practice, our perception is disciplined by that practice; we become attuned to
pertinent features of a situation that would be invisible to a bystander.
Through the exercise of a skill, the self that acts in the world takes on a
definite shape. It comes to be in a relation of fit to a world it has grasped.

But does individual character matter in
a liberal democracy? On the neoclassical model, work, culture, and politics are
mutually independent. In the political marketplace as in every other, we are
presented with an array of options, inform ourselves about them, compute our
preferences, and select one. We decide in much the same way as IBM's Deep Blue
decides on chess moves: we start from scratch every time and calculate. Of
course the analogy is imperfect: computers don't have habits, prejudices,
impulses, or memory lapses; and their capacity for attention is virtually
unlimited. The neoclassical model needs quite a number of simplifying
assumptions, like pre-Copernican epicycles. But the alternative - to
acknowledge that humans are not simple utility maximizers with arbitrary
preferences and unbounded desires, but rather that there is a hierarchy of
human goods, with limits on the scale and rhythms within which we can flourish
- would upend our current political economy.

It would mean, among other things,
revulsion against what work has become, or is becoming, for all too many of
those Americans lucky enough to have jobs: not merely ill-paid and insecure but
just as importantly, repetitive, stressful, wholly scripted. The only way
workers can resist this degradation is (as Adam Smith pointed out) collectively.
But neoclassical economics frowns on unions, committed instead to the fiction
that fully autonomous individuals can negotiate freely and on equal terms with
large corporate employers; and likewise to the dogma that the only proper
subject of this negotiation is the price of the employee's labor, not its
meaning.

Seeing past the liberal model of
individual autonomy might also mean recognizing that consumerism can have civic
consequences. Just as atmospheric fine particles can clog our lungs and impair
our society's physical health, an unending stream of commercial messages, some overwhelming
and some barely perceptible, can clog our minds, fragment our attention, and, in the long run, impair our society's mental and civic health. Even
intelligent and straightforward ads, in sufficient quantities, might do this to
us; the dumb and manipulative ones we are daily subjected to are surely
accelerating our moronification. "Please," Crawford pleads, at once jokingly
and in earnest, "don't install speakers in every single corner of a shopping
mall, even its outdoor spaces. Please don't fill up every moment between
innings in a lazy college baseball game with thundering excitement. Please give
me a way to turn off the monitor in the backseat of a taxi. Please let there be
one corner of the bar where the flickering delivery system for Bud Lite
commercials is deemed unnecessary, because I am already at the bar."

This - and no doubt a great deal more of
The World Beyond Your Head - is just what John Ruskin, William Morris,
Ivan Illich, Christopher Lasch, Wendell Berry, and other great conservative
radicals, or radical conservatives, would say to our over-managed,
ad-choked, out-of-scale society. They were all skeptical of inevitable
progress, alert to the costs as well as the benefits of new technology, able to
distinguish the blessings from the cruelties of tradition, and as anxious to
preserve the former as to abolish the latter. We're lucky that Matthew
Crawford has updated this invaluable dissenting thread of cultural commentary.
But our ecologies - of attention, of imagination, of civic virtue - are eroding
ever faster. All too soon, it may no longer matter what anyone says.

[1] Simon Head, Mindless: Why Smarter Machines Are Making Dumber Humans (2014).

[2] An Inquiry into the Nature and Causes of the Wealth of Nations, 1776.

[3] Encouragingly,
two other new books arrive, in their own idiosyncratic ways, at similar
political insights: David Bosworth's The Demise of Virtue in Virtual
America: The Moral Origins of the Great Recession (Front Porch Republic
Books) and Craig Lambert's Shadow Work: The Unseen, Unpaid Jobs That Fill
Your Day (Counterpoint).

[4] For
an influential example of applied "behavioral economics," the new branch of the
subject that takes some account of human nature, see Nudge: Improving Decisions About Health, Wealth, and Happiness by
Richard Thaler and Cass Sunstein (2009).