Nation Topics - Books and the Arts

Like Pop-Up Video--one of the many things the movie-industry left
never anticipated--ancillary factoids keep imposing themselves on Paul
Buhle and Dave Wagner's Radical Hollywood:

1. When the oft-dubbed "revolutionary" Lew Wasserman (longtime MCA mogul) died this past June 3, obit writers made the old archcapitalist
sound like he'd been the happy end of a Bolshevik dream--the man who
finally took the power away from the studios and gave it to the people
(OK, very rich, well-placed people).

2. Wasn't it Ronald Reagan--"FBI collaborator," the man deemed "too
dumb" for membership in Hollywood's CP of the 1930s and the star of the
blacklisted screenwriter Val Burton's last movie (Bedtime for
Bonzo)--who helped decontrol the studios' ownership of movie
theaters, i.e., the means of distribution?

3. Showing that memory is fleeting even among the most
progressive-minded people, the Stockholm International Film Festival of
1997 jumped the gun on the Academy Awards and hosted a retrospective of
work by friendly witness Elia Kazan--its organizers claiming, quite
convincingly, that they were completely unaware of the then-raging (sort
of) Kazan Kontroversy.

4. Showing that memory is as tenacious as the ego it's attached to,
Hollywood Ten member Ring Lardner Jr., honoree of the
screenwriter-centric Nantucket Film Festival of 1998, still had the
energy to rail against the system--although the preponderance of his
outrage was not over his HUAC-imposed prison time but the liberties
Joseph Mankiewicz and Louis B. Mayer had taken fifty-odd years before
with his script for Woman of the Year.

If there are unwritten messages within Radical Hollywood, one
might be that artistic vanity and general cupidity are neither exclusive
nor native to a particular political persuasion, nor even the movie
industry itself. And that nothing ever changes. Current cinephiles fear
and loathe the fact that in today's movie business, "business" takes
precedence over "movies." But by 1933, after the bankruptcies of Fox,
Paramount and RKO, the money men had already taken over. (As the authors
write, "Bankers were good at firing studio workers...but were notably
untalented at making films." Make it "lawyers" and it might be 2002.)
Back in 1919, Charles Chaplin, Douglas Fairbanks, D.W. Griffith and Mary
Pickford organized the first independent-of-the-studios Hollywood movie
company, United Artists--the DreamWorks of its time. Last year's
threatened strike by the Writers Guild--which, together with the strike
threat by the Screen Actors Guild, is still affecting studio production
schedules--was largely about credits, because they translate into
salaries; in 1933, meeting secretly, Hollywood's leading screenwriters
(including such leftist lights as John Howard Lawson, John Bright,
Samuel Ornitz and Lester Cole) gathered to organize, largely over the
issue of credits, and for the same reason. Variety, the Hollywood
"bible" and noted mangler of the English language, played the game with
the mobbed-up craft union IATSE (International Alliance of Theatrical
Stage Employees) back in Depression-era Hollywood. It plays plenty of
games today.

And then (sigh) there's that oh-so-predictable outcry over pop cinema's
influence on/instigation of sociocriminal behavior--the knee-jerk
finger-pointing at Hollywood every time a Columbine happens (but never,
you may notice, a 9/11). This is hardly a newsflash either: The release
of such hard-nosed gangster thrillers as The Public Enemy, Scarface and Little Caesar in the early 1930s helped lead
to the establishment of the Legion of Decency, the Production Code, the
Hays Office, the bluenosed rule of in-house censor Joseph Breen and
decades-long cultural prosperity for those who preferred their movie sex
infantilized and their view of America strained through fine mesh. How
the Christian right does long for those thrilling days of yesteryear.

The story of the left in Hollywood, in other words, is the story of
today in Hollywood; but if you're looking for correlations and parallels
you won't find many in Radical Hollywood. Not that parallels are
always what you need: As the blacklisted writer/director Abraham Polonsky (Force of Evil, Body and Soul, Tell Them Willie Boy Is Here) told interviewer David Walsh a few
months before his death in 1999, "In the old days, if something like
this [the Kazan Oscar] was going on, you'd make a few telephone calls,
you'd have a thousand people there. No more. Nobody believes in
anything, except in the finance capitalist." Did anyone in the whole of
Hollywood--or the entire United States Congress, for that matter--make a
peep of support for the recent and quite reasonable California appellate
court decision on the Pledge of Allegiance? If they did, it was drowned
out by the sound of scuttling feet, heading for the political lifeboats.

This last episode was certainly too late for inclusion or comment in Radical Hollywood, but it points up both the stasis and mutation
in what we have to recognize, however reluctantly, as the cultural
capital of the country--and whose history is far more alive than this
book would imply. Encyclopedic in the most frightening sense, RH
is thorough and wide-ranging, and fairly exhaustive in ferreting out
every possible leftist association in any vaguely relevant movie
produced by Hollywood from the New Deal through the postwar Red Scare.
But the authors are also straitjacketed by their own theses: One, that
there was a leftist subtext imposed on many of the movies that
the right held in fear and contempt. (Who knew?) And two, that the
movies were simply superior during the more or less lefty days of
Hollywood.

They may be right. "The content of films was better in 1943 than it is
in 1953," Hollywood Ten-ster Dalton Trumbo is quoted as saying, and the
authors contend that "any reasonable calculation" would confirm what
Trumbo says. But reasonable calculation has nothing to do with the very
subjective business of judging art. One might as well reduce the entire
argument to a single question: What do you prefer? Movies with the
left-leaning Humphrey Bogart? Or movies with Ronald Reagan? It may not
seem to be a contest. But it wouldn't be an example of the scientific
process, either.

Despite their tabloidy subtitle--"the untold story behind America's
favorite movies"--Buhle and Wagner don't dabble much in the anecdote,
gossip or movie-set story that would have lubricated their prose or
perhaps even parted their sea of subordinate clauses. Still, famous
names abound. "As FBI reports suggested," Lucille Ball, Katharine
Hepburn, Olivia de Havilland, Rita Hayworth, Humphrey Bogart, Danny
Kaye, Fredric March, Bette Davis, Lloyd Bridges, John Garfield, Anne
Revere, Larry Parks (The Jolson Story), the wives of March and
Gene Kelly, and Gregory Peck's fiancée--to say nothing of the
scores of writers Buhle and Wagner profile and analyze, or their more
loosely affiliated or merely sympathetic directors and stars--were all
in or close to the Communist Party. Why? For one thing, the authors say,
because these were the people of 1930s and '40s Los Angeles who were
smarter, consequently more liberal, and enjoying a more egalitarian and
humanistic worldview than their constipatedly conservative counterparts.
But it was, they point out, also a result of Hollywood's (and America's)
bigotry and its effect on social life: The comically titled West Side
Writing and Asthma Club, an ostensibly nonpolitical alternative for Jews
barred from Los Angeles's beach clubs and marginalized in the better
restaurants, became a hotbed of anti-Nazi sentiment (which, of course,
made it politically suspect). Eventually, through the Asthma Club, even
one of the world's leading, albeit largely apolitical, Marxists
(Groucho) could channel donations to the Popular Front.

That the Communist Party in Hollywood was largely a "social agency," as
the authors call it, was what helped make the McCarthy-era hearings and
HUAC roundups so wide-ranging and terrifying, even if, after the
Hitler-Stalin Pact, the LA branch of the party "had died...but simply
not known it," as the exiled Carl Foreman (High Noon) put it. How
such screenwriters, who are Buhle and Wagner's principal subjects,
maintained their political principles while clawing their way up the
studio ladders is something left amorphous. Lardner, ever aware of the
contradictions in being a high-priced proletarian, said in his
autobiography I'd Hate Myself in the Morning (his famous response
to J. Parnell Thomas about why he wouldn't name names) that he picketed
Warner Bros. when Mussolini's son came calling, and told David O.
Selznick not to make Gone With the Wind because it was pro-Klan.
But he was an artist, too, a hungry one, and a man who knew the siren
song of fame and fortune never quite harmonized with "The
Internationale."

The authors exhibit a weakness for locating leftist content and
associations where they need to, and shoehorning certain movies into
their theses (their view of Universal's horror catalogue as anti-Wall
Street seems particularly windy). But by the time Radical
Hollywood gets to the era of film noir--which they call "arguably
the only fully realized American 'art film' genre"--it feels as if the
rest of the book has been prologue. Clearly, the authors know and love
the period and what it did to American cinema in the aftermath of World
War II--countering the forced fairy tale of Hollywood with a new, frank,
sexually liberated, sexually sophisticated, sexually metaphorical take
on the dark view of postwar, postnuclear existence (although, strangely, Radical Hollywood never analyzes noir via the A-bomb, despite the
celebrated apocalyptic imagery of such genre classics as Robert
Aldrich's Kiss Me Deadly). That noir also refashioned the
traditional portrayals of the sexes--at a time when, the authors point
out, the country's postwar recovery and strength were being
propagandized as dependent on the American male and his renewed sense of
self--made it one of the most important cultural developments of the
twentieth century, if not the nation's entire cultural history. No
wonder it fell victim to the strangling effects of creeping McCarthyism.

Radical Hollywood, whether or not it's "the untold story behind
America's favorite movies," certainly puts a new spin on those films,
especially for those already familiar with them--readers who,
unfortunately, will be those most distracted by the authors' rather
habitual way with the errant fact. Some are trivial: Edward G. Robinson
didn't say "Mother of God..." at the end of Little Caesar; he
said "Mother of Mercy," as any schoolchild knows (any schoolchild,
granted, with an unnatural obsession with movies). William Randolph
Hearst may have "attributed the 'subversive' label to anything that
smacked of egalitarian liberalism," but he didn't do it in the pages of
the Los Angeles Times, because he never owned the Los Angeles
Times. In assessing the populist perspective of Destry Rides
Again, Buhle and Wagner seem oblivious to the fact that James
Stewart's character is the son of the more famous Destry. The
famously Hungarian-born director Michael Curtiz (director of the
leftist-written Casablanca, among many others) is identified at
one point as a "German refugee." John Wayne's "first major screen role"
wasn't in 1938's Pals of the Saddle but in Raoul Walsh's 1930 The Big Trail. Warner Bros.' "self-serving prologue" at the
beginning of The Public Enemy may have been self-serving--it
mentions the social impact of the studio's own PE and Little
Caesar while omitting UA's Scarface--but it wasn't on the
original 1931 print; it was added for a re-release several years later.

Jean Renoir's The Southerner marked William Faulkner's "only
notable screenplay contribution"? How about The Big Sleep? Mildred Pierce? And let's not forget To Have and Have Not,
in which he rewrote Hemingway, by all reports to their mutual delight.
And Katharine Hepburn didn't lose the "box-office poison" appellation
after Holiday but after The Philadelphia Story, whose film
rights she bought because she knew it would remake her career.

But let's imagine this litany of errors is itself a metaphor for the
intrinsic unreality of the left in Hollywood. It's a subject that Buhle
and Wagner have attacked with energy and all the right intentions; the
reader may wish that he or she were given a bit more reason to stick
with the book through its thicker moments, but there's no denying the
authors' enthusiasm, erudition and engaging way of summarizing plot
lines and associations. Still, it's a weird tale they're telling. As
they relate early on, Polonsky recounted in his later years that one of
the oft-discussed issues among the Hollywood left wing was what, in
fact, they were all doing there. Should they be in Hollywood, making pap
and trying to inject it with a social conscience? Or secede from the
union and create film art independently? As Polonsky put it, the answer
was simple: "Filmmaking in the major studios is the prime way that film
art exists." And so it was. And is. And unfortunately--thanks to an
American indie movement that has lost its lure for youth, a dissipated
market for the once-hip foreign film and a general tendency toward
divorce between American art and American politics--so it is likely to
remain.

I don't know if it's some childhood image left over from Victory at
Sea or from a book of pictures my uncle brought back from the
service, but when I think about the war in the Pacific, I see pink
cumulus clouds piled high, one upon another, on the decks of aircraft
carriers. It's not the iconic image of violent battle that usually represents the war, but my imagination seems to be
telling me that the iconic images aren't the whole story, that serenity
and beauty coexisted alongside the bloodshed and were a large part of
the day-to-day reality of the war.

It's for similar reasons that I think the nitty-gritty details of life
near Ground Zero as presented in one of the first theatrical responses
to 9/11, comic monologist Reno's Rebel Without a Pause, appeal to
me so. They provide relief from the media's iconic packaging, which has
been beamed at us ever since the attack on the Trade Towers and the
(rarely mentioned) Pentagon attack.

With a deluge of energy, Reno, who has lived near the towers since 1981,
relates what it was like in lower Manhattan "that gorgeous day." She
recreates the clicking sound, like the noise an old machine gun would
make, that was the sound of the floors collapsing into one another. She
exhibits dismay at the total absence of Conelrad and the Emergency
Defense System. ("Maybe this wasn't enough of an emergency.") She
tells a story about finding her ATM emptied out at 9 am and the bank
refusing to open its doors so customers could get their money.

But mostly it's the human reactions to catastrophe that are so
wonderful, so wildly hilarious. The rumors that the terrorists are holed
up with machetes in a macrobiotic restaurant on Prince Street; people
rushing home to have their televisions validate what they'd just seen
with their own eyes; and what Reno calls the "hierarchical bragging
rights of pain and knowledge"--New Yorkers one-upping each other over
what they knew and what they'd suffered.

Reno's warnings about changes in constitutional protections make for a
very disturbing second half of her monologue, though she herself doesn't
seem to fear the new spy agency powers: She gives voice to her every
political thought, no matter how out there it is. She points out how
cheaply reporters have been won over by chummy Don Rumsfeld, and she
contemplates Henry Kissinger being arrested for war crimes. Reno even
suggests that Florida be allowed to float down to Uruguay, "where all
the other fascists are."

She also reveals some interesting facts, like ones you find in this
magazine but not in the major media. For instance, Hamid Karzai, the new
president of Afghanistan, used to work for Unocal. And this from Frank
Lindh, who saw the show the night before I did: FBI agents treated his
son kindly because even they knew "he was a hapless kid."

After a while, I began feeling the tingle of what I hope was just my own
paranoia (although as I learned the last time--when Watergate lanced the
Nixonian pustule--paranoia can be a very accurate predictor of reality).
Reno talks about what is being done to our civil liberties in the
context of Christian fundamentalist influence on this Administration. At
342 pages, the USA Patriot Act, she suggests, wasn't written in the days
after 9/11, and the Padilla case has clearly crossed the line of
innocent until proven guilty. She builds a picture of how really
extremist the Bush people are and how far to the right the President has
taken the country. So far, in fact, that Colin Powell is the "Communist
of this Administration."

Such points may be made with laughter, but Reno brings a fierceness to
her criticisms and an urgency to her concerns about the current
Administration that we are only beginning to see in the big world, and
then over financial wheeler-dealering and privilege, not civil liberties
and constitutional guarantees.

You will walk away from Reno with a clear sense that the changes aren't
minor, and they won't fall only on bad guys and enemies. It's a real
turning point: Democracy is up for grabs.

The San Francisco Mime Troupe's free summer show, Mr. Smith Goes to
Obscuristan, likewise treats the aftermath of 9/11. In it,
Condoleezza Rice (Velina Brown) and Dick Cheney (Cheney lookalike Ed
Holmes) seek to sell the Bush presidency as an Administration that cares
about democracy, not profits, and so devise a plan to send 9/11
firefighter Jeff Smith (the always wonderful Michael Gene Sullivan) to
oversee the first free election in the Central Asian, formerly Soviet,
republic of Obscuristan. The winner of this contest is certain to be
warlord and privatizer Automaht Regurgitov (Victor Toman), since he is
the only candidate. That is, until the oppositionist Ralif Nadir (Amos
Glick) throws his hat into the ring, arguing that "people should vote
their hearts, not their fears." (Of course, had one or two percent of
Florida's Nader voters forsworn that advice, the Mime Troupe wouldn't
have a Bush Administration to satirize.)

(Or would they?)

Smith, who has been kept ignorant by outfits like SNN, the Selective
News Network, believes America wants freedom for everyone. He is,
however, disillusioned when it becomes clear that there is oil in
Obscuristan and that the Administration's real interest is that
Regurgitov win, since he will insure the atmosphere necessary for US
investment. Smith then sets out to prove that the ordinary American
doesn't want to screw Obscuristan over, and by the end of the day
rescues Nadir, who was kidnapped and branded a terrorist. He also helps
bring an SNN reporter and the US ambassador over to the side of a fair
shake for Obscuristan.

The Mime Troupe hits many of the right points: that energy sources are a
major factor in our involvement in Central Asia, for instance, and that
much of the weaponry in the area was originally supplied by the CIA. And
they raise questions about just how free our own elections are. Given
that, I was left pondering why Mr. Smith seemed so tepid and not
particularly funny compared with Rebel Without a Pause. It's
doubly strange given that the Mime Troupe brought in the usually very
funny monologist, independent filmmaker and former Nation intern
Josh Kornbluth (Red Diaper Baby and Haiku Tunnel) to help
write the script.

The difference is, I think, that Reno articulates things you hadn't
thought about, or says things you may have thought a lot about, but in
ways that create the old shock of recognition. As when she says, "The
people of Missouri were so worried about Ashcroft making decisions, they
voted for the dead guy."

There are moments like that in Mr. Smith. Barbara Bush (Ed Holmes
again, this time in a gray wig and pearls) explains the rules of the oil
game to George W., and the whole facade of her Betty Crockerdom smacks
right up against her tough capitalist intelligence. This is a Barbara
Bush who says, "Never send a member of the working class to do an
aristocrat's job." But such moments are rare. For the most part, the
Mime Troupe's most incisive statements, such as "Only an American would
confuse a fixed election with a real one" or "Welcome to democratic
nations like Saudi Arabia who protect human rights," simply restate our
perceptions or are so bitterly ironic that a lot of the laughter I heard
was sniggering.

Given that the source of the satire is Capra's populist classic, Mr.
Smith Goes to Washington, I think the Mime Troupe missed a real
opportunity to have us question ourselves by asking, Who is Mr. Smith
and what is he about? In the mythos of Mime Troupe plays, the ordinary
American is decent and fair, and in every respect there's a lot of
daylight between him and the ruling class, and therefore between us and
what our government does in our name. The Mime Troupe believes that like
Jeff Smith, the ordinary American has been kept in ignorance by the
media, and that if he only knew what was really going on, he would rise
up and change things.

That conveniently ignores the fact that ordinary Americans are of many
minds, and that many of us do understand that our comfort is based on
the deprivation (and worse) of people in other parts of the world. So
then, you have to ask whether we feel we can't do anything about it or
whether we don't want to. How much is the ordinary American willing to
give up to see people elsewhere get a larger slice of the pie?

And what is the usefulness of a mythos of unquestioned fairness and
decency, and in this play, as in other Mime Troupe efforts, of a sellout
who regains her soul and of a decisive victory over the people's
enemies? It's positive, but does it send us out of the park feeling
hopeful and intent on action? Or do we feel that a lot of what we
witnessed was too simple and fantastic?

The appeal in Mr. Smith is ultimately to idealism, to looking out
for the other guy and doing the right thing. Reno, on the other hand,
talks about self-interest: that we are losing our rights and that some
of us were slaughtered. "The [US] government," she says, "created the
mujahedeen that came to my town and killed us." That seems a much
stronger motive for action.

Mr. Smith Goes to Obscuristan will be performed through Labor Day
in various Northern California locales (415-285-1717 or www.sfmt.org). Rebel Without a Pause played a week at the Brava Theater Center
in San Francisco in June and went on to an extended run at the Lion
Theater on 42nd Street in New York City.

If Canadian writer Yann Martel were a preacher, he'd be charismatic,
funny and convert all the nonbelievers. He baits his readers with
serious themes and trawls them through a sea of questions and confusion,
but he makes one laugh so much, and at times feel so awed and chilled,
that even thrashing around in bewilderment or disagreement one can't
help but be captured by his prose.

That's largely why I took such pleasure in Life of Pi, Martel's
wonderful second novel, which playfully reworks the ancient sea voyage,
castaway themes of classics like Defoe's Robinson Crusoe, Swift's Gulliver's Travels, Coleridge's The Rime of the Ancient
Mariner, Melville's Moby-Dick and (in some of its more
fantastical aspects) Homer's The Odyssey, to explore the role of
religion in a highly physical world. What's more, it's a religious book
that makes sense to a nonreligious person. Although its themes are
serious and there are moments of awful graphic violence and bleak
despair, it is above all a book about life's absurdities that makes one
laugh out loud on almost every page, with its quirky juxtapositions,
comparisons, metaphors, Borgesian puzzles, postmodern games and a sense
of fun that reflects the hero's sensual enjoyment of the world. Although
Martel pays tribute to the past by using the typical castaway format
(episodic narrative, focus on details of survival, moments of shocking
violence and reflections on God and nature), his voice, and the fact
that his work is more fantastic, more scientifically sound and funnier
than that of his predecessors, infuses the genre with brilliant new
life. If this century produces a classic work of survival literature,
Martel's novel is surely a contender.

Life of Pi is the unlikely story of a 16-year-old Indian boy, Pi
Patel, adrift in a boat with a hungry tiger after the ship carrying his
zookeeper father, mother, brother and many animals sinks in the middle
of their journey from India to Canada. (It's the mid-1970s and Pi's
father decides to emigrate after Prime Minister Indira Gandhi starts
jailing her enemies and suspending civil liberties.) Pi is at once a
Hindu, Christian and Muslim (echoes of the pacific Mahatma Gandhi here)
who believes that all religions are about "love." But having grown up
among animals, he's also practical and grounded. Early in the book, his
three religious teachers meet, and Pi gets his "introduction to
interfaith dialogue," a big argument that ends only when he is asked for
his opinion. He quotes Gandhi, "All religions are true," adding, "I just
want to love God," which floors them all. Then he goes out with his
parents for ice cream. Most of the rest of the book is a challenge to
Pi's simple faith, as this sweet yet unsentimental hero experiences a
situation where, it would seem, survival is everything. Aside from the
detailed descriptions of hands-on survival techniques that almost rival
Ishmael's whaling lore in Moby-Dick, the book poses the
questions: Can faith survive in the face of doubt and suffering? Can the
love of God and one's fellows remain pure in an angry, violent world?

Despair sets in from the beginning. Not only does Pi lose his parents,
but he is facing life on the ocean wave with a tiger (named Richard
Parker), a zebra, an orangutan and a hyena. Pi watches them kill each
other, with Richard Parker finishing off the hyena. The boat is littered
with animal carcasses. As the days go by, Pi, a vegetarian, learns how
to kill with his bare hands, batter turtles to death and eat uncooked
flesh. He weeps. He is "dumb with pain and horror." But he survives,
marking his territory with his urine, as animals do, to keep Richard
Parker at bay, feeding him and finally teaching the tiger (by using a
whistle) that he, Pi, is master here.

It's true that his three faiths recede to a whisper on the boat. He
confesses that it is Richard Parker, and the practical matter of
avoiding being eaten by him, that gives him "purpose," even "peace" and
perhaps "wholeness," and thus keeps him alive. "If he died, I would be
left alone with despair, a foe even more formidable than a tiger.... He
pushed me to go on living." Pi keeps up with his religious rituals, but
he finds his faith wavering. In one funny scene, he yells out his
beliefs to make them more real. "I would touch the turban I had made
with the remnants of my shirt and I would say aloud, 'THIS IS GOD'S
HAT!'" Then he points at Richard Parker and says, "THIS IS GOD'S CAT!"
The boat is "GOD'S ARK!" The sea, "GOD'S WIDE ACRES!" The sky, "GOD'S
EAR!" But, he says, "God's hat was always unravelling," and "God's ear
didn't seem to be listening."

You might say he's trying to persuade himself. But it's clear that he
continues to appreciate the beauty of the sea and sky, and the sparse
life around him, in which, as a Hindu, he sees his connection to God.
There are wonderful poetic descriptions of the fish around the boat as a
little city, of Richard Parker's beauty and of a dorado fish that, as it
dies, begins to "flash all kinds of colours in rapid succession. Blue,
green, red, gold and violet flickered and shimmered neon-like on its
surface as it struggled. I felt I was beating a rainbow to death." Even
when his journey is "nothing but grief, ache and endurance," it is
"natural," he says, that he "should turn to God."

But religion is only one element of the book's exploration of faith.
Martel is also interested in the faith of his readers. He wants them to
believe his story. He has his narrator pose a larger, Keatsian "beauty
is truth" argument against the glorification of reason, "that fool's
gold for the bright." It's as if he were suggesting that storytelling is
a kind of religious experience because it helps us understand the world
in a more profound way than a just-the-facts approach (or by
implication, dogma, fundamentalism and literalism). Two passages that
some reviewers have picked out as the least convincing (for their lack
of literal accuracy!), I find illustrate Martel's attempt to show the
power of storytelling at its best. Fantastic, yes, but utterly
convincing. The first is Pi's encounter with a blind, cannibalistic
Frenchman whom Pi runs into at the exact moment he too has gone blind
for lack of nourishment. Their obsessive conversation about food is one
of the funniest and most farcical moments in the book. The second is
Pi's sojourn on a flesh-eating island, which is one of the most chilling
symbolic illustrations of evil I have read. (If the pious Swiss Family
Robinson finds utopia, the religious Pi finds dystopia!)

Good postmodernist that he is, Martel wants to use the very telling of
the tale--multiple narrators, a playful fairytale quality ("once upon a
time" and "happy ending" are mentioned in passing), realistically
presented events that may be hallucinations or simply made up--to push
at the limits of what's believable, yet still convince the reader of his
literary, not literal, veracity. He wants to prove that it's possible to
remain curious about and connected to the world, yet to accept that
there are always going to be aspects of life (and literature) that
remain mysterious.

Pi's doubts about his faith are mirrored by the seeds of doubt Martel
sows in the mind of the reader throughout the narrative. Every moment of
certainty is undercut by the potential for disbelief, and that's when
Martel seems to ask: Am I convincing you now? He sifts the story through
various narrators, beginning with an author-narrator that at first one
thinks is Martel himself but is only Martel-like, introducing the story
as if it were true. Martel has said in interviews that some of this
information is factually accurate. Like his narrator, he was trying to
write a novel about Portugal that wouldn't come alive when he got the
idea for Life of Pi on a trip to India. Martel also briefly
acknowledges his special debt to Brazilian Jewish writer Moacyr Scliar,
whose novella Max and the Cats also has a hero who survives the
sinking of a ship filled with zoo animals and spends days at sea in a
boat with a large cat, in this case a jaguar. Scliar's is the
mini-version that Martel fleshes out with more lyrical language and the
fruits of zoological research.

But there reality stops. There's the whiff of an old-fashioned quest or
allegorical tale in the introduction, for the Martel-like narrator first
learns the story from Francis Adirubasamy, a family friend of Pi's, who
tells him that Pi's story will make him "believe in God." And he plays
with the reader's sense of reality when he has Adirubasamy talk about Pi
as "the main character" whom the narrator proceeds to track down in
Canada. And just how believable is Pi? Now in his 40s, Pi apologizes for
his memory and tells the story as a series of out-of-sequence
events--jumping back and forth between his early childhood, his teenage
years and his time at sea. He can barely remember what his mother looks
like, but he appears able to recall whole conversations from his
childhood. He even asks the narrator to "tell my jumbled story in
exactly one hundred chapters, not one more, not one less." (He does.)
One begins to wonder if Pi made up Richard Parker. Despite his knowledge
that people anthropomorphize animals because of their "obsession" with
putting themselves "at the centre of everything," Pi seems
disproportionately haunted by the fact that when the boat hits Mexico,
Richard Parker takes off without a backward glance. Perhaps the loss of
the tiger symbolizes the greater loss of his family, or of his own
innocence. Perhaps Pi invented the tiger to keep himself sane. The
reader is left to decide.

In a final test of the reader's faith in the narrative, Martel has Pi
tell an alternate, allegedly more believable version of the story at the
end--lacking not only Richard Parker but also the humor, poetry and
detail of the tiger story--to please a couple of doubting Japanese
shipping officials. He asks them which they think is the "better" story.
Of course, the tiger story is the finer, more thoughtful literary
creation and therefore (Martel suggests) has a truth more lasting than
the second, more journalistic version, with its "dry, yeastless
factuality."

Even if one accepts the twists and turns of the narrative, one faces the
further challenge of tracking down clues hidden in a warren of allusions
for more definitive answers to questions about Pi's religious faith, and
whether the narrator (and the reader) will be persuaded of the story's
original premise that it will make one believe in God. That symbolism is
important in this book is made clear at first by the most obvious symbol
of Pi's name, self-chosen because it's the short version of his real
name Piscine (after a family friend's favorite Parisian swimming pool),
and he is inevitably called "Pissing" by classmates. Nothing could be
grittier. In contrast, Pi is like π, what mathematicians call an
"irrational number," that is, 3.14 if rounded off, but with endlessly
unfolding decimal places if carried out. Martel couples this mysterious
abstraction with a concrete image--"And so, in that Greek letter that
looks like a shack with a corrugated tin roof, in that elusive,
irrational number with which scientists try to understand the universe,
I found refuge"--to show that, as a boy, Pi is in harmony with things as
they are as well as with his sense of the unknowable.

That Pi's attitude to religion may have changed after his ordeal is
buried in the hidden symbolism hinted at by Pi's college studies in
religion and zoology, described on the opening page as if to emphasize
their importance as a key to the story. (This is after the lifeboat
comes to shore in Mexico, and Pi goes to Canada to start a new life.)
His specialties are the sixteenth-century Jewish mystic Isaac Luria and
the sluggish three-toed sloth (symbol of the Trinity?) whose miraculous
capacity to stay alive, he says, "reminded me of God." (An echo of his
own survival, perhaps? A hint that God seems more elusive these days?)
More important, Luria's cabalistic ideas may hold the key to Pi's
experience at sea. His philosophy (Luria thought the secrets of the
universe lay in numbers) echoes the symbolism of π, and the formula
relating a circle's circumference to its radius (connecting
perimeter and center). Luria believed that God's light contracted from
the center of the universe, purging itself of evil elements, leaving an
empty space (a circle) in which human life developed. But God also sent
down a ray of light (like a radius) so that the few remaining divine
sparks could reconnect with Him. To achieve this fusion with God, and by
implication eliminate evil from the world, Luria believed, people must
live an ethical life. The original divine contraction is called
variously tzimtzum, zimzum or simsum. It's no
coincidence that Martel called the sinking ship Tsimtsum. Thus Pi at sea
was experiencing his own void (or withdrawal of God), in which elements
of evil fight with the instinct to do good. Richard Parker saved his
sanity, and Pi's goodness kept Richard Parker (and perhaps his own
faith) alive. By introducing this strain of mystical Jewish thought,
Martel not only further illustrates Pi's contention that all religions
are essentially the same in that they stem from love but he also uses
mysticism to underscore the profound ways in which literature can
present life's truths. Skeptics, however, might see Pi's study of Luria
as a move away from his earlier, purer faith toward a more structured
mysticism. That would explain his comment at the end of the book, when
he confesses his need for "the harmony of order."

Though one can read Life of Pi just for fun, trying to figure out
Pi's relationship to God makes one feel a bit like the castaway hero
wrestling slippery fish into his lifeboat for dinner. An idea twists and
turns, glittering and gleaming, slaps you in the face with its tail and
slips away. Did the story really happen? Does it make one believe in
God? What kind of God? Early on the narrator says, "This story has a
happy ending." But Pi also tells his interviewer, "I have nothing to say
of my working life, only that a tie is a noose, and inverted though it
is, it will hang a man nonetheless if he's not careful," which suggests
a man with at least some conflict on his mind. On the other hand, Martel
may also be suggesting that work is less important to Pi than God and
family--the narrator gives us glimpses of Pi's shrine-filled house and
his loving relationship with his wife, son and daughter. However, when
Pi is showing him family pictures, the narrator notes, "A smile every
time, but his eyes tell another story." I believe Martel's point is that
doubt inevitably accompanies faith. But the opposite explanation, that
after Pi's life-threatening experiences his faith is a mere prop for his
anxiety, might work just as well.

Does it matter that the answer to all questions in this novel is both
yes and no? One answer comes in the form of Pi's question moments after
the ship has sunk and he's sitting in the lifeboat, bewailing the loss
of his family and God's silence on the topic: "Why can't reason give
greater answers? Why can we throw a question further than we can pull in
an answer? Why such a vast net if there's so little fish to catch?" And
that, of course, is the nature of faith. One can't argue it through, one
just believes. Faith in God (as the younger Pi sees it) "is an opening
up, a letting go, a deep trust, a free act of love." It's also "hard to
love," Pi adds, when faced with adversity. The same might be true of a
good novel, as readers are taken to the edge of their understanding by
something new. If the reader lets go of preconceptions, the experience
can be liberating and exciting. Martel may be sowing seeds of
uncertainty about God, but there's no doubt that he restores one's faith
in literature.

More than thirty years ago, in an essay called "Uncle Tom and Tiny Tim:
Some Reflections on the Cripple as Negro," I suggested that cripples
emulate the civil rights movement by focusing on political solutions to
the problems of living under difficult physical conditions. (It's a lost
battle, but I continue to prefer the term "cripple" to the bland "disabled.") The problems cripples faced seemed as much the result of our inability to
define our needs as they were the fault of a society quite willing to
live with its ignorance of those problems and quite willing not to see
us at all unless absolutely forced to. It wasn't until the late 1960s
that cripples began to believe that they had the right to demand that
America meet their needs.

Anyone who has spent significant time living with a serious physical
condition probably has had an experience similar to the following:
entering a restaurant with another person, he (or she) finds that the
waiter is addressing not him but the person he is with. He is a
category, and categories are simply assumed to be unable to take
responsibility even for something as minor as placing an order. Yet even
such infantilization can seem liberating if the cripple realizes that
the problem it bespeaks is political rather than psychological: One
infantilizes the other by assuming attitudes held by society at large.
And this process is something that the cripple, too, is encouraged to
do. Even Randolph Bourne, as tough a social critic as America ever
produced, looks inward in his famous essay "The Handicapped," published
back in 1911. Writing about other issues, Bourne understands that
political problems demand political solutions. But when it concerns the
cripple, among whose ranks he was numbered, he was curiously
inner-directed and soft.

The demand for the rights of cripples was already under way as I was
writing "Uncle Tom and Tiny Tim." And while I would be happier without
much of the rhetoric of the Disability Rights Movement, to its credit,
it has helped change the consciousness of those who must confront the
world with physical disabilities. Both its success and its burgeoning
political potential seemed wishful thinking in 1969, when I still
dismissed its prospects. But that success was confirmed with the
enactment of the Americans With Disabilities Act in 1990. Despite its
admitted weaknesses, few Congressional acts more deserve the term
"landmark legislation." The Americans With Disabilities Act promised
those forced to live with severe physical impairments the possibility of
legal if not functional equality. Its most profound accomplishment, even
allowing for the vagueness of definition that has come to haunt it, was
to accept the idea that cripples have the right to specific
accommodations that meet their employment needs. For a population
battling the indignities of permanent illness, its promise was
comparable to that of the Civil Rights Act for African-Americans in
1964.

Twelve years after its passage, that promise seems about to be swamped
by a legal system in which what constitutes a workplace disability is
undefined and perhaps undefinable. The confusion about what would seem
to be the most elementary of definitions--what is meant when we speak of
a disability--threatens to weaken the act, if not render it virtually useless.
The cripple's demand for rights still commands a good deal of public
interest and a degree of public sympathy. Yet the Americans With
Disabilities Act has not led to widespread political activity on behalf
of the nation's cripples. Their quest for equality is not only
threatened with that most severe of American sins, being relegated to
political unfashionability, but the question of what a disability is
shows few signs of being resolved in favor of those whom the act was
supposed to help. Recent Supreme Court rulings in which disability was
ill defined must be seen as setbacks for those who look to the judiciary
to enforce what the act called for, a policy of accessibility and
inclusiveness. The Court ruled in April by a 5-to-4 majority in US
Airways v. Barnett that US Airways' seniority system took precedence
over the right of a disabled worker to transfer to a more suitable job.
In Toyota Motor Manufacturing v. Williams, the Court ruled
unanimously that the definition of disability must mean substantial
limitations on abilities "central to daily life," not just the job. And
the Court also unanimously held, in mid-June in Chevron U.S.A. v.
Echazabal, that employers had the right to refuse to hire a worker
whose health they believed might be impaired by performing a particular
job.

For this alone Ruth O'Brien's Crippled Justice is a welcome
addition to the literature on living with disability. A professor of
political science at the City University of New York, O'Brien approaches
her subject armed with an analytical perspective nurtured by her earlier
work. Her first book, Workers' Paradox: The Republican Origins of New
Deal Labor Policy, 1886-1935, already reflected her interest in the
subject of workers' rights. Yet even academic inquiries can be rooted in
personal experience. "Had I not sustained what is now a ubiquitous
workplace injury," she writes, "a debilitating case of bilateral
tendinitis in my hands and forearms, I might never have explored the
development and implementation of...disability policy." Yet the focus of Crippled Justice is neither
personal nor anecdotal. It is a serious inquiry into the history of
public policy as that policy has affected large numbers of men and women
crippled by illness, accident or birth. As serious scholarship is
expected to be, it is factual and analytical. The past few decades have
witnessed a rich expansion of memoirs and essays by writers forced to
struggle with their own physical or mental deterioration, books that
depict what life is like for those who must live it with severe illness.
But the kind of political analysis O'Brien offers in Crippled
Justice is what, I believe, cripples need now.

Analysis demands perspective, particularly when it begins in personal
experience. While bilateral tendinitis may not have the same sort of
consequences as, say, pushing through life in a wheelchair or trying to
earn a living as a blind person, the experience limited O'Brien's normal
ability to function. It turned her temporarily from normal to cripple.
And however temporary an experience, it was also sufficiently
dehumanizing to give her a strong sense of what life is like for those
forced to live with more severe conditions. The first discovery one
makes on entering the shadowy world of cripples is that one no longer
defines need, ability and ambition for oneself. The experience of living
with disability forced Ruth O'Brien to recognize that the cripple must
"struggle over the same issues that women and minorities battle." But
she also saw that the problems cripples faced were in some ways less
soluble and in others more mechanical than the problems of other groups.
Nothing would be more beneficial to cripples as a group than a fantasy
I've held for the past decade--a law that would make it mandatory for
every elected official in the country to live a single week each year as
a cripple.

If nothing else, that would show that the problems involved are as
political as they are psychological. And that is why I am grateful that Crippled Justice restricts itself to the conditions cripples
confront in the workplace. To the writer, physical disability offers a
personal confrontation. And as is the case with writers, that
confrontation is about language. But what the cripple confronts in the
workplace, as O'Brien shows, are problems that have solutions. And
those solutions are political. What she tells us about the history of
disability policy in the workplace may not be as powerful or as dramatic
as, say, Andre Dubus writing about the changes that were imposed upon
his life by the sudden transition he underwent from being a normal man
to being wheelchair-bound. Nor does Crippled Justice offer us the
savage honesty of Harold Brodkey writing about his own impending death
from AIDS. O'Brien's focus is more mundane, which is to say that it is
more political: She is interested in the possibility of a meaningful
work life for those who lack the talent of a Dubus or a Brodkey.

We do not, of course, read memoirs and essays to create public policy
but to recreate individual lives. Yet if the experience of being forced
to live as a cripple is invariably personal, the reality of how one lives that life is invariably political. I have no choice but to
accept being in a wheelchair. On the other hand, the New York through
which I push myself has any number of choices in how it reacts to my
need for that wheelchair. It is able to define how I live, what is now
subsumed under that horrendous phrase "quality of life," through the
public policy decisions it makes. Such seemingly trivial items as the
condition of the streets through which I push speak less eloquently but
more truthfully of what is or isn't possible for me than Dubus's essays
or my own essays or Nancy Mairs's essays. Public policy defines the
boundaries of the cripple's life. Mundane issues such as the condition
of the streets and the accessibility of restaurants and stores and
theaters (and how the Court defines disability) speak to the cripple's
ability to live with dignity.

The first half of Crippled Justice offers a historical overview
of the rehabilitation of the cripple in America. The ideas dominating
medical and social policy after the end of the Second World War in 1945
were largely formulated by two physicians, Dr. Howard Rusk and Dr. Henry
Kessler. (War may be unhealthy for children and other living things, but
it has done wonders for the fields of prosthetics and rehabilitation
medicine.) Rusk and Kessler are among the villains of the book, since,
along with Mary Switzer, the federal bureaucrat responsible for the
Vocational Rehabilitation Act of 1954, they created models of
rehabilitation still largely followed today. From Freud and even more
from William Menninger, rehabilitation medicine was inspired to shift
its focus from the need to treat the cripple's physical symptoms to the
need to treat the whole person. And the models were psychological.
O'Brien describes "the deep strain of individualism in American
liberalism" as the source of the mistaken path rehabilitation medicine
took. Yet I am not convinced that individualism is so negative in the
life of the cripple. No one can overcome the effects of disability
through mere willpower or a well-developed work ethic--but a
well-developed sense of self helps if one is to be a "success" as a
cripple. One might even suggest that the successful cripple must combine
a free-market head with a socialist soul. Perhaps more than others do,
he needs to see himself as singular. After all, what else can account
for all those memoirs about the singularity of the experience of
disability? The best passage I know about living as a cripple--as moving
to me as Shylock's "Hath not a Jew" speech--wasn't written by a cripple
but by a healthy Saul Bellow at the height of his powers. Put into the
mouth of the poolroom entrepreneur in The Adventures of Augie
March, its power derives from how it speaks for us cripples as it
speaks about Einhorn's aching sense of his individual quandary.

O'Brien is on more solid ground when writing about how Rusk and Kessler
expected the "sick" individual to "adjust" to what they viewed as a
"healthy" society. The cripple unable to make the adjustment was a
social and psychological problem. Even so, one can argue that the
individualism O'Brien finds irritating is the cripple's best chance to
find salvation. Ambition should be made of sterner stuff than turning
all problems into psychological barriers. At the same time, the desire
to get even with an unjust fate shouldn't be dismissed lightly.
Liberalism may have a lot to answer for where attitudes toward the
cripple are concerned, but excessive concern with individualism is not
the biggest item on that bill. Still, the psychologizing of disability
was a mistake for which we continue to pay a price. And it remains, I
believe, the source of the Court's restricted vision of workplace
disability.

The conditions cripples face in the workplace cannot be conquered by
their adjusting to normal society but by society making certain minor
but necessary adjustments to their problems. By the 1970s the
psychological definition of the cripple had already shown how limited it
was. But is it better to define the cripple legally? Despite its immense
promise, the Americans With Disabilities Act is, as O'Brien writes, "an
idiosyncratic body of law." Where once cripples had to convince the
world of their ability to meet standards set by normals, they are now
expected to meet thresholds of disability set by a Court that seems
oblivious to the obvious. When the issue is as clear-cut as it was in
the case of the golfer on the PGA Tour, Casey Martin, whose bone
deterioration made it impossible for him to walk the links although it
didn't prevent him from playing golf, the courts seem willing to allow
the spirit of the original act to serve as its definition. But even that
makes the judiciary our "modern-day experts of vocational rehabilitation
because of the idiosyncratic nature of disability." The Court has not
yet claimed the right to define whether an individual is or is not a
cripple. But by insisting on its right to define what constitutes
disability in the workplace, it has assumed the power of defining what
the consequences of being a cripple are. As far as work is
concerned, cripples "have gone from being subjects of medicine to
subjects of law." Whether this is an improvement over the psychologizing
of disability is certainly open to question. The conclusion of Crippled Justice is not despairing but it is skeptical. And for
good reason. In a valuable study of workplace disability as both a
political and social issue, O'Brien has performed a service to anyone
interested in social justice. Unfortunately, recent Supreme Court
decisions threaten to make her skepticism the book's lasting legacy.
Whether defined by the judges or doctors, it seems to be the cripple's
fate to be defined as the other.

Dispatches from adolescent territory reach me occasionally through my
niece Michelle, who has moved into her teen years like the Wehrmacht hitting Belgium. Her most recent posting has taught me
this about contemporary film culture: While visiting a Midwest resort
town with a friend, Michelle was delighted to discover a street of quaint shops, as well as a theater that played old movies. Which old movies, I wanted to know.
"Spider-Man," she said.

In the hope that this column might fall into the hands of teenagers, I
therefore begin with an apology. Some of the movies I am about to
discuss have been running for two weeks, or even longer. That's enough
for them to have earned most of whatever theatrical revenue they can
expect; enough that they are now being pushed into the back reaches of
the public's attention, so that next week's movies can be marketed. I
want to write about these pictures precisely because they were
made to be forgotten (like Men in Black II); or, conversely,
because they are already starting to fade, despite their makers' best
intentions.

I also want to write about a film that just might stick in the mind: Langrishe, Go Down, starring Judi Dench and Jeremy Irons. But
there I'm cheating. Although that film is only now being released, it
doesn't really count as current, since it was made in 1978.

To people who dislike movies and attend only films, it might seem
obvious that Men in Black II can't compete against Langrishe,
Go Down (which has not only Dench and Irons to its credit but also a
screenplay by Harold Pinter). But then, to my mind, Langrishe, Go
Down can scarcely compete against the original Men in Black,
which so brightened the summer of 1997. While that picture cheerfully
fulfilled every duty of a sci-fi special-effects comedy, it also won a
permanent place in memory by developing a theme that should interest
thoughtful teenagers and adults alike.

In its portrayals of agents Kay and Jay (Tommy Lee Jones and Will Smith)
and of the coroner who stumbled onto their secrets (Linda Fiorentino), Men in Black proposed that knowledge has to be paid for, and that
the cost is often loneliness. Fiorentino, you may recall, played a
scientist whose zeal for research allowed her no living companions.
Smith played a New York cop who had to choose between satisfying his
curiosity and maintaining relations with his friends and family--not
much of a decision in his case, since he was already thoroughly
alienated. (In a training exercise, Smith shot to death a cute little
blond girl but left unmolested a fanged and tentacled potato from Outer
Space, with which he seemed to empathize.) As for Jones, he strutted and
snapped his way through the movie as if a show of bravado were all that
could keep him going. "We are a gullible species," he sighed at one
point, as if wishing he might lay down his burden and rejoin the
credulous. Everyone except Smith understood this ragged man was on his
last case.

Clearly, Jones should have stayed in the retirement he achieved at the
end of Men in Black. Smith should have remained partners with
Fiorentino, and the sequel (if there had to be one) ought to have been
written by Ed Solomon, who so ingeniously handled the original. Maybe he
would have titled the picture Men and Women in Black. Instead, we
get the throwaway Men in Black II, which disposes of Fiorentino
in half a line of dialogue and uses the same method to eliminate the
wife for whom Jones once pined. (It's as if the audience could be purged
of memory, just like the movie's neuralized civilians.) With these
impediments to buddy-movie business cleared away, the screenplay (by
Robert Gordon and Barry Fanaro) can proceed to reunite Smith and Jones
and replay, with slight variations, the simpler gags from the first
picture.

Time passes, hope sinks and a theme emerges, unfortunately. Men in
Black II shows that only two kinds of women exist on other planets:
shining saints and snaky monsters. If this is so, then Earth must be
bigger and more varied than the whole rest of the universe--a notion
that runs counter to the spirit I recall with such joy from the first,
the one true, Men in Black.

As you may know, Men in Black was based on a comic book by Lowell
Cunningham; so it has something in common with Road to Perdition,
a gangster picture spawned from a graphic novel by Max Allan Collins and
Richard Piers Rayner. Under the fussy and portentous direction of Sam
Mendes (who previously postured his way through American Beauty), Road to Perdition is clearly a far more ambitious movie than Men in Black II. It boasts the very substantial talents of Tom
Hanks and Paul Newman in lead roles, an unnerving performance by Jude
Law in a crucial supporting part and magically dark, dense
cinematography by Conrad L. Hall. The story would seem to be worth
telling (it's about murderous gangster fathers and the sons who are
either loyal or disloyal to them, either willing or unwilling to follow
their path); and the setting is the Depression-era Midwest, which always
helps a movie. And yet very little of Road to Perdition lingers,
except for a feeling that you've been carried along.

Most of the carrying happens when mob hit man Michael Sullivan (Hanks)
is driving around the wintry plains with his 12-year-old son, Mike
(Tyler Hoechlin). The two are both fleeing a killer (Law) and chasing
the men who dispatched him--a situation that allows for a couple of
good, tense confrontations. Since Hanks thinks it would be helpful to
empty Al Capone's bank accounts, there's also a series of jolly
robberies. I would guess these episodes take up about fifteen minutes of
the movie. The rest is murk, forced lyricism and mounting corpses.
Perhaps you won't care when I reveal that almost no one survives, since
the deaths never matter. They just happen, like ticks of a metronome.
Each beat gives Sam Mendes the opportunity to make pretty arrangements:
an image of violence framed by a man's legs, a flash at a nighttime
window, a brightly lit homage to David's Death of Marat, a
tracking shot of men silently collapsing in the rain. Watching these
stage-derived tableaux vivants, I began to think better of the
movie-mad energy of Miller's Crossing, in which the Coen brothers
invested their overcoats-and-hats gangsters with both drive and
character. Maybe Miller's Crossing has also turned out to be
forgettable in large part; but its core moments (such as the scene of
John Turturro begging for his life) dig right into you, as if they were
newly installed neural pathways to the heart.

Road to Perdition? A passing flutter.

John Sayles can't be accused of prettifying his films, and he would
never kill a character for lack of anything better to do. What's more,
he despises the grand simplifications that are so common in comic books,
graphic novels and pop moviemaking. In Sunshine State, he sets up
for ridicule the fabulations of history pageants and real-estate
developers, so he can show off to better advantage his own, more
intricate vision of the social network. It's a strategy he's used in
many earlier films, just as he visited Texas and Alaska before this
excursion into Florida. From Sayles, you get highly specific landscapes,
reliable accounts of politics and commerce, and (more often than not)
actresses to die for--in this case, in alphabetical order, Jane
Alexander, Mary Alice, Angela Bassett and Edie Falco.

All this is admirable. I just wish Sayles would also put a little movie
into the movie.

Sunshine State isn't claptrap, like Divine Secrets of the
Ya-Ya Sisterhood, but it shares that picture's claptrap method of
being almost entirely expository. In scene after scene, Sayles tells you
exactly what he thinks you should know about Florida, often by putting
into the mouth of a character the kind of cliché-twisting
monologue that keeps rational people away from Off Broadway plays. I
think this is a waste of good actors--and the effects are nowhere more
evident than in the parts of Sunshine State you forget, or that
Sayles forgot. Tell me, if you've seen the picture: Can you recall what
finally becomes of Terrell (Alex Lewis), the troubled teenager whose act
of vandalism begins the story? He's hustled away so perfunctorily, once
he's served the purpose of uniting two strands of the plot, that he
might as well be Linda Fiorentino. And can you remember anything the
American Indian construction worker does in the movie, other than wait
around to be an American Indian at a crucial moment? For a filmmaker
with a social conscience, Sayles is awfully quick to use characters as
means, rather than ends.

So, for a dose of something eccentric and memorable, I turn to Langrishe, Go Down.

David Jones directed this picture in 1978 for the BBC, working from
Harold Pinter's screenplay. New York's Film Forum is now giving the
movie a much-belated theatrical release (July 17-30), no doubt on the
strength of Judi Dench's ascent to stardom. She is, in fact, a wonder in
the role of Imogen Langrishe, one of a household of spinsters living in
ever-more-impoverished gentility on an estate outside Dublin. The period
is the 1930s, when such descents from grandeur were not uncommon for the
Irish gentry; nor would it have been unlikely for a self-styled scholar
from Bavaria (Jeremy Irons) to show up in the neighborhood to do
research, and to assert with sudden, unmotivated violence that he is
indifferent to politics, absolutely indifferent.

Sayles himself could not ask for a more realistic, closely observed
setting. (In this regard, Langrishe, Go Down owes a lot to its
source, the novel of the same title by Aidan Higgins.) But the way the
film's seduction and repulsion play themselves out--you understood,
surely, that Dench and Irons have an affair--is utterly unpredictable.
Irons turns himself into a fun-house mirror version of the
self-important German intellectual, complete with an accent that keeps
migrating toward Transylvania. He never stops talking; whereas Dench,
who is given relatively few lines, speaks volumes with her eyes and the
set of her mouth. You understand, without a word, how she sees through
Irons. She's amused by him; she feels this may be the last amusement
she'll get; and she enjoys it, until the underlying frustration and rage
break through.

To all this, David Jones adds a fragmented, time-shuffling montage
that's reminiscent of Alain Resnais. Or is the film's structure also a
Pinter contribution, like the lines of dialogue that continually run
askew? All I know is that this odd little movie has lodged in my brain,
not comfortably, perhaps, but permanently.

William J. Bennett, former Secretary of Education, ex-chairman of the
National Endowment for the Humanities, candidate for President in 2000
in the Republican primaries, has written an intemperate little book
called Why We Fight. Exploiting the horror of 9/11, the book crackles
with protestations of Bennett's patriotism as he lobs shells at those who do
not share his views. Apparently Bennett had no moral choice but to write
what he had to say in order to save the Republic. "I sensed in my bones
that if we could not find a way to justify our patriotic instincts, and
to answer the arguments of those who did not share them, we would be
undone."

If Bennett had his way, those who did not hold his views would be dealt
with very harshly indeed. He leaves it to the reader to guess what he
would do with those he views as "unpatriotic." But there are ample
clues. Civil liberties are not his concern, neither in this book, as he
makes clear, nor for that matter anywhere else. He states that he is for
military tribunals "and the detention of suspects within our own borders
for questioning." For how long Bennett does not say. Nor does he tell us
whether there is the same standard for a non-American as for an American
citizen. Until recently there were hundreds being held in detention,
sanctioned by an act of Congress that gives the Bush Administration
virtual carte blanche in handling suspects without warrants, and perhaps
even without recourse to the regular court system. (Most of the
detainees have been quietly deported.) This exercise of power is a
complement to Administration foreign policy, as it is apparently
prepared to intervene in or invade nations even if there is no evidence
that they are involved in terrorism or backing terrorists. The domestic
implications are spelled out well by Bennett, but none of this bothers
him. His gravamen against the left and those who disagree with
him--members of the "peace party," as he calls his adversaries--is that
they "have caused damage, and they [you] need to be held to account."

Nation editors and thinkers like Eric Foner, Richard Falk, Katha
Pollitt and Jonathan Schell, take heed. They are not alone as enemies of
Bennett--New York Times editors, Harvard (Bennett is an
ungrateful alum) and assorted scholars, Noam Chomsky, students and the
professoriate generally should watch out. They are targets in Bennett's
campaign for an inquisition, twenty-first-century style. He is concerned
that "the Foners of the United States" have led a minority of Americans
away from being true believers. As Bennett so indelicately puts it, "A vast
relearning has to take place," undertaken by everyone, especially
"educators, and at every level." "The defect" in our education and
morals "can only be redressed by the reinstatement of a thorough and
honest study of our history, undistorted by the lens of political
correctness and pseudosophisticated relativism." In other words, there
has to be a moral cleansing in America.

The word "reinstatement" does not tell us what Bennett is attempting to
reinstate, though. From Why We Fight we learn of Bennett's deep
distress at American education, where his notions of American history
seem less persuasive than they were in the days when nineteenth-century
historians acted as propaganda instruments for war, racism and America's
imperial superiority. Those were the days when "a vast relearning" was
not necessary. He quotes approvingly Professor Donald Kagan, the Yale
historian, who tells us that those who do not hold to their definition
of patriotism and their reading of history suffer from "failures of
character [emphasis added by Bennett], made by privileged people who
enjoy the full benefits offered by the country they deride and detest,
its opportunities, its freedom, its riches, but who lack the basic
decency to pay it the allegiance and respect that honor demands."
Bennett does concede at one point that while it is incumbent on those
who hew to the Kagan version of truth to point out the despicable
behavior of the naysayers, we must also "[respect] their right to be
irresponsible and even subversive of our safety."

There are other views of patriotism, of course. One was promulgated by
the leading American philosopher John Dewey, an independent thinker
given neither to religion nor to secular religions such as Communism. He
surely would have been measured for a Soviet gulag. But he would also
have been on Bennett's enemies list for his belief that scoundrels too
often fly the flag of patriotism and nationalist triumphalism:

On the side in which public spirit is popularly known as patriotism this
widening of the area of interest has been accompanied by increased
exclusiveness, by suspicion, fear, jealousy, often hatred, of other
nations.... The self interest of the dynastic and military class
persistently keeps the spark of fear and animosity alive in order that
it may, upon occasion, be fanned into the flames of war. A definite
technique has grown by which the mass of citizens are led to identify
love of one's own country with readiness to regard other nations as
enemies.... And in many cases, it is becoming clear that particular
economic interests hide behind patriotism in order to serve themselves.
So far has this feeling gone that on one side there is a definite
attempt to attach the stigma of "unpatriotic" to everything designated
international; to cultivate that kind of "hundred percent Americanism"
which signifies practically suspicion and jealousy of everything
foreign.

In other words, Americanism can serve as a code word for "contempt of
other peoples," Dewey concluded.

The disinterested observer must wonder whether it is really inaccurate to note
the emergence of dynastic classes whose political power is linked to the
intelligence community, the military and big business. It would be
absurd to deny at this point that there are classes and groups that
profit from war and military preparedness. It is equally naïve to
believe that the constitutional contract of civil liberties is so strong
that prosecutors, local police, freewheeling inquisitors and others will
not spy and inform on and harass the different and the dissident. War
mobilization is the perfect cover story for such abuses. The problem is
made worse because legal and structural changes in governing and
consciousness are legitimized through law, for example in the USA
Patriot Act. That is to say, the legacy of Bush will live long after he
returns to Crawford, Texas.

But what about the doubter? What about today's or next year's or next
decade's "little guy," a man like Winston in Orwell's 1984, who
didn't go along--or didn't know how to--because the contradictions
between the stories given from one year to the next were so profound
that he knew enough not to believe this year's lies? Suppose he
wondered why Ferdinand Marcos of the Philippines was our friend one year
and the next we helped overthrow him, or why the hapless former
Panamanian leader Manuel Noriega, a man once on the CIA payroll, became
the occasion for our invasion of Panama, ostensibly because of his
involvement with drug payoffs? The result was much destruction and the
deaths of several hundred Panamanians. Bennett's defense of violence
takes on frightening characteristics. Somehow he believes that, quoting
Orwell favorably, "Those who 'abjure' violence can only do so because
others are committing violence on their behalf." He goes on to wrap
himself in the comfort of the armed forces. But surely he can't mean
this about Panama, El Salvador, Colombia, etc. Violence was not being
committed there on behalf of those who objected here. Indeed, it is a
stretch to imply that these actions did anything for the American
people.

Imagine the naïve citizen who doesn't understand hypocrisy and
strategies of evasion, contradiction or double standards. That person
might wonder why we went to war in Afghanistan when the perpetrators of
the 9/11 destruction were for the most part Saudis. Referring to
Augustine and Jean Bethke Elshtain, Bennett claims that "not resorting to force leads to evils far greater than the one we
oppose." But surely it would be nice to know who the enemy is, and drop
the bombs on the correct culprit. Whether the naïve person who
holds such views and then organizes others to express their doubts
should be held without bail as a suspect is unclear from the Bennett
text. What is clear is that doubters should be shunned and punished.
They are raining on Bennett's "war party" (his term), a parade in which
he is a proud adjutant.

Bennett's animus toward his fellow Americans is unforgiving, especially
in reference to those who were part of the movements of the 1960s, which
had the effect of concretizing ideals into practice--and at no small
cost. Perhaps his anger at the movement's members stems from their
embrace of nonviolence and their use of--or stumbling into--a social
method that broke "facts" open and found values that contradicted the
stated democratic ideals of inclusivity, equality and sheer decency. It
is no wonder that this social method helps us and the young demystify
events, their causes and their implications. His disdain for the
peace party goes back to the Vietnam War. At that time, the peace party,
made up, in his eyes, of the flaccid and pusillanimous, didn't support the "bomb them"
back to the Stone Age" position of Gen. Curtis LeMay. Bennett, the angry
moralist, remains upset that the LeMay position didn't get much of a
hearing, although the general ran for Vice President with George
Wallace, and the tonnage of bombs dropped on Vietnam by the United
States was greater than the amount dropped in World War II. As Bennett
opines, it was the Gandhian nonviolence people of the peace party who
subverted an American victory in Vietnam because "those among us who
espoused the LeMay position were scarcely to be heard from." His
argument is uncomfortably reminiscent of the German generals and the
right during the Weimar Republic who claimed that the Germans lost World
War I because they were "stabbed in the back" by the left.

As a good Republican, Bennett bristles at those who might doubt the
motives and methods of the Bush Administration. After all, how could
anyone doubt those patriots who took power under questionable
circumstances, who had already used every sleazy trick to get one of
their fellow rightists onto the Supreme Court and vault into the White
House a man who'd lost the popular vote, installed, as it were, by a
5-to-4 decision of the Supreme Court? Because Bennett is a dogmatic man
he is not burdened with self-doubt but has a surfeit of faith. (Bennett
lets us know that he is a religious man, a Catholic who has no doubts
about his faith and his belief in the Catholic Church, its teaching and
activities. It is his kind of faith, religion itself, which he
understands to be the backbone of America, much the way other believers
throughout the world, such as Osama bin Laden, perhaps, link their faith
to their political judgments.)

To Bennett, 9/11 was a moment of clarity between good and evil. "Good
was distinguished from evil, truth from falsehood." But there was more
to the question. He was concerned that some said the United States
helped bring the disaster about through its foreign and military
policies. After all, the skeptics wondered, didn't the United States
train and militarily assist the radical fundamentalists against the
Soviet Union? And then didn't our assets turn against the United States
when Afghanistan was left a broken nation? And did the United States
overstay its welcome in Saudi Arabia, whose people include chief backers
of the radical fundamentalists? These were not idle questions, nor was
it idle and unpatriotic to analyze from top to bottom the ethos of
American invulnerability. The United States had placed its faith in a
forward defense. But on that terrible day, the idea of fighting wars on
other people's territory was severely damaged. Wouldn't these questions
suggest a comprehensive review of American foreign policy? But Bennett
the purist claims that he is not interested in policy. He is interested
in right and wrong, good and evil. Bennett, the consummate Washington
insider, is not one, apparently, to get his hands dirty with the
realities of policy-making and everyday life--i.e., what to do--although
working through his principles would have horrendous consequences for a
democratic society.

The reader may ask whether there is anything about which Bennett and I
agree. And here the answer is yes. Certainly the assault on American
cities was an atrocious attack by a gang of zealots. On why they thought
to undertake their suicide mission Bennett and I disagree. Perhaps the
perpetrators wanted to give the United States a lesson in cost-benefit
analysis to show that all the high-tech military equipment in the world
does not make the United States invulnerable. (Indeed, because of the
interconnectedness of our communications system, the United States is as
vulnerable as any Third World country.) The zealots may have been imbued
with an anti-Western spirit that has rankled for over a thousand years
and finally erupted against the United States, paradoxically for the
same reason Bennett has had grave questions about American society: its
relativism, sensuality, individuality and lack of religious discipline.
Relativism has acquired a vulgar connotation, and Bennett uses its
burlesqued meaning as a stick against nonbelievers and the peace party.
He compares Stanley Fish, the dean of liberal arts and sciences at the
University of Illinois at Chicago, a leader of the postmodernist school
of literary theory, to mass murderer Charles Manson, who said that he
thought no man could really know and represent another, "to communicate
one reality through another, and into another, reality."

"Stanley Fish himself could hardly have put it better," writes Bennett:

Do we, then, have no independent and objective standard for determining
why Professor Fish should be allowed to teach at a prestigious
institution of higher learning while Charles Manson should languish in
prison just because he followed a doctrine he shares with Professor Fish
to its logical conclusion--the conclusion that since everything is
relative, everything can be justified and all is permitted.

One does not have to be a postmodernist, which I am not, to be deeply
offended by Bennett's comment. Bennett picks up on Leszek Kolakowski's
views that to follow principles to their logical conclusion can lead to
disaster. But Bennett overlooks a fundamental truth. The question is how
to determine an "independent and objective standard," what goes into
that judgment and who decides what that standard is. By analyzing this
set of questions we learn our own weaknesses, as well as those of the
standard setters and of those who seek to impute their values to an
objective reality. We can analyze and judge, from our perspective, actions and
behaviors. People can then choose between Fish and Manson.

Right and wrong may come from God, or from the moral sentiments of which
the philosophers Francis Hutcheson and David Hume spoke. These
sentiments, better stated as capacities that people have, may be
degraded by social roles, institutions, laws, poor upbringing, whatever
causes a person to turn toward the pathological. Obviously, if one
believes in the Enlightenment and historical progress, ways of acting do
emerge that are acceptable as against actions that are no longer
acceptable either as a result of social agreement or because there are
moral sentiments that make their way through historical struggle.
Bennett, who appears to be all over the map philosophically, does hold
as a constant his belief in Plato, who in turn held tightly to the idea
of an antidemocratic society, one based on hierarchy and strict class
lines. Plato, according to Bennett, disposed of the relativism that his
apostle now sees as the cause of our decay. But what exactly is
relativism? Bennett also quotes approvingly Abelard's dialectical idea
of sic et non (the debate surrounding opposite
propositions) as being the probable "basis of all learning itself...of
our very outlook on the world." But Abelard's method can be read two
ways. One is that the questions undertaken invariably lead to the same
question expressed in new ways (aporia); the other is that the
method is supposed to yield the right answer as expressed by a church
that defines what reason and faith are.

Relativism is really a special form of democratic skepticism that
encourages us to examine and extend our inquiry beyond the appearance of
an event even in the case of recognizable and accepted facts. The
relativist points out that the fact can be seen from different vantage
points, and, more important, that a fact has within itself an entire
story that can and should be explored. Now the question is, how does
this apply to 9/11?

First, there is the fact of its occurrence. In a policy sense it becomes
critical for us to understand how and why the event occurred, what the
implications are, what its immediate causes were. For all its flaws,
relativism is an attempt to move toward a coherent, if invariably
incomplete, picture of what happened and what lay behind the event. It is the only
way we can learn what to do. It takes a dim view of professed notions of
what is "good" and "evil" not because they don't exist but because ideas
of an absolutist nature that are put into practice can lead to the most
horrendous consequences. It is why law, including international law, is
so important, for it imposes boundaries even for the protection of the
evildoer. In policy terms, matters of good and evil are transposed into
causes, consequences and manageable categories for people who cannot
know the whole truth, and for people who seek a means of understanding
rather than mere retaliation or dogma.

This form of analysis leads to certain conclusions. The first is that
9/11 almost immediately became a social and political question of what
to do. It was a moral question for those caught between their pacifist
beliefs and their concern for justice for their fellow citizens. For
Bennett, that terrible day was the moment not only to get mad--angry, in
his terms--but to get even. Bennett is obsessed with the idea that there
is not enough anger in American society. We are all caught in this
unmanly process of Roger Fisher and William Ury's ideas of "getting to
yes," that is, finding avenues of agreement between people, states and
groups. If this formulation does not have value then humanity cannot
escape the vise of dominator/dominated. Nor can it find ways of controlling and sublimating anger,
violence and rage. Nor will humanity be able to escape forever the
further use of nuclear weapons.

There is a smidgen of truth to Dean Rusk's and Bennett's idea that the
American people have to be pulled kicking and screaming into war. But
this belies the work of a state that has been involved, depending on
one's count, in more than 150 interventions and wars since its founding.
Only someone given to deceiving himself would not recognize the American
state as a warrior state. There are many reasons Bennett chooses not to
see this reality--that is to say, in Bennett's history book there are
many blank pages. Thus, the United States made continuous war on Indians
for the better part of a hundred years, always with its eye on the
prize: to take as much land as it could from them. The Mexican-American
war can hardly be seen in a different light. This is an old story told
well and critically by historians--a story Bennett would sugarcoat for
the young, with claims of an American destiny. Is that what the "vast
relearning" is to be about? Whether the United States had high moral
purpose or crass economic motives in employing violence and deceit does
not change the reality about the means used.

It should go without saying that there is a matter of supreme importance
for Bennett with which I do agree. It is that there is no place for
anti-Semitism in twenty-first-century civilization--whether it comes as
the virulent form that has erupted among too many in Muslim nations or
whether it exists as a residue in American politics (peace to the memory
of Richard Nixon). But it's there, whether in the Middle East, Europe or
in American politics.

This anti-Semitism does not excuse Israel's foreign and military
policies, which put at risk the state of Israel, in my view; but Bennett
is among the staunchest of Israel's supporters. He says there is "an
understanding, almost religious in nature, that to our two nations above
all others has been entrusted the fate of liberty in the world." There
is a consistency in his view. He wants no appeasement toward the
Palestinians, seeking their subjugation and cautioning the Bush
Administration; I suppose that weak fellow General Powell had better
watch his step in his concern to temper this ugly war. Or maybe it's his
back.

Here the prudent analyst might have learned something from Vietnam.
There was much pressure to remove the corrupt and seemingly feckless
Diem from his position. And after he was removed, with American backing,
the leadership structure of South Vietnam ended in turmoil. We may
expect the same to occur if the Israelis, with American concurrence,
manage to force into place among the Palestinians a Middle East version
of a puppet leader. Bennett's view of American foreign policy demands
that we look only at the depredations of Osama, Palestinian terrorists
and certain nations on his enemies list. He claims that he is interested
in objectivity, but he is unprepared or unwilling to look at those
issues that may or may not have salience. This has little to do with
good and evil, except as those words are used to obfuscate. The moral
asymmetry he assumes should be surrendered, so that the universal
standards Bennett says he is for can be applied to the United States as
well.

Another place of agreement between us is in Bennett's recognition that
through enormous struggle, the United States has sought to concretize
its shifting ideals of freedom and racial and economic justice into the
reality of everyday life. There are some exceptions, but there is little
to suggest that those who hold Bennett's views were the ones who were
part of the movements that changed the face of this nation into one that
others throughout the world admire for its freedoms. These struggles
were paid for dearly by the various social movements so the likes of
Bennett and me could live in relative comfort. It was not the
right--whether the ultramontane elements of Catholic hierarchy, Judge
Gary, J. Edgar Hoover, Joe McCarthy, Phyllis Schlafly, Antonin Scalia,
the George Bushes or William F. Buckley--that made this nation one that
championed "intellectual, moral and political freedom," to use the
philosopher A.E. Murphy's phrase.

But back to "why we fight" in international terms: Being a believing
Catholic, Bennett is concerned that "just war" be recognized as a
doctrine that has modern utility, one applicable to American reprisals.
As ironic as it may appear, "just war" is a weak reed to hang from in
order to support a war without end. Just war is predicated on struggles
between nations; it is not a struggle between a gang and a nation. A
just war has a beginning, middle and end, and it is not supposed to do
more damage than the original harm. Bennett argues that the opinions of
others (sometimes good to have) should in no way deter any unilateral
action the United States--that is to say, those who control the reins of
power--cares to take. Bennett has thus adopted just war as his
rationalization for militarism.

One last word. An American-initiated alternative must be offered to that
part of the world that is writhing in pain. It is one that gets rid of
weapons of mass destruction through general disarmament. (This includes
our own.) It is one that supports the pacific settlements of disputes.
This does not mean the fashioning of imperial law but the expansion of
international law. That the United States does not support the
International Criminal Court and has pulled out of various international
treaties is not a good sign for the United States or the world's future.
The alternative includes international economic rights, the buildup of
regional forces to act under the aegis of the UN Security Council,
massive health and economic assistance, and a system that makes clear
that intelligence is a feature of a free society--it is public property,
not that of the few or of the state. The alternative recognizes and
supports claims of plural cultures without undercutting in any way the
ideals and struggles that have defined human rights in the United
States, namely women's rights, civil liberties, civil rights, labor
rights, gender rights, environmental rights. It recognizes that
education, housing, religion, free inquiry and health are rights to be
expanded and cherished. This charge is not likely to be fulfilled by
calls for wars without end and claims of patriotism meant to mystify,
and worse.

For readers of this magazine and millions of other Americans, the
initial horror of September 11 was compounded by the sobering
realization that George W. Bush would be at the helm for the aftermath.
With a cabal of fundamentalists, crackpots and fascists whispering in
his ear, Dubya became the world's most dangerous weapon. Perhaps, we hoped, the rather low esteem in which he was held by the American people, the news media and much of Congress might save us.

No such luck. Congress and the mainstream media lined up behind him in
lockstep. Instances of his much-vaunted ignorance wound up on the
cutting-room floor. One cable network ran daily promos of Bush spurring
on World Trade Center rescue workers, declaring that he had "found his
voice" amid the rubble. Pundit Peggy Noonan declared Bush's post-9/11
speech to Congress no less than "God-touched"; he had "metamorphosed
into a gentleman of cool command...[with] a new weight, a new gravity."
Yet, despite the rise in his approval ratings, many harbored lingering
doubts about the extent to which a "new" Bush existed.

Among the many critical viewpoints drowned out in the wake of the
attacks was Mark Crispin Miller's The Bush Dyslexicon, the first
systematic critical examination of the President's mistakes,
misstatements and malapropisms. Fortunately, this clever volume has been
reissued with updated material on Bush's sayings and doings since that
time.

Bush's propensity for mangling the English language is no secret to
anyone. No doubt we all have our favorites, which we've gleefully shared
with friends, family, co-workers and comrades. Miller, a professor of
media ecology at New York University, has compiled what is clearly the
largest collection of Dubya-isms to date, among them these treats:

§ On his qualifications to be President: "I don't feel I've got all
that much too important to say on the kind of big national issues"
(September 2000); and "Nobody needs to tell me what I believe. But I do
need somebody to tell me where Kosovo is" (September 1999).

§ On coping with terrorism and other threats: "[We'll] use our
technology to enhance uncertainties abroad" (March 2000); and "We'll let
our friends be the peacekeepers and the great country called America
will be the pacemakers" (September 2000).

§ On Russia: "And so one of the areas where I think the average
Russian will realize that the stereotypes of America have changed is
that it's a spirit of cooperation, not one-upmanship; that we now
understand one plus one can equal three, as opposed to us, and Russia we
hope to be zero" (November 2001).

Miller vividly illustrates the depth of ignorance--as opposed to
stupidity--that leads this President away from direct contact with
journalists whenever possible. Miller also demonstrates that Bush's
"problem" with language is not easily separated from his "problem" with
policy and politics. If we focus exclusively on his stormy relationship
with proper grammar and logical sentence structure, Miller argues, we
risk underestimating what his presidency means for the United States and
the world. "Our president is not an imbecile but an operator just as
canny as he is hard-hearted.... To smirk at his alleged stupidity is,
therefore, not just to miss the point, but to do this unelected
president a giant favor."

Loosely organized by subject matter--"That Old Time Religion," "It's
the Economy, Your Excellency"--the book's chapters chronicle several
intertwined aspects of the chief executive: the politics of style that
characterize his behavior and demeanor; the media's role in crafting him
as a valid presidential candidate and, post-9/11, a changed man; the
Bush family's political legacy and troubled public image; and, finally,
the real meaning behind Dubya's flubs and gaffes.

Miller documents in detail how major news outlets have from the
beginning provided a heavily edited public transcript of Bush's
statements and have helped steer viewers away from his lack of policy
knowledge. Even more disturbing are the ways the media have simply
reported Bush's "ideas" without comment. Commenting on a Kansas
school-board vote to end evolution's exclusivity in the state science
curriculum (later overturned), for example, Bush declared, "I personally
believe God created the earth" (September 1999); later, he opined,
"After all, religion has been around a lot longer than Darwinism"
(September 2000).

The abundant evidence Miller provides of Dubya getting pass after pass
in the media seems particularly alarming. In addition to general
"cover," Cokie Roberts, Sam Donaldson and other famed "journalists" and
newspeople consistently let Bushisms fly with little or no comment. Note
this flub on the fate of Elián González's potential
citizenship during an airing of ABC's This Week:

Well, I think--I--It--listen, I don't understand the full ramifications
of what they're going to do. But I--I--I--think it'd be a--a--a
wonderful gesture. I guess the man c--the boy could still go back to
Cuba as a citizen of the United States.... I hadn't really thought about
the citizenship issue. It's an interesting idea, but if I were in the
Senate, I'd vote aye.

Roberts gave no response to the nonsensical Bush, nor did Chris Matthews
in this bizarre MSNBC Hardball episode in May 2000:

Matthews: When you hear Al Gore say "reckless, irresponsible," what do
you hear from him, really?...

Bush: I hear a guy who's not confident in his own vision, and,
therefore, wants to take time tearing me down. Actually, I--I--this may
sound a little West Texan to you, but I like it when I'm talking about
what I'm--what I--

Matthews: Right.

Bush:--when I'm talking about myself, and when he's talking about
myself, all of us are talking about me.

Matthews: Right.

Of course, these snippets pale in comparison to the alacrity with which
the media papered over the fact that our current President was not
elected by a majority of the populace.

This is quite a contrast with the dis-ease with which the fourth estate
treated Bush's predecessors. Miller traces the phenomenon back to
Richard Nixon, whom he calls the "godfather" of Bush-era politics. Like
Bush, Nixon was not a man well liked by the television cameras; nor, as
the White House tapes reveal, was he an especially enlightened man, with
his pedestrian literary interpretations, paranoid hatred of Jews,
virulent racism, sexism and homophobia. "You know what happened to the
Greeks!?" Nixon bellowed to Haldeman and Ehrlichman: "Homosexuality
destroyed them. Sure, Aristotle was a homo." Nixon's angry and, as
Miller describes it, "low-born" personality manifested itself throughout
his televisual life, particularly during the scandal that brought down
his presidency.

Inheriting this image problem was Dubya's patriarch, George Bush senior,
who not only worked for Nixon politically but also shared in his
televisually and verbally handicapped style. Whereas Nixon came off as a
classless bully, Bush suffered from sissiness, the infamous Wimp Factor:
"Bush's posh class background was his major TV problem, the cameras
mercilessly outing the big pantywaist within.... In fact, the Bush clan,
although fabulously wealthy, is not aristocratic enough to do well on
TV, if by that modifier we mean elegant and polished. First of all, the
Bushes often have let fly in the most boorish way--as when Barbara Bush
hinted coyly that Geraldine Ferraro was a 'bitch.'"

In an effort to analyze Bush Sr.'s wanna-be aristocratic demeanor,
Miller proceeds to call him a "Yalie faggot" and argues that the Bush
family's privilege put the elder Bush in the toughest of spots relative
to his macho Republican predecessors. On losing a straw poll in Ames,
Iowa, for example, Bush noted, "A lot of people who support me were at
an air show, they were off at their daughter's coming-out party, they
were teeing up at the golf course." Miller makes it abundantly clear how
frequently Bush Sr. not only missed, but miscalculated, the mark.

The point is that on television, class is not an economic issue but a
style issue. Given what Miller terms the Kennedy "savoir-faire," the
Bush family is at a distinct image disadvantage. Unfortunately, Miller
frequently analogizes Bush's moneyed privilege with a certain kind of
homosexuality--offensive behavior in a critic himself trying to "out"
Nixon's ignorance and homophobia. And he contrives that Barbara's
complaining of another woman's bitchiness is somehow anathema to
aristocratic behavior.

At root, these strangely aristocratic cheap shots smack of a kind of
backhanded liberal Kennedy worship. It is impossible to miss the
implication that America's royal family is the standard-bearer of
sufficiently presidential (read: aristocratic and classy) demeanor.
Given that JFK was an ethically challenged, commie-hunting political
lightweight, Miller's willingness to engage in macho class snobbery
points to the disturbing presence in the book of a crass partisanship
better suited to a Democratic media flack than a scholar of the left.

Symptomatic of this is the fact that for much of the book Miller seems
to forget the high degree of political convergence between Bush and
neoliberal New Democrats like Al Gore. One cannot help wondering if
Miller thinks a Gore Administration would not have responded to
September 11 with military action, and with legislation that expanded
the already egregious powers given the government in the
Clinton-sponsored Counter Terrorism Initiative of 1995. This see-no-evil
quality of the book is all the more telling because it represents the
very type of amnesia that Miller says afflicts us all after years of
corporate-led media idiocy. When he harps on Clinton's downfall at the
hands of the right without sufficiently stressing Bill's own
never-ending rightward shift throughout his eight years in office, one
wonders if Miller's own political memory lapsed from 1992 to 2000. It is
not until near the end of the book that he turns tail and concedes Al
Gore's rather striking resemblances to a war-happy Republican candidate,
as Gore "spoke more expertly, but just as deferentially, straining to
out-hawk the jut-jawed W, arguing that he would raise the military
budget even higher and retrospectively saluting the preposterous
invasions of Grenada and Panama."

Finally, Miller's critique of the "politics of style" turns in upon
itself. Miller obtains the lion's share of Bushisms from precisely those
style-obsessed media outlets he accuses of bringing down Clinton and
building up Bush: the New York Times, Talk, Glamour, 20/20 and Larry King Live appear all over
Miller's source citations, and he is just as dependent on, and dedicated
to, the politics of style as they are. At the end of the book, one
cannot help suspecting that Miller's beef with the politics of style is
that it took down his guy while it has yet to take down the other guy.

This hedging makes crucial parts of the book read like sour grapes and
detracts from the moments of sharp observation that Miller offers
elsewhere. He clearly grasps the very real danger of the Bush
Administration--his most intriguing observation is that Bush is not
always a rhetorical bumbler. As Miller conducts his repeated dissections
of various Bushisms, it becomes clear that this man is in fact possessed
of considerable guile. In an interview with Charlie Rose, in August
2000, Bush speaks about Saddam Hussein:

Rose: OK. What if you thought Saddam Hussein, using the absence of
inspectors, was close to acquiring a nuclear weapon?
Bush: He'd pay a price.
Rose: What's the price?
Bush: The price is force, the full force and fury of a reaction.
Rose: Bombs away?
Bush: You can just figure that out after it happens.

Here we see Dubya apparently willing and even eager to bomb a country
with which we are not at war--yet. Two years before the recent
enunciation of a "Hitting First" policy of pre-emption and even more
recent revelations of an existing attack plan from land, sea and air,
Bush's warring language was unambiguous. Likewise, when speaking of
anger and vengeance post-9/11, he is nothing if not clear, and his
dyslexic tendencies are nowhere in evidence. Down-homish and
cringe-inducing though it may be, "evildoers" is a phrase whose meaning
is singular, and Bush's repeated use of it has not been subject to the
usual emendations or "clarifications" of his handlers. Similarly, Bush
famously threatened to "smoke 'em out" of their holes, another
inappropriate, unpresidential phrase; yet no one was confused about
what it meant for Al Qaeda.

The Bush Dyslexicon makes it clear that even after the 11th of
September, Bush's personality was far from "God-touched" or even
transformed; in fact, provided with the opportunity to inflate his
defense budget, savage Social Security and go after the Taliban as if in
a 'coon hunt, Bush was just this side of gleeful at the prospect for
revenge. Hardly had the mourning American public time to collect itself
before Dubya encouraged the military to "smoke 'em out of their caves,
to get 'em runnin' so we can get 'em" in order, as Bush himself put it,
to "save the world from freedom."

Given the potentially dire consequences of Bush's post-9/11 policy
agenda, though, it seems strangely incongruous that Miller so often goes
for the breezy, snappy rhetoric and eschews a more forthrightly
analytical tone. It may be therapeutic to laugh in the face of danger,
but somehow these do not seem to be particularly funny times.

A half-century ago T.H. Marshall, British Labour Party social theorist,
offered a progressive, developmental theory for understanding the
history of what we have come to call citizenship. Taking the experience
of Englishmen to define the superior path, he postulated a hierarchy of
citizenships: civil rights, political rights and social rights. The last of these became the
category in which twentieth-century Europeans have understood claims on
the state to health, welfare, education and protection from avoidable
risk. They conceived of these citizenships as stages in an upward climb
toward an ever better democracy.

Marshall's schema looked only at European men. Feminists have pointed
out that women did not achieve citizenship in this order. In fact, women
often won some social rights--for example, protective legislation and
"welfare"--before achieving political ones such as the right to vote.
And women's individual civil rights were often overwhelmed and even
suppressed by legally imposed family obligations and moral sanctions.
(For example, a century ago courts generally interpreted the law of
marriage to mean that women were legally obligated to provide housework,
childcare and sexual services to husbands.) Equally problematic were
Marshall's obliviousness to British imperialism and what it meant for
Third World populations, including the fact that he conceived of the
British as civilizers rather than exploiters, and his apparent ignorance
of the conditions of second-class citizenship for racial/ethnic
subordinates within nation-states. In short, his historical hierarchy
was highly ideological.

But no one has yet done what Alice Kessler-Harris has in her newest
book, In Pursuit of Equity, reaching beyond Marshall and his
critics to suggest a new concept, economic citizenship. In this history
of how women have been treated in employment, tax and welfare policy,
Kessler-Harris--arguably the leading historian of women's labor in the
United States--synthesizes several decades of feminist analysis to
produce a holistic conception of what full citizenship for women might
entail. In lucid prose with vivid (and sometimes comic) illustrations of
the snarled thinking that results from conceiving of women as
dependents--rather than equal in heading families--she offers a vision
of how we can move toward greater democracy. In the process, she also
shows us what we are up against. Her book illustrates brilliantly how
assumptions about appropriate gender roles are built into all aspects of
policy.

She aims to resolve what is perhaps the central contradiction for
policy-makers and policy scholars who care about sex equality: the
contradiction between, on the one hand, valuing the unpaid caring work
still overwhelmingly performed by women and, on the other hand, enabling
women to achieve equality in wage labor and political power. Today, for
example, although all feminists oppose the punitive new requirements of
the policy that replaced Aid to Families with Dependent Children,
repealed in 1996, they are divided about what would constitute the right
kind of welfare system. Some find it appropriate that all adults,
including parents of young children, should be employed, assuming they
can get a living wage and good childcare. Others, often called
maternalists, believe a parent should have the right to choose full-time
parenting for young or particularly needy children. Behind this difference lie two different visions of
sex equality--one that emphasizes equal treatment of the sexes and individual rights
and responsibilities, another that seeks to make unpaid caring labor,
notably for the very young, the old and the ill, as honorable and valued
as waged labor.

Kessler-Harris would resolve this contradiction through a labor-centered
view of citizenship, a notion of economic citizenship based on equity,
or fairness, in the valuation of socially worthy labor. Previously, the
policy proposal closest to this principle of equity was "comparable
worth." Second-wave feminists saw that the Equal Pay Act of 1963 and
Title VII of the Civil Rights Act of 1964 had failed to equalize male
and female wages. Because the labor force is so segregated, and female
jobs are so consistently undervalued, equal pay alone cannot produce
justice to women (or men of color). The comparable-worth strategy called
for equal wages for work of comparable expertise and value, even when
the jobs differed. For example, consider the wage gap between truck
drivers and childcare workers. Truck drivers earned much more even than
registered nurses, whose training and responsibility were so much
greater. The women's movement's challenge to inequality in jobs took off
in 1979, when Eleanor Holmes Norton, then head of the Equal Employment
Opportunity Commission, called for evaluations of job skills to remedy
women's low wages. But her successor, Clarence Thomas, refused to
consider comparable-worth claims. Although some substantial victories
were achieved in state and union battles--for example, the American
Federation of State, County and Municipal Employees (AFSCME) won wage
increases averaging 32 percent and back pay retroactive to 1979 for
Washington State employees, 35,000 of whom shared a $482 million
settlement--the comparable-worth campaigns faded in the 1980s.

But even had the comparable-worth strategy been adopted, it could not
have recognized the hours spent in caring for children, parents,
disabled relatives and friends, not to mention the work of volunteering
in underfunded schools, cooking for homeless shelters, running kids'
basketball teams. Kessler-Harris is arguing for a citizenship that
respects unpaid as well as paid labor.

She has worked out the arguments in this book systematically over many
years. Several years ago, an article of hers with the deceptively simple
title "Where Are All the Organized Women Workers?" enlarged the
understanding of gendered "interests" from an exclusive focus on women
to take in men as well. She demonstrated that so long as men dominate,
aspirations understood and characterized as class interests often
express gender interests equally strongly. She uncovered how unions
often operated as men's clubs, built around forms of male bonding that
excluded women, primarily unconsciously but often consciously, too. In
this new book she extends her analysis of men's gendered interests to
reveal how labor unionists' inability to stop defending the privileges
of masculinity has held back labor's achievements. One vivid example
was unions' opposition to state-funded welfare programs and
health-and-safety regulation, stemming from anxiety that they would
deprive workers of their manly independence. Of course, unionist
resistance to state control over workplace and work-centered programs
also derived from a defense of workers' control. But this vision of
workplace democracy was inextricably masculinist, and workingmen's
understanding of their dignity rested on distinguishing themselves from
women.

In A Woman's Wage, Kessler-Harris showed that both Marxist and
neoclassical economics were mistaken in their joint assumption that the
wage was somehow a consistent, transparent token of the capital/labor
relation. By contrast, wage rates, wage systems, indeed the whole labor
market were constructed by gender interests and ideology as well as by
supply and demand or surplus value or the actual cost of subsistence. A
wonderful example from her new book: The Hawthorne experiments of the
late 1920s have been interpreted to show that women workers were more
tractable than men. In one study, a group of women workers adapted more
cooperatively and quickly to a speedup than did a group of male workers.
In seeking to explain this behavior, investigators examined the women's
home lives and even their menstrual cycles, while paying no particular
attention to the fact that the collective rather than individual wage
structure imposed on them was such that higher productivity could only
increase their total wages, while the men's piece-rate wage structure
offered no such guarantee--in fact, the men had reason to expect that
the piece rate would be lowered if they speeded up. We see here not a
"natural" gendered difference arising informally from culture and
socialization, but female and male workers responding rationally to a
gendered system imposed by employers.

In Pursuit of Equity argues that no one can enjoy civil and
political rights without social and economic citizenship. Marshall's
alleged gradual expansion of civil and political rights not only
excluded many others but actually strengthened women's exclusion from
citizenship. One fundamental premise of democratic capitalism--free
labor--was never fully extended to all women, whose labor was often
coercively regulated, not only by husbands but by the state.
Kessler-Harris shows how free labor developed in tandem with the "family
wage" ideal, that is, that husbands/fathers should earn for the entire
family and that women's destiny was domestic unpaid labor. The correlate
was that men "naturally" sought economic and social independence while
women "naturally" sought dependence. Ironically, most feminists of the
nineteenth century went along with this dichotomy and tried to root
women's citizenship in their essential family services rather than in
the free-labor definition of independence. That is, they argued for
rights on the basis of women's spiritual and material work in unpaid
caretaking labor.

The book demonstrates particularly effectively how the dominant modern
gender system--the family-wage norm--made it difficult for women to
become full citizens. In one closely documented section, Kessler-Harris
exposes the condescending and defensive assumptions of those who drafted
the Old Age Insurance program (which later became Social Security). The
drafters agreed, for example, that the widow of a covered man with young
children should be able to receive three-quarters of his pension until
she remarried or the children reached 18. A widow without children
lacked any rights to her husband's pension. But if this pension was her
husband's by right, as the designers insisted, then why were his heirs
not entitled to all of it as with all other parts of his property? If
the widow remarried, she would not have to give up the bank account or
house or car he had left her--why should she give up a Social Security
pension? One Social Security drafter argued that retaining such an
annuity after remarriage would make widows "a prize for the fellow that
has looked for it," assuming that women are entirely passive in marriage
decisions! The drafters were all convinced that "once a woman was no
longer dependent on the earnings of a particular male (dead or
alive)...his support for her should cease." In other words, his status
as breadwinner should continue even after his death. The drafters
rejected the idea of granting all widows of covered men an equal stipend
or one based on the number of children. It was important for her
benefits to be calibrated to his earnings so as to feed "the illusion
that families deprived of a father or husband would nevertheless
conceive him...as a continuing provider." "Why should you pay the widow
less than the individual himself gets if unmarried?" Because "she can
look after herself better than he can." Imagining women as less capable
of handling money than men, the designers removed the option of a
lump-sum benefit to widows, requiring them, unlike men, to receive
monthly stipends. To avoid "deathbed marriages," they allowed a widow to
collect only if she had been married and living with her husband for at
least a year before he died.

The concern with male status was reflected particularly comically in
discussions about the age at which a wife could start to receive her
share of her husband's benefits. Some argued for an earlier "retirement"
age for women because if both men and women were eligible at 65, this
would mean that men with younger wives--a common phenomenon--might not
get their full pension for a number of years after they retired. But
others argued that since men who married much younger women were more
likely to be those who had married more than once, granting women an
earlier retirement date might reward these men over single-marriage
retirees.

Several decades ago economist Heidi Hartmann pointed out that patriarchy
was as much a system of power and hierarchy among men as a male-female
relation, and Kessler-Harris confirms that insight. For example, the
entire debate about whether married couples should be able to report
separate incomes for IRS purposes concerned the inequalities this would
create between men with employed wives and men with nonemployed wives.
Fairness to women was not a prominent concern. The fact that employed
women's old-age insurance benefits were restricted according to their
marital status while men's weren't "did not seem like sex discrimination
[to the Social Security designers] but rather like equity to men."

At the core of In Pursuit of Equity is the understanding that
what is "fair" is historically changing. The problem we face today is
not that men deliberately built policies to subordinate women but that
when our basic economic policies were established, men and women alike
tended to see male breadwinning and female domesticity as "fair." That
standard is far, far from reality today. One result is a double standard
in which supposedly ideal family life, requiring a full-time mother, is
a privilege of wives of high-earning husbands.

In the United States, the resultant damage is worse than in Europe,
because here many fundamental aspects of citizenship flow from the labor
market. "Independence" today is generally defined as earning one's
living through wages, despite the fact that the resulting dependence on
employers leaves workers as vulnerable as, if not more vulnerable than,
dependence on government stipends. Social rights vital for survival,
such as medical insurance, retirement pensions and workers'
compensation, typically derive from employment in this country, in
contrast to most developed countries, which provide such help as a
matter of right to all citizens or residents. This is one way in which
American wage workers, as Kessler-Harris says, were "in a different
relationship to the constitution than those who did care-giving work."
As a result the development of democratic capitalism, even the growth of
working-class power in some ways failed to strengthen women's economic
citizenship, even weakened it. Indeed, she shows how victories against
sex discrimination in the labor force in the 1960s inadvertently
confirmed the assumption that all women could and should work for wages,
thereby contributing to the repeal of welfare without creating the
conditions that would make it possible for poor women to support
themselves through employment.

This gendered citizenship became more visible and more obnoxious to
women as wage-earning became the female norm and as "alternative
families" gained political clout. For example, if every individual were
entitled to an old-age pension and unemployment compensation, we
wouldn't have to struggle about the inheritance rights of gay partners
or stay-at-home parents' need for support. Even today, banning sex
discrimination is difficult because it is difficult to get agreement on
what constitutes discrimination. In a few cases division among feminists
has held back the struggle. Kessler-Harris ends the book with a brief
reprise of EEOC v. Sears, Roebuck & Co., a 1980s marker of
this division and a case in which she herself played a significant role.
Sears admitted that very few women held any of its well-paying
commission sales jobs but argued that women were not interested in these
jobs because the positions were competitive, pressured, demanding.
Another historian of women testified for Sears against the women
plaintiffs, using her expertise to argue that women's primary attachment
to unpaid domestic labor led them to want only jobs which did not
conflict with it. Her arguments illustrated vividly the continuing
influence of this emphasis on male/female difference, not necessarily as
"natural" or essential but nevertheless beyond the appropriate scope of
legal remedy. Sears won the case.

There is one pervasive absence in Kessler-Harris's book--race--and the
omission weakens the argument substantially. Her understanding of how
the family-wage ideal works would have to be substantially complicated
if she made African-American women more central, for they were rarely
able to adopt a male breadwinner/female housewife family model and often
rejected it, developing a culture that expects and honors women's
employment more than white culture. Mexican-American women's experience
did not fit the family-wage model either, despite their reputation as
traditional, because so many have participated in agricultural and
domestic wage labor throughout their lives in the United States. Equally
problematic to the argument, prosperous white women who accepted the
family-wage model often didn't do unpaid domestic labor because they
hired poor immigrants and women of color to do it for low wages. These
different histories must affect how we envisage a policy that recognizes
labor outside the wage system, and they need to be explored.

One aspect of Kessler-Harris's economic citizenship concept is being
expressed today by progressive feminists trying to influence the
reauthorization of Temporary Assistance for Needy Families (TANF), the
program for poor children and their parents that succeeded AFDC. We are
pushing a House bill that would recognize college education and
childcare as work under the new welfare work requirements. This book is
a sustained argument for that kind of approach and should help it become
part of the policy discussion. It probably won't win. Some will call it
unrealistic. But today's policies are already wildly unrealistic, if
realism has anything to do with actual life. If we don't begin now to
outline the programs that could actually create full citizenship for
women, we will never get there.

One of the most persistent myths in the culture wars today is that
social science has proven "media violence" to cause adverse effects. The
debate is over; the evidence is overwhelming, researchers, pundits and
politicians frequently proclaim. Anyone who denies it might as well be
arguing that the earth is flat.

Jonathan Freedman, professor of psychology at the University of Toronto,
has been saying for almost twenty years that it just isn't so. He is not
alone in his opinion, but as a psychologist trained in experimental
research, he is probably the most knowledgeable and qualified to express
it. His new book, Media Violence and Its Effect on Aggression,
surveys all of the empirical studies and experiments in this field, and
finds that the majority do not support the hypothesis that violent
content in TV and movies has a causal relationship to real violence in
society. The book is required reading for anyone who wishes to
understand this issue.

I should say at the outset that unlike Freedman, I doubt whether
quantitative sociological or psychological experiments--useful as they
are in many areas--can tell us much about the effects of something as
broad and vague in concept as "media violence." As a group of scholars
put it recently in a case involving censorship of violent video games:

In a field as inherently complex and multi-faceted as human aggression,
it is questionable whether quantitative studies of media effects can
really provide a holistic or adequately nuanced description of the
process by which some individuals become more aggressive than others.

Indeed, since "media violence" encompasses everything from cartoons,
sports and news to horror movies, westerns, war documentaries and some
of the greatest works of film art, it baffles me how researchers think
that generalizations about "effects" can be made based on experiments
using just one or a few examples of violent action.

Freedman, by contrast, believes that the experimental method is capable
of measuring media effects. This may explain why he is so indignant
about the widespread misrepresentations and distortions of the research
data.

He explains in his preface that he became interested in this area by
happenstance, and was surprised when he began reading the research to
find that its results were quite the opposite of what is usually
asserted. He began speaking and writing on the subject. In 1999 he was
approached by the Motion Picture Association of America (MPAA) and asked
to do a comprehensive review of all the research. He had not previously
received organizational support and, as he says, "was a little nervous
because I knew there was a danger that my work would be tainted by a
connection with the MPAA." He agreed only after making it clear that the
MPAA "would have no input into the review, would see it only after it
was complete, and except for editorial suggestions, would be forbidden
to alter what I wrote. Of course," he says,

they asked me to do the review, rather than someone else, because they
knew my position and assumed or at least hoped that I would come to the
same conclusion after a more comprehensive review. But there was no quid
pro quo. Although I was nervous about being tainted, I am confident that
I was not. In any case, the conclusions of this review are not different
from those of my earlier review or those I expressed in papers and talks
between 1984 and 1999.

The book proceeds meticulously to examine the approximately 200 studies
and experiments that Freedman was able to find after an exhaustive
search. (He suggests that the exaggerated numbers one often
hears--1,000, 3,500 or simply "thousands" of studies--probably derive
from a statement made by psychologist John Murray in the early 1980s
when the National Institute of Mental Health sponsored a review of the
media violence research. Murray said that there were about 2,500
publications of all kinds that were relevant to the review. This is far
different, of course, from the number of empirical experiments and
studies.)

Freedman begins with laboratory experiments, of which he found
eighty-seven. Many commentators have noted the artificiality of these
experiments, in which snippets of a violent film or TV show are shown to
one group of viewers (sometimes children, sometimes adolescents or
adults), while a control group is shown a nonviolent clip. Then their
level of "aggression" is observed--or rather, something that the
experimenters consider a proxy for aggression, such as children hitting
a Bobo doll (an inflatable plastic clown), delivering a "white noise"
blast or--amazingly--answering yes when asked whether they would pop a
balloon if given the opportunity.

As Freedman and others have pointed out, these laboratory proxies for
aggression are not the real thing, and aggressive play is very different
from real-world violent or destructive behavior. He comments:

Quite a few studies with children defined aggression as hitting or
kicking a Bobo doll or some other equivalent toy.... As anyone who has
owned one knows, Bobo dolls are designed to be hit. When you hit a Bobo
doll, it falls down and then bounces back up. You are supposed to hit it
and it is supposed to fall down and then bounce back up. There is little
reason to have a Bobo doll if you do not hit it. Calling punching a Bobo
doll aggressive is like calling kicking a football aggressive. Bobos are
meant to be punched; footballs are meant to be kicked. No harm is
intended and none is done.... It is difficult to understand why anyone
would think this is a measure of aggression.

Freedman notes other serious problems with the design of lab experiments
to test media effects. When positive results are found, they may be due
simply to the arousal effect of high-action entertainment, or to a
desire to do what the subjects think the experimenter wants. He points
out that experimenters generally haven't made efforts to assure that the
violent and nonviolent clips that they show are equivalent in other
respects. That is, if the nonviolent clip is less arousing, then any
difference in "aggression" afterward is probably due to arousal, not
imitation. Freedman's favorite example is an experiment in which one
group of subjects saw a bloody prizefight, while the control group was
shown a soporific film about canal boats.

But the most striking point is that even given the questionable validity
of lab experiments in measuring real-world media effects, the majority
of experiments have not had positive results. After detailed analysis of
the numbers that the researchers reported, Freedman summarizes:
Thirty-seven percent of the experiments supported the hypothesis that
media violence causes real-world violence or aggression, 22 percent had
mixed results and 41 percent did not support the hypothesis. After he
factored out experiments using "the most doubtful measures of
aggression" (popping balloons and so forth), only 28 percent of the
results were supportive, 16 percent were mixed and 55 percent were
nonsupportive of the "causal hypothesis."

For field experiments--designed to more closely approximate real-world
conditions--the percentage of negative results was higher: "Only three
of the ten studies obtained even slightly supportive results, and two of
those used inappropriate statistics while the third did not have a
measure of behavior." Freedman comments that even this weak showing
"gives a more favorable picture than is justified," for "several of the
studies that failed to find effects actually consisted of many separate
studies." Counting the results of these separate studies, "three field
experiments found some support, and twenty did not."

Now, the whole point of the scientific method is that experiments can be
replicated, and if the hypothesis is correct, they will produce the same
result. A minority of positive results is meaningless if the effect
doesn't show up consistently. As Freedman exhaustively shows, believers in the
causal hypothesis have badly misrepresented the overall results of both
lab and field experiments.

They have also ignored clearly nonsupportive results, or twisted them to
suit their purposes. Freedman describes one field experiment with
numerous measures of aggression, all of which failed to support the
causal hypothesis. Not satisfied with these results, the researchers
"conducted a complex internal analysis" by dividing the children into
"initially high in aggression" and "initially low in aggression"
categories. The initially low-aggression group became somewhat more
aggressive, no matter which programs they watched, while the initially
high-aggression group became somewhat less aggressive, no matter which
programs they watched. But the children who were categorized as
initially high in aggression and were shown violent programs "decreased
less in aggressiveness" than initially high-aggression children who
watched neutral programs. The researchers seized upon this one highly
massaged and obscure finding to claim that their results supported the
causal hypothesis.

Freedman examines other types of studies: surveys that compare cities or
countries before and after introduction of television; experiments
attempting to assess whether media violence causes "desensitization";
longitudinal studies that measure correlations between aggressiveness
and preference for violent television over time. No matter what the type
of study or experiment, the results overall are negative. Contrary to
popular belief, there is no scientific support for the notion that media
violence causes adverse effects.

Why, then, have not only researchers and politicians but major
professional associations like the American Academy of Pediatrics and
the American Medical Association repeatedly announced that thousands of
studies have established adverse effects of media violence? One reason
was suggested to me recently by a pediatrician active in the AAP. The
organization's guidelines require that policy statements have scientific
support. This puts the AAP in a serious bind when, as is the case
with media violence, its leaders have a strong opinion on the subject.
It's tempting then to accept and repeat assertions about the data from
leading researchers in the field--even when those assertions are distorted or
erroneous--and that's what the professional associations have done.

Another factor was candidly suggested by Dr. Edward Hill, chair of the
AMA board, at a panel discussion held by the Freedom Forum in New York
City last year. The AMA had "political reasons," Dr. Hill said, for
signing on to a recent statement by professional organizations asserting
that science shows media violence to be harmful. The AMA is "sometimes
used by the politicians," he explained. "We try to balance that because
we try to use them also."

Because Jonathan Freedman believes the scientific method is capable of
measuring the impact of media violence, the fact that it hasn't done so
is to him strong evidence that adverse effects don't exist. I'm not so
sure. I don't think we need science to know from observation that media
messages over time can have a powerful impact--in combination with many
other factors in a person's life. Some violent entertainment probably
does increase aggression for some viewers, though for as many or perhaps
more, the effect may be relaxing or cathartic.

If the media do have strong effects, why does it matter whether the
scientific research has been misrepresented? In part, it's precisely
because those effects vary. Even psychologists who believe that the
scientific method is relevant to this issue acknowledge that style and
context count. Some feel cartoons that make violence amusing have the
worst effects; others focus on stories in which the hero is rewarded for
using violence, even if defensively.

But equally important, the continuing claims that media violence has
proven adverse effects enable politicians to obscure known causes of
violence, such as poverty and poor education, which they seem largely
unwilling to address. Meanwhile, they distract the public with periodic
displays of sanctimonious indignation at the entertainment industry, and
predictable, largely symbolic demands for industry "self-regulation."
The result is political paralysis, and an educational structure that
actually does little to help youngsters cope with the onslaught of mass
media that surround them.