Sunday, April 29, 2012

In Merchants of Culture: The Publishing Business in the Twenty-First Century, John B. Thompson has written a page-turner about books and those who make them (virtual and otherwise)

John B. Thompson begins this book with a publishing anecdote that will be familiar even to those on the margins of the business: the story of how Randy Pausch, a professor of computer science at Carnegie Mellon, gave a talk in 2007 as part of a series at the university with the title "The Last Lecture." As it turned out, Pausch was dying of pancreatic cancer, giving his well-received presentation an element of poignance that generated a wave of national publicity. What proved truly stunning, however, was how eager New York publishers were to acquire the book that became The Last Lecture: Pausch, a first-time big-time author, was paid a $6.75 million advance by Hyperion, a Disney company. How could that possibly make sense?

In 400 chiseled pages, Thompson explains why such an offer came about, and why it made sense -- indeed, The Last Lecture proved to be a lucrative acquisition for Hyperion. He does so with the methodological acumen of the sociologist he is (at the University of Cambridge). Thompson conducted hundreds of interviews for Merchants of Culture, supplemented by new interviews with many of his sources for this newly released second edition of the book (the first was published in 2010). Much of Thompson's analysis builds on that of his 2005 book Books in the Digital Age, which focused on scholarly publishing. Here he focuses on trade publishing, the hyper-commercial industry centered in New York and London.

It's in the nature of any project of this sort that it stands to date quickly. But Thompson has done a notably good job of keeping his findings timely -- the figures here run into mid-2011, capturing the e-book transformation of the industry at the moment it shifted from an abstract possibility to an increasingly evident reality. In some sense, however, the book feels fresh and up-to-date because of Thompson's intuitive grasp of temporal proportion: his perspective dates back to the corporate consolidation of the publishing industry in the 1970s, and he traces trends that in many cases have been decades in the making.

The organizational strategy for Merchants of Culture consists of chapters focused on key constituencies in the industry: the rise (and decline) of retail chains; the growing power of literary agents; the consolidation of publishing houses; and so on. He also takes note of established trends, like the blockbuster mentality so typical of the major media, along with emerging ones like "extreme publishing" (quickly-produced books designed to plug gaps in financial projections) and the "hidden revolution" in the manufacture and distribution of books. Naturally, he gives plenty of space to major players like Amazon.com and the transformational role of the Kindle -- with attention both to those who celebrate its power and to those who fear it.

Thompson has a measured tone, and his goal here is clearly to explain how the field -- a term he identifies as a conceptual construct within sociology -- interlocks in ways that may not always be obvious to an outsider. He does, however, weigh in with some mild-mannered judgments. Thompson thinks a corporate mentality erodes the long-term attention to backlists that are crucial to the ecology of the industry. He notes that big-time publishers like Random House and HarperCollins, unwilling to tend backlists, have instead been buying them by acquiring other imprints, a strategy that has come close to running its course. He sees a polarization in the industry: business conditions are most propitious for behemoths with deep pockets or scrappy little houses, some of them academic players that run a trade operation on a shoestring. But he notes there's precious little ground for medium-sized houses like Farrar, Straus & Giroux (which leverage prestige and typically federate to maximize back-office resources). Thompson is also attentive to the fact that publishing can be most brutal not to first-time writers, but rather to those who establish a track record that is found wanting and who must then struggle to survive in an increasingly indifferent field.

As someone who has worked in publishing as well as published books with trade, academic, and specialty publishers, I must say I have never encountered a work as incisive and complete as Merchants of Culture. This one will surely be a backlist perennial, and must reading for anyone with a stake in the business.

I didn't plan to read this book. I'd put it on a pile of forthcoming titles, one I consulted after finishing the last book I reviewed sooner than planned. I thumbed through the first few pages of a couple on that pile and found myself engaged by the portrait of New York City mayor-elect John Purroy Mitchel on New Year's Eve of 1913. Maybe this book about the year 1914 was worth embarking upon after all.

It was only after I was well into it that I realized More Powerful Than Dynamite has an arresting provenance that makes the particular manner of its execution all the more remarkable. At first I wasn't too surprised by blurbs that didn't quite come from the usual suspects. Kenneth Jackson, sure -- blue chip. Little odd to have him share a back cover with Noam Chomsky, though. And Marge Piercy. Don't think of Samuel G. Freedman as a fellow traveler. Bill Ayers? Don't imagine you'll find this book lying around Obama '12 campaign headquarters. Outside of radical circles, this is not exactly an endorsement a lot of writers would flaunt.

Turns out the Ayers connection is not merely incidental. The jacket copy informs us that Thai Jones was "born while his parents were fugitives from justice" and that he "went by a series of aliases until the age of four." Jones's previous and first book, The Radical Line: From the Labor Movement to the Weather Underground, One Family's Century of Conscience (2004), describes a genealogy of radical leftist politics. In the foreword of this book, Jones explains that his interest in 1914 New York originated in a now largely forgotten anarchist bomb blast on upper Lexington Avenue that paralleled the notorious one by the Weather Underground in Greenwich Village in 1970. In both cases, radicals were victims of a blast they intended to inflict on others.

I rehearse this background for Dynamite because one might plausibly expect its author to carry the torch for his family's radicalism. Or, perhaps equally plausibly, to repudiate it with a fierceness that derives from that source. But this is a remarkably measured piece of writing by a truly gifted young man still in his thirties. Jones is a former reporter for Newsday, and this book began as a PhD dissertation at Columbia. It combines the lean prose of a journalist with the depth of an academic. But Jones's eye for detail is novelistic, and he is a master of understatement. He turns the neat trick of making moderation marvelous.

Many of the events discussed in Dynamite -- the Ludlow Massacre out in the Colorado coalfields; the reform efforts of the Mitchel administration; and, of course, the outbreak of the First World War -- will be familiar to students of the period. Ditto for a cast of characters that includes Woodrow Wilson, Upton Sinclair, and John D. Rockefeller, Jr. But this biographically-driven narrative is populated by a host of obscure ones like the Industrial Workers of the World activist Frank Tannenbaum, police commissioner Arthur Woods, and the charismatic hunger-striker Becky Edelsohn, all of whom burst into life on these pages (nowhere more so than in the otherwise sleepy suburb of Tarrytown, which in May of 1914 gave Manhattan a run for its money in political drama). Jones narrates public demonstrations with cinematic clarity -- Occupy Wall Street was downright genteel compared to the string of uprisings in the city in the first half of 1914 -- even as he manages to capture the inner life of his characters with an empathy that's moving in its own right. So it is that we experience the radical Alexander Berkman's melancholy nostalgia for the terrorism of his youth, Mayor Mitchel's awkwardness in serving citizens he didn't particularly care to meet, and Commissioner Woods's careful, patient efforts to learn from previous police mistakes in maintaining public order. We even feel some sympathy for poor John D. Rockefeller Sr., who can't get through a round of golf without being importuned for stock tips by grasping companions.

Which is not to say that Jones suspends judgments. He notes that Rockefeller Jr. was deeply anguished by the Ludlow situation, which it was his family responsibility to manage. "But," he writes, "while Rockefeller was unwilling to ignore the inequities of business, he was equally unable to intercede against the executives of Colorado Fuel and Iron." This dithering literally proved fatal, a sin for which Rockefeller sincerely tried to atone. Conversely, Jones shows that while Woods showed far more respect for the First Amendment than any of his predecessors (more for tactical than philosophical reasons), he replied to criticism about authorizing unprecedented wiretaps of suspected radicals by saying, "There is altogether too much sappy talk about the rights of the crook . . . He is a crook. He is an outlaw. He defies what has been put down as what shall be done and what shall not be done by the great body of law-abiding citizens. Where does his right come in?" Jones wisely lets us draw our own conclusion without comment.

The author's self-control has a deeply historical quality; he shows us people living through dramas whose outcomes they could not know, struggling to understand what is happening to them and trying, not always successfully, to grow from their experiences. Young Fiorello LaGuardia was an admirer of Mayor Mitchel who honored his memory -- to a point. Leaders of Mitchel's Progressive stripe "had attempted to separate government from politics, but that does not work in a democracy," a mistake LaGuardia did not make. One of the few people who comes off truly badly in this book is Walter Lippmann, who coined the phrase that gives the book its title. As he is in so many accounts of this period, Lippmann is everywhere and always seems to have a pithy remark that's both incisive and at least faintly condescending. He's heartless, and in his way is harder to take than Rockefeller the younger.

Toward the end of this book -- a little later than it should, really -- its larger argument comes into focus, which involves the role of Progressives as mediators between the plutocrats and radicals of the subtitle. Jones asserts that the events of 1914 were decisive in swinging reformers toward the right, which had lasting implications for American politics. Perhaps there's grist here for his next book.

In any case, Dynamite showcases a rare talent notable for its equipoise in balancing heart and head. Jones serves the memory of his subject with quiet grace. And he serves his readers with stories that deserve to be remembered. Here truly is a career worth following.

Saturday, April 21, 2012

Daniel Day-Lewis and the
Persistent Significance of the Frontier in American Cinema

Jim Cullen

The following is the text of my keynote address for the "Focus on Teaching" luncheon at the Organization of American Historians Annual
Meeting, Milwaukee, Wisconsin, April 21, 2012

The story of the forthcoming book on which
this talk is based begins in 2001, when I left academe and began working as a
high school teacher. In the
process of trying to plan the first semester of a U.S. history survey, I made a
curious discovery after generating a slate of movies I planned to show over the
course of the fall semester: every one of them starred Daniel Day-Lewis. There
was The Crucible. And Last of the Mohicans. And The Age of
Innocence. Later I added Gangs of New York and There Will Be
Blood. All told, I ran the annual event I dubbed "The Daniel Day-Lewis Film Festival" nine times.

Maybe it's not surprising that my
predilections would express themselves without conscious effort. But keep in
mind that we're talking about Daniel Day-Lewis here. As anyone
vaguely familiar with his work knows, Day-Lewis is legendary for the
extraordinary variety of characters he has played, and the vertiginous
psychological depth with which he has played them. I first became aware of
Day-Lewis in early 1985, when, in the space of a week, I watched him portray
the priggish Cecil Vyse in the tony Merchant-Ivory film adaptation of E.M.
Forster’s A Room with a View and then saw him play Johnny, the punk East
End homosexual, in Stephen Frears's brilliantly brash My Beautiful
Laundrette. Day-Lewis went on to have a distinguished career, winning the
first of two Academy Awards for his portrayal of the handicapped Irish poet
Christy Brown in My Left Foot in 1989, but between 1988 and 2007 he
played a string of American figures that ranged from a seventeenth-century
Puritan to a twentieth-century art collector.

What could this mean, I wondered? Every
year like clockwork, I watched these films again with my students, marveling at
the inexhaustible nuances of Day-Lewis's performances. I began to ask myself:
Could it make sense to think of actors as historians? That people, in the
process of doing a job whose primary focus was not interpreting the past,
were nevertheless performing such an interpretation? And that in doing
so repeatedly over the course of a career, they would articulate an interpretive
version of American history as a whole?

Of course, such people are aware when
they're dealing with historical situations (or contemporary situations with
historical resonances), and may make a real effort to exercise historical imagination
as part of their work. But that's the point: it's part of their work. We
all understand that there are many people out there who "do" history
without writing books—archivists, curators, and, of course, filmmakers,
including both documentarians as well as writers and directors of feature
films, who work consciously and conceptually to craft an interpretive
experience for their audiences. What intrigues me about actors, though, are the
obvious limitations and obstacles to executing a purely historical function.
Their work is always embedded in a larger context in which their control of the
material is limited—actors do not typically write their own lines—and their
craft is collaborative, part of enterprises that will always be as much
aesthetic and commercial as they are historical. What’s interesting to me,
though, is the way in which very successful actors with a good deal of control
over their choices reveal patterns of thought that are widely shared but rarely
so evident.

Indeed, my primary interest is less in
Hollywood movies or actors than in the way history is absorbed into the fabric
of everyday life—messy, fragmented, more suggestive than direct. This is
actually how it’s lived for students: meta-narratives – of history as
progressive, or circular, or an illustration of the way you can’t fight city
hall – into which they plug the various incidents and movements they
learn about inside and outside the classroom. Those meta-narratives are a kind
of historiographic folklore. Every once in a while, historians are the source
of that folklore (or, at least, powerfully shape it). In the case of Daniel
Day-Lewis, I gradually realized that this Irish immigrant had somehow absorbed
the frontier myth of Frederick Jackson Turner.

Turner is to the
historical profession what Sigmund Freud is to psychology: a towering giant of
a century ago, one whose ideas are now consciously rejected by just about
everybody in his profession—and unconsciously absorbed by just about everybody
else. Turner's 1893 essay "The Significance of the Frontier in American
History" is probably the single most important piece of historical
scholarship ever published in the United States. Written at a time when the
modern research university was just emerging, it was an example of a literary
genre—the analytic essay of the kind you’re now hearing—that was just coming
into its own.

A Wisconsin
native, Turner first delivered "Significance" on the evening of July
12, 1893, at an AHA meeting in Chicago, held amid the fabled World's Columbian
Exposition, staged in that city to celebrate the 400th anniversary of Christopher
Columbus's arrival in America. It seems almost comical to imagine the 31-year-old
Turner (then, as now, young for a historian) standing in the front of a
room talking to about 200 colleagues while thousands of his fellow Americans were
taking amusement park rides and surveying the huge temporary stucco buildings
of the so-called White City, a site which was artificially lit thanks to the
technological innovations of the Westinghouse Corporation. But like Westinghouse lighting, the
so-called "Turner Thesis" unveiled in Chicago would prove to be more
durable than any of these fleeting material realities, in large measure because
it was so succinctly stated at the end of the first paragraph of his paper:
"The existence of an area of free land, its continuous recession, and the
advance of American settlement westward, explain American development."

From the vantage
point of over a century later, it may be hard to appreciate just how edgy an
assertion this really was. Turner had been trained back east at Johns Hopkins,
under the tutelage of the legendary Herbert Baxter Adams. Adams was a proponent
of the then-dominant "germ" theory, which argued that western
civilization owed its origins to the forests of Germany, out of which emerged a
Teutonic seed that spread across western Europe, jumped to America, and now
dominated the world. Like so much academic thought of the time, this approach
to history was modeled on science, both in its new emphasis on empirical
research and its use of a biological model—more specifically a (Social)
Darwinian model—to explain historical change.

Like his
predecessors, Turner embraced a process-driven approach to history—colleagues
and students remember him as an obsessive collector of data and maps—and
invoked science as fact and metaphor. But his inclinations were decidedly on
the environmental side of the Darwinian equation: he was fascinated by protean
adaptability, not by fixed destiny. America was a place that did something to people, he said: it
made them Americans. Which is to say it turned them into something new and
unique in historical experience. And that's because they had lots of room to
evolve through a renewable cycle of scouts giving way to traders, farmers, and
capitalists in scattershot sequences that stretched from sea to shining sea.

Over the course
of ensuing decades, the Turner Thesis itself evolved from maverick idea to
common sense, ratified by Turner's appointment at Harvard in 1910. By
mid-century, it had a wide impact on subsequent historians. But in the second
half of the century the thesis came under increasing attack on a variety of
fronts. Some scholars questioned Turner's data, others its implications,
particularly his assertions that the frontier was the engine of U.S. democracy.
The most serious challenge came from those historians, notably Patricia
Limerick, who rejected the assumptions underlying the very idea of the frontier
and the implicit omissions involved in discussing "empty" land that
was in fact inhabited by multicultural populations. To Limerick, who at one
point joked that for her and like-minded scholars the frontier had become
“the f-word,” Turnerism was little more than a racist fantasy.

Actually, Turner
did not consider the frontier an unalloyed good. While he viewed it as a
usefully nationalizing phenomenon as well as a wellspring of democracy, he also
recognized that a frontier mentality tended to resist even benevolent forms of
outside control, and fostered a grasping materialism. It also led to a lax
approach to government that fostered the creation of a spoils system. Moreover,
Turner clearly understood, even if he didn't dwell on it, that the extension of
the frontier was a matter of conquest for which he used the correct imperial
term of "colonization."

But the biggest
problem Turner has with the frontier in 1893 is that it's dead. He makes this
clear in the first sentence of "Significance," which discusses
recently updated information from the U.S. Census Bureau indicating the
disappearance of an unbroken line in the American West, which he described as
"the closing of a great historic movement." What the Mediterranean had
been to the Greeks, the frontier had been to the Americans. "And now," he
wrote in a closing sentence laced with melancholy, "four centuries from
the discovery of America, at the end of a hundred years of life under the
Constitution, the frontier has gone, and with its going has closed the first
period of American history." The Turner Thesis, in effect, was the
frontier's obituary.

What would take
its place? Turner did not say. Richard Hofstadter would write 75 years later
that the latent pessimism of the frontier thesis was in sharp contrast to the
ebullient optimism Turner attributed to frontier communities. But while Turner never offered an alternative—indeed, he had considerable
trouble writing books, and never quite realized the huge potential suggested by
"Significance"—his politics were considered generally consonant with
those of his friend and colleague Woodrow Wilson. For such people, the frontier
was less a living reality—as it had been for the previous generation of
political reformers, the Populists—than a metaphor that denoted opportunity on
a large scale in a new domain. That’s why Turner called the closing of the
frontier the end of the first period
of American history.

The frontier
remained fertile symbolic terrain for much of the twentieth century, nowhere
more obvious than in the 1960 presidential campaign of John F. Kennedy, whose
slogan was "The New Frontier." But its appeal went a good deal beyond
politics, evident in the rhetoric of the space program as well as that of the
Internet. Nowhere, however, was its power more evident than in U.S. cultural
life. Turnerism is the bedrock of assumptions for the whole genre of the
Western, for example, and the Western, in turn, is the seedbed of other
cultural genres stretching from sci-fi to hip-hop. Along with the legacy of
slavery, the frontier is what makes American culture American.

But if people of
the 20th century experienced the transformation of the frontier from
reality into myth, those of the 21st are witnessing its
transformation from myth into memory. Now belief in the frontier as a living
symbol is itself receding in our imaginations. The proximate cause is
our economic situation, which has cast doubt on the upward mobility that so
many of us have considered our birthright so long, and which is so deeply
intertwined with our sense of a frontier. This sense of doubt is not new. It
has recurred periodically throughout American history, as during the Great
Depression and amid the political scandals and economic stagflation of the
1970s. The current narrative of geopolitical decline, however, is one of rare
and growing depth.

Here I’ll break
to say that I don’t have time to do justice to Day-Lewis’s whole body of work, but
instead will focus on three illustrative examples: The Crucible (1996),
Last of the Mohicans (1992), and Gangs of New York (2002).

The Crucible is a story that’s typically read one of two ways. The first and perhaps
primary one is what prompted Arthur Miller to write it: as a warning about the
dangers of social conformity and letting irrational fears—in particular a fear
of Communism that dominated American public life at the time of the play’s
premiere—govern everyday life. The second tends to see the story in terms more
specific to its time and place: seventeenth century New England. Such an angle
of vision leads one away from viewing it as an indictment of American character
generally, and toward an indictment of self-righteous Puritanism specifically. Both of these views have cogency, of
course. But I’d like to look at The
Crucible as a frontier story.

There are some good historical
reasons to do so. Salem, Massachusetts is not typically seen as a frontier
town; after all, it was founded in 1626, even before Boston, and was 66 years
old when the witch trials took place. Still, if Salem itself was not in fact a
frontier, it was quite close to a bona fide one: the district of Maine, which
would remain part of Massachusetts until 1820. For most of the seventeenth century,
the beaver and timber trade of northern New England were major sources of
prosperity for Massachusetts.

The outbreak of King Philip’s War
in 1675, which spread northward and lingered until later in the
decade, broke a relatively long stretch of peaceable relations with the
region’s Indians. The outbreak of another war in 1689—popularly known as King
William’s War, but known in the region as the Second Indian War—destabilized
the region still further. These wars destroyed lives, livelihoods and homes,
and created a significant number of refugees, some of them ending up in Essex County,
where Salem is located. Mary Beth
Norton has documented that a significant number of accused witches as well as
their accusers had ties that can be traced to Maine in the 1670s and 80s. Just how decisive a factor Indian war
really was in triggering the witch trials is open to debate. But it is
certainly plausible to see frontier-related stresses as a factor in what went
wrong in Salem in 1692.

As far as the makers of The Crucible were concerned, this is all
inside baseball. In the original script for the play—and in the movie—Miller
has the first of the accusers, Abigail Williams, pressure her confederate,
Betty Parris, by saying “I saw Indians smash my dear parents’ heads on the
pillow next to mine, and I have seen some reddish work done at night, and I can
make you wish the sun had never gone down!” This fictive context is important in establishing a basis for the core
malignancy of Williams’ character. But it’s more in the spirit of background
information than a proximate explanation for her behavior.

The most important element in establishing a frontier
dimension for the film version is the portrayal of Daniel Day-Lewis’s John
Proctor. To put it most simply, the film version of The Crucible underlines the degree to
which Proctor was an outside man. This was true in fact: the real Proctor, who
was about 60 in 1692, lived on the outskirts of Salem proper, where he operated
a tavern. Proctor appears to have been a local iconoclast: he was among the
first to ridicule the witchcraft proceedings; allegedly beat his servant, Mary
Warren, who confessed to witchcraft and accused others; and stood up for
Elizabeth, who was his third wife. This may be why he was the first male to be
accused of witchcraft, and why he was hanged for it.

The film version of The Crucible, exploiting the
possibilities of the medium, makes Proctor an outside man in a much more
literal sense as well. Our first view of him, about ten minutes into the film,
shows him threshing wheat in a field with his sons. The imagery seems to come
straight from a Winslow Homer painting: big open spaces, water in the distance,
brilliant blue sky. The camera pans from the inlet to the interior to reveal
his wife Elizabeth (a superb Joan Allen) summoning him. Over the course of the
story, walls will literally and figuratively close in on him.

In art and life, the Salem Witch
trials were a disaster wrought by Puritans. The deaths of nineteen people and
the concomitant misery that resulted were a byproduct of the social conformity
implicit in the communitarian character of Puritanism, the most
institutionally-minded people in British North America. But one of the many
paradoxes of Puritanism is that this communitarian impulse was accompanied by
another, individualistic one, that was at least as powerful. The Puritans had
always placed great value on the primacy of the individual conscience; the
belief that one’s own relationship to God mattered more than what Pope or King
might say is precisely what brought them to America. And it’s that independence
of mind that led the John Proctors of New England to stand up to, and finally
defeat, tyranny from within.

This libertarian strand of
cultural DNA that had drifted across the ocean found a hospitable climate on
these shores. As Frederick Jackson Turner would later write in “Significance,”
“the frontier is productive of individualism.” Turner would often contrast
“antipathy to control” in the frontier mentality with that of the Eastern
establishment. As he well knew,
however, the Eastern establishment was itself
a frontier product, and never entirely transcended it. In an obvious and
irrefutable sense, John Proctor is a tragic figure. But as embodied by Daniel
Day-Lewis in this movie, he is a fierce and willful force whose intensity
cannot be contained by his death. His children, literal and figurative, will
conquer a continent—a topic that would be the focus of the next film in the
Day-Lewis sequence of U.S. history.

* * *

In the almost two centuries since
its publication in 1826, James Fenimore Cooper’s Last of the Mohicans has been like the sheet music for a pop song:
a loose set of characters and plot points in a standard that has been
rearranged and embellished countless times. Like a lot of pop classics,
Cooper’s source material lay in the public domain, namely collective memory of
the French and Indian War, which ended a quarter-century before he was
born. Cooper, who was raised in
upstate New York—his father was a large, and controversial, landowner in the
baseball Mecca we know as Cooperstown—wrote
about a time when the region was a frontier, and in so doing wrote what many
scholars of the Western consider an early example of the genre.

From a modern standpoint,
Cooper’s fiction is almost unreadable in its stilted language and slack pacing.
What has
lasted in Mohicans—what indeed has
proven to be amazingly supple—is a set of characters and a loose plot. In the
last hundred years, the principal medium through which this story has been
re-told has been film—hardly surprising, given the proto-cinematic quality of
the story. The first movie version of the novel, short and silent, came out in
1911. A 1920 version, also silent and later selected for the National Film Registry,
is an impressively executed piece of work with lots of exterior shots that generally
follows the outline of the novel. A 1932 twelve-part serial version of the
story was cheap and unintentionally comical, but surely thrilling to people like
my father, who would have gone to see it as a kid as part of a full slate of
Saturday matinee movie-going. The
best-known version prior to 1992 was the 1936 film starring
Randolph Scott, who went on to be a fixture of Westerns through the fifties.

So by the time director Michael Mann and co-screenwriter
Christopher Crowe tackled Mohicans in
the early 1990s, they had a treasure trove of material to work with.
That said, the most important precedent for the filmmakers of the 1992 movie was a
long tradition of artistic license. The pivotal figure in this regard—the linchpin of
the movie, and of the point I’m trying to make here—is the character of Hawkeye
(here called Nathaniel), more specifically the
Nathaniel of Daniel Day-Lewis. This is much more than a matter of which
lines of the script he utters. To put it simply, the Day-Lewis incarnation of
Cooper’s frontiersman is a singularly magnificent figure. Though he lacks the
muscularity of the typical movie-star hero, he is an impressive physical
specimen: lanky but taut, strong but agile. But Nathaniel’s presence is much
more than physical. The Hawkeye of all too many Mohicans—nowhere more so than the original—is a hayseed who’s not
(quite) as dumb as he looks. Randolph Scott’s Hawkeye is one of the better
ones, because the geniality he gives the character doesn’t undercut his sense
of competence. But Day-Lewis blows his predecessors away with the sheer
intensity of his self-assurance. He is a perfect Turnerian specimen, as at ease
in a pick-up game of lacrosse as he is dining at the cabin of his friends.

The fact that this protagonist is
not the entirely restless loner of
Cooper’s saga, that in this version there’s a place in his life for a woman who
by the end of the film will stand by his side wherever he may go, is very much
a part of the film’s larger design. The movie eschews the traditional funeral
scenes of most Mohicans by having that
last Mohican Chingachgook spread the ashes of his son Uncas over the western
mountains amid a setting sun. As sorry as we feel for Chingachgook, this
version of the movie—as I will discuss, there are in fact two 1992 versions,
with subtly, but significantly, different endings—has a hopeful feel. That’s because
we feel so strongly that the tragedy of Uncas notwithstanding, Hawkeye really
is Chingachgook’s son (we moderns consider race and even parenthood a social
construction, after all), and that in his presumed merger with his lover Cora—whose
name takes on a new significance—the seed of a new national identity will be
planted. As a hybrid, it will be resilient. And have plenty of room to grow. In
this, the first film Day-Lewis made about American history, he embodies the
frontier in its brightest phase and greatest height.

* * *

One of the more notable—and,
given the circumstances of its unveiling in Chicago, ironic—limits of Frederick
Jackson Turner’s vision involved his difficulty incorporating cities into his
account of U.S. history. As the esteemed environmental historian William Cronon
has observed, “Turner consistently chose to see the frontier as a rural place,
the very isolation of which created its special role in the history of American
democracy. Toward the end of his career, he looked with some misgiving on the
likelihood that there would be an ‘urban reinterpretation’ of American history
that might ‘minimize the frontier theme’—as if frontier history had little or
nothing to do with cities.”

And yet as Richard Hofstadter,
himself a critic of Turner, admitted, “the great merit of Turnerism, for
all its elliptical and exasperating vagueness, was to be open-ended. The
frontier idea, though dissected at one point and minimized at another, keeps
popping up in new forms, posing new questions.”
It is in this spirit that a frontier perspective can help us understand the
role of Daniel Day-Lewis in the next installment of his cinematic history, Gangs of New York.

New York, it should be said, is
not typically viewed as frontier territory any more than Salem, Massachusetts
is. For one thing, it’s an island, not a continent. For another, it was
effectively urban from the moment of its Dutch inception as New Amsterdam. And
yet one can plausibly view Manhattan as a frontier in two senses. First, like the
rest of North America, New York was a geographic space that was settled along
an irregular line of development over a long period of time, albeit from south
to north rather than from east to west. And second, the frontier was a process
of demographic transformation, as immigrants of one kind or another gradually
gave way to other ethnic and racial groups, often in a process of gentrification.

If Mohicans began
as a novel rooted in historical events, Gangs
began as a history laced with fiction. The core source material was The Gangs of New York, a 1928 book by
journalist and crime writer Herbert Asbury. The character Day-Lewis plays in
the movie, Bill Cutting, a.k.a. Bill the Butcher, is modeled on the real-life
figure Bill Poole.

It’s appropriately ironic that
the Butcher’s gang goes by the name of the Native Americans. The historically
accurate term denotes what was at the time a growing number of U.S. citizens
who were increasingly hostile to the rising tide of immigrants, especially
Irish immigrants. This tide would crest with the “Know-Nothing” Party in the 1850s, a temporary but powerful
force in 19th century U.S. politics. Of course in our day the phrase
“Native American” is a synonym for Indian. Though the Butcher is a passionate
racist who considers only white, Anglo-Saxon Protestants real Americans, his
situation in Gangs of New York
resembles no one’s more aptly than that of a Delaware sachem confronted with
growing numbers of outside interlopers and deciding to take a stand against
them.

In an opening scene set in the
winter of 1846, the Butcher-led natives prevail in a gang fight with the Celtic
horde led by Priest Vallon (Liam Neeson), victorious despite their enemy’s
greater numbers. Yet the Butcher has only bought time. He can manage, even
absorb, the steady stream of new arrivals for an interval. Indeed, it’s one of
the paradoxes of the Butcher’s character that he can employ his former enemies,
and even tease them affectionately about their ethnic foibles. But like a
hydra-headed monster, Vallon’s legacy returns in the form of his son, whose
ironically Teutonic name—“Amsterdam”—will ultimately challenge the Butcher for
supremacy. In the meantime, however, the unwitting chief takes a shine to the
kid and nurtures him in the ways of tribal power. As such, he’s like a
triumphant Indian warrior who incorporates the kin of vanquished foes into his
own clan.

When, about two-thirds of the way
through the movie, the Butcher learns the true identity of his protégé, he
turns on him with ferocity. Bill goes to visit the newly elected (Irish)
sheriff of the Five Points, who has allied himself with Amsterdam, and deals
with him in a manner redolent of a Wild West standoff. Watch for what might
plausibly be termed a tomahawk throw.

SHOW CLIP

Gangs of New York represents a transposition of roles for Daniel Day-Lewis: in Last of the Mohicans, he was Hawkeye;
this time he’s effectively Chingachgook. Like generations of dime novel readers
and fans of westerns, we admire him in his savagery, which has a kind of nobility
even as it is unacceptable as a basis for contemporary society. As with Indians
of the frontier, Bill the Butcher must die so that we, a non-WASP multiracial
majority, might live. It’s
Leonardo DiCaprio’s Vallon who represents the synthesis of cultures that will
survive as a hardy hybrid and make a modern America.

And yet we remain haunted by the
specter of the natives.

* * *

About halfway through this talk,
I mentioned that there were two different versions of the 1992 Last of the Mohicans. The first—the one
shown in theaters and in the VHS release of the movie on home video—concludes
the way most versions of the story typically do, with Chingachgook sprinkling
the ashes of Uncas, declaring that he is the last of the Mohicans. It’s at that
point that the music swells, the camera pulls back, and the credits roll.

Here’s the second version.

SHOW CLIP

Frederick Jackson Turner’s “The Significance of the Frontier in American
History” was a lament wrapped in hope. Turner dealt with the current of existential
dread that runs through his realization that the frontier had closed by writing
sunny prose and by arming himself with a Progressive faith that new frontiers
would come along in the twentieth century to replace the old one. “In place of
old frontiers of wilderness, there are new frontiers of unwon science, fruitful
for the needs of the new race; there are frontiers of better social domains yet
unexplored,” he wrote ebulliently in 1914, three decades after “Significance.”
I can’t help but be moved by the old man’s lyricism: “Let us hold to our
attitude of faith and courage, and creative zeal. Let us dream as our fathers
dreamt and make our dreams come true.”

And so we did, from the moon to
that crabgrass frontier we know as suburbia, where these words are being
written. But here in the twenty-first century, the most obvious truth about the
frontier is that mythology itself is a finite resource. It gets consumed and
recycled no less than land. If
there is a saving grace—or, at any rate, a rough justice—in the racist
brutality that threads through the myth of the frontier, it is that the people who
made it are themselves compost.

Timothy Noah is a reasonable person. Like many reasonable persons, he tries to bring people around to his point of view -- a point of view informed by statistics and expert opinion -- by supporting it with evidence and anticipating objections. This is typically how informed analysts like himself assert the reality of climate change, for instance. It's also why people like him are sometimes baffled by the indifference, if not hostility, with which their opinions are met. It's not that they don't comprehend denial or cynical short-term self-interest. But can't their fellow Americans understand this is serious -- and that in the long run (which really isn't all that long) indifference and cynical self-interest are naive?

Noah believes they can understand (or at least some people can, and will). And so in The Great Divergence he marshals a great deal of evidence and sculpts it into an impressively svelte book to demonstrate that income inequality in the United States is real, growing, and dangerous. I believe him. Of course, I believed that income inequality is real, growing, and dangerous before I ever picked up the book, and in this regard I'm like most of the people who will ultimately read it (or previously read the essays from Slate on which the book is based). Which is not to say that I didn't learn a good deal from him: I gained more clarity on which societal forces explain income inequality more credibly than others (women in the workforce and immigration are not really major factors, while the increasing costs of college education, the decline of unions, and Republican presidents really are). Noah understands perfectly well that many of the measures he advocates, like raising taxes, re-regulating Wall Street, and improving education, are not likely to happen overnight. But as he shrewdly asserts, one need not have a detailed blueprint for every proposal, nor hope to resolve every issue, to still assert that problems are real and can at least be ameliorated. The Buffett Rule, raising taxes on "millionaires and billionaires" (to quote President Obama's favorite meme of the moment), would not come close to erasing the national debt. But it's still a worthwhile start.

What leaves me a bit restive about The Great Divergence is his underlying notion that educating the public is going to make much difference at this point. If charts and exposition could make income inequality an irresistibly evident problem, they would have long since done so -- if not in 1973 or 1983, then by 1993 or 2003. Actually, the reality of the problem is virtually beyond dispute by now: the book opens with an admission from George W. Bush (!) that income inequality is real and increasing. But that didn't mean he or the tens of millions of people who voted for him believed he should have done anything about it. Or that Obama should now.

In part, that's because no one can be certain why economic inequality has happened -- or, more to the point, whether attempts to fix it will do more good than harm. Noah notes that for a long time economists and political scientists resisted believing that government action affected income distribution. Now, he shows, that skepticism is disappearing. It doesn't quite seem intellectually bankrupt to wonder if economic opinion is like the New England weather: if you don't like it, wait and it will change. And even if you can prove that the globe is getting hotter or income inequality is increasing, why should I think that authority Z's solutions are better than any other? Is it really impossible to imagine a sober (social) scientist discovering that when levels of variable A reach B, then carbon dioxide levels may actually start to go down because of heretofore undiscovered factors C that will cumulatively have an unexpectedly large and positive impact on D? How many times have you read a sentence like, "Economists/biologists/whatever used to think E, but now we understand F"?

Let me repeat: I believe Noah is right. The stories he tells with graphs and facts and expert opinion -- stories which in their broadest outlines I've heard before, though here they are succinct and sometimes vibrant -- are compelling and buttress the anecdotal view of the world I get from Paul Krugman's columns and books like Barbara Ehrenreich's Nickel and Dimed, which lodge their way into your consciousness (and conscience) and stay there. I also believe that many of the people who say Noah is wrong are rationalizing their desire to ignore him and others like him. But all the economic statistics in the world are not going to convince those people. Of those that remain -- of those who really are open to having their minds changed -- I doubt there are many who will be converted by the largely liberal body of opinion that's surveyed here. Instead, they will likely need one of two arguments: 1) an appeal that rests at least as much on the heart as the head, a rhetorical equivalent of Uncle Tim's Cabin; or 2) a series of events, in all likelihood violent in nature, that dramatize the problem in ways that will truly transfix public attention.

Noah might well say that The Great Divergence was written in the hope that social reform would not need to rest on such volatile or tragic means. I understand and sympathize with that hope, even as I confess, in this age of gridlock, to a loss of confidence that this style of discourse can do justice to issues like global economies or global climates, which are simply too big and dynamic to be understood with any certainty.

I realize I'm trying to make an oddly rational argument about the limits of expository reasoning. If I'm selling Noah and his readers short, or if you'd simply like to have your suspicions about inequality confirmed and explained, he's your man. If I'm right -- right about those limits, but also right in agreeing with his core message -- then he will yet have a day he can say to all who resisted his warnings, "I told you so." But I suspect he'll be too decent to gloat.

Monday, April 9, 2012

The following post was originally slated to be the conclusion of my book Sensing the Past: Hollywood Stars and Historical Visions. I have since decided to use something else instead. Still, this piece has a shape of its own that may have some appeal. I submit it for your perusal. -JC

One balmy Sunday September afternoon
while revising this book, I decide to take a trip to the movies. I want to see
the new Brad Pitt flick, Moneyball. I
don’t know much about the film, but I’ve always wanted to read the 2003 Michael
Lewis book on which it is based, and consider myself a longtime Pitt fan. (Born
in 1963, he could plausibly be a subject of one of my chapters, though his
still-unfolding career did not get underway until considerably later than that
of Jodie Foster, his nearest peer on these pages.) So I’m willing to take a
flyer on the movie for what I consider a leisure excursion.

My
oldest son left for college last week, and my wife is coaching my daughter’s
soccer game. That leaves my 12-year-old twin boys, who have reached the
point where they can be left home alone. But despite doubts that they’d much like Moneyball, I ask them whether they’d
like to accompany me.

“What’s it about?” my son Grayson asks.

“It’s about a baseball coach who uses
math to win games,” I reply.

“Sounds good,” he says. “I’ll go.”

“Ryland,
do you want to go to the movies?” I shout downstairs to his brother.

“Yes,”
he says.

This
warms my heart. For years, I’ve been taking my kids to the movies—Disney
movies, Pixar movies, gimmicky 3-D movies,
you name it. Sometimes this is a matter of giving my wife some time off or
simply breaking the boredom of a summer day or school vacation (I often doze
away the middle half-hour of the movie). Other times, it’s a matter of
succumbing to the advertising-stoked demand for movies they hear about while
watching TV or surfing the Internet. But I take my kids to local multiplexes
because movie-going was one of the great pleasures of my childhood, and a
ritual I want to pass on to them. I do so mindful that in the post-home video,
digital downloading era, the days of theatrical release may well be
numbered, and I want them to have a childhood memory of a routine with their
father. So when Ryland tells me he wants to go to the movies—not asking or
caring what movie, only that we will
be going—I am happy that a love I’ve conveyed has taken root.

On
this particular afternoon, I drive the boys to a newly opened Cinema Du Lux
multiplex a couple of miles from our home. It’s part of a large new luxury
residential/retail complex known as Ridge Hill, still under construction. After parking on the
bottom level of a five-floor lot, we ascend to the theater on top and take our
stadium seats, the boys slurping away on a Slushie while the previews roll.

I
like Moneyball. It’s part of an
interesting chain of emotionally complex choices Pitt has been making lately—I
was arrested by his turn as a conflicted father in his last movie, Terrence
Malick’s gorgeous, spartan The Tree of
Life—and I’m struck while watching how much he looks like Robert Redford as
he ages. But the boys, alas, fall fast asleep. “Well, there goes 17 bucks,” I
lament aloud, nudging them awake as the credits roll. But I am a happy man.
I’ve enjoyed the movie, and the boys awaken from their nap with good cheer,
basking in the gleaming lobby. Upon leaving the theater we pause before the
railing outside and survey the unfinished complex, with its roads, pavilions
and buildings almost complete. A departure from the indoor malls that were
a fixture of my youth, this retail mecca has a village feel, albeit one of
notably affluent character. A product of growing inequality, its very bustle
will be symptomatic of social decay.

But
for the moment, I do not see this space, nestled along a ridge tracked by the
interstate, as a hulking ruin. I do not think of coming wars, foolishly
launched by politicians hoping they will distract from the ills they’ve promised
to address but are powerless to reverse. I do not fear for my children’s future
or the futility of my attempts to prepare them for challenges I can scarcely
imagine. Because, as I say, I am happy. Happy enough that had these thoughts occurred
to me, I would respond with others: of unexpected resilience, unforeseen
resources, or wisely conserved ones. Of valuable legacies sustained and passed
on, refracted through prisms that would delight me were I alive to see them
(and delight me even though I am not). Of worlds that are no less real or
capacious for merely flickering to life.

By now, the scenario is a familiar one: a sovereign state, deeply indebted by many years of public sector spending, is making foreign lenders nervous. The state needs more money; lenders insist on austerity. But government officials, chary of offending voters, prevaricate while various schemes for restructuring get floated and clamor builds in the streets. Eventually, the underlying realities assert themselves and retrenchment takes place.

We know this as a story of Greece and Ireland. But the states Alasdair Roberts is talking about are Illinois, Pennsylvania and Mississippi, among others. And the financial crisis in question erupted not in 2007, but in 1837. Welcome to what he calls the First Great Depression.

As he explains in his note on method, Roberts comes to this history from a relatively unusual angle. A professor of law and public policy at Suffolk University Law School, he has devoted his previous three books to contemporary subjects. This well-documented tale, grounded in primary sources, is embedded between discussions of the current economic crisis. His larger point is geopolitical: Americans today fret about finding themselves at the mercy of foreign powers for the first time in their history. In fact, he says, in the broad sweep of U.S. history, autonomy has been the exception, not the rule. The past is prologue.

Roberts begins America's First Great Depression with an impressionistic survey of hard times, rich with anecdote. He does not outright reject the widespread view that the Panic of 1837 resulted from the foolish actions of the Jackson administration, which in destroying the Bank of the United States created a boom and bust by channeling cash to smaller banks that lacked the experience to manage it properly. But, he says, the origins of the crisis are closer to London than Washington; Great Britain had its own problems with credit, food supply, and a global marketplace that was far less well understood and fluid than it is today. And since Britain essentially underwrote the economic development of the whole western hemisphere, a Barings sneeze caused American flu.

Roberts's analysis of the American scene draws heavily on foreign perceptions of the United States, particularly British ones, which are quite critical. For such people, the American experiment is simply not working. He explains why in the most focused and satisfying chapter of the book, which looks at government policy on the state level, where politicians typically found the wrath of voters more frightening than that of lenders. A British ditty at the time captures the caustic mood of foreign elites, which would sting national pride for some time to come:

Yankee Doodle borrows cash,
Yankee Doodle spends it,
And then he snaps his fingers at
The jolly flat who lends it.
Ask him when he means to pay,
He shews no hesitation,
But says he'll take the shortest way
And that's repudiation!

In snubbing foreign lenders, however, the states also short-circuited internal improvement projects that died on the vine (so much for Pennsylvania's effort to compete with the Erie Canal by linking Philadelphia to Pittsburgh). They also essentially shut themselves out of the credit markets for many years.

The situation at the federal level wasn't much better. Washington never slumped into bankruptcy the way a series of states did; the government actually ran a surplus in the early 1830s that it promised to redistribute to the states. Ironically, however, the perception of plenty made matters worse when the feds began hemorrhaging revenue and pulled the plug on the program, sending desperate states into even more distress.

But Roberts shows the reverberations of the crisis went far beyond economic policy. Economic hard times corroded trust in political institutions, creating government gridlock. The Whig Party swept to power in 1840 by riding a wave of exasperation with Martin Van Buren's Democrats, only to find themselves similarly hobbled and tossed in subsequent congressional and presidential elections. Fiscal difficulties also constrained the nation's military. Though public opinion at the time was militantly expansionist with regard to borders in Maine, Oregon, and Texas, informed government officials realized that war -- especially war with hegemonic Great Britain -- would be folly at best. (Here it's worth noting in passing that Roberts portrays the ninth president of the United States, "His Accidency" John Tyler, who got the job when Whig William Henry Harrison died, as one of the architects of a stronger executive branch. To a great degree that's because he used his authority to trim national sails in the face of realpolitik.)

Finally, Roberts describes the depression as a crisis of civic order. He situates the obscure and often mystifying Rhode Island insurrection known as the Dorr War in the context of an outdated colonial constitution finally breaking down in the face of economic pressures. Ditto for the Anti-Rent uprisings in the upper Hudson Valley. The situation in Philadelphia, where riots broke out in 1844, was perhaps more a function of labor politics in the blast furnace of industrial capitalism. But it too derived directly from populist grievances with financial elites. As Governor William Seward of New York observed at the time, there was widespread anger that "none but the educated, the refined, the financial, the brokers, the great commercial interests of society have the right to suspend their just debts." His recognition of fiscal hypocrisy would sound familiar to the protesters of Occupy Wall Street.

In his laudable attempt to understand the economic mechanics of the nation from a broader perspective, Roberts sometimes seems to stretch his argument a bit far. It's not clear, for example, that we need the kind of detailed assessment of the U.S. navy in the Age of Jackson that we get in this book. Nor is it clear that finance can really explain why the U.S. never went to war with Great Britain in the 1840s: even a flush nation would have had very good reasons to avoid one with a global superpower whose resources outmatched the United States in just about every meaningful sense at the time. Roberts makes a good case that the Mexican War was in some respects a proxy fight with Great Britain, though less so that credit markets were central to the rivalry (indeed, he seems to stray pretty far from them even in his own telling). But he's surely right that the ease with which the nation was able to borrow money again in its aftermath signaled the degree to which the nation had finally, fully, and visibly recovered from the Panic of 1837.

While this svelte book could have been trimmed a bit more, it remains a valuable case study, because it's both detailed and resonant. As a snapshot of the United States at what might be considered an "in-between" moment of its history, the picture here is rich in suggesting where the nation had come from and where it was going. And as an invitation to consider the U.S. place in a post-hegemonic world, the book offers a glimpse of a plausible future which, while undeniably sobering, is not exactly apocalyptic. In that regard, America's First Great Depression is an oddly hopeful reality check.

About King's Survey

King's Survey is an imaginary high school history class taught by Abraham King, a.k.a. "Mr. K." Though the posts proceed in a loosely chronological fashion, you can drop in on the conversation any time. For more background on this series, see my other site, Conversing History. The opening chapter of "King's Survey" is directly below.

“The Greatest Catholic Poet of Our Time . . . Is a Guy from the Jersey Shore? Yup,” in The Best Catholic Writing 2007, edited by Jim Manney (Chicago: Loyola Press, 2007)

“I’s a Man Now: Gender and African-American Men,” in Divided Houses: Gender and the Civil War, edited by Nina Silber and Catherine Clinton (Oxford University Press, 1992).

THE COMPLETE MARIA CHRONICLES, 2009-2010

Most writing in the vast discourse about American education is analytic and/or prescriptive: it tells. Little of that writing is actually done by active classroom teachers. The Maria Chronicles, like the Felix Chronicles that preceded them (see directly below), take a different approach: they show. These (very) short stories of moments in the life of the fictional Maria Bradstreet, who teaches U.S. history at Hudson High School, located somewhere in metropolitan New York, dramatize the issues, ironies, and realities of a life in schools. I hope you find them entertaining. And, just maybe, useful, whether you’re a teacher or not. –Jim Cullen