The Betrayal of the American Dream by Donald L. Bartlett and James B. Steele

THE CLOSING of the COMMONS

The New Enclosure Movement

Essentially “the American Dream” has always been a middle class dream. Thanks to carefully targeted government policy, the middle class has been systematically privileged and advantaged, while the lower classes lived under surveillance and were kept under control. Even in the Gilded Age, those glorious years before the hated personal income tax was ratified as a Constitutional amendment in 1913, aspirational Americans dreamed of owning their own farms or starting their own businesses or of finding a good job. Like “Liberty” and “Justice,” those dreams were The American Way.

But hiding behind those aspirations and fine words were government measures that worked in favor of the rich, making a mockery of sacred American words such as “equality” and “fairness.” The thesis of the latest book by Donald Bartlett and James Steele, The Betrayal of the American Dream, is that it is not just the American Dream that has been “betrayed” but all of the Americans who are not rich. And even worse, these Americans have been betrayed by their fellow citizens, the very rich and the very powerful, who have essentially thrown them and their dreams under the bus…or the stretch limousine.

Indeed, the first chapter of The Betrayal of the American Dream is entitled “Assault on the Middle Class,” and the account of the “assault” begins with a real person, Barbara Joy Whitehouse, one of the many people left behind in the stampede of the wealthy, who, in their urgency to help themselves, trampled over the rights and dignity of the ordinary person. One could say, “What else is new?” Or one could say, “This sounds familiar.” Or one could repeat the old adage, “The rich get richer and the poor get poorer.” But this attitude of selfishness, which rips the fabric of society apart, is new, and the disregard of the rich for the communal and historic social compact is relatively recent. Before entering into the weeds of this angry and informative book—how the American dream was betrayed—the presentation of a couple of charts might be in order.

The Contemporary Middle Class

First is the now famous chart of the “flatlining” incomes of the middle class since 1970, juxtaposed with the rise of upper-class incomes. The blue line represents the income of the middle class and the red line the wealth of the upper class. The blue line stretches evenly across four decades, staying consistent and flat, even as prices on everything rose. The red line rises like a star, soaring to the skies of unbelievable wealth, charting an upward-bound path towards more money than any one human being could ever spend. The source is CNN Money.

The next chart is the equally famous “Parfait Chart,” which shows different colored layers, demonstrating the “thickness” of the layer entitled “Tax Cuts for the Rich.” The much-derided Recovery Plan, also known as “The Stimulus,” from the Obama administration is a tiny pale layer squashed under the Bush Gift to his “Base,” as he called his wealthy supporters. This chart is courtesy of the Center on Budget and Policy Priorities.

These charts are classic illustrations of ideology, or how the government favors the interests of the dominant class. The rich prosper and everyone else pays for their gains, or, to be more precise, the 99% hand over their hard-earned money to the 1% in order to encourage these individuals, at some unspecified point in time, to trickle something down upon the poor. If the Bush tax cuts for the wealthy, now twelve years old, are either increased or allowed to continue as the Republicans wish, the four-year recovery from the Wall Street Crash will be crushed and the debt will continue to rise—along with the incomes of the rich. This parfait chart is instructive because it shows how marginal the Bush Wars (on the credit card) were compared to the Bush Giveaway to the most wealthy and the least needy in America. I present these charts for a reason: these bright colors bring to mind another kind of economic map, literally a map that shows what happens when the rich use government to take away from the lower classes. In the eighteenth century, this seizure of resources was called The Closing of the Commons or the Enclosure Movement.

The First Closing of the Commons

The chart above is actually a map of the Commons of an English village called Kibworth-Beauchamp, featured in the recent series The Story of England, hosted by the incomparable Michael Wood. The Commons is land held in common by the people. The actual owner of the terrain is the squire or lord of the manor who, in an act of noblesse oblige, allows the people or the tenants who work the estate—the small farmers and the peasants—to have their own plots of farmland. The farmers planted and harvested as they wished and were allowed to keep the bounty for themselves. In the old days, this obligation to one’s tenants, inherited from the Feudal era, was a responsibility that came with wealth and privilege. The Lord and Lady took care of their own. As virtuous as it sounded, noblesse oblige was also smart public policy: it is easier to control contented workers than it is to quell discontented peasants. If both sides understand that the social and economic bargain is a two-way street, then the network of obligations and responsibilities becomes the warp and woof of social relations.

In a time of unlimited power of monarchs and aristocracy, this historical equalizing of the economic scales acted as a way to repay the peasants for their service, while at the same time tying these people to the land upon which they labored. According to Wood, these strips had been worked by the same families for generations. Each strip had an individual name; each strip had its own level of fertility. Some strips were less fertile or harder to work than others, while some were fertile and easy to farm. These strips were parceled out equally, so that no one family could benefit at the expense of others. Thus a rough equality of responsibility (if not income) was created that somewhat offset the imbalance of power. This age-old balance of power enabled the rich to placate the poor and gave hope to the nascent middle class, and, in England, staved off discontent and revolution. But this social agreement, this belief that everyone had obligations under the social compact between the two classes, came to a close during the eighteenth century.

The law locks up the man or woman
Who steals the goose from off the common
But leaves the greater villain loose
Who steals the common from off the goose.

The law demands that we atone
When we take things we do not own
But leaves the lords and ladies fine
Who take things that are yours and mine.

The poor and wretched don’t escape
If they conspire the law to break;
This must be so but they endure
Those who conspire to make the law.

The law locks up the man or woman
Who steals the goose from off the common
And geese will still a common lack
Till they go and steal it back.

Anonymous

The Closing of the Commons or the Enclosure Movement ended, rather abruptly, a centuries-old set of legal and social customs pertaining to the balance between privilege and powerlessness. Nowhere is this “shock of the new” better illustrated than in Thomas Gainsborough’s portrait of Mr. and Mrs. Robert Andrews (1748). In her iconic description of this painting, the art historian Ann Bermingham alludes to “agrarian change.” On one hand we see the accouterments of privilege: the pretty blue silk dress and dainty pink shoes of Frances Andrews and the flintlock rifle and dead game displayed by Robert Andrews. She does not have to labor and he has the inalienable right to hunt on his own property. But the background of the painting, the landscape view that made all of the attributes possible, tells a new story: the Closing of the Commons.

We see the Enclosure Movement stretched out behind the newly married couple. The absence of labor or the workers who serve the estate is palpable. The wide open Commons are fenced in, walled in, making Enclosures for sheep. The reasons for Enclosure during a hundred-year period are complex and varied over time and place. In her article “Jane Austen and the Enclosure Movement: the Sense and Sensibility of Land Reform,” Celia Easton pointed out that

Owners of large estates began enclosing their land when the market and transportation infrastructure made an acre of land devoted to raising sheep more valuable than an acre of land devoted to raising barley. Sheep herding had immediate advantages over farming: lower labor costs, less dependency on weather, and easier land management. Extreme climatic events and disease did threaten the main capital investment—the sheep themselves—but large landowners were less affected by these threats than small landowners, since their sheep had access to larger pasturage and shelter from inclement conditions. None of the decisions to enclose land to raise sheep would have been made, however, without a market for wool and the roads on which to transport it.

What we are seeing in Mr. and Mrs. Robert Andrews is that the wool trade became more economically profitable and that centuries of farming the same strips of the Commons had exhausted the land. As Easton stated, for centuries the English government had restricted Enclosure, the desire of the upper classes to make a greater profit, in order to protect the lower classes, but by the eighteenth century profit motives overtook moral obligations or social concerns, and the Commons were Closed, either by parliamentary means or by unilateral actions on the part of the landowner. In 1748, Mr. and Mrs. Robert Andrews were on the cutting edge of Enclosure, slicing and dicing their lands and pushing the villagers off their ancestral lands. In other words, the land was outsourced to the sheep.

The Contemporary Closing of the Commons

The Closing of the Commons and the ultimate “betrayal” of the common people in the eighteenth century is similar to the “betrayal” Bartlett and Steele describe in their book. In our time, the post-World War II period, there was a series of government policies designed to raise the middle class, from the G. I. Bill to government projects, such as infrastructure to make interstate commerce more efficient—all of which elevated lower-class white males (and their families) to the middle class. As seen in the photograph of Levittown above, it is true that these post-war laws were explicitly directed towards the white male population as a reward for their services in the War. Women and people of color were consciously left out of the post-war benefits boom and their war-time service was expressly not recognized. Both groups, together the majority of the American population, were thus placed under the curse of “redlining,” and were denied loans for homes and entry into certain neighborhoods and access to certain jobs and schools.

But post-war government policy had a large and positive impact, creating an extended middle class with rising consumer power and rising incomes that allowed men and women to purchase the post-war avalanche of new commodities. But by 1970, a mere twenty years later, the party was over as the outsourcing of good manufacturing jobs began, slowly at first, a trickle here and there, gradually widening into a stream, predicting the flood of jobs gushing towards Asia. Low- and high-skill manufacturing jobs (the usual domain of the white male) were shipped overseas, where desperate workers did the same jobs at a fraction of the wages. The American worker and the middle-class professional were left high and dry while the wealthy took advantage of laws and tax policies they had helped fashion to enrich themselves through outsourcing.

Ever since the Enclosure Movement, sociologists and economists have argued over whether the Closing of the Commons was theft from the people or whether, in the long run, the result was positive. Of course, as John Maynard Keynes pointed out, “in the long run, we are all dead,” and the long-term benefits have accrued to one group, the rich, over the other group, those who work to make the rich richer. As in the eighteenth century, those in power have sloughed off the sense of responsibility while retaining the idea of privilege. Just as there was a refusal to accept age-old obligations two hundred years ago, today there are no thoughts of citizenship and no concern with giving back or paying forward for the greater good or the future of the nation. As the authors of The Betrayal of the American Dream point out,

In our 1992 book America: What Went Wrong? we told the stories of people who were victims of an epidemic of corporate takeovers and buyouts in the 1980s. We warned that by squeezing the middle class, the nation was heading toward a two-class society dramatically imbalanced in favor of the wealthy. At the time, the plight of middle-class Americans victimized by corporate excess was dismissed by economists as nothing more than the result of a dynamic market economy in which some people lose jobs while others move into new jobs—“creative destruction…”

The issue now, as it was in the two centuries of the Enclosure Movement, is not the “creation” of new ways of making wealth but the “destruction” of the old ways and the impact of the “betrayal.” Most importantly, when it is asked who benefits from these economic changes, it becomes clear that the so-called “creativity” which benefits certain individuals also results in the destruction of the lives of the masses, who cannot live long enough to benefit from future largesse. The result of the Enclosure Movement was a disconnect between the people and the land—Bermingham calls the effect “alienation.” The landowners severed the ancient obligations of the squire, and the peasants were separated from the land that they had long regarded as “theirs,” to the extent that they had named their plots.

Globalism and the Abandonment of the Land

Today, Globalization has become the new Enclosure Movement. In the process of moving towards a new international economy—and this is a point that Bartlett and Steele did not emphasize—rich Americans, like American corporations, have less and less connection to their own nation: their wealth is global and consequently their interests and their fealties are international. The result is a waning of patriotism, of connection to the land (America) and the people who live in the land (America). It has been said by many political commentators, such as Matt Taibbi (Griftopia and The Great Derangement) and Chrystia Freeland (Plutocrats), that the new wealthy class is not American; they are citizens of the globe who merely happen to live in America. As global citizens, these mega-rich people have no obligation to America and therefore have no compunction about “betraying the American dream.”

Today, money (whether virtual or real) has replaced land as the major source of wealth. During the nineteenth and twentieth centuries, wealth came from ownership of businesses or corporations that were local and that depended upon a symbiotic relationship between communities and laborers. Henry Ford understood that his workers needed to earn enough money in his factories to buy the cars they made. In the twenty-first century, this common-sense understanding that labor and management had needs in common and that their relationship was reciprocal has dissolved. In fact, an aerial photograph of homes in the Hamptons looks remarkably like the Enclosure Movement in action: the coast and the sea are privately owned and controlled and enclosed.

Breaking the Social Bonds

But the sources of money in our century are global and not local. The global workers are speechless and powerless citizens of totalitarian nations which are in league with American corporations. Management does not manage workers; managers manage the income or the wealth of the company. American workers have been fired, outsourced and disenfranchised, losing their jobs, their futures and their governmental representation. As Bartlett and Steele write,

At a time when the federal government should be supporting its citizens by providing them with the tools to survive in a global economy, the government has abandoned them. It is exactly what members of the ruling class want. The last thing they want is an activist government—a government that behaves, let’s say, the way China’s does. Their attitude is “let the market sort it out.” The market has been sorting, and it has tossed millions out of good-paying jobs. Now that same ruling class and its cheerleaders in Congress are pushing mightily for a balanced budget at any cost. If it happens, it will be secured mostly by taking more out of the pockets of working people, driving yet another nail into the middle-class coffin. The economic elite have accomplished this by relentlessly pressing their advantage, an advantage that exists for the simplest of reasons: the rich buy influence.

The goals of a corporation are short term: make money now and don’t worry about the future. Or to put it another way—the corporations are no longer linked to a nation so they don’t have any stake in the people of any country. In other words, the relative ability of the American middle class to buy corporate products or commodities is irrelevant to the international business. The only relevancy is profit. It is a moral imperative to corporations that there is no higher good than higher profits. Hiring American workers is expensive: American wages are higher than in most Asian countries and, unlike European countries, American businesses are expected to provide health care benefits and manage retirement accounts. No sane profit-minded corporation would hire American workers when Asian workers could be hired at a fraction of the cost. The free market is free of responsibility and of allegiance to one’s flag. As the authors point out,

Corporate executives contend that they are forced to relocate their operations to low-wage havens to remain competitive. In other words, their domestic workers earn too much. Never mind that manufacturing wages are lower in the United States than in a dozen other developed countries.

But Bartlett and Steele are also interested in telling the story of how the wealthy have been able not only to remove the sources of their income from American shores but also to protect their wealth. It is not just that the very rich and powerful have moved the jobs out of reach of the worker; it is that they have also moved their money out of the reach of the government. And the government and the politicians have allowed the rich to strip America of the money the nation has earned for them. As Bartlett and Steele charge, the wealthy “lack a moral or civic compass” and are “without a purpose beyond its own perpetuation with no mission except to wall in the money within its ranks.” A case in point would be a Birkin bag that was auctioned off in 2011 for over $200,000: the cost of a modest middle-class home in a modest Midwestern state, or the amount of four middle-class incomes.

That the purse costs as much as a home–and that home is probably in the hands of a bank that has foreclosed and refuses to refinance–raises the question of how much money is “enough?” Is the opportunity to own such an object so important that the possession overrides morality or common sense or American values? The authors assert that America has ceased to be a democracy and has, over time, devolved into a “plutocracy” in which the common people are not so much ruled by the rich as they are exploited by the rich. The rich can’t be bothered to be part of the government; it is easier to buy politicians to enact laws and rules that benefit their one driving desire—to accumulate money, more money, and then even more money.

Ironically, it was Wall Street that disclosed the emergence of the American plutocracy. As early as 2005, a global strategist at Citigroup, Ajay Kapur, and his colleagues coined the word “plutonomy.” They used it in an internal report to describe any country with massive income and wealth inequality. Among those countries qualifying for the title: the United States. At the time, the top 1 percent of U.S. households controlled more than $16 trillion in wealth—more than all the wealth controlled by the bottom 90 percent of the households. In their view, there really was no “average consumer,” just “the rich” and everyone else. Their thesis: “capitalists benefit disproportionately from globalization and the productivity boom, at the relative expense of labor,” a conviction later confirmed by America’s biggest crash since the Great Depression. The very rich recovered quite nicely. Most everyone else is still in the hole.

Indeed, we of the middle class are more than likely to stay in “the hole.” Bartlett and Steele made the case that,

Only once before in American history, the nineteenth-century era of the robber barons, has the financial aristocracy so dominated policy and finance. Only once before has there been such an astonishing concentration of wealth and power in an American oligarchy. This time it will be much harder to pull the country back from the brink. What is happening to America’s middle class is not inevitable. It’s the direct result of government policy, and it can be changed by government action.

It is important to realize to what an extent the moneyed class has become the equivalent of absentee landlords in the eighteenth century. The middle class is simply unimportant to them, their plans, their goals.

Despite obligatory comments about the importance of the middle class and why it should be helped, America’s ruling class doesn’t really care. They’ve moved on, having successfully created through globalization a world where the middle classes in China and India offer them far more opportunities to get rich.

In addition, Bartlett and Steele map out the thinking of corporate America. The “job creators” understand that there is a trade-off between providing jobs for Americans and providing them for Indians, and they piously decide that it is good and righteous to elevate the inhabitants of Madras instead. The name of the game is “creative destruction” as jobs are created in China and destroyed in America.

The result is a huge transfer of wealth from the middle class to the wealthy in this country, as well as to workers in China, India, and other developing nations. No one wants to deny people in those countries the right to improve their lot, but the price of uplifting them has been borne almost entirely by American workers, while in this country the benefits have flowed almost exclusively to a wealthy super-elite. Globalization was peddled on the basis that it would benefit everyone in this country. It hasn’t, and it won’t as long as current policies prevail.

The phrase “has been borne almost entirely by” used by Bartlett and Steele is one that can also be applied to the tax code: it is the middle class that pays the price of globalization and it is the middle class that pays the taxes that pay for America. And it is not just rich individuals who refuse to pay their fair share; it is also the corporations, which similarly refuse to pay their taxes.

One explanation for the tax burden on middle America is that for years U.S multinational corporations have refused to bring home billions of dollars they’ve earned on overseas sales because they don’t want to pay taxes on those profits. Sitting in banks in the Cayman Islands, the Bahamas, Switzerland, Luxembourg, Singapore, and other tax-friendly jurisdictions is a staggering amount of money—an estimated $2 trillion, a sum equal to all the money spent by all the states combined every year, or more than half the size of the annual federal budget.

The Un-Freedom of the “Free Market”

We are told by the ruling class–or their mouthpieces, the politicians–that the “free market” is at work, that no laws have been broken, and that any regulations on the free market would be a disaster. However, what is not said is that the market is not a level playing field: the market is not free, it is fixed, it is a rigged game; the market is Vegas, where the house always wins and the weekend punters always lose.

Ultimately, the rule-makers in Washington determine who, among the principal players in the U.S. economy, is most favored, who is simply ignored, and who is penalized. In the last few decades, the rules have been nearly universally weighted against working Americans. That a huge wealth gap exists in this country is now so widely recognized and accepted as fact that most people have lost track of how it happened. One of the purposes of this book is to show how the gap became so huge and to explain why it was no accident. Over the last four decades, the elite have systematically rewritten the rules to take care of themselves at everyone else’s expense.

The myth of the Free Market is just that—a Myth. As the authors point out, Germany and Japan and European countries such as France protect their citizens against the ravages of the market. In America we decry “protectionism” in the name of American corporations who want to sell American products abroad. The middle class wants, we are told, the ability to purchase “cheap” televisions from South Korea, but as Bartlett and Steele point out, the trade between America and its trading partners is not free: their workers are protected; ours are not. The result is that American cars are a luxury in China and cost around $100,000. Europe and Asia are simply not big markets for American cars which, at home, must compete with Toyotas, et al.

Unfair competition that benefits the rich and forces the workers and the poor to take the hit has been going on ever since travel and technology made globalization possible.

What is different today is that a company can go under or “fail” regardless of competition or profitability. All that is needed is for the company to be swooped down upon by a corporate raider intent on a “hostile takeover.” Indeed, in their description of what a private equity company, like Bain, does to a business, the authors state that the vulture-like investors argue that the elimination of companies and jobs forces a greater efficiency and thus benefits the “economy.” Bad CEOs are removed, unproductive workers are sent away, they argue, and everyone benefits and the nation as a whole is served. But Sensata, a company with record profits, was suddenly swallowed up and closed down by Bain Capital, and the jobs and equipment are being shipped to China—all in the name of a greater profit. So we ask: Who benefits? Which economy? Theirs or ours? While using the word “economy,” the corporate executives seem to imply the American economy, but what they really mean is that their personal economic positions are improved on the global stage.

The managers of the largest equity and hedge funds have become immensely wealthy—many are billionaires—even though some of the companies they bought and sold later foundered. In addition to the rich fees they harvest, private equity fund managers rake in millions more courtesy of U.S. taxpayers. Thanks to Congress, a portion of their annual income is taxed at 15 percent (rather than 35 percent) under an obscure provision called “carried interest.” This puts that income in the same tax bracket occupied by the janitors who clean their buildings. Using the proceeds from their deals and the money they save on taxes, private equity and hedge fund managers have lavish lifestyles featuring multiple residences, private planes, and ostentatious parties.

Meanwhile, as David Stockman described in The Great Deformer, the companies seized by Bain-like firms, loaded down with debt, gutted, and left for dead, cannot become more “efficient” because the investors/looters have pocketed all the money. Stockman, once Ronald Reagan’s budget guru, pointed out that not only does wealth not trickle down, the kind of wealth won by investment capital is not a win-win proposition—the investor wins by destroying a healthy company, displacing thousands of American workers, and gutting hundreds of American towns. The wealthy, the authors write, are able to buy not just Congress and other key members of the government but also so-called “experts,” academics in supposedly intellectual “think tanks,” which are well paid for their so-called “reports” on the economy. Writing of the fabulously rich Koch Brothers, who fund any number of right-wing causes, Bartlett and Steele said,

The Kochs have contributed $12.7 million to candidates (91 percent Republican) since 1990 and spent more than $60 million on lobbying Washington in the last decade. But their greatest impact is the millions they have poured into foundations, think tanks, and front groups to mold public opinion in their favor by promoting positions that in almost every case benefit the few. The rise of these conservative think tanks and foundations directly coincides with the economic decline of the middle class. Among the more prominent of these organizations are the Cato Institute, which Charles cofounded in 1974, and Americans for Prosperity, which David launched in 2004 as a successor to a similar group that he had helped found earlier called Citizens for a Sound Economy. Dozens of other groups receive Koch money at the national or regional level. In early 2012, a rift developed between the Kochs and Cato, sparking litigation by the Kochs and charges by Cato president Ed Crane that Charles Koch was trying to gain full control of the think tank to advance his “partisan agenda.” The environmental group Greenpeace, which in 2010 examined just one issue on the Kochs’ agenda—their efforts to discredit scientific data about global warming—identified forty organizations to which the Koch foundations had contributed $24.9 million from 2005 to 2008 to fund what Greenpeace called a “climate denial machine.”

In fact, after the release of the documentary Inside Job, the outcry against economists clearly caught in conflict-of-interest situations was so loud that the profession briefly flirted with setting ethics standards for itself. Embarrassed, the American Economic Association scheduled a session on ethics at its 2011 meetings in Denver. As The Economist pointed out,

You might assume that economists already disclose their links to organisations. But when economists write articles for the opinion pages of newspapers and magazines, appear on television to discuss matters of economic policy or testify before parliamentary committees, the audience is often unaware of their non-academic affiliations. A study by Gerald Epstein and Jessica Carrick-Hagenbarth of the University of Massachusetts, Amherst, looked at how 19 prominent academic financial economists who were part of advocacy groups promoting particular financial-reform packages in America described themselves when they wrote articles in the press. Most had served as consultants to private financial firms, sat on their boards, or been trustees or advisers to them. But in articles written between 2005 and 2009 many never mentioned these affiliations, and most of the rest did so only sporadically and selectively. Readers may have assumed they had more distance from the industry than was in fact the case.

Can This Country Be Saved?

The authors of The Betrayal of the American Dream, who have watched the American economy for years, end their book with a plan to remedy the current situation.

Over the last four decades, public policies driven by the economic elite have moved the nation even further away from the broad programs that helped create the world’s largest middle class, to the point that much of that middle class is now imperiled. The economic system that once attempted to help the majority of its citizens has become one that favors the few. Not everyone in the middle class who pursued the American dream expected to get rich. But there was a bedrock sense of optimism. Most people felt that life was good and might get better, that their years of dedication to a job would be followed by a livable, if not comfortable, retirement, and that the prospects for their children and the generations to follow would be better than their own.

The writers lay out a series of reforms that they think are necessary to save the middle class. From reforming the tax code, which has been written to favor the wealthy, to policing the financial markets to providing Keynesian stimulus to rebuild the infrastructure—all of these suggestions are common sense and all are doomed to failure unless the voters demand otherwise. Bartlett and Steele suggest that

Middle-class Americans, still the largest group of voters, must put their own economic survival above partisan loyalties and ask four simple questions of any candidate who wishes to represent them:

1. Will you support tax reform that restores fairness to personal and corporate tax rates?
2. Will you support U.S. manufacturing and other sectors of the economy by working for a more balanced trade policy?
3. Will you support government investment in essential infrastructure that helps business and creates jobs?
4. Will you help keep the benefits of U.S. innovation within the United States and work to prevent those benefits from being outsourced?

The choices we make in the candidates we elect and the programs and policies we support will set the direction of the country.

It will be difficult for Americans to put country before party and to look past ideology to find facts, for as Thomas Frank pointed out in his 2004 book, What’s the Matter with Kansas? How Conservatives Won the Heart of America, Americans can be counted on to vote against their own best interests. His argument, hotly contested by some writers, is that class interests, i.e., money, have been replaced by ethnic interests, i.e., race. Lower- and middle-class white people have been persuaded that their interests are aligned with those of the upper classes, who will—in their own good time—“trickle” their gains “down” to the deserving few. Someday, they are assured, “the job creators” will return the jobs they have shipped overseas. Sadly, those jobs are not coming back and the middle class must start standing up for itself. As The Betrayal of the American Dream concludes,

What’s at stake is not only the middle class, but the country itself. As the late U.S. Supreme Court justice Louis Brandeis once put it: “We can have concentrated wealth in the hands of a few or we can have democracy. But we cannot have both.”

One thing is sure: only the middle class can help itself; no one else will.

The Persistence of the Color Line by Randall Kennedy

Randall Kennedy’s new book, The Persistence of the Color Line: Racial Politics and the Obama Presidency, was published (2011) a bit too soon and needs a sequel. The incompleteness of this book is not the fault of Kennedy, a professor at the Harvard Law School, but of the continuing evidence of ongoing and unrelenting racism displayed in disguise by a variety of political groups. From the Birthers to the Congress to the Tea Party, the election of a black man as President has brought out the worst of America. Kennedy’s book barely gets past the first year of a term in office that was complicated by the simple fact that Barack Obama is only half white. And half is not enough. Kennedy’s main point is that Obama is trapped in his (half) blackness and cannot act with the privileged latitude that comes automatically to any and all white Presidents. This trap of skin color has shaped and will shape this unique Presidency.

Kennedy is certainly correct that it is institutional racism that restricts Obama in what he can do, what he can say, who he can champion, what he can support, which laws he can put forward, which policies he can enact. Despite his high office, in his own country (more than in any other nation) Obama is defined by his race. Kennedy opens his book with the assertion:

The terms under which Barack Obama won the presidency, the conditions under which he governs, and the circumstances under which he seeks reelection all display the haunting persistence of the color line. Many prophesied or prayed that his election heralded a postracial America. But everything about Obama is widely, insistently, almost unavoidably interpreted through the prism of race…

Sadly, despite the hopes to the contrary that America was now “postracial,” it is now clear that America is still a racist society. If we define racism in its largest sense: that racism is a “consciousness” of race, then Americans are intensely conscious of Obama as a man of color. For some, this “color”—black—is the color of redemption, for others, the color is a threat and a retribution. Whether positively or negatively, the entire nation is in thrall to the notion that our President is a black man.

One could wonder whether, if the election of a woman or a man of color as President had happened a few decades later, say in the 2030s, more Americans would have been more accepting and fewer people would have cared about race; but instead Barack Obama was elected in 2008. Early twenty-first-century people had parents and grandparents who had (fond) memories of segregation, and for many Americans, particularly those in Middle America, the sight of people of color is still rare. The reaction of these white Americans was defensive on one hand—a regression into segregationist attitudes—and offensive on the other—an instinctive rejection of someone so unfamiliar, so dark, so cool.

One could also wonder how much the fate of Obama would have been changed if his own white family had survived: if his white grandparents had survived his election, if his white mother could have lived in the White House along with Michelle Obama’s black mother. The whiteness of Obama could have been on full display on the campaign trail, at the Inauguration, and during policy debates. But without either that white half or the black half, a “blackness” born of racism was projected onto Obama. The result was, to borrow Randall Kennedy’s term, to “blacken” Obama and to make him seem alien. However, far from being an “alien,” Obama is the mixed-race future of a more tolerant America to which we might aspire.

It is interesting to note that the President grew up in a white and multicultural society. Obama is the product of the “Melting Pot” so hated and so dreaded by the Nativists and the Know Nothings of the previous century. Obama is the future they fought to avoid. In a very typical fashion, he was raised by a single mother and her parents, all of whom were white and all of whom loved him. He grew up in multicultural Hawaii and went to white-identified schools and colleges, Occidental, Columbia and Harvard, and dated white women and had white friends. Obama chose to be “black” in the sense that he had to seek and learn about “blackness.”

But these subtleties of choice are lost on those who object to Obama solely because he is black—they don’t care about his decisions, or about the distinctions between black skin and black culture; they care only about the skin and refuse to accept him in the office of the Presidency. Kennedy reports on the ugly fact that there is a

substantial number of Americans who simply refuse to acknowledge Obama’s political legitimacy (for example, the allegation believed by tens of millions that he was born abroad), the open contempt displayed by antagonists not only on the airwaves of right-wing talk radio but also in the inner sanctum of Congress (for example, Joe Wilson’s infamous shout of “You lie!”), and the stark polarization that characterizes the racial demographics of support for and opposition to Obama. That the opposition is overwhelmingly white is a fact that no one can reasonably dispute.

Then Kennedy asserts, “What is disputed, however, is that racial sentiment is an important ingredient in the opposition.” This statement is interesting and what the author is working through is the fact that Obama won an overwhelming victory and that while he did not win the majority of the white vote, he won enough to carry the day. And as Kennedy points out there are “plenty of reasons” to dismiss Obama without even mentioning race—he is too liberal, he is too conservative, and so on. No president is going to please everyone all the time; but, that said, Obama will always be judged according to different standards and this judgement will always be tempered by race and those attitudes are, in and of themselves, a form of racism. The very fact that Americans were (momentarily) proud of themselves is tinged by a history of slavery and segregation. As Kennedy says,

An inflated sense of accomplishment is part of the racial predicament in which Americans find themselves. Electing a black American as president is treated as remarkable. In a sense it is—but only against the backdrop of a long-standing betrayal of democratic principles…

…That Obama has had to work so hard to make himself and his family acceptable to white America and that he has had to continue to work so persistently to overcome the perceived burden of his blackness is a sobering lesson.

I suppose we Americans hoped that we would rise to our own optimistic standards, and, as Kennedy lays out, the campaign was remarkably free of racism; but there was a sizable segment of the nation that would never accept Barack Obama as President. One could argue over which incident by which public official first marked Obama as “black” and unacceptable, but barely into his first term it became clear that this was a marked man. A conservative discourse was woven, full of symbolic racist “dog whistles” to a certain group and therefore skirting overt racism. Kennedy writes that “…the prejudice has been sublimated and expressed via a code that provides a cover of plausible deniability: ‘He’s not one of ours’; ‘He’s not like us’; ‘He’s alien’; ‘He’s a Muslim’; ‘He’s a socialist.’”

Ironically, because he is black, Kennedy argues, Obama cannot appear to favor peoples of color and therefore can do less for his “own people,” who truly need the special help that a white President could freely provide. On the other hand, also because he is black, Obama was in the cultural position to assist other Others, the LGBT community and the Latino community. Although Obama has, as Kennedy points out, elevated many black people to high places in his administration, he has arguably done—in a more specific way—more for the gay and lesbian community and for Mexican Americans than for blacks. Thanks to Obama, gays can now serve openly in the military and Latino young people who were brought to America as children can now move freely in society without fear. The next steps, thanks to Obama, will be that gay people may be able to marry legally and that young immigrants can become citizens. This willingness to act in a moral fashion towards those who inhabit this country is real progress towards civil rights for all Americans.

Then there is the dark side of this Presidency. Because of the color of his skin, because of his race, and mostly because of the consciousness of his race, oppositional criticism of Obama falls into the zone of racism but these racist (de)evaluations are delivered in code. Once racist sentiments were uttered openly without restraint and were part of the broader culture, but as Kennedy writes, during the 1960s the language of racism in politics changed:

The Civil Rights Revolution stigmatized the open appeal to racial animus. By the late 1960s, politicians were no longer able to blatantly incite racial prejudice to their advantage at little or no political cost. To tap into racial resentments openly meant falling afoul of newly ascendant norms of racial etiquette and thus attracting punishing censure. So open appeals to racist animus gave way to implicit appeals. To avoid being branded as racist while nonetheless trafficking in racial prejudice, some politicians began to use code words to say covertly what they could no longer safely say overtly.

Today, three years into Obama’s presidency, we see these codes fully developed, unfurled and proudly flying out of the mouths of political opponents. Add up these wordy criticisms and they all say the same thing: Obama is incapable of being President because he is black: “he is in over his head,” “he is incompetent,” and so on. All blame for all ills can be laid at the door of a black man, a sin-eater of white transgressions. Therefore, the white men who created huge budget deficits are not at fault, the white men who started but did not finish two wars are not at fault, the white men who let Osama bin Laden slip through their fingers are not at fault, and Obama’s bold deeds cannot be celebrated, because, as Mitt Romney claimed, killing Osama when the opportunity presented itself was a “no-brainer.”

All of Obama’s accomplishments are discounted—he was an affirmative action admission to exclusive Ivy League schools, the stimulus did not work, he is wrong to attempt to bring peace to the Middle East, and on and on. Nothing he does is right and everything he does is wrong, not because any of these Codes are true but because the endless assertions of failure are necessary to allow whites to feel superior to this intelligent and intellectual and gifted and exceptional black man.

The idea that a black President might do a better job than a white one—even George Bush—is insupportable to racist white Americans. Kennedy goes through a number of racially tinted incidents that happened before or early in the Presidency of Obama: the very real embarrassment of the Reverend Wright, the clash between the Harvard scholar Henry Louis Gates, Jr. and a Cambridge police officer, the embarrassing incident involving Shirley Sherrod, the confirmation of Sotomayor, and so on. Kennedy does an excellent job of explaining the culture of Reverend Jeremiah Wright and gives an informative account of black patriotism, or why black people love America. But the incident that opened the dam of racism, in my opinion, was the famous “You Lie” outburst of Joe Wilson, Congressperson from South Carolina.

The occasion was a solemn one, the health care address on a major policy proposal by Obama, marred by a loud Southern voice screaming “You Lie!”, clearly something that would never happen to a white president. As Maureen Dowd wrote in the fall of 2009,

I’ve been loath to admit that the shrieking lunacy of the summer — the frantic efforts to paint our first black president as the Other, a foreigner, socialist, fascist, Marxist, racist, Commie, Nazi; a cad who would snuff old people; a snake who would indoctrinate kids — had much to do with race…But Wilson’s shocking disrespect for the office of the president — no Democrat ever shouted “liar” at W. when he was hawking a fake case for war in Iraq — convinced me: Some people just can’t believe a black man is president and will never accept it…Barry Obama of the post-’60s Hawaiian ’hood did not live through the major racial struggles in American history. Maybe he had a problem relating to his white basketball coach or catching a cab in New York, but he never got beaten up for being black. Now he’s at the center of a period of racial turbulence sparked by his ascension. Even if he and the coterie of white male advisers around him don’t choose to openly acknowledge it, this president is the ultimate civil rights figure — a black man whose legitimacy is constantly challenged by a loco fringe. For two centuries, the South has feared a takeover by blacks or the feds. In Obama, they have both.

Dowd concluded by quoting Congressman Jim Clyburn, a senior member of the South Carolina delegation, who “had a warning for Obama advisers who want to forgive Wilson, ignore the ignorant outbursts and move on: ‘They’re going to have to develop ways in this White House to deal with things and not let them fester out there. Otherwise, they’ll see numbers moving in the wrong direction.’” I believe that Dowd and Clyburn were correct. The Wilson event, during a speech by Obama on health care, was a turning point. The Congressman both apologized and then raised campaign money on the strength of his racist outburst:

“This evening I let my emotions get the best of me when listening to the president’s remarks regarding the coverage of illegal immigrants in the health care bill. While I disagree with the president’s statement, my comments were inappropriate and regrettable. I extend sincere apologies to the president for this lack of civility.”

Wilson was censured by the House, along party lines (the Republicans taking no responsibility), but the damage was done. This outburst, which Wilson claimed to be “spontaneous,” received only a mild rebuke from his colleagues, and Obama accepted the “apology.” Wilson took advantage of the natural paralysis that happens when civilized people are confronted by outrageous barbarism. There is simply no acceptable reply to an act of such contempt. It is asking a great deal of any human being, startled by an unwarranted and untrue accusation, to react in an effective fashion. One either ignores the outburst—Obama’s approach—or stops the proceedings—a major policy address—and politely asks the offender to leave. The Congressman should have been expelled from the room and expelled from Congress.

But confrontation is not in the makeup of Obama. He is a child of consensus and negotiation, an offspring of the postracial society. It is possible that Obama thought that Wilson was having a nervous breakdown, a fit or a meltdown of some sort. Obama wants peace and, at that time, during that summer of 2009, he probably genuinely thought that the Republicans could be brought into the fold. He did not want to offend the other side; but, by accepting what was a facile and meaningless apology from Wilson, Obama suggested to those who were watching, the people that he did not yet see as his enemies, that he was weak.

After Wilson was let off the hook, it was as if the dam had been burst and the Birthers came out of the woodwork with their absurd claims that the Presidency of Obama was a result of an impossibly complex conspiracy to place a Manchurian candidate in the White House—for what purposes it is never clear. Also out in the open were charges of Socialist, Food Stamp President, “European,” “Muslim,” and on and on, all of which were codes for un-American and also not white, because “real” Americans are white and Obama is black. Obama, in trying to govern from a “bipartisan” philosophy of “compromise,” looked foolish and naïve.

Obama, quite rightly, has taken seriously his charge as President to govern all Americans fairly, regardless of age, race, gender or party affiliation. This position of equity is far more Presidential and fair than that of most Presidents. As Kennedy writes, Obama refuses to govern from a position of race and insists upon taking his positions on the basis of morality. For Obama, Kennedy states, it…

“…isn’t a matter of black and white. It’s a matter of right and wrong.” Sticking to his strategy of deracialization, Obama sought as much as he could to avoid dirtying himself with the racial messiness of the dispute without alienating his African-American base. He saw deep engagement in the controversy as a losing proposition, a racial quagmire that, for many white voters, would only blacken him…”

If Obama is “blackened,” then all people of color are “colored” in even more intense hues. If it is acceptable to emit racist codes when referring to Obama, then the attacks on Others, those who are not white and male, are suddenly acceptable. Since “You Lie!” we have heard supposedly reputable or apparently sane politicians call for an electrified fence on the border of Mexico and we have seen literally hundreds of laws passed to restrict the rights of female citizens.

We know now that on the night of the Inauguration, certain Republicans met in private (in secret) and made a pact—to obstruct every single proposal Obama made, regardless of its merit, regardless of whether or not the policies were originally Republican, regardless of the impact upon the nation. This pact or agreement was nothing short of un-American and unpatriotic and unprecedented. The Republicans have held firm and have voted en masse against every proposal, every policy, every law put forward by Obama. These actions are tantamount to a conspiracy and devalue the office of the Presidency. Already there is ample evidence that the Republicans will have no respect for any President, even their own.

Randall Kennedy shows us the early straws in the wind, one racist event after another, incidents that would have passed unnoticed under a white president or events that would not have happened under a white president. Kennedy points out that in each and every case he presents, Obama is damned if he speaks out or wades in, and he is damned if he stays silent and stays away. Kennedy is right to stress the fact that Obama is trapped in his blackness and in his innate civility and his heartfelt belief in the good will of all people. I believe that Obama had no idea of how deep and how wide and how old racism is in America. I don’t think he was prepared for the wall of refusal that he faced, and, for years, Obama has had no effective response to the visceral rejection of his presidency.

But Obama is a learner and he is a proud man. The question is what this nation is facing—a return to the blithe and blunt racism of the 1950s, or the last spewings of an ugly racist bile out of the body politic? Is this Presidency a Sacrifice Presidency, a period that forces a stained country to redeem its shameful past, or is this Presidency a Reversion Presidency, the occasion upon which we revert to the old ways: the rule of the white male? We have a presidential campaign for 2012 that is entirely based upon the charge that Obama must be removed from office because he is “incompetent,” or, in other words, “black.”

News commentators continue to tip-toe around this bigoted rhetoric and gingerly call this dark prejudice “tribal,” and are forced to call attention to the “codes” used. And as the discourse continues to grow and become more extensive, the media are forced, more and more, to confront the constant racism that has been inspired by this Presidency. But the media–whether left or right—are merely reactive. This Presidency is not just any Presidency: it is an occasion, and it is up to Obama to take advantage of his historic election. The speech on Reverend Wright and modern racism was a start, but now, three years later, this brave address is revealed as sadly insufficient for today’s dark world. Obama must take the high moral ground, be a new Martin Luther King, and demand an end to racism at long last. Kennedy ends his interesting book on a hopeful note,

Among colored folk, his ascendancy has raised expectations of what is possible for them to achieve in a “white” Western modern democracy. It has also affected the expectations of white folk, habituating them, like nothing before, to the prospect of people of color exercising power at the highest levels. There are many who still chafe at this turnabout—witness the racial component of the denial, resentment, and anger that has fueled reaction against the Obama administration. The racial backlash, however, is eclipsed by the lesson being daily and pervasively absorbed—the message that a person of color can responsibly govern.

On the eve of an election campaign that is mired in open and belligerent racism, Randall Kennedy’s book, though now out of date, is an instructive account of how a black man teaches white men (and women) that race should be irrelevant. Only when we all learn what Obama is trying to show us will we achieve the transcendence of a postracial society.

Dr. Jeanne S. M. Willette

The Arts Blogger

“Drift” by Rachel Maddow

Drift: The Unmooring of American Military Power (2012)

Introduction

At the heart of what Rachel Maddow calls Drift is the question of how we wage war in the twenty-first century. What is the purpose of war in the contemporary era? And who fights these wars? Or, to twist the title of a famous film from World War II, why do we fight?

The answer is: because the President wants us to fight.

American history has been based on the premise that Americans fight for our rights to be free and to live in a democratic society. We imagine ourselves to be valiant warriors—citizen soldiers, as Stephen Ambrose named those who fought in the last “good war.” Maddow quotes future President Thomas Jefferson as saying in 1792, “One of my favorite ideas is, never to keep an unnecessary soldier,” noting that once the “necessary” war is fought, the “necessary” soldiers fade back into civilian life. But that image of Jefferson’s Yeoman Farmer, who could be counted on to spring to the country’s defense when needed, is a highly idealized one.

By the middle of the nineteenth century, Jefferson’s concerns about the dangers of keeping a standing army had melted in the heat and fire of expansionism and the Mexican-American War (1846-1848). Maddow quickly skips over a sizable chunk of American history: the decades of Manifest Destiny and an extended campaign of genocide against Native Americans. This slide into Empire was capped by the Spanish-American War, when America finally controlled the maximum territory possible. But making and maintaining an American Empire required a standing army—how else do you wage a war of conquest from one end of the continent to the other?

I mention this seventy-year slice of history not to criticize Maddow for not covering it, but to make the point that the desire to keep an army, a strong military force, under the close command and control of the executive branch has always been present in American culture, no matter how much the national mythology denies this history. Certainly the Great War was a rude interruption of a self-satisfied isolationism, and Americans were dragged with great reluctance into it and into the Second World War. Maddow emphasizes how quickly the military was demobilized after these two great wars. However, to paraphrase Karl Marx, the insistence that America was, at heart, a peace-loving nation was a discourse pregnant with its opposite. The ability of a strong President to wage war on command was always present and had been practiced for the bulk of the nineteenth century—war disguised as Manifest Destiny.

That said, the importance of the model or the paradigm of the Second World War cannot be overstated. It was not just the “Last Good War,” as has often been noted; it was also the last conventional war, because it was the last war America fought with Europeans. A shared culture of combat enabled the armies of World War II to fight on the basis of shared assumptions. Japan, having become “modern” by first copying the West and then by beating the West, for the most part complied. Here and there, like Germany, Japan broke the laws of “civilized warfare,” surely a contradiction in terms, but for the most part the basic “rules” were followed. Armies faced and fought one another; navies faced and fought one another. The goal was for one side to defeat the enemy, invade the territory, seize the capital, and force a formal surrender.

The new enemies did not share these cultural expectations and proceeded to ignore the European forms of fighting. The fact that this rather stilted and formal mode of thrust and parry had a long history, stretching back to Medieval times, did not impress the Vietnamese or the tribes of the Middle East. After a brief foray into South Korea, America fought a European-style war only once again—an even briefer visit to Iraq. What followed would be a continuation of the Viet Nam-style quagmire, a series of non-wars that could not be won, only endured until exhaustion intervened. Despite these unpalatable facts, or because of them, the “Dream War” remained a re-run of the Second World War, the Good War, the Winnable War, where words like “victory” and “win” had some meaning.

The Proxy War

After the Second World War, America somehow entered into a continuous state of total war, and these were mostly undeclared “wars,” called interventions or some other nomenclature. It seems that after four years of national militarization, it was hard to break the habit of defensive belligerence. The new enemy was the Soviet Union, and the Cold War began. There is, apparently, something comforting, in an ordering, logical sort of way, about having a known enemy. The “enemy” sorts the world neatly into two halves: good and evil, simple dualities. We know how hard it has been to let go of a good Foe. Once the Berlin Wall fell and the Soviet Union imploded, America continued to seek another Opponent. As Maddow comments in her section on Ronald Reagan,

We’d got in the habit of being at war, and not against some economic crisis, but real war—big, small, hot, cold, air, sea, or ground—and against real enemies. Sometimes they’d attacked us, and sometimes we’d gone out of our way to find them.

But the post-war and the post-Cold War world was not so neat and tidy, and the new enemies were not schooled in eighteenth-century military tactics of opposing lines protecting important strategic sites. Herein lies the trouble with contemporary war, and this is where Maddow begins her argument about “drifting” away from the traditional, formal ways of waging war through declaration and mobilization. Maddow writes of the standing army after 1945,

We had 150,000 troops in the Far East, 125,000 in Western Europe, and a smattering in such diverse and far-flung locations as Panama, Cuba, Guatemala, Morocco, Eritrea, Libya, Saudi Arabia, Samoa, and Indochina. Wary as never before of the Communist threat—now a constant “speck of war visible in our horizon”—America had come to see Jefferson’s preoccupation with standing armies and threats from inside our own power structure as a bit moldy. We were, after all, the only country still capable of keeping the planet safe for democracy.

The Cold War set a precedent for war with a goal but no foreseeable ending. Most people thought that the Cold War would never end, precisely because it was cold. Until the twenty-first century, Americans had not considered the possibility that a Hot War would have not only no foreseeable ending but also no articulated purpose. Maddow takes the reader on a Long March from the Viet Nam War into Iraq and Afghanistan, but her purpose is not to refight these endless wars but to discuss why we are fighting them in the first place. The answer seems to be a particularly male need—on the part of the President and the military—to feel manly, and a rather frightening willingness on the part of a temporary leader, i.e., the President, to be solely responsible for the spending of blood and treasure.

In laying out Maddow’s case, I want, first, to move directly past the Viet Nam War into the peculiar non-wars of Ronald Reagan and, second, to use the “Reagan Wars” as examples of the lingering Viet Nam Syndrome. The reason for skipping over the conduct of the war in Viet Nam is that this was an inherited war, with long, long roots back to the French Empire. After the Second World War, the tiny Asian nation wanted to be independent of the French who, after surrendering to Germany, were driven to retrieve their dignity by reclaiming parts of their “empire,” such as Viet Nam. The French dragged America into this dubious enterprise through blackmail: if America gave them military and monetary assistance, they would join NATO. And then the French were defeated at Dien Bien Phu in the spring of 1954. They withdrew and left America holding the bag, so to speak.

Viet Nam became an “American” war by circumstance, doubly damning the conflict as having nothing to do with “our” vital interests. Even though all Viet Nam wanted was national self-determination, as promised by American President Woodrow Wilson, the American government decided that this was the ground on which it would fight a proxy war against Communism. From 1959 to 1975 America fought a war that was never declared. Maddow recounts that, in an ill-considered desire to carry out the supposed wishes of the deceased President John F. Kennedy, President Lyndon Johnson slip-slided sideways into war through a draft of marginal young men. Privileged young men, such as future President George W. Bush, future Vice-President Dick Cheney, and future presidential candidate Mitt Romney, could receive draft “deferments.”

The point that Maddow makes, in laying out her argument concerning wars ordered at the whim of the Executive Branch, is that by the 1960s, in the midst of the post-war boom, it was politically unwise both to wage war and to mobilize the population for war. People did not want another war, not the kind of war that involved the entire population. In order to fight this new war, President Johnson sought recruits from the sons of citizens who had no political clout.

So from the first 3,500 combat Marines Johnson sent ashore near Da Nang on March 8, 1965, to support the first sustained bombing of North Vietnam to the 535,000 American troops who were in Vietnam at the end of his presidency, something like 1 percent would be Guard and Reserves. The active-duty armed forces shouldered the burdens of Johnson’s land war in Asia—fleshed out by draftees, chosen at random from among the ranks of young American men who were unable or unwilling to get themselves out of it.

A dangerous step had been taken—fighting the wrong war with the wrong, or unwilling, people—all in the name of an abstraction: the Cold War. Unfortunately for President Johnson, television had been invented, and Americans indicated strongly that they did not want to send their children off to foreign wars, nor did they wish to see nightly battles on television. So for future presidents the problem would be compounded: how to go to war with the minimum number of soldiers—no need to call attention to the fact that wars are fought by real people—and with as few witnesses as possible, all the while achieving maximum glory. And here is where Ronald Reagan rode to the rescue with the solution to the problems Lyndon Johnson had left behind.

The Viet Nam War ended in a humiliating defeat for America. The greatest nation in the world had to withdraw ignominiously from an inglorious conflict that had been fought to make a political point to an opponent who was never present. The “manhood” of America had been emasculated, damaged by a guerrilla force impervious to traditional warfare and offended by the occupation and division of its nation by colonial masters. Instead of studying the experience of the war and coming to the realization that the myth of American isolationism could make an excellent reality, President Ronald Reagan wanted to help America “man up.”

The Reagan Solution

However, Reagan was thwarted by a belated law passed by a chastened Congress, a law to curtail adventurous presidents and to limit their War Powers. As Maddow describes it,

The War Powers Resolution of 1973 was an imperfect law. But by passing it, the legislative branch was putting the executive on notice—it no longer would settle for being a backbencher on vital questions of war and peace. If the president wanted to execute a military operation (any military operation), he had to petition Congress for the authority to do so within thirty days; if Congress didn’t grant explicit authorization, that operation would have to end after sixty days by law. The Oval Office would no longer have open-ended war-making powers.

Rather than putting an end to the unfortunate “foreign entanglements” that George Washington warned of, the War Powers Resolution became an obstacle for annoyed Presidents to overcome. At this point, Maddow begins to describe how one president after another strove to wage war by other means. Reagan’s answer to the Resolution was to order strange little “interventions,” tiny wars waged on defenseless territories. Reagan had been considerably boosted in his presidential aspirations by his contention that America should reclaim the Panama Canal. The fact that his jingoism, as Maddow puts it, struck a nerve with many Americans suggests that the post-Viet Nam War syndrome—the shame of defeat—had become, years later, a national mood.

Once he became President, Reagan immediately began building up the military. To the end of his Presidency, he dreamed of a fantastical mirage of the conquest of space with a weapon called “Star Wars.” Indeed, there was always a strange and surreal aspect to Reagan’s military adventures: he ran when attacked and attacked when there could be no reply. As Maddow explains, Reagan seemed to lack the ability to separate rhetoric from reality, and it appears that he actually believed that America had “lost” the Panama Canal and that it was necessary to invade Grenada and then to attempt to overthrow the government of Nicaragua with secret stashes of arms to the contras. War under Reagan became a curious mixture of secrecy and public relations.

Maddow lays out how the Reagan administration worked very hard to write a metanarrative that was both Teflon and atomic: it was an untouchable story and it would have a long half-life. The untouchable narrative was that America had to be Number One and that it had enemies everywhere. Therefore, regardless of facts to the contrary, or regardless of the lack of facts, America was ringed with enemies and in constant danger. From today’s vantage point, the paranoia of the Reagan years seems predictive: a Republican administration frightens the American people with a threat that does not exist, calls those who dare to bring facts to the table “Communist stooges” and what have you, and ignores the impact on those outside of America who are observing these antics. As Maddow writes,

The Soviets put their own intelligence services on high alert, watching for any and every sign of American military movement. And their ambassador to the United States, Anatoly Dobrynin, who spent much of his adult life in Washington, was gently passing the word to his bosses in the Kremlin that Reagan really did believe what he was saying. Dobrynin later wrote in his memoir that “considering the continuous political and military rivalry and tension between the two superpowers, and an adventurous president such as Reagan, there was no lack of concern in Moscow that American bellicosity and simple human miscalculation could combine with fatal results.” In 1983, when fear at the Kremlin was at an all-time high, the Reagan administration was more or less oblivious to it.

The dangers of this story with a long half-life and of this myopic inward vision are apparent. Clearly, Reagan believed everything he was told (he apparently neither read daily briefings nor spent much time in the Oval Office), and clearly he was playing to a domestic audience for political purposes. Otherwise, why, out of all the nations in the world, invade Grenada? Maddow writes in an ironic, sprightly style that, in certain contexts, can be somewhat disconcerting, but here, in her description of the Battle of Grenada, excuse me, Operation Urgent Fury, the amused, detached tone of near-parody is perfect. The trick the Reagan Administration needed to pull off was both to keep this Operation a secret and to convince the nation that a small group of American medical students was being threatened by an evil Latino dictator.

The story of Operation Urgent Fury reads like a script from the Keystone Cops. It would be a funny story, except for an earlier event that would prove to be prophetic:

On the morning of October 23, 1983, a suicide bomber drove a truck containing six tons of explosives and a variety of highly flammable gases into the US Marine barracks at the airport in Beirut, Lebanon, killing 241 soldiers there on a don’t-shoot peacekeeping mission. Fourteen months into the deployment, and after an earlier suicide bombing at the US embassy in Beirut, Reagan was still unable to make clear to the American people exactly why US Marines were there.

The answer to an unanswerable attack in Lebanon was to invade Grenada and to save medical students from Fidel Castro. Except that, according to Maddow, “Fidel Castro knew about the invasion well before the Speaker of the United States House of Representatives.” Not only had the rescue teams not bothered to locate the students, who were scattered in various locations, but also, in Maddow’s words, “The chancellor of the medical school had already been telling reporters that their students hadn’t needed rescuing.” Indeed, some students were left behind, never to be “rescued.” But never mind, America was getting its macho back, and the public’s attention was diverted from the 241 American deaths in Lebanon. The Administration thereby took its eye off a very significant ball, the Middle East, to gaze southward toward Latin nations, where Communism was supposedly fomenting revolution at America’s very doorstep.

Although Congress was not pleased with Reagan and slapped his (now popular) hand, these unilateral actions continued under Reagan’s not-always-certain management. Maddow quotes the Speaker of the House, Tip O’Neill:

“He only works three and a half hours a day. He doesn’t do his homework. He doesn’t read his briefing papers. It’s sinful that this man is President of the United States. He lacks the knowledge that he should have, on every sphere, whether it’s the domestic or whether it’s the international sphere.”

The Iran-Contra experience is now a matter of history, and it is still unclear who was in charge or whether Reagan was in the grip of Alzheimer’s. What is certain is that the “victory” in Grenada gave the President a sense of entitlement, and he was determined to have another war in Nicaragua. As Maddow states,

Reagan was convinced that a president needed unconstrained authority on national security. He was also convinced that he knew best (after all, he was the only person getting that daily secret intelligence briefing). These twin certainties led him into two unpopular and illegal foreign policy adventures that became a single hyphenated mega-scandal that nearly scuttled his second term and his legacy, and created a crisis from which we still have not recovered. In his scramble to save himself from that scandal, Reagan’s after-the-fact justification for his illegal and secret operations left a nasty residue of official radicalism on the subjects of executive power and how America cooks up its wars.

In order to have his war and eat it too, Reagan and his sidekick, Oliver North, privatized this little war, which was funded through wealthy (Republican) donors and the Saudis. This unlikely enterprise—too strange to unwind here—came undone, and the clear illegalities were exposed to withering investigations. As Maddow summed up this misadventure of Ronald Reagan,

Even before all the indictments and the convictions of senior administration officials, Reagan’s new way—the president can do anything so long as the president thinks it’s okay—looked like toast. In fact, Reagan looked like toast. Whatever his presidency had meant up until that point, Iran-Contra was such an embarrassment, such a toxic combination of illegality and sheer stupidity, that even the conservatives of his own party were disgusted. “He will never again be the Reagan that he was before he blew it,” said a little-known Republican congressman from Georgia by the name of Newt Gingrich. “He is not going to regain our trust and our faith easily.” The president had been caught red-handed.

However, due to the wondrous alchemy of Republican spin, “Reagan could be reimagined and reinvented by conservatives as an executive who had done no wrong: the gold standard of Republican presidents.” Maddow goes on to describe and recount the further adventures of the Presidents who came after Reagan. Reagan laid down not just a gauntlet to a meddling Congress but also a path to Executive Power over the military. The key was not to wage war but to send out the troops. The problem was that the Draft had been eliminated, and the President had to use a professional or volunteer army and the National Guard or the Reserves. It is interesting to note that the liability of not having a large standing army was now an asset. A small but flexible force, especially when combined with an international force, as in the Balkans and in the Gulf War, enabled the President to send out a focused force without “waging war” and without declaring war.

Once Reagan had established the (specious) “legal” precedent that the military was the President’s tool, there was no check to balance this power. As Maddow states,

Congress has never since effectively asserted itself to stop a president with a bead on war. It was true of George Herbert Walker Bush. It was true of Bill Clinton. And by September 11, 2001, even if there had been real resistance to Vice President Cheney and President George W. Bush starting the next war (or two), there were no institutional barriers strong enough to have realistically stopped them. By 9/11, the war-making authority in the United States had become, for all intents and purposes, uncontested and unilateral: one man’s decision to make. It wasn’t supposed to be like this.

I have been moving through Maddow’s book, or drifting through her arguments, by trying to set up, step by step, the trajectory toward waging small but satisfying wars somewhere else, with a tiny number of military personnel, at low psychological cost to the public, and with high pay-offs in bragging rights. I think that Maddow is correct to locate the starting point of the rise of executive power over war in the Cold War and its ambiguities. That said, during the nineteenth century there was also a long history of expansion and empire via military campaigns that were informal “wars.” The lack of large and formally declared wars led to the misleading myth of America rousing itself only when necessary, overwriting a longer and more complete story that was actually laced with combat.

The Two Wars of the Bushes

In order to get around the pesky “Viet Nam Syndrome,” or the reluctance on the part of Congress to venture into pointless and costly wars, Reagan had solved one problem by seizing the power to put troops in the field, and solved the problem of cost by financing the action with a deficit: fight now, pay later. But Reagan’s wars, in and of themselves, were dubious and unsatisfying. What America needed was a “real” war, something that would wipe out the stain of defeat in Viet Nam, and when Saddam Hussein invaded the very small and very rich nation of Kuwait, the opportunity to re-masculinize presented itself. After a long and winding wrangle with a recalcitrant Congress, President George Bush put together an international coalition to drive Saddam out of Kuwait.

Thanks to Reagan, Bush felt that he could call up an army without consulting Congress. While Congress complained, Bush and the Chairman of the Joint Chiefs of Staff, Colin Powell, planned. Powell, a veteran of the Viet Nam fiasco, had his own theory of the case on how to fight a war—with deep preparation and with overwhelming force. As Maddow explains,

Powell wanted an overwhelming, decisive use of force to meet American military objectives clearly and quickly. The whole Powell Doctrine of disproportionate force, clear goals, a clear exit strategy, and public support was designed to create a kind of quagmire-free war zone. He was unequivocal—he and his commander on the ground, Norman Schwarzkopf, had agreed: two hundred thousand more troops was what it would take. And they’d already made sure the president understood the numbers would go up if he decided he wanted not only to eject Saddam from Kuwait but to destroy his army, or to depose him. The mission objectives would have to be clearly defined before H-Hour. In any case, Powell and Schwarzkopf wanted five, maybe six, aircraft carrier task forces deployed to the Persian Gulf, which would leave naval power dangerously thin in the rest of the world. By the time the offensive capability was in place, about two months down the road, there would be something in the neighborhood of 500,000 American troops in the Middle East—nearly as many as at the high-water mark in Vietnam. Two-thirds of the combat units in the Marine Corps would be deployed in the Gulf. There would be no more talk of rotating troops home after six months. Soldiers had to understand they were in the Gulf until the job was done, however long that took.

This was the famous “Powell Doctrine,” which was designed to guarantee success. And it worked magnificently in the Gulf War, resulting in a great victory over an inept foe in a truly stupid war that ended in a graceless slaughter along the Highway of Death. Only after a long and protracted fight did Congress agree to go to war. According to Maddow, Congress objected to fighting a war in which American interests were not directly involved, but Congress was also disinclined to accept the consequences of not saving Kuwait. The Bush Administration fought a successful war, and Kuwait, a nation that circumcised its women, was restored to its (male) owners, but there were hidden costs for the future. The jumping-off point into Kuwait was Saudi Arabia, and that meant that, to one very indignant man, infidels were on sacred soil. Osama bin Laden would wait a decade to take his revenge.

Since the “good” Gulf War was fought with Reserves, it was fortunate that the engagement was, thanks to Colin Powell, a short one. But in this short amount of time, certain rules of engagement were laid down—not for the enemy but for fellow Americans. The Viet Nam War had run into trouble as much at home as in the field because it was the first war since the Civil War that was uncensored. The military would not make that mistake again. The Gulf War was stage-managed, information was controlled and doled out, and press and public were placated with video games of the “smart bombs” over Baghdad. As Maddow writes,

Our military dazzled. The First Gulf War was all Powell could have hoped for: a clear mission, explicit public support, and an overwhelming show of force. It was fast—the ground assault lasted just a hundred hours, the troops were home less than five months later. It was relatively bloodless for the away team—fewer than two hundred American soldiers were killed in action. It was cost-effective—happy allies reimbursed the United States for all but $8 billion spent. And it was, withal, a riveting display of our military capability, almost like it was designed for TV. Americans, and much of the world, watched a Technicolor air-strike extravaganza every night. The skeptics were forced to stand down; our military had proved beyond doubt or discussion that we were the Last Superpower Still Standing.

But for longer missions, the Reserves and the video games would not be enough to placate the public. Thanks to Reagan, there was no serious thought given to balancing a budget, and the military was given whatever it needed or wanted or desired. But aside from boys and their toys, supporting an adequately sized volunteer army was proving to be a very expensive proposition. The military had always supported itself. A young man could enlist or be drafted and find himself not fighting but doing laundry or providing food or doing mechanical work. For every combat fighter, there were a dozen or so working in the support systems, as engineers or office workers.

Once the military Draft was ended in 1973 under Richard Nixon, the armed forces became all “volunteer.” At the time, those who were opposed to the Draft complained of “opportunity costs,” or the economic losses incurred by middle-class white males, now likely to have the prospect of high salaries during the post-war boom. Once the white males moved out of the way, males of color could raise themselves socially and economically by volunteering for the military, where new “opportunities” could be found. Those who were opposed to the end of the Draft felt that the ethnic and social mixing that occurred in the military knitted America into a whole nation instead of a divided country. There was some discussion of patriotism and service to the Flag, but the urgent voices of disgruntled white males had to be heard.

Twenty years later, the all-volunteer army was an excellent career choice, but only certain demographic groups took advantage of what the government was offering: young men and women of color and young men and women from the South. The rest of the youth were not interested. The result of these very different life paths would have consequences that would take another twenty years to play out. In the short run, there was the sheer unexpected cost of maintaining a large and long-term military full of careerists and their families. As opposed to the draftees, these “volunteers” did not cycle out after a couple of years; they stayed, got married, and raised families. Each soldier could easily have three or more dependents living on the base and needing care and feeding.

Maddow brings up a very interesting point about the sheer financial scale of the obligations the government takes on when it commits to a Volunteer Army. The cost of maintaining soldiers and their spouses and children and all the attendant services is huge. As Maddow explains,

In the ten years after 1985, the procurement budget had dropped from $126 billion to $39 billion and represented a paltry 18 percent of total defense expenditures. Sure, the active-duty force had been pared by nearly 30 percent and a few bases had been closed, but that didn’t come close to solving the problem. How were we supposed to ensure our Last-Superpower-on-Earth superiority when just the overhead cost of keeping our standing army milling around was swallowing between 40 and 50 percent of the Pentagon’s annual cash allotment?

The problem was solved by a now familiar term: “outsourcing.” On the one hand, it is more expensive to privatize, and the corruption when private companies take the place of military personnel is vast, unchecked, and continues today unabated. On the other hand, outsourcing can be a very good thing, as Martha Stewart would say, because one can outsource actual soldiers. If one outsources soldiers, not just food services, then the President who is in charge of deploying the mercenaries is undeterred by such nuisances as Congressional approval. A corporation such as Xe assumes the risks and the expenses of the mercenaries, who are not eligible for Veterans’ benefits—hospitalization, education, legal protection—but who are paid accordingly with very high salaries that, unlike benefits, have end points. The government is off the hook, and the mercenaries can be charged with all kinds of illegal and dishonorable tasks, off the books.

Outsourcing began in earnest in the 1990s. President Bill Clinton was wise enough not to fight wars but to participate in peace-keeping missions, such as the one in the Balkans, where some kind of military presence needed to be in place for years. By the 1990s, the problem of going to war was solved, and it was now easy to avoid the skepticism of Congress, the suspicions of the American people, and the high cost of casualties. As Maddow explains,

President Clinton never really expended much effort on the politically costly task of convincing the American public of the need to arm the Bosnians or Croatians, or the need to unleash American air power on Miloševic and the Serbs, or the need to put US boots on the ground. Instead, he found a way to do something without the necessity of making any vigorous public argument for it, and without much involving his own balky Pentagon…So it was soon after the peace accords were signed that those twenty thousand American peacekeepers—who would be joined by twenty thousand private citizens under contract to provide support services—arrived in Bosnia and Croatia as part of an international force to keep Miloševic and his Serbian military under heel. And did Clinton have a hard time selling that manpower commitment to the American people? He did not. He was helped greatly by—what else? Outsourcing.

The civil war and the genocide in the former Yugoslavia needed to be quelled and then order had to be restored, a process that took years. Private Contractors, as these mercenaries were then called, made their first appearances in the Balkans. The consequences of the decision to privatize were disastrous, as Maddow says,

…the acute and lasting problem was that they cut that mooring line tying our wars to our politics, the line that tied the decision to go to war to public debate about that decision. The idea of the Abrams Doctrine—and Jefferson’s citizen-soldiers—was to make it so we can’t make war without causing a big civilian hullabaloo. Privatization made it all easy, and quiet.

By the time President Barack Obama inherited two wars, one in Afghanistan and one in Iraq, the private contractor was a fixture in the American military. During the second Iraq War, under the second President Bush, the ratio of Reserves on active duty to Private Contractors/Mercenaries was one to one. When the American public is told how many men and women are on active duty in these two war zones, that number should be doubled: in terms of troops in the field, the actual force is twice as large as we are told. Unfortunately, the troops and the mercenaries alike are unsuited to the task of “nation building,” of modernizing and westernizing a Medieval culture that has no history of democracy or equality.

Into the Cauldron

By the twenty-first century, reasonably good excuses had to be given for rounding up the Reserves, and one had to attend to public relations; “nation building” or “bringing democracy” to benighted places seemed to be worthy causes. The invasion of Afghanistan, a barren land suitable only for the breeding of war and poppies, should have been short-lived once the objective had been obtained—to drive Al Qaeda out of Afghanistan and to kill or capture the architects of the “attack on America” of September 11, 2001. The conceptual problem was that this “objective” or goal was not a “victory,” and the second Bush administration cast about for an alternative war on better terrain, where a good old-fashioned war could be fought.

Perhaps in the distant future, psycho-historians will explain the psychology of launching a “preemptive war,” also known as the “Bush Doctrine.” The invasion and occupation of Iraq was a strange and surreal event, too familiar to be retold here, but one element remains intriguing—the willingness not just to lie but to create an alternative reality. In contrast to the Cold War, which has been deemed a simulacrum of a war, the Iraq War was a real war fought for fictitious reasons in the fevered mindset of a neo-con fantasy. As with the Reagan administration, it is unclear whether the major players actually believed their own rhetoric, whether they actually inhabited the alternative universe they had created out of whole cloth, or whether, for unknown reasons, they simply wanted to send men and women off to kill other men and women on a whim.

Experience suggests that it is futile to argue with alternative universes, and no manner of proof to the contrary will convince the perpetrators otherwise. But what the Iraq War does demonstrate is another step towards executive capriciousness. The second Bush Administration proved to be incapable of governing, and the energy of the government was wholly swallowed up in dreams of glory. Maddow suggests that we have now reached the point where the Executive Branch is nearly unchecked and where, thanks to generous Republican (deficit-fueled) spending on defense, the military has taken on a life of its own, regardless of need or of real conditions on the ground.

A fact that’s underappreciated in the civilian world but very well appreciated in our military is that the US Armed Forces right now are absolutely stunning in their lethality. Deploy, deploy, deploy … practice, practice, practice. The US military was the best and best-equipped fighting force on earth even before 9/11. Now, after a solid decade of war, they’re almost unrecognizably better. Early worries such as how much gear we were burning through in Iraq were solved the way we always solve problems like that now: we doubled the military’s procurement budget between 2000 and 2010.

Obama Country

President Barack Obama won the office partly on “hope and change” and partly because he was against “dumb wars.” He inherited two dumb wars and virtually unchecked Executive Power to go to war. Obama is no cowboy. A thoughtful man, he is an intellectual with an analytic mind, and it seems that somewhere along the line he has gently and silently slipped the nation into the new century. As the Obama administration is demonstrating daily, the way in which President George H. W. Bush waged war was old-fashioned and outmoded, a nineteenth-century idea of fighting with twentieth-century weapons.

To return to a point I made earlier, if the starting point is the “good war,” the Second World War, then the post-war dream is already an outmoded one, a dream of “victory” and “glory” and “winning.” These terms, in the twenty-first century, are without definitions. Even the Powell Doctrine, invading with maximum force, only gets you so far—into the territory—but does nothing for a long occupation and is a hindrance when it is time to get out. And the Powell Doctrine was totally disregarded when the Bush Administration decided to invade Afghanistan and Iraq.

The Iraq War was a horribly expensive war, fought on the cheap in terms of the number of troops deployed. While bending to public disapproval of the unnecessary war in search of Weapons of Mass Destruction, the Pentagon kept the number of Reserves low but augmented them with Contractors. Iraq is a huge territory that did not want to be invaded or occupied, and the shoestring forces could not control the reluctant population. The major objective when waging an unpopular war, justified in a variety of confusing and conflicting ways, is to win that war. But to do so, the Powell Doctrine must be put into play, an impossibility if the war is a “War of Choice.”

Maddow does not spend much time on the fiasco of the Iraq War, already ably covered by other incredulous historians, but she notes that

By 2001, the ability of a president to start and wage military operations without (or even in spite of) Congress was established precedent. By 2001, even the peacetime US military budget was well over half the size of all other military budgets in the world combined. By 2001, the spirit of the Abrams Doctrine—that the disruption of civilian life is the price of admission for war—was pretty much kaput. By 2001, we’d freed ourselves of all those hassles, all those restraints tying us down.

Iraq and Afghanistan, of course, did not go well. The British, who had tried to contain Iraq in the 1920s, and the Soviets, who had tried to control Afghanistan in the 1980s, could have warned the deaf Americans against their ridiculous quest. No amount of time or effort could bring about a “victory” or a “success” in these ancient lands. As if to test the Neo-Conservative assertion that these wars could be won with more troops (remember that the actual number of soldiers is double what we are told), Obama conducted a “surge.” In male military language, a surge is an increase of personnel for a limited period of time; the hope is to stabilize the situation long enough to get out of Dodge. Obama’s surge allowed America to save face and taught the President that surges are futile. To ask for a surge is like asking for the price in a fancy boutique: if you have to ask, you can’t afford it; if you have to surge, you’ve lost the war.

Quietly, Obama took the advice of his Vice-President, Joe Biden, to use commandos instead. And this is where the book ends. Maddow makes the point that every step along the way disconnects “war” from national responsibility and democratic participation. As Obama pulls out of the Twin Wars of Bush’s devising, he is escalating the ultimate dislocated war, a War of Drones waged by the CIA, augmented by occasional strikes by elite Special Forces. The Administration has a supposed “secret kill list” of those who are to be removed through long-distance strikes, and the rules of engagement are unknown. Congress is kept in the dark about the details, but the benefits are clear.

First, the President and the CIA and a small portion of the military can operate at will. They are not engaged in a war but in a program of planned assassinations, designed to take out the leaders and discourage the followers. Compared to a large number of “boots on the ground,” the Drone Program saves lives and money, blood and treasure. The result is the Ultimate Video Game. As Maddow explains it,

When one of those Blackwater-armed drones takes off with a specific target location programmed into its hard drive, it is operated remotely by a CIA-paid “pilot” on-site, in a setup that looks like a rich teenager’s video-game lair: a big computer tower (a Dell, according to some reporting), a couple of keyboards, a bunch of monitors, a roller-ball mouse (gotta guard against carpal tunnel syndrome), a board of switches on a virtual flight console, and, of course, a joystick. Once the drone is airborne and on its way to the target, the local pilot turns control over to a fellow pilot at a much niftier video-game room at the CIA headquarters in Langley, Virginia. The “pilot,” sitting in air-conditioned comfort in suburban Virginia, homes the drone in on its quarry somewhere in, say, North Waziristan. Watching the live video feed from the drone’s infrared heat–sensitive cameras on big to-die-for-on-Super-Bowl-Sunday flat-screen monitors, the pilot and a team of CIA analysts start to make what then CIA chief Leon Panetta liked to call “life-and-death decisions.” Maybe not sporting, but certainly effective.

According to an article by NPR, the local pilots are required to wear uniforms, and there are programs to help these people cope with the aftereffects of frequent killing, even at a distance. Maddow’s concern is that the dislocation between the decision-making process and the public, and the distance from the moral responsibility of waging war, make it easy to remain in a state of constant conflict without any accountability. She is concerned that the breakdown is between Congress and the President, but I think that there is another trajectory that also needs to be looked at—the increase in distance between the target and the triggerman.

The real question might be another kind of separation, one that dates back to the bombing of civilians in the 1920s. When these bombings first occurred, there was little concern, because the victims were in Iraq and Ethiopia. Only when Europeans were assaulted at Guernica did any outcry occur, but these moral qualms vanished, and within ten years the Allies had firebombed Dresden, Hamburg, and Tokyo and had dropped two atomic bombs on non-military targets in Japan—all on civilians.

The ethical aspects of killing helpless human beings were wiped out by the blanket assumption that the populations of Germany and Japan were complicit in the Second World War. The rationale for these civilian bombings was that the morale of the people had to be broken. Studies after the war suggested that such bombings, like the bombing of London, were not effective either in lowering morale or in slowing wartime production, but it was hard to break the spell of cost-free, effective aerial warfare.

In fact, Powell had dissuaded Clinton from attempting to settle the Serbian conflict through bombing. Maddow quotes Clinton assistant Nancy Soderberg, who reported that Powell had advised, “‘Don’t fall in love with air power because it hasn’t worked,’ [he said]. To Powell, air power would not change Serb behavior, ‘only troops on the ground could do that.’” Indeed, the Second World War was won on the ground, in a long, slow, and deliberate drive to capture and hold territory. In the end, the most effective bombings were the two atomic bombs dropped on Hiroshima and Nagasaki. However, the second Bush Administration was still enraptured by air power and treated the helpless and blameless Iraqis to “shock and awe” in 2003…again to no avail.

Wars in the Mideast were quite different from wars in Europe. These new wars were asymmetrical: tribesmen with a cache of modern weapons against a large contingent of well-armed twenty-first-century warriors who became mired in what was part of an ongoing tribal conflict. Even though America was convinced that it was fighting a “War on Terror,” the nation was confronting an old culture that was fighting against modernism or modernity. In addition to fighting unwelcome change and colonialism from the outside, these tribes were fighting each other for religious reasons that were unclear to Westerners. But however sectarian these local issues, America is committed to fighting a condition that has been named a “War” in order to give the American public a framework through which to “read” the traumatic “event” of September 11th.

Obama has definitively changed the way in which this non-war is waged. The troops are coming home, while the Drones carry on the killing. If we follow this line of thinking—kill at a distance—from the bombing of Dresden to the Drone attacks on terrorists in Pakistan, the two points are certainly connected. What remains unclear, even in Maddow’s book, is why a President would want to take sole responsibility for body bags, ours or theirs. Drift seems to imply that one President after another “drifted” into taking more and more power because they could, because there was no power capable of stopping them. As the wars became more and more arbitrary, from Viet Nam to Iraq, the personal responsibility grew greater, and, as Johnson and Bush found out, the judgment of history can be harsh for those who wage war unsuccessfully and for no good reason.

But if the costs in blood and treasure are relatively low, as with the secretive Drone Wars, then power shifts decisively toward the Executive Branch. If “war” is redefined as tracking down designated targets on a “kill list,” then the ostensible cost of war goes down, as does the size of the military. If Drone attacks can do the job of people, then the need to attack or invade or occupy should diminish. The public will be happy to allow this kind of invisible war to continue, no questions asked. No more flag-draped coffins. Maddow ends her book with a list of problems that need to be solved, what she calls a “to do list.” Most of the points on her list, concerning going to war, the role of citizen soldiers, privatization, and the disposal of nuclear weapons, will resolve themselves within a few years.

Two of her objections—the “secret” Drone Wars and Executive Power—are here to stay and are the future of war: a President in the Situation Room waiting for the outcome of a covert operation by a team of SEALs or for a report on a strike on a target thousands of miles away. If we accept the “necessity” of dropping an atomic bomb on Nagasaki, how can we complain about a single Drone strike on one person? If we want to balance the budget, then how can we not accept this cheap and reliable manner of taking the war to the terrorists? If we could go back in time and assassinate Osama bin Laden, would we do it? If so, then targeting other individuals before they do their worst is a moral act.

Although such strikes now come under the auspices of the CIA and are “secret,” based on “intelligence” that the public and Congress do not know, Rachel Maddow ends hopefully,

We just need to revive that old idea of America as a deliberately peaceable nation. That’s not simply our inheritance, it’s our responsibility.

I wish I could agree with her hopeful assessment. America has not been a “deliberately peaceable nation,” and we decidedly do not want to take responsibility for these new wars. I was shocked to learn that one of my former art students has become a Drone Pilot. Happy and satisfied in a military career, he is in charge of sorting out the designated target from innocent civilians, and he is convinced that these assassinations save money and lives. Which is the more moral position—to send thousands of men and women off to die, or to quietly kill the “terrorists” identified by “intelligence”?

This could well be a question that we will never be asked in any formal way. While there are those who question the Drone War, the real Drift is away from taking collective responsibility. War becomes the province of the President, who wages it in secret, and we may be told from time to time of its casualties. This is the future.

Unlike the other art forms invented out of modern technology, film has remained stubbornly entrenched in its pre-industrial heritage. Even though the technology of “moving images” allowed for a wide range of artistic experimentation, early “movies” re-presented the theatrical experience and borrowed from painting its gestures, postures, and poses, the vocabulary of visual communication. Trained on the familiar, movie audiences expect to have their disbelief suspended, and that suspension rests upon the ability of directors and actors to create a new reality. Given that making movies is a business, those demands have shaped the history of film, preventing the kind of growth and development that has changed other art forms. The “movies” have been mired in the late nineteenth century; it is now the beginning of the twenty-first century, and still mainstream film stays the same. If film is to “progress” or change, any experimentation must take place outside of the commercial world, and any advance of film as an art form rests in the hands of artists.

Crafted by the Berlin-based photographer and filmmaker James Higginson, Willful Blindness is part of the sub-culture of “art films,” where the “consumer” does not exist and where the art audience wants change and innovation. Higginson comes out of a history of experimental art films in the tradition of Bruce Conner’s A Movie and Andy Warhol’s Empire. Conner started with the idea that a strip of film has rows of cels or square pictorial units, each of which is filled with or contains a single image. But Conner challenged the assumption that these strips had to flow seamlessly from one segment to another; he took the concept of montage or editing and spliced together found footage to subvert and disrupt the need of movie audiences to have a “story.” Warhol, conversely, eschewed editing altogether in Empire by reducing “filming” to its most basic essence—pointing the camera at an object, in this case the Empire State Building, and turning the camera on. For eight hours the camera hummed, the sun traversed the skies, weather arrived and departed, and the building remained unmoved. Like Conner, Warhol was also playing with attention span and the process of looking, seeing, and watching, in an attempt to reinvent or de-invent “film.” This de-invention, or deconstruction, of film means to strip the moving image of its overgrowths of “movie” conventions.

Like these artistic pioneers, Higginson starts with the premise that the medium of recording movement has its own inherent (but changing) properties and that the “movies” have ignored the possibilities of what can be done with camera and film. One of the tropes of “going to the movies” is the dream. When entering the theater, we leave the real world of sunlight behind and enter into a cave where flickering images are projected onto a screen. Frozen in a private dream, we sit and gaze raptly, as if watching our own dream. Afterwards, we wake up, walk out of the dark, and reemerge into the ordinary, which announces itself as a place of light. An award-winning film, Willful Blindness moves back and forth between dream and reality, between the present and the past, by borrowing the semiotics of light and dark—that which is well-lit is the outside of the Real and that which is dark is the inside of Desire.

A canny and aware filmmaker, James Higginson deploys his film tools with the mastery of a mature artist. While Conner and Warhol used black and white film in their classic experiments, Higginson works with color, but his color pays homage to the black and white history of movie making, with bleached and grayed-out tones intercut with slashes of jarring red. These are the main contrivances that Higginson wields—the unparalleled ability of the camera to stare, the post-filming intervention of montage (cutting and pasting), and the historical role of color. In using color as mood and atmosphere, Higginson evokes other film artists who somehow ventured into the mainstream using color artistically, such as Todd Haynes in his homage film Far From Heaven (2002).

To concentrate on the plot of Willful Blindness is to miss the point of this film. The story and the action are really a conceptual play with the properties of film. Higginson plays with two elements of filmmaking, both often overlooked: the fact that one looks at a movie and, conversely, the fact that the film conceals as much as it reveals. Willful Blindness begins with an act of enforced watching, deliberately suggestive of the determined ennui of Empire, except that something is actually happening, unfolding in successive waves. The viewer is brought to earth, forced to the pavement as the camera drags along the ground. Someone—male or female—is crawling, putting one hand in front of the other, dragging an unseen body along behind. All we see are the hands, reaching outward for purchase.

Here, Higginson takes up one of the most overlooked characteristics of the movies—the ellipsis, or that which is left out and not seen. Usually the ellipsis is used to move the story forward: rather than showing the character walking from one place to another, the director will end the scene and begin a new one. The significance of this lack or empty space in the action is that the viewer mentally fills in the gap. When the viewer sees the grasping, reaching hands, s/he enters empathetically into the action, even inhabiting the invisible body of the actor, who is an obvious victim of some terrible event. Higginson takes the notion of “economy” in art to extremes, showing a difficult and complex set of actions, dragging oneself along a city sidewalk, with only the barest of suggestions.

Conveying extreme effort, Higginson works against the forward movement, however labored and difficult, not by looping the film but by seeming to overlap the progress: one step forward, two steps back. The great effort of the crawler is repeatedly impeded but not prevented, adding layers of frustration for the viewer. Higginson makes the watcher watch. There is no way to intervene or help. He makes the viewer suffer along with the wounded protagonist; the film deliberately drags, mimicking the painful scraping of the hands on the rough pavement. The irritation of this prolonged scene counters the way in which mainstream movies quickly “establish” the first act for the impatient audience.

Playing with the conventions of slow motion and the undeniable advance of a strip of film through the sprocket, Higginson considers the very concept of “pace” in a movie. In contrast to the slow sequence are the recurring brisk and rapid actions of a woman walking in bright red, very high heels—pace personified. Once again we are on the ground; once again we cannot see the body, only the feet and those shoes, moving fast with purpose. And these red shoes—baleful and malevolent, intimating violence—are the mirror images of the victim’s slow, hurt hands. These are perpetrator shoes, quickening the processional pace of the film, reassuring the viewer that a story has a beginning, a middle, and an end, that it moves forward and comes to a terminus. The engine of the film is the determined red heels, but where are we going?

Early on, Higginson warns the viewer: he will give color and he will take it away. Color, for this filmmaker, conveys both life and death. The red heels are vibrant and full of life, but they are as red as blood; they predict and forebode. The hands are drained of color, and the environment is emptied of life as if by a vampire. Willful Blindness is a dark and black film, without daylight, without bright color. Often the viewer is blind in that it is difficult to see, thwarting the very purpose of the movies: watching and looking. The movie lights turn on only when the red heels appear. But Higginson not only keeps the viewer in the dark, so to speak, but also refuses to bow to the main demand of movie making—explaining to the viewer what is going on. He keeps us willfully blind and pertinaciously mires us in the dark, as if to trap us in a nightmare.

The red heels are the parentheses of Willful Blindness, the film’s alpha and omega—its beginning and its end. They belong to a traveling woman. At its heart, Willful Blindness is a canonical road movie in which the main character travels. This journey into darkness is punctuated with a series of incidents that occur along the way, perhaps connected, perhaps not. In between, Higginson investigates the most compelling aspect of the camera’s vision: voyeurism. Movie-making essentially splits between what society allows us to see, what is deemed desirable, and what society thinks we should be sheltered from, that which is forbidden. People come to the movies to see the forbidden—sex and violence, which always hover on the edge of pornography and unbridled bloodthirstiness. We enter into an imaginative place to give way to our most unsocial instincts, which are also our most basic and which, therefore, must be the most rigorously suppressed.

Higginson serves up hints of pornography and unsavory sex, but his real theme, resonating throughout his photographic work, is violence. Violence, in Willful Blindness, is private, closed, and secretive, taking place in some sort of twisted domestic setting. Willful Blindness is an excruciating journey into extremity, filling the viewer with dread. Along the journey, Higginson picks up and discards the old dead languages of traditional film—the German Expressionist style, the film noir of the crime story, pornography and gratuitous violence, as if searching for the right way to detonate an act of retribution. His reanimation of these old allegories is where the practical craft of editing, the cutting of unwanted or unnecessary scenes, becomes an act of slashing and hacking, and the film reaches its denouement.

The editing style, which deservedly won a prize, with its cropping of fragments and its slicing of the film into slivers, mimics Hitchcock’s famous shower scene in Psycho with its eighty-odd cuts. Higginson has moved beyond the literal metaphors of the master and dwells in the conceptual: he cuts the film—rapidly and repeatedly, implying and indicating terrible acts of violence. Suddenly color bleeds into the film, drenching it. For the viewer, dragged hand over hand into a nightmare composed of a web of images both beautiful and dreadful, this explosion of horror is a cathartic relief. We leave the cave of sublimated Desire, our need for revenge satiated.

Higginson was not content with deconstructing the givens of filmmaking; he rethought the role of sound as well. Sound, in a visual medium, is by definition the invasion of an alien other. In fact, when “talkies” took the place of silent movies, the purists objected. The technology of sound—talking, ambient noise, and music—totally changed the way in which movies functioned. The broad gestures inherited from painting disappeared and pantomime was replaced by dialogue. Interestingly, early silent movies were much more oriented towards action and activity than the films of the thirties and forties, which relied far more on actors talking to each other to move the plot along. But dialogue, along with sound effects, is “natural,” lifelike, an enhancement of the “reality effect.” Music, by contrast, is inherently unnatural.

It is with the music and the editing of sound that the viewer, who has been intensely interacting with the fabula, becomes most aware of Higginson as the orchestrator of the syuzhet. Suddenly, one is jolted into realizing that, contrary to mainstream film, there is no dialogue, no voice-over, not even subtitles. But it is not without sound. Once again the artist has pushed filmmaking back in time, to an era when the images had to stand on their own and the music stood in for human speech. Silent films were, in fact, not “silent” but were designed to have music accompany them. If the theater venue could afford it, an entire orchestra would do the accompaniment; if the theater was in a small town, a simple piano player pounding out the film score would suffice.

Although the sound design is by Higginson himself, working under the alias “Roberto Pelligrini,” with his assistant Maik Wolf, the music for Willful Blindness is a totally original score by Roland Hackl. Hackl is part of the European tradition of contemporary film music; like his colleagues and predecessors, Daft Punk and Tangerine Dream, he comes out of the techno music scene. Once on the fringes, techno is now mainstream but remains far more flexible in format and sound than established forms of popular music, such as rock ’n’ roll and blues. Techno has no history; it comes from machines that are also without history; its electronically generated, artificial sounds are mimicries of a new kind of “music.” Hackl has skillfully explored the in-between-ness of techno/music, its split personality, and its greatly expanded ability to evoke emotions in the audience and to intervene in the diegesis. In the hands of Hackl, the absence of the naturalizing effects of dialogue becomes an asset to be exploited, and music retakes its original role in film as a stand-alone experience, quick-marching the viewer to the determined denouement.

At the end there is a reentry into the light of reality, and the woman in the red heels strides purposefully towards her appointed task—something must be buried. Bizarrely, the world ignores all this activity, suggesting that, contrary to what we believed, we are still trapped in a bad dream. James Higginson takes the concept of film to its final limits—it is not the camera that is the projector; it is we ourselves, our minds, reaching out of the depths of repressed impulses to stream our darkest fears onto a helpless blank white screen. The screen is the world itself, the passive recipient of what the ancient Greeks feared most—the beast within all of us. We sleep, we eat, we mate and we kill; there is nothing else.

The Big Fix opens with the poignant observation that Louisiana is not a state; it is a colony of Big Oil. For over a century, Louisiana, an oil-rich territory, has been raped and pillaged and looted for its natural resources. Nothing is safe from the avarice of multinational corporations, not the land, which is a pincushion for oil derricks, not the sea, which is peppered with oil rigs.

Those natural resources include the Gulf of Mexico, which supports a huge fishing “industry”—a dangerous oxymoron if there ever was one. Louisiana provides one third of the nation’s seafood and, after the oil companies, fishing is the second-largest source of employment in the state/colony. The third-largest employer is the tourist industry. Fishing and tourism depend upon good weather and upon a natural environment that is pristine and respected. But the oil industry cares nothing about nature or human beings. The coexistence of fishing, tourism, and oil depends entirely upon luck…and the competence of the oil companies and their commitment to public safety…which is to say, the citizens of Louisiana are gambling.

When that luck runs out, nature ultimately loses. Overfishing can force fisher folk to pull up their nets; tourists can carelessly toss their trash and deface scenic beauty; but oil is inherently poisonous and dangerous. Only oil can destroy nature, probably permanently. On April 20, 2010, an oil rig rented by the oil giant BP from Transocean exploded, killing eleven workers; with grim irony, the rig sank into the Gulf two days later, on Earth Day. The resulting gush of oil, unchecked for three months, destroyed the fishing community, polluted the Gulf of Mexico, and contaminated the local seafood. But not to worry. The pirates and parasites, also known as the corporate colonizers, were also the masters of the Big Fix, also known as the Big Payoff.

Everyone’s palm was greased, everyone grabbed at the cash, everyone took the silence money and everyone agreed to be paid off. The punch line of this sad tale of a Lost Colony is that the colonists really do not want to be fixed. The inhabitants of the oil territory demand more drilling and the fisher folk are willing to distribute poisoned fish and shrimp to unsuspecting Americans.

Profit for the corporate colonizers and financial survival for the colonists trump any moral or ethical concern for innocent fish, fowl, dolphins, wildlife, and people. The result, The Big Fix shows, was a complete lack of values of any kind on the part of all the participants; the protests of the righteous victims of the oil spill were drowned out, and the world moved on. By any standard, what happened in the Gulf of Mexico that spring was an American Tragedy.

A tragedy is like a crisis, and in the case of the BP oil spill, crisis and tragedy came together. Like a crisis, a tragedy is a long time in the making; it lies in wait until all the parts come together in a sort of cosmic inevitability. For a hundred years, Louisiana has been dependent upon oil like a junkie on heroin. Only Huey Long understood that the oil companies are also dependent upon the state, and he demanded that they pay the residents for the resources that bring so much profit. The Big Fix points out that Huey Long was assassinated in 1935 and that, to this day, the murder has never been solved. His death ended whatever resistance could be mustered against Big Oil, and the state simply swooned into the arms of the oil companies.

In return for the privilege of raping Louisiana and sucking it dry of oil, these corporations offered blue-collar jobs to the workers while paying off the state government and the elected politicians to leave them alone and the federal government not to regulate them. The people of Louisiana passively accepted their oppressed condition, and that generational passivity is part of their tragedy. But the explosion on April 20 awakened them to their own situation, and the citizens of the Gulf demanded some kind of compensation.

Filmmakers Rebecca and Josh Tickell enter this theater of venality and victimization with the innocent aplomb of a Rosencrantz and Guildenstern: they are small players in a larger drama, witnesses who, unlike Shakespeare’s bit players, live to tell their tale, though the story literally makes them sick. As the creators of Fuel (2008), this husband-and-wife documentary team were likely candidates to expose the duplicity of government and corporations. As efforts by federal and state government progressed throughout the summer to cover up the exposed incompetence and irresponsibility of an oil company, BP (British Petroleum, or Beyond Petroleum, as it likes to be called), the Tickells went to Louisiana to find the truth. They headed south, leaving Los Angeles behind, and brought Peter Fonda with them. (Fonda, along with Tim Robbins, was one of the executive producers.) After making some feel-good public appearances and doing his well-meaning star turn, Fonda went home and the real work of the Tickells could begin. The Big Fix, which will appear in theaters in limited release in June of 2012, is the result of their quest to document the aftermath.

Like fellow documentarian Michael Moore, the Tickells are actors in their own documentary, and they rest their case on the Obama administration’s investigative reports on the Gulf oil spill. Indeed, most of the facts presented in the film are in the public domain and are well known. What is remarkable about the events of the summer of 2010 is that the whole world was watching—literally viewing the oil pumping out of a ruptured undersea pipe—and yet there was a determined attempt on the part of the oil company to silence the affected community and to impose a blackout on its ongoing attempts at a highly toxic cleanup. We, the television audience, watched as the community was paid off with a “settlement” that remains to this day an abstract sum of money out of reach of the victims. Once you become a claimant, you become, in effect, a “defendant” who must prove that you deserve the money. Once it offers a settlement, the oil company is back in command. The movie points out that BP has paid only one claim.

One would expect that the oil company would want the public to watch its penance and its amends to the Gulf, but after a few months of being televised, BP began to work at night under the cover of darkness. The Tickells could challenge the BP guards, but they could not take their cameras onto the beaches being cleaned. Careful to film only on public land, they could still document the daytime activities of BP. Under the quiet lens of the camera, the corporation deliberately plowed the oil and tar balls under the white sand in full view of the people sunbathing and swimming in the Gulf. The Tickells reported that the swimmers were coming out of the Gulf waves covered in an itchy rash. Some of the inhabitants suffered from boils and welts and open sores from the toxins used to “absorb” the oil.

BP is not so much cleaning up the oil spill as hiding it. The term “cleanup” is misleading, for BP is not so much “cleaning” the water and land of the leaked oil as relocating it. The relocation is possible only if the oil is reconstituted into heavy nodules, which sink to the bottom of the ocean. Once the oil is so dispersed, the measurable amount spilled is reduced, and so too is the amount of restitution paid by BP.

The problem is not just the fact that BP is cheating, not just that huge lakes of submerged oil wallow at the bottom of the Gulf, not just that the poisonous brew could very well kill the body of water; it is also that the agent used to change the oil is extremely toxic. The cute name of this dispersant is “Corexit,” an invention of Exxon, which developed it to “correct” the spill of the Exxon Valdez in Alaska. According to experts who testify in the film, Corexit mixed with oil produces a compound that is far more toxic than the original oil. Given that the federal government is in the clutches of the oil companies, the mere fact that the EPA politely protested the use of Corexit was nothing short of amazing. BP, of course, responded to the admonishment by “correcting” at night, spewing white fumes from silent crop-duster airplanes to squelch the oil plumes.

The toxins hover invisibly in the air, float lazily in the water, and seep into the seafood like a slow poison. Although the Tickells took no more risk than that experienced by those who lived adjacent to the impact zone of the spill, they ultimately suffered physically from their investigation of the cover-up. The permanent damage to Rebecca Tickell’s health, after only one day’s exposure to the toxicity of the BP “cleanup,” can be taken as a measure of the suffering of the past two years and the future pain in store for the Gulf’s residents. Ultimately, like the many victims of the greed of a large and well-protected corporation, the couple must take care of themselves. But other victims are simply helpless. Dolphins expelled stillborn calves, which drifted in the poisoned waters and washed up on the shores. The herons and cranes and pelicans may look clean and fresh, but probe inside the fish and shrimp, and the oil they have ingested lies black against the white flesh.

The fish with the dark insides are a metaphor for the dark heart of corruption that stains the colonized state. The state university climbed into bed with a corporation whose activities bordered on the criminal and accepted millions of dollars in “donations.” In return, all the local scientists had to do was spout the company line: the cleanup agent, Corexit, would eat up the oil in the proper biological fashion, and the fish we are asked to eat is safe because it passes the “smell test.” The state hired a herd of “sniffers,” who apparently possess various levels of smell aptitude, to smell the fish. Of course, smelling the outside of a shrimp tells the sniffer nothing of its interior state. Watching a chirpy, cheerful woman in a bright patriotic red suit describe the process, I thought she was mocking the very idea, but later I learned that she was dead serious; she was selling the efficacy of “the smell test.”

For BP, the “big fix” for a tragedy was a public relations campaign that compromised all those involved. During a photo op, Obama and Sasha swam in the Gulf to advertise the cleanliness of the Louisiana beaches. What the public did not know was that the father and daughter were swimming in a protected and unaffected bay.

In the end the Tickells looked like Diogenes with his lamp searching for an honest man. The university was compromised; its experts are not to be believed. The governments, both state and federal, wanted to appease BP as urgently as the oil company needed to turn off the spigot. The so-called “little people” rushed to take jobs and seize the bribes offered by BP and allowed their silence to be purchased for a very few dollars. The innocent could protest and complain but there was no public will to help them gain justice. As in the aftermath of Katrina, the attention of the public moved on.

Although the Big Spill was two years ago, the use of social media and its various technologies has exploded since then, and the Tickells are the kind of savvy filmmakers who rely on the public to publicize social problems. Today society communicates through an informal citizens’ network operating through Facebook and Twitter and e-mail and Internet postings (such as this review). This is how change can take place. Social change happens because of the public’s perception of a fundamental injustice. The American public has little stake in the corporate economy and views big business with suspicion. Increasingly disenfranchised by corporate “contributions” to elections, sophisticated enough to know that politicians are purchased and laws are bought and paid for, the public trusts blogs more than ballots, public activists more than the government.

The Big Fix reveals how the system works or, to put it another way, lays out in an easy-to-understand package how the fix worked. The Tickells explain to us, the citizens of this nation, what happened to Louisiana while we all watched and wondered but did nothing.

The couple appeared at a small local showing at the Otis College of Art and Design on April 19 and engaged in a genial question-and-answer session with the students and faculty. They urged the audience not to buy seafood, because consumers will not be told correctly whether or not it comes from the Gulf. They told us simply not to buy the products of Big Oil: use public transportation, ride bikes, carpool, walk—all alien concepts in Los Angeles. But the purpose of the Tickells in coming to Otis was to remind the students that, as artists of the new generation, they are potentially very powerful actors. Bad outcomes can be changed only by the power of social media putting public pressure upon politicians. Although the Tickells did not mention it, the case of Trayvon Martin comes to mind as an example of what happens when the public will demands justice.

Time ran out and the hour was late, and I did not get a chance to ask the couple my question. As I sat in the college auditorium, part of a small select group watching The Big Fix, and viewed the parade of “fixers” go by, nodding their talking heads and lying, I wondered: what kind of people are these? University professors betraying public trust, experts misleading the people, bland-faced corporate executives, double-talking politicians—who are they? Are they genuinely delusional? Are they sociopaths? Josh Tickell recounted how he had a “lucid dream” of showing Obama all the facts of the Big Fix and said that the President was stunned, not knowing the truth until Tickell told him what had really happened. Rebecca Tickell stated that the representatives of BP were nice people with families. I thought that this charming and optimistic couple doth protest too much. I wondered if these excellent and talented people were not projecting their own inherently moral qualities upon those who do not deserve the benefit of the doubt. No, these corporate people cannot be “nice.” Cheating innocent victims, refusing to pay those whom you have wronged—the perpetrators may smile, but they are not ethical or honorable, and they have no right to look their children in the eye.

The night I saw the film, I also read an article by Richard Cohen in The Washington Post. Written on April 16, the column discussed a current candidate for high office whose career has been in business. Noting this candidate’s uncanny ability to lie at will, Cohen explained, “…what his career has given him is the businessman’s concept of self — that what he does is not who he is.” This is what is called compartmentalization. You can do foul deeds; you can lie and cheat and steal as the oil companies do; you can lie with impunity and go home to your children and your mate and sleep well at night. It is not you who has done the illegal and immoral things; it is the corporation. You are only following orders.

Political cynicism is rarely rebuked. Seasoned operatives play the game to win by fair means or foul and apparently never consider the long-term consequences. When their glory days are long over, some, like Lee Atwater and Robert McNamara, recant their tactics and their lies. Game Change, an HBO movie based on the well-received book of the same name, is a cautionary tale of the unintended consequences of a half-baked political strategy that revealed the “dark side” of populism. Four long years have come and gone since the memorable and frightening 2008 Republican campaign, which revealed the fecklessness of a presidential candidate, John McCain, who selected as a running mate Sarah Palin, a neophyte governor utterly unfit for national office.

Sadly, the movie is slack and soggy, as half-baked as the plan to make Sarah Palin into viable vice-presidential material. This is a compelling true story that we all saw unfold in real time. Those of us wedded to the notion that a politician should be at least competent felt alarm and consternation at the rise of Sarah Palin. It would be hard to say which was more frightening—her supreme ignorance or her supreme raw political talents. The sheer terror of the thought of Sarah Palin as second in line to the Presidency, the shudder that ran across the body politic, is strangely subdued in this account of one of the most unforgivable insults handed to the American people. And yet, after watching this sympathetic account of a badly handled candidate, I came away with a new respect and empathy for Sarah Palin.

The real villains of the piece are Steve Schmidt and John McCain, who needed a “game change” to confront and counteract the charisma of Barack Obama. Gently played by an amiable Ed Harris, John McCain is given an Easy Pass in this account of his catastrophic campaign. The John McCain of today is an angry man, still smarting over his humiliation at the hands of Barack Obama. Defeat has not sat well with him, and he has shown none of the graciousness of a vanquished John Kerry or Jimmy Carter. Ed Harris makes John McCain seem like a doting and absent-minded grandfather rather than a candidate wounded by the campaign against George Bush and determined to find redemption. There is no trace of his trademark hot temper, impulsiveness, and volatility.

That said, the movie clearly shows McCain as reckless and irresponsible. On one hand, he casually used Sarah Palin to boost his percentage points; on the other hand, he abandoned her to his campaign staff, who came to hate her. Chief hater was Steve Schmidt, well played by Woody Harrelson, who really steals the movie. Schmidt rather liked Palin in the beginning, but when the governor did not respond well to his wishes, he learned to fear and loathe his recalcitrant candidate. Unfortunately, Julianne Moore’s performance is thin and bloodless. Yes, she imitated Sarah Palin very well, but it is as if the imitation overwhelmed Moore’s ability to act and to give power to what history suggests was a very real rage at what was being done to her. In the face of Harrelson’s Emmy-worthy performance, Moore almost recedes, and we are never given a convincing emotional connection to how Sarah Palin broke away from her captors and went “rogue.”

The film obscures the fact that Sarah Palin had actively campaigned for a larger role in Republican politics, courting susceptible neo-conservatives such as Bill Kristol, who pushed John McCain to select her as a running mate. Kristol seems to have had a crush on the beautiful governor of Alaska, and although he was an outsider to the campaign, his voice carried significant weight with McCain. Doubtless, Palin also enchanted McCain, who, in the beginning (in real life), was visibly besotted with her. Because the film never makes the connection between Palin’s active ambition and the thwarting of her fantasy of political stardom, her rise and rebirth after weeks of humiliation at the hands of the press has no foundation.

Game Change asserts that Palin was located through Google—an odd elision of known facts that makes the campaign look even more lacking in judgment than it was in real life. The criteria were few: the vice-presidential pick must be a woman, to counter the Republican deficit with women, and she must be pro-life, to inspire the listless Republican base. Given that America is behind Afghanistan in the percentage of women in public office, finding a Republican woman with the kind of political experience routinely granted to men was a difficult task. Most Republican women active in politics at that time were pro-choice, narrowing the field significantly and almost guaranteeing a Sarah Palin type—lightly educated and living in an isolated area outside the mainstream.

All Sarah Palin had to offer was ambition, the skills of a performance artist, and a taste for public adoration. Sadly, from the very start, the McCain campaign mishandled a very viable politician who proved to be a game changer—if not in the way anyone had imagined. What Schmidt utterly failed to see, even after her acceptance speech at the Republican convention, was that Palin could not and should not be prepped into sophisticated knowledge of world affairs. It was the intention of the McCain people that their boss should have a running “mate,” a political wife who would “support” his positions. Apparently assuming that a woman selected as vice president would be less ambitious than a man, the team did not consider the fact that the nation would view her the same way it would view a male candidate: as the proverbial “heartbeat” away from the Presidency.

Palin understood not only her expected role but also saw her nomination as a path to her own political future, and it is this ambition that Game Change fails to grapple with. Julianne Moore is never allowed to fully show the driving ambition that led to Palin’s eventual success and is forced to spend most of the movie in a state of shamed failure. True, Palin, like anyone running for office, needed to be “prepped.” However, her needs went beyond an updating or a boning up on obscure aspects of foreign policy: Palin had to be taught at a high school or college level, and the film shows that she was prepared for press interviews with condescension bordering on contempt. In the process, the army of managers, consumed by concerns with her weaknesses, failed to see Sarah Palin herself and neglected to determine her strengths. The campaign proceeded to remake her in its own image.

The result was an artificial creation, an attempt to turn an ordinary Alaskan wife and mother who also happened to be a governor into a well-informed, chicly dressed talking head stuffed with undigested factoids. The problem was that political operatives were not trained as teachers and did not have a clue as to how to educate a human being. No one can learn disparate bits of information given without an intellectual context. In a reflection of No Child Left Behind, Schmidt and Wallace tried to teach Sarah Palin to the test—interviews with the press. When confronted with this well-dressed, sleekly made-up vision of political acumen, the press reacted accordingly, asking Palin the kinds of questions any run-of-the-mill politician familiar with Washington, D.C. could answer. What was seen, what we all experienced live on television, was a person stricken by a panic attack when asked about the “Bush Doctrine”—“In what respect, Charlie?” And her inability to think when asked which newspapers she read—“All of them.”

Oddly, given the amount of time the film devotes to the teaching of Sarah Palin, little time is given to the interviews, which were sheer agony to watch in real life. Nothing is more painful than witnessing a complete failure to construct a coherent thought, but there is almost nothing in Game Change of Palin’s mangled syntax, twisted by what must have been her sheer terror. After the interview with Charlie Gibson, the campaign owed her an apology, but instead the operatives blamed her and redoubled their efforts to cram facts down her throat as if she were a Strasbourg goose. No wonder the poor woman became catatonic and rebellious.

Perhaps because the people who worked for McCain had ulterior motives concerning their own futures in politics, McCain is absolved and Palin takes all the blame. Nicolle Wallace flatly refused to work with her (and ultimately to vote for her) after Palin bombed the Katie Couric interview, leaving the governor to the irritated mercies of Steve Schmidt. Lower-placed operatives in the campaign clearly leaked their dissatisfaction with Palin to the press and undermined her during the campaign, with the presumed effect of letting McCain off the hook and shifting the blame from themselves to an inexperienced candidate. In hindsight, everyone claimed that 2008 was a “Democratic year” and that the McCain candidacy was doomed, particularly in the turbulent wake of the Bush presidency. Palin, then, was a “Hail Mary” pass.

If there is, as Palin claims today, a “false narrative” to Game Change, it lies in the refusal of the major players to take responsibility. Why did Schmidt and Wallace not see who and what they were dealing with—the real Sarah Palin? Was it unconscious sexism? Was it a failure to recognize the capacities of a person so different from themselves? Was it their own blind loyalty to John McCain? This blind spot, whatever it was, blurs the heart of the narrative and, in the end, the film rushes past the most significant part—how Sarah Palin, possibly encouraged by her husband Todd, shook off her handlers, found herself and her own voice, and reached past the campaign to the voters. In the process, she eclipsed McCain.

To this day McCain remains circumspect about Palin’s rise to fame and glory. After all, it was this very rise in Palin’s popularity that not only bruised his ego but also wrecked his candidacy by unsettling the balance of the campaign. And herein lies one of the great “what ifs” of 2008. What if Schmidt and Wallace had recognized the potential of Sarah Palin? What if they had allowed her to use the interviews with the press to reach out to the voters who had felt ignored and talked down to—the voters who adored her? What if the campaign had used Palin to reach the very groups they had chosen her to represent—conservative women and base voters? What if they had allowed her to be herself? No doubt Palin would have stumbled and made mistakes, but with proper guidance, perhaps she could have learned how to be a populist candidate with the heart she obviously had.

Instead, at the end of the campaign, what we saw was an angry, mishandled woman on the loose, seething with resentment over the “lamestream media,” those very television journalists who had revealed her deficiencies and held her ignorance up to public ridicule. Although the movie is far too lax in covering Palin’s self-redemption, the candidate struck out on her own and began to campaign her own way. Now that she drew huge and rapturous crowds, the campaign seemed unable to “handle” or contain her energies. According to Game Change, her populism disinterred the “dark,” racist, xenophobic side of American life. The audience must fill in blanks that the film should have edifyingly filled; we are shown only a John McCain losing control of the narrative, horrified at the sight of the ugly underbelly of America and overwhelmed on Election Day by the public alarm over what had been unleashed.

In another area of fuzziness, both in chronology and in agency, Game Change appears to blame Palin for linking Obama to a “terrorist” and to an America-damning pastor. But this kind of dirty guilt-by-association game had been part of the Republican playbook since Lee Atwater and remains fully operative today. At the end of the film, McCain warns Palin to beware of the “extremists,” such as the Limbaughs, of the Republican Party. This brief scene appears too self-serving, too pat to be genuine, a much too obvious attempt to make McCain appear blameless for what Sarah Palin had supposedly revealed about the Republican “base,” no pun intended. But blaming Sarah Palin is another Easy Pass for the part Republican masterminds played in devising the infamous and divisive “Southern Strategy”—divide and conquer through racism. Palin did nothing but take advantage of an already well-worn set of tactics and rode to glory on behalf of the “Real” America.

A year ago this month, Bill Kristol bemoaned the failure of Sarah Palin to take advantage of the (unearned) opportunity that was given to her. Like many of Palin’s former defenders and supporters, Kristol jumped ship after Game Change the book was published. It seems that they were disappointed that Palin’s reach towards fame exceeded her desire to do the hard work of growing into a viable politician. Instead of going back to being the governor of Alaska, gaining experience and preparing to take on the role of heir apparent in 2012, Palin compounded the impression that she did not want to work by resigning halfway through her term and becoming a television personality on a boring reality show. Instead of growing her candidacy for President into an aura of inevitability, Palin became an inarticulate talking head on Fox News, an embittered mockery of her former self, using self-righteous religion as a cudgel against liberals.

Rising from the ruins of the failed McCain campaign, Steve Schmidt gave a timely interview with Anderson Cooper on CBS’s 60 Minutes a year later and indicated that, although Palin “helped” more than “hurt” the campaign, he would not choose her again. Clearing the ground just before Game Change was published, Schmidt sought redemption and exoneration for his part in what author John Heilemann termed the “irresponsible” action of foisting a “dangerous” candidate upon America. Schmidt’s mea culpa worked and, thanks to an excellent book and to this television movie, Schmidt has cleansed himself and continues to do penance on MSNBC.

Game Change the movie benefited from additional interviews and from a reading of Palin’s book, Going Rogue, whose new perspectives clearly added to the “empathy” angle, as the screenwriters stated. One does feel sorry for Palin, and it seems clear—book or no book, movie or no movie—that the McCain campaign let Sarah Palin down badly. But Palin herself profited only monetarily, not politically, from those intense months. One wonders…what if Sarah Palin had learned from her experience on the McCain campaign and surrounded herself with serious and sympathetic advisors? She could have molded her very real strengths as a devoted wife and mother and shaped her image as a normal person called to a higher office. She could have honed her formidable talents as a communicator.

But like a minor character in a Shakespearean tragedy—a Rosencrantz—Sarah Palin thrust herself to the fringes of history, a fleeting novelty, discredited by her own roiling resentment. Too bad. What if she had allowed herself to try to be better than she was, to learn? Imagine the Republican primary today with Sarah Palin on the debate stage. Her natural running mate: he whose name cannot be Googled. Now that would have been a real Game Change.

Dr. Jeanne S. M. Willette

The Arts Blogger

Young Adult (2011)
http://jeannewillette.com/2011/12/18/young-adult-2011/
Sun, 18 Dec 2011 17:00:15 +0000

THE YOUNG ADULT, THE UNRELIABLE NARRATOR, THE STORY WITHIN THE STORY

AND THE TRASHING OF WOMEN

The “unreliable narrator” is a literary device, or concept if you will, that is rarely used. The device is difficult to use effectively, because readers expect narrators to be reliable; they assume that the story being told is, at least, straightforward. Grumpier readers of Agatha Christie mystery novels would complain that the dear lady would hide clues and suddenly solve the puzzle by producing the final, inexplicable piece. Readers felt cheated that Christie, who was often too clever by half, had not allowed them to participate in the solution. On a much higher level of the literary scale was Ian McEwan’s Atonement, in which the narrator, a writer, obsessively wrote and rewrote a story from her childhood in a futile attempt to make it turn out right. In McEwan’s deft hands, the tripartite telling could enrage some readers (my friend David) and intrigue others (me) who interpreted the unreliable narrator as the author’s way of saying that our lives are merely constructed fictions, rewritten by us.

What can one say about the unreliable narrator of Young Adult? My first thought was that the writer of this aggravating film was a man who did not comprehend women; but the writer is a woman, Diablo Cody. Cody wrote Juno, another film I truly disliked, but for other reasons. In Young Adult, Cody pulled an Agatha Christie on the viewer, revealing, at the very end, a secret that upended the entire premise of the plot. My next thought was that all the critics were, for some reason, misrepresenting the story or misunderstanding it or mistelling it or simply missing it. Cody bears a great deal of the responsibility, having insisted that the leading character, played by Charlize Theron, is “unsympathetic” and spends “ninety minutes trying to steal another woman’s husband.”

In almost every film review (I have read most of them, but not all), the reviewers repeat the standard line—Mavis Gary is a horrible character, a former high school beauty queen who returns to a hometown for which she feels nothing but contempt, intending to ruin the marriage of her former boyfriend. The impetus for this journey of (self)destruction appears to be a message from Buddy Slade (Patrick Wilson) that he and his wife (Elizabeth Reaser) have just had a baby girl. From the very beginning of the movie, it is clear that Mavis is a deeply depressed, angry alcoholic who can barely get dressed in the morning. She lives a half-life as a ghostwriter of novels for young adult girls. Pressed by her editor for the last book of the series, she diverts herself by going on a quest, fueled by sheer meanness, to snatch her old boyfriend back after nearly twenty years, at the very peak of his happiness.

How could the average moviegoer like such a character? At every turn, we are lured into accepting the judgment of the town’s middle-aged women who remember her from high school—a “psycho bitch.” Reviewers have made much of the odd pairing of the beautiful Theron and the lumpy Patton Oswalt. But no one explores why the two, one a popular and beautiful girl and the other a misfit, find an emotional connection at a local bar in the dreaded Mercury, Minnesota. “Matt Freehauf” has never left the small town, the place where he was beaten by high school jocks who thought he was gay. He lives with his naïve sister and paints hybrid hero figures. At first it is hard to tell who is more mentally ill: the man who never left the site of his torment and humiliation or the woman who came back to relive her glory days. Something is broken within both souls of this non-couple. The dysfunctional metaphor is very clumsily made with the gratuitous and literal crippling of Matt, who walks with a cane.

But there are strange inconsistencies and inexplicable actions on the part of the characters that might—in the hands of a better writer—pass for clues. Why, one might ask, does “Buddy” agree to meet his former girlfriend at a local “restaurant” (I use quotation marks as a comment on the food on the menu) without bringing his wife along? Why do Mavis’s parents (Jill Eikenberry and Richard Bekins) brush her aside when she reaches out for help by telling them that she thinks she is an alcoholic? Why is she so alienated from these seemingly nice people?

At every turn, we are led to blame Mavis—she has discarded her parents as she discarded her hometown—she has put on the red-light clothes: she is trying to wreck a home. But we should be asking other questions: why does “Buddy,” who, we are repeatedly told, is a loving husband and father, keep putting himself in her path? Why, given that Mavis has made her intention to rekindle their relationship clear, did he then invite her to his child’s “naming ceremony”? What kind of man would act in such a way—encourage an obviously troubled woman who is trying to seduce him? What is the significance of “Matt” telling Mavis that “down south” things don’t work very well since his beating?

This is where the unreliable narrator comes in. The unreliable narrator appears to be doubled in this film: Mavis is writing her latest novel, and the viewer follows the teenage story as a counterpoint to the actual plot of the movie. Diablo Cody puts Mavis in one humiliating situation after another, leading the viewer to believe that this woman is responsible for all the consequences that befall her. The character is torn down not just by the writer but also by the audience, who always hated the pretty and popular girl in high school.

We secretly dream of going to our high school reunion and seeing the blond and bouncy head cheerleader as a gray-haired hag. The writer uses the worst instincts of the audience—blame the victim. Everyone condemns Mavis and takes some sly satisfaction in enjoying the comeuppance of a former prom queen. In one strange scene, Mavis asks Buddy’s wife, who works with special-needs children, about a chart of expressions used for teaching her students. Where, Mavis inquires, is the neutral expression that does not show emotion? We are thus led to believe that she has no feelings.

But there is a scene in which Mavis has a public outburst and a long-suppressed grief comes out, suggesting that she dare not feel, that she numbs herself in order not to break in two. Mavis dresses carefully and very conservatively for the “naming ceremony.” With his wife in another part of their home, Buddy agrees to talk with Mavis alone and then, after leading her on for days, rejects her. His wife later accidentally spills a drink on Mavis’s white blouse, and Mavis blows up and calls her a “bitch.”

And, suddenly, out of the blue, comes a revelation that explains her depression and her self-destructive behavior and makes her a completely sympathetic character—when she was twenty, Mavis was pregnant with “Buddy’s” child and lost the baby in a miscarriage. The young couple had been planning marriage and expecting to start a family when this sad event suddenly occurred.

Keep in mind that this truth comes out at the “naming ceremony” to which this nice husband has lured her. Mavis reveals her story to the silent contempt of the invited guests. Buddy’s wife has no reaction to this news that he fathered another child. The crowd, including her parents, blame Mavis. Although no explanation is ever given for how Mavis got from the miscarriage in Mercury to a writing career in Minneapolis, it is clear that this woman has been suffering for almost two decades from melancholia, from the unresolved loss and grief of the end of youth and hope.

Now Buddy’s actions seem positively sadistic—to send a birth announcement to his former girlfriend, who lost his child, is nothing short of vicious and cruel. To invite her to one event after another, to lead her on, to keep her on the hook is a reprehensible betrayal of his wife. Mavis read Buddy quite well from a distance—he did indeed panic over the baby, he did indeed want to be liberated—so he summoned her to free him. Then he lost his nerve, and we are left with the impression that when she needed him most, twenty years ago, he must have let her down. This supposedly nice guy deliberately draws a lonely, damaged, and vulnerable woman into the agonizing situation of a “naming ceremony” and blames her for breaking apart—we included you, he tells her, because it is clear that you are so sad and needy and depressed—we all feel sorry for you—and now look at what a bad girl you are.

No wonder Mavis writes novels for young adults: she is trying to rewrite her own life. No wonder Mavis is an alcoholic, no wonder she is depressed, no wonder she is angry. It is not that she hasn’t grown up; she became an adult out of sorrow…long ago. It is Buddy who doesn’t want to grow up—in the most hurtful way he can imagine, he calls out for his old love. It is Buddy’s wife who escapes the baby—she plays in a rock band. But the film is unmoved by Mavis’s true story. We assumed that she was so unloving that she hated children, but we now know why she was so numb and unmoved by the child that Buddy forced her to confront. Her announcement that the happy event at the “naming ceremony” could have been for her own child passes without making a ripple, not in the town, not in the script, and the writer, the unreliable narrator, pulls back and moves on without pausing to consider what such a revelation might mean to her characters.

The real unsympathetic character is not Mavis but Buddy, a passive-aggressive good old boy in a frumpy plaid shirt. He is one of the nastiest perpetrators in a supposedly comedic film in a long time. The viewer looks back on the earlier parts of the film—Buddy and Mavis meet after many years and do not mention the shared sadness that parted them? Mavis’s parents keep her old room in its teenage state? The unreliable narrator leads the reader on, creating and building on false assumptions. A good writer would have allowed the characters to make a feint or a move that would let the reader make the connection later. Yes, poor Mavis does not work very well “down south” either. A bad writer simply yanks a rabbit out of the hat and then throws it away—Mavis is haunted by an old wound—so what?

Alone and abandoned again, Mavis ends her last Young Adult novel by killing off her character’s old boyfriend and his new girlfriend. One of the narrators in this story within a story is perfectly honest and true and that narrator is Mavis. Yes, the small town did not suit her, yes the girls were jealous of her, yes, she had talents that needed to be developed, and yes, the boyfriend got the ending he deserved. The Young Adult series comes to a close when the protagonist finally graduates from high school and leaves home for good.

I wondered when Juno came out what kind of message a “successful” teen pregnancy sent to young women. In real life such an event is a traumatic disruption. Often the young woman is not supported by the father or by her family. Usually, having a child throws a young girl’s life off course and there are grave consequences—education delayed or ended, career plans put on hold or given up, years of maturation denied by premature responsibility. Juno was a fantasy that a teenage pregnancy is a blessed event enriching the lives of all involved. Life simply isn’t like that. A young woman in such a situation is faced with terrible choices—who will pay the price? Who will sacrifice? In Young Adult, as in life, it is the woman who bears the consequences. Buddy has clearly shrugged off his past while, at the same time, engaging in an act of unwarranted revenge towards a woman who has done him no harm.

Young Adult is even more deeply cynical and more deeply hurtful towards women than Juno. The movie begins and ends by blaming Mavis for the cruelty of her old boyfriend. She is punished throughout the film and is not allowed even a moment of understanding or sympathy. Man after man takes advantage of a helpless self-loathing that has forced her to disassociate herself from her own body. I have written elsewhere of female film writers (Nancy Meyers is a prime offender) who create female protagonists only to tear them down and put them in their rightful place. Young Adult is another such film by a woman writer.

Young Adult is particularly cruel and nasty towards women. The current social wars against women in state after state and the political attempts to take rights away from women across the nation are based upon an unprecedented plan to control women’s behavior at the expense of their freedom. Young Adult seems to reflect the cultural desire to blame women for what is an act between two people and to punish women for a natural human impulse. The lack of support for “Mavis” is indicative of the lack of support for the aspirations and dreams of young women who must struggle to take care of themselves in a world that is increasingly hostile to their hopes.

It is a shame that the film missed an opportunity to take a serious look at the damage done when women are blamed for falling in love with a boy and getting pregnant. It is a shame that the film declined to examine what happens when women must carry the social burden of the consequences of being a young adult. Imagine how the film could have developed if Mavis had called out Buddy and his behavior. But this is a lazy and disingenuous film. It is easier to blame the victim and move on. Too bad.

Margin Call took my money, not once, but twice. It is rare that a movie can separate me from cold hard cash and it is even rarer that I will pay to watch it again in a real movie theater, rather than a year later in the comfort of my own condo. But Margin Call is that kind of a film—you need to see it twice for the details and the nuances of human behavior. It’s that good.

As someone who donated hard-earned tax dollars to the very bad people who live on Wall Street, I have followed the sad fate of my money with obsessive interest. Every book that has been published on the debacle of 2008 is on my Kindle app. Some of these books are very good, such as Michael Lewis’s Boomerang, and others are simply horribly written, such as Andrew Ross Sorkin’s Too Big to Fail. In contrast to the terrible truth of these books, Margin Call is a terrible fiction about terrible people who have done terrible things.

The historical background of Margin Call is too well known for the film even to mention. As the director, J. C. Chandor, stated, the plot could have been placed at any point during the period leading up to the Crash of 2008. For years, experts and people with common sense knew that the Wall Street bubble would burst. The smart ones, as Michael Lewis reported in The Big Short, bet on the crash and made money. The protagonists of Margin Call are merely the first out of the door to screw their clients. The firm gets out of the very crooked game it is running on the suckers and runs off with enough money to make a profit out of the nefarious enterprise.

Like other financial enterprises, called “banks” or “investment firms” or “hedge funds,” the company imagined by Chandor is a thinly veiled version of Merrill Lynch, where his father once worked. These “businesses” are simply casinos that happen to be located on Wall Street and are run not by the official Mob but by people with MBAs or other number-based degrees from Ivy League schools, the “legal” mob, in other words. Stocks and bonds are like casino chips or the little balls that rattle around the roulette wheel or cards flipped over by a dealer. These abstract entities, call them what you will, have an equally abstract monetary value, and they are deployed in a rigged game where the house always wins. Except, as Margin Call makes clear, the Wall Street “business people” are not nearly as smart as the mob.

The “firm” at the center of Margin Call has no name, or perhaps it is the firm that dare not speak its name, out of shame. But shame is not present in this perverse universe. Remorse, morality, ethics, introspection, self-awareness, honor, honesty—all those homey virtues—hover around the edges of the characters in this film but fail to materialize. All live in fear of the outcome of situations that they themselves have created. The viewer cannot muster much sympathy for these benighted beings, but the film reveals the ruthlessness of the finance business in its opening minutes, which demonstrate the heartlessness and the absence of any “social contract,” for lack of a better term, on Wall Street.

Margin Call opens with a mass firing. An army of those in charge of severance—we saw the cutting ceremonies in George Clooney’s Up in the Air (2009)—files in, carrying bankers’ boxes and marching orders. The reasons for firing large numbers of people are vague—“cutting back” and what have you—and the selection is just as arbitrary. The lesson is that there are winners and losers, just as there are in Las Vegas, but here on Wall Street losing is more personal. In Vegas, you at least get a hotel room and Elvis impersonators; here you get severance pay and the right to leave the building with a box filled with your desk paraphernalia. For those of us who work in the real world, it is not clear why anyone would work in such an environment. The money may be good, but what an awful way to make a living: watching computers that show nothing but graphs in nice colors…until the Cutting Crew marches in.

The survivors politely avert their eyes, not to spare the victims but to protect themselves against the inevitable—-whoever you are, you are next. And that is what the rest of the movie is about: who’s next?

Margin Call has an excellent ensemble cast—astonishing, in fact, for a first-time director—and these pros resist the temptation to chew up the scenery. These are a buttoned-down, macho bunch in coats and ties, and the women are especially hard-bitten. No emotions here, but watch the excellent acting for the small telltale signs of who will be sacrificed, who cannot survive, and who will thrive and rise. No feelings on display here, but listen to the dialogue for the moment when someone gets thrown under the bus and someone moves to the head of the pack. The young survivors of the first firing, Zachary Quinto (Peter Sullivan) and Penn Badgley (Seth Bregman), seem equal at first: they are both quite capable of making sense of the complicated equations bequeathed by the Risk Manager, Stanley Tucci (Eric Dale). But take note of “Seth’s” white socks and his overgrown, out-of-control hair, then compare him to “Peter’s” perfectly waxed brows, and guess who survives the next cut.

At every turn, Paul Bettany (Will Emerson) reveals that his days, too, are numbered. He is the assistant to Kevin Spacey’s character (Sam Rogers), but he wastes his high salary on hookers and booze and he is past his shelf life—post-forty and still holding, still running in place. One more cut and he will be gone.

“Sam” is a character who passes for “moral” in this world, but he needs the money because he, too, has wasted his salary. “Sam” will do what his masters tell him. It is no accident that Sam spends the film weeping over the death of his dog. These characters are mere underlings who summon up their Overlords when “Eric Dale” is fired and leaves behind a time bomb of an investment scheme, rotten at the heart and on the verge of exploding. And now it is every man (and woman) for him or herself.

Playing the part of Dick Fuld of Lehman Brothers, the place where the Meltdown started in 2008, Jeremy Irons is “John Tuld” who oversees a meeting of the Board in the wee hours of the morning. It is here that the real decisions are made. The two executives who have apparently masterminded the now disastrous scheme are Demi Moore (Sarah Robertson) and Simon Baker (Jared Cohen).

On leave from The Daily Show, Aasif Mandvi (Ramesh Shah) runs the numbers and confirms that “Eric Dale,” who, having been summarily fired, is now understandably missing, is correct in his calculations. The sky is falling. But “Tuld” asks Peter Sullivan to explain the situation. The ball is in his court and the young analyst—awed but poised and certain—rises to the occasion and makes it clear to Jeremy Irons that “the music has stopped.” Paul Bettany and Penn Badgley are silent and have nothing of value to contribute to the meeting. The viewer realizes that Quinto is now a rising star and that Badgley’s character will be fired, long before “Will Emerson” warns him that the axe is falling.

But bigger heads must roll. When Irons turns to the culprits—Moore and Baker—he is not ascribing blame—not yet—he is demanding solutions. It is Moore who freezes and it is Baker who steps up and recommends a ruthless move—sell everything—that is also the only way of surviving. Although Baker warned her that he was going to double-cross her, Demi Moore made the fatal mistake of believing that this man, known as the “snake,” would stand by her and take the blame with her. The meeting has hardly begun before Irons tells Moore that he will have to show her “head” to Wall Street: she will have to take the blame. Why? Because she lacked the nerve to dump toxic stocks on her colleagues on the Street.

In contrast to Too Big to Fail (2011), which clumsily “explained” the Crash to the audience, Margin Call “explains” the financial crisis in dialogue that is casual and pops up here and there as characters carry on conversations. The gullible public is named and blamed and the Wall Street bankers excuse themselves for their recognized but uncontrollable greed. They are like addicts who won’t stop gobbling up ridiculous salaries for contributing nothing but misery to society until some higher power stops them. There are no higher powers. With all its failings, Too Big to Fail was very accurate in showing that the “Fed” bailout of Wall Street was really the New York Fed saving its cronies by stampeding Congress while the Bush Administration absented itself from the flim-flam job.

Margin Call, based on the experiences that Chandor’s father had working on Wall Street, ends without redemption. “Eric Dale” is finally found and corralled only in order to sweeten his severance package to silence him. He and “Sarah Robertson” end up in a quiet room together, chatting in their foxhole, waiting for their payoffs.

When the character played by Simon Baker came onscreen, in close-up, there was a murmur of female appreciation throughout the theater; but, in contrast to his genial and whimsical characters on The Mentalist and in Something New (2006), this man never cracks a smile or changes his expression. Gesticulation, wit, vocal gymnastics, imposition of personality—all of that is the privilege of one man, the boss, Jeremy Irons, who rules through sheer force of will.

In the end, the dirty deed is done and done quickly. Innocent people are screwed. The nameless firm is now known on the Street as a ruthless player. But it is doubtful that, in this world, the competitors will do anything other than copy the playbook. Margin Call makes it clear that the only rule is that there are no rules.

The Arts Blogger

Dr. Jeanne S. M. Willette

Mélancholia (2011)
http://jeannewillette.com/2011/11/28/melancholia-2011/
Mon, 28 Nov 2011 17:00:28 +0000

MÉLANCHOLIA AND THE END OF THE WORLD

Lars von Trier is our modern melancholy Dane. And yes, this film is a play within a play. The first play is the play on words. The name of the movie is Mélancolia, but the spelling makes it clear that the pronunciation is not “mel”—as in Mel Gibson—oncholia; because of the accent aigu, it is French. One says “mal,” as in the French word “mal,” meaning “evil.” The Derridean game becomes clear when “Justine,” played by Kirsten Dunst, says, “We are alone. The earth is evil. No one will miss us.” The actual French word is mélancolie, so von Trier has rammed two spellings—French and English—together to warn us of a mash-up among references and between planets.

Mélancolia begins with a plethora of art historical references. The Dead Birds of Ross Bleckner fall from the sky, the landscape is Marienbad crossed with Magritte, Dunst floats like Ophelia (another reference to the melancholy Dane) in a John Everett Millais painting, then, in her wedding dress, she drags the strange apparatus seen in Un Chien Andalou behind her, and the leaves fall from the trees of Bruegel’s Hunters in the Snow. Then we see Charlotte Gainsbourg, carrying Cameron Spurr, her film son, through a hailstorm, and she sinks into the green grass as the hunters’ feet are buried in the snow. We are about to witness a work of art, not a work of reality. What follows is an allegory of a metaphor of a mental state—the condition of melancholia.

Visual Reference to Un Chien Andalou

After hiding behind the sun (Hamlet: “I am too much in the sun.”), the planet Mélancolia emerges and goes rogue. According to the astronomers, earth and the new planet are now locked in a Totentanz, a dance of death. There is no escape from the onslaught of melancholia. Sigmund Freud wrote a definitive and influential essay on the subject, “Mourning and Melancholia” (1917), in which he stated,

The distinguishing mental features of melancholia are a profoundly painful dejection, cessation of interest in the outside world, loss of the capacity to love, inhibition of all activity, and a lowering of the self-regarding feelings to a degree that finds utterance in self-reproaches and self-revilings, and culminates in a delusional expectation of punishment.

In 1967, Alexander and Margarete Mitscherlich wrote The Inability to Mourn: Principles of Collective Behavior and used Freud’s ideas to explain the extended melancholia of Germany after the Holocaust. The nation, they concluded, was unable (or unwilling) to mourn the loss of the Jews. Freud continued,

…melancholia is in some way related to an object-loss which is withdrawn from consciousness, in contradistinction to mourning, in which there is nothing about the loss that is unconscious. In mourning we found that the inhibition and loss of interest are fully accounted for by the work of mourning in which the ego is absorbed. In melancholia, the unknown loss will result in a similar internal work and will therefore be responsible for the melancholic inhibition. The difference is that the inhibition of the melancholic seems puzzling to us because we cannot see what it is that is absorbing him so entirely. The melancholic displays something else besides which is lacking in mourning—an extraordinary diminution in his self-regard, an impoverishment of his ego on a grand scale. In mourning it is the world, which has become poor and empty; in melancholia it is the ego itself.

In other words, unless the individual can identify the nature of the loss, she is doomed to an unrelenting state of melancholia that seems to have no proximate cause. For Germany, it is impossible to acknowledge the loss/lack of the Jews because the Jews are “withdrawn from consciousness.” Likewise, in the von Trier film, it is never clear why “Justine” is so deeply sad. Her mental state can best be expressed through images; words will not suffice unless they are distended into poetry. The long, eight-minute prologue/prelude, set to the Prelude from Tristan and Isolde, warns the listener of doom, a doom that inverts the story of the young couple who loved too much, for “Justine” cannot love at all. Her ego is insufficient.

The inability to love (to mourn) is revealed when Mélancolia starts the story with a long white wedding limousine attempting to navigate a road that is too narrow, too winding. The road leads to a huge house/castle on an island, and the bride and groom are forced to walk to their own reception, to which they arrive two hours late. The reception is an elaborate deception on the part of “Justine’s” sister, “Claire” (Charlotte Gainsbourg). A play within a play, the party is the attempt on the part of both sisters to perform normalcy. The long, slow, dragged-out first act of the production is an agonizing downward spiral from pretense to intense despair. Drowning in successive waves of alienation, “Justine” systematically destroys everything and everyone—her job, her brief marriage—and then the groom, the guests, and her parents drive away.

The strange inability of “Justine” to pretend to be happy is the onset of a serious breakdown from which there can be no recovery. There are indications that the mental illness might be a family affliction: Charlotte Rampling plays “Gaby,” the vindictively angry mother of the bride, and John Hurt plays “Dexter,” the father who runs away from his daughters. The first act ends as “Justine’s” performance concludes and the audience takes flight. That the end is coming is made clear when the reluctant bride scans the skies and sees that the stars are out of place. A new planet is approaching: Mélancolia, and there is no escaping its path of destruction.

The second act centers on “Claire” and her futile attempt to save her sister. As anyone who has watched a loved one succumb to mental illness knows, there is nothing to be done but watch. The giant planet, Mélancolia, comes closer and closer. “Claire’s” husband, “John” (Kiefer Sutherland), attempts to keep his wife and child safe, but he too is helpless in the face of “Justine’s” all-consuming collapse. The planet is a metaphor, as is the fact that the family is isolated on an island. Designed by Magritte, the island would have been inundated by a tidal wave had the planet been real, but instead the seas that surround it remain calm and unruffled, serene while Earth gasps its last indrawn breaths.

The isolation of the island symbolizes the isolation that overwhelms any family struggling with a loved one in danger. The family carefully measures the scope of Mélancolia through a homemade wire circle. Presumably, if they take care of “Justine,” her illness will recede, but like the baleful blue planet, her illness only grows larger and absorbs the entire family, destroying them one by one. The theme of “inability” runs like a connecting thread throughout the acts: “Justine” is unable to love, her sister is unable to save her, “Justine’s” horse is unable to cross a bridge, and the passive husband, trampled to death by that same horse, is unable to save his family. In the end, as “Justine’s” illness looms ever larger, she becomes, like Mélancolia, ever stronger and acts to build a “cave” out of long sharpened sticks. Here, beneath the peaked triangle of carefully arranged wood, the last three people on the island take refuge and, holding hands, the two sisters and “Claire’s” little boy, who closes his eyes, wait.

The prelude tells it all: in its last moments, the tiny planet we call home is absorbed by the large planet of all-encompassing sadness. Lars von Trier draws out the second act as he drew out the prelude and the first act, as slowly but surely the end of life comes to Earth. The movie audience is forced to concentrate on each moment and contemplate the suffocation to come. The viewer suffers the agony of waiting as the sound of the approaching planet rumbles ever forward. There is a sense of magical suspension, as if the film were a Gregory Crewdson photograph come alive. The light used by the director of photography is always otherworldly. And so this other world falls to Earth. The end comes in a blast, succumbing to a total darkness, a madness that swallows all that it touches.

Lars von Trier has never just made movies; he has always made films according to concepts wrapped around his philosophy of filmmaking. In 1995, he and Thomas Vinterberg founded Dogma 95 and established a new set of rules for cinema. The Vows of Chastity were stern and austere. Based on the rejection of artifice in favor of authenticity, the rules of Dogma 95 reigned in the early years of von Trier’s career. In Vinterberg’s The Celebration the camera is handheld, the light is ambient, and the result is naturalistic, making the revelations of the film all the more intense. Over time, perhaps with Dogville (2003), von Trier began to move away from the Vows, and he decisively does so in this film.

1. Filming must be done on location. Props and sets must not be brought in. If a particular prop is necessary for the story, a location must be chosen where this prop is to be found. In Mélancolia the sets are deliberately theatrical.

2. The sound must never be produced apart from the images or vice versa. Music must not be used unless it occurs within the scene being filmed. In Mélancolia the music is borrowed, like the art historical images. The use of music by von Trier references Stanley Kubrick’s 2001, and the Danish director uses Richard Wagner as Kubrick used Richard Strauss, to displace the film in time and space.

3. The camera must be a hand-held camera. Any movement or immobility attainable in the hand is permitted. The film must not take place where the camera is standing; filming must take place where the action takes place. In Mélancolia these rules are not violated, and there is a strong sense of an inquiring camera following the action, but the editing is more conventional than in his earlier work.

4. The film must be in color. Special lighting is not acceptable (if there is too little light for exposure the scene must be cut or a single lamp be attached to the camera). In Mélancolia virtually all the lighting is artificial or strange, as the island is bathed in moonlight and planet light and the grounds are theatrically lit with electric floodlights. Imagine Crewdson advising Magritte.

5. Optical work and filters are forbidden. In Mélancolia the use of day-for-night is part of the stylistic play, indicating that the film is a play within a play, melancholy Danes in a theatrical production with all the actors famously dead by the last act. Unfortunately, there is no Horatio to say “Good night, sweet Prince.”

6. The film must not contain superficial action (murders, weapons, etc. must not occur). In Mélancolia the action is internal, in the minds of the victim and those in her orbit.

7. Temporal and geographical alienation are forbidden (that is to say that the film takes place here and now). In Mélancolia the action is divided clearly into two acts, or two states of mind, Justine and Claire, and there is a strong sense of real-time deterioration.

8. Genre movies are not acceptable. Mélancolia is a disaster movie, crossed with a dysfunctional family movie, crossed with intimations of suicide, self-destruction, and Hamlet’s inability to act. Like Hamlet, “Justine” nurtures her loss and her lack. Just as Hamlet cannot admit that he is suffering from the loss of his mother, “Justine” cannot locate the source of her pain. While one might not want to mention Another Earth, this film, while not strictly speaking a science fiction movie, is an example of Surrealism.

9. The film format must be Academy 35 mm. In Mélancolia the format is full of digital special effects and Magic Realism.

10. The director must not be credited. Well, Mélancolia is a “Lars von Trier” film.

For those of an artistic bent, for those with patience and dedication, this is a rewarding film. A true work of art in every respect, Mélancolia was made by a great director who continues to evolve. Sadly, Lars von Trier has also behaved in such a way as to discredit and to isolate himself, making one wonder if he, too, isn’t suffering from some kind of mental condition. It would be too cute by half to connect von Trier’s Nazi references to The Inability to Mourn, but the director’s use of Wagner does not come out of nowhere. Nevertheless, I feel that it is important to separate the maker from the work of art, which should be allowed to stand on its own. Kirsten Dunst won the Best Actress award at Cannes for her work as “Justine,” and it’s a rare film that provides a good role for one actress, let alone two. For the reader who has reached the end of a nearly 2,000-word review, my thanks and apologies, but I was concerned that other reviewers have written of this film in shallow and superficial ways, and a complex film demands a commensurate discussion.

Otherwise known as “The Hood,” Inglewood is on its way to becoming gentrified. As with SoHo and Chelsea, the artists lead the way, and where artists go, real estate agents are soon to follow. But for now, the neighborhood, bordered by Loyola Marymount University, Otis College of Art and Design, and the local airport, LAX, is at an optimal mix of old-timers and newcomers from the art world. The Beacon Arts Building is run by Renée Fox, an artist in her own right, who puts together interesting exhibitions that feature artists from the local scene. This exhibition space is more than a gallery; it is also a neighborhood hangout for art gatherings and art parties and art panels where the issues of the day are discussed.

Beacon Arts Building

This fall, The Feminine Canvas—2011 featured veterans, such as Meg Cranston and Yolanda McKay, mid-career artists, like Fay Ray, and up-and-coming art makers, including Allie Pohl, in a group show reviving an old topic in Los Angeles—feminism, or a commentary on the state of women in the world today. As I interpret it, the title uses the word “canvas” for the blank surface that is written upon by a gendering process, which transforms the material into something named “feminine.” “Feminine” is neither female nor woman, both of which imply something “natural.” “Feminine” is a cultural construct that is resistant to change and modification, defying social evolution and political progress. “Feminine” is a (male) given, meaning that feminine is given to women through a (male) system that has been in place for hundreds of years. Women are made into what (male) society wants and needs through the machinations of (male) mass media and (male) mass custom.

The source of this unrelenting indoctrination is an apparatus that is run by men for men. This machine, which includes television, movies, the internet, and print media, is a vast territory of visual culture that, like film, excludes women except as window dressing, or that, like fashion magazines, molds women into dressed-up dolls complicit with male ideas of the feminine ideal. Women internalize these “ideal” characteristics, which designate them as decorations, and strive their entire lives to be worthy of male admiration. One of the major sections of The Feminine Canvas was a room full of photographs by Laura London from a series called Rockstar Moments. London presented images of a thirteen-year-old girl trying on a succession of homegrown costumes that would evoke “rock star.”

Rockstar Moments

On one hand, one has to say that progress has been made and that women, from Madonna to Lady Gaga, are finally making some tiny inroads into the male bastion of rock ‘n’ roll. Those two ladies mock and make fun of male expectations of women, but then there are the lesser lights, Britney Spears and Christina Aguilera, who display “femininity” without parody and without understanding that “femininity” is a costume. The little Rockstar girl is innocent of the mechanics of the construction of the female as a process Joan Rivière called a “masquerade.” According to Rivière’s 1929 essay, women put on the “masquerade” of “womanliness” in order not to alarm men, who are ever on the alert for what Rush Limbaugh recently called “uppity-ness.”

The young girl in London’s color photographs is the child of Cindy Sherman’s “Centerfolds” of the eighties. She has a bit more agency than her predecessor; she seems more active than passive; but she is still trying on the costumes that society has doled out to her. Alone in her teenage room, she tries out ready-made personas, Cyndi Lauper, Blondie, Madonna, all from the past, because nothing new has been created for talented women, who are still expected to strut about the stage and reveal their bodies while singing songs of dysfunctional relationships.

It is hard to say what is more disheartening: that we are still discussing a manufactured product like “femininity,” or that, forty years after the feminist movement of the 1970s, it is still necessary to talk the old talk. What does “feminine” mean in the twenty-first century? When I attended this exhibition, the previous weeks had been marked—or should I say marred—by the spectacle of a so-called candidate for the Republican nomination for President, Herman Cain, being accused of sexual harassment by several women. It was just two weeks before the state of Mississippi would vote on the strange and terrifying “Personhood” law, giving an embryo more rights than a real woman.

The television screen was alive with the scorn of Herman Cain supporters, all of whom mocked the women he had allegedly assaulted and called them liars. After the Cain incident, I was required, as are all employees of any legitimate institution or business, to attend a workshop on sexual harassment, and I realized that what I saw on Fox News was a verbal assault on women that created what in an office would be called “a hostile workplace.” Faced with the unrelenting verbal assault of angry men, the women who bravely came forward to question Herman Cain’s fitness for public office quickly faded into the background.

Who would not be cowed and terrified by the sight of the porcine faces of Dick Morris, speculating that one of the women was angling for a Playboy centerfold, and Rush Limbaugh, smacking his lips and making a crude play on her name, “buy-a-lick”? This, ladies and gentlemen, is how men control women—through dismembering their good names and calling them whores. Apparently, any woman who tries to explain what it is like to be accosted by a male in power is a target for angry denials and derision.

The saddest bit of video footage I saw was Herman Cain surrounded by his male supporters, who were making jokes about one of the bravest women of the twentieth century, Anita Hill, the woman who made “sexual harassment” a household word and an actionable offense. Or maybe it was the poignant clips I saw of Mrs. Herman Cain, the latest Good Wife, saying that for her husband to assault a woman would mean that he had a “split personality.” Sadly, wives do not know their husbands. Indeed, women assume that men are morally constructed as consistent beings, but, as Simone de Beauvoir pointed out in The Second Sex, men compartmentalize. Wives are baffled to hear the husband who beats them at home, in private, strike a self-righteous pose of virtue in public. Poor Mrs. Cain: she was trotted out for Fox News and then put back in her Wife Place. She, like her husband, seemed like a throwback to an earlier era.

Update: A few days after I wrote the paragraph above, another woman emerged, claiming that she had been the mistress of Herman Cain for thirteen years. She says that she came forward because of the way the other women were trashed. Herman Cain denied the affair—no sex, says he—but his lawyer does not. His campaign stated that Cain had “alerted” his wife. Mr. Cain, please go away. All these women can’t be liars. Mrs. Cain, read Simone de Beauvoir. She said it all sixty years ago, and what she had to say is still relevant. And that’s too bad.

So have conditions for women really changed? The work of Allie Pohl would suggest that they have not. Pohl has made the groin area of Barbie her artistic territory. The join of the legs, hips, and belly and all things in between makes an identifiable shape that the artist has turned into her trademark and her jewelry line.

Made by our own local toy maker, Mattel, Barbie was born in 1959 as a young woman, complete with breasts. A change from the baby dolls usually given to little girls to train them to be mothers, Barbie taught girls how to be girls. Being a “girl” means dressing up in fetching costumes, fussing with hair and makeup, and hanging out with poor emasculated Ken. Barbie changed with the times. She began to look her owner directly in the eyes, rather than casting her eyes modestly sideways; she had her vocations and her careers and her jobs.

Pohl is uninterested in the feints of the toy company. She focuses instead on what about Barbie is unchanging—her measurements. Despite three alterations in her body shape, Barbie has retained her large breasts (39”), her small waist (19”), her narrow hips (33”), and her huge head. According to various experts, Barbie is about six feet tall, weighs 100 pounds, and cannot menstruate. The exhibit at the Beacon Arts Building showed the Barbie torso as a chain of cutout dolls locked into a white picket fence and nailed in place. The image shown here was taken at another exhibition; at the Beacon, Barbie was wrapped around a column that supported the ceiling. But wherever she is, Barbie cannot change; she remains firmly fixed in her improbable body.

Allie Pohl's Barbie Fenced In

Although Barbie has run for President—you go, girl—she has never grown stout like Margaret Thatcher, nor worn pantsuits like Hillary Clinton, nor had children with strange names like Sarah Palin. Barbie Enjoys Being a Girl, as the song goes, and her image is burned like a brand on the minds and hearts of every little girl who has combed her long blond locks. This little girl may talk of becoming a lawyer one day, but she will always yearn for the perfect Barbie figure and will be haunted the rest of her life by an impossible Ideal Woman.

Allie Pohl's Barbie as a Chia Pet

Laura London’s little teenage girl is fantasizing about a life as a rock star, a dream as impossible as getting a figure like Barbie’s. What is so poignant about this little girl is what is not present in these photographs. It is the Lack that pains. It is what we do not see that bites. Although some of these photographs have a light background, for the most part their mood is dark. The colors are not saturated like those of Nan Goldin; they are not intense like those of Cindy Sherman. The artist has chosen a tonality that is both dim and dark at the same time, as though the dressing game is being played in shadow and filtered through amber. London could have used light pastel colors to make the images more “girly,” but she does not. She could have shown a little girl having fun trying on adult clothes, but she does not.

Laura London's Rockstar Moments

The choices of the artist make it clear that this is not a happy little girl. She is moody of expression, unsmiling and deeply serious. One has the sense that without a costume she might not exist, that she needs an image in order to be. She does not smile; she poses, she postures, but there is no happiness in her enterprise. The mood is claustrophobic, closed in, introverted. Rock stars live at night and work in the dark. Clearly, her quest for a persona dissatisfies her, discourages her, saddens her. Even worse, she seems to equate personality or personhood with the right combination of clothes…if only she can locate her outfit.

Rockstar Moments

This is what we do not see. We do not see this young woman reading her own book, writing her own story, singing her own song, dancing her own dance.

She searches through hand-me-downs. You begin to ask questions: Why are the “female” and “femininity” still a spectacle on a catwalk for the benefit of the male gaze? What has society done to this child-woman that she is inside, closed up in her room, alone with only clothes as her companions?

Why doesn’t she laugh? Why is she not outside, in the sun, running in the bright light, alive with a vital future, leaping for joy, suspended against the sky?

Naomi Klein is my hero. She is beautiful and brilliant and can look at the sick world in which we are trying to exist, diagnose it, and give a prognosis for the future. If you want to understand how we got from there—middle class security and prosperity—to here—the death of the middle class—then read Naomi Klein. Start with No Logo and then continue to The Shock Doctrine, and you will come away feeling disgusted, discouraged, and sadly enlightened. As Naomi Klein said this morning on the MSNBC program Up w/Chris Hayes, “The system is broken.” How true.

The Shock Doctrine is a harrowing account of how a particular economic theory, popularized by economist Milton Friedman and spread by his Ayn Rand-dazed acolytes to many helpless nations, has created vast wealth for corporations and vast misery for the people who live in these countries. Briefly and perhaps crudely, one can explain this economic doctrine as “free market capitalism,” or the myth of the free market which translates in reality to corporate monopolies over the lives of people—not just their economic lives, as in what kind of products they are forced to buy at non-competitive prices—but their social and political lives.

The Shock Doctrine is a phrase coined by Klein referring to the Milton Friedman doctrine of crisis. Private business interests should take advantage of a public or social crisis in a nation and force radical change quickly, set these changes in place before the population can recover and then sit back and reap the economic rewards. This cultural monopoly imposed by corporate interests must be all-encompassing because the political system needs to be co-opted in order to create a machine that delivers money to the business interests. Government money, otherwise known as taxes paid by the citizens that should be returned to the people as part of a social contract, is used to subsidize the moneyed class to assist them in making profits without interference of inconveniences such as financial or environmental regulations.

The Shock Doctrine begins with the reaction of the Bush Administration to the flooding of New Orleans by the epic hurricane Katrina. Although Klein is not making a new observation—many other commentators remarked on how quickly the African-American refugees were driven out of state, dumped, and abandoned, leaving Louisiana a much whiter state—she analyzes the post-Katrina situation in terms of “disaster capitalism.” This doctrine, which originated with Milton Friedman, urges the conservative government to rush in when a population is in shock, to upend existing structures, and to replace them with private interests in the service of free market capitalism.

Klein remarks upon how quickly the Administration swooped down upon New Orleans and swept away the public school system as efficiently as Katrina had swept through the Ninth Ward. The goal was to whiten the city by not rebuilding the African-American neighborhoods where people who traditionally voted for the Democrats once lived. Lest any of these displaced persons think of returning, steps were taken not to rebuild their neighborhoods and to make education economically beyond their means. The replacement for free public education? Charter schools, a privatized mode of education, accountable to no one, not even its customer base, parents and children.

A public system was replaced by a private one: this is what happened to the school system in New Orleans. Instead of having a public system of education that we all pay into because we all benefit from an educated society, this city now has charter schools. For those who are well-to-do, a private school, excuse me, a charter school, can be as expensive and as exclusive (and as segregated) as it wishes, out of reach of government supervision. Such schools can teach what they wish, again with very limited government oversight. Through the back door, separate but equal comes on little cat feet and steals the American dream.

For Milton Friedman, public schools are nothing short of socialism. As the late guru once stated, “The preservation of freedom is the protective reason for limiting and decentralizing governmental power. But there is also a constructive reason. The great advances of civilization, whether in architecture or painting, in science or in literature, in industry or agriculture, have never come from centralized government.”

That statement is astoundingly ignorant, especially for a university professor. As an art historian, I would like to waft a few names heavenward to Dr. Friedman (if he is in heaven): the Egyptian pyramids, Jacques-Louis David, Richard Arkwright, Wernher von Braun—all these accomplishments, from architecture to art to invention to the “advance” of rocketry—came from centralized governments. I can only suppose that his students were too intimidated to try to inform him of the facts.

However, Friedman, when speaking against public education, asserted,

“…It isn’t the public purpose to build brick schools and have students taught there. The public purpose is to provide education. Think of it this way: If you want to subsidize the production of a product, there are two ways you can do it. You can subsidize the producer or you can subsidize the consumer. In education, we subsidize the producer—the school. If you subsidize the student instead—the consumer—you will have competition. The student could choose the school he attends and that would force schools to improve and to meet the demands of their students.”

Sounds good, but the flaw in the argument is that charter schools actually lower competition and prevent intervention by the “consumers” by limiting the alternatives. With public schools, all the officials, from the governor to the mayor, the superintendent, and the teachers, are accountable to the public, who can elect those who represent them. Neighborhood schools can respond to the needs of the community, while a charter school reacts to the desire for profit.

Certainly the profit motive and selfishness—virtues praised by Friedman—are great motivators, but certain public services are public goods paid for by the public and provided by the government, which does not have and should not have a profit motive. Friedman and his followers, called Neo-Conservatives (those lovely people who drove the American public into the Iraq War to the lasting profit of contractors), think the government should be run like a business. This neo-conservative philosophy is at odds with the founding ideals of the American government, expressed in the Declaration of Independence, the Constitution, and the Bill of Rights.

America is a nation built on the philosophy of the Enlightenment and is, therefore, based upon the Social Contract. The Social Contract is an idea drawn from Jean-Jacques Rousseau’s The Social Contract, written in 1762. Rousseau was contemplating the end point of the logic of Enlightenment philosophy, which proposed individual freedom and individual responsibility as opposed to the divine right of Kings and Queens. If human beings are not governed by a central authority ordained by God, then how are we to govern ourselves? His answer was that people came together freely and gave their consent to govern and to be governed, guided by the foundational idea of mutual respect, mutual rights, and mutual aid.

The American government was not founded on the ideal of the profit motive.

The American government was founded on the ideal of mutual consent.

The problem with the privatization of government services is that it removes mutual consent and removes accountability as it “gets government out of the way.”

Once the government is out of the way, the corporations have free rein over the citizens, who are their captive customers.

The Chicago School, or the economic philosophy of Milton Friedman, thinks about the role of government in terms of not-government or not-governing. In other words, less government means more corporate control and more profits for the wealthy at the top. When the government is shrunk, its withdrawal creates a space and a power vacuum, and the corporations rush in and fill up the open territory. The citizens become consumers without a vote. Neoconservatism is a form of public policy that is set on disenfranchising the public and reshaping society for the benefit of private profit.

Klein begins with an early experiment with the Shock Doctrine in Chile by Augusto Pinochet, who overthrew the legitimate government in a coup d’état and was advised in the conduct of his economic policy by Friedman himself. As Klein describes, the experiment in Chile would be repeated elsewhere. The formula was simple: find a country in which an event has put the population in a traumatized state, “shock” the people, and seize the system and reshape it to your own ends. According to Klein, Friedman advised Pinochet to implement

“…rapid-fire transformation of the economy—tax cuts, free trade, privatized services, cuts to social spending and deregulation. Eventually, Chileans even saw their public schools replaced with voucher-funded private ones. It was the most extreme capitalist makeover ever attempted anywhere, and it became known as a “Chicago School” revolution, since so many of Pinochet’s economists had studied under Friedman at the University of Chicago. Friedman predicted that the speed, suddenness and scope of the economic shifts would provoke psychological reactions in the public that “facilitate the adjustment.” He coined a phrase for this painful tactic: economic “shock treatment.” In the decades since, whenever governments have imposed sweeping free-market programs, the all-at-once shock treatment, or “shock therapy,” has been the method of choice.”

The United States (the CIA) supported the 1973 coup, but Pinochet quickly revealed himself to be a particularly ugly bedfellow. Nevertheless, the dictator, who wrecked Chile and killed and tortured its people, was preferable to any socialist politician, such as Allende, who had nationalized industry. As Klein pointed out, the citizens are always opposed to the economic theories of the Chicago School, because these theories do not benefit them, only the corporations. Indeed, when Pinochet died in 2006, the Chilean government probed the financial corruption of almost thirty years of misrule. According to The Washington Post, Pinochet had amassed ten tons of gold, or $160 million.

Imagine what $160 million could have done for the people of Chile.

Although Klein goes through a number of case studies of the Chicago School intervening in foreign nations with dictators eager to emulate Pinochet, she concentrates on the “event” in America that unleashed our own Shock Doctrine within our nation: September 11th. It is perhaps a coincidence that Pinochet seized power on September 11th, 1973, and that his coup was a dress rehearsal for the immediate reaction of the Chicago School neo-conservatives embedded in the Bush Administration. After 9/11, the astonishing leap from Afghanistan to Iraq, which may have surprised those of logical mind, was in fact a long-planned campaign into Iraq, site of massive oil fields. Klein states,

The Bush team seized the moment of collective vertigo with chilling speed—not, as some have claimed, because the administration deviously plotted the crisis but because the key figures of the administration, veterans of earlier disaster capitalism experiments in Latin America and Eastern Europe, were part of a movement that prays for crisis the way drought-struck farmers pray for rain, and the way Christian-Zionist end-timers pray for the Rapture. When the long-awaited disaster strikes, they know instantly that their moment has come at last.

Klein correctly points out that the doctrines of the Chicago School had never been popular with or desired by the American people. That said, many of the ideas and principles were implemented by the Reagan Administration’s program of what George H. W. Bush called “voodoo economics,” also known as the “trickle-down theory.” The concept that, if taxes were cut for the wealthy, the benefits would trickle down to the lower classes was disproved by two facts: (1) the incomes of the middle class stopped rising (and have stayed static to this day), and (2) taxes had to be raised by Reagan eleven times to offset a growing deficit. However, the great success of Ronald Reagan was that he introduced the idea that “government is the problem.”

If the government was “the problem” during the Reagan Administration, during the Bush administration the government became “the solution” for enriching corporations. For the first time in the history of America, the nation went to war on a credit card. The nation was urged to shop, not sacrifice, as the government conducted an endless “war on terror.” Except that it was not the government that was waging this war. The “military” or the “troops” in the field that the American people heard about were something of a screen for what was really going on in Iraq. As Klein explained it,

“…the Bush administration outsourced, with no public debate, many of the most sensitive and core functions of government—from providing health care to soldiers, to interrogating prisoners, to gathering and “data mining” information on all of us. The role of the government in this unending war is not that of an administrator managing a network of contractors but of a deep-pocketed venture capitalist, both providing its seed money for the complex’s creation and becoming the biggest customer for its new services. To cite just three statistics that show the scope of the transformation, in 2003, the U.S. government handed out 3,512 contracts to companies to perform security functions; in the twenty-two-month period ending in August 2006, the Department of Homeland Security had issued more than 115,000 such contracts.”

Furthermore, in the best tradition of the Chicago School, the huge cost increases incurred by privatizing the military and outsourcing fighting to contractors were hidden “off the books” and not put into the deficit until the Obama Administration. As Klein pointed out, while the American people were impoverished by this for-profit war of choice, Halliburton earned a $20 million profit. The Iraq War was an experiment in the large-scale privatization of war, waged by corporate interests and their stockholders. The Secretary of Defense, Donald Rumsfeld, put forward the idea of a small army, which hid the subtext of a large force of private contractors who would fight in “our” name with taxpayer dollars but without accountability. This hidden army was never counted in the number of people who were fighting in Iraq, but it doubled the number of military personnel fighting for American interests there. The result was a ten-year trillion-dollar war that started with a lie and will end in resignation.

Klein points out that the followers of the Chicago School and its Shock Doctrine go by a number of names: neoconservatives in America, living in so-called “Think Tanks” such as the American Enterprise Institute and the Hoover Institution, and “neoliberals” in Europe, indicating an interest in macroeconomics and in corporate globalization. The author decides upon a more descriptive term,

A more accurate term for a system that erases the boundaries between Big Government and Big Business is not liberal, conservative or capitalist but corporatist. Its main characteristics are huge transfers of public wealth to private hands, often accompanied by exploding debt, an ever-widening chasm between the dazzling rich and the disposable poor and an aggressive nationalism that justifies bottomless spending on security. For those inside the bubble of extreme wealth created by such an arrangement, there can be no more profitable way to organize a society. But because of the obvious drawbacks for the vast majority of the population left outside the bubble, other features of the corporatist state tend to include aggressive surveillance (once again, with government and large corporations trading favors and contracts), mass incarceration, shrinking civil liberties and often, though not always, torture.

By making an analogy to “torture,” Klein explains that the victim/nation is “softened up” through terrible events, which make human beings temporarily defenseless and susceptible to doing whatever it takes to remedy the crisis. As she says,

That is how the shock doctrine works: the original disaster—the coup, the terrorist attack, the market meltdown, the war, the tsunami, the hurricane—puts the entire population into a state of collective shock. The falling bombs, the bursts of terror, the pounding winds serve to soften up whole societies much as the blaring music and blows in the torture cells soften up prisoners. Like the terrorized prisoner who gives up the names of comrades and renounces his faith, shocked societies often give up things they would otherwise fiercely protect.

The Chicago School, according to Klein, long thought of itself as a school of thought or a philosophy rather than an economic theory. Just as the American military sought a city that had not been bombed upon which to drop the atom bomb, the better to ascertain the results, the Chicago School economists sought a “clean slate” upon which to write their doctrines, the better to ascertain the results. These economists imagined that the capitalist system was faultless, endlessly flexible and endlessly self-correcting, and, hence, infallible. This is typical Enlightenment thinking: an idealized model, generated by math and based upon a hypothesis.

The problem begins when the elegant model meets the real world. The economic system works only for corporations; the populations hate how they are disenfranchised and become restive. In order to control the experiment, the government must increase surveillance of its own citizens, who are constantly signaling their discontent. The disconnect is caused by a conceptual misfit: the government now operates for the benefit of the corporations but masquerades, as in America, as a democracy and allows a charade of elections which are financed and manipulated by corporations in a vicious circle. Caught in the middle, “We the People” become more and more angry and, eventually, a rebellion ensues to put things right again, as in Chile.

The fact that, while the Shock Doctrine may work, the Chicago School economic ideas do not has not given the Neoconservatives pause. Instead, they simply double down and repeat their assertions, for years, in the face of facts and documentation, all of which point to the contrary. As Klein points out, the Neoconservatives are “purist” thinkers, meaning that they think in theory and feel the need to wipe away any pollutants that sully or interfere with what they think of as the “free market.” One can understand the insistence of the Republican Party that the Environmental Protection Agency prevents jobs from being created by realizing that regulation per se is “impure.” The problem, as many have pointed out, is the logical outcome of such Enlightenment thinking: a stance of “purity” would end regulations totally, we would not be able to drink the water, and the Cuyahoga River would be ablaze once again.

Klein mentions that the Neoconservatives of the Chicago School were in the intellectual wilderness for decades, and, indeed, even today, orthodox economics and mainstream economists point out that the government has to take a role in regulating and directing the economy. Today, as we are mired in a neo-Depression, these economists are calling for Keynesian economic policies to prime the job market and to stimulate the economy. And the neoconservative politicians stand firm for a policy of purity and refuse to help any element of society except the wealthy. Their philosophy is in line with that of Milton Friedman, who decided that the nation went off the rails with the New Deal and created a “welfare state.” For nearly eighty years, it has been the goal of these anti-Keynesians to dismantle the role of government in society, from social safety nets to regulations that promote public health and safety.

Because of the popularity of the New Deal and its programs and the success of post-war government intervention in building a prosperous middle class through public policies, the “Chicago Boys” had to practice overseas, mostly in South American nations. Despite the fact that some of the students at the University of Chicago protested the corrupt and brutal killing regimes brought into being by Chicago style politics, Milton Friedman won a Nobel Prize for Economics in 1976 and apparently he never apologized for or agonized over all the horrible injustices done under his policies. As Klein explained it,

This intellectual firewall went up not only because Chicago School economists refused to acknowledge any connection between their policies and the use of terror. Contributing to the problem was the particular way that these acts of terror were framed as narrow “human rights abuses” rather than as tools that served clear political and economic ends. That is partly because the Southern Cone in the seventies was not just a laboratory for a new economic model. It was also a laboratory for a relatively new activist model: the grassroots international human rights movement. That movement unquestionably played a decisive role in forcing an end to the junta’s worst abuses. But by focusing purely on the crimes and not on the reasons behind them, the human rights…

Somehow the Chicago School escaped being discredited on moral and ethical grounds, and politicians realized that those economic policies were bad for the people who still cast votes in free nations. Therefore, Milton Friedman was disappointed in the performance of Richard Nixon, who understood that a contented population would reelect him. As James Carville said, “It’s the economy, stupid.” But later politicians would be bolder. Despite the undeniable truth of the terror and torture implemented by the Pinochet regime, free market politicians looked upon his work in Chile with favor. Klein states,

When Friedrich Hayek, patron saint of the Chicago School, returned from a visit to Chile in 1981, he was so impressed by Augusto Pinochet and the Chicago Boys that he sat down and wrote a letter to his friend Margaret Thatcher, prime minister of Britain. He urged her to use the South American country as a model for transforming Britain’s Keynesian economy. Thatcher and Pinochet would later become firm friends, with Thatcher famously visiting the aging general under house arrest in England as he faced charges of genocide, torture and terrorism. The British prime minister was well acquainted with what she called “the remarkable success of the Chilean economy,” describing it as “a striking example of economic reform from which we can learn many lessons.”

Klein studied Margaret Thatcher’s implementation of Milton Friedman’s doctrines, which worked so badly that her position as Prime Minister of Great Britain was saved only by a strange and unnecessary war, the Falklands War of 1982, fought on behalf of fewer than three thousand people and an almost equal number of sheep. Friedman would have preferred an economic crisis, a depression, a currency meltdown, or something like what we have today, a global collapse of the economic system. But Margaret Thatcher, the Iron Lady, went to war to cloak her failures. England was a difficult site for Chicago School politics to flourish, and, as Klein continues, the former Soviet Union and China were more successful in following the “purity” of the free market philosophy of Milton Friedman, who unapologetically advised China at the moment of Tiananmen Square. But then Friedman always maintained that the ends justify the means. He said,

A common objection to totalitarian societies is that they regard the end as justifying the means. Taken literally, this objection is clearly illogical. If the end does not justify the means, what does? But this easy answer does not dispose of the objection; it simply shows that the objection is not well put. To deny that the end justifies the means is indirectly to assert that the end in question is not the ultimate end, that the ultimate end is itself the use of the proper means. Desirable or not, any end that can be attained only by the use of bad means must give way to the more basic end of the use of acceptable means.

As Klein points out, the former Soviet Union, now known as Russia, was an ideal proving ground for a doctrine that had continually failed. She chronicles the psychological impact of the Chicago School’s theories come home to roost as practice: alcoholism and AIDS and prostitution and drug addiction and wealth concentrated in the hands of the few. Such is the lament of the hopeless under a doctrine of “planned misery.” She states,

Russia’s population is indeed in dramatic decline—the country is losing roughly 700,000 people a year. Between 1992, the first full year of shock therapy, and 2006, Russia’s population shrank by 6.6 million. Three decades ago, André Gunder Frank, the dissident Chicago economist, wrote a letter to Milton Friedman accusing him of “economic genocide.” Many Russians describe the slow disappearance of their fellow citizens in similar terms today. This planned misery is made all the more grotesque because the wealth accumulated by the elite is flaunted in Moscow as nowhere else outside of a handful of oil emirates. In Russia today, wealth is so stratified that the rich and the poor seem to be living not only in different countries but in different centuries. One time zone is downtown Moscow, transformed in fast-forward into a futuristic twenty-first-century sin city, where oligarchs race around in black Mercedes convoys, guarded by top-of-the-line mercenary soldiers, and where Western money managers are seduced by the open investment rules by day and by on-the-house prostitutes by night.

One might wonder why, given the manifold and manifest failures of the Shock Doctrine and the Chicago School philosophy, the Neoconservatives continued to be fruitful and multiply. The only answer that I could come up with is that the corporations like the policies because, once these are implemented, they become vastly enriched, even when the Chicago Boys can get only part of their agenda through, as in America. To return to the Rumsfeld idea of “transforming” the military into a corporation by outsourcing fighting to contractors, Klein recounts how unpopular this idea was with the generals, who would watch the military double in size with half of the personnel beyond their control. As she points out, the role of the government becomes one of subcontracting services to private businesses (which inevitably charge two or three times more), which causes the cost of any “government” service to spiral.

The philosophy of Milton Friedman made corporations and businesses profitable beyond their wildest dreams. Thanks to Presidents Bill Clinton and George W. Bush, more and more areas traditionally reserved for government professionals, who were often unionized, were turned over to corporations. The result was a gutting of unionized labor (which started with Ronald Reagan) and the disenfranchising of the voter, who could not confront a corporation in a town hall. Klein points out that Bush had energetically privatized the prisons in Texas and then went on to privatize the War on Terror. What Bush wanted to do, she asserts, was to “hollow out” the government. With what seems like a preternatural patience, the neoconservatives who had been waiting and practicing for years came into their own, thanks to the calamity and trauma of September 11th. She states,

“September 11 has changed everything,” said Ed Feulner, Milton Friedman’s old friend and president of the Heritage Foundation, ten days after the attack, making him one of the first to utter the fateful phrase. Many naturally assumed that part of that change would be a reevaluation of the radical antistate agenda that Feulner and his ideological allies had been pushing for three decades, at home and around the world.

9/11 allowed the collapse of Enron to happen with less notice than it would otherwise have been subjected to. But Enron and its mode of doing business were a harbinger of things to come: total economic collapse through corruption, one of the maladies that has plagued the Chicago School since the experiments began in the 1970s. The problem of outside contractors happily ripping off the government had been going on for years, but under the Friedman-style government of George Bush the process accelerated to the extent that we still do not have a complete accounting of taxpayer money that was misspent or simply lost. Vast sums of money went not to stimulate the American economy, which remained stagnant, but to corporations. As Klein recounts,

…New Deal would be exclusively with corporate America, a straight-up transfer of hundreds of billions of public dollars a year into private hands. It would take the form of contracts, many offered secretively, with no competition and scarcely any oversight, to a sprawling network of industries: technology, media, communications, incarceration, engineering, education, health care. What happened in the period of mass disorientation after the attacks was, in retrospect, a domestic form of economic shock therapy. The Bush team, Friedmanite to the core, quickly moved to exploit the shock that gripped the nation to push through its radical vision of a hollow government in which everything from war fighting to disaster response was a for-profit venture.

The economic doctrine of the Bush Administration, expressed by Bush’s Budget Director, Mitch Daniels, and others, was that the government did not provide services but purchased them from an outside contractor and resold them to the American public, who was then forced to pay for these services at two or three times the market value. The result was a guaranteed deficit, draining the government surplus created under Bill Clinton and the future of the nation, which was now floating off on a sea of endless and unmentionable debt. The War on Terror made contractors and corporations rich, and the nation poor.

For decades, America has been fighting one war after another and has been existing in a low-level state of Total War, flying low under the public radar. In the same way, the War on Terror was fought by corporations and by a small group of beleaguered American soldiers who were used as window-dressing. These soldiers were isolated from the mainstream, which allowed the War to be fought globally without much scrutiny and without inconveniencing the American people, who were busy “shopping” for homes and commodities. The best part of the War was that it could conceptually go on as long as America could borrow money from China. As Klein says,

From a military perspective, these sprawling and amorphous traits make the War on Terror an unwinnable proposition. But from an economic perspective, they make it an unbeatable one: not a flash-in-the-pan war that could potentially be won but a new and permanent fixture in the global economic architecture.

What Naomi Klein calls the “disaster industry” was based on high-tech venture capital businesses ideally suited to hunting “terrorists” with sophisticated technology. Such technology is hugely expensive and is ideally suited to endless improvement, or to put it another way, an endless revenue stream. An entire corporate structure sprang up, designed to fight a war that could not be won—by definition—and, therefore, a war that could never end—like the profits. Klein points out the vast fortunes some fortunate individuals amassed following 9/11, predicting and causing the current inequities between the very rich and the stalled and suffering middle class.

The problem is that once government services are auctioned off to no-bid contractors, the nation has been given to corporations whose motive is profit, not democracy and not public service and not the public good. Corporations answer to stockholders, not to voters. For example, insurance companies are motivated to make money not to make people healthy. A corporation could be providing any sort of good and a health care company or a military contractor is simply filling in a blank corporate space, providing a good or a service, not because it is dedicated to public service but because the business wants to make a profit. For those who have wondered why America invaded Iraq or for those who charged that the war was waged to enrich Vice President Dick Cheney’s company, Halliburton, Klein offers this succulent explanation:

Saddam did not pose a threat to U.S. security, but he did pose a threat to U.S. energy companies, since he had recently signed contracts with a Russian oil giant and was in negotiations with France’s Total, leaving U.S. and British oil firms with nothing; the third-largest proven oil reserves in the world were slipping out of the Anglo-American grasp. Saddam’s removal from power has opened vistas of opportunities for the oil giants, including ExxonMobil, Chevron, Shell and BP, all of whom have been laying the groundwork for new deals in Iraq, as well as for Halliburton, which, with its move to Dubai, is perfectly positioned to sell its energy services to all these companies. Already the war itself has been the single most profitable event in Halliburton’s history.

When Klein went to Iraq to investigate this economic story, she could, of course, find few people to talk with her about the underlying cause and effect of the war for profit in Iraq. There was enough public scrutiny of the war and of the amount of money that was wasted, the toll of American lives in the service of Halliburton, and the cost of the war to American honor that the Bush Administration was forced to abandon its dream of a permanent occupation and agreed to begin a withdrawal—of the military, not the contractors. It is still unclear what kind or extent of an American presence will remain in Iraq. Klein discusses her trip to Iraq,

The fact that it was hard to find people in Baghdad who were interested in talking about economics was not surprising. The architects of this invasion were firm believers in the shock doctrine—they knew that while Iraqis were consumed with daily emergencies, the country could be auctioned off discreetly and the results announced as a done deal. As for journalists and activists, we seemed to be exhausting our attention on the spectacular physical attacks, forgetting that the parties with the most to gain never show up on the battlefield. And in Iraq there was plenty to gain: not just the world’s third-largest proven oil reserves but territory that was one of the last remaining holdouts from the drive to build a global market based on Friedman’s vision of unfettered capitalism. After the crusade had conquered Latin America, Africa, Eastern Europe and Asia, the Arab world called out as its final frontier.

It was clear from the start that Iraq was considered to be, not a nation, but a site of corporate exploitation on a scale that made nineteenth century imperialism look tame and lame. Iraq was to be a staging ground for extraction and profit while the compliant and grateful population looked on in “shock and awe.” As often happens with these best-laid plans of the Chicago Boys (who seem perennially divorced from reality), those very pesky people caused problems from the start: looting, complaining, and forming insurrectionary groups. As Klein recounts, because the “planners” did not plan for the Iraqi people, the occupation was a disaster from the start:

The Bush cabinet had in fact launched an anti-Marshall Plan, its mirror opposite in nearly every conceivable way. It was a plan guaranteed from the start to further undermine Iraq’s badly weakened industrial sector and to send Iraqi unemployment soaring. Where the post-Second World War plan had barred foreign firms from investing, to avoid the perception that they were taking advantage of countries in a weakened state, this scheme did everything possible to entice corporate America (with a few bones tossed to corporations based in countries that joined the “Coalition of the Willing”). It was this theft of Iraq’s reconstruction funds from Iraqis, justified by unquestioned, racist assumptions about U.S. superiority and Iraqi inferiority—and not merely the generic demons of “corruption” and “inefficiency”—that doomed the project from the start. None of the money went to Iraqi factories so they could reopen and form the foundation of a sustainable economy, create local jobs and fund a social safety net. Iraqis had virtually no role in this plan at all.

Predictably, the Iraqis were angry with the Bush Administration and reacted accordingly. Instead of working with the people they had invaded and conquered, the government treated the innocent Iraqis ruthlessly, disenfranchising them from their own country and offering them no choice but insurrection. The worst elements in Iraqi society floated to the top, while the very people who could rebuild the country simply left. Unable to work with the occupation government, which was intent on sucking the natural resources dry, the best and the brightest, the educated and the trained sectors of the society, fled the conditions created by the ineptness and greed of the Bush Administration. But Klein insists that the real cause of the disaster was deeper than mere inexperience:

Iraq’s current state of disaster cannot be reduced either to the incompetence and cronyism of the Bush White House or to the sectarianism or tribalism of Iraqis. It is a very capitalist disaster, a nightmare of unfettered greed unleashed in the wake of war. The “fiasco” of Iraq is one created by a careful and faithful application of unrestrained Chicago School ideology.

The occupation forces viewed local Iraqi businesses as elements to be purchased by international corporations that would then proceed to “downsize” the employees and globalize the assets. While the Iraqis rebelled against their livelihoods being wrested from them by global corporate interests, Klein points to another aspect of the Occupation—the reluctance of the Neoconservatives to allow a government to be built for the people. The Neoconservatives did not believe in government and it would be hard to imagine a contingent of the American population more ill suited to putting a shocked and defeated people on the road to democracy. The followers of Milton Friedman believe, not in democracy, not in the Social Contract, but in an everyman-for-himself philosophy.

Every person has to compete within an economic zone where everything is for sale. If you fail to compete on this narrow and specialized field, it is your fault. The government’s only role is to stage and facilitate economic warfare, the Darwinian survival-of-the-fittest scenario. It has been remarked on over and over, especially in Rajiv Chandrasekaran’s excellent 2006 book, Imperial Life in the Emerald City: Inside Iraq’s Green Zone, that the people hired to undertake the delicate and difficult task of reconstructing Iraq were young and inexperienced and given their jobs based, not on their understanding of nation building, but on having the “correct” positions on conservative “values,” such as abortion. Klein makes it clear that the litmus tests that so puzzled me when I read Chandrasekaran’s book were probably just proofs of philosophical positions. As she explains of the young people,

…they were frontline warriors from America’s counterrevolution against all relics of Keynesianism, many of them linked to the Heritage Foundation, ground zero of Friedmanism since it was launched in 1973. So whether they were twenty-two-year-old Dick Cheney interns or sixty-something university presidents, they shared a cultural antipathy to government and governing that, while invaluable for the dismantling of social security and the public education system back home, had little use when the job was actually to build up public institutions that had been destroyed.

Thanks to this army of neoconservatives, there was a vacuum where a government should have been. Klein points out that the Iraqis who remained in their country had no government to coalesce around. There was no government, only an army of corporate occupiers, determined to loot and leave. With few Iraqis allowed to be public presences or to have roles or jobs in the new corporate state, the people turned to the one element of society that had not been abolished, looted or corrupted: fundamentalist Islam. The Muslim religion, in what had been a secular state under Saddam, became the only unifying force for the Iraqis. A nation that had not allowed terrorists to disturb the dictator was now in the hands of terrorists, and small fires of resistance broke out everywhere. Soon the Green Zone was under siege and under fire, interrupting the contractors in their systematic looting of the nation’s resources.

The corporations were interested in taking money for not rebuilding Iraq, which had been bombed into submission by its “liberators.” In activities still incomprehensible, corporations such as Halliburton and Kellogg Brown and Root spent billions in borrowed money to “construct” facilities and buildings so bad and so dangerous that one has to wonder how such atrocities were actually carried out. If anyone should be so bold as to sue, the corporations were beyond accountability: we paid them but we could not control them—the perfect situation for global looters. As Klein says,

In March 2006, a federal jury in Virginia ruled against the company, finding it guilty of fraud, and forced it to pay $10 million in damages. The company then asked the judge to overturn the verdict, with a revealing defense. It claimed that the CPA was not part of the U.S. government, and therefore not subject to its laws, including the False Claims Act. The implications of this defense were enormous: the Bush administration had indemnified U.S. corporations working in Iraq from any liability under Iraqi laws; if the CPA wasn’t subject to U.S. law either, it meant that the contractors weren’t subject to any law at all—U.S. or Iraqi.

At the end of the book, Klein circles around from her long analysis of the looting of Iraq and returns to New Orleans after Katrina. It seems that the Iraq model could be used in New Orleans to the benefit of tourist industries and developers. This time, the disaster allowed the government to transport any citizens who might protest and ship them out of state so that the dismantling of entire neighborhoods and school districts could proceed unopposed. As in Sri Lanka after the tsunami, the “abandoned” territory was privatized and gentrified. The model of privatization has become so stealthily and systematically insinuated into the fabric of the American way of life that the private contractors have become stronger and less accountable. As Klein expresses it,

The emergence of this parallel privatized infrastructure reaches far beyond policing. When the contractor infrastructure built up during the Bush years is looked at as a whole, what is seen is a fully articulated state-within-a-state that is as muscular and capable as the actual state is frail and feeble. This corporate shadow state has been built almost exclusively with public resources (90 percent of Blackwater’s revenues come from state contracts), including the training of its staff (overwhelmingly former civil servants, politicians and soldiers). Yet the vast infrastructure is all privately owned and controlled. The citizens who have funded it have absolutely no claim to this parallel economy or its resources.

That these private corporations have the fate of the nation in their unaccountable hands is made clear when one looks at the banking industry. Nowhere is the idea of public money and private gain truer than in the world of finance. It is the public who risks and loses and the private that is saved and rewarded. Klein’s thesis of “disaster capitalism” is playing out across America, where we are seeing what she calls “disaster apartheid.” The rich become richer and isolate themselves from the increasingly alienated lower classes, the middle and working and unemployed and underemployed classes.

It is not just the gated communities that withdraw from the Social Contract, such as the one that Klein describes in Georgia; it is the gated minds that withdraw from the American promise: that we are one people and one nation. Today, we are a nation divided between the rich and protected, who reap the rewards of a tax code rigged in their favor, and everyone else, who grows poorer. The rich are shielded by powerful interests who are less worried about a single wealthy person on a Long Island estate in the Hamptons than about the “slippery slope” down which the attention of the citizens might slide to the corporations that also do not pay taxes. Klein points out that Israel, like America, has become a divided society, profiting from the “threats” of “terrorism” coming from tribespeople who are living in a seventeenth-century society.

The absurdity that twenty-first-century nations should establish an economic system dedicated to arming themselves against people who would leave us alone if we just left them alone has created a huge gulf between privatized wealth and public poverty. Klein states that American governments under the spell of Milton Friedman fear democratic socialism more than they fear any outside threat. Any hints of “income redistribution” or “economic fairness” bring about instant assaults from the conservative media, which howls with charges of “Marxism” and “Nazism.” Either these people are uneducated and don’t know the difference between the theories of Marx and the practices of Hitler, or they simply hurl word grenades indiscriminately.

“Socialism,” or a government that actually governs, is a dire threat to the followers of Milton Friedman. The people who run as conservatives run for office, not to govern, but to un-govern. Their role is that of moles: to “hollow out” the government and leave it an empty tunnel under the crumbling sod of a nation that was once called “America.” Running this brave new world will be a handful of corporations, those “people” who cannot vote but can buy elections.

As I write, there are protestors on Wall Street, “occupying” Zuccotti Park. The protests against the implementation of the Shock Doctrine upon Americans have been going on for years, ever since the Wall Street bailouts. In another post, on Inside Job (2010), I wrote of the complicity of now-discredited economists and economic doctrines in causing a global economic crisis from which it will take us years to recover. I say “us,” but most of “us” will never regain our strides or places in a once-thriving society that was looted by the rich and powerful, who are affronted when “we” demand “economic justice.” “We” are “Marxists” and “unpatriotic.” The Shock Doctrine ends on a hopeful note as Naomi Klein sees signs that people are trying to take their country back. Our future hangs in the balance, and some of us wonder if this is our last chance before we all become “America, Inc.”

Artists make our memories for us. We tend not to think about this prodigious feat of collective historical construction, but when we ask ourselves what is the first image that comes to mind about, say, Iwo Jima, we answer, “The Iwo Jima Memorial.” Not the wounding photographs published sixty years ago in Life Magazine but the huge cast bronze group of men, thirty-two feet tall, raising the American flag on Mount Suribachi. The sculptor, Felix de Weldon, is less well-known than the photographer Joe Rosenthal, who took the famous photograph upon which the Marine Corps War Memorial (1954) was based.

But, in recent times, we have been faced with challenges that have gone far beyond the traditional monuments or memorials. The problem of the last forty years has been how to represent tragedy and to wring something redeeming from it.

Two recent memorials, dedicated within weeks of each other, demonstrate, one eloquently and one with a notable lack of eloquence, the delicate questions that the artist must answer visually—how to touch the hearts and minds? How to create a meaningful history? How to heal wounds? On one hand, we have what I consider a complete and colossal failure, painful to look at and ill-conceived at its very heart, the Martin Luther King Memorial. On the other hand, we have the National September 11 Memorial in the footprint of the World Trade Center in New York. This memorial was designed by architect Michael Arad, and this very different structure is far more successful, suited to the site and understated in its impact.

The Martin Luther King Memorial rears up out of the green turf of the Mall, yet another blemish on what is becoming an overcrowded field cluttered with really bad works of public “art.” The huge white sculpture of the Civil Rights leader emerges like a bad Michelangelo work (evoking an unfortunate memory of the Renaissance artist’s series of Slaves for the Tomb of Julius II). King’s arms are crossed and he has a pouting, unpleasant expression on his face.

The sculpture was based on a photograph by Bob Fitch in 1966. This particular image was a strange choice, a passive pose for an active man. Fitch was summoned to King’s office when the preacher was enjoying a rare pause between appointments and had time to pose for some photographs. He and his wife spent a great deal of time in Atlanta with the Kings, with his wife acting as Coretta King’s secretary. It is clear that King was comfortable with Fitch, for his posture is relaxed and peaceful. His folded arms are a gesture of relaxation and familiarity with an old friend. No one asked Fitch’s permission to use (misuse) his original image, reverse it, and turn it into a cold totalitarian figure looming over the Mall.

The pose of Martin Luther King was transformed into something forbidding and off-putting—a stand-offish stance that repels rather than attracts. The folded arms ward off any approach—an impossible posture for a leader who gathered followers. It is strange that the first African-American honored on the Mall should be so glaringly white. Chosen by the memorial committee from an international competition, the Chinese artist Lei Yixin gave King a very Asian look, as if King were Chinese or Mao was African-American. Yixin stated,

“Dr. King’s vision is still living, in our minds; we still miss him, we still need him,” said Yixin through a translator, calling the sculpture the most important of his life, technically and emotionally. “I am trying to present Dr. King as ready to step out … this is King’s spirit, to judge people from their character, not race, color or background.”

Nice words but one can’t help wondering what David Hammons would have done with such an opportunity.

There is a clumsy literalness about the idea of King on the part of the uninspired artist that captures nothing of the history of King the minister, King the leader, King the martyr. Adding to the illustrative quality of the experience are the very silly chunks, supposedly made from a white granite “mountain,” broken into two parts named, and one shudders at these names, the “Mountain of Despair” and the “Stone of Hope.”

King’s speech from which the phrase was taken is far more eloquent: “Out of the mountain of despair, a stone of hope,” from his “I Have a Dream” speech. The preacher holds a copy of the speech in one of his hands, the speech read on that very Mall on August 28, 1963. And if that is not enough, there are fourteen other quotations from his speeches. According to other news sources, the great poet Maya Angelou objected to the shortening of some of the fourteen quotes. I must agree that it is nothing short of vandalism to tamper with someone’s writing, especially a speaker as powerful as Martin Luther King.

The quotation that Angelou disliked the most was originally, “Yes, if you want to say that I was a drum major, say that I was a drum major for justice. Say that I was a drum major for peace. I was a drum major for righteousness. And all of the other shallow things will not matter.”

But the short version states simplistically, “I was a drum major for justice, peace and righteousness,” which immediately conjures up an image of King in a band uniform tossing a baton into the sky. One wonders why this and other statements were shortened—budget costs?

The failure of this work is so great, in my opinion, that it makes the World War II Memorial—an artistic abjection that I have long thought to be a huge fascist monstrosity—look positively noble. The problem is not that the artist is not American but that the artist is not imaginative or inspired. In other words, Mr. Yixin had a photograph but no concept. Can you make art when you know nothing about your subject? Can you create a successful memorial out of a concept?

For an answer, look no further than Maya Lin’s Civil Rights Memorial at the Southern Poverty Law Center of 1988. She was in her twenties and had never learned about the Civil Rights Movement, but Lin read one of King’s speeches, which included a quote taken from the Bible. King paraphrased Amos 5:24 and said, “We will not be satisfied until justice rolls down like waters and righteousness like a mighty stream.”

As Lin said later, “The minute I hit that quote I knew that the whole piece had to be about water. I realized that I wanted to create a time line: a chronological listing of the Movement’s major events and its individual deaths, which together would show how people’s lives influenced history and how their deaths made things better.”

The architect made a powerful black circle engraved with the forty names of all those who had died in the cause of voting rights, including Martin Luther King in 1968. Water streams over the surface of the smooth etched surface, healing the wounded hearts of those still waiting for justice. In other words Maya Lin had a concept.

What kind of memorial would have been fitting for Martin Luther King? King was a small man who led a big movement, a long march that continues today, and he agreed to join the Civil Rights protests knowing that once he did, he was a dead man. We forget today that, for a black child, going to school with white boys and girls in the 1950s was impossible—textbooks touched by a black child could not be touched by a white child: books were kept separate and unequal. We forget today that for a black man to try to vote in the South in the 1960s was to invite the inevitable lynching.

Martin Luther King made integrated schools possible, made it safe for African-Americans to vote, made it possible for Barack Obama to become President. King gave his life, as did many others, so that American citizens could have American rights. There was something knowing and innocent about King’s face: his eyes were set wide apart and his light brows made him look open and welcoming. He was a man with a face that seemed to know what was coming.

King’s role model was Gandhi, and we see a photograph of the Indian leader in the photograph by Bob Fitch. Twenty years after Gandhi’s assassination, Martin Luther King, apostle of non-violent protest, was murdered. I believe his memorials are everywhere and appeared spontaneously.

The Help (2011)http://jeannewillette.com/2011/08/11/the-help-2011/
http://jeannewillette.com/2011/08/11/the-help-2011/#commentsThu, 11 Aug 2011 17:00:05 +0000http://jeannewillette.com/?p=515MAID IN THE SOUTH

It was one-thirty in the afternoon. On a Wednesday. It was Orange County. South Orange County, one of the most white and most Republican sections of California. Who but me, I thought, would be in the theater to see a movie about black maids in Jackson, Mississippi? In 1963? To my shock, the theater was packed, floor to ceiling, stem to stern. All white people to be sure, but they—mostly middle-aged and young, a few oldsters—were there. The OC represented. The audience laughed and cried and applauded in the end. Despite the reviews, which have been mixed and cautious, this film may be a nice little hit at the end of the summer.

Then I came home and watched Lawrence O’Donnell’s The Last Word and heard a commentator I respect, Melissa Harris-Perry, lambast the movie. Her distaste for the film was tweeted at regular intervals while she watched it. I have no intention of debating or disagreeing with Dr. Harris-Perry, but I would like to present a different viewpoint. I understand her objections and, as I drove to the theater this afternoon, I, too, sighed and said to myself, “Wouldn’t it be nice, for once, for a story about black people to be told from the perspective of black people by black people?” “Wouldn’t it be nice to have a film about the Civil Rights movement that didn’t have a white person as the spokesperson for or the rescuer of black people?” I had the same qualms when writing about The Blind Side. I had already read some reviews of The Help that called attention to the way in which the black maids were asked to speak in “dialect.” I was braced for the patronizing white-centric inevitability, but I am a huge Viola Davis fan and had been waiting for this film for months. Therefore, I went to The Help on the first day it opened, so what did I think? Attack or support the film?

I come back to my original point: Wednesday, one-thirty, Orange County, many white people in the theater. Good Job.

Like Dr. Harris-Perry, I am an educator and a professional public speaker. I know, first hand, what happens when you approach a general audience with a general education on a scholarly level. And it’s not good. You sound patronizing and the audience justifiably reacts to you with hostility. You lose your opportunity to do what you were hired to do: teach. You have to start with where your audience is. You don’t talk down; you talk with. You don’t lecture; you use humor and pathos and facts to get your point across and listen to the people who have come to see you. I always say, I teach only to learn.

I live in an educated neighborhood (the University of California, Irvine is five minutes away), but I would guess that I was among the few in the audience who had watched the wonderful PBS special on the American Experience series, Freedom Riders, this past May. Not everyone is interested in history or in politics. I know few people who match the total geekiness that is me. And it is surprising how little Americans know of their own history. Many people have heard of “Kent State,” but how many people have heard of “Jackson State”? Same year, 1970. My point is that The Help is another Hollywood attempt to teach history through a human-interest story. It’s what I call “infotainment”: the film informs and it entertains; otherwise no one would watch. “Real” history belongs on PBS.

The definitive Civil Rights movie along the lines I outlined, with black characters telling the story from the black point of view, has yet to be made. Mississippi Burning reduced African-Americans to extras, Ghosts of Mississippi did a bit better, and the only film I can think of, structured from the black point of view, that I have seen is Rosewood, a wonderful movie by John Singleton, who kept the white presence in its historical place. This last was a film almost no one went to see, black or white. Rosewood, a horrific true story of the destruction of a prosperous middle class African American community named “Rosewood” in North Florida in 1923, was a powerful and moving film—I have shown it to some of my classes—but it came and went without causing much of a ripple. But people will go and see The Help. Why? Because, as a Southern woman would say, “you catch more flies with sugar than with vinegar.”

Not that The Help is sugar; it is not. Yes, there are places to laugh, just as there are places to cry. And yet, this is a serious movie that comes out of a real history of a real place. If you were “a Negro” in the 1960s and you lived in Jackson, Mississippi, you lived in one of the most dangerous places in America. The movie captures some of the very real fear of white backlash experienced by the black maids (Viola Davis and Octavia Spencer) who talked to a young white woman (Emma Stone) about what it was like to be in bondage as maids to the white women of the Junior League. Those who have read The Warmth of Other Suns by Isabel Wilkerson, one of the best books about the black experience in the South…ever, know that if you were black and lived in the South, you experienced a constant and unmitigated reign of terror. Talking to a white person as an equal, as “Aibileen” and “Minnie” talked to “Skeeter,” was a death sentence, never mind that the maids were talking about their “betters,” the white women who exploited and humiliated and terrorized them.

The book, The Help, by Kathryn Stockett, was fiction, or so the author says. Stockett, who is being sued by her family’s maid, was born too late (1969) to have any authentic experience of those terrible years of the Civil Rights movement, especially in the early sixties. Her distance from history might account for the lack of weight and urgency in this film, but, as a native of Mississippi, Stockett should be commended for paying some sort of penance in her quasi-confessional story that is a tribute to the endurance of generations of oppressed African-American women.

Those who feel that the character of “Hilly Holbrook,” played by Bryce Dallas Howard, was an exaggeration or a caricature, or was overplayed by the actor, should watch the news footage of attempts to integrate public schools in the South and look at the faces of the white protestors and listen to the voices of the women in the mobs. True, there is what Harris-Perry called a “mean girl” atmosphere among the Southern women who played bridge to pass the heavy time, but women like that were genuinely mean, cruel and racist in real life. People like that, women like that, still exist today, to be sure, but today’s culture does not countenance that kind of behavior. In 1963, these women would have had no shame in their actions and no understanding that they were monsters.

In the presentation of the matriarchy, the film lets the men of Jackson off easily. “Skeeter” falls briefly for the unlikely wooing of a frat boy-alcoholic in the oil business (Chris Lowell), but otherwise the males leave the home front and the running of the maids to their wives. The men had other things to do, such as enforcing the iron laws of Jim Crow and the terror atmosphere of segregation through insanely unconstitutional laws and plain old brutality. The wives are the second line of offense against the black citizens of Jackson who are not allowed to vote or eat at local restaurants or sit in the main auditorium of the movie theater or sit in the front of the bus or try on clothes in the department store or use the restroom in the houses they cleaned. My mother spent her entire life in the South and there was a mysterious toilet in her basement. Its presence was incomprehensible to me, for like Kathryn Stockett, I left the South early and never looked back. Why was there a toilet in the basement, sitting exposed without any kind of privacy? My mother never confessed or explained. I was an adult before I caught on: it was the maid’s toilet.

The Help starts with the toilet issue. Maids were expected to work twelve- or fourteen-hour days in a white home without using the toilet. That is what segregation meant. The line between black and white had to be held at all times and in all places. The space between maid and employer—and it was a wide space, an unbridgeable gulf—could never be crossed. One slip, one acknowledgement that your black maid was also a human being, and the entire edifice of inequality would come tumbling down. It was a strange system in which white women entrusted their children to black maids and yet could not share the toilet with them.

Robert Frank, Charleston, South Carolina, 1955

The Help does make clear that, like slavery, segregation of the races was a social system that poisoned the souls of the perpetrators. Like slavery, segregation was a kind of psychological illness that dehumanized the enforcers, who thought they were dehumanizing those whom they abused. Make no mistake; the housewives in The Help were very dangerous to the black maids. The South had spent millions of dollars duplicating public facilities so that blacks and whites would never come into physical contact. This region of the country was the poorest, but no amount of money was too much to keep “our way of life” intact.

"Colored Waiting Room," Jackson, Mississippi

One of the blessings of the Civil Rights era was the leadership of Dr. Martin Luther King, who taught non-violence, love and forgiveness. All the violence of the sixties came from one side, the white side, the side that had the most to lose. And here is where The Help reads false: black maids would never have trusted a white person with their lives. In Mississippi, a black person was not allowed to look into the eyes of a white person. “Aibileen” had raised seventeen white children, and every one of these children repudiated her as an adult and did not think twice about reinforcing a shameful social situation of unconscionable injustice. The white child was not allowed to use the word “Mrs.” when addressing the maid; she was permitted to employ only first names, and the child was taught not to shake hands with a black person.

This is the region of the nation where a fourteen-year-old boy was beaten to death because he spoke to a white woman, where three Civil Rights workers were murdered because they tried to register blacks to vote, where blacks were trapped inside a bus that was then set on fire by a white mob because the Freedom Riders wanted to use a white waiting room. Under no circumstances would a black woman (“Minnie”) serve a white woman (“Hilly”) a pie made of her poo and then admit it. Serve the poo pie, yes; admit it, never. The film burdens one character, “Hilly,” with almost all the racism of the region and carefully presents a number of “good” white people who are “enlightened” about race. But these white characters are without power or leadership. Allison Janney plays “Skeeter’s” mother, who regrets that she fired the family maid, but she is dying, and the young couple, “Celia and Johnny Foote” (Jessica Chastain and Mike Vogel), who are finally nice to “Minnie,” are social outcasts. Despite the good feelings, no one ever thinks to offer Social Security to The Help.

But it’s 1963 and the conversation on race is not quite ten years old and Mississippi is getting ready to experience its close up on national television. The Help sketches a glimpse of a precarious culture about to be visited by the conscience of the Twentieth Century, a culture on the edge of violent change.

So The Help is a sweet, gentle film, a fragile fantasy, but it is a teachable moment. The audience is led to identify, not with Skeeter, who isn’t particularly interesting, but with the victimized and proud black maids in their gray and white uniforms. The acting, especially that of Viola Davis and Octavia Spencer, is worthy of Oscars.

One can justly complain that black women should have better roles in Hollywood and that Davis, who is beautiful and luminously talented, is way overdue for her star turn, and one can argue that it is a questionable decision to ask these women to speak as if they were characters in Uncle Remus—but they shine and outperform everyone else on the screen. It would have been nice to have the maids speak dialect in the homes of white people and communicate among themselves in normal dialogue, a device that worked so well in Skin Game.

I think the film’s merits outweigh its faults and that it has something very valuable to offer to people who are too young to remember the Civil Rights era and to those who have never lived in the South. The Help is not just a watch-and-learn film; it is a watch-and-enjoy movie; it is a laugh-and-cry movie. Do Viola Davis and Octavia Spencer a favor, honor their performances, and see this movie.

The deportation of French Jews to their deaths in Nazi concentration camps raises questions similar to those asked of the Germans—how could such supposedly “civilized” peoples enter into a cold-blooded program of mass extermination? Sarah’s Key puts the question squarely to the people of France who took decades to acknowledge their complicity and participation in the roundup of French citizens during the German occupation of France. In May of 2010, the British magazine The Economist summed up the rather sorry record,

The French have tended to confront their record under Nazi occupation with a mixture of denial, silence and myth. The second world war was not on the school curriculum until 1962. Textbooks scarcely mentioned the Holocaust. No French leader from de Gaulle to Mitterrand acknowledged the state’s part in deporting Jews to Nazi death camps. It was not until Jacques Chirac became president in 1995 that the French state accepted its official complicity, prompting much soul-searching over collaboration, memory and guilt.

As the film shows, some of the French participated with gusto while others were reluctant and even defiant heroes who tried to help the Jews. Despite individual acts of mercy or heroism, it is clear that without the passivity of the majority of the French, the deportations could not have happened. Denmark sheltered and protected its Jewish population, but the French did not. In contrast, trains full of French Jews bound for death left for concentration camps year after year, up to three days before the Allies marched into a liberated Paris. The maniacal determination to continue to slaughter up to the last minute, even when it was clear that the Germans had lost the war, was unprecedented—even soldiers surrender when they are defeated.

Sarah’s Key, based on Tatiana de Rosnay’s 2007 book, is the story of the infamous raid, or rafle, on French Jews, who were then deposited in the Vélodrome d’Hiver (Winter Velodrome). This sports arena, once the site of bicycle races, was the holding pen for these tragic people, mostly women and children. After five days without food or water or sanitation, the Jews were sent to interim camps in France. There, mothers were separated from their children and sent on to their final destination in concentration camps in Poland. The children spent weeks in camps such as the one at Drancy before they too were shipped to the gas chambers.

The French demolished the bicycle stadium after the war, and this site of such suffering and other sites of infamy have been thoroughly obliterated. Under contract from the Gestapo, French moving companies would follow a Nazi sweep through a Jewish neighborhood, gather up the contents of vacated Jewish flats, and take clothing, furniture and personal items to sorting sites all over Paris. These buildings for the “appropriations” have all disappeared, and the sites now have new identities—an advertising agency, a haute couture fashion house and a construction site.

A few memorials for the victims exist, but it was not until 1995 that the French finally came to grips with their role in the extermination, when President Chirac gave a speech that pleased no one but began the process of healing a long-festering wound. “These dark hours will stain our history forever, and are an insult to our history and tradition. Yes, the criminal insanity of the occupier was seconded by the French, by the French state,” Chirac said.

In 2010, despite the recent release of a French film about the Vél’ d’hiv’, The Round Up (La Rafle), which focused on the fate of the Jewish children left behind, President Sarkozy refused to add anything to these original comments. Indeed, The Sorrow and the Pity (1969), directed by Marcel Ophüls, was an early and isolated effort; it was followed by Claude Lanzmann’s Shoah (1985) and Louis Malle’s Au revoir les Enfants (1987), and these remain among the earliest and most powerful tellings of the Holocaust by French artists. Slowly, books have emerged on this traumatic period of history that the French want to forget. A quick glance at the publications makes it clear that there was silence until a new generation began to re-write French history in the 1990s, a full decade after the Germans began to take serious steps at atonement. Sarah’s Key is the story of how history has an uncomfortable way of not dying.

Starring Kristin Scott Thomas as an investigative journalist, Sarah’s Key, is a fiction that is also an allegory of guilt and shame. “Julia Jarmond” works for a not-so-well known news magazine and snags the assignment of doing a substantive story on the roundup at the “Vél’ d’hiv’” as the French refer to this racial crime. “Julia” is an American married to a successful French businessman, “Bertrand Tézac” (Frédéric Pierrot) who takes over an apartment in the Marais that has belonged to his family for a long time. When the couple and their young teenage daughter decide to remodel and move in, “Julia” is in the midst of working up her story on Vél’ d’hiv’ and the film proceeds to tell two stories, one of the contemporary investigation about the Deportation and the other of the original Jewish inhabitants of the flat. Shortly after the Vél’ d’hiv’, the Tézac family acquired the flat and generations of guilt by complicity. The movie is a study of how evil lives long and thrives, spreading out to ensnare innocent people who become stained, if only through association.

The first family who lived in the apartment, the Jewish family, were the ideal family: two parents, two children, a boy and a girl, and a cat. As soon as the German occupation of France began in 1940, Jews were marked and forced to wear the dreaded yellow star. And then the French launched the poetically named “Operation Spring Wind,” the roundup of 12,800 Jews on July 16, 1942. When the Nazis arrive at the door of the Starzynski family to take the mother and her children away, the quick-witted “Sarah” (Mélusine Mayance) locks her little brother, “Michel” (Paul Mercier), in a large closet and carries the key with her to the Vél’ d’hiv’. It is here that the father is reunited with his wife and child, but he blames the little girl for leaving her brother behind. The child who sensed the danger could have no way of comprehending the true fate awaiting her—she assumed she would return to her brother. There is no way to give the key to anyone; there are no kind souls to trust, and the Starzynski family is shipped to Beaune-la-Rolande, where the parents are taken away and “Sarah,” ill and feverish, is left behind on her own.

When Sarah recovers, she manages to escape with a friend through the kindness of a French guard. Haunted by the driving desire to unlock Michel from the closet, she and the other little girl run for their lives and escape certain death at Auschwitz. They find refuge with a kindly couple (Niels Arestrup and Dominique Frot) in a small town, but the other little girl dies of diphtheria. The couple hides Sarah from the Nazis and disguises her as a young boy. This masquerade allows the trio to travel to Paris, and here is where the Tézac family encounters the Starzynski family, or what’s left of it. The Tézac father and son are the only ones present in the flat when Sarah bursts into the apartment on a hot August day and unlocks the door to free her brother. Of course Michel is dead. The Tézac family, the males, now have a secret which they keep to themselves: a dead Jewish child who obeyed his sister and waited for her to come home and let him out.

It is into this Tézac family that “Julia” has married. Of course, it is a bit of a coincidence that an investigative journalist would be married to the grandson of the man who took over an empty apartment “abandoned” by a Jewish family; but the film is really an allegory of loss and memory and the determination to not look back. Sarah’s Key is not only about the personal memory of the traumatic discovery of the body of a child, whose presence was known only through its death scent in the Paris summer, it is also about the will of an entire nation to forget and to put a twin humiliation behind it: the humiliation of occupation and the humiliation of liberation. True, the French hated the Germans but they also cohabitated and collaborated with them for four years. All over France, there are clear signs of denial of the complicity and participation of the ordinary French people in the persecution of the Jews. (For a more complete discussion of French ambiguity, read Holocaust Monuments and National Memory Cultures in Germany and France by Peter Carrier, published in 2005.)

The Deportation Memorial, of which I have written elsewhere, appears in this film as a background for “Julia’s” growing knowledge about the legacy of the deportations. Designed by Georges-Henri Pingusson in 1962, the memorial consists of white walls carved with the names of 200,000 French deported by the Nazis. The effect is to include the names of the 76,000 Jews without admitting that the French themselves were in charge of the operation, and to obscure the French participation in the Holocaust. The memorial alphabetizes the victims, unlike the Vietnam Veterans Memorial, designed twenty years later by Maya Lin, who organized the names of the dead chronologically. If this chronological arrangement had been followed by Pingusson, the day of July 16 would have been an entire section of Jewish names, overwhelming all the other non-Jewish names and indicating a systematic roundup of Jews. As in Germany, it was not just what Daniel Goldhagen called “ordinary Germans” who were culpable, but also the ordinary French and the corporations and businesses. (One of the best books written about the process of “coming to terms with the past” is Coming to Terms with the Nazi Past by Philip Gassert and Alan E. Steinweis, published in 2006.)

SNCF, the French national railway, so adept at building and operating a superb rail system, was also adept at keeping its silence over its role in transporting 76,000 Jews to concentration camps. Then SNCF bid for a now-defunct (killed by a Republican governor) high-speed rail system between Tampa and Orlando and ran into the wrath of Holocaust Survivors in Florida. After seventy years of silence, SNCF finally apologized in 2010…sort of…to the victims, fewer than 3,000 of whom returned home.

The SNCF had long maintained that it was “owned” and controlled by the Germans and that the company and the employees were “under orders.” In addition, the railway did not, it claimed, profit from the deportation “business.” Historians have refuted each of these claims, but the SNCF outlines its familiar self-defense on the English-language website put up by the company in the fall of 2010. During the Deportations, each train carried over 2,000 Jewish souls, and the casualty rate on the way to Auschwitz was usually around 500 people. The employees of the SNCF dutifully cleaned out the cars and returned to France for their next “cargo.”

The American President, Franklin Delano Roosevelt, was well aware of the active participation of the SNCF, and he stated in 1944, “All who knowingly take part in deportation of Jews to their death in Poland or Norwegians and French to their death in Germany are equally guilty with the executioner. All who share the guilt shall share the punishment.” In their 1981 book, Vichy France and the Jews (one of the earlier French books on the topic), Michael Robert Marrus and Robert O. Paxton write of the constant demands the Nazis made upon the railroad that complex deportation schedules be kept or else, Eichmann warned, the French would be “denied the privilege of participating in the Final Solution.” SNCF quickly fell in line.

Until the corporation met the Survivors of Florida, SNCF managed to escape responsibility, but, with an eye to contracts for high-speed rail systems in Florida and California, both states with large Jewish populations, SNCF apologized. “In the name of the S.N.C.F., I bow down before the victims, the survivors, the children of those deported, and before the suffering that still lives,” said Guillaume Pepy, the chair of the corporation. Denying any connection between the apology and its desire to secure lucrative contracts in America, the SNCF is donating a train station at Bobigny as a memorial to the victims of the deportations of Jews from that site between 1943 and 1944.

Sarah’s Key tells a small but painful story, not of heroic resistance, but of coming to terms with indifference and blindness on the part of one French family who belatedly tried to do the right thing. Sarah and her family symbolize all the innocent lives snuffed out by one of the purest examples of evil that has ever existed. Sarah makes that evil become achingly real. The Tézac family never acknowledges Sarah’s rightful ownership of the flat or her right to be compensated for their uncontested occupancy: their guilt will never take them that far. But the film does not condemn all the French. The family who rescued Sarah raised her, sheltered her and loved her, but they knew that she would never be whole, that she would never get over her guilt about the death of her brother. These “righteous Gentiles” were wise enough to let Sarah go, and the young woman vanishes from France. The journalist tracks down the fate of Sarah and her brother, and in the course of her journey into the past, she parts with her husband. “Julia” is trying to put things right, but although she can bring some comfort to the Tézac family, who learn that the head of the family had faithfully sent money to Sarah’s new family, that is not enough recompense for her husband. But there are some graves that should not be disturbed.

Eventually, “Julia” follows the trail of Sarah to New York, where she married an American named “Richard Rainsferd.” But Sarah, having committed suicide, is long dead, leaving behind her husband and a son, played by Aidan Quinn. Even in New York, the truth is obscured. It was not uncommon for a Holocaust Survivor to die of Survivor’s Guilt, and the Rainsferd family closes the door on Sarah’s sad past and moves to the future. When “Julia” finds “William Rainsferd,” he is unaware that his mother was Jewish, because she insisted on protecting him by baptizing him a Christian. It is with “William” that the circle closes as he rediscovers his mother’s past and is finally able to understand and to grieve over her death, and the death of his uncle in a locked closet and the death of his grandparents in Auschwitz.

In the end, with the investigation over and her long article published, “Julia” leaves Paris and returns to her homeland, New York, where she raises her unexpected child, Sarah, all alone. The film ends with “Julia” and “William” and little “Sarah,” in a New York restaurant. The final moments of the film show “Julia” and “William” going over Sarah’s memorabilia and finding a peace with a past that is and is not theirs. The message is not uplifting but a heartfelt, “never again.” This is a wonderful movie, far and away one of the best films of 2011. See it.

For my readers who would like to learn more about this historical period, the most recent book on the subject is And the Show Went On: Cultural Life in Nazi-Occupied Paris by Alan Riding, published in 2011.

One of the great “what ifs” in American history is “what if Al Gore had become president in 2000?” Notice I did not say, “What if Al Gore had won the 2000 election?” For some, George W. Bush did not defeat Al Gore; instead, the Supreme Court, in what many left-wing thinkers consider a coup d’état, handed him the presidency. Who knows who really won? The counting of the votes, hanging chads, butterfly ballot and all that, was never completed but was halted by the Court. The Republican response to the Democratic dismay was to “suck it up” and accept the loss. While this transfer of the presidency to George W. Bush has never left the consciousness of the Democrats, and while we will never know who actually won the most votes in Florida, some things we do know for certain, and that is what would not have occurred if Gore had become president.

Imagine what we would not have had

No war in Iraq

No “discretionary” wars

No Patriot Act

No torture, no torture memos,

No wholesale spying on the American people

No Guantanamo Bay

No Abu Ghraib

No flouting of the Geneva Convention

No privatization of the military

No Halliburton, no KBR

No wars fought on credit cards

No unfunded prescription drug programs

No government lying

No outing of CIA agents

No inaction on Katrina

Job outsourcing offset by jobs at home

No Great Recession

No Bush Tax Cuts to the Wealthy

No massive debts

No union-busting governors

No Defense of Marriage Act

No polarization between political parties

No John Roberts

No Samuel Alito

No Citizens United Decision

No Tea Party

No Sarah Palin

No Michele Bachmann

No Barack Obama

What we would have had:

A Short War in Afghanistan

A Green Economy

Green Jobs in America

Smaller Wall Street Crash

Illegal Immigrants made legal tax-paying citizens

The Protection of Reproductive Rights

The Protection of Voting Rights

Well-funded Social Security and Medicare and Medicaid

Compromise and Negotiation

A Respect for Truth and for Reality

Each president teaches the nation a series of lessons, some of them with lasting repercussions, some good and some bad. Lyndon Johnson taught us that presidents lie. Richard Nixon taught us that government is not to be trusted. Ronald Reagan taught us that greed was good. George H. W. Bush taught us to use racist lies as a campaign strategy. Bill Clinton taught us that presidents have sex while in office. George W. Bush taught us that it was just fine to spend money we do not have and had no way of paying back. Barack Obama taught us that resistance is futile. Al Gore taught us how to lose gracefully. Al Gore also taught retired public servants how to make the most of their retirement and how to maximize their experience for the public good. Of all the ex-politicians, Al Gore has contributed to the globe perhaps the most admirably, warning the world of the coming catastrophe of Global Warming or Climate Change or whatever you want to call it. Only Jimmy Carter and Bill Clinton have equaled Gore in public service after serving in elected office. We are still waiting to see what the Bushes, Senior and Junior, will do to show that they deserved the faith their voters put in them to serve the people.

We know what happened under George W. Bush. But what if Gore had been president? What are the arguments that things would have been better as the result of a Gore presidency? First, Gore would have retained the surplus accrued under Clinton. There would have been no tax cuts for the rich. So how would all that extra money have been spent? Undoubtedly, the deficit would have been paid down over time. But there are always rainy days and the unexpected. During the first decade of the twenty-first century, there were two events that could not have been planned for. This brings us to the second point: would there have been a September 11th?

While it is doubtful that the terrible, insane plan to turn planes into weapons could have been detected, there would have been much more awareness of the dangers of Islamic terrorism in a Gore administration than in the Bush administration. The Bush State Department was fully briefed by the outgoing administration on the threats from Al Qaeda and chose, famously, to ignore the information. Third, while we can assume that, regardless of the increased vigilance, 9/11 would have happened anyway, we also know that there would have been no war in Iraq. Certainly after September 11th, America would have fought what probably would have been a short and sharp war in Afghanistan. How short, we cannot know, but certainly not the ten-plus years we are witnessing now.

Another cost of the Bush wars was the very expensive privatization of the military. Once, the military took care of itself, from cooking to cleaning to fighting. Under the Bush administration, the basic cost of running a war was enormously increased by outsourcing what had been standard military tasks to private companies, which proceeded to overcharge the government. It has long been known that the Defense Department was always the target of enrichment scams on the part of civilian businesses, and there were attempts, however feeble, to keep the outrageous overcharges under control. Under the Bush administration, the ceding of the military to private enterprise exploded the cost of the war beyond what it would normally have been.

And none of the increased costs were paid for. During the Second World War, the military was self-sufficient and the citizens paid the costs, one day at a time, through the sale of war bonds. This time, instead, no-bid contracts were handed out to everyone from electricians to caterers to commandos, effectively doubling the personnel and causing costs to spiral out of control. It is doubtful that under a Democratic president the wars would have been either plural or privatized. Without the wars, there would have been no Patriot Act, no wholesale spying on the American people, no Guantanamo Bay, no Abu Ghraib, no torture memos, no flouting of the Geneva Convention, no decline in American credibility and no loss of American honor.

Fourth, this war would have been paid for. The two Bush wars were the first in American history to be waged without a tax increase and fought entirely on borrowed money. Fifth, it is unlikely that going into two wars on credit cards would have been coupled with another charge on the card, the unfunded and unpaid-for prescription drug plan. Although it is safe to assume that none of the budget-busting events that happened under Bush—two wars, a tax cut and a prescription drug deal, none of which were ever paid for—would have happened under a Gore administration, it is not safe to assume that there would have been no financial meltdown. The crisis of 2008 could well have come about regardless of who was in charge. The only real question is how bad would it have been?

The Gore administration would have, in all probability, continued the deregulation of the financial and lending industries undertaken by the Clinton economic team. What is unclear is the extent of the financial excesses. During the Bush years, Wall Street came to resemble Las Vegas even more than usual. The stock market and its minions take their cues from political leadership, and the market clearly followed the lead of the Bush administration and adopted the philosophy of short-term goals and short-term gains, to borrow and spend with no thought to the consequences. The market will always take advantage of the slightest permissive loophole and even invent a few more but, under Bush, there was clear permission to binge.

Recall that after 9/11, the president urged the nation to shop. Credit cards were flashed and homes were used as the proverbial piggy bank and, thanks to “liar’s loans,” value was extracted from what was the homeowner’s major financial asset. The market may always be counted on to behave badly and selfishly but, under Bush, the basic fabric of responsibility and morality and ethical behavior became openly unraveled. The bills finally came due and the entire structure, built on fantasy, came crashing down. Would the Gore administration have bailed out Wall Street?

It is possible that, given the precedents, such as the Savings and Loan debacle, the answer would have been “yes.” But it is probably safe to assume the crash would have been much less severe and the money would have been there to pump into the economy. Not only that, but the economy would have been in much better shape and could have better absorbed such a blow. Under the Bush administration, there was no job creation and no rise in middle class income. Jobs were going out the door and traveling to other nations with cheap labor. Tax incentives were created to encourage outsourcing and corporations were allowed to not pay taxes. Of course with the high cost of labor and the stringency of regulations in America, all the businesses that could do so shipped their jobs overseas.

This practice was nothing new and had been going on since the 1970s. Outsourcing is not a bad thing in and of itself. American consumers have certainly enjoyed affordable commodities, from television sets to automobiles, and it makes sense to allow certain societies to specialize in manufacturing if the advantage exists. The problem is that, under Bush, these lost jobs were never replaced. Real wages went down and, when taxes were cut, especially on the people who continued to experience a rise in income, revenues fell sharply. With not enough coming in and with huge, unprecedented amounts of money going out, a deficit rapidly replaced the surplus and America went into a deep financial hole.

With the Afghanistan War over, with the rich paying their fair share, with no unpaid-for prescription drug plan, with no war in Iraq, and with a healthy economy, the Gore administration would have been ready for the Wall Street Crash of ’08. The Bush administration encouraged jobs to leave America and did nothing to encourage job creation at home. And here is where there would have been an enormous difference between Bush and Gore. Environmentally conscious, Gore would have started green industries in America, creating green jobs. Green jobs are the kind of jobs that cannot be outsourced, and the range of these kinds of jobs is enormous, offering opportunities to men and women with a wide range of skills and education. In addition, green jobs would have been located everywhere, eliminating the pockets of joblessness and limiting the dependence on federal spending seen in the southern part of the United States, for example. People could have actually afforded their homes, paid their bills, and, who knows, maybe there would have been no total meltdown that impacted homeowners. Maybe Wall Street would have had to suffer for its own excesses. Who knows?

Given the aging Baby Boomers, would there have been, under Gore, an upswing in socialized medicine and health care? Or to put it another way, would Social Security and Medicare and Medicaid have been in financial trouble? The crisis in these government guarantees of public health is due to the lack of taxes to support them. With normal tax revenues, there is no problem for the future of any of these programs. It is even possible that, under Gore, Americans would have been allowed to buy drugs on a competitive market, even allowed to buy drugs in Canada, bringing down the cost of health care. But there is something more to consider. Under Gore, would there have been a Democratic push to legalize illegal immigrants? Given the rewards, why not?

Legal citizens pay taxes, instead of sending the surplus to Mexico, because they now have a stake in their new nation. The influx of income would be felt immediately in local and state and federal governments. People of Latino descent are a fast-growing demographic and a young one, more than filling the spaces left by the Baby Boomers, who will very shortly stop paying taxes and will start drawing out their contributions to their retirement. The current budget “crisis” could be solved simply by ending the Bush tax cuts and by making illegal aliens legal. Legal citizens can vote and, in gratitude, they would vote for the party that had given them citizenship.

Republicans know this fact of life and will continue to obstruct Democratic efforts to solve the “immigration problem” (which, like many of the so-called “problems” we are told we have, is a problem of Bush’s making), because they know that the Republican base is a small one. The idea of a permanent Democratic majority is simply unthinkable to the Republicans; even Bush knew that, but his own party blew the opportunity he gave them. The Republicans can offset their smaller numbers with larger campaign spending, which is now anonymous and unlimited, thanks to the Supreme Court’s infamous Citizens United decision. And that decision brings up another major difference between the administrations of Bush and Gore. Under Gore, there would have been no John Roberts and no Samuel Alito and no rightward turn of the Supreme Court. Instead, Gore would have nominated two more liberal or neutral justices to the Court and there would have been no rollback of civil liberties and no decisions that favored corporations over citizens such as we have seen over the past decade.

Finally, the last thing that we do know is that without Bush and the rightward drift of his administration, there would have been no Barack Obama. Obama, a conservative Reagan Democrat, was able to position himself to the left of Bush only because of the extreme right-leaning positions taken by that administration. Obama’s mild Republican health care policies, which seek to shield American citizens from predatory health care companies, were a shock only because of the strong contrast to Bush’s laissez-faire attitude towards the poor and the middle class. Without a right-wing Bush administration, there would also have been no Sarah Palin. The Bush administration prepared the ground for an extreme Republican agenda and for extreme Republican candidates who do not read newspapers and who want to pray the gay away.

At the end of a Gore administration, the next president could have been a moderate Republican, like Romney, or another environmentally conscious Democrat. It is doubtful that, whoever had become president in 2008, there would have been the latest upsurge of the John Birch Society, the Tea Party. The Tea Party emerged, as did Sarah Palin, on the fertile soil of the Wall Street Bailout. With a good economy, there would have been no need for a faux “tax revolt.” Today, when nothing substantial gets done in Washington, it is hard to imagine what might have been. As unimaginable as it seems, the Democrats and the Republicans would be talking to each other today.

Just as Ronald Reagan allowed greed to emerge unchecked in America, George Bush allowed and encouraged a take-no-prisoners approach to politics. Taking a page from the book of his father’s late, unlamented advisor, Lee Atwater, the campaigns of the younger Bush treated no trick as too dirty and no lie as too extreme, as long as it worked politically. The result was the birth of scorn for “reality-based” narratives, and the door was opened to stories that had no basis in fact. It was fine to lie about the weapons of mass destruction, it was OK to reveal the identity of an officer of the CIA, just as it was perfectly acceptable to torture and to hold people indefinitely without charge or trial. If one side believes in an untenable scenario and castigates anyone who wants to tell the truth, then compromise is impossible. Once facts become meaningless, the party that believes in non-facts can neither see nor agree to other points of view. When the Bush administration showed its willingness to buy into improbable versions of actual reality, the way was cleared for political gridlock. Without an agreement on basic facts and basic truths, no actions could ever be taken.

What the Bush administration taught us is that there was no accountability. Would the Wall Street Robber Barons have been allowed to go free in a post-Gore administration? Probably not, but Obama, following a regime without penalties, threatened the bankers with only Elizabeth Warren. But there is such a thing as accountability, and we, the middle class American citizens, are still paying for old sins that we did not commit. In his best-selling 2007 book, The Assault on Reason, Al Gore does not mention any of the might-have-beens listed above. He simply outlines in clear, precise language the failings of the Bush administration. Writing before the Wall Street Crash, his concerns have to do with civil liberties lost and the campaign of misinformation that passed for “news” during the first decade of the twenty-first century. Gore is especially concerned about the spread of false information by a mass media that is controlled by corporations and political interests. Gore quotes Edmund Muskie, a presidential contender later brought low by media manipulation, speaking in 1970,

“There are only two kinds of politics. They’re not radical and reactionary or conservative or liberal or even Democrat and Republican. There are only the politics of fear and the politics of trust.”

We all know that the next famous quote was “I am not a crook,” uttered by Richard Nixon. The president engineered his own demise by turning a small political misdemeanor into a massive, cancerous cover-up, bringing the term “Watergate” and all things “-gate” into being to designate scandals that could not be overcome. Watergate, like the McCarthy Hearings, was played out on television to a fascinated audience that was dazzled by the cast of luminaries brought low. Watergate was a rare case of the truth coming out, of that truth having consequences, and of those responsible being held accountable. It would be the last time such a public political punishment would occur. Watergate was a story broken by a great newspaper, The Washington Post, and what lodged in the public psyche was that newspapers, the print media, were the last resort of truth. Since Watergate, the public has spent more and more time passively consuming television in a one-way, no-exchange experience. As Gore points out, today television is the public’s main source of political news on government business, and newspapers are folding one by one.

Not only are newspapers dying while television ratings soar, but television viewing has become more and more of a niche experience. Unlike newspapers, where a range of news and opinions co-exist, television programs appeal to the fears and prejudices of the audience. Television exists to entertain and to make money for the owners, not to seek and find the truth. Furthermore, competition has greatly lessened among media outlets since the 1970s, and a few vast conglomerates control everything. Monopoly capitalism has captured the news, turning it into a source of revenue. In such an atmosphere, reason has no place.

Gore’s main thesis is that reason has been replaced by “dogma and blind faith.” The result is “a new kind of power” that is arbitrary because the public is not informed and cannot consent from an informed position. Gore also states that this power comes from “deep poisoned wells of racism, ultranationalism, religious strife, tribalism, anti-Semitism, sexism, and homophobia…” In such an atmosphere, the ugliness that always underlies any body politic is allowed and even encouraged to emerge. Real problems can be ignored while non-problems and fake crises distract the American people. The result is a replacement of our system of checks and balances with unchecked power and influence, thanks to a “coalition” that serves its own interests, not those of the public.

Gore used the Iraq War and the systematic lies that led to it as a prime example of the techniques of distraction. It is now known that the Bush administration came into office with the goal of deposing Saddam Hussein and the administration’s spin machine diverted attention away from Osama bin Laden to phantom weapons of mass destruction. Anyone who disagreed with President Bush or brought facts to bear was dismissed as “unpatriotic.” Ideology replaced facts, faith replaced information, fantasy replaced history, and dogmatism replaced reason so that Bush could “benefit friends and supporters.”

The coalition, or the “friends with benefits,” that Gore describes is made up of a number of groups, or what Bush called “my base.” He lists “the economic royalists” who want only to eliminate taxation and regulation, an “ideology” which has an “almost religious fervor.” The public interest does not exist for these people. Indeed, in this worldview, any government programs that aid the people are disincentives that keep those people from working hard for low wages. The interests of the “wealthy and large corporations” have the highest priority for Republican ideology. The infallibility of this ideological position is buttressed by what Gore lists as “well funded foundations, think tanks, action committees, media companies, and front groups capable of simulating grassroots activism and mounting a sustained assault on any reasoning process that threatens their economic goals.”

True, Republicans have been trying to dismantle the New Deal and the prosperity of the middle class for eighty years, but Gore asserts “this is different: the absolute dominance of the politics of wealth is something new.” He traces the long struggle in America to create an equal society, which is also a struggle against monopoly power and corporate interference with the workings of government, a struggle that once produced regulations ensuring that there were many competing media outlets. Under Reagan, Gore points out, media competition was ended when those regulations were lifted, allowing vast corporations to gather many television and radio stations and newspapers into one bundle that spoke with a single mind, devoted to preserving the wealth of the wealthy. Any information that gets in the way of ideology is promptly distorted for the cause or spun in a favorable direction. Gore says of the Bush administration, “I cannot remember any administration adopting this kind of persistent, systematic abuse of the truth and the institutionalization of dishonesty as a routine part of the policy process.” Gore states that the result of administration tactics was “to introduce a new level of viciousness in partisan politics.”

The Supreme Court, always compliant with right-wing agendas, helped President Bush gather unprecedented and unchecked power for the executive branch. The Bush doctrine became that whatever the president did was legal, a stance taken unsuccessfully by President Nixon. Bush was allowed to flout the American legal system and to disdain international laws. All Supreme Court decisions were made in favor of corporations and their powers and against the people, leaving the individual with no recourse, not even the right to a trial by jury. Bush was less interested in social issues than the later Republicans would be. He was far more interested in amassing the power to do what he wanted, whether it was warrantless wiretapping, searches without search warrants, or the “right” to put an unprecedented number of innocent citizens under surveillance for no particular reason. The public was not allowed to assemble freely, and any protestors were removed far away from the President and corralled in special sections so that Bush’s day would not be ruined by any sign of dissent.

Gore ends his description of the illegal and unconstitutional abuses of the Bush administration by stating what it would take to create the “well-informed citizenry” that democracy requires. He does not have much faith in television and puts his faith instead in the Internet. Gore warns that there are powers, corporate powers, which want to control the Internet by giving the content the rich and famous approve the green light of high speed and forcing the dissenters into the slow lane of endless downloads. This compartmentalization of the Internet into fast and slow ideologically structured lanes is a real and present danger. One can only hope that the True Believers and the Bloggers will keep protecting the last bastion of true participatory democracy. This book was published before the Bush presidency ended and does not account for the last days of the Bush Bonfire, when Wall Street burned. Reading The Assault on Reason three years into the Obama presidency is to recognize how totally the Bush administration ruined the very promising situation it inherited from the Clinton-Gore administration. One realizes that this is a group of politicians who were discredited to a man and woman, but they were never held accountable. They just got out of town and left the government in a shambles.

What was gained? What was the Bush Administration all about? Reading Gore’s book helps us understand that what was gained by the monied interests was a significant weakening of regulations of all kinds, a shrinking of taxes on the rich, an enlargement of subsidies even for the wealthiest corporations, and a lack of meaningful consequences when oil spills or chemicals leak or coal mines cave in and people die. Wall Street banks can demand money from taxpayers and then refuse to help the very same citizens refinance their mortgages while giving themselves record bonuses. Global Warming is now a hoax, and every time it snows, the right wing throws verbal snowballs at Al Gore. Every time there is a tornado or a flood or a drought, the same people call the federal government. Labor unions, especially teachers, are now the villains, and these groups are under assault so that more tax breaks can be given to the wealthy. States’ rights have made a comeback, and even Obama, a black man who should know better, says that states should decide whether or not to “allow” gay marriage. “Compromise” and “negotiation” are bad words for a person whose election promise is to destroy government as we know it. Washington is in gridlock. The media have rewritten lived history: the deficit was caused by Obama, who was not born in the United States and who wants us all to become “European,” whatever that means.

Gore has been largely silent about the events that have unfolded since his book was published, but he cannot be surprised by the direction they have taken. He has not been outspoken like Bill Clinton, nor has he overtly supported Obama. He has put forward the facts of Global Warming, won his Nobel Prize, and he will watch to see all his prophecies come true in forest fires, tornadoes, floods, droughts, melting ice caps, the extinction of polar bears, the widening of the hole in the ozone layer, endless winters, rising sea levels—Gore watches it all. Some Americans look away from the dust storms and cry “Hoax.” Other Americans have lost hope, and no wonder. The game, we learned, was rigged for the rich and not for the public interest. The land will be raped for the profits of the few and we the many will pay for the destruction. Meanwhile, we watch television and see good-hearted, well-meaning Americans demonstrating in Revolutionary War costumes to preserve tax cuts for the Wall Street bankers. The media they watch has convinced them to dismantle all the social programs they enjoy and use. These good people have been gathered together by powerful corporate interests who can bend them to their will. Reason has no place in politics. Nor do facts. Nor does reality. Spin rules. Slogans speak. If Al Gore is right, the last refuge of the honest broker is the Internet…while it lasts.

For some strange reason, the American title of this film was weirdly translated from the more apt Les Noms des gens, or People’s Names. And people’s names are what identify people as “French” and favored, or as Jewish or Arab, as outsiders and “not French.” Although the subject matter is very serious, this Mary Poppins of a movie serves the discussion of nationality with a spoonful of sugar and a heaping helping of female nudity to make the message/medicine go down. Given the tension in France today regarding the wave of immigrants “coming home” to the mother country, a light-hearted response was probably a wise one. Obviously the production made an impact, for the screenplay by Michel Leclerc and Baya Kasmi won a César, as did the leading actress, Sara Forestier. The way a society copes with change in the twenty-first century is through popular culture.

Like Great Britain, France has been coping with the blowback of Empire and with the consequences of colonialism for at least fifty years. The last of the French colonies, Algeria, finally won its independence in 1962 after decades of shameful repression. Perhaps the best-known film depicting this grotesque struggle of a colonial power to hold on to an empire is The Battle of Algiers, an even-handed film made by the Algerian government in 1966. More recently, American audiences have seen Of Gods and Men (2010), another painful account of atrocities on both sides. Les Noms des Gens makes the very good point that the modern nation of France must come to terms with its own past. The movie proposes that individuals move beyond their “names” and fixed identities and merge separate entities labeled “Jewish” or “Arab” or “French” into new, perhaps nameless, people for a new era in the name of love.

Baya Benmahmoud is a latter-day hippie, a free spirit who makes love, not war, on right-wing “fascists” and bigots who are male. She leaves the female fascists alone and directs her efforts to the male of the species, who are “converted” into left-wing liberalism through having sex with her. She is a Lysistrata in reverse who sees fascism everywhere, even in a veterinarian (Jacques Gamblin) who specializes in dead birds. While Baya celebrates her identity as an assimilated Frenchwoman with an Algerian father and a French mother, her newest target, Arthur Martin, is hiding a half-Jewish identity. Between the two of them, this unlikely couple embodies two sensitive points on the French body politic—what the French did to the Jews during the Nazi occupation and what the French did to the Algerians during the post-war period.

Critics have complained, rightly, that the movie is a superficial treatment of a serious topic, but it is perhaps all the more effective for that. The great Louis Malle produced a masterpiece of anguish, Au Revoir les Enfants, in 1987, which unflinchingly examines the scar of the Holocaust on the French conscience. The intention of Les Noms des Gens is simpler than that of films, such as Malle’s, which take a historical approach, but with a light touch, it makes its point. Arthur’s parents are still haunted by the feeling of being hunted by the Nazis, and his mother’s lost Jewish identity eventually comes back to her and drives her to suicide.

There is an interesting scene where Arthur and Baya visit the “Deportation Memorial” on the Île de la Cité to find the names of his Cohen relatives. Because there were so many “Cohens,” listed in alphabetical order, their search is futile. Dedicated by Charles de Gaulle, this memorial was designed by the architect Georges-Henri Pingusson for the purpose of honoring the French “martyrs” of deportation to Nazi camps. What the film does not say is that this memorial is not an official “Holocaust” memorial; it lumps the Jewish victims in with the other political enemies of the Nazis, whitewashing (the memorial is white) French culpability in the deportation of 200,000 French citizens, 76,000 of whom, including 11,000 children, just happened to be Jewish.

In fact, Les Noms des Gens passes over the ugly past in Algeria lightly and approaches the recent debate over whether or not French Muslim women can legally be veiled a bit more directly. Without getting into the controversy, the film follows Baya to her latest conquest, a traditional Muslim man who, unlike her father, veils his women. The shock of seeing this beautiful, freewheeling woman shrouded in black garments says it all. The question of whether the veil is a suppression of the humanity of women or an assertion of Muslim identity is asked and answered in a few minutes. Baya, whose mother is French, asserts that in the veil she is seen as a Muslim woman for the first time.

Of course, when her work is done, Baya leaves her Muslim fascist, jettisons the veil, and marries Arthur, two halves making a new whole and creating a child who is beyond “names,” or the labels that tear societies apart. Les Noms des Gens predicts that names will not matter in the new society that is in the process of re-identifying and redefining what is “French” in the twenty-first century. The message, amusingly and deftly delivered, is a hopeful one of global peace through love and marriage and children. We can only hope.

Oh, and by the way, we Americans are not as ignorant as the French seem to think: we don’t need to have “Bernard-Henri Lévy” translated into “Woody Allen” in the subtitles. We know who Bernard-Henri Lévy is, thank you very much.

The passage of the “Marriage Equality Act” in New York in the early summer of 2011 gives Beginners a special resonance. The story is a simple but painful one: after a lifetime of living in the closet, an elderly man reveals that he is gay. Based on a true story, Beginners refers to the well-known museum director Paul Mills, who died in 2004. Mills was very important in the art circles of northern California. It was he who realized in the early 1950s that the Bay Area was the site of an independent response to Abstract Expressionism in New York City. It was he who brought together the Bay Area painters in 1957 at the Oakland Museum exhibition “New Bay Area Figurative Painting.” Spotting and creating a new art movement became an important role for museum directors and curators, and the show that Mills put together was of historical significance.

In 1970 he moved to the museum in Santa Barbara and continued his concentration on California art. The museum website states that Mills was “a flag designer and enthusiast who initiated the Breakwater Flag Project” for the harbor in Santa Barbara. Who knew? Upon the death of his wife, Jan, Mills started a new life as a gay man in 1999. Sadly, he died of cancer four years later. Beginners begins with the brief life and sad death of “Hal,” the surrogate for the real-life museum director. The highly fictionalized story of a son supporting his father’s new life as an authentic person stars Christopher Plummer and Ewan McGregor, two superb actors who are, as always, excellent in their roles.

I do not approve of critics who criticize art for not being what they want it to be. I feel that art should be judged on its own merits, in its own terms, as it stands. That said, despite the presence of a sexy French love interest and a talking dog, the film is boring. I sat through the entire movie because I paid for it and wanted to get my money’s worth. Sadly, in pursuit of exploring the gaiety of being gay, this film took a slightly patronizing approach to an older man re-entering the contemporary world of gay life and discovering the joys of “house music.” Like Lady Chatterley, “Hal” finds a younger lower-class man to awaken him into his true nature, and the old man seems to have been able to make the transition from straight to gay without any psychological turmoil. True, northern California at the Millennium had no issues with gay people, which brings us to an unanswered question: why was this man in the closet so long?

“Oliver’s” parents were married in the dark decade of the 1950s, in the shadow of the poison of the McCarthy hearings and witch hunts against homosexuals. The fifties was also a decade of conformity by a culture that wanted to get back to “normal” and to stay out of trouble. For a gay man, the closet was the only choice for a safe life, and many women married these men, knowingly—in the case of Oliver’s mother—or unknowingly. For a woman, the fate of being single, an “old maid,” was as socially reprehensible as it would have been for a man to admit he was gay. For these couples, marriage was an arrangement. Whatever children resulted from these unions of convenience stood as guarantees that the secrets of the family were safe. It is still widely believed, even today, that if a man is married and has children he is not gay.

These marriages could be respectful and affectionate and practical, but they would also be empty of what people, men and women, had come to expect—romantic love and passion. One can understand, given the political and social climate of the fifties and sixties, why Oliver’s parents remained married. But by the 1970s, gay men, in and out of California, were eyewitnesses to the liberation of gays and lesbians. True, in the eighties, there was regression and suppression of the rights of all and any minorities, including the rights of the majority—women—but in California being gay was accepted. In real life, Paul Mills was living in the arts community, a community that was and is full of successful and openly gay people, in the university town of Santa Barbara. One of the mysteries that the film does not answer is why the marriage, so obviously unhappy, dragged on long after the need to live in the closet had passed. The only answer can be the psychological closet that kept the generation of the fifties trapped, in denial, in unhappiness, in emptiness.

The strength of social prejudices against gay men persists, and there are countless men who disguise themselves as “straight” and, like the former governor of New Jersey, do real harm to other innocent people in the process. The film shows the wife’s aching unhappiness and her empty existence, but “Hal” takes no responsibility. He merely says blithely that the wife was aware that Hal was gay and she wanted to marry him. So the victim is blamed for her fate—being trapped in a marriage of deprivation that she willfully chose. Not a word is said of why Hal should agree to such an arrangement, but it is clear that marriage to a woman who was his “beard” would give him cover in a period of prejudice. One can only imagine that the wife could not have borne the shame of revelations and the humiliation of a divorce.

And here is where I think the movie missed the opportunity to explore some powerful issues that are still painfully pertinent in American life. California (where there is widespread acceptance of the GLBT community) and New York (where the right to marry is recognized as a civil right) are not the rest of America. We are living in a nation where an apparent candidate for president spies on gay people, runs screaming from lesbians, and has a husband who uses federal money to “cure” gays, whom he perceives to be “barbarians” who must be “disciplined.” We are also a nation where one of the most popular sit-coms, Modern Family, features a gay couple who has adopted a child. Despite the vital role popular culture has played in widening the acceptance of gay people, there are places in America where gay men and women live in the closet.

To live in the closet is to live an inauthentic life, dedicated to appeasing the bigotry and inhumanity of a group of people who are increasingly looked at askance. The anti-gay forces resent being referred to as “haters” and have seen their organizations recognized as “hate groups” on par with the Ku Klux Klan. The tragedy is that an uncounted number of men and women are forced to live in shame and fear, trapped by the ugly bigotry of self-righteous and cruel forces. If Beginners can join the ranks of a growing number of films that present gays and lesbians as spouses and parents who love and care for each other and their children, then this film will have done a good thing. The Kids Are All Right was not shown in certain parts of the country and was, undoubtedly, not shown on local cable channels lest local sensibilities be disturbed by seeing gays portrayed as human beings. Beginners is a feel-good film that skims over the dark and disturbing discrimination that was so powerful that a good and decent man had only four years to live his real life.

Dr. Jeanne S. M. Willette

The Arts Blogger

Midnight in Paris (2011)

IN SEARCH OF LOST TIME

Not to read too much into Woody Allen’s latest amuse-bouche, but the movie does look like a witty mise-en-abyme, an endless regression back in time. Midnight in Paris proposes a witching hour when one can step into a Peugeot and drive into one man’s lost Golden Age, and then climb into a carriage and enter a woman’s idea of what the perfect time would be. Gil, Woody Allen’s alter ego (younger and better looking), is a successful Hollywood writer who thinks that he could write the great American novel if only it were the 1920s. Stranded in the wrong time, Gil (Owen Wilson) is marooned in the right place—Paris—with his fiancée’s rich Republican parents. Inez and her obnoxious mother and father defend the Tea Party (the current one) and like Gil’s money and his Hollywood success but not him. The absurd unsuitability of Inez (Rachel McAdams) for Gil, the incurable romantic, is our clue that the film is an allegory.

Allen draws the audience into the philosophical fantasy by forcing us to assume the role of the most obnoxious character in the film. Paul (Michael Sheen), an old friend of Inez, is a typical pedantic academic—the kind that is compelled to lecture to all within earshot about matters clearly not in his realm of expertise. Any art lover with even a bare minimum of knowledge knows that the sculptor, Auguste Rodin, was never married to the sculptor, Camille Claudel, but Paul gets in an argument with the guide at the Rodin Museum. And it is here, at this unfortunate juncture, that we become Paul the Pedantic, for those of us in the know immediately spot Carla Bruni, who makes the American Inez look lumpy and badly dressed.

The fun for the effete truly begins when the magic Peugeot comes around a dark curve of a quiet Parisian back street as the midnight hour chimes. Who should pop out of the Peugeot and beckon Gil to get in but Scott and Zelda Fitzgerald?

The glamorous pair whisk the bemused American in Paris away to an elegant soirée held by Jean Cocteau, where he meets Adriana (Marion Cotillard), the current mistress of Picasso. We all know that Picasso was entangled with a wife he couldn’t divorce, Olga, and was enthralled with Marie-Thérèse, so Adriana is another clue that we are in fantasyland. And then Scott and Zelda take Gil away to another party, and we are so pleased that we know where they are going: Bricktop’s, and we know who Bricktop was. And when we get there, we immediately see Josephine Baker, clothed, dancing the Charleston in her off time. We, in our erudition, also wonder why Cole Porter was at the first party instead of playing the piano for Bricktop, which was more his habit. And then, at the end of the evening, we finish off our entrée with a large helping of Ernest Hemingway.

Corey Stoll (Law & Order: LA) does a great job of playing Hemingway, who is self-important and pompous, obsessed with manhood, and spouts his own spare and lean “masculine” prose, learned from Gertrude Stein. Hemingway tells Gil that he has published only one novel, presumably The Sun Also Rises, meaning that, in time, we are in 1926. It cannot be any later than that year because after 1926, the Fitzgeralds left Paris. Francis Scott Fitzgerald had already published This Side of Paradise and The Great Gatsby, and, of course, Hemingway was passive-aggressive and jealous of the more successful writer. Having deduced which year we are in, the next night we get to meet Gertrude Stein (Kathy Bates) herself, holding court under the portrait Picasso did of her in 1906 (and yes, that is Alice B. Toklas who opens the door for Gil).

Naturally, Picasso is in Gertrude’s salon with a painting that is anachronistically out of place for a decade during which he was in his classical, conservative period. The faux painting looks a bit like Large Nude in a Red Armchair (1929), which signals his flirtation with Surrealism.

Speaking of Surrealism, the film is full of Surrealist artists, also a bit out of time. Surrealism proper does not begin until 1924, when André Breton issued his Manifesto, and the movement was a movement of poets, not artists. The main artists associated with Surrealism were those who had once been Dadaists. Having just painted Harlequin’s Carnival, Joan Miró, who was careful to keep his distance from the French group, was the most fully developed Surrealist painter of the twenties.

But here is Salvador Dalí (Adrien Brody) having a drink with Gil three years before he became a Surrealist. And later, the pair is joined by Man Ray (much taller than he was in real life) and Luis Buñuel, with whom Dalí would make Un chien andalou in 1928. We miss seeing Lee Miller, who could have been either at the Cocteau party or with Man Ray—after all, she was the muse for both men. But for Gil, his muse is Adriana, who takes him on a trip to the Belle Époque, where they meet Toulouse-Lautrec at the Moulin Rouge. No sooner do they start chatting with the Count than they are joined by Edgar Degas and Paul Gauguin, who must have been in town between his journeys to the South Pacific. For Adriana, this is her Golden Age, not the Twenties of the Lost Generation. She could be right; these are the last years before a century of war and loss and disillusionment. Gil, however, needs the inspiration of the Paris of Hemingway and Fitzgerald and Stein in order to come into his own and to “find himself” as a writer, and he leaves the woman he loves behind in the 1890s.

Before Gil rejoins the land of the present and the unsatisfactory, he delivers a bit of advice to Buñuel: create a scene of a dinner party that no one can leave. The New York Times informs us that the film in question would be The Discreet Charm of the Bourgeoisie. The film ends with Gil finally ridding himself of Inez, who has been dallying with the insufferable Paul, our own muse and inspiration in our personal history test. Of course, the ideal lady is already waiting, selling Cole Porter records in the flea market (Les Puces) of Porte de Clignancourt. We see immediately that she is perfect for Gil and an appropriate end to the fantasy of a middle-aged man having a midlife crisis. Of course she is half his age: what better way to start a new life than with a sweet young thing who doesn’t wear makeup and likes to walk in the rain? Meanwhile, the private detective who has been following Gil takes a wrong turn and winds up running for his life down the Hall of Mirrors at Versailles.

But one more thing, as Columbo would have said: the last snobbish satisfaction we feel before the end of Midnight in Paris is when we see Gil walk out of Shakespeare and Company. We are smugly pleased that we know the entire story of this establishment and are sorry we did not visit in the 1920s and run into James Joyce…in the afternoon. Oh, we are so smart. Woody Allen is so laughing at us. And by the way, Francis Scott Fitzgerald was named after Francis Scott Key. Had to get that in.

Thirty thousand years ago. This is when art began. Chauvet Cave. This is where art began. Southern France, near the Pont d’Arc formation. This is where the first art was made. This is the oldest and the best art. Art never got any better than this.

Chauvet Cave Wall

And the German film director, Werner Herzog, was given special permission to visit this spectacular cave with a small film crew to photograph the marks on the walls made by prehistoric artists. Unfortunately, this film will be shown in only a few art houses, almost none of which are equipped to show the movie as it was shot, in 3D. The loss of dimensionality is a genuine one in this case, for the artists made use of the convex swellings and the concave niches, which are the natural contours of the walls.

Chauvet Cave Wall Contours

Older but less well known than the caves of Altamira and Lascaux, Chauvet is significant because of the great age of the paintings. Imagine drawings so old that, when the lines were drawn, Homo sapiens coexisted with Neanderthals. But Neanderthals do not draw. Neanderthals do not make art. With elegant strokes depicting the animals familiar to the Ice Age inhabitants, the two species would be divided between human and not-quite-human. Such is the power of art.

“They were here!” Éliette Brunel shouted when she and Jean-Marie Chauvet and Christian Hillaire discovered Chauvet Cave in 1994. Although Brunel was the first to see the paintings on the walls, it was her colleague, Jean-Marie Chauvet, the leader of the expedition, who would give his name to this unusually large cave. The cave was immediately sealed to the public and only scientific teams were allowed inside. The cave has been mapped with lasers, which are able to draw a three-dimensional picture of a long and irregularly shaped opening into the limestone cliffs above the Ardèche River.

Chauvet Map

An iron door has closed the opening originally made by the explorers who sensed the faint whiff of cave air wafting from a slight crack in the cliff face. A narrow metal pathway wends its way along the cave floor, carefully skirting animal bones and the fragile footprints of a child and a wolf and bears.

“Papa, look, oxen.” Like the caves in the Pyrenees, Chauvet Cave had been kept sealed and safe by a rockslide, which covered the original opening where the earliest artists entered. Interestingly enough, Chauvet was the first cave with prehistoric art to be discovered by an adult. The cave of Altamira was discovered in 1879 by Marcelino Sanz de Sautuola, a nobleman and amateur archaeologist (the only kind of archaeologist at that time), who was excavating near the mouth of the cave when his eight-year-old daughter, Maria, went deeper into the cavern to take a look. She is reported to have called out to her father when she saw what she thought were drawings of oxen, animals we now know as an extinct species called “aurochs.” Like her father, many people assumed that prehistoric people were primitive brutes, incapable of making art, and the cave paintings were presumed to be forgeries or modern-day graffiti. It would take decades before the paintings would be proved authentic.

Altamira Ceiling

A little girl, then, was the first human to set eyes upon these paintings made 17,500 years ago, but the next cave paintings were discovered by a boy and his dog. In 1940, Marcel Ravidat and his dog, Robot, found a narrow opening, and he returned with his friends, Jacques Marsal, Georges Agnel, and Simon Coencas, to explore. This cave had been sealed up to the extent that it was accessible only to small, curious boys, and, this time, there could be no doubt that the paintings they discovered were authentic. These paintings were about the same age as those of Altamira, perhaps a bit younger. But the way in which the artists painted these caves was quite different from that of the Chauvet Cave, whose paintings are, incredibly, twice as old. The artists of Altamira and Lascaux used more color—ochers, burnt sienna, black and red. In contrast to the more austere monochromes of the Chauvet artists, they filled in the shapes of the animals with these natural colors, which enhanced the naturalistic effects.

Bulls of Lascaux

Lascaux and Altamira are both closed to the public, and both sites have created virtual recreations of the caves. Lascaux has a personal tour: one can visit Lascaux via a video, which takes you inside the cave, providing an idea of the ruggedness of the surfaces of the walls. Altamira has a doppelganger, a duplicate cave that is an exact replica and can be visited by the public, whose moist, humid breath would otherwise cause mildew and mold to threaten the irreplaceable paintings of both caves. Chauvet has the Herzog film, a remarkable accomplishment for the director and his colleagues and those who are the keepers of the cave.

Werner Herzog and His Crew

These Ur-artists entered Chauvet via a frontal opening in the cliff wall, but at some point thousands of years ago, the overhang above the cave collapsed and buried the mouth. As at Lascaux, the contemporary entrance is a narrow side passage that is a vertical drop into the cavern. Once inside, deep in the darkness, one encounters not so much the art as the sheer effort expended by humans to make art. Clearly, art was so important to the tribes of southern France that individuals were willing to go deep underground with torches to draw on the walls. In Lascaux and Altamira, the artists used small primitive lamps, but, in Chauvet, fires on the floor or torches held aloft had to light the impenetrable blackness. As a torch burned down, the person who held it scraped the tip against the wall to knock off the burnt end so that the torch could be reignited. Carbon dating suggests that these black streaks are some twenty thousand years old.

Cave Bear

Although the carbon dating has been controversial, it seems that some thirty-two thousand years ago, the artists scraped the wall surfaces to provide themselves with a blanched wall, a clean ground to work upon. They drew the animals they knew—long-extinct cave bears (whose bones and skulls are everywhere), maneless lions, leopards, rhinoceroses, reindeer, horses, deer, ibexes, even owls and, yes, the aurochs. At the beginning of the cave, we see the first sign of what would prove to be the very sign of humanity—the urge to put marks on the wall—a series of red palm prints arranged in a circle by the same artist. This artist appeared to be concerned with making a personal history in the cave, for, further down the passageway, his distinctive hand, with its crooked little finger, reappears. His is one of the few bursts of color in an otherwise cool palette that is enhanced by the diamond-like sparkles of the mineral deposits on the cave floor.

Hand of the Artist

The artists made cunning use of the undulating shapes along the surface of the cave wall to mimic the bulging bodies of the animals, their thighs, their bellies. Oddly—and I have seen this effect in no other cave paintings—these artists give their animals multiple legs, as though they were running. The cinematic illusion is similar to Giacomo Balla’s famous Dynamism of a Dog on a Leash (1912).

Chauvet Wall Drawings

As in most prehistoric caves, there are few humans, and in Chauvet there is only one, a partial torso of a woman drawn on a hanging, pendulous rock. The filmmakers were not allowed to approach on foot to examine the drawing, much less to view the far side of the rock and the rest of the drawing. Herzog’s crew put a camera on a pole and was able to get a shot of the entire torso. The emphasis on the vulva of the female is reminiscent of the numerous “Venuses” found as small figurines all over northern Europe. Like the animals, this piece of a woman is a line drawing, free of color.

Chauvet Venus

The narrator contended that the drawing was a combination of a woman and a bull or a union of a woman and a bull. I say “contended” only because the drawing is very difficult to read, but I take him at his word. What interests me is that this theme of the woman and the bull floats through history to emerge mysteriously in the Minoan art of Crete and in the myths of ancient Greece: the story of the bestial coupling of the wife of King Minos with a bull, resulting in the monstrous Minotaur. Although he was long dead when this cave drawing was discovered, the art historian Aby Warburg, who wrote of how the deep psychology of humanity moved like an undertext or subtext throughout the history of art, would have been enthralled.

The contributors to Cave of Forgotten Dreams had what I consider to be a problematic tendency to speculate about why the drawings were done. Most assume that “religion” had something to do with the intention of the artists. Although I can only respect the expertise of these scholars, I feel that such speculation can be anachronistic and that the truth of the art can only be far more mysterious than anything we can imagine. It is impossible to put ourselves into the minds of our ancient prehistoric forebears. All we can ever know is what we see.

These drawings are strange to us in deep and powerful ways. The approach of the artists remains the same over thousands of years. The idea of the “new” or the “novel,” of the avant-garde or of rebellion against what were obviously deeply rooted traditions, simply does not exist. Fifteen thousand years separate Chauvet from Lascaux, and yet both caves are instantly identifiable as “prehistoric,” as “cave art.” The consistency of the aesthetics of the drawings and paintings suggests that art-making may have been connected to ritual, making the “style” impervious to change. But we do not know if the art is ritualistic.

Combat

There is some indication that the artists put certain animals together in what we would call “narration.” Two lions, a male and a female, seem to hunt side by side. Two rhinos clash in combat, tangling their long curved horns, probably to win a mate. A group of horses run together as a herd, one with its mouth open as though it is breathing, panting or neighing. But we cannot always read the overlapping as an attempt to link animals with each other, for one overlapping was clearly a superimposition. Remarkably, this over-drawing was done five thousand years after the original rendering.

Claw Marks Left by a Bear

Superimpositions are common in other caves. So are the handprints, so are the “dots,” but we have no idea what these marks mean. Are the superimpositions a form of tagging, a sign of ownership, a record of a changing of generations? What are the drawings? Reportage of a hunt? Prayers for a kill? Worship of the beasts? We will never know the answers, but we do know that these drawings are stunning in their blunt simplicity, amazing in their elegance of line. A lion was drawn with a single stroke measuring six feet long. Imagine the confidence of the artist to make such an elegant, assured gesture. What are we seeing? “Natural” talent? Frequent practice? An apprenticeship with a “Master/Mistress” artist? The “style,” if one could use such a word, is comparable to a supremely arrogant Picasso or the deft hand of Matisse. The Chauvet drawings are so basic, so primal, so primary and so complete that we have been struggling ever since to return to our atavistic selves, to redeem ourselves as artists.

Lions

Werner Herzog and his remarkable movie have allowed us a privileged look at some of the greatest art in the world. He takes us to a place we can never go. We are enchanted witnesses to his journey into the bowels of the earth where the art is secreted. At some point in time, Chauvet will probably be closed in by the innumerable stalagmites and stalactites that are forming, even as I write, from the relentless drip, drip, drip of water leaking into the cave. The formations seem to take the place of the living, breathing humans who once visited here, compelled by the inexplicable need to make art. Rearing from the floor like sentinels, hanging from the ceiling like hovering guardians, these pale shapes are ghosts of artists past, transfixed, like Lot’s wife, into pillars, watching over the art.

On the surface, what we have here is the classic Cinderella story: poor, plain girl meets ugly rich man with a secret wife hidden in the attic of his old dark house, and their grand romance is thwarted by the revelation of “the madwoman in the attic.” Charlotte Brontë’s classic Gothic novel, Jane Eyre, is usually thought of as a romantic story of a man and a woman who are soul mates, mysteriously connected by the heartstrings. But to understand Jane Eyre as a love story is to entirely miss the point. The latest rendition, starring Mia Wasikowska as “Jane” and Michael Fassbender as the brooding “Mr. Rochester,” is a good movie, better than some of the earlier versions, but it will never surpass the 1943 film with Orson Welles as the best “Rochester” ever. If you have never seen the classic black-and-white original, then by all means go see this film by Cary Fukunaga. This new Jane Eyre is certainly the best version since 1943…and it’s in color. But why is Jane Eyre still being made and remade seventy years later?

The screenwriter, Moira Buffini, wrote this film as pure romance, passing over its obvious political themes quite lightly, and playing to the audience’s expectations. From the time of its publication in 1847, Jane Eyre was understood as a “Gothic” novel, a tale of mystery typical of the Romantic era. Easily reduced to tropes, the novel and its characters have been copied, remixed, and mashed up, but the essential ingredients remain the same: the gloomy mansion, the master of the manor who has a dark secret, and the plucky young woman who pokes around the house, intent upon solving the mystery. The warnings are the same: “Pay no attention to the noises in the attic.” “Don’t go in the locked room.” The “meet cute,” when the master’s horse falls, tossing Rochester at the feet of Jane Eyre, has been done and redone—remember how Jane Fonda met Jon Voight in Coming Home? The first version of Jane Eyre could be Bluebeard and his many wives, a cautionary tale for unwary women, suggesting, not that she should be careful of the man she marries, but that she should mind her own business.

Indeed, Charlotte Brontë’s novel was directed to a female audience. Denied entrance to any intellectually satisfying and fulfilling fields, middle-class women were avid readers of novels, especially those written by women about women. Men disapproved of women reading women and especially of women writing and being published. This communication among women was dangerous, but writing was one of the few areas of professional behavior that could not be totally closed to women. Long before women managed to become successful visual and musical and theatrical artists, women such as Jane Austen managed to write and were widely read. Even so, due to the disapproval of male publishers, Jane Austen published all but one book, Pride and Prejudice, on her own. We are the ones who appreciate the Nineteenth Century novels of these women, Austen and the Brontë sisters, and we are the ones who have told and retold their stories.

Men were correct to be wary of women writing, for many of these novels are critical of male privileges and unchecked male power. Women began to become novelists literally on the heels of two political revolutions, one in America and one in France, both of which had utterly excluded women. One of the greatest Gothic novels ever written came from a very young woman, Mary Shelley, the daughter of a famous feminist. Frankenstein is a warning to those (men) who would think that, through technology, they had become God. The “Frankenstein” theme comes up again and again, from Metropolis to Blade Runner: don’t attempt to manipulate nature. All of Jane Austen’s novels are commentaries on the social plight of women who are not allowed to have access to money. Austen could be criticized for ignoring lower-class women, but women without money were not impacted by a loss of fortune the way upper-income women could be. All of Austen’s books are on the same theme: how can women reconcile economic dependency and the necessity of marriage with their desire for “romantic” love? The social messages in novels by women are inescapable, especially today when we are alert for such things.

“Romantic Love” was an invention of the Nineteenth Century, in order, I believe, to compensate women for their loss of political freedom and to reconcile them to their economic dependence. “Romantic Love” in all its improbable glory is the engine of Jane Eyre. One of the best analyses of Jane Eyre was made thirty years ago by Sandra Gilbert and Susan Gubar in The Madwoman in the Attic: The Woman Writer and the Nineteenth-Century Literary Imagination. The title comes from the character of “Bertha,” Mr. Rochester’s Caribbean wife, imprisoned in the attic. The literary professors, Gilbert and Gubar, suggested that “Bertha” is a metaphor for all the rage and discontent felt by women in the Nineteenth Century. Women at that time were not allowed to express their feelings or complain about their social condition, and when they did they were often declared “mad” and punished in some way. “Bertha” is more than a character in a novel; she is the key that explains the lives of women who are shut up in lives that allow them no freedom. “Bertha” is the counterpoint of “Jane,” who has learned to restrain herself and to be careful about what she says. “Bertha” is all the unexpressed pain of women locked up in the “attic” of the subconscious, rattling and banging about, starting fires and screaming in the night. “Jane” has retained her sanity, even after an abusive childhood, because she wants to survive and has learned to move and to act with humility, eyes downcast.

Jane Eyre is a feminist novel and film if only because it is told from the point of view of a woman. Like all the protagonists in Austen’s novels, Jane is adrift in a patriarchal world, run by men for the benefit of men. From the beginning of their meeting, “Rochester” makes it clear that she must exist for his benefit, act in accordance with his needs and wants. Today, most women would steer clear of such an egoist, but for centuries this kind of character was presented to female readers in countless Romance Novels, the kind with lavender covers, as the Broken Man who needed only the Love of a Good Woman to be fixed. One can only assume that, in Brontë’s time, “Rochester” was probably typical of wealthy and powerful men in a time when such men had nearly unchecked privileges. Indeed, he almost gets away with a bigamous marriage to Jane. Jane is warned by “Mrs. Fairfax,” played by Judi Dench, that men like Mr. Rochester do not marry governesses, but she is too naïve to understand what the older woman is telling her: something is very wrong.

The novel never fully explains why Rochester attempts to marry Jane and offers only love as an explanation for his courtship. One suspects that the romantic reason, a Prince Charming falling in love with a Cinderella, is a fantasy solution devised by Brontë to nurture the flicker of hope in her female readers. Austen also devised romantic solutions in her novels: no matter how isolated the marriageable young women were, suitable young men (usually rich) somehow came into “the neighborhood” and the pairing off ensued. But Austen’s ladies are always the social equals of her gentlemen. The strange dialogue about gender equality that passes between Rochester and Jane underscores the improbability of the romance between them. In the novel, Jane is eighteen years old and Rochester is about double her age; she is poor and he is rich; and it’s the Nineteenth Century—they can never be equals. In “real life,” he would not have known of her existence, but the novel uses the myth of “love” to force this unlikely pair together. But there is another approach to Jane Eyre.

Jane Eyre is, in its own way, a spiritual coming-of-age story. The novel is also a religious pilgrimage, for both Jane and Rochester. Both must do penance, Rochester for being “deceitful” and Jane for believing in miracles. For the couple to be together, Jane must complete what Gilbert and Gubar called a “pilgrim’s progress,” which began at “Gateshead” and ends at “Ferndean,” the couple’s forest retreat. Only when the novel is read as a religious allegory does the story begin to make sense. The story is about “Jane Eyre,” whose very name indicates spirituality and her ability to float away from her adversaries. “Rochester” is not so much a real character as he is an obstacle in her journey towards fulfillment—he is something that Jane cannot have, not until she completes her tasks. Jane travels from the prison of the Red Room to imprisonment in Lowood, the school for girls who have been thrown away, to the trap of Thornfield, whose name alone would be enough to make any self-respecting girl run for her life.

The last station of Jane’s journey is a resting place, the aptly named Moor House, a lonely house in the middle of the English version of a desert, the moors. Like Mary Magdalene, she goes into exile to grieve. On the run from Rochester, Jane is rescued by “St. John Rivers” (Jamie Bell). It is in this bleak and sanctimonious place of crossing that she recovers her sense of self; but, perhaps to satisfy the reader’s need for a happy ending, the author sends Jane back to Rochester. She rejects the offer of marriage from “Rivers,” because she has been rewarded with a large inheritance, and because she mysteriously hears the voice of Rochester calling her back. In modern terms, we would call this device of one lover hearing the pain of the other as a voice on the wind an example of a plot “creaking” under the weight of contrivance, but in the 1840s, it’s that new-fangled “Romantic Love” asserting itself.

It is incomprehensible that in any reality Jane could love such a man, someone who had lied to her, betrayed his wife, deceived his friends and then claimed victimhood to explain his behavior. Jane Austen, an austerely Classical novelist who distrusted Romantic fantasies, would have profoundly disapproved of Jane’s actions. Over and over Austen punished such men and wrote them into miserable lives. Even Brontë had to smite Rochester to make him acceptable to her readers. Rochester is an unsympathetic character, but in her own way Jane is as weak and as flawed as he and gives in to temptation—running back to a married man. When she returns, Thornfield has been rightly burned down by the vengeful Bertha, who had had enough of her prison. Rochester has lost his sight and the use of one of his hands, but, far from being impotent, he gains Jane Eyre, who has inherited a fortune from an uncle she never met. One assumes, incorrectly, that this was her money, but as soon as she marries Rochester, every penny comes under his control. They disappear into “happily ever after” in a quick ending that concludes Jane’s journey to her destiny: a caretaker of a broken and disgraced man.

Gilbert and Gubar assume that the couple, a blind and physically challenged man and a woman with enough money to make her acceptable to society, are now equal. If Jane Eyre is a feminist novel, it is because it is a more or less accurate account of the lives of women, particularly of surplus and dependent women and their very real sufferings, though few of them had a benevolent uncle. But there is allegorical truth in the novel. “Bertha Mason” is the expression of the oppressed woman and “Jane Eyre” is the portrait of a suppressed woman. They are mirror images of one another: both imprisoned and both unable to escape. “Bertha” is the far more interesting character, so much so that she inspired Jean Rhys to write Wide Sargasso Sea in 1966. Rhys imagined the Jamaican prequel to Jane Eyre. The novel is full of foreshadowings of the Brontë novel and suggests that Rochester was at first sexually enchanted with Bertha and then, overwhelmed with sexual guilt, was repulsed by her and by the alien culture of the Caribbean. As if in revenge, Bertha went slowly mad over her husband’s rejection. Rather than abandon her on the island, Rochester took Bertha to a lifetime of confinement in the attic of his English home.

Rhys stripped the Rochester character of his romantic trappings and explained why he was the sort of man who would be repulsed by “Blanche Ingram’s” self-assurance—too much like his wife—and comforted by Jane’s submissiveness and her virginity and inexperience. She is everything his wife was not, controllable and ignorant of all things sexual. Wide Sargasso Sea makes Jane Eyre more understandable because it focuses on Rochester and makes his character comprehensible. So who is Jane Eyre? Ultimately this character and her motivations remain obscure, despite the fact that the novel is told in her voice. One wonders if she is not typical of women of her time. Self-knowledge would have been hard to come by in a time when men wrote about women and told them what they were and who they had to be. Jane becomes understandable only if one assumes that she internalized the myth of “Romantic Love” and the myth of women’s inferiority overlaid with a veneer of self-possession.

Jane Eyre is an abused woman who identified with her abuser, Mr. Rochester, and did not have the vocabulary to understand co-dependence. Only a woman of the nineteenth century would force a female character through such trials and subject her to such sufferings with so little payoff. Charlotte Brontë was a woman of little experience and much imagination and a great deal of insight for someone who lived such a limited and isolated life. Her life resembled Jane’s to a certain extent in that she, too, had been sent away to a pitiless school for girls after her mother died. She too had taught at a girls’ school, Roe Head, and she was tethered to a difficult alcoholic brother, Branwell, who was undoubtedly the model for “Rochester.” The siblings and their father lived in the moors of Yorkshire where Charlotte wrote Jane Eyre, published under a male pseudonym, “Currer Bell.” For a brief time, she enjoyed some acclaim in London literary circles, and she even married, but whatever happiness Brontë had was brief. She died in 1855 of “exhaustion.” Jane Eyre was the only notable book she wrote. A long journey for not very much…for her, but for us, more than a century and a half of Jane Eyre. She speaks to us still.

Battle L. A. starts out like bad sex: hard, fast, and then rolls over and snores. But do not despair, a Latina techie saves the world, so you go, girl. Between the beginning and the end of the movie, I was completely distracted by the undeniable fact that Baton Rouge is in Louisiana and that Shreveport does not even remotely resemble Los Angeles. It’s one thing to have Battle Duluth because few people live in Duluth and even fewer plan to visit; therefore, you could film Battle Duluth in Des Moines and no one would know the difference. But millions and millions of people live in the clutches of the sprawl of Los Angeles and we all know that Baton Rouge is no Los Angeles. So don’t make a movie titled Battle L. A. and film it in Louisiana. Just sayin’.

I find movies that show aliens invading Los Angeles very disturbing. After all, I live in Orange County and work in Los Angeles and anxiously watch landmarks that I know and love come under threat. That is why, in 2012, watching the Randy’s donut roll down Manchester Boulevard is so agonizing—Randy’s is so close to where I work. But in Battle L. A., aliens land practically in my backyard, and the college where I teach, Otis College of Art and Design, is at ground zero. The Marines move down Lincoln Boulevard, evacuating the inhabitants of Santa Monica, including, supposedly, me, my colleagues, and all the art students. I could just imagine all of us fleeing the latest invasion of aliens and saying, “Didn’t we just see the same aliens in District 9?” and “Why are the special effects so bad?”

Leaving aside the disconcerting fact that aliens seem to be fixated on Los Angeles—I mean, in War of the Worlds (1953), Gene Barry fought the machines in Puente Hills—I have to ask, since the filmmakers are from Hollywood, why can’t they get the geography of Los Angeles straight? In 1996, aliens loomed above us in Independence Day and Will Smith and Jeff Goldblum (whose sister is a very fine L. A. artist) saved the world. At the time, I was living in Laguna Hills, a few miles from El Toro, the site of the Marine Corps Air Station. In one of its last missions before the base was closed, the Marine pilots were scrambled to shoot at the aliens. So far, so good. What confused me was the escape route taken by Will Smith’s S. O., Vivica Fox: she took herself, her son, and her dog from Laguna Hills into Los Angeles where she managed to find one of the few traffic tunnels in the city. Why, I kept asking myself, didn’t she just drive to Palm Springs instead of towards and into what would have to be the world’s greatest traffic jams?

In Battle L. A., I was puzzled at why the Marines from Camp Pendleton left their base to safeguard Los Angeles. Why didn’t they go to San Diego where America’s Pacific fleet is based? But noooo, as the late great John Belushi would say, they went to Shreveport so they could fight in Los Angeles. For the entire film, I was trying to figure out where Aaron Eckhart and his manly men landed. Now Santa Monica is a very posh little city. Strung out along the Pacific Coast, the town boasts some of the most expensive real estate in the world—million dollar cottages, prestige shopping, gourmet restaurants, bright blue skies, golden beaches, and lots and lots of pretty people driving fancy cars.

Third Street Promenade in Santa Monica

I did not see one building that looked like something built in L. A. No distinctive landmarks were shown and I could not situate myself. Where were these Marines? The small heroic band appeared to be inland, but Santa Monica is a beach community. They were fighting house-to-house in a run-down neighborhood, but there are few houses under a million dollars in Santa Monica. Eventually I figured out that they must have started somewhere to the north or the south of the world-famous Jonathan Club on PCH.

Pacific Coast Highway

I came to that conclusion because they wanted to drive the bus they found—not one of the local Blue Buses so beloved in Santa Monica but a weird orange one—to a pickup point located at the Airport. To do so they consulted a map, a paper map—who has those things any more?—and decided to drive on the 10 Freeway and go East. East on the 10 to the Santa Monica Airport? They planned to exit on Robertson Boulevard, which is the location of the diamond district in the heart of L. A.’s Orthodox neighborhood. The Airport is in Santa Monica, that’s why they call it the Santa Monica Airport. And it’s a couple of miles from the Pacific Ocean.

Oh well, never mind. Battle L. A. is a perfectly predictable movie. Watch it, but wait until it comes on cable. And break out your MapQuest and see if you can help these lost Marines find the “Command and Control Center” hidden underneath Santa Monica. Under? There’s no “under” in L. A.

Any film that makes me think about Donald Rumsfeld is just bad to the bone.

As our former Secretary of Defense once famously said,

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

The only thing unknown about this horrible film is why Liam Neeson agreed to waste his time making it. Rather than getting involved in peripheral issues, like whether or not January Jones was miscast, I want to explain why everything about this movie was known in the first five minutes. There were no unknowns in Unknown.

The film begins with Liam Neeson, playing a botany professor, arriving at the airport in Berlin with his wife, January Jones. He is coming to attend a prestigious conference and to meet with his colleagues and, most importantly, to give a paper. First, he tells passport control that he is going to give a paper, something no self-respecting academic would do. We are very modest people. Second, he places his briefcase with his presentation on a luggage cart at the airport, and third, he leaves his briefcase with his presentation on said luggage cart at the airport and gets in a taxi and off he goes. Now a real academic would never, never, never let anyone else touch his or her case, brief or otherwise, especially if it had a presentation in it. Also a real academic would have the presentation on a backup CD, on a backup USB drive, and in an e-mail sent to one’s own account. We are a very neurotic people. Fourth, and last, a real academic would have to have a writing and publishing career stretching back twenty or thirty years and everyone in the field would know him or her by sight.

Any smart person would immediately know, therefore, that Liam Neeson was a fake and that he was really someone else, not the person he says he is. Also I have seen this plot before, but I can’t remember the movie—where the protagonist gets amnesia and thinks he is the person he was supposed to play. Therefore, the only unknown element in this dreadful two hours is why didn’t the filmmakers do their basic research and, like, talk to a real academic, you know?

Philip K. Dick was a writer who was also a philosopher pondering the meaning of existence, and he has been a rich source of science fiction material. The original material for The Adjustment Bureau was about a shiver in the universe as a seemingly meaningless event was “adjusted.” The only witness to the adjustment simply went home and got on with his life. But the greatness of the existential story by Dick rests within the reader who wonders, once again, why life turns out the way it does. Is there a master plan for all of us? Is there anyone in charge?

Such a question is naïve and immature, a child’s need for reassurance that there are reasons for the world being the way it is. The fact is that there is no master plan, we don’t learn from our mistakes or from history, and the universe is pitiless and arbitrary. I prefer the philosophy of Terminator: “The only fate is the one you make.” The Adjustment Bureau attempts to be as gutsy as Terminator as the protagonist tries to make his own fate, but all it has going for it is the lovely pairing of Matt Damon and Emily Blunt and hats, lots of hats. Magic hats, magic hats that open doors.

Need I say more?

Not really, but I just want to add that this could have been an interesting movie. Many people have life-altering decisions to make, and one of the most fateful decisions is marriage. In the film, Matt Damon, a possible President in the making, learns from his new friend at the Adjustment Bureau that he will never want political power again if he marries the woman he loves. Unilaterally, he decides to pursue love, but he never tells Emily Blunt that, if she stays with him, she will not become the “most famous choreographer in the world.” He takes for himself all the free will he wants but grants her none of it. Just come with me, he says, and she follows along, thanks to the magic hat, through magic doors. And, of course, at the end of the journey, yes, Virginia, there is a Magic Man Upstairs. Each one of us has a little book with a plan in it. How comforting.

But why not let the couple just talk about their choices? Fame, fortune, or marriage? The movie states that Damon would have been so happy in his marriage that he would have lost the desire to be president. However, it is made clear that Blunt would see all her dreams and ambitions thwarted, and no one mentions happiness for her. Damon seems to assume that she would be willing to forgo all her dreams for him. What a great twist the film could have had if she had been apprised of her choices and if she had decided, no, I need to dance for the rest of my life more than I need you. The movie could have ended there, the universe shuddering and time reshuffling back to the launching of Damon’s presidential career. The end.

As the presence of Mad Men’s John Slattery warns us, The Adjustment Bureau is an old-fashioned romance. Love is more important than a career. Now that’s a Fifties idea.

The King’s Speech is the true story of the struggle of Albert, the Duke of York, to overcome his stuttering. Stuttering, often a problem for males, is connected here to the boy’s conflict with his father, a literalized expression of the oppression of an abusive parent. The parent in question was George V, King of England, who eerily resembled his cousin, Nicholas, Czar of Russia. For all his faults, Nicholas appears to have been a devoted parent, but George did something quite dreadful to his sons, David and Albert. His dark and difficult presence lurks around the edges of The King’s Speech. The King, played by Michael Gambon, ruined the one son who would briefly become the King and nearly destroyed the younger one, who, in his turn, would also become the sovereign who won the love of the British public. Colin Firth, who will always be “Mr. Darcy,” won an Academy Award for his performance, revealing how a tormented Duke rose above his handicap and grew into his role as King. The metamorphosis of a man who dared not speak into the leader of a nation at war became a fulfilling film, well written and well acted by the pros, Helena Bonham Carter and the remarkable Geoffrey Rush.

Although The King’s Speech is generally uplifting and inspiring, winning Best Picture and Best Director, it suggests that there is still another movie to be made, much darker but more intriguing, about the uneasy characters seen only around the edges, David Windsor, played by Guy Pearce, and his paramour, Wallis Simpson (Eve Best). To a certain extent, this corrupt couple acted as a foil to Albert and Elizabeth’s wholesomeness, their determined ordinariness and their good hearts. While Albert works with his speech coach, Lionel Logue, played by Geoffrey Rush, and gives himself permission to simply speak and overcome his internalized feelings of shame and inadequacy, his playboy brother cavorts with nefarious characters and carries on a serious flirtation with the fascists in England and Germany.

Although the timing was bad and the ruler of Germany was a monster, the loyalty of the Prince of Wales toward Germany was a perfectly natural one. One tends to forget that the British Royal Family has not had English or Scottish roots since George I from Hanover was offered the throne in 1714. Although George spent most of his time in Germany, spoke German and was surrounded by German advisors, he had the saving grace of being a Protestant. After “Bloody Mary,” no one in England wanted a Catholic on the throne. Anyone else would do. Not until George III, who gets rather bad press in America, did these German kings become British. It is this King, mad for part of his life, who began the British Empire, ironically by losing the American colonies, turning the attention of the imperialists elsewhere. Queen Victoria, the last of the Hanoverians, married another German, Albert of Saxe-Coburg Gotha.

The family remained thoroughly German, through Edward VII and his eldest son, Prince Albert Victor, then in line for the throne. Albert Victor was ready to marry a German princess, Mary of Teck, when he died of pneumonia. But Mary, considered a valuable alliance, was passed on to the younger son, George, who became King George V. With a German mother, both sons, David and Albert, grew up speaking German. However, during the Great War, King George V changed the family name from “Saxe-Coburg Gotha” to “Windsor.” Although the film mentions that young Albert, the future George VI, was forced to use his right hand (Prince William is allowed to be left-handed) and had to wear painful braces to correct his walk, it is unclear who was in charge of the children. Given their duties as King and Queen, it is likely that his parents merely neglected their sons, particularly the younger one, and abandoned him to severe minders and nannies. For whatever reasons, both David and Albert reacted strongly to their childhood and to their parents.

Often a young boy, in rebelling against his father, will develop an attachment to his mother that manifests itself in interesting ways. Young David grew up to be a playboy, rejecting a serious role as the future ruler and developing a penchant for older married women as his lovers. The fact that he was conflicted about sex is borne out by the many tales of his complicated relationship with Wallis Simpson. This twice-married woman from Baltimore with a drawling southern accent seemed to understand that abused people identify with an abuser and that this co-dependent relationship, unhealthy as it is, is also very powerful. Apparently, she was a dominatrix and satisfied the otherwise impotent Prince through humiliation. Her reputation was so bad that the government kept Mrs. Simpson a secret from the British public, which docilely endured censorship and even put up with bits and pieces scissored out of foreign media entering the country. The British government knew exactly who she was: a woman who collected men and had connections to Nazis in Germany.

The King’s Speech doesn’t go into what the British government did and did not do about this unsavory situation between the future king and his unacceptable mistress, but the Prime Minister, Stanley Baldwin (played by Anthony Andrews, looking old and awful), did not want this man to be King. Coincidentally or not, it is at the point when the romance has taken hold and looks permanent that Albert, the shy Duke of York, comes to Lionel Logue to learn how to talk. The film suggests that his wife, Elizabeth, urged him to seek help from the Australian, but there is also no question that the eyes of the government were upon him as a far better candidate for King. When Albert’s father died, David was deeply involved with Mrs. Simpson, who certainly had dreams of becoming Queen or gaining a position similar to Camilla’s today. The new King, now Edward VIII, also certainly assumed that some sort of arrangement for Mrs. Simpson could be made.

But the government was having none of it: Baldwin wanted both of these people gone; Roosevelt wanted both of these people gone. The King and his mistress were Nazi sympathizers but why? Despite the fact that the Kaiser was Queen Victoria’s grandson, the British had gone to war with the Germans from 1914 to 1918 and, for two decades, had harbored deep suspicions about the intentions of the defeated nation. Edward VIII may have invented the Windsor Knot, now the standard tie for most men, but, in his heart, he was as German as his mother. It seems that what little judgment he had was overwhelmed by his need to be his own man and to defy his father, who had changed the family name. Unfortunately, in the thirties, having cultural German sympathies meant being connected to Hitler.

Based on recently released F.B.I. files, new evidence of American and British investigations into the couple’s Nazi sympathies was presented two years ago on British television. The National Geographic Channel picked up the program. But what is not given is a historical context for their involvement. In the thirties, fascism was favored by many people, particularly those in the ruling classes who feared communism. Wallis was typical of many people in Europe and America in the 1930s, a time when politics was torn between the extremes of communism and fascism. After the Wall Street crash, capitalism seemed an unstable system and any other system was perhaps preferable. In America, Roosevelt made a conscious effort to save capitalism and, along with it, democracy. Hitler, who came to power in the same year that Roosevelt took office, made another choice—fascism, an extreme right-wing style of nationalism allied with corporate power.

Many people, if not outright fascists, had fascist-like sympathies. After all, Mussolini made the trains run on time. Wallis seems to have been as naïve as her consort about the Nazis and blind to the underlying philosophy of horror that would sweep Germany into a decade of darkness. But Hitler had apparently spotted the couple as easily manipulated. One remembers how deeply Hitler was convinced that Germany and England were natural allies and that he waited a long time before attacking the British Isles. One could ask if Hitler had been equally naïve, thinking that the former King had more power than he actually had.

Reading between the lines of the brief rule of Edward VIII, it appears that while Albert was taking speech lessons, Baldwin and the government were cornering the King. He was given a choice: give up Mrs. Simpson or abdicate. To everyone’s relief, he abdicated in December 1936 to “marry the woman I love.” How romantic. Albert became King George VI. David became the Duke of Windsor and married Wallis Simpson in France. The modest wedding was quite rightly boycotted by the Royal Family. The Duke alternately raged at his brother the King and at the new Queen, but, in reality, Churchill, once he became prime minister, was in charge of the difficult Duke. In the film, Edward VIII makes his abdication speech with perfect diction. In a moving counterpoint, the film ends with King George VI making the triumphal and climactic speech without stuttering, introducing himself to the English people as their new King.

For an American audience, The King’s Speech is a story of how one man overcame an impediment with the help of a gifted and inventive and insightful teacher. But to those who heard that historic speech, there must have been a great sense of relief. They were in good hands. Surely the public did not know, until much later, what a narrow escape they had from a King who indeed wanted to surrender England to Hitler. Instead, they got a good King and Queen and their lovely daughters, Elizabeth and Margaret, all steadfast in their defiance of Germany. Rather than a King cavorting with Hitler, as David actually did when he became the Duke of Windsor, newspapers printed images of a truly English King in uniform walking the ruins of bombed out London. The public would not know, until much later, that the Duke had actually recommended that Hitler bomb England to bring the nation to its knees…so he could be restored to the throne.

The King’s Speech is correctly the story of Lionel Logue and Albert Windsor, but it is interesting to think of the story that is played out in the background: the King that didn’t happen. Guy Pearce had a small role, but the brief appearance of “David and Wallis” is a reminder that the real story of this insidious couple has yet to be made into a movie. The twin story of the Good Brother and the Bad Brother is one of the rare instances in history when Good wins out and Bad is sent into a purgatory of wandering the world in a semi-pariah state.

If The Tourist were a novel, it would be fitted into a paper cover colored in pinks and lavender. Pure silly fun, sheer soft-core girl porn, The Tourist is a romance novel, the escapist fare of house-bound wives. There is a beautiful woman (Angelina Jolie), a handsome man (Johnny Depp), a relentless detective (Paul Bettany) and his boss (former Bond, Timothy Dalton), and Rufus Sewell as The Red Herring, all gathered together on a Venice vacation. Gondolas, speedboats, fancy dress balls, and elegant hotels abound with gunplay and chase scenes silhouetted against beautiful scenery. The Tourist has an absurd plot with an improbable twist at the end, demanding that you believe in everlasting love and the fable of “Brazilian plastic surgery” that cost $20 million. Can a serious-minded critic dismiss such a candy-box film? Not me. I ate the bon-bons, thank you very much.

Despite all the good and glowing reviews of the Coen Brothers’ latest film, I did not like True Grit. I was bored.

So what bothered me?

Why did I leave feeling unsatisfied and irritated?

But a few words before I diagnose. First, let me disclose my intellectual failings: I do not read fiction; I read only non-fiction. Therefore, I have never read the Charles Portis book. Second, Westerns died and were decently buried in the Sixties. The only good westerns after the Fifties were the “Spaghetti Westerns” and other Clint Eastwood films, especially the truly great Unforgiven. Third, the Coen Brothers have already done their western. It was called No Country for Old Men and it was as great as Unforgiven, and, yes, I actually read the Cormac McCarthy book. Which brings me to True Grit…

True Grit, 2010

For any director, writer, or actor, the “Western” is a minefield of dangers. In the Twenty-first century, we are too well-educated to accept the Cold War myth of the West as the symbol of America, a nation founded on individual enterprise and, of course, “true grit.” The real facts have been thoroughly revealed since John Wayne and Kim Darby made the clean and shiny film of 1969. The West was a place where misfits washed up on the plains, a site where sociopaths, post-war drifters, prostitutes, and opportunists created an out-of-control society we called “wild.”

The period of True Grit is the opening years of the Wild West when Arkansas was still the frontier, a time when the West was suddenly up for grabs and the place where the East sent its worst citizens and its sorriest losers. We can measure the depths of the desperation and perversity of the West by the wild and immoral scramble for “free land” and the willingness of the American government, brutalized by four years of war, to countenance the genocide of the Native Americans. The West can no longer be mythologized. There is no reason to feel sentimental about one of the most shameful periods in American history, much less to celebrate its passing in an “elegiac” tone, to use one of the words written by many critics in relation to this film.

poster

The best path for a Twenty-first Century Western is to humanize the inhabitants and to tell the truth about how the West was really “won.” The reluctance to deal with the West as it really was only continues to hide a story that is truly compelling—how people discarded from the East built an entirely new political and social system that rose out of the ashes of crimes of theft and killing and inhumanity. To be fair, truth was not the purpose of the Portis novel, and the Coen Brothers seem to have had modest goals: to give a “straight-up” account of the original novel. The book is a more elegant and formal retelling of turn-of-the-century pulp fiction. I suspect that in this day and age such a re-telling would be difficult to recapture, even in the inspired hands of the Coens.

How could the tone of the past be recreated? From the first time we see the town in the opening scenes—an Arkansas frontier town—the film looks false. The buildings look like Hollywood sets on a back lot in the Fifties. Now, this too-clean, too-fake appearance may be intentional on the part of the Coens, and, if so, I applaud their intentions. Too pristine, neat and tidy, the town of True Grit has the look and the feel of a simulacrum—a copy of a copy of a copy, a free-floating signifier of a “reality” that was never real. Simulacra work best as still images and are extremely difficult to pull off in a film. In his early days, Quentin Tarantino was the master at activating simulacra, especially in Reservoir Dogs, one of the best film noir movies of the neo-noir period. That said, True Grit is, at its heart, as the Coen Brothers indicated on Charlie Rose, a “young adult” novel. True Grit is not a road movie, not a quest movie, not a journey into adulthood movie, not a redemption movie. It is much simpler: it is about a young girl who leaves home, a remarkable event in the 1870s. But the film turns into a male-oriented ham fest with a plucky young girl as spectator to the antics of lost and broken men in a simulacrum of a “western.”

Charles Portis novel

If the Coen Brothers were aiming for simulacra, then the acting let them down. Once the plot leaves the fake town and moves out into the hardscrabble frontier of Choctaw country, the look becomes more authentic and bleak. The original Portis plot is fine, but the characters are difficult to translate from the printed page to live action. The actors are trapped—or trapped themselves—into characterizations that lead them to ham it up. Each character is an archetype of the Old West, but what emerges is a stereotype—the plucky little girl of the West, the old man who has become a professional killer, a Buffalo Bill bounty hunter, the sociopathic gunslinger and outlaw, and assorted colorful characters that come and go with little effect. It’s a dangerous mix of the familiar, and Jeff Bridges and Josh Brolin put on their Wild West costumes, climb on their horses, and go over the top.

The “Rooster Cogburn” character is unfortunately played for comic effect by a self-indulgent Jeff Bridges. While Bridges is a much finer actor than John Wayne, he comes across like “The Dude” who has wandered into a Western movie. Brolin, also a fine actor, turns Tom Chaney into a moronic killer, hardly worthy of a quest, much less relentless pursuit. One wonders how such a mouth-breather—mere debris—could have eluded the bounty hunter, LaBoeuf, aka “La Beef.” Hailee Steinfeld and Matt Damon are the only actors in this film whose performances are centered and grounded, and they should be given credit for holding the storyline together. I believe that Brolin and Bridges should have followed the lead of Steinfeld and Damon and played their roles straight. Instead of being so self-consciously “in a movie,” the bounty hunter and the outlaw could have been authentic, played like a pair of damaged men leading deranged lives. They were two sides of the same coin; the same trained killer split into two paths. One man found his humanity and the other lost his.

It remains to be seen if I am right about this film—that the audience will be bored. I will be watching the attendance numbers instead of reading the reviews. That said, I would be surprised if Jeff Bridges does not get nominated for an Academy Award, like John Wayne.

Tron: Legacy is the movie of the year. Tron: Legacy makes Avatar look like Walt Disney’s Cinderella from 1950. Many critics have complained that Tron: Legacy is a film for “fan boys” only. They are wrong. Roger Ebert pondered the probability of the physics of being sucked into a computer. He is old. For people of all genders and ages who love art and computers and video games—and that’s a lot of us—Tron: Legacy is simply an amazing and enthralling experience. I am a girl and I went with a girlfriend and we went to IMAX 3D—we went all out—spent all our girl dollars—and I would go again. If you are looking for a story, look elsewhere: this film is an allegory of the Internet. If you are looking for plausibility, move on: this movie is a purely optical event. Just open your eyes and allow yourself to be drawn into the world of “Tron” and “Clu.”

Steven Lisberger commented that it was as if the original fans of the first Tron had to grow up and become executives at Disney for the sequel to be made. But, more importantly, as Lisberger pointed out, we understand the basic concepts about computers much better. We entered into the arcade game via an “avatar,” a vague term then, but now we all have avatars on the Internet, either through our logos or by playing Second Life. Although inspired by the game “Pong,” the games in Tron were played by two humanoid figures rather than by bouncing white dots and the action took place over a flat grid that, like a Flat Earth, had ends and edges and one could fall off or out.

Today, we think of the Grid as the Internet, a word that itself evokes the Grid, which looks like, of course, a net. This net, imagined by “Kevin Flynn,” is endless and self-evolving, propelled by the will of “Clu,” the avatar and doppelganger of the CEO of Encom. Clu was programmed to find perfection, but in his quest towards purity and logic—sought after by all programmers—he has committed genocide against the innocent ISOs. Quorra (Olivia Wilde) is the only survivor of the spontaneously generated creatures and is sheltered by “Kevin Flynn.” “Clu2” summons the son of “Flynn,” young “Sam” (Garrett Hedlund), now in his late twenties, and lures him back into the game. Father and son are reunited, the father sacrifices himself to save the son and the ISO, and the two head towards the light and escape back to the real world. The story is a mere armature for the art. But this is postmodern art, a hybrid of quotations from twentieth century art, a true bricolage.

This Tron is dark, shades of blacks and grays, slivered with streaks of light. The key colors (like my website) are black and Tiffany blue. Unlike the original, which was an arcade game come to life, the technology of today allows the film to become a work of art. The director, Joseph Kosinski, was very frank about the fact that the new Tron, with its new grid, its new server, was built from the ground up. He storyboarded each shot, thinking in terms of choreography—the placement of the characters within the Grid like three-dimensional chess. The built environment was an art project. “If you’re not interested in design,” Kosinski said, “you wouldn’t be interested in working with this film.” The fabled Grid has grown over the past twenty-eight years, and the rather sparse landscape of the original quadrille has developed beyond the old office-like cubicles into a city with a mountainous landscape beyond. Indeed, the entire world had to be designed by Ben Procter, the art director, and stretched out in a map. A moat, or an Infinite Void, lurks for those who fall off the connecting bridges and surrounds the Downtown City. The Safe House where the older and wiser “Kevin Flynn” hides is in the mountains, and the Outlands stretch out to the Sea of Simulation (Baudrillard would love this film).

The Safe House is one of the few places in the film where actual sets could exist. Designed by Darren Gilford, who described the futuristic home as a “hideout,” the House has the ethos of the love child of Charles Eames and Philippe Starck crossed with the refuge of Dr. Dave Bowman from 2001. The original Rococo furniture was a reimagining by Lin McDonald, who lit the chairs from the inside with rope lights. Indeed the entire glass floor, which had to hold the weight of the actors and the equipment, was uplit. All the furniture, the Eames armchair and ottoman (the 670), the Mies chaise longue and Barcelona chair, the Arco lamp by Achille Castiglioni, was white and silver. The pale fire in the white fireplace was a silver waterfall. Olivia Wilde pointed out that these sets, which created an alternative world, were a welcome surprise in a film that could have been mostly green screen. But nothing in the architecture of the Safe House, designed by Kevin Ishioka (who was the supervising art director) and Jan Kobylka, was precisely otherworldly—it was the blacks, grays and whites that gave the set its spectral look, reminding the viewer that this world had no natural light.

Olivia Wilde lounging on a luminous Rococo sofa

If we take this concept of being literally within the computer, the logic of the set design and the costumes becomes clear. The computer has its own internal light system. The screen is lit from within; the keyboard will light up when you are typing in the dark. This is the light that substitutes for the sun in the land of The Grid. People, or humanoid computer programs, are dressed all in black or white in skin-tight suits of neoprene. These suits are “electro-luminous,” meaning that flexible lamps are inserted into channels on the sides of the rubber-like costumes. The light literally outlines the body’s shape, making the wearer visible. According to Wilde, it took months of training to look good in the suit and hours every day to get into the outfit. She was “proud” to wear the suit, which made her feel like a “warrior.” Christine Clark, who designed the costumes with Michael Wilkinson, explained that the tight-fitting suits were based on the actors’ actual bodies and were kitted out with flexible lighting that had never been used on such a large scale before. The light in the suits acts like the characters—the typed numbers and letters—on a monitor, one of those old-fashioned Eighties screens that were backlit in green. The skeletal outlines also link the humans to the machines, because the people and the machines they make are psychically connected—they have become us. And we have become them.

For those of us who saw the original Tron—the few of us—in 1982, the old familiar leads are back, Jeff Bridges and Bruce Boxleitner, but other characters are missing. Sark (David Warner) and Yori (Cindy Morgan) are gone. Gone as well is the array of Eighties pastels, as are the complex designs on the body suits.

complex suit design

However, the 1950s grids on glass by Irene Pereira of the original Tron survived and can be seen in the film’s theme song video, “Derezzed” by Daft Punk.

Night by Irene Pereira

The current Tron is cleaned up and the colors are carefully and judiciously deployed and separated to indicate the worlds of good (the forces of “Kevin Flynn”)—blue—and evil (the forces of his evil twin, “Clu2”)—orange. They fight each other by slinging the deadly identity discs that are magnetized to the back of the black suit and with the Light Cycles. To mount a Light Cycle, one has merely to assume the position and the Light Cycle will manifest itself under you and off you go. In the original Tron, the contests were rather staid versions of throwing the disc or catching it with a jai alai stick and the motorcycle races were strictly on a grid of straight lines. Like the Grid, in Tron: Legacy, the Light Cycles have evolved into longer, leaner, and meaner machines, less blunt-nosed and utilitarian than their grandfathers. The Light Cycles swoop and soar and leave trails of light like exhaust. Unlike the straight-edged and geometric predecessor, this Tron is feminine and curvilinear, taking its cues, like the earlier version, from Star Wars: A New Hope. There are the aerodynamic dog-fights and the panoramic shots of marching soldiers, gathered together for war by “Clu2,” and even the bar scene, one of the most awful and most copied scenes in the history of film.

1982 version of the Grid

The bar cum disco and nightclub is where we meet the last and strangest character of the film, Michael Sheen’s white-clad “Castor/Zuse.” Although it is a relief to see Michael Sheen play someone other than Tony Blair, his character is such a flaming hodge-podge of previous characters, from Ziggy Stardust to Gary Oldman’s “Dracula” to Joel Grey’s master of ceremonies in Cabaret, that I will lie awake many nights trying to figure out all the references. The real fun of the End of the Line club scene is the brief appearance of Daft Punk as helmeted DJs, heads bobbing to their own music. The heroes of French electronica, Daft Punk, aka Guy-Manuel de Homem-Christo and Thomas Bangalter, never really “appear” and they always wear disguises. Best known for “One More Time,” Daft Punk are more like Street Artists in that they prefer to keep their real appearance on the down low. We see them in Tron: Legacy but only in passing, performing “Derezzed,” which is what happens to you when you lose the game in Tron Land: you are shattered into millions of tiny splinters and, like your computer, you crash.

Daft Punk had to be persuaded to take on the project of doing the soundtrack for Tron: Legacy. For the highly successful musicians, working on the film meant taking two years off from touring. However, as the video of “One More Time” suggests, Daft Punk is interested in filmmaking. The team worked with a full orchestra, fusing electronic music with classical instruments, from flutes to French horns to bassoons. The result is a soundtrack that is somewhat reminiscent of the work of the German group, Tangerine Dream, on the soundtrack of Risky Business, but unlike Tangerine Dream, Daft Punk can go harder and grittier, and there are tracks on the soundtrack album (number eight) that have the hard grinding sound that sometimes comes out of Digweed. The soundtrack makes the movie take off like the Light Cycles.

Notice I have paid little attention to the story, for I consider the plot incidental to the special effects, the artistry and the new form of making art. Although it is nice to see Bruce Boxleitner holding up better than Jeff Bridges after twenty-eight years, the real miracle is the remarkable way in which the old Jeff Bridges is transformed into the young Jeff Bridges through digital effects. I am aware that other reviewers have made snide comments about Botox and plastic surgery, but, in my opinion, the transformation of the actor works in the film. “Clu2” is a perfectly acceptable digital character because the entire movie takes place inside a computer; therefore, everyone has to look like the computer version of oneself. “Clu” is an avatar, as are all the other programs in the film. Their bodily perfection, their agelessness, is not about computer programs, which age quickly and are outdated and discarded without remorse, but more about us and what we would like our lives to be, how we would like to always look, what we “really” are in our own minds.

What is nice is that “Clu” does not look like the young Jeff Bridges but like the old one without the aging—the bags under the eyes, the wrinkles, the shaggy gray beard, the thinning hair, etc. The women, by comparison, all look frozen in the prime time of models, which is about seventeen. Their makeup is flawless and will, I predict, set some new styles along with the costumes. Already there are Tron platform heels coming to a store near you any day now. The point is that these avatars are who we could become if only…we could find the time to exercise, the time to diet, the time to ride around on cool-looking motorcycles or wear those terribly uncomfortable platform heels. The avatars are allegories of our fantasy selves. Computer games are places we enter into to escape the real world of pending unemployment, of disappointment in real life human relations, of financial peril, and all the other Shakespearean “slings and arrows of outrageous fortune.” We want to be like “Clu2” in looks but like “Kevin Flynn” in wisdom. We want to be like “Quorra” in her innocence and her untroubled perfection. We want to be like “Sam Flynn,” who is a young hero on a quest who must pass mythic tests and perform great deeds before he can become a man.

1980s colors

So disregard the lame reviews. Also ignore the Disney merchandising. But buy the soundtrack. Awesome. Tron: Legacy is a prime example of a phenomenon I have been observing for some time now—the flow of cultural capital and creative energy away from “high” art and into popular culture. The leap from Avatar to Tron: Legacy is enormous but it measures the speed of the drive to create art inside the matrix of the computer, the new art world. The computer is the site of the new avant-garde.

We are the users.

Oh, and the avatar, “Tron,” barely appears in Tron: Legacy. Who fights for the users now?

Like the swallows returning to Capistrano, censorship of art returns every time the forces of morality feel emboldened or threatened. Two decades ago, it was Robert Mapplethorpe and Andres Serrano who were the targets of right-wing indignation. In 1989, a threatened conservative faction was on its last legs and would be challenged by the Clinton phenomenon. Attacking helpless artists who want to make art, not headlines, was an easy diversion, a feint that drew attention away from the very real economic problems the nation faced. Today, two new victims have emerged under strikingly similar circumstances—a right wing threatened by the repeal of “don’t ask, don’t tell” and an economic crisis of their own making.

The new attacks struck down the artist David Wojnarowicz, who died twenty years ago, and the political Italian street artist Blu. This time, one of the culprits was presumed to be open-minded: Jeffrey Deitch of the Museum of Contemporary Art in Los Angeles. In an unexpected act of apparent censorship, Deitch ordered Blu’s supposedly offensive mural to be whitewashed. The other violator, the venerable Smithsonian Institution, was under the usual monetary pressure from the usual suspects, the Catholic League, led by Bill Donohue, and the incoming Republican Speaker of the House, John Boehner. The Smithsonian removed Wojnarowicz’s video, A Fire in My Belly (1987), in which ants crawl over a crucifix, from an important exhibition on homosexual identity. The fact that one museum was under political pressure and the other was not indicates that the issue of censorship needs to be looked at from another angle. When and why does censorship of the arts occur?

Smithsonian Institution’s Hide/Seek: Difference and Desire in American Portraiture

Censors are never right. History proves them wrong every time.

When the Corcoran refused to show the Mapplethorpe retrospective, The Perfect Moment, the art world united in its condemnation, and the museum has never recovered from the stain on its honor and reputation. Twenty-one years later, the Smithsonian, a federally funded institution, was forced, like the Corcoran, to sacrifice the integrity of art for financial survival. And like the Corcoran’s, the Smithsonian’s solution is short-term and comes at the expense of moral and ethical principles. If the art was good enough to have been selected, then it is worthy of being defended. The decision by the Smithsonian was particularly strange, given the sea change in public opinion about gay men and women since the deaths of Mapplethorpe (1989) and Wojnarowicz (1992).

The other factor that adds to the ill-timed act of self-censorship is that the Catholic Church, a major actor in this new drama, has lost all credibility. In today’s newspapers, December 18, there are two new stories—one about the Catholic Church sheltering a rapist and, in the other, a pedophile. And that was today’s news, not the news of three or four years ago. Where does the Church get off objecting to the art of a man who has been dead for twenty years? Dead, because conservative factions, including the Catholic Church, blamed the victims of AIDS rather than doing what Jesus Would Do—help the sick and the helpless.

One can perhaps understand the Smithsonian, which was facing a Republican-dominated Congress in the fall. But the repeal of “don’t ask, don’t tell” suggests that the decision to censor its own exhibition is, if nothing else, ironic and, worse, pointless. But the whitewashing of the mural in Los Angeles is a strange act on the part of a purportedly open-minded director of a major museum. According to the story, the Italian street artist known as “Blu” had worked with Jeffrey Deitch before and actually stayed with the director of the museum before he painted the mural. Given the checkered history of murals at the Geffen, it is hard to believe that Deitch did not ask Blu what his intentions were.

Censored Mural

Christopher Knight, who defended Deitch, stated that the neighborhood where MOCA’s annex, The Geffen, is located is sensitive to art projects. Knight pointed to problems with a mural painted by Barbara Kruger in 1989, that year of art censorship, as an example of art offending the Japanese-American community of Little Tokyo. The Geffen is wedged between the Japanese-American National Museum and the “Go for Broke” War Memorial for the Japanese-American soldiers who died in World War II. [1]

MOCA was concerned for the feelings of the Japanese-American community, due to the proximity of the “Go For Broke” site.

Kruger’s first mural offended because it was a simple quotation of the Pledge of Allegiance. For the community, the Pledge was movingly depicted by Dorothea Lange’s photograph of Japanese-American schoolchildren with their hands over their hearts. These children would spend years with their parents in internment camps. During those war years, Little Tokyo was emptied out, and when the community returned, it was haunted by one of the worst violations of the Constitution in American history. Kruger painted a new mural with the theme of who had the right to speak, a powerful political statement in its own right, especially in that location. That the community approved of the new mural indicates that Little Tokyo is perfectly capable of absorbing political discourse.

Barbara Kruger, artist, 1989

However, this time, the Japanese-American community had no time to intervene in the painting of Blu’s mural. In “MOCA’s Very Public Misstep,” Knight made a good point that the community needs to be consulted about public art before it is placed in an environment that is, like any site, fraught with politics and history. For whatever reason, this very important step was overlooked and the director, acting quickly, arguably too quickly, had the mural painted over the day after it was finished. [2]

Blu’s mural

Censorship, in the Twenty-first Century, is a particularly futile gesture. Blu’s mural was extensively photographed, first in its completed state and then in its whitewashed condition of destruction. Like Wojnarowicz’s video, A Fire in My Belly, which is on YouTube, the images are easily obtained over the Internet. [3] The images of Blu’s mural are everywhere. The offending mural showed rows of coffins, covered, not in the American flag, but in dollar bills. Clearly, the artist was making a statement about America waging unpopular and illegal wars of choice for the sole purpose of making money for Halliburton and seizing Iraqi oil.

Who knows what the Japanese-American veterans and their descendants would have thought of the mural? Maybe they would approve of the anti-war statement: lives should never be squandered (hence the $1 bills) for an unjust cause. Lives are too precious and too priceless to be laid down for anything less than a fight for survival. Perhaps using soldiers as pawns in political wars would not go down well with a group—the legendary 442nd—that was the most decorated (21 Medals of Honor), the most wounded (9,486 Purple Hearts), and the most killed in the history of the American military.

If the feelings of the Japanese-American veterans were the Museum’s concern, then the view of the institution was not particularly nuanced. There was a significant and vocal group of young men, interned in concentration camps, who took a principled stand against serving a country that had taken away the rights of its citizens. One of those conscientious objectors was Frank Emi, who died yesterday. According to the obituary in The New York Times, he was joined in his stand against the United States government by three hundred protesters in ten camps.

All these men were tried and convicted of evading the draft. [4] Emi was sentenced to four years in prison and served eighteen months until President Truman acquired a conscience and granted the young men a pardon. Called a traitor by those in the Japanese-American community who served, Emi explained, “We could either tuck our tails between our legs like a beaten dog or stand up like free men and fight for justice.” The Japanese-American community, like every other group in America, is diverse. But surely it would agree on freedom of speech?

The argument that Deitch’s action was a misjudgment because he did not consult with the community first is not very convincing, because the community was not brought into the discussion either before or after the mural was painted. Rather than opening the doors for a frank and honest discussion of wars and why they are fought, Deitch slammed the door with a unilateral decision.

Whitewashing the Mural

Writing in The Huffington Post, my friend, Mat Gleason, has stated that the Smithsonian censorship is not like that of MOCA, [5] citing the proximity of the “Go for Broke” site.

But I beg to differ.

So did Peter Clothier in “Censorship: Coast to Coast,” in Huffington Post, December 17. In fact most observers of this fiasco agree: censorship is censorship. No amount of whitewashing will undo what Deitch has done. [6] However, I will agree with Gleason that the two acts of censorship are different. The Smithsonian caved in to right-wing politics, to the habit conservatives have of latching onto a perceived “assault” on “family values” and attacking it. Usually, these people move on but leave in their wake very real and very lasting damage.

Undoubtedly it is the goal of the religious right to harm “elitist” institutions, and that is all the more reason to stand up to the hysteria of such fanatics who would take away freedom of speech. It should be recalled that the heroes of 1989 are not Christina Orr-Cahall of the Corcoran but the late Ted Potter of the Southeastern Center for Contemporary Art and Dennis Barrie of the Contemporary Arts Center in Cincinnati. Both men stood up to their critics and survived with their honor intact.

And then there is the issue of Street Art itself. Did the censorship of Blu’s mural occur because the director was afraid that art would get dragged into politics? If so, he clearly does not understand street art. Street art is often political. Deitch invited unfortunate comparisons with Christine Sterling, who infamously whitewashed the Tropical America mural by David Siqueiros in 1933, a year after it was painted.

Siqueiros Mural Restored

The irony is doubled with the Getty at this moment engaged in a years-long restoration of the work, obscured for decades. Street Art is, by its very nature, an outsider art. The artists, many of whom practice in anonymity, represent the last of the avant-garde. Supposedly, the role of the contemporary artist is to challenge the public, but most of the prominent contemporary artists have long since been co-opted by the Establishment.

Postmodern thinking asserts that the avant-garde is dead and that there can be nothing new in art; therefore, so what? But does the avant-garde, which merely means “advance guard,” have to be about the new and the novel? Does the unfortunate fact of belatedness mean that an artist cannot confront a public or shock the art audience out of its complacency? Like many observers of the current art world, I am appalled at its moribund state: it is doing the Same Old, Same Old, or, to quote Jean-Michel Basquiat, “SAMO,” the “same old shit.”

Street artists seem to be the last of the Old Guard: the only artists willing to prod people into doing actual thinking. An excellent example of the artist as gadfly was on view the other day when an unnamed street artist put up a poster of Jeffrey Deitch as the Ayatollah. [7] The judgment of the street artist may be as harsh as the comparison, but the poster begs the question: is censorship ever justified?

protest poster

Two very real problems have been raised by the actions of MOCA. Public art is always a negotiation between the world of art and the world of the public. If there is a gap between the art and the public, it is because the art world deliberately created that gulf called the “avant-garde.” Can any form of public art remain avant-garde or have the pretension of being thought provoking? The case history of Richard Serra’s Tilted Arc would suggest that public art must always be an art of compromise. On both sides. In the case of MOCA’s actions, there seems to have been no negotiation, no discussion, and no compromise, just censorship. If the artist is to have any role in society as an individual with a unique mission, then is it not to stand tall for freedom of expression? Are not artists our first line of defense against those who would silence eloquent voices?

If the career of Banksy is any indication, street artists can slide into the mainstream and put themselves in danger of compromising their principles. Of all people, Shepard Fairey has condoned the effacement (called the “buffing”) of Blu’s mural. After a brief flirtation with accommodation, Blu decided he was not happy with being censored. One wonders what will happen to the upcoming exhibition, Art in the Streets, this April—how many artists will withdraw because of MOCA’s act of censorship? After a problematic overture to the exhibition, hopefully Deitch can redeem himself this spring with another of those landmark shows that allowed MOCA to make its mark. MOCA’s 1989 exhibition, A Forest of Signs, provoked this powerful mural by Barbara Kruger. Its message still says it all:

Who is Free to Choose? Who is Beyond the Law? Who is Healed? Who is Housed? Who Speaks? Who is Silenced? Who Salutes the Longest? Who Prays Loudest? Who Dies First? Who Laughs Last?

After the Second World War, the veterans came home to parades and to the GI Bill that rewarded them for the sacrifice of years of their lives in the service of their country. One of the greatest benefits of this bill was a free education and cheap home ownership. A GI could buy a home with little money down, and without much ado, a piece of the American Dream was his. That is…if this GI was white. Thanks to the GI Bill, thousands of average white males were able to achieve middle class status, but the many men of color who had also fought for democracy were “redlined.” To be “redlined” was to be defined as less than creditworthy due solely to the color of one’s skin. A man of color might possibly get a loan, yes, but it would be at a higher interest rate and the monthly payment would be higher than that of his white counterpart. The higher the payment, the harder it was to keep up with the monthly payments, a self-fulfilling prophecy forced by the banks upon the veteran of color. Fast forward sixty years and “redlining” is renamed “sub-prime,” and thereby hangs our tale.
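
To make the arithmetic of redlining concrete, here is a minimal sketch in Python, using the standard fixed-rate amortization formula and purely hypothetical figures (a $10,000 loan over thirty years at 4 percent versus 6 percent); the numbers are invented for illustration, not drawn from actual GI Bill or bank records.

    # Hypothetical illustration: the same loan priced at two interest rates.
    # All figures are invented for the example.
    def monthly_payment(principal, annual_rate, years):
        """Standard fixed-rate amortization formula."""
        r = annual_rate / 12              # monthly interest rate
        n = years * 12                    # number of monthly payments
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    loan, term = 10_000, 30               # hypothetical postwar house loan (dollars, years)
    for label, rate in [("prime rate (4%)", 0.04), ("redlined rate (6%)", 0.06)]:
        print(f"{label:>18}: ${monthly_payment(loan, rate, term):.2f} per month")

    # Output: roughly $48 versus $60 a month. The redlined borrower pays about
    # 25 percent more every month for the same house, which is exactly the
    # self-fulfilling prophecy described above.

Run as an ordinary Python 3 script, the comparison needs nothing beyond the standard library.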

Inside Job, brought to you by the same man, Charles Ferguson, who made No End in Sight: The American Occupation of Iraq (2007), continues the sordid tale of redlining, a.k.a. sub-prime. The other significant documenter of the follies of our time, Michael Moore, is more sardonic, more sarcastic than Mr. Ferguson, but the sheer lunacy of the actors in what is nothing less than the Financial Crime of the Century is so unbelievable that the audience was howling with incredulous laughter. By now, most Americans have a dim idea of how a handful of New York bankers lost an unfathomable amount of money. Combining reckless and immoral behavior within the financial sector with the equally inexcusable passage of tax cuts while two unnecessary wars were being fought entirely without funds resulted in the Mother of all Meltdowns. If you were middle class and had any money in your house, your pension, or your stock portfolio, chances are all your investments are gone, never to return. We know what happened: we have only to look at—or should I say for—our vaporized retirement accounts. What we don’t understand is why this happened.

Getting back to redlining—this was a bank practice to “safeguard” the bank’s “risk,” but the banks’ policies were underwritten and supported by the American government, making redlining not a private decision but public policy. The field of public policy was not the full-blown academic pastime it is now, but whether or not such practices are named, they amount to social engineering of the general public by private interests on a massive scale. The bankers and the government had two choices. One, you can argue that allowing as many people as possible access to the American dream gives the participants a stake in the society, as the result of a literal investment in the nation’s future. If, as the result of social inequities, a certain group of people were disadvantaged, it would make sense to help them participate, by giving them a lower rate of interest and lower payments over a longer period of time. It would be important to incorporate everyone into society for the benefit of everyone. Or two, the public policy could deliberately exclude as many potential players of color as possible, thus creating a permanent underclass of color, disenfranchised and disaffected, alienated and unable to support itself, costing the government great expense in the short and long run. The post-war public policy of the banking industry and the American government chose the second path, which led to a decade of riots and protests in the Sixties from people who could see the American Dream as lived by whites and yet capriciously denied to them.

Public Policy is an academic discipline, but it is clearly an ideological position. If a government deliberately creates an underclass of color, the reason cannot be an economic one in terms of the benefits to the nation as a whole. An underclass is not cost-effective. So why create and perpetuate one? A better question would be cui bono? Who benefits? For a start, the white middle class benefits, not necessarily financially, because it will have to pay the cost of crime, welfare, and the huge price of controlling and maintaining a large group of very discontented citizens, but in terms of a warm feeling of superiority. The white middle class elevated itself at the expense of divesting people of color of their rightful share, as citizens, of the American way of life. Of incalculable cost is the loss of talent and national productivity from not allowing a large percentage of people to participate in the nation’s growth. What happened was not economic policy but a belief system, an ideology of inequality and superiority.

But why did redlining return? After the Civil Rights legislation forced a positive public policy upon the nation, the middle class of color grew and a growing number of Asians, Blacks, and Hispanics achieved bourgeois status. By the beginning of the Twenty-first Century, we knew full well how much America had benefited from allowing an Oprah or a Marco Rubio or a Steven Chu to rise to their potential. Why repeat the same mistake by rolling out the sub-prime one more time? The answer to the question of why a government would introduce and enforce deleterious public policy is that wealthy financial interests would benefit. In other words, so that a few thousand people could get massively wealthy, the rest of us had to lose everything. And these few rich people—old white males for the most part—are more powerful than all of us put together.

Charles Ferguson’s film is perhaps most effective in the last two segments, when he discusses the disgrace of the so-called experts and the lack of criminal or social accountability. Even in Enron: The Smartest Guys in the Room (2005), there was some measure of criminal liability, and one of the participants was honorable enough to kill himself. But here, as Inside Job notes, we have lost even that modicum of morality. All the perpetrators walked away, richer than ever, completely unscathed, and totally unrepentant. Max Weber (The Protestant Ethic and the Spirit of Capitalism), who linked the capitalist impulse with the Protestant ethic, would be amazed at the distance between the will to power and profits and common decency. Jean-Jacques Rousseau (The Social Contract, or Principles of Political Right), who was the main inspiration for the Declaration of Independence, would be horrified at the extent to which the fabric of the Social Contract upon which America was founded has unraveled. It will take generations to recover from the moral, ethical, and financial damage done to America by a few greedy people.

The leaders of the financial institutions refused, to a man, to speak to Ferguson, underlining their total disregard for public accountability. It is quite possible they think they have done nothing wrong. Ferguson portrays these men as unthinkably isolated from the real world and solely motivated by profit. Nothing matters to them but short-term gains. Like five-year-old boys, and badly behaved ones at that, they were—are—all Id, no Ego and no Superego. If one replaces morality with the profit motive, and ethics with answering to the stockholders, then lending money to people who could never make even the first mortgage payment, for the sake of a brief burst of cash, makes perfect sense, and having to strong-arm the government into bailing out a few banks is a mere temporary inconvenience. The suffering of the perpetrators was brief, a few moments of grilling before a powerless Senate committee. Martha Stewart was fined $30,000 and was sentenced to five months in prison and five months of house arrest. And her crime? She lied to the FBI. For that you go to jail. Wrecking an entire economy for the foreseeable future? You get a huge bonus.

I was amazed at the assumption, obviously shared by all of the men of Merrill Lynch, Bear Stearns, Goldman Sachs, Lehman Brothers, et al., that they are smarter than we are. We could not possibly understand, they insist. We are not smart enough to regulate them, they state. We must not fire them, they protest. We need them, the experts. Actually, no, we don’t need these men. And they are not smart. Even I am smart enough to know that the economy is now a global one and has been for decades. I was stunned to learn from Inside Job that Hank Paulson, the Secretary of the Treasury, did not realize that letting Lehman fail would have global impact. Was he too insulated to know that Lehman had foreign branches? Was he too panicked to think the decision through? Apparently, Paulson was, if nothing else, so parochial in his concerns that he did not give his foreign counterparts a heads-up, and international monetary chaos ensued. Undoubtedly, the debacle in New York City would have had global ramifications with or without Paulson, but his strange lapses in a time of crisis are inexcusable. Likewise, we have to keep in mind that Paulson and his team, Timothy Geithner and Larry Summers, were the Wall Street insiders who deliberately panicked Congress into bailing out the (their) banks. Their action is the moral equivalent of asking innocent bystanders to repay a bank that has been robbed by masked bandits.

And these are the Wall Street wizards President Obama has put in charge of what’s left of the economy: Timothy Geithner, who ran (didn’t run) the New York Fed and had the closest proximity to the insanity of his colleagues, and Larry Summers, who thinks women can’t do math. If I were as irresponsible as they are, I would be fired immediately. But Obama has put them in the lead of Operation Nothing Will Be Done. The only gesture of seriousness I have seen from Obama is the appointment of Elizabeth Warren, who will attempt, in the face of the Old Boys’ Network, to protect us, the meek and helpless, from the clutches of the likes of Chase. The gender component of the disaster has been discussed at great length. Women, it is asserted, are more prudent when it comes to money. Whether or not this is true, we still do not know, but the behavior of the males was nothing short of astonishing in its levels of irresponsibility and immaturity. Most normal humans invest in Wall Street more or less blindly, through a variety of pensions (now all gone), and they trust a “broker.” But, like the film about Enron, Inside Job takes a look at the brokers. Which brings us to sex, drugs, and rock ‘n’ roll—well, maybe not the rock ‘n’ roll, these guys are too stuffy, but certainly to sex and drugs.

We are told we are too uneducated and ill informed about the mysteries of economics to understand the ways of Wall Street. But economics is simply common sense. Ask yourself, who is the typical broker on Wall Street? A twenty-five-year-old white male, otherwise known as “the Talent.” Now ask yourself a common sense question: would you give your retirement account to your neighbor’s twenty-five-year-old nephew to invest and to handle? Of course not. Ask yourself another question: would you give your retirement account to your neighbor’s twenty-five-year-old nephew, who is high on cocaine because he had spent the night boozing with prostitutes, to invest and to handle? You just did. If you had money in Merrill Lynch, Bear Stearns, Lehman Brothers or Goldman Sachs or AIG, a twenty-five-year-old with a college degree and maybe a couple of years of business school was given your money and your dreams of retiring to play with. And he was on drugs, and that’s who was in charge of your money—your money that is now gone. And the twenty-five-year-old man? Still snorting.

As an academic, I was most distressed at the fall of the academics involved in the Collapse. Cynically, I had thought that the profession of economics could not fall any lower after the debacle of Reaganomics in the 1980s. I was wrong. The most horrifying interviews in the film were those of the academics who styled themselves as economists. Economics is a dismal science indeed. Inherently a soft, social science, it has seen its practitioners attempt to distance themselves from the mushy softness of the humans (who actually are the actors) by “hardening” the discipline into a phallic pseudo-science in which a barrage of numbers and a screen of mathematical formulas separate actual economic activity from theoretical economic models. The result is a fatal separation of the real world from an academic discipline. Economic activity is at the very heart of society. Ask Marx. Or better yet, ask Nietzsche.

According to Marx, the economy is the secret engine of any society, and it is this driving force that shapes human relations. Reduced to abstractions, like money, human beings are alienated from themselves and each other and are mere pawns in a system of exchange. People have been dehumanized, and the moral ties that hold us together become weakened in favor of the profit motive. Echoing Adam Smith, Marx understood that capitalism, unfettered, would benefit the few at the expense of the many. Nietzsche, writing in a state of syphilitic madness, spoke of the Übermensch, the Superman who seized power because he had the will to do so. This Superman was above normal beings in his Will to Power and therefore deserved any power he obtained. The Superman is a celebration of the Id, the rejoicing of the irrational, the acting out of the Dark Side, our Dionysian Other in all its glory. Sound familiar? The titans of Wall Street, already rich, already powerful, only want to seize more wealth and wield more power. Nothing will ever be enough for those people. The Crash of ’08 is a study of the Irrational Man.

John Rawls wrote A Theory of Justice in 1971, at the end of the Civil Rights era and the period of war protests. He had every right to assume that there was a social contract, a public morality founded in rational thinking. Doing the right thing for the entire society made moral and economic sense, as the Civil Rights movement and the eventual ending of the Vietnam War seemed to suggest. There was a time when Rawls was taken seriously, and one can only wonder what he would have thought of the sorry spectacle set before us in Inside Job. Unfortunately, Rawls died in 2002, having outlived his time. By 2002, the forces of irrational behavior had taken over and the nation was on a slippery slide towards the abyss. Leading the way were dry and dull economists who might have known better if they had not sold out long ago to the lure of Washington. Worse than being merely dazzled by their brush with power is the apparent lack of training and education on the part of the individuals interviewed for this film. I assume they all have advanced degrees. Business schools offer professional degrees, that is, three more years of specialized training, but academics have different degrees, doctorates, which stress scholarship, research, and the rigorous application of theories. For example, as a professor, I have a combined thirteen years of education, from undergraduate work to a doctorate. I assume the economists in the film have somewhat less graduate education, as art history is a notoriously difficult discipline, but I also assume that these individuals should have mastered the basics; after all, they all have posh jobs at Ivy League schools. But not so. None of these men has gotten beyond the high school level in academic integrity and proficiency.

Up to this point, the audience for the film had been watching the recounting of the horrors of financial inventions, such as CDOs and derivatives, in stunned and subdued silence. But when the parade of economists marched on screen, the audience roared with laughter. To begin with, there is the imperious Dr. Martin Feldstein, who has been in the service of Republican administrations since Nixon. He comes across as an unflappable and bemused Gulliver, baffled that the Tiny People are attempting to prick his conscience. Wrong? he ponders. What, me do anything wrong? How could that be? And he is a professor at Harvard, teaching impressionable young people. Presumably the students will get their ethics elsewhere. Then there is Glenn Hubbard, the Dean of the Columbia University Business School, who puffed up indignantly at the impertinence of inopportune questions about uncomfortable issues such as conflicts of interest. There is a revolving door between these academics and the political establishment, meaning that their so-called scholarship is for sale to the highest bidder. Hubbard, who regretted his decision to grant (in the kingly sense) an interview, began a countdown: “You have three minutes left.” The interviewer asked simple common sense questions and caused great offense. We are left with the impression of a condescending and pompous man, convinced that we are unworthy of an explanation.

That attitude—that the public is too stupid to comprehend the convolutions of economic theory—oozed from every pore of the academics. However, Frederic Mishkin, a professor at Columbia, was not as smooth and self-righteous and un-self-consciously immoral as Hubbard and Feldstein. I have seen just enough of Lie to Me to know that if a person says one thing, all the while shaking his head to the negative, either he is lying to us or to himself. His subconscious is frantically telegraphing “no, this is a lie, this is not true, don’t believe a word I say.” Mishkin fled his government post, as he put it, to “write a textbook.” Even I, in my lowly academic post at an art college, know that textbook writing is something no self-respecting academic would do—that’s a task for a group of graduate students. And who writes textbooks anymore? Even the interviewer, presumably not an academic, did not accept his lame excuse for his Profile in Shame. Before he was writing his textbook, Mishkin had “delivered a paper” on the state of the three banks of Iceland, huge conglomerates that had gambled the entire nation away. These banks, unleashed by an unwise government, lost three times the GNP of the country. But Mishkin, hired for over $100,000 by the Iceland Chamber of Commerce, had given these banks a glowing report. The interviewer asked how he could have gotten it so wrong. Well, Mishkin explained artlessly, one trusts one’s friends. In other words, instead of going to Iceland with a forensic accountant and going over the books for a few months, Mishkin took a hundred grand and wrote a report for an agency whose job it was to boost Iceland. The “report” or the “paper” was really an advertisement, an inducement for investors to be lured in. Instead of research, Mishkin produced a document based on gossip. Not his fault that his “friend” was wrong about Iceland.

The President of Columbia refused to comment on the University’s integrity and the possible conflicts of interest. The President of Harvard likewise refused to comment on the loss of integrity over conflicts of interest among that University’s scholars.

One of the best points the film makes is the extent to which leading Ivy League universities have been compromised morally through conflicts of interest such as Mishkin’s ethical failure. Professors for sale. Professors who pretend to teach. Professors who apparently do not know the first rules of research. Did Mishkin at least do a Google search on Iceland? He might have learned something. And this man has tenure. This man is allowed to teach economics and business. I would have given Mishkin an “F” in my class. I would have made him do his work over until he learned the basic skills of scholarship. In the end, all I can do is to invite the students of Columbia and Harvard to come to my little art college here in Los Angeles. I, and my colleagues, will teach you how to do research and how to write a real research paper. As for the professors in the film, undoubtedly, I am beneath their notice, but I came away from Inside Job wondering if I know more about the Dismal Science than they do. One can only hope that someday they will find the late-breaking courage of David Stockman and plead guilty to their scams. In an August interview with Guy Raz of NPR this year, Stockman said that the current Republican economic policy was,

Utterly disingenuous. I find it unconscionable that the Republican leadership, faced with a 1.5 trillion deficit, could possibly believe that good public policy is to maintain tax cuts for the top 2 percent of the population who, after all, have benefited enormously from this phony boom we’ve had over the last 10 years as a result of the casino on Wall Street.

And I blame Paulson on it. I blame the Bush White House. They basically sold out the birthright of the Republican Party when they bailed out Wall Street unnecessarily, in a state of complete panic in September 2008. That’s really, at the end of the day, one of the greatest misfortunes in fiscal governance since the Reagan revolution tried to straighten things out beginning in 1980.

Many people would disagree with Stockman that Reagan tried to “straighten things out.” Inside Job starts, as do many other observers of public policy, with Ronald Reagan’s social re-engineering of America. It was in the 1980s that the American Social Contract began to be shredded. As the movie pointed out, as have many other sources, it was precisely in this era that the income gap between rich and poor began to expand. The rich were enriched at the expense of the less fortunate, who began to lose ground. It is those left behind who became prey, twenty years later, for the bankers who would talk them into sub-prime loans, which could be bundled into tranches and sold and resold until no one knew who owned the houses that were being repossessed. The Reagan Administration opened the door for Greed and irresponsibility, and the Clinton Administration held the door open. After the S & L meltdown, deregulation began in earnest and continued with abandon after the Tech Bubble. Then came the Great Recession, and the hounds of hell of total deregulation have been let loose with the midterm election of 2010.

People have been wondering to what point back in time we have been pushed. Some have suggested the Fifties. Wrong. The Fifties was a decade of government intervention in the economy, building freeways, creating the military-industrial complex, fueling the space race. Some have suggested that, because we are now getting ready to “Hoover” the economy, we are witnessing the end of the New Deal. Wrong. We are re-experiencing the Gilded Age of the late Nineteenth Century, a time of rampant freebooting and raging, unrepentant capitalism. Wall Street is now the Wild West and the Outlaws are running the town. The criminals have taken over the prison. The inmates are in charge of the asylum. Inside Job makes it clear that the Obama Administration will not help us get the bad guys. There will be no punishment for any of the architects of the Tragedy of 2008.

Much has been written, by other critics, about the irony of the inventor of the social network being so unsocial. But I believe the vaunted anti-sociability of Mark Zuckerberg (Jesse Eisenberg) goes a bit deeper than a guy who can neither make friends nor keep them. The film can be read on two levels. First, the movie could be understood as a fictitious account of events that will never be recounted, due to non-disclosure agreements—also an irony, when Zuckerberg’s Facebook discloses everything. Here we have all our usual suspects, the elite villains, the Winklevoss twins (Armie Hammer), the underdog, Zuckerberg, the geek, of course, and the betrayed friend, Eduardo Saverin (Andrew Garfield), and we are on familiar territory. Supporting characters, playing predictable roles, are the rejecting girlfriend, the pig-headed and obtuse college president, Larry Summers (Douglas Urbanski), and the devil who introduces the innocent to the finer pleasures of merchandising an idea, Sean Parker (Justin Timberlake), the founder of Napster. This reading is a Revenge of the Nerds plot. But the second, alternative reading suggests that the writers, Aaron Sorkin and Ben Mezrich, might be aiming at a social commentary on 21st century ethics.

The entire film turns on a series of lawsuits against Zuckerberg, who was accused of stealing the ideas of others and then cutting them out of the proceeds. The narrative is a series of flashbacks, told from the perspective of lawyers and propelled by their questions. Zuckerberg, himself, could occasionally be roused to react with contempt to the complaints of the losers surrounding him. Eventually he will pay off these nuisances to make them go away. The issue that Sorkin explores in his film is a basic but 21st century question—who owns an idea? An older generation would speak passionately about intellectual property rights. An author owns her book. A writer owns his words. The song is the property of the composer. A sculpture is the sole handiwork of the maker. But, despite the myth of “owning” your own thoughts, the actual truth is far murkier. An artist, for example, is not like a musician and does not own the work; the collector does. An artist, unlike the composer, does not get royalties. An artist, unlike the advertiser, cannot own the image; the “actual” owner, such as the museum, retains the copyright. Clearly, Zuckerberg, however fictitious his impatience with the legal proceedings, had a point: it is not the originator of the idea who “owns” the idea; it is the producer of the idea. The publisher owns the book, not the writer.

The Winklevoss twins speak indignantly and passionately for the old-fashioned concepts of honor and integrity and ethics. In their (elitist) minds, they have hired Zuckerberg to do their bidding. Never mind that they wanted him to work without a contract, never mind that they were trying to get him to do the work they could not—the twins are horrified and shocked that their “nerd” took their idea and transformed it into something else entirely. It is not that the twins are Aryan, privileged, and scions of the powerful, and that Zuckerberg is portrayed as a homely Jewish dork—that’s just window dressing—it is that the twins are mired somewhere in a Tom Brown’s Schooldays idea of a “gentleman’s agreement.” And poor earnest Eduardo, who is so, so supportive of his friend and who is so, so clearly out of his depth—what can we say about him? Have any of these people ever heard of the phrase, “get it in writing”? Has no one ever taught these boys that you don’t do business with your friends? Are there no lawyers at Harvard? One can only assume that the parents of the twins and Eduardo told their sons that they were going to Harvard to “make contacts” with future associates. What a lovely and quaint idea.

The problem with the naïfs of Harvard is that those days are past—a time of old-fashioned ideals, such as owning an idea. The character of Sean Parker should have been better drawn, for he was an instigator (along with his partner, Shawn Fanning) of the 21st century notion that culture cannot be possessed. Napster was a music-sharing program that distributed single songs, freed from albums or record producers, and spread the music among those who should “own” music—music lovers. Of course the producers, the publishers, and the executives who live off the talents of others disagreed and brought Napster down. But as the Parker character pointed out, he started something. True, Napster is still up and running today, operating legally, but that is not the significance of Napster. Preceding Facebook by four years, Napster was a generational expression of the idea that there are some things that should not be “owned,” and that, even where music is “owned,” the consumer should not be ripped off or exploited by the greed of the producers. The music business has been forced to adapt to this new terrain of sharing. The Harvard students who gave their idea to the brightest computer nerd on campus should have remembered Napster and what it stood for—the free and unfettered dispersal of ideas.

Zuckerberg, who in person is a lot more sturdy and buffed up than Eisenberg, took the position that it is impossible to own an idea and that the idea ultimately belonged to the one who could execute it. Not only could he make Facebook possible, he also had the vision for the networking system. The Winklevoss twins were thinking small and exclusively—Harvard only—while Zuckerberg began to see the possibilities of social networking among thousands of “friends,” who didn’t necessarily have to know one another. It is Parker who truly opened his eyes, because Napster had fought with musicians and record companies in the olden days, before music makers began to see that sharing their music could help their careers. For many bands, ten years later, the music recorded is just the beginning of the money-making possibilities, which include merchandise and tours. The music becomes a loss leader, an inducement to come to a concert. Musicians now have more control over their creations. Imagine what it would have meant to the Beatles if they could have retained their own songbook so that they could have controlled their creative efforts. Facebook became, in Zuckerberg’s hands, a creative idea about a new way of social communication on a global scale. It is Parker who showed Zuckerberg that an Ivy League college is not the proper breeding ground for social revolutions.

Only when Zuckerberg arrived in California and soaked up the possibilities in the home of computer culture did he begin to think outside the box of ownership and control. When, far too late, Eduardo flies to San Francisco, it is clear that he has been passed by, all while trying to operate in the old-fashioned way, raising money in New York City. There is something to be said for protecting your investment, not letting it out of your sight. Eduardo took his eye off the ball. The Winklevoss twins communicated with Zuckerberg through phone and e-mail, without trying to forge a relationship. Neither of the parties suing Zuckerberg ever understood what Facebook was really about—making and maintaining social contact. To their dismay, it is they who are left out in the cold and rain; it is they who were not social.

The Social Network is about two worlds, the Old and the New. The Old World is represented by the Winklevoss twins and their lawyers, full of outraged honor, but wanting a piece of the action that evolved out of a conversation. The New World is Mark Zuckerberg and his Facebook, where anyone can be a star. In the Old World, there are Gatekeepers who control culture, deciding what culture is, who is allowed to represent culture, determining for whom to open the gate. Once corralled, the artist loses control, forfeits his rights, gives her products over to merchandisers. In the New World, there is technology that has empowered anyone and everyone to make culture, to participate, to take an active role, and above all, to maintain control over one’s own work. The word is “control,” not “ownership.” The name of the game now is getting your work out and letting it find its audience.

The Gatekeepers thunder that nothing that has not been legitimated by them can truly enter the privileged sanctuary of the Accepted or the Anointed. But just as anyone can present her face on Facebook, an author can get published by any number of publishing services, a singer can put his song on YouTube, and a journalist can start her own newspaper and call it The Daily Beast. Culture has been changed into a game we can all play. We are all cultural producers now. What has been lost to society is the exclusivity aspect of the game, the sheer tactic of keeping the many out for the (financial) enhancement of the few. What has been gained is an explosion of art making. Some ideas are good and take on a life of their own, like Napster and Facebook. Other ideas await their audience, like van Gogh. True, much that has been spewed upon and into the Internet has died from neglect, mostly on the part of the maker. But today, talent can assert itself and new ideas can come onto the marketplace unimpeded. The Controllers are trying to figure out how to reassert control.

A new ethic is emerging, one based upon sharing, not restricting, access to knowledge. A website with a password is a website passed by. A new integrity has been formed, one based upon allowing talent and creativity to express itself; an ethic of not trying to create a “culture” based upon those who are “in” and those who are “out.” The new honor repudiates the idea of social control through rejection and discrimination and refuses to accept the evaluation of the Opinion Makers. It is simply wrong to prevent someone from contributing to their own culture. The lawsuits against Zuckerberg were based upon a contradiction in terms, “intellectual property.” If anything cannot be property, it is the intellect. The Generation of the New World simply does not accept the old rules of ownership, nor do they play by the old rules of property. The new rule is—put it out there, maybe somebody will buy it. For some, like poor Eduardo, the “fundamental things apply,” but time is going by. Dogs may bark, but the caravan passes on.

At some point in time, Mount Everest went from being the impossible climb to the possible climb to the latest fun-filled vacation hike for the well-to-do. Jon Krakauer’s 1997 book, Into Thin Air: A Personal Account of the Mount Everest Disaster, is a horrified and horrifying account of what can happen when climbing Mount Everest becomes a lucrative business. This was the book that started my obsession with the unmitigated folly of climbing into what is named, very seriously, “the Death Zone.” I took Krakauer’s book along with me on a trip to Germany one summer, to save myself, after a hard day at museums, from German television. Since then, I have seen as many film and television accounts of mountain climbing as I can. The appeal of putting one’s life in grave danger for a few fleeting minutes at the top of the world escapes me. Why not take a plane ride and get the same view? I ask. The only answer to the question of why a mountain should be climbed came from George Mallory who answered, “Because it’s there.”

Mallory, who is the subject of The Wildest Dream, died on his last and fatal attempt to climb Everest in 1924. The voice-overs of Mallory and his wife were done by husband and wife actors, Liam Neeson and, in a sad last performance, Natasha Richardson. Mallory vanished into the fog of a rising storm, captured briefly as a dark silhouette through the lens of the expedition camera. For decades, the question of whether or not he had actually gotten to the top of Everest was an open one. He was attempting to ascend the treacherous north face, where there is an almost impossible rock formation that juts out, interrupting the otherwise consistent ascent.

Today, climbers prefer the easier and more consistent south face, but some climbers have managed to climb up on the north side, and the Chinese, in fact, fastened a steel ladder from point to point, allowing the inconvenient outcropping to be bypassed. But could Mallory have made it up the most difficult side of Everest? Did he make it to the peak, to the top of the world? This movie, The Wildest Dream, sets out to learn if such a climb, in primitive equipment, without the aid of a ladder, was possible. Ultimately, the answer was a qualified “yes,” because the contemporary climbers were wearing professional clothing during their “free climb.”

Based on the biography of Mallory of the same name (The Wildest Dream: The Biography of George Mallory, by Peter and Leni Gillman), The Wildest Dream is a recreation of the discovery of Mallory’s body by Conrad Anker, recounted in The Lost Explorer: Finding Mallory on Mount Everest by Conrad Anker and David Roberts, published in 2000. George Mallory was not climbing alone. His companion was the younger and less experienced climber, Sandy Irvine (Fearless on Everest: The Quest for Sandy Irvine, by Julie Summers, 2001). Both climbers disappeared on Everest, about eight hundred feet from the summit, and the body of Mallory wasn’t found until 1999.

The Wildest Dream reenacts that fateful climb, with Conrad Anker and his younger climbing companion, Leo Houlding, as green and as inexperienced as was Irvine. The two contemporary climbers recreate not just the climb itself; they also briefly wore the same kind of clothes and boots Mallory and Irvine wore and carried the primitive oxygen tanks they used. Today, mountain climbers are well equipped for the treacherous slippery slopes, dangerous winds, deadly cold, and sheer lack of oxygen at such heights. Then, nearly a century earlier, Mallory and Irvine wore gabardine suits and rather ordinary boots, studded with hobnails. The two were dressed like Victorian gentlemen hiking in the polite English countryside.

No wonder both Mallory and Irvine died. The film reenacted the moment in 1999 when Anker stumbled across Mallory’s body. Mallory, who is now buried, apparently slipped, fell, and broke his leg. Poignantly, he crossed his good leg over his broken limb to ease the pain and died, Anker tells us, within a half hour. Irvine’s body is still missing. Undoubtedly, with the progression of global warming, his and all the other bodies left behind on Everest will be discovered as the snows melt away.

The disastrous 1996 season was a warning sign that the mountain had gone from a lonely peak in the Himalayas to a well-traveled thoroughfare. A total of fifteen people died that season, eight of them on one climb, several of them professional climbers, including the American Scott Fischer. Thanks to the miracle of cell-phone communication, at least one of the climbers, Rob Hall of New Zealand, was informed that a sudden storm made it impossible for any rescue to be attempted. He was able to talk with his wife and tell her goodbye. This tragic season was made into a movie in 1997, Into Thin Air: Death on Everest, starring Nathaniel Parker as Rob Hall.

The season of 1996 cast a long shadow over Everest and the “sport” of climbing. The story behind Dark Summit: The True Story of Everest’s Most Controversial Season, by Nick Heil, published in 2009, had already been told on Frontline in Storm Over Everest in 2008. A sudden and raging snowstorm separated two parties of climbers; some made it down the mountain to the camp to wait for the storm to pass, knowing that others were still stranded. One climber, Beck Weathers, wrote a book about his experiences, Left for Dead: My Journey Home from Everest. Weathers was left on the mountainside but managed to struggle down to the camp and stagger into a tent, where he was again left for dead. Weathers, who was eventually rescued, appeared on Frontline. Frostbite had left his hands with fingers shaped like stubby ears of corn, huge and monster-like. Is such an experience worth the time, the money, the very real danger, and the horrific consequences?

One of the criticisms Krakauer made of the 1996 climb was that too many unqualified and inexperienced people with too much money tried to climb Everest, putting the lives of even seasoned climbers in danger. One would have thought that his words of warning would have been heeded, but in 2006 another disastrous climb took place. Twelve people died. That season alone, two hundred people made it to the summit. Among those people were the Nepalese Sherpas, professional climbers, and their “clients.” This “sport” of mountain climbing is a dangerous one, one to which many people have given their lives. The controversy over who should and who should not be allowed to climb Everest will go on as long as summiting the world’s tallest mountain remains a “business.”

One of the aspects of these documentary films about climbing Everest that always confounds me is the way in which the cameraperson or persons are treated as if they were absent. Who are these people? Other climbers who are trained in filmmaking? Surely they deserve some mention in the film? I am all in favor of spending your money any way you wish; I am all in favor of earning your living according to your skills, but there is something about putting the lives of many people in danger that gives me moral pause. If you want to climb Everest, please put your own life at risk, if you must, but this “sport” is not like big game hunting: disaster is always waiting for everyone. Not even the experienced are immune from death.

Although I wonder at this ethically questionable ambition of climbing the highest mountain in the world, it is not my place to judge, and I recommend reading the many blogs that discuss this contentious topic. Is Everest sacred ground? Is there no place on earth we can leave untrammeled? The slopes are littered, not just with lost bodies, but also with trash. Should we not leave this natural wonder alone? Meanwhile, The Wildest Dream made me wonder what George Mallory would think—his “wildest dream” is now a tourist vacation—and he died for that?

“Flipped is one of the worst, if not the worst movie, Rob Reiner has ever made. Flipped is a flop.”

So began the film review I started four weeks ago after I saw this movie. Out of pity and respect for Rob Reiner, I never posted it. I was reminded of the reason for my great disappointment in Flipped this morning when, channel surfing, I found Stand by Me (1986). I had not seen this film in a long time, but as I watched it, every minute reminded me of how superior this little gem is to Flipped. I wondered why. Flipped completely lacks the look, the feel, the authenticity of the Fifties. Perhaps the Fifties were simply too long ago. Perhaps we can no longer authentically re-imagine the decade with a genuine connected feeling and instead return via a cheap, badly colored simulacrum. The generation that loved Stand by Me is now remote from that time, and the new generation can experience Flipped only as a simple-minded morality tale of girl meets boy, girl loves boy, boy does not love girl, girl learns to un-love the boy, boy loves girl, and so on: a meet-cute couples film for tweens. In contrast, Stand by Me was an authentic coming of age film that transported the viewer of a certain age back to what Vern (Jerry O’Connell) called “a good time,” meaning an incomparable time in the lives of the four boys.

The “good time” was not about the Fifties, which are portrayed darkly, but about that singular moment in time when a young person comes of age and realizes what he is made of. I used the pronoun “he” deliberately, because such coming of age movies or books were and still are rare for girls. There is Little Women (1949 and 1994), based on the novel by Louisa May Alcott, and Bend it Like Beckham (2002), separated by well over a century and still sharing the same theme—women coming into their own by seizing the privileges young men take for granted. For young men, there is a plenitude of films and books in which an incident happens and a character is formed, A Separate Peace (1959) by John Knowles being, in my opinion, a great example of the genre. Somewhere in between Little Women and Bend it Like Beckham there was Pretty in Pink, also in 1986, in which the girl (Molly Ringwald) was allowed to test her mettle along with the boy (Jon Cryer). Although often understood as a “chick flick,” Pretty in Pink, like Sixteen Candles (1984), was a male conception (John Hughes), and both movies feature young women far more mature than the young men, who grow up (Anthony Michael Hall) or not (James Spader) through their relationships with the girls, who are vehicles for male maturation.

If there are no corresponding coming of age stories for girls with the impact of Stand by Me, perhaps it is because no literary precedents exist. In Pride and Prejudice, Sense and Sensibility, and Emma, Jane Austen foregrounded female characters, but her stories are all about the economic choices women must make to save their social lives. Jane Austen’s novels remain popular today because so little has changed for women: the success of their lives still depends upon economic decisions. The importance of money is great for both genders, but it lacks the universal symbolic significance of a (male) hero’s quest. The measure of the power of Rob Reiner’s movie is that Stand by Me became a touchstone for a generation. We followed the actual maturation of each boy with interest, noting with astonishment who did well as an adult and who did not. River Phoenix died in 1993 of a drug overdose on the sidewalk in front of Johnny Depp’s Viper Room; Corey Feldman is still alive, but given his frenetic lifestyle it is not clear why; Wil Wheaton went on to become the most derided character in the history of Star Trek on Star Trek: The Next Generation; Jerry O’Connell, aka “the fat kid,” became tall, dark, and handsome, starred in a number of successful television series and films, and married one of the most beautiful women in Hollywood. The success of Reiner as a director can be measured by the fact that we cared about the fate of the four young boys long after they had grown up.

There are no girls in Stand by Me. Based on a story, “The Body,” by Stephen King, the film’s only female characters, fleetingly seen, are a waitress and a mother. The four boys in the movie are alone in the world, abandoned and abused by their cruel and violent parents in one way or another. The proof of the neglect these children are suffering is the fact that they could be gone for two days before any adult noticed. Although they take a trek to find the body of a young man who was struck and killed by a train, they are really on a hero’s quest, a mythic journey straight out of classical or ancient legends. Like any hero, they must collectively overcome obstacles to reach their goal. On the journey, they must conquer many enemies, from leeches to trains to a gang of pseudo-delinquents, led by Kiefer Sutherland. They plunge deeper and deeper into the wilderness, encountering menacing technology (a train), going through forests and fields and fording a stream. During the two-day trek, some of the boys mature (River Phoenix and Wil Wheaton), some do not (Corey Feldman and Jerry O’Connell). The object of the quest itself, whether it be the Golden Fleece or Death itself, is irrelevant. The point, for each boy, is to face his fears and, in standing up to danger, to come of age.

Set in a small town of no particular distinction, Stand by Me captured those last moments in the Fifties before total suburbanization, before transforming modernization, before the new freeways would leave such communities to wither and die. Stand by Me is redolent with the nostalgia of a generation now middle-aged, looking back on the formative years of its lost youth. The entire tone of the film is elegiac, a meditation on loss. The teller of the tale is a middle-aged successful writer, played by Richard Dreyfuss, who, upon reading of the death of one of his boyhood friends, “Chris,” played by the late River Phoenix, returns to his childhood to write, at last, of the truth of that summer, those last days before junior high. Each boy has a task to do before the transition into maturity can begin. The boy who would grow to be the writer (Wil Wheaton) must learn to mourn the death of his beloved older brother (a very young John Cusack) and the boy who would grow up to be a lawyer (River Phoenix) must find the strength to escape, not just from the town, but from the identity imposed upon him. The other two boys do not stand up for themselves and find that they cannot confront danger and that their characters are weak.

The confrontation in the woods between the delinquents and the boys is not about a dead body but about manhood and what it means to be a man. Being a man means being a friend, a friend with character and moral strength. Although they do not realize it at the time, those moments are nothing less than the destiny of each boy. The two boys who ran away ran away from manhood, from possibilities, from life, and were fated to remain trapped in a nowhere town, one as a forklift operator and one as an odd-jobs man. But Stephen King seemed to understand the classical origins of his tale well. The knife Kiefer Sutherland uses to threaten “Chris” comes back to him in the end, for the lawyer is senselessly killed in a knife fight. Stand by Me became a classic itself over the past twenty-five years, touching a generation wondering if it had made good use of the possibilities of youth.

The Eighties was a golden age of such films, perhaps because the decade was the one in which an entire generation of middle-aged people sold out their values for junk bonds—after wasting an entire decade on disco. The boys of Stand by Me might have been the ones to march for Civil Rights and in protest against the Vietnam War… or so we would like to think. But as the film points out, such character-driven moments are rare, as are those people (or nations) who actually display ethical character. And the moral moment fades. Stand by Me is a film in mourning for a great loss. Which brings me to what was perhaps the most moving film about lost youth, Splendor in the Grass.

What though the radiance which was once so bright
Be now for ever taken from my sight,
Though nothing can bring back the hour
Of splendour in the grass, of glory in the flower;
We will grieve not, rather find
Strength in what remains behind;
In the primal sympathy
Which having been must ever be;
In the soothing thoughts that spring
Out of human suffering;
In the faith that looks through death,
In years that bring the philosophic mind.

(Ode: Intimations of Immortality from Recollections of Early Childhood, by William Wordsworth, 1807)

In the future—soon to be available for your grandchildren—there will be no classrooms. The era of the Little Red Schoolhouse will be over. As we watch the Budget Masters of the Educational Universe scramble for funds, we see them raise tuitions and cut back on enrollment—a truly antediluvian solution, for the flood has already occurred. This Flood, our Deluge, is called the “Depression,” characterized by lack of jobs, lack of homes, and subsequent lack of taxes to support public education at the college level. This regressive action of raising tuition and lowering enrollment, seen in California and other states, can only be a stopgap measure. In the future, what the state will cut back and eliminate is the real prize: not the students, but the expensive luxury of having a faculty, fully laden with bennies—health and retirement and a bad attitude. Do the math: faculty costs money, students bring in money. If this were your budget, which item would you eliminate? An expense or a source of income? Strangely, the state has eliminated both the expense and the income and the students are being shortchanged.

Why are students, who really need to get out into the work force, being forced to compete for classes? Why are students asked to wait five or six years to graduate? College classes are being cut, and inquiring minds want to know why. Because the faculty, even the part-time teachers and graduate students, are expensive, it is a simple short-term solution to eliminate people. It is not the classes that the university system in California is cutting; it is the faculty who are being eliminated, and the effect of slashing the faculty is a reduction in the number of classes. Although the goal was to save money, the result is a self-fulfilling prophecy: cut the faculty, cut the classes, cut the students, cut the income. Impossible as it seems, budget cuts can also result in a cut in income.

Is there a solution to this impossible problem the state has created for itself? But wait: is what we see as a problem, cutting classes, really a solution in disguise? Is the state putting a plan in action with the long-range goal of getting rid of faculty on a permanent basis? In the September 5th issue of The New York Times Book Review, Christopher Shea wrote about “The End of Tenure.” Shea discussed two recent books, Higher Education: How Colleges are Wasting Our Money and Failing Our Kids—and What We can do about It by Andrew Hacker and Claudia C. Dreifus and Crisis on Campus: A Bold Plan for Reforming Our Colleges and Universities by Mark C. Taylor, whose writings on this subject I have been following. Both authors question the efficacy of tenured professors and what Taylor calls the “education bubble.” The fact is that university education is morally unsustainable. It is simply immoral to ask either the students or the faculty to support or to countenance a system of tenure that privileges the few at the expense of the many. It is untenable to put forward an ideal of education open to all, on one hand, while sustaining within the same system a hierarchical pyramid of exploitation of junior teachers. Building a structure upon the disrespect of the “haves,” the tenured members of the faculty, for the have-nots, the “part-timers,” has unfortunate ethical consequences, for as Shea remarked, “The labor system…is clearly unjust.”

But it is unlikely that the state of California cares whether or not young talent and new ideas are being crushed beneath the chariot wheels of the privileged faculty who, after years of expensive research paid for by taxpayers, will produce a book read by a dozen people (what Taylor calls “overspecialized research”). The state is interested in eliminating an expensive luxury, and that would be faculty, however privileged or exploited. So here is another question: How can it make any financial sense for every community college in the state of California to teach (re-teach) the same course in different classrooms at different times throughout the state? Why should every California (University or State) campus offer the same requirements in endless multiples, semester after semester, year after year? The visual result of such needless repetition is not unlike a mise-en-abyme, looking into an endless corridor of repetition and duplication of nearly identical courses. In what universe does it make monetary sense to duplicate efforts on the part of many, many faculty members, to duplicate many, many classrooms, to build many, many physical plants, called campuses, to feed, house, shelter and support thousands of people for two, four, even ten years, counting graduate schools, when all these students could be taught in the realm beyond the campus—cyberspace? Why contribute to pollution by building expensive physical plants for classrooms, which must be heated and cooled? Why encourage students to drive to school, clogging freeways, expelling pollution? In the past the answer would have been that students have to come to school, emphasis on “come” as in “get to” as in “arrive” as in “be there.” But no more. Technology has changed the tradition of “going away to college.”

The shift is already underway. For at least a decade distance learning has been offered as an alternative or a substitute for on-campus learning. Indeed, some professional schools are already all e-based. Of course, these for-profit colleges have, in the eyes of academic snobs, given distance learning a bad name. Tenured faculty in the University of California are solidly against the notion of teaching online, but for all the wrong reasons. It is true that e-education is the solution to the expense of huge campuses, the bloated salaries of faculty and administration, the exploitation of “lesser” teaching staff, and the damage to the environment caused by commuting. It is also true that, whether they like it or not, the State will bring about learning via computers, slowly but surely. The recent cutbacks in faculty, like the last round of cutbacks in the 1990s, will be permanent. With little fuss, more and more classes will be put online. It is well known that many professors who desperately need the work have been developing online classes that then become the property of the institution that hired them. The professors get a (very) small salary for their services and the colleges get the money for as long as the class runs. It should give the elite teachers some pause to realize that the classes of the future are being written by those they consider too “inferior” for tenure.

The objections of established faculty to distance learning are well taken, but for all the wrong reasons. In an effort to reproduce the classroom and the teacher virtually, the set-up for online classes is still that of the Little Red Schoolhouse, complete with the student, the textbook, and the teacher, lacking only the Little Red Apple. Distance is the only difference. The software for course management has tended to replicate the ideal or traditional classroom experience, valuing “class discussions” and “student participation,” recreating a “group of learners” who must make an online appearance at a stipulated time. The demand for student “presence” is intended to make sure that the students are actually “attending” the class. Ironically, there is no way of knowing if the “actual” student who is enrolled is “present,” or if a paid substitute is “taking” the entire class for a fee while the “real” student is out having fun. The “teacher” is present, making scheduled appearances, guiding and leading and teaching unseen students, but as those of us who have taught these courses know, the time and effort expended by the virtual teacher explodes exponentially, to the point that a cost-benefit analysis reveals that the cost in the teacher’s time greatly exceeds any monetary benefit to the instructor.

In the past, such an investment in teaching by the beginning educator would pay off in a full-time job. But these jobs are in the process of being eliminated in favor of asking part-time people to put in many more hours than they would in a “real” classroom. The result is that many veteran teachers simply opt out of these rudimentary and sentimental cyber Little Red Schoolhouse classrooms, leaving the field to those willing or inexperienced enough to be unable to say “no.” Because these cyber classrooms and course management systems are modeled on a web replication of a real classroom experience, their scope is deliberately limited to only what the individual teacher can handle. In other words, the hours put in by the teacher are expanded, but not his or her pay, not the number of students, and not the amount of money coming in to the school.

Already the teachers working online are lowering the amount of education the students get for the sake of their own survival. “Lectures,” for example, mandatory in a “real” classroom, have been eliminated online. It is impossible to replicate the sheer amount of information given in a classroom lecture in an online situation. In the virtual world, the students need only read the text and answer questions, engage in virtual discussions, and take tests based upon the book’s content. Even without attempting to provide lectures as posts for the students to read as supplements or explanations of the textbook, the burden of caring for individual students, instead of presiding over a group, is overwhelming to the teacher. Little is gained by the student, by the teacher, or by the school through continuing these old-fashioned methods in a format that is antithetical to the Little Red Schoolhouse. One of the great virtues of distance education could be the sheer lack of the classroom. In cyberspace, the student can progress at his or her own speed and finish a course in, say, a week and be done with it. And indeed, this is exactly how the for-profit colleges allow students to work. But in the traditional colleges, the experience is drawn out over a semester because of sentimentality and nostalgia. The professor who used to be able to leave the classroom and leave the students behind is now “on call,” like a country doctor, to the students, at all times of the day and every day.

The current course management practices in distance learning insist upon in-class or on-campus methods of teaching that prevent a serious examination of the possibilities of cyber learning. The only element provided by distance learning is distance, an alternative to attending a class on campus. But Blackboard, Moodle, Elluminate—all of these course management systems, no matter how nostalgically they are constructed—have the technological seeds for expansion in scope. Indeed, the only factor holding back the capacity of the virtual classroom and its student enrollment is the lack of faculty willing, qualified, and trained for distance learning. The only way to increase the number of students in a virtual classroom is to have the class taught by a team of collaborating teachers, a rather clumsy solution. At this point, we are stuck with two problems: the limitations of the teachers and the limitations of the time, a semester or a quarter, in which the courses are taught. How are these problems to be solved?

Let’s start with scuttling the old model of the Little Red Schoolhouse. We shall see that when the limitations of the Little Red Schoolhouse are eliminated, then all of its traditional elements will be wiped away—except for the one determining reason for the Schoolhouse: the students. If you eliminate the limitations of the virtual “classroom,” you can have unlimited students. Once you expand the scope of learning, not teaching, then the Little Red Schoolhouse dissolves. Since “teaching” such a course, to thousands and thousands of students, will be impossible for any one human being, the professor will also have to be dispensed with. The result is the replacement of the faculty with course management systems and of the campus with cyberspace: a good financial trade-off for the state, a vast increase in the number of students served, and a consequent flood of pure profit revenue. We are imagining Life After Faculty.

What would education look like? Let us begin by eliminating “education.” After all, we just eliminated the faculty. We must rename “education.” How would such a transformation work? If this theoretically unlimited classroom is where distance learning is headed, then Step One will be the development of canned courses. That is, the many duplicated courses in, say, Survey of Western Art I throughout the state will morph into THE COURSE, THE STANDARD COURSE for a particular subject. The personalized course, “brought to life” by an inspirational teacher who sparks the dullest pupil’s brain, will vanish. Traditional courses taught by individual teachers, each in his or her own way and with his or her individual expertise, will be replaced by THE COURSE, developed by a team of educators and experts. Because the aging tenured faculty will not want to be left out of the inevitable process, the “educators” will be the soon-to-be-retired specialists in a field, such as art history, who will, as a committee, write the course content, assignments, requirements and tests. The “experts” will be technical advisors, who will set up the course materials, making them suitable for computer learning.

Cyber learning will necessarily be different and will take into account—unlike vestigial courses offered in today’s vestigial classrooms—the fact that the students are NOT in a classroom, are NOT learning through listening or through teacher demonstration or, in the case of art history, pointing at the object. The students will NOT be limited in time by a traditional semester or quarter system, which will also be eliminated. The students will NOT be in contact with a teacher or with one another. They will not be “on campus” at all. The Second Step will obviously be the complete elimination of the faculty. Thousands of individuals in all academic fields will be, as the British say, “redundant.” No longer necessary. It is possible that the elimination will take place through attrition: the Old Ones will be the educators on THE COURSE committee and the Young Ones will simply fall out of graduate school, as young seedlings fall on barren ground. The Young Ones will have neither a course nor a campus to sink their roots into. More on the fate of the Young Ones later. As the Old Ones are retired, willingly or unwillingly, all of their particular courses will be replaced by STANDARD CANNED COURSES, virtually provided, without individual teachers guiding and directing discussions and learning. Gone will be “real” classrooms with their uncomfortable desks, their chalkboards and whiteboards, their PowerPoint presentations, their Blue Books, their hierarchies of the Smart and the Dumb, and the presence of the all-knowing authority figure.

If on-campus classrooms are eliminated and the students stay home, the impact upon the university and college campuses will be enormous. Campuses will shrink to labs and administrative buildings, and even these buildings will be few in number, serviced by a small parking garage and perhaps a nice cafeteria. The rest of the campus might become a verdant park, including playing fields for college sports. The Administration of Higher Learning, now mainly computerized registration, will become increasingly centralized, and Deans and Chairs and Provosts and the like will become unnecessary relics. Along with the faculty, they will be discarded. Administration will consist mostly of financial officers and tech personnel, for “staff” will shrink in numbers, although unlike the professors, the staff, like Cher, will not disappear.

Let us return to the impact of course management systems and the disappearance of the teacher upon education. Step Three will be the re-definition of “education.” Many sentimental and nostalgic people have already recoiled from this picture of the future in instinctive horror, picturing the end of college campuses with their academic groves, the swath of green, the quad crisscrossed by connective paths, the brick buildings, ivy climbing up the elderly walls, the book-laden students walking in clusters, scurrying to class, talking to friends, making connections, mating for life, with autumn leaves drifting down in anticipation of the first football weekend, leading to a solemn graduation ceremony, a rite of passage, a ceremony that requires medieval robes, complete with cowls and mortar boards, perched jauntily upon heads old and young… How could we let all this tradition go?

The answer is: very easily and very quickly, in the face of a faster, cheaper, and more efficient alternative. In the case of the automobile it was the people who made the choice: we gave away our horse, we turned our barn into a garage, the blacksmith became the auto mechanic, and we all learned how to drive motorized vehicles. In the case of what we sentimentally call “education,” we will have little choice; we will not make the decisions. The Budget Masters can and will make the decisions for us. Finances and demographics will dictate the future. Once software that is suitable for mass education without teachers is developed, there will be no turning back. Why maintain thousands of teachers and thousands of classrooms when all of these expensive physical entities can be eliminated? Why maintain the verdant campuses and ivy-covered halls if no one is at home? Campuses with students will no longer make financial sense—in a very few years.

So what will cyber-education look like? Without discussing the fate of college football and other sports, education will become mass dissemination of units of linked information. So “education” will be replaced by “dissemination,” and “knowledge” will become “information.” Thinking—one of the traditional academic goals and by-products of education—will become a SKILL SET to be learned or, shall we say, consumed and applied. Courses traditionally have combined content and critical thinking and developmental and evaluative practices of reason. Cyber courses will be split between the disseminating of information, which must be mastered, and the instruction of analytical skills, which must be learned. Students will not be encouraged to critique, say, the economic system, but will learn of a variety of economic systems throughout time and will receive training in critical evaluation in an unconnected course. The student may or may not apply any of the analytical or critical skill sets to any of the information gained. What use the student makes of the courses taken is up to the student and his or her needs and inclinations.

Let us imagine the California college experience of the future. The Community College system, now a centralized entity, will provide basic foundational two-year classes. The California State University system, similarly constructed, will provide the third and fourth year required courses. The University of California system will provide specialized high-level courses for the various majors. In fact, over time, these levels could simply become all one University System, eliminating the now unnecessary separations. To the extent that separate campuses retain their names or still exist, these greatly reduced local sites will be used solely for the majors that need lab work—such as the arts, music and dance, and the sciences and sports. Because most sciences can be done on line, we can envision campus life consisting of two dominant groups, the artists and the jocks. Everyone else will stay home.

Where will “home” be? Anywhere and everywhere. Anyone can take these courses from any location. All you have to do is pay. Gone is the admissions process, except for the jocks, who need to try out and be selected for aptitude and athletic talent. Admissions to a specific university traditionally have served two purposes: one is frankly elitist and the other is practical. Elitist hierarchies have been created: certain University of California campuses are considered “better” than others because the students are “better” because their entry grades are higher. The University of California campuses are, in turn, valued over their Cinderella sisters, the California State system, for the same reasons. The Community Colleges are used by all and scorned by everyone. Practical limitations of campus space have resulted in limitations on enrollment, leading to selective admissions of more or less qualified students: the “best” go to Cal Berkeley and the “worst” go to a community college. The professors are paid accordingly, rewarded accordingly, and worked accordingly.

A professor at a UC school will teach three or four classes a year and will be paid three times more—at least—than his or her Cal State counterpart, who must teach eight classes a year. A community college teacher will also have four classes a semester, but unlike his or her higher-up counterpart, s/he will have no graduate assistant to help with research or grading. As one goes down the hierarchy, the workload and the inequality increase and the salary decreases, based upon the assumption that some professors are “better” than others and must therefore be treated in a more privileged manner and that some students are “worse” than others and deserve a supposedly lower quality education. All of us who have been through the UC system, taking the occasional Community College class, know that one can have an amazing teacher there in the “lower depths” and a simply terrible teacher at the University.

But because the vestiges of that unjust hierarchy will undoubtedly remain, it will be the university professors who will probably survive, as the “educators,” inventing THE COURSE, putting the other teachers out of a job. That said, the students would benefit the most from the elimination of this ancient architecture of privilege. With admissions based upon campus space no longer necessary, the game will change. The goal is no longer to pass on privilege from family to family, from social class to social class. The idea is now to educate the whole population. Everyone starts at the same level and everyone finishes at the same level. Excellence is now based solely upon how well one does in the courses. There will be no hierarchies among campuses; there will be only one degree from one university. Everyone else simply buys a course—pays for the information—on the open market. The trick is that the purchaser “owns” the course only when the course is completed. Initially what you pay for is the right to “inhabit” the class. Think of buying a home: you provide a down payment, but you do not “own” the house you inhabit until you complete your obligations, that is, make your payments in full, paying off your mortgage.

Students will be allowed to “inhabit” a course for a limited period of time, say two years. If the requirements are not completed in two years, then the class is “foreclosed” and the student needs to repurchase or move on to a more suitable course. The student may get out of the course at any time, but no money will be refunded after a certain length of time. Because students must pay monthly “rent,” the course will cost more for those who take longer to finish. Think in terms of insurance payments on your home or fees on your condominium. The course can be as cheap or as expensive as the student allows or can manage. Certainly the smarter and more prepared students will finish faster and cheaper than those who have less aptitude or time, but the former group has always been advantaged over the latter group. Some buyers may never put together a degree; some may purchase particular courses for specific purposes; others will obtain a university degree. The revenue stream coming to the State will be large—because the student pool has enormously increased—and will be continuous—because humans, by their very nature, will procrastinate on their courses and pay “rent” for months or even years.

Without any admission requirements, the student body, with the aid of translator widgets, will be international. So what are the students buying? The students are buying, not education, but access to information. Unlike “education,” now a quaint practice in quotation marks, information will not come from textbooks written by authority figures and will not be personal, ideological, or value-based. Information will be disseminated at low literary levels—almost like bullet points. But the lines of basic facts will be laced with links to documents of all kinds, from primary sources to commentary, all available and ever hyper-expanding for the students to peruse. One of the arguments, made for decades, as to why women and people of color are excluded from courses such as history and literature is that a full and complete portrayal of the role of African Americans in the history of the United States would take up too much time in the traditional three-hour class in a traditional semester.

There are only so many classroom hours available, and the need to teach the accomplishments, however dubious, of the white male must take precedence. A single semester or a single year is insufficient to include Virginia Woolf or Georgia O’Keeffe in a course in literature or art. Although the demand for the inclusion of women and people of color has resulted in the insertion of tokens here and there, American education has been traditionally Eurocentric, white and male. One of the problems, a very real one, is the training of individual teachers who are forced to (over)specialize. A professor of English literature will have concentrated on Chaucer, will be required by his or her university to publish or perish in a specific and narrow area of his or her field, and will be discouraged from developing other fields of concentration, such as contemporary Anglo-Indian authors. In cyberspace there are no such limitations—not the teacher’s time, not the teacher’s knowledge. In the cyber world there are only links that propel the student into the endless pleasures of hyperspace.

Students will be required to learn the history of the United States as the histories—plural—of the genders and ethnicities of America. The result will be “histories” written by experts found through links to articles or books: no one teacher is expected to attempt to cover all the materials. The students will be given the benefit of many scholars in the field. Information will be theoretically limitless. There will be no professor in the classroom explaining why Langston Hughes cannot be taught because Ted Hughes is more important than Sylvia Plath and so on. Authority is gone, guidance is extinct, mentors are absent, and so are the idiosyncratic and unqualified and abusive professors who try to impose their wills upon helpless students. The student is the “activated learning agent” who browses and chooses what lines of information to follow, evaluate and develop. Assignments and tests provide the only direction.

Somewhere in the cyber background are computerized evaluations of the students’ homework materials or perhaps vestiges of professors, now nameless survivors of the college and university system, who are given the tasks of writing assignments and tests and making sure the computer programs take note of the correct “key words.” Of course, one can do an assignment over and over until the desired grade is obtained, and, ideally, one can learn through re-doing. Some few of these students will be attracted to the possibility of endless learning and limitless information gathering. Those will be the future scholars who may actually come into personal contact with others of their kind in a specialized area of the University system called “graduate school,” but there is no need to enter a campus. Graduate school can be as “virtual” as undergraduate “education.” For those who remember the tyrannical and politicized and competitive atmosphere of graduate school, the simple pleasure of pursuing a train of thought in solitary splendor in cyber archives will be quite sufficient. Indeed graduate school will shrink back to its original dimensions: a place for professionals, such as lawyers, doctors, and architects, and a place for those with the income and leisure time to concentrate on a field of study for a decade or more.

Once there are no jobs at universities at the end of graduate school, students will move on to other professions, leaving behind only the dedicated learners, the ones who truly care about what we used to call “knowledge.” They, the solitary, the few, will be entrusted with the task of creating new information by synthesizing a vast array of floating data and documentation. But their task will be fundamentally different from that of current scholars, who maintain the pretense of “originality.” These postmodern writers will be the true bricoleurs, or should we say, bricoleurs who will admit to the fact: they do not create; they assemble units of usable information for the students to consume and assimilate. Instead of being limited to “publication” in “peer reviewed” professional “journals” or university-supported “presses,” the new cyber scholar simply posts his or her writing on his or her website. Interested readers can find this work and can make contact with the writer to ask for additional information, to exchange thoughts and resources and so on.

People who wish to be this new kind of scholar can develop their own specializations within scholarly territories that are now “unguarded” and open because “gatekeepers” can no longer function. Information is now everywhere, free for the taking. True, we all have memories of that special professor who mentored and encouraged us, but such mentors will continue to exist in cyberspace. In cyberspace, no closed-minded professor can tell the cowed graduate student what s/he should or should not read, what s/he should or should not believe. Authority has almost no meaning online. The information “market” determines what it needs and takes it. Just as “education” has been redefined and the professor eliminated, the “student body” also becomes a sentimental artifact of the past.

This elimination of one of the major means of socialization of young people (and old people) will probably be only an extension of what will be happening in the workplace, with more and more people working from home. The trade-off is losing an incompetent or tyrannical professor or boss and gaining autonomy and independence and success based upon merit rather than favoritism or looks or privilege. The losses must be considered and constitute a real problem: money is saved, revenues are increased, the population is more efficiently informed and trained, but human contact is drastically altered. Perhaps the germ of human socialization in the future is already upon us: sites like Match.com provide hook-ups for dating, and there is no reason why there could not be similar sites for college students who will meet online and create social groups. Although Facebook was set up so that linked students could study for an art history exam, this social network is not specifically directed towards students. Perhaps one can envision, as a positive possibility, the expansion of one’s circle of acquaintances to include people anywhere in the world. People are resourceful in their desire to be together. They will find a way to create new kinds of communities. We can call this phenomenon “global info-cation.”

When on that day in 1988 he did not wake up, Jean-Michel Basquiat joined the pantheon of artists who died young and thus passed into legend. Although there is a fairly definitive biography by Phoebe Hoban, the beautiful young man that was Basquiat remains a mythic figure and the image of a doomed artist looms over the truth of his life. Despite its title, this film, based on a short interview Tamra Davis did with Basquiat, does nothing to shed light on the artist. Instead, twenty years later, it continues the hagiography and the whitewashing of a black artist in this tale told mainly by white people. For the cynics, Basquiat was a “head on a pole,” a warning sign to any minority brash enough to attempt to breach the all-white walls of the art world. This film does nothing to discredit the notion that Basquiat was an exception to the all-white rule in New York’s art institutions.

The problems with this documentary begin with the title itself. “The Radiant Child” comes from a 1981 article written for Artforum by Rene Ricard (who does not appear in the film), and the title referred, not to Basquiat, but to a famous drawing by the late street artist Keith Haring. Basquiat was a peripheral player in the article, which focused on Haring; and yet the signifier has somehow floated over to fix itself on Basquiat. To attach the term “child” to this artist is racist on the face of it and reflects the white art world’s attitude towards the young black man. The chic SoHo crowd thought of Basquiat as a “wild” “primitive,” not because he was a street artist, but because he was a young black man in dreadlocks. To label him as a “child,” radiant or not, is close to calling him “boy.” During his entire career, Basquiat was raced and marked.

The implication of this linguistic marginalization is that Basquiat was an innocent child adrift in a smart and savvy art world, which ate him up and consumed his art for fun and profit. But the record indicates that Basquiat was a shrewd observer of the SoHo scene, in which he had been operating for years as the street poet “Samo.” Like Keith Haring and Kenny Scharf (both white men), Basquiat positioned himself and his work in the midst of a burgeoning East Village and SoHo scene that combined art and music. It was the late seventies and early eighties and a new generation had drifted into town. Cindy Sherman and Pat Benatar and Ross Bleckner and Robert Mapplethorpe and Madonna all became part of the new vibe. Beyond the exchange of ideas and innovation that could take place among the artists in such an atmosphere, there were the up-and-coming galleries and the gradual gentrification of the loft district. New artists, new galleries, new ideas and new opportunities—Basquiat was smart enough to know where to place himself. This artist was never an outsider artist; he was always an insider artist.

There is a naïveté in this movie’s approach to a very sophisticated and complex subject. Jean-Michel Basquiat: The Radiant Child assumes an outmoded myth of the artist as genius who is “discovered” and finds sudden fame. This myth, which destroyed the artist, badly needs to be deconstructed. Basquiat was smart and ambitious and focused on making a name and a career for himself. Built around a short interview with Basquiat done by the filmmaker, this documentary vividly portrays another asset that the artist was well aware he possessed. Besides being smart and well educated, Basquiat, who was from a middle class family, was a beautiful young man. “Radiantly” good looking, he attracted the admiration and support of both men and women. If this film has any merits, it is that it captures the physical appeal that allowed the artist to stand out among the contenders in the art world. The loving, lingering close-ups of the obviously smitten cameraperson show the artist in the last moments before he began to deteriorate from drug use. The same qualities that made people want to take care of him were the precise qualities that made people want to use him. His career became a runaway train, fueled by the desire to make as much money as possible before the ride was over. But there is nothing unique about Basquiat’s story. We here in Los Angeles see it played out again and again—sudden fame, sudden flame.

The tragic arc of the career of Jean-Michel Basquiat parallels that of Vincent van Gogh or of Jackson Pollock, other doomed and damned artists. But unlike van Gogh or Pollock, Basquiat became wealthy in an unprecedented art market, and, after his death, his estate passed to his father, who also profited from owning art that was now iconic. Although Andy Warhol developed the notion of the artist-as-celebrity, Warhol was an older and more seasoned player whose detached cynicism protected him from the consequences of fame. Basquiat was arguably one of the first Hollywood-like figures in the New York art world. If the artist lacked the armor, the art world lacked the resources to deal with his sudden ascendancy. A review of his career reveals that no one was responsible. No one was in charge. The sudden and massive influx of money and the overwhelming demands on Basquiat’s time and talents would have destroyed any young artist.

Although Basquiat has been dead for over twenty years, there is no context or perspective from Davis. The art world he served was a hypermarket of frenzied buying and spending, one of the epicenters of greed during the Eighties, one of the last unregulated laissez-faire capital markets in the world. But this art world crashed like an over-priced stock, shattering forever the fiction that art and money were separate enterprises. Careers rose and fell and there were numerous casualties, from David Salle to Robert Longo. No art historians or art critics appear in Jean-Michel Basquiat: The Radiant Child to discuss what was a very distorted era in the art world that made and destroyed many of its inhabitants. Some artists survived and went on to distinguished careers: Cindy Sherman became a major figure in postmodern art and Julian Schnabel became a well-regarded filmmaker. Although there is some discussion of the content of Basquiat’s art, the sheer novelty of a black male artist painting black history, with black male protagonists, for white patrons remains ripe for a more considered discourse.

Basquiat’s paintings were confrontational, political, and critical of the white world, which used and discriminated against black men (and black women). The film mentions that Basquiat identified with Charlie Parker, a black entertainer for an audience of whites, suggesting that he was well aware of what was being done to him. The writing on the canvas walls of Basquiat’s large-scale paintings reinforces his understanding of his role as artistic entertainer to privileged white audiences, much like the boxer Sugar Ray Robinson. The distinguishing feature of Basquiat’s work is that he was essentially a writer who put his literature onto canvas using a brush instead of a spray can or pen. As a writer, as a street artist, Basquiat was much closer to Jenny Holzer and Barbara Kruger. Like these feminist artists, he was a critic of the dominant society. And like these installation artists, he was able to maintain the balance between criticizing the very people who supported his art and selling his art to the establishment. There is an unstated but present subtext that Basquiat was somehow a “natural” painter, who never had to attend art school. This “noble savage” narrative uses racism to elide just how brilliantly and fluently Basquiat took up the paintbrush and how completely he made the transition from spray painting walls to painting canvases. Although it is easy to look only at his expressionistic style, the obvious attraction for his patrons, his activist messages come across loud and clear for all who, in this conservative era, would listen. At a time when affirmative action was being halted, Basquiat was one of the most powerful and outspoken black voices in America, judging white people on their actions towards people of color.

Unfortunately, the film does not inform the viewer as to just how tragic his young life was. One deplores the lost opportunities and the missed chances over the years to help Basquiat, and it continues to be astounding that no one intervened with his addiction until it was too late. His drug problems, which were long standing, are treated as a late development. His numerous love affairs are omitted, and only two of his former lovers appear on camera. His dealers, with the exceptions of Larry Gagosian and Mary Boone, did come on camera, but they gloss over the fact that Basquiat was a very high-maintenance artist and never mention the vast sums of money he made for them. Some artists speak of him on camera; Julian Schnabel, who made a very fine movie about the artist, Basquiat, provides commentary. It is worth noting that Schnabel’s film is more hard-hitting and realistic than Jean-Michel Basquiat: The Radiant Child. Basquiat’s fellow street artists, Fab 5 Freddy and Kenny Scharf, and some of his earlier associates, such as Al Diaz, become voices which merely reiterate the legend and reinforce the myth.

But the film ends by telling us little we do not already know, and the Hoban biography remains the best account of a life fully but badly lived. Basquiat may have been “a head on a pole,” but he paved the way for other artists of color, from Glenn Ligon to Kara Walker. Watching the film and seeing the recovered interview with Basquiat, I was reminded of last year’s exhibition at Occidental College of photographs taken of one of their more famous students, Barack Obama. Like this documentary film, the photographs were taken by a young woman; and like Davis’s film, the photographs were put away and forgotten. There is a similar youth and promise shining out of the photographs of Obama, but there is one thing missing—the vulnerability of Basquiat. Obama came across as cool and self-confident and totally in command of his own life. So what happened to Jean-Michel Basquiat? Simply put, once he entered into the precincts of the art world, he lost control of his life. Hopefully, his art works can be removed from the dramatic story line of the doomed genius artist and will receive a hardheaded assessment as the social critique they were. This film is nostalgic and elegiac but completely without insight. Too bad.

If he were alive today, Jean-Michel Basquiat would be a year older than President Barack Obama.

A question often asked is how did the nation that produced Goethe, Brahms and Rilke also produce Goering, Himmler and Hitler? Another question one could ask is what is the connection between literary genius and the fascist infatuation with what they thought of as “objectivity”? Nazi “objectivity” was the act of appropriating reality and reconfiguring it into propaganda. Coincidentally, when I went to see A Film Unfinished (2010), I was reading the new book, The German Genius, just out this year by Peter Watson. Watson took on the task of explaining how, in just one hundred fifty years, German culture jumped from Immanuel Kant to Adolf Hitler, and I took a break from his chapters on the Nazis to go to this remarkable film within a film. The “film” referred to in the title is an unfinished fragment about the Warsaw Ghetto, edited but never completed. Nazi camera crews entered the ghetto in May of 1942, mere months before the inhabitants were transported to Treblinka. For reasons that will be forever unknown, the distorted record of the last days of the lives of the Jews was never completed, leaving us with a mystery.

Made in Israel, directed and written by Yael Hersonski, this film is an attempt to explain the Nazi film of the Ghetto. We are not alone in our viewing of the unfinished footage; we are joined by three Holocaust survivors, called “witnesses,” who lived in the Ghetto when they were children. The German counterpoint to America’s much lauded Greatest Generation would be the most Guilty Generation, the perpetrators of nameless and unthinkable crimes. Many books, including that of Watson, have recounted the post-war wall of silence, the communal refusal to discuss life under Hitler. I have even seen documentaries in which the aging criminals, ordinary Germans who still feel no guilt, remain defensive of their actions. But as Sigmund Freud pointed out a century ago, guilt may be deferred and repressed, but it exists and is acted out. In 1975, Alexander and Margarete Mitscherlich applied Freud’s thesis to the Germans after the revelations of the Holocaust. Their “inability to mourn” resulted in a generation of deeply melancholic people. Germany cannot mourn for its sins and its citizens are condemned to be trapped in melancholia until the people come to terms with their crimes. But most of these perpetrators are dead by now and the remainder are fast dying out. We must assume they have told us as much as they were willing. And yet, after thousands of books, war crimes trials, Survivor testimony, and filmed accounts, we are no closer to answering the question: how could the human soul be so dark?

The narrator of A Film Unfinished begins by commenting upon the Nazi obsession with the visual, with recording their history, including their most heinous crimes against humanity. The film, titled “Das Ghetto,” was rediscovered in a concrete vault in the mountains in East German territory, and later extra color footage and outtakes were also located. The identities of all but one of the filmmakers are lost and the participants are probably dead by now. The structure of the film demonstrates a subtext of guilt projected onto the victims, who have no choice but to be receiving screens for racist hatred. In a Ghetto of starving people, lying down and dying on the sidewalks, the Nazis managed to round up the few Jews who could afford to eat, who still had some meat on their bones, and forced them to be “actors” in the “objective” and completely fictitious account of Jewish life in the Ghetto. Here, in Warsaw, the film insists, Jews are living in luxury, enjoying their elegant spacious new homes. In reality a half million people are crowded into a few acres, separated from the rest of the city by a wall. Beyond this wall, in Aryan Warsaw, life goes on as usual, while on the other side of the structure, hell exists on earth.

According to the witnesses, the intention of the Nazis seemed to have been to picture the Jews as being divided between the uncaring rich and the suffering poor and/or as aliens who indulged in strange folk practices. Well-fed Jews are compared to starving Jews; well-dressed, “indifferent” Jews are forced, as the editing process shows, in take after take, to walk past corpses. Rabbis are forced to “demonstrate” a circumcision on a tiny newborn, and leading members of the community are gathered together in an elegant funeral procession. Nude men and women are forced to participate in ritual immersions. A woman is asked to pull aside a quilt to disclose a young girl in a bed, lying still in starvation, waiting to die. Internal contradictions to the Nazi argument are freely filmed. A group of young children caught smuggling food into a ghetto supposedly full of food are forced to shake vegetables out of their clothes. Countering the elegant funeral hearse is a scene where Jewish workers come to collect a pair of corpses left out on the street by their families. The wagon full of corpses is then followed to a “shed” filled with many bodies, which are then hauled off and dumped into a mass grave. Apparently the Nazis never considered the possibility that the intended audience might have taken into account the fact that the Jews were incarcerated in a Ghetto on their own government’s orders. Only Hitler could condemn the Jews to death, but the film blames the Jews for their own slow deaths.

The film was never finished and one wonders why it was even made. Himmler, the master propagandist, knew full well that the German public preferred wartime escapism. Earlier attempts at graphic or crude attacks on Jews were not successful, and this film of the Ghetto would have been extremely offensive to any audience. Another reason for putting the film away, aside from its horrifying content, was that the entire Ghetto would have been wiped out by the time of its release. This was a Ghetto that did not give up its condemned easily: an uprising broke out in April and May of 1943, and the Jewish resistance turned the Ghetto into a war zone. It is quite possible that the authorities decided that the less said about the Warsaw Ghetto the better. The most curious aspect of this film is the strange combination of visuality—how they filmed everything, apparently without flinching—and blindness—how the filmmakers utterly failed to see what they, the Nazis, had done to the Jews, even as they were filming the results of their own actions and the horror.

Today such a film seems to be nothing short of madness, but its attempts to reflect the guilt back on the Jews, as deserving of their own fate, only mirror the sentiments of many Germans for decades after the war. Watson discussed the impact of the evolutionary theories of Charles Darwin upon Germany. The idea of the “survival of the fittest” was horribly twisted against the Jews: they died because they deserved to die. Perhaps the most difficult sight one must endure in this film is that of children who have somehow lost their parents. They seem to be under six, but it is hard to tell the age of a child starving alone in the streets. Sometimes the children group together—perhaps they are siblings—and lean against a building, waiting for their tiny lives to end. The witnesses admit to a communal indifference to the suffering of others outside their family circle. But it is clear that indifference is part of survival and that loss of humanity is part of staying alive.

Peter Watson’s book, The German Genius, discussed one of the key elements of German philosophy, the introduction of the concept of critical analysis. In her review of this film in The New York Times, Jeannette Catsoulis remarked that the director “embarks on a critical analysis of Das Ghetto,” and one cannot but be struck by the irony of an Israeli using the methods of German philosophers against the very Nazis who appropriated the great names of their tradition and sullied them by association. A classical philosophical critique is the close reading of a document, which reveals its structure. Classic deconstruction of a document is a close reading which extends critique to seek out the inconsistencies that disrupt the intention of the author. Rarely has a piece of writing (if we can dignify the Nazi film by putting it into the category of “literature”) so turned against itself. The graphic images of the condemned, the dying and the dead cancel out any possible propaganda effects. A Film Unfinished is not for children, and young people should be cautioned, but this film is a rarity: a preserved, German-made record of the perpetrators’ own loss of humanity. The film, unfinished or no, is a document of guilt; it is an admission by the perpetrators of their unimaginable crimes.

AMC is giving HBO and Showtime a run for their money with Rubicon, the most recent in a series of truly remarkable shows. The network started out modestly enough with a British show, Hustle, followed by the original dramas Mad Men and Breaking Bad. Rubicon is just as gloomy and dystopic, just as cynical and hopeless as its predecessors. In other words, Rubicon is in keeping with the downbeat end-times we live in. That said, Rubicon is different from other cable shows. It lacks the brightly colored spectacle of Sixties fashion and décor and sexism and racism and homophobia and anti-Semitism that makes Mad Men so horrifyingly compelling. It also lacks the crazed frenetic energy that animates the ill-fated partners in crime in Breaking Bad. The first three series are all about rule breakers: Hustle and Breaking Bad are about criminals we learn to like, and Mad Men is about the last days of social and sexual immorality without consequences. Rubicon is all about keeping the peace by waging war, a job done by a strange team of idealists at the bottom and cynics at the top.

Rubicon is slow and majestic in its careful pace, mimicking the underwater occupation of the hero, “Will Travers,” a low-key, depressed intelligence operative working modestly on the down-low. Played by James Badge Dale, who had a good role in The Pacific, an otherwise boring series, “Will” works for one of those black-box agencies that operate off the books and fight to be free of congressional scrutiny. When his father-in-law and boss, “David Hadas” (Peter Gerety), is murdered, “Will” sets off on a quest to solve the mystery of his death. Scattered along the way are clues scrawled in crosswords, hidden in motorcycle seats, and left behind with a four-leaf clover, and “Will” must follow and decrypt these enticing suggestions for many weeks to come.

On one hand, this is a classic conspiracy story. Yes, there is a vast right-wing conspiracy out there: possibly rich white men who are the unseen but felt power behind the empty throne of the American government. No other group is so rich or so powerful—certainly not people of color and certainly not women—and no other group has such vested interests to protect. In the case of the minor players, these men are called lobbyists or financiers or corporate CEOs, and they are the ones who run the country according to their own interests. We all know that. Rubicon suggests that there is yet another layer of secret power and manipulation of world affairs by a subterranean group we only dimly sense. But on the other hand, the series is about what the British called The Great Game, the cat-and-mouse contest called spying.

This is a terrain left over from a 1960s black and white Cold War thriller, like The Spy Who Came In from the Cold (1965). The agency in question is API, which claims to have access to all intelligence and can, therefore, find the truth, or something like it. The agency occupies an unmarked building on a nondescript street in an unidentifiable part of New York City, where the sun never shines and it is always night. There is no James Bond, there is no Q, there is no M. There is no flash and dash at this agency where "intelligence" is at least something the agents attempt to demonstrate.

As another reviewer put it, the whole show has the old-fashioned look of Three Days of the Condor from the 1970s. There is a retrograde and nostalgic atmosphere to the sets and the actors. The agents are intellectual and tortured nerds who are paid to talk and think. It is shocking when some kind of technology, like a computer or a television set, is revealed. In this universe, people use actual reports on actual paper, stuffed in actual paper folders. Despite the fact that one of the current "enemies" is the entire Middle East, everyone seems to be white and European. Perhaps there is an Africa group with real African experts, but we haven't seen those people yet. The ones in power at API are overwhelmingly old and white and male. The few women are wives, secretaries, and newcomers to API. Since the Fifties, time has barely moved. Men rule and America is still fencing with its enemies, all of whom are playing The Great Game. Except for the terrorists.

On a recent episode, technology—albeit unseen—entered the story. The team of analysts, Tanya (Lauren Hodges), who has a pill and alcohol problem, Grant (Christopher Evan Welch), who is sulking because he is not the team leader, and Miles (Dallas Roberts), who has lost his wife and children to divorce, must decide whether or not to recommend a drone strike. These otherwise ordinary individuals, people just like us, must decide whether or not to take out their terrorist target, a dangerous man, we are told. The target is yet another leader of some terrorist group, hiding among women and children in a civilian zone. Terrorists have a long history of forcing the Americans to kill many helpless and vulnerable people in order to kill a few "evildoers." The somewhat distracted team is working blind, with little intelligence, and is leading the blind: a pilot who controls the drone from thousands of miles away from the kill zone. In the end, the team comes to terms with its scruples and recommends that the terrorist leader be disposed of.

Although using drones is a less expensive way to fight the so-called War on Terror, such strikes cause collateral damage and innocent victims die. The terrorists have created a trap for Americans, who are forced to struggle between national principles and what they think is wartime necessity. The "Rubicon" that separated military and civilian targets was crossed during the Spanish Civil War when the Luftwaffe bombed Guernica in 1937, or in World War II when the Germans bombed London in 1940. After that, it was an eye for an eye, ending with Hiroshima five years later. Still, the deaths of innocent victims of the War on Terror are, ironically, few enough for us to focus on, and photographs of the deaths stir the consciences of Americans. What the intelligence officers at API are doing is fighting a war. The mere analysis of data and the resulting recommendations have life and death consequences. Although fought at a clinical distance, this new kind of war has oddly personal blowback.

The War on Terror is not really a “war” and should not have been named as such. Terror does not respond well to military solutions. Terrorists are not enemy combatants but shadows who dart in and out of hiding, melting into the general population. “Terror” by its very definition depends upon unexpected and unpreventable attacks on innocent and random civilians. And yet, we delude ourselves that we can prevent “terror.” The teams at API are on the front lines of an undefined battlefield where the best targets are the leaders and instigators, single individuals. Traditional battles are a waste of time, money and lives and cannot touch the triggers of terrorism itself. Under these conditions, targeted assassinations, debated in Rubicon, surely make sense and it is certainly cheaper to employ someone like the assassin in The Day of the Jackal (1973) to “take out” certain individuals. Terrorism, by definition, does not lend itself well to invasion; it yields much better to infiltration by a network of spies. But we cannot infiltrate the ranks of the terrorists. As one character remarked, “Our intelligence is lousy.” The question is why?

The American government surely has access to a large and assimilated population of Muslim Americans. Here in Orange County, there are many Arabs. I just spent fifteen minutes with a lovely Muslim woman at my local bank, who helped me open a new CD. It is unclear to me to what extent the government has cultivated these citizens to fight terrorism. But one of the answers to the question of why our intelligence on the ground is so bad could be our unremitting hostility to the Arab world, even to Muslim Americans who have been loyal citizens. The latest cable news firestorm or fake debate is swirling around the misnamed "Ground Zero Mosque." The false controversy is but one of many aggressive stances towards Arab American citizens who have suddenly become the "outsider." Ignorant and credulous people are whipped into an ill-advised frenzy to appease whatever anxieties the general public has about the "Other."

To attack our homegrown Muslims—even verbally—seems counter-intuitive. If I were in charge, and I am not, I would be in every mosque and community center that serves Islam and I would be recruiting Arab intelligence officers. During the height of the Second World War, military recruiters went into the Japanese internment camps and signed up the best and the bravest soldiers ever, the fabled 442nd Infantry Regiment. There was no one more loyal to their country, America, than these Japanese-American soldiers. These Japanese men were eager to prove their loyalty and they suffered far greater consequences due to their ethnicity than have Arab Americans—loss of property, businesses, possessions, and years in camps.

True, Muslim Americans have endured "only" verbal slurs, but why would any self-respecting Arab want to work for a nation that routinely vilifies them and their religion? Why would any Muslim in the Middle East have any confidence in American motives? Tragically, Americans are fighting and dying to give the people of Islam a better life, while at home, Americans are denying them the First Amendment. If I were ethnically Arab, I would be tempted to keep a low profile in such a poisonous atmosphere. So America fights in the dark, without intelligence, without our most valuable natural assets, many of whom we have alienated. As I write, the last American troops are leaving Iraq. Soon most of the American boots on the ground will be gone from Afghanistan, if only because we have run out of money. We will revert to the kind of war suggested by Vice-President Biden, a war of strategic strikes via drones and fought by small groups of commandos. Although we have thousands of Arab Americans who would be proud to help stamp out terrorist groups, they must be discouraged by the lack of support at home. One can only imagine what the troops in Afghanistan must think of how Americans are undermining their position in the war.

Rubicon, in Episode 4, was grounded in reality. "Will" and his boss go to Washington to protect their privileged agency, while "Will's" team tries to decide how many women and children must die to kill one man. And all with "lousy intelligence." One of the great things President George Bush did was to speak out against the persecution of American citizens who were Muslim. President Barack Obama has likewise taken a principled stand on behalf of religious freedom. Rubicon seems intent on following the obscure murder mystery tour, which will unravel another vast conspiracy. While we do love conspiracies, because they explain so much, Rubicon might do a better and more interesting job if it grounded itself in the real-world moral and ethical dilemmas of contemporary spy craft.

Even though I am a Jennifer Aniston fan, ordinarily, I would never go see this kind of film. But I got a free ticket and there I was, watching a chick flick. I have written elsewhere (Garb: A Fashion and Culture Reader) about how these films are socially regressive and fix women into their proper place: barefoot and pregnant and married. Which, in this film, happens in that order. The only missing plot device is the woman losing her job (What Women Want) or her business (You've Got Mail) before she gets her man. In The Switch, Jennifer Aniston is forty with no prospects of marriage and decides to have a baby. Her reasoning is that her biological clock is ticking, she doesn't need a husband, and she has a good job. OK. Her Best Friend, Jason Bateman, is an appealing eunuch and there is not even the remotest degree of chemistry between them. Ergo, "Kassie" decides to find a sperm donor, someone she has met and approves of. For some reason, "Roland," played by Patrick Wilson, complies with her request and "donates" the "ingredient"…and his wife lets him. Now that is a marriage in trouble. "The Switch" takes place when the aptly named "Wally" spills the seed donated by "Roland" down the bathroom sink and must replace it—switch it—with his own, gathered according to Diane Sawyer (don't ask).

The result is merriment, which proceeds to ensue. "Kassie," of course, gets pregnant with "Wally's" baby, but she thinks the child is from "Roland." She then moves to the Midwest, giving up her good job, because New York City is a bad place to raise a child. Time passes but without consequences. "Wally" works as a hedge fund consultant and he lives through the Wall Street Crash with his job intact. He seems to work for "Leonard," Jeff Goldblum, being his usual eccentric self. Why "Leonard" still has a job, much less a company, or why he has hired such an unlikely financial expert as "Wally" remains a mystery. "Leonard" also allows "Wally" unlimited access to him and is available for all kinds of sensitive guy talk. We need the new Gordon Gekko film, Wall Street: Money Never Sleeps, to bring the Real Men back into the gambling casinos, also known as hedge funds.

Time-lapse photography encodes the passage of the years as the odd couple lives apart but still not married. There is a nice blind date bit about how strange it is that “Wally” has not married yet, but, then, he is such a loser. “Kassie” returns to New York—new job offer—with the child of “Roland” (really “Wally”), the gloomy and neurotic “Sebastian,” beautifully portrayed by Thomas Robinson. Seven years have passed and the truth of who the real father is must be revealed, which it will, all in due course. But not until all obstacles have been removed from forming the family that was always meant to be.

The film is sweet and forgettable, salvaged only by the lovely parent-child relationships and the charming child. There is a much better movie hiding behind stock characters: the best girlfriend who must be ethnic, the worthy but boring boyfriend, and his too-handsome-to-be-real rival. Although the couple works on Wall Street and in the media business, financial meltdowns and the partisanship that is contaminating television leave the characters untouched. Although roomy apartments in New York City cost thousands a month, none of the characters have any money worries. Although taking care of a child is time consuming, both characters seem to be on call for little "Sebastian," and there is no nanny in sight. One can only wonder what the interjection of reality could have done for an otherwise anemic film. Glad this movie was free.

Having taken aboard a surplus of estrogen after Eat Pray Love (2010), I felt a desperate need for a dash of testosterone. I rushed out to get some Salt (2010) and feel much more balanced now. Thank you, Angelina Jolie. It is obvious that this film was originally intended for a male lead, Tom Cruise, and was only slightly altered for a woman. Jolie, a very tough lady, makes a very credible action star. The plot, the masochistic and persecuted wronged man theme, is one typically used for male protagonists, who, of course, are always vindicated in the end. This tried and true storyline is deeply psychological in that it satisfies the lingering male childhood traumas of being bullied in school. The adolescent male yearns to be the hero in his fantasies and the victim-vindication dream remains powerful, well into adulthood.

Changing the protagonist from a male to a female had interesting consequences. When a woman is being persecuted in female-based dramas, it is usually by someone in her close circle, such as her husband. She has to be isolated for the purposes of the plot, and she cannot have friends or family, or community support systems. This isolation is necessary, because she must have no place to turn but to the Next Man, her new love interest. Women's films are consumed with "creating the couple," to borrow the term from a book of the same name by film critic Virginia Wright Wexman. However, in this film, the husband of Evelyn Salt is kidnapped and killed, eliminating him from the couple equation. I have no information as to just how many changes were made in the script by Kurt Wimmer to accommodate a woman in the lead, but killing off the spouse fits into the narrative line for a male protagonist. A dead wife sets the male lead free and allows the men in the theater to enter into the fantasy of freedom and the women in the audience to desire the hero. The dead wife is also a frequent motive for the husband's revenge.

However, once the husband is dead, then the villain becomes obvious, because Liev Schreiber, the other big star, does not try to save Salt. From the start, the audience, familiar with the female plot line, realizes that he is the villain because he sides with the boys in the Agency. Evelyn Salt, Russian mole, was turned into a loyal American by love—now that is a female specific story line. Furthermore, the death of “Mike Krause” (August Diehl), her spider scientist husband, gives Salt the motivation for revenge against those who executed him before her eyes. Except for the slight chin tremble when her husband is shot, Salt is as stoic and as action oriented as any man. The bulk of the film is all run and gun, impossible leaps and falls, crazy car chases, paralytic spider venom, and a rudely interrupted funeral. For the most part, it is Jolie who keeps this familiar “Bourne” formula (except maybe for the spider) fresh and compelling. Like Jason Bourne, there is nothing she cannot do—she can run, climb walls, hop down an elevator shaft, shoot and kick and punch, and she’s nice to dogs.

What makes the formulaic film interesting is that, Bourne special effects aside, this movie is totally retrograde. If you shake and stir the film, add in the spider, suddenly the plot morphs into something rather like James Bond. The evil Russian villain escapes with a Rosa Klebb move from From Russia with Love (1963). Like Rosa, he has sharp knives that dart out of his shoe tips and he stabs his guards and flees. The longing for the Cold War is palpable in Salt. The Evil Empire can strike again, thanks to many, many mole children who have been planted, like little seedlings, here and there, in key positions in the American government, as in Gregory Peck's The Boys from Brazil (1978). Of course, the "boys from Brazil" were new little Nazis and there is another nod to Nazi films in the idea of faux American operatives. The 1965 film 36 Hours starred James Garner as an American soldier who had knowledge of the Normandy Invasion and was put in a fake American hospital to induce him to reveal the "secrets" to a (Nazi) psychiatrist. One could go on in this vein and find even more recycled plots but all the elements have one thing in common—there is a major identifiable enemy who is "like us."

By the end of Salt, nuclear war is imminent and the countdown is on. What a relief! Symmetrical warfare. Real weapons, no more stupid cowardly car bombs. We can nuke each other again! The Russian President was apparently assassinated by Salt and the Russians are angry. The American President (now white and tall and young—very Sixties) does not hesitate to launch a nuclear strike and the countdown begins. The film combines the current Republican talking point about "Terror Babies," who will infiltrate the country and blow us up in some unknown future, with the joys of having an enemy that stays in one place (Russia), doesn't move to Yemen, and knows how to play the game, fair and square. All of this Cold War nostalgia is too good to waste. We know there are more Salt movies to come because "Evelyn" could easily be declared innocent once the President wakes up and explains that it was the other bad guy who tried to kill him. But remember, it took Jason Bourne three films to sort out his problems.

Salt will return. The last scenes show that Salt, who has been arrested for many crimes—all done for her country, America—has convinced Agent Peabody (the indispensable Chiwetel Ejiofor) that she is the Good Girl (that didn't translate well from Good Guy, but that is our language—gender challenged). She warns him that there are many more Russian moles out there, "Far more than you and I can take care of," she says. Peabody allows her to escape. Salt jumps out of a helicopter and swims, a long, long way, to shore and freedom. Is there anything this girl cannot do? Hooray! The Russian Bear is out of hibernation. The Vindication Fantasy is complete and now we need to get to the Revenge Fantasy. Sequel, anyone?

What can one say about Eat Pray Love? In my experience eating makes you fat and love makes you crazy. As for praying, well, if I were the Julia Roberts character in this film, the writer Elizabeth Gilbert, and if I prayed to My God, it would go something like this:

And God would say thusly, "You narcissistic, self-absorbed, self-indulgent, privileged New Yorker. You want to be fulfilled?" He would ask, sarcastically, "Why don't you take a job teaching inner city children for less than $40,000 a year and get over yourself?"

That’s what my God would say.

But what did the God of Julia Roberts say?

She said, "You poor dear. What can I do for you? Don't worry about being a single woman and a writer who, post divorce, has a precarious financial position in society, you have my permission to discard your cute but flakey husband and your cute but feckless boyfriend like the used Kleenexes they are, and go off to Italy, spend your money, and stuff yourself with food for a few months, and then you can move on to India, where, I promise you, you will not have to look at even one poor person, and you can receive karmic wisdom from this self-indulgent, old guy who is intent on forgiving himself for being an alcoholic, and then, you can go on to Bali—again no poor people—and you get to meet Javier Bardem and fall in love and have great sex and live happily ever after."

OMG. I have been praying to the Wrong God. Don't tell me. There is a Relationship God? Instead of being a schoolteacher, I could have had a Javier Bardem?

Want a Relationship? There’s a God for that. Who knew?

Waitress? Excuse me. See that Julia Roberts woman over there? I’ll have what she’s having.

For those Americans who can't read (subtitles), to the horror of those of us who admire good filmmaking, Hollywood has decided to remake the Swedish Millennium Series (a trilogy, so far) by the late Stieg Larsson. Hollywood has a terrible track record for seizing upon perfectly good "foreign" films and ruining them. I am sure that someone can tell me which, if any, European or Asian films Hollywood has improved, but nothing comes to mind. The Departed came close to Infernal Affairs but was not as good. A recent case in point is the pointless remake of the wonderful French farce, The Dinner Game, just released as the terrible Dinner for Schmucks. The Girl With the Dragon Tattoo, released in Europe a year ago, is a violent, brutal, and uncompromising film, reflecting the ugliness of the human soul. By the time I saw the Swedish film, Hollywood had already decided to hijack this difficult and convoluted family drama. I walked out of the theater knowing that Hollywood would be unable to resist prettying up a film that was often hard to watch.

The day has come and casting has begun and with it, the prettification. The lead male, a crusading journalist, "Mikael Blomkvist," was played by Michael Nyqvist, an ordinary sort of man. As an actor, Nyqvist is pleasant looking, dumpy with a pot belly. His character, "Blomkvist," is an activist but not an action hero. So who would be selected to improve this character? Well, James Bond, of course. Handsome, blue-eyed, blond-haired Daniel Craig, who has one of the best bodies in the land of movies, an action hero with chiseled abs, is replacing a pleasant-looking but plump actor who has no concept of grooming his ample chest hair. Nyqvist looks like the character he plays. Compared to the female lead, the male character is relatively passive: he is a reactor, and Nyqvist, who comes across as an intellectual, pales in comparison to the angry intensity of Noomi Rapace's "Lisbeth Salander."

So far the part of “Lisbeth” has yet to be cast. One can only hope that Hollywood is a bit daunted at the idea of finding an actor as perfect as Noomi Rapace. Rapace, a lovely woman in real life, completely inhabited the character, a tiny and tough little woman, hard as nails, aggressive and quick moving. Her look goes beyond Goth in that she is not stylized, just tough looking and unfeminine. And the actor has natural boobs. Searching for a young actress with natural boobs in Hollywood will take years and finding one will be nearly impossible. Perhaps Hollywood will go to New York, to the theater, to find an unknown without implants. It is hard to think of which of the crop of young Hollywood starlets would even begin to be right for the part. Maybe the search will extend to the British Isles, where they found Daniel Craig, because nothing says “Swedish” like a British accent.

Larsson was frankly writing "pulp fiction," a potboiler that would earn money, which it has. Unfortunately he died before he could enjoy any of his earnings and his family and his mistress are fighting over who should get what. Originally titled "Men Who Hate Women" ("Män som hatar kvinnor"), the series lives up to its name. "Lisbeth" is everything that arouses fear and dread in the male. She cares not about being attractive to male eyes, she is sexually ambidextrous but cares more for women than men, she is neither feminine nor masculine but more neuter. She does not need men and is completely independent and totally alienated from humanity. She is a computer whiz, a skill usually reserved for the boys. Indeed she is the one who comes to the rescue of "Blomkvist," using her tech savvy to help him prove his case against a powerful man. Although it is possible for a Hollywood actress to de-prettify (remember Charlize Theron?), the actor who takes on this role needs to be very good indeed. The strength of "Lisbeth" is formidable and needs to vibrate off the screen. Craig is an excellent actor who can possibly temper his good looks, but will his role be rewritten to make him a stronger character? Will his part be expanded so that the female character will be subordinated? The woman who takes the role of "Lisbeth" will have to be prepared to dominate James Bond. And Craig will have to be generous enough to let her.

Girl with the Dragon Tattoo is violent towards women in a fashion that one rarely sees in an American film. To be fair to American entertainment, the theme of serial killers doing horrible things to women is a constant, from films to television programs. There is an outlandish quality to the elaborate murders that shields the viewer of American television. But we expect Europeans to be less preoccupied with slaughtering their womenfolk. Larsson, supposedly, was attempting to show that Sweden was no paradise, as we think it is. But that justification for violence against women is hard to believe. We don't think Sweden is a paradise because of gender equality; we think Sweden is a paradise because they have health care. Because Larsson was a crusader (like his hero) against right-wing extremists and neo-Nazis, he lived with the hourly threat of violence. The series is a manifestation of the violence that hovered over the author's real life. Hollywood prefers stylized violence but the Swedes have strong stomachs, apparently. There are scenes in this film that are difficult to sit through, especially if one is a woman, for the violent hatred of women is demonstrated unsparingly and goes on at length, in excruciating detail.

It is hard to imagine how Hollywood will deal with the rape scene and the consequences of the rape. The film, as it is now, skated past the extreme edges of the "R" rating, probably because it was not intended for a general audience. What makes the violence so affecting and disturbing is that it is not stylized, Hollywood style, but is naturalized in all its unnaturalness. As objectionable as the violence against women is, the hatred of key male characters towards "Lisbeth" is a necessary device which propels the story across four books. Larsson wrote like Charles Dickens, sprawling across the novels and spinning a tortured, convoluted story, a true Freudian "family romance." I went to see The Girl Who Played with Fire with a colleague who remarked that the series reminded him of Raymond Chandler's dark noir novels of corrupt and criminal Los Angeles families.

My colleague's point is a very perceptive one. In the second film, connections and coincidences become annoying, as the arch villain is revealed to be "Lisbeth's" father. What had been a serious investigation into political corruption devolved into a particularly grotesque soap opera. But the family connections are also pure Raymond Chandler, who always found the heart of darkness in the home. When the second film ended, "Lisbeth" and her father are both in the hospital and we are left with a cliffhanger. The third film, The Girl Who Kicked the Hornet's Nest, will arrive in America this fall. As for the fourth book, the mistress of Larsson is holding it hostage, negotiating her way through her own family drama with her lover's children. By the time we get through the third film, we may not care about a fourth. The second film was not as strong as the first, which could, and should, stand on its own. Meanwhile, we await the casting of "Lisbeth Salander," Hollywood style. Be afraid, be very afraid.

Is the Internet changing our brains? We know what our brains look like on drugs—but do we know what our brains look like on the web? Don Tapscott, one of the experts in the realm of Internet communication, says that our minds have been improved by unlikely mechanisms, such as video games and the much-scorned Wikipedia. Even though it is hard to imagine World of Warcraft as the implementer of intellectual prowess and the facilitator of social skills, today's children and teenagers, the sons and daughters of Dungeons and Dragons players, are smarter than their parents. For some educators, the news that their students have sharper, better developed minds than they do will come as a bit of a surprise. However Tapscott insists,

…what we are seeing is the first case of a generation that is growing up with brains that are wired differently from those of the previous generation. Evidence is mounting that Net Geners process information and behave differently because they have indeed developed brains that are functionally different from those of their parents. They're quicker, for example, to process fast-moving images…

What does it all mean? What are the implications for the future? Tapscott's book is an informative and insightful journey into the way the twenty-somethings—the Net Generation—think. Despite the scientific data that suggest that the brain of a person who has been web-trained his or her entire life is different from that of the book generation, the main thesis of Tapscott is not so much brain change but power change. He posits the Net Gen as the "Lap Generation," the first generation to lap or pass their parents by possessing authority their elders do not understand: how to use electronic technology. The result of the younger generation's apparent natural mastery of all things tech, Tapscott thinks, is the end of hierarchies and the abolition of a centralized authority. The author focuses on four areas: family, education, business, and politics. All of these entities are being faced with the Lap Generation and their egalitarian mindsets.

Family

The youth of today are better informed, more adept at technology, and savvier with the ways and means of the Twenty-first century than the adults who are still in charge of education, businesses, and governments. What Tapscott's book points to is a huge generation gap, a chasm as wide as the famous "generation gap" of Margaret Mead. For the Baby Boomers, their parents' pre-war knowledge and experiences were irrelevant and useless, making what the author refers to as the authoritarian family structure of the era extremely frustrating for the Boomers. The fathers, who acted like CEOs, as Tapscott calls them, pontificated, but they had little of use to share and were unwilling to learn from their children. After years of having to endure lectures on topics that were alien to teenagers in the Sixties, the Boomers escaped the home front, never to return to the clutches of authority.

In contrast, today's parents, who are the Boomers grown up, are more open to listening and to allowing their children to show them how to log onto the Internet. The relationship between parent and child is more open and more nurturing. Parents and children are close, so close that an entirely new kind of parent has emerged, "the Helicopter parent." As an educator, I am familiar with that kind of ever-hovering parent but did not know that these same parents will continue to hover. They will accompany their children on job interviews after college, and will even confront the boss if their child is not well treated. How are the parents so well informed about the office politics of their child's workplace? The Lap Generation, the "boomerang" generation, making a strictly economic decision, likes to live at home. There are no hierarchies, only equality, in this new family.

After reading Tapscott's observation about the new family, it occurred to me that this new arrangement bodes well for the distant future when the Boomer parents are elderly. For the first time in generations, it may be possible that the children will care for the parents. The Boomers ran away from home and abandoned their parents. Many Boomers today are facing the conundrum of what to do about an elderly parent or two. It is not uncommon for the Boomers' elderly parents to be abandoned—again—in a facility where they will live out the last of their golden years, unvisited, and will die, unmourned. But the Boomers who have been respectful and kind to their children should expect better care from their children. What else could this new kind of anti-authoritarian family offer to the future?

Education

"Educators should take note. The current model of pedagogy is teacher focused, one-way, one size fits all. It isolates the student in the learning process…. (Net Geners) will respond to the new model of education that is beginning to surface—student-focused and multiway, which is customized and collaborative…" says the author.

Tapscott states that the Net Gen carries with it two sets of expectations when these students enter schools and colleges. First, they are shaped by their experience with the Internet, which demands that they interact with technology, search for content, and socialize with their peers, long distance. Second, they expect to shape and participate in their own education. Rather than passively accepting intoned truths delivered from behind the lectern on high, this generation wants to participate and collaborate in what they expect to be a joint enterprise. The author characterized current education as being a one-way model, that is, one person talks and another listens. It occurs to me that, in fact, the educational system reflects the technology. The Gutenberg technology, based upon the printing press, is a one-way form of communication. The author writes and the reader reads. The radio repeated this form of speaking and listening that reflected the print technology. Then television came along and replicated the Gutenberg method once again. Education is based upon the premise that an educated person, i.e., the teacher, is also a reader who has read and who is, therefore, qualified to redeliver the written messages in an oral form, again repeating the model of one-way communication.

Following my line of thinking, the real challenge to today's educational model is the Internet, which is a two-way mode of communication. In contrast to the traditional Sermon on the Mount, the Web is participatory, non-authoritarian communication, a call and response format that is ignored and discredited by the authorities until they feel threatened by the sound of Other voices. The call and response nature of the Internet—this new technology—means that education must become more participatory for the Net Gen students. Tapscott writes that the Net Gen students expect interactive teaching and learning. If they cannot actively collaborate, they will tune out and get bored with traditional methods of lecturing. Although Tapscott does not get into the weeds of pedagogy, I suspect that, contrary to their current teachers, this is a generation that would accept and welcome distance learning. Today's students are used to learning from the computer, an instrument that many of today's educators view with suspicion. On one hand, the computer is a convenient tool; on the other hand, it challenges the authority of the teacher who wants to be the sole source of knowledge.

Tapscott describes the elders of the Net Gen, the Gen Xers, as being "aggressive communicators who are extremely media centered." But unlike Gen X, the Net Gen grew up using the "programmable web." "And every time you use it, you change it." The author continues later, "On the Net, the children have had to search for, rather than simply look at, information. This forces them to develop thinking and investigative skills—they must become critics. Which Web sites are good?" Tapscott rightly calls the model of education we currently use—teacher lecturing and student listening—Industrial, but I think he may be off by a few centuries. The model is more that of a pre-Gutenberg culture, before the printing press made it possible for people to read what they wanted. I would agree with Jeffrey Bannister, quoted in Tapscott's book, who uses the term "pre-Gutenberg."

We’ve got a bunch of professors reading from handwritten notes, writing on blackboards and the students are writing down what they say. This is a pre-Gutenberg model.

I might point out in passing, to Bannister, that in attempting to accommodate multiple learners, it is considered good practice to write on the board for the students who learn by reading, not hearing. Indeed, Tapscott also states that,

Students are individuals who have individual ways of learning and absorbing information. Some are visual learners; others learn by listening. Still others learn by physically manipulating something.

As early as 1967, Marshall McLuhan, also quoted by Tapscott, said,

Today's child is bewildered when he enters the nineteenth-century environment that still characterizes the educational establishment, where information is scarce but ordered and structured by fragmented, classified patterns, subjects, and schedules.

The New Learning must be customized for each student's needs. Tapscott also quotes Howard Gardner, who called today's educational model mass production, a reflection of the industrial economy, which created assembly lines and Taylorism that forced human beings to work in tandem with machines. According to Gardner, school is also mass production. "You teach the same thing to students in the same way and assess them all in the same way," he says. True, but this is how No Child Left Behind teaches, as it must, for the standardized test. Even the best secondary schools teach toward the entrance exams so that the students can get the highest scores, not necessarily the best critical thinking skills. The test becomes the teacher. How are the Net Geners going to respond to a mechanism as crude and arbitrary as an SAT test? Note that these standardized tests do not take into account the way that the test-takers, the Net Gen, actually think. Change takes place at a glacial pace, especially when the entire educational system rests on a foundation of magical thinking: if the speaker says it, it is so. Education equals authority—unquestioned authority. How did this strange combination of information without questions come about? And how did such a procedure become labeled as "education"?

When Gutenberg invented the printing press, the Church was against this new instrument, because the sacred words, once intoned only from the pulpit and delivered by the voice of authority, would be distributed to the great unwashed. The Church feared, rightly, that the power of the printed word and of reading would allow the people to challenge the priesthood. The authority of the Church was unquestioned and was based upon a far older form of disseminating information, an oral culture of storytelling. A culture of storytelling is a logocentric culture, backed by the presence of the speaker who is the source of the story, information, and the truth. God spoke to Noah, to the Prophets, etc., and the word of God was transcribed. It was the task of religion to tell those congregated the words of the Lord. The Church inherited a largely illiterate society—even kings and queens often could neither read nor write—that had to be preached to. Through years of standing for six to eight hours in cathedrals, hearing mysterious Latin, listening to sermons, and "reading" the sculptural programs and the frescoes, the uneducated people under the care of the clergy were socially conditioned to listen to one voice (God's) and one source of authority (the Church). The Protestant movement was proof that once the common person could read the words of the Bible, those people would take unto themselves the power to interpret God himself.

There are historically close ties between the Church and the University. The earliest universities, such as the Sorbonne and Oxford, were affiliated with religion and, with the clergy the only educated group, the priests became the first faculties. The traces of this history are clearly visible any graduation day with the procession of professors marching down the center aisle of the school auditorium, like the clergy filing down the nave, in full "regalia," wearing the long black robes, very monk-like. Further traces of the Church lie in the very practice of lecturing: the teacher stands at the head of the class and speaks alone. The students speak only to ask questions and are expected to subside into obedient silence. Just as the priests re-spoke the Word of God, academics re-speak the words of their precursors. The very form of academic and scholarly phraseology mimes the sacred scriptures. "As —- tells us," "As —- famously said," and so on. Logos being handed down from authority figure to authority figure. Academics depend upon the logocentric tradition and upon the mystical belief that the speaker is backed by the fullness of authority. It is as if Moses descended from the mountain, bearing tablets written in stone—not to be altered—after communing with the Almighty.

The assumption of a plenitude of knowledge, like that of the completeness of presence, is a false one but authority must be protected at all costs. Another prevailing characteristic of education, inherited from the Church, is, paradoxically, secrecy. Knowledge is guarded by the initiated, those who are learned in the ways of scholarship; knowledge is not to be given out freely, especially insider secrets. Like the Greek temples where only the priests were allowed inside the inner sanctum, only those inside the circle of the select are allowed to “speak” or be “present,” that is to publish, that is to “re-speak” the already spoken. The Internet has changed all that. The Net Geners are not readers, they are not listeners; they are iconographers. As Tapscott notes,

Net Geners who have grown up digital have learned how to read images…. they may be more visual than their parents are…. (They) tend to ignore lengthy instructions for their homework assignments…

Tapscott points out that students of today learn better through images. Indeed, this generation has invented a series of new hieroglyphs that function as signs such as happy= (: and sad= ):

Today's students, Tapscott points out, will want to customize their education. He mentions that "tinkering" has made a comeback. Indeed it has. The time of the mash-up has come. In higher intellectual circles, we call the mash-up, or sampling, bricolage, that is, taking the existing culture and making something else with it. This is postmodern thinking: reclaim, reuse, remake, recycle. The very same teachers who teach postmodern theories are those who insist upon "original" work from students who are what I call the Mash-Up Generation. The professors who eagerly and enthusiastically teach Postmodernism, or the questioning of the "metanarrative" of Modernism, will reject cutting and pasting and demand that the student cite "sources," or the validating voices of authority. The same professors find it hard to accept that a student has ideas of his or her own, attitudes that stem naturally from their own generation, for, although the Boomers may have resisted authority, they knew it existed.

If my generation got into trouble for questioning authority, this generation gets into trouble for leveling sources. Every voice, every bit of cultural material has equal value and can be freely borrowed and re-used. The Net Gen seeks convenience and speed over venerated voices, who are often unwilling to make themselves available on the web. Even more threatening to the traditional authority of educators is the declining value of scholarly knowledge, which is being bypassed and ignored by the mainstream undergraduate. Every teacher knows that students think that Google is a database. Students routinely ignore the expensive databases, paid for by student tuition, made available through library websites. Getting into the databases is a clumsy, cumbersome, and often unrewarding enterprise, because the technology of these databases is antediluvian. Naturally the student goes to Google's fast and functional search engine to find information. Like the Net Gener who gets a job and finds, to his horror, that the technology is twenty years behind the times, the student will not tolerate the ritual of multiple clicks and passwords and all the other paraphernalia that work to make knowledge inaccessible. Even when forced to read a credible source, the students, accustomed to the all-purpose Net-speak, rebel at the insider jargon, written by scholars writing to scholars.

Net Geners want to be informed, not talked at. They like to take materials they find helpful or interesting and remake them. As opposed to always referring back to the authorities, the Net Gen likes to write its own material and to create its own content. Tapscott indicates that the Web actually encourages creativity and productivity because the Web gives easy access to inventors. From their habits of playing video games or participating in the virtual reality of Second Life, the Net Geners learn how to play their own game. Speaking of video games, Tapscott says,

"This kind of play is deeply creative. It involves trial and error, learning by experiment, role playing, failure, and many other aspects of creative thinking."

None of this kind of creativity is allowed in education. Play is forbidden and failure is mocked. In contrast, the author discusses a thirteen-year-old writer who contributes stories to a website where they are read by thousands of readers. "Isn't that better than writing on paper and hoping that some day it might get published?" Tapscott asks. For today's teachers and professors, Web 2.0 is something Roland Barthes would have loved: this new Web is called the "read-write" web—we read it and we write it.

Although there are many teachers who are eager and willing to try more experimental, student-centered ways of making learning a collaborative enterprise between mentor and apprentice, they are constrained by a system that demands command and control. Distance Learning still attempts to replicate a now-obsolete classroom format, by demanding assignments at set due dates, by demanding chat room appearances at a set time, and so on. This is hardly learning the way the student needs it: customized, when the student can devote the time to it, at a pace that facilitates learning. Even distance learning classes end after a set number of weeks. Traditional classroom education is ruled by the physics of time and space: one teacher to a classroom, a certain number of students in a space, taught a common denominator course that must fit into a larger curriculum at a specific time. Student-centered education is evidenced by allowing students to speak more or to participate in class discussion. There is no time for the teacher to waste. S/he has a set amount of material that must be covered.

Students are increasingly unwilling to learn in the traditional manner, because they assume all knowledge is available on the Internet. Why learn math when one has a calculator? Why not teach how to use the calculator to find the answer? Why plow through many books when Wikipedia tells you anything you want to know and, even better, you too can write the content? Tapscott tells an amusing story about interviewing a young man named Joe O'Shea who stated that he never read a book—why should he? All the information he needed is on the Internet.

"I don't read books per se," he told the erudite and now somewhat stunned crowd. "I go to Google and I can absorb relevant information quickly. Some of this comes from books. But sitting down and going through a book from cover to cover doesn't make sense. It's not a good use of my time as I can get all of the information I need faster through the web. You need to know how to do it—to be a skilled hunter."

Before you educators out there jump to your feet to explain the difference between “information” and “knowledge,” know that the punch line was that the young man had just been awarded a Rhodes scholarship.

Business

Tapscott describes a new world in which the consumers remake the product, as they are remaking education. Education, he suggests, should think like a business and respond to the consumers, but Tapscott also points out that businesses that do not respond with agility to the demands of the Net Gen can get into trouble. The Net Gen, rightly, in my view, views businesses and corporations with suspicion. Tapscott points to the empowerment of the Net Geners who like to be "prosumers," that is, proactive consumers, who customize their products. Young people have been prosumers for generations, but no one had named their practices until recently. Little girls have always treated their Barbies to new hairdos and teenage boys have always modified their cars with after-market products and custom decoration. This desire to contribute to mass-produced and mass-marketed products has only recently been harnessed by companies such as Apple, where "there's an app for that."

The users of Apple have often been referred to as a "cult" because of their devotion to the product. The term "cult" is derogatory and comes from those who simply don't understand how the Net Gen thinks. Apple is thought of by the techies as an honorable company, one that strives to produce a product that is beautifully designed and user friendly. In addition, the company also works closely with its user base, from the Bleeding Edgers to the novice customer, asking the tech-savvy to participate in the improvement of the function and design of the product and watching for the difficulties of the blunderer so Apple can make functions more straightforward. The reason why the flap over the iPhone 4 and its broken antenna was so minor to Apple users is because those customers know that the company will fix and improve the problem with the next iteration of the phone. The Apple user is invariably an Early Adopter who expects such glitches and enjoys participating in the fix. This kind of audience participation is the Apple business model and it has won the company a devoted following.

But not all companies are so accommodating to the customer base. Witness the hostile relationship between music lovers and the music industry, publishers and those who write and read books, the car companies (Toyota) and those who drive. The new generation of consumers wants to customize their experience with the product, Tapscott declares, but the corporate mind thinks in terms of profit, not prosumer. To the Net Gen, music and art and literature and knowledge, like information, should belong to no one and everyone. Downloading "illegal" music is common practice, done without shame or remorse. How can anyone own music? Doesn't art belong to everyone? The Net Gen is forcing companies that want to survive to be transparent and participatory, Tapscott writes. Older corporations do not want to interact with their customers. Like the traditional media, the corporate mind insists upon one-way communication: top down. As Tapscott says,

…the industry has built a business model around suing its customers. And the industry that brought us the Beatles is now hated by its customers and is collapsing. Sadly, obsession with control, privacy, and proprietary standards on the part of large industry players has only served to further alienate and anger music listeners…

Tapscott states that the Net Gen prefers flexible hours and wants to "choose when and where they want to work." Not only that, these young people want their work to be "meaningful." "They're not loyal to an employer; they're loyal to their career path," he remarks. Imagine the surprise of business types when the Net Gen shows up to "work." The Net Gen wants to play. The Net Gen employee comes to a company for one reason—no, not a job—to learn. Once the Net Gen worker learns what s/he needs, s/he will move on to the next learning experience. It is pointless to expect the Net Gener to be "loyal" to the company. The concept of loyalty that his grandfather may have enjoyed was broken when companies began sending jobs overseas in the Seventies. Companies still expect the employee to commit to being a permanent fixture, while refusing to guarantee lifetime employment, much less health care. For the average corporation, human beings are a financial liability, but the Net Gener comes to play with the idea of contributing creatively.

Companies tend to create what Tapscott calls a "generational firewall," which separates the newbies from the oldtimers. This strange way of not utilizing recruited talent is not unfamiliar to me. I have often asked, why hire someone who is then suppressed and underutilized? Business runs on a hierarchical basis: those at the top give orders and the orders roll downhill, where the underlings carry out the dictates. The Net Gen employees, according to Tapscott, do not accept hierarchy and assume that they were hired for their talents. If they cannot and are not allowed to participate as equals, the most talented will simply move on. Their attitude, quite properly, is: if you won't listen to me, why should I stay? The Net Gen wants to contribute and needs to contribute to something meaningful. As the parents of the Net Geners changed the model of parenting, education needs to change its traditional assignments and business needs to change its traditional models. Show the Net Geners what's in it for them.

Politics

That same attitude—what's in it for me?—appears in politics. Today there are two common questions in popular culture: "What would Jesus do?" and "What's in it for me?" We assume that Jesus would not say, "What's in it for me?" We like to think he would say, "What can I do for you?" "What's in it for me?" is a business question and the answer has to be "profits." "Profits" is a business answer. So when a politician promises to run the government like a business, that implies that the government will not be in the service of the people but in the service of profit-making entities, like corporations. Imagine if government were run like a business, like, say, an oil company or a music company. Tapscott is convinced that the Net Geners have a better way. The Net Gen voter is an active participant who, unlike her grandparents, is a volunteer or a community activist, Tapscott says. Some of the Boomers joined the Peace Corps, some marched for Civil Rights and some protested against the Vietnam War. Others marched for women's rights and demanded gay rights. The Boomers' children are the Net Roots who became activated by the prospect of being allowed to participate in the election of Barack Obama.

Tapscott discusses the Internet-based campaign at length, and reading these passages, now that we are two years into the Obama administration, is enlightening. I think that much of what Tapscott writes is insightful and informative, and I learned a lot from reading his book; however, I do think he is too sunny, too hopeful, and too optimistic. Politics is a case in point, as the enthusiasm for Obama wanes quickly. The Net Gen expected results. When Obama promised "transparency," they thought that the President was thinking of the open, artless, and fearless sharing that takes place on Facebook. The web is totally open and uncontrolled as a source of energy and information. The web is a place where things happen. That is why so many people (like me) devote their time to contributing to it. But the Net Gen quickly learned its lesson. As Tapscott writes,

Most Net Geners believe that the mechanics of power and policy making are controlled by self-interested politicians and organized lobby groups…The Net Generation does not put much trust in politicians and political institutions—not because they are uninterested, but rather because political systems have failed to engage them in a manner that fits their digital and ethical upbringing.

The Net Gen experience as Internet users has taught them that if they coalesce around a cause they can make changes. The Net Gen volunteers for Obama were so excited because they were "natural" Democrats, that is, they shared a cultural attitude that the government should work for the people, and that they—the (young) people—could shape the outcome through their participation. According to Tapscott, the Net Geners are not conservative but more open to change and new ways of thinking than any other generation. But a Democratic victory did not bring the change they expected. And now the Net Gen has turned its back on the administration. Why? The problem is that the government is controlled by a group of middle-aged people who will not let go of power. Just look at Congress on C-Span. All old White Men. No one under forty. No poor people. Few People of Color. Some women here and there. No collaboration, no participation from half of the members of Congress, who appear to have abdicated their governing responsibility in the pursuit of political power. This strategy of not participating is not the Net Gen type of thinking.

Things only get worse when one turns on the news programs. The gap in age is shocking. Although there are some networks or news programs I do not watch, I do record at least four hours of news a day on TV (to which I listen while I am writing) and read three newspapers a day. There are no young faces, no young writers (and therefore no young readers), no young voices, no young way of thinking. Only the Hill reporter Luke Russert, the bright son of the late Tim Russert, stands out as someone under thirty. An entire generation is being left out of the conversation. The elders reflect back on their days with President Carter or President Clinton, prehistoric eras for the Net Gen, and discuss and debate raging political quarrels that are non-issues for the younger generation.

People—usually men—well beyond their childbearing years decide abortion policy. People—increasingly women as well—who are too old to fight send the young generation off to war for their own political ends or their lobbyists’ needs. People with lifetime jobs in Congress decide how much money the unemployed will or will not get. People with guaranteed government health care decide that others cannot have those same privileges and see no hypocrisy in their positions. Those who are heterosexual (they say) decide the personal lives of homosexuals. And so on.

Would results be different if the younger generation made itself heard? As Tapscott points out, this generation is far more tolerant than their parents or grandparents. It is their grandparents who are concerned about racial and gender equality, interracial marriage, “illegal” immigration, gay marriage, and other hot-button issues. For their grandparents, global warming is debatable; for this generation, raised on green values, a devastated planet is their inheritance. If you asked a Net Gener which problem worried him more, the budget deficit or global warming, he would say, “global warming.” Always the optimist, Tapscott writes,

I’m convinced we’re in the early days of something unprecedented. Young people, and with them the entire world, are beginning to collaborate—for the first time ever—around a single idea: changing the weather.

For the Net Gener, it is discouraging to see who is in power and to watch how they behave. Partisan bickering and political game-playing instead of collaboration, negation instead of affirmation, blocking change instead of accepting it: all of this is alien to the younger generation. Those in the government and those elected to office are one-way communicators, out of touch and out of date. They allow the public to “speak” every two years at the ballot box. And these are the people to whom the question of Net Neutrality will be turned over. The corporations want to segment the Internet so that they can maximize profits from what has been a free good, available to everyone. The question of whether or not the net will remain the great equalizer will probably be decided by the Supreme Court, presided over by a Chief Justice who does not understand e-mail.

Not a wonk, I am probably better informed than some people, and I value facts over ideology. So does the Net Gen. For us it is not Democrat or Republican, liberal or conservative; it is integrity, honor, and the desire to tell the truth. For Washington, D.C., it is sound bites and talking points. By selling the “War on Terror,” the “War for Weapons of Mass Destruction,” and the need to bail out the Big Banks to the credulous public, the government has created what a Bush appointee called a “post-truth” society. How true. For the Net Gen, truth matters. The trust of the public in its leaders has been shattered, leaving a vacuum for the bloggers and talkers to fill. Another authority has to be appointed and anointed. For the older generation, still willing to accept one-way communication, sound bites stand for wisdom, tweets become knowledge, and talking points are the truth. The Net Gen finds it astounding when politicians change their stories and refuse accountability, even when they are caught changing their positions or lying or fabricating stories. The Net Gen is used to trawling the Internet and finding the facts and cannot understand how their elders can lie, get caught, pay no consequences, lie again, and so on. No wonder they are disillusioned by politics.

The Future

Tapscott does not entirely ignore the real problems brought by the Internet revolution. He points to the gap between the have-nots of technology and those who are active users. His main examples are the poor of the third world, but there are other have-nots closer to home, such as the poor, the elderly, the closed-minded, and the technophobes, who are getting left further and further behind. Then there are the bad effects of the Web. One of the odd and underreported facts of technology is that the Bleeding Edge is usually made up of illegal or questionable practices that become outlets for pathologies, including online gaming, Wall Street derivatives, pornography, pedophilia, and online bullying. It is these Early Adopters who benefit the Web by using it and creating new pathways, meaning that all these nebulous people are always one step ahead of the forces of law and order. Parents protest perfectly legal video games, such as the horrible Grand Theft Auto (which has awesome artwork), but forget that they watch and enjoy violent adult films such as Pulp Fiction. That said, the dangers of the Internet are real, but, in the name of freedom, the Net Gen will defend the right of anyone and anything to prowl there. One can only hope that the same Supreme Court that granted freedom of speech to corporations will see fit to allow the Net to remain open to all comers.

Tapscott believes that “Net Geners are quick to recognize that the best way to achieve power and control is through people, not over people.” Good lesson. The Net Gen is intelligent enough to know that Obama cannot change Washington, D.C. There are too many entrenched interests. The question has become not “what can I do for you?” but “what’s in it for me?” All that hard work, all that dedication, all that Hope, and no payoff, no results. People go into politics to get things done, to make things happen, and when nothing changes, they turn away. It’s like your last job: you learned something new and then moved on. How sad. The problem for the Net Gen is that the fifty- and sixty-something generation of Baby Boomers has no intention of changing or of letting go of power. They are impervious to the Net Gen. “They,” the Big Banks and the Big Corporations like Big Oil, are so powerful and have such a stranglehold on America that “They” answer to no one. Big Business does not care about the Net Gen, either as employees or as consumers. By the time the Net Gen has its turn to come into power, its members too will be in their fifties, fully thirty years from now. The Baby Boomers joined the Tea Party in their maturity. What will the Net Gen do with their golden years? Tapscott concludes his book,

The big remaining question for older generations is whether that power will be shared with gratitude—or whether we will stall until a new generation grabs it from us. Will we have the wisdom and courage to accept them, their culture, and their media? Will we be effective in offering our experience to help them manage the dark side? Will we grant them the opportunity to fulfill their destiny? I think this will be a better world if we do.

Dr. Jeanne S. M. Willette

The Arts Blogger

Suggested readings from Don Tapscott’s Bibliography:

Beck, John C., and Mitchell Wade, Got Game: How the Gamer Generation is Changing the Workplace, 2004

Benkler, Yochai, The Wealth of Networks: How Social Production Transforms Markets and Freedom, 2006

The Kids Are All Right is about a marriage threatened by an affair and a family threatened by a stranger from the outside. The only novelty, which is hardly a novelty, is that the couple is lesbian. My question is, who is the audience for this film? I am sure that the people who think that being gay is a sin and a mental disease will boycott this film. I am sure that there are certain small towns that will refuse to show this film. The very audience that needs to be reached isn’t listening. Then there are the rest of us. The supposedly earthshaking revelation that gay people are just people makes the film feel dated, rather like looking at The Bill Cosby Show, which shocked, shocked the world in the Seventies with the news that black people were just people. For those of us in LA and in the OC, this movie is about people we know, our next-door neighbors, our colleagues, and our friends.

Indeed, it was the goal of writer and director Lisa Cholodenko to tell the story of her life. She and her partner were shopping for a sperm donor, and the process of selecting a candidate and, finally, of reproduction was on her mind. Teaming up with her literary partner, Stuart Blumberg, who, Cholodenko tells us, is straight, she wrote the story of her life, projected it into the future, and asked “what if” the sperm donor returned to the scene of the family. Much has been written about The Kids Are All Right. The usual critical theme has been that a family is a family and a marriage is a marriage, regardless of whether the couple is gay or straight. This is a nice film, well written and well acted by the excellent cast, amusing in parts, and totally good-natured. Totally mainstream, this is a movie you warm to, and you leave the theater feeling good.

That said, once you get beyond the surface, there are some serious points to make here. Probably unintentionally, this film repeats the prejudices of mainstream straight society. Straight society, for centuries, has forced gay people to accept the straitjacket of straight life. Many gays and lesbians have written eloquently of the extent to which straight society “sells” the heterosexual couple as if it were the only proper way for people to love each other. Agreed. The social assumption that the straight couple is “normal” is a kind of cultural tyranny. But this film replicates that same tactic and reveals another prejudice. Marriage is celebrated as “normal” and is sold as the only proper way for adults to live. Single people are cast as second-class citizens, maladjusted and irresponsible.

In point of fact, there are more single people today than married people. Counting the divorced, the widowed, the celibate, the never-married, and the too-young-to-marry, single people outnumber couples. More and more people choose never to marry or, if they were once married, decide not to marry again (especially women). And yet the whole economic basis of society and the tax system is geared to encourage and reward marriage and procreation. Popular culture and films routinely portray single people as loose cannons on the decks of the Good Ship Marriage, and The Kids Are All Right is no exception.

The couple in question, long married, is “Jules” (Julianne Moore) and “Nic” (Annette Bening), who have two lovely children, “Joanie” (Mia Wasikowska) and “Laser” (Josh Hutcherson), courtesy of their sperm donor (Mark Ruffalo). This normal family of kind and conscientious parents and smart children with good sense is rocked when the “kids” invite “Paul,” the donor, into their family circle. Mayhem ensues. Here I thought that women who were lesbians did not like to have sex with men, but apparently I was wrong, because “Jules” has an affair with “Paul.” “Nic” finds out, the kids find out, and the marriage quakes and shakes, but, with astonishing ease, the firm foundations hold. Accusing him of being an “interloper,” “Nic” banishes the remorseful “Paul.” “Joanie” goes off to college and “Laser” stays home with his “moms.” The bad single person is left behind, his longing nose pressed to the window of the marriage and the family he cannot be part of. The end.

But the central character is “Jules”; the film is really about her. She has played the housewife to the wage-earning husband figure, “Nic.” “Jules” stayed home and took care of the children, but she resented “Nic’s” power of the purse over her and the fact that she could not get the career she wanted. The “kids” are leaving the nest, which in a few years will be completely empty. Once the children leave home, what should she do with the rest of her life? Society still does not provide a road map for the stay-at-home partner. How do you reengage with the real world outside the boundaries of the family? How do you catch up? How do you interact with adults who are not your partner? Very carefully.

Married people live a very sheltered life. They are able to find support and sustenance from each other. When a married person who has been the stay-at-home partner leaves the protective circle of the family, there will almost always be problems for this person, male or female. You tend to assume that the rest of the world is like your family where everyone is on your side. You don’t know that the world is full of self-interested people who are not like your family. You will make mistakes, because, like a young child, you have no experience and no judgment. Particularly when it comes to sex.

“Jules” is naïve and totally unprepared for real people in the real world. She uses the unexpected offer from “Paul” to landscape the terraced garden behind his organic restaurant to start a new way of life, with herself as an achiever instead of a caretaker. In other words, she has left the family and its protection. “Paul” is made out to be a predator, afraid to make a commitment. He very carefully selects an unavailable woman, one who is vulnerable and who does not understand that (single) men hit on women all the time. To be able to hit on all women all the time: that is why “Paul” has never married. “Jules” makes a mistake so typical of people who wander outside the familiar territory of marriage: she blunders into an ill-advised adventure. She is used to agreeing to sex; that is what marriage is all about. And “Jules” does not know that, outside of marriage, you get lots of offers: some you accept, some you don’t. So “Jules” says “yes” to “Paul,” because that is what she does. The film does not address the very real question of why she was so unworldly and ignorant that she was left open to making such a mistake. “Jules” scrambles back to the family, not older, not wiser, not more grown up. She remains a child, protected by “Nic.”

The affable, free-wheeling, motorcycle-riding “Paul” is a minor character, the outsider, the disruptor. He is the catalyst that works against the grain of the story. This character deserves to be deconstructed. The entire plot is told from the vantage point that marriage is best and that being single is an unnatural condition. But “Paul” is exactly who and what he wants to be. And although self-actualization is supposed to be the goal of every individual, he is condemned for being “arrogant and full of himself.” The married people mock him, indicating just how threatening they find him. But “Paul,” like most single people, likes the freedom. Although “Nic” scorns him for having dropped out of college and for having done exactly what he wanted to, “Paul” has made a success of his restaurant business, a very real accomplishment, for which he gets no credit. “Paul” is condemned for doing what people do: taking advantage of an opportunity to enjoy himself. In contrast, “Jules,” who obviously has not done what she wanted and who tested the waters of a new life by acting out with a man, of all people, gets off with an apology.

The “kids” have much more street smarts than their parents. They see “Paul” clearly and enjoy the new possibilities he offers, but, in the end, they loyally follow their “moms’” lead and reject him totally. Too bad, because the man had something to offer. He was responsible enough to apologize, but no one in the family is willing to meet him halfway. He has committed the cardinal sin: he has attempted to break up a family and a marriage. Nowhere is it asked why the marriage and the family were so porous. For “Paul” to be able to enter in, the family had to be in some kind of crisis, if only one of transition. “Paul” is not the problem. The marriage simply needed refreshing. One might ask why a lesbian couple could not create a more innovative form of marriage. Why would such a couple replicate a 1950s-style heterosexual marriage in which the husband held all the power? That type of patriarchal alliance was bad for straight couples. Surely, in the Twenty-first century, a more equal or balanced partnership can be found for marriage.

“Paul” is portrayed as a man-child, a perennial Peter Pan who will not grow up, an attitude that assumes that single people are immature, that only married people are adult, and that only couples shoulder responsibility, because only couples have children. Even if we leave out the fact of courageous single parents, the attitude of this film is curious, because single people have to be strong and independent. Single people are alone in the world, depending upon themselves and their own resources. For a single person, the payoff of being alone is worth not having a partner as a backstop. It’s not just that you get to eat an entire pint of Ben and Jerry’s Chocolate Chip Cookie Dough at midnight and no one will laugh at you. Although a single person can come and go without ever having to call home, being single is more than sheer self-indulgence. A single person is free and self-reliant. Without having to consult anyone else, a single person can change jobs, make a new life, find a new identity. Strong character, self-reliance, and the courage to take chances: this is the goal of every mature adult, isn’t it? Society has overcome many long-held prejudices. The bias against gay people and gay marriage is today largely a generational or regional concern. But the prejudice against single people is still alive and well and active in The Kids Are All Right.

“We are such stuff as dreams are made on.” So wrote William Shakespeare in his last play, The Tempest. And this, in a nutshell, is the theme of Inception. This is the movie the summer has been waiting for. Only rarely does one go to a movie and see something that is actually inventive and imaginative. Most films are recyclings of older films and reruns of timeworn ideas. But in the sure and steady hands of director Christopher Nolan, and thanks to a remarkable group of digital artists, Inception reminds you that movies are us: the stuff that dreams are made of. Nolan takes a very basic premise about human beings: that we use very little of our conscious mind and that much of our mental activity takes place while we sleep. Our minds are unexplored and underutilized territory, but this terrain is inhabited. Beware.

Dreams happen each and every night, but we remember little of their content. From a Freudian point of view, dreams are the “royal road to the unconscious.” There, in this buried kingdom, our deepest longings are secreted, our most profound fears are cached, lying under layers of trauma and very resistant to excavation. Freud compared the unconscious mind to the city of Rome, a place he long hesitated to visit, and the analogy was apt. As anyone who has been to Rome knows, you can walk down many streets and many strata of time are laid out in front of you, like an Escher drawing: the ancient world, the world of the Renaissance, the time of Mussolini, and today. The mind, like Rome, has its own archaeological layers of experiences that have to be mined. The psychologist uses the dream images for what they are: metaphors that must be interpreted. Nolan asks the audience both to take dreams literally and to remember that dreams stand for something else, something hidden away.

Nolan also plays with another, more Eastern, more mystical concept: suppose we are the dream of a greater being, a fantasy of some kind of god? For philosophy, for psychology, for theology, and for us all, the real question is: what is existence? The answer has to be that Being must include the vast amount of time we spend in our fantasy worlds, not just our night dreams but also our daydreams. Like Shiva, we dream ourselves into existence. Nolan is fascinated with the mechanics of the dream: we create the dream but do not know that we are creating all elements of the dream. In other words, those who chase you and persecute you are you. Suppose we are the gods of our own dreams and we are dreaming ourselves? Then what is real and what is the dream? That is the key question of Inception.

The plot is simple. The main character, “Cobb,” played by Leonardo DiCaprio, is an “extractor,” a trained operative who can go into the dreams of other people and direct the dreams in such a way that allows him to find hidden corporate secrets. In order to go into the dreams of another, one needs a team, people who will enter into the dream with you and add to the false fantasy. His team includes “Arthur,” played by Joseph Gordon-Levitt, who looks like a serious bank clerk but is an incredible “acrobat,” as Nolan termed him, doing his own stunts. These stunts, which take place in zero gravity, are worth the price of the film ticket. Not since Fred Astaire danced on the ceiling in Royal Wedding has this effect been so well done. Other members of the team include “Eames,” played by Tom Hardy, and “Yusuf,” played by Dileep Rao. The key member of this team is “the architect,” played by Ellen Page, who is called “Ariadne.” She shows off to “Cobb” by building an amazing dream world that folds itself over and lands on top of itself. Amazing. Obviously, the architect builds a dream world that does not have to obey the laws of logic, but the dream must function logically, like a labyrinth constructed in three levels, each burrowing deeper into the mind of the target. The target in this case is “Robert Fischer, Jr.,” played by Cillian Murphy. Even the veteran actor Tom Berenger is on hand to play the lawyer for the Fischer family. According to “Saito,” the corporate client, Fischer is on the verge of making his late father’s energy empire into a total monopoly, and only Saito has a company that could break such totalizing control.

This plot, of course, is a classic Hitchcockian McGuffin. The quest or the task or the object in question was always unimportant to Alfred Hitchcock. For this director, it was always the couple, the romance, and the need to resolve the relationship between the man and the woman. Often, as in Vertigo, the relationship was obsessive-compulsive, with the characters repeating their mistakes on an endless loop. In Marnie, the couple is a daunting combination of repression and obsession. In a Hitchcock film, the male must control and contain the female; but the man does not always succeed, as in Rear Window, where the hapless James Stewart is ensnared by a triumphant Grace Kelly.

The couple at the core of the film is “Cobb” and his wife, “Mal,” played by Marion Cotillard. “Mal” also has/had the power to penetrate dreams and to manipulate their outcome. Hitchcock warned men over and over in many of his films: beware of the powerful woman, the femme fatale. “Mal,” apparently a figment of “Cobb’s” imagination, has the nasty habit of suddenly appearing in his dream jobs and sabotaging his work. Every time “Mal” appears, the beautiful Cotillard slows the film down. In an action film, it is a problem for the audience to get bogged down in a relationship. But Nolan is a brave director. He allows “Mal” to be a drag on the film, because this stopping of the flow is absolutely necessary for the concept of the film and its plot.

Nolan, who wrote and directed Inception, is well aware of film studies that have equated film with dreams. We leave the light-filled ordinary world and go into a dark theater; we sit immobilized and stare at a screen upon which images are projected. Our state of watching a movie is similar to watching ourselves in a dream, full of projected images. The projector in the theater is behind us, just as we are the mind behind the screen upon which we experience the dream. In a dream, we are unable to know that we are dreaming, but in a theater we have the choice of suspending disbelief or not. We are asked to surrender ourselves to the fiction, as we surrender ourselves to the dream. Film language uses dream language; particularly important is the effect of “elision,” or the jump cut. In a movie, when a character says something like, “Let’s go out to dinner,” we immediately jump to the restaurant. As in dreams, in movies we are spared the logistical details of changing clothes and getting reservations and driving to the restaurant and getting a parking place, and so on.

Nolan uses the resemblance of film language to dream language to great effect in this film, seducing the audience into assuming that we are simply experiencing film language. But pay attention; ask yourself: are we witnessing a film or a dream? The fact that the viewer will be run through a maze is obvious, given the name of the architect, “Ariadne,” a character from mythology who led the Greek hero Theseus through a maze built by the first architect, Daedalus, to confront the Minotaur, the half-man, half-bull. In the world of dreams, we are Daedalus, the true architect, but we are also Theseus, encountering our deepest fears, the Minotaur, the monster, who is also ourselves. Ariadne is the psychologist following a thread through the dream construct to find the truth and to heal the patient. We assume that “Ariadne” is the grounded member of the team, the one who holds the string or thread that will lead the hero to the surface. But what is the Minotaur?

The movie begins with a failed mission, one undertaken to get into the mind of “Saito,” who subsequently hires “Cobb” to get into the mind of “Fischer” and plant, during a dream, the idea of breaking up his father’s monopoly. “Cobb” can no longer be an architect, we are told, because he suffers guilt over his wife, “Mal,” who committed suicide because he planted an idea in her mind about the power that the mind has over reality. This idea caused a confusion in her mind between the real world and the dream world. For those in a dream state, there is a “bump” that can wake you up, such as your own death. In order to “bump” herself out of the real world, which she was convinced was the dream world, “Mal” leaped to her death and staged the scene to make it look as if her husband had killed her.

Separated from his children, Phillipa and James, who are cared for by their grandfather, “Miles,” played by Michael Caine, “Cobb” agrees to take on the job offered by “Saito,” played by Ken Watanabe. “Saito” offers to make a single phone call that will allow “Cobb” to go home to his children. The team goes into the mind of the target—will they succeed? Will “Cobb” be allowed to go home? That is the McGuffin. The real clue is the implantation of an idea and the effects of this idea, which can take over the mind.

To Nolan’s credit, he is absolutely straightforward with the audience. We are given all the facts up front. From the very beginning we know everything, but we choose to go along with the dream world that Nolan has written for us. The director tells us that the mind, even the unconscious mind, has defenses that protect the secret. In real life, such defenses are responses to the trauma of bad memories. We compulsively repeat certain kinds of dysfunctional behavior, or we project onto others our own feelings. In Inception, the mind of Fischer creates guards, guns, entire armies to resist invasion, with the same single-minded resistance that a compulsive gambler will show to a therapist. The “secret,” a last will and testament, is also a glimpse into the mind of his father. The mind of the parent is always a mystery to the child, who is always futilely trying to interpret the adult way of thinking. It is the parent who inflicts the first wounds and the primal trauma on the mind of the child, who buries the agony in the deepest vault of the mind. To get to this vault, the team must go deeper into the mind and encounter greater resistance in order to break into the castle, where the safe is to be found.

For a two-hour film, Nolan is exceedingly generous to his large cast of actors. You never feel as if there are “stars” who take the lead or eat up most of the screen time. Each actor gets his or her due and is as fully developed as dream characters get. I have read comments about the violence in this film, but there is actually more action than violence. Fasten your seatbelts; this is a fun ride. Because we are in a dreamscape, the special effects are believable and amazing, and you never get the feeling of “digital effects” or “CGI” that has become so common in the run-of-the-mill movie. Though less complex than Avatar, the artwork is just as compelling and powerful.

To film audiences unfamiliar with Hitchcock, who did not get the references to Greek mythology, or who don’t see Nolan’s play with film theory, the ending of the film might come as a surprise. To those of us who knew where the film was going after about ten minutes, it simply doesn’t matter. Like a dream, you just go with it.

Jean-Léon Gérôme (1824 – 1904) was on the wrong side of history. Many people have been on the wrong side of history, and, like the segregationist Senator Strom Thurmond, they deserve to stay there. However, art history is more subjective than history-history, which is supposedly based upon verifiable facts. Art falls into the perilous zone of subjectivity, and art and artists are subjected to the rise and fall of critical preferences and aesthetic judgments. Gérôme was art history’s vilest villain, the most reliable enemy of all things Modernist. He was the perfect foil to Manet, Monet, and Cézanne, not because he was a popular and successful Salon artist but because he railed against his Impressionist counterparts, often and in public, on the record. But since the 1980s, a “younger” generation of art historians, in search of new material for dissertations, began to revive the dead dinosaurs of “official” French art. And Gérôme was among those most in need of revision.

The current exhibition at the J. Paul Getty Museum in Los Angeles, “The Spectacular Art of Jean-Léon Gérôme (1824 – 1904),” is one of the highlights of the summer of 2010. The beige marble citadel high on a hill overlooking the Sepulveda Pass and the 405 Freeway has been honored as the first stop on a tour, which also features the Musée d’Orsay in Paris and the Museo Thyssen-Bornemisza in Madrid. It is rare for Los Angeles to be the lead in the museum world, especially for such a superb array of paintings, many of which were recently retrieved from ignominy in museum basements. Unlike some art historians, I am not interested in rescuing a neglected artist, unjustly discarded by the forces of history, or in making a case for his worthiness. I am more interested in making sure that history is made complete. Without championing Gérôme, it can be said that it is necessary for him to be re-placed in the history of Nineteenth-century art, if only to better understand the Modernist artists, their accomplishments, and their courage. It is important to understand the vast differences between Gérôme and the Impressionists, in terms of painting technique and subject matter, in order to comprehend how each was received by the art audience and the Salon-goers.

The Artist

When the real art history of the Second Empire and the Third Republic in France was restored by the new art historians, it was revealed that Gérôme was genuinely popular with the art audiences and collectors of his time because his art was immensely innovative, decidedly novel, technically proficient (not outstanding but good enough), and, above all, featured sex and violence. A can’t-miss combination. The reason he fell out of the art history pantheon was, as all art historians know, the Theory of Modernism. Beginning somewhere around the art critic Charles Baudelaire, migrating to the British critic Roger Fry, and culminating in the American critic Clement Greenberg, this theory put forward an entity called “Modernism,” both a state of mind and a period of time, that produced an artistic attitude called “art-for-art’s sake,” which led to avant-garde art, a reaction to modernité.

According to the teleology of Modernism, the founding fathers (no mothers allowed) were Gustave Courbet and Édouard Manet, and their progeny were the Impressionists, the Post-Impressionists, and all the “isms” of the Twentieth century, climaxing with Jackson Pollock. No artist, no matter how historically famous and successful, could be a Modernist unless he (not she; women were not considered) was part of that select group. The artists of the Salon were eliminated, the “official” artists were purged, English and American artists were left out, and only a small group of French male artists were allowed to be part of the club. The result was an art history based upon an evolutionary theory of a progressive march taken by art from representation to abstraction. The Greenberg story of art was an excellent metanarrative, as Jean-François Lyotard would later call it, but it was not a proper history of art. Modernism was a construct, a convenient fiction, complete with heroes and villains.

The Artist and the Public

“The Spectacular Art of Jean-Léon Gérôme (1824 – 1904)” comes equipped with a large catalogue and a smaller book of essays that strive to write the artist back into art history. Gerald Ackerman, whom I met while I was in graduate school, began the revival of Gérôme. At the time, Ackerman told me he had been working on Gérôme for twenty years, and it is his pioneering effort that is the foundation of scholarship for today’s writers. Current scholars point out that, even in his own time, Gérôme was as controversial with the critics as the avant-garde artists were. Like the latter, Gérôme had to court and please the bourgeoisie, and the career-minded artist created a juste milieu path between erudite, orthodox, high-minded history painting and low-caste genre scenes of everyday life (“Molière Breakfasting with Louis XIV,” 1862). Taking a page from the playbook employed by Ernest Meissonier, Gérôme rethought history painting and made it accessible and entertaining to the middle-class art audience (“The Tulip Folly,” 1862). In place of classical knowledge, understandable to scholars and specialists, the artist inserted carefully researched archaeological and ethnographic detail (“Solomon’s Wall, Jerusalem,” 1876). In place of the relentless ordinariness of Realism and the remorseless observation of Naturalism, Gérôme substituted a panoply of information, educating the viewer. Instead of heroes and noble characters, he created a cast for his theater of history and used the actors to tell arresting stories about life in another place and another time.

Gérôme’s art presents us with spectacle on two levels, echoing the culture of scopophilia and observation of the Other that was the basis of Second Empire power, Third Republic imperialism, and the control of men over women. First, Gérôme’s art showed the sheer spectacle that held sway during the Roman Empire. Bread and circuses, food and entertainment: if you provide the people with these two necessities, they will tolerate any amount of tyranny. Whether such a conscious policy is smart or despicable depends upon one’s political point of view. To the middle-class French people, survivors of multiple revolutions and uprisings among the disempowered, a firm hand on the wheel may have seemed a good idea. Second, there is no reason to assume that Gérôme was trying to do anything more than present interesting subject matter to his audience. There was probably not much subtext in his work. Indebted to his imperial patrons, Gérôme was a conservative who would be unwilling to offend his collectors. All he asked of the viewer was to look. Clearly he had worked out a formula: sex and violence sell, and the exercise of imperialism is comforting to a second-class power.

His paintings of the Roman Empire enshrine the pleasure of looking, of seeing violence that happens to others and not to you, a pleasure Edmund Burke called the “sublime.” That heightened and intense emotion that Burke wrote of was, not incidentally, in relation to the spectacle of beheading during the French Revolution. In the Roman arena, there are no victors, only victims of a system of witnessing what was an imperial display of the Emperor’s power. The gladiators, who were slaves, saluted the Emperor before they died in “Ave Caesar, Morituri Te Salutant” (1859); the Christians, who in actuality were persecuted in very small numbers, provided the fledgling religion with its first martyrs. Maxime Du Camp, photographer of the Middle East, complained that Gérôme was inaccurate, and the artist did, indeed, take liberties for dramatic effect. His painting of Christian martyrs showed human beings used as torches, but the scene is set in daytime, while Nero put on such a show only at night. In “The Christian Martyrs’ Last Prayer” (1863 – 1883), the lion approaches the huddling worshipers. Much has been written of Gérôme’s anticipation of film and his use of the long pause in a narrative, and, indeed, the viewer can see the heads of other felines emerging onto the sand surface of the Coliseum. Like the martyrs, we wait. Despite what “Gathering up the Lions in the Circus” (1902) suggests, in real life animals had no interest in attacking humans and had to be taught, even forced, to pounce. During these centuries of arena entertainment, entire species of wild animals were either wiped out or endangered, due to the overindulgence of the Romans.

The Roman audience in “Pollice Verso” (1872) came to the arena to be entertained. Some commentators and historians have since suggested that the blood lust acted as a kind of drug, dulling the senses, reducing human carnage to a mere theatrical exercise. There was an endless supply of slaves and criminals to put to death in an exercise of punishment and control. Although Gérôme could not have imagined modern film, his paintings of the Roman Empire became sources of inspiration for Hollywood film directors, from Cecil B. DeMille to Ridley Scott. But that observation raises a rather interesting question: why are we still so fascinated with an imperial power that used human beings as stage lights, crucified even the most insignificant dissidents, rewarded the few and persecuted the many, keeping everything in balance through constant spectacles of blood and violence? Why does Hollywood not make movies about the Greeks, unless they are fighting the Persians in tiny leather uniforms? Do we conclude that we are superior to the Romans in the arena because we are addicted only to movie violence?

In a world where schools have long since sidelined history, most of us learn of the past from the History Channel and Oliver Stone. Today we could call Gérôme a “popularizer.” If he were a history professor today, he would be complimented for helping the students identify with the events of the past. However, history painting in Nineteenth-century France was not necessarily supposed to be popular, only revered and respected. Unlike Gustave Courbet, who disrespected the Salon system by portraying unattractive, uninteresting modern types on the large scale reserved for history painting, Gérôme kept most of his works small or medium-sized. He was, in effect, using the rules to create a new space for what the system had already approved. In the end, he slipped past his many detractors and found fame, fortune, and many honors. As Scott Allan pointed out in his “Introduction” to “Reconsidering Gérôme,” the artist was “appointed professor at the newly reorganized École des Beaux-Arts in 1863…. (was given) a seat in the Institut de France in 1865…(and was) nominated grand officier of the Legion of Honor in 1898.” Son-in-law to the grand impresario of art reproduction, Adolphe Goupil, Gérôme was one of the most reproduced and widely distributed artists of the Nineteenth century. But was he a good artist?

The Artist and Technique

I went to the exhibition with my good friend and colleague, Irina, and, like post-Post-Modernist art historians, we glided among the many theoretical approaches available to us, from formalism to feminism, to discuss Gérôme. Technically speaking, he was an odd mixture. On one hand, he could handle paint only in a limited manner, for he was essentially a draftsman in paint. On the other hand, he never won the Rome Prize for good reason: he was almost blind when it came to the classical approach to the human figure. Only when he removed himself from the Beaux-Arts tradition did he become at ease with the people he painted. His nude women are borrowed entirely from other artists, especially from Ingres and Chassériau (“Character Study for a Greek Interior,” 1850), and are boneless and airbrushed to a peculiar blank flatness. But when Gérôme clothed his females, he was completely at ease. His portrait of the daughter of Betty de Rothschild, who was herself painted by Ingres in 1848, “Portrait of Madame la Baronne Nathaniel de Rothschild” (1866), is not as stunning as an Ingres painting, but Gérôme held his own with the master. His “Portrait of M. Édouard Delessert” (1864), with the subject nattily dressed in blue argyle socks, is a genuine character study.

Despite these near-great portraits, Gérôme seemed to have had a hard time integrating actual sites with imaginary people. For example, there is a wonderful trio in the exhibition featuring Napoléon in Egypt. “Napoléon and his General Staff in Egypt” (1867) imagines a very large General on a very dainty camel, but it is not the size disparity so much as the startled expression on Napoléon’s face that makes today’s viewer smile. “Oedipus” (1863 – 86) also plays havoc with scale: Napoléon is on a tiny horse, standing in front of a shrunken Sphinx. But most interesting is “Napoléon in Cairo” (1867 – 68), a simple little painting with the General standing in full uniform with Islamic mosques in the background. In real life, Napoléon was short and rounded, but here he is tall and slim. The viewer is given TMI: the details of the uniform are exquisitely rendered, and one learns, thanks to the deep shadows of selective folds of his trousers, that the future Emperor “dressed” left. There are probably several reasons for the disparity of scale and proportion in Gérôme’s paintings. One would certainly be his academic training, which taught students to think in pastiche and collage and to “paste,” as it were, standard studio poses into grand backgrounds. Another cause would have been the artist’s use of photography as his source. Photography made minute details available to the human eye, and when Gérôme copied these details, the effect was to flatten the surface with non-hierarchical information that overwhelmed the displaced figures and threw off the scale.

Gérôme emerged onto the Parisian art scene as the leader of the “Neo-Grec” school (according to the critics of his day) with “The Cock Fight” in the Salon of 1847. The genre painting tells a story of cocks, both seen and unseen, as a young boy orchestrates a contest between two roosters while a young girl shrinks away, as well she might. However, Gérôme did not confine himself to antiquity, and the choice of his subjects says a great deal about what was going on in France during his career. Just as his mentor Paul Delaroche spoke obliquely about the French Revolution (matricide and patricide) with “The Execution of Lady Jane Grey” (1833), Gérôme saluted the Second Empire by celebrating the current Emperor’s uncle, Napoléon I, in a number of paintings, some direct references, some indirect. The rather marvelous “The Reception of the Siamese Ambassadors at Fontainebleau” (1864) is a direct steal from Jacques-Louis David’s “The Coronation of Napoléon” (1807). Two other wonderful paintings, “The Grey Cardinal” (1873) and the “Reception of the Duc de Condé at Versailles” (1878), were painted after the fall of the Second Empire and could be interpreted as a warning against secret power (the Cardinal and, by extension, the late Empire) and a plea for reconciliation (the Duc de Condé) after rebellion; but, given the inherent conservatism of Gérôme, the works could be more comfortably read in relation to the nostalgic Bonapartism and the desire for a monarchy that marked the unsteady early decades of the ill-fated Third Republic.

The Artist and Gender

In painting after painting, Gérôme clearly demonstrated his discomfort with women. Before his very profitable marriage to the daughter of Europe’s biggest art dealer, Gérôme lived a rather Bohemian life in a homosocial environment. Like most men of his time, he would have had little contact with women of his own class, and he would not have considered a woman to be his equal. His nudes are far removed from actual women, as if their nakedness made him so uneasy that he had to use “the nude” as a mask for their disconcerting naturalness. But he is equally uncomfortable with male bodies. Both the gladiator in the celebrated Ave Caesar and the belly dancer in “Dance of the Almeh” (1863) are pudgy: the gladiator sports man boobs and the dancer has a large pot belly. But Gérôme was comfortable with little boys, carefully delineating their backsides in “The Serpent Charmer” (1880) and his early “Michelangelo (in his Studio)” (1849). In the former, the backside is bare; in the latter, the backside is literally delineated by a pair of red-striped tights worn by the child. My friend Irina remarked that there is something “almost unseemly” in Gérôme’s art. I would agree, although I would eliminate the polite “almost.” Many of his paintings are simply unseemly, in today’s terms, in their confirmation of the scopophilia of male desire for conquest through the passive gaze.

“Phryne Before the Areopagus” (1861) presents one of Gérôme’s repeated themes: men looking at an object of lust. The object, in this case a woman, is isolated and alone and, most of all, naked, totally exposed to the male gaze. The artist tried to have his specious content both ways: men gaze upon women, but the beauty of women’s bodies subdues them, stuns them into silence and submission. The male, however, is always clothed and always retains his power. Young “Phryne” is examined by a group of startled old men, dressed in strong red robes, countering her pale hairless body. Nowhere were women so completely under the command of the male as in the mysterious East. The notion of the submissive and speechless woman was especially appealing to Frenchmen, alarmed by the propensity of Frenchwomen to rise up during each revolution at home. French women had been stripped of social power and politically disenfranchised, stripped naked in custom and law. The “Dance of the Almeh” (1863) also empowers the men, who watch the gyrating dancer writhe for their amusement. The males in the circle are equipped with long, straight, phallic instruments, guns, spears, violin bows, and even a pipe, as though they are protecting themselves. The same excess of protection and phallic display can be seen in “The Serpent Charmer.” The old man at the center of the group has a long sword suggestively rising above his upper thigh as he watches the naked little boy playing with a long snake. The rest of the men are well equipped with erect spears, raising the unanswerable question of whether or not Gérôme was aware of the sexual subtext.

“For Sale (The Slave Market)” of 1866 is the ultimate expression of the stripped and speechless woman being exchanged among men (according to Engels). The slave market in the Middle East has replaced the European concept of marriage as a financial exchange, and, as the catalogue essay on this painting reports, the French public “hardly batted an eye.” The idea that the “public” was unfazed by this painting implies that the painting was intended for men, which it surely was, and that the art was not viewed by women, which it surely was. Although women of the Second Empire were not expected to look at such art, and were surely embarrassed by the painterly display of helpless female flesh, one can imagine that some of these ladies imagined the slave woman biting the fingers of the man who was examining her teeth.

In other paintings, the audience itself is outside the scene, looking in on it or looking at the imagined world of the harem. The external spectator was metaphorically internalized as a figure in a red robe reclining on a wooden chair in “King Candaules” (1859), watching the exchange of male looks. The King’s guard, Gyges, lurking in the dark off to the right, is watching the queen, Nyssia. The queen is caught in a triangulated gaze among men, but, as Baudelaire pointed out, she would not have been the “dull puppet” depicted by Gérôme. As was typical for the artist, the woman is pale and naked and helpless, with her back turned to the audience in a gesture of modesty thrown in by the painter. One could ask if showing a woman’s naked body from the rear is more or less discreet. “The Moorish Bath” (1872) is noteworthy for the carefully drawn Islamic tiles and for the inherent racism that exposed the naked breasts of the African slave and allowed the white woman to turn modestly away from the viewer. Echoing European and American notions of racial hierarchy, “The Grand Bath at Bursa” (1885) is yet another Imaginary Orient, complete with white women (presumably captured by swarthy Arab chieftains) who are sexual slaves and black women who take care of the needs of the concubines. At least in the harem, the women are together and have each other’s company. In many of Gérôme’s paintings, the women are alone, with no friend and no one to defend or take care of them, reflecting the Western version of marriage: the isolated woman, entirely dependent upon her husband.

The Artist as Colonialist

Much has been written about Gérôme as painter to the colonizers, and indeed his many trips to Egypt coincided with France’s desire to master the Middle East and to build an empire. Although their imperial ambitions dated back to Napoléon, the French never caught up with the British, who had an empire upon which “the sun never set.” The Second Empire and the Third Republic, the era of Gérôme, were the high points of French acquisition of territory and artifacts from northern Africa and the Middle East. Gérôme was at his best when he acted as ethnographer, observing the Other. As distasteful as the Imperial gaze was, it did have the virtue of freeing Gérôme from the tropes of classicism and the poncifs of academia. One does not often think of Gérôme the landscape artist, but, as my friend Irina pointed out, his desert paintings are beautiful, dappled with the blue of the sky reflected upon the pale golden buff-colored rocks, just as an Impressionist would have painted them (“The Lion on Watch,” 1890). Here, in a desert light that flattens everything, the silhouetted sharp edges of Gérôme’s dry drawing make sense. In “Arabs Crossing the Desert” (1870), the large scale of the figures is permissible in such open distances. In these paintings of the Middle East, colors are intensified in the light, and Gérôme came into his own with his strong colors: unexpected pinks (“The Black Bard,” 1888), brilliant oranges and blazing yellows (“The Marabou,” 1888), hot reds vibrating on the surfaces (“The Standard Bearer,” 1876). “The Color Grinder” (1891) summarizes the importance of color with a row of large stone mortars lined up in front of a dark shop in the Holy Land. In an age of paint in tubes, Gérôme painted the encircled lips of the large stones, glowing with vibrant colors pounded into submission.

Although Gérôme replicated the Middle East and its male inhabitants with apparent exactitude, his paintings are fantasy pastiches. But they are totally convincing and carried the larger truth of white European fantasies of conquest and control of the inferior Other. The people he so carefully studied and observed during his many visits appear to belong to another century, devoid of technology beyond the Seventeenth century, backward and in need of French guidance. “Heads of the Rebel Beys of the Mosque El Assaneyn” (1866) mixed actual events with infidel barbarity, necessitating the civilizing French touch upon a people who favored public beheading. The irony of such an attitude of superiority may have escaped Gérôme. (The French continued to use the guillotine into the 1970s.) A fascinating and harmless object of curiosity, entertaining the French audience, “The Whirling Dervish” (1889) needs to be Christianized and Europeanized. Due to the precise accuracy garnered from a photograph, the Salon-goers assumed that the artist was educating them with “The Carpet Merchant” (1887). Each painting of Middle Eastern life can be seen as a contrast to European life—a market instead of a bank, souks, not department stores, fanaticism instead of Catholicism—with the Muslim barbarians being presented as the Other, as Different, as Inferior, as Strange, as Something to be Looked At, as Spectacle, captured by the artist, commanded by the whitened gaze of the spectator.

The Sculpture

The Getty catalogue included a number of paintings done by Gérôme on the theme of the male sculptor and his female model, reflections on the Pygmalion and Galatea myth. From “Pygmalion and Galatea” (1890), where Galatea comes to life when the delighted Pygmalion kisses her, to “The Artist’s Model” (1895), where Gérôme himself becomes Pygmalion, we can trace the desire of Gérôme to make the perfect woman. Strangely enough, his women come alive only when they are sculpted, usually larger than life. Polychromed Nineteenth-century sculpture, especially sculpture with teeth (“The Ball Player,” 1902) and jewelry, is an acquired taste, even for people educated by Jeff Koons. For us today, Gérôme’s “The Gladiators” (1878) is reminiscent both of fascistic works and of fantasy figures from World of Warcraft. As Édouard Papet pointed out in his catalogue essay on Gérôme’s sculpture, the fact that the ancients polychromed their sculpture had just come to light. Accustomed to the bleached whiteness of marble exposed to the elements or buried underground, Europeans must have found it difficult to adjust their taste to the multiple colors of marble used by Cordier. By the time of Gérôme, his chastely colored females would have been acceptable. But to Modernist-educated contemporary viewers, Gérôme’s sculptures were simply gaudy bad taste, and they disappeared from view.

The Getty has brought together a splendid collection of the artist’s late sculptural work, the highlight of which has to be “Corinth” (1904). Decked out with jewelry applied to her naked body in every conceivable site, she would be the envy of Jeff Koons; she is the epitome of vulgarity and excess, a siren of the Gilded Age. “Corinth” and the portrait of “Sarah Bernhardt” (1901) were true high points of the exhibition, even for the visitor who has already seen many, many paintings in the previous galleries. Paradoxically, the rare paintings by Gérôme which allow the woman any agency at all feature the art of sculpture, the craft of Pygmalion. In “Painting Breathes Life into Sculpture” (1893), a young woman, working in the back of a Tanagra shop, is painting the small female sculptures, bringing them to life. In ancient Greece, there were women who were allowed to participate in their family’s workshop, but, although they were accomplished artists, they did not sign their names. In “The End of the Séance” (1886), the still-nude model covers the clay replica of herself, as though to grant the inanimate object some of the protection and modesty that she herself has been denied.

The Artist and Orientalism

My friend Irina and I noted Gérôme’s penchant for filling in his canvases with overwhelming detail about the Orient. Full of bric-a-brac, the paintings are crowded with information, much of which was gained from the artist’s many visits to the Middle East and from documentary photographs. From one perspective, the artist’s work was typical of the Victorian “horror vacui.” From another point of view, the artist was on a mission. The French tactic of conquest through military might and the gathering of facts dated back to Napoléon’s ill-fated foray into Egypt. The history of paintings of the Middle East done by French artists also dates from the early Nineteenth century, when the Turks and the Muslims were depicted as brutal and backward. Gérôme nodded to his artistic precursors in his painting of “Marcus Botsaris,” a hero of the war of Greek Independence who fought with Lord Byron. But the 1874 painting itself is typical of Gérôme’s approach to the unfamiliar: he delineated a veritable encyclopedia of Eastern décor and paraphernalia.

Gérôme’s dedication to accuracy was part of larger tendencies: the rise of modern history writing, the rise of the French Empire, the use of photography to record and preserve the known world, and the period’s fear of empty space. Gérôme’s paintings are packed with these cultural vibrations. His art owed a great deal to the French delight in the Pre-Raphaelites and their facility for storytelling, which he put in the service of imperialism. It would be anachronistic to accuse the artist of “complicity” on a conscious level in an enterprise that would, a century later, be described by the French as an accidental empire. Undoubtedly, Gérôme shared the prejudices and desires of his time and believed in the right of the French to have an Empire. His paintings were part of a deeply felt belief system. In his investigation of the pioneering efforts of French artists in picturing the Orient, Todd Porterfield did not accept the current French scholarship, which insists that the imperialism of France was “haphazard” and “timidly entered into.” According to Porterfield, the French artists portrayed,

As was pointed out earlier, this was exactly the dialectical strategy employed decades later by Gérôme in his paintings of the Mysterious East and the Backward Other. It would be safe to assume that the artist believed, in common with most other Europeans, that the culture of the West or the Occident was superior. The discourse of racial and cultural superiority had been in the making among European scholars and writers for decades. The late Palestinian philosopher, Edward Said, revealed the role of discourse in the literary manufacture of "Orientalism" in his 1978 book of the same name. Although the cover of his book featured Gérôme's "The Serpent Charmer," Said did not discuss "Orientalizing" art. Said pointed out that when the Europeans wrote about the East, the scholars were creating, not the truth, but a "representation" of the "Orient." Using Michel Foucault's concept of "discourse," in which serious speech acts from experts shape what becomes received knowledge surrounding a topic, Said stated that the "Orient" was constructed in terms of what the West was not. As Foucault pointed out, representation as constructed by the One would fabricate the Other into an inferior for purposes of discipline and punishment, power and control. Said continued the French philosopher's thought by pointing out that an "Imaginary Orient" was manufactured for the purpose of defining the Europeans themselves by using the "Orient" as the negative to the Western positive. The Imaginary Orient had little to do with the "real" Middle East, for the Europeans were essentially uninterested in the Other. Europeans were concerned with the task of writing themselves into a position of dominance.

The concepts of Foucault and Said were quickly taken up by art historians, resulting in a major investigation into the attitude that European artists had towards the Other. Thanks to post-colonial theory, it is possible to view Gérôme and his art as an expression of French power over a dark-skinned people who refused modernity and Westernization. The art of Gérôme had to overwhelm the viewer with facts, information, detail, as though to compensate for a fundamental Lack of knowledge. Foucault equated seeing/sight with power: "voir, savoir, pouvoir": to see is to know is to have power over. For all the privileging of vision in Gérôme's work, the Other, the "Oriental," remained a slippery character in the French imperial drama. All the knowledge in the world is spread out on his canvases, but it is all from the French point of view, and we learn everything and nothing. In the end, all the superiority, all the power in the world could not hold the Empire together, and today, as Porterfield pointed out, the French seem vaguely embarrassed about their role in colonialism.

Looking at Gérôme's art in today's world is an interesting enterprise. The colonized subjects of the French empire have long since come home to the Mother Country, unsure of their identities, as Frantz Fanon so eloquently stated in "Black Skin, White Masks." So thoroughly imbued with the doctrines of colonialism and imperialism, the colonized think that they are partly "French" and come to France to live, but they insist on bringing their "Oriental" culture with them. Suddenly, what seemed exotic in the Middle East caused controversy in Paris: head scarves or not? The fear of the Other continues. Gérôme, like all artists, was engaged in acts of representation; and, as for his attitudes, his biases, his complicity, his patriotism—that is for history to decide.

The Artist and History

Gérôme studied under the official juste milieu artist, Paul Delaroche, who knew how to please a crowd. He had a gift that Gérôme did not: Delaroche could move an audience with his spellbinding and compelling scenes of arrested pathos. In "The Execution of Lady Jane Grey," the blindfolded teenager, England's nine-day queen, gropes for the wooden block where she will lay her little neck. Dressed in white to emphasize her youth and innocence, the little girl who has been a pawn of reckless and ambitious adults is a triangle of pity in the center. She is flanked by the axe man, the executioner. By his side is his axe, the head of which gleams in anticipation. In Delaroche's "Princes in the Tower," also known as "The Children of Edward" (1831), the beautiful young princes cower in the dark, alerted by a shaft of light under the closed door to their room. The minions of their evil uncle, Richard III, are upon them. It was this master of the breathless moment who said, when photography was invented, "From today, painting is dead!" It was this painter who trained some of the greatest photographers of the new era, men who transformed photography into an art form, Charles Nègre and Henri Le Secq. But their photographs were doomed to be neglected and unstudied until the Twentieth century. It was another painter, his pupil, Gérôme, who would profit the most from photography.

Although Gérôme benefitted a great deal from his relationship with Goupil's, there were many critics in his own time who were wary of the sale of reproduced paintings-as-photographs or as prints. Presaging Walter Benjamin's observation that when art is reproduced, it loses its aura, its untouchability, its place in ritual, its role in cult, Nineteenth century critics wondered if Gérôme were cheapening himself and his art. But being under contract with the firm, the artist had little control over the fate of his images. The art firm did well for the artist, finding for him an audience of buyers for all levels of his works, from the original paintings to different kinds of reproductions, suitable to a wide range of incomes. Sadly, some of the images in the $80 Getty catalogue reproduced poorly, such as "The Death of Marshal Ney," where the darks submerge the image into unreadability. But the original reproductions were clear in replication and enhanced the artist's reputation everywhere. When my friend Irina was surprised that "Pollice Verso" was now in the Phoenix Art Museum, I bet her that the original buyer was an American. I was one buyer off: the first was British, but after that, all the others were Americans in New York. New Yorkers, especially in the Gilded Age, as an essay in the catalogue by Mary G. Merton pointed out, loved the opulent visual excess of Gérôme and equated him with all things French. The American buyers tended to not quite understand the distinctions among French artists, and they would purchase a Renoir and a Bouguereau and a Gérôme, ignorant of the theoretical debates in Paris.

But Gérôme was in the thick of the quarrel between tradition and modernity. The artist who gained the most from technology was the most hostile to artists who dared to think or paint differently. "Rodin, Pissarro, Monet, Degas are rotten scoundrels," he exclaimed. In the "Foreword, Picturing Gérôme," the authors quoted Gérôme as objecting, among other things, to a posthumous exhibition of the art of Édouard Manet at the École des Beaux-Arts in 1884. The artist derided Manet as "…the apostle of a decadent manner, of a piecemeal art…" Manet, he continued, produced "…highly willful and lurid work…" Of the Caillebotte donation of the Impressionists to the Louvre, Gérôme raged, "I repeat, if the State has accepted such rubbish, then moral fiber has seriously withered." This, from the master of the bared bottom. Invective, no matter how heartfelt or mainstream at the time, seldom passes the test of time. Meissonier's prosecution of Courbet after the Commune soured his place in history; like Gérôme, who had nothing but spleen for dead artists, the rancor of these popular artists lost them respect from their peers and from history. The writing that accompanied the exhibition at the Getty suggests that we need to separate Gérôme's unpleasant nature from our reconsideration of his art. The audience at the museum, however, seemed unconcerned with the issues Irina and I mulled over. On a Wednesday afternoon, crowds were large and appreciative. Gérôme would have been pleased. Back where he belonged. On top. The center of attention.

This year has brought two very good films on the art world: first, The Art of the Steal, about the Barnes Collection (reviewed on this site), and now, Exit Through the Gift Shop. The title refers to the museum blockbuster, which routes the audience through a maze of galleries so that they can "exit through the gift shop." Here, one can buy tee shirts with art works printed on the front, famed posters of the art in the exhibition, mugs with the paintings wrapped around, note cards, post cards, sometimes backpacks and scarves, even jewelry—all copies of works of art. There is no end to the ways we can all own works of art, albeit in reproduced form. Exit Through the Gift Shop is a commentary on the art world, with the museum guilty of money changing in the temple and the auction houses as accomplices. By inference, the film presents the street artists as the last purists.

Outlaws, who are the ultimate “outsider” artists, literally working outside, invading the streets and posting art by night, uphold the lost honor of the myth of the artist. The artist, the true artist, according to Bruce Nauman, speaking in neon, “helps the world by revealing mystic truths.” He or she works for the common good, without hope of money or fame, willing to die for art. The real truth of the “true artist” is that s/he is a small business owner, producing a luxury commodity for a small group of consumers. The work is made on spec, as it were, and the reward is more fame and less fortune. Only a chosen few are ever noticed in this potlatch culture of inverted economics. The hero street artists of this film, Banksy and Shepard Fairey, are master strategists who have used the “rules” of the art world to gain recognition, gangster style. Primal insurrectionaries, they turned the art game into a guerilla war.

On the surface, the documentary, narrated with careful solemnity by Rhys Ifans, is a record of one man's obsession with the camera, directed towards stealthy street artists. But the mere employment of Ifans immediately tells the viewer that the presence of this supporting player, who chewed the scenery in Notting Hill, is a sign of sarcasm. A tale of sound and fury, told by an idiot, the movie promises to be a witty one. At the heart of the absurdity, lurking at the fringes of the art world, is an unlikely knight-errant, or more precisely the squire of the art warriors, Thierry Guetta. Guetta is a French expat, living in Los Angeles with his long-suffering wife. He is the classic manic, filming compulsively with no end in sight, pointing his camera at the artists who come out at night.

Street art has been around for decades. One can be very erudite and point backwards in time to tympanums over cathedral doors, or go all multicultural and mention Diego Rivera or the WPA or the murals in Chicano neighborhoods, but a more precise analogy might be the New York street artists, Keith Haring, Jean-Michel Basquiat, and the lone survivor, Kenny Scharf. During the golden age of Graffiti Art, they spray painted the streets and subway corridors in the SoHo neighborhood where the chic art galleries were located. Well educated and ambitious, they were the sophisticated counterparts of lower-order street artists, such as Fab Five Freddy, and those who spray-painted New York subway cars with images of Andy Warhol soup cans. To some, their work was art, and these artists were duly and quickly absorbed into the mainstream and appropriated by Mary Boone. To others, graffiti was simply graffiti and, like broken windows in a building, was symptomatic of crime to come. Graffiti was vandalism, pure and simple.

Whether or not one agrees with either position, the situation of the artists who work the streets rather than the galleries is that of someone operating outside the law. Although the streets are supposedly "public" and belong to us all—after all, we paid for them—the public spaces are, in fact, private and patrolled. Property developers and private entrepreneurs own the buildings. The police control the streets. No unauthorized signage is allowed. The great street muralist, Kent Twitchell, has tales to tell of the ruination of his works of art at the hands of property owners. For the artist with a taste for adventure, the streets are a short cut to fame. Anyone can take the safe route, the gallery system, but there, in those white cubes, control, as stringent as that practiced by the police, awaits. The real freedom is not in the art schools or in the studios; it is out in the open, late at night, in the dark, on the fly.

Thierry Guetta began his career as a documentarian of street artists, who keep their identities secret and use street names. He was introduced to the underground world of art makers through his cousin, the artist named "Space Invader," after a video game. "Space Invader" makes small designs from Rubik's cubes and pastes them to the odd corners of Paris. Reminiscent of the environmental artist Charles Simonds in the 1970s, the street artists leave works of art, some large and some small, in odd, hard-to-reach spaces. Simonds, a recognized fine artist, would leave tiny earthen "cities" tucked away, like treasures, for the pedestrian to stumble across. All of these works were, of course, carefully documented with an eye to posterity. The street artists, who worked alone and who knew each other through a network of subterranean communication and silent respect, had no one to record their methods or their art until Thierry came along twenty years ago.

Thanks to the filmmaker, we have hundreds of hours of film, saving the secret practices and the ephemeral art from oblivion. But Thierry, being manic and undirected, was never able to get beyond compulsive acts and actually take all of his material and give it a coherent shape. He got sidetracked, thanks to a casual suggestion by Banksy, and became an "artist," of sorts. As Mr. Brainwash, he began plastering the walls of Los Angeles with a soon-to-be iconic image of himself with sunglasses and a camera. Guetta went beyond Photoshopping a photograph and began "finding" available images, taken from art books and art magazines. The result was a manic, obsessive-compulsive hoarder's dream of an exhibition in 2008, "Life is Beautiful." In the former CBS Studios, MBW presented a cacophony of every known work of art, seized by Guetta and imprinted with his idea of what an "assisted Readymade" might be. If he even knew who Duchamp was, that is. The collectors, who, as their name might suggest, collect, began to acquire his "art," because that is their nature: they are acquisitive. Guetta certainly provided plenty of opportunities for the acquirers to acquire. Remember, this was the last year before the Götterdämmerung, the Twilight of the Gods of Wall Street, and everyone was under the illusion that they had money.

From a seller of used clothes to a documentary filmmaker to an art world phenom, the trajectory of Thierry Guetta seems to be the story told here, with Banksy and Fairey as supporting characters. But if that is all the film is about, the art lover will be in despair and the art skeptic will say, "I told you so." The offended reaction of Banksy and Fairey in the end gives us a clue that the story of Thierry Guetta is about more than the lunacy of the art world and a person one reviewer described as the "village idiot." The credit for this film belongs to Tom Fulford and Chris King, who are listed as editors and who constructed all those incoherent hours of footage into a story of sorts. The movie is less about any particular artist, even Banksy, who is listed as the "director," and more about the century-old question: what is art? Guetta is the nightmare of aestheticians and art critics come true. He is an ultra appropriator, ripping off everything and everyone. How hard is it to be an artist if originality is no longer necessary? All you need to do is expose yourself…like a dirty old man in a raincoat.

For the art critic of the Sixties, the question "what is art?" was a crisis. Arthur Danto faced this Waterloo at the Stable Gallery in 1964. The occasion was an exhibition of Andy Warhol's installation art, all replicas of objects both low and commercial. It was said that Eleanor Ward hid in her office during the opening. As he stared at the replicas of stacked boxes of Kellogg's cereals, Danto pondered the meaning and definition of art. What was to distinguish between the actual cardboard boxes of Kellogg's products, discarded and tossed behind the grocery store, and Warhol's screen-printed wooden boxes? Eventually incorporating obvious answers such as "the artist's intent" and "the maker's ideas," Danto and another aesthetician, George Dickie, proposed the now famous "Institutional Theory of Art." An object, or a candidate for "art," becomes designated as "art" once it has gone through a process of legitimation, moving through one Station of the Art World after the other. To the generation of the Abstract Expressionists, the artist was Christ; for the generation of Andy Warhol, the artist was a self-promoter. Warhol is the hero and role model for all street artists, not because he sold himself, but because he appropriated the look and feel of popular imagery and elevated it to "art" through sheer chutzpah.

By the time of Basquiat, Postmodernism had ended that mystic notion of "origin" and "genius" and admitted that all art had to come from somewhere else. But acts of appropriation, gestures of quotation, performances of borrowing were the activities of very sophisticated, art school educated, theory-permeated artists. They knew what they were doing and why. But that was decades ago. Thirty years after the debut of Jeff Koons, we are confronted with a truly naïve and unschooled artist, Thierry Guetta. Guetta sees without knowing why, takes without understanding how, imitates with the innocent eye of a child. He is a true "primitive," a modern day Henri Rousseau, who knows just enough to be dangerous to others. All he knows is that "Life is Beautiful." He has probably never heard of Roberto Benigni.

To the trained eye, Banksy is an educated artist who has shrewdly found his place in the streets of the big cities of the world, especially London. He learned from Basquiat. A true "outsider artist" does not make art "outside" the art world, in a place such as Des Moines or Birmingham, for example. You must place your art in London or Paris or New York or Berlin; otherwise the art is like a tree in a forest empty of humans. It will fall, making no sound. Like Banksy, Shepard Fairey followed the strategy of maximum visibility. The graduate of the Rhode Island School of Design looks and acts like a nice frat boy and now lives and works in Los Angeles. A clean-cut family man, he became well known for his ubiquitous "Obey" posters of Andre the Giant and famous or infamous for his Barack Obama "Hope" poster. Although we know more about Fairey than Banksy, both artists hide in plain sight. And even better, we can't see Banksy beneath the dark and shadowed hoodie. His visible invisibility makes him even more sought after.

Fairey and Banksy and the other street artists filmed by Guetta are genuine guerillas, striking by night and fleeing the scene. By morning light their work will be "discovered" and by the end of the day scrubbed out of existence, if possible. But like all guerillas, these artists have to be well financed. The documentary clearly demonstrates that even guerilla art is not cheap. There is much more to their art making process than that of Basquiat, who used a can of spray paint, or Haring, who used white chalk on black paper. The new generation of street artists is more like Renaissance mural artists, complete with workshops and assistants. We see preliminary sketches and cartoons, the enlarged Xerox prints, made in pieces. Some of the street art comes from stencils, and we watch Banksy carefully cutting out an elaborate web of cardboard components. Other images are prints on a grand scale, applied with long brushes like huge rolls of wallpaper. All of this costs money. Someone is funding the enterprises of these highly successful artists, and along the way smart art dealers made a smart investment.

But the question still remains: is Thierry Guetta an artist? From the perspective of the Institutional Theory of Art, he is. He has been through an apprenticeship and has earned his place. Guetta is the true result of the Institutional Theory and perhaps the reason why the Theory has been so controversial and debated for forty years. But that does not answer the real question: is he making art? The short answer is No. The long answer is No Way. Thierry Guetta takes art; he does not make art. This statement is not intended to be a critique or a criticism. I am not condemning the man. I am simply describing how he works. Guetta is what Walter Benjamin would call a "cultural producer," although today, in the time of post-Postmodernism, we might call him a "cultural re-producer." But he is so far removed from any precise source that we cannot even dignify his practice as a type of simulacrum. What lies beyond repetition? Beyond replication? Thierry Guetta. Both Banksy and Fairey have come to look askance upon their former companion. By dismissing Guetta as a faux artist, they validate themselves as authentic artists. If this film demonstrates anything, it is that something we sense as "real" art actually exists. Whether or not we can explain art, we recognize it and we know when and what it is not. Like pornography.

That said, there is nothing wrong with what Thierry Guetta is doing, and he has a place in the art world. He grasped the basic psychology of what Banksy and Fairey were doing: they were muscling their way into the world of visual culture through the use of signature styles and trademark imagery. Their tactics were simple: visuality and repetition. Despite the apparently public nature of their work, which could be "owned" by all, their art was the ultimate "unobtainium" for a long time. They would give their art; the authorities would take it away. Part of the thrill was the sheer danger of the act. Guetta filmed street artists running from the law as if they were playing games of parkour. The sheer athleticism of the artists and their audacity made them a breed apart—outlaw gangsters always ready to break and run. The street artists were like cultural Robin Hoods: they robbed the landlords to give to the poor. The art could be seen, but not for long. It could be neither owned nor possessed. The stencils and the posters were placed just out of reach. The inaccessibility of the accessible created desire. That is the lesson that Thierry Guetta, who gave his art in excess, did not comprehend. He tried to create art through the Gift Shop. But it is Desire that creates art.

When the forces of popular culture meet the immovable object that is art history, the latter always loses, and accuracy is often a casualty. Those of us in the art history community, especially the classicists among us, have not forgotten the whitewashing of what was actually a very colorful ancient Rome in Ridley Scott's Gladiator. Leaving aside the over-dramatized and over-romanticized movies, such as Lust for Life, some quite good films have been made about artists. Surviving Picasso was marginal, Pollock was better, and Basquiat was quite good…give or take some dramatic license. I am more comfortable with creating a story arc—the rise and fall of the tragic artist—as phony as it is, than with unnecessary historical inaccuracies. Such was the case this weekend (in L. A.) on Doctor Who: "Vincent and the Doctor."

Touring Paris, Doctor Who and Amy Pond visited the Musée d'Orsay, home of many Impressionist and Post-Impressionist paintings, once rejected and now revered by the French. The guest star of the week was the wonderful Bill Nighy, playing a stuffy art lecturer to the English-speaking art audience. I was hoping he would be given a larger role, but that was not to be. Examining one of van Gogh's paintings, the Doctor and Amy spotted a monster in the window of The Church at Auvers-sur-Oise (1890), one of his last paintings. A monster in the window of a church, a church that Vincent painted as a writhing, distorted, contorted expression of the mystery that is Gothic—great concept, yes? So where did the Doctor and Amy go to find the artist before he painted the church in Auvers? Arles.

The couple found van Gogh (Tony Curran) in Arles at the famous café, a place he painted twice, always at night: Pavement: The Café at Night and the more famous The Night Café. The artist is engaged in an argument over the money he owed the café, a chronic problem for the artist, who lived on an allowance from his brother, Théo van Gogh, a well-known art dealer. The physical resemblance between the actor and the artist was quite well done, but the choice of accent was not the best. Granted, it would have been difficult to replicate van Gogh's accent—he spoke English with a Dutch accent and French with a Dutch accent—but does he have to have a slight Cockney accent, the signifier of the lower class? That said, the actor did a very nice job with van Gogh, who, at this period in his life, was in one of his quiet times. The crazy, frantic bursts of emotion and the self-mutilation would come later in December. If the accent was a bit off, the chronology of the paintings was seriously out of whack.

These paintings of the café were done before the artist had his breakdown and sliced his ear, placing the entry of the Doctor and Amy in the month of September 1888, just before the artist Paul Gauguin arrived in October. When the Doctor and Amy go to "The Yellow House," they find the house full of Vincent's paintings. However, there are many paintings that were done later or earlier or in Saint-Rémy and in Auvers. For example, the painting of Vincent's bedroom (the interior of which was nicely produced for the show) was already hanging on the wall, but it wasn't done until October. The Irises, now at the Getty, and The Starry Night were both done while Vincent was recovering at the asylum. La Berceuse, also in the house, wasn't painted until January of 1889, after he had returned briefly to Arles. No one in Arles threw stones at Vincent until that winter, when he was minus one ear and fresh from his stay in the asylum. And Doctor Who would have it that the artist did not paint sunflowers until he met Amy. Sunflowers are a summer flower, and van Gogh had already begun his many paintings of the bright yellow blossoms.

These mistakes with the art are easily avoidable. There are thousands of books on van Gogh that can be referenced. One can only assume that the set directors, prop masters, and fact checkers decided that the audience needed to see all of the famous paintings in one room for the neglected genius of van Gogh to be fully signified. The plot itself, about an invisible lost monster, which had been abandoned by its own kind on Earth and was flailing about, desperate and blind, was not too bad. The trick was that Vincent could see and hear the monster when no one else could, and he became, by default, a fighter of monsters. The monster met its doom, poked to death by an easel. Very nice. It is always fun on Doctor Who to travel through time and watch the Doctor have adventures with famous people. The conceit of the artist as a visionary whose vision surpassed that of those all around him, including the Doctor, was a clever one. One would like to think that the monster was a manifestation of Vincent's depression and despair. Of course the monster was real and of course it was vanquished; it was only a McGuffin. The real story was the affection the time travelers had for the doomed artist who was in despair that no one appreciated his work.

The ending was quite lovely, because the Doctor and Amy took Vincent forward in time to the Musée d'Orsay and showed him a room full of his paintings. The "room," or the gallery, was not real, of course, as there were works on the wall that belonged to other museums, such as the Irises, dislocated from the Getty here in Los Angeles. But never mind. The fun was to see the artist moved to tears at the sight of his art in a museum. I felt that the writers missed an opportunity for Vincent to see paintings he had not yet done—like The Starry Night and Crows in the Wheatfields—and feel hope for his future as an artist. Bill Nighy turned up again, still in the museum, but it would have been nice if he had been taken along for the ride back in time to Arles. When the Doctor asked him what he thought of van Gogh, Nighy proclaimed Vincent van Gogh a great artist in what was actually a very nice little speech. Van Gogh embraced the startled lecturer, who, of course, would never know that he had met his idol. A sort of "what if God was one of us?" moment. Answer: we wouldn't recognize her.

However, to conclude my concerns as an art historian: the conflation of Arles with Auvers was strange and unnecessary. Nighy had been very precise with the Doctor: the painting of the church was done in early June of 1890, not long before the artist died. Yet the TARDIS went to Arles in September of 1888. But here's an idea. Surely the monster could have been relocated to a painting Vincent had done in Arles, maybe hiding out in The Night Café, a place the artist himself described as horrifying:

I have tried to express the idea that the café is a place where one can ruin one’s self, run mad or commit a crime. So I have tried to express as it were the powers of darkness in a low drink shop…and all this in an atmosphere like a devil’s furnace of pale sulphur….

Calling Richard Curtis, the writer of this episode of Doctor Who: you should come to me the next time you try using an artist in one of your plots. If I were a monster, I would hide out in The Night Café.

P.S.

The jury is still out on the New Doctor. I like Amy Pond (Pond, Amy Pond), but her relationship with the Doctor is yet to be fully resolved. Rose and the Doctor were in love, Martha was in love with the Doctor but he couldn't love her back, the Doctor and Donna were the Odd Couple, friends who made each other better people, but Amy and the Doctor…..? Still in mourning for Rory, she doesn't seem to be as dazzled as his previous partners and treats him with disclaimers. Part of the problem is one of imbalance. Amy, I think, is a fully realized character, completely inhabited by Karen Gillan, but I am still waiting for Matt Smith to find his inner Doctor.

I am Love could just as well be titled "I am Heavy-Breathing Melodrama" or "I am an Italian Soap Opera." Here we have the rich behaving badly and stupidly, throwing away position and trampling on privilege in the headlong pursuit of passion and losing everything in the process. The story is Phaedra (1962) plus Damage (1992), in which the final moral is that old people should leave young people alone. As a Greek myth, Phaedra was played like a tragedy, brought on by the fatal flaws of humans who could not control themselves—a favorite Greek theme. Damage reversed the gender of the older lover but made the young woman into an adventurer with a predatory streak. I am Love also begins with a predatory young male lover who destroys a family that is sunk in complacency and is ripe for destruction.

The setting is Milan, and the family business of the Recchi family is textiles. The family is wealthy and close-knit, impenetrable to outsiders. However, the son and heir, Edoardo (Gabriele Ferzetti), brought an outsider into the family, his Russian wife, renamed "Emma" (Tilda Swinton). Emma has done her Catholic familial duty, giving up her identity, becoming Italian, producing two sons and one daughter, and enduring the passive-aggressive slights of her mother-in-law, an almost unrecognizable Marisa Berenson, transformed by plastic surgery. "Emma" and "Edoardo" are at that married-but-estranged state, common after three decades of familiarity. And there are other signs that all is not well among the males, who are jousting for position as the head of the family is coming to the end of his long reign. At a Christmas dinner, his grandson, Edoardo, Jr., brings home a somewhat unsuitable young woman and the bad news that he has lost a contest to, of all people, a chef—a working-class man. Obviously, the grandson and heir is not worthy of all that should be coming to him.

Enter the snake into this suave Garden of Eden. The “chef,” “Antonio,” courts the son, bringing a cake as a consolation prize to “Edoardo” and spies an even bigger prize, his mother, “Emma.” “Emma” is beautiful, polished and bottled up, and so, like “Lady Chatterley” meeting the “gamekeeper,” she picks up on the young man’s interest. All coiled up and hissing inside a box, the cake is the signifier of temptation, but it takes a while to get to the apple biting. The plot unfolds with glacial slowness, the frozen sexuality of “Emma” telegraphed to us by the thin layer of winter snow. The patriarch of the family dies and leaves the business to his son and immature grandson. The grandson, Edoardo, Jr. needs to prove himself and the chef sees his chance and draws the young man into financing and sponsoring his new restaurant. “Antonio” is talented but needs useless rich people to back him, just as “Emma” needed “Edoardo, Sr.” to take her out of Russia. Both should know better than to upset the social balance: they are dependent upon the rich, but they pursue each other.

Lady Chatterley's Lover has been a favorite story ever since D. H. Lawrence wrote the scandalous novel of class differences and steaming sex scenes, which was banned in America and made into a half-dozen films. While Lady Chatterley is excused by her husband's physical ailments, there is no excuse for "Emma's" behavior. The signifier of corruption is an art book "Emma" steals in an absent-minded moment of passion. "Antonio" finds her in the bookstore and takes her away in his pickup truck to the site of his future restaurant. As an art historian, I was bothered by the theft of an art book, but never mind, for the two lovers were off in the Springtime of their Love to the sunny top of the chef's mountain lair. By this time (Spring, Summer, sometime warm), the colors are more intense, and, if the audience hasn't caught onto "Emma's" sexual awakening, her orange pants provide a vital clue. On this verdant site where "Emma's" son will invest in a restaurant, the doomed couple frolic in the grass, which springs to erect attention in a field full of blooming flowers penetrated by bees who will also like the ripe swollen raspberries jostling nearby. There are lots of ants who crawl around the fuzzy close-ups of writhing bodies and the equally overripe music of John Adams. Adams, we are told, is a contemporary of Philip Glass, a designation that obscures the fact that Adams composes music that "sounds like Philip Glass," to quote a famous phrase. The viewer is dragged through a linear progression of grass waving and flower shaking and violin scrapings, which warn, all the while, that the wages of sin are death. But whose?

The elaborate plot meanders to include a pregnancy for "Edoardo Jr.'s" girlfriend, "Eva," who knows how to tie her man down, while the daughter, "Betta" (Alba Rohrwacher), deviates off the marriage path, cuts her hair, dons masculine attire, and acquires a female lover. Hair cutting becomes a symbol of the freeing of female sexuality. "Antonio" cuts "Emma's" long blond hair, perhaps because he did not like her prim Kim Novak chignon. The family politely ignores such developments—the suspicious short hair, the pregnancy—because they do not upset the order of things, and the lesbian in the family lives in London. However, two events occur which do upset young Edoardo. First, his father sells the business he was supposed to one day inherit, depriving him of his future; and second, "Emma," who was instructed to give the chef, "Antonio," the menu for a family dinner, provides him with the recipe for a beloved childhood soup. The occasion is a celebration of the sale of the business to a global enterprise, so the poor young man is already upset. As soon as the soup is served, Edoardo immediately leaps to the conclusion that his mother has been having an affair with the chef. The clue: Edoardo, who apparently has the eyes of an eagle, spotted a few strands of his mother's cut hair at the site of the future restaurant. Hair+recipe=affair.

The young man storms out of the dinner party—very unprofessional and immature—and retreats to the family swimming pool for a good sulk. Unwisely, his mother follows; they quarrel, and he trips over something or other, hits his head on the concrete, falls into the pool, and dies unnecessarily. The young man is dead because of his mother's bad behavior (but no one knows that…yet). The chef summarily disappears from the stage. The family mourns its dead. The father, Edoardo Sr., attempts to restore family order, implicitly signaling his willingness to overlook her possible transgressions, but "Emma" announces she "loves Antonio." Of course she is immediately ordered out of existence; the family, including her other two children, close ranks against her; and she runs off, surrounded by over-dramatic music. End of movie.

The stunned and bewildered audience has to supply its own resolution. "Antonio," of course, has lost his restaurant and goes off to find another mark. Edoardo Sr. remarries someone one-third his age and tries again to produce a satisfactory son. We can imagine "Emma," if we choose, getting a large divorce settlement from her husband and living happily ever after on her well-deserved alimony. And we warn her to stay away from opportunistic young men…or to please remember that the operative word in "boy toy" is "toy."

Dr. Jeanne S. M. Willette

The Arts Blogger

Michael West: The Artist was a Woman
http://jeannewillette.com/2010/06/18/michael-west-artist-woman/
Fri, 18 Jun 2010 17:00:11 +0000

MICHAEL WEST: PAINTINGS FROM THE FORTIES TO THE EIGHTIES

The Fifties. According to Gore Vidal, the worst decade in the history of the world—unless, of course, you happened to be white, male, heterosexual, and an artist. For the American artist with the appropriate characteristics, it was the best of times. The Second World War left the United States in a position of dominance, militarily, politically, and, thanks to decades of conservatism in Paris, artistically. The art scene and the art market migrated from Paris to New York; and New York, as Serge Guilbaut stated, "stole the idea of modern art." Operating out of the Cedar Bar in Greenwich Village, the new American artist had to shake off the "feminine" qualities of being an artist. Sensitivity and intuition were replaced by a strident masculinity, reflecting the military posturing of the Cold War era. Women who were artists were not welcomed in this male-dominated arena where tough, ugly, alcoholic men like Jackson Pollock and Franz Kline belched and bellowed like bull elephants. Harold Rosenberg wrote of "art as act" and imagined the (male) artist as a modern gladiator bringing himself into being through the act of creation. Females could create only through motherhood. Women were girlfriends, mistresses, wives, groupies, or all three. Some were allowed to have the privilege of being patrons and collectors, like Peggy Guggenheim and Betty Parsons. This is the world of Michael West, one of the best artists of Abstract Expressionism. Present at the beginning of the New York School, she was relegated to the footnotes and left behind by art history, all because she was a "she." To be forgotten was the fate of female artists from the Fifties, the worst of times for women.

Although best known as the reputed girlfriend of Arshile Gorky, whose legend overshadowed her, West was, in fact, one of the stronger women of the New York School. Unlike Lee Krasner, who reacted to Pollock, she never allowed Gorky to have an impact upon her art; unlike Elaine de Kooning, she never made the mistake of marrying a colleague. As a result, her art remained true to her own vision, and she continued to develop and evolve even after her untimely stroke in 1976. She bravely continued to paint until her death in 1991. The way in which she continued to make art, undeterred by the chauvinism and bigotry against women, undismayed by the way in which critics and dealers ignored women artists, and unswayed from her course by her marriage to the combat photographer Francis Lee, resembles the career of Helen Frankenthaler. Frankenthaler married into the New York School when she became the wife of Robert Motherwell; but her art continued to be sponsored by the smitten art critic, Clement Greenberg. Thanks to him, Frankenthaler would be knitted into the critical fabric of modernism. With little support from critics and dealers, like most women, West would be left out of the modernist metanarrative. Finally, in the Twenty-first century, the artists who were the historical actors in the art world are being, slowly but surely, re-placed in the history of art.

It is often overlooked in the circles of art history that art dealers are on the front lines of primary research, and it is to Miriam Smith and Nora Desruisseaux of the Art Resource Group that much credit is due for bringing Michael West to the attention of the art world. Located in Irvine, the Group deals with the secondary market in art, handling estates and bringing to light artists who need to be remembered. A striking full page in the summer issue of Art in America announced their full-scale show of Michael West's work. West was born in 1908, a year after Les Demoiselles d'Avignon changed the course of modern art. Her original name was "Corrine," and it was under this name that she began a career as an actor. Photographs taken of her in the style of Edward Steichen show a beautiful woman, her face glowing in the key light. Later photographs reveal that she never lost that sophisticated beauty and sense of elegant style, which must have beguiled Arshile Gorky, the Armenian immigrant painter. As though the event were the closing act of the theater chapter of her life, there was a brief marriage to an actor, quickly over. An unusually ambitious and determined woman for the period, West simply started all over again.

A talented pianist and gifted poet, she had many possibilities before her, but she chose to become a painter. Few women would have gambled on a career in the arts during the Depression, much less gone to New York. But she was one of the first students of the new European refugee, Hans Hofmann, at the Art Students League in New York. In 1932, West was joined by the artist Lee Krasner, the sculptor Louise Nevelson, and the future gallery owner Betty Parsons, during a period when women were tolerated in an art world devoid of prizes and competition. Undoubtedly Hofmann would have preferred to teach men, but as a newcomer to America, he needed the students. Hofmann was an autocrat, equaled perhaps only by Josef Albers, who was to arrive later. Both were known for bringing European ideas to America and for teaching a combination of Cubism and German Expressionism. Albers was fascinated with color and mixed media, bringing the idea of collage and assemblage to Black Mountain College in North Carolina. Hofmann remained a total painter, combining the structure of Analytic Cubism with the color play and expressive brushwork of Der Blaue Reiter. The impact of the conservative Cubism of the Twenties shows clearly in his work, reflecting his belatedness to the pre-war avant-garde. But his combination of avant-garde styles was part of the prevailing ethos of the art market in Europe, where the collectors wanted the "look of" the radical but nothing actually innovative.

Being of the post-avant-garde generation made Hofmann the ideal candidate to transport European studio talk and German art theory to the New York artists. Clement Greenberg, a fledgling writer, learned the aesthetic discourse at the master's feet and would translate it into his theory of Modernism. Although Hofmann's students started out together, they would show little loyalty to each other. Krasner, once so promising, would give up her career to support Pollock. Betty Parsons would run a gallery that excluded women. Working under Hofmann's strong-willed dogmas, West quickly caught on to the basic lessons of post-war Cubism, which incorporated the multiple viewpoints of Analytic Cubism with the large colored shapes of collage but replicated everything in paint. The women trained by Hofmann would have been well ahead of their male counterparts, none of whom were his direct students. When Krasner introduced her lover to Hofmann, the older and more experienced artist famously warned Pollock to work from nature rather than depend upon his personality. Offended, Pollock insisted arrogantly, "I am nature."

Like Pollock, West rejected Hofmann and left this breeding ground for new American art. Her reasons were different from Pollock's. Hofmann was too domineering, and his patriarchal ways did not sit well with the independent American women. In 1934, she began studying under the American Modernist, Raphael Soyer, who seems to have left little trace on her mature work. What did leave a mark on her life was an introduction to a man who had reinvented himself as a Russian, Arshile Gorky. Because of his posthumous fame, she would be recast as his "muse," although at the time she was his equal as an artist. In 1935, she shifted her locale to start her art career outside of New York. To save money, she lived with her parents in Rochester, where she apparently became a bit of a local art star, showing with the Rochester Art Club and lecturing on the current theories of modern art and about "The New American Art."

This apprenticeship probably served the same purpose as working for the WPA did for other artists—an opportunity to make art and to learn how to be an artist. The sojourn in Rochester would have been an ideal place to develop a career. Here she could get opportunities that would not have come her way in New York, such as a commission to paint fourteen panels for a local production of the ballet Petrouchka, originally developed by the Ballets Russes for Nijinsky, with music by Igor Stravinsky. Although the ballet was twenty-five years old in the Thirties, it was still a very modern take on ballet, and the fact that the city was supportive of avant-garde theater and hired a modern artist to do the backdrop speaks volumes about the sophistication that could be found in the provinces.

Since their meeting in New York, Gorky was smitten and deluged West with love letters and poems, mostly purloined from the Surrealist poet, Paul Éluard. A telegram he sent her in 1936 was probably the most authentic words he wrote to her: "Dear Corrine, Please come to New York for a few days. Let me know when coming, Arshile." There are intimations that the separation, bridged by letters, had weakened the relationship, as she later explained, "We planned to marry but changed our minds at least 6 times." Having learned her trade and craft in the visual arts, she returned to New York in 1938. Whatever the reasons for leaving Rochester, West had come back at a good time. The clock was ticking down on artistic freedom in Europe, and within a year Hitler had overrun the continent. What followed was the greatest intellectual and artistic migration in modern history. Half the greatest minds and talents in Europe arrived in New York, and the rest found themselves in Los Angeles. The Surrealist artists from Paris arrived and became a major presence in New York, sponsored by Peggy Guggenheim and shown at her gallery, Art of This Century. For many artists, these haughty painters, who refused to speak English, brought with them the key to the next step for abstract art: automatic writing, écriture automatique. Michael West seemed to be influenced by the Surrealists less through their actual techniques than through their ideas, which she assimilated and reshaped for her own use while staying true to her Cubist roots.

For this second period in New York, West ceased to be "Corrine" and became "Michael," upon the advice of Gorky. Undoubtedly, his suggestion was based upon the very real prejudice against women, who had a long history of "passing" as men: George Sand and George Eliot, for example. West went beyond signing her work as a man; like Lee Krasner, she used her new name in all aspects of her life. Becoming "Michael" could not obliterate her beauty, and men in the art world probably had a hard time forgetting her gender, but West, like all her generation, was consumed with the art problem of the day. How could Cubism become abstract? Hofmann remained figurative for years until he made the shift to painting squares of strong vibrating colors, alternately roughly and smoothly painted. It should be noted, in comparison to the later works by West, that Hofmann tended to be a flat painter. In his earlier works, he wove a thick and active web of broken brushstrokes, which built up his post-Cubist compositions, featuring favorite cubist still life subjects. Later, he further flattened the picture plane and developed his famous "push-pull" effect, which solved the problem of how to keep abstract painting from going dead. The juxtaposed colors vibrated against one another, cool colors receding and warm colors advancing, activating the surface.

The decisive move away from her Cubist figuration can be traced from West’s A Girl with a Guitar of 1944 to Harlequin of 1946 to Transfiguration of 1948. The jump to abstraction took two years, but it was not a complete transformation until the Sixties. Like de Kooning, West returned to figuration in the 1950s. What is clear is that she understood the basic lesson of Cubism well: the entire surface had to be activated or what would later be called the “all-over” effect. With Cubism, the problem was to equalize the figure and ground, to reduce all areas of the canvas to a pattern of shattered shapes. Without the armature of the object, the question for abstraction became how or perhaps why to fill the canvas. The solution, which we also see in Pollock of the same period, was to cover the surface with dense biomorphic marks, built up into rhythms of painterly movement—a visual horror vacui. Transfiguration of 1948 demonstrates the same denseness and thickness that would characterize her compromise between geometric Cubism and biomorphic Surrealism. But West was still in the process of becoming. The last years of the decade would be critical for the development of American painting as the artists had to take the final step that would free them from dependence upon European Modernism.

Because we have become so familiar with the history of the American avant-garde in New York, it is important to remember that the scene among the artists was not as clear-cut as it would seem with historical hindsight. In his book How New York Stole the Idea of Modern Art, Serge Guilbaut recreated the confusion and uncertainty of the late Forties. By the end of the war, representational art had disappeared from the galleries, replaced by abstract art. But abstraction was the only certainty. There were pressing questions about the relationship between the European tradition of Modernism and the newly emerged American art. American artists needed and wanted a complete break and sought to create an "American" art. Michael West had been at the forefront of the pioneers who moved forward to create abstract art in an American idiom. However, as a definition of Abstract Expressionism, of the American avant-garde, of American painting emerged, it would be specifically constructed to eliminate certain elements and players, including and especially women.

Politics was removed from art. This removal was part of a rejection of previous art, such as Social Realism, and a reaction against wartime fascist propaganda. It was clear to American observers that the French post-war entanglement in politics was harmful to the recovery of their art. In America, there was a conservative reaction against "elitism" and anything that seemed "un-American," such as European-based art. Added to the fact that "modern art" became suspect in many quarters was the chilling fear of the coming Cold War and communism. American insularity and hostility to new ideas were on display in the attacks against the important show of 1946, "Advancing American Art," a show that traveled to Europe, organized by J. LeRoy Davidson and sponsored by the State Department. Attacked as being "Red Art" made by "left wing artists," the "travesty of art" was designed to cause "ill will" towards America, which would be made to look "ridiculous" by "half-baked lazy people" who made that "so-called modern art." An image of Hiroshima by Ben Shahn was singled out for criticism. For any artist who might have qualms about atomic warfare, it would have been wise to forego comment, as America apparently became quickly desensitized and brutalized during the war to dropping "the bomb." Fortune magazine's chilling 1946 account of the atomic bombing of Bikini Atoll shows either ignorance or fear:

….there is no reason why only one bomb should be dropped at one time. Some bombs might be detonated mainly for blast effects, others underwater to contaminate the whole harbor area. Some military men even foresee the release of clouds of radioactivity without bombs to act as an invisible gas.

Not every observer was so sanguine. By the end of the Forties, West had married again, to the combat photographer Francis Lee. It is unclear what impact this marriage to a man who knew war so well had on her opposition to the Cold War, but her horror over what the war had wrought was shared by many artists in New York. This was a generation that had survived the hopelessness of the Depression and the daily fear of defeat by ruthless enemies, only to be faced—after victory, after the peace—with what proved to be a state of permanent war. In an age of total abstraction, when political art or art with any overt content was unwelcome, many artists had to hide their horror at the continual testing of atomic weapons. Written after America had dropped atomic bombs on the Japanese to win the war and after the American government began systematically testing nuclear weapons, one of Michael West's poems related the plight of the artist in such a dark time:

During the Sixties, Adolph Gottlieb did a series of paintings, called Burst, an oblique reference to the threat of imminent annihilation. West had also "blasted" her early work, Harlequin, with a dull silver paint, the color of a bomb casing. The spill of paint obliterated the earlier surface, stunning it into submission. This old work was transformed by her Cold War protest, the silver color acting as a metaphor for the Frankenstein effects of technology. Other works of this period show the cultural dis-ease with the Cold War. West's Nihilism (1949) and Dagger of Light (1951) have titles which predate those of Gottlieb, suggesting a veiled statement, implied but not stated, except in the use of industrial enamel paint splayed across the canvas.

After those splashes of violence, the art of West began to include landscapes and still lifes on white ground. Her 1950s return to figuration would have been regarded as tantamount to treason in the New York art world after the hard-fought battle for abstraction. De Kooning was roundly attacked for his Woman series of 1952. West joined the Dutch artist in being one of the few who dared to challenge the new orthodoxy. The flurry of brushstrokes in Flowers of 1952 and Road to the Sea of 1955 is an entirely new form of mark making for West. The works of the Forties retain a sense of the biomorphic that is, in and of itself, a signature of the era. The straightened marks, applied individually in a slashing movement, prefigured her later mature work and were characteristic of the Fifties. What remained a constant in this return to figuration were the colors of the early abstractions. West was a colorist, a very inventive and subtle one, creating cool in-between tones mixed to unusual hues of thinned-out reds and metallic greens. Green is a very difficult color for artists to work with, but West not only mastered the color but also invented a new version of her own: dense and acid with a sense of transparency, pale and dark at the same time. A Coke bottle green. This green appears in Space Poetry of 1956 and Study of 1962. As West wrote,

The future of art lies in color—but I/ am personally interested in an/ effect of dark and light/ The color explains the space/ The more complicated the space/ the simplier the color/ (this sounds wrong—but it is right for me)

The work of West during the decade when the New York School and Abstract Expressionism became the dominant movement in the international art world demonstrates the aesthetic zeitgeist of the moment, on view at The Stable Gallery in 1953. In an homage to the famous Ninth Street Show of 1951, Eleanor Ward invited the best and the brightest in New York: all the (remaining) artists of Abstract Expressionism, including both de Koonings and Motherwell, some future Pop precursors, Rivers and Rauschenberg, and all the notable women of the scene, Frankenthaler, Bourgeois, and Mitchell. West was in this famous exhibition, which was prefaced with an interesting and telling introduction by Clement Greenberg. Greenberg, seeking to make his mark as an art critic, echoed the macho rhetoric of Rosenberg, writing of the “indispensable” “rivalry” among artists. The ironic juxtaposition of the presence of many women in an important exhibition and the masculine rhetoric of the short essay boded ill for the future careers of artists who were women. By 1952, the new artist, according to Harold Rosenberg, was an “action painter,” modeled on a militaristic fantasy, echoing American triumphalism.

At a certain moment the canvas began to appear to one American painter after another as an arena in which to act—rather than as a space in which to reproduce…

Rosenberg continued,

Art as action rests on the enormous assumption that the artist accepts as real only that which he is in the process of creating.

So by the time of The Stable Gallery show, it was already too late for women. Like politics, they were in the process of being written out of art history. The new artist had to be masculinized and Americanized. Stung by accusations of being “left,” the vanguard art world put forward a group of men who had been too old or too unfit to fight in the Second World War and who had to be turned into cowboys and fighters. Most importantly, the artist had to be depoliticized as well, a feat that was accomplished by elevating “him” to the status of individual, merged with “life” but not with current events. The male artist had to be male in order to symbolize the true subject of modern art: “man.” The independent male individual was alienated—had to be alienated—in order to create transcendent art.

Constructed during an era when men were supposedly suffering from a “crisis in masculinity,” the new American artist became an extreme figure, modeled on Jackson Pollock, a troubled alcoholic. Above all, this male artist must have “freedom.” In contrast, women in post-war society were shaped for domesticity, devoted to their husbands and families, and delighted by housework. Without “freedom,” they were unable to open their own bank accounts. Their individuality disappeared under their husbands’ names. They were not individuals, but were defined in terms of their family roles. As “wives” and “mothers,” they could not be alienated, nor could they ever be independent. This new post-war woman certainly did not even remotely resemble the newly fabricated American artist.

It is necessary to “re-place” Michael West in the history of art because, like all the women of her time, with the possible exception of Frankenthaler, she was written out of the New York School. By the Sixties, she had moved back to abstract art, bringing together all she had learned over the past thirty years. Having experimented with avant-garde abstraction and figuration in the Fifties, she made the choice to stay with her generation and did not attempt to follow figuration into Neo-Dada. She was a woman, and due to her gender, she has been mistakenly located historically as a “Second Generation” Abstract Expressionist artist, but this designation arose because the art of women was assumed to be derivative of the work of men.

In fact, West was part of the First Generation, and her development during the Forties as an abstract artist paralleled and kept pace with that of Pollock. He, of course, was given credit for what de Kooning called the “breakthrough,” or the breakaway from the dominance of European art. Her path to abstraction, unlike that of Pollock, was not through the automatic writing of Surrealism, but through Cubism. Her transition would have been more like that of Mondrian or Malevich, in that she retained the Cubist structure; but she utilized the expressive brushwork of Hofmann and broke free of the strongly outlined Cubist blocks. Unlike Pollock, she never worked on the scale of field painting, but she solved the problem he presented in his Mural of 1943-44: how to paint with kinetic strokes over a large expanse of canvas. Unable to work at an easel, Pollock threw an unprimed canvas onto the floor in 1947 and flung paint onto its surface, arriving at a solution three years after Mural had posed the problem.

West apparently learned that she could work in large brushstrokes with a big paintbrush while keeping her canvases to a scale she could command. She maintained the easel painting tradition, like de Kooning, but, when one measures her canvases, one can see that they were sized to fit her body: the size of the brush her hand could hold, the distance her arm could travel from end to end as she swept across the surface. The canvases were as tall as an average woman’s height, minus a few inches, and as wide as her outstretched arms. The term “kinetic” is often applied to Pollock’s work, referring to his thrown paint, but the term can also be applied to the way in which West must have interacted with her surfaces and materials. Unlike Franz Kline, who painted black against white, creating an intermix of contrasts which flattened his surface, West laid stroke upon stroke, building up and out. In response to the increased use of the entire body in painting, artists of the Fifties often thought of themselves as performers, and many allied themselves with body-oriented activities, such as the partnership between Merce Cunningham and Robert Rauschenberg at Black Mountain College.

The idea of a performance or of a kind of proto-body art did not include women at the time, but an examination of the canvases of Michael West immediately demonstrates the sheer physicality of her painterly style. Her strokes of strong paint drew a map of figure on top of ground, applied with the rhythm of the sway of her body. As can be seen in her paintings of the 1960s, she left behind the packed and built-up surface of the Forties abstractions and became a figure-ground painter, a shift visible as early as 1955 in a simple black Still Life. The use of dripping, small splashes on the canvas, which would become part of her work, began to appear. At times, she would take advantage of the liquidity of the paint and allow it to flow down, but she never allowed the direction of the flow to dictate the orientation of the painting. In Narkisses of 1966, the canvas has clearly been flipped on its head.

West’s paintings were built up with gestures of strong over-painting, often allowing the ground to show through. The strong vertical slashes of the figurative paintings of the Fifties were carried over into the next decade and used on a large scale, as though the brushes and the brushstrokes had been greatly enlarged and blown up to fill a larger stage. Her colors became stronger and deeper: blacks, dark reds (Untitled, 1961), and slate blues (Moments, 1970), with touches of white (Vietnam Summer, 1963) and pale lemon yellow (Gento Niese, 1978), were applied with great and confident freedom. Despite the stroke of 1976, she painted on. Little was allowed to deter West—not the death of Gorky in 1948, not her second divorce in 1960, not an illness, which was defiantly followed by the beautiful Save the Tiger of 1980.

Over and over, from decade to decade, Michael West moved with and was part of the cutting edge of the art world. But just when she hit her stride as an artist, just as she found her own voice, the art scene shifted and abstract art became a historical artifact. Pop Art ascended, followed by Minimal Art, both of which repudiated Abstract Expressionism, and, unfortunately, attention shifted away from abstract painters. We know that she was close to the painter Richard Pousette-Dart, but women received little support in an art world dominated by men, and she did not get exhibition exposure equal to that of her male colleagues. West simply kept evolving, independent as always.

The question is why such an interesting artist, so in tune with her artistic time, was left behind and written out of the history books. The answer, as was indicated, is twofold. First, Michael West was a victim of the passing fancies of an art world increasingly driven by an activated art market. New York began to look like Paris before the First World War, becoming home to a dizzying series of “isms.” But there the comparison stops. Before the Great War, the avant-garde movements built one upon the other, but in New York, true to the new martial Cold War fervor, each “ism” ousted the other. The “rivalry” Greenberg wrote of began to infect the art world.

The older Ab Ex artists sparred with each other, and the group, never a close one, splintered in the fight for recognition and patronage. Even worse, the New York School was superseded, first by the upstart Neo-Dada trend, then by the Pop artists, who were followed by the Minimalists, who were overcome by the Conceptual artists, who eliminated the object. All of the new movements rejected the pompous pretensions of myth and poetry and spirituality that were part of the credo of Abstract Expressionism. Michael West, who was interested in what she called “the new mysticism,” Zen Buddhism, and Henri Bergson’s élan vital, was now in an art world charmed by popular culture and dedicated to literalism. The spontaneous art of personal gesture gave way to artists who hired fabricators and mailed instructions to installers. In this new world, one group was suddenly out and old-fashioned while the new group was in favor. The generation that had fought so hard to break away from the Europeans witnessed the rise of the young artists, who not only mocked them but also obtained, too easily, the financial rewards the older generation had worked so hard for.

Michael West was left behind by history, but so were Mark Rothko and Franz Kline and Robert Motherwell and Barnett Newman. Rothko and Newman were not truly appreciated until the Minimalists arrived in the late Sixties. But even though West produced stunning abstract paintings, such as Mt. Sinai Clinic of 1962, she still would have been ignored, unlike her male counterparts, because of the art world’s gender ideology. The second reason women were left out of art history had to do with old-fashioned gender bias and male prejudices against the female. Harold Bloom, the literary theorist, wrote of the history of literature as a contest, an “agon” between fathers and sons. In A Map of Misreading, Bloom wrote,

A poet, I argue in consequence, is not so much a man speaking to men as a man rebelling against being spoken to by a dead man (the precursor) outrageously more alive than himself.

Artistic rivalry was Oedipal, between men only. The succession of movements in the New York art world, with each generation rejecting the last, was a male enterprise; women were not and could not be part of the canon. The ideological construct of men defeating men precluded any role for artists who were female. It took decades for a new generation of art historians to recognize that what had been written was not “history” but a male-based belief system—a belief that only men could be artists. Many years after her death, Michael West is joining the long line of women painters being restored to a rewritten art history.

Dr. Jeanne S. M. Willette

The Arts Blogger

Bibliography

Ashton, Dore, The New York School. A Cultural Reckoning, 1973

Belgrad, Daniel, The Culture of Spontaneity. Improvisation and the Arts in Postwar America, 1998

Bloom, Harold, A Map of Misreading, 1975

Bloom, Harold, The Anxiety of Influence, 1973

Frascina, Francis, ed., Pollock and After. The Critical Debate, 1985

Guilbaut, Serge, How New York Stole the Idea of Modern Art. Abstract Expressionism, Freedom, and the Cold War, 1983

Lewis, David, “Michael West: More than Gorky’s Muse,” in Michael West. Paintings from the Forties to the Eighties, 2010

McNamara, Chris, “By Any Name,” in Michael West. Painter-Poet, n.d.

Olds, Kirsten, “The New Mysticism in Art,” in The 1950s Paintings of Michael West, n.d.

Pollock, Lindsay, The Girl with the Gallery. Edith Gregor Halpert and the Making of the Modern Art Market, 2006

Rosenberg, Harold, “American Action Painters,” in The Tradition of the New, 1959

Sandler, Irving, The Triumph of American Painting, 1970

Spender, Matthew, ed., Arshile Gorky. Goats on the Roof. A Life in Letters and Documents, 2009

Jacob Samuel, a master printer and the art world’s “best-kept secret,” has a life that many would envy. He gets artists to think “outside the box.” As publisher and printer of “Edition Jacob Samuel,” he does exactly what he wants—publishing prints by some of the most famous artists in the world and producing highly regarded editions of original works, prized by international museums. With few exceptions, he works only with artists whose oeuvre he has admired and known for at least ten years, and, if he finds that a project is not going well, he simply backs away. Samuel, as the printer and publisher of his imprint, Edition Jacob Samuel (EJS), is completely in charge of his enterprise. After remaining discreetly in the background, the printer is featured in the current exhibition at the Armand Hammer Museum, Outside the Box, which displays his entire Edition. For two decades, he has enriched the art world with an old-fashioned medium, etching, working quietly at the service of the artists. The exhibition currently on view features the total output of his publishing career, which has been jointly purchased by the Hammer and the Los Angeles County Museum of Art.

The artists in Los Angeles have always independently produced what the trade knows as “artists’ books,” and the city has always supported artists who wanted to produce prints. Print workshops such as Gemini G.E.L. and the Tamarind Institute are now world-famous. East Coast artists who wanted to make prints, such as Jasper Johns, came to Los Angeles. Printmaking has been part of West Coast artists’ fascination with materials and experimentation with process. These printmaking workshops were founded in the Sixties, when Los Angeles was not on the art map, or at least not on the minds of New York critics. Being on the Left Coast and far from the art game, artists in Los Angeles had the freedom to experiment without having to respond to an art market. Although artists such as the printmaker June Wayne, from Tamarind, are mostly famous in L.A., the book and print artist Ed Ruscha is internationally renowned. Ruscha began his career with his series of laconic books cataloguing the sights of the city, from palm trees to parking lots. His self-published books, which, at one time, you could buy for five dollars, include Every Building on the Sunset Strip and my favorite, Royal Road Test. Nowhere are the unexpected possibilities of printmaking explored more inventively than with Ruscha, who has printed with blood, spinach juice, carrot juice, even chocolate, instead of ink.

Samuel honed his craft through a long-term collaboration with the Los Angeles artist Sam Francis, who died in 1994. In comparison to the exuberant and complicated prints of Francis, the aesthetic of Edition Jacob Samuel is more restrained and reductive. Even though it would seem that Jacob Samuel’s chosen medium of etching, which requires a certain level of exactitude, might constrain the artists’ inventiveness, the prints produced through Edition Jacob Samuel are full of surprises. Ruscha’s work with the printmaker is a case in point. The artist is famous not just for his books and prints but also for his paintings, which often feature signs. “Signs” has two meanings for Ruscha: first, the familiar advertising signs that guide us; and second, the semiotic sense of the sign, that is, signs carry meaning. In one of his better-known paintings, the artist presented the word “hotel” in vivid orange with the letters arranged vertically. The meaning of the arrangement went beyond the word and implied that the “hotel” in question is a cheap one. An expensive hotel always writes out its name in horizontal elegance, while a cheap hotel uses garish neon, economically fixed to the side of the building.

The trademark of Ed Ruscha’s work is the combination of image with text, with the text predominating over the image until the text becomes the image. After decades of such visual-verbal puns and semiotic play, the prints Ruscha produced for the Edition, Blank Signs of 2004, take the play with signs one step further. In this series of prints, the signs are road signs in the desert, a place where one would need directions; but the signs are blank. The artist’s use of masking on the etching plate rendered the shape of the signs and their supports as ghostly shapes outlined against his delicate drawings of the desert terrain. The traveler is lost without any clues. Perhaps it was the desert winds that bleached the words from the surface of the roadside signs, but the wit of masking out the word play is clear to those who know the artist’s signature satirical style.

Ed Ruscha, like another artist featured in the show, John Baldessari, is local to Los Angeles and can make prints in the city. But what makes the work of Jacob Samuel different from that of Gemini and Tamarind is that artists do not have to come to his print studio; he can travel internationally, carrying his portable studio with him. When an artist comes to the printer’s workshop, he or she is not at “home,” so to speak. But Samuel comes to the artist’s studio, where the artist has the full resources of the home studio at her disposal. Through his portable workshop, Samuel provides the printing materials and the artist provides the inspiration; then the portable studio is packed up and the printer goes home. A world-famous artist is a busy person, Samuel states, and he respects the limited time of someone like Dan Graham, also in the show. The printmaker and the artist consult on the final result at long distance. The collaboration between the artist and the printer is that of the leader and the follower, the one who initiates and the one who carries out the instructions. Samuel insists upon being humble before not just the artist but also the materials themselves.

The delicate relationship between artist and printer is on view in the prints of the German artist Rebecca Horn. For those of us in Los Angeles, our introduction to the artist was her influential retrospective at the Museum of Contemporary Art in 1990. Although she had been a leading German conceptual artist since the late 1960s and had taught in San Diego in 1974, like many European artists she did not get her due in America until mid-career. Her installations in Los Angeles were a revelation in artistic intelligence, but not every work could travel, for example, one of her most important early works, the Overflowing Machine of 1970. Now owned by the Tate, the original machine included a nude, dark-haired young man, standing immobilized on a pedestal, surrounded by tubes (one of Horn’s trademark materials) through which red blood coursed. The conduits of blood circulation ran up and down the outside of his body, making the invisible visible.

Her recurring theme of blood reappeared in the series of prints Horn made with Samuel. The two had met on the occasion of her retrospective in Los Angeles, but Horn was not interested in prints. She actively disliked the effect of the reversed image and said as much to Samuel, who immediately offered to solve that problem. The solution was to ask a local supplier of Gampi paper to invent a form of transparent paper. The image could be executed and the print, on surprisingly strong transparent paper, could be flipped over, reappearing in a reverse of a reverse, according to the artist’s original intent. Working in Horn’s large, well-appointed studio in Berlin, the printer set up his portable studio and let the artist have her way. Restricted to blood-stain red and to a paper the color of her creamy skin, the redheaded artist made a series of prints, one featuring blood cells, another with marks made from a log from her studio fireplace dragged over a plate, and still another “painted” with a bouquet of dried roses. Like many of the artists in this exhibition, Horn is a writer, as well known for her poetry as for her art, and the poems interspersed among the images preexisted the prints.

Just as Horn scored her plates with found objects, such as twigs, Marina Abramovic scratched her plates with her fingernails. Discussing her Spirit Cooking with Essential Aphrodisiac Recipes of 1996, Samuel noted that Abramovic “performed” her prints, meaning that the process of execution became a performance for the performance artist. Each artist brought his or her unique art form to the experience of making prints. In 2004, Mona Hatoum used her hair as a drawing tool, with coils and strands carefully preserved on pieces of paper and then slowly slid onto the plate. The Anglo-Indian artist Anish Kapoor commissioned a very special set of colors, deliberately made to reiterate the soft, velvety, dry pigments of his early works. The result was a set of prints with deep and profound colors that resonated and seemed to lift off the paper. Meredith Monk sang to Samuel as she made her prints of musical scores, and close friend Chris Burden shared his many encounters with coyotes in Topanga Canyon, told in a schoolboy’s handwriting for Coyote Stories of 2005. Each series of prints presents a new but familiar facet of the personality of each artist.

Jacob Samuel takes pleasure in providing opportunities to artists. His Santa Monica studio, located in one of the last un-gentrified blocks in the city, is clean and spare, but in the window floats a transparent print by Gabriel Orozco, a Lotus Leaf from 2003. The transparent print ascends above the heavy and gleaming printing press. Although he has an art degree from the California College of Arts and Crafts in the Bay Area, Samuel insists that he “does not think like an artist” but thinks technically. (Collectors of his paintings would disagree.) The son of immigrants from Wales—his grandfather peddled pins—he grew up in Malibu and Venice, when Venice was “Dogtown” and the “Z-Boys” ruled. A long-time surfer, Samuel was interested in the Italian Arte Povera movement of the Sixties. Not unlike the post-war cinema of the Italian filmmakers, who used ambient light and sound and untrained actors, the artists of the Arte Povera movement were fearless in striking out beyond the materials approved by the fine arts at a time when painting ruled.

One of the veterans of the 1967 movement, the Greek artist Jannis Kounellis, stepped out of his comfort zone in 1999 and produced a series of prints for Edition Jacob Samuel that were surprisingly delicate and lyrical. It is this fertile mix of Samuel’s interest in the historic discipline of prints, his reductive aesthetic, fueled by the serial imagery of the Sixties, and his willingness to be open to the possibilities of unexpected and unorthodox materials that gave rise to his imprint. Many of the artists featured are also writers who produce poetry or narratives, which respond to the images, or vice versa. Samuel employs a professional typographer to execute the pages of text, which have their own presence and yet are subordinate to the images. The rows of small, spare prints are elegantly presented in simple, pale frames, hung side by side, and while the series is published under the name of the printer, “Jacob Samuel,” Outside the Box can also be thought of as a group show featuring world-famous artists. Oddly, collectors have not been interested in these print works, and ninety percent of the purchases come from museums, which support the publisher’s efforts. For the art audience interested in the full range of an artist’s work, the exhibition of Edition Jacob Samuel at the Hammer this summer allows the viewer a rare glimpse into the rewards of the collaboration between artist and printmaker.