Blame the Economists, Not Economics

CAMBRIDGE – As the world economy tumbles off the edge of a precipice, critics of the economics profession are raising questions about its complicity in the current crisis. Rightly so: economists have plenty to answer for.

It was economists who legitimized and popularized the view that unfettered finance was a boon to society. They spoke with near unanimity when it came to the “dangers of government over-regulation.” Their technical expertise – or what seemed like it at the time – gave them a privileged position as opinion makers, as well as access to the corridors of power.

Very few among them (notable exceptions including Nouriel Roubini and Robert Shiller) raised alarm bells about the crisis to come. Perhaps worse still, the profession has failed to provide helpful guidance in steering the world economy out of its current mess. On Keynesian fiscal stimulus, economists’ views range from “absolutely essential” to “ineffective and harmful.”

On re-regulating finance, there are plenty of good ideas, but little convergence. From the near-consensus on the virtues of a finance-centric model of the world, the economics profession has moved to a near-total absence of consensus on what ought to be done.

So is economics in need of a major shake-up? Should we burn our existing textbooks and rewrite them from scratch?

Actually, no. Without recourse to the economist’s toolkit, we cannot even begin to make sense of the current crisis.

Why, for example, did China’s decision to accumulate foreign reserves result in a mortgage lender in Ohio taking excessive risks? If your answer does not use elements from behavioral economics, agency theory, information economics, and international economics, among others, it is likely to remain seriously incomplete.

The fault lies not with economics, but with economists. The problem is that economists (and those who listen to them) became over-confident in their preferred models of the moment: markets are efficient, financial innovation transfers risk to those best able to bear it, self-regulation works best, and government intervention is ineffective and harmful.

They forgot that there were many other models that led in radically different directions. Hubris creates blind spots. If anything needs fixing, it is the sociology of the profession. The textbooks – at least those used in advanced courses – are fine.

Non-economists tend to think of economics as a discipline that idolizes markets and a narrow concept of (allocative) efficiency. If the only economics course you take is the typical introductory survey, or if you are a journalist asking an economist for a quick opinion on a policy issue, that is indeed what you will encounter. But take a few more economics courses, or spend some time in advanced seminar rooms, and you will get a different picture.

Labor economists focus not only on how trade unions can distort markets, but also how, under certain conditions, they can enhance productivity. Trade economists study the implications of globalization on inequality within and across countries. Finance theorists have written reams on the consequences of the failure of the “efficient markets” hypothesis. Open-economy macroeconomists examine the instabilities of international finance. Advanced training in economics requires learning about market failures in detail, and about the myriad ways in which governments can help markets work better.

Macroeconomics may be the only applied field within economics in which more training puts greater distance between the specialist and the real world, owing to its reliance on highly unrealistic models that sacrifice relevance to technical rigor. Sadly, in view of today’s needs, macroeconomists have made little progress on policy since John Maynard Keynes explained how economies could get stuck in unemployment due to deficient aggregate demand. Some, like Brad DeLong and Paul Krugman, would say that the field has actually regressed.

Economics is really a toolkit with multiple models – each a different, stylized representation of some aspect of reality. One’s skill as an economist depends on the ability to pick and choose the right model for the situation.

Economics’ richness has not been reflected in public debate because economists have taken far too much license. Instead of presenting menus of options and listing the relevant trade-offs – which is what economics is about – economists have too often conveyed their own social and political preferences. Instead of being analysts, they have been ideologues, favoring one set of social arrangements over others.

Furthermore, economists have been reluctant to share their intellectual doubts with the public, lest they “empower the barbarians.” No economist can be entirely sure that his preferred model is correct. But when he and others advocate it to the exclusion of alternatives, they end up communicating a vastly exaggerated degree of confidence about what course of action is required.

Paradoxically, then, the current disarray within the profession is perhaps a better reflection of the profession’s true value added than its previous misleading consensus. Economics can at best clarify the choices for policy makers; it cannot make those choices for them.

When economists disagree, the world gets exposed to legitimate differences of views on how the economy operates. It is when they agree too much that the public should beware.

Dani Rodrik, Professor of Political Economy at Harvard University’s John F. Kennedy School of Government, is the first recipient of the Social Science Research Council’s Albert O. Hirschman Prize. His latest book is One Economics, Many Recipes: Globalization, Institutions, and Economic Growth.

Not So Huddled Masses: Multiculturalism and Foreign Policy

The modest contemporary literature on the connection between America’s immigration and foreign policies contains this assertion by Nathan Glazer and Daniel P. Moynihan, from the introduction to their 1974 volume Ethnicity: Theory and Experience: “The immigration process is the single most important determinant of American foreign policy . . . This process regulates the ethnic composition of the American electorate. Foreign policy responds to that ethnic composition. It responds to other things as well, but probably first of all to the primary fact of ethnicity.”

Yet, the authors noted a nearly complete absence of discussion of the issue, and they pursued it little themselves. Rather, they tossed it in as a supplement to their general argument: ethnicity was not going to wither away, leaving only colorful residues for annoyance or celebration. It would remain a primary form of social life in the United States.

Nonetheless, ethnicity played little role in the foreign policy battles of the 1960s and 1970s. One could discuss the cultural divide between hippies and hardhats, between war protesters and the “silent majority” without reference to race, creed, or national origin. Some scholars would note a dominant ethnic component in the New Left, as well as in the less visible New Right, but such considerations were hardly part of the national conversation.

They certainly had been in the past, in the battles over American entry into World Wars I and II. Glazer and Moynihan implied they would be in the future as well. For the two were writing in the wake of the historic 1965 Immigration Act, which had overturned the restrictionist regime of the 1920s. That post–World War I legislation, which brought to a halt the Great Wave of immigration that had begun forty years earlier, was designed explicitly to freeze the American ethnic balance. By the 1960s, in the warm glow of the civil rights revolution, this was no longer plausible.

In any case, the backers of the 1965 act did not imagine huge demographic changes: there would be, they claimed, some modest increase in the number of Greek and Italian immigrants but not much else. The sheer inaccuracy of this prediction was already apparent by the early 1970s. The 1965 Act allowed entry of immigrants from any country, so long as they possessed certain job skills, had family members living here, or had been granted refugee status.

The family reunification provision soon became the vital engine of immigrant selection. By the 1980s, it had greatly increased numbers of Asians and of Hispanics—the latter mostly from Mexico. The European population of the country was now in relative decline—from 87 percent in 1970 to 66 percent in 2008. If immigration continues at present rates (and barring a long-term economic collapse, it is likely to), by 2040, Hispanics will make up a quarter of the American population. If that does not guarantee a somewhat different foreign policy, there is also the prospect of a substantial expansion of America’s once minuscule Muslim and Arab populations.

To those attuned to the historic battles over twentieth-century American foreign policy, ethnicity was an obvious subject. It played a major role in the debate over American entry into World War I, which was vigorously opposed by most German-Americans, anti-tsarist Scandinavians, and many Irish-Americans. Leading pro-war politicians railed against “hyphenated-Americans” with a ferocity nearly unimaginable today. And while opposition to America’s entry into the war cut across all regions and groups, the non-interventionist position always maintained a strong core of support in the upper Midwest, where Americans of German descent dominated. In the 1950s, the widely read political analyst Samuel Lubell concluded that isolationism was always more ethnic than geographical, and owed its durability to the exploitation of pro-German and anti-British ethnic prejudices by the Republican Party. Lubell claimed that isolationists, far from being indifferent to Europe’s wars, were in fact oversensitive to them.

This is surely too reductionist an argument. But the volatile ethnic mix at home did inhibit Woodrow Wilson from taking sides in Europe. “We definitely have to be neutral since otherwise our mixed populations would wage war on each other,” Wilson was reported to say in 1914. The “hyphenates,” bullied into silence by 1917, had their day after the Armistice when their opposition helped to lay low Woodrow Wilson’s dreams for the League of Nations. Walter Lippmann interpreted post-war isolation through this ethnic prism: any policy that put America in alliance with some European countries against others risked exacerbating America’s own ethnic divisions. Near the end of his career, Arthur Schlesinger Jr. described the arguments that took place between the outbreak of war in 1939 and Pearl Harbor as “the most savage national debate” of his lifetime, one that “unleashed an inner fury that tore apart families, friends, churches, universities and political parties.”

In any event, America’s intra-European divisions began to melt away quickly after Pearl Harbor, as military service became the defining generational event for American men born between 1914 and 1924. The mixed army squad of WASP, Italian, German, Jew, and Irish became a standard plot device for the popular World War II novel and film. The Cold War generated a further compatibility between ethnicity and foreign policy. East European immigrants and refugees emerged to speak for the silenced populations of a newly Stalinized Eastern Europe. Suddenly, all the major European-American groups were in sync. Italian-Americans mobilized for mass letter-writing campaigns to their parents and grandparents warning of the dangers of voting Communist. Greek-Americans naturally supported the Marshall Plan.

Bipartisanship now meant that both parties had to woo ethnic Americans. (And not always so tactfully: the 1948 GOP platform promised to work for the restoration of Italy’s African colonies.) Eastern Europeans lobbied for the rollback of Soviet rule, enshrining it as a GOP platform plank if not a practical commitment. Americans of East European background remained staunchly anti-Communist long after anti-Communism surrendered its luster in the aftermath of Vietnam, allying with neoconservative Jews and hamstringing Nixon and Kissinger’s détente policy. As anti-Communism became an engine of Americanization, the Cold War showcased the hyphenated American.

Twenty years after the fall of the Berlin Wall, America has entered a new era of ethnicity and foreign policy, whose contours are only just now emerging. During the 1990s, when multiculturalism was in vogue, leaders of old and new minority groups steered American foreign policy toward the cause of their ancestral homelands. African-American and Hispanic leaders touted the success of American Jews in lobbying for Israel as an example to be emulated. At one major Latino conference, participants nominated themselves the vanguard of a “bridge community” between the United States and Latin America.

Ethnic lobbies, the old as much as the new, quickly filled the empty space left behind by the Cold War. Traditional realists like former defense secretary James Schlesinger and Harvard political scientist Samuel Huntington bemoaned the diminished sense of national cohesion and purpose. Ethnic lobbies, they feared, would inhibit the United States from exercising global leadership. Indeed, if one were to examine some of the major policy milestones of the Clinton era—active participation in the Northern Ireland peace process, the military occupation of Haiti, expanded trade embargoes on Cuba and Iran, the revelation of the Swiss banking scandals—it could be argued that ethnic lobbies were, as much as any coherent grand strategy, the era’s prime movers.

After a brief spasm of patriotic and military display following the attacks of 9/11, we have picked up where we left off the day before. Which is to say that the preliminary indications point toward a future that will bear some semblance to the politics of the 1990s and the World War I era, when ethnic constituencies operated as a brake on executive power and military intervention. There is no evidence that the rallying cries put forth by America’s neoconservatives and liberal hawks—democratization of tyrannies, the global war on terror, the fight against radical Islam—have gained significant traction among first- and second-generation immigrant communities. Certainly they do not resonate with anything like the intensity that anti-Communism did after World War II. On the basis of what is visible thus far, today’s and tomorrow’s Mexican-, Asian-, and Arab-Americans will more resemble the Swedes, Germans, and Irish of a century ago than the Poles, Balts, and Cubans of the Cold War era.

The plainest indicator is voting behavior. The obvious point to make is that most of the new immigrant groups tend to vote Democratic—a trend that has intensified since the Republican Party twinned itself with the war in Iraq and, more generally, with the “war on terror,” even as the Democratic Party has reverted to a traditional skepticism regarding foreign entanglements.

Consider first Hispanics, a group that has always leaned Democratic. George W. Bush received 35 percent of the Latino vote in 2000 and 40 percent in 2004. John McCain’s share dropped to 30 percent. The GOP’s harsher tone on immigration surely played a role in this. But it bears noting that in one recent survey of Hispanic voter attitudes, the same percentage cited Iraq as an important issue as cited immigration.

Though Latinos constitute the largest new immigrant group (and Mexican-Americans count as the only national group whose relative size rivals that of German-Americans in the early twentieth century), their foreign affairs activism remains modest. Apart from highly mobilized Cubans, it is not clear Latinos have either the resources or will to influence foreign policy in a singular way. By virtue of history and geography, they are as much the unwilling subjects of American expansion as they are immigrants—a circumstance captured by Jorge Dominguez’s pithy remark that “the boundary migrated, not the Latinos.”

There is little evidence that Mexicans have much loyalty to the Mexican state, which most, with good reason, view as corrupt. In fact, before immigration became a harshly contested issue in the 1990s, a majority of Mexican-Americans tended to think there was too much of it. Moreover, it seems unlikely that this increasingly Democratic constituency will become a pillar of support for globalism of any sort, much less military interventionism. Obviously one can’t draw broad conclusions from a single political figure. But Colorado’s former senator Ken Salazar, named by President Obama to head the Interior Department, gave an address at last summer’s Democratic convention that, for all its rooted-in-the-soil rhetoric, might, with slight shifts of emphasis, have been delivered by an editor of the paleoconservative journal Chronicles.

The Asian-American shift from aggressive red to pacifistic blue has been far more dramatic. This group, once heavily weighted with refugees from Chinese and Vietnamese Communism, voted Republican in 1992 and 1996. But by 2004 the Asian vote began to trend heavily Democratic, an estimated 60 percent for Kerry over Bush, and 63 percent for Obama over McCain. Like most voters, Asians ranked the economy first, but according to one recent survey, the war in Iraq rated second. Seventy percent wanted the U.S. to leave as soon as possible.

Beyond their turn to the Democrats, there is little hard evidence upon which to gauge the future influence of Asian immigrants on U.S. foreign policy. While there are surely Chinese-Americans who yearn for greater freedom in Beijing, they evince little of the refugee-from-Communism zeal displayed by East Europeans or Cubans of the Cold War period. Generally, Chinese-Americans seem proud of China’s progress and emergence as a great power. It follows that, as their political participation grows, it may become a constituency that encourages American accommodation to a powerful China, or at least one that does not weigh in on the side of confronting it.

The Iraq War is likely to be seen as a great clarifier in the partisan identification of the expanding Latino and Asian electorates. In the Gender and Multicultural Leadership Project’s 2006–2007 survey of minority state and local elected officials, only 24 percent of Latino respondents and 19 percent of Asians believed the United States had made the correct decision to invade Iraq.

And what of America’s Arab and Muslim populations, hailing from regions where the United States is presently engaged in two wars? Their numbers tend to be smaller, and hard to tally precisely. But according to Daniel Pipes, the most prominent of those alarmed about the prospect of “Islamism” gaining a foothold in America, there were 3 million Muslims in the United States in 2002. The U.S. census estimates a current U.S. population of 1.25 million Arab-Americans, the majority Christian. (The Arab American Institute estimates that over 3 million U.S. citizens have some Arab ancestry.) But this population, comparatively tiny, grows steadily through immigration. According to statistics compiled by the Arab American Institute, the Arab-American population doubled in size between 1980 and 2000, with about 26,000 new Arab immigrants entering the United States every year.

Once a swing group, which split evenly in the 2000 election between Bush and Gore, Arab-Americans have essentially abandoned the GOP. Republican identification had dropped to 20 percent according to one 2008 estimate. Not surprisingly, the Israel-Palestine conflict is the main area of contention, where Arab-American views part most dramatically from those now predominant in Congress. Past generations of Arab-Americans have assimilated seamlessly enough, usually by declining to call attention to their background. Those who rose into the public eye typically adopted a low profile on anything related to the Middle East. But that is changing. One can see early harbingers: Congressman Michael McMahon, recently elected to represent Staten Island and parts of Brooklyn, promised some of his Arab constituents that they could chaperone him on a tour of the occupied West Bank. Of course, members of Congress who have toured Israel with Israeli guides number in the hundreds at least. But this kind of politicized sightseeing may soon become a competitive enterprise.

Indeed, political competition over the Middle East now riles most elite American colleges, where it did not twenty years ago: almost every campus boasts an active Arab-American student organization, often cooperating with left-liberal Jewish students—and presenting a narrative of the Israel-Palestine issue far more critical than what was recently a commonplace. The parents of these students were immigrants, unsteady in their English, uncertain of their place in America. Their children have no similar restraints.

It may be decades before we talk seriously about a revived and very different kind of “China lobby” or a new “Palestine lobby.” But the demographic landscape has changed already, and the political coloration of the change does not seem in dispute. Those sections of the country—the South, lower Midwest, and the regions touching the Appalachian mountains—that have received the fewest immigrants from the waves of immigration of the past 130 years not only count as the most Republican; they are the regions least likely to send white antiwar politicians to Congress. They provide a disproportionate share of the nation’s soldiers. (If one were to subtract the very poor and very white state of Maine, one would need to go through a list of twenty states ranked in order of per capita Army recruitment to reach a state that John Kerry carried in 2004.) One political conclusion is obvious: current rates of immigration will not only diminish the “white” proportion of the American population; they will also diminish the political weight of those regions with the most hawkish and pro-military political cultures.

These observations about immigration and foreign policy complicate present debates among Republicans and conservatives. Consider first the more influential neoconservatives, whose viewpoints were neatly summarized during the campaign by Rudy Giuliani and GOP nominee John McCain. Both boasted hawkish views on Iraq and other war-on-terror-related issues; both sought out neoconservative foreign policy advisers.

Both men were entirely out of sync with the Republican base on immigration. As mayor, Giuliani liked to tout New York as a “Capital of the World.” During the presidential campaign he was accused by rivals, with some justification, of running New York as a “sanctuary city” for illegal immigrants. Similarly, John McCain’s campaign was nearly derailed by grassroots hostility to his proposal for normalizing the status of illegal immigrants (derided as “amnesty”).

The Giuliani and McCain positions corresponded to the neoconservative perspective on immigration. True, the 9/11 attacks made neoconservatives, and everyone else, more conscious of border security, and nudged some neoconservatives in the direction of restrictionist positions. But at bottom neoconservatism is a movement that originated among urban Jewish intellectuals, often the children or grandchildren of immigrants themselves, and it retains a good deal of that sensibility. Yet the demographic and political base for a neoconservative foreign policy may be found, to an overwhelming extent, in Protestant red state America, the areas least settled by new immigrants.

And what of the immigration restrictionists? They have contradictions of their own to sort out. They include Democratic environmentalists and liberals worried about immigration’s impact on wages. But most of the restrictionist momentum comes from the traditionalist or paleoconservative camp. Paleoconservatives tout their attachment to old communities, to “the permanent things.” They tend to be more opposed to change, more skeptical about the universal appeal, or relevance, of American ideals to the wider world. In some notable cases, they view themselves as the heirs of the Old Right isolationism that opposed American entry into World Wars I and II.

Paleoconservatives compose too small a faction to have much of a say in the Republican Party. (Pat Buchanan, the most prominent paleo, has been effectively banished from GOP policy debates since 1996.) But there remains a broader category of Republicans, including some prominent intellectuals, with considerable paleo tendencies, sentiments shared by a substantial portion of the American public. Consider two men with long and highly influential careers, the late George F. Kennan and Samuel Huntington.

As a State Department official in the 1940s, Kennan was the primary architect of the Cold War containment strategy. But he spent much of his career arguing that the United States had placed too much emphasis on the military aspects of containment. He was frustrated by what he perceived as an uninformed and moralistic streak running through American foreign policy. In particular, he despaired over the power of ethnic lobbies to influence American policy. These lobbies, he once wrote, “seem more often than not to be on the militaristic or chauvinistic side.” He urged the United States to exhibit more humility and less hubris in its approach to the international scene.

As much as Kennan the diplomat had immersed himself in foreign cultures, he was an ardent immigration restrictionist. In Around the Cragged Hill, published when he was nearly ninety, Kennan lamented the cultural changes brought about by poor immigrants. If they came to America in sufficient numbers, they would create “conditions in this country no better than those of the places the immigrants have left . . . turning America into part of the Third World . . . [and] thus depriving the planet of one of the few great regions” able to maintain a “relatively high standard of civilization.” Kennan also touted the virtues of small republics, and proposed that the United States might better manage its own civilizational problems if it divided itself into smaller self-governing segments, some of which would become culturally part of Latin America.

Harvard political scientist Samuel Huntington was the other leading WASP intellectual to take on the immigration question. Writing in Foreign Affairs in 1997, Huntington worried that the end of the Cold War had left the country without a defining mission. He cited John Updike: “Without the cold war, what’s the point of being an American?” During the mid-1990s, he noted grimly, the void of national purpose was being filled by the special pleading of ethnic subgroups, with little trouble finding receptive ears in Congress. In his final book, Who Are We? published in 2004, Huntington probed deeper. What would become of America’s national identity in an age of mass and especially Hispanic immigration?

“National interests,” he wrote, “derive from national identity. We need to know who we are before we can know what our interests are.” Throughout much of the past century, that identity was clear enough. America was a Western democracy, and both terms were significant. But a changing country would assume new identities and frame its vital interests differently. If American identity were to be defined by commitment to the universal principles of liberty and democracy, then “promotion of those principles in other countries” would guide our foreign policy. Yet if the country was a “collection” of various ethnic and cultural identities, it would promote the interests of those entities, via a “multicultural foreign policy.” If we were to become more Hispanic, we would reorient ourselves accordingly toward Latin America. What we do abroad depends on who we are at home.

To Huntington, America, the historical nation that had existed from the Jamestown and Plymouth Rock settlements until well into the last century, was Anglo-Protestant to the core, as Protestant as Israel is Jewish or Pakistan is Muslim. Huntington was referring to an Anglo-Protestantism of culture, not race or religion—but his ideal culture was definitely the product of the early settlers. The English Puritan Revolution was “the single most important formative event in American political history.” Out of the culture of dissenting Protestantism emerged a secular “American Creed” open to all. The Creed placed emphasis on individual conscience, on work over idleness, and on personal responsibility to overcome obstacles to achieve success. It forged a populace ready to engage in moral reform movements at home and abroad. Americans became accustomed to an image of their nation as one with a divine mission.

Previously, the nation’s elite had been able to “stamp” Protestant values on waves of immigrants. But because of Mexico’s geographic proximity and the sheer number of immigrants, the old assimilation methods would no longer suffice. The major political battles of the 1990s over bilingualism and multiculturalism foreshadowed a larger renegotiation concerning whether the new immigrants would subscribe to the American Creed at all. Huntington favored lower immigration rates and hoped for a reinvigoration of America’s Protestant culture and a renewed commitment to assimilation. But he was not optimistic, and other national possibilities presented themselves. One was a bilingual, bicultural America, half Latin-Americanized; another a racially intolerant, highly conflicted country; still another was a multicultural country subscribing loosely to a common American Creed, but without the glue of a common culture to bind it. Huntington considered the American Creed without its cultural underpinning no more durable than Marxism-Leninism eventually proved in Russia and Eastern Europe.

Protective of the uniqueness of America’s Anglo-Protestant culture, Huntington was a nationalist who hoped to maintain the American “difference” from the rest of the world. But he was acutely aware that one of the distinguishing aspects of Anglo-Protestantism was its messianism, the sense of America as a chosen nation—and one not inclined to leave a corrupt world to its own devices. Anglo-Protestantism had transformed the United States, in Walter McDougall’s words, from “promised land to crusader state.”

Thus, while many have laid the blame for the war in Iraq with the Bush administration or the neoconservatives, that may cast the net too narrowly. Andrew Bacevich is one author who describes a much longer fuse to the impulse that led America to miscalculate how receptive the world would be toward a military campaign to end tyranny. In The New American Militarism, Bacevich noted that evangelical Protestantism, having moved from political quietism in the first half of the century to respectful deference to the Cold War establishment during the Billy Graham era, had by the 1980s come to embrace military culture passionately. The American officer corps made a transition from being mostly Episcopalian to heavily evangelical. And evangelicals embraced not just the soldiers and their values, but militarism as a chosen foreign policy. As Bacevich starkly put it: “In the developed world’s most devoutly Christian country, Christian witness against war and the danger of militarism became less effective than in countries thoroughly and probably irreversibly secularized.” Conservative Christians have fostered among the faithful “a predisposition to see U.S. military power as inherently good, perhaps even a necessary adjunct to the accomplishment of Christ’s saving mission.”

Bacevich’s analysis illuminated a principal weakness of Huntington’s prescription. For if solidifying the American nation required a re-invigorated Anglo-Protestant culture, the initiative would have to come to a considerable degree from Anglo-Protestants themselves. Reading Huntington (and Kennan as well), one cannot but sense that what they really seek is a revival of something resembling the American national elite of the 1940s and 1950s, exemplified by the foreign policy “wise men” of the Truman era (of whom Kennan was one). But that particular Protestant elite, whose cousins held the commanding positions of America’s industries and universities, was more or less banished from the national stage in the 1960s. Not only is its return impossible; it barely exists. What has replaced it as the dynamic core of American Protestantism is the evangelical culture Bacevich describes, rooted in the South and West, whose attitudes were epitomized by the Bush-Cheney administration.

If the emergence of an American elite able to cement a strong national identity and coherent national interest is unlikely, what options remain for a country now irreversibly multicultural? Huntington saw the choice as either imperialism or liberal cosmopolitanism, both of which would erode what is unique about America. Imperialism seems an unlikely choice since the Iraq War, an experience few Americans in or out of the military will want to repeat anytime soon.

What seems more likely is the entrenchment and expansion of a worldly, cosmopolitan elite, increasingly multicultural and transnational, that bears little connection to the WASP establishments of the twentieth century, the cold warriors, or even the Bush administration. American foreign policy will necessarily become less ambitious, more a product of horse-trading between ethnic groups. Messianism, in either its Protestant or neoconservative variants, will be part of America’s past, not its future. Americans will not conceive of themselves as orchestrators of a benevolent global hegemony, or as agents of an indispensable nation. Schlesinger, for one, exaggerated the extent of the fall when he averred that a foreign policy based on “careful balancing of ethnic constituencies” was suitable only for secondary powers, like the late Austro-Hungarian Empire. But he exaggerated only slightly.

As I have noted, George F. Kennan, patron saint of both foreign policy realists and many paleoconservatives, spent the long second half of his career urging a greater sense of humility abroad. The rethinking of global commitments, the readiness to modify the go-go economy that seems to require them—these have become a refrain of some of Kennan’s heirs. So here is a second paradox, which parallels the irony that neoconservatives support an immigration policy that undermines their own political base. The realists and America-Firsters will find their foreign policy aspirations at least partially satisfied via the unlikely avenues of immigration and multiculturalism. The paleoconservatives, losers in the immigration wars, will end up winners of an important consolation prize: the foreign policy of what remains of their cherished republic.

Scott McConnell is co-founder and editor-at-large of The American Conservative.

Turkish Delight: A Sour Delicacy

I never planned to live in Istanbul. Like so many things in my life, it just wound up that way. From my window I see massive and glittering cruise ships setting sail on the Golden Horn, sunrise over the Topkapı Palace, glowing like fire, tankers coming down from Odessa, pleasure craft coming up from the Sea of Marmara. I read somewhere that having a beautiful view adds years to your life expectancy.

My relationship with Turkey is not unambivalent, however. Lately, I have been walking down the street and wondering if the warm, kindly people I’ve long known as my neighbors—pudgy Uncle Mehmet, who sells pens at the corner store—are nodding with satisfaction at the scenes on the news of their fellow Turks waving Hamas flags, calling for the eradication of the Zionist Entity, and driving the visiting Israeli basketball team from the court with anti-Semitic curses. They probably are.

My friendships here have come under strain since the recent war in Gaza. Some of my Turkish friends have proved shockingly credulous, willing to absorb every crude slander they hear about Israel on the news and in the street. I’ve fallen out bitterly with Turks whom I viewed, until now, as liberal, Westernized moderates. And in fact, some of my Turkish friends have fallen out with their Turkish friends, having fallen victim to the same error.

That there has been an outpouring of anti-Semitism in Turkey recently should come as a surprise to no one; there has been an outpouring of anti-Semitism around the globe, as there is every time the Israeli-Palestinian conflict goes hot. But it did surprise me to discover it among friends whom I thought particularly unlikely candidates for these sentiments. I have been forced to realize that I didn’t know them—or Turkey—as well as I thought.

Turkey poses particular problems for the foreigner attempting to make sense of it. Istanbul, especially, appears to be quite Western, and in many ways it is. This seduces the observer into thinking it is more intelligible than it is. This, in turn, makes it easy to believe that you know what’s going on and who stands where on the political compass. Quite often, you’re wrong.

In the past two years, at least 16,000 civilians have been killed in Somalia. I have never seen demonstrations on the streets of Istanbul protesting the incursion of Ethiopian troops into Mogadishu, nor have I received e-mails from Turkish friends likening the Ethiopians to the executioners of Auschwitz, nor have I read newspaper columns offering prayers and solidarity to the women and children of Somalia—who are, I note, just as much the Turks’ co-religionists as the Palestinians, and certainly suffering no less. There have been no government-mandated minutes of silence in the classroom for the children of Mogadishu; no signs have been placed on the doors of Turkish shops declaring that Ethiopians are not welcome; the newscasts here have not featured round-the-clock coverage of the conflict in which photographs of Somali children screaming in fear and agony take pride of place. There have been no virulent anti-Ethiopian articles in Turkish newspapers, there has been no anti-Ethiopian graffiti on the walls. There have been no graphic billboards in Istanbul showing a bloody and smoldering Somalian baby’s shoe next to the words “You cannot be the children of Moses.” Indeed, I doubt that the average Turk even knows there is a conflict in Somalia.

At roughly this time last year, Turkish fighter-bombers began flying nightly past my window, terrifying my cats. The Turkish military, under instructions from Prime Minister Recep Tayyip Erdoğan, invaded Northern Iraq. Turkey used jet planes, armored vehicles, and its extraordinarily disproportionate military power to overwhelm Kurdish villages across the border. It was, Erdoğan said, a “limited operation to weaken Kurdish militants.” The television news stations here broadcast the assaults with great pride. Iraqi officials claimed the air raids had damaged hospitals, houses, and bridges; Kurdish sources insisted there had been massive civilian casualties. This, of course, is what they would say, but it is probably true. Air power is a crude instrument; it is hard to imagine so many bombs could have been dropped on inhabited areas without causing harm to civilians. The claims cannot be confirmed: journalists were not permitted to explore the question.

The Turks, generally, do not much worry about the deaths of civilians in foreign lands. Unless Jews are involved.

I have a close friend here, a highly educated, thoughtful journalist who generally possesses an enormous capacity to draw fine moral distinctions. He is someone with whom I’ve spent hours discussing Turkish politics, someone I imagined to be an ardent enthusiast of Enlightenment values, an enemy of radical Islam, and, if not precisely a philo-Semite, certainly not ill-disposed toward Jews. He has lately taken to flooding my e-mail Inbox with Hamas propaganda. His newspaper columns bear headlines such as “You Wonder Why They Hate You? Look at Gaza.”

Appalled, I sent him a copy of the Hamas charter alongside Israeli drafts for a proposed constitution, submitted to the Knesset in 2006. Compare and contrast, I suggested. Do you seriously mean to tell me that you’d rather see Hamas come to town than the Israelis? “I see that Hamas is much less sophisticated in its rhetoric, and says the stupidest things that will put itself into trouble,” he wrote back. “Israel, of course, is very smart and is using a very delicate rhetoric. But I am not falling for that.”

What shook me here was not the sentiment—we have all heard this before, and not just in Turkey—but the source. This was not some hick from the villages or some partisan clown; this was my friend. This was someone I had hitherto regarded as being fundamentally rational. But beneath the cool-headed exterior ran a deep vein of crazy.

I asked another Turkish friend why Turks are not concerned about Somalians in the way they are concerned about Palestinians. He looked puzzled, then said, “Well, the Africans have always been killing each other.”

Behind the histrionics lies a deep insecurity, and its source is etched clearly into the landscape of Istanbul itself. I remember vividly my first impressions of the city. Arriving at night and walking up the alley to my apartment brought to mind the words of Odon de Deuil, who visited the city in 1147 and declared it “extremely dirty, disgusting, and full of filth; there are even such places to which daylight does not penetrate and under the cover of the darkness that reigns murders and other foul deeds can easily be perpetrated.” It still feels so. In a way, this is the appeal.

Fifteen hundred years from now, long after the decline of the American Empire, an ancient New York will feel like Istanbul—the Gotham of the East, the Byzantine set of a Batman movie; haunted, dark, brooding, ruined, a city clearly once at the very center of the world, its beating heart of commerce and trade and power and passion, but no more. It can be powerfully spooky, this omnipresent sense of eclipsed glory, and terribly sad.

Parts of the city, the skyline chiefly, are breathtaking, with turreted walls and towers and mosques; but much of the city is slightly more ugly than it is beautiful, made of hastily slapped-together concrete—the aesthetic of the developing world. Of the old architecture, most is in a state of shambling decay. There is construction everywhere, but rarely is anything fully constructed. Construction is simply a perpetual condition here.

The ruined architecture combined with the teeming vitality of the street life gives the sense that this is a place where millions of men and women are born and live and work and fight and suffer and rut and whelp and decline and then die; a city, not a museum. But evidence that a former greatness has been lost is everywhere; you cannot escape it. If this is the key to understanding the German psyche and German history, it is also the key to understanding Turkey’s.

It is understood by everyone here that Turkey has been reduced to an insecure, tentative half-power on the periphery of Europe, supplicating for acceptance, ever-rejected. If Turkey is unusually vulnerable to populist demagoguery and anti-Semitism—the disease everywhere of the anxious and the resentful—it is for the same reasons that the Weimar Republic, too, was vulnerable.

Not that this thought should bring comfort to anyone. It certainly doesn’t comfort me.

Prime Minister Erdoğan knows this perfectly well. Hence his behavior at Davos. “When it comes to killing, you know very well how to kill,” he sneered to Shimon Peres. He used the general form of the word “you,” which could mean either Peres himself or Jews generally. He quoted with approval the writings of an obscure Jew turned anti-Semite named Gilad Atzmon, who has said that “the Jewish state is the ultimate threat to humanity and our notion of humanism.” Then he stormed off the stage, complaining that he had been paid insufficient respect by the moderator and announcing that “Davos is over for me.” He returned to Istanbul to a rapturous welcome, telling the crowds at the airport, “I only know that I have to protect the honor of Turkey and Turkish people. I am not a chief of a tribe. I am the prime minister of Turkey!”

These remarks were enormously popular, just as they were enormously revealing. Since when does the leader of a self-confident, developed, modern nation need to explain to his people that he is not the chief of a tribe? No one here seems bothered by the contradiction and the irony at the heart of this speech: it is tribes who care passionately about their honor being protected, and this species of mythomaniacal nationalism is tribalism writ large. Erdoğan’s behavior at Davos far more closely resembled that of a tribal chieftain than the prime minister of a mature state dispassionately seeking to advance his nation’s interests. The Ottomans, who understood this distinction perfectly, would have laughed down their noses at his display.

His outburst was deliberate, of course, and cynical. It was intended to ensure that his party, the Justice and Development Party, known by its Turkish acronym AKP, sweeps the upcoming municipal elections. The Turkish economy—until now AKP’s great point of pride, and the reason for its political success—has been tanking. The party, which came to power promising to wipe out corruption, has been mired in corruption scandals. If the AKP fails to do well in these elections, its aura of inevitability and invincibility will be lost. The rivals will smell blood. The knives will come out.

Erdoğan’s eruption at Davos has been described in the press as “erratic,” a “tantrum,” as if it reflected a loss of emotional control. But it was premeditated and scripted. Before traveling to Switzerland, Erdoğan told both his deputies in Parliament and the Turkish media that he intended to use the forum to humiliate Peres. His briefers had armed him with those quotes from Gilad Atzmon, as well as the other sources to which he appealed. If that was an unscripted outburst, where did all those slick banners applauding him as the Conqueror of Davos come from? Go try to get thousands of banners printed in the middle of the night in Istanbul. The props were obviously prepared days in advance; the cheering throngs were readied.

The press described Erdoğan at Davos as “red-faced” and “ardent.” He was not. Watch the video. He is perfectly composed. He addresses the cameras, not Peres. He knows just what he is saying and doing, and just what effect it will have.

It worked exactly as planned. A columnist for Aksam, Hüsnü Mahalli, wrote that “Erdoğan has shown that Turkey is not a banana republic . . . it is the most important country in the region as the heir of the 700-year-old Ottoman Empire!”

It has scarcely been reported here that Erdoğan also held inconclusive talks at Davos with the International Monetary Fund—from which Turkey desperately needs yet another bailout—and that he, together with the leader of Azerbaijan, was kept waiting at one session for the more important panelists to arrive. His vigorous attack on Israel deflected attention from this ignominy, just as it was intended to do.

There are two main schools of thought here about Erdoğan’s political party. According to the AKP party line, there is no reason for the West to be alarmed by the party’s Islamic orientation. There is no tradition of Islamic extremism in Turkey, never has there been, and never could there be: Turkish Islam is historically unique; it is and has always been a form of Islam compatible with democracy and Western values. Pro-AKP commentators in the Turkish press will concede on occasion that, yes, there are a few lunatics in the ranks, but they always point out (not unfairly) that the secularists here are not exactly models of sanity, either. Seriously, they ask, have you heard the crazy things they say? About the Turks being the original Sun People from whom the rest of the human race has descended? About the prime minister and his wife being crypto-Jews and Mossad agents? No, they insist, the overwhelming majority of the AKP simply want the same religious freedoms any Muslim in America would enjoy. As Erdoğan said in June 2005 to CNN: “My daughters can go to American universities with their headscarf. There is religious freedom in your country, and we want to bring the same thing to Turkey.”

This is rubbish, according to the AKP’s opponents. They contend Erdoğan knows what the West wants to hear. Even as he assures foreign observers that he represents the voice of moderation and human rights in Turkey, he quietly and patiently erodes the power of institutions that guarantee those rights—the army, the courts, the secular educational system, even the press, which has increasingly come under the control of the AKP’s allies and financial backers. Once he has finished emasculating the institutions that function as checks on his power, his real plan—to turn Turkey into another Iran—will become obvious. By then, alas, it will be too late, they argue.

To accept at face value the AKP’s argument that it seeks nothing more than “human rights” for Muslims, say its critics, is appallingly naive. Surely by now the West ought to understand the nature of radical Islam, the willingness of those who embrace it to lie in service of its aims. Is there, anywhere in the world, an example of a truly liberal and Western-oriented Muslim democracy? No? Then exactly why do you believe Turkey will be the exception to the rule? Because you imagine it would be a terrific example to the rest of the Muslim world if it were? Great, reply the skeptics. Go make your own country into a terrific example.

Erdoğan’s behavior at Davos has given his critics here a kind of grim satisfaction. But it has not caused me to revise my own opinion of the AKP. My view is that the leadership of the AKP isn’t so much radical as cynical. If appealing to Islam helps them grasp power and keep it, they are more than happy to do so, whatever the consequences. They have discovered how to use religious sentiment to get votes, and thus to get rich, without bringing the hammer of the secularist military down upon themselves. They assume they can now use anti-Semitism in just the same way.

Many of the AKP’s senior figures rose to prominence in the now-banned Refah Party, led by ousted prime minister Necmettin Erbakan. Refah, and the larger Milli Görüş movement associated with it, unquestionably did represent a deeply sinister strain of Islamic radicalism, giving the lie to the claim that there exists no such tradition in Turkey. Erbakan came to power promising to “rescue Turkey from the unbelievers of Europe,” wrest power from “imperialists and Zionists,” and launch a jihad to recapture Jerusalem. One of his first acts, upon taking office, was to fly to Iran and fawn over Khomeini.

In 1997, Erbakan was ousted by the army. Refah was banned. The AKP’s senior figures, including the prime minister and the president, have publicly renounced Erbakan and his ideology. But the AKP’s enemies find it frankly preposterous to imagine that the leaders of the AKP have experienced some kind of road-to-Damascus conversion (so to speak). Necdet, as I will call him, a middle-aged man in the construction business, put it to me this way: “Once an Islamist, always an Islamist. There’s no such thing as moderate Islam. You Americans don’t understand that. That was your biggest mistake, supporting the Taliban against the Soviet Union. You can’t make Muslims into your allies. It isn’t possible.”

I sympathize with this view, but suspect the truth is closer to this: Erdoğan used Erbakan for as long as it was convenient—Refah was the only party that would allow a ruffian from the slums like Erdoğan to get his foot in the door. When Erdoğan realized that he would never attain power through Refah, he ditched it and the rhetoric associated with it. Power, not Islamic hegemony, motivates him. He is afraid of losing it now that his Potemkin economic miracle is on the verge of exposure, and if he needs to return to the gutter to keep it, well, one does what one needs to do.

The danger is that while Erdoğan and his intimates may be cynical, the people to whom they are now appealing are not. They believe what he says. The AKP is conjuring up a genie it may not be able to master.

Erbakan, Erdoğan’s former mentor, now heads up the Saadet party, which picked up a mere two percent of the vote in the last general election, but which has been organizing massive rallies against Israel and has consequently been gaining ground in the polls. Although the crowds at the airport following Erdoğan’s trip to Davos were not massive—there were no more than a few thousand people—they were filmed by every organ of the Turkish press and played over and over on the news. By contrast, the Saadet party’s rallies have attracted genuinely huge crowds: I have heard estimates ranging from 20,000 to 100,000 people. This has scarcely been reported at all. No doubt Erdoğan felt it important to claim the political territory into which Saadet, and other extremist parties, have lately stepped.

I visited the Saadet party’s office, on the Anatolian side of Istanbul in Üsküdar, not long ago. They could not have been friendlier or happier to see me. They were clearly grateful to talk to anyone who would listen.

They offered me cup after cup of tea as they explained how they saw the world. “America wants to change the borders of the Middle East and create a greater Israel,” one of them explained. “The Zionists, like Rockefeller, General Motors and Ford, control the United States. Is it a coincidence that three years after the formation of the UN, Israel was formed? The Israeli flag shows what Israel really wants—a state from the Nile to the Euphrates! That’s why there’s a game going on to divide our people into Kurds, Turks, Alevis.”

“Don’t get me wrong,” another interjected. “We distinguish between people and their governments.”

I was curious about this point. “Why?” I asked. “After all, Americans elected their government.”

“Yes, but we did too!” The room collapsed into mirth.

I asked about Iran. Everyone began talking over each other. “Ahmadinejad only says the crazy things he says about wiping out Israel in response to Israeli behavior. He’s on the defensive. It’s understandable: Look how America manufactured this war on Saddam, and killed one million people! Little babies!” He made a cupping gesture with his hands to show how small the babies were. “The U.S. used to support Saddam! The U.S. created all these lies about weapons of mass destruction so they could invade Iraq. They’re going after each nation, one by one, and Ahmadinejad knows he’s next on the list, that’s why he talks like that. They’ll go after Turkey, as well. They know they can’t fight Turkey directly, so they’ve found internal collaborators to destroy us from the inside. All the parties here are collaborating, except the SP. Look at the history—the Masons, Herzl. The Jews and Zionists established the capitalist system. Their God is the Israeli flag. Everything between the lines is part of the Greater Israeli Project. They believe it was promised to them! Just as I myself believe in God, the Israelis believe they have the right to control the whole world. They’re really smart. Zionists control Lloyd’s of London and the International Air Transport Agency, just like it says in the Kabbalah. They’re smarter than everyone else, except one person: Erbakan. That’s why they had to get rid of him.”

Until recently, people in Turkey who believed these things might have voted for the Saadet party, or simply not voted. Now they will probably vote for the AKP. That’s what Erdoğan was angling for at Davos.

These lonely, voluble conspiracy theorists in Üsküdar seemed disappointed when I insisted that it had been a terrific afternoon, but I really had to be moving on. I had mostly been quiet until then: I didn’t want to inhibit them. But before I left, I made an attempt to set them straight. “Now look,” I said firmly, “everything you’re saying about Jews is wrong. I myself am Jewish. I grew up around American Jews. I have met far more of them than any of you have. I have never once heard any of them express the views you say they hold. I believe none of the things you say all Jews believe.”

They nodded pleasantly. “Oh, yes,” said one. “We knew you were Jewish from the moment you walked in, because you said you were a journalist and the Jews control the media. Would you like some more tea?”

But as I said, just when I think I understand how things here work, I realize that I am still quite some way from getting a handle on them.

During the first week of the war in Gaza, I hailed a cab in my neighborhood in Istanbul. After the usual fruitless search for the seat belt and the usual pointless conversation with the driver about seat belts and the laws of physics—to which he replied, as usual, that he put his trust in God—I too put my trust in God. “Where are you from?” he asked.

“I’m American. I’m from California.”

“Oh really! Which do you like better, California or Turkey?” This is what cab drivers always ask. There is always in their voices an obvious, poignant yearning to hear a foreigner praise Turkey.

I always offer the same reply. “Well, I’m from San Francisco, and actually, Istanbul and San Francisco are very similar! They are both very beautiful. They are both built on seven hills, they are both mostly sunny but sometimes cold and foggy, and in both cities you can see the sea!”

“Really! How interesting! And what do you do in Istanbul? Are you a student?”

I’m forty, so I’m always pleased when they ask that, just as they are always pleased to hear me laud the beauty of Istanbul.

“No, I’m a writer.”

“Oh, how beautiful! What do you write about?”

“Lots of things. Novels, biographies—”

“Do you ever write about politics?”

“Mmmm. Sometimes.”

Suddenly he became deeply serious. “Things are bad in the world these days. Terrible.”

I sank into my seat with a sigh—I could see where this conversation was going. “Yes, there are many problems in the world.”

He turned, looking at me rather than the road. “You’re American. You understand lots of things. Explain something to me.”

I offered a non-committal “Mmmmm.” I was, of course, expecting him to say something anti-American. I didn’t want to have this conversation. I wished he were looking at the road.

“Coca-Cola,” he said. “You know Coca-Cola?” He was now very intense. His eyes fixed me in the rear-view mirror. Right, I thought. We’re about to have a conversation about why Americans, in concert with multinationals and Jews, are killing Muslim babies.

“Yes, I do.”

We stopped at a light. He swiveled around to pin me in his gaze. He looked at me as if nothing could be more important and grave than what he was about to ask. He hesitated. Finally, he came out with it. “I bought it at 16. On Friday it was down to 13. Do you think I should sell, or should I hang on a bit longer?”

Claire Berlinski is a novelist, journalist, and biographer. Her most recent book is There Is No Alternative: Why Margaret Thatcher Matters.

Policy experts often think alike, even when the evidence contradicts them. This is how billions of dollars get spent on government programs that don’t work, argue CIS researchers Jennifer Buckingham, Andrew Norton, Phil Rennie, Jeremy Sammut, and Peter Saunders

One of the most famous statements in Western sociology was made by W. I. Thomas in 1928. ‘If men define situations as real,’ said Thomas, ‘they are real in their consequences.’(1)

What he meant was that people act on the basis of what they believe to be true, not on the basis of what is objectively the case. If, for example, we believe someone is a brilliant artist, then we will act accordingly towards them, and if enough of us act in this way, the person will become ‘brilliant.’ Critics will praise the work, collectors will buy it, galleries will display it, and students will learn to regard it with due reverence. In the end, the question of whether the artist really is original or insightful becomes unimportant, for if he or she is defined as such, that is what they become.

People do not always agree on their ‘definition of the situation.’ You might believe an artist is brilliant while I think he or she is talentless. What matters in these situations is who has the power to impose their definition as the prevailing one. Who writes the catalogues? Who runs the galleries and the art schools? Who mounts the exhibitions? Professional ‘experts’ or officials in positions of authority often hold this power, and what they think they know becomes the accepted orthodoxy.

As in art, so too in social policy. In Australia today, there is a ‘social policy establishment’ that defines what ‘social problems’ are and prescribes the policies needed to resolve them. It includes academics working in universities and research institutes, welfare state professionals, political activists working in the nonprofit sector, social affairs journalists and commentators employed in the media, and bureaucrats employed in federal and state governments to research social problems and advise ministers on the best solutions.

Most of these people believe similar things and think in similar ways. They were educated in the same kinds of degree courses, reading the same books and internalising the same basic theories and perspectives. They interact regularly at seminars and conferences where they reaffirm the core ideas they share. They referee each other’s writings, award each other research contracts, and evaluate each other’s job applications. They often live in the same neighbourhoods, send their children to the same schools, and read the same newspapers and periodicals. Collectively, they ‘know’ what our society is like, and they ‘know’ what needs to be done to improve it.

The core beliefs and assumptions of this group of ‘experts’ are rarely challenged, and when they are, the challenge is generally ignored or waved away as self-evidently absurd and wrongheaded. This is not because these people consciously act in bad faith. They genuinely believe they are open to ideas and that they are self-critical, even impartial. But when everybody around them thinks as they do, and sees the world as they see it, it is difficult for them to take contrary ‘definitions of the situation’ seriously when they occasionally encounter them.

One illustration: members of the social policy establishment ‘know’ that income inequality is a ‘problem.’ They do not have to think about this; it is intuitive, common-sense, shared knowledge. The possibility that greater inequality might be a desirable thing (for example, as a way of strengthening work incentives or rewarding risk) is completely alien to their way of thinking, and probably never occurs to them—it would be like an art critic wondering whether Rembrandt was any good with a paintbrush.

This egalitarian orthodoxy shapes the public policy agenda in all sorts of ways without people even realising it. A few years ago, for example, the highly regarded and scrupulously ‘non-political’ Australian Bureau of Statistics (ABS) published a set of measures to chart Australia’s ‘social progress,’ and it chose greater income equality as one of its key indicators.(2) The only way ‘progress’ could be made on this indicator is by increasing taxes on those who work and/or by increasing the value of welfare payments for those who do not, thereby compressing the gap between high- and low-income households. These are highly contentious policies, and are certainly not the sort of programs a non-partisan government research organisation should be promoting. But the egalitarian presumption is so deeply ingrained in the social policy community that it simply never occurred to the researchers at the ABS that their definition of income equalisation as ‘progress’ was an intensely politicised one.

The social policy establishment occupies a position of considerable potential power and influence, for 70% of the total federal government budget goes on ‘social’ spending (education, health, welfare and family payments, housing, and community services).(3) The assumptions that inform the thinking of social policy experts can have enormous consequences in shaping government programs worth billions of dollars, which impact directly on millions of people’s daily lives. As W. I. Thomas might have put it, when social policy experts define a problem as real, it is real in its fiscal consequences.

It is therefore worth reflecting on the assumptions that drive social policy thinking in Australia, and asking how well they stand up to critical scrutiny. In what follows, we consider just six of the social policy establishment’s many shared myths. All of them drive expensive policies that almost never work, yet are rarely questioned.

Myth 1: All children can benefit from an increase in government spending on institutional child care

The most recent federal budget contains $2.6 billion in government subsidies to offset the cost of child care for families. The core justification offered for an outlay of public funding of this magnitude is that children benefit from child care. The government needs to spend more so that more children can benefit.

Most child care subsidies are for formal, centre-based child care. These subsidies are generally referred to as a ‘social investment.’ The logic here is that formal child care has benefits for children, which flow on to society as a whole as the children grow up to be more productive and better-socialised adults.

Social policy ‘experts’ have little doubt about this, for various reports by government agencies and early childhood researchers down the years have made strong claims about the value of formal child care. However, a close reading of the research literature used to support these claims reveals that the evidence is inconclusive and often contradictory. The prevailing belief is a myth.

The most frequently cited child care research comes from a number of American studies, including the High Scope/Perry Preschool Study, the Abecedarian Project, Project CARE, Head Start, and Early Head Start.(4) Each of these studies involved children from low-income or disadvantaged homes, who were provided with a combination of centre-based care and home visits, and in some cases, health and parenting services.

The problem is not that these studies or their results are questionable, but that their findings have been generalised out of context. For example, the High Scope/Perry Preschool Project is the source of the oft-repeated claim that each dollar spent on child care has a sevenfold future payoff in terms of reduced crime, welfare, early school leaving, and teenage pregnancy. But this project involved a small number of very low‑income, low-IQ children aged three and four, who attended preschool part-time. It is fallacious to apply the results of these programs to the broader population, yet there are ‘experts’ who continue to do so.(5)

Another common mistake is to confuse centre-based child care for infants as young as six weeks old with part-time preschool programs for three- and four-year-olds. These are very different forms of non-parental care, and they tend to have very different effects.

While research on preschool in the year or two before school is largely positive for all children, the evidence on child care for babies and infants must be interpreted much more cautiously. Here, it is important to compare formal child care with parental care, but there are few studies that have actually done this, for it does not fit in with dominant research perspectives in this field. This makes it difficult to draw any firm conclusions.

There was, however, a large American study by the National Institute of Child Health and Development that found children in centre-based care had a greater risk of behavioural problems than those cared for at home, and that this risk increased the longer the child spent in care. A Swedish study, by contrast, found that the earlier children began formal child care, the better their academic and social outcomes when they reached school age. However, the effects in the Swedish study did not persist beyond the primary years.(6)

Australian research on child care is sparse. One group of Australian researchers, including Linda Harrison of Charles Sturt University and Judith Ungerer of Macquarie University, has found mixed results—some positive, some negative—on the relationship between child care and later academic, social, and behavioural outcomes. But Kay Margetts of Melbourne University has found that children who had been in child care for extensive periods (with the exception of preschool) had more trouble adjusting to school on a variety of measures.(7)

What all this adds up to is that the research literature provides no strong evidence that child care is good (or bad) for all children. You would never know this from listening to the public policy experts in this field. They talk and act as if the research is clear and the issue is done and dusted. The truth is that governments are being pushed to commit ever-increasing amounts of taxpayers’ money to funding something that does not deliver the claimed payoffs. Australian child care advocates are convinced of the case for more child care and greater subsidies, but the evidence does not support their claims.

Myth 2: More government spending on education and training can solve the problem of joblessness

Spending on child care is what Americans call a ‘motherhood and apple pie’ issue. No politician is going to come under fire for offering to help families with their child care costs.

Lots of social policy issues are like this. Governments find it is ‘safer’ to spend money than to resist demands for more ‘government help,’ so budgets keep increasing even if no good is coming from the funded programs. When everyone agrees that something is a ‘good thing,’ scepticism is drowned out, and government squanders billions of dollars on feel-good policies that achieve little.

Education and training is a classic example. The Labor government led by Kevin Rudd says it is committed to expanding education and training expenditure. As with the expansion of child care, there are few voices raised in dissent. We all believe in the value of education, and it is difficult to argue against spending more money on training when thousands of unskilled workers are jobless while employers are complaining of skilled labour shortages. But will this extra spending achieve anything?

Although unemployment has fallen to thirty‑year lows, there are still over 1.5 million working‑age people on welfare benefits. Many of them are capable of working, but they are often unskilled and unqualified, and demand for unskilled labour has been falling. Technological change and competition from abroad have driven down levels of unskilled employment, and unskilled men in particular have been dropping out of the labour force in substantial numbers. In 1981, three‑quarters of unskilled men had full‑time jobs; today, fewer than 60% do.(8)

In light of these trends, it seems to make sense to spend more money equipping unemployed people to compete in the new skills economy and educating youngsters so they will not leave school without qualifications. The new Labor government favours such a strategy. So, too, does the Business Council of Australia, which has been arguing for more government training for unskilled jobless people, and the Australian Industry Group, which wants 90% of youngsters to stay in education or training to year 12 (the figure is currently 75%).(9) The welfare lobby has also long supported training rather than Work for the Dole for those on unemployment benefits, and teachers and lecturers are happy to support any policy that will increase demand for their services. Here is a policy that nobody is disposed to question. Yet it rests on a major, unexamined fallacy.

The ‘experts’ point to evidence that on average, qualified people enjoy higher levels of employment and earnings than unqualified workers. They assume these advantages could accrue to anyone. But this assumption does not hold, for what is true for the average case is not necessarily true for the marginal case.

Take schooling. Three quarters of students currently stay to year 12, and most of them benefit from higher earnings and better job prospects as a result. But this doesn’t mean the remaining quarter would enjoy these same outcomes if they also stayed on, for the more we extend schooling, the deeper we delve into the ability pool. Recent research by the Australian Council for Educational Research (ACER) finds that, far from benefiting from more education, low ability students often lose from it. On average, they increase their unemployment risk by three percentage points and reduce their earnings by 5% by staying at school for two additional years. They are better off leaving after year 10 and getting a job or an apprenticeship.(10)
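The average-versus-marginal distinction at the heart of this fallacy can be made concrete with a toy calculation. All the numbers below are invented for illustration; they are not the ACER estimates.

```python
# Invented numbers, purely to illustrate the average-vs-marginal fallacy:
# returns to staying to year 12 differ across the ability distribution.
# Hypothetical earnings premium (%) from two extra years of school, by
# ability decile (lowest decile first).
premium_by_decile = [-5, -5, -2, 2, 6, 9, 12, 14, 16, 18]

# Roughly three-quarters of students already stay to year 12, and they
# come disproportionately from the upper part of the ability pool.
stayers = premium_by_decile[3:]          # deciles that currently stay on
average_for_stayers = sum(stayers) / len(stayers)

# The policy question concerns the marginal students in the bottom deciles,
# who would only stay on if compelled or induced to.
marginal = premium_by_decile[:3]
average_for_marginal = sum(marginal) / len(marginal)

print(f"Average premium for current stayers:  {average_for_stayers:+.1f}%")
print(f"Average premium for marginal leavers: {average_for_marginal:+.1f}%")
```

The point of the sketch is simply that a large positive average return among those who already stay on is compatible with a negative return for the marginal student who does not.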

It is a similar story when training jobless adults. Basic literacy and numeracy training can help those who lack these skills, and courses that refresh the skills of women returning to the labour force after having children are useful in enhancing their job prospects. But vocational skills training aimed at unskilled adults rarely achieves much, and courses for the young unemployed rarely achieve anything at all.(11)

The point that is persistently overlooked in the education and training debate is that some people are simply not cut out for year 12 schoolwork, a university degree, or a technically skilled job. It is true that qualifications often bring rewards, but unless we are willing to dumb down standards, not everyone can get qualified. Our social policy experts are unwilling to grapple with this truth. They prefer to assume that almost everyone has the ability to get qualified, and that the problem is simply lack of government spending on education and training programs.

Anything the government can do to improve the quality of education should be welcomed, but we should resist demands from social policy experts to throw more money at training schemes that won’t work, or to require more pupils to remain at school or undertake TAFE courses when they will gain nothing from the experience.

Myth 3: High tuition fees are pricing students from poor backgrounds out of university

Ever since the introduction of the Higher Education Contribution Scheme (HECS) in 1989 ended free university education, concern has been expressed that access to higher education will become increasingly limited to students from affluent backgrounds. The evidence shows this is yet another social policy myth, but it is a stubbornly resistant one, and people in high places seem to believe it. Just two months before he became prime minister, Kevin Rudd told The Australian that HECS was preventing ‘children from working class families from going to university.’(12)

The current maximum student HECS payment ranges from about $4,000 a year for nursing and teaching courses to $8,500 a year for law, commerce, and medicine (this compares with a flat rate of $2,400 per year for all courses when Labor lost office in 1996). During its time in office, the Coalition substantially increased student charges for Commonwealth-subsidised university places twice (in 1997 and 2005), but the evidence shows this had no negative impact on low-socioeconomic-status (low-SES) enrolments.

National enrolment figures collected by the Commonwealth education department use students’ home postcodes as proxies for their socioeconomic status. These data show that 15% of commencing university students live in the 25% of postcodes with the fewest people holding higher qualifications or working in high-skilled jobs. Low-socioeconomic-status students are therefore ‘underrepresented’ in universities, but two rounds of cost increases have left their level of underrepresentation unchanged. Rounded to the nearest percentage point, every survey since data collection began in 1991 has found the same 15% low-SES share of commencing university enrolments.

Other sources also cast doubt on the theory that HECS deters students from working-class families. Researchers from ACER analysed data from a dozen social surveys, conducted between 1984 and 2001, which asked their respondents questions about their educational achievement and their parents’ occupation. They found that working-class people born between 1960 and 1969, who had free university education available to them from 1974 to 1988, had much lower rates of university qualification than the cohort born between 1970 and 1980, all but the oldest of whom incurred HECS charges.(13) Contrary to the theory that HECS deters low-SES people from pursuing university education, the proportion of them becoming graduates actually increased at the same time as tuition costs rose.(14)

School leavers born before the late 1970s avoided the Coalition’s HECS increases, which raises the possibility that ACER’s results were contingent on the lower pre-1997 HECS fees. But census statistics show this is not the case. Looking at university attendance rates of eighteen- and nineteen-year-olds living at home (so that we can use census household information to reveal their parents’ occupation), we find an increase of two percentage points in the number of children of blue-collar parents going to university between 1996 and 2001. Only a tiny further increase was recorded between 2001 and 2006, but both results are trending up, rather than exhibiting the downward trend predicted by the HECS-deterrent theory.

The census also includes information on household income. Strikingly, the more a working-class family earns, the less likely it is that their sons will go to university, although for daughters, university attendance rates do increase slightly as household income rises. The children of the poorest professional families have higher university enrolment rates than the children of the most affluent working-class families, which suggests that parental occupation has more of an impact on children’s educational outcomes than parental income.(15)

None of these data sources include school results. Given that prior academic achievement is the main basis for university entry, this is a major omission, but as we have already seen, most policy experts are quite happy to ignore individual capabilities and achievements when analysing education outcomes. A fortunate exception is the Longitudinal Survey of Australian Youth (LSAY), which records students’ tertiary entrance scores. A study based on LSAY respondents subject to the 1997 but not the 2005 tuition cost increases finds that once we take account of school examination performance, university entry rates are the same regardless of socioeconomic status. A person’s family background has a big influence on whether they go to university, but it operates indirectly, via school results, and has little or nothing to do with income.(16)

It is interesting to ask why HECS does not deter low-SES students from going to university. Part of the explanation lies in income-contingent student loan schemes. With student debt repayments tied to their income, the government takes the risk of unsuccessful higher education investment. But part of the explanation is also that young working-class Australians are perfectly capable of making intelligent decisions about their own future careers. The HECS-deterrent theory implicitly assumes they are too dimwitted to calculate the financial benefits of a university education, or too prone to irrational ‘debt aversion,’ to grasp the available educational opportunities, but the evidence shows they are not.

There is a delicious irony in the fact that the social policy myth-peddlers fail to see what is obvious to the intelligent young people they claim to be worried about: that if you have the ability to benefit from a degree, the cost of fees will be far outweighed by longer-term financial returns.

Myth 4: Poverty in Australia is getting worse, and higher welfare spending is needed to counter it

Australia’s welfare lobby repeatedly claims that poverty in Australia is too high and is getting worse.(17) In one of the latest examples, an alliance of welfare groups claimed that over 11% of Australian households are living in poverty, and that their numbers are rising despite the sustained economic boom.(18)

The Uniting Church president described this as ‘scandalous.’ A St Vincent de Paul activist said it showed the need for a ‘national vision’ instead of current ‘piecemeal programs.’ The head of the Australian Council of Social Service (ACOSS) came right to the point by demanding ‘more funding for essential services.’(19) As usual, the welfare lobby paraded ‘shock’ poverty statistics to justify calls for bigger government (‘a national vision’ is code for more committees, meetings, and grand plans) and more spending. More than $70 billion is spent annually in Australia on social security and welfare payments alone, but groups like ACOSS say we should be spending even more.

The welfare lobby persistently produces wildly exaggerated and misleading reports about the size of our poverty problem. They think if they can get us to believe that huge numbers of our fellow citizens are suffering, our sense of ‘fairness’ will lead us to support their demands for more government spending. They even called their latest report Australia Fair. But there are at least three reasons why we should refuse to go along with this.

The first is that the welfare lobby’s definitions of ‘poverty’ are entirely arbitrary. This latest report, for example, says anyone is ‘poor’ who has less than half the median income (which is where the 11% figure comes from). This is a definition commonly used by poverty researchers, but no coherent rationale is ever offered for choosing this as the cutoff point. The report gives the game away when it says that you could define the ‘poverty line’ as 60% of median income, in which case 19% of Australians would fall below it and be considered ‘poor.’ Presumably, you also could define it as 40% of median income, in which case there would be very little ‘poverty’ at all. Clearly, the ‘poverty problem’ expands or contracts according to how you choose to define and measure it.(20)
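How sensitive the headline rate is to the chosen cutoff is easy to demonstrate with synthetic data. The incomes below are invented for illustration; the 11% and 19% figures in the report come from actual survey data.

```python
# Illustrative only: a synthetic income distribution, not the survey data
# behind the Australia Fair report. The point is that the headline
# 'poverty' rate moves with the arbitrary cutoff, not that these numbers
# describe any real population.
import statistics

# A skewed toy distribution of weekly household incomes ($)
incomes = [300, 350, 420, 480, 520, 560, 600, 640, 700, 760,
           820, 900, 1000, 1150, 1300, 1500, 1800, 2200, 2800, 4000]

median = statistics.median(incomes)

def poverty_rate(incomes, fraction_of_median):
    """Share of households below fraction_of_median * median income."""
    line = fraction_of_median * statistics.median(incomes)
    return sum(1 for x in incomes if x < line) / len(incomes)

# The same population yields a small, medium, or large 'poverty problem'
# depending on where the line is drawn.
for frac in (0.4, 0.5, 0.6):
    print(f"{frac:.0%} of median (${frac * median:.0f}): "
          f"{poverty_rate(incomes, frac):.0%} 'poor'")
```

Nothing about the households changes between the three lines printed; only the definition does.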

Secondly, the report is not measuring ‘poverty,’ but income inequality. Its half-median income criterion is a measure of income dispersion, not of hardship or deprivation. The report shows that the proportion of the population receiving less than half the median income has grown from 10% to 11% in the last three years. It calls this an increase in ‘poverty,’ but all the statistics really tell us is that incomes have become slightly more spread out over these three years.

Comparing the incomes of people at the bottom with those higher up tells us about the difference between them, but it tells us nothing about whether they are ‘poor’ or ‘rich.’ This slight increase in the income spread has actually coincided with a rapid rise in real incomes at all levels, so everyone has been getting better off. To describe this as a ‘growth of poverty’ (and even as ‘sad and scandalous,’ as the Uniting Church did) is absurd.

The third reason for taking reports like this with a pinch of salt is that they take a static snapshot rather than looking at people’s incomes over time. Household incomes fluctuate, so most people who appear under any arbitrarily-drawn ‘poverty line’ do not stay there long. Research following a panel of Australian households over several years found 12% had less than half the median income in the first year, but only 6% had an income this low for two years running, and just 4% stayed under the line for three years.(21) Sustained ‘poverty,’ as against a temporary income drop, is thus much less common than the welfare lobby would have us believe.

This is a crucial and often overlooked finding, because we know that people adjust to fluctuating incomes through their lifetime by borrowing, saving, and varying their spending. This means that households’ actual living standards (the thing the poverty researchers say they are worried about) vary much less dramatically than their week‑to‑week incomes do.
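The gap between a snapshot measure and a persistence measure can be shown with a small deterministic sketch. The household income paths below are invented; the 12%, 6%, and 4% figures in the text come from the actual panel research.

```python
# Illustrative sketch, not panel data: a few invented household income
# paths over three years, measured against a fixed 'poverty line'.
# Because incomes fluctuate, far fewer households fall below the line in
# every year than in any single-year snapshot.
POVERTY_LINE = 400  # hypothetical weekly income cutoff, $

# (year-1, year-2, year-3) incomes for ten invented households
households = {
    "A": (350, 650, 700),   # temporary dip in year 1
    "B": (390, 380, 900),   # below the line twice, then recovers
    "C": (300, 310, 320),   # persistently below the line
    "D": (800, 350, 820),   # dips in year 2 only
    "E": (600, 620, 640),
    "F": (450, 470, 360),
    "G": (1000, 980, 990),
    "H": (700, 690, 710),
    "I": (520, 540, 560),
    "J": (380, 610, 590),   # dips in year 1 only
}

def share_below(households, years):
    """Share of households below the line in ALL of the given years."""
    hits = sum(all(incomes[y] < POVERTY_LINE for y in years)
               for incomes in households.values())
    return hits / len(households)

snapshot   = share_below(households, [0])        # year-1 snapshot
persistent = share_below(households, [0, 1, 2])  # below in all three years

print(f"Snapshot 'poverty' rate:   {snapshot:.0%}")
print(f"Persistent 'poverty' rate: {persistent:.0%}")
```

In this toy population the snapshot rate is four times the persistent rate, which is the shape of the pattern (though not the magnitudes) found in the panel research cited above.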

The Melbourne Institute reports that people living on low incomes for relatively short periods tend not to consume less food, clothing, transportation, gas, electricity, health insurance, alcohol, meals out, or home maintenance than other people do. Living temporarily on a low income does not necessarily translate into poor living standards. To take account of this, the Melbourne Institute suggests combining income and consumption into a single measure of ‘poverty.’ On this basis, only 3% of the population comes out as ‘poor’ at any one time, and just 1% remains ‘poor’ over two successive years. The study concludes: ‘Existing income-based measures [of poverty] are seriously in error. The results they give are much too high.’(22)

This is not a message the welfare pressure groups seem willing to listen to. They have an interest in perpetuating the poverty myth, for it is the foundation for their campaigns for bigger government and higher taxes.

Myth 5: More government spending on preventive health care will save money in the long run

One of the favourite strategies of social policy experts arguing for increases in government spending is to claim the money will result in savings ‘in the long run.’ We have already seen one example of this in the claim that child care subsidies are really an ‘investment’ in future adults. Claims like this are most common in the area of health.

Over the last thirty years, a ‘public health’ profession has developed around the idea that people make unhealthy lifestyle decisions (smoking, overeating, failing to exercise), and that education can change this. The rising incidence of ‘lifestyle disease’ is predicted to result in unsustainable demands on the Australian health system in coming decades unless something is done to rectify people’s ignorance. What is needed, it is claimed, is an increase in government spending on preventive ‘health promotion.’ The Australian Chronic Disease Prevention Alliance argues that ‘Investing in promoting increased levels of physical activity and healthy eating in Australians would reduce the burden of chronic disease now and in the future.’(23)

Yet even the experts admit that evidence on the effectiveness of ‘lifestyle interventions’ is ‘limited’ and of ‘poor quality.’(24) Indeed, such evidence as we have suggests that ‘prevention’ strategies have done little to change people’s behaviour.

Australian governments have conducted public health campaigns since the 1960s. A report prepared in 2003 for the Commonwealth Department of Health and Ageing found that despite an estimated $810 million ‘investment’ in thirty-five coronary heart disease programs alone, ‘There was little change in the amount of physical exercise taken and the proportion of overweight persons increased.’(25) It is the same story in the United Kingdom, where a series of reports and action plans culminated in a 2004 review which found that ‘levels of physical activity have remained relatively stable over the last decade, [and] obesity levels have been rising.’(26)

Public health experts sometimes claim that health education programs have been successful in that levels of public ignorance have declined. And it is true that most people nowadays know the lifestyle modifications they need to make to protect their health, even though they fail to act accordingly.(27) But this is a curious definition of ‘success.’ We now know that simply telling people what they should do to protect their health does not always mean they will do it, and that many people choose not to modify high-risk—but often pleasurable—behaviours when the risk of future harm remains relatively remote.

Faced with lifestyle resistance from members of the public, public health professionals have begun to shift their strategy. Rather than unhealthy behaviour being a matter of personal responsibility, it is now presented as a ‘social problem’ reflecting government’s failure to act. Problems like the ‘obesity epidemic’ are attributed to a lack of government spending,(28) or to government failure to implement effective public health programs.(29) A Labor Party document released last year, co-authored by Kevin Rudd, captures this shift. It complains that preventive health has not been made sufficiently ‘accessible to ordinary Australians struggling to find the time in their busy lives to look after their own health.’ It goes on, ‘We can’t expect people to take better care of their health if we won’t help provide the health services they need.’(30) So, if you eat too much and fail to exercise, blame the government.

It sounds plausible when experts tell us that it is more efficient and effective to intervene to change the behaviours that cause obesity and chronic illness than to spend money on secondary care geared to curing the consequences of unhealthy lifestyles. But this assumes that preventive interventions really do work.

The experts say they do. They point to international evidence that shows preventive primary care achieves better health outcomes at a lower cost.(31) But this evidence is not as authoritative as is sometimes claimed. It consists of studies, mainly from the United States, that purportedly show a higher ratio of primary care providers to population produces better health outcomes measured by lower mortality. But the authors of these studies admit they contain no evidence that access to and receipt of primary care reduces obesity (that it modifies individual behaviour) or that it lowers the incidence of chronic disease.(32) They also admit that improved health outcomes depend on an ‘appropriate balance’ between primary and secondary care.(33)

Meanwhile, a 2002 cross-country analysis of primary care across thirteen OECD countries found that those (including Australia) that had weaker primary care systems but spent more on secondary care achieved better health outcomes than the stronger primary-care-oriented countries.(34) Of course, prevention is better than cure, but only when it works.

Myth 6: Higher social expenditure creates a more caring society

The unifying theme that underlies all of the myths we have examined is the belief that social problems require additional government spending to put them right. If only the government would spend more on child care, education and training, universities, anti-poverty programs, and preventive health care, these ‘problems’ would disappear. These claims reflect a generally unexamined assumption that more government spending is in itself a ‘good thing,’ and that you can judge how caring, decent, and civilised a country is by looking at the size of the government social expenditure budget.

University textbooks assert the ‘under-funding of social services’ as a fact,(35) and politicians urge us to assess their effectiveness by pointing to all the extra money they’ve spent (the inputs), while rarely talking about the outcomes they’ve achieved—the actual results of the spending. Yet when we look more closely at these outcomes, we generally find little relationship with levels of government spending.

Charles Murray’s 1984 book Losing Ground found that from 1950 to 1980 the American government increased its social spending twentyfold, yet the proportion of people in poverty remained exactly the same, while other social indicators such as crime and unemployment actually got worse. The same is largely true in other countries, too.

Economists Vito Tanzi and Ludger Schuknecht have studied the growth of government in Western nations during the twentieth century, and the benefits this spending produced. Using basic indicators such as literacy rates, life expectancy, poverty, inequality, and crime, they conclude that most public spending since 1960 has produced little or no benefit in terms of improved social outcomes. Countries with smaller governments have performed equally well (or better) on these criteria over the same period. This is clearly demonstrated by figure 1, which shows the lack of relationship between public spending and a country’s score on the UN’s Human Development Index.

Further evidence comes from newly industrialised countries like Singapore, Hong Kong, South Korea, and Chile. These countries have rapidly caught up with the West in terms of social outcomes, but have done so with a much lower level of public spending.

So how can it be that huge increases in public spending so frequently produce such miserable results? There are two likely explanations.

The first is ‘churning.’ A large proportion of government spending is recycled straight back to the people who paid the tax in the first place. In Australia, around half of all health and education spending goes to middle- and upper-class households.(37) This means a lot of public spending is not ‘new,’ but is displacing private spending that would have happened anyway—and which would have been far more effective, because individuals can usually allocate their own money more efficiently than politicians or bureaucrats can.

Yet even the part of government spending that is redistributed from rich to poor hasn’t made much of a difference. Government spending keeps rising, but the ‘problems’ never go away. What this suggests is that many social problems, like poverty and crime, are not caused by lack of money and cannot be rectified by more spending. If they could, we would have fixed them decades ago. Instead, much social spending goes towards alleviating the consequences of problems rather than their causes.

Anti-poverty programs, for example, alleviate the symptoms of poverty (lack of money) without addressing the factors that generate such hardship (such as drug, alcohol, and gambling habits, or the continuing growth of sole parenthood). Similarly, education programs focus on providing more schooling and training while ignoring the fundamental problems of teacher quality, curriculum content, and the like. The one thing governments are good at is raising and spending money, but this is often not what is needed to tackle the problems they are trying to solve.

Here, then, is the biggest myth of all—the meta-myth, if you like—which is embedded in the shared consciousness of the social policy establishment. It is the assumption that government is the appropriate agency for resolving people’s problems, and that we as individuals bear no responsibility for sorting out our own lives. For as long as this myth persists, ‘social problems’ will continue to grow, government budgets will continue to expand, and job opportunities for social policy experts will continue to multiply.

All the authors are researchers at The Centre for Independent Studies.

Endnotes

(1) W. I. Thomas and D. S. Thomas, The Child in America, 2nd ed. (New York: Alfred Knopf, 1929), 572.
(2) Australian Bureau of Statistics (ABS), Measuring Australia’s Progress, 2002, Cat. No. 1370.0 (Canberra: ABS, 2002). For a critique, see Peter Saunders, Whose Progress? A Response to the ABS Report Measuring Australia’s Progress, CIS Issue Analysis 25 (Sydney: The Centre for Independent Studies, 2002).
(3) Peter Saunders, The Government Giveth and the Government Taketh Away (Sydney: The Centre for Independent Studies, 2007), table 1.1.
(4) Lawrence J. Schweinhart, How the High/Scope Perry Preschool Study Grew: A Researcher’s Tale, Phi Delta Kappa Center for Evaluation, Development and Research, Research Bulletin 32 (Bloomington, IN: PDK, 2002); Child Trends, ‘Carolina Abecedarian Program,’ Child Trends (14 March 2007); University of North Carolina FPG Child Development Institute, ‘The Carolina Abecedarian Project,’ http://www.fpg.unc.edu/~abc/; Child Trends, ‘Project CARE,’ Child Trends (16 March 2007), http://www.childtrends.org/Lifecourse/programs/; U.S. Department of Health and Human Services Administration for Children and Families (ACF), ‘Early Head Start National Resource Center—Welcome,’ http://www.ehsnrc.org; ACF, ‘Office of Head Start,’ http://www.acf.hhs.gov/programs/hsb.
(5) Don Edgar, ‘The Phony Debate Forgets About Kids,’ The Age (16 June 2006). Edgar, a former director of the Australian Institute of Family Studies, says in this article that ‘Every hour spent in every form of child care is a learning experience.’
(6) Jay Belsky and others, ‘Are There Long-term Effects of Early Child Care?’ Child Development 78:2 (March/April 2007), 681–707; NICHD Early Childhood Research Network, ‘Does Amount of Time Spent in Child Care Predict Socioemotional Adjustment During the Transition to Kindergarten?’ Child Development 74:4 (July/August 2003), 976–1005; Bengt-Erik Andersson, ‘Effects of Public Day Care: A Longitudinal Study,’ Child Development 60:4 (August 1989), 857–866; Bengt-Erik Andersson, ‘Effects of Day Care on Cognitive and Socioemotional Competence of Thirteen-year-old Swedish Schoolchildren,’ Child Development 63:1 (February 1992), 20–36.
(7) John M. Love and others, ‘Child Care Quality Matters: How Conclusions May Vary with Context,’ Child Development 74:4 (July/August 2003), 1021–1033; Kay Margetts, Children Bring More to School than Their Backpacks: Starting School Down Under, European Early Childhood Research Journal Transitions Monograph 1 (2003), 5–14.
(8) Robert Gregory, ‘Australian Labour Markets, Economic Policy and My Late Life Crisis,’ in Joe Isaac and Russell D. Lansbury (eds), Labour Market Deregulation: Rewriting the Rules (Sydney: Federation Press, 2005), table 1.
(9) See Peter Saunders, What Are Low Ability Workers To Do When Unskilled Jobs Disappear? Part 1: Why More Education and Training Isn’t the Answer, CIS Issue Analysis 91 (Sydney: The Centre for Independent Studies, 2007).
(10) Alfred Dockery, Assessing the Value of Additional Years of Schooling for the Non-academically Inclined, Australian Council for Educational Research (ACER) LSAY Research Report 38 (June 2005). See also Gary Marks, ‘Issues in the School-to-work Transition,’ Journal of Sociology 41 (2005), 363–85; and Ralph Lattimore, Men Not At Work, Productivity Commission Staff Working Paper (Canberra: Productivity Commission, 2007).
(11) ‘Evaluations of public training programmes in OECD countries suggest a very mixed track record … the most consistently positive results were recorded for adult women. The findings were less optimistic with regard to adult men: some programmes gave positive results, others not. The most dismal picture emerged with respect to out-of-school youths: almost no training programme worked for them.’ John Martin, ‘What Works Among Active Labour Market Policies,’ OECD Economic Studies 30 (2000/01), 93.
(12) Matthew Franklin, ‘Rudd Attacks States Over Pokies,’ The Australian (11 September 2007).
(13) Gary N. Marks and Julie McMillan, ‘Australia: Changes in Socioeconomic Inequalities in University Participation,’ in Yossi Shavit and others (eds), Stratification in Higher Education: A Comparative Study (Stanford: Stanford University Press, 2007).
(14) This access result is not inconsistent with the education department’s data showing that the proportion of commencing university students who are from low-SES backgrounds has not risen. The education department’s data measures low-SES students as a percentage of all university students, while ACER’s data measures low-SES graduates as a percentage of all low-SES-background people.
(15) ABS census data, available from the authors.
(16) Buly A. Cardak and Chris Ryan, Why are High Ability Individuals from Poor Backgrounds Under-represented at University? La Trobe University School of Business Discussion Paper A06.04 (Bundoora: La Trobe University School of Business, 2006).
(17) For examples, see Senate Community Affairs References Committee, A Hand Up Not a Hand Out (Canberra: Commonwealth of Australia, March 2004). Peter Saunders evaluates the claims in Lies, Damned Lies and the Senate Poverty Inquiry Report, CIS Issue Analysis 46 (Sydney: The Centre for Independent Studies, 2004).
(18) Australia Fair, Australia Fair: Update on Those Missing Out (Sydney: Australian Council of Social Service, 2007), http://www.australiafair.org.au/upload/site/pdf/publications/3517__Australia%20fair%20numbers%20and%20stories.pdf.
(19) ‘Australian Poverty Scandalous, Says Church,’ News.com.au (23 October 2007), http://www.acl.org.au/national/browse.stw?article_id=18029.
(20) See Peter Saunders and Kayoko Tsumori, Poverty in Australia: Beyond the Rhetoric (Sydney: The Centre for Independent Studies, 2002).
(21) Melbourne Institute of Applied Economic and Social Research, HILDA Survey Annual Report 2004 (Melbourne: Melbourne Institute of Applied Economic and Social Research, 2005); Bruce Headey, ‘A Framework for Assessing Poverty, Disadvantage and Low Capabilities in Australia,’ paper presented to the HILDA conference, Melbourne (29–30 September 2005).
(22) Bruce Headey, ‘HILDA’s Household Financial Accounts: Their Value for Developing Improved Assessments of Economic Well-being and Poverty,’ paper presented to the 2005 HILDA Survey Research Conference, Melbourne (19–20 July 2007), 25.
(23) Australian Chronic Disease Prevention Alliance (ACDPA), Chronic Illness: Australia’s Health Challenge—The Economic Case for Physical Activity and Nutrition in the Prevention of Chronic Disease (Melbourne: ACDPA, 2004), 6.
(24) As above, 9, 14. In 2005, Monash University’s Centre for Health Economics reviewed the best international studies to assess the link between preventive programs, behavioural change, and health outcomes.
It concluded that ‘There are critical gaps in the evidence relating to lifestyle interventions across all these areas … In general, evidence from which to assess community-wide interventions is incomplete and what is available is of poor quality … Least satisfactory is the evidence concerning physical activity and multiple risk factor interventions, particularly in relation to retention of behaviour change.’ Leonie Segal, Duncan Mortimer, and Kim Dalziel, Risk Factor Study—How to Reduce the Burden of Harm from Poor Nutrition, Tobacco Smoking, Physical Inactivity and Alcohol Misuse: Cost–utility Analysis of 29 Interventions (Melbourne: Centre for Health Economics, 2005), 7-8.(25) Applied Economics, Returns on Investment in Public Health: An Epidemiological and Economic Analysis prepared for the Department of Health and Ageing (Canberra: Department of Health and Ageing, 2003), 3.(26) Derek Wanless, Securing Good Health for the Whole Population: Final Report (London: HM Treasury, 2004), 77.(27) Peter Baume, ‘It’s All About Health’, On-line Opinion (5 October 2007), http://www.onlineopinion.com.au/view.asp?article=6441.(28) John Menadue, ‘Obstacles to Health Reform,’ Centre for Policy Development (25 July 2007), http://cpd.org.au/article/obstacles-to-health-reform.(29) On this reckoning, the ‘obesity epidemic’ is the result of ‘a catastrophic failure of government and public health authorities to devise and implement concerted, effective evidence-based action.’ Stephen J. 
Corbett, ‘A Ministry for the Public’s Health: An Imperative for Disease Prevention in the 21st Century?’ Medical Journal of Australia 183: 5 (5 September 2005), 254.(30) Kevin Rudd and Nicola Roxon, New Directions for Australia’s Health: Delivering GP Super Clinics to Local Communities (Australian Labor Party, 2007), 8.(31) Jennifer Doggett, A New Approach to Primary Care for Australia, Centre for Policy Development Occasional Paper 1 (Sydney: Centre for Policy Development, 2007).(32) On the ‘ecological fallacy’—the fact that a ‘direct relationship cannot be found between exposure to primary care and better health’—or the absence of ‘empirical evidence that appropriate receipt of primary care is associated with better health outcomes,’ which the authors of these studies admit severely qualifies their findings, see Leiyu Shi and others, ‘Income Inequality, Primary Care, and Health Indicators,’ Journal of Family Practice 48:4 (April 1999), 280–281; and Leiyu Shi and others, 2003 ‘The Relationship Between Primary Care, Income Inequality, and Mortality in US States, 1980–2005,’ The Journal of the American Board of Family Practice 16 (2003), 419.(33) Leiyu Shi and others, ‘Income Inequality, Primary Care, and Health Indicators,’ 283.(34) B. Starfield and Leiyu Shi, ‘Policy Relevant Determinants of Health: An International Perspective,’ Health Policy 60:3 (June 2002), 208–213.(35) Michael Belgrave, Christine Cheyne, and Mike O’Brien, Social Policy in Aotearoa New Zealand: A Critical Introduction (Auckland: Oxford University Press, 1997), 232.(36) Vito Tanzi, ‘The Economic Role of the State in the 21st Century,’ Cato Journal 25:3 (Fall 2005).(37) Peter Saunders, The Government Giveth and the Government Taketh Away.


The Economics of Collapsing Markets
Frank Ackerman1 [Tufts University, USA]
Copyright: Frank Ackerman, 2008
Big banks are failing, bailouts measured in hundreds of billions of dollars are not nearly enough, jobs are vanishing, mortgages and retirement savings are turning to dust. Didn’t economic theory promise us that markets would behave better than this? Even the most ardent defenders of private enterprise are embarrassed by recent events: in the words of arch-conservative columnist William Kristol,

There’s nothing conservative about letting free markets degenerate into something close to Karl Marx’s vision of an atomizing, irresponsible and self-devouring capitalism.2
So what does the current wreckage of the global financial system tell us about the theoretical virtues of the market economy?
Competitive markets are traditionally said to offer a framework in which, in the memorable words of the movie Wall Street, “greed is good.” Adam Smith’s parable of the invisible hand, the founding metaphor of modern economics, explains why the attempt by butchers, bakers and the like to increase their own individual incomes should turn out to promote the common good. The same notion, restated in rigorous and esoteric mathematics, is enshrined in general equilibrium theory, one of the crowning accomplishments of twentieth-century economics. Under a long list of often unrealistic assumptions, free markets have been proved to allow an ideal outcome – meaning that the market outcome is “Pareto optimal,” i.e. there is no way to improve someone’s lot without making someone else worse off.
Although academic research in economics has moved beyond this simple picture in several respects, the newer and subtler approaches have not yet had much influence on non-academic life. Textbooks and mainstream policy analyses – the leading forms through which the economics profession influences the real world – still routinely invoke the imagery of the invisible hand and the notion that economic theory has demonstrated that market outcomes are optimal. Critics (myself included) have written volumes about what’s wrong with this picture.3 Broadly speaking, there are four fundamental flaws in the theory that private greed reliably creates social good. The financial crisis highlights the fourth and least familiar item in the list, involving access to information. But it will be helpful to begin with a brief review of the other flaws.

Four fundamental flaws
First, the theoretical defense of market outcomes rests on Pareto optimality, an absurdly narrow definition of social goals. A proposal to raise taxes on the richest five percent and lower taxes on everyone else is not “optimal” by this standard, since it makes only 95 percent of the population, not everyone, better off. Important public policies typically help some people at the expense of others: pollution controls are good for those who value clean air and water, but bad for the profits of major polluters. The invisible hand won’t achieve such non-consensual results; public goods require public choices.
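The narrowness of the Pareto standard is easy to make concrete. The sketch below is my own illustration, not from the essay, and the numbers are invented; it checks whether a change in the distribution of well-being counts as a Pareto improvement, and shows why the tax reform in the example fails the test even though 95 percent of people gain.

```python
# Toy illustration of Pareto improvement: a change qualifies only if
# it makes no one worse off and at least one person better off.

def is_pareto_improvement(before, after):
    """Compare two allocations of well-being, person by person."""
    assert len(before) == len(after)
    no_one_worse = all(b <= a for b, a in zip(before, after))
    someone_better = any(a > b for b, a in zip(before, after))
    return no_one_worse and someone_better

# Five people; the fifth stands in for "the richest five percent."
status_quo = [10, 10, 10, 10, 100]
# Tax reform: the bottom four each gain 2, the top one loses 8.
reform = [12, 12, 12, 12, 92]

print(is_pareto_improvement(status_quo, reform))  # False: one person loses
# A windfall to one person with no losers does pass the test:
print(is_pareto_improvement(status_quo, [11, 10, 10, 10, 100]))  # True
```

By this criterion, any policy with even a single loser is ruled out, which is exactly why the essay calls the standard too narrow for public choices.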
Second, market competition only leads to the right outcomes if everything that matters is a marketable commodity with a meaningful price. Marxists and others have objected to the treatment of labor as a mere commodity; environmentalists have likewise objected to the view of nature as something to buy and sell. This is not a new idea: in the words of the eighteenth-century philosopher Immanuel Kant, some things have a price, or relative worth; other things have a dignity, or intrinsic worth. Respect for the dignity of labor and of nature leads into a realm of rights and absolute standards, not prices and markets. It doesn’t matter how much someone would be willing to pay for the opportunity to engage in slavery, child labor, or the extinction of species; those options are not for sale. Which issues call for absolute standards, and which can safely be left to the market? This foundational question precedes and defines the legitimate scope of market competition; it cannot be answered from within the apparatus of economics as usual.
Third, the theory of competitive markets and the proof of their optimality rest on the assumption that no enterprise is large enough to wield noticeable power in the marketplace. Adam Smith’s butchers and bakers operated in a relentlessly competitive environment, as do the small producers and consumers of modern general equilibrium theory. In reality, businesses big enough to wield significant power over prices, wages, and production processes can be found throughout the economic landscape.
Big businesses thrive, in part, thanks to economies of scale in technology and work organization: bigger boilers and furnaces are physically more efficient than small ones; assembly lines can make labor more productive than individual craft work; computers are often more productive when they run the same software used by everyone else. Economies of scale are also important in establishing and advertising well-known brands: since no one ever has complete information about the market, as discussed below, there is a value to knowing exactly what to expect when you walk into a McDonald’s or a Starbucks.
Bigness can also be based on unethical, even illegal manipulation of markets to create monopoly or near-monopoly positions. Manipulation constantly reappears because the “rules of the game” create such a powerful incentive to break the rules. The story of the invisible hand, and its formalization in the theory of perfectly competitive markets, offers businesses only the life of the Red Queen in Through the Looking-Glass, running faster and faster to stay in the same place. Firms must constantly compete with each other to create better and cheaper products; as soon as they succeed and start to make greater profits, their competitors catch up with them, driving profits back down to the low level that is just enough to keep them all in business. An ambitious, profit-maximizing individual could easily conclude that there is more money to be made by cheating. In the absence of religious or other extra-economic commitments to play by the rules, the strongest incentive created by market competition is the search for an escape from competition, legitimately or otherwise.
Opportunities to cheat are entwined with the fourth flaw in the theory of perfect competition: all participants in the market are assumed to have complete information about products and prices. Adam Smith’s consumers were well-informed through personal experience about what the baker and the butcher were selling; their successors in conventional economic theory are likewise assumed to know the full range of what is for sale on the market, and how much they would benefit from buying each item. In the realm of finance, mortgage crises and speculative bubbles would be impossible if every investor knew the exact worth of every available investment – as, stereotypically, small-town bankers were once thought to know the credit-worthiness of households and businesses in their communities.

So many choices, so little time
The assumption of complete information fails on at least two levels, both relevant to the current crisis: a general issue of the sheer complexity of the market; and a more specific problem involving judgment of rare but costly risks. In general terms, a modern market economy is far too complex for any individual to understand and evaluate everything that is for sale. This limitation has inspired a number of alternative approaches to economics, ranging from Herbert Simon’s early theories of bounded rationality through the more recent work on limited and asymmetric information by Joseph Stiglitz and others. Since no one ever has complete information about what’s available on the market, there is no guarantee that unregulated private markets will reach the ideal outcome. Regulations that improve the flow of information can lead to an overall improvement, protecting the unwary and the uninformed.
When people buy things about which they are poorly informed, markets can work quite perversely. If people trust someone else’s judgment more than their own – as, for instance, many do when first buying a computer – then decisions by a small number of early adopters can create a cascade of followers, picking a winner based on very little information. Windows may not have been the best possible microcomputer operating system, but a small early lead in adoption snowballed into its dominant position today. Investment fads, market bubbles, and fashions of all sorts display the same follow-the-leader dynamics (but without the staying power of Windows).
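The follow-the-leader dynamic described here can be made concrete with a stylized herding simulation, in the spirit of information-cascade models from the economics literature. Every parameter below (100 agents, 60 percent signal accuracy, a crowd-lead threshold of two) is an illustrative assumption of mine, not a figure from the essay.

```python
import random

def cascade(n_agents=100, true_quality=1, signal_accuracy=0.6, seed=0):
    """Sequential choice between products 0 and 1.

    Each agent receives a noisy private signal about which product is
    better, but defers to the crowd once earlier choices clearly favor
    one option (a lead of two or more)."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal points to the truth with probability signal_accuracy.
        signal = true_quality if rng.random() < signal_accuracy else 1 - true_quality
        lead = sum(1 if c == 1 else -1 for c in choices)
        if lead >= 2:        # crowd clearly favors product 1: ignore own signal
            choices.append(1)
        elif lead <= -2:     # crowd clearly favors product 0
            choices.append(0)
        else:                # no clear leader: trust your own signal
            choices.append(signal)
    return choices

# Across many simulated markets, a sizable minority of cascades settle
# on the *wrong* product, despite every signal being 60% accurate.
wrong = sum(cascade(seed=s)[-1] != 1 for s in range(1000)) / 1000
print(f"share of runs ending on the inferior option: {wrong:.2f}")
```

Even though every private signal is individually informative, two early mistaken choices lock the whole population onto the inferior option: the market “picks a winner” on very little information, just as the essay describes.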
When people have to make excessively complex decisions, there is no guarantee that they will choose wisely, or pick the option that is in their own best interest. Yet in areas such as health care and retirement savings, individuals are forced to make economic decisions that depend on detailed technical knowledge. The major decisions are infrequent and the cost of error is often high, so that learning by experience is not much help.
The same overwhelming complexity of available choices exists throughout financial markets. The menu of investment options is constantly shifting and expanding; financial innovation, i.e. creating and selling new varieties of securities, is an inexpensive process, requiring little more than a clever idea, a computer programmer, and a lawyer. Such innovation allows banks and other financial institutions to escape from old, regulated markets into new, ill-defined, and unregulated territory, potentially boosting their profits. Even at its best, the pursuit of financial novelty and the accompanying confusion undermines the traditional assumption that buyers always make well-informed choices. At its worst, the process of financial innovation provides ample opportunity to cheat, knowingly selling new types of securities for more than they are worth.

Information about the reliability of many potential investments is ostensibly provided by bond rating agencies. One of the minor scandals of the current financial crisis is the fact that the rating agencies are private firms working for the companies they are rating. Naturally, you are more likely to be rehired if you present your clients in the best possible light; indeed, it might not hurt your future prospects to occasionally bend the truth a bit in their favor. The Enron scandal similarly involved accounting firms that wanted to continue working for Enron – and reported that nothing was wrong with the company’s books, at a time when the top executives were engaged in massive fraud.

Preparing for the worst
There is also a more specific information problem involved in the financial crisis, concerning the likelihood of rare, catastrophic events. People care quite a bit about, and spend money preparing for, worst-case outcomes. The free-market fundamentalism and push for deregulation over the last thirty years, however, have rolled back many older systems of protection against catastrophe, increasing profits in good years but leaving industries and people exposed to enormous risks in bad years. These risks occur infrequently or irregularly enough that it is difficult, perhaps even literally impossible, to discover their true probabilities. Nonetheless, responding correctly to rare, expensive losses is crucial to many areas of public policy.
In the U.S., the risk that your house will have a fire next year is about 0.4 percent. In effect, the average housing unit has a fire every 250 years; the most likely number of fires you will experience in your lifetime is clearly zero. Does this inspire you to cancel your fire insurance? You could, after all, spend the premium on luxuries that you have always wanted – an excellent plan for raising your standard of living, in every year that you don’t have a fire. Life insurance, frequently bought by parents of young children, addresses a similarly unlikely event: the overall U.S. death rate is less than 0.1 percent per year in your twenties, 0.2 percent in your thirties, and does not reach 1 percent per year until you turn 61. The continued existence of fire insurance and life insurance thus provides evidence that people care about catastrophic risks with probabilities in the tenths of a percent per year. In private life, people routinely spend money on insurance against such events, despite odds of greater than 99 percent that it will prove unnecessary.
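These figures compound in a way that is worth making explicit. Taking the 0.4 percent annual fire risk at face value and assuming, purely for illustration, a 60-year adult horizon (my assumption, not the essay's), the most likely number of fires is indeed zero, yet the lifetime chance of at least one fire is far from negligible:

```python
# Arithmetic behind the fire-risk figures: a 0.4% annual chance of a
# house fire, compounded over an assumed 60-year adult lifetime.

p_fire = 0.004            # annual probability, from the essay
years = 60                # illustrative horizon (my assumption)

p_no_fire = (1 - p_fire) ** years
print(f"chance of no fire in {years} years: {p_no_fire:.1%}")
print(f"chance of at least one fire:      {1 - p_no_fire:.1%}")
print(f"expected number of fires:         {p_fire * years:.2f}")
```

A roughly one-in-five lifetime chance of a catastrophic loss is exactly why buying insurance remains rational in the many years when it proves unnecessary.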
For catastrophic risks to individuals, demographic data are readily available, making the frequency of worst-case outcomes predictable (which is why insurance companies are willing to cover individual losses). For the most serious crises in agriculture, industry, or finance, there is no such database; the public events of greatest concern are very rare, and are dependent on complex social forces, making it virtually impossible to predict their timing or frequency.
There is, however, a strong desire to protect against potential crises, frequently through the accumulation of reserves; it is striking how often the same word is used in different contexts. Storing reserves of grain to protect against crop failure and famine is an ancient practice, already known in Biblical times and continuing into the twentieth century in many countries. Electricity regulation, as it existed throughout the United States until the 1980s (and still does in some states), required the regulated utilities to maintain reserve capacity to generate more electricity than is normally needed, often 12 to 20 percent above peak demand. And financial regulation requires banks and other lending institutions to hold reserves, either in cash or in something similarly safe, equal to a fixed fraction of their outstanding loans.
All of these forms of reserves look expensive in good years, but prevent or limit losses in bad years. How often will those bad years crop up? In non-crisis times, the potential price volatility and risks of losses in the housing and stock markets can appear to be pleasantly and misleadingly low. By many standards, the crash of 2008 is the worst that U.S. and world markets have seen since 1933, some 75 years earlier. No one has much first-hand knowledge of such crashes.
How could society maintain awareness and preparedness for catastrophic risks that exist in the historical record, but not in this generation’s experience? As Henry Paulson, Jr., the Treasury Secretary during the last years of the Bush administration, said after several months of floundering, unsuccessful responses to the financial meltdown of 2008,
“We are going through a financial crisis more severe and unpredictable than any in our lifetimes… There is no playbook for responding to turmoil we have never faced.”4
There used to be a playbook, dating from the days when we (or our grandparents) did face similar turmoil. A system of financial regulations, enacted in the aftermath of the 1930s Depression, drew on the lessons of that painful episode and provided some protection against another crash. Yet the experience of some decades of relative stability, in an era of anti-regulatory, laissez faire ideology, has led to loss of collective memory and allowed the rollback of many of the post-depression regulations.

Rolling back the reserves
The free-market fundamentalism of the Reagan-Thatcher-Bush era sought to deregulate markets wherever possible. This included efforts (frequently successful) to eliminate the reserves that protected many industries and countries against bad times, in order to boost profits in non-crisis years. Starting in the 1980s, structural adjustment programs, imposed on developing countries by the IMF and the World Bank as conditions for loans, called for elimination of crop marketing boards and grain reserves, and for abandonment of the pursuit of self-sufficiency in food. It was better, according to the “Washington consensus” that dominated the development discourse of the day, for most countries to specialize in higher-value cash crops or other exports, and import food from lower-cost producers. Again, this is a great success in normal times, when nothing goes wrong in international markets for grain and other crops; in years of crop failures or unusually high grain prices, the “inefficient” old system of grain reserves and self-sufficiency looks much better.
At about the same time, the notion became widespread in U.S. policy circles that electricity regulation was antiquated and inefficient. Under the old system, utilities received a local monopoly in exchange for accepting the obligation to provide service to everyone who wanted electricity, at reasonable, regulated rates, while maintaining a mandated margin of reserve capacity. Deregulation, introduced on a state-by-state basis in the 1980s and 1990s, dismantled much of this regulatory framework in order to allow competition in the sale of electricity. The pursuit of profit, in theory, would lead to ample capacity to generate electricity, while competition would keep the prices as low as possible. Yet none of the competitors retained the obligation to maintain those expensive, inefficient reserves of capacity.

California enjoyed 40 years of rapid growth without major blackouts or electricity crises under the old regulatory system. In the five years after deregulation, the demand for electricity grew much more rapidly than the supply, eliminating the state’s reserve capacity. The combination of an unusually hot summer, a booming economy, and intentional manipulation of the complex new electricity markets by Enron and other trading firms then led to the California electricity crisis of 2000-01, with extensive blackouts and peak-hour prices spiking up to hundreds of times the previous levels.
Parallel trends occurred in the world of finance. Before the 1980s, residential mortgages typically were issued by savings and loan associations (S&Ls). These community-based institutions were strictly regulated, with limits on the types of loans they could make and the interest rates they could offer to depositors. Squeezed by high inflation and by competition from money market funds in the late 1970s, the S&Ls pushed for, and won, extensive deregulation in the early 1980s. Once they were allowed to make a wider range of loans, freed of federal oversight, the S&Ls launched a massive wave of unsound lending in areas outside their past experience. Hundreds of S&Ls went bankrupt during the 1980s, leading to a federal bailout that seemed expensive by pre-2008 standards.
The regulation of S&Ls was part of the Glass-Steagall Act, enacted in 1933 to control speculation and protect bank deposits. While provisions affecting S&Ls were repealed in the 1980s, other key features of Glass-Steagall remained in effect until 1999. In particular, the 1999 repeal of Glass-Steagall allowed commercial banks to engage in many risky forms of lending and investment that had previously been closed to them. Then in 2004, the Securities and Exchange Commission (SEC) lowered the reserve requirements on the nation’s biggest investment banks, allowing them to make loans of up to 40 times their reserves (the previous limit had been 12 times their reserves). The result was the same as with the deregulation of S&Ls: taking on unfamiliar, new, seemingly profitable risks destroyed some of the nation’s biggest banks within a few years.
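The leverage figures in this account translate directly into fragility. If reserves equal 1/L of outstanding loans (leverage L), then a portfolio loss of just 1/L wipes the reserves out entirely; a minimal sketch of that arithmetic, using the 12x and 40x limits mentioned above:

```python
# Back-of-the-envelope on the leverage limits: reserves of 1/L of the
# loan book are exhausted by a loss of exactly 1/L of that book.

def loss_that_wipes_out_reserves(leverage):
    """Fraction of the loan portfolio whose loss equals the reserves."""
    return 1.0 / leverage

for leverage in (12, 40):   # pre- and post-2004 limits cited in the essay
    loss = loss_that_wipes_out_reserves(leverage)
    print(f"leverage {leverage:>2}x: a {loss:.1%} portfolio loss exhausts reserves")
```

At the old 12x limit, a bank could absorb a loss of about 8.3 percent of its loan book before its reserves were gone; at 40x, a 2.5 percent loss is enough, which helps explain how seemingly modest mortgage losses destroyed some of the biggest investment banks within a few years.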
There is a similar explanation for the unexpected news that Iceland was among the countries hardest hit by the financial crisis. Privatization and deregulation of Iceland’s three big banks in 2000 allowed the country to become an offshore banking haven for British and other international investors, offering high-risk, high-return (in good times) opportunities to the world. This led to some years of rapid economic growth, and to a banking industry with liabilities equal to several times the country’s GDP – which did not look like a problem until the international financial bubble burst.

Putting the pieces back together again

I suspect that free-marketers need to be less doctrinaire and less simple-mindedly utility-maximizing, and that they should depend less on abstract econometric models. I think they’ll have to take much more seriously the task of thinking through what are the right rules of the road for both the private and public sectors. They’ll have to figure out what institutional barriers and what monetary, fiscal and legal guardrails are needed for the accountability, transparency and responsibility that allow free markets to work.5
5 Kristol, “George W. Hoover?”

When the most doctrinaire of the free-marketers – William Kristol, again – start talking about rules of the road, institutional barriers, and guardrails for the market economy, the moment has arrived for new ideas. What follows is not the way that I would design an economic system if starting from scratch – but neither I nor anyone else has been invited, alas, to start over and build a sensible economy from the ground up. The immediate challenge that we face is to repair what’s there without further jeopardy to jobs and livelihoods.
The four fundamental flaws in the traditional theory suggest the shape of the barriers and guardrails needed to keep the market economy safely on the road and headed in the right direction. The first two flaws point to large categories of decisions and values that should be permanently off-limits to the market. The definition of efficiency in terms of Pareto optimality – endorsing only those changes to the status quo that can win unanimous support – is a profoundly anti-democratic standard that is taken for granted in much of economic theory.6 Many public goods and public decisions cannot be handled purely by consensus in any jurisdiction larger than a village. Markets cannot decide what we want to do about education, infrastructure, defense, and other public purposes; nor can they decide who should pay how much for these programs.
The existence of important values that cannot be priced, rooted in the dignity of humanity and nature, requires a system of rights and absolute standards, not prices and market incentives. Reasonable people can and do disagree about the extent of rights and standards, but this is unquestionably a large, and perhaps growing, sphere of decisions. Many of the things we care most about are too valuable to have prices; they are not for sale at any price.
These straightforward points only came to seem remarkable and controversial under the onslaught of market fundamentalism in recent years, with its relentless focus on expanding the sphere of market efficiency, prices, and incentives. Conservatives, securely in power for most of the years from 1980 through 2008, repeated endlessly that government is the problem and the market is the solution – at least until the crash of 2008, when the roles were abruptly reversed. Meanwhile, it has become common to hear the argument, in environmental policy debates, that rational policy-making must be based on setting the correct price for human lives saved by regulations. (A less common, but by no means unknown, next step is the morally indefensible conclusion that the value of a life saved should be lower in poorer countries.)
The third flaw in the theory of the invisible hand, the existence and importance of big businesses, leads to a need for ongoing regulation. Many industries do not and cannot consist of small businesses whose every action is disciplined by relentless competition. As a result, they have to be disciplined by society – that is, by regulation. Recognition of this fact inspired the traditional treatment of electric utilities, prior to the recent wave of deregulation. Since some aspects of electricity supply are natural monopolies (no one wants to see multiple, competing electric lines running along the same street), the firms holding this monopoly power had to accept limits on their prices and continual oversight of their investment plans – including the requirement to build reserve capacity – in order to ensure that they served the public interest.

While utility regulation is an interesting model, it is not the only approach to the governance of big business. The general point is that the invisible hand only ensures that greed is good for society when the greedy enterprises are small and powerless. Larger, more powerful greed must often be directed by the visible hand of government in order to prevent it from subverting the common good.

The fourth flaw, the impossibility of complete information about markets, leads to lessons more directly focused on the financial crisis. The staggering complexity of many decisions in today’s financial and other markets undermines the strongest pragmatic argument in favor of market mechanisms. Even when markets are not perfectly competitive, and do not achieve the theoretical optimum of the invisible hand (or of general equilibrium theory), they can still excel at decentralized information processing, as Friedrich Hayek pointed out long ago. All the information about the supply and demand for steel is brought together in the steel market; all the information about the supply and demand for restaurant meals in a city is brought together in that market; and so on. No one has to know all the details of all the markets – which is fortunate, since no one could.
As market choices become more intricately and technically detailed, the potential for decentralized information processing disappears. Markets that are too complex for many of the participants to understand cannot do a reasonable job of collecting information about supply and demand. Overly complex markets are often ones that have been artificially created, based on an ideological commitment to solving every problem through the market rather than a natural evolution of trading in existing commodities. The market for health care in the U.S. is a case in point: a service that is more efficiently and cheaply provided as a public good has been forced into a framework of private commodity purchases, with mountains of unnecessary paperwork and vast numbers of people employed in denying medical coverage to others. Medicare coverage of prescription drugs is the epitome of this problem, a “market mechanism” that will never convey useful information about supply and demand because no one understands the bizarre complexity of what they are buying, or how the alternatives would differ.
Other invented, ideologically inspired markets also suffer from the curse of complexity; California’s deregulation of electricity was an unfortunately classic example. Our current system of retirement funding, in which everyone manages their own savings, has higher overhead costs and higher risks of mismanagement than a public system such as Social Security; many people have little or no understanding of the process of managing their retirement funds. In financial markets, innovation that creates complexity is often profitable for the innovating firms and bewildering to others. Cynics might guess that this could be the goal of financial innovation; but even with good intentions, the worsening spiral of complexity defeats any potential for the market to accurately assess the supply and demand for loans.
The policy implication is clear: keep it simple. If training or technical assistance is required to comprehend a new market mechanism, it is probably too complex to achieve its intended goals. Another approach – think of single-payer health care – may offer a more direct, lower-cost route to the same objective, without the trouble of inventing a convoluted new market apparatus. Making public choices about public goods is simpler than squeezing them into the ill-fitting costume of individual market purchases.
In financial markets there is a clear need for independent, publicly funded sources of information about potential investments, to do the job that we always imagined the bond rating companies were doing. Regulation has to apply across the board to new as well as old financial instruments; waiting for signs of trouble before regulating new financial markets is a recipe for a crash.

Precaution vs. cost-benefit analysis
The importance of infrequent, catastrophic risks, and the lack of information about their timing or frequency, highlights the need for a precautionary approach to public policy. In several recent (and very technical) papers, Martin Weitzman shows that both for financial markets and for climate change, the worst case risks can be so disastrous that they should dominate policy decisions. In complex, changing systems such as the world’s climate or financial markets, information will always be limited; if the system is changing rapidly enough, old information may become irrelevant as fast as new information arrives. If, for example, we never have more than 100 independent empirical observations bearing on how bad the market (or climate) will get, then we will never know anything for certain about the 99th percentile risk.
In a situation with unlimited worst-case risks but limited information about their likelihood, Weitzman proves that the expected value of reducing the worst-case risks is, technically speaking, infinite. In other words, nothing else matters except risk reduction, focused on the credible worst case. This is exactly the idea that has been advocated in environmental circles as the “precautionary principle.”
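The logic behind this result can be illustrated with a small simulation. The sketch below is not Weitzman's actual model; it simply contrasts a thin-tailed loss distribution with a fat-tailed one (a Pareto distribution whose theoretical mean is infinite when its tail index is at or below one). In the fat-tailed case, the single worst draw in a large sample tends to swamp the combined contribution of all the ordinary draws, which is the intuition behind letting the credible worst case dominate policy:

```python
import random

def pareto_loss(alpha, u):
    """Inverse-CDF sample from a Pareto distribution with tail index alpha.

    Losses start at 1; for alpha <= 1 the theoretical mean is infinite,
    so sample averages never settle down as more data arrives.
    """
    return (1.0 - u) ** (-1.0 / alpha)

def sample_mean_and_max(alpha, n, seed=0):
    """Return the sample mean and the single worst draw from n losses."""
    rng = random.Random(seed)
    losses = [pareto_loss(alpha, rng.random()) for _ in range(n)]
    return sum(losses) / n, max(losses)

# Thin tail (alpha = 3): the mean converges and the worst draw stays modest.
# Fat tail (alpha = 0.9): the worst single draw dominates the whole sample,
# so "expected loss" is driven almost entirely by the worst case.
for alpha in (3.0, 0.9):
    mean, worst = sample_mean_and_max(alpha, 100_000)
    print(f"alpha={alpha}: sample mean={mean:.1f}, worst draw={worst:.1f}")
```

Re-running the fat-tailed case with different seeds shows the sample mean jumping around wildly from run to run — exactly the situation in which 100 (or 100,000) observations never pin down the 99th-percentile risk.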
For example, the latest climate science suggests that the likely sea level rise over this century will be in the neighborhood of one meter; in addition, if the Greenland ice sheet, or the similarly-sized West Antarctic ice sheet, collapses into the ocean, the result will eventually be another seven meters of sea level rise. One meter of sea level rise is an expensive and difficult problem for islands and low-lying coastal areas; seven meters is enough to destroy most coastal cities and the associated industries and infrastructure around the world. It is irrelevant, therefore, to worry about fine-tuning the “most likely” estimate of one meter, or to calculate the precisely appropriate policy response to that estimate. Rather, the goal should be to do whatever it takes to prevent the collapse of a major ice sheet and the ensuing seven meters of sea level rise. This is true even in the absence of hard information about the probability of collapsing ice sheets; the risk is far too ominous to take any chances with trial and error.
Financial markets are directly analogous – although one might claim that in finance, the ice sheets have now melted and the markets are already underwater. The worst case risks are so painful that nothing else matters in setting public priorities. With the benefit of hindsight, who among us would have objected to somewhat slower growth in stock prices and housing prices over the last decade or two, in exchange for avoiding the recent economic crash? It was not, it turns out, a brilliant idea to lower the reserve requirements and remove other restrictions on the risks that financial institutions could take, even though it boosted short-run profits at the time.
Restoration of the earlier, discarded regulations on banking is not a complete answer to the current crisis, although it is hard to see how it would hurt as a starting point. What is needed is a more comprehensive regulation of financial investments, covering new varieties as well as old. Charging a (very small) percentage fee on all security transactions, plus a first-time registration fee for introducing new types of securities, could fund an expanded regulatory system, and might also slow down the worst forms of speculation. (Some states have employed a comparable system in electric utility regulation; a trivial percentage fee, amounting to a tiny fraction of a cent on each kilowatt-hour of electricity, supports the state’s oversight of the system as a whole.)
In general, the accumulation of reserves guards against unexpected bad times and market fluctuations. In a volatile and uncertain world, financial and other systems have to be run in a manner that allows such reserves. It is the social equivalent of insurance against individual losses; likewise, the regulatory rollbacks of recent years are the equivalent of cancelling your insurance and spending the premiums on a few more nights out on the town. Maintaining a bit of slack in the system is essential for accumulating reserves that protect against worst cases; squeezing the last bits of slack out in order to maximize profits when everything works according to plan leaves us all more vulnerable to deviations from that plan.

Globalization, new deals, and old economics
The final argument against stringent regulation is that in an increasingly globalized economy, capital will simply move to less regulated countries. Extensive research and debates have found little support for this idea in the sphere of environmental regulation; the “pollution haven” hypothesis, claiming that industry will subvert regulation by moving to countries with weaker environmental standards, is not supported by the bulk of the evidence.
Financial capital, however, is more mobile than industry; huge sums of money can be transferred electronically across national boundaries with minimal transaction costs. Thus it should be easier to create “speculation havens” than pollution havens; a handful of small countries are already known for welcoming unregulated offshore financial investments. The push for deregulation of banking, from the S&L episode of the 1980s to the present, has come not only from ideology and the desire for short-run profits, but also from the pressure of competition with newer, less regulated financial institutions.
The process of financial innovation will continue to challenge any simple attempts to curtail the flight of capital. The ultimate answer to this problem is not only to regulate existing financial markets and institutions, but also to create new, socially useful opportunities for investment – to steer capital toward better purposes as well as to police its attempts to steal away.
Lurking behind the failure of financial markets is the lack of real investment opportunities, as seen, for instance, in the near-bankruptcy of the U.S. auto industry. GM, Ford, and Chrysler have engaged in their own form of gambling on good times, over-committing their resources to SUVs and other enormous, energy-inefficient vehicles. Paralleling the risky financial ventures that fell apart in 2008, the “all big cars all the time” strategy produces big profits if (and only if) consumer incomes stay high and fuel prices stay low. When incomes fall and oil prices rise, it turns out to be a shame to have bet the company on endless sales of vehicles much larger than anyone actually needs. A new initiative is needed to reshape and redirect this industry and others; left to its own devices, the free market only leads deeper into the ongoing collapse of U.S. manufacturing. If a bailout in the auto industry, finance, or elsewhere gives the government a share of ownership, as it should, then public priorities can be implemented as a condition of public assistance.
At the end of 2008, profitable investment opportunities are vanishing across the board, as the U.S. and the world economies are sliding into the worst economic downturn since the 1930s. That decade’s depression helped inspire the theories of John Maynard Keynes, explaining how deficit spending helps to cure economic slumps and put unemployed people back to work. Keynesian economics has been out of academic fashion for nearly thirty years, banished by the same market fundamentalism that pushed for deregulation of financial and other markets. Yet when a big enough crisis hits, everyone is a Keynesian, favoring huge increases in deficit spending in order to provide an economic stimulus.
There is no shortage of important public priorities that are in need of attention. Thirty years of relentless tax-cutting and penny-pinching in public spending have left the U.S. with perilously crumbling and underfunded infrastructure, from the failed levees of New Orleans to the fatal collapse of a major highway bridge in Minneapolis. The country is shockingly far away from adequate provision of health care and high-quality public education for all, among other social goals. In terms of prevention of worst-case risks, addressing the threat of climate change requires reinventing industry, electric power, and transportation with little or no carbon emissions – a task that calls both for widespread application of the best existing techniques, and for discovery, development, and adoption of new breakthrough technologies, in the U.S. and around the world. What would it take to structure an economy in which these objectives were more attractive to capital than repackaging subprime mortgages and inventing esoteric con games?
A focus on ambitious new public priorities no longer appears to be absent from American politics. Barack Obama’s speeches invoke the goal of a “green new deal,” representing an enormous improvement over the previous occupant of the White House in this and so many other ways. The reality, however, seems likely to lag far behind the rhetoric. Practical discussion has focused on the size of the one-time stimulus that might be needed, treating it as an expensive cure for a rare ailment rather than a new, healthier way of life. The economic advisors for the new administration represent the cautious mainstream of the Democratic Party, an improvement relative to their immediate predecessors in office, but far from offering what is really needed.
Recognizing the new popularity of Keynesian ideas and analogies to the 1930s, a few conservative critics have begun to object that the New Deal should not be taken as a model because it failed to end the Depression. Despite the ambitious, well-publicized initiatives of the Roosevelt administration, unemployment remained extremely high and the economy did not fully recover until the surge of military spending for World War II. This is literally true, but implies a need to do more, not less, than the New Deal. Programs that put hundreds of thousands of people to work, some of them building parks and bridges that are still in use today, were not misguided; they were just too small. A premature lurch back toward balanced budgets caused a painful interruption in the recovery in 1937-38, prolonging high rates of unemployment.
Indeed, as Keynes himself said in 1940, “It is, it seems, politically impossible for a capitalistic democracy to organize expenditure on the scale necessary to make the grand experiments which would prove my case — except in war conditions.” The grand experiment of mobilizing for World War II did succeed in reviving the market economy; it involved massive, ongoing government redirection of spending toward socially determined priorities.
The need for a pervasive, permanent role of government in directing investment also emerges from more recent studies of economic development. As documented in the research of Alice Amsden, Ha-Joon Chang, Dani Rodrik, and others, the countries that have grown fastest have ignored the advice of the World Bank, IMF, and other advocates of free trade and laissez-faire. Instead, successful development has been based on skillful, continual government involvement in nurturing promising industries, supporting education, research, and infrastructure, and managing international trade. The government’s leading role in development can certainly be done wrong, but it can’t be done without.
The New Deal was on the one hand much larger than any recent government initiatives in the U.S., and on the other hand too small for the crisis of the 1930s – or for today. Rebuilding our infrastructure and social programs, while reducing carbon emissions to a sustainable level, will not be finished in a year, or even one presidential term. An ongoing effort is required, more on the scale of wartime mobilization or the active engagement of governments in successful development strategies. With such an effort, there will be a reliable set of investment opportunities in the production of real, socially useful goods and services, as well as a much-strengthened government empowered to regulate and prevent dangerous forms of speculation and undesirable financial “innovations.”
In such a world, the market still plays an essential role, coordinating the numerous industries and activities, engaging in the decentralized processing of information about supply and demand (which is its indispensable task). It will not, however, be stretched to fit other problems that are better handled through the public sector; and it will not be bowed down to as the source of wisdom and policy guidance. There is a clear need for smoothly functioning financial markets, but adult supervision is required to avoid a repetition of recent events.
To close by way of analogy, the market may be the engine of a socially directed economy, indispensable for forward motion. There are limits, however, to its capabilities: it cannot change its own flat tires; and if we let it steer, we are sure to hit the wall again.
________________________________
SUGGESTED CITATION:
Frank Ackerman, “The Economics of Collapsing Markets”, real-world economics review, issue no. 48, 6 December 2008, pp. 279-290, http://www.paecon.net/PAEReview/issue48/Ackerman48.pdf

The current financial and economic crisis has once again placed the dangers of capitalism at the forefront of our collective consciousness. The left, which until relatively recently had seemed adrift across much of the Western world, lacking in coherent and convincing responses to globalization and neoliberalism, appears once again poised for a comeback, as citizens yearn for stability and security in difficult times. That the left’s fortunes should ebb and flow with capitalism’s is nothing new. Indeed, capitalism is both the reason for and the bane of the modern left; the left’s origins and fate have always been inextricably intertwined with capitalism’s. There is much, therefore, that the left can learn from its past about how to approach the problems of the present.

The Backstory
The emergence of capitalism in the eighteenth and nineteenth centuries led to unprecedented economic growth and personal freedom, but it also brought dramatic inequality, social dislocation, and atomization. Accordingly, a backlash against the new order soon began. During the early to mid nineteenth century, a motley crew of anarchists, Lassalleans, Proudhonians, Saint Simonians, and others gave voice to the growing discontent. Only with the rise of Marxism, however, did the emerging capitalist system meet an enemy worthy of its revolutionary power. By the late nineteenth century an orthodox version of Marxism had displaced most other critiques of capitalism on the left and established itself as the dominant ideology of the international socialist movement.

Part of Marxism’s appeal came from the embedding of its scathing critique of capitalism in an optimistic historical framework that promised the emergence of an even newer and better system down the road. Crudely stated, Marxism had three core points: that capitalism was a great transforming force in history, destroying the old feudal order and generating untold wealth and productivity; that it was based on terrible inequality, exploitation, and conflict; and that it would ultimately and naturally be transcended by the arrival of communism.

We don’t always remember that Marx thought capitalism had amazing qualities. “[It] has accomplished wonders,” he wrote, “far surpassing Egyptian pyramids, Roman aqueducts, and Gothic cathedrals; it has conducted expeditions that put in the shade all former Exoduses of nations and crusades.” But its extraordinary accomplishments, he argued, came at a fearsome human cost. Capital was like a vampire that “lives only by sucking living labor, and lives the more, the more labor it sucks.” And in the end, having fulfilled its historically “progressive” function of destroying the old order and releasing humanity’s productive potential, it would collapse. Marx was convinced that just as the internal contradictions of feudalism had paved the way for capitalism, so the internal contradictions of capitalism would pave the way for its successor. It was, as he once put it, “a question of . . . laws . . . tendencies working with iron necessity towards inevitable results.”

Everyone on the left agreed with Marx on the first two points. By the late nineteenth century, however, some of its sharpest minds began to disagree on the third. For instead of collapsing, capitalism was showing great resilience. It emerged stronger than ever from a long depression in the 1870s and 1880s, and then revolutions in transportation and communication led to a wave of globalization sweeping over not just Europe but the world at large. Several advanced bourgeois states, meanwhile, had started to enact important economic, social, and political reforms, and, for most of the public, life was actually getting not worse but better (however slowly and fitfully).

In response to these conditions, the left effectively splintered into three camps. The first, best symbolized by Lenin, argued that if the new social order was not going to come about on its own, then it could and should be imposed by force—and promptly set out to spur history along through the politico-military efforts of a revolutionary vanguard. Many other leftists were unwilling to accept the violence and elitism of such a course and chose to stick to a democratic path. Standard narratives of this era often leave the analysis here, focusing on the split between those who embraced and those who rejected violence. In fact, however, an additional split within the democratic camp was crucial as well, centering on the future of capitalism and the left’s proper response to it.

One democratic faction believed that Marx may have been wrong about the imminence of capitalism’s collapse, but was basically right in arguing that capitalism could not persist indefinitely. Its internal contradictions and human costs, they felt, were so great that it would ultimately give way to something fundamentally different and better—hence the purpose of the left was to hasten this transition. Another faction rejected the view that capitalism was bound to collapse in the foreseeable future and believed that in the meantime it was both possible and desirable to take advantage of its upsides while addressing its downsides. Rather than working to transcend capitalism, therefore, they favored a strategy built on encouraging its immense productive capacities, reaping the benefits, and deploying them for progressive ends.

The real story of the democratic left over the last century has been the story of the battle between these two factions, which can be thought of as the battle between democratic socialism and social democracy. It is this battle, and in particular the incomplete victory of the latter in it, that has constrained the left’s ability to respond to political challenges up through the present day.

Heirs or Doctors?
The most important and influential of the fin-de-siècle proto-social democrats was Eduard Bernstein. Bernstein was an important figure in both the international socialist movement and its most powerful party, the German Social Democratic Party (SPD). He argued that capitalism was not leading to the immiseration of the proletariat, a drop in the number of property owners, and ever-deepening crises, as orthodox Marxists had predicted. Instead, he saw a capitalist system that was growing ever more complex and adaptable. This led him to oppose “the view that we stand at the threshold of an imminent collapse of bourgeois society, and that Social Democracy should allow its tactics to be determined by, or made dependent upon, the prospect of any forthcoming major catastrophe.” Since catastrophe was both unlikely and undesirable, he argued, the left should focus on reform instead. The prospects for socialism depended “not on the decrease but on the increase of social wealth,” together with socialists’ ability to generate “positive suggestions for reform” that would improve the living conditions of the great masses of society: “With regard to reforms, we ask, not whether they will hasten the catastrophe which could bring us to power, but whether they further the development of the working class, whether they contribute to general progress.” Perhaps Bernstein’s most (in)famous comment was, “What is usually termed the final goal of socialism is nothing to me, the movement is everything.” By this he simply meant that talking constantly about some abstract future was of little value; instead socialists needed to focus their attention on the long-term struggle to create a better world.

Because the issues raised by Bernstein and other revisionists touched upon both theory and praxis, it is not surprising that the international socialist movement was consumed by debates over them during the fin-de-siècle. Karl Kautsky, the standard-bearer of orthodox Marxism, attacked Bernstein, commenting, “He tells us that the number of property-owners, of capitalists, is growing and that the groundwork on which we have based our views is therefore wrong. If that were so, then the time of our victory would not only be long delayed, we would never reach our goal at all.” Similarly, Wilhelm Liebknecht, one of the leaders of the powerful German SPD, noted, “If Bernstein’s arguments [are] correct, we might as well bury our program, our entire history, and the whole of [socialism].” And Rosa Luxemburg, perhaps Bernstein’s most perceptive critic, urged socialists to recognize that if his heretical views were accepted, the whole edifice of orthodox Marxism would be swept away: “Up until now,” she argued, “socialist theory declared that the point of departure for a transformation to socialism would be a general and catastrophic crisis.” Bernstein, however, “does not merely reject a certain form of the collapse. He rejects the very possibility of collapse . . . . But then the question arises: Why and how . . . shall we attain the final goal?” As Luxemburg recognized, Bernstein was presenting socialists with a simple question: Either “socialist transformation is, as before, the result of the objective contradictions of the capitalist order . . . and at some stage some form of collapse will occur,” or capitalism could actually be altered by the efforts of inspired majorities—in which case “the objective necessity of socialism . . . falls to the ground.”

These debates simmered for more than a generation, until events reached a critical juncture during the 1920s and early 1930s. Now in power in several major European countries, the democratic left found itself responsible for actual political and economic governance, not simply for agitation and theorizing. The onset of the Great Depression in particular forced socialists to confront their relationship to capitalism head-on. In the hour of what seemed to be capitalism’s great crisis, what should socialists do? Should they sit back and cheer, seeing the troubles as simply the start of the transition that orthodox Marxism had long promised? Or should they try to stanch the bleeding and improve the system so that such disasters could never happen again? Fritz Tarnow, a leading German socialist and unionist of the day, summed up the dilemma in 1931:

Are we standing at the sickbed of capitalism not only as doctors who want to heal the patient, but also as prospective heirs who can’t wait for the end and would gladly help the process along with a little poison? . . . We are damned, I think, to be doctors who seriously want to cure, and yet we have to maintain the feeling that we are heirs who wish to receive the entire legacy of the capitalist system today rather than tomorrow. This double role, doctor and heir, is a damned difficult task.

In fact, it was not just difficult, it was impossible. And recognizing this, more and more socialists understood that the time had come to choose. One result was that during the early 1930s, reformers across the continent developed policies that, while differing in their specifics, were joined by one key belief: the need to use state power to tame and ultimately reform capitalism. In Belgium, Holland, and France, Hendrik de Man and his Plan du Travail found energetic champions; in Germany and Austria, reformers advocated government intervention in the economy and proto-Keynesian stimulation programs; and in Sweden, the Social Democratic Party initiated the single most ambitious attempt to reshape capitalism from within.

By the end of the 1930s, therefore, the longstanding debate on the democratic left had come to a head. On the one side stood social democrats, who believed in using the power of the democratic state to reform capitalism. And on the other side stood democratic socialists, who believed that leftists should not do anything about capitalism’s crises because ultimately it was only through the system’s collapse that a better world would emerge.

The Postwar World
During the interwar years, social democrats generally lost these battles, except in Scandinavia and, particularly, in Sweden. But in the wake of a second world war brought on by tyrannies that had come to power thanks in part to the interwar era’s economic and social turmoil, the social democrats’ ideas and policies ultimately triumphed, both on the left and across much of the political spectrum. After 1945, Western European states explicitly committed themselves to managing capitalism and protecting society from its more destructive effects. The prewar liberal understanding of the relationship among capitalism, the state, and society was abandoned: no longer was the role of the state simply to ensure that markets could grow and flourish; no longer were economic interests to be given the greatest possible leeway. Instead, after the war the state was generally seen as the guardian of society rather than the economy, and economic imperatives were often forced to take a back seat to social ones.

These changes seemed so dramatic at the time that contemporary observers were unsure how to characterize them. Thus, C. A. R. Crosland argued that the postwar political economy was “different in kind from classical capitalism . . . in almost every respect that one can think of.” And Andrew Shonfield similarly questioned whether “the economic order under which we now live and the social structure that goes with it are so different from what preceded them that it [has become] misleading . . . to use the word ‘capitalism’ to describe them.”

But of course capitalism did remain—even though it was a very different capitalism than before. After 1945, the market system was tempered by political power, and the state was explicitly committed to protecting society from its worst consequences. This was a far cry from what Marxists, communists, and democratic socialists had hoped for (namely, an end to capitalism), but it was equally far from what liberals had long advocated (namely, a free rein for markets). What it most closely embodied was the worldview long espoused by social democrats.

Putting into place this new understanding of politics and markets allowed the West to combine—for the first time in its history—economic growth, well-functioning democracy, and social stability. Despite the obvious success of the postwar order, however, the triumph of social democracy was not complete. Many on the right accepted the new system out of necessity alone; once their fear of economic and social chaos (and the radical left) faded, their commitment to the order also faded. But more interestingly, even many on the left failed to understand or wholeheartedly accept the new dispensation. Some forgot that the reforms, while important, were merely means to an end—an ongoing process of taming and domesticating the capitalist beast—and so contented themselves with the pedestrian management of the welfare state. Others never made their peace with the loss of a post-capitalist future.

A leading light in the second camp was Michael Harrington, putative heir to the mantle of Eugene Debs and Norman Thomas, one of the American left’s most inspiring and influential figures, and a long-time contributor to this journal. Harrington supported reforms that alleviated the suffering of America’s poor and marginalized (whom he famously termed “The Other America”), but he did not believe that such reforms or the welfare state more generally could ever eliminate suffering or injustice. These were ultimately inherent features of capitalism itself. He argued, for example, that the “class structure of capitalist society vitiates, or subverts almost every . . . effort towards social justice.”

Even the unprecedented economic growth of the postwar era did not fundamentally change Harrington’s views. He described such growth as “misshapen” and “counterproductive,” arguing that no matter how economically successful it was, capitalism was incapable of “meeting the needs of the people.” Perhaps unsurprisingly, he was also convinced that capitalism was on its way out. In 1968, he opened his book Toward a Democratic Left with the proclamation that the “American system [didn’t] seem to work any more.” In 1976, he wrote a book called Twilight of Capitalism. In 1978, he asserted that “capitalism was dying.” And in 1986—just three years before the collapse of communism and in the middle of a lengthy economic boom—he wrote that “the West is living through an economic and social crisis so unprecedented in its tempo, so complex in its effects, that there are many who do not even know it is taking place.”

The problem with such statements and the larger worldview that lay behind them is not merely that they were wrong, but also that they were counterproductive. Convinced that a better world had to await capitalism’s demise, Harrington devoted much of his intellectual and political energy to convincing his readers that capitalism’s apparent triumphs were fictional and that the system was really on its way out. And he sought to persuade the left that its chief task was not to reform and humanize capitalism but rather to press for its passing.

One result of the mismatch between Harrington’s worldview and reality was that his attempts at practical guidance were highly impractical. Indeed, reading Harrington today one is struck by two things: the sharp and amazingly empathetic eye he brought to his descriptions of the American poor and the utopian irrelevance of most of his policy proposals for improving their lot. Harrington knew what he disliked about the existing capitalist order, but had trouble describing concretely how a post-capitalist world would actually work or how to get to it. Like other democratic socialists, he placed a lot of faith in “democratic planning.” Yet aside from the emphasis on democracy and public participation (to differentiate it from the heavy-handed state planning of the Eastern bloc), there was little description about what such planning would involve or how it would achieve its goals. Other recommendations for building a socialist order included the socialization of investment, some form of “social” ownership, shorter working hours, and limits on the private setting of prices. But one looks in vain for details about how such measures could be implemented, what their likely results would be, and how they would relate to each other and to existing institutions so as to produce more efficient or just outcomes.

It is hard not to conclude, especially with hindsight, that the democratic socialist view was ultimately a dead end. Although Harrington and others in his corner were very often correct in their scathing criticisms of capitalism, they consistently played down not only its extraordinary accomplishments but also the changes it went through over time—changes that were, to a large degree, the achievement of the left itself. By insisting that true justice could come only with capitalism’s elimination, democratic socialists implicitly (and often explicitly) denigrated efforts at taming it—thus limiting the left’s cohesiveness and appeal and its ability to offer practical benefits to suffering populations in the short and mid term.

The Fierce Urgency of Now
These arguments are anything but academic or merely historical. For the left today faces a globalized capitalism in the midst of a serious crisis. How the left thinks about capitalism and its own mission will affect its ability to deal with this crisis as well as its chances for electoral success. Although currently chastened, contemporary neoliberals of the right and center have long argued for leaving markets as free as possible and have long dismissed concerns about globalization’s individual and social costs. Large sectors of the left, meanwhile, downplay the adaptability of markets and dismiss the huge gains that the global spread of capitalism has brought, particularly to the poor in the developing world. Such debates resemble nothing so much as those taking place a century ago, out of which the social democratic worldview first emerged. Then as now, many liberals see only capitalism’s benefits, while many leftists see only its radical flaws, leaving it to social democrats to grapple with a full appreciation of both.

Participants at the two extremes of today’s economic debates need to be reminded that it was only through the postwar settlement that capitalism and democracy found a way to live together amicably. Without the amazing economic results generated by the operations of relatively free markets, the dramatic improvements of mass living standards throughout the West would not have been possible. Without the social protections and limits on markets imposed by states, in turn, the benefits of capitalism would never have been distributed so widely, and economic, political and social stability would have been infinitely more difficult to achieve. One of the great ironies of the twentieth century is that the very success of this social democratic compromise made it seem routine; we forget how new and controversial it actually was. As a result, by the end of the twentieth century the West had begun to gradually abandon this compromise, moving in a more neoliberal direction, freeing markets and economic activity from some of the oversight and restrictions that had characterized the postwar settlement. The challenge to the left today is to recover the principles underlying this settlement and to generate from them initiatives that address today’s new problems and opportunities. Many of the specific policies that worked during the postwar era have run out of steam, and the left should not be afraid to jettison them. The important thing is not the policies but the goals—encouraging growth while at the same time protecting citizens from capitalism’s negative consequences.

Building on its best traditions, the left must reiterate its commitment to managing change rather than fighting it, to embracing the future rather than running from it. This might seem straightforward, but in fact it isn’t generally accepted. Many European and American leftists are devoted to familiar policies and approaches regardless of their practical relevance or lack of success. And many peddle fear of the future, fear of change, and fear of the other. Increasing globalization and the dramatic rise of developing world giants such as China and India, for example, are seen as threats rather than opportunities.

At their root, such fears stem from the failure of many on the left to appreciate that capitalism is not a zero-sum game—over the long run the operations of relatively free markets can produce net wealth rather than simply shifting it from one pocket to another. Because social democrats understand that basic point, they want to do what they can to encourage trade and growth and cultivate as large a net surplus as possible—all the better to pay for measures that can equalize life chances and cushion publics from the blows that markets inflict.

Helping people adjust to capitalism, rather than engaging in a hopeless and ultimately counterproductive effort to hold it back, has been the historic accomplishment of the social democratic left, and it remains its primary goal today in those countries where the social democratic mindset is most deeply ensconced. Many analysts have remarked, for example, on the impressive success of countries like Denmark and Sweden in managing globalization—promoting economic growth and increased competitiveness even as they ensure high employment and social security. The Scandinavian cases demonstrate that social welfare and economic dynamism are not enemies but natural allies. Not surprisingly, it is precisely in these countries that optimism about globalization is highest. In the United States and other parts of Europe, on the other hand, fear of the future is pervasive and opinions of globalization astoundingly negative. American leftists must try to do what the Scandinavians have done: develop a program that promotes growth and social solidarity together, rather than forcing a choice between them. Concretely this means agitating for policies—like reliable, affordable, and portable health care; tax credits or other government support for labor-market retraining; investment in education; and unemployment programs that are both more generous and better incentivized—that will help workers adjust to change rather than make them fear it.

JUST AS important, however, is that the left regain its old optimism and historical vision. And here, interestingly, is where Harrington still has something to teach. In his writings, he insisted on the left’s need for some larger sense of where it wanted the world to be heading. Without this, he argued, the left would be directionless and uninspiring. Despite current disillusionment with capitalism, this is precisely the situation the left finds itself in today, given the loss of its vision of a postcapitalist society. Many of its parties win elections, but few inspire much hope or offer more than a kinder, gentler version of a generic centrist platform.

Given the left’s past, this is astonishing. The left has traditionally been driven by the conviction that a better world was possible and that its job was to bring this world into being. Somehow this conviction has been lost. As Michael Jacobs has noted, “Up through the 1980s politics on the left was enchanted—not by spirits, but by radical idealism; the belief that the world could be fundamentally different. But cold, hard political realism has now done for radical idealism what rationality did for pre-Enlightenment spirituality. Politics has been disenchanted.” Many welcome this shift, believing that transformative projects are passé or even dangerous. But this loss of faith in transformation “has been profoundly damaging, not just for the cause of progressive politics but for a wider sense of public engagement with the political process.”

As social democratic pioneers of the late nineteenth and early twentieth century recognized, the most important thing that politics can provide is a sense of the possible. Against Marxist determinism and liberal laissez-faire, they developed a political ideology based on the idea that people working together could make the world a better place. And in contrast to their democratic socialist colleagues, they argued that it was both possible and desirable to take advantage of capitalism’s upsides while addressing its downsides. The result was the most successful political movement of the twentieth century, one that shaped the basic politico-economic framework under which we still live. The problems of the twenty-first century may be different in form, but they are not different in kind. There is no reason that this accomplishment cannot be developed and extended.

Sheri Berman is associate professor of political science at Barnard College, Columbia University. Her latest book is The Primacy of Politics: Social Democracy and the Making of Europe’s Twentieth Century (Cambridge University Press, 2006).

THE MARKET share of what used to be called the “Big Three” U.S. automakers has been shrinking for years. GM alone had over 50 percent of the U.S. market in the 1960s, but Ford, GM, and Chrysler together can now barely muster 40 percent. Since autumn, sales have been in free fall. GM lost $9.6 billion last quarter, and Chrysler has all but announced it is not viable without a foreign partner. Does the United States need an auto industry?

In the short term, the government should act to prevent a sudden collapse of the Detroit Three. Such a collapse could, due to interlinked supply chains, cause the loss of 1.5 to 3 million jobs (adding 1 to 2 percentage points to an unemployment rate already approaching double digits), and cause such chaos that even Japanese automakers support loans to keep GM and Chrysler afloat. But what about the long term? Why not let the Detroit Three continue to shrink, and allow Americans to buy the cars they prefer, whether they are U.S.-made or not?

It is true that the Detroit Three’s problems go deeper than the current dramatic fall in demand due to the economic crisis. But these problems have potentially correctable causes. The automakers have been managed with an eye to short-term financial gain rather than long-term sustainability. Public policy has also been unfavorable, in three major ways: low gas taxes, which lead to large fluctuations in the price of gas when crude oil prices change; lack of national health care, which penalizes firms responsible enough to offer it; and an insufficient public safety net for retired and laid-off employees, causing firms that shrink to be saddled with very high “legacy costs.” Another problem (primarily for the rest of manufacturing, but also for autos) is trade agreements that don’t protect labor or environmental rights.

It is important to note that the United States faces no fundamental competitive disadvantage in auto manufacturing. Competitive advantage in auto manufacturing is made, not born (in contrast to the case of, say, banana growing, where natural endowments like climate play an important role).

First, we should dispel the notion that auto manufacturing is inherently a low-wage activity. Our major competitors in auto assembly (Germany and Japan) pay wages at least as high as in the United States. Low-wage nations such as China and Mexico have made some inroads into auto supply, providing about 10 percent of the content of the average U.S.-assembled vehicle. But even here, competing with low-wage nations is not as daunting as one might think; research by the Michigan Manufacturing Technology Center suggests that most small manufacturers have costs within 20 percent of their Chinese competitors’. Manufacturers could meet this challenge by adopting a “high-road” production process that harnesses everyone’s knowledge—that of production workers as well as top executives and investors—to achieve innovation, quality, and quick responses to unexpected situations.

Is there a public interest in reversing the industry’s undeniable failures? Why not let all the manufacturing jobs disappear and have an economy of just eBays and Googles? Because we need manufacturing expertise to cope with events that might present huge technical challenges to our habits of daily living (global warming) or leave us unable to buy from abroad (wars).

The auto industry has a critical role to play in meeting these national goals. Take the challenge of climate change. We need to radically increase the efficiency of transport, in part through incremental changes that reduce the weight of cars, more significant changes to the internal combustion engine, and potentially revolutionary couplings of cars with “smart highways” to dramatically improve traffic flow.

Yes, we could import this technology. But it might not be apt for the U.S. context. (For example, Europe has long favored diesels for their fuel economy, but Americans have deemed diesels’ high emissions of nitrogen oxides and particulates to be unacceptable.) And we’d need to export a lot of something to pay for this technology—or see a continued fall in the value of the dollar, leading to a fall in living standards.

The auto industry has long been known as “the industry of industries,” since making cars absorbs much of the output of industries like machine tools, steel, electronics, even computers and semiconductors. Innovations pioneered for the auto industry spread to other industries as well. Thus, maintaining the industry now keeps capabilities alive that may be crucial in meeting crises we have not yet thought of. Traditional trade theory has little room for such “irreversibility”; it assumes that if relative prices change, countries can easily re-enter businesses in which they were once uncompetitive. But it is very expensive to recreate the vast assemblages of suppliers, engineers, and skilled workers that go into making cars and other manufactured goods.

We should not assume that the United States will keep “high-skilled” engineering and design jobs even if we lose production jobs. In fact, the reverse may well be true. Asian and European car companies do most of their engineering in their home countries; they manufacture here in part because of the bulkiness of cars. Even the Detroit Three are outsourcing engineering to Europe (for small cars) and India (for computer-aided drafting). In addition, it is difficult to remain competitive for long in design when one doesn’t have the insight gained from actual manufacturing. Another reason to save the auto industry is its role as a model of relative fairness in sharing productivity gains. Allowing a high-wage industry to fail does not guarantee that another high-wage industry will emerge to take its place—in fact, its failure weakens the institutions and norms that created such industries, making a successor less likely.

So, the United States needs an auto industry, one that pays fair wages and engages in both engineering and production at a sufficient scale to keep critical industries like machine tools humming. Do “we” need a domestically owned auto industry? This is a harder question. Our “national champions” have not served the United States particularly well in recent decades; consumers have benefited greatly from access to Toyotas and Hondas. Yet, the demise of the Big Three may well lead to negative consequences for all of us—lower wages (since foreign automakers have been hostile to unions) and less R&D in the United States—and therefore we need to make sure we don’t create financially viable firms by sacrificing capabilities and wages. Instead, we should implement government policies, such as creating both demand and supply for fuel-efficient vehicles, and involve unions in training programs for both current and former auto workers. These policies would help create an industry that serves all its stakeholders—including taxpayers.

Susan Helper is AT&T Professor of Economics, Weatherhead School of Management, Case Western Reserve University. She is also a Research Associate at the National Bureau of Economic Research and MIT’s International Motor Vehicle Program.

I HAVE often thought that economists should be required to have a better grasp of simple arithmetic. It would prevent them from repeating many silly comments that pass for conventional wisdom, such as that the United States will no longer be a manufacturing country in the future.

Those who know arithmetic can quickly detect the absurdity of this assertion. The implication of course is that the United States will import nearly all of its manufactured goods. The problem is that unless we can find some country that will give us manufactured goods for free forever, we have to find some mechanism to pay for our imports.

The end-of-manufacturing school argues that we will pay by exporting services. This is where arithmetic is so useful. The volume of U.S. trade in goods is approximately three and a half times the volume of its trade in services. If the deficit in goods trade were to continue to expand, we would need incredible growth in both the volume of our service trade and our surplus on that trade in order to get anything close to balanced trade.

For example, if we lose half of our manufacturing over the next twenty years, and imported services continue to rise at the same pace as the past decade, then we would have to see exports of services rise at an average annual rate of almost 15 percent over the next two decades if we are to have balanced trade in the year 2028.

A 15 percent annual growth rate in service exports is approximately twice the rate of growth in service exports that we have seen over the last decade. It would take a very creative story to explain how we can anticipate the doubling of the growth rate of service exports on a sustained basis.
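The compound-growth logic behind these numbers is easy to check. The sketch below uses purely illustrative figures (every value is an assumption chosen to match the rough proportions in the text, not actual trade data, and goods exports are held flat for simplicity); the exact answer depends on the starting assumptions, but double-digit required growth rates fall out readily:

```python
# Back-of-the-envelope check of the balanced-trade arithmetic.
# All figures below are illustrative assumptions, not actual trade data.

goods_imports = 1700.0        # assumed index values; goods trade ~3.5x services trade
goods_exports = 1000.0        # assumed, and held flat for simplicity
service_exports = 400.0
service_imports = 300.0

years = 20
goods_import_growth = 0.03    # assumed: goods imports keep rising as manufacturing shrinks
service_import_growth = 0.05  # assumed: service imports keep their recent pace

# Project everything except service exports forward twenty years.
future_goods_imports = goods_imports * (1 + goods_import_growth) ** years
future_service_imports = service_imports * (1 + service_import_growth) ** years

# Balanced trade in year 20 requires:
#   goods_exports + future service exports = future_goods_imports + future_service_imports
required_service_exports = future_goods_imports + future_service_imports - goods_exports

# The compound annual growth rate that gets service exports there:
required_growth = (required_service_exports / service_exports) ** (1 / years) - 1
print(f"Required annual growth of service exports: {required_growth:.1%}")
```

Plugging in steeper import growth or a larger initial goods deficit pushes the required rate toward the 15 percent figure cited above; the point is that any plausible set of numbers demands a sustained export boom without historical precedent.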

The story becomes even more fantastic on a closer examination of the services that we export. The largest single item is travel, meaning the money that foreign tourists spend in the United States. This item alone accounts for almost 20 percent of our service exports.

There is nothing wrong with tourism as an industry. However, the idea that U.S. workers are somehow too educated for manufacturing work, but will instead be making the beds, bussing the tables, and cleaning hotel toilets for foreign tourists is a bit laughable. Of course, with the right institutional structure (e.g., strong unions) these can be well-paying jobs, but it is certainly not apparent that they require more skills than manufacturing.

The category “other transportation” accounts for another 10 percent of exported services. These are the fees for freight and port services that importers pay when they bring items into the United States. This service rises when our imports rise. It is effectively money taken out of our consumers’ pockets because it is included in the price of imported goods.

Royalty and licensing fees account for another 17 percent of our service exports. These are the fees that we get countries to tack onto the price of their products due to copyright and patent protection. It might become increasingly difficult to extract these fees as the spread of the Internet increasingly allows more movies, software, and recorded music to be instantly copied and exchanged at zero cost. It’s not clear that the rest of the world is prepared to use police-state tactics to collect revenue for Microsoft and Disney. The drug patent side of this equation is even more dubious. Developing countries are not eager to see their people die so that Pfizer and Merck can get high profits from their drug patents. This component of service exports is likely to come under considerable pressure in future years.

Another major category of service exports is financial services. This category accounted for approximately 10 percent of service exports in recent years. It is questionable whether this share can be maintained in the years ahead. Wall Street had been known as the gold standard of the world financial industry, with the best services and the highest professional standards. As a result of the scandals exposed in the last year, Wall Street no longer has this standing in the world. After all, investors don’t have to come to New York and give their money to Bernie Madoff or Robert Rubin to be ripped off; they can be ripped off almost anywhere in the world. Perhaps the Obama administration will be able to implement reforms in the financial sector that restore its integrity in the eyes of world investors, but at this point that will require serious work.

Finally, there is the category of business and professional services, which accounts for roughly 20 percent of service exports. This is the area of real high-tech and high-end services. It includes computing and managerial consulting.

Rapid growth in this sector would mean more high-end jobs in the United States, but the notion that it could possibly expand enough to support a country without manufacturing is absurd on its face. First, even though it is a large share of service exports, it is equal to only about 0.8 percent of GDP. Even if it quadrupled over the next two decades, it wouldn’t come close to covering the current trade deficit, to say nothing of the increase due to the loss of more manufacturing output.
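The scale point yields to the same simple arithmetic. The 0.8 percent-of-GDP figure comes from the text; the deficit share below is an assumed round number for the sketch (the U.S. trade deficit was on the order of 5 percent of GDP in this period):

```python
# Scale check: can quadrupling business/professional service exports
# cover the trade deficit? (The deficit share is an assumed round number.)
sector_share = 0.008      # business/professional service exports as share of GDP (from text)
deficit_share = 0.05      # assumed trade deficit as share of GDP
years = 20

# Quadrupling over two decades implies this compound annual growth rate:
growth_to_quadruple = 4 ** (1 / years) - 1

# The *additional* exports quadrupling would bring, as a share of GDP:
added_exports = (4 - 1) * sector_share

print(f"Annual growth needed to quadruple in {years} years: {growth_to_quadruple:.1%}")
print(f"Added exports cover {added_exports / deficit_share:.0%} of the assumed deficit")
```

Even under these generous assumptions, the added exports cover well under the full assumed deficit, before counting any increase from the further loss of manufacturing the text describes.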

More important, it is implausible to believe that the United States will be able to dominate this area in the decades ahead. The United States certainly has a head start in sophisticated computer technologies and in some management practices, but it is questionable how long this advantage can be maintained. There are already many world-class computer service companies in India and elsewhere in the developing world, and this number is increasing rapidly.

The computer and software engineers in these countries are every bit as qualified as their U.S. counterparts and are often prepared to work for less than one-tenth of U.S. wages. Furthermore, unlike cars and steel, which are very expensive to transport over long distances, it is costless to ship software anywhere in the world. Given the basic economics, it seems a safe bet that the United States will lose its share in this sector of the world economy. In twenty years it is quite likely that the United States will be a net importer of this category of service, unless of course wages in the United States adjust to world levels.

In short, the idea that the United States can survive without manufacturing is implausible: It implies an absurdly rapid rate of growth of service exports for which there is no historical precedent. Many economists and economic pundits asserted that house prices could keep rising forever in spite of the blatant absurdity of this position. The claim that the U.S. economy can be sustained without a sizable manufacturing sector is an equally absurd proposition.

THERE ARE at least three major reasons why a nation must indeed make things to maintain its prosperity: First, making goods is on balance—with exceptions—more productive than providing services, and rising productivity is the fundamental source of prosperity; second, related to the first, making goods creates higher-paying jobs on balance—again, with a few exceptions; third, a major nation must be able to maintain a balanced current account (and trade balance) over time, and goods are far more tradeable than services. Without something to export, a nation will either become over-indebted or forced to reduce its standard of living.

The United States has looked the other way regarding these important issues for a variety of reasons, but underlying its neglect are certain narratives about how economies work that have been highly misleading. One of the more misleading narratives of recent decades involves the rapid growth of services industries when compared to the rest of the economy. It goes like this: Services will naturally replace manufacturing in an advancing economy exactly as manufacturing replaced agriculture in the 1800s. Do not be concerned. Remember how inappropriately concerned people were a century and a half ago with the rise of manufacturing? The rise in services is the best use of American resources.

Of course, within every overgeneralization lies a kernel of truth. Same here. Once we feed, clothe, house, and auto-mobilize ourselves, many economists agree that we mostly want to go to the movies or watch TV, hang out at the mall, trade stocks in our Schwab accounts, and, if financially healthy, go to the doctor a lot. There is thus no need to be alarmed that only 8 or 9 percent of American workers are employed at a factory that makes things. To the contrary, this is proof of the economy’s sophistication and its evolution toward providing Americans with what they really want. Moreover, manufacturing’s productivity is rising rapidly—which means fewer workers are needed for the same output and the price of an equal quantity of goods falls.

A lower manufacturing share of GDP is therefore the natural course of events. In fact, productivity gains are the core reason for job loss. There are even good services jobs—finance, for example. Meantime, corporate profits rise, which is proof of the pudding and the guarantor of high levels of capital investment and the future of the nation. Not long ago, the management guru Peter Drucker wrote that all America had to do was learn how to produce services more productively. I suppose America listened, because it has now created the remarkably productive Wal-Mart, which in turn supplied America with some of the worst jobs in the nation.

ACCORDING TO neo-classical equilibrium theory, all of this was supposed to happen as naturally as a dolphin plies the tides. As always, there are controversies over how fast manufacturing’s share of GDP has fallen, but I don’t think many would now dispute that there has been a significant drop since the late 1970s. Nor would many dispute that the trade deficit in manufactured goods, which is enormous, has a great deal to do with job loss.

Thus, manufacturing should always have been a focus of government policies. But America did far worse than merely neglect it. The decline of manufacturing got a big push from the Democrats in charge in the 1990s, from most Republicans since the early 1980s (in particular the hard Rightists), and increasingly from most of mainstream economic academia. This push—really a shove—was the tolerance and further promotion of an overvalued U.S. dollar.

The American dollar had been high through much of the Bretton Woods period, but in 1979 it took off and rose some 60 to 75 percent, depending on the trade-weighted average used, until 1984. High real interest rates in the early 1980s under Federal Reserve Chairman Paul Volcker attracted foreign funds, while Reagan’s simultaneous Keynesian thrust of tax cuts and defense spending produced a fast-growing economy in the mid-1980s. In five years, the high dollar dramatically raised the relative price of American manufactured goods; coupled with the steep recession, it clobbered manufacturing.

After the dollar declined in the wake of Jim Baker’s Plaza Accord, the value of the dollar turned up again and kept rising inexorably until only a couple of years ago. Manufacturing thus did not decline from natural causes; it was hastened to the edge of the cliff and pushed off by the high dollar. The relevance of manufacturing was minimized by policymakers who saw an easy way to attract foreign investment and compensate for ever more borrowing, all the while satisfying Wall Street profit seekers.

And thus America stopped making things. American manufacturing was at an enormous disadvantage in the world. One consequence was the permanent loss of many hundreds of thousands of jobs. But not only that. Entire industries were decimated, needed skills lost, R&D foregone, the innovation from learning-by-doing never undertaken—and so on.

The rest of the world did not mind. American demand was the growth machine for Japan, the Tigers, and finally China. If America wanted to undermine good jobs in its own country, who were they to complain?

THE DISASTER of this policy is now clear. Left to its own devices, the free market in currencies is probably the most devastating economic idea of our times. Because the dollar reigned for so long as the only trustworthy reserve currency, America got a free pass to run up a big trade deficit without the concomitant rise in interest rates. This led to self-destructive abuse. Americans didn’t have to save to finance borrowing and they could still borrow to buy what they wanted.

This led to borrowing at damaging levels. Greenspan, for example, could push interest rates to rock bottom in the early 2000s without undermining the value of the dollar and raising inflationary fears. Meantime, the Chinese and others, intoxicated by the power of their export-led growth model, felt no pressure to raise wages at home and build a domestic market which the early rich nations of Europe and North America had long ago learned was a critical foundation—an idea that some have now forgotten.

The world’s trade and investment imbalances led directly to the current crisis. Debt, not wages, propelled demand in America. And not only American but also international institutions invested dollars in bad American mortgages and the housing bubble. Earlier they had done the same in the high-technology sectors.

So the extent of the decline of manufacturing in the United States was not natural. Meantime, under this economic model, finance became America’s leading industry, accounting for more than 30 percent of profits in recent years and more than 40 percent of profits among the Standard & Poor’s 500.

If a high dollar had not been allowed to become the centerpiece of the economic model, manufacturing would have declined but to a far lesser degree. This raises the second issue. Should we let manufacturing follow a natural, market-driven course? The answer is that we should not. It is nonsense to think that free markets will automatically create the industries a nation needs. The thinking that suggests it will is the result of the ascendance of simplistic free-market economic theory.

In America, we fail to develop industries for which there are few short-term incentives or that are too risky or large to be undertaken by private capital. There are gaping holes in what we make in America: no light rail or subway cars to speak of, for example, and far less agricultural equipment and almost no machine tools, once the pride and joy of our early industrial era. We are in desperate need of money for alternative energy solutions. We spent torrentially on fiber communication lines that were unneeded. We lag in broadband coverage. We of course make almost no consumer electronics products or textiles.

We remain leaders in chip-related high technology. But it was the government that saved Intel in the 1980s, and it is the remarkable fall in the cost of computer power with Intel microprocessors that was the principal causal factor in the so-called “New Economy.” We lead in big pharmaceuticals, but that’s because the National Institutes of Health and other government agencies have so intelligently subsidized science and research at U.S. universities. We have a huge defense industry, which is a big exporter, including aircraft. (We know why.) Meantime, the nation’s overall R&D is spotty and weak. The education of engineers and scientists remains well behind our production of MBAs.

Today, to take the most straightforward measure, manufacturing final sales are 10 percentage points lower as a share of GDP than they were in the early 1980s. That’s $1.5 trillion worth. Losing one million manufacturing jobs more than necessary has put an enormous dent in wages in America, where the typical male in his thirties now makes less after inflation than the typical male in his thirties did in the 1970s.

SO HERE we are: Enormous imbalances in current accounts everywhere have put the world in a hole from which it may not climb out in the near future. The imbalances are a consequence of everyone taking the easy way out—and most doing so against the most vital long-term interests of the U.S. economy. The United States has succumbed, in particular, to the short-term interests of powerful Wall Street players.

To take one end of the spectrum, the Chinese, now in serious recession, must develop a domestic market. At the other end is the U.S., which from colonial times until the 1970s paid the highest wages in the world. But it no longer does. It is a high-productivity, low-wage nation. Wal-Mart is the symbol of the broader demise. A major reason is the loss of manufacturing jobs.

Because the United States can no longer make many things—it doesn’t have the factories, the labor or management expertise, the new ideas or proper incentives—the trade deficit is that much harder to correct, even if the dollar falls again. An industrial policy, such as the one partly incorporated in the new Obama stimulus package, has fewer teeth because much of the domestic spending will necessarily go to imports.

In sum, then, no nation can sustain the imbalances America has had since the late 1980s. Goods make up the bulk of what is exported. Critically, making things also creates good jobs, generates ideas for the future, educates and trains workers, and has enormous multiplier effects through the purchase of goods for production and the payment of high wages. Contrary to widespread conventional wisdom, no rich nation will survive on services alone.

The United States requires an appropriate currency policy. Since it needs the cooperation of all nations, it is difficult to be optimistic. But present events may yet bring such cooperation about, and we can only hope they do so in a stable way. The United States also requires a realistic industrial policy to support needed industry, the ongoing development of skills and products, and appropriate levels of R&D. The lack of such thinking in America—even after the crisis—is yet another failure of over-simplified, market-oriented economic theory.

Jeff Madrick is editor of Challenge Magazine and director of policy research at the Schwartz Center for Economic Policy Analysis, The New School. He is the author of Taking America, The End of Affluence, and most recently The Case for Big Government.

The Reagan-Thatcher model, which favored finance over domestic manufacturing, has collapsed after thirty years of dominance and what we need—and what we can build—is a capitalism more attuned to our national concerns. The decline of American manufacturing has saddled us not only with a seemingly permanent negative balance of trade but with a business community less and less concerned with America’s productive capacities. When manufacturing companies dominated what was still a national economy in the 1950s and 1960s, they favored and profited from improvements in America’s infrastructure and education. The interstate highway system and the G.I. Bill were good for General Motors and for the U.S.A. From 1875 to 1975, the level of schooling for the average American increased by seven years, creating a more educated workforce than any of our competitors’ had. Since 1975, however, it hasn’t increased at all. The mutually reinforcing rise of financialization and globalization broke the bond between American capitalism and America’s interests.

Manufacturing has become too global to permit the United States to revert to the level of manufacturing it had in the good old days of Keynes and Ike, but it would be a positive development if we had a capitalism that once again focused on making things rather than deals. In Germany, manufacturing still dominates finance, which is why Germany has been the world’s leader in exports. German capitalism didn’t succumb to the financialization that swept the United States and Britain in the 1980s, in part because its companies raise their capital, as ours used to, from retained earnings and banks rather than the markets. Company managers set long-term policies while market pressures for short-term profits are held in check. The focus on long-term performance over short-term gain is reinforced by Germany’s stakeholder, rather than shareholder, model of capitalism: Worker representatives sit on boards of directors, unionization remains high, income distribution is more equitable, social benefits are generous. Nonetheless, German companies are among the world’s most competitive in their financial viability and the quality of their products. Yes, Germany’s export-fueled economy is imperiled by the global collapse in consumption, but its form of capitalism has proved more sustainable than Wall Street’s.

So does Germany offer a model for the United States? Yes—up to a point. Certainly, U.S. ratios of production to consumption and wealth creation to debt creation have gotten dangerously out of whack. Certainly, the one driver and beneficiary of this epochal change—our financial sector—has to be scaled back and regulated (if not taken out and shot). Similarly, to create a business culture attuned more to investment than speculation, and with a preferential option for the United States, corporations should be made legally answerable not just to shareholders but also to stakeholders—their employees and community. That would require, among other things, changing the laws governing the composition of corporate boards.

In addition to bolstering industry, we should take a cue from Scandinavia’s social capitalism, which is less manufacturing-centered than the German model. The Scandinavians have upgraded the skills and wages of their workers in the retail and service sectors—the sectors that employ the majority of our own workforce. In consequence, fully employed impoverished workers, of which there are millions in the United States, do not exist in Scandinavia.

Making such changes here would require laws easing unionization (such as the Employee Free Choice Act, which was introduced this week in Congress) and policies that professionalize jobs in child care, elder care and private security. To be sure, this form of capitalism requires a larger public sector than we have had in recent years. But investing in more highly trained and paid teachers, nurses and child-care workers is more likely to produce sustained prosperity than investing in the asset bubbles to which Wall Street was so fatally attracted.

Would such changes reduce the dynamism of the American economy? Not necessarily, particularly since Wall Street often mistook deal-making for dynamism. Indeed, since finance eclipsed manufacturing as our dominant sector, our rates of inter-generational mobility have fallen behind those in presumably less dynamic Europe.

Wall Street’s capitalism is dying in disgrace. It’s time for a better model.

Over the past 60 years, victims worldwide have endured the CIA’s “torture paradigm,” developed at a cost that reached $1 billion annually, according to historian Alfred McCoy in his book A Question of Torture. He shows how torture methods the CIA developed from the 1950s surfaced with little change in the infamous photos at Iraq’s Abu Ghraib prison. There is no hyperbole in the title of Jennifer Harbury’s penetrating study of the U.S. torture record: Truth, Torture, and the American Way. So it is highly misleading, to say the least, when investigators of the Bush gang’s descent into the global sewers lament that “in waging the war against terrorism, America had lost its way.”

None of this is to say that Bush-Cheney-Rumsfeld et al. did not introduce important innovations. In ordinary American practice, torture was largely farmed out to subsidiaries, not carried out by Americans directly in their own government-established torture chambers. As Allan Nairn, who has carried out some of the most revealing and courageous investigations of torture, points out: “What the Obama [ban on torture] ostensibly knocks off is that small percentage of torture now done by Americans while retaining the overwhelming bulk of the system’s torture, which is done by foreigners under U.S. patronage. Obama could stop backing foreign forces that torture, but he has chosen not to do so.”

Obama did not shut down the practice of torture, Nairn observes, but “merely repositioned it,” restoring it to the American norm, a matter of indifference to the victims. “[H]is is a return to the status quo ante,” writes Nairn, “the torture regime of Ford through Clinton, which, year by year, often produced more U.S.-backed strapped-down agony than was produced during the Bush/Cheney years.”

Sometimes the American engagement in torture was even more indirect. In a 1980 study, Latin Americanist Lars Schoultz found that U.S. aid “has tended to flow disproportionately to Latin American governments which torture their citizens,… to the hemisphere’s relatively egregious violators of fundamental human rights.” Broader studies by Edward Herman found the same correlation, and also suggested an explanation. Not surprisingly, U.S. aid tends to correlate with a favorable climate for business operations, commonly improved by the murder of labor and peasant organizers and human rights activists and other such actions, yielding a secondary correlation between aid and egregious violation of human rights.

These studies took place before the Reagan years, when the topic was not worth studying because the correlations were so clear.

Small wonder that President Obama advises us to look forward, not backward — a convenient doctrine for those who hold the clubs. Those who are beaten by them tend to see the world differently, much to our annoyance…

Liberal economists pine for days no liberal should want to revisit.

“The America I grew up in was a relatively equal middle-class society. Over the past generation, however, the country has returned to Gilded Age levels of inequality.” So sighs Paul Krugman, the Nobel Prize–winning Princeton economist and New York Times columnist, in his recent book The Conscience of a Liberal.

The sentiment is nothing new. Political progressives such as Krugman have been decrying increases in income inequality for many years now. But Krugman has added a novel twist, one that has important implications for public policy and economic discourse in the age of Obama. In seeking explanations for the widening spread of incomes during the last four decades, researchers have focused overwhelmingly on broad structural changes in the economy, such as technological progress and demographic shifts. Krugman argues that these explanations are insufficient. “Since the 1970s,” he writes, “norms and institutions in the United States have changed in ways that either encouraged or permitted sharply higher inequality. Where, however, did the change in norms and institutions come from? The answer appears to be politics.”

To understand Krugman’s argument, we can’t start in the 1970s. We have to back up to the 1930s and ’40s—when, he contends, the “norms and institutions” that shaped a more egalitarian society were created. “The middle-class America of my youth,” Krugman writes, “is best thought of not as the normal state of our society, but as an interregnum between Gilded Ages. America before 1930 was a society in which a small number of very rich people controlled a large share of the nation’s wealth.” But then came the twin convulsions of the Great Depression and World War II, and the country that arose out of those trials was a very different place. “Middle-class America didn’t emerge by accident. It was created by what has been called the Great Compression of incomes that took place during World War II, and sustained for a generation by social norms that favored equality, strong labor unions and progressive taxation.”

The Great Compression is a term coined by the economists Claudia Goldin of Harvard and Robert Margo of Boston University to describe the dramatic narrowing of the nation’s wage structure during the 1940s. The real wages of manufacturing workers jumped 67 percent between 1929 and 1947, while the top 1 percent of earners saw a 17 percent drop in real income. These egalitarian trends can be attributed to the exceptional circumstances of the period: precipitous declines at the top end of the income spectrum due to economic cataclysm; wartime wage controls that tended to compress wage rates; rapid growth in the demand for low-skilled labor, combined with the labor shortages of the war years; and rapid growth in the relative supply of skilled workers due to a near doubling of high school graduation rates.

Yet the return to peacetime and prosperity did not result in a shift back toward the status quo ante. The more egalitarian income structure persisted for decades. For an explanation, Krugman leans heavily on a 2007 paper by the Massachusetts Institute of Technology economists Frank Levy and Peter Temin, who argue that postwar American history has been a tale of two widely divergent systems of political economy. First came the “Treaty of Detroit,” characterized by heavy unionization of industry, steeply progressive taxation, and a high minimum wage. Under that system, median wages kept pace with the economy’s overall productivity growth, and incomes at the lower end of the scale grew faster than those at the top. Beginning around 1980, though, the Treaty of Detroit gave way to the free market “Washington Consensus.” Tax rates on high earners fell sharply, the real value of the minimum wage declined, and private-sector unionism collapsed. As a result, most workers’ incomes failed to share in overall productivity gains while the highest earners had a field day.

This revisionist account of the fall and rise of income inequality is being echoed daily in today’s public policy debates. Under the conventional view, rising inequality is a side effect of economic progress—namely, continuing technological breakthroughs, especially in communications and information technology. Consequently, when economists have supported measures to remedy inequality, they have typically shied away from structural changes in market institutions. Rather, they have endorsed more income redistribution to reduce post-tax income differences, along with remedial education, job retraining, and other programs designed to raise the skill levels of lower-paid workers.

By contrast, Krugman sees the rise of inequality as a consequence of economic regress—in particular, the abandonment of well-designed economic institutions and healthy social norms that promoted widely shared prosperity. Such an assessment leads to the conclusion that we ought to revive the institutions and norms of Paul Krugman’s boyhood, in broad spirit if not in every detail.

There is good evidence that changes in economic policies and social norms have indeed contributed to a widening of the income distribution since the 1970s. But Krugman and other practitioners of nostalgianomics are presenting a highly selective account of what the relevant policies and norms were and how they changed.

The Treaty of Detroit was built on extensive cartelization of markets, limiting competition to favor producers over consumers. The restrictions on competition were buttressed by racial prejudice, sexual discrimination, and postwar conformism, which combined to limit the choices available to workers and potential workers alike. Those illiberal social norms were finally swept aside in the cultural tumults of the 1960s and ’70s. And then, in the 1970s and ’80s, restraints on competition were substantially reduced as well, to the applause of economists across the ideological spectrum. At least until now.

Stifled Competition

The economic system that emerged from the New Deal and World War II was markedly different from the one that exists today. The contrast between past and present is sharpest when we focus on one critical dimension: the degree to which public policy either encourages or thwarts competition.

The transportation, energy, and communications sectors were subject to pervasive price and entry regulation in the postwar era. Railroad rates and service had been under federal control since the Interstate Commerce Act of 1887, but the Motor Carrier Act of 1935 extended the Interstate Commerce Commission’s regulatory authority to cover trucking and bus lines as well. In 1938 airline routes and fares fell under the control of the Civil Aeronautics Authority, later known as the Civil Aeronautics Board. After the discovery of the East Texas oil field in 1930, the Texas Railroad Commission acquired the effective authority to regulate the nation’s oil production. Starting in 1938, the Federal Power Commission regulated rates for the interstate transmission of natural gas. The Federal Communications Commission, created in 1934, allocated licenses to broadcasters and regulated phone rates.

Beginning with the Agricultural Adjustment Act of 1933, prices and production levels on a wide variety of farm products were regulated by a byzantine complex of controls and subsidies. High import tariffs shielded manufacturers from international competition. And in the retail sector, aggressive discounting was countered by state-level “fair trade laws,” which allowed manufacturers to impose minimum resale prices on nonconsenting distributors.

Comprehensive regulation of the financial sector restricted competition in capital markets too. The McFadden Act of 1927 added a federal ban on interstate branch banking to widespread state-level restrictions on intrastate branching. The Glass-Steagall Act of 1933 erected a wall between commercial and investment banking, effectively brokering a market-sharing agreement protecting commercial and investment banks from each other. Regulation Q, instituted in 1933, prohibited interest payments on demand deposits and set interest rate ceilings for time deposits. Provisions of the Securities Act of 1933 limited competition in underwriting by outlawing pre-offering solicitations and undisclosed discounts. These and other restrictions artificially stunted the depth and development of capital markets, muting the intensity of competition throughout the larger “real” economy. New entrants are much more dependent on a well-developed financial system than are established firms, since incumbents can self-finance through retained earnings or use existing assets as collateral. A hobbled financial sector acts as a barrier to entry and thereby reduces established firms’ vulnerability to competition from entrepreneurial upstarts.

The highly progressive tax structure of the early postwar decades further dampened competition. The top marginal income tax rate shot up from 25 percent to 63 percent under Herbert Hoover in 1932, climbed as high as 94 percent during World War II, and stayed at 91 percent during most of the 1950s and early ’60s. Research by the economists William Gentry of Williams College and Glenn Hubbard of Columbia University has found that such rates act as a “success tax,” discouraging employees from striking out as entrepreneurs.

Finally, competition in labor markets was subject to important restraints during the early postwar decades. The triumph of collective bargaining meant the active suppression of wage competition in a variety of industries. In the interest of boosting wages, unions sometimes worked to restrict competition in their industries’ product markets as well. Garment unions connived with trade associations to set prices and allocate production among clothing makers. Coal miners’ unions attempted to regulate production by dictating how many days a week mines could be open.

MIT economists Levy and Temin don’t mention it, but highly restrictive immigration policies were another significant brake on labor market competition. With the establishment of country-specific immigration quotas under the Immigration Act of 1924, the foreign-born share of the U.S. population plummeted from 13 percent in 1920 to 5 percent by 1970. As a result, competition at the less-skilled end of the U.S. labor market was substantially reduced.

Solidarity and Chauvinism

The anti-competitive effects of the Treaty of Detroit were reinforced by the prevailing social norms of the early postwar decades. Here Krugman and company focus on executive pay. Krugman quotes wistfully from John Kenneth Galbraith’s characterization of the corporate elite in his 1967 book The New Industrial State: “Management does not go out ruthlessly to reward itself—a sound management is expected to exercise restraint.” According to Krugman, “For a generation after World War II, fear of outrage kept executive salaries in check. Now the outrage is gone. That is, the explosion in executive pay represents a social change…like the sexual revolution of the 1960’s—a relaxation of old strictures, a new permissiveness, but in this case the permissiveness is financial rather than sexual.”

Krugman is on to something. But changing attitudes about lavish compensation packages are just one small part of a much bigger cultural transformation. During the early postwar decades, the combination of in-group solidarity and out-group hostility was much more pronounced than what we’re comfortable with today.

Consider, first of all, the dramatic shift in attitudes about race. Open and unapologetic discrimination by white Anglo-Saxon Protestants against other ethnic groups was widespread and socially acceptable in the America of Paul Krugman’s boyhood. How does racial progress affect income inequality? Not the way we might expect. The most relevant impact might have been that more enlightened attitudes about race encouraged a reversal in the nation’s restrictive immigration policies. The effect was to increase the number of less-skilled workers and thereby intensify competition among them for employment.

Under the system that existed between 1924 and 1965, immigration quotas were set for each country based on the percentage of people with that national origin already living in the U.S. (with immigration from East and South Asia banned outright until 1952). The explicit purpose of the national-origin quotas was to freeze the ethnic composition of the United States—that is, to preserve white Protestant supremacy and protect the country from “undesirable” races. “Unquestionably, there are fine human beings in all parts of the world,” Sen. Robert Byrd (D-W.V.) said in defense of the quota system in 1965, “but people do differ widely in their social habits, their levels of ambition, their mechanical aptitudes, their inherited ability and intelligence, their moral traditions, and their capacity for maintaining stable governments.”

But the times had passed the former Klansman by. With the triumph of the civil rights movement, official discrimination based on national origin was no longer sustainable. Just two months after signing the Voting Rights Act, President Lyndon Johnson signed the Immigration and Nationality Act of 1965, ending the “un-American” system of national-origin quotas and its “twin barriers of prejudice and privilege.” The act inaugurated a new era of mass immigration: Foreign-born residents of the United States have surged from 5 percent of the population in 1970 to 12.5 percent as of 2006.

This wave of immigration exerted a mild downward pressure on the wages of native-born low-skilled workers, with most estimates showing a small effect. Immigration’s more dramatic impact on measurements of inequality has come by increasing the number of less-skilled workers, thereby increasing apparent inequality by depressing average wages at the low end of the income distribution. According to the American University economist Robert Lerman, excluding recent immigrants from the analysis would eliminate roughly 30 percent of the increase in adult male annual earnings inequality between 1979 and 1996.

Although the large influx of unskilled immigrants has made American inequality statistics look worse, it has actually reduced inequality for the people involved. After all, immigrants experience large wage gains as a result of relocating to the United States, thereby reducing the cumulative wage gap between them and top earners in this country. When Lerman recalculated trends in inequality to include, at the beginning of the period, recent immigrants and their native-country wages, he found equality had increased rather than decreased. Immigration has increased inequality at home but decreased it on a global scale.
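The composition effect behind Lerman’s recalculation is easy to see in a toy calculation. The sketch below uses invented wage figures (not drawn from Lerman’s study) and a crude max-to-min ratio as its inequality measure: adding low-paid newcomers makes the domestic distribution look more unequal, even as migration shrinks the gap between the migrants themselves and top U.S. earners.

```python
# Toy illustration of how low-wage immigration can raise measured
# domestic inequality while lowering it for the migrants themselves.
# All wage figures are hypothetical.

def p90_p10_ratio(wages):
    """Crude inequality measure: ratio of highest to lowest wage."""
    return max(wages) / min(wages)

natives = [30_000, 50_000, 90_000]     # annual wages, hypothetical
immigrants_home = [4_000, 5_000]       # what migrants earned at home
immigrants_us = [18_000, 20_000]       # what they earn after migrating

# Measured U.S. inequality before and after immigration:
before = p90_p10_ratio(natives)                 # 3.0
after = p90_p10_ratio(natives + immigrants_us)  # 5.0: looks "worse"

# But comparing the same people before and after migration, the gap
# between the migrants and top U.S. earners shrinks dramatically:
global_before = p90_p10_ratio(natives + immigrants_home)  # 22.5
global_after = p90_p10_ratio(natives + immigrants_us)     # 5.0

print(before, after, global_before, global_after)
```

The domestic statistic worsens purely because the sample now includes more low earners; no native worker’s wage changed in the example.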

Just as racism helped to keep foreign-born workers out of the U.S. labor market, another form of in-group solidarity, sexism, kept women out of the paid work force. As of 1950, the labor force participation rate for women 16 and older stood at only 34 percent. By 1970 it had climbed to 43 percent, and as of 2005 it had jumped to 59 percent. Meanwhile, the range of jobs open to women expanded enormously.

Paradoxically, these gains for gender equality widened rather than narrowed income inequality overall. Because of the prevalence of “assortative mating”—the tendency of people to choose spouses with similar educational and socioeconomic backgrounds—the rise in dual-income couples has exacerbated household income inequality: Now richer men are married to richer wives. Between 1979 and 1996, the proportion of working-age men with working wives rose by approximately 25 percent among those in the top fifth of the male earnings distribution, and their wives’ total earnings rose by over 100 percent. According to a 1999 estimate by Gary Burtless of the Brookings Institution, this unanticipated consequence of feminism explains about 13 percent of the total rise in income inequality since 1979.
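The arithmetic of assortative mating can be made concrete with a stylized example (earnings figures hypothetical, not from Burtless’s estimate). Individual earnings are identical in both scenarios; only the pairing changes, yet household income dispersion diverges sharply.

```python
from statistics import pstdev

# Hypothetical individual earnings (thousands of dollars).
men = [20, 40, 60, 80]
women = [10, 30, 50, 70]

# Assortative pairing: richest man with richest woman, and so on down.
assortative = [m + w for m, w in zip(sorted(men), sorted(women))]
# -> [30, 70, 110, 150]

# Fully mixed pairing: richest man with poorest woman, and so on.
mixed = [m + w for m, w in zip(sorted(men), sorted(women, reverse=True))]
# -> [90, 90, 90, 90]

# Household dispersion is far higher under assortative pairing,
# even though the individual earnings pool is exactly the same.
print(pstdev(assortative), pstdev(mixed))
```

Real marriage markets sit between these extremes, but any drift toward the assortative end mechanically widens the household income distribution.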

Racism and sexism are ancient forms of group identity. Another form, more in line with what Krugman has in mind, was a distinctive expression of U.S. economic and social development in the middle decades of the 20th century. The journalist William Whyte described this “social ethic” in his 1956 book The Organization Man, outlining a sensibility that defined itself in studied contrast to old-style “rugged individualism.” When contemporary critics scorned the era for its conformism, they weren’t just talking about the ranch houses and gray flannel suits. The era’s mores placed an extraordinary emphasis on fitting into the group.

“In the Social Ethic I am describing,” wrote Whyte, “man’s obligation is…not so much to the community in a broad sense but to the actual, physical one about him, and the idea that in isolation from it—or active rebellion against it—he might eventually discharge the greater service is little considered.” One corporate trainee told Whyte that he “would sacrifice brilliance for human understanding every time.” A personnel director declared that “any progressive employer would look askance at the individualist and would be reluctant to instill such thinking in the minds of trainees.” Whyte summed up the prevailing attitude: “All the great ideas, [trainees] explain, have already been discovered and not only in physics and chemistry but in practical fields like engineering. The basic creative work is done, so the man you need—for every kind of job—is a practical, team-player fellow who will do a good shirt-sleeves job.”

It seems entirely reasonable to conclude that this social ethic helped to limit competition among business enterprises for top talent. When secure membership in a stable organization is more important than maximizing your individual potential, the most talented employees are less vulnerable to the temptation of a better offer elsewhere. Even if they are tempted, a strong sense of organizational loyalty makes them more likely to resist and stay put.

Increased Competition, Increased Inequality

Krugman blames the conservative movement for income inequality, arguing that right-wingers exploited white backlash in the wake of the civil rights movement to hijack first the Republican Party and then the country as a whole. Once in power, they duped the public with “weapons of mass distraction” (i.e., social issues and foreign policy) while “cut[ting] taxes on the rich,” “try[ing] to shrink government benefits and undermine the welfare state,” and “empower[ing] businesses to confront and, to a large extent, crush the union movement.”

Obviously, conservatism has contributed in important ways to the political shifts of recent decades. But the real story of those changes is more complicated, and more interesting, than Krugman lets on. Influences across the political spectrum have helped shape the more competitive, more individualistic, and less equal society we now live in.

Indeed, the relevant changes in social norms were led by movements associated with the left. The women’s movement led the assault on sex discrimination. The civil rights campaigns of the 1950s and ’60s inspired more enlightened attitudes about race and ethnicity, with results such as the Immigration and Nationality Act of 1965, a law spearheaded by a young Sen. Edward Kennedy (D-Mass.). And then there was the counterculture of the 1960s, whose influence spread throughout American society in the Me Decade that followed. It upended the social ethic of group-minded solidarity and conformity with a stampede of unbridled individualism and self-assertion. With the general relaxation of inhibitions, talented and ambitious people felt less restrained from seeking top dollar in the marketplace. Yippies and yuppies were two sides of the same coin.

Contrary to Krugman’s narrative, liberals joined conservatives in pushing for dramatic changes in economic policy. In addition to his role in liberalizing immigration, Kennedy was a leader in pushing through both the Airline Deregulation Act of 1978 and the Motor Carrier Act of 1980, which deregulated the trucking industry—and he was warmly supported in both efforts by the left-wing activist Ralph Nader. President Jimmy Carter signed these two pieces of legislation, as well as the Natural Gas Policy Act of 1978, which began the elimination of price controls on natural gas, and the Staggers Rail Act of 1980, which deregulated the railroad industry.

The three most recent rounds of multilateral trade talks were all concluded by Democratic presidents: the Kennedy Round in 1967 by Lyndon Johnson, the Tokyo Round in 1979 by Jimmy Carter, and the Uruguay Round in 1994 by Bill Clinton. And though it was Ronald Reagan who slashed the top income tax rate from 70 percent to 50 percent in 1981, it was two Democrats, Sen. Bill Bradley of New Jersey and Rep. Richard Gephardt of Missouri, who sponsored the Tax Reform Act of 1986, which pushed the top rate all the way down to 28 percent.

What about the unions? According to the Berkeley economist David Card, the shrinking of the unionized labor force accounted for 15 percent to 20 percent of the rise in overall male wage inequality between the early 1970s and the early 1990s. Krugman is right that labor’s decline stems in part from policy changes, but his ideological blinkers lead him to identify the wrong ones.

The only significant change to the pro-union Wagner Act of 1935 came through the Taft-Hartley Act, which outlawed closed shops (contracts requiring employers to hire only union members) and authorized state right-to-work laws (which ban contracts requiring employees to join unions). But that piece of legislation was enacted in 1947—three years before the original Treaty of Detroit between General Motors and the United Auto Workers. It would be a stretch to argue that the Golden Age ended before it even began.

Scrounging for a policy explanation, economists Levy and Temin point to the failure of a 1978 labor law reform bill to survive a Senate filibuster. But maintaining the status quo is not a policy change. They also describe President Reagan’s 1981 decision to fire striking air traffic controllers as a signal to employers that the government no longer supported labor unions.

While it is true that Reagan’s handling of that strike, along with his appointments to the National Labor Relations Board, made the policy environment for unions less favorable, the effect of those moves on unionization was marginal.

The major reason for the fall in unionized employment, according to a 2007 paper by Georgia State University economist Barry Hirsch, “is that union strength developed through the 1950s was gradually eroded by increasingly competitive and dynamic markets.” He elaborates: “When much of an industry is unionized, firms may prosper with higher union costs as long as their competitors face similar costs. When union companies face low-cost competitors, labor cost increases cannot be passed through to consumers. Factors that increase the competitiveness of product markets—increased international trade, product market deregulation, and the entry of low-cost competitors—make it more difficult for union companies to prosper.”

So the decline of private-sector unionism was abetted by policy changes, but the changes were not in labor policy specifically. They were the general, bipartisan reduction of trade barriers and price and entry controls. Unionized firms found themselves at a critical disadvantage. They shrank accordingly, and union rolls shrank with them.

Postmodern Progress

The move toward a more individualistic culture is not unique to the United States. As the political scientist Ronald Inglehart has documented in dozens of countries around the world, the shift toward what he calls “postmodern” attitudes and values is a predictable cultural response to rising affluence and expanding choices. “In a major part of the world,” he writes in his 1997 book Modernization and Postmodernization, “the disciplined, self-denying, and achievement-oriented norms of industrial society are giving way to an increasingly broad latitude for individual choice of lifestyles and individual self-expression.”

The increasing focus on individual fulfillment means, inevitably, less deference to tradition and organizations. “A major component of the Postmodern shift,” Inglehart argues, “is a shift away from both religious and bureaucratic authority, bringing declining emphasis on all kinds of authority. For deference to authority has high costs: the individual’s personal goals must be subordinated to those of a broader entity.”

Paul Krugman may long for the return of self-denying corporate workers who declined to seek better opportunities out of organizational loyalty, and thus kept wages artificially suppressed, but these are creatures of a bygone ethos—an ethos that also included uncritical acceptance of racist and sexist traditions and often brutish intolerance of deviations from mainstream lifestyles and sensibilities.

The rise in income inequality does raise issues of legitimate public concern. And reasonable people disagree hotly about what ought to be done to ensure that our prosperity is widely shared. But the caricature of postwar history put forward by Krugman and other purveyors of nostalgianomics won’t lead us anywhere. Reactionary fantasies never do.

Brink Lindsey (blindsey@cato.org) is vice president for research at the Cato Institute, which published the policy paper from which this article was adapted.