Blame the Economists, Not Economics

CAMBRIDGE – As the world economy tumbles off the edge of a precipice, critics of the economics profession are raising questions about its complicity in the current crisis. Rightly so: economists have plenty to answer for.

It was economists who legitimized and popularized the view that unfettered finance was a boon to society. They spoke with near unanimity when it came to the “dangers of government over-regulation.” Their technical expertise – or what seemed like it at the time – gave them a privileged position as opinion makers, as well as access to the corridors of power.

Very few among them (notable exceptions including Nouriel Roubini and Robert Shiller) raised alarm bells about the crisis to come. Perhaps worse still, the profession has failed to provide helpful guidance in steering the world economy out of its current mess. On Keynesian fiscal stimulus, economists’ views range from “absolutely essential” to “ineffective and harmful.”

On re-regulating finance, there are plenty of good ideas, but little convergence. From the near-consensus on the virtues of a finance-centric model of the world, the economics profession has moved to a near-total absence of consensus on what ought to be done.

So is economics in need of a major shake-up? Should we burn our existing textbooks and rewrite them from scratch?

Actually, no. Without recourse to the economist’s toolkit, we cannot even begin to make sense of the current crisis.

Why, for example, did China’s decision to accumulate foreign reserves result in a mortgage lender in Ohio taking excessive risks? If your answer does not use elements from behavioral economics, agency theory, information economics, and international economics, among others, it is likely to remain seriously incomplete.

The fault lies not with economics, but with economists. The problem is that economists (and those who listen to them) became over-confident in their preferred models of the moment: markets are efficient, financial innovation transfers risk to those best able to bear it, self-regulation works best, and government intervention is ineffective and harmful.

They forgot that there were many other models that led in radically different directions. Hubris creates blind spots. If anything needs fixing, it is the sociology of the profession. The textbooks – at least those used in advanced courses – are fine.

Non-economists tend to think of economics as a discipline that idolizes markets and a narrow concept of (allocative) efficiency. If the only economics course you take is the typical introductory survey, or if you are a journalist asking an economist for a quick opinion on a policy issue, that is indeed what you will encounter. But take a few more economics courses, or spend some time in advanced seminar rooms, and you will get a different picture.

Labor economists focus not only on how trade unions can distort markets, but also on how, under certain conditions, they can enhance productivity. Trade economists study the implications of globalization for inequality within and across countries. Finance theorists have written reams on the consequences of the failure of the “efficient markets” hypothesis. Open-economy macroeconomists examine the instabilities of international finance. Advanced training in economics requires learning about market failures in detail, and about the myriad ways in which governments can help markets work better.

Macroeconomics may be the only applied field within economics in which more training puts greater distance between the specialist and the real world, owing to its reliance on highly unrealistic models that sacrifice relevance to technical rigor. Sadly, in view of today’s needs, macroeconomists have made little progress on policy since John Maynard Keynes explained how economies could get stuck in unemployment due to deficient aggregate demand. Some, like Brad DeLong and Paul Krugman, would say that the field has actually regressed.

Economics is really a toolkit with multiple models – each a different, stylized representation of some aspect of reality. One’s skill as an economist depends on the ability to pick and choose the right model for the situation.

Economics’ richness has not been reflected in public debate because economists have taken far too much license. Instead of presenting menus of options and listing the relevant trade-offs – which is what economics is about – economists have too often conveyed their own social and political preferences. Instead of being analysts, they have been ideologues, favoring one set of social arrangements over others.

Furthermore, economists have been reluctant to share their intellectual doubts with the public, lest they “empower the barbarians.” No economist can be entirely sure that his preferred model is correct. But when he and others advocate it to the exclusion of alternatives, they end up communicating a vastly exaggerated degree of confidence about what course of action is required.

Paradoxically, then, the current disarray within the profession is perhaps a better reflection of the profession’s true value added than its previous misleading consensus. Economics can at best clarify the choices for policy makers; it cannot make those choices for them.

When economists disagree, the world gets exposed to legitimate differences of views on how the economy operates. It is when they agree too much that the public should beware.

Dani Rodrik, Professor of Political Economy at Harvard University’s John F. Kennedy School of Government, is the first recipient of the Social Science Research Council’s Albert O. Hirschman Prize. His latest book is One Economics, Many Recipes: Globalization, Institutions, and Economic Growth.

The Economics of Collapsing Markets
Frank Ackerman1 [Tufts University, USA]
Copyright: Frank Ackerman, 2008
Big banks are failing, bailouts measured in hundreds of billions of dollars are not nearly enough, jobs are vanishing, mortgages and retirement savings are turning to dust. Didn’t economic theory promise us that markets would behave better than this? Even the most ardent defenders of private enterprise are embarrassed by recent events: in the words of arch-conservative columnist William Kristol, “There’s nothing conservative about letting free markets degenerate into something close to Karl Marx’s vision of an atomizing, irresponsible and self-devouring capitalism.”2
So what does the current wreckage of the global financial system tell us about the theoretical virtues of the market economy?
Competitive markets are traditionally said to offer a framework in which, in the memorable words of the movie Wall Street, “greed is good.” Adam Smith’s parable of the invisible hand, the founding metaphor of modern economics, explains why the attempt by butchers, bakers and the like to increase their own individual incomes should turn out to promote the common good. The same notion, restated in rigorous and esoteric mathematics, is enshrined in general equilibrium theory, one of the crowning accomplishments of twentieth-century economics. Under a long list of often unrealistic assumptions, free markets have been proved to allow an ideal outcome – meaning that the market outcome is “Pareto optimal,” i.e. there is no way to improve someone’s lot without making someone else worse off.
Although academic research in economics has moved beyond this simple picture in several respects, the newer and subtler approaches have not yet had much influence on non-academic life. Textbooks and mainstream policy analyses – the leading forms through which the economics profession influences the real world – still routinely invoke the imagery of the invisible hand and the notion that economic theory has demonstrated that market outcomes are optimal. Critics (myself included) have written volumes about what’s wrong with this picture.3 Broadly speaking, there are four fundamental flaws in the theory that private greed reliably creates social good. The financial crisis highlights the fourth and least familiar item in the list, involving access to information. But it will be helpful to begin with a brief review of the other flaws.

Four fundamental flaws
First, the theoretical defense of market outcomes rests on Pareto optimality, an absurdly narrow definition of social goals. A proposal to raise taxes on the richest five percent and lower taxes on everyone else is not “optimal” by this standard, since it makes only 95 percent of the population, not everyone, better off. Important public policies typically help some people at the expense of others: pollution controls are good for those who value clean air and water, but bad for the profits of major polluters. The invisible hand won’t achieve such non-consensual results; public goods require public choices.
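How restrictive the Pareto standard is can be shown with a toy calculation. This is my own illustration, not the author's; the dollar figures and the 95/5 split are invented for the example:

```python
# Toy illustration of the Pareto-optimality standard: a change counts as a
# "Pareto improvement" only if it hurts NO ONE and helps at least one person.

def is_pareto_improvement(deltas: list[float]) -> bool:
    """True iff every individual gain is >= 0 and at least one is > 0."""
    return all(d >= 0 for d in deltas) and any(d > 0 for d in deltas)

# Hypothetical tax reform: 95 people gain $100 each, 5 people lose $1,000 each.
tax_reform = [100.0] * 95 + [-1000.0] * 5

print(is_pareto_improvement(tax_reform))  # False: the 5 losers veto it
print(sum(tax_reform))                    # even though the net gain is +$4,500
```

The reform raises total income, yet the Pareto criterion cannot endorse it; any policy with identifiable losers is screened out, which is exactly the narrowness the text describes.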
Second, market competition only leads to the right outcomes if everything that matters is a marketable commodity with a meaningful price. Marxists and others have objected to the treatment of labor as a mere commodity; environmentalists have likewise objected to the view of nature as something to buy and sell. This is not a new idea: in the words of the 18th century philosopher Immanuel Kant, some things have a price, or relative worth; other things have a dignity, or intrinsic worth. Respect for the dignity of labor and of nature leads into a realm of rights and absolute standards, not prices and markets. It doesn’t matter how much someone would be willing to pay for the opportunity to engage in slavery, child labor, or the extinction of species; those options are not for sale. Which issues call for absolute standards, and which can safely be left to the market? This foundational question precedes and defines the legitimate scope of market competition; it cannot be answered from within the apparatus of economics as usual.
Third, the theory of competitive markets and the proof of their optimality rest on the assumption that no enterprise is large enough to wield noticeable power in the marketplace. Adam Smith’s butchers and bakers operated in a relentlessly competitive environment, as do the small producers and consumers of modern general equilibrium theory. In reality, businesses big enough to wield significant power over prices, wages, and production processes can be found throughout the economic landscape.
Big businesses thrive, in part, thanks to economies of scale in technology and work organization: bigger boilers and furnaces are physically more efficient than small ones; assembly lines can make labor more productive than individual craft work; computers are often more productive when they run the same software used by everyone else. Economies of scale are also important in establishing and advertising well-known brands: since no one ever has complete information about the market, as discussed below, there is a value to knowing exactly what to expect when you walk into a McDonald’s or a Starbucks.
Bigness can also be based on unethical, even illegal manipulation of markets to create monopoly or near-monopoly positions. Manipulation constantly reappears because the “rules of the game” create such a powerful incentive to break the rules. The story of the invisible hand, and its formalization in the theory of perfectly competitive markets, offers businesses only the life of the Red Queen in Through the Looking-Glass, running faster and faster to stay in the same place. Firms must constantly compete with each other to create better and cheaper products; as soon as they succeed and start to make greater profits, their competitors catch up with them, driving profits back down to the low level that is just enough to keep them all in business. An ambitious, profit-maximizing individual could easily conclude that there is more money to be made by cheating. In the absence of religious or other extra-economic commitments to play by the rules, the strongest incentive created by market competition is the search for an escape from competition, legitimately or otherwise.
Opportunities to cheat are entwined with the fourth flaw in the theory of perfect competition: all participants in the market are assumed to have complete information about products and prices. Adam Smith’s consumers were well-informed through personal experience about what the baker and the butcher were selling; their successors in conventional economic theory are likewise assumed to know the full range of what is for sale on the market, and how much they would benefit from buying each item. In the realm of finance, mortgage crises and speculative bubbles would be impossible if every investor knew the exact worth of every available investment – as, stereotypically, small-town bankers were once thought to know the credit-worthiness of households and businesses in their communities.

So many choices, so little time
The assumption of complete information fails on at least two levels, both relevant to the current crisis: a general issue of the sheer complexity of the market; and a more specific problem involving judgment of rare but costly risks. In general terms, a modern market economy is far too complex for any individual to understand and evaluate everything that is for sale. This limitation has inspired a number of alternative approaches to economics, ranging from Herbert Simon’s early theories of bounded rationality through the more recent work on limited and asymmetric information by Joseph Stiglitz and others. Since no one ever has complete information about what’s available on the market, there is no guarantee that unregulated private markets will reach the ideal outcome. Regulations that improve the flow of information can lead to an overall improvement, protecting the unwary and the uninformed.
When people buy things about which they are poorly informed, markets can work quite perversely. If people trust someone else’s judgment more than their own – as, for instance, many do when first buying a computer – then decisions by a small number of early adopters can create a cascade of followers, picking a winner based on very little information. Windows may not have been the best possible microcomputer operating system, but a small early lead in adoption snowballed into its dominant position today. Investment fads, market bubbles, and fashions of all sorts display the same follow-the-leader dynamics (but without the staying power of Windows).
When people have to make excessively complex decisions, there is no guarantee that they will choose wisely, or pick the option that is in their own best interest. Yet in areas such as health care and retirement savings, individuals are forced to make economic decisions that depend on detailed technical knowledge. The major decisions are infrequent and the cost of error is often high, so that learning by experience is not much help.
The same overwhelming complexity of available choices exists throughout financial markets. The menu of investment options is constantly shifting and expanding; financial innovation, i.e. creating and selling new varieties of securities, is an inexpensive process, requiring little more than a clever idea, a computer programmer, and a lawyer. Such innovation allows banks and other financial institutions to escape from old, regulated markets into new, ill-defined, and unregulated territory, potentially boosting their profits. Even at its best, the pursuit of financial novelty and the accompanying confusion undermines the traditional assumption that buyers always make well-informed choices. At its worst, the process of financial innovation provides ample opportunity to cheat, knowingly selling new types of securities for more than they are worth.

Information about the reliability of many potential investments is ostensibly provided by bond rating agencies. One of the minor scandals of the current financial crisis is the fact that the rating agencies are private firms working for the companies they are rating. Naturally, you are more likely to be rehired if you present your clients in the best possible light; indeed, it might not hurt your future prospects to occasionally bend the truth a bit in their favor. The Enron scandal similarly involved accounting firms that wanted to continue working for Enron – and reported that nothing was wrong with the company’s books, at a time when the top executives were engaged in massive fraud.

Preparing for the worst
There is also a more specific information problem involved in the financial crisis, concerning the likelihood of rare, catastrophic events. People care quite a bit about, and spend money preparing for, worst-case outcomes. The free-market fundamentalism and push for deregulation over the last thirty years, however, have rolled back many older systems of protection against catastrophe, increasing profits in good years but leaving industries and people exposed to enormous risks in bad years. These risks occur infrequently or irregularly enough that it is difficult, perhaps even literally impossible, to discover their true probabilities. Nonetheless, responding correctly to rare, expensive losses is crucial to many areas of public policy.
In the U.S., the risk that your house will have a fire next year is 0.4%. In effect, the average housing unit has a fire every 250 years; the most likely number of fires you will experience in your lifetime is clearly zero. Does this inspire you to cancel your fire insurance? You could, after all, spend the premium on luxuries that you have always wanted – an excellent plan for raising your standard of living, in every year that you don’t have a fire. Life insurance, frequently bought by parents of young children, addresses a similarly unlikely event: the overall U.S. death rate is less than 0.1 percent per year in your twenties, 0.2 percent in your thirties, and does not reach 1 percent per year until you turn 61. The continued existence of fire insurance and life insurance thus provides evidence that people care about catastrophic risks with probabilities in the tenths of a percent per year. In private life, people routinely spend money on insurance against such events, despite odds of greater than 99 percent that the coverage will prove unnecessary in any given year.
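The arithmetic behind "the most likely number of fires is clearly zero" can be sketched in a few lines. The 0.4% annual risk is from the text; the 60-year homeownership span is my own assumption for illustration:

```python
# Sketch of the fire-insurance arithmetic: annual risk 0.4% (from the text),
# over an assumed 60-year span of homeownership, treating years as independent.
from math import comb

p_annual = 0.004   # ~1 fire per 250 housing-unit-years
years = 60         # hypothetical homeownership span (my assumption)

def prob_k_fires(k: int) -> float:
    """Binomial probability of exactly k fires over the span."""
    return comb(years, k) * p_annual**k * (1 - p_annual)**(years - k)

p_zero = prob_k_fires(0)
p_one = prob_k_fires(1)

print(f"P(no fire in {years} years)   = {p_zero:.3f}")      # ~0.79
print(f"P(exactly one fire)          = {p_one:.3f}")        # ~0.19
print(f"P(at least one fire)         = {1 - p_zero:.3f}")   # ~0.21
```

Zero fires is indeed the single most likely outcome, yet the roughly one-in-five lifetime chance of a catastrophic loss is exactly what keeps people paying premiums.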
For catastrophic risks to individuals, demographic data are readily available, making the frequency of worst-case outcomes predictable (which is why insurance companies are willing to cover individual losses). For the most serious crises in agriculture, industry, or finance, there is no such database; the public events of greatest concern are very rare, and are dependent on complex social forces, making it virtually impossible to predict their timing or frequency.
There is, however, a strong desire to protect against potential crises, frequently through the accumulation of reserves; it is striking how often the same word is used in different contexts. Storing reserves of grain to protect against crop failure and famine is an ancient practice, already known in Biblical times and continuing into the twentieth century in many countries. Electricity regulation, as it existed throughout the United States until the 1980s (and still does in some states), required the regulated utilities to maintain reserve capacity to generate more electricity than is normally needed, often 12 to 20 percent above peak demand. And financial regulation requires banks and other lending institutions to hold reserves, either in cash or in something similarly safe, equal to a fixed fraction of their outstanding loans.
All of these forms of reserves look expensive in good years, but prevent or limit losses in bad years. How often will those bad years crop up? In non-crisis times, the potential price volatility and risks of losses in the housing and stock markets can appear to be pleasantly and misleadingly low. By many standards, the crash of 2008 is the worst that U.S. and world markets have seen since 1933, some 75 years earlier. No one has much first-hand knowledge of such crashes.
How could society maintain awareness and preparedness for catastrophic risks that exist in the historical record, but not in this generation’s experience? As Henry Paulson, Jr., the Treasury Secretary during the last years of the Bush administration, said after several months of floundering, unsuccessful responses to the financial meltdown of 2008,
“We are going through a financial crisis more severe and unpredictable than any in our lifetimes… There is no playbook for responding to turmoil we have never faced.”4
There used to be a playbook, dating from the days when we (or our grandparents) did face similar turmoil. A system of financial regulations, enacted in the aftermath of the 1930s Depression, drew on the lessons of that painful episode and provided some protection against another crash. Yet the experience of some decades of relative stability, in an era of anti-regulatory, laissez-faire ideology, has led to a loss of collective memory and allowed the rollback of many of the post-Depression regulations.

Rolling back the reserves
The free-market fundamentalism of the Reagan-Thatcher-Bush era sought to deregulate markets wherever possible. This included efforts (frequently successful) to eliminate the reserves that protected many industries and countries against bad times, in order to boost profits in non-crisis years. Starting in the 1980s, structural adjustment programs, imposed on developing countries by the IMF and the World Bank as conditions for loans, called for elimination of crop marketing boards and grain reserves, and for abandonment of the pursuit of self-sufficiency in food. It was better, according to the “Washington consensus” that dominated the development discourse of the day, for most countries to specialize in higher-value cash crops or other exports, and import food from lower-cost producers. Again, this is a great success in normal times, when nothing goes wrong in international markets for grain and other crops; in years of crop failures or unusually high grain prices, the “inefficient” old system of grain reserves and self-sufficiency looks much better.
At about the same time, the notion became widespread in U.S. policy circles that electricity regulation was antiquated and inefficient. Under the old system, utilities received a local monopoly in exchange for accepting the obligation to provide service to everyone who wanted electricity, at reasonable, regulated rates, while maintaining a mandated margin of reserve capacity. Deregulation, introduced on a state-by-state basis in the 1980s and 1990s, eliminated much of the previous regulations in order to allow competition in the sale of electricity. The pursuit of profit, in theory, would lead to ample capacity to generate electricity, while competition would keep the prices as low as possible. Yet none of the competitors retained the obligation to maintain those expensive, inefficient reserves of capacity.

California enjoyed 40 years of rapid growth without major blackouts or electricity crises under the old regulatory system. In the five years after deregulation, the demand for electricity grew much more rapidly than the supply, eliminating the state’s reserve capacity. The combination of an unusually hot summer, a booming economy, and intentional manipulation of the complex new electricity markets by Enron and other trading firms then led to the California electricity crisis of 2000-01, with extensive blackouts and peak-hour prices spiking up to hundreds of times the previous levels.
Parallel trends occurred in the world of finance. Before the 1980s, residential mortgages typically were issued by savings and loan associations (S&Ls). These community-based institutions were strictly regulated, with limits on the types of loans they could make and the interest rates they could offer to depositors. Squeezed by high inflation and by competition from money market funds in the late 1970s, the S&Ls pushed for, and won, extensive deregulation in the early 1980s. Once they were allowed to make a wider range of loans, freed of federal oversight, the S&Ls launched a massive wave of unsound lending in areas outside their past experience. Hundreds of S&Ls went bankrupt during the 1980s, leading to a federal bailout that seemed expensive by pre-2008 standards.
The regulation of S&Ls was part of the Glass-Steagall Act, enacted in 1933 to control speculation and protect bank deposits. While provisions affecting S&Ls were repealed in the 1980s, other key features of Glass-Steagall remained in effect until 1999. In particular, the 1999 repeal of Glass-Steagall allowed commercial banks to engage in many risky forms of lending and investment that had previously been closed to them. Then in 2004, the Securities and Exchange Commission (SEC) lowered the reserve requirements on the nation’s biggest investment banks, allowing them to make loans of up to 40 times their reserves (the previous limit had been 12 times their reserves). The result was the same as with the deregulation of S&Ls: taking on unfamiliar, new, seemingly profitable risks destroyed some of the nation’s biggest banks within a few years.
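The force of the 2004 rule change is easy to see in a back-of-the-envelope sketch. This is my own illustration of the arithmetic; the break-even loss rates are implied by the leverage caps in the text, not stated there:

```python
# Sketch of the leverage arithmetic behind the 2004 SEC rule change.
# A bank lending L dollars against R dollars of reserves is wiped out once
# loan losses reach R, i.e. at a loss rate of R/L = 1/leverage.

def wipeout_loss_rate(leverage: float) -> float:
    """Fraction of the loan book that, if lost, exhausts reserves."""
    return 1.0 / leverage

old_cap = wipeout_loss_rate(12)  # pre-2004 limit: loans up to 12x reserves
new_cap = wipeout_loss_rate(40)  # post-2004 limit: loans up to 40x reserves

print(f"12x leverage: reserves gone at a {old_cap:.1%} loss rate")  # ~8.3%
print(f"40x leverage: reserves gone at a {new_cap:.1%} loss rate")  # 2.5%
```

At the old cap, a bank could absorb losses on roughly one dollar in twelve; at the new cap, losing two and a half cents on the dollar was enough to destroy it, which is the fragility the text describes.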
There is a similar explanation for the unexpected news that Iceland was among the countries hardest hit by the financial crisis. Privatization and deregulation of Iceland’s three big banks in 2000 allowed the country to become an offshore banking haven for British and other international investors, offering high-risk, high-return (in good times) opportunities to the world. This led to some years of rapid economic growth, and to a banking industry with liabilities equal to several times the country’s GDP – which did not look like a problem until the international financial bubble burst.

Putting the pieces back together again

“I suspect that free-marketers need to be less doctrinaire and less simple-mindedly utility-maximizing, and that they should depend less on abstract econometric models. I think they’ll have to take much more seriously the task of thinking through what are the right rules of the road for both the private and public sectors. They’ll have to figure out what institutional barriers and what monetary, fiscal and legal guardrails are needed for the accountability, transparency and responsibility that allow free markets to work.”5
5 Kristol, “George W. Hoover?” 284

When the most doctrinaire of the free-marketers – William Kristol, again – start talking about rules of the road, institutional barriers, and guardrails for the market economy, the moment has arrived for new ideas. What follows is not the way that I would design an economic system if starting from scratch – but neither I nor anyone else has been invited, alas, to start over and build a sensible economy from the ground up. The immediate challenge that we face is to repair what’s there without further jeopardy to jobs and livelihoods.
The four fundamental flaws in the traditional theory suggest the shape of the barriers and guardrails needed to keep the market economy safely on the road and headed in the right direction. The first two flaws point to large categories of decisions and values that should be permanently off-limits to the market. The definition of efficiency in terms of Pareto optimality – endorsing only those changes to the status quo that can win unanimous support – is a profoundly anti-democratic standard that is taken for granted in much of economic theory.6 There are many public goods and public decisions that cannot be handled purely by consensus in any jurisdiction larger than a village. Markets cannot decide what we want to do about education, infrastructure, defense, and other public purposes; nor can they decide who should pay how much for these programs.
The existence of important values that cannot be priced, rooted in the dignity of humanity and nature, requires a system of rights and absolute standards, not prices and market incentives. Reasonable people can and do disagree about the extent of rights and standards, but this is unquestionably a large, and perhaps growing, sphere of decisions. Many of the things we care most about are too valuable to have prices; they are not for sale at any price.
These straightforward points only came to seem remarkable and controversial under the onslaught of market fundamentalism in recent years, with its relentless focus on expanding the sphere of market efficiency, prices, and incentives. Conservatives, securely in power for most of the years from 1980 through 2008, repeated endlessly that government is the problem and the market is the solution – at least until the crash of 2008, when the roles were abruptly reversed. Meanwhile, it has become common to hear the argument, in environmental policy debates, that rational policy-making must be based on setting the correct price for human lives saved by regulations. (A less common, but by no means unknown, next step is the morally indefensible conclusion that the value of a life saved should be lower in poorer countries.)
The third flaw in the theory of the invisible hand, the existence and importance of big businesses, leads to a need for ongoing regulation. Many industries do not and cannot consist of small businesses whose every action is disciplined by relentless competition. As a result, they have to be disciplined by society – that is, by regulation. Recognition of this fact inspired the traditional treatment of electric utilities, prior to the recent wave of deregulation. Since some aspects of electricity supply are natural monopolies (no one wants to see multiple, competing electric lines running along the same street), the firms holding this monopoly power had to accept limits on their prices and continual oversight of their investment plans – including the requirement to build reserve capacity – in order to ensure that they served the public interest.

While utility regulation is an interesting model, it is not the only approach to the governance of big business. The general point is that the invisible hand only ensures that greed is good for society when the greedy enterprises are small and powerless. Larger, more powerful greed must often be directed by the visible hand of government in order to prevent it from subverting the common good.

The fourth flaw, the impossibility of complete information about markets, leads to lessons more directly focused on the financial crisis. The staggering complexity of many decisions in today’s financial and other markets undermines the strongest pragmatic argument in favor of market mechanisms. Even when markets are not perfectly competitive, and do not achieve the theoretical optimum of the invisible hand (or of general equilibrium theory), they can still excel at decentralized information processing, as Friedrich Hayek pointed out long ago. All the information about the supply and demand for steel is brought together in the steel market; all the information about the supply and demand for restaurant meals in a city is brought together in that market; and so on. No one has to know all the details of all the markets – which is fortunate, since no one could.
As market choices become more intricately and technically detailed, the potential for decentralized information processing disappears. Markets that are too complex for many of the participants to understand cannot do a reasonable job of collecting information about supply and demand. Overly complex markets are often ones that have been artificially created, based on an ideological commitment to solving every problem through the market rather than a natural evolution of trading in existing commodities. The market for health care in the U.S. is a case in point: a service that is more efficiently and cheaply provided as a public good has been forced into a framework of private commodity purchases, with mountains of unnecessary paperwork and vast numbers of people employed in denying medical coverage to others. Medicare coverage of prescription drugs is the epitome of this problem, a “market mechanism” that will never convey useful information about supply and demand because no one understands the bizarre complexity of what they are buying, or how the alternatives would differ.
Other invented, ideologically inspired markets also suffer from the curse of complexity; California’s deregulation of electricity was an unfortunately classic example. Our current system of retirement funding, in which everyone manages their own savings, has higher overhead costs and higher risks of mismanagement than a public system such as Social Security; many people have little or no understanding of the process of managing their retirement funds. In financial markets, innovation that creates complexity is often profitable for the innovating firms and bewildering to others. Cynics might guess that this could be the goal of financial innovation; but even with good intentions, the worsening spiral of complexity defeats any potential for the market to accurately assess the supply and demand for loans.
The policy implication is clear: keep it simple. If training or technical assistance is required to comprehend a new market mechanism, it is probably too complex to achieve its intended goals. Another approach – think of single-payer health care – may offer a more direct, lower-cost route to the same objective, without the trouble of inventing a convoluted new market apparatus. Making public choices about public goods is simpler than squeezing them into the ill-fitting costume of individual market purchases.
In financial markets there is a clear need for independent, publicly funded sources of information about potential investments, to do the job that we always imagined the bond rating companies were doing. Regulation has to apply across the board to new as well as old financial instruments; waiting for signs of trouble before regulating new financial markets is a recipe for a crash.

Precaution vs. cost-benefit analysis
The importance of infrequent, catastrophic risks, and the lack of information about their timing or frequency, highlight the need for a precautionary approach to public policy. In several recent (and very technical) papers, Martin Weitzman shows that both for financial markets and for climate change, the worst case risks can be so disastrous that they should dominate policy decisions. In complex, changing systems such as the world’s climate or financial markets, information will always be limited; if the system is changing rapidly enough, old information may become irrelevant as fast as new information arrives. If, for example, we never have more than 100 independent empirical observations bearing on how bad the market (or climate) will get, then we will never know anything for certain about the 99th percentile risk.
In a situation with unlimited worst-case risks but limited information about their likelihood, Weitzman proves that the expected value of reducing the worst-case risks is, technically speaking, infinite. In other words, nothing else matters except risk reduction, focused on the credible worst case. This is exactly the idea that has been advocated in environmental circles as the “precautionary principle.”
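Weitzman's point about the 99th percentile can be illustrated with a toy simulation. The Pareto distribution and its tail parameter below are illustrative assumptions of mine, not taken from his papers; they are simply a standard stand-in for fat-tailed losses:

```python
import random

random.seed(0)

def pareto_draw(alpha=1.5):
    # Inverse-CDF sample from a Pareto(alpha) distribution with scale 1.
    # Smaller alpha means a fatter tail; 1.5 is an illustrative choice.
    return (1.0 - random.random()) ** (-1.0 / alpha)

# Each "observer" sees only 100 data points and estimates the
# 99th-percentile loss from the second-largest observation.
estimates = []
for _ in range(1000):
    sample = sorted(pareto_draw() for _ in range(100))
    estimates.append(sample[98])  # empirical 99th percentile

print(round(min(estimates), 1), round(max(estimates), 1))
```

Across repeated 100-observation "histories," the estimated 99th-percentile loss varies widely, while an estimate of the median would cluster tightly: limited data can pin down the middle of a fat-tailed distribution, but never its extremes.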
For example, the latest climate science suggests that the likely sea level rise over this century will be in the neighborhood of one meter; in addition, if the Greenland ice sheet, or the similarly-sized West Antarctic ice sheet, collapses into the ocean, the result will eventually be another seven meters of sea level rise. One meter of sea level rise is an expensive and difficult problem for islands and low-lying coastal areas; seven meters is enough to destroy most coastal cities and the associated industries and infrastructure around the world. It is irrelevant, therefore, to worry about fine-tuning the “most likely” estimate of one meter, or to calculate the precisely appropriate policy response to that estimate. Rather, the goal should be to do whatever it takes to prevent the collapse of a major ice sheet and the ensuing seven meters of sea level rise. This is true even in the absence of hard information about the probability of collapsing ice sheets; the risk is far too ominous to take any chances with trial and error.
Financial markets are directly analogous – although one might claim that in finance, the ice sheets have now melted and the markets are already underwater. The worst case risks are so painful that nothing else matters in setting public priorities. With the benefit of hindsight, who among us would have objected to somewhat slower growth in stock prices and housing prices over the last decade or two, in exchange for avoiding the recent economic crash? It was not, it turns out, a brilliant idea to lower the reserve requirements and remove other restrictions on the risks that financial institutions could take, even though it boosted short-run profits at the time.
Restoration of the earlier, discarded regulations on banking is not a complete answer to the current crisis, although it is hard to see how it would hurt as a starting point. What is needed is a more comprehensive regulation of financial investments, covering new varieties as well as old. Charging a (very small) percentage fee on all security transactions, plus a first-time registration fee for introducing new types of securities, could fund an expanded regulatory system, and might also slow down the worst forms of speculation. (Some states have employed a comparable system in electric utility regulation; a trivial percentage fee, amounting to a tiny fraction of a cent on each kilowatt-hour of electricity, supports the state’s oversight of the system as a whole.)
In general, the accumulation of reserves guards against unexpected bad times and market fluctuations. In a volatile and uncertain world, financial and other systems have to be run in a manner that allows such reserves. It is the social equivalent of insurance against individual losses; likewise, the regulatory rollbacks of recent years are the equivalent of cancelling your insurance and spending the premiums on a few more nights out on the town. Maintaining a bit of slack in the system is essential for accumulating reserves that protect against worst cases; squeezing the last bits of slack out in order to maximize profits when everything works according to plan leaves us all more vulnerable to deviations from that plan.

Globalization, new deals, and old economics
The final argument against stringent regulation is that in an increasingly globalized economy, capital will simply move to less regulated countries. Extensive research and debate have found little support for this idea in the sphere of environmental regulation; the “pollution haven” hypothesis, claiming that industry will subvert regulation by moving to countries with weaker environmental standards, is not supported by the bulk of the evidence.
Financial capital, however, is more mobile than industry; huge sums of money can be transferred electronically across national boundaries with minimal transaction costs. Thus it should be easier to create “speculation havens” than pollution havens; a handful of small countries are already known for welcoming unregulated offshore financial investments. The push for deregulation of banking, from the S&L episode of the 1980s to the present, has come not only from ideology and the desire for short-run profits, but also from the pressure of competition with newer, less regulated financial institutions.
The process of financial innovation will continue to challenge any simple attempts to curtail the flight of capital. The ultimate answer to this problem is not only to regulate existing financial markets and institutions, but also to create new, socially useful opportunities for investment – to steer capital toward better purposes, as well as to police its attempts to steal away.
Lurking behind the failure of financial markets is the lack of real investment opportunities, as seen, for instance, in the near-bankruptcy of the U.S. auto industry. GM, Ford, and Chrysler have engaged in their own form of gambling on good times, over-committing their resources to SUVs and other enormous, energy-inefficient vehicles. Paralleling the risky financial ventures that fell apart in 2008, the “all big cars all the time” strategy produces big profits if (and only if) consumer incomes stay high and fuel prices stay low. When incomes fall and oil prices rise, it turns out to be a shame to have bet the company on endless sales of vehicles much larger than anyone actually needs. A new initiative is needed to reshape and redirect this industry and others; left to its own devices, the free market only leads deeper into the ongoing collapse of U.S. manufacturing. If a bailout in the auto industry, finance, or elsewhere gives the government a share of ownership, as it should, then public priorities can be implemented as a condition of public assistance.
At the end of 2008, profitable investment opportunities are vanishing across the board, as the U.S. and the world economies are sliding into the worst economic downturn since the 1930s. That decade’s depression helped inspire the theories of John Maynard Keynes, explaining how deficit spending helps to cure economic slumps and put unemployed people back to work. Keynesian economics has been out of academic fashion for nearly thirty years, banished by the same market fundamentalism that pushed for deregulation of financial and other markets. Yet when a big enough crisis hits, everyone is a Keynesian, favoring huge increases in deficit spending in order to provide an economic stimulus.
There is no shortage of important public priorities that are in need of attention. Thirty years of relentless tax-cutting and penny-pinching in public spending have left the U.S. with perilously crumbling and underfunded infrastructure, from the failed levees of New Orleans to the fatal collapse of a major highway bridge in Minneapolis. The country is shockingly far away from adequate provision of health care and high-quality public education for all, among other social goals. In terms of prevention of worst-case risks, addressing the threat of climate change requires reinventing industry, electric power, and transportation with little or no carbon emissions – a task that calls both for widespread application of the best existing techniques, and for discovery, development, and adoption of new breakthrough technologies, in the U.S. and around the world. What would it take to structure an economy in which these objectives were more attractive to capital than repackaging subprime mortgages and inventing esoteric con games?
Ambitious new public priorities are no longer entirely absent from American politics. Barack Obama’s speeches invoke the goal of a “green new deal,” representing an enormous improvement over the previous occupant of the White House in this and so many other ways. The reality, however, seems likely to lag far behind the rhetoric. Practical discussion has focused on the size of the one-time stimulus that might be needed, treating it as an expensive cure for a rare ailment rather than a new, healthier way of life. The economic advisors for the new administration represent the cautious mainstream of the Democratic Party, an improvement relative to their immediate predecessors in office, but far from offering what is really needed.
Recognizing the new popularity of Keynesian ideas and analogies to the 1930s, a few conservative critics have begun to object that the New Deal should not be taken as a model because it failed to end the Depression. Despite the ambitious, well-publicized initiatives of the Roosevelt administration, unemployment remained extremely high and the economy did not fully recover until the surge of military spending for World War II. This is literally true, but implies a need to do more, not less, than the New Deal. Programs that put hundreds of thousands of people to work, some of them building parks and bridges that are still in use today, were not misguided; they were just too small. A premature lurch back toward balanced budgets caused a painful interruption in the recovery in 1937-38, prolonging high rates of unemployment.
Indeed, as Keynes himself said in 1940, “It is, it seems, politically impossible for a capitalistic democracy to organize expenditure on the scale necessary to make the grand experiments which would prove my case — except in war conditions.” The grand experiment of mobilizing for World War II did succeed in reviving the market economy; it involved massive, ongoing government redirection of spending toward socially determined priorities.
The need for a pervasive, permanent role of government in directing investment also emerges from more recent studies of economic development. As documented in the research of Alice Amsden, Ha-Joon Chang, Dani Rodrik, and others, the countries that have grown fastest have ignored the advice of the World Bank, IMF, and other advocates of free trade and laissez-faire. Instead, successful development has been based on skillful, continual government involvement in nurturing promising industries, supporting education, research, and infrastructure, and managing international trade. The government’s leading role in development can certainly be done wrong, but it can’t be done without.
The New Deal was on the one hand much larger than any recent government initiatives in the U.S., and on the other hand too small for the crisis of the 1930s – or for today. Rebuilding our infrastructure and social programs, while reducing carbon emissions to a sustainable level, will not be finished in a year, or even one presidential term. An ongoing effort is required, more on the scale of wartime mobilization or the active engagement of governments in successful development strategies. With such an effort, there will be a reliable set of investment opportunities in the production of real, socially useful goods and services, as well as a much-strengthened government empowered to regulate and prevent dangerous forms of speculation and undesirable financial “innovations.”
In such a world, the market still plays an essential role, coordinating the numerous industries and activities, engaging in the decentralized processing of information about supply and demand (which is its indispensable task). It will not, however, be stretched to fit other problems that are better handled through the public sector; and it will not be bowed down to as the source of wisdom and policy guidance. There is a clear need for smoothly functioning financial markets, but adult supervision is required to avoid a repetition of recent events.
To close by way of analogy, the market may be the engine of a socially directed economy, indispensable for forward motion. There are limits, however, to its capabilities: it cannot change its own flat tires; and if we let it steer, we are sure to hit the wall again.
________________________________
SUGGESTED CITATION:
Frank Ackerman, “The Economics of Collapsing Markets”, real-world economics review, issue no. 48, 6 December 2008, pp. 279-290, http://www.paecon.net/PAEReview/issue48/Ackerman48.pdf

THE MARKET share of what used to be called the “Big Three” U.S. automakers has been shrinking for years. GM alone had over 50 percent of the U.S. market in the 1960s, but Ford, GM, and Chrysler together can now barely muster 40 percent. Since autumn, sales have been in free fall. GM lost $9.6 billion last quarter, and Chrysler has all but announced it is not viable without a foreign partner. Does the United States need an auto industry?

In the short term, the government should act to prevent a sudden collapse of the Detroit Three. Such a collapse could, due to interlinked supply chains, cause the loss of 1.5 to 3 million jobs (adding 1 to 2 percentage points to an unemployment rate already approaching double digits), and cause such chaos that even Japanese automakers support loans to keep GM and Chrysler afloat. But what about the long term? Why not let the Detroit Three continue to shrink, and allow Americans to buy the cars they prefer, whether they are U.S.-made or not?

It is true that the Detroit Three’s problems go deeper than the current dramatic fall in demand due to the economic crisis. But these problems have potentially correctable causes. The automakers have been managed with an eye to short-term financial gain rather than long-term sustainability. Public policy has also been unfavorable, in three major ways: low gas taxes, which lead to large fluctuations in the price of gas when crude oil prices change; lack of national health care, which penalizes firms responsible enough to offer it; and an insufficient public safety net for retired and laid-off employees, causing firms that shrink to be saddled with very high “legacy costs.” Another problem (primarily for the rest of manufacturing, but also for autos) is trade agreements that don’t protect labor or environmental rights.

It is important to note that the United States faces no fundamental competitive disadvantage in auto manufacturing. Competitive advantage in auto manufacturing is made, not born (in contrast to the case of, say, banana growing, where natural endowments like climate play an important role).

First, we should dispel the notion that auto manufacturing is inherently a low-wage activity. Our major competitors in auto assembly (Germany and Japan) pay wages at least as high as in the United States. Low-wage nations such as China and Mexico have made some inroads into auto supply, providing about 10 percent of the content of the average U.S.-assembled vehicle. But even here, competing with low-wage nations is not as daunting as one might think; research by the Michigan Manufacturing Technology Center suggests that most small manufacturers have costs within 20 percent of their Chinese competitors’. Manufacturers could meet this challenge by adopting a “high-road” production process that harnesses everyone’s knowledge—that of production workers as well as top executives and investors—to achieve innovation, quality, and quick responses to unexpected situations.

Is there a public interest in reversing the industry’s undeniable failures? Why not let all the manufacturing jobs disappear and have an economy of just eBays and Googles? Because we need manufacturing expertise to cope with events that might present huge technical challenges to our habits of daily living (global warming) or leave us unable to buy from abroad (wars).

The auto industry has a critical role to play in meeting these national goals. Take the challenge of climate change. We need to radically increase the efficiency of transport, in part through incremental changes that reduce the weight of cars, more significant changes to the internal combustion engine, and potentially revolutionary couplings of cars with “smart highways” to dramatically improve traffic flow.

Yes, we could import this technology. But it might not be apt for the U.S. context. (For example, Europe has long favored diesels for their fuel economy, but Americans have deemed diesels’ high emissions of nitrogen oxides and particulates to be unacceptable.) And we’d need to export a lot of something to pay for this technology—or see a continued fall in the value of the dollar, leading to a fall in living standards.

The auto industry has long been known as “the industry of industries,” since making cars absorbs much of the output of industries like machine tools, steel, electronics, computers, and semiconductors. Innovations pioneered for the auto industry spread to other industries as well. Thus, maintaining the industry now keeps alive capabilities that may be crucial in meeting crises we have not yet thought of. Traditional trade theory has little room for such “irreversibility”; it assumes that if relative prices change, countries can easily re-enter businesses in which they were once uncompetitive. But it is very expensive to recreate the vast assemblages of suppliers, engineers, and skilled workers that go into making cars and other manufactured goods.

We should not assume that the United States will keep “high-skilled” engineering and design jobs even if we lose production jobs. In fact, the reverse may well be true. Asian and European car companies do most of their engineering in their home countries; they manufacture here in part because of the bulkiness of cars. Even the Detroit Three are outsourcing engineering to Europe (for small cars) and India (for computer-aided drafting). In addition, it is difficult to remain competitive for long in design when one doesn’t have the insight gained from actual manufacturing. Another reason to save the auto industry is its role as a model of relative fairness in sharing productivity gains. Allowing a high-wage industry to fail does not guarantee that another high-wage industry will emerge to take its place—in fact, by weakening the institutions and norms that created such an industry, it becomes less likely.

So, the United States needs an auto industry, one that pays fair wages and engages in both engineering and production at a sufficient scale to keep critical industries like machine tools humming. Do “we” need a domestically owned auto industry? This is a harder question. Our “national champions” have not served the United States particularly well in recent decades; consumers have benefited greatly from access to Toyotas and Hondas. Yet the demise of the Big Three may well lead to negative consequences for all of us—lower wages (since foreign automakers have been hostile to unions) and less R&D in the United States—and therefore we need to make sure we don’t create financially viable firms by sacrificing capabilities and wages. Instead, we should implement government policies, such as creating both demand and supply for fuel-efficient vehicles, and should involve unions in training programs for both current and former auto workers. These policies would help create an industry that serves all its stakeholders—including taxpayers.

Susan Helper is AT&T Professor of Economics, Weatherhead School of Management, Case Western Reserve University. She is also a Research Associate at the National Bureau of Economic Research and MIT’s International Motor Vehicle Program.

I HAVE often thought that economists should be required to have a better grasp of simple arithmetic. It would prevent them from repeating many silly comments that pass for conventional wisdom, such as that the United States will no longer be a manufacturing country in the future.

Those who know arithmetic can quickly detect the absurdity of this assertion. The implication of course is that the United States will import nearly all of its manufactured goods. The problem is that unless we can find some country that will give us manufactured goods for free forever, we have to find some mechanism to pay for our imports.

The end-of-manufacturing school argues that we will pay by exporting services. This is where arithmetic is so useful. The volume of U.S. trade in goods is approximately three and a half times the volume of its trade in services. If the deficit in goods trade were to continue to expand, we would need incredible growth in both the volume of our service trade and our surplus on that trade in order to get anything close to balanced trade.

For example, if we lose half of our manufacturing over the next twenty years, and imported services continue to rise at the same pace as the past decade, then we would have to see exports of services rise at an average annual rate of almost 15 percent over the next two decades if we are to have balanced trade in the year 2028.

A 15 percent annual growth rate in service exports is approximately twice the rate of growth that we have seen over the last decade. It would take a very creative story to explain how we can anticipate a doubling of the growth rate of service exports on a sustained basis.
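The compound-growth arithmetic behind that comparison is easy to check. In this sketch, service exports are indexed to 100 (units cancel in the comparison), and the 7.5 percent figure simply stands in for "approximately half" of the required rate; neither number is an official trade statistic:

```python
def compound(value, rate, years):
    """Value after growing at `rate` per year for `years` years."""
    return value * (1.0 + rate) ** years

# 15% is the rate the essay says would be required for balanced
# trade by 2028; 7.5% stands in for the pace of the last decade.
required = compound(100, 0.15, 20)
recent_pace = compound(100, 0.075, 20)

print(round(required), round(recent_pace))
```

Twenty years at 15 percent means a roughly sixteen-fold expansion of service exports; at the recent pace, only about a four-fold one. The gap between those two trajectories is the hole in the end-of-manufacturing story.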

The story becomes even more fantastic on a closer examination of the services that we export. The largest single item is travel, meaning the money that foreign tourists spend in the United States. This item alone accounts for almost 20 percent of our service exports.

There is nothing wrong with tourism as an industry. However, the idea that U.S. workers are somehow too educated to be doing manufacturing work, but instead will be making the beds, bussing the tables, and cleaning hotel toilets for foreign tourists, is a bit laughable. Of course, with the right institutional structure (e.g., strong unions) these jobs can be well-paying jobs, but it is certainly not apparent that they require more skills than manufacturing.

The category “other transportation” accounts for another 10 percent of exported services. These are the fees for freight and port services that importers pay when they bring items into the United States. This service rises when our imports rise. It is effectively money taken out of our consumers’ pockets because it is included in the price of imported goods.

Royalty and licensing fees account for another 17 percent of our service exports. These are the fees that we get countries to tack onto the price of their products due to copyright and patent protection. It might become increasingly difficult to extract these fees as the spread of the Internet increasingly allows more movies, software, and recorded music to be instantly copied and exchanged at zero cost. It’s not clear that the rest of the world is prepared to use police-state tactics to collect revenue for Microsoft and Disney. The drug patent side of this equation is even more dubious. Developing countries are not eager to see their people die so that Pfizer and Merck can get high profits from their drug patents. This component of service exports is likely to come under considerable pressure in future years.

Another major category of service exports is financial services. This category accounted for approximately 10 percent of service exports in recent years. It is questionable whether this share can be maintained in the years ahead. Wall Street had been known as the gold standard of the world financial industry, with the best services and the highest professional standards. As a result of the scandals that have been exposed in the last year, Wall Street no longer has this standing in the world. After all, investors don’t have to come to New York and give their money to Bernie Madoff or Robert Rubin to be ripped off; they can be ripped off almost anywhere in the world. Perhaps the Obama administration will be able to implement reforms in the financial sector that will restore its integrity in the eyes of world investors, but that will require serious work at this point.

Finally, there is the category of business and professional services, which accounts for roughly 20 percent of service exports. This is the area of real high-tech and high-end services. It includes computing and managerial consulting.

Rapid growth in this sector would mean more high-end jobs in the United States, but the notion that it could possibly expand enough to support a country without manufacturing is absurd on its face. First, even though it is a large share of service exports, it is only equal to about 0.8 percent of GDP. Even if quadrupled over the next two decades, it wouldn’t come close to covering the current trade deficit, to say nothing of the increase due to the loss of more manufacturing output.
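The share arithmetic here is worth making explicit. The 0.8 percent figure is from the text; the 5 percent trade deficit is my rough assumption for the late-2000s U.S. deficit as a share of GDP, used only for illustration:

```python
# All figures are shares of GDP.
business_services_exports = 0.008  # ~0.8% of GDP, from the text
trade_deficit = 0.05               # assumed ~5% of GDP, illustrative

quadrupled = 4 * business_services_exports  # the optimistic scenario
shortfall = trade_deficit - quadrupled      # deficit still uncovered

print(quadrupled, shortfall)
```

Even on this generous scenario, quadrupled business-service exports would cover under two-thirds of such a deficit, before accounting for any further loss of manufacturing output.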

More important, it is implausible to believe that the United States will be able to dominate this area in the decades ahead. The United States certainly has a head start in sophisticated computer technologies and in some management practices, but it is questionable how long this advantage can be maintained. There are already many world-class computer service companies in India and elsewhere in the developing world, and this number is increasing rapidly.

The computer and software engineers in these countries are every bit as qualified as their U.S. counterparts and are often prepared to work for less than one-tenth of U.S. wages. Furthermore, unlike cars and steel, which are very expensive to transport over long distances, it is costless to ship software anywhere in the world. Given the basic economics, it seems a safe bet that the United States will lose its share in this sector of the world economy. In twenty years it is quite likely that the United States will be a net importer of this category of service, unless of course wages in the United States adjust to world levels.

In short, the idea that the United States can survive without manufacturing is implausible: It implies an absurdly rapid rate of growth of service exports for which there is no historical precedent. Many economists and economic pundits asserted that house prices could keep rising forever in spite of the blatant absurdity of this position. The claim that the U.S. economy can be sustained without a sizable manufacturing sector is an equally absurd proposition.

THERE ARE at least three major reasons why a nation must indeed make things to maintain its prosperity. First, making goods is on balance—with exceptions—more productive than providing services, and rising productivity is the fundamental source of prosperity. Second, related to the first, making goods creates higher-paying jobs on balance—again, with a few exceptions. Third, a major nation must be able to maintain a balanced current account (and trade balance) over time, and goods are far more tradeable than services. Without something to export, a nation will either become over-indebted or be forced to reduce its standard of living.

The United States has looked the other way regarding these important issues for a variety of reasons, but underlying its neglect are certain narratives about how economies work that have been highly misleading. One of the more misleading narratives of recent decades involves the rapid growth of services industries when compared to the rest of the economy. It goes like this: Services will naturally replace manufacturing in an advancing economy exactly as manufacturing replaced agriculture in the 1800s. Do not be concerned. Remember how inappropriately concerned people were a century and a half ago with the rise of manufacturing? The rise in services is the best use of American resources.

Of course, within every overgeneralization lies a pit of truth. Same here. Once we feed, clothe, house, and auto-mobilize ourselves, many economists agree that we mostly want to go to the movies or watch TV, hang out at the mall, trade stocks in our Schwab accounts, and, if financially healthy, go to the doctor a lot. There is thus no need to be alarmed that only 8 or 9 percent of American workers are employed at a factory that makes things. To the contrary, this is proof of the economy’s sophistication and its evolution towards providing Americans with what they really want. Moreover, manufacturing’s productivity is rising rapidly—which means fewer workers are needed for the same output and the price of an equal quantity of goods falls.

A lower manufacturing share of GDP is therefore the natural course of events. In fact, productivity gains are the core reason for job loss. There are even good services jobs—finance, for example. Meantime, corporate profits rise, which is proof of the pudding and the guarantor of high levels of capital investment and the future of the nation. Not long ago, the management guru Peter Drucker wrote that all America had to do was learn how to make services more productively. I suppose America listened, because it has now created the remarkably productive Wal-Mart, which in turn supplied America with some of the worst jobs in the nation.

ACCORDING TO neo-classical equilibrium theory, all of this was supposed to happen as naturally as a dolphin plies the tides. As always, there are controversies over how fast manufacturing’s share of GDP has fallen, but I don’t think many now argue that there hasn’t been a significant drop since the late 1970s. Nor do many argue that the trade deficit in manufactured goods, which is pretty enormous, has virtually nothing to do with job loss.

Thus, manufacturing should always have been a focus of government policies. But America did far worse than merely neglect it. The decline of manufacturing got a big push from the Democrats in charge in the 1990s, from most of the Republicans since the early 1980s (the hard Right in particular), and increasingly from most of mainstream economic academia. This push—really a shove—was the tolerance and further promotion of an overvalued U.S. dollar.

The American dollar had been high through much of the Bretton Woods period, but in 1979 it took off, rising some 60 to 75 percent (depending on the trade-weighted average used) by 1984. High real interest rates in the early 1980s under Federal Reserve Chairman Paul Volcker attracted foreign funds, while Reagan’s simultaneous Keynesian thrust of tax cuts and defense spending produced a fast-growing economy in the mid-1980s. In five years, the high dollar dramatically lifted the relative price of American-made goods. Coupled with the steep recession of the early 1980s, it clobbered manufacturing.

After the dollar declined in the wake of Treasury Secretary James Baker’s 1985 Plaza Accord, its value again turned up and kept rising inexorably until only a couple of years ago. Manufacturing thus did not decline of natural causes; it was hastened to the edge of the cliff and pushed off by the high dollar. The relevance of manufacturing was minimized by policymakers who saw an easy way to attract foreign investment, compensate for ever more borrowing, and all the while satisfy Wall Street profit seekers.

And thus America stopped making things. American manufacturing was at an enormous disadvantage in the world. One consequence was the permanent loss of many hundreds of thousands of jobs. But not only that. Entire industries were decimated, needed skills lost, R&D foregone, the innovation from learning-by-doing never undertaken—and so on.

The rest of the world did not mind. American demand was the growth machine for Japan, the Tigers, and finally China. If America wanted to undermine good jobs in its own country, who were they to complain?

THE DISASTER of this policy is now clear. Left to its own devices, the free market in currencies is probably the most devastating economic idea of our times. Because the dollar reigned for so long as the only trustworthy reserve currency, America got a free pass to run up a big trade deficit without the concomitant rise in interest rates. This led to self-destructive abuse. Americans didn’t have to save to finance borrowing and they could still borrow to buy what they wanted.

This led to borrowing at damaging levels. Greenspan, for example, could push interest rates to rock bottom in the early 2000s without undermining the value of the dollar and raising inflationary fears. Meantime, the Chinese and others, intoxicated by the power of their export-led growth model, felt no pressure to raise wages at home and build a domestic market which the early rich nations of Europe and North America had long ago learned was a critical foundation—an idea that some have now forgotten.

The world’s trade and investment imbalances led directly to the current crisis. Debt, not wages, propelled demand in America. And not only American but also international institutions poured dollars into bad American mortgages and the housing bubble. Earlier they had done the same in the high-technology sectors.

So the extent of the decline of manufacturing in the United States was not natural. Meantime, under this economic model, finance became America’s leading industry, accounting for more than 30 percent of profits in recent years and more than 40 percent of profits among the Standard & Poor’s 500.

If a high dollar had not been allowed to become the centerpiece of the economic model, manufacturing would have declined, but to a far lesser degree. This raises the second issue: Should we let manufacturing follow a natural, market-driven course? The answer is that we should not. It is nonsense to think that free markets will automatically create the industries a nation needs. The notion that they will is a product of the ascendance of simplistic free-market economic theory.

In America, we fail to develop industries for which there are few short-term incentives or that are too risky or large to be undertaken by private capital. There are gaping holes in what we make in America: no light rail or subway cars to speak of, for example, and far less agricultural equipment and almost no machine tools, once the pride and joy of our early industrial era. We are in desperate need of money for alternative energy solutions. We spent torrentially on fiber communication lines that were unneeded. We lag in broadband coverage. We of course make almost no consumer electronics products or textiles.

We remain leaders in chip-related high technology. But it was the government that saved Intel in the 1980s, and it is the remarkable fall in the cost of computer power with Intel microprocessors that was the principal causal factor in the so-called “New Economy.” We lead in big pharmaceuticals, but that’s because the National Institutes of Health and other government agencies have so intelligently subsidized science and research at U.S. universities. We have a huge defense industry, which is a big exporter, including of aircraft. (We know why.) Meantime, the nation’s overall R&D is spotty and weak. The education of engineers and scientists remains well behind our production of MBAs.

Today, to take the most straightforward measure, manufacturing final sales are 10 percentage points lower as a share of GDP than they were in the early 1980s. That’s $1.5 trillion worth of output. Losing one million more manufacturing jobs than necessary has put an enormous dent in American wages: the typical male in his thirties now makes less, after inflation, than the typical male in his thirties did in the 1970s.
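As a rough sanity check, the $1.5 trillion figure above is consistent with 10 percentage points of a U.S. GDP of roughly $15 trillion; the GDP level here is an outside assumption for illustration, not a number given in the text.

```python
# Back-of-envelope check: 10 percentage points of GDP vs. the $1.5 trillion figure.
# The GDP level is an assumed round number, not taken from the article.
gdp = 15e12            # assumed U.S. GDP, dollars (roughly the late-2000s level)
share_drop = 0.10      # manufacturing final sales: 10 percentage points of GDP
lost_output = gdp * share_drop
print(f"${lost_output / 1e12:.1f} trillion")  # → $1.5 trillion
```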

SO HERE we are: Enormous imbalances in current accounts everywhere have put the world in a hole from which it may not climb out in the near future. The imbalances are a consequence of everyone taking the easy way out—and most doing so against the most vital long-term interests of the U.S. economy. The United States has succumbed, in particular, to the short-term interests of powerful Wall Street players.

To take one end of the spectrum, the Chinese, now in serious recession, must develop a domestic market. At the other end is the U.S., which had paid the world’s highest wages from the Colonial years until the 1970s. It no longer does. It is a high-productivity, low-wage nation. Wal-Mart is the symbol of the broader demise. A major reason is the loss of manufacturing jobs.

Because the United States can no longer make many things—it doesn’t have the factories, the labor or management expertise, the new ideas or proper incentives—the trade deficit is that much harder to correct, even if the dollar falls again. An industrial policy, such as the one partly incorporated in the new Obama stimulus package, has fewer teeth because much of the domestic spending will necessarily go to imports.

In sum, then, no nation can sustain the imbalances America has run since the late 1980s. Goods make up the bulk of what nations export. Critically, making things also creates good jobs, generates ideas for the future, educates and trains workers, and has enormous multiplier effects through the purchase of production inputs and the payment of high wages. Contrary to widespread conventional wisdom, no rich nation will survive on services alone.

The United States requires an appropriate currency policy. Since that demands the cooperation of all nations, it is difficult to be optimistic. But present events may yet force such cooperation; we can only hope it comes about in a stable way. The United States also requires a realistic industrial policy to support needed industry, the ongoing development of skills and products, and appropriate levels of R&D. The absence of such thinking in America—even after the crisis—is yet another failure of over-simplified, market-oriented economic theory.

Jeff Madrick is editor of Challenge Magazine and director of policy research at the Schwartz Center for Economic Policy Analysis, The New School. He is the author of Taking America, The End of Affluence, and most recently The Case for Big Government.

The Reagan-Thatcher model, which favored finance over domestic manufacturing, has collapsed after thirty years of dominance. What we need—and what we can build—is a capitalism more attuned to our national concerns. The decline of American manufacturing has saddled us not only with a seemingly permanent negative balance of trade but with a business community less and less concerned with America’s productive capacities. When manufacturing companies dominated what was still a national economy in the 1950s and 1960s, they favored and profited from improvements in America’s infrastructure and education. The interstate highway system and the G.I. Bill were good for General Motors and for the U.S.A. From 1875 to 1975, the level of schooling for the average American increased by seven years, creating a more educated workforce than any of our competitors’ had. Since 1975, however, it hasn’t increased at all. The mutually reinforcing rise of financialization and globalization broke the bond between American capitalism and America’s interests.

Manufacturing has become too global to permit the United States to revert to the level of manufacturing it had in the good old days of Keynes and Ike, but it would be a positive development if we had a capitalism that once again focused on making things rather than deals. In Germany, manufacturing still dominates finance, which is why Germany has been the world’s leader in exports. German capitalism didn’t succumb to the financialization that swept the United States and Britain in the 1980s, in part because its companies raise their capital, as ours used to, from retained earnings and banks rather than the markets. Company managers set long-term policies while market pressures for short-term profits are held in check. The focus on long-term performance over short-term gain is reinforced by Germany’s stakeholder, rather than shareholder, model of capitalism: Worker representatives sit on boards of directors, unionization remains high, income distribution is more equitable, social benefits are generous. Nonetheless, German companies are among the world’s most competitive in their financial viability and the quality of their products. Yes, Germany’s export-fueled economy is imperiled by the global collapse in consumption, but its form of capitalism has proved more sustainable than Wall Street’s.

So does Germany offer a model for the United States? Yes—up to a point. Certainly, U.S. ratios of production to consumption and wealth creation to debt creation have gotten dangerously out of whack. Certainly, the one driver and beneficiary of this epochal change—our financial sector—has to be scaled back and regulated (if not taken out and shot). Similarly, to create a business culture attuned more to investment than speculation, and with a preferential option for the United States, corporations should be made legally answerable not just to shareholders but also to stakeholders—their employees and community. That would require, among other things, changing the laws governing the composition of corporate boards.

In addition to bolstering industry, we should take a cue from Scandinavia’s social capitalism, which is less manufacturing-centered than the German model. The Scandinavians have upgraded the skills and wages of their workers in the retail and service sectors—the sectors that employ the majority of our own workforce. In consequence, fully employed yet impoverished workers, of whom there are millions in the United States, do not exist in Scandinavia.

Making such changes here would require laws easing unionization (such as the Employee Free Choice Act, which was introduced this week in Congress) and policies that professionalize jobs in child care, elder care and private security. To be sure, this form of capitalism requires a larger public sector than we have had in recent years. But investing in more highly trained and paid teachers, nurses and child-care workers is more likely to produce sustained prosperity than investing in the asset bubbles to which Wall Street was so fatally attracted.

Would such changes reduce the dynamism of the American economy? Not necessarily, particularly since Wall Street often mistook deal-making for dynamism. Indeed, since finance eclipsed manufacturing as our dominant sector, our rates of inter-generational mobility have fallen behind those in presumably less dynamic Europe.

Wall Street’s capitalism is dying in disgrace. It’s time for a better model.

Liberal economists pine for days no liberal should want to revisit.

“The America I grew up in was a relatively equal middle-class society. Over the past generation, however, the country has returned to Gilded Age levels of inequality.” So sighs Paul Krugman, the Nobel Prize–winning Princeton economist and New York Times columnist, in his recent book The Conscience of a Liberal.

The sentiment is nothing new. Political progressives such as Krugman have been decrying increases in income inequality for many years now. But Krugman has added a novel twist, one that has important implications for public policy and economic discourse in the age of Obama. In seeking explanations for the widening spread of incomes during the last four decades, researchers have focused overwhelmingly on broad structural changes in the economy, such as technological progress and demographic shifts. Krugman argues that these explanations are insufficient. “Since the 1970s,” he writes, “norms and institutions in the United States have changed in ways that either encouraged or permitted sharply higher inequality. Where, however, did the change in norms and institutions come from? The answer appears to be politics.”

To understand Krugman’s argument, we can’t start in the 1970s. We have to back up to the 1930s and ’40s—when, he contends, the “norms and institutions” that shaped a more egalitarian society were created. “The middle-class America of my youth,” Krugman writes, “is best thought of not as the normal state of our society, but as an interregnum between Gilded Ages. America before 1930 was a society in which a small number of very rich people controlled a large share of the nation’s wealth.” But then came the twin convulsions of the Great Depression and World War II, and the country that arose out of those trials was a very different place. “Middle-class America didn’t emerge by accident. It was created by what has been called the Great Compression of incomes that took place during World War II, and sustained for a generation by social norms that favored equality, strong labor unions and progressive taxation.”

The Great Compression is a term coined by the economists Claudia Goldin of Harvard and Robert Margo of Boston University to describe the dramatic narrowing of the nation’s wage structure during the 1940s. The real wages of manufacturing workers jumped 67 percent between 1929 and 1947, while the top 1 percent of earners saw a 17 percent drop in real income. These egalitarian trends can be attributed to the exceptional circumstances of the period: precipitous declines at the top end of the income spectrum due to economic cataclysm; wartime wage controls that tended to compress wage rates; rapid growth in the demand for low-skilled labor, combined with the labor shortages of the war years; and rapid growth in the relative supply of skilled workers due to a near doubling of high school graduation rates.
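The two percentage changes just cited imply that top incomes relative to manufacturing wages roughly halved over the period. A minimal sketch, indexing both groups to 1.0 in 1929 (the index base is an illustrative assumption; only the 67 and 17 percent figures come from the text):

```python
# Index both groups at 1.0 in 1929 (an illustrative assumption),
# then apply the percentage changes cited for 1929-1947.
mfg_1947 = 1.0 * 1.67   # manufacturing real wages: +67 percent
top_1947 = 1.0 * 0.83   # top 1 percent real incomes: -17 percent

# Top incomes relative to manufacturing wages roughly halved:
print(round(top_1947 / mfg_1947, 3))  # → 0.497
```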

Yet the return to peacetime and prosperity did not result in a shift back toward the status quo ante. The more egalitarian income structure persisted for decades. For an explanation, Krugman leans heavily on a 2007 paper by the Massachusetts Institute of Technology economists Frank Levy and Peter Temin, who argue that postwar American history has been a tale of two widely divergent systems of political economy. First came the “Treaty of Detroit,” characterized by heavy unionization of industry, steeply progressive taxation, and a high minimum wage. Under that system, median wages kept pace with the economy’s overall productivity growth, and incomes at the lower end of the scale grew faster than those at the top. Beginning around 1980, though, the Treaty of Detroit gave way to the free market “Washington Consensus.” Tax rates on high earners fell sharply, the real value of the minimum wage declined, and private-sector unionism collapsed. As a result, most workers’ incomes failed to share in overall productivity gains while the highest earners had a field day.

This revisionist account of the fall and rise of income inequality is being echoed daily in today’s public policy debates. Under the conventional view, rising inequality is a side effect of economic progress—namely, continuing technological breakthroughs, especially in communications and information technology. Consequently, when economists have supported measures to remedy inequality, they have typically shied away from structural changes in market institutions. Rather, they have endorsed more income redistribution to reduce post-tax income differences, along with remedial education, job retraining, and other programs designed to raise the skill levels of lower-paid workers.

By contrast, Krugman sees the rise of inequality as a consequence of economic regress—in particular, the abandonment of well-designed economic institutions and healthy social norms that promoted widely shared prosperity. Such an assessment leads to the conclusion that we ought to revive the institutions and norms of Paul Krugman’s boyhood, in broad spirit if not in every detail.

There is good evidence that changes in economic policies and social norms have indeed contributed to a widening of the income distribution since the 1970s. But Krugman and other practitioners of nostalgianomics are presenting a highly selective account of what the relevant policies and norms were and how they changed.

The Treaty of Detroit was built on extensive cartelization of markets, limiting competition to favor producers over consumers. The restrictions on competition were buttressed by racial prejudice, sexual discrimination, and postwar conformism, which combined to limit the choices available to workers and potential workers alike. Those illiberal social norms were finally swept aside in the cultural tumults of the 1960s and ’70s. And then, in the 1970s and ’80s, restraints on competition were substantially reduced as well, to the applause of economists across the ideological spectrum. At least until now.

Stifled Competition

The economic system that emerged from the New Deal and World War II was markedly different from the one that exists today. The contrast between past and present is sharpest when we focus on one critical dimension: the degree to which public policy either encourages or thwarts competition.

The transportation, energy, and communications sectors were subject to pervasive price and entry regulation in the postwar era. Railroad rates and service had been under federal control since the Interstate Commerce Act of 1887, but the Motor Carrier Act of 1935 extended the Interstate Commerce Commission’s regulatory authority to cover trucking and bus lines as well. In 1938 airline routes and fares fell under the control of the Civil Aeronautics Authority, later known as the Civil Aeronautics Board. After the discovery of the East Texas oil field in 1930, the Texas Railroad Commission acquired the effective authority to regulate the nation’s oil production. Starting in 1938, the Federal Power Commission regulated rates for the interstate transmission of natural gas. The Federal Communications Commission, created in 1934, allocated licenses to broadcasters and regulated phone rates.

Beginning with the Agricultural Adjustment Act of 1933, prices and production levels on a wide variety of farm products were regulated by a byzantine complex of controls and subsidies. High import tariffs shielded manufacturers from international competition. And in the retail sector, aggressive discounting was countered by state-level “fair trade laws,” which allowed manufacturers to impose minimum resale prices on nonconsenting distributors.

Comprehensive regulation of the financial sector restricted competition in capital markets too. The McFadden Act of 1927 added a federal ban on interstate branch banking to widespread state-level restrictions on intrastate branching. The Glass-Steagall Act of 1933 erected a wall between commercial and investment banking, effectively brokering a market-sharing agreement protecting commercial and investment banks from each other. Regulation Q, instituted in 1933, prohibited interest payments on demand deposits and set interest rate ceilings for time deposits. Provisions of the Securities Act of 1933 limited competition in underwriting by outlawing pre-offering solicitations and undisclosed discounts. These and other restrictions artificially stunted the depth and development of capital markets, muting the intensity of competition throughout the larger “real” economy. New entrants are much more dependent on a well-developed financial system than are established firms, since incumbents can self-finance through retained earnings or use existing assets as collateral. A hobbled financial sector acts as a barrier to entry and thereby reduces established firms’ vulnerability to competition from entrepreneurial upstarts.

The highly progressive tax structure of the early postwar decades further dampened competition. The top marginal income tax rate shot up from 25 percent to 63 percent under Herbert Hoover in 1932, climbed as high as 94 percent during World War II, and stayed at 91 percent during most of the 1950s and early ’60s. Research by the economists William Gentry of Williams College and Glenn Hubbard of Columbia University has found that such rates act as a “success tax,” discouraging employees from striking out as entrepreneurs.

Finally, competition in labor markets was subject to important restraints during the early postwar decades. The triumph of collective bargaining meant the active suppression of wage competition in a variety of industries. In the interest of boosting wages, unions sometimes worked to restrict competition in their industries’ product markets as well. Garment unions connived with trade associations to set prices and allocate production among clothing makers. Coal miner unions attempted to regulate production by dictating how many days a week mines could be open.

MIT economists Levy and Temin don’t mention it, but highly restrictive immigration policies were another significant brake on labor market competition. With the establishment of country-specific immigration quotas under the Immigration Act of 1924, foreign-born residents of the United States plummeted from 13 percent of the total population in 1920 to 5 percent by 1970. As a result, competition at the less-skilled end of the U.S. labor market was substantially reduced.

Solidarity and Chauvinism

The anti-competitive effects of the Treaty of Detroit were reinforced by the prevailing social norms of the early postwar decades. Here Krugman and company focus on executive pay. Krugman quotes wistfully from John Kenneth Galbraith’s characterization of the corporate elite in his 1967 book The New Industrial State: “Management does not go out ruthlessly to reward itself—a sound management is expected to exercise restraint.” According to Krugman, “For a generation after World War II, fear of outrage kept executive salaries in check. Now the outrage is gone. That is, the explosion in executive pay represents a social change…like the sexual revolution of the 1960’s—a relaxation of old strictures, a new permissiveness, but in this case the permissiveness is financial rather than sexual.”

Krugman is on to something. But changing attitudes about lavish compensation packages are just one small part of a much bigger cultural transformation. During the early postwar decades, the combination of in-group solidarity and out-group hostility was much more pronounced than what we’re comfortable with today.

Consider, first of all, the dramatic shift in attitudes about race. Open and unapologetic discrimination by white Anglo-Saxon Protestants against other ethnic groups was widespread and socially acceptable in the America of Paul Krugman’s boyhood. How does racial progress affect income inequality? Not the way we might expect. The most relevant impact might have been that more enlightened attitudes about race encouraged a reversal in the nation’s restrictive immigration policies. The effect was to increase the number of less-skilled workers and thereby intensify competition among them for employment.

Under the system that existed between 1924 and 1965, immigration quotas were set for each country based on the percentage of people with that national origin already living in the U.S. (with immigration from East and South Asia banned outright until 1952). The explicit purpose of the national-origin quotas was to freeze the ethnic composition of the United States—that is, to preserve white Protestant supremacy and protect the country from “undesirable” races. “Unquestionably, there are fine human beings in all parts of the world,” Sen. Robert Byrd (D-W.V.) said in defense of the quota system in 1965, “but people do differ widely in their social habits, their levels of ambition, their mechanical aptitudes, their inherited ability and intelligence, their moral traditions, and their capacity for maintaining stable governments.”

But the times had passed the former Klansman by. With the triumph of the civil rights movement, official discrimination based on national origin was no longer sustainable. Just two months after signing the Voting Rights Act, President Lyndon Johnson signed the Immigration and Nationality Act of 1965, ending the “un-American” system of national-origin quotas and its “twin barriers of prejudice and privilege.” The act inaugurated a new era of mass immigration: Foreign-born residents of the United States have surged from 5 percent of the population in 1970 to 12.5 percent as of 2006.

This wave of immigration exerted a mild downward pressure on the wages of native-born low-skilled workers, with most estimates showing a small effect. Immigration’s more dramatic impact on measurements of inequality has come by increasing the number of less-skilled workers, thereby increasing apparent inequality by depressing average wages at the low end of the income distribution. According to the American University economist Robert Lerman, excluding recent immigrants from the analysis would eliminate roughly 30 percent of the increase in adult male annual earnings inequality between 1979 and 1996.

Although the large influx of unskilled immigrants has made American inequality statistics look worse, it has actually reduced inequality for the people involved. After all, immigrants experience large wage gains as a result of relocating to the United States, thereby reducing the cumulative wage gap between them and top earners in this country. When Lerman recalculated trends in inequality to include, at the beginning of the period, recent immigrants and their native-country wages, he found equality had increased rather than decreased. Immigration has increased inequality at home but decreased it on a global scale.

Just as racism helped to keep foreign-born workers out of the U.S. labor market, another form of in-group solidarity, sexism, kept women out of the paid work force. As of 1950, the labor force participation rate for women 16 and older stood at only 34 percent. By 1970 it had climbed to 43 percent, and as of 2005 it had jumped to 59 percent. Meanwhile, the range of jobs open to women expanded enormously.

Paradoxically, these gains for gender equality widened rather than narrowed income inequality overall. Because of the prevalence of “assortative mating”—the tendency of people to choose spouses with similar educational and socioeconomic backgrounds—the rise in dual-income couples has exacerbated household income inequality: Now richer men are married to richer wives. Between 1979 and 1996, the proportion of working-age men with working wives rose by approximately 25 percent among those in the top fifth of the male earnings distribution, and their wives’ total earnings rose by over 100 percent. According to a 1999 estimate by Gary Burtless of the Brookings Institution, this unanticipated consequence of feminism explains about 13 percent of the total rise in income inequality since 1979.
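The mechanism can be illustrated with a toy calculation (the earnings figures are hypothetical, not drawn from the studies cited above): holding individual earnings fixed, pairing high earners with high earners widens the spread of household incomes compared with mixed pairing.

```python
# Illustrative sketch with hypothetical numbers: assortative mating can raise
# household income inequality even when individual earnings are unchanged.

def gini(values):
    """Gini coefficient: mean absolute difference divided by twice the mean."""
    n = len(values)
    mean = sum(values) / n
    diff_sum = sum(abs(x - y) for x in values for y in values)
    return diff_sum / (2 * n * n * mean)

husbands = [20, 40, 60, 80, 100]   # hypothetical annual earnings, $000s
wives = [10, 20, 30, 40, 50]

# Assortative pairing: high earners marry high earners.
assortative = [h + w for h, w in zip(husbands, wives)]
# Mixed (anti-assortative) pairing: high earners marry low earners.
mixed = [h + w for h, w in zip(husbands, reversed(wives))]

print(round(gini(assortative), 3))  # → 0.267
print(round(gini(mixed), 3))        # → 0.089
```

Individual earnings are identical in both scenarios; only who marries whom changes, yet measured household inequality roughly triples under assortative pairing.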

Racism and sexism are ancient forms of group identity. Another form, more in line with what Krugman has in mind, was a distinctive expression of U.S. economic and social development in the middle decades of the 20th century. The journalist William Whyte described this “social ethic” in his 1956 book The Organization Man, outlining a sensibility that defined itself in studied contrast to old-style “rugged individualism.” When contemporary critics scorned the era for its conformism, they weren’t just talking about the ranch houses and gray flannel suits. The era’s mores placed an extraordinary emphasis on fitting into the group.

“In the Social Ethic I am describing,” wrote Whyte, “man’s obligation is…not so much to the community in a broad sense but to the actual, physical one about him, and the idea that in isolation from it—or active rebellion against it—he might eventually discharge the greater service is little considered.” One corporate trainee told Whyte that he “would sacrifice brilliance for human understanding every time.” A personnel director declared that “any progressive employer would look askance at the individualist and would be reluctant to instill such thinking in the minds of trainees.” Whyte summed up the prevailing attitude: “All the great ideas, [trainees] explain, have already been discovered and not only in physics and chemistry but in practical fields like engineering. The basic creative work is done, so the man you need—for every kind of job—is a practical, team-player fellow who will do a good shirt-sleeves job.”

It seems entirely reasonable to conclude that this social ethic helped to limit competition among business enterprises for top talent. When secure membership in a stable organization is more important than maximizing your individual potential, the most talented employees are less vulnerable to the temptation of a better offer elsewhere. Even if they are tempted, a strong sense of organizational loyalty makes them more likely to resist and stay put.

Increased Competition, Increased Inequality

Krugman blames the conservative movement for income inequality, arguing that right-wingers exploited white backlash in the wake of the civil rights movement to hijack first the Republican Party and then the country as a whole. Once in power, they duped the public with “weapons of mass distraction” (i.e., social issues and foreign policy) while “cut[ting] taxes on the rich,” “try[ing] to shrink government benefits and undermine the welfare state,” and “empower[ing] businesses to confront and, to a large extent, crush the union movement.”

Obviously, conservatism has contributed in important ways to the political shifts of recent decades. But the real story of those changes is more complicated, and more interesting, than Krugman lets on. Influences from across the political spectrum have helped shape the more competitive, more individualistic, and less equal society we now live in.

Indeed, the relevant changes in social norms were led by movements associated with the left. The women’s movement led the assault on sex discrimination. The civil rights campaigns of the 1950s and ’60s inspired more enlightened attitudes about race and ethnicity, with results such as the Immigration and Nationality Act of 1965, a law spearheaded by a young Sen. Edward Kennedy (D-Mass.). And then there was the counterculture of the 1960s, whose influence spread throughout American society in the Me Decade that followed. It upended the social ethic of group-minded solidarity and conformity with a stampede of unbridled individualism and self-assertion. With the general relaxation of inhibitions, talented and ambitious people felt less restrained from seeking top dollar in the marketplace. Yippies and yuppies were two sides of the same coin.

Contrary to Krugman’s narrative, liberals joined conservatives in pushing for dramatic changes in economic policy. In addition to his role in liberalizing immigration, Kennedy was a leader in pushing through both the Airline Deregulation Act of 1978 and the Motor Carrier Act of 1980, which deregulated the trucking industry—and he was warmly supported in both efforts by the left-wing activist Ralph Nader. President Jimmy Carter signed these two pieces of legislation, as well as the Natural Gas Policy Act of 1978, which began the elimination of price controls on natural gas, and the Staggers Rail Act of 1980, which deregulated the railroad industry.

The three most recent rounds of multilateral trade talks were all concluded by Democratic presidents: the Kennedy Round in 1967 by Lyndon Johnson, the Tokyo Round in 1979 by Jimmy Carter, and the Uruguay Round in 1994 by Bill Clinton. And though it was Ronald Reagan who slashed the top income tax rate from 70 percent to 50 percent in 1981, it was two Democrats, Sen. Bill Bradley of New Jersey and Rep. Richard Gephardt of Missouri, who sponsored the Tax Reform Act of 1986, which pushed the top rate all the way down to 28 percent.

What about the unions? According to the Berkeley economist David Card, the shrinking of the unionized labor force accounted for 15 percent to 20 percent of the rise in overall male wage inequality between the early 1970s and the early 1990s. Krugman is right that labor’s decline stems in part from policy changes, but his ideological blinkers lead him to identify the wrong ones.

The only significant change to the pro-union Wagner Act of 1935 came through the Taft-Hartley Act, which outlawed closed shops (contracts requiring employers to hire only union members) and authorized state right-to-work laws (which ban contracts requiring employees to join unions). But that piece of legislation was enacted in 1947—three years before the original Treaty of Detroit between General Motors and the United Auto Workers. It would be a stretch to argue that the Golden Age ended before it even began.

Scrounging for a policy explanation, the economists Frank Levy and Peter Temin point to the failure of a 1978 labor law reform bill to survive a Senate filibuster. But maintaining the status quo is not a policy change. They also describe President Reagan’s 1981 decision to fire striking air traffic controllers as a signal to employers that the government no longer supported labor unions.

While it is true that Reagan’s handling of that strike, along with his appointments to the National Labor Relations Board, made the policy environment for unions less favorable, the effect of those moves on unionization was marginal.

The major reason for the fall in unionized employment, according to a 2007 paper by Georgia State University economist Barry Hirsch, “is that union strength developed through the 1950s was gradually eroded by increasingly competitive and dynamic markets.” He elaborates: “When much of an industry is unionized, firms may prosper with higher union costs as long as their competitors face similar costs. When union companies face low-cost competitors, labor cost increases cannot be passed through to consumers. Factors that increase the competitiveness of product markets—increased international trade, product market deregulation, and the entry of low-cost competitors—make it more difficult for union companies to prosper.”

So the decline of private-sector unionism was abetted by policy changes, but the changes were not in labor policy specifically. They were the general, bipartisan reduction of trade barriers and price and entry controls. Unionized firms found themselves at a critical disadvantage. They shrank accordingly, and union rolls shrank with them.

Postmodern Progress

The move toward a more individualistic culture is not unique to the United States. As the political scientist Ronald Inglehart has documented in dozens of countries around the world, the shift toward what he calls “postmodern” attitudes and values is a predictable cultural response to rising affluence and expanding choices. “In a major part of the world,” he writes in his 1997 book Modernization and Postmodernization, “the disciplined, self-denying, and achievement-oriented norms of industrial society are giving way to an increasingly broad latitude for individual choice of lifestyles and individual self-expression.”

The increasing focus on individual fulfillment means, inevitably, less deference to tradition and organizations. “A major component of the Postmodern shift,” Inglehart argues, “is a shift away from both religious and bureaucratic authority, bringing declining emphasis on all kinds of authority. For deference to authority has high costs: the individual’s personal goals must be subordinated to those of a broader entity.”

Paul Krugman may long for the return of self-denying corporate workers who declined to seek better opportunities out of organizational loyalty, and thus kept wages artificially suppressed, but these are creatures of a bygone ethos—an ethos that also included uncritical acceptance of racist and sexist traditions and often brutish intolerance of deviations from mainstream lifestyles and sensibilities.

The rise in income inequality does raise issues of legitimate public concern. And reasonable people disagree hotly about what ought to be done to ensure that our prosperity is widely shared. But the caricature of postwar history put forward by Krugman and other purveyors of nostalgianomics won’t lead us anywhere. Reactionary fantasies never do.

Brink Lindsey (blindsey@cato.org) is vice president for research at the Cato Institute, which published the policy paper from which this article was adapted.