The Truest Measure of America’s Progress

Perhaps the most compelling question of 2016 will be this: how can our nation continue to grow while also creating a path to prosperity for more Americans? Unlike the “golden age of the middle class” after World War II, the growth of the last 20 to 30 years has not been widely shared. With much of the income earned since 1990 going to the highest earners, this has been the golden age of the “One Percent.”

Presidential candidates and others have recognized this disparity as an issue, and proposed solutions range from redistribution, including taxing the top earners, to the often-cited idea that if we focus on economic growth, a “rising tide will lift all boats.” The right path is to focus on both growth and shared prosperity. Michael Porter and Jan Rivkin, co-chairs of Harvard Business School’s U.S. Competitiveness Project, argue that to be competitive the U.S. must both win in the global marketplace and raise the living standards of the average American.

But how do we know if we are on the right track? We know the truth of the old adage, “What gets measured gets done.” Wouldn’t the right metric keep our leaders focused – and allow us to measure progress on this critical journey?

For a tangible example of how powerful measurement can be, we need look no further than the ‘growth’ component of growth and shared prosperity. Prior to the Great Depression, the United States had no comprehensive measure of national income and output. Politicians and economists crafting economic policy were forced to consult a disjointed collection of statistics that measured everything from stock prices to freight car loadings. In reaction to this deficiency, the economist Simon Kuznets led a team that developed the national accounts during the 1930s, ultimately laying the foundation for arguably the two most important indicators of economic growth of the last century: Gross National Product (GNP) and Gross Domestic Product (GDP).

Today’s quarterly GDP postings are bellwether measures of the economy and are watched carefully by global market makers, government officials, and ordinary workers alike. These measures have become, in the words of Nobel laureate Paul Samuelson and economist William Nordhaus, “beacons that help policymakers steer the economy toward the key economic objectives.”

But what are the beacons that will help steer the economy to both growth and shared prosperity? How do we refocus and redefine the way we measure the economy so that market makers, elected officials, and ordinary citizens can hold each other accountable to an evolved notion of economic success? We posed these questions to 73 thought leaders, CEOs, and public officials who attended the Growth & Shared Prosperity Convening at Harvard Business School in June.

Participants brought vastly different perspectives to their conversations about the measurement of shared prosperity, and as a result, only a few indicators captured broad consensus. This diversity of opinion on the subject of measurement indicates how many elements are part of the concept of shared prosperity, and how little consensus there currently is on what comprises it.

There was broad agreement among participants that capturing an assorted set of indicators in an index or composite score could be a worthwhile alternative to a single measure. Several organizations have attempted to do just that, most notably the United Nations with the Human Development Index, the OECD with the Better Life Index, and the Social Progress Imperative with the Social Progress Index. Yet, for a number of reasons, including the lack of consistent data and technical issues with composite indices, no single important indicator has emerged in the national conversation.

There was one particularly interesting proposal that surfaced during the convening, suggested by a Fortune 50 CEO and an MIT economist: Why not use GDP as a measure of growth and combine it with a measure of how many citizens are enjoying a reasonably prosperous, middle-class life?

The proposed proxy for shared prosperity was the ratio of households with income a designated margin above a reasonable cost of living to total households. The cost-of-living threshold would take into account the cost of an important basket of household needs, such as housing, food, transportation, recreation, education, and retirement. The measure would be the percentage of households with income some fixed amount, perhaps 15% or 20%, above that threshold.

The hope is that as GDP increases, the share of households above the threshold, those able to pay for life’s necessities, invest in the future, and have a bit to spare, would also increase.
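
To make the proposal concrete, here is a minimal sketch of how such a ratio could be computed, assuming hypothetical household records and an illustrative 15% margin; the names and figures below are illustrative, not part of the actual proposal:

```python
from dataclasses import dataclass

@dataclass
class Household:
    income: float          # annual household income, in dollars
    cost_of_living: float  # local cost of the basket: housing, food,
                           # transportation, recreation, education, retirement

def shared_prosperity_ratio(households, margin=0.15):
    """Fraction of households with income at least `margin` (15% or 20%
    in the proposal) above their cost-of-living threshold."""
    above = sum(1 for h in households
                if h.income >= (1 + margin) * h.cost_of_living)
    return above / len(households)

# Hypothetical data for three households, purely for illustration.
sample = [
    Household(income=78_000, cost_of_living=62_000),  # clears the margin
    Household(income=55_000, cost_of_living=54_000),  # above cost, inside margin
    Household(income=41_000, cost_of_living=49_000),  # below cost of living
]
print(f"Shared prosperity ratio: {shared_prosperity_ratio(sample):.0%}")  # 33%
```

Reported alongside quarterly GDP, a rising ratio would signal that growth is in fact being shared.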

While boiling shared prosperity down to a single indicator is difficult, the story of GDP reminds us that there is plenty of potential upside to doing so. GDP has its limitations and its critics, but it is difficult to argue that it has not been an immensely valuable tool for policymakers and economists.

Now is the time for a challenge to academics, policymakers, and thought leaders to discover a new measure—one that tracks progress on the long-held goal of the opportunity for all Americans to realize and enjoy a prosperous future.

Karen Mills is a senior fellow at Harvard Business School and the Harvard Kennedy School, focused on competitiveness, entrepreneurship, and innovation. She was a member of President Obama’s Cabinet, serving as Administrator of the U.S. Small Business Administration from 2009 to 2013.

Why millennials and the Depression-era generation are more similar than you think

Millennials have a bad rap. We imagine them spending their days updating social media accounts with headsets covering their ears and their parents’ credit card numbers pre-logged into Amazon Prime accounts.

A nice life if you can get it, but the reality is far different, according to research by Standard & Poor’s. Millennials, those born from the early 1980s through the early 2000s and also known as Generation Y, are shaping up to be a frugal and career-focused generation with the potential to lead a robust and sustainable U.S. economy. I say potential because they’re not yet the potent economic force that they could be; they are thus far a quiet group, economically conservative and waiting for better conditions to roar to life.

The success or failure of this generation will have widespread economic consequences. Already, millennials spend about $600 billion annually and are on track to spend $1.4 trillion a year by 2020.

According to our research, continued low wages for millennials could reduce U.S. GDP by as much as $244 billion through 2019, or $49 billion a year, relative to our baseline scenario. This suggests that policies around housing, wages, and the new threat caused by high student debt may have the greatest potential to help or harm millennials — things policymakers should heed as this generation grows as a political force.

We come to this conclusion in part by looking to the past. If you compare millennials to other generations you’ll find, somewhat surprisingly, that they share the most similarities with the so-called Silent Generation: Americans born from the mid-1920s through the early 1940s who grew up during the Great Depression but eventually drove a booming economy.

Like their grandparents (and great-grandparents) before them, millennials experienced a major financial crisis during their formative years, one that has infused in them financial conservatism and a propensity to save. They keep more cash on hand than other generations, holding more than half of their assets in cash, less than a third in equities, and 15% in fixed-income assets.

So why aren’t millennials guaranteed a strong economy in their middle years? They differ from the Silent Generation in two areas in particular: a slow-growth economy with lower wages, and crushing loads of student debt. The Silent Generation entered adulthood during a robust growth cycle, in part due to programs such as the New Deal and the Works Progress Administration. Millennials, by contrast, have seen only slow to moderate GDP growth, with near-stagnant wages, as they enter the workforce.

Adding to this difficulty are massive student loan bills. Millennials are the most educated generation in American history, but that education has come at a cost running into the hundreds of billions of dollars. Indeed, many of millennials’ spending and saving habits can be attributed to this debt, a major determinant of current and future spending ability, given the length of loan maturities and weak post-recession wage growth to date.

To see the weight student-loan debt is putting on millennials, consider this: for the first time in at least 10 years, 30-year-olds with no history of student loans are more likely to buy homes than those with student debt, according to the Federal Reserve Bank of New York. The effects of student-loan debt could be mitigated if the economic recovery continues. A growing economy would mean the launch of the millennials into the U.S. economy has been delayed, but not grounded.

Over the next five years, I expect the labor market will keep improving and wages will start increasing. Student loans that once looked very difficult to repay will become manageable, and home-buying, albeit more skewed toward urban areas than in the past, will pick up after its long delay.

My downside scenario, however, paints a grimmer picture. What if wages continue to stagnate through the next decade due to fundamental shifts in the U.S. economy? How would this anemic (or absent) growth affect a generation that already must wait an added four years to reach middle-income status?

First, graduates with modest income opportunities would likely continue to use government programs to delay student-loan payments, an option that may prevent default but also means balances would climb. More borrowers would be caught in the web of higher balances, barely able to cover minimum payments, which in turn would continue to weigh on their credit scores.

The potential impact of this outcome on the economy could be significant. Millennials would likely be forced to continue their current pattern of economic behavior, avoiding big-ticket purchases like cars or homes and continuing to delay starting families. Options to take on more debt to start a business would be curtailed, with business creation holding near current 20-year low levels.

Further, housing starts would climb only slowly, with most activity in rental units, rather than single-family homes. Indeed, if millennials were to buy homes at the same sluggish rate as in the current recovery, housing starts wouldn’t reach 1.5 million units until almost two years later than in our baseline projection.

This is why it’s crucial for economists, policymakers, and the business community to consider this downside scenario when thinking of the future of the U.S. economy, and to have options ready to combat a difficult adulthood for this otherwise promising generation.

If, as we hope, millennials inherit a more robust economy, with higher wages and growth potential, we will more likely see the launch they appear very capable of orchestrating.

How much would you pay for a Nobel Prize in economics? Now’s your chance.

Most of the time, to get your hands on a Nobel Prize in economics, you need some serious brain power, decades of hard work on an esoteric subject, and the universal respect of the economics community. Now, to join the ranks of Krugman, Hayek, Merton, and Scholes, all you need is a small fortune and the desire to spend it.

That’s right, this week, the Los Angeles-based auction house Nate D. Sanders will be accepting bids online for the Nobel Prize awarded to Belarusian-American economist Simon Kuznets, the third ever recipient of the Nobel Prize in economics. (If you are thinking about buying this piece, you should probably be aware that the economics prize is kind of the red-headed stepchild of the Nobel, as it was never endowed by Alfred Nobel himself, but by the Swedish Central Bank, decades after Nobel’s death).

That said, if you’re gonna shell out $150,000 on a Nobel Prize in economics—that’s where bidding starts—allow me to humbly suggest that this be the one. While every Nobel Prize-winning economist has made a great contribution to the science of economics, you’d be hard pressed to find someone who has had a bigger impact on the world beyond economics than Simon Kuznets.

Kuznets, who died in 1985, is the grandfather of the concept of gross domestic product, a statistic that gets bandied about every day by the press, politicians, and activists. It wouldn’t be a stretch to say that, for some, GDP is the single best measure of human progress. The ubiquity of the statistic might lead some to think of it as timeless, but, in fact, it is quite young. The U.S. government only grew interested in calculating estimates of national income—of which GDP is one measure—during the depths of the Great Depression.

The idea of computing national income goes back to at least the 17th century. Private groups, like the National Bureau of Economic Research and the National Industrial Conference Board in the U.S., had been working to compute national income statistics in the United States in the years leading up to the Depression. But there was little agreement over what should be included in these measures. One bone of contention, for instance, was whether it should include household work, like child care or the preparation of meals.

Kuznets delivered his estimates to the Senate in 1934, putting numbers to the great suffering during those dark days. He showed that between 1929 and 1932 national income had fallen by more than 50%, and the income of wage earners had fallen 60%. These statistics helped guide the U.S. government’s efforts to stimulate the economy out of the Great Depression and the “recession within the depression” that struck the country in 1937.

Over the years, Kuznets and his colleagues tinkered with their methods, developing gross national product (GNP) in 1942. GNP measures the value of goods and services produced by American labor and capital, wherever in the world they happen to be located, and unlike national income it ignores the depreciation of the capital stock. This was essential to understanding America’s war potential because it captured the economy’s total production capability. The economics world gradually came to favor gross domestic product in the 1990s, partly as a reflection of the globalization of the U.S. economy. GDP, unlike GNP, measures the output produced within U.S. borders, regardless of whether the underlying assets are owned by Americans or foreigners.
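
In accounting terms, the two measures differ only by net factor income from abroad. A toy calculation with invented, round numbers makes the relationship clear:

```python
# GNP = GDP + net factor income from abroad. The figures below are
# hypothetical round numbers, in trillions of dollars.
gdp = 17.0                  # output produced inside U.S. borders
earned_abroad = 0.8         # income earned overseas by U.S. residents
earned_by_foreigners = 0.6  # income earned in the U.S. by foreign residents

gnp = gdp + earned_abroad - earned_by_foreigners
print(f"GNP = {gnp:.1f} trillion")  # GNP = 17.2 trillion
```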

Even today, there is a fierce debate over the worth of statistics like GDP. Some economists on the right side of the political spectrum question the wisdom of including government spending in our measures of national income because there is no market test for whether these expenditures do anything to improve welfare or make us richer. Those on the left criticize GDP for its failure to account for environmental degradation or the depletion of resources. If an oil company extracts $1 billion of oil from the ground and burns it, did we really become $1 billion richer? Or do we simply have less oil in the ground and more pollution?

The criticisms of GDP are numerous. In 2008, then French President Nicolas Sarkozy commissioned a group of economists, including Nobel laureate Joseph Stiglitz, to devise a new measure of national wealth that would include measures of quality of life and the sustainability of wealth, rather than just raw production figures.

Despite the criticisms, it’s unlikely governments will revise how they measure economic progress in the near future, because the proposed alternatives involve a most difficult question: what constitutes well-being and wealth? Considering the U.S. government can’t even tackle an overhaul of its tax code, asking it to address these loftier questions is probably a bit much.

But the debate over GDP is testament to just how important it is. We need these numbers to help us understand economic progress. Policymakers in the early 20th century likely failed to react to the Great Depression as effectively as they could have because they lacked such tools. In an age when we’re drowning in big data and statistics, such a scenario is hard to imagine. So, if you’re a history buff or an econ nerd—and the kind of person who will spend six figures on a piece of memorabilia—take a look at Kuznets’s prize.

Financial crisis: Revisiting the banking rules that died by a thousand small cuts

Many have bemoaned the U.S. government’s failure to do more to strengthen the financial system following the 2008-2009 crisis. In particular, Congress has not considered anything resembling a revival of the Glass-Steagall Act, which separated the humdrum deposit-taking function of commercial banks from the kind of dubious investment and trading activities that set up financial institutions for a fall.

In fact, Congress recently weakened the Volcker Rule, which aimed to prohibit some forms of risky trading by banks. And now the Republican-controlled House and Senate vow to further roll back the Volcker Rule and other provisions of the Dodd-Frank financial reform.

All this makes it important to get the history, and specifically the history of Glass-Steagall, right. First put in place in response to the financial crises of the Great Depression, the Glass-Steagall Act was repealed in 1999. Many critics of that move argue that it enabled the orgy of financial risk-taking that followed. Others counter that the increased risk-taking came not from commercial banks freed up by the elimination of Glass-Steagall but from the shadow banking system of investment firms, hedge funds, and commercial paper dealers that were never restricted by Glass-Steagall in the first place.

In fact, the financial crisis of the late 2000s was not brought on by the lack of Glass-Steagall per se but instead by a whole set of measures that loosened regulation. The end of Glass-Steagall was simply emblematic of that process.

It all started in 1980 with the abolition of Regulation Q ceilings on deposit interest rates, which allowed commercial banks to compete more aggressively for deposits. That led to a cascade of unintended consequences. It intensified the pressure on Savings & Loans, which previously had been permitted to offer higher deposit rates than other financial institutions. To limit the damage, the Garn-St. Germain Act of 1982 allowed S&Ls to engage in a range of commercial banking activities, such as consumer lending.

Garn-St. Germain helped set the stage for the S&L crisis because it allowed thrifts to take on additional risk but did nothing to restrain them. Meanwhile, as the thrifts began to offer more financial services to customers, traditional banks began to feel competitive pressure. Commercial banks had long been frustrated by their inability to underwrite corporate and municipal bonds. So, in response to a petition from J.P. Morgan, Bankers Trust, and Citicorp, the Federal Reserve creatively reinterpreted Glass-Steagall in December 1986 to allow commercial banks to derive up to 5% of their income from investment banking activities, including underwriting municipal bonds, commercial paper, and, fatefully, mortgage-backed securities.

In 1987, over the opposition of Fed Chair Paul Volcker, the Federal Reserve Board authorized several large banks to further expand their underwriting businesses. Under Volcker’s successor, Alan Greenspan, the Fed then allowed bank holding companies to derive as much as 25% of their revenues from investment banking operations.

The pressure to loosen regulation intensified as a merger wave swept through the world of investment banking and brokerage firms in the 1990s. Investment banks had first been allowed to expand when, in 1970, the ban on publicly listing their shares was lifted. The response took time to gather steam, but they now expanded with a vengeance.

In 1997, Morgan Stanley, an investment bank, merged with Dean Witter, Discover & Co., a brokerage and credit card company. That same year, the trust company and derivatives house Bankers Trust acquired Alex. Brown & Sons, an investment and brokerage firm. The consolidation of investment houses, brokers, and insurance companies threatened to put banks at an even bigger disadvantage. So the banks responded by lobbying even more intensely for the removal of the remaining restrictions on their operations.

By the 1990s, then, the Glass-Steagall Act was already significantly weakened. The fatal blow was struck in 1998 when Citicorp moved to purchase Travelers Insurance Group, notwithstanding Glass-Steagall’s requirement that it sell off Travelers’ insurance business within two years. The merger allowed Travelers to market insurance and its in-house money funds to Citicorp’s retail banking customers. And it gave Citicorp access to an expanded clientele of investors and insurance policyholders. Its main shortcoming? It was not compatible with Glass-Steagall.

The chairmen and co-CEOs of the merged company, John Reed and Sandy Weill, mounted a furious campaign to remove Glass-Steagall’s nettlesome restrictions before the two-year window closed. Their arguments received a sympathetic hearing from Alan Greenspan’s Fed, the Clinton White House, and the Treasury Department, especially when Lawrence Summers succeeded Robert Rubin as secretary in mid-1999. And the executives were warmly received in the halls of Congress, where bank lobbyists freely roamed.

Glass-Steagall was finally euthanized by the Gramm-Leach-Bliley Act, which repealed residual restrictions on combining commercial banking, investment banking, and insurance underwriting businesses in November 1999.

It would be all too easy to claim that Glass-Steagall’s death was a singular event that caused the financial crisis. In fact, its demise was the culmination of a decades-long process of financial deregulation in which both commercial banks and shadow banks were permitted to engage in a wider range of activities, while supervision and oversight lagged behind. Competition between commercial banks, investment banks, and shadow banks squeezed the profits of all involved. Many of the affected institutions responded by using more borrowed money and assuming more risk. The consequences, we now know, were disastrous.

The fact that much of the risky business that led to the 2008-2009 crisis was performed by shadow banks that had always operated outside the Glass-Steagall ring-fence does not excuse the ongoing relaxation of financial oversight and regulation. But simply putting Glass-Steagall back in place will not protect us from future crises. Re-regulating only part of the financial system will not be enough.

We need comprehensive financial reform to cope with 21st century financial markets. From this point of view, eviscerating the Dodd-Frank Wall Street Reform and Consumer Protection Act, as some in the recently inaugurated Congress propose, would be a step in precisely the wrong direction.

Wealth inequality in America: It’s worse than you think

For the true believers in laissez-faire economic policy, the recent and ongoing national discussion over income and wealth inequality probably seems like a cynical ploy by those on the left to gain political advantage. After all, if rising inequality is a problem, you would be hard pressed to find any solutions offered by the right wing.

It would be laughable to argue that left-leaning politicians aren’t using the issue for political advantage. But focusing on that fact alone misses one of the main reasons we have begun to pay more attention to inequality: we have better tools for measuring and understanding it than ever before. This is thanks to the work of economists like Emmanuel Saez and Gabriel Zucman, who have dedicated their careers to compiling and analyzing wealth and income data. Without these numbers, advocates for a concerted effort to combat inequality would have no foundation for their argument.

Saez and Zucman released another working paper this week, which uses capitalized income data to trace how wealth inequality in America, rather than income inequality, has evolved since 1913. (Income inequality describes the gap in how much individuals earn from the work they do and the investments they make. Wealth inequality measures the difference in how much money and other assets individuals have accumulated altogether.) In a blog post at the London School of Economics explaining the paper, Saez and Zucman write:

There is no dispute that income inequality has been on the rise in the United States for the past four decades. The share of total income earned by the top 1 percent of families was less than 10 percent in the late 1970s but now exceeds 20 percent as of the end of 2012. A large portion of this increase is due to an upsurge in the labor incomes earned by senior company executives and successful entrepreneurs. But is the rise in U.S. economic inequality purely a matter of rising labor compensation at the top, or did wealth inequality rise as well?

The advent of the income tax has made income much easier for economists to measure, but wealth is harder. To get around the absence of detailed government records of wealth, Saez and Zucman developed a method of capitalizing income records to estimate the wealth distribution. They write:

Wealth inequality, it turns out, has followed a spectacular U-shape evolution over the past 100 years. From the Great Depression in the 1930s through the late 1970s there was a substantial democratization of wealth. The trend then inverted, with the share of total household wealth owned by the top 0.1 percent increasing to 22 percent in 2012 from 7 percent in the late 1970s. The top 0.1 percent includes 160,000 families with total net assets of more than $20 million in 2012.
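
In rough outline, the capitalization method scales each type of capital income reported on tax returns by the inverse of that asset class’s economy-wide rate of return, calibrated so the implied totals match national balance sheets. Here is a stylized sketch with invented returns and a hypothetical filer; it is a simplification, not Saez and Zucman’s actual data or code:

```python
# Stylized capitalization: wealth = capital income / rate of return,
# asset class by asset class. The returns below are invented.
AGGREGATE_RETURNS = {
    "dividends": 0.03,  # equities
    "interest": 0.02,   # fixed income
    "rents": 0.05,      # housing and land
}

def capitalized_wealth(tax_return):
    """Estimate a filer's wealth from reported capital income flows."""
    return sum(income / AGGREGATE_RETURNS[kind]
               for kind, income in tax_return.items())

# A hypothetical filer reporting $30,000 in dividends, $4,000 in
# interest, and $10,000 in rents:
filer = {"dividends": 30_000, "interest": 4_000, "rents": 10_000}
print(f"Implied wealth: ${capitalized_wealth(filer):,.0f}")  # $1,400,000
```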

Saez and Zucman show that, in America, the wealthiest 160,000 families own as much wealth as the poorest 145 million families, and that wealth is about 10 times as unequally distributed as income. They argue that the drastic rise in wealth inequality has occurred for the same reasons as the rise in income inequality: namely, the trend toward less progressive taxation since the 1970s, and a changing job market that has forced many blue-collar workers to compete with cheaper labor abroad. But wealth inequality specifically is affected by a lack of saving by the middle class. Stagnant wage growth makes it difficult for middle- and lower-class workers to set aside money, but Saez and Zucman argue that the trend could also be a product of the ease with which people are able to get into debt, writing:

Financial deregulation may have expanded borrowing opportunities (through consumer credit, home equity loans, subprime mortgages) and in some cases might have left consumers insufficiently protected against some forms of predatory lending. In that case, greater consumer protection and financial regulation could help increasing middle-class saving. Tuition increases may have increased student loans, in which case limits to university tuition fees may have a role to play.

So, why should we care that wealth inequality is so much greater than even the historic levels of income inequality? While inequality is a natural result of competitive, capitalist economies, there’s plenty of evidence that extreme levels of inequality are bad for business. For instance, retailers are once again bracing for a miserable holiday shopping season, mostly because most Americans simply aren’t seeing their incomes rise and have learned their lesson about the consequences of augmenting their income with debt. Unless your business caters to the richest of the rich, opportunities for real growth are scarce.

Furthermore, there’s reason to believe that such levels of inequality can have even worse consequences. The late historian Tony Judt addressed these effects in Ill Fares the Land, a book on the consequences of the financial crisis, writing:

There has been a collapse in intergenerational mobility: in contrast to their parents and grandparents, children today in the UK as in the US have very little expectation of improving upon the condition into which they were born. The poor stay poor. Economic disadvantage for the overwhelming majority translates into ill health, missed educational opportunity, and—increasingly—the familiar symptoms of depression: alcoholism, obesity, gambling, and minor criminality.

In other words, there’s evidence that rising inequality and many other intractable social problems are related. Not only is rising inequality bad for business, it’s bad for society, too.

If that’s the case—and all measures suggest that it is—what trajectory is the eventual selloff likely to take? Forget about a soft-landing scenario.

On Wednesday, the Dow Jones Industrial Average jumped by 274 points. But bear in mind that stocks usually don’t crash in a day or a month; even so, the damage comes fast. When stocks are this pricey, history tells us that prices swirl downward in a continuous whirlpool that usually lasts about a year and goes to extremes, driving valuations well below rational levels. First everyone believes equities are a great deal; then no one believes. Eventually, the panic makes stocks a bargain once again.

But don’t count on a benign, downward drift of 10% or 15%, just enough to bring the kind of “healthy correction” and “new buying opportunity” that equity strategists are always touting. When you start from these levels, the fall is more often brutal than soft.

Since 1876, U.S. stocks have experienced eight contractions that hammered prices by between 30% and 60%, in round numbers. In the post-World War II era, for which we have complete data, the crashes tended to follow similar patterns: equities peaked with extremely high PEs, as measured by the Shiller Cyclically Adjusted Price-Earnings ratio (CAPE), which smooths out the big swings in earnings that can greatly distort PEs based on a single year’s highly erratic profits. Dividend yields also tend to be far below average when selloffs commence. That’s a good description of where the market stands right now.
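
For readers who want the mechanics: the CAPE divides today’s price by the ten-year average of inflation-adjusted earnings. A minimal sketch, with invented inputs rather than actual S&P data:

```python
def shiller_cape(price, nominal_earnings, cpi_history, cpi_now):
    """CAPE: current price over the 10-year average of earnings per
    share, with each year's earnings restated in today's dollars."""
    real = [e * cpi_now / c for e, c in zip(nominal_earnings, cpi_history)]
    return price / (sum(real) / len(real))

# Invented inputs: ten years of nominal earnings per share and CPI levels.
earnings = [85, 60, 17, 51, 77, 87, 87, 87, 100, 103]
cpi      = [210, 215, 214, 218, 224, 229, 233, 236, 237, 240]

cape = shiller_cape(price=2000, nominal_earnings=earnings,
                    cpi_history=cpi, cpi_now=240)
print(f"CAPE: {cape:.1f}")  # roughly 25
```

Averaging a decade of real earnings is what keeps a single collapsed year of profits from making the market look artificially cheap or expensive.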

A review of these major crashes offers a guide on what we can expect. Generally, the downdrafts last around a year, and seldom continue for more than 18 months, with the glaring exception of the Great Depression.

In data beginning in 1871, the first big bust came in the mid-Victorian period: from mid-1876 to mid-1877, stock prices tumbled 35% in a 13-month stretch. It was another three decades before investors suffered steep losses again. From January 1906 to January 1907, the S&P lost 37.7%. Another crash lasted from November 1916 to November 1917, slashing prices by 31%.

The Great Depression broke the tradition of yearlong busts. From September 1929 to June 1932—a period of 33 months—the S&P contracted by 85%. It’s worth noting that at the start of that long descent, the Shiller PE stood at 33, its highest level to that date and twice its previous average. The dividend yield was also unusually low for the era, at around 3%.

Stocks rebounded strongly until March 1937, when the CAPE tripled from its lows to 22. Over the next 11 months, stocks cratered by 45.3%.

From the end of World War II until the late 1960s, stocks enjoyed a magical run, interspersed with only minor corrections. Then, from May 1969 through May 1970, shares fell by 27.6%. The starting Shiller PE: an elevated 21, around 30% above the then-historic average of 16.

The OPEC oil crisis brought the longest market crash since the Great Depression. From January 1973 to September 1974, a span of 21 months, the S&P fell by 42.5%. This time, it was the tripling of oil prices and its damage to the economy, rather than overpriced stocks—the CAPE started out at 19—that prompted the collapse.

Surprisingly, the Black Monday dive of October 19, 1987, was notable more for its suddenness than its size. From September 1987 to April 1988, the S&P dropped by 19.7% and then rebounded quickly. The CAPE at the start was hardly signaling rough times ahead: it stood at around 18, above average but hardly alarming. Nor, despite the short-lived carnage of Black Monday, was there cause for great concern.

In fact, Black Monday is an exception to the rule that big selloffs persist for around a year. That’s hardly comforting. During those 12- to 18-month spans, stocks generally swoon straight downward, often falling a few percentage points a month.

The next crunch came in September 2000 with the bursting of the dot-com bubble. It was really two separate crashes. In September, the Shiller PE was flashing red at an incredible 42, near an all-time record. Over the next 11 months, stock prices shrank by 29%. That selloff wasn’t sufficient. The S&P snapped back until March 2002, sending the CAPE back over 30. A second selloff took shares down another 27.5% to the low point of February 2003.

The pain is still fresh from the cataclysm that struck in October 2007. Stocks dropped, more or less in a straight line, until March 2009, a period of 17 months. When the din subsided, shareholders had lost 51% of their money. Once again, the Shiller PE was dangerously high at the start of the trouble, standing at 27.3. And the dividend yield was a puny 1.75%. Suddenly, investors decided they weren’t getting paid nearly enough for the danger of holding volatile shares that had just decimated many a nest egg.

Right now, the CAPE of 26 tells us that stocks are pricey, unless you think equities aren’t particularly risky. It also doesn’t help that the dividend yield, at 2%, is far below its historic norm.

We don’t know if a crash is imminent, or if investors are happy with low yields and the modest capital gains that, at best, they can expect at these prices. It could break either way. But if a crash is coming, it will probably mirror those of the past. Most if not all of the damage will come in around 12 months. Prices will pretty much go straight down. Don’t look for “buying opportunities” in the infrequent lulls. What’s encouraging is that the selloff will be overblown. Just look at the aftermath of 2007: By early 2009, the CAPE had dropped to 13. That was no soft landing. And that was the time to buy.

Did Geithner save America from a Second Great Depression?

FORTUNE — Since the release of former U.S. Treasury Secretary Timothy Geithner’s new book, Stress Test: Reflections on Financial Crises, accounts of his stint in the Obama administration have been getting considerable attention in Washington policy circles. One assumption that has gone virtually unquestioned is that Geithner and his colleagues at the U.S. Federal Reserve and the Treasury saved us from a Second Great Depression (SGD). However, it is long past time that this narrative received some serious scrutiny.

The basis of the SGD story is that the first Great Depression of the 1930s was the result of the failure of the Fed to come to the rescue of the banks in the middle of a series of bank runs. If the Fed had flooded the banks with liquidity and offered various guarantees to depositors and other creditors, it could have put an end to the bank runs.

The failure to do so led to a chain of collapses that destroyed much of the economy’s wealth. This was a direct hit to the people who saw their life’s savings disappear when their banks went into bankruptcy. The macroeconomic consequences were enormous, as people had to radically cut back their spending, forcing massive layoffs. In addition to the loss of demand, many businesses also saw their working capital disappear when their banks collapsed.

This was the disaster that Geithner and his colleagues were determined to prevent in the financial crisis in 2008-2009. But this is only part of the story. The Great Depression was not just the financial crisis that set it off; it was a prolonged period in which the economy operated well below its potential, leading to double-digit unemployment.

The recipe for countering these types of weaknesses in demand is simple: spend money. This is something we have known since renowned economist John Maynard Keynes wrote The General Theory in 1936. It is also a proposition that we had the opportunity to test with the massive spending associated with the United States’ entry into World War II in 1941. And events played out just as Keynes predicted: the economy surged and unemployment plunged.

There is nothing magic about the economic impact of military spending. If the U.S. government had spent massive amounts of money building up its infrastructure and its education and health care systems, and done this in 1931 rather than 1941, we would not have seen a decade of double-digit unemployment. The initial downturn from the financial panic would have been quickly reversed and the economy returned to near full-employment levels of output.

This is not just idle speculation; we had the opportunity to witness this set of events in Argentina in the last decade. In December of 2001, Argentina defaulted on its national debt and broke the link of its currency to the dollar. This led to the sort of meltdown that Geithner and company worked desperately to prevent. Banks couldn’t repay depositors, and businesses couldn’t get access to working capital. The country was overtaken by panic as the economy plummeted.

But the plunge proved to be short-lived. Government measures were able to stabilize the economy by the second quarter of 2002, and it was growing rapidly by the second half of the year. In fact, by the end of 2003, Argentina’s economy had fully recovered the ground lost from the crisis. By the end of 2004, the economy was larger than it had been before it went into recession in 1998. The country maintained healthy growth until the world recession brought it to a halt in 2009. (There are questions about the integrity of the data toward the end of this period, but there is little dispute that the data through 2004 are largely accurate.)

In short, Argentina had a full-fledged financial crisis and meltdown of its banking system, but it didn’t endure anything like the Great Depression. Its government and central bank were able to act aggressively to quickly get the country’s economy back on its feet.

Given Argentina’s experience, why would we think that U.S. policymakers would be paralyzed in the event of a financial meltdown? Would Congress lose the ability to vote spending measures and tax cuts that put money in people’s pockets? Would the Fed be unable to conduct the expansionary monetary policy it has been pursuing for the last five and a half years?

There are obviously differences between Argentina and the United States. A collapse of the U.S. financial system would have far greater global consequences than Argentina’s collapse. On the other hand, the U.S. would still be the world’s dominant economy and the U.S. dollar the leading reserve currency even after a collapse.

The veracity of the SGD story matters hugely in how we think about Geithner and the performance of the Bush-Obama economic teams through the crisis. If we really had to fear a decade of double-digit unemployment, then we should be very thankful, even though the economy remains weak and unemployment, at 6.3% as of April, is still high. However, if the SGD is just a scare story for the kids, then people should be very angry about the current state of the economy. And the evidence suggests they should be very angry.

Dean Baker is co-founder of the Center for Economic and Policy Research. Follow him @DeanBaker13