Is Poor Forecasting the Achilles Heel of Economics?


In a recent column, the economic journalist Robert J. Samuelson excoriated the economics profession for doing a poor job of anticipating, explaining or offering useful advice regarding the economic crisis that began in 2007 and continues to this day.

His points are certainly valid, although perhaps overstated. Forecasting is, after all, only one small aspect of economics, no better or worse than, say, public opinion polling in the field of political science. Pollsters always caution that polls, at best, can only tell us the results if the election were held today, not several months from now. And the same caveat applies to economic forecasting.

The best forecasters, in my observation, don’t really forecast, they explain. That is, their value doesn’t lie so much in telling us that the gross domestic product will grow exactly 2.35 percent in 2014. Such information is essentially useless, even if exactly correct, because how markets react to it depends a great deal on expectations. And figuring out expectations is more akin to tarot card reading than anything remotely scientific.

If markets are expecting a higher GDP number, then the forecast may be depressing; if they are expecting a lower one, then it may be bullish. But it may also matter a great deal what the composition of GDP is expected to be.

Basically, GDP consists of private consumption and investment plus government consumption and investment plus net exports, which is defined as exports minus imports. In other words, exports of goods and services are expansionary, imports contractionary. A higher GDP growth rate resulting from temporary government purchases, therefore, will be interpreted differently than, say, a long-term improvement in the trade balance as domestic energy production permanently rises and displaces oil imports.
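The expenditure identity described above can be put in simple arithmetic terms. In the sketch below, all component values are hypothetical round numbers chosen only to illustrate how the pieces add up, not actual figures for any year:

```python
# GDP expenditure identity: GDP = C + I + G + (X - M)
# All figures below are hypothetical, in billions of dollars.

consumption = 11_000   # private consumption (C)
investment = 2_700     # private investment (I)
government = 3_200     # government consumption and investment (G)
exports = 2_300        # exports of goods and services (X)
imports = 2_800        # imports of goods and services (M)

net_exports = exports - imports  # X - M; negative here, i.e. a trade deficit
gdp = consumption + investment + government + net_exports

print("Net exports:", net_exports)  # -500
print("GDP:", gdp)                  # 16400
```

The sign on imports is what makes them contractionary in this accounting sense: holding the other components fixed, every dollar of imports subtracts a dollar from measured GDP, while every dollar of exports adds one.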

Unfortunately, much economic forecasting is quite cosmetic. Forecasters are often trying to predict estimates rather than reality. This is an occupational hazard resulting from the common practice of continually revising economic data, sometimes long after the fact. A forecast that is absurdly wrong in the short run might turn out to be exactly correct in the long run, or vice versa.

This actually happened to a friend of mine. He made a forecast for GDP back in 1971 that was far above the consensus view and he was widely ridiculed for it. But over the years, GDP has been revised upwards to the point where his forecast is now well below the current estimate for GDP in that year.

Unfortunately, it is often the case that a forecaster’s reputation is based more on whether his prediction is out of line with the consensus view at a moment in time than on its accuracy after the fact. Economists with forecasts far outside the consensus—either on the high side or the low side—tend to suffer for it, professionally, even if they turn out to be right.

One area where all forecasters tend to be weak is on economic turning points, the beginning and end of recessions in particular. But this information is less useful than one might imagine because many key indicators, such as the unemployment rate, lag behind the business cycle and, as a consequence, so do the public’s perceptions. Polls show that a large segment of the population believes the U.S. is still in a recession even though, technically, the recession began in December 2007 and ended in June 2009, according to the National Bureau of Economic Research.

The NBER is the “official” scorekeeper of business cycles because it was founded in 1920 for that purpose and because allowing the government to call such things would be too political.

Since turning points are so important, some forecasters are what I call “broken clocks”—that is, they are right twice a day, but wrong the rest of the time. Such forecasters are always predicting a recession or hyperinflation right around the corner, year in, year out. Every once in a while they turn out to be right and they milk it for all it’s worth, hoping investors will forget all the times they were wrong.

Another problem with forecasting is that a very critical input is Federal Reserve policy. Every six weeks, its members meet to discuss and possibly change monetary policy—raising or lowering interest rates, changing its scheduled purchases of Treasury securities, raising or lowering its economic forecast and other matters intensively examined by financial markets for moneymaking clues.

Curiously, the Fed’s own economic forecast isn’t much better on average than those of private forecasters, even though Fed economists presumably know the future course of monetary policy and have access to other data available only to them, such as the views of foreign central banks.

Nevertheless, there is a small army of economists known as Fed watchers whose entire day consists of parsing the public statements of various Fed officials—in addition to the seven members of the Federal Reserve Board, there are 12 regional Federal Reserve Bank presidents, all of whom give speeches and media interviews on a regular basis. The Fed also publishes important economic data that it collects itself and analyzes the data published by other agencies. Fed watchers know from experience that certain numbers, such as payroll employment, are closely followed by the Fed.

Some journalists, such as John Hilsenrath of the Wall Street Journal, are thought to be particularly well plugged into Fed thinking and their reporting is very closely followed by Fed watchers. There are also financial institutions such as Goldman Sachs and PIMCO that are known to expend enormous resources analyzing the most subtle, even trivial, indicators that may give the slightest clues to Fed policy.

For example, when Alan Greenspan was chairman of the Fed, analysts would closely observe the size of the briefing books he carried with him into meetings of the Federal Open Market Committee for hints about what he might be thinking.

As I said, what good forecasters do isn’t so much to predict as to explain. Sometimes understanding why a forecast went wrong is the most useful thing a forecaster can do. In the recent crisis, one of the things all forecasters missed was the importance of the financial sector and the ripple effects it can create throughout the entire economy. Economists are now working overtime to find ways of incorporating financial effects into the computer models they use for forecasting and analysis.

Unfortunately, such efforts will not necessarily help predict the next recession, which, more than likely, will arise from different causes than the last one. Economists, like generals, tend to always be fighting the last war, not the next one.