Tuesday, January 22, 2013

Macro always fights the last war

Matthew Klein of The Economist has a great post up about the history of modern macro, drawing on a presentation by the incomparable Markus Brunnermeier. If you are at all interested in macroeconomics, you should check it out (though of course econ profs and many grad students will know the bulk of it already).

Here is Klein's summary of pre-2008 macro:

As the slideshow makes clear, macro has evolved in fits and starts. Existing models seem to work until something comes along that forces a rethink. Then academics tinker and fiddle until the next watershed.

In response to the Great Depression, John Maynard Keynes developed the revolutionary idea that individually beneficial actions could produce undesirable outcomes if everyone tried to do them at the same time. Irving Fisher explained that high levels of debt make economies vulnerable to downward spirals of deflation and default...

Problems developed in the 1970s. “Stagflation,” the ugly portmanteau that describes an economy beset with rapid price increases and high levels of unemployment was not supposed to be possible—yet it was afflicting all of the world’s rich countries...A new generation of macroeconomists, including Ed Phelps, Robert Lucas, Thomas Sargent, Christopher Sims, and Robert Barro, responded to the challenge in the late 1970s and early 1980s...[their] new “dynamic stochastic general equilibrium” (DSGE) models were based on individual households and businesses that tried to do the best they could in a challenging world...Despite...many drawbacks, DSGE models got one big thing right: they could explain “stagflation” by pointing to people’s changing expectations.

Klein and Brunnermeier both say that macro is changing again, this time in response to the Great Recession and the financial crisis that preceded it. The big change now, they say, is adding finance into macro models.

Reading this, one could be forgiven for thinking that macro lurches from crisis to crisis, always trying to "explain" the last crisis, but always missing the next one.

How true is that? Well, on one hand, science should progress by learning from its mistakes. You have a model that you think explains the world...then something new comes along, and you need to change your model. Great. That's how it's supposed to work.

Doesn't that describe exactly what macro has been doing? Well, maybe, but maybe not. First of all, what you shouldn't do is develop models that only explain the most recent set of observations. In the 70s and 80s, the DSGE models that were developed to explain stagflation had a very hard time explaining the Great Depression. Robert Lucas joked about this, saying: "If the Depression continues, in some respects, to defy explanation by existing economic analysis (as I believe it does), perhaps it is gradually succumbing under the Law of Large Numbers."

But the fact that DSGE models couldn't explain the Depression was not seen as a pressing problem. There was no big push to modify or expand the models in order to explain the biggest economic crisis of the 20th century (though there were scattered attempts).

So macro seems to suffer from some "recency bias".

And here's another issue. When we say macro models "explain" a phenomenon, that generally means something very different, and less impressive, than it means in the hard sciences (or even in microeconomics). When we say that 80s-vintage DSGE models "explain" stagflation, what we mean is "there is the possibility of stagflation in these models". We mean that these models are consistent with observed stagflation.

But for any phenomenon, there are many possible models that are consistent with that phenomenon. How do you know you've got the right story? Well, there are several ways you can sort of tell. One is generality of a model: how well does the model explain not just this one thing, but a bunch of other things at the same time? (This is closely related to the idea of "unification" in physics.) If your model can explain a bunch of different stuff, then it's probably more likely to have captured something real, instead of being a "just-so story".

But modern macro models don't do a lot of that. Each DSGE model matches a few things, and not other things (this is why they are all rejected by formal statistical testing). Ask the author about the things his model doesn't match, and he'll shrug and say "I'm not trying to model the whole economy, just a couple of things." So there's a huge proliferation of models - not even one model to "explain" each phenomenon, but many models per phenomenon, and very little in the way of choosing which model is appropriate to use, and when.

Another clue that you've got the right story is if your model has predictive power. But modern macro models display very poor forecasting ability (as do non-modern models, of course).

Before the 2008 crisis, there doesn't seem to have been very much dissatisfaction with the state of macro. Models were rejected by statistical tests...fine, "All models are wrong," right? There were 50 models per phenomenon...fine, "We have models for anything!" Models can't forecast the future...fine, "We're not interested in forecasting, we're interested in giving policy advice!" I wasn't alive, but I imagine there existed a similar complacency before the 1970s.

Then 2008 came, and suddenly everyone was scrambling to update and modify the models. No doubt the new crop of finance-including models will be able to tell a coherent, plausible-sounding story of why the 2008 Financial Crisis led to the Great Recession. (In fact, I suspect quite a number of mutually conflicting models will be able to tell different plausible-sounding stories.) And then we'll sit back and smile and say "Hey, look, we explained it!"

But maybe we didn't.

Of course, this doesn't necessarily mean macroeconomists could do a lot better. Maybe this is the best we can do, or close to it. Maybe time-series data is so inherently limited, data collection so poor, and macroeconomies so hideously complex, non-ergodic, and chaotic that we're never going to be able to have predictive, general models of the macroeconomy, no matter how many crises we observe. In fact, I wouldn't be terribly surprised if this turned out to be the case. But I think at least we could try, a little more pre-emptively than in the past. And I think that if we didn't tend to oversell the power of the models we have, we wouldn't be so embarrassed when the next crisis comes along and smashes them to bits.

41 comments:

"In fact, I wouldn't be terribly surprised if this turned out to be the case."

You know, I would be. Limited data is a problem that can be solved as data collection techniques become more sophisticated. It was a lot easier to decipher moth attraction once we had a way of detecting pheromones - Fabre knew approximately what was going on when his house filled with male moths attracted to the caged female, but he couldn't really explain it with the existing knowledge base. Hideously complex macroeconomies may appear frightening, but lots of things have been hideously complex before they were carefully disassembled and the complex tapestries of interactions teased apart. Maybe macroeconomies will prove to be different, but I would doubt it. Perhaps we just need Hari Seldon's insights...

I agree with you, though - I think economics would be better served with a little less alchemy-style "My school is Real, your school is made up of big old bed-wetting poopy-heads!", and a little more "Where and why do the models not fit the observations, and how do we fix them?" Scientists are just human, and there are plenty of personality-driven back-stabbing feuds in science, so why wouldn't a fumbling metaphysics like economics be at least as prone to them? Nobody likes to be publicly wrong, but if you can't separate yourself from your models, you should probably go into art, where at least the models are attractive.

"Maybe this is the best we can do, or close to it" Or... maybe build a new macro upon the shoulders of giants like Smith, Mill, Marx, Schumpeter, Veblen, Weber, Keynes, Polanyi, Minsky, etc. and abandon the neoclassical cul de sac altogether?

All I'm saying is, there is every reason to be optimistic about the potential to construct predictive models: there is a huge stock of knowledge to build upon. A good start would be to revive the Post-Keynesian school of thought. http://en.wikipedia.org/wiki/Post-Keynesian_economics

Economists working in this tradition have had a pretty good record when it comes to predictions (See http://mpra.ub.uni-muenchen.de/15892/1/No_one_saw_this_coming.pdf)

A lot of people in international economics saw that coming - in fact, most did - and the euro was supposedly an "unexpected success". Saying in 1992 that "the EU is no good" isn't useful unless you do a forecast AND

Forecasting is different - Monday will be mostly sunny, light rain on Tuesday. Economics can barely get to the middle of Monday right now, and there are endless debates about the temperature in the morning and whether umbrellas keep you warm.

Dimitar: "Forecasting is different." There is a difference between predicting that the stock market will crash tomorrow at noon, and understanding that the policy-driven course we are on is unsustainable and should be changed before we suffer the consequences.

I was in grad school in the 1970s (and recently retired). So I have lived through maybe 4 generations of intro macro books. And my hypothesis has been, since the 1980s, anyway, that intro macro books were being written with an eye toward explaining what went wrong when the author(s) of the book were in grad school. Which also would mean that the theorists were doing the same thing...So maybe I should have written that up?

Re: no single unified model - you may find these remarks by Itzhak Gilboa relevant. Basically he claims that the primary role of model-building in economics is not to build a single "true" theory (judged by its predictive power), but instead to create a pool of "analogies" - abstract and simplified descriptions of different economic mechanisms. The task of an applied economist is then to select from these analogies the one most relevant to the particular problem he studies. To me, this sounds like a reasonable description of the kind of research published in academic journals. Though of course this raises a question about how much effort we should allocate to building theories vs. figuring out how to apply them...

Thanks for sharing this, ivansml. I enjoyed reading it, even though I might not necessarily agree with all of what Gilboa wrote. Nevertheless, I believe that J.M. Keynes himself said something similar - that in economics, one has to choose from among multiple models the one appropriate to the situation at hand.

Loved the leopard analysis, but will economists change their spots? Noah, you've recently issued a couple of challenges to your readers to come up with solutions, so, apologizing in advance for taking up so much space, here goes.

From my post comparing computer simulations of weather and global warming to current economic modeling at http://somewhatlogically.com/?p=785

"AWH Phillips, of Phillips Curve and Moniac hydromechanical simulator fame, proposed that a similar effort might be made in the study of economics. Realize that he had done much pioneering work on analog simulators and economic stability in the 50’s when the results were shown on an oscilloscope!

Phillips proposed that future developments might enable the construction of an electronic analog machine that “using a combination of econometric and trial-and-error methods, the system of relationships, the form of time lags and values of parameters of the analog might be adjusted to produce as good a ‘fit’ as possible to historical time series, and the resulting system used in making economic forecasts and formulating policies. This would be a very ambitious project. Apart from the engineering problems involved, requiring close coordination of statistical, theoretical and historical studies in economics…” The engineering problems have been overcome with the development of massive computational capability, and the history of climate modeling provides a direct parallel in the study of global warming, an even larger problem than economic stability."

While I'm not an economist, I've looked, from an environmental point of view, at how one might simplify policy analysis of resource utilization, and I have done some very initial work on extending Phillips's flow analog using a fluid-dynamics simulation, rather than the tanks and valves of Phillips's 'Moniac' hydromechanical machine, thus taking into account friction, mass, and the ability to convert assets as potential energy into kinetic energy representing the rate of transactions. You can find a very general description of the basis of the non-linear, dynamic, three-axis-of-freedom simulation here, in an article mostly about risk:

http://somewhatlogically.com/?p=598

The geeky notes at the end give an example of the output, looking at declining loan value and economic stability, with links to the Dominican University working paper. It's interesting to compare the model output curves with Phillips's paper on simulation, "Stabilization Policy in a Closed Economy", with specific attention to Figure 9: http://xmlservices.unisi.it/depfid/joomla/iscrizione/materiali/16888/Phillips%20EJ%201954.pdf Incidentally, this paper, in Figure 11, marks the first appearance of what would become the Phillips Curve, and more importantly for my work, he was using very early analog computers developed to predict aircraft stability to run his models.

The biggest problems in building effective macro models are dodgy microfoundations — falsified or unfalsifiable assumptions like perfect competition, MR=MC, equilibration, rational optimisation, and most of the things Steve Keen relates in Debunking Economics. The (ridiculous and rather idiotic) assumption from Samuelson that it is unnecessary to model banks because they are only an intermediary is just one of these ridiculous claims.

I'm starting to think that a big problem in the development of economics is the concept of pure economics. This is found in Walras (his foundational general equilibrium text being Elements of Pure Economics) and also in Menger (although Menger at least noted that empirical economics was also an important endeavour), and it spread out through multiple branches of economics - the Austrians through Mises' "pure" praxeology, the Walrasians (of course), and also to some degree the Keynesians, who were influenced by Marshall and Walras. There were some exceptions - economists, including macroeconomists, who worked with a focus on trying to explain observations (I'm thinking of Minsky in particular) - but the 20th Century in economics was dominated by overarching theorising starting from "self-evident" assumptions (that very often turn out to be untrue).

The great break in the tradition of "pure" economics is of course behavioural economics, which sets out to study economic behaviour observationally. The great challenge of the 21st Century is to build a new economics on behavioural microfoundations, treating macro phenomena as emergent.

Personally, I can explain stagflation as a response to the post-war baby boom entering the workforce plus world oil production hitting the wall. If you look at the latter curve, it's an incredibly smooth exponential right up to 1970, at which point it becomes an angry jaggy line. The reason standard models can't explain it is because they're about money instead of energy.

In physics, the best experimenters are often former excellent theoreticians. A good experiment needs a thorough and deep understanding of its purpose and influences. I would expect a breakthrough in macro when a good set of accurate time series is available. None of the hundreds of macro time series is consistent over time, as a researcher may read in the documentation. This is because of the permanent change in all macro definitions. One needs a few excellent economic minds to stop changes to the definitions and procedures.

Someone has found a new "large structure," a collection of quasars 4 billion light years across (or so they believe). Of course, no reason not to overstate the case, so this has to mean that the Cosmological Principle, which is just a well-known assumption, doesn't hold.

Now, you and I could engage in a lengthy exchange about why this find doesn't disprove the Cosmological Principle, or this or that or blah, blah, blah, but to what end?

BTW, how does one convert an assumption into a Principle? Hold a conference? It seems to me that, going forward, you should put forward your ideas as Principles, marking the thoughts of others "mere assumptions."

Save for Brad, there are no other economists actively working on thought experiments on quantum gravity - they don't have the thought and reasoning power that you have so often shown - so let's sit back and see what we hear from the other polymaths before commenting further.

But the cosmological principle is so ingrained that it is hard for researchers to shake. "People are maybe understandably reluctant to give up the thing, because it will make cosmology too bloody complicated," says Sarkar.

It seems to me that, at best, macro can make only two predictions at present.

First, the cost of health care will continue to rise. This is due to the drop in prices (including implicit drops due to increased productivity) in other large segments of the economy (principally technology, computers, software, internet, etc.). IOW, people will have more $$$ to spend, and health-care prices will rise.

Second, the same will be true for higher education, until we hit a "cliff," when there is a shift to distance, internet-based education, at which time prices will plunge. The latter will likely occur in connection with the build-out of Google Gig. Today, a hacker house in KC with Google Gig costs $39 a night, including Google Gig. The education is free. That is the future baseline price of higher education, without a meal plan.

Beyond that, it seems to me there is little else that macro can forecast.

Your initial commentary on Japan is holding up well. I think your prediction was right (so far) because you got the context right. Often economists get the context wrong. Currently I'm reading Nate Silver's good book.

Models were rejected by statistical tests...fine, "All models are wrong," right? There were 50 models per phenomenon...fine, "We have models for anything!" Models can't forecast the future...fine, "We're not interested in forecasting, we're interested in giving policy advice!"

Not to mention that then it is very easy to go from there to "policy X understates the substantial professional uncertainty and disagreement about the wisdom of implementing such policy." With X being different things depending on who is making this critique. In other words, macroeconomics has models to rebut or buttress any policy position you choose.

What is funny about the modern macro crowd is that they act as if a consensus really existed, when it clearly does not.

Really good points, but isn't it true that modern economists predict things correctly all of the time (i.e. the effects of business cycles, the effects of supply and demand), just like modern physicists do? What we're hoping for is that they be able to predict *everything*. The fantastic thing about our times is that we can hope for this at all. As Louis CK said, "Everything is amazing, and nobody is happy."

good post. As a practitioner of empirical macroeconomics I confess that you are correct in terms of the challenges we face. Imagine a model with dozens of different shocks, many of them quite persistent, and only a few decades worth of data. We really have very few degrees of freedom to work with. As for theorists, the biggest challenge is that the quantitative rigor of the models rises exponentially as they become more complex.

Having said that, I do think that exactly for this reason we try to estimate the same variable using different datasets as such sets become available. Just do a search on how many different papers estimate the elasticity of intertemporal substitution or test the permanent income hypothesis. As Lucas pointed out recently, our hope is to produce a large number of studies to allow a meta-analysis that will narrow the range within which the true value likely lies.

Moreover, I do believe that DSGE models, contrary to the early Keynesian partial-equilibrium models, represent an attempt to form a unified theory. First, they merge business-cycle models with growth models, precisely because a model that explains the behavior of the economy in the short run should also be able to explain its long-run growth. Second, they model the behavior of agents in the same way as standard microeconomic theory. Finally, they view the economy as a collection of related markets. The problem with the initial DSGE models was that they were too simplistic, they included only a small number of markets and ignored other, potentially important ones (e.g. financial markets were ignored, labor markets were treated as one unified market, etc.). But their developers have always admitted that. I think the current agenda is to expand these models by adding more markets, incorporate recent ideas from microeconomics (e.g. frictions, behavioral biases, etc.) and see what happens.

Sure they are. There is really only one market, the market for money, which together with the I=S identity determines the demand side. But there are no markets for goods or factors of production, no supply side. Everything is demand-driven.

Nope. The I=S condition is not a market; it is derived from an identity, the National Income Identity: Y=C+I+G+NX. Which is why Krugman, a Keynesian, refers to it as an identity. If it were an equilibrium condition there should be a supply, a demand, and a price that adjusts to bring the two into equilibrium. Where is the supply here? Which price adjusts to set I=S? The answer, of course, is none.

The interest rate, determined in the money market, decides I, which along with the other expenditure components determines GDP, which in turn determines saving. If I > S then GDP simply rises, and so does S, until the two are equal. If I < S the opposite happens. There is no price mechanism. This view is of course different from the classical view, according to which the I=S condition is brought about by the adjustment of the interest rate in the loanable funds market. In the Keynesian model you can't have that, because the interest rate has already been determined in the market for money.
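The quantity-adjustment story above is easy to sketch numerically. The following is just an illustrative toy, not anyone's actual model, and every parameter value is made up: output chases planned expenditure until saving equals planned investment, with no price doing any of the work.

```python
# Toy Keynesian-cross adjustment (all parameter values hypothetical):
# if planned investment I exceeds saving S, GDP rises until S = I.
def converge_to_equilibrium(I=200.0, c=0.8, C0=100.0, G=150.0, steps=200):
    """Iterate output toward planned expenditure; no price mechanism involved."""
    Y = 500.0  # arbitrary starting GDP, well below equilibrium
    for _ in range(steps):
        Y = C0 + c * Y + I + G      # output adjusts to planned expenditure
    S = Y - (C0 + c * Y) - G        # national saving in a closed economy
    return Y, S

Y, S = converge_to_equilibrium()
print(round(Y, 2), round(S, 2))     # 2250.0 200.0 -- S has converged to I
```

With these numbers the fixed point is Y = (C0 + I + G) / (1 - c) = 450 / 0.2 = 2250, at which saving exactly equals the assumed investment of 200; output, not the interest rate, does the equilibrating.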

In the Keynesian model à la IS-LM, the interest rate is not determined only by the money market. The money market determines a set of equilibrium (Y, r) combinations (this is the LM curve). But then you also need the IS curve (which can be interpreted either as equilibrium in the loanable funds market or in the product market) in order to pin down the global equilibrium. That is why IS-LM is a general equilibrium model.

Here we go again. The IS is the locus of points where actual and planned expenditure are equal (Keynesian cross). It is not a locus of market-equilibrium points! The interest rate is determined solely in the money market. Its equilibrium level then affects planned expenditure through investment, which in turn determines GDP. A higher interest rate lowers investment expenditure and therefore GDP. This is how you get the downward-sloping IS curve. In the IS curve the interest rate is exogenous (determined in the money market). In the loanable funds market the interest rate is endogenous. I am not sure where you get your information, but it is completely off. The IS-LM model is partial equilibrium. There is no theory of supply whatsoever. GDP is determined solely by demand, and particularly by the money market in combination with the autonomous expenditure components!

Let's try this. You say that in the IS-LM model the interest rate is determined solely in the money market. If that is true, then you would agree that there shouldn't be any feedback effects from (interest-rate-induced) changes in income back onto the money market, right? As you say, the interest rate is completely exogenous to the IS relation, so the causality runs in only one direction. Agree?

So, suppose the Fed decides to expand the money supply (which is the policy instrument of the monetary authority in IS-LM). This changes the equilibrium in the money market to one with a lower interest rate. Similar to your example, a lower interest rate stimulates investment and causes planned expenditure to rise. Per the Keynesian cross, this moves the economy to a higher income equilibrium. Is this the whole story? No! Remember that money demand depends also on income (the transactions motive), not only on the interest rate. With higher income there will be an increased demand for money. This will again change the interest rate! In other words, there are feedback effects, and we will have a new round of adjustments. So, the global equilibrium (the final value of income and the interest rate) is determined simultaneously through these interactions between the money market (LM) and the goods market (IS).

You are not thinking in terms of an IS-LM model. Your intuition is more along the lines of the model proposed by David Romer in his JEP 2000 paper ("Keynesian Macroeconomics without the LM Curve"). In that model the LM curve is replaced by a horizontal monetary policy rule chosen by the central bank. The central bank sets the interest rate, and this value, fed into the IS curve, gives you output. Romer's IS-MP model abstracts completely from the money market. The interest rate is exogenously determined by the central bank. But that is not the case with IS-LM!
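For what it's worth, the simultaneity being argued about in this exchange can be seen in a toy linear IS-LM system. Every number below is hypothetical and the functional forms are the standard textbook linear ones, not anyone's estimated model: money demand depends on Y, and Y depends on r through investment, so the two must be solved jointly.

```python
# Minimal linear IS-LM sketch (all parameter values hypothetical).
# IS:  Y = C0 + c*(Y - T) + I0 - b*r + G
# LM:  M/P = k*Y - h*r
# Substituting r from LM into IS gives one linear equation in Y.
def islm(M=1000.0, P=1.0, C0=200.0, c=0.75, T=100.0,
         I0=300.0, b=25.0, G=250.0, k=0.5, h=50.0):
    A = C0 - c * T + I0 + G                      # autonomous expenditure
    Y = (A + (b / h) * (M / P)) / (1 - c + b * k / h)
    r = (k * Y - M / P) / h                      # back out r from LM
    return Y, r

Y1, r1 = islm()              # baseline equilibrium
Y2, r2 = islm(M=1100.0)      # monetary expansion
print(Y2 > Y1 and r2 < r1)   # True: higher M raises Y and lowers r
```

Note that the equilibrium r depends on IS parameters (b, c, G) as well as on the money market, which is exactly the feedback the commenter above is pointing to; only by setting k = 0 (money demand insensitive to income) would the money market pin down r on its own.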

"...will be able to tell a coherent, plausible-sounding story of why the 2008 Financial Crisis led to the Great Recession."

Well, there's your problem right there. You mistake correlation for causation and are already expecting models to show something that didn't happen.

Imagine all the casinos in vegas were taking bets on housing starts and all the money was coming in on the long side. Housing starts...stopped. All the casinos became insolvent when they had to make good. Do you think the casinos would have been responsible for the slump in aggregate demand caused by bursting of the largest asset bubble in history? Or maybe both (crash of the casinos and the economy) were symptoms of the same disease (collapse of asset bubble)?

tl;dr:

Already have models explaining asset bubbles. Financial Crisis did not cause Great Recession. To look for a model that says otherwise is silly.