By Silvia Merler, an Affiliate Fellow at Bruegel and previously an Economic Analyst in DG Economic and Financial Affairs of the European Commission. Originally published at Bruegel

Dynamic Stochastic General Equilibrium models have come under fire since the financial crisis. A recent paper by Christiano, Eichenbaum and Trabandt – who provide a defense for DSGE – has generated yet another wave of reactions in the economic blogosphere. We review the most recent contributions on this topic.

A recent paper by Christiano, Eichenbaum and Trabandt (C.E.T.) on Dynamic Stochastic General Equilibrium Models (DSGEs) has generated quite a reaction in the blogosphere. In the paper, C.E.T. argue that pre-crisis DSGE models had shortcomings that were highlighted by the financial crisis and its aftermath. But over the past 10 years, progress has been made incorporating financial frictions and heterogeneity into DSGE models and C.E.T. foresee that DSGE models will remain central to how macroeconomists think about aggregate phenomena and policy, because there is simply no credible complete alternative to policy analysis in a world of competing economic forces.

Much of the criticism of the paper refers to the first version published online – which is, however, no longer available (the latest version is dated November 27). Noah Smith has extracts of the earlier version, in particular a sentence in which C.E.T. referred to people who don’t like DSGE as “dilettantes”, because they only point to the existence of competing forces at work – and informally judge their relative importance via implicit thought experiments – but can never give serious policy advice. Smith argues that C.E.T.’s defense of DSGE as the only way to make quantitative predictions about the effects of policy changes is wrong, because there are at least two other approaches in common use – sVARs and SEMs. A structural model is also not always needed to make quantitative predictions about policy, as this can often be done in reduced form. When policy changes can be treated like natural experiments, their effects – including general equilibrium effects – can be measured directly instead of inferred from a structural model. But C.E.T. ignore the existence of natural experiments, despite the rapidly rising popularity of the natural experiment approach in economics.
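Smith’s reduced-form point can be made concrete with a toy simulation (my own illustrative sketch, not from any of the papers discussed): a single regression on simulated data yields a quantitative forecast with no structural model of optimizing agents behind it. The AR(1) process and all parameters below are assumptions chosen purely for illustration.

```python
import random
import statistics

random.seed(1)

# Simulate a "true" economy as an AR(1) process: y_t = 0.8 * y_{t-1} + shock.
phi_true = 0.8
y = [0.0]
for _ in range(500):
    y.append(phi_true * y[-1] + random.gauss(0, 1))

# Reduced-form estimation: regress y_t on y_{t-1} by ordinary least squares.
# No assumptions about preferences, technology, or expectations are needed.
lagged, current = y[:-1], y[1:]
mx, my = statistics.mean(lagged), statistics.mean(current)
phi_hat = sum((a - mx) * (b - my) for a, b in zip(lagged, current)) / \
          sum((a - mx) ** 2 for a in lagged)

def forecast(last_value, horizon):
    """Quantitative h-step-ahead prediction from the reduced form alone."""
    return (phi_hat ** horizon) * last_value

print(f"estimated persistence: {phi_hat:.3f} (true value {phi_true})")
print(f"4-steps-ahead forecast from y = 1.0: {forecast(1.0, 4):.3f}")
```

The point is not that an AR(1) is a good model of an economy, only that quantitative prediction does not, by itself, require a DSGE-style structural model.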

Bradford DeLong points out that new Keynesian models were constructed to show that old Keynesian and old Monetarist policy conclusions were relatively robust, and not blown out of the water by rational expectations. They were built to show that results implying the irrelevance of systematic policy for real variables were extremely fragile. Lucas and company then followed Prescott into the land of Real Business Cycles (RBCs), taking a residual error and claiming it was their fundamental driving exogenous variable. The DSGE framework was then constructed so that new Keynesians could talk to RBCites. None of this has, so far, materially advanced the project of understanding the macroeconomic policy-relevant emergent properties of really existing industrial and post-industrial economies.

Jo Mitchell at Critical Finance thinks that what C.E.T. are attempting to do is argue that anyone doing macro without DSGE is not doing it “properly”. But on what basis is DSGE macro “done properly”? There are two places to look for empirical validation – the micro data and the macro data. Thirty years of DSGE research have produced exactly one empirically plausible result – the expectations-augmented Phillips Curve – which was already well known. There is an ironic twist here: the breakdown of the Phillips Curve in the 1970s gave the Freshwater economists their breakthrough. The breakdown of the Phillips Curve now – in the other direction – leaves DSGE with precisely zero verifiable achievements. C.E.T.’s paper is welcome in one respect: it confirms what macroeconomists at the top of the discipline think about those lower down the academic pecking order, particularly those who take a critical view.

Lars Syll thinks that ‘rigorous’ and ‘precise’ DSGE models cannot be considered anything other than unsubstantiated conjectures as long as they aren’t supported by evidence from outside the theory or model, and no decisive empirical evidence has been presented. Advocates of DSGE modelling want to have deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we have to have ‘thick’ background knowledge of what’s going on in the real world, and not in idealised models. Conclusions can only be as certain as their premises. The modelling convention used when constructing DSGE models makes it impossible to fully incorporate things that we know are of paramount importance for understanding modern economies. Given all these fundamental problems for the use of these models and their underlying methodology, it is beyond understanding how the DSGE approach has come to be the standard approach in ‘modern’ macroeconomics. DSGE models are based on assumptions profoundly at odds with what we know about real-world economies. That also makes them little more than overconfident story-telling devoid of real scientific value.

Chris Surro at Pretense of Knowledge argues that the problem with DSGE models is not that they are unable to explain specific economic phenomena, but that they can explain almost any economic phenomenon you can possibly imagine, and we have essentially no way to decide which models are better or worse than others except by comparing them to data that they were explicitly designed to match. All that the DSGE model itself adds is a set of assumptions – which everybody knows are false – that generate those intuitive results. C.E.T. do nothing to address this criticism. Surro argues that macroeconomics should be exactly the opposite: start by getting the assumptions right. Since we will never be able to capture all of the intricacies of a true economy, the model economy should look very different from a real economy. However, if the assumptions that generate that economy are realistic, it might still provide answers that are relevant for the real world. A model that gets the facts right but the assumptions wrong probably does not.

Brian Romanchuk at Bond Economics thinks that the recent attempt at a defence by C.E.T. was such a spectacular intellectual failure that it is not worth taking seriously. One could argue that we need to use a modelling strategy similar to the one used by DSGE modellers (i) to account for a shifting policy environment, and (ii) to take into account macro relationships between all variables. Although those are reasonable points, it does not mean that DSGE macro actually fulfils those objectives. One could easily raise doubts about other methodologies, but the paper by C.E.T. went completely off the rails by arguing that no other economic modelling methodology even exists.


Any “representative agent” model is a demonstrably false representation of reality, as 40 years of health economics, academic marketing and other disciplines will tell you – and one that will lead to grossly wrong predictions. We have had the ability to properly segment and model the population for many years now. Why are DSGE models not laughed out of court at their first sentence?

I guess I had always thought the word was related to stoichiometry, which I first learned of in chemistry classes, and which means a very exact, invariable relationship between elements in chemical reactions. So I assumed that the critique of DSGE theory – and I think this is borne out in this very post – is that it is a rather inexact methodology that doesn’t supply any predictive information beyond what a person familiar with the economy would come to anyway, with the same probability of guessing right.

So I was somewhat surprised that the very title (DSGE) of the “model” seems to imply that it is nothing more than a random number generator.

@fresno dan, 12-12-17, 7:32 am – A deeper problem implied by “stochastic” is that the assumption that economic events or processes will follow some kind of probability distribution allowing for useful predictions is simply false.

Without seriously massaging empirical data, econometricians cannot demonstrate that these probability distributions are accurate descriptions of the real economy.

The reason is that the errors in a normal distribution are assumed to be random. If you assume the errors are not random, you get very different results. Nassim Taleb and the late, great Benoit Mandelbrot did much work on this. There are two books by Mandelbrot called Fractals and Scaling in Finance and The Misbehavior of Markets. I’d highly recommend them.

Also, the distributions that occur in the real world very often scale to power laws where there is a winner-take-all effect. This is especially true in financial markets and in finance/economics in general.
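The contrast between thin-tailed assumptions and power-law reality can be seen in a few lines of simulation (an illustrative toy of mine, not drawn from Taleb or Mandelbrot): under a light-tailed distribution the largest single observation is a negligible share of the total, while under a Pareto distribution with infinite variance it can dominate the total: the winner-take-all effect.

```python
import random

random.seed(42)
N = 100_000

# Light-tailed benchmark: exponential draws, where extremes die off fast.
light = [random.expovariate(1.0) for _ in range(N)]

# Heavy-tailed: Pareto with shape alpha = 1.5, which has infinite variance.
alpha = 1.5
heavy = [random.paretovariate(alpha) for _ in range(N)]

def top_share(xs):
    """Fraction of the grand total captured by the single largest draw."""
    return max(xs) / sum(xs)

print(f"exponential: largest draw holds {top_share(light):.4%} of the total")
print(f"pareto(1.5): largest draw holds {top_share(heavy):.4%} of the total")
```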

Mainstream economists think there is a gain from the DSGE style of modelling in its capacity to offer some kind of structure around which to organise discussions. To me, that sounds more like a religious theoretical-methodological dogma, where one paradigm rules in divine hegemony. That’s not progress. That’s the death of economics as a science.

DSGE models do not push economic science forwards one single millimetre if they do not stand the acid test of relevance to the target.

It’s typical of the economics discipline to take insights and arguments that demolish the very foundations of mainstream economic analysis and first admit their validity in some limited sense, and then proceed to completely ignore them. Or else they will create a sub-discipline to deal specifically with some particular way in which the mainstream theory utterly fails — behavioral economics, for instance — and then, once the problem has been safely siloed, continue preaching the same ol’, same ol’. Now, when someone criticizes your unrealistic assumptions you can say, “That’s what behavioral economics is all about,” and not have to actually address the critique of your model.

That’s interesting; flawed ecological arguments run on a very similar basis. I recall a seminar by a certain modeler who claimed to understand the salmon population – but he was only looking at dynamics in the ocean setting and entirely ignored the fact that salmon spawn in freshwater streams. The ability to reproduce would obviously affect the population. When someone got up and asked him how he hoped to understand the dynamics of the salmon population when he was entirely ignoring issues like drought’s effect on streamflow, or damming of rivers, or logging sediment clogging rivers, he smiled and said, “that would be a separate subject.” It didn’t go over very well.

The problem you have in arguing about DSGEs is that if you don’t address their purely symbolic importance, you’re missing the most important point about them. Very crudely, to make money in economics you need to at least tip your hat to a supposed self-sustaining, self-stabilizing nature of capitalism as being derived from fundamental laws that make it inevitable and unavoidable.

Remember Larry Summers and ‘the-laws-of-economics-are-like-the-laws-of-engineering-they-work-the-same-everywhere’? In any other discipline someone making a comment as fundamentally stupid as that would have lost their job a long time ago, but orthodox institutional economics is a cargo cult, based on worshipping at the feet of old dead white guys.

Like climate change denialism, the more empirical evidence absolutely fails to support what DSGErs claim is reality, the more they double down on it – because that’s where the corporate sponsorship, the prestigious academic posts, the awards and the big money are.

And why not? Economists are nothing if not logical, and if the big money was in phrenology (reading the bumps on people’s heads) then there would be dynamic bumpology models…

“Very crudely, to make money in economics you need to at least tip your hat to a supposed self-sustaining, self-stabilizing nature of capitalism as being derived from fundamental laws…”

I think this is absolutely correct, in that it was never really anything particular about the model – other than a bunch of silly hand-waving like “the model has to equilibrate based on the Euler condition, where the representative agent is maximizing his utility by trading off between future and present consumption, etc.” – the takeaway was the story. Not the silly model.

For example, the reason there is unemployment was that wages didn’t adjust. Of course the implication is that the unemployed can blame the unions, thus pitting workers against each other.

Stochastic modeling techniques are frequently used for designing radar systems. Maybe these economists just need to change disciplines and go work for the MIC. They might be able to breathe new life into the Sgt. York. That would employ both their skills at stochastic modeling and their skill at selling models that don’t work.

“A recent paper by Christiano, Eichenbaum and Trabandt (C.E.T.) on Dynamic Stochastic General Equilibrium Models (DSGEs) has generated quite a reaction in the blogosphere. In the paper, C.E.T. argue that pre-crisis DSGE models had shortcomings that were highlighted by the financial crisis and its aftermath. But over the past 10 years, progress has been made incorporating financial frictions and heterogeneity into DSGE models and C.E.T. foresee that DSGE models will remain central to how macroeconomists think about aggregate phenomena and policy, because there is simply no credible complete alternative to policy analysis in a world of competing economic forces.”

Thanks to all of you for these fascinating comments and observations. A true Xmas treat.

I’m no economist, but I’ve been working with Bayes Nets for some years now, and a few months back discovered DSGEs are essentially dynamic Bayesian constructs, faced with the same strengths and limitations as my modest efforts. In light of this I offer some comments on DSGEs for consideration:

1. In regard to the above quote, C.E.T. seem to dismiss the possibility that, at this stage in model development, economies are so complex and poorly defined in terms of variables that credible models beyond simple empirical extrapolation/correlation lines are simply not possible, and they are illogically defending their position with a negative – a bit like saying the Democrats are a vibrant progressive political party because Trump and the Republicans are so appalling.

2. A dangerous feature of Bayes (Nets) models is that you can easily use large data sets to learn what appear, seductively, to be good models. But then when you do a serious error rate check you find that, yes, the model is statistically better than zero – but its predictive reliability is so poor it is pretty useless. Economics looks to have a real problem here, in that it’s possible to select data, or pile enough into a learning process, to get something with a positive correlation – i.e. it’s publishable – but it may still be pretty useless beyond being able to assist in conceptualizing the problems faced.

3. Dynamic (Bayes nets) models are even more problematic. During full model creation the number of somewhat-correlated variables explodes, along presumably with the noise, while parsimony is lost and ad hoc constraints on variables are maintained – a bit like the NASA satellite data analysis program, which rejected ozone-hole-detecting measurements until the English showed that yes, there is a hole.

4. In light of this and the other comments above, I wondered how many economists have heard the question “How many parameters does it take to fit an elephant?” (possibly derived from von Neumann or Gauss, depending on your source):

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

By this he meant that one should not be impressed when a complex model fits a data set well. With enough parameters, you can fit any data set. It turns out you can literally fit an elephant with four parameters if you allow the parameters to be complex numbers.
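Points 2 and 4 above can be sketched concretely (a hypothetical toy of my own, not a Bayes net or a DSGE model): a fit with one parameter per observation matches the training data exactly, yet a serious error rate check on held-out data shows it predicts far worse than a plain two-parameter line.

```python
import random
import statistics

random.seed(0)

# The "true" process is a noisy straight line: two real parameters.
def sample(x):
    return 2.0 * x + 1.0 + random.gauss(0, 1)

train_x = [float(i) for i in range(15)]
train_y = [sample(x) for x in train_x]
test_x = [i + 0.5 for i in range(14)]   # held-out points for the error check
test_y = [sample(x) for x in test_x]

def interpolate(xs, ys, x):
    """Lagrange polynomial through all n training points: n parameters,
    so the fit to the training data is exact by construction."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def ols_line(xs, ys):
    """Plain two-parameter least-squares line."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
            sum((a - mx) ** 2 for a in xs)
    return lambda x: my + slope * (x - mx)

def mse(model, xs, ys):
    return statistics.mean((model(x) - y) ** 2 for x, y in zip(xs, ys))

poly = lambda x: interpolate(train_x, train_y, x)
line = ols_line(train_x, train_y)

print("15-parameter fit: train MSE", mse(poly, train_x, train_y),
      "| test MSE", mse(poly, test_x, test_y))
print(" 2-parameter fit: train MSE", mse(line, train_x, train_y),
      "| test MSE", mse(line, test_x, test_y))
```

The many-parameter fit “wiggles its trunk” through every training point, and that is exactly why it fails out of sample.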

Another great example of the seductive nature of models is told in a story by Freeman Dyson following a meeting with Enrico Fermi (Dyson, F., 2004, “A meeting with Enrico Fermi”, Nature 427, 297).

Surely if such great physicists and mathematicians as the above can admit the limitations of their models, then once and for all economists can too.