Wednesday, September 14, 2016

The new heavyweight macro critics

I got tired of lambasting macroeconomics a while ago, and the "macro wars" mostly died down in the blogosphere around when the recovery from the Great Recession kicked in. But recently, there have been a number of respected macroeconomists posting big, comprehensive criticisms of the way academic macro gets done. Some of these criticisms are more forceful than anything we bloggers blogged about back in the day! Anyway, I thought I'd link to a couple here.

First, there's Paul Romer's latest, "The Trouble With Macroeconomics". The title is an analogy to Lee Smolin's book "The Trouble With Physics". Romer basically says that macro (meaning business-cycle theory) has become like the critics' harshest depictions of string theory - a community of believers, dogmatically following the ideas of revered elders and ignoring the data. The elders he singles out are Bob Lucas, Ed Prescott, and Tom Sargent.

Romer says that it's obvious that monetary policy affects the real economy, because of the Volcker recessions in the early 80s, but that macro theorists have largely ignored this fact and continued to make models in which monetary policy is ineffectual. He says that modern DSGE models are no better than old pre-Lucas Critique simultaneous-equation models, because they still take lots of assumptions to identify the models, only now the assumptions are hidden instead of explicit. Romer points to distributional assumptions, calibration, and tight Bayesian priors as ways of hiding assumptions in modern DSGE models. He cites an interesting 2009 paper by Canova and Sala that tries to take DSGE model estimation seriously and finds (unsurprisingly) that identification is pretty difficult.

As a solution, Romer suggests chucking formal modeling entirely and going with more general, vague but flexible ideas about policy and the macroeconomy, supported by simple natural experiments and economic history.

Romer's harshest zinger (and we all love harsh zingers) is this:

In response to the observation that the shocks [in DSGE models] are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that "the more significant the theory, the more unrealistic the assumptions (p.14)." More recently, "all models are false" seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite.

The noncommittal relationship with the truth revealed by these methodological evasions...goes so far beyond post-modern irony that it deserves its own label. I suggest "post-real."

Ouch. He also calls various typical DSGE model elements names like "phlogiston", "aether", and "caloric". Fun stuff. (Though I do think he's too harsh on string theory, which often is just a kind of math that physicists do to keep themselves busy, and has no danger of hurting anyone, unlike macro theory.)

Meanwhile, a few weeks earlier, Narayana Kocherlakota wrote a post called "On the Puzzling Prevalence of Puzzles". The basic point was that since macro data is fairly sparse, macroeconomists should have lots of competing models that all do an equally good job of matching the data. But instead, macroeconomists pick a single model they like, and if data fails to fit the model they call it a "puzzle". He writes:

To an outsider or newcomer, macroeconomics would seem like a field that is haunted by its lack of data...In the absence of that data, it would seem like we would be hard put to distinguish among a host of theories...[I]t would seem like macroeconomists should be plagued by underidentification...

But, in fact, expert macroeconomists know that the field is actually plagued by failures to fit the data – that is, by overidentification.

Why is the novice so wrong? The answer is the role of a priori restrictions in macroeconomic theory...

The mistake that the novice made is to think that the macroeconomist would rely on data alone to build up his/her theory or model. The expert knows how to build up theory from a priori restrictions that are accepted by a large number of scholars...[I]t’s a little disturbing how little empirical work underlies some of those agreed-upon theory-driven restrictions – see p. 711 of Lucas (JMCB, 1980) for a highly influential example of what I mean.

In fact, Kocherlakota and Romer are complaining about much the same thing: the overuse of unrealistic assumptions. Basically, they say that macroeconomists, as a group, have gotten into the habit of assuming stuff that just isn't true. In fact, this is what the Canova and Sala paper says too, in a much more technical and polite way:

Observational equivalence, partial and weak identification problems are widespread and typically produced by an ill-behaved mapping between the structural parameters and the coefficients of the solution.

That just means that the model elements aren't actually real things.
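To make the identification problem concrete, here is a minimal toy sketch (my own illustration, not from the Canova and Sala paper): if two "structural" parameters enter the model's reduced form only through their product, the data cannot distinguish them, and the likelihood surface is flat along a ridge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "structural model": the observable y depends on the structural
# parameters a and b only through their product a*b.
x = rng.normal(size=200)
y = 1.5 * x + 0.1 * rng.normal(size=200)   # true a*b = 1.5

def sse(a, b):
    """Sum of squared errors for candidate structural parameters."""
    return np.sum((y - a * b * x) ** 2)

# Very different structural stories, identical fit: every (a, b) pair
# with the same product lies on a flat ridge of the likelihood.
print(sse(1.0, 1.5))
print(sse(3.0, 0.5))
print(sse(0.1, 15.0))
```

All three fits are numerically identical, so no amount of data pins down a and b separately; only distributional assumptions or priors can break the tie, which is exactly the hidden-assumption problem Romer describes.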

(This critique resonates with me. From day 1, the thing that always annoyed me about macro was how people made excuses for assumptions that were either unverifiable or just flatly contradictory to micro data. The usual excuse was the "pool player analogy" - the idea that the pieces of a model don't have to match micro data as long as the resulting model matches macro data. I'm not sure that's how Milton Friedman wanted his metaphor to be used, but that seems to be the way it does get used. And when the models didn't match macro data either, the excuse was "all models are wrong," which really just seems to be a way of saying "the modeler gets to choose which macro facts are used to validate his theory". It seemed that to a large extent, macro modelers were just allowed to do whatever they wanted, as long as their papers won some kind of behind-the-scenes popularity contest. But I digress.)

So what seems to unite the new heavyweight macro critics is an emphasis on realism. Basically, these people are challenging the idea, very common in econ theory, that models shouldn't worry about being realistic. (Paul Pfleiderer is another economist who has recently made a similar complaint, though not in the context of macro.) They're not saying that economists need 100% perfect realism - that's the kind of thing you only get in physics, if anywhere. As Paul Krugman and Dani Rodrik have emphasized, even the people advocating for more realism acknowledge that there's some ideal middle ground. But if Romer, Kocherlakota, etc. are to be believed, macroeconomists aren't currently close to that optimal interior solution.

Updates

Olivier Blanchard is a bit less forceful, but he's definitely also one of the new heavyweight critics. Among his problems with DSGE models, at least as they're currently done, are 1. "unappealing" assumptions that are "at odds with what we know about consumers and firms", and 2. "unconvincing" estimation methods, including calibration and tight Bayesian priors. Sounds pretty similar to Romer.

Meanwhile, Kocherlakota responds to Romer. He agrees with Romer's criticism of unrealistic macro assumptions, but he dismisses the idea that Lucas, Prescott, and Sargent are personally responsible for the problems. Instead, he says it's about the incentives in the research community. He writes:

We [macroeconomists] tend to view research as being the process of posing a question and delivering a pretty precise answer to that question...The research agenda that I believe we need is very different. It’s hugely messy work. We need...to build a more evidence-based modeling of financial institutions. We need...to learn more about how people actually form expectations. We need [to use] firm-based information about residual demand functions to learn more about product market structure. At the same time, we need to be a lot more flexible in our thinking about models and theory, so that they can be firmly grounded in this improved empirical understanding.

Kocherlakota says that this isn't a "sociological" issue, but I think most people would call it that. Since journals and top researchers get to decide what constitutes "good" research, it seems to me that to get the changes in focus Kocherlakota wants, a sociological change is exactly what would be required.

Kocherlakota now has another post describing how he thinks macro ought to be done. Basically, he thinks researchers - as a whole, not just on their own! - should start with toy models to facilitate thinking, then gather data based on what the toy models say is important, then build formal "serious" models from the ground up to match that data. He contrasts this with the current approach of tweaking existing models.

My question is: Who is going to enforce this change? If a few established researchers start doing things the way Kocherlakota wants, they'll certainly still get published (because they're famous old people), but will the young folks follow? How likely is it that established researchers en masse are going to switch to doing things this way, and demanding that young researchers do the same, and using their leverage as reviewers, editors, and PhD advisers to make that happen? This doesn't seem like the kind of change that can be brought about by a few young smart rebels forcing everyone else to recognize the value of their approach - the existing approach, which Kocherlakota dislikes, already succeeds in getting publication and prestige, so the rebels would simply coexist alongside the old approach, rather than overthrowing it. How could this cultural change be put into effect?

Also: Romer now has a follow-up to his original post, defending his original post against the critics. This part stood out to me as particularly persuasive:

The whine I hear regularly from the post-real crowd is that “it is really, really hard to do research on macro so you shouldn’t criticize any of our models unless you can produce one that is better.”

This is just post-real Calvinball used as a shield from criticism. Imagine someone saying to a mathematician who finds an error in a theorem that is false, “you can’t criticize the proof until you come up with valid proof.” Or try this one on and see how it feels: “You can’t criticize the claim that vaccines cause autism unless you can come up with a better explanation for autism.”

Sounds right to me. The old line that "it takes a theory to kill a theory" just seems wrong to me. Sometimes all it takes is evidence.

38 comments:

I've already commented at length on Romer at Mark Thoma's. So I'll just use something you wrote on physics to make a tangential comment on unrealistic assumptions.

"They're not saying that economists need 100% perfect realism - that's the kind of thing you only get in physics, if anywhere"

This is a somewhat misleading way of putting it, but it allows me to illustrate some important points about 'unrealistic' assumptions.

In real-world modelling in physics, 'unrealistic' assumptions are ubiquitous. What matters is not the literal realism of assumptions but the robustness of conclusions.

Consider a point-mass. There is no such thing. Yet it is a perfectly legitimate simplifying assumption about a planet if you are interested in studying its orbit around its sun. It is not a legitimate assumption if you are interested in studying a planet's rotation about its axis.

The most important points underlying such simplifying assumptions are:

1. Simplifying assumptions are context specific, i.e. ad hoc, and never axiomatic. The ad hoc nature of simplifying assumptions is a feature, not a bug, as the above example illustrates.

2. Robustness is critical. As we move from our simplifying assumptions towards greater realism/precision, the conclusion should not change in any material way, and we use the simplifications because the gain in accuracy of the conclusions is not worth the added complexity and consequent loss of tractability in the model.

3. Out of sample performance of the model.
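A back-of-envelope check of the point-mass example above (my own illustrative numbers, rounded): the leading finite-size correction to a planet's orbital dynamics enters at quadrupole order, roughly (R/r)^2 relative to the point-mass term, which is why the idealization is robust for orbits and worthless for rotation.

```python
# Why "point mass" is a robust idealization for Earth's orbit but
# useless for its rotation (illustrative, rounded numbers).
R_earth = 6.371e6    # Earth's radius, m
r_orbit = 1.496e11   # Earth-Sun distance, m

# Finite-body corrections to the orbit are of order (R/r)^2
# relative to the point-mass term.
correction = (R_earth / r_orbit) ** 2
print(f"relative size of finite-body correction: {correction:.1e}")

# For rotation about the axis, the moment of inertia I ~ c*M*R^2
# depends entirely on how mass is distributed (c = 2/5 for a uniform
# sphere). A point mass has I = 0: the idealization discards the
# very quantity you are trying to study.
```

The correction is on the order of one part in a billion, so making the orbital model "more realistic" changes nothing material: criterion (2) is satisfied. The same assumption applied to rotation fails it completely.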

* Richard Feynman:

"...in order to understand physical laws you must understand that they are all some kind of approximation.

The trick is the idealizations. To an excellent approximation of perhaps one part in 10^10, the number of atoms in the chair does not change in a minute, and if we are not too precise we may idealize the chair as a definite thing; in the same way we shall learn about the characteristics of force, in an ideal fashion, if we are not too precise. One may be dissatisfied with the approximate view of nature that physics tries to obtain (the attempt is always to increase the accuracy of the approximation), and may prefer a mathematical definition; but mathematical definitions can never work in the real world. A mathematical definition will be good for mathematics, in which all the logic can be followed out completely, but the physical world is complex, as we have indicated in a number of examples, such as those of the ocean waves and a glass of wine. When we try to isolate pieces of it, to talk about one mass, the wine and the glass, how can we know which is which, when one dissolves in the other? The forces on a single thing already involve approximation, and if we have a system of discourse about the real world, then that system, at least for the present day, must involve approximations of some kind.

This system is quite unlike the case of mathematics, in which everything can be defined, and then we do not know what we are talking about. In fact, the glory of mathematics is that we do not have to say what we are talking about. The glory is that the laws, the arguments, and the logic are independent of what “it” is.

This is indeed excellent. The three criteria for evaluating assumptions/simplifications, the precise definition of ad hoc, and the crystal-clear example of point mass for orbits vs rotation.

I'd like to bring in my pet bailiwick, accounting. Our (national) accounting systems are rife with assumptions and simplifications — they are economic models. (Or in Feynman's excellent term, "idealizations.") And those assumptions are effectively invisible to almost everyone. If I had a nickel for every time I've heard "it's an accounting identity" as if that was somehow synonymous with "truth"...

Just one example, relating to a rather important economic measure — income:

http://unstats.un.org/unsd/nationalaccount/rissue.asp?rID=3

The national-accounting sages know that the appropriateness of this basic conceptual construct is a very open question. But that fact is invisible to almost everyone. National accounts could be depicted quite differently (yes, with everything still balancing).

Economists' thinking is completely owned by the conceptual constructs, the idealizations, embodied in our national-accounting structures. And they frequently display zero understanding of the constructs that they are (we are) using to think with.

Herman, I've been critical of you in the past, but that is a really good comment, 100% on the ball. But I will add that the simplifying assumption you used to illustrate your point may not be true, but it is nearly true (at the scales being considered). And many simplifying assumptions used in economics are not nearly true.

Informally we might - and sometimes do - say that the assumption (point-mass) is 'nearly true', but it is not quite correct. It is an idealization that satisfies criterion (2): robustness, and the resulting model satisfies criterion (3): out-of-sample performance.

Of course this is very different from the sort of assumptions common in economics which are often patently false - and this is the critical point - making them more realistic materially changes the conclusions ie the assumptions in the models fail to satisfy the robustness criterion. And, at least in DSGE/RBC macro to talk of in-sample fit or out-of-sample performance of the resulting model would imply a libelous misuse of the terms.

Actually, as Romer notes, the situation in economics is often even worse, with assumptions being not merely false (with non-robust conclusions) but entirely meaningless in terms of real-world observables. Assumptions of the sort that are deservedly, derisively dismissed as 'not even wrong' in every scientific or engineering discipline.

It's not just an argument about having models with realistic assumptions. It is also an argument about the extent to which mathematics and models can usefully provide the answers we need to know. Basically we are going back to Keynes's (1937) arguments about the limitations of "pretty and polite techniques". Edgeworth was also very much aware of the limitations of mathematics in economics. And so have many others, for a long time.

I have been critical of Romer in the past. His growth theory, for me, does not answer the critical questions that I think are the most important for understanding why certain countries get onto a growth curve and others do not. But I now really have to admire his honesty.

It is not true that we do not have a lot of macro data. The National Accounts contain scores of (largely stock-flow consistent) data. The point is: one of the big failures of DSGE economists is their failure to establish a measurement system which produces data consistent with the DSGE models. Keynes, who even helped establish his own government statistical office (the present-day ONS), and, in a more indirect sense, Smith, Marshall, and Veblen did establish systems of measurement to measure data consistent with their models and ideas. Read Mitra-Kahn http://openaccess.city.ac.uk/1276/ or my efforts https://www.researchgate.net/publication/304988655_Models_and_measurement_in_economics_2_A_short_overview_of_conceptual_differences_between_neoclassical_macro_models_and_the_national_accounts

DSGE economists never bothered to do this. Weird (well, not that weird - taking account of real-life data would have meant taking unemployment and the government seriously... Or the fact that the National Accounts identities only hold for nominal variables, not for deflated real variables). Anyway - as there is no system of DSGE-consistent measurement of the macro-economy, it can't be called a valid science. There are, however, systems consistent with the ideas of Keynes and Veblen...

So, we are witnessing a battle between a declining DSGE scam and ascending "Realistic assumptions" scam.

Both approaches are worthless, but I guess it will give an excuse to macroeconomists why they are useless: we just used the wrong paradigm, now we are switching to the new one. Just many more years of research is needed and we will be ready. Science!, as they say.

I'm curious how many economists are simply too blind to understand that this will lead nowhere and how many are simply cynical beyond belief.

I just don't understand the mentality. Wouldn't you like to do something productive? Like produce actual knowledge? Can you guys be satisfied with infinite curve fitting?

Are you equating macro with business-cycle theory, or are you saying that Romer does?

In either case, I think this is another big problem with macro, its obsession with business cycles as opposed to long-term thriving and prosperity. E.g., Gerald Friedman got tied in knots by this; he was trying to use "stimulus" thinking and arguments to talk about multi-decadal possibilities.

" (Though I do think he's too harsh on string theory, which often is just a kind of math that physicists do to keep themselves busy, and has no danger of hurting anyone, unlike macro theory.)"

I find it hard to believe Noah understands string theory well enough to justify such a strong opinion of it only existing to keep theorists employed. As much as I like "The Trouble With Physics" those reading should keep in mind that Lee Smolin acknowledges that maybe there is something to string theory.

Then again, the focus on string theory in theoretical physics is harmful to the expansion of knowledge and economic growth if too many brains not only barked up the wrong tree - nothing wrong with that - but *continued* to bark up the wrong tree for years, ignoring other paths to understanding physics, which is Smolin's main point.

I'm fond of observing that in addition to "cargo cult science", macroeconomics has often been likened to a religion. What religions do when the mainstream becomes intolerable for one reason or another is schism. Then after a number of years what used to be the mainstream dies out and the former schismatics become the mainstream.

Psychology went through this kind of crisis some years ago when the scientists split off from the clinicians, and created the Association for Psychological Science to contrast with the clinically-oriented American Psychological Association (the APA is the one that publishes the unscientific but influential Diagnostic and Statistical Manual).

All that heterodox economists need to do is gain some self-confidence and stop calling themselves derogatory names. That won't make them scientific, but it'll be a step in the right direction.

In order to be scientific, the standard method is to actually try predicting. Prediction is messy and provably fails to converge to any possible theory, but there are other authentic sciences that have this same theoretical limitation, like meteorology. This doesn't prevent meteorologists from constructing theories which make predictions that demonstrably get better and better year after year.

Why don't all these macro critics stop publishing in "unscientific" mainstream journals and set up their own J.Econ.Sci. that has rigorous scientific standards? Many of them have tenure or non-academic jobs (e.g. Romer) and don't need to kowtow to committees who care only about established impact factors. It's been done elsewhere. It wasn't so long ago that one of the most prestigious biology journals, Cell, was just an upstart new face on the block. All it takes is a strong editor and a pool of like-minded peer reviewers.

I think Paul Romer's self-serving ad hominem attacks should be identified as just that. One could hardly blame the older generation of Nobel laureates for conspiring to deny economic pre-eminence to Romer - look at how he behaves! - but I think they probably have better things to do.

I admit I haven't completely digested Romer's latest thunderbolt - I'm basing my comments more on Romer's "mathiness" series of a year ago. In that case, I went back and read the "mathy" papers that Romer was attacking. Mathy they were, but the Lucas and Moll paper at least was very clear about why it didn't find increasing returns to scale in growth models convincing: the intellectual-property-driven economic sector just isn't, in their view, big enough. (BTW, that's almost exactly the same argument made by William Nordhaus against the AI "singularity" folks: it could happen, but none of today's macroeconomic data suggest that it is happening.)

To come back to the current discussion, I have no particular sympathy with the Lucas-Prescott-Sargent rational expectations / microfoundations / real business cycle approach - but the needed discussion of the defects of RBC has been underway for some time. And note that Romer's opening distillation of RBC makes its problems all about a supposed "exogenous" component, for which the subtext is that RBC's authors don't accept Romer's "endogenous" growth theory.

For twenty years Romer has been implying (and recently saying) that economists who don't accept endogenous growth theory have abandoned the canons of science and are either blind or indifferent to the truth. Over the same twenty years he seems to have produced very little theoretical work, while his targets have remained working economists. (Why, after all, should anyone continue to do theory, since Romer has discovered the truth?)

I wish Romer well at the World Bank. There is no doubt that his ideas around urbanization, for example, will bring an important and updated perspective to a development bank. But the very move suggests to me that the World Bank has not failed to note Romer's ability to propagandize an economic agenda - and that it values his political skills as much as his reputation as an economic theorist.

It's easy to poke holes in existing methodology, but it's much more difficult to come up with viable alternatives and solutions. Do those who knock DSGE models really think we should go back to 1970's macro and reuse old-school Keynesian models? The empirical evidence against Keynesian multipliers is overwhelming (See Ramey for an overview). Methodologically, Keynesian models make just as many implausible, ad hoc assumptions as DSGE models, if not more. Their forecast accuracy is no better; private forecasters are mostly selling stories and scenarios, not forecasts that in any way will prove ex post to be accurate.

I think you are repeating - and it is a good reminder - the classic Mark Blaug argument that economists should not abandon the "best available" theory (even if its deficiencies are manifest) if there is no better replacement. I have no problem with that.

However, I think the discussion right now is about those manifest defects. And there are stirrings about what comes next. Noah has blogged several times on the new "empirical turn". And the Keynesians, who have never gone away, may yet stand up a rehabilitated theory. For a usable business cycle theory, there are really three tests to satisfy: 1) Normal forecasting capability (as you mention); 2) Convincing comparative statics on the effects of monetary or fiscal intervention (RBC omitted this almost by definition); 3) Some ability to detect pressures that are building toward a major shock. (I call this 'the Cassandra feature', since the predictions are unlikely to be believed or heeded.) Whether any model could really offer this is open to question, but it's a real question. The Fed always talks about "risks to the economy", but is the perception of those risks coming from the model? How did Warren Buffett know that the pile of financial derivatives would collapse, but bankers and regulators and economists not know it? One answer, at least for economists, is that rational expectations theory forces prediction of any kind of discontinuity completely out of the model. That part of Paul Romer's complaint seems to me to be valid.

"I think you are repeating - and it is a good reminder - the classic Mark Blaug argument that economists should not abandon the "best available" theory (even if its deficiencies are manifest) if there is no better replacement. I have no problem with that."

The issue is that DSGE as such has no content. It is just a formalism that can fit anything. By itself, that's obviously not a problem. A language can help express things that are hard to say otherwise, but does DSGE have anything to say? Does it simplify anything? What's the value proposition there?

To Barry: I agree "best available" is always an open question. But I think Mark Blaug would have meant the term in the sense of the standard working theory. If I recall correctly, he used the term specifically with reference to the Post-Keynesian critique, acknowledging that there were flaws in neo-classical, GE-based growth theory, but (being an empiricist) denying that mid-1970s-vintage post-Keynesian theory had anything better to offer. According to Blaug, the Cambridge critics provided more realistic descriptions of an economy, but they couldn't produce empirically testable propositions. In that sense, RBC today may be the best available theory - simply because we can prove it is wrong! :-)

Khryz: The value prop of DSGE, as modified by New Keynesians like Mankiw, (David) Romer, Blanchard, and Fischer, was that it provided working short-term forecasts for policy planners - until it didn't. Now, some would argue, absent a crisis, it does so again. People use it, despite this fearful hole, because there is no "better" or more complete theory. So I don't think it is just "formalism", even though I think it is a pretty terrible theory. One might agree with Immanuel Wallerstein, as I do to some degree, that a crisis of the capitalist world-system is inevitable. But Wallerstein would be the first to admit that he can't say with certainty whether a specific downturn is cycle or symptom; he has no "better" day-to-day theory for policy makers.

Good point. I think it's more about re-calibrating DSGE models than reverting to the 1970s. With new insights into preferences from behavioral economics, a field that has exploded in the last decade, these models might have better predictive power.

Allie: No, recalibrating DSGE with behavioral assumptions will yield models with no predictive power whatsoever. No matter how great the assumptions are. Even if they are exactly true. We do not know the aggregation function and never will. Not with a total of 10-20 observations.

Post-Keynesians have been calling out the mainstream about this very topic for decades at this point. The fact that it's coming to the mainstream via the establishment is a little bittersweet, but at least it's getting attention.

Economics these days is NOT about criticizing macro in its DSGE/RBC incarnation but about fully replacing it, because it is beyond repair.

Noah Smith writes: “... recently, there have been a number of respected macroeconomists posting big, comprehensive criticisms of the way academic macro gets done.” (See intro)

What the respected macroeconomists overlook, though, is that there is NO WAY to improve a dead theory. There is nothing left to do but to bury it. So, what the critique of respected macroeconomists amounts to is the zombiefication of economics. These critical folks cannot be taken seriously as scientists. A scientist does not waste too much time with the critique of an obsolete research program but moves on to a progressive paradigm.

The first thing to notice is that the critical heavyweights miss the crucial point. Noah Smith reports: “Romer basically says that macro ... a community of believers, dogmatically following the ideas of revered elders and ignoring the data. The elders he singles out are Bob Lucas, Ed Prescott, and Tom Sargent.” (See intro)

Depicting the DSGE/RBC crowd as a quasi-religious sect is a silly ad hominem argument. Every economist should know from Schumpeter: “Remember: occasionally, it may be an interesting question to ask why a man says what he says; but whatever the answer, it does not tell us anything about whether what he says is true or false.”

The decisive argument against DSGE/RBC is that it is scientifically forever unacceptable because it is materially and formally INCONSISTENT. Whether the adherents of DSGE/RBC appear weird to non-adherents is absolutely IRRELEVANT. Science has been well-defined for more than 2000 years: “Research is in fact a continuous discussion of the consistency of theories: formal consistency insofar as the discussion relates to the logical cohesion of what is asserted in joint theories; material consistency insofar as the agreement of observations with theories is concerned.” (Klant, 1994)

The simple fact is that DSGE/RBC is PROVABLY inconsistent and needs to be replaced. The more embarrassing fact, though, is that this holds to the same extent for Walrasianism, Keynesianism, Marxianism, Austrianism, which means that there is nothing left of economics to be taken seriously.

How can we be absolutely sure that the critical heavyweights are as scientifically incompetent as the DSGE/RBC crowd? Well, simply look at their ad hominem arguments and their ridiculous solutions: “As a solution, Romer suggests chucking formal modeling entirely and going with more general, vague but flexible ideas about policy and the macroeconomy, supported by simple natural experiments and economic history.” (See intro)

Lo and behold, this has always been the methodological mantra of those microfoundations lacking Keynesians: “It is better to be roughly right than precisely wrong!”* Science, though, is digital=binary=true/false and NOTHING in between. There is NO such thing in science as roughly right or roughly wrong, there is only materially/formally true/false.

Vague blather, untestable wish-wash and storytelling has always been the hallmark of what Feynman famously called cargo cult science: “Another thing I must point out is that you cannot prove a vague theory wrong.” (Feynman, 1992). To immunize a theory/model against refutation and thereby to save their job has always been the apex of smartness of the scientifically incompetent.

Economics has not lived up to scientific standards in the last 200 years. Economists are well aware of this and therefore try to question the standards and to ‘play tennis with the net down’ (Blaug). Unsurprisingly, the representative economist’s methodology of choice turns out to be postmodern-new-age-pluralistic-anything-goes. Make no mistake, the ‘throng of superfluous economists’ (Joan Robinson) feels at home in the morass where “nothing is clear and everything is possible” (Keynes).

Critique and good advice of Romer, Kocherlakota, Krugman, Rodrik and other scientific underweights is worthless: “The moral of the story is simply this: it takes a new theory, and not just the destructive exposure of assumptions or the collection of new facts, to beat an old theory.” (Blaug, 1998)

What the Grand Coalition of Failed Economists (Walrasians, Keynesians, Marxians, Austrians) wants least but economics needs most is a new theory, or in methodological terms, a paradigm shift.** The only thing left to do for these scientific deadweights is to get out of the way.

Will economists ever accede to the fact that there are just too damn many variables for models to 1) provide any productive value or 2) stand the test of time? Economists want to analyze as if they could really choose all the relevant variables for any experiment. The economy is no college chemistry lab. Take something simpler than economics but still absurdly complicated: medical research. Without randomized controlled trials, much medical research is taken by those in the know with skepticism, AS IT SHOULD BE. The best that economists can ever hope to do is observational trials, but nothing is standing still. Asserted above was that "monetary policy affects the real economy." Does it? Even if it appeared to in the 1980s, can one be sure that it has the same effect now? Of course not. Much has changed. Perhaps the reasons that monetary policy was effective 40 years ago no longer apply. It would be nice if you could test such assertions. But you can't. You just can't. Economics is necessarily a backward-looking endeavor.

People who quote Milton Friedman to the effect that realistic assumptions in a model don't really matter often forget the flip side of that: what Friedman thought does matter is whether or not the model makes good out-of-sample predictions. DSGE models don't.

The best way I know of to test economic theories is to cast them as restrictions on the parameters of a BVAR (Bayesian Vector AutoRegression). Litterman and Sims showed over 30 years ago that BVARs with simple random-walk priors outperformed most economic forecasting models at forecasting out of sample. But we know from statistics that imposing correct restrictions on an estimated model should improve its out-of-sample forecasts. By this measure, modern macroeconomics teaches us almost nothing about how the economy behaves. Not that old-style Keynesian models are any better. They stink too.
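To illustrate the random-walk-prior idea in the simplest possible setting (a univariate sketch of my own, not Litterman and Sims' actual specification): a conjugate normal prior centered on a random walk shrinks the OLS estimate of an autoregressive coefficient toward 1, the scalar analogue of a Minnesota-style BVAR prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a persistent AR(1): y_t = 0.95 * y_{t-1} + e_t
T, rho = 120, 0.95
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.normal()

x, z = y[:-1], y[1:]   # lagged and current values
sigma2 = 1.0           # error variance, treated as known for simplicity

# OLS estimate of the AR coefficient
beta_ols = (x @ z) / (x @ x)

# Conjugate normal prior centered on a random walk: beta ~ N(1, tau2).
# The posterior mean is a precision-weighted average of the OLS
# estimate and the prior mean of 1.
tau2 = 0.05
beta_post = ((x @ x) / sigma2 * beta_ols + (1.0 / tau2) * 1.0) / \
            ((x @ x) / sigma2 + 1.0 / tau2)

print(beta_ols, beta_post)  # posterior lies between OLS and 1
```

Because the posterior mean is a weighted average, it always lands between the OLS estimate and the prior mean of 1; with persistent data that shrinkage typically improves out-of-sample forecasts, which is the comment's point about correct restrictions.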