Tuesday, December 17, 2013

I love microfoundations. Just not yours.

Those of you who love to follow blog debates about macro methodology - and I'm assuming that's pretty much all of you - should check out the debate between Tony Yates and Simon Wren-Lewis. The debate started on Twitter, and then Yates wrote a long post about why microfoundations are the bee's knees (I think that means "good" in British).

Yates essentially recites the standard catechism: the Lucas Critique is really important, the '70s inflation proved it, SVARs are a sort-of-acceptable alternative, New Keynesian models are OK except Calvo pricing is suspect, etc. etc. If you want to know how the average Freshwater-y DSGE-slinging macroeconomist thinks about his place in the cosmos, read Yates' post. Yates also tosses in this gem:

There’s something irksome about defending micro-founded macro from the attack that it is ‘without merit’. A voice inside me says: if they aren’t doing macro, by which I mean, generating new empirical or theoretical work themselves, who are they to go about proclaiming whether something has merit or not, or how macro should be done? [I'm not singling out Adam here. Lots are at it.] And why should anyone care what they say?

So the only people qualified to judge the value of an activity are the people being paid by the government to do it, eh? How convenient. Snark snark.

(OK OK I'm being mean with selective quoting. Yates dutifully follows up with this quote: "[T]here’s the risk that we come to seem like a cult bent on disengaging, concerned to interact with those outside the cult only so far as is necessary to squeeze them for the money we need to continue playing with our toys.")

Anyway, it was kind of cute that Yates singled out Calvo pricing as an unrealistic, kludgey, hold-your-nose sort of microfoundation. Oh, it certainly is that. But it's hardly unique in that respect! In the Lagos-Wright (2005) model that is the basis of "New Monetarist" macro, for example, people exchange goods with anonymous counterparties with whom they are randomly matched and will never meet again. That's every bit as unrealistic as Calvo pricing, but you don't hear Freshwater guys like Yates kvetching about that, or any of the other equally unrealistic mechanisms in RBC-type models. Why does Calvo pricing receive special, negative attention? My instinctive guess is that it's because unlike many other equally silly microfoundations, Calvo pricing tends to justify a role for government intervention.
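For readers who haven't seen it, the Calvo mechanism is easy to sketch: each period, every firm gets to reset its price with some fixed probability (the "Calvo fairy" visits), regardless of how badly its price needs changing. Here's a minimal illustrative simulation - not any particular paper's model, and all parameter values are invented - showing how an individually absurd rule still produces aggregate price stickiness:

```python
import random

def simulate_calvo(n_firms=1000, periods=60, reset_prob=0.25,
                   target_growth=0.01, seed=0):
    """Each period the (log) target price rises by target_growth, but a
    firm may only reset its price when the 'Calvo fairy' visits, which
    happens with probability reset_prob - no matter how far the firm's
    price has drifted from the target."""
    rng = random.Random(seed)
    target = 0.0
    prices = [0.0] * n_firms   # log prices, all starting at the target
    gaps = []
    for _ in range(periods):
        target += target_growth
        for i in range(n_firms):
            if rng.random() < reset_prob:
                prices[i] = target
        gaps.append(target - sum(prices) / n_firms)
    return gaps

gaps = simulate_calvo()
# The average price chronically lags the target - aggregate stickiness
# emerging from an individually unrealistic adjustment rule.
print(gaps[0], gaps[-1])
```

In this toy version the steady-state lag settles near target_growth × (1 − reset_prob) / reset_prob, so stickiness scales with how rarely the fairy shows up.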

But I digress. On to Simon Wren-Lewis' response, which was really quite excellent, although - in typical New Keynesian form - he goes out of his way to be deferential to his hard-charging Freshwater counterpart. He prefers an "eclectic" approach, where microfounded DSGE models are used alongside other types of models. This is pretty much what the Fed does (and the Bank of England, and the Bank of Japan, etc.). Wren-Lewis describes a situation in which such an approach might be better than relying solely on DSGE models:

Suppose in the real world some consumers are credit constrained, while others are infinitely lived intertemporal optimisers. A microfoundation modeller assumes that all consumers are the latter. An eclectic modeller, on finding that consumption shows excess sensitivity to changes in current income, adds a term in current income into their aggregate consumption function...

Now a microfoundation modeller might respond that the right thing to do in these circumstances is to microfound these credit constrained consumers. But that just misses the point...[E]ven the best available microfounded model will be misspecified, and an eclectic approach that uses information provided by the data alongside some theory may pick up these misspecifications, and therefore do better.

Another response might be that we know for sure that the eclectic model will be wrong, because...it will fail the Lucas critique[.]...But we also know that the microfounded model will be wrong, because it will not have the right microfoundations. The eclectic model may be subject to the Lucas critique, but it may also - by taking more account of the data than the microfounded model - avoid some of the specification errors of the microfounded model. There is no way of knowing which errors matter more. (emphasis mine)
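Wren-Lewis' credit-constrained example can be made concrete with a toy calculation - a sketch with invented numbers, not anyone's actual model. If some share of consumers are hand-to-mouth (they spend current income) while the rest spend their permanent income, aggregate consumption responds even to purely transitory income shocks - the "excess sensitivity" an eclectic modeller would pick up with a current-income term:

```python
def aggregate_consumption(income_path, share_constrained=0.4,
                          permanent_income=1.0):
    """Toy economy: credit-constrained consumers spend current income;
    unconstrained intertemporal optimisers spend their (fixed)
    permanent income."""
    return [share_constrained * y
            + (1 - share_constrained) * permanent_income
            for y in income_path]

# A purely transitory income boom in period 2:
income = [1.0, 1.0, 1.2, 1.0, 1.0]
print(aggregate_consumption(income))
# Aggregate consumption rises in period 2 even though the shock is
# transitory - excess sensitivity that a model with only optimisers
# would miss.
```

A modeller who assumes share_constrained = 0 is misspecified in exactly the way the quote describes, no matter how carefully microfounded the optimisers are.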

Wren-Lewis absolutely nails it. In the comments to Wren-Lewis' post, I wrote "YES YES A THOUSAND TIMES YES". In a follow-up post, Yates caricatures my comment as "NO NO GET RID OF ALL THE MOTHER&&&&&&G MICROFOUNDATIONS WHILE YOU ARE AT IT." But let us ignore that particular flerp-o'-derp for now, and focus on why Wren-Lewis is so very very right.

The Lucas Critique was important and right. If you don't take into account how your policy will change the workings of your model, you can't know what effect your policy will have. So if you use a model that doesn't satisfy the Lucas Critique (not that anyone knows what really satisfies the Lucas Critique!), you're going to introduce some errors into your policy recommendation. Potentially some very big errors.

But some macro guys seem to walk around thinking that Lucas Critique errors are the only kind of errors a model could possibly make! This is, of course, not the case. If your microfoundations are wrong, then you will introduce misspecification errors. As Wren-Lewis points out, those errors might also be very big!

[It's] dangerous to presume that you understood something because you had “microfoundations” when those microfoundations were wrong. After all, Ptolemy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn[.]

In fact, I encourage everyone to read that post in full.

But back to Yates. Yates says I just want to get rid of all the microfoundations. But that is precisely, exactly, 180 degrees wrong! I think microfoundations are a great idea! I think they're the dog's bollocks! I think that macro time-series data is so uninformative that microfoundations are our only hope for really figuring out the macroeconomy. I think Robert Lucas was 100% on the right track when he called for us to use microfounded models.

But that's precisely why I want us to get the microfoundations right. Many of the microfoundations we use now (not all, but many) are just wrong. Obviously, clearly wrong. Lots of microeconomists I talk to agree with me about that. And lately I've been talking to some pretty prominent macroeconomists who agree as well.

So I applaud the macroeconomists who are working on trying to develop models with better microfoundations (here is a good example). Hopefully the humble stuff I'm doing in finance can lead to some better microfoundations too. And in the meantime I'm also happy to sit here and toss bombs at people who think the microfoundations we have are good enough!

Updates:

Just to hammer the point home, see this excellent post by Antonio Fatas, about what macro models need to include that they (mostly) currently omit.

For me the question with microfoundations is always - when does a difference in degree become a difference in kind?

We can probably model an economy with only two people preeeeeeetty well, at least until the moment that the fact that there are only two people left in the world means one so annoys the other that they defenestrate half the species.

We can also probably model an economy with only three people just about as well, until such time that political science takes over and a permanent 2 v 1 coalition emerges and there is some sort of force-based realignment of power.

We can probably model an economy of four people pretty well, until such time that permanent 2 v 2 deadlock on all votes leads to autocracy, which leads to tyranny.

We can probably model an economy of five people pretty well, until such time that a particularly rambunctious game of Cards Against Humanity ends in tragedy.

But at some point we stop being able to model economies very well. But why? Is it just a matter of computation power; that is, given a powerful enough computer that can perfectly or at least adequately model individual actors, just type in "#ofactors=100000000" and boom you have the macroeconomy? Or is it that new, complex, unpredictable phenomena emerge when you achieve a certain scale?

I'm inclined to think the latter, which makes me think that microfoundations are a dead end.

Here's another question - at what point will microfoundations predict war? For example, given the level of technology prevalent roughly one century ago, would a sufficiently accurate and sophisticated enough microfounded model predict conflagrations of the kind actually experienced? If not, does that mean that we simply lived in the tails of the distribution of all possible worlds? Or does it mean that the model is wrong?

Maybe this was unclear, so let me use a concrete example - the bubonic plague.

There is plenty of reason to believe that the bubonic plague - or to be more specific, Yersinia pestis - existed well before its first recorded pandemic. So why didn't we have any plague pandemics before the first one?

There are two basic reasons. The first is climate: the unusual and exceptional cold of the years that preceded the early medieval plague was probably important to its spread, as the plague is zoonotic (spread by animals) and the specific critters that transmitted the plague couldn't thrive in the Mediterranean otherwise.

The second is political and economic atomization. If the plague had arrived centuries earlier (we have no evidence for this; it's just a hypothetical), it might have affected a small city-state. But it might have stopped there. Overall populations were lower, densities of population and trade were lower, and the conditions for rats to thrive were less present. Therefore, even if rats bearing fleas bearing plague had made it to, say, the pre-Alexandrian Mediterranean, it's likely that a very small number of people would have died.

But instead it arrived in the late Roman era when the Mediterranean was a higher-population, still rather dense area under largely unified governance and widespread trade and travel. Basically, plague heaven.

Plague pandemics (and let's just say pandemics more broadly) are specifically a phenomenon of large societies, and more specifically of those large societies that have higher rates of urbanization and trade density. So if your microfounded model suggests pandemic plague is extremely unlikely, your model will fail to predict pandemic plague when in fact the conditions for pandemic plague are in place.
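The "conditions in place" point is essentially a threshold claim, and a standard epidemic model makes it vivid. In a simple SIR setup, an outbreak only becomes a pandemic when the reproduction number R0 - a stand-in here for density, trade, and connectivity - exceeds 1. A purely illustrative sketch (all parameter values invented):

```python
def final_outbreak_size(r0, gamma=0.1, dt=0.1, steps=20000):
    """Deterministic discrete-time SIR model. r0 proxies population
    density and trade connectivity; returns the fraction ever infected."""
    s, i, r = 0.999, 0.001, 0.0
    beta = r0 * gamma               # transmission rate implied by r0
    for _ in range(steps):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r

# Below r0 = 1 the outbreak fizzles; above it, it engulfs the population.
print(final_outbreak_size(0.8))   # sparse, atomized city-states
print(final_outbreak_size(2.5))   # dense, connected late-Roman world
```

Nothing about the disease changes between the two runs - only the environment the agents are embedded in - which is exactly why a model calibrated to the sparse world fails catastrophically in the dense one.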

Now the question is - what if lots of things are like pandemics? What if ideas are like pandemics?

We already have enough computing power to do agent-based microfoundation modeling. This kind of modeling has the advantage that there actually are economic agents and, not only are they often introspective, there is a great deal of literature describing how they make decisions. You don't have to postulate Calvo fairies; you can just study how prices are actually established in actual businesses.

One thing that falls out of this approach is a way of answering your question. In statistical situations we call the transition at which a difference of degree turns into a difference in kind a phase change, by analogy with the way a liquid turns into a gas or solid. I think economics would do a lot better if it adopted some of the mathematical and scientific techniques which have been developed over the last 50 years or so. Whenever I try to follow an economist's reasoning these days I feel like I've been swept into the mid-19th century.

Can anybody please tell me by which non-tautological criterion Calvo pricing is "unrealistic" and "not micro-founded" if by the same criterion basic real-business cycles are "realistic" and "micro-founded"? Thanks.

Another way to go is agent-based modeling where the microfoundations are specified, but may be behavioral in the sense of non-optimizing. Macro outcomes emerge from the microfoundations.

To Zopolan: it is the lack of any optimizing basis for why Calvo pricing operates as it does that gets its critics up in the morning. It may be empirically valid (there's some question about that, actually), but it certainly is a useful ad hoc way to introduce some price stickiness into a model.

Of course, if one is not wedded to the idea that price stickiness is necessary for macro fluctuations to occur - for example if there are bubbles or price overshooting - then Calvo pricing is not necessary, and the theme that price flexibility may be destabilizing has been around since at least Tobin and is also popular among many Post Keynesians.

Another way to go is agent-based modeling where the microfoundations are specified, but may be behavioral in the sense of non-optimizing.

Sure!!

That's why I think microfoundations are actually the *most* important part of macro models. You can stick them in a DSGE, or stick them in an agent-based simulation! You can test them in a lab, you can look for them in the micro data. You can eat them on a train, you can eat them in the rain.

First, what other freaking field of science says that the only thing that can be science, and of value, is deduction (microfoundations), and never induction? Could you imagine how world productivity would be decimated if meteorology and climatology had this stupid snobbery, enforced with journal and department control? Could you imagine how much less advanced the world would be if no scientist ever used induction and macro-level models?

Second here is a comment on microfoundations I left on Simon Wren-Lewis's blog a little while back:

"I suspect nearly all economists are naturally reluctant to embrace cases where agents appear to miss opportunities for Pareto improvement - I give another example related to wage setting here. However in most other areas of the discipline overwhelming evidence is now able to trump these suspicions. But not, it seems, in macro."

• Suppose you had $100 in a savings account and the interest rate was 2 percent a year. After five years, how much do you think you would have if you left the money to grow? More than $102, exactly $102 or less than $102?

• Imagine that the interest rate on your savings account was 1 percent a year and that inflation was 2 percent. After one year, would you be able to buy more than, the same as or less than you could today with the money?

• Do you think this statement is true or false: “Buying a single company stock usually provides a safer return than a stock mutual fund”?

Anyone with even a basic understanding of compound interest, inflation and diversification should know that the answers to these questions are “more than,” “less than” and “false.” Yet in a survey of Americans over age 50 conducted by the economists Annamaria Lusardi of George Washington University and Olivia S. Mitchell of the Wharton School of the University of Pennsylvania, only a third could answer all three questions correctly.

"65% of people answered incorrectly when asked how many reindeer would remain if Santa had to lay off 25% of his eight reindeer."

"1 in 3 people didn't know how much money a person would be spending on gifts if they spent 1% of their $50,000/year salary."

– Personal Finance for Dummies, 7th edition, 2012, page 9
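For what it's worth, the quiz arithmetic quoted above is easy to verify mechanically (the figures are exactly the ones in the quotes):

```python
# Q1: $100 at 2% a year, compounded for five years
balance = 100 * (1 + 0.02) ** 5
assert balance > 102           # "more than $102"

# Q2: 1% interest against 2% inflation for one year
purchasing_power = 100 * 1.01 / 1.02
assert purchasing_power < 100  # "less than" you could buy today

# Santa's layoffs: 25% of eight reindeer
assert 8 - 8 * 0.25 == 6

# Gifts at 1% of a $50,000 salary
assert abs(0.01 * 50_000 - 500) < 1e-9

print(round(balance, 2), round(purchasing_power, 2))  # → 110.41 99.02
```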

It's not just the issue that they require that all macro models be microfounded, with the resulting limitations and problems (see: http://richardhserlin.blogspot.com/2012/03/haugens-critique-of-microfoundations-in.html). They also require that the microfoundations always be highly unrealistic in the way that they like, the way that fits their preferred paradigm, and/or makes their preferred libertarian ideology look more desirable, especially with the very literal way they interpret the models to reality. The interpretation to reality is the most important part of any model. As I always say, a model is only as good as its interpretation.

One thing I find really interesting: I often hear economics and finance professors talk about how ignorant and incapable their students are – and then in their research they assume everyone is a genius! with not only perfect public information in their minds, but perfect expertise to analyze it with! And they see no contradiction (or don't care).

Part of this is that many will claim you don't need everyone to be smart, expert, and informed to get the result; you only need a savvy minority. But for many things it's easy to show that won't be enough. Complete markets and perfect arbitrage won't often exist like they do in models, and with regard to the smart getting more and more control of the money, or becoming more and more common, remember that people don't live forever, and they spend up lots of what they gain. The smart die off, and, paraphrasing P.T. Barnum, there's a new sucker born every minute. And anyway, from Keynes: in the long run,... As well, the savvy are limited in how much they can push an inefficient price to its fundamentals by how undiversified their portfolios become as they buy more and more, so each additional unit becomes riskier and riskier – more and more eggs in the same basket – and so worth less and less to them. I had a letter on this in The Economist's Voice, at:

An interesting take on an old issue. To me, this was always obvious: Such assumptions are made, like complete markets, to be able to produce neat models that get published and get academics well-paid jobs.

But you don't even have to point out how many people lack basic maths or how economists complain about students' idiocy. I've never met an economist capable of giving me the totality of his preferences or describing his utility function.

IMHO, microeconomics is best left to econometricians and behavioural scientists who may or may not decide to use tools used elsewhere (life sciences, Chaos theory, whatever).

But I have to say that I fail to see how this matters for our present-day macro issues.

What are our problems? Mass unemployment and inequality. That's it. And they are likely to be very much linked issues.

How to solve those issues is not theoretically difficult, it is the building of a political consensus around a set of solutions that is difficult.

Noah, with deduction you start with the primitives, the small parts, and their behavior, and from that you deduce what happens to the big phenomenon. A classic example is that you start with the behavior of individual molecules, and from that you try to deduce, or predict, the behavior of the cloud of smoke coming off of a cigarette. With induction, by contrast, you would repeatedly study the behavior of the smoke cloud as a whole, to understand how it works and to predict how it will behave and develop.

This use of the terms deduction and induction is not mine. I got it from the late University of California, Irvine finance professor Robert Haugen.

I copy a graph from Haugen's book, "The New Finance", where he has a scale of complexity of micro-unit behavior and interaction, and as it gets more complex, he has induction becoming more important relative to deduction. He writes, "As we move to the right, induction and statistical estimation dominate deduction and mathematical modeling in their ability to explain and predict..."

The post was well received. Simon Wren Lewis wrote of it, "This argument will mirror similar points made in an excellent post by Richard Serlin in the context of finance.", at:

I think Noah would be happy with microfoundations based in observed human behavior. I would...

The problem is that the majority of microeconomics is just wrong. Not all of it, but almost all of it. Start with observed human behavior and you can get some sort of "microfoundations" -- but this research in economics STARTED in the 1990s.

The first discovery was that people care about fairness more than about profit! The second was that standards of fairness vary across culture! The third was that very large amounts of money cause people to change their behavior and become more selfish! The fourth was that almost nobody really pays attention when very small amounts of money are involved!

These are some of the earliest results in empirical microeconomics. They overturn basically the entire field of microeconomics.

You can also look at the empirical theory of the firm -- as studied in any number of fields OTHER than economics -- and discover that firm behavior is driven by "corporate culture", supply levels are determined by weird internal corporate politics, pricing is determined by fiat or random guess, and the core obsessions of business are marketing, product differentiation, and attempting to create monopolies.

You could create a theory of microeconomics based on the observed behavior of individuals and firms. But I haven't seen one yet!

And I think this is Noah's point. Working up from small elements -- reductionism, in the technical sense -- is great if you have a good idea what the small elements are, but if you don't it's just stupid.

It seems to me that the real problem is that lay people might actually understand your model! Hence the begrudging half-acceptance of economic history and SVARs. And it was begrudging and half-accepting because an educated person from another field can much more easily pick them up if she wanted to; the mitigation here, if you've noticed, is that she would still need theory sooner or later. And that theory cannot be made easy to understand!

Also, what about Wendy Carlin? I really enjoyed the chapters of her forthcoming book with Soskice. I would really like some discussion of the three chapters that offer a new model for students and policy-makers (explained here http://www.cepr.org/pubs/new-dps/dplist.asp?dpno=7979). The Romer IS-MP-IA model is also cool, and on the freshwater side I really liked and felt enlightened by the Andolfatto textbook that he has published freely.

But Andolfatto's book will be considered too extreme by non-freshwater economists in many of its propositions. So the goal is first to convince the other economists to dump Keynesian thinking before going to the public. Just like communists felt that they could never win if there were still social democrats out there.

Disclaimer: I don't want to make a strawman out of Yates; he may have very different opinions, but that is the way I feel about the Freshwaters right now.

"But that's precisely why I want us to get the microfoundations right."

That is a good goal, I suppose, in the long term. That is, if it were done by scholars of a silent religious order (devoted to the cult of Lucasian epistemic purity) who would never comment on policy until the project was complete.

What economic policy needs is an ensemble approach -- lots of different models for lots of different phenomena. That is how you can deal with a system that is impossible to model in its entirety in our present state of knowledge. Experts in this ensemble approach are the people who should be advising on actual policy today (like some of the better economists in central banks).

The problem with macro is that macroeconomists want to be both scholar-monks and policy advisers.

Noah, can you expand a bit on what makes a microfoundation wrong or right?

Take Calvo pricing. Nobody who sets prices is going to recognize that as an accurate description of what they do, but it doesn't seem very realistic to expect tractable microfoundations to resemble reality very closely on the surface. Perhaps the role Calvo pricing plays in the models that use it, how it changes the results and why (sticky prices) is close enough to being realistic to justify its use, and the mechanism is an acceptable reduced form approximation of whatever processes are actually at work.

I'm not trying to make that claim, and can easily see why something better than Calvo pricing is desirable, but I do think that is the question at stake, and it isn't resolved by saying things like "I don't believe in the Calvo fairy".

So given that microfoundations are always going to be wrong to some degree, what would count as wrong-but-acceptable in your view?

Ah, thanks. So it's not the lack of realism in supposing a Calvo fairy you object to, but that what it predicts about pricing behaviour doesn't accord with what we observe. You're not primarily concerned with microfoundations that are realistic in the sense of being recognisable descriptions of how we behave (although I presume that would be a bonus).

Luis: first, throw out the word "tractable". It doesn't matter whether it's "tractable", it matters whether it has anything to do with reality.

Too many economists have preferred "tractable" models to plausible ones, which is what caused economics to stop being a science.

I'd love to see a model of pricing based on how pricing is actually done by price-making companies -- it's done by cost-plus, mostly, followed by random changes due to committee thinking. Would this model be "tractable"? Maybe not, but it would be great microeconomics. I haven't seen it.
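A cost-plus rule with occasional committee noise is trivial to write down; whether it stays tractable inside a general-equilibrium model is another matter. A hypothetical sketch (the markup and noise parameters are invented, purely for illustration):

```python
import random

_rng = random.Random(42)

def cost_plus_price(unit_cost, markup=0.30, committee_noise=0.05):
    """Hypothetical cost-plus rule: price = cost plus a conventional
    markup, then nudged by an essentially arbitrary committee tweak."""
    base = unit_cost * (1 + markup)
    nudge = _rng.uniform(-committee_noise, committee_noise)
    return round(base * (1 + nudge), 2)

# Prices move when costs move or when the committee meets - not
# continuously with demand. One stylized source of stickiness.
print([cost_plus_price(c) for c in (10.0, 10.0, 12.0)])
```

The interesting question is what such a rule does to aggregate dynamics once thousands of these firms interact - which is where agent-based simulation, rather than closed-form optimization, would have to do the work.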

Quite ~ the translation of "tractable" is "I will happily give you the wrong answer, if it turns out that finding the right answer is just too damn hard", and when translated, that is a far from persuasive thing to say.

[Apologies if this appears twice; the first time I seem to have lost it.] I'm honoured to have a brick thrown by you, Noah. You can't seriously brand me as freshwater; I agree with their methods, not their point on substance. All my papers use other people's sticky-price models. I even have a survey paper from way back measuring how sticky prices are in the UK. On many of my blog posts I've been pushing the Bank of England to do better countercyclical monetary policy, something that's pointless in a freshwater [substance] model where prices are flexible. I've been a tad more 'austerian' on fiscal policy than some, but not because I think fiscal policy should not be used to stabilise the business cycle, just because I took the view that enough of it had been done [one inference the sticky price model makes when it sees high and constant inflation]. And only a tad.

It's a bit unfair of you not to offer the reply I wrote to my own rhetorical question. I concede later that without engaging with these challenges we will look like a cult trying to milk the rest of you for money for our toys. And it's only human, isn't it, to bristle at being told how to do things by people who do something else entirely? Ultimately, my post [and its length, which you and Simon both remark on] speaks for itself in that I am trying to respond [in particular to the critics like you and Adam Posen]. I don't actually need to. More money would surely flow into my wallet if I just got my head down, wrote more of these papers you hate, and took time off only to hang out with other members of the cult at conferences.

I actually think Calvo pricing is in the realm of the unrealistic but necessary. In my post I describe it as a freshwater position that it's a bad assumption to make. I agree with them on the rank ordering of nastiness, just not on where the cut-off point lies.

I don't think you are behind Simon at all. All of his papers use DSGE, pretty much. You want better microfoundations, but in the meantime you'd rather get rid of most of the assumptions that build the DSGE model, not keep all of them and make the odd tweak to change consumption dynamics [Simon's example]. So I don't see why you object to my characterisation of you at all. While we don't have these better microfoundations, what would you do? How would you respond to Sims' critique of Simon's way forward - that you shouldn't stop putting variables into your equations until you have a VAR, because you have no good reason not to? Aren't you prepared to pay dues to the parallel field of study, the science of doing policy with bad models?

I'm not going to comment on your economics now, but I will heartily object to your use of the square-bracket symbols for asides. Everyone knows that such brackets are to be used when inserting a paraphrase into a quote, in order to indicate that your addition was not in the original. When Noah quoted your text containing said brackets, I was momentarily confused whether he or you had written them. Parentheses are used for asides (I use them all the time!).

You can't seriously brand me as freshwater; I agree with their methods, not their point on substance.

You're just taking the freshwater side in the blog post...

I concede later that without engaging with these challenges we will look like a cult trying to milk the rest of you for money for our toys.

Haha OK OK...

I don't think you are behind Simon at all. All of his papers use DSGE pretty much.

Just because you do research in the dominant paradigm doesn't mean you don't think things need improving.

You want better microfoundations, but in the meantime you'd rather get rid of most of the assumptions that build the DSGE model, not keep all of them and make the odd tweak to change consumption dynamics

Ah. Here I think Simon is exactly right. POLICY people should use the best we've got right now (for DSGE, that would be some sort of Smets-Wouters model augmented with financial frictions, or maybe a Christiano-Eichenbaum-Trabandt model). But RESEARCHERS should be focused on two tracks - keeping the best state-of-the-art "kitchen sink" models up to date (Christiano and Eichenbaum and Smets and Wouters are doing this), while also looking for better microfoundations so the next generation of models can be better. The latter is something that both macro AND micro people need to cooperate on.

Hello, I wanted to comment on your blog, but I had to log in to WordPress and I don't have/want Facebook, so I didn't write a comment, but...

You mentioned that there was some discussion about Carlin and Soskice and textbook 'reform'; anyhow, I don't seem to have noticed it. Can you please suggest where to look? I searched your blog but didn't find anything.

Still I would love to hear as many opinions on alternative principles or intermediate macro/fluctuations models like IS-MP, dynamic AS-AD and of course Carlin and Soskice. David Andolfatto has an intro text for macro with a lot of indifference curves and it is very freshwaterish.

Noah, I somehow doubt that a properly specified micro-founded GE model is possible. Just my feeling, based on problems that I encounter in the financial markets, which ought to be a lot easier to calibrate and arb. These differences are not accounted for by bid/offer or transaction costs alone (although people tend to wave them away with b/o and tc all the time!).

If such is the case in derivs, I can only roughly guess at the difficulties in macroeconomics.

(Example: the sum of the implied variance matrix for 500 stocks should equal the implied variance of the SPX, correct? Well, it never is - and the difference is significant. So if you put on a dispersion trade, you can only hope to catch a portion of it, with risk from misspecified covariance. There are many other examples.)
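The gap the commenter describes follows from basic portfolio algebra: index variance is w'Σw, which only reaches the (weighted-sum-of-vols)² bound when all pairwise correlations equal 1. At realistic correlations the index variance sits below that bound, and that spread is what a dispersion trade tries to capture. A stylized two-asset version (all numbers invented):

```python
def portfolio_variance(weights, vols, corr):
    """Two-asset index variance w'Σw with a single pairwise correlation."""
    (w1, w2), (s1, s2) = weights, vols
    return (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * corr

w, vols = (0.5, 0.5), (0.20, 0.30)
bound = (w[0] * vols[0] + w[1] * vols[1]) ** 2   # the corr = 1 case

for corr in (1.0, 0.5, 0.0):
    print(corr, round(portfolio_variance(w, vols, corr), 4))
# Only at corr = 1 does index variance hit the bound; at lower
# correlations it sits below, and that gap - priced via implied vols -
# is the spread a dispersion trade targets.
```

With 500 names instead of two, the same algebra applies entrywise across the covariance matrix, which is why the misspecified-covariance risk the commenter mentions is the whole game.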

Reading the post you linked to, I can't help but think you misrepresented his argument about Calvo pricing, especially the point about rejecting it because of policy implications. I mean, the last [non bracketed] sentence in that paragraph is "I deduce from this that with some probability the false microfounded model has taught us something useful about good monetary policy design."

"Oh, it certainly is that. But it's hardly unique in that respect! In the Lagos-Wright (2005) model that is the basis of "New Monetarist" macro, for example, people exchange goods with anonymous counterparties with whom they are randomly matched and will never meet again. That's every bit as unrealistic as Calvo pricing, but you don't hear Freshwater guys like Yates kvetching about that, or any of the other equally unrealistic mechanisms in RBC-type models. Why does Calvo pricing receive special, negative attention? My instinctive guess is that it's because unlike many other equally silly microfoundations, Calvo pricing tends to justify a role for government intervention."

Well, so far you haven't figured out what makes a Lagos-Wright model work, so you're still the guy on the outside looking in, and attempting to criticize something you don't have a grip on. The random matching is not important; it's there as a modeling convenience. You seem to think that people are objecting to Calvo pricing because of the results it delivers. Basically, you think that all macroeconomists are born with beliefs about how the world works, and then they just reverse-engineer. The objection isn't to the sticky prices, or to the idea that government matters. We do in fact observe that prices of goods and services don't behave like asset prices. The government matters, and we had better understand how. The objection to Calvo pricing is that it's a chicken model (look that up). We know that results change dramatically if we do state-dependent pricing, for example, which is one small step toward what we would like in a theory of the pricing of goods and services. What people are objecting to is the thought that, if we get deeper into how firms make pricing decisions, the decision rules will depend on policy in ways that could matter. Maybe there is something about the pricing of goods and services that is important for macroeconomic activity and how we conduct policy, but you can't just assert it. You have to do the work.
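The Calvo-vs-state-dependent contrast can be seen in a toy simulation (my own invented numbers, not any published model): after a large shock to the flexible price, a menu-cost rule has every firm adjust at once, while under Calvo the adjustment dribbles in as random resets arrive.

```python
import numpy as np

# Toy illustration: after a one-time 10% jump in the flexible-price
# target, compare the average price level under Calvo (time-dependent)
# vs. a menu-cost (state-dependent) adjustment rule.
rng = np.random.default_rng(0)
n, T = 10_000, 12
target = 1.10                  # new flexible-price level (old level = 1.0)
theta = 0.75                   # Calvo: probability a firm CANNOT reset
band = 0.03                    # menu cost: reset only if |gap| > 3%

p_calvo = np.ones(n)
p_menu = 1.0 + rng.uniform(-0.02, 0.02, n)   # idiosyncratic initial spread

for t in range(T):
    reset = rng.random(n) > theta            # Calvo resets arrive at random
    p_calvo[reset] = target
    p_menu[np.abs(p_menu - target) > band] = target

# The menu-cost economy adjusts fully right away (the shock is large
# enough to trigger every firm); the Calvo economy still hasn't finished.
print(p_menu.mean(), p_calvo.mean())
```

The dynamics (and hence the policy conclusions) differ because in the state-dependent world the timing of adjustment responds to the size of the shock, while in Calvo it cannot, which is one way of stating why the choice of pricing friction is not innocuous.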

Well, so far you haven't figured out what makes a Lagos-Wright model work, so you're still the guy on the outside looking in, and attempting to criticize something you don't have a grip on.

You're right, and that's why I'm not criticizing the Lagos-Wright model (until I fully understand it). Maybe after I understand it, I will criticize it, but I generally try to steer away from doing that on the blog. I was just using it as an example here.

Basically, you think that all macroeconomists are born with beliefs about how the world works, and then they just reverse-engineer.

I sort of suspect that there's a lot of this going on, but I don't really know. Haven't you accused New Keynesian modelers of doing exactly this? Here's a quote from http://newmonetarism.blogspot.com/2012/07/reply-to-sumners-reply.html -

"I'm more cynical about the way central banks adopted New Keynesian economics later on. Rightly or wrongly, this wasn't stuff that was challenging what they were doing - more like an exercise in reverse engineering. How do we write down a framework that justifies the status quo?"

I read about the chicken model on your blog, but I still don't get what Ed Prescott (if it originated with him) meant to say.

I think the point was that the purpose of the model is to provide a rationale for a policy by setting up an environment where the policy would be strongly justified, instead of first modeling the world and seeing where it takes you.

If I did understand it correctly, I still don't see why that is a bad thing. When you have an obvious problem, you should try to focus on the problem.

"I'm more cynical about the way central banks adopted New Keynesian economics later on. Rightly or wrongly, this wasn't stuff that was challenging what they were doing - more like an exercise in reverse engineering. How do we write down a framework that justifies the status quo?"

I'd put that a little differently, as it seems to cast aspersions on Woodford's motives. I think what Mike was trying to do was to write down a simple model where monetary policy matters, and where he would not be at odds (at least in an obvious way) with what we had learned since 1970. Part of the marketing was to sell that to central bankers as looking much like Old Keynesian economics - only more technical. Fooling Old Keynesians is fine with me, actually. But of course a Woodford model isn't Old Keynesianism - it's Prescott with sticky prices.

"Williamson should add this as another service delivered by the economics of the blogosphere: It catches bullshi*tters really rather fast! "

Obviously not correct. The b.s. just persists forever in this venue.

Chicken model: Assume the private sector does not produce chickens. Assume the government can produce chickens. Assume people want chickens. Theorem: The government should produce chickens for people. Proof: Obvious.

" In the Lagos-Wright (2005) model that is the basis of "New Monetarist" macro, for example, people exchange goods with anonymous counterparties with whom they are randomly matched and will never meet again. That's every bit as unrealistic as Calvo pricing, but..."

I don't know. Sure looks like a criticism to me. Unrealistic seems pejorative, don't you think?

Here's what's interesting about this. Lucas, Prescott, Sargent, etc. went to war over their ideas. Woodford didn't. He tried to make everyone happy. In both cases, the ideas were ultimately successful. But is one approach better than the other? Or maybe we can't compare. Maybe Lucas/Prescott/Sargent would not have succeeded without a war. Those people were outside the mainstream, but Woodford was an insider. Mike is an MIT PhD, and has worked mainly in the Ivy League. But he also worked for a time at Chicago, which people might forget, and he wrote with Lucas.

I think this paper by Head, Menzio, Liu, and Wright (2012) elaborates on some of the points Steve is trying to make: http://web-facstaff.sas.upenn.edu/~gmenzio/linkies/EPDR.pdf

Having said that, if one were to pick on the Lagos-Wright model, I think random matching is indeed the wrong place to start. I am still a novice, but I have a bigger problem with the existence of a centralized market, which seems to drive many of the results of the various models. That said, I understand that its inclusion is necessary for tractability. So Lagos-Wright is certainly not a chicken model, unlike Calvo pricing.

You might think the random matching was a big deal, if you read the early monetary search literature - Kiyotaki-Wright for example. The key frictions, as we understand better now, are imperfect information (lack of "memory") and limited commitment. You buy a huge amount of tractability in the model with quasilinear preferences - the centralized market then becomes a forum for portfolio rebalancing and debt settlement. Then you're off - you can include all kinds of financial detail in such a model - credit, collateral, banks, coexistence of exchange using currency and exchange using financial intermediary liabilities. All kinds of room for action by the government in that world.
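The tractability that quasilinearity buys can be sketched in a few lines (notation mine, following the standard presentation, not the paper's exact symbols). With centralized-market (CM) utility $U(X) - H$, money holdings $m$, and $\varphi$ the CM price of money, the CM value function is

\[
W(m) = \max_{X,\,H,\,m'} \; U(X) - H + \beta V(m') \quad \text{s.t.} \quad X = H + \varphi\,(m - m'),
\]

and substituting out labor $H$ gives

\[
W(m) = \varphi\, m + \max_{X,\,m'} \left\{ \, U(X) - X - \varphi\, m' + \beta V(m') \, \right\}.
\]

So $W$ is linear in money holdings, and the choice of $m'$ does not depend on $m$: everyone leaves the CM with the same money balances, the distribution of money is degenerate, and the centralized market plays exactly the portfolio-rebalancing and settlement role described above.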

FWIW, we know for a stone-cold fact, from empirical evidence and the work of Kenneth Arrow, that the government can provide health insurance that works, and the private market can't.

So if we replace chickens with health insurance, we are *correct* when we set up our "chicken model" and conclude that the government should provide health insurance. What the hell do you mean by a "chicken model"? One which assumes its conclusions? If so, you should use the term "circular reasoning".

Calvo pricing is just a sticky-price model. We know prices are sticky downward, and we even know WHY prices are sticky downward, based on empirical psychology. (Ask any employee if they want a wage cut; ask any firm if they want to cut their prices; they just HATE it.) This is not circular reasoning.

CA: Not sure. But if the only model of optimal govt. intervention that does not get ridiculed as a "chicken model" is one where the govt. policy pops surprisingly and counterintuitively out of the math, well then I think the deck is a little stacked, don't you think?

Yes, I do think. In economic theory, justifying government intervention takes effort, and there is good reason for that. In fact, I think the problem is that in the non-academic world, government intervention is too often invoked when the outcome is not the one people desire or think they deserve. This, unfortunately, is particularly true in my home country. But a long tradition of economic research, starting with Adam Smith, tells us that individuals are pretty good at exploiting mutually beneficial trades, so government intervention often makes things worse. This is precisely why one has to work hard at showing why this might not be the case in a particular instance. To put it differently, yes, there may be $20 bills lying on the ground that no one has picked up yet, but there are not as many as people think.

I think that determining when government intervention is optimal requires better models of public choice than we currently have. We simply don't know much about government failure - when and why it happens, how bad it is, or how to reduce it.

Nathanael, apparently Arrow did not provide any argument about government specifically in his famous health care paper: http://econlog.econlib.org/archives/2009/06/krugman_misstat.html

You seem to be making precisely the "chicken model" error complained about.

"All of this doesn’t necessarily mean that socialized medicine, or even single-payer, is the only way to go. There are a number of successful health-care systems, at least as measured by pretty good care much cheaper than here, and they are quite different from each other. There are, however, no examples of successful health care based on the principles of the free market, for one simple reason: in health care, the free market just doesn’t work. And people who say that the market is the answer are flying in the face of both theory and overwhelming evidence."

Also, please note that he doesn't seem to claim that Arrow said to socialize medicine, but hey, by now I should be used to claimed gotchas on Krugman.

So...

- The free market doesn't seem to work.
- This doesn't imply that more government is the solution.
- But it well could be; after all, Arrow admits that government is usually the institution that does redistribution and other necessary fixes (although it can also be the family, trust relationships with doctors, and so on).

So I guess it passes the "chicken critique". I'm still not sure I'm getting it; I probably don't have the gut instinct for it. Now it seems to me that it is pretty much worth doing a chicken model for healthcare; why not, much more radical things are assumed all the time.

Dimitar, first we have to specify whether we are talking about health provision or health insurance. These are two different things. Moreover, it is important to examine what type of government intervention would make things better (after we define what better means) and whether existing intervention is making things worse. For example, here is an interesting take by Cochrane: https://soundcloud.com/hoover-institution/after-the-affordable-care-act

Noah: "The Lucas Critique was important and right. If you don't take into account how your policy will change the workings of your model, you can't know what effect your policy will have. So if you use a model that doesn't satisfy the Lucas Critique (not that anyone knows what really satisfies the Lucas Critique!), you're going to introduce some errors into your policy recommendation. Potentially some very big errors."

BTW, it's not really the Lucas Critique; there was a Cavendish (?) critique in the late 1940s, which pointed out that if the function doesn't stay the same over varying ranges of data, both interpolation and extrapolation are very risky.
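That instability point is easy to demonstrate with a toy regression (all numbers invented): fit a reduced-form relation on data from one regime, then apply the fitted rule after the regime, and the true slope, has changed.

```python
import numpy as np

# Toy illustration of an unstable reduced form. Regime 1 has a steep
# slope; the estimated relation is then used out of regime.
rng = np.random.default_rng(1)
x1 = rng.uniform(0, 5, 200)
y1 = 2.0 - 0.5 * x1 + rng.normal(0, 0.1, 200)    # regime 1: true slope -0.5

b, a = np.polyfit(x1, y1, 1)                     # fit on regime-1 data only

x2 = rng.uniform(0, 5, 200)
y2 = 2.0 - 0.1 * x2 + rng.normal(0, 0.1, 200)    # regime 2: true slope -0.1

in_sample = np.mean(np.abs(a + b * x1 - y1))     # small: noise only
out_of_regime = np.mean(np.abs(a + b * x2 - y2)) # large: the rule is stale
print(in_sample, out_of_regime)
```

Within the estimation regime the fit is as good as the noise allows; out of regime, the error is dominated by the slope shift rather than the noise, which is the sense in which both interpolation across regimes and extrapolation are risky.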