Saturday, 12 October 2013

Nominal wage rigidity in macro: an example of methodological failure

This post develops a point made by Bryan Caplan (HT MT). I have two
stock complaints about the dominance of the microfoundations approach in macro.
Neither implies that the microfoundations approach is ‘fundamentally flawed’ or
should be abandoned: I still learn useful things from building DSGE models. My
first complaint is that too many economists follow what I call the microfoundations purist position: if
it cannot be microfounded, it should not be in your model. Perhaps a better way
of putting it is that they only model what they
can microfound, not what they see. This corresponds to a standard method of
rejecting an innovative macro paper: the innovation is ‘ad hoc’.

My second complaint is that the microfoundations used by
macroeconomists are so out of date. Behavioural economics just does not get a
look in. A good and very important example comes from the reluctance of firms
to cut nominal wages. There is overwhelming empirical evidence for this
phenomenon (see for example here (HT Timothy Taylor) or the work of Jennifer
Smith at Warwick). The behavioural reasons for this are explored in
detail in this book by Truman Bewley, which Bryan Caplan
discusses here. Both money illusion and the importance
of workforce morale are now well accepted ideas in behavioural economics.

Yet debates among macroeconomists about whether and why wages
are sticky go on. As this excellent example (I’ve been wanting to link
to it for some time, just because of its quality) shows, they are not just
debates between Keynesians and anti-Keynesians, so I do not think you can put
this all down to some kind of ideological divide. I suspect nearly all
economists are naturally reluctant to embrace cases where agents appear to miss
opportunities for Pareto improvement - I give another example related to wage
setting here. However in most other areas of the
discipline overwhelming evidence is now able to trump these suspicions. But
not, it seems, in macro.

While we can debate why this is at the level of general
methodology, the importance of this particular example to current policy is
huge. Many have argued that the failure of inflation to fall further in the
recession is evidence that the output gap is not that large. As Paul Krugman in
particular has repeatedly suggested, the reluctance of workers or firms to cut
nominal wages may mean that inflation could be much more sticky at very low
levels, so the current behaviour of inflation is not inconsistent with a large
output gap. Work by the IMF supports this idea. Yet this
is hardly a new discovery, so why is macro having to rediscover these basic empirical truths?

There may be an even more concrete example of the price paid
for failing to allow for this non-linearity in wage behaviour. For all its
inadequacies, the Eurozone Fiscal Compact does at least include a measure of
the cyclically adjusted budget deficit among its many indicators that are meant
to guide/proscribe fiscal policy. However, as Jeremie Cohen-Setton discusses here, the Commission now think they have been
underestimating the output gap. As he suggests, the reason is pretty obvious:
they have overestimated how much the natural rate of unemployment has risen in
this recession. Here is the example he gives for Spain.

How could the Commission have been so foolish as to believe the
natural rate had risen from 10% to 27% in a few years? Might it be because they
looked at nominal wages in Spain, and inferred from the fact that nominal wages
were not falling that therefore actual unemployment must be close to its
natural rate? If empirical macromodels as a matter of course allowed for the
absence of nominal wage cuts, would they have made such an obvious (to anyone
who is not a macroeconomist) mistake?
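A toy calculation shows how the inference goes wrong. The numbers and functional form below are my own illustrative assumptions, not the Commission's method: desired wage growth follows a linear Phillips curve, firms refuse nominal cuts, and inverting the linear curve while ignoring the floor "finds" a natural rate equal to actual unemployment.

```python
# Illustrative sketch (made-up numbers, not the Commission's actual method):
# desired wage growth follows a linear Phillips curve, but firms refuse
# nominal cuts, so observed wage inflation is floored at zero.

def desired_wage_inflation(u, u_star, slope=0.5):
    """Linear wage Phillips curve: desired nominal wage growth in percent."""
    return -slope * (u - u_star)

def observed_wage_inflation(u, u_star, slope=0.5, floor=0.0):
    """Same curve, but with downward nominal wage rigidity: no cuts."""
    return max(desired_wage_inflation(u, u_star, slope), floor)

u_actual = 26.0      # observed unemployment rate (%), Spain-like magnitude
u_star_true = 12.0   # assumed true natural rate (%)

obs = observed_wage_inflation(u_actual, u_star_true)   # floored at 0.0

# Inverting the *linear* curve while ignoring the floor "recovers"
# u_star = u + obs / slope, i.e. a natural rate equal to actual unemployment.
u_star_inferred = u_actual + obs / 0.5
print(obs, u_star_inferred)   # 0.0 26.0
```

Any estimator that maps "wages are not falling" into "unemployment is at its natural rate" makes the same mistake whenever the zero floor binds.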

I think this example illustrates why it can be dangerous to
rely on DSGE models to guide policy. Yet the influence of DSGE models in policy
making institutions is strong and growing. The Bank of England’s core
forecasting model (pdf) is a fairly basic DSGE construct, and as
far as I can see its wage equation is a standard New Keynesian specification,
with no non-linearity when wage inflation approaches zero. Now I know the Bank
have many other models they look at, and they will undoubtedly have looked at
the implications of a reluctance to cut nominal wages. (I discuss the Bank’s
‘new’ model in more detail here.) However, default positions are
important, as the examples I discussed earlier show. Focusing on models where
consistency with fairly simplistic microfoundations is all important, and
consistency with empirical evidence is less of a concern, can distort the way
macroeconomists think.

35 comments:

I think your second criticism of the (standard) microfoundations approach is a very good point, but doesn't it make the first unnecessary? If we couldn't get good microfoundations for very well-confirmed macroeconomic phenomena like sticky wages, then perhaps compromising on ad-hoccery would be a regrettable necessity, but given that we aren't in that position, the ad hoc critique against macroeconomic papers that use these unfounded microeconomic assumptions still seems to carry a lot of weight.

That's not to say that such papers should be dumped in the trash-can, because the requisite microfoundations may yet be developed in microeconomics and/or the macroeconomic model may turn out to be replicatable using different assumptions. However, until then, I think that the ad-hoc criticism still carries a lot of weight and can be sufficient grounds for rejecting a macroeconomic model, at least for practical purposes, at least if there is a decent non-ad hoc alternative.

If nothing else, ad hoc approaches create the danger that we overlook structural relationships that can be useful for the right solution to the problem in question. For example, if uncertainty is a big part of sticky wages and prices, then macroeconomic policies that encourage certainty and accurate inflation expectations are very good things. Or if one factor behind wage stickiness is government microeconomic policies, then one can consider changing them. Or maybe there are some government interventions that can help mitigate wage stickiness, etc. My point is that we want microeconomic policy and macroeconomic policy to work together well, and if macroeconomic policy is based on models with ad hoc microeconomic assumptions, then we run the risk of missing opportunities for better policies or even having positively counterproductive policies.

You are assuming that behavioral economics will provide microfoundations for all the things, sticky wages being only one, that are observed but not currently microfounded. I see no guarantee that this is true.

And if not, surely leaving things that are known to be important out of our models is worse than using ad hoc assumptions to put them in, even given the dangers that freely using ad hoc assumptions presents.

At least, that's the conclusion to which Dr. Wren-Lewis has been led. I'm not entitled to an independent opinion; so, I borrow his.

(1) I don't assume that microfoundations for well-confirmed phenomena need come from behavioural economics. There is more to micro than the latest fashion. I also think that there are microfoundations for sticky wages (in fact, the problem seems to be selecting which microfoundations are most important) but then again I'm not an economist.

(2) Observation is a slippery thing, especially with highly constructed data. As Koopmans pointed out back in the early days of econometrics, you need a theoretical basis for measuring phenomena in a particular way, and if the theoretical basis is that the phenomena are needed to satisfy the goals of an ad hoc model, then it's a big stretch to call that 'observation'.

Ditto "known to be important". Outwith a structural theory, we have a very weak basis for saying that something is important.

So I don't think that we can confidently say, prior to some theory that independently grounds the causal relations and motivates the existence of some economic phenomena, that it is either observed or important. We CAN say that there is good evidence for such phenomena e.g. if models using sticky wages tend to have very good empirical characteristics. However, that's suggesting a hypothesis, not proving it.

There is also the paper 'Inflation Dynamics and the Great Recession' by Laurence Ball and Sandeep Mazumder, May 2011, looking at the Phillips curve and sticky wages, or, in their terminology, 'anchored inflation expectations'.

'Anchoring' is behavioural economics - see Robert Shiller's 'HUMAN BEHAVIOR AND THE EFFICIENCY OF THE FINANCIAL SYSTEM', February 1998, pp8-11 for 'anchoring'.

And more recently, for Krugman, see Krugman blogs: July 22, 2012, 'Sticky Wages and the Macro Story', and April 3, 2012, 'Screw Your Analysis to the Sticky Point'.

"My first complaint is that too many economists follow what I call the microfoundations purist position: if it cannot be microfounded, it should not be in your model. Perhaps a better way of putting it is that they only model what they can microfound, not what they see. This corresponds to a standard method of rejecting an innovative macro paper: the innovation is ‘ad hoc’."

Simon, exactly who in the profession do you accuse of adopting these extreme views? I see ad hoc assumptions up the wazoo in most contemporary research. So what the heck are you talking about?

I'm from a completely different field, but read this piece as if it were speaking about it. Just substitute "truth-conditional semantics" (though this label has recently gone out of vogue) for "microfoundation purists", "cognitive linguistics" for "behavioural economics", et voila. The rest just comes automatically with the bargain: The adherence to a-priori principles in the face of empirical evidence, the refusal to even get familiarized with relevant recent research that fails to support preconceptions (I once had a paper rejected by a top journal, with comments by referees that implied the term "cognitive linguistics" was a label I made up...), and the silly mistakes (the assignment of "semantic content" to sentences in many truth-conditional analyses looks very much like the calculation of base unemployment levels for Spain in that graph).

Although my last comment is the most relevant, just in case anyone reading yours thinks there is any truth in what you say, let me add this.

(1) The macro I use in my papers is hardly old fashioned. For example, my forthcoming paper in the JMCB with Campbell Leith develops analysis from papers published in the mid 2000s.

(2) I teach the beginning of the core masters macro at Oxford, so inevitably that covers models (e.g. Ramsey model, OLG) that have been around for some time. However when I took over teaching this part of the course I added a substantial amount of material based on Obstfeld and Rogoff (1995), plus some features from Gali and Monacelli (2005), which if you had done the course you would know.

(3) Of course other commitments mean I do miss some seminars. But as you hide behind anonymity, and given (2), I wonder if you are at Oxford at all.

Don't be so certain the Commission made an honest mistake. The ECB and Commission are more than ready to publish absurdities in support of their position. I can understand the perennial optimism these institutions display; their job is to talk up the economy, not increase pessimism. However, in the early stages of the Cyprus crisis the Troika recommended a number of actions. The ECB then released an estimate of the impact of these recommendations on the Cypriot economy, clearly in the hope of influencing the heated parliamentary debate in Cyprus. Any informed person could see it bore no relation to reality. The supposed effects were so minor that I suspected Cyprus had already surpassed the estimated maximum contraction and unemployment. Which turned out to be true, and today Cyprus is far worse off than in the estimate. It's also noteworthy that as soon as Cyprus committed itself to the recommendations, Germany immediately demanded even more cuts. Basically they were too craven to state their full demands in advance, because they knew there was a non-trivial chance Cyprus might leave the Euro. The Cypriot crisis left a really bad taste in my mouth; in my eyes it tore down the edifice of credibility the ECB, as a young central bank, is trying to construct.

Are all "accepted" microfoundations really valid? Do microfoundational purist economists even belong in a scientific world cognisant of behavioural psychology, emergent properties and deterministic chaos?

"I suspect nearly all economists are naturally reluctant to embrace cases where agents appear to miss opportunities for Pareto improvement - I give another example related to wage setting here. However in most other areas of the discipline overwhelming evidence is now able to trump these suspicions. But not, it seems, in macro."

• Suppose you had $100 in a savings account and the interest rate was 2 percent a year. After five years, how much do you think you would have if you left the money to grow? More than $102, exactly $102 or less than $102?

• Imagine that the interest rate on your savings account was 1 percent a year and that inflation was 2 percent. After one year, would you be able to buy more than, the same as or less than you could today with the money?

• Do you think this statement is true or false: “Buying a single company stock usually provides a safer return than a stock mutual fund”?

Anyone with even a basic understanding of compound interest, inflation and diversification should know that the answers to these questions are “more than,” “less than” and “false.” Yet in a survey of Americans over age 50 conducted by the economists Annamaria Lusardi of George Washington University and Olivia S. Mitchell of the Wharton School of the University of Pennsylvania, only a third could answer all three questions correctly.

"65% of people answered incorrectly when asked how many reindeer would remain if Santa had to lay off 25% of his eight reindeer."

"1 in 3 people didn't know how much money a person would be spending on gifts if they spent 1% of their $50,000/year salary."

– Personal Finance for Dummies, 7th edition, 2012, page 9
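The quiz answers can be verified with one-line arithmetic (an illustrative check of the survey questions quoted above, not part of either survey):

```python
# Checking the survey answers with basic arithmetic (illustrative only).

balance = 100 * 1.02 ** 5        # $100 at 2% for 5 years: ~110.41, "more than $102"
real_factor = 1.01 / 1.02        # 1% interest, 2% inflation: < 1, buys "less than" today
reindeer_left = 8 - 0.25 * 8     # lay off 25% of 8 reindeer: 6 remain
gift_budget = 0.01 * 50_000      # 1% of a $50,000 salary: $500

print(round(balance, 2), real_factor < 1, reindeer_left, gift_budget)
```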

It's not just the issue that they require that all macro models be microfounded, with the resulting limitations and problems (see: http://richardhserlin.blogspot.com/2012/03/haugens-critique-of-microfoundations-in.html). They also require that the microfoundations always be highly unrealistic in the way that they like, the way that fits their preferred paradigm, and/or makes their preferred libertarian ideology look more desirable.

One thing I find really interesting: I often hear economics and finance professors talk about how ignorant and incapable their students are – and then in their research they assume everyone is a genius, with not only perfect public information in their minds, but perfect expertise to analyze it with! And they see no contradiction (or don't care).

Part of this is many will claim that you don't need everyone to be smart and expert and informed to get the result, you only need a savvy minority. But for many things it's easy to show that won't be enough. Perfect arbitrage won't often exist like it does in models – please see the later quotes from this handsome young man at a recent post by Miles Kimball for details:

Simon, yes, macroeconomists can be skeptical of causes of recessions that in theory at least could be solved simply by the government telling people that if only they weren't so foolish and saw through the money illusion then they wouldn't lose their jobs, or would get a job (which they really, really want even at lower pay in many models of nominal wage rigidity). So it's important to have a story/microfoundations like the efficiency wage stories. Furthermore, there's a difference between observing lack of adjustment and knowing that this failure to adjust in some dimensions will have strong aggregate effects. Wage rigidity need not be allocational; see for example Elsby (2009, https://68088b26-a-62cb3a1a-s-sites.googlegroups.com/site/mikeelsby/documents/Final_JME_formatted.pdf?attachauth=ANoY7cqilmkORGj0rr6FmdxPvaEbcNTS10SgOje8dRz1f2Sn-9G6cQ8rTlpBLGg2pbo9sWgmngGz_YHbj_oMW3Bwnw_YZYCc3A6lGCFwOkjN_HcPSI8EScWZw_6g4uEiCddwzTIKTcnlDnZlunH8_2gQBLlUawamC6j6ifrDfud5T3UJz-vkZerySsiFvjXM3AJ-Cgde3UeD-P09KoNyLpfruZPK2mf6gOm_qUCduqX-iOlDluC-2t4%3D&attredirects=0) or Amano et al. (2008, http://web.uvic.ca/econ/research/seminars/shukayev.pdf), plus the original papers by Barro in the 1970s on the Barro critique (1977 in the JME, I think, or 1979). As for the BOE, they have one of the most sophisticated forecasting systems around in terms of technique and IT. They could probably create a version of the core model with asymmetric linex wage adjustment costs and solve it via, say, a 3rd order Taylor approximation as part of their suite of models. Or if that's too sophisticated, just adjust the wage rigidity parameter when doing negative shocks. If I compare to common private sector models, I don't really see any serious treatment of this sort of nonlinearity anyway; I mostly just see linear simultaneous equations systems or VECMs.
The Bank of England's approach to forecasting strikes me as one of the most sophisticated and flexible around (and kudos for the excellent documentation they provide, which is not something you can say for most other public sector or private forecasters). And as central bankers (hence fundamentally attached to the New/Old Keynesian price and wage rigidity story), I'm sure they take wage rigidity quite seriously.
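On the "asymmetric linex wage adjustment costs" suggestion: for readers unfamiliar with the linex form, here is a minimal sketch (the parameter value psi is my own illustrative assumption). It penalizes nominal wage cuts exponentially but wage rises only linearly, which is one way of building a soft floor under nominal wage growth into a DSGE model.

```python
import math

def linex_cost(dw, psi=200.0):
    """Linex adjustment cost of a wage change dw (as a fraction, 0.02 = 2%).

    C(dw) = (exp(-psi*dw) + psi*dw - 1) / psi**2
    For psi > 0, cuts (dw < 0) are exponentially costly, rises only linearly.
    """
    return (math.exp(-psi * dw) + psi * dw - 1.0) / psi ** 2

cost_cut = linex_cost(-0.02)    # a 2% nominal wage cut
cost_rise = linex_cost(+0.02)   # a 2% nominal wage rise
print(cost_cut > cost_rise)     # True: the asymmetry deters cuts
```

With a symmetric (quadratic) cost the two would be equally expensive; the linex asymmetry is what generates reluctance to cut rather than raise.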

On your first point, I agree that it is very important to look for microfoundations. The success of the microfoundations project came in part from its ability to show up a lot of dodgy reasoning that had gone on before. The key issue for me is whether good microfoundations should be a precondition for putting things into models.

On the Bank of England, I agree with all you say - and said similar things in my post. But models are meant to capture important features of the economy, so you do not have to make adjustments to the model all the time. And given where we currently are in terms of wage inflation, it's difficult to imagine simulations where you would not want to allow for this non-linearity in a routine way.

The BOE examines wage rigidities and labour market frictions in published DSGE models. Interestingly, in a recent model (Working Paper 408, Faccini et al.), they do NOT find that nominal wage rigidities lead to inflation persistence. By running the model both with and without frictions they find little impact on inflation. They explain this on the basis that their model includes search frictions (supporting earlier work by Krause and Lubik). I guess Krugman is basing his assertions on US models (Gertler et al.) which show labour market frictions creating inflation inertia. The IMF report cited above favours an alternative explanation: they believe the relative stability of inflation in recent years is due to the high credibility of inflation targeting by central banks.

SW-L: I suspect nearly all economists are naturally reluctant to embrace cases where agents appear to miss opportunities for Pareto improvement

But putting it this way understates the problem, since in reality there is often no mapping from a free lunch in a model to any exploitable free lunch in a real world domain to which the model is applied. Too often, it seems to me that the illusion of a free lunch is an artifact of committing the fallacy of composition. The free lunch could only be exploited by a goal-directed Borgian collective.

Peter Diamond (Nobel Lecture):

'... there appears to be wide acceptance of the Barro (1977) stricture that a model should not have “an inefficiency that intelligent actors could easily avoid,” ...'

'I disagree with these views because they ignore the incompleteness of models and the role of simplification for tractability. For simplicity, many search models have one-employee firms to simplify the analysis. Yet employment is overwhelmingly in firms with two or more employees. Are we going to learn more from one-employee modeling by invoking considerations that seem plausible in a literal one-employee environment or from invoking considerations that seem plausible in many-employee firms and applying them to the one-employee environment? It seems to me that the latter is more likely to yield useful insights.'

The Barro critique is not tied to having one employee (a legitimate simplification in search and matching models under constant returns to scale and some other conditions). In fact, in that special case there are several models of wage rigidity which are immune to the Barro critique (since the rigidity doesn't cause inefficient job destruction, but reduces hiring). The point remains that it's dangerous to base policy conclusions on models where unemployment is due to, e.g., money illusion, which if it were literally the only factor could be fixed by a government education campaign or some "Nudge"-style policy. Doing that means you risk missing some deeper underlying causes for big drops in employment in a recession, causes which may lead to quite different policy conclusions than suggested by the sticky wage disequilibrium story. Academia and applied private sector or policy analysis have different standards and objectives (and not just in macroeconomics: an econometrician may insist that some estimation is rubbish unless it's done with the latest state-of-the-art adjustment for bad instrumental variables, while the applied analyst may be stuck with no good instruments and hoping a theoretically inconsistent OLS regression says something useful nevertheless). Sure, if you have a strong prior that nominal wage rigidity has important employment effects (i.e. the Barro critique is not a serious issue), by all means find some ad hoc way to put wage rigidity in your model that generates big swings in employment (though for that matter, why not just assume a high labour supply elasticity at business cycle frequencies? You'd get many of the same effects in terms of more sluggish inflation). But the default of always blaming business cycles on messed-up pricing can also lead you to miss all sorts of things, since it's so convenient and easy to just blame irrationality all the time without trying to see if there's reason behind the madness.

"if you have a strong prior that nominal wage rigidity has important employment effects ... by all means find some ad hoc way to put wage rigidity in your model that generates big swings in employment (though for that matter why not just assume a high labour supply elasticity at business cycle frequencies .... [b]ut the default of always blaming business cycle on messed up pricing can also lead you to miss all sorts of things"

Oh, it's worse than that, because one of those assumptions is not like the other: one has very solid applied microeconometric support, the other has none. That you can substitute one for the other in your model and get the same results is precisely what tells you that your model is not merely wrong (as all models are) but that it is unlikely to be useful in saying anything about the real world.

When we talk about why people trade, or why there is so much trading, or such hard-to-explain trading in economics, we're only allowed certain explanations, or factors: liquidity needs, private information. We can include imperfect rationality, but only if it's some very fancy, cute explanation stemming from Ivy League psychologists or evolutionary theorists – a "behavioral" explanation. And there is Akerlof's "bounded rationality", but I see it rarely applied, and just to limited things.

What we ignore, what we refuse to admit – so we have these "puzzles" and "paradoxes" – is the ridiculously obvious – not some super fancy "behavioral" niche trick, or some occasional talk of "Bounded Rationality" for limited things here and there, but just that it's an ultra-complicated world and people are super busy. The vast majority of investors have ridiculously little financial expertise. And they have little interest in spending their tiny amount of free time doing the thousands of hours of serious mathematical study to really get good expertise – even if they had those thousands of hours. Likewise for relevant public knowledge. They have ridiculously little of the relevant public knowledge in their heads when they trade in stocks, because there's a massive time cost to getting it in their heads in our ultra-complicated, ultra-busy world. They don’t even begin to have the kind of thinking in Rajiv's blog post. Most don't even have basic finance expertise and public knowledge.

This is the gigantic, whopping, explanation, or factor (a more accurate word), for the manufactured "puzzles" and "paradoxes". The vast majority just have very, very little finance expertise, public knowledge, and time. It's not even that much a matter of rationality or intelligence, or fancy behavioral flaws. You can be perfectly rational, and very intelligent, and have none of these little niche behavioral flaws, and still do all of these things just by, like most people, having very, very little finance expertise, public knowledge, and time. Or inclination to spend the massive time to get them.

From a Center for Economic and Financial Literacy survey:

"65% of people answered incorrectly when asked how many reindeer would remain if Santa had to lay off 25% of his eight reindeer."

"1 in 3 people didn't know how much money a person would be spending on gifts if they spent 1% of their $50,000/year salary."

– Personal Finance for Dummies, 7th edition, 2012, page 9

But it can't be this, nooooo, because this goes against the paradigm economists (or those in control of economics) so badly want. And it's not fancy enough; it's not part of some fancy Harvard evolutionary-theory cognitive trick that accounts for just a relatively small fraction of the reason for what we see.

That said, or gotten out, you can put people like this – really low expertise and public information, making the commonly observed mistakes, acting in the commonly observed ways, but still pretty, or very, rational and intelligent – into a big mathematical extravaganza model that the profession likes, and show things. And that might get published where it will have an impact, in the AER's and Journal of Finance's, but I just haven't seen it. I wish I had the time to do it myself.

Hello Simon,

The EU Commission does not use a macro model at all to arrive at its NAWRU (NAIRU) estimates for EU member states. Previously, these estimates were obtained through trend/cycle decomposition using an HP filter. The well-known "end point" problems associated with this filter then led to the exploration of alternative methodologies, ending up with an Unobserved Components method based on a wage Phillips curve and a Kalman filter approach to estimating the NAWRU.

The Commission does the technical work, and presents it to national delegations at the regular meetings of the "Output Gap Working Group" in Brussels. This "technical" group reports to the EU's Economic Policy Committee, where member states officially decide on matters of policy and methodology. The delegations are invited to question and comment. Many of the delegates do not master these methods, and questions are few and superficial, I guess because one dares not question these very complicated and so-called "state of the art" methodologies. And so, one step at a time, little by little, we ended up with a "state of the art" method delivering sometimes very improbable results. But insofar as the method remains unquestioned, its results must be accepted...

Best regards,
John Doe
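To make the "end point" problem concrete, here is a pure-Python sketch of HP-filter trend extraction on made-up data (my own illustration, not the Commission's code). At the end of the sample the filter has no future observations to look ahead to, so a cyclical jump in unemployment in the final periods drags the estimated trend, and any NAWRU read off it, sharply upward.

```python
def hp_trend(y, lam=1600.0):
    """Hodrick-Prescott trend: solve (I + lam * K'K) tau = y, where K is the
    second-difference operator, via Gaussian elimination (fine for small n)."""
    n = len(y)
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for r in range(n - 2):                      # row r of K has pattern 1, -2, 1
        k = {r: 1.0, r + 1: -2.0, r + 2: 1.0}
        for i, ki in k.items():
            for j, kj in k.items():
                A[i][j] += lam * ki * kj
    M = [A[i][:] + [y[i]] for i in range(n)]    # augmented matrix [A | y]
    for c in range(n):                          # forward elimination with pivoting
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):              # back substitution
        tau[r] = (M[r][n] - sum(M[r][j] * tau[j]
                                for j in range(r + 1, n))) / M[r][r]
    return tau

# Made-up unemployment path: 10% for 32 quarters, then a jump to 25%
# in the last 8 quarters (purely cyclical by construction).
u = [10.0] * 32 + [25.0] * 8
trend = hp_trend(u)
# The end-of-sample "trend" (a NAWRU proxy) is dragged far above 10%.
print(round(trend[0], 1), round(trend[-1], 1))
```

This is exactly why a purely statistical trend, applied at the end of a sample that finishes in a deep recession, can mistake cyclical unemployment for a rise in structural unemployment.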

This is what I had thought. An Unobserved Components method based on a wage Phillips curve and a Kalman filter approach will give you the result they get if you make no allowance for resistance to nominal wage cuts. At the very least, thinking about resistance to nominal wage cuts would make you worried about applying this technique in current circumstances, and instead using a bit of common sense.

But you are absolutely right that any kind of effective scrutiny of the Commission's work would have exposed this issue. So the system does appear to have failed.

Indeed, the methodological issues derive from the failure of an entire process, where we have massive information asymmetries, coordination difficulties, herd behavior, and agency problems. I think that we could go a long way towards resolving many of these issues by having the Commission's work peer-reviewed or assessed, not by national delegates coming from public administrations who may sometimes have some difficulty in fully understanding all of the technicalities and implications of the methods that are proposed by the Commission, but by peers in academia who are certainly more specialized in the use of techniques as sophisticated as these Kalman filter applications.

John D.

My own suggestion would be that every country had a well resourced fiscal council, who inevitably would have to do the same calculations, and so would have the expertise. These fiscal councils could then peer review what the Commission did.

I fully agree that this would be the ideal solution. However, I note that funding and staffing a public institution to provide it with the ability to follow or review the Commission's current work is not going to happen any time soon, at least in the country where I am working, due to budget cuts and personnel layoffs...

According to the WSJ, "[t]he change was approved by technical experts at a meeting last week and was expected to be supported at a Tuesday meeting of more senior officials in Brussels. But an article published in The Wall Street Journal about last week's decision generated concern in some national capitals about its effects on budget policies, an EU official said."

Living in one of those "national capitals" (Berlin), and knowing what the "political elite" reads (FAZ, the leading conservative newspaper), I would guess that this article had something to do with the last-minute reversal:

What the article basically says, particularly but not exclusively between the lines, is that the EU is changing the way it calculates structural deficits in order to accommodate spendthrift debtor countries. Needless to say, it omits all those arguments at least hinted at in the original WSJ report that explain why the present approach is bogus. In fact, the teaser even leaves out the "structural" in front of "deficits", insinuating that "actual" deficits are made to look smaller than they are. Which, judging from the comments section, is precisely what fuming FAZ readers took from the article.

Simon, might it be worthwhile if you could point out the issue here in layman terms, say, in a Social Europe Journal article? As the previous commentator pointed out, even delegates from national public administrations are having a hard time understanding how big the impact of those econometric techniques is on actual economic policies. When it comes to politicians, including progressive ones, it gets even worse. But some of those at least read SEJ. Or so I heard.

Basically, the methodologies you mention are decided upon by the EU's "Output Gap Working Group" (OGWG), a group of experts representing EU national governments. In fact, the OGWG did not decide to change its methodology but to make an ad hoc change in the application of the methodology to data for Spain, as the normal computation of the NAWRU for Spain appeared to be so improbable, and Spain is a "large" euro area member state, and Spanish objections prevailed. What was proposed was simply an ad hoc change, such as changing the definition of a variable, changing a lag, or imposing an a priori restriction on a coefficient in the Phillips curve equation for Spain. This was meant to reduce the level of the estimated NAWRU for Spain, making the number less difficult to swallow politically. However, the general methodology was never meant to be changed. Note that the same case for an ad hoc change could have been made for other countries, such as Greece or Ireland...

Regards,
John Tardis Smith

Interesting. If I am informed correctly, the Commission's "Labour Market Developments in Europe 2013" report, which is scheduled for publication this week or next, is going to contain a section on cyclical vs. structural unemployment in the EU. This analysis will come to the conclusion that substantial parts of calculated NAWRU increases across Europe (and not just Spain) are in fact of a cyclical nature. Who would have thought.

According to the authors, however, their "analysis is not linked to the work ongoing in the Output Gap Working Group of the Economic Policy Committee of the ECOFIN on the fine-tuning of the methodology for the computation of the NAWRU used in EU surveillance." Given that the authors are also part of DG ECFIN, it might be interesting to observe reactions to this publication.

That would be incredible. And leave Olli with some serious explaining to do... It will also make for some interesting discussions within the OGWG no doubt, and maybe open the way for some backtracking on the current methodology to something simpler and more transparent.

John Smith

I just saw this post by Simon and this incredible thread of comments. Many of you seem more informed than I am on the dynamics within the OGWG. I will write more about this topic in the coming weeks. Feel free to send me a private tweet @JCSBruegel if you want to discuss this issue and point me to some recent ongoing work on the subject.

Jeremie Cohen-Setton

A similar methodological problem arises when other fields point to a different outcome, like with interest rates at the moment. Macro basically assumes rates will remain low, while finance indicates they will have to rise. You simply get a system that will be under pressure in the longer term and could at some point make unexpected moves and wreck the good intentions.

Politics is probably the standard one: simply getting wrong the game that you are playing. You are not dealing with nice friendly academics, but trying to get people who cannot count to five even using their fingers to vote for you in the next election.

It also offers advantages. To the average businessman the sticky wages discussion is likely beyond moronic, so obvious is it. They will instinctively take the side of the person who states that wages are sticky, on other subjects too, of which they know little. In discussions it also always works well to move the other side onto, in this case, behaviour, do a bit of homework yourself, and you butcher them. An utterly simple strategy that still works incredibly well. Add a bit of speed at the crucial moments so they cannot properly think through what is happening (as they are specialists in other fields and do not have the relevant knowledge ready).

Nice post. Simon, yes macroeconomists can be skeptical of causes of recessions that in theory at least could be solved simply by the government telling people that if only they weren't so foolish and saw through the money illusion then they wouldn't lose their job / would get a job (which they really, really want, even at lower pay, in many models of nominal wage rigidity). So it's important to have a story/microfoundations like the efficiency wage stories. Furthermore, there's a difference between observing a lack of adjustment and knowing that this failure to adjust in some dimensions will have strong aggregate effects.

"Basically, the methodologies you mention are decided upon by the EU's "Output Gap Working Group" (OGWG), a group of experts representing EU national governments. In fact, the OGWG did not decide to change its methodology but to make an ad hoc change in the application of the methodology to the data for Spain, as the normal computation of the NAWRU for Spain appeared to be so improbable, Spain is a "large" euro area member state, and Spanish objections prevailed"

Yep, but the Spanish proposal was not just an ad hoc one. If you look at their 2013 Stability Programme (unfortunately in Spanish only) they discuss the ongoing debate they are pursuing in the OGWG and explain that they are arguing for the application of the OECD methodology in calculating the output gap. This can't possibly be just for Spain. They even base their justification on what they call a "forward looking neo Keynesian inflation expectations methodology" (strange, this, from the current Spanish government), which seems to have its origins in some of Jordi Gali's work on trends in unit labour costs.

The problem here has much more to do with estimating trend growth than with estimating the NAIRU (which, like the output gap, is very much the derived variable in this context; strange, this deriving of things from a postulated unobservable, but still).

I think Krugman is wrong to attribute the blocking of the methodology change simply to politics from Berlin, since Madrid's interests are equally political, and whose opinion you share depends on how much of Spain's unemployment you think is structural (collapse of construction/real estate activity) and how much merely cyclical. The sudden stop in the construction sector could well be used as a theoretical justification for a sudden surge in the structural level of unemployment (it ain't comin back), in much the same way the sixteenth-century enclosure movement in England left a lot of displaced people to drift into the cities with nothing obvious to do.

The whole issue with non-empirically-observable entities whose existence models postulate is that the theoretical justification has to be more rigorous than usual. Which is why the EU Commission and the IMF Latvia staff use the production function approach, since there is some underlying rationale. Simply plotting markers on an unemployment rate/inflation diagram doesn't pass the test.
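For what the production function approach amounts to in its simplest form, here is a hedged sketch (Cobb-Douglas with invented parameter values; the Commission's actual implementation also trends TFP, participation and hours). Potential output swaps actual employment for what the labour force would supply at the NAWRU, and TFP and capital cancel in the gap, so the measured gap hinges directly on the NAWRU assumption:

```python
def output_gap(u_actual, nawru, alpha=0.35):
    """Percent gap between actual and potential output under a
    Cobb-Douglas production function Y = TFP * K**alpha * L**(1-alpha).
    TFP and K cancel in the actual/potential ratio, so the gap depends
    only on how far actual unemployment sits from the assumed NAWRU."""
    return 100.0 * (((1 - u_actual) / (1 - nawru)) ** (1 - alpha) - 1)

# Invented Spain-flavoured numbers: 26% actual unemployment.
gap_low_nawru = output_gap(0.26, nawru=0.18)   # more of the slump counted as cyclical
gap_high_nawru = output_gap(0.26, nawru=0.22)  # more of it counted as "structural"

print(f"gap with NAWRU 18%: {gap_low_nawru:.1f}%")
print(f"gap with NAWRU 22%: {gap_high_nawru:.1f}%")
```

Raising the assumed NAWRU by a few points roughly halves the measured output gap here, which is exactly why the number is fought over: a smaller gap means a larger structural deficit and tighter required fiscal policy.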

Incidentally, and again Krugman doesn't seem to have noticed, but one of the reasons the Spanish request has been put on hold must surely be that the Latvian literature would need revising, since the IMF economists (Blanchard et al) came up with a current 10% structural unemployment rate and an output gap (excess) of 12% in 2007 using this approach. This is the only way Latvia's recovery can be considered complete and satisfactory.

So, at the end of the day, what we have here is a horribly inaccurate methodology (end-of-series data bias), measuring entities whose own existential claim is only appreciable at the level of a model, with the results being appropriated by one side or the other as they see fit to justify the pursuit of their own short-term objectives.
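The end-of-series bias is easy to demonstrate with any two-sided trend filter; the sketch below uses a plain Hodrick-Prescott filter on made-up data rather than the Commission's actual production-function machinery. The real-time trend estimate at the bottom of a temporary slump gets dragged down, then is revised up once the recovery enters the sample:

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Two-sided Hodrick-Prescott trend: minimise fit + lam * smoothness,
    i.e. solve the first-order condition (I + lam * D'D) tau = y,
    where D takes second differences."""
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Made-up series: steady trend growth plus a temporary slump at t = 50.
t = np.arange(80)
y = 0.05 * t - 1.5 * np.exp(-0.5 * ((t - 50) / 5.0) ** 2)

tau_realtime = hp_trend(y[:51])   # estimated at the bottom of the slump
tau_later = hp_trend(y)           # re-estimated once the recovery is visible

print(f"trend at t=50, real time: {tau_realtime[50]:.2f}")
print(f"trend at t=50, revised:   {tau_later[50]:.2f}")
```

At the end of the sample the filter has no future observations to lean on, so it reads the cyclical slump as trend; that is precisely the mechanism by which crisis-era NAWRU and potential output estimates get "structuralised" in real time and quietly revised later.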

One last point, the Spanish government don't seem to have realized that in doing this (their only real objective is to have the structural deficit revised downwards so Rajoy can lower taxes going into the 2015 election, and they seem to be challenged when it comes to thinking about more than one thing at once) they are reinforcing the argument of those who are worrying about the potential for deflation in Spain. They claim such risk doesn't exist, but the size of the output gap they are defending suggests it clearly does.

At a broader level, it does seem to me that people are missing how the new labour market reforms are now working, namely via job churn, and even population churn (Spaniards leaving, new migrants arriving). Most of the ULC correction on the periphery is simply due to labour shedding. Real wages have only risen at below the euro area average since the end of 2011. Basically, VAT was being systematically raised (to lower deficits AND sustain inflation) and most wages were indexed. Now labour forces (at least in Spain) are being systematically restructured, with younger workers coming in at lower salaries than those paid to the older workers who are restructured out. Very Japan-since-1997-ish. Which is why I fully expect deflation to now take hold.

No one is having their nominal wages reduced. They are simply losing their jobs, with others being employed subsequently at much lower salaries and on very different contracts. The result is effectively the same, but it works on a tricklenomics basis - over the next ten years or so.
