Wednesday, April 18, 2012

Equilibria, unique and not-so-unique (guest post by Roger Farmer)

Roger Farmer had some very interesting things to say regarding my last post on equilibrium analysis, but unfortunately Blogger somehow imposes a maximum comment length! And really, it's too interesting to languish down in the comments section, so now it's a guest post! Without further ado:

***

As a proponent of models with multiple equilibria, let me say a few words about your very interesting post on disequilibrium. I am a big fan of Bob Lucas' insistence on restricting ourselves to equilibrium models. Why?

The idea of disequilibrium is borrowed from the physical sciences where it has meaning in the context of, for example, Newtonian mechanics. A ball rolling down an inclined plane is an example of a physical system in disequilibrium. When it reaches the bottom of the plane, friction ensures that the ball will come to rest. That is an equilibrium. But it is not what we mean by an equilibrium in economics.

An economic equilibrium, in the sense of Nash, is a situation where a group of decision makers takes a sequence of actions that is best (in a well-defined sense), on the assumption that every other decision maker in the group is acting in a similar fashion. In the context of a competitive economy with a large number of players, Nash equilibrium collapses to the notion of perfect competition. The genius of the rational expectations revolution, largely engineered by Bob Lucas, was to apply that concept to macroeconomics by successfully persuading the profession to base our economic models on Chapter 7 of Debreu's Theory of Value, as opposed to the hybrid models of Samuelson's neoclassical synthesis. In Debreu's vision, a commodity is indexed by geographical location, by date and by the state of nature. Once one applies Debreu's vision of general equilibrium theory to macroeconomics, disequilibrium becomes a misleading and irrelevant distraction.

The use of equilibrium theory in economics has received a bad name for two reasons.

First, many equilibrium environments are ones where the two welfare theorems of competitive equilibrium theory are true, or at least approximately true. That makes it difficult to think of them as realistic models of a depression, or of a financial collapse, since the welfare of agents in a model of this kind will be close to the best that can be achieved by a social planner. An outcome that is best, in this sense, does not seem, to me, to be a good description of the Great Depression or of the aftermath of the 2008 financial crisis.

Second, those macroeconomic models that have been studied most intensively, classical and new-Keynesian models, are ones where there is a unique equilibrium. Equilibrium, in this sense, is a mapping from a narrowly defined set of fundamentals to an outcome, where an outcome is an observed temporal sequence of unemployment rates, prices, interest rates etc. Models with a unique equilibrium do not leave room for non-fundamental variables to influence outcomes. It is not possible, for example, for movements in markets to be driven by sentiment; what George Soros has called "the mood of the market". It is for that reason that conventional theory seeks to explain large asset price swings as disequilibrium phenomena.

Multiple equilibrium models do not share these shortcomings (see, for example, this). And they do not need to appeal to disequilibrium explanations to account for phenomena that are anomalous when one adopts the unique-equilibrium perspective. But although multiple equilibrium models have advantages in these respects, they lead to a new set of questions. It is easy enough to write down an economic model where more than one outcome is possible. But how would a rational economic agent behave if placed into an environment that was the real-world analog of the economist's model?

The answer, I believe, is that a model with multiple equilibria is an incomplete model. It must be closed by adding an equation that explains the behavior of an agent when placed in an indeterminate environment. In my own work I have argued that this equation is a new fundamental that I call a belief function. It represents the end result of a process of learning, either from past observations of economic data or from copying the behavior of one's peers.

The belief function is a mapping from past observable variables to expectations of all future variables that are relevant to a decision maker. It is this function that guides behavior when a rational expectations model is indeterminate. The belief function provides a way for social psychology to influence economic outcomes, and in my view, it should be accorded the same methodological status as that of preferences, technology and endowments in a classical or new-Keynesian model.
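As a stylized illustration of the idea (my sketch, not anything from Farmer's papers), a belief function can be coded as a rule mapping past observations to a forecast of the next one; the geometric weighting and the decay parameter below are purely illustrative:

```python
# A toy "belief function": map a history of past observations to an
# expectation of the next observation, weighting recent data more heavily.
# The functional form and the decay parameter are illustrative only.

def belief_function(history, decay=0.5):
    """Map past observations (oldest first) to a forecast of the next one,
    as a weighted average with geometrically declining weights."""
    if not history:
        raise ValueError("need at least one past observation")
    weights = [decay ** (len(history) - 1 - i) for i in range(len(history))]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, history)) / total

# With constant past data, the forecast equals that constant:
print(belief_function([2.0, 2.0, 2.0]))  # 2.0
```

The key point is only that such a rule is a fixed object, on the same footing as preferences or technology, that picks out which of the many possible equilibria agents coordinate on.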

Some recent authors have argued that rational expectations must be rejected and replaced by a rule that describes how agents use the past to forecast the future. That approach has similarities to the use of a belief function to determine outcomes, and when added to a multiple equilibrium model of the kind I favor, it will play the same role as the belief function. The important difference between multiple equilibrium models and the conventional approach to equilibrium theory is that the belief function can coexist with the assumption of rational expectations. Agents using a rule of this kind will not find that their predictions are refuted by observation. It is the belief function itself that selects an equilibrium.

I work with models of multiple equilibrium that have incomplete labor markets (see, for example, this); as a consequence of this incompleteness, these models have multiple steady state equilibria. Incomplete labor market models fit the data better than their classical or new-Keynesian counterparts (see http://rogerfarmer.com/newweb/pdffiles/farmer_phelps_volume_revision.pdf). And because they do not imply that all unemployment is socially optimal, they are able to account for the mass human misery caused by persistent unemployment that we observe in periods like the Great Depression or the 2008 financial crisis.

Like their classical or new-Keynesian counterparts, incomplete labor market models explain data as a unique mapping from fundamentals to outcomes. But fundamentals include more than just technology, preferences and endowments; they also include a role for market sentiment. Outcomes in these models can sometimes be very, very bad.

This brings me to your excellent post on the use of the disequilibrium assumption in economics. If by disequilibrium I am permitted to mean that the economy may deviate for a long time, perhaps permanently, from a social optimum, then I have no trouble with championing the cause. But that would be an abuse of the term 'disequilibrium'. If one takes the more normal use of disequilibrium to mean agents trading at non-Walrasian prices, as in the models of Benassy and Dreze from the 1970s, I do not think we should revisit that agenda. Just as in classical and new-Keynesian models where there is a unique equilibrium, the concept of disequilibrium in multiple equilibrium models is an irrelevant distraction that has been imported, inappropriately, from the physical sciences, where equilibrium means something very different from its modern usage in economics.

Roger Farmer

***

(back to Noah)

Lots of very deep stuff here, so I'll just add a few quick responses/comments:

1. Roger's post addresses, in a much more precise and well-informed way, possibilities (2) and (3) from my last post. But it rejects possibility (1), which involves non-Walrasian prices.

2. I really like the idea of a "belief function". This is basically chucking Rational Expectations - not because people are irrational, but because the notion, as Lucas thought of it, of people's beliefs coinciding with the equilibrium outcome of an economic model doesn't make a lot of sense in a world of constantly shifting multiple equilibria.

3. I strongly suspect that I am one of the people importing the idea of "disequilibrium in multiple equilibrium models" from the physical sciences. I am not yet convinced that it is an inappropriate analogy, or that non-Walrasian prices are a fruitless line of inquiry, but I will have to read a lot more to understand properly...

Anyway, many thanks again to Roger for the guest post. This is not the most attention-grabbing, media-friendly stuff, but it is very deep and important to the future of both how economics is done and how the discipline is perceived by outsiders.

Do we really get multiple equilibria in this model with intertemporal substitution?

It seems to me that what we are getting at here is an amplification effect. Congestion in the labor market causes the effects of interest rate changes to be greater than we would otherwise suspect. Which is fine; that makes a lot of sense to me.

Potentially you could also explain why the economy would never recover from bad enough monetary policy.

However, given all of that, why not think of your asset parameter as measuring expectations of monetary policy? After all, the stock market is highly sensitive to monetary policy.

There is so much wrong here that I'm not sure where to begin (maybe I'm just having a contrarian day, since I've been arguing in the direction of Karl Smith all morning while I should be thesis writing).

First of all, Nash equilibrium most certainly does not in any way reduce to perfect competition in the presence of many players. I don't know where he gets this. Cournot oligopoly reduces to perfect competition as the number of firms grows, but that is another matter entirely (conversely, Bertrand competition reduces to perfect competition with only two players); in general, though, it takes a hell of a lot more assumptions than that to get perfect competition; and with imperfect information, the answer is "never" (more players can make the problem worse). That's issue one.

The second problem with this is what you identified as issue #3: why reject non-Walrasian prices? In no model with sticky prices can prices be truly Walrasian. Until now I had THOUGHT that was one of the major arguments deployed against NK models. Granted that I'm not an expert in that, but now I'm confused... or someone's confused. Nor are 'equilibrium disequilibrium' models (which are sequences of equilibria evolving in time) really so different from what Farmer claims to be doing--his belief function is just the (proposed) disequilibrium process. You have an equilibrium at any given point in time given the parameters of the disequilibrium--it's like a constrained maximization. This may or may not preclude DSGE methodology; certainly you can do equilibrium disequilibrium with Bellman equations if you wanted to. To make a long story short: I don't understand what Farmer is worried about.

Third, his belief functions are not new. Read up, for example, on Self-Confirming Equilibria... in fact just last week I advised an undergrad (for her senior thesis) to explore SCE as a replacement for rational expectations (which is based on the Bayesian Nash concept). In other words, this is not a new idea. It wasn't a new idea when I mentioned it to the student. It wasn't a new idea when the subject came up in a seminar a few weeks ago. Hell, it wasn't a new idea when Fudenberg and Levine invented SCEs.

It is a standard result that Nash equilibrium for infinite players yields the perfect competition equilibrium. Indeed it is basically by construction using the definition of a perfect competition eq'm so it would be strange if it did not.

That is not correct, anonymous. Essentially you need the same conditions for this as you do for the welfare theorems, and I certainly hope I don't need to go into what you need for that. You also need to make strong assumptions about the structure of the game in the presence of imperfect information. As an extreme example, the market for lemons unwinds no matter how many buyers and sellers are in the model.

Again, I'm not sure where this is even coming from... perhaps you're thinking about the Core Equivalence Theorem (which would explain a lot)? If so I should warn you that perfect competition is an assumption of the theorem (you need to replicate the entire economy many times) not a result of the theorem (which is that the Walrasian Eq'm obtains (is the unique element in the Core)). At any rate that isn't even a "Nash Equilibrium"... the Core is a concept of cooperative game theory.

Actually, anonymous, it has occurred to me what you are probably thinking... As I recall, Lucas had an argument that perfect competition would be an evolutionary end state for an economy (given enough agents). This is wrong, though (for "theory of the firm" sorts of reasons).

Essentially, in the real world you have hold-up problems in contracting (essentially because contracts are incomplete and surpluses will be reapportioned after renegotiation) that are solved by forming firms. But the existence of firms destroys the Lucas result--the evolutionary end state may be a world with relatively-few relatively-large firms. I think (but I'm not entirely sure) that in this sort of world, the only thing that keeps the firms from being monopolies in the steady state is the fact that there are internal hold-ups as well, which limit the extent of firms.

In physics, models serve to describe what works. And as long as it works, the more basic the assumptions, the better the model. And what the model tells you is why it works. Like an apple falling from a tree. You don't need to model the air, the wind or the shape of the apple, not to mention relativity or quantum physics. You know that the apple falls, and you may calculate the time it takes to reach the ground.

In economics, you may have basic models describing basic mechanisms, like price adjustment on a competitive market. And the model tells how the mechanism works.

In the economy, we do have things that compare to apples falling from trees during a few decades, and then, suddenly, the apples don't fall anymore, they go up in the air. And your model that explains why it's supposed to fall is of no use under those circumstances. Complex models might in theory try to catch that, but economics is nowhere near being able to model such complexity.

Worse, because your model has been right for a long time, when it no longer is, you don't question the model, you question the reality, and you make suggestions for the reality to adapt to the model, and not the other way around.

"And what the model tells you is why it works. Like an apple falling from a tree. You don't need to model the air, the wind or the shape of the apple, not to mention relativity or quantum physics. You know that the apple falls, and you may calculate the time it takes to reach the ground."

Sentence 1 is contradicted by the following sentences. The base Newtonian model, IIRC, doesn't really give a 'why', just that objects are attracted towards each other, with no explanation of why this attraction works.

What does it mean to answer the question of "why"? In physics, "explanation" usually just means description of multiple phenomena in terms of one "underlying" phenomenon (e.g. a falling apple and the moon's orbit in terms of one phenomenon of gravity).

Well, if you prefer, the model tells you which are the fundamental forces, how they interact, and what is the expected outcome (apple on the ground). It is indeed description rather than explanation. This is a semantic issue, it does not change the point.
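The apple calculation mentioned above really is a one-liner under the basic Newtonian assumptions (no air resistance, constant gravitational acceleration):

```python
import math

def fall_time(height_m, g=9.8):
    """Time for an object to fall from rest through height_m meters,
    ignoring air resistance: h = (1/2) * g * t**2  =>  t = sqrt(2h/g)."""
    return math.sqrt(2 * height_m / g)

# An apple dropping 4.9 meters takes exactly one second:
print(fall_time(4.9))  # 1.0
```

That is the sense in which the simple model "works": everything omitted (air, wind, the shape of the apple) is irrelevant to the answer at this level of precision.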

Here's an example from Physics that kind of goes against micro-foundations.

Let's say you have a tank full of water. In the tank there is blue water and regular water, separated by a barrier. When you remove the barrier, the water will mix together and create lightly colored blue water.

Now imagine you had a camera that could film each of the molecules moving around and bouncing into each other. If you play the tape, you will see each collision perfectly follows the laws of physics. The tape ends with what we would expect: lightly colored blue water.

If we were to play the tape backward, we would also see that each molecular collision still follows the laws of physics, but the tape ends with the blue water separated from the regular water. This we know is highly unlikely.

While the microfoundations were perfectly within the laws of physics, one of the end results is one that we would not expect.
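The thought experiment can be mimicked with a toy simulation: a perfectly symmetric, reversible micro-rule still produces irreversible-looking macro behavior. The cell-swap rule and all parameters below are my own simplification, not a physical model:

```python
import random

# Toy version of the mixing thought experiment: n "blue" molecules start
# in the left half of a tank and n "clear" molecules in the right half.
# Each step, two randomly chosen molecules swap positions -- a symmetric,
# reversible micro-rule. Macroscopically, the fraction of blue molecules
# in the left half still drifts toward 1/2 and stays there.

def simulate_mixing(n_per_side=500, steps=20000, seed=0):
    rng = random.Random(seed)
    # True = blue; cells [0, n_per_side) are the left half of the tank.
    cells = [True] * n_per_side + [False] * n_per_side
    for _ in range(steps):
        i = rng.randrange(len(cells))
        j = rng.randrange(len(cells))
        cells[i], cells[j] = cells[j], cells[i]  # reversible micro-move
    blue_left = sum(cells[:n_per_side])
    return blue_left / n_per_side  # starts at 1.0, mixes toward 0.5

print(simulate_mixing())  # close to 0.5
```

Nothing in the swap rule prefers mixed states; there are simply vastly more of them, which is the emergent second-law behavior the commenter describes.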

Wonderful example, anonymous. It is certainly true that the second law of thermodynamics is a purely emergent phenomenon even though entropy itself is "microfounded" as are individual interactions. I wish I had thought of that, but leave it to Feynman to get to the heart of the matter.

In macroeconomics, we don't have microfoundations to the point of the individual consumer (i.e. we don't literally model the decisions of every agent in the economy, or every particle in the system). I don't think macroeconomists want to go in that direction at all (hence the resistance to agent based modelling). What we do have is a model for the aggregate behavioural response to a policy as a function of fundamentals. An important difference between physics models and economic ones is that expectations about the future affect decisions today, and so it is important to capture this channel. It is not necessary that microfoundations be completely analogous to the micro level decision however, because we are modelling the aggregate behavioural response, not the individual level one. See for example much of Prescott's writing on interpreting the Frisch elasticity in macroeconomic models. I think a lot of the discussion here implicitly assumes that microfoundations refer to particle level interactions, they do not. Whether they are structurally invariant under the policy considered, well, that depends on the model and the policy.

"In the context of a competitive economy with a large number of players, Nash equilibrium collapses to the notion of perfect competition."I'm not sure what result Farmer is referring to here. The Arrow-Debreu model is not a well-specified game (no-one chooses prices, and if agents choose actions that do not satisfy the aggregate resource constraints, the outcome isn't well-defined), so Nash equilibrium does not apply. To turn it into a 'market game' you need to specify who sets prices, what happens if demand exceeds supply, etc.; different assumptions (Shubik, Dubey, Gale...) lead to different results. As far as I know, there are few papers showing that with a large number of players, the Nash equilibrium of such a market game is a competitive equilibrium allocation. (Certainly not when there are more than 2 goods, so we are talking about general, not partial equilibrium.)

Even if Nash equilibrium did collapse to perfect competition, this is *full information* Nash equilibrium we are talking about. Every agent needs to know not just her own preferences and endowments, but the preferences and endowments of every agent in the economy, which is an absurdly huge informational burden. This is a million miles away from the Hayekian fairy tale we tell undergrads in which, even though knowledge is dispersed and private, equilibrium prices convey all the information we need to know to reach an efficient allocation. It would be nice to have the result that, in a market game with dispersed information, agents learn their way to the competitive equilibrium allocation. This would be similar in spirit to the older literature on stability (e.g. Franklin Fisher). But as far as I know, such a result does *not* exist. So I see no reason to believe that markets tend towards competitive equilibrium.

I would also disagree with Farmer slightly about terminology. Economists use the word 'equilibrium' to mean not 2, but 3 different things:

1. Market clearing: each agent signals his excess demands as if he could buy and sell as much as he wants at the given prices, and these excess demands sum to zero in each market;

2. Consistent beliefs/actions: in some sense, every agent is doing the best that he can, given what others are doing (Nash equilibrium);

3. Either the whole state of the economy, or some interesting function of that state, does not change over time ('equilibrium' in the physical sciences; we usually call this a steady state).

Without further assumptions, no one of these conditions implies the other two. Confusion arises from the fact that we use the same word to mean 3 different things, as if they were equivalent.

@Keshav A number of responders have criticized me for confounding Nash equilibrium with Walrasian equilibrium and the point is well taken. You are correct to point to three different uses of the equilibrium concept and I do not want to disparage a research agenda that seeks to investigate, more closely, the connections between (1) and (2).

But I do not think that that agenda will help us to fix the problems with existing DSGE models in macro. Lucas pointed out that we cannot observe a market in disequilibrium. He was right. Indeed, in the sense in which the word is used in macroeconomics, we cannot observe a market. The best we can hope for is to observe sequences of trades and the prices at which those trades take place.

How should we organize our observations? In my view, it is helpful to view agents as goal oriented decision makers who take actions to achieve what they perceive to be optimal outcomes. In a game theoretic formulation of an economy, we would need to specify the information structure of each agent, the timing of moves, the payoff structure of the game and the space of actions. That way of proceeding would explain data as a consistent set of plans; definition (2) in your taxonomy.

The general equilibrium approach of Debreu's Chapter 7 (your definition (1)) takes a shortcut. No single agent sets prices; but prices are chosen such that quantities, prices and expectations of future prices are mutually consistent. In this rational expectations equilibrium environment, the data are explained as a sequence of choices that obey the equilibrium consistency requirements at all points in time and in all states of nature.

Is that a useful way to seek to understand the world? That depends on two things. First, does it help us to understand data? Second, does it provide us with a guide to good policy? If one sticks to standard DSGE models with a unique equilibrium (in the sense of (1) and (2)), a case can be made that the answer to both questions is no. But the same charge cannot be leveled at DSGE models with incomplete labor markets.

Explaining persistent unemployment as the stubborn failure of prices to adjust to their equilibrium values will go only so far. At some point we should entertain the idea that persistent unemployment is itself an equilibrium phenomenon, and at that point, the idea of trading at 'false prices', in other words 'disequilibrium', becomes an impediment to our understanding.

I should add that, by multiple equilibrium models, I do not restrict myself to dynamical systems where there is more than one steady state. Incomplete labor market models have a continuum of steady state equilibria as well as a continuum of non-stationary equilibria, where equilibrium here is in your sense (1). When closed with a specification of how agents form beliefs, these models are capable of understanding economic data and of guiding economic policy.

I haven't read the old literature on non-Walrasian trades that you reference in your post. But here's my question: What happens when the equilibria shift on a time-scale similar to the time-scale at which prices change? Might the process by which prices converge to Walrasian values then be important in predicting which equilibrium the economy will move towards? It seems to me that it might. Have you read that Andrew Lo paper I linked to in my last post?

Noah, I haven't read the Lo paper - but I will look at it. Rather than disequilibrium prices, I prefer to think of disequilibrium expectations. If the fundamentals are non-stationary and shifting, then it would make sense for an agent to use a learning rule that puts more weight on recent information (constant gain learning). That would make sense, even in a unique equilibrium model.

The reason I am not totally sold on that approach is that I do not see that it can easily explain a decade of 15% unemployment. I do not see a good culprit to explain which of the fundamentals shifted to cause the Great Depression. In my approach, the Depression was an equilibrium phenomenon. But not in the RBC sense of the term. It was an equilibrium with involuntary unemployment.
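The constant-gain learning rule Roger mentions above can be sketched in a few lines; the gain value and the data series are purely illustrative:

```python
# Constant-gain learning: the belief is updated each period by a fixed
# fraction of the forecast error, so recent observations get more weight.
# Compare with a decreasing-gain (1/t) rule, which converges to the
# sample mean and weights all history equally.

def constant_gain(data, gain=0.2, initial=0.0):
    belief = initial
    for x in data:
        belief += gain * (x - belief)  # fixed weight on new information
    return belief

def decreasing_gain(data, initial=0.0):
    belief = initial
    for t, x in enumerate(data, start=1):
        belief += (x - belief) / t  # gain 1/t: the running sample mean
    return belief

# After a structural break from 0 to 1, the constant-gain belief tracks
# the new level quickly; the sample-mean belief barely moves.
data = [0.0] * 50 + [1.0] * 10
print(constant_gain(data))    # close to 1
print(decreasing_gain(data))  # still close to 0
```

This is the sense in which constant gain discounts old information: it handles non-stationary fundamentals at the cost of never fully converging even when fundamentals are stable.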

I take it for granted that, taken literally, the assumption of consistent plans is false. Whether the general equilibrium approach is useful must depend on 'how false' this assumption is, i.e. how quickly people adjust their inconsistent expectations so as to become consistent. It doesn't only depend on whether a general equilibrium model can explain the behavior of the variables we care about. (I'm not sure if this is what you meant by "does it help us to understand data?".) Even if we could explain the aggregate variables of interest (during the Depression, say) using a DSGE model, the model would probably make poor out-of-sample predictions unless the assumptions underlying DSGE modelling (rational expectations, market clearing) were close to reality.

I don't claim that a viable disequilibrium modelling framework exists (there are many relevant lines of research, but as far as I know most of them were abandoned in the 1970s, and all of them have unresolved problems). However, I don't think it's obvious that a non-DSGE model cannot explain, e.g., persistent unemployment. To explain 15% unemployment lasting, say, 5 years, it's not necessary to assume that wages would take 5 years to adjust to equate labor supply and demand. A key insight of the Barro-Grossman/Dreze/Benassy 'non-Walrasian equilibrium' literature is that disequilibrium in one market can explain unemployment in another market. Somewhat relatedly, there is the old Keynesian argument that even if prices fall when there is excess supply of goods, and nominal wages fall when there is excess supply of labor, *real* wages may not adjust to clear the labor market. (All we get is deflation - which either restores the economy to equilibrium via the real balance effect according to Pigou, or, more likely, destabilizes the economy further via debt-deflation according to Fisher/Tobin/Minsky.) So it's not obvious how sluggish adjustment towards equilibrium must be for persistent unemployment to be a disequilibrium phenomenon (just as it's not obvious whether the observed frequency of price adjustment can explain monetary non-neutrality in a New Keynesian model, due to strategic complementarities) - indeed, if the equilibrium wasn't stable, no speed of price adjustment would be fast enough. To answer this question we would need a viable disequilibrium research program, which, sadly, doesn't exist (to my knowledge). I admit that the difficulties with constructing such a research program are a good argument for pursuing alternatives which do not stray too far from the equilibrium fold, e.g. incomplete labor market models.

(Even aside from its relevance to macro, I think the disequilibrium project is interesting for the same reason that microfoundations are usually interesting - it is more satisfying to know how and why prices and expectations adjust to bring people's plans into equilibrium, than to just assume they do so. General equilibrium theory lacks microfoundations. But this is a separate question.)

Regarding single equilibria, it is worth looking at why they occur in DSGE and other similar models. People building DSGE models do not impose an assumption that there exists a unique equilibrium. Instead, what happens is that economists build a linear DSGE model (could be raw-linear or log-linear), and the single equilibrium point is a mathematical certainty in such a model. Belief functions do not help this issue--they are typically EWMAs of past conditions, and EWMAs are still linear functions. To have multiple equilibria you need to have nonlinearities.

Not really. The equations of a DSGE model are typically nonlinear, and uniqueness of equilibrium, or existence of a single steady state, is determined by the assumptions of the model (e.g. a concave production function), not by the solution method. OTOH, you can have indeterminacy (i.e. a continuum of solutions) in a linear model as well.

What linearization does is find a linear approximation to the functions that describe the solution, like the consumption function, the transition law for state variables, etc., around the steady state. There are also nonlinear methods, but all they typically do is give you a more precise approximation to those functions, rather than change the qualitative nature of the solution.

ivansml: Perhaps I should be more precise. In a model with multiple equilibria, linearization allows you to assume that shocks won't knock you to a different equilibrium. It's really an assumption about stability, not functional form.

The assumptions that lead to uniqueness of the equilibrium are different things.

The point is that at some point the modeler assumes that the linearization is sufficient, and at that point there is only one equilibrium. Also, this requires a model form that is linearizable in the first place--it is certainly possible to have a model form which is not linearizable.
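To make the point about linearization and multiple equilibria concrete, here is a deliberately trivial example; the law of motion x' = x^2 is chosen purely for illustration and is not taken from any DSGE model:

```python
# A nonlinear law of motion can have several steady states, but its
# linearization around one of them is only valid locally. Here x' = x**2
# has steady states at 0 and 1; the linearization around 0 sees only
# the one at 0.

def nonlinear_step(x):
    return x * x  # steady states: x = 0 and x = 1

def linearized_step(x):
    return 0.0 * x  # derivative of x**2 at x = 0 is 0, so x' ~ 0 near 0

# A small shock dies out under the nonlinear dynamics, just as the
# linearization predicts...
x = 0.5
for _ in range(10):
    x = nonlinear_step(x)
print(round(x, 6))  # 0.0 (converges to the low steady state)

# ...but a large enough shock crosses the second steady state at 1 and
# explodes, something the linearization around 0 can never show.
x = 1.5
for _ in range(7):
    x = nonlinear_step(x)
print(x > 1e10)  # True
```

In other words, linearizing is implicitly an assumption that shocks stay small enough never to escape the neighborhood of the chosen steady state.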

There is a theorem in finite general equilibrium models which asserts that, under certain regularity conditions, the number of equilibria is finite and odd. Kehoe and Levine have an Econometrica paper which shows that the same is not true in overlapping generations (OG) models, and my work with Jess Benhabib showed that this result also breaks down once there are increasing returns to scale in the technology.

In OG models and in models with externalities, there may be a continuum of dynamic equilibria. The first-generation indeterminacy literature exploited that result to show that there are equilibria in which beliefs influence outcomes. Dave Cass and Karl Shell called this idea 'sunspots'.

I think of incomplete labor market models as 'second generation' indeterminacy models. Here the problem is not that there are many dynamic equilibria; it is indeterminacy of the steady state.
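The "finite and odd" result mentioned above can be illustrated with a toy one-good excess demand function; the cubic below is made up so that it has exactly three equilibria:

```python
# An excess demand function z(p) that is positive at very low prices and
# negative at very high prices must cross zero an odd number of times
# (counting transversal crossings). This cubic has roots at p = 1, 2, 3.

def excess_demand(p):
    # Illustrative functional form, not from any particular model.
    return -(p - 1.0) * (p - 2.0) * (p - 3.0)

def count_equilibria(f, p_min, p_max, grid=1001):
    """Count sign changes of f on a grid -- each one brackets a root."""
    crossings = 0
    step = (p_max - p_min) / grid
    prev = f(p_min)
    for k in range(1, grid + 1):
        cur = f(p_min + k * step)
        if prev * cur < 0:
            crossings += 1
        prev = cur
    return crossings

print(count_equilibria(excess_demand, 0.4, 3.6))  # 3 -- finite and odd
```

Perturbing such a function slightly can merge two crossings and destroy them in pairs, which is why the parity (oddness) survives while the count itself can change; the OG and increasing-returns cases Farmer cites are exactly where this regularity breaks down.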

Most smart people I know do understand multiple equilibria and also form a belief function; however, the belief function takes the form of a subjective probability distribution (spend 5 minutes at GS quantitative strategies and I guarantee someone will ask you to quantify your subjective probability distribution).

So say we have regulatory outcome A with price impact P(A) and regulatory outcome B with price impact P(B). Then the market price can be used to infer the "market-implied probability" of A vs. B, or the market-implied subjective probability distribution. This is done all the time (for example, in M&A you can infer the market-implied probability of the merger given some reasonable outcomes).

So I would not say that multiple equilibria are inconsistent with rational expectations at all, and I think it makes perfect sense to try to rationalize the outcomes.
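The market-implied probability calculation described above is just a rearrangement of the weighted-average pricing equation; the numbers below are made up for illustration:

```python
# If the market price is a probability-weighted average of the price under
# outcome A and the price under outcome B, the implied probability of A
# can be read off directly:
#   market_price = p * price_if_a + (1 - p) * price_if_b

def implied_probability(market_price, price_if_a, price_if_b):
    """Solve the weighted-average pricing equation for p."""
    return (market_price - price_if_b) / (price_if_a - price_if_b)

# E.g. a merger target trades at 95; it is worth 100 if the deal closes
# (outcome A) and 80 if it breaks (outcome B):
print(implied_probability(95.0, 100.0, 80.0))  # 0.75
```

With more than two outcomes the single market price no longer pins down the whole distribution, which is why practitioners add instruments (options, spreads) to back out richer market-implied distributions.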

Also, I meant to note that I do think "disequilibrium dynamics" are important. Unemployment in sticky wage models is fundamentally a disequilibrium phenomenon. How do I know what the real equilibrium is? How do I know how long it takes to get there? How do I even know I'm in disequilibrium if there are multiple equilibria? How do I know which one I am moving towards?

In a predator-prey model, for example, there are two important equilibria to worry about (no animals, and all prey with no predators). Some conditions lead to one of these; some to a homeostasis between predator and prey populations. This is analogous to overfishing a river, or a competition between technologies that could lead to a potential monopolist. It's important to know which direction we are headed.
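One standard formalization of the predator-prey example is the Lotka-Volterra system, whose rest points can be computed directly. (Parameter values below are illustrative; note that in the textbook version the two rest points are extinction and coexistence, rather than "all prey, no predators".)

```python
# Lotka-Volterra predator-prey dynamics:
#   prey:     x' = x * (a - b * y)
#   predator: y' = y * (-c + d * x)
# Setting both derivatives to zero gives two rest points: extinction
# (0, 0) and coexistence (c / d, a / b).

def equilibria(a, b, c, d):
    """Return the two rest points of the Lotka-Volterra system."""
    return [(0.0, 0.0), (c / d, a / b)]

def derivatives(x, y, a, b, c, d):
    return x * (a - b * y), y * (-c + d * x)

a, b, c, d = 1.0, 0.5, 0.75, 0.25
for ex, ey in equilibria(a, b, c, d):
    dx, dy = derivatives(ex, ey, a, b, c, d)
    print((ex, ey), dx, dy)  # both derivatives vanish at each rest point
```

Which rest point (or cycle around the coexistence point) the system approaches depends on the initial condition, which is the commenter's point about needing to know which direction we are headed.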

What if you could perfectly model an economy going forward from this point, and under Policy A, real GDP would be 0.2 percent higher after 20 years, inflation would stay at or below 2 percent the entire time, but unemployment would stay above 8 percent for 10 years? In comparison, under Policy B, real GDP would be 0.2 percent lower after 20 years, inflation would go as high as 4 percent, but unemployment would fall below 6 percent after only 2 years.

Which policy would you promote? Does Model A accurately reflect the high negative social costs and effects on wealth inequality? What is the effect on individuals who are unemployed for long periods? Do we tolerate a lost generation of workers to achieve a modest increase in GDP 20 years from now?

The problem with the models is VALUES. Which values are most important? Growth? Inequality? Employment? What are the Time frames? Is trading a modest increase in GDP 20 years from now worth the social costs of high unemployment over a substantial portion of the productive years for much of the potential workforce?

What if you chose Policy A, but some unforeseen shock to the economy occurred that would have made Policy B the better choice for GDP? Would sacrificing the workforce on the altar of high unemployment have been worth the quest for a moderate increase in GDP?

Part of the problem is the questions the models are asked to solve. Right now, our biggest problem is unemployment that is too high in the short term. Discussions of long term policy for GDP or inflation are great, but they are starting from current conditions rather than starting 2 years from now after optimal policies for reducing unemployment have been in place. The question we need to ask is, "What policy will most quickly reduce our high unemployment problem with the least negative effects on growth and inflation and once we get to full employment, what policies are best for conditions going forward?"

This leads to another point that POLICY matters and POLICY goals change over time and feedback into the economic conditions in ways that cannot be anticipated.

We get arguments about the best policy for GDP and inflation starting with current conditions, but that ignores the most pressing current problem and assumes that we let it fester. Jobs, in the short term, are an important value in our system, and that is not directly reflected in many of the questions the models are asked to solve. The values are bad. -jonny bakho

I find Farmer's comment to be incompetent and ignorant, to be kind. Redefining the state space to include dimensions for dated commodities, for instance, doesn't answer the question of why one would expect the economy to approach equilibrium.

First of all, in a model with a unique equilibrium we can easily have "sentiments" create huge (welfare-disastrous) swings in aggregates. For instance, if agents think that with a certain probability a credit crunch will reduce investment opportunities and lower future growth, then they will start to save and spend less, and a crisis will emerge today, even if there has been no change to current fundamentals (only to the probability of future fundamentals). So what is the difference between this and sunspot models? Well, in sunspot models you can essentially say, "suppose I think I will behave like an erratic idiot tomorrow; then it is optimal to behave like an erratic idiot today." But since it is "optimal," being an "idiot" has no meaning; the belief is self-fulfilling. The "belief function" would tell us how "idiotic" (or smart) we expect ourselves to be in the future, and so limit the veritable plethora of bad outcomes. I am not sure this is the way forward.

Second, the multiplicity of equilibria is often a very local phenomenon, and only holds in linearized models. Once you leave that realm, models can instead be entirely explosive, or collapsing. Do we really think it is a good idea to exploit knife-edge properties that only emerge under a silly approximation as the main explanation of crises? I doubt it.

You are incorrect to assert that multiplicity of equilibria (or even indeterminacy of equilibria) is a local phenomenon. As a counter-example, consider any well-defined general equilibrium model of money, where there is typically a continuum of non-stationary equilibria converging to a steady state in which money has no value. In models of the inflation tax, the indeterminate steady state is one where money has positive value. As an example, see my 1997 Macroeconomic Dynamics paper with Michael Woodford.
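
This is not the Farmer-Woodford model, just a stylized toy map chosen to make the "continuum of non-stationary equilibria" concrete. Suppose the equilibrium value of money v_t evolves as v_{t+1} = v_t^2, in units where the monetary steady state is v = 1. Then every initial value in (0, 1) is a valid equilibrium path, and all of them converge to the steady state where money is worthless.

```python
# Stylized illustration of a continuum of equilibria: every v0 in (0, 1)
# indexes a distinct equilibrium path, all converging to v = 0 (money
# has no value); only v0 = 1 stays at the monetary steady state.
# The map v_{t+1} = v**2 is a deliberate toy, not any published model.
def path(v0, steps=30):
    v = v0
    for _ in range(steps):
        v = v * v
    return v

print(path(1.0))    # 1.0: the monetary steady state, money keeps its value
print(path(0.99))   # collapses to 0.0: money becomes worthless
```

The indeterminacy here is global, not local: no linearization is involved, and the multiplicity survives arbitrarily far from the steady state, which is the substance of the reply above.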

Thank you for the reply. But mind you, I said that "the multiplicity of equilibria is often a very local phenomenon", and a counter-example is not enough. Generating sunspots commonly means that the researcher must rig the model to get enough roots inside the unit circle. And in a nonlinear model these roots will change value depending on the state of the economy (this is not very precise, but I think you get the idea). I am not, and very few people in the literature are, convinced that this is more than a mathematical curiosity ...
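
The "roots inside the unit circle" remark refers to the Blanchard-Kahn counting condition for linearized models x_{t+1} = A x_t: the equilibrium is determinate when the number of eigenvalues outside the unit circle equals the number of non-predetermined (jump) variables, and indeterminate (sunspot-prone) when there are too few. A toy check, with an illustrative matrix:

```python
# Blanchard-Kahn style root counting for a linearized system
# x_{t+1} = A x_t. The example matrix is illustrative only.
import numpy as np

def bk_classification(A, n_jump):
    """Classify determinacy given n_jump non-predetermined variables."""
    n_unstable = int(np.sum(np.abs(np.linalg.eigvals(A)) > 1.0))
    if n_unstable == n_jump:
        return "determinate"    # unique stable equilibrium path
    elif n_unstable < n_jump:
        return "indeterminate"  # continuum of stable paths, sunspots possible
    else:
        return "explosive"      # no stable path

A = np.array([[0.9, 0.3],
              [0.0, 0.5]])      # eigenvalues 0.9 and 0.5, both inside the unit circle
print(bk_classification(A, n_jump=1))  # indeterminate
```

The commenter's point is then that A itself is a linearization: in a nonlinear model the local eigenvalues vary with the state, so a root count taken at one steady state need not describe the economy elsewhere.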

This is probably a topic for a longer post but I have to (respectfully) disagree with Farmer here. Equilibrium requires the mutual consistency of individual plans. In Arrow-Debreu this is brought about by the coordinating role of prices, given the assumption that we have complete futures markets for all contingent claims. But RE assumes that individual plans are mutually consistent without concerning itself with the process by which such consistency is attained. In other words, without concerning itself with the stability of equilibria. Even when there is a unique equilibrium path, it need not be locally stable under a broad class of disequilibrium adjustment dynamics (see for instance Howitt, JPE 1992). In fact, I think that such instability is at the heart of economic crises. It's not multiplicity of equilibria that is the problem, it's the instability of equilibria with respect to disequilibrium dynamics. Explicit consideration of such dynamics is an important area of research which should be encouraged rather than shut down.

Having said that, Farmer does make a very important point: RE is consistent with all kinds of economic fluctuations and crises. One could certainly model economies in this way. The question is whether one should.

"The idea of disequilibrium is borrowed from the physical sciences where it has meaning in the context of, for example, Newtonian mechanics. A ball rolling down an inclined plane is an example of a physical system in disequilibrium. When it reaches the bottom of the plane, friction ensures that the ball will come to rest. That is an equilibrium. But it is not what we mean by an equilibrium in economics."

This shows a complete misunderstanding of the meaning of the equilibrium metaphor in economics (yes folks, it's a metaphor). As Farmer goes on to point out, equilibrium is a state in which the economy is static. Same thing as the ball that has come to a standstill.

Disequilibrium theory, in both physics and economics, has to do with movement and dynamics that have no ultimate 'end point'. Equilibrium theory, on the other hand, does posit an 'end point' (albeit usually in the future). This feeds into the fact that equilibrium theory is implicitly teleological (and outdated!) while disequilibrium theory is evolutionary or teleonomic (and not outdated!).

All the other social sciences have dealt with these problems over the course of the last 50 years -- from psychology through anthropology to sociology. Economics remains stubbornly stuck at the point of not even recognising the metaphors they are deploying or the larger methodological issues. Some have -- like Mirowski -- but they are politely ignored.

Having said this, I just got back from a conference yesterday where a neoclassical economist tried to model the modern banking system using system dynamics. Afterwards I asked him what he thought such an approach might do to the idea of equilibrium. After a short conversation I'm fairly convinced he'll be moving far away from equilibrium analysis.

Change will come as people apply new computer models to the analysis of concrete problems. The profession hasn't even begun to grasp what the equilibrium metaphor actually meant; they could have figured it out, however, by reading Marx, Hegel, Whitehead or any number of other process philosophers.

Very interesting stuff for someone with a background in population ecology, another field in which we often use analogies to physical dynamical systems to think about what's going on.

From that perspective, I found Roger Farmer's comments a little difficult to understand. It's quite common in ecology and evolutionary biology to write down and analyze dynamical models of, say, the population dynamics of interacting species (predator and prey for instance) in which those species adaptively adjust their behavior. Further, the details of how "adaptive" adjustment of behavior happens can be varied to reflect all kinds of things (e.g., the organisms only have limited information, etc.). Such systems have features like Nash equilibria that can be understood with game theoretic-type analyses, but they're still dynamical systems and so it still makes perfect sense to talk about their equilibria, local and global stability of those equilibria, other sorts of attractors like limit cycles or chaos, bifurcations, etc. Is the same not true in economics? Am I missing or misunderstanding something here?

"Once one applies Debreu's vision of general equilibrium theory to macroeconomics, disequilibrium becomes a misleading and irrelevant distraction."

Also sorry, but this is an absolutely absurd statement and anyone fooled by it should take a course in basic logic. I might as well say:

"Once one applies the Creationist vision of Intelligent Design to the analysis of biology, evolution becomes a misleading and irrelevant distraction."

In methodology we call such a statement a 'tautology'. Neoclassical methodology is basically a giant tautology. This is why many serious methodologists basically stopped engaging with the discipline in the late-70s.

The problem with equilibrium in the current market is the fact that behaving in a not very reasonable way (speculating, feeding a bubble) is more profitable than looking to a further horizon and focusing on longer-term profit, which is more "reasonable" for maintaining continuity. The lack of equilibrium is simply the result of bad regulation and a kind of dysfunction in the capital market. We can call it fraud or speculation; that is not very important. The fact is that the biggest profits come from actions that push the market out of equilibrium. The market in capitalism is free, that is obvious, but there are regulations and taxes that can shape profit levels for many different strategies. If the free market were regulated against speculation and bubbles, it would be in equilibrium much of the time. The problem is the stabilization of the market. Rules and taxation need to be shaped so that they maximize profit for strategies that target equilibrium. The market must be stable for the good of capitalism and the economy. STABLE means that for a little (finite) disturbance at the input there is a little (finite) disturbance at the output. Regulations and taxation should address that problem. The market is dysfunctional if, for a small change in input, the output is huge and goes to almost infinity (this is just a bubble). In that condition it cannot regulate itself, and it simply does not work. STABILITY of the market guarantees its proper functioning. Otherwise bailouts are needed, the market freaks out every time global leaders speak, bubbles become everyday life, and so on.
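
The stability criterion in this comment is essentially bounded-input/bounded-output stability from control theory. A toy scalar illustration of the distinction it draws, with a hypothetical feedback gain standing in for the market's amplification of shocks:

```python
# BIBO stability in one line of dynamics: x_{t+1} = g * x_t + u_t.
# With feedback gain |g| < 1, a one-time shock stays bounded and decays;
# with |g| > 1, the same small shock is amplified without bound (the
# "bubble" case). Gains and shock size are illustrative.
def respond(gain, shock, steps=50):
    x = 0.0
    for _ in range(steps):
        x = gain * x + shock
        shock = 0.0  # one-time disturbance at the input
    return x

print(respond(0.9, 1.0))  # decays toward zero: stable
print(respond(1.1, 1.0))  # grows geometrically: unstable
```

On this reading, the comment's claim is that regulation should keep the system's effective gain below one, so that small shocks die out instead of compounding.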

The epistemology of these discussions breaks open the fragile eggshell of what we perceive to be the truth. Roger Farmer's research agenda won't change now; he has vested his career in his perception that general equilibrium, irrespective of whether it is Walrasian or the legerdemain of Arrow-Debreu, is reasonable, and as such we are confined to tread the path of believing (as Stephen Mark Cohn put it) in "a micro-founded auctioneer model of the macro economy". What's wrong with macro foundations in which there is continuous feedback between macro and micro levels? Agents (be they households, firms, etc.) neither make decisions in a vacuum nor have the foresight to plan ahead (I am not referring to the perfect-forecasting straw man); they must deal with the institutional realities of oligopoly, market power, emotion, and class (oh, sorry, that is for Marxist analysis). Time to put the Lucas critique into the recycle bin and hit empty, and there is no point in hedging Freshwater with Saltwater. Here's to hoping for some superior thinking from the next generation.

I think perhaps I'm missing something here, but to be honest, I think this discussion is missing the point by miles. We have just been through a long period of "stability" during which there were large persistent current account imbalances, large increases in debt levels and large increases in asset price/income ratios. This "stability" was very unstable, and many people noticed and said so. Is this "equilibrium"? If it is, it is a very strange equilibrium, one that includes the seeds of its own demise. If the real world doesn't actually follow an equilibrium path, why are we trying to pretend that we can sensibly model it as though it does?

P.S. My own view is that economics is a special case of ecology, and it should use methods borrowed and adapted from ecology. (That is, we should be modelling the births, deaths and reproduction of interacting and evolving classes of agents.)

Professor Lars Pålsson Syll writes in "The nodal point of the macroeconomics debate", 16 July 2012, http://larspsyll.wordpress.com/2012/07/16/the-nodal-point-of-the-macroeconomics-debate/

"This summer both Oxford professor Simon Wren-Lewis and Nobel laureate Paul Krugman have had interesting posts up discussing modern macroeconomics and its alleged needs of microfoundations.

Most “modern” mainstream neoclassical macroeconomists more or less subscribe to the view that microfoundations have somehow led to better models, enabling us to make better predictions of future macroeconomic events... Yours truly basically sides with Wren-Lewis and Krugman on this issue, but I will try to explain why one might be even more critical and doubtful than they are regarding microfoundations of macroeconomics.

Microfoundations today means more than anything else that you try to build macroeconomic models assuming “rational expectations” and hyperrational “representative actors” optimizing over time. Both are highly questionable assumptions.

The concept of rational expectations was first developed by John Muth (1961) and later applied to macroeconomics by Robert Lucas (1972). The macroeconomic models building on rational-expectations microfoundations that are used today among both “new classical” and “new Keynesian” macroeconomists basically assume that people on average hold expectations that will be fulfilled. This makes the economist’s analysis enormously simplistic, since it means that the model used by the economist is the same as the one people use to make decisions and forecasts of the future.

Macroeconomic models building on rational-expectations microfoundations assume that people, on average, have the same expectations. Someone like Keynes, on the other hand, would argue that people often have different expectations and information, which constitutes the basic rationale behind the macroeconomic need for coordination. That is rather swept under the rug by the extreme simple-mindedness of assuming rational expectations in representative-actor models, which are so in vogue in “New Classical” and “New Keynesian” macroeconomics. But if all actors are alike, why do they transact? Who do they transact with? The very reason for markets and exchange seems to slip away with the sister assumptions of representative actors and rational expectations.

Macroeconomic models building on rational-expectations microfoundations impute beliefs to agents that are not based on any real informational considerations, but are simply stipulated to make the models mathematically and statistically tractable. Of course you can make assumptions based on tractability, but then you also have to take into account the necessary trade-off in terms of the ability to make relevant and valid statements about the intended target system. Mathematical tractability cannot be the ultimate arbiter in science when it comes to modeling real-world target systems. One could perhaps accept macroeconomic models building on rational-expectations microfoundations if they had produced lots of verified predictions and good explanations. But they have done nothing of the kind."
