
It’s time to get rid of the “representative agent” in monetary theory

“’Tis vain to talk of adding quantities which after the addition will continue to be as distinct as they were before; one man’s happiness will never be another man’s happiness: a gain to one man is no gain to another: you might as well pretend to add 20 apples to 20 pears.”

I have often felt that modern-day Austrian economists are fighting yesterday’s battles. They often seem to think that mainstream economists still reason like the “market socialists” of the 1920s and that the socialist calculation debate is still ongoing. I feel like screaming “wake up, people! We won. No economist endorses central planning anymore!”

However, I am wrong. The Austrians are right. Many economists today, knowingly or out of ignorance, still endorse some of the worst failures of early welfare theory. Economists have known since the time of Jeremy Bentham that one man’s happiness cannot be compared to another man’s happiness. Interpersonal utility comparison is a fundamental no-no in welfare theory: we cannot, and should not, compare one person’s utility with another’s. But this is exactly what “modern” monetary theorists do all the time.

Take any New Keynesian model of the style made famous by theorists like Michael Woodford. In these models the central bank is assumed to be independent (and benevolent). The central banker sets interest rates to minimize the “loss function” of a “representative agent”. Based on this kind of rationalisation, economists like Woodford find theoretical justification for Taylor-rule-style monetary policy functions.

Nobody seems to find this problematic, and it is often argued that Woodford has even provided the microeconomic foundation for these loss functions. Pardon my French, but that is bullsh*t. Woodford assumes that there is a representative agent. What is that? Imagine if we introduced this character in other areas of economic research; most economists would find it highly problematic.

There is no such thing as a representative agent. Let me illustrate. Suppose the economy is hit by a negative shock to nominal GDP. With Woodford’s representative agent, all agents in the economy are hit in the same way, and the loss (or gain) is the same for every agent. No surprise: all agents are assumed to be identical. As a result there is no conflict between the objectives of different agents (there is basically only one agent).

But what if there are two agents in the economy: one borrower and one saver? The borrower borrows from the saver at a fixed nominal interest rate. If nominal GDP then drops, the result is effectively a transfer of wealth from the borrower to the saver.
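A back-of-the-envelope sketch of this transfer, with purely illustrative numbers of my own choosing:

```python
# Two-agent illustration: a borrower owes the saver a fixed nominal
# repayment, agreed when nominal GDP was expected to stay on trend.

expected_ngdp = 100.0   # nominal GDP the debt contract was priced against
actual_ngdp = 90.0      # nominal GDP after a 10% negative nominal shock
debt_repayment = 20.0   # fixed nominal repayment owed to the saver

# The repayment as a share of the borrower's nominal income:
expected_share = debt_repayment / expected_ngdp   # 0.20
actual_share = debt_repayment / actual_ngdp       # ~0.222

print(f"expected burden: {expected_share:.1%} of nominal income")
print(f"actual burden:   {actual_share:.1%} of nominal income")
# The fixed nominal claim now eats a larger share of a smaller nominal
# income: real wealth has been transferred from borrower to saver.
```

The contract fixed a nominal sum, not a share of income, so the unexpected fall in nominal GDP changes the real terms of the deal after the fact.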

This might of course make the Calvinist ideologue happy, but what would the modern welfare theorist say?

The modern welfare theorist would of course apply a Pareto criterion to the situation and argue that only a monetary policy rule that ensures Pareto efficiency is a good monetary policy rule. An allocation is Pareto efficient if there is no other feasible allocation that makes at least one party better off without making anyone worse off. Hence, if a drop in nominal GDP leads to a transfer of wealth from one agent to another, then a monetary policy that allows this does not ensure Pareto efficiency and is therefore not an optimal monetary policy.

David Eagle has shown in a number of papers that only one monetary policy rule can ensure Pareto efficiency, and that is NGDP level targeting (see David’s guest posts here, here and here). The other policy rules, inflation targeting, price level targeting and NGDP growth targeting, are all Pareto inefficient in general. Price level targeting, however, does ensure Pareto efficiency in the special case where there are no supply shocks in the economy.
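A minimal sketch of why the rules differ under a supply shock; the numbers are my own illustration, not taken from David’s papers. With a fixed nominal debt, NGDP level targeting lets the price level rise when real output falls, so the debt stays a constant share of output, while price level targeting leaves the borrower to absorb the whole shock:

```python
D = 20.0                          # fixed nominal debt repayment
Y_normal, Y_shock = 100.0, 90.0   # real output before/after a supply shock
NGDP_target = 100.0               # NGDP level target (P * Y)
P_target = 1.0                    # price level target

def saver_real_share(price_level, real_output):
    """Saver's real claim (D deflated by P) as a share of real output."""
    return (D / price_level) / real_output

# NGDP level targeting: the price level rises so that P * Y stays on target.
P_ngdplt = NGDP_target / Y_shock                  # ~1.111
share_ngdplt = saver_real_share(P_ngdplt, Y_shock)

# Price level targeting: P is held fixed, so nominal income falls with output.
share_plt = saver_real_share(P_target, Y_shock)

share_normal = saver_real_share(P_target, Y_normal)
print(share_normal, share_ngdplt, share_plt)
# Under NGDP level targeting the saver's claim stays at 20% of output, as
# contracted; under price level targeting it rises to ~22%, shifting the
# entire burden of the supply shock onto the borrower.
```

In other words, NGDP level targeting makes nominal debt behave like an equity-style claim on nominal income, which is the proportional risk sharing the Pareto argument requires.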

This result is significantly more important than any result of New Keynesian analysis of monetary policy rules with a representative agent. Analysis based on the assumption of a representative agent completely fails to tell us anything about the present economic situation and the appropriate response to the crisis. Just think about whether a model with a “representative country” in the euro zone, or one with Greece (borrower) and Germany (saver), makes more sense.

It is time to finally acknowledge that Bentham’s words also apply to monetary policy rules and finally get rid of the representative agent.

14 Comments

Really great post, Lars. I hope this generates some reaction, as I get so frustrated every time I hear a ‘but what about the savers!’ type reaction to a monetary policy rule that actually reduces unfair and inefficient transfers of wealth.

bill woolsey

Even if there are debtors and creditors, a “one good” model makes it appear that nominal GDP targeting prevents desired risk sharing.

Suppose we have corn farmers who borrow to pay for seed. The creditors and the farmer are sharing the expected gains. The creditor is paid first and the farmer collects the residual. If there is a bad harvest (a supply shock) then of course the creditor is paid first and the farmer gets what is left. If this is the only good in the economy, this is the only supply shock possible. If monetary policy raises the price level because of the supply shock, then part of this loss is imposed on the creditor. But if they wanted to share this risk, the lender should have bought stock.

Now, suppose that this market is 10% of the economy and oil is 20%. The supply of oil falls, and oil prices rise. Do we use monetary policy to force down the price of corn so that the price level doesn’t rise? Should the farmer bear this risk for the lender? The point of the contract wasn’t for the farmer to shelter the lender from risk from the oil market. It was to reduce risk to the creditor from the corn market (including his particular farm).
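Bill’s corn-and-oil example can be put in purely illustrative numbers, assuming a fixed-weight price index and that the required deflation falls proportionally on the non-oil goods:

```python
# Stylized three-good economy: corn 10%, oil 20%, everything else 70%.
weights = {"corn": 0.10, "oil": 0.20, "other": 0.70}

def price_index(prices):
    """Fixed-weight price level."""
    return sum(weights[g] * prices[g] for g in weights)

base = {"corn": 1.0, "oil": 1.0, "other": 1.0}

# An oil supply shock pushes the oil price up 25%.
oil_price = 1.25

# Strict price level targeting: monetary policy must push the non-oil
# prices down by a common factor s until the index is back at 1.0:
#   0.10*s + 0.70*s + 0.20*1.25 = 1.0
s = (1.0 - weights["oil"] * oil_price) / (weights["corn"] + weights["other"])
corn_price = s * base["corn"]

print(f"non-oil deflation factor: {s:.4f}")
print(f"corn price under the target: {corn_price:.4f}")
assert abs(price_index({"corn": s, "oil": oil_price, "other": s}) - 1.0) < 1e-12
# Nothing happened in the corn market, yet the corn price falls 6.25%
# while the farmer's nominal debt is unchanged: exactly the risk the
# corn contract was never meant to cover.
```

Under NGDP targeting, by contrast, total nominal spending is left unchanged, so the corn price need not be forced down to offset the oil price rise.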

An evolved monetary system, like gold, does not protect the creditor in the corn market from a reduced return when oil gets more expensive.

Shifting to a managed monetary system with inflation targeting to make debt less subject to risk from supply shocks is likely a mistake.

Integral

I feel like I need to come to the (half-hearted) defense of NK theory here.

1. The NK loss function is analogous to the Pareto criterion in the representative-agent world. Indeed, the NK loss function is a second-order approximation of the utility loss to the representative agent. So there is no conflict between the loss function and the Pareto criterion, only a conflict between representative-agent models and heterogeneous-agent models.

2. I agree that there is no general loss function under heterogeneous-agent models. One gets into the social welfare problem and has to choose a social welfare function, et cetera. There the Pareto criterion is useful but extremely limited in the useful things it can say. Don’t get me wrong: where it can be used, Pareto dominance is powerful; but its application is limited. I do not have a ready alternative in mind.

3. Of course any study of debt dynamics requires heterogeneity and can only use the NK loss function under certain assumptions about the aggregation of social welfare, assumptions which may be dubious and indeed implicitly require either cardinal utility or interpersonal utility comparisons. I agree there as well.

amv

Nice post! Yet I side with Integral, especially on his first point. After all, the normative benchmark for NK predictions is an RBC-model prediction, which implements an optimal allocation (an optimal stochastic consumption profile). Minimizing the loss function means minimizing the deviations between the two, i.e., between RBC outcomes and the outcomes in the face of nominal rigidities.

You are right: the representative agent is a poor aggregation device. The Sonnenschein-Mantel-Debreu results tell us why (restrictions on individuals, in addition to those which establish existence in the most general AD setting, do not translate into restrictions on the economic system). BUT: Austrian economics is no alternative! Austrians just talk about the macroeconomic consequences of individual decision making; they don’t prove their propositions, they don’t provide a complete list of their assumptions, etc. Please let me know if there is any Austrian result out there which overcomes the aggregation problem!

Alex Salter

Yup. Once you start thinking of the market, not in comparative static terms, but as a process, you can start taking money seriously. It’s not just the numeraire which is used to compare relative prices in a general equilibrium model. It’s an emergent phenomenon which is, first and foremost, a medium of exchange and the method of economic calculation. Thinking about things this way makes it very clear the theory of monetary disequilibrium, analyzed from the perspective of an unfolding process with heterogeneous agents and institutions, is the way to go.

amv

1. The biggest problem with disequilibrium analysis: the role of expectations. How do you define them? In disequilibrium analysis Shackle’s claim that expectations are indeterminate bites. Anything goes.

2. Why does everybody think that the notion of ‘disequilibrium’ is more “real” than is the notion of ‘equilibrium’? Is the antagonism of a fictitious concept necessarily “out there”?

BTW, I started as an Austrian. The most important blow to my early “belief system” was Kirzner’s concept of ‘alertness’. It is so evidently a stopgap for proper theory, yet so widely accepted … it made me doubt; still does.

Alex Salter

1) Yes, expectations are tricky, and they probably take on many, many forms given agent heterogeneity. Some are rational, some others are adaptive, and many more can’t be accurately captured by any mathematical modeling technique. This doesn’t rule out us making “if, then” statements premised on specific types of expectations.

2) The real world is a world of disequilibrium. Equilibrium refers to a state of affairs where all plans are perfectly reconciled. Given the prevalence of error, plan change, etc., it’s much more meaningful to characterize the social order as something simply not definable by equilibrium terminology, or, if you accept some parts of the Walrasian framework, as tending towards an equilibrium which, due to both Kirznerian and Schumpeterian moments, is never reached. The only way equilibrium becomes meaningful is in the tautological sense, in which case it becomes uninteresting as an end-state. Tautologies/axioms/the self-evident ought to be premises, not conclusions.

3) Kirznerian alertness is an extension of the Misesian explanation of entrepreneurship. It is an omnipresent feature of human action and, when applied, characterizes the process by which, in a pseudo-Walrasian framework, the process of competition as a discovery procedure results in equilibrating tendencies. I don’t know what you mean by “stopgap for theory.” I hope you mean something more sophisticated than “not mathematical, therefore not theory.”

David Eagle

Integral and amv made some points that I want to comment about. Their points were:

Integral: “The NK loss function is analogous to the Pareto criterion in the representative-agent world. … there is no conflict between the loss function and the Pareto criterion, only a conflict between representative agent models and heterogenous agent models. … I agree that there is no general loss function under heterogenous agent models.”

amv: “…the representative agent is a poor aggregation device. The Sonnenschein-Mantel-Debreu results tell us why (restrictions on individuals, in addition to those which establish existence in the most general AD setting, do not translate into restrictions on the economic system).”

First, the loss function is not just used by the New Keynesians. I do not view Kydland and Prescott as New Keynesians. Their time inconsistency result was based on a loss function. When I took a class from Tom Sargent, he talked about the time inconsistency problem. At the time, I did not understand the time inconsistency problem from Tom’s lectures because I was thinking Pareto efficiency, not loss functions. Later when I read Kydland and Prescott, then I understood.

Second, I do remember in graduate school (a long, long time ago), that some utility functions are aggregatable, including constant relative risk aversion (CRRA) utility functions, including the logarithmic utility function. (If I am wrong, please correct me.) I use identical CRRA utility functions often in my work, because it is the only way to make sure that all individuals have average relative risk aversion. However, if you aggregate these utility functions, and you then maximize the aggregated utility function, that does not mean that the individual utility functions are maximized. Rather, if the individual utility functions are maximized, then the aggregate utility function must also be maximized. Another way to say this is that a necessary, but not sufficient condition for individual utility maximization is that the aggregate utility function be maximized. I think that is what is going on here; maximizing the aggregate utility function will not guarantee that individuals are sharing risk in a Pareto-efficient manner.

I do have a final question for someone: I think someone indicated that one can get the loss function involving inflation from either a representative-agent utility function or some aggregated utility function. I would like to see a reference on that. (Thanks)
P.S. for amv: I did look up “Sonnenschein-Mantel-Debreu” at http://en.wikipedia.org/wiki/Sonnenschein%E2%80%93Mantel%E2%80%93Debreu_theorem. Thanks for pointing me in this direction.