Macroeconomic Models

Robert Skidelsky doesn’t always get things completely right. For example, he often talks about “New Classical economics” as if that is the dominant paradigm today, but that term has a very specific meaning and refers to a class of models that is no longer popular in macroeconomics.

Let’s back up. The New Classical model had four important elements: the assumption of rational expectations, the natural rate hypothesis, the continuous market clearing that Skidelsky refers to below, and the assumption that agents have imperfect information. The imperfect information assumption was quite clever in that it allowed proponents of the model to explain correlations between money and income without conceding that systematic, predictable policy based upon something like a Taylor rule would have any effect at all (put another way, only unexpected changes in monetary policy matter; expected changes are fully neutralized by private sector responses to the policy).
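The policy-neutrality result in the parenthetical can be made concrete with a toy sketch. This is an illustration of the logic, not any specific published model; the function names, the parameter values, and the linear form are all my own simplifications.

```python
# Toy illustration of the New Classical policy-neutrality result:
# output deviates from its natural rate only in response to the
# *unexpected* component of money growth. All names and parameter
# values here are illustrative, not from any published model.

ALPHA = 0.5            # sensitivity of output to monetary surprises
NATURAL_OUTPUT = 100.0 # the natural-rate level of output

def expected_money_growth(inflation):
    """A systematic (and therefore fully anticipated) policy rule."""
    return 2.0 + 0.5 * inflation

def output(inflation, actual_money_growth):
    """Output responds only to the surprise component of money growth."""
    surprise = actual_money_growth - expected_money_growth(inflation)
    return NATURAL_OUTPUT + ALPHA * surprise

inflation = 3.0
anticipated = expected_money_growth(inflation)  # no surprise
surprise_growth = anticipated + 1.0             # a 1-point surprise

print(output(inflation, anticipated))      # 100.0 — systematic policy is neutral
print(output(inflation, surprise_growth))  # 100.5 — only the surprise moves output
```

The point of the sketch is that changing the systematic rule (say, raising its intercept) changes what agents expect one-for-one, so it never generates a surprise and never moves output away from the natural rate.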

The New Classical model did contribute to the movement in macroeconomics toward microeconomic foundations and to the use of rational agents within macro models. But the model itself could not simultaneously explain the duration and the magnitude of actual cycles, it had difficulty explaining some key correlations among macroeconomic variables, and it was hard to understand why a market for the absent information did not develop if the consequences of imperfect information were as large as the model implied. In addition, one of the model’s key results – that only unexpected changes in money can affect real variables – did not hold up when taken to the data (though there are still a few die-hards on this). So the profession moved on.

The New Classical model had replaced the old Keynesian model after it became widely believed that the old model’s shortcomings were partly responsible for the problems we had in the 1970s, and for the theoretical reasons that will be described in a moment. But while the New Classical economists were having their day in the sun, the Keynesians were quietly working behind the scenes to fix the problems that caused the old Keynesian model to go out of favor (or not so quietly in a few cases). The old Keynesian model had a poor model of expectations – if expectations were considered at all, they were usually modeled as a naive adaptive process – and in addition, it was not clear that the relationships embedded within the old Keynesian model were consistent with optimizing behavior on the part of households and firms. The New Keynesian model solved this by deriving macroeconomic relationships from microeconomic optimizing behavior and by adopting the rational expectations framework. And they made one other important change. In order for systematic monetary policy, such as following a Taylor rule, to affect real variables such as output and employment, there must be some type of friction that prevents the economy from immediately moving to its long-run equilibrium. The friction in the New Classical model is informational: agents optimize given the information that they have, but because the information is imperfect, the decisions they make take the economy away from its optimal long-run path.
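For readers unfamiliar with the Taylor rule mentioned above, here is a short sketch. The coefficients (0.5 on each gap, a 2 percent equilibrium real rate, and a 2 percent inflation target) are the ones from Taylor’s original 1993 paper; actual central bank behavior fits the rule only loosely.

```python
# A sketch of the Taylor (1993) rule: the nominal policy rate responds
# to deviations of inflation from target and of output from potential.
# Coefficients are Taylor's original values; real policy is messier.

def taylor_rule(inflation, output_gap,
                real_rate=2.0, inflation_target=2.0):
    """Nominal policy rate implied by the rule, in percent."""
    return (real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

print(taylor_rule(2.0, 0.0))  # 4.0 — at target and potential, rate = r* + inflation
print(taylor_rule(4.0, 1.0))  # 7.5 — the rule tightens when inflation runs hot
```

Because the rule is a known, systematic function of observable conditions, rational agents can anticipate it fully – which is exactly why, in the New Classical setup, following it has no real effects, while in the New Keynesian setup described next it does.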

In the New Keynesian model, the friction that gives monetary policy its power to affect output and employment is sluggish movement of prices and wages, generally modeled through something called the Calvo pricing rule. That rule is a source of controversy because there are questions about the extent to which it is consistent with micro-founded optimizing behavior, though others assert there are rationales for the Calvo structure – e.g., isomorphic relationships to other models – that are sufficient to overcome these concerns.
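The mechanics of Calvo pricing are simple enough to simulate: each period, every firm gets to reset its price with probability (1 − θ), independent of how long its current price has been stuck, so the expected duration of a price spell is 1/(1 − θ). A minimal sketch, with an illustrative θ = 0.75:

```python
import random

random.seed(42)

# A minimal sketch of the Calvo pricing friction: each period a firm's
# price stays fixed with probability THETA, so spell lengths are
# geometric with mean 1 / (1 - THETA). THETA here is illustrative.

THETA = 0.75  # probability a firm's price stays fixed this period

def average_price_spell(n_spells):
    """Simulated average number of periods a price survives before a reset."""
    total = 0
    for _ in range(n_spells):
        duration = 1
        while random.random() < THETA:  # price stays stuck another period
            duration += 1
        total += duration
    return total / n_spells

print(average_price_spell(100_000))  # close to the theoretical mean
print(1 / (1 - THETA))               # 4.0 — expected duration in periods
```

The memoryless reset probability is precisely what draws the micro-foundations criticism: a real optimizing firm whose price has drifted far from its optimum should be more eager to adjust, not equally likely to stay stuck.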

To me, the New Keynesian model is about as mainstream as they get, so I’m puzzled by the opening of this column, which claims modern macro completely embraces fully flexible prices. I think what he has in mind is some version of a Real Business Cycle model where prices are, in fact, assumed to be fully flexible, agents are rational, etc., so that actual output is always equal to potential (hence there’s no need for policy to do anything but maximize the growth of potential output, and hence the supply-side orientation of advocates of this approach). I’m sure we could have a lively debate about which model has more proponents, but to say that mainstream economics subscribes fully to the notion of continuous market clearing, when price rigidities are at the heart of a major class of modern models, seems to miss the mark. (The assertion that agents are assumed to have perfect information is equally puzzling: they optimize given what they know, but they are not assumed to know everything, and the efficient market hypothesis he discusses does not require this.)

I don’t disagree with the main message of the column – that preventing financial crashes through regulation is better than trying to cure them with policy – though I might quibble with particulars. But as someone who has been an advocate of the New Keynesian model, and quite resistant to pure Real Business Cycle approaches, I wanted to make clear that not all of us believe that assuming fully flexible prices and continuous market clearing is the proper way to model the economy. (A synthesis of the New Keynesian and Real Business Cycle models is what I have pushed in the past, though I’m now reconsidering the types of frictions that ought to be embedded in these models given recent events, and whether the mechanisms for generating bubbles in these structures are sufficient. I am also quite sympathetic to learning models as a replacement for the assumption of strict rationality):

Risky Risk Management, by Robert Skidelsky, Project Syndicate: Mainstream economics subscribes to the theory that markets “clear” continuously. The theory’s big idea is that if wages and prices are completely flexible, resources will be fully employed, so that any shock to the system will result in instantaneous adjustment of wages and prices to the new situation.

This system-wide responsiveness depends on economic agents having perfect information about the future, which is manifestly absurd. Nevertheless, mainstream economists believe that economic actors possess enough information to lend their theorizing a sufficient dose of reality.

The aspect of the theory that applies particularly to financial markets is called the “efficient market theory,” which should have been blown sky-high by last autumn’s financial breakdown. But I doubt that it has. Seventy years ago, John Maynard Keynes pointed out its fallacy. When shocks to the system occur, agents do not know what will happen next.

In the face of this uncertainty, they do not readjust their spending; instead, they refrain from spending until the mists clear, sending the economy into a tailspin. It is the shock, not the adjustments to it, that spreads throughout the system. The inescapable information deficit obstructs all those smoothly working adjustment mechanisms ― i.e., flexible wages and flexible interest rates ― posited by mainstream economic theory.

An economy hit by a shock does not maintain its buoyancy; rather, it becomes a leaky balloon. Hence Keynes gave governments two tasks: to pump up the economy with air when it starts to deflate, and to minimize the chances of serious shocks happening in the first place.

Today, that first lesson appears to have been learned… But, judging from recent proposals in the United States, the United Kingdom, and the European Union to reform the financial system, it is far from clear that the second lesson has been learned.

Admittedly, there are some good things in these proposals. For example, the U.S. Treasury suggests that originators of mortgages should retain a “material” financial interest in the loans they make, in contrast to the recent practice of securitizing them. This would, among other things, reduce the role of credit rating agencies. … The underlying problem, though, is that both regulators and bankers continue to rely on mathematical models that promise more than they can deliver for managing financial risks.

Although regulators now place their faith in “macro-prudential” models to manage “systemic” risk, rather than leaving financial institutions to manage their own risks, both sides lumber on in the untenable belief that all risk is measurable (and therefore controllable), ignoring Keynes’s crucial distinction between “risk” and “uncertainty.”

Salvation does not lie in better “risk management” by either regulators or banks, but, as Keynes believed, in taking adequate precautions against uncertainty.

As long as policies and institutions to do this were in place, Keynes argued, risk could be left to look after itself. Treasury reformers have shirked the challenge of working out the implications of this crucial insight.

Mark Thoma is a member of the Economics Department at the University of Oregon. He joined the UO faculty in 1987 and served as head of the Economics Department for five years. His research examines the effects that changes in monetary policy have on inflation, output, unemployment, interest rates and other macroeconomic variables with a focus on asymmetries in the response of these variables to policy changes, and on changes in the relationship between policy and the economy over time. He has also conducted research in other areas such as the relationship between the political party in power, and macroeconomic outcomes and using macroeconomic tools to predict transportation flows. He received his doctorate from Washington State University.