Saturday, September 12, 2009

Testimony of David Colander submitted to the Congress of the United States, House Science and Technology Committee, for the Hearing on “The Risks of Financial Modeling: VaR and the Economic Meltdown,” September 10, 2009

Mr. Chairman and members of the committee: I thank you for the opportunity to testify. My name is David Colander. I am the Christian A. Johnson Distinguished Professor of Economics at Middlebury College. I have written or edited over forty books, including a top-selling principles of economics textbook, and 150 articles on various aspects of economics. I was invited to speak because I was one of the authors of the Dahlem Report [revised and republished in Critical Review vol. 21, nos. 2-3, Spring-Summer 2009], in which we chided the economics profession for its failure to warn society about the impending financial crisis, and I have been asked to expand on some of the themes that we discussed in that report. (I attach that report as an appendix to this testimony.)

Introduction

One year ago, almost to the day, the U.S. economy had a financial heart attack, from which it is still recovering. That heart attack, like all heart attacks, was a shock, and it has caused much discussion about who is to blame, and how we can avoid such heart attacks in the future. In my view much of that discussion has been off point. To make an analogy to a physical heart attack, the U.S. had a heart attack because it is the equivalent of a 450-pound man with serious ailments too numerous to list, who is trying to live as if he were still a 20-year-old who can party 24-7. It doesn’t take a rocket economist to know that that will likely lead to trouble. The questions I address in my testimony are: Why didn’t rocket economists recognize that, and warn society about it? And: What changes can be made to see that it doesn’t happen in the future?

Some non-economists have blamed the financial heart attack on economists’ highly technical models. In my view the problem is not the models; the problem is the way economic models are used. All too often models are used in lieu of educated common sense, when in fact models should be used as an aid to educated common sense. When models replace common sense, they are a hindrance rather than a help.

Modeling the Economy as a Complex System

Using models within economics, or within any other social science, is especially treacherous. That’s because social science involves a higher degree of complexity than the natural sciences. The reason social science is so complex is that its basic units, which economists call agents, are strategic, whereas the basic units of the natural sciences are not. Economics can be thought of as physics with strategic atoms, which keep trying to foil any efforts to understand them and bring them under control. Strategic agents complicate modeling enormously; they make a perfect model impossible, since they increase the number of calculations one would have to make in order to solve the model beyond what the fastest computer one can hypothesize could process in a finite amount of time. Put simply, the formal study of complex systems is really, really hard. Inevitably, complex systems exhibit path dependence, nested systems, multiple-speed variables, sensitive dependence on initial conditions, and other non-linear dynamical properties. This means that at any moment in time, right when you thought you had a result, all hell can break loose. Formally studying complex systems requires rigorous training at the cutting edge of mathematics and statistics. It’s not for neophytes.
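Sensitive dependence on initial conditions, one of the properties listed above, can be seen in even the simplest non-linear system. The sketch below is a toy illustration of my own (the logistic map is a textbook example of chaos, not a model from the testimony): two trajectories that start a hair apart diverge to a macroscopic gap within a few dozen steps.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x_{t+1} = r * x_t * (1 - x_t) in its chaotic regime (r = 4).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # perturb the tenth decimal place

# The gap between the two runs grows roughly exponentially until it
# saturates at the size of the system itself.
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap within 50 steps: {max(gap):.3f}")
```

An initial difference of one part in ten billion becomes a difference of order one: this is why, in a complex system, "right when you thought you had a result, all hell can break loose."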

This recognition that the economy is complex is not a new discovery. Earlier economists, such as John Stuart Mill, recognized the economy’s complexity and were very modest in their claims about the usefulness of their models. They carefully presented their models as aids to a broader informed common sense. They built this modesty into their policy advice and told policy makers that the most we can expect from models is half-truths. To make sure that they did not claim too much for their scientific models, they divided the field of economics into two branches—one a scientific branch, which worked on formal models, and the other political economy, which was the branch of economics that addressed policy. Political economy was seen as an art which did not have the backing of science, but instead relied on the insights from models developed in the scientific branch supplemented by educated common sense to guide policy prescriptions.

In the early 1900s that two-part division broke down, and economists became a bit less modest in their claims for models, and more aggressive in their application of models directly to policy questions. The two branches were merged, and the result was a tragedy for both the science of economics and for the applied policy branch of economics.

It was a tragedy for the science of economics because it led economists away from developing a wide variety of models that would creatively explore the extraordinarily difficult questions that the complexity of the economy raised, questions for which new analytic and computational technology opened up new avenues of investigation.[1] Instead, the economics profession spent much of its time dotting i’s and crossing t’s on what was called a Walrasian general equilibrium model, which was more analytically tractable. Rather than viewing the supply/demand model and its macroeconomic counterpart, the Walrasian general equilibrium model, as interesting models relevant for a few limited phenomena, and at best a stepping stone toward a formal understanding of the economy, the profession enshrined both models and acted as if they explained everything. Complexities were assumed away not because it made sense to assume them away, but for tractability reasons. The result was a set of models that would not pass even a perfunctory common sense smell test, studied ad nauseam.

Initially macroeconomics stayed separate from this broader unitary approach, and relied on a set of rough-and-ready models that had little scientific foundation. But in the 1980s, macroeconomics and finance fell into this “single model” approach. As that happened, economists lost sight of the larger lesson that complexity conveys—that models of a complex system can be expected to continually break down. This adoption by macroeconomists of a single-model approach is one of the reasons the economics profession failed to warn society about the financial crisis, and why some parts of the profession assured society that such a crisis could not happen. Because they focused on that single model, economists simply did not study and plan for the breakdowns one would expect in a complex system; they had become so enamored of their model that they forgot to use it with common sense judgment.

Models and Macroeconomics

Let me be a bit more specific. The dominant model in macroeconomics is the dynamic stochastic general equilibrium (DSGE) model. This is a model that assumes a single, globally rational representative agent with complete knowledge who is maximizing over the infinite future. In this model, by definition, there can be no strategic coordination problem—the most likely cause of the recent crisis; such problems are simply assumed away. Yet this model has been the central focus of macroeconomists’ research for the last thirty years.

Had the DSGE model been seen as an aid to common sense, it could have been a useful model. When early versions of this model were first developed back in the early 1980s, they served the useful purpose of getting straight some intertemporal issues that earlier macroeconomic models had screwed up. But then, for a variety of sociological reasons that I don’t have time to go into here, a majority of macroeconomists started believing that the DSGE model was useful not just as an aid to our understanding, but as the model of the macroeconomy. That doesn’t say much for the common sense of rocket economists. As the DSGE model became dominant, important research was left undone on broader non-linear dynamic models of the economy that would have been more helpful in understanding how an economy might crash, and what government might do when faced with a crash.[2]

Similar developments occurred with efficient market finance models, which make assumptions similar to those of DSGE models. When efficient market models were first developed, they were useful; they led to technological advances in risk management and financial markets. But, as happened in macro, the users of these financial models forgot that models provide at best half-truths; they stopped using models with common sense and judgment. The modelers knew that there was uncertainty and risk in these markets that went far beyond the risk assumed in the models. Simplification is the nature of modeling. But simplification means that models cannot be used directly; they must be used with judgment and common sense, and with a knowledge of the limitations of use that the simplifications require. Unfortunately, the warning labels that should have been on the models in bold print—these models are based on assumptions that do not fit the real world, and thus the models should not be relied on too heavily—were not there. They should have been, which is why in the Dahlem Report we suggested that economic researchers who develop these models be subject to a code of ethics that requires them to warn society when economic models are being used for purposes for which they were not designed.

How did something so stupid happen in economics? It did not happen because economists are stupid; they are very bright. It happened because the incentives to advance in the academic profession lead researchers to dot the i’s and cross the t’s of existing models, rather than to explore a wide range of alternative models, or to focus their research on interpreting models and seeing that they are used in policy with common sense. Common sense does not advance one very far within the economics profession. The over-reliance on a single model used without judgment is a serious problem built into the institutional structure of the academia that produces economic researchers. That system trains show dogs, when what we need are hunting dogs.

The incorrect training starts in graduate school, where in their core courses students are primarily trained in analytic techniques useful for developing models, but not in how to use models creatively, or in how to use models with judgment to arrive at policy conclusions. For the most part policy issues are not even discussed in the entire core macroeconomics course. As students at a top graduate school said, “Monetary and fiscal policy are not abstract enough to be a question that would be answered in a macro course” and “We never talked about monetary or fiscal policy, although it might have been slipped in as a variable in one particular model.” (Colander, 2007, pg 169).

Suggestions

Let me conclude with a brief discussion of two suggestions, which relate to issues under the jurisdiction of this committee, that might decrease the probability of such events happening in the future.

* Include a wider range of peers in peer review. The first proposal might help add a common sense check on models. Such a check is needed because, currently, the nature of internal-to-the-subfield peer review allows for an almost incestuous mutual reinforcement of researchers’ views, with no common sense filter on those views. The proposal is to include a wider range of peers in the reviewing process for NSF grants in the social sciences. For example, physicists, mathematicians, statisticians, and even business and governmental representatives could serve, along with economists, on reviewing committees for economics proposals. Such a broader peer review process would likely encourage research on a much wider range of models and would also encourage more creative work.

* Increase the number of researchers trained to interpret models. The second proposal is to increase the number of researchers trained in interpreting models, rather than developing them, by providing research grants for that work. In a sense, what I am suggesting is an applied science division of the National Science Foundation’s social science component. This division would fund work on the usefulness of models, and would be responsible for adding the warning labels that should have been attached to the models.

This applied research would not be highly technical, and it would require a quite different set of skills from those that standard scientific research demands. It would require researchers who had an intricate consumer’s knowledge of theory, but not a producer’s knowledge. In addition it would require a knowledge of institutions, methodology, previous literature, and a sensibility about how the system works. These are all skills that are currently not taught in graduate economics programs, but they are the skills that underlie judgment and common sense. By providing NSF grants for this work, the NSF would encourage the development of a group of economists who specialized in interpreting models and applying them to the real world. The development of such a group would go a long way toward placing the necessary warning labels on models, and would make it less likely that fiascos like the financial crisis happen again.

Colander, David. 2007. The Making of an Economist Redux. Princeton, New Jersey: Princeton University Press.

Solow, Robert. 2007. “Reflections on the Survey” in Colander 2007.

NOTES

[1] Some approaches working outside this Walrasian general equilibrium framework that I see as promising include those using adaptive network analysis, agent-based modeling, random graph theory, ultrametrics, combinatorial stochastic processes, cointegrated vector autoregression, and the general study of non-linear dynamic models.

[2] Among well-known economists, Robert Solow stands out for having warned about the use of DSGE models for policy. (See Solow, in Colander, 2007, pg 235.) He called them “rhetorical swindles.” Other economists, such as Post Keynesians and economic methodologists, also warned about the use of these models. For a discussion of alternative approaches, see Colander, ed. (2007). So alternative approaches were being considered, and concern about the model was aired, but those voices were lost in the enthusiasm most of the macroeconomics community showed for these models.
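To make the agent-based modeling mentioned in note [1] concrete, here is a minimal sketch of my own (not a model from the testimony; the interaction rule and parameters are illustrative assumptions): agents hold binary buy/sell positions on a ring and imitate their neighbors, with occasional idiosyncratic choices. Even this crude interaction structure produces herding in aggregate sentiment that no representative-agent equilibrium model can exhibit.

```python
import random

# Toy agent-based market: each agent holds a binary position (+1 buy, -1 sell)
# and each period imitates the majority of its two ring neighbors, with a
# small probability of acting independently instead.

def step(positions, rng, noise=0.05):
    n = len(positions)
    new = []
    for i in range(n):
        if rng.random() < noise:
            new.append(rng.choice([-1, 1]))  # idiosyncratic choice
        else:
            nbhd = positions[i - 1] + positions[(i + 1) % n]
            # follow the neighbor majority; keep current position on a tie
            new.append(1 if nbhd > 0 else -1 if nbhd < 0 else positions[i])
    return new

rng = random.Random(0)  # fixed seed so the run is reproducible
positions = [rng.choice([-1, 1]) for _ in range(200)]
sentiment = []  # aggregate "market sentiment" per period
for _ in range(100):
    positions = step(positions, rng)
    sentiment.append(sum(positions) / len(positions))

print(f"sentiment range over 100 periods: {min(sentiment):.2f} to {max(sentiment):.2f}")
```

The point is not this particular rule but the method: heterogeneous interacting agents are simulated directly, and aggregate behavior is an emergent property rather than an equilibrium assumption.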

If there were a shred of evidence that "educated common sense" existed or could be produced, this might be more interesting. But I have never seen such things; it was precisely the "common-sense" types who said housing prices had never gone down before, etc. It was the "common-sense" people who came up with the monetary and fiscal policies that exacerbated the Great Depression.

More and more, I've come to realize that ideas like "common sense" and "wisdom" (often opposed rhetorically to "cleverness" or "technique") are applied ex post to whomever turns out to have been right but have almost no ex ante usefulness.

Fantastic analysis; I'm really glad this was part of the testimony before Congress. For those who missed it, also part of the testimony was Nassim Taleb (of Black Swan fame), as seen here. I see this testimony as the long-overdue coming of age of complexity economics.

One of the main lessons of complexity economics that I suspect won't be learned this time around by policy makers is that the very act of trusting in incentives and regulation -- no matter how complete and "perfect" they are -- creates the conditions for the boom-bust dynamic to repeat. The particulars are always different, and the more perfect the incentive/regulation scheme is (and the more trust we have in it), the bigger the next crisis will be. Noted complexity researcher Alfred Hubler draws a strong analogy to forest fire dynamics.

For this reason, I think Colander's suggestions regarding peer review and model interpretation are wise. I also agree with the promising approaches outlined in the Notes, especially agent-based modeling combined with adaptive network analysis. Closed form analyses of any sort are ipso facto a bad idea: they lead to the misplaced trust referred to above. Moreover, they create the false dichotomy between the endogenous and the exogenous, which is at the heart of why we are always solving the last crisis (instead of avoiding the future one).

The one thing we know for certain is that the next black swan is coming and by definition we can't predict when and what it will look like. But we can stop acting so surprised when it shows up, and we can make the system more resilient to its effects by accepting this basic truth and building in some heterogeneity both in terms of our ability to perceive the black swan and our ability to respond to it.

I suspect what David Colander actually means by "common sense" is not common sense at all, but some notion of informed perception.

The reality is that it is very easy for abstract thought to go wrong. Hence the importance of mathematics for ensuring rigour in reasoning, use of evidence to ensure contact with the real world, etc. "Common sense" is "gut abstraction", so a dubious thing to lean on.

I don't agree that black swans are completely unpredictable. If one keeps the same rule set for the system, they are. But statistical prediction from complex combinatory models can suggest changes in the rule set that would make them less probable in the first place. Kauffman's analysis of NK Boolean networks in the evolution of complex adaptive systems demonstrates just this parameter effect. If you change the rule set, the probability of such things as a complexity catastrophe can change. That is where our focus should be with regard to regulation of financial systems: designing the proper rule set. Of course, constant vigilance and modeling based on updated data would be necessary to keep up with innovations in market systems, but that isn't a problem. Non-linear models haven't yet had the chance to prove their worth. They should certainly be much better than equilibrium models, especially for creating robust rule sets.
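The rule-set effect in Kauffman-style networks can be sketched in a few lines. The following is a minimal random Boolean network of my own construction (the sizes, seed, and helper names are illustrative, not from Kauffman's work): N nodes, each driven by K randomly chosen inputs through a random truth table. The wiring parameter K is the "rule set" knob; varying it moves the dynamics between ordered and chaotic regimes.

```python
import random

# Minimal NK-style random Boolean network: n nodes, each updated from k
# random inputs via a random Boolean function (a truth table of 2^k bits).

def make_network(n, k, rng):
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]  # pack input bits into a table index
        new.append(tables[i][idx])
    return tuple(new)

def cycle_length(n, k, seed=0, max_steps=5000):
    """Iterate until a state repeats; return the attractor cycle length.

    The dynamics are deterministic over 2^n states, so with max_steps > 2^n
    a repeat is guaranteed.
    """
    rng = random.Random(seed)
    inputs, tables = make_network(n, k, rng)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, inputs, tables)
    return None

print("K=2 cycle length:", cycle_length(12, 2))
print("K=8 cycle length:", cycle_length(12, 8))
```

Kauffman's result is statistical: over many random networks, low K tends to give short, stable attractors while high K tends toward long, chaotic ones, which is exactly the sense in which changing the rule set changes the probability of a complexity catastrophe.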