Thursday, November 16, 2006

I felt like a piece of meat, that's why.

This comment at Unfogged set off the “why I hate modeling” rant I promised you months ago, back when I hated lawyers or something.

Regarding economic modeling. Any model is a simplification of reality which hopefully captures the essential features of whatever you are trying to study. So there are two issues. Are you correctly describing what will happen given the simplified situation? And have you included enough? I think the main problem with economic modeling is that it can be difficult to include enough to reasonably represent reality and still have a simple enough model that you can figure out how it will behave. On further thought there is another big problem with economic modeling, that it is often hard to test the results. This makes it hard to devise reasonable models and hard to be confident in their predictions.

No no no no no! Those aren’t the main problems with any complicated modeling. The main problem with modeling is that any complex model encapsulates dozens or hundreds of the modelers’ assumptions, none of which are necessarily wrong, but many of which could be interpreted differently by reasonable people at the next meeting. That is fine, and it has to happen for the model to be built. But it is impossible to document all of those assumptions, which happen over years, and permeate every step of the model. Those assumptions are in the data collection and cleaning, in the equations that run the engine, in the interface between models, in the output presentation. Good modelers flag the big assumptions, but you can’t get them all and they aren’t irrelevant in the aggregate.

Or maybe they are irrelevant, but you can’t tell! Because complicated models aren’t transparent. They can’t be. You can ask for the code, but only another modeling wizard can read it, if she can, because after six years of graduate students, well, that code isn’t so clean anymore. You could ask for the hydrologic record the data is based on, but that has also been groomed within an inch of its life, and all the insider modelers know that of course they had to decide that we can’t really include 1927, because that was a ridiculous year that makes all the models go haywire. All the modeling wizards DO know that some records have asterisks, maybe for good reasons, but those are still agreements made by knowledgeable insiders.

So far, models hide too many assumptions to be neutral, and they can only be understood by a priest class. But on top of that, the outputs can’t even be useful. I saw it! How, when the model predicted outrageous things, everyone just agreed that the model was broken. It had to be, because the things it said were hugely out of scale. But that means, by default, that the only acceptable model output was output that agreed with common sense and conventional wisdom. But I already knew what common sense said! Fuck! Pay me all that money, and I can tell you the range that model answers are allowed to fall into! Shit, I can get that done in the next hour or so.

As far as I can tell, complex modeling is good for two things. First, if you get results that support your agenda, you can wave the model around and people will believe it more than they believed you. Second, it trains graduate school modelers to think very hard about a complicated topic and learn it in detail. That is good, because they will likely grow up to be managers who have to deal with hard policy questions.

I am not altogether opposed to modeling, because I think it is mostly white collar welfare and it supports sexy graduate students. I think it is not worse than many other ways to discuss hard questions. But modeling is as crude and biased and well-intentioned and fallible as, for example, crafting legislation. It should only be used with a constant awareness of its limitations. Modelers are often good about that; people using models to discuss policy are often not.

(To his credit, my professor who had me working on a large scale engineering-economics model of California water was also a skeptic. He was the first to tell me that “all models are wrong, but some are useful.” Had I stuck around, he would have supported a paper I wanted to write testing my idea that model output isn’t better than conventional wisdom. I wanted to interview a dozen or so of the big players, and ask them to quantify their gut feel for the same things our model was predicting (what agencies would sell/buy what water at what price) and see how their aggregate opinion compared to the model output. I thought of that before Wisdom of Crowds, I’ll have you know.)
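(The comparison I had in mind is simple enough to sketch. All the numbers and names below are invented for illustration, stand-ins for real interviews and a real model run:

```python
import statistics

def crowd_vs_model(expert_guesses, model_prediction, observed):
    """Compare the experts' aggregated gut feel and the model's
    prediction against what actually happened; return both errors."""
    crowd_estimate = statistics.median(expert_guesses)
    return abs(crowd_estimate - observed), abs(model_prediction - observed)

# A dozen hypothetical insiders guess the price ($/acre-foot) at which
# a district will sell water; the model says 175; the realized price is 144.
guesses = [120, 125, 130, 135, 138, 140, 142, 145, 148, 150, 155, 160]
crowd_err, model_err = crowd_vs_model(guesses, model_prediction=175, observed=144)
```

If the crowd's error came out routinely no worse than the model's, that would have been the whole paper.)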

"Modelers are often good about that; people using models to discuss policy are often not."

So, do you think it's reasonable to expect journalists, Congressmen, and whoever else to understand all that? And if so, where's the disconnect? Are the wonks not wonky enough, are the modellers not communicating well enough with the public, something else?

Justin: Then my guess is that you are working in a fairly mechanistic system, with limited heteroskedasticity (can I tell you how I love throwing that word around?). Yeah?

PTM: Because you want another rant, on how we don't address complex policy issues well, and what governance structures would improve that? Are you sure you don't want more about canal control? Or dating? Surely anything would be better.

I think it really depends on what you use the model for. I'd agree that models are not very good for prediction, but I think they can be good for testing hypotheses. If you think you understand how something behaves, go ahead and build a quantitative model based on your intuition, and see what happens. Of course it's possible to keep on tweaking the model until it gives the right answer, but if you have to do that, it indicates you're on the wrong track. At any rate, it's a good way of discovering the limits of your understanding.

I don't necessarily disagree, though I do think that the frontier of contexts that can be usefully modeled is advancing. What's been happening with your water-price problem since you left? Has some clever, sexy grad student come along and cleaned things up, or is it still just as bad?

"after six years of graduate students, well, that code isn’t so clean anymore" sounds like it's not necessarily a fundamental shortcoming of the approach. It's worth noting that there's a lot of interesting work going on in the area of making it easier to write clean & safe code, though it will be a long time before some of those techniques filter out of the research groups and onto engineers' desks. You weren't using Excel, were you?

Also, good interface design ought to go a long way toward helping people outside the priesthood understand models. If you can see graphically how twiddling a given knob affects supply which affects price or whatever, then you can still get something out of a model even if you couldn't begin to grok the math or the assumptions.

If you have even five or six different variables that are all interacting with each other, that's a lot of relationships to keep straight. Models force you to specify all those relationships and track them completely through the system. Concerned about how the housing slowdown will affect consumer spending? Well, it will lead to a drop in construction employment, which will reduce employment by so much (until people find new jobs, which will take so long), which will reduce personal income by so much, some fraction of which will be reduced consumer spending, which will in turn lose you some jobs in retail...etc. Hard to work all that out by hand.
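That chain of multipliers can be written as a toy calculation. Every coefficient here is made up for illustration; a real model would estimate each one from data:

```python
def spending_impact(construction_jobs_lost,
                    income_per_job=50_000,       # assumed average annual income
                    spend_fraction=0.7,          # assumed share of income spent
                    retail_jobs_per_dollar=1 / 80_000):  # assumed retail jobs per $ of sales
    """Trace a first-round housing slowdown through income,
    spending, and retail employment."""
    lost_income = construction_jobs_lost * income_per_job
    lost_spending = lost_income * spend_fraction
    lost_retail_jobs = lost_spending * retail_jobs_per_dollar
    return lost_spending, lost_retail_jobs

lost_spending, lost_retail_jobs = spending_impact(10_000)
```

Even this cartoon version buries three assumptions; a real model has hundreds, feeding back on each other.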

But in social science and policy areas where we really don't know much, modelling is not science. It's a way of systematically working out all the conclusions from your various assumptions. But the assumptions are so ungrounded and arbitrary and untestable that they are driving the conclusions.

But what about something like climate models? A fair amount of the scientific consensus on global warming comes from those models. They have generated testable predictions that have later proven true. As I understand it, a lot of the model assumptions are grounded in well understood physics. Yet everyone admits that some of the macro relationships involved are very imperfectly understood.

See, I haven't yet been convinced that the output of that model is any more reliable than the averaged gut feeling of several oldtimers in the field.

I don't know about climate change models firsthand. I believe they are tremendously complicated and a lot of work by very impressive people has gone into them. Big as they are, I bet there's an awful lot of jury-rigged stuff in there. Also, their predictions are within a fairly large range, aren't they?

Honestly, I believe climate change predictions because I believe constructing the models forced smart people to think it through very carefully. Essentially, I trust the effects of creating the models on the researchers, not the model itself.

I can't deny I love that word, but I meant that a lack of heteroskedasticity would explain Justin's models' accuracy.

"Essentially, I trust the effects of creating the models on the researchers, not the model itself."

That's kind of what I was saying: models are a tool for informing researchers. The few good practical modelers I have known personally take their model results as one input into their decision processes.

Essentially, I think your gut feelings are more accurate when you do a model and consult its findings.

"So you're saying the models are all in our [the brainiac's] heads; the numbers and equations are just notes?"

That goes too far IMO, letting the model do its thing will take your intuition farther than you could have on your own. Because you simply can't keep track of all the implications of all the relationships you believe are there. But the model can.

Even if a model comes up with a result an expert feels in their gut must be wrong, they can learn a lot by tracking down exactly why it came up with the strange result. That forces the modeler to question their own assumptions in an interesting way.

Yoyo: No, the models are there, in the computer, and other people can run them. Divorced from the modeler's judgement and interpretation, I trust them almost not at all. With the modeler right next to me, to explain every last compromise, I'll still have some reservations.

Marcus: Essentially, I think your gut feelings are more accurate when you do a model and consult its findings.

Naw, not sure of that, either. I mean, that's what we have to do for stuff like climate change, which we can't live through in advance. But for stuff like which water district will sell water at what price first, I don't know that people with years in the field would have to write a model. I think they can just guess fairly accurately. Certainly with as much accuracy as I trusted our model (which is a prestigious one. I'm not doubting all modeling because I happened to be involved in a bad process.).

I'm not sure about the context in which the word 'models' is used, but from an economist's perspective it isn't necessarily a large-scale, complex system he/she is describing, so it may differ from an engineer's idea of what modeling is. A model can be a small-scale theoretical construct.

Of course it's elitist and only understandable by a 'priestly class'. Same applies for much of modern education: who can read a technical philosophical article nowadays, for example?

The assumptions are made explicit in an economic model, so the person writing this is correct, in my opinion. They are not clarified by data; they are stylized representations of the data. And I'm not sure that a model, in the economic sense, rests or falls according to its ability to predict reality. In some sense, if it is internally consistent and tells a story, that's what works. It is not to say that reality is necessarily like that, but *if* it were, what would happen if some things changed, holding others constant.

That economics has borrowed an engineering approach (Edgeworth, Cournot?) is one of the reasons it has moved away from its traditional approach that saw it as a part of 'ethics'.

If policy makers can't understand the models, the policy makers need a better education. I would expect every educated person to have some familiarity with political thought that is a mere 150 years old (Marxism). Is it unreasonable to expect some familiarity with mathematics that is 300 years old (differential calculus)?

Anyway, there already is a good method for determining the accuracy of models, and that method is the marketplace. Companies are built on models of the world: how consumers behave, how certain systems work, and those that get their models right thrive. Think of Google for example. They had a model: links tell you more than text about which web pages are useful (the eigenvectors of the transition matrix. w00t!) They were right and they made a big bundle of money. Those search engines whose model was wrong are now out of business.
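That link-based model is compact enough to sketch: PageRank treats the ranking as the leading eigenvector of a link transition matrix, found by power iteration. The four-page link graph below is invented for illustration, and real PageRank has many refinements this omits:

```python
def pagerank(links, damping=0.85, iterations=100):
    """links maps each page to the pages it links to.
    Returns the stationary rank of each page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        # Start each page with the "random jump" share, then add
        # the damped rank flowing in along links.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Toy web: every other page links to "c", so "c" should come out on top.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
```

The point is that the model itself is a page of arithmetic; the money was in having the right assumption about what signal mattered.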

brillo: Economics has no value if it does not model reality. It lacks the sort of solid theoretical background that has led to successful abstract model building in, say, physics (and many physicists question whether some of that is physics anymore, too). Even worse, people outside the field are happy to use such models as if they were 'scientifically proven facts' to justify policy decisions. So where does that leave us?

Megan, you claim that models are not transparent, in neither the details of their implementation nor their underlying assumptions, and that therefore they shouldn't be used to guide policy. But what is the best alternative to model-building? Some ideal of informed, systematic discourse? I would argue that the problems with models are tenfold problems in the alternative(s).

Obviously, complex models are not ever meant to be 100% accurate and precise representations of complex truths. But the real benefit of model building, I believe, is in the conversations you have to have to build them, and in the elucidation of the decision makers' values and beliefs.

Well, mostly I just like rants. But I'm also genuinely curious. I see a couple dynamics going on. First is standard ivory-tower stuff that seems to happen in most noncommercial academic fields. Second is a bunch of people doing their best in a bad system. Public political folks have lots of fields to comment on, and can't really be expert on any of them. Academic economists aren't paid to talk to reporters. Voters are busy living their lives. And so on. And as moderately* as I've thought about this, I haven't had many useful thoughts.

Quirkybook: Obviously, complex models are not ever meant to be 100% accurate and precise representations of complex truths. But the real benefit of model building, I believe, is in the conversations you have to have to build them, and in the elucidation of the decision makers' values and beliefs.

We mostly agree. Those conversations instruct the modelers, who may become influential. The elucidation of modelers' thoughts is a good start, but it is very difficult to make all those assumptions so clear that the model can be safely transferred to other users. I don't think models often characterize decision-makers' thoughts, and I don't think decision-makers understand how much they are taking on faith. They think they are receiving the wisdom of SCIENCE.

(And, models are probably not more flawed than other ways to solve complex problems.)

Interesting post. I started as an engineer and created some complex mathematical models (air pollution levels, detecting radio signals in noisy environments, etc). Thought they were nifty. Then I became a lawyer and worked at a regulatory agency. The agency almost never relied on models to justify policy results when making rules. Instead, rules were based on the policy leanings of the politically appointed decision makers, informed by paper comments and legal arguments of interested parties. Kind of the "conventional wisdom" approach you describe. It seems to work out OK, although success is hard to measure; in some ways the issues these regulators face are so complex that "traditional" economic/mathematical modeling may not be that useful.

A good economic model can be either very simple with very explicit assumptions or less simple with more assumptions and specific to a particular small question. Either way the assumptions are specific and explicit and straightforward to explain, share and compare against reality.

2:55 Anonymous: Honestly, I think that for the big models it just isn't possible to make all the assumptions explicit in a way the public, even the policy elite, can understand. Stuff like changing the boundaries of groundwater cells (not the same as actual groundwater basins, but their mathematical characterization) would change the model, and there were hundreds of decisions like this. I believe those decisions were made in good faith, but I also think they can't be cataloged and explained in a public meeting. It took me a couple months to understand most of what the model I was working on did.

I'm going to stand by my belief that complex models create a priest class, who interpret the oracle based on things the public doesn't see. Not necessarily bad, but not transparent.

"S": Where does that leave us? Perhaps economics should admit it isn't a science and, as in life, the best moments cannot be mapped.

As for reality: I think economics is more concerned with observed behaviour than with essences or "nature". For example, it is only important that people act in a way that is consistent with it, "as if" they are calculating costs and benefits. Whether they do so in "reality" is, perhaps, not the issue. The final victory of nominalism!

Secondly, how realistic the modeling of reality is hasn't stopped economics from being a dominant paradigm. To say that individuals are self-interested people who maximise utility, and that *only*, is to be quite unrealistic.

Thirdly, I do not know anything about physics or (complex) engineering models, but as far as economic models go (not the macro-economy ones), the original post seems spot on.

Whether the public understands them is a separate issue, perhaps. To make an assumption explicit is not the same as making it publicly accessible.

It is probably worth separating the problems that economics has internally from the problems that it has externally. That is, the public face of economics is largely determined by people outside the discipline. Unfair, perhaps, but inasmuch as the economists don't fight it, not entirely their fault.

Lacking a really solid theoretical underpinning, much economic argument is pretty much rhetoric, but rhetoric that has been dressed up as science. The layman's respect for science (as the layman understands it) helps this work as a rhetorical device. But this is either veering close to cargo-culting or a pretty slimy way to treat the public. The public, as you note, cannot be expected to know all the details; they can, however, demand basic honesty about what is being done.

In as much as you can say that economics is approached scientifically, it is much closer to biology, say, than physics. Coming at it that way might be more fruitful.

By the way, you are quite right that economics *should* be more concerned with observed behaviour. It's not so clear that this is what is being done, though.

Anonymous, great points which I fully take on board-especially about the dominance of science (and what Sen calls the 'engineering approach').

I'm not sure, though, how one would explain the enormous success of the discipline (and other social sciences model themselves on economics now: sociology, political science, ...) if it is completely devoid of theoretical underpinnings or if these are unrealistic (which I think they are).

Perhaps we have to look, as you say, at the public face. The thing is, as far as I understand, that economics is deeply intertwined with the liberal tradition and what Gellner would call the individualistic-atomistic-universalist way of thinking.

The "ideal" of bourgeois man and the idea of 'negative liberty' that are behind the notion of markets must be persuasive to large numbers of people (the public face)[notwithstanding that markets have been instituted by political actors...Polanyi].

Economics is not self-reflective like other subjects because that would mean the undermining of the actual economic and political systems that are in place.

Yes, I think Megan's comments are pretty sound when it comes to macro models, and I said as much earlier on. Her point, though, about specialisation and the inability of the layman to understand the models is a profound one. E. Said makes this point in his wonderful Reith lectures, 'Representations of the Intellectual'.

This debate strikes me as mostly semantic. Vast areas of economics don't use "models" anything like the sort that Megan describes, which presumably predominate in engineering and perhaps in macro, which is not my area.

In the sort of economic modeling that I consider "typical," we write down small models based axiomatically on rational choice, with extremely simple and transparent assumptions. The validity of these assumptions is admittedly questionable, but they are at least transparent. Generally these models are solvable in closed form; we then go the "maximize and equilibrium" route and, in the best case, try to generate comparative statics from underlying variables that are empirically testable and nonobvious.

The quoted comment in the post accurately describes some of the difficulties with *this* sort of modeling, and Megan's criticism is pretty much irrelevant to it.

billo (2:01) has it mostly right, but I take issue with his statement that "I'm not sure if a model-in the economic sense- rests or falls according to its ability to predict reality." I used to be more sympathetic to this view (i.e. economic models "tell a story"), but now I think it is mostly vacuous. The assumptions underlying the workhorse economic models are clearly "wrong" in a strict sense of the word (and get more "wrong" every day with the growth of behavioral economics). Given that, our only hope of recovering knowledge of economic processes is to generate testable predictions and bring them to the data. To the extent that this isn't possible (not just from a practical perspective, but from a "suppose I could get any data I wanted" perspective), you've hardly got much of a theory, or a science.

This is a good discussion. I agree with Anon 6:58 and Alex, but let me go further.

1) The Wisdom of Crowds was hardly a discovery: the idea that markets effectively aggregate information is VERY old. Its academic lineage is often traced to Hayek's paper [The Use of Knowledge in Society, American Economic Review, 1945, 35, 519-530]

2) A useful fact to remember is that many engineering and physics models do NOT have actors within them who can change their behavior as a result of understanding the model. One famous instance was when the so-called Phillips curve predicted that unemployment was inversely related to inflation. As soon as policy-makers thought they could let inflation rise to reduce unemployment, they were proven wrong by the adjusted "rational expectations" of people. Misery and failure all round (the 1970s).

Economists (and others?) call this a problem of endogeneity, i.e., that something in the model cannot be held constant because it depends on another part of the model.

3) @Noel: The result of a bad model in the policy world can be a bad policy that sticks around to cause harm. In the marketplace, competition will destroy a bad model [as you point out]

4) Despite what Alex says, many social scientists still lean too heavily on models. They often miss (almost by definition) feedback loops, nonlinear effects and cross-effects.

5) @Billo: Many models _are_ consistent, but are still used to prove a POV or belief. It's very difficult for others to argue with such a model, since different assumptions (and results) can be dismissed because they "contradict the literature".

6) How to predict a blockbuster? Try the Hollywood Stock exchange. (www.hsx.com)