Milton Friedman’s Thermostat

Via Matthew Yglesias on Twitter, this insight from the Worthwhile Canadian Initiative economics blog looks plausibly relevant for political scientists.

Milton Friedman’s thermostat is an idea that has very broad application, and has nothing in particular to do with Monetarism or even macroeconomics. Or even economics. … Everybody knows that if you press down on the gas pedal the car goes faster, other things equal, right? And everybody knows that if a car is going uphill the car goes slower, other things equal, right? But suppose you were someone who didn’t know those two things. And you were a passenger in a car watching the driver trying to keep a constant speed on a hilly road. You would see the gas pedal going up and down. You would see the car going downhill and uphill. But if the driver were skilled, and the car powerful enough, you would see the speed stay constant. So, if you were simply looking at this particular “data generating process”, you could easily conclude: “Look! The position of the gas pedal has no effect on the speed!”; and “Look! Whether the car is going uphill or downhill has no effect on the speed!”; and “All you guys who think that gas pedals and hills affect speed are wrong!”

And no, you can not get around this problem by doing a multivariate regression of speed on gas pedal and hill. That’s because gas pedal and hill will be perfectly colinear. And no, you do not get around this problem simply by observing an unskilled driver who is unable to keep the speed perfectly constant. That’s because what you are really estimating is the driver’s forecast errors of the relationship between speed, gas, and hill, and not the true structural relationship between speed, gas, and hill. And it really bugs me that people who know a lot more econometrics than I do think that you can get around the problem this way, when you can’t. And it bugs me even more that econometricians spend their time doing loads of really fancy stuff that I can’t understand when so many of them don’t seem to understand Milton Friedman’s thermostat. Which they really need to understand.

If the driver is doing his job right, and correctly adjusting the gas pedal to the hills, you should find zero correlation between gas pedal and speed, and zero correlation between hills and speed. Any fluctuations in speed should be uncorrelated with anything the driver can see. They are the driver’s forecast errors, because he can’t see gusts of headwinds coming. And if you do find a correlation between gas pedal and speed, that correlation could go either way. A driver who over-estimates the power of his engine, or who under-estimates the effects of hills, will create a correlation between gas pedal and speed with the “wrong” sign. He presses the gas pedal down going uphill, but not enough, and the speed drops.
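The quoted claims are easy to check in a quick simulation. This is a minimal numpy sketch with illustrative numbers of my own (the structural relation speed = 3·gas − 2·hill + wind and the compensation rule are assumptions, not from the post): a skilled driver who exactly offsets hills leaves speed equal to the unforeseeable wind, so the pedal and the hills are uncorrelated with speed, and a regression on both cannot even be run because they are perfectly collinear.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed structural model (illustrative coefficients, not from the post):
#   speed = 3*gas - 2*hill + wind
hill = rng.normal(size=n)
wind = rng.normal(size=n)           # headwind gusts the driver cannot foresee
gas = (2 / 3) * hill                # skilled driver: exact compensation for hills
speed = 3 * gas - 2 * hill + wind   # hill terms cancel, so speed = wind

# Both correlations come out close to zero, as the quote says.
print(np.corrcoef(gas, speed)[0, 1])
print(np.corrcoef(hill, speed)[0, 1])

# And gas is an exact multiple of hill, so the design matrix for a
# regression of speed on (gas, hill) is rank-deficient: OLS cannot
# separate the two effects.
X = np.column_stack([np.ones(n), gas, hill])
print(np.linalg.matrix_rank(X))     # 2 columns' worth of information, not 3
```

Under this (assumed) setup, the pedal genuinely causes speed, yet the observational record shows no trace of it — exactly the thermostat point.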

One possible place where this idea applies is to political campaigns. Journalists depict political scientists as saying that campaigns don’t matter. But as with the car climbing hills, campaigns do matter – campaign operatives respond to changing conditions, just as the driver puts her foot on the accelerator when the hill gets steeper. If they didn’t do this, the candidate would be in trouble. But much (not all) of the time, their actions will be a wash. One of the interesting implications of the blogpost is that if one observes only an actor’s actions, and not the conditions under which she is taking them, one is likely to get confused, especially where the actor can respond to changing conditions only partly adequately.

Watch what happens on a really steep uphill bit of road. Watch what happens when the driver puts the pedal to the metal, and holds it there. Does the car slow down? If so, ironically, that confirms the theory that pressing down on the gas pedal causes the car to speed up! Because it means the driver knows he needs to press it down further to prevent the speed dropping, but can’t. It’s the exception that proves the rule.

Here, for example, a naive observer might take a particular campaign action that is associated with the candidate’s defeat as evidence of incompetence by the campaign. It’s not – it may be correlated with defeat only because it is the best thing the campaign can do under particularly difficult external circumstances.

8 Responses to Milton Friedman’s Thermostat

Regarding your last paragraph: this may be true in some cases, but I don’t think that choosing Palin as VP nominee was the best thing that the McCain campaign could do under particularly difficult external circumstances. I don’t think it was a good decision at all.

To put it another way, people do make mistakes. I’m sure you realize this but I worry that too quick a reading of your post will give people the impression that optimal decisions are the norm.

Regarding campaigns in general: yes, I believe that campaigns matter a lot but when they are equally funded they tend to cancel each other out.

Find a total idiot driver, who doesn’t understand the relation between gas pedals and speed, and who makes random jabs at the gas pedal that you know for certain are uncorrelated to hills or anything else that might affect the car’s speed, and then do a multivariate regression of speed on gas and hills. But you had better be damned sure you know those jabs at the gas pedal really are random, and uncorrelated with hills and stuff. Which means this can only work if you are certain that you know more about what is and is not a hill than the driver does. Or you are certain he’s pressing the gas pedal according to the music playing on the radio. Or something that definitely isn’t a hill. Are you really really sure your instrument isn’t a hill, or correlated with hills? And if so, why doesn’t the driver know this, and why does he jab at the gas pedal in time with that instrument? You had better have a very good answer to those questions. And no, Granger-Sims causality does not answer those questions, or even try to.
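The “idiot driver” idea can also be simulated. In this sketch (same assumed structure as above, speed = 3·gas − 2·hill + wind, which is my own illustration rather than anything in the comment), the driver’s jabs at the pedal are pure noise, unrelated to hills, so ordinary least squares can finally separate the two effects and recover both structural coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Same assumed structure: speed = 3*gas - 2*hill + wind
hill = rng.normal(size=n)
wind = 0.1 * rng.normal(size=n)
gas = rng.normal(size=n)            # "idiot driver": jabs uncorrelated with hills
speed = 3 * gas - 2 * hill + wind

# With gas now exogenous, OLS identifies both effects.
X = np.column_stack([np.ones(n), gas, hill])
coef, *_ = np.linalg.lstsq(X, speed, rcond=None)
print(coef)   # intercept near 0, gas near 3, hill near -2
```

The whole argument in the comment is about whether you can be sure the jabs really are random — the simulation works only because randomness is imposed by construction, which is exactly the assumption an instrument must defend.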

This is a great analogy. On one level, it highlights threats to causal inference when analyzing observational data (if only we randomly assigned how far people press the gas pedal–aside from all the accidents that would cause!). On another level, it points to the importance of theory and measurement. Pressing the gas pedal down is a distal cause for making the car go. It’s the engine that actually makes the car go. What if we instead measured the power output of the engine? We’d certainly see an interaction between the steepness of the hill and the power needed to generate the observed speed. So, getting the mechanisms right in both the theory and measurement would also address this problem. Of course, that’s easier said than done.

Isn’t the fundamental problem here that there is no variation in the dependent variable? If the speed changed for a moment, then in that moment your measurement for hills and your measurement for pedal displacement would not be collinear and your regression would pick it up. Perhaps you would need several such moments to pick this relationship up, but as long as you have both independent variables in your model, it seems incorrect to say that “A driver who over-estimates the power of his engine, or who under-estimates the effects of hills, will create a correlation between gas pedal and speed with the ‘wrong’ sign.” That would only be correct if you weren’t monitoring hill level.

I’m not sure that measurements taken from the engine help, Kevin – a regression will still show that engine power goes up and down with no effect on overall speed.

It is true that causation does not imply correlation if there are unknown confounding variables. But I think it is not true that you cannot uncover the data generating process if you have all the relevant variables measured correctly. The author of this piece makes two claims here.

First, he says you can’t get around the problem using multiple regression if the predictors are perfectly correlated. It is true that multiple regression fails in this case, but you don’t need to use multiple regression if there is no variability in the relationship. Just plot the 3 variables together and you see a perfect relationship. If you really want to use multiple regression, just add some random noise to one of the two independent variables and it works fine. In practice, we don’t have perfectly precise measurements, so we aren’t going to run into this problem.

Second, he claims that if the driver doesn’t control the gas perfectly with respect to hills so that there is some fluctuation in speed, you still can’t uncover the true data generating process using multiple regression. You get a combination of the data generating process of interest plus the error process of the driver. He argues that the only way you could get an unbiased estimate of the relationships is if the measurement error had mean 0. I don’t think this is true.

Take the relationship speed=-2*hill+3*gas. Given the same hills, if you give too much or too little gas on average to maintain a constant speed, the speed variable adjusts accordingly because the relationship is deterministic. The author is ignoring that error in using the gas creates fluctuations in speed. The “idiot driver” is unnecessary.
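The commenter’s example can be checked directly. This sketch uses the relation as given, speed = −2·hill + 3·gas; the compensation rule and the size of the driver’s error are my own assumptions. Because the relation is deterministic and hill is included in the regression, the driver’s sloppy foot supplies the independent variation in gas, and OLS recovers the coefficients essentially exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000

hill = rng.normal(size=n)
error = 0.2 * rng.normal(size=n)    # driver's imperfect foot (assumed scale)
gas = (2 / 3) * hill + error        # tries to compensate for hills, misses a bit
speed = -2 * hill + 3 * gas         # the commenter's deterministic relation

# hill is in the model, so the driver's errors identify the gas effect.
X = np.column_stack([np.ones(n), hill, gas])
coef, *_ = np.linalg.lstsq(X, speed, rcond=None)
print(coef)   # recovers intercept 0, hill -2, gas 3 up to rounding
```

This supports the commenter’s point: the “wrong sign” problem in the original post arises only when hill is omitted from (or unobservable in) the regression.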

Maybe this is just a poor analogy. It would be problematic if there was bias in how you recorded some of the variables. For example, if you systematically underestimated how much gas you were giving the car, you would end up with a biased estimate of the coefficient for gas. But this is not the claim being made here. I haven’t read Milton’s original account so I don’t know what he was claiming.

Is this analysis right or am I missing something here?
