An IV won’t save your life if the line is tangled

In the official story the causal question comes first and then the clever researcher comes up with an IV. I suspect that often it’s the other way around: you find a natural experiment and look at the consequences that flow from it. And maybe that’s not such a bad thing. See section 4 of this article.

More generally, I think economists and political scientists are currently a bit overinvested in identification strategies. I agree with Heckman’s point (as I understand it) that ultimately we should be building models that work for us rather than always thinking we can get causal inference on the cheap, as it were, by some trick or another. (This is a point I briefly discuss in a couple places here and also in my recent paper for the causality volume that Don Green etc are involved with.)

I recently had this discussion with someone else regarding regression discontinuity (the current flavor of the month; IV is soooo 90's). But I think the point holds more generally: experiments and natural experiments are great when you have them, and they're great to aspire to and to focus one's thinking. In practice, though, these inferences are sometimes a bit of a stretch, and the appeal of an apparently clean identification strategy can mask serious difficulty in mapping the identified parameter to the underlying quantities of interest.

I'm having trouble thinking of a single paper in economics whose conclusions are widely believed but whose identification doesn't come from IV, regression discontinuity, natural experiments, or actual experiments.

There is just a huge initial credibility shortfall in the profession for papers without exogenous variation.

I confess to not having read Morck and Yeung (too wordy and data/equation-free for my taste), but Tabarrok's summary states that "if a variable is a good IV for X then it can't also be a good IV for Y without also controlling for X and vice-versa." This doesn't seem to make any sense to me unless there is something very specific in this context that I'm missing.

@Ricardo
Generally speaking I agree with you. 'A' can cause changes in 'B' and 'C' without there being any relationship between 'B' and 'C'. However, when reusing an IV for 'C' after it has been used for 'B', it seems sensible to ask whether changes in 'B' could cause further changes in 'C', and vice versa. If so, you have a problem, but it seems absurd to call this an externality: the prediction about the relationship between 'A' and 'B' was never right in the first place, and this just shows why.

The assumptions of an IV include that it is associated with the exposure but not associated with the outcome (except through the exposure) or with any of the possible confounding variables.
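To make those assumptions concrete, here is a minimal simulation sketch (effect sizes are made up for illustration, not taken from any study discussed here): when the instrument is independent of the unobserved confounder and affects the outcome only through the exposure, the simple Wald/IV ratio recovers the true effect while naive OLS does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)          # exposure: driven by z and u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # outcome: true effect of x is 2.0

# Naive OLS slope of y on x is biased upward, because u pushes x and y together.
ols = np.cov(x, y)[0, 1] / np.var(x)

# Wald/IV estimator: cov(z, y) / cov(z, x). Consistent here because z is
# independent of u and reaches y only through x (the exclusion restriction).
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS: {ols:.2f}, IV: {iv:.2f}")  # IV lands near the true effect 2.0; OLS does not
```

The whole argument rides on the exclusion restriction holding, which is exactly the assumption the studies below call into question.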

So, let's take an Instrumental Variable (say, Physician Medication Preference) being looked at by Researcher A. It may be associated with the type of NSAID (a class of anti-inflammatory drug) given but not with patient-specific characteristics nor, independently, with the outcome (all-cause mortality).

Now, let's say that some clever researcher (call her B) argues that this IV (physician medication preference) is an instrument for socioeconomic status (with the outcome being the same: all-cause mortality). However, Researcher A has already shown that the instrument is associated with NSAID use (which is a potential confounding factor between socioeconomic status and all-cause mortality).

So the existence of the two studies (of different exposures) suggests that physician medication preference is not a valid instrument (as either one study is incorrect in associating the instrument with the exposure or both studies are subject to confounding by the other factor). Once you get a few of these examples, it becomes hard to argue that the instrument is doing any better than a straightforward analysis (as it is now susceptible to unknown confounding, residual confounding, and so forth).

Now it is possible that the same instrument could randomize multiple exposures but then the exposures have to be unrelated to each other. That can be tricky to prove if there are too many studies using the same instrument for a diverse set of exposures.
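The failure mode described above can be seen in a toy simulation (all names and coefficients are hypothetical, just mirroring the NSAID/SES story): one instrument drives two exposures, only the first exposure actually affects the outcome, and yet using the instrument for the second exposure produces a confidently nonzero estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

z = rng.normal(size=n)                 # shared instrument (think: physician preference)
b = 0.7 * z + rng.normal(size=n)       # exposure B (think: NSAID choice)
c = 0.5 * z + rng.normal(size=n)       # exposure C (think: the second study's exposure)
y = 1.0 * b + rng.normal(size=n)       # outcome depends on B only; true effect of C is 0

# Researcher B instruments c with z, ignoring the z -> b -> y path.
iv_c = np.cov(z, y)[0, 1] / np.cov(z, c)[0, 1]

print(f"IV estimate for c: {iv_c:.2f}")  # comes out near 1.4, though the true effect is 0
```

The bias is not subtle: the z-to-b-to-y path leaks straight into the numerator of the Wald ratio, so the second study's "effect" is entirely an artifact of the first study's exposure.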

Here's the example from the Wikipedia article "Instrumental Variable," which I don't find terribly confidence-inducing:

"For example, suppose a researcher wishes to estimate the causal effect of smoking on general health (as in Leigh and Schembri 2004[3] ). … The researcher may proceed to attempt to estimate the causal effect of smoking on health from observational data by using the tax rate on tobacco products as an instrument for smoking in a health regression."

Well, I guess … But it seems like you are just getting yourself snarled up in a bigger hairball than the original question. Since cigarettes are addictive, how much do higher taxes on cigarettes reduce smoking? And how fast? (I wouldn't be surprised if raising cigarette taxes reduces smoking in a generation, by discouraging teens from ever smoking enough to get hooked, but doesn't have much effect for a number of years.) And what if the legislators of health-conscious states like Colorado are more likely to raise taxes on cigarettes as a symbol of objection to cigarettes, while legislators in poor-health states like West Virginia keep taxes low because so many constituents enjoy a delicious, calming smoke? Or what if some states with especially hooked populaces have high taxes because they generate a lot of money?

I'm a lot more uncertain about this "instrumental variable" than I am about the basic question of whether smoking is bad for you. (Of course it is.)
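The Colorado-vs-West-Virginia worry is a violation of instrument exogeneity: something unobserved (state health attitudes) drives both the tax rate and the outcome. A toy simulation (all coefficients invented for illustration) shows the direction of the resulting bias:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

attitude = rng.normal(size=n)                       # unobserved state health-consciousness
tax = 0.6 * attitude + rng.normal(size=n)           # pro-health states set higher taxes
smoking = -0.5 * tax + rng.normal(size=n)           # taxes genuinely reduce smoking
health = -1.0 * smoking + attitude + rng.normal(size=n)  # true harm of smoking: -1.0

# IV estimate of smoking's effect on health, using tax as the instrument.
iv = np.cov(tax, health)[0, 1] / np.cov(tax, smoking)[0, 1]

print(f"IV estimate: {iv:.2f}")  # noticeably more negative than the true -1.0
```

Because the instrument is correlated with the confounder, the IV estimate overstates the harm of smoking: the tax-to-attitude-to-health path gets misattributed to smoking itself.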

The Heckman critique is about how to get the right causal effect, not about using statistical modeling for something else. From what I understand, both Heckman and those he criticizes are somewhat obsessed with the same thing: causal inference. Both see other types of statistical analysis as inferior. The difference is that Heckman trusts economic theory to help economists get the right model, while the others prefer clever research designs that are less dependent on theoretical assumptions (assumptions based on economic theory and generally untestable).

Thus, from what I understand, your own position (from your books and blog) and that of Heckman are fundamentally different regarding the role of statistical modeling. This is so in at least two important respects: 1) you're very suspicious of the use of economic theory – with its expected-utility apparatus, etc. – to help us get the right model, so right that it doesn't even have to be checked; and 2) you consider regression a tool for investigating conditional probabilities among quantities of interest, even when causal inference is not possible.

I agree with Heckman's point (as I understand it) that ultimately we should be building models that work for us rather than always thinking we can get causal inference on the cheap, as it were, by some trick or another.

My perspective on statistics is definitely different from Heckman's (see the linked article for more on that point) but I think I'm in agreement with him on the above point (even if we're coming to it from different directions).