econometrics

The 2013 Nobel prize in economics was won by Fama, Shiller, and some other dude, according to most media accounts. Fama and Shiller were pretty easy to explain: one of them is at Chicago and is associated with a theory called “efficient markets,” so he’s the free market guy. Shiller criticized the Chicago guy, so we know where to put him on the political spectrum. But this third guy, Hansen, well, he’s at Chicago, but he does some sort of theoretical econometrics, so if we’re the Guardian we’ll just assume he’s “ultra-conservative” and then ignore him, and if we’re anyone else we’ll skip straight to ignoring him (even the Economist gives up, complaining it can’t explain his work without “writing all sorts of equations in our newspaper”). This post attempts to provide a relatively gentle introduction, albeit one with all sorts of equations, to part of this third guy’s research, focusing on applications to causal modeling in microeconomics rather than on examples from finance or macroeconomics.

Bryant Chen and Judea Pearl have published an interesting piece in which they critically examine the discussions (or lack thereof) of causal interpretations of regression models in six econometrics textbooks. In this post, I provide brief assessments of the discussion of causality in nine additional econometrics texts of various levels and vintages, and close with a few remarks about causality in textbooks from the perspective of someone who does, and teaches, applied econometrics. Like Chen and Pearl, I find that some of these textbooks provide weak or misleading discussions of causality, but I also find one very good and one excellent discussion in relatively recent texts. I argue that the discussion of causality in econometrics textbooks appears to be improving over time, and that the oral tradition in economics is not well reflected in econometrics textbooks.

Econometricians commonly conduct inference based on covariance matrix estimates that are consistent in the presence of arbitrary forms of heteroskedasticity; the associated standard errors are referred to as “robust” (also, confusingly, as White, Huber-White, or Eicker-Huber-White) standard errors. These are easily requested in Stata with the “robust” option, as in the ubiquitous

reg y x, robust

Everyone knows that the usual OLS standard errors are generally “wrong,” that robust standard errors are “usually” bigger than OLS standard errors, and it often “doesn’t matter much” whether one uses robust standard errors. It is whispered that there may be mysterious circumstances in which robust standard errors are smaller than OLS standard errors. Textbook discussions typically present the nasty matrix expressions for the robust covariance matrix estimate, but do not discuss in detail when robust standard errors matter or in what circumstances robust standard errors will be smaller than OLS standard errors. This post attempts a simple explanation of robust standard errors and circumstances in which they will tend to be much bigger or smaller than OLS standard errors.
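To make the comparison concrete, here is a minimal sketch in Python (plain numpy; the simulation design and variable names are mine, not from any textbook). It generates data whose error variance rises with x, so the noisiest observations are also the ones with the most leverage on the slope, which is the classic case in which robust standard errors come out bigger than OLS standard errors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0, 2, n)
# Heteroskedastic errors: the standard deviation equals x, so
# high-leverage observations (large x) are the noisiest ones.
e = rng.normal(0, 1, n) * x
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Classical OLS covariance estimate: s^2 (X'X)^{-1}
s2 = resid @ resid / (n - X.shape[1])
V_ols = s2 * XtX_inv

# White sandwich estimate: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
meat = (X * resid[:, None] ** 2).T @ X
V_rob = XtX_inv @ meat @ XtX_inv

se_ols = np.sqrt(np.diag(V_ols))
se_rob = np.sqrt(np.diag(V_rob))
print("OLS SE on x:   ", se_ols[1])
print("robust SE on x:", se_rob[1])
```

Flipping the design so that the error variance is smallest at the high-leverage observations (say, errors scaled by 2 − x instead of x) reverses the comparison, giving one of those “mysterious circumstances” in which the robust standard error is smaller than the OLS one.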

This post briefly surveys some of the methods and results in the literature on health and income inequality, closing with some remarks on problems with the existing literature and where future research may take us. It is not intended as anything resembling a comprehensive survey; Lynch et al (2004) provides a useful review of the empirical literature up to that time.

Antibiotic overuse causes great social harm yet is largely absent from public discussion of drug policy. There is a textbook external effect of an antibiotic prescription: the more antibiotics are used, the higher the risk we all face of resistant infections. As a result, there tends to be too much use of antibiotics. There have been ongoing efforts to reduce use of antibiotics, particularly in the context of treating respiratory infections, in part by educating GPs, the supply side of the relationship, on appropriate use.

A new working paper by Michael Luca estimates the effect of Yelp reviews on Seattle restaurant revenues. Disentangling causality here is difficult: even if reviews have no effect on revenues, we would expect to observe reviews and revenues both moving with changes in underlying relative quality. Luca exploits a quirk in the way Yelp presents information: average scores are reported rounded into 0.5-star bins on a 5-star scale. For example, underlying average scores of 2.76 and 3.24 are both reported as “3 stars,” but a good review that bumps the average up to 3.25 bumps the reported score up to 3.5 stars. The estimates show that Yelp reviews do have a substantial effect on revenues.
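The rounding rule driving the identification can be written down in a few lines. This is a toy illustration of the display rule as described above (rounding to the nearest half star, with 3.25 rounding up), not code from the paper:

```python
import math

def displayed_stars(avg_rating):
    """Round an underlying average rating to the nearest half star,
    with exact quarter-points (e.g. 3.25) rounding up, matching the
    display rule described above."""
    return math.floor(avg_rating * 2 + 0.5) / 2

# Averages on either side of the 3.25 threshold land in different
# display bins even though underlying quality barely differs --
# the discontinuity used for identification.
print(displayed_stars(2.76))  # -> 3.0
print(displayed_stars(3.24))  # -> 3.0
print(displayed_stars(3.25))  # -> 3.5
```

Comparing restaurants just below and just above such a threshold is what lets the paper separate the effect of the displayed score from the effect of underlying quality.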