If Not Data, What?

Over the years, I have become increasingly skeptical of the power of statistical techniques to measure causation in complex systems. … I happen to believe that concealed handguns do deter crime and allowing concealed handguns is a good thing. And you can claim that the evidence that shows I’m right is "good" statistical analysis. The other side disagrees. They claim it’s "bad" statistical analysis. Who’s right? I have no idea. But what’s clear to me is that my belief in the virtues of allowing concealed handguns has little to do with the empirical evidence. And I would argue that the opponents are really in the same boat. They just don’t like guns and they’ve dressed up their prejudices in fancy statistical analysis. …

If Russ relies little on data to draw his conclusions, then on what does he rely? Perhaps he relies on theoretical arguments. But can’t we say the same thing about theory, that we mainly just search for theory arguments to support preconceived conclusions? If so, what is left, if we rely on neither data nor theory?

Try saying this out loud: "Neither the data nor theory I’ve come across much explain why I believe this conclusion, relative to my random whim, inherited personality, and early culture and indoctrination, and I have no good reasons to think these are much correlated with truth." That does not seem a conclusion worth retaining. If this is really your situation, you should move to a nearly intermediate position of uncertainty. Either you should believe that truth-correlated data or theory has substantially influenced your belief, or you should retain only a very weak belief.

As I understand his point from reading the whole article, Roberts is not arguing that we shouldn’t conform our beliefs to the best available evidence, but that when the evidence is not overwhelming we are likely to let our biases shape what we think. Roberts cites his own case simply as an instance that confirms this general claim. I agree that his beliefs about guns, given his second-order belief about how those first-order beliefs were formed, are unwarranted and should be abandoned. But this only makes his general claim more convincing, since it shows that the bias he has identified keeps operating even in those who are aware of it.

http://www.bangor.ac.uk/~afpe5d/ neal hockley

Maybe I’m misinterpreting the passage, but I read Russ as advocating that very thing: a move to very weak beliefs.

Many questions of enormous practical importance can never realistically be answered through regression analysis. For example, what determines differences in economic growth of countries? The number of plausible explanatory variables for which interesting hypotheses have been formulated and evidence claimed far exceeds the sample size – we can never test these hypotheses! (see Easterly in May’s AER).

Moreover, the dataset is essentially fixed – new countries are created only rarely, and the dataset increases longitudinally by just 1 year per year. In theory, hypotheses generated from casual observation of one set of data (recent history) should be tested on another (to prevent over-fitting) – but there is no other dataset!
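The over-fitting worry can be made concrete with a quick simulation (the sample size and variable count below are hypothetical, chosen only for illustration). With nearly as many candidate regressors as countries, an ordinary least-squares fit "explains" most of the variation in growth even when every regressor is pure noise:

```python
# Illustrative sketch: cross-country growth regression where the number of
# candidate explanatory variables approaches the number of countries.
# Every regressor here is random noise, yet the in-sample fit looks strong,
# so the data cannot discriminate among competing hypotheses.
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_regressors = 60, 50   # hypothetical sizes: p nearly as large as n

X = rng.normal(size=(n_countries, n_regressors))  # noise "explanatory" variables
y = rng.normal(size=n_countries)                  # "growth rates", unrelated to X

beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # ordinary least squares
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
print(f"in-sample R^2 on pure noise: {r2:.2f}")   # high, roughly p/n, despite no true relationship
```

Testing on a held-out set of countries would expose the fit as spurious, but, as noted above, for cross-country growth there is no second dataset to hold out.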

The truth is undoubtedly highly complex and non-linear. There is not enough information in any conceivable dataset to draw firm conclusions about the underlying process, in a timely fashion, except in the case of very large effect sizes – the real world provides very little information on underlying truth.

This limits our ability to understand the truth, and in these circumstances agnosticism is the correct response.

Constant

But can’t we say the same thing about theory, that we mainly just search for theory arguments to support preconceived conclusions?

Not necessarily. The claim about theory does not follow from the observation about evidence. Evidence and theory are not the same thing, and do not share the same weaknesses. Specifically, he writes:

I have become increasingly skeptical of the power of statistical techniques to measure causation in complex systems

This is a very specific statement about statistical techniques. It is not a blanket rejection of data, but a statement about the weakness of statistical techniques in a particular area (“to measure causation in complex systems”). There is no particular reason to draw the conclusion that theory also has little value in this particular situation.

On the contrary. One might even argue that theory becomes especially useful in dealing with complex systems, because the truth of the theory can be tested with simpler systems. That fact allows us to get past the above-mentioned difficulty of testing claims in complex systems. If you can’t test a claim in a complex system, then fine: test the theory in simpler systems. If it passes the test, and if it applies to the claim we are interested in, then apply it to the claim in the complex system.

April

I would add to Neal’s comments (on Easterly and what we know about what causes economic growth in developing countries) that there are many important policy questions related to developing countries where the data are far from sufficient to test our hypotheses about what is going on.
In these cases, I believe it’s a huge advantage for those involved in policy making and advising to keep in mind that we don’t know for sure what the right policy is. The problem is that doing so is extremely uncomfortable.
When this awareness is suppressed, as is often the case, it increases the tendency to suppress opposing views, and reduces willingness to take in new information that might lead to questioning of the policy.

Constant

In the case that Russ Roberts is considering, it is not just any complexity that renders the data difficult to interpret, but specifically, I would say, the large number and large effect of uncontrolled extraneous variables. That is, if crime is a function “f” of many variables, like so:

f(a, b, c, d, e, g, h, i, j, k)

and if “concealed carry” is “a”, and if “b”, “c”, and the rest all vary widely between different areas and have a large effect on “f”, then it can be very difficult to determine from all the available data what the effect of “a” on “f” is.

This is not just any kind of “complexity”, but a particular kind. This kind of complexity can be dealt with by conducting controlled experiments. A computer game, for example, might control for everything except for concealed carry among the NPCs, with the player playing the role of the criminal.
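A toy simulation (all effect sizes invented for illustration) shows the failure mode Constant describes, and why holding the extraneous variables fixed by randomizing “a” repairs it. Here crime depends weakly on “a” and strongly on an uncontrolled variable “b” that also influences whether a place adopts “a”:

```python
# Hypothetical model: crime f has a small true effect from "a" (say, concealed
# carry) and a large effect from an uncontrolled variable "b" that also drives
# adoption of "a". The naive observational comparison is badly biased (wrong
# sign, even); randomizing "a" recovers the true effect.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_effect = -0.5                      # assumed small effect of a on f

def crime(a, b, noise):
    return true_effect * a + 3.0 * b + noise

b = rng.normal(size=n)                  # large-effect extraneous variable

# Observational world: high-b places are more likely to adopt a.
a_obs = (b + rng.normal(size=n) > 0).astype(float)
f_obs = crime(a_obs, b, rng.normal(size=n))
naive = f_obs[a_obs == 1].mean() - f_obs[a_obs == 0].mean()

# Experimental world: a assigned at random, independent of b.
a_exp = rng.integers(0, 2, size=n).astype(float)
f_exp = crime(a_exp, b, rng.normal(size=n))
experimental = f_exp[a_exp == 1].mean() - f_exp[a_exp == 0].mean()

print(f"naive observational estimate: {naive:+.2f}")        # far from -0.5
print(f"randomized estimate:          {experimental:+.2f}") # near -0.5
```

The observational estimate picks up the effect of “b”, not “a”; only the randomized design isolates the variable of interest.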

http://web.mac.com/william_c_hutton_jr/iWeb william

It’s ok.

Russ simply has very strong, narrow priors. His priors overwhelm the data (no matter how good the data may or may not be).

The fact that others have very different strong, narrow priors and the same data means they will reach completely different conclusions.
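A minimal sketch of this in Bayesian terms (all numbers made up): two observers update on the same data under a conjugate normal model, but a sufficiently narrow prior barely moves, so they end up on opposite sides.

```python
# Conjugate normal-normal updating: same data, different priors.
# All priors, data, and variances below are hypothetical.
import numpy as np

def posterior(prior_mean, prior_var, data, noise_var):
    # Standard conjugate update for the mean of a normal likelihood
    # with known noise variance.
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

data = np.array([0.3, -0.1, 0.4, 0.2, 0.1])   # weak evidence for a positive effect

# Strong narrow prior that the effect is negative, vs. a vague prior.
m1, _ = posterior(prior_mean=-1.0, prior_var=0.01, data=data, noise_var=1.0)
m2, _ = posterior(prior_mean=0.0, prior_var=100.0, data=data, noise_var=1.0)

print(f"narrow-prior posterior mean: {m1:+.2f}")  # still near -1: prior dominates
print(f"vague-prior posterior mean:  {m2:+.2f}")  # near the data mean of +0.18
```

With weak data the narrow prior wins; only much stronger evidence would pull the two observers together.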

http://www.mccaughan.org.uk/g/ g

The question is how he and those others acquired those priors.

Floccina

Yes, I think what is left is theory and logic. I have heard that Mises believed that economic data was too messy to rely on, so you are left with logic and theory.

http://thesaifhouse.wordpress.com saifedean

I don’t see why not trusting some forms of data and regression automatically means having to trust hunches, intuition, flimsy theory, or indoctrination. This is a false dichotomy.

A lot of the statistical work presented as scientific has serious problems and often patently expresses bias.

Russ is correct in some of his criticisms of taking regressions too seriously. That is a statement that should be evaluated on its own. When the data is not convincing, we should not believe it. Believing it just because we do not want to believe anecdotes is not academically honest.

It is much better to admit that we do not know something than to rely on poor anecdotes, theory, intuition, or shoddy data.

The illusion of knowledge is more dangerous than the absence of knowledge. When the data can tell us something, when regressions can be honestly interpreted to establish causality, then we believe them. When they can’t, we have to be honest and admit we do not know much. We may then be better off trying to synthesize our limited data, theory, and anecdotes into a nuanced picture that doesn’t claim to know too much. But believing in data because it is inherently better than other forms of knowledge is dishonest.

In that regard, you’re right in criticizing Russ, who does seem to hold some very strong beliefs at times when they are not supported by data. But that was not the claim he was making in this passage.

http://cob.jmu.edu/rosserjb Barkley Rosser

Well, I confess to a prejudice against handguns because a relative died as a ten-year-old child due to an accident with a family-owned handgun. However, it strikes me that the case here either way is very contingent. It may be that in the US, where there is little gun control, guns are all over the place, and we have far higher homicide and suicide rates due to the use of guns and handguns than other high-income countries, the horse is just way out of the barn in terms of ubiquitous gun insanity, so that allowing concealed handguns might just reduce violence from all the other gun-toting maniacs. But it is pretty clear that on a global basis, the countries that have done a better job of controlling the availability and use of guns more generally very strongly tend to have lower homicide, and especially suicide, rates. I think this is causal, although it is probably my wicked prior showing itself in that thought.

Constant

the countries that have done a better job of controlling the availability and use of guns more generally very strongly tend to have lower homicide

If you truly include everything, this may not be the case. For starters, Nazi Germany and the Soviet Union must be included among the countries that implemented gun control.

http://entitledtoanopinion.wordpress.com TGGP

If you want causality, it would seem to me you’d have to show variation over time. The United States is far different from other countries in a whole host of respects, and the states within the U.S also differ in many ways. Charles Murray advocated using a “trend-line test” to see if we can guess when legislation came into effect, and I think in most cases he examined one could not do so very easily (this is not itself an argument against legislation, because that means the downsides are undetectable as well). A brief explanation and critique of Murray’s trend-line test by Jeffrey Friedman can be found here.

http://www.google.com 4σ

the countries that have done a better job of controlling the availability and use of guns more generally very strongly tend to have lower homicide

Excellent point. We most certainly need to include state-sponsored homicide in the numbers.

http://www.satisfice.com/blog James Bach

An alternative to data or theory is “story” (or what you might call an implicit mental model). I think, looking at Tversky and Kahneman’s work, as well as, oh, just about all of social science, that it’s clear that data does not have much influence over beliefs. I doubt that you, in your daily life, actually collect much data about matters relating to your everyday beliefs. I sure don’t.

If I have a coherent story about how my world works, especially if the story is personally rewarding or self-justifying (e.g. I am good, I like guns, so guns are good) then I may work hard to qualify data that supports me and disqualify the data that doesn’t.

A theory purports to explain something about the world, but a story, in my terms, is a *proxy* for the world. We literally live inside our stories, only forced out in the event of a Kuhnian sort of crisis. To tell us that we “should” live by data seems absurd to me, unless you are claiming that a life lived according to data is more happy or satisfying. Of course, it may be to someone like you (and pretty much me, too) who enjoys analyzing data and putting together justifications that may stand the test against our critical colleagues, but it is hardly likely to improve the lives of most people, judging by surveys showing how many people “believe in God” and therefore clearly don’t live by data.

A slightly disturbing thing I notice about the writers on this blog is that although they enjoy exploring rationality (which is cool) they are too often reluctant to appreciate the value and dynamics of irrationality, and that leads to oversimplifying and dismissing it. Man, the world RUNS on irrationality. Look around you! Watch a press secretary at work! We need to respect our enemy (our *powerful* enemy) and learn the ways of our enemy, if we are to overcome it.

— James

Gray Area

“If you want causality, it would seem to me you’d have to show variation over time.”

This doesn’t work. In fact, you cannot in general draw causal conclusions from statistical data alone (without making additional causal assumptions), no matter how sophisticated you are with the data. Fortunately, there is a lot of work being done now on exactly how few assumptions are needed to answer particular causal questions.

http://www.bthomson.com Brandon Thomson

“To tell us that we ‘should’ live by data seems absurd to me, unless you are claiming that a life lived according to data is more happy or satisfying.”

Thank you for this comment, Mr. Bach. I think overcoming bias is a worthy goal, but it’s easy to forget that this is just a value judgment.

http://profile.typekey.com/robinhanson/ Robin Hanson

Neal, Russ is not a person well characterized as having relatively weak beliefs on social science.

Constant, but there are so many different ways to apply simple theory to complex situations.

William, I criticize having strong narrow “priors” not based in some way on previous evidence or analysis.

Saife, my main point was to criticize Russ’s strong beliefs in the apparent absence of strong support.

James, perhaps you should pen an essay on the value of irrationality for us to ponder.

George Weinberg

I get the impression that RR was referring specifically to cases where intelligent, well-informed people have very different opinions. If all the smart people are on one side and only stupid people are on the other, then it’s a pretty safe bet that we’re right and they’re wrong 😉 But if there are intelligent people on both sides, it may be a bit presumptuous to assume that our side formed its beliefs via pure unbiased reason but the other side is largely a product of wishful thinking.


Constant

Robin, physics is a simple theory, discovered in lab experiments where complexity is managed, that remains valid and applicable outside the lab as well, amidst the vast complexity. For example, we can be confident that perpetual motion machines will not work. The vast complexity outside does not render the simple theory useless. The line that there are so many ways to apply the simple theory to the complex alleged perpetual motion machine is an argument the crackpot inventor might use to discourage us from using our intelligence against his claims.

J Thomas

“But it is pretty clear that on a global basis, the countries that have done a better job of controlling the availability and use of guns more generally very strongly tend to have lower homicide, and especially suicide, rates.”

It could be that in places where people tend not to have guns and there is little cultural tradition of solving personal problems with firearms, it’s easier to do gun control. The causation might run from low homicide to gun control, rather than the other way around.

Similarly, there are cultural differences between US cities and states that tend toward gun control and those that don’t. In some places, a rise in crime rates leads to gun control — whether that results in improvements or not. In other places, a rise in crime rates leads to capital punishment — whether that results in improvements or not.

Since places where they solve problems with gun control differ in important but subtle ways from places where they solve problems with capital punishment, it’s hard to make crude statistical comparisons between them.

http://profile.typekey.com/robinhanson/ Robin Hanson

J and others, this isn’t the place to argue about gun control.

J Thomas

Robin, I’m arguing about correlation versus causation, and about the difficulty of doing statistics across populations that have systematic variation.

AO

I expected Russ’ post and Robin’s response to set off a flurry of back and forth amongst econ blogs, but I haven’t really seen that. I’ve heard Russ Roberts say that economics is really an art and not a science, which seems like an important criticism. Robin, such a disagreement seems like an elephant in the room of economics, which I would think you would want to boil down to a specific disagreement of facts. Why isn’t an active, engaged debate happening? Does nobody take his argument seriously, or does everyone take it as given?

Peter Boettke

Robin and others,

Isn’t Russ just saying that “facts” never speak to us in an unambiguous manner and instead always demand “interpretation” through various theoretical frameworks and respect for the network of statements that are made (both focal and subsidiary) in developing a hypothesis?

What about the relevance of the Duhem-Quine thesis even in the natural sciences, let alone the social sciences?

We are all interested in tracking truth and overcoming unwarranted bias, but stating that as a goal is slightly different from achieving it unambiguously in practice. But rather than go down that path, perhaps it might be better to address the Duhem-Quine thesis and its implications in not only making sense of the problem that Russ identifies, but the project of overcoming bias in general.

Pete

http://profile.typekey.com/robinhanson/ Robin Hanson

Peter, that doesn’t sound at all to me like what Russ said, and I don’t yet see the relevance of the Duhem-Quine thesis for the project of overcoming bias. But if you’ll blog an argument somewhere, I’ll blog a response here.

http://www.johnrlott.com John Lott

Dear J Thomas:

Regarding correlation and causation: that is precisely why some researchers try to provide many qualitatively different empirical tests. For example, with right-to-carry laws: 1) violent crime falls; 2) the size of the drop increases the longer the law is in effect, because more permits are issued; 3) there are differences between violent crimes, where a criminal comes in contact with a victim who might be able to defend herself, and property crimes, where there is no contact (violent crimes fall relative to property crimes); 4) there are differences between different types of violent crimes (e.g., between murder generally and multiple-victim public shootings, because the probability that someone will be able to defend themselves is much higher in a multiple-victim public shooting than when only one or two victims are present); 5) a comparison between adjacent counties on opposite sides of a state border; and 6) differential benefits across different types of victims.

http://www.whoismaryrosh.com/ Mary Rosh

I agree with everything John says. And he’s so handsome too!

And whatever people say about the errors in John’s book, just look at all the good reviews it got on Amazon. How could his book be bad, when he shows his faith in it by reviewing it so many times?

http://www.johnrlott.com John Lott

Is that Tim? In any case, the inaccuracy of the source that you reference is discussed here and here.

http://www.mccaughan.org.uk/g/ g

John, the links you provide don’t say anything about the alleged errors in your book; at most they indicate that on a couple of occasions Tim Lambert has been overzealous in identifying sockpuppets. Given your history (which is a matter of record) of self-aggrandizing sockpuppetry via “Mary Rosh” over a three-year period, I think that’s pardonable; it certainly doesn’t seem to me to destroy Tim Lambert’s credibility.

And, btw, the “Mary Rosh” comment was mine; I am not Tim Lambert, have never met Tim Lambert, and so far as I can recall have had no interactions with Tim Lambert other than reading his blog from time to time.

http://www.johnrlott.com John Lott

Dear “g”:

You apparently didn’t look at many of the issues raised in just those two references.

1) The coding errors were in a paper by Plassmann and Whitley, not More Guns, Less Crime. The small number of errors, which they argue did not affect their basic results, were in data added for 1997 to 2000, not in the data that MGLC used from 1977 to 1996. Plassmann has a discussion here. You might also want to see their discussion here.
2) As to the survey, see here.

http://cob.jmu.edu/rosserjb Barkley Rosser

John,

Well, Robin Hanson has pointed out that this is not the time or place to be arguing about (hand)gun control. However, since you have popped up, how about answering my questions? Why does the US have so much higher homicide and suicide rates than other high-income countries, which pretty much all have much stricter gun controls of all sorts? And is it because we already have so many guns around, given our past lack of gun control, that your studies on handgun controls in states within the US might be correct?

http://www.johnrlott.com John Lott

Dear Rosser:

1) If you look across Western Europe or Europe as a whole, there is not the relationship that you imply. Switzerland has a much higher gun ownership rate than the US, but it has one of the lowest murder rates in Europe. Norway and Finland also have gun ownership rates that are just a little less than that in the US. If you read either MGLC or The Bias Against Guns, I have extensive discussions of the problems with purely cross-sectional data. Let me note that the countries that currently have low murder rates in Europe tended to have even lower murder rates before gun control. England is a good example. It had no gun control prior to 1920, and yet London, a city of millions of people, had a total of two gun murders in 1900. It had five armed robberies. After stricter gun regulations in the 1950s and 1997, the UK saw increases in murder and violent crime. You really need panel data to answer this question properly.

2) If you look around the US, murders are extremely geographically concentrated. 50 percent of US counties have zero murders in any given year. 25 percent have one murder. Just over 3 percent of counties account for over 70 percent of all murders. Those 3 percent also happen to be the counties with the lowest gun ownership rates in the US. Again, though, I wouldn’t put a lot of weight on purely cross-sectional data.

3) If you look across all countries for which the data is available and control for income, there actually appears to be more crime where there are stricter gun regulations. The top ten countries in terms of murder have either complete or virtually complete gun bans. Even in the past during the 1970s and 1980s in the former USSR, despite the totalitarian state, they had a murder rate much higher than ours in the US. Again, however, panel data is necessary. I have just tried to frame this discussion in terms of the cross sectional data that you reference.

If you are looking for a detailed discussion of these points, please see The Bias Against Guns.