Tuesday, December 24, 2013

I'll just be honest: I've never really cared much for writing. A good post covering the sort of issues I want to tackle on this blog takes a ton of time and effort if I don't want to leave low-hanging fruit that damages my credibility. I simply value other things I can do with my free time more than that, so it's been months since I've written anything at all. I feel I have to revisit the post that took off, though, and write about what has and hasn't changed in my thinking since.

I started this blog as a place where friends could find evidence-based answers to questions that pop up all the time in the media, and hopefully they'd then direct others here. After 7 posts, that was pretty much out the window, which still sort of blows my mind. It's very easy (and often justified) to be cynical of any institution, even science. It's also easy to instinctively fear something and assume you know enough about it to justify that fear. Critical thinking is something you really have to develop over a long period of time, but done right, it's probably the pinnacle of human ingenuity. That or maybe whiskey.

On the other hand, sometimes it's easy to slide closer to nihilism. Knowing something about randomness and uncertainty can lead to being overly critical of sensible hypotheses. There's the famous BMJ editorial mocking the idea that there's "no evidence" for parachutes being a good thing to have in a free fall. You always have to keep that concept in the back of your mind, to remind you that some things really don't need to climb the hierarchy of good evidence. I've spent some of the last year worrying whether I did enough to show that's not where my analysis of Kevin Drum's article came from.

I think most people saw that I was trying to illustrate the evidence linking lead and crime through the lens of evidence-based medicine. The response was resoundingly positive, and most of the criticism centered on my half-assed statistical analysis, which I always saw as extremely tangential to the overall point. The best criticism forced me to rethink things and ultimately led to today.

Anyway, anecdotes and case reports are the lowest level of evidence in this lens. The ecological studies by Rick Nevin that Drum spends much space describing (which I mistakenly identified as cross-sectional) are not much higher on the list. That's just not debatable in any way, shape, or form. A good longitudinal study is a huge improvement, as I think I effectively articulated, but if you read through some other posts of mine (on BPA, or on the evidence that a vegetarian diet reduces ischemia), you'll start to sense the problems those may present as well. Nevin's ecological studies practically scream "LOOK OVER HERE!" If I could identify the biggest weakness of my post, it's that I sort of gave lip service to a Bayesian thought process suggesting that, given the circumstances, these studies might amount to more than just an association. I didn't talk about how simple reasoning and prior knowledge would suggest something stronger, and use this to illustrate the shortcomings of frequentist analysis. I just said that in my own estimation, there's probably something to all of this. I don't know how convincing that was. I acknowledge that one of my stated purposes, showing how the present evidence would fail agency review, may have come off as concern-trolling.
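To make the Bayesian point a bit more concrete, here's a toy sketch of what "prior knowledge suggests something stronger" means. Every number in it is invented purely for illustration; nothing here is an estimate of the actual lead-crime evidence.

```python
# Toy Bayesian update with made-up numbers, to show how a plausible prior
# plus moderately supportive data can land you well past "just an
# association", something a lone frequentist p-value doesn't convey.

prior_odds = 1.0        # hypothetical prior: 50/50 that lead contributes to crime
likelihood_ratio = 4.0  # hypothetical: observed data 4x likelier if the effect is real

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))  # 0.8
```

The point of the sketch is just that the same study carries different weight depending on what you already know mechanistically, which is exactly what a bare significance test leaves out.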

On the other hand, if there is indeed something more to this, it seems reasonable to expect a much stronger association in the cohort study than was found. Going back to Hill's criteria, strength of the association is the primary factor in determining the probability of an actual effect. When these study designs were first being developed to examine whether smoking caused lung cancer, the associations were literally thousands of times stronger than what was found for lead and violent crime. The lack of strength is not a result of small sample sizes or being underpowered; it's just a relatively small effect any way you look at it. It would have been crazy to use the same skeptical arguments I made in that instance, and history has properly judged those who did.
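For readers unfamiliar with what "strength of the association" means operationally, here's a minimal sketch of a relative risk computed from a 2x2 table. All of the counts are invented for illustration and have nothing to do with the actual lead or smoking data.

```python
# Relative risk from a 2x2 exposure/outcome table, with invented counts.
# This is the quantity Hill's "strength" criterion is about.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of the outcome among the exposed divided by risk among the unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# A weak association (RR near 1) is easy to explain away by confounding
# no matter how large the sample; a very strong one is not.
print(round(relative_risk(150, 1000, 100, 1000), 2))  # 1.5
```

That is the sense in which the weakness above isn't a power problem: collecting more subjects tightens the confidence interval around a small ratio, but it doesn't make the ratio bigger.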

Ultimately, I don't know how well the lens of evidence-based medicine fits the sort of question being asked here. Cancer epidemiology is notoriously difficult because of the length of time between exposure and onset of disease, and the sheer complexity of the disease. It still had a major success with identifying tobacco smoke as a carcinogen, but this was due to consistent, strong, and unmistakable longitudinal evidence of a specific group of individuals. Here, we're talking about a complex behavior, which may be even more difficult to parse. My motivation was never to display prowess as a biostatistician, because I'm not. It was never to say that I'm skeptical of the hypothesis, either. It was simply to take a step back from saying we have identified a "blindingly obvious" primary cause of violent crime and we're doing nothing about it.

I think the evidence tells us, along with informed reasoning, that we have a "reasonably convincing" contributing cause of violent crime identified, and we're doing nothing about it. That's not a subtle difference, and whether one's intentions are noble or not, if I think evidence is being overstated, I'm going to say something about it. Maybe even through this blog again some time.