An extremist, not a fanatic

October 22, 2014

Persuasion with statistics

Mark Thoma's point that apparently strong econometric results are often the product of specification mining prompts Lars Syll to remind us that eminent economists have long been wary of what econometrics can achieve.

I doubt if many people have ever thought "Crikey, the t stats are high here. That means I must abandon my long-held beliefs about an important matter." More likely, the reaction is to recall Dave Giles' commandments nine and ten. (Apparently?) impressive econometric findings might be good enough to get you published. But there's a big difference between being published and being read, let alone being persuasive.

This poses a question: how, then, do statistics persuade people to change their general beliefs (as distinct from beliefs about single facts)?

Let me take an example of an issue where I've done just this. I used to believe in the efficient market hypothesis. And whilst, like Noah, I still think this is good enough for most investors' practical purposes - index trackers out-perform (pdf) most active managers - I now believe there are significant deviations from the hypothesis, one of them being that there is momentum in share prices: past winners carry on rising and past losers continue to fall.

How was I convinced of this? As Campbell Harvey and colleagues point out, there are huge numbers (pdf) of patterns in the cross-section of returns. Most (though not all) leave me cold. Why has momentum been an exception?

There were two general things that persuaded me of this.

The first was evidence from different data sets. When I first encountered the case for momentum in Jegadeesh and Titman's paper, I merely thought: "that's interesting. I wonder if it applies elsewhere." So I set up a very simple hypothetical basket of momentum stocks for the UK - and found that it too has out-performed over long periods. And there's since been evidence that momentum effects exist in currencies, commodities, international stock markets and in 19th century markets.
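A check of this sort can be sketched in a few lines. The code below is a toy, not a reconstruction of the original basket: it fabricates synthetic monthly returns in which each stock has a small persistent drift, ranks stocks on trailing twelve-month returns, and compares the next month's returns of past winners against past losers. All the numbers (100 stocks, decile portfolios, a 12-month formation window) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly returns for 100 stocks over 240 months, with a small
# persistent per-stock drift so that past winners tend to keep winning.
n_stocks, n_months = 100, 240
drift = rng.normal(0.0, 0.01, n_stocks)            # persistent cross-sectional spread
returns = drift + rng.normal(0.0, 0.05, (n_months, n_stocks))

formation = 12                                      # ranking window, in months
winner_minus_loser = []
for t in range(formation, n_months - 1):
    past = returns[t - formation:t].sum(axis=0)     # trailing 12-month return
    order = np.argsort(past)
    losers, winners = order[:10], order[-10:]       # bottom and top deciles
    nxt = returns[t + 1]                            # hold for one month
    winner_minus_loser.append(nxt[winners].mean() - nxt[losers].mean())

print(f"mean winner-minus-loser monthly return: {np.mean(winner_minus_loser):.4f}")
```

Because the drift here is persistent by construction, the winner-minus-loser spread comes out positive; the point of running the same machinery on a genuinely different data set is that there is no such guarantee there.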

The fact that different data say the same thing is something I found persuasive.

Secondly, there's powerful theory explaining momentum - all the more so because there is more than one theory.

One such explanation is simply that investors under-react to good news, causing shares to drift up rather than - as the EMH predicts - fully embody the good news immediately. This is intuitively plausible because casual empiricism tells us that Bayesian conservatism is widespread. But it's also consistent with another finding - that there's post-earnings announcement drift.

But this is not the only potential explanation. Another is that people have limited attention; some things escape their notice, so they might not spot when some stocks enjoy good news. This is consistent with the finding that shares which see steady drips of news have stronger momentum effects than those which get a big splash of it.

And then we have an explanation for why smarter investors don't eliminate these irrationalities. Victoria Dobrynskaya has shown that momentum strategies have the wrong sort of beta: high downside beta and low upside. This means they carry benchmark risk - the danger of underperforming the general market. This makes them unattractive to those fund managers who fear being punished for under-performing.
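The "wrong sort of beta" claim is easy to make concrete: estimate the strategy's beta separately in down-market and up-market months. The sketch below uses made-up data with an asymmetry baked in (the 1.5 and 0.5 exposures are assumptions, not estimates from Dobrynskaya's paper) purely to show the calculation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly market returns, and a strategy whose market sensitivity
# is (by construction) higher in down months than in up months.
n = 5000
market = rng.normal(0.005, 0.04, n)
beta_down, beta_up = 1.5, 0.5                       # assumed asymmetric exposures
strategy = np.where(market < 0, beta_down, beta_up) * market \
           + rng.normal(0.0, 0.01, n)

def conditional_beta(strat, mkt, mask):
    """OLS beta of strat on mkt, using only the months selected by mask."""
    s, m = strat[mask], mkt[mask]
    return np.cov(s, m)[0, 1] / np.var(m, ddof=1)

down = conditional_beta(strategy, market, market < 0)
up = conditional_beta(strategy, market, market >= 0)
print(f"downside beta: {down:.2f}, upside beta: {up:.2f}")
```

A downside beta well above the upside beta is exactly the profile a benchmark-hugging fund manager dislikes: the strategy falls hard with the market but doesn't rise as hard with it.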

My point here is, perhaps, a trivial one. The above is not a story about statistical significance (pdf). Single studies are rarely persuasive. Instead, the process of persuading people to change their mind requires diversity - a diversity of data sets, and a diversity of theories. Am I wrong? Feel free to offer counter-examples.

Comments

Not sure it's exactly what you're looking for, but here's my anecdote.
I have cycled as my main form of urban transport for many years. When I moved to London, I started wearing a helmet, but then did some reading around whether that was a sensible safety precaution. I came to the conclusion that I felt comfortable with not wearing a helmet, so now I cycle without.
So far, so good.
But one day a much respected colleague, also a cyclist, asked why I didn't wear a helmet (she does). I explained that my understanding of the research led me to believe that it wasn't a particularly useful thing to do. She was genuinely shocked that I was applying statistics to my real life, and clearly disapproved.
People don't like to make that link.

Essentially you are generating replications. Importantly, given a hypothesis based on data mining, you have then explored it in lots of different circumstances, and found that the hypothesis seems to hold. That's a fairly scientific approach.

Having a solid theory to explain statistical data is useful as well, as correlations on their own could just be randomness, although one should obviously be wary of post hoc theories created to explain data.

Basically then: mine the data, knowing that you will almost certainly pick out trends which are statistical artifacts. Try to explain those trends with sensible theories. For those that make some kind of theoretical sense to you, see if you can replicate those results. If you can, then you have decent evidence that something is going on.
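The hazard in the first step of that recipe - mining will "almost certainly pick out trends which are statistical artifacts" - can be demonstrated directly. The sketch below (all names and numbers are illustrative) tests 200 pure-noise "signals" against one sample of returns, picks the most impressive in-sample correlation, and then checks the same signal against a fresh sample.

```python
import numpy as np

rng = np.random.default_rng(2)

# 200 candidate "signals", all pure noise, tested against one sample of
# returns: mining still turns up impressive-looking in-sample correlations.
n_signals, n_obs = 200, 120
signals = rng.normal(size=(n_signals, n_obs))
returns_a = rng.normal(size=n_obs)                  # discovery sample
returns_b = rng.normal(size=n_obs)                  # fresh replication sample

corr_a = np.array([np.corrcoef(s, returns_a)[0, 1] for s in signals])
best = np.argmax(np.abs(corr_a))                    # the mined "discovery"
corr_b = np.corrcoef(signals[best], returns_b)[0, 1]

print(f"best in-sample |corr|: {abs(corr_a[best]):.3f}")
print(f"same signal out of sample: {abs(corr_b):.3f}")
```

The mined correlation looks striking in the discovery sample but, being noise, typically shrinks toward zero out of sample - which is why the replication step carries the evidential weight.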

I don't think there are any statistics that would make me feel comfortable in encouraging my children not to wear a helmet while cycling in urban traffic. I sure would like to see the research that you read.

Replication is central, of course. Stats books (the ones I've seen) don't make a big point of this, so when I teach statistics I go out of my way to show how much more important it is than whatever p value you get in a single study. (Not to mention the specification mining that generates that p value.) Nevertheless, there's a big caveat: replication does its job to the extent that the different studies are truly independent of each other. First off is the difference in data sets, which is what moved you about momentum effects. Good. But another concern is that the results you see in any particular set are dependent on the assumptions packed into the data collection and analysis methods underlying them. If these assumptions are not mandatory or self-evident, there may be other ways of doing such a study. In that case the power of replication depends on whether you see the same results across a variety of methodological choices.

I mention this because I've encountered quite a bit of replication in economics that doesn't convince at all because the same dodgy methods are applied to every data set. (Exhibit A is the value of statistical life literature.)

Good writing, including statistical, requires a belief in symmetrical conversation between writer and reader, a mutual responsibility to further understanding. My sense is Chris' post assumes that, and that the persuasion is focused on the intelligent engaged reader, not just another economist. I think that should be explicit.

Given that, there's the rhetorical ideal of ethos, pathos, logos. I believe many symbolic analysts (e.g., economists, statisticians) go directly to logos. I used to do that, attempting to persuade solely by the use of reasoning, largely ignoring ethos and pathos - character and emotion. As I see it, ethos in analytical writing is mostly about conveying that you care - not just about the issues, but also in the sense of diligence, of having been thorough. Narrative, such as a story or example, can be a useful way to convey pathos. If you want to persuade analytically and you're writing for the intelligent engaged reader, then ethos, pathos, logos - all three - should be part of the deal.

One other thought is that for the writing to be persuasive it should answer "Is it reasonable?" Reasonableness runs through Chris' examples and may also be reflected in Sophia's bicycle helmet example, though Sophia is the person to judge that.

Chris asked for examples, which I don't have off hand. Perhaps others do?