Perhaps there are some kinds of debates where people don’t want to find the right answer so much as they want to win the argument. Perhaps humans reason for purposes other than finding the truth — purposes like increasing their standing in their community, or ensuring they don’t piss off the leaders of their tribe. If this hypothesis proved true, then a smarter, better-educated citizenry wouldn’t put an end to these disagreements. It would just mean the participants are better equipped to argue for their own side.

My immediate first thought was that you could replace "politics" with "basketball," reach many of the same conclusions, and uncover much of the reason so many of us disagree about the "right" way to measure basketball performance.

Later on in the article:

Being better at math didn’t just fail to help partisans converge on the right answer. It actually drove them further apart. Partisans with weak math skills were 25 percentage points likelier to get the answer right when it fit their ideology. Partisans with strong math skills were 45 percentage points likelier to get the answer right when it fit their ideology. The smarter the person is, the dumber politics can make them.

Consider how utterly insane that is: being better at math made partisans less likely to solve the problem correctly when solving the problem correctly meant betraying their political instincts. People weren’t reasoning to get the right answer; they were reasoning to get the answer that they wanted to be right.

And I think we see this all the time in analyzing basketball performance. I see people with graduate degrees in STEM fields make glaring mistakes in statistical analysis. How is this possible? Well, one possibility is that they are ignoring what they know because they are trying to make the data fit their current worldview. Since many of us feel almost as strongly about basketball as we do about politics, this article provides strong evidence that we are seeing a cognitive bias in action.

It's a really interesting article, and I suggest reading the whole thing.

Some political topics (gun control and abortion, for example) lack good faith between the differing sides, at least in the US. The dispute is then not so much about what the math says as about the assumption that the numbers being used are somehow wrong to begin with.

It's also important to note that two equally sophisticated and competent people can have divergent views about political issues because their ideas of what society should look like (the state of affairs that government policy ought to attempt to bring about) may differ fundamentally.

For instance, Jonathan Haidt has done extensive research on the average psychological differences between conservatives and liberals. Liberals pay intense attention to issues of "harm" and "fairness," and are thus likely to support universal welfare and social-services provision. Conversely, conservatives may find themselves deeply offended by the idea of "freeloaders" (people who receive more in tax money than they pay into the system, and who perhaps do not work, at least in the formal, taxpaying economy) and thus find themselves opposed to basic-income or social-insurance programs that tend to extend assistance to people who are not "contributing" through taxes and formal employment.

In such circumstances, a liberal and a conservative with equally sophisticated means of evaluating public policy may reach diametrically opposed opinions as to what changes should be made, not because of any errors on either of their parts but rather because they want to live in fundamentally different societies.

In politics (and everywhere else) people reason backwards. We're very bad at looking at stacks of evidence and drawing unbiased conclusions. However, the human mind is AMAZING at starting with a conclusion and then rationalizing backwards. It's dumb that our minds work like this, but that's how it is, so we should develop systems that try to avoid our biases and produce rigorous conclusions despite our cognitive backwardness.

The scientific method already works like this. Anything can be a hypothesis; it's all about finding an unbiased, verifiable way of demonstrating it. Of course, biased skeptics will refuse to accept the conclusions and poke holes in the argument or methodology, but that's actually great, because it serves as a valuable check on the rigor of the experiment.

Similarly, statistical analysis, done in a responsible and open way, can serve as a valuable sanity check and a way to verify theories. People will try to make data fit their worldview, and other people will attack that data with their own data. Eventually, if we debate on the correct terms, the stronger side will win, and that's how we make progress. It's nothing to be distressed about.

+1 on Al_S' point about misreading the study. In fairness, inference is remarkably difficult even for those with extensive training, but that article still badly misinterpreted the study's findings and overreached on the implications.

It shouldn't be surprising that STEM professionals make serious errors in statistical analysis - we don't train them in statistics! In many of the hard sciences you don't deal with stochastic data at all; in life sciences you get just enough to make sense of clinical trials, and not much more. It's not just possible to go an entire career as an engineer or a lab scientist without ever looking at a regression equation; it's probable. It's a far cry from working with field data and trying to infer causality when seemingly everything suffers from endogeneity issues.
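To make the endogeneity point concrete, here is a minimal illustrative sketch (not from the original comment): a simulation where an unobserved confounder `z` drives both `x` and `y`, while `x` has no direct effect on `y` at all. A naive regression of `y` on `x` still finds a sizable "effect"; partialling out the confounder makes it vanish. All variable names are hypothetical, and the regression is done by hand with stdlib Python only.

```python
import random

random.seed(0)
n = 10_000

# Confounder z drives both x and y; x has NO direct causal effect on y.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

def ols_slope(xs, ys):
    """Simple univariate OLS slope: cov(x, y) / var(x)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

# Naive regression of y on x picks up z's influence (~0.5 here).
naive = ols_slope(x, y)

# Control for z: regress z out of both x and y, then regress the
# residuals on each other (the Frisch-Waugh-Lovell partialling-out idea).
bz_x = ols_slope(z, x)
bz_y = ols_slope(z, y)
x_res = [xi - bz_x * zi for xi, zi in zip(x, z)]
y_res = [yi - bz_y * zi for yi, zi in zip(y, z)]
adjusted = ols_slope(x_res, y_res)  # close to 0: no causal effect

print(f"naive slope    = {naive:.3f}")
print(f"adjusted slope = {adjusted:.3f}")
```

The trap for a lab-trained scientist is that in an experiment you randomize `x`, so the naive slope is the causal effect; with observational field data it usually isn't, and you have to know which confounders to look for.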

I followed your link, thanks for that. You are correct: Klein misinterpreted the study, or at best took its results too far.

It would be, however, a large stretch to say that this makes them "dumber" than 538.

For instance, the very fact that Klein linked the study and let you do follow-up research to determine this puts them in a whole different universe from 538, which generally (a) uses only its own research rather than looking to pre-existing academic research, and (b) tends to obfuscate the methodology of that research and make it hard to reproduce.