So I see. What I wasn’t sure of was why particular fallacies would be selected for if the aim was to judge accurately rather than according to prior beliefs.

“Two problems with that view: 1) it makes no testable prediction, or not much; 2) the biases of reasoning are systematic, not random error.”

It predicts that we would use quick-to-apply heuristics rather than exact rules. The biases of reasoning are systematic because we use imperfect heuristics. But it doesn’t suggest any system to the imperfections – so you would get many systematic imperfections, ones that counted against your own arguments as often as in their favour.

So if you took a list of all possible syllogisms, valid and invalid, determined which ones people got right and wrong, and then showed that the particular ones people got wrong most often followed a pattern related to their use in arguments, that would demonstrate the hypothesis. Finding a random pattern, or patterns based on frequency of use or on applying a smaller subset of rules, would tell a different story. Picking out a few examples and explaining them in argumentative terms is, as you noted, at risk of being a “just so” story.
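A sketch of what that analysis might look like, in Python – every number and form label below is invented, purely to show the shape of the test:

```python
# Hypothetical sketch of the proposed test: for each syllogistic form,
# pair its observed error rate with a hand-coded flag for whether
# committing that error tends to help in persuasion. All data invented.
error_rate = {"AAA-1": 0.05, "EAE-1": 0.08, "AAI-3": 0.40, "IAI-4": 0.45}
persuasive_error = {"AAA-1": 0, "EAE-1": 0, "AAI-3": 1, "IAI-4": 1}

forms = sorted(error_rate)
helped = [error_rate[f] for f in forms if persuasive_error[f]]
neutral = [error_rate[f] for f in forms if not persuasive_error[f]]

mean = lambda xs: sum(xs) / len(xs)
gap = mean(helped) - mean(neutral)

# A positive gap (errors concentrating on the persuasion-friendly forms)
# would favour the argumentative hypothesis; a gap near zero would not.
print(f"mean error-rate gap: {gap:.2f}")  # → mean error-rate gap: 0.36
```

On real data one would want a proper statistical test rather than a raw difference of means, but the logic is the same: the hypothesis predicts the gap is positive, and the rival hypotheses do not.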

But I was mainly trying to test your hypothesis by comparing predictions with alternative hypotheses.

Taking a hypothesis, deriving predictions from it, and confirming that those predictions are true doesn’t confirm the hypothesis. That’s the fallacy of affirming the consequent: A implies B, B, therefore A.

To test a hypothesis, you have to first show that it makes different predictions from all other hypotheses. Then checking the predictions disconfirms the alternatives, leaving the one you want. That’s modus tollens: A implies B, not B, therefore not A.
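The difference between the two argument forms can be checked mechanically with a truth table; a minimal Python sketch (the helper `valid` is my own, just for illustration):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff the conclusion holds in every
    truth assignment in which all the premises hold."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

implies = lambda a, b: (not a) or b

# Affirming the consequent: A implies B, B, therefore A -- invalid.
print(valid([implies, lambda a, b: b], lambda a, b: a))          # False

# Modus tollens: A implies B, not B, therefore not A -- valid.
print(valid([implies, lambda a, b: not b], lambda a, b: not a))  # True
```

The first form fails on the assignment A = False, B = True: both premises hold but the conclusion does not.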

So if I can offer an alternative hypothesis that can explain the same observations (even if it doesn’t do so uniquely), it bears on whether the first hypothesis has been demonstrated or not. I’m not trying to prove my alternative.

I’m sure you know all that, but I thought I’d clarify that that’s why I took the approach I did.

>Would there not also be a selective pressure to accurately judge the arguments of others?

Yes, that is half our theory.

>If one persuades many, to their ruin, one profits, but many lose. Does the gain of one outweigh the losses of the many? And if an indiscriminate suspicion of glib arguers were to be selected for, what of the advantages of cooperation? Surely, each argument is meant not only to be told, but to be listened to. Are the listeners not best served by the most accurate judgements?

Yes. For communication to be stable in any species, it has to benefit both senders and receivers; it can’t just be manipulation.

>It has long seemed to me the easiest solution to why our reasoning is flawed is that computation is costly. It costs energy, attention, a huge and delicate brain straining at the limits of anatomical possibility, and time to think. It is designed by a blind and fallible process, out of materials not best suited to the task.

Two problems with that view: 1) it makes no testable prediction, or not much; 2) the biases of reasoning are systematic, not random error.

>Why is the mammalian retina wired up backwards? Why does the recurrent laryngeal nerve follow the route it does? Why can’t humans digest cellulose? We don’t expect bodies to be perfect – quite the reverse; it is truly astonishing that the brain is as accurate and effective as it is.

>You can get 80% of the effect for 20% of the effort, and vice versa. We use heuristics that work most of the time – especially for problems of the sort faced on the plains of Africa. One rarely has the precise information needed for finer judgements; a coarse approximation is justified by the quality of the inputs. (Precise odds processed with Bayesian exactitude would be a good case of the ludic fallacy.)

Yes, but biases are best understood as adaptive deviations produced by an underlying sound heuristic – and that is exactly how we treat reasoning.
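As a toy illustration of the 80/20 trade-off in the quoted paragraph (all ranges and numbers invented): a shortcut that simply trusts the signal’s hit rate and ignores the base rate lands on the same side of even odds as the exact Bayesian posterior on most moderate-prior problems, for far less computation.

```python
import random

random.seed(0)

def bayes(prior, hit, fa):
    """Exact posterior P(H | positive signal), via Bayes' rule."""
    return prior * hit / (prior * hit + (1 - prior) * fa)

def heuristic(prior, hit, fa):
    """Base-rate-neglecting shortcut: take the signal at face value."""
    return hit

trials = 1000
agree = 0
for _ in range(trials):
    prior = random.uniform(0.2, 0.8)   # moderate priors, not rare diseases
    hit = random.uniform(0.7, 0.99)    # a reasonably reliable signal
    fa = random.uniform(0.01, 0.3)     # false-alarm rate
    agree += (bayes(prior, hit, fa) > 0.5) == (heuristic(prior, hit, fa) > 0.5)

print(f"agreement with exact Bayes: {agree / trials:.0%}")
```

The shortcut fails exactly where the textbook fallacies live – extreme base rates – which is where the invented ranges above are deliberately kind to it.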

>And while the need to make a good argument does explain confirmation bias quite well – or the need for resistance to too hasty persuasion for that matter – what of all the other fallacies: correlation implying causation, affirming the consequent, the conjunction fallacy, illicit major, and so on? Would we not predict from the hypothesis that fallacies would all be of a type to make confirmation of belief easier, when many of these can fall either way with equal facility?

Some of these are not always fallacious, and others have nothing to do with reasoning (e.g. conjunction fallacy).

>An interesting hypothesis – I will be interested to see how well you argue in its defence.

Thanks!
If you’re really interested, I urge you to read our paper (which is long and technical, I’m afraid). It’s freely available here:

By: Nullius in Verba
Tue, 16 Aug 2011 21:46:09 +0000
http://blogs.discovermagazine.com/intersection/2011/08/16/new-point-of-inquiry-hugo-mercier-did-reason-evolve-for-arguing/#comment-56669

Would there not also be a selective pressure to accurately judge the arguments of others?

If one persuades many, to their ruin, one profits, but many lose. Does the gain of one outweigh the losses of the many? And if an indiscriminate suspicion of glib arguers were to be selected for, what of the advantages of cooperation? Surely, each argument is meant not only to be told, but to be listened to. Are the listeners not best served by the most accurate judgements?

It has long seemed to me the easiest solution to why our reasoning is flawed is that computation is costly. It costs energy, attention, a huge and delicate brain straining at the limits of anatomical possibility, and time to think. It is designed by a blind and fallible process, out of materials not best suited to the task.
Why is the mammalian retina wired up backwards? Why does the recurrent laryngeal nerve follow the route it does? Why can’t humans digest cellulose? We don’t expect bodies to be perfect – quite the reverse; it is truly astonishing that the brain is as accurate and effective as it is.

You can get 80% of the effect for 20% of the effort, and vice versa. We use heuristics that work most of the time – especially for problems of the sort faced on the plains of Africa. One rarely has the precise information needed for finer judgements; a coarse approximation is justified by the quality of the inputs. (Precise odds processed with Bayesian exactitude would be a good case of the ludic fallacy.)

And while the need to make a good argument does explain confirmation bias quite well – or the need for resistance to too hasty persuasion for that matter – what of all the other fallacies: correlation implying causation, affirming the consequent, the conjunction fallacy, illicit major, and so on? Would we not predict from the hypothesis that fallacies would all be of a type to make confirmation of belief easier, when many of these can fall either way with equal facility?

An interesting hypothesis – I will be interested to see how well you argue in its defence.