Classic Research In Evolutionary Psychology: Reasoning

I’ve consistently argued that evolutionary psychology, as a framework, is a substantial and, in many ways, vital remedy to some widespread problems: it allows us to connect seemingly disparate findings under a common understanding and, while the framework is by itself no guarantee of good research, it forces researchers to be more precise in their hypotheses, allowing conceptual problems with hypotheses and theories to be more transparently observed and addressed. In some regards the framework is quite a bit like the practice of explaining something in writing: while you may intuitively feel as if you understand a subject, it is often not until you try to express your thoughts in actual words that you find your estimation of your understanding has been a bit overstated. Evolutionary psychology forces our intuitive assumptions about the world to be made explicit, often to our own embarrassment.

As I’ve recently been discussing one of the criticisms of evolutionary psychology – that the field is overly focused on domain-specific cognitive mechanisms – I feel that now would be a good time to review some classic research that speaks directly to the topic. Though the research to be discussed is itself of recent vintage (Cosmides, Barrett, & Tooby, 2010), the topic has been examined for some time: whether our logical reasoning abilities are best conceived of as domain-general or domain-specific (that is, whether they work equally well regardless of content, or whether content area is important to their proper functioning). We ought to expect domain specificity in our cognitive functioning for two primary reasons (though these are not the only reasons). The first is that specialization yields efficiency. The demands of solving a specific task are often different from the demands of solving a different one, and to the extent that those demands do not overlap, it becomes difficult to design a tool that solves both problems readily. Imagining a tool that can both open wine bottles and cut tomatoes is hard enough; now imagine adding the requirement that it also needs to function as a credit card, and the problem becomes exceedingly clear. The second reason is outlined well by Cosmides, Barrett, & Tooby (2010) and, as usual, they express it more eloquently than I would:

The computational problems our ancestors faced were not drawn randomly from the universe of all possible problems; instead, they were densely clustered in particular recurrent families.

Putting the two together, we end up with the following: humans tend to face a non-random set of adaptive problems in which the solution to any particular one tends to differ from the solution to any other. As domain-specific mechanisms solve problems more efficiently than domain-general ones, we ought to expect the mind to contain a large number of cognitive mechanisms designed to solve these specific and consistently-faced problems, rather than only a few general-purpose mechanisms more capable of solving many problems we do not face, but poorly-suited to the specific problems we do. While such theorizing sounds entirely plausible and, indeed, quite reasonable, without empirical support for the notion of domain-specificity, it’s all so much bark and no bite.

Thankfully, empirical research abounds in the realm of logical reasoning. The classic tool used to assess people’s ability to reason logically is the Wason selection task. In this task, people are presented with a logical rule taking the form of “if P, then Q”, along with a number of cards representing P, Q, ~P, and ~Q (e.g., “If a card has a vowel on one side, then it has an even number on the other”, with cards showing A, B, 1, and 2). They are asked to point out the minimum set of cards that would need to be checked to test the initial “if P, then Q” statement. People’s performance on the task is generally poor, with only around 5-30% of people getting it right on their first attempt. That said, performance on the task can become remarkably good – up to around 65-80% of subjects getting the correct answer – when the task is phrased as a social contract (“If someone [gets a benefit], then they need to [pay a cost]”, the most well-known being “If someone is drinking, then they need to be at least 21”). Despite the underlying logical form not being altered, the content of the Wason task matters greatly in terms of performance. This is a difficult finding to account for if one holds to the idea of a domain-general logical reasoning mechanism that functions the same way in all tasks involving formal logic. Noting that content matters is one thing, though; figuring out how and why content matters is something of a more difficult task.
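For readers who want the logic of the task made concrete, here is a minimal sketch (my own illustration, not anything from the paper) of why the correct answer to the abstract version is the A card and the 1 card: the only cards that can falsify “if P, then Q” are those showing P (the hidden side might be ~Q) and those showing ~Q (the hidden side might be P). The function name and card encoding are hypothetical.

```python
# Abstract Wason task: "If a card has a vowel on one side,
# then it has an even number on the other."
# Only P cards (vowels) and ~Q cards (odd numbers) can falsify the rule.

def is_vowel(face):
    return face.isalpha() and face.lower() in "aeiou"

def is_even_number(face):
    return face.isdigit() and int(face) % 2 == 0

def cards_to_check(cards):
    """Return the minimal set of cards whose hidden side must be inspected."""
    picks = []
    for face in cards:
        if is_vowel(face):
            # P: the hidden side might be an odd number (rule violated)
            picks.append(face)
        elif face.isdigit() and not is_even_number(face):
            # ~Q: the hidden side might be a vowel (rule violated)
            picks.append(face)
        # ~P (consonants) and Q (even numbers) can never falsify the rule,
        # yet subjects routinely flip the Q card and ignore ~Q.
    return picks

print(cards_to_check(["A", "B", "1", "2"]))  # ['A', '1']
```

The common mistake – checking the even-number card – corresponds to confirming the rule rather than trying to falsify it, which is exactly the error pattern the abstract version of the task elicits.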

While some might suggest that content simply matters as a function of familiarity – as people clearly have more experience with age restrictions on drinking and other social situations than with vague stimuli – familiarity doesn’t help explain the results: people will fail the task when it is framed in terms of familiar but non-social stimuli, and will succeed at the task for unfamiliar social contracts. Accordingly, criticisms of the domain-specific social contract (or cheater-detection) mechanism shifted to suggest that the mechanism at work is indeed content-specific, but perhaps not specific to social contracts. Instead, the contention was that people are good at reasoning about social contracts only because they’re good at reasoning about deontic categories – like permissions and obligations – more generally. Even assuming such an account were accurate, it remains debatable whether that mechanism would count as domain-general or domain-specific. Such a debate need not be had yet, though, as the more general account turns out to be unsupported by the empirical evidence.

We’re just waiting for critics to look down and figure it out.

While all social contracts involve deontic logic, not all deontic logic involves social contracts. If the more general account of deontic reasoning were true, we ought not to expect performance differences between the former and latter types of problems. In order to test whether such differences exist, Cosmides, Barrett, & Tooby’s (2010) first experiment involved presenting subjects with a permission rule – “If you do P, you must do Q first” – varying whether P was a benefit (going out at night), neutral (staying in), or a chore (taking out the trash; Q, in each case, involved tying a rock around your ankle). When the rule was a social contract (the benefit), performance was high on the Wason task, with 80% of subjects answering correctly. However, when the rule involved staying in, only 52% of subjects got it right; that number was even lower in the garbage condition, with only 44% accuracy among subjects. Further, this same pattern of results was subsequently replicated in a new context involving filing and signing forms. These results are quite difficult to account for with a more general permission schema, as all the conditions involve reasoning about permissions; they are, however, consistent with the predictions of social contract theory, as only the contexts involving some form of social contract elicited the highest levels of performance.

Permission schemas, in their general form, also appear unconcerned with whether one violates a rule intentionally or accidentally. By contrast, social contract theory is concerned with the intentionality of the violation, as accidental violations do not imply the presence of a cheater the way intentional violations do. To continue to test the distinction between the two models, subjects were presented with the Wason task in contexts where violations of the rule were likely intentional (with or without a benefit for the actor) or accidental. When the violation was intentional and benefited the actor, subjects answered accurately 68% of the time; when it was intentional but did not benefit the actor, that percentage dropped to 45%; when the violation was likely unintentional, performance bottomed out at 27%. These results make good sense if one is trying to find evidence of a cheater; they do not if one is trying to find evidence of a rule violation more generally.

In a final experiment, the Wason task was again presented to subjects, this time varying three factors: whether the actor intended to violate the rule or not; whether the violation would benefit the actor or not; and whether the ability to violate was present or absent. The pattern of results mimicked those above: when benefit, intention, and ability were all present, 64% of subjects determined the correct answer to the task; when only two factors were present, 46% of subjects got the correct answer; and when only one factor was present, subjects did worse still, with only 26% getting the correct answer – approximately the same performance level as when no factors were present. Taken together, these three experiments provide powerful evidence that people aren’t just good at reasoning about the behavior of other people in general, but rather that they are good at reasoning about social contracts in particular. In the now-immortal words of Bill O’Reilly, “[domain-general accounts] can’t explain that”.

“Now cut their mic and let’s call it a day!”

Now, of course, logical reasoning is just one possible example for demonstrating domain specificity, and these experiments certainly don’t prove that the entire structure of the mind is domain-specific; there are other realms of life – such as, say, mate selection, or learning – where domain-general mechanisms might work. The possibility of domain-general mechanisms remains just that – possible; perhaps not often well-reasoned on a theoretical level or well-demonstrated on an empirical one, but possible all the same. Differentiating between these accounts may not always be easy in practice, as they are often thought to generate some, or even many, of the same predictions, but in principle it remains simple: we need to place the two accounts in experimental contexts in which they generate opposing predictions. In the next post, we’ll examine some experiments in which we pit a more domain-general account of learning against some more domain-specific ones.

References: Cosmides, L., Barrett, H.C., & Tooby, J. (2010). Adaptive specializations, social exchange, and the evolution of human intelligence. Proceedings of the National Academy of Sciences of the United States of America, 107 (Suppl 2), 9007-9014. PMID: 20445099
