Monthly Archives: January 2012

Whether in schools or in other public spheres, public intellectuals must struggle to create the conditions that enable students and others to become cultural producers who can rewrite their own experiences and perceptions by engaging with various texts, ideological positions, and theories. They must construct pedagogical relations in which students learn from each other, learn to theorize rather than simply ingest theories, and begin to address how to decenter the authoritarian power of the classroom. Students must also be given the opportunity to challenge disciplinary borders, create pluralized spaces from which hybridised identities might emerge, take up critically the relationship between language and experience, and appropriate knowledge as part of a broader effort at self-definition and ethical responsibility. What I am suggesting here is that public intellectuals move away from the rigid, ideological parameters of the debate about the curriculum or canon. What is needed is a new language for discussing knowledge and authority and the possibility of giving the students a role in deciding what is taught and how it is taught under specific circumstances. The question is not merely, who speaks and under what conditions? It is also about how to see universities (and public schools) as important sites of struggle over what is taught and for control of the conditions of knowledge production itself.

Previously I blogged about an experiment which used the time it takes people to make decisions to try to elucidate something about the underlying mechanisms of information processing (Stafford, Ingram & Gurney, 2011). This post is about the companion paper to that experiment, reporting some computational modelling inspired by the experiment (Stafford & Gurney, 2011).

The experiment contained a surprising result, or at least a result that I claim should surprise some decision theorists. We had asked people to make a simple judgement – to name out loud the ink colour of a word stimulus, the famous Stroop Task (Stroop, 1935). We found that two factors which affected the decision time had independent effects – the size of the effect of each factor was not affected by the other factor. (The factors were the strength of the colour, in terms of how pale vs deep it was, and how the word was related to the colour: matching it, contradicting it or being irrelevant.) This type of result is known as “additive factors” (because the two effects add independently of each other; on a graph of results this looks like parallel lines).

There’s a long tradition in psychology of making an inference from this pattern of experimental results to conclusions about the underlying information processing that must be going on. Known as the additive factors methodology (Donders, 1868–1869/1969; Sternberg, 1998), the logic is this: if we systematically vary two things about a decision and they have independent effects on response times, then the two things must operate on separate loci in the decision-making architecture. On this logic, we can use experiments which measure only outcomes – the time it takes to respond – to ask questions about cognitive architecture; i.e. questions about how information is transformed and combined as it travels between input and output.
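The additive factors logic can be sketched with a toy serial-stage model (all the durations below are invented for illustration, not taken from the papers): if colour strength only affects an early stage and word condition only affects a later stage, their contributions to total response time simply sum, and the word-condition effect comes out identical at every colour strength – parallel lines.

```python
# Toy serial-stage model: total RT is the sum of independent stage durations.
# All durations (in ms) are invented for illustration.

def stage_colour(strength):
    # Early perceptual stage: weaker colours take longer to encode.
    return {"weak": 250, "strong": 180}[strength]

def stage_word(condition):
    # Later response stage: conflicting words slow response selection.
    return {"congruent": 300, "conflict": 380}[condition]

def rt(strength, condition):
    return stage_colour(strength) + stage_word(condition)

# The word-condition effect is identical at every colour strength (additive factors):
for strength in ("weak", "strong"):
    effect = rt(strength, "conflict") - rt(strength, "congruent")
    print(strength, effect)  # 80 ms in both cases -> parallel lines
```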

The problem with this approach is that it commits a logical fallacy. True separate information processing modules can produce additive factors in response time data (A -> B), but that doesn’t mean that additive factors in response time data imply separate information processing modules (B -> A). My work involved taking a widely used model of information processing in the Stroop task (Cohen et al, 1990) and altering it so that it either did or did not contain discrete processing stages. This allowed me to simulate response times in a situation where I knew the architecture for certain – because I’d built the information processing system. The result was surprising. Yes, a system of discrete stages could generate the pattern of data I’d observed experimentally and reported in Stafford, Ingram & Gurney (2011), but so could a single-stage system in which all information was continuously processed in parallel, with no discrete information processing modules. Even stranger, both of these kinds of systems could be made to produce either additive or non-additive factors without changing their underlying architecture.

The conclusion is straightforward. Although inferring different processing stages (or ‘modules’) from additive factors in data is a venerable tradition in psychology, and one that remains popular (Sternberg, 2011), it is a mistake. As Henson (2011) points out, there’s too much non-linearity in cognitive processing, so that you need additional constraints if you want to make inferences about cognitive modules.

Thanks to Jon Simons for spotting the Sternberg and Henson papers, and so inadvertently prompting this bit of research blogging.

Henson, R. N. (2011). How to discover modules in mind and brain: The curse of nonlinearity, and blessing of neuroimaging. A comment on Sternberg (2011). Cognitive Neuropsychology, 28(3-4), 209-223. doi:10.1080/02643294.2011.561305

I’ve been wondering if it would be remotely possible to measure the amount of eccentricity in a culture. In particular, I’m wondering about the historical trend in the number of people who are “characters” – i.e. the distinctly unusual. Anecdotally, I’ve been told that 60 years ago there were more people who marched to the beat of a different drum, and it isn’t hard to imagine a story about the homogenising influence of modern and commercial culture. It also isn’t hard to imagine that all sorts of selection biases and preconceptions are at work, and that there really hasn’t been any change over recent history. So – could it be measured?

I was doing some research the other day on what questions people ask about psychology. This tends to overlap, but not by much, with the questions that we as professional psychology researchers investigate. If you’re interested you can look for yourself:

Very common, it seems, is the question “Am I normal?” or “Is this normal?”. Did people always ask this question, or is it particularly modern? If you do a Google Ngram search for the words “strange” and “normal” you get an interesting pattern:

More normal (in red), and less strange (in blue) over the last two centuries. They even appear inversely related at points – notice the damping of ‘strangeness’ around WWI and WWII and a surge in ‘normality’.

I’ve had a pair of papers published recently and I thought I’d have a go at putting simply what the research reported in them shows.

The first is called ‘Pieron’s Law holds during Stroop conflict: insights into the architecture of decision making’. It reports a variation on the famous Stroop task. The Stroop task involves naming the ink colour of various words, words which can themselves be the names of colours. So you find yourself looking at the word GREEN in red ink and your job is to say “red”. If the word matches the ink colour people respond faster and more accurately; if the word doesn’t match, they are slower and less accurate. What we did was vary the strength of the colour component of the stimulus – e.g. we used more and less intense red ‘ink’ (actually we presented the stimuli on a computer screen, so the ink was pixel values). There’s a well-established relationship between stimulus strength and responding – the ‘Pieron’s Law’ of the title – showing how response time decreases with increasing stimulus strength.
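Pieron’s Law is usually written as RT = R0 + k·I^(−β): response time falls as a power function of stimulus intensity I, towards an asymptotic minimum R0. Here is a minimal sketch with invented parameter values (the real fitted values are in the paper, not here):

```python
# Pieron's Law: response time falls as a power function of stimulus intensity.
#   RT = R0 + k * I**(-beta)
# R0 is an asymptotic minimum RT; k and beta are fitted constants.
# The parameter values below are invented for illustration only.

def pieron_rt(intensity, r0=300.0, k=200.0, beta=0.5):
    return r0 + k * intensity ** (-beta)

for i in (0.25, 0.5, 1.0):  # increasing colour strength
    print(i, round(pieron_rt(i)))  # RT shrinks as intensity grows
```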

So our experiment simply took two well-known psychological findings and combined them in a single experiment. The result is interesting because it can help us arbitrate between different theories of how decisions are made. One popular theory of decision making is that all the information relevant to the decision is optimally combined to produce the swiftest and most accurate response (Bogacz, 2007). There’s lots of support for this theory, including evidence from looking at responses of humans making simple judgements, recordings from the brain cells of monkeys and deep connections to statistical theory. It is beyond doubt that the brain can and does integrate information optimally in some circumstances. What is interesting to me is that this optimal information integration perspective is completely at odds with the most successful research programme in post-war psychology: the heuristics and biases approach. This body of evidence suggests that human decision making is very non-optimal, with all sorts of systematic errors creeping into the way people combine information to make a decision. The explanation for these errors is that we process information using heuristics, mental shortcuts which give a good answer most of the time and cut down on the amount of effort we have to expend in deciding (“do what you did last time” is probably the most common decision heuristic).

My experiment connects to these ideas because it asked people to make a simple judgement (the colour of the ink), like the experiments supporting an optimal information integration perspective on decision making, but the judgement requested was just marginally more complex because we manipulated both Stroop condition (whether the word and ink matched) and colour strength. If you are a straight-down-the-line optimal information decision theorist then you must believe that evidence about the decision based on the word is combined with evidence about the decision based on the colour to make a single ‘amount of evidence’ variable which drives the decision. In the paper I call this the ‘common metric’ hypothesis. The logic is a bit involved (see the paper), but a consequence of this hypothesis is that the size of the effect of the word condition should vary across the colour strength condition, and vice versa. In other words, you should see an interaction. Visually, the lines on the graph of results would be non-parallel.
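To see why pooling evidence into one variable predicts an interaction, here is a minimal sketch assuming a toy accumulator in which response time is inversely proportional to the summed evidence rates (all drift values are invented; this is an illustration of the logic, not the model from the paper):

```python
# If word and colour evidence pool into one drift rate, and RT ~ threshold / drift,
# the word effect shrinks as colour strength grows: an interaction, not additivity.
# All values are invented for illustration.

THRESHOLD = 100.0  # arbitrary evidence criterion

def rt(colour_drift, word_drift):
    return THRESHOLD / (colour_drift + word_drift)

word_effect_weak   = rt(0.2, 0.1) - rt(0.2, 0.3)  # conflict vs congruent, pale colour
word_effect_strong = rt(0.6, 0.1) - rt(0.6, 0.3)  # same contrast, deep colour
print(word_effect_weak, word_effect_strong)  # the effect is smaller at strong colours
```

Because the same word-condition contrast produces a smaller RT difference when the colour evidence is strong, the lines on the results graph would converge rather than run parallel.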

Here’s what we found:

What you’re looking at is a graph of response times (the y-axis) for different colour strengths (the x-axis). The three lines are the three Stroop conditions: when the word matches the colour (‘congruent’), when it doesn’t match (‘conflict’) and when there is no word (‘control’). The result: there is no interaction between these two factors – the lines are parallel.

The implication is that you don’t need to move very far from simple perceptual decision making before human decision making starts to look non-optimal – or at least non-optimal in the sense of combining information from different sources. This is important because of the widespread celebration of decision making as informationally optimal. Reconciling this research programme with the wider heuristics and biases approach is important work, and fits more generally with an honourable tradition in science of finding “boundary conditions” where one way the world works gives way to another.

Psychology in the Pub is a Sheffield event which happens in the Showroom Cinema Bar. I’m giving a talk there on the 15th of March and I’ve just written the blurb. Here it is for your enjoyment:

Thinking Meat: Understanding brain and mind

Your brain weighs the same as half a brick and has the consistency of warm butter. Yet such a mundane object allows you to have every thought you’ve ever had, every feeling, dream or hope. This talk will be an introduction to what I view as the central puzzle of psychology: how the brain creates the mind. I’ll discuss fundamental insights from the study of perception and action and suggest how these provide important clues for understanding all of human psychology. The talk will feature: Lego Robots! ‘Subliminal messages’! Britney Spears! Pirates! And a no-holds-barred personal revelation from the speaker.

The content will be similar to the talk I gave in Manchester recently, which you can hear here.

Once upon a time, I, Chuang Chou, dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of my happiness as a butterfly, unaware that I was Chou. Soon I awaked, and there I was, veritably myself again. Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly, dreaming I am a man. Between a man and a butterfly there is necessarily a distinction. The transition is called the transformation of material things.