The problem of believing in belief

Sam Harris is best known as a leading atheist, but he’s also completing a PhD in cognitive neuroscience, and a forthcoming study by Harris and colleagues is a flawed but important contribution to how we understand the neuropsychology of belief.

Harris and his colleagues asked participants to respond to a number of statements with button presses indicating that they believed, disbelieved or were undecided about each proposition.

The participants were shown statements relating to mathematics, geography, word meaning, general knowledge, ethics, religion and their own life.

While they were doing this, brain activity was measured with an fMRI scanner, with a view to finding out which areas of the brain are involved in ‘belief’ and ‘belief states’.

It’s a straightforward study and you may wonder why no-one has done it before. It’s possibly because, from what we know about belief, it’s not clear that this study tells us much about belief, as opposed to what happens when people respond to questions.

Belief is a concept that is used all the time in psychology but is a pain to define in a way that science would be happy with. If you’re not convinced, Eric Schwitzgebel’s guide to the problem is about as good as you’re likely to read, but I’m going to give a quick run-through of the most relevant issues here.

One of the main problems is that experimental neuropsychology relies on measuring brain and behaviour during activities, and there is no single activity that represents ‘believing’.

When do you believe Paris is the capital of France? Only when you think about it, or all the time? Presumably, we believe it all the time, as we don’t assume someone has stopped believing it when they think about something else or are unconscious, during sleep for example.

The above example treats belief as a proposition stored in memory (a semantic memory in psychology parlance), but you can easily respond to a belief question if you’ve never thought about a proposition before in your life.

Do you believe tigers wear pink pyjamas? Presumably you don’t, but it’s unlikely you’ve ever thought about this before. Your answer is reconstructed from fragments of other information in memory, reasoning and ‘gut instinct’, to varying degrees.

Saying you believe something can work the same way, of course. You may never have thought about it before, but you can say you believe it.

Just these two examples show that saying you believe or disbelieve can involve retrieving a ‘fact’ from memory, or might involve any number of other mental processes to give an answer.

Furthermore, it’s not even clear that two people retrieving facts from memory are thinking about the same thing.

Here’s another question. Do you believe snow is white? Imagine two people are asked this question. One believes snow is frozen water, the other believes it’s stardust.

Considering that each person believes that the subject is something completely different, are they answering the same belief question, or is one answering ‘I believe frozen water is white’ while the other is answering ‘I believe stardust is white’? Now scale that up to concepts like democracy or religion.

This is known as the atomism vs holism debate in philosophy, and it concerns whether we can ever consider a belief in isolation (‘snow is white’), or whether beliefs can only be considered in relation to other beliefs that might need to be accessed at the same time (what we believe a word represents, or even what we believe about what we believe).

These issues are essential for neuropsychologists, because they predict different patterns of brain activity, even though the behaviour (e.g. responding ‘I believe’) is exactly the same.

The point of having so many topics in Harris’s study is that, despite these issues, on average there might be some brain differences involved in answering ‘believe’ or ‘disbelieve’ regardless of topic. But the mental processes involved in answering these questions might be so diverse that it’s difficult to say whether the average brain activity describes ‘belief’ in any meaningful sense.

This doesn’t mean the study is worthless though, and in fact, it’s an essential step in the scientific study of belief.

Science tends to start big, obvious and practical, and work through objections, new ideas and problems over time with new experiments. This study is one of the early but essential, big, obvious and practical steps.

Interestingly, some philosophers (known as eliminative materialists) argue that the concept of belief is just one we’ve inherited from everyday or ‘folk’ psychology, and that, because of its conceptual problems, we’ll eventually realise there are no distinct mind or brain processes that can be coherently identified as ‘belief’.

Like the concept of ‘rooting for your team’, we’ll realise it’s too broad to be scientifically useful, and we’ll disregard the idea of ‘belief’ mechanisms in the brain in favour of a variety of better-specified concepts that reliably map onto mind and brain processes.

Importantly, studies into the neuropsychology of belief, like this one, can help answer these questions, and eventually, they are likely to have profound implications for everything from lie detection to clinical medicine.

Link to full text of Harris’s study.
Link to Schwitzgebel’s entry on belief for the Encyclopaedia of Philosophy.
Link to write-up from Time.


16 thoughts on “The problem of believing in belief”

“It’s possibly because from what we know about belief it’s not clear that this study tells us much more about belief rather than what happens when people respond to question”
AND
“study by Harris is a flawed but important contribution to how we understand the neuropsychology of belief.”
How do these two match?

It’s possible that the experiment is too vague to tell us anything meaningful about belief, but there’s a possibility that belief is as simple as described in the results of this study.
Either way, you’ve got to take the first, simple step so the findings can be further tested, refined or refuted.
Hence, it is possibly wrong, but it’s still an essential first step that hasn’t been done until now.

Incredibly fascinating, since belief is usually thought of as strictly the realm of psychologists and philosophers alike, and even then not in these terms. One thing I’d be interested to know is if such a study discerned subtle differences in categories of beliefs — for example, is the belief in a higher power the same as the “belief” that snow is white? The former is a debatable issue that requires self-knowledge and in some cases the “gut feelings” you speak of. The latter is (for the purposes of this argument) basically irrefutable; in other words, we don’t have to strain to believe it. To be honest, I think the word “belief” is in itself too vague (which you also pointed out), and maybe a study like this would enable useful sub-categorisation.

I thought that, between Wittgenstein’s ‘Private Language Argument’, Putnam’s ‘Twin Earth’ argument, Clark and Chalmers’ work on extended cognition and so on, no one seriously believed that the content of beliefs is wholly in the head.
What is being done here is exactly the same as asking a subject to pray about, say, cats and then saying that, by watching what is going on with fMRI, one can identify precisely where the seat of the soul is located.
When someone claims to have found the seat of the soul in this manner, would you want to accept that claim as well?
All this wonderful new technology allows us to ask new and exciting questions of the world. What it does not do is excuse us from the age-old problem of asking good questions and interpreting the answers correctly. As always, GIGO: garbage in, garbage out.

Somehow, the middle paragraph of my post above seems to have been lost. It should read:
What is being done here is exactly the same as asking a subject to pray about, say, cats and then saying that, by watching what is going on with FMRI, one can identify precisely where the seat of the soul is located.

This is one of the many examples of the necessity of philosophy in guiding empirical research. In order to engage in any empirical research aimed at uncovering the neural basis of “belief” irrespective of its content, we must operationalise it first. As we see here, philosophers serve to guide research by offering a conceptual road-map.
In relation to M4tt’s comments, it is fair to say that the debate between internalists and externalists has stagnated. Maybe a neurocentric perspective is not sufficient, but where in the immense extension of the environment do we look to fix content?

Anibel,
I would say concluded rather than stagnated. Just like with the nature / nurture debate we are arguing about how much is in the head and how much is in the world. No one credible is arguing that it is ALL one or the other.
As to where to look. Well, Andy Clark’s work on extended cognition is the state of the art here.
Personally I think we should be looking at language, how it is instantiated, how it is used, how we reach agreement on terms, the limits of what can be said. Let’s call this project ‘linguistic philosophy’ 😉

If language is the alleged “artifact” that extends us beyond the cranium or skull, it is not the perfect “artifact” for augmenting our cognition, because it lies outside us.
Language is instantiated in a wide network within the brain, and many of its pragmatics depend on, and are about, the brain.

I think you may be conflating the notions of instantiation and dependence.
The fact is that language is, by its very nature, a public activity. As Wittgenstein demonstrated fifty-odd years ago, a logically private language is logically impossible.
Neural function, qualia and all are quintessentially private. This is why there is a problem of other minds.
All of the ‘pragmatics’ you refer to are, by their very nature, part of a public language game. If they are about communication then they have agreed public meanings. Everything that is important about language can only be public.
So, while the brain may superpositionally (or not) instantiate a virtual machine that instantiates the conceptual and symbolic frameworks that we call language, it certainly has nothing to do with its meanings. That’s just a good old-fashioned Rylean category error.

I wonder what would happen if the experiment was repeated using philosophy graduates as subjects – that is, people who might have some idea of the implications of the phrase ‘I believe’?
Neurotypicals are very generous when it comes to bestowing ‘belief’ on all sorts of notions they have no intention of testing, and the way many of them use the term, ‘belief’ seems to mean ‘more likely than not’ or worse, ‘hope’.
For instance, a lady I met yesterday assured me the best way to cure soft tissue laxity in her lower face was going to be acupuncture.
When questioned as to how she’d come to this conclusion she quoted the price of the proposed treatment then changed the subject rapidly. She doesn’t believe it and she knows she doesn’t, but she’ll swear blind she does – and this is what you are up against when asking ordinary people about their beliefs – layer after layer of received wisdom, prejudice, magical thinking, brainwashing, hope and doubt.
And so, as the ability to ‘think’ in any meaningful sense goes to the very heart of this experiment, I’d calibrate the test using subjects I could converse with without getting the feeling bricks were being dropped on my logical faculties.
That is, philosophy graduates and other people trained in logic. They ‘believe’ with some degree of caution and there is a better chance they all mean approximately the same thing by ‘belief’.
I think to calibrate a belief detector using only neurotypicals is like trying to invent the world’s first steeplechase then roadtesting it with a herd of stampeding cattle.

Icemaiden
I’m not sure that you quite grasp the standard philosophical meaning of the word ‘belief’.
I’m sorry, but I have wasted too much time explaining basic stuff to people recently. Check out Wikipedia if you really care to understand.

Is it really so complicated? It doesn’t matter if it is snow or Paris or tigers in pyjamas; the cognitive process to “build the case” could come from anywhere (“Did I play an A minor 7th or an A major 9th?” is clearly a perceptual, real-time process), but at the end of any internal deliberation there’s the judgement: “What is my confidence in conclusion X?”

The confidence games of stage magicians, politicians and other con-men can maybe provide some clues to unlocking confidence as a feeling, which has long been the position of Buddhist psychology (e.g. Morita). In the case of the Harris study, since there is a timeline in these test cases, perhaps the neurology is in the very final moments?