Do We Believe Everything We’re Told?

Some early experiments on anchoring and adjustment tested whether distracting subjects – rendering them cognitively "busy" by asking them to keep a lookout for the digit "5" in strings of numbers, or some such – would decrease adjustment, and hence increase the influence of anchors. Most of the experiments seemed to bear out the idea that cognitive busyness increased anchoring, and contamination more generally.

Looking over the accumulating experimental results – more and more findings of contamination, exacerbated by cognitive busyness – Daniel Gilbert saw a truly crazy pattern emerging: Do we believe everything we’re told?

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it. This obvious-seeming model of cognitive process flow dates back to Descartes. But Descartes’s rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y’know, logical and intuitive. But Gilbert saw a way of testing Descartes’s and Spinoza’s hypotheses experimentally.

If Descartes is right, then distracting subjects should interfere with both accepting true statements and rejecting false statements. If Spinoza is right, then distracting subjects should cause them to remember false statements as being true, but should not cause them to remember true statements as being false.

The initial experiments came out in Spinoza’s favor: distracted subjects were more likely to misremember false statements as true, but not true statements as false. A much more dramatic illustration was produced in followup experiments by Gilbert, Tafarodi, and Malone (1993). Subjects read aloud crime reports crawling across a video monitor, in which the color of the text indicated whether a particular statement was true or false. Some reports contained false statements that exacerbated the severity of the crime; other reports contained false statements that extenuated (excused) the crime. Some subjects also had to pay attention to strings of digits, looking for a "5", while reading the crime reports – this being the distraction task used to create cognitive busyness. Finally, subjects had to recommend the length of prison terms for each criminal, from 0 to 20 years.

Subjects in the cognitively busy condition recommended an average of 11.15 years in prison for criminals in the "exacerbating" condition, that is, criminals whose reports contained labeled false statements exacerbating the severity of the crime. Busy subjects recommended an average of 5.83 years in prison for criminals whose reports contained labeled false statements excusing the crime. This nearly twofold difference was, as you might suspect, statistically significant.

Non-busy participants read exactly the same reports, with the same labels, and the same strings of numbers occasionally crawling past, except that they did not have to search for the number "5". Thus, they could devote more attention to "unbelieving" statements labeled false. These non-busy participants recommended 7.03 years versus 6.03 years for criminals whose reports falsely exacerbated or falsely excused.
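For a rough sense of scale, a quick check of the reported means shows that the labeled-false statements moved busy subjects’ sentences roughly five times as much as non-busy subjects’:

```python
# Gap between the "falsely exacerbated" and "falsely excused" conditions,
# using the mean prison terms reported by Gilbert, Tafarodi and Malone (1993).
busy_gap = 11.15 - 5.83      # cognitively busy subjects
nonbusy_gap = 7.03 - 6.03    # non-busy subjects

print(round(busy_gap, 2), round(nonbusy_gap, 2))  # 5.32 vs. 1.0
```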

This suggests – to say the very least – that we should be more careful when we expose ourselves to unreliable information, especially if we’re doing something else at the time. Be careful when you glance at that newspaper in the supermarket.

PS: According to an unverified rumor I just made up, people will be less skeptical of this blog post because of the distracting color changes.

Gilbert, D. 2002. Inferential correction. In T. Gilovich, D. Griffin and D. Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment. You recognize this citation by now, right?

Gilbert, D., Tafarodi, R. and Malone, P. 1993. You can’t not believe everything you read. Journal of Personality and Social Psychology, 65: 221–233.

Spinoza’s view seems on the face of it much more likely than Descartes’s, because it is much easier to implement. Anyone who has programmed knows that the easiest way to write a program to deal with an input is just to accept it, and that a check can be computationally expensive. Furthermore, how is one to understand a sentence without at least modeling the belief that the sentence is intended to elicit, so that one might at least understand what it means? (The sentence itself is merely a character/phoneme string and so does not yield meaning intrinsically.) The obvious and readily available way to model such a belief is to actually enter it: it is much easier simply to enter the actual brain state associated with the belief, perhaps adding a flag to mark it as nonserious, than to enter a wholly different state. We may infer from child studies that the higher-order skill of contemplating a belief without holding it is not immediately acquired, for it is only at age 4 or so (I think) that a child is able to understand that others have beliefs that differ from reality.
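The ease-of-implementation point can be sketched in code. Here is a toy model (names and structure are hypothetical, purely illustrative) in which accepting a statement is part of comprehending it, while unbelieving is a separate, effortful second step that a cognitively busy agent skips:

```python
def process(statements, busy=False):
    """Comprehend each (text, labeled_false) pair, Spinoza-style.

    Comprehension enters every proposition into the belief store
    immediately; rejecting a false-labeled proposition is a second,
    effortful step that a busy agent fails to perform.
    """
    beliefs = set()
    for text, labeled_false in statements:
        beliefs.add(text)             # step 1: accepting is part of comprehending
        if labeled_false and not busy:
            beliefs.discard(text)     # step 2: effortful unbelieving
    return beliefs

reports = [("the robber carried a gun", True),    # labeled false
           ("the robber apologized", False)]      # labeled true

print(process(reports, busy=False))  # false statement gets unbelieved
print(process(reports, busy=True))   # busy agent retains the false statement
```

A Cartesian agent, by contrast, would have to evaluate each statement before anything entered the belief store, so busyness would corrupt true and false statements symmetrically.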

Michael Rooney

Did you just believe that Descartes was modeling “cognitive-process flow” because some psychologist told you so? Or is it possible that Descartes was, y’know, prescribing how rationalists should approach belief, rather than describing how we generally do?

“Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.”

That sounds like what Sam Adams was saying at the Singularity Summit — the idea of “superstition” being essential to learning in some respects.

Jeremy McKibben-Sanders

This reminds me of a proof I was working on the other day. I was trying to show that a proposition (c) is true, so I used the following argument.

If (1) is true, then either (a) is true or (c) is true.
If (2) is true, then either (b) is true or (c) is true.
(a) and (b) cannot both be true.
(1) and (2) are true, so therefore (c) must be true.

This seems to follow Descartes’s model of consideration and then acceptance of the proposition (c). However, I could have saved myself about half a page of space if I had simply started out by rejecting (c) and then waited for a contradiction to “appear.”
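The little argument above can be checked mechanically. A brute-force truth-table enumeration (a sketch, treating (1), (2), (a), (b), (c) as booleans) confirms that every assignment satisfying the premises makes (c) true:

```python
from itertools import product

def premises(p1, p2, a, b, c):
    """The four premises: (1)->(a or c), (2)->(b or c), not(a and b), (1) and (2)."""
    return ((not p1 or a or c) and
            (not p2 or b or c) and
            not (a and b) and
            p1 and p2)

# The argument is valid iff (c) holds in every model of the premises.
valid = all(c for p1, p2, a, b, c in product([False, True], repeat=5)
            if premises(p1, p2, a, b, c))
print(valid)  # True
```

This is exactly the proof-by-contradiction shortcut: assuming (c) false forces both (a) and (b) true, contradicting the third premise.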

Of course this is quite the opposite of the Spinoza model, but like Constant said, it makes sense that you can save time and brain power by actively modeling a belief and then seeing what follows. As for why acceptance is the default, I’m not exactly sure. Perhaps it is simply quicker to accept a proposition than to waste time looking for its opposite.

So doesn’t this tie in well with your previous article about the denier’s dilemma? It seems, if Gilbert and Spinoza are right, that the CDC mythbusters problem – people misremembering as “true” the myths presented by the CDC – is an example of this mechanism, strengthened by the reinforcement effect of re-encountering the myth.

One of the most obvious examples of commonly encountered unreliable information is advertising. Gilbert’s results suggest that knowing the information in advertisements is highly unreliable doesn’t make you immune to their effects, which makes it a good idea to avoid perceiving advertisements entirely, especially in situations where you’re trying to concentrate on something else. The obvious way to do this is to use ad-blockers aggressively wherever possible; unfortunately, there are still media where this isn’t practical.

What about statements that are so loaded to their listeners that they’re rejected outright, with seemingly no consideration? Are they subject to the same process (and have such outrageous implications that they’re rejected at once), or do they work differently?

Constant

Contrary to what many seem to believe, I consider advertising to be one of the least harmful sources of unreliable information. For one thing, the cacophony of advertisements sends us contradictory messages. “Buy my product.” “No, buy my product.” One might argue that even such contradictory messages have a common element: “buy something.” However, I have not noticed that I spend less money now that I hardly ever put myself at the mercy of television advertising, so I have serious doubts about whether advertising genuinely increases a person’s overall spending. I notice, also, that I do not smoke, even though I have seen plenty of advertisements for particular brands of cigarettes. The impact of all those cigarette advertisements on my overall spending on cigarettes has evidently been minimal.

For another, the message itself seems not all that harmful in most cases. For example, suppose that advertising is ultimately the reason that I buy Tide detergent rather than another brand of detergent. How much am I harmed by this? The detergents all do pretty much the same thing.

And in many specific cases, where people’s behavior has been blamed on the nefarious influence of advertising, what I generally see is that the accuser has curiously neglected some alternative, very likely explanations. Smoking is attractive because it delivers a drug. Smoking was popular long before it was advertised. I suspect that no more than a very small fraction of smokers started smoking because of advertising.

I have heard that advertising mainly shifts consumers from one brand to another. In that sense it is wasteful, and an economist could give an argument for taxing it. But I happen to like the subsidy of media by advertisements, so I wouldn’t advocate such a tax.

If people are that much more trusting when they’re distracted, then it’s important not to multitask when you need to evaluate what you’re looking at. Maybe it’s just important not to multitask, period.

Nick Tarleton

In addition to advertisements, should we avoid fiction when we’re distracted?

nick

“Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.”

Whether this view is more accurate than Descartes’s depends on whether the belief in question is already commonly accepted. When, in the typical situation, a typical person Bob says “X is Y, therefore I will perform act A” or “X should be Y, therefore we should perform act A”, Bob is not making a statement about X or Y; he is making a statement about himself. All the truth or reality required for Bob to signal his altruism is that it be probable that he believes that X is Y or that X should be Y. The probability of this belief depends far more on what else Bob and his peers believe than on the reality or truth of “X is Y”.

Between teaching mathematics to freshmen and spending most of my time learning mathematics, I’ve noticed this myself. When presented with a new result, the first inclination, especially depending on the authority of the source, is to believe it and figure there’s a valid proof of it. But occasionally the teacher realizes they have made a mistake, and may even scold the students for not noticing, since it is incredibly obvious (e.g., something like ||z – z_0|| changing to ||z – z_1|| between steps, even though a few seconds’ thought reveals it to be a typo rather than a mathematical insight).

Sometimes (and for a few lucky people, most of the time) individuals are in a mental state where they are actively thinking through everything being presented to them. For me, this happens a few times a semester in class, and almost always during meetings with my advisor. And occasionally I have a student who does it when I’m teaching. But in my experience this is a mentally exhausting task and often leaves you think-dead for a while afterwards (I find I can go about 40 minutes before I give out).

All this leads me to a conclusion, drawn largely from my experience of what behavior produces what effects: that in mathematics the best way to teach is to assign problems and give students clues when they get stuck. The problems assigned, of course, should be ones that result in the student building up the mathematical theory. It’s certainly more time-consuming, but in the end more rewarding, in terms of both emotional satisfaction and understanding.