Jesse Marczyk Ph.D.

“Couldn’t-Even-Possibly-Be-So Stories”: Just-World Theory

When "Explanations" Aren't.

Just-world theory, as presented by Hafer (2000), is a very strange kind of theory. It begins with the premise that people have a need to believe in a just or fair world, so people think that others shouldn’t suffer or gain unless they did something to deserve it. More precisely, “good” people are supposed to be rewarded and “bad” people are supposed to be punished, or something like that, anyway. When innocent people suffer, then, this belief is supposedly “threatened”, so, in order to remove the threat and maintain their just-world belief, people derogate the victim. This makes the victim seem less innocent and more deserving of their suffering, so the world can again be viewed as just.

I'll bet that guy made Santa's "naughty" list...

Phrased in terms of adaptationist reasoning, just-world theory would go something like this: humans face the adaptive problem of maintaining a belief in a just world in the face of contradictory evidence. People solve this problem with cognitive mechanisms that function to alter that contradictory evidence into confirmatory evidence. The several problems with this suggestion ought to jump out clearly at this point, but let’s take them one at a time and examine them in some more depth. The first issue is that the adaptive problem being posited here isn’t one; indeed, it couldn’t be. Holding a belief, regardless of whether that belief is true, is a lot like “feeling good”, in that neither of them, on its own, actually does anything evolutionarily useful. Sure, beliefs (such as “Jon is going to attack me”) might motivate you to execute certain behaviors (running away from Jon), but it is those behaviors which are potentially useful; not the beliefs per se. Natural selection can only “see” what you do; not what you believe or how you feel. Accordingly, maintaining a belief could not even potentially be an adaptive problem.

But let’s assume for the moment that maintaining a belief could be a possible adaptive problem. Even granting this, just-world theory runs directly into a second issue: why would contradictory evidence “threaten” that belief in the first place? It seems perfectly plausible that an individual could simply believe whatever it was important to believe and be done with it, rather than rationalizing that belief to ensure it’s consistent with other beliefs, or accurate. For instance, say that, for whatever reason, it’s adaptively important for people to believe that anyone who leaves their house at night will die. Then someone who believes this observes their friend Max leave the house at night and return very much alive. The observer in this case could, it seems, go right on believing that anyone who leaves their house at night will die without also needing to believe either that (a) Max didn’t leave his house at night or (b) Max isn’t alive. While the observer might also believe one or both of those things, whether or not they did would seem to be irrelevant.

On a related note, it’s also worth noting that just-world theory seems to imply that the adaptive goal here is to hold an incorrect belief – that “the world” is just. Now there’s nothing implausible about the suggestion that an organism can be designed to be strategically wrong in certain contexts; when it comes to persuading others, for instance, being wrong can be an asset at times. When you aren’t trying to persuade others of something, however, being wrong will be neutral at best and exceedingly maladaptive at worst. So what does Hafer (2000) suggest the function of such incorrect beliefs might be?

By [dissociating from an innocent victim], observers can at least be comforted that although some people are unjustly victimized in life, all is right with their own world and their own investments in the future (emphasis mine)

As I mentioned before, this explanation couldn’t even possibly work, as “feeling good” isn’t one of those things that does anything useful by itself. As such, maintaining an incorrect belief for the purposes of feeling good fails profoundly as a proper explanation for any behavior.

On top of all the aforementioned problems, there’s also a major experimental problem: just-world theory only seems to have been tested in one direction. Without getting too much into the methodological details of her studies, Hafer (2000) found that when a victim was “innocent”, subjects who were primed to think about their long-term plans were slightly more likely to blame the victim for their negative life outcome, derogate them, and dissociate from them (i.e., “they should have been more cautious” and “what happened to them is not likely to happen to me”), relative to subjects who were not primed for the long term. Hafer’s interpretation of these results was that, at least in the long-term condition, the innocent victim threatened the just-world belief, so people in turn perceived the victim as less innocent.

While the innocent-victims-being-blamed angle was examined, Hafer (2000) did not examine the opposite context: that of the undeserving recipient. Let’s say there was someone you really didn’t like, and you found out that this someone recently came into a large sum of money through an inheritance. Presumably, this state of affairs would also “threaten” your just-world belief; after all, bad people are supposed to suffer, not benefit, so you’d be left with a belief-threatening inconsistency. If we presented subjects with such a scenario, would we expect them to “protect” their just-world belief by reframing their disliked recipient as a likable and deserving one? While I admittedly have no data bearing on that point, my intuitive answer to the question would be a resounding “probably not”; they’d probably just view their rival as a richer pain in their ass after receiving the cash. It’s not as if intuitions about who’s innocent and guilty seem to shift simply on the basis of received benefits and harms; the picture is substantially more nuanced.

To reiterate, I’m happy to see psychologists thinking about functions when developing their research; while such a focus is by no means sufficient for generating good research or sensibly interpreting results (as we’ve just seen), I think it’s an important step in the right direction. The next major step would be for psychological researchers to learn to better differentiate plausible from implausible functions, and for that they need evolutionary theory. Without evolutionary theory, ostensible explanations like “feeling good” and “protecting beliefs” can be viewed as acceptable and, in some cases, even as useful, despite being anything but.

Just-world belief seems to me to be just a special case of cognitive dissonance reduction. So if you distrust the former, you call the latter into question. Here it is in the Wikipedia formulation: "The theory of cognitive dissonance in social psychology proposes that people have a motivational drive to reduce dissonance by altering existing cognitions". But this is a cornerstone of social psychology and has been tested many times. To quote from memory: if people are talked into giving a counter-attitudinal lecture for very little money, they will change their beliefs in the direction of the lecture. They would not need to do so, judging from simple evolutionary logic. If people have just bought a car, they start looking only for advertisements that make their new car look good. Again, they would not need to. So actually, from the logic of your argument, you would have to call into question the work of Leon Festinger and much of the work in that tradition.

This explanation - the dissonance-reduction function - doesn't work for the same reasons that just-world theory doesn't: reducing dissonance, in and of itself, doesn't do anything. In order to reach a functional account, you need to reference some fitness-enhancing effect.

To be clear, people do alter their attitudes and attempt to appear consistent. These are, however, descriptions of things which need an explanation; not explanations in themselves.

I agree with Rolf. Just world beliefs are a form of dissonance reduction.

There is a branch of dissonance theory that proposes that dissonance reduction is an adaptive process. It's called the action-based model of dissonance. The idea is that reducing dissonance regarding our choices allows us to take effective and unconflicted action regarding those decisions. This presumes that, in general, it's better to follow through with decisions than to second-guess ourselves and continue to vacillate between choices.

That at least resembles a potential functional account. One wants to take action, rather than remain immobilized like the donkey that starves to death because it finds itself between two equally-appealing stacks of food. It puts the emphasis on what dissonance reduction might do that is useful, rather than how it makes an organism feel. I happen to have a different take on the matter, but your account is at least plausible.

When it comes to just-world theory, however, the theory is positing that one makes decisions and takes actions on the basis of incorrect information. The posited function of these just-world mechanisms is to be wrong. How exactly being wrong is supposed to aid in good decision making would still need to be resolved.

There are other cases where people make decisions on the basis of incorrect information, and they are among the most interesting phenomena in our field. Look at "choice blindness", for example, where people start to defend choices (reduce dissonance) that they did not even make in the first place.

I've already written about choice blindness previously (here: http://popsych.org/dinner-with-a-side-of-moral-stances/). I'm not sure there's much to that beyond subjects simply not paying attention, but if there is, the authors don't really offer much of an explanation for it either.

There are a lot of phenomena in psychology that offend simple evolutionary logic in worse ways than the ones we are talking about. Look at "magical thinking". I do not mean fringe phenomena like astrology, but the common magical thinking embedded in everyday cognition that none of us can escape. People who take out insurance, for example, do not only think they are now insured in case bad things happen. They also think that now the bad things will not happen in the first place! Look it up:

http://psp.sagepub.com/content/34/10/1346.abstract

Well, if that is not incorrect information. These boring philistines at the insurance companies are in reality Harry Potter disciples. And there are other data showing that we do highly social things (like going to vote) only because of magical thinking ("If I do not do it, the others will also refrain from doing it").

Yes, exactly, the action-based model is about the need to avoid being Buridan's ass.

One could simply posit that just-world beliefs are an over-generalization of a fundamentally adaptive process. We'd never claim that dissonance reduction is always beneficial, only that it's beneficial more often than not. So, just-world beliefs could be an instance where it's not beneficial.

On the other hand, just-world beliefs could be adaptive if the "good" behavior they support is adaptive. For example, let's say I believe that non-smokers don't get lung cancer. This belief helps me not smoke tobacco. Then a non-smoker of my acquaintance gets lung cancer. So I convince myself that he must be lying; he was actually a closet smoker. My (incorrect) just-world belief then continues to support the adaptive behavior of avoiding tobacco.