That’s one of the claims of Jacques Ellul in his book Propaganda, and it’s what got me excited about reading it. His explanation is that intellectuals want to have an opinion on every subject, they follow current events carefully, and because they’re intelligent they think they can understand what is going on. This makes them more prone to propaganda for three reasons: there isn’t enough time to form an informed opinion on every subject; following current events carefully means being led by the news agenda and investing energy in comprehending things within that given framework; and intelligence alone is not enough to understand complex events, which require a huge breadth of knowledge and experience.

I find this idea very interesting, but there’s another aspect I want to focus on: intellectuals try to understand the world by simplifying it. They want to reduce complex ideas to simple models, and to understand them by doing so. This ties in with Ellul’s claim, because if you have a simple model of the world that you think explains everything, it’s very hard to give it up. You end up reinterpreting events and facts to fit the theory rather than facing the much more onerous prospect of giving up the theory, which would require you to rethink the whole way you look at the world. One of Ellul’s points is that one of the two main functions of propaganda – what he calls integration propaganda – is to intensify existing ways of looking at the world and turn them into actions. Integration propaganda must work better on someone who has a strong personal incentive not to give up his existing simplified model of the world.

What I would like to understand, though, is why people who seem to be intelligent, caring, and even kind can believe things that are quite mad, things with consequences that are morally horrific. The obvious example is Nazism, but there are many less dramatic examples. Some people are not upset, for example, by the sight of a homeless person freezing on the streets during the winter.

I can see two sorts of explanation for this, one at the emotional level and one at the rational level. I’m going to come back to the emotional part in a future post, but roughly speaking it’s something like cognitive dissonance: it’s too hard to live in the world if you have an emotional reaction to everything that is horrible, and so we have a strong incentive to see the world in a way that makes unpleasant things seem inevitable or out of our control. The other type of explanation is that we want to understand things by simplifying them, but the world is too messy and complicated for this to really work, so we end up making the facts fit the theory.

The neoconservative economist believes that free markets are always efficient, so he sees the creation of new markets as the solution to all the world’s problems. The Marxist sees everything in terms of a dialectical process and class conflict. Both see the pain and suffering that follow from acting on these theories as necessary, and so are not shocked by them. The theist believes absolutely in the teachings of his religion, and so cannot see the human suffering that those beliefs can entail – for example, the Catholic who opposes the use of condoms in Africa. On the other hand, the vehement atheist sees that belief in God is wrong and so blames religion for all the world’s problems, blinding himself to the political or economic causes of many of them. Consequently, he can end up supporting incredibly bloody war and torture on a scale that dwarfs the Crusades, as in the cases of Christopher Hitchens and Sam Harris.

The intellectual is particularly prone to this sort of thinking, because reductionism is our intellectual cultural heritage, something intellectuals are totally immersed in. Reducing complex situations to simple models through mathematics, physics, etc. has enabled us to make enormous leaps forward in our understanding and control of the physical world. But there is no successful reductionist model of politics or of human problems. Attempting to find reductionist explanations of politics or human behaviour is a reasonable scientific endeavour, albeit so far an unsuccessful one. But believing that we are already in a position to understand people or politics so simplistically, and – worse – acting on those beliefs, is a gross intellectual error (even if it is an understandable one). Empiricism is incredibly hard: even trained scientists working in much more concrete fields than politics or human behaviour find it very difficult to separate a good scientific explanation of a phenomenon from a confusion.

We cannot wait for an empirical scientific understanding of politics. We have to try to understand the world now, and make decisions and act on that understanding. I think it is important to recognise that we cannot be over-reliant on reductionist models to guide our thinking on these matters, but that leaves a huge open problem of what we can rely on. My feeling is that we can use these models of the world, but we need to bear in mind that none of them has a very wide scope, and that all of them are likely to be wrong in fairly major ways. In the end, we need to rely on our essentially human judgement rather than our theories as the final arbiter of our political thinking. That doesn’t mean abandoning reason and logic; it means being committed to pragmatically training our judgement and trying to make decisions as best we can within the limits of our ability to reason about the world. It means attempting to imperfectly understand complex situations as they are rather than perfectly understanding over-simplifications of them. It means attending to the details rather than trying to find a theory that enables us to ignore them.

This is, of course, incredibly difficult. One strategy that may make it more tractable is to hold multiple, overlapping, and weakly held principles for understanding the world rather than a smaller number of strongly held principles which attempt to explain everything in one grand scheme. Combined with this is the strategy of holding multiple views on a given situation with varying degrees of conviction, rather than a single one. These views can even be, in fact probably must be, mutually contradictory. Again, this is not to abandon reason in favour of accepting contradiction, but to bear in mind that there are alternative views on a given situation rather than to put the alternatives out of mind. This may, of course, not be the best strategy. It would have been a bad strategy in the long term for understanding physics, for example. The test of whether it is a good strategy is whether it helps us to get a better understanding of things from an empirical and pragmatic point of view. The main point is not that this strategy is necessarily the best one, but that the reductionist strategy is consistently leading us into error.

I’ll end with an example of this method applied to a reasonably contemporary political problem: the US invasion of Iraq. Before the invasion happened, there was a huge debate about whether it was, or could be, a good thing. Perhaps, regardless of the US’ reasons for wanting the war, it could have been a good thing for the Iraqi people. Well, absolutely. It could, despite the hundreds of thousands of casualties, still be a good thing in the long run, although that seems a very remote possibility now. The reason I opposed it was not that I could foresee these hundreds of thousands of deaths – in fact they vastly exceeded my worst imaginings of how bad it could be – but that everything about the proposed war was dubious. The US and UK governments lied to us repeatedly, and their motives were clearly neither disarmament nor helping the Iraqi people. Mostly, I felt that whether or not the war had a positive outcome would depend on the way in which it was conducted, and given that the principal agent clearly didn’t have the interests of disarmament or the Iraqi people at heart, I couldn’t believe that they would conduct it well. My opposition to the war was not based on predictions about what would happen; it wasn’t based on the illegality of the war under international law (which wouldn’t concern me greatly if the war had really been a huge success for the Iraqi people); it was made in ignorance of what the US’ real motives in the war were. And yet I believe that, despite all that uncertainty and ignorance on my part, my judgement was essentially correct, and that subsequent events have shown that to be the case. You can read what I wrote about it in February 2003 here.

p.s. I’m not sure that I would recommend Ellul’s book. I haven’t finished it yet, but it appears to be rather self-contradictory from chapter to chapter and even occasionally from paragraph to paragraph.

p.p.s. When you’re reading a book about propaganda on the train, it’s weird how, when you look up from it, you suddenly realise that everyone around you is reading propaganda: the Economist telling you how great capitalism is; the glossy magazine telling women they have to look like these incredibly thin models; everything stuffed full of adverts, advertorials and PR-driven stories.

A great post and I agree with the tone of your point. But then we live in large societies and social conglomerations are prone to an intellectual herd mentality. Hence the hysteria. And this mentality isn’t simply limited to the most pedestrian facets of society but also extends well into our political, economic and security machinery. Perhaps the best case scenario (strategically) lies in a shift of focus from reaction (3 oz. bottles and granny taking her shoes off at the airport) to resilience (a framework to ensure recovery from that which we cannot predict.)

I think I lean more towards Cognitive Dissonance. Some things we have to ignore because they’re too inconvenient.

Evolutionary psychology is a bit of a mug’s game – its ideas are hardly testable – but try this: think of the way our brains work in handling complex information. We learn to pattern-match, tending towards the answers that are right most of the time. In the psychology of decision making, this is known as ‘satisficing’. It explains many visual-illusion phenomena where the brain leaps to the wrong conclusion based on a stimulus which *mostly* looks like a familiar object. It will be right 99.9etc% of the time…except when some cruel psychologist tries to trick us. Why do we do this? Better to run away from something that might be a lion than to hang around until you’re sure. Plus it’s an efficient way of using our limited brains. Is reductionism an inevitable human tendency?

On to Hitchens…I’m afraid I have to reluctantly defend him again. Yes, he does say that ‘Religion poisons everything’, and I see a certain paranoia in some of his recent Islamophobic comments…but that’s not why he supported the Iraq war. He was capable of recognising the difference between al-Qaeda and a secular tyrant. To an extent his position was intellectually respectable…but wrong for the same reasons you give for your own opposition. (Much the same as mine.) Dare I say…you oversimplified his views. ;-)

(I may blog – or e-mail – more on his intellectual respectability. At the very least, he’s not disingenuous in the way the Republican cabinet are).

I think you’re right about cognitive dissonance (or something like it), and I’m going to write something about that as soon as I get the time.

I don’t think reductionism is inevitable, but it’s kind of natural and unsurprising as you say, especially given how successful it’s been. But we can do better. After all, it seems as though we’re finally beginning to get over belief in God and we’re long past believing in spirits and fairies, although these things are fairly natural.

But you make a fair point about Hitchens, he didn’t support the war for any reasons to do with religion.

[…] In a previous entry I talked about Jacques Ellul’s book “Propaganda” and the idea that intellectuals are most subject to propaganda because they want to believe that they understand the world, but lacking sufficient time to really do so they rely on answers provided to them by others (putting them in the power of those others). The other aspect of propaganda is what Ellul calls “integration propaganda”. The idea is that once you have participated in an action, you will rationalise that action and create a justification for why it was the right thing to do. The propagandist only needs to get you to participate in an action and you will do the intellectual reorganisation yourself. This is an aspect of propaganda that most people don’t understand (believing that propaganda is just a way of getting people to believe something by repeatedly saying it, or some other such simplification). This is essentially a form of cognitive dissonance: nobody wants to consider themselves the bad guy. Or in the framework of this essay, people want to think of themselves as coherent and consistent, so if they took an action they must have had a reason for doing so. Recognising that we are not consistent, rational beings working to some perhaps unknown moral code then has the potential to free us from integration propaganda. Taking our own arationality and inconsistency as a given, we would no longer feel the same requirement to create a self-justifying rationalisation, and so the propaganda would not have its desired effect. […]

i think this first comment is extremely true. however, i would qualify it a little. i think that science intellectuals are particularly vulnerable if they try to import their ‘first principles’ approach into more political arenas and in my experience can tend to latch onto the first coherent explanation of something they come across. i think that good humanities intellectuals should be less vulnerable as they will tend to be more cautious towards causal explanations of things. in fact, this difference in approach and possibility of being more robust in the face of propaganda should be one raison d’etre for humanities as well as science intellectuals.

the above is a ridiculous generalisation of course. but i have had too many experiences of being frustrated by the naivety of know-it-all scientists giving a two-paragraph explanation of things from the credit crunch to the iraq war.

Hey Sean. I don’t agree that scientists are worse. There is a particular sort of crass scientific type that does what you say (though not all of them, and maybe theoretical scientists like you and me are more prone to it than people who work more empirically), but there is another type of trap that intellectuals in the humanities can fall into. They have explicitly political theories to apply, and these theories are largely untested against any evidence. Moreover, the grand nature of these theories and their apparent universal applicability can lead you into thinking you have a level of understanding of the world that you don’t really have.

hey. well, i did say ‘good’ humanities intellectuals.. my impression is that the current tendency in humanities is not towards grand theories. of course, like you said, the scientist i outlined is a stereotype. however, i do think that there is a possibly interesting counterintuitive fact that the scientists, despite their supposed grounding in empirical self-critical methodology, etc., are more apt to extrapolate incorrectly from partial knowledge. but that’s just an impression.

1. We spot patterns where none exist. He calls these cognitive illusions, analogous to optical illusions. We expect too much alternation in a random sequence.

2. We have a bias toward positive evidence. Given a hypothesis, we look for evidence to prove it, not for evidence to disprove it. He gives an elegantly simple example: there are four cards, each with a letter on one side and a number on the other. The faces you can see are ‘A’, ‘B’, ‘2’, and ‘3’. You are asked to test the hypothesis that each card with a vowel on one side has an even number on the other. There is, of course, no need to turn over the ‘2’ to test this hypothesis, but most people will. They’re looking for information consistent with the hypothesis. Similarly, asked to check whether someone is an extrovert, most of us will ask her “Do you like going to parties?”, not “Do you like curling up on the sofa with a good book?”

3. Lotto winners get more TV time than lotto losers do. The information is more readily available to our brains. The MMR “victims” were also highly publicised.

4. Communal reinforcement. Asch’s experiments into social conformity are mentioned.
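Point 1 above, the expectation of too much alternation, is easy to demonstrate: a genuinely random sequence contains longer runs of identical outcomes than most people predict. A minimal sketch in Python (the helper name `longest_run` is my own, not from any source mentioned here):

```python
import random

def longest_run(seq):
    """Length of the longest run of identical consecutive items."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# People asked to write down a "random-looking" coin-flip sequence
# rarely include runs longer than 3 or 4; genuinely fair flips
# typically contain a run of roughly log2(n) for n flips.
flips = [random.choice("HT") for _ in range(100)]
print(longest_run(flips))
```

Running this a few times, the longest run in 100 fair flips usually comes out around 6 or 7, noticeably longer than intuition suggests.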
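The card example in point 2 (the Wason selection task) can also be checked mechanically: a card needs turning only if its visible face could falsify the rule “vowel implies even number”. A short sketch (the helper name `must_turn` is my own):

```python
def must_turn(face: str) -> bool:
    """A face must be checked only if the hidden side could falsify
    "vowel implies even number": a visible vowel (the hidden number
    might be odd) or a visible odd number (the hidden letter might
    be a vowel)."""
    if face.isalpha():
        return face.lower() in "aeiou"
    return int(face) % 2 == 1

cards = ["A", "B", "2", "3"]
print([c for c in cards if must_turn(c)])  # → ['A', '3']
```

Note that the ‘2’ is correctly excluded: whatever letter is behind it, the rule survives, which is exactly why turning it over yields no information.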

Yes, I think all those are correct, but not enough to explain all the things I’d like to understand. In particular, in the entry above I was thinking about propaganda, and why intellectuals should be subject to it (perhaps more than non-intellectuals). Only (3) and (4) really bear on this, and although I’m sure they contribute, it’s not clear how they contribute, or that they’re enough on their own to explain it all.

[…] hold a position that might be wrong, should be compared to an earlier article in which I argue that intellectuals are prone to propaganda because they think they understand things better than they do,…. On the face of it, that looks contradictory but I think it’s not: they are two types of […]