Cultural Contention over the Concept of Brainwashing (Benjamin Zablocki, 2001)

NOTE: The following article is taken from the fifth chapter of Misunderstanding Cults: Searching for Objectivity in a Controversial Field, entitled ‘Towards a Demystified and Disinterested Scientific Theory of Brainwashing.’

That Word ‘Brainwashing’

The word brainwashing is, in itself, controversial and arouses hostile feelings. Since there is no scientific advantage in using one word rather than another for any concept, it may be reasonable in the future to hunt around for another word that is less polemical. We need a universally recognized term for a concept that stands for a form of influence manifested in a deliberately and systematically applied traumatizing and obedience-producing process of ideological resocialization.

Currently, brainwashing is the generally accepted term for this process, but I see no objection to finding another to take its place. There are in fact other terms, historically, that have been used instead, like ‘thought reform’ and ‘coercive persuasion.’ Ironically, it has been those scholars who complain the most about ‘the B-word’ who have also been the most insistent that none of the alternatives is any better. As long as others in the field insist on treating all possible substitute constructions as nothing more than gussied-up synonyms for a mystified concept of brainwashing (see, for example, Introvigne 1998: 2), there is no point as yet in trying to introduce a more congenial term.

An overly literal reading of the word brainwashing (merely a literal translation of the accepted Chinese term xi nao) could be misleading, as it seems to imply the ability to apply some mysterious biochemical cleanser to people’s brains. However, the word has never been intended as a literal designator but as a metaphor. It would be wise to heed Clifford Geertz’s (1973: 210) warning in this connection, to avoid such a ‘flattened view of other people’s mentalities [that] more complex meanings than [a] literal reading suggests [are] not even considered.’

Thus, please don’t allow yourself to become prejudiced by a visceral reaction to the word instead of attending to the underlying concept. There is a linguistic tendency, as the postmodernist critics have taught us, for the signified to disappear beneath the signifier. But the empirically based social sciences must resist this tendency by defining terms precisely. The influence of media-driven vulgarizations of concepts should be resisted. This chapter argues for the scientific validity of a concept, not a word. If you are interested in whether the concept has value, but you gag on the word, feel free to substitute a different word in its place. I myself have no particular attachment to the word brainwashing.

But if all we are talking about is an extreme form of influence, why do we need a special name for it at all? The name is assigned merely for convenience. This is a common and widely accepted practice in the social sciences. For example, in economics a recession is nothing more than a name we give to two consecutive quarters of economic contraction. There is nothing qualitatively distinctive about two such consecutive quarters as opposed to one or three. The label is assigned arbitrarily at a subjective point at which many economists begin to get seriously worried about economic performance. This label is nevertheless useful as long as we don’t reify it by imagining that it stands for some real ‘thing’ that happens to the economy when it experiences precisely two quarters of decline. Many other examples of useful definitions marking arbitrary points along a continuum could be cited. There is no objective way to determine the exact point at which ideological influence becomes severe and encompassing enough, and its effects long lasting enough, for it to be called brainwashing. Inevitably, there will be marginal instances that could be categorized either way. But despite the fact that the boundary is not precisely defined, it demarcates a class of events worthy of systematic study.
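The recession analogy can be made concrete in a few lines. The sketch below is a toy illustration of the conventional two-consecutive-quarters rule; the function name and the threshold are illustrative conventions, not an official definition:

```python
def is_recession(quarterly_growth):
    """Apply the conventional 'two consecutive quarters of contraction'
    rule to a sequence of quarterly growth figures.

    The two-quarter threshold marks an arbitrary point on a continuum;
    the label is a naming convention, not a natural kind.
    """
    return any(a < 0 and b < 0
               for a, b in zip(quarterly_growth, quarterly_growth[1:]))
```

A single quarter of contraction (`[0.8, -0.2, 0.5]`) does not earn the label, while two in a row (`[0.8, -0.2, -0.1]`) does, even though nothing qualitatively new happens to the economy at the second quarter; the same holds for any cut-off we might one day agree on for ‘brainwashing.’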

The Reciprocal Moral Panic

Study of brainwashing has been hampered by partisanship and tendentious writing on both sides of the conflict. In one camp, there are scholars who very badly don’t want there to be such a thing as brainwashing. Its non-existence, they believe, will help assure religious liberty, which can only be procured by defending the liberty of the most unpopular religions. If only the non-existence of brainwashing can be proved, the public will have to face up to the hard truth that some citizens choose to follow spiritual paths that may lead them in radical directions. This camp has exerted its influence within academia. But, instead of using its academic skills to refute the brainwashing conjecture, it has preferred to attack a caricature of brainwashing supplied by anti-cult groups for litigational rather than scientific purposes.

In the other camp, we find scholars who equally badly do want there to be such a thing as brainwashing. Its existence, they believe, will give them a rationale for opposition to groups they consider dangerous. A typical example of their reasoning can be found in the argument put forth by Margaret Singer that ‘Despite the myth that normal people don’t get sucked into cults, it has become clear over the years that everyone is susceptible to the lure of these master manipulators’ (Singer 1995: 17). Using a form of backward reasoning known as the ecological fallacy, she argues from the known fact that people of all ages, social classes, and ethnic backgrounds can be found in cults to the dubious conclusion that everyone must be susceptible. These scholars must also share some of the blame for tendentious scholarship. Lacking positions of leadership in academia, scholars on this side of the dispute have used their expertise to influence the mass media, and they have been successful because sensational allegations of mystical manipulative influence make good journalistic copy.

It’s funny in a dreary sort of way that both sides in this debate agree that it is a David and Goliath situation, but each side fancies itself to be the David courageously confronting the awesome power of the opposition. Each side makes use of an exaggerated fear of the other’s influence to create the raw materials of a moral panic (Cohen 1972; Goode and Ben-Yehuda 1994). Thus, a disinterested search for truth falls victim to the uncompromising hostility created by each side’s paranoid fear of the power of the other.

David with the head of Goliath.

The ‘cult apologists’ picture themselves as fighting an underdog battle against hostile lords of the media backed by their armies of ‘cult-bashing’ experts. The ‘cult bashers’ picture themselves as fighting an underdog battle for a voice in academia in which apologists seem to hold all the gatekeeper positions. Each side justifies its rhetorical excesses and hyperbole by reference to the overwhelming advantages held by the opposing side within its own arena. But over the years a peculiar symbiosis has developed between these two camps. They have come to rely on each other to define their positions. Each finds it more convenient to attack the positions of the other than to do the hard work of finding out what is really going on in cults. Thomas Robbins (1988: 74) has noted that the proponents of these two models ‘tend to talk past each other since they employ differing interpretative frameworks, epistemological rules, definitions… and underlying assumptions.’ Most of the literature on the subject has been framed in terms of rhetorical disputes between these two extremist models. Data-based models have been all but crowded out.

Between these two noisy and contentious camps, we find the curious but disinterested scientist who wants to find out if there is such a thing as brainwashing but will be equally satisfied with a positive or negative answer. I believe that there can and should be a moderate position on the subject. Such a position would avoid the absurdity of denying any reality to what thousands of reputable ex-cult members claim to have experienced, turning this denial into a minor cousin of Holocaust denial. At the same time, it would avoid the mystical concept of an irresistible and overwhelming force that was developed by the extremist wing of the anti-cult movement.

One of the most shameful aspects of this whole silly affair is the way pro-religion scholars have used their academic authority to foist off the myth that the concept of brainwashing needs no further research because it has already been thoroughly debunked. Misleadingly, it has been argued (Introvigne forthcoming; Melton forthcoming) that the disciplines of psychology and sociology, through their American scholarly associations, have officially declared the concept of brainwashing to be so thoroughly discredited that no further research is needed. Introvigne, by playing fast and loose with terminology, attempts to parlay a rejection of a committee report into a rejection of the brainwashing concept by the American Psychological Association. He argues that ‘To state that a report “lacks scientific rigor” is tantamount to saying that it is not scientific’ (Introvigne 1998: 3), gliding over the question of whether the ‘it’ in question refers to the committee report or the brainwashing concept.2 Conveniently, for Introvigne, the report in question was written by a committee chaired by Margaret Singer, whose involuntarist theory of brainwashing is as much a distortion of the foundational concept as Introvigne’s parody of it.

The truth is that both of these scholarly associations (American Psychological Association and American Sociological Association) were under intense pressure by a consortium of pro-religious scholars (a.k.a. NRM scholars) to sign an amicus curiae brief alleging consensus within their fields that brainwashing theory had been found to be bunk. This was in regard to a case concerning Moonie brainwashing that was before the United States Supreme Court (Molko v Holy Spirit Ass’n., Supreme Court of Calif. SF 25038; Molko v Holy Spirit Ass’n, 762 P.2d 46 [Cal. 1988], cert. denied, 490 U.S. 1084 [1989]). The bottom line is that both of the associations, after bitter debate, recognized that there was no such consensus and refused to get involved. Despite strenuous efforts of the NRM scholars to make it appear otherwise, neither professional association saw an overwhelming preponderance of evidence on either side. Both went on the record with a statement virtually identical to my argument in this chapter: that not nearly enough is known about this subject to be able to render a definitive scientific verdict, and that much more research is needed. A few years later, the Society for the Scientific Study of Religion went on record with a similar statement, affirming ‘the agnostic position’ on this subject and calling for more research (Zablocki 1997: 114).

Although NRM scholars have claimed to be opposed only to the most outrageously sensationalized versions of brainwashing theory, the result, perhaps unintended, of their campaign has been to bring an entire important area of social inquiry to a lengthy halt. Evidence of this can be seen in the fact that during the period of 1962 to 2000, a time when cults flourished, not a single article supportive of brainwashing has been published in the two leading American journals devoted to the sociology of religion, although a significant number of such articles have been submitted to those journals and more than a hundred such articles have appeared in journals marginal to the field (Zablocki 1998: 267).

The erroneous contention that brainwashing theory has been debunked by social science research has been loudly and frequently repeated, and this ‘big lie’ has thus come to influence the thinking of neutral religion scholars. For example, even Winston Davis, in an excellent article on suicidal obedience in Heaven’s Gate, expresses characteristic ambivalence over the brainwashing concept:

‘Scholarship in general no longer accepts the traditional, simplistic theory of brainwashing… While the vernacular theory of brainwashing may no longer be scientifically viable, the general theory of social and psychological conditioning is still rather in good shape… I therefore find nothing objectionable [sic] in Benjamin Zablocki’s revised theory of brainwashing as ‘a set of transactions between a charismatically led collectivity and an isolated agent of the collectivity with the goal of transforming the agent into a deployable agent.’ The tale I have to tell actually fits nicely into several of Robert Lifton’s classical thought reform categories’ (Davis 2000: 241-2).

The problem with this all too typical way of looking at things is that I am not presenting some new revised theory of brainwashing but simply a restatement of Robert Lifton’s (1989, 1999) careful and rigorous theory in sociological terms.

There are, I believe, six issues standing in the way of our ability to transcend this reciprocal moral panic. Let us look closely at each of these issues with an eye to recognizing that both sides in this conflict may have distorted the scientifically grounded theories of the foundational theorists, Lifton (1989), Sargant (1957), and Schein (1961), as they apply to cults.

The Influence Continuum

The first issue has to do with the contention that brainwashing is a newly discovered form of social influence involving a hitherto unknown social force. There is nothing about charismatic influence and the obedience it instills that is mysterious or asks us to posit the existence of a new force. On the contrary, everything about brainwashing can be explained entirely in terms of well-understood scientific principles. As Richard Ofshe has argued: ‘Studying the reform process demonstrates that it is no more or less difficult to understand than any other complex social process and produces no results to suggest that something new has been discovered. The only aspect of the reform process that one might suggest is new, is the order in which the influence procedures are assembled and the degree to which the target’s environment is manipulated in the service of social control. This is at most an unusual arrangement of commonplace bits and pieces’ (1992: 221-2).

Would-be debunkers of the brainwashing concept have argued that brainwashing theory is not just a theory of ordinary social influence intensified under structural conditions of ideological totalism, but is rather a ‘special’ kind of influence theory that alleges that free will can be overwhelmed and individuals brought to a state of mind in which they will comply with charismatic directives involuntarily, having surrendered the capability of saying no. Of course, if a theory of brainwashing really did rely upon such an intrinsically untestable notion, it would be reasonable to reject it outright.

The attack on this so-called involuntarist theory of brainwashing figures prominently in the debunking efforts of a number of scholars (Barker 1989; Hexham and Poewe 1997; Melton forthcoming), but is most closely identified with the work of Dick Anthony (1996), for whom it is the linchpin of the debunking argument. Anthony argues, without a shred of evidence that I have been able to discover, that the foundational work of Lifton and Schein and the more recent theories of Richard Ofshe (1992), Stephen Kent (Kent and Krebs 1998), and myself (1998) are based upon what he calls the ‘involuntarism assumption.’ It is true that a number of prominent legal cases have hinged on the question of whether the plaintiff’s free will had been somehow overthrown (Richardson and Ginsburg 1998). But nowhere in the scientific literature has there been such a claim. Foundational brainwashing theory has not claimed that subjects were robbed of their free will. Neither the presence nor the absence of free will can ever be proved or disproved. The confusion stems from the difference between the word free as it is used in economics as an antonym for costly, and as it is used in philosophy as an antonym for deterministic. When brainwashing theory speaks of individuals losing the ability to freely decide to obey, the word is being used in the economic sense. Brainwashing imposes costs, and when a course of action has costs it is no longer free. The famous statement by Rousseau (1913: 3) that ‘Man is born free, and everywhere he is in chains’ succinctly expresses the view that socialization can impose severe constraints on human behaviour. Throughout the social sciences, this is accepted almost axiomatically. It is odd that only in the sociology of new religious movements is the importance of socialization’s ability to constrain largely ignored.

Unidirectional versus Bi-directional Influence

The second issue has to do with controversy over whether there are particular personality types drawn to cults and whether members are better perceived as willing and active seekers or as helpless and victimized dupes, as if these were mutually exclusive alternatives. Those who focus on the importance of the particular traits that recruits bring to their cults tend to ignore the resocialization process (Anthony and Robbins 1994).3 Those who focus on the resocialization process often ignore personal predispositions (Singer and Ofshe 1990).

All this reminds me of being back in high school when people used to gossip about girls who ‘got themselves pregnant.’ Since that time, advances in biological theory have taught us to think more realistically of ‘getting pregnant’ as an interactive process involving influence in both directions. Similarly, as our understanding of totalistic influence in cults matures, I think we will abandon unidirectional explanations of cultic obedience in favour of more realistic, interactive ones. When that happens, we will find ourselves able to ask more interesting questions than we do now. Rather than asking whether it is the predisposing trait or a manipulative process that produces high levels of uncritical obedience, we will ask just what predisposing traits of individuals interact with just what manipulative actions by cults to produce this outcome.

A number of the debunking authors use this artificial and incorrect split between resocialization and predisposing traits to create a divide between cult brainwashing theory and foundational brainwashing theory as an explanation for ideological influence in China and Korea in the mid-twentieth century. Dick Anthony attempts to show that the foundational literature really embodied two distinct theories. One, he claims, was a robotic control theory that was mystical and sensationalist. The other was a theory of totalitarian influence that was dependent for its success upon pre-existing totalitarian beliefs of the subject which the program was able to reinvoke (Anthony 1996: i). Anthony claims that even though cultic brainwashing theory is descendant from the former, it claims its legitimacy from its ties to the latter.

The problem with this distinction is that it is based upon a misreading of the foundational literature (Lifton 1989; Schein 1961). Lifton devotes chapter 5 of his book to a description of the brainwashing process. In chapter 22 he describes the social structural conditions that have to be present for this process to be effective. Anthony misunderstands this scientific distinction. He interprets it instead as evidence that Lifton’s work embodies two distinct theories: one bad and one good (Anthony and Robbins 1994). The ‘bad’ Lifton, according to Anthony, is the chapter 5 Lifton who describes a brainwashing process that may have gone on in Communist reindoctrination centres, but which, according to Anthony, has no applicability to contemporary cults. The ‘good’ Lifton, on the other hand, describes in chapter 22 a structural situation that Anthony splits off and calls a theory of thought reform. Anthony appears to like this ‘theory’ better because it does not involve anything that the cult actually does to the cult participant (Anthony and Robbins 1995). The cult merely creates a totalistic social structure that individuals with certain predisposing traits may decide that they want to be part of.

Unfortunately for Anthony, there are two problems with such splitting. One is that Lifton himself denies any such split in his theory (Lifton 1995, 1997). The second is that both an influence process and the structural conditions conducive to that process are necessary for any theory of social influence. As Lifton demonstrates in his recent application of his theory to a Japanese terrorist cult (Lifton 1999), process cannot be split off from structure in any study of social influence.

Condemnatory Label versus Contributory Factor

The third issue has to do with whether brainwashing is meant to replace other explanatory variables or work alongside them. Bainbridge (1997) and Richardson (1993) worry about the former, complaining that brainwashing explanations are intrinsically unifactoral, and thus inferior to the multifactoral explanations preferred by modern social science. But brainwashing theory has rarely, if ever, been used scientifically as a unifactoral explanation. Lifton (1999) does not attempt to explain all the obedience generated in Aum Shinrikyo by the brainwashing mechanism. My explanation of the obedience generated by the Bruderhof relies on numerous social mechanisms of which brainwashing is only one (Zablocki 1980). The same can be said for Ofshe’s explanation of social control in Synanon (1976). Far from being unifactoral, brainwashing is merely one essential element in a larger strategy for understanding how charismatic authority is channelled into obedience.

James Thurber once wrote a fable called The Wonderful O (1957), which depicted the cultural collapse of a society that was free to express itself using twenty-five letters of the alphabet but was forbidden to use the letter O for any reason. The intellectual convolutions forced on Thurber’s imaginary society by this ‘slight’ restriction are reminiscent of the intellectual convolutions forced on the NRM scholars by their refusal to include brainwashing in their models. It is not that these scholars don’t often have considerable insight into cult dynamics, but the poor mugs are, nevertheless, constantly getting overwhelmed by events that their theories are unable to predict or explain. You always find them busy playing catch-up as they scramble to account for each new cult crisis as it develops on an ad hoc basis. The inadequacy of their models cries out ‘specification error’ in the sense that a key variable has been left out.

The Thurberian approach just does not work. We have to use the whole alphabet of social influence concepts from Asch to Zimbardo (including the dreaded B-word) to understand cultic obedience. Cults are a complex social ecology of forces involving attenuation effects (Petty 1994), conformity (Asch 1951), crowd behaviour (Coleman 1990), decision elites (Wexler 1995), deindividuation (Festinger, Pepitone et al. 1952), extended exchange (Stark 1999), groupthink (Janis 1982), ritual (Turner 1969), sacrifice and stigma (Iannaccone 1992), situational pressures (Zimbardo and Anderson 1993), social proof (Cialdini 1993), totalism (Lifton 1989), and many others. Personally, I have never seen a cult that was held together only by brainwashing and not also by other psychological factors, as well as genuine loyalty to ideology and leadership.

Arguments that brainwashing is really a term of moral condemnation masquerading as a scientific concept have emerged as a reaction to the efforts of some anti-cultists (not social scientists) to use brainwashing as a label to condemn cults rather than as a concept to understand them. Bromley (1998) has taken the position that brainwashing is not a variable at all but merely a peremptory label of stigmatization, a trope for an ideological bias, in our individualistic culture, against people who prefer to live and work more collectivistically. Others have focused on the obverse danger of allowing brainwashing to be used as an all-purpose moral excuse (It wasn’t my fault. I was brainwashed!), offering blanket absolution for people who have been cult members, freeing them from the need to take any responsibility for their actions (Bainbridge 1997; Hexham and Poewe 1997; Introvigne forthcoming; Melton forthcoming). While these allegations represent legitimate concerns about potential abuse of the concept, neither is relevant to the scientific issue. A disinterested approach will first determine whether a phenomenon exists before worrying about whether its existence is politically convenient.

Obtaining Members versus Retaining Members

The fourth issue has to do with a confusion over whether brainwashing explains how cults obtain members or how they retain them. Some cults have made use of manipulative practices like love-bombing and sleep deprivation (Galanti 1993), with some degree of success, in order to obtain new members. A discussion of these manipulative practices for obtaining members is beyond the scope of this chapter. Some of these practices superficially resemble techniques used in the earliest phase of brainwashing. But these practices, themselves, are not brainwashing. This point must be emphasized because a false attribution of brainwashing to newly obtained cult recruits, rather than to those who have already made a substantial commitment to the cult, figures prominently in the ridicule of the concept by NRM scholars. A typical straw man representation of brainwashing as a self-evidently absurd concept is as follows: ‘The new convert is held mentally captive in a state of alternate consciousness due to “trance-induction techniques” such as meditation, chanting, speaking in tongues, self-hypnosis, visualization, and controlled breathing exercises … the cultist is [thus] reduced to performing religious duties in slavish obedience to the whims of the group and its authoritarian or maniacal leader’ (Wright 1998: 98).

Foundational brainwashing theory was not concerned with such Svengalian conceits, but only with ideological influence in the service of the retaining function. Why should the foundational theorists, concerned as they were with coercive state-run institutions like prisons, ‘re-education centres,’ and prisoner-of-war camps, have any interest in explaining how participants were obtained? Participants were obtained at the point of a gun.4 The motive of these state enterprises was to retain the loyalties of these participants after intensive resocialization ceased. As George Orwell showed so well in his novel 1984, the only justification for the costly indoctrination process undergone by Winston Smith was not that he love Big Brother while in prison, but that Big Brother be able to retain that love after Smith was deployed back into society. Nevertheless, both ‘cult apologists’ and ‘cult bashers’ have found it more convenient to focus on the obtaining function.

If one asks why a cult would be motivated to invest resources in brainwashing, it should be clear that this cannot be to obtain recruits, since these are a dime a dozen in the first place, and, as Barker (1984) has shown, they don’t tend to stick around long enough to repay the investment. Rather, it can only be to retain loyalty, and therefore decrease surveillance costs for valued members who are already committed. In small groups bound together only by normative solidarity, as Hechter (1987) has shown, the cost of surveillance of the individual by the group is one of the chief obstacles to success. Minimizing these surveillance costs is often the most important organizational problem such groups have to solve in order to survive and prosper. Brainwashing makes sense for a collectivity only to the extent that the resources saved through decreased surveillance costs exceed the resources invested in the brainwashing process. For this reason, only high-demand charismatic groups with totalistic social structures are ever in a position to benefit from brainwashing.5
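The resource logic in the paragraph above reduces to a simple expected-value inequality. The sketch below is hypothetical bookkeeping with invented parameter names; it only illustrates the shape of the argument, not any measured quantities:

```python
def net_benefit(members, saving_per_member, success_rate, cost_per_attempt):
    """Expected net resource gain from attempting to brainwash a pool of
    already-committed members (all parameters are hypothetical):

    saving_per_member : surveillance cost avoided per successful attempt
    success_rate      : fraction of attempts yielding a deployable agent
    cost_per_attempt  : resources invested per member processed

    The process 'pays' only when the expected surveillance saving
    exceeds the cost of the process itself.
    """
    return members * (success_rate * saving_per_member - cost_per_attempt)
```

With an invented high-demand profile, `net_benefit(100, 10.0, 0.2, 1.5)` comes out positive; halve the surveillance saving and it turns negative, which is the sense in which only totalistic, high-demand groups are ever in a position to benefit.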

This mistaken ascription of brainwashing to the obtaining function rather than the retaining function is directly responsible for two of the major arguments used by the ‘cult apologists’ in their attempt to debunk brainwashing. One has to do with a misunderstanding of the role of force and the other has to do with the mistaken belief that brainwashing can be studied with data on cult membership turnover.

The widespread belief that force is necessary for brainwashing is based upon a misreading of Lifton (1989) and Schein (1961). A number of authors (Dawson 1998; Melton forthcoming; Richardson 1993) have based their arguments, in part, on the contention that the works of foundational scholarship on brainwashing are irrelevant to the study of cults because the foundational literature studied only subjects who were forcibly incarcerated. However, Lifton and Schein have both gone on public record as explicitly denying that there is anything about their theories that requires the use of physical force or threat of force. Lifton has specifically argued (‘psychological manipulation is the heart of the matter, with or without the use of physical force’ [1995: xi]) that his theories are very much applicable to cults.6 The difference between the state-run institutions that Lifton and Schein studied in the 1950s and 1960s and the cults that Lifton and others study today is in the obtaining function not in the retaining function. In the Chinese and Korean situations, force was used for obtaining and brainwashing was used for retaining. In cults, charismatic appeal is used for obtaining and brainwashing is used, in some instances, for retaining.

A related misconception has to do with what conclusions to draw from the very high rate of turnover among new and prospective recruits to cults. Bainbridge (1997), Barker (1989), Dawson (1998), Introvigne (forthcoming), and Richardson (1993) have correctly pointed out that in totalistic religious organizations very few prospective members go on to become long-term members. They argue that this proves that the resocialization process cannot be irresistible and therefore it cannot be brainwashing. But nothing in the brainwashing model predicts that it will be attempted with all members, let alone successfully attempted. In fact, the efficiency of brainwashing, operationalized as the expected yield of deployable agents7 per 100 members, is an unknown (but discoverable) parameter of any particular cultic system and may often be quite low. For the system to be able to perpetuate itself (Hechter 1987), the yield need only produce enough value for the system to compensate it for the resources required to maintain the brainwashing process.

Moreover, the high turnover rate in cults is more complex than it may seem. While it is true that the membership turnover is very high among recruits and new members, this changes after two or three years of membership when cultic commitment mechanisms begin to kick in. This transition from high to low membership turnover is known as the Bainbridge Shift, after the sociologist who first discovered it (Bainbridge 1997: 141-3). After about three years of membership, the annual rate of turnover sharply declines and begins to fit a commitment model rather than a random model.8
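The turnover pattern just described can be sketched as two toy survival curves. The annual dropout rates below are invented for illustration and are not Bainbridge’s estimates; the point is only the shape of the curves:

```python
def survivors(annual_hazards, cohort=1000.0):
    """Expected number of a joining cohort still in the group at the
    end of each year, given that year's dropout (hazard) rate."""
    remaining, curve = cohort, []
    for h in annual_hazards:
        remaining *= (1.0 - h)
        curve.append(remaining)
    return curve

# Random model: the same high dropout rate every year.
random_model = survivors([0.6] * 6)

# Commitment model: dropout collapses after about three years of
# membership, the point of the 'Bainbridge Shift'.
shift_model = survivors([0.6, 0.6, 0.6, 0.10, 0.05, 0.05])
```

The two curves are identical for the first three years; thereafter the commitment curve flattens while the random curve keeps decaying, which is why data on early turnover alone cannot distinguish the two models.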

Membership turnover data is not the right sort of data to tell us whether a particular cult practises brainwashing. The recruitment strategy whereby many are called but few are chosen is a popular one among cults. In several groups in which I have observed the brainwashing process, there was very high turnover among initial recruits. Brainwashing is too expensive to waste on raw recruits. Since brainwashing is a costly process, it generally will not pay for a group to even attempt to brainwash one of its members until that member has already demonstrated some degree of staying power on her own.9

Psychological Traces

The fifth issue has to do with the question of whether brainwashing leaves any long-lasting measurable psychological traces in those who have experienced it. Before we can ask this question in a systematic way, we have to be clear about what sort of traces we should be looking for. There is an extensive literature on cults and mental health. But whether cult involvement causes psychological problems is a much more general question than whether participation in a traumatic resocialization process leaves any measurable psychological traces.

There has been little consensus on what sort of traces to look for. Richardson and Kilbourne (1983: 30) assume that brainwashing should lead to insanity. Lewis (1983: 30) argues that brainwashing should lead to diminished IQ scores. Nothing in brainwashing theory would lead us to predict either of these outcomes. In fact, Schein points out that ‘The essence of coercive persuasion is to produce ideological and behavioral change in a fully conscious, mentally intact individual’ (1959: 437). Why in the world would brainwashers invest scarce resources to produce insanity and stupidity in their followers? However, these authors (and others) have taken the absence of these debilitative effects as ‘proof’ that brainwashing doesn’t happen in cults. At the same time, those who oppose cults have had an interest, driven by litigation rather than science, in making exaggerated claims for mental impairment directly resulting from brainwashing. As Farrell has pointed out, ‘From the beginning, the idea of traumatic neurosis has been accompanied by concerns about compensation’ (1998: 7).

Studies of lingering emotional, cognitive, and physiological effects on ex-members have thus far shown inconsistent results (Katchen 1997; Solomon 1981; Ungerleider and Wellisch 1983). Researchers studying current members of religious groups have found no significant impairment or disorientation. Such results have erroneously been taken as evidence that the members of these groups could, therefore, not possibly have been brainwashed. However, these same researchers found these responses of current members contaminated by elevations on the ‘Lie’ scale, exemplifying ‘an intentional attempt to make a good impression and deny faults’ (Ungerleider and Wellisch 1983: 208). On the other hand, studies of ex-members have tended to show ‘serious mental and emotional dysfunctions that have been directly caused by cultic beliefs and practices’ (Saliba 1993: 106). The sampling methods of these latter studies have been challenged (Lewis and Bromley 1987; Solomon 1981), however, because they have tended to significantly over-sample respondents with anti-cult movement ties. With ingenious logic, this has led Dawson (1998: 121) to suggest in the same breath that cult brainwashing is a myth but that ex-member impairment may be a result of brainwashing done by deprogrammers.

All this controversy is not entirely relevant to our question, however, because there is no reason to assume that a brainwashed person is going to show elevated scores on standard psychiatric distress scales. In fact, for those for whom making choices is stressful, brainwashing may offer psychological relief. Galanter’s research has demonstrated that a cult ‘acts like a psychological pincer, promoting distress while, at the same time, providing relief’ (1989: 93). As we shall see below, the brainwashing model predicts impairment and disorientation only for people during some of the intermediate stages, not at the end state. The popular association of brainwashing with zombie or robot states comes out of a misattribution of the characteristics of people going through the traumatic brainwashing process to people who have completed the process. The former really are, at times, so disoriented that they appear to resemble caricatures of zombies or robots. The glassy eyes, inability to complete sentences, and fixed eerie smiles are characteristics of disoriented people under randomly varying levels of psychological stress. The latter, however, are, if the process was successful, functioning and presentable deployable agents.

Establishing causal direction in the association between cult membership and mental health is extremely tricky, and little progress has been made thus far. In an excellent article reviewing the extensive literature in this area, Saliba (1993: 108) concludes: ‘The study of the relationship between new religious movements and mental health is in its infancy.’ Writing five years later, Dawson (1998: 122) agrees that this is still true, and argues that ‘the inconclusive results of the psychological study of members and ex-members of NRMs cannot conceivably be used to support either the case for or against brainwashing.’ Saliba calls for prospective studies that will establish baseline mental health measurements for individuals before they join cults, followed by repeated measures during and afterward. While this is methodologically sensible, it is impractical because joining a cult is both a rare and unexpected event. This makes the general question of how cults affect mental health very difficult to answer.

Fortunately, examining the specific issue of whether brainwashing leaves psychological traces may be easier. The key is recognizing that brainwashing is a traumatic process, and, therefore, those who have gone through it should show an increased likelihood of post-traumatic stress disorder in later years. The classic clinical symptoms of PTSD — avoidance, numbing, and increased arousal (American Psychiatric Association 1994: 427) — have been observed in many ex-cult members regardless of their mode of exit and current movement affiliations (Katchen 1997; Zablocki 1999). However, these soft and somewhat subjective symptoms should be viewed with some caution given recent controversies over the ease with which symptoms such as these can be iatrogenically implanted, as, for example, false memories (Loftus and Ketcham 1994).

In the future, avenues for more precise neurological tracking may become available. Judith Herman (1997: 238) has demonstrated convincingly that ‘traumatic exposure can produce lasting alterations in the endocrine, autonomic, and central nervous systems … and in the function and even the structure of specific areas of the brain.’ It is possible in the future that direct evidence of brainwashing may emerge from brain scanning using positron emission tomography. Some preliminary research in this area has suggested that, during flashbacks, specific areas of the brain involved with language and communication may be inactivated (Herman 1997: 240; Rauch, van der Kolk, et al. 1996). Another promising area of investigation of this sort would involve testing for what van der Kolk and McFarlane (1996) have clinically identified as ‘the black hole of trauma.’ It should be possible to determine, once measures have been validated, whether such traces appear more often in individuals who claim to have gone through brainwashing than in a sample of controls who have been non-brainwashed members of cults for equivalent periods of time.

Separating the Investigative Steps

The final issue is a procedural one. There are four sequential investigative steps required to resolve controversies like the one we have been discussing. These steps are concerned with attempt, existence, incidence, and consequence. A great deal of confusion comes from nothing more than a failure to recognize that these four steps need to be kept analytically distinct from one another.

To appreciate the importance of this point, apart from the heat of controversy, let us alter the scene for a moment and imagine that the scientific conflict we are trying to resolve is over something relatively innocuous — say, vegetarianism. Let us imagine that on one side we have a community of scholars arguing that vegetarianism is a myth, that nobody would voluntarily choose to live without eating meat and that anyone who tried would quickly succumb to an overpowering carnivorous urge. On the other side, we have another group of scholars arguing that they had actually seen vegetarians and observed their non-meat-eating behavior over long periods of time, and that, moreover, vegetarianism is a rapidly growing social problem with many new converts each year being seduced by this enervating and debilitating diet.

It should be clear that any attempt to resolve this debate scientifically would have to proceed through the four sequential steps mentioned above. First, we would have to find out if anybody ever deliberately attempts to be a vegetarian. Maybe those observed not eating meat were simply unable to obtain it. If somebody could be found voluntarily attempting to follow a vegetarian diet, we would next have to observe him carefully enough and long enough to find out whether he succeeds in abstaining from meat. If we observe even one person successfully abstaining from meat, we would have to conclude that vegetarianism exists, increasing our confidence in the theory of the second group of researchers. But the first group could still argue, well, maybe you are right that a few eccentric people here and there do practise vegetarianism, but not enough to constitute a social phenomenon worth investigating. So, the next step would be to measure the incidence of vegetarianism in the population. Out of every million people, how many do we find following a vegetarian diet? If it turns out to be very few, we can conclude that, while vegetarianism may exist as a social oddity, it does not rise to the level of being a social phenomenon worthy of our interest. If, however, we find a sizeable number of vegetarians, we still need to ask, ‘So what?’ This is the fourth of our sequential steps. Does the practice of vegetarianism have any physical, psychological, or social consequences? If so, are these consequences worthy of our concern?

Each of these investigative steps requires attention focused on quite distinct sets of substantive evidence. For this reason, it is important that we not confuse them with one another as is so often done in ‘apologist’ writing about brainwashing, where the argument often seems to run as follows: Brainwashing doesn’t exist, or at least it shouldn’t exist, and even if it does the numbers involved are so few, and everybody in modern society gets brainwashed to some extent, and the effects, if any, are impossible to measure. Such arguments jump around, not holding still long enough to allow for orderly and systematic confirmation or disconfirmation of each of the steps.

Once we recognize the importance of keeping the investigative steps methodologically distinct from one another, it becomes apparent that the study of brainwashing is no more problematic (although undoubtedly much more difficult) than the study of an advertising campaign for a new household detergent. It is a straightforward question to ask whether or not some charismatic groups attempt to practise radical techniques of socialization designed to turn members into deployable agents. If the answer is no, we stop because there can be no brainwashing. If the answer is yes, we go on to a second question: Are these techniques at least sometimes effective in producing uncritical obedience? If the answer to this question is yes (even for a single person), we know that brainwashing exists, although it may be so rare as to be nothing more than a sociological oddity. Therefore, we have to take a third step and ask: How frequently is it effective? What proportion of those who live in cults are subjected to brainwashing, and what proportion of these respond by becoming uncritically obedient? And, finally, we need to ask a fourth important question: How long do the effects last? Are the effects transitory, lasting only as long as the stimulus continues to be applied, or are they persistent for a period of time thereafter, and, if so, how long? Let us keep in mind the importance of distinguishing attempt from existence, from incidence, from consequences.

To be continued…

NOTES

When I speak of ego dystonic behaviour, I refer to behaviour that was ego dystonic to the person before joining the cult and after leaving the cult.

I have no doubt that Introvigne, who is a European attorney, is sincere in his desire to stifle brainwashing research out of fear that any suggestion that brainwashing might possibly occur in cults will be seized on by semi-authoritarian government committees eager to suppress religious liberty. Personally, I applaud Introvigne’s efforts to protect the fragile tree of religious freedom of choice in the newly emerging democracies of Eastern Europe. But I don’t appreciate his doing so by (perhaps inadvertently) sticking his thumb on the scales upon which social scientists attempt to weigh evidence.

The Anthony and Robbins article cited demonstrates how little we really know about traits that may predispose people to join cults. They say ‘…some traditionally conservative religious groups attract people who score highly on various measures of totalitarianism, e.g., the F scale or Rokeach’s Dogmatism scale… It seems likely that these results upon certain Christian groups would generalize to alternative religious movements or cults, as many of them have theological and social beliefs that seem similar to those in some fundamentalist denominations’ (1994: 470). Perhaps, but perhaps not. No consensus has yet emerged from numerous attempts to find a cult personality type, but this seems like a promising area of research to continue.

Some, it is true, were nominally volunteers into re-education programs. However, the power of the state to make their lives miserable if they did not volunteer cannot be ignored.

Unfortunately, however, uncritical obedience can be wayward and dangerous. It can be useful to a cult leader when the cult is functioning well. But it often has been perverted to serve a destructive or self-destructive agenda in cults that have begun to disintegrate.

Some confusion on this subject has emerged from the fact that Lifton has distanced himself from those attempting to litigate against cults because of alleged brainwashing. He has constantly argued (and I wholeheartedly agree) that brainwashing, in and of itself, where no force is involved, should not be a matter for the law courts.

Formal definitions for this and other technical terms will be presented in the next section of this chapter.

In other words, the probability of a person’s leaving is inversely dependent upon the amount of time he or she has already spent as a member.

The ‘cult-basher’ version of brainwashing theory has played into this misunderstanding by confounding manipulative recruitment techniques (like sleep deprivation and ‘love-bombing’) with actual brainwashing. While there may be some overlap in the actual techniques used, the former is a method for obtaining new members, whereas brainwashing is a method for retaining old members.