
It is not pleasant to contemplate the thoughts that must be passing through the mind of the Owl of Minerva as the dusk falls and she undertakes the task of interpreting the era of human civilization, which may now be approaching its inglorious end.

The era opened almost 10,000 years ago in the Fertile Crescent, stretching from the lands of the Tigris and Euphrates, through Phoenicia on the eastern coast of the Mediterranean to the Nile Valley, and from there to Greece and beyond. What is happening in this region provides painful lessons on the depths to which the species can descend.

The land of the Tigris and Euphrates has been the scene of unspeakable horrors in recent years. The George W. Bush-Tony Blair aggression in 2003, which many Iraqis compared to the Mongol invasions of the 13th century, was yet another lethal blow. It destroyed much of what survived the Bill Clinton-driven UN sanctions on Iraq, condemned as “genocidal” by the distinguished diplomats Denis Halliday and Hans von Sponeck, who administered them before resigning in protest. Halliday and von Sponeck’s devastating reports received the usual treatment accorded to unwanted facts.

One dreadful consequence of the US-UK invasion is depicted in a New York Times “visual guide to the crisis in Iraq and Syria”: the radical change of Baghdad from mixed neighborhoods in 2003 to today’s sectarian enclaves trapped in bitter hatred. The conflicts ignited by the invasion have spread beyond and are now tearing the entire region to shreds.

Much of the Tigris-Euphrates area is in the hands of ISIS and its self-proclaimed Islamic State, a grim caricature of the extremist form of radical Islam that has its home in Saudi Arabia. Patrick Cockburn, a Middle East correspondent for The Independent and one of the best-informed analysts of ISIS, describes it as “a very horrible, in many ways fascist organization, very sectarian, kills anybody who doesn’t believe in their particular rigorous brand of Islam.”

Cockburn also points out the contradiction in the Western reaction to the emergence of ISIS: efforts to stem its advance in Iraq along with others to undermine the group’s major opponent in Syria, the brutal Bashar Assad regime. Meanwhile a major barrier to the spread of the ISIS plague to Lebanon is Hezbollah, a hated enemy of the US and its Israeli ally. And to complicate the situation further, the US and Iran now share a justified concern about the rise of the Islamic State, as do others in this highly conflicted region.

Egypt has plunged into some of its darkest days under a military dictatorship that continues to receive US support. Egypt’s fate was not written in the stars. For centuries, alternative paths have been quite feasible, and not infrequently, a heavy imperial hand has barred the way.

After the renewed horrors of the past few weeks it should be unnecessary to comment on what emanates from Jerusalem, in remote history considered a moral center.

Eighty years ago, Martin Heidegger extolled Nazi Germany as providing the best hope for rescuing the glorious civilization of the Greeks from the barbarians of the East and West. Today, German bankers are crushing Greece under an economic regime designed to maintain their wealth and power.

The likely end of the era of civilization is foreshadowed in a new draft report by the Intergovernmental Panel on Climate Change (IPCC), the generally conservative monitor of what is happening to the physical world.

Putting the Freeze on Global Warming

The report concludes that increasing greenhouse gas emissions risk “severe, pervasive and irreversible impacts for people and ecosystems” over the coming decades. The world is nearing the temperature when loss of the vast ice sheet over Greenland will be unstoppable. Along with melting Antarctic ice, that could raise sea levels to inundate major cities as well as coastal plains.

The era of civilization coincides closely with the geological epoch of the Holocene, beginning over 11,000 years ago. The previous Pleistocene epoch lasted 2.5 million years. Scientists now suggest that a new epoch began about 250 years ago, the Anthropocene, the period when human activity has had a dramatic impact on the physical world. The rate of change of geological epochs is hard to ignore.

One index of human impact is the extinction of species, now estimated to be at about the same rate as it was 65 million years ago when an asteroid hit the Earth. That is the presumed cause for the ending of the age of the dinosaurs, which opened the way for small mammals to proliferate, and ultimately modern humans. Today, it is humans who are the asteroid, condemning much of life to extinction.

The IPCC report reaffirms that the “vast majority” of known fuel reserves must be left in the ground to avert intolerable risks to future generations. Meanwhile the major energy corporations make no secret of their goal of exploiting these reserves and discovering new ones.

A day before it ran a summary of the IPCC conclusions, The New York Times reported that huge Midwestern grain stocks are rotting so that the products of the North Dakota oil boom can be shipped by rail to Asia and Europe.

One of the most feared consequences of anthropogenic global warming is the thawing of permafrost regions. A study in Science magazine warns that “even slightly warmer temperatures [less than anticipated in coming years] could start melting permafrost, which in turn threatens to trigger the release of huge amounts of greenhouse gases trapped in ice,” with possible “fatal consequences” for the global climate.

Arundhati Roy suggests that the “most appropriate metaphor for the insanity of our times” is the Siachen Glacier, where Indian and Pakistani soldiers have killed each other on the highest battlefield in the world. The glacier is now melting and revealing “thousands of empty artillery shells, empty fuel drums, ice axes, old boots, tents and every other kind of waste that thousands of warring human beings generate” in meaningless conflict. And as the glaciers melt, India and Pakistan face indescribable disaster.

Sad species. Poor Owl.

The views expressed in this post are the author’s alone, and presented here to offer a variety of perspectives to our readers.

Noam Chomsky is Institute Professor emeritus in the Department of Linguistics and Philosophy at the Massachusetts Institute of Technology. Among his recent books are Hegemony or Survival, Failed States, Power Systems, Occupy, and Hopes and Prospects. His latest book, Masters of Mankind, will be published soon by Haymarket Books, which is also reissuing twelve of his classic books in new editions over the coming year. His website is www.chomsky.info.

The man who claims that he is about to tell me the secret of human happiness is eighty-three years old, with an alarming orange tan that does nothing to enhance his credibility. It is just after eight o’clock on a December morning, in a darkened basketball stadium on the outskirts of San Antonio, and — according to the orange man — I am about to learn “the one thing that will change your life forever.” I’m skeptical, but not as much as I might normally be, because I am only one of more than fifteen thousand people at Get Motivated!, America’s “most popular business motivational seminar,” and the enthusiasm of my fellow audience members is starting to become infectious.

“So you wanna know?” asks the octogenarian, who is Dr. Robert H. Schuller, veteran self-help guru, author of more than thirty-five books on the power of positive thinking, and, in his other job, the founding pastor of the largest church in the United States constructed entirely out of glass. The crowd roars its assent. Easily embarrassed British people like me do not, generally speaking, roar our assent at motivational seminars in Texas basketball stadiums, but the atmosphere partially overpowers my reticence. I roar quietly.

“Here it is, then,” Dr. Schuller declares, stiffly pacing the stage, which is decorated with two enormous banners reading “MOTIVATE!” and “SUCCEED!,” seventeen American flags, and a large number of potted plants. “Here’s the thing that will change your life forever.” Then he barks a single syllable — “Cut!” — and leaves a dramatic pause before completing his sentence: “… the word ‘impossible’ out of your life! Cut it out! Cut it out forever!”

The audience combusts. I can’t help feeling underwhelmed, but then I probably shouldn’t have expected anything different from Get Motivated!, an event at which the sheer power of positivity counts for everything. “You are the master of your destiny!” Schuller goes on. “Think big, and dream bigger! Resurrect your abandoned hope! … Positive thinking works in every area of life!”

The logic of Schuller’s philosophy, which is the doctrine of positive thinking at its most distilled, isn’t exactly complex: decide to think happy and successful thoughts — banish the spectres of sadness and failure — and happiness and success will follow. It could be argued that not every speaker listed in the glossy brochure for today’s seminar provides uncontroversial evidence in support of this outlook: the keynote speech is to be delivered, in a few hours’ time, by George W. Bush, a president far from universally viewed as successful. But if you voiced this objection to Dr. Schuller, he would probably dismiss it as “negativity thinking.” To criticize the power of positivity is to demonstrate that you haven’t really grasped it at all. If you had, you would stop grumbling about such things, and indeed about anything else.

The organizers of Get Motivated! describe it as a motivational seminar, but that phrase — with its suggestion of minor-league life coaches giving speeches in dingy hotel ballrooms — hardly captures the scale and grandiosity of the thing. Staged roughly once a month, in cities across North America, it sits at the summit of the global industry of positive thinking, and boasts an impressive roster of celebrity speakers: Mikhail Gorbachev and Rudy Giuliani are among the regulars, as are General Colin Powell and, somewhat incongruously, William Shatner. Should it ever occur to you that a formerly prominent figure in world politics (or William Shatner) has been keeping an inexplicably low profile in recent months, there’s a good chance you’ll find him or her at Get Motivated!, preaching the gospel of optimism.

As befits such celebrity, there’s nothing dingy about the staging, either, which features banks of swooping spotlights, sound systems pumping out rock anthems, and expensive pyrotechnics; each speaker is welcomed to the stage amid showers of sparks and puffs of smoke. These special effects help propel the audience to ever higher altitudes of excitement, though it also doesn’t hurt that for many of them, a trip to Get Motivated! means an extra day off work: many employers classify it as job training. Even the United States military, where “training” usually means something more rigorous, endorses this view; in San Antonio, scores of the stadium’s seats are occupied by uniformed soldiers from the local Army base.

Technically, I am here undercover. Tamara Lowe, the self-described “world’s No. 1 female motivational speaker,” who along with her husband runs the company behind Get Motivated!, has been accused of denying access to reporters, a tribe notoriously prone to negativity thinking. Lowe denies the charge, but out of caution, I’ve been describing myself as a “self-employed businessman” — a tactic, I’m realizing too late, that only makes me sound shifty. I needn’t have bothered with subterfuge anyway, it turns out, since I’m much too far away from the stage for the security staff to be able to see me scribbling in my notebook. My seat is described on my ticket as “premier seating,” but this turns out to be another case of positivity run amok: at Get Motivated!, there is only “premier seating,” “executive seating,” and “VIP seating.”

In reality, mine is up in the nosebleed section; it is a hard plastic perch, painful on the buttocks. But I am grateful for it, because it means that by chance I’m seated next to a man who, as far as I can make out, is one of the few cynics in the arena — an amiable, large-limbed park ranger named Jim, who sporadically leaps to his feet to shout “I’m so motivated!” in tones laden with sarcasm.

He explains that he was required to attend by his employer, the United States National Park Service, though when I ask why that organization might wish its rangers to use paid work time in this fashion, he cheerily concedes that he has “no fucking clue.” Dr. Schuller’s sermon, meanwhile, is gathering pace. “When I was a child, it was impossible for a man ever to walk on the moon, impossible to cut out a human heart and put it in another man’s chest … the word ‘impossible’ has proven to be a very stupid word!” He does not spend much time marshaling further evidence for his assertion that failure is optional: it’s clear that Schuller, the author of “Move Ahead with Possibility Thinking” and “Tough Times Never Last, but Tough People Do!,” vastly prefers inspiration to argument. But in any case, he is really only a warm-up man for the day’s main speakers, and within fifteen minutes he is striding away, to adulation and fireworks, fists clenched victoriously up at the audience, the picture of positive-thinking success.

It is only months later, back at my home in New York, reading the headlines over morning coffee, that I learn the news that the largest church in the United States constructed entirely from glass has filed for bankruptcy, a word Dr. Schuller had apparently neglected to eliminate from his vocabulary.

For a civilization so fixated on achieving happiness, we seem remarkably incompetent at the task. One of the best-known general findings of the “science of happiness” has been the discovery that the countless advantages of modern life have done so little to lift our collective mood. The awkward truth seems to be that increased economic growth does not necessarily make for happier societies, just as increased personal income, above a certain basic level, doesn’t make for happier people. Nor does better education, at least according to some studies. Nor does an increased choice of consumer products. Nor do bigger and fancier homes, which instead seem mainly to provide the privilege of more space in which to feel gloomy.

Perhaps you don’t need telling that self-help books, the modern-day apotheosis of the quest for happiness, are among the things that fail to make us happy. But, for the record, research strongly suggests that they are rarely much help. This is why, among themselves, some self-help publishers refer to the “eighteen-month rule,” which states that the person most likely to purchase any given self-help book is someone who, within the previous eighteen months, purchased a self-help book — one that evidently didn’t solve all their problems. When you look at the self-help shelves with a coldly impartial eye, this isn’t especially surprising. That we yearn for neat, book-sized solutions to the problem of being human is understandable, but strip away the packaging, and you’ll find that the messages of such works are frequently banal. The “Seven Habits of Highly Effective People” essentially tells you to decide what matters most to you in life, and then do it; “How to Win Friends and Influence People” advises its readers to be pleasant rather than obnoxious, and to use people’s first names a lot. One of the most successful management manuals of the last few years, “Fish!,” which is intended to help foster happiness and productivity in the workplace, suggests handing out small toy fish to your hardest-working employees.

As we’ll see, when the messages get more specific than that, self-help gurus tend to make claims that simply aren’t supported by more reputable research. The evidence suggests, for example, that venting your anger doesn’t get rid of it, while visualizing your goals doesn’t seem to make you more likely to achieve them. And whatever you make of the country-by-country surveys of national happiness that are now published with some regularity, it’s striking that the “happiest” countries are never those where self-help books sell the most, nor indeed where professional psychotherapists are most widely consulted. The existence of a thriving “happiness industry” clearly isn’t sufficient to engender national happiness, and it’s not unreasonable to suspect that it might make matters worse.

Yet the ineffectiveness of modern strategies for happiness is really just a small part of the problem. There are good reasons to believe that the whole notion of “seeking happiness” is flawed to begin with. For one thing, who says happiness is a valid goal in the first place? Religions have never placed much explicit emphasis on it, at least as far as this world is concerned; philosophers have certainly not been unanimous in endorsing it, either. And any evolutionary psychologist will tell you that evolution has little interest in your being happy, beyond trying to make sure that you’re not so listless or miserable that you lose the will to reproduce.

Even assuming happiness to be a worthy target, though, a worse pitfall awaits, which is that aiming for it seems to reduce your chances of ever attaining it. “Ask yourself whether you are happy,” observed the philosopher John Stuart Mill, “and you cease to be so.” At best, it would appear, happiness can only be glimpsed out of the corner of an eye, not stared at directly. (We tend to remember having been happy in the past much more frequently than we are conscious of being happy in the present.) Making matters worse still, what happiness actually is feels impossible to define in words; even supposing you could do so, you’d presumably end up with as many different definitions as there are people on the planet. All of which means it’s tempting to conclude that “How can we be happy?” is simply the wrong question — that we might as well resign ourselves to never finding the answer, and get on with something more productive instead.

But could there be a third possibility, besides the futile effort to pursue solutions that never seem to work, on the one hand, and just giving up, on the other? After several years reporting on the field of psychology as a journalist, I finally realized that there might be. I began to think that something united all those psychologists and philosophers — and even the occasional self-help guru — whose ideas seemed actually to hold water. The startling conclusion at which they had all arrived, in different ways, was this: that the effort to try to feel happy is often precisely the thing that makes us miserable. And that it is our constant efforts to eliminate the negative — insecurity, uncertainty, failure, or sadness — that is what causes us to feel so insecure, anxious, uncertain, or unhappy. They didn’t see this conclusion as depressing, though. Instead, they argued that it pointed to an alternative approach, a “negative path” to happiness, that entailed taking a radically different stance towards those things that most of us spend our lives trying hard to avoid. It involved learning to enjoy uncertainty, embracing insecurity, stopping trying to think positively, becoming familiar with failure, even learning to value death. In short, all these people seemed to agree that in order to be truly happy, we might actually need to be willing to experience more negative emotions — or, at the very least, to learn to stop running quite so hard from them. Which is a bewildering thought, and one that calls into question not just our methods for achieving happiness, but also our assumptions about what “happiness” really means.

Which is how I came to find myself rising reluctantly to my feet, up in a dark extremity of that basketball stadium, because Get Motivated!’s excitable mistress of ceremonies had announced a “dance competition,” in which everyone present was obliged to participate. Giant beach balls appeared as if from nowhere, bumping across the heads of the crowd, who jiggled awkwardly as Wham! blared from the sound system. The first prize of a free trip to Disney World, we were informed, awaited not the best dancer but the most motivated one, though the distinction made little difference to me: I found the whole thing too excruciating to do more than sway very slightly. The prize was eventually awarded to a soldier. This was a decision that I suspected had been taken to pander to local patriotic pride, rather than strictly in recognition of highly motivated dancing.

* * *

One of the foremost investigators of the problems with positive thinking is a professor of psychology named Daniel Wegner, who runs the Mental Control Laboratory at Harvard University.

This is not, whatever its name might suggest, a CIA-funded establishment dedicated to the science of brainwashing. Wegner’s intellectual territory is what has come to be known as “ironic process theory,” which explores the ways in which our efforts to suppress certain thoughts or behaviors result, ironically, in their becoming more prevalent. I got off to a bad start with Professor Wegner when I accidentally typed his surname, in a newspaper column, as “Wenger.” He sent me a crabby email (“Get the name right!”), and didn’t seem likely to be receptive to the argument that my slip-up was an interesting example of exactly the kinds of errors he studied. The rest of our communications proved a little strained.

The problems to which Wegner has dedicated much of his career all have their origins in a simple and intensely irritating parlor game, which dates back at least to the days of Fyodor Dostoevsky, who reputedly used it to torment his brother. It takes the form of a challenge: can you — the victim is asked — succeed in not thinking about a white bear for one whole minute? You can guess the answer, of course, but it’s nonetheless instructive to make the attempt. Why not try it now? Look at your watch, or find a clock with a second hand, and aim for a mere ten seconds of entirely non-white-bear-related thoughts, starting … now.

My commiserations on your failure.

Wegner’s earliest investigations of ironic process theory involved little more than issuing this challenge to American university students, then asking them to speak their inner monologues aloud while they made the attempt. This is a rather crude way of accessing someone’s thought processes, but an excerpt from one typical transcript nonetheless vividly demonstrates the futility of the struggle:

Of course, now the only thing I’m going to think about is a white bear … Don’t think about a white bear. Ummm, what was I thinking about before? See, I think about flowers a lot … Okay, so my fingernails are really bad … Every time I really want, like … ummm … to talk, think, to not think about the white bear, then it makes me think about the white bear more …

At this juncture, you might be beginning to wonder why it is that some social psychologists seem to be allowed to spend other people’s money proving the obvious. Of course the white bear challenge is virtually impossible to win. But Wegner was just getting started. The more he explored the field, the more he came to suspect that the internal mechanism responsible for sabotaging our efforts at suppressing white bear thoughts might govern an entire territory of mental activity and outward behavior. The white bear challenge, after all, seems like a metaphor for much of what goes wrong in life: all too often, the outcome we’re seeking to avoid is exactly the one to which we seem magnetically lured.

Wegner labelled this effect “the precisely counterintuitive error,” which, he explained in one paper, “is when we manage to do the worst possible thing, the blunder so outrageous that we think about it in advance and resolve not to let that happen. We see a rut coming up in the road ahead, and proceed to steer our bike right into it. We make a mental note not to mention a sore point in conversation, and then cringe in horror as we blurt out exactly that thing. We carefully cradle the glass across the room, all the while thinking ‘Don’t spill’ and then juggle it onto the carpet under the gaze of our host.”

Far from representing an occasional divergence from our otherwise flawless self-control, the capacity for ironic error seems to lurk deep in the soul, close to the core of our characters. Edgar Allan Poe, in his short story of the same name, calls it “the imp of the perverse”: that nameless but distinct urge one sometimes experiences, when walking along a precipitous cliff edge, or climbing to the observation deck of a tall building, to throw oneself off — not from any suicidal motivation, but precisely because it would be so calamitous to do so. The imp of the perverse plagues social interactions, too, as anyone who has ever laughed in recognition at an episode of “Curb Your Enthusiasm” will know all too well.

What is going on here, Wegner argues, is a malfunctioning of the uniquely human capacity for metacognition, or thinking about thinking. “Metacognition,” Wegner explains, “occurs when thought takes itself as an object.” Mainly, it’s an extremely useful skill: it is what enables us to recognize when we are being unreasonable, or sliding into depression, or being afflicted by anxiety, and then to do something about it. But when we use metacognitive thoughts directly to try to control our other, everyday, “object-level” thoughts — by suppressing images of white bears, say, or replacing gloomy thoughts with happy ones — we run into trouble. “Metathoughts are instructions we give ourselves about our object-level thinking,” as Wegner puts it, “and sometimes we just can’t follow our own instructions.”

When you try not to think of a white bear, you may experience some success in forcing alternative thoughts into your mind. At the same time, though, a metacognitive monitoring process will crank into action, to scan your mind for evidence of whether you are succeeding or failing at the task. And this is where things get perilous, because if you try too hard — or, Wegner’s studies suggest, if you are tired, stressed, depressed, attempting to multi-task, or otherwise suffering from “mental load” — metacognition will frequently go wrong. The monitoring process will start to occupy more than its fair share of limelight on the cognitive stage. It will jump to the forefront of consciousness — and suddenly, all you will be able to think about is white bears, and how badly you’re doing at not thinking about them.

Could it be that ironic process theory also sheds light on what is wrong with our efforts to achieve happiness, and on the way that our efforts to feel positive seem so frequently to bring about the opposite result? In the years since Wegner’s earliest white bear experiments, his research, and that of others, has turned up more and more evidence to support that notion. One example: when experimental subjects are told of an unhappy event, but then instructed to try not to feel sad about it, they end up feeling worse than people who are informed of the event, but given no instructions about how to feel. In another study, when patients who were suffering from panic disorders listened to relaxation tapes, their hearts beat faster than patients who listened to audiobooks with no explicitly “relaxing” content. Bereaved people who make the most effort to avoid feeling grief, research suggests, take the longest to recover from their loss. Our efforts at mental suppression fail in the sexual arena, too: people instructed not to think about sex exhibit greater arousal, as measured by the electrical conductivity of their skin, than those not instructed to suppress such thoughts.

Seen from this perspective, swathes of the self-help industry’s favorite techniques for achieving happiness and success — from positive thinking to visualizing your goals to “getting motivated” — stand revealed to be suffering from one enormous flaw. A person who has resolved to “think positive” must constantly scan his or her mind for negative thoughts — there’s no other way that the mind could ever gauge its success at the operation — yet that scanning will draw attention to the presence of negative thoughts. (Worse, if the negative thoughts start to predominate, a vicious spiral may kick in, since the failure to think positively may become the trigger for a new stream of self-berating thoughts, about not thinking positively enough.) Suppose you decide to follow Dr. Schuller’s suggestion and try to eliminate the word “impossible” from your vocabulary, or more generally try to focus exclusively on successful outcomes, and stop thinking about things not working out. As we’ll see, there are all sorts of problems with this approach. But the most basic one is that you may well fail, as a result of the very act of monitoring your success.

This problem of self-sabotage through self-monitoring is not the only hazard of positive thinking. An additional twist was revealed in 2009, when a psychologist based in Canada named Joanne Wood set out to test the effectiveness of “affirmations,” those peppy self-congratulatory phrases designed to lift the user’s mood through repetition. Affirmations have their origins in the work of the nineteenth-century French pharmacist Émile Coué, a forerunner of the contemporary positive thinkers, who coined the one that remains the most famous: “Every day, in every way, I am getting better and better.”

Most affirmations sound pretty cheesy, and one might suspect that they would have little effect. Surely, though, they’re harmless? Wood wasn’t so sure about that. Her reasoning, though compatible with Wegner’s, drew on a different psychological tradition known as “self-comparison theory.” Much as we like to hear positive messages about ourselves, this theory suggests, we crave even more strongly the sense of being a coherent, consistent self in the first place. Messages that conflict with that existing sense of self, therefore, are unsettling, and so we often reject them — even if they happen to be positive, and even if the source of the message is ourselves. Wood’s hunch was that people who seek out affirmations would be, by definition, those with low self-esteem — but that, for that very same reason, they would end up reacting against the messages in the affirmations, because they conflicted with their self-images. The result might even be a worsening of their low self-esteem as people struggled to reassert their existing self-images against the incoming messages.

Which is exactly what happened in Wood’s research. In one set of experiments, people were divided into subgroups of those with low and high self-esteem, then asked to undertake a journal-writing exercise; every time a bell rang, they were to repeat to themselves the phrase “I am a lovable person.” According to a variety of ingenious mood measures, those who began the process with low self-esteem became appreciably less happy as a result of telling themselves that they were lovable. They didn’t feel particularly lovable to begin with — and trying to convince themselves otherwise merely solidified their negativity. “Positive thinking” had made them feel worse.

* * *

The arrival of George Bush onstage in San Antonio was heralded by the sudden appearance of his Secret Service detail. These were men who would probably have stood out anywhere, in their dark suits and earpieces, but who stood out twice as prominently at Get Motivated! thanks to their rigid frowns. The job of protecting former presidents from potential assassins, it appeared, wasn’t one that rewarded looking on the bright side and assuming that nothing could go wrong.

Bush himself, by contrast, bounded onstage grinning. “You know, retirement ain’t so bad, especially when you get to retire to Texas!” he began, before launching into a speech he had evidently delivered several times before. First, he told a folksy anecdote about spending his post-presidency cleaning up after his dog (“I was picking up that which I had been dodging for eight years!”). Then, for a strange moment or two, it seemed as if the main topic of his speech would be how he once had to choose a rug for the Oval Office (“I thought to myself, the presidency is going to be a decision-making experience!”). But his real subject, it soon emerged, was optimism. “I don’t believe you can lead a family, or a school, or a city, or a state, or a country, unless you’re optimistic that the future is going to be better,” he said. “And I want you to know that, even in the darkest days of my presidency, I was optimistic that the future was going to be better than the past for our citizens and the world.”

You need not hold any specific political opinion about the forty-third president of the United States to see how his words illustrate a fundamental strangeness of the “cult of optimism.” Bush was not ignoring the numerous controversies of his administration — the strategy one might have imagined he would adopt at a motivational seminar, before a sympathetic audience and facing no risk of hostile questions. Instead, he had chosen to redefine them as evidence in support of his optimistic attitude.

The way Bush saw it, the happy and successful periods of his presidency proved the benefits of an optimistic outlook, of course — but so did the unhappy and unsuccessful ones. When things are going badly, after all, you need optimism all the more. Or to put it another way: once you have resolved to embrace the ideology of positive thinking, you will find a way to interpret virtually any eventuality as a justification for thinking positively. You need never spend time considering how your actions might go wrong.

Could this curiously unfalsifiable ideology of positivity at all costs — positivity regardless of the results — be actively dangerous? Opponents of the Bush administration’s foreign policies might have reason to think so. This is also one part of the case made by the social critic Barbara Ehrenreich, in her 2009 book “Bright-Sided: How the Relentless Promotion of Positive Thinking Has Undermined America.” One underappreciated cause of the global financial crisis of the late 2000s, she argues, was an American business culture in which even thinking about the possibility of failure — let alone speaking up about it at meetings — had come to be considered an embarrassing faux pas.

Bankers, their narcissism stoked by a culture that rewarded grand ambition above all, lost the capacity to distinguish between their ego-fueled dreams and concrete results. Meanwhile, homebuyers assumed that whatever they wanted could be theirs if they wanted it badly enough (how many of them had read books such as “The Secret,” which makes exactly that claim?) and accordingly sought mortgages they were unable to repay. Irrational optimism suffused the financial sector, and the professional purveyors of optimism — the speakers and self-help gurus and seminar organizers — were only too happy to encourage it. “To the extent that positive thinking had become a business in itself,” writes Ehrenreich, “business was its principal client, eagerly consuming the good news that all things are possible through an effort of mind. This was a useful message for employees, who by the turn of the twenty-first century were being required to work longer hours for fewer benefits and diminishing job security. But it was also a liberating ideology for top-level executives. What was the point in agonizing over balance sheets and tedious analyses of risks — and why bother worrying about dizzying levels of debt and exposure to potential defaults — when all good things come to those who are optimistic enough to expect them?”

Ehrenreich traces the origins of this philosophy to nineteenth-century America, and specifically to the quasi-religious movement known as New Thought. New Thought arose in rebellion against the dominant, gloomy message of American Calvinism, which was that relentless hard work was the duty of every Christian — with the additional sting that, thanks to the doctrine of predestination, you might in any case already be marked to spend eternity in Hell. New Thought, by contrast, proposed that one could achieve happiness and worldly success through the power of the mind.

This mind-power could even cure physical ailments, according to the newly minted religion of Christian Science, which grew directly from the same roots. Yet, as Ehrenreich makes clear, New Thought imposed its own kind of harsh judgmentalism, replacing Calvinism’s obligatory hard work with obligatory positive thinking. Negative thoughts were fiercely denounced — a message that echoed “the old religion’s condemnation of sin” and added “an insistence on the constant interior labour of self-examination.”

Quoting the sociologist Micki McGee, Ehrenreich shows how, under this new orthodoxy of optimism, “continuous and neverending work on the self [was] offered not only as a road to success, but also to a kind of secular salvation.”

George Bush, then, was standing in a venerable tradition when he proclaimed the importance of optimism in all circumstances. But his speech at Get Motivated! was over almost as soon as it had started. A dash of religion, a singularly unilluminating anecdote about the terrorist attacks of September 11, 2001, some words of praise for the military, and he was waving goodbye — “Thank you, Texas, it’s good to be home!” — as his bodyguards closed in around him. Beneath the din of cheering, I heard Jim, the park ranger in the next seat, emit a sigh of relief. “OK, I’m motivated now,” he muttered, to nobody in particular. “Is it time for some beer?”

“There are lots of ways of being miserable,” says a character in a short story by Edith Wharton, “but there’s only one way of being comfortable, and that is to stop running around after happiness.” This observation pungently expresses the problem with the “cult of optimism” — the ironic, self-defeating struggle that sabotages positivity when we try too hard. But it also hints at the possibility of a more hopeful alternative, an approach to happiness that might take a radically different form. The first step is to learn how to stop chasing positivity so intently. But many of the proponents of the “negative path” to happiness take things further still, arguing — paradoxically, but persuasively — that deliberately plunging more deeply into what we think of as negative may be a precondition of true happiness.

Perhaps the most vivid metaphor for this whole strange philosophy is a small children’s toy known as the “Chinese finger trap,” though the evidence suggests it is probably not Chinese in origin at all. In his office at the University of Nevada, the psychologist Steven Hayes, an outspoken critic of counterproductive positive thinking, keeps a box of them on his desk; he uses them to illustrate his arguments. The “trap” is a tube, made of thin strips of woven bamboo, with the opening at each end being roughly the size of a human finger. The unwitting victim is asked to insert his index fingers into the tube, then finds himself trapped: in reaction to his efforts to pull his fingers out again, the openings at each end of the tube constrict, gripping his fingers ever more tightly. The harder he pulls, the more decisively he is trapped. It is only by relaxing his efforts at escape, and by pushing his fingers further in, that he can widen the ends of the tube, whereupon it falls away, and he is free.

In the case of the Chinese finger trap, Hayes observes, “doing the presumably sensible thing is counterproductive.” Following the negative path to happiness is about doing the other thing — the presumably illogical thing — instead.

Our world is a place where information can behave like human genes and ideas can replicate, mutate and evolve
By James Gleick, Smithsonian magazine, May 2011

“What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life.’ It is information, words, instructions,” Richard Dawkins declared in 1986. Already one of the world’s foremost evolutionary biologists, he had caught the spirit of a new age. The cells of an organism are nodes in a richly interwoven communications network, transmitting and receiving, coding and decoding. Evolution itself embodies an ongoing exchange of information between organism and environment. “If you want to understand life,” Dawkins wrote, “don’t think about vibrant, throbbing gels and oozes, think about information technology.”
We have become surrounded by information technology; our furniture includes iPods and plasma displays, and our skills include texting and Googling. But our capacity to understand the role of information has been sorely taxed. “TMI,” we say. Stand back, however, and the past does come back into focus.
The rise of information theory aided and abetted a new view of life. The genetic code—no longer a mere metaphor—was being deciphered. Scientists spoke grandly of the biosphere: an entity composed of all the earth’s life-forms, teeming with information, replicating and evolving. And biologists, having absorbed the methods and vocabulary of communications science, went further to make their own contributions to the understanding of information itself.

Jacques Monod, the Parisian biologist who shared a Nobel Prize in 1965 for working out the role of messenger RNA in the transfer of genetic information, proposed an analogy: just as the biosphere stands above the world of nonliving matter, so an “abstract kingdom” rises above the biosphere. The denizens of this kingdom? Ideas.

“Ideas have retained some of the properties of organisms,” he wrote. “Like them, they tend to perpetuate their structure and to breed; they too can fuse, recombine, segregate their content; indeed they too can evolve, and in this evolution selection must surely play an important role.”
Ideas have “spreading power,” he noted—“infectivity, as it were”—and some more than others. An example of an infectious idea might be a religious ideology that gains sway over a large group of people. The American neurophysiologist Roger Sperry had put forward a similar notion several years earlier, arguing that ideas are “just as real” as the neurons they inhabit. Ideas have power, he said:

Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet.
Monod added, “I shall not hazard a theory of the selection of ideas.” There was no need. Others were willing.
Dawkins made his own jump from the evolution of genes to the evolution of ideas. For him the starring role belongs to the replicator, and it scarcely matters whether replicators were made of nucleic acid. His rule is “All life evolves by the differential survival of replicating entities.” Wherever there is life, there must be replicators. Perhaps on other worlds replicators could arise in a silicon-based chemistry—or in no chemistry at all.
What would it mean for a replicator to exist without chemistry? “I think that a new kind of replicator has recently emerged on this very planet,” Dawkins proclaimed near the end of his first book, The Selfish Gene, in 1976. “It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind.” That “soup” is human culture; the vector of transmission is language, and the spawning ground is the brain.
For this bodiless replicator itself, Dawkins proposed a name. He called it the meme, and it became his most memorable invention, far more influential than his selfish genes or his later proselytizing against religiosity. “Memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation,” he wrote. They compete with one another for limited resources: brain time or bandwidth. They compete most of all for attention. For example:
Ideas. Whether an idea arises uniquely or reappears many times, it may thrive in the meme pool or it may dwindle and vanish. The belief in God is an example Dawkins offers—an ancient idea, replicating itself not just in words but in music and art. The belief that Earth orbits the Sun is no less a meme, competing with others for survival. (Truth may be a helpful quality for a meme, but it is only one among many.)
Tunes. This tune has spread for centuries across several continents.
Catchphrases. One text snippet, “What hath God wrought?” appeared early and spread rapidly in more than one medium. Another, “Read my lips,” charted a peculiar path through late 20th-century America. “Survival of the fittest” is a meme that, like other memes, mutates wildly (“survival of the fattest”; “survival of the sickest”; “survival of the fakest”; “survival of the twittest”).
Images. In Isaac Newton’s lifetime, no more than a few thousand people had any idea what he looked like, even though he was one of England’s most famous men. Yet now millions of people have quite a clear idea—based on replicas of copies of rather poorly painted portraits. Even more pervasive and indelible are the smile of Mona Lisa, The Scream of Edvard Munch and the silhouettes of various fictional extraterrestrials. These are memes, living a life of their own, independent of any physical reality. “This may not be what George Washington looked like then,” a tour guide was overheard saying of the Gilbert Stuart portrait at the Metropolitan Museum of Art, “but this is what he looks like now.” Exactly.

Memes emerge in brains and travel outward, establishing beachheads on paper and celluloid and silicon and anywhere else information can go. They are not to be thought of as elementary particles but as organisms. The number three is not a meme; nor is the color blue, nor any simple thought, any more than a single nucleotide can be a gene. Memes are complex units, distinct and memorable—units with staying power.
Also, an object is not a meme. The hula hoop is not a meme; it is made of plastic, not of bits. When this species of toy spread worldwide in a mad epidemic in 1958, it was the product, the physical manifestation, of a meme, or memes: the craving for hula hoops; the swaying, swinging, twirling skill set of hula-hooping. The hula hoop itself is a meme vehicle. So, for that matter, is each human hula hooper—a strikingly effective meme vehicle, in the sense neatly explained by the philosopher Daniel Dennett: “A wagon with spoked wheels carries not only grain or freight from place to place; it carries the brilliant idea of a wagon with spoked wheels from mind to mind.” Hula hoopers did that for the hula hoop’s memes—and in 1958 they found a new transmission vector, broadcast television, sending its messages immeasurably faster and farther than any wagon. The moving image of the hula hooper seduced new minds by hundreds, and then by thousands, and then by millions. The meme is not the dancer but the dance.
For most of our biological history memes existed fleetingly; their main mode of transmission was the one called “word of mouth.” Lately, however, they have managed to adhere in solid substance: clay tablets, cave walls, paper sheets. They achieve longevity through our pens and printing presses, magnetic tapes and optical disks. They spread via broadcast towers and digital networks. Memes may be stories, recipes, skills, legends or fashions. We copy them, one person at a time. Alternatively, in Dawkins’ meme-centered perspective, they copy themselves.
“I believe that, given the right conditions, replicators automatically band together to create systems, or machines, that carry them around and work to favor their continued replication,” he wrote. This was not to suggest that memes are conscious actors; only that they are entities with interests that can be furthered by natural selection. Their interests are not our interests. “A meme,” Dennett says, “is an information-packet with attitude.” When we speak of fighting for a principle or dying for an idea, we may be more literal than we know.
Tinker, tailor, soldier, sailor…. Rhyme and rhythm help people remember bits of text. Or: rhyme and rhythm help bits of text get remembered. Rhyme and rhythm are qualities that aid a meme’s survival, just as strength and speed aid an animal’s. Patterned language has an evolutionary advantage. Rhyme, rhythm and reason—for reason, too, is a form of pattern. I was promised on a time to have reason for my rhyme; from that time unto this season, I received nor rhyme nor reason.

Like genes, memes have effects on the wide world beyond themselves. In some cases (the meme for making fire; for wearing clothes; for the resurrection of Jesus) the effects can be powerful indeed. As they broadcast their influence on the world, memes thus influence the conditions affecting their own chances of survival. The meme or memes comprising Morse code had strong positive feedback effects. Some memes have evident benefits for their human hosts (“Look before you leap,” knowledge of CPR, belief in hand washing before cooking), but memetic success and genetic success are not the same. Memes can replicate with impressive virulence while leaving swaths of collateral damage—patent medicines and psychic surgery, astrology and satanism, racist myths, superstitions and (a special case) computer viruses. In a way, these are the most interesting—the memes that thrive to their hosts’ detriment, such as the idea that suicide bombers will find their reward in heaven.

Memes could travel wordlessly even before language was born. Plain mimicry is enough to replicate knowledge—how to chip an arrowhead or start a fire. Among animals, chimpanzees and gorillas are known to acquire behaviors by imitation. Some species of songbirds learn their songs, or at least song variants, after hearing them from neighboring birds (or, more recently, from ornithologists with audio players). Birds develop song repertoires and song dialects—in short, they exhibit a birdsong culture that predates human culture by eons.
These special cases notwithstanding, for most of human history memes and language have gone hand in glove. (Clichés are memes.) Language serves as culture’s first catalyst. It supersedes mere imitation, spreading knowledge by abstraction and encoding.
Perhaps the analogy with disease was inevitable. Before anyone understood anything of epidemiology, its language was applied to species of information. An emotion can be infectious, a tune catchy, a habit contagious. “From look to look, contagious through the crowd / The panic runs,” wrote the poet James Thomson in 1730. Lust, likewise, according to Milton: “Eve, whose eye darted contagious fire.” But only in the new millennium, in the time of global electronic transmission, has the identification become second nature. Ours is the age of virality: viral education, viral marketing, viral e-mail and video and networking. Researchers studying the Internet itself as a medium—crowdsourcing, collective attention, social networking and resource allocation—employ not only the language but also the mathematical principles of epidemiology.
One of the first to use the terms “viral text” and “viral sentences” seems to have been a reader of Dawkins named Stephen Walton of New York City, corresponding in 1981 with the cognitive scientist Douglas Hofstadter. Thinking logically—perhaps in the mode of a computer—Walton proposed simple self-replicating sentences along the lines of “Say me!” “Copy me!” and “If you copy me, I’ll grant you three wishes!” Hofstadter, then a columnist for Scientific American, found the term “viral text” itself to be even catchier.
Well, now, Walton’s own viral text, as you can see here before your eyes, has managed to commandeer the facilities of a very powerful host—an entire magazine and printing press and distribution service. It has leapt aboard and is now—even as you read this viral sentence—propagating itself madly throughout the ideosphere!
Hofstadter gaily declared himself infected by the meme meme.
One source of resistance—or at least unease—was the shoving of us humans toward the wings. It was bad enough to say that a person is merely a gene’s way of making more genes. Now humans are to be considered as vehicles for the propagation of memes, too. No one likes to be called a puppet. Dennett summed up the problem this way: “I don’t know about you, but I am not initially attracted by the idea of my brain as a sort of dung heap in which the larvae of other people’s ideas renew themselves, before sending out copies of themselves in an informational diaspora…. Who’s in charge, according to this vision—we or our memes?”
He answered his own question by reminding us that, like it or not, we are seldom “in charge” of our own minds. He might have quoted Freud; instead he quoted Mozart (or so he thought): “In the night when I cannot sleep, thoughts crowd into my mind…. Whence and how do they come? I do not know and I have nothing to do with it.”
Later Dennett was informed that this well-known quotation was not Mozart’s after all. It had taken on a life of its own; it was a fairly successful meme.
For anyone taken with the idea of memes, the landscape was changing faster than Dawkins had imagined possible in 1976, when he wrote, “The computers in which memes live are human brains.” By 1989, the time of the second edition of The Selfish Gene, having become an adept programmer himself, he had to amend that: “It was obviously predictable that manufactured electronic computers, too, would eventually play host to self-replicating patterns of information.” Information was passing from one computer to another “when their owners pass floppy discs around,” and he could see another phenomenon on the near horizon: computers connected in networks. “Many of them,” he wrote, “are literally wired up together in electronic mail exchange…. It is a perfect milieu for self-replicating programs to flourish.” Indeed, the Internet was in its birth throes. Not only did it provide memes with a nutrient-rich culture medium, it also gave wings to the idea of memes. Meme itself quickly became an Internet buzzword. Awareness of memes fostered their spread.
A notorious example of a meme that could not have emerged in pre-Internet culture was the phrase “jumped the shark.” Loopy self-reference characterized every phase of its existence. To jump the shark means to pass a peak of quality or popularity and begin an irreversible decline. The phrase was thought to have been used first in 1985 by a college student named Sean J. Connolly, in reference to an episode of the television series “Happy Days” in which the character Fonzie (Henry Winkler), on water skis, jumps over a shark. The origin of the phrase requires a certain amount of explanation without which it could not have been initially understood. Perhaps for that reason, there is no recorded usage until 1997, when Connolly’s roommate, Jon Hein, registered the domain name jumptheshark.com and created a web site devoted to its promotion. The web site soon featured a list of frequently asked questions:
Q. Did “jump the shark” originate from this web site, or did you create the site to capitalize on the phrase?
A. This site went up December 24, 1997, and gave birth to the phrase “jump the shark.” As the site continues to grow in popularity, the term has become more commonplace. The site is the chicken, the egg and now a Catch-22.
It spread to more traditional media in the next year; Maureen Dowd devoted a column to explaining it in the New York Times in 2001; in 2002 the same newspaper’s “On Language” columnist, William Safire, called it “the popular culture’s phrase of the year”; soon after that, people were using the phrase in speech and in print without self-consciousness—no quotation marks or explanation—and eventually, inevitably, various cultural observers asked, “Has ‘jump the shark’ jumped the shark?” Like any good meme, it spawned mutations. The “jumping the shark” entry in Wikipedia advised in 2009, “See also: jumping the couch; nuking the fridge.”
Is this science? In his 1983 column, Hofstadter proposed the obvious memetic label for such a discipline: memetics. The study of memes has attracted researchers from fields as far apart as computer science and microbiology. In bioinformatics, chain letters are an object of study. They are memes; they have evolutionary histories. The very purpose of a chain letter is replication; whatever else a chain letter may say, it embodies one message: Copy me. One student of chain-letter evolution, Daniel W. VanArsdale, listed many variants, in chain letters and even earlier texts: “Make seven copies of it exactly as it is written” (1902); “Copy this in full and send to nine friends” (1923); “And if any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life” (Revelation 22:19). Chain letters flourished with the help of a new 19th-century technology: “carbonic paper,” sandwiched between sheets of writing paper in stacks. Then carbon paper made a symbiotic partnership with another technology, the typewriter. Viral outbreaks of chain letters occurred all through the early 20th century. Two subsequent technologies, when their use became widespread, provided orders-of-magnitude boosts in chain-letter fecundity: photocopying (c. 1950) and e-mail (c. 1995).
Inspired by a chance conversation on a hike in the Hong Kong mountains, information scientists Charles H. Bennett from IBM in New York and Ming Li and Bin Ma from Ontario, Canada, began an analysis of a set of chain letters collected during the photocopier era. They had 33, all variants of a single letter, with mutations in the form of misspellings, omissions and transposed words and phrases. “These letters have passed from host to host, mutating and evolving,” they reported in 2003.
Like a gene, their average length is about 2,000 characters. Like a potent virus, the letter threatens to kill you and induces you to pass it on to your “friends and associates”—some variation of this letter has probably reached millions of people. Like an inheritable trait, it promises benefits for you and the people you pass it on to. Like genomes, chain letters undergo natural selection and sometimes parts even get transferred between coexisting “species.”
Reaching beyond these appealing metaphors, the three researchers set out to use the letters as a “test bed” for algorithms used in evolutionary biology. The algorithms were designed to take the genomes of various modern creatures and work backward, by inference and deduction, to reconstruct their phylogeny—their evolutionary trees. If these mathematical methods worked with genes, the scientists suggested, they should work with chain letters, too. In both cases the researchers were able to verify mutation rates and relatedness measures.
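The researchers’ actual algorithms aren’t reproduced here, but the core move can be sketched in a few lines of Python: treat each letter as a genome, score pairwise differences by edit distance, and join the most similar variants first, as simple distance-based tree-building methods do. The sample letters below are invented stand-ins for the 33 collected variants, and `closest_pair` is a hypothetical helper, not anything from the 2003 paper.

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance:
    # the minimum number of insertions, deletions and
    # substitutions needed to turn string a into string b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def closest_pair(variants):
    # Return (distance, i, j) for the two variants with the smallest
    # mutual edit distance -- the pair a distance-based tree builder
    # (e.g. UPGMA-style clustering) would join first.
    pairs = [(edit_distance(variants[i], variants[j]), i, j)
             for i in range(len(variants))
             for j in range(i + 1, len(variants))]
    return min(pairs)

# Invented example variants, mimicking the misspellings and
# omissions the researchers found in real chain letters.
letters = [
    "Copy this letter and send it to nine friends",
    "Copy this letter and send it to nine friend",    # one-character mutation
    "Make seven copies of it exactly as it is written",
]

d, i, j = closest_pair(letters)
print(d, i, j)  # → 1 0 1: the two near-identical variants are joined first
```

Repeatedly merging the closest pair and recomputing distances yields a rough family tree of the variants, which is the sense in which mutation rates and relatedness can be checked against a known copying history.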
Still, most of the elements of culture change and blur too easily to qualify as stable replicators. They are rarely as neatly fixed as a sequence of DNA. Dawkins himself emphasized that he had never imagined founding anything like a new science of memetics. A peer-reviewed Journal of Memetics came to life in 1997—published online, naturally—and then faded away after eight years partly spent in self-conscious debate over status, mission and terminology. Even compared with genes, memes are hard to mathematize or even to define rigorously. So the gene-meme analogy causes uneasiness and the genetics-memetics analogy even more.
Genes at least have a grounding in physical substance. Memes are abstract, intangible and unmeasurable. Genes replicate with near-perfect fidelity, and evolution depends on that: some variation is essential, but mutations need to be rare. Memes are seldom copied exactly; their boundaries are always fuzzy, and they mutate with a wild flexibility that would be fatal in biology. The term “meme” could be applied to a suspicious cornucopia of entities, from small to large. For Dennett, the first four notes of Beethoven’s Fifth Symphony were “clearly” a meme, along with Homer’s Odyssey (or at least the idea of the Odyssey), the wheel, anti-Semitism and writing. “Memes have not yet found their Watson and Crick,” said Dawkins; “they even lack their Mendel.”
Yet here they are. As the arc of information flow bends toward ever greater connectivity, memes evolve faster and spread farther. Their presence is felt if not seen in herd behavior, bank runs, informational cascades and financial bubbles. Diets rise and fall in popularity, their very names becoming catchphrases—the South Beach Diet and the Atkins Diet, the Scarsdale Diet, the Cookie Diet and the Drinking Man’s Diet all replicating according to a dynamic about which the science of nutrition has nothing to say. Medical practice, too, experiences “surgical fads” and “iatro-epidemics”—epidemics caused by fashions in treatment—like the iatro-epidemic of children’s tonsillectomies that swept the United States and parts of Europe in the mid-20th century. Some false memes spread with disingenuous assistance, like the apparently unkillable notion that Barack Obama was not born in Hawaii. And in cyberspace every new social network becomes a new incubator of memes. Making the rounds of Facebook in the summer and fall of 2010 was a classic in new garb:
Sometimes I Just Want to Copy Someone Else’s Status, Word for Word, and See If They Notice.
Then it mutated again, and in January 2011 Twitter saw an outbreak of:
One day I want to copy someone’s Tweet word for word and see if they notice.
By then one of the most popular of all Twitter hashtags (the “hashtag” being a genetic—or, rather, memetic—marker) was simply the word “#Viral.”

In the competition for space in our brains and in the culture, the effective combatants are the messages. The new, oblique, looping views of genes and memes have enriched us. They give us paradoxes to write on Möbius strips. “The human world is made of stories, not people,” writes the novelist David Mitchell. “The people the stories use to tell themselves are not to be blamed.” Margaret Atwood writes: “As with all knowledge, once you knew it, you couldn’t imagine how it was that you hadn’t known it before. Like stage magic, knowledge before you knew it took place before your very eyes, but you were looking elsewhere.” Nearing death, John Updike reflected on
A life poured into words—apparent waste intended to preserve the thing consumed.
Fred Dretske, a philosopher of mind and knowledge, wrote in 1981: “In the beginning there was information. The word came later.” He added this explanation: “The transition was achieved by the development of organisms with the capacity for selectively exploiting this information in order to survive and perpetuate their kind.” Now we might add, thanks to Dawkins, that the transition was achieved by the information itself, surviving and perpetuating its kind and selectively exploiting organisms.
Most of the biosphere cannot see the infosphere; it is invisible, a parallel universe humming with ghostly inhabitants. But they are not ghosts to us—not anymore. We humans, alone among the earth’s organic creatures, live in both worlds at once. It is as though, having long coexisted with the unseen, we have begun to develop the needed extrasensory perception. We are aware of the many species of information. We name their types sardonically, as though to reassure ourselves that we understand: urban myths and zombie lies. We keep them alive in air-conditioned server farms. But we cannot own them. When a jingle lingers in our ears, or a fad turns fashion upside down, or a hoax dominates the global chatter for months and vanishes as swiftly as it came, who is master and who is slave?

What a fascinating thing! Total control of a living organism! — psychologist B.F. Skinner

The corporatization of society requires a population that accepts control by authorities, and so when psychologists and psychiatrists began providing techniques that could control people, the corporatocracy embraced mental health professionals.

In his best-selling book Beyond Freedom and Dignity (1971), psychologist B.F. Skinner argued that freedom and dignity are illusions that hinder the science of behavior modification, which he claimed could create a better-organized and happier society.

During the height of Skinner’s fame in the 1970s, it was obvious to anti-authoritarians such as Noam Chomsky (“The Case Against B.F. Skinner”) and Lewis Mumford that Skinner’s worldview—a society ruled by benevolent control freaks—was antithetical to democracy. In Skinner’s novel Walden Two (1948), his behaviorist hero states, “We do not take history seriously”; to which Lewis Mumford retorted, “And no wonder: if man knew no history, the Skinners would govern the world, as Skinner himself has modestly proposed in his behaviorist utopia.”

As a psychology student during that era, I remember being embarrassed by the silence of most psychologists about the political ramifications of Skinner and behavior modification.

In the mid-1970s, as an intern on a locked ward in a state psychiatric hospital, I first experienced one of behavior modification’s staple techniques, the “token economy.” And that’s where I also discovered that anti-authoritarians try their best to resist behavior modification. George was a severely depressed anti-authoritarian who refused to talk to staff but, for some reason, chose me to shoot pool with. My boss, a clinical psychologist, spotted my interaction with George, and told me that I should give him a token—a cigarette—to reward his “prosocial behavior.” I fought it, trying to explain that I was 20 and George was 50, and this would be humiliating. But my boss subtly threatened to kick me off the ward. So, I asked George what I should do.

George, fighting the zombifying effects of his heavy medication, grinned and said, “We’ll win. Let me have the cigarette.” In full view of staff, George took the cigarette and placed it into the shirt pocket of another patient, then looked at the staff, shaking his head in contempt.

Unlike Skinner, George was not “beyond freedom and dignity.” Anti-authoritarians such as George—who don’t take seriously the rewards and punishments of control-freak authorities—deprive authoritarian ideologies such as behavior modification of total domination.

Behavior Modification Techniques Excite Authoritarians

If you have taken introductory psychology, you probably have heard of Ivan Pavlov’s “classical conditioning” and B.F. Skinner’s “operant conditioning.”

An example of Pavlov’s classical conditioning? A dog hears a bell at the same time he receives food; then the bell is sounded without the food and still elicits a salivating dog. Pair a scantily-clad attractive woman with some crappy beer, and condition men to sexually salivate to the sight of the crappy beer and buy it. The advertising industry has been utilizing classical conditioning for quite some time.

Skinner’s operant conditioning? Rewards, like money, are “positive reinforcements”; removing an aversive stimulus, such as stopping an electric shock, is “negative reinforcement”; and aversive consequences, such as delivering electric shocks, are “punishments.” Operant conditioning pervades the classroom, the workplace, and mental health treatment.

Skinner was heavily influenced by the book Behaviorism (1924) by John B. Watson. Watson achieved some fame in the early 1900s by advocating a mechanical, rigid, affectionless manner in child rearing. He confidently asserted that he could take any healthy infant and, given complete control of the infant’s world, train him for any profession. When Watson was in his early forties, he quit university life and began a new career in advertising at J. Walter Thompson.

Behaviorism and consumerism, two ideologies which achieved tremendous power in the twentieth century, are cut from the same cloth. The shopper, the student, the worker, and the voter are all seen by consumerism and behaviorism the same way: passive, conditionable objects.

Who are Easiest to Manipulate?

Those who rise to power in the corporatocracy are control freaks, addicted to the buzz of power over other human beings, and so it is natural for such authorities to have become excited by behavior modification.

Alfie Kohn, in Punished by Rewards (1993), documents with copious research how behavior modification works best on dependent, powerless, infantilized, bored, and institutionalized people. And so for authorities who get a buzz from controlling others, this creates a terrifying incentive to construct a society that creates dependent, powerless, infantilized, bored, and institutionalized people.

Many of the most successful applications of behavior modification have involved laboratory animals, children, or institutionalized adults. According to management theorists Richard Hackman and Greg Oldham in Work Redesign (1980), “Individuals in each of these groups are necessarily dependent on powerful others for many of the things they most want and need, and their behavior usually can be shaped with relative ease.”

Similarly, researcher Paul Thorne reports in the journal International Management (“Fitting Rewards,” 1990) that in order to get people to behave in a particular way, they must be “needy enough so that rewards reinforce the desired behavior.”

It is also easiest to condition people who dislike what they are doing. Rewards work best for those who are alienated from their work, according to researcher Morton Deutsch (Distributive Justice, 1985). This helps explain why attention deficit hyperactivity disorder (ADHD)-labeled kids perform as well as so-called “normals” on boring schoolwork when paid for it (see Thomas Armstrong’s The Myth of the A.D.D. Child, 1995). Correlatively, Kohn offers research showing that rewards are least effective when people are doing something that isn’t boring.

In a review of the literature on the harmful effects of rewards, researcher Kenneth McGraw concluded that rewards will have a detrimental effect on performance under two conditions: “first, when the task is interesting enough for the subjects that the offer of incentives is a superfluous source of motivation; second, when the solution to the task is open-ended enough that the steps leading to a solution are not immediately obvious.”

Kohn also reports that at least ten studies show rewards work best on simplistic and predictable tasks. How about more demanding ones? In research on preschoolers (working for toys), older children (working for grades), and adults (working for money), all groups avoided challenging tasks. The bigger the reward, the easier the task chosen; without rewards, human beings are more likely to accept a challenge.

So, there is an insidious incentive for control-freaks in society—be they psychologists, teachers, advertisers, managers, or other authorities who use behavior modification. Specifically, for controllers to experience the most control and gain a “power buzz,” their subjects need to be infantilized, dependent, alienated, and bored.

The Anti-Democratic Nature of Behavior Modification

Behavior modification is fundamentally a means of controlling people and thus for Kohn, “by its nature inimical to democracy, critical questioning, and the free exchange of ideas among equal participants.”

For Skinner, all behavior is externally controlled, and we don’t truly have freedom and choice. Behaviorists see freedom, choice, and intrinsic motivations as illusory, or what Skinner called “phantoms.” Back in the 1970s, Noam Chomsky exposed Skinner’s unscientific view of science, specifically Skinner’s view that science should be prohibited from examining internal states and intrinsic forces.

In democracy, citizens are free to think for themselves and explore, and are motivated by very real—not phantom—intrinsic forces, including curiosity and a desire for justice, community, and solidarity.

What is also scary about behaviorists is that their external controls can destroy intrinsic forces of our humanity that are necessary for a democratic society.

Researcher Mark Lepper was able to diminish young children’s intrinsic joy of drawing with Magic Markers by awarding them personalized certificates for coloring with a Magic Marker. Even a single reward for doing something enjoyable can kill interest in it for weeks.

Behavior modification can also destroy our intrinsic desire for compassion, which is necessary for a democratic society. Kohn offers several studies showing “children whose parents believe in using rewards to motivate them are less cooperative and generous [children] than their peers.” Children of mothers who relied on tangible rewards were less likely than other children to care and share at home.

How, in a democratic society, do children become ethical and caring adults? They need a history of being cared about, taken seriously, and respected, which they can model and reciprocate.

Today, the mental health profession has gone beyond behavioral technologies of control. It now diagnoses noncompliant toddlers with attention deficit hyperactivity disorder, oppositional defiant disorder, and pediatric bipolar disorder and attempts to control them with heavily sedating drugs. While Big Pharma directly profits from drug prescribing, the entire corporatocracy benefits from the mental health profession’s legitimization of conditioning and controlling.

Cultures that endure carve out a protected space for those who question and challenge national myths. Artists, writers, poets, activists, journalists, philosophers, dancers, musicians, actors, directors and renegades must be tolerated if a culture is to be pulled back from disaster. Members of this intellectual and artistic class, who are usually not welcome in the stultifying halls of academia where mediocrity is triumphant, serve as prophets. They are dismissed, or labeled by the power elites as subversive, because they do not embrace collective self-worship. They force us to confront unexamined assumptions, ones that, if not challenged, lead to destruction. They expose the ruling elites as hollow and corrupt. They articulate the senselessness of a system built on the ideology of endless growth, ceaseless exploitation and constant expansion. They warn us about the poison of careerism and the futility of the search for happiness in the accumulation of wealth. They make us face ourselves, from the bitter reality of slavery and Jim Crow to the genocidal slaughter of Native Americans to the repression of working-class movements to the atrocities carried out in imperial wars to the assault on the ecosystem. They make us unsure of our virtue. They challenge the easy clichés we use to describe the nation—the land of the free, the greatest country on earth, the beacon of liberty—to expose our darkness, crimes and ignorance. They offer the possibility of a life of meaning and the capacity for transformation.

Human societies see what they want to see. They create national myths of identity out of a composite of historical events and fantasy. They ignore unpleasant facts that intrude on self-glorification. They trust naively in the notion of linear progress and in assured national dominance. This is what nationalism is about—lies. And if a culture loses its ability for thought and expression, if it effectively silences dissident voices, if it retreats into what Sigmund Freud called “screen memories,” those reassuring mixtures of fact and fiction, it dies. It surrenders its internal mechanism for puncturing self-delusion. It makes war on beauty and truth. It abolishes the sacred. It turns education into vocational training. It leaves us blind. And this is what has occurred. We are lost at sea in a great tempest. We do not know where we are. We do not know where we are going. And we do not know what is about to happen to us.

The psychoanalyst John Steiner calls this phenomenon “turning a blind eye.” He notes that often we have access to adequate knowledge but because it is unpleasant and disconcerting we choose unconsciously, and sometimes consciously, to ignore it. He uses the Oedipus story to make his point. He argued that Oedipus, Jocasta, Creon and the “blind” Tiresias grasped the truth, that Oedipus had killed his father and married his mother as prophesied, but they colluded to ignore it. We too, Steiner wrote, turn a blind eye to the dangers that confront us, despite the plethora of evidence that if we do not radically reconfigure our relationships to each other and the natural world, catastrophe is assured. Steiner describes a psychological truth that is deeply frightening.

I saw this collective capacity for self-delusion among the urban elites in Sarajevo and later Pristina during the wars in Bosnia and Kosovo. These educated elites steadfastly refused to believe that war was possible although acts of violence by competing armed bands had already begun to tear at the social fabric. At night you could hear gunfire. But they were the last to “know.” And we are equally self-deluded. The physical evidence of national decay—the crumbling infrastructures, the abandoned factories and other workplaces, the rows of gutted warehouses, the closure of libraries, schools, fire stations and post offices—that we physically see is, in fact, unseen. The rapid and terrifying deterioration of the ecosystem, evidenced in soaring temperatures, droughts, floods, crop destruction, freak storms, melting ice caps and rising sea levels, is met blankly with Steiner’s “blind eye.”

Oedipus, at the end of Sophocles’ play, cuts out his eyes and with his daughter Antigone as a guide wanders the countryside. Once king, he becomes a stranger in a strange country. He dies, in Antigone’s words, “in a foreign land, but one he yearned for.”

William Shakespeare in “King Lear” plays on the same theme of sight and sightlessness. Those with eyes in “King Lear” are unable to see. Gloucester, whose eyes are gouged out, finds in his blindness a revealed truth. “I have no way, and therefore want no eyes,” Gloucester says after he is blinded. “I stumbled when I saw.” When Lear banishes his only loyal daughter, Cordelia, whom he accuses of not loving him enough, he shouts: “Out of my sight!” To which Kent replies:

See better, Lear, and let me still remain
The true blank of thine eye.

The story of Lear, like the story of Oedipus, is about the attainment of this inner vision. It is about morality and intellect that are blinded by empiricism and sight. It is about understanding that the human imagination is, as William Blake saw, our manifestation of Eternity. “Love without imagination is eternal death.”

The Shakespearean scholar Harold Goddard wrote: “The imagination is not a faculty for the creation of illusion; it is the faculty by which alone man apprehends reality. The ‘illusion’ turns out to be truth.” “Let faith oust fact,” Starbuck says in “Moby-Dick.”

“It is only our absurd ‘scientific’ prejudice that reality must be physical and rational that blinds us to the truth,” Goddard warned. There are, as Shakespeare wrote, “things invisible to mortal sight.” But these things are not vocational or factual or empirical. They are not found in national myths of glory and power. They are not attained by force. They do not come through cognition or logical reasoning. They are intangible. They are the realities of beauty, grief, love, the search for meaning, the struggle to face our own mortality and the ability to face truth. And cultures that disregard these forces of imagination commit suicide. They cannot see.

“How with this rage shall beauty hold a plea,” Shakespeare wrote, “Whose action is no stronger than a flower?” Human imagination, the capacity to have vision, to build a life of meaning rather than utilitarianism, is as delicate as a flower. And if it is crushed, if a Shakespeare or a Sophocles is no longer deemed useful in the empirical world of business, careerism and corporate power, if universities think a Milton Friedman or a Friedrich Hayek is more important to their students than a Virginia Woolf or an Anton Chekhov, then we become barbarians. We assure our own extinction. Students who are denied the wisdom of the great oracles of human civilization—visionaries who urge us not to worship ourselves, not to kneel before the base human emotion of greed—cannot be educated. They cannot think.

To think, we must, as Epicurus understood, “live in hiding.” We must build walls to keep out the cant and noise of the crowd. We must retreat into a print-based culture where ideas are not deformed into sound bites and thought-terminating clichés. Thinking is, as Hannah Arendt wrote, “a soundless dialogue between me and myself.” But thinking, she wrote, always presupposes the human condition of plurality. It has no utilitarian function. It is not an end or an aim outside of itself. It is different from logical reasoning, which is focused on a finite and identifiable goal. Logical reason, acts of cognition, serve the efficiency of a system, including corporate power, which is usually morally neutral at best, and often evil. The inability to think, Arendt wrote, “is not a failing of the many who lack brain power but an ever-present possibility for everybody—scientists, scholars, and other specialists in mental enterprises not excluded.”

Our corporate culture has effectively severed us from human imagination. Our electronic devices intrude deeper and deeper into spaces that were once reserved for solitude, reflection and privacy. Our airwaves are filled with the tawdry and the absurd. Our systems of education and communication scorn the disciplines that allow us to see. We celebrate prosaic vocational skills and the ridiculous requirements of standardized tests. We have tossed those who think, including many teachers of the humanities, into a wilderness where they cannot find employment, remuneration or a voice. We follow the blind over the cliff. We make war on ourselves.

The vital importance of thought, Arendt wrote, is apparent only “in times of transition when men no longer rely on the stability of the world and their role in it, and when the question concerning the general conditions of human life, which as such are properly coeval with the appearance of man on earth, gain an uncommon poignancy.” We never need our thinkers and artists more than in times of crisis, as Arendt reminds us, for they provide the subversive narratives that allow us to chart a new course, one that can assure our survival.

“What must I do to win salvation?” Dimitri asks the elder Zosima in “The Brothers Karamazov,” to which Zosima answers: “Above all else, never lie to yourself.”

And here is the dilemma we face as a civilization. We march collectively toward self-annihilation. Corporate capitalism, if left unchecked, will kill us. Yet we refuse, because we cannot think and no longer listen to those who do think, to see what is about to happen to us. We have created entertaining mechanisms to obscure and silence the harsh truths, from climate change to the collapse of globalization to our enslavement to corporate power, that will mean our self-destruction. If we can do nothing else we must, even as individuals, nurture the private dialogue and the solitude that make thought possible. It is better to be an outcast, a stranger in one’s own country, than an outcast from one’s self. It is better to see what is about to befall us and to resist than to retreat into the fantasies embraced by a nation of the blind.