How Do You Think?

The human brain is a curious organ. We are continually learning new and very exciting things about this incredibly complicated masterpiece of evolution. But, every now and then, we learn something that shakes our understanding of accepted concepts to the core. This has been the case recently with our understanding of placebos.

Placebos, as I’m sure you know, are fake substances, treatments, or procedures that, in and of themselves, have no active ingredient or direct effect. Despite their inert nature, placebos can and do elicit some powerful responses. This has long been attributed to a person’s beliefs and expectations about the legitimacy of the treatment. One Japanese study, for example, showed that people who believed they were being exposed to real poison ivy (but were actually exposed to a placebo) developed a painful response that mimicked the actual reaction to legitimate poison ivy (Blakeslee, 1998). In this study, the subjects’ belief alone was responsible for the skin rashes. The mind’s powerful capacity, in this case, produced very real and painful responses regardless of the presence of any true irritant.

This is the placebo effect. Again, attribution lies in the person’s beliefs and expectations rather than in the substance, drug, treatment, or procedure. It is important to note that although the beliefs and expectations emanate from one’s mind, the bodily responses can be very real. Scientists use placebos in trials to evaluate treatment efficacy because of this innate gullibility.

The gold standard for evaluating treatment efficacy (in drug trials) involves double-blind, randomized, placebo-controlled procedures because people seem to get better simply from getting attention and/or treatment, regardless of its form (within limits). This is only true where psychological factors overlay the manifestation of symptoms (Crislip, 2006). Placebos do little to reduce serious medical issues like MS, stroke, or cancer; but people with particularly subjective experiences of pain, affective disorders, and psychologically mediated disorders can and do report improvements when unknowingly taking placebos. Formerly, it was presumed that the patient had to believe that the sugar pill was indeed an actual drug. It is this presumption that has recently been questioned. If the patient knows it is a placebo, will that reduce the treatment efficacy? Recent evidence suggests not!

We must consider a few important things about placebos. The American Medical Association says that “the use of a placebo without the patient’s knowledge may undermine trust, compromise the patient-physician relationship, and result in medical harm to the patient.” Regardless, Kaptchuk et al. (2010) noted that “a recent national survey of internists and rheumatologists in the US found that while only small numbers of US physicians surreptitiously use inert placebo pills and injections, approximately 50% prescribe medications that they consider to have no specific effect on patients’ conditions and are used solely as placebos (sometimes called ‘impure placebos’).” Prescribing placebos necessarily involves deception, and this brings the practice into question. But deception may no longer be necessary!

Researchers at Harvard Medical School and Beth Israel Deaconess Medical Center in Boston, Massachusetts, just published a startling study (Placebos without deception: A randomized controlled trial in irritable bowel syndrome) in the Public Library of Science journal PLoS ONE (Kaptchuk et al., 2010). As the title suggests, placebos were knowingly used to evaluate their effectiveness in treating irritable bowel syndrome (IBS). Participants were recruited through newspaper and flier advertisements for “a novel mind-body management study of IBS” and through referrals from health care professionals. The authors note that the symptoms of IBS constitute one of the top 10 reasons people seek primary-care treatment.

Eighty subjects, primarily female (70%), with a mean age of 47 years (±18), all diagnosed with IBS, were randomly assigned to one of two groups for a three-week trial. One group was an open-label placebo group, where the treatment provided was acknowledged to be placebo pills “made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes.” The second group received no treatment, but they did receive the same quality of interaction with health-care providers that the open-placebo participants received. Several standardized measures of IBS symptom severity were used.

The subjects knew they were taking inert pills, but the suggestion was made, by the physician, that the pills had resulted in significant improvements in previous clinical trials (which was true, and due to the placebo effect). As it turns out, the open-label placebo produced significantly higher mean global improvement scores at both the 11-day trial midpoint and at the end of the three-week treatment.

Of the participants in the open- or honest-placebo group, 59% experienced adequate relief versus 35% of those in the no-treatment group. The authors concluded that “Placebos administered without deception may be an effective treatment for IBS. Further research is warranted in IBS, and perhaps other conditions, to elucidate whether physicians can benefit patients using placebos consistent with informed consent.” The difference between the groups is substantial. Although 35% of the subjects in the no-treatment group experienced adequate symptom resolution without any treatment, they were the recipients of some physician attention, and such attention alone is associated with improved outcomes. Further, the 24-point difference is unlikely to be explained by differences in the natural history of the disease (the natural ebb and flow of symptom presentation) or by regression toward the mean (the natural tendency for repeated measures to move more closely to the average value) (Crislip, 2006). Because subjects were randomly assigned to the groups, these latter effects are likely to cancel each other out.
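To get a rough sense of whether a gap like 59% versus 35% could plausibly arise by chance, one can run a simple two-proportion z-test. The sketch below is illustrative only: it assumes an even split of roughly 40 subjects per arm (the trial enrolled 80 in total; the exact arm sizes are not given here), so treat it as a back-of-the-envelope check of the reasoning rather than a reanalysis of the study.

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: is the gap between two observed rates
    larger than sampling noise alone would plausibly produce?"""
    # Pooled proportion under the null hypothesis of no true difference
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 59% relief on open-label placebo vs. 35% with no treatment,
# assuming ~40 subjects per arm (an assumption for illustration)
z = two_proportion_z(0.59, 40, 0.35, 40)
print(round(z, 2))  # about 2.15, above the conventional 1.96 cutoff for p < .05
```

A z-score above roughly 1.96 corresponds to the conventional 5% significance threshold, which is consistent with the authors calling the improvement significant.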

So, as it turns out, taking an honest placebo may substantially reduce IBS symptoms in over half of sufferers. How does this happen? I suggest that because an authority figure indicated that the pill had resulted in significant symptom reduction in previous trials, there remains a high probability of the expectancy effect. I question whether it should be considered an “honest placebo” at all.

So we have the attention provided by the physician, the expectancy effect, and the added ritual of taking the pills. The authors suggest that this latter factor may play an important role in the effect. Regardless, the sugar pill appears to have done something. Perhaps this helps explain why billions of dollars are spent annually on vitamins, “natural remedies,” and homeopathy despite a lack of biological plausibility or evidence of treatment efficacy. We are, it seems, inherently gullible people. Placebos work because the mind is a powerful thing: simply thinking you’re being treated can make you feel better. Also on the plus side, there is no long list of scary side effects (although we could get them, I suppose, if doctors were to imply the possibility).

I have learned some things of late that have congealed in such a way as to leave me in a bit of an existential crisis. One of my most precious beliefs, specifically the importance of parenting style as it pertains to child intelligence and personality outcomes, has been relegated to the proverbial dust heap alongside the id, the wandering uterus, and the Oedipus complex.

Why would the salience of parenting style even come into question? It seems ridiculous to pose such a question. Of course parenting style matters! It is widely believed that parents can and do shape and mold their youngsters in a meaningful way that plays out in the formation of an adult. But, and this is a big but, the reality is that what you do as a parent has a much narrower impact than you might think.

I discussed this last week in my post Ten Best Parenting Tips: But Does it Really Matter?, where I shared the Parent’s Ten (Epstein, 2010) and questioned the quality of the research used to delineate these ten great tips. In that post I noted that:

“We are led to believe, based on the results of [the Epstein study], that we, as parents, can shape our children, and thus, by engaging in The Parent’s Ten, produce happier, healthier, and wiser children. But can we really? Is there an illusion of cause here? Are these simply correlations? The findings of behavioral genetics would suggest that this is an illusion – that these variables vary in predictable ways based on the influence of a third variable – genes.”

Genes are the sticking point. Epstein did not control for the effect of shared genes in this study. It is likely that children who have well-functioning parents will be likewise well functioning, not because of the parenting style employed by their parents, but because of their shared genes. Well-functioning and happy adults breed happy and well-functioning children. Ultimately, parenting style seems to have little impact on such outcomes.

The current research from behavioral genetics provides a preponderance of evidence leading to the same conclusion: the home environment, as it is influenced by parents, accounts for 0 to 10% of the variance in the personality and intelligence outcomes of children! Heredity (genes) accounts for about 50%. A long-standing question about the remainder has ultimately pointed in the direction of the child’s peer group, which accounts for 40-50% of the variance in personality and intelligence outcomes (Pinker, 2002). As it turns out, peers are the nurture influence in the nature-nurture interplay.
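The variance figures above can be made concrete with a toy simulation. The sketch below is not any actual behavioral-genetics model; it simply builds a trait score from independent components whose weights match the cited shares (about 50% heredity, 40% peers, 10% home) and confirms that independent influences add in variance.

```python
import random

random.seed(1)

# Illustrative only: components with variances chosen to mirror the
# shares cited in the text (~50% genes, ~40% peers, ~10% home).
def simulate_trait():
    genes = random.gauss(0, 0.50 ** 0.5)  # variance 0.50
    peers = random.gauss(0, 0.40 ** 0.5)  # variance 0.40
    home  = random.gauss(0, 0.10 ** 0.5)  # variance 0.10
    return genes + peers + home

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

scores = [simulate_trait() for _ in range(100_000)]
# Independent influences add in variance: total ≈ 0.5 + 0.4 + 0.1 = 1.0
print(round(variance(scores), 2))  # close to 1.0
```

The point of the toy model is simply that when influences are independent, their variance shares must sum to 100%, which is why a large genetic share and a large peer share leave so little room for the home environment.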

This latter notion runs counter to nearly everything we have been taught regarding human development over the last 100 years (Gladwell, 1998). Freud first put parents at the core of the child’s personality and neurosis development, and there they have remained. Mothers in particular have fielded more than their share of blame with regard to the pathology of their offspring. A cold maternal parenting style, after all, had been blamed for autism, and perfection-seeking mothers have been blamed for the development of anorexia in their teenage daughters. We know that these relationships are unfounded. Regardless, the thinking persists, and bad outcomes are attributed to bad parenting whereas good outcomes are the fruit of sound parenting. The problem with this type of thinking is that the research has not borne it out.

The Minnesota studies of twins and the Colorado Adoption Project have made it clear (Harris, 1998): parents contribute their genes, and that seems to be it. When it comes to personality variables such as openness to experience, conscientiousness, extroversion-introversion, antagonism-agreeableness, and neuroticism, parents exert influence only through heritability. Factors like IQ, language proficiency, religiosity, nicotine dependence, hours of television watching, and political conservatism/liberalism are all hugely influenced by genes (Pinker, 2002).

How do we know this? Adopted children resemble their biological parents, not their adoptive parents (Gladwell, 1998). Also, as Steven Pinker (2002) points out, “Identical twins reared apart are highly similar; identical twins reared together are more similar than fraternal twins reared together; biological siblings are far more similar than adoptive siblings. All this translates into substantial heritability values…”
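The twin comparisons Pinker describes can be turned into a rough heritability number using Falconer’s formula, a classic back-of-the-envelope method from quantitative genetics: identical (MZ) twins share essentially all their genes and fraternal (DZ) twins about half, so doubling the gap between the two correlations approximates heritability. The correlations below are illustrative values in the range often reported for IQ-like traits, not figures from the studies cited here.

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).
    MZ twins share ~100% of genes, DZ twins ~50%, so doubling the
    correlation gap gives a rough estimate of heritability."""
    return 2 * (r_mz - r_dz)

# Illustrative twin correlations (assumed values, not study data)
h2 = falconer_heritability(r_mz=0.85, r_dz=0.60)
print(round(h2, 2))  # 0.5, i.e. roughly 50% of variance attributed to genes
```

Notice how this simple arithmetic reproduces the roughly 50% heredity figure quoted earlier: a modest gap between identical- and fraternal-twin similarity implies a large genetic contribution.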

And consider smoking. Who can forget the TV ad portraying a child watching and pondering the emulation of his father’s smoking behavior? The slogan was something akin to “like father, like son.” They had it right, but the smoking behavior in front of the child was not the culprit. Children of smokers are twice as likely to smoke as children of non-smokers. What we see is that nicotine addiction is heavily influenced by genes. Adopted children of smokers do not have elevated rates of smoking, and this greatly diminishes the role that modeling plays in the equation.

The psychologist Eric Turkheimer pulled together the unusually robust evidence from extensive studies of twins (fraternal and identical) reared together and apart, as well as studies of adopted children relative to biological children, and concluded that there are three important laws that help explain the development of personality characteristics and intelligence. Steven Pinker (2002) suggests that these laws constitute the most important discovery in the history of psychology. The three laws are as follows:

All human traits are heritable;

The effect of being raised in the same family is smaller than the effect of the genes; and

A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families.

So does this mean that what we do as parents doesn’t really matter? And does it mean that my role as a child psychologist helping parents manage very difficult children is a waste of time? This is my crisis.

Well … it does matter! How a parent treats and manages a child within the home will affect how the child behaves in the home and how the child feels about the parent. These are important issues. If poor parenting results in difficult feelings toward the parent, “these feelings can last a lifetime – but they don’t necessarily cross over into the life the child leads outside the home” (Harris, quoted in Gladwell, 1998). Here is the important point: “whatever our parents do to us is overshadowed, in the long run, by what our peers do to us” (Harris, quoted in Gladwell, 1998). The home environment is very important for all involved, and parenting style can greatly impact that environment. So parenting style does matter, if only for the establishment of sanity in the home. It seems to me that treating a child well is an ethical obligation. But if that’s not enough encouragement for treating another human being well, perhaps you should do so in hopes that when you are old and frail, your children may treat you well (Harris, paraphrased in Gladwell, 1998).

Pinker (2002) adds an important provision:

“Differences among homes don’t matter within the samples of homes netted by these studies, which tend to be more middle-class than the population as a whole. But differences between those samples and other kinds of homes could matter. The studies exclude cases of criminal neglect, physical and sexual abuse, and abandonment in a bleak orphanage, so they do not show that extreme cases fail to leave scars. Nor can they say anything about the differences between cultures… In general, if a sample comes from a restricted range of homes, it may underestimate effects of homes across a wider range.”

We do know that parenting style can have adverse consequences when a child is subjected to neglect or abuse. This is hugely important! It’s not that parenting style doesn’t matter; it matters greatly! Parents can establish a happy, encouraging environment and provide for the development of essential skills and knowledge; but, again, over the long term, it seems that these contributions do not shape the personality or intelligence of their children. Their genes are responsible for their contributions. What seems to be more important, when it comes to shaping the genetic contribution, is where a parent raises their child. It’s the peer group that finishes the job. Now that is scary! And I thought my crisis was over.

What makes a good parent? Really? What can we do to ensure that our children grow up happy, healthy, and wise? There is a lot of advice out there, some of which, on the surface, seems quite sage. But history is replete with really bad advice, some based in moral authority and some in the ill-formed wisdom of so-called experts. New advice is commonplace, and how often have you been confused by the contradictory nature of yesterday’s and today’s tips? There are enough schools of thought out there to confirm and satisfy almost any advocate of any “reasonably sane” parenting approach, and even some not-so-prudent approaches. There is a pretty good reason for this variability, and I’ll get to that in a minute, but first, let’s look at a recent article from Scientific American MIND that summarizes a scientific analysis resulting in a list of the ten most effective child-rearing practices.

In What Makes a Good Parent?, the author, Robert Epstein, shares the results of a study on parenting skills that he carried out at UC San Diego with a student (Shannon Fox). The results were presented at the annual convention of the American Psychological Association this past summer. Epstein and Fox looked at parenting techniques advised by experts, strategies commonly employed by parents, and strategies that seemingly had efficacy in the real world. They collected their data online from nearly 2,000 parents who volunteered to take a test of parenting skills at Epstein’s website: http://MyParentingSkills.com. The test was devised by Epstein based on the literature: ten parenting techniques that had robust evidence of good outcomes were selected and measured. Epstein had the 10 skills assessed by 11 parenting experts to further evaluate their validity. The participants answered 100 questions pertaining to their agreement (on a 5-point agree-disagree scale) with the ten parenting variables (e.g., “I generally encourage my child to make his or her own choices,” “I try to involve my child in healthful outdoor activities,” “No matter how busy I am, I try to spend quality time with my child.”). In addition, the test asked questions pertaining to important variables such as the income and educational levels of the parents, marital status, parenting experience, and age, as well as questions regarding the happiness, health, and functioning capacity of their children.

The results, coined by the author as The Parent’s Ten, make perfect sense to me as a parent of three reasonably well-adjusted, happy, and successful college students. They also gel with my exposure to the literature and my experience guiding parents in my professional capacity as a child psychologist over the last 16 years. Here is an excerpt from the article:

“Here are 10 competencies that predict good parenting outcomes, listed roughly in order from most to least important. The skills – all derived from published studies – were ranked based on how well they predict a strong parent-child bond and children’s happiness, health and success.

Love and affection. You support and accept the child, are physically affectionate, and spend quality one-on-one time together.

Stress management. You take steps to reduce stress for yourself and your child, practice relaxation techniques, and promote positive interpretations of events.

Relationship skills. You maintain a healthy relationship with your spouse, significant other or co-parent and model effective relationship skills with other people.

Autonomy and independence. You treat your child with respect and encourage him or her to become self-sufficient and self-reliant.

Education and learning. You promote and model learning and provide educational opportunities for your child.

Life skills. You provide for your child, have a steady income and plan for the future.

Behavior management. You make extensive use of positive reinforcement and punish only after other methods of managing behavior have failed.

Health. You model a healthy lifestyle and good habits, such as regular exercise and proper nutrition, for your child.

Religion. You support spiritual or religious development and participate in spiritual or religious activities.

Safety. You take precautions to protect your child and maintain awareness of the child’s activities and friends.”

Although you may not find these results all that surprising, Epstein suggests that they are, because if you look closely at the list you’ll see that the vast majority of the skills are parental personality and/or life-skill issues. As this study suggests, a child’s well-being seems most closely associated with how a parent treats oneself (e.g., manages stress and maintains a healthy diet and exercise regimen), how one gets along with the co-parent (e.g., maintains and models important healthy relationships), the efficacy of one’s life skills (e.g., sustains income and plans for the future), and how deeply one values education.

These “skills” constitute a full 50% of the list and, when weighted based on the degree of association, likely account for a huge and disproportionate amount of the influence on child happiness, health, and adaptive-functioning outcomes. And several of the other “skills” (e.g., affection, respect for the dignity of children, the degree of parental control imposed, and even level of spirituality) really are behaviors that are known to vary with one other crucial, yet unmentioned, variable.

You see, the presumption here is that children are brought into the world as malleable blank slates that we can mold through the type of parenting we employ. The reality is that parents who employ these skills likely do so as a function of their intelligence and personality, which are heavily influenced by their genes. The truth of the matter is likely that parents who care for themselves, have good social skills, and plan for the future will have happier, healthier, and wiser children, not because of the parenting skills employed during their children’s upbringing, but because of their shared genes. Epstein did not control for the effect of shared genes in this study. Neither have most of the researchers looking at the relationship between parenting behavior and child outcomes (Pinker, 2002). The current research from behavioral genetics suggests that the home environment, as it is influenced by parents, accounts for 0 to 10% of the variance in the wellness outcomes of children! Heredity accounts for about 50%, and the child’s peer group accounts for the remainder (40-50%) (Pinker, 2002).

Epstein asks what parental characteristics are associated with good outcomes and finds that women produce only slightly better outcomes than men. Likewise, married individuals report slightly happier children than divorced parents, and gay individuals actually report slightly happier children than do straight individuals. No differences were noted with respect to race or ethnicity, but more educated individuals had the best outcomes. He notes that “Some people just seem to have a knack for parenting, which cannot be easily described in terms of specific skills.” He’s got that right! That knack, although unacknowledged by Epstein, is largely a function of one’s genes. Temperament is a personality trait that we know is hugely influenced by genes, and Epstein notes that “Keeping calm is probably step one in good parenting.”

So we have another conundrum. We are led to believe, based on the results of this study, that we, as parents, can shape our children, and thus, by engaging in The Parent’s Ten, produce happier, healthier, and wiser children. But can we really? Is there an illusion of cause here? Are these simply correlations? The findings of behavioral genetics would suggest that this is an illusion – that these variables vary in predictable ways based on the influence of a third variable – genes.

Next week I’ll delve into this notion of whether how one parents really matters. This exploration comes with significant discomfort for me, as I am a behavioral child psychologist with 11 years of training and 16 years of practice steeped in the belief that I can help parents make a difference in the lives of their children. I have long accepted the notion that the nature-nurture debate is not an either-or issue. I see in my life and practice that outcomes are clearly the result of the influences of both nature and nurture. Regardless, I have held the notion that it is parenting, to a large extent, that accounts for a large portion of the nurturing influence. Now I have to look carefully at the evidence, be willing to shed the ideological notion that we are blank slates, and accept the reality of the situation, no matter how hard and contrary to my beliefs it may be. This necessitates true intellectual honesty and deep scientific scrutiny.

Are you happy? What makes you happy? These questions, although seemingly rudimentary, are more difficult to answer than you might think. As it turns out, happiness, as a condition, eludes clear understanding.

Throughout history, mankind has grappled with a definition of this emotion. Perhaps the most meaningful framing of happiness is rooted in the Aristotelian concept of eudaimonia, which suggests that fulfillment comes not from experiencing the feeling of joy, but from living a virtue-based and meaningful life. Central to this notion is an emphasis on being a good person. Others have put forth perhaps equally telling notions. Nietzsche wrote that “the secret of reaping the greatest fruitfulness and greatest enjoyment from life is to live dangerously.” Bertrand Russell noted that “To be without some of the things you want is an indispensable part of happiness.” These latter two concepts acknowledge something important about the reality of happiness that Ayn Rand denied when she wrote that happiness is “a state of non-contradictory joy, joy without penalty or guilt” (Salerno, 2010).

We all know (I hope) the feeling of happiness. We might surmise that, given the power to manipulate our circumstances, we would be able to engineer our world in a way that would guarantee this desirable state. But, as Nietzsche and Russell suggest, happiness turns out to be paradoxical.

We think we know what we want, but the acquisition of one’s desires often fails to live up to expectations, and sometimes it brings regret, remorse, guilt, or dissonance. The situations or items we covet in hopes that they will bring us happiness come with drawbacks. Many women, for example, desire children, yet many mothers struggle with the need for fulfillment beyond domestic responsibilities (Salerno, 2010), and these two pursuits often collide in stressful ways. We are, it seems, hard-wired to pursue some goals that are, by their very nature, contradictory where happiness is concerned.

Life’s most prized aspirations, namely children and wealth, actually do not tend to bolster happiness. The research on the impact of children on maternal levels of happiness suggests that child rearing has a neutral to negative effect on quality of life; positive associations are hard to come by. And although there appears to be a slight positive relationship between wealth and happiness, there are numerous caveats to this correlation. Lottery winners, for example, after the initial excitement of the win, end up no happier, or even less contented, than they were before the draw. And people in the United States, the richest nation in the world, report overall lower levels of happiness than folks from poorer countries (Salerno, 2010).

In reality, our daily lives are comprised of unending battles between opposing objectives. On the one hand, we are drawn to selfish, indulgent freedom, while at the same time we are constrained by altruism, frugality, and commitment (Salerno, 2010). We can’t have it both ways, and this conundrum often leaves us conflicted. After all, if we all were to pursue our own selfish interests, we would have a highly dysfunctional, disjointed, and even dangerous society. The drive for social cohesion and the necessary restraint have deep evolutionary and strongly compelling roots. And then there is the drive to build social status through material acquisition or conspicuous consumption. This pursuit is really a zero-sum game: whatever you accumulate, there are always others who have bigger and better houses, cars, and jewels. It is all quite complicated, and we are a curious lot. We want happiness, yet often what we aspire to diminishes our happiness. I am reminded of the proverb: “Be careful what you wish for. You just might get it.” What we want and what really brings happiness are often opposing forces, or at least likely to stir conflict. This seems to be especially true with regard to deeper, genetically driven, intuitive drives (e.g., procreation and status building).

A similar paradox plays out in society, where it is need, or misery, that catalyzes advancement. To paraphrase Plato: necessity is the mother of invention. We prosper through innovation, creativity, and achievement, all of which, to some degree, stem from discontent (Salerno, 2010). Sociologists Allan Horwitz and Jerome Wakefield suggest in their book, The Loss of Sadness, that sadness has a clear evolutionary purpose – essentially to propel adaptation. Daniel Gilbert (2006), a happiness guru from Harvard University, once wrote that “We have a word for animals that never feel distress, anxiety, fear, and pain. That word is dinner.” It seems that contentedness fosters passivity and stagnation. For example, college students who score very high on measures of happiness rarely have correspondingly high GPAs, and the perkiest adults among us tend to make less money than their more even-keeled colleagues (Salerno, 2010). I refer to yet another paradox in “Adversity: Had Enough?”, where I shared research contending that happiness is strongest in those who have experienced two to four adverse life events. Moderate amounts of adversity seem to bolster one’s capacity to tolerate and cope with future stressors and elevate one’s general level of contentedness (Seery, 2010). One might assume that smooth sailing brings happiness, but as it turns out, this is not quite true. And a newly released study from Harvard University suggests that lower levels of happiness are associated with mind-wandering (Killingsworth, 2010). I discussed this in Multitasking: The Illusion of Efficacy, where I suggested that the mantra of FOCUS & FINISH will result in more efficiency (Nass, 2010); as it turns out, it may also bring one a better mood.

Okay, so what brings people true happiness? There are general circumstances that appear to be associated with higher overall levels of happiness. For example, married people tend to be happier than singles, churchgoers happier than atheists, and people with friends happier than the insular (Salerno, 2010). Recent findings also suggest that people in their 50s are happier than those in their 20s (Stone, 2010).

To me, happiness has to do with how you frame it, and mostly with your expectations. It is helpful to think of life as a transient series of states dappled with moments of joy. It is unrealistic to expect a chronic state of bliss; we are much too inclined to misery to ever accomplish this. And this brings me to perhaps my greatest offering:

Misery exists in the gap

between expectations and reality.

Think about it. I am suggesting that a flexible and open-minded focus on the world and the realities of its constraints will help you avoid misery. The most miserable people I know have the most rigid expectations about life, about others’ behavior, about rules, about fairness, and about shoulds. We have a concept in psychology called the tyranny of the shoulds (coined by Karen Horney), whereby one’s expectations that things should go a certain way result in subsequent neuroses. This is often true, it seems, because our expectations are generally unrealistic. The more rigid and prolific one is with regard to expectations, the more likely one is to be slapped down by reality. These folks are consistently victimized by life.

Happiness, I contend, is a multidimensional construct. In part, it is an absence of misery, but that doesn’t tell us what it is. Perhaps Charles Schulz had it right when he said “Happiness is a warm puppy.” In reality, we have to accept that it is paradoxical and that its pursuit is a personal responsibility. This latter fact is a stressor for many (Salerno, 2010). I myself get joy from shared moments of close interpersonal intimacy, from adventure, from persevering on challenging tasks, from increased understanding of the world around me, and from the contributions I make toward the betterment of other people’s lives. I am happy because I make a difference, because I choose to include adventure in my life, and because I am very fortunate to live in this time and place, where I am relatively well off (although not wealthy) and loved.

Are you as perplexed as I am regarding the acrimony in American politics? The rift is peppered with claims of amorality and threats of calamity. It’s almost as if the opposing parties come from entirely different realities. Perhaps they do. I have gained some insight into the liberal-conservative divide thanks to Jonathan Haidt’s work, particularly his Moral Foundations Theory.

Haidt contends that the political divide itself boils down to five universal and transcendent morals held to varying degrees by individuals across all cultures and civilizations. He demonstrated how these moral values group in predictable ways. In particular, he has identified two dichotomous groupings that had been previously discussed respectively by John Stuart Mill and Emile Durkheim.

Haidt describes the first cluster as the Individualizing Foundation, where the emphasis of one’s moral imperative is on the rights and welfare of all individuals. Features of this foundation include “widespread human concern about caring, nurturing, and protecting vulnerable individuals from harm” (Haidt, 2009). The second cluster of values is referred to as the Binding Foundation, which weighs more heavily moral issues that increase social cohesiveness and social order. Rather than focusing on individual equality and personal rights, the emphasis of the Binding Foundation is on loyalty, obedience, duty, self-restraint, respect of authority, piety, self-sacrifice for the group, vigilance for traitors or free-loaders, and orderly cultural boundaries.

Haidt noted that liberals value the Individualizing Foundation above all and relatively devalue the Binding Foundation. Conservatives, on the other hand, tend to hold the Binding Foundation as being of equal importance to the Individualizing Foundation. This conceptualization helped me understand why less affluent conservatives support the Republican agenda regardless of the negative economic impact that such support bestows upon them: they vote based on values that resonate with them. It also helps explain how people at each extreme can take a stand that they contend is morally superior while viewing their adversaries as unprincipled and amoral. The reality is that each perspective stems from a position of deeply held principles.

I recently finished reading Steven Pinker’s book The Blank Slate: The Modern Denial of Human Nature. Rather than looking at this political divide in terms of morality, Pinker frames it in terms of divergent views of human nature. Underlying the political divide is a deeper and more rancorous debate about what defines human nature. This issue is as old as civilization itself and was evident, for example, in the divergent lifestyles of the rival Greek city-states of Athens and Sparta. Pinker contends that the political divide really comes down to how individuals attribute the motives and behaviors of people in general. It is a very basic question of how one views the human race and what drives human behavior.

Pinker takes a stand against the commonly held notion that human nature is a blank slate shaped exclusively through environmental circumstances influenced by economic, political, and social forces. The notion of a blank slate concedes social determinism, a position favored by liberals. Evolutionary biology and cognitive neuroscience, however, bring to the table substantial evidence that there are indeed genetic or biological determinants of behavior. Accepting this evidence is fraught, because similar notions guided the eugenics movement that culminated in the Holocaust (and other horrible crimes against humanity).

As it turns out, political attitudes, for example, are substantially, although not entirely, shaped by heredity. Pinker cites a study of political attitudes among identical twins reared apart in which the correlation coefficient was .62. Squaring that coefficient suggests that genetics accounts for roughly 38% of the variance in political attitudes (and, under the classic twin-study interpretation, the correlation itself is read as a heritability estimate of about 62%). Such a notion is sacrilege to those on the left. It is deeply disturbing for me, as one who leans heavily to the left on political issues, to learn that my inclination to accept the findings of these increasingly powerful sciences, at some level, distances me from other liberal thinkers. How can this be?
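Since the distinction between a correlation and the share of variance it explains trips many readers up, here is a quick back-of-the-envelope check of the numbers above. This is a sketch only; it assumes the reported figure is a simple Pearson correlation of .62 between identical twins reared apart:

```python
# Back-of-the-envelope check of the twin-study figure cited above.
# Assumption: the reported .62 is a Pearson correlation between
# identical twins reared apart.

r = 0.62

# Shared variance (r squared): the proportion of variance in political
# attitudes statistically accounted for by the twin pairing.
shared_variance = r ** 2
print(f"shared variance: {shared_variance:.0%}")  # -> shared variance: 38%

# Under the classic twin-study reading, identical twins reared apart
# share genes but not environment, so the correlation itself is taken
# as a direct heritability estimate.
heritability_estimate = r
print(f"heritability estimate: {heritability_estimate:.0%}")  # -> heritability estimate: 62%
```

Both readings appear in popular summaries of twin research, which is likely why the 38% and "largely heritable" framings sit uneasily side by side.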

You see, liberals emanate from the sociological tradition, which holds that society “is a cohesive organic entity and its individuals are mere parts. People are thought to be social by their very nature and to function as constituents of a larger superorganism” (Pinker, 2002, p. 284). Conservatives, on the other hand, tend to hold the belief that “society is an arrangement negotiated by rational, self-interested individuals. Society emerges when people agree to sacrifice some of their autonomy in exchange for security from the depredations of others wielding their own autonomy” (Pinker, 2002, p. 285).

The modern theory of evolution aligns best with the latter, economic-contract paradigm, in which natural selection yields complex individual adaptations that benefit individuals rather than the species or community. This theory holds that “all societies – animal and human – seethe with conflicts of interest and are held together by shifting mixtures of dominance and cooperation” and that “reciprocal altruism, in particular, is just the traditional concept of the social contract restated in biological terms” (Pinker, 2002, p. 285). To make this dichotomy clearer, it might help to think of the sociological tradition as being consistent with Marxist thinking, while the social contract is more consistent with Milton Friedman’s free-market conservatism.

At the core of these paradigms are very different conceptualizations of human nature. Thomas Sowell captured this dichotomy in his book A Conflict of Visions, where he delineates those visions as either constrained or unconstrained. Pinker adapted these labels to be more descriptive, referring to them respectively as Tragic (a term Sowell later adopted) and Utopian. These visions concern the “perfectibility of man”: the Tragic Vision holds that “humans are inherently limited in knowledge, wisdom, and virtue” and that, as a result, “all social arrangements must acknowledge those limits.” This pessimistic view of human nature is steeped in biological determinism and the acknowledgment of self-interested motives. The liberal or Utopian View contends that “psychological limitations are artifacts that come from our social arrangements.” It holds that economic deprivation elicits social depravity and that social engineering can eradicate the ills of society.

Sowell and Pinker suggest that these very visions of human nature shape the belief mechanisms or morals that result in divergent social policies. For example, people who hold the Tragic Vision are more likely to support a strong military because of an inherent human selfishness and the inclination to compete for resources. They are more likely to value religion, tough criminal sentences, strong policing, and judicial restraint because people need to be constrained in order to maintain an orderly and cohesive society. Likewise, because of this pessimistic view of human nature, people inclined to hold such a view are likely to be censorious, meritocratic, pragmatic, and pro business.

People holding the Utopian View are likely to be idealistic, egalitarian, pacifistic, and secularist, and more likely to tolerate homosexuality, to favor the rehabilitation of criminals, judicial activism, generous social welfare programs, and affirmative action. They are also more likely to be environmentalists. Pinker’s contention is that all these values are, more or less, heritable and that, as a result, people are likely to hold them as self-defining. Subsequently, these beliefs are typically not amenable to change because they are often held without a rationally based understanding of them. Such deeply held (intuitive) and heritable attitudes quickly spark emotional responses when challenged, and people do not move away from such notions even when reason compels them to do so.

So it seems, at the core of the contentious political divide there are discrepant realities pertaining to the very essence of what it is to be a human being. And that essence is evolving regardless of the ideologies that shape the political climate. Perhaps we can escape the gridlock by acknowledging the disconnect between ideology and reality and embrace a truer essence of humanity. That reality, it seems, is a blend of the Tragic and Utopian Visions where human behavior is guided by both social and biological determinants. Reality, as it turns out, is often queerer than one can suppose.

Breaking the chains of ideology necessarily involves abandoning and overpowering intuition, which is itself a formidable task. But social mores have evolved over time as we have gained deeper insight into humankind. Let’s hope for continued evolution!

I have long suspected that a certain amount of adversity in life ultimately leads to greater degrees of happiness. This is contrary to the commonly held notion that traumatic stress is inherently harmful. It can be argued, as Friedrich Nietzsche once said, that “That which does not kill us makes us stronger.” I’m in sync with Nietzsche here: hard times build resilience and help one appreciate the better times with deeper enthusiasm. A recent Scientific American podcast indicated that I might just be right. In Adversity Is Linked to Life Satisfaction, Christie Nicholson reviews the results of a multiyear study by Mark Seery, Alison Holman, and Roxane Cohen Silver that was just published in the Journal of Personality and Social Psychology. Using a national survey panel of 2,398 subjects who were assessed on multiple occasions over a four-year period, the authors tested for “…relationships between lifetime adversity and a variety of longitudinal measures of mental health and well-being: global distress, functional impairment, posttraumatic stress symptoms, and life satisfaction.” In their analysis of the data, they found that:

“people with a history of some lifetime adversity reported better mental health and well-being outcomes than not only people with a high history of adversity but also than people with no history of adversity.”

For the purposes of this study adversity included: “own illness or injury, loved one’s illness or injury, violence (e.g., physical assault, forced sexual relations), bereavement (e.g., parent’s death), social/environmental stress (e.g., serious financial difficulties, lived in dangerous housing); relationship stress (e.g., parents’ divorce); and disaster (e.g., major fire, flood, earthquake, or other community disaster).” It is important to note that adverse events were measured using a frequency count rather than any qualitative analysis of degree of adversity.

The implication one might draw from these findings is that without at least some adversity, individuals do not learn through experience how to manage stress; therefore, “the toughness and mastery they might otherwise generate remains undeveloped.” Overwhelming levels of adversity, on the other hand, are more likely to exceed one’s capacity to manage stress and thereby impede toughness and mastery. The authors are careful to note that these data are correlational and as such do not establish causation, but they contend that moderate exposure to lifetime adversity may contribute to the development of resilience.

So, it seems, as Nicholson notes:

“… there’s a sweet spot, where a certain amount of struggle is good and produces a toughness and sense of control over one’s life, but anything above or below that amount is correlated with the inverse: Distress, anxiety, and feelings of being overwhelmed.”

You might ask, “Where is this Goldilocks Zone?” At what quantity does adversity benefit one’s life perspective, and where does it cross a line? Seery et al. acknowledged that it is impossible to pinpoint the exact parameters of such a sweet spot, but the data suggest that around two to four adverse events may sufficiently enhance one’s capacity to sustain happiness and tolerate stress. However, and this is important to note, they do not recommend engineering disasters for those who have been “fortunate” enough to escape adversity.
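To make the sweet-spot idea concrete, here is a purely illustrative toy model. It is emphatically NOT the statistical model Seery and colleagues actually fit; it simply encodes the inverted-U shape in code, with the peak placed by assumption in the two-to-four-event range the study points to:

```python
# Purely illustrative toy model of the adversity "sweet spot" -- an
# inverted-U where well-being rises with some adversity, then falls.
# The peak location and units are assumptions for illustration only.

def toy_wellbeing(adverse_events: int, peak: int = 3) -> float:
    """Toy inverted-U: well-being is highest near `peak` lifetime events."""
    return 10.0 - (adverse_events - peak) ** 2

scores = {n: toy_wellbeing(n) for n in range(0, 9)}
best = max(scores, key=scores.get)

print(best)                        # the toy curve peaks at 3 events
print(scores[0] < scores[best])    # no adversity at all scores lower
print(scores[8] < scores[best])    # so does overwhelming adversity
```

The point of the sketch is only the shape: both zero adversity and heavy adversity sit below the middle of the curve, mirroring the correlational pattern the authors report.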

This research reminded me of a story by an unknown author that my mother sent me a few years back. I’m guessing that it has made the rounds on the internet. Regardless, and despite the melodrama, it seems relevant; what is cogent is the notion of just enough.

I Wish You Enough

At an airport I overheard a father and daughter in their last moments together. They had announced her plane’s departure and, standing near the door, he said to his daughter,

“I love you, I wish you enough.”

She said, “Daddy, our life together has been more than enough. Your love is all I ever needed. I wish you enough, too, Daddy.”

They kissed good-bye and she left.

He walked over toward the window where I was seated. Standing there I could see he wanted and needed to cry. I tried not to intrude on his privacy, but he welcomed me in by asking, “Did you ever say good-bye to someone knowing it would be forever?”

“Yes, I have,” I replied. Saying that brought back memories I had of expressing my love and appreciation for all my Dad had done for me. Recognizing that his days were limited, I took the time to tell him face to face how much he meant to me.

So I knew what this man was experiencing.
“Forgive me for asking, but why is this a forever good-bye?” I asked.

“I am old and she lives much too far away. I have challenges ahead and the reality is, her next trip back will be for my funeral,” he said.

“When you were saying good-bye I heard you say, ‘I wish you enough.’ May I ask what that means?” He began to smile. “That’s a wish that has been handed down from other generations. My parents used to say it to everyone.”

He paused for a moment and looking up as if trying to remember it in detail, he smiled even more. “When we said ‘I wish you enough,’ we were wanting the other person to have a life filled with enough good things to sustain them,” he continued and then turning toward me he shared the following as if he were reciting it from memory.

“I wish you enough sun to keep your attitude bright.
I wish you enough rain to appreciate the sun more.
I wish you enough happiness to keep your spirit alive.
I wish you enough pain so that the smallest joys in life appear much bigger.
I wish you enough gain to satisfy your wanting.
I wish you enough loss to appreciate all that you possess.
I wish you enough “Hellos” to get you through the final “Good-bye.”

I don’t suppose that it is a reach to suggest that exposure to small inconveniences such as rain or pain will likewise help you be more appreciative of sunshine and comfort. After all, we humans tend to quickly habituate to smooth roads; without a few potholes, we take unbroken pavement for granted. But the adversity study is suggesting more than this. It’s about developing resilience, or reparative mechanisms that help us cope with future stressors. This is referred to as adversarial growth, of which I wish you enough.

References:

Nicholson, C. (2010). Adversity Is Linked to Life Satisfaction. Scientific American Podcast. http://www.scientificamerican.com/podcast/episode.cfm?id=adversity-is-linked-to-life-satista-10-10-16

Halloween seems like an appropriate time to discuss superstition, what with ghosts and goblins and black cats and witches and all. But wouldn’t Easter or Christmas, or any other evening that a five-year-old loses a tooth, be an equally appropriate time? In actuality, we nurture magical thinking in our children with notions of Santa Claus, the Easter Bunny, and the Tooth Fairy. And recall, if you will, some of your favorite children’s books and the supernatural forces employed to delight your youthful whimsies. Magic, along with the thinking employed to delight in it, is seemingly a rite of childhood, and in some ways the essence of what it is to be a child.

Much as magical thinking has its roots in childhood fantasies, superstition too has its roots in our species’ youth. In that nascent time we lacked the capacity to understand the forces and whims of the natural world around us. Our ancestors struggled to survive, and living another day in part depended on their ability to make sense of the forces that aided or impinged upon them. We must not forget that our forefathers lived much like the non-domesticated animals around us today. Survival was a day to day reality dependent upon the availability of life sustaining resources like food, water and shelter, and was often threatened by predation or the forces of nature. Death was a real possibility and survival a real struggle. The stakes were high and the hazards were plentiful. As it turns out, these are the very conditions under which superstition is likely to thrive.

So what is superstition? Bruce Hood, author of The Science of Superstition, notes that superstition is a belief “that there are patterns, forces, energies, and entities operating in the world that are denied by science…” He adds that “the inclination or sense that they may be real is our supersense.” It involves an inclination to attempt to “control outcomes through supernatural influence.” It is the belief that if you knock on wood or cross your fingers you can influence outcomes in your favor. It is the belief that faithfully carrying out rituals as part of a wedding ceremony (e.g., wearing something blue, something new, something borrowed) or before going to bat or before giving a big speech will improve outcomes. It is also the belief that negative outcomes can come as a result of stepping on a crack, breaking a mirror, or spilling salt. Hood argues that supersense goes beyond these obvious notions and surfaces in more subtle ways associated with touching an object or entering a place that we feel has a connection with somebody bad or evil. For example, how would you feel if you were told that you had to wear Jeffrey Dahmer’s T-shirt, or that you were living in a house where ritualistic torture and multiple murders took place? Most of us would recoil at the thought. Most of us also believe (erroneously) that we can sense when someone is looking at us, even when we cannot see them doing so. These beliefs, and much of the value we place on sentimental objects, stem from this style of thinking.

Michael Shermer (2000), in his book How We Believe, eloquently describes our brains as a Belief Engine. Underlying this apt metaphor is the notion that “Humans evolved to be skilled pattern seeking creatures. Those who were best at finding patterns (standing upwind of game animals is bad for the hunt, cow manure is good for the crops) left behind the most offspring. We are their descendants” (Shermer, p. 38). Chabris and Simons (2009) note that this refined ability “serves us well, enabling us to draw conclusions in seconds (or milliseconds) that would take minutes or hours if we had to rely on laborious logical calculations” (p. 154). However, it is important to understand that we are all prone to drawing erroneous connections between stimuli in the environment and notable outcomes. Shermer further contends that “The problem in seeking and finding patterns is knowing which ones are meaningful and which ones are not.”

From an evolutionary perspective, we have thrived, in part, as a result of our tendency to infer cause or agency regardless of the reality of threat. For example, those who assumed that rustling in the bushes was a tiger (when it was just wind) were more likely to take precautions and thus less likely, in general, to succumb to predation. Those who were inclined to ignore such stimuli were more likely to be eaten when the rustling was in fact a hungry predator. Clearly, from a survival perspective, it is best to infer agency and run away rather than become lunch meat. The problem that Shermer refers to regarding this system is that we are subsequently inclined toward mystical and superstitious beliefs: giving agency to unworthy stimuli or drawing causal connections that do not exist. Dr. Steven Novella, a neurologist, notes in his blog post entitled Hyperactive Agency Detection that humans vary in the degree to which they assign agency. Some of us have Hyperactive Agency Detection Devices (HADD) and, as such, are more prone to superstitious, conspiratorial, and mystical thinking. It is important to understand, as Shermer (2000) makes clear:

“The Belief Engine is real. It is normal. It is in all of us. Stuart Vyse [a research psychologist] shows for example, that superstition is not a form of psychopathology or abnormal behavior; it is not limited to traditional cultures; it is not restricted to race, religion, or nationality; nor is it only a product of people of low intelligence or lacking education. …all humans possess it because it is part of our nature, built into our neuronal mainframe.” (p. 47).

Bruce Hood takes this notion further, adding that the cultural factors discussed at the opening of this piece, along with other intuitive inclinations such as dualism (a belief in the separation of mind and body), essentialism (the notion that all discernible objects harbor an underlying reality that, although intangible, gives each and every object its true identity), vitalism (the insistence that there is some big, mysterious extra ingredient in all living things), holism (the idea that everything is connected by forces), and animism (the belief that the inanimate world is alive), shape adult superstition. These latter belief mechanisms are developmental and naturally occurring in children: they are the tendencies that make magic and fantasy so compelling for children. It is when they lurk in our intuition or are sustained in our rational thought that we as adults fall victim to this type of illusion.

It is interesting to note that much like our ancestors, we are more prone to this type of thinking when faced with high stakes, a low probability of success, and incomprehensible controlling circumstances. Think about it. In baseball, batters often have complex superstitious rituals associated with batting. The best hitters experience success only one in three times at bat. And the speed at which they have to decide to swing or not and where to position the swing defies the rational decision making capacity of humans. On the other hand, these very same athletes have no rituals when it comes to fielding a ball (which is a high probability event for the proficient).

Superstition is a natural inclination with deep evolutionary and psychological roots embedded in our natural child development. These tendencies are nurtured and socialized as a part of child rearing and spill over into adult rituals in predictable circumstances (particularly when there is a low degree of personal control). When one deconstructs this form of thinking, it makes complete sense. This is not to suggest that reliance on superstitions is sensible. Often, however, the costs are low and the rituals therein can be fun. But there are potential costs associated with such thinking. Some of the dangers materialize in notions such as “vaccines cause autism” and “homeopathy will cure what ails you,” embraced in lieu of scientific medicine. Resignation of personal power in deference to supernatural forces is a depressive response pattern. Reliance on supernatural forces is essentially reliance on chance, and in some cases its application actually stacks the deck against you. So be careful when employing such tactics. But, if you’re in the neighborhood, NEVER EVER walk under my ladder. I’ve been known to drop my hammer.

I’m sure you have heard of subliminal messages. You know that classic story where it was alleged that flashing the words DRINK COKE on a movie screen for a fraction of a second would increase cola buying behavior at the concession stand. Well, that was a hoax, but you should know that I can, in other ways, tap into your subconscious thoughts and make you smarter, dumber, more assertive, or more passive for a short period of time.

This is not brainwashing! It has a different name. In the field of psychology, this interesting phenomenon is referred to as priming. John Bargh (now at Yale University) and colleagues, formerly at New York University, demonstrated the legitimacy of priming in a very interesting paper entitled Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action (Bargh, Chen, & Burrows, 1996). These researchers contend “that social behavior is often triggered automatically on the mere presence of relevant situational features [and that] this behavior is unmediated by conscious perceptual or judgmental processes.” One of the studies they used to empirically demonstrate the implications of automatic social behavior (priming) involved a group of undergraduates from NYU who were given the scrambled sentence test. The test involves the presentation of a series of scrambled five-word groupings; from each grouping one is to devise a grammatical four-word sentence. For example, one of the groupings might include the words: blue the from is sky. From this grouping your job would be to write The sky is blue. A typical scrambled sentence test takes about five minutes.

The scrambled sentence test is a diversion and a means to present words that may influence or prime the subject’s behavior, thoughts, or capabilities. In this study the subjects were randomly assigned to one of two groups. One group was presented with scrambled sentences sprinkled with words like “bold,” “intrude,” “bother,” “rude,” “infringe,” and “disturb.” The second group was presented with scrambled sentences containing words like “patiently,” “appreciate,” “yield,” “polite,” and “courteous.” Each student independently completed the test in one room and was told upon completion to walk down the hall to get the next task from an experimenter in another office. For every subject, however, there was another student (a stooge) at the experimenter’s office asking a series of questions, forcing the subject to wait. Bargh and colleagues predicted that those primed with words like “rude” and “intrude” would interrupt the stooge and barge in more quickly than those primed with words like “polite” and “yield.” Bargh anticipated that the difference between the groups would be measured in milliseconds or, at most, seconds. These were New Yorkers, after all, with a proclivity to be very assertive (Gladwell, 2005). The results were quite dramatic!

Those primed with the “rude” words interrupted after about five minutes. Interestingly, the university board responsible for approving experiments involving human subjects limited the wait period in the study to a maximum of ten minutes. The vast majority (82%) of those primed with the “polite” words never interrupted at all; it is unknown how long they would have waited. The difference between these groups, based simply on the nature of the priming words, was huge! In the same paper, Bargh et al. (1996) showed how students primed with words denoting old age (e.g., worried, Florida, lonely, gray, bingo, forgetful) walked more slowly leaving the office after completing the scrambled sentence test than they did on their way to the testing office. It is suggested that the subjects mediated their behavior as a result of thoughts planted in their subconscious pertaining to being old. These thoughts, in this case, resulted in the subjects behaving older (e.g., walking more slowly).

Priming one to be more or less polite or spry is interesting, but there are disturbing and perhaps very damaging implications of this phenomenon.

Dijksterhuis and van Knippenberg (1998), a research team from Holland, looked at how priming might affect intellectual performance. Their subjects were randomly divided into two groups. The first group was tasked for five minutes with thinking about and writing down attributes pertaining to being a college professor. The second group was tasked with thinking about and listing the attributes of soccer hooligans. Following this thinking and writing task, the subjects were given 47 challenging questions from the board game Trivial Pursuit. Those in the “professorial” priming group got 55.6% of the items correct while those primed with soccer hooliganism got only 42.6% correct. One group was not smarter than the other; rather, it is contended that those in the “smart” frame of mind were better able to tap into their cognitive resources than those with a less erudite frame of mind.
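As a rough sanity check on the size of this effect, the reported percentages can be converted back into approximate raw scores on the 47-question quiz. The per-subject counts below are my own back-calculation, not figures reported in the paper:

```python
# Rough arithmetic on the Dijksterhuis & van Knippenberg result cited
# above: converting reported accuracy rates back to approximate raw
# scores on the 47-question trivia quiz. These counts are a
# back-calculation for illustration, not numbers from the paper.

TOTAL_QUESTIONS = 47

professor_correct = 0.556 * TOTAL_QUESTIONS
hooligan_correct = 0.426 * TOTAL_QUESTIONS

print(round(professor_correct))                      # -> 26
print(round(hooligan_correct))                       # -> 20
print(round(professor_correct - hooligan_correct))   # -> 6
```

Roughly six extra correct answers out of 47, from nothing more than five minutes of thinking about professors versus hooligans, is a striking gap.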

And then there is the research from Claude Steele and Joshua Aronson (1995). These psychologists investigated the impact on African Americans of reporting one’s race before taking a very difficult test. They employed African American college students and a test made up of 20 questions from the Graduate Record Exam (GRE). The students were randomly split into two groups. One group had to indicate their race on the test while the others did not. Those who indicated their race got half as many of the GRE items correct as their non-race-reporting counterparts. Simply reporting that they were African American seemed to prime them for lower achievement.

All of these effects were accomplished completely outside the awareness of the involved parties. In fact, this is an essential attribute: effective priming necessitates that it be done outside the subject’s awareness. Awareness negates the effect.

Regardless, consider the implications, intended or otherwise, of such priming. Malcolm Gladwell, in his book Blink, notes: “The results from these experiments are, obviously, quite disturbing. They suggest that what we think of as free will is largely an illusion: much of the time, we are simply operating on automatic pilot, and the way we think and act – and how well we think and act on the spur of the moment – are a lot more susceptible to outside influences than we realize.” (p. 58).

Yes, it is disturbing on a personal level with regard to the vulnerability of rational decision making, but I am more concerned about the ethical implications of our insight into this tool. Priming may be used by those with the power, influence, and intention to manipulate outcomes to serve ideological purposes. On yet another level, the reality of this phenomenon supports my contention in “Do We All Get a Fair Start?” that there is no true equal starting point. Societal mores, and the media in particular, shape how we think about others and ourselves in profound ways. We are all susceptible to stereotypes, prejudices, and biases, and these tendencies can cut in multiple directions. They can also be used to bolster negative attitudes or weaken individuals in destructive ways. I am not suggesting that the sky is falling or that there is a huge ideological conspiracy going on, but we must be aware of our vulnerabilities in this regard. And we must act to avoid constraining individuals as a function of subgroup affiliation.

I don’t know if you caught it the other night when you were watching the news while skimming your email, checking your Twitter and RSS feeds, and updating your Facebook status, but there was an interesting story about multitasking. Silly me, who actually watches the news anymore? Anyway, much of the recent buzz on this endemic behavior (among the technologically savvy) is not good. Multitasking is a paradox of sorts: we tend to romanticize and overestimate our ability to split attention among multiple competing demands. The belief goes something like this: “I’ve got a lot to do, and if I work on all my tasks simultaneously I’ll get them done faster.” What most of us fail to realize, however, is that when we split our attention, we are actually dividing an already limited and finite capacity in a way that hinders overall performance. And some research suggests that chronic multitasking may have deleterious effects on one’s ability to process information even when one is not multitasking (Nass, 2009).

Advances in computer technology seem to fuel this behavior. If you do a Google search on multitasking you will get a mix of information on the technological wonders of machines that can multitask (AKA computers) mixed with news regarding how bad media multitasking is for you.

Think about it. There has been increasing pressure on the workforce to be more productive, and gains in productivity have been made in lockstep with increases in personal computing power. Applications have been developed on the back of the rising tide of computer capacity, making human multitasking ever more possible. These advances include faster microprocessors, increased RAM, larger monitors, the internet itself, browsers that facilitate the use of multiple tabs, and relatively inexpensive computers with sufficient power to keep open email, word processing programs, Facebook, Twitter, iTunes, and YouTube. Compound these tools with hardware that allows you to do these things on the go. No longer are you tethered to the desktop computer with an Ethernet cable. Wi-Fi and 3G connectivity allow all of the above activities almost anywhere via a smartphone, laptop, iPad, or notebook computer. Also in the mix are Bluetooth headsets and other headphones that offer hands-free operation of telephones.

Currently, technology offers one the ability to divide one’s attention in ways inconceivable only a decade ago. The ease of doing so has resulted in the generalization of this behavior across settings and situations, including talking on cell phones while driving, texting while driving, texting while engaged in face-to-face personal interactions, and even cooking dinner while talking on the phone. Some of these behaviors are dangerous, some rude, and all likely lead to inferior outcomes.

Don’t believe it? If not, you are likely among the least skilled of those who multitask. “Not me!” you may claim. Well, research has shown that those who routinely multitask are also the most confident in their ability to do so (Nass, 2009). But when you look at the products of these “confidently proficient” multitaskers, you find the poorest outcomes.

Multitasking involves shifting attention from one task to another, refocusing attention, sustaining attention, and exercising ongoing judgment about the pertinence and salience of various competing demands. Doing this successfully is exceptionally difficult and likely well beyond the capacity of most human beings. Our brains can generally concentrate on only one task at a time, so multitasking necessitates devoting shorter periods of time to dissimilar tasks. As a result, overall effectiveness on all tasks is reduced.

Researchers at the University of Michigan Brain, Cognition and Action Laboratory, including Professor David E. Meyer, point out that the act of switching focus itself has deleterious effects. When you switch from task A to task B, you lose time in making the transition, and the time the transition takes increases with the complexity of the tasks involved. Depending on how often you transition between stimuli, you can waste as much as 40% of your productive time just in task switching (APA, 2006).
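To make the arithmetic concrete, here is a minimal sketch of how a fixed per-switch cost eats into a work block. The specific numbers (a 60-minute block, 48 switches, half a minute lost per switch) are hypothetical illustrations chosen to land near the 40% figure, not data from the Michigan lab.

```python
def productive_minutes(total_minutes, switches, cost_per_switch):
    """Minutes left for actual work after paying a fixed time cost per task switch."""
    lost = switches * cost_per_switch
    return max(total_minutes - lost, 0)

# Hypothetical example: a 60-minute block with 48 task switches at 0.5 min each
remaining = productive_minutes(60, 48, 0.5)   # 36 minutes of real work remain
overhead = 1 - remaining / 60                 # 40% of the block lost to switching
```

The point of the sketch is simply that switching costs scale with how often you switch, so frequent toggling between tasks can quietly consume a large fraction of the time you think you are working.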

Shorter periods of focus reduce overall time on task, and each transition reduces this time further. In 2005, Dr. Glenn Wilson at the Institute of Psychiatry, University of London, found that his subjects experienced a 10-point fall in IQ when distracted by incoming email and phone calls. This effect size was “more than twice that found in studies of the impact of smoking marijuana” and was similar to the effect of losing a night’s sleep (BBC, 2005).

As for the negative long-term effects of multitasking, Dr. Nass noted that:

“We studied people who were chronic multitaskers, and even when we did not ask them to do anything close to the level of multitasking they were doing, their cognitive processes were impaired. So basically, they are worse at most of the kinds of thinking not only required for multitasking but what we generally think of as involving deep thought.”

Nass (2009) has found that these habitual multitaskers have chronic filtering difficulties, an impaired capacity to manage working memory, and slower task-switching abilities. One must be careful to avoid the Illusion of Cause here: correlation is not causation, and we must avoid inferring that multitasking causes these cognitive declines. The reverse may be true, or other undetected variables may cause both.

Much of the research in this area is in its infancy, and thus limited in scope and depth, so it is prudent to be a bit skeptical about whether multitasking is bad for you. But with regard to the efficacy of multitasking, when you look at the issue from an anecdotal perspective, apply the tangentially related evidence logically, and then consider the data, you have to conclude that multitasking on important jobs is not a good idea. If you have important tasks to accomplish, it is best to focus your attention on one task at a time and to minimize distractions. To do so, resist the temptation to text, tweet, watch TV, check your email, talk on the phone, instant message, chat on Facebook, Skype, or otherwise divide your attention. If you believe these other distractions help you do better, you are deluding yourself and falling victim to the reinforcement systems that make multitasking enjoyable. Socializing, virtually or otherwise, is more pleasurable than the arduous processes involved in truly working or studying.

You can likely apply the same principles to plumbing, cooking, housework, woodworking, etc. The key to success, it seems, is to FOCUS on one task at a time, FINISH the job, and then move on. You’ll save time, be more efficient, and do a better job! Remember – FOCUS & FINISH!

Why do you sometimes choose that scrumptious chocolate dessert even when you are full? Why are you sometimes drawn in by the lure of the couch and TV when you should be exercising or at least reading a good book? And why do you lose your patience when you are hungry or tired? Do these situations have anything to do with a weak will?

What is willpower anyway? Perhaps it is your ability to heed the advice proffered by that virtuous and angelic voice in your head as you silence the hedonistic, diabolical voice that goads you toward the pleasures of sloth or sin. Or perhaps, as Sigmund Freud contended, it is your ego strength that enables you to forgo the emotionally and impulsively driven urges of the id. These images resonate with us because it often feels as though there is a tug-of-war going on inside our heads as we consider difficult, or sometimes even routine, choices. Often reason prevails; other times it does not. What is really at play here? Is it truly willpower? Is it really a matter of strength, or even of choice?

As it turns out, like all issues of the human mind, it is complicated. Studies within the disciplines of psychology and neuroscience are offering increased clarity on this very issue. It is important to understand, however, that the human brain is composed of a number of modules, each of which is striving to guide your choices. There really isn’t a top-down hierarchy inside your brain with a chief executive pulling and pushing the levers that control your behavior. Instead, at various times, different modules assert greater amounts of control than others, and thus the quality of the choices we make likewise varies over time. As a result of advances in technology and understanding, we are becoming increasingly aware of the key variables associated with this variation.

At a very basic level, we know of two major (angelic v. diabolical) driving forces that guide our decisions. Within and across these forces, multiple modules emit neurotransmitters that ultimately influence the choices we make. Broadly, the two forces are reason and emotion. As I discussed in the previous posts What Plato, Descartes, and Kant Got Wrong: Reason Does Not Rule and Retail Mind Manipulation, there is not actually a true competitive dichotomy between these two forces; instead, there appears to be a collaborative interplay among them. Regardless of their collaborative nature, we do experience a dichotomy of sorts when we choose the cheeseburger and fries over the salad, the chocolate cake over the fruit salad, or indulgence over abstinence.

Now that I have clouded the picture a bit, let’s look at one study that may help reintroduce some of the clarity I mentioned.

At Stanford University, Professor Baba Shiv, under the ruse of a study on memory, recruited several dozen undergraduate students. He randomly assigned the students to two groups. For convenience’s sake, I will label them the 2 Digit Group and the 7 Digit Group. The students in the 2 Digit Group were given a two-digit number (e.g., 17) to memorize, whereas those in the 7 Digit Group were tasked with a seven-digit number (e.g., 2583961). In Room A, each subject, one at a time, was given a number to memorize. Once provided with the number, they were given as much time as they needed to commit it to memory. They were also told that once they had memorized the number, they were to go to Room B, down the hall, where their ability to recall the number would be tested. As each student made the transition from the first room to the testing room, they were intercepted by a researcher offering them a gratuity for their participation. The offer was unannounced and was made before they entered the testing room (Room B). It included either a large slice of chocolate cake or a bowl of fruit salad.

One would expect, given the random nature of group assignment, that those in the 2 Digit Group would select the cake or fruit salad in the same proportions as those in the 7 Digit Group. As it turned out, there was a striking difference between the groups. Those in the 2 Digit Group selected the healthy fruit salad 67% of the time. Those in the 7 Digit Group, on the other hand, selected the scrumptious but not-so-healthy cake 59% of the time. The only difference between the groups was the five-digit discrepancy in the memorization task. How could this seemingly small difference possibly explain why those saddled with the easier task made the “good” rational choice 67% of the time while those with the more challenging task made the same healthy choice only 41% of the time?

The answer likely lies in the fact that memorizing a seven-digit number is more taxing than you might think. In 1956, psychologist George Miller published a classic paper entitled “The Magical Number Seven, Plus or Minus Two,” in which he provided evidence that the limit of short-term memory for most people is about seven items. This is why phone numbers and license plates are typically about seven characters in length. Strings of letters or numbers that are not logically grouped in some other way, when approaching seven items in length, tend to max out one’s processing capacity. With seven digits, one likely has to recite the sequence over and over to keep it in short-term memory. It appears that those in the 7 Digit Group, relative to the 2 Digit Group, had reached the limits of their rational capacity and were less likely to employ good reason-based decision making with regard to the sweets. Those in the 2 Digit Group were not so preoccupied and were likely employing a more rationally based decision-making apparatus. They made the healthy choice simply because they had the mental capacity to weigh the pros and cons of the options.
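Miller’s point about grouping can be shown with a toy sketch: an ungrouped seven-digit string is seven separate items to rehearse, but chunking it collapses the count. The chunk size of three here is an arbitrary choice for illustration, not part of Miller’s paper.

```python
def chunk(digits: str, size: int) -> list[str]:
    """Split a digit string into fixed-size groups ("chunks")."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

ungrouped_items = len("2583961")           # 7 separate items to hold in mind
grouped_items = len(chunk("2583961", 3))   # 3 chunks: "258", "396", "1"
```

This is why the same seven digits feel easier to remember as “258-3961”: the number of items competing for short-term memory drops, leaving capacity free for other thinking.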

An overtaxed brain is likely to fall back on emotional, non-rational mechanisms to make choices and the outcomes are not always good. When you are cognitively stressed – actively engaged in problem solving – you are less likely to make sound, reason-based decisions regarding tangential or unrelated issues. That is one of the reasons why we “fall off the wagon” when we are overwhelmed.

And if you compound cognitive preoccupation with fatigue and hunger, you may have even more problems. You know those times at the end of the day when you are tired, hungry, and really irritable? Your muscles are not the only tissues that fatigue when poorly nourished. Your brain is a major consumer of nutritional resources, and many scientists believe that it, particularly its reasoning portions, does not tolerate glucose deficits well. Your grumpiness may be the result of the diminished capacity of your brain to employ reason to work through and cope with the little annoyances you typically shrug off.

So it seems willpower is one’s ability to use the reasoning portion of the brain to make sound, healthy decisions. Studies like the one above suggest that willpower is not a static force. We must accept the limits of our willpower and realize that this source of control is in a near-constant state of fluctuation, depending on one’s degree of cognitive preoccupation, fatigue, and perhaps blood glucose level. It is very important to know your limits and understand the dynamic nature of your rational capacity; if you do, you may proactively avoid temptation and thus stay in better control of your choices. Relying on willpower alone does not provide you with a dependable safety net. Be careful not to set yourself up for failure.