The myth of the mad genius began with a misinterpretation of Plato’s “divine madness” and has since gathered support and credibility because of public fascination, media distortion, and enthusiastic pseudoscience.

Most myths are popular because they offer quick solutions to the world’s ongoing puzzles, whether found in nature or in the baffling variations in human behavior. This is especially true when a myth is ancient, photogenic, and has a whiff of science about it. The mad genius myth qualifies on all counts, deriving its credibility from expert proponents (hereinafter called “mythers”) who proclaim the existence of a solid empirical link between great creativity and bipolar disorder, so that those who are blessed with the one must also be saddled with the other. This equation is patently false but too rarely challenged, even by mental health professionals who should know better, for reasons that will soon be explained. Meanwhile, the tragedy of the mad genius myth is that it encourages viewing the genius through a warped, dusty, and generic lens; this not only negates the individualism of society’s most exceptional talents, it also diminishes their accomplishments by reducing them to the products of mental disturbance.

Popularity and Power

The myther’s favorite poster boy is Vincent van Gogh, whose painting Self-Portrait with Bandaged Ear supplies their most convenient visual.
Chances are this will continue despite new indications that someone else may have severed that famous lobe—this would be his roommate Paul Gauguin, who was
notorious for his hot temper as well as his skill with a sword. Scholars have found intriguing clues to this alternate scenario among van Gogh’s 874
letters, which were posted in 2010 at www.vangoghletters.org. But such facts will matter little to mythers, whose exclusive focus on internal pathology
tends to minimize the impact of external stressors and events on the life of the artist.

In van Gogh’s case, this requires discounting his poverty, loneliness, and repeated romantic and occupational failures, as well as the physical
consequences of epilepsy, absinthe poisoning, and late-stage (tertiary) syphilis; moreover, when viewed from this narrow perspective, his suicide stands
unexamined as the ultimate proof of his mental illness. But syphilis also killed Theo van Gogh soon after Vincent died, suggesting that the afflicted
painter took his own life at the prospect of losing his beloved brother, who was also his only friend and patron. Surely one doesn’t need bipolar disorder
to experience this level of grief and despair. And as it happens, Vincent’s suicide has begun to generate its own swirl of controversy, with recent
theories about gun accidents and other fingers on the trigger. (For a critical discussion, see Joe Nickell’s column, “The ‘Murder’ of Vincent van Gogh,”
SI, September/October 2012.)

It should be noted that the symptoms of tertiary syphilis, which raged in many quarters until effective treatment arrived in the early twentieth century, closely mimic the mood swings and psychosis
expected of a bipolar diagnosis. A short list of famous victims includes Ludwig van Beethoven, Robert Schumann, Oscar Wilde, and Edgar Allan Poe (Hayden
2003), who have all been trailed by whispers of madness. But it’s easy for any artist to acquire a pathological label, since, in addition to soft-pedaling
any physical reasons for aberrant behavior, mythers prefer to ignore the crazy-making heartbreak and struggle of the creative life itself. For a fuller
discussion of this point, see Chapter 8, “They Must Be Crazy” (Schlesinger 2012, 153–170).

To be fair, fascination with shattered brilliance is hardly new. The parade of the gifted and doomed stretches at least as far back as Icarus, one of the
first to be punished for flying too high. Shakespeare’s tragic heroes were also inevitably destroyed by their internal flaws (Macbeth’s ambition, Hamlet’s
indecision). Today, the breathless media coverage of every celebrity derailment continues to underscore the danger of great talent, while softening any
painful jealousy it evokes in the less-gifted observer. In fact, the myth’s capacity to neutralize envy is a major reason for its popularity and endurance.

Another is the contribution of those creatives (and wannabes) who deliberately cultivate a wild, eccentric pose in order to appear more brilliant than they
really are and get a pass on such mundane responsibilities as holding up their half of a relationship and paying their share of the rent. Steve Allen, that
most un-mad of geniuses, called this ploy “the Bohemian excuse,” and its global appeal could well sustain the myth all by itself.

History, in Brief

The mad genius notion benefits from the mystique that clings to virtually every Big Idea from ancient Greece: the conviction that it expresses something
profound that “has always been known.” The irony is that this one idea began by distorting the original wisdom. For example, Plato invented the concept of
“divine madness” to describe a visit from the gods that delivered precious inspiration and enabled the artist to create. This was a fortunate,
unpredictable, and short-term event, nothing like the ongoing mental disorder it would become in our time. Over the centuries Aristotle’s benign view of
melancholia was also twisted to fit the stereotype—often by influential writers who were themselves depressed and sought to alchemize their own suffering
into inherent proof of their superiority.

One such popularizer was the Renaissance monk Marsilio Ficino, whose self-serving translation of ancient texts contained the news that being born under
Saturn (as he was) was a sure sign of genius (Wittkower and Wittkower 1963). In 1621, British theologian Robert Burton published The Anatomy of Melancholy: 988 pages that helped distract him from his own “distemper.” In his rambling and wildly popular tome, Burton considered
everything from geography to goblins, but the book’s most enduring legacy was to cement the connection between the words melancholy, depression, and artistic endeavor.

Today
the most prolific myther is psychologist Kay Redfield Jamison, who extols such benefits of her own bipolar disorder as “loving more, and being more loved”
(Jamison 1995, 217–218). Her scientific claims will be evaluated a bit later.

The nineteenth century catapulted the mad genius into permanent cultural icon by merging colorful fiction about the tragic, brilliant hero with the noisy
creative struggles of real poets like Poe, Shelley, and Byron. In a perfect storm of influence, all this intense literary and psychosocial drama was soon
legitimized by a new scientific focus on the exceptional mind. Certainly the European climate was ripe for Charles Darwin’s Origin of Species
(1859), which sketched the contours of natural supremacy and triggered the eugenicist dream of breeding a superior race of humans. But since ambivalence is
an integral part of the myth, by century’s end early psychiatrists were expounding on the dark side of genius and its common heredity with idiots and
criminals.

Primitive as these speculations were, they are still being used to supply the myth with a long “scientific” pedigree. But this only works if the references
are kept vague—and so virtually every writer who cites Cesare Lombroso’s 1895 book The Man of Genius as credible historical background neglects to
reveal that he actually described geniuses as “stammering, sexually sterile, pasty-faced vagabonds of inadequate beard” (1895, 5–37, vi).

These waters are further muddied by the ongoing controversy over whether mental illness—including bipolar disorder—exists at all. Ten years ago the
American Psychiatric Association was forced to admit that, despite its practitioners’ reliance on medical remedies and terminology, not a single one of the
“disease” categories collected in their official manual (the DSM) had a known biological basis or sign (American Psychiatric Association 2003). This is
still true today, and in fact the newest edition, the DSM-5, has inspired numerous journalistic exposés, passionate public criticism, and unprecedented
protests within the guild itself. Even “the nation’s shrink”—the director of the National Institute of Mental Health—has slammed the DSM’s lack of
validity, noting that the United States government’s active search for an alternative classification system has been going on for two years and is mandated
to continue for eight more (Insel 2013).

Not surprisingly, mythers prefer to disregard such challenges, relying on public confusion about science to make their statements sound impressive. For
example, in the past few years, they keep invoking a “proven” genetic link between bipolar disorder and great creativity. Aside from leapfrogging over the
increasing doubts about the diagnosis, they are exploiting the widely reported jubilation over the “mapping” of the human genome—as well as the equally
common ignorance about what this actually means. The fact is that this fabled map only catalogs DNA sequences and the proteins they encode, rather than establishing precise and immutable links between specific genes and any particular behavior or disease. In any case, even if reliable DNA causality is ever
established, individual variations in environment and experience will always affect its expression. But this is much too equivocal for those who want the
world to believe that the genetic ingredients of creativity and bipolar disorder, as well as the alleged link between them, are all backed by “indisputable
scientific evidence” (Schlesinger 2012, 114).

Wobbly Research

In fact, they are not. And future prospects are dim, due to the enormous research obstacle that blocks any empirical inquiry from the start: the lack of
universal definitions or measurements for either variable. The presence of creativity can only be inferred from subjective judgments, such as evaluating
arbitrary paper-and-pencil tests of its many amorphous and endlessly debated components. Similarly, since bipolar disorder lacks the tangible clues from
blood tests, brain scans, or diseased tissue that a true physical illness would provide, its verdict turns on such slippery criteria as whether its
politically determined behavioral “symptoms” are “persistent” during a “distinct” period—temporal parameters that are fully as ambiguous as the symptoms
themselves. This makes it impossible to reliably identify these two concepts, let alone connect them in any definitive way—an exercise approximately as
futile as trying to nail two cubes of Jell-O together.

What gets served up instead is a stack of mismatched studies that incorporate so many different concepts, designs, instruments, populations, and
methodologies that their findings are virtually incompatible. Mythers get around this by claiming the results “all point in the same direction,” although
this is no surprise when researchers start with the same agenda. In its own hopeful way, the increasingly popular “meta-analysis” approach vaults over such
inconvenient disparities by homogenizing them and declaring this to be “scientific.” And so the mad genius gets elevated from fallacy to fact.

Methods to the Madness

The mythers derive their pseudoscientific rationale from three primary sources: psychiatrists Nancy Andreasen and Arnold Ludwig and psychologist Jamison,
whose 1989 study and 1993 book have become the sacred scrolls of the mad genius movement (for detailed critiques of all three, see Schlesinger 2009, 2012).
Much like citing Lombroso’s work without quoting his words, many writers repeat the dramatic claims of this trio without divulging how they were derived.
After three decades of studying this literature, I suspect that too few people have actually read the originals, assuming that someone earlier in the chain
has already done the proper vetting. The result is that the frequent repetition of these references in both the popular and professional literature creates
that coveted whiff of scientific legitimacy all by itself.

Andreasen’s 1987 study is still called “the groundbreaker.” Comparing thirty writers she knew at the Iowa Writers’ Workshop with thirty roughly matched
individuals in “non-creative” fields like social work and law, she was their sole interviewer and judge, using her own diagnostic protocol that was not
printed with the article and was only available upon request. Andreasen’s shocking result—that fully 80 percent of her male, white, and middle-aged writers
were mood-disordered—was quickly scooped by Psychology Today and Science News as if it described all writers of every age, race, type,
geography, and gender. Even a full generation later, few realize that Andreasen’s stunning majority of disturbed writers reflects such a small, homogenous
group—and moreover, one living at a retreat that happens to be famous for attracting burned-out professionals. Most damaging of all, Andreasen’s results failed to reach statistical significance, so they could never be more than suggestive, rather than the supposed “proof” of creative pathology they have since become.

Jamison’s 1989 study has had a greater impact with an even weaker methodology; it effectively launched her as the go-to media expert on the mad genius and
continues to appear in virtually everything written on the subject, both inside and outside the field. Like Andreasen, Jamison was a lone interviewer who hand-picked her white, male, and middle-aged subjects: forty-seven award-winning British playwrights, poets, novelists, biographers, and visual artists, interviewed to determine the rate of mood disorders among them. She, too, invented her own diagnostic criteria, uniquely claiming that her
subjects’ reports of treatment were the most stringent index of their disorder. But the fatal flaw was her lack of a control group, which precluded any of
the customary statistical analyses and left her with only simple percentages to report.

This would seem to end the matter right there. Yet some of those percentages are so impressive that they continue to trump the study’s lack of validity.
For instance, her 50 percent rate of mental illness among poets seems to confirm the common perception that theirs is an especially perilous art—unless you
read the article for yourself and learn that this number only represents a total of nine people. Equally obscure is the fact that the hefty 12.5
percent of visual artists taking antidepressants actually reflects only one person.

It doesn’t help that the original article is so difficult to find. Although it appeared in something called Psychiatry, that journal is not one of
the prestigious, peer-reviewed journals of similar title that can be searched through the customary professional databases. Rather, this Psychiatry is the interdisciplinary newsletter of a non-degree-granting therapy school in Washington, D.C. Finding my copy took some determined off-track digging, several weeks of waiting, and a $27 fee to a retrieval service that seems to have since disappeared (it’s interesting that
Jamison’s current online CV lists 126 articles, and this is the only one that omits naming the publication). Such hurdles can easily encourage busy
researchers (and cash-strapped grad students) to simply trust their colleagues’ verdict and pass along the news without examining the content themselves.

The third major resource is Arnold Ludwig’s book, The Price of Greatness: Resolving the Creativity and Madness Controversy (1995), which actually
does nothing of the sort. But the title has apparently convinced researchers that there’s no need to go any further, since—once again—writers are far more
likely to acknowledge the work than to describe it.

Ludwig’s method was to gather 1,004 New York Times biographies of eminent people from different fields (96 percent of them white), as if they were
all shaped by the same ingredients of success. To justify bundling together such dissimilar luminaries as Amelia Earhart, Harry Houdini, and Marvin Gaye,
Ludwig invented a number of common explanatory variables like “oddness” and “anger at mother.” Despite the fact that these are neither defined nor
measurable, they appear in the fifty-five pages of charts and graphs that reassure the reader that something scientific has occurred. Ironically, although
Ludwig admits that “mental illness is not essential for artistic success” (1995, 7), his work—or at least its combination of titles—is regularly used to
argue the opposite.

A final truth-twisting technique is the “psychological autopsy,” in which mythers comb the histories of long-dead geniuses for “evidence” that they were
bipolar, even though the verdict is implicit in their choice of whose lives to examine in the first place. Any surviving diaries and letters are
scrutinized for signs of delight and disappointment, since finding both is the bipolar jackpot. Autopsists will also dissect other people’s writings for
concerns about the artist’s state of mind. These become additional “data” despite the clear possibility of personal agendas or simple gossip—i.e., that the
target’s correspondents are doing an early version of Tweeting and for similar motives: to alleviate boredom and stir things up.

Yet hearsay and rumor can easily qualify any artist for the mad genius list, the most popular of which is Jamison’s collection of 166 (1993, Appendix A).
Although to her credit she softens her diagnoses with the word probable, that word tends to fall off whenever her selections are cited. It doesn’t
help that she has also produced several concerts to “honor” supposedly bipolar composers by exhibiting musical “evidence” of their pathology. In the first
of these, in 1989, the National Symphony played composers’ happy and sad compositions together to demonstrate their aberrant fluctuations of mood (for a
full discussion of this video, see Schlesinger 2012, 106–109).

Ultimately, all this diagnosing of the long-dead seems pointless as well as mean-spirited. As every clinician knows, it can be tricky enough to evaluate
someone who is sitting right in front of you; in fact, the difficulty of getting clinicians to agree on a diagnosis is an ongoing and familiar problem, and
the main reason that the DSM criteria keep shifting around. What is less well-known is the fact that such remote assessment is actually unethical, since
psychiatry’s own Goldwater Rule prohibits the public diagnosis of any individual without benefit of a face-to-face interview. But few people know or care
about all this.

In the end, the mad genius myth is far too popular to give up—it’s old and glamorous and shimmers with a pseudoscientific patina. Besides, the currency of
psychology has always been abstraction: aside from neuroscientists who directly examine the brain, hypotheticals may be the best it can offer. This would
be fine if it enabled people to accept and appreciate the differences among us, whether in temperament or accomplishment or anything else. The problems
arise when such variations are pathologized without proof, diminishing the bright stars who bring such joy and beauty to the rest of us.

Sylvia Browne continues to offer $850 phone readings, sell books, deliver public lectures, and head her own church as she remains one of the most famous
psychics in the United States. My 2010 coauthored article, “Psychic Defective: Sylvia Browne’s History of Failure,” compiled every publicly available
prediction Browne made on missing person and death cases, totaling 115 readings, and concluded that Browne was mostly correct zero times and mostly wrong in twenty-five cases, with ninety unknown outcomes (Skeptical Inquirer, March/April 2010). In the last three years there have been developments in the
cases of Amanda Berry, Nicholle Coppler, Jerry Cushey, Alexandra Ducsay, Dustin Ivey, Hunter Horgan, Amanda Lankey, Christopher Mader, Dena McCluskey, Michelle O’Keefe, and Pat Viola that were
listed as having unknown outcomes.

This article updates the previous analysis with a new reading, bringing the total to 116 cases, and revisits the eleven cases with previously unknown conclusions, finding Browne mostly wrong in eight, with three remaining in the unknown category. The result? The evidence demonstrates
Browne still has never been mostly correct in a single case, thirty-three cases have mostly incorrect predictions, and eighty-three cases have unverified
outcomes. The article also looks at the human toll Browne’s predictions have had and other notable predictions that can be finally evaluated.
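
For transparency, the bookkeeping behind that updated tally is simple arithmetic. The following minimal Python sketch is illustrative only; the figures are those reported in the 2010 article and the updates described above.

    # Tally from the 2010 "Psychic Defective" article: 115 readings.
    correct, wrong, unknown = 0, 25, 90

    # Updates reported here: eight formerly unknown cases move to
    # "mostly wrong," three stay unknown, and one new reading
    # (the Ortiz case, discussed below) is added as unknown.
    wrong += 8
    unknown = unknown - 8 + 1

    total = correct + wrong + unknown
    print(correct, wrong, unknown, total)  # 0 33 83 116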

On April 21, 2003, Amanda Berry went missing a day before her seventeenth birthday. Louwana Miller, Berry’s mother, was desperate to find her daughter and
believed Browne was the key to solving the disappearance. In 2004, Miller was flown to The Montel Williams Show, where Browne told the grief-stricken mother that “she’s not alive,” mentioned “water” as a location where Berry was, and said she was dead because “your daughter was not the type that
would not have called you” (Radford 2013). Besides claiming that a potential person of interest was “sort of Cuban-looking, short kind of stocky build,
heavyset,” she said he was “maybe 21, something like that, 21, 22.” When Miller asked if she would ever see her daughter, Browne told the bereaved mother,
“yeah, in heaven, on the other side.”

The impact of Miller’s appearance with Browne on Montel was crushing for a mother who held out hope her daughter would be found alive. In a
detailed interview with Miller by Stephen Hudak, the mother said she believed Browne “98 percent” (Hudak 2004). When Miller died of heart failure in 2006,
reporter Regina Brett explained how hard Miller worked at drawing attention to the case and looking for her daughter “before that psychic did her in”
(Brett 2006). According to that article, Browne was more specific than what was aired on television, telling Miller that Amanda “died on her birthday,”
“she didn’t suffer,” and “that her black hooded jacket was in a dumpster with DNA on it.”

Browne’s prediction was wrong. On May 6, 2013, Berry fled after being held in torturous conditions for ten years, and police called Berry a hero for her
escape that led law enforcement to her kidnapper and two other abducted girls, Gina DeJesus and Michelle Knight. DeJesus had been missing since 2004 and
Knight was kidnapped in 2000. In the chilling phone call to police, Berry identified herself as Amanda Berry, missing for ten years and identified her
captor as Ariel Castro, a fifty-two-year-old man. In a statement posted on Browne’s website, a message says Sherry Cole, Amanda Berry’s cousin, “reached
out to Sylvia this morning to let her know that she supports her, loves her, knows Sylvia never claims to be 100 percent right, but wanted to let her know
that she was accurate in her description of the perpetrators at the time” (“Sylvia’s Statement on Amanda Berry” 2013). That Castro, born in Puerto Rico, is
“Cuban-looking” is debatable for several reasons, including the fact that being Cuban is a nationality that includes a broad category of people who have
ancestors from Africa, Europe, or both. Thus, in the broadest sense many people, not just Latin Americans, could fit that description. Browne was wrong
about the kidnapper’s age; she claimed in 2004 that the person involved was about twenty-one or twenty-two, but Castro is currently fifty-two, making her
claim off by two decades, as he was in his early forties at the time. Her description was also that the suspect was “short,” but an online booking
photograph shows he is about sixty-five inches tall (AP Photo 2013), only slightly shorter than the averages in the most recent U.S. government data: 67.1 inches for Hispanic males between forty and fifty-nine and 69.5 inches for all males between fifty and fifty-nine (United States
Department of Health and Human Services 2010). Castro may have a “stocky build,” but he does not appear to be “heavyset.” The website statement also
referred to Browne’s description of the “perpetrators,” despite police announcing Castro “ran the show and he acted alone” (Dolan et al. 2013). This
reading is moved into the wrong category of the “Psychic Defective” list.

In 2004 Browne told the mother of Amanda Berry that “she’s not alive.” Amanda and two other women escaped captivity on May 6, 2013. (Photo: Dan Callister, PacificCoastNews/Newscom)

Browne remains relatively quiet on Berry’s rescue aside from saying her “heart goes out to Amanda Berry,” but in 2012 Browne’s website posted a video of
her 2002 Montel appearance about Nicholle Coppler. In citing the case as a “validation,” Browne wrote she accurately told the mother that Nicholle
Coppler “was no longer alive and could be located in or under the house and that the person who killed her was also involved with both young boys and
girls” (“Webcast Previews” 2012). Coppler went missing in 1999 after running away from home and meeting an older man, Glen Fryer. Police suspected Fryer was
involved in the disappearance early on, and in August 2001 they not only found Coppler’s identification at his house but also her hair and photos. He was
arrested for rape and child pornography after police retrieved videos and photos of him raping underage girls, including a girl who was murdered in
Kentucky in 2000. Fryer, who was a suspect in his wife’s murder as well, agreed to a plea deal, but on February 18, 2002, he committed suicide before
telling detectives what he did to Coppler.

Nine months after the guilty plea and the suspect’s death, Nicholle’s mother, Krista Coppler, appeared with Browne in November 2002 on Montel, where Browne told Krista the obvious: that her daughter was deceased. The mother asked, “Do you know where she’s at?” and Browne replied,
“She’s right near his house.” Krista then asked Browne, “Is she in his basement?” and Browne responded vaguely with “yeah, in the house or under the
house.” According to Lima News in 2012, “police found her skeleton after the house was demolished and while the foundation was being dug out”
(Sowinski 2012). Out of the entire reading Browne was correct on the most likely scenario given Fryer’s guilty plea, suicide, connection to two previous
murders, and the evidence: Coppler was deceased. Her other predictions about Coppler being “under” the house, “near” the house, that she was “smothered,”
that Fryer transported girls “across state lines,” that she did not leave the house, that people named Kevin and Billy were involved, three males were
involved, or that she was killed out of fear for reporting Fryer’s crimes are either wrong or unsubstantiated. For example, Coppler’s remains cannot be
“near” or “under” the house while also being “in” the house.

In total, Browne’s “validated” statements for the Coppler reading number one or, at best, two out of ten predictions (counting the body buried next to the foundation as either “in” or “under” the house, as per Browne’s website claim). Accepting the body as being “in” the house makes Browne’s two other statements about the remains resting “near” or “under” the house incorrect. Therefore, in this reading of ten claims, Browne was 10 percent or, at best, 20 percent accurate, while 20 percent of her statements were wrong and the remaining 60 percent, including cause of death and possible accomplices, are unknown; the arithmetic is sketched below. Due to a lack of evidence that could either confirm or deny Browne’s other six statements, this reading remains in the unknown outcome category. This case is also a revealing look at how Browne operates. In the transcript, it was Krista’s statements about Fryer’s basement that prompted Browne to focus on the home’s interior. Furthermore, nearly ten years elapsed between the reading and the discovery of her remains; law enforcement found the deceased, and Browne played no role in locating the body.
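
Expressed as a minimal Python sketch (illustrative only, using the ten claims enumerated above and the best-case scoring):

    # Best-case scoring of Browne's ten Coppler claims: "deceased" and
    # body "in" the house count as correct, which makes "near" and
    # "under" the house incorrect; six claims remain unverifiable.
    correct, wrong, unknown = 2, 2, 6

    total = correct + wrong + unknown  # ten claims
    print(correct / total, wrong / total, unknown / total)  # 0.2 0.2 0.6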

Browne was also proved wrong in her predictions about the August 1992 murder of Hunter Horgan, a priest at St. John’s Episcopal Church in Louisiana. In
1997, Browne was paid $400 by local police for the reading in which she claimed, “The priest was killed by a ‘young mulatto’ homosexual who was enraged by
Hunter’s rejection of his advances” (McMillan 1997). The psychic said, “Someone was in love with the minister and he [the minister] wasn’t predisposed to
be in love with a man” and the “priest was trying to help him” (McMillan 1997). While Browne said she expected the perpetrator “to get caught,” she claimed
that “somebody with the street name of ‘King’ directed gang people to do it,” but when asked for a name she declined, saying, “she is concerned about the
ethics of doing so” (McMillan 1997).

In 2007, the investigation was reignited in what turned out to be a highly unusual case that Browne failed to predict. After re-interviewing two men,
police accused Derrick Odomes, an African American who lived across from the church cemetery, of robbing and murdering Horgan and obtained DNA and
fingerprints from Odomes that linked him to the crime. As it turned out, Horgan was robbed. Both his wallet and car were stolen, and police found his pants
pockets were “turned inside out” (Monroe 2011). The trial was slow to move forward because Odomes’s lawyer argued he should be tried as a juvenile, since Odomes was fourteen in August 1992. In August 2011, Odomes, by then thirty-three, was found guilty of the murder he committed as a fourteen-year-old. The judge sentenced Odomes to incarceration until he was twenty-one, but since he was over that age he did not serve any
time and faces life in prison for other charges (Nolan 2011). As for Browne’s predictions on the murder, a gang was not involved, multiple people did not
commit the crime, no “homosexual advances” were motivating factors, there was no evidence Odomes loved Horgan, no mentions about Odomes being “mulatto,”
and no person named “King” was involved. Browne’s prediction is placed in the wrong category, since most of her claims were either unsupported by fact or directly contradicted by what was presented at trial.

In 2003, Browne gave a reading to Sonya Helmantoler on Montel about the 2001 disappearance of her brother Jerry Cushey Jr. A transcript of the
reading could not be located, but a journalist at the time wrote: “Browne said Cushey had been struck on the head and choked and his body dumped,” pointing
to “how hard it is to find a body in water” (Smydo 2003). Another journalist wrote that “Browne told Helmantoler on The Montel Williams Show that
Jerry was killed because he saw something he shouldn’t” (Brubaker 2006). In 2010, Ronald Curran and Christopher Myers, Cushey’s roommate, were charged with
the shooting death of Cushey and hiding his body over a drug debt Cushey owed. In 2011, Myers pled guilty and Curran pled guilty in 2012 (Buckley 2012).
Myers took police to the two locations where they buried Cushey’s body in wooded areas (Buckley 2010). Browne’s statements about the reason, manner of
death, and location of the body were false. This was a mostly wrong prediction and has been moved to that category.

On October 11, 2006, Browne did a reading about the death of Alexandra Ducsay for Linda, her mother, and said her daughter’s murderer was “sort of like” the “Zodiac Killer.” Browne gave a name (censored by The Montel Williams Show), claimed he “got in and followed her in” and that the murder was linked to “four” other women who were found, and told the mother to search for rapists in the area. In September 2012, Matthew Pugh, Alexandra’s former boyfriend,
was charged with murder and burglary after a small piece of tape led police to him (Juliano and Cleary 2012). Pugh is only accused of one murder, but as he
is awaiting trial this case will remain in the unknown category.

In contrast, Browne gave more detail in her October 26, 2005, reading on Montel for Tamara Ivey, mother of the late Dustin Ivey, saying a
teenage boy and a “dark-haired young” female were involved. Said Browne, “I think it’s going to be solved really soon” and “a sexual predator” was the
suspect who used “a rock.” Tamara replied, “They told me that it wasn’t sexual.” In 2006, Richard Joshua Collier, Dustin’s brother and Tamara’s son, was
charged with Dustin’s murder. Police claimed the two got into an argument and Collier killed his brother. He was found not guilty at trial (Stoner 2007).
If the police and the prosecutor’s charges are correct and Browne was right about the case being solved “soon,” then all other details in Browne’s reading
were false. Conversely, if the charges were wrong then Browne’s timeline as well as the nature of his death were incorrect. Browne’s verifiable statements
in either instance were mostly incorrect, which puts this reading in the wrong category.

On February 8, 2006, Amanda Lankey’s 2004 murder was featured on Montel, where Browne spoke with her mother, Victoria Foster. Browne asked, “Do
you know anybody by the name of [censored]” to which the mother said yes. Browne claimed, “There was also a female involved with the first initial of ‘C,’”
and Browne said Amanda was killed in a “car,” specifically a blue Honda Civic. Browne said Amanda met the person on the Internet. In 2004, Cecil Wallis Sr.
was immediately named a person of interest in the murder because Lankey was last seen alive at his house and her body was found not far from that location.
In 2011, Wallis Sr. committed suicide before trial in an unrelated rape case involving teen girls at the same home between 1998 and 2002 (Tunison 2011).
Assuming Cecil Wallis Sr. was behind the murder, there is no evidence a female with a “C” was “involved,” and Browne was wrong about how the person met
Lankey. Cecil Wallis Sr. was not charged with Lankey’s murder, and without more evidence or a trial there are too many unknowns. Thus, this case remains in
the unknown category.

Sylvia Browne’s November 30, 2005, reading for Samantha Mader, mother of Christopher Mader, had a much clearer outcome. Browne gave the mother a name,
which was again censored, and claimed Christopher’s murder stemmed from the killer not liking “the food” at the bar he worked at, then later the killer
“saw him passing by, and shot him.” Browne also told the mother to start looking “where he ate breakfast.” Matthew Correll and Shawn Myers were charged
with the murder, and Correll was found guilty and Myers pled guilty in 2012 (Newman 2012). The two had attempted to rob Mader. Browne’s predictions were
not true about how many people were involved, the reason for the murder, or how the crime happened. This case has been put in the wrong category.

On February 26, 2003, Browne made predictions for Dena McCluskey’s stepmother Donna, asking the stepmother, “Who is David?” and Donna responded, “David
doesn’t ring a bell at all.” Browne then said, “She’s in like a basement thing” locally and “the reason I brought up David is because David, with an ‘L,’
last name ‘L,’ like something like [censored] or something, knows about this.” In 2007, the police found Dena McCluskey’s body “in a secluded area of
Tuolumne County” and arrested Russell Todd Jones for her murder (Ahumadara 2007). In 2011, Jones was found guilty of voluntary manslaughter in the killing
of Dena McCluskey, Jones’s roommate (Ahumadara 2011). He admitted to burying her body in a shallow grave near property owned by his parents after burning
her body. Browne’s predictions were false: there was no David involved and no “L” last name; she was wrong about the body’s location and the “basement”; and she failed to mention that the person involved was her stepdaughter’s roommate. This reading is moved into the wrong category.

In October 2000, Browne sat down with Patricia O’Keefe, the mother of Michelle O’Keefe, who was murdered in February 2000. A transcript could not be
located, but according to news reports Browne said the killer was “a blue-eyed, dark-skinned white man named Lee or Leon, who fled the scene on a shuttle
bus” (Botonis 2000b). She further obfuscated, saying the murderer is “very dark-complected and could be mistaken as being black” and “he had a blue uniform
with a pocket and a badge or something over it” (Botonis 2000b). Browne then claimed O’Keefe’s murder was part of a series of murders at that location and
that the gun used in the murder could be found “in a large green metal trash can next to an elevator or door” that had not been emptied since the murder
eight months before. In response to the taping, police announced they were following the tips Browne offered not because they believed her but because “you don’t reject any information,” as “a person could say they’re a psychic and really be trying to give you information either firsthand or from another source”
(Botonis 2000a).

In late 2009, Raymond Lee Jennings was found guilty after three trials for Michelle O’Keefe’s February 2000 murder and was later sentenced to forty years.
Long before Browne’s October 2000 reading, on April 4, 2000, Jennings was told by police he was the suspect in the murder (Brown 2012). Jennings, a
security guard at the school where O’Keefe was killed, was the sole witness and told conflicting accounts of what happened (Fausset and Blankstein 2001).
For example, he told investigators about when he first saw O’Keefe, which contradicted his earlier statements and physical evidence (Fausset and Blankstein
2001). While Browne was wrong about the suspect being named Leon, she was correct about one of his names being Lee. Browne’s website celebrated this fact
by promoting a Dateline episode showing Browne saying it was a “white man named Lee or Leon, who fled the scene on a shuttle bus”; the post offered no further analysis or clips from the show (“The Girl With The Blue Mustang” 2010). It is important to note that Raymond Lee Jennings was named as the suspect
less than two months after the murder and six months before Browne’s involvement. This case received national attention before Browne’s reading,
and O’Keefe’s murder was even featured on America’s Most Wanted in the summer of 2000. No physical evidence, such as a gun, was discovered despite
Browne’s claims and police following up on her statements. She was correct about the name Lee, his being white, and his eye color, all of which could have been surmised by anyone who followed the case and knew that Jennings had been the suspect since April. Browne was wrong about the Leon name, his being “dark-skinned,”
“very dark-complected,” “could be mistaken as being black,” and he did not “flee,” as he stayed at the scene and did not take “a shuttle bus.” Furthermore,
Browne’s claims about where the gun was were false, and O’Keefe’s death was not part of a series of other murders. While one might expect a security guard
to have a blue uniform and a badge, this was not the case. According to the Dateline episode, his uniform consisted of black pants, a black
jacket, and a brown shirt. The shirt had the company’s red logo with a pattern of a badge on the sleeves and chest, but it was not a badge, and Browne’s
claim that it had “something over it” is unclear. So while she was correct on three statements that police already knew months before, Browne was wrong on
at least ten claims. This reading is moved into the mostly wrong category.

Similarly, on February 11, 2004, Browne conducted a reading for Jim Viola, whose wife Pat Viola went missing from Bogota, New Jersey, in 2001. The psychic
said she “had a major seizure,” was then given a ride by a grocery truck driver, and the husband needed to look in Akron, Ohio (Mahabir 2004). In September
2012, authorities announced that they had had Pat Viola’s body since July 27, 2002, when they found it washed ashore on a Rockaway beach in New York. DNA tests of the bones were taken in 2006, and new samples from 2011 led to the identification (Bautista and Superville 2012). Pat Viola was dead at the time of Browne’s reading, so she could not have been alive in Ohio; her remains were in New York. This reading is moved to the mostly wrong category.

Browne’s dismal record has not dissuaded people from asking her questions about criminal cases. In 2011, she was asked by Angela Spinks, in front of an
Albuquerque, New Mexico, audience, who killed Lloyd, Dixie, and Steven Ortiz, her parents and brother, with a pickaxe on Father’s Day. According to
journalist Nico Roesler, Browne told Spinks the murderer was Jesse Rios, her brother-in-law (Roesler 2012). The police had previously questioned Rios and
his wife Cherie Ortiz-Rios, who found the bodies and lived on the property (Roesler 2012). An official with the New Mexico state police “told the family to
disregard Browne’s answer because the show was rigged and that it was a stunt” (Roesler 2012). The murders remain unsolved, and it is unclear what, if any,
information Browne knew about the triple homicide from the media. Adding this case to the “Psychic Defective” list of readings with unknown outcomes brings the total to 116 cases, with eighty-three unknown outcomes.

These readings are not Browne’s only miserable predictions in recent years. Browne predicted in Prophecy (2005): “After Pope John Paul II passes,
there will be only one more elected pope” and wrote “he will be succeeded by what is essentially a triumvirate of popes” (Browne and Harrison 2005). In
2013, Pope Benedict XVI resigned, the first pope to do so in nearly 600 years, and Pope Francis was elected, becoming the first pope from the Americas. Browne’s
predictions about the Pope were wrong, and she failed to predict these rare moments in the papacy. In End of Days (2008) Browne made predictions
such as: “Many of the dramatic advancements in our space travel will be the direct result of what we’ve learned from them, from the manned Mars exploration
in 2012” (Browne and Harrison 2008). There was no 2012 mission to Mars. In 2011, Browne predicted Mitt Romney would defeat Barack Obama in the 2012
presidential election, only to reverse herself in late September 2012 when Romney was trailing in polls and received negative press for his private
comments made to donors (Skomal 2011).

If one focuses only on the missing person cases, Browne’s prediction about Amanda Berry was not even the first time Browne told a mother her child was dead
when the missing child was later found alive. In 2003, Browne told the parents of Shawn Hornbeck he was dead, but he was found alive in 2007. After her
failed prediction received media attention, Browne released a statement to CNN’s Anderson Cooper saying: “She cannot possibly be 100 percent correct in
each and every one of her predictions. She has, during a career of over 50 years, helped literally tens of thousands of people” (“Psychic Told Parents That
Son Was Dead” 2007). The question is, if Browne cannot be 100 percent accurate then just how accurate is she? The Ortiz reading has been added to the
metric, while Browne was wrong in the cases of Amanda Berry, Jerry Cushey, Dustin Ivey, Hunter Horgan, Christopher Mader, Dena McCluskey, Michelle O’Keefe,
and Pat Viola. The Nicholle Coppler, Alexandra Ducsay, and Amanda Lankey cases remain on the unknown list. Following these recent updates to the “Psychic
Defective” article, Browne has never been mostly accurate out of 116 readings, with thirty-three cases mostly wrong and eighty-three unverified
predictions.

When the Strangling Angel was at her strongest in Norway and Germany, 715 people each day were infected. She took refuge in the throats and hearts of the unprepared. The Strangling Angel—diphtheria—found comfort on the boots of German soldiers as they marched across Europe in the 1940s. Even though she visited over one million people in Europe at the time, she was forgotten and her lessons were lost.

The merciless lessons of diphtheria are drowned out today by the echo chamber of anti-vaccine activists trying to convince parents not to vaccinate their
children and to discourage people from vaccinating themselves. An online collection of factoids meant to support the anti-vaccination position reverberates
around the Internet, cut from one
website and pasted onto another without fact checking or context. Two of these anti-vaccination echoes are: “In Germany, compulsory mass vaccination
against diphtheria commenced in 1940 and by 1945 diphtheria cases were up from 40,000 to 250,000” (Allen 1985), and its usual sidekick, “In nearby Norway,
which refused vaccinations, there were 50 cases of diphtheria.”

The truth about the relationship between diphtheria and vaccines in the 1940s cannot be expressed as a shout against hard walls while standing at the
bottom of a canyon. A higher vantage point is needed to see the truth about vaccines in Europe in the 1940s.

Diphtheria 101

Most industrialized westerners are blissfully ignorant of diphtheria today. To understand how it spread across Europe, we must first understand the basics
of the disease.

Diphtheria is a highly contagious bacterial disease of two types: respiratory and skin (cutaneous). Skin diphtheria can cause redness, sores, and ulcers.
Mild fever, sore throat, and chills are the first symptoms of respiratory diphtheria. Diphtheria then creates a toxin that makes a blue or gray-green
coating that sticks to the throat and nose. The coating thickens in the throat, making it hard to swallow and robbing the patient of breath. Some patients’
necks swell, sometimes to the width of the head, a condition called Bull Neck. The toxin can also travel to the nervous system, causing paralysis, and to
the heart causing heart failure. Diphtheria was once called the “Strangling Angel” because of how it kills.

The bacteria that cause diphtheria reside in the upper respiratory system. The disease is spread by close contact with an infected person or contact with droplets
of saliva, the toxin, or other bodily fluids. Occasionally objects soiled by an infected person can spread the disease. Susceptibility increases in
overcrowded, unsanitary, and poor socioeconomic conditions. Research also indicates that stress and starvation make a person more likely to contract
diphtheria. Between 5 and 20 percent of people who get diphtheria die, depending on age. Children are at the highest risk of death.

Diphtheria was a scourge on Europe’s residents during World War II. Europe saw more than one million recorded diphtheria cases in 1943, not counting Russia
(Stowman 1945).

“Don’t Get Stuck”

Don’t Get Stuck! asserts: “Vaccination was made compulsory [in Germany] at the beginning of the Second World War; and the diphtheria rate soared up to 150,000 cases, while in unvaccinated Norway, there were only 50 cases” (Allen 1985). Allen, past president of the natural medicine advocacy group the American Natural Hygiene Society, offers no sources for this information.

Hard Numbers

In 1945, the United Nations Relief and Rehabilitation Administration released diphtheria numbers for several European countries over several
years, including Norway and Germany (UNRRA 1945).

• In 1940, Germany had 143,585 cases and Norway had 149.

• In 1943 (the last year in the report), Germany had 238,409 cases and Norway had 22,787.

Allen’s baseline figures (40,000 cases for Germany and 50 for Norway) amount to only about one-third of what those countries actually recorded in 1940: 143,585 and 149 cases, respectively.
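
As a quick check of that arithmetic, here is a minimal Python sketch (illustrative only), comparing Allen’s quoted baselines with the UNRRA figures above:

    # Allen's claimed baselines versus the UNRRA-recorded 1940 totals.
    allen = {"Germany": 40_000, "Norway": 50}
    unrra_1940 = {"Germany": 143_585, "Norway": 149}

    for country, claimed in allen.items():
        print(country, round(claimed / unrra_1940[country], 2))
    # Germany 0.28, Norway 0.34 -- both roughly one-third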

Diphtheria’s Dark History

In the 1920s and 1930s, diphtheria killed thousands of people every year in Europe, but not all countries were equally affected (Rosen 1948). Diphtheria
was one of the top three killers of people under age fifteen in England and Wales in the 1930s. Germany wrestled with exponential increases in diphtheria
infection rates between 1920 and 1940 from about 50 to over 200 per 100,000 people (Baten and Wagner 2003). Norway escaped being ravaged by diphtheria and
had a long-term steady decline in the numbers of diphtheria cases.

Due to the small number of cases in Norway, no national program for diphtheria vaccinations was instituted. Epidemiologists of the time described Norway as
almost completely non-immunized (Anderson 1947).

Allen’s claims do not mention other European countries, but England’s experience is essential to note. England implemented a mass diphtheria vaccination program in 1940
in response to the outbreaks. The value of vaccines was made clear to the public. In 1940, England had 47,683 cases of diphtheria and in 1944, the number
had dramatically dropped to 29,446 (UNRRA 1945).

Germany did not follow England’s lead, but neither did it go the way of Norway. In 1945, Dr. G. Stuart of the European Regional office of the United
Nations Relief and Rehabilitation Administration summarized the German vaccine program:

The reason underlying the high morbidity in Germany and its incorporated territories is largely determined by the absence of any nation-wide policy of
immunization comparable to that so successfully applied in Great Britain. On the other hand, a large-scale campaign was introduced in the pre-war period in
Western Germany, while an increase in diphtheria morbidity and mortality has since led the Reich and Prussian Ministry of the Interior to approve
immunization in those parts of Germany particularly affected. Moreover, immunization is compulsory for all youths at the beginning of their Landjahr—i.e.,
their year of agricultural service. (Stuart 1945)

(Landjahr was a voluntary program for all youngsters except university students, who were required to participate.)

Germany had an incomplete, noncompulsory diphtheria vaccination program. A large percentage of, maybe even most, citizens were not vaccinated against
diphtheria. Even though the original claim and Allen’s claim have been fully discredited, the numbers have not yet been put in appropriate context to shed
light on the larger issue of vaccine effectiveness. We need to understand why Germany and Norway’s numbers went up at such a high rate.

There are two obvious factors that have so far gone unexplored: World War II and the Holocaust.

Sardines and Sanitation

In September 1935, Germany passed the Nuremberg Laws, depriving Jews of many of the rights and protections of citizenship. In October 1935, the laws were
extended to cover Roma (gypsies), blacks, and other “undesirables.” Between 1933 and 1939, new laws banned Jews from municipal hospitals, forced them out
of schools at all levels, and severely limited Jewish doctors’ ability to practice medicine. This is not a comprehensive list, but it does demonstrate how
restricted health care was for Jewish and other “undesirable” Germans.

In October 1939, the Germans opened the first ghetto specifically for Jews in occupied Piotrków Trybunalski, Poland. Also in 1939, Germany
greatly expanded its use of concentration camps. The number of people held in concentration camps quadrupled from 1939 to 1942. Undesirables from all over
Europe—Jews, the mentally ill, Roma, communists, gay people, political dissidents—were also imprisoned in the camps. Estimates of the number of camps range greatly, some running as high as 15,000 by the end of the war. Some were temporary; others existed for several years.

(There were several different types of concentration camps, including labor camps and extermination or death camps. For the purposes of examining
diphtheria in Germany during this time, the specific type of camp is not important, so all the different types of camps will be referred to as
concentration camps.)

The Nazis packed people into concentration camps at an even higher rate than they did in the ghettos and further restricted access to clothes, shoes, soap,
food, medicine, and clean water. Many camps used prisoners for slave labor. The clothing was inadequate to protect from the cold. Camp prisoners had to
contend with starvation, unending stress, exhaustion, and exposure. From a germ’s point of view, it was a perfect place to reproduce. Unrelenting tidal
waves of disease swept through the camps without conscience or mercy.

“Quite aside from hard-to-measure traumas such as the drawn-out anticipation of an impending catastrophe, the incarceration itself, the dehumanization, the
sustained fear of death, I could point to some very tangible assaults upon my health in the concentration camp,” Jewish linguist Werner Weinberg recalled
about his experience in Westerbork Camp and Bergen-Belsen Concentration Camp. “Among them were prolonged starvation and exposure; being worked beyond my
endurance and strength; every cut and bruise turning into festering wounds accompanied by high fever; diphtheria, dysentery, hepatitis, and a bout with
typhus that very nearly killed me” (Weinberg 1984).

Bergen-Belsen Concentration Camp was located in northern Germany near Celle. Housed in Bergen-Belsen were Jews, Roma, criminals, Jehovah’s Witnesses,
homosexuals, prisoners of war, and political prisoners. In July 1944, approximately 7,300 prisoners lived in Bergen-Belsen. By April 1945, the number rose
to 60,000. Many of those people were evacuated from other camps or regions in the German occupied territories. Food rations did not rise proportionally.

Bergen-Belsen was liberated on April 15, 1945, by the British Army. The liberators were shocked by what they found: more than 60,000 prisoners in various
stages of starvation and almost all suffering from disease.

Lieutenant Colonel M.W. Gonin of the Royal Army Medical Corps was one of the first medics from the Allied Forces to enter Bergen-Belsen Concentration Camp. Said Gonin, “One had to get used early to the idea that the individual just did not count. One knew that 500 a day were dying and that 500 a day were
going to go on dying before anything we could do would have the slightest effect. It was, however, not easy to watch a child choking to death from
diphtheria when you knew a tracheotomy and nursing would save it” (Reilly et al. 1997).

Fear Medicine

Prison block infirmaries and camp hospitals were short of all medical supplies including cots, life-saving drugs, sterile supplies, diagnostic tools,
staff, and anesthesia. Prisoners had to share beds even if their diseases were contagious. The block infirmaries were often staffed by other prisoners
without medical training, who were left to administer treatment, diagnose problems, and even perform surgery. The infirmary staff would often try to hide
advanced illness from Nazi doctors who would come to check on the patients because the Nazi doctors were usually performing “selection.” When patients were
“selected” it usually meant they were chosen to be put to death. According to numerous accounts, prisoners resisted going to the camp doctors. The
infirmaries were called “waiting rooms for the crematoria” in some camps.

Just an Experiment

In some camps prisoners were subjected to deadly medical experiments, including new vaccine development. Research on typhus, smallpox, cholera, malaria,
yellow fever, tuberculosis, paratyphoid, and diphtheria was conducted on the prisoners.

In written testimony given for the International Auschwitz Committee, former prisoner Dr. Stanislaw Klodzinski described the medical experiments he saw
performed by SS doctors and pharmaceutical company representatives: “These preparations, he tried out on prisoners of the Auschwitz camp for experimental
purpose regarding typhus, typhoid fever, and various para-typhoid diseases, diarrhoea [sic], tuberculosis of the lungs, erysipelas, scarlet fever
and other diseases” (International Auschwitz Committee 1986).

When disease levels got too low in the camps where research was taking place, Nazi doctors intentionally injected prisoners with disease and sent them out into the camp populations to re-infect other prisoners and keep the diseases active. By keeping infections circulating, the doctors could study the effectiveness of vaccines as well as the long-term effects of the diseases (Baumslag 2005, 145).

War

Lieutenant William Smith of the Canadian Army describes death at the front in World War II in his online account of Operation Infatuate (an amphibious landing to take Walcheren, a Dutch island). “There in early December, outside Groesbeek, on the edge of the Reichswald Forest, I was wounded on patrol.
Brown was killed, shot in the kidneys by a sniper, and Doakes died of diphtheria in a hospital somewhere in Holland” (Smith 1944).

Smith’s story was common. Soldiers killed each other; diphtheria and other diseases killed soldiers. Soldiers, especially those on the front lines and in prisoner of war (POW) camps, were underfed, underdressed, exhausted, and packed together in tight groups, leaving them vulnerable to disease. Every aspect of war made soldiers and civilians more vulnerable.

Capture

Conditions for POWs during World War II varied depending on rank, circumstance, and the country in which they were captured. In Germany, some POWs were kept in castles; others were forced to work as slave laborers. Many lived in concentration camps or in conditions similar to them.

In January 1945, tens of thousands of malnourished Allied prisoners of war (estimates range from 30,000 to 120,000) were forced to march in groups of up to 300 across Poland and Germany in what came to be known as The Long March. Temperatures dipped to a biting -13 degrees Fahrenheit. The prisoners were given inadequate water and food; they resorted to drinking from ditches and scavenging for food, including eating rats. Forced to sleep on the ground in freezing conditions, many suffered frostbite that led to amputations. POWs died from exposure, dysentery, exhaustion, pneumonia, typhus, and diphtheria. Between 1,121 and 2,200 POWs died during the three-month winter march.

Diphtheria on the March

Germany’s policies of slave labor, concentration camps, and ghettos, combined with a lack of vaccines, made it a festering pustule of disease. As Germany marched across Europe, disease became a second army, a wake of death behind the tanks and guns.

The European countries with the greatest increases in diphtheria during 1940–1944 were Norway, Belgium, the Netherlands, France, and Denmark. The
Netherlands saw a whopping forty-fold increase in diphtheria cases, which was dwarfed by Norway’s 112-fold increase in cases (Stuart 1945). The increases
in diphtheria rates all followed the German occupations in those countries. Germany’s official numbers did not even double (UNRRA 1945).

Belgium was invaded by Germany in May 1940, and with the invaders came a considerable increase in diphtheria. In 1939, Belgium had 2,419 cases of diphtheria. By 1941, the number had skyrocketed to 4,271. Worse still was 1943, with 16,072 cases, about 1,340 per month (UNRRA 1945). In September 1944, the Canadians pushed into Belgium and began to shove the Germans out; by early November the Germans were gone. That month, cases were down to 447 (Stuart 1945).

The Netherlands was also invaded by Germany in May 1940. In 1939, it had 1,273 cases of diphtheria; 1940 brought a shocking 5,501. The exponential growth continued for the next two years, with 19,527 cases in 1942 and 56,603 in 1943. By August 1944 the count had reached 60,226. The Germans were booted from the Netherlands in early 1945, and that year Dutch diphtheria cases fell faster than they had risen the year before, down to 49,730 (Stuart 1945; Anderson 1947; UNRRA 1945).
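To make the scale of these increases concrete, here is a minimal sketch in Python that recomputes the fold increases from the case counts quoted above. The counts are the ones cited in this article; published multiples such as the forty-fold and 112-fold figures (Stuart 1945) may rest on different baseline years, so treat these as rough cross-checks rather than the original calculations.

```python
# Diphtheria case counts as quoted in this article.
# Format: country -> (pre-occupation annual cases, wartime peak cases)
cases = {
    "Belgium": (2_419, 16_072),      # 1939 vs. 1943
    "Netherlands": (1_273, 60_226),  # 1939 vs. August 1944
    "Norway": (71, 22_787),          # 1939 vs. 1943
}

for country, (before, peak) in cases.items():
    fold = peak / before
    print(f"{country}: {before:,} -> {peak:,} cases, roughly {fold:.0f}-fold")

# Belgium's 1943 total also implies the monthly rate cited above.
print(f"Belgium, 1943: about {16_072 / 12:,.0f} cases per month")
```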

The Netherlands and Belgium had one significant commonality—incomplete vaccine programs. The Netherlands stopped its previously widespread but not comprehensive diphtheria vaccination program during the war. Belgium attempted widespread vaccination but did not make it mandatory.

Norway Was Different

Unlike the Netherlands and Belgium, Norway was completely undefended against diphtheria. Prewar rates had dipped so low that the population had little natural resistance, and there was no national or compulsory vaccination program. When Norway was invaded in April 1940, the stage was set for an astronomical spread of diphtheria.

In 1939, Norway had 71 cases of diphtheria. By 1943 the number was up to 22,787 (UNRRA 1945). Norway was caught totally unprepared. With no inoculations and no natural immunity, the population was at the mercy of troop movements and the disease’s course. Norway’s decision to leave its populace unvaccinated was a deadly mistake.

Lessons Learned

Allen was on to something—diphtheria in Europe during the 1940s is a compelling case study for the current debate over the safety and effectiveness of vaccines. Unfortunately for the original online assertion, Allen’s position, and the anti-vaccination position, the vaccination histories of Norway and Germany argue in favor of mass, compulsory vaccination and show the danger the anti-vaccination movement poses to the health of all people.

In countries without complete vaccination programs, when the disease was introduced, it spread at an almost inconceivable rate. Norway’s mistake—be it a result of hubris or ignorance—was its belief that it could control a disease without vaccines, and its failure to adequately consider changing conditions outside its control.

Europe in the 1940s is a case study that demonstrates the importance of paying attention to the health of all countries and helping them eradicate their diseases. The world is a smaller place than it was during WWII; everyone is just a short plane ride from the next continent. A person infected with diphtheria can take two to five days to show symptoms. In that time an infected person can travel thousands of miles by plane and come into contact with thousands of people.

Germany and Norway in the 1940s also teach us that human rights abuses are not just matters of morality for the persons directly involved. Altruism is not the only reason to help people in conditions like those in ghettos and camps; enlightened self-interest may also demand protection against outbreaks of disease. The takeaway from World War II European diphtheria rates and the effectiveness of vaccines is most clearly stated as follows:

In the 1930s and 1940s, Germany created a breeding ground for diphtheria. It did not implement a comprehensive, compulsory vaccine program. It restricted medical care for large sections of the population and crowded those people into concentration camps and ghettos. That concentration, and the accompanying war conditions, led to outbreaks of disease in Germany. As a consequence, diphtheria could be found in the footsteps of German soldiers in the countries they invaded during World War II. After being invaded by Germany, Norway, which had no diphtheria vaccine program, saw unimaginable increases in diphtheria (149 cases in 1940 to 22,787 in 1943). Other countries with incomplete inoculation, such as Belgium, the Netherlands, France, and Denmark, also had huge increases. England had an extensive vaccination program, and its incidence of diphtheria declined during World War II. Vaccination is essential to protecting people from disease; merely controlling disease by other means is not sufficient.

The anti-vaccination movement in America today uses Hannah Allen’s claims about Germany and Norway to try to discourage vaccination of children and adults. As a result of its efforts, nearly ten percent of children in America are not fully vaccinated, and the number of parents filing for exemptions to school vaccine requirements is rising steadily. America is having outbreaks of diseases previously controlled through vaccination, such as whooping cough and measles. America may soon be as vulnerable as Belgium was in the 1940s. If anti-vaccination proponents see their goal accomplished, America risks becoming another Norway.

Dr. Paul Offit, developer of the rotavirus vaccine, summarized his objection to the anti-vaccination claims about vaccines in 1940s Germany plainly: “I
can’t believe we are still discussing this in the 21st century. There is no debate. Look at the history of vaccinations in the world and you come away with
the following conclusion: immunization rates increase, disease decreases. It is just that simple.”

Acknowledgments

For research assistance I thank Kristian Frøland, student at the Norwegian University of Science & Technology, and Timothy Binga of the Center for Inquiry Libraries.

Note

1. The United Nations Relief and Rehabilitation Administration, founded in 1943, was the international agency tasked with planning and coordinating relief efforts for victims of World War II.

Human-caused (anthropogenic) global warming has been a topic of major scientific interest for more than half a century, and its consequences are broadly
apparent in rising surface and ocean temperatures, dramatic changes in Arctic ice, rising sea levels, and a multitude of stresses placed on ecosystems
around the planet, such as poleward migration of bark beetles killing vast forests. However, some powerful organizations dispute the reality of this
climate change and the role of the greenhouse effect as its cause. In the United States, there is widespread public confusion—or even outright denial—about
global warming.

Figure: Atmospheric CO2 as measured from the Mauna Loa Observatory since 1958. Note the small repeating seasonal variations. There is a steady overall increase from 315 to 400 ppm, and the slight upward curvature shows the accelerating accumulation of CO2. Carbon dioxide is the primary greenhouse gas in the atmosphere, and as such is the main driver of climate change. The increasing CO2 greenhouse is causing an energy imbalance, with more heat added to the Earth every year. However, only a part of this excess energy goes to increase the surface air temperatures, which are also subject to a variety of other influences. Data from Pieter Tans, NOAA/ESRL (www.esrl.noaa.gov/gmd/ccgg/trends/) and Ralph Keeling, Scripps Institution of Oceanography (scrippsco2.ucsd.edu/).

A major recent source of public misunderstanding is the slowing of the rise of temperature (the so-called temperature “plateau”) that is apparent in annual
average global surface temperature over the past fifteen years, following rapid warming in the preceding twenty-five years. Adding to some people’s
bewilderment is the fact that atmospheric and climate scientists tend to downplay this supposed plateau and continue to assert that the planet is warming
at a dangerous rate. What is the reality behind this divergence of opinion between scientists and their critics over the significance of the putative plateau?

Evidence for Global Warming

Before addressing the surface temperature issue, it’s helpful to review the scientific case for continuing global climate change. The reality of the
greenhouse effect and the implications of increasing carbon dioxide in the atmosphere have been known since the pioneering work of Svante Arrhenius in
The steady increase in atmospheric CO2 since the beginning of the industrial revolution has been monitored directly since 1958
and reconstructed for earlier years from atmospheric samples trapped in ice. In May 2013 the concentration reached 400 ppm, 43 percent above pre-industrial
CO2 levels. This increase is almost entirely caused by human burning of fossil fuels and deforestation. Basic physics tells us that the
increasing CO2 greenhouse effect will affect the temperature of the planet; the key question is by how much.
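As a quick check on that percentage (assuming the commonly cited pre-industrial baseline of roughly 280 ppm, a figure not stated explicitly above):

\[
\frac{400\ \text{ppm}}{280\ \text{ppm}} - 1 \approx 0.43,
\]

that is, about 43 percent above the pre-industrial concentration.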

In the past two decades we have been able, for the first time, to measure the greenhouse heating and to track where the excess heat is deposited. Excess
greenhouse gases trap heat in the atmosphere and reduce infrared radiation to space. The imbalance causes the Earth to absorb more energy than it radiates.
Most of that energy is going into the ocean, not the surface or atmosphere. Global ocean temperatures are now measurable for the first time by the Argo deep-ocean probes, an international network of more than 3,000 drifting floats that measure ocean temperatures down to a depth of two kilometers. One
consequence of ocean heating is sea level rise from thermal expansion of the water, now taking place at an average rate of 3.3 ± 0.4 mm per year (based on
data from 1993 to 2009).

The effects of global warming are visible in many natural systems. The most dramatic changes are in the Arctic. In the Arctic Ocean, the minimum summer ice
cover has shrunk by more than 50 percent, and the residual ice is only about half as thick as it was thirty years ago. The Greenland ice sheet is rapidly
shrinking, as measured by satellites that sense the total mass of ice, and in the summer of 2012 it lost an astounding 500 cubic kilometers of ice. Most of
the ice does not melt in place, but surface melt water lubricates the ice flow and causes much more ice to flow into the sea. The warming ocean then
supplies the energy to melt this ice. There is similar ice loss in the Antarctic, primarily by erosion of floating ice shelves from below by warmer
seawater.

All this evidence demonstrates the recent acceleration of global warming. But what about the evidence from global surface temperatures?

The Temperature Plateau

The often-quoted global surface temperatures are a measurement of only a very small part of the global energy balance. However, they are important for two
reasons. First, they are part of a continuous record of thermometer measurements that can be traced back a century and a half (and extended further into
the past by geologists), unlike the measurements of deep ocean heat or polar ice loss, which are limited to the past few decades. Second, these
measurements are easily appreciated by the public, relating to their sense of what warming means.

Surface temperatures fluctuate for many reasons other than greenhouse warming. For example, they are slightly affected by small variations in solar heating (the solar activity cycle). In the short term they are dominated by weather and by such multiyear cycles as the El Niño-Southern Oscillation (ENSO), which involves a major redistribution of heat between the ocean and the atmosphere. The year 1998 saw one of the largest ENSO events in history, with resulting high measured temperatures. For all of these reasons, climate scientists looking for the effects of greenhouse warming prefer to average surface temperatures over timescales longer than a decade, smoothing out the “noise” caused by the solar cycle, ENSO, and short-term perturbations such as volcanic eruptions.
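To illustrate what such averaging does, here is a minimal sketch in Python applying an eleven-year running mean to noisy synthetic annual temperatures. The trend and noise levels are illustrative assumptions, not real measurements.

```python
import numpy as np

# Synthetic annual temperature anomalies: an assumed underlying trend
# of 0.017 C/yr plus random "noise" standing in for ENSO, the solar
# cycle, and volcanic perturbations.
rng = np.random.default_rng(0)
years = np.arange(1970, 2013)
annual = 0.017 * (years - 1970) + rng.normal(0.0, 0.1, years.size)

# An eleven-year running mean damps the short-term fluctuations and
# leaves the long-term trend visible.
window = 11
smooth = np.convolve(annual, np.ones(window) / window, mode="valid")
mid_years = years[window // 2 : -(window // 2)]

for y, t in zip(mid_years[::10], smooth[::10]):
    print(f"{y}: {t:+.2f} C relative to 1970")
```

The point is not the particular numbers but the behavior: individual years wander, while the decadal average climbs steadily.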

Figure: Two perspectives on the global average temperature changes since 1970. In the upper panel, the data are fit with a straight line. In the lower panel, they are fit with several straight-line segments, giving a “stair-step” fit. Both show the same temperature increase of 0.7 C over forty-two years, but we react differently depending on the way the data are presented. The plots show annual temperatures, relative to the average from 1964–1994. Data are averages from NASA (Goddard Institute for Space Studies), NOAA (National Climatic Data Center), and the UK Met Office (Hadley Centre). Figure courtesy of Dana Nuccitelli (skepticalscience.com).

Climate is not the same as weather, nor is it defined by this short-term interannual variability. Climate has long been considered to be a twenty- to
thirty-year average. So whatever plateau may appear in annual temperature records, it is dominated by short-term fluctuations. There is no plateau in
climatologically significant temperature. However, we are impatient and unwilling to wait twenty to thirty years to assess the reality of climate change.
Looking at just the past decade of surface temperature measurements, there is an apparent plateau.

The two charts above illustrate two different ways to look at the same information. Both plot the measured global temperatures from 1970 to 2012. The first fits a straight line to derive an average rate of warming over the full forty-two-year span. The second fits the same data with a series of straight-line segments; those segments show a temperature plateau from 2001 to 2012.

These charts illustrate two equally plausible ways to look at the data. One indicates a steady increase in temperature; the other shows a stair step or
“escalator.” Both show that the temperatures are rising, either continuously or episodically—take your choice. The temperatures in every decade over the
past half-century have been higher than in the previous decade, and the two warmest years were both in the past decade, in 2005 and 2010.
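For readers who want to reproduce the contrast between the two fits, here is a minimal sketch in Python with NumPy, again on synthetic data; the segment boundaries and trend/noise parameters are illustrative assumptions, not the published analysis behind the charts.

```python
import numpy as np

# Synthetic stand-in for the 1970-2012 annual temperature record.
rng = np.random.default_rng(1)
years = np.arange(1970, 2013)
temps = 0.017 * (years - 1970) + rng.normal(0.0, 0.08, years.size)

# Perspective 1: one straight line through all the data.
slope, _ = np.polyfit(years, temps, 1)
print(f"overall trend: {slope * 10:+.2f} C per decade")

# Perspective 2: the "escalator," independent short segments. A final
# segment can look nearly flat even when the overall trend is positive.
for lo, hi in [(1970, 1984), (1985, 2000), (2001, 2012)]:
    mask = (years >= lo) & (years <= hi)
    seg_slope, _ = np.polyfit(years[mask], temps[mask], 1)
    print(f"{lo}-{hi}: {seg_slope * 10:+.2f} C per decade")
```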

Is the short-term temperature “plateau” significant? Taken alone, it might be. But seen in the context of other evidence for a rapidly warming
planet, this recent fluctuation in the surface temperature data is not evidence against global warming. Heat is being deposited on our planet, whether or
not it yet shows up in short-term surface temperature data.

Conclusion

This short-term temperature plateau has become the primary argument of those who either question or outright deny the reality of human-caused global
warming. Their argument is part of an overall denialist position that does not recognize the reality of climate science. Sometimes it is said that carbon
dioxide is such a minor fraction of the atmosphere that it could not contribute to climate change, or even that it is absurd to imagine that anything
humans do can have a major effect on our planet. This narrative asserts that climate scientists have merely observed a correlation between temperatures and CO2 concentration and naively claimed that the correlation implies causation; on this view, if temperatures do not rise steadily with CO2, the entire idea of climate change is disproved.

The denialists do not acknowledge the broad-based evidence of climate change, or the thousands of scientific papers published annually that strengthen our
understanding of climate and have never been rebutted scientifically. Their tactic is similar to that of the evolution deniers, who ignore entirely the
research of evolutionary scientists and assert, simply, that biologists believe absolutely in the correctness of Darwin and have been naively interpreting
all evidence according to a Darwinian dogma.

Those who deny climate science and evolutionary biology set up a strawman caricature of science and often succeed because so few people understand how
science really operates.

Dr. Oz’s Questionable Wizardry
Miracles are pretty rare events. Except on television’s Dr. Oz Show, where they appear with astonishing frequency. Oz of course doesn’t
claim to raise the
dead or part the Red Sea, but he does raise people’s hopes of parting with their flab. And he’s certainly not shy about flinging the word
miracle about. But it seems miracles fade as quickly as they appear. Raspberry ketones, acai berries, and African mango, once hyped as
amazing “fat busters,” have already given way to newer wonders.

Granted, Dr. Oz—or more likely his producers—do not pull miracles out of an empty hat. They generally manage to toss in a smattering of stunted facts that
they then nurture into some pretty tall tales. Like the ones about chlorogenic acid or Garcinia cambogia causing effortless weight loss. The former piqued
the public’s interest when the great Oz introduced green coffee bean extract as the next diet sensation. Actually “chlorogenic acid” is not a single
compound but rather a family of closely related compounds found in green plants, which perhaps surprisingly contain no chlorine atoms. The name derives
from the Greek “chloro” for pale green and “genic” meaning “give rise to.” (The element chlorine is a pale green gas, hence its name.)

It was an “unprecedented” breakthrough, Oz curiously announced, apparently having forgotten all about his previous weight-control miracles. This time the
“staggering” results originate from a study of green coffee bean extract by Joe Vinson (2012), a respected chemist at the University of Scranton who has a
long-standing interest in antioxidants such as chlorogenic acid. Aware that chlorogenic acid had been shown to influence glucose and fat
metabolism in mice, Vinson speculated that it might have some effect on humans as well. Since chlorogenic acid content is reduced by roasting, a green
coffee bean extract was chosen for the study.

In cooperation with colleagues in India who had access to volunteers, Vinson designed a trial whereby overweight subjects were given, in random order, for
periods of six weeks each, either a daily dose of 1,050 mg of green coffee bean extract, a lower dosage of 700 mg, or a placebo. Between each six-week
phase there was a two-week “washout” period during which the participants took no supplements. There was no dietary intervention; the average daily calorie
intake was about 2,400. Participants burned roughly 400 calories a day with exercise. On average there was a loss of about a third of a kilogram per week.
Interesting but hardly “staggering.” And there are caveats galore.
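To see why the result is hardly staggering (my arithmetic, assuming the reported average rate held through a full six-week phase):

\[
0.33\ \tfrac{\text{kg}}{\text{week}} \times 6\ \text{weeks} \approx 2\ \text{kg},
\]

roughly four and a half pounds per arm of the study.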

The study involved only eight men and eight women, a statistically very weak sample. Their diet was self-reported, a notoriously
unreliable method. The subjects were not really blinded since the high-dose regimen involved three pills and the lower dose only two pills. A perusal of
the results also shows some curious features. For example, in the group that took placebo for the first six weeks, there was an eight kilogram weight loss
during the placebo and washout phase, but almost no further loss during the high-dose and low-dose phases. By the time, though, that critics reacted to
Oz’s glowing account, overweight people were already heading to the health food store to pick up some green coffee bean extract that might or might not
contain the amount of chlorogenic acid declared on the label. As for Dr. Oz, he had already moved on to his next “revolutionary” product, Garcinia
cambogia, unabashedly describing it as the “Holy Grail” of weight loss.

We were actually treated to the Grail in action. Sort of. Dr. Oz, with guest Dr. Julie Chen, performed a demonstration using a plastic contraption with a
balloon inside that was supposed to represent the liver. A white liquid, supposedly a sugar solution, was poured in, causing the balloon, representing a
fat cell, to swell. Then a valve was closed, and as more liquid was introduced, it went into a different chamber, marked “energy.” The message was that the
valve represents Garcinia extract, which prevents the buildup of fat in fat cells. While playing with balloons and a plastic liver may make for
entertaining television, it makes for pretty skimpy science.

Contrary to Dr. Oz’s introduction that “you are hearing it here first,” there is nothing new about Garcinia. There’s no breakthrough, no fresh research, no
“revolutionary” discovery. In the weight-control field, Garcinia cambogia is old hat. Extracts of the rind of this small pumpkin-shaped Asian fruit have
long been used in “natural weight loss supplements.” Why? Because in theory, they could have an effect.

The rind of the fruit, sometimes called a tamarind, is rich in hydroxycitric acid (HCA), a substance with biological activity that can be related to weight
loss. Laboratory experiments indicate that HCA can interfere with an enzyme that plays a role in converting excess sugar into fat, as well as with enzymes
that break down complex carbohydrates to simple sugars that are readily absorbed. Furthermore, there are suggestions that Garcinia extract stimulates
serotonin release, which can lead to appetite suppression.

Laboratory results that point toward possible weight loss don’t mean much until they are confirmed by proper human trials. And there have been some: fifteen years ago, a randomized trial in which 135 subjects took either a placebo or a Garcinia extract equivalent to 1,500 mg of HCA a day for three months showed no difference in weight loss between the groups (Heymsfield et al. 1998). A more recent trial (Kim et al. 2011) involving eighty-six
overweight people taking either two grams of extract or placebo for ten weeks echoed those results. In between these two major studies there were several
others (Onakpoya et al. 2011), some of which did show a weight loss of about one kilogram over a couple of months, but these either had few subjects or
lacked a control group.

Basically, it is clear that any weight loss attributable to Garcinia cambogia is virtually insignificant. But there may be something else attributable to the supplement, namely kidney problems (Li and Bordelon 2011). Although such cases are rare, even one is too many when the chance of a benefit is so small. So Garcinia cambogia, like green coffee bean extract, can hardly be called a miracle. But it seems Dr. Oz puts his facts on a diet when it comes to fattening up his television ratings.

Bigfoot Lookalikes: Tracking Hairy Man-Beasts
Although Sasquatch—after 1958 generally called Bigfoot—is most associated with the Pacific Northwest (a region loosely ranging from northern California to Oregon, Washington, British Columbia, and southern Alaska),
sightings are reported throughout the United States and Canada (Bord and Bord 2006). Many of these turn out to be hoaxes—notably Roger Patterson’s filming
of “Bigsuit” in 1967. (He used a gorilla suit purchased from costume-seller Phil Morris, converted it to Bigfoot by modifying the face and adding pendulous
breasts, and enlisted a man named Bob Heironimus to wear the suit [Long 2004; Nickell 2011, 68–73].) Many other Bigfoot sightings are no doubt
misperceptions resulting from expectation and excitement (Nickell 2011, 94–96).

But misperceptions of what? Over my years as a skeptical cryptozoologist, I have looked for real, natural lookalikes to explain various reported
“monsters.” For example, the round-faced, gliding, “Flatwoods Monster” of 1952 with its “terrible claws” seemed almost certainly to be a barn owl, just as
“Mothman” of 1966, with its large, shining red eyes, could be identified as a barred owl (Nickell 2011, 159–66, 175–81). Again the legendary “giant eel” of
Lake Crescent, Newfoundland, was probably inspired by otters swimming in a line (which are also known to be mistaken for some lake and sea monsters) (Nickell
2007; 2012a). Given these and other examples of monster lookalikes—I think of my work in this regard as that of a paranatural naturalist—we may ask: Are
there animals that might be mistaken for Bigfoot?

A Candidate

As it happens, there is one especially good candidate for many sightings of Bigfoot—even for some of the non-hoaxed imprints of his big feet. The earliest
record of potential Sasquatch footprints comes from an explorer named David Thompson, who, while crossing the Rockies at what is today Jasper, Alberta, came
upon a strange track in the snow. Measuring eight by fourteen inches, it had four toes with short claw marks, a deeply impressed ball of the foot, and an
indistinct heel imprint (Green 1978, 35–37; Hunter with Dahinden 1993, 16–17).

The claws do not suggest the legendary man-beast. Indeed, John Napier, a primate expert at the Smithsonian Institution and author of Bigfoot
(1973, 74), thought the print could well have been a bear’s (whose small inner toe might not have left a mark). Thompson himself thought it likely “the
track of a large old grizzled bear” (quoted in Hunter with Dahinden 1993, 17).

But what about sightings? It is not uncommon for eyewitnesses to state that at first impression their Bigfoot looked like a bear, thus proving the
similarity (see Figure 1). Yet many go on to rule out that identification, based on some aspect of appearance or behavior. However, as considerable
evidence in fact shows, many Sasquatch/Bigfoot encounters may well have been of bears. Mistaken identifications could be due to poor viewing conditions,
such as the creature being seen only briefly, or from a distance, in shadow or at nighttime, through foliage, or the like—especially while the observer is,
naturally, excited. Non-expert observation is also problematic, as is expectancy, the tendency of people who are expecting to see a certain thing to be
misled by something resembling it (Nickell 2012b, 347).

Comparisons

A published compilation of 1,002 American and Canadian Sasquatch/Bigfoot reports from 1818 to November 1980 is instructive (Bord and Bord 2006, 215–310).
Analysis of the cases (which are presented as brief abstracts) reveals that not only general anatomy but also color variations, footprints, behavior, and
geographical distribution of Sasquatch/Bigfoot are often quite similar to those of bears.

Anatomy. Bigfoot is typically described as a large, hairy man-beast. It is said to walk on two legs, to have long arms, large shoulders, and, often, no neck.
Although it is frequently likened to an ape, it has been reported many times to have claws (Bord and Bord 2006, 215–310; Wright 1962).

Like Bigfoot, bears can appear as large, big-shouldered, hairy, manlike beasts. Their anatomy is consistent with bipedal standing (hence the long “arms”)
though much less so with walking—and, according to the Smithsonian expert John Napier (1973, 62), “At a distance a bear might be mistaken for a man when
standing still. . . .” Consider this incident of a creature on the porch of a ranch house in western Washington State in 1933 (related at second hand,
years later, by the daughter of the woman who observed it):

It was moonlight outside, and at first she thought it was a bear on the porch, but this animal was standing on its back legs and was so large it was
bending over to look in the window. She said it appeared over 6 feet tall and it didn’t look like a bear at all in the moonlight. She said in a few minutes
it walked over [no doubt only a couple of steps] and jumped off the porch and started around the house. She went into the kitchen so she could get a good
look and she said it looked just like an ape. (Lund 1969)

Ape, Bigfoot, bear? You decide, but remember, this was bear territory. And a standing black bear can be up to seven feet tall (Yosemite 2013).

During several days in April 2013 in New York State’s massive Adirondack Park, where there are scattered Bigfoot encounters, I talked to hunters and others
who had witnessed standing bears. One man, at whose remote home I boarded for an evening, told me of once standing face to face with a black bear: it was
on its hind legs looking in the window at him!

Figure 1. Split-image illustration compares a standing bear (left) to the creature it is often mistaken for, Sasquatch/Bigfoot (right). Drawing by Joe Nickell.

The often-reported action of Bigfoot running on all fours is entirely consistent with a bear, as in a case of late April 1897. Near Sailor, Indiana, two
farmers witnessed a man-sized beast covered with hair walking on its hind legs, but it “afterwards dropped on its hands and disappeared with rabbit-like
bounds” (Bord and Bord 2006, 23, 221). No doubt the “hands” were really paws. Again, in 1970, a Manitoba man saw a seven-foot, dark Bigfoot “stand up” by
the roadside at night. And in 1972, at an Iowa state park, a seven-foot brown Bigfoot was shot at and “ran away on all fours” (Bord and Bord 2006, 260,
264; see also Green 1978, 246, 178).

One Bigfoot report was inspired when, in April 1978, a Maryland farmer saw a “bear” walking upright across a field, followed by two “smaller creatures on
all fours” (Bord and Bord 2006, 300). This is consistent with a mother bear in alert mode with cubs. Bears often stand on their hind legs to look and to
sniff the air, and black bears usually have a litter of two, born in January or early February (“Black Bears” 2013; Whitaker 1996, 703). And so,
apparently, a stated bear encounter was converted by enthusiasts into a sighting of “Bigfoot.” Some months earlier, in the fall of 1977, two South
Dakota boys (ages twelve and nine) saw only “long hairy legs” in the bushes (Bord and Bord 2006, 294), and that likely bear became another “Bigfoot.”
Reports of Bigfoot’s gait as “peculiar” or the like (Bord and Bord 2006, 290, 291) could be consistent with the awkward gait of an upright bear.

Coloration. Like Sasquatch/Bigfoot in eyewitness descriptions, black bears can be not only black but also dark brown, brown, cinnamon, blond, off-white, and white (Herrero 2002, 131–34). The same is true of the grizzly (brown) bear (Ursus arctos), which—just like a Bigfoot reported in northern California (Bord and Bord 2006, 246)—often has dark-brown, silver-tipped hair (Herrero 2002, 133; Whitaker 1996, 706).

“To confuse the novice further,” states a noted authority, “there are also variations in color patterning on the coats of each species.” This is due to
genetic factors and to molting. With most bears, a lightening in the color of the coat occurs between molts (Herrero 2002, 133, 134).

In nighttime sightings, color may go unreported, but the animal’s eye-shine is frequently described. There are numerous reports of “gleaming eyes,” “large
glowing eyes,” “green shining eyes,” “glowing amber eyes,” and the like, including occasionally “red eyes” (Bord and Bord 2006, 259–300). Generally, bear
eyeshine is reported as ranging “from yellow to yellowish orange, though some people report seeing red or green” (“Backpacker” 2013). The North American
Bear Center mentions a black bear with mismatched eyes, due to an injured eye that “shines red rather than yellow” (“Mating” 2013).

Footprints. Bigfoot has been reported to leave tracks that had two to six toes and ranged in length up to twenty or more inches (Bord and Bord 2006, 215–310). Of
course, many large tracks—like the fourteen-inch ones of Patterson’s “Bigsuit” creature—are hoaxed (Nickell 2011, 66–75; Daegling 2004, 157–87).

As to bears, Napier (1973, 150–51) observes that “The hindfoot of the bear is remarkably human-like,” and that near the end of summer when worn down, the
claws “may not show up at all” in tracks. Also at moderate speeds the hindfoot and forefoot prints may superimpose to “give the appearance of a single
track made by a bipedal creature” (Napier 1973, 151).

Bears’ five-toed hindprints range from about seven to nine inches long for the black bear to approximately ten to twelve for the grizzly (brown) bear,
although some can be more than sixteen inches, and “In soft mud, tracks may be larger” (Whitaker 1996, 704, 707). As bear expert Herrero cautions: “I don’t
give measurements because track size varies so much depending on substratum. If a track seems very large, look at other track characteristics.”

A bear’s smallest toe (the innermost one, as opposed to that of humans) “may fail to register” (Whitaker 1996, 704), no doubt explaining many four-toed
“Bigfoot” tracks. As well, “In mud a black bear’s toe separation may not show” (Herrero 2002, 178), possibly giving rise to the illusion that—depending on
just where there might be a slight separation—a “four”-toed track might appear to have been made with only two very broad toes, or even perhaps three.
Rare, six-toed tracks (unlikely for either Bigfoot or bear) were found in Iowa in 1980 after a witness saw a “strange creature on all fours eating [a]
carcass” (Bord and Bord 2006, 307). Except for the tracks (which were probably due to some anomaly like the overlapping of hind and fore feet), the
creature is consistent with a bear.

None of the tracks mentioned in the 1,002 abstracts under study, representing reports from 1818–1980 (Bord and Bord 2006, 215–310), were reported to have
dermal ridges (the friction ridges of, for instance, fingerprints). These are common to both apes and man, as well as, presumably, to an ape-man. (Although
in 1982, a U.S. Forest Service patrolman discovered such prints in Oregon’s Blue Mountains, in the Mill Creek Watershed, noted Bigfoot skeptic Michael
Dennett [1989] turned up evidence that those tracks were part of an elaborate hoax.)

Behavior. Bigfoot’s reported actions are quite varied. Aside from such outlandish reports as a Sasquatch treating an Indian for snakebite or kidnapping people,
numerous acts attributed to the fabled creature again have a ready explanation: bears. For example, Bigfoot often eats berries, fruit, grubs, vegetation
such as corn, fish, animal carcasses, and human rubbish. It may be seen day or night. It often visits campsites, like one raided by a “cinnamon-colored
Bigfoot” in Idaho in the summer of 1968 that left tooth marks on food containers. It also peers into homes and vehicles, and sometimes shows aggression
(Bord and Bord 2006, 215–310; Merrick 1933).

Similarly, bears share these and other aspects of behavior with Bigfoot. For example, bears feed on most nonpoisonous types of berries (which they eat by
moving their mouths along branches). As well, they tear open rotten logs for grubs, and they feed on fruit, corn, and other vegetation, fish, live or dead
mammals, and human rubbish (Herrero 2002, 183, 149–71, 47; Whitaker 1996, 708). Bears likewise are encountered both day and night (Herrero 2002, 170;
Whitaker 1996, 703–709). They visit homes, vehicles, and campsites looking for food, and they sometimes show aggression (Herrero 2002, 83–87; Whitaker
1996, 703–709). These and other parallels with Bigfoot are striking.

Then there are Bigfoot’s vocalizations—many of which could well be those of bears. For example, Bigfoot often growls (Bord and Bord 2006, 237, 256, 268).
One “snarled and hissed” at witnesses (268), and another “chattered its teeth” (255), while others “screamed” when shot at (247, 252). Similarly, bears
growl and snort, and they make loud huffing or puffing noises (Herrero 2002, 15, 16, 115). Their most common defensive display is “blowing with clacking
teeth”; as well, they may bawl (from pain), moan (in fear), bellow (in combat), and make a deep-throated, pulsing noise (when seriously threatened). Cubs
“readily scream in distress” (Rogers 1992, 3–4).

Distribution. The habitat of Bigfoot in the 1,002 abstracts we are studying—from 1818 to November 1980—is extensive. It includes most continental American states
(excepting Delaware, Rhode Island, and South Carolina) and eight of thirteen Canadian provinces. The greatest number of sightings were in Washington State
(110), followed by California (104), British Columbia (90), and Oregon (77)—that is, in the Pacific Northwest, the traditional domain of Sasquatch—followed
by Pennsylvania and Florida (42 each). It is reportedly seen in woods and fields, along streams, and so on (Bord and Bord 2006, 215–310; Nickell 2011,
225–29).

The distribution of black bears is strikingly similar, as shown by population maps provided by the Audubon Society (Whitaker 1996, 704) and elsewhere
(Herrero 2002, 80). America’s grizzly population was once quite extensive and included the western states (Herrero 2002, 4); however, grizzlies are now
relegated mostly to Yellowstone Park (chiefly in northwest Wyoming) and its vicinity, and to portions of the northernmost areas of Washington, Idaho, and
Montana, as well as most of British Columbia, Northwest Territories, the Yukon Territory, and Alaska (Herrero 2002, 4; Whitaker 1996, 708). Like Bigfoot,
bears are also seen in woods and fields, along streams, and so on.

Assessment

Again and again come eyewitness reports of Bigfoot that sound like misreports of bears. In Washington State, for instance, in 1948, a man saw a “thin,
black-haired, 6-ft Bigfoot squatting on [a] lake shore.” In September 1964 a Pennsylvania man spotted “Bigfoot peering in a window of his mother’s home at
dusk,” while a man sleeping in his car in northwest California was “woken by Bigfoot shaking it.” In July 1966, a British Columbia woman saw “head and
shoulders of Bigfoot above 6-ft raspberry bushes at night.” In June 1976, three Floridians saw a creature “6 ft tall, with long black hair, standing in a
clump of pine trees.” In August 1980, two Pennsylvania men “Driving down a mountain, saw husky black hairy creature standing in road.” And so on, and on
(Bord and Bord 2006, 230, 241, 244, 287, 309).

Let it be understood that I am in no way saying that all Sasquatch/Bigfoot sightings involve bears. After all, some are surely other misidentifications or
hoaxes involving people in furry suits (Nickell 2011, 72–73). As well, Venezuela’s “de Loys’s Ape” of the 1920s was identified as a large spider monkey, and
two specimens of China’s legendary Yeren, shot in 1980, proved to be the endangered golden monkey (Nickell 2011, 85–87, 96).

I am merely pointing out what should now be obvious: that many of the best non-hoax encounters can be explained as misperceptions of bears. Of creatures
in North America, standing bears are the best lookalike for the bipedal, hairy man-beasts called Bigfoot. Bears also frequently behave like Bigfoot, and
they are found in regions common to the legendary creature—no certain trace of which, in the fossil record or otherwise, has ever been discovered.

The Mysteries of Leonardo
Leonardo da Vinci not only epitomizes genius and creativity; he is also one of the most sought-after sources of mysteries, both real and invented. Probably the most famous example is The Da Vinci Code, with its many legends linked to the master, but there are many other examples, as we shall see.

Leonardo the Heretic

According to some authors of historical fiction, Leonardo was a heretic. Evidence of this is supposedly hidden in his painting The Last Supper, where it is said that the Master himself expressed his belief that Christ was married to Mary Magdalene. The woman is to be identified with the apostle who shows feminine traits and sits to the right of Jesus. Further evidence of the genius’s heresy would be the absence from the table of the chalice of wine, symbol of the Eucharist, and the presence of a disembodied hand holding a menacing knife.

What are the facts? In reality, for The Last Supper, the magnificent mural painting adorning the refectory of the Convent of Santa Maria delle Grazie in Milan, Italy, Leonardo took his inspiration from the Gospel of John, which mentions neither the chalice of wine nor the Eucharist. In addition, the hand with the knife belongs to Peter (as demonstrated by Leonardo’s preparatory drawings preserved at the Windsor Royal Library) and refers to an episode in the Gospel in which Peter cuts off the ear of the servant of the High Priest. Finally, the delicate appearance of John belongs to the iconography of the time, in which the youngest apostle, Jesus’s favorite, was always represented as a teenager with long hair and gentle features.

Leonardo and his Virgin of the Rocks painting.

Esoteric Symbols Everywhere

Although perhaps not a heretic, and certainly not a devout Catholic, Leonardo may well have been in contact with ideas that, in his time, were considered heretical, such as neo-Platonic and Gnostic ones. The Gnostics, for example, believed in Sophia, the mother goddess who created the world, whom Leonardo may have wanted to represent in Leda, a lost painting. In some surviving copies of the painting, the great mother is shown with a “cosmic egg,” from which other eggs give rise to humans. The Gnostics also believed that there were two forms of Jesus, one carnal, who died on the cross, and one that was only spirit. Another famous painting by Leonardo, the Virgin of the Rocks, shows two children similar to one another: perhaps the one commonly identified as John the Baptist was actually Jesus’s double, his identity disguised to make the painting acceptable to its religious clients. Unlikely, but no one can tell for sure today.

A further question concerns Leonardo’s propensity for portraying St. John the Baptist. Some have wondered whether this attachment to the saint concealed something else, perhaps adherence to the cult of St. John, the same one held by the Knights of the Order of Malta. These, however, are conjectures that art historians are still debating.

Was Leonardo a Member of the Priory of Sion?

The entire Da Vinci Code by Dan Brown (note the absurdity of calling the Master not by his name, Leonardo, but by the name of the town where he was born, Vinci, apparently believing it to be his surname) revolves around the mysterious and ancient sect of the “Priory of Sion,” a keeper of secrets supposedly founded by the ever-present order of the Templars. It is said that among its members were luminaries such as Isaac Newton, Victor Hugo, and, of course,
Leonardo. In reality, the sect was invented out of whole cloth by Frenchman Pierre Plantard in 1956. Plantard took the name “Priory of Sion” from a hill
above Annemasse, where he planned to install a retreat house. As for the list of the “initiated,” Plantard copied it from the list of alleged “Imperators,”
that is, the supreme heads, of the Ancient and Mystical Order Rosae Crucis, founded in 1915 in the United States by another creator of fantasies, Harvey
Spencer Lewis, with whom Plantard was in contact. Anti-Semitic, anti-Masonic, and a member of the French right, Plantard orchestrated this plot to create a historical lineage proving his descent from the Merovingians, as heir to a dynasty lost in the mists of history. This would have given him wide room to maneuver, a huge advantage over the Grand Masters of competing orders, and perhaps even a path to a leading political role. It didn’t work.

Leonardo Author of the Shroud of Turin?

According to Lynn Picknett and Clive Prince, the Shroud of Turin was the work of Leonardo. Writer Victoria Hazel interprets a passage in the Codex Atlanticus as a “confession” on the part of its author: “When I painted the Lord God as a child, you put me in prison; now, if I portray him grown, you will do worse to me.” Not only that: according to Lillian Schwartz, the face on the Shroud matches Leonardo’s self-portrait and would therefore be an experiment in pre-photographic techniques devised by the genius from Vinci.

In fact, the historical sources (the first written reference to the Shroud is a memorandum of 1389), as well as scientific radiocarbon dating, show that the Shroud was produced sometime between 1260 and 1390. It is therefore likely a work of art, but it is very unlikely to have been painted by Leonardo, since it had already been around for at least a century before he was born.

Mirror Writing

Leonardo practiced an unusual mirror writing; that is, he wrote from right to left and often started on the last sheet of a notebook, working his way to the first. This peculiarity has often been interpreted as an attempt by Leonardo to keep his work secret and incomprehensible to most people. Those who considered him a heretic even came to call it the “writing of the devil.” In fact, it was simply his spontaneous way of writing. Neurologists have shown that it was a habit acquired in childhood, natural for left-handers who, like Leonardo, were never “corrected.” He also wrote in “normal” script, but with less ease and mainly on formal occasions, such as on some topographic maps. Not surprisingly, Leonardo dictated his letters of introduction to others.

Who Is Actually the Mona Lisa?

The identity of the woman depicted in the most famous portrait in the world has long been debated. Some authors have suggested, citing evidence that is not always credible, that the woman was a Sforza, perhaps Catherine, or her mother, Caterina Buti del Vacca, or even her half-sister Bianca. In addition, there are those who think the Mona Lisa is nothing less than a self-portrait of Leonardo, as supposedly shown by computer superimposition of the two faces. In fact, it is quite certain that the woman portrayed is Lisa Gherardini, that is, “Monna” Lisa (“Monna” being short for “Madonna,” or, as we would say today, “Lady”), wife of Francesco del Giocondo (hence “Gioconda,” as the painting is also called). Rather more difficult to establish exactly, however, is the location in the background. The bridge on the right is reminiscent of one in Buriano, near Arezzo, but it is more likely that this is an idealized landscape dreamed up by Leonardo.

The Lost Remains of Leonardo

Finally, one last mystery: What happened to Leonardo’s remains? His tomb no longer exists, and no one knows where his bones now lie. At his death he was buried in the Church of Saint-Florentin in Amboise, France. But in 1802, after the erosion of time and revolutionary vandalism, the ruins of the chapel were demolished, and the gravestones and tombstones were used to restore the castle. Children used to play with the abandoned bones, so a gardener picked them up and buried them. In 1863, the poet Arsène Houssaye discovered an intact skeleton, with a bent arm and a very broad skull. Not far from that spot he also unearthed fragments of a half-effaced slab bearing the legible letters EO DUS VINC. Is it perhaps Latin for Leonardus Vincius? These bones ended up in the castle of Amboise, where they remain and where it is stated that they “supposedly” belong to Leonardo.

But, like many other questions surrounding the incredible life of the Renaissance marvel, this one will probably remain forever unanswered as well.