Eyewitness (Un)reliability and the Filedrawer Effect

August 25, 2015

Dennis Plucknett and his two sons, along with a friend, went hunting at a camp in northeast Florida in 2004. It was early morning, and the group had gotten separated. His fourteen-year-old son Alex was sitting in a ditch about 220 meters away from his father when someone yelled, "Hog!" The elder Plucknett grabbed his .308-caliber Ruger rifle with scope, steadied his aim, and fired one round at a boar in the distance. Within minutes, Alex was dead of a massive head wound, killed by his father's shot.

As the first line of the Associated Press report read, "A father mistook his teenage son's black cap for a wild boar during a weekend hunting trip and shot the boy to death with his rifle." Hunting accidents are fairly common, and, not being a hunter, I at first didn't pay much attention to the story. A tragedy, yes, but nothing having to do with me, or anything I could relate to.

Or was there? As I read on, I realized that this case is an excellent example of the fallibility of eyewitness testimony--the backbone of much forteana. From UFOs to ghosts, Bigfoot to black cats, eyewitnesses claim to see many strange things, and the reliability of their reports is important.

It was an accident, a terrible mistake, but there is more to it than that. The elder Plucknett mistook his son's black toboggan cap for a boar. This longtime hunter could not distinguish a piece of cloth less than a square foot in size from a living, full-size animal. Dennis Plucknett had years of experience, good vision, and a rifle scope. He believed he had a boar in his sights instead of his son's head. Because someone called out "Hog," Plucknett's mind was searching for a hog, and his expectations clearly guided his perceptions. A small black cap became a hog in his mind.

A similar process occurs in lakes and woods where mysterious creatures are said to lurk. Waves, large fish, and logs are thus turned into lake monsters; bears, elk, or other animals are turned into Bigfoot. The process is well documented, but many cryptozoologists insist that eyewitnesses are more reliable than they actually are. Often monster researchers and writers will mention that an eyewitness is an upstanding member of the community (or a reverend, or a police officer) as if that somehow guards against misidentifications. Would such cryptozoologists argue that Mr. Plucknett is not an upstanding citizen, or more likely to be mistaken than others?

We also see another common element of eyewitness identifications: overconfidence. Plucknett was absolutely certain about his identification; we can assume that there was no doubt in his mind that he had a boar in his sights. If he had been uncertain about what he thought he saw, he would not have pulled the trigger. Think of what would have happened had he not shot his son, if he had missed or the gun had misfired. He likely would have been absolutely sure he had seen a boar, because it would not have been proven to him otherwise. Now transfer this situation to a Bigfoot sighting in a wooded area: A hiker sees something large and dark move in the woods. He turns and catches a glimpse of a large, bipedal creature disappearing into the brush. He is absolutely sure of what he saw, and reports it with total conviction and confidence to a researcher. Yet without a body or photograph to compare it to, we still don't know what he saw. All we have is a sincere, believable eyewitness who says he is certain about his description. Yet eyewitnesses can be absolutely honest and positive--and dead wrong.

A related but often overlooked issue in eyewitness reporting is what's called the "filedrawer effect." A statistical term, it refers to the tendency of researchers to not report or publish experiments that do not support the expected outcome. So, for example, let's suppose one hundred studies of ESP are conducted. Of those, 75 show no results beyond what would be expected by chance; 20 show some small effect; and 5 suggest strong evidence for ESP. It is likely that many, if not most, of the studies that found no effect will not be reported or published because the researcher didn't find what they were looking for. Studies showing no effect are not terribly interesting and may be seen as a wasted effort.

But this is a problematic research bias, because "no effect" is a legitimate outcome. In the example above, if even a quarter of the "no effect" studies are published, along with the 25 that showed some effect, this gives later researchers the impression that over half of the published studies showed some effect, which badly misrepresents the actual results. It's a variation of the fallacy of remembering the hits and ignoring the misses.
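The arithmetic of that hypothetical is easy to check. Here is a minimal sketch (using the illustrative numbers from the example above, not real data) showing how selective publication inflates the apparent proportion of positive results:

```python
# File-drawer effect, toy numbers: 100 hypothetical ESP studies,
# of which 75 found no effect and 25 found some effect.
null_studies = 75
effect_studies = 25

# Suppose all effect studies are published, but only a quarter
# of the null results ever leave the file drawer.
published_null = round(null_studies * 0.25)   # 19 of 75
published_effect = effect_studies             # all 25

published_total = published_null + published_effect
apparent_share = published_effect / published_total
true_share = effect_studies / (null_studies + effect_studies)

print(f"Published studies: {published_total}")
print(f"Apparent share showing an effect: {apparent_share:.0%}")
print(f"True share showing an effect: {true_share:.0%}")
```

Only 25 percent of all the studies found any effect, yet the published record makes it look like a majority did, which is exactly the distortion described above.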

Douglas Stokes pointed out this problem in parapsychological research (see "The Shrinking Filedrawer", Skeptical Inquirer, May/June 2001), but I believe the concept can be very useful in understanding eyewitness reporting as well. Sightings that seem mysterious at first but are soon identified are only rarely reported. If a man, out for a stroll near sunset, glimpses a bizarre, two-headed creature out on a pond, he will likely consider it a mystery and perhaps report it to a cryptozoologist friend. But had he waited a few seconds longer and seen the bizarre shape untangle into two swans that then flew away, he would likely be surprised but go on his way and think little more of it.

With a few notable exceptions, researchers and investigators usually don't hear about or record examples where eyewitnesses were confident in their reports yet proven wrong. As a result, the unexplained or mysterious reports and sightings are selectively chosen, making them seem more common and significant than they really are. If one person sees something strange but is unable to identify it (and reports it), it's a mystery. Yet hundreds or thousands of other people in similar circumstances (but with different areas of expertise) might have seen the same thing and recognized it immediately--none of whom would have reported the sighting, nor considered it mysterious.

Dennis Plucknett's case is an example of a tragically verified misidentification. But it belongs to only a small fraction of such cases, notable because the error was proven and came to light. Logically and statistically, it stands to reason that for every case that makes news or gets written up, many more similar, everyday misidentifications go unreported. Witnesses may feel silly for admitting they were fooled, or believe their sightings are not worth reporting since nothing unusual was discovered. After all, if I realize I was fooled by a floating log or a large deer, why would I bother to mention it to cryptozoologists? And even if I did, would they be interested and recognize the value of my report, or ask why I'm wasting their time with a solved, "non-report"?

This fundamental--but rarely recognized--bias in eyewitness (and investigator) reporting creates a dramatic underestimation of how often (and how easily) we misperceive or misunderstand what we see. But to accurately understand eyewitness reports, researchers can't just pick and choose. They must take all of them into account--focusing not just on the unidentified but also the misidentified. Only with this scientific and statistical approach will real understanding emerge.

The fact that eyewitness descriptions are often unreliable is well-documented and explained by psychological processes. However, recent research also suggests a brain-chemistry explanation for why this may be. Dr. Amy Arnsten of Yale Medical School has discovered that when people are confronted by fear or stress (as often accompanies anomaly sightings), the brain responds by releasing an enzyme called protein kinase C (PKC). PKC helps the brain respond quickly and instinctually, but it also impairs short-term memory and the ability to concentrate.

In other words, eyewitnesses have more difficulty remembering details, sequences, and events--actually recording the memories--under these conditions. The effect is significant, and is not necessarily triggered only by terrifying experiences: "This kind of memory, the ability to concentrate, seems to be impaired when exposed to mild stresses," Arnsten reported. Arnsten's findings were published in the October 28, 2004, issue of the journal Science. Many sightings of mysterious creatures or events are made under poor conditions: in low light or darkness, in brief glimpses, at long distances, and so forth. These conditions are exactly the circumstances under which the accuracy of reports is at its worst. Coupled with the perception- and memory-distorting influences of PKC, it's little wonder such reports are dubious.

If eyewitnesses can be mistaken in life-and-death cases, they are likely no more accurate in monster sightings and descriptions. This does not mean that all eyewitness accounts of cryptids are mistakes, though many probably are, as most reputable cryptozoologists agree. But this point should be driven home, and the psychological literature is clear and uncontested: We are all subject to misunderstandings, misperceptions, and mistakes--and we are all overconfident in those beliefs and perceptions. This seemingly mundane case can serve as an important lesson for investigators of all types and cryptozoologists in particular. Can you always trust your eyes and perceptions? How sure are you about what you see? Sure enough to take a life if you are wrong?


Benjamin Radford, M.Ed., is a scientific paranormal investigator, a research fellow at the Committee for Skeptical Inquiry, deputy editor of the Skeptical Inquirer, and author, co-author, contributor, or editor of twenty books and over a thousand articles on skepticism, critical thinking, and science literacy. His newest book is Bad Clowns; his next, Investigating Ghosts: The Scientific Search for Spirits, will be out in Fall 2017.