In my previous post, I looked at the first three sins of memory using Harvard University psychology professor Daniel Schacter’s book The Seven Sins of Memory as a guide. In this post I’ll look at the remaining four sins.

Misattribution: attributing a memory to an incorrect source

“Most people, probably, are in doubt about certain matters ascribed to their past. They may have seen them, may have said them, done them, or they may only have dreamed or imagined they did so.” –William James, The Principles of Psychology, Volume 1 (1890). Chapter X. The Consciousness of Self

In 1975 Australian psychologist Donald M. Thomson went on television to discuss the psychology of eyewitness testimony. The day after the broadcast, Thomson was picked up by local police, who told him that the previous evening a woman who was raped and left unconscious in her apartment had named Thomson as her attacker. Fortunately, Thomson had a watertight alibi, having been on television at the time of the attack and in the presence of the assistant commissioner of police. It turned out the victim had been watching Thomson on television just prior to being attacked, and had confused his face with that of her attacker.

Misattribution is attributing an event to something with which it really has no connection or association. Information is retained in memory, but the source of the memory is forgotten. This is also what happens when you mix up details from two separate events and combine them into one cohesive memory. According to some theories of memory, misattribution errors are a result of failed memory binding – the binding together of individual parts of a memory into one cohesive unit. Your recollection of an event wasn’t appropriately tied to your recollection of the source of the event.

While most misattributions are not as dramatic as Donald Thomson’s, the consequences can still be severe. In 1998, Gary Wells at Iowa State University and his colleagues identified 40 US miscarriages of justice that relied on eyewitness testimony. Many of the falsely convicted served years in prison, and some even faced death sentences. Some examples of misattribution that have been studied in the lab include:

Misattributing the source of memories. People regularly say they read something in the newspaper or online, when actually a friend told them about it or they read it somewhere other than where they claim. In one study, participants with “normal” memories regularly made the mistake of thinking they had acquired a trivial fact from a newspaper, when actually the experimenters had supplied it. In another experiment, published in Psychological Science in 2010, researchers found that people who had watched a video of someone else doing a simple action – shaking a bottle or shuffling a deck of cards, for example – often remembered doing the action themselves two weeks later. Sometimes we attribute an idea or memory to ourselves when it actually belongs to someone else; this is a common source of unintentional plagiarism. In one early study, people were asked to generate examples of particular categories of items, like species of birds. Without realizing it, people plagiarized each other about 4% of the time. Later studies found rates as high as 27% using different types of tasks.

Misattributing a face in the wrong context. This is what happened to Donald Thomson. Studies have shown that memories can become blended together, so that faces and circumstances are merged.

Misattributing an imagined event. In one experiment, participants were either asked to imagine performing an action, such as breaking a toothpick, or actually asked to perform it. Sometime later they went through the same process again. Later still, they were asked whether they had performed the action or just imagined it. Those who imagined the actions more frequently the second time were more likely to think they’d actually performed the actions the first time. The experiment demonstrated how easily our memory can transform fantasy into reality. Even something as simple as imagining a childhood event can convince a person that it really occurred.

Suggestibility: implanted memory from others

Suggestibility results from outside information being absorbed and incorporated into the memory of an event. These false memories can be implanted as a result of leading questions, comments, or suggestions when a person is trying to recall a past experience. For example, you may remember wearing a black skirt to a party a month ago, but if someone insists that you were wearing a red skirt, it may alter your memory. According to Daniel Schacter, the following are six types of questions that can elicit a false answer or inaccurate memory:

1. Assumptive Question. This bases the question on an assumption. “How much will the price of gas go down next month?” assumes that the price will go down.
2. Linked Statement. This links two different items together and does not provide the same information for both items. Asking “Would you prefer to live in Clinton or Terre Haute, where the crime rate is high?” doesn’t mention anything about the crime rate in Clinton. You can also put something else of significance within the question (note the social coercion in this statement): “What do you think about Larry Jackson? Many people are opposed to him.”
3. Implication Question. Asking questions that get the other person to think of consequences or implications of current or past events links the past with the future in an inescapable chain of cause-and-effect. “If you stay out late tonight, how will you remain awake at work tomorrow morning?”
4. Asking for Agreement. This is typically the closed question that requires either a “yes” or “no” answer, making it easier for the person being asked to say “yes” than “no”. “Do you agree that we need to save the whales?”
5. Tag Question. These usually involve short phrases that end in a short question that is often negative. Because the questions are tagged onto the end of statements, they effectively make a command look like a question. “You are coming to the very important LSS meeting, aren’t you?” “That’s a good thing to do, isn’t it?”
6. Coercive Question. The context or tone of the question results in either an implicit or explicit coercion. In the question “How can you say that you will not be there?” the questioner implies negative consequences for not attending. In the question “How can you say you won’t come?” the questioner implies that there is no good reason for you not coming.

The tendency of elderly adults to rely on general familiarity in storing memories leaves them particularly vulnerable to false memories. This makes them fertile ground for scammers and other con artists. For example, scammers often call elderly individuals, saying things like “The check you sent us doesn’t quite cover your balance, and we’ll need you to send in another.” Doubting their memories, elderly victims often comply.

As noted in my previous post, critics of repressed memory therapy maintain that many therapists are not helping patients recover repressed memories, but are suggesting and planting false memories of alien abduction, sexual abuse, and satanic rituals. Studies of suggestibility in children indicated pronounced age-related differences, with preschool children being particularly susceptible to misleading suggestions. The early studies on which this conclusion was based were criticized on several grounds (including unrealistic scenarios and a truncated age range), but newer studies that have addressed these criticisms have largely confirmed the earlier conclusions: preschool children are disproportionately vulnerable to a variety of suggestive influences. These studies also appear to show that while young children are often accurate reporters, suggestive questioning not only distorts children’s factual recall, but strongly influences their interpretation of events.

The day may come when it’s possible to physically implant memories in the human brain. Using a technique called optogenetics, Nobel laureate and neuroscientist Susumu Tonegawa and a team of MIT researchers earlier this year implanted a false memory into a mouse’s brain. To do that, they manipulated individual cells in the mouse hippocampus, the part of the brain responsible for memory formation, to make them responsive to light.
Several mice were placed in a chamber glowing with reddish light and allowed to explore. The next day, they were placed in a second chamber and given electric shocks on their feet to encode a fear response. Scientists also shone light into their brains, activating memories of the first chamber. When the mice were placed back in the first chamber, they froze, expecting shocks that never came.

Bias: distortion based upon knowledge, beliefs, and perspective

Memory bias is a cognitive bias that either enhances or impairs the recall of a memory (either the chances that the memory will be recalled at all, the amount of time it takes to recall it, or both), or that alters the content of a reported memory. If different people observe the same object or event, they will describe it from different perspectives. In an example used by Schacter, he notes that four people might describe the movie The Wizard of Oz in different ways based on their interests:

1. The young child will tell the story, listing the sequence of events (not necessarily in the right order).
2. The emotional child will explain that the movie was very scary with witches and wizards and flying monkeys.
3. The adolescent will explain the special effects in the movie.
4. The intellectual will identify the themes of the movie.

A study by Dr. Carey Morewedge from Harvard University and colleagues does an excellent job of demonstrating how memory bias works. In the study, 62 subway passengers were randomly allocated to one of three groups. Each was asked to describe a time in the past when they had missed the train, but in subtly different ways:

Free recallers were asked to describe any instance.

Biased recallers were asked to describe the worst instance.

Varied recallers were asked to describe any three instances.

Participants then indicated how happy or unhappy they were on those occasion(s). The results showed that people in both the “free recall” and “biased recall” groups remembered equally depressing times when they had last missed the train. This suggests that when asked to recall a single past instance of an event, people naturally recall the worst instance, whether they’re trying to or not. But participants in the “varied recall” group were more positive, suggesting that of the three events they recalled, at least one was positive. Recalling more than one event, then, makes it more likely that at least one of them is relatively positive.

After being primed with memories of past experiences of missing the train, participants were asked to rate how unhappy they would be if they were to miss the train today, testing how memory bias affected their predictions of their future feelings. The free recallers made the worst predictions, significantly more pessimistic than those of the varied recallers and the biased recallers. Although the free recallers and biased recallers were remembering past experiences that were equally bad, the biased recallers predicted less future unhappiness. The researchers concluded that when people are explicitly asked to recall the worst event, they’re aware that it’s the worst event. When people are allowed to recall any event they like, they still recall the worst event, but don’t realize they’ve done so. Because of this, free recallers make much worse predictions about how they will experience the same event in the future.

Two subsequent studies by the same authors replicated these findings. In the first, people demonstrated the same memory bias when predicting positive events: given free rein, they naturally recalled an especially positive example of a particular event, then went on to make much more positive predictions about the emotional effect the same event would have on them in the future. The second study extended the findings to a more natural situation in which one group wasn’t asked to recall anything before making a prediction about a future event. Even when not specifically prompted to access past events, people still displayed the same bias.

False memories due to bias usually result from a desire to reduce psychological discomfort by having one’s thoughts and memories remain consistent. People tend to rely on inference in a wide variety of situations. Studies show that people also infer they’ve seen an event’s cause when they’ve really only seen its effect. People will also remember that they felt a particular way in the past that coincides with how they feel in the present, or even that they were worse off many years ago to make themselves feel better about where they are now. Other things that can bias memories include:

Beneffectance: the tendency to believe that past glories were the result of our own actions, while past disgraces were someone else’s fault.

Conservatism or Regressive Bias: the tendency to remember high values and high likelihoods lower than they actually were and low ones higher than they actually were.

Consistency bias: the tendency to remember past attitudes and behavior as resembling your current attitudes and behavior.

Egocentric bias: the tendency to recall the past in a self-serving manner, such as remembering the fish you caught as bigger than it was, or your grades as higher than they were.

Hindsight bias: the tendency to think that we could easily have predicted past events when in fact we couldn’t have.

Illusion-of-truth effect: the tendency to identify as true statements those previously heard (even if they cannot consciously remember having heard them), regardless of the actual validity of the statement.

Inference-based bias: the tendency to remember details that fit our inferences, such as remembering all the cheerleaders from high school as blonde and the football players as dumb jocks.

Reminiscence bump: the tendency to remember more events from adolescence and early adulthood than from other periods of our lives.

Rose-tinted specs: the tendency to remember things in the past as more wonderful than they actually were.

Stereotypical bias: memory distorted towards stereotypes, such as “black-sounding” names being misremembered as names of criminals.

Telescoping effect: the tendency to displace recent events backward in time and remote events forward in time, so that recent events appear more remote, and remote events, more recent.

While the persistence of memory can be vital to our survival, it can also leave us haunted by past events we might rather forget. Living in a comfortable modern society may mean a person faces relatively few real life-threatening dangers on a regular basis, but in more precarious environments, making the same mistake twice can be disastrous. Persistence becomes a curse when images and recollections of a traumatic event turn into an intrusive and sometimes unbearable part of everyday experience. The most common example of this is Post-traumatic Stress Disorder (PTSD). Audie Murphy, the most decorated American soldier of World War II, suffered from PTSD as a result of his experiences. According to his first wife, he suffered terrible nightmares and always slept with a gun under his pillow. Another example is the case of Donnie Moore of the California Angels, who threw the pitch that lost his team the 1986 American League Championship against the Boston Red Sox. Moore fixated on the bad play and eventually committed suicide.

The persistence of disturbing past episodes may also be important in depression, producing a dangerous cycle that may be key to the maintenance of depressive disorders: ruminating over past events can lead to depression, while depression leads back into rumination.

The most common form of treatment for negative persistence is called critical incident stress debriefing, or CISD, used since 1983 as a component of Critical Incident Stress Management. The process is intended to help individuals manage their normal stress reactions to abnormal events. The idea is that people who survive a painful event should express their feelings soon after, so the memory isn’t “sealed over” and repressed, which could lead to post-traumatic stress disorder. Although used extensively, research findings to date are mixed. A 2008 group-randomized trial with platoons of 952 peacekeepers compared CISD with a stress management class (SMC) and a survey-only (SO) condition. The study found that CISD did not differentially hasten recovery compared to the other two conditions. For soldiers reporting the highest degree of exposure to mission stressors, CISD was minimally associated with lower reports of posttraumatic stress and aggression (vs. SMC), higher perceived organizational support (vs. SO), and more alcohol problems than SMC and SO.

An alternative to therapy is the use of drugs to help remove persistent memories. In 2008, Alain Brunet, a clinical psychologist at McGill University, identified 19 patients who had been suffering for several years from serious stress and anxiety disorders such as PTSD due to traumas including sexual assaults, car crashes, and violent muggings. People in the treatment group were given the drug propranolol, a beta-blocker used for conditions like high blood pressure and performance anxiety that inhibits norepinephrine, a neurotransmitter involved in the production of strong emotions. Brunet asked subjects to write a detailed description of their traumatic experiences and then gave them a dose of propranolol. While the subjects were remembering the event, the drug suppressed the visceral aspects of their fear response, ensuring that the negative feeling was somewhat contained. One week later, the patients returned to the lab and were exposed once again to a description of the traumatic event. Subjects who got the placebo demonstrated levels of arousal consistent with PTSD (for example, their heart rate spiked suddenly), but those given propranolol showed significantly lower stress responses.

As I noted, human memory has served as plot fodder for science fiction for years, but there’s room for many more stories to come. Maybe one will be yours.

Johnson, Barbara C. The Suggestibility of Children: Evaluation by Social Scientists (From the Amicus Brief for the Case of State of New Jersey v. Michaels (1994), Presented by Committee of Concerned Social Scientists) http://law2.umkc.edu/faculty/projects/ftrials/mcmartin/suggestibility.html

Memory is imperfect. This is because we often do not see things accurately in the first place. But even if we take in a reasonably accurate picture of some experience, it does not necessarily stay perfectly intact in memory. Another force is at work. The memory traces can actually undergo distortion. With the passage of time, with proper motivation, with the introduction of special kinds of interfering facts, the memory traces seem sometimes to change or become transformed. These distortions can be quite frightening, for they can cause us to have memories of things that never happened. Even in the most intelligent among us is memory thus malleable.

Elizabeth Loftus, Memory: Surprising New Insights into How We Remember and Why We Forget

Human memory has served as plot fodder for science fiction for years. In “We Can Remember It for You Wholesale,” a short story by Philip K. Dick, REKAL Incorporated can implant “extra-factual memories” – memories of things that never happened that are “more real than the real thing.” In The Golden Age by John Wright, the novel’s posthuman protagonist has deleted the last 300 years of his memories — including, of course, the memory of doing so. As he tries to reconstruct why, he learns that everyone else deleted their memories of him as well. And in the movie Inception, a team of corporate spies infiltrate people’s dreams to discover information and plant false memories.

So what do we really know about human memory? How accurate is it? Can it be manipulated, and if so, how? Using Harvard University psychology professor Daniel Schacter’s book The Seven Sins of Memory as a guide, let’s look at what we know, think we know, and aren’t sure we know about the fallibility of human memory.

Transience: a decreasing memory over time
In 1885, German psychologist Hermann Ebbinghaus published his groundbreaking article “Über das Gedächtnis” (“Memory: A Contribution to Experimental Psychology”), in which he described experiments he conducted on himself to characterize the process of forgetting. Ebbinghaus tested his memory for newly learned nonsense syllables, such as “WID” and “ZOF”, over periods ranging from 20 minutes to 31 days. By repeatedly testing himself after various intervals and recording the results, he was the first to describe the shape of what is now known as the Ebbinghaus forgetting curve, which revealed a relationship between forgetting and time. Information is often lost very quickly after it is learned, and factors such as how the information was learned and how frequently it was rehearsed play a role in how quickly memories are lost: the stronger the memory, the longer one retains it. A typical graph of the forgetting curve shows that humans tend to halve their memory of newly learned knowledge in a matter of days or weeks unless they consciously review the learned material.
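The shape of the curve is often approximated by exponential decay, R = e^(−t/S), where R is the fraction retained, t is elapsed time, and S is a “stability” that grows with review. This is a common illustrative model, not Ebbinghaus’s original fit, and the stability values below are invented for the sketch:

```python
import math

def retention(hours_elapsed: float, stability: float) -> float:
    """Estimated fraction of material still recallable.

    Uses the common exponential approximation R = exp(-t/S).
    S ("stability") is larger for well-rehearsed material,
    which flattens the curve; values here are illustrative only.
    """
    return math.exp(-hours_elapsed / stability)

# Unrehearsed nonsense syllables fade fast after a day...
day_one_weak = retention(24, stability=20)
# ...while reviewed material (larger S) decays far more slowly.
day_one_strong = retention(24, stability=100)
```

The intuition behind spaced-repetition study schedules is exactly this: each review doesn’t change the decay law, it raises S so the curve flattens.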

Transience can be seen in both short- and long-term memory. For psychologists, short-term memory means just the things that are in your mind right now, while long-term memory is anything you store to be retrieved at a later time. Studies have shown that both types of memory can be extremely fragile over their respective timescales. In The Seven Sins of Memory, Schacter describes how, several days after the acquittal of O.J. Simpson, a group of California undergraduates provided researchers with detailed accounts of how they learned about the jury’s verdict. When the researchers tested the students’ memories 15 months later, only half recalled accurately how they found out about the decision. Asked again nearly three years after the verdict, less than 30% of the students’ recollections were accurate; nearly half were dotted with major errors.

Absent-mindedness: forgetting to do things
This is memory loss resulting from failure to pay attention when carrying out an act—putting your keys or glasses down without registering where you’re putting them. Schacter uses the example of cellist Yo-Yo Ma. In October 1999, Ma left his $2.5 million cello, made in 1733 by Antonio Stradivari, in a New York cab. Apparently, he was preoccupied with other things and forgot to ask the cab driver to retrieve the cello from the trunk.

There are two central factors in how and why we are absent-minded. One is how much attention we’re paying at the crucial moment, the other is how deeply we encode a memory.

A classic study demonstrates how central attention is to absent-mindedness. In 1999, an experiment on change-blindness was conducted by D. J. Simons and C. F. Chabris. Participants watched a video of people passing a basketball between each other, and were asked to count the number of passes. After about 30 seconds of people passing the basketball, a person dressed in a gorilla suit walks right through the center of the scene, stops, turns, looks at the camera, then turns again and walks out of shot. On average around half the people who took part didn’t notice the gorilla.

The role of encoding depth was demonstrated in a classic levels-of-processing experiment, in which participants studied words in one of three ways:

1. Shallow processing: participants were shown a word and asked to think about the font it was written in.
2. Intermediate processing: participants were shown a word and asked to think about what it rhymes with.
3. Deep processing: participants were shown a word and asked to think about how it would fit into a sentence, or which category of “thing” it was.

Participants who encoded the information most deeply remembered the most words when given a surprise test later, though it also took them longer to encode the information in the first place. Most importantly, participants had to do the right type of encoding: considering a word’s meaning for a long time did help its recall, but putting equivalent effort into thinking about its structure didn’t.

Another type of absent-mindedness involves prospective memory – trying to remember to do something in the future. These tasks involve setting a mental alarm clock triggered either by some event occurring, like leaving work, or by a particular time. Psychologists have found that the ways in which we are absent-minded in prospective memory can depend on whether we’re trying to remember a future event or a future time. Normally we depend on external cues to jog our memories, such as looking at a clock, or at a note we’ve left ourselves. We usually forget event-based prospective memories when we don’t see the cue. We don’t notice the clock, for example, because we’re in a hurry to get somewhere. Time-based prospective memories depend more on how good we are at generating cues for ourselves. For example, someone might remember to brush their teeth at the same times each day by always doing it as soon as they wake up and right before going to bed.

Absent-mindedness can have disastrous consequences. A pilot forgets a crucial item on the takeoff checklist and misses a problem that causes the plane to crash, or a surgeon forgets to suture an artery at the end of an operation and the patient dies from internal bleeding. But sometimes it can be a blessing. Take the case of the Russian journalist Solomon Shereshevskii. Shereshevskii’s memory was so perfect he could remember everything he heard or read, but he found it difficult to ignore insignificant events. A sneeze or cough would be imprinted on his memory forever. And his memories were so highly detailed he found it difficult to think in the abstract or know which facts were important and which weren’t. Shereshevskii eventually became a social recluse, ending up a prisoner of his immense memory.

Blocking: the tip-of-the-tongue experience
This is characterized by being able to retrieve quite a lot of information about the target word without being able to retrieve the word itself. You may know the meaning of the word, how many syllables the word has, or its initial sound or letter, but you can’t retrieve it. The experience is coupled with a strong feeling you know the word and that it’s hovering on the edges of your thought. Studies on blocking have shown that around half of the time we will become ‘unblocked’ after about a minute. The rest of the time it may take days to recover the memory.

A study published in the journal Neuron shows that we’re also able to voluntarily forget things. Researchers identified two methods of forgetting by conducting fMRI brain scans on volunteers as they remembered, and then purposely forgot, associations between word pairs. The first method is essentially to stop the brain’s remembering system from working, blocking the memories out entirely. The second is to substitute another memory for the one we want to block out, thinking about other things to replace the unwanted associations.

Another type of blocking is said to be caused by experiences so horrific that the human brain seals them away, only for them to be recalled years later, either spontaneously or through therapy. This type of blocking is known by the diagnostic term dissociative amnesia, and more colloquially as repressed or recovered memory. In Sigmund Freud’s theory of “repression,” the mind automatically banishes traumatic events from memory to prevent overwhelming anxiety. Freud further theorized that repressed memories cause “neurosis,” which could be cured if the memories were made conscious.

The theory of unconsciously repressing the memory of traumatic experiences is controversial. Most psychologists accept as fact that it’s common to consciously repress unpleasant experiences, and to spontaneously remember such events long afterward. Most of the controversy centers around recovered memories during repressed memory therapy (RMT). Critics of RMT maintain that many therapists are not helping patients recover repressed memories, but are suggesting and planting false memories of alien abduction, sexual abuse, and satanic rituals.

During the 1980s, claims of childhood sexual abuse based on recovered memories led to numerous highly publicized court cases. A number of the supposed victims retracted their allegations in the early 1990s, saying they had been swayed by therapeutic techniques. The argument continued throughout the 1990s, driven by high profile cases such as that of actress Roseanne Barr, and by people who claimed that their abusers had been set free because of testimony against repressed memories by psychologists such as Elizabeth Loftus, a research psychologist who has devoted her life to the study of memory.

So how real are repressed memories? Some argue that it’s plausible that memories of childhood sexual abuse could be buried for years and then recalled, and that motivated forgetting, dissociative amnesia, or some other mechanism could account for repressed memories. Others argue that the idea of recovered repressed memories is implausible because it contradicts what we know about memory: a) vivid experiences (as sexual abuse would presumably be) create lasting memories; b) memories change and are reconstructed over time, even those that are easily accessible and frequently recalled; and c) thoughts we experience as remembered may come from sources other than memories of our own actual experiences.

Asserting that [Schrödinger’s] cat is both alive and dead is akin to a baseball fan saying that the Yankees are stuck in a superposition of both won and lost until he reads the box score. It’s an absurdity, a megalomaniac’s delusion that one’s personal state of mind makes the world come into being.

There are a number of different interpretations of quantum mechanics, but the two most popular (and the ones most found in science fiction) are the Copenhagen Interpretation and the Many-Worlds interpretation. In the Copenhagen Interpretation, developed principally by physicists Niels Bohr and Werner Heisenberg in the 1920s, a quantum particle doesn’t exist in one state or another, but in all possible states at once. It isn’t until we observe its state that a quantum particle is forced to choose one probability, and that’s the state that we observe. This is still the orthodox and most popular interpretation.

In the Many-Worlds interpretation, first developed by physicist Hugh Everett in 1957, for each possible outcome of any given action, the universe splits to accommodate each one, and everything that could have possibly happened in the past, but didn’t, has occurred in some other universe or universes. This interpretation removes the observer from the equation, and appears to reconcile the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations.

But now there’s a relatively new interpretation called Quantum Bayesianism (or QBism) that combines quantum theory with Bayesian probability theory in an effort to eliminate the paradoxes found in previous interpretations, or at least put them in a less troubling form. It does this by redefining the wave function – a mathematical expression of objects in the quantum state. In earlier interpretations, the wave function is a real property of the object. But under QBism, the wave function is simply a mathematical tool and nothing more. The wave function has no bearing on the reality of the object being studied, just as the long-division problem to calculate your car’s fuel consumption has no effect on the gas mileage. Remove the wave function, and paradoxes – particles seem to be in two places at once, information appears to travel faster than the speed of light, cats can be dead and alive at the same time – vanish.

The notion that the wave function isn’t real goes back to Danish physicist Niels Bohr, who considered the wave function a computational tool: it gave correct results when used to calculate the probability of particles having various properties, but there wasn’t a deeper explanation of what the wave function is. Einstein also favored a statistical interpretation of the wave function. But QBism’s interpretation began in a short paper published in January 2002 under the title “Quantum Probabilities as Bayesian Probabilities,” by Carlton M. Caves of the University of New Mexico, Christopher A. Fuchs, then at Bell Labs in Murray Hill, N.J., and Ruediger Schack of the University of London.

QBism begins with Bayesian probability, which basically says, “I don’t know how the world is. All I have to go on is finite data. So I’ll use statistics to infer something from those data about how probable different possible states of the world are.” (For more on Bayesian probability, see my post “What, Exactly, Is Probability?”) It then applies this reasoning to the probabilities computed from the wave function.

Let’s see how this differs by looking at the famous Schrödinger’s cat thought experiment devised by Austrian physicist Erwin Schrödinger in 1935. Schrödinger wrote:

One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter, there is a tiny bit of radioactive substance, so small that perhaps in the course of the hour, one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges, and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat mixed or smeared out in equal parts. It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation.

In traditional interpretations, before an observer looks inside the box, the wave function describing the system is in a superposition where the state of the cat is both “alive” and “dead.” When an observer is introduced, the wave function collapses the cat into one state or the other (or the universe splits and the cat collapses into both states, one in each universe). But QBism says that the wave function is simply a description of the observer’s mental state – their experience of the world in which they live, as opposed to the reality that is that world – and these personal degrees of belief can be described using Bayesian probability.

Is there any evidence to support this interpretation? One of the key principles of quantum mechanics is the Born rule, which tells observers how to calculate the probability of a quantum event using the wave function. The Born rule states that the probability of finding a quantum object at a certain place at a certain time equals the square of the absolute value of its wave function. Recently, Christopher Fuchs was able to demonstrate that the Born rule could be rewritten almost entirely in terms of Bayesian probability theory without referring to a wave function. This means that it’s possible to predict the results of experiments using probabilities and no wave function, providing evidence that the wave function is just a tool that tells observers how to calculate their personal beliefs, or probabilities, about the quantum world around them. Additionally, QBism is currently being used in quantum computer science for Quantum Bayesian networks.
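To make the Born rule concrete, here’s a quick Python sketch for a toy two-state system (the amplitude values are invented purely for illustration):

```python
import math

# Toy two-state system: unnormalized real amplitudes for finding
# the particle at position A or position B (made-up values).
amplitudes = [1.0, 2.0]

# Normalize so the squared amplitudes sum to 1.
norm = math.sqrt(sum(a * a for a in amplitudes))
psi = [a / norm for a in amplitudes]

# Born rule: probability = square of the amplitude's absolute value.
probabilities = [abs(p) ** 2 for p in psi]
print(probabilities)  # ≈ [0.2, 0.8]
```

Whether those numbers describe the particle itself or only the observer’s expectations is exactly where QBism parts ways with the older interpretations.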

Gomatam, Ravi, “Niels Bohr’s Interpretation and the Copenhagen Interpretation – Are the two incompatible?”, Philosophy of Science, 74(5), December 2007

Schrödinger, Erwin, “Die gegenwärtige Situation in der Quantenmechanik” (“The present situation in quantum mechanics”), Naturwissenschaften (1935), translated by John D. Trimmer in Proceedings of the American Philosophical Society

“Probability is the bane of the age,” said Moreland, now warming up. “Every Tom, Dick, and Harry thinks he knows what is probable. The fact is most people have not the smallest idea what is going on round them. Their conclusions about life are based on utterly irrelevant – and usually inaccurate – premises.”

Anthony Powell, “Casanova’s Chinese Restaurant” in 2nd Movement in A Dance to the Music of Time, University of Chicago Press, 1995

Because many events can’t be predicted with total certainty, often the best we can do is say what the probability is that an event will occur – that is, how likely it is to happen. The probability that a particular event (or set of events) will occur is expressed on a linear scale from 0 (impossibility) to 1 (certainty), or as a percentage between 0 and 100%.

The analysis of events governed by probability is called statistics, a branch of mathematics that studies the possible outcomes of given events together with their relative likelihoods and distributions. It is one of the last major areas of mathematics to be developed, with its beginnings usually dated to correspondence between the mathematicians Blaise Pascal and Pierre de Fermat in the 1650s concerning certain problems that arose from gambling.

Chevalier de Méré, a French nobleman with an interest in gaming and gambling questions, called Pascal’s attention to an apparent contradiction concerning a popular dice game that consisted in throwing a pair of dice 24 times. The problem was to decide whether or not to bet even money on the occurrence of at least one “double six” during the 24 throws. A seemingly well-established gambling rule led de Méré to believe that betting on a double six in 24 throws would be profitable, but his own calculations indicated just the opposite. This problem (as well as others posed by de Méré) led to the correspondence in which the fundamental principles of probability theory were formulated for the first time.
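De Méré’s calculation is easy to reproduce. The chance of no double six on a single throw of two dice is 35/36, and the throws are independent, so a short Python check settles the bet:

```python
# De Méré's bet: at least one double six in 24 throws of two dice.
p_double_six = 1 - (35 / 36) ** 24

# The older, favorable game his rule extrapolated from:
# at least one six in 4 throws of a single die.
p_single_six = 1 - (5 / 6) ** 4

print(f"{p_double_six:.4f}")  # ≈ 0.4914 – slightly worse than even money
print(f"{p_single_six:.4f}")  # ≈ 0.5177 – slightly better than even money
```

The gambling rule that misled de Méré scaled the favorable 4-throw game up by the ratio of possible outcomes (4 × 36/6 = 24 throws), but probabilities don’t scale that way, and the double-six bet falls just below even money.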

Statistics is routinely used in every social and natural science. It is making inroads in law and in the humanities. It has been so successful as a discipline that most research is not regarded as legitimate without it. It’s also used in a wide variety of practical tasks. Physicians rely on computer programs that use probabilistic methods to interpret the results of some medical tests. Construction workers use a chart based on probability theory when mixing the concrete for the foundation of buildings, and tax assessors use a statistical package to decide how much a house is worth.

While there are a number of forms of statistical analysis, the two dominant forms are Frequentist and Bayesian.

Bayesian analysis is the older form, and focuses on P(H|D) – the probability (P) of the hypothesis (H), given the data (D). This approach treats the data as fixed (these are the only data you have) and hypotheses as random (the hypothesis might be true or false, with some probability between 0 and 1). This approach is called Bayesian because it uses Bayes’ Theorem to calculate P(H|D).

The conceptual framework for Bayes’ Theorem was developed by the Reverend Thomas Bayes, and published posthumously in 1764. It was perfected and advanced by French mathematician Pierre-Simon Laplace, who gave it its modern mathematical form and scientific application. Bayes’ theorem has a 250-year history, and the method of inverse probability that was developed from it dominated statistical thinking into the twentieth century.

For the Bayesian:
• Probability is subjective – a measurement of the degree of belief that an event will occur – and can be applied to single events based on degree of confidence or beliefs. For example, a Bayesian can refer to tomorrow’s weather as having a 50% chance of rain.
• Parameters are random variables that have a given distribution, and other probability statements can be made about them.
• Inference yields a probability distribution over the parameters, and point estimates are usually made by taking either the mode or the mean of that distribution.
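As a minimal sketch of this view in Python (the prior and the coin-flip counts are invented for illustration), here’s the standard conjugate Beta-Binomial update, which treats a coin’s heads-probability as a random variable with a distribution:

```python
# Prior: Beta(1, 1), i.e. a uniform distribution over the coin's
# heads-probability. Observing h heads and t tails updates it to a
# Beta(1 + h, 1 + t) posterior (the standard conjugate update).
h, t = 7, 3  # hypothetical data: 7 heads, 3 tails

alpha, beta = 1 + h, 1 + t

# Point estimates taken from the posterior distribution:
posterior_mean = alpha / (alpha + beta)            # 8/12 ≈ 0.667
posterior_mode = (alpha - 1) / (alpha + beta - 2)  # 7/10 = 0.7
```

After ten flips the Bayesian doesn’t report a single “true” heads-probability; they report a distribution over it, and summarize with its mean or mode.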

A Bayesian basically says, “I don’t know how the world is. All I have to go on is finite data. So I’ll use statistics to infer something from those data about how probable different possible states of the world are.”

Frequentist (sometimes called “a posteriori”, “empirical”, or “classical”) analysis focuses on P(D|H), the probability (P) of the data (D), given the hypothesis (H). That is, this approach treats data as random (if you repeated the study, the data might come out differently), and hypotheses as fixed (the hypothesis is either true or false, and so has a probability of either 1 or 0; you just don’t know for sure which it is). This approach is called frequentist because it’s concerned with the frequency with which one expects to observe the data, given some hypothesis about the world.

Frequentist statistical analysis is associated with Sir Ronald Fisher (who created the null hypothesis and p-values as evidence against the null), Jerzy Neyman (who was the first to introduce the modern concept of a confidence interval in hypothesis testing) and Egon Pearson (who with Neyman developed the concept of Type I and II errors, power, alternative hypotheses, and deciding to reject or not reject based on an alpha level). They use the relative frequency concept – you must perform an experiment many times and measure the proportion of trials in which you get a positive result.

For the Frequentist:
• Probability is objective and refers to the limit of an event’s relative frequency in a large number of trials. For example, a coin with a 50% probability of heads will turn up heads 50% of the time in the long run.
• Parameters are all fixed and unknown constants.
• Any statistical process only has interpretations based on limited frequencies. For example, a 95% confidence interval of a given parameter will contain the true value of the parameter 95% of the time.
• Referring to tomorrow’s weather as having a 50% chance of rain would not make sense to a Frequentist because tomorrow is just one unique event, and cannot be referred to as a relative frequency in a large number of trials. But they could say that 70% of days in April are rainy in Seattle.
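The confidence-interval bullet above is easy to demonstrate by simulation. This sketch (with made-up parameter values) repeatedly constructs a 95% interval for the mean of a normal distribution with known standard deviation, and counts how often the interval contains the true value:

```python
import random
import statistics

random.seed(1)
true_mu, sigma, n = 10.0, 2.0, 30
z = 1.96  # 95% normal quantile
trials = 2000

covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    mean = statistics.fmean(sample)
    half_width = z * sigma / n ** 0.5
    if mean - half_width <= true_mu <= mean + half_width:
        covered += 1

coverage = covered / trials
print(coverage)  # close to 0.95
```

Note the Frequentist phrasing this enforces: each individual interval either contains true_mu or it doesn’t; the 95% describes the procedure over many repetitions, not any one interval.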

A Frequentist basically says, “The world is a certain way, but I don’t know how it is. Further, I can’t necessarily tell how the world is just by collecting data, because data are always finite and noisy. So I’ll use statistics to line up the alternative possibilities, and see which ones the data more or less rule out.”

Frequentist and Bayesian approaches represent deeply conflicting approaches with deeply conflicting goals. Perhaps the most important conflict has to do with alternative interpretations of what “probability” means. These alternative interpretations arise because it often doesn’t make sense to talk about the relative frequency of possible states of the world. For instance, there’s either life on Mars, or there’s not.

We don’t know for sure which it is, but we can say with certainty that it’s one or the other. So if you insist on putting a number on the probability of life on Mars (i.e. the probability that the hypothesis “There is life on Mars” is true), you are forced to drop the Frequentist interpretation of probability. A Frequentist interprets the word “probability” as meaning “the frequency with which something would happen in a lengthy series of trials”.

The Bayesian interprets the word “probability” as “subjective degree of belief” – the probability that you (personally) attach to a hypothesis is a measure of how strongly you (personally) believe that hypothesis. So a Frequentist would never say “There’s probably not life on Mars”, unless they were speaking loosely and using that phrase as shorthand for “The data are inconsistent with the hypothesis of life on Mars”. But the Bayesian would say “There’s probably not life on Mars”, not as a loose way of speaking about Mars, but as a very literal and precise way of speaking about their beliefs about Mars. A lot of the choice between Frequentist and Bayesian statistics comes down to whether you think science should comprise statements about the world, or statements about our beliefs.

Let’s look at the simple task of flipping a coin. The flip of a fair coin has no memory, or as mathematicians would say, each flip is independent. Even if by chance the coin comes up heads ten times in a row, the probability of getting heads or tails on the next flip is precisely equal. You may believe that, because a flipped coin has come up heads ten times in a row, “tails is way overdue”, but the coin doesn’t know and doesn’t care about the last ten flips; the next flip is just as likely to be the eleventh head in a row as the tail that breaks the streak. The probability that the flip of a fair coin will come up heads or tails, then, is 50%.

But what, exactly, do we mean when we say that the probability is 50%? A Frequentist would say that if the probability of landing on either side is 50%, this means that if we were to repeat the experiment of flipping the coin a large number of times, we would expect to see approximately the same number of heads as tails. That is, the ratio of heads to tails will approach 1:1 as we flip the coin more and more times.
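A few lines of Python illustrate the Frequentist claim directly; as the number of simulated flips grows, the proportion of heads drifts toward 1/2:

```python
import random

random.seed(42)
# Each flip is independent; no streak changes the next flip's odds.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)
```

The proportion wobbles noticeably at 100 flips and settles very close to 0.5 by a million, which is exactly what “the limit of the relative frequency” means.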

In contrast, a Bayesian would say that probability is a very personal opinion. What a probability of 50% means to you is different from what it might mean to me. If pressed to place a bet on the outcome of flipping a single coin, you might just as well guess heads as tails. More generally, if you were to bet on the flip of a coin and were told that the probability of either side coming up was 50%, and the rewards for guessing correctly on any outcome were equal, then it would make no difference to you which side of the coin you bet on.

Both approaches are addressing the same fundamental problem (what are the odds that flipping a coin will result in it landing heads up), but attack the problem in reverse orders (the probability of getting data, given a model, versus probability of a model, given some data). It’s quite common to get the same basic result out of both methods, but many will argue that the Bayesian approach more closely relates to the fundamental problem in science (we have some data, and we want to infer the most likely truth.)

So, which approach is best? The Frequentist position would seem to be the answer. In our coin-flipping example, the probability of a fair coin landing heads is 50% because it lands heads half the time. Defining probability in terms of frequency seems to be the empirical thing to do. After all, frequency is “real”. It isn’t metaphysical, like “degree of certainty,” or “degree of warranted belief.” You can go out and observe it.

However, the Frequentist position also has some significant problems. First, it requires the long run relative frequency interpretation of probability – that is, the limiting frequency with which that outcome appears in a long series of similar events. Dice, coins and shuffled playing cards can be used to generate random variables; therefore, they have a frequency distribution, and the frequency definition of probability theory can be used. Unfortunately, the frequency interpretation can only be used in cases such as these. Another problem is that almost all prior information is ignored, and it doesn’t allow you to incorporate what you already know. Even more seriously, a hypothesis that may be true may be rejected because it hasn’t predicted observable results that have not occurred.

But the Bayesian position has its own set of problems. Bayesian calculations almost invariably require integrations over uncertain parameters, making them computationally difficult. Second, Bayesian methods require specifying prior probability distributions, which are often themselves unknown. Bayesian analyses generally assume so-called “uninformative” (often uniform) priors in such cases. But such assumptions may or may not be valid, and more importantly, it may not be possible to determine their validity with any degree of certainty.

Finally, though Bayes’ theorem is trivially true for random variables X and Y, it’s not clear that parameters or hypotheses should be treated as random variables. It’s accepted that you can talk about the probability of observed data given a model – the frequency with which you would obtain those data in the limit of infinite trials. But if you talk about the “probability” of a one-time, non-repeatable event that is either true or false, there is no frequency interpretation.

While both approaches have their (often rabid) proponents, I would argue that the approach you take depends on the question (or questions) you’re asking. Let’s take the hypothetical case of a patient you want to perform a test on.

You know the patient is either healthy (H) or sick (S). Once you perform the test, the result will either be Positive (+) or Negative (-). Now, let’s assume that if the patient is sick, they will always get a Positive result. We’ll call this the correct (C) result. If the patient is healthy, the test will be negative 95% of the time, but there will be some false positives. In other words, the probability of the test being Correct, for healthy people, is 95%. So the test is either 100% accurate or 95% accurate, depending on whether the patient is sick or healthy. Taken together, this means the test is at least 95% accurate.

These are the statements that would be made by a Frequentist. The statements are simple to understand and are demonstrably true. But what if we ask a more difficult, and arguably a more useful question – given the test result, what can you learn about the health of the patient?

If you get a negative test result, the patient is obviously healthy, as there are no false negatives. But what if the test is positive? Was the test positive because the patient was actually sick, or was it a false positive? This is where the Frequentist and the Bayesian diverge. Everybody will agree that this cannot be answered from the test result alone. The Frequentist will refuse to answer. The Bayesian will be prepared to give you an answer, but you’ll have to give the Bayesian a prior first – i.e. tell them what proportion of the patients are sick.

If you are satisfied with statements such as “for healthy patients, the test is very accurate” and “for sick patients, the test is very accurate”, the Frequentist approach is best. But for the question “for those patients that got a positive test result, how accurate is the test?”, a Bayesian approach is required.
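To see how strongly the prior drives the Bayesian answer, here’s the calculation for the hypothetical test above via Bayes’ theorem (the prevalence figures passed in are invented for illustration):

```python
def p_sick_given_positive(prior_sick, sensitivity=1.0, false_pos=0.05):
    """P(S|+) for the test above: 100% sensitive, 5% false-positive rate."""
    p_positive = sensitivity * prior_sick + false_pos * (1 - prior_sick)
    return sensitivity * prior_sick / p_positive

# If only 1% of patients are sick, most positives are false positives:
print(round(p_sick_given_positive(0.01), 3))  # ≈ 0.168
# If half the patients are sick, a positive is very informative:
print(round(p_sick_given_positive(0.50), 3))  # ≈ 0.952
```

So an “at least 95% accurate” test can leave a positive patient with only a one-in-six chance of actually being sick – the answer depends entirely on the prior the Frequentist refused to supply.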

There is a common and ancient opinion that certain prophetic women who are popularly called ‘screech-owls’ suck the blood of infants as a means, insofar as they can, of growing young again. Why shouldn’t our old people, namely those who have no [other] recourse, likewise suck the blood of a youth? — a youth, I say who is willing, healthy, happy and temperate, whose blood is of the best but perhaps too abundant. They will suck, therefore, like leeches, an ounce or two from a scarcely-opened vein of the left arm; they will immediately take an equal amount of sugar and wine; they will do this when hungry and thirsty and when the moon is waxing. If they have difficulty digesting raw blood, let it first be cooked together with sugar; or let it be mixed with sugar and moderately distilled over hot water and then drunk.

If your world-building has a medieval flavor and you’re looking to add some period-authentic medicine, you need look no further than cannibalism. For more than 200 years, cannibalism was a routine part of medicine. Walk into the shop of any apothecary (the equivalent of today’s pharmacist), and you would find, among other things, the skull of a man killed by violent death, human blood (which could include menstrual blood), human urine (separated by sex, and if the urine came from a woman, by whether she was a virgin or not), human fat, and mummia.

Skulls

Perhaps the simplest form of this type of medicine was the skull and the moss of the skull. But not just any skull would do. It was widely believed that the skulls used should be from those who suffered violent death. There were disagreements over which type of violent death was best. The German professor Rudolf Goclenius (fl. c.1618) held that skulls should come from those who had been hanged. Flemish chemist, physiologist, and physician Jan (or Jean) Baptist van Helmont disagreed, claiming that a body broken on the wheel would do just as well. He also explained the skull was the most efficacious of all the human bones because, after death, “…all the brain is consumed and dissolved in the skull… by the continual… imbibing of [this] precious liquor” of dissolved brains “the skull acquires such virtues.”

One of the most important sources of skulls in England was Ireland. Sir Humphrey Gilbert slaughtered thousands of Irish men, women, and children during the late 1560s, severing the heads of those he captured and placing them in long rows, like a wall, leading to his tent. The skulls rotted and moss grew on them, and he began exporting the skulls to England, where they ended up being used as medicine by the English aristocracy. So much money was made by exporting the skulls that the English introduced an import tax of one shilling for each one. As late as 1778, the skulls were still liable for duty and were also listed amongst goods which were imported into England before being exported elsewhere.

One of the earliest descriptions of using human skull is from the 1651 book by John French, The Art of Distillation. One of the methods described in the book for turning human skull into spirit involved breaking the skull up into small pieces and placing them in a glass retort. The pieces were then heated in a “strong fire,” which would eventually yield “a yellowish spirit, a red oil, and a volatile salt.” The salt and spirit were then further distilled for an additional 2-3 months. This spirit of the skull was said to be good for falling-sickness, gout, dropsy, and as a general panacea for all illnesses.

A different recipe for turning human skull into spirit was developed by Jonathan Goddard, Professor of Physic at London’s Gresham College, and was purchased by King Charles II for £6,000 (an enormous sum of money); the concoction became known as “the King’s Drops.” It was used against epilepsy, convulsions, diseases of the head, and often as an emergency treatment for the dying. Charles even manufactured and sold it himself. Unfortunately, the drops didn’t do Charles much good, as he died on February 6, 1685, after being treated with high doses of the distillation after falling ill four days earlier. The drops failed again in December of 1694, when despite having taken some of the King’s Drops, Queen Mary II died.

The moss of the skull, called usnea, was also important. Francis Bacon (d.1626), the father of scientific inquiry, probably started the trend of consuming fresher skulls with moss growing on them. Chemist and physicist Robert Boyle (d.1691) then found another use. One summer Boyle was badly afflicted by nosebleeds. During a violent bleed, Boyle decided to use “some true moss of a dead man’s skull” which had been sent from Ireland. The usual method was to insert the moss, often powdered, directly into one’s nostrils. But Boyle said he found that he was able to completely halt the bleeding merely by holding the moss in his hand, thus confirming that the moss could work at a distance.

Mummia

Mummia, or mummy, was a powder made from ground mummies. There were broadly four types of mummy – the mineral pitch (also known as “natural mummy”, “transmarine mummy”, or bitumen), matter derived from embalmed Egyptian corpses (“true mummy” or “mumia sincere”), the relatively recent bodies of travelers “drowned” in sandstorms in the Arabian desert (“Arabian mummy”), and flesh taken from fresh corpses, preferably those of felons who had died no more than three days prior to the flesh being collected, then treated and dried.

Mummy was thought to cure everything from headaches to stomach ulcers. For example, in 1747, successful London physician Robert James, in his book Pharmacopeia Universalis: or A New Universal English Dispensatory, wrote

Mummy resolves coagulated Blood, and is said to be effectual in purging the Head, against pungent Pains of the Spleen, a Cough, Inflation of the body, Obstruction of the Menses and other uterine Affections: Outwardly it is of Service for consolidating Wounds. The Skin is recommended in difficult Labours, and hysteric Affections, and for a Withering and Contraction of the Joints. The Fat strengthens, discusses, eases pains, cures Contractions, mollifies the Hardness of Cicatrices, and fills up the pits left by the Measles. The Bones dried, discuss, astringe, stop all Sorts of Fluxes, and are therefore useful in a Catarrh, Flux of the Menses, Dysentery, and Lientery, and mitigate Pains in the Joints. The Marrow is highly commended for Contractions of the Limbs. The Cranium is found by Experience to be good for Diseases of the Head, and particularly for the Epilepsy; for which Reason, it is an Ingredient in several anti-epileptic Compositions. The Os triquerum, or triangular Bone of the Temple, is commended as a specific Remedy for the Epilepsy. The Heart also cures the same Distemper.

But the use of mummy as a medicine goes back much further. Thomas Willis, a 17th-century pioneer of brain science, brewed a drink for apoplexy, or bleeding, that mingled powdered human skull and chocolate.

In 1575, John Banister, Queen Elizabeth’s surgeon, described a mummy plaster for a tumorous ulcer and a drink made of mummy and water of rhubarb for ulcers of the breast. In 1562, physician William Bullein published Bullein’s Bulwark of Defence Against all Sickness, which recommended mummy mixed with wild fennel, juice of black poppy, gentian, honey, and wild yellow carrots to make “Therica Galeni”, a treatment for “the falling sickness… and convulsions”, headaches (including migraines), stomach pains, the “spitting of blood”, and “yellow jaundice”.

Earlier, anatomist and medical writer Berengario da Carpi (d.1530) made frequent use of mummy in medical plasters using a family secret recipe going back decades. His family ensured they had sufficient amounts of mummy by keeping mummified heads in their house.

Blood

It is said that in July of 1492, the physician to dying Pope Innocent VIII bribed three healthy youths to help him save the pope. The youths were then bled, and the pope drank their blood, still fresh and hot. But the blood did not save the pope, and all three youths died of the bloodletting.

The belief that blood could cure disease goes back at least to Roman times. Between the first and the sixth century, one theological author and several medical authors reported on the consumption of gladiators’ blood or liver to cure epileptics. The origins of this belief are thought to lie in Etruscan funeral rites. After the prohibition of gladiatorial combat in about 400 AD, an executed individual (particularly had he been beheaded) became the “legitimate” successor to the gladiator. Pliny the Elder (AD 23-79), one of the great historians of the Roman Empire, described the mad rush of spectators into arenas to drink the blood of fallen gladiators:

Epileptic patients are in the habit of drinking the blood even of gladiators, draughts teeming with life, as it were; a thing that, when we see it done by the wild beasts even, upon the same arena, inspires us with horror at the spectacle! And yet these persons, forsooth, consider it a most effectual cure for their disease, to quaff the warm, breathing, blood from man himself, and, as they apply their mouth to the wound, to draw forth his very life; and this, though it is regarded as an act of impiety to apply the human lips to the wound even of a wild beast! Others there are, again, who make the marrow of the leg-bones, and the brains of infants, the objects of their research!

Plin. Nat. 28.2

In the 16th and 17th centuries, various distillations of blood were used to treat consumption, pleurisy, apoplexy, gout, and epilepsy, as well as a general tonic for the sick. Moyse (or Moise) Charas, an apothecary in France during the reign of Louis XIV who compiled compendiums of medication formulas, specified blood should be from “healthy young men”. Robert Boyle also had a lot to say about medicine, and was very interested in distillations of human blood. In 1663 he published Some Considerations touching the Usefulness of Experimental Natural Philosophy, in which he advises to

take of the blood of a healthy young man as much as you please, and whilst it is yet warm, add to it twice its weight of good spirit of wine, and incorporating them well together, shut them carefully up in a convenient glass vessel.

Poor people couldn’t afford physicians, and turned to other options for acquiring blood. English traveler Edward Browne reports that, while touring Vienna, he had the good fortune to be present at a number of executions. After one execution, he reports that “while the body was in the chair” he saw “a man run speedily with a pot in his hand, and filling it with the blood, yet spurting out of his neck, he presently drank it off, and ran away… this he did as a remedy against the falling-sickness.” In Germanic countries, the executioner was considered a healer; a social leper but with almost magical powers.

Fat

Human fat was mentioned in European pharmacopoeias as early as the 16th century. It was used to treat ailments on the outside of the body. German doctors, for instance, prescribed bandages soaked in it for wounds, and rubbing fat into the skin was considered a remedy for gout and rheumatism. But it could be used for other diseases as well. Human fat was frequently cited as a powerful treatment for rabies. Robert James, who we met earlier, published a book in 1741 on rabies. In it, he discusses the work of French surgeon J. P. Desault, including the remedy the surgeon had “…tried with constant success, and which I propose to prevent and cure the hydrophobia… the ointment made of one third part of mercury revived from cinnabar, one third part of human fat, and as much of hog’s lard.”

In Scotland, human fat was being sold and used as early as the beginning of the 17th century. An apothecary in Aberdeen, Scotland advertised as part of his available medical ingredients “…human fat at 12s Scots per ounce”. The source of the fat was most likely executed criminals, as it was the most common source of fat available. But sometimes human fat came from much darker actions.

In July 1601, the Spanish began the siege of Ostend, one of the bloodiest battles of the Dutch revolt against the Spanish, and one of the longest sieges in history. An account of the battle tells of how on October 17, 1601, the Spanish ran into a trap in an attack. All the attackers were killed, and afterwards “…the surgeons of the town went thither… and brought away sacks full of man’s grease which they had drawn out of the bodies.” It’s likely that the fat was then used to treat wounds from the battle.

Cannibalism as medicine may shock our sensibilities today, but it can be a useful starting point for developing medicinal practices in your world-building.

References

Bostock, John, 1855. The Natural History of Pliny the Elder. London, England: Taylor and Francis

“He had the greatest mind since Einstein, but it didn’t work quickly. He admitted his slowness often. Maybe it was because he had so great a mind that it didn’t work quickly… I watched him hit that ball. I watched it bounce off the edge of the table and move into the zero-gravity volume, heading in one particular direction. For when Priss sent that ball toward the zero-gravity volume – and the tri-di films bear me out – it was already aimed directly at Bloom’s heart! Accident? Coincidence? …Murder?” The Billiard Ball – Isaac Asimov

In “The Billiard Ball”, first published in the March 1967 issue of If, Asimov presents a story in which scientific competition rises to the level of murder. Maybe. Asimov understood that scientists are human beings, and can be arrogant, petty, cruel, and filled with hatred. These traits, in turn, can make for a compelling science fiction story. And if you’re looking for inspiration, there’s plenty to be found.

Lord Kelvin, a brilliant mathematician and physicist, accused Wilhelm Roentgen, who announced the discovery of X-rays in 1895, of fraud. He argued that the cathode-ray tube, which Roentgen had used in his discovery, had been in use for a decade, and therefore if X-rays actually existed, someone would have already discovered them. True, he eventually came around and apologized, but calling a fellow scientist a fraud is pretty serious.

But Kelvin was actually the least of Roentgen’s attackers. Roentgen had borrowed a cathode-ray tube from physicist Philipp Lenard, who had been exploring fluorescence using such tubes before Roentgen, although he had failed to pursue the origins of what he saw or to document his findings photographically. Lenard was angry that Roentgen hadn’t acknowledged his work in developing some of the technology that led to Roentgen’s discovery, and for years he demanded credit for the discovery of X-rays while simultaneously (and wrongly) arguing that they were merely a kind of cathode ray with new properties rather than a distinct phenomenon. Lenard’s attacks lasted until his death in 1947, and because of them Roentgen left orders that all his papers concerning X-rays prior to 1900 be burned, unopened, upon his death. Lenard went on to become an early member of the Nazi party, an advisor to Adolf Hitler, Chief of Aryan physics, and a fierce opponent of Albert Einstein and “the Jewish fraud” of relativity.

Then there’s English inventor and scientist Robert Hooke. Hooke was a polymath, often referred to as the English Leonardo da Vinci. He discovered Hooke’s Law (the extension of a spring is proportional to the applied force); contributed to knowledge of respiration, insect flight, and the properties of gases; coined the term “cell” to describe the individual units making up larger organisms; and invented the universal joint, the anchor escapement in clocks, and numerous other mechanical devices. His work on gravitation preceded Newton’s, his Micrographia was the first book on microscopy, his astronomical observations were among the best of his time, and he was an architect of distinction and a Surveyor for the City of London after the Great Fire.

But he was also an ass, especially when it came to Isaac Newton.

Hooke and Newton were involved in disputes over whether an inverse-square law of gravity could account for the elliptical orbits of the planets, as well as over Newton’s theory of light and colors. In 1672, Newton was elected Fellow of the Royal Society of London, and his first letter on Light and Colors was read to the Society. Hooke, at the time a respected senior scientist and Curator of Experiments for the Royal Society, attacked Newton’s theory, and also claimed that he had invented a reflecting telescope before Newton (Newton had actually invented his in 1668). Newton fought back, and won, but in January 1676 Hooke attacked Newton again, alleging that Newton had plagiarized Hooke’s Micrographia, which contained Hooke’s own theory of light.

Despite the attacks, Hooke and Newton corresponded, and in private correspondence Newton shared calculations that, he believed, showed that the path of a body falling to Earth would be a spiral. Unfortunately for Newton, Hooke realized that the argument only held true if the body were precisely on the equator; in the more general case the path would be an ellipse. In 1679, just after Newton’s mother had died, Hooke exposed the error to the Royal Society, and after briefly responding to Hooke, Newton stopped writing to anyone for over a year.

In 1686, when the first book of Newton’s Principia was presented to the Royal Society, Hooke claimed that Newton had obtained from him the “notion” of “the rule of the decrease of Gravity, being reciprocally as the squares of the distances from the Center”. Only the diplomatic intervention of Edmond Halley persuaded Newton to allow the publication of the third and final book of the Principia, with Halley telling Newton that Hooke was merely making a public fool of himself, and Newton removing every reference to Hooke from the volume.

But before you feel too bad for Newton, don’t, because Newton could be just as much of an ass as Hooke was.

John Flamsteed may not be a name that’s familiar to you, but he was the Astronomer Royal, and over 30 years he had measured the positions of thousands of stars with a precision far exceeding anything undertaken before him. When Newton needed observations of the ‘double’ comet of 1680, he turned to Flamsteed, who provided them. There were some small errors in the data Flamsteed sent, and Flamsteed attempted to make amends by carrying out for himself some of the calculations that Newton needed. Newton, however, caustically informed Flamsteed that he needed his observations, not his calculations. Feeling mistreated, Flamsteed threatened to withhold his data.

Newton needed these observations for a new section he was planning for the second edition of the Principia, around 1703, on a “Theory of the Moon”. Using his courtly influence, he persuaded Queen Anne’s husband, Prince George, to commission a royal star catalogue, to be printed by the Royal Society. Flamsteed could hardly refuse a commission from his direct employer, but the moment he handed his draft data over to the Royal Society it was certain to go straight to Newton, who now dominated the Society. So Flamsteed stalled, publishing the data as slowly as possible and making certain it wasn’t the data Newton needed. When Flamsteed argued with Newton over an error in Newton’s measurement of the size of stars in Opticks, Newton deliberately excluded him from the discussions about the publication of his catalogue, and Flamsteed’s request for a £2,000 grant to purchase a new telescope was rejected under Newton’s influence. In 1708 Prince George died, and the star catalogue project died with him. In retaliation, when Flamsteed’s membership of the Royal Society lapsed in 1709, Newton refused to renew it, effectively expelling Flamsteed.

But Newton wasn’t through with Flamsteed. He needed Flamsteed’s data, and by 1711, had persuaded Queen Anne to take up the mantle of sponsor of her late husband’s project. In a note to Flamsteed in 1711, Newton threatened that “[If you] make any excuses or unnecessary delays it will be taken for an indirect refusal to comply with Her Majesty’s order.”

The matter came to a head with the eclipse of 4 July 1711. Observations of the eclipse would be invaluable to Newton’s calculations, but Flamsteed refused a direct order to observe it. He was ordered to explain himself before a panel of the Royal Society, and the council that was to sit in judgment over Flamsteed was selected by the President of the Royal Society (Newton himself) and consisted of Newton and two of his most loyal supporters. The council, to no one’s surprise, ordered the immediate publication of all Flamsteed’s hard-won data.

Flamsteed’s masterwork, Historia Coelestis, was finally published in 1712, against Flamsteed’s wishes and without his involvement. The following year, Newton issued the second edition of his Principia, complete with a lunar theory based on Flamsteed’s data.