Ronald Fisher is a geneticist and statistician working at Rothamsted Experimental Station, an agricultural research institute located in the English countryside.

Before coming to Rothamsted, Fisher was instrumental in reconciling Charles Darwin’s notion of evolution by natural selection with Gregor Mendel’s laws of inheritance. Basically, if you’ve ever wondered how Darwin’s observations of finches and Mendel’s experiments with pea plants led to our modern understanding of evolution, one of the people you have to thank for that is Ronald Fisher.

It’s also worth pointing out, given Fisher’s influence on genetics, that he was an outspoken eugenicist. After all, this was the early twentieth century, and the history of science is not exactly a straight line of people with non-horrific views on society.

Anyway, back to the countryside.

Long-term experiments with wheat, grass, and roots abound at Rothamsted, giving Fisher a bumper crop of data to analyze. However, though the overall quantity of data is high, sample sizes are low. An influential study of the effects of rainfall on wheat incorporates data from just thirteen plots of land.

Concerned with generalizing the results of such experiments (after all, the point of this type of research is to increase crop production), Fisher synthesizes several recent advances in “small sample statistics” into a framework known as significance testing.

He takes a statistical test called Student’s t-test, which was originally developed by a statistician at the Guinness brewery (William Sealy Gosset, publishing under the pseudonym “Student”) to monitor the quality of the beer, and develops a complementary test which he calls the Analysis of Variance (ANOVA).

To ensure these innovations are accessible to the research community beyond Rothamsted, Fisher publishes Statistical Methods for Research Workers. Central to the book, and to significance testing more generally, is the null hypothesis: the position that there is no difference between groups of data. In Fisher’s conception, devices like t-tests and ANOVAs are tests of the null hypothesis. The results of such tests indicate the likelihood of observing a result at least as extreme as the one obtained, assuming the null hypothesis is true. In quantitative terms, this likelihood is expressed as a p-value.

Fitting its origins in applied research, the utility of Fisher’s framework is best demonstrated with a practical example. Suppose Fisher and his colleagues want to study the effect of a particular method of fertilization on the growth of grass. To do this, they obtain yield measurements from ten plots that use the method and ten that do not. These numbers are small, but reflective of the time and effort that goes into harvesting good data. Before examining the two groups of data, Fisher reminds his colleagues that the null hypothesis stipulates that there is no difference between the fertilized and unfertilized plots. This is a really abstract way of talking about something as exciting as watching grass grow, so he reiterates that the null hypothesis essentially holds that the fertilization method has no effect. Then, he runs a t-test.

A resulting p-value of 0.50 indicates that, assuming the fertilization method has no effect, the probability of Fisher and his colleagues obtaining yield measurements at least as extreme as theirs is fifty percent. A resulting p-value of 0.10 indicates that the probability is ten percent. In Statistical Methods for Research Workers, Fisher introduces an informal criterion for rejecting the null hypothesis: p < 0.05.
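To make the grass-plot example concrete, here is a rough modern sketch in Python. The yield numbers are invented for illustration (they are not Fisher’s data), and the test is implemented by hand using the pooled-variance formula that Student’s t-test is built on:

```python
import math

# Hypothetical yield measurements (arbitrary units) for ten fertilized and
# ten unfertilized plots. These numbers are invented for illustration; they
# are not Fisher's actual data.
fertilized   = [31.2, 29.8, 33.1, 30.5, 32.4, 28.9, 31.7, 30.1, 32.8, 29.5]
unfertilized = [28.4, 27.9, 30.2, 29.1, 28.7, 27.5, 29.8, 28.0, 30.5, 27.2]

def students_t(a, b):
    """Two-sample Student's t statistic, assuming equal variances."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    # Pool the sums of squared deviations from each group's mean.
    ss = sum((x - mean_a) ** 2 for x in a) + sum((x - mean_b) ** 2 for x in b)
    pooled_var = ss / (na + nb - 2)
    standard_error = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (mean_a - mean_b) / standard_error

t = students_t(fertilized, unfertilized)
# For 18 degrees of freedom, the two-sided 5% critical value is about 2.10.
print(f"t = {t:.2f}; significant at p < 0.05: {abs(t) > 2.10}")
```

In practice a library does this in one call (e.g. `scipy.stats.ttest_ind(fertilized, unfertilized)`, which also returns the p-value directly); the hand-rolled version is just to show there’s no magic inside.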

“The value for which p = 0.05, or 1 in 20, is 1.96 or nearly 2 ; it is convenient to take this point as a limit in judging whether a deviation is to be considered significant or not.”

We’ve finally arrived at p < 0.05.

Almost a decade after the publication of Statistical Methods for Research Workers, Jerzy Neyman and Egon Pearson address what they view as a fundamental asymmetry in Fisher’s framework. Namely, though it’s intended to help researchers evaluate the results of experiments, the focus on null hypotheses doesn’t really give researchers any way to evaluate experimental hypotheses. Basically, they argue, with increasing volume, that you can use Fisher’s methods to evaluate whether there’s a difference between two groups, but you can’t use them to make a statement about what’s causing it.

Though their “hypothesis testing” framework draws heavily from Fisher’s, Neyman and Pearson’s has a fundamentally different goal. Rather than giving researchers tools to evaluate the results of agricultural experiments, their goal is determining the optimal test for deciding between competing hypotheses. These hypotheses include Fisher’s null hypothesis, but also a variety of “alternative” or experimental hypotheses. To this end, they introduce three important concepts to the burgeoning field of research-oriented statistics: Type I error, the probability of incorrectly rejecting the null hypothesis; Type II error, the probability of incorrectly accepting it; and power, the probability of correctly rejecting it.
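These definitions lend themselves to simulation. The following is a minimal Monte Carlo sketch (my own illustration, not anything from Neyman and Pearson’s papers): drawing both groups from the same distribution estimates the Type I error rate, while drawing them from shifted distributions estimates power.

```python
import math
import random

def t_statistic(a, b):
    """Two-sample Student's t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ss = sum((x - ma) ** 2 for x in a) + sum((x - mb) ** 2 for x in b)
    se = math.sqrt(ss / (na + nb - 2) * (1 / na + 1 / nb))
    return (ma - mb) / se

def rejection_rate(effect, n=10, critical_t=2.10, reps=20000):
    """Fraction of simulated experiments in which |t| exceeds the two-sided
    5% critical value for 18 degrees of freedom (about 2.10)."""
    rejections = 0
    for _ in range(reps):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(effect, 1) for _ in range(n)]
        if abs(t_statistic(treated, control)) > critical_t:
            rejections += 1
    return rejections / reps

random.seed(0)
# Null is true: rejections are Type I errors, so the rate should sit near 0.05.
type_i_rate = rejection_rate(effect=0.0)
# Null is false (a one-standard-deviation effect): rejections are correct,
# so this rate estimates the test's power at n = 10 per group.
power = rejection_rate(effect=1.0)
print(f"Type I error rate approx. {type_i_rate:.3f}, power approx. {power:.3f}")
```

With only ten plots per group, even a full standard deviation of effect leaves power well short of certainty, which is exactly the small-sample predicament Fisher’s framework was built for.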

Disagreements between Fisher and Neyman and Pearson soon escalate into open antagonism. No, seriously: reading accounts of these debates, you get the sense that Fisher’s true talent wasn’t in biology or statistics, but in expressing his ego, mostly through yelling.

However, despite the controversy, the two frameworks are soon combined and presented as one in research methods textbooks. What emerges is an enormously and immediately influential model of statistical testing that incorporates Fisher’s null hypothesis, Neyman and Pearson’s alternative hypotheses, and a focus on observing p-values less than 0.05.

So when we talk about p-values and things like p-hacking, we’re talking about a method for evaluating the difference between groups of data that was designed by an evolutionary biologist and eugenicist for use in agriculture. We’re also talking about a debate over what this number means and how to use it that has been ongoing for more than ninety years.

Cover Art

If Bold Signals were to have a central thesis, it would be that science is people.

At the time of this writing, I have recorded the first five interviews for the third season of Bold Signals. I asked a social psychologist about studying unconscious biases and promoting reproducible research, chatted with a physicist about traveling long distances to examine tiny particles, listened to a neuropsychologist and a cognitive ecologist describe their work studying behavior in subcortical structures and in prairie voles, and talked with a science journalist about what it means to write about science from the outside. These interviews cover a lot of ground, but they’re united by the notion that science is a fundamentally human enterprise.

Look forward to all this and more, as Bold Signals returns for its third season on December 12th!

What does it mean to talk about science?

Since November 8th, I’ve thought a lot about what it means to talk about science in an environment full of fake news and “post truth”. As a scientist turned librarian, I want to believe that the accuracy and integrity of information matters. As a person who reads the news and lives in the world, I’m not sure what to believe.

I’ve never been shy about the fact that the whole point of this podcast is to show that science is populated by a diverse group of hard-working, dedicated people. In an environment where political figures are calling for an end to “political correctness” and “politicized” science, I think it’s important to show that science, like politics, is all about people.

So, in addition to the usual interviews, this season will feature two regular segments born of my anxiety about what it means to talk about science today.

Scenes from a Replication Crisis: I’ve usually used the first segment of each episode to muse about current events in science. From my thoughts on a controversial new book to my concern about a breach of research ethics, I have always grounded these segments in science’s present. Instead, in an effort to further understand how science is a product of people, I’ll use this segment to explore science’s past and examine the pressures and incentives that have led to science’s ongoing “replication crisis”. Listen as I describe the life and times of Ronald Fisher and try to make the history of p-values interesting to people without a background in statistics or epistemology!

Bold Signals Documentary Club: If the first segment of the podcast is explicitly about the evolution of scientific methods, the last is about how we talk about science. Last season I used this segment as an excuse to read a bunch of science fiction. This season, I’ll be watching documentaries and exploring the ways in which science is presented and perceived. In the first batch of episodes, I’ll be reviewing Cosmos: A Personal Voyage. Listen as I critique probably the most beloved science documentary series of all time!

A Note on the Release Schedule

If you were paying close attention, you may have noticed that the release schedule started to slip at the end of season two. I handle every aspect of producing the podcast myself and, though it may not sound like it, a lot of time and effort goes into each episode of Bold Signals. After moving across the country and starting a new job earlier this year, it became difficult to release a new episode every other Friday.

I’d like very much to keep to a regular release schedule, but I have another pretty major life event set to occur near the end of the year. I’m not sure to what extent this will affect the podcast’s release schedule, but I suspect it’ll be rather irregular this season. Sorry for the inconvenience and thanks in advance for your patience.