Science In Our Lives

The Internet has provided a powerful forum for science educators. Popular figures like Neil deGrasse Tyson and Bill Nye routinely use YouTube, Twitter, and other platforms to stress the importance of scientific research. Teachers can use digital textbooks, simulated laboratories, and powerful computational tools like MATLAB to make their lessons increasingly collaborative and accessible. But at the same time, the Internet has given support and resources to professional science deniers. The computer both giveth and taketh away knowledge.

It’s important, then, for people with scientific training to use the tools at their disposal – both online and offline – to defend the scientific method and the discoveries it has yielded. You don’t need a PhD in natural or applied sciences to do that. Bill Nye doesn’t have a PhD, and he’s a master science educator. And luckily at Tangents, we have our own scientist on staff.

Casey Camire is a professional chemist, NASA enthusiast, and amateur historian. Tangents spoke to Casey to get his take on which scientific principles everybody needs to know.

TANGENTS: Casey, how would you define the scientific method?

CASEY: To define the scientific method, we must first define science. Science, in a fundamental sense, is the set of theories that best describe reality as we can most accurately measure or observe it. It’s not a belief system that operates on faith – it’s a system in which I precisely describe the conditions under which I observed something, and you are able to replicate those conditions and make the same observation. Science is, in essence, about one thing: reproducibility. I do x and get y; if my methods were good, you do x and get y too.

The scientific method, therefore, is the system by which we refine the set of theories that, to the best of our ability to observe and measure, approximate reality most accurately. The key to the scientific method is being observant. The first step is simply to observe something and wonder why it might be happening. A classic example is an informal experiment run by the physicist Richard Feynman. He noticed that ants on his kitchen floor, despite setting out in all directions, eventually formed straight lines from a food source to the colony. He was perplexed: ants have no notion of geometry, so they shouldn’t be able to “draw” straight lines right away.

Those are the first two steps – observe and ask. Once you’ve asked a question, you can either generate a falsifiable hypothesis (a statement that has a clear true or false answer), or you can continue to observe. Feynman continued to observe, tracing the path of the first ant to find food with a colored pencil. He noticed that the path was at first quite “squiggly,” but that over time, as successive ants retraced it, they flattened it out. Having made more precise observations, he then hypothesized that an ant following a pheromone trail laid by an earlier ant will stay on the trail until it begins to curve, at which point the ant will “drift” off in a straight line until it picks the trail up again. He tested this hypothesis by overlaying the different traces he had made of ants moving to and from a food source, and sure enough, successive ants would “drift” from the curved sections of the trail, cutting across them in straight lines until they rejoined it. Over time, the line became quite straight.

What’s important there is that Feynman’s hypothesis is falsifiable. He could be right, or he could be wrong. He could have overlaid the paths he traced, found no discernible pattern, and rejected his hypothesis. In fact, in science it is quite common to assume the negative of your hypothesis. This is called the null hypothesis; it assumes that nothing is happening. With a null hypothesis, as you run your experiments or make your observations, you collect evidence that something is indeed happening, and the probability that “nothing” explains your data shrinks until you can say with reasonable certainty that not nothing is happening. Very often that’s enough to make progress in a field. Bear in mind that saying not nothing is happening is not the same as saying you know what is happening; you haven’t pinned down the cause, only shown that your data are very unlikely under the assumption of no effect.

Very often the difference between the hard sciences and the soft sciences comes down to how the null hypothesis is used. In hard sciences, like physics, chemistry, or biology, the entire system under investigation can be carefully controlled, and the variables under investigation can be precisely modulated. In those cases, you can say that your independent variable (what you are changing) is responsible for the changes observed in your dependent variable (what you are measuring). In the soft sciences, such as psychology, sociology, or ecology, you cannot control all parameters, so finding that not nothing is indeed happening (without necessarily saying what that is) is more useful, as it points to areas deserving closer observation.
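That “not nothing” reasoning can be made concrete with a toy example. The sketch below is purely illustrative (a hypothetical coin-flip experiment, not anything from the interview): it takes “the coin is fair” as the null hypothesis and computes how probable a result at least as lopsided as the one observed would be if nothing were really happening.

```python
import math

def two_sided_p_value(n, k, p=0.5):
    """Probability, under the null hypothesis (a fair coin when p=0.5),
    of seeing an outcome at least as unlikely as k heads in n flips."""
    def pmf(i):
        # Binomial probability of exactly i heads in n flips.
        return math.comb(n, i) * p**i * (1 - p)**(n - i)
    observed = pmf(k)
    # Sum the probabilities of every outcome no more likely than ours.
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed + 1e-12)

# Null hypothesis: nothing is happening -- the coin is fair.
# Observation (hypothetical): 42 heads in 50 flips.
p_value = two_sided_p_value(50, 42)
print(f"p-value under the null: {p_value:.2e}")
```

A p-value this small lets you reject the null and say, with reasonable certainty, that not nothing is happening – the coin is almost certainly biased somehow – without yet saying what is causing the bias.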

Observing, asking questions, and forming hypotheses are the first three steps of the scientific method. I’ve already alluded to the fourth step, which is analysis of the data and observations gathered. Proper analysis allows you to accept or reject your hypothesis. The final step, drawing a conclusion, requires you to summarize your findings and build the case for why you either accepted or rejected your hypothesis.

A key part of good science, beyond following the method above, is reproducibility. In modern times, this takes the form of peer review, wherein journals that publish generally accepted science subject submitted papers to a panel of experts, who scrutinize how clearly the hypothesis is explained, how well the experiments were designed, how well the data were collected and analyzed, and how valid the conclusions drawn from that analysis are. Ideally, published science should be reproducible: I should, based on your publication, be able to reproduce your experiment and get the same results. If I can, congratulations – you’ve added to our understanding of reality. You’ve refined the tool by which we describe existence.

TANGENTS: This may sound odd, but let’s ask the implications question: So what? Why does the scientific method matter?

CASEY: This is all very important because, generally speaking, humans aren’t very good at it. Perhaps the biggest obstacle to building accurate models of reality is our psychological biases. We like to think we’ve got things figured out when we haven’t, and we like to think something is happening when nothing is. The scientific method, when followed properly (and again, us being human, it often isn’t), has proven to be a reliable, reproducible way of expanding our understanding and description of existence. Taken into your everyday life, it can be a useful tool for keeping your own biases at bay. By remaining skeptical and trusting in evidence, you’ll be able to sort “10 Weight Loss Tricks That Doctors DON’T Want You To Know” from ways to actually take care of yourself, for example. In today’s world of information overload, we must resist the urge to find patterns in noise, ascribing events to causes that are unrelated. A solid grasp of what science means and how it’s done is one of the best ways to improve your understanding of the world around you. As Carl Sagan said, “It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.” Just look how much has changed since we as a species got the hang of this method – and think of how much more we can do.

TANGENTS: Sagan was an optimist, and a master communicator. Are there scientific discoveries that you think everyone, regardless of their profession, should understand?

CASEY: I would say that being aware of specific scientific discoveries is less important than having some of the same mental furniture that scientists have. One of the most important pieces of mental furniture is a realistic sense of scale. Scale is something the average person struggles with, especially at sizes much smaller or much larger than those found in everyday life. For example, let’s just consider the stars. The nearest star to us, the sun, is approximately 8 light-minutes away. Light, moving at about 186,000 miles per second, still takes over 8 minutes to reach Earth. To put that into perspective, light could circle the Earth about 7.5 times in one second – yet it still takes more than 8 minutes to reach us from the sun. And that’s just our nearest star. There are stars thousands of light-years away, and entire galaxies millions of light-years away. When you look up at the sky, you may be seeing photons that left their source before modern humans existed.
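Those figures are easy to sanity-check yourself. The snippet below uses standard round numbers (roughly 186,000 miles per second for light, 93 million miles to the sun, and a 24,900-mile Earth circumference – approximations chosen for illustration, not values from the interview) to reproduce the back-of-the-envelope arithmetic:

```python
# Rough, commonly cited figures -- approximations for a sanity check.
SPEED_OF_LIGHT_MI_PER_S = 186_000
SUN_TO_EARTH_MI = 93_000_000
EARTH_CIRCUMFERENCE_MI = 24_900

# How long sunlight takes to reach Earth, in minutes.
travel_minutes = SUN_TO_EARTH_MI / SPEED_OF_LIGHT_MI_PER_S / 60
print(f"Sunlight travel time: about {travel_minutes:.1f} minutes")

# How many times light could circle the Earth in one second.
laps_per_second = SPEED_OF_LIGHT_MI_PER_S / EARTH_CIRCUMFERENCE_MI
print(f"Circumnavigations per second: about {laps_per_second:.1f}")
```

Dividing distance by speed gives roughly 8.3 minutes for sunlight to arrive, and dividing light’s speed by Earth’s circumference gives roughly 7.5 laps per second.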

Think to yourself – those photons left their star, traveled literally billions, trillions of miles, avoiding everything in their path for millions of years, until they were intercepted, perfectly, by the retina in your eye, something that itself perhaps wasn’t quite finished evolving when that star ejected that photon, and your retina, the beautifully tuned chemical machine it is, fired off a chemical signal that made its way to your brain, a structure only very recently evolved, and your brain, perhaps the finest instrument in all existence, perceived that photon as “starlight.” All of those systems, the nuclear fusion of a star that perhaps no longer exists, the vast distance of interstellar space, the lonely sojourning photon, your finely tuned retina, and your masterfully perceptive brain, all in motion, all so impossibly well-placed that, for an instant, you experience what was millions of years ago.

Essential to all of that is scale. The ability to perceive just how large the universe is, just how old our sun and planet are, and just how incredibly complex the systems of life are – and the ability to wonder at our own wonder – all comes from appreciating scale. From scale, you can gain objectivity; from objectivity, you can begin to better appreciate any scientific principle you’d like.