Cognitive Science: An Introduction/What is Science?

Cognitive science is an interdiscipline: it uses methodologies from multiple disciplines to study problems those disciplines consider important. Although some of these sub-disciplines are not traditionally thought of as sciences (e.g., philosophy and sometimes linguistics), cognitive science in general is considered a scientific field.

"Casual observation" is what most of us do all day. It is an unsystematic noticing of patterns in our world. For example, you might notice that people's memories aren't always perfect. Sometimes people forget things, particularly if they haven't thought of them in a while. From this you might make a theory.

A theory is a broad description of how a part of our world works. A hypothesis is a prediction of how a particular experiment will turn out.

For example, your theory of gravity might be that unsupported objects fall toward the center of the Earth. A hypothesis might be that if you let go of a jar of peanut butter held five feet off the ground, it will drop toward the floor. Note that a single theory predicts many, many hypotheses. You test hypotheses in experiments, and use the results to support, contradict, or modify the original theory. For example, the simple theory proposed here predicts that if you let go of a helium balloon five feet off the ground, it too will drop to the floor. It would not, which shows that the original theory is too simple. (By the way, on the Moon, where there is no atmosphere to provide buoyancy, a helium balloon *would* fall to the ground.)

In practice, sometimes people use the words "theory" and "hypothesis" interchangeably, but the distinction is important to recognize.

Hypotheses are tested using experimental or quasi-experimental methods. In a quasi-experiment, you find a situation in the world that tests your hypothesis (sometimes called a "natural experiment") and carefully observe the outcome. This is often done in fields such as political science, where scientists cannot set up experimental conditions (e.g., setting up two countries with two different governments). They just have to wait until political systems appear, and observe what happens. Similarly, we can't ethically damage people's brains to see how they behave, so we wait until people get brain injuries, and then observe. In general, quasi-experiments are done when experimentation would be unethical or too expensive, in time or money.

An experiment is a condition you set up in the world, with a careful way of observing how particular variables turn out. To test a theory of forgetting, you might bring people into the laboratory, ask them to remember lists of words, and later observe how well they remember those words.

The "systematic" part is very important, because people have many biases. As we will see in the "cognitive myths" chapter, there is no effect of the full moon on human behavior. But because of confirmation bias (as we will see in the "cognitive biases" section), casual observation can lead people to believe that such a hypothesis has been supported. Only through careful, systematic observation can we tell whether something is really happening. This systematic observation is key to science, and one of the reasons it has worked so much better than many other ways of forming beliefs about the world.

The results of experiments often don't turn out the way you expected, and this often means that there's something wrong with the theory. So, using the results, the scientist changes the theory and devises another hypothesis and experiment to test it.

Above we talked about how an individual scientist behaves, but another important aspect of science is that it happens as part of a scientific community. Scientific results are communicated to other scientists (published). In particular, experiments are described so that other scientists can attempt to "replicate" the study. So if another scientist disagrees with your theory, she can try (and sometimes fail) to replicate your experiment, or run a slightly different version of it to shed light on the overall theory.

Ideally, this replication can catch errors and fraud. It's science's built-in error detection mechanism. This is why it's so important for the results of experiments to be available to other scientists.

Because theories are under constant revision, ideally getting better and better, scientists are often loath to talk about theories being "proven." Scientists admit that all scientific theories are works in progress, and that no particular theory should be considered the complete, ultimate truth. Contemporary students of science are not taught to revere the findings of the great scientists and leave them unquestioned; rather, they are trained to stand on those scientists' shoulders, find out more, and perhaps even show that they were wrong. This is an enormous intellectual break with other cultural views of our relationship to knowledge and authority.

Science is a shockingly recent phenomenon. It only got started about 500 years ago, around 1500 CE, in Europe. Before that, it was generally inconceivable that we could fruitfully put effort into making new medicines and other technologies.[1]

Some have suggested that an important part of the scientific revolution was our admission of ignorance. Before it, dominant philosophies held not that they knew everything, but that they knew everything worth knowing. Science not only gave humans a powerful tool for discovering things about the world; it also gave us the mindset that there are countless interesting questions about the natural world.[1]