My emerging philosophy of science (part 1)

In his book The Invention of Science David Wootton argues that “the idea of discovery is … a precondition for the invention of science” (p.54) and, linked with this, that the idea of discovery itself had to be invented. He points to the works of Francis Bacon, which “built a philosophy of science around the idea of discovery” (p.91), and to the ways in which Bacon held up Columbus as the model for this process. Importantly, Wootton – a historian – argues that there was a need to foster a new attitude: the conviction that there are important discoveries to be made, in contrast to the belief that there is no such thing as new knowledge and that past authorities (such as Aristotle) had already learned everything and shouldn’t be doubted. In this sense, a new philosophy was needed to help overcome the barriers to the new forms of inquiry we now call science.

I recently noticed that, a bit like Bacon’s, my own emerging (partly formed) philosophy of science is grounded in a few core ideas from which implications follow.

One of the key things I believe ought to inform a philosophy of science is the body of findings that has emerged from cognitive science and psychology over recent decades. Given that science and research are done by people, we ought to consider the latest findings on human cognition and psychology and what they might mean for the research process.

Some of the key ideas to emerge are those of cognitive mechanisms, biases and heuristics, and new conceptions of the human mind, such as the modularist conception proposed by cognitive scientists Hugo Mercier and Dan Sperber in their book The Enigma of Reason. We appear to be gaining a stronger understanding of both the cognitive competencies that human beings share and the cognitive issues we also share. As an example of the latter, consider research on causal illusions, in which humans erroneously reach causal conclusions. Research psychologists Fernando Blanco and Helena Matute argue that extensive experimental research has shown that causal illusions are “part of a healthy and well-adapted cognitive system” (Blanco & Matute, 2018, p.67). If correct, these findings mean that healthy normal human beings “are made to hold false beliefs and to jump to conclusions on the basis of very little evidence” (p.67).

Regarding cognitive mechanisms, the capacity of human beings to reason and related cognitive tendencies stand out as absolutely crucial. Mercier and Sperber present convincing evidence that reason can be understood as a kind of inferential mechanism (of which the human mind is believed to contain several) which automatically produces intuitive inferences about reasons for adopting a conclusion. Such inferential processes are common in the research process (e.g. when interpreting evidence, forming arguments, etc).

Whilst there is evidence that such inferences are commonly drawn in a broadly rational manner (Mercier and Sperber, 2017), there is also strong evidence that reason is “geared to the retrospective use of reasons for justification” (p.147) – i.e. justifying conclusions already reached (via backward inference) – and that reason “is very much biased in our favor” (p.145). Regarding the latter aspect, for instance, “reasons that come easily to our mind are likely to confirm or even strengthen our initial intuitions” (p.146).

A simple way to put such issues is that, unless we’re strongly pushed to examine our reasons for believing something, reason tends to confirm things we already believe (myside bias) or that we want to be true (similar to motivated reasoning). Mercier and Sperber present cases showing that scientists and experts are also prey to such biases, which can prevent even-handed consideration of alternative hypotheses or competing theories.

In my doctoral thesis I made related arguments about the importance of the reasoning dimensions of knowledge practices, and about how theories of cognitive mechanisms and biases can be used to enhance these practices. These arguments drew on the theory of reasoning proposed by Mercier and Sperber and evidence presented in their book.

Based on related analysis of real-world cases and experimental evidence, Mercier and Sperber suggest that the social context in which science is done is crucial. For them, the social dimensions of reasoning matter greatly: whether a scientist has “serious interlocutors” (p.326) and “skeptical peers” (p.327), and whether they anticipate, or are forced to address, strong counterarguments. This interactionist view of human reason seriously questions the mythology of the solitary genius scientist.

I have come to agree with this analysis. Typically (there may be exceptions), if science is to provide a reliable way of seeking truth, then the social context must minimise the extent to which inquiry and conclusions are biased. Moreover, we should expect argument development to be biased and lazy, and interpret arguments accordingly; we must hope that the collective process of exchanging arguments brings us closer to the truth, and do whatever we can to address social factors that may hamper the exchange of reasons.

Regarding other cognitive biases and tendencies – such as the illusion of causality – it also follows that, for causal questions, we ought to be very cautious when reaching conclusions (knowing that we have the tendency to jump to conclusions too soon) and, secondly, that we typically should place differing value on different types of evidence, according to whether it reasonably warrants making causal inferences and according to the risk that it could provide misleading information. As I learned more about biases like causal illusions, I found myself both gaining a stronger appreciation of the importance of methods and practices that can reliably inform causal inferences and becoming more cautious when making causal claims, though such biases are hard to overcome.

Perhaps greater public understanding of cognitive science will provide a basis for greater public caution regarding some inferences and for placing value on expert knowledge.

The issues, theories and concepts noted above only scratch the surface. Researchers have identified a vast array of different cognitive biases, many of which are likely to influence research processes, given that they tend to shape the way people interpret evidence and information (more broadly) and/or shape inferential processes.

This appears to support and necessitate a somewhat new philosophy of science that is both justificatory (about why we need science) and somewhat prescriptive (about how inquiry ought to be conducted and organised taking into account new knowledge of cognitive mechanisms, biases and heuristics). I said ‘somewhat new philosophy’ because many Enlightenment and pragmatist philosophers placed cognitive limitations and capacities at the centre of their philosophising. The difference is that today we have access to findings from cutting-edge cognitive science and other psychological sciences.

UPDATE (March 21, 2018): It turns out that the philosopher of science Kevin deLaplante has articulated a very similar perspective grounded in two ideas: 1) human beings are prone to biases that lead them to make errors (justificatory idea); and 2) scientific methodology aims to neutralise the effects of biases and thereby reduce error (prescriptive idea).