
Yesterday marked the tenth anniversary of my starting to blog. For the first seven years I wrote on my university’s blogging platform, and then three years ago I was invited to join the FreethoughtBlogs network. Initially I just felt the need to try blogging but was not at all sure what form the blog would take or what I would write about. But I settled fairly quickly into a rhythm and, though there have been some minor changes over time, it has basically ended up as me writing about whatever I felt was worth writing about at the moment, mainly to clarify my own ideas about those issues with the help of the commenters. I have been impressed with the knowledge and insights that many readers have provided.

“First, in Part II you discuss the concepts of Know-How and Know-Why. I am curious as to what extent these concepts might be applied to understanding the differences between the Hard Sciences (Physics, Chemistry, &c.) and the Soft Sciences (Psychology, Sociology, &c.). Are what we call Soft Sciences sciences at all?”

Science has considerable prestige as providing reliable knowledge and as a result many fields of study aspire to that label. But the issue of what distinguishes science from non-science is as yet unresolved. The know-how/know-why distinction of Aristotle ceased to be considered viable as a means of distinguishing science from non-science when Newton came along. His laws of motion and gravity were spectacularly successful in explaining the motion of objects, especially the solar system. He thus provided the ‘know-why’ that had been previously missing from the purely empirical field of astronomy, lifting it into the realm of science.

But Newton’s laws had serious know-why deficiencies of their own because they had no explanation for why distant inanimate objects exerted forces on each other. Up until then, forces were believed to be exerted by contact, and the introduction of mysterious forces that acted at a distance was something of an embarrassment. But the immense achievement of unifying our understanding of celestial and terrestrial motion led many to deem what Newton had done to be unquestionably science, despite its lack of know-why for its major elements. Know-why ceased to be a requirement for science. This development was in some sense inevitable because we now realize that every theory is based on some other theory and that at some point we just have to say ‘and that’s just the way it seems to be’, without being able to elucidate any further.

The search for better ways to distinguish science from non-science went on. To be able to definitely say whether something belongs in some category or not (whether it be science or anything else) requires one to specify both necessary and sufficient conditions for belonging in that category. We can specify some necessary conditions for science. It needs, for example, to be empirical, predictive, and materialistic, and Thomas Kuhn added the condition that it also work within a paradigm. But suitable sufficient conditions are much harder to come by.

If a theory fails to meet the necessary conditions threshold, it means that it is definitely not science, which is why so-called ‘intelligent design theory’ has been deemed to be not science. But meeting the necessary threshold only allows us to conclude that the theory could be science, not that it definitely is.

This inability to say definitely that something is a science has not proven to be a problem for those areas (the so-called ‘hard sciences’ such as almost all areas of physics, chemistry, and biology) that are, by broad consensus, unambiguously considered to be science, because nobody except philosophers of science cares whether they meet any criteria or not. But it has proven problematic for the soft sciences, where there is no such unanimity. The scientific status of some areas of physics (such as string theory) has also been challenged on the grounds that they have not yet generated any predictions that can be tested empirically.

“Second, in Part VII you use the electron as an example of a universal claim that can never be proven because we can never test each and every electron in the universe. I wondered if it would be possible to make the claim that any particle that does not have the mass and charge of an electron is not an electron in the same way that we can state that any atom that does not have solely a single proton is not Hydrogen?”

You can define away the immediate problem by saying that the electron is a particle having a set number of properties. But this simply defers the problem. It does not let us off the hook because we cannot say (for example) that every hydrogen atom has one of these particles because we cannot test each and every atom to see if that is true. We simply have to make the universal claim that it does, and that cannot be proven either.

“Third, in Part X you write “that however much data may support a theory, we are not in a position to unequivocally state that we have proven the theory to be true.” Where does this leave Laws such as the laws of gravity and thermodynamics? Do we no longer speak of Laws as such?”

The terms ‘law’ and ‘theory’ have no epistemological ranking, in that there is no sense in which a law is truer than a theory. For example, Newton’s laws of motion are known to have limited validity, failing when it comes to the very small or the very fast, while Einstein’s theories of special and general relativity are believed to have no violations.

What gets called a ‘law’ and what gets called a ‘theory’ differ in what they imply, though accidents of historical naming can also play a role. A law tends to be an empirical universal generalization of observed relationships between measurable quantities. So the law of conservation of energy says that if we were to measure the sum of all the energy components of a closed system at one time, that total will remain the same if we measure all the components at another time. Newton’s laws of motion give us the relationships between forces and mass and acceleration. Boyle’s law gives us the relationship between the pressure and volume of a gas. These are all empirical generalizations and none of them try to explain why these relationships hold true.
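The bookkeeping that the conservation law describes can be made concrete with a minimal numerical check. This is my own illustrative sketch (Python, with numbers I have chosen; the post itself contains no code): for a ball falling freely from rest, kinematics gives v² = 2g(h₀ − h), so the kinetic plus potential energy measured at any height equals the initial total.

```python
# Illustrative energy bookkeeping for a freely falling ball (no air
# resistance assumed; the numbers here are arbitrary choices).
g, m, h0 = 9.8, 2.0, 10.0          # gravity, mass, drop height
total = m * g * h0                  # all energy is potential at the start

for h in (10.0, 7.5, 5.0, 2.5, 0.0):
    pe = m * g * h                           # potential energy at height h
    ke = 0.5 * m * (2 * g * (h0 - h))        # kinetic energy, from v^2 = 2g(h0 - h)
    # The sum is the same at every height: energy is conserved.
    assert abs((ke + pe) - total) < 1e-9

print("energy conserved at every height")
```

The point is not the physics, which is elementary, but the form of the law: a relationship between measured quantities that holds at every instant, with no explanation offered for why.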

A theory, on the other hand, consists of a more complicated explanatory structure that specifies the elements of the system that it deals with, as well as how those elements behave and the relationships among them. A theory might be able to explain what undergirds a law, though it rarely proves it because of the many extra assumptions that are needed. So, for example, the kinetic theory of gases tells us what elements comprise an ideal gas and how they interact with each other and their container. Using that theory, we can understand where Boyle’s law comes from. Similarly, Noether’s theorem tells us that the conservation of energy is connected to the invariance of the laws under time translations, i.e., that the laws of science do not change with time.
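To sketch how a theory undergirds a law, here is the standard textbook kinetic-theory result (not derived in the post): for N molecules of mass m with mean-square speed ⟨v²⟩ in a container of volume V,

```latex
% Pressure of an ideal gas from the kinetic theory (standard result):
PV = \tfrac{1}{3}\, N m \langle v^{2} \rangle .
% At a fixed temperature, \langle v^{2} \rangle is fixed, so for a given
% sample of gas the product PV is constant -- which is Boyle's law,
% now explained by the theory's microscopic picture of molecular collisions.
```

Boyle’s bare empirical regularity thus falls out of the theory’s picture of what a gas is made of and how its parts behave.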


The roots of religion lie in deep evolutionary history. The book Why we Believe in God(s) by J. Anderson Thomson with Clare Aukofer (2011) marshals the evidence from psychology and neuroscience to argue that the tendency to belief in supernatural agencies by itself has no survival value but that it exists because it is a by-product of qualities that evolved for other purposes and which do have survival value, such as the tendency to detect agency behind natural events.


Theologians often try to claim that they can arrive at eternal truths about god by using pure logic. In some sense, they are forced to make this claim because they have no evidence on their side but it is worthwhile to examine if it is possible to arrive at any truth purely logically. If so, we can see if that method can be co-opted to science, thus bypassing the need for evidence.

In mathematics, there is one way to prove that something is true using just logic alone and this is the method known as reductio ad absurdum or reduction to absurdity. The way it works is like this. Suppose you think that some proposition is true and want to prove it. You start by assuming that the negation of that proposition is true, and then show that this leads to a logical contradiction or a result that is manifestly false. This would convincingly prove that the starting assumption (the negation of the proposition under consideration) was false and hence that the original proposition was true.

The most famous example of this kind of proof is the simple, short, and elegant proof of the proposition that √2 (the square root of 2) is NOT a rational number. I believe that everyone should know this beautiful proof and so I will give it here.

This proof starts by assuming that the negation of that proposition is true, i.e., that the square root of two IS a rational number. You can then show that this assumption leads to a logical contradiction, as follows.

A rational number is one that can be written as the ratio of two integers. For example, the number 1.5 is rational because it can be written as 6/4, 12/8, 3/2, and so on. Similarly 146.98 is a rational number because it can be written as 14698/100. In contrast, the famous number π=3.1415927… is not a rational number. It cannot be written as the ratio of two integers since the number does not terminate AND there is no repeating pattern of digits.

(As a slight digression, to see why an infinite but repeating pattern is a rational number, take the number 4.3151515… where the sequence 15 is repeated indefinitely. Call this number y. If we multiply y by 10, we get 10y=43.151515… If we multiply y by 1000, we get 1000y=4315.151515… Subtracting 10y from 1000y, we get 990y=4272 exactly, since the repeating numbers after the decimal points are equal in both cases. Hence y=4.3151515… =4272/990 exactly and is thus a rational number. Similar reasoning can be applied with any repeating sequence.)
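The arithmetic in this digression is easy to check mechanically. A short Python sketch (mine, not the author’s) using the standard-library Fraction type, which works with exact ratios of integers:

```python
from fractions import Fraction

# y = 4.3151515... with "15" repeating. Per the trick above, subtracting
# 10y = 43.151515... from 1000y = 4315.151515... cancels the repeating
# tails, leaving 990y = 4272 exactly.
y = Fraction(4272, 990)

print(y)                  # Fraction reduces to lowest terms: 712/165
print(f"{float(y):.7f}")  # 4.3151515
```

The decimal expansion of the exact fraction reproduces the repeating pattern, confirming that the repeating decimal is indeed rational.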

So IF √2 is a rational number, then it can be written as the ratio a/b, where a and b are integers. We then make sure that the ratio has been ‘simplified’ as much as possible by getting rid of all common factors. For example in the case of 146.98 discussed above, the ratio 14698/100 can be simplified to 7349/50 by cancelling the only common factor that the numerator and denominator share, which is the number 2. In the case of 1.5, the ratio we would use is 3/2, since the others have common factors.

So our starting assumption becomes that √2=a/b where a and b are integers that do not have any common factors. We can now multiply each side by itself to get 2=a²/b². Hence a²=2b². This implies that a² is an even number (because it has a factor of 2). But if the square of a number is even, the number itself must be even (since the square of an odd number is always odd). Hence a=2c, where c is also an integer. This leads to (2c)²=2b² and thus b²=2c². This implies that b² is an even number and hence b is also an even number. Thus b also has a factor of 2, and we have arrived at the conclusion that a and b both have the common factor 2. But this contradicts what we did at the start of the proof, where we got rid of all their common factors. We have thus arrived at a logical contradiction. Hence our starting assumption that the square root of 2 is rational must be wrong. Since there are only two possible alternatives (the square root of 2 is either rational or not rational), we can conclude that it is not rational.
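A finite computation cannot prove the result, but it can illustrate it. Here is a small Python sketch (my own, not from the post): an exhaustive search for a fraction a/b in lowest terms with a² = 2b² comes up empty, and the proof above explains why no search bound, however large, will ever change that.

```python
from math import gcd

def rational_sqrt2(max_b):
    """Search for integers a, b (in lowest terms, b <= max_b) with a^2 = 2*b^2.

    The reductio proof shows this can never succeed; the search merely
    illustrates that fact for small denominators.
    """
    for b in range(1, max_b + 1):
        # sqrt(2) lies between 1 and 2, so a must lie between b and 2b.
        for a in range(b, 2 * b + 1):
            if a * a == 2 * b * b and gcd(a, b) == 1:
                return (a, b)
    return None

print(rational_sqrt2(1000))  # None: no fraction a/b squares exactly to 2
```

Note the contrast with the proof itself: the code checks finitely many cases, while the logical argument rules out all of them at once.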

Note that we have proven a result to be true without appealing to any experimental data or the ‘real’ world. As far as I am aware, the only way to prove that a proposition is true using pure logic alone is of this nature, to show that the negation of the proposition leads to a logical contradiction of this sort.

Philosophers and theologians down the ages have tried to apply the reductio ad absurdum argument to prove the existence of god using logic alone. But the problem is that assuming that there is no god does not lead to a logical contradiction. So instead they appealed to what they felt was manifestly true, that the assumption that god did not exist meant that the existence and properties of the universe were wholly inexplicable. Almost all arguments for the existence of god are at some level appeals to this kind of incredulity.

But this is not a logical contradiction, since they are after all appealing to the empirical properties of the universe. In days gone by when much of how the world works must have seemed deeply mysterious, this subtle equating of empirical incredulity with logical contradiction may have passed without much notice. Even if what was shown was not strictly a logical contradiction, if the negation of a proposition ‘god exists’ seemed to lead to an obvious disagreement with data in that the properties of the world could not be explained, the negation of the proposition could be rejected, thus proving the original proposition to be true and that god exists.

But those arguments no longer hold since science has explained much of how the world works. Assuming that god does not exist no longer leads to either a logical or an empirical contradiction.

Next: Some concluding thoughts


Karl Popper’s model of falsification makes the scientific enterprise seem extremely rational and logical. It also implies that science is progressing along the path to truth by successively eliminating false theories. Hence it should not be surprising that practicing scientists like it and still hold on to it as their model of how science works. In the previous post in this series, I discussed how Thomas Kuhn’s work cast serious doubt on the validity of Popper’s falsification model of scientific progress, replacing it with a seemingly more subjective process in which scientists switched allegiance from an old theory to a new one based on many factors, some of them subjective, and that this transition had some of the elements of a gestalt switch. This conclusion was disturbing to many.

Another historian and philosopher of science, Imre Lakatos, was one of those concerned that Kuhn’s model of gestalt switches implied a certain amount of irrationality in the way that scientists choose a new paradigm over the old or pick problems to work on. In his major work The Methodology of Scientific Research Programmes (1978) he argued that scientists are rational in the way they choose paradigms, and he proposed a new model (which he called ‘methodological falsificationism’), contrasting it with Popper’s older model (which he called ‘naïve falsificationism’), that he claimed solved some of its difficulties.

In Popper’s naïve falsification model, when there is disagreement between the predictions of a theory and observations or experiment, the theory must be abandoned. Kuhn and Lakatos agree with Duhem that when such a disagreement occurs, it is not obvious where to place the blame for the failure so summarily discarding the theory is unwarranted. In such situations Duhem appealed to the vague ‘good sense’ of the individual scientists and of the collective scientific community to determine what to do. Kuhn refined this by saying that the choice of which direction to proceed is based on whether the scientific community perceives the existing paradigm to be in a crisis or not, and that when there is a crisis, the revolutionary switch to a new paradigm is akin to a gestalt switch, whose precise mechanism is hard to pin down, in which individual scientists suddenly see things in a new way.

Lakatos agrees with Kuhn (and disagrees with Popper) that experimental tests are never simply a contest between theory and experiment. At the very least they are three-cornered fights between an old paradigm, a new emerging rival, and experiment. But he disagrees with Kuhn that a crisis within the old paradigm is necessary for scientists to switch their allegiance to a new one (p. 206). He argues that a new theory is acceptable over its predecessor if it (1) explains all the previous successes of the old theory; (2) predicts novel facts that the old theory would have forbidden or would not even have considered; and (3) some of its novel predictions are confirmed. (p. 227)

Lakatos says that Kuhn places too much reliance on vague psychological processes to explain scientific revolutions and that the process is more rational, that scientists proceed in a systematic way in choosing between competing theories. In Lakatos’ model of methodological falsificationism, he emphasizes that experimental data is never free of theory. An experimental result in its raw form is simply a sensory observation, such as a dot on a screen, a pointer reading on a meter, a click of a Geiger counter, a track in a bubble chamber, a piece of bone, etc., none of which have any obvious meaning by themselves. In order to give them some meaning, we have to use theories that interpret the raw sensory experience. For example, a fossil bone that is found is useless unless one can determine what animal it belongs to and how old it is, all of which require the use of other theories. In addition, we have to assume that our knowledge about the other elements surrounding the raw data is unproblematic.

Meanwhile, a theoretical prediction is never the product of a single theory but consists of a combination of four components: the basic theory being investigated, the initial conditions, various auxiliary hypotheses that are needed to actually implement the theory, plus the invocation of ceteris paribus (roughly meaning “all other things being equal”) clauses. For example, to understand the origins of the Solar System, we need Newton’s laws but we also need to make assumptions about the initial state of the gas (the initial conditions), that the laws have not changed since the time the Earth was formed (an auxiliary hypothesis), and that no other unknown factors played a role in the formation (the ceteris paribus clauses).

Lakatos said that when there is a disagreement between a theoretical prediction and experimental data (where the two are interpreted in these more complex ways), scientists use both a ‘negative heuristic’ and ‘positive heuristic’ to systematically investigate and isolate the cause of this disagreement and that this process is what makes science rational.

The ‘negative heuristic’ says that one must deflect attention away from the ‘hard core’ theory when there is an inconsistency between predictions and experiment. In other words, scientists look for the culprit in all the factors other than the basic theory. The ‘positive heuristic’ consists of “a partially articulated set of suggestions or hints on how to change, develop the ‘refutable variants’ of the research program, how to modify, sophisticate, the ‘refutable’ protective belt.” (p. 243) So the positive heuristic tells scientists how to systematically investigate the initial conditions, auxiliary hypotheses, ceteris paribus clauses, etc., in short everything other than the basic theory. These two strategies protect the basic theory from being easily overthrown. This is important because good theories are hard to come by and one must not discard them too hastily.

Lakatos claims that this process rationally determines how scientists select problems to work on and how they resolve paradigm conflicts (contrasting it with Kuhn’s suggestion that scientists intuitively know what to do in such situations). In some sense, Lakatos seems to be fleshing out the rules of operation that Kuhn refers to but does not elaborate.

Lakatos argues that as long as a basic theory is fruitful and the negative and positive heuristics provide plenty of avenues for people to investigate, thus steadily producing new facts that both advance knowledge and are useful (a state of affairs that he calls a progressive problemshift), the basic theory will be retained. This is why Newtonian physics, one of the most fruitful theories of all time, is still with us even though it would be considered falsified using Popper’s criterion. It is only when the theory runs out of steam, when all these avenues of investigation are more or less exhausted and do not seem to provide much opportunity to discover novel facts, that we have what he calls a degenerating problemshift. At that point, scientists start abandoning their allegiance to the old theory and seek a new one, eventually leading to a scientific revolution.

Next: Truth by logical contradiction


The philosopher of science Pierre Duhem said in his book The Aim and Structure of Physical Theory (1906, translated by Philip P. Wiener, 1954) that despite the fact that there is no way to isolate any given theory from all other theories, scientists are saved from sterile discussions about which theory is best because the collective ‘good sense’ of the scientific community can arrive at verdicts based on the evidence, and these verdicts are widely accepted. In adjudicating the truth or falsity of theories this way, the community of scientists are like a panel of judges in a court case (or a panel of doctors dealing with a particularly baffling set of symptoms), weighing the evidence for and against before pronouncing a verdict, once again showing the similarities of scientific conclusions to legal verdicts. And like judges, we have to try to leave our personal preferences at the door, which, as Duhem pointed out, is not always easy to do.

Now nothing contributes more to entangle good sense and to disturb its insight than passions and interests. Therefore, nothing will delay the decision which should determine a fortunate reform in a physical theory more than the vanity which makes a physicist too indulgent towards his own system and too severe towards the system of another. We are thus led to the conclusion so clearly expressed by Claude Bernard: The sound experimental criticism of a hypothesis is subordinated to certain moral conditions; in order to estimate correctly the agreement of a physical theory with the facts, it is not enough to be a good mathematician and skillful experimenter; one must also be an impartial and faithful judge. (p. 218)

This is why the collective judgment of the community, in which individual biases get diluted, carries more weight than the judgment of a single member, like the way that major legal decisions are made by a jury or a panel of judges rather than a single person.

Duhem’s idea that we are ultimately dependent on the somewhat vague collective ‘good sense’ of the scientific community to tell us what is true and what is false may be disturbing to some as it seems to demote scientific ‘truth’, reducing it from being objectively determined by data to an act of collective judgment, however competent the community making that judgment is. Surely there must be more to it than that? After all, science has achieved amazing things. Our entire modern lives are made possible because of the power of scientific theories that form the foundation of technology. In short, science works exceedingly well. How can it work so well if the theories we have developed were not true in some objective sense?

Such feelings are so strong that people continue to try and find ways to show that scientific theories, if not absolutely true now, are at least progressing along the road to truth. Popper’s idea of falsification seemed, at least initially, to provide a mechanism to understand how this steady progress might be occurring.

It was Thomas Kuhn who delivered the most devastating critique of Karl Popper’s idea that scientific theories can be falsified if a key prediction of the theory turns out to be contradicted by experiment. In Kuhn’s landmark book The Structure of Scientific Revolutions (1962), he pointed out that falsification fails in two ways. One way is an extension of Duhem’s argument, that it is never the case that a pure theoretical prediction based on a single theory is compared with a piece of empirical data. In the event of disagreement, there are always other linked theories that can be blamed. Secondly, even if we accept the idea of falsification at face value, it would not describe actual scientific practice. Kuhn’s book contains a wealth of examples that show how scientists live and work quite comfortably, for decades and sometimes even for centuries, with a theory that has been contradicted by data in a few instances, until finally discarding the theory or resolving the contradictions. As long as a theory seems to be generally working well, scientists are not too perturbed by the occasional disagreement, seeing them as merely unsolved problems and not as falsifying events. In fact, he points out that new theories almost always have very little evidence in support of them and disagree with a lot of data. If Popper’s model were applied rigorously, every theory would be falsified almost from the get-go.

So how do old theories get rejected and replaced by new ones? Kuhn says that during the period of ‘normal science’, most scientists work within a given scientific ‘paradigm’ (which consists of a basic theory plus the rules of operation), picking problems that promise to elucidate the workings of the paradigm. They are not looking to overthrow the paradigm but to stretch its boundaries. In the process, they sometimes encounter problems that resist solutions. If these discrepancies multiply and if a few key ones turn out to be highly resistant to attack by even the best practitioners in the field, science enters a period of crisis in which people start seriously investigating alternative theories. At some point, individual scientists start switching allegiance to a promising new theory that seems to solve some outstanding and vexing problems that the old one failed to solve and this process can begin to snowball. Kuhn suggests that the switch from seeing the old theory as true to seeing it as false and needing to be replaced by the new one is similar to a gestalt switch, a sudden realization of a new truth that is not driven purely by logic.

Kuhn’s views aroused considerable passions. Some anti-science people (religious and non-religious alike) have seized on his idea that scientific revolutions are not driven purely by objective facts to extend his views well beyond what he envisaged and claim that science is an irrational activity and that scientific knowledge is just another form of opinion and has no claim to privileged status. Kuhn spent a good part of the rest of his life arguing that this was a distortion of his views and that scientific knowledge had justifiable claims to being more reliable because of the ways that science operated.

Next: The rational progress of science


The previous post illustrated a crucial difference between science and religion that explains why scientists can resolve disagreements amongst themselves as to which theory should be considered true but religious people cannot agree as to which god is the one true god. In competition between scientific theories, after some time the weight of evidence is such that one side concedes that their theory should be rejected, resulting in a consensus verdict. In religion, since evidence plays no role, and reason and logic are invoked only when they support your own case and discarded by appealing to faith when reason goes against you, there is no basis for arriving at agreement. It would be unthinkable for a scientist to argue in favor of his or her theory by denying evidence and logic and telling people that they must have faith in the theory for it to work.

Science can come to a consensus not because all individual scientists on the losing side change their minds. Some of them can be as dogged as the most fervent believer in god in holding on to their beliefs, and as inventive in finding new reasons for belief, though they will never resort to appealing to supernatural forces or faith. The key difference is that over time, the advocates of a failing theory become less influential, more marginalized, and eventually die out. The next generation of research students choose their areas of study when they are older and more aware of the field, and tend to avoid signing on to failing theories, so that those declining theories eventually fade from the scene, to be found only in historical archives. Unlike in the case of religion, there is no institutional structure dedicated to perpetuating old theories, nor is there a sacred text that must be adhered to. As much as scientists admire the works of Isaac Newton and Albert Einstein and Charles Darwin, they do not treat them as divinely inspired. Science has moved on since they were written and their original theories have been modified and elaborated on, even if they still bear their names. Every generation of students is taught the current version of accepted theories, not the original ones.

In the case of religions, however, they are forced to conform to ancient texts. Furthermore, children are not allowed to choose their religious beliefs when they are of more mature age, the way that research scientists choose which theories they want to work with. Religions indoctrinate the next generation of impressionable children with those ancient beliefs when they are very young, thus ensuring that those beliefs persist. Moreover, there is a vast industry (churches, priests, theologians, etc.) whose very livelihood depends on those ancient religious ideas being perpetuated. Scientists can shift their allegiance from one theory to another without losing their jobs. A theologian or priest cannot. Can you imagine a pope saying that after some thought he has come to the conclusion that there is no god or that Buddhism is the true religion? Hence even though the evidence against the existence of god is far more overwhelming than that against old and rejected scientific theories, theologians will cling on to their old ideas, never conceding that they are wrong, invoking more and more ad hoc hypotheses to justify their beliefs.

This is why science progresses but religions are stuck in a rut, the only progress in the latter being the new excuses that need to be invented to explain why there is no evidence for god, as science makes god increasingly unnecessary as an explanatory concept. In fact, the field of theology largely consists of explaining why there is no evidence for god. Religious believers have the wiggle room to do this because pure logic is never sufficient to eliminate a theory. This is why believers who argue that, because logical or evidentiary arguments cannot disprove the existence of god, it is therefore reasonable to believe that god exists, are saying something meaningless.

In science too we cannot eliminate the phlogiston theory of combustion or the ether or the geocentric model of the solar system by logic or evidence. So how are scientists able to say with such confidence that some theories (like gravity) are true and that others (like ether or phlogiston) are false? Pierre Duhem (The Aim and Structure of Physical Theory, 1906, translated by Philip P. Wiener, 1954) said that we have to appeal to the collective ‘good sense’ of the scientific community as a whole to arrive at a judgment of which theory is better. It is the community of professionals working in a given scientific area that is the best judge of how to weigh the evidence and decide whether a theory is right or wrong, true or false, rather than any individual member of that community, since scientists are like any other people and prone to personal failings that can cloud their judgment, unless they exercise great vigilance over themselves.

Next: How ‘good sense’ emerges in science


In the previous post, I discussed Karl Popper’s idea of using falsification as a demarcation criterion to distinguish science from non-science. The basic idea is that for a theory to be considered scientific, it has to make risky predictions that have the potential that a negative result would require us to abandon the theory, i.e., declare it to be false. If you cannot specify a test with the potential that a negative result would be fatal to your theory, then according to Popper’s criterion, that theory is not scientific.

Of course, I showed that falsification cannot be used to identify true theories by eliminating all false alternatives, because there is no limit to the number of theories that can be invented to explain any set of phenomena. But steadily eliminating more and more false theories surely has to be a good thing in its own right. Falsificationism is highly popular among working scientists because it enables them to claim that science progresses by closing down blind alleys.

But there is a deeper problem with the whole methodology of falsificationism, and it is that even if prediction and data disagree, we cannot infer with absolute certainty that the theory is false, because of the interconnectedness of scientific knowledge. Pierre Duhem pointed out over a century ago that in science one is never comparing the predictions of a single theory with experimental data, because the theories of science are all inextricably tangled up with one another. As Duhem said (The Aim and Structure of Physical Theory, 1906, translated by Philip P. Wiener, 1954, p. 199, italics in original):

To seek to separate each of the hypotheses of theoretical physics from the other assumptions on which this science rests is to pursue a chimera; for the realization and interpretation of no matter what experiment in physics imply adherence to a whole set of theoretical propositions.

The only experimental check on a physical theory which is not illogical consists in comparing the entire system of the physical theory with the whole group of experimental laws, and in judging whether the latter is represented by the former in a satisfactory manner.
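Duhem’s point can be put schematically (a standard modern reconstruction, now usually called the Duhem–Quine thesis; the notation is mine, not Duhem’s). A prediction P never follows from the theory T under test alone, but only from T conjoined with a whole set of auxiliary assumptions A_1, …, A_n about instruments, background theories, and initial conditions:

```latex
% A prediction P follows only from the theory T together with
% auxiliary assumptions A_1, ..., A_n (instruments, background
% theories, initial conditions):
\[
  (T \land A_1 \land \cdots \land A_n) \rightarrow P
\]
% Hence a failed prediction refutes only the conjunction as a
% whole, not T by itself:
\[
  \neg P \;\vdash\; \neg (T \land A_1 \land \cdots \land A_n)
\]
```

Logic tells us only that at least one conjunct is false; it cannot tell us which one is the culprit, and that is the gap that ‘good sense’ has to fill.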

In other words, since every scientific theory is always part of an interconnected web of theories, when something goes wrong and data does not agree with the prediction, one can never pinpoint with certainty exactly which theory is the culprit. Is it the one that is ostensibly being tested or another one that is indirectly connected to the prediction? One cannot say definitively. All one knows is that something has gone wrong somewhere. Duhem provides an illuminating analogy for the difficulty facing a scientist, saying that the work of a scientist is more similar to that of a physician than to that of a watchmaker.

People generally think that each one of the hypotheses employed in physics can be taken in isolation, checked by experiment, and then, when many varied tests have established its validity, given a definitive place in the system of physics. In reality, this is not the case. Physics is not a machine which lets itself be taken apart; we cannot try each piece in isolation and, in order to adjust it, wait until its solidity has been carefully checked. Physical science is a system that must be taken as a whole; it is an organism in which one part cannot be made to function except when the parts that are most remote from it are called into play, some more so than others, but all to some degree. If something goes wrong, if some discomfort is felt in the functioning of the organism, the physicist will have to ferret out through its effect on the entire system which organ needs to be remedied or modified without the possibility of isolating this organ and examining it apart. The watchmaker to whom you give a watch that has stopped separates all the wheelworks and examines them one by one until he finds the part that is defective or broken. The doctor to whom a patient appears cannot dissect him in order to establish his diagnosis; he has to guess the seat and cause of the ailment solely by inspecting disorders affecting the whole body. Now, the physicist concerned with remedying a limping theory resembles the doctor and not the watchmaker. (p. 187)

Duhem is arguing that one can never deduce whether any individual scientific theory is false, even in principle. This seems to fly in the face of direct human experience. Anyone with even a cursory knowledge of scientific history knows that individual scientific theories have routinely been pronounced wrong and been replaced by new ones. How could this happen if we cannot isolate a single theory for comparison with data? How can scientists decide which of two competing theories is better at explaining data if a whole slew of other theories are also involved in the process? Is Duhem saying that we can never arrive at any conclusion about the truth or falsity of any scientific theory?

Not quite. What he goes on to say is that, like a physician, a scientist has to exercise a certain amount of discerning judgment in identifying the source of the problem, all the while being aware that one does not know for certain. Duhem argues that this is where the reasoned judgment of the scientific community as a whole plays a role in determining the outcome, overcoming the limitations imposed by strict logic. While there may be a temporary period in which scientists argue over the merits of competing theories,

In any event this state of indecision does not last forever. The day arrives when good sense comes out so clearly in favor of one of the two sides that the other side gives up the struggle even though pure logic would not forbid its continuation… Since logic does not determine with strict precision the time when an inadequate hypothesis should give way to a more fruitful assumption, and since recognizing this moment belongs to good sense, physicists may hasten this judgment and increase the rapidity of scientific progress by trying consciously to make good sense within themselves more lucid and more vigilant. (Duhem, p. 218, my italics.)

In the next post, I will discuss the importance that the consensus judgment of expert communities plays in science.


In the previous post in this series, I wrote about the fact that however much data may support a theory, we are not in a position to unequivocally state that we have proven the theory to be true. But what if the prediction disagrees with the data? Surely then we can say something definite, that the theory is false?

The philosopher of science Karl Popper, who was deeply interested in the question of how to distinguish science from non-science, used this idea to develop his notion of falsifiability. He suggested that what makes a theory scientific was that it should make predictions that can be tested, saying that “the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.” (Conjectures and Refutations: The Growth of Scientific Knowledge, 1963, p. 48)

Popper’s motivation for doing this was his opposition to the claims of the supporters of Marxism, Freudian psychoanalysis, and Jungian psychology that their respective theories were scientific. He said that those theories seemed to be so flexible that almost anything that happened could be claimed to be in support of the theory. While supporters of these theories used these alleged successes as demonstrating the strength of their theories, Popper argued the converse: that a theory that could never be proven wrong was not scientific. A scientific theory was one that made risky predictions that laid bare the possibility that a negative result would require the discarding of the theory. A theory whose predictions could never be contradicted by any conceivable data was not a scientific theory.
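The logical engine behind falsification is the classical inference rule of modus tollens, which can be written schematically (a standard textbook reconstruction, not Popper’s own notation):

```latex
% Modus tollens: if theory T entails prediction P, and P turns out
% to be false, then T is false:
\[
  \bigl( (T \rightarrow P) \land \neg P \bigr) \;\vdash\; \neg T
\]
% The converse inference (affirming the consequent) is invalid:
% observing P does not entail T:
\[
  \bigl( (T \rightarrow P) \land P \bigr) \;\nvdash\; T
\]
```

This asymmetry between the two inferences is why, on Popper’s view, a single failed prediction can carry more logical weight than any number of successful ones.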

Popper also said that humans were born with an innate tendency to make conjectures, to construct a universal theory based on whatever data was at hand, and that we held on to that theory until it was refuted (or falsified) by new data, whereupon we replaced it with a new universal theory. This process of conjectures and refutations went on all the time and was how science functioned. He claimed that this model also solved the problem of induction, i.e., the question of why we expect that things that have always happened in the past will continue to happen in the future, when logically there is no reason to infer that.

Although Popper’s main goal was to solve what was known as the demarcation problem, i.e., the ability to distinguish science from non-science, his idea of falsifiability seemed to also advance us along the goal of distinguishing truth from falsehood, because if a prediction disagrees with the data, then we can conclude that the theory is false. This feature seems to give us some hope that we can arrive at a true theory by a back door mechanism. If we can enumerate all the possible alternatives to a theory and prove that all but one are false, then the one remaining theory must be true. To quote Sherlock Holmes in The Sign of Four, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”

But that option also proves to be illusory, for a purely practical reason. In science, one can never be sure that one has exhausted all the alternatives. There is no limit to the number of theories that can be postulated to explain any given set of phenomena and so showing one or any number of them to be false does not prove that any of the remaining alternatives are true.
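The Holmes strategy can be written schematically (my reconstruction, not from the source). Eliminative inference is logically valid, but only on the premise that the list of alternatives H_1, …, H_n is exhaustive:

```latex
% Valid only if the disjunction is known to be exhaustive:
\[
  (H_1 \lor H_2 \lor \cdots \lor H_n) \land \neg H_2 \land \cdots \land \neg H_n
  \;\vdash\; H_1
\]
% In science the list is open-ended: some unconsidered H_{n+1} may
% always exist, so eliminating H_2, ..., H_n does not establish H_1.
```

It is the exhaustiveness premise, not the elimination step, that fails in science.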

This is the fatal flaw of the arguments of almost all religious believers, especially the creationists and the intelligent designers. Their strategy is to argue that there are only two possible explanations for some phenomenon, an intervention by god or an explanation based on naturalistic science. For example, in explaining the diversity of life, the competing theories are said to be evolution by natural selection or a designer/god. They would then seek some phenomenon that had not been convincingly explained by the scientific theory that encompasses it, declare that the scientific theory had thus been falsified, and triumphantly conclude that the phenomenon must be the work of a god. But that is a false dichotomy. Even in the highly unlikely event that some day in the future the theory of evolution by natural selection experiences a serious enough crisis that scientists suspect it to be false, this would not imply that ‘god did it’ would be accepted as the true explanation. There will be no shortage of other scientific theories competing to replace the theory of evolution, all of them having at least some supporting evidence.

This kind of flawed argument is what religious believers advance even now, with the current candidates for god’s intervention being the origin of life, the origin of the universe, the mind, consciousness, intelligence, morality, etc. They have no choice but to pursue this fundamentally flawed strategy because they have no positive evidence for god.

Next: Are theories falsifiable?


In mathematics, the standard method of proving something is to start with the axioms and then apply the rules of logic to arrive at a theorem. In science, the parallel exercise is to start with a basic theory that consists of a set of fundamental entities and the laws or principles that are assumed to apply to them (all of which serve as the scientific analogues of axioms) and then apply the rules of logic and the techniques of mathematics to arrive at conclusions. For example, in physics one might start with the Schrödinger equation and the laws of electrodynamics and a system consisting of a proton and electron having specific properties (mass, electric charge, and so on) and use mathematics to arrive at properties of the hydrogen atom, such as its energy levels, emission and absorption spectra, chemical properties, etc. In biology, one might start with the theory of evolution by natural selection and see how it applies to a given set of entities such as genes, cells, or larger organisms.

The kinds of results obtained in science using these methods are not referred to as theorems but as predictions. In addition to the mathematical ideas of axioms, logic, and proof, in science we are also dealing with the empirical world and this gives us another tool for determining the validity of our conclusions, and that is data. This data usually comes either in the form of observations, for those situations where conditions cannot be repeated (as is sometimes the case in astronomy, evolution, and geology), or, more commonly, in the form of experimental data that is repeatable under controlled conditions. The comparison of these predictions with experimental data or observations is what enables us to draw conclusions in science.

It is here that we run into problems with the idea of truth in science. While we can compare a specific prediction with experimental data and see if the prediction holds up or not, what we are usually more interested in is the more basic question of whether the underlying theory that was used to arrive at the prediction is true. The real power of science comes from its theories because it is those that determine the framework in which science is practiced. So determining whether a theory is true is of prime importance in science, much more so than the question of whether any specific prediction is borne out. While we may be able to directly measure the properties of the entities that enter into our theory (like the mass and charge of particles), we cannot directly test the laws and theories under which those particles operate and show them to be true. Since we cannot treat the basic theory as an axiom whose truth can be established independently, this means that the predictions we make do not have the status of theorems and so cannot be considered a priori true. All we have are the consequences of applying the theory to a given set of entities, i.e., its predictions, and the comparisons of those predictions with data. The results of these comparisons are the things that constitute evidence in science.

So what can we infer about the truth or falsity of a theory using such evidence? For example, if we find evidence that supports a proposition, does that mean that the proposition is necessarily true? Conversely, if we find evidence that contradicts a proposition, does that mean that the proposition is necessarily false?

To take the first case, if a prediction agrees with the results of an experiment, does that mean that the underlying theory is true? It is not that simple. The logic of science does not permit us to make that kind of strong inference. After all, any reasonably sophisticated theory allows for a large (and usually infinite) number of predictions. Only a few of those may be amenable to direct comparison with experiment. The fact that those few agree does not give us the luxury of inferring that any future experiments will also agree, a well known difficulty known as the problem of induction. So at best, those successful predictions will serve as evidence in support of our theory and suggest that it is not obviously wrong, but that is about it. The greater the preponderance of evidence in support of a theory, the more confident we are about its validity, but we never reach a stage where we can unequivocally assert that a theory has been proven true.

So we arrive at a situation in science that is analogous to that in mathematics with Gödel’s theorem, in that the goal of creating a system that would let us identify the true theories of science turns out to be illusory.