
The Difference Between Science and Pseudoscience

Michael Shermer, Ph.D.



(The following is excerpted from WHY PEOPLE BELIEVE WEIRD THINGS: Pseudoscience, Superstition, and Other Confusions of our Time (W. H. Freeman, 1997) by Michael Shermer)

The Most Precious Thing We Have: The Difference Between Science and Pseudoscience

“If there is any science that I am capable of promoting, I think it is the science of science itself, the science of investigation, or method.” —John Stuart Mill

Science has made the modern world. It gives us plastics and plastic explosives, cars and tanks, Supersonic Transports and B-1 bombers. Science has put a man on the moon and missiles in the silos. Developments in the medical sciences allow us to live twice as long as people did a mere 150 years ago. But we now have an overpopulation problem, without a corresponding overproduction solution, threatening us more than any single disease in history.

Growth in the physical sciences has given us electricity, computers, lights, automobiles, and lasers. But for the first time we have the combined nuclear, chemical, and biological potential to cause the extinction of the human species. Discoveries and theories in evolution and cosmology have given us insights into the origins of life and humans. But for many people these ideas and their corresponding ideologies threaten traditional personal and religious beliefs and a comfortable status quo.

The part of the world known as the Industrial West could, in its entirety, be seen as a monument to the Scientific Revolution begun over 400 years ago, succinctly captured in a single phrase by one of its initiators: “knowledge itself is power.” When Francis Bacon penned those words in the early 17th century, he was equating two elements that encapsulated the offspring of the Scientific Revolution to which he helped give birth—the scientific method. In his utopian work, New Atlantis, Bacon described his goal for the novum organon, or new instrument of science: “The End of our Foundation is the knowledge of Causes and secret motions of things, and the enlarging of the bounds of Human Empire, to the effecting of all things possible” (1965, p. 447). Through this new instrument Bacon felt that humans would be able to subdue and overcome the miseries holding back humanity. For Bacon, science’s ultimate purpose, then, is “the empire of man over things,” in which you are to “bind [nature] to your service and make her your slave” (p. 375). The following limerick (in sexual metaphor) from Daniel Defoe, sums up this new attitude of dominion in an 18th-century linking of science and technology (1958, p. 178):

Nature’s a Virgin, very chaste and coy.
To court her’s nonsense, if you will enjoy,
She must be ravish’t—When she’s forced, she’s free,
A perfect prostitute to Industry.

The Geometric Growth of Science

The industrial applications of technological developments that have resulted from scientific research have been startling, to say the least. We live in an age of science and technology. The statistics used to represent the tangible perquisites of this most powerful system stagger the imagination. The historian of science, Derek J. de Solla Price, in his book Little Science, Big Science, has observed that “using any reasonable definition of a scientist, we can say that 80 to 90 percent of all the scientists that have ever lived are alive now. Alternatively, any young scientist, starting now and looking back at the end of his career upon a normal life span, will find that 80 to 90 percent of all scientific work achieved by the end of the period will have taken place before his very eyes, and that only 10 to 20 percent will antedate his experience” (1963, pp. 1-2).
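De Solla Price’s striking claim follows from simple arithmetic: under steady exponential growth, the most recent career-span of entrants dominates the cumulative total. The following sketch is purely illustrative; the 15-year doubling period and 45-year working life are assumed round numbers, not figures from de Solla Price’s data.

```python
# Illustrative sketch: why steady exponential growth implies that
# most scientists who have ever lived are alive now.
# The doubling period and career length are assumptions for illustration.

doubling_years = 15          # assumed doubling period for the scientific workforce
career_years = 45            # assumed length of a working life
total_years = 300            # span of sustained exponential growth

growth = 2 ** (1 / doubling_years)   # annual growth factor

# Relative number of new scientists entering the field each year.
entrants = [growth ** y for y in range(total_years)]

ever = sum(entrants)                    # all scientists, ever
alive = sum(entrants[-career_years:])   # those within one career span of now

print(f"fraction alive now: {alive / ever:.2f}")
```

With these assumed numbers the fraction comes out near seven-eighths, squarely inside de Solla Price’s 80 to 90 percent range; the result is nearly independent of how long the growth has been running, since it depends almost entirely on the ratio of career length to doubling period.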

De Solla Price’s conclusions are well supported with evidence. There are now, for example, well over 100,000 scientific journals published each year, producing over six million articles to be digested—clearly an impossible task. The Dewey Decimal Classification now lists well over 1,000 different classifications under the title of “Pure Science,” within each of which are dozens of specialty journals. The graph in Figure 1 depicts the growth in the number of scientific journals, from the founding of the Royal Society in 1662 when there were two, to the present.

Virtually every field of learning shows a similar exponential growth curve. As the number of individuals working in the field grows, so too does the amount of knowledge, creating more jobs, attracting more people, and so on. The membership growth curves for the American Mathematical Society (founded in 1888) and the Mathematical Association of America (founded in 1915) are dramatic demonstrations of this phenomenon.

Regarding the accelerating rate of increase of individuals entering the sciences, in 1965 the Junior Minister of Science and Education in Great Britain made this astonishing observation (Hardison, 1988, p. 14):

For more than 200 years scientists everywhere were a significant minority of the population. In Britain today they outnumber the clergy and the officers of the armed forces. If the rate of progress which has been maintained ever since the time of Sir Isaac Newton were to continue for another 200 years, every man, woman and child on Earth would be a scientist, and so would every horse, cow, dog, and mule.

The rate of increase in transportation speed has also shown geometric progression, most of the change being made in the last one percent of human history. Fernand Braudel tells us, for example, that “Napoleon moved no faster than Julius Caesar” (1979, p. 429). But in the last century the growth in the speed of transportation has been astronomical (figuratively and literally). The salient dates presented in Figure 3 illustrate the rise of transportation speed and show the familiar exponential growth curve.

One final example of technological progress based on scientific research will serve to drive the point home. Timing devices in various forms—dials, watches, and clocks—have improved in their efficiency, and the decrease in error can be graphed over time.

In virtually every field of human achievement associated with science and technology the rate of progress matches that of the examples above. Reflecting on this rate of change, economist Kenneth Boulding observed (Hardison, 1988, p. 14):

As far as many statistical series related to activities of mankind are concerned, the date that divides human history into two equal parts is well within living memory. The world of today is as different from the world in which I was born as that world was from Julius Caesar’s. I was born in the middle of human history.

Pseudoscience in the Age of Science

Are we living in the Age of Science? It would seem so from the above examples. But if we are, why do so many pseudoscientific and non-scientific traditions abound? Religions, myths, superstitions, mysticisms, cults, New Age beliefs, and nonsense of all sorts have penetrated every nook and cranny of both popular and high culture. One may rationalize that compared to the magical thinking of the Middle Ages things are not so bad. But statistically speaking pseudoscientific beliefs are experiencing a revival in the late 20th century. A 1990 Gallup poll of 1,236 adult Americans shows alarming percentages of belief in the paranormal (pp. 137-146).

Other popular beliefs of our time that have little to no support in evidence include: dowsing, the Bermuda Triangle, poltergeists, biorhythms, creationism, levitation, psychokinesis, astrology, ghosts, psychic detectives, UFOs, remote viewing, Kirlian auras, emotions in plants, life after death, monsters, graphology, cryptozoology, clairvoyance, mediums, pyramid power, faith healing, Big Foot, psychic prospecting, haunted houses, perpetual motion machines, antigravity locations, and, amusingly, astrological birth control. Other polls show that these phenomena are not the quirky beliefs of a handful on the lunatic fringe. They are more pervasive than most of us like to think, and this is curious considering how far science has come since the Middle Ages.

In his book, Religion and the Decline of Magic (1971), historian Keith Thomas claims that with the development of a systematic method of science built up during the Scientific Revolution of the 16th and 17th centuries, “the notion that the universe was subject to immutable natural laws killed the concept of miracles, weakened the belief in the physical efficacy of prayer, and diminished faith in the possibility of direct divine inspiration” (p. 643). Science alone, however, was not enough to displace magic since the people of that time “emancipated themselves from these magical beliefs without necessarily having devised any effective technology with which to replace them.” Surprisingly (at least to those holding a warfare model of science and theology), Thomas identifies religion as a significant force in the decline of magical thinking: “In the seventeenth century they were able to take this step because magic was ceasing to be intellectually acceptable, and because their religion taught them to try self-help before invoking supernatural aid” (p. 663).

Those involved with the skeptical movement for any length of time, however, might find Thomas’ descriptive word “decline” difficult to believe in view of the acceptance of paranormal beliefs revealed in the above poll. Nevertheless, one in four or one in six is probably significantly lower than a 15th-century poll might have revealed. Nine out of 10 (or even 99 out of 100) would likely be an understated figure for those who accepted without question what are today considered paranormal beliefs.

For the most part faith in ghosts or telepathy probably changes little in the work and play habits of the majority of the one in four who claim such special knowledge. Except for the occasional “fool separated from his money,” or recipient of faith healing who disposes of needed medication, today’s paranormal beliefs probably seem relatively harmless. They are not. The reason is that if someone is willing to accept such claims on nonexistent evidence, what else are they willing to believe? Like the problem of drugs where marijuana allegedly leads to heroin, “harmless” beliefs in ghosts and UFOs might lead to more dangerous beliefs and practices. Believing you were sexually molested by space aliens might seem silly and innocuous, but believing you were sexually molested by parents or relatives can land someone in jail.

Where the lack of scientific thinking has a larger and more significant impact is within the social realm. Individuals, groups, and nations have been trying to solve such social problems as war, crime, and poverty for millennia, and yet these social ills still abound. It would appear that we have failed to remember the past in such a way as not to repeat it. Can we apply the scientific method to solving social problems? If so, should we? Have the so-called “social sciences” been scientific in their analysis of human behavior, both past and present?

The Division of the Sciences

“The methods of science have been enormously successful wherever they have been tried. Let us then apply them to human affairs.” —B. F. Skinner

The application of scientific principles to the betterment of the human condition has a long history that may be traced 4,500 years back to ancient Mesopotamia, when the Sumerians created writing, mathematics, calendrical astronomy, and astrology to improve all aspects of their lives, including agriculture, politics, and religion. The application of scientific methods to the understanding of human behavior, however, has a much briefer history dating to the Enlightenment, when the Newtonian spirit diffused to other fields of learning that would later become known as the social sciences. The social sciences have had considerably less success than their counterparts in the physical and biological sciences, leaving us at the close of the 20th century with a plethora of life-threatening problems and social scientists groping for answers in what many observers see as a desperate race against time (e.g., Rifkin, 1987; Holton, 1986; Brown, 1986; Capra, 1982; Bronowski, 1973). War, revolution, slavery, poverty, pollution, unemployment, monetary inflation, economic depression, crime, racism, sexism, religious persecution, social conflict, and failing education confront us on every side with no solution in sight. The problem confronting the entire social sphere may be reduced to a single cause: We have an 18th-century social, political, and economic system that must accommodate a 20th-century physical and biological science and technology. In short, we have the technical means for self-annihilation without the social means for preventing it.

Why is this? Why have the physical and biological sciences outdistanced the social sciences in the identification of causality and the prediction and control of future actions of their respective subjects? There are two possible answers: one, the traditional paradigms of the physical and biological sciences, long emulated by social scientists, are inadequate for the exploration of such a complex subject as human behavior; two, the social sciences (e.g., psychology, sociology, economics, and anthropology) and the historical sciences (e.g., cosmology, geology, paleontology, evolutionary biology, archaeology, and human history) differ in method and procedure from the experimental sciences. Though the quality of results need not differ, the findings of history and the social sciences are typically seen as inferior to those of the experimental sciences, and thus their conclusions are assumed to be less “hard,” an idea that gave rise to the appellation “soft sciences.” The pecking order is understood by all academics and felt with a sting by those wedged into the bottom.

Such divisions and hierarchical rankings among the sciences are neither productive nor conducive to interdisciplinary integration, one of the most important generators of paradigm shifts and scientific revolutions. A more constructive approach toward optimizing the amount of scientific progress might be to perceive the similarities and differences among the various sciences as differences of type or methodology rather than of value. Within the framework of this analysis all natural and human phenomena may be classified and examined within the physical, biological, or social sciences. Within each of these sciences two methodologies are used: experimental and historical. Cosmology, for example, is a historical physical science. Physiology is an experimental biological science. Paleontology is a historical biological science. History and archaeology are historical social sciences.

Such an intellectual structure is the first step toward a rational application of science in the realm of human behavior. In Washington D.C. there is a visual icon of George Santayana’s historical admonition: “Progress, far from consisting in change, depends on retentiveness. Those who cannot remember the past are condemned to repeat it” (1905). The quality of our predictions, however, can be no better than the quality of our knowledge of the past. Since science is the best tool we have for understanding causality in all realms, a science of human behavior must be applied rigorously to both the present and past so that we may profit from both our triumphs and failures.

The Science of History

To praise science without defining it is on par with exclaiming one’s love of art without a clue as to what it is that makes it appealing—it may be fun at a party but where does that get you? Philosophers, historians of science, and many scientists themselves have written at length describing the process that encompasses this cultural tradition called science (cf. Eddington, 1929 and 1958; Popper, 1934; Frank, 1957; Gillispie, 1960; Kuhn, 1962 and 1977; Harre, 1970 and 1985; Westfall, 1971; Holton, 1986; Olson, 1982 and 1991; Cohen, 1985; Losee, 1987; and Woolgar, 1988). In a discussion of the importance of semantic precision in science, the philosopher Karl Popper cautioned: “criticism will be fruitful only if we state our problem as clearly as we can and put our solution in a sufficiently definite form—a form in which it can be critically discussed” (1957, p. 16). Such terms and concepts as science, fact, paradigm, and progress are commonly used but less commonly agreed upon in meaning. Because of their importance in the development of this analysis, and so the reader will know how these terms are used in this context, definitions of these terms will be presented, starting with science itself:

Science is a set of mental and behavioral methods designed to describe and interpret observed or inferred phenomena, past or present, aimed at building a testable body of knowledge open to rejection or confirmation.

Science is a specific way of thought and action for the purpose of understanding the world, perceived either directly or indirectly, past or present. Mental methods include hunches, guesses, ideas, hypotheses, theories, and paradigms; behavioral methods include background research, data collection, data organization, colleague collaboration and communication, experiments, correlation of findings, statistical analyses, manuscript preparation, conference presentations, and publications. This description works well for understanding the physical and biological sciences, but can it apply to human history?

There are many (probably most) who would argue that the historical “sciences” are not sufficiently rigorous to be so classified. With this broader definition of science, however, it may be observed that history can be, and on many levels already is, a science. Practicing historians have developed “mental and behavioral methods” in their historical analyses that attempt to contribute to a “testable body of knowledge open to rejection or confirmation” about past phenomena. Their mental and behavioral methods are learned in graduate training and professional development. An organized historical work is rejected or confirmed by the community of historians through the testing of hypotheses and theories and the examination of historical data. Through this process historical phenomena are described and interpreted and may become factual, in the following sense of the word:

Scientific facts are data or conclusions confirmed to such an extent it would be reasonable to offer temporary agreement.

(Adapted from Gould’s 1983 [p. 255] definition of a fact: “In science, ‘fact’ can only mean confirmed to such a degree that it would be perverse to withhold provisional assent”).

James Kloppenberg (1989) has argued in a similar fashion in his description of “pragmatic hermeneutics,” a clumsy term that means historical facts, hypotheses, and interpretations “can be checked against all the available evidence and subjected to the most rigorous critical tests” and “if they are verified provisionally, they stand” and if “disproved, new interpretations must be advanced and subjected to similar testing” (p. 1030).

History is a science using a different mode of analysis than the experimental sciences. The historical sciences are rooted in the rich array of data from the past that, while nonreplicable, are nevertheless valid as sources of information for piecing together specific events and confirming general hypotheses. The inability to actually observe past events or set up controlled experiments is no obstacle to a sound science of paleontology or geology, so why should it be for a sound science of human history? The key is the ability to test one’s hypothesis. Based on data from the past the historian tentatively constructs a hypothesis then checks that against “new” data uncovered from the historical source. Archaeologist William Adams concludes a paper on “Invasion, Diffusion and Evolution,” with the following statement on the need for evidence that sounds as potent as any experimental scientist would demand (1968, p. 213):

As long as there is no ultimate proof in archaeology, every existing interpretation has to be subject to reexamination in the light of fresh discoveries. There is unhappily no point at which we can forget the evidence and accept the interpretation. Since every theory is no more than a probability, any building of theory on theory will significantly reduce the probability. Only solid evidence can ultimately serve as the building blocks of history.

In fact, it may reasonably be argued that most of the “observations” of archetypal experimental scientists— astronomers and physicists—are not made with their senses but with recording equipment, and thus they, like historical scientists, are really examining “artifacts” of observation. The tracks of a sub-atomic particle in a cloud chamber, the images of a planet recorded in binary digits in a computer, the video and audio “images” and “sounds” transduced from rescrambled magnetic bits on a tape, are not direct observations. They are artifacts from a “past” observation, even if the past is just seconds, minutes, or hours. In the case of astronomy the past may be millions of years in the time it takes light to arrive from distant galaxies, and in this sense astronomy is a type of historical science. Phenomena are not, in some artificial dichotomy, either observable or unobservable for either the experimental or historical scientist. There is a continuum from direct observation with light to indirect observation with artifacts, ranging from more to less reliable. Noted for his trenchant defense of the historical sciences, Gould makes this argument for the “high status” of history (1989, p. 282):

We cannot see a past event directly, but science is usually based on inference, not unvarnished observation (you don’t see electrons, gravity, or black holes either). The firm requirement for all science—whether stereotypical or historical—lies in secure testability, not direct observation. We must be able to determine whether our hypotheses are definitely wrong or probably correct. History’s richness drives us to different methods of testing, but testability is our criterion as well. We work with our strength of rich and diverse data recording the consequences of past events; we do not bewail our inability to see the past directly.

Based on this analysis the following definition may be made:

History is a product of the discovery and description of past phenomena. Therefore, we may make this general definition, which follows from those above:

A science of history is a set of mental and behavioral methods designed to discover, describe, and interpret past phenomena, aimed at building a testable body of knowledge open to rejection or confirmation.

History is a product of both discovery and description because there is an objective past to be discovered, but describing and interpreting it can be subjective. Since the facts never speak for themselves the scientific enterprise is fundamentally a human one. As Henri Poincaré observed regarding the role of interpretation in science: “A group of facts is no more a science than a pile of bricks is a building” (Frank, 1957, p. 87). Theory influences observation in both the experimental and historical sciences, and as the philosopher of history Arthur Danto noted: “One does not go naked into the archives. But then, it might be argued, neither does one go naked into the laboratory” (1965, p. 101).

Paradigms and Progress

I believe that science and scientific paradigms are not only different from all other non-scientific paradigms, but contain certain features that make them progressive. Progress, taken in a value-neutral sense, means the cumulative growth of knowledge over time. Let us examine first what a paradigm is, and then, what constitutes progress.

The Kuhnian (1962) usage of paradigm is generally adopted here, where a paradigm defines the “normal science” of an age, founded on “past scientific achievements…that some particular scientific community acknowledges for a time as supplying the foundation for its further practice” (p. 10). Today, textbooks are the primary proselytizers and protectors of the paradigm, presenting to the next generation the past generations’ knowledge and theories. Before textbooks, Kuhn notes that the classics served in this capacity. They did so in two ways that form the basis for Kuhn’s definition of paradigm:

Their achievement was sufficiently unprecedented to attract an enduring group of adherents away from competing modes of scientific activity. Simultaneously, it was sufficiently open-ended to leave all sorts of problems for the redefined group of practitioners to resolve. Achievements that share these two characteristics I shall henceforth refer to as ‘paradigms,’ a term that relates closely to ‘normal science’ (p. 10).

Kuhn was challenged by Margaret Masterman for not defining paradigm clearly (Lakatos and Musgrave, 1970, pp. 59-89). His 1977 expanded definition of “all shared group commitments, all components of what I now wish to call the disciplinary matrix” (p. 319), without extensive examples and discussion, still fails to give the reader a sense of just what Kuhn means by paradigm. Because of this lack of clarity, the following definition will be used, based on that given for science:

A scientific paradigm is a mental model shared by most but not all members of a scientific community, designed to describe and interpret observed or inferred phenomena, past or present, and aimed at building a testable body of knowledge open to rejection or confirmation.

A paradigm is usually shared by most but not all because most of the time competing paradigms coexist—a necessity for new paradigms to displace old ones. Philosopher of science Michael Ruse, in The Darwinian Paradigm (1989), has identified at least four usages of the word, including:

(1) Sociological, focusing on “a group of people who come together, feeling themselves as having a shared outlook (whether they do really, or not), and to an extent separating themselves off from other scientists” (pp. 124-125). Behaviorists and humanists in psychology are a good example of a sociological paradigm.

(2) Psychological, where individuals within the paradigm literally see the world differently from those outside the paradigm. An analogy can be made to people viewing the reversible figures in perceptual experiments, such as the old woman/ young woman shifting figure where the perception of one precludes the perception of the other. In this particular perceptual experiment, the presentation to subjects of a strong “young woman” image, followed by the ambiguous figure, always produces the perception of the young woman; the presentation of a strong “old woman” image, followed by the ambiguous figure, produces the perception of the old woman 95 percent of the time (Leeper, 1935).

(3) Epistemological, where “one’s ways of doing science are bound up with the paradigm” because the research techniques, problems, and solutions are determined by the hypotheses, theories, and models. A theory of phrenology that leads to the development of phrenological equipment for measuring bumps on the skull would be an example of an epistemological paradigm.

(4) Ontological, where in the deepest sense “what there is depends crucially on what paradigm you hold. For Priestley, there literally was no such thing as oxygen....In the case of Lavoisier, he not only believed in oxygen: oxygen existed” (pp. 125-126). Similarly, for Buffon, Lyell, and others, varieties in a population were merely degenerates from the originally created kind; nature eliminated them to preserve the essence of the species. For Darwin and Wallace, varieties were the key to evolutionary change.

My definition of paradigm is applicable in the sociological, psychological, and epistemological uses. To make it wholly ontological, however, would mean that any paradigm is as good as any other paradigm because there is no outside source for corroboration. Tea-leaf reading and economic forecasting, sheep’s livers and meteorological maps, astrology and astronomy, all equally determine what is, in an ontological paradigm. Obviously I do not accept this, which is why I added the modifier “scientific” to my definition. As difficult as it is for economists and meteorologists to predict the future, they are still better at it than tea-leaf readers and sheep’s liver diviners. Astrologers cannot explain the interior workings of a star, predict the outcome of colliding galaxies, or chart the course of a spacecraft to Jupiter. Astronomers can for the simple reason that they operate in a scientific paradigm that is constantly refined against the harsh judge of nature herself.

I also assume that science is progressive because science has certain built-in self-correcting features: experimentation, corroboration, and falsification. These characteristics make scientific paradigms different from all other paradigms, which include pseudoscience, non-science, superstition, myth, religion, and art. The reason that pseudoscience, non-science, superstition, myths, religion, and art are not progressive is that they do not have the goal or the mechanism to allow the accumulation of knowledge that builds on the past. Progress, in this cumulative sense, is not their purpose. This is an observation, not a criticism. Individuals in these paradigms do not stand on the shoulders of giants in the same manner as scientists. While there is change in myths, religions, and art styles, it is not progressive change. Artists do not improve upon the styles of their predecessors, they change them. (Materials and techniques may improve, but these changes are incorporated to enhance the skill of the artist, not to help the style of art progress.) Priests, rabbis, and ministers do not attempt to improve upon the sayings of their masters; they parrot, interpret, and teach them. Pseudoscientists do not correct the errors of their predecessors, they perpetuate them. Science has a self-correcting feature that operates like natural selection in nature. Science, like nature, preserves the gains and eradicates the mistakes. When paradigms shift (for example, during scientific revolutions) scientists do not abandon the entire science, just as a new species is not begun from scratch. Rather, what remains useful in the paradigm is retained, as new features are added and new interpretations given, just as homologous features of an organism’s skeletal structure remain the same while new changes are constructed around them (the whale’s flipper retains the same bone structure as its land-based ancestor—carpals, metacarpals, and phalanges—adding tissue and tendons as needed).
Einstein emphasized this point in reflecting upon his own contributions to physics and cosmology (in Weaver, 1987, v. ii, p. 133):

Creating a new theory is not like destroying an old barn and erecting a skyscraper in its place. It is rather like climbing a mountain, gaining new and wider views, discovering unexpected connections between our starting point and its rich environment. But the point from which we started out still exists and can be seen, although it appears smaller and forms a tiny part of our broad view gained by the mastery of the obstacles on our adventurous way up.

The shift from one scientific paradigm to another may be a mark of improvement in the understanding of causality, the prediction of future events, or the alteration of the environment. It is, in fact, the attempt to refine and improve the paradigm that can ultimately lead to its own demise, as anomalous data unaccounted for by the old paradigm (as well as old data accounted for but capable of reinterpretation) fit into the new paradigm in a more complete way. Werner Heisenberg, who did just that in his addition of the uncertainty principle to physics, sums up how and where these shifts occur (Weaver, v. ii, p. 475):

It is probably true quite generally that in the history of human thinking the most fruitful developments frequently take place at those points where two different lines of thought meet. Hence if they actually meet, that is, if they are at least so much related to each other that a real interaction can take place, then one may hope that new and interesting developments may follow.

What causes a paradigm to shift and who is most likely to be involved in the shift? Kuhn explains: “Almost always the men who achieve these fundamental inventions of a new paradigm have either been very young or very new to the field whose paradigm they change” (1962, p. 90). Interestingly, this quote heads the first volume of Martin Bernal’s brilliant and controversial Black Athena, a historical analysis of the Afroasiatic influence on classical Western civilization. Bernal quotes Kuhn “to justify my presumption, as someone trained in Chinese history, to write on subjects so far removed from my original field.” Although Bernal is not audacious enough to propose his own paradigm shift, he does claim the “changes of view…are nonetheless fundamental” (1987, p. 1). Bernal contrasts the “Aryan model” of Greek history that views Greece as essentially Indo-European, with the “Ancient Model” that sees it as Afroasiatic, or Levantine and Egyptian. Bernal suggests replacing the Aryan Model not with the Ancient Model, but with what he calls the Revised Ancient Model that “accepts that there is a real basis to the stories of Egyptian and Phoenician colonization of Greece set out in the Ancient Model. However, it sees them as beginning somewhat earlier, in the first half of the 2nd millennium BC.” Bernal, however, does not want to completely abandon the Aryan Model because the Revised Ancient Model “tentatively accepts the Aryan Model’s hypothesis of invasions—or infiltrations—from the north by Indo-European speakers sometime during the 4th or 3rd millennium BC” (p. 2).

Clearly Bernal is not suggesting a simple paradigm replacement with no transfer of knowledge or interpretive framework. Instead, his is a program where a paradigm may dislodge another paradigm while retaining elements of the old. The Revised Ancient Model, he explains, “adds no extra unknown or unknowable factors. Instead it removes two introduced by proponents of the Aryan Model.” Bernal claims these are: “(1) the non-Indo-European speaking ‘Pre-Hellenic’ peoples upon whom every inexplicable aspect of Greek culture has been thrust; and (2) the mysterious diseases of ‘Egyptomania’, ‘barbarophilia’ and interpretatio Graeca which, the ‘Aryanists’ allege, have deluded so many otherwise intelligent, balanced and informed Ancient Greeks with the belief that Egyptians and Phoenicians had played a central role in the formation of their culture.” Bernal’s entire analysis is a splendid example of historical science using the hypothetico-deductive method to turn dry-as-dust data into enlightening historical experiments (p. 7):

The removal of these two factors and the revival of the Ancient Model leaves the Greek, West Semitic and Egyptian cultures and languages in direct confrontation, generating hundreds if not thousands of testable hypotheses: predictions that if word or concept a occurred in culture x, one should expect to find its equivalent in culture y. These could enlighten aspects of all three civilizations, but especially those areas of Greek culture that cannot be explained by the Aryan Model.

Despite his claims of neutrality in testing the models in Volume 1 of Black Athena, in Volume 2 Bernal admits “I have given up the mask of impartiality between the two models.” Considering his commitment to the one and his obvious distaste for the other, Bernal says “instead of judging their competitive heuristic utility in a ‘neutral’ way, I shall try to show how much more completely and convincingly the Revised Ancient Model can describe and explain the development and nature of Ancient Greek civilization than can the Aryan Model” (1991, p. 3). The Revised Ancient Model has built upon components of both the Ancient and the Aryan Models while at the same time replacing them. There is cumulative growth and paradigmatic change. This is scientific progress, which in the context of this analysis may be defined as follows:

Scientific progress is the cumulative growth of a system of knowledge over time, in which useful features are retained and non-useful features are abandoned, based on the rejection or confirmation of testable knowledge.

The Triumph of Science

These definitions of science, scientific paradigm, and scientific progress are not to be confused with those of earlier positivist scientists and historians who envisioned science as a systematic step by step unfolding of the Truth about Reality through a progressive system of positive knowledge. The classic statement of that position was made by George Sarton, the founder of the history of science discipline and of its flagship journal, Isis (1936, p. 5):

Definition. Science is systematized positive knowledge, or what has been taken as such at different ages and in different places.

Theorem. The acquisition and systematization of positive knowledge are the only human activities which are truly cumulative and progressive.

Corollary. The history of science is the only history which can illustrate the progress of mankind.

Though I have defined science as progressive, I admit it is not possible to know if the knowledge uncovered by the scientific method is positive (“certain”) or not, because we have no outside source—no Archimedean point—from which to view Reality. Further, science is not the only human activity that is cumulative and progressive. Technology also meets the criteria for being labeled progressive. Therefore, a history of both science and technology would illustrate the progress of civilization. The general distaste modern historians of science have for Sarton’s definition, theorem, and corollary may not lie with the conception of science as progressive; rather, it is with the inference that progress is morally good and valuable for human society—because only science is progressive, it is morally better than all other human traditions. By this positivist analysis, cultures that embrace science and technology are better than cultures that do not. Such attitudes can lead, and have led, to imperialistic and racist attitudes toward “lesser” peoples who do not understand the world as “clearly” as those in the West. The general dislike by modern thinkers for this philosophy of science is understandable and justified. We need not, however, throw out science because of its previous identification with moral progress. We have, after all, learned a great deal about the history and philosophy of science since Sarton’s time.

There is no question that science is heavily influenced by the culture in which it is embedded, and that scientists may all share a common bias that leads them to think a certain way about nature. But this does not take anything away from the progressive nature of science. Progress in this sense is meant as a value-neutral description. Progress is neither good nor bad; it simply is. Many think progress is good, but there are plenty who think progress is destructive, and they, in turn, generally dislike science and technology—at least they are consistent.

It must be noted as well that those who do not embrace science and technology may be just as happy as those who do, maybe even more so. But this is an a-scientific statement because happiness is a subjective, nonquantifiable emotion. We cannot judge or define progress based on happiness. The only thing that can rationally be said about happiness is that all humans want more of it. The method of attaining greater happiness, however, is totally subjective and a matter of individual choice. An automobile may be one individual’s pride and joy, another’s headache and nightmare. Quiet solitude in a remote mountain retreat may bring peace and serenity to some, anxiety and boredom to others. Happiness, like good art, can never mean more than “I like it”; unhappiness, like bad art, can never mean more than “I do not like it.” Try telling an abstract painter why a Rembrandt is “better” than an irregular series of lines and cubes; or tell a fan of John Cage’s random noises that Beethoven’s Ninth is “superior.” After hours of fruitless attempts to find some objective standard of judgment, it will always come down to “I like it” or “I do not like it.”

In this regard Sydney Hook makes this interesting comparison between the arts and sciences: “Raphael’s Sistine Madonna without Raphael, Beethoven’s sonatas and symphonies without Beethoven, are inconceivable. In science, on the other hand, it is quite probable that most of the achievements of any given scientist would have been attained by other individuals working in the field” (1943, p. 35). The reason for this is that science, with progress as one of its primary goals, seeks understanding through objective methods (even though it rarely attains it). The arts seek provocation of emotion and reflection through subjective means. The more subjective the endeavor, the more personal it becomes, and therefore difficult if not impossible for anyone else to replicate. The more objective the pursuit, the more likely someone else would have made the achievement. Darwin’s theory of natural selection would have been replicated (and, in fact, was, by Wallace) because the scientific process is empirically verifiable. In a crude dichotomy, the difference between science and art is discovery versus creation. Freud’s theory of psychoanalysis probably would not have been presented by another, because it was a creation of one individual’s mind more than it was a discovery.

We cannot, in any absolute sense, equate happiness with progress, or progress with happiness. But if an individual finds happiness in the progress produced by science and technology, there is a rational way to quantify and define how this progress can be accomplished. As scientific progress was defined above, the definition for technological systems can similarly be made:

Technological progress is the cumulative growth of a system of knowledge and artifacts over time, where useful features are retained and non-useful features are abandoned, based on the rejection or acceptance of the technologies in the market.

Therefore it becomes possible to make a rational distinction between progressive and non-progressive cultures (that makes no judgments on whether these differences are good or bad, moral or immoral):

Progressive cultures have as a primary goal the cumulative growth of a system of knowledge and artifacts over time, where useful features are retained and non-useful features are abandoned, based on the rejection or confirmation of testable knowledge, and the rejection or acceptance of artifacts.

Cultural progress is inextricably linked with both scientific progress and technological progress. Culture, of course, involves much more than science and technology, but for a cultural tradition to be progressive, it must meet the above definition of cumulative growth through an indebtedness to the past. In science, useful features are retained and non-useful features are abandoned through the confirmation or rejection of testable knowledge. The scientific method, in this way, is constructed to be progressive. In technology, useful features are retained and non-useful features are abandoned based on the rejection or acceptance of the technologies in the market. For science, the market is primarily the community of scientists. For technology, the market is primarily the consuming public. Other cultural traditions (art, myths, religion) may retain some of the features found in science and technology, such as being accepted or rejected within their own community or by the public, but none have as their primary goal cumulative growth through an indebtedness to the past. Thus, only science and technology are truly progressive.

Cultures that encourage the development of science and technology will be progressive. Cultures that inhibit the development of science and technology will be nonprogressive. This does not make one culture better than another culture, or one way of life more moral than another way of life, or one people happier than another people. But if an individual or group desires a lifestyle that includes the vast diversity of knowledge and artifacts, cherishes novelty and change, and seeks an ever-growing standard of living as defined in the Industrial West, then a progressive system based on science and technology will produce that culture. No doubt, in this age of “political correctness” where most peoples of the world have not shared this bias of science and progress (and are seen to have been exploited by this philosophy), this is not a popular position. Among academics as well, the word “progress” has taken on a pejorative meaning, implying superiority over those who have not progressed as far. In my oral doctoral defense I was firmly advised by my committee to replace the modifier “progress” with “change” when referring to science. My response to those who challenge the validity of science and the scientific method as a means of understanding causality in the world is to quote one of the greatest scientists of the 20th century, if not the millennium, Albert Einstein:

One thing I have learned in a long life: that all our science, measured against reality, is primitive and childlike—and yet it is the most precious thing we have.