During recent discussions about the scientific standing of both psychology and mathematics, I've come to realize I've been harboring an unwarranted assumption — that educated people understand what science is, and therefore what it isn't. But many recent debates have ground to a halt, not on whether a particular field is scientific, but on the definition of science itself.

For reasons to be explained, resolving the meaning of science isn't as simple as picking up a dictionary. Fortunately, because of the high value of science in modern society, this question has been studied in depth, and the meaning of science is very clear, even though it is not very obvious.

Unfortunately, like a number of modern technical questions, there is a degree of self-reference built into this question, which is why this discussion properly belongs to the philosophy of science, not science itself, and cannot be resolved scientifically: a definition of science is a matter of consensus, not proof. This constraint, although important, is not a practical barrier to resolution, because science that works is easy to distinguish from science that doesn't.

The question has a number of important implications. If a clear, unambiguous definition of science cannot be constructed and agreed upon, then anyone can claim his pet belief is scientific through the simple expedient of redefining science arbitrarily. If this latitude were to be allowed, it would degrade the practice of science to a dangerous degree.

Dictionary

An obvious answer to our question might be to pick up a dictionary and look up the word "science." Isn't a dictionary a repository of word definitions? Well, this may surprise you, but the answer is no — dictionaries don't tell us how words are defined, they tell us how people define words.

Isn't that the same thing? No again. Some words have proper definitions that are not known to the general public, but it is the general public's understanding of words that fills dictionaries. To say this another way, dictionaries don't prescribe, they describe. Dictionaries tell us what people think words mean, even if those notions sometimes make no sense. Here's an example — let's look up the word "literally":

actually; without exaggeration or inaccuracy: The city was literally destroyed.

in effect; in substance; very nearly; virtually.

Notice that the second definition flatly contradicts the first!

If dictionaries were meant to contain word definitions, self-contradicting examples like this (there are many) would undermine their credibility. But that is not a dictionary's intended purpose — dictionaries do not define words, they dispassionately record how people use words. If our purpose is to discover what people think a word means, a dictionary is an appropriate reference, but if we need a formal technical definition, a dictionary won't give it to us.

So if our question is "what do people think science means?", we might use a dictionary to answer it:

a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws: the mathematical sciences.

systematic knowledge of the physical or material world gained through observation and experimentation.

This probably accurately reflects what people think science means: an organized collection of results. But an organized collection of astrological results would satisfy this definition equally well, so the definition provides no guidance as to the true meaning of science, which lies in what scientists do to acquire scientific results, and how that differs from ordinary activity. For that, we need to look elsewhere; a dictionary won't tell us.

Encyclopedia

Let's find out whether an encyclopedia can provide a definition of science:

(In this encyclopedia entry I jump past some history and stage-setting to this definition:)

"Despite the existence of well-tested theories, science cannot claim absolute knowledge of nature or the behavior of the subject or of the field of study due to epistemological problems that are unavoidable and preclude the discovery or establishment of absolute truth. Unlike a mathematical proof, a scientific theory is empirical, and is always open to falsification, if new evidence is presented. Even the most basic and fundamental theories may turn out to be imperfect if new observations are inconsistent with them. Critical to this process is making every relevant aspect of research publicly available, which allows ongoing review and repeating of experiments and observations by multiple researchers operating independently of one another. Only by fulfilling these expectations can it be determined how reliable the experimental results are for potential use by others."

To put this concisely and in my own words, science is a discipline that gives evidence, experimentation and observation the highest priority, doesn't presume to be a source of absolute truth, and accepts that scientific theories may perpetually be falsified by new findings.

Definition

Based on the foregoing encyclopedia definition, and with an exception to be explained below, scientific theories cannot ever be proven true, but may perpetually be proven false by new evidence. The classic illustration, rooted in philosopher David Hume's problem of induction and later made famous by Karl Popper, is the black swan: no number of observations of white swans can justify the conclusion that all swans are white, but the observation of a single black swan is sufficient to refute it.

Mathematics

The exception to the above is mathematics. Mathematics differs from other sciences in a single way — mathematical statements can be proven true. This exception doesn't mean mathematics isn't a science, it means certain mathematical questions can be conclusively resolved. In all other respects, mathematics is a normal science, with the usual emphasis on evidence, experiments, and the acceptance of falsifiability.

There is an important terminological difference between mathematics and the other sciences. What would be called a hypothesis in other sciences (a theory without evidence, see below) is called a conjecture in mathematics, and a proven result is called a theorem. And because a theorem is proven, it stands beyond the possibility of falsification.

Evidence

Scientific evidence must meet a very high standard of objectivity and repeatability, compared to which legal evidence seems like gossip. For example, the standard of legal evidence sufficient to put someone to death — "beyond a reasonable doubt" — isn't remotely suitable for a scientific investigation.

Among other things, scientific evidence must be objective (it must appear the same to two similarly equipped observers), it must be repeatable, and it cannot be susceptible to more than one interpretation. If I see a bright light in the sky, it might be a UFO, but it might also be Venus, and because of a scientific precept called Occam's Razor (among competing explanations, prefer the one requiring the fewest assumptions), it probably is Venus. This wouldn't make very good scientific evidence, though it's plenty good enough for the Discovery Channel's next UFO special.

By contrast, if I am standing in an open, flat field and I see a rock fall from the sky, and there are no airplanes overhead, and the rock has a burned, crusty external appearance (a so-called "fusion crust") and it burns my hand as I pick it up, then it might be a meteorite. That's more like scientific evidence.

The other difference between legal and scientific evidence is that a scientist is voluntarily skeptical of his own evidence, and tries to think of ways the evidence doesn't actually support his theory. The reason for this is that, unlike law, science isn't adversarial — with rare exception scientists work together toward an accurate, unambiguous interpretation of evidence.

Authority

There is one property of science that cannot be overemphasized: in science, evidence has the highest priority, and authority means nothing. This may represent the most common confusion about the nature of science. Scientific evidence is all that matters, and scientific authority — one's own and that of others — can only be an obstacle to clear thinking about evidence. In short, the largest amount of scientific eminence is trumped by the smallest amount of scientific evidence.

On the topic of scientific authority, the common term "scientific law" is always a misnomer. Because any scientific theory may be refuted by evidence, to describe a theory as "law" is to mislead the public about the nature of scientific knowledge. The term "law" suggests something immutable and authoritative, qualities that are inconsistent with the meaning of scientific theory. The deplorable and frequent use of the term "scientific law" in popular writing only reflects the absence of scientific content in college journalism courses.

Hypotheses

In spite of these strict rules, scientists are free to imagine anything they care to, and even publish their imaginings — these are called "hypotheses," ideas that are not accompanied by evidence and therefore don't have the status of scientific theories. Hypotheses are important to science, but they sometimes lead to confusion among nonscientists, who think the existence and publication of hypotheses means that science isn't scientific. But not all scientific thought is enclosed in scientific theories — quite the contrary. An example of a hypothesis is string theory in physics, which, despite its name and frequent discussion, is a hypothesis because there is no evidence for or against it (and little prospect for obtaining evidence). The distinction between hypothesis and theory is perfectly clear to a scientist if not to a layman.

The importance of hypotheses within science cannot be overemphasized. Many powerful ideas begin as hypotheses, and decades may pass before evidence surfaces (if ever). Two examples of hypotheses that had to await evidence are Einstein's relativity theories, first published in 1905 and 1916 but not fully verified until the 1960s, and Alfred Wegener's continental-drift hypothesis (the forerunner of plate tectonics), first put forth in 1912 but not fully supported by evidence until the 1960s. In the latter case, unfortunately, the originator had died before his ideas were confirmed.

And not all hypotheses need to be right to be useful. The Ether Theory (properly, a hypothesis) served as an interim explanation for the propagation of light once the wave theory of light made some carrying medium seem necessary, until Albert Einstein dispensed with any need for it. In spite of being wrong, the Ether Theory (hypothesis) had been fleshed out in sufficient detail that it could be tested and falsified, which it was, by the Michelson-Morley Experiment, a turning point in modern physics. The meaning of this example is that hypotheses can be useful even when wrong, but they do eventually need to be tested against evidence.

The Null Hypothesis

There is one key property of scientific thinking that is often overlooked but crucial to understanding how science works. That property is the null hypothesis. In practice and to simplify a technical point, under the null hypothesis a claim is assumed to be false unless and until it is supported by evidence. In normal human communications, things are assumed to be true unless proven false, but in science, that outlook is too undisciplined to lead to anything useful. It is because of the null hypothesis that scientists are regarded as skeptical of ideas unaccompanied by evidence.
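In statistical practice the null hypothesis takes a concrete form: assume the mundane explanation (for example, that a coin is fair), then ask how probable the observed data would be under that assumption. The following sketch (my illustration, not from the text above) computes such a one-sided p-value for a coin-flip experiment:

```python
from math import comb

def p_value_heads(n: int, k: int) -> float:
    """Probability of observing k or more heads in n flips of a
    fair coin: the one-sided p-value under the null hypothesis."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 60 heads in 100 flips: the null hypothesis (a fair coin) says
# this should happen only about 3% of the time
p = p_value_heads(100, 60)
print(f"p = {p:.4f}")
```

Only when this probability is very small does a scientist provisionally reject the null hypothesis; until then, the claim of a biased coin is assumed to be false.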

Description and Explanation

Some fields collect evidence but don't shape theories to explain and generalize that evidence (or have theories and evidence that don't meaningfully relate to each other). Those fields are diplomatically called "descriptive sciences" (on the ground that they describe but don't explain), but this is misguided charity: all scientific fields must have testable, falsifiable theories. Fields that don't shape theories place themselves beyond the possibility of experimental falsification, and on that basis are not sciences but beliefs.

The List

Here is a concise list of the elements of science:

The highest priority is given to evidence.

A theory is an idea supported by evidence.

A hypothesis is an idea not supported by evidence.

Evidence must eventually result in a theory that:

addresses existing evidence.

generalizes specific cases.

can be tested using, and potentially be falsified by, evidence.

An idea with no supporting evidence is assumed to be false (the null hypothesis).

A field with evidence but no theories is not scientific.

A field with theories but no evidence is not scientific.

Infinite Primes

Here is a worked example of science that exemplifies some (but not all) of the conditions in the above list. It consists of a hypothesis followed by a test (formally, a proof by contradiction), concerning a statement about prime numbers first described by Euclid in ancient times.

1. Hypothesis: there is a largest prime number. Let's call this largest prime p.

2. Create a new number q equal to the product of all the prime numbers between 2 and p, plus 1: q = (product of primes {2 ... p}) + 1.

3. The new number q is not divisible by any prime in {2 ... p}, because dividing q by any of them produces a remainder of 1.

4. The number q, which is larger than p, is therefore either itself prime, or composite with prime factors larger than p.

Statement (4) falsifies statement (1).
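The construction above is easy to verify by direct computation. Here is a minimal sketch (the helper names are mine, not part of the original argument):

```python
from math import prod

def primes_up_to(p: int) -> list[int]:
    """Simple trial-division prime list, adequate for small p."""
    return [n for n in range(2, p + 1)
            if all(n % d for d in range(2, n))]

def euclid_witness(p: int) -> int:
    """Build q = (product of primes <= p) + 1 and return q's smallest
    prime factor, which must exceed p, refuting 'p is the largest prime'."""
    q = prod(primes_up_to(p)) + 1
    return next(d for d in range(2, q + 1) if q % d == 0)

# with p = 13: q = 2*3*5*7*11*13 + 1 = 30031 = 59 * 509
print(euclid_witness(13))  # prints 59, a prime larger than 13
```

Whether q turns out to be prime itself (p = 5 gives q = 31) or composite (p = 13 gives q = 30031 = 59 × 509), its smallest prime factor always exceeds p.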

This example meets the definition of a testable, falsifiable scientific hypothesis, and it leads to a theory about prime numbers: it relies on a specific outcome to make a general statement (there are infinitely many primes). The difference between this mathematical example and the other sciences is that the proof by contradiction applies to all prime numbers at once, so a single falsification settles the matter, after which the result is no longer subject to falsification.

The merit of this example is that it is easy to understand, but its drawback is that it is not like scientific hypotheses and theories outside of mathematics, all of which are perpetually falsifiable. The next section of this article contains more realistic examples of science.

Summary

In consequence of the described properties, science and scientists are characterized by humility, a readiness to accept criticism and acknowledge error, and openness to new theoretical interpretations and evidence, mixed with a stern discipline about the distinction between hypothesis and theory.

Experimentation

In this section we show how scientific principles collide with human nature. Among scientific results, the earlier mathematical example is somewhat unrealistic in exchange for being easy to understand. Real-world science is often more complex and fraught with possibilities for error, so more safeguards are necessary to assure that we are actually studying what we think we are, and that the results actually mean what we think they do.

Orange Juice

Here is a more realistic research example, one that shows why science is organized as it is. A young researcher wanted to test the idea that citric acid can prevent pregnancy, so he organized an experimental group of young women and conducted his study. After several years of data collection, his study appeared successful and the researcher wrote an article describing his findings.

Before publication, in a process called "peer review," scientific articles are examined by other scientists in the same field. During peer review the researcher received a call from an older, more experienced colleague who said, "I see a problem with the dropout dates for the subjects who left your study." The young scientist explained that yes, a few subjects decided to become mothers and dropped out of his experiment, nothing out of the ordinary. The reviewer replied, "I suggest that you compare the dropout dates against the delivery dates," and hung up the phone.

The young researcher, a bit annoyed at the pickiness of his older colleague, compared the dates, and discovered to his shock that the delivery dates were all less than nine months after the dropout dates. In a subsequent investigation, he discovered that all the subjects who dropped out had become pregnant during the study, but because they liked the young researcher and didn't want to hurt his study or his feelings, they didn't reveal their real reason for dropping out. The researcher was dismayed to discover his study meant precisely nothing.

Discipline and Control

The foregoing example is much more instructive than the earlier mathematical example, and it shows why scientific research must be conducted in a highly disciplined way. Researchers must be vigilant to detect what are called hidden assumptions, assumptions that influence the work but that are not consciously examined.

It is also important to design experiments so that the effect being measured actually arises from the intended cause. To accomplish this goal, an experimental design may include something called a control group, a group of subjects chosen for comparison purposes who are identical to the experimental group but who are not experimented upon. The control-group issue is especially important in human studies.

Prospective, Double-blind

In human studies, the very best quality science arises from a study that randomly selects two groups of subjects that are as much alike as practical (an experimental and a control group), then sets up the study in such a way that neither the subjects nor the researchers know which group a particular subject belongs to — this is called the "double-blind" criterion. This may seem excessively strict, but there are any number of studies that could not be successfully replicated because this standard was not met.

The above-described study design is technically called a "prospective double-blind study." It is called "prospective" because it chooses experimental subjects for future study, and "double-blind" because during the study, neither the researchers nor the subjects know who is a control and who is an experimental subject.
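The bookkeeping behind such a design can be sketched in a few lines (a hypothetical illustration, not a clinical protocol): subjects are randomized into two groups, each known to researchers only by an opaque code, and the key linking codes to groups is sealed with a third party until the study ends.

```python
import random

def double_blind_assign(subject_ids, seed=None):
    """Randomly split subjects into equal-sized experimental and
    control groups, identified only by opaque codes. Researchers
    work from the coded roster; the sealed key stays with a third
    party until the study is unblinded."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    coded = {f"subj-{i:03d}": sid for i, sid in enumerate(ids)}
    sealed_key = {code: ("experimental" if i < half else "control")
                  for i, code in enumerate(coded)}
    return coded, sealed_key

roster, key = double_blind_assign(range(20), seed=42)
# researchers and subjects alike see only codes such as 'subj-007';
# no one knows which codes are controls until the key is opened
```

The essential point is that group membership is decided by chance and hidden from everyone who could, even unconsciously, bias the measurements.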

Ethical Constraints

Society imposes mandatory constraints on science that limit the quality of any research that involves human subjects (and to a lesser degree, animal subjects). The primary constraint is ethical standards, standards that weigh the rights and safety of experimental subjects against the desire for reliable, high-quality results. In some cases these constraints can be managed by acquiring something called "informed consent", which essentially means an agreement to participate based on accurate knowledge of the experiment's risks.

But in many cases informed consent can't be acquired for the simple reason that scientists honestly don't know what the risks are. Another problem for informed consent arises in psychology, where victims of a mental disturbance might be ideal experimental subjects except that, because of their mental state, they cannot provide informed consent.

Retrospective Studies

One solution to the above ethical constraints is to choose experimental subjects from the population at large, based on the presence of a condition to be studied. This frees the scientist from the moral hazard inherent in creating the experimental group — the group already exists in the population at large, all one need do is locate them and sign them up. This is called a "retrospective" or "historical cohort" study, meaning a study of subjects who are found to already have a condition or ailment of interest.

There are a number of pitfalls in retrospective studies that nearly always prevent them from producing reliable science, and retrospective studies have a very poor reputation for trustworthiness and reliability.

One problem surrounds the issue of cause and effect. Let's say a study is designed to establish the value of (hypothetical) "Vitamin X" in improving intelligence. A retrospective study is designed and shows, sure enough, those who took "Vitamin X" were smarter than those who didn't. But because retrospective studies don't have any meaningful controls, the study may only prove that those who took "Vitamin X" were those intelligent enough to take a daily vitamin.

Another problem lies in the selection process. Retrospective studies ordinarily don't get to choose their subjects, but depend on those willing to contact the researchers. This means that, even at the design stage, there is a bias built into the experimental group. After that, things generally get worse. As a class, retrospective studies only serve to show how dreadful science can become if all its discipline is abandoned.

Self-reporting

Another pitfall in human studies is dependence on the accuracy of the subjects' personal accounts. It has been repeatedly shown that studies that depend on self-reporting are extremely unreliable, because self-reporting is itself unreliable. Unfortunately, most psychological and sociological studies rely to a greater or lesser extent on self-reporting.

Summary

The best science arises from prospective studies in which experimental and control groups are selected in advance, neither subjects nor researchers know which subjects belong to which group, there is no dependence on subjective reporting, and ethical issues are not present. Unfortunately for human research, virtually no human studies meet all these standards.

Scientific Architecture

So far we've discussed how one recognizes scientific ideas and activities — we now turn to how an entire field may be determined to be scientific. At first glance "architecture" may seem a peculiar property to discuss with respect to science, but not unlike a building, a field may attain the status of a science only by adhering to certain structural standards. Here is a minimal list of such standards:

Does research address and potentially falsify one or more core theories that define the field?

Does research have the potential to change how the field is practiced?

Let's examine each of these points in detail.

A. Does research address and potentially falsify one or more core theories that define the field?

At first glance the reader may wonder whether this is a legitimate criterion. Doesn't the presence of scientific activity within a field automatically confer scientific status on the field as a whole? Actually, no, and here's why not. Let's say I'm an astrologer, and I'm planning a research project. I want to statistically break down the U.S. population by astrological sign; that way, I can order supplies intelligently and focus my efforts appropriately, with an evidence-based idea of who my clients are. So I consult a statistical database of U.S. births by date (2003, in this example), process the data, and break it down by the astrological "signs".
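A tally of this kind is easy to sketch in code. The following is a hypothetical illustration, not the actual study: the sign boundaries are the common tropical-zodiac dates, and the birth dates are synthetic stand-ins for a real database.

```python
from collections import Counter
from datetime import date

# (sign, first day) in calendar order; these boundaries are the
# commonly quoted tropical-zodiac dates, an assumption of this sketch
SIGN_STARTS = [
    ("Capricorn", (1, 1)), ("Aquarius", (1, 20)), ("Pisces", (2, 19)),
    ("Aries", (3, 21)), ("Taurus", (4, 20)), ("Gemini", (5, 21)),
    ("Cancer", (6, 21)), ("Leo", (7, 23)), ("Virgo", (8, 23)),
    ("Libra", (9, 23)), ("Scorpio", (10, 23)), ("Sagittarius", (11, 22)),
    ("Capricorn", (12, 22)),
]

def zodiac_sign(d: date) -> str:
    """Return the sign whose start date most recently preceded d."""
    sign = SIGN_STARTS[0][0]
    for name, (month, day) in SIGN_STARTS:
        if (d.month, d.day) >= (month, day):
            sign = name
    return sign

def tally_by_sign(birthdates):
    """Count births per astrological sign."""
    return Counter(zodiac_sign(d) for d in birthdates)

# synthetic stand-in for a real birth database
births = [date(2003, month, 15) for month in range(1, 13)]
print(tally_by_sign(births))
```

Note that nothing in this computation tests any claim of astrology; it merely counts birthdays, which is exactly the point of the example.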

Okay. I've created a scientifically valid statistical result in astrology, and the study turns out to have practical value in the daily activities of astrologers. Does this scientifically valid result make astrology itself scientific? No, of course not. Why? Because, regardless of its practical significance, my research doesn't address or potentially falsify the core theories of astrology. My result may help astrologers organize their activities, but because it neither addresses nor answers any basic questions about astrology, this scientific result cannot confer scientific status on the field in which it took place.

This example has particular relevance to a number of "soft sciences" like psychology and sociology, whose practitioners apparently (and wrongly) believe the presence of scientists and published research confers scientific status on their fields. But this can only be true if basic theoretical principles are asserted, tested and potentially falsified.

As explained in the previous section, the "null hypothesis" is a scientific precept that says assertions are assumed to be false unless and until there is evidence to support them. In scientific fields the null hypothesis serves as a threshold-setting device to prevent the waste of limited resources on speculations and hypotheses that are not supported either by direct evidence or reasonable extrapolations from established theory. It is also a way to focus attention on evidence.

There are a number of pseudoscientific fields in which assumptions stand in for evidence and research never falsifies theories, but in other respects those fields have the outward appearance of sciences. The theories of such a field may never have been meaningfully tested, but by ignoring the null hypothesis the field's practitioners can proceed as though they have been (untested ideas are simply assumed to be true). Certainty arises, not from rigorous experiment, but from faith, belief and assumption.

C. Does research have the potential to change how the field is practiced?

This standard requires that a field be unified by rigorously tested core theories, something that serves two purposes — it forges a link between research and practice, and it guards against undisciplined and potentially dangerous practices in fields where life and health may be at stake. The merit of this standard can be seen in mainstream medicine, where before a therapy can be offered in a clinical setting, it must be shown to agree with theory and be validated by research as well.

Unification

The standards listed above show the importance of theoretical unification, the idea that a field may be regarded as scientific only if it is unified by testable, falsifiable theoretical principles on which everyone agrees. If this unification is not present, individual practitioners may craft independent theories and, regardless of the merit of those theories, the field cannot be regarded as scientific.

Let me offer an example from physics, a field that perfectly exemplifies the interplay of research, theory and practice. When I use a Global Positioning System (GPS) receiver to find my way across the landscape, every aspect of the experience is governed by rigorously tested physical theory. The semiconductor technology responsible for the receiver's integrated circuits obeys quantum theory and materials science. The mathematics used to reduce satellite radio signals to a terrestrial position honors Einstein's relativity theories (both of them, and for different reasons) as well as orbital mechanics. If any of these theories is not perfectly understood and taken into account, I won't be where the GPS receiver says I am and that could easily have serious consequences.
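The GPS case can even be quantified with a back-of-the-envelope calculation. The sketch below uses standard physical constants and the nominal GPS orbital radius (my numbers, offered as an illustration of why both relativity theories matter, not as a navigation-grade computation):

```python
# Net relativistic clock drift of a GPS satellite vs. a ground clock.
# Special relativity: orbital speed slows the satellite clock.
# General relativity: weaker gravity at altitude speeds it up.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 2.99792458e8      # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_ORBIT = 2.6560e7    # nominal GPS orbital radius, m
DAY = 86400.0         # seconds per day

v2 = GM / R_ORBIT                              # orbital speed squared
sr = -v2 / (2 * C**2)                          # fractional slowdown (special)
gr = GM * (1 / R_EARTH - 1 / R_ORBIT) / C**2   # fractional speedup (general)
net_per_day = (sr + gr) * DAY                  # seconds gained per day

print(f"SR: {sr * DAY * 1e6:+.1f} us/day, GR: {gr * DAY * 1e6:+.1f} us/day, "
      f"net: {net_per_day * 1e6:+.1f} us/day")
```

The two effects pull in opposite directions, and the net drift of tens of microseconds per day would, uncorrected, translate into kilometers of position error; this is why GPS satellite clocks are deliberately offset before launch.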

Because physical theories are rigorously tested and because the practice of physics honors the theories of physics, a gigantic airliner can approach a runway and land in conditions of zero visibility (a "Category III approach"), without significant risk to passenger safety. Public trust is well-placed in physics as a scientific discipline.

In physics, research findings cause theory to be modified according to the rules of scientific evidence. The result of a change in theory is that every activity remotely related to physics — civil, mechanical and electrical engineering, among others — is required to change its practice in step with new research findings, and a failure to take the theory of physics into account can easily end the career of someone engaged in the practice of physics.

Because of their adherence to rigorously tested theories, physics and mathematics are the scientific fields by which other fields are judged. It must be said that, for lack of adherence to the standards described above, many widely practiced fields are scientific in name only. Some of those fields have the status of sciences only because of a public perception that, because they ought to be sciences, therefore they are sciences. This is called a "logical fallacy," the topic of the next section.

Logical Fallacies

When certain topics are allowed to intrude, discussions of scientific and other issues tend to fall apart. Students of debate learn to avoid what are called "logical fallacies", tactics that have no place in a productive debate and that can only waste the time of the participants. The subject of logical fallacies is particularly important to science, because what is important to science seems diametrically opposed to what people believe is important.

It is important to understand that logical fallacies are not merely weak debate tactics, they represent arguments that have no validity whatever and have no place in intelligent debate. Here are some examples of fallacies with particular relevance to scientific discussions:

Ad Hominem

This argument abandons the discussion topic and instead attacks the opponent in one way or another. "You're only against capital punishment because you're stupid!" It is easy to identify this fallacy, but not so easy to avoid it.

Argument From Authority

This argument appeals to authority rather than to evidence. Because authority has no role in science, this fallacy is particularly important in scientific debates, where the views of "experts" carry no weight. This fact about science may come as a shock to nonscientists, who regularly hear one or another scientific "expert" expounding on matters of public importance.

There are two problems with scientific expertise. One is that, as explained above, expertise has no importance to science, only evidence does. The other is that scientists tend to be very highly specialized (in modern times — this wasn't always true), consequently their views on other topics aren't necessarily superior to those of a nonscientist.

One example of this is the "eugenic" campaign that Nobel laureate William Shockley pursued late in his life. Shockley was a physicist and co-inventor of the transistor at Bell Labs, but his outspoken views on the supposed racial inferiority of black people made him a perfect example of a scientist misusing his status to pursue unscientific and socially destructive goals.

The issue of authority perfectly contrasts the public's view of science with the scientist's. To a scientist, authority means nothing, and the professional status and advanced degrees of a scientist count for precisely nothing compared to the evidence that is present or absent in his next article. To the public, by contrast, a scientist's remarks carry more weight than those of an ordinary mortal, but this only uncovers a flaw in public education, and a reverence toward authority that is sadly common in modern times.

Ironically, a field's respect for science can be gauged by the importance it attaches to advanced degrees: the two are inversely related. When Albert Einstein published his first relativity paper in 1905, he had not yet received his doctorate, but because physics is a science, Einstein's work was evaluated on its content, not its source. Many successful scientists have never acquired or completed a degree, and the correlation between scientific degrees and scientific accomplishment is slight.

If a scientist submits a paper to a reputable scientific journal, no one asks where the scientist went to school (which is why Einstein was published at all); instead, the content of the paper is the only issue. By contrast, in a field like psychology, one's degree is of crucial importance, and one cannot publish or practice without a doctorate. The reason? It is difficult to raise evidence above eminence in a field with such poor evidence.

Non Sequitur

This common error usually originates in sloppy thinking: it uses valid premises to construct an invalid conclusion.

All Greeks are human.

All humans are mortal.

Socrates is mortal.

Therefore Socrates is all Greeks.

That is more of an old joke than a realistic example, but it shows the pattern.

An average family has 2.5 children.

The Smiths are an average family.

Therefore the Smiths must have 2 or 3 children.

The error in the second example arises from assuming that an "average" family is identical to the mean produced by a statistical sampling. If this were legitimate reasoning, then based on the average of 50 coin tosses, one would expect an "average" coin to land on its edge.
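The coin-toss analogy can be made concrete with a short simulation (a hypothetical sketch of my own, not from the original argument; the variable names are arbitrary):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Simulate 50 fair coin tosses: heads = 1, tails = 0.
tosses = [random.randint(0, 1) for _ in range(50)]
mean = sum(tosses) / len(tosses)

# The mean is a fraction near 0.5, yet every individual toss is exactly
# 0 or 1 -- no single coin ever equals the "average coin", just as no
# single family has 2.5 children.
print("mean of 50 tosses:", mean)
print("does any toss equal the mean?", mean in tosses)
```

The statistical mean describes the sample as a whole; attributing it to any individual member of the sample is exactly the error in the Smith-family syllogism.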

This error is so common and widespread that it isn't possible to catalogue its variations in a reasonable space.

This fallacy illogically associates causes and effects: "I was depressed, so I went to a therapist, after which I felt better." In the absence of a carefully designed study there is no evidentiary basis for assuming that a result which followed the therapy was caused by it; there are any number of other explanations. This fallacy is very common and results from the lack of skepticism typical of most human thinking.

This fallacy is sometimes expressed as "post hoc, ergo propter hoc": the outcome followed an event, therefore it was caused by the event. The primary way experimenters avoid this hidden assumption is to include a control group that receives a sham treatment; if the experimental and control groups both improve to the same extent, the treatment may have no value. For a number of reasons, control groups are rare in psychological studies.

The next example is a bit more complicated; it might even be called a meta-fallacy (a fallacy about fallacies). It shows the error of concluding that, because an argument contains a fallacy, its conclusion must be false on that ground.

Birds have wings.

I saw an animal with wings.

Therefore I saw a bird.

The argument is fallacious, but this doesn't necessarily mean the conclusion is false.

Moral Fallacy

The name of this fallacy is my own, but it resembles a few similar fallacies and philosophical problems like the "is-ought problem." In the moral fallacy, what "ought to be" becomes an argument in defense of what is. Obviously, statements about what "ought to be" are moral judgments, not logical arguments at all.

Psychology is a science.

Okay, psychology doesn't meet scientific standards.

But psychology ought to be a science.

And the perfect is the enemy of the good.

Therefore psychology is a science.

In case the reader thinks I made this example up, I've watched this precise argument evolve in conversations with psychologists.

Summary

It is commonly believed that science can function in a world that doesn't understand science or scientific thinking, but this is false. A public that is ignorant of scientific principles and argument is ill-equipped to distinguish science from pseudo-science.

Postmodernism

Although not central to how science is defined and practiced, a practical requirement is that people agree on the meaning of science, evidence, and the significance of particular findings and theories. This important issue isn't central to science because a single person working alone can produce valid scientific theories and results, but for any coöperation between scientists or for the dissemination and application of scientific theories and results, people must accept the notion of shared, objective truth.

Objective Truth

Naturally, once one utters the word "truth" or asserts that an idea can be accurately transmitted between people by language or print, philosophers will rise up and debate whether this is even possible. This is normally a harmless onanistic activity confined to faculty tea parties, but there is a small possibility that one or another abstract philosophical idea may gain traction in the corporeal world of telescopes and test tubes. Postmodernism is just such an idea. (In this article, for brevity I use "postmodernism" to refer to "deconstructive postmodernism".)

The central idea of postmodernism is that there are no shared objective truths, that everything is subjective opinion. People possessed of common sense will instantly see that this idea is self-canceling (if the postmodernist thesis is true, it must apply to postmodernism and thus dismantle it by self-reference), but it is vital to understand that, at some point during the training of the average philosopher, common sense is reliably extinguished.

The Academic

The agenda of the academic postmodernist should be obvious — given a postmodernist outlook, he can publish any number of words in any way he pleases, with no possibility of meaningful refutation. And this academic paradise has been fully realized — postmodernism has produced far more words, with far less discernible content, than ever before in the history of philosophy. This ridiculous but true state of affairs was dramatized in 1996 by physicist Alan Sokal, who submitted a deliberately nonsensical article to Social Text, a journal attuned to the inner world of postmodernists. After publication Sokal revealed that he had constructed his article out of "a pastiche of left-wing cant, fawning references, grandiose quotations, and outright nonsense ... structured around the silliest quotations I could find about mathematics and physics" that had previously been published by postmodernist academics.

In the resulting Sokal Affair, heated discussion revolved around whether Sokal had violated academic ethics by creating and publishing a deliberate hoax intended to ridicule an academic discipline and its adherents. Eventually most people realized the postmodernists were fully qualified to ridicule themselves, with an efficiency no outsider could improve upon.

The single most important property of a postmodernist idea is that it be couched in insufferable verbiage. Knowing this, Sokal titled his article "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity." Given the absurd nature of postmodernist publishing and its fondness for turgid rhetoric, the title alone assured publication.

Secular Postmodernists

But the real harm in postmodernism doesn't lie within universities, among the lost souls of the philosophy department. The harm lies outside, among people who are similarly unable to see the logical self-contradiction at the heart of postmodernism, but who have one or another base motive for adopting a postmodernist outlook.

Psychology

For example, psychologists would like to believe their field is a science. Looked at objectively, psychology lacks a unifying, testable theoretical core that can meaningfully govern the behavior of clinical psychologists in the way that physical theory governs the behavior of civil engineers or medical theory governs the behavior of doctors. Because the scientific standing of a field is determined by the central role played by evidence and testable theories, to assess psychology all one need do is look at what therapies are offered by trained, licensed psychologists, and the degree to which those therapies are derived from rigorous, repeatable experimental evidence. (Answers: "anything" and "zero".) This situation is freely acknowledged by qualified people both outside and within the field of psychology, including the president of the American Psychological Association.

Psychologists, on recognizing the true nature of their field, want the imprimatur of science for commercial and self-esteem purposes, so (instead of reforming their field from within) they try to borrow the authority and status they believe lies within science through the expedient of redefining science. In my many debates with psychologists, I have noticed the primary strategy is to assert that what constitutes science is a matter of opinion, that is, by adopting a postmodernist outlook. In this way psychologists can reconcile contemporary psychological practice with science.

I regularly hear from psychologists who use expressions like "your kind of science" and "different kinds of science" as though science is an ice cream store that offers many flavors, but all essentially ice cream. Obviously, given the nature of current psychological practice, the two requirements psychologists would like to see eliminated are the rigorous testing of theories and falsifiability. Unfortunately once these are gone there is nothing left of science but the name. But this outcome isn't so bad, since they only wanted the name anyway.

This strategy hasn't played out in any formal way, but given the undisciplined nature of postmodernist thinking, that doesn't really matter. One psychology correspondent even argued that psychologists must be scientists because they were called scientists, and their field was described as a science, in internal literature. When I read his argument I thought he would do well to learn a little science, if only to sound more credible as he tore it to pieces.

Religion

Religious pseudoscientists, like the advocates of "intelligent design" (i.e. creationists), have the same goal and use the same strategy as psychologists. If science can be defined as something that doesn't require evidence, then pseudoscience becomes science by decree. A specific example is the Discovery Institute's public campaign to replace conventional science "with a science consonant with Christian and theistic convictions," one that can "affirm the reality of God" (both quotations from the Wedge Document). Achievement of this goal would require the acceptance of untestable supernatural agencies and the abandonment within science of evidence, experiment and testable theories. As before, everything is gone but the name.

This demand has its roots in the realization that science can produce vaccines and in other ways alleviate human suffering to a degree that prayer doesn't seem able to match, and rather than reconsider their attachment to an untestable belief system, truly dedicated religious believers would prefer to dismantle the modern world, starting with science (nothing is so offensive to a religious believer as an effective non-religious life strategy).

Erosion of Science

The danger in these private strategies is that all of them threaten the existence of science, either by confusing children about its nature as the "intelligent design" advocates do, or by attacking it directly as psychology is trying to do. It's possible that psychology will accept its true status (perhaps after going through the five stages of grief) and begin work toward actually becoming a science. But in the meantime, and based on my relatively large and growing sampling of psychologists, I find that many of these people are sworn enemies of science and reason, to the degree that it is sometimes difficult to distinguish them from religious fanatics.

The bottom line is that there is precisely one kind of science. Science is not an ice cream store with dozens of flavors, it is a well-integrated discipline with strict rules. Beyond this, it is well-connected with the ultimate form of validation: results. Science depends on acceptance of the principle that objective facts can be accurately transmitted and that there are legitimate ways to share information. Postmodernism denies this thesis. And postmodernism's advocates want it both ways — they want to assert the postmodern thesis, but then they want to proceed as though they haven't just committed intellectual suicide by self-reference.

Conclusion

The plight of science is that its enemies want its status without its discipline, and they are willing to injure science by posturing as sciences and scientists, pretending to meet its requirements while adopting nothing but its name.

Pretending an association with science is not a new game — consider Christian Science and Scientology as just two examples — but as time passes and science's results become better understood, more pretenders come out of the woodwork.

In the most basic sense, if a field cannot produce objective evidence for its practices, or cannot show a meaningful connection between its theories and evidence for those theories, and in particular if a theory cannot be falsified by contradicting evidence, that field is not based in science.

Science grants plenty of latitude with regard to hypotheses and creative thinking, but in science, hypotheses must eventually be either abandoned or supported by evidence. It may shock the reader to learn that some technical and medical fields produce hypotheses as in normal science, but then, without taking the mandatory step of testing the ideas against evidence, open clinics to treat people based on the still-untested hypotheses.

Another kind of pseudoscience collects evidence, but never creates a theory to generalize the evidence. This leaves the evidence as a description without a theory as an explanation. Evidence alone, description alone, doesn't meet the definition of science, because there is no basis for a general statement about the evidence, and there is no basis for falsification.

Some unscientific fields use the "moral fallacy" to argue for their own existence — the idea that, because something ought to be so, therefore it is so. For example, because there ought to be a scientific psychology, therefore there is a scientific psychology.

But because there are no science police (and there should never be), society must decide how scientific various disciplines need to be. Some fields don't need to be scientific to accomplish what they do, while others are never confused with science, so there's no problem. The biggest problem is with fields that need to be scientific to earn public trust and to assure the safety of their clients, but that aren't scientific. For example, after a rocky period in the early 20th century, mainstream medicine has slowly evolved toward being scientific enough to earn the public's trust.

Psychology is on the cusp of being transformed into a science, but this is not to say that contemporary psychology is remotely scientific at present. It isn't.

One measure of the scientific standing of a field is the degree to which it is united by a tested, falsifiable system of scientific theories. For example, the various branches of physics are united with civil and electrical engineering by a strong system of theories that ultimately determines how bridges and airplanes are designed and built. Because of this unification, a discovery in physical theory quickly changes physical practice (that is, engineering).

In psychology, by contrast, research results from theoretical psychology have no measurable effect on clinical psychology, both because there is no theoretical framework that unites psychological research and practice, and because there is no strong understanding of, or attachment to, scientific standards and methods within current psychological training and practice.

I wouldn't have been able to make the above claim several years ago, but since that time I've had extensive correspondence with professional psychologists engaged in both research and in clinical practice, and I must sadly report that science isn't being presented to these individuals as anything but a rhetorical obstacle to be waved away.

To me, the single most important defect in contemporary psychology is a near-perfect disregard for the null hypothesis, the scientific precept that assumes an idea to be false unless and until there is scientific evidence for it. As a result, in clinical psychology any number of dangerous practices are begun without any effort to discover whether there is a rational basis for them or whether they offer therapeutic benefits (as with "facilitated communication" and "recovered memory therapy").
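The null hypothesis can be made concrete with a small worked example (the numbers are hypothetical, chosen only for illustration). Suppose 14 of 20 patients improve after a therapy. The null hypothesis says the therapy does nothing, so each patient was as likely to improve as not, like a fair coin:

```python
from math import comb

n, k = 20, 14  # 14 of 20 patients improved

# Under the null hypothesis, improvement is a fair coin flip (p = 0.5).
# The p-value is the chance of seeing k or more improvements by luck alone.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# About 0.058: not below the conventional 0.05 threshold, so even this
# seemingly impressive result does not by itself justify rejecting the
# null hypothesis -- the therapy has not been shown to work.
print("one-sided p-value:", round(p_value, 3))
```

The burden of proof lies with the claim, not with the skeptic: until a calculation like this one (or a properly controlled study) pushes the null hypothesis aside, the scientific default is that the treatment has no effect.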

At the moment, clinical psychology is stuck at the approximate evolutionary stage of mainstream medicine at the turn of the 20th century — a handful of conscientious practitioners mixed in with a great number of charlatans. It's becoming clear that psychology needs to clean its own house or suffer a complete loss of public confidence. As American Psychological Association president Ronald Levant recently said in his call for an evidence-based practice, " ... psychology needs to define [scientific practice] in psychology or it will be defined for us. We cannot afford to sit on the sidelines."