The demarcation problem is the philosophical problem of determining which types of hypotheses should be considered scientific and which should be considered pseudoscientific or non-scientific. It also bears on the ongoing struggle between science and religion, in particular the question of which elements of religious doctrine can and should be subjected to scientific scrutiny. This is one of the central topics of the philosophy of science, and it has never been fully resolved. In general, though, a hypothesis must be falsifiable, parsimonious, consistent, and reproducible to be considered scientific.

"No one in the history of the world has ever self-identified as a pseudoscientist. There is no person who wakes up in the morning and thinks to himself, 'I'll just head into my pseudolaboratory and perform some pseudoexperiments to try to confirm my pseudotheories with pseudofacts.'"

Obviously, nobody identifies their own beliefs as pseudoscience, because that would amount to admitting those beliefs are wrong; anyone who thought their beliefs were wrong would simply change them. Self-identification is therefore out, and a better method is needed to determine whether something is pseudoscience.

The assumption of methodological naturalism is arguably the most basic and important foundation of the scientific method. While it does not explicitly reject the existence of the supernatural (that stronger position is philosophical naturalism), it limits the scope of science to the natural world and the observable laws shaping it. In a nutshell, this means that any explanation that must resort to a supernatural cause, such as intelligent design, is unscientific.

The philosophers of the Vienna Circle, who introduced the positivist paradigm, effectively laid the groundwork for the modern philosophy of science and one of its most important strands of thought. The early positivists favored a rather strict approach to demarcation and strongly affirmed the empirical nature of science, meaning that questions that cannot be settled empirically are irrelevant to scientific thought. This obviously set science in stark contrast to religion, but also to philosophical schools in the classical rationalist tradition that emphasized pure thought.

In his book The Logic of Scientific Discovery, Karl Popper proposed that scientific hypotheses must be falsifiable; unfalsifiable hypotheses should be considered non-scientific. Popper's emphasis on falsifiability changed the way scientists viewed the demarcation problem, and his impact on the philosophy of science was enormous. The concept of falsifiability came under attack from W. V. O. Quine, who argued that it is impossible to test a hypothesis in isolation, because testing requires the assumption of certain background hypotheses (this is known as the Duhem–Quine thesis), such as 'the equipment is working the way I think it does' and 'the laws of thermodynamics hold'. A falsifying observation therefore guarantees that at least one of the hypothesis and its background assumptions is incorrect, but it says nothing about which one.

A good example of this comes from the British astronomer Sir Fred Hoyle. Hoyle believed in the Steady State theory, which holds, against the Big Bang theory, that the universe is eternal. The vast majority of scientists felt that the debate between Steady State and Big Bang was settled in the 1960s with the observation of the cosmic microwave background (CMB) radiation, which was considered decisive evidence in favour of the Big Bang and a falsification of Steady State. Hoyle dissented from his colleagues by arguing that the observation of the CMB did not disprove Steady State, but instead disproved the First Law of Thermodynamics: matter/energy would not remain constant in a closed system, because there was a source of energy somewhere in the universe. Hoyle thus resolved the incompatibility between his theory and observation by rejecting one of his background assumptions. Almost no scientist took this seriously, and Hoyle died in 2001 as a scientific outcast, still rejecting the Big Bang.

Despite the problems with Popper's concept of falsification, it has been widely adopted, and practicing scientists often cite it as the solution to the demarcation problem.

Thomas Kuhn popularized the concept of the paradigm: a consensus among scientists on certain relevant assumptions, facts, and methodologies, which holds until accumulating anomalies force a paradigm shift. Any research done in a pre-paradigm state is considered non-scientific.

Imre Lakatos combined elements of Popper's and Kuhn's philosophies with his concept of research programmes. Programmes that succeed in predicting novel facts are scientific, while ones that fail to do so degenerate into pseudoscience.

The concept of non-overlapping magisteria (NOMA) is a relatively recent attempt to propose a clear demarcation between science and religion. It explicitly restricts science to its naturalistic foundations, meaning that no conclusions about supernatural phenomena like gods may be drawn from within the confines of science. The idea has come under heavy criticism for ignoring the blatantly irrational nature of modern-day fundamentalism, whose adherents have unfortunately not paid science the same respect.

It's been noted that people often call something pseudoscience if it threatens something important to science.[1] For example, young earth creationism is a threat to science education and funding, and confuses the public about what evolution and science actually are. This is opposed to, for example, string theory, which is probably unfalsifiable but doesn't actively hurt science.

This approach is problematic, not least because what a creationist and an "evolutionist" consider to be sane and useful to science are radically different, and so the demarcation of pseudoscience becomes an issue of ideology.

"Many philosophers have tried to solve the problem of demarcation in the following terms: a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly. But the history of thought shows us that many people were totally committed to absurd beliefs. If the strengths of beliefs were a hallmark of knowledge, we should have to rank some tales about demons, angels, devils, and of Heaven and Hell as knowledge. Scientists, on the other hand, are very sceptical even of their best theories. Newton's is the most powerful theory science has yet produced, but Newton himself never believed that bodies attract each other at a distance. So no degree of commitment to beliefs makes them knowledge. Indeed, the hallmark of scientific behaviour is a certain scepticism even towards one's most cherished theories. Blind commitment to a theory is not an intellectual virtue: it is an intellectual crime.

"Thus a statement may be pseudoscientific even if it is eminently 'plausible' and everybody believes in it, and it may be scientifically valuable even if it is unbelievable and nobody believes in it. A theory may even be of supreme scientific value even if no one understands it, let alone believes in it."

– Imre Lakatos, Science and Pseudoscience

Larry Laudan has proposed that there is no firm line of demarcation between science and non-science, and that any attempt to draw such a line is a pointless exercise.[3]

Others, like Susan Haack, while not rejecting the problem wholesale, argue that a misleading emphasis has been placed on it, which results in getting bogged down in arguments over definitions rather than evidence.[4]