
Submitted by DB Larson on Mon, 08/04/2008 - 12:05

CHAPTER 1

Background

To the man of the Stone Age the world in which he lived was a world of spirits. Powerful gods hurled shafts of lightning, threw waves against the shore, and sent winter storms howling down out of the north. Lesser beings held sway in the forests, among the rocks, and in the flowing streams. Malevolent demons, often in league with the mighty rulers of the elements, threatened the human race from all directions, and only the intervention of an assortment of benevolent, but capricious, deities made man’s continued existence possible at all.

This hypothesis that material phenomena are direct results of the actions of superhuman beings was the first attempt to define the fundamental nature of the physical universe: the first general physical concept. The scientific community currently regards it as a juvenile and rather ridiculous attempt at an explanation of nature, but actually it was plausible enough to remain essentially unchallenged for thousands of years. In fact, it is still accepted, in whole or in part, by a very substantial proportion of the population of the world. Such widespread acceptance is not as inexplicable as it may seem to the scientifically trained mind; it has been achieved only because the “spirit” concept does have some genuine strong points. Its structure is logical. If one accepts the premises he cannot legitimately contest the conclusions. Of course, these premises are entirely ad hoc, but so are many of the assumptions of modern science. The individual who accepts the idea of a “nuclear force” without demur is hardly in a position to be very critical of those who believe in the existence of “evil spirits.”

A special merit of this physical theory based on the “spirit” concept is that it is a comprehensive theory; it encounters no difficulties in assimilating new discoveries, since all that is necessary is to postulate some new demon or deity. Indeed, it can even deal with discoveries not yet made, simply by providing a “god of the unknown.” But even though a theory may have some good features, or may have led to some significant accomplishments, this does not necessarily mean that it is correct, nor that it is adequate to meet current requirements. Some three or four thousand years ago it began to be realized by the more advanced thinkers that the “spirit” concept had some very serious weaknesses. The nature of these weaknesses is now well understood, and no extended discussion of them is necessary. The essential point to be recognized is that at a particular stage in history the prevailing concept of the fundamental nature of the universe was subjected to critical scrutiny, and found to be deficient. It was therefore replaced by a new general physical concept.

This was no minor undertaking. The “spirit” concept was well entrenched in the current pattern of thinking, and it had powerful support from the “Establishment,” which is always opposed to major innovations. In most of the world as it then existed such a break with accepted thought would have been impossible, but for some reason an atmosphere favorable to critical thinking prevailed for a time in Greece and neighboring areas, and this profound alteration of the basic concept of the universe was accomplished there. The revolution in thought came slowly and gradually. Anaxagoras, who is sometimes called the first scientist, still attributed Mind to all objects, inanimate as well as animate. If a rock fell from a cliff, his explanation was that this action was dictated by the Mind of the rock. Even Aristotle retained the “spirit” concept to some degree. His view of the fall of the rock was that this was merely one manifestation of a general tendency of objects to seek their “natural place,” and he explained the acceleration during the fall as a result of the fact “that the falling body moved more jubilantly every moment because it found itself nearer home.”1 Ultimately, however, these vestiges of the “spirit” concept disappeared, and a new general concept emerged, one that has been the basis of all scientific work ever since.

According to this new concept, we live in a universe of matter: one that consists of material “things” existing in a setting provided by space and time. With the benefit of this conceptual foundation, three thousand years of effort by generation after generation of scientists have produced an immense systematic body of knowledge about the physical universe, an achievement which, it is safe to say, is unparalleled elsewhere in human life.

In view of this spectacular record of success, which has enabled the “matter” concept to dominate the organized thinking of mankind ever since the days of the ancient Greeks, it may seem inconsistent to suggest that this concept is not adequate to meet present-day needs, but the ultimate fate of any scientific concept or theory is determined not by what it has done but by what, if anything, it now fails to do. The graveyard of science is full of theories that were highly successful in their day, and contributed materially to the advance of scientific knowledge while they enjoyed general acceptance: the caloric theory, the phlogiston theory, the Ptolemaic theory of astronomy, the “billiard ball” theory of the atom, and so on. It is appropriate, therefore, that we should, from time to time, subject all of our basic scientific ideas to a searching and critical examination for the purpose of determining whether or not these ideas, which have served us well in the past, are still adequate to meet the more exacting demands of the present.

Once we subject the concept of a universe of matter to a critical scrutiny of this kind it is immediately obvious, not only that this concept is no longer adequate for its purpose, but that modern discoveries have completely demolished its foundations. If we live in a world of material “things” existing in a framework provided by space and time, as envisioned in the concept of a universe of matter, then matter in some form is the underlying feature of the universe: that which persists through the various physical processes. This is the essence of the concept. For many centuries the atom was accepted as the ultimate unit, but when particles smaller (or at least less complex) than atoms were discovered, and it was found that under appropriate conditions atoms would disintegrate and emit such particles in the process, the sub-atomic particles took over the role of the ultimate building blocks. But we now find that these particles are not permanent building blocks either.

For instance, the neutron, one of the constituents from which the atom is currently supposed to be constructed, spontaneously separates into a proton, an electron, and a neutrino. Here, then, one of the “elementary particles,” the supposedly basic and unchangeable units of matter, transforms itself into other presumably basic and unchangeable units. In order to save the concept of a universe of matter, strenuous efforts are now being made to explain events of this kind by postulating still smaller “elementary particles” from which the known sub-atomic particles could be constructed. At the moment, the theorists are having a happy time constructing theoretical “quarks” or other hypothetical sub-particles, and endowing these products of the imagination with an assortment of properties such as “charm,” “color,” and so on, to enable them to fit the experimental data.
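The decay referred to at the beginning of this paragraph is free-neutron beta decay. In standard modern notation (where the neutral particle emitted is, strictly, an electron antineutrino, though it is often loosely called a neutrino, as in the text) it reads:

```latex
n \;\rightarrow\; p + e^{-} + \bar{\nu}_{e}
```

A free neutron decays in this way with a mean lifetime of roughly fifteen minutes, the figure cited later in this chapter.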

But this descent to a lower stratum of physical structure could not be accomplished, even in the realm of pure hypothesis, without taking another significant step away from reality. At the time the atomic theory was originally proposed by Democritus and his contemporaries, the atoms of which they conceived all physical structures to be composed were entirely hypothetical, but subsequent observations and experiments have revealed the existence of units of matter that have exactly the properties that are attributed to the atoms by the atomic theory. As matters now stand, therefore, this theory can legitimately claim to represent reality. But there are no observed particles that have all of the properties that are required in order to qualify as constituents of the observed atoms. The theorists have therefore resorted to the highly questionable expedient of assuming, entirely ad hoc, that the observed sub-atomic particles (that is, particles less complex than atoms) are the atomic constituents, but have different properties when they are in the atoms than those they are found to have wherever they can be observed independently.

This is a radical departure from the standard scientific practice of building theories on solid factual foundations, and its legitimacy is doubtful, to say the least, but the architects of the “quark” theories are going a great deal farther, as they are cutting loose from objective reality altogether, and building entirely on assumptions. Unlike the hypothetical “constituents” of the atoms, which are observed sub-atomic particles with hypothetical sets of properties instead of the observed properties, the quarks are hypothetical particles with hypothetical properties.

The unreliability of conclusions reached by means of such forced and artificial constructions should be obvious, but it is not actually necessary to pass judgment on this basis, because irrespective of how far the subdividing of matter into smaller and smaller particles is carried, the theory of “elementary particles” of matter cannot account for the observed existence of processes whereby matter is converted into non-matter, and vice versa. This interconvertibility is positive and direct proof that the “matter” concept is wrong; that the physical universe is not a universe of matter. There clearly must be some entity more basic than matter, some common denominator underlying both matter and non-material phenomena.

Such a finding, which makes conventional thinking about physical fundamentals obsolete, is no more welcome today than the “matter” concept was in the world of antiquity. Old habits of thought, like old shoes, are comfortable, and the automatic reaction to any suggestion of a major change in basic ideas is resistance, if not outright resentment. But if scientific progress is to continue, it is essential not only to generate new ideas to meet new problems, but also to be equally diligent in discarding old ideas that have outlived their usefulness.

There is no actual need for any additional evidence to confirm the conclusion that the currently accepted concept of a universe of matter is erroneous. The observed interconvertibility of matter and non-matter is in itself a complete and conclusive refutation of the assertion that matter is basic. But when the inescapable finality of the answer that we get from this interconvertibility forces recognition of the complete collapse of the concept of a universe of matter, and we can no longer accept it as valid, it is easy to see that this concept has many other shortcomings that should have alerted the scientific community to question its validity long ago. The most obvious weakness of the concept is that the theories that are based upon it have not been able to keep abreast of progress in the experimental and observational fields. Major new physical discoveries almost invariably come as surprises, “unexpected and even unimagined surprises,”2 in the words of Richard Schlegel. They were not anticipated on theoretical grounds, and cannot be accommodated to existing theory without some substantial modification of previous ideas. Indeed, it is doubtful whether any modification of existing theory will be adequate to deal with some of the more recalcitrant phenomena now under investigation.

The current situation in particle physics, for instance, is admittedly chaotic. The outlook might be different if the new information that is rapidly accumulating in this field were gradually clearing up the situation, but in fact it merely seems to deepen the existing crisis. If anything in this area of confusion is clear by this time it is that the “elementary particles” are not elementary. But the basic concept of a universe of matter requires the existence of some kind of an elementary unit of matter. If the particles that are now known are not elementary units, as is generally conceded, then, since no experimental confirmation is available for the hypothesis of sub-particles, the whole theory of the structure of matter, as it now stands, is left without visible support.

Another prime example of the inability of present-day theories based on the “matter” concept to cope with new knowledge of the universe is provided by some of the recent discoveries in astronomy. Here the problem is an almost total lack of any theoretical framework to which the newly observed phenomena can be related. A book published a few years ago that was designed to present all of the significant information then available about the astronomical objects known as quasars contains the following statement, which is still almost as appropriate as when it was written:

It will be seen from the discussion in the later chapters that there are so many conflicting ideas concerning theory and interpretation of the observations that at least 95 percent of them must indeed be wrong. But at present no one knows which 95 percent.3

After three thousand years of study and investigation on the basis of theories founded on the “matter” concept we are entitled to something more than this. Nature has a habit of confronting us with the unexpected, and it is not very reasonable to expect the currently prevailing structure of theory to give us an immediate and full account of all details of a new area, but we should at least be able to place the new phenomena in their proper places with respect to the general framework, and to account for their major aspects without difficulty.

The inability of present-day theories to keep up with experimental and observational progress along the outer boundaries of science is the most obvious and easily visible sign of their inadequacies, but it is equally significant that some of the most basic physical phenomena are still without any plausible explanations. This embarrassing weakness of the current theoretical structure is widely recognized, and is the subject of comment from time to time. For instance, a press report of the annual meeting of the American Physical Society in New York in February 1969 contains this statement:

A number of very distinguished physicists who spoke reminded us of long-standing mysteries, some of them problems so old that they are becoming forgotten—pockets of resistance left far behind the advancing frontiers of physics.4

Gravitation is a good example. It is unquestionably fundamental, but conventional theory cannot explain it. As has been said it “may well be the most fundamental and least understood of the interactions.”5 When a book or an article on this subject appears, we almost invariably find the phenomenon characterized, either in the title or in the introductory paragraphs, as a “mystery,” an “enigma,” or a “riddle.”

But what is gravity, really? What causes it? Where does it come from? How did it get started? The scientist has no answers… in a fundamental sense, it is still as mysterious and inexplicable as it ever was, and it seems destined to remain so. (Dean E. Wooldridge)6

Electromagnetic radiation, another of the fundamental physical phenomena, confronts us with a different, but equally disturbing, problem. Here there are two conflicting explanations of the phenomenon, each of which fits the observed facts in certain areas but fails in others: a paradox which, as James B. Conant observed, “once seemed intolerable,” although scientists have now “learned to live with it.”7 This too, is a “deep mystery,”8 as Richard Feynman calls it, at the very base of the theoretical structure.

There is a widespread impression that Einstein solved the problem of the mechanism of the propagation of radiation and gave a definitive explanation of the phenomenon. It may be helpful, therefore, to note just what Einstein did have to say on this subject, not only as a matter of clarifying the present status of the radiation problem itself, but to illustrate the point made by P. W. Bridgman when he observed that many of the ideas and opinions to which the ordinary scientist subscribes “have not been thought through carefully but are held in the comfortable belief…that some one must have examined them at some time.”9

In one of his books Einstein points out that the radiation problem is an extremely difficult one, and he concludes that:

Our only way out seems to be to take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement.10

Here, in this statement, Einstein reveals (unintentionally) just what is wrong with the prevailing basic physical theories, and why a revision of the fundamental concepts of those theories is necessary. Far too many difficult problems have been evaded by simply assuming an answer and “taking it for granted.” This point is all the more significant because the shortcomings of the “matter” concept and the theories that it has generated are by no means confined to the instances where no plausible explanations of the observed phenomena have been produced. In many other cases where explanations of one kind or another have actually been formulated, the validity of these explanations is completely dependent on ad hoc assumptions that conflict with observed facts.

The nuclear theory of the atom is typical. Inasmuch as it is now clear that the atom is not an indivisible unit, the concept of a universe of matter demands that it be constructed of “elementary” material units of some kind. Since the observed sub-atomic particles are the only known candidates for this role it has been taken for granted, as mentioned earlier, that the atom is a composite of sub-atomic particles. Consideration of the various possible combinations has led to the hypothesis that is now generally accepted: an atom in which there is a nucleus composed of protons and neutrons, surrounded by some kind of an arrangement of electrons.

But if we undertake a critical examination of this hypothesis it is immediately apparent that there are direct conflicts with known physical facts. Protons are positively charged, and charges of the same sign repel each other. According to the established laws of physics, therefore, a nucleus composed wholly or partly of protons would immediately disintegrate. This is a cold, hard physical fact, and there is not the slightest evidence that it is subject to abrogation or modification under any circumstances or conditions. Furthermore, the neutron is observed to be unstable, with a lifetime of only about 15 minutes, and hence this particle fails to meet one of the most essential requirements of a constituent of a stable atom: the requirement of stability. The status of the electron as an atomic constituent is even more dubious. The properties which it must have to play such a role are altogether different from the properties of the observed electron. Indeed, as Herbert Dingle points out, we can deal with the electron as a constituent of the atom only if we ascribe to it “properties not possessed by any imaginable objects at all.”11
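The magnitude of the repulsion at issue here is easy to estimate from Coulomb's law. The sketch below computes the electrostatic force between two protons at a separation of one femtometer; the constants are standard CODATA values, and the 1 fm separation is an illustrative assumption (a typical nuclear distance), not a figure taken from the text:

```python
# Electrostatic repulsion between two protons at ~1 femtometer.
# Constants are standard CODATA values; the separation R is an
# illustrative assumption of a typical nuclear distance.
K = 8.9875517923e9    # Coulomb constant, N*m^2/C^2
E = 1.602176634e-19   # elementary charge, C
R = 1e-15             # assumed proton-proton separation, m

force = K * E**2 / R**2          # Coulomb's law: F = k*q1*q2 / r^2
print(f"repulsive force: {force:.0f} N")   # roughly 230 N
```

A force of hundreds of newtons acting on a particle whose mass is of the order of 10⁻²⁷ kg is, by everyday standards, enormous, which is why the counterbalancing attractive “nuclear force” discussed later in this chapter had to be postulated.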

A fundamental tenet of science is that the facts of observation and experiment are the scientific court of last resort; they pronounce the final verdict irrespective of whatever weight may be given to other considerations. As expressed by Richard Feynman:

If it (a proposed new law or theory) disagrees with experiment it is wrong. In that simple statement is the key to science…. That is all there is to it.12

The situation with respect to the nuclear theory is perfectly clear. The hypothesis of an atomic nucleus composed of protons and neutrons is in direct conflict with the observed properties of electric charges and the observed behavior of the neutron, while the conflicts between the atomic version of the electron and physical reality are numerous and very serious. According to the established principles of science, and following the rule that Feynman laid down in the foregoing quotation, the nuclear theory should have been discarded summarily years ago.

But here we see the power of the currently accepted fundamental physical concept. The concept of a universe of matter demands a “building block” theory of the atom: a theory in which the atom (since it is not an indivisible building block itself) is a “thing” composed of “parts” which, in turn, are “things” of a lower order. In the absence of any way of reconciling such a theory with existing physical knowledge, either the basic physical concept or standard scientific procedures and tests of validity had to be sacrificed. Since abandonment of the existing basic concept of the nature of the universe is essentially unthinkable in the ordinary course of theory construction, sound scientific procedure naturally lost the decision. The conflicts between the nuclear theory and observation were arbitrarily eliminated by means of a set of ad hoc assumptions. In order to prevent the breakup of the hypothetical nucleus by reason of the repulsion between the positive charges of the individual protons it was simply assumed that there is a “nuclear force” of attraction, which counterbalances the known force of repulsion. And in order to build a stable atom out of unstable particles it was assumed (again purely ad hoc) that the neutron, for some unknown reason, is stable within the nucleus. The more difficult problem of inventing some way of justifying the electron as an atomic constituent is currently being handled by assuming that the atomic electron is an entity that transcends reality. It is unrelated to anything that has ever been observed, and is itself not capable of being observed: an “abstract thing, no longer intuitable in terms of the familiar aspects of everyday experience,”13 as Henry Margenau describes it.

What the theorists’ commitment to the “matter” concept has done in this instance is to force them to invent the equivalent of the demons that their primitive ancestors called upon when similarly faced with something that they were unable to explain. The mysterious “nuclear force” might just as well be called the “god of the nucleus.” Like an ancient god, it was designed for one particular purpose; it has no other functions; and there is no independent confirmation of its existence. In effect, the assumptions that have been made in an effort to justify retention of the “matter” concept have involved a partial return to the earlier “spirit” concept of the nature of the universe.

Since it is now clear that the concept of a universe of matter is not valid, one may well ask: How has it been possible for physical science to make such a remarkable record of achievement on the basis of an erroneous fundamental concept? The answer is that only a relatively small part of current physical theory is actually derived from the general physical principles based on that fundamental concept. “A scientific theory,” explains R. B. Braithwaite, “is a deductive system in which observable consequences logically follow from the conjunction of observed facts with the set of the fundamental hypotheses of the system.”14 But modern physical theory is not one deductive system of the kind described by Braithwaite; it is a composite made up of a great many such systems. As expressed by Richard Feynman:

Today our theories of physics, the laws of physics, are a multitude of different parts and pieces that do not fit together very well. We do not have one structure from which all is deduced.15

One of the principal reasons for this lack of unity is that modern physical theory is a hybrid structure, derived from two totally different sources. The small-scale theories applicable to individual phenomena, which constitute the great majority of the “parts and pieces,” are empirical generalizations derived by inductive reasoning from factual premises. At one time it was rather confidently believed that the accumulation of empirically derived knowledge then existing, the inductive science commonly associated with the name of Newton, would eventually be expanded to encompass the whole of the universe. But when observation and experiment began to penetrate what we may call the far-out regions, the realms of the very small, the very large, and the very fast, Newtonian science was unable to keep pace. As a consequence, the construction of basic physical theory fell into the hands of a school of scientists who contend that inductive methods are incapable of arriving at general physical principles. “The axiomatic basis of theoretical physics cannot be an inference from experience, but must be free invention,”16 was Einstein’s dictum.

The result of the ascendancy of this “inventive” school of science has been to split physical science into two separate parts. As matters now stand, the subsidiary principles, those that govern individual physical phenomena and the low-level interactions, are products of induction from factual premises. The general principles, those that apply to large scale phenomena or to the universe as a whole, are, as Einstein describes them, “pure inventions of the human mind.” Where the observations are accurate, and the generalizations are justified, the inductively derived laws and theories are correct, at least within certain limits. The fact that they constitute by far the greater part of the current structure of physical thought therefore explains why physical science has been so successful in practice. But where empirical data is inadequate or unavailable, present-day science relies on deductions from the currently accepted general principles, the products of pure invention, and this is where physical theory has gone astray. Nature does not agree with these “free inventions of the human mind.”

This disagreement with nature should not come as a surprise. Any careful consideration of the situation will show that “free invention” is inherently incapable of arriving at the correct answers to problems of long standing. Such problems do not continue to exist because of a lack of competence on the part of those who are trying to solve them, or because of a lack of adequate methods of dealing with them. They exist because some essential piece or pieces of information are missing. Without this essential information the correct answer cannot be obtained (except by an extremely unlikely accident). This rules out inductive methods, which build upon empirical information. Invention is no more capable of arriving at the correct result without the essential information than induction, but it is not subject to the same limitations. It can, and does, arrive at some result.

General acceptance of a theory that is almost certain to be wrong is, in itself, a serious impediment to scientific progress, but the detrimental effect is compounded by the ability of these inventive theories to evade contradictions and inconsistencies by further invention. Because of the almost unlimited opportunity to escape from difficulties by making further ad hoc assumptions, it is ordinarily very difficult to disprove an invented theory. But the definite proof that the physical universe is not a universe of matter now automatically invalidates all theories, such as the nuclear theory of the atom, that are dependent on this “matter” concept. The essential piece of information that has been missing, we now find, is the true nature of the basic entity of which the universe is composed.

The issue as to the inadequacy of present-day basic physical theory does not normally arise in the ordinary course of scientific activity because that activity is primarily directed toward making the best possible use of the tools that are available. But when the question is actually raised there is not much doubt as to how it has to be answered. The answer that we get from P. A. M. Dirac is this:

The present stage of physical theory is merely a steppingstone toward the better stages we shall have in the future. One can be quite sure that there will be better stages simply because of the difficulties that occur in the physics of today.17

Dirac admits that he and his fellow physicists have no idea as to the direction from which the change will come. As he says, “there will have to be some new development that is quite unexpected, that we cannot even make a guess about.” He recognizes that this new development must be one of major significance. “It is fairly certain that there will have to be drastic changes in our fundamental ideas before these problems can be solved”17 he concludes. The finding of this present work is that “drastic changes in our fundamental ideas” will indeed be required. We must change our basic physical concept: our concept of the nature of the universe in which we live.

Unfortunately, however, a new basic concept is never easy to grasp, regardless of how simple it may be, and how clearly it is presented, because the human mind refuses to look at such a concept in any simple and direct manner, and insists on placing it within the context of previously existing patterns of thought, where anything that is new and different is incongruous at best, and more often than not is definitely absurd. As Butterfield states the case:

Of all forms of mental activity, the most difficult to induce even in the minds of the young, who may be presumed not to have lost their flexibility, is the art of handling the same bundle of data as before, but placing them in a new system of relations with one another by giving them a different framework.18

In the process of education and development, each human individual has put together a conceptual framework which represents the world as he sees it, and the normal method of assimilating a new experience is to fit it into its proper place in this general conceptual framework. If the fit is accomplished without difficulty we are ready to accept the experience as valid. If a reported experience, or a sensory experience of our own, is somewhat outside the limits of our complex of beliefs, but not definitely in conflict, we are inclined to view it skeptically but tolerantly, pending further clarification. But if a purported experience flatly contradicts a fundamental belief of long standing, the immediate reaction is to dismiss it summarily.

Some such semi-automatic system for discriminating between genuine items of information and the many false and misleading items that are included in the continuous stream of messages coming in through the various senses is essential in our daily life, even for mere survival. But this policy of using agreement with past experience as the criterion of validity has the disadvantage of limiting the human race to a very narrow and parochial view of the world, and one of the most difficult tasks of science has been, and to some extent continues to be, overcoming the errors that are thus introduced into thinking about physical matters. Only a few of those who give any serious consideration to the subject still believe that the earth is flat, and the idea that this small planet of ours is the center of all of the significant activities of the universe no longer commands any strong support, but it took centuries of effort by the most advanced thinkers to gain general acceptance of the present-day view that, in these instances, things are not what our ordinary experience would lead us to believe.

Some very substantial advances in scientific methods and equipment in recent years have enabled investigators to penetrate a number of far-out regions that were previously inaccessible. Here again it has been demonstrated, as in the question with respect to the shape of the earth, that experience within the relatively limited range of our day-to-day activities is not a reliable guide to what exists or is taking place in distant regions. In application to these far-out phenomena the scientific community therefore rejects the “experience” criterion, and opens the door to a wide variety of hypotheses and concepts that are in direct conflict with normal experience: such things as events occurring without specific causes, magnitudes that are inherently incapable of measurement beyond a certain limiting degree of precision, inapplicability of some of the established laws of physics to certain unusual phenomena, events that defy the ordinary rules of logic, quantities whose true magnitudes are dependent on the location and movement of the observer, and so on. Many of these departures from “common sense” thinking, including almost all of those that are specifically mentioned in this paragraph, are rather ill advised in the light of the facts that have been disclosed by this present work, but this merely emphasizes the extent to which scientists are now willing to go in postulating deviations from everyday experience.

Strangely enough, this extreme flexibility in the experience area coexists with an equally extreme rigidity in the realm of ideas. The general situation here is the same as in the case of experience. Some kind of semi-automatic screening of the new ideas that are brought to our attention is necessary if we are to have any chance to develop a coherent and meaningful understanding of what is going on in the world about us, rather than being overwhelmed by a mass of erroneous or irrelevant material. So, just as purported new experiences are measured against past experience, the new concepts and theories that are proposed are compared with the existing structure of scientific thought and judged accordingly.

But just as the “agreement with previous experience” criterion breaks down when experiment or observation enters new fields, so the “agreement with orthodox theory” criterion breaks down when it is applied to proposals for revision of the currently accepted theoretical fundamentals. When agreement with the existing theoretical structure is set up as the criterion by which the validity of new ideas is to be judged, any new thought that involves a significant modification of previous theory is automatically branded as unacceptable. Whatever merits it may actually have, it is, in effect, wrong by definition.

Obviously, a strict and undeviating application of this “agreement” criterion cannot be justified, as it would bar all major new ideas. A new basic concept cannot be fitted into the existing conceptual framework, as that framework is itself constructed of other basic concepts, and a conflict is inevitable. As in the case of experience, it is necessary to recognize that there is an area in which this criterion is not legitimately applicable. In principle, therefore, practically everyone concedes that a new theory cannot be expected to agree with the theory that it proposes to replace, or with anything derived directly or indirectly from that previous theory.

In spite of the nearly unanimous agreement on this point as a matter of principle, a new idea seldom gets the benefit of it in actual practice. In part this is due to the difficulties that are experienced in trying to determine just what features of current thought are actually affected by the theory replacement. This is not always clear on first consideration, and the general tendency is to overestimate the effect that the proposed change will have on prevailing ideas. In any event, the principal obstacle that stands in the way of a proposal for changing a scientific theory or concept is that the human mind is so constituted that it does not want to change its ideas, particularly if they are ideas of long standing. This is not so serious in the realm of experience, because the innovation that is required here generally takes the form of an assertion that “things are different” in the particular new area that is under consideration. Such an assertion does not involve a flat repudiation of previous experience; it merely contends that there is a hitherto unknown limit beyond which the usual experience is no longer applicable. This is the explanation for the almost incredible latitude that the theorists are currently being allowed in the “experience” area. The scientist is prepared to accept the assertion that the rules of the game are different in a new field that is being investigated, even where the new rules involve such highly improbable features as events that happen without causes and objects that change their locations discontinuously.

On the other hand, a proposal for modification of an accepted concept or theory calls for an actual change in thinking, something that the human mind almost automatically resists, and generally resents. Here the scientist usually reacts like any layman; he promptly rejects any intimation that the rules which he has already set up, and which he has been using with confidence, are wrong. He is horrified at the mere suggestion that the many difficulties that he is experiencing in dealing with the “parts” of the atom, and the absurdities or near absurdities that he has had to introduce into his theory of atomic structure are all due to the fact that the atom is not constructed of “parts.”

Inasmuch as the new theoretical system presented in this volume and those that are to follow not only requires some drastic reconstruction of fundamental physical theory, but goes still deeper and replaces the basic concept of the nature of the universe, upon which all physical theory is constructed, the conflicts with previous ideas are numerous and severe. If appraised in the customary manner by comparison with the existing body of thought, many of the conclusions that are reached herein must necessarily be judged as little short of outrageous. But there is practically unanimous agreement among those who are in the front rank of scientific investigators that some drastic change in theoretical fundamentals is inevitable. As Dirac said in the statement previously quoted, “There will have to be some new development that is quite unexpected, that we cannot even make a guess about.” The need to abandon a basic concept, the concept of a universe of matter that has guided physical thinking for three thousand years, is an “unexpected development,” just the kind of thing that Dirac predicted. Such a basic change is a very important step, and it should not be lightly taken, but nothing less drastic will suffice. Sound theory cannot be built on an unsound foundation. Logical reasoning and skillful mathematical manipulation cannot compensate for errors in the premises to which they are applied. On the contrary, the better the reasoning the more certain it is to arrive at the wrong results if it starts from the wrong premises.