Let’s start from the beginning. The era of quantum theory kicked
off in 1900 with a discovery made by Max Planck. Planck was studying
the so-called black body radiation problem. Classical physics predicted
that black bodies should glow bright blue, a stark contradiction to the
experience of steelworkers everywhere. In order to simplify the
mathematical calculations, Planck restricted the vibration of the matter
particles according to the following rule: E = nhf, where E is
the particle’s energy, n is any integer, f is the
frequency of vibration, and h is a constant chosen by Planck. This
rule restricts the particles to energies that are certain multiples of
their vibration frequency. Planck’s intention was to let h approach
zero; however, this only predicted the same blue radiation as before. By
chance, Planck discovered that if he set h to a certain value,
the calculations matched the experimental results exactly. This
special value for h is now known as Planck’s
constant and is also called the “quantum of action.” Planck
showed that energy can only be emitted and absorbed in tiny packets. Each
packet of energy became known as a quantum (plural: quanta).

In 1905, Einstein produced three major publications that revolutionized
the world of physics. The first of these papers proposed a theory
in which a beam of light behaves like a shower of tiny particles. Picking
up where Planck left off, Einstein showed that energy is not only absorbed
and emitted in quanta, but energy itself comes in discrete quantum packets. Einstein
demonstrated his theory by explaining the photoelectric effect, light’s
ability to knock electrons out of metal. The fact that individual
electrons could be detected as they were knocked off a metal surface
seemed to imply that light was behaving like a particle. Moreover,
reducing the intensity of the light beam did not affect the energy of
the ejected electron. On the other hand, the energy of the ejected
electron could be affected by changing the frequency of the light. Einstein
proposed that these light particles, called photons, come in packets,
each with energy given by Planck’s expression: E = hf, where h is
Planck’s constant, and f is the light’s frequency. This
formula predicts that photons of high-frequency light have more energy
than photons of low-frequency light.
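
To make the relation concrete, here is a minimal numeric sketch in Python. The value of h is the standard textbook constant; the red and violet frequencies are typical round values chosen for the example:

```python
# Minimal numeric check of E = h*f for photons of different colors.
H = 6.626e-34  # Planck's constant, J*s

def photon_energy(frequency_hz: float) -> float:
    """Energy in joules of a single photon at the given frequency."""
    return H * frequency_hz

# Red light (~4.3e14 Hz) vs. violet light (~7.5e14 Hz):
for name, f in [("red", 4.3e14), ("violet", 7.5e14)]:
    print(f"{name}: E = {photon_energy(f):.2e} J")
# A violet photon carries roughly 1.7x the energy of a red photon.
```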

Einstein’s discovery was completely contradictory to the previously
held scientific theories of electromagnetic radiation. In 1864,
James Clerk Maxwell formalized the basic equations that govern electricity
and magnetism, which are both now known to be aspects of a single entity
we call the electromagnetic field. According to Maxwell’s
theory, light is a wave. In other words, light is an electromagnetic
vibration at a particular frequency. The electromagnetic field
is actually the spectrum of all possible frequencies of light. In
fact, the visible light we perceive with our eyes is a tiny fraction
of this spectrum. Maxwell’s theory predicted the existence
of light waves at lower and higher frequencies than visible light. Shortly
thereafter, radio waves were discovered, as were X-rays, infrared waves,
ultraviolet waves, microwaves, and gamma rays. These different
types of waves are just different names for light at various wavelengths. Einstein’s
theory, on the other hand, demonstrated that light behaves like a particle.

Further evidence supporting Einstein’s quantum theory of light
came in 1923 when Arthur Compton made an important discovery. Compton’s
experiment involved shining a beam of X-rays into a gas of loosely bound
electrons. Compton showed
that X-rays behave like particles, which bounce off the electron. Both
the X-ray and the electron scatter at specific angles like two billiard
balls colliding. He also formulated an expression for the momentum p of
the light particle given by the following expression: p = hk,
where h is Planck’s constant, and k is the light’s spatial
frequency. Surprisingly, in 1914 the Braggs, father and son, had used crystal
diffraction to show that X-rays behaved like waves. This type of
experiment is known as Bragg scattering. Physicists at this time
were confronted with contradictory evidence which suggested that light
behaves both like a particle and a wave!

The plot thickened even more in 1924 when Louis de Broglie proposed
that every particle of matter was associated with a wave. De Broglie
reached this conclusion by using Einstein’s two equations for energy: E = mc²
and E = hf. De Broglie claimed that the wavelength of a matter-wave
is given by the expression λ = h / P, where h is again Planck’s
constant, and P is the momentum of the particle. De Broglie’s outlandish
idea, that matter is actually a wave, was soon proven experimentally to be
correct.
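
As a quick sanity check on de Broglie’s formula, the following sketch (assuming the standard values for h and the electron mass) computes the matter-wavelength of a fairly fast electron:

```python
# de Broglie wavelength: lambda = h / p, with momentum p = m*v.
H = 6.626e-34           # Planck's constant, J*s
M_ELECTRON = 9.109e-31  # electron mass, kg

def de_broglie_wavelength(mass_kg: float, speed_m_s: float) -> float:
    """Wavelength in meters of a matter-wave with momentum p = m*v."""
    return H / (mass_kg * speed_m_s)

# An electron moving at 1% of light speed:
lam = de_broglie_wavelength(M_ELECTRON, 0.01 * 3.0e8)
print(f"{lam:.2e} m")  # ~2.4e-10 m, roughly the size of an atom
```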

In the classical view of physical reality, there was no way to reconcile
the differences between waves and particles. A wave can spread
out over a large area, be split up into an infinite number of waves, and
two waves can interpenetrate and emerge unchanged. On the other
hand, a particle is located in a tiny region, travels in one direction,
and crashes into other particles. Although waves and particles
appear to be contradictory aspects of reality, we have discovered that
all waves are also particles, and all particles are also waves.

In order to further illustrate this peculiar wave/particle coexistence,
let’s briefly consider a simple type of quantum experiment. Imagine
we have an electron gun, a device that produces a beam of electrons. Also,
our experiment will include a phosphor screen. If an individual
phosphor in the screen is struck by an electron, the phosphor gains a
little energy and immediately returns to its ground state by emitting
a photon of light. Firing the electron gun at the phosphor screen
produces a point of light on the screen. In this way, we can easily
observe the particle nature of the electron.

Next, in between the gun and the screen, let’s place a card that
has a small hole in the center. If our hole is sufficiently small,
we will observe a very different pattern than before. The
image on the screen is no longer a point of light, but a series of bright
and dark concentric rings resembling a bull’s eye target. This
pattern is caused by wave diffraction, and the light and dark rings are
caused by wave interference. Interference is an inherent property
of all wavelike interactions. If two waves come together that are
completely in phase, the resulting wave has an amplitude which is the
sum of the two original wave amplitudes. If the two waves are completely
out of phase, the original waves simply cancel each other out. In
general when waves meet, their amplitudes add. This rule is known as
the wave superposition principle, and it applies to all types of waves.

The bull’s eye pattern on the phosphor screen clearly demonstrates
the wavelike nature of the electron. This pattern is created by
a large number of electrons, which individually look like little points
of light on the screen. That is, each electron is observed only
as a tiny flash of light, but after a large number of electrons have
hit the screen, the pattern of the bull’s eye emerges. This
can be demonstrated in the following way. If we lower the intensity
of the beam, such that only one electron can pass through the hole at
a time, we would be able to observe each electron hit the phosphor screen. The
exact location of each impact is completely unpredictable. However,
if we use a photographic plate to record each impact, and allow the system
to continue firing one electron at an interval of, say, one every ten minutes,
then, when we observe the plate later on, we will see the same bull’s
eye pattern as before. This experiment seems to imply that although
it appears on the screen as a particle, each electron by itself travels
from the gun to the screen as though it were a wave. It should
be noted that this type of experiment could have been done using any
type of charged particle, or any frequency of light such as infrared
or X-rays.

Entities, Attributes, Waveforms, and Other Finite Fields of Possibility

Quantum theory is the method that has been developed to analyze experiments
such as the one outlined above. This theory was created to deal with
tiny creatures such as atoms, electrons, and photons. However, quantum
theory has also proved successful in dealing with the atomic nucleus as well
as subatomic particles such as quarks, gluons, and leptons. In principle,
quantum theory also applies to the macroscopic world which we inhabit, as well
as large scale astronomical entities such as galaxies and black holes. To
date, quantum theory has successfully predicted the results of every experiment
the human mind can devise. However, the predictive strategy of quantum
theory is quite different from classical mechanics in one fundamental way:
quantum theory cannot predict what will happen in a measurement situation;
it can only predict the statistical probabilities of how likely an event is
to occur. For any quantum entity, quantum theory predicts the probability
of each possible value of a specific physical attribute. Depending on
the nature of the measurement situation, a quantum entity may demonstrate many
different types of attributes. Quantum theory does not say anything about
what happens when the quantum entity is not being measured.

First of all, let’s discuss what we mean by quantum entity, and how
the theory addresses these entities. A quantum entity is any thing, regardless
of its size, which exhibits both wave and particle characteristics. Usually,
a quantum entity will demonstrate either particle nature or wave nature depending
on the type of measurement it is subjected to. A typical quantum entity
could be a photon, an atom, or an electron; but a human, a planet, or the entire
universe could be considered a quantum entity as well. In this project,
we will refer to a quantum entity as a quon for simplicity. Instead
of dealing with quantum entities specifically, quantum theory represents the quon with
a mathematical device called the wave function, usually labeled Ψ or psi. The
first step in any quantum experiment is to associate a particular wave function
to the relevant quantum entity.

In most ways, the wave function, Ψ, is just like any other wave we are
familiar with. Before discussing quantum waves, let’s take a look
at waves in general. A wave is typically characterized by qualities known
as amplitude, wavelength, frequency, and phase. The amplitude of a wave
is a measure of the deviation from its rest state. In general, the amplitude
is the maximum height of a wave. If the wave is cyclic, then the wavelength
is the space spanned by one cycle. The length in time of one cycle is
called the period. The number of complete cycles in a certain interval
of time is called the temporal frequency. The number of complete cycles
in a certain interval of space is called the spatial frequency. The phase
of a point in a cyclic wave is a measure of how far into a cycle that point
is located.

As mentioned earlier, all waves obey the superposition principle, which states:
when two waves meet, their amplitudes add. After the two waves move through
each other, each wave retains its respective amplitude, and is thus unchanged
by the temporary superposition. As we shall see later, any two
waves can interact and depart each other’s company with their respective
amplitudes intact, but the phases of these two waves become entangled,
and are thus phase correlated for the rest of eternity. When any two
waves meet, the superposition of amplitudes depends on the phases of each
wave. This is characterized by constructive and destructive interference. For
example, if two waves, each with amplitude of one, meet each other completely
in phase, the resulting amplitude is two. If two waves, each with amplitude
of one, meet each other completely out of phase, the resulting amplitude is
zero. If two waves, each with amplitude of one, meet each other at arbitrary
phases, the resulting amplitude will be between zero and two. Quantum
waves have all the characteristics of ordinary waves that have been outlined
above.
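
A short numeric sketch of this phase dependence: representing each unit-amplitude wave as a complex phasor, the amplitude of the superposition runs from two (completely in phase) down to zero (completely out of phase):

```python
import numpy as np

# Superposing two unit-amplitude waves with a relative phase `dphi`.
# In phase (dphi = 0) the amplitudes add to 2; fully out of phase
# (dphi = pi) they cancel to 0; intermediate phases land in between.
for dphi in [0.0, np.pi / 2, np.pi]:
    amplitude = abs(1.0 + np.exp(1j * dphi))  # |e^(i*0) + e^(i*dphi)|
    print(f"relative phase {dphi:.2f} rad -> amplitude {amplitude:.2f}")
```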

In general, the energy of a wave is a measure of intensity, and is given by
the square of the amplitude. For example, if you double a wave’s
amplitude you quadruple the wave’s energy. Quantum waves
are different from ordinary waves in one important way. Quantum waves
do not have energy. Instead, the square of the amplitude is a measure
of probability. This idea lies at the heart of how quantum theory works. To
predict the results of an experiment, we must find the amplitudes of each possible
value of the attribute we are measuring, and then we square the amplitudes
to get a probability distribution which indicates how likely each possibility
is to occur.
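
As a toy illustration of this recipe, the sketch below uses two made-up amplitudes for a two-outcome measurement; squaring their magnitudes and normalizing yields the probability distribution:

```python
import numpy as np

# Turn a list of (hypothetical) possibility amplitudes into a
# probability distribution: square the magnitudes, then normalize.
amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.8j])  # two possible outcomes
probs = np.abs(amplitudes) ** 2
probs /= probs.sum()  # ensure the probabilities sum to 1
print(probs)  # [0.36 0.64] -- how likely each outcome is to occur
```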

Before we can make any type of measurement, we must first decide what attribute
we want to measure. In general, quantum entities have two kinds of attributes:
static and dynamic. The static attributes of an elementary quantum entity
always have the same value. The major static attributes are mass (M),
charge (Q), and spin magnitude (S). The values for the dynamic attributes
of a quantum entity change over time. The major dynamic attributes are
position (X), momentum (P), energy (E), and spin orientation (Sz). Before we can understand how quantum theory represents the dynamic
attributes of a quantum entity, we must first discuss some more basic properties
of waves in general.

Early in the 1800s, a man named Joseph Fourier developed a new language
which could be used to express any type of wave. Fourier showed that
any wave could be decomposed into a unique recipe of sine waves. Each
sine wave has a particular value of frequency k, amplitude a,
and phase p. The process of breaking any wave up into a bunch
of sine waves is known as Fourier analysis. Conversely, any wave can
be constructed by putting together a bunch of sine waves, a process known as
Fourier synthesis.
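
A minimal sketch of Fourier analysis in practice, using numpy’s FFT; the wave below is built from two arbitrarily chosen sine components, and the analysis recovers their recipe:

```python
import numpy as np

# Fourier analysis in miniature: recover the sine-wave recipe of a wave.
# The wave is built from two arbitrary components: amplitude 1.0 at 3 Hz
# and amplitude 0.5 at 7 Hz, sampled over a 1-second window.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
wave = 1.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

spectrum = np.abs(np.fft.rfft(wave)) / (len(t) / 2)  # per-frequency amplitudes
# Because the window is 1 second long, bin k corresponds to k Hz.
for f_hz in (3, 7):
    print(f"{f_hz} Hz component: amplitude {spectrum[f_hz]:.2f}")
```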

Sine waves represent one type of waveform family. Another type of waveform
family is the impulse family. An impulse wave is an infinitely narrow
spike located at a specific location. Just as Fourier showed that any
wave could be broken up into sine waves, the same wave could be broken up into
impulse waves. The basis of digital electronic music is that any wave
can be constructed by putting together a bunch of these impulse waves.

The sine waveform family and the impulse waveform family are just two examples
of waveform families; in fact, there are an infinite number of waveform families. Any
imaginable wave can be decomposed into a unique recipe of particular members
of any type of waveform family. This idea is sometimes called the synthesizer
theorem. Any wave can be expressed as a unique sum of members from any
particular waveform family. This means that any wave can be taken apart
in an infinite number of ways, depending on which waveform family we choose
to use. Conversely, if we choose a particular waveform family, we can
create any wave imaginable.

Quantum theory makes use of this so-called synthesizer theorem in a peculiar
way. Quantum theory represents each dynamic attribute with a particular
waveform family. In other words, every possible waveform family corresponds
to some dynamic attribute of the quantum entity. The individual members
of each waveform family represent different physical values of the dynamic
attribute. To illustrate, let’s give a few well-known examples.

First of all, the position attribute is associated with the impulse waveform
family. Each individual impulse wave is a narrow spike, characterized
by a value x which describes the position of that particular impulse wave. Each
possible position attribute value, X, of a quon is associated with the
location of a specific impulse wave at position x. The momentum
attribute is associated with the spatial sine waveform family. Each member
of this waveform family is characterized by a specific value for spatial frequency, k. A
specific momentum value, P, corresponds to each member of the spatial
sine waveform family according to the following rule: P = hk, where h is
Plank’s constant. The energy attribute is associated with the temporal
sine waveform family. Each individual member of this family is characterized
by a specific value, f, which represents temporal frequency. The
energy value associated with each individual wave in this family is given by
the following rule: E = hf, where h is again Plank’s constant. The
relationships for momentum and energy are just de Broglie’s law for the
matter-wave wavelengths, and Einstein’s relation for the energy of a
quantum of light. The waveform family associated with the spin orientation
attribute is known as the spherical harmonic family. Each member of this
family is distinguished by two values: m and n, such that m and n are
both positive integers. The spin orientation value, Sz, in the polar direction is given by the following rule: Sz = m² / (n² + n).
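
The momentum and energy rules above are simple enough to encode directly; the sketch below writes them as tiny functions (the example value of k is arbitrary):

```python
# The translation rules quoted above: each member of a waveform family
# (labeled by its spatial frequency k or temporal frequency f) maps to a
# physical attribute value.
H = 6.626e-34  # Planck's constant, J*s

def momentum_from_spatial_frequency(k_per_m: float) -> float:
    return H * k_per_m  # P = h*k  (de Broglie's rule)

def energy_from_temporal_frequency(f_hz: float) -> float:
    return H * f_hz     # E = h*f  (Planck/Einstein's rule)

# A spatial sine wave with k = 1e10 cycles per meter corresponds to:
print(momentum_from_spatial_frequency(1e10))  # ~6.6e-24 kg*m/s
```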

Quantum theory works by associating each dynamic attribute with a particular
waveform family. The relationship between the values of an attribute
and the individual members of a particular waveform family is given by a rule,
which can be quite simple in some cases, or very complicated in others. For
the most part, physicists are concerned with the major dynamic attributes,
which have been described above; however, there are an infinite number of different
dynamic attributes since there are infinitely many waveform families.

A specific waveform family has special types of relationships with other waveform
families. To understand these relationships, we must first introduce
some terminology. By the synthesizer theorem, we know that any arbitrary
wave can be broken up into different sets of component waves, depending on
which waveform family we choose. Breaking up an arbitrary wave into component
waves is analogous to putting the original wave through a prism. For
example, Newton showed that white
light could be passed through a prism to yield a rainbow of colors, known as
the spectrum of visible light.

If we analyze an arbitrary wave with different waveform prisms, we will discover
that some prisms break the wave into a small number of components while some
prisms break the wave into a large number of components. The number of
waveform components into which a prism splits a wave is known as the wave’s spectral
width, or bandwidth. If a particular waveform prism breaks an arbitrary
wave into a small bandwidth of components, we could say that the waveform family
is similar to the original wave. If a waveform prism produces a large
bandwidth of components, we could say that the waveform family is not similar
to the original wave. If we take an arbitrary wave and put it through
its own family prism, the resulting bandwidth will consist of only one wave
component, which is the minimum spectral width. For example, if we put
any sine wave through a sine waveform prism, the result will yield only one
wave, which is exactly the original sine wave. We will refer to this prism,
which does not split the original wave at all, as the kin prism. For
any arbitrary wave, there exists such a kin prism, which does not decompose
the wave into any components except for itself. Conversely, for any arbitrary
wave, there exists a particular waveform prism, which breaks the original wave
into the largest possible bandwidth. This is to say that for any wave,
there exists a waveform family which resembles the original wave the least. We
will refer to this prism, which yields the maximum spectral width, as the conjugate
prism. Thus, every wave belongs to a unique waveform family, and every
waveform family bears a special relationship to a unique conjugate waveform
family. An example of such a conjugate relationship is found between
the sine waveform family and the impulse waveform family. Because of
their mutual relationship to an arbitrary wave, we could say that these two
waveform families are conjugate to each other.

To illustrate this relationship between a prism and its conjugate prism, let’s
consider the following experiment. Imagine we have identified two conjugate
waveform families, called A and Z. First, take any arbitrary wave X, and
analyze this wave by using the A prism. The result will be a particular
bandwidth ΔA of output waveforms. If we analyze X by using the Z
prism, we will get a bandwidth ΔZ of output waveforms. Because A
and Z are conjugate waveform families, if X is very similar to A, then X will
not be very similar to Z. Conversely, if X is very similar to Z, then
X will not be very similar to A. Consequently, there exists a limit on how
small both bandwidths of A and Z can get for the same input wave. This
limit is usually expressed by the following relation: ΔA · ΔZ ≥ C,
where A and Z are conjugate waveform families, and C is some positive constant. We
will refer to this relationship as the spectral area code. The spectral
area code is a fundamental feature of all waves, including quantum waves.
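
The spectral area code can be checked numerically. In the sketch below, a Gaussian pulse is used purely as a convenient test wave; its spatial width and spectral width are measured with an FFT, and their product comes out the same no matter how narrow or wide the pulse is made:

```python
import numpy as np

# Numeric peek at the "spectral area code": for a Gaussian pulse, the
# spatial width dx and the spectral width dk trade off so dx*dk is fixed.
x = np.linspace(-50, 50, 4096)
for sigma in (0.5, 1.0, 2.0):
    wave = np.exp(-x**2 / (2 * sigma**2))
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(wave)))
    k = np.fft.fftshift(np.fft.fftfreq(len(x), d=x[1] - x[0])) * 2 * np.pi
    dx = np.sqrt(np.sum(x**2 * wave**2) / np.sum(wave**2))          # rms width in x
    dk = np.sqrt(np.sum(k**2 * spectrum**2) / np.sum(spectrum**2))  # rms width in k
    print(f"sigma={sigma}: dx * dk = {dx * dk:.2f}")  # ~0.5 every time
```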

In the above example A and Z are as dissimilar as two waveform families can
be. Now suppose we have chosen another waveform family K. Let’s
assume that K is not very similar to A, but K is more similar to A than Z is. If
we analyze the original wave X using the A and K prisms, there will still be
a limit on how small both bandwidths of A and K can get for the same input. This
can be expressed in the same way as above such that ΔA · ΔK ≥ C’,
where C’ is another constant. However, since A and K are more similar
than A and Z, the constant C’ will be less than C. If we were to
use two waveform prisms that are very similar such as A and B, the spectral
area code may yield a resolving limit that is close to zero. In other
words, if two waveform families are very similar, there is no limit on how
small both bandwidths can be for the same input wave. On the other hand,
if two waveform families are strikingly different in character, the spectral
area code limits the product of the two spectral widths. In this case,
a small resulting bandwidth from one prism means that the resulting bandwidth
of the other prism is huge.

In quantum theory, every dynamic attribute is represented by a particular
waveform family and a specific rule, which translates how individual members
of the family correspond to particular values of the physical attribute. As
a direct consequence of the spectral area code, every conceivable dynamic attribute
bears special relationships to other particular types of dynamic attributes. Each
dynamic attribute has a conjugate attribute in the same sense that each waveform
family has a conjugate family. In general, if two dynamic attributes
are related in this way, such that the spectral area code applies, we could
say that each attribute is conjugate to the other.

We noted earlier that the sine family and the impulse family are conjugates. We
also know that the sine family can be associated with the momentum attribute
of a quon, and the impulse family can be associated with the position
attribute of a quon. The spectral area code can be translated
into an expression for the physical dynamic attributes of position and momentum
in the following way: ΔX · ΔP ≥ h. Here, ΔX
represents the uncertainty in our measurement of the position attribute, ΔP
represents the uncertainty in our measurement of the momentum attribute, and h is
Planck’s constant. This relationship is commonly known as the Heisenberg
uncertainty principle. The result of this relation is that we can know
either position or momentum with perfect accuracy; however, since position
and momentum are conjugate attributes, we cannot define both attributes at
the same time with perfect accuracy. In other words, if we know the exact
value of one of these attributes, the value of the other attribute becomes
maximally uncertain. It is possible that two dynamic attributes are independent
of each other, in which case we can know the values of both simultaneously
with perfect accuracy. The uncertainty principle applies to dynamic attributes
which are not independent. The word independent is not really
used here in any rigorously defined manner; however, we shall soon see that
the condition which determines whether the uncertainty principle applies actually
boils down to the commutative properties of specific matrices.

Heisenberg’s uncertainty principle directly implies that the assumptions
of classical physics were incredibly naïve. Before quantum theory,
physics was based on the formulation of deterministic physical laws, which
could be used to predict the exact outcome of any system. In general,
classical systems were represented by relationships in phase space. Every
particle, or object, in phase space is characterized by a definite position
and momentum. Assuming that one knows all the laws which govern a system,
as well as the position and momentum values of a particle in such a system,
one should be able to predict exactly how the system will change with time. This
ideal formed the basis of classical physics and inspired the conception of
a universe that operates like a giant deterministic machine. However,
according to the scientific discoveries of quantum theory in the early twentieth
century, it is impossible to know the exact value of an object’s position
and momentum at the same time. Thus, Heisenberg’s uncertainty principle
delivered a fatal blow to the antiquated conception of long term predictive
determinism in physical systems. In general, quantum theory does not
predict the result of a measurement on a physical system at all; instead, quantum
theory predicts the probability of each possibility in the quantum system.

Classical physics also assumed that all objects have inherent definite attributes
which exist independently of the observation of those attributes. As
we will see, the structure of quantum theory implies that the attributes of
any aspect of reality are inseparable from the observation of those attributes. In
fact, it is impossible to say for sure that something possesses any type of
attribute whatsoever outside the context of some measurement situation.

Before we delve any deeper into this mysterious theory, let’s review
the basics so far. Quantum theory represents all quantum systems with
a wave function, which we call Ψ. This wave function is not only
determined by the quantum entity in question, but by the type of attribute
we wish to observe as well as the measurement situation we have designed to
detect such attribute values. For simplicity, we could say that Ψ is
determined by the entire measurement situation. Granted, this description
is vague, but it sufficiently expresses the fact that there can be no separation
between the observer and the observed. The Ψ-wave represents
all possibilities of the quantum system. Choosing a specific attribute to measure
is analogous to choosing a waveform family prism which analyzes the Ψ-wave
into component waves. Each component wave represents a possible value
of the attribute we are measuring. Moreover, each component wave has
a particular amplitude and phase. In other words, each possibility is
assigned a specific coordinate value that represents the amplitude and phase
of that possibility. The square of the amplitude at each possibility
gives the probability that a particular attribute value will be observed if
we were to actually make a measurement.

The first mathematical version of quantum theory was developed by Werner Heisenberg
in 1925. In Heisenberg’s model, a quantum system is represented
by a set of matrices. Each matrix represents a specific dynamic attribute
such as position, momentum, or energy. The probability that a system
has a particular attribute value is determined by the diagonal entries of the
matrix. An important property of matrices is that many types of matrices
do not commute when they are multiplied together. If two attribute matrices
don’t commute, then the measurement of these attributes is limited by
the uncertainty principle. The progression of the quantum state
in time is represented mathematically by certain laws of motion expressed using
matrices. This first version of quantum theory is usually known as Heisenberg’s
matrix mechanics.
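
A two-line illustration of the key matrix fact, using the Pauli spin matrices as a stand-in pair of attribute matrices (they are not the matrices Heisenberg originally wrote down for position and momentum):

```python
import numpy as np

# Two attribute matrices that do not commute. The Pauli spin matrices
# for the x and z spin directions serve as a simple stand-in pair.
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # spin along x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # spin along z

commutator = sx @ sz - sz @ sx
print(np.allclose(commutator, 0))  # False: the matrices do not commute,
# so measurements of this pair of attributes are limited by the
# uncertainty principle.
```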

A few months after Heisenberg’s theory was created, another physicist,
named Erwin Schrödinger, introduced a different version of quantum theory. Schrödinger
created a wave equation, which represents the evolution of a quantum system
over time. The quantum state of a system at any instant is represented
by a certain field of possibilities, Ψ, such that each possibility has
a certain probability of occurring. As the quantum system evolves, the
amplitudes of the Ψ-wave change continuously according to Schrödinger’s
wave equation. The time dependent Schrödinger equation is usually
written in the following way: -(h / 2πi) d/dt Ψ(x, t) = ĤΨ(x, t). In this expression, x is a vector
whose component values represent all possible values of any attribute X, and Ĥ is
the Hamiltonian. The Hamiltonian is a linear operator that represents
the total energy of the system. An operator is a mathematical device
that transforms a given function into some other function according to a certain
rule. In the case of the Schrödinger equation, applying the operator
-(h / 2πi) d/dt to Ψ yields the same result as applying the Hamiltonian operator Ĥ. Without being
too technically specific, the important thing is that Schrödinger’s
wave equation defines a rule Ĥ, which describes how the Ψ-wave
changes over time.
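
A minimal sketch of this time evolution for a two-level system, using an arbitrary illustrative Hamiltonian (not one taken from the text) and the formal solution Ψ(t) = exp(-iĤt/ħ)Ψ(0):

```python
import numpy as np
from scipy.linalg import expm

# Schrodinger evolution for a toy two-level system: the state vector
# rotates continuously as psi(t) = exp(-i*H*t/hbar) @ psi(0).
hbar = 1.0                                # natural units for simplicity
H = np.array([[1.0, 0.5], [0.5, -1.0]])  # arbitrary Hermitian "energy" matrix
psi0 = np.array([1.0, 0.0], dtype=complex)

for t in (0.0, 0.5, 1.0):
    psi_t = expm(-1j * H * t / hbar) @ psi0
    probs = np.abs(psi_t) ** 2            # squared amplitudes: probabilities
    print(f"t={t}: probabilities = {probs.round(3)}, sum = {probs.sum():.3f}")
# The probabilities shift smoothly with t while always summing to 1.
```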

At about the same time as Schrödinger proposed his theory of wave mechanics,
a third quantum theory was developed by Paul Dirac. This theory was rigorously
formalized a few years later by the world famous mathematician John von Neumann. Dirac
showed that the fundamental ideas of quantum theory can be represented in abstract
mathematical terms by placing the theory in what is called Hilbert space. Dirac
also showed that both Heisenberg’s and Schrödinger’s theories
are special cases of his own Hilbert space version of quantum theory. Dirac’s
theory is a mathematical formulation that resembles our previous description
of quantum theory, which we described solely in terms of waveform families
and spectra.

Hilbert space is not geometrical, but is an abstract way of organizing functions. Although
it is of little relevance to the goals of this project, we will present the
conditions which define Hilbert space. Hilbert space is a vector space
on which an inner product is defined, and which is complete, i.e., such
that any Cauchy sequence of vectors in the space converges to a vector
in the space. This abstract function space provides a natural reference
frame for analyzing the wave function Ψ.

To illustrate the idea of Hilbert space and how it applies to quantum theory,
let’s take a general example. Imagine we have a quantum system,
which is composed of a quon, and a measuring device that is designed
to observe a particular dynamic attribute A of the quon. The attribute,
or observable, we choose to measure is represented mathematically by a linear
operator, which we can label Â. This linear operator is
analogous to the waveform family prism we used earlier. Each possible
value of our attribute A is represented by a dimension in Hilbert space, which
we will call a basic ray. Mathematically, we could also say that each
dimension represents an eigenfunction of the operator Â. Generally
speaking, there are as many basic rays as there are possible values for an
attribute. In three-dimensional Euclidean space, each dimension is at right
angles to the others. Similarly, each dimension, or eigenfunction, in
Hilbert space is perpendicular, or orthogonal, to the other dimensions. If
our attribute has two possible values, the corresponding Hilbert space will
consist of two dimensions. If our attribute is real valued along a continuum,
the corresponding Hilbert space will consist of an uncountably infinite number
of dimensions. The reference frame in Hilbert space is determined by
the possible values of the attribute we have chosen to measure.

The wave function, Ψ, is represented by a vector in Hilbert space. This
vector, which we will call the quantum ray, is simply a direction, which passes
through the origin of our given coordinate frame of reference in Hilbert space. The
quantum ray, Ψ, represents one quantum state of the system which is being
analyzed. Given our particular reference frame, the wave function assigns
a specific coordinate value, or point, to each basic ray. This coordinate
point of each dimension is just the projection of the quantum ray onto each
single basic ray. However, the coordinate value is not a point on a real
line, but is a point on the complex plane. Each coordinate value is represented
by a 2-dimensional complex vector which, if defined in exponential form, can
be written in the following way: z = reif
, such that r is the length, or magnitude, of the vector, and f is the angle
of the vector. The magnitude of a given coordinate value represents the
amplitude of the wave function at a specific possibility. The angle of
a given coordinate value represents the relative phase of the wave function
at a specific possibility. Thus, for each possibility, also called a
basic ray, the wave function assigns a coordinate value, which is a complex
vector that represents a specific amplitude and phase.
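
To ground this picture, the sketch below projects a toy normalized quantum ray onto two orthonormal basic rays and reads off the amplitude and phase of each complex coordinate:

```python
import numpy as np

# Projection onto basic rays: expand a state vector in an orthonormal
# basis and read off each coordinate's amplitude and phase.
basis = np.eye(2, dtype=complex)            # two basic rays (toy example)
psi = np.array([1.0, 1.0j]) / np.sqrt(2.0)  # a normalized quantum ray

for i, e in enumerate(basis):
    c = np.vdot(e, psi)  # complex coordinate c_i = <e_i | psi>
    print(f"ray {i}: amplitude {abs(c):.3f}, phase {np.angle(c):.3f} rad")
# Both rays get amplitude 1/sqrt(2); the second carries a pi/2 relative phase.
```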

Each dimension, or basic ray, of Hilbert space is associated with its own complex plane. The
projection of the Ψ-wave onto each basic ray is given by a specific complex vector. If we let cᵢ
stand for the coordinate value along a particular basic ray, and we let Φᵢ
stand for a particular eigenfunction, or basic ray, then we can express Ψ in
the following way: Ψ(x) = ∑ cᵢΦᵢ(x), summed over all i. The Ψ-wave, or quantum ray, is just
the sum of all these coordinate values, or complex vectors. Thus, Ψ can
either be represented as a single entity such as a vector in Hilbert space,
or by a collection of vectors in the complex plane such that each vector represents
a specific possibility. The quantum wave is a field of possibilities.

Each possibility is characterized by an amplitude and a phase. To find
the probability that a particular possibility will occur, we simply take the
square of the amplitude. If cᵢ represents a particular complex vector, then the square of the amplitude can
be expressed as ||cᵢ||². In order for our probability measure to make any sense, we must normalize
the quantum ray, Ψ, such that ∑ ||cᵢ||² = ∫ ||Ψ(x)||² dx = 1. This means that the sum of all the probabilities is equal
to 1.

As noted above, the quantum wave function is a vector, in Hilbert space, which
represents one quantum state. Without actually observing the measurement
situation, we can ask how the Ψ-wave might change over time. For
simplicity, let’s assume there is only one dimension of time and that it always
travels in the same direction. If the original Ψ-wave is calculated
at time t₀, then the Ψ-wave at time t₁
will be represented by a vector in Hilbert space which is different from the
original vector. If we assume that time is a continuum, we can show that
the quantum vector changes its orientation continuously. Thus, our spinning
quantum vector, in Hilbert space, represents the continuously morphing Ψ-wave. Therefore,
Schrödinger’s equation, which describes our spinning vector, is
actually a mathematical representation of a morphing field of possibilities. As
the quantum wave moves and changes direction, the magnitudes and the relative
phases of all the coordinate values also change. Note, if the quantum
vector travels continuously in Hilbert space, then each projection, which determines
the possibility amplitude, also changes continuously. The probability
distribution for each quantum vector also changes continuously because the
probabilities are the squares of the continuously changing amplitudes.

It is important to remember that although this theory can be used effectively
to determine the probability distribution for the attribute values we are concerned
with, these potential tendencies to exist are not inherent in the representation
of the quantum entity. Unless we first assume a frame of reference, such
as a measurement situation designed to observe the value of a specific dynamic
attribute, the quantum entity is simply a wave of infinite possibilities. The
frame of reference in Hilbert space is created based on which attribute we
choose to measure. Only after we have chosen an attribute, can we analyze,
or decompose, the quantum wave into complex projections along the orthonormal
basic rays.

In order to calculate the probability distribution for the possible values
of a quantum entity, we must first create the concept of an attribute,
which we want to measure. The word create is used here because,
in some sense, any type of conceptual attribute such as position, momentum,
energy, and spin is a construct of the imagination. In other words, if
we want to measure position we must create a unit measure of distance. Likewise,
if we want to measure momentum, we must create a unit measure of time as well
as direction. In general, these units have been created out of thin air,
and bear no real connection to nature.

Without a reference frame of observation, it is meaningless to say that the
quantum entity possesses any attribute whatsoever, let alone values for that
attribute. Perhaps this claim is too far out to accept right off; however,
it at least appears safe to say that the internal structure of quantum theory
implies that the attributes of any aspect of reality are inseparable from the
observation of those attributes. For example, a quon does
not inherently possess what we call momentum; however, given a certain measurement
situation, the quon will demonstrate the appearance of momentum. In
other words, momentum does not belong to the quantum entity itself, but to
our interaction, or relationship, with the quantum entity. In a similar
sense, a quantum entity is neither a wave nor a particle, but if we interact
with the quantum entity, it will express itself either as a wave or a particle. This
point of view is not without opposition. For instance, many physicists
still believe that the quantum entity is an ordinary object, which exists whether
it is being observed or not. Although there is a generally accepted recipe,
or method, for using quantum theory, there is not much agreement amongst physicists
concerning how quantum theory is actually connected to what we call reality. Before
we turn to the various interpretations of quantum theory, let’s consider
a fourth version of quantum theory.

In 1948, Richard Feynman developed a method for calculating a quon’s wave
function which is called the sum-over-histories approach. We’ve
already seen that the Ψ-wave represents all the possibilities open to
a given quantum system. To get more perspective, let’s consider the quon gun
and phosphor screen experiment that was described earlier. Between the quon gun
and the screen, the unmeasured quon behaves like a wave of possibilities. Feynman’s
idea was that what actually happens on the screen is influenced by everything
that could have happened. Feynman’s approach to calculating Ψ is
to sum over the amplitudes of all possible ways a quon can get from
the quon gun to the screen. Feynman describes the unmeasured world
by making two postulates: a quon takes all possible paths, and no path
is better than another. He also proposes that every path open to the quon has
the same amplitude, and that each path differs from other paths only in its
phase. In quantum theory, possibilities have a wavelike nature. Therefore,
certain possibilities can cancel if they have different phases. Feynman
showed that summing up all possible paths, or histories, produces the same
wave function as solving Schrödinger’s equation.
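
A toy version of the sum-over-histories recipe: every path gets the same amplitude and paths differ only in phase. The phases here are simply taken proportional to path length, an illustrative stand-in for Feynman’s full action-based rule, and the geometry (two openings, screen distance, wavelength) is invented for the example:

```python
import numpy as np

# Toy sum-over-histories: add equal-amplitude phasors, one per possible
# path from the gun to each screen point, then square the total.
slits = np.array([-1.0, 1.0])    # two openings, arbitrary units
screen = np.linspace(-5, 5, 11)  # detection points on the screen
wavelength = 1.0
L = 10.0                         # slit-to-screen distance

for xs in screen:
    paths = np.sqrt(L**2 + (xs - slits) ** 2)  # length of each history
    amp = np.sum(np.exp(2j * np.pi * paths / wavelength))  # phases add
    print(f"x = {xs:+.1f}: probability ~ {abs(amp)**2:.2f}")
# Bright and dark spots emerge purely from how the path phases combine.
```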

In the context of our phosphor screen experiment, quantum theory implies that
just before a flash is made on the screen we should not imagine that a tiny quon is
actually heading for one particular phosphor molecule. Before the measurement
occurs, the quon is heading in all possible directions at the same time. According
to this view, an unmeasured quon exists only as a bunch of unrealized
quantum potentialities. However, every time we make a measurement, only
one of these possibilities becomes an actuality.

The Battlefield of Quantum Speculation and The Meaning of It All

The early interpretations of quantum theory reconciled this peculiar
phenomenon by assuming that the world is divided into two separate parts. The
unmeasured world, it was assumed, consists only of quantum potentials. On
the other hand, the measured world consists only of classical type actualities. This
interpretation of quantum theory was primarily advanced by Niels Bohr and Werner
Heisenberg and is known generally as the Copenhagen interpretation. In
addition to the view of distinct measured and unmeasured aspects of reality,
the Copenhagen interpretation makes
two other fundamental assumptions. Firstly, Copenhagenists assume that
there is no reality in the absence of observation. Secondly, this interpretation
asserts that observation creates reality.

These two assertions are based on the idea that the dynamic attributes of
a quon are contextual in the sense that the attribute values are determined
by which attribute we choose to measure. For example, the position attribute
of a quon is jointly determined by the quon and the measuring
device. If we take away the measuring device, we also take away the position
attribute of the quon. If we change the measurement context, then
we also change the attributes of the quon. A Copenhagenist would
argue that when a quon is not being measured, it has no definite dynamic
attributes. This idea that observation creates reality is based on the
so-called quantum meter option, i.e. the observer’s ability to freely
select which attribute he wants to look at. In terms of our earlier discussion
of waveform languages, this quantum meter option is analogous to our freedom
of choice concerning which waveform prism we will utilize to analyze an arbitrary
wave. Another assumption of the Copenhagen interpretation
is that all quons in the same quantum state, i.e. represented by the
same wave function, are physically identical. Furthermore, the Copenhagen interpretation
asserts that the wave function tells us everything there is to know about the
quantum entity. However, Copenhagenists do not believe that the Ψ-wave
is a real wave. They view the Ψ-wave simply as a mathematical tool
that can be used to determine the statistical likelihood of an event, given
a specific measurement context. It is also their position that there
is absolutely no way to know which possibility will become an actuality.

In classical mechanics, the unpredictability of an event was attributed to
the ignorance of the observer. The observer’s ignorance, in the
classical sense, arose because the observer did not have a complete knowledge
of all the variables in a system, or the measuring device used in the observation
was technologically unable to yield perfectly accurate readings. It was
assumed that this ignorance could be overcome by making further technological
improvements to the measuring devices. However, in the Copenhagen interpretation,
it is impossible to predict which possibility will become an actuality simply
because the deepest form of knowledge we can have of a quantum system is purely
statistical. This type of ignorance is known as quantum ignorance, as
opposed to classical ignorance. Classically, the missing information
exists, but has yet to be uncovered by the experimenter. The idea of
quantum ignorance asserts that the missing information simply does not exist.

Quantum ignorance is closely tied to the idea of quantum randomness. In
order to understand this idea better, let’s consider the quon gun
and detector screen experiment again. Assume that the gun fires only
one quon. The wave function, Ψ, gives us a complete description
of the probabilities of each possibility. Before the measurement, the quon assumes
all possible paths. The result of the measurement yields only one actual
flash on the phosphor screen. Now, suppose we fire a second quon at
the screen. This second quon is represented by the exact same Ψ-wave
as the first quon. The result of the second measurement again
yields only one actual flash; however, this flash is most likely located in
a different place on the phosphor screen relative to the first flash.

The
Copenhagenists explain this phenomenon by appealing to what they call quantum
randomness. The basic principle of quantum randomness is that identical
physical situations give rise to different outcomes. If it is true that
the Ψ -wave gives us all the information we can know, then it is impossible
to predict exactly where the quon will strike the phosphor screen. According
to the Copenhagenists, the occurrence of an actual event is determined by blind
chance. We shall soon get a better idea of how these quantum fields,
which organize the probability distribution of our system, are extremely complex
and multidimensional. In any case, the visualization of these fields
is beyond the capacity of most physicists working within the current paradigm. At
this point, we will merely note for future reference that the dynamics of a
given field of possibilities may be so extraordinarily chaotic that the pattern
appears random to our ordinary mode of perception.

It should be noted that the Copenhagen interpretation
is based on the primary assumption that measuring devices are ordinary objects
which exist and are definable in the classical sense. Quantum theory
describes neither the quantum system nor the measuring device. The
theory applies to the relationship which exists between the quantum system
and the measuring device. However, the Copenhagen interpretation
asserts that a very significant and mysterious transition takes place at the
boundary between the measuring device and the quantum system. In this
transition, the surreal potential existence of the unmeasured quantum entity
immediately transforms into a real classical type observed actuality. The
question as to exactly how, why, and when this transition occurs is the basis
of the so-called quantum measurement problem. The Copenhagenists sidestep
this interpretive paradox by assuming that measuring devices are real
things which actually exist with definite attributes, while quantum entities
are represented by a superposition of potential possibilities.

In 1932, John von Neumann published a book called The Mathematical Foundations
of Quantum Mechanics (Mathematische Grundlagen der Quantenmechanik),
in which the ideas of quantum theory are subjected to rigorous mathematical
analysis. Von Neumann’s analysis is primarily concerned with
Dirac’s Hilbert space version of quantum theory, which has been shown
to be more general and complete than the Heisenberg and Schrödinger
theories. Among other things, von Neumann demonstrates that there is
nothing intrinsically special about measuring devices. Therefore, the
Copenhagenist assertion that measuring devices are somehow privileged with
a classical status of existence seems awkward and contrived. In von
Neumann’s theory, everything is represented by quantum Ψ-waves,
even measuring devices. Von Neumann’s interpretation is known
as the all-quantum theory because there is no longer any aspect of the theory
which relies on classically defined objects.

Von Neumann showed that it is indeed possible to represent everything in the
world with Ψ-waves; however, the all-quantum theory only works if we make
one crucial assumption. Before dealing with this assumption directly,
let’s consider again the structure and dynamics of the wave function
in Hilbert space. We know that a particular quantum state is represented by
a normalized vector in Hilbert space. The dimensionality of our frame
of reference in Hilbert space is determined by the attribute operator we choose. It
is often the case that this frame of reference consists of an uncountably infinite
number of dimensions. Each dimension represents an orthonormal eigenfunction
of the quantum operator that we are using. Each of these orthonormal
eigenfunctions represents a specific attribute value of the quon we
are measuring.

The amplitudes and phases of each possibility are determined by decomposing
the wave function into complex vector components, which are just the projections
of the wave function along each dimension. However, a quantum system
has a definite value for an observable attribute if and only if the quantum
vector, Ψ, is an eigenstate of the attribute operator. This means
that the system only has a definite state if the quantum vector is parallel
to a particular eigenfunction. Since each eigenfunction of the operator
is independent, or orthonormal, to all other eigenfunctions, any vector which
lies along one single eigenfunction has no components along any of the other
eigenfunctions. In other words, if the quantum vector lies along one
specific eigenfunction, the amplitude at that possibility is one, and the amplitudes
at all other possibilities are zero. However, in most cases, the wave
function can only be expressed as a linear combination consisting of coordinates
from many eigenfunctions.

According to Feynman’s version of quantum theory, the unmeasured quon assumes
all possible values at the same time. Contrary to this idea, in which
the quon assumes all possibilities at once, is the actual fact that
any type of measurement only yields one specific result. Therefore, in
order for von Neumann’s theory to be consistent, we must assume that
at some point between the creation of the quantum entity in the quon gun
and the observation of an experimental result, a remarkable transformation
must occur. At the exact instant the measurement occurs, the quantum
entity must cease to be a superposition of possibilities, and must contract
into a single possibility, corresponding to the single observed measurement
result. This mysterious and radical transformation is called the collapse
of the wave function. Von Neumann’s all-quantum theory will not
work unless this collapse of the wave function actually occurs in every type
of quantum measurement. As alluded to earlier, the fundamental paradox
of quantum theory is the so-called quantum measurement problem, which can be
stated in the following way: how and when does the wave function collapse?

In von Neumann’s analysis of the quantum measurement problem, he proposed
that the measurement act could be broken up into a series of small steps. In
this way the entire measurement act is visualized as a chain of events stretching
from the quon gun, to the phosphor screen, to the observer’s retinas,
and finally to the observer’s conscious perception of the measured result. Von
Neumann’s goal was to analyze each link in this chain in order to find
the most natural place to put the collapse of the Ψ-wave. What he
discovered is that we can cut the chain and insert a collapse anywhere we please,
but the theory won’t work if we leave it out. Von Neumann reasoned
that the only peculiar link in the chain is the moment when the physical signals
in the human brain become an actual experience in the human mind. Based
on this form of logic, von Neumann reached the conclusion that human consciousness
is the only viable site for the collapse of the wave function. Therefore,
according to von Neumann, consciousness creates reality.

This idea of consciousness-created reality is a step beyond the claims made
by those who subscribe to the observer-created reality interpretation. Observer-created
reality enthusiasts simply claim that the observer is free to choose which
attribute will be measured. However, they do not claim that the observer
determines what the actual result of the measurement will be. Consciousness-created
reality enthusiasts, on the other hand, claim that consciousness selects which
one of the many possibilities actually becomes realized. Granted, these
claims have not been experimentally proven, yet we might still consider some
general consequences of this interpretation of quantum theory. If we
assume that the basic principles of quantum theory are correct, we can easily
derive two such interesting general conclusions. Firstly, as far as the
final results are concerned, there is no natural boundary line between the
observer and the observed system. Secondly, it is apparently the case
that no such interpretation of quantum theory would be complete unless it successfully
incorporates the function of consciousness, which seems to be inseparable from
the manifestation of particular outcomes in the quantum measurement.

There is, however, another interpretation of quantum theory, which is similar
to von Neumann’s ideas, but is not dependent on the idea of a wave function
collapse. This theory, called the many-worlds interpretation, was developed
by Hugh Everett in 1957. Everett,
like von Neumann, assumes that there is nothing special about measuring devices
and that everything can be represented by Ψ-waves. However, Everett leaves
out the collapse of the wave function. Instead, his theory is based on
the idea that every possible attribute value of a quon actually becomes realized
when the quon interacts with a measuring device. For example,
if the quon can assume six possible attribute values, then all of these
possibilities actually occur. Everett claims
that the entire measurement device branches into many measurement devices,
each of which observes a different possible value of the chosen attribute. Given
that nobody has ever seen a measuring device split apart in such a way, Everett claims
that each possible value is realized in its own parallel universe.

Everett’s quantum model
implies that at every instant, the universe is a branching tree in which anything
that can happen, no matter how improbable, actually does happen. As far
out as this claim might seem to our simple egos, this many-worlds interpretation
actually addresses the fundamental inconsistencies of quantum theory in a satisfactory
manner. For instance, there is no such attempt to sanctify the status
of measuring devices. In addition, there is no need for the mysterious
notion of the wave function collapse, which in itself has never been detected,
nor is there any a priori evidence which supports its existence other than
the fact that we humans only perceive the occurrence of one event at a time.

Up until now we have only considered the orthodox Copenhagen interpretation
and its primary derivatives, i.e. von Neumann’s all quantum theory, and Everett’s
many-worlds interpretation. These theories accept as their basic premise
that the fundamental level of reality, namely the quantum world, is governed
solely by the statistical laws of quantum possibilities. In addition,
these theories also accept the idea that quantum entities are not ordinary
objects in the classical sense. An ordinary object possesses definite
attributes independently of the observation of those attributes. Indeed,
von Neumann, in his book on the foundations of quantum mechanics, derived a
proof which asserts that if quantum theory is correct, then the world cannot
be made of ordinary objects. However, despite the strong convictions
amongst the majority of physicists that no such ordinary object model of reality
could be consistent with the quantum facts, there is a group of physicists
who believe that such a model could indeed be produced.

The most famous of these physicists, who opposed the quantum orthodox interpretation,
is Albert Einstein. Einstein strongly believed that quantum theory was
incomplete because it only gave a statistical account of elementary phenomena. He
believed that it was possible to construct an ordinary object model of reality
in which the quantum entities had definite attributes whether or not anybody
was observing them. Einstein and the other physicists who believe that
an ordinary object model of reality is possible are sometimes referred to as
neorealists. The neorealist position is basically that there exists a
deeper, more fundamental, level of reality, which is not described by the quantum
wave function. As we have already seen, if we assume that the Ψ-wave
tells us everything there is to know about the quon, then it is impossible
to predict what the actual result of a measurement will be. The neorealists
believe that the Ψ-wave does not tell us everything there is to know. Indeed,
they hold the position that hidden, unseen parameters exist at a deeper level,
which if discovered, could be used to predict exactly what will happen in a
quantum experiment. For this reason, neorealist theories are also known
as hidden-variable theories.

As noted above, von Neumann’s proof asserts that no such theory of ordinary
objects can explain the quantum facts. However, David Bohm, a protégé
of Albert Einstein, was able to develop a hidden-variable theory which is seemingly
consistent with the observed quantum facts. Bohm’s hidden-variable
model of reality, which was developed in 1952, assumes that quantum entities
are ordinary objects, such as real particles, which have at all times a definite
position and momentum. Whereas the Copenhagen interpretation
assumes that an unmeasured quon assumes all possibilities at once, Bohm's
theory holds that an unmeasured quon takes only one path and that
this path is ultimately predictable. However, there is a catch. Bohm's
theory introduces a new type of wave called the “pilot wave”, which
organizes the unfolding history of the quantum entity.

The Copenhagenists assert that the Ψ-wave is not real, but merely a fictitious
mathematical device which happens to be remarkably useful for calculating quantum
probabilities. Bohm, on the other hand, asserts that both the quantum
entity and the pilot wave are real things which actually exist. Although
the pilot wave is supposedly a real entity, in order for Bohm’s theory
to be consistent with the facts, this pilot wave must have certain remarkable
characteristics which defy our conventional definitions of what is possible
in reality. For instance, this pilot wave must connect with every particle
in the universe, it must be entirely invisible, and it must transfer information
at superluminal speeds, i.e. faster than light.

Of these three, the first two properties of Bohm’s pilot wave are familiar
within physics in that they are both aspects of the gravitational and electromagnetic
fields. Superluminal connections, on the other hand, seem to be the
thing physicists hate most. This is primarily because the existence
of superluminal connections would violate many fundamental assumptions of the
orthodox theory of physical reality. For example, real superluminal transfers
would contradict the orthodox thema which asserts that influences can only
be mediated by direct interactions. This assumption, that object A can
only affect object B via direct subluminal interactions, is called the locality
assumption. Also, faster than light connections directly imply that the
past can be influenced by the future. Most physicists, however, would
like to believe that time travels in only one direction, and that what happens
within each moment is solely influenced by what has already happened.

In Bohm’s model, each particle in the universe is assumed to be associated
with a pilot wave. This pilot wave is sensitive to the entire environment
of the quantum entity, and the wave changes its form instantly whenever there
is a change anywhere in the environment. In turn, this instantaneously
morphing field informs the quantum entity of such changes in the environment,
at which point the quantum entity alters its values of position and momentum
accordingly.

However, this theory predicts that all pilot waves of all particles are instantaneously
connected across the entire universe. This implies that the relevant
environment, or measurement situation, which determines the form of the pilot
wave, includes all events in the universe across all dimensions of space-time. Understandably,
most physicists abhor the idea of faster than light, let alone instantaneous,
connections, and consequently, many physicists consider Bohm’s theory
to be absurd. However, although it seems absurd to the quaint common
sense intuitions of most physicists, it was soon proven that these superluminal
connections are no accident, but a necessary condition of any theory of reality. Big
news!

EPR, Bell’s Theorem, Non-locality, and Superluminal Spaghetti

The proof we mentioned above was devised by John Stewart Bell in 1964,
and is known as Bell’s interconnectedness
theorem. Bell, while studying
Bohm’s theory, was able to show how an ordinary object model of reality
had been created contrary to the proof of von Neumann, which asserted that
no such theory was possible. Obviously, von Neumann’s proof contained
a loophole. Bell showed that
von Neumann’s idea of an ordinary object was too limited. Bohm
was able to create such a theory by stretching the conventional idea of an
ordinary object. Most physicists would not consider any object ordinary
if it can change its attributes instantaneously via resonance with some invisible,
all-pervasive, superluminal field.

Bell’s theorem, which Bell developed
after his work on Bohm and von Neumann, brings into question the assumption
of a locally based version of reality, and ultimately proves that the reality
which underlies our experience must be non-local. This proof was based
on the factual results of an experiment originally designed by Albert Einstein,
Boris Podolsky, and Nathan Rosen. Before taking a deeper look into Bell’s
theorem and non-locality, let's briefly discuss the setup of Einstein's
experiment, which has since come to be known as the EPR experiment.

As described earlier, Albert Einstein believed that quantum theory was not
a complete theory of reality. Thus, Einstein designed a specific thought
experiment, which supposedly demonstrates that there are aspects of reality
that are not accounted for in the quantum theory. In brief, the EPR source
emits a pair of phase entangled photons in opposite directions at the speed
of light toward two spatially separated detectors. Let’s label
these detectors A and B. In a generic form of the EPR experiment, these
detectors are designed to measure the polarization attribute of the photons. A
simple form of a polarization detector can be realized by using a calcite crystal
whose optic axis is pointing in a certain direction. The crystal divides
light into two beams. The up beam consists of photons which are
polarized along the optic axis, while the down beam consists of photons
which are polarized at right angles to the optic axis. Because the photons
are phase entangled, the phase of each photon depends on what the other photon
is doing. Also, there is only one wave function, which describes both
photons. Before the actual measurement, quantum theory predicts that
neither photon has a definite value of polarization.

If we assume that each calcite detector is positioned at an arbitrary angle,
then each detector will measure a fifty-fifty mixture of up/down results. On
the other hand, if we assume that each detector is orientated at the same angle,
then we can measure another type of attribute called the parallel polarization
attribute. In this case, both photons are always measured to have the
same polarization. If the two detectors hold their crystals at a relative
angle of ninety degrees, then the polarization value at one detector will always
be the opposite of the value at the other detector. As an example, let's
assume that detector A holds its crystal at zero degrees, and that this A detector
is located closer to the source than B is. This way, the polarization
at A is detected first. Also, let’s assume that at an angle of
zero degrees, A measures an up value. Quantum theory predicts
that if B holds its crystal at zero degrees, it will measure up as well. On
the other hand, if B holds its crystal at ninety degrees, it will measure a down value
of polarization. If B holds its crystal at angles other than zero or ninety
degrees, quantum theory gives no definite results. For example, if B
holds its crystal at forty-five degrees relative to A, then the odds are fifty-fifty
that B will measure an up value.
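
All of these predictions follow from a single rule, which we will meet formally
below: the probability that B's result matches A's is the square of the cosine
of the relative angle between the two crystals. As a rough sketch of my own, a
few lines of Python reproduce the three cases just described:

    import math

    # Quantum rule (derived formally later in the text): the probability that
    # detector B reports the same up/down value as detector A is cos^2 of the
    # relative angle between the two calcite crystals.
    for delta in (0, 45, 90):
        p_same = math.cos(math.radians(delta)) ** 2
        print(f"relative angle {delta:2d} deg -> P(B matches A) = {p_same:.2f}")

    # relative angle  0 deg -> P(B matches A) = 1.00
    # relative angle 45 deg -> P(B matches A) = 0.50
    # relative angle 90 deg -> P(B matches A) = 0.00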

Quantum theory predicts that except at certain angles, such as zero and ninety
degrees, the result of B’s measurement is determined by quantum randomness. In
other words, at angles between zero and ninety degrees, the measurement at
B is determined by blind chance. However, Einstein argues that since
the photons are in what can be called a twin state, if detector A is measured
first at any particular angle, then the photon at the other detector must possess
a definite polarization attribute value prior to its interaction with the detector,
which could be set at any angle. Einstein also argues that quantum theory
only gives a statistical interpretation of attribute values which truly have
a definite existence before the act of measurement. Therefore, Einstein
concludes that quantum theory is not a complete theory of reality. The
basic assumption which Einstein makes is that, after the photons have left
the source, the situation at detector B is not affected by how detector A chooses
to hold its crystal. This premise is known generally as the locality
assumption. Einstein’s argument can only be refuted in two ways:
either the locality assumption is violated, or there is no such thing as two
spatially separated events. This perplexing thought experiment is known
as the EPR paradox.

While studying this thought experiment, Bell considered
what would happen to the measurements of each detector if the calcite crystals
vary their angles between zero and ninety degrees. Bell incorporates
a type of polarization attribute which measures how these results are correlated. This
attribute can be called the polarization correlation attribute, labeled PC(θ). As
before, if the relative angular difference between each detector is zero, then
the measurements are perfectly correlated, thus PC(0) = 1. If the crystals
are set at a difference of ninety degrees, the measurements are perfectly anti-correlated,
thus PC(90) = 0. At angles between zero and ninety degrees the value
of PC is some fraction between 0 and 1.

The value of PC(θ), for angles between zero and ninety degrees, can be
measured by firing many pairs of phase entangled photons and then comparing
the series of measurement values recorded at each detector. The
polarization correlation attribute is a measure of the fraction of matches
between two detectors over a long series of photon pair emissions. Imagine
that each list of measurements is a type of binary message. If A and
B receive exactly the same messages, then the PC(θ) value is one, and
the angle between each crystal must be zero degrees. If A and B receive
exactly opposite messages, then the PC(θ) value is zero, and the relative
angle must be ninety degrees. In between these two extremes, the two
messages will contain a fraction of errors. For example, let’s
assume that if the crystals are orientated at a relative angle α, then
the two binary messages differ by one out of every four bits. In other
words, the error rate between the two messages is ¼. Thus,
at angle α, the polarization correlation attribute value, PC(α),
corresponds to three matched results for every four photon pairs, that is,
PC(α) = ¾. Everything presented so far concerning
this type of experiment is based purely on scientific fact.
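
To make the binary message picture concrete, here is a small Monte Carlo
sketch of my own (the function name and the pair count are arbitrary choices).
It assumes the quantum matching probability cos²(θ) quoted in the next
paragraph and reports the measured fraction of matches:

    import math, random

    def simulate_pc(theta_deg, n_pairs=100_000):
        # Each photon pair adds one bit to A's message and one bit to B's.
        # Under the quantum rule, the bits agree with probability cos^2(theta).
        p_match = math.cos(math.radians(theta_deg)) ** 2
        matches = sum(1 for _ in range(n_pairs) if random.random() < p_match)
        return matches / n_pairs   # fraction of matches, i.e. PC(theta)

    for theta in (0, 30, 45, 60, 90):
        print(f"PC({theta}) is approximately {simulate_pc(theta):.3f}")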

To understand Bell’s theorem,
let’s first assume that both crystals are set vertically at zero degrees. Now
we rotate the A crystal α degrees in the clockwise direction, and we rotate
the B crystal α degrees in the counterclockwise direction. These
crystals are now separated by a relative difference of 2α degrees. Bell,
like Einstein, makes only one fundamental assumption, which asserts that the
situation at detector A does not affect what is happening at detector B; this
is known as the locality assumption. This assumption appears reasonable since
these photons are flying away from each other at the speed of light. If
we assume locality, then rotating crystal A can only change A's message, and
rotating crystal B can only change B's message. Rotating A alone by α produces
an error rate of ¼, as does rotating B alone by α; when both crystals are
rotated, the two sources of error can at worst add (and may sometimes cancel),
so the error rate at angle 2α must be less than or equal to ½. This
expression is an example of what is known as Bell's inequality. This
inequality is a direct consequence of the locality assumption.

However, the equation for the polarization correlation attribute can be derived
mathematically: PC(θ) = cos²(θ). For this equation, PC(30) = ¾, and the error rate
equals ¼; that is, one error between the two messages for every four
photon pairs. However, at twice this angle, PC(60) = ¼, and thus,
the error rate becomes ¾. This result is a direct
violation of Bell’s inequality,
which predicts that the fraction of errors between the messages cannot be greater
than ½. Let’s recap Bell’s
argument. First we assume that reality is strictly local. This assumption
leads directly to a specific inequality. Surprisingly, whenever this experiment
is performed, the results violate this inequality. Therefore, since we
reached a contradiction, our original assumption must be false, i.e. reality
is non-local.
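
The entire argument fits in a few lines of arithmetic. The check below is my
own restatement of the recap, using the error rate 1 - PC(θ) = sin²(θ) that
follows from the cosine formula above:

    import math

    def error_rate(theta_deg):
        # Fraction of mismatched bits between the two detector messages:
        # 1 - PC(theta) = 1 - cos^2(theta) = sin^2(theta).
        return math.sin(math.radians(theta_deg)) ** 2

    alpha = 30
    e_alpha = error_rate(alpha)        # 0.25, as in the example above
    bell_bound = 2 * e_alpha           # locality demands error(2a) <= 0.50
    e_2alpha = error_rate(2 * alpha)   # quantum theory predicts 0.75

    print(f"Bell bound on error({2 * alpha}): <= {bell_bound:.2f}")
    print(f"Quantum error({2 * alpha}): {e_2alpha:.2f}")
    print(f"Inequality violated: {e_2alpha > bell_bound}")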

Bell’s proof does not demonstrate
any observable type of non-local interaction. He merely proved that the
correlation between two twin state photons is so strong that no version of
local reality can account for the mathematically predicted violation of Bell’s
inequality. Indeed, Bell’s
idea was experimentally put to the test, first by John Clauser in 1972, and
later by Alain Aspect in 1982. Both of these technologically sophisticated
experiments produced results that directly violate Bell’s
inequality. Therefore, unless we resort to drastic counter-arguments,
such as the claim that there is no reality at all, or that everything in the
world is entirely predetermined to the infinite degree, there is no other way
to save the locality assumption from its descent into the dustbin of history. Through
careful consideration of the experimental facts, it is now safe to say that
locality is just as outdated and incorrect as the idea that the Earth is flat.

Although John Bell only proved that non-locality is a necessary factor in
describing a particular twin-state photon experiment, we can extend this idea
to include everything that exists in reality. We can make this type of
assertion because quantum theory predicts a phenomenon known as phase entanglement. Whenever
two quantum entities interact, their phases get mixed up. As these entities
interact and then depart their separate ways, the amplitudes of each Ψ-wave
come apart, but the phases of the two quons remain connected. Indeed,
the strong correlation between these two EPR photons is a direct result of
the fact that they were created from the same source, and are thus, phase entangled. This
prediction of phase entanglement was recognized by physicists who preceded Bell;
however, Bell was the first
to demonstrate that this phenomenon actually exists in the real world.

The basic idea of phase entanglement and non-locality rests on the idea that
once two entities have interacted, they are eternally connected by the correlation
between their mutual phases. An important consequence of all this is
the fact that the so-called entire measurement situation, which determines
the attribute values of a quon, must include situations, measurements,
and events everywhere throughout the universe. Moreover, non-locality
implies that the entire measurement situation, of even a simple quantum experiment
here on Earth, must include all measurements and events everywhere in the universe
across all scales and dimensions of time. Presented another way, non-locality
implies that every thing is everywhere, and no thing is really separate from
anything else. Indeed, everything together is only one thing. If
we try to restrict ourselves by measuring only part of the one thing, we will
inevitably encounter a limit on how accurate our predictions may be. After
all, there’s nothing special about Plank’s constant h. Perhaps
the unavoidable quantum uncertainty is a direct consequence of our naïve
assumption that we are separate from that which we are observing.

Hyperdimensional Holon Attractors and the B-Sense

Thus far, we have embarked on quite a lengthy discussion of quantum
theory and its various interpretations. Although many concepts
have been addressed in this project, the majority of the details have been
left out. In addition, our exploration through this quantum realm has
merely scratched the surface, and much of the most interesting terrain has
yet to be explored. Although there are more advanced forms of exploration
which lie beyond the scope of this project, it is my hope that this preliminary
exploration will form a stable foundation such that further developments and
interpretations may be explored at a later time. For now, let's
conclude this journey with an overall survey of some ideas which might form
the basis of future, more detailed, endeavors into quantum theory. It
should be obvious to the reader that the implicate seeds contained within some
of the ideas to follow will certainly contradict, and bring into question,
many of our presently held notions concerning the nature of reality and consciousness.

First of all, quantum theory, in the broadest sense, is a theory of whole
entities. Any representation of a quantum entity must include a joint
description of the entity itself as well as its observational context. It
must be remembered at all times that there is no real distinction between the
attributes of any aspect of reality and the experience of those attributes
relative to a specific observer. Furthermore, if multiple observers are
measuring the same quantum system but in different ways, the experience of
these observers will also differ. That is, the experience of reality
is relative to one’s frame of reference. In addition, regardless
of whether we subscribe to an ordinary-object based interpretation, or to a
statistical interpretation, the idea that the manifestations of physical reality
are self-organized by abstract fields of possibility is unavoidable.

Another unavoidable conclusion is that these fields must be interconnected
in such a way that it makes absolutely no sense to speak of them as separate
fields. For example, let’s consider two seemingly separate quantum
entities, each represented by its own quantum vector in its own frame of reference
in Hilbert space. If these two entities become entangled, then the composition
of the two Hilbert spaces, H_a and H_b, can be represented by the tensor product
H_a ⊗ H_b, which itself forms an entirely new vector in a new Hilbert space. In
other words, entangled entities are not represented by separate quantum fields,
but are represented by only one Ψ-wave. However, it is my contention
that every aspect of reality is already phase entangled. This would certainly
be the case if the cosmological Big Bang theory were correct. If it is
true that everything that exists is indeed part of one phase entangled quantum
system, then it might prove useful to consider the likely existence of a universal
wave function. Although this field would be incomprehensibly complex,
the nature of non-locality assures us that whatever it is, it is within every
thing.
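
For readers who like to see the machinery, here is a minimal numerical sketch
of my own, assuming the standard two-dimensional polarization representation;
numpy's kron function plays the role of the tensor product ⊗:

    import numpy as np

    # Single-photon polarization states, each a unit vector in its own
    # two-dimensional Hilbert space (H_a and H_b).
    up = np.array([1.0, 0.0])
    down = np.array([0.0, 1.0])

    # An unentangled pair is a simple product vector in H_a ⊗ H_b:
    product_state = np.kron(up, down)

    # A phase-entangled "twin state" pair is a vector in the composite space
    # that cannot be factored into separate A and B states: one Ψ-wave
    # describing both photons at once.
    twin_state = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

    print(product_state)   # [0. 1. 0. 0.]
    print(twin_state)      # [0.707 0. 0. 0.707]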

Another interesting aspect of quantum theory is the manner in which quantum
waves morph over time. To consider a particular example, let’s
measure the position attribute of a quon. First of all, we shall
assume for simplicity that we live in 3-dimensional Euclidean space. The
realm of possible values for position includes all points in a 3D continuum. Obviously,
there are an uncountably infinite number of possible positions. Each
point in 3D configuration space is represented by a single dimension in Hilbert
space. The wave function of our quon is a vector in this space,
which has a unique decomposition into complex projection components along each
dimension. To find the probability that the quon will be measured
at the point (x₀, y₀, z₀), we simply take the square of the amplitude of that
possible point at instant t₀. At a different instant in time, t₁,
the quantum state will be represented by a different quantum vector. If
we assume time is a continuum, the transition between these two states can
be visualized as a spinning vector in Hilbert space.
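
A discretized toy version of this procedure can be written down directly. In
the sketch below (my own; a one-dimensional grid and a Gaussian wave packet
stand in for the full 3D continuum, and the sample point x₀ = 1 is arbitrary),
each grid point plays the role of one Hilbert-space dimension:

    import numpy as np

    x = np.linspace(-5, 5, 1001)                     # stand-in for the continuum
    psi = np.exp(-x**2 / 2) * np.exp(1j * 2.0 * x)   # a complex wave packet

    # Normalize so the squared amplitudes sum to one (a unit quantum vector).
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2))

    # Born rule: the probability of finding the quon at a given grid point is
    # the squared amplitude of the wave function along that dimension.
    probs = np.abs(psi) ** 2
    i0 = np.argmin(np.abs(x - 1.0))                  # grid point nearest x0 = 1
    print(f"P(x near 1.0) = {probs[i0]:.4e}")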

However, it appears that we could modify our example to give the full picture
at one glance, as opposed to watching our quantum vector spin around. Firstly,
we will expand our domain of possibilities from all points in 3D configuration
space to the domain of all points in 3D configuration space for all time. Thus,
each possible value of position is now a point (x, y, z, t). The corresponding
Hilbert space is exactly what we would get if we assumed each possibility is
a point in a 4D continuum. Thus, we still have an uncountably infinite
number of dimensions, now indexed by a 4D rather than a 3D continuum. Regardless,
we can still represent the wave function as a vector in this new Hilbert space
which decomposes into projections along each dimension. The square of
the possibility amplitude in this example will give the probability of measuring
the quon at a point in a 4-dimensional continuum. Both these examples
are representations of the same thing, except that the first example required
a spinning quantum vector to represent all possible instants, whereas the second
example required only one quantum vector, which represents the entire quantum
state from a higher dimensional perspective. The upshot of this argument
is that any particular morphing field of possibilities can be represented,
alternatively, by a single stationary vector at a higher dimensional level
of mathematical abstraction.
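
The spinning vector picture itself is easy to demonstrate numerically. In this
toy sketch of my own, a two-level system stands in for the infinite-dimensional
case and the Hamiltonian matrix is arbitrary; the state vector rotates in
Hilbert space as time passes while its length stays fixed at one:

    import numpy as np
    from scipy.linalg import expm

    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])                  # arbitrary Hermitian Hamiltonian

    psi0 = np.array([1.0, 0.0], dtype=complex)   # quantum vector at time t0

    for t in (0.0, 0.5, 1.0):
        U = expm(-1j * H * t)                    # unitary time-evolution operator
        psi_t = U @ psi0                         # the vector spins as t advances
        print(t, np.round(psi_t, 3),
              "norm =", round(float(np.linalg.norm(psi_t)), 6))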

We have also seen that one quantum vector in Hilbert space looks exactly like
every other, namely a unit vector which has an absolute magnitude of one. Indeed,
it is merely our choice of which attribute we want to measure that determines
the probability distribution of all unrealized possibilities. The quantum
vector can only be analyzed by choosing a specific frame of reference. In
this way a quon’s tendencies to exist in a certain state are inseparably
determined by how we choose to observe the system. Since all quantum
vectors in Hilbert space are essentially the same, and since the only perceived
difference is a result of different possible choices of a frame of reference,
it appears safe to say that there may only be one quantum entity. This
is indeed the basic assertion of quantum theory, which utilizes one basic description
for all possible quantum entities, namely an abstract wave function, Ψ,
and a particular reference frame. Let’s suppose, for fun, that there
is only one fundamental quantum entity. Outside the context of a measurement
situation, it is meaningless to say anything about this entity. However,
once we define a frame of reference, we can then derive the basic characteristics
of the Ψ-wave, which represents a specific attribute, or quality, of the
one quantum entity. I feel that it might be useful to introduce a new
concept to the existing version of quantum theory. As before, we understand Ψ to
be an abstract field of possibilities within a given reference frame. Now,
let us introduce a new field,
which we shall define as the field of all possible reference frames. Whereas Ψ is
a representation of the quantum entity given one specific attribute reference
frame, this new field is a broader representation of the quantum entity given all
possible attribute reference frames. This concept is somewhat analogous to
putting an arbitrary wave through all possible waveform family prisms at the same
time. I am not sure if this is actually possible, or what the actual result might
be; however, I feel that the basic idea could be handled simply by modifying the
existing structure of quantum theory.
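
Before modifying anything, it is worth seeing how much the chosen frame already
matters in the existing formalism. In the sketch below (my own; the two frames
are the standard and diagonal polarization bases), one and the same unit vector
yields a definite outcome in one reference frame and pure fifty-fifty chance in
another:

    import numpy as np

    psi = np.array([1.0, 0.0])   # one fixed unit vector in Hilbert space

    # Two different reference frames (measurement bases) for the same vector:
    frame_1 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    frame_2 = [np.array([1.0, 1.0]) / np.sqrt(2),
               np.array([1.0, -1.0]) / np.sqrt(2)]

    for name, frame in (("frame 1", frame_1), ("frame 2", frame_2)):
        probs = [abs(np.dot(b, psi)) ** 2 for b in frame]
        print(name, "->", [round(p, 2) for p in probs])

    # frame 1 -> [1.0, 0.0]   a definite outcome
    # frame 2 -> [0.5, 0.5]   pure quantum chance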

For example, each possible reference frame could be represented by an independent
dimension in some new type of space for which we have no name. Obviously,
there are an infinite number of possible reference frames, and thus this new
field is truly infinite-dimensional, including all possibilities. Whereas
the coordinate values of Ψ are represented by complex vectors, the coordinate
values of this broader field could be represented by vectors in Hilbert space;
that is, each dimension in our new space represents a possible Hilbert space. If
each specific frame of reference is a sense, i.e. a context, then the field of
all possible reference frames is truly the broadest sense. In general, we could
say that by itself, this field is completely undefined, and at the same time,
it assumes all possibilities at once. Any particular Ψ-wave
is generated simply by slicing this infinite-dimensional field with a lower-dimensional
reference frame. This idea of slicing is a metaphor for the creation
of level-sets, which are lower-dimensional projections of a higher-dimensional
object. In other words, all morphing quantum fields of possibility are
created by reflecting this one field at different angles.

Each angle of perception constitutes its own frame of reference. In
reality, all possible reference frames, or dimensions, are realized simultaneously. However,
as a result of our ordinary mode of human consciousness, we specific ego-centered
entities only perceive reality along one dimension at a time. If one
is able to broaden his perception to include multiple reference frames, it
is possible to experience reality along more than one dimension at a time. This
is by no means a rigorous treatment of the concept in question; however, it
is proposed simply because it is interesting to consider such claims given
our current exploration into the unknown.

In addition, it should also be said that this idea, outlined above, will not
work unless we assume that the ultimate source of all creation is right now. This
claim seems justified in a number of ways. For instance, who has ever
had a real experience of the future or the past anyway? Every experience
of reality is always in the now. The past and the future can only
be represented by fields of possibility; however, at every now, only
one thing is actually happening. The idea, that only now exists,
is consistent with experimental fact because every type of quantum measurement
yields only one actual event. It is my claim, that from the perspective
of an infinite-dimensional field of all possible reference frames, everything
in space-time already exists right now. From the perspective of
a lower-dimensional reference frame, events appear to be separated by space
and time.

Another claim, which I feel is justified, is that the existing formulation
of quantum theory applies to all entities regardless of their size. Quantum
theory was discovered in the realm of atomic and sub-atomic particles because
at these scales of reality, the effects of quantum waves become dramatically
obvious. It is generally assumed that at a certain limit, the quantum
laws converge to the normal everyday laws of ordinary experience. This
may be the case for many types of attributes which physicists are preoccupied
with, but in no way does it rule out the possibility that there may exist presently
undiscovered quantum relationships between macroscopic entities such as humans,
plants, star systems, or ant colonies.

Logically, quantum theory applies to all things primarily because everything
is made from the same stuff. It is all woven from the same fabric. It
should be obvious that there is no natural division between the realm of a
super-cluster of galaxies and the realm of a bunch of quarks. However,
macroscopic entities such as humans and stars are not made of atoms, nor are
they made of quarks. In general, it seems that everything consists of
frequencies of energy-mass; however, on an even deeper level, these frequencies
don’t exist unless we first define a reference frame. Therefore,
it appears that the ultimate stuff of reality is simply pure infinite possibility.

As noted earlier, quantum entities are necessarily whole beings. Obviously,
physical scientists have been able to detect atomic and sub-atomic phenomena;
however, I would argue that as opposed to being made of such building-block-like
parts, each quantum whole is a hyperdimensional complex within which reside
lower-dimensional wholes. At the same time, each whole is embedded in
a broader context of an even higher dimension. In other words, the fields
which organize individual quarks are contained within broader sense fields
which organize individual atoms. Atom fields, in turn, can be represented
within molecular fields, which can be represented within cellular fields. In
this way, we can easily conceptualize human fields, collective species fields,
planetary fields, star system fields, and galactic fields.

[Image: Modern travel-egg, used primarily for interdimensional trips to parallel universes]

[Image: A possible dimension you may be interested in travelling to]

It is also extremely likely that similar fields exist
which organize other types of dynamic systems as well. The following
list represents just a few examples: the weather, the stock market, a flock
of birds, the rise and fall of human civilizations, and the development of
an embryo. I would even go one step further and propose that quantum Ψ-waves
could also be utilized to represent entities such as thoughts, ideas, dreams,
and memories. These exotic entities, such as ideas, should qualify
as quons because they have both continuous wave-like characteristics
and discrete particle-like characteristics. Indeed, it seems
obvious that any generic quantum entity is quite similar to a memory in that
it is possible to represent both using abstract fields of possibility. One
thing is for sure: the realm of quantum waves is more like an ocean of ideas
than a box filled with the hard ordinary objects we're used to
here in physical reality.

Physicists have been able to derive formulas for elementary
quantum processes because they are simple in comparison to the more complex
entities such as galaxies and ideas. It is extremely difficult to derive
the wave function for a molecule, let alone a human being. In fact,
I would say that it is impossible to calculate the dynamic field properties
of a simple multi-cellular organism even with the most advanced supercomputers
of the next 100 years. You might as well forget about using a pencil
and paper. The main reason is that there are simply too many variables
to keep track of. The only available mechanism capable of
computing such astronomically complex relationships is the electromagnetic
neuro-chemical circuitry of the organismic bio-computer that we call the
human body. In other words, we already possess a natural mechanism
which can navigate through these fields intuitively, as opposed to analytically.

Moreover, advanced forms of bio-technology, such as human beings
and stars, can easily tap into even more powerful systems of organic bio-technology. For
example, humans can open a direct connection to the planet, our larger whole,
which in itself is an incomprehensibly more evolved expression of the
one quantum entity. This idea is analogous to a network of computers
which are all connected by a mainframe or a hub. The sun, in turn,
can open a direct connection to the center of the galaxy, an even larger
whole. Implicit in this view is the necessary assumption that all forms
of quantum whole entities are expressions of consciousness. This does
not mean that galaxies are conscious in the same way that humans are, but
it does imply that all forms of creation, no matter how alien, are truly
conscious in their own way.

In fact, it seems that humans today are operating in safe-mode,
which mysteriously limits our capabilities to roughly ten percent of our
full computing potential. If we were able to turn on to our full potential,
we might really be surprised by the complexity, and dimensionality, of the
patterns which we are capable of perceiving. As a final remark, it
is important to note again that quantum theory does not apply solely to the
unbelievably small scale of electrons, photons, and quarks. Quantum
theory is actually a mathematical description of a fundamentally deeper level
of reality, which precedes and organizes the manifestations of physical existence
according to the dynamics of hyperdimensional fields of possibility. The
ultimate source of all these fields is most likely the cosmic imagination
of God/Goddess, which expresses itself through all things. At the most
fundamental level of reality, everything in the Universe is a seamless unbroken
extension of itself, which, in turn, is constantly observing itself at different
angles, and re-creating itself right now in an infinite number of ways. Oh
psy . . . it's time to wave goodbye!

[Image: Hyperdimensional human in trance]

[Image: Hyperdimensional human beyond 2012]

Morphology

Bohm, David. Wholeness and the Implicate Order. London and New York: Routledge, 1980.