Order, Chaos, and the End of Reductionism

The author presents a case against reductionism based on the emergence of chaos and order from underlying non-linear processes. Since all theories are mathematical, and based on an underlying premise of linearity, the author contends that there is no hope that science will succeed in creating a complete theory of everything. The controversial subjects of life and evolution are explored, exposing the fallacy of a reductionist explanation and offering a theory of order emerging from chaos as the creative process of the universe, leading all the way up to consciousness. The essay concludes with the possibility that the three-dimensional universe is a fractal boundary that separates order and chaos in a higher dimension. The author discusses the work of Claude Shannon, Benoit Mandelbrot, Stephen Hawking, Carl Sagan, Albert Einstein, Erwin Schrödinger, Erik Verlinde, John Wheeler, Richard Maurice Bucke, Pierre Teilhard de Chardin, and others. This is a companion piece to the essay "Is Science Solving the Reality Riddle?"

Transcript of "Order, Chaos, and the End of Reductionism"

1.
Order, Chaos and the End of Reductionism
(Further Ruminations of an Amateur Scientist)
By John Winders
z′ = zⁿ + c

2.
Note to my readers:
You can access this essay and my other essays directly instead of through this website,
by visiting the Amateur Scientist Essays website at the following URL:
https://sites.google.com/site/amateurscientistessays/
You are free to download and share all of my essays without any restrictions, although it
would be very nice to credit my work when making direct quotes.

3.
The image below was generated by cellular automata. The pattern evolves downward from an
Alpha Point at the top of the image. Each pixel in a row is defined by the neighboring pixels in the
preceding row by following simple rules of modulo-2 arithmetic. Modulo-2 arithmetic is highly
non-linear, and non-linear processes produce order and chaos. Projecting the top-to-bottom
evolution as a 2-dimensional image, complicated large-scale order seems to emerge from simple
localized processes.
The image below was generated by the Mandelbulb Generator computer program. The surface
surrounding this strange object is the boundary that separates order from chaos. Points inside the
surface represent order (included in the Mandelbrot set) and points outside the surface represent
chaos (excluded from the Mandelbrot set). Order and chaos thus mirror each other.

4.
The image below is the barred spiral galaxy NGC 1300 taken by the Hubble Space Telescope. The
color rendering was inverted to produce the color-on-white image. The large-scale order is largely
a result of interactions involving gravity and inertia. According to reductionist thinking, entropy
can only produce randomness and disorder. Erik Verlinde has discovered that gravity and inertia
both emerge from entropy. Thus, a post-reductionist interpretation of this image is the balance
between the tendency of gas molecules to fly apart and the tendency for them to collapse; both of
these tendencies are driven by a single entropic force.
The image below is an actual photograph of a DNA strand. DNA has the most highly-organized
naturally-occurring structure known; current scientific theories based on reductionism cannot fully
explain it.

5.
The image below is the famous painting “The Great Wave off Kanagawa” by the Japanese artist
Katsushika Hokusai. It captures the essence of order from chaos. Notice the self-similarity and
scale-invariant features of the breaking wave, which are the fundamental properties of fractals.
Also notice the similarity between the rising wave in the foreground and the snow-covered
mountain in the background. Hokusai was keenly aware of the fractal-like patterns found
throughout nature. This raises the prospect that these patterns are reflections of fractal properties
of space itself. It is possible to mathematically construct a Mandelbrot set using quaternions; the
set would be a finite 4-dimensional solid enclosed by a fractal boundary having three dimensions
with an infinite volume. Could our 3-dimensional space be a fractal-like boundary that separates
order from chaos in a higher dimension?
The image below is the strange stationary hexagonal feature at the north pole of Saturn taken by
the Cassini orbiter in 2012. It was first seen in the 1980s by the Voyager flyby missions. An
unknown self-organizing mechanism is responsible for sustaining the formation. (Credit: NASA)

6.
This image captures natural order and chaos that spring within the fractal boundary we live in.
The chaotic water jet splashing over the urn gives way to the orderly laminar flow down along the
sides. The same fundamental law, which maximizes the total degrees of freedom of the universe,
governs the laws of fluid dynamics and the self-organizing principle expressed in the plants and
flowers that surround the urn.

7.
Note: The drawing on the cover is an example of Penrose tiling, generated by a computer program
provided by Craig S. Kaplan of the University of Waterloo in Ontario, Canada. This particular
example was generated by varying the program's parameters until they were almost at the
borderline of order and chaos.
This essay is a companion piece to Is Science Solving the Reality Riddle? (Cogitations of an
Amateur Scientist). I considered adding yet another appendix to Reality Riddle, but repeatedly
fooling around with it was starting to get ridiculous. So I decided to encapsulate some ideas in a
separate piece instead (this one). In case you're interested in knowing the genesis of these ideas, I
suggest reading over Reality Riddle first.
I'll start off with a dictionary definition of reductionism:
re·duc·tion·ism
1: explanation of complex life-science processes and phenomena in terms of the laws of physics
and chemistry; also: a theory or doctrine that complete reductionism is possible
2: a procedure or theory that reduces complex data and phenomena to simple terms
— re·duc·tion·ist noun or adjective
— re·duc·tion·is·tic adjective
I'd like to concentrate on the second definition first. The basic idea is that the whole is equal to the
sum of its parts. I'm what you might call an anti-reductionist, because I think the whole is greater
than the sum of its parts, and usually it's a lot greater. Unfortunately, the “hard” sciences, such as
physics and chemistry, and almost all of engineering fall into the reductionist camp. This started
back before Isaac Newton, but he was the one who really gave it legs. Scientists knew that planets
revolved around the sun before Newton, and they even had a pretty good idea of how they moved.
They just didn't have a clue as to why they moved the way they did. Johannes Kepler accurately
described planetary motions in a set of three laws, but he was a little fuzzy about why these laws are
true. Oh, he did have a theory, described in a document called the Mysterium Cosmographicum,
which seems to be a weird mixture of Platonism, astrology, Biblical doctrine, and maybe even
alchemy. But that doesn't resemble anything like a sound theory according to modern physics.
Then in 1687, Newton came up with his laws of motion and gravity that he published in his
Philosophiæ Naturalis Principia Mathematica, or just Principia for short. He even invented the
calculus to help scientists and engineers work with his theories.1
Way to go, Sir Isaac! The big
breakthrough came when Newton realized that the same laws that govern apples falling on the Earth
also apply to motions of the Moon and the planets. This also reinforced the idea that natural
processes can be described by mathematics, specifically linear equations, and more specifically
differential equations. This idea became an obsession among scientists, and reductionism hinges on
the notion that nature obeys mathematics; however, I think it's more accurate to state that
mathematics sometimes mimics nature.
Since Newton's time, science and mathematics have been inextricably linked. Every breakthrough
in mankind's understanding of nature has been accompanied by a scientific theory couched in the
language of mathematics. Today, it's the other way around: mathematics is leading science by the
nose. Today, it's virtually impossible to express scientific thought in any language other than
1 Actually, he co-invented the calculus along with Gottfried Wilhelm Leibniz, whose notation was adopted by
mathematicians, and is the standard way calculus is taught in high schools and colleges. Newton accused Leibniz
of plagiarism, even though Leibniz published his version first.

8.
mathematics. I feel that this is becoming a stumbling block of science.2
There was great scientific progress in the early part of the 20th century, beginning with Albert Einstein's
theory of special relativity and the quantum theory of light in 1905, followed up by his theory of
gravity expressed by general relativity in 1915. Einstein's breakthrough with the quantum theory of
light was further developed by a notable cast of characters beginning roughly in the 1920s.3
I'm not
going to repeat the well-documented history of these events, other than to point out that relativity
and quantum mechanics came at reality from completely different directions, and are in many ways
completely incompatible with each other. This led Einstein and others to try to merge or “unify”
quantum physics with general relativity. So far, these attempts have been completely unsuccessful.
In my opinion, the problem of unification lies mainly with general relativity, because it is still a
classical-deterministic theory.4
Experiments have shown time and again that reality does not obey
classical-deterministic rules. As I stated often in Reality Riddle, general relativity is a good
conceptual tool that describes many phenomena very accurately on fairly small scales, as long as
the “curvature” of space-time isn't carried to extremes. The mathematics begins to fall apart – as
indicated by infinities and time anomalies that pop up – when it is (mis)applied to extreme
gravitational conditions or when trying to “solve” the state of the entire universe.
Physicists believe that the unification of general relativity with quantum field theory will ultimately
result in a Quantum Theory of Gravity. That theory requires a hypothetical elementary particle
known as the graviton – the force carrier of gravity. So far, this particle has not been seen in the
wild, but its quantum-mechanical properties are pretty well established. Its range is infinite and it
must travel at the speed of light, so it can't have any mass, and in order to fit into the standard
model, it must be a spin-2 boson.5
One of the strange things about gravity is that it cannot
be shielded or blocked. If you stand behind a wall of solid lead – or solid wall of anything for that
matter – the force of gravity will go right through it. So the graviton must also have infinite
penetrating power, a property all but unique among elementary particles.
Unfortunately, coming up with a quantum theory of gravity involves a lot more than just plugging a
graviton into quantum field theory, or turning gravitons loose to zoom around in 4-dimensional
space-time. As I stated in Reality Riddle, there seems to be a problem with properly incorporating
rotation into general relativity, which might actually point to a bigger problem. Einstein apparently
believed that there are no inherent, qualitative differences between rotating objects, which have
centripetal acceleration, and objects that accelerate in straight lines. But I suspect there really are
qualitative differences between them. For one thing, an object that accelerates in a straight line
needs to be pushed by something else; otherwise it just stops accelerating.6
A rotating object on the
other hand, accelerates centripetally without any help from the outside. That's one qualitative
difference. Another qualitative difference is that linear acceleration is equivalent to a gravitational
field; however, there doesn't seem to be any plausible gravitational equivalence to centripetal
acceleration. My suspicion is that the failure to recognize these inherent, qualitative differences
resulted in an incomplete theory. This causes anomalies like backward time travel when the general
2 This was one of the basic themes in Is Science Solving the Reality Riddle? (Cogitations of an Amateur Scientist).
3 These included Einstein himself, along with Niels Bohr, Max Born, Satyendra Nath Bose, Louis de Broglie, Arthur
Compton, Paul Dirac, Werner Heisenberg, David Hilbert, Enrico Fermi, Max Von Laue, John von Neumann,
Wolfgang Pauli, Max Planck, and of course Erwin Schrödinger.
4 It is also very much a reductionist theory, which is another fallacy.
5 Mass particles, such as electrons, protons, and neutrons, are fermions. They have spins that are odd multiples of ½
and they obey Pauli's exclusion principle. Force carrier particles, such as photons and gluons, are bosons.
They have spins that are either zero or even multiples of ½ and they don't obey Pauli's exclusion principle.
6 An accelerating rocket pushes on the gas escaping the rocket nozzle. The gas pushes back on the rocket according
to Newton's third law of motion, causing it to accelerate.

9.
relativity field equations are solved for cases where there are spinning motions.
Here's another clue: the fundamental constant in quantum mechanics is the reduced Planck constant, ħ. This
constant has units of angular momentum or spin. The energy of a body in periodic motion is
quantized, as given by the formula ΔE = ħω, where ω is the angular frequency of oscillation. Planck's
constant also shows up in Schrödinger's wave function, which is a periodic function. Periodic
motions and spin are closely related. Therefore, it seems that spin is the one ingredient that
automatically provides quantization, and I have a hunch that a quantum theory of gravity might
emerge naturally if spin could be properly incorporated into general relativity and baked into it from
the very beginning.
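As a worked illustration of the quantization formula ΔE = ħω, here is a minimal sketch; the 1 THz oscillator is an arbitrary example of mine, not one from the essay:

```python
import math

# Energy quantum of a body in periodic motion: Delta-E = h-bar * omega.
HBAR = 1.054571817e-34           # reduced Planck constant, J*s (CODATA value)

def energy_quantum(omega):
    """Smallest allowed energy step for angular frequency omega (rad/s)."""
    return HBAR * omega

# Illustrative oscillator: 1 THz ordinary frequency, so omega = 2*pi*f.
omega = 2 * math.pi * 1.0e12
dE = energy_quantum(omega)       # roughly 6.6e-22 J: tiny on everyday scales,
                                 # which is why quantization goes unnoticed
```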
At the end of the 19th century, the Industrial Revolution had transformed the western world, science
and mathematics had triumphed, and it appeared that nothing further could be invented or
discovered. This was the prevailing reductionist fantasy, expressed earlier in the century by the
physicist Pierre-Simon Laplace:
“Consider an intelligence which, at any instant, could have a knowledge of all forces controlling
nature together with the momentary conditions of all the entities of which nature consists. If this
intelligence were powerful enough to submit all this data to analysis it would be able to embrace in
a single formula the movements of the largest bodies in the universe and those of the lightest atoms;
for it nothing would be uncertain; the future and the past would be equally present to its eyes.”
By Laplace's time, science had pretty much worked out the movements of the largest bodies in the
universe and those of the lightest atoms, thanks to Newton's laws. So all that needed to be done was
to collect the momentary conditions of all the entities (plugging in the boundary conditions) and
turn the crank. Past, present, and future would be revealed in all their glory.
Of course, the remarkable progress in the early 20th century laid waste to the naive notion that there
was nothing left to discover or invent. But in the early 21st century, it's déjà vu all over again.
Some scientists actually think that unifying quantum theory with relativity – possibly through string
theory – is the only piece of the puzzle that's missing. Like the intelligence in Laplace's fantasy,
the finder of that missing piece would see the entire past, present, and future revealed: how the universe
began in minute detail, its entire evolution, and how it will end. It might even reveal the origin
of life itself. Well, here's what I think: when and if a unified theory is unveiled, it won't be the end
of science, but it very well might be the end of reductionism. I'll now try to explain the reasoning
behind that statement.
First, it will be helpful to give a very broad overview of the two physical theories that scientists are
attempting to merge. Einstein's theory of relativity can be expressed by the following mathematical
equation, which links the curvature of space-time to the concentration of mass-energy as follows:

Rμν – ½R gμν = (8πG/c⁴) Tμν

This is called the Einstein field equation. I'm not going to attempt to explain exactly what each of
the terms mean, other than the fact that Rμν, gμν, and Tμν are what are known as tensors. Tensors are
geometric objects that express linear relationships among objects, in this case in four dimensions.
Although Einstein's field equation is somewhat similar to an ordinary linear differential equation, it
is in fact nonlinear, so it is devilishly difficult to solve except for the most simple cases.
Quantum mechanics can be similarly summarized by a single equation, known as the time-

10.
dependent Schrödinger equation:

iħ ∂Ψ/∂t = –(ħ²/2m)∇²Ψ + VΨ
Again, I'm not going to explain all of the terms, other than to say that it is a second-order
differential equation of the variable Ψ, which varies over time and distance; i.e., it's a wave. The
wave itself has no physical meaning – it simply “exists” in space and time.7
Yet this immaterial
wave mysteriously orchestrates the movements of all material objects from electrons to planets.
The Schrödinger equation, unlike Einstein's field equation, is linear and can be readily solved.
What is meant by “linear”? Well, the equation z = x + y is linear because the value of z is simply
the sum of its parts, x and y. The Schrödinger equation is linear because the components simply
add. If two wave functions, Ψ1 and Ψ2, were to overlap in space, the resulting wave function would
be the sum of the two because space is presumed to be linear. You would get an interference
pattern, but you could still decompose the pattern into its constituent parts. This also makes it
possible to apply mathematical tools, such as Fourier analysis, which are used to break down
complicated functions into sums of much simpler functions such as sine waves.
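This decomposability can be demonstrated numerically: superpose two sine waves and recover each component with a Fourier transform. A sketch using NumPy; the 3 Hz and 7 Hz components are arbitrary illustrative choices:

```python
import numpy as np

# Because the underlying equation is linear, two overlapping waves simply
# add, and Fourier analysis can pull the constituent parts back out.
fs = 1000                        # samples per second
t = np.arange(0, 1, 1 / fs)      # one second of samples
f1, f2 = 3.0, 7.0                # two arbitrary component frequencies (Hz)
psi1 = np.sin(2 * np.pi * f1 * t)
psi2 = np.sin(2 * np.pi * f2 * t)

combined = psi1 + psi2           # linear superposition: just the sum

spectrum = np.abs(np.fft.rfft(combined))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

# The two largest spectral peaks sit exactly at the component frequencies,
# so the interference pattern decomposes cleanly into its parts.
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)                     # -> [3.0, 7.0]
```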
The equation z = x² + 2xy + y² is nonlinear because the whole, z, is not the sum of its parts, x and y.
If space were nonlinear, two overlapping wave functions would combine in ways that would make
it impossible to decompose the resulting wave into its parts. This would render most situations
unanalyzable. In order to have any chance of analyzing a physical process mathematically, the
process must be linear. Therefore, all physical processes that scientists analyze are assumed to be
linear.
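The contrast can be put as a one-line test: a map f is linear only if f(x + y) = f(x) + f(y). A minimal sketch (the example maps are mine, not the author's):

```python
def linear(v):
    return 5 * v                  # scaling: a linear map

def nonlinear(v):
    return v * v                  # squaring: a nonlinear map

x, y = 2.0, 3.0

# Linear map: the whole equals the sum of its parts.
assert linear(x + y) == linear(x) + linear(y)

# Nonlinear map: the whole differs from the sum of its parts by the
# cross term 2xy, exactly as in z = x^2 + 2xy + y^2 = (x + y)^2.
assert nonlinear(x + y) == nonlinear(x) + nonlinear(y) + 2 * x * y
```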
String theorists call the ultimate theory of reality M-Theory, although nobody really knows what the
M stands for. For the lack of a better name, I'll stick to the term M-Theory as well. It's almost
certain that M-Theory must be expressed mathematically, since pure mathematics is the only
driving force behind it at the moment. This means that no matter what form M-Theory takes, the
underlying assumption is that reality is linear. But what if it isn't? In that case, although M-Theory
might successfully describe many things, it won't describe everything, which was the original
purpose for developing it in the first place. But if that's true, then physicists will discover to their
horror that M-Theory was actually a dead end. They will have no choice but to scrap the notion that
reality is linear or that it can be expressed through mathematics, or at least using the kinds of
mathematics we presently use. In other words, scientific principles will change in significant ways,
forcing us to abandon reductionism and look for other kinds of answers.
Now saying that reality is nonlinear is a pretty sweeping statement, but I'm convinced it's true. The
simple reason is that there is order in the universe, and order can only arise naturally through
nonlinearity. We kind of take order for granted, but it's really a very deep mystery because
according to the second law of thermodynamics, order shouldn't exist at all.
First, we need to explore the concept of entropy. When James Watt transformed the steam engine
with his separate condenser around 1765, he didn't have a clue about thermodynamics. He just knew that steam makes pressure
and by condensing steam you make a vacuum; and if you put pressure on one side of a piston or a
vacuum on the other side, you can make the piston move back and forth; and you can make a
moving piston turn a wheel by using rods. Scientists started to study heat analytically, and they
conjured up a bunch of laws they called thermodynamics. The second law of thermodynamics
7 The wave function Ψ is expressed as a complex variable, having a real and an imaginary part. Its conjugate, Ψ*,
changes the sign of the imaginary part from a plus to a minus or vice versa. The product ΨΨ* is a real number, and
that does have a physical meaning: it's the probability density function of a particle, or the likelihood of finding the
particle within a given region of space and time.

11.
states that heat always flows from hot objects to cold objects. Well, duh. That sort of seems
obvious to most people, but it has some very significant ramifications.
Scientists in the late 18th and early 19th centuries became obsessed with steam, for good reason, because
steam had completely transformed their civilization by ushering in the Industrial Revolution. They
studied steam from every possible angle, and calculated all of its properties, including temperature,
pressure, enthalpy, and a mysterious property known as entropy.
In 1803, Lazare Carnot advanced the principle that all physical systems tend to lose useful
energy, a forerunner of entropy (Rudolf Clausius coined the term itself in 1865). The concept was further developed by his son, Sadi, who
viewed production of work by a heat engine as coming from the flow of a substance called caloric,
like the flow of water through a waterwheel. In the ideal Carnot cycle, the system is returned to its
original state, so the cycle is theoretically reversible. When a process is reversible, then the entropy
of the system remains constant, but if a process is irreversible, some of its ability to do work is lost
and the entropy increases. Increasing entropy → decreasing ability to do work.
When heat flows from a hot object to a cold object, it is an irreversible process and entropy
increases. In a reversible process like the ideal Carnot cycle, entropy stays constant. But in neither
case does entropy decrease. Thus, the second law of thermodynamics can be stated as follows, “In
an isolated system, entropy never decreases.”
In 1877, Ludwig Boltzmann came up with a way to express entropy as a statistical property, which
became the modern way of working with entropy. He defined entropy as the logarithm of the
number of states a system can have, times a constant known as the Boltzmann constant: S = k log W. The second
law of thermodynamics is just another way of saying that all physical systems tend to move toward
their most probable states, which shows up as increased entropy. Viewed in that context, entropy
can be thought of as measuring disorder or randomness.
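Boltzmann's definition is easy to compute directly. A sketch; the coin-flip system is an illustrative choice of mine, not one from the essay:

```python
import math

K_B = 1.380649e-23               # Boltzmann constant, J/K (exact SI value)

def boltzmann_entropy(num_states):
    """S = k * ln W, where W is the number of accessible microstates."""
    return K_B * math.log(num_states)

# Illustrative system: 100 coins, each heads or tails, so W = 2**100 states.
S = boltzmann_entropy(2 ** 100)

# More microstates means higher entropy, which is why systems drift toward
# their most probable (highest-W) macrostates.
assert boltzmann_entropy(2 ** 100) > boltzmann_entropy(2 ** 10)
```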
This led to a very depressing state of affairs, however. Physicists soon realized that the entropy of
the entire universe is increasing, which means that the universe is constantly winding down. This
ultimately will lead to a condition known as “heat death.” This doesn't mean that heat will vanish;
it only means that the universe will reach a state of thermodynamic equilibrium where heat can no
longer produce useful work. But this doesn't just apply to heat; it applies to everything. Stars will
burn up all their nuclear fuel, all radioactive materials will decay, and everything will be in perfect
state of equilibrium and maximum entropy where nothing ever changes.
The prospect of heat death as the ultimate fate of the universe is a direct result of reductionism.
Based on the underlying assumption of linearity where the whole is equal to the sum of its parts, there
can be no other outcome. The second law of thermodynamics is relentless, driving the universe to a
bland, featureless, and dead state. In fact, a reductionist universe is dead already. But a reductionist
universe is also contrary to the obvious fact that order does, in fact, exist in the universe.
So where does order come from? Surprisingly, it comes from the very same processes that produce
chaos. Order and chaos are actually twins, although they're fraternal and not identical. I'll explain
all that a little further ahead. But how do order and chaos relate to entropy? More specifically, how
can order arise when the second law of thermodynamics states that entropy, or disorder, always
increases? Well, actually viewing entropy as simply disorder is somewhat of a misconception. In
the 1940s, Claude Shannon developed the modern theory of information.8
After studying
information in detail, he came up with the astounding conclusion that information and entropy are
really the same thing!
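Shannon's measure has the same logarithmic shape as Boltzmann's entropy formula, which is what motivates the identification. A minimal sketch; the coin distributions are illustrative choices of mine:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), in bits per symbol; mirrors S = k ln W."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally unpredictable: one full bit per toss...
fair = shannon_entropy([0.5, 0.5])       # 1.0 bit
# ...while a biased coin is more predictable, so each toss carries less.
biased = shannon_entropy([0.9, 0.1])     # ~0.469 bits
assert biased < fair
```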
8 Shannon's work at Bell Labs followed his work on code decryption during WWII. The people at Bell Labs were
interested in sending signals through noisy channels, which tends to corrupt signals. Through clever encryption,
Shannon proved it was possible to send signals error-free as long as the information rate is kept below a certain
threshold. This led to error-correcting codes, making modern communication systems and computers possible.

12.
This leads to an interesting corollary to the second law of thermodynamics, namely that information
cannot be destroyed. Physicists, led by Stephen Hawking and Leonard Susskind, have concluded
that entropy is “hidden” information. I'm not sure I agree with the “hidden” part, but I guess they
have their reasons for saying that.9
I have a slightly different interpretation. Information is
constantly being created in the Now, which becomes permanently stored as the Past. We sense the
passage of time as information being added to the universe. You could think of the Past as a filing
cabinet being filled with information, but that information can only be perceived in the Now. The
Future is nothing but an empty filing cabinet with no information in it at all, so our sense of Future
is merely a mental extrapolation based on what has already taken place and what is taking place. So
only Now truly exists, which represents the totality of all changes taking place and influenced by
the Past.
Shannon showed that information is fairly easy to quantify, drawing similarities with Boltzmann's
formula for entropy. The hard part is assigning a qualitative value to information. Information is
neither “good” nor “bad” but certain kinds of information seem more meaningful than other kinds.
I think that is where order and chaos come into play.
Creationists argue that evolution isn't possible because it would violate the second law of
thermodynamics. In the face of entropy, how could life forms have arisen, becoming more and
more complex over time, unless they were created and fashioned by a conscious and willful divine
Entity? Reductionists like Carl Sagan argued that life arose through a random process; if atoms
keep banging into each other over a sufficiently long time,10
they will eventually form DNA
molecules. If you keep randomly shuffling a deck of cards, it will eventually arrange itself in
perfect ascending order. Could random natural processes possibly account for the incredible
complexity of life? Reductionism says yes.
To me, the creationism argument is a false dichotomy. It's not a choice between increasing order or
increasing entropy; both can increase together, and in fact they actually do just that. Think of a
river that flows downhill due to the force of gravity alone. Imagine that the river bed is filled with
rocks, logs, and other debris and that the river banks are very uneven. Now the general flow of the
river is always downhill, but you will see eddies and whirlpools here and there. For the most
part, those eddies and whirlpools don't move downstream. In fact, some of them may even move
upstream momentarily. Would you say those eddies and whirlpools defy the law of gravity?
Of course not. The water molecules always move downhill, but the features of the river don't
necessarily have to. The gravitational force is actually what causes those features to form in the
first place, along with the highly nonlinear process known as fluid turbulence. Turbulence produces
unpredictable, chaotic motions that somehow arrange themselves into stable, ordered features. Very
mysterious, no?
Entropy, order, and chaos work in much the same way. Entropy is the “engine” that keeps the
whole process moving. Yes, the system as a whole (the universe) will tend to move toward the most
probable state, thereby increasing its entropy. But although the universe began in a very
improbable, low-entropy state and is currently winding down, nonlinear processes abound in nature.
These processes create chaos, which is completely unpredictable. And it is chaos that nudges the
universe into creating the beautiful order and structure seen everywhere. The universe isn't “dead.”
It's very much alive and it's engaged in an incredibly rich and diverse creative process.
Well, how does this process actually work? What's the mathematical equation that governs it?
Well, I'm afraid I can't describe the process through a single equation – maybe nobody can. But I
9 This came about by studying what happens when objects are dropped into black holes. If all information about
them is erased, then this would violate the second law of thermodynamics. Hawking and Susskind concluded that
the information isn't lost; it becomes encoded or “hidden” as entropy on the black hole's event horizon.
10 Or as Sagan would say, “After billions and billions of years.”

13.
can describe some examples of how this process works on paper. Benoit Mandelbrot was a brilliant
engineer/mathematician who spent much of his career studying how order comes from chaos,
although he didn't describe it quite that way. He published his results in Fractals: Form, Chance
and Dimension. His ideas were not widely understood by the scientific community, at least initially.
But his was a case of someone who was very much ahead of his time; thanks to Mandelbrot, fractals
have become a vibrant field of study.
Here's one of the ways the process works. Take the formula z′ = z² + c. The first thing we note is
that the expression on the right side is nonlinear, owing to the z² term. The z′ (z prime) stands for a
new value of z based on the old value of z in the right side of the equation. Thus, the formula also
contains feedback. The value of c is a number that we want to test using the formula. Next, we let
z′, z, and c be complex numbers. Now don't get scared or flustered by that. It just means that each
of them has a real and an imaginary part. Using the rules of complex algebra, we can rewrite the
formula as two separate formulas:
z′ (real) = z (real) × z (real) – z (imaginary) × z (imaginary) + c (real)
z′ (imaginary) = 2 × z (real) × z (imaginary) + c (imaginary)
Now, we can plot complex numbers as points on an x-y graph: the real parts correspond to x
values and imaginary parts correspond to y values. We pick a real and imaginary value for c, say
(0,0) and plug it into the formula to calculate z′. We feed z′ back into the equation as z, and repeat
the calculation over and over. Now one of two things will happen: a) the value of z′ will become
chaotic and escape toward infinity, or b) the value of z′ will settle down to a nice, predictable set
of values that keep repeating. If a) happens, then c is thrown out. If b) happens, then c becomes
part of the Mandelbrot set, and we plot the real and imaginary parts of c on our x-y graph.
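The test loop just described can be written in a few lines. A sketch; the escape radius of 2 and the iteration cap of 100 are conventional choices, not values given in the essay:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z' = z**2 + c from z = 0; c joins the set if z stays bounded.

    Once |z| exceeds 2, escape to infinity is guaranteed, so 2 is the
    standard cutoff radius; max_iter caps how long we wait to decide.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c            # the nonlinear feedback step
        if abs(z) > 2:
            return False         # chaotic: c is thrown out
    return True                  # stable: c is part of the Mandelbrot set

# c = 0 settles immediately; c = 1 blows up (0, 1, 2, 5, 26, ...);
# c = -1 cycles forever between 0 and -1.
assert in_mandelbrot(0 + 0j)
assert not in_mandelbrot(1 + 0j)
assert in_mandelbrot(-1 + 0j)
```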
Over many trials involving different values of c, a distinct and very beautiful 2-dimensional pattern
emerges. The pattern is a fractal that has very unusual properties. I'm not going into those
properties here,¹¹ but the point is this: the formula that is used to generate the fractal creates both
chaos and order at the same time. The “chaos” consists of unstable numbers that are not part of the
set; the “order” consists of stable numbers that are part of the set. Chaos and order, Yin and Yang:
The process of making a fractal is a type of self-ordering process. People who have studied self-
ordering processes have identified three necessary conditions: 1) the system cannot be in a state of
equilibrium, 2) there must be at least one degree of freedom, and 3) nonlinearity must be present. It
is almost certain that all three of these conditions exist in the universe. The first two are obvious:
the universe is certainly not in a state of thermodynamic equilibrium because entropy is still
increasing, and there are at least three degrees of freedom present in the very space that things
occupy. The only necessary ingredient we're not quite sure about is the nonlinearity. But the very
fact that self-ordering processes seem to be taking place is a very good indication that nonlinearity
is an underlying feature of our universe. This feature simply cannot be described using linear
equations, so the self-ordering process is not amenable to mathematical expression or analysis.
In case you are inclined to think that fractals have no relationship to reality, you may want to
observe nature more closely. Fractal-like objects are ubiquitous, from the veins in a leaf, to a head
11 I discussed them in more detail in Reality Riddle.

14.
of broccoli, mountain landscapes, to ocean waves breaking on rocks. Even the rhythm of your heart
is a fractal pattern as a function of time. How do these things arise? Well, many self-organizing
processes are very local in nature but lead to highly organized structures on very large scales. This
kind of process is called a cellular automaton. Here's an example of how this works: suppose
there is a row of boxes, each of which can be either full or empty. Now we add a simple rule for
each box: if the two neighbors on either side are either both full or both empty, then the box
becomes empty. Otherwise the box becomes full. Now fill some of the boxes and watch the row
“evolve” one step forward using that rule. The process is repeated over and over and as the rows
evolve, complex large-scale patterns emerge from one very simple rule applied on a very local
scale. You could try this yourself using the cells of a spreadsheet.
This brings up the very controversial subject of the evolution of life. Biochemists have now
successfully “sequenced” the entire human genome. Every gene consists of a sequence of so-called
letters imprinted on a strand of DNA. These letters form a code, which instructs the cell what to do
but more importantly determines whether the cell is part of a plant or animal, and what kind of plant
or animal it is. There is no question that a person's genes determine many of his or her physical
attributes, from eye color to hair texture, height, bone structure, etc. This is obvious simply by
looking at family resemblances, especially between identical twins. However, the big question is
how the letters imprinted on the DNA strands shape the individual.
The biochemists say the genes simply tell the cells which proteins to produce and that some genes
are turned on while others are turned off. Well, that's not much of an explanation. How does a liver
cell know it resides in the liver, where it's supposed to be making liver enzymes, instead of in the
big toe, where it wouldn't have to do much of anything? An embryo starts out as a single cell,
which divides many times before the individual cells begin to branch out as nerve cells, bone cells,
skin cells, etc. Where is the “template” that tells each cell where it is in relationship to all the
others? Well, the creationists have a ready answer for that: God tells the cells what to do and when
to do it. That sounds very unscientific, but I'm afraid the reductionists don't have much of an
answer either, based on the model of a dead, reductionist universe. However, the principle of
cellular automata might explain how a complicated structure like a human being could arise from
each cell knowing who its neighbors are and following simple rules written in the code letters of its
DNA. I'm not saying that's exactly how it happens, but I'm saying that it could be close to the truth.
Could the process of cellular automata explain how life originated in the first place? Well, I don't
know, but it's certainly more plausible than atoms banging into each other and forming life by sheer
luck. It also avoids having to invoke a special one-time creation event as the cause. The boundary
between life and non-life seems to be rather sharp. However, the study of chaos shows that
boundaries are often very sharp between linear and chaotic behavior. So it certainly seems plausible
and even possible that life could have been initiated in a sudden chaotic manner from non-life.
Science has been pushing God into the gaps ever since Newton, and maybe even before him. Each
time some phenomenon was explained by a natural law or process, there was less and less room for
a supernatural explanation. Now I realize that this theory of chaos and entropy may push God still
further into the gaps. Is there any room left for Her at all? Of course there is, and I think there's a
lot more room for Her compared to a reductionist philosophy based on random chance alone. Think
of the ramifications of all this: God could have simply willed creation into existence ready-made,
complete with stars, planets, plants, animals, and people just like it says in Genesis. Or She could
have designed a universe that started out in a completely formless, uniform, and highly improbable
state; a complete void with zero entropy, but with a strong propensity for creating chaos and order
out of nothing and absolutely no way for Her to predict exactly how the whole thing would end up.
Then She could just sit back, let the whole thing unfold in front of Her, and really enjoy the show.
Now I ask: if you were God, which kind of universe would you choose to create?

15.
Appendix A – Order is in the Eye of the Beholder
One of the books that inspired this essay was The Cosmic Connection by Paul Davies. There is one
paragraph on Page 109 that's worth quoting:
“Information theorists have demonstrated that 'noise', i.e. random disturbances, has the effect of
reducing information. (Just think of having a telephone conversation over a noisy line.) This is, in
fact, another example of the second law of thermodynamics; information is a form of 'negative
entropy', and as entropy goes up in accordance with the second law, so information goes down.
Again, one is led to the conclusion that randomness cannot be a consistent source of order.”
Well, this doesn't quite jibe with the information theory I learned in graduate school, or what I know
of Claude Shannon's work. As far as I know, there is no such thing as “negative entropy,” and I
think Stephen Hawking would agree with me that information doesn't “go down” – ever. He and
Leonard Susskind refer to entropy as just “hidden information,” and I guess I could sort of go along
with that. But the point is that entropy and information are essentially the same.
I think there's a common misconception that entropy lacks any information just because it's random.
Randomness contains the same quantity of information as non-randomness, because a random state
is just as unique as a non-random state. However, randomness does seem to lack a quality we call
order, which we need to define. I'll try to clarify these distinctions through a simple example.
Suppose you're sitting across the table from an alien from the Alpha Centauri system and you each
have a deck of cards. Your deck is the standard 52-card variety with four suits of deuces through
tens, three face cards, and an ace. Now you shuffle the deck about a dozen times and start drawing
cards one at a time, and notice that they're all in order! You keep drawing and they keep coming out
in order. So your heart's pounding and you're getting all excited, and you start to sweat. And then
there are only two cards left: the king and ace of spades. Could the next card be the king, making
all 52 cards come out in perfect order? That would be one chance in 52!, or about one chance in
8 × 10⁶⁷. You draw the next two cards and they're the king followed by the ace! The alien just stares
at your cards and shrugs its shoulders. To it, those cards are just showing random symbols.
Now the alien gets out its deck of 53 cards, which have all sorts of weird hieroglyphs printed on
them. In fact, each card has a completely unique symbol on it because its species uses a base-53
number system. The alien shuffles its deck a number of times and starts drawing cards. To you, the
cards appear to be in random order with no discernable pattern whatsoever because each card has a
unique symbol printed on it. But you notice the alien is getting very nervous and excited as it draws
down the deck. Near the end of the deck, the alien is so excited it can't even hold itself together. It
draws the last card and faints dead away. You look at the cards on the table, and they just look like
a pile of random hieroglyphs. But to aliens from the Alpha Centauri system, the arrangement of
those cards has meaning: all 53 cards came out in perfect order in their base-53 number system.
You see, strictly from information theory, there is nothing really special about any arrangement of
cards versus any other. They're all equally probable. No matter what arrangement you dream up,
you would have to shuffle the deck about 10⁶⁸ times for there to be a decent probability of that
arrangement coming up by chance. This is how I started to change my thinking about entropy,
information, order, and chaos. Entropy and information are quantitative measurements, whereas
order and chaos are qualitative measurements. It's actually very hard to define what order is. It's
like beauty – you know it when you see it. You might define order as information with chaos
removed, but then you would have to define what chaos is. Yin and yang.
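The odds quoted in the card story are easy to verify with a couple of factorials:

```python
import math

# A 52-card deck has 52! equally probable orderings, and the alien's
# 53-card deck has 53! of them.
print(f"52! = {math.factorial(52):.2e}")   # about 8.07e67, as quoted
print(f"53! = {math.factorial(53):.2e}")   # about 4.27e69
```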
Here's another analogy.¹² Suppose you're building a giant wall of bricks, say 1,000 bricks wide by
1,000 bricks high. There's a huge pile of bricks lying at the construction site. There are two kinds
12 I just love analogies, don't you? But some people, like my daughters, don't seem to like my analogies very much.

16.
of bricks: some have white 0s painted on them and others have black 1s painted on them, so you
could think of those bricks as information. You call over your assistant, whose name happens to be
Claude Shannon, and ask him, “Hey Claude, how much information is over there in that pile of
bricks?” Claude counts the bricks and informs you there's one million bits of information. Now
before you start building the wall, you decide it would be nicer to create a pixelated copy of the
“Mona Lisa” using the 0s and 1s instead of just randomly laying the bricks next to and on top of one
another. So that's what you do; and after you finish, you stand back and admire your version of the
“Mona Lisa,” and ask, “Hey Claude, how much information is in those bricks now?” You'd think
he'd be so impressed by your work that he'd say there are a couple of billion bits up there. But
Claude simply counts the bricks and tells you there are one million bits of information in the wall.
You see, Claude doesn't appreciate art, so to him, every arrangement of 1s and 0s is just like any
other. What you should be asking him is how much order (or lack of chaos) is in those bricks.
Generating pseudorandom number sequences is similar to generating chaos. There are methods that
measure the “statistical complexity” of pseudorandom numbers generated by algorithms, as
described in a paper entitled Intensive Statistical Complexity Measure of Pseudorandom Number
Generators, by H.A. Larrondo, C.M. González, M.T. Martin, A. Plastino, and O.A. Rosso.
According to my “new” way of thinking about order and chaos, Larrondo et al may have stumbled
on a way to measure order indirectly by measuring chaos. Maybe it's an equation like this:
Order = Information – Chaos
I think the only truly random processes are “natural,” especially quantum ones, such as radioactive
decay. In the famous “Schrödinger's cat”¹³ thought experiment, the process that triggers the release
of cyanide and kills the cat is the decay of a radioactive material placed near a Geiger counter. Apparently,
Schrödinger realized that a pseudorandom number generator just wouldn't cut it in that experiment
because it wouldn't be random enough. Now you might say that there's no real difference between
an algorithm that generates random numbers and a radioactive decay process that generates 0s and
1s. But there is. Albert Einstein thought that quantum processes, like radioactive decay, were like
little machines that are programmed to spit out beta or alpha particles every so often. He called the
programming “hidden variables.” He challenged his nemesis, Niels Bohr, with this by publishing a
paper in 1935. He said that Bohr's version of reality – quantum uncertainty – was bogus.¹⁴
Well, it turns out that experiments performed in the 1980s proved Einstein was wrong and Bohr was
right, so Bohr got the last laugh; or he would have if he and Einstein had still been alive by then.¹⁵
When I was in the army, I saw some super-secret radio transceivers that scrambled (encrypted)
human voices. The encrypted transmissions received by an ordinary radio sounded like noise, as if
you were listening to Niagara Falls. But it wasn't random noise at all; it was really chaos. The
information in the message wasn't diminished – it can't be – but the circuitry changed ordered
{silence + human voices} into chaos. Those secret transceivers must have used pseudorandom
number generators to do that because the process was completely reversible so the receivers could
change the chaos back into ordered {silence + human voices} again. The whole science of breaking
secret codes, Shannon's area of expertise in WWII, depends on the reversibility of the encryption
process. In principle, every code can be broken – with a sufficient amount of brute force – because
they all use reversible algorithms. I think a completely unbreakable code would have to scramble
messages using random numbers from an irreversible process like radioactive decay. But then
nobody would be able to unscramble the messages, including people who are supposed to receive
them. So there are even qualitative differences between chaos generated by reversible processes
and chaos generated by irreversible processes, although it's pretty hard to tell the two apart.
13 It's also known as the “Fluffy experiment” named after Schrödinger's pet cat, Fluffy. Just kidding.
14 Actually, he wasn't quite that rude. He just politely asked whether or not Bohr's theory was “complete.”
15 I covered Bell's inequality experiments in excruciating detail in my essay Reality Riddle.
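I obviously can't show the classified circuitry, but the reversibility point can be sketched with a toy scrambler: seed a pseudorandom generator with a shared key, XOR its keystream with the message, and the identical keystream turns the chaos back into order. Everything here (the key value, the byte-wise XOR) is illustrative, not how the army radios actually worked:

```python
import random

def scramble(message: bytes, key: int) -> bytes:
    """XOR each byte with a keystream from a seeded pseudorandom
    generator. Because the process is reversible, applying it twice
    with the same key recovers the original message."""
    rng = random.Random(key)                     # shared secret seed
    return bytes(b ^ rng.randrange(256) for b in message)

plaintext  = b"silence + human voices"
ciphertext = scramble(plaintext, key=1234)       # looks like noise (chaos)
recovered  = scramble(ciphertext, key=1234)      # same keystream undoes it
assert recovered == plaintext
```

Swap the pseudorandom keystream for genuinely random numbers from radioactive decay and you get a one-time pad: unbreakable, but also unrecoverable unless the pad itself is somehow delivered to the receiver.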

17.
Appendix B – The Ice Box Conundrum
Whenever I think about entropy I always come back to the same ice box problem that sticks in my
head. Say you have a perfectly-insulated box with food items at room temperature, and you want to
cool the food down in a hurry so it won't spoil. You go to a store where they sell dry ice (frozen
CO2) and you bring a chunk of it home, stick it in the box, and close the lid. Now an expert in
thermodynamics will say that you disrupted the thermal equilibrium of the box at room temperature
by putting a cold chunk of dry ice in it. In other words, you opened the system to the outside and
lowered its entropy by forcing it to be in an unnaturally-ordered state: {warm food + cold ice}.
Now over time, heat will flow from the food into the dry ice, which makes some of the CO2
evaporate. This confirms the second law of thermodynamics as it was originally stated: heat flows
from bodies at higher temperatures to bodies at lower temperatures. If the box is perfectly
insulated, the amount of heat energy inside stays the same, but the entropy increases because a gas
has more entropy than a solid. What this means is the number of “microstates” of the system has
increased while that elusive property we call “order” decreases. Eventually, the food and the dry ice
will reach thermal equilibrium where everything is at the same temperature. This maximizes the
number of microstates the system can occupy, which maxes out its entropy.
Suppose the box is not only perfectly insulated but it's also perfectly sealed. If not all the CO2
evaporates, there's still some dry ice in the box and all the original CO2 molecules are still in there.
Now here's the part that bothers me: most textbooks that discuss entropy say there's always some
probability that systems in thermal equilibrium could spontaneously go from a disordered state into
some highly-ordered state. They say the probability might be vanishingly small, but it could
happen. In other words, there's some minuscule probability that all the CO2 gas molecules could
suddenly decide to refreeze and dump heat back into the food, returning the system to its original
state. Since dry ice that spontaneously decides to refreeze is exactly the same as the dry ice you put
there originally, the entropy of the entire system will have to go back to its original low value.
The authors of the textbooks wave their hands around and say, “Don't worry, this won't happen
because the number of microstates is unbelievably large, so the probability of going all the way
back to square one is vanishingly small.” But this just won't cut it because vanishingly small is still
greater than zero, so this still could happen; but the second law of thermodynamics says it simply
can't happen. Period. This is what I call the ice box conundrum.
I thought about this for a long time and I think I came up with a solution. When rolling dice, it
doesn't matter whether you roll one die a million times or roll a million dice all at once. Either way,
the probabilities of the dice coming up certain ways are the same because all rolls are statistically
independent. In other words, previous rolls don't change the probabilities of future rolls. This is
different than the changes happening inside the ice box. As each CO2 molecule vaporizes, the
number of possible microstates increases, so entropy increases gradually; here, the probabilities do
change depending on what state the system is in. It can't get from the initial low-entropy state to
any of the high-entropy equilibrium states in one giant leap because those states aren't included in
the list of possible low-entropy microstates. Pathways to those states have to open up first.
Here's why going in the reverse direction wouldn't work. In the textbook version of a system in
equilibrium, the system jumps around from one state to another; all states are equally probable and
each jump is statistically independent from all the others. So in theory, the system could jump all
the way back to its original low-entropy state in one jump like rolling all the dice at once. But a real
physical system like the ice box can only move into the states that are available to it. Unlike dice
rolls, the moves are not statistically independent. If a tiny pathway to a lower-entropy state opens
up, it soon closes again before it can be filled. A few CO2 gas molecules might refreeze from time
to time, but no permanent pathway is open for the system to get back to its original state.

18.
Appendix C – The Post-Reductionist Universal Law
Newton's laws, special and general relativity, and quantum theory all have something in common:
they all hinge on fields. Newton saw nothing wrong with action at a distance, so he didn't bother to
postulate a field in his theory of gravitational attraction between two masses; his equations spoke
for themselves. But others who followed him made sure to add a gravitational field. Einstein
explained gravitation as space-time curvature, which can also be interpreted as a disturbance of the
space-time field. Quantum mechanics is based on the Schrödinger wave function, Ψ, which is a
kind of field, although nobody is sure what Ψ really is. Modern quantum field theory, which
produced the standard model of elementary particles, proposes many different kinds of fields. The
elementary particles are knots in those fields; individual electrons are knots in the electron field,
individual quarks are knots in the quark field, etc. The vacuum isn't empty; it's filled with fields of
every type and description, including the all-pervasive Higgs field, with virtual particles popping in
and out of existence as a result of quantum fluctuations in those fields. Nobody yet knows what
string theory, or M-Theory, will come up with, but I'm sure new fields will be in it. The one thing
that's lacking in all of this is a unifying law or principle that makes everything hang together.
Some scientists in the past and present have proposed a different way of thinking. I'll call this the
“post-reductionist” view. Whereas reductionism views the whole (the universe) as being the sum of
its parts (a linear superposition of all fields throughout space), post-reductionism is a holistic theory
that proposes there is a unifying law or principle that expresses itself through the action of the parts.
Pierre de Fermat and Joseph Louis Lagrange were two pioneers of this philosophy.
Fermat proposed that the path taken by a light ray is the path that minimizes the transit time.
Physicists generally reject that notion, favoring the wave theory of light to explain refraction,
although they have to admit that Fermat's conjecture does work. Reductionist thinking doesn't
allow for light rays to seek out paths that minimize transit times. Instead, light waves are
influenced locally by the optical properties of the media through which the waves propagate, and
the waves themselves are electromagnetic fields governed by Maxwell's equations.
One of Lagrange's ideas was the principle of least action, where moving objects follow paths that
minimize the total “action” summed over time. Lagrange based the action on the quantity now
called the Lagrangian: Kinetic Energy – Potential Energy. Suppose you're on the ground and throw a
ball to your friend standing on a flat roof. You want to know what path the ball follows, knowing
only its initial velocity and the location of your friend. Applying Lagrange's method, you would
express the incremental action, dS, in terms of the ball's mass, m, its horizontal and vertical
distances from you, x and y, and the gravitational acceleration, g, over a time interval, t:
dS = { ½ m [(dx/dt)² + (dy/dt)²] – mgy } dt
The path of the ball, expressed as the function y(x), is found by minimizing the integral of dS over
the total time it takes the ball to go from you to your friend, which of course you don't know ahead
of time. Now actually doing the Lagrange computation is fiendishly hard, taking up several pages
of very difficult calculations. What you end up with is a parabola:
y(x) = Ax – Bx²,
where A and B depend on the ball's initial velocity. Now you might ask why any person with a
sane mind would go to all that trouble when you could just use Newton's laws of motion and come
up with the same result with a few lines of relatively simple calculus? Well, you wouldn't use
Lagrange's method for this particular problem, but the fact that it actually works provides some
deep insights about the universe. Richard Feynman's high school physics teacher showed him this,
and it made a deep and lasting impression on him. In fact, his quantum field theory uses a

19.
methodology that is closely related to Lagrange's least-action principle.
Instead of going through all the excruciating pain of calculating the Lagrange integral, you could
approach the ball-tossing problem another way. Start out by drawing a straight line between you
and your friend and calculate the total Lagrange integral by summing the actions at all the points
times the increments of time it takes the ball to go between the points. Then move the points one at
a time (except the points where you and your friend are standing) up and down just a little and see
whether those movements increase or decrease the total action. If a movement decreases the action,
keep moving it, otherwise go the other way. If you keep doing this over and over, you eventually
reach a point where no little movements will reduce the action any further. That's the path.
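That nudging procedure is simple enough to try numerically. The sketch below uses made-up numbers (a 1-kilogram ball, a 2-second flight, a friend on a 4-meter roof, 21 path points) and relaxes a straight-line guess until no nudge lowers the discrete action; the relaxed path closes in on the expected parabola:

```python
def total_action(y, dt, m=1.0, g=9.8):
    """Discrete Lagrange action: sum of (kinetic - potential) x dt."""
    S = 0.0
    for i in range(len(y) - 1):
        v = (y[i + 1] - y[i]) / dt               # vertical speed on this step
        S += (0.5 * m * v * v - m * g * y[i]) * dt
    return S

def relax_path(n=21, T=2.0, y_roof=4.0):
    """Nudge each interior point up or down, keeping any move that
    lowers the action; shrink the nudge once no move helps.
    The endpoints (thrower and friend) stay fixed."""
    dt = T / (n - 1)
    y = [y_roof * i / (n - 1) for i in range(n)]  # straight-line first guess
    step = 1.0
    while step > 1e-4:
        improved = False
        for i in range(1, n - 1):
            for trial in (y[i] + step, y[i] - step):
                before = total_action(y, dt)
                old, y[i] = y[i], trial
                if total_action(y, dt) >= before:
                    y[i] = old                    # nudge didn't help; undo it
                else:
                    improved = True
        if not improved:
            step *= 0.5
    return y

path = relax_path()
# For these numbers the parabola gives y(T/2) = 6.9 m at mid-flight,
# well above the 2 m midpoint of the straight-line guess.
print(round(path[10], 2))
```

Nothing in the loop "knows" about parabolas or Newton's laws; the arc emerges purely from demanding the least total action.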
The part that impressed Feynman so much was the fact the ball seems to “know” the “best” overall
path to follow. This is a very holistic approach to the problem of ball throwing. Instead of gravity
tugging on the ball and changing its velocity ever so slightly, the ball just “knew” where to go.
Now this sounds absurd, but Feynman used this approach to explain the famous double-slit
experiment in quantum mechanics. In his interpretation, a particle doesn't blindly follow a path
through the slits. Instead, it first explores every possible path through the slits at the same time and it
then “chooses” the one path with the highest probability based on some fundamental principle.
Using a Lagrangian approach, let me propose my post-reductionist universal law: “Every change
maximizes the total degrees of freedom of the universe.”
The first element of this law involves change. Without change, the law wouldn't make any sense.
The second element is holistic. It implies that everything, from elementary particles, to baseballs,
to planets knows its place in the entire scheme of things and how to maximize the total degrees of
freedom of the universe.¹⁶ Not only that, everything will act accordingly. Remember the example
of the ice box in Appendix B? Well, as soon as the dry ice was placed in the box, heat energy
began flowing from the warm food to the cold dry ice. As this occurred, the food got colder and lost
some degrees of freedom; however, as a result, the total degrees of freedom increased. As CO2
molecules absorbed heat from the food, they evaporated and created many more degrees of freedom
for the CO2 molecules than were lost by the food molecules. In other words, the food molecules
slowed down, and gave up some of their degrees of freedom for the greater good of the universe.
How did they know how to do that? That's the great mystery.
Before going further, let's find out how many degrees of freedom typical things have. Entropy has
been a well-known quantity for well over 100 years, and it's been measured accurately. The entropy of one
kilogram of steam at a pressure of one atmosphere and a temperature of 100ºC has been measured at
7.35 kJ/K. Boltzmann's entropy formula¹⁷ is
S = k log W
W is the number of degrees of freedom, and k is Boltzmann's constant, which is a very small
number: 1.38 × 10⁻²⁶ kJ/K. Rearranging the formula,
W = 10^(S/k)
Plugging in the values for S and k, we see that W for a kilogram of steam at 100ºC is equal to 10
followed by over 10²⁶ zeros – not just 10²⁶, mind you – but 10 followed by 10²⁶ zeros!! This is just
an insanely large number.
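The arithmetic is quick to check, using the essay's base-10 reading of S = k log W (Boltzmann's log is really the natural log, but that changes the count of zeros only by a factor of ln 10 ≈ 2.3, not the conclusion):

```python
# Steam example: how many zeros follow the leading digit of W?
S = 7.35            # entropy of 1 kg of steam at 100 C, in kJ/K
k = 1.38e-26        # Boltzmann's constant, in kJ/K
digits = S / k      # log10(W) under the base-10 reading of S = k log W
print(f"W = 10^{digits:.3g}")   # -> W = 10^5.33e+26
# So W is a 1 followed by about 5 x 10^26 zeros -- indeed "over 10^26 zeros."
```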
Entropy isn't just a byproduct of time, it's really the driving force behind creation. It's easy to see
how creating more degrees of freedom makes gas expand, but most people don't think of that as
much of a “creative” process. If that were all entropy did, it would turn everything into random
nothingness – and entropy does have a very bad rap sheet in that regard. But there's much more to
16 “Degrees of freedom” sounds less sinister than “entropy.” However, maximizing one maximizes the other.
17 This formula is carved on Boltzmann's tombstone.

20.
it than that: entropy actually may be pulling everything together too.
Erik Verlinde has come up with an amazing theory that says that gravity is caused by entropy. I
can't really do justice to this theory, so I strongly recommend reading On the Origin of Gravity and
the Laws of Newton on his web site: http://staff.science.uva.nl/~erikv/
Verlinde's theory is based on the holographic principle that Leonard Susskind and Stephen Hawking
discovered through studying black holes. Every finite volume of space containing mass-energy has
a finite number of degrees of freedom (microstates). This number is determined by the Bekenstein
bound.¹⁸ Verlinde says that when mass-energy is distributed over the finite microstates, it produces
a temperature, a macroscopic property. Multiplying that temperature by the increase in the entropy
that occurs as two massive bodies come together equals work, and it's the same quantity of work gravity
does on those bodies according to Newton's law. Verlinde believes this is no mere coincidence.
Instead, some fundamental law of maximizing entropy is forcing the bodies to come together. The
force is manifested as Newton's gravitational force. He says,
“The holographic principle has not been easy to extract from the laws of Newton and Einstein, and
is deeply hidden within them. Conversely, starting from holography, we find that these well known
laws come out directly and unavoidably. By reversing the logic that led people from the laws of
gravity to holography, we will obtain a much sharper and even simpler picture of what gravity is.
For instance, it clarifies why gravity allows an action at a distance even when there is no mediating
force field.”
So we've come full circle from Newton's action at a distance, to field theories, and finally back
again to action at a distance. I don't think this is the entire story, however. The law stating, “Every
change maximizes the total degrees of freedom of the universe” may explain a lot of things,
including the forces found in current field theories.¹⁹ But even if entropy turns out to be the driving
force behind it all, I still don't think it's the only creative mechanism in the universe; alone it doesn't
account for all the order and structure found everywhere. We need another ingredient for order (and
chaos), and I believe that ingredient is a strong local nonlinearity that permeates everything.
One source of local nonlinearity could be quantum interactions. The quantum properties of things
are binary for the most part: spin up, spin down, positive charge, negative charge, etc. When there
are quantum interactions, information is exchanged – a quantum computation of sorts. Modulo-2
arithmetic is highly nonlinear and so are feedback processes. We saw earlier how cellular
automata can create structure and order, and this phenomenon may be occurring at the sub-atomic
level through quantum interactions. Maybe modulo-2 arithmetic and feedback take place during
quantum interactions. But this is getting very speculative, so I'll stop right here.
This is an entirely new way of thinking about reality and it needs a lot more work to flesh it out as a
good scientific theory. Unfortunately, there aren't enough minds working on it right now. Breaking
the prevailing deterministic-reductionist paradigm will be almost as tough as it was for 16th-century
astronomers in overturning Ptolemaic gobbledygook. But at least the 21st-century scientists only
have to worry about losing their research grants, and not being burned at the stake for heresy.
One thing is abundantly clear, at least to me. Reductionism is dead, or at least its days are
numbered. If and when the Theory of Everything is found, I think scientists will be astounded by
the utter simplicity of the universal law that governs it, and by the amazing complexity that emerges
from such a simple law.
18 The Bekenstein bound gives the maximum degrees of freedom expressed as entropy: S ≤ 2πkRE / ħc, where R is
the radius of a sphere enclosing the volume and E is the mass-energy (expressed as energy) inside the volume. The
constants k, ħ, and c are Boltzmann's constant, the reduced Planck constant, and the speed of light.
19 Obviously, it should produce results that are consistent with current theories; otherwise, it wouldn't be a very good
law. But it should also explain those results in a more fundamental way than the current theories do.

21.
Appendix D – Introduction to Radical Post-Reductionism: Wheelerism
Quantum mechanics clashed with Newtonian physics and relativity right from the beginning. Even
some of the scientists who ushered in quantum theory, such as Erwin Schrödinger and especially
Albert Einstein, began to have misgivings when they realized the full ramifications of what they had
wrought. On the opposite side, Niels Bohr and his Copenhagen crew weren't particularly bothered
by the fact that something could be in multiple places or in multiple states at the same time.
In 1935 Erwin Schrödinger proposed his famous cat experiment, where a live cat is placed in a
sealed box along with a Geiger counter that triggers a release of deadly cyanide gas.20
A sample of
radioactive material emitting beta particles is placed near the Geiger counter. The radioactive atoms
have a known half-life, and based on their proximity to the Geiger counter, there is exactly a 50%
probability that the Geiger counter will be activated within ten minutes. Everything is sealed up
nice and tight so nothing, not even the sounds of a dying cat in agony, could escape the box, and a
10-minute timer outside the box is started as soon as the box is sealed. After ten
minutes the box is opened to see whether the cat is alive or dead. The question is: during those
fateful ten minutes, what was the state of the cat? Was it dead, alive, both, or neither?
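The "exactly 50% within ten minutes" setup is just half-life arithmetic: if the source triggers the counter with an effective half-life T½, the probability of a trigger within time t is p = 1 − 2^(−t/T½). A tiny Python sketch (my illustration):

```python
def trigger_probability(t_minutes, half_life_minutes):
    """Probability that at least one decay triggers the counter within t."""
    return 1 - 2 ** (-t_minutes / half_life_minutes)

print(trigger_probability(10, 10))   # 0.5  -- the 50/50 cat
print(trigger_probability(20, 10))   # 0.75 -- leave it longer, worse odds
```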
Now at first, this sounds like a really dumb question because how could a cat be both dead and alive
or neither? Most people would say that if the cat is alive after opening the box, it was alive the
whole time, and if it is dead, then it started out alive and became dead at some point before the box
was opened. But that's not how quantum physics works. You see, the strict Copenhagen
interpretation is that the Geiger counter, the cyanide, and the cat are all sealed in the box where no
information can get out, so they're all included in the same quantum wave function that keeps the
radioactive atoms in a superposed state of decay and non-decay. A measurement must be made
to see if any of them decayed or not. In this interpretation, the cat is both alive and dead until the
box is opened and an observation (measurement) is taken. Then the entire wave function
“collapses” and the cat is either still alive or it becomes dead at that moment.21
The real question is whether the cat itself counts as an observer. Now cats are pretty smart animals,
but presumably they're not as smart as humans.22
If you accept a cat as being a valid observer, the
wave function would collapse before the box is opened. But in that case, which kind of animal
wouldn't count as an observer? A rabbit? A snake? A fish? A slug? A bacterium? This highlights
the problem known in quantum-mechanical circles as “the measurement problem.”
John Wheeler was among a new breed of thinkers who solved the measurement problem in a pretty
radical way: he stated that history doesn't exist until we create it. We make the whole thing up
when we observe things. It's as if dinosaurs didn't really exist until someone dug up their fossils.23
I call this philosophy “Wheelerism.” To prove his point, Wheeler came up with all sorts of
interesting thought experiments, including one based on the famous double-slit experiment, called
“Wheeler's delayed-choice experiment.” A form of the delayed-choice experiment was actually
carried out in a lab, and it did seem to validate the notion that the present influences the past.24
Of course, Wheeler has many critics in reductionist circles who argue that you can't really show that
history doesn't exist by doing a lab experiment – the time delays are too short. In response to that
criticism, he imagined a much bigger experiment called “Wheeler's astronomical experiment.” In
20 There is absolutely no evidence that Schrödinger actually did this experiment on a live cat. If he had, these
questions might have been answered by now.
21 An old-fashioned reductionist wouldn't buy any of this, but that way of thinking is passé, as we have seen.
22 This is debatable. Most of the cat owners I know are very well trained, and cats must have a superior intelligence in
order to train humans so successfully.
23 Some creationists deny the existence of dinosaurs even after dinosaur fossils are dug up. However, they are not to
be mistaken as Wheelerites.
24 This apparent paradox is explained fully in Reality Riddle.

22.
this experiment, a very distant star emits light that travels toward the Earth. A very massive object,
such as a black hole, sits between the star and the Earth. This object forms a gravitational lens,
allowing light from the star to take two completely different paths to the Earth – it's like the
double-slit experiment on steroids. Now depending on how you decide to detect the light from that
star, you can either get an interference pattern from light going in both paths around the lens at the
same time (making it a wave), or you can use a telescope to see which path the light took (making it
a particle). Either way, you're creating your own version of history because the light passed the lens
billions of years earlier. Although Wheeler's astronomical setup does give you plenty of time for
making delayed choices (billions of years), the main problem with it is that you need to work with
one photon at a time for the experiment to prove anything. I think it would be pretty hard to get a
star to emit one photon at a time, and that would also make the star awfully dim, so I don't see much
chance of anyone actually carrying out this experiment.
There's one aspect of Wheelerism I really do like, although I wouldn't quite carry it to the extremes
some do. I'm referring to the “it from bit” conjecture. As I stated often in Reality Riddle, it seems
plausible, and even likely, that everything we observe in the universe is essentially just information
– a dataverse. But that's not all. According to Wheeler, reality has two distinct parts that are
modeled after computer technology: hardware (the “it” part) and software (the “bit” part). The
software component consists of observers like us, and that is the “real” part of real-ity. The
hardware component (the “-ity” part) is just the nuts and bolts, like electrons, quarks, gluons,
gravity, etc. The software controls the hardware; without software, the computer is just a dead
machine; not real. Now here's the truly weird part of Wheelerism: the software is constantly
creating history by modifying and improving the hardware. It's like your computer suddenly
decides to upgrade its memory from 8 gigabytes to 16 gigabytes, so it goes online, orders a couple
of sticks of RAM, and installs them all by itself. Reality is like HAL in 2001: A Space Odyssey.
The most extreme form of Wheelerism says that quarks didn't always exist, despite what the
cosmologists say about the early universe and its evolution. It says quarks were invented by
Murray Gell-Mann and George Zweig in 1964, and they were “discovered” right on cue in 1968.
The same thing is true about the neutron, the electron, the atom, and so forth. Those particles didn't
exist either until the “software” (the physicists) decided to upgrade the “hardware” (the universe) by
inventing them and creating history. So it should have surprised no one when the Higgs particle
was discovered in 2013, because Peter Higgs had already created it in 1964, although a machine big
enough to make Higgs particles had to be built first.
Now this is getting way too metaphysical for a “scientific” essay, so I'm going to have to dial it back
a little. However, there is a grain of truth that points back to Schrödinger's cat experiment and the
measurement problem. The quandary was how to separate the observer from the observed in
experiments involving quantum particles. Borrowing some of Wheeler's ideas, history doesn't exist
until some kind of “record” of it is made. But I don't think it takes an intelligent observer, such as a
human or even a cat to record it. A record could consist of a track of a positron in a cloud chamber,
or anything else that leaves a physical impression of some sort. By defining things that way,
quantum objects like electrons, photons, etc., have no histories because they carry no records of any
kind. All electrons are exactly alike, and there's no way of telling where they've been or what
they've done in the past. They are defined only by their wave functions. This could mean that all
quantum particles are connected through a common wave function, and the universe is holistic and
very interconnected at its core. I think it would have to be holistic in order to carry out a universal
law requiring that all changes must maximize the total entropy of the universe.
Cats, on the other hand, are unique, non-quantum creatures. They have memories and personalities.
They have kittens, get old, and sometimes they die from cyanide poisoning. In short, they do have
histories and wave functions don't apply to them. Parts of Wheelerism aren't very plausible and are
even pretty disturbing, but thinking about it did help me resolve the measurement problem.

23.
Appendix E – Order, Chaos, and the Emergence of Consciousness
I'm really going out on a limb with this appendix because as an engineer, I have practically no
professional experience whatsoever in the fields of neurology, psychology, or psychiatry. But
that won't stop me from talking about those things, because I have opinions on just about everything
and I'm not shy about sharing my opinions with anyone who will listen.
To recapitulate what I've said so far: science will sooner or later undergo a paradigm shift away
from orthodox reductionism. Field theories will be replaced by a more holistic and integrated view
of the universe because scientists must come to realize that while reductionism explains many
things, it doesn't explain everything. In fact, it may not even explain most things. The new
paradigm will be based on a single universal law with all our existing physical laws being seen as
special cases. I suggested earlier that such a law might be: “Every change always maximizes the
total number of degrees of freedom.” We can see that this is a holistic law because it encompasses
everything all at once. The same law that causes gas molecules to expand also causes massive
objects to collapse toward each other as a primitive form of organization called gravitation. There
are countless other ways the universal law operates that are just waiting to be discovered.
Below the surface a powerful organizing principle is at work. It operates when systems having
degrees of freedom are not in equilibrium, and when interactions are nonlinear. This organizing
principle causes chaos and order to spring out of nowhere from what might be otherwise considered
“dead” material. Even as the law of entropy relentlessly drives the universe toward randomness and
“heat death,” this organizing principle works to create order through chaos. This inevitably
generates increasing structure and complexity, ultimately leading to life and consciousness.
One thing is certain: reductionism and molecular biology have utterly failed to provide a coherent
explanation of how life functions after it is created, let alone offer any rational theory of how life
emerged from non-living matter in the first place. Without recognizing any universal organizing
principle, we would be forced to abandon science altogether and invoke special creation by an
intelligent and purposeful Creator as the only plausible explanation for life. But this is a false
dichotomy – we don't just have a choice between reductionism and creationism; I believe science
will ultimately discover the organizing principle and show that the emergence of life is a natural and
inevitable outcome of change.
The line dividing life from non-life is sharp. Even unconscious life, at the level of a bacterium, is
amazing and purposeful. A bacterium does live a purposeful life, although its “purpose” may be
limited to consuming food, eliminating waste, and reproducing copies of itself.25
Simple life is
amazing enough, but the emergence of consciousness from living matter is almost beyond belief.
The dividing line between consciousness and unconsciousness isn't quite as sharp as the one that
divides living from non-living. We'll see there are different levels of consciousness, with somewhat
fuzzy lines between them. Take an earthworm for example. The nervous system of an earthworm
is rather primitive, but it does actually have one, around 300 neurons in all. There's no brain or
eyes, but an earthworm can respond to outside stimuli. It likes to dig tunnels, and it seeks out other
earthworms to mate with, so it evidently knows the difference between a potential mate and a twig
from a tree.26
However, its primitive consciousness is just barely “aware” of its surroundings.
Next up on the ladder of consciousness are the leeches, snails, and slugs. These animals have
between 10,000 and 20,000 neurons, which is a couple of orders of magnitude more than an
earthworm has, but there doesn't seem to be much improvement in overall intelligence. These
25 Actually, some human lives seem to have similarly limited purposes.
26 Earthworms have both male and female reproductive organs, so they could theoretically mate with themselves. I
don't know if they do that, however.

24.
animals don't have spinal cords or brains; just ganglia that are spread throughout their bodies.
When we get to the fruit fly, there is a quantum jump in brain power – yes, it actually has a brain.
With about 100,000 neurons and about 10⁷ synapses (connections between them), a fruit fly
registers brainwave activity while in flight. Now that's quite an improvement. Ants are interesting
creatures. They only have about 2½ times the number of neurons as fruit flies, but they leverage
their tiny brains with all other ants in their colonies, which can number as many as 40,000. They
act collectively, and can do things together that no ant could do alone; in fact, collectively, they
have ten billion neurons, more than most mammals.27
Honeybees act collectively too, but a bee can
go off alone without acting stupid. They have almost a million neurons with 10⁹ synapses. We're
about to leave the class of insects, but before we do, guess which insect has the most neurons.28
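The ant-colony figure checks out with simple arithmetic – about 2½ times the fruit fly's 100,000 neurons per ant, times a colony of 40,000:

```python
neurons_per_ant = int(2.5 * 100_000)   # ~2.5x the fruit fly's count
colony_size = 40_000
colony_neurons = neurons_per_ant * colony_size
print(colony_neurons)   # 10000000000 -- the "ten billion neurons" figure
```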
I'm not going to go up the entire animal kingdom, but we eventually end up with mammals at the
top of the heap. You mostly need to count the neurons in a mammal's brain instead of the total
number of neurons in its body, because most of the action takes place inside the brain. The
brain has neurons and synapses that form a very highly non-linear network. So not only is there a
fundamental non-linear biological process, which creates order and chaos that allows the brain to
emerge; but the brain itself is a non-linear process, which creates order and chaos that allows
consciousness and intelligence to emerge. Here's what physicist James Crutchfield says,
“Innate creativity may have an underlying chaotic process that selectively amplifies small
fluctuations and molds them into macroscopic coherent mental states that are experienced as
thoughts. In some cases the thoughts may be decisions, or what are perceived to be the exercise of
will. In this light, chaos provides a mechanism that allows for free will within a world governed by
deterministic laws.”
Wow, that's quite a statement! I think it kind of capsulizes much of what this essay is about. Again,
there is a universal theme: entropy is the driving force behind everything, while an undercurrent of
order and chaos comes from non-linearity. From order and chaos, complexity emerges in
stages. At the bottom level is dead matter organizing into stars, galaxies, planets, etc. through the
primitive push/pull balancing act of entropy. Biological activity emerges as a higher level that uses
new chaotic processes to organize complex body structures that eventually lead to nervous systems
and a brain. The brain has its own chaotic process that organizes consciousness, free will, and
intelligence. The same organizing principle operates on different levels, each level involving
different chaotic processes that allow the level to emerge, and so on.
Once we arrive at consciousness, it also splits into higher levels of complexity. In the mammal
class, there is simple consciousness, self consciousness, and cosmic consciousness. The three levels
were described by Richard Maurice Bucke, a 19th-century psychiatrist from Ontario, Canada. Most
mammals experience simple consciousness. These mammals are fully aware of their surroundings,
have memories, and may have a full range of emotions like love, fear, anger, joy, sorrow, and even
remorse and shame. Mammals with simple consciousness can plan ahead and even use reasoning
and logic to solve problems. This isn't conjecture; it's a proven fact.
Gable is a border collie who lives at the University of Lincoln in the UK, where behavioral
psychologists study him. Gable has managed to associate 54 human words as names for 54
different toys. When his trainer tells Gable to fetch a particular toy from a pile in another room, he
will go to that room, pull out the toy from the pile, and return it to his trainer. That's pretty good,
but here's the amazing part. Once his trainer placed an unfamiliar toy in the pile and gave it a name
that Gable was never taught. The trainer told Gable to fetch that toy using that name, but Gable was
confused and didn't know what to do. He was instructed to fetch that toy by name several more
27 When an individual ant is separated from her colony, she becomes pretty stupid. At least that seems to be the case.
28 That honor goes to the cockroach with 1,000,000 neurons.

25.
times. Finally, Gable went into the room, found the new toy, and brought it back to his trainer.
Gable could reason that the toy his trainer wanted was not one of the 54 toys that he knew by name.
So he searched for a toy that was not one of those 54 toys he knew until he found it.
Looking only at the number of neurons in the brain gives misleading information about intelligence
because the overall size of the animal has to be factored in. Very large animals like whales and
elephants need more neurons to just move their huge bodies around. But it's interesting to note that
cats have almost twice as many neurons as dogs, 300 million versus 160 million, while both animals
are of the same order in size. Chimpanzees (5-6 billion neurons) are considered Number 2 in the
intelligence hierarchy, and of course we humans (19-23 billion neurons) are Number 1.
As smart as cats and dogs are, they still only possess simple consciousness. Self consciousness is
the next level up, and humans (and maybe chimpanzees) have it. A self-conscious being not only
thinks, but it knows it's thinking. This brings about a whole new level of complexity. One way to
tell if an animal is self conscious is by placing it in front of a mirror. If it recognizes the image in
the mirror as itself, then it probably has self consciousness. We humans usually don't reach that
stage until we're almost a year old. Put a mark of lipstick on a child's forehead and place her in
front of a mirror. If she's attained the level of self consciousness, she will immediately try to rub off
the mark on her forehead; a baby doesn't associate the image of the baby's forehead in the mirror
with her own forehead until she's reached that level. Adult chimpanzees seem to recognize
themselves in mirrors. Dogs don't; but what dogs lack in self consciousness, they more than make
up for by learning to adapt so well to the peculiar ways of human beings.29
Bucke classified self consciousness as an emergent phenomenon, and he said it only emerged in the
human race quite recently. Looking at this from a post-reductionist perspective, we see that it
would be the inevitable result of the self organizing principle; it happens when simple
consciousness becomes sufficiently complex and chaotic. Today, virtually every adult human being
is in a state of self consciousness. This enables us to think abstractly on several levels at once, as in
the statement, “I know that I know that I know.” We can also think symbolically at a very high
level, and we can manipulate abstract mathematical symbols to solve problems.
Self consciousness is also a nonlinear and chaotic process, and when it becomes sufficiently
complex and chaotic, it will inevitably organize into what Bucke called cosmic consciousness.
Bucke's description of cosmic consciousness seems identical to what Buddhists call satori, a calm
state of pure knowing, without any fear, anger, or self-centeredness. In that state, a person is
consciously aware of the connectivity and unity that underlies the universe. It seems that people
who are in satori directly experience the laws of the universe operating within their own minds.
Relatively few humans have attained that level of consciousness, and still fewer have sustained it
for any length of time.30
However, Bucke believed that cosmic consciousness will eventually
become the normal state of consciousness of the human race as it continues to evolve.
Pierre Teilhard de Chardin was a French philosopher, paleontologist, and geologist. He was a
29 While chimps and dogs are comparable size, chimps have over ten times as many neurons, so they should be way
smarter than dogs, right? But consider this: when a human points at something, dogs instinctively know to look in
the direction the human is pointing, whereas chimpanzees don't have a clue about what the human is doing. Long
ago, dogs learned how to get along with humans and they almost became like us. They do what we tell them to do
(sort of), and they seem to go out of their way to please us. Because of this, dogs get to live in our houses, eat our
food, play with our children, go on trips with us, and sometimes they're even allowed to lie on our beds. On the
other hand, adult chimps are vicious, hateful creatures that will attack and kill humans if they are given the chance.
Because of this, chimps get to live alone in steel cages. Now I ask: which animal is really smarter?
30 Bucke himself purportedly experienced a fleeting moment of cosmic consciousness in 1872. Although the
experience was temporary, it had a profound effect on Bucke that permanently changed him.

26.
staunch believer in evolution, both of the human race and of the universe as a whole.31
His ideas
were clearly post-reductionist. He also believed that the evolution of the universe is being
orchestrated by the conscious creatures who inhabit it, with everything and everyone evolving
toward an end state he calls the Omega Point. I can see clear parallels between Teilhard's views and
John Wheeler's “it from bit” conjecture. Both Teilhard and Wheeler believe in the primary status of
consciousness (“bit”) and the secondary status of the physical universe (“it”). Both held the belief
that the “bit” controls and determines the “it.” I think the more likely scenario is that both the “it”
and the “bit” emerge together from one universal law and its corollary organizing principle through
chaos. The law itself has primary status; the universe is secondary. Just don't ask me how or why
the universal law and the organizing principle originated, because I have no idea.
Created matter organizes into more complex structures that eventually become chaotic. Chaotic
structures lead to order at a higher level, which may even add new processes of organization as the
universe marches on with increasing degrees of freedom. Those new structures may also open
additional pathways that maximize the total degrees of freedom.32
This process goes on and on until
chaos produces a whole new level of organization – life. At the level of living things, the role of
ordinary physical laws is significantly diminished; life is governed by a different set of laws. It is
here that reductionism fails completely because quantum mechanics and Newtonian physics simply
cannot account for most processes that occur in living forms; microbiology doesn't provide a
complete picture either. Darwin's theory can explain parts of the evolutionary process, but the
power of natural selection is somewhat limited, and its effects on living forms are almost trivial.
Eventually, life evolves complicated and highly nonlinear neural networks and brains. Those
provide a whole new stage for chaos and order to play out their roles. There is another exponential
increase in complexity and chaos, then order produces consciousness. First there is only primitive
consciousness, on the level of an insect, but that is followed by higher levels, with each level setting
the stage for the next. The physical brain continues to provide the foundation for the edifice of
consciousness, like the foundation of a building. There may also be entirely new organizing
processes operating on consciousness itself that transcend and bypass the physical brain altogether.
Looking at the entire picture, there seems to be a universal hierarchy at work: Elementary particles
operate on the lowest level, obeying the laws of quantum physics and nothing else. For them, time
is symmetrical, they have no individual identities, and the past does not exist. Quantum particles
organize into macroscopic objects that have identities and histories. Here, time is not symmetrical,
and the past emerges. Macroscopic objects organize into larger and more complex physical
structures that obey Newtonian and relativistic laws (approximately). Structural complexity
increases until chaos produces a new order called life, which obeys an emergent set of laws that
science presently does not understand. There is a hierarchy among living things, some having
evolved into organisms of extreme complexity, where chaos produces ordered nervous systems,
increasing in size and complexity until primitive consciousness finally emerges. Chaos organizes
primitive consciousness into higher states of order, and so on, ad infinitum. It's hard for me to
visualize where this process will lead, but I'm positive it will be a fantastic journey.
31 He also happened to be a Jesuit priest. Needless to say, his unorthodox views on creation and evolution didn't
exactly endear him to the Vatican.
32 Proponents of the big bang theory are certain that there was an event called inflation, when the universe expanded
exponentially soon after it came into being. Physicists proposed several possible mechanisms for inflation, but
there seems to be no rationale for why inflation started and stopped. Here's my suggestion: The universe may have
originated in a state of near-zero entropy. The universal law requires that change maximize the total degrees of
freedom of the universe. At that time, inflation was the only available mechanism for doing that, so it did. At some
point during inflation, the universe entered a different state where inflation could no longer maximize the total
degrees of freedom, so it stopped. A different form of expansion accomplished the task of maximizing the total
degrees of freedom, which is ongoing today. The universe may attain states where different processes of change
will emerge, which will fulfill the universal law more effectively than the present process, and so forth.

27.
Appendix F – We're Living on the Hairy Edge
As I wrote the essay Is Science Solving the Reality Riddle?, I kept getting this nagging feeling that
the answers to some of the questions concerning reality might have something to do with fractals
and Mandelbrot sets. At that time, I didn't really understand the full implications of fractals, and I
still don't; but I've done a bit of research on fractals since then and came up with some amazing
connections between fractals, three dimensions, order and chaos.
First, I looked at a very simple fractal known as the Koch snowflake, named after Swedish
mathematician Helge von Koch. To make a Koch snowflake, you start out with a simple equilateral
triangle with each side having a length, s. You add a smaller equilateral triangle to the middle of
each of the three sides to make a star of David with 12 sides. Then you add 12 more equilateral
triangles, one to each of those sides, and so on. This drawing shows the evolution of the snowflake:
The nth evolution is denoted by the symbol S(n). Starting out with the triangle, T, you get the
following sequence of figures: T → S(1) → S(2) → S(3) → S(4) → S(5). Now you can carry this
on forever if you want, and the resulting shape will be a fractal. This snowflake has very unusual
properties. The length of the perimeter of the snowflake is given by the very simple formula
P = 3s(4/3)ⁿ. The funny thing is that as n → ∞ so does P. That's right, the perimeter, P, of the
fractal becomes infinite. And I don't just mean that it has an infinite number of points – all lines
have an infinite number of points – I mean P has an infinite length!
One ramification of this is that you can't really define a Koch snowflake by a formula, like the
formula y² = r² – x² for a circle, or any other kind of formula for that matter. You can only define it
by describing the process that generates it. Now, although the Koch snowflake has a perimeter of
infinite length, it sure looks like it has an inside and an outside. And in fact it does. So what's the
area inside the snowflake? I'm not going into the whole derivation, because you can look that up,
but the important thing is that the area is finite: A = 2s²√3/5. So, the perimeter of a fractal
encloses a finite area even though the perimeter itself has infinite length. Very strange.
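Both formulas are easy to check numerically. Here's a short Python sketch; the closed-form area after n steps is the standard one, which I've filled in so you can watch it converge to 2s²√3/5:

```python
import math

def koch_perimeter(s, n):
    """Perimeter after n evolutions: P = 3*s*(4/3)**n, which diverges."""
    return 3 * s * (4 / 3) ** n

def koch_area(s, n):
    """Area after n evolutions; converges to (2*sqrt(3)/5)*s**2."""
    return (math.sqrt(3) / 4) * s**2 * (1 + 0.6 * (1 - (4 / 9) ** n))

s = 1.0
for n in (0, 5, 10, 50):
    print(f"n={n:2d}  P={koch_perimeter(s, n):14.3f}  A={koch_area(s, n):.6f}")
print("limit of the area:", 2 * math.sqrt(3) / 5 * s**2)
```

The perimeter blows up without bound while the area quietly settles at about 0.6928 for s = 1.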
Now fractals can be generated in other ways too, the Koch snowflake being a very simple evolution.
There's another class of fractals that are generated from a process that creates Mandelbrot sets,
named after Benoit Mandelbrot. You can represent any point in 2-dimensional space as a complex
number: z = x + iy.33 You can generate a Mandelbrot set in two dimensions as follows. Using the
formula z´ = z² + c, pick any point you want, c = x + iy, and compute z´ from the starting point z = 0
33 Up until now I've denoted the imaginary number √-1 by the letter j. Now I'm going to change that to the letter i, for
reasons that will become clear shortly.

28.
using the rules of complex algebra. Next, feed z´ back into the formula as z, and compute a new z´
and keep c the same. Keep doing that over and over. Two things might happen: a) the values of z´
settle down to very predictable numbers that repeat, or b) the values of z´ chaotically zoom off into
the stratosphere. If a) occurs, then c is part of the Mandelbrot set, and if b) occurs, it is not.
What you'll find is this: there's a boundary that separates the numbers in the Mandelbrot set (order),
from the numbers not in the Mandelbrot set (chaos). This boundary is a fractal perimeter, having
similar properties to the perimeter of a Koch snowflake. The perimeter encloses a finite amount of
area inside it, but the perimeter itself will have an infinite length. You may ask whether this type of
thing could be extended into three dimensions. The answer is yes – sort of.
There are no mathematical objects having three dimensions that follow the kinds of algebraic rules
that complex numbers follow; so although you can represent points in 3-dimensional space as sets
(x, y, z), there are no consistent algebraic rules for these sets. Luckily, through some mathematical
trickery, you can still generate a fractal surface in three-dimensional space called a Mandelbulb. An
example of one of these strange objects is shown in a figure near the front of this essay. The
colored surface of this Mandelbulb is all “fractally” and uneven. Points in space “inside” the
surface are part of a Mandelbrot set (order). Points not “inside” the surface are not part of that set
(chaos). The surface itself is thus a boundary between order and chaos.
Since the Mandelbulb is a surface that encloses a finite volume, it must have two dimensions (at
least nominally) and so it must also have an area. What's the area equal to? Infinity. Just like the
perimeter of the Koch snowflake is infinite, the surface of a Mandelbulb is infinite. Very strange.
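For the curious, here is a hedged sketch of the iteration most commonly used to generate Mandelbulbs: the power-n formula usually attributed to Daniel White and Paul Nylander, which converts a point to spherical coordinates, raises the radius to the n-th power, and multiplies the angles by n. The exponent n = 8, the iteration cap, and the bailout radius are illustrative choices of mine, not anything specified in this essay:

```python
import math

def in_mandelbulb(c, n=8, max_iter=20, bailout=2.0):
    """Power-n "Mandelbulb" iteration, a 3-D stand-in for z' = z**2 + c.

    Points whose orbits stay bounded are "inside" the bulb (order);
    points whose orbits escape are outside it (chaos).
    """
    x, y, z = 0.0, 0.0, 0.0
    cx, cy, cz = c
    for _ in range(max_iter):
        r = math.sqrt(x*x + y*y + z*z)
        if r > bailout:
            return False        # orbit escaped: point is outside the bulb
        # Spherical coordinates of the current point
        theta = math.atan2(math.sqrt(x*x + y*y), z)
        phi = math.atan2(y, x)
        # Raise the radius to the n-th power, multiply angles by n, add c
        rn = r ** n
        x = rn * math.sin(n*theta) * math.cos(n*phi) + cx
        y = rn * math.sin(n*theta) * math.sin(n*phi) + cy
        z = rn * math.cos(n*theta) + cz
    return True                 # orbit stayed bounded: point is inside the bulb

print(in_mandelbulb((0.0, 0.0, 0.0)))   # prints True  (origin stays put)
print(in_mandelbulb((1.5, 1.5, 1.5)))   # prints False (far points escape quickly)
```

Sampling this membership test over a 3-D grid and keeping only the points where it flips from True to False traces out the fractal surface described above.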
Now everyone who has studied scientific literature probably knows about a place called “Flatland”
where hypothetical 2-dimensional creatures live. Flatland is ordinarily thought of as a traditional
2-dimensional surface, like a flat plane or the curved surface of a sphere. Well, what would happen
if we were 2-dimensional creatures living on the surface of a Mandelbulb? How would we
characterize the area of our home? What kind of features would we see there? Now I think some of
you might just see where this is all going, and here's where things start to get a little freaky.
It turns out that there is a class of mathematical objects known as quaternions. They were
discovered by the mathematician William Rowan Hamilton. These objects extend the idea of
complex numbers into four dimensions.³⁴
Quaternions do follow a set of consistent algebraic rules,
although they're strange rules. For one, multiplication isn't commutative. In ordinary algebra, and
even complex algebra, the multiplication operation is commutative: A × B = B × A (whether A and
B are real or complex). This isn't the case in Hamiltonian algebra. Here, the order of things is
important, like in matrix algebra. Here is Hamilton's table for the rules of multiplication:
 ×  |  1    i    j    k
----+------------------
 1  |  1    i    j    k
 i  |  i   -1    k   -j
 j  |  j   -k   -1    i
 k  |  k    j   -i   -1
Hamilton saw the whole shebang in a flash of insight; he summarized it by: i² = j² = k² = ijk = −1.
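Hamilton's multiplication table can be turned directly into code. This little sketch (quaternions stored as plain (w, x, y, z) tuples, a representation chosen here purely for illustration) confirms both the non-commutativity and the famous i² = j² = k² = ijk = −1 summary:

```python
def qmul(a, b):
    """Multiply two quaternions (w, x, y, z) = w + x*i + y*j + z*k
    using Hamilton's rules (i*j = k, j*i = -k, and so on)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,   # real part
        w1*x2 + x1*w2 + y1*z2 - z1*y2,   # i part
        w1*y2 - x1*z2 + y1*w2 + z1*x2,   # j part
        w1*z2 + x1*y2 - y1*x2 + z1*w2,   # k part
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

print(qmul(i, j))            # prints (0, 0, 0, 1)   -> i*j = k
print(qmul(j, i))            # prints (0, 0, 0, -1)  -> j*i = -k: order matters!
print(qmul(i, i))            # prints (-1, 0, 0, 0)  -> i**2 = -1
print(qmul(qmul(i, j), k))   # prints (-1, 0, 0, 0)  -> ijk = -1
```

Swapping the operands flips the sign of the result for the imaginary units, exactly as the table says; that loss of commutativity is the price paid for extending complex numbers to four dimensions.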
Let's put all of this into practice. Suppose you have a point, z, in 4-dimensional space. This
³⁴ Note that mathematics jumps from 2-dimensional complex numbers into 4-dimensional quaternions and completely
skips over the third dimension. This may be very significant. Or maybe not.