-by Moisés José Sametband-

INTRODUCTION
Three decades have passed since a new line of scientific research called
"chaos theory" was begun.
Unlike what happens in other fields of physics, such as quantum mechanics, research on the fundamental particles that make up matter, or theories about the origin of the Universe, this "science of chaos" is being applied to many events directly linked to everyday human experience. It is thus used to explain phenomena as dissimilar as arrhythmias in the functioning of the heart, fluctuations of the stock market, the appearance of life upon the Earth, and the behavior of dynamic physical systems with a large number of components, such as the atmosphere or a liquid in a state of turbulence.
The physicist Joseph Ford, in an article in the book, The New Physics,
proclaimed the new chaos science to be "the beginning of the third revolution
in the physics of the present century," the other two previous ones being the
theory of relativity and quantum theory.
It still seems premature, nevertheless, to give it the designation of third revolution in physics, since, unlike the other two cases, it is not correct to speak of a "chaos theory": such a theory does not yet exist. What exists is a new and very promising way of applying the known laws of physics, with the fundamental assistance of the computer, to very diverse phenomena. These include, in addition to the traditional phenomena of physics, those of the biological and social sciences, whenever they can be approached as complex dynamic systems.
In physics, the topic of disordered sets and their specific properties, of great scientific and technological interest, is being worked on with growing intensity.
But one must avoid the confusions that can be generated around this
topic, particularly by the expectations which the study of chaos can awaken in
those who work in other fields of knowledge.
Actually a problem of interpretation appears here: among those who are not familiar with the physical sciences or mathematics--and due in part to the declarations of certain scientists--a sort of mythology of Chaos or Disorder has taken hold, which assigns a transcendent significance to the accidents (real or apparent) of nature, and which proclaims the definitive demise of determinism. Yet everything indicates that for chaotic systems determinism will continue to be valid, even though a probabilistic description of their behavior is required.
In a manner similar to what occurred with the catastrophe theory developed by René Thom--who made a first attempt to study certain complex phenomena mathematically--there are those who hope the study of chaos will help explain the mysteries of the great social transformations, or of the relations between neuronal networks and psychology, and it has unleashed formidable speculations about the meaning of time and of disorder in the Universe.
Of course the extension of discoveries achieved in one field of
knowledge to other areas is very beneficial, yet when they deal, for example,
with human behavior, individual or collective, which has a complexity
incomparably greater than that of physical systems, that extension should be
done with much prudence, and in general can only have the character of an
analogy.
Thus, it may be fruitful to study the psychology of a family group by applying certain guidelines that have analogies with those of dynamic physical systems, but one could hardly work on such a theme by applying the mathematics of chaos and seeking fractal dimensions and strange attractors.
One should avoid using a language that seems to attribute magic powers to
"chaos": in scientific texts, this concept has a precise meaning, which leads
us to complex phenomena, particularly difficult to formulate mathematically,
yet which do not in principle manifest any connection with the primordial
Chaos conceived by the ancient mythologies.
It is important to make clear that the fundamental laws of physics continue to rule, and that the fact that we utilize, as we shall see, statistical characteristics to predict behavior is not an "epistemological drama," as some have suggested.
The strong emotional charge carried by the word "chaos" is partly the cause of the above-mentioned confusions, and the fact that the name of this new discipline is still not definitively established has contributed to them.
Thirty years ago, when it began to develop, one spoke of the "science of chaos," which soon came to be called "deterministic chaos," to distinguish it from the chaos produced by pure chance. Now support is growing for the word "complexity," which designates the study of those dynamic systems that lie at some intermediate point between the order in which nothing changes, as in crystalline structures, and the state of total disorder or chaos, such as an ideal gas in thermodynamic equilibrium.
The phenomena of "deterministic chaos" or of "complexity" refer to many
systems that exist in nature whose behavior keeps changing with the passage of
time (dynamic systems). Such phenomena appear when systems become extremely
sensitive to their initial conditions of position, velocity, et cetera, such
that very small alterations in their causes are capable of provoking great
differences in the effects. As a consequence of this it is not possible to
exactly predict how such systems will behave beyond a certain time, because
they seem to follow no law, as if they were ruled by chance.
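The extreme sensitivity described above can be illustrated with a minimal numerical sketch. The logistic map used here is a standard textbook example of deterministic chaos, not one discussed in this text, and the values chosen are arbitrary:

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# a standard example of deterministic chaos (illustrative values only).

def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.300000)
b = logistic_orbit(0.300001)  # initial condition altered by one millionth

# Each trajectory follows a perfectly deterministic rule, yet after a few
# dozen steps the two no longer resemble each other at all.
for step in (1, 10, 50):
    print(step, abs(a[step] - b[step]))
```

The tiny initial difference is roughly doubled at every step, so beyond a certain time the system behaves, for all practical purposes of prediction, as if it were ruled by chance.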
But the researchers have discovered that dynamic systems in these
conditions show signs of collective regularity although it may not be possible
to distinguish the individual behavior of each one of their components.
It has been determined that there are certain common characteristics that permit including in the study of complex processes not only inert physical and chemical systems but also living organisms, all approached through common mathematical tools. The fundamental tool is, of course, the computer, without
which it would have been impossible to develop this new focus upon dynamic
systems. In a manner similar to the impulse given to science by the use of the
telescope and the microscope in the 17th and 18th centuries, the use of this
machine enormously eases the testing of theories through experimentation,
thanks to the immense increment in calculating capacity and the possibility of
making simulations of real processes and of creating models of complex
systems. Without the computer, concepts like those of strange attractors, fractals or algorithmic complexity could not have been developed.
In fact, this entire vast field of non-linear phenomena, with high
sensitivity to the initial conditions, was not unknown to the great
mathematicians and physicists of the 19th century; but in that era the
solution of the corresponding systems of equations would have required
numerical calculations so extensive as to be impractical, so that we had to wait for the second half of this century, with the appearance of fast computers, to be able to confront it.
One of the most positive results due to the emergence of this new field
of research is that interdisciplinary groups have been formed--integrating,
for example, biologists, physicists, mathematicians, or sociologists,
economists and computer experts--to study the problems inherent in complex
dynamic systems. These range from turbulent liquids to ecological systems or
economic models of society.
Various high-level centers have emerged, particularly in the United States, Russia and France; among them can be mentioned the Santa Fe Institute in New
Mexico, and departments dedicated to the subject at Los Alamos National
Laboratory (Center for Non-Linear Studies), the Georgia Institute of
Technology, the University of California at Berkeley, and the Center for
Research in Saclay, France, in addition to the group at the Free University of
Brussels, Belgium.
A major role in the study of these phenomena was played by the Russian school of physicists and mathematicians--Landau, Kolmogorov, Andronov, Lyapunov, and others--who developed the necessary techniques long before deterministic chaos became a fashionable topic. That school continues making great contributions to this important area.
In recent years the advance of this activity is displayed in the growing number of congresses on its diverse aspects. The possibilities of application to the most diverse sciences were highlighted at the First Conference on Experimental Chaos, held in the United States in October of 1991, where it was shown that there are cases in which one can profit from chaos instead of avoiding it, and thereby obtain systems of greater flexibility than those which, being ordered, have "good," or predictable, behavior.
This book has the intention of describing the fundamental characteristics of complex systems and the methods used to study their behavior.
I wish, finally, to express my gratitude to Marcos Saraceno for his valuable
suggestions concerning the development of this theme and the presentation of
the text.
I. Does the Universe function like a clock?
COMMON TO many creation myths is the concept of an eternal battle between
Order and Chaos, which suggests that those two concepts are profoundly rooted
in the human mind.
For primitive humanity, Nature was a Chaos, a capricious entity, subject
to the whims of powerful and indecipherable gods, whose actions could not be
predicted.
Thus, the cosmologists of many cultures imagined an initial state of
Disorder, or Chaos, from which things and beings emerged.
In The Theogony, Hesiod says: "Chaos was first, and then the
Earth." Chaos is a word of Greek origin, equivalent to abyss, or also a
formless entity: cosmos, for its part, designates order and, by extension, the
Universe.
A Chinese creation myth says that from chaos two principles emerged, Yin
and Yang, which with their feminine and masculine aspects later created the
entire Universe.
In the Bible we read: "And the earth was without form, and void; and
darkness was upon the face of the deep. And the Spirit of God moved upon the
face of the waters."
Chaos, then, is the primordial formless substance from which the Creator molded the world. But human beings need to discover order in nature, to seek behind its complex behavior the laws that will permit them to know the duration of the days and the nights, the phases of the Moon, the season for the harvests. The notion of order is thus introduced out of the necessity of foreseeing, of predicting, indispensable for survival.
Slowly, over the length of many millennia, mankind kept discovering
ordered, regular behaviors in nature, which they learned to register, predict
and exploit.
In Greek thought two different visions appear regarding what is important for our comprehension of the Universe, visions that have essentially been maintained in the West until our times: the static monism of Parmenides and the being in perpetual motion conceived by Heraclitus.
Plato took from Parmenides the emphasis upon order, on the immutable
aspects of reality, and from Pythagoras the study of the mathematical and
geometrical laws that are the expressions of the "Forms," eternal ideas, of
which mankind perceives only the shadows. Furthermore, like Heraclitus, he
kept in mind the instability and the incessant flux of all that is manifested
in nature.
With this focus, Platonic thought seeks something that lies behind the process of change and the notion of the passage of time, in order to make an intelligible description of reality. The Cosmos is rational, and if in the
beginning there was Chaos, it was the Demiurge, the Supreme Orderer, who
arranged the transformation of the Universe from the state of disorder to that
of order. This divine Being is mathematical, and the order that it establishes
also is. The field of knowledge where this concept is most clearly expressed
is that of geometry, and Platonic thought makes a clear distinction between
the Forms, which are idealizations, are perfect, and objects such as they
appear in nature, where perfect circles, spheres or planes are never found.
From this came the clear aversion of the Platonic thinkers to using tools
for studying geometry--they only tolerated the use of the ruler and the
compass--since every material object is imperfect, a pallid reflection of the
Ideas.
Due to the action of an "errant cause," matter resists being modeled by
ideas, and so opposes being incarnated in all the pure forms of geometry. The
result is then a Cosmos agitated by numerous small convulsions, which in
modernity we could call whirlwind motions.
For Plato, consequently, there is a hierarchy formed with three
fundamental levels: the Ideas and mathematical Forms, which are the perfect
model of all things; the original Chaos; and an intermediate state, which is our imperfect, complex world, the result of the work performed by the Demiurge starting from the Chaos and modeling it upon the basis of the Ideas.
In this hierarchical order, the supreme value is that of the Ideas and
mathematical Forms, which express the divine qualities of simplicity, harmony,
regularity.
This philosophy was incorporated into medieval Christian thought, and the scientists of the early Renaissance followed its fundamental aspects.
Galileo, two thousand years after Plato, says that the book of nature is
written in mathematical language, and in his era the bases for the scientific
method are established, which consists in seeking the eternal laws that rule a
natural phenomenon, those that should be formulable in mathematical language,
and questioning nature to discover whether the facts corroborate or negate the
proposed theory.
For thinkers like Galileo, Kepler, Newton, or Einstein, everything occurs as if God (or nature) had chosen the order of the "pure forms"--the simpler they are, the more beautiful and true they are considered--and the ideal of science is to discover through reason the order and regularity that lie behind the apparent disorder of nature.
As Albert Einstein expressed with so much clarity:
We recognize at the root of all scientific work at a certain level
a conviction comparable to a religious sense, that accepts a world based
upon reason, an intelligible world. This conviction, tied to a profound
sense of a superior reason which is revealed in the world of experience,
expresses for me the idea of God.
Science thus postulates that behind the complexity of the world there are
mathematical laws that show an underlying harmony, in which there is no place
for disorder or the unforeseeable; those dissipate on being illuminated by the
light of reason.
Yet there is another vision that, until recently, has been less
propagated than the Platonic search for that which is invariant in nature. It
is the perspective of Aristotle, who puts his emphasis on change in the
observable processes in the world, instead of the invariant Forms behind them
that cannot be observed.
In this vision, his observations of the living world, for which he was very gifted, were very influential. For the advocates of this focus, nature is similar to an organism: complex, changing. They accept its aspects of disorder and unpredictability as real and not as mere appearances, trying to understand those characteristics without eliminating them, just as no one would try to transform a living organism into a machine as predictable as a clock.
We shall return to this focus further on, but now we will detail the
characteristics of the scientific method such as it was applied at the outset
of the work of the great thinkers such as Galileo, Descartes, Huyghens, and
Leibniz.
The scientific method applied by Galileo had its mathematical expression
thanks to Descartes, who emphasized the necessity of analyzing, that is,
dividing what is examined into its simplest components, so as later to
recompose them in a synthesis that will permit understanding the phenomenon
with certainty. He also created analytic geometry, basing it on his introduction of coordinates, thus identifying space as an immense grid in which every point can be assigned a numerical value, with which he succeeded in uniting geometry with algebra.
NEWTON'S MESSAGE
Three centuries ago the monumental work of Isaac Newton was published,
Mathematical principles of natural philosophy, whose message has been
decisive for the culture of the West.
According to this:

The Universe is ordered and predictable; it has laws expressible in
mathematical language, and we can describe them.

As we have seen, for the scientists of the early Renaissance this reaffirmed the Platonic focus: there is an order in the Universe behind its apparent complexity, with simple laws that contain immutable aspects, expressed in the famous magnitudes which remain invariant in physics: total energy, momentum, electrical charge.
The laws discovered by Newton seem simple, and their application to the
behavior of bodies allowed describing precisely the movement of the stars in
the firmament, the fall of bodies, et cetera.
Newton formulated his laws through mathematical equations, that relate
the magnitudes which we can measure of a body, such as its position and its
velocity, and the form in which these vary with time.

NEWTON'S LAWS OF DYNAMICS

Inertia: Every material body on which no force is applied remains in repose or moves in a straight line with uniform velocity.

Force: When a force is applied to a free body the momentum
changes over time proportionally to that force, and the direction of its
movement is that of the line of action of the force.

Action and reaction: For each action exercised upon a body there is
always an equal and opposite reaction.

The first law was the formulation of Galileo's discovery, who observed
that one should attend not to the velocity of a body but to the change in that
velocity with the passage of time, thus putting an end to the Aristotelian
belief which had blocked the advance of physics for many centuries.
Newton also applied a concept that has been essential to the scientific
method: that of ideally isolating the dynamic system that one wishes to
examine from the rest of the universe of which it forms a part. This allows its behavior to be understood, since it is not necessary to consider all the infinite relationships with the universe, which would only be possible for an infinite being. It is enough, then, to consider only those characteristics of the system that are relevant to the phenomenon one desires to study. Thus, the first law asks what can be said of the movement of an isolated body, that is, one to which no force is being applied.
Aristotle had said that it ought to remain in repose.
Newton, like Galileo before, establishes that, in this situation, the
body can be in repose or can move in a straight line with uniform velocity.
If a body falls towards the earth, this is because the force of gravity acts on it, and therefore its velocity cannot be uniform but must keep increasing; this is why it is more dangerous to fall from a great height than from a lesser one.
Is there something that remains constant in this process of falling? Yes: the acceleration, that is, the rate at which the velocity of the falling body increases.
The second law formulates this concept, which is expressed mathematically
as F = m × a (the force F applied to the body is
proportional to the acceleration a of its movement, according to a
constant m, the mass or quantity of material in the body).
For any dynamic system in the Universe, the laws of motion can be
expressed as F = m × a, no matter what is the origin of the
applied force. It can deal with the force of gravity, the electrical or the
magnetic, and for all the same surprisingly simple equation applies equally.
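The law F = m × a can be put to work in a short numerical sketch, integrating the fall of a body step by step (the mass, height and time step are arbitrary illustrative values):

```python
# A body falling under constant gravity: F = m * a, so a = F / m is constant
# while the velocity keeps increasing. Illustrative values throughout.

m = 2.0        # mass of the body, in kg (arbitrary)
g = 9.8        # gravitational acceleration near the Earth, in m/s^2
F = m * g      # the constant force of gravity on the body
dt = 0.01      # time step for the integration, in seconds

v, h, t = 0.0, 100.0, 0.0   # initial velocity, height and elapsed time
while h > 0.0:
    a = F / m               # the acceleration is the same at every instant
    v += a * dt             # the velocity grows steadily...
    h -= v * dt             # ...so the fall is ever faster
    t += dt

print(f"hits the ground after about {t:.2f} s, moving at {v:.1f} m/s")
```

Whatever the origin of the force (gravitational, electrical, magnetic), only the line computing F would change; the rest of the calculation stays the same.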
The velocity with which a magnitude changes is determined by the difference between its values at two successive times; hence the term "differential" that appears in mathematical analysis: Newton's equations involve rates of change and are, accordingly, differential equations.
Algebraic equations are distinguished from differential ones in that they do not involve rates of change. Algebraic equations are not always easy to solve, yet solving differential equations is, in general, much more difficult, and it is truly surprising that so many equations important for their applications to physics have a definite solution.
A basic mathematical principle of differential equations is that their
solutions, that is, their integration, are unambiguously determinate, and give
a single result for each set of numerical data that are introduced into the
equations; if, for example, one wishes to know the height reached by a
projectile, by introducing into the equations the data of initial velocity,
angle of the cannon, et cetera, a result is obtained that clearly defines this
altitude; they are accordingly deterministic equations: there is a single
effect for each cause.
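This one-effect-per-cause character can be seen in a sketch of the projectile example (the closed-form expression for the maximum height of an ideal projectile is used; the numbers are arbitrary):

```python
import math

def max_height(v0, angle_deg, g=9.8):
    """Maximum height of an ideal projectile: h = (v0*sin(angle))^2 / (2*g)."""
    vy = v0 * math.sin(math.radians(angle_deg))   # vertical component of v0
    return vy * vy / (2.0 * g)

# Introducing the same initial data always yields one and the same altitude:
h1 = max_height(100.0, 45.0)
h2 = max_height(100.0, 45.0)
print(h1, h1 == h2)
```

The equations are deterministic: for each set of initial data there is exactly one answer, no matter how many times the calculation is repeated.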
The importance of these differential equations lies in that they can be
applied to dynamic systems, which is to say to any process which changes over
time.
Many physicists and mathematicians have marveled at how effective these equations are in describing the structure of the physical world.
In view of the complexity of the world that surrounds us, it is truly
noteworthy that there are natural phenomena that allow description through
simple physical laws. How is this possible?
It was Isaac Newton who had the vision that opened the road for the
natural sciences to have so much success over these past three centuries.
This is due partly to their having initially restricted their attention to the study of simple natural systems with only a few components.
PHYSICAL LAWS AND INITIAL CONDITIONS
In accordance with the focus held by Newton on mechanics, a material system
can be conceptually divided into: 1) the "initial conditions," which
specify its physical state at a certain initial time (these conditions can be,
for example, the position and the velocity of a projectile, or of the Moon with
respect to the Earth); 2) the natural or "physical laws," that specify
how this state changes.
The initial conditions are usually very complicated, a complication that reflects the complexity of the world in which we live. The natural laws, on the other hand, can be, and are, much simpler, and are expressed through differential equations. This division--laws and initial conditions--is maintained to this day.
In practice, only those equations can be strictly solved that represent simple physical laws for systems with simple initial conditions: the shooting of a projectile, or the movement of the Earth around the Sun without taking into account the influence of the other planets. That is, faced with the infinite complexity of nature, in which each component is linked to the rest by an immense quantity of relations, an abstraction is performed,
ideally considering the system one wishes to study as if separated from the
rest, and selecting those characteristics of the system which seem to be
sufficiently important as against others that produce almost no effect upon
the phenomenon being examined. So, to calculate the trajectory of a projectile
only the attraction of the Earth's gravitation is considered as an influence,
since the attraction that the Moon or the Sun exercise is so small that it
need not be considered among the initial conditions. In like manner, to study
the movement of the Moon around the Earth one will not take into account the
attraction of the stars.
One then has simple initial conditions, because the variables selected are those that most affect the phenomenon under study, and furthermore these obey laws expressed by equations in which small variations in the initial conditions yield solutions that differ little among themselves.
In this way, the fact that the initial conditions are known in general with a
certain margin of error affects relatively little the result which can be
expected from these equations. For example, if a projectile is shot from a
gun, the calculation of where it shall hit the target starting from a certain
position of its barrel or from another slightly different will produce a
proportionally small difference in the result.
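That proportionality can be checked directly: for an ideal projectile, tilting the barrel by a tenth of a degree shifts the point of impact by only a fraction of a percent (the closed-form range formula for flat ground is used; the values are illustrative):

```python
import math

def launch_range(v0, angle_deg, g=9.8):
    """Range of an ideal projectile over flat ground: R = v0^2*sin(2*angle)/g."""
    return v0 * v0 * math.sin(math.radians(2.0 * angle_deg)) / g

r1 = launch_range(100.0, 30.0)
r2 = launch_range(100.0, 30.1)   # the barrel tilted a tenth of a degree more

# A small change in the initial conditions produces a proportionally small
# change in the result: the hallmark of a well-behaved, non-chaotic system.
print(r1, r2, abs(r2 - r1) / r1)
```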
Another important aspect, also studied by the founders of mechanics, was
that of reversibility in time of the trajectories of dynamic systems: the
equations show that if the sign of the velocities of all the components of the
system are inverted, replacing v by -v, the result is
mathematically equivalent to changing the time t to -t, as if
the system could flow "backwards" in time. This is the mathematical form of
expressing that if starting at a certain instant there is a change in a
dynamic system, another change, defined through the inversion of the
velocities of the components, can restore the original conditions.
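The inversion of velocities can be sketched numerically for a single particle under a constant force (velocity-Verlet integration is chosen here because each of its steps is exactly reversible; all values are arbitrary):

```python
# Reversibility in time: integrate a particle forward, invert its velocity
# (v -> -v), and the same equations carry it back to its starting point.

def verlet_step(x, v, a, dt):
    """One velocity-Verlet step under a constant acceleration a."""
    x = x + v * dt + 0.5 * a * dt * dt
    v = v + a * dt
    return x, v

x, v = 0.0, 1.0             # initial position and velocity (arbitrary)
a, dt, n = -0.5, 0.01, 1000

for _ in range(n):           # forward in time
    x, v = verlet_step(x, v, a, dt)

v = -v                       # replace v by -v: the inversion of velocities
for _ in range(n):           # the same law now retraces the trajectory
    x, v = verlet_step(x, v, a, dt)

print(x, v)
```

After the second loop the particle is back (to numerical precision) at its starting position, with its initial velocity reversed, just as if time had run backwards.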
EVERY PHYSICAL PROCESS IS DETERMINATE AND THEREFORE
IT IS POSSIBLE TO PREDICT ITS BEHAVIOR
Abiding by these rules of the game, we arrive at the conclusion that in a system which obeys the laws of classical mechanics, and which, accordingly, is deterministic, if the positions and the velocities of its components at one instant are known, one can calculate the positions and velocities at every later or earlier instant.
Thus, if in the dynamic system formed by two billiard balls we know at a given initial moment the position and velocity of each one, we can, through Newton's differential equations, predict their respective movements from the moment they receive the initial impulse until its effect is exhausted.
Even more, at the end of the 18th century the conviction arose that if one knew the position and velocity of each of the planets that comprise the solar system at a given instant, one could calculate their positions in the future, and also in the past, through equations that unambiguously determine the trajectories. It was precisely the application of the equations to the movement of the heavenly bodies, that is, celestial mechanics, which constituted Newton's greatest triumph.
The various scientific disciplines that continued developing in the centuries following Newton's theory studied other magnitudes in addition to the position and velocity of bodies. Yet the procedure remained the same: introducing the values of those magnitudes at an initial time into a mathematical equation which, once solved, determines those values for any other time, whether one deals with the configuration of an atom, the movement of a comet, the temperature of a gas, or the voltage of an electrical circuit.
At the culmination of this process of scientific development it seemed that the behavior of the entire Universe could come to be expressed mathematically, since it is determined by the immutable laws mentioned, which dictate the movement of every particle exactly and forever; the scientific task would consist in applying these laws to particular phenomena.
The Newtonian scheme thus made possible the construction of the majestic
structures of classical mechanics, which gave humanity the vision of an
ordered and predictable Universe.
LAPLACE'S DEMON
This revolution in thought had its clearest expression with Pierre Simon de
Laplace, who in the age of Napoleon wrote, in his Analytic theory of
probabilities:
We should consider the present state of the Universe as the effect
of its previous state and as the cause of its future state. An
Intelligence that, for an instant, were to understand all the forces with
which nature is animated and the situation regarding the beings that
comprise it, if it also were sufficiently profound as to submit those to
[mathematical] analysis, it would embrace in the same formula the
movements of the largest bodies in the Universe and of the quickest atom:
nothing would be uncertain for it and the future, like the past, would be
present before its eyes.
The human spirit offers, in the perfection that it has known how to give to astronomy, a pallid example of this Intelligence. Its discoveries in mechanics and in geometry, together with that of universal gravitation, have placed it in a condition to embrace in the same analytic expressions the past and future states of the system of the world.
Laplace thus has a vision of the Universe as a gigantic mechanism that functions "like a clock" (not an electronic digital clock, but a classic one, made of moving parts, springs and gears). Such a mechanism is, above all, absolutely deterministic and, accordingly, predictable: it is governed by eternal laws which ensure that under identical circumstances the same things always result. And if the circumstances, instead of repeating themselves identically, change slightly, the result will also change in a proportionately slight manner.
A specialist in these laws of mechanics who knows a system's characteristics and its state at a given moment can, in principle, establish exactly what it will do at any moment of the past or future. If in practice this can be applied only to relatively simple systems--clocks, machines, planets--while many objects in the world appear to have irremediably disordered, chaotic, unpredictable behavior, that disorder is only apparent, and to the degree that mathematical analysis is perfected and the corresponding hidden physical laws are discovered, the day will come when that apparent chaos will disappear.
From this viewpoint the future has been rigidly determined since the beginning of the Universe. Time ceases to have much physical significance, since it is as if that Intelligence proposed by Laplace--which many call "Laplace's demon" (from the Greek "dáimon," a secondary divinity intermediate between the gods and man)--had the entire history of the Universe recorded on a cinematographic film, which can be contemplated running forward or backwards in time. Since this time is reversible, it only marks the direction in which one observes a process that cannot be modified.
II. Where chaos appears in the machine
DURING the 18th and 19th centuries Newton's mechanics was applied with impressive success. The mechanistic point of view became popular and, combined with the experimental method, gave a great impetus to physics, chemistry and biology. It also fundamentally influenced the new political, economic and social theories.
The ancient conception of chaos as the ruler of nature, where things happen by chance, by caprice, without any relation between cause and effect, gave way to the vision of an order in the world as deterministic as a fine Swiss watch.
But history seems to contain cyclical processes, or something similar to a spiral, where a cycle does not repeat exactly but passes to a new level.
Something like this occurs with the theme of order and chaos in our
vision of the Universe. The primitive chaos was replaced by the Newtonian
order. However, to the extent that knowledge of nature strengthened,
difficulties appeared for the mechanistic model. In the second half of the
19th century it became clear what the limits of classical mechanics were: its
validity did not extend to extremely large velocities or to the extremely minute world. As a product of this crisis there emerged, in the first half of the 20th century, two new branches of physics: the theory of relativity and quantum mechanics.

The theory of relativity marked the limit of the equations of Newton,
which must be corrected when one confronts velocities close to that
of light.
Quantum mechanics establishes, through the uncertainty principle, a
limit to the precision with which one can simultaneously measure
variables such as the position and the velocity of an atomic particle.

Today scientists in numerous disciplines are beginning to persuade themselves
that there is a third limit to the possibility of knowing nature, one which,
moreover, holds for the world of our everyday experience: in many
circumstances the behavior of the individual components of complex dynamic
systems, which involve the interactions of a large number of components,
cannot be predicted, and the same can occur even in simple systems formed by
a few components subjected to the action of two or more forces.
We all know that the world we live in is complex, and no one is surprised
at the low success rate of predictions about the economy of a country, or the
weather, or the behavior of any human being or of living organisms in
general. This has always been taken as an indication that if there are laws
for this very complicated world, they must themselves be complicated, unlike
those that rule the dynamic systems physics has traditionally studied, which
exhibit order and predictability. But now it turns out that even simple
physical systems, subject to simple laws, can have unpredictable, chaotic
behavior.
Thereby we are presented with a dilemma: is our complex world governed by
simple laws that we shall progressively discover through the methods
developed by science, in accord with the vision we have called Platonic? Or
do we adopt a vision resembling the Aristotelian, one that puts the emphasis
on processes of change with the passage of time, accepting that in many cases
the behavior of such processes cannot be predicted exactly from the simple
laws that govern behind the phenomena?
To reply to this dilemma it is necessary to examine under what conditions
one or the other approach seems valid.
Many dynamic systems, whatever be their nature (physical, chemical,
electromechanical, biological) are extremely sensitive to the values of their
initial conditions, such as the position, the velocity, et cetera. This places
a limit on the possibility of predicting the future state of the system, given
that, as we have seen, such a prediction is based on the supposition that
small causes produce small effects and that, hence, a small change in the
initial values of the differential equations describing the behavior of the
system will produce a proportionally small change in their solution, which
allows us to know the future state.
We must distinguish here between linear and non-linear differential
equations. The solution of a differential equation is called integration; the
most important class among integrable equations is that of linear equations.
The simplest among them depends upon a single variable, and its solution is
represented graphically by a straight line; hence the name linear
equation.
The family of linear equations has the characteristic that the solutions
obtained for different numerical values of the variables can be summed among
themselves, the result also being a solution. A simple example is the linear
wave equation, which describes the movement of waves of small amplitude upon
a liquid surface. The equation has many different solutions, each with a
different wave amplitude and length, and these can be added, thereby yielding
a new solution of the equation. This expresses mathematically the physical
fact that, as we see in the water of a lake, several different waves can be
superimposed, and the superposition also corresponds to a solution: the sum
of the solutions of the linear wave equation.
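This superposition property can be checked numerically. The sketch below is
illustrative only: instead of the full wave equation it uses another linear
equation, that of the simple harmonic oscillator, y'' = -y, and verifies that
an arbitrary sum of two of its solutions is again a solution:

```python
import math

def second_derivative(f, t, h=1e-4):
    """Numerical second derivative of f at t (central difference)."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

# sin and cos are both solutions of the linear equation y'' = -y.
f, g = math.sin, math.cos

# An arbitrary superposition of the two solutions...
combo = lambda t: 2.0 * f(t) + 3.0 * g(t)

# ...satisfies the same equation: combo''(t) + combo(t) is zero
# up to the small error of the numerical derivative.
t = 0.7
residual = second_derivative(combo, t) + combo(t)
```

The residual vanishes up to numerical error; it is precisely this property
that fails for non-linear equations, where the sum of two solutions is in
general not a solution.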
In general, linear equations are much easier to solve than non-linear,
and therefore they have been studied more; also, when a physical phenomenon
requires a non-linear equation that is difficult to solve, the usual
procedure is to linearize it, eliminating the terms that influence it least;
that is, a linear approximation is made.
Yet in nature, the majority of phenomena are expressible through
non-linear equations. The well-behaved equations of classical mechanics, such
as those that determine the movement of the Moon, and which permit predicting
the future of the system with exactitude, are the exceptions, not the rule.
With the advent of the computer it has recently become possible to tackle
non-linear equations, since computers permit the numerical analysis of all
types of equations, linear or non-linear, however complicated they may be.
Today then, one can, thanks to computers, approach the study of dynamic
systems whose behavior responds to non-linear equations, and which are
precisely those that display sensitivity to the initial conditions.
As an example of this let us consider a system with non-linear, unstable
behavior: a metal cone standing on its vertex, similar to that in figure
II.1. However carefully we align its axis vertically, it will end by falling,
and the side upon which it falls will depend on minuscule differences that
break the equilibrium: a light breath, a speck of dust. To predict on what
side the cone will fall would require precise knowledge of all the forces to
which it is subjected at the initial moment of equilibrium, which amounts to
the impossible task of introducing an immense quantity of parameters as
initial conditions into the equations of motion.
Another example of unstable non-linear behavior is a snow-covered mountain
slope, which can be in a state where the energy of a shout may provoke an
avalanche of many tons of snow, an effect evidently disproportionate to its
cause.
If, alternatively, one deals with dynamic systems that periodically repeat a
behavior, and a small change in the initial conditions is repeated or
multiplied in each of the following periods (a situation of positive
feedback, described through non-linear equations), one can arrive at final
states so distinct that there is no possibility of predicting them.
Let us consider a dynamic system formed by a sphere subjected to a force
that causes it to revolve along a circular track 10 meters long, as in
figure II.2.
To locate the position of the sphere at any moment we shall measure the
length of track from a fixed reference point marked upon it. We shall suppose
that the initial position of the sphere is 1 meter from that reference, and
that we measure that position with a tape measure that gives us an error of
+/- 1 millimeter; that is, the true position can be anywhere between 0.999 m
and 1.001 m.
We further suppose that each time the sphere passes by the reference
point it receives an impulse, and that this provokes a displacement of the
sphere around its orbit that increments its position 10 percent, such that if
it was between 0.999 and 1.001 m at the initial moment, it is multiplied by a
factor of 1.1 and this position comes now to be between 1.099 and 1.101 m at
the end of the first period, and in turn repeats the multiplication by 1.1 for
every one of the successive periods for which the position will be: between
1.209 and 1.211 m in the second period, between 1.329 and 1.332 m in the
third, et cetera. At the end of 25 turns the position will be practically the
same as at the start: between 10.825 and 10.845 m, which, since the total
length of the track is 10 meters, places the sphere between 0.825 and 0.845 m
with respect to the reference. Yet note well that although the sphere has
returned, at period 25, practically to its initial position, the
indefiniteness in that position has grown: it now spans a zone of +/- 10 mm
along the track. The initial imprecision of +/- 1 mm in its location has been
multiplied by 10.
Figure II.2 illustrates the situation after 25 rotations and after 60
rotations, where the sphere can be in any position between 3.81 and 4.19
meters, and after 70 rotations, where it will be between 8.21 and 9.79
meters. After 97 rotations this indefiniteness in the position will have been
amplified 10 thousand times and, accordingly, will be 10 meters. Thus the
sphere can be at any point on the 10-meter track, and we cannot know in
advance, through calculation, which point that is.
Of course, this seems easily remediable: since a tape measure was used to
measure the initial position with an error of 1 mm, it can be exchanged for a
much more precise instrument, one that yields an error 10 thousand times
smaller: only 1 tenth of a micron.
Now then, with such a precise initial datum, a calculation can predict the
sphere's position after 97 rotations with an error no greater than 1 mm.
But, what if one intends to continue predictions through calculation for
a greater number of rotations?
It turns out that even with this much more precise initial measurement,
after 193 rotations we are back in the same situation: an indefiniteness of
10 meters, which does not allow predicting where the sphere is. Obviously one
cannot go on increasing the precision of measurement indefinitely, for there
rapidly appears the need to measure magnitudes with an error smaller than the
size of an atom; and even if such a marvel were realized, a few hundred
rotations more would bring us to the same frustrating situation.
We are here between a rock and a hard place: either we use an approximate
measurement, which is not sufficiently exact for making predictions about
every future state, or we attempt a measurement so extremely precise that it
is impractical.
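The arithmetic of this example is easy to reproduce. The sketch below is a
minimal simulation of my own (the function name and structure are not from
the text): it propagates the measurement uncertainty through successive laps
and shows that a vastly better instrument only postpones the loss of
predictability.

```python
TRACK = 10.0    # length of the circular track, in metres
FACTOR = 1.1    # each impulse multiplies the position by 1.1

def position_range(x0, err, laps):
    """Bounds on the sphere's position after `laps` impulses,
    starting from x0 metres measured with uncertainty +/- err."""
    growth = FACTOR ** laps
    low = ((x0 - err) * growth) % TRACK
    high = ((x0 + err) * growth) % TRACK
    spread = min(2.0 * err * growth, TRACK)  # total indefiniteness
    return low, high, spread

# With the +/- 1 mm tape measure: after 25 laps the sphere is back
# near the start, but the uncertainty has grown about tenfold.
low, high, spread = position_range(1.0, 0.001, 25)

# After 97 laps the spread fills the whole 10 m track.
assert position_range(1.0, 0.001, 97)[2] == TRACK

# An instrument 10,000 times better (error of 0.1 micron) keeps the
# 97-lap error near a millimetre, but only postpones the total loss
# of information to lap 193.
assert position_range(1.0, 1e-7, 193)[2] == TRACK
```

The key point the simulation makes concrete is that the uncertainty grows
geometrically, so any finite gain in measuring precision buys only a fixed
number of additional predictable laps.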
Dynamic systems with this behavior are not rare in nature. They are systems
that periodically repeat a certain state, where that state undergoes a
perturbation that is multiplied every period (a case of resonance), which is
sufficient, after a certain time, to produce effects disproportionately large
in comparison with the initial perturbation.
In these systems, governed by strictly deterministic laws, calculation of
their behavior can under certain conditions become increasingly imprecise,
defeating every attempt to predict the long-term future.
PHYSICAL PROCESSES ARE DETERMINISTIC, BUT CAN THEY
ALWAYS BE PREDICTED?
It now becomes necessary to distinguish between determinism and
predictability, two words that since the age of Laplace had been considered
synonymous. We have seen that, under certain conditions, a dynamic system can
be deterministic and nevertheless exhibit unpredictable behavior.
Of course physicists knew that the Universe displays an immense complexity,
but they supposed that, in general, it could be analyzed by decomposing it
into its simple components, and that from these one could select a few
variables as initial conditions to introduce into the equations, given that
the immense majority of the other conditions are of such small magnitude that
ignoring them does not affect the result. But now we learn that such cases
are special, and that far more common is the situation in which, even for
systems where determinism rules, there is a limitation on the possibility of
predicting future behavior which can be practically equivalent to a situation
of chaos.
This does not mean, however, that one can say nothing about such systems,
since for certain initial conditions, the behavior is indeed ordered and hence
predictable in the long run. Furthermore, even in the chaotic state these
systems display many properties that can be understood with the help of the
theory of probabilities; this mix of determinism and probability is a very
fruitful way of attacking the characteristic problems of complex systems.
One deals, then, with a new physics of non-linear phenomena whose object is
the study of processes such as physico-chemical and biological turbulence and
oscillations. These phenomena have an apparently random aspect, yet show
unsuspected similarities in their behavior, which makes it possible to
encompass their study within common mathematical methods, in which notions
such as bifurcations, strange attractors and Lyapunov exponents appear.
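Of these notions, the Lyapunov exponent is the easiest to illustrate: it
measures the average exponential rate at which two initially close
trajectories separate. The sketch below is my own illustration, not from the
text: for a map like the track example above, where each period multiplies
any error by 1.1, the exponent per period is simply ln 1.1, and a direct
numerical estimate recovers it.

```python
import math

def lyapunov_estimate(x0=1.0, delta=1e-9, steps=50):
    """Estimate the separation rate of two nearby trajectories of
    the map x -> 1.1 * x (one impulse of the track example per
    step), as the average exponential growth of their distance."""
    a, b = x0, x0 + delta
    for _ in range(steps):
        a, b = 1.1 * a, 1.1 * b
    # average growth rate of the separation, per step
    return math.log(abs(b - a) / delta) / steps

# The estimate converges to ln(1.1), about 0.095 per period; a
# positive exponent is the signature of sensitivity to initial
# conditions.
rate = lyapunov_estimate()
```

In genuinely chaotic systems the same recipe is used, with the separation
renormalized at each step so that it stays infinitesimal.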
This new approach to complex phenomena implies a change of paradigm, now
that the one posited by Laplace with his demon capable of predicting any
event has been put in fair context: it never claimed that all the phenomena
of nature could actually be predicted through calculations based on
deterministic laws; neither Laplace nor any physicist ever dreamed that this
would be possible. What the demon places in relief is that this impossibility
is due to our imperfection, to the limited beings that we are.
Only an absolute Being could know the future over any term, since it would
know all the initial conditions with infinite precision and, in its
calculations, would also manipulate numbers with an infinite number of
digits, such as the irrational numbers. Neither human beings nor computers
can do this, given that they necessarily utilize a finite quantity of initial
conditions and a limited number of digits in their calculations.
This is the Platonic approach, which postulates that there exists a
mathematical reality that is indeed perfect, an ideal that sets the goal
toward which our efforts should be directed. Action coherent with this
approach consists in concentrating effort on finding the solution of the
equations that describe the phenomenon, and in introducing into them
empirical data obtained with ever greater precision, in the confidence that
with the passage of time one would thus come nearer and nearer to predicting
a growing repertoire of behaviors.
Against this vision of continual progress, the study of complex systems
raises a real question: the sensitivity to the initial conditions may be such
that no observation, no matter how precise, will allow us to determine those
conditions with sufficient exactness. If indeed in a deterministic system the
initial state determines the final state, in these non-linear systems an
approximate knowledge of the initial state does not permit deducing even an
approximate knowledge of the final state, and it becomes meaningless to give
priority to solving the equations and introducing constantly more exact data
into them.
This difficulty with predictions in deterministic complex systems was
already known in the previous century: the physicist James C. Maxwell, who
played a fundamental role in the theory of electromagnetism, said in one of
his lectures in 1873 that although the same causes always produce the same
effects, when there is sensitivity to the initial conditions similar causes
need not produce similar effects. He gave, among other examples, the
explosion of guncotton and the small human actions that can unleash great
social changes.
The great mathematician Henri Poincaré wrote in 1908, in his
Science and Method, that "a very small cause, which escapes our notice,
determines a considerable effect that we cannot fail to see, and then we say
that the effect is due to chance." Later he extends this concept:
Why do meteorologists find it so difficult to predict the weather
with any certainty? Why do rains and storms themselves seem to come by
chance, so that many people think it quite natural to pray for rain or
for fine weather, though they would consider it ridiculous to pray for
an eclipse? We see that great perturbations generally occur in regions
where the atmosphere is in unstable equilibrium. The meteorologists see
clearly that the equilibrium is unstable, that a cyclone will form
somewhere; but where? They are in no position to say; a tenth of a
degree more or less at any point, and the cyclone bursts here and not
there, and extends its ravages over regions it would otherwise have
spared. If that tenth of a degree had been known, this could have been
foreseen in advance, but the observations were neither sufficiently
rigorous nor sufficiently precise, and for that reason everything seems
due to the intervention of chance.
For Poincaré that appearance of chance in nature has two principal
roots:
1) the already mentioned sensitivity to the initial conditions, which means
that even if the system's laws are known, small initial errors have an
enormous influence on the final state; and 2) the complexity of the causes,
for when a process has been ideally isolated for analysis, only a part of the
innumerable influences to which it is subject is taken into account.
This procedure is effective only for integrable systems, which, in general,
are linear systems. For non-linear ones, this separation for analysis is paid
for with a diminished reach of prediction.
Both Maxwell and Poincaré gave, as an example of dynamic systems
sensitive to initial conditions, the case of a gas formed of many molecules
that fly at great speed in all directions and undergo many collisions among
themselves. If one desires to determine the evolution of a system with as
immense a quantity of components as a gas, one must keep in mind that it is
impossible to calculate the movement of each of the particles with Newton's
differential equations.
Consider that a cubic centimeter of air contains some 27 million million
million (2.7 x 10^19) molecules, and the entire surface of the Earth would
not suffice even to write down the corresponding differential equations, to
say nothing of carrying out the calculation to determine with them the
movement of each molecule in that cubic centimeter of gas.
Let us suppose that we could observe a very tiny portion of that cubic
centimeter of air, a portion sufficiently limited so as to be able to register
the movement of the few molecules which might be present there. We can imagine
that by thus ideally isolating our zone of observation, we might be able to
follow their individual straight-line trajectories, the collisions between
molecules and their rebounds, and we shall know that, since they follow the
laws of mechanics, they are perfectly predictable, now that we can calculate
their individual movements... A useless effort, since it will only be valid for the
briefest time! It happens that more molecules will quickly arrive from outside
our zone of observation, which did not figure in our initial data, and the
molecules which were under observation will disappear from the zone, and the
result now will be a different movement, with new unforeseen collisions, hence
impossible to calculate, as unpredictable as if one dealt with a process ruled
by chance. Thus we comprehend that if one observes only an infinitesimal
fraction of a very complex process, formed of innumerable components, with an
enormous quantity of distinct variables acting upon the system, the process
can appear fortuitous, disordered.
This is valid not only for a set of molecules in a gas: there is a certain
parallelism with the phenomena that the social sciences study, for whoever
wants to examine a nation's economy or its political evolution by studying a
small part isolated from the whole is headed for failure, since that part
will be continually subject to uncontrollable and unforeseen influences from
the rest of the system.
How then to proceed? It would seem that, with detailed knowledge of each
individual behavior blocked, one would have to resign oneself to accepting
that social, biological, economic, and physical systems with a large number
of components cannot be confronted with the scientific method, given that
their behavior apparently depends on pure chance, even though one might think
it governed by deterministic laws.
The answer to this question emerged in the field of the social sciences
in the 19th century, when they wanted to study the health and economic
characteristics of the nations. They applied to this the practical branch of
probability, which is statistics.
Its mathematical development was initiated by A. Quetelet in 1820, with
his book Social mechanics, inspired by the model of Laplace's Celestial
mechanics, and which had as its goal the development of the "moral and
political sciences." There appears the idea of the average man, whose length
of life, economic income, et cetera are calculated as averages over the whole
population.
The experience of the study of gases teaches that, although on the
microscopic scale the number of components is so large that it is impossible
to know their individual behavior, on the macroscopic scale they together
display global properties, such as temperature, pressure, density and volume,
that can be measured and that relate to each other through well-defined laws.
This is analogous to what happens with a human population with a great number
of individuals, for whom parameters can be established, like the rates of
births and deaths, product per capita, et cetera, without the need to pursue
each individual history.
The statistical method gave such good results for the social sciences
that the physicists later applied it, thus creating statistical mechanics, a
branch of physics developed around 1900 by the Austrian Ludwig Boltzmann and
the North American J. Willard Gibbs.
It is thus that James Clerk Maxwell proposed in 1872 the introduction in
physics of the calculus of probabilities for systems with a large number of
components, and wrote: "we have found a new type of regularity, the regularity
of averages."
GAMES OF CHANCE AND PREDICTIONS
The theory of probabilities began in games of chance. Laplace establishes
in his book that:

The probability of an event is the number of favorable cases, divided by
the total number of cases that could occur, supposing that all are equally possible.

Thus for example, a six-faced die gives us a probability of 1/6 that it
will fall with the 2 facing upwards, and the same probability applies to the
other five faces.
If a die is rolled a great number of times and statistics are kept on how
many times each of its six numbers appears, it will be found that the higher
the number of rolls, the nearer each frequency comes to 1/6. Yet before the
die is rolled it is unknown which face will come up this time, and this, of
course, is the reason games of chance exist.
In theory, if one knew the forces applied upon throwing the die, the
frictions, the air resistance, et cetera, one would be able to predict how it
would fall; but it is impossible to obtain all the necessary data and,
accordingly, players can continue to enjoy the emotions of betting without
knowing what chance will deal them.
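This "regularity of averages" is easy to verify by simulation. In the sketch
below (my own illustration of Laplace's definition, with a pseudo-random
generator standing in for a physical die), each individual throw remains
unpredictable while the relative frequencies settle toward 1/6:

```python
import random

def face_frequencies(rolls, seed=0):
    """Roll a simulated fair die `rolls` times and return the
    relative frequency of each of the six faces."""
    rng = random.Random(seed)   # pseudo-random stand-in for a die
    counts = [0] * 6
    for _ in range(rolls):
        counts[rng.randrange(6)] += 1
    return [c / rolls for c in counts]

# No single throw can be predicted, yet with many throws every
# frequency approaches 1/6 (about 0.1667).
freqs = face_frequencies(600_000)
```

The fluctuations around 1/6 shrink as the number of rolls grows, which is
exactly the statistical regularity that makes systems of many unpredictable
components tractable.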
Thus, when we speak of dynamic systems in which Newton's equations cannot
be applied to determine the behavior of each individual component, the
nearest thing to a prediction is to establish the probability that a certain
event will occur. We must now answer a new type of question: no longer "what
is this particle's velocity?" but "what is the probability of finding
particles whose velocity lies between such and such values?"
The method was initially applied in physics in the study of gases, so
that the analogy to which we have referred between the molecules of a gas and
the individuals of a society is logical. This concept was introduced in
thermodynamics, which emerged as a scientific development from the study of
heat.
At the end of the 19th century there existed, then, two scientific
approaches to the mathematical formulation of natural phenomena: analysis
through differential equations for systems with few components, and
statistical analysis for systems with a large number of components.
Yet in the scientific community the hierarchical distinction between the
two approaches was quite clear: the highest level corresponded to the first
of these visions, the one expressing the approach we have called Platonic,
wherein nature is ruled by eternal laws written in mathematical language,
whereas the second owed its existence to the ignorance that derives from our
human limitations, which can be partially compensated through the use of
probability, but which would inevitably see its field of application shrink
as knowledge advanced, thanks to that continual and inevitable progress which
was a basic belief in the West beginning with the Enlightenment.
As we saw, such a vision changed radically over the course of the 20th
century, Henri Poincaré being the first to perceive the essential
characteristics of the new approach. Like Maxwell, he warned that simple
systems, with few components, which therefore belong to the first of the
categories we have described, can nevertheless behave chaotically and then
require statistical methods for their description.
Poincaré made this warning on examining a very simple dynamic system,
formed of only three bodies attracted to one another by the force of gravity,
one of them very small in relation to the other two. He tried to determine
how it would move and to represent its evolution graphically as a trajectory
in a mathematical space called "phase space" (to which we shall return
below). The structure of this trajectory, called homoclinic, turned out to be
so extraordinarily complicated and so far from intuition that Poincaré
stopped trying to draw it and wrote in his New Methods of Celestial
Mechanics:
Let us try to imagine the figure formed by these two curves and their
infinite intersections, each of which corresponds to a doubly asymptotic
solution. Their intersections form a kind of trellis, a weaving, a net
of infinitely fine mesh. Neither of these curves can ever cross itself;
rather, each must fold back upon itself in a very complex manner in
order to cut through the mesh of the net infinitely many times. One is
struck by the complexity of this figure, which I shall not even attempt
to draw. Nothing is more suitable to give us an idea of the complexity
of the three-body problem and, in general, of all the problems of
dynamics in which there is no uniform integral.
This unexpected complexity, which so surprised the brilliant mathematician,
is what is studied today in dynamic systems through structures such as
homoclinic and heteroclinic orbits, as well as strange attractors,
representations fundamental to the examination of deterministic chaos.
Poincaré's work has been basic for the study of dynamic systems and stands
as the foundation of the methods that today permit approaching the
mathematics of deterministic chaos. However, there was a period of more than
sixty years when his contributions on non-linear functions were practically
ignored, and only recently, beginning with the work of A. Kolmogorov and V.
Arnold in Russia, S. Smale in the United States, and D. Ruelle and R. Thom in
France, has serious work in this field resumed.
Despite Poincaré being a visionary, he did not advance in the study
of those systems, which may be attributed to two principal reasons. The first
is that in his era numerical calculations were done manually, which made the
numerical treatment of non-linear systems of equations impossible; it had to
await the appearance of computers. The other reason is his
philosophical attitude, shared with the majority of the mathematicians and
physicists of his era, and which is clearly reflected in a writing by
Poincaré where he notes the existence of certain mathematical functions
that do not meet the classical requirements of being continuous and
differentiable: "Logic sometimes engenders monsters. For half a century we
have seen a multitude of rare functions emerge that seem to strain to
resemble as little as possible the honest functions that serve some purpose.
No more continuity, no more derivatives, et cetera."
What would have been Poincaré's astonishment before the attractor
that Lorenz discovered to model meteorological phenomena! Not because of its
mathematical aspect, since as we saw he understood perfectly the
impossibility of making long-term predictions, but because this strange
attractor, which permits graphing the essential characteristics of the
dynamic system of the atmosphere, is built from the "rare" mathematical
monsters that he so mistrusted. He had arrived at discerning the fascinating universe of the
mathematics of the unpredictable, but he retreated before that lack of
regularity, of continuity, which caused him so much displeasure.
A consequence of that vision was that the scientists imbued with it have
tended to ignore certain fields of research, like those that shelter the
"monsters" which Poincaré mentioned (for example, curves without tangents
and all the algorithms that today we would call "chaotic").
To consider deterministic chaos with real scientific value, one must
renounce the belief in a hierarchy where the top is occupied by the perfect
forms (circle, sphere, et cetera). But, why accept that a circle is superior
to a complex figure, such as a fractal or any element of nature? As the
mathematician B. Mandelbrot would say, "mountains are not cones, nor are
clouds spheres, nor does a river run in a straight line; without doubt, the
forms that surround us throughout our lives have always been very far from
Euclidean simplicity."
This change in valuation has a cultural and philosophical origin, and if
this is overlooked, one risks reducing the modern study of chaos to a mere set
of new techniques.
An important aspect of this new focus is the indispensable use of the
computer, which permits making the calculations and additionally translating
into images the behavior of dynamic systems.
Here too there is a clear distinction between the attitudes of
mathematicians who adhere to one approach or the other. Just as the Greek
geometers rejected the use of tools other than ruler and compass, many
"pure" mathematicians today continue to resist the use of computers for
mathematical proofs, on the valid grounds that the number of decimals a
computer can handle in any numerical result is limited.
Another new aspect is the deliberate introduction of randomness into many
calculations, for example through series of pseudo-random numbers, with the
goal of modeling natural phenomena that have unpredictable components.
The use of probabilities had been begun in the past, as we saw, by
Quetelet in the social sciences and by Maxwell in the kinetic theory of
gases, but it was considered an index of ignorance, marking the limits of the
zones that true science had managed to clarify; randomness and probabilities
could be accommodated only where true science could not be applied.
However, in the first quarter of the 20th century, quantum mechanics showed
that the concept of probability and statistical methods are essential to the
formulation of the laws of physics, at least at the atomic scale, and from
then on their conceptual significance began to be appreciated.
The mathematicians and physicists were aware that there is a certain
amount of disorder in nature, yet they did not believe its study constituted a
true science.
Now, on the other hand, chaos is fully recognized, and it is accepted that
alongside the phenomena that exhibit order and regularity there is a vast
universe of disordered, irregular phenomena, not reducible to pure forms,
which can also appear unexpectedly in very simple systems, like the one
formed by three bodies.
Chaotic phenomena can be studied scientifically, for although there is no
possibility whatsoever of foreseeing the detailed behavior of their
individual components, a qualitative prediction can still be made of the
evolution of the system as a whole, and one can also seek the conditions
under which a dynamic system in a state of order passes into chaotic
behavior, and vice versa.
As can be seen, some of the most interesting ideas of present-day science
are related to these topics, which have obliged us to determine the true reach
of determinism, of our capacity to predict the future, and the true
significance of natural laws. Only time will give us an exact idea of the true
impact of those changes upon the method and the philosophy of science.
Now let us turn to examining the characteristics of dynamic systems which
can exhibit chaotic behavior.
IN NATURE THERE ARE MANY CYCLICAL PROCESSES
When one sets out to study the dynamic processes observed in nature, the
universality of periods and oscillations should be underlined: they are
ubiquitous in physics, astronomy and biology, whether in the structure of the
clouds in the atmosphere, the eddies of a river torrent, the melodious sound
of a violin, or the flow of electric energy in an electronic circuit, and they
range from the familiar movement of pendulums and of the orbits of the planets
to complex biological rhythms: fluctuations in the reproduction cycles of an
animal population, respiration, cardiac rhythm, the alternation of waking and
sleep, neurophysiological processes, et cetera. In all of these one finds this
characteristic periodicity of behavior.
Since the settings and phenomena that present this oscillatory behavior
are legion, it is natural that the oscillator should inspire growing
scientific interest. But now we go far beyond that, for the conclusion to be
drawn from the foregoing is that the properties of oscillators can be applied
to the whole set of dynamic processes.
In effect, any process that fluctuates over time can be described
mathematically as the algebraic sum of a set of periodic oscillations, by a
method called Fourier analysis.
As can be seen in figure II.3, if we graph the variations over time of a
process O that we wish to analyze, we obtain an irregular curve, which does
not repeat over time. But Fourier demonstrated that it can be considered as the
resultant of summing various regular periodic curves, such as those of A, B
and C, and hence any theory of dynamic processes can draw upon the concept of
oscillators, which in combination account for any variation over time.
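Fourier's idea can be sketched numerically. In the following illustration (ours, not the book's), a signal is built as the sum of three sinusoids with assumed amplitudes 1, 0.5 and 0.25, and a discrete Fourier correlation recovers exactly those three components and no others:

```python
import math

# An irregular-looking signal built as the sum of three regular sinusoids
# (frequencies 1, 3 and 5 cycles over the window, amplitudes 1, 0.5, 0.25).
N = 1024
signal = [math.sin(2*math.pi*1*n/N)
          + 0.5*math.sin(2*math.pi*3*n/N)
          + 0.25*math.sin(2*math.pi*5*n/N) for n in range(N)]

def amplitude(k):
    """Recover the amplitude of the component at k cycles per window
    by correlating the signal with cosine and sine of that frequency."""
    re = sum(s*math.cos(2*math.pi*k*n/N) for n, s in enumerate(signal))
    im = sum(s*math.sin(2*math.pi*k*n/N) for n, s in enumerate(signal))
    return 2*math.hypot(re, im)/N

amps = [amplitude(k) for k in range(8)]
# amps peaks only at k = 1, 3 and 5: the three hidden oscillators
```

The analysis, run in reverse, is exactly the synthesis the text describes: summing the recovered oscillators reproduces the irregular curve.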
It is for this reason that, as an initiation into the study of dynamic
systems we shall examine in detail the behavior of an elementary oscillator,
such as a simple pendulum, and later we shall see what happens when two or
more of these elementary oscillators are combined to form a dynamic system.
III. Simple pendulums can be very complex
WE SHALL ANALYZE here under what conditions the movement will be
chaotic in a system as uncomplicated as can be: one formed of a pair of
pendulums.
The pendulum is a dynamic system that repeats its behavior at regular
intervals, which we call periods.
Pendulums have been studied at least since the age of Galileo, so one
might suppose they would have no occasion to surprise us. Furthermore, for
centuries the pendulum constituted the paradigm of predictability and
regularity. Before the use of quartz oscillators, clocks were regulated by
pendulums.
CONSERVATIVE OR DISSIPATIVE DYNAMIC SYSTEMS
One should first of all distinguish between conservative dynamic
systems, which are those where the energy of the system remains constant
because there is no rubbing or internal friction, and those in which, due to
friction, there is a continual diminution of energy, called dissipative
systems.
The former are also called "Hamiltonian systems," since it proves very
fruitful to describe their behavior with a mathematical function developed by
W. Hamilton that uses positions as variables and, in place of velocities,
momenta (the product of mass and velocity).
Examples of Hamiltonian systems are the solar system and the plasma in a
particle accelerator; examples of dissipative systems are the terrestrial
atmosphere, the oceans, living organisms, and all machines.
We shall examine the characteristics of Hamiltonian systems, beginning
with the simplest case, that is, the ideal pendulum, which no one will find in
any laboratory, yet which permits establishing the essential ideas for
studying oscillators.
This ideal pendulum moves in a two-dimensional space, that is, upon a
vertical plane; the thread from which it hangs is rigid and weightless, there
is no friction at the pivot, and it swings in a vacuum, so that there is no
air resistance to its movement.
Thus we are dealing with a simple Hamiltonian system, and it is possible
to describe its movement mathematically through Newton's equation: a
differential equation that links the second derivative, with respect to time,
of the angle the thread forms with the vertical (that is, the variation over
time of the speed with which the angle changes) with other parameters such as
the length of the thread and the acceleration of gravity.
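As a sketch (with assumed values g = 9.81 m/s² and L = 1 m, not taken from the text), the equation d²θ/dt² = -(g/L)·sin θ can be integrated step by step; for small angles the measured period approaches the classical value 2π√(L/g):

```python
import math

# Semi-implicit Euler integration of the pendulum equation
#   d²θ/dt² = -(g/L)·sin(θ)
# with assumed values g = 9.81 m/s² and L = 1 m.
g, L, dt = 9.81, 1.0, 1e-4
theta, v = 0.05, 0.0                 # released from rest at a small angle
crossings, prev, t = [], theta, 0.0
for _ in range(200000):              # twenty seconds of motion
    v += -(g/L)*math.sin(theta)*dt   # update the velocity from the equation
    theta += v*dt                    # then update the angle
    t += dt
    if prev < 0 <= theta:            # an upward crossing of the vertical
        crossings.append(t)
    prev = theta

period = crossings[1] - crossings[0]
# period comes out close to the small-angle value 2π·sqrt(L/g) ≈ 2.006 s
```

The measured period matching the textbook formula is a quick sanity check that the numerical model captures the differential equation described above.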
To represent its movement graphically, we shall need to calculate at
every moment its angle with respect to the vertical and its velocity, which we
shall call θ (theta) and v respectively. We wish to know how each changes as
time elapses, and for that it will be supremely useful to use the space of the
phases.
THE SPACE OF THE PHASES AND COUPLED PENDULUMS
This is an abstract mathematical space, which should not be confused with
the one in which the components of the system really move, but whose
geometrical forms contain concrete information: the variables that describe
the movement of the dynamic system.
We shall use here a method which is the exact opposite of the one used by
René Descartes when he conceived the use of coordinates.
Descartes discovered how to transform the geometry into numbers by
imagining space as an immense grid, the system of Cartesian coordinates, so
that the position of any point in space is defined through numbers that
measure its distance to these reference coordinates. Accordingly, any geometric
form can be expressed in numbers through a mathematical equation which refers
to the Cartesian coordinates.
Now, to introduce the space of the phases, we do the reverse: we transform
numbers into geometric forms by treating those numbers as coordinates in such
an imaginary space.
The advantage of proceeding in this way is that it is enough to observe
that geometrical form to know how the behavior of the system will vary with
the passage of time. For that, one uses as coordinate axes the system's
dynamic variables, that is, those magnitudes that change over time, such as
the velocity of the pendulum and its angle with respect to the vertical.
At an initial moment the pendulum, then, will be represented by a point in
the graphic of the space of the phases which indicates to us its velocity and
angle. An instant later it will have another position and velocity, and to this
a different point corresponds, and thus we can create the history of the
pendulum through successive points which trace a trajectory. In this manner,
dynamics switches from studying long lists of numbers to visualizing how a
geometric figure evolves over time.
If the result after a time is a single fixed point, the system is static,
does not evolve. If there is a curve and it is closed, this indicates that the
system periodically repeats its behavior; if it is an open curve, its
characteristics must be examined to see whether or not there are underlying
regularities.
It was the great mathematician Henri Poincaré who had the brilliant
idea of proposing this method, by which dynamics can be made visible.
Researchers try to discover the general characteristics of a system, which are
better appreciated if one observes the forms that appear in the space of the
phases.
In general, this way of representing a dynamic system has among its
advantages that the coordinates of the space of the phases can represent any
characteristic of the dynamic system which varies with time, such as, for
example, the electrical signals of the heart, the population of bees in a
hive, the value of the dollar, et cetera.
A general principle of trajectories in the space of the phases is that
none can touch another, as a consequence of the deterministic character of
this description: if two trajectories intersected at a point, there would be
two different curves starting together, which would correspond to two distinct
solutions of the differential equations of the system, that is, two different
behaviors at the same time.
Thus, if we were representing an automobile as a dynamic system and two
trajectories were to touch at a point, it would indicate that we have a single
vehicle moving simultaneously at, for example, 30 and 90 km per hour, which
obviously is opposed to the unambiguous, deterministic character of these
phenomena.
To represent the behavior of the ideal pendulum in the space of the
phases, we begin by drawing two reference axes, or coordinates, one horizontal
θ for the angles, and another vertical v for the velocities. Now we mark the
path or trajectory of the pendulum onto this map as in the figure III.1b above.
Let us suppose that starting at an initial instant when we provide an impetus,
every tenth of a second we measure the angle and velocity, and mark the
corresponding point in figure III.1b. As we can see in figure III.1a, at the initial
time T = 0, the pendulum forms an angle -θ with respect to the vertical, a
situation that is represented by the point T = 0 in figure III.1b. When it is
released, it will move to the right with growing velocity, which will be at a maximum
when it passes the vertical at the instant T = 1, a moment at which θ = 0, and
this is represented in figure III.1b with the point T = 1. Upon attaining the maximum
displacement to the right, so that θ = +θ, its velocity will have been reduced
to zero, and this corresponds to the point T = 2 in figure III.1b.
Continuing this procedure one will obtain a set of points that trace the
dynamic trajectory of the oscillator, a trajectory that represents the
complete movement of the pendulum throughout a cycle.
Given that the path is repeated cycle after cycle, the map for this
simple pendulum is a single closed trajectory, also called an orbit, by
analogy with the movement of the planets.
If we give the pendulum a greater initial push, the maximum angle will be
greater. Thus, in a single graphic we can represent the movement of the same
pendulum with different initial velocities, and for each one of them there is
a distinct orbit. This is one of the characteristics of conservative or
Hamiltonian systems.
A family of curves is thereby obtained, which fill the plane
θ,v (figure III.2). For an ideal pendulum, and for small angles
of separation from the vertical, these curves are concentric circles, which
correspond to the simplest solution of the differential equation, called in
this case the "equation of the simple harmonic oscillator," and which is a
linear equation.
As can be seen, there is a central point A at the crossing of the
axes, that is, at zero values of θ and v. This represents
the pendulum when it has zero velocity and hangs on the vertical, that
is, when it is at rest.
If now we displace the pendulum far from the vertical, the relation between
the force that moves it and the angle is described by a much more
complicated differential equation, which is not linear. Its solution is
graphed in figure III.3. The family of possible trajectories now resembles an
eye. The central point A continues to represent the immobile pendulum
(zero velocity and angle). The concentric ellipses B correspond to
cycles of the pendulum ever more distant from the vertical, like a swing that
is given a greater push each time, until it reaches 90 degrees from the
vertical and then begins to rise above the pivot.
What happens when it receives an impulse such that it surpasses 180
degrees?
As we know, it will not oscillate, but will rotate in a circle in one
direction or the other, like a propeller.
This is represented in the space of the phases as the families of curves
indicated as C and D. As can be seen, those in C have a
positive velocity, that is, they represent complete rotations in one direction,
for example clockwise like the hands of a clock, and those in D
represent the contrary movement, which can be indicated as a velocity
with a negative sign.
This method of representing the behavior of the dynamic system in the
space of the phases allows us to appreciate its essential characteristics at a
glance: if the graph is a point, the system is at rest; if there are concentric
circles, it is oscillating with small amplitude; if there are ellipses, the
oscillations are broader; if the curves are in the zones C or D, it is
rotating instead of oscillating.
The two curves S that form the border between the oscillation
regions B and the rotation regions C, D are called "separatrices" and
correspond to the pendulum positioned exactly on the vertical at its highest
point (figure III.3), that is, the position of maximum instability. From that
position it will fall rotating in one direction or the other, acquiring an
ever greater velocity until it passes the lowest point on the vertical, and
then ascend with decreasing velocity until returning to the highest point. On
the separatrix curve, the total energy available to the pendulum is exactly
equal to what it needs to leave that highest point upon the vertical and
return to it.
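The separatrix condition can be checked numerically. In this sketch (our own, with assumed g and L), the energy needed to just reach the top corresponds, for a launch from the lowest point, to a speed of 2√(g/L); slightly below that speed the pendulum oscillates (region B), slightly above it rotates (regions C and D):

```python
import math

# Launching the pendulum from the bottom: below the separatrix speed it
# oscillates, above it the pendulum rotates past the top. g and L assumed.
g, L, dt = 9.81, 1.0, 1e-3

def rotates(v0):
    """Start at the bottom with speed v0; report whether θ ever passes π."""
    theta, v = 0.0, v0
    for _ in range(20000):               # twenty seconds of motion
        v += -(g/L)*math.sin(theta)*dt
        theta += v*dt
        if abs(theta) > math.pi:
            return True
    return False

v_sep = 2*math.sqrt(g/L)                 # speed on the separatrix
below, above = rotates(0.9*v_sep), rotates(1.1*v_sep)
# below is False (oscillation), above is True (rotation)
```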
What happens in a more complicated Hamiltonian system, formed from two
coupled ideal pendulums (A and B), that is, pendulums which
mutually influence each other's movement?
Each of the pendulums has its own oscillation period, but now this is
affected by that of the other. If we ignore pendulum A, the
movement of pendulum B will trace a closed curve in a two-dimensional
space of the phases. If we ignore pendulum B, the movement of A
will trace another closed curve in a different two-dimensional space. But if
the two pendulums interact, they are no longer independent, and to represent
the trajectories the two phase planes must combine into a single space whose
dimension accordingly increases from two to three.
Let us suppose that period A is, for example, nine times longer
than that of B. If A were independent of B, it could be
represented in figure III.4 as a closed curve upon a horizontal plane, and
B as on a plane perpendicular to the horizontal. But when their
movements are coupled, this is represented in the space of the phases by
combining both closed curves: as A is displaced horizontally, curve B
deviates from the horizontal plane, in a movement comparable to winding a
rope around an inner tube. The result of one circle rolling around another is
a figure in the form of a solenoid ring, upon the surface of what
mathematicians call a torus (figure III.4). Here we can consider that cycle
A is the axis of the torus, and hence B is a cycle perpendicular to that axis.
Now we can see the three-dimensional torus in more detail.
If the periods or frequencies of the two coupled pendulums are in a simple
relation, for example if one has a period nine times greater than the other,
the ratio is 9/1, and the line circling the surface of the torus traces a
solenoid which always passes over the same points on the torus no matter how
many revolutions the combined pendulums complete, which demonstrates that the
dynamic system is exactly periodic. Thus, in figure III.4, the initial point
where T = 0 and the final one on completing a period, T = 1, coincide.
But what happens when the periods of the coupled oscillators are
incommensurable, that is, when the quotient of those periods is an
irrational number? An irrational number cannot be written as a ratio, and its
decimal expression contains an infinite number of digits without a repetitive
pattern.
If the coupled system has an irrational relation between its periods, the
curve in the space of the phases will wind around the torus, passing on every
turn over different points, as is seen in figure III.5, where the initial
point T = 0 and the one corresponding to a period, T = 1, do not coincide;
with the passage of time the curve will cross every point of the surface until
totally covering the torus, and will never repeat itself. A system with these
characteristics is called almost periodic.
Mathematicians are capable of working with tori of any number of
dimensions, which means it is perfectly possible to combine more than two
oscillators and represent their combined movement on the surface of a
multidimensional torus.
POINCARÉ SECTIONS AND MULTIDIMENSIONAL SPACES
Although there is in principle no restriction on the number of dimensions of
the space of the phases, it is evidently much easier to visualize forms
with only three or two dimensions (volumes or surfaces).
There is a method, conceived by H. Poincaré, for visualizing the
essential properties of complicated trajectories in spaces of three or more
dimensions, which consists of lowering that number of dimensions by one.
Consider the example of the trajectories that surround the torus in a
bi-periodic system, forming a solenoid surface in three dimensions.
We now slice the torus with a transverse plane (figure III.6) and mark
the points where the trajectory intersects the plane. Since the trajectory is
periodic, one point will be marked on each turn, so that after a sufficiently
long time one will have a true map upon the plane. In practice this is very
advantageous for the simplification involved in reducing the number of
dimensions by one, and also because one passes from a continuous description
over time of the trajectory in the space of the phases to taking only the
data of the movement each time the trajectory crosses that section, which
means that a much smaller amount of data must be handled.
What can the Poincaré section tell us for this bi-periodic system?
If the relation of frequencies is a rational number, the curve is fixed
upon the torus, each turn superimposed on the previous one, so that in
the section there appear only the corresponding isolated points, whose number
and position depend on the relation of the periods.
If, however, the system is almost periodic, with a relation of
frequencies that is an irrational number, the curve will pass through a
different point on every turn, in time covering the whole surface of the
torus, such that in the Poincaré section one has a closed curve.
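The two cases can be sketched numerically. In this illustration (ours, not the book's), each time the first oscillator completes a turn we record the phase of the second, measured in fractions of a turn; this is a schematic Poincaré section of the torus:

```python
import math

# Schematic Poincaré section: at the k-th crossing, the second oscillator's
# phase has advanced by k times the period ratio (modulo one full turn).
def section(ratio, turns=200):
    points = set()
    for k in range(turns):
        phase = (k*ratio) % 1.0            # second phase at the k-th crossing
        points.add(round(phase, 9) % 1.0)  # merge float-rounding twins
    return points

rational = section(1/9)                    # periods in the ratio 9:1
irrational = section(math.sqrt(2) - 1)     # incommensurable periods
# rational gives exactly 9 distinct points; irrational, a new one per turn
```

With a rational ratio the section shows a finite set of isolated points that repeat forever; with an irrational ratio every crossing lands on a new point, gradually filling out the closed curve of the section.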
The analysis of more complicated oscillator systems requires the
introduction of a space of the phases with more dimensions, since as we have
seen, the number of dimensions depends upon the quantity of independent
variables in the system, which might be the velocity or the momentum, the
position, or some other dynamic characteristics that define the behavior of
each of the components.
To be able to handle these concepts of multidimensional space, crucial
for the study of complex systems, one must generalize the geometry of
coordinates.
Let us say that we live in a tri-dimensional space, given that our
movement in space has three degrees of freedom: we can make three types of
movement which have perpendicular directions between themselves (left-right,
front-back, up-down) and any point in space can be reached by combining those
three possible types of movement, such that its position can be indicated
through three numbers that we shall call the x, y, z coordinates of the
point, and which give the distances from the point to a reference in those
three perpendicular directions.
We can also refer to an abstract space of four dimensions, with four
coordinates w, x, y, z, to one with five dimensions, with coordinates
v, w, x, y, z, and so on successively, always by keeping in mind that
now we do not refer to the physical tri-dimensional space in which we live and
move, but instead to a mathematical space. This becomes of enormous utility
when one tries to comprehend the mathematics of many variables.
In any problem, be it in physics, biology or economics, any significant
magnitude can be considered, and visualized, as a dimension of the problem. An
economist can work in a multidimensional "space" with variables for the
cost-of-living index, the cost of shelter, the value of the dollar, the price
of oil, the quarters of the past decade, et cetera.
A physicist can study a dynamic system formed of three bodies, for
example, where to each of them correspond the three positional
coordinates x, y, z plus the three coordinates of velocity or of
momentum, that is, a total of 18 coordinates, so that we are dealing with
a space of 18 dimensions.
Nothing prevents us from continuing to augment the number of components
of the system, so that we can say that, in general, a dynamic system with
n independent variables--n degrees of freedom--can be
represented in a space of n dimensions.
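The dimension count of the three-body example can be written out as a toy illustration (a sketch of the bookkeeping, nothing more):

```python
# Three bodies, each carrying three position coordinates and three
# momentum coordinates, give a single point in an 18-dimensional
# space of the phases.
bodies = 3
coords_per_body = 3 + 3                        # (x, y, z) plus (px, py, pz)
dimension = bodies*coords_per_body
state = tuple(0.0 for _ in range(dimension))   # one phase-space point
```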
In general, this is valid for ordered and stable systems: though they
may be formed from a great number of components, and accordingly should
be represented in a space of the phases with a great number of dimensions, in
practice they move within a very small sub-space of that vast multidimensional
space which represents the physical state in all its slightest details, a
region whose volume we shall call H. Thus a solid body, for example a rock,
although composed of an enormous quantity of molecules, has them so rigidly
linked among themselves that they move in unison, and it is enough to
represent the movement with a single point: the rock's center of gravity.
The evolution of the system over time is represented by the trajectory of
H in the space of the phases (figure III.7) where the points are marked
which correspond to the states for the times T = 1, T = 2, T = 3, et cetera.
A dynamic system of ordered behavior can, if its characteristics are
altered, come to have chaotic behavior, and the opposite process can also
occur.
Is it possible to represent those changes in the space of the phases? The
answer is affirmative: as we shall see below, the study of the transition of
an ordered dynamic system to chaos is, in a certain sense, the analysis of how
a movement that can be very simple, limited and repetitive, breaks at a
certain critical point, developing a new behavior which corresponds to a
displacement of the system's trajectory to much vaster zones of the space of
the phases.
Now we shall be able to visualize the difference between predictable
processes and chaotic processes. We shall represent (figure III.8) the
evolution of a dynamic system of predictable behavior as a trajectory
indicated by 1 in the space of the phases. If we vary the initial
conditions slightly, we have trajectory 2, which remains close to trajectory
1, indicating that the behavior is practically the same (figure III.8a). On
the other hand, in a chaotic process, trajectories 1 and 2, initially
nearby, separate more and more over time, signaling the growing divergence of
their behaviors (figure III.8b).
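The contrast can be sketched with the pendulum itself (our illustration, with assumed numerical values): two pendulums started only a millionth of a radian apart stay practically together at small amplitude, but when released almost upright, near the unstable top position, that same tiny difference is enormously amplified:

```python
import math

# Two pendulums started 1e-6 rad apart: the small oscillation stays
# predictable, the near-inverted start amplifies the tiny difference.
g, L, dt = 9.81, 1.0, 1e-3

def max_gap(theta0, delta=1e-6, steps=30000):   # thirty seconds of motion
    th1, v1 = theta0, 0.0
    th2, v2 = theta0 + delta, 0.0
    worst = 0.0
    for _ in range(steps):
        v1 += -(g/L)*math.sin(th1)*dt; th1 += v1*dt
        v2 += -(g/L)*math.sin(th2)*dt; th2 += v2*dt
        worst = max(worst, abs(th1 - th2))
    return worst

regular = max_gap(0.1)                 # small oscillation: trajectories stay close
sensitive = max_gap(math.pi - 1e-5)    # released almost at the top
# regular stays near 1e-6; sensitive grows by many orders of magnitude
```

Strictly, the single frictionless pendulum is not chaotic; the instability near the inverted position is what produces the divergence here, but it illustrates exactly the separation of trajectories that figure III.8b depicts.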
CONSERVATION OF VOLUMES IN THE SPACE OF THE PHASES
As we have seen, the total energy of a Hamiltonian system is invariable. How
is this expressed in the space of the phases?
Let us consider the representation of the evolution of a dynamic system in
the space of the phases. We saw that there is a region of volume H that
represents the system, and that it displaces through the space of the phases,
tracing a trajectory which expresses its evolution over time.
In the 19th century, the mathematician J. Liouville demonstrated that for
every Hamiltonian system the volume of this region H remains constant
over time, as if it were a drop of an incompressible liquid.
This property of Hamiltonian systems is much more restrictive than the
simple conservation of energy or the time reversibility of the equations of
motion.
The conservation of the volume of the region in the space of the phases
can take place in two different ways: 1) the region under consideration
shifts along the trajectory, rotating and deforming in a periodic fashion, and
then the neighboring trajectories "coil up" without separating much from each
other; 2) the volume H stretches over time in one direction while contracting
in the perpendicular direction. While in the first case two initially close
trajectories remain nearby, in the second they tend to separate.
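Liouville's theorem can be checked numerically for the pendulum. In this sketch (ours, with assumed g and L), we follow a tiny phase-space parallelogram, spanned by two tangent vectors, along a trajectory, updating the vectors with the linearized equations; its area must remain constant:

```python
import math

# Follow two tangent vectors u and w along a pendulum trajectory; the
# parallelogram area they span (the determinant below) stays constant,
# as Liouville's theorem requires for a Hamiltonian system.
g, L, dt = 9.81, 1.0, 1e-3
theta, v = 0.7, 0.0
u, w = [1.0, 0.0], [0.0, 1.0]          # unit vectors along θ and along v

for _ in range(5000):                   # five seconds of evolution
    c = (g/L)*math.cos(theta)           # local linearization coefficient
    for vec in (u, w):                  # same update as the flow, linearized
        vec[1] -= c*vec[0]*dt
        vec[0] += vec[1]*dt
    v += -(g/L)*math.sin(theta)*dt      # the trajectory itself
    theta += v*dt

area = u[0]*w[1] - u[1]*w[0]            # parallelogram area in the (θ, v) plane
# area stays equal to 1: the phase-space volume is conserved
```

The integration scheme used here is itself area-preserving, which is why the check comes out essentially exact rather than merely approximate.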
From a dynamic viewpoint, the difference is considerable. In effect, the
trajectories are stable in the first instance, unstable in the second, for
here a weak initial separation can be amplified with the passage of time.
REVERSIBILITY IN TIME
Another important property of the equations that describe the behavior of a
Hamiltonian system is that changing the sign of time, that is, replacing
+t with -t, has no effect whatsoever; the equations are identical. That
is to say, if the movement of an ideal pendulum is filmed, one cannot discern
in which direction the film is run.
One says, in general, that conservative systems have reversible
mechanics.
As we shall see below, irreversibility in time is characteristic of those
systems in which energy, instead of being conserved, dissipates.
WHERE IN PENDULUMS DOES CHAOS APPEAR
The study of deterministic chaos began in the Sixties with the pioneering work
of E. Lorenz, D. Ruelle and F. Takens, who using computers, which in those days
began to show their fantastic potential, demonstrated that even simple dynamic
systems, formed with only a few oscillators, can behave in an unpredictable,
chaotic manner.
It is easy and amusing to prove this with a system of two coupled
pendulums - simple to build and even obtainable as a toy. In figure III.9, the
system consists of a light pendulum formed from two small spheres united by an
axis around which they can rotate. This axis hangs from a heavier pendulum,
which is what imposes the basic oscillation.
Both pendulums have permanent magnets attached, so that their movements
are coupled. At the base of the system there is a small electromagnet, fed by
an electric oscillator circuit, which maintains the oscillations of the main
pendulum so that they do not damp out (an entrained pendulum).
Once an initial push is given, the heavier pendulum oscillates with a
clock's regularity, while every time one of the small spheres swings near the
large pendulum it receives an impulse due to the attraction between the
respective magnets. Soon a surprising spectacle occurs: the light pendulum
performs a strange, erratic dance, oscillating at times in a rhythmic fashion,
only to jump unpredictably into chaotic movements.
How will this behavior be represented in the space of the phases?
Let us take the case of the dissipative dynamic systems, which abound in
this world, and whose energy diminishes through friction and other effects.
We know that if we give an impetus to an actual pendulum, it will
oscillate or rotate but soon, as opposed to the ideal pendulum we have studied
previously, its movement will continually fade, until finally it will hang
still, unless it receives new energy.
This behavior can be represented as a spiral trajectory in the space of
the phases, which results in point A as the final resting position
(figure III.10).
No matter what initial impulse the pendulum receives, in every case the
loss of energy finally immobilizes it, that is, the trajectory inevitably
leads to the point of repose A, as if this attracted the curves in the
space of the phases. Hence the name of "attractor" that the fixed point
A receives.
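The spiral toward the point attractor can be sketched by adding a friction term to the pendulum equation (the friction coefficient b and the initial pushes are our assumed values, not the book's):

```python
import math

# Adding a friction term -b·v makes every trajectory spiral into the
# fixed-point attractor (θ = 0, v = 0), whatever the initial push.
g, L, b, dt = 9.81, 1.0, 0.5, 1e-3

def final_state(theta, v, steps=60000):         # one minute of motion
    for _ in range(steps):
        v += (-(g/L)*math.sin(theta) - b*v)*dt  # friction dissipates energy
        theta += v*dt
    return theta, v

# Very different initial conditions, same destination: the attractor A.
s1 = final_state(0.3, 0.0)
s2 = final_state(2.5, 0.0)
```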
Of course a pendulum clock would be of very little utility if this
trajectory from the first impulse until rest were to take only a few minutes,
and therefore diverse mechanisms have been invented that replace the energy
which is lost in functioning: weights that stretch elastic springs,
electromagnets that cyclically change their polarity. The result is that the
pendulum moves with a regular rhythm despite friction and air resistance.
Hence, in a pendulum that receives energy, the curves in the space of the
phases correspond to figure III.11.
In fact, if the pendulum is given an additional push, or if it is
momentarily slowed, it will eventually return to its original rhythm, which
corresponds to the trajectory C. This curve evidently constitutes a new
type of attractor, since instead of the system being attracted to a fixed
point, it is brought into a trajectory that forms a closed curve.
Notice that there is an important difference with respect to an ideal
pendulum that moves without friction or loss of energy: in that case, the
smallest perturbation, whether adding a greater impulse or slowing it, causes
the pendulum's orbit to change by contracting or expanding a little; that is,
it jumps from one concentric curve to another, larger or smaller one. By
contrast, the trajectory of a mechanically assisted pendulum has stability and
resists small perturbations (figure III.11): when it is given a greater
impulse it gradually dissipates it, and if it is slowed it receives energy
from its source, so that in both cases it finally returns to that single
closed curve, which accordingly is also an attractor, and which we shall call
the "limit cycle attractor."
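Convergence onto a limit cycle can be sketched with the van der Pol oscillator, a classic self-sustained oscillator that here stands in for the entrained pendulum (our substitution, with an assumed parameter mu): trajectories started inside and outside the closed curve both settle onto the same cycle:

```python
# The van der Pol oscillator: its nonlinear friction pumps energy into
# small oscillations and damps large ones, so every trajectory converges
# onto one closed curve, the limit-cycle attractor. mu is assumed.
mu, dt = 0.5, 1e-3

def settled_amplitude(x, v):
    for _ in range(100000):                  # let the transient die out
        v += (mu*(1 - x*x)*v - x)*dt
        x += v*dt
    amp = 0.0
    for _ in range(10000):                   # then measure one stretch
        v += (mu*(1 - x*x)*v - x)*dt
        x += v*dt
        amp = max(amp, abs(x))
    return amp

inner = settled_amplitude(0.1, 0.0)          # starts near the fixed point
outer = settled_amplitude(4.0, 0.0)          # starts far outside the cycle
# both end on the same closed curve, with amplitude close to 2
```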
There are two basic classes of limit cycles: the one we have just
presented for the pendulum, which is stable, where the points on nearby
trajectories move towards it; and there are also dynamic systems with unstable
limit cycles, where the points of nearby trajectories move away from the
curve, which acts as a repeller.
The importance of what we have analyzed for the pendulum is that it
permits understanding the essential behavior characteristics of systems
that act cyclically and which are so frequent in nature: an electric
oscillator, the tides, the vibrations of the air in an organ pipe, the
electrical impulses that cause the heart to beat, the number of individuals in
an animal population...
Let us recall the two types of attractors described up to now: 1) the
point attractor, which corresponds to a stationary system state, nothing
happening over time; 2) the limit cycle attractor, which indicates
periodic behavior and further implies that, although the system is
dissipative and hence losing its energy, that loss is being replaced by the
delivery of energy from some external source.
CONTRACTION OF VOLUMES IN THE SPACE OF THE PHASES
How is the fact that we are dealing here with dissipative systems represented
in the space of the phases? As we saw, the limit cycle attractor indicates
a stable dissipative system, one which replaces the energy that it loses. If,
for example, an entrained pendulum is perturbed by temporarily halting it, the
angle and the velocity diminish and we obtain trajectory B (figure III.12),
but the system receives energy from its source to compensate for this
diminution and the trajectory ultimately merges with limit cycle A.
The same occurs if the pendulum is given an additional push that removes
it from its orbit; with time, the dissipation of this excess energy will cause
its trajectory to converge with limit cycle A.
Accordingly, the limit cycle attractor lies within a zone C of the
space of the phases such that any trajectory initiated from any point
whatsoever inside that region will end by being inexorably drawn to the
attractor. This zone is called a "basin of attraction" (figure III.12).
Let us suppose that in this basin C there is a region R
that represents a set of initial values for positions and impulses in the
system. As the trajectories are attracted by the limit cycle, they approach
each other and end by converging upon it, ultimately comprising a single
trajectory: there is a contraction of the region R, which diminishes until
disappearing into the attractor curve.
Since a curve is a line of a single dimension, it cannot pass through all
the points that comprise a volume; there will always be an infinity of points
not covered by the curve and it is evident, therefore, that in a space of the
phases of three dimensions an attractor should have fewer dimensions than 3,
and this can be generalized:

The dimension d of an attractor in a space of n dimensions
is less than n:

d < n

This is the principal characteristic that distinguishes dissipative
systems from conservative ones. In a dissipative system all the initial
conditions converge towards regions in the space of the phases which have
fewer dimensions than that of the original space.
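This contraction can be illustrated numerically. The sketch below is a minimal illustration, not drawn from the text: it assumes a damped pendulum with a friction coefficient gamma = 0.5 and a simple Euler integration. It follows two nearby initial conditions and shows their separation in the space of the phases shrinking as both are drawn to the point attractor at rest:

```python
import math

# a damped pendulum (assumed illustration: friction gamma = 0.5, Euler steps)
# whose attractor is the rest point at the bottom
def step(theta, v, dt=0.01, gamma=0.5):
    a = -math.sin(theta) - gamma * v      # restoring force plus friction
    return theta + v * dt, v + a * dt

def separation(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (1.0, 0.0), (1.05, 0.0)            # two nearby initial conditions
d0 = separation(p, q)
for _ in range(5000):                     # integrate 50 time units
    p = step(*p)
    q = step(*q)
d1 = separation(p, q)
print(d0, d1)   # the separation has contracted: both orbits approach (0, 0)
```

For a limit cycle the same contraction occurs transversally: the region R of initial conditions is squeezed onto the attractor curve.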
DISSIPATIVE SYSTEMS FORMED FROM TWO COUPLED PENDULUMS
We turn now to study the case of a dissipative system also formed by two
coupled pendulums. Here, in a similar manner to what we saw in the study of
conservative systems formed by combining two ideal pendulums, the system
requires a space of the phases of three dimensions, in which the trajectories
unfold upon the surface of a torus, so that for the case of a dissipative system we
have a more complicated attractor than a fixed point or a limit cycle: we deal
with a curve in the shape of a solenoid that passes through all the points on
the surface of the torus, thereby representing the combined behavior of two
coupled pendulums with periods which are incommensurable, that is, whose
quotient is an irrational number. Hence, we call it an "almost periodic
attractor."
The behavior of such a system is predictable; that is, knowing the
velocities and positions of its components at a given moment, one can
determine them for any other instant. And even if such knowledge has a certain
margin of error, or uncertainty, this remains of the same order of magnitude
for all future determinations. This property translates into the space of the
phases through the fact that two adjacent trajectories continue being adjacent
even as time passes.
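This bounded-error property can be checked directly, since the flow on the torus is linear in the two angles. The sketch below is an illustration under assumed frequencies 1 and sqrt(2) (an irrational quotient, hence almost periodic motion); the initial uncertainty eps never grows:

```python
import math

# two oscillators whose frequency quotient is irrational (sqrt(2)):
# the combined motion is almost periodic, a linear flow on the torus
w1, w2 = 1.0, math.sqrt(2)
eps = 1e-6                         # uncertainty in the initial phases
two_pi = 2 * math.pi

def circ_dist(x, y):
    # shortest distance between two angles on a circle
    d = abs(x - y) % two_pi
    return min(d, two_pi - d)

errs = []
for t in (0.0, 10.0, 1000.0, 100000.0):
    exact = ((w1 * t) % two_pi, (w2 * t) % two_pi)
    perturbed = ((eps + w1 * t) % two_pi, (eps + w2 * t) % two_pi)
    errs.append(max(circ_dist(exact[0], perturbed[0]),
                    circ_dist(exact[1], perturbed[1])))
print(errs)   # the error remains of order eps at all times: no amplification
```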
STRANGE ATTRACTORS AND FRACTAL DIMENSIONS
We now return to a system that displays chaotic behavior, like that of the two
pendulums coupled with a magnet which we had described.
Its representation in the space of the phases will require three
dimensions, in which the trajectory will be a continuous curve, similar
to the case of the almost periodic attractor. Yet what distinguishes it from
this is that if one examines two neighboring trajectories in the space of the
phases, one sees them diverge rapidly, separating ever further. We deal
here with a "strange attractor," whose essential characteristic is the
amplification of the separations, small as they may be, between trajectories
in the space of the phases. This characteristic is called sensitivity to
the initial conditions. We have here the key to understanding why the
determinism that governs a dynamic system does not necessarily imply
predictability.
If this sensitivity appears, the system is unpredictable after a certain
time, no matter what the other characteristics of the space of the phases, or
its number of dimensions, may be. We would only be able to predict its
evolution with exactitude if we were to know with absolutely infinite
precision all the factors that act upon the system, and we know that that is
impossible, given that we necessarily know the initial conditions only in
approximate form.
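A short numerical illustration of this amplification: instead of the coupled pendulums, which are laborious to integrate, we use the logistic map x → 4x(1-x), a standard one-dimensional stand-in for a system with sensitivity to the initial conditions (an assumption of this sketch, not a system from the text). An initial uncertainty of one part in 10^12 reaches order one within a few dozen steps:

```python
# the logistic map x -> 4x(1-x): a standard stand-in (not the pendulums of
# the text) for a system with sensitivity to the initial conditions
x, y = 0.3, 0.3 + 1e-12      # two initial conditions, one part in 10**12 apart
n = 0
while abs(x - y) < 0.1 and n < 1000:
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    n += 1
print(n, abs(x - y))   # the error reaches order one within a few dozen steps
```

Reducing the initial error a thousandfold only postpones this horizon by about ten steps: added precision buys predictability only logarithmically.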
However, the situations corresponding to the other attractors relate to
stable and predictable states, because they do not have such a sensitivity,
but rather the contrary: an insensitivity to the initial conditions.
This is evident because, in accordance with the above figure, there is a
basin of attraction in the space of the phases within which a trajectory that
starts from any initial condition inexorably ends by merging into the
attractor; no matter how the process is begun, the system after a time
concludes by following the stable and foreseeable behavior described by said
attractor.
It is natural to feel surprise at the appearance of chaos in a system as
simple as one comprised of two coupled oscillators, whereas no one finds it
novel or strange that systems as complicated as the terrestrial atmosphere or
a turbulent stream have behaviors that are difficult to prognosticate.
Nevertheless, the study of these simple systems has shed light on those
constituted from a great number of components.
The attractor that characterizes chaotic behavior is transformed in a
spectacular and apparently counter-intuitive manner, since it must reflect
that situation in its geometry.
Its structure should exhibit two opposite tendencies: in honor of its
name, the adjacent trajectories should converge toward the attractor and
conversely, to reflect a state of sensitivity to the initial conditions, the
trajectories should diverge, separating ever further. The speed with which
these successive trajectories diverge is measured with a coefficient
called a "Lyapunov exponent."
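The coefficient can be estimated numerically as the average logarithm of the local stretching along a trajectory. The sketch below again uses the one-dimensional logistic map as a stand-in (an assumption of this illustration, not the system of the figures): a positive exponent signals chaos, a negative one signals insensitivity to the initial conditions:

```python
import math

# Lyapunov exponent of the logistic map f(x) = r x (1 - x): the average
# logarithm of the local stretching |f'(x)| along a trajectory
# (a one-dimensional stand-in for the exponent described in the text)
def lyapunov(r, x0=0.2, n=50000, burn=1000):
    x = x0
    for _ in range(burn):                         # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))       # log of local stretching
        x = r * x * (1 - x)
    return s / n

print(lyapunov(4.0))   # positive (near ln 2 = 0.693...): trajectories diverge
print(lyapunov(2.9))   # negative: trajectories converge, behavior predictable
```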
Furthermore, the other essential condition is that, as we have seen
above, the curves formed by the trajectories cannot cross at a point
(a condition of determinism). The attractor then becomes very difficult to
visualize: its form cannot lie on a surface, whether a plane or a curved
surface like that of the torus surrounded by the solenoid in the simple
case we examined.
The reason for this impossibility is that the divergence between adjacent
trajectories required by the chaotic state cannot be expressed upon the
surface of the torus: after a certain number of turns the trajectories, not
being parallel, would have to cross for the surface to accommodate them all.
The space available on the surface is insufficient, and only one alternative
solution exists, which is what appears in the attractor: the trajectories
detach from the surface, jumping outwards to occupy the surrounding region.
Accordingly the
dimensions of this attractor should be more than the two that correspond to a
surface, or that is:
2 < d
But, and here the difficulty emerges, we are dealing with a simple
system, in a space of three dimensions, and we have seen that, since the
conditions of the dynamic system (velocities, positions, et cetera) are
necessarily limited to a certain range of values, the attractor should occupy
only a restricted zone of the space of the phases, in place of filling it
completely, and therefore its dimension d should be less than 3:
2 < d < 3
That is to say, we must imagine a geometric figure of more than 2 but fewer
than 3 dimensions, a figure in a situation intermediate between a surface and
a volume! Obviously, such a situation is not normal in Euclid's geometry, and
this may have been one of the reasons why the possibility of chaos in simple
systems was discovered so recently.
Yet a new mathematics has recently been developed in which there exist
irregular or fragmented shapes that can be characterized by dimensions which,
unlike the Euclidean ones, are not whole numbers, and which have been called
fractals by the mathematician Benoit Mandelbrot, one of the main driving
forces behind the study of these strange geometries.
This attractor, then, is a fractal, located in a region that includes the
surface of the torus but, being a curve, not occupying all the points in the
volume of that region: there always remains an infinite number of points
through which it does not pass.
Other dynamic systems, such as those formed by electrical oscillators,
turbulent fluids, chemical reactions, display attractors with those
characteristics, which have been baptized "strange attractors" by Ruelle in
1971.
A strange attractor and its Poincaré section can have a general aspect
like that of figure III.13, which we might compare to a ball of thread after a
puppy plays with it for several hours: if indeed it continues to be a single
thread (hence, with a determinate trajectory), it will be impossible, in
following its turns, to predict whether one centimeter further on it will
fold back, head toward the center of the ball, or toward the outside, et
cetera. Since it has sensitivity to the initial conditions, the slightest
alteration of these will be represented by another complex tangle whose
turnings have nothing to do with the first, although the volume it occupies be
practically the same. To obtain the shape of a strange attractor we shall use
the following procedure: we consider the flux in three dimensions of the
trajectories in the space of the phases (figure III.14).
This flux is subject to two opposite influences, since it should contract
because there is an attractor, and it should expand from the sensitivity to
the initial conditions.
To better analyze the process, we shall separate the two effects, such
that the contraction from the condition of being an attractor is exercised in
the vertical direction, while that of expansion is exercised in the horizontal
direction.
Successive transverse Poincaré sections thereby exhibit a rectangle
being deformed, contracting vertically and dilating horizontally. This process
of contraction and expansion should continue to the extent that the flux of
trajectories traverses the attractor. If one measures the expansion on each
successive turn and compares it with the previous one, and it grows
exponentially with a positive factor (the Lyapunov exponent), there is true
expansion of the trajectories.
But we have seen that the volume which the flux occupies in the space of
the phases should remain constant for Hamiltonian systems, because the energy
is conserved, or should diminish for dissipative systems, which lose energy
through friction. Accordingly, the area occupied by the successive
deformations of that rectangle cannot increase; it can only remain
practically constant, if there is little dissipation of energy, or diminish,
as the case may be.
Furthermore, an additional limitation appears here, the one responsible
for the appearance of the fractal form: since the variables which describe
the system (impulses, positions, et cetera) cannot take just any imaginable
value, being necessarily restricted, the flux must confine itself to a region
of the space of the phases and, thus, there is a limit to the expansion of
the rectangle.
In consequence, the only solution that permits the geometry to fulfill
all these conditions while ensuring that the flux remains in a restricted
zone is for the rectangle to fold back into itself.
By carrying out the three operations simultaneously (contraction,
stretching and folding back), the rectangle is progressively transformed into
a horseshoe that, in turn, will flatten, expand and fold, giving birth to a
double-hairpin structure, and so on successively. The area of the figure
resulting from each deformation keeps diminishing, or in certain cases can
remain practically constant, yet it cannot grow.
With each expansion the distance in x increases between points that
previously were contiguous, which then separate exponentially, as corresponds
to the condition of sensitivity to the initial conditions measured by the
Lyapunov coefficient. The strange attractor is thus fabricated in a fashion
similar to the one the baker uses for bread dough, and it is not surprising
that its structure should resemble that of a puff pastry.
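This stretch-contract-fold process has a standard minimal model, the "baker's map", sketched below (an illustration with an assumed vertical contraction a = 1/4, not the attractor of the figures). Horizontal distances double at each step while the occupied area is halved: exactly the combination of stretching and dissipation described above:

```python
# the "baker's map", a minimal model (an illustration, not the attractor of
# the figures) of the stretch-contract-fold process, with contraction a = 1/4
def baker(x, y, a=0.25):
    if x < 0.5:
        return 2 * x, a * y            # stretch in x, contract in y
    return 2 * x - 1, a * y + 0.5      # the overhanging half is folded back

p, q = (0.1, 0.3), (0.1 + 1e-6, 0.3)   # two points close together in x
for _ in range(10):
    p, q = baker(*p), baker(*q)
dx = abs(p[0] - q[0])
print(dx)   # the x-separation has doubled at every step: 1e-6 * 2**10
# meanwhile each step multiplies the occupied area by 2a = 1/2 (dissipation),
# and the vertical cross-section of the attractor becomes a Cantor-like set
```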
If one examines the Poincaré section of the attractor in the previous
figure, one sees a band structure that repeats endlessly (figure III.15).
Every band is comprised of sub-bands and these, in turn, of others of
similar structure; no matter at what scale we examine the microstructure, its
aspect will be similar. This property is called "self-similarity" and is one
of the characteristics of fractal forms, which, as we have seen, have
dimensions that are not whole numbers.
In summary, for the behavior of a dynamic system to become chaotic it is
enough that it have a minimum of three degrees of freedom and, hence, a space
of the phases with a minimum of three dimensions. If in the corresponding
attractor we find that the Poincaré section displays structures that are
self-similar, we shall know that the attractor is strange, which implies
sensitivity to the initial conditions and, accordingly, impossibility of
predicting the behavior of the system beyond a certain time.
These strange attractors, endless curves located in the space of the
phases, are in general geometric figures of rare beauty. It is not possible
to calculate them exactly from mathematical equations, since none can be
described in a precise manner, and the only route toward constructing and
visualizing their aspect in the space of the phases is through the computer,
which is how they were discovered.
It is important to note that the strange attractors are figures which
occupy only one zone in the space of the phases, and this permits diagnosing
at a glance whether a system that appears as chaotic has underlying
regularities or whether its behavior is purely random.
If one can construct an attractor based on data, this tells us there is
some non-linear mechanism operating upon the system and that, therefore, we
deal with deterministic chaos. If, however, the represented behavior shows a
dispersion of points throughout the entire space of the phases, this indicates
phenomena which occur by chance, what is technically called white noise.
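This diagnostic can be sketched with a first-return map, plotting each value of a time series against the next. The illustration below is an assumption of this sketch (deterministic data generated by the logistic map, random data by a uniform generator); it measures the vertical spread of the plot: deterministic chaos collapses onto a thin curve, while white noise fills the whole square:

```python
import random

def return_map_spread(series, nbins=50):
    # vertical spread of the (x_n, x_n+1) plot, measured inside thin x-bins:
    # near zero for deterministic data, large for points filling the square
    bins = {}
    for a, b in zip(series, series[1:]):
        bins.setdefault(int(a * nbins), []).append(b)
    spreads = [max(v) - min(v) for v in bins.values() if len(v) > 1]
    return sum(spreads) / len(spreads)

random.seed(1)
chaos = [0.3]                       # deterministic: the logistic map
for _ in range(2000):
    chaos.append(4 * chaos[-1] * (1 - chaos[-1]))
noise = [random.random() for _ in range(2001)]   # pure chance: white noise

spread_chaos = return_map_spread(chaos)
spread_noise = return_map_spread(noise)
print(spread_chaos, spread_noise)   # thin curve versus filled square
```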
THE FRACTAL DIMENSIONS
What exactly is a fractal, and how is one made?

A fractal is a geometric form that consists of a motif which repeats
itself at whatever scale it is observed.

This form can be very irregular, or very interrupted or fractured, thus
originating its name, which B. Mandelbrot derived from the Latin "fractus"
(interrupted or irregular). Its basic characteristic is the concept of
self-similarity, which makes the geometry of fractals an indispensable tool
for the study of all those phenomena that exhibit the same structure no
matter the magnification at which they are examined.
The property appears with surprising frequency in nature: think of a
rushing torrent with eddies which, examined in detail, contain smaller
eddies; or of many plants, ferns and trees, which branch through successive
divisions before reaching the leaves; or of the system of blood circulation,
with its ever smaller ramifications down to the capillaries.
Fractals are, at the same time, very complex and particularly simple.
They are complex by virtue of their infinite detail and their unique
mathematical properties (no two fractals are identical); nevertheless, they
are simple because they can be generated by the successive application of a
simple iteration, together with the introduction of random elements.
Mathematicians have conceived a great variety of fractals, but the
oldest is probably that of the mathematician G. Cantor, in 1883. He wished to
surprise his colleagues with two apparently contradictory characteristics for
a set of numbers falling between 0 and 1: a) that the set would have zero
size, that is, if it were represented by points along a line, on any portion
of it the points not belonging to the set would greatly exceed those that are
part of it; and simultaneously, b) that the members of the set would be as
innumerable as the set of all the real numbers also included between 0 and 1.
Many mathematicians, including Cantor himself at first, did not think
that such a monster could exist, yet he finally found it. Its construction is
surprisingly simple. We begin with a segment of straight line and mark its
extremities 0 and 1. We erase the middle third, keeping its extremities 1/3
and 2/3. There remain two segments with a total of four extreme points, and
from each of these segments we again erase the middle third, so that we are
left with four segments with two extreme points each, that is, points
corresponding to the 8 numbers: 0, 1/9, 2/9, 3/9, 6/9, 7/9, 8/9, and 1. If we
continue the same procedure ad infinitum, the segments keep shrinking until,
in the limit, only points remain.
In figure III.16 the first five erasing operations are shown, but there
is no way to draw the final result. Cantor's set has openings at whatever
scale it is examined, and is composed solely of an infinity of isolated points,
none of which is adjacent to another point of the set. If one wants to measure
the length that remains, one will verify that it is zero, the set being made
up of a sum of holes: the segments that were removed total a length of 1
after the infinite erasure operations.
Mathematicians then say that Cantor's set has a measure of zero. Yet
there is actually a newer mathematical concept that is more meaningful for
measuring the dimensions of fractals, and in accordance with it one
demonstrates that the dimension of Cantor's set is the so-called "fractal
dimension D"
log 2
D = -------- = 0.6309
log 3
which turns out not to be a whole number but a fraction instead, and which
suggests a geometric form intermediate between points (Euclidian dimension 0)
and a curve (Euclidian dimension 1).
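Both the construction and the dimension can be verified in a few lines. The sketch below builds the first levels of Cantor's set by erasing middle thirds, and computes D from the fact that at scale (1/3)^n the set consists of 2^n pieces:

```python
import math

def cantor(level):
    # the segments that survive `level` erasures of middle thirds
    segs = [(0.0, 1.0)]
    for _ in range(level):
        nxt = []
        for a, b in segs:
            t = (b - a) / 3
            nxt += [(a, a + t), (b - t, b)]
        segs = nxt
    return segs

segs = cantor(8)
total = sum(b - a for a, b in segs)
print(len(segs), total)     # 256 segments of total length (2/3)**8, tending to 0

# at scale (1/3)^n the set is covered by 2^n pieces, hence
D = math.log(2) / math.log(3)
print(D)                    # 0.6309..., between a point and a line
```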
It should be clarified that not all fractals have a D which is a
fraction; this can be a whole number, yet its form is always interrupted or
irregular.
Just as occurs with the dimensions of the space of the phases, it should
be borne in mind that a fractal dimension does not have the same meaning as
that of the dimensions of our Euclidian space, but instead is the numerical
expression that permits us to measure the degree of irregularity or of
fragmentation in one of these figures. Thus, a dimension D between 1
and 2 means we are dealing with certain very irregular curves that almost
amount to a plane, while surfaces resembling puff pastry, with numerous folds
that fill part of a volume, have a D between 2 and 3.
There are fractals with D = 1 and with D = 2, which nowhere
appear like a line or a plane, but always are irregular or interrupted.
We now return to examining the strange attractor corresponding to the
transformation of the horseshoe.
This transformation was developed by the mathematician S. Smale, for
application to the study of coupled electronic oscillators that were used in
radar installations. Upon making a section s-s* (see figure III.17),
Cantor's set is observed.
The essential properties of this attractor appear in a great number of
Poincaré sections for chaotic systems. Cantor's set is an example of an
interrupted fractal.
A classic example of an irregular fractal curve is Koch's curve,
proposed by the Swedish mathematician H. von Koch in 1904. To construct it,
we start with a straight segment (figure III.18) of length 1 and erect an
equilateral triangle upon its middle third. The length of the line is now
4/3. If the operation is repeated upon each of the resulting segments we
obtain a figure of length (4/3)^2, or 16/9, and after infinite iterations one
arrives at a fractal form of infinite length whose extremes are,
nevertheless, separated by the same distance as the initial generating
segment of length 1. Its fractal dimension is D = 1.26...
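The two numbers quoted, the lengths 4/3, 16/9, ... and the dimension 1.26..., follow from the construction rule: each step replaces every segment by 4 copies at scale 1/3. A short check:

```python
import math

# each construction step replaces every segment by 4 copies at scale 1/3,
# so the total length is multiplied by 4/3 at every iteration
lengths = [(4 / 3) ** n for n in range(6)]
print(lengths)                      # 1, 4/3, 16/9, ... growing without bound

# self-similarity ratio: N = 4 pieces at scale r = 1/3
D = math.log(4) / math.log(3)
print(D)                            # 1.2618..., between a curve and a surface
```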
Another variant is Koch's snowflake, constructed by the same method but
beginning from an equilateral triangle (figure III.19). If each side of the
triangle measures 1/3, after n iterations the perimeter has a total length of
(4/3)^n, which becomes infinite as n goes to infinity, such that if one
wished to trace all the turns of the curve with a pencil, one would never
reach the end point, despite its enclosing a roughly hexagonal figure of
perfectly limited area.
These strange figures seem to be the products of the imagination of some
mathematicians disconnected from everyday life.
However, as has been seen so often in the history of knowledge, creative
thought, which does not seek an immediate practical utility, turns out to be
very fruitful. Thus, this new geometry conceived at the beginning of the 20th
century has today become indispensable for the dynamics we are studying, with
its many applications.
Now we can understand why fractals and strange attractors are found so
intimately connected. As we have seen, a strange attractor is traversed by the
point representing the dynamic system, which advances along a curve of
infinite complexity, that extends and at the same time folds and re-folds ad
infinitum, and whose Poincaré section is formed by groups of points
characterized by self-similarity.
A strange attractor is, accordingly, a fractal curve.
IV. Deterministic chaos in the heavens
ONE COULD IMAGINE Newton's amazement were he to read the article in the
magazine Science of July 3rd, 1992, where it is confirmed that the
behavior of the solar system as a whole exhibits signs of chaos. Thanks to
modern computers it has been possible to confront the old problem of how
stable that system is, a problem already posed in the era of the great
mathematician and physicist Henri Poincaré.
One characteristic of the planetary system, like the other systems which
celestial mechanics studies, is that for the times under consideration, which
may be up to billions of years, there is practically no dissipation of energy
through friction, given that the celestial bodies move in an almost perfect
vacuum, and the losses of energy through other effects, such as radiation, are
not important either, such that one has here bodies that form conservative or
Hamiltonian dynamic systems.
For the solar system, the results are surprising: G. Sussman and J.
Wisdom in the article mentioned describe the calculation through numerical
integration of the evolution of the overall system for the next hundred
million years, which required a month of specialized computer time for each
run of the program.
It turned out that the behavior of the nine planets over the next
four million years reveals that the planetary system is in a chaotic state.
For our own tranquility, this does not mean that the chaos in the solar
system is of such characteristics that it will be annihilated within a short
while, with planets colliding among themselves or fleeing toward other
galaxies, but rather that the orbits are unpredictable when calculated for
times on the order of a hundred million years, and hence one can only
anticipate that the planets will move in space within determinate zones.
These zones do not overlap, so this does not presage collisions among
planets, at least as far as has been calculated, which corresponds to the
next hundred million revolutions of our planet around the Sun.
They also discovered that the sub-system formed by Jupiter and its
satellites is chaotic, and the same for the orbit of the planet Pluto.
The researchers estimated the error with which the initial position of a
planet in its orbit is known at only 0.00000001%. One might then expect that
if two possible orbits are calculated that differ initially in position by
that percentage, and the orbits stay on course while the mentioned time
elapses, the distance between them would be on the order of 10 m to 1 km.
But they discovered that in the calculation the distance between those
two alternative orbits multiplied by 3 every 4 million years and, therefore,
became 9 times greater after 8 million years, 27 times greater at 12 million
years, and so on successively; at 100 million years, then, the position of
the planet could differ by 100%, having entered the chaotic region, also
called the region "of resonance."
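The arithmetic of this divergence is simple compounding. The sketch below uses the tripling time of 4 million years quoted above; it shows the factors 3, 9, 27, ... and the factor of roughly 8 x 10^11 accumulated over 100 million years, which turns an initial uncertainty of meters or kilometers into one far larger than the orbit itself:

```python
# compounding of the computed separation between the two alternative orbits,
# using the figure quoted above: multiplication by 3 every 4 million years
def factor(t_myr, tripling_time_myr=4.0):
    return 3 ** (t_myr / tripling_time_myr)

for t in (4, 8, 12, 100):
    print(t, factor(t))
# after 100 Myr the factor is 3**25, about 8.5e11: an initial difference of
# even a few meters nominally exceeds the size of the orbit, so the position
# along the orbit is completely indeterminate
```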
In particular, for the Earth, a measurement error for the initial
location in its orbit of only 15 meters makes it impossible to predict in what
position in its orbit it will be after a hundred million years.
In fact the calculation shows that these chaotic regions, called regions
of resonance, are restricted to a portion of space, such that in the case of
the solar system there is no evidence for future collisions among planets.
Yet the importance of this discovery is something more than an
astronomical curiosity of interest only to specialists. Among the work's
consequences, one is eminently practical: along the paths of the sky there
not only circulate "heavy vehicles" like the planets and their satellites;
the sky is also crossed by myriad fragments such as asteroids and comets. Through
observation with telescopes and calculation based upon the equations of
celestial mechanics, it is possible to foresee where and when the paths of
those objects will cross. Thus it is known that in July of 1994 a fragment of
considerable size will collide with Jupiter, falling onto the face that at
that moment will not be visible from Earth. Also it is known that the asteroid
1989 AC, which apparently is in resonance with Jupiter, will cross the Earth's
orbit in the year 2004, passing at a distance from our planet of 0.011
astronomical units, that is, 1,650,000 km.
Fortunately the equations allow defining with precision the location of
the object for the next few revolutions around the Sun, so that humanity
awaits this event with total tranquility. Yet as we have seen, the capacity to
predict whether or not there will be collisions becomes lost in a type of fog
to the degree that we advance into the future.
How far we are from the conception of laws of celestial mechanics that
permit predicting forever the movement of the heavenly bodies!
But, does this not contradict the fact that we have known the ordered
movement of the planets for millennia, and that the Babylonians could exactly
predict an eclipse with years of anticipation? Did the Newtonian revolution
perhaps not really consist of the discovery of the immutable laws that govern
all the dynamic systems which appear in nature?
Now we understand that that order is such for an observational time of a
few dozen thousand turns of Earth around the Sun, but that it vanishes for
time scales thousands of times greater.
THE DISCOVERIES OF POINCARÉ
The existence of this problem was also suspected by the scientists of the
past century, but it was Henri Poincaré (figure IV.1) who approached it in
its true magnitude.
In 1887, King Óscar II of Sweden instituted a prize of 2,500 kronor
for whoever might produce an answer to a question fundamental for astronomy:
is the solar system stable? Here stability is defined as the situation in
which small changes in the planetary movements yield only small alterations
in the system's behavior.
In that age one could entertain, for example, the suspicion that the
Earth would end by falling into the Sun. In trying to resolve this case,
Poincaré opened a trail for treating problems of stability in complex
dynamic systems, and even if he could not resolve the problem for the ten
bodies that form the solar system, he received the prize anyway for his
important contributions, among them the creation of topology.

Of course Newton's laws continue to be valid, but their exact solution
would require an intelligence like the one Laplace imagined, capable of
introducing into them data of infinite precision.

The astronomers can know the initial conditions of velocities and
positions of celestial bodies only in an approximate manner, but this
precision, limited to a certain number of decimal places, has not been an
obstacle for many calculations, since they normally work with equations where
small variations in the initial conditions yield proportionately small
effects. Yet what happens when a system presents situations of high
sensitivity to the initial conditions?
THREE OR MORE BODIES CAN GENERATE CHAOS
For a Hamiltonian system of only two bodies, like the Earth and the Moon
or the Earth and the Sun, Newton's equations can be solved exactly; this
problem is called integrable, and its solution corresponds to an elliptical
orbit. But if a third body is added (for example, upon introducing into the
Earth-Sun system the effect produced by Jupiter, whose mass is a thousandth
that of the Sun) one must use an approximation method, the method of
perturbations.
In this method, the perturbation on the order of thousandths produced by
Jupiter's gravitational attraction upon the Earth-Sun system is added to the
solution for the case of the two bodies, Earth and Sun, thus achieving a
better approximation to reality. To this result one then adds the effect of
Jupiter's perturbation squared, that is, a factor of a millionth, and so on
successively, in a series of approximations in which each term should be of
smaller magnitude than the previous one.
It is hoped that this series formed by the sum of terms of decreasing
value will converge; that is, if a sum of, say, 1,000 terms gives a certain
number as a result, and adding the 1,001st term makes the sum grow by 1%,
then adding the 1,002nd term produces a new increase smaller than that 1%,
for example 0.9%, and each time another term is added the sum increases by an
ever smaller amount, tending toward a practically constant figure for a
sufficiently large number of terms.
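The convergence criterion just described can be sketched with a geometric-style series in which each term is a fixed multiple of the previous one (a simplification of the actual perturbation series, whose terms are far more complicated): with ratio 0.9 the partial sums settle toward a constant figure, with ratio 1.1 they grow without bound:

```python
def partial_sums(ratio, n=60, first=1.0):
    # each new term is `ratio` times the previous one
    s, term, sums = 0.0, first, []
    for _ in range(n):
        s += term
        sums.append(s)
        term *= ratio
    return sums

conv = partial_sums(0.9)   # decreasing terms: the sums approach 1/(1-0.9) = 10
div = partial_sums(1.1)    # growing terms: the sums increase without limit
print(conv[-1], div[-1])
```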
This will permit claiming that the problem of three bodies is resolved:
the location of each of them in its orbit will be given by those numbers
calculated with the perturbations method. Yet upon applying it to the
Sun-Earth-Moon system, Poincaré discovered, to his surprise, that some
orbits behaved chaotically, which is to say that their position was impossible
to predict through that calculation; this implies that the behavior of the
equations here is not linear.
The sum of approximations diverges (its result is an ever larger
number), with the effect that these infinitesimal perturbations become
amplified and, in certain limit situations, could come to remove a planet
completely from its orbit.
But Poincaré could advance no further and resolve the actual case of the
complete solar system, owing to the difficulties involved in calculating with
ten bodies. As we have seen, this problem had to wait a century to be
confronted with the modern tools of the computer and numerical integration.
The great mathematician demonstrated,
nevertheless, the possibility that a totally deterministic dynamic system,
like that of the planets in orbit, might arrive at a state of chaos where its
future behavior cannot be predicted. For that it is enough that a non-linear
situation occur, in which a tiny fluctuation is amplified upon being
reiterated a great number of times.
Furthermore, Poincaré proved that chaos can even appear in relatively
simple systems, as might be one formed by only three bodies, so that the
structures described in the space of the phases form the complicated
geometry to which we referred in chapter 2.
PHENOMENA OF RESONANCE
An important contribution of Poincaré was to demonstrate that in this
system the instabilities are due to phenomena of resonance.
Resonance appears when there is a simple numerical relation between the
periods of revolution of two bodies in the solar system. For example, Pluto
has a period of revolution around the Sun of 248.54 years and Neptune of
164.79 years; the ratio between the periods is then 3 to 2, which is
indicated as a 3:2 orbit-orbit resonance. Yet there can also be resonances
between the orbital
period of an object and that of its own rotation around its axis (spin) as,
for example, the Moon, with a 1:1 spin-orbit resonance, which is why it always
displays the same face to the Earth.
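The 3:2 figure quoted for Pluto and Neptune can be recovered from the periods themselves, by finding the simple fraction closest to their quotient:

```python
from fractions import Fraction

# orbital periods quoted in the text, in years
pluto, neptune = 248.54, 164.79
ratio = pluto / neptune
approx = Fraction(ratio).limit_denominator(10)
print(ratio, approx)   # about 1.508, whose closest simple fraction is 3/2
```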
An effect that can result from resonance is indefiniteness in a planet's
position in its orbit. Such is the case with Pluto, with its 3:2 orbit-orbit
resonance with Neptune. Its position has been calculated by computer, running
the program twice, each time with a very slightly different initial position.
The two orbits so obtained locate Pluto on opposite sides
with respect to the Sun after four hundred million years.
Another possible effect of resonance is to produce a sudden increase in
the eccentricity of an asteroid's orbit. It has been shown that such an
effect is responsible for the existence of empty zones, or "Kirkwood gaps,"
in the belt of asteroids that lies between Mars and Jupiter. J. Wisdom, using
a method of numerical integration for this chaotic dynamic, demonstrated that
the effect of resonance is to produce violent changes in the eccentricity of
the orbits, changes that end by hurling the asteroids against Mars, and even
onto the Earth: a dramatic example of the ubiquity of chaos in the solar
system.
Resonances are important in Hamiltonian systems and frequently give rise
to chaotic situations.
To understand this, we can visualize the movement of the system in the
space of the phases (see figure IV.2).
Here, as with the ideal pendulum, small energy values correspond, like
the pendulum's low-amplitude oscillations, to movements called "libration":
small oscillations around the equilibrium position. For higher energy values
the pendulum performs rotation movements, and for a Keplerian system (a
single body in orbit around the Sun) the movement is called "circulation."
Between the two movements, libration and circulation, lies the separatrix,
which in the presence of a perturbation (resonance) can give rise to a
chaotic zone.
We analyze the corresponding Poincaré section, where a point is
equivalent to a periodic orbit. If two pendulums are combined, one has a two-
body dynamic system. Every concentric circle that surrounds the central point
in the Poincaré section corresponds to an almost periodic movement, that
is, a combination of two circular movements, each with its own period. In
the space of the phases we have a torus, upon whose surface the almost periodic
trajectory unrolls. For there to be an almost periodic movement it is
indispensable that the relation between the rotation periods be an irrational
number. In this case, when the curve remains on the torus, its intersection
with the Poincaré section is contained in a closed curve (an invariant
curve). If the smaller period increases, we have concentric curves of growing
radius.
But in those zones where the relationship of periods is a rational
number, like, for instance, 3:2, 5:3, et cetera (resonance conditions), chaos
can appear. This is evidenced in the Poincaré section by the appearance of
regions of instability, where the successive points that mark the chaotic
trajectory on crossing that section fill in the entire unstable region in a
random manner.
Within this chaotic zone of resonance, islands of stability appear, in
each of which there is a structure analogous to that found in the center
of the Poincaré section. Around each island a chaotic zone exists in which
the movement is unstable: a trajectory can rotate in the curves inside the
islands (libration) and later in the curves of the central zone (circulation).
The chaotic zone corresponds to the separatrices in the case of the pendulum,
and in this zone the movement is extremely unstable, and very sensitive to the
initial conditions.
The origin of the chaotic movement of Mercury, Venus, Earth, and Mars is
the existence of resonances between the precession periods (that is, the
rotation in space of the planetary orbit) of these planets. Calculations show
that our planet is far from the central stable zone of almost periodic
movements, lying instead in the chaotic zone, near an island (previous
figure), which is why its position cannot be predicted a hundred million
years from now.
CHAOTIC BEHAVIOR IN THE GALAXIES?
Up to here we have described the method for studying the evolution of a
Hamiltonian system formed by three or more bodies. But what would happen if
it were applied to a system with an immense number of components, such as a
galaxy? A galaxy is also a Hamiltonian system, given that on the time scales
involved, of hundreds of millions of years, the loss of energy through
radiation or friction is negligible.
In 1962, the astronomer M. Hénon approached this problem, trying
to calculate the movement of individual stars around the center of a galaxy.
One of the differences from the solar system is that there the center of
attraction is concentrated in the Sun, a sphere around which the planets
rotate in flat orbits, while a galaxy must be modeled as a zone of
attraction in the shape of a disk, around which the stars rotate in
three-dimensional orbits.
Using Poincaré's method, Hénon computed the intersections of
the successive trajectories of a star with a plane, determining what changes
were produced for different energies of the system. As would be expected for
a Hamiltonian system, the Poincaré section of the tri-dimensional torus
generated by the trajectories showed concentric curves, enclosing areas
proportional to the energy (see figure IV.3).
But for the highest energies a moment arrived in which the curves broke
up and disappeared, replaced by points distributed apparently at random
within zones in which other curves appeared, like islands in a stormy sea. In
the corresponding states of the system it then becomes impossible to
establish the stellar orbits precisely.
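The system Hénon studied with C. Heiles has become the standard example of this behavior. The following is a minimal sketch, not Hénon's original program, of how such a Poincaré section can be computed: the Hénon-Heiles Hamiltonian is integrated with a Runge-Kutta scheme (the integrator, step size, and initial condition are illustrative assumptions), and the points where the orbit crosses a chosen plane are collected.

```python
# Sketch of a Poincaré section for the Hénon-Heiles system, the standard
# model of a star's motion around a galactic center.
# Hamiltonian: H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3.
# We record (y, py) each time the orbit crosses the plane x = 0 upwards.

def deriv(s):
    x, y, px, py = s
    return (px, py, -x - 2.0 * x * y, -y - x * x + y * y)

def rk4_step(s, h):
    """One fourth-order Runge-Kutta step of size h."""
    k1 = deriv(s)
    k2 = deriv(tuple(v + 0.5 * h * k for v, k in zip(s, k1)))
    k3 = deriv(tuple(v + 0.5 * h * k for v, k in zip(s, k2)))
    k4 = deriv(tuple(v + h * k for v, k in zip(s, k3)))
    return tuple(v + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(s, k1, k2, k3, k4))

def energy(s):
    x, y, px, py = s
    return 0.5 * (px * px + py * py) + 0.5 * (x * x + y * y) \
        + x * x * y - y ** 3 / 3.0

def poincare_section(s, h=0.01, steps=50_000):
    """Collect (y, py) at upward crossings of the plane x = 0."""
    points = []
    for _ in range(steps):
        s_new = rk4_step(s, h)
        if s[0] < 0.0 <= s_new[0]:          # upward crossing of x = 0
            f = -s[0] / (s_new[0] - s[0])   # linear interpolation fraction
            points.append((s[1] + f * (s_new[1] - s[1]),
                           s[3] + f * (s_new[3] - s[3])))
        s = s_new
    return points

s0 = (0.0, 0.1, 0.35, 0.0)   # a low-energy (regular) initial condition
pts = poincare_section(s0)
print(len(pts))              # number of section points collected
```

At low energies, plotting the collected points traces the closed concentric curves of the text; raising the energy (towards the critical value near 1/6 in these units) scatters them into the chaotic zones described above.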
In summary: we see how even celestial mechanics, considered the best
example of the predictive capacity of the physical sciences, runs up against
the limits imposed by the existence of non-linear phenomena, which appear as
much in the solar system as in galaxies.
These discoveries are not only important for celestial mechanics. There
are other dynamic systems of great interest, like plasmas, formed from
ionized gases, that is, gases so hot that many of their atoms have lost their
electrons.
The principal interest in these systems is due to research into how to
use them to build nuclear fusion reactors that provide safe and cheap
energy. Studying them as Hamiltonian systems with an immense number of
components, now subject not to gravitational attraction but to electrical
and magnetic fields, Poincaré sections have been discovered with islands of
regularity, similar to Hénon's figures, which confirms that they can pass
into a state of chaos for certain critical values.
V. Chaos, entropy and the demise of the Universe
AS A consequence of the Industrial Revolution the necessity emerged of
understanding how heat is generated and put to use in steam engines. This
involved studying systems of gases and of liquids in which only certain
global properties can be measured, like temperature, pressure, volume, or
viscosity, which do not require knowledge of the positions and velocities of
each one of their atoms.
LAWS FOR HEAT
The first great achievement of thermodynamics was the law of conservation
of energy, which establishes that heat is one more of the forms in which
energy presents itself, and that the total energy involved in a process can
change in character, passing for example from caloric to kinetic, electrical
or chemical energy, yet is never lost. In a steam engine the caloric
energy is transformed into kinetic energy (of movement) of a piston, and if
two bars are rubbed, one obtains heat through friction.
The concepts of heat and temperature were made more precise with the
development of the kinetic theory of gases: the caloric energy of a body is
the sum of the kinetic (movement) energy of all the molecules that comprise
it, while the temperature is proportional to the average kinetic energy of
its molecules. Our image of a gas today (figure V.1) is one of an extremely
elevated number of molecules that behave like diminutive, elastic billiard
balls, which move in a straight line at great speeds with no preference for
any particular direction in space, until colliding with others and rebounding
vigorously. The molecules that form the air in normal conditions of pressure
and temperature, for example, move at an average 1,800 kilometers per hour
and undergo some five billion collisions every second. This agitated movement never
ceases and increases upon raising the temperature.
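The figure of 1,800 kilometers per hour can be checked with the standard kinetic-theory formula for the root-mean-square molecular speed (a rough sketch; taking air to be pure nitrogen is a simplifying assumption):

```python
# Root-mean-square molecular speed, v = sqrt(3*k*T/m), for air at
# room temperature, modeled as nitrogen (a simplifying assumption).
import math

k = 1.380649e-23      # Boltzmann constant, in joules per kelvin
T = 293.15            # 20 degrees centigrade, in kelvin
m = 28 * 1.6605e-27   # mass of a nitrogen molecule (N2), in kilograms

v_rms = math.sqrt(3 * k * T / m)   # metres per second
print(v_rms * 3.6)                 # in kilometres per hour: about 1,800
```

The result, a little over 500 metres per second, agrees with the value quoted in the text.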
We should highlight here the notable fact that the word gas was invented
by the Flemish physician Van Helmont in the 17th century. Until then one spoke of
"vapors" and "spirits," but he, with rare vision, considered that those were
formed of invisible particles dispersed in every direction, this being the
image of chaos, the word from which the name "gas" was derived.
If the gas is contained in a receptacle, its molecules will make impact
in their movement with its walls, globally exercising a pressure that will be
greater the more kinetic energy its molecules have, that is, the hotter the
gas is.

The second law of thermodynamics establishes that, if the total
quantity of energy is kept constant in an isolated system, that is, one
that receives neither energy nor matter from its surroundings, the useful
energy capable of being utilized to perform work decreases, for in every
process a fraction of the energy is inevitably transformed, through
friction, rubbing, et cetera, into heat that can no longer be fully
recovered to perform work.

Thermodynamics gives us, then, two pieces of news: one "good" and one
"bad." The "good" news is that energy is inexhaustible, that there will always
be energy in the Universe; the "bad" is that energy comes in two varieties,
of which only one is useful to us, and that furthermore this useful energy is
diminishing and some day will disappear.
What example do we have of useful energy capable of performing work in an
isolated system? A simple case is that of a closed container divided into two
compartments through a movable partition operated by a piston (see figure
V.2).
Suppose that we initially fill one compartment with a hot gas, and the
other with the same gas at a lesser temperature. The energy of the gas is then
usable, for the hot gas contains molecules with greater energy of movement
than the cold gas and, hence, exerts greater pressure against the piston,
pushing it.
This organized state, where there is an appreciable difference between two
regions of the system, we consider to be ordered, as distinguished from the
case where the receptacle is uniformly full of gas at the same temperature,
which, accordingly, pushes the piston both ways and cannot move it.
THE ARROW OF TIME
Although the molecules of a gas have individually reversible movements--a
cinematographic film of their trajectories projected from beginning to end or
from end to beginning will be equally valid, for Newton's laws are the
same--taken together the gas undergoes an irreversible process: if initially
there was a difference in temperature between the two compartments, it will
end by disappearing, because the molecules of the hotter gas will lose part
of their energy by pushing the piston and, therefore, its temperature will drop.
At the same time, the molecules of the cold gas collide with the piston
that advances towards them, from which they receive more kinetic energy, so
that their temperature increases. After a time, both compartments will contain
a warm gas, which exerts equal pressure from both sides of the piston and
that, in consequence, cannot yield work.
Our experience tells us that the spontaneous appearance of a temperature
difference has never been seen, in which the gas heats back up in one
compartment and cools down in the other.
It is not that there is a prohibition that emerges from the laws of
physics, because for that to occur it would suffice for the majority of the
molecules, which individually are moving in every direction possible without
preferring any, to move in a spontaneous manner by simple coincidence all in
the same direction, those of one compartment towards the piston and those of
the other moving away from the piston.
The probability of something like this happening has been calculated and
the result is that it would require waiting the entire age of the Universe and
much more, for such a phenomenon to occur. Accordingly, this return to
the initial state, equivalent to a reversal in time, is in practice so
improbable that we can consider that it never occurs.
In this way the notion of non-reversible processes in time was introduced
into physics, an asymmetry that has been called the "arrow of time," for
thermodynamic processes for isolated systems only occur in one direction: that
in which over time non-utilizable energy grows.
Even further, if we consider the whole Universe as an isolated
thermodynamic system, which by definition has no exterior from which
matter or energy could arrive, one comes to the conclusion that it will end
in "heat-death," when all the energy it contains degrades to non-utilizable
caloric energy, until attaining a final state of equilibrium.
This state to which it inexorably will arrive is identified with disorder
or chaos, because it is visualized as the end of the Universe as we know it,
of the organization that sustains all the ordered activity we perceive, and
which, from the galaxies and solar systems to the beings that inhabit them,
will disappear to become a homogeneous chaos of atoms moving blindly for the
rest of eternity.
Thus, the most general formulation of the second law of thermodynamics
states that every isolated physical system, including the Universe,
spontaneously increases its disorder.
In 1865, the physicist Clausius introduced the concept of entropy to
express in a precise mathematical function this tendency of evolution of
thermodynamic systems.
The entropy function increases in an isolated system in the same manner
as disorder, and is considered as a measure of that disorder.
Clausius reformulated, furthermore, the two laws of thermodynamics in the
following manner:

The energy of the Universe is constant.
The entropy of the Universe tends toward a maximum.

These concepts were analyzed through statistical mechanics by the
physicist Ludwig Boltzmann, who demonstrated that the final state of an
isolated system, when there is no longer any change over time in the
macroscopic properties, such as density, pressure, temperature, et cetera,
is that of thermodynamic equilibrium, the set of its molecules then finding
itself in a state of maximum entropy.
In 1875 he proposed his definition of entropy as proportional to the
logarithm of the number of possible distinct configurations of the system
that are compatible with the macroscopic properties. Thus, for a gas
isolated in a receptacle and
in equilibrium, there are a certain number of configurations, understanding by
configuration each one of the possible ways that the molecules can be
distributed in the receptacle, and which produce identical macroscopic
properties of pressure, temperature, density, et cetera as a result. All these
configurations have the same probability of presenting themselves, so that it
suffices to calculate their total number to obtain the system's entropy.
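In modern notation this is Boltzmann's celebrated relation (the formula itself is not written out in the text):

```latex
S = k \log W
```

where W is the number of configurations compatible with the macroscopic properties and k is a constant of proportionality, today called Boltzmann's constant.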
The greater that number, the greater the entropy, which can then be given
as a numerical quantity for a system in equilibrium; this constitutes a
great improvement over simply using the concepts of "order" and "disorder,"
which are vague.
It might be thought that the disorder or chaos which reigns in a gas in
equilibrium is due to the great number of molecules. The existence of
deterministic chaos in systems with few degrees of freedom demonstrates that
the preceding is not absolutely obvious. It is associated more with the
enormous difference between the volumes occupied by ordered and by chaotic
states in the space of the phases.
Let us recall that that is a mathematical space, whose number of
dimensions depends upon the number of independent variables or degrees of
freedom to be represented for the dynamic system.
If the system is comprised of n particles not linked to one
another, there will be 3n positional coordinates and 3n velocity
coordinates, so that even a few cubic centimeters of any gas imply an immense
number of dimensions in the space of the phases. Thus, for example, for a
cubic centimeter of air, which contains some 2.7 × 10^19
molecules, the number of dimensions in the space of the phases is around 6
× 2.7 × 10^19, or some 162,000,000,000,000,000,000.
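The multiplication is trivial but worth making explicit (an illustrative check of the figure just quoted):

```python
# Dimensions of the space of the phases for 1 cubic centimetre of air:
# each molecule contributes three position and three velocity coordinates.
molecules = 2.7e19
dimensions = 6 * molecules
print(dimensions)   # 1.62e+20
```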
Obviously it is meaningless to try to visualize such a "space." In any
event it can be schematized as if it were tri-dimensional. What is interesting
here is that in a given instant of time, this portion of air has, for example,
the following observable macroscopic properties: a temperature of 20 degrees
centigrade, a normal pressure of 1,013 millibars, a volume of 1 cubic
centimeter; it will be represented within a certain region we shall designate
as H in the space of the phases (see chapter III) and we shall not have
to worry about knowing the infinite detail of all the individual positions
and velocities of the component molecules.
Those will be shown in each of the points within the region. To each
point there corresponds an instant in time with a determinate configuration, a
distribution in space of the individual molecules of the gas, each with its
own velocity.
A distinct point symbolizes another distribution of positions and
velocities, but if both points are contained in the same region the
macroscopic properties are the same.
The space of the phases can thus be divided into a number of regions, to
each one of which there corresponds a different set of observable macroscopic
characteristics.
The size of a region, given by the number of points that comprise it,
indicates the quantity of different possible distributions of the molecules.
And how the gas evolves over time is represented globally by a trajectory
that, starting from the point indicating the distribution of the positions
and velocities of the molecules at the initial instant, traverses the space
of the phases, migrating from that initial region into others as the
macroscopic characteristics change. In figure V.3 those regions are marked
by their macroscopic properties of pressure P and temperature T.
In a thermodynamic process the sizes of the different regions can be very
different. For an ideal gas that is confined to a certain volume within an
isolated box and proceeds to expand until occupying the entire volume, one
can estimate the relative sizes of the two corresponding regions: the
initial one and that of final equilibrium, designated by P0, T0 and
Pθ, Tθ in figure V.3. Upon arriving at this last region, the
gas will be in thermal equilibrium, so characterized because its molecules are
distributed in a uniform manner in the volume of the box and move in all
directions with a range of velocities known as "Maxwell's distribution."
To get an idea of the magnitudes in play, let us look at a very
simplified example: assume there are a total of six molecules inside a box
divided into two halves. There are various possibilities for their
distribution: two in one half and four in the other, five in one half and
one in the other, et cetera. The molecules are identical and their movement
shows no preference between the two halves, so the number of possible
configurations for each distribution is easily calculated.
Twenty different combinations result that give the uniform distribution
of three molecules in each half, against six in which five are in one half
and only one in the other. As the quantity of molecules increases, the
difference grows ever more rapidly: for 10 molecules it is 252 against 10;
for 20 molecules, some 184,000 against 20; for 100 molecules, about 10^29
configurations give uniform distributions, versus only 100 in which 99 of
the molecules are concentrated in one half.
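These counts are binomial coefficients: the number of ways of choosing which molecules sit in, say, the left half of the box. A quick sketch of the calculation (illustrative, using Python's `math.comb`):

```python
# Counting the configurations of the text's example: comb(n, k) is the
# number of ways of choosing which k of the n molecules sit in one half.
from math import comb

print(comb(6, 3))     # 20 balanced configurations for 6 molecules
print(comb(6, 5))     # 6 with five in one half, one in the other
print(comb(10, 5))    # 252 balanced configurations for 10 molecules
print(comb(100, 50))  # about 1.0e29 balanced configurations for 100
print(comb(100, 99))  # only 100 with 99 crowded into one half
```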
Keep in mind now the number of molecules there are in a liter of any gas,
and you will understand that, consequently, the quantity of possible
configurations in the box with a uniform distribution is immense, by
comparison with those where all are found in one half.
Since to each possible distribution there corresponds a point in the
space of the phases, it follows that the region corresponding to the gas in
thermal equilibrium is by far the largest in the space of the phases.
Suppose now that we start from the situation in which all the air is
accumulated in one portion of the box. Immediately it will commence to
expand, occupying ever more space until filling it; after a time it will
arrive at thermal equilibrium, with a uniform distribution of temperature
and pressure throughout the box.
ENTROPY AND THE SPACE OF THE PHASES
How shall we represent this process in the space of the phases? The point
starts from a very small region, the one representing the collection of
possible initial states in which the gas is accumulated in a corner of the
box. When the gas begins to expand, the trajectory of the moving point will
enter a region of the space of the phases of greater size, and later bigger
and bigger regions as the gas keeps expanding, until it finally enters the
region of greatest volume in the space of the phases--that which corresponds
to thermal equilibrium--which, as we have seen, occupies practically the
whole space of the phases (figure V.3).
Hence, the possibility that the trajectory, after having left that tiny
volume of the space of the phases to enter the vast domain of the region of
thermal equilibrium, will return to the initial region is practically nil.
It is not that there is a prohibition dictated by the laws of nature: as
we have seen, all states are possible, but the probability that the trajectory
enters that region anew is much less than that of finding a needle by chance in
a haystack the size of the planet.
It is easy to see that the entire age of the Universe would not suffice
for that configuration, with the gas gathered in one portion of the box, to
spontaneously recur. It never ceases to be reassuring that, as physics
assures us, even if it is not strictly impossible, the probability that all
the air in the house where we are will suddenly retreat to a corner, leaving
us unable to breathe, is so slight that it would not occur even once in many
hundreds of billions of years, which in practice is for us equivalent to an
impossibility.
From all the preceding we deduce that, once a gas has reached the state
of thermal equilibrium, it does not spontaneously leave it, for even though
all the other states are also possible, the equilibrium state is more
probable by an immensely greater factor.
Furthermore, if we consider that the entropy of the system is a measure
of the volume of the corresponding region of the space of the phases, we
arrive at the conclusion that if the trajectory representing the gas starts
in a region of very small volume, that is, with minimum entropy, then as
time passes it will move through regions of the space of the phases of
growing volume, or entropy, until arriving at the maximum entropy of
thermal equilibrium.
One arrives in this way at the formulation of the second law of
thermodynamics using the concept of the representation of the system in
the space of the phases. The entropy grows because the states outside of
equilibrium are much less probable than those in equilibrium (they occupy
much smaller volumes in the space of the phases). Accordingly, once the
system in its long-term evolution arrives at that vast region of the space
of the phases, it is very improbable, although not impossible, that it will
return to the regions corresponding to states out of equilibrium.
VI. The behavior of systems with large numbers of components
WE SHALL EXAMINE what occurs with systems formed from an immense
quantity of component molecules, such as fluids (liquids, vapors, et
cetera), when they quickly pass from ordered behavior to chaotic confusion.
We are not seeking here some exotic phenomenon that can only be observed
inside the walls of sophisticated physics laboratories; in reality, as we
shall see, these phenomena are totally normal in our everyday life.
Let us observe the smoke that ascends through the air from a cigarette
placed in an ashtray or a just-extinguished candle (figure VI.1). It ascends
vertically over several centimeters, in an ordered, almost rectilinear
column (laminar flow), until ever more complex whirlwinds abruptly appear
in a totally disordered cloud that ends by dispersing into the air.
The millions of microscopic particles of hot soot that form the smoke all
move at the same velocity during the laminar-flow ascent, as if they were
columns of cars on a highway, driven by drivers respectful of the traffic
laws. Past the phase of eddies, they flow into the disordered cloud, where
the soot particles are animated by movements in all directions, such that if
one wished to measure the velocity of the smoke at a point in this cloud,
one would verify that it varies from moment to moment in a totally random
fashion, as unpredictable as if the measured data were obtained by playing
roulette.
TURBULENCE
This condition is called turbulent; in physics, a flow is called turbulent if
the velocity of the fluid seems to vary randomly as much in time as in space.
Turbulence appears with much frequency in the fluids that we find in
nature: in air currents, water streams, atmospheric processes, oceanic
currents, processes in the atmosphere of planets such as Jupiter, et cetera.
As we know, it also constitutes one of the great problems of modern
technology (in the aeronautical industry, in the subtleties of oil or water
transport) and, despite the efforts of many scientists, we are still far
from mastering its fundamental principles.
Nevertheless, some of the promising avenues involve studying what occurs
in a fluid during the transition from laminar order to the chaos of
turbulence. Until 1970, a theory of the physicist Lev Landau was accepted,
which held that in a system formed from so many particles, when it leaves
laminar flow and the first eddies begin, this is equivalent to the start of
oscillations in a pendulum; that is, a periodic movement appears. The
initial eddies very quickly divide into smaller eddies, which implies that
instabilities appear in the fluid that cause it to oscillate with another,
different additional period, and this in turn produces smaller eddies with
new oscillations. Thus, the turbulence will initially be a superposition of
three or four periodic movements. Once it is totally developed, the number
of different oscillations will be infinite, as if there were infinitely many
mutually interacting pendulums, and this would explain the chaotic behavior
of the fluid's velocity.
But in 1970, D. Ruelle and F. Takens proposed an interpretation
different from Landau's. They agree on the first stage, from laminar flow
up to the first whirlwinds. During the laminar stage, all the particles move
at practically the same speed in the same direction, so the representation
of the process in the space of the phases is very simple: a point, which
attracts the trajectories that deviate from it through small perturbations
(see figure VI.2a). Here the attractor point represents the constant
velocity of the fluid.
Yet in reality the velocity keeps increasing, owing to the buoyancy of
the hot smoke, and when it exceeds a certain value an abrupt change occurs:
at the smallest perturbation, which could be a very slight current of air,
the threads and layers of smoke are deflected, forming curls that turn
around themselves. Here a rhythmic movement appears, with a certain period,
which can be represented in the space of the phases (see figure VI.2b) in
the same manner as in the case of a pendulum, with a limit cycle.
This motion is relatively stable, so that it constitutes an attractor.
But as it continues to move, the eddy soon suffers the effect of some other
air current, which simultaneously causes it to oscillate in another
direction, and we have a case similar to that of the system formed by two
pendulums of different periods of chapter III. We saw that the corresponding
representation in the space of the phases is the solenoid on the surface of
a tri-dimensional torus, where we find ourselves with three independent
variables or degrees of freedom. What happens from then on?
It is at this point that the two theories separate. For Landau, a new
perturbation introduces another oscillation, which will take us to a
representation in a space of the phases of four dimensions, and so on
successively until reaching chaos with a space of the phases with a high
number of dimensions, corresponding to the elevated number of independent
variables of the turbulent fluid. For Ruelle-Takens, the chaos is produced
abruptly and long before: when the eddy is deflected to simultaneously have
two oscillation frequencies.
The foregoing is visualized not with a new figure in a space of the
phases of four dimensions, but by radically altering the shape of the
attractor that was on the torus (see figure III.13 above). It is as if the
solenoid that defined that trajectory had exploded, producing a figure so
extravagant that Ruelle named it a "strange attractor." From then on this
name was adopted by scientists to denominate attractors that display
behavior which cannot be predicted in the long term.
A Poincaré section of the strange attractor shows a fractal
structure: the attractor that was bi-dimensional has jumped to a dimension
greater than 2 but less than 3.
Experiments performed in laboratories have confirmed the existence of
this strange attractor, which opens a path for formulating the laws of
turbulence.
Another important aspect of this discovery is that it shows that a
dynamic system formed from a great number of elements can also arrive at
chaotic behavior starting from only three degrees of freedom. Until then it was
thought that, as common sense seems to indicate, there can only be chaos when
the system has a large number of independent variables.
THERMAL CONVECTION
A notable example of a fluid giving rise to a phenomenon of spontaneous
order can be found in thermal convection, that is, the transport of heat by
a hot fluid displaced towards a colder zone.
H. Bénard performed the first studies in 1900. The French physicist
discovered that if a thin layer of a fluid (he used oil) is heated from
underneath, it can spontaneously organize itself into convection cells of a
characteristic size (see figure VI.3), similar to a honeycomb. In each of
Bénard's cells hot fluid rises through the center and cold fluid descends
around the borders.
Today we know that this phenomenon of spontaneous organization is very
widespread throughout nature: the surface of the Sun is covered with
convection cells, each of which has a size on the order of a thousand
kilometers, and in general the same cells can be seen in all fluids in which
thermal convection produces movements, whenever they form layers much wider
than they are deep.
This is also true for the circulation of air and of the oceanic currents,
which in large part determine the short- and medium-term climate.
Assume that we place water or oil in a very large frying pan and heat it
uniformly, so that the liquid in contact with the metal has the same
temperature in all its parts; on heating, it will conduct the heat upwards,
where it will dissipate at the surface of the liquid. Thus we have thermal
conduction of heat. But if we keep raising the heating temperature above a
certain value, another phenomenon suddenly appears: that of thermal
convection, with the formation of the cells.
What occurs is that the lower, hotter layers of fluid expand, so that
they have less density than the colder layers above; being lighter, they
ascend and are replaced by those colder volumes (figure VI.4). These heat up
as they near the bottom, while those that ascend cool, which produces a
circular movement.
The complexity of these movements is notable: a section of the fluid
shows how the directions of circulation are coordinated in adjacent cells,
alternating clockwise and counter-clockwise.
The correlation of the movements due to processes of thermal convection
is all the more noteworthy if one keeps in mind that the size of these
Benard cells can reach several millimeters, while the range of action of the
forces between molecules is on the order of a ten-millionth of a millimeter
(10⁻⁷ mm). In each cell there are about 10²¹ molecules.
It should be stressed that the experiment is perfectly reproducible,
meaning that if the same heating conditions are repeated, one arrives,
starting from the same temperature threshold, at the same formation of cells
with similar geometry and rotation velocity. Yet what cannot be controlled is
the rotation direction: it is known in advance that, with the experimental
conditions well controlled, a cell will appear in a certain zone of the
surface of the fluid, but it is impossible to predict whether it will rotate
in one direction or in the opposite.
Repeated tests have demonstrated that the probability that the cell will
rotate in one direction or the other is the same for both cases, which is to
say that at any point in the fluid, its velocity can be directed as much in
one direction as in the contrary. This property can be represented in a
graph like that of figure VI.5, where the velocity V is indicated on the
vertical axis, with V+ for rotation velocities in one direction and V- for
the opposite; on the horizontal axis, the variable R represents the
conditions of viscosity, density, thickness, and the temperature difference
between the lower surface and the top of the layer. On beginning to heat, R
passes from zero to growing values until, upon reaching the critical value
Rc, a bifurcation of the curve appears: two equal branches, V+ and V-, and
it is impossible to determine in advance which of them the behavior of the
dynamic system will follow; evidently it responds to non-linear conditions,
with sensitivity to the initial conditions.
If we keep increasing the heating temperature, suddenly the cells are
erased, and a random movement begins which is the initiation of turbulence.
In 1987, the researchers M. Dubois, P. Atten and P. Bergé performed
measurements in the fluid when the Benard cells appear and represented the
corresponding thermal oscillations in the space of the phases. In this way
they obtained a strange attractor whose Poincaré section has a dimension
between 2.8 and 3.1, depending on the heating conditions, which indicates that
the behavior of the system can be described with a minimum of four degrees of
freedom.
CONVECTION IN THE ATMOSPHERE
AND METEOROLOGICAL PREDICTIONS
As we know, every day the meteorologists publish their predictions for the
weather during the next days, based upon measurement data of the atmosphere,
of the terrestrial surface and the oceans, and on satellite observations.
Powerful computers at centers in the United States and Europe process this
data, using models that treat the atmosphere as a dynamic system with more
than a million independent variables. There are plans to continually increase
the power of those data processing centers and also the number and frequency
of measurements throughout the planet. However, no serious meteorologist believes
that in some near future they could affirm in a weather forecast something
like "Friday the 26th, or that is within 12 days, in greater Buenos Aires the
maximum and minimum temperatures will be 19 and 14 degrees, with a humidity of
67% and a 53% probability of rain."
Actually, with all the modern instruments, the forecast begins to be
uncertain after four or five days. There have been cases (as in England,
1987) when they could not foresee the appearance of a disastrous hurricane 24
hours before its arrival.
It is estimated that in the future forecasts will reach up to 14 or 15
days in advance, and only beyond that will lie the growing cloud of the
uncertain. But here we are not referring to an exact prediction for individual
points on the planet; if, on the other hand, we consider the climate for an
entire region, this indeed is predictable, with considerably more precision
than "in June in the province of Buenos Aires the temperature will vary
between 18 and 5 degrees."
The atmosphere, being a fluid, is studied as a dynamic system formed
from an immense number of components, subject to turbulences and convections.
The Sun heats the surface of our planet, which in turn raises the air
temperature and, in a fashion similar to what we saw when we examined Benard
cells, upon heating from below the air layer that forms the atmosphere,
convections appear.
The distance between the equator and the poles is about ten thousand
kilometers, while the thickness of the atmosphere is some ten kilometers
at the level of the troposphere. In this spherical layer of fluid in rotation,
convection cells appear distributed the length of six rings that encircle the
planet, three in the Northern hemisphere, and another three in the Southern,
as is indicated in figure VI.6. The thick arrows indicate the general
direction of the rotation, and the finer ones the additional circulation
produced by the rotation of the planet (Coriolis effect).
There are also currents of air at very high velocity that circulate in
the stratosphere, and which decisively influence long-term weather. The
convection cells are unstable and mobile, with changing limits and sizes.
Their dynamic characteristics can vary by amounts that approximately double
every couple of days, and in turn they act upon the currents of the
stratosphere that modify the weather from the temperate regions to the polar.
The behavior of the atmosphere thus comprises a non-linear process, with
feedback, highly sensitive to the initial conditions, such that it fulfills
the conditions for being a chaotic system. That is to say, it does not matter
how complex the dynamic models may be, or how precise the measurement data
from land, water and air are: the laws of physics impose a limit beyond which
it is impossible to make meteorological predictions.
Yet even if we cannot foresee more than four or five days for a specific
geographical point, could we make a more long-term prediction, which would
give the global tendency for a whole region that has relatively large
dimensions?
The meteorologist Edward Lorenz brought up this problem at the beginning
of the decade of the Sixties. At the Massachusetts Institute of Technology,
where he worked, the study of non-linear dynamics was developing, and he
decided to make a simple atmospheric model of three non-linear equations with
only three independent variables in place of the immense quantity there could
be for this system.
He represented the evolution of the weather in a space of the phases of
three dimensions and found, to his surprise, that the corresponding
trajectories gave rise to an attractor of a most curious shape, with two
similar loops, like butterfly wings (see figure VI.7). He had discovered the
first strange attractor. In the space of the phases, each one of the wings of
the attractor represents a possible state of the atmosphere: for instance,
rainy weather in the left wing, and dry, stable weather in the right.
If the initial conditions are those that mark point 1 on the left, the
evolution will follow the trajectory that remains in the same wing: the
weather will be rainy. But a small perturbation that changes the initial
conditions, leaving the atmosphere in the situation represented by point 2,
takes us onto trajectory 2, which evolves towards the right wing, and the
weather then would be dry and stable.
The Lorenz attractor has a fractal structure, with the dimension
D = 2.06.
Lorenz published this discovery in 1963, it being the first example of a
model calculated with only three independent variables where behavior
unpredictable in the long term appeared.
THE BUTTERFLY EFFECT
Lorenz coined the famous expression "butterfly effect" as an example of this
extreme sensitivity to initial conditions: the beating today of a butterfly's
wings in the Amazon could produce an extremely tiny alteration in the state of
the atmosphere which, amplified by doubling every couple of days, will diverge
ever more with respect to what it would have been without the butterfly, such
that several weeks later a cyclone might appear in the Caribbean which, had
the insect in question not existed, never would have emerged.
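The doubling argument above can be checked with simple arithmetic. Assuming, purely for illustration, an initial perturbation of one part in a billion and a doubling time of two days, the error reaches order one in about two months:

```python
def days_until_macroscopic(initial_error, doubling_days=2.0, threshold=1.0):
    """Days needed for an error that doubles every `doubling_days`
    to grow from `initial_error` up to `threshold`."""
    days = 0.0
    error = initial_error
    while error < threshold:
        error *= 2.0
        days += doubling_days
    return days

print(days_until_macroscopic(1e-9))  # 60.0 -> about two months
```

The figures here are assumed, not measured; the point is only that exponential doubling turns an imperceptible perturbation into a macroscopic difference within weeks.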
The Lorenz model with but three variables only qualitatively describes the
chaotic form in which weather evolves, yet it does so in a manner very similar
to models with many independent variables.
In actuality, models for meteorological prediction have around a million
degrees of freedom, and this allows making general forecasts for global regions
anticipating up to one month.
VII. Far-from-equilibrium systems
THE EMERGENCE of thermodynamics was a challenge for the physicists of the 19th
century, whose schooling was based upon the concepts of Laplace, Lagrange,
Hamilton, and their disciples. They set out to study heat equations as
universal as those of Newton, which referred to the global behavior of
matter. As we have seen, to apply Newton's laws to dynamic systems it is
necessary to define the positions and velocities of each one of their
component elements. But a gas, a liquid or a solid has an immense quantity of
components (on the order of 10²³ molecules in a cubic centimeter of a gas),
which is why it was not known how their global behavior might be established
starting from calculation of the movements of their molecules.
With thermodynamics a new focus emerges, where the heat equations, which
also are universal, use collective, macroscopic properties as parameters, such
as pressure and temperature, and do not seem to require detailed
knowledge of what occurs with each of the participant molecules.
Thermodynamics appears, then, as a potentially valuable tool for studying
processes of global change in dynamic systems formed from a very large
quantity of components and, indeed, it achieved great advances during the
19th century, especially when, as we have seen in the preceding chapter, it
applied the concepts of statistical mechanics to link the pressure and the
temperature with the average effects of the movement of a very large number
of molecules.
In this way the evolution toward thermodynamic equilibrium of the isolated
systems that appear in physics and chemistry, whose most well-known example
is that of the ideal gases, received a satisfactory interpretation via the
law of entropy.
The second law of thermodynamics establishes that, in general, isolated
structures end by decomposing, reduced to a disorganized movement of their
elements: clouds of smoke dissipate, and the hot zones and cold zones in an
object fade until a uniform temperature is reached.
The amount of entropy in the Universe--that is, of the random or
disordered--can only increase, until reaching a maximum, says the second law.
Yet, curiously, in the same era when this law was enunciated, the theory of
natural selection appeared to explain the evolution of species. Through that
theory, Charles Darwin tried to account for the fact that living organisms
constitute ever more organized structures: beginning with bacteria, we
arrive today at mammals and at humans. Does this process of growing
organization contradict the second law of thermodynamics? Actually, the
existence of living beings is not a challenge to this law, which applies
to isolated systems.
OPEN SYSTEMS CAN ORGANIZE THEMSELVES
A living system is open: a person absorbs energy and matter from external
sources (the heat of the Sun, the air, meats, vegetables, sources which in
turn are structured and, hence, are of low entropy) and expels her waste
products, which are of high entropy as a result of the decomposition of
organized matter, into other open systems in her environment. While an
organism is alive, it remains far from the thermodynamic equilibrium towards
which isolated systems tend.
We know very well that if someone is completely isolated from the outside
environment, within a very short while the inexorable second law is obeyed,
bringing death.
Over a considerable time, many scientists thought that the fundamental
laws of physics only permitted deducing that systems must reach thermodynamic
equilibrium and that, accordingly, the process of biological evolution, with
growing complexity of living organisms, was a rare exception.
But neither Boltzmann nor Darwin had been able to consider the existence
of the phenomena of spontaneous formation of structures in matter, a property
that has begun to be studied in these last decades and that appears with
surprising frequency in nature.
Thus, systems as simple as a layer of liquid or a mix of chemical
products can, under certain conditions, exhibit phenomena of coherent
spontaneous behavior. An essential condition for this to occur is that it
concerns open systems, kept far from thermodynamic equilibrium through sources
of energy.
It is truly extraordinary that enormous sets of particles, subject only
to the blind forces of nature, should be, nevertheless, capable of organizing
themselves in configurations of coherent activity. One of the groups that has
promoted the study of this type of process--developing in this way the branch
of thermodynamics of far-from-equilibrium systems--has been that of Ilya
Prigogine and his collaborators, which is why Prigogine received the Nobel
prize in chemistry in 1977.
These processes also help us understand the mechanisms that lead to
oscillations in certain chemical reactions, knowledge of great importance for
industrial catalytic processes, not to mention the importance of this type of
reaction for biochemical processes in living beings.
VERY STRANGE CHEMICAL REACTIONS
We have seen that phenomena of thermal convection can take a liquid or gas
that starts in an initially homogeneous state and structure it over time,
giving rise to regular forms.
An indispensable condition for cells to appear like those of Benard is
that the system be open, that is to say have an external source of energy, and
that it be remote from equilibrium.
This formation of cells through convection is very striking, yet it seems
a humble phenomenon compared with the spectacular effects of chemical
oscillations, which began to be studied methodically around 1980.
In 1958, the Russian biochemist Boris Belousov prepared a mixture of
certain chemical products that form a colorless liquid until they react, when
it turns a pale yellow color. Belousov had mixed the ingredients without
worrying about what proportions he used, and was surprised to observe that
the solution changed its color periodically, passing at regular intervals
from colorless to pale yellow and back to colorless, which meant that the
reaction was retreating and then advancing again, as if it could not decide
what direction to take.
Poor Belousov tried to publish his discovery in the scientific journals,
but it was rejected. The referees who evaluated the work considered that the
only possible explanation for the phenomenon was an inefficient mixing of the
reactants, for the laws of thermodynamics opposed the existence of such
oscillations. Belousov died without having succeeded in getting his research
recognized.
The skepticism of the chemists of the day should not surprise us too
much. Indeed, in a typical chemical reaction between two reactants A and B,
their molecules move at random, and if an A and a B collide they can combine
to form a molecule C and another D, the so-called products of the reaction.
This is symbolized as:
A + B → C + D
The reactants A and B progressively disappear as the proportion of the
products C and D increases. Nevertheless, in an isolated system it is
observed that the reactants A and B never run out completely and that, over
time, all four, A, B, C, and D, co-exist, each maintaining a fixed proportion
in the solution. This proportion will no longer vary, and we then say that
the system is in chemical equilibrium, which corresponds to the equilibrium
attained by systems according to the second law of thermodynamics.
Yet one can also transform this system into an open one, for example by
continually adding more reactants A and B, or removing part of the C or D
that are produced. Here too it can be shown that, with an adequate
combination of the inflows and outflows, the proportions keep varying
progressively over time until becoming fixed, although with different values
from those in the isolated system, a result which, once again, would be
expected in accord with the laws of thermodynamics.
But what Belousov claimed implied that those concentrations, instead of
evolving progressively over time until reaching a stable equilibrium, could
retreat towards the initial state, which is equivalent to contradicting the
second law of thermodynamics; and furthermore they could do so repeatedly,
oscillating in one direction and the other.
Phenomena of periodicity or oscillation are ubiquitous in physics,
astronomy and biology, but the chemists thought that reactions were immune
to this class of behavior.
One does not usually expect the concentrations of the intermediate
products of a reaction to reach a certain level, then fall to a lower one,
then rise and fall repeatedly until, at some point, stable products resistant
to further change result. In any case, in some laboratories in Moscow
Belousov's "recipe" continued to be treated as a curiosity of chemistry, and
at the beginning of the Sixties Anatol Zhabotinskii returned to the theme for
his doctoral dissertation. He performed systematic research, changing the
reactants so as to obtain colors with better contrast.
In the reaction now called BZ (Belousov-Zhabotinskii), the oscillation is
manifested by a regular alternation between red and blue. It is obtained by
dissolving in water, at a certain temperature, certain proportions of
sulfuric acid, malonic acid, potassium bromate, and salts of cerium and iron.
The resulting oscillations between red and blue have a period of almost a
minute and can last for several hours.
Oscillations can also be produced in space: if one pours some of that
solution on a disk, forming a thin layer, some lovely figures appear, with
concentric circles or blue spirals upon a red background, which rotate in one
direction or the other and keep changing with time. Figure VII.1 shows a
succession of photographs of these ripples.
Nowadays the BZ reaction and other similar ones are studied in what is
called a continuous flow reactor, continually introducing the reactants and
removing the excess products to maintain a constant volume. The reaction can
thus be rigorously controlled and maintained as an open chemical system, far
from equilibrium.
FEEDBACK AND CATALYSIS
Today all the stages of this type of reaction have been studied in detail,
establishing the corresponding equations, and with them the oscillations have
been simulated by computer. It is now known that the conditions for these to
appear are that, in addition to dealing with a far-from-equilibrium system,
there must be feedback, meaning that some of the products appearing at one
stage of the process must be capable of influencing their own speed of
formation. In chemistry this is called "autocatalysis."
A catalyst influences the speed with which the substances present react
chemically, and remains unchanged during this process. In autocatalysis, if
the substance being produced acts on its own velocity of production by
augmenting it, increasing its concentration and, at the same time, producing
a higher quantity of that substance, one has positive feedback, which
responds to a non-linear equation.
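The effect of autocatalysis can be seen in a minimal numerical sketch. The model below is not the actual BZ mechanism but the Brusselator, a standard two-variable textbook model of an autocatalytic reaction, whose non-linear term x²y is the autocatalytic step; the parameter values, initial concentrations, and step size are assumptions chosen so that the concentrations oscillate instead of settling down.

```python
def brusselator_step(x, y, a=1.0, b=3.0, dt=0.001):
    """One Euler step of the Brusselator model.

    dx/dt = a - (b + 1)x + x^2 y   (x^2 y is the autocatalytic term)
    dy/dt = b x - x^2 y
    """
    dx = a - (b + 1) * x + x * x * y
    dy = b * x - x * x * y
    return x + dt * dx, y + dt * dy

def simulate(steps=200_000):
    """Integrate the model and record the concentration x over time."""
    x, y = 1.0, 1.0
    xs = []
    for _ in range(steps):
        x, y = brusselator_step(x, y)
        xs.append(x)
    return xs

xs = simulate()
late = xs[100_000:]  # discard the initial transient
print(min(late), max(late))  # the concentration keeps swinging between
                             # a low and a high value: a chemical clock
```

With these parameter values the equilibrium point is unstable and the system settles onto a periodic cycle, never reaching a fixed concentration: a chemical clock of the kind Belousov observed.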
Varying the inflow of reactants in the continuous flow reactor, the
chemical system can pass into either of two different stable states, each one
having its own oscillation period; that is to say, bifurcations appear in the
possible evolution of the system, which recall those found in the dynamics of
fluid flow.
For some ranges of flow in the reactor a more complex behavior appears,
with a mixture of various oscillation frequencies of diverse amplitudes. The
current explanation is that, as opposed to a "normal" chemical reaction,
where the reactants and products remain uniformly distributed in the
solution, here a very tiny heterogeneity or difference at one point is
amplified through the effect of autocatalysis, such that in one region a
specific chemical substance can dominate the reaction while in the
neighboring one its concentration is exhausted. Thus complex oscillations
are activated in the reaction system.
A chemical oscillator, for example, can initially be of a uniform red
color and, as the reaction continues, white spots appear that become
concentric blue rings, which are destroyed by colliding with each other. One
has, then, the sensitivity to initial conditions that, as we have seen,
characterizes non-linear dynamic systems and, in fact, representing the
evolution of these chemical systems in the space of the phases reveals
strange attractors.
It must be remembered that catalysts are employed in many important
processes, like those linked to petroleum, such that detailed comprehension
of the dynamics of chemical reactions important for industry can have large
economic consequences.
Moreover, these chemical clocks call to mind the natural rhythms that
appear in living organisms. If one keeps in mind that feedback also appears
in biochemical reactions, produced through the catalytic effect of enzymes,
and that living beings are far-from-equilibrium systems, it can be hoped that
this new focus might help in understanding the behavior of many biological
mechanisms.
VIII. How do we define complexity?
UNTIL now we have used the term complexity for a state where many different
factors interact among themselves. Yet we must give greater precision to this
concept, since the complexity of a system should not be confused with that of
a system that is merely complicated. In reality one should speak of the
complex behavior of a system since, as we have seen, a dynamic system can be
very simple but, under certain conditions, exhibit unexpected behavior of a
very complex character, which we call chaotic.
BETWEEN ORDER AND CHAOS: COMPLEXITY
A quartz crystal is an ordered system, with its atoms vibrating around
fixed positions in the crystal lattice; a virus has characteristics of order
in its structure, similar to an organic crystal, yet when it infects a cell
it rapidly commences to replicate genes like a living organism: it is
complex; the movement of the molecules of a gas in thermal equilibrium is
truly chaotic. Complexity thus covers a vast territory that lies between
order and chaos.
There is currently no agreement concerning the meaning of "complexity." We
all know that human beings have great complexity, and that fish have a good
deal; mushrooms are somewhat complex, and a virus, less so. The galaxies are
complex, but in a different sense.
We all agree about that, but could we come to agreement on a more
scientific definition, one which would specify quantitatively how much more
complexity a plant has than a bacterium?
In the physical sciences, a law is not a law if it cannot be expressed
in mathematical language; no scientist would enthuse over measures of
complexity that sounded like this: "a fish is considerably more complex than a
virus, but a little less than a mammal."
It might seem that this difficulty in discovering a good definition of
complexity disappears if we ignore living organisms and apply it only to
dynamic systems formed from inert matter, where it is easier to know whether
or not they are complicated.
Yet it is not exactly so either. According to the conditions of the
system, these can exhibit order or complexity or even chaos, whether they
have only three components or an enormous number of them.
It is the territory of complexity, which is situated between order and
chaos, that comprises the new challenge for science.
In any case, it is difficult to predict whether the search for a precise
definition of complexity will end with the discovery of a unique magnitude
that would give us a number to use in the equations of physics, as occurs
with velocity, pressure or mass.
In the final analysis, the complexity of a galaxy implies a relation
among its component stars qualitatively different from that which exists
between the cells that form a sheep. It does not seem that this difference
could be reduced to a unique magnitude similar to the temperature.
Possibly the best solution to the problem is that proposed by the
specialists in computation. Indeed, the majority of scientists dedicated to
trying to define complexity belong to that area, whose tradition is to see
practically everything as reducible to information and, accordingly,
quantifiable in terms of bits and bytes.
HOW TO MEASURE COMPLEXITY
If this viewpoint is adopted, then we could measure complexity as a function
of, for example, the time that a computer requires to execute a program which
simulates a complex physical process.
In principle there is no limit to the types of processes that can be
simulated with a program, such that this would be a good starting point for
the definition we seek. Thereby, complex phenomena include not only clouds of
symbols (numbers, computer programs, words) but also physical processes and
living organisms.
Such a focus originated with the publication in 1948 of a work by Claude
Shannon, of Bell Telephone Laboratories, on the mathematical theory of
communication, which led to a search for an equivalent of the second law of
thermodynamics, but for information.
As a consequence of that presentation, scientists have become accustomed
to viewing physical entities--bicycles or oil flows--and asking, how much
information is required to describe this system?
Moreover the phenomenal advance in the capacity of computers has
propelled the computational focus for processes in physical systems, combining
observations of such systems with the construction of models for the computer.
A process is simulated this way, so as to obtain results that traditionally
only were achieved by modifying the physical conditions of the system and
measuring the effect. This becomes especially useful for systems like those
studied in cosmology, where one cannot alter for instance the structure of a
planet, or in the social sciences, where we cannot change the economic
conditions of a nation in order to determine which variables are important.
Given that many processes in nature are expressible through models which
permit simulating their evolution with a computer, one could try to measure
the complexity of a system by the difficulty of representing it in an
algorithm, that is, with a computer program. How is that difficulty measured?
Different formulas have been proposed, like measuring the minimum time
necessary for a machine to execute the program, or alternatively measuring
the minimum memory capacity the computer would need to run that program. But
since these magnitudes depend on the type of machine utilized, it is
necessary to refer them to some ideal computer which acts as a normalizing
standard.
This abstract machine was conceived by the English mathematician Alan Turing
in 1936. It can be pictured as a mechanical device with a print head, past
which the memory moves: a paper tape as long as necessary, marked off into a
sequence of spaces; each space can be blank or marked with a line, which is
equivalent to the binary digits 0 and 1 respectively (see figure VIII.1).
Each time it uses the memory, that is, when it traverses the tape's spaces
and reads them, the machine can perform one of four operations: move a space
forwards, move a space backwards, erase a line, or print a line. Since the
tape is as long as needed, it is understood that the machine has an unlimited
storage capacity for data, and also that no fixed limit of time is given for
completing its operations.
The machine begins to operate when the program is introduced, and
continues until finishing by printing the result. In this manner it is capable
of performing any program expressed in binary code, or that is, in a
mathematical language formed from two unique signs 0 and 1.
What characterizes Turing's universal machine is that, given an adequate
input program, it can simulate the behavior of any other digital computer, even
the most complex. Obviously it is much slower, so that no one has tried to
construct one, despite its simplicity.
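The machine just described can be sketched in a few lines of code. The sketch below is a toy illustration, not Turing's original formulation: a transition table maps each pair (current state, symbol read) to one of the four allowed operations plus a next state, and the example program, which is entirely hypothetical, writes three 1s on a blank tape and halts.

```python
from collections import defaultdict

def run_turing(program, steps=10_000):
    """Run a toy Turing machine.

    `program` maps (state, symbol) -> (operation, next_state), where the
    operation is one of the four allowed: "left", "right", "erase", "print".
    The tape is unbounded and initially blank (all 0s)."""
    tape = defaultdict(int)  # unbounded tape, blank = 0
    head, state = 0, "start"
    for _ in range(steps):
        if state == "halt":
            break
        op, state = program[(state, tape[head])]
        if op == "print":
            tape[head] = 1
        elif op == "erase":
            tape[head] = 0
        elif op == "right":
            head += 1
        elif op == "left":
            head -= 1
    return [tape[i] for i in sorted(tape)]

# Hypothetical example program: print 1 then move right, three times, then halt.
program = {
    ("start", 0): ("print", "a"),
    ("a", 1): ("right", "b"),
    ("b", 0): ("print", "c"),
    ("c", 1): ("right", "d"),
    ("d", 0): ("print", "halt"),
}
print(run_turing(program))  # [1, 1, 1]
```

Any digital computation can in principle be encoded as such a table, which is what makes the machine a universal standard of comparison.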
We now have the instrument for precisely measuring the quantity of
information that we send in a message: the fundamental unit of information is
the bit, defined as the smallest amount of information capable of indicating
a selection between two equally probable alternatives. In binary notation, a
bit equals a digit, and can be 0 or 1. Thus, for example, the number 29 is
written 11101 in binary code and, hence, contains five bits of information.
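The count of bits in 29 can be verified in one line; Python's built-in format function is used here simply as a convenient check:

```python
# Binary representation of 29 and its length in bits.
bits = format(29, "b")
print(bits, len(bits))  # 11101 5
```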
We have here, then, the sought for normalization, that permits measuring
the complexity independently of the type of computer employed.
THERE ARE NUMBERS THAT ARE COMPUTABLE
AND OTHERS WHICH ARE NOT
We shall now attempt to define complexity with greater precision. It seems
sensible to begin with abstract objects such as numbers, since the relations
among them are, obviously, quantitative.
Turing distinguished two classes of numbers: the computable and the
non-computable. The computable are those for which there exists an algorithm,
or computer program, that when run on the machine delivers the number to us,
no matter how big it may be, even if it is infinite.
To clarify this concept, let us suppose that we have a computer connected
via satellite with another used by a friend who lives in Japan, to whom we
need to transmit certain numbers. Since we know that the more seconds of
transmission our message requires, the greater the bill the company will
present, it behooves us to compress the message to the maximum. For that it
is advisable to see whether the number has some characteristic that helps
towards that end.
Suppose that we desire to send a number like, for example:
1234567891011121314151617181920212223242526272829303132
Upon examination we find that its digits are formed by writing the first
32 whole numbers in order. Its structure follows a perfectly determinate law,
since I know that the digit 7 is followed by 8, with 9 coming next, et cetera.
Accordingly, it will be sufficient to transmit a message with the instructions
for constructing that number.
The corresponding algorithm to transmit is a very short program, which
basically executes the following instruction:
"Print the first 32 whole numbers in order"
Our friend's computer will generate the number, which can be as large as
we want changing very little the length of the program (for example, "Print
the first 1,000 whole numbers in order").
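That short program can be written directly. The sketch below is a minimal illustration of the idea: a few fixed-length instructions generate a number whose written form grows without the program growing along with it (the function name is an assumption for the example).

```python
def first_n_whole_numbers(n):
    """Return, as a string of digits, the number formed by writing
    the first n whole numbers in order: 1, 2, 3, ..."""
    return "".join(str(i) for i in range(1, n + 1))

# The same short rule reproduces the 32-number example from the text...
print(first_n_whole_numbers(32))
# ...and a far larger number, with almost no change in program length.
big = first_n_whole_numbers(1000)
print(len(big))  # thousands of digits from a one-line rule
```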
By contrast, a non-computable number is one whose only possible algorithm
turns out to be the same number written into the program.
We shall easily understand this if we wish to transmit to our friend in
Japan a number that we have constructed by rolling a die many times and
recording the figures from 1 to 6 which we successively obtain. Thus after
throwing the die 25 times we obtained this number:
3546221356431142652142663
In this instance, since each digit was generated at random, it has no
relation whatever with that which follows or precedes it, such that the only
algorithm that can reproduce the number is the program which simply copies it:
"Print 3546221356431142652142663"
In the same fashion, if I wanted to transmit the number obtained when I
kept throwing the die until completing a thousand throws, I have no other
possibility than to revert to the program:
"Print 3546221356431142652142663..."
Here the three dots signify the other 975 digits from 1 to 6. I should
resign myself to the fact that the message with the program will continue to
be as large as the number.
In summary, for a computable number it is possible to write a relatively
short computer program that will calculate it, though the number may be
infinitely large. However, for a number generated at random, not being
computable, the program that calculates it must contain the same number and
will be at least as large as it is.
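The contrast can be made concrete with a few lines of code (a sketch in Python, added here for illustration; it is not part of the original text): the program that generates the first 1,000 whole numbers is far shorter than its output, while the die-roll number admits no program shorter than itself.

```python
# A short program whose output is much longer than the program itself:
# it prints the first 1,000 whole numbers in order.
generator_program = "print(' '.join(str(i) for i in range(1, 1001)))"
output = ' '.join(str(i) for i in range(1, 1001))

# For the string of die rolls, the only available "program" embeds the
# whole number, so the program is longer than the data it reproduces.
die_rolls = "3546221356431142652142663"
copy_program = "print('" + die_rolls + "')"

print(len(generator_program), len(output))  # the program is far shorter
print(len(copy_program), len(die_rolls))    # the program is longer
```

The comparison scales: generating the first million whole numbers barely lengthens the first program, while the copy program grows exactly as fast as the random number itself.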
Therefore, the complexity of a number could be measured by the minimum
amount of instructions for a Turing machine program--that is, its minimum
length--capable of reproducing the number.
In consequence, a number generated at random corresponds to a high
complexity--of chaos--while a computable number is ordered, having low
complexity. In the middle will be the complicated numbers, generated through a
combination of chance and order.
Yet, what relation can there be between these valid considerations for
mathematical entities and the systems of objects in nature whose complexity we
intend to estimate?
One pathway for seeing the relation is the fact that the binary code which
the computer uses to express any number can also express any other
information. Each letter of the alphabet can be made to correspond with a
different sequence of ones and zeros, as can the punctuation signs, et
cetera; this permits us, for example, to consider the complexity of the
construction of words in a language.
Something ordered would be a string of letters like aaaaa, because
it can be concisely written in a program as "5 × a." However, a
random sequence of letters, like dcflksivgdhglkjthlakijgueernsedgmk, is
chaotic, not having a program that can print it which is more concise than the
sequence itself. Between both extremes is the complexity that appears in the
way letters are grouped to form words and sentences.
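A rough but concrete proxy for this idea is a general-purpose compressor (an illustrative Python sketch, not from the original text): a run of identical letters shrinks drastically, while the random-looking string from the text does not shrink at all.

```python
import zlib

# The ordered string: a long run of the letter "a" compresses to a
# handful of bytes, much like the program "1000 × a".
ordered = b"a" * 1000
print(len(zlib.compress(ordered)), len(ordered))

# The random-looking string gains nothing: its compressed form is no
# shorter than the original 34 bytes.
random_like = b"dcflksivgdhglkjthlakijgueernsedgmk"
print(len(zlib.compress(random_like)), len(random_like))
```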
This method also permits coding the information that corresponds to
physical objects. Thus, an ordered object, such as a chunk of crystal formed
of carbon atoms (a diamond), can be described by an algorithm which specifies
the quantity of carbon atoms that form it as, for example, "10^27
× carbon."
In the same manner, plants, persons and bacteria will be similar, under
this approach, to words, sentences and paragraphs: mixtures of order and
randomness.

Algorithmic complexity is defined as the length of the shortest program
that can perform a computation.

A related approach is found in the algorithmic theory of complexity
formulated by Andrei Kolmogorov, Gregory Chaitin and Ray Solomonoff. This
theory considers computation of the physical magnitude Q specified as a
sequence of digits S, and establishes that S, and hence Q,
is random if the minimum computing program required to obtain S is the
program where all that is done is to copy S:
"Print S"
Computational complexity can also be defined by this method, as the amount
of time a computer needs to resolve a certain problem and which therefore
measures its difficulty.
It is shown that there are two basic types of dynamic systems whose
movements can be computed: ordered systems and chaotic systems. The first, as
is the case with the harmonic oscillator, have orbits that require a quantity
of input information--that is, a sequence of ones and zeros--whose length is
relatively short, yet the corresponding computational process produces much
greater output information as a result. The input information can be of short
length because the initially nearby trajectories of the system separate slowly
with the passage of time, so that it is not necessary to know all the digits
of initial information to enable prediction through computation of future
states.
From the viewpoint of computational complexity, the time required to
compute a trajectory is proportional to the logarithm of the system's own
time. Thus, to calculate the trajectory for the time t = 15 seconds one
would require a computation time of 1.17, and for t = 150,000 seconds,
a time of 5.18, always much less than that which the system requires to
traverse that trajectory.
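The figures 1.17 and 5.18 appear to be simply the base-10 logarithms of the two times (the text's 1.17 is a truncation of 1.176), which a one-line check confirms (Python, added for illustration):

```python
import math

# For an ordered system, computation time grows only logarithmically
# with the system's own time t: log10(15) ≈ 1.18, log10(150000) ≈ 5.18.
for t in (15, 150_000):
    print(t, round(math.log10(t), 2))
```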
By contrast, for chaotic systems the quantity of input information is the
same as the output, since the initially close trajectories diverge
exponentially over time, and this has as a consequence that with every second
of time the system loses a decimal digit of precision in the output
information. To maintain the degree of exactitude in the output information it
is necessary, then, to compensate for that loss by adding an additional digit
of precision to the input information; hence, whoever wants to integrate a
chaotic trajectory with exactitude must introduce as much information as they
extract.
If we examine this from the point of view of computational complexity
instead of algorithmic complexity, we see that the time the computer requires
to calculate a chaotic trajectory is proportional to what the system requires
to traverse it, in such a manner that the computer needs as much time to
calculate as the chaotic system itself needs to execute the process;
accordingly, it is not possible to make a prediction.
All these methods allow quantifying the complexity of the extreme
processes of order and chaos, yet they are not very satisfactory for the
complexity that lies between the two. In effect, the complexity of a dynamic
system like a living organism is, according to the foregoing definitions,
less than that of a gas in thermodynamic equilibrium, since in the latter
entropy has reached its maximum and the movement of its atoms is totally
random. This would seem to imply that complexity and randomness are
equivalent, which takes us nowhere if the intention is to understand the
complexity of systems that display organization, such as abound in nature.
The idea of complexity does not coincide with that of entropy, for there are
systems which can have equal order, or equal disorder, and yet differ in
complexity.
C. Bennett, of IBM Research, and other researchers have proposed a
different approach, one that yields a distinction by which the complexity
of a fish turns out to be greater than that of a crystal or of a gas.
The evolution of life on Earth shows us ever more organized organisms,
which is to say that something we can call complexity appears to be
increasing, even though the quantity of order in the Universe is diminishing.
Bennett's proposal is to measure that process of organization, especially
for the self-organized systems that appear in nature. What characterizes them
is that, in addition to being complex, they begin from initially simple
systems (a layer of fluid which is heated, a solution of chemical substances,
a fertilized egg). The idea of the organization or complexity of a system will
thus be strictly linked to the process that can lead from that simple initial
system to the totally developed complex system.
Complexity will then be measured by the time it takes a computer to
simulate the entire development of that system until arriving at its
definitive organization, counting the number of "logical steps" along the
length of the process. It has for an input algorithm the basic rules of the
system's growth, which include, for example, genetic data if we are dealing
with human beings. Thus, taking natural selection as a theory of the origin of
the human species, the number of logical steps will be the estimated number of
times that our genetic material has been modified since it was contained in an
initial bacterium. The complexity will then be measured by the time a computer
requires to simulate that evolution passing through all those logical steps.
Yet none of the proposed definitions of the complexity that lies between
order and chaos has been accepted by the scientific community as totally
satisfactory; there are new proposals and the debate still continues.
Meanwhile, the lack of a definition is not an impediment, for the study of
complexity is in itself registering important advances, in a process that
resembles the one which accompanied comprehension of the nature of heat and
its relation to energy. Advances in the science of heat were being made from
the beginning of the 18th century, but only by the middle of the 19th century
could heat be defined with precision, on the basis of the kinetic theory. It
can be hoped that the advances already achieved will permit defining
complexity in a much shorter time.
IX. Applications in biology and economics
THERE EXISTS a mathematical equation especially well suited for examining
these properties common to various systems and, given its importance, we now
turn to describing it.
THE LOGISTIC EQUATION
This is the logistic equation, a simple equation that has proved very
fruitful in applications across the many fields that study complex systems:
ecology, biology, economics, et cetera.
It is an equation that operates upon one number and transforms it into
another, upon which it operates again, and so on repeatedly, in an iterative
process. Its special characteristics only become evident when the number of
iterations is large, so that to apply it at least a pocket calculator is
required, though a computer is the most suitable tool.
The logistic equation produces two opposite effects on any number:
1) it increments it, producing another, greater number which, in turn,
is incremented again by the equation, and so on repeatedly; 2) it keeps
reducing the resulting numbers as they grow, such that we have here a
process with controlled feedback.
What happens once the equation has operated a good number of times?
Common sense tells us there should finally result some intermediate
number, neither too large nor too small.
Yet here comes the grand surprise: this expectation can be totally
mistaken, so mistaken that we may even run into unsuspected behavior of a
chaotic nature.
APPLICATIONS IN BIOLOGY
The logistic equation was proposed in 1845 by the sociologist and
mathematician Pierre Verhulst; its surprising properties were made manifest by
the physicist and biologist Robert May in the 1970s, when he applied it to the
study of the population dynamics of plants or animals. In such populations
there is feedback in every natural cycle due to reproduction controlled by the
negative effect of predators or by the increasing scarcity of food, which thus
impedes those populations from growing explosively. The equation allows
calculation, starting from the characteristics of the population at a given
moment, of how these will vary over time.
We wish to know how the number of individuals in a population will vary
annually, given that in the initial year there are 1,000 and that the
population increases at a constant rhythm of 10% per year.
If control by predators or by the availability of nourishment did not
exist, we could make the following table:

Year    Number of individuals    Total
  0     1,000                    1,000
  1     1,000 + 100              1,100
  2     1,100 + 110              1,210
  3     1,210 + 121              1,331

And so on successively.
This procedure can be expressed mathematically with the equation:
X_{t+1} = K × X_t
Where t indicates the number of years elapsed starting from the
initial year, which is t = 0, and X is the variable that
symbolizes the number of individuals. Thus, X_t is that
quantity in the year t, and X_{t+1} in the following year.
The parameter K indicates the annual rate of increase of X;
in the previous example it is K = 1.1.
Accordingly, what this expression tells us is that if we know what the
population is in the year t, it is enough to multiply the corresponding
number by the rate of increase K to determine the population there will
be in the following year.
To facilitate the calculations we work with a normalized X,
meaning that it can only vary between 0 and 1. Thus, X_t = 1
corresponds to the maximum possible for that population, that is, 100%, and
X_t = 0.5 to 50%; it does not matter whether we are
considering 15,950 rabbits or 12 million trees, since all that interests us
is to calculate the annual variation of the population in relation to the
previous or subsequent values.
Returning to the example, evidently that increment of 10% annually will
lead us to an impossible situation: if we begin with 1,000 rabbits in the year
zero, 200 years later there would be some 190 billion rabbits covering the
surface of the planet; it is then necessary to add to the equation a term that
reflects the real situation, where the food will not suffice to feed the
growing population, and where, furthermore, there will continually be more
foxes and other predators which feed on rabbits.
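The runaway figure is easy to check by iteration (a Python sketch added for illustration): applying X_{t+1} = 1.1 × X_t for 200 years, starting from 1,000 individuals, gives roughly 1.9 × 10^11.

```python
# Unchecked growth at 10% per year: X_{t+1} = 1.1 * X_t,
# starting from 1,000 rabbits in year zero.
population = 1000.0
for year in range(200):
    population *= 1.1

print(f"{population:.3g}")  # about 1.9e+11 rabbits
```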
The logistic equation is, definitively, the following:
X_{t+1} = K × X_t × (1 - X_t)
Here two opposite actions are executed: the more the factor X_t grows,
the more the factor (1 - X_t) diminishes, reducing the final
result.
If, for example, K is 1.1, then for a small X_t of
0.1 the first equation gave:
X_{t+1} = 1.1 × 0.1 = 0.11
But since the reducing factor here is (1 - 0.1) = 0.9, the new equation gives:
X_{t+1} = 0.11 × 0.9 = 0.099
That is, the reducing factor affects the result very little.
When X_t grows greatly, for instance to 0.8, close to its
maximum, this factor becomes 0.2, and instead of:
X_{t+1} = 1.1 × 0.8 = 0.88
one has the result corrected by the reducing factor of 0.2:
X_{t+1} = 1.1 × 0.8 × 0.2 = 0.176
which cuts the population to a fifth of the uncorrected value.
We have, then, a mathematical expression that permits unambiguous
calculation of the value of X_t; in other words, we are dealing
with dynamic systems where the future depends in a deterministic manner upon
the past, without uncertainties.
All the information about the system is contained in the logistic
equation, and by applying it we can know how the system will vary over time.
Thus, if X_0 is the initial value of the variable whose
evolution over time we wish to know, upon arriving at time t = 1 it will
be X_1:
(a) X_1 = K × X_0 × (1 - X_0)
And for X_2 we will have:
(b) X_2 = K × X_1 × (1 - X_1)
Where we can substitute equation (a) for X_1, resulting
in the following:
X_2 = K × [K × X_0(1 - X_0)] × (1 - K × X_0(1 - X_0))
And similarly for t = 3:
X_3 = K × [K × [K × X_0(1 - X_0)] × (1 - K × X_0(1 - X_0))] ×
(1 - [K × [K × X_0(1 - X_0)] × (1 - K × X_0(1 - X_0))])
As can be seen, the equation progressively and rapidly turns into an
ever more complicated formula; if we were to attempt to know the future for
t = 20 (20 years, or 20 generations, or whatever we use as a measure of
time) we would need some 300 pages merely to write out
X_20, and to arrive at a time t = 50, which is not unrealistically far
in the future, the formula would not fit in a library.
In the computer era it is meaningless to follow this path, for if there
is something those artifacts know how to do, it is to repeat the same
operation at great velocity as often as desired, and they do so, additionally,
without mistakes or boredom.
It follows that, instead of pursuing the classical method of writing an
equation valid for every possible t, X_0 and K, so as later
to introduce into it the numbers corresponding to the case of interest,
what we shall do is give the computer the initial data X_0 and K,
the number of iterations that we need, and the instructions for executing
the basic calculation which, as we have seen, is very elementary, consisting
of one subtraction and two multiplications.
With the program thus prepared, any personal computer permits us to obtain
values such as X_{100,000} or any other in a very short while,
and also provides experiences like seeing step by step how
X_t varies as t is incremented, and what happens when K is changed.
Whoever desires to feel the power of iterative calculation can do so
without the need of using a computer: a simple pocket calculator is
sufficient, and a good dose of patience.
If both are available, I propose performing a very simple iterative
calculation, which consists in squaring a number, squaring the result again,
and so on successively, up to ten times in all.
We input the number 0.9999 into the calculator and press the "x²" key
or, if this is unavailable, that of multiplication, "×", followed by "=".
We repeat the same operation ten times. A calculator of eight digits gives
as the result:
X_0 = 0.9999
X_10 = 0.9026637
Let us now see what happens when the initial condition is slightly varied.
What will happen if we use as X_0 a value that differs by only 0.1%
from the preceding one? For:
X_0 = 0.999
It now is:
X_10 = 0.3589714
Thereby the final result is barely 40% of the preceding one, though the
initial value changed by only 0.1%!
We have here an example of sensitivity to the initial conditions which,
as we have seen, is a necessary condition for the appearance of chaotic
phenomena and of complexity in the dynamic systems.
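The calculator exercise is easy to replay in code (Python here, purely as an illustration): ten successive squarings turn a 0.1% difference in the starting value into final results that differ by more than a factor of two.

```python
# Square the starting value ten times in a row, as with the
# calculator's x² key.
def square_ten_times(x):
    for _ in range(10):
        x = x * x
    return x

print(square_ten_times(0.9999))  # ≈ 0.9026637
print(square_ten_times(0.999))   # ≈ 0.3589714, about 40% of the first
```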
HISTORIES OF FISHES AND CRUSTACEANS
In these iterative mathematical operations other unexpected characteristics
appear, which manifest themselves after some dozens of cycles of
calculation, and which we now describe. Anyone wishing to reproduce them
should preferably use a computer, unless one enjoys doing long calculations
with a pocket calculator.
The program with instructions for the computer has this general form in the
BASIC language:
INPUT K
X = 0.6
FOR N = 1 TO 100
X = K * X * (1 - X)
PRINT X
NEXT N
STOP
Where K is the rate of increase, and we have set the number of
iterations at 100 and the initial X to 0.6.
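For readers without a BASIC interpreter, here is a minimal transcription in Python (an added sketch; the starting value 0.6 and the 100 iterations are the same as above):

```python
# Iterate the logistic equation X = K * X * (1 - X), as in the BASIC
# program, returning the whole series of values.
def logistic_series(K, x=0.6, n=100):
    values = []
    for _ in range(n):
        x = K * x * (1 - x)
        values.append(x)
    return values

# With K = 2 the population settles onto a stable value:
print(logistic_series(2.0)[-1])  # prints 0.5
```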
To demonstrate the surprising behavior of the logistic equation, we
shall begin by comparing it with the results of the mathematical studies
performed by Vito Volterra in the 1920s to explain periodic fluctuations in
the fish population of the Mediterranean.
Let us consider a population of crustaceans, and of the fishes that feed
on them, and we shall assume that the crustaceans have a low rate of
reproduction, K = 1.01, and that their initial population is
X_0 = 0.6.
Making the calculation shows the population decreasing every year
such that, after a time, the colony will disappear:

 t      X           t      X
 0      0.6000      5      0.1147
 1      0.2424     10      0.0732

But what happens when K is greater, for example K = 2?

 t      X           t      X
 0      0.6000      3      0.5000
 1      0.4800      4      0.5000
 2      0.4992      5      0.5000

The population remains stabilized at 0.5.
If the reproduction rate increases to 2.7, the equation shows an annual
fluctuation, varying between 0.61 and 0.64 due to the opposition between
growth and the action of the fishes; but finally, after about 15 cycles, it
stabilizes at approximately X = 0.63, a value which is therefore the
attractor of this behavior.
Yet it is starting with a rate of reproduction K greater than 3.0
that something new happens: the system fluctuates strongly at the outset, and
finally the population of the colony oscillates between two stable values,
indicating that the attractor has bifurcated in two.
Thus, for K = 3.3:

 t      X            t      X
 0      0.60000     14      0.47941
 1      0.79200     15      0.82360
 2      0.54363     97      0.47943
 3      0.81872     98      0.82360
 4      0.48978     99      0.47943
 5      0.82466    100      0.82360

The stable values here are 0.47943 and 0.82360, and each of them repeats
every two years.
Upon representing on a graph the annual variation in the population of
crustaceans (see figure IX.1) we see that, once the fluctuations have
stabilized, if in one year they increase to 0.82360, this turns out to be a
veritable feast for the fishes, which, in turn, increase so much that they
lower the quantity of crustaceans to the lower level of 0.47943 in the
following season. This diminishes the fish population through scarcity of
food, allowing the crustaceans to abound again in the following year, and so
on cyclically.
The process described can also be represented, as Volterra did, in
phase space, which permits visualizing the cyclical behavior of the
fish-crustacean system in figure IX.2, where the vertical axis indicates
the crustacean population and the horizontal that of the fishes.
If we start the colony at A with X_0 = 0.6, the crustaceans will increase
along curve 1, which simultaneously marks the increase in predators, until
reaching level B. From there the population of fishes dominates,
taking the system around curve 2 with a diminishing of crustaceans, which
drags the fish population toward a minimum, and this repeats itself
cyclically with a period of, in this case, two seasons.
The logistic equation also fits very well other natural cycles, like
those of insects and bacteria, which were studied by R. May in the 1970s.
Let us continue exploring the equation through the computer, which at
this end of the 20th century fulfills a fundamental role as a tool of
science, analogous to that of the microscope and the telescope in previous
centuries.
If we increase K to 3.45 or beyond, the two values that repeated every
two years become unstable in turn, and each one bifurcates to produce a
population which oscillates among four different values, repeated every four
years, so that the period has doubled.
For K = 3.45:

 t      X            t      X
 0      0.6         76      0.42713
72      0.42688     97      0.84470
73      0.84405     98      0.45258
74      0.45412     99      0.85474
75      0.85524    100      0.42835

The stable values are found around 0.42820, 0.45290, 0.84410, and 0.85320.
With K = 3.56 a new instability is produced, with bifurcations that
yield eight fixed values, with a period double the preceding one; this occurs
again near K = 3.564, giving 16 values, and later more and more
bifurcations keep appearing, until finally at K = 3.56999 the system
arrives at a chaotic state, with myriad values of X for the colony
oscillating in an unpredictable way between 0 and 1.
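The whole cascade can be surveyed numerically (an illustrative Python sketch; discarding a long transient and rounding to four decimals are assumptions of this sketch, used to count how many distinct long-run values survive for each K):

```python
# Count the distinct long-run values of the logistic equation for a
# given K: discard a long transient, then collect rounded samples.
def attractor_size(K, x=0.6, transient=100_000, samples=128):
    for _ in range(transient):
        x = K * x * (1 - x)
    seen = set()
    for _ in range(samples):
        x = K * x * (1 - x)
        seen.add(round(x, 4))
    return len(seen)

for K in (2.0, 3.3, 3.45, 3.7):
    print(K, attractor_size(K))  # 1, 2, 4, then many values
```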
ROUTES TO CHAOS
We see in this example that a dynamic system can begin from an ordered state
and under certain conditions evolve towards a chaotic state.
Actually there exist various routes between regular, ordered behavior and
the chaotic, unpredictable state, over which dynamic systems can move; also,
at some stages of these routes, complex behaviors appear that can give the
systems surprising properties.
Such routes can be traveled in both directions: ordered dynamic systems
can pass into complex behavior or reach a chaotic one, as occurs with
pendulums and other oscillators, or pass from chaos to an organized
complexity, as in chemical clocks or Bénard cells.
We have seen that for these behaviors to appear in systems whose number
of components can be small or very large, an indispensable requirement is
that the number of variables which intervene in the system's dynamics be
limited. If the number of independent variables, or degrees of
freedom, is less than three, the behaviors analyzed do not appear; moreover,
if that number is very large, it will be impossible to distinguish by these
methods whether the system's evolution is due to the phenomena we are studying
or obeys fundamentally random factors.
The routes between order and chaos can be classified into three principal
types according to the different modalities under which those transitions are
produced:
1) Quasi-periodicity. The system is represented in phase space
with a quasi-periodic attractor inscribed on a torus, and the transition
transforms it into a strange attractor.
2) Sub-harmonic cascades. The system displays oscillations of a
certain period T, and at the start of the transition a bifurcation is
produced, others appearing with the doubled period 2T, later 4T,
8T and so on successively, in a cascade. This is observed, among other
cases, in phenomena of thermal convection (chapter VI) and in the BZ
reaction (chapter VII).
3) Intermittencies. The system sporadically produces fluctuations
of great amplitude. These transitions usually appear in hydrodynamic processes
and also in the oscillations of electronic circuits, where they manifest as
a low-frequency noise which appears occasionally.
With a basis in the foregoing classification, in the case of the logistic
equation the variation of the parameter K implies a route to chaos
through duplication of periods, that is, the subharmonic cascade.
As we see, the role of K is to define the complexity of the
behavior, and it becomes convenient to visualize its effect on the system's
evolution through a graphic like that of figure IX.3, where we represent the
population on the vertical axis and the value of K on the horizontal.
WHERE CHAOS APPEARS
The figure shows the panorama that R. May found with his model based upon
the logistic equation, where K varied with the provision of food.
Again, the figure represents the population on the vertical axis and the
parameter K on the horizontal axis.
He thus discovered the successive bifurcations that indicate the growth
of the oscillations in the insect population, which later, after various
bifurcations, entered the chaotic zone indicated in the figure, where the
population in the model fluctuates erratically, as occurs in reality with
that of the insects.
We have depicted in the frames of figure IX.3 some of these situations,
each responding to a specific value of K. Thus for K = 1.01,
the population P, which initially was 0.6, diminishes over the time
T represented on the horizontal axis until reaching zero: it is
extinguished.
The frame for K = 2 indicates to us that for that value, after
some cycles with fluctuations the population becomes stable.
For K = 3.3, the population oscillates between the two values
0.47943 and 0.82360, taking one of those values in one cycle, and the other in
the following cycle. These two values are further represented in the principal
graphic: if at K = 3.3 we trace a vertical line, it will cut the two
branches of the bifurcated curve, at the values mentioned for P.
In the same manner, for K = 3.45 there are four values that repeat
in successive cycles, values which correspond on the principal curve to the
intersections of a vertical line with the four branches. With K = 3.7,
there are so many distinct values of P that it is not possible to
discover any relation between one cycle and the next; on the principal curve,
a vertical line passes through thousands of intersections, marked as points
on the vertical stripes. This is the zone of deterministic chaos. Finally,
for K = 4, one has chaos with infinite possible values in each cycle.
As of today, one of the questions posed by ecologists is that of
determining whether the behavior predicted by this model occurs in real
populations.
Opinions among biologists concerning these theories are divided: they run
from those who think that the unpredictability inherent in deterministic
chaos is a very important factor in explaining the evolution of species, to
those who consider that real populations cannot have chaotic dynamics,
because they would become extinct, and that researchers working in
laboratories and through computer simulation are too removed from the cases
that appear in nature.
Answering this is not a simple task, because in ecological systems it
turns out to be very difficult to separate out the multiple environmental
factors, nor can parameters like the reproduction rate be varied in order to
observe their effect.
In any case experiments have been made on animal or vegetable populations
isolated in the laboratory, where indeed some conditions can be altered,
varying, for instance, the surrounding temperature to accelerate the
metabolism of horse flies or of protozoa.
The studies of these experimental populations effectively revealed the
transitions that correspond to the first bifurcations, but without a
duplication of periods as clear and vivid as that which appears in physical
systems.
What unequivocally appears is the transition to chaos.
The other powerful method being used for the study of population
dynamics is computer simulation, in which the researchers create a model
that generates series of data representing the size of a population over the
course of many generations. This is an imaginary world, where the researcher
specifies at will the factors which govern the system, and later analyzes the
data using the methods that are applied to real living systems.
This mathematical method permits constructing a phase space with
as many dimensions as there are independent variables in play, and searching
for strange attractors which, if they appear, are evidence that one is
dealing with a non-linear, deterministic dynamic system.
One of the difficulties that presents itself is that each species usually
interacts with many others, and for each of these a variable or dimension
must be added, so that one has to work with multidimensional phase spaces,
where it is very easy to confuse statistical fluctuations with the presence
of attractors.
The method has been applied to instances like that of the measles
epidemics which appeared in New York over a period of 40 years, revealing
the presence of a three-dimensional attractor in a space of four
dimensions.
RHYTHMS IN LIVING ORGANISMS
Mathematical methods have also been applied in the search for indications
of deterministic chaos in the dynamics of individual organisms, that is, in
rhythmic physiological and neurobiological processes, which permits studying
them as sets of mutually influencing oscillators that have feedback cycles
with a non-linear dynamic.
A highly studied case is that of cardiac rhythms, which can throw light
on arrhythmias and upon the interpretation of electrocardiograms before and
after a heart attack.
The biophysicist R. Cohen performed a computer simulation of cardiac
rhythms and showed that in the prelude to a heart attack there appears a
bifurcation of the heartbeat period. This can be explained by considering
that the electrical pulses which force the muscular fibers of the ventricles
to contract, and thus pump the blood, work as an oscillator system with a
regular period. If through some pathology there appears an alteration in the
oscillation period of the electrical pulses in a group of fibers, the two
different oscillations combined can enter into the situation we examined for
two coupled pendulums, with a cascade of bifurcations of period 2T,
4T... until the heart is paralyzed.
It is clear that here we are dealing with an effect which appears in a
computer simulation, and that it is not easy to obtain experimental data
during a heart attack, for doctors and patients are sufficiently occupied
with trying to overcome the crisis. So the closest one can come to the real
phenomenon is to take measurements in a laboratory.
In 1980, the physiologist L. Glass initiated a series of investigations
with heart cells from chicken embryos. These, in a culture medium, keep
pulsing spontaneously at a rhythm of 60 to 120 beats per minute, and thus
constitute a natural oscillator. If a microelectrode is introduced into that
mass of cells, periodic electrical shocks can be applied to it, so that one
now has an oscillating system with two coupled rhythms, one intrinsic and the
other forced. The latter can be varied, letting us observe how the heartbeat
changes, with periods 2T appearing, later 4T, and also totally
irregular, chaotic situations which suggest fibrillation.
Another application appears in neurophysiology, where researchers have
analyzed the dynamic complexity of electroencephalograms (EEG) which register
the cerebral activity of human beings performing diverse tasks such as, for
example, counting backwards from 700 by sevens. The fractal dimension was
computed for the attractor that appears when the set of oscillations
registered by the EEG is represented in phase space, and it was found to move
from the basic value of 2.3 to around 2.9 when the subject is making the
effort of counting backwards. The conclusion reached is that forms of the EEG
with a higher fractal dimension, that is, more complex forms, correspond to a
more alert mental state.
Yet all these physiological and neurological studies expose the same weak
flank as the animal studies: the difficulty of applying these theories to the
actual cases that appear in nature, which are much more complex than the
models that can be established with a computer.
The difficulties in applying these methods of study are recognized by R.
May, who has nevertheless promoted them for twenty years, on the basis that
populations as well as biological processes are governed by non-linear
mechanisms, and hence should display chaotic behaviors in addition to the
stable cycles.
It still seems premature to pronounce on the final result. Evidently the
application techniques of what is, in the last analysis, a very recent method
must keep being perfected before one can determine to what degree these
theories on the transitions between order and chaos explain the dynamics of
living beings.
ECONOMIC CYCLES
The field of the social sciences is also working intensely on the
application of these novel concepts. In particular, it is attractive for
economists to try out their capacity for prediction on the financial markets.
In the final analysis, the financial markets and the nations' economies are
dynamic systems with mechanisms of feedback and self-regulation. We know that
if the price of a product is raised in an excessive manner, demand diminishes
and the price should fall. So here too one can try to create a model based
upon the logistic equation:
Pt+1 = A × Pt × (1 - Pt)
where we call Pt the price on the day, month or moment t,
A the rate of increase, and Pt+1 the price in the following period.
As we have seen previously, the properties of this mathematical equation
indicate that for A less than 3, after a certain time the price
P will be stable; that if A is greater than 3, the price will
have periodic fluctuations, with cycles that oscillate between two values at
first, then pass to four, and so on successively; until, for an even greater
A, the value of P can behave in unusual ways that never repeat,
that is to say, chaotically. In such a situation, a graphic of the values of
P can easily be confused with a series of numbers generated at random.
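The behavior just described is easy to reproduce numerically. Here is a minimal sketch in Python (the function names are ours, and the model is of course a drastic simplification of any real market):

```python
def next_price(a, p):
    # one step of the logistic model: P(t+1) = A * P(t) * (1 - P(t))
    return a * p * (1 - p)

def simulate(a, p0, steps):
    prices = [p0]
    for _ in range(steps):
        prices.append(next_price(a, prices[-1]))
    return prices

# A = 2.5: the price settles onto the stable value 1 - 1/A = 0.6
stable = simulate(2.5, 0.2, 200)

# A = 3.9: no settling; two almost equal starting prices soon diverge
a_run = simulate(3.9, 0.200, 200)
b_run = simulate(3.9, 0.201, 200)
```

For A = 2.5 the last values of `stable` approach 0.6, while for A = 3.9 the two runs, despite starting only 0.001 apart, soon differ by amounts as large as the prices themselves: the sensitivity to initial values described in the text.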
In recent years various research groups have confronted this problem of
applying the approach physics uses for non-linear dynamic systems to the field
of the economy.
The most well known is that begun in the Santa Fe Institute, in New
Mexico, where in 1987 three winners of the Nobel prize (the economist Kenneth
Arrow and the physicists Murray Gell-Mann and Philip Anderson) brought together
economists, physicists, biologists, and specialists in computation to begin a
program of research on the economy considered as a complex dynamic system.
They have managed to construct computer models of the economy in which the
investors are, in turn, computer programs capable of recognizing rules of
variation in prices and of acting in consequence, and which furthermore
learn from experience. The results of these computer simulations are beginning
to approach what is observed in actual economies.
Yet, just as we saw when equations of this type were applied to ecological
systems and living organisms, economic reality is far more complex still:
it is affected by constant changes in society, and the price of every product
is linked to that of many others.
Thus it is possible that the effects of deterministic chaos that appear
in these mathematical models may not be present in a real economy, and that it
may remain equally valid to describe the economy through linear processes
with data which have an important component of "noise," that is, of chance
fluctuations.
In order to decide which approach is more adequate, real cases must be
examined, which is very difficult because it requires very large sets of data,
and the data available are in general scarce, besides having a strong noise
component.
But in recent years new techniques have been invented for statistical
analysis that are capable of distinguishing between fluctuations due to chance
and those that might exhibit regularity if they are adequately examined.
The basic idea is that complex systems can reveal their structure if the
data are translated into a phase space with an appropriate number of
dimensions.
These methods have been applied with encouraging results to studies of
populations and other biological systems, of turbulent fluids, and now also of
these economic models. Among the most powerful are the algorithms that use a
single magnitude (population, temperature, price, or whatever yields a
sufficiently large series of data) taken at regular intervals of time.
In its most elementary form, this temporal series is a list of numbers
which represent the experimental data, for example, a population of bacteria
measured every hour:
0.453; 0.671; 0.632; 0.661; 0.702; 0.799; 0.530; 0.501...
We wish to determine whether an attractor appears in a phase space of at
least three dimensions. From everything we have seen here, that requires
representing the measurements of three independent variables, yet the
available data give us information about only one variable. Is there a
solution to this situation?
The mathematicians D. Ruelle and N. Packard discovered a trick that
resolves the problem, and F. Takens succeeded in demonstrating that this
artifice is mathematically correct.
The method consists in fabricating another two series with the same
measurement values, but displaced over time. Calling the successive values
obtained X1,X2,X3...
the pseudoseries are constructed:
Y1 = X2, Y2 = X3, Y3 = X4...
Z1 = X3, Z2 = X4, Z3 = X5...
That is, instead of a single temporal series, we have the original
X and two copies displaced one and two places in time,
Y and Z.
Thus for time t = 1 we represent in the three-dimensional
phase space the point X = 0.453, Y = 0.671, Z = 0.632;
for t = 2, the point X = 0.671, Y = 0.632, Z = 0.661;
and so on successively.
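This construction of displaced copies (a "delay embedding") is mechanical enough to sketch in a few lines of Python; the function name and default parameters are ours:

```python
def delay_embed(series, dim=3, delay=1):
    """Turn one time series into dim-dimensional points by pairing each
    value with its neighbors displaced by delay, 2*delay, ... steps."""
    count = len(series) - (dim - 1) * delay
    return [tuple(series[i + j * delay] for j in range(dim))
            for i in range(count)]

# the bacterial-population series quoted above
data = [0.453, 0.671, 0.632, 0.661, 0.702, 0.799, 0.530, 0.501]
points = delay_embed(data)
```

The first point is (0.453, 0.671, 0.632) and the second (0.671, 0.632, 0.661), exactly the X, Y, Z triples described in the text; plotting `points` in three dimensions is what reveals, or fails to reveal, an attractor.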
If the result is that the points remain grouped in a limited region
forming an attractor, this is an indicator that there is a regularity, some
periodic behavior. If, however, they are spread more or less uniformly
throughout the entire phase space, we may be dealing with a random phenomenon,
or it may be that a space with more dimensions is required for an attractor to
appear.
It is clear that the higher the number of necessary dimensions, the more
doubtful becomes the presence of an attractor.
One of the difficulties in applying these methods is that, even for
dynamic systems like turbulent fluids, it is not easy to discern which are the
pertinent variables. This is even truer for systems studied in fields far from
physics, where it is more difficult still to determine the dimension of the
phase space.
X. Order, chaos and complexity in mathematics
WE HAVE seen some of the surprising qualities of the logistic equation,
discovered when R. May applied it to the field of biology, which caused many
mathematicians to begin studying it in detail, performing large calculations
with their computers. This establishes a new way of investigating the laws of
mathematics, one that differs from the approach in which all knowledge is
attained through logical steps within an abstract framework rather than
through experiments. Thanks to computers, mathematical experimentation is
practiced today, particularly as applied to dynamics, given the role that
iterative processes play there.
THE LOGISTIC EQUATION REVEALS A VERY COMPLEX WORLD
A simple equation like the logistic one reveals aspects unsuspected until
several million calculations are made, which indicate a very intimate relation
between the dynamics of complex systems and the structure of number systems.
From the moment the iterations of this type of equation reveal
sensitivity to the initial values, it is shown that not only can systems of
the physical world be deterministic and unpredictable at the same time, but
that this is also true of systems of the mathematical world.
That impossibility of predicting a result in the long term could only be
eliminated if one measured the physical world with infinite precision or, in
the mathematical world, calculated using all the infinite digits of which the
majority of numbers are formed.
Let us now examine the surprising results obtained by calculating the
logistic equation for a K greater than 3.5, where complex behavior
appears (see figure X.1). We see that the period T bifurcates in an
ever more rapid cascade as K increases, and that the distances between
the corresponding values of X become ever smaller. In a zone beginning
at K = 3.57 we have, for each growing value of K, respectively
1,024, 2,048, 4,096... periods, and it would take a microscope to distinguish
the structure formed by the points.
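The cascade can be observed directly by iterating the equation and counting how long the orbit takes to repeat itself. A sketch (the tolerance, iteration counts and starting point are our own arbitrary choices):

```python
def logistic(k, x):
    return k * x * (1 - x)

def find_period(k, x0=0.123, transient=10_000, max_period=64, tol=1e-9):
    """Iterate past the transient, then return the smallest p such that
    the orbit repeats after p steps, or None if no short cycle is found."""
    x = x0
    for _ in range(transient):
        x = logistic(k, x)
    ref = x
    for p in range(1, max_period + 1):
        x = logistic(k, x)
        if abs(x - ref) < tol:
            return p
    return None

# the doubling cascade 1, 2, 4, 8... and the period-3 window at K = 3.83
for k in (2.8, 3.2, 3.5, 3.55, 3.83, 4.0):
    print(k, find_period(k))
```

For K = 4.0 the function returns None: within the tolerance the orbit never revisits itself, which is the numerical face of chaos.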
Another notable property is that of "renormalization," discovered by M.
Feigenbaum in the 1970s: for sufficiently high bifurcations, for instance with
2,048 periods, when these bifurcate again to 4,096 they repeat the structure
of the 2,048, provided the scales are increased by very precise factors: the
increase in K must be 4.66920166... and the increase in X must be
2.502908... These Feigenbaum numbers are universal, like π, because the same
cascading structure of bifurcations and the same Feigenbaum numbers also
appear in other equations, provided they are continuous functions of X
with only one maximum.
Cascading bifurcations and the Feigenbaum numbers appear not only in the
calculations done by mathematicians with a computer, but also when many
behaviors in nature are mathematically represented.
A more detailed graphic depicts the variation of X with respect to
K (see figure X.2). We see that beyond the bifurcations, for a
K between 3.55 and 4, vertical stripes appear covered with spots that
correspond to the myriad places where the system could be for a given value of
K; so if the computer generates a value corresponding to a point
X = 0.57739, for example, the next one will lie somewhere between the
upper and lower extremes of the dark stripe, and that is all we can anticipate
until the computer reveals the result of the new calculation.
From K = 4 onward the results show no structure whatsoever between
X = 0 and X = 1. The points in this zone are indistinguishable
from ones that could have been marked by randomly generated data, despite our
using a perfectly deterministic mathematical equation.
We cannot attribute this impossibility of prediction to the existence of
unknown factors, because the equation has no ambiguity. The reason is that, as
we have seen, we can neither measure nor represent a present state with
infinite precision; and even if this limitation is unimportant in many
situations, we now perceive that non-linear behaviors abound, in which
insignificant causes we thought could not affect a dynamic system produce
effects so disproportionately large that they change its behavior in an
unpredictable fashion.
We can consider the region between K = 3.55 and K = 4 to
correspond to complexity and, starting from K = 4, to chaos with points
which fill the entire space.
WINDOWS LIKE ISLANDS IN A CHAOTIC SEA
In the zone of complexity, darker curves can be observed, which indicate
a greater concentration of points, that is, a greater probability of the
system's presence; and also something especially significant: note that there
are white vertical bands interspersed, crossed by a few lines. These are
called windows, and indicate a return to regular cyclical behavior, where two
or three different periods appear. Such intervals with low oscillation
frequencies are termed intermittencies.
If one examines a window of three periods, like that at K = 3.83,
with higher definition, that is to say, with more computer calculations, and
represents at an amplified scale the central zone where we have enclosed box
A in figure X.2, we encounter another great surprise: the period
repeats its bifurcation, initiating a cascade with successive duplications of
6, 12, 24, 48..., repeating the original scheme in miniature, with windows
that in turn exhibit repeated cascading bifurcations (box B), within
which there are other windows with their own cascades, and so on as many times
as one cares to explore this unknown universe.
Here we have the same phenomenon of self-similarity that we had found in
fractal figures.
MANDELBROT SETS AND COMPLEX PLANES
Let us return then to the concept of generating fractals through the process
of iteration. When the mathematician B. Mandelbrot reviewed the work of Gaston
Julia, a disciple of H. Poincaré, on iterative calculations with complex
numbers, he decided that it suggested a path for constructing fractal
figures starting from mathematical equations.
It is necessary here to review the concept of a complex number. As we
know, there are various classes of numbers, the most basic and elementary
being the natural numbers: 0, 1, 2, 3... These can be added or multiplied
to produce other natural numbers. But if we want a systematic method, it is
convenient to introduce the negative numbers -1, -2, -3...
Up to here we deal only with whole numbers, and this creates problems
when we try to divide one whole number by another, so we need the fractions or
rational numbers 1/2, -1/2, 1/3, -1/3, 3/2, -3/2..., and the irrational
numbers, which require an infinite number of digits to be expressed, such as
π and √2.
All those different types of numbers form the system of real numbers. But
one limitation remains: the square root can only be obtained for numbers that
are positive; thus the root of 9 is 3, which multiplied by itself regenerates
the 9, but the root of -9 does not exist, since -3 × -3 is also 9. The square
roots of negative numbers were consequently denominated "imaginary" numbers,
to differentiate them from the "real" ones; yet since it turns out to be
supremely convenient to use them in calculations, the difficulty was resolved
by "inventing" a square root, called i, for the negative number -1. It
then results that i² = -1, and the mathematical nature of i need not
worry us. Any negative number -a can be written as -1 × a, and its square
root √-a can be written as √-1 × √a, that is, i√a.
With this subterfuge these numbers can thus be considered as real as the
"real," so achieving major versatility for the numerical system.
Finally, if we combine "real" numbers with "imaginaries," the result is a
number we call complex, not because it is complicated, but because it has
various components. A complex number has the form:
Z = X + iY
where X and Y are ordinary real numbers, and i is the square root of -1.
Like the real numbers, complex numbers can be visualized by representing
them graphically in a system of coordinates, but in a complex plane, where the
vertical axis is iY and the horizontal is X. The point Z is
located at the intersection of the lines parallel to the two axes that pass
through its value of X and of iY (figure X.3).
Complex numbers have their own arithmetic, algebra and analysis, and on
graphing the result of performing mathematical operations on them, aspects of
great importance and strange beauty are discovered.
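Python, for one, has complex numbers built into the language (it writes the imaginary unit i as 1j), which makes this arithmetic easy to try out:

```python
import cmath

i = 1j                     # Python's notation for the imaginary unit
print(i * i)               # (-1+0j): i squared is -1

z = complex(3.0, 4.0)      # the point X = 3, Y = 4 of the complex plane
print(z.real, z.imag)      # its horizontal and vertical coordinates
print(abs(z))              # 5.0, its distance from the origin

# the root of a negative number -a is i times the root of a
print(cmath.sqrt(-9))      # 3j
```

The two components X and Y, and the distance from the origin, are exactly what one reads off the complex plane of figure X.3.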
Mandelbrot started from a very simple iterative operation: assign an
initial value to the complex variable Z, square it, and add a constant
number that we shall call C:
Z = Z² + C
The Z so obtained is again squared and added to the same C,
and so on successively, an operation whose properties require a computer to
discover, for they stand out when the graphical result in the complex plane
appears on the screen (figure X.4). The heart-shaped figure, called a
"Mandelbrot set," represents all the values of Z that will never reach
an infinite value, no matter how many iterations are done. Surrounding the
figure are all the points that do tend to an infinite value as ever more
iterations are performed.
The frontier between the two zones has properties of self-similarity and a
complexity so great that it can only be captured when the computer is
instructed to shade the points in different tones of gray, or better yet in
distinct colors, according to the different velocities with which Z
grows upon iteration.
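Those velocities of growth are what fractal programs actually compute: for each value of C they iterate from Z = 0 and count the steps until |Z| exceeds 2, after which divergence to infinity is guaranteed. A minimal sketch of this standard escape-time scheme (the cutoff of 100 iterations is our own choice):

```python
def escape_time(c, max_iter=100):
    """Iterate Z -> Z*Z + C from Z = 0; return the step at which |Z|
    exceeds 2, or None if Z stays bounded for max_iter steps (so C is
    taken to belong to the Mandelbrot set)."""
    z = 0j
    for step in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return step
    return None

print(escape_time(0j))          # None: C = 0 is inside the set
print(escape_time(1 + 0j))      # 2: the orbit 0, 1, 2, 5... escapes fast
print(escape_time(-1.75 + 0j))  # None: the small island on the real axis
```

Coloring each point of the plane by its escape count is precisely what produces the shaded pictures described above.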
There is a classic book for admiring these figures, The Beauty of
Fractals, by H. Peitgen and P. Richter, and computer programs like
Fractint have spread the beauty of such forms, in which each Mandelbrot set
has very small similar figures attached that, seen with the amplification of
the computer as if it were a microscope, repeat and repeat in self-similar
form.
If we keep increasing the amplification, other characteristic forms
appear, resembling dendrites, whirlpools, seahorse tails, which in turn repeat
ever more microscopic details; an endless process, since the iteration could
be repeated an infinite number of times and, additionally, varying the
constant C can produce an infinite variety of forms (see figure X.5).
Even the most experienced observer is filled with admiration before this
process, through which, beginning from a simple equation, iteration leads to
structures so extraordinarily complex that they recall the results of
repeating the logistic equation. In reality the two mathematical expressions
do share some aspects, which leads one to consider the Mandelbrot equation the
complex-variable version of the logistic equation, with the variable Z
now in place of X, and the constant C in place of the growth
factor K.
Thus, graphically representing the variation of X as K
increases produced the surprising forms of figure X.2, while representing
Z as C is continually varied produces the Mandelbrot figures.
In both cases there is an "ordered" region with stable values, which is the
region of K less than 3 in the logistic equation and corresponds to the
interior of the Mandelbrot set; a zone of "chaos," for a K greater than
4 in the first case, comparable to the region most remote from the heart
figure in the second, where the values of Z go rapidly to infinity; and
an intermediate region with a very complex structure and properties of
self-similarity.
Just as the logistic equation has windows, for instance at
K = 3.84, with a stable cycle in which miniature replicas of the
original figure appear, so too in the Mandelbrot set, at the position with
imaginary part iY = 0 and real part X = -1.75, there is a small
island in the form of a heart (figure X.4) which, seen in detail, becomes a
new continent similar to the principal Mandelbrot set, with "buds" that,
examined with the microscope, turn out to be diminutive replicas of the
initial set, which in turn exhibit other more minuscule buds.
The Mandelbrot figure and each attached circular disk correspond to a
particular periodic orbit: the heart-shaped one to period 1, the largest disk
to period 2, followed by disks of periods 4, 8, 16...
Such complexity shows us that, just like what we observed with many
phenomena in nature, complex behavior can appear even with simple laws.
One of the most important results of these mathematical investigations is
that, upon their basis, a new method is emerging for confronting the study of
dynamic systems: complex dynamics, meaning not complicated, but based on
complex numbers.
XI. Can chaos be tamed?
IT IS now more than a century since mathematicians established the basis for
studying non-linear systems, and for three decades this has been applied to
research into the phenomena called chaotic which appear in nature.
The attitude toward chaos has been the one we take toward diseases: one
investigates its causes in order to avoid its appearance, since what is wanted
are predictable processes, behaviors without surprises, perfectly controllable,
domesticated systems. Chaos implies catastrophes, and one tries to keep it at
a prudent distance.
Yet this vision of chaos has been changing in recent years, before the
evidence that it can be controlled and even made to do useful things.
CHAOS CAN BE USEFUL
At the First Conference on Experimental Chaos, held in the United States
in October 1991, reports were presented which demonstrate as much, referring
to research underway to stabilize cardiac rhythms, control the oscillations in
chemical clocks, increase the power of laser beams, and synchronize the output
of electronic circuits.
In every case, the results of applying chaos and controlling it have been
very encouraging.
Many of these applications have been initiated by the physicists William
L. Ditto, of the Georgia Institute of Technology, and Louis M. Pecora, of the
U.S. Naval Research Laboratory, who see two fundamental reasons why chaos can
be useful. In the first place, deterministic chaos in dynamic systems is, in
reality, a complex structure of many ordered states, none of which
predominates over the others, as opposed to an ordered system, which has a
unique behavior.
The researchers have demonstrated that if a chaotic system is adequately
perturbed, it can be stimulated to adopt one of those ordered behaviors. The
great advantage with respect to classical ordered systems is superior
flexibility, because such a system can jump rapidly from one behavior to
another out of a wide collection.
The other reason is that although its behavior cannot be predicted, it is
determined, so that if two practically identical chaotic systems of an
adequate type are guided or driven by the same chaotic signal, both will have
the same behavior even though it cannot be predicted in advance. Chaotic
synchronization is the best demonstration that we are dealing with
deterministic chaos, since if it were pure chance the behaviors would have no
reason to coincide.
Two isolated chaotic systems cannot be synchronized: although they may be
practically identical and begin to function at the same time, their minuscule
differences will immediately be amplified and cause them to diverge more and
more. But if they are guided by a single chaotic signal, both will have
identical chaotic behavior, on the condition that they be stable, that is,
that if perturbed a little, their trajectories in phase space change only a
little. If this condition is fulfilled, both systems will suppress whatever
difference lies between them and will act in a chaotic (that is,
unpredictable) yet synchronized manner. This is being applied in
communication systems and signal processing.
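This kind of synchronization can be illustrated with a toy discrete model (our own construction, not Ditto and Pecora's actual experiments, and with all parameters chosen arbitrarily): two identical chaotic maps, started at different points, each have part of their state replaced by the same chaotic drive signal.

```python
def logistic(x):
    # the chaotic map x -> 4x(1 - x)
    return 4.0 * x * (1 - x)

def driven_step(x, drive, coupling=0.9):
    # blend the map's own motion with the common drive signal
    return (1 - coupling) * logistic(x) + coupling * drive

d, x, y = 0.4, 0.11, 0.87   # common drive and two different starting states
for _ in range(200):
    x = driven_step(x, d)    # both copies receive the same drive value
    y = driven_step(y, d)
    d = logistic(d)          # the drive itself evolves chaotically
```

With coupling 0.9 the difference between the copies shrinks by at least a factor (1 - 0.9) × 4 = 0.4 per step, so x and y end up indistinguishable even though neither trajectory is predictable; with coupling 0 they would instead drift completely apart, as the text explains for isolated systems.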
CHAOS IS CONTROLLABLE
The applications of controlling chaos are based on the OGY method, developed
by Ott, Grebogi and Yorke, of the University of Maryland, who succeeded in
making a system that displayed an entire collection of periodic oscillations
settle into only one of them.
The method consists in obtaining, in phase space, information about the
trajectory as it passes through the Poincaré section of the strange attractor,
and waiting until the trajectory passes through the section again. If this
happens in the vicinity of a desired periodic orbit, at that precise moment
the dynamic system is perturbed, modifying a parameter just enough for it to
remain in that orbit.
What they proved is that, precisely because chaotic dynamic systems are
so sensitive to initial conditions, they react very rapidly and with great
versatility to this control, which can make their use very advantageous.
This has been applied to the control of chaotic fluctuations in the
intensity of laser systems, attaining greater flexibility and stability, with
output power increased by a factor of 15.
Another application of importance is the control of chaos in a biological
system: using a piece of rabbit heart, an arrhythmia was provoked, and in that
situation the tissue was stimulated with electrical signals produced by the
OGY method. These sufficed to re-establish a normal cardiac rhythm, which has
encouraged the researchers to seek control of arrhythmias in human hearts and
to plan pacemakers and defibrillators based on this controlled chaos.
Conclusions
WE HAVE reviewed the multiple aspects presented by the behavior of non-linear
dynamic systems and the diverse methods used for their study.
The applications of concepts such as self-similarity, fractals, and
strange attractors have awakened great enthusiasm among researchers in the
most diverse disciplines, founded upon their notable results for non-linear
physical systems.
That is giving a great impetus to statistical mechanics, to the study of
phase transitions, of spin glasses, of turbulence, themes which also open new
perspectives on other scientific fields.
It might be, in effect, that those concepts are equally valid for other
complex phenomena, such as the fluctuations in insect or animal populations,
in the economy, or in the behavior of the brain, et cetera. But to determine
that, we must know how to characterize the chaos in the system under study,
and that becomes much more difficult the greater the number of independent
variables in play. Even for physical systems like turbulent fluids, with many
degrees of freedom, the role of deterministic chaos in the different
transitions between order and chaos is not entirely understood.
When few degrees of freedom are in play, one looks for a strange
attractor; but this becomes more difficult when applied to the search for
attractors for the brain or the financial market, which calls for prudence,
because the expectations generated in this field have fomented a sort of
fashion: there are cases in which the same scientific works that a few decades
ago presented results obtained with the classical tools now derive them from
fractal dimensions, without any apparent conceptual advantage for the
interpretation of the phenomena studied.
In any event, this is a very young discipline, in constant evolution. The
outlook is very good, for it is now understood that, instead of avoiding
non-linearity and complexity, they can be employed to provide more flexible,
rapid systems whose unexpected behavior presents a wide gamut of
possibilities.
Yet the implications of deterministic chaos for the realm of knowledge
extend beyond its utility for studying the different sorts of dynamic systems
and the new techniques being deployed as its applications grow. Another
significant aspect concerns the way changes in our concept of the world are
conditioned by our beliefs, a theme studied by Thomas S. Kuhn in
The Structure of Scientific Revolutions.
When a new paradigm is produced, the scientists see new and different
things in the world of research, even with instruments and in places already
known.
A well-known example is the way the falling trajectory of bodies was
perceived before and after the studies of Galileo. As the historian of science
Pierre Thuillier shows us in his book From Archimedes to Einstein, scholars
before Galileo were educated in the physics of Aristotle, according to which a
body only moves while it is subjected to a force. Accordingly, when a rock is
thrown, an arrow is shot, or a cannon fires a projectile, the body moves in a
straight line under the impetus received until this is exhausted by air
resistance, at which moment it falls vertically to the ground, to the place
which corresponds to it according to the natural order.
It is for that reason that the drawings in artillery manuals written by the
experts around the year 1500 show trajectories in the form of an inverted L.
One might suppose that an expert in artillery must at some time have
observed how a stone or an arrow falls when shot toward a target, or the shape
of waterfalls, or of wine poured from a jug. But only after the diffusion of
the teachings of Galileo, who demonstrated that these trajectories have the
form of a parabolic curve, did the fall of projectiles, waterfalls, et cetera
begin to be correctly represented. It is as if projectiles, which before
Galileo fell in a straight line, now began to move following a curve!
In similar fashion, an investigator of the stature of Leonardo da Vinci,
an impassioned observer of nature, made drawings of what he saw in dissections
of cadavers of animals and human beings. Yet when he drew the heart, he
represented it with two ventricles and one auricle, for that is how he saw it,
imbued with what Galen, the highest medical authority of his era, taught in
that respect.
In this century something similar has occurred, with a change of paradigm
that begins with the appearance of quantum and relativistic physics, and
continues with the transformation brought by recognizing the true place of
non-linear phenomena.
Until the 19th century, the only phenomena studied were those obeying
integrable equations, linear ones in particular. If there was interest in a
non-linear phenomenon, it was transformed through a linear approximation. It
seemed as if what was truly important in nature were the family of linear
phenomena, the others being an exception, a rare species unwelcome for the
difficulty of its treatment. Witness the physics and mathematics books written
until only a few decades ago, where non-linear systems are barely mentioned.
Now, however, it is perceived that the immense majority of natural
phenomena are non-linear, and that the others are the exception and not the
rule. While much meaningful physics can indeed be applied to linear phenomena,
ever more institutions and scientists are dedicating themselves to the
dynamics of non-linear processes. Before, there were no non-linear phenomena
worthy of study; today they are legion. Once again it is shown that the
perception each of us has of what lies before us is conditioned by our
theories and beliefs concerning the world, to a very often unsuspected degree.
As we have seen, the incorporation of the theme of chaos implies a change
of focus, a vision different from the one scientists shared until the start of
the 20th century: an "absolute" determinism like the one Laplace so clearly
formulated is no longer considered valid, and it is thus accepted that systems
which obey deterministic laws can have unpredictable behavior, which makes
their description through probabilities inevitable; furthermore, this applies
to the majority of them, including cases as apparently simple as the movement
of any object subjected to the action of more than two forces.
Accordingly, the study of chaos has revealed that the unpredictability of
this complex world can be reconciled with the existence of simple and ordered
natural laws.
The mathematics corresponding to this field also requires a different
focus, now that the fundamental application of the computer permits
visualizing, through images often of rare beauty, the global, qualitative
behavior of the equations of chaotic systems. Here too we see how very simple
equations, applied through iteration and with characteristics of
self-similarity, give birth to fractal forms astonishingly similar to those
that appear in nature: trees, mountains, clouds, which no one ever dreamed
could be represented mathematically.
The mathematics of chaotic dynamic systems is non-linear, and the
resulting sensitivity to initial conditions implies that, as opposed to
systems which obey integrable equations, only the global behavior of such
systems can be known. Poincaré's study of planetary motion illustrates
this. The only linear, integrable system is the one Kepler studied, formed by
the Sun and the Earth alone; to approximate it as a linear system, the other
eight bodies comprising the solar system had to be excluded. In this way the
elliptical trajectory of our planet can be obtained, but to predict its
long-term position with even minimal exactitude one must add the Moon, which
influences terrestrial movement, and then the remaining planets together with
the gravitational interactions among them. This gives a non-integrable,
chaotic system, in which, owing to its extreme sensitivity to perturbations,
even the comets that enter the confines of the system, or disappear from it
for various reasons, can totally alter the whole. It is therefore not possible
to analyze it as one would a linear system, by separating the factors that
affect its behavior from those that practically do not: any factor, however
infinitesimal it may appear initially, can unleash a drastic change in the
complete set.
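This sensitivity to initial conditions can be seen in miniature with the logistic map x → rx(1 − x), the simple iteration studied by May (cited in the bibliography), which is chaotic at r = 4. The sketch below is mine: two trajectories started a mere 10⁻¹⁰ apart remain close at first, then diverge until the initial perturbation has been amplified by many orders of magnitude:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1 - x); chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    """Iterate the map from x0, returning the full list of visited values."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)     # identical law, 10th decimal place perturbed
early_gap = abs(a[1] - b[1])    # still tiny after one step
late_gap = abs(a[-1] - b[-1])   # vastly amplified after fifty steps
```

The same deterministic rule produces both trajectories, yet after a few dozen iterations they are effectively unrelated: exactly the reconciliation of simple laws with unpredictability described above.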
Over short durations, such as the thousands of years of our historical
record in this example, the linear approximation serves for the calculations,
but the further one wishes to explore into the future, the more one must keep
in mind the global functioning of the system.
BIBLIOGRAPHY
Bergé, P., Y. Pomeau and Ch. Vidal, L'ordre dans le chaos, Paris,
Hermann, 1985.
Borman, S., "Researchers Find Order, Beauty in Chaotic Chemical Systems" in:
Chemical & Engineering News, January 1991, pp.18-29.
Dewdney, A. K., "Computer Recreations" in: Scientific American, July
1985, pp.16-32.
Ditto, W. L. and L. M. Pecora, "Mastering Chaos" in: Scientific
American, August 1993, pp.78-84.
Ekeland, I., Al Azar, Barcelona, Gedisa, 1992.
Epstein, I. et al., "Oscillating Chemical Reactions" in: Scientific
American, March 1983, pp.96-108.
Gleick, J., Chaos, Cardinal Books, 1991.
Hall, N. (comp.), The New Scientist Guide to Chaos, London, Penguin
Books, 1992.
Kuhn, T. S., La estructura de las revoluciones científicas, Mexico
City, Fondo de Cultura Económica, 1982.
Mandelbrot, B., Los objetos fractales, Barcelona, Tusquets, 1987.
May, R. M., "Biological Populations with Nonoverlapping Generations: Stable
Points, Stable Cycles, and Chaos" in: Science, vol.186, pp.645-47.
----, "Simple Mathematical Models with Very Complicated Dynamics" in:
Nature, vol.261, pp.459-67.
Newton, I., Principios matemáticos de la filosofía natural,
Madrid, Tecnos, 1987.
Nicolis, C., "Is There a Climatic Attractor?" in: Nature, vol.311,
pp.529-532.
Pagels, H. R., The Dreams of Reason, Bantam, 1988.
Peitgen, H. O. and P. Richter, The Beauty of Fractals, Berlin,
Springer, 1986.
Plato, "Timeo o de la Naturaleza" in: Diálogos, Mexico City,
Porrúa, 1989.
Prigogine, I., From Being to Becoming, New York, W. H. Freeman and Co.,
1980.
----, "La Thermodynamique de la Vie" in: La Recherche, no.24, pp.547-52.
Ruelle, D., Chance and Chaos, London, Penguin Books, 1993.
----, "Les Attracteurs Étranges" in: La Recherche, no.108,
pp.132-44.
Schroeder, M., Fractals, Chaos, Power Laws, New York, W. H. Freeman and
Co., 1991.
Stewart, I., Does God Play Dice?, Cambridge, B. Blackwell, 1991.
Sussman, G. J. and J. Wisdom, "Chaotic Evolution of the Solar System" in:
Science, vol.257, pp.56-62.
Thuillier, P., De Arquímedes a Einstein, Madrid, Alianza, 1990.
Vidal, Ch., "Les Ondes Chimiques" in: La Recherche, no.216, pp.147-48.
Wagensberg, J., Ideas sobre la complejidad del mundo, Barcelona,
Tusquets, 1989.
Winfree, A. T. and S. H. Strogatz, "Organizing Centres for Three-dimensional
Chemical Waves" in: Nature, vol.311, pp.611-14.
"La Science du Desordre", special issue of La Recherche, vol.22, May
1991.