When Newton developed his theory of universal gravitation, the first problem he tackled was Kepler’s elliptical orbits of the planets around the sun, and he succeeded beyond compare. The second problem he tackled was of more practical importance than the tracks of distant planets, namely the path of the Earth’s own moon, and he was never satisfied.

Newton’s Principia and the Problem of
Longitude

Measuring the precise position of the Moon at exact times against the backdrop of the celestial sphere gave ships at sea a method to find their longitude. Yet the Moon's orbit around the Earth is irregular, and Newton recognized that because gravity was universal, every body exerted a force on every other body, so the Moon was being tugged upon by the Sun as well as by the Earth.


In Propositions 65 and 66 of Book 1 of the Principia, Newton applied his new theory to attempt to pin down the Moon's trajectory, but was thwarted by the complexity of the three bodies of the Earth-Moon-Sun system. For instance, the force of the Sun on the Moon is greater than the force of the Earth on the Moon, which raised the question of why the Moon continued to circle the Earth rather than being pulled away to the Sun. Newton correctly recognized that it was the Earth-Moon system that was in orbit around the Sun, and hence the Sun caused only a perturbation of the Moon's orbit around the Earth. However, because the Moon's orbit is approximately elliptical, the Sun's pull on the Moon is not constant as the Moon swings around in its orbit, and Newton succeeded only in making estimates of the perturbation.

Unsatisfied with his results in the Principia, Newton tried again, beginning in the summer of 1694, but the problem was too great even for him. In 1702 he published his research, as far as he was able to take it, on the orbital trajectory of the Moon. He could pin down the motion to within 10 arc minutes, but this was not accurate enough for reliable navigation, representing an uncertainty of over 10 kilometers at sea, error enough to run aground at night on unseen shoals. Newton's attempt with the Moon was his last significant scientific endeavor, and afterwards this great scientist withdrew into administrative duties and his occult interests, which consumed his remaining time.

Race for the Moon

The importance of the Moon for navigation was too pressing to ignore, and in the 1740s a heated competition to be the first to pin down the Moon's motion developed among three of the leading mathematicians of the day (Leonhard Euler, Jean Le Rond D'Alembert and Alexis Clairaut), who began attacking the lunar problem and each other [1]. Euler in 1736 had published the first textbook on dynamics that used the calculus, and Clairaut had recently returned from Lapland with Maupertuis. D'Alembert, for his part, had placed dynamics on a firm physical foundation with his 1743 textbook. Euler was first to publish a lunar table in 1746, but there remained problems in his theory that frustrated his attempt at attaining the required level of accuracy.

At nearly the same time, Clairaut and D'Alembert revisited Newton's foiled lunar theory and found additional terms in the perturbation expansion that Newton had neglected. They rushed to beat each other into print, but Clairaut was distracted by a prize competition for the most accurate lunar theory, announced by the Russian Academy of Sciences and refereed by Euler, while D'Alembert ignored the competition, certain that Euler would rule in favor of Clairaut. Clairaut won the prize, but D'Alembert beat him into print.

The rivalry over the Moon did not end there. Clairaut continued to improve lunar tables by combining theory and observation, while D'Alembert remained more purely theoretical. A growing animosity between Clairaut and D'Alembert spilled out into the public eye and became a daily topic of conversation in the Paris salons. The difference in their approaches matched the difference in their personalities, with the more flamboyant and pragmatic Clairaut disdaining the purist approach and philosophy of D'Alembert. Clairaut succeeded in publishing improved lunar theory and tables in 1752, followed by Euler in 1753, while D'Alembert's interests were drawn away towards his activities for Diderot's Encyclopedia.

The battle over the Moon in the late 1740s was carried out on the battlefield of perturbation theory. To lowest order, the orbit of the Moon around the Earth is a Keplerian ellipse, and the effect of the Sun, though creating problems for the use of the Moon for navigation, produces only a small modification, a perturbation, of its overall motion. Within a decade or two, perturbation-theory calculations, combined with empirical observations, had improved to the point that lunar tables were accurate enough to allow ships to locate their longitude to within a kilometer at sea. The most accurate tables were made by Tobias Mayer, who was posthumously awarded a prize of 3000 pounds by the British Parliament in 1763 for the determination of longitude at sea. Euler received 300 pounds for helping Mayer with his calculations. This was the same prize coveted by the famous clockmaker John Harrison and depicted so brilliantly in Dava Sobel's Longitude (1995).

Lagrange Points

Several years later, in 1772, Lagrange discovered an interesting special solution to the planar three-body problem in which three massive points each execute an elliptic orbit around the center of mass of the system, configured such that their positions always coincide with the vertices of an equilateral triangle [2]. He found a more important special solution in the restricted three-body problem, in which a third body of negligible mass moving in the combined gravitational potential of two massive bodies has two stable equilibrium points. These two stable equilibrium points are known as the L4 and L5 Lagrange points. Small objects can orbit these points, and in the Sun-Jupiter system they are occupied by the Trojan asteroids. Similarly stable Lagrange points exist in the Earth-Moon system, where space stations or satellites could be parked.

For the special case of circular orbits of constant angular frequency ω, the motion of the third mass is described by the Lagrangian

$$\mathcal{L} = \frac{1}{2}m_3\left(\dot{x}^2 + \dot{y}^2\right) + \frac{G m_1 m_3}{r_1(t)} + \frac{G m_2 m_3}{r_2(t)}$$

where the potential is time dependent because of the motion of the two larger masses, with $r_1(t)$ and $r_2(t)$ the distances from the third mass to $m_1$ and $m_2$. Lagrange approached the problem by adopting a rotating reference frame in which the two larger masses m1 and m2 move along the stationary line defined by their centers. The Lagrangian in the rotating frame is

$$\mathcal{L}' = \frac{1}{2}m_3\left(\dot{x}'^2 + \dot{y}'^2\right) - V_{\mathrm{eff}}$$

with

$$V_{\mathrm{eff}} = -m_3\,\omega\left(x'\dot{y}' - y'\dot{x}'\right) - \frac{1}{2}m_3\,\omega^2\left(x'^2 + y'^2\right) - \frac{G m_1 m_3}{r_1} - \frac{G m_2 m_3}{r_2}$$

where the effective potential is now time independent. The first term in the effective potential is the Coriolis effect and the second is the centrifugal term.

Fig. Effective potential for the planar three-body problem and the five Lagrange points where the gradient of the effective potential equals zero. The Lagrange points are displayed on a horizontal cross section of the potential energy shown with equipotential lines. The large circle in the center is the Sun. The smaller circle on the right is a Jupiter-like planet. The points L1, L2 and L3 are each saddle-point equilibria positions and hence unstable. The points L4 and L5 are stable points that can collect small masses that orbit these Lagrange points.

The effective potential is shown in the figure for m1 = 10 m2. There are five locations where the gradient of the effective potential equals zero. The point L1 is the equilibrium position between the two larger masses. The points L2 and L3 are at positions where the centrifugal force balances the gravitational attraction to the two larger masses. These are also the points that separate local orbits around a single mass from global orbits around the two-body system. The last two Lagrange points, L4 and L5, each sit at a vertex of an equilateral triangle whose other two vertices are at the positions of the larger masses. The first three Lagrange points are saddle points. The last two are at maxima of the effective potential.
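The five equilibria can be located numerically by finding the zeros of the gradient of the effective potential. Here is a minimal sketch (not from the original post) in dimensionless units with G = 1, ω = 1, unit separation between the primaries, and mass parameter mu = m2/(m1 + m2); the collinear points are found by bisection along the rotating x-axis, while L4 and L5 need no root finding at all.

```python
import math

def collinear_lagrange_points(mu):
    """Find L1, L2, L3 of the circular restricted three-body problem.

    Units: G = 1, omega = 1, unit separation. The primaries sit at
    x1 = -mu and x2 = 1 - mu on the rotating x-axis, with
    mu = m2 / (m1 + m2). Equilibria are roots of the axial force
    balance f(x) = 0 (gravity of both primaries vs. centrifugal term)."""
    x1, x2 = -mu, 1.0 - mu

    def f(x):
        return ((1 - mu) * (x - x1) / abs(x - x1) ** 3
                + mu * (x - x2) / abs(x - x2) ** 3
                - x)

    def bisect(a, b):
        # f changes sign exactly once on each chosen interval
        fa = f(a)
        for _ in range(200):
            m = 0.5 * (a + b)
            if fa * f(m) <= 0:
                b = m
            else:
                a, fa = m, f(m)
        return 0.5 * (a + b)

    L1 = bisect(x1 + 1e-6, x2 - 1e-6)   # between the primaries
    L2 = bisect(x2 + 1e-6, 2.0)         # beyond the smaller mass
    L3 = bisect(-2.0, x1 - 1e-6)        # opposite side of the larger mass
    return L1, L2, L3

mu = 1 / 11                                  # mass ratio m1 : m2 = 10 : 1
L1, L2, L3 = collinear_lagrange_points(mu)
L4 = (0.5 - mu, math.sqrt(3) / 2)            # equilateral point, analytic
L5 = (0.5 - mu, -math.sqrt(3) / 2)
```

The equilateral points fall out analytically because each vertex of the triangle is equidistant from both primaries, so only the collinear points require a numerical search.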

L1 lies between the Earth and the Sun at about 1 million miles from Earth. L1 gets an uninterrupted view of the Sun and is currently occupied by the Solar and Heliospheric Observatory (SOHO) and the Deep Space Climate Observatory. L2 also lies about a million miles from Earth, but in the direction opposite the Sun. At this point, with the Earth, Moon and Sun behind it, a spacecraft can get a clear view of deep space. NASA's Wilkinson Microwave Anisotropy Probe (WMAP) is currently at this spot measuring the cosmic background radiation left over from the Big Bang. The James Webb Space Telescope will move into this region in 2021.

The 1960s are known as a time of cultural revolution, but perhaps less well known is the revolution that occurred in the science of dynamics. Three towering figures of that revolution were Stephen Smale (1930 – ) at Berkeley, Andrey Kolmogorov (1903 – 1987) in Moscow, and Kolmogorov's student Vladimir Arnold (1937 – 2010). Arnold was only 20 years old in 1957 when he solved Hilbert's thirteenth problem (showing that any continuous function of several variables can be constructed from a finite number of functions of two variables). Only a few years later, his work on the problem of small denominators in dynamical systems provided the finishing touches on the long-elusive explanation of the stability of the solar system (the problem for which Poincaré won the King Oscar Prize in mathematics in 1889, when he discovered chaotic dynamics). This theory is known as KAM theory, after the initials of Kolmogorov, Arnold and Moser [1]. Building on his breakthrough in celestial mechanics, Arnold's work through the 1960s remade the theory of Hamiltonian systems, creating a shift in perspective that has permanently altered how physicists look at dynamical systems.

Hamiltonian Physics on a Torus

Traditionally, Hamiltonian physics is associated with systems of inertial objects that conserve the sum of kinetic and potential energy, in other words, conservative non-dissipative systems. But a modern view (after Arnold) of Hamiltonian systems sees them as hyperdimensional mathematical mappings that conserve volume. The space that these mappings inhabit is phase space, and the conservation of phase-space volume is known as Liouville’s Theorem [2]. The geometry of phase space is called symplectic geometry, and the universal position that symplectic geometry now holds in the physics of Hamiltonian mechanics is largely due to Arnold’s textbook Mathematical Methods of Classical Mechanics (1974, English translation 1978) [3]. Arnold’s famous quote from that text is “Hamiltonian mechanics is geometry in phase space”.

One of the striking aspects of this textbook is the reduction of phase-space geometry to the geometry of a hyperdimensional torus for a large class of Hamiltonian systems. If there are as many conserved quantities as there are degrees of freedom in a Hamiltonian system, then the system is called "integrable" (because you can integrate the equations of motion to find constants of the motion). It is then possible to map the physics onto a hyperdimensional torus through the transformation of dynamical coordinates into what are known as "action-angle" coordinates [4]. Each independent angle has an associated action that is conserved during the motion of the system. The periodicity of the dynamical angle coordinate makes it possible to identify it with the angular coordinate of a multi-dimensional torus. Therefore, every integrable Hamiltonian system can be mapped to motion on a multi-dimensional torus (one dimension for each degree of freedom of the system).

Actually, integrable Hamiltonian systems are among the most boring dynamical systems you can imagine. They literally just go in circles (around the torus). But as soon as you add a small perturbation that cannot be integrated, they produce some of the most complex and beautiful patterns of all dynamical systems. It was Arnold's focus on motions on a torus, and on perturbations that shift the dynamics off the torus, that led him to propose a simple mapping that captured the essence of Hamiltonian chaos.

The Arnold Cat Map

Motion on a two-dimensional torus is defined by two angles, and trajectories on a two-dimensional torus are simple helices. If the periodicities of the motion in the two angles have an integer ratio, the helix repeats itself. However, if the ratio of periods (also known as the winding number) is irrational, then the helix never repeats and passes arbitrarily closely to every point on the surface of the torus. This last case leads to an "ergodic" system, a term introduced by Boltzmann to describe a physical system whose trajectory fills phase space. The behavior of a helix for rational or irrational winding number is not terribly interesting. It's just an orbit going in circles, like an integrable Hamiltonian system. The helix can never even cross itself.
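The dichotomy between rational and irrational winding numbers is easy to see numerically. The sketch below (an illustration added here, not from the original post) samples the angle θ_n = n·γ mod 1 along the helix: a rational winding number revisits a handful of points forever, while the golden-mean winding number eventually visits every bin of the circle.

```python
import math

def visited_bins(winding, n_steps=5000, n_bins=100):
    """Count how many bins of the circle [0, 1) the sequence
    theta_n = n * winding (mod 1) visits; a dense orbit hits them all."""
    hit = set()
    for n in range(n_steps):
        theta = (n * winding) % 1.0
        hit.add(int(theta * n_bins))
    return len(hit)

rational = visited_bins(0.25)                      # periodic helix: only 4 points
irrational = visited_bins((math.sqrt(5) - 1) / 2)  # golden-mean winding: all 100 bins
```

The golden-mean orbit equidistributes remarkably fast, which foreshadows its special role in KAM theory later in this post.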

However, if you could add a new dimension to the torus (or add a new degree of freedom to the dynamical system), then the helix could pass over or under itself by moving into the new dimension. By weaving around itself, a trajectory can become chaotic, and the set of many trajectories can become as mixed up as a bowl of spaghetti. This can be a little hard to visualize, especially in higher dimensions, but Arnold thought of a very simple mathematical mapping that captures the essential motion on a torus, preserving volume as required for a Hamiltonian system, but with the ability for regions to become all mixed up, just like trajectories in a nonintegrable Hamiltonian system.

A unit square is isomorphic to a two-dimensional torus. This means that there is a one-to-one mapping of each point on the unit square to each point on the surface of a torus. Imagine taking a sheet of paper and forming a tube out of it. One of the dimensions of the sheet of paper is now an angle coordinate that is cyclic, going around the circumference of the tube. Now if the sheet of paper is flexible (like it is made of thin rubber) you can bend the tube around and connect the top of the tube with the bottom, like a bicycle inner tube. The other dimension of the sheet of paper is now also an angle coordinate that is cyclic. In this way a flat sheet is converted (with some bending) into a torus.

Arnold's key idea was to create a transformation that takes the torus into itself, preserving volume, yet including the ability for regions to pass around each other. Arnold accomplished this with the simple map

$$x_{n+1} = x_n + y_n \quad \bmod 1$$

$$y_{n+1} = x_n + 2y_n \quad \bmod 1$$

where the modulus 1 takes the unit square into itself. This transformation can also be expressed as a matrix

$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x_n \\ y_n \end{pmatrix}$$

followed by taking modulus 1. The transformation matrix is called a Floquet matrix, and the determinant of the matrix is equal to unity, which ensures that volume is conserved.
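In code, the map is a single matrix multiply followed by mod 1. The small sketch below (not the post's own code; it assumes the standard cat-map matrix [[1, 1], [1, 2]]) applies the map to an array of points and uses the integer inverse matrix to confirm that the map is invertible, as area preservation requires.

```python
import numpy as np

# cat map matrix and its integer inverse (det = 1, so the inverse is exact)
A = np.array([[1, 1],
              [1, 2]])
A_inv = np.array([[2, -1],
                  [-1, 1]])   # A @ A_inv is the identity

def cat_map(points, matrix=A):
    """One application of the cat map (mod 1) to an (N, 2) array of points."""
    return (points @ matrix.T) % 1.0

pts = np.array([[0.20, 0.30],
                [0.60, 0.25],
                [0.111, 0.817]])
mixed = cat_map(pts)               # stretched, sheared, folded back into the square
recovered = cat_map(mixed, A_inv)  # the inverse map undoes the scramble exactly
```

Because the inverse matrix also has integer entries and determinant 1, the mod-1 folding is undone without any loss, which is the discrete analog of a volume-preserving, reversible Hamiltonian flow.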

Arnold decided to illustrate this mapping by using a crude image of the face of a cat (See Fig. 1). Successive applications of the transformation stretch and shear the cat, which is then folded back into the unit square. The stretching and folding preserve the volume, but the image becomes all mixed up, just like mixing in a chaotic Hamiltonian system, or like an immiscible dye in water that is stirred.

Fig. 2 Arnold Cat Map operation is an iterated succession of stretching with shear of a unit square, and translation back to the unit square. The mapping preserves and mixes areas, and is invertible.

Recurrence

When the transformation matrix is applied to continuous values, it produces a continuous range of transformed values that become thinner and thinner until the unit square is uniformly mixed. However, if the unit square is discrete, made up of pixels, then something very different happens (see Fig. 3). The image of the cat in this case is composed of a 50×50 array of pixels. For early iterations, the image becomes stretched and mixed, but at iteration 50 there are 4 low-resolution upside-down versions of the cat, and at iteration 75 the cat fully reforms, but is upside-down. Continuing on, the cat eventually reappears fully reformed and upright at iteration 150. Therefore, the discrete case displays a recurrence and the mapping is periodic. Calculating the period of the cat map on lattices can lead to interesting patterns, especially if the lattice is composed of prime numbers [6].

Fig. 3 A discrete cat map has a recurrence period. This example with a 50×50 lattice has a period of 150.
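The recurrence period can be computed directly, because on an N×N lattice the map is just the cat-map matrix acting mod N: the whole image repeats at the smallest power of the matrix that equals the identity mod N. A short sketch (a helper of my own, not from the original post):

```python
import numpy as np

def cat_map_period(N):
    """Smallest k with A^k = identity (mod N) for the cat map matrix A;
    the full map, and hence any N x N image, repeats after k iterations."""
    A = np.array([[1, 1], [1, 2]], dtype=np.int64)
    M = A.copy()
    I = np.identity(2, dtype=np.int64)
    k = 1
    while not np.array_equal(M % N, I):
        M = (M @ A) % N
        k += 1
    return k
```

For the 50×50 lattice this returns 150, matching the recurrence in Fig. 3; individual images may reappear earlier at divisors of the period, as with the upside-down cat at iteration 75.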

The Cat Map and the Golden Mean

The golden mean, or golden ratio, φ = 1.618033988749895… is never far away when working with Hamiltonian systems. Because the golden mean is the "most irrational" of all irrational numbers, it plays an essential role in KAM theory on the stability of the solar system. In the case of Arnold's cat map, it shows up in several ways. For instance, the transformation matrix has eigenvalues

$$\lambda_{\pm} = \frac{3 \pm \sqrt{5}}{2}$$

with $\lambda_+ = \varphi^2$ and $\lambda_- = 1/\varphi^2$, where $\varphi = (1+\sqrt{5})/2$ is the golden mean.

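A quick numpy check (a snippet added here, not part of the original post) confirms that the cat-map matrix [[1, 1], [1, 2]] has eigenvalues equal to the square of the golden mean and its reciprocal, whose product is 1 as area preservation requires:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
phi = (1 + np.sqrt(5)) / 2                    # the golden mean
lam_minus, lam_plus = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
```

The larger eigenvalue φ² ≈ 2.618 is the stretching factor per iteration along the unstable direction, and its logarithm is the Lyapunov exponent of the map.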
Hamiltonian systems are freaks of nature. Unlike the everyday world we experience, which is full of dissipation and inefficiency, Hamiltonian systems live in a world free of loss. Despite how rare this situation is for us, this unnatural state occurs commonly in two extremes: orbital mechanics and quantum mechanics. In orbital mechanics, dissipation does exist, most commonly through tidal effects, but the effects of dissipation in the orbits of moons and planets take eons to accumulate, making these systems effectively free of dissipation on shorter time scales. Quantum mechanics is strictly free of dissipation, but there is a strong caveat: ALL quantum states need to be included in the quantum description. This includes the coupling of discrete quantum states to their environment. Although it is possible to isolate quantum systems to a large degree, it is never possible to isolate them completely, and they do interact with the quantum states of their environment, even if only through the black-body radiation of their container, and even if that container is cooled to millikelvins. Such interactions involve so many degrees of freedom that it all behaves like dissipation. The origin of quantum decoherence, which poses such a challenge for practical quantum computers, is this entanglement of quantum systems with their environment.


Liouville’s Theorem and Phase Space

A middle ground of practically ideal Hamiltonian mechanics can be found in the dynamics of ideal gases. This is the arena where Maxwell and Boltzmann first developed their theories of statistical mechanics using Hamiltonian physics to describe the large numbers of particles. Boltzmann applied a result he learned from Jacobi’s Principle of the Last Multiplier to show that a volume of phase space is conserved despite the large number of degrees of freedom and the large number of collisions that take place. This was the first derivation of what is today known as Liouville’s theorem.

Close-up of the Lozi Map with B = -1 and C = 0.5.

In 1838 Joseph Liouville, a pure mathematician, was interested in classes of solutions of differential equations. In a short paper, he showed that for one class of differential equation one could define a property that remained invariant under the time evolution of the system. This purely mathematical paper by Liouville was expanded upon by Jacobi, who was a major commentator on Hamilton's new theory of dynamics, contributing much of the mathematical structure that we associate today with Hamiltonian mechanics. Jacobi recognized that Hamilton's equations were of the same class as the ones studied by Liouville, and that the conserved property was a product of differentials. In the mid-1800s the language of multidimensional spaces had yet to be invented, so Jacobi recognized neither the conserved quantity as a volume element nor the arena of the dynamics as a phase space. Boltzmann recognized both, and he was the first to establish the principle of conservation of phase-space volume. He named this principle after Liouville, even though it was actually Boltzmann himself who found its natural place within the physics of Hamiltonian systems [1].

Liouville's theorem plays a central role in the explanation of the entropy of ideal gases, as well as in Hamiltonian chaos. In a system with numerous degrees of freedom, a small volume of initial conditions is stretched and folded by the dynamical equations as the system evolves. The stretching and folding is like what happens to dough in a baker's hands. The volume of the dough never changes, but after a long time a small spot of food coloring will eventually be as close to any part of the dough as you wish. This analogy is part of the motivation for ergodic systems, and this kind of mixing is characteristic of Hamiltonian systems, in which trajectories can diffuse throughout the phase-space volume … usually.

Interestingly, when the number of degrees of freedom is not so large, there is a middle ground of Hamiltonian systems for which some initial conditions lead to chaotic trajectories, while other initial conditions produce completely regular behavior. For the right kind of system, the regular behavior can hem in the irregular behavior, restricting it to finite regions. This was a major finding of KAM theory [2], named after Kolmogorov, Arnold and Moser, which helped explain the regions of regular motion separating regions of chaotic motion, as illustrated in Chirikov's Standard Map.

Discrete Maps

Hamilton’s equations are ordinary continuous differential equations that define a Hamiltonian flow in phase space. These equations can be solved using standard techniques, such as Runge-Kutta. However, a much simpler approach for exploring Hamiltonian chaos uses discrete maps that represent the Poincaré first-return map, also known as the Poincaré section. Testing that a discrete map satisfies Liouville’s theorem is as simple as checking that the determinant of the Floquet matrix is equal to unity. When the dynamics are represented in a Poincaré plane, these maps are called area-preserving maps.
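As a concrete example (a sketch under one common convention, not code from the original post), the Chirikov Standard Map p' = p + K sin θ, θ' = θ + p' has Jacobian determinant exactly 1 for every value of K, which a finite-difference check confirms:

```python
import numpy as np

def standard_map(theta, p, K=1.2):
    """One step of the Chirikov standard map (kicked rotor); the angle
    can be reduced mod 2*pi for plotting without affecting areas."""
    p_new = p + K * np.sin(theta)
    theta_new = theta + p_new
    return theta_new, p_new

def jacobian_det(map_fn, theta, p, h=1e-6):
    """Central-difference Jacobian determinant of a planar map at (theta, p)."""
    f = lambda th, pp: np.array(map_fn(th, pp))
    col_theta = (f(theta + h, p) - f(theta - h, p)) / (2 * h)
    col_p = (f(theta, p + h) - f(theta, p - h)) / (2 * h)
    return np.linalg.det(np.column_stack([col_theta, col_p]))
```

The same two-line check works for any candidate map: if the numerical determinant drifts from unity, the map is not a faithful discrete representation of a Hamiltonian flow.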

There are many famous examples of area-preserving maps in the plane. The Chirikov Standard Map is one of the best known and is often used to illustrate KAM theory. It is a discrete representation of a kicked rotor, while a kicked harmonic oscillator leads to the Web Map. The Henon Map was developed to explain the orbits of stars in galaxies. The Lozi Map is a version of the Henon map that is more accessible analytically. And the Cat Map was devised by Vladimir Arnold to illustrate what is today called Arnold Diffusion. All of these maps display classic signatures of (low-dimensional) Hamiltonian chaos, with periodic orbits hemming in regions of chaotic orbits.

Map                       Physical origin
Chirikov Standard Map     Kicked rotor
Web Map                   Kicked harmonic oscillator
Henon Map                 Stellar trajectories in galaxies
Lozi Map                  Simplified Henon map
Cat Map                   Arnold Diffusion

Table: Common examples of area-preserving maps.

Lozi Map

My favorite area-preserving discrete map is the Lozi Map. I first stumbled on this map at the very back of Steven Strogatz' wonderful book on nonlinear dynamics [3]. It's one of the last exercises of the last chapter. The map is particularly simple, but it leads to rich dynamics, both regular and chaotic. The map equations are

$$x_{n+1} = 1 + y_n - C\,|x_n|$$

$$y_{n+1} = B\,x_n$$

which is area-preserving when |B| = 1, since the Jacobian determinant of the map is -B. The constant C can be varied, but the choice C = 0.5 works nicely, and B = -1 produces a beautiful nested structure, as shown in the figure.
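The Lozi map's Jacobian determinant is −B, so the area-preserving case |B| = 1 also makes the map exactly invertible. A short sketch (my own, assuming the standard Lozi form x' = 1 + y − C|x|, y' = Bx) verifies this by running an orbit forward and then backward:

```python
def lozi(x, y, B=-1.0, C=0.5):
    """One step of the Lozi map."""
    return 1.0 + y - C * abs(x), B * x

def lozi_inverse(x, y, B=-1.0, C=0.5):
    """Exact inverse of the Lozi map; it exists because |det J| = |B| = 1."""
    x_prev = y / B
    y_prev = x - 1.0 + C * abs(x_prev)
    return x_prev, y_prev

# start on a regular orbit near the elliptic fixed point (0.4, -0.4)
x, y = 0.42, -0.40
for _ in range(50):
    x, y = lozi(x, y)
for _ in range(50):
    x, y = lozi_inverse(x, y)
# (x, y) has now returned to the initial condition to machine precision
```

Starting instead far from the center, in the chaotic sea, the same round trip accumulates error exponentially fast, a simple numerical signature of the positive Lyapunov exponent there.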

Iterated Lozi map for B = -1 and C = 0.5. Each color is a distinct trajectory. Many regular trajectories exist that corral regions of chaotic trajectories. Trajectories become more chaotic farther away from the center.

While virtually everyone recognizes the famous Lorenz “Butterfly”, the strange attractor that is one of the central icons of chaos theory, in my opinion Hamiltonian chaos generates far more interesting patterns. This is because Hamiltonians conserve phase-space volume, stretching and folding small volumes of initial conditions as they evolve in time, until they span large sections of phase space. Hamiltonian chaos is usually displayed as multi-color Poincaré sections (also known as first-return maps) that are created when a set of single trajectories, each represented by a single color, pierce the Poincaré plane over and over again.


A Hamiltonian tapestry generated from the Web Map for K = 0.616 and q = 4.

Periodically-Kicked Hamiltonian

The classic Hamiltonian system, perhaps the archetype of all Hamiltonian systems, is the harmonic oscillator. The physics of the harmonic oscillator is taught in the most elementary courses, because every stable system in the world is approximated, to lowest order, by a harmonic oscillator. As the simplest dynamical system, one might think it held no surprises. But surprisingly, it can create the most beautiful tapestries of color when pulsed periodically and mapped onto the Poincaré plane.

The Hamiltonian of the periodically kicked harmonic oscillator is converted into the Web Map, represented as the iterative mapping

$$u_{n+1} = \left(u_n + K\sin v_n\right)\cos\alpha + v_n\sin\alpha$$

$$v_{n+1} = -\left(u_n + K\sin v_n\right)\sin\alpha + v_n\cos\alpha$$

There can be resonance between the sequence of kicks and the natural oscillator frequency such that α = 2π/q. At these resonances, intricate web patterns emerge. The Web Map produces a web of stochastic layers when plotted on an extended phase plane. The symmetry of the web is controlled by the integer q, and the stochastic-layer width is controlled by the perturbation strength K.

A tapestry for q = 6.

Web Map Python Program

Iterated maps are easy to implement in code. Here is a simple Python program to generate maps of different types. You can play with the coupling constant K and the periodicity q. For small K, the tapestries are mostly regular. But as the coupling K increases, stochastic layers emerge. When q is a small even number, tapestries of regular symmetry are generated. However, when q is a small odd integer, the tapestries turn into quasi-crystals.
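The post's original listing has not survived here, so the following is a minimal sketch of a Web Map generator (function and parameter names are my own); scatter-plotting the returned (u, v) points with matplotlib, one color per trajectory, produces tapestries like the figures above.

```python
import numpy as np

def web_map(K=0.616, q=4, n_steps=1000, n_traj=200, span=8.0, seed=1):
    """Iterate the Web Map
        u' =  (u + K sin v) cos(a) + v sin(a)
        v' = -(u + K sin v) sin(a) + v cos(a),   a = 2*pi/q
    for a cloud of random initial conditions; returns an array of shape
    (n_steps, 2, n_traj) holding every visited (u, v) point."""
    a = 2 * np.pi / q
    ca, sa = np.cos(a), np.sin(a)
    rng = np.random.default_rng(seed)
    u = rng.uniform(-span, span, n_traj)
    v = rng.uniform(-span, span, n_traj)
    pts = np.empty((n_steps, 2, n_traj))
    for i in range(n_steps):
        kick = u + K * np.sin(v)       # apply the kick, then rotate by alpha
        u, v = kick * ca + v * sa, -kick * sa + v * ca
        pts[i, 0], pts[i, 1] = u, v
    return pts
```

A typical plot loop is `plt.plot(pts[:, 0, j], pts[:, 1, j], ',')` for each trajectory index j. With K = 0.616 and q = 4 the four-fold tapestry appears; odd q produces the quasi-crystalline webs, and setting K = 0 reduces the map to a pure rotation of the phase plane.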