COMPLEXITY EXPLAINED: 7. Cosmic Evolution of Complexity

Our universe is believed to have begun with the Big Bang, some 10 to 15 billion years ago. Its degree of complexity at, and soon after, that moment was next to nil. Why, then, and how has the cosmic complexity gone on increasing? In fact, it is increasing exponentially fast. The explanation can be traced ultimately to the fact that the universe has been expanding all the time.

7.1 Quantum Mechanics

All phenomena are governed by the laws of quantum mechanics. Quantum theory has been remarkably successful in explaining a vast range of observations. It is also highly counterintuitive. We accept it because there is no better theory for understanding natural phenomena. In any case, there is no reason why the laws of Nature should not be counterintuitive to humans. There is nothing special about us, except that we possess intelligence and consciousness. In the history of the cosmos, we emerged on the scene very recently, whereas the laws of Nature have been there all the time.

The electron microscope provides a good example of the counterintuitive behaviour of fundamental particles like electrons. Let us first consider the conventional optical microscope we use for obtaining a magnified view of small objects. We cannot observe an object in total darkness, so we must shine some light on it. The object scatters the light in all directions. The pattern of the scattered light carries information about the shape and other properties of the object. Some of this diverging scattered light is intercepted by a lens of the microscope. The lens bends the waves of scattered light so that they can recombine or 'interfere' with one another, and produce an image of the object in the so-called 'image plane', located beyond the focal point of the lens.

Suppose we want to go on increasing the 'resolving power' of the optical microscope. That is, even when two features of the object are located very close to each other, we want them to be seen as separate in the image produced by the microscope. Naturally, the wavelength of the light used for illuminating the object becomes relevant: the smaller this wavelength, the greater the resolving power. But how small a wavelength can we use and still obtain an image of the object? There is a practical limit. Suppose we want to use X-rays, instead of visible light, for illuminating the object. X-rays and visible light are both electromagnetic radiation; they differ only in wavelength, X-ray wavelengths being typically 5000 times shorter than those in the visible part of the electromagnetic spectrum. This presents a serious practical difficulty: it is not possible (at least not easy) to find a lens which can bend X-rays sufficiently to make them meet and interfere and form an image. It is easier to solve this problem by using electrons instead of visible radiation.

Yes, electrons. There is an inverse relationship between the speed of an electron and the wavelength associated with it: the faster the electron, the shorter its wavelength. Electrons are charged particles, and we can use high electric fields to accelerate them to high velocities, with correspondingly short wavelengths. But what about the lens needed to bend the high-speed electrons after they have been scattered by the object we want to view? No problem; just use electric or magnetic fields to do the bending. All this is what is actually done in an electron microscope. The fact that electron microscopy is a reality is proof that this line of reasoning is correct.
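The inverse relation between speed and wavelength is the de Broglie relation, λ = h/p. As a rough sketch (using a non-relativistic approximation, and an illustrative accelerating voltage of 10 kV), one can estimate the electron's wavelength and compare it with that of visible light:

```python
# Sketch: de Broglie wavelength of an electron accelerated through V volts,
# using the non-relativistic relation e*V = p^2/(2m), so p = sqrt(2*m*e*V).
# All numbers are illustrative.
import math

h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron rest mass, kg
e = 1.602e-19    # elementary charge, C

def de_broglie_wavelength(volts):
    momentum = math.sqrt(2.0 * m_e * e * volts)
    return h / momentum  # lambda = h / p

lam_electron = de_broglie_wavelength(10e3)  # assumed 10 kV accelerating voltage
lam_visible = 550e-9                        # green light, ~550 nm

print(f"electron wavelength: {lam_electron:.2e} m")
print(f"visible light is ~{lam_visible / lam_electron:.0f} times longer")
```

Even at this modest voltage the electron wavelength comes out tens of thousands of times shorter than that of green light, which is why electron microscopes can resolve so much finer detail (in practice, lens aberrations limit the resolution well before the wavelength does).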

So an electron in motion has a wavelength associated with it. But an electron is also a particle, with a definite 'rest mass'. This wave-particle duality, though counterintuitive, is an important feature of quantum theory. Similarly, a beam of light (in fact, radiation of any wavelength) is not just a wave, but also has a particle aspect. Particles (or quanta) of light are called photons. Interestingly, Einstein got the Nobel Prize not for his work on the theory of relativity, but for his explanation of the photoelectric effect, which confirmed the particulate nature of radiation.

Now, a wave is not something you can specify in terms of location at a point in space (unlike a particle). A wave has a nonlocal character, with an amplitude and a phase at every point in space. That leads to another counterintuitive aspect of quantum mechanics: since a particle has a wave aspect also, it can be everywhere in space (with varying probabilities, of course). Thus, we cannot specify the position of a particle with complete certainty. This conclusion is a blow to the traditional (classical-mechanics) worldview. Classical mechanics can be deterministic, but not quantum mechanics. In classical (or Newtonian) mechanics, one can not only specify the position and momentum of a particle with infinite precision, one can also determine (through the equations of motion) the position and momentum of that particle at any time in the future and in the past. Quantum indeterminism has been the subject of many philosophical discussions. In quantum mechanics, we can speak only in terms of probabilities, not certainties.

7.2 The Heisenberg Uncertainty Principle

Let us return to the issue of having to shine some probing radiation (photons or electrons or whatever) on an object for viewing it. The quantum of the probing radiation has a certain momentum and energy, so it disturbs what we are trying to observe when it impinges on it. This is an unhappy situation indeed, but totally unavoidable: The act of observing an object disturbs it.

And how much is the disturbance? To answer that question, let us move away from microscopy as such, and say that we want to determine both the position and the momentum of the object. As a simple case, suppose the object is at rest, so its momentum is zero. Striking it with even one quantum of the probe (e.g. a photon) will impart some momentum to it, and also disturb its initial position. Suppose we want to determine the position very accurately. That would require a probing photon of very small wavelength. But such a photon also has more energy than a longer-wavelength photon, so the disturbance or uncertainty in the momentum will be larger. The converse is also true: if we try to reduce the uncertainty in our knowledge of the momentum by using a longer-wavelength probe, the uncertainty in the measurement of the position will be larger. This tradeoff between the uncertainty in position and the uncertainty in momentum is captured neatly by the celebrated Heisenberg uncertainty principle of quantum mechanics. It says that there is a lower limit to the combined uncertainty with which we can specify both the position and the momentum of a particle. If the uncertainty in position is Δx, and the uncertainty in momentum is Δpx, then the product Δx·Δpx must always be at least ħ/2, where ħ (the reduced Planck constant) is a small but nonzero universal value. The uncertainties in position and momentum mean that unpredictable quantum fluctuations can occur in their values within the limits prescribed by the Heisenberg principle.
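The tradeoff can be made concrete with a small sketch. Assuming the standard form of the principle, Δx·Δp ≥ ħ/2, the minimum momentum uncertainty grows as the position uncertainty shrinks:

```python
# Sketch of the Heisenberg tradeoff: the smaller the position uncertainty dx,
# the larger the minimum momentum uncertainty, dp >= hbar / (2 * dx).
hbar = 1.055e-34  # reduced Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg

def min_momentum_uncertainty(dx):
    return hbar / (2.0 * dx)

for dx in (1e-9, 1e-10, 1e-11):  # progressively tighter position measurements
    dp = min_momentum_uncertainty(dx)
    dv = dp / m_e  # the velocity uncertainty this implies for an electron
    print(f"dx = {dx:.0e} m -> dp >= {dp:.2e} kg*m/s (dv >= {dv:.2e} m/s)")
```

Halving Δx doubles the minimum Δp; for an electron confined to atomic dimensions (~10^-10 m), the implied velocity uncertainty is already hundreds of kilometres per second.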

The quantum-mechanical uncertainty becomes a dominant effect only when we are dealing with entities of very small sizes and masses. A heavy object normally has a large size as well. Therefore, bombarding it with a few photons to determine its position and momentum will hardly cause any relative disturbance to the values of these parameters. This is one example of how quantum mechanics merges seamlessly with classical mechanics when we are dealing with macroscopic objects in everyday life.
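A quick sketch with illustrative numbers shows why. The same Heisenberg bound, applied to a 1-microgram grain located to within a micrometre, yields a velocity uncertainty far too small to ever observe:

```python
# The same Heisenberg bound, dv >= hbar / (2 * m * dx), applied to objects of
# very different mass. Illustrative numbers: an electron confined to atomic
# size versus a 1-microgram dust grain located to within a micrometre.
hbar = 1.055e-34  # reduced Planck constant, J*s

def min_velocity_uncertainty(mass_kg, dx_m):
    return hbar / (2.0 * mass_kg * dx_m)

dv_electron = min_velocity_uncertainty(9.109e-31, 1e-10)  # electron in an atom
dv_grain = min_velocity_uncertainty(1e-9, 1e-6)           # 1 ug grain, 1 um

print(f"electron:   dv >= {dv_electron:.2e} m/s")  # enormous on atomic scales
print(f"dust grain: dv >= {dv_grain:.2e} m/s")     # utterly negligible
```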

7.3 Our Universe

There have been competing theories about the origin of our universe, and about whether the universe indeed has a beginning and an end. Edwin Hubble made the crucial observation during the 1920s that the universe is expanding. This meant that, if we imagine a time reversal, there must have been a moment when all the contents of the universe were at one point, a so-called 'singularity'. The Big Bang occurred at that moment, and the universe has been expanding ever since. The Big Bang theory was proposed by Georges Lemaître around 1930 (his 'primeval atom' hypothesis), and developed by other physicists, notably George Gamow. The theory implies that the universe had a definite beginning and has a finite age.

Fred Hoyle, Hermann Bondi, and Thomas Gold formulated the alternative 'steady-state' theory of the universe in 1948 (Jayant Narlikar later developed related ideas with Hoyle). This theory implies an infinitely old universe, with no 'beginning'. The Bell Labs scientists Arno Penzias and Robert Wilson discovered in 1965 the cosmic background radiation that Gamow and his collaborators had predicted to be a consequence of the Big Bang model. The observation of this radiation, a relic of the early universe, delivered a body-blow to the steady-state model of the universe. However, the last word has not yet been said about which model of the universe is correct; the steady-state model still has much to commend it.

The two pillars of modern physics are the quantum theory and the general theory of relativity. Quantum mechanics has been remarkably successful in explaining the physics of very small objects (like electrons), while the general theory of relativity deals with the very large distances and masses for which gravitation becomes the dominant interaction. Isaac Newton was the first to understand gravity as an attractive force between bodies that depends only on their masses and the distance between them. Newton's theory was extended by Einstein's general theory of relativity, which treated gravity as a distortion of spacetime rather than as a force between bodies.

The moment of the Big Bang was a singularity because it involved very small dimensions and very large gravitational forces. So a good theory for explaining this scenario must merge quantum mechanics with general-relativity theory. In other words, we need a theory of quantum gravity. Such a theory still eludes us, although there has been considerable progress through the work of Stephen Hawking and others. As argued by Hawking and Penrose, Einstein’s general theory of relativity is only an incomplete theory. It cannot tell us how the universe started off, because it predicts that all physical theories, including itself, break down at the beginning of the universe.

Hawking came up with the idea of 'a universe without boundaries', discussed in somewhat nontechnical language in his famous book The Universe in a Nutshell. In 3-dimensional space, the surface of a sphere is a good example of a 'universe' without boundaries from the vantage point of a creature constrained to exist only on that surface; there is no beginning or end to the surface of a sphere. In the words of Hawking: 'It is perhaps ironic that, having changed my mind, I am now trying to convince other physicists that there was in fact no singularity at the beginning of the universe – as we shall see later, it can disappear once quantum effects are taken into account'. In the Hartle-Hawking model, the universe is finite but has no boundary in imaginary time. Imaginary time is real time multiplied by the square root of minus one, (-1)^(1/2). Mind-boggling stuff indeed.

7.4 The Big Bang

The singularity at the moment of the Big Bang was of such small spatial dimensions that quantum-mechanical effects in general, and the Heisenberg uncertainty principle in particular, were extremely dominant. There is a viewpoint that the universe was born as a quantum fluctuation. The quantum fluctuation in momentum (Δp) or kinetic energy permitted by the Heisenberg principle (because of the vanishingly small spatial dimensions Δx at the moment of the singularity) was large enough to account for the immense amount of the energy in the universe. Space and time were strongly twisted in the beginning. Space itself exploded, its dynamics explained for later moments of time by Einstein’s geometrical laws of general relativity.
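A back-of-the-envelope sketch makes the argument semi-quantitative (the numbers are purely illustrative; a proper account needs a theory of quantum gravity): squeezing Δx down to something like the Planck length forces, via Δp ≥ ħ/(2Δx), a momentum fluctuation whose associated energy Δp·c is of the order of the Planck energy.

```python
# Back-of-the-envelope sketch (illustrative only; a real account needs a
# theory of quantum gravity): squeezing dx down to the Planck length forces,
# via dp >= hbar / (2 * dx), a momentum fluctuation whose associated energy
# dp * c is of the order of the Planck energy (~1e9 J).
hbar = 1.055e-34         # reduced Planck constant, J*s
c = 3.0e8                # speed of light, m/s
planck_length = 1.6e-35  # m

dp = hbar / (2.0 * planck_length)
energy = dp * c

print(f"dp >= {dp:.2e} kg*m/s, associated energy ~ {energy:.2e} J")
```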

How can energy be created out of nothing, and how is it continuing to increase as the universe expands? Here I am on uncertain ground, as the experts do not yet agree on what really happened. Apart from what I have said above (which may be debatable), here is a possible answer, given by Seth Lloyd (2006) in his book Programming the Universe: ‘Quantum mechanics describes energy in terms of quantum fields, a kind of underlying fabric of the universe, whose weave makes up the elementary particles – photons, electrons, quarks. The energy we see around us, then – in the form of Earth, stars, light, heat – was drawn out of the underlying quantum fields by the expansion of our universe. Gravity is an attractive force that pulls things together. . . As the universe expands (which it continues to do), gravity sucks energy out of the quantum fields. The energy in the quantum fields is almost always positive, and this positive energy is exactly balanced by the negative energy of gravitational attraction. As the expansion proceeds, more and more positive energy becomes available, in the form of matter and light – compensated for by the negative energy in the attractive force of the gravitational field.’ Lloyd emphasizes the complementary roles of energy and information in the cosmic evolution of complexity: ‘Energy makes physical systems do things. Information tells them what to do.’

7.5 Nature Abhors Gradients

To understand the cosmic evolution of complexity, it is helpful to note that 'Nature abhors gradients'. This is usually not stated as a law of science, but it is a clear consequence of the 'official' laws of thermodynamics, and it provides a different perspective on why evolutionary progress occurs. We all know how difficult it is to maintain a vacuum in a vessel. Nature abhors a vacuum, and tends to fill up empty space with whatever molecules happen to be around. What is really happening is that there is a gradient of pressure, and this gradient tends to get destroyed in an irreversible manner, in accordance with the second law of thermodynamics. In fact, the second law itself is nothing but a statement about the spontaneous destruction of gradients: thermal gradients, pressure gradients, concentration gradients, etc.

So we can generalize and say that Nature abhors gradients of all types. In particular, it may be noted that when a system is pushed away from a state of thermodynamic equilibrium by an influx of energy and/or matter, a gradient is created. As discussed in Part 3 and Part 6, if the departure from equilibrium is not too large, Nature restores equilibrium by destroying the gradient. But if the departure from equilibrium is too large, then the system is unable to return to the old configuration of equilibrium, and must seek a new steady state or equilibrium state. What is more, since the departure from equilibrium is large, the system tends to find more efficient ways of destroying gradients, and this results in pattern formation and emergent phenomena or structures so characteristic of complexity. Just think of the whirlpool, or, if you are familiar with such things, the regular pattern created by a so-called Bénard instability.
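The spontaneous destruction of a gradient can be illustrated with a toy one-dimensional diffusion model (all parameters are illustrative): a sharp step in, say, temperature relaxes irreversibly toward a flat equilibrium profile.

```python
# Toy model of gradient destruction: explicit 1-D diffusion with reflecting
# ends. A step-like temperature profile relaxes irreversibly toward a flat
# equilibrium profile, in the spirit of the second law. Parameters are
# illustrative.
N, steps, r = 50, 5000, 0.4  # cells, time steps, diffusion number (r <= 0.5)

profile = [1.0] * (N // 2) + [0.0] * (N // 2)  # initial sharp gradient
initial_spread = max(profile) - min(profile)

for _ in range(steps):
    profile = [
        profile[i]
        + r * (profile[max(i - 1, 0)] - 2 * profile[i] + profile[min(i + 1, N - 1)])
        for i in range(N)
    ]

spread = max(profile) - min(profile)
print(f"gradient: initially {initial_spread:.2f}, finally {spread:.4f}")
```

This is the near-equilibrium case, where the gradient is simply smoothed away; far from equilibrium, as with the Bénard instability, the system instead organizes itself into patterns that destroy the gradient more efficiently.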

7.6 The Cosmic Evolution of Complexity

Chaisson (2001) identifies three eras in the cosmic evolution of complexity. In the beginning there was only radiation, with such a high energy density that there was hardly any structure or information content in the universe; it was just pure energy. As the universe cooled and thinned, a veritable phase transition, or bifurcation in the phase-space trajectory, occurred, resulting in the emergence of matter coexisting with radiation. This marked the start of the second era, in which a high proportion of the energy resided in matter, rather than in radiation. The third era was heralded by the onset of 'technologically manipulative beings'.

As the very hot plasma after the Big Bang expanded, it also cooled. The temperature was ~10^32 K at 10^-43 seconds after the Big Bang. Gravitation emerged as a distinct interaction at this stage. Around 10^-34 seconds later, the temperature was ~10^27 K, and matter appeared in the form of quarks, leptons, gauge bosons, and several other elementary particles. 'Antimatter' also appeared. The appearance of structure in matter can be attributed to quantum fluctuations in the density of the universe, amplified by the effects of gravity. Even a minuscule increase in local density could attract more matter towards it, with a corresponding decrease in the surrounding density.

Around 10^-10 seconds later, the electro-weak interaction split into the electromagnetic interaction and the weak interaction (another symmetry-breaking phase transition or bifurcation, like so many in the cosmic evolution, with a concomitant increase in the degree of complexity). Around 10^-5 seconds later, the temperature had fallen to ~10^12 K. This is when the quarks formed the protons and the neutrons, and the antiquarks formed antiprotons. The collisions between protons and antiprotons left behind mostly protons, as well as photons. Around one second later, collisions among electrons and positrons occurred, leaving behind mostly electrons. Around one minute later, with the temperature at ~10^9 K, neutrons and protons could coalesce, resulting in nuclei like those of helium, lithium, and the (heavy) isotopes of hydrogen.

About ten million years after the Big Bang, enough cooling had occurred to fill the universe with a mist of particles, containing mostly hydrogen and some helium, as also some elementary particles, including neutrinos, some electromagnetic radiation, and perhaps some other, unknown, particles. The universe was just cold, dark, and formless at that stage. Then some quantum-mechanical primordial fluctuations in the densities of the particles gave rise to a clumping of some of the particles, rather like the nucleation that precedes the growth of a crystal from a fluid. The presence of such clumped particles suddenly brought the gravitational forces into prominence, leading to a cascading effect. Portions of the mist began collapsing into huge swirling clouds. Over a period of a few hundred million years, huge galaxies, each containing billions of young stars of various sizes, formed and began to shine. The formless darkness of the initial period was gone.

The large stars among these were intensely bright spheres, the brightness coming from the fusion of hydrogen into helium (and of helium into heavier nuclei) in their interiors, made possible by the prevailing extreme temperatures and pressures. This is how the heavier elements got formed in the interiors of these large stars. The emergence of heavier elements by the process of nuclear fusion continued steadily until the element iron started forming. The iron nucleus is the most stable of them all: iron cannot fuse with further nucleons and release energy, since fusion beyond iron consumes energy rather than releasing it. Its presence therefore acts as a poison for the nuclear-fusion process. Thus the appearance of iron marked the beginning of the end of the available nuclear fuel, and therefore the end of the life of the star. In due course, the smaller stars simply ceased to shine, shrinking into cold and dead entities.

But a very different fate awaited the larger stars. No longer able to sustain their size because of the progressively diminishing nuclear fusion of elements, they began to collapse under their immense gravitational pull. A rapid change occurred in their interiors. Under the immense squeezing generated by the collapse, the iron core imploded. This resulted in a new state of matter as the electrons and the protons in the atoms were squeezed together. The dominant process of interaction now was the weak interaction, in which protons and electrons reacted to produce neutrons and electron-neutrinos. The collapse led to a compression of the star to an extremely dense ball of almost pure neutron matter. Concomitantly, the neutrino cloud burst outwards, resulting in an explosion (the supernova explosion) of the outer shell of the star. This is how the newly synthesized elements (up to the atomic number of iron), residing in the outer shell of the star, were scattered into the universe, accompanied by a brilliant flash of light.

A consequence of such supernova explosions (which still occur from time to time, and light up the galaxies with brilliant flashes of light) was the emergence of clouds of dust and gas and the debris containing heavy elements. These clouds encircled the galaxies in spiralling arms. The intensity of the explosions was so high that elements heavier than iron were also produced and scattered into space.

In the outer portions of the spirals occurred a condensation of the dust and the clouds and the debris, resulting in the formation of the second generation of (smaller) stars (including our Sun), as also planets, moons, comets, asteroids, etc. Our solar system was formed when the universe was ~9 billion years old. In the initial period, our Earth underwent several violent upheavals (bombardment by comets and meteors, as also huge earthquakes and volcanic eruptions). By the time the Earth was ~2.5 billion years old, its continents had formed. Life appeared in due course.

7.7 Why is There so Much Complexity in the Universe?

At the moment of the Big Bang, the information content of the universe was probably zero, assuming that there was only one possible initial state and only one self-consistent set of physical laws. Existence of information means that there are alternatives available; e.g. 0 or 1. If there were no alternatives to the initial state of the universe, then it did not require any bits of information to describe it. Soon after time and space began, the quantum fields contained very little information and energy to begin with. Thus, in the beginning, the effective complexity, the logical depth, and the thermodynamic depth (cf. Part 5) were all zero, or nearly zero. This view is consistent with the fact that the universe emerged out of nothing.
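The link between alternatives and information can be stated simply: specifying one of N equally likely alternatives requires log2(N) bits, so a single possible state requires zero bits. A minimal sketch:

```python
# Information requires alternatives: specifying one of N equally likely
# alternatives takes log2(N) bits. With only one possible state (N = 1),
# no information at all is needed to describe the outcome.
import math

def bits_needed(n_alternatives):
    return math.log2(n_alternatives)

for n in (1, 2, 1024):
    print(f"{n} alternative(s): {bits_needed(n):g} bits")
```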

As the early universe expanded, it pulled in more and more energy out of the quantum fabric of space and time. Under continuing expansion, a variety of elementary particles got created, and the energy drawn from the underlying quantum fields got converted into heat, meaning that the initial elementary particles were very hot and increasing in number rapidly, and therefore the entropy of the universe increased rapidly. And high entropy means that the particles require a large amount of information to specify their coordinates and momenta. This is how the degree of complexity of the universe grew in the beginning.

Soon after that, quantum fluctuations resulting in density fluctuations and clumping of matter made gravitational effects more and more important with increasing time. The extremely large information content of the universe results, in part, from the quantum-mechanical nature of the laws of physics. The language of quantum mechanics is in terms of probabilities, and not certainties. This inherent uncertainty in the description of the present universe means that a very large amount of information is needed for the description. This is just another way of saying that the present degree of complexity of the universe is very large.

But why does the degree of complexity go on increasing? In Part 5 we introduced the metaphor of a monkey typing away randomly on the keyboard of a computer. We concluded that Ockham's razor ensures that short and simple programs are the most likely to explain natural phenomena, which in the present context means the explanation of the evolution of complexity in the universe. The quantum-mechanical laws of physics are the 'simple programs', as well as the computer. But what is the equivalent of the monkey, or rather a large number of monkeys, injecting more and more information and complexity into the universe by programming it with a string of random bits? According to Seth Lloyd (2006), 'quantum fluctuations are the monkeys that program the universe'.
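The monkey metaphor has a simple quantitative core: random typing produces any one particular n-bit program with probability 2^-n, so short programs are exponentially more likely to be generated than long ones. A minimal sketch:

```python
# The quantitative core of the typing-monkey metaphor: random typing produces
# one particular n-bit program with probability 2**(-n), so short, simple
# programs are exponentially more likely to be generated than long ones.
def program_probability(n_bits):
    return 2.0 ** (-n_bits)

for n in (10, 100, 1000):
    print(f"a specific {n}-bit program: probability {program_probability(n):.3e}")
```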

The current thinking is that the universe will continue to expand, and that it is spatially infinite. But the speed of light is not infinite. Therefore, the causally connected part of the universe has a finite size, limited by what has been called the 'horizon' (Lloyd 2006). The quantum computation being carried out by the universe is confined to this part. Thus, for all practical purposes, the part of the universe within the horizon is what we can call 'the universe'. As this universe expands, the size of the causally connected region increases, which in turn means that the number of bits of information within the horizon increases, as does the number of computational operations. Thus the expanding universe is the reason for the continuing increase in the degree of complexity of the universe.

7.8 Concluding Remarks

The ever-present expansion of the universe is a necessary cause (though perhaps not a sufficient cause) for all evolution of complexity, because it creates gradients of various kinds: ‘Gradients forever having been enabled by the expanding cosmos, it was and is the resultant flow of energy among innumerable non-equilibrium environments that triggered, and in untold cases still maintains, ordered, complex systems on domains large and small, past and present’ (Chaisson 2006). The ever-present expansion of the universe gives rise to gradients on a variety of spatial and temporal scales. And, ‘it is the contrasting temporal behaviour of various energy densities that has given rise to those environments needed for the emergence of galaxies, stars, planets, and life.’

In the grand cosmic scenario, there was only physical evolution in the beginning, and it prevailed for a very long time. While the physical evolution still continues, the emergence of life started the phenomenon of biological evolution. The ‘Nature abhors gradients’ way of looking at the evolution of complexity has been particularly well articulated by Lynn Margulis and Dorion Sagan (2002) in their book Acquiring Genomes: A Theory of the Origins of Species: ‘Although it is difficult to say why the universe is so organized, the measured universal expansion since the Big Bang of space continues to provide a “sink” (a place) into which stars as sources can radiate: A progenitive cosmic gradient, the source of the other gradients, is thus formed by cosmic expansion. For the foreseeable future the geometry of the universe’s expansion continues to create possibilities for functionally creative gradient destruction, for example, into space and in the electromagnetic gradients of stars. Once we grasp this organization, however, life appears not as miraculous but rather another cycling system, with a long history, whose existence is explained by its greater efficiency at reducing gradients than the nonliving complex systems it supplemented.’

It is perhaps a sobering thought that we seem so inconsequential in the Universe. It is even more humbling at first – but then wonderfully enlightening – to recognize that evolutionary changes, operating over almost incomprehensible space and nearly inconceivable time, have given birth to everything seen around us. Scientists are now beginning to decipher how all known objects – from atoms to galaxies, from cells to brains, from people to society – are interrelated.

About the author

Vinod Wadhawan

Dr. Vinod Wadhawan is a scientist, rationalist, author, and blogger. He has written books on ferroic materials, smart structures, complexity science, and symmetry. More information about him is available at his website. Since October 2011 he has been writing at The Vinod Wadhawan Blog, which celebrates the spirit of science and the scientific method.
