THE ADJACENT POSSIBLE

An autonomous agent is something that can both reproduce itself and do at least one thermodynamic work cycle. It turns out that this is true of all free-living cells, excepting weird special cases. They all do work cycles, just like the bacterium spinning its flagellum as it swims up the glucose gradient. The cells in your body are busy doing work cycles all the time.

Introduction

Stuart Kauffman is a theoretical biologist who studies the origin of life and the origins of molecular organization. Thirty-five years ago, he developed the Kauffman models, which are random networks exhibiting a kind of self-organization that he terms "order for free." Kauffman is not easy. His models are rigorous, mathematical, and, to many of his colleagues, somewhat difficult to understand. A key to his worldview is the notion that convergent rather than divergent flow plays the deciding role in the evolution of life. He believes that the complex systems best able to adapt are those poised on the border between order and chaos.
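The "order for free" of the Kauffman models can be illustrated with a toy random Boolean network. The following is a minimal sketch with arbitrary assumed parameters (N = 12 nodes, K = 2 inputs per node): randomly wired logic iterated from a random state falls onto an attractor, and for small K those attractors are typically short cycles, the spontaneous order he describes.

```python
import random

random.seed(0)

# N nodes, each reading K randomly chosen inputs; values are illustrative.
N, K = 12, 2

inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronously update every node from its K inputs via its lookup table."""
    new = []
    for i in range(N):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | state[j]
        new.append(tables[i][idx])
    return tuple(new)

# Iterate from a random start until a state repeats; the loop the
# trajectory falls into is an attractor.
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1

print("attractor length:", t - seen[state])
```

Because the state space is finite (2^N states), the loop always terminates; the interesting empirical fact is how short the attractors tend to be when K is small.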

Kauffman asks a question that goes beyond those asked by other evolutionary theorists: if selection is operating all the time, how do we build a theory that combines self-organization (order for free) and selection? The answer lies in a "new" biology, somewhat similar to that proposed by Brian Goodwin, in which natural selection is married to structuralism.

Lately, Kauffman says that he has been "hamstrung by the fact that I don't see how you can see ahead of time what the variables will be. You begin science by stating the configuration space. You know the variables, you know the laws, you know the forces, and the whole question is, how does the thing work in that space? If you can't see ahead of time what the variables are, the microscopic variables for example for the biosphere, how do you get started on the job of an integrated theory? I don't know how to do that. I understand what the paleontologists do, but they're dealing with the past. How do we get started on something where we could talk about the future of a biosphere?"

"There is a chance that there are general laws. I've thought about four of them. One of them says that autonomous agents have to live the most complex game that they can. The second has to do with the construction of ecosystems. The third has to do with Per Bak's self-organized criticality in ecosystems. And the fourth concerns the idea of the adjacent possible. It just may be the case that biospheres on average keep expanding into the adjacent possible. By doing so they increase the diversity of what can happen next. It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it. There's a lot of neat science to be done to unpack that, and I'm thinking about it."

—JB

STUART A. KAUFFMAN, a theoretical biologist, is emeritus professor of biochemistry at the University of Pennsylvania, a MacArthur Fellow and an external professor at the Santa Fe Institute. Dr. Kauffman was the founding general partner and chief scientific officer of The Bios Group, a company (acquired in 2003 by NuTech Solutions) that applies the science of complexity to business management problems. He is the author of The Origins of Order, Investigations, and At Home in the Universe: The Search for the Laws of Self-Organization.

(STUART KAUFFMAN): In his famous book, What is Life?, Erwin Schrödinger asks, "What is the source of the order in biology?" He arrives at the idea that it depends upon quantum mechanics and a microcode carried in some sort of aperiodic crystal—which turned out to be DNA and RNA—so he is brilliantly right. But if you ask if he got to the essence of what makes something alive, it's clear that he didn't. Although today we know bits and pieces about the machinery of cells, we don't know what makes them living things. However, it is possible that I've stumbled upon a definition of what it means for something to be alive.

For the better part of a year and a half, I've been keeping a notebook about what I call autonomous agents. An autonomous agent is something that can act on its own behalf in an environment. Indeed, all free-living organisms are autonomous agents. Normally, when we think about a bacterium swimming upstream in a glucose gradient we say that the bacterium is going to get food. That is to say, we talk about the bacterium teleologically, as if it were acting on its own behalf in an environment. It is stunning that the universe has brought about things that can act in this way. How in the world has that happened?

As I thought about this, I noted that the bacterium is just a physical system; it's just a bunch of molecules that hang together and do things to one another. So, I wondered, what characteristics are necessary for a physical system to be an autonomous agent? After thinking about this for a number of months I came up with a tentative definition.

My definition is that an autonomous agent is something that can both reproduce itself and do at least one thermodynamic work cycle. It turns out that this is true of all free-living cells, excepting weird special cases. They all do work cycles, just like the bacterium spinning its flagellum as it swims up the glucose gradient. The cells in your body are busy doing work cycles all the time.

Definitions are neither true nor false; they're useful or useless. We can only find out if a definition is useful by trying to apply it to organisms, conceptual issues, and experimental issues. Hopefully, it turns out to be interesting.

Once I had this definition, my next step was to create and write about a hypothetical chemical autonomous agent. It turns out to be an open thermodynamic chemical system that is able to reproduce itself and, in doing so, performs a thermodynamic work cycle. I had to learn about work cycles, but it's just a new class of chemical reaction networks that nobody's ever looked at before. People have made self-reproducing molecular systems and molecular motors, but nobody's ever put the two together into a single system that is capable of both reproduction and doing a work cycle.

Imagine that inside the cell are two kinds of molecules—A and B—that can undergo three different reactions. A and B can make C and D, they can make E, or they can make F and G. There are three different reaction pathways, each of which has potential barriers along the reaction coordinate. Once the cell makes a membrane, A and B can partition into it, changing their rotational, vibrational, and translational motion. That, in turn, changes the shape of the potential barriers. Changing the heights of the potential barriers is precisely the manipulation of constraints. Thus, cells do thermodynamic work to build a structure called the membrane, which in turn manipulates constraints on reactions, meaning that cells do work at constructing constraints that manipulate constraints.
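The effect of manipulating barrier heights can be sketched with a toy Arrhenius-style calculation (the barrier energies and the thermal scale kT are arbitrary assumed values, not taken from the text): lowering one barrier, as a membrane or enzyme does, redirects nearly all of the reaction flux down that pathway.

```python
import math

kT = 1.0  # thermal energy scale, an arbitrary assumed unit

def fractions(barriers):
    """Fraction of flux down each pathway, using Arrhenius-style
    rates k ~ exp(-Ea / kT) for each barrier height Ea."""
    ks = [math.exp(-ea / kT) for ea in barriers]
    total = sum(ks)
    return [k / total for k in ks]

# Three competing pathways for A + B (to C + D, to E, to F + G) with
# equal barriers: the flux splits evenly, one third each.
before = fractions([5.0, 5.0, 5.0])

# A constraint (enzyme binding, membrane partitioning) that lowers only
# the first barrier channels nearly all flux to C + D.
after = fractions([1.0, 5.0, 5.0])

print(before[0], after[0])
```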

In addition, the cell does thermodynamic work to build an enzyme by linking amino acids together. The enzyme binds to the transition state that carries A and B to C and D—not to E or F and G—so it catalyzes that specific reaction, channeling the release of energy down a specific pathway within a small number of degrees of freedom. You make C and D, but you don't make E or F and G. D may then go over and attach to a transmembrane channel and give up some of its vibrational energy, popping the channel open and allowing in an ion, which then does something further in the cell. So cells do work to construct constraints, which then cause the release of energy in specific ways so that work is done. That work then propagates, which is fascinating.

As I proceed, there are several points to keep in mind. One is that you cannot do a work cycle at equilibrium, meaning that the concept of an autonomous agent is inherently a non-equilibrium concept.

A second is that once this concept is developed, it's only going to be a matter of perhaps 10, 15, or 20 years until, somewhere in the marriage between biology and nanotechnology, we will make autonomous agents: chemical systems that reproduce themselves and do work cycles. This means that we have a technological revolution on our hands, because autonomous agents don't just sit and talk and pass information around. They can actually build things.

The third thing is that this may be an adequate definition of life. In the next 30 to 50 years we are either going to make a novel life form or we will find one—on Mars, Titan, or somewhere else. I hope that what we find is radically different from life on Earth because it will open up two major questions. First, what would it be like to have a general biology, a biology free from the constraints of terrestrial biology? And second, are there laws that govern biospheres anywhere in the universe? I'd like to think that there are such laws. Of course, we don't know that there are—we don't even know that there are such laws for the Earth's biosphere—but I have three or four candidate laws that I struggle with.

All of this points to the need for a theory of organization, and we can start to think about such a theory by critiquing the concept of work. If you ask a physicist what work is he'll say that it's force acting through a distance. When you strike a hockey puck, for example, the more you accelerate it, the more little increments of force you've applied to it. The integral of that force over the distance the puck travels is the work you've done. The result is just a number.
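The textbook calculation being critiqued can be made concrete. This sketch, with assumed illustrative numbers for the mass and the force, accumulates the little increments of force acting through distance and checks the total against the kinetic energy gained (the work-energy theorem); the answer is indeed just a number, with nothing in it about how the push was organized.

```python
# A puck pushed with constant force: numerically integrate F dx and
# compare with the kinetic energy gained. Mass, force, and step sizes
# are illustrative assumptions.
m, F = 0.17, 2.0        # kg, newtons
dt, steps = 1e-4, 10000  # time step and number of steps

x, v, work = 0.0, 0.0, 0.0
for _ in range(steps):
    old_x = x
    v += (F / m) * dt          # acceleration a = F / m
    x += v * dt
    work += F * (x - old_x)    # increment of force acting through distance

kinetic = 0.5 * m * v * v
print(work, kinetic)  # the two numbers agree to within discretization error
```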

In any specific case of work, however, there's an organization to the process. The description of the organization of the process that allows work to happen is missing from its numerical representation. In his book on the second law, Peter Atkins gives a definition of work that I find congenial. He says that work itself is a thing—the constrained release of energy. Think of a cylinder and a piston in an old steam engine. The steam pushes down on the piston, and it converts the randomness of the steam inside the head of the cylinder into the rectilinear motion of the piston down the cylinder. In this process, many degrees of freedom are translated into a few.

The puzzle becomes apparent when we ask some new questions. What are the constraints? Obviously the constraints are the cylinder and the piston, the fact that the piston is inside the cylinder, the fact that there's some grease between the piston and the cylinder so the steam can't escape, and some rods attached to the piston. But where did the constraints come from? In virtually every case it takes work to make constraints. Somebody had to make the cylinder, somebody had to make the piston, and somebody had to assemble them.

That it takes work to make constraints and it takes constraints to make work is a very interesting cycle. This idea is nowhere to be found in our definition of work, but it's physically correct in most cases, and certainly in organisms. This means that we are lacking theory and points towards the importance of the organization of process.

The life cycle of a cell is simply amazing. It does work to construct constraints on the release of energy, which does work to construct more constraints on the release of energy, which does work to construct even more constraints on the release of energy, and other kinds of work as well. It builds structure. Cells don't just carry information. They actually build things until something astonishing happens: a cell completes a closed nexus of work tasks, and builds a copy of itself. Although he didn't know about cells, Kant spoke about this 230 years ago when he said that an organized being possesses a self-organizing propagating whole that is able to make more of itself. But although cells can do this, that fact is nowhere in our physics. It's not in our notion of matter, it's not in our notion of energy, it's not in our notion of information, and it's not in our notion of entropy. It's something else. It has to do with organization, propagation of organization, work, and constraint construction. All of this has to be incorporated into some new theory of organization.

I can push this a little farther by thinking of a puzzle about Maxwell's demon. Everybody knows about Maxwell's demon; he was supposed to separate fast molecules in one part of a partitioned box from the slow molecules by sending the slow molecules through a flap valve to another part of a partitioned box. From an equilibrium setting the demon could then build up the temperature gradient, allowing work to be extracted. There's been a lot of good scientific work showing that at equilibrium the demon can never win. So let's go straight to a non-equilibrium setting and ask some new questions.

Now think of a box with a partition and a flap valve. In the left side of the box there are N molecules and in the right side of the box there are N molecules, but the ones in the left side are moving faster than the ones in the right. The left side of the box is hotter, so there is a source of free energy. If you were to put a little windmill near the flap valve and open it, there would be a transient wind from the left to the right box, causing the windmill to orient itself towards the flap valve and spin. The system detects a source of free energy, the vane on the back of the windmill orients the windmill because of the transient wind, and then work is extracted. Physicists would say that the demon performs a measurement to detect the source of free energy. My new question is, how does the demon know what measurement to make?

Now the demon does a quite fantastic experiment. Using a magic camera he takes a picture and measures the instantaneous position of all the molecules in the left and right box. That's fine, but from that heroic experiment the demon cannot deduce that the molecules are going faster in the left box than in the right box. If you took two pictures a second apart, or if you measured the momentum transfer to the walls you could figure it out, but he can't do so with one picture. So how does the demon know what experiment to do? The answer is that the demon doesn't know what experiment to do.
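The demon's predicament can be checked numerically. In this sketch (particle counts and velocity spreads are arbitrary assumptions), positions on both sides of the box are uniformly distributed regardless of temperature, so a single positional snapshot is uninformative, while the velocity distributions, which one snapshot cannot capture, separate cleanly.

```python
import random

random.seed(1)

N = 100_000  # molecules per side, an illustrative count

# Positions in each half of the box are uniform whatever the temperature:
# one snapshot of positions carries no temperature information.
hot_x = [random.uniform(0.0, 1.0) for _ in range(N)]
cold_x = [random.uniform(0.0, 1.0) for _ in range(N)]

# Velocities (Gaussian, with a temperature-dependent spread; the spreads
# are assumed values) are what actually distinguish the two sides.
hot_v = [random.gauss(0.0, 2.0) for _ in range(N)]
cold_v = [random.gauss(0.0, 1.0) for _ in range(N)]

mean_pos_hot = sum(hot_x) / N
mean_pos_cold = sum(cold_x) / N
ke_hot = sum(v * v for v in hot_v) / N
ke_cold = sum(v * v for v in cold_v) / N

print(mean_pos_hot, mean_pos_cold)  # statistically indistinguishable
print(ke_hot, ke_cold)              # clearly different
```

Two snapshots a second apart, or a measurement of momentum transfer to the walls, would reveal the velocity information; the point is that the demon has no way of knowing in advance that velocities, not positions, are the thing to measure.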

Let's turn to the biosphere. If a random mutation happens by which some organism can detect and utilize some new source of free energy, and it's advantageous for the organism, natural selection will select it. The whole biosphere is a vast, linked web of work done to build things so that, stunningly enough, sunlight falls and redwood trees get built and become the homes of things that live in their bark. The complex web of the biosphere is a linked set of work tasks, constraint construction, and so on. Operating according to natural selection, the biosphere is able to do what Maxwell's demon can't do by himself. The biosphere is one of the most complex things we know in the universe, necessitating a theory of organization that describes what the biosphere is busy doing, how it is organized, how work is propagated, how constraints are built, and how new sources of free energy are detected. Currently we have no theory of it—none at all.

Right now I'm busy thinking about this incredibly important problem. The frustration I'm facing is that it's not clear how to build mathematical theories, so I have to talk about what Darwin called adaptations and then what he called pre-adaptations.

You might look at a heart and ask, what is its function? Darwin would answer that the function of the heart is to pump blood, and that's true—it's the cause for which the heart was selected. However, your heart also makes sounds, which is not the function of your heart. This leads us to the easy but puzzling conclusion that the function of a part of an organism is a subset of its causal consequences, meaning that to analyze the function of a part of an organism you need to know the whole organism and its environment. That's the easy part; there's an inalienable holism about organisms.

But here's the strange part: Darwin talked about pre-adaptations, by which he meant a causal consequence of a part of an organism that might turn out to be useful in some funny environment and therefore be selected. The story of Gertrude the flying squirrel illustrates this: About 63 million years ago there was an incredibly ugly squirrel that had flaps of skin connecting her wrists to her ankles. She was so ugly that none of her squirrel colleagues would play or mate with her, so one day she was eating lunch all alone in a magnolia tree. There was an owl named Bertha in the neighboring pine tree, and Bertha took a look at Gertrude and thought, "Lunch!" and came flashing down out of the sunlight with her claws extended. Gertrude was very scared and she jumped out of the magnolia tree and, surprised, she flew! She escaped from the befuddled Bertha, landed, and became a heroine to her clan. She was married in a civil ceremony a month later to a very handsome squirrel, and because the gene for the flaps of skin was Mendelian dominant, all of their kids had the same flaps. That's roughly why we now have flying squirrels.

The question is, could one have said ahead of time that Gertrude's flaps could function as wings? Well, maybe. Could we have said ahead of time that some molecular mutation in a bacterium, one that allows it to pick up calcium currents and thereby detect and escape a paramecium in its vicinity, could function as a paramecium detector? No. Knowing what a Darwinian pre-adaptation is, do you think that we could say ahead of time what all possible Darwinian pre-adaptations are? No, we can't. That means that we don't know what the configuration space of the biosphere is.

It is important to note how strange this is. In statistical mechanics we start with the famous liter volume of gas, and the molecules are bouncing back and forth, and it takes six numbers to specify the position and momentum of each particle. It's essential to begin by describing the set of all possible configurations and momenta of the gas, giving you a 6N-dimensional phase space. You then divide it up into little 6N-dimensional boxes and do statistical mechanics. But you begin by being able to say what the configuration space is. Can we do that for the biosphere?
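The setup just described is easy to make explicit; this minimal sketch (particle and bin counts are illustrative) just counts the dimensions and the coarse-grained boxes, to show how completely specifiable the gas's configuration space is, in contrast to the biosphere's.

```python
# For N classical particles, phase space has 6N dimensions: three
# position and three momentum coordinates per particle. Coarse-graining
# each axis into m bins gives m**(6N) cells, the "little 6N-dimensional
# boxes" of statistical mechanics.
def phase_space_dim(n_particles):
    return 6 * n_particles

def n_cells(n_particles, bins_per_axis):
    return bins_per_axis ** phase_space_dim(n_particles)

print(phase_space_dim(3))  # 18 dimensions for just three particles
print(n_cells(3, 10))      # 10**18 coarse-grained cells
```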

I'm going to try two answers. Answer one is No. We don't know what Darwinian pre-adaptations are going to be, which supplies an arrow of time. The same thing is true in the economy; we can't say ahead of time what technological innovations are going to happen. Nobody was thinking of the Web 300 years ago. The Romans were using things to lob heavy rocks, but they certainly didn't have the idea of cruise missiles. So I don't think we can do it for the biosphere either, or for the econosphere.

You might say that it's just a classical phase space—leaving quantum mechanics out—and I suppose you can push me. You could say we can state the configuration space, since it's simply a classical, 6N-dimensional phase space. But we can't say what the macroscopic variables are, like wings, paramecium detectors, big brains, ears, hearing and flight, and all of the things that have come to exist in the biosphere.

All of this says to me that my tentative definition of an autonomous agent is a fruitful one, because it's led to all of these questions. I think I'm opening new scientific doors. The question of how the universe got complex is buried in this question about Maxwell's demon, for example, and how the biosphere got complex is buried in everything that I've said. We don't have any answers to these questions; I'm not sure how to get answers. This leaves me appalled by my efforts, but the fact that I'm asking what I think are fruitful questions is why I'm happy with what I'm doing.

I can begin to imagine making models of how the universe gets more complex, but at the same time I'm hamstrung by the fact that I don't see how you can see ahead of time what the variables will be. You begin science by stating the configuration space. You know the variables, you know the laws, you know the forces, and the whole question is, how does the thing work in that space? If you can't see ahead of time what the variables are, the microscopic variables for example for the biosphere, how do you get started on the job of an integrated theory? I don't know how to do that. I understand what the paleontologists do, but they're dealing with the past. How do we get started on something where we could talk about the future of a biosphere?

There is a chance that there are general laws. I've thought about four of them. One of them says that autonomous agents have to live the most complex game that they can. The second has to do with the construction of ecosystems. The third has to do with Per Bak's self-organized criticality in ecosystems. And the fourth concerns the idea of the adjacent possible. It just may be the case that biospheres on average keep expanding into the adjacent possible. By doing so they increase the diversity of what can happen next. It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible. If they did it too fast, they would destroy their own internal organization, so there may be internal gating mechanisms. This is why I call this an average secular trend, since they explore the adjacent possible as fast as they can get away with it. There's a lot of neat science to be done to unpack that, and I'm thinking about it.
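One way to watch an "adjacent possible" expand is a toy string chemistry, loosely in the spirit of Kauffman's string-based models (the ligation-only reaction rule and the two starting monomers are assumptions for illustration): at each step the set of what actually exists absorbs its frontier, and the frontier of what can be made next grows.

```python
from itertools import product

def adjacent_possible(actual):
    """Species reachable in one reaction step but not yet present.
    The only reaction here is ligation: concatenating two existing
    strings. This is a toy assumption, not Kauffman's full model."""
    reachable = {a + b for a, b in product(actual, repeat=2)}
    return reachable - actual

# Start from two monomers and repeatedly expand into the adjacent possible.
actual = {"a", "b"}
sizes = []
for _ in range(3):
    frontier = adjacent_possible(actual)
    sizes.append(len(frontier))
    actual |= frontier

print(sizes)  # the frontier grows at every step
```

Each expansion enlarges not just what exists but what can happen next, which is the diversity-increasing trend the passage describes; a gating mechanism would correspond to absorbing only part of the frontier per step.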

One other problem concerns what I call the conditions of co-evolutionary assembly. Why should co-evolution work at all? Why doesn't it just wind up killing everything, as everything jostles with everything else and the adaptive moves of some organisms disrupt the ways other organisms make a living? The same question applies to the economy. How can human beings assemble this increasing diversity and complexity of ways of making a living? Why does it work in the common law? Why does the common law stay a living body of law? There must be some very general conditions for co-evolutionary assembly. Notice that nobody is in charge of the evolution of the common law, the evolution of the biosphere, or the evolution of the econosphere. Somehow, systems get themselves to a position where they can carry out co-evolutionary assembly. That question isn't even on the books, but it's a profound question; it's not obvious that it should work at all. So I'm stuck.