We’re made of parts. Our skull is distinct from our spine. Our liver does not grade subtly into our intestines. Of course, the parts have to be connected for us to work as a whole: a skull completely separated from a spine is not much good to anyone. But those connections between the parts are relatively few. Our liver is linked to the intestines, but only by a few ducts. That’s a far cry from the intimate bonds between all the cells that make up the liver itself, not to mention the membrane that wraps around it like an astronaut’s suit. The distinctness of the parts of our bodies is reflected in what they do. In the liver, all sorts of biochemical reactions take place that occur nowhere else. Our skull protects our brain and chews our food–jobs carried out by no other part of our body.

Biologists like to call these parts modules, and they call the “partness” of our bodies modularity. It turns out that we are deeply modular. Our brain, for example, is made up of 86 billion neurons linked together by perhaps 100 trillion connections. But they’re not linked randomly. A neuron is typically part of a dense network of neighboring neurons. Some of the neurons in this module extend links to other modules, creating bigger modules. The brain can link its modules together in different networks to carry out different kinds of thought.

The proteins that make up our cells work in modules, too. Some proteins can only work in collaboration with certain other proteins. They may need to join together to make a channel, for example, or they may help out in an assembly line of chemical reactions that breaks down a toxin. You can draw a map of these interactions by connecting lines between genes that are turned into proteins in the same situations. The modules look like dense nests of links, with a few links joining together one module to another.

We are not alone. Other animals are modular. So are plants, fungi, protozoans, and bacteria. It’s enough to make you wonder why life is universally made up of parts.

You may be able to think up plenty of reasons that seem obvious. Maybe modules do things more efficiently. Maybe too much multi-tasking slows life down too much. Maybe modules make it easier for life to adapt to new challenges, by letting one part of an organism evolve without affecting the other parts. Or maybe during evolution, modules can be easily duplicated and then tweaked to tackle a new job.

Maybe. Or maybe not. To judge the merit of such ideas, scientists put them to the test. Scientists can compare modules in real organisms to look for patterns their hypotheses predict. They can tinker with bacteria to make them more or less modular and see how they perform. Recently, three scientists–Jeff Clune of the University of Wyoming, Jean-Baptiste Mouret of Pierre and Marie Curie University in Paris, and Hod Lipson of Cornell–used another method that’s become increasingly popular among scientists who want to understand the parts of life: they evolved a computer network.

Clune and his colleagues created a network inspired by the network of neurons we use to see. For a retina, it has eight virtual neurons, arranged in a four-by-two grid. Each one either sees light or darkness. Like a real neuron, Clune’s virtual neurons can respond to these inputs by sending a signal to neurons in the layer below. A single neuron may receive inputs from all eight neurons, or just one. It uses certain rules to decide whether to send a signal of its own in response to the next layer down. Finally, the network funnels down to a single neuron–a virtual brain, if you will–that can switch on or off in response to information that makes its way down through all the layers.
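Roughly, such a layered network can be sketched in a few lines of Python. The layer sizes beyond the retina, the integer weights, and the threshold rule here are my own illustrative assumptions, not the team’s actual code:

```python
import random

# A minimal sketch of the layered network described above. Layer sizes,
# weight values, and the threshold rule are assumptions; the actual
# simulation by Clune and colleagues differs in its details.
LAYER_SIZES = [8, 4, 2, 1]  # retina -> two hidden layers -> "brain" neuron

def make_network(rng):
    """Random integer weights; a weight of 0 means the link is absent."""
    net = []
    for n_in, n_out in zip(LAYER_SIZES, LAYER_SIZES[1:]):
        weights = [[rng.choice([-1, 0, 1]) for _ in range(n_in)]
                   for _ in range(n_out)]
        thresholds = [rng.choice([-1, 0, 1]) for _ in range(n_out)]
        net.append((weights, thresholds))
    return net

def activate(net, pixels):
    """Feed an 8-pixel input (0 = dark, 1 = light) down to the output."""
    signal = list(pixels)
    for weights, thresholds in net:
        signal = [1 if sum(w * s for w, s in zip(row, signal)) > t else 0
                  for row, t in zip(weights, thresholds)]
    return signal[0]  # the single "brain" neuron: 1 = on, 0 = off

eye = make_network(random.Random(0))
answer = activate(eye, [1, 0, 1, 1, 0, 1, 1, 0])
```

Each neuron simply sums its weighted inputs and fires if the total crosses its threshold, so information funnels from the eight retina pixels down to the lone output neuron.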

The scientists made lots of different networks, varying which neurons were linked to which, as well as how strongly they influenced each other. And then they put these networks to a test. In effect, they asked whether a specific pattern was present on the left side and whether a different pattern was present on the right side. If both were there, the eye needed to answer TRUE. Otherwise, it needed to respond FALSE.

They showed all 256 possible combinations to the networks and scored them for their accuracy. Not surprisingly, most were deeply awful. But a few were a little less awful, thanks only to chance.
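The exhaustive test is easy to sketch: enumerate all 256 eight-pixel inputs and count how many a network answers correctly. The two target patterns per side below are my invention–the real experiment counted several “object” patterns on each side–but the left-AND-right structure of the task is the same:

```python
from itertools import product

# Hypothetical target patterns; the study's actual "objects" differ.
# The answer is TRUE only when the left half shows one of its patterns
# AND the right half shows one of its own.
LEFT_TARGETS  = {(1, 0, 1, 1), (0, 1, 1, 1)}
RIGHT_TARGETS = {(1, 1, 0, 1), (1, 1, 1, 0)}

def correct_answer(pixels):
    return (tuple(pixels[:4]) in LEFT_TARGETS
            and tuple(pixels[4:]) in RIGHT_TARGETS)

def score(predict):
    """Fraction of all 2**8 = 256 retina inputs a predictor gets right."""
    inputs = list(product([0, 1], repeat=8))
    return sum(predict(p) == correct_answer(p) for p in inputs) / len(inputs)
```

Scoring a candidate network is then just `score(lambda pixels: activate(net, pixels))`; because TRUE cases are rare in an AND task, even a network that never fires gets every FALSE case right and only the few TRUE cases wrong.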

Clune and his colleagues then mimicked natural selection. They selected the best-performing networks and duplicated them. They introduced a mutation-like feature to their program by randomly altering their links. Then the scientists tested the mutant networks again, and once again let the best ones produce new mutants.
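The select-duplicate-mutate loop looks like this in miniature. To keep the sketch self-contained I evolve a toy 8-bit genome toward a fixed target instead of a full network; the population size, mutation rate, and target are illustrative choices, not the study’s parameters:

```python
import random

rng = random.Random(0)

# Toy stand-in for a network's wiring: an 8-bit string evolving toward
# a fixed "perfect" configuration. All parameters here are illustrative.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(genome):
    """How many positions match the target."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with a small probability -- the 'mutation-like feature'."""
    return [1 - bit if rng.random() < rate else bit for bit in genome]

population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(20)]
best = max(population, key=fitness)

for generation in range(200):
    population.sort(key=fitness, reverse=True)     # rank by performance
    best = max(best, population[0], key=fitness)   # remember the best ever
    parents = population[:5]                       # keep the top performers
    population = [mutate(rng.choice(parents)) for _ in range(20)]
```

Each generation, the best performers are duplicated, their copies are randomly perturbed, and the cycle repeats–the same loop the scientists ran for thousands of generations.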

Over 25,000 generations, some of the virtual eyes managed to get good–perfect in some cases. But then Clune and his colleagues threw another ingredient into the mix. They not only rewarded virtual eyes for becoming more accurate, but also for how few links they needed to do the job. It’s a plausible factor to include, because building and running more tangled networks can impose a higher cost on an organism. Neurons, for example, are big cells that require a lot of energy to build and also demand a lot of repair to keep running.
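One simple way to sketch the added ingredient is to dock a network’s score for every link it uses. Note this weighted sum is a simplification of my own: the study actually treated performance and connection cost as separate objectives in a multi-objective search, and the `cost_per_link` value here is arbitrary:

```python
def wiring_cost(weights):
    """Count the nonzero links in a layer's weight matrix."""
    return sum(1 for row in weights for w in row if w != 0)

def combined_fitness(accuracy, weights, cost_per_link=0.01):
    # Hypothetical weighted sum; the study used a multi-objective method
    # rather than folding both rewards into one number like this.
    return accuracy - cost_per_link * wiring_cost(weights)

# Two equally accurate wirings: the sparser one scores higher.
sparse = [[1, 0], [0, 1]]
dense  = [[1, 1], [1, 1]]
```

With both wirings at perfect accuracy, `combined_fitness(1.0, sparse)` beats `combined_fitness(1.0, dense)`, so selection steadily prunes links it can do without.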

In this new environment, evolution operated differently. A lot more virtual eyes ended up recognizing patterns perfectly. They also became more adaptable. Clune and his colleagues turned the virtual eyes towards a new task: they had to recognize whether one particular pattern of four pixels was present on either the left or the right side. The minimal-wiring networks took much less time to evolve skill at this new task than the regular ones did.

And there was one more difference between the two kinds of eyes–one that might tell us something about why life comes in parts. The minimal-wiring virtual eyes spontaneously evolved modules. The virtual neurons organized themselves into two networks–one on the left, and one on the right. Only at the final layer of the network did they combine their signals. In other words, a premium on minimally linked networks spontaneously produces modules. (You can see this evolution in action in the video embedded at the end of the post.)

A skeptic might argue that modules evolved in this experiment because the problem that the virtual eye had to solve was itself modular. Each side of the eye had to recognize its own pattern before the network made a final judgment. To test this possibility, the scientists evolved the eye with rewards for problems that couldn’t be broken down so neatly. For example, in one task, the eye had to determine whether there were four black squares anywhere in the eight-pixel grid. Even in these decidedly unmodular tasks, modules emerged.

Clune’s study suggests an evolutionary route to modules: as networks become more efficient, they become more modular. But once the parts of a system emerge, natural selection may then favor modules themselves, because they make living things more flexible in their evolution. Once life’s Legos get produced, in other words, evolution can start to play.

[Update 5:30 pm: Corrected description of first test]


4 thoughts on “The Parts of Life”

A very interesting area. The general concept of “small world networks” is fascinating. A few topics not covered in your review:
1. Wiring costs. In the CNS it takes a lot of energy and space to maintain axons. Keeping wiring (axon) volume minimized is of value.
2. Boundaries. In certain systems it’s useful to separate one domain from another. The blood-brain barrier is an example.
3. Topological organization. In the brain, many processing regions use topography to aid in computation. Connectivity strength is roughly proportional to brain distance, laid out (typically) in 2D. This permits computations such as inhibitory surround, and works for low-dimensional systems.
4. Non-modular cortices. In the CNS there are non-modular cortices, such as the hippocampus (my guess). These are smaller and phylogenetically older than neocortical domains.

I don’t find the modularity all that surprising; after all, all matter is modular, and different combinations of building blocks result in different functionality.

But it’s a great result, and a brilliant approach. It’s interesting that including network complexity in the fitness measure actually improves the process; that could benefit genetic programming in general, since it promotes less resource-intensive programs.

You mention that the concept of modularity spans a great scale, from proteins to organs; I might even argue that we see the same general phenomenon between organisms in social species, and in humans all the way from interpersonal relationships to international ones.

It has fascinated me for quite some time to think about how the forces of competition and cooperation show up on all scales from single-celled organisms all the way up to our global society.

Also I think you forgot to close an italic tag when you updated the post.

Modularity, the Integrated Information Theory of Giulio Tononi, and the thermodynamics-led behaviours of the Constructal law (with their Asynsis principle geometric signatures linking optimisation and laws of beauty in the arts and architecture) are shaping up to be a new paradigm for sustainability in complex systems.
One of the key lessons of this paradigm-shifting new ToE is that for our civilisation to preserve nature, it must evolve to better emulate her.
http://asynsis.wordpress.com
http://constructal.org

Who We Are

Phenomena is a gathering of spirited science writers who take delight in the new, the strange, the beautiful and awe-inspiring details of our world. Phenomena is hosted by National Geographic magazine, which invites you to join the conversation. Follow on Twitter at @natgeoscience.

Ed Yong is an award-winning British science writer. Not Exactly Rocket Science is his hub for talking about the awe-inspiring, beautiful and quirky world of science to as many people as possible.