Murmur the word ‘complexity’ and it is a bit like tasting a fine wine: ‘forward oaty nose, thrilling undertones of blackberry, whiff of giraffe urine …’. Everybody knows what you are talking about, but it is all so very elusive. Oats? Giraffe pee? The history of life shouts ‘Look! Once there were bacteria, now there is New York’: thermogenic plants, bombardier beetles, ballistic fungal spores. The biological world is not only fascinating but dazzlingly complex; how, then, do we capture it? Take two pinches of Shannon, a dash of self‐organization, sprinkle Kolmogorov liberally, stir with a fractal spoon, and serve immediately. In the computer such a recipe might work, but in the forests and oceans the concepts of complexity slip through our fingers.

Perhaps we need to take a step back; if we can define some of the boundaries, then maybe our mathematical colleagues can step into the cage and pin the beast with their equations. Let's begin by applying Conway Morris' Fourth Law of Biology: whenever the word ‘surprising’ is used, be prepared to smell a rat. Such terminology is employed when we look at ancestral forms. Far from being slobberingly simple, such ancestors are ‘surprisingly’ complex. A striking example involves the earliest eukaryotes. In terms of gene complements crucial for subsequent multicellularity and body‐plan construction, such as SNAREs (Kloepper et al, 2007) and homeodomain TALE/non‐TALEs (Derelle et al, 2007), the archaic eukaryote must have been, well, unexpectedly complex. Much the same can be inferred from other molecular machines, such as the kinesins (Wickstead et al, 2010). That such may be the norm, even the rule, is apparent from the nature of the first vertebrates (Heimberg et al, 2010). As Alysha Heimberg and co‐workers remark, the first worm‐like vertebrate ‘was a more complex organism than conventionally accepted’. To be sure, there are further elaborations, not least by the engine of gene duplication. But ancestral complexity is a non‐trivial problem. In part, the solution must lie in co‐options (and maybe horizontal gene transfer), but the suspicion remains that self‐organization has a crucial role as each biological threshold is breached.

But what happens when evolution runs in reverse, towards supposedly simpler forms? In fact, are they any less complex? In the world of bacteria, for example, think of either pathogens or those inhabiting the bacteriomes of sap‐feeding insects. Convergence is the rule, both in terms of their multiple origins and in the striking reductions in genome size as innumerable operations, such as amino‐acid synthesis, are passed to the host. These associations are not only extraordinarily intimate, indeed dangerously so with little charmers like the rickettsialids, but also incredibly sophisticated. Can we make the argument that the degree of integration between bacterium and host defines another boundary in the world of complexity? That it does is suggested by another striking symbiosis. This involves the dicyemid metazoans and chromidinid ciliates, whose habitation on the surface of cephalopod kidneys not only represents an extraordinary convergence, but also, as I speculate, enhances kidney function to a near‐vertebrate capacity.

Perhaps there are not only boundary lines to complexity, but also a ceiling. Can we make the argument that evolution is running out of things to do? Sarah Adamowicz and colleagues (2008), for example, argue that in terms of tagmosis in crustaceans, whereby a segmental series of more or less identical limbs is transformed into a linear array of highly specialized units (as in the lobster), not only is this a trend that has evolved several times, but as a group the crustaceans are also approaching the limits of possible tagmosis. Fascinatingly, they discuss whether this evolutionary journey is simply incomplete, or whether other factors will prevent crustaceans from reaching the final limit of complexity.

Perhaps the latter, if Conway Morris' Seventh Law of Biology holds: that all systems evolve to the most complex of possible forms. Nowhere is this more hauntingly evident than in the evolution of nervous systems. First, such configurations are prodigiously expensive to run. As Simon Laughlin and colleagues (1998) point out, to transport a single bit of information across a synapse requires a staggering 10⁴ ATP molecules. The retina of the blow‐fly consumes a jaw‐dropping 8% of total metabolic energy, even at rest. Nervous systems, therefore, are masters of economy; not surprisingly, strategies that would be the envy of any Green Officer have evolved in an attempt to circumvent the energetic penalties. Yet there are other limits to the complexity of any nervous system. Michel Hofman (2001), for example, has shown that because of the contrasting allometries of grey and white matter in anthropoid brains, there is an upper limit to their size. With brains about three times their current size we might be mistaken for extra‐terrestrials, but even this limit might be forever beyond reach, on account of the challenges of connecting such a massive piece of neurological machinery.
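For readers curious about the shape of that allometric argument, a back‐of‐envelope sketch may help; it is not Hofman's actual calculation, and both the exponent (roughly 4/3, a commonly quoted cross‐species fit) and the constant a are assumed here purely for illustration. If white matter volume grows hyperallometrically with grey matter volume,

% Illustrative assumption, not Hofman's model: white matter V_w scales
% hyperallometrically with grey matter V_g (assumed exponent of about 4/3).
\[
  V_{\mathrm{w}} \;\approx\; a\,V_{\mathrm{g}}^{\,4/3}, \qquad a > 0 ,
\]
% then the fraction of the brain devoted to wiring,
\[
  f(V_{\mathrm{g}}) \;=\; \frac{V_{\mathrm{w}}}{V_{\mathrm{g}} + V_{\mathrm{w}}}
                   \;=\; \frac{a\,V_{\mathrm{g}}^{1/3}}{1 + a\,V_{\mathrm{g}}^{1/3}} ,
\]
% rises monotonically towards 1 as V_g increases: enlarge the brain far
% enough and long-range connections, not computing tissue, dominate its volume.

On such a toy scaling it is the cost of connection, rather than the number of neurons, that ultimately sets the ceiling; the real anthropoid data that Hofman analyses are, of course, considerably messier.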

Here lies an irony. Does this ceiling constrain our very powers of thought? Are there neurological limits to what we can understand; are there things ‘out there’ that are literally beyond our comprehension? Or are you willing to subscribe to Conway Morris' Ninth Law of Biology, which paradoxically states that there is no limit to our understanding or knowledge? And if you do, then what does that suggest about the nature of the Universe in which we are embedded?