Im going
to gingerly venture in this space for the first time into waters that Ive
been heavily exploring for two years with other faculty and through very active
reading, namely, complex-systems theory, complexity theory, nonlinear dynamics,
emergent systems, self-organizing systems and network theory.

I am a very serious
novice still in these matters, and very much the bumbler in the deeper scientific
territory that these topics draw from. (Twice now I've hesitantly tried
in public to talk about my non-mathematical understanding of the travelling
salesman problem and why an emergent-systems strategy for finding good answers
in non-polynomial time is useful, and I suspect that I could begin a career
in stand-up comedy in Departments of Mathematics all around the nation with
this routine.)
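To make the aside concrete, here is a minimal sketch (not any particular published method, and the function names are my own invention) of the kind of emergent-flavored strategy in question: rather than designing one clever solver, you release many cheap agents, each starting from a random tour and making only blind local improvements, and keep the best answer any of them stumbles into.

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour visiting the points in `pts`."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def cheap_agent(pts):
    """One expendable agent: a random tour, improved by simple local
    2-opt reversals until no single reversal helps."""
    tour = list(range(len(pts)))
    random.shuffle(tour)
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                # Reverse one segment; keep the change only if it shortens the tour
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, pts) < tour_length(tour, pts):
                    tour, improved = candidate, True
    return tour

def swarm_tsp(pts, n_agents=20):
    """Turn many cheap agents loose; keep the best tour any of them finds."""
    return min((cheap_agent(pts) for _ in range(n_agents)),
               key=lambda t: tour_length(t, pts))
```

No agent is guaranteed to find the optimum; the wager is that an overabundance of dumb, cheap attempts reliably produces a good-enough tour.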

More generally,
I think there is one major insight I've gotten about many of the systems
that get cited as either simulated or real-world examples of emergence.
The working groups Im in have been thinking a lot about the question of
why so many emergent systems seem to be surprising in their results,
why the structures or complexities they produce seem difficult to anticipate
from their initial conditions. Some complex-systems gurus like Stephen Wolfram
have very strong ontological claims to make about the intrinsic unpredictability
of such systems, but these are questions that I am not competent to evaluate
(nor am I much interested in). I tend to think that the sense of surprise is more
perceptual, one part determined by the visual systems of human beings and one
part determined by an intellectual metahistory that runs so deep into the infrastructure
of our daily lives that we find it difficult to confront.

The visual issue
is easier to recognize, and relatively well considered in A-Life research. It's
why I think some simulations of emergence like the famous flocking
models are so readily useful for artists and animators, or why we're weirdly
fascinated by something like Conway's Game of Life when we see it for the
first time. Emergent systems surprise us because they have a palpable
organicism about them: they move in patterns that seem life-like to us,
but in contexts where we do not expect life. There's a deep human algorithm
here for recognizing life that involves a combination of random
movement and structural coherence, which is just what emergence does best, connecting
simple initial conditions, randomness and the creation of structure. Purely
random movements don't look lifelike to us; top-down constructions
of structure appear to us to have human controllers, to be puppeted.
So we are surprised by emergence because we are surprised by the moment-to-moment
actions of living organisms.
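The Game of Life makes this point in a few lines of code. The rules below are Conway's actual rules (birth on exactly three live neighbors, survival on two or three); the implementation style, using a set of live cells, is just one common way to write it. The glider is the standard example of structure emerging from those rules: a five-cell pattern that walks diagonally across the grid even though nothing in the rules mentions motion.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of living cells."""
    # Count, for every cell adjacent to a live one, its live neighbors
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose shape cycles every four generations
# while drifting one cell diagonally -- motion no rule specifies.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

Run `step` four times on the glider and you get the same five-cell shape shifted one cell down and to the right: random-seeming local churn adding up to coherent movement, which is exactly the life-recognition trigger described above.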

When I look at
ants, I know in general what they will do next, but I don't know exactly what
any given ant will do in any given moment. This, by the way, is why most online
virtual worlds still fail to achieve immersive organicism: play enough, explore
enough, and you know not only what the general behavior of creatures in the
environment is, but precisely what they will do from moment to moment.

What I think is
deeper and harder to chase out is that we do not expect the real-world complex
systems and behaviors we actually know about to be possible through emergence,
in the absence of an architect, blueprint or controller. Some of this expectation
has rightfully been attributed by Steven Johnson and others to a particular
set of presumptions about hierarchy, the so-called "queen ant" hypothesis.
But I also think it is because there is an expectation deeply rooted in most
modernist traditions that highly productive or useful systems achieve their
productivity through some kind of optimality, some tight fit between purpose
and result, in short, through efficiency.

My colleague Mark
Kuperberg has perceptively observed that Adam Smith has to be seen as an early
prophet of emergence (what could be a better example than his bottom-up
view of the distributed actions of individuals leading to a structural imperative,
the "invisible hand"?), but as digested through the discipline of
economics, Smith's view was increasingly and to my mind unnecessarily parsed
in terms of models requiring those agents to be tightly optimizing.

Thats whats
so interesting about both simulated and real-world examples of emergence: they
create their useful results, their general systemic productivity, through excess,
not efficiency. Theyre not optimal, not at all, at least not in their
actual workings. The optimality or efficiency, if such there is, comes in the
relatively small amount of labor needed to set such systems in motion. Designing
a system where there is a seamless fit between purpose, action and result is
profoundly difficult and vastly more time-consuming than setting an overabundance
of cheap, expendable agents loose on a problem. They may reach a desired end-state
more slowly, less precisely, and more expensively in terms of overall energy
expenditure than a tight system that does only that which it needs to do, but
that excess doesnt matter. Theyre more robust to changing conditions
if less adapted to the specificities of any given condition.

We go looking for
efficiencies and thriftiness in productive systems partly because of a deep
underlying moral presumption that thrift and conservation are good things in
a world that we imagine to be characterized by scarcity, a presumption that
Joyce Appleby has noted lies very deeply embedded in Enlightenment thought,
even in the work of Adam Smith. And we do so because of a presumption that productivity
and design, fecundity and cunning invention, are necessarily linked, a presumption
that I am guessing is one part modernist trope and one part deep cognitive structure.
We are disinclined to believe it possible that waste and excess can be the progenitors
of utility and possibility. Georges Bataille's answer to Marx may be, as
Michael Taussig has suggested, far more important than we guess. Marx (and many
non-Marxists) assume that surplus must be explained, that it is non-natural,
that it is only possible with hierarchy, with intentionality, with design. It
may be instead that excess is the key, vastly easier to achieve, and often the
natural or unintended consequence of feedback in both human and natural systems.

The metahistory
that I think I see lurking in the foundations here is a tricky one, and a lot
of effort will be required to bring it to light. We will have to unlearn assumptions
about scarcity. At the scale of living things, making more copies of living
things may be thermodynamically incredibly cheap. At the scale of post-Fordist
mass production, making more material wealth may be much cheaper than we tend
to assume. We will have to root out our presumptions about efficiency and optimality
and recognize that many real-world systems whose results we depend upon, from
the immune system to the brain to capitalist economics, depend upon "inefficient
effectiveness" (productive non-optimality, wasteful utility).

I also think exploring
this metahistory of our unconscious assumptions might help us contain emergence
and complex-systems theory to a subset of relevant examples. Some of the people
working in this field are too inclined to start sticking the label "emergent"
on anything and everything. You could actually come up with a prediction about
the limited class of systems that can potentially be emergent or self-organizing
(and Im sure that some of the sophisticated thinkers in this field have
done just that): they would have to be systems where many, many agents or discrete
components can be made exceptionally cheaply and where simple rules or procedures
for those component elements not only produce a desired end-state but also intrinsically
terminate or contain their actions within the terms of that result, and probably
some other criteria that might be identified by unthinking our prevailing assumptions
about efficiency and designsay constraints on the space, environment or
topology within which inefficiently effective systems might be possible.