7 Discussion and Future Work

What makes complexification such a powerful search method?
Whereas in fixed-topology coevolution, as well as in
simplifying coevolution, the good structures must be
optimized in the high-dimensional space of the
solutions themselves, complexifying coevolution
only searches high-dimensional structures that are
elaborations of known good lower-dimensional structures.
Before adding a new dimension,
the values of the existing genes have already been
optimized over preceding generations. Thus, after
a new gene is added, the genome is already
in a promising part of the new, higher-dimensional space.
The search in the higher-dimensional space therefore does
not start blindly, as it would if evolution had begun searching
in that space. It is for this reason that complexification
can find high-dimensional solutions that
fixed-topology coevolution and simplifying coevolution
cannot.
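This intuition can be illustrated with a minimal sketch (a hypothetical toy objective and a simple (1+1) hill climber, not NEAT itself): each new gene is added, initialized neutrally, only after the existing genes have been optimized, so search in each higher-dimensional space starts from a promising point rather than from scratch.

```python
import random

def fitness(genome):
    # Hypothetical toy objective: minimize squared distance of each
    # existing gene to its target value (higher fitness is better).
    targets = [0.7, -0.3, 0.5, 0.9, -0.6]
    return -sum((g - t) ** 2 for g, t in zip(genome, targets))

def hill_climb(genome, steps, step_size=0.1):
    # Simple (1+1) hill climber: perturb one gene, keep the change
    # only if fitness improves.
    best = fitness(genome)
    for _ in range(steps):
        i = random.randrange(len(genome))
        candidate = list(genome)
        candidate[i] += random.uniform(-step_size, step_size)
        score = fitness(candidate)
        if score > best:
            genome, best = candidate, score
    return genome

random.seed(0)
genome = [0.0]                        # minimal starting structure
for _ in range(4):
    genome = hill_climb(genome, 300)  # optimize existing genes first...
    genome.append(0.0)                # ...then add a new dimension
genome = hill_climb(genome, 1000)     # refine in the full space
print(len(genome), round(-fitness(genome), 4))
```

Because each gene is near its optimum before the next dimension is opened, the final search in the full five-dimensional space begins close to the solution instead of at a random point.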

Complexification
is particularly well suited for coevolution problems.
When a fixed genome is used to represent a strategy,
that strategy can be optimized, but it is
not possible to add functionality without
sacrificing some of the knowledge that is already present.
In contrast, if new genetic material can be added, sophisticated
elaborations can be layered above existing structure,
establishing an evolutionary arms race. This process
was evident in the robot duel domain, where
successive dominant strategies often built new
functionality on top of existing behavior by
adding new nodes and connections.

The advantages of complexification do not
imply that fixed-sized genomes cannot sometimes
evolve increasingly complex phenotypic behavior.
Depending on the mapping between the genotype
and the phenotype, it may be possible for a fixed, finite set
of genes to represent solutions (phenotypes) with varying behavioral
complexity.
For example, such behaviors have been observed in cellular
automata (CA): computational structures, each consisting of
a lattice of cells that change their
state as a function of their own current state and the
states of other cells in their neighborhood.
This neighborhood
function can be represented in a genome of size 2^n
(assuming n neighboring cells with binary state) and
evolved to obtain a desired target behavior.
In one well-known line of work, such neighborhood functions
were evolved to determine whether black or white cells
were in the majority in the CA lattice.
The evolved CAs displayed complex global behavior patterns that
converged on a single classification, depending on which
cell type was in the majority. Over the course of evolution,
the behavioral complexity
of the CA rose even as the genome remained the same size.
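The fixed-size representation can be sketched concretely. The genome below is the naive local-majority rule table, used only as a stand-in to show the encoding (one output bit per possible neighborhood configuration, 2^n entries for n binary cells); it is not one of the evolved rules from the literature, and local majority does not in fact solve the density classification task in general.

```python
# For a neighborhood of n binary cells there are 2**n possible local
# configurations, so the rule-table genome has 2**n entries.
N = 3  # the cell itself plus one neighbor on each side (radius 1)

# Stand-in "genome": the local-majority rule table.
genome = [1 if bin(cfg).count("1") > N // 2 else 0 for cfg in range(2 ** N)]

def step(lattice):
    # One synchronous CA update on a circular lattice.
    size = len(lattice)
    new = []
    for i in range(size):
        cfg = 0
        for j in (i - 1, i, i + 1):      # radius-1 neighborhood
            cfg = (cfg << 1) | lattice[j % size]
        new.append(genome[cfg])
    return new

lattice = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]  # majority of cells are 1
for _ in range(5):
    lattice = step(lattice)
print(lattice)
```

The genome never changes size, yet the global behavior it produces over repeated updates can be far more complex than the eight-entry table suggests; evolving richer behavior means finding better entries within this same fixed space.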

In the CA example, the correct neighborhood size was chosen a priori.
This choice is difficult to make and crucial for success.
If the desired behavior does not exist
within the chosen neighborhood size, the system can never solve the
task, even though its behavior may grow gradually more complex.
Interestingly, such a dead-end could
be avoided if the neighborhood (i.e. the genome)
could be expanded during evolution. It is possible that CAs
could be more effectively evolved by
complexifying (i.e. expanding) the genomes,
and speciating to protect innovation, as in NEAT.

Moreover, not only can
the chosen neighborhood be too small to
represent the
solution, but it can also be unnecessarily large.
Searching in a space of more dimensions than necessary can
impede progress, as discussed above.
If the desired function
existed in a smaller neighborhood,
it could have been found with
significantly fewer evaluations. Indeed, it is even possible that
the most efficient neighborhood is not symmetric, or contains
cells that are not directly adjacent to the cell being processed.
Moreover, even the most efficient neighborhood may be too large
a space in which to begin searching. Starting search in a small space
and incrementing into a promising part of higher-dimensional space
is more likely to find a solution.
For these reasons, complexification can be an advantage, even
if behavioral complexity can increase to some extent within a
fixed space.
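Expanding a CA genome in this way could even preserve the current phenotype exactly: a rule table over n cells can be embedded in a table over n + 2 cells that initially ignores the two new outer cells, so behavior is unchanged until further evolution exploits the added dimensions. A minimal sketch (the expand_rule helper is hypothetical, not an algorithm from the literature):

```python
def expand_rule(table, n):
    # Embed an n-cell rule table (2**n entries) in an (n + 2)-cell
    # table that ignores the two new outer cells, so the expanded
    # genome computes exactly the same neighborhood function.
    expanded = []
    for cfg in range(2 ** (n + 2)):
        inner = (cfg >> 1) & ((1 << n) - 1)  # drop the two outer bits
        expanded.append(table[inner])
    return expanded

# Start from a radius-1 (3-cell) local-majority rule table.
n = 3
table = [1 if bin(c).count("1") > n // 2 else 0 for c in range(2 ** n)]
big = expand_rule(table, n)
print(len(table), len(big))
```

This mirrors how NEAT adds structure: the new genes open a larger search space while the genome starts at a point in that space whose behavior is already known to be good.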

The CA example raises the intriguing possibility that
any structured phenotype can be evolved through
complexification, given a minimal starting point, historical
markings, and the protection of innovation through speciation.
In addition to neural networks and CA,
electrical circuits,
genetic programs,
robot body morphologies,
Bayesian networks, finite automata,
and building and vehicle architectures
are all structures of varying complexity that can
benefit from complexification.
By starting search in a minimal space and adding new
dimensions incrementally, highly complex phenotypes
can be discovered that would be difficult to find if search
began in the intractable space of the final solution,
or if it was prematurely restricted to too small a space.

The search for optimal structures is a common problem
in Artificial Intelligence (AI).
For example, Bayesian methods have been applied to learning model
structure. In these approaches, the posterior
probabilities of different
structures are computed, allowing overly
complex or simplistic models to
be eliminated. Note that these approaches are not aimed at generating
increasingly complex functional structures, but rather at providing
a model that explains existing data.
In other cases, solutions involve growing gradually larger
structures, but the goal of the growth is to form gradually
better approximations. For example, methods like Incremental
Grid Growing and Growing Neural Gas
add neurons to a network until it approximates the
topology of the input space reasonably well. On the other hand,
complexifying systems do not have to be non-deterministic (like NEAT),
nor do they need to be based on evolutionary algorithms. For example,
one deterministic algorithm expands the search space by increasing
the chromosome lengths of the entire population all at the same
time; cascade correlation, a
supervised (non-evolutionary) neural network training method,
complexifies the function a network computes by adding new hidden
neurons in a predetermined manner.
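The general idea of such deterministic growth can be sketched loosely as follows. This is not the actual cascade correlation algorithm (which trains candidate units to maximize correlation with the residual error and then retrains the output weights); here, random hinge features are added greedily to fit the residual on hypothetical toy data, and each added unit is frozen.

```python
import math
import random

# Hypothetical toy 1D regression target.
xs = [i / 20 for i in range(21)]
ys = [math.sin(3 * x) for x in xs]

units = []  # each frozen unit: (hinge position c, output weight w)

def predict(x):
    # Model output: sum of frozen hinge units' weighted activations.
    return sum(w * max(0.0, x - c) for c, w in units)

def sse():
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys))

random.seed(1)
# Grow the model one unit at a time: pick the candidate hinge that
# best fits the current residual error, then freeze it in place.
for _ in range(6):
    residual = [y - predict(x) for x, y in zip(xs, ys)]
    best = None
    for _ in range(200):  # random candidate pool
        c = random.uniform(0.0, 1.0)
        hs = [max(0.0, x - c) for x in xs]
        denom = sum(h * h for h in hs)
        if denom == 0.0:
            continue
        # Closed-form least-squares output weight for this candidate.
        w = sum(h * r for h, r in zip(hs, residual)) / denom
        err = sum((r - w * h) ** 2 for h, r in zip(hs, residual))
        if best is None or err < best[0]:
            best = (err, c, w)
    units.append((best[1], best[2]))
print(len(units), round(sse(), 4))
```

Each added unit enlarges the function class the model can compute while previous units' contributions are retained, the same complexification principle in a deterministic, non-evolutionary setting.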
The conclusion is that complexification is an important general
principle in AI.

In the future, complexification may help with the general problem
of finding the appropriate level of abstraction for difficult
problems. Complexification can start out with a simple, high-level
description of the solution, composed of
general-purpose elements. If such an abstraction is insufficient,
it can be elaborated by breaking each high-level element down into
lower-level, more specific components. Such a process can continue
indefinitely, leading to increasingly complex substructures and
increasingly low-level solutions to subproblems.
Although NEAT solutions are composed only of
nodes and connections, it provides an early example of how such a
process could be implemented.

One of the primary and most elusive goals of AI is
to create systems that scale up. In a sense, complexification
is the process of scaling up. It is the general principle
of taking a simple idea and elaborating it for broader application.
Much of AI is concerned with search, whether over
complex multi-dimensional landscapes, or through highly-branching
trees of possibilities. However, intelligence is as much about deciding
what space to search as it is about searching once the proper
space has already been identified.
Currently, only humans are able to decide the proper level
of abstraction for solving many problems,
whether it be a
simple high-level combination of general-purpose parts,
or an extremely complex assembly of low-level components.
A program that can decide what level of abstraction is
most appropriate for a given domain would be a highly compelling
demonstration of Artificial Intelligence.
This is, we believe, where complexification methods
can have their largest impact in the future.