The words "complexity" and "information" are often invoked in debates
about evolution. Those words are seldom carefully defined, but we all
share an intuitive idea of what they mean and how they apply to this
debate.

In the last few years, many advocates of Intelligent Design theory have
claimed that natural laws or regularities simply cannot do the job of
increasing biological complexity and information. Increases in
complexity and information, they have claimed, require certain types of
input from an intelligent agent. Let's pursue that claim.

In this post, I offer three scenarios in which it appears that simple,
regular processes can bring about large increases in information. Each
of these scenarios has several analogies to biological information and
biological evolution. These scenarios are not perfect analogies, but
they address some of the same issues, so I think they will be worthy of
consideration.

My question in each of the three scenarios is simply this: Where did
the increase in information come from?

++++++++++++

1. The MIT Artificial Intelligence Laboratory has built a humanoid robot
named "Cog" to test theories of human learning. (You can read more about
Cog at http://www.ai.mit.edu/projects/cog/.) Cog has many computer processors
working simultaneously at several hierarchical levels. They process
sensory information, control body movements, and coordinate sensory
information with body motion so that Cog can learn to perform physical
tasks.

One task which Cog can learn through repeated trials is to point its arm
at a distant object. During the first few trials, Cog flails its arm
and points randomly. Then, error-correcting routines take over and,
after repeated trials, Cog gets better and better at pointing.

Now consider the end result of many trials: There are a great many
variables in Cog's distributed memory which allow it to point
successfully. These variables control the sequence, timing, and
amplitude of various motions in Cog's neck, shoulder, elbow and wrist
joints.

There is one other interesting tidbit about Cog which I don't see
anywhere on the MIT website, but which I recall hearing on a television
program. If you re-set Cog's memory and have Cog re-learn the task, it
will re-learn the task in about the same amount of time, with the same
success, but with a different final set of variables in its memory. I
believe the final set of variables differs from run to run because Cog's
learning starts out with random arm-flailing, which provides the data for
subsequent error correction and improvement.
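Cog's actual learning architecture is far more sophisticated than anything shown here, but the flavor of random-start, error-corrected learning can be sketched in a few lines of Python. The pointing task, the variable names, and the hill-climbing rule below are my own simplifications, not Cog's real algorithm:

```python
import random

def learn_to_point(target, n_joints=4, trials=5000, seed=0):
    """Toy error-correcting learner: find joint-angle variables whose sum
    'points' at the target, starting from random arm-flailing."""
    rng = random.Random(seed)
    angles = [rng.uniform(-1.0, 1.0) for _ in range(n_joints)]  # random flailing
    error = lambda a: abs(sum(a) - target)
    for _ in range(trials):
        i = rng.randrange(n_joints)              # tweak one variable at random
        candidate = list(angles)
        candidate[i] += rng.gauss(0.0, 0.05)
        if error(candidate) < error(angles):     # keep only improvements
            angles = candidate
    return angles

run_a = learn_to_point(2.5, seed=1)
run_b = learn_to_point(2.5, seed=2)
```

Two runs with different seeds both end up pointing accurately, but with different final sets of variables, which mirrors Cog's behavior when its memory is re-set and the task is re-learned.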

Here's the punchline: After Cog has learned a task, there must be a
great deal of information in its distributed memory. Out of all
possible variable sets which could cause Cog to move in a variety of
ways, only a tiny subset of variables allow it to perform the specified
task. (This is analogous to the fact that out of all possible DNA
sequences, only a tiny subset can produce a living creature.) To use
Dembski's terminology, Cog's variable set after learning a task is a
low-probability specified set of numbers.

Where did that information come from?

++++++++++++++++

2. I would like to write a computer program which would learn to
navigate mazes. This program would read off an instruction string of
zeros and ones; each pair of bits in the instruction string would tell
it which way to move. For example:
00 -- move down the screen one step
01 -- move up the screen one step
10 -- move left on the screen one step
11 -- move right on the screen one step.
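A minimal sketch of this decoding scheme in Python (the function name is my own, and I use screen coordinates in which y grows downward):

```python
def decode(bits):
    """Translate an instruction string into (dx, dy) screen steps.
    Screen coordinates: y grows downward, so 'down' is (0, +1)."""
    moves = {'00': (0, 1),    # down the screen
             '01': (0, -1),   # up the screen
             '10': (-1, 0),   # left
             '11': (1, 0)}    # right
    return [moves[bits[i:i + 2]] for i in range(0, len(bits), 2)]

steps = decode('0011')  # one step down, then one step right
```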

The program enters the maze and follows the instruction string from the
beginning of the string. If the program hits a wall in the maze before
it gets to the end of its instruction string, it stops following the
instruction string and generates an error signal. In addition, to
prevent the program from going around in loops, the program will keep
track of the path it travels through the maze; it will stop and
generate an error signal if it crosses its old path.

The instruction string starts out as ten randomly chosen bits -- enough
to move five steps. If the program gets an error signal, it randomly
flips one bit in its instruction string and tries again from the
beginning of the maze.

Eventually, the program will hit upon an instruction string which it
can follow to the end of the string without getting an error signal.
(Depending upon the maze, there may be more than one such string.)
When this happens, the program increases the length of the instruction
string. It does this by duplicating its sequence to create a twenty-
bit string. It then continues to navigate the maze (from the beginning
of the maze) with the new 20-bit instruction string. Now when it
generates an error signal, it randomly flips only one of the last ten
bits of its instruction string. After repeated trials, it will once
again hit upon an instruction string which it can follow to the end of
the 20-bit string without error. Once again it will lengthen its
instruction string by duplicating the last ten bits. It will again run
the maze with a 30-bit string, generating error signals and randomly
flipping one of the last ten bits, until it successfully follows a 30-
bit string. This procedure continues until the program finally finds
the "exit" of the maze.
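Here is one way the procedure above might be sketched in Python. The maze, the trial limit, and all names are my own assumptions; for simplicity this toy maze is a single corridor, so it has only one solution:

```python
import random

# Toy maze: '#' = wall, ' ' = open, 'E' = exit; the walker starts at (1, 1).
MAZE = ["#######",
        "#    ##",
        "#### ##",
        "####  #",
        "##### #",
        "#####E#",
        "#######"]
START = (1, 1)
# Two bits per step; y grows down the screen.
STEP = {'00': (0, 1), '01': (0, -1), '10': (-1, 0), '11': (1, 0)}

def run_string(bits):
    """Follow the instruction string from the start of the maze. Returns
    'exit', 'ok' (string finished without error), or 'error'."""
    x, y = START
    visited = {(x, y)}
    for i in range(0, len(bits), 2):
        dx, dy = STEP[bits[i:i + 2]]
        x, y = x + dx, y + dy
        if MAZE[y][x] == 'E':
            return 'exit'
        if MAZE[y][x] == '#' or (x, y) in visited:  # wall, or crossed old path
            return 'error'
        visited.add((x, y))
    return 'ok'

def solve(seed=0, max_trials=200000):
    rng = random.Random(seed)
    bits = ''.join(rng.choice('01') for _ in range(10))  # ten random bits
    for _ in range(max_trials):
        status = run_string(bits)
        if status == 'exit':
            return bits
        if status == 'ok':            # whole string followed without error:
            bits += bits[-10:]        # duplicate the last ten bits
        else:                         # error: flip one of the last ten bits
            j = rng.randrange(len(bits) - 10, len(bits))
            bits = bits[:j] + ('1' if bits[j] == '0' else '0') + bits[j + 1:]
    return None                       # gave up (e.g. drawn into a corner)
```

Each failed trial flips one bit among the last ten; each fully successful trial duplicates the last ten bits and continues, exactly the grow-by-duplication cycle described above.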

Now consider a maze which has many walls but lots of different possible
solutions.

(In such a maze, there is one more feature we would need to add to the
maze-finding program to prevent it from "drawing itself into a corner."
We could add an error-counter. If the program had more than 128 error
signals in a row without reaching the end of its instruction string, we
would conclude that it had, in fact, drawn itself into a corner. It
would then generate a meta-error signal. This would cause it to delete
the last 18 bits off the end of its instruction string, duplicate the
last 10 remaining bits, and continue. There may be more elegant and more
thorough ways of dealing with the "draw yourself into a corner" problem,
but this rule should suffice for demonstration purposes.)

Here's the punchline: Once this program has run many times, and found
its way through the entire maze, it has generated a long instruction
string for navigating the maze. In a maze with many possible solutions,
quite a few different instruction strings will work.
maze. However, out of all possible bit-strings, only a very tiny
fraction of them would successfully navigate the maze. Thus, this
instruction string contains a lot of information. (To use Dembski's
terminology, it is a low-probability specified string.)

Where did that information come from?

++++++++++++++++++++++

3. This is a variation on the maze-finding program. In this case, the
maze is 4-dimensional in space (instead of 2-dimensional) and infinite
in extent (no exit point). The 4-dimensional space has many 3-
dimensional walls in it so that programs navigating the maze will have
to make many twists and turns. As before, there are many possible paths
through the maze rather than just one.

The instruction string is once again a string of bits. Each step
requires four bits to specify. Two bits specify the dimension of
motion (x, y, z, or w-direction). The third bit specifies movement in
the positive or negative direction. And the fourth bit specifies
whether the program takes the step in "pencil up" mode or "pencil down
on paper" mode.
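A sketch of this four-bit encoding (the bit ordering, names, and sign convention are my own assumptions):

```python
def decode_step(nibble):
    """Decode one 4-bit step: two bits pick the axis (x, y, z, w),
    one bit picks the sign, and one bit picks pencil up/down."""
    axis = int(nibble[0:2], 2)            # '00'=x, '01'=y, '10'=z, '11'=w
    sign = 1 if nibble[2] == '1' else -1  # positive or negative direction
    pencil_down = (nibble[3] == '1')      # 'pencil down on paper' mode
    move = [0, 0, 0, 0]
    move[axis] = sign
    return tuple(move), pencil_down

step = decode_step('1011')  # one step in the +z direction, pencil down
```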

In this case, more than one program will be running simultaneously in the same
maze. Each program will have its own instruction string.

As before, each program will generate an error signal if it runs into a
wall or if -- while taking a step in "pencil down" mode -- it crosses
over a spot which has already been "drawn" over by itself or by another
maze-finding program. Since different programs can interfere with each
other by operating in "pencil down" mode, there will be some
competition amongst the programs.

After each run, those programs which encountered an error signal will
flip a random bit amongst the last 20 bits in their instruction string.
An error-counter is included, and those programs which hit a certain
number of consecutive errors will be terminated. If a program reaches
the end of its instruction string without an error signal, it will be
allowed to "reproduce." It gets one offspring for each step it takes
in "pencil down" mode. Each offspring is like the original, with an
additional 20 random bits tacked onto the end of the instruction
string. One final caveat: since "sibling" programs (and their
subsequent offspring) will share identical first portions of their
instruction strings, family members will not be considered
"competition" when operating in "pencil down" mode, up until the point
of divergence of their instruction strings, after which they will be
considered competitors.
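The reproduction rule above can be sketched as follows (the function name and the source of random bits are my own choices):

```python
import random

def reproduce(parent_bits, pencil_down_steps, rng):
    """One offspring per 'pencil down' step taken; each offspring inherits
    the parent's entire instruction string plus 20 fresh random bits."""
    return [parent_bits + ''.join(rng.choice('01') for _ in range(20))
            for _ in range(pencil_down_steps)]

rng = random.Random(0)
offspring = reproduce('0101' * 5, 3, rng)  # 3 pencil-down steps -> 3 offspring
```

Because every offspring shares the parent's string as a prefix, "siblings" trace identical early paths and only diverge where their fresh random bits begin, which is why the family-competition caveat above is needed.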

Here's the punchline: If we run this program for a while, I expect we
will find our 4-dimensional maze populated with many programs with long
instruction strings. Once again, there will be many different
"successful" instruction strings. But out of all possible bit strings,
only a very tiny fraction would successfully live inside the maze.