The psynet model, as outlined above, is an abstract model
of the dynamics of mind. It is not a model of the dynamics of
the brain. This is an important distinction, both
methodologically and conceptually. Mind is not brain; mind is,
rather, a collection of patterns emergent in the brain. Mind is
a mathematical entity: a collection of relations, not an actual
physical entity.

It is clear that, in modern mind theory, psychology and
neuroscience must proceed together. But even so, given the
tremendous rate of change of ideas in neuroscience, it seems
foolish to allow one's psychological models to be dictated too
precisely by the latest discoveries about the brain. Rather, one
must be guided by the intrinsic and elegant structure of thought
itself, and allow the discoveries of neuroscience to guide one's
particular ideas within this context.

An outstanding example of this point is neural network
modelling. Neural network models take an idea from neuroscience
-- the network of neurons exchanging charge through synapses --
and elevate it to the status of a governing psychological
principle. Many notable successes have been obtained in this
way, both psychologically and from an engineering point of view.
However, the problem of connecting the states and dynamics of
neural networks with mental states and dynamics has never really
been solved. Neural networks remain a loose, formal model of the
brain, with an uncertain, intuitive connection to the mind
itself.

The advantage of neural network models is that they allow
one to import the vocabulary of dynamical systems theory into the
study of the brain. One can talk about attractors of various
kinds, attractor basins, and so forth, in a rigorous and detailed
way. The idea of thoughts, memories and percepts as attractors
is given a concrete form. However, in a sense, the
representation is too concrete. One is forced to understand
deep, fuzzy, nebulous patterns of mind in terms of huge vectors
and matrices of neural activations and synaptic weights.

In this chapter I will present an alternative approach to
brain modelling, based on the psynet model. Instead of looking
at the brain and abstracting a model from brain structure, I will
look at the brain through the lens of the psynet model and ask:
What structures and dynamics in the brain seem to most naturally
give rise to the structures posited in the psynet model? The
resulting theory is quite different from standard neural network
models in its focus on overall structure. And it is different
from standard, non-connectionist AI in its focus on self-organization and emergence. Rather than specifying the update
rules of the neurons and synapses, one is specifying the overall
emergent structure of the autopoietic system.

In particular, what I will present here is a theory of
cortical structure. The brain is a highly complex system -- it
is complex on many different levels, from overall architecture
to neuronal wiring to biochemical dynamics. However, it is
possible to single out certain aspects of neural complexity as
being more important than others. What differentiates the human
brain from the brains of other primates is, above all, our greatly enlarged
neocortex. Elucidation of the workings of the cortex would thus
seem to be a particularly important task.

Thanks to recent advances in neurobiology, we now know a
great deal about the structure and function of the cortex. What
we do not know, however, is what the cortex computes. Or, to put it
in different terminology, we do not know the dynamics of the
cortex. On a very crude level, it is clear that the cortex is
largely responsible for such things as high-level pattern
recognition, abstract thought, and creative inspiration. But how
this particular configuration of brain cells accomplishes these
fantastic things -- this is the sixty-four million dollar
question. This is the question that I will attempt to answer
here.

3.2 NEURONS AND NEURAL ASSEMBLIES

In this section I will present a few basic facts about the
structure and function of the brain. The goal is to give the
minimum background information required to understand the model
of cortex to be presented. For an adequate review of
neuroanatomy and neurochemistry, the reader is referred
elsewhere. An excellent, if dated, overview of the brain may be
found in (Braitenberg, 1978); a more modern and thorough
treatment may be found in any number of textbooks. Finally, the
comprehensive reference on cognitive neuroscience is (Gazzaniga,
1995).

First, the neuron is generally considered the basic unit of
brain structure. Neurons are fibrous cells, but unlike the
fibrous cells in other body tissues such as muscles or tendons,
they have a tendency to ramify and branch out. The outer cell
membrane of a neuron is shaped into extensive branches called
dendrites, which receive electrical input from other neurons; and
into a structure called an axon which, along its main stalk and
its collateral ramifications, sends electrical output to other
neurons. The gap between the dendrite of one neuron and the axon
of another is called the synapse: signals are carried across
synapses by a variety of chemicals called neurotransmitters.
There is also a certain amount of diffusion of charge through the
cellular matrix. The dynamics of the individual neuron are quite
complex, but may be approximated by a mathematical "threshold
law," whereby the neuron sums up its inputs and then gives an
output which rapidly increases from minimal to maximal as its
total input exceeds a certain threshold level.
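The threshold law described above can be sketched in a few lines of code. This is only an illustrative caricature: the weights, threshold value, and sigmoid steepness below are arbitrary assumptions, not measured quantities.

```python
import math

def neuron_output(inputs, weights, threshold, steepness=10.0):
    """Sum the weighted inputs and pass the total through a steep
    sigmoid: output rises rapidly from near 0 to near 1 as the
    total input crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-steepness * (total - threshold)))

# Below threshold the output is near minimal; above it, near maximal.
low = neuron_output([0.2, 0.1], [1.0, 1.0], threshold=1.0)
high = neuron_output([0.8, 0.9], [1.0, 1.0], threshold=1.0)
```

The steeper the sigmoid, the closer the behavior comes to an all-or-nothing switch.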

By passing signals from one neuron to another in complex
circuits, the brain creates and stores information. Unlike nerve
cells in the skin or the retina, which transmit information about
the external world, neurons in the brain mostly trade information
around among themselves. One may thus say with some justification that the brain is a sense organ which senses
itself.

The neuron is in itself a complex dynamical system.
However, many theorists have found it useful to take a simplified
view of the neuron, to think of it as an odd sort of electrical
switch, which takes charge in through certain "input wires" and
puts charge out through certain "output wires." These wires are
the biologist's synapses. Some of the wires give positive charge
-- these are "excitatory" connections. Some turn positive charge
into negative charge -- these are "inhibitory." Each wire has
a certain conductance, which regulates the percentage of charge
that gets through. But, as indicated above, the trick is that,
until enough charge has built up in the neuron, it doesn't fire
at all. When the magic "threshold" value of charge is reached,
all of a sudden it shoots its load.

This "electrical switch" view of the neuron is the basis of
most computer models of the brain. One takes a bunch of these
neuron-like switches and connects them up to each other, thus
obtaining a vaguely brain-like system which displays remarkable
learning and memory properties. But it is important to remember
what is being left out in such models. First of all, in the
brain, the passage of charge from one neuron to the other is
mediated by chemicals called neurotransmitters. Which
neurotransmitters a given neuron sends out or receives can make
a big difference. Secondly, in the brain, there are many
different types of neurons with different properties, and the
arrangement of these types of neurons in particular large-scale
patterns is of great importance. Each part of the brain has
different concentrations of the different neurotransmitters, and
a different characteristic structure. Here we will be concerned
in particular with the cortex, which poses a serious problem for
the brain theorist, as it has much less of an obvious
architecture than such areas as the hindbrain or the cerebellum.

Cell Assemblies

In the late 1940's, in his book The Organization of
Behavior, Donald Hebb (Hebb, 1949) proposed a neuronal theory of
high-level brain function. He hypothesized that learning takes
place by the adaptive adjustment of the conductances of the
connections between neurons. And he argued that thoughts, ideas
and feelings arose in the brain as neural assemblies -- groups
of neurons that mutually stimulate each other, and in this way
maintain a collective dynamical behavior. While crude and
clearly in need of biochemical, neuroanatomical and mathematical
elaboration, Hebb's conceptual framework is still the best guide
we have to understanding the emergence of mind from brain. It
has inspired many current theorists, most notably Edelman (1987)
and Palm (1982), and it underlies the ideas to be presented here.

In dynamical systems terms, we may recast Hebb's model by
saying that mental entities are activation patterns of neural
networks, which may arise in two ways: either as relatively
ephemeral patterns of charge passing through networks, or as
persistent attractors of subnetworks of the brain's neural
network. The process of learning is then a process of adaptive
modification of neuronal connections, so as to form networks with desired attractors and transient activation patterns. The
transient case corresponds to formal neural network models of
perceptual and motor function, principally feedforward network
models (Rumelhart et al, 1986). In these models the goal is to
modify synaptic conductances so that the network will compute a
desired function. The attractor case, on the other hand,
corresponds to formal neural network models of associative memory
(see Serra and Zanarini, 1991). In these models, the goal is to
modify synaptic conductances so as to endow a network with a
given array of attractors. The network may then remain
constantly in a certain attractor state, or, alternatively, it may
possess a variety of different attractor states. A given
attractor state may be elicited by placing the network in
another state which lies in the basin of the desired state.
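The attractor case can be sketched concretely in the spirit of the associative-memory models cited above (Serra and Zanarini, 1991), here using a Hopfield-style network as a stand-in; the network size and stored pattern are arbitrary choices for the sketch. The Hebbian rule sets the synaptic conductances so that a stored pattern becomes an attractor, and a corrupted state in its basin is pulled back into it.

```python
import numpy as np

def train_hebbian(patterns):
    """Set synaptic weights by the Hebbian outer-product rule."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Iterate the threshold dynamics until the network settles."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hebbian([pattern])

noisy = pattern.copy()
noisy[0] = -noisy[0]          # corrupt one unit
recovered = recall(W, noisy)  # the state falls into the stored attractor
```

Starting the network anywhere in the basin of the stored pattern elicits that attractor, which is exactly the associative-memory behavior described in the text.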

This modernized Hebbian view of neural network learning is
the basis of many formal neural network models, and in this sense
it is known to be mathematically plausible. Biologically, it
cannot be regarded as proven, but the evidence is nevertheless
fairly convincing. On the one hand, researchers are beginning
to document the existence of complex periodic and chaotic
attractor states in the cortex -- the best example is Freeman's
(1992) work on the olfactory cortex. And, on the other hand, the
search for biochemical mechanisms of synaptic modification has
turned up two main candidates: the number of vesicles on the
presynaptic side of the synapse and the thickness of the spine
on the postsynaptic side of those synapses involving dendritic
spines (Braitenberg and Schuz, 1994 and references therein).

The neural assembly model is quite general. What it does
not give us is a clear picture of how simple assemblies build up
to form more complex assemblies, and how the dynamics of simple
assemblies relate to the dynamics of the more complex assemblies
of which they are parts. To get at issues such as this, one
needs to look at the architecture of particular regions of the
brain, in this case the cortex.

3.3 THE STRUCTURE OF THE CORTEX

The cortex is a thin, membrane-like tissue, about two
millimeters thick. It is folded into the brain in a very complex
way, and is generally understood to be structured in two
orthogonal directions. First it has a laminar structure, a
structure of layers upon layers upon layers. The consensus is
that there are six fairly distinct layers, although in some areas
these six may blend with each other, and in others some of these
six may subdivide into distinct sublayers. Then, perpendicular
to these six layers, there are large neurons called pyramidal
neurons, which connect one layer with another. These pyramidal
neurons are surrounded by smaller neurons, most notably the
interneurons, and form the basis for cortical columns, which
extend across layers.

Pyramidal cells comprise about 85% of the neurons in the
cortex, and tend to feed into each other with excitatory
connections; there is good reason to consider the network of
pyramidal cells as the "skeleton" of cortical organization.
Pyramidal cells are distinguished by the possession of two sets of dendrites: basal dendrites close to the main body of the cell,
and apical dendrites distant from the main cell body, connected
by a narrow shaft-like membrane formation. Pyramidal cells in
the cortex transmit signals mainly from the top down. They may
receive input from thousands of other neurons -- fewer than the
10,000 inputs of a Purkinje cell, but far more than the few
hundred inputs of the smallest neurons. Pyramidal neurons can
transmit signals over centimeters, spanning different layers of
the cortex. Lateral connections between pyramidal cells can also
occur, with a maximum range of 2-3 millimeters, either directly
through the collateral branches of the axons, or indirectly
through small intervening interneurons. In many cases, there is
a pattern of "on-center, off-surround," in which pyramidal
neurons stimulate their near neighbors, but inhibit their medium-distance neighbors.

The columnar structure imposed by pyramidal neurons is
particularly vivid in the visual cortex, where it is well-established that all cells lying on a line perpendicular to the
cortical layers will respond in a similar way. A column of, say,
100 microns in width might correspond to line segments of a
certain orientation in the visual field. In general, it is clear
that neurons of the same functional class, in the same cortical
layer, and separated by several hundred microns or less, share
almost the same potential synaptic inputs. The inputs become
more and more similar as the cell bodies get closer together.
What this suggests is that the brain uses redundancy to overcome
inaccuracy. Each neuron is unreliable, but the average over 100
or 1000 neurons may yet be reliable. For instance, in the case
of motion detection neurons, each individual neuron may display
an error of up to 80% or 90% in estimating the direction of
motion; yet the population average may be exquisitely accurate.
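The redundancy argument is easy to illustrate numerically. The noise level and population size below are arbitrary illustrative choices, not physiological measurements.

```python
import random

random.seed(0)
true_direction = 45.0   # degrees of motion; arbitrary example value

# Each individual neuron's estimate is wildly noisy...
estimates = [true_direction + random.gauss(0, 40) for _ in range(1000)]

# ...but the average over the population is far more reliable:
# the standard error of the mean shrinks as 1/sqrt(N).
population_estimate = sum(estimates) / len(estimates)
error = abs(population_estimate - true_direction)
```

With individual errors on the order of tens of degrees, the population average lands within a degree or two of the true direction.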

The visual cortex highlights a deep mystery of cortical
function which has attracted a great deal of attention in the
last few years, the so-called binding problem. The visual cortex
contains a collection of two-dimensional maps of the scene in
front of the eyes. Locations within these maps indicate the
presence of certain features -- e.g. the presence at a certain
location of a line at a certain orientation, or a certain color.
The question is, how does the brain know which features
correspond to the same object? The different features
corresponding to, say, a cat in the visual field may be stored
all over the cortex, and will generally be all mixed up with
features of other objects in the visual field. How does the
cortex "know" which features go with the cat? This is related
to the problem of consciousness, in that one of the main
functions of consciousness is thought to be the binding of
disparate features into coherent perceived objects. The current
speculation is that binding is a result of temporal
synchronization -- that neurons corresponding to features of the
same object will tend to fire at the same time (Singer, 1994).
But this has not been conclusively proven; it is the subject of
intense current research.

A recent study by Braitenberg, Schuz and others at the Max
Planck Institute for Biological Cybernetics sheds much light upon
the statistics of cortical structure (Braitenberg and Schuz,
1994). They have done a detailed quantitative study of the neurons and synapses in the mouse cortex, with deeply intriguing
results. They find that, in the system of pyramidal cell to
pyramidal cell connections, the influence of any single neuron
on any other one is very weak. Very few pairs of pyramidal cells
are connected by more than one synapse. Instead, each pyramidal
cell reaches out to nearly as many other pyramidal cells as it
has synapses -- a number which they estimate at 4000.
Furthermore, the cells to which a given pyramidal cell reaches
can be spread over quite a large distance. The conclusion is
that no neuron is more than a few synapses away from any other
neuron in the cortex. The cortex "mixes up" information in a
most remarkable way.
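The conclusion that no neuron is more than a few synapses from any other follows from simple combinatorics: if each cell reaches some 4000 others, the number of cells reachable in k steps grows roughly like 4000^k. A toy calculation, using Braitenberg and Schuz's estimate of 4000 synapses per cell, an assumed order-of-magnitude cortical neuron count, and ignoring the overlap of paths:

```python
import math

synapses_per_cell = 4000    # Braitenberg and Schuz's estimate
neurons_in_cortex = 10**10  # order-of-magnitude assumption

# Smallest k with synapses_per_cell ** k >= neurons_in_cortex:
# every neuron is then reachable within k synaptic steps
# (a crude upper-bound argument that ignores overlapping paths).
k = math.ceil(math.log(neurons_in_cortex) / math.log(synapses_per_cell))
```

Even for ten billion neurons, k comes out to three, which is the sense in which the cortex "mixes up" information so thoroughly.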

Braitenberg and Schuz give a clever and convincing
explanation for the emergence of columnar structure from this
sprawling pyramidal network; they show how patches of lumped
inhibitory interneurons, spaced throughout the cortex, could
cause the pyramidal neurons in between them to behave as columnar
feature receptors, in spite of having connections extending at
least two or three columns out in any direction. This is
fascinating, as it shows how the columnar structure fits in
naturally with excitatory/inhibitory neuron dynamics.

Finally, the manner in which the cortex deals with sensory
input and motor output must be noted. Unlike the multilayered
feedforward neural networks often studied in cognitive science,
which take their inputs from the bottom layer and give their
outputs from the top layer, the cortex takes both its input and
its outputs from the bottom layers. The top layers help to
process the input, but if time is short, their input may be
overlooked and processing may proceed on the basis of lower-level
neural assemblies.

3.4 A THEORY OF CORTICAL DYNAMICS

Given the highly selective account of neural dynamics and
cortical structure which I have presented here, the broad
outlines of the relation between the psynet model and the cortex
become almost obvious. However, there are still many details to
be worked out. In particular, the emergence of abstract symbolic
activity from underlying neural dynamics is a question to which
I will devote special attention.

Before going into details, it may be useful to cite the
eight principles of brain function formulated by Michael Posner
and his colleagues (Posner and Raichle, 1994), on the basis of
their extensive work with PET brain scanning technology. Every
one of these principles fits in neatly with the psynet view of
brain/mind:

Elementary mental operations are located in discrete neural areas...

Cognitive tasks are performed by a network of widely distributed neural systems...

Computations in a network interact by means of "re-entrant" processes...

Hierarchical control is a property of network operation...

Activating a computation from sensory input (bottom-up) and from attention (top-down) involves many of the same neurons...

Activation of a computation produces a temporary reduction in the threshold for its reactivation...

When a computation is repeated its reduced threshold is accompanied by reduced effort and less attention...

Practice in the performance of any computation will decrease the neural networks necessary to perform it...

Posner's principles emphasize pattern recognition, hierarchical
structure, distributed processing and self-organization (re-entrant processes) -- qualities which the psynet model ties up
in a neat and synergetic bundle. What they do not give, however
-- what does not come out of brain imaging studies at all, at the
current level of technology -- is an explanation of how these
processes and structures emerge from underlying neurodynamics.
In order to probe this issue, one must delve deeper, and try to
match up particular properties of the cortex with particular
aspects of mental function.

There are many different ways to map the psynet model onto
the structure of the cortex. The course taken here is to look
at the most straightforward and natural correspondence, which can
be summarized in four principles:

Proposed Psynet-Cortex Correspondence

1. Neural assemblies may be viewed as "magicians" which transform each other

2. What assemblies of cortical pyramidal neurons do is to recognize patterns in their inputs

3. The multiple layers of the cortex correspond to the hierarchical network of the dual network

4. The pyramidal cells based in each level of the cortex are organized into attractors that take the form of two-dimensional, heterarchical networks, in which cells represent emergent patterns among neighboring cells

The first principle is essentially a reinterpretation of the
cell assembly theory. If one accepts that cell assemblies have
persistent attractors, and if one accepts that synapses are
modified by patterns of use, then it follows that cell assemblies
can, by interacting with each other, modify each other's
synapses. Thus cell assemblies transform each other.

The second principle, that neural processes recognize
patterns, is also more programmatic than empirical, since
virtually any process can, with a stretch of the imagination, be
interpreted as recognizing a pattern. The real question is
whether it is in any way useful to look at neural processes as
pattern-recognizers.

The pattern-recognition view is clearly useful in the visual
cortex -- feature detectors are naturally understood as pattern recognizers. And I believe that it is also useful in a more
general context. Perhaps the best way to make this point is to
cite the last three of Posner's principles, given above. These
state, in essence, that what the brain does is to recognize
patterns. The two principles before the last state that
components of the brain are more receptive to stimuli similar to
those they have received in the recent past -- a fact which
can be observed in PET scans as reduced blood flow and reduced
activation of attention systems in the presence of habituated
stimuli. And the final principle, in particular, provides a
satisfying connection between neuroscience and algorithmic
information theory. For what it says is that, once the brain has
recognized something as a repeated pattern, it will use less
energy to do that thing. Thus, where the brain is concerned,
energy becomes approximately proportional to subjective
complexity. Roughly speaking, one may gauge the neural
complexity of a behavior by the amount of energy that the brain
requires to do it.

Turning to the third principle of the proposed psynet-cortex
correspondence (that the multiple layers of the cortex are layers
of more and more abstract patterns, ascending upwards), one may
once again say that this is the story told by the visual cortex,
in which higher levels correspond to more and more abstract
features of a scene, composed hierarchically from the simpler
features recognized on lower levels. Similar stories emerge for
the olfactory and auditory regions of the cortex, and for the
motor cortex. Numerous connections have been identified between
perceptual and motor regions, on both lower and higher levels in
the hierarchy (Churchland et al, 1995), thus bolstering the view
that the lower levels of the cortex form a unified "perceptual-motor hierarchy." It would be an exaggeration to say that the
layers of cortex have been conclusively proved to function as a
processing hierarchy. However, there are many pieces of evidence
in favor of this view, and, so far as I know, none contradicting
it.

The final principle, the correspondence between the
heterarchical network and the organization of attractors in
a single layer of the cortex, is the least obvious of the four.
In the case of the visual cortex, one can make the stronger
hypothesis that the columnar organization corresponds to the
heterarchical network. In this case the organization of the
heterarchical network is based on the organization of the visual
scene. Feature detectors reside near other feature detectors
which are "related" to them in the sense of responding to the
same type of feature at a location nearby in the visual field.
This organization makes perfect sense as a network of emergence,
in that each small region of a scene can be approximately
determined by the small regions of the scene immediately
surrounding it. The dynamic and inventive nature of this network
representation of the visual world is hinted at by the abundance
of perceptual illusions, which can often be generated by lateral
inhibition effects in neural representations of scenes.
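The way lateral inhibition generates illusions can be sketched in one dimension: each unit is excited by its own input and inhibited by its immediate neighbors, which exaggerates contrast at an edge, as in the Mach band illusion. The inhibition strength below is an arbitrary illustrative value.

```python
def lateral_inhibition(signal, inhibition=0.3):
    """Each unit's response is its own input minus a fraction of its
    neighbors' inputs; contrast at edges is thereby exaggerated."""
    out = []
    for i, x in enumerate(signal):
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        out.append(x - inhibition * (left + right))
    return out

# A simple step edge in brightness...
edge = [1.0, 1.0, 1.0, 2.0, 2.0, 2.0]
response = lateral_inhibition(edge)
# ...produces an exaggerated dip on the dark side of the edge and an
# exaggerated peak on the bright side -- bands that are not in the input.
```

The network's response thus contains structure that the stimulus itself does not, which is the sense in which such representations are "inventive."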

In order for the fourth principle to hold, what is required
is that other regions of the cortex contain maps like those of
the visual cortex -- but not based on the structure of physical
space, based rather on more general notions of relatedness. This is not an original idea; it has been explored in detail by Teuvo
Kohonen (1988), who has shown that simple, biologically plausible
two-dimensional formal neural networks can be used to create
"self-organizing feature maps" of various conceptual spaces. All
the formal neurons in one of his feature map networks receive
common input from the same collection of formal neurons on an
hypothetical "lower level"; each formal neuron also exchanges
signals with the other formal neurons in the feature map network,
within a certain radius.

This is a crude approximation to the behavior of pyramidal
cells within a cortical layer, but it is not outrageously
unrealistic. What happens is that, after a number of iterations,
the feature map network settles into a state where each formal
neuron is maximally receptive to a certain type of input. The
two-dimensional network then mirrors the topological structure
of the high-dimensional state space of the collection of inputs,
in the sense that nearby formal neurons correspond to similar
types of input.
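Kohonen's procedure can be sketched in a few lines. This is a deliberately crude, one-dimensional version; the learning rate, neighborhood radius, number of units, and training data are all arbitrary assumptions. Each formal neuron has a weight vector; the winner for each input pulls itself and its map neighbors toward that input, so that nearby neurons come to respond to similar inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=10, epochs=50, rate=0.3, radius=2):
    """Train a one-dimensional self-organizing feature map."""
    weights = rng.random((n_units, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            # The unit whose weights best match the input "wins"...
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # ...and it and its map neighbors move toward the input.
            for j in range(n_units):
                if abs(j - winner) <= radius:
                    weights[j] += rate * (x - weights[j])
    return weights

# Scalar inputs drawn uniformly from [0, 1): after training, the map's
# weights vary smoothly along the network, so neighboring units respond
# to similar input values.
data = rng.random((200, 1))
weights = train_som(data)
```

The smooth variation of weights along the map is the toy analogue of the topological mirroring described above: proximity in the network comes to reflect proximity in the input space.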

Kohonen's feature map networks are not, intrinsically,
networks of emergence; the notion of "relatedness" which they
embody is simply proximity in the high-dimensional space of
lower-level inputs. However, these feature maps do provide a
vivid illustration of the spontaneous formation of two-dimensional neural maps of conceptual space. What Kohonen's work
suggests is a restatement of Principle 4 of the hypothesized
psynet/cortex correspondence:

4'. Each cortical layer consists of a network of pyramidal
neurons organized into Kohonen-style feature maps, whose
topology is based on a structural notion of relatedness
between nearby pyramidal neurons.

In order for this restated principle to hold true, it is
sufficient that two properties should hold. These criteria lie
at the borderline of mathematics and biology. It is possible
that they could be proved true mathematically, in such a general
sense that they would have to hold true in the cortex. On the
other hand, it seems more likely that their mathematical validity
relies on a number of conditions, the applicability of which to
the cortex is a matter of empirical fact.

The first property is that pyramidal neurons and neuronal
groups which are close to each other, and thus have similar
inputs from lower level neurons, should, on average, recognize
similar patterns. Of course, there will be cases in which this
does not hold: one can well have two neurons with almost the same
inputs but entirely different synaptic conductances, or with
different intervening interneurons turning excitatory connections
to inhibitory ones. But this is not generally the case in the
visual cortex -- there what we see is quite consistent with the
idea of a continuity of level k+1 feature detectors corresponding
to the continuity of their level k input.

The second property is that higher-level patterns formed
from patterns involved in a network of emergence should
themselves naturally form into a network of emergence. We have
seen that the associative-memory "network of emergence" structure is an attractor for networks of pattern recognition processes;
what the fulfillment of this criterion hinges on is the basin of
the network of emergence structure being sufficiently large that
patterns recognized among level k attractors will gradually
organize themselves into a level k+1 network of emergence.

3.5 EVOLUTION AND AUTOPOIESIS IN THE BRAIN

I have delineated the basic structural correspondence
between the psynet model and the cortex. In essence, the idea
is that the two orthogonal structures of the cortex correspond
to the two principal subnetworks of the dual network. The next
natural question is: what about dynamics? The psynet model comes
equipped with its own dynamics; how do these correspond to the
dynamics of brain function?

Recall that, in the previous chapter, a distinction was
drawn between two types of dynamics in a dual network: evolution
and autopoiesis. This is to some extent an artificial
distinction, but it is nevertheless useful in a neurobiological
context. It is essentially the same as the distinction, in
biology, between evolution and ecology. On a more basic,
philosophical level, it is a distinction between a force of
change and a force of preservation.

Evolution

First, what about neural evolution? On the simplest level,
one may say that the reinforcement of useful pathways between
neural assemblies is a form of evolution. Edelman (1987) has
called this view "neuronal group selection," or "Neural
Darwinism." Essentially, in Neural Darwinism, one has survival
of the fittest connections. Chaos and randomness in the neural
circuits provide mutation, and long-term potentiation provides
differential selection based on fitness. As in the dual network
model, the progressive modification of synapses affects both
associative memory (within a layer) and hierarchical
perception/control (across layers).

In this simplest model of neural evolution, there is no
reproduction -- and also no crossover. Edelman argues that the
lack of reproduction is compensated for by the vast redundancy
of the cortex. For, in a sense, one doesn't need to reproduce
connections, because almost every connection one might wish for
is already there. There may not be many multiple connections
between the same pair of pyramidal neurons, but pyramidal neurons
tend to have similar connections to their neighbors, so there
will be plenty of multiple connections from one cluster of
similar pyramidal neurons to another.

Edelman does not even mention the lack of crossover. From
his perspective, mutation alone is a perfectly valid evolution
strategy. From another point of view, however, one might argue
for the necessity of neural crossover. As will be argued in
Chapter Six, crossover is demonstrably a more powerful learning
technique than mere mutation. Furthermore, if one considers the
two methods as learning algorithms, crossover gives the power-law
learning curve so familiar from psychology, while mutation gives a straight-line learning curve. Finally, intuition and
introspection indicate that human creativity involves some form
of combination or crossing-over of ideas.

As I have argued in EM, synaptic modification, in the
context of an hierarchical processing network, can provide a kind
of reproduction by crossover. By appropriate strengthening and
weakening of synapses, one can take two trees of neural
assemblies and swap subtrees between them. This is very close
to the kind of crossover studied by John Koza (1992) in his
"genetic programming paradigm." The difference is that, instead
of trees of neural assemblies, he has trees of LISP functions.
There is therefore a sense in which the Neural Darwinist model
of neural evolution can provide for crossover.
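The subtree-swapping operation can be sketched directly. In this toy version, trees of nested lists stand in for trees of neural assemblies (or Koza's LISP expressions), and the subtrees to swap are chosen at fixed positions for clarity rather than at random.

```python
import copy

def crossover(tree_a, tree_b, path_a, path_b):
    """Swap the subtree at path_a in tree_a with the subtree at
    path_b in tree_b, returning two new offspring trees."""
    a, b = copy.deepcopy(tree_a), copy.deepcopy(tree_b)
    # Walk down to the parents of the two chosen subtrees.
    pa = a
    for i in path_a[:-1]:
        pa = pa[i]
    pb = b
    for i in path_b[:-1]:
        pb = pb[i]
    # Exchange the subtrees.
    pa[path_a[-1]], pb[path_b[-1]] = pb[path_b[-1]], pa[path_a[-1]]
    return a, b

# Two parent trees of "assemblies" (nested lists)...
parent1 = ["root1", ["x", ["x1", "x2"]], "leaf"]
parent2 = ["root2", ["y", ["y1", "y2"]], "leaf"]
# ...swapping the subtrees at position 1 yields two offspring,
# each combining material from both parents.
child1, child2 = crossover(parent1, parent2, [1], [1])
```

In the neural interpretation, "swapping a subtree" corresponds to strengthening and weakening synapses so that a branch of one hierarchy of assemblies comes to feed into the other, and vice versa.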

It is not clear, however, whether this kind of mutation-based crossover is enough. I have proposed as a speculative
hypothesis that the brain, in its creative evolution, routinely
carries out a more flexible kind of crossover -- that its neural
networks are easily able to move assemblies from place to place.
Memory reorganization would be more effective, it would seem, if
memories were actually able to move from one part of the brain
to another, rather than merely having the connections between
them modified. And, from the hierarchical point of view, actual
moving of neural assemblies would provide for a much more
flexible crossover operation between trees and other systems of
assemblies. This hypothesis finds some support both in
neuroscience and in formal neural network theory. On the one
hand, evidence is emerging that the brain is, in certain
circumstances, able to move whole systems of assemblies from one
place to another; even from one hemisphere to another (Blakeslee,
1991). And, on the other hand, one may show that simple Kohonen-style neural networks, under appropriate conditions, can give
rise to spontaneously mobile activation bubbles (Goertzel,
1996a). It is not possible to draw any definite conclusions as
yet, but the concept of "sexually" reproducing neural assemblies
is looking more and more plausible.
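
The mobile-bubble phenomenon can be illustrated in a few lines. The
sketch below is only a cartoon of the Kohonen-style result cited above:
a ring of units with short-range excitation and surround inhibition,
where a deliberately asymmetric interaction kernel (an assumption of
this toy, with made-up constants) sets the activation bubble drifting
around the ring:

```python
import numpy as np

N = 50  # units arranged on a ring

def kernel(shift=1):
    """Mexican-hat interaction matrix, displaced by `shift` units.
    The displacement breaks symmetry and puts the bubble in motion."""
    K = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            d = (j - i - shift) % N
            if d > N // 2:
                d -= N  # signed ring distance
            if abs(d) <= 2:
                K[i, j] = 1.0    # short-range excitation
            elif abs(d) <= 6:
                K[i, j] = -0.4   # surround inhibition
    return K

K = kernel()
a = np.exp(-0.5 * ((np.arange(N) - 25) / 2.0) ** 2)  # bump at unit 25

positions = []
for _ in range(10):
    a = np.maximum(K @ a, 0.0)  # rectified lateral interaction
    a /= a.sum()                # keep total activation bounded
    positions.append(int(np.argmax(a)))
```

With a symmetric kernel (shift=0) the bubble stays where it formed;
the shifted kernel makes it migrate steadily around the ring, a minimal
analogue of a neural assembly moving from place to place.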

Autopoiesis

Autopoiesis has an obvious correlate in neural networks.
If neural assemblies are magicians, then structural conspiracies
are assemblies of neural assemblies -- neural meta-assemblies.
Hebb, in the original statement of cell assembly theory, foresaw
that neural assemblies would themselves group into self-organizing systems. Of course, self-organizing systems of neural
assemblies need not be static, but may be in a continual process
of mutual growth and change.

Autopoiesis, in the psynet model, is asked to carry out a
wide variety of functions. Essentially, anything involving the
preservation of structure over time must be accomplished by
pattern/process autopoiesis. This leads to a variety of
intriguing hypotheses. For instance, consider the vexing
question of symbolic versus connectionist processing. How do the
messy, analogue statistical learning algorithms of the brain give
rise to the precise symbolic manipulations needed for language
and logic? According to the psynet model, this must come out of
pattern/process autopoiesis. Thus, in the current brain theory, it
must come out of autopoietic systems of neural assemblies.

But how can symbol processing come out of autopoiesis?
Intriguingly, mathematics provides a ready answer. The technique
of symbolic dynamics, to be discussed in Chapter Five, deals
precisely with the emergence of symbol systems and formal
languages out of complex dynamical systems. To study a dynamical
system using symbolic dynamics, one partitions the state space
of the system into N+1 regions, and assigns each region a
distinct code number drawn from {0,...,N}. The system's
evolution over any fixed period of time may then be represented
as a finite series of code numbers, the code number for time t
representing the region of state space occupied by the system
state S(t). This series of code numbers is called a "symbolic
trajectory"; it may be treated as a corpus of text from an
unknown language, and grammatical rules may be inferred from it.
In particular, systems with complex chaotic dynamics will tend
to give rise to interesting languages. Chaos, which involves
dynamical unpredictability, does not rule out the presence of
significant dynamic patterns. These patterns reveal themselves
visually as the structure of the chaotic system's strange
attractor, and they reveal themselves numerically as languages
emergent from symbolic dynamics.
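
As a minimal illustration of the procedure, the sketch below takes a
standard chaotic system, the logistic map x -> 4x(1-x), partitions its
state space [0,1] into two regions (so N = 1), and reads off a symbolic
trajectory, from which first-order statistics of the emergent
"language" can then be gathered. The starting point and step count are
arbitrary choices for the example:

```python
from collections import Counter

def symbolic_trajectory(x0, steps, boundary=0.5):
    """Code each state as 0 (left region) or 1 (right region)."""
    x, symbols = x0, []
    for _ in range(steps):
        symbols.append(0 if x < boundary else 1)
        x = 4.0 * x * (1.0 - x)  # fully chaotic logistic map
    return symbols

traj = symbolic_trajectory(0.1234, 200)

# Treat the symbol string as a corpus from an unknown language and
# tabulate which symbol follows which -- the crudest grammatical rule.
bigrams = Counter(zip(traj, traj[1:]))
```

Genuine symbolic dynamics goes much further, inferring whole grammars
from such corpora; and for less strongly chaotic parameter values some
transitions never occur at all, those prohibitions being the grammar's
first rules.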

Cohen and Eichenbaum (1995) have demonstrated that cortical-hippocampal feedback loops play a fundamental role in helping the
neocortex to store and access symbolic, declarative information.
The hypothesis to which the psynet model of brain leads us is
that the cortical-hippocampal feedback loops in fact serve to
encode and decode symbolic memories in the structures of the
attractors of cortical neural assemblies. In fact, one may show
that these encoding and decoding operations can be carried out
by biologically plausible methods. This is an intriguing and
falsifiable hypothesis which ensues directly from applying the
simplicity of the psynet model to the complexity of the brain.

3.6 CONCLUSION

It is important not to fall into the trap of believing that
neural network models, in particular, exhaust the applicability
of complex systems science to the study of brain
function. The brain is a very complex system, and complex
systems ideas can be applied to it on many different levels --
from the microtubular level stressed by Stuart Hameroff in
Ultimate Computing and Roger Penrose in Shadows of the Mind, up to
the abstract mental-process level emphasized here.

I am not a neurobiologist, and the cortical model presented
here is plainly not a neurobiologist's model. It has an abstract
structure which doubtless reflects my background as a
mathematician. But, on the other hand -- and unlike many neural
network models -- it is not a mere exercise in mathematical
formalism. It is, rather, a conceptual model, an intuitive
framework for understanding.

The value of such models lies in their ability to guide
thought. In particular, this model was developed not only to
guide my own thinking about the brain, but to guide my own
thinking about the learning behavior of humans, animals, and
artificial intelligence systems. My hope is that it may help
others to guide their thoughts as well. For, after all, the
project of understanding the brain/mind is just barely getting
under way -- we need all the ideas we can muster.