The Function of Conscious Experience: An
Analogical Paradigm of Perception and Behavior

slehar@cns.bu.edu

Abstract

The question of whether conscious experience has any functional
purpose depends on a more fundamental issue concerning the nature of
conscious experience: in particular, whether the world of experience
is the external world itself, as suggested by direct realism, or
whether it is merely a virtual-reality replica of that world in an
internal representation, as in indirect realism, or
representationalism. There is an epistemological problem with the
notion of direct realism, for we cannot be consciously aware of
objects beyond the sensory surface. Therefore the world of experience
can only be an internal replica of the external world. This in turn
validates a phenomenological approach to studying the nature of the
perceptual representation in the brain. Phenomenology reveals that the
representational strategy employed in the brain is an analogical one,
in which objects are represented in the brain by constructing full
spatial replicas of those objects in an internal representation.

Introduction

The question of the functional role of conscious experience is
currently an active area of debate. On the one side there are those
like Dennett (1988, 1991) who argue that consciousness is an
epiphenomenon, with no direct functional value. Others, such as
Humphrey (1999, p. 250) argue that consciousness must have some
adaptive value on evolutionary grounds, for nothing can evolve by
natural selection unless it has some effect on behavior. The question
is a paradigmatic one in the Kuhnian sense (Kuhn 1970), because the
differences of opinion on the function of conscious experience reflect
deeper differences on the more fundamental question of what
consciousness itself actually is. I propose that the debate over the
ontological status of conscious experience in turn rests on the
epistemological question of whether the world we see around us is the
real world itself, or whether it is merely a virtual-reality replica
of that external world in an internal representation. Until this
central issue is resolved on sound logical grounds, the sciences of
psychology and consciousness studies are condemned to remain in a
pre-paradigmatic state, with opposing camps arguing at cross-purposes
due to lack of consensus on the foundational issues of the science.

If we accept the modern materialist view of mind as the operation
of the physical brain, then the epistemological question is not open;
there is only one reasonable interpretation of the ontology of
conscious experience, i.e. that consciousness is in fact an internal
replica of the external world rather than the world itself. This in
turn validates a phenomenological approach to the study of
conscious experience, i.e. to examine the world around us not as a
scientist examining an objective external world, but as a
perceptual scientist examining a rich and complex internal
representation. I will show how the phenomenological approach can be
employed to examine both the structure of conscious experience, and
also the detailed workings of the computational strategy or algorithm
that guides behavior. This approach to the study of conscious
experience clearly demonstrates that consciousness is not an
epiphenomenon, but serves an essential functional role, which is to
provide an analogical representation of the external world, that
operates in conjunction with an analogical computational strategy that
guides behavior. In other words perception and behavior are intimately
coupled through the agency of conscious experience, and careful
examination of the properties of that experience offers insights into
the nature of both perception and of behavior.

The Epistemological Divide

The debate over the nature of conscious experience is confounded
by the deeper epistemological question of whether the world we see
around us is the real world itself, or merely an internal perceptual
copy of that world generated by neural processes in our brain. In
other words this is the question of direct realism, also known as
naive realism, as opposed to indirect realism, or
representationalism. Although this issue is not much discussed in
contemporary psychology, it is an old debate that has resurfaced
several times, but the continued failure to reach consensus on this
issue continues to bedevil the debate on the functional role of
conscious experience. The reason for the continued confusion is that
both direct and indirect realism are frankly incredible, although each
is incredible for different reasons.

Problems with Direct Realism

The direct realist view (Gibson 1972) is incredible because it
suggests that we can have experience of objects out in the world
directly, beyond the sensory surface, as if bypassing the chain of
sensory processing. For example if light from this paper is transduced
by your retina into a neural signal which is transmitted from your eye
to your brain, then the very first aspect of the paper that you can
possibly experience is the information at the retinal surface, or the
perceptual representation that it stimulates in your brain. The
physical paper itself lies beyond the sensory surface and therefore
must be beyond your direct experience. But the perceptual experience
of the page stubbornly appears out in the world itself instead of in
your brain, in apparent violation of everything we know about the
causal chain of vision. The difficulty with the concept of direct
perception is most clearly seen when considering how an artificial
vision system could be endowed with such external perception. Although
a sensor may record an external quantity in an internal register or
variable in a computer, from the internal perspective of the software
running on that computer, only the internal value of that variable can
be "seen", or can possibly influence the operation of that
software. In exactly analogous manner the pattern of electrochemical
activity that corresponds to our conscious experience can take a form
that reflects the properties of external objects, but our
consciousness is necessarily confined to the experience of those
internal effigies of external objects, rather than of external objects
themselves. Unless the principle of direct perception can be
demonstrated in a simple artificial sensory system, this explanation
remains as mysterious as the property of consciousness it is supposed
to explain.
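
The sensor analogy above can be made concrete with a minimal sketch (all class and variable names here are illustrative, not drawn from the text): a program that reads a sensor can only ever operate on its internal copy of the sensed value, never on the external quantity itself.

```python
# Minimal sketch of the sensor analogy: software "sees" only the
# internal copy of a sensed value, never the external quantity itself.

class TemperatureSensor:
    """Transduces an external quantity into an internal register."""
    def __init__(self, external_world):
        self._world = external_world

    def read(self):
        # The external temperature is copied into an internal variable;
        # only this copy is visible to the rest of the program.
        return self._world["temperature"]

external_world = {"temperature": 21.5}   # the external quantity
sensor = TemperatureSensor(external_world)

internal_value = sensor.read()           # the internal "effigy"

# Everything downstream operates on internal_value alone. Changing
# the world after the reading leaves the copy unchanged, showing the
# program never accesses the external quantity directly.
external_world["temperature"] = 30.0
print(internal_value)  # still 21.5
```

From the perspective of any code running in this program, the external dictionary is inaccessible except through such internal copies, just as, on the indirect realist view, external objects are accessible to consciousness only through their internal effigies.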

Problems with Indirect Realism

The indirect realist view is also incredible, for it suggests that
the solid stable structure of the world that we perceive to surround
us is merely a pattern of energy in the physical brain, i.e. that the
world that appears to be external to our head is actually inside our
head. This could only mean that the head we have come to know as our
own is not our true physical head, but is merely a miniature
perceptual copy of our head inside a perceptual copy of the world, all
of which is completely contained within our true physical
skull. Stated from the internal phenomenal perspective, out beyond the
farthest things you can perceive in all directions, i.e. above the
dome of the sky and below the earth under your feet, or beyond the
walls, floor, and ceiling of the room you perceive around you, beyond
those perceived surfaces is the inner surface of your true physical
skull encompassing all that you perceive, and beyond that skull is an
unimaginably immense external world, of which the world you see around
you is merely a miniature virtual-reality replica. The external world
and its phenomenal replica cannot be spatially superimposed, for one
is inside your physical head, and the other is outside. Therefore the
vivid spatial structure of this page that you perceive here in your
hands is itself a pattern of activation within your physical brain,
and the real paper of which it is a copy is out beyond your direct
experience. Although this statement can only be true in a topological,
rather than a strict topographical sense, this insight emphasizes the
indisputable fact that no aspect of the external world can possibly
appear in consciousness except by being represented explicitly in the
brain. The existential vertigo occasioned by this concept of
perception is so disorienting that only a handful of researchers have
seriously entertained this notion or pursued its implications to its
logical conclusion. (Kant 1781/1991, Koffka 1935, Köhler 1971,
Russell 1927, Boring 1933, Feigl 1958, Smart 1959, Smythies 1989,
1994, Harrison 1989, Hoffman 1998, Lehar 2002)

Another reason why the indirect realist view is incredible is that
the observed properties of the world of experience when viewed from
the indirect realist perspective are difficult to resolve with
contemporary concepts of neurocomputation. For the world we perceive
around us appears as a solid spatial structure that maintains its
structural integrity as we turn around and move about in the
world. Perceived objects within that world maintain their structural
integrity and recognized identity as they rotate, translate, and scale
by perspective in their motions through the world. These properties of
the conscious experience fly in the face of everything we know about
neurophysiology, for they suggest some kind of three-dimensional
imaging mechanism in the brain, capable of generating
three-dimensional volumetric percepts of the degree of detail and
complexity observed in the world around us, that are able to rotate
and translate freely relative to the space in which they appear. No
plausible mechanism has ever been identified neurophysiologically that
exhibits this incredible property. The properties of the phenomenal
world are therefore inconsistent with contemporary concepts of neural
processing, which is exactly why these properties have been so long
ignored.

Problems with Projection Theory

There is a third alternative besides the direct and indirect
realist views, and that is a projection theory, whereby the
brain does indeed process sensory input, but that the results of that
processing get somehow projected back out of the brain to be
superimposed on the external world (Ruch 1950 quoted in Smythies 1954,
O'Shaughnessy 1980 pp 168-192, Velmans 1990, Baldwin 1992). According
to this view, the world around us is part real, and part perceptual
construction, and the two are spatially superimposed. However no
physical mechanism has ever been proposed to account for this external
projection (Smythies 1954). The problem with this notion becomes clear
when considering how an artificial intelligence could possibly be
endowed with this kind of external projection. Although a sensor may
record an external quantity in an internal register or variable in a
computer, there is no sense in which that internal value can be
considered to be external to that register or to the physical machine
itself, whether detected externally with an electrical probe, or
examined internally by software data access. Unless the principle of
external projection can be demonstrated in a simple artificial sensory
system, this explanation too remains as mysterious as the property of
consciousness it is supposed to explain.

Selection from Incredible Alternatives

We are left therefore with a choice between three alternatives,
each of which appears to be absolutely incredible. Contemporary
neuroscience seems to take something of an equivocal position on this
issue, recognizing the epistemological limitations of the direct
realist view and of the projection hypothesis, while being unable to
account for the incredible properties suggested by the indirect
realist view. However one of these three alternatives simply must be
true, to the exclusion of the other two. And the issue is by no means
inconsequential, for these opposing views suggest very different ideas
of the function of visual processing, or what all that neural wetware
is supposed to actually do. Therefore it is of central importance for
psychology to address this issue head-on, and to determine which of
these competing hypotheses reflects the truth of visual processing.

The problem with the direct realist view is of an epistemological
nature, and is therefore a more fundamental objection, for direct
realism is nothing short of magical in suggesting that we can see the
world out beyond the sensory surface. The projection theory has a similar
epistemological problem, and is equally magical and mysterious,
suggesting that neural processes in our brain are somehow also out in
the world. Both of these paradigms have difficulty with phenomena of
dreams and hallucinations (Revonsuo 1995), which present the same kind
of phenomenal experience as spatial vision, except independently of
the external world in which that perception is supposed to occur in
normal vision. It is the implicit or explicit acceptance of this naive
concept of perception that has led many to conclude that consciousness
is deeply mysterious and forever beyond human comprehension. For
example Searle (1992) contends that consciousness is impossible to
observe, for when we attempt to observe consciousness we see nothing
but whatever it is that we are conscious of; that there is no
distinction between the observation and the thing observed. This is
also the "Problem of Transparency" in Tye's (1995) Ten
Problems of Consciousness.

The problem with the indirect realist view on the other hand is
more of a technological or computational limitation, for we cannot
imagine how contemporary concepts of neurocomputation, or even
artificial computation for that matter, can account for the properties
of perception as observed in visual consciousness. It is clear however
that the most fundamental principles of neural computation and
representation remain to be discovered, and therefore we cannot allow
our currently limited notions of neurocomputation to constrain our
observations of the nature of visual consciousness. The phenomena of
dreams and hallucinations clearly demonstrate that the brain is
capable of generating vivid spatial percepts of a surrounding world
independent of that external world, and that capacity must be a
property of the physical mechanism of the brain. Normal conscious
perception can therefore be characterized as a guided hallucination
(Llinás & Paré 1991, Revonsuo 1995), which is as much a matter
of active construction as it is of passive detection. If we accept the
truth of indirect realism, this immediately disposes of at least one
mysterious or miraculous component of consciousness, which is its
unobservability. For in that case consciousness is indeed observable,
contrary to Searle's contention, because the objects of experience are
first and foremost the product or "output" of consciousness, and only
in secondary fashion are they also representative of objects in the
external world. Searle's difficulty in observing consciousness is
analogous to saying that you cannot see the moving patterns of glowing
phosphor on your television screen, all you see is the ball game that
is showing on that screen. The indirect realist view of television is
that what you are seeing is first and foremost glowing phosphor
patterns on a glass screen, and only in secondary fashion are those
moving images also representative of the remote ball game.

The choice therefore is that either we accept a magical mysterious
account of perception and consciousness that seems impossible in
principle to implement in any artificial vision system, or we have to
face the seemingly incredible truth that the world we perceive around
us is indeed an internal data structure within our physical brain. If
science is to triumph over mysticism, we are compelled to accept the
latter view, and accept the reality of conscious experience as a
direct manifestation of neurophysiological processes within our
physical brain. This in turn validates a phenomenological
approach to the study of conscious experience, i.e. to examine the
world around us not as a scientist examining an objective external
world, but as a perceptual scientist examining a rich and complex
internal representation. I will show how phenomenological observation
can be used to determine the dimensions of conscious experience, and
what the structural form of conscious experience tells us about the
representational strategy used in the brain. I will then show how the
phenomenological technique can also be employed to determine the
functional properties of the conscious experience, or how the
information encoded in perception is used to guide behavior.

Despite the deep and fundamental problems with these non-reductive
physicalist theories, they have been, and still are the most
influential metaphysical position on the mind-body problem (Kim
1998). It is interesting how this fits in with the historical pattern
of the epistemological debate. For neither Putnam nor Davidson nor
Dretske propose to challenge the core issues of epistemology directly,
but merely present a convenient escape clause by which philosophers
can evade the issue altogether while supposedly preserving their
scientific integrity. This is the same kind of evasion of the fearsome
facts of epistemology as was offered by the dualist account that
pushed the problem into the domain of God or Spirit, and the
behaviorist solution that pushed the problem of consciousness
off-limits to scientific scrutiny, and the critical realist solutions
that invoke semi-existent spatial entities located in non-space. And
as with those earlier movements in philosophy, all that was required
was to open the door of doubt a tiny crack to unleash a stampede of
popular support all desperately seeking a respectable justification
for their own naïve realist intuitions. There are various other
strategies to be found in contemporary philosophy to evade the
epistemological issue, such as the modern dualism of
nonnaturalism (Popper & Eccles 1977, Swinburne 1984, Adams
1987) which holds that consciousness is not a natural phenomenon and
is therefore closed to scientific scrutiny; and anticonstructive
naturalism (McGinn 1989) which suggests that consciousness is
terminally mysterious, and all attempts to resolve the mind-body
problem are doomed to failure from the outset; and eliminative
naturalism (Churchland 1981, Churchland 1983) which suggests that
consciousness is a natural phenomenon, but that there is a sense in
which it cannot be explained, for the concept of consciousness is
simultaneously too simplistic, too vague, and too historically
embedded in false and confused theory to denote a phenomenon in need
of explanation. But the deeply mysterious aspect of consciousness
which motivates these pessimistic analyses is mostly involved in its
confusion with the external world. Once we recognize that `external'
consciousness is in fact an internal representation, and that it takes
the form of a spatial structure in the brain, the mystery is
transformed from a deep logical paradox to a neurophysiological or
neurocomputational problem, i.e. the question of what kind of
neurophysiological or computational principle could possibly account
for the emergence of dynamic spatial structures in the brain.

Another somewhat different challenge to the mind-brain identity
theory is posed by Smythies (1994, 1999). While Smythies explicitly
refutes naïve realism (Smythies & Ramachandran 1998) and the
resultant confusion of the phenomenal body or `body image' with the
objective physical body, (Smythies 1953) Smythies argues that "the
brain, as a machine, is simply the wrong sort of machine to be able to
actually construct the visual field and other components of phenomenal
consciousness." (Smythies 1994, p. 311) Smythies cites Leibnitz's
principle that for two entities to be identical, they must have
identical properties. "Since events in the sensory brain and events in
our sensory fields in consciousness have clearly distinct properties,
- the theory fails in principle." (Smythies 1999, p. 168) Here
Smythies puts his finger on the principal motivation for naive realism
in modern philosophy, which is the glaring disparity between
phenomenology and contemporary neuroscience. For modern neuroscience
tells us that the brain is composed of innumerable discrete neurons,
interconnected in a network of synaptic connections. It is hard to
resolve this discrete or quantized concept of brain physiology with
the continuous, unitary, and field-like character observed in the
phenomenal world, an issue which is sometimes called the "grain
problem" (Sellars 1963). This either means that consciousness
is a complete illusion that bears no resemblance to its corresponding
neurophysiological state, or that contemporary neuroscience is in a
state of serious crisis, for it offers no evidence for the continuous
field-like pictorial representations that we know to be present in
the brain. It is not so surprising for a neuroscientist to favor the
former eliminative alternative, having more inherent faith in his own
method of investigation. It is surprising however that the modern
philosopher most often defers to neuroscience whenever it is in
conflict with the observed properties of the mind, for it is the mind,
and not the brain, which is, or at least should be the primary object
of philosophical inquiry. A philosopher who cannot trust his
observations of the properties of the mind unless or until they are
confirmed by neurophysiology, would do better to abandon philosophy
altogether as a futile enterprise, and switch to the more certain
science of neurophysiology. Smythies' own solution is to propose that
consciousness may be concealed in one of the hidden dimensions of
reality which are sometimes proposed in modern cosmology. In some
sense this is reminiscent of the semi-existent entities proposed by
the critical realists for sense-data, which are spatial structures but
not to be found in physical space. However by identifying the hidden
dimensions of physical reality as the locus of these hidden percepts,
Smythies moves the theory into scientific territory where those hidden
dimensions should at least in principle be accessible to scientific
scrutiny. But until modern science can actually confirm the existence
of those hidden dimensions, and demonstrate how information can be
stored in, and retrieved from them by physical processes, this is a
speculative hypothesis that remains to be confirmed. Unlike the
critical realists however, Smythies would presumably allow that the
properties of those hidden dimensions of the universe are accessible
phenomenologically, so Smythies' theory is an identity theory, that
places sense data within the physical brain, albeit in a hidden
dimension not readily accessible to scientific scrutiny except by way
of conscious experience.

The continued dominance of the naive realist view in contemporary
philosophy is highlighted by the fact that books like Tye's (1995)
"Ten Problems of Consciousness" pass largely unchallenged, even though
several of the fundamental problems of consciousness identified by Tye
disappear altogether when viewed from the indirect realist
perspective. For example Tye's problem number eight, the Problem of
Transparency, is the same issue raised by Searle (1992), that we
cannot distinguish consciousness itself from the object of which we
are conscious. But consciousness is only transparent to those who fall
prey to the naive illusion, and believe they are viewing the world
itself directly, as if by magic. Tye's problem number nine, the
Problem of Felt Location, is the question of external perception,
which is also resolved by the dualist epistemology which reveals that
the space of our phenomenal experience is a representation inside our
head, and the paradox disappears. While indirect realism does not
resolve all of Tye's ten problems as easily as it does these two, it
does cast the problem in an entirely new light, whose implications
deserve at least as much scrutiny as the naive view has been afforded,
to see if this unexplored alternative might finally release the
problem of consciousness from its current paradoxical impasse. But
what is interesting in this case is that Tye does not consider
indirect realism even as an alternative to be discussed and refuted,
it is simply ignored altogether as if it had no place in the debate.

Flanagan's (1992) Consciousness Reconsidered comes closer to
expressing the true mind-brain relation, for Flanagan argues that mind
is a natural phenomenon that can be investigated by blending insights
from neuroscience, psychology, and phenomenology. Thus it would seem
that Flanagan's "constructive naturalism" is a kind of mind-brain
identity theory. But Flanagan provides subtle clues that he still
clings to a few last vestiges of naive realism. Although Flanagan
approves of the investigation of brain processes through
phenomenology, he also claims (p. 12) that "Phenomenology alone has
been tried and tested. It does not work. ... Phenomenology alone never
reveals anything about how `seemings' are realized, nor can it reveal
anything about the mental events and processes involved in conscious
mental life." But phenomenology has already revealed that perception
involves a three-dimensional volumetric spatial representation, and
that is a very significant fact of conscious mental life, with direct
implications for how spatial "seemings" are realized in the brain. The
only way that this most obvious representational fact could possibly
have escaped Flanagan's notice was if he mistook this phenomenal world
for reality itself. Significantly Flanagan never discusses the
epistemological issue itself directly, or its very significant
implications for all of the other issues of perception and
consciousness, as if Flanagan were totally unaware of the existence of
this theoretical alternative.

If there is anything to be learned from the long history of the
epistemological debate, it is that the issue is by no means simple or
trivial, and that whatever is ultimately determined to be the truth of
epistemology, we can be sure that it will do considerable violence to
our common-sense view of things. This however is nothing new in
science, for many of the greatest discoveries of science seemed
initially to be so incredible that it took decades or even centuries
before they were generally accepted. But accepted they were,
eventually, and the reason why they were accepted was not because they
had become any less incredible. In science, irrefutable evidence
triumphs over incredibility, and this is exactly what gives science
the power to discover unexpected or incredible truth. Ultimately,
therefore, the most convincing argument for epistemological dualism is
the fact that its monistic alternatives have all been refuted on sound
logical grounds, which leaves epistemological dualism as the only
viable alternative. Until this most basic fact of conscious experience
finally triumphs over our naive realist inclinations, philosophy and
psychology are doomed to an endless and futile recapitulation of the
ancient epistemological debate.

The Dimensions of Conscious Experience

Once we accept the fact that the world of visual consciousness is
a pattern of energy in our physical brain, we can begin to examine
that conscious experience to see what it might tell us about its
neurophysiological correlate. The practice of phenomenology for
investigating mental function was more popular before modern
neuroscience introduced a new concept of neurocomputation that seems
inconsistent with phenomenological observation. (Vernon 1937, 1952,
Gregory 1981, Ramachandran & Blakeslee 1998, Smythies 1953, 1988,
1994, 1999, Koffka 1935, Köhler 1924) The most basic and salient
fact of visual consciousness is that it appears as a
three-dimensional spatial structure (Vernon 1952, pp. 81-92). More
specifically, the phenomenal world is composed of solid volumes,
bounded by colored surfaces, embedded in a spatial void. Every point
on every visible surface is perceived at an explicit spatial location
in three dimensions (Clark 1993), and all of the visible points on a
perceived object like a cube or a sphere, or this page, are perceived
simultaneously in the form of continuous surfaces in depth. The
perception of multiple transparent surfaces, as well as the
experience of empty space between the observer and a visible surface,
reveals that multiple depth values can be perceived at any spatial
location. The information content of perception can therefore be
characterized as a three-dimensional volumetric data structure in
which every point can encode either the experience of transparency,
or the experience of a perceived color at that location. Since
perceived color is expressed in the three dimensions of hue,
intensity, and saturation, the perceived world can be expressed as a
six-dimensional manifold (Clark 1993), with three spatial and three
color dimensions.
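
The volumetric data structure described above can be sketched in a few lines of code (the function names and grid size are illustrative assumptions, not part of the text): a three-dimensional grid in which every point encodes either transparency or a color in the three dimensions of hue, intensity, and saturation.

```python
# Minimal sketch of the volumetric representation: a 3-D grid whose
# points encode either transparency (None) or a perceived color
# expressed as (hue, intensity, saturation).

def make_volume(nx, ny, nz):
    # Every point starts as transparent empty space.
    return [[[None for _ in range(nz)] for _ in range(ny)] for _ in range(nx)]

def set_color(volume, x, y, z, hue, intensity, saturation):
    # Three spatial coordinates select a point; three color coordinates
    # encode the experience at that point. Together they form the
    # six-dimensional manifold: 3 spatial + 3 color dimensions.
    volume[x][y][z] = (hue, intensity, saturation)

volume = make_volume(4, 4, 4)
set_color(volume, 1, 2, 3, hue=0.6, intensity=0.8, saturation=0.5)

print(volume[1][2][3])  # (0.6, 0.8, 0.5) -- a colored surface point
print(volume[0][0][0])  # None -- experienced as empty, transparent space
```

A point left as `None` corresponds to the experience of empty space between the observer and a surface; replacing the single value at each point with a short list of values would likewise accommodate multiple transparent surfaces in depth.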

The Cartesian Theatre and the Homunculus Problem

This "picture-in-the-head" or "Cartesian theatre" concept of
visual representation has been criticized on the grounds that there
would have to be a miniature observer to view this miniature internal
scene, resulting in an infinite regress of observers within
observers. However this argument is invalid, for there is no need for
an internal observer of the scene, since the internal representation
is simply a data structure like any other data in a computer, except
that this data is expressed in spatial form. If the existence of a
spatial data structure required a homunculus to view it, the same
objection would also apply to symbolic or verbal information in the
brain, i.e. epistemic as opposed to sensory perception,
which would also require a homunculus to read or interpret that
data. In fact any information encoded in the brain needs only to be
available to other internal processes rather than to a miniature copy
of the whole brain. To deny the spatial nature of the perceptual
representation is to deny the spatial nature so clearly evident in
the world we perceive around us. To paraphrase Descartes, it is not
only the existence of myself that is verified by the fact that I
think, but when I experience the vivid spatial presence of objects in
the phenomenal world, those objects are certain to exist, at least in
the form of a subjective experience, with properties as I experience
them to have, i.e. location, spatial extension, color, and shape. I
think them, therefore they exist. All that remains uncertain is
whether those percepts exist also as objective external objects as
well as internal perceptual ones, and whether their perceived
properties correspond to objective properties. But their existence in
my internal perceptual world is beyond question if I experience them,
even if only as a hallucination.

The Neuroreductionist Objection

A number of theorists have proposed (Dennett 1991, 1992, O'Regan
1992, Pessoa et al. 1998) that consciousness is an illusion, and that
in fact the conscious experience is considerably more impoverished
than it appears subjectively. For example the loss of resolution in
peripheral vision is not immediately apparent to the naïve
observer. However the objective of phenomenology is not to quantify
the casual experience of the naïve observer, but the careful
observation of the critical observer. For the loss of acuity in
peripheral vision is plainly evident under phenomenological
observation, and can be easily verified psychophysically, and
therefore this should also be reflected in the perceptual
model. Dennett argues that visual information need not be encoded
explicitly in the brain, but merely implicitly in some kind of
compressed representation. For example the percept of a surface with
uniform color could be abbreviated to a kind of edge image, with a
single value to encode the color of the whole surface, as is the
practice in image compression algorithms. This notion appears to be
supported by neurophysiological studies of the retina which show that
ganglion cells respond only to spatial or temporal discontinuities of
the brightness profile, with no response within regions of uniform
color or brightness. Dennett argues that the experience of a
filled-in field of color in uniform fields, and in the blind spot,
does not suggest an explicit filling-in mechanism in the brain, but
that the color experience is encoded by "ignoring an absence"
(Dennett 1991, 1992). However an absence can only be ignored from a
representation that already contains something in the place of the
ignored item, otherwise one would experience nothing at all, rather
than a spatially continuous field of color. In fact the experience of
the retinal blind spot, or a uniformly colored surface, produces a
distinct colored experience at every point throughout the colored
region, up to a particular spatial resolution, as a spatial continuum, and
the informational content of that experience is greater than that in
a compressed representation. If it is true that the retinal image
encodes only brightness transitions at visual boundaries, then some
other mechanism higher up in the processing stream must perform an
explicit filling-in to account for the subjective experience of the
filled-in surface. In fact the many illusory filling-in phenomena
such as the Kanizsa illusion implicate exactly this kind of mechanism
in perception. If visual information were indeed expressed in a
compressed neurophysiological code, then our subjective experience of
that information would have to also be correspondingly compressed or
abstracted, as is the case for example with an experience of a
remembered or imagined scene. The fact that our phenomenal experience
is of a filled-in volumetric world is direct and concrete evidence
for a volumetric filling-in mechanism in the brain.

An Analogical Paradigm of Representation

Once we recognize the world of experience for what it really is,
it becomes clearly evident that the representational strategy used by
the brain is an analogical one. In other words, objects and surfaces
are represented in the brain not by an abstract symbolic code, as
suggested in the propositional paradigm, nor are they encoded by the
activation of individual cells or groups of cells representing
particular features detected in the scene, as suggested in the neural
network or feature detection paradigm. Instead, objects are
represented in the brain by constructing full spatial effigies of
them that appear to us for all the world like the objects themselves,
or at least so it seems to us only because we have never seen those
objects in their raw form, but only through our perceptual
representations of them. Indeed the only reason why this very obvious
fact of perception has been so often overlooked is because the
illusion is so compelling that we tend to mistake the world of
perception for the real world of which it is merely a copy. This is a
classic case of not seeing the forest for the trees, for the evidence
for the nature of perceptual representation in the brain has been
right before us all along, cleverly disguised as objects and surfaces
in a virtual world that we take to be reality. So for example when I
stand before a table, the light reflected from that table into my eye
produces an image on my retina, but my conscious experience of that
table is not of a flat two-dimensional image, but rather my brain
fabricates a three-dimensional replica of that table carefully
tailored to exactly match the retinal image, and presents that
replica in an internal perceptual space that includes a model of my
environment around me, and a copy of my own body at the center of
that environment. The model table is located in the same relation to
the model of my body as the real table is to my real body in external
space. The perception or consciousness of the table therefore is
identically equal to the appearance of the effigy of the table in my
perceptual representation, and the experience of that internal effigy
is the closest I can ever come to having the experience of the
physical table itself.

There is ample evidence suggestive of an analogical representation
at least in the function of mental imagery. Kosslyn (1995) lists four
computational functions of mental imagery, which are the ability to
1: generate images, 2: interpret the shapes in the images, 3: retain
the image over time, and 4: to transform the image in some
way. Pinker et al. (1988) present a computational model of the
mental imagery medium which, although restricted to two dimensions,
nevertheless employs an explicit spatial representation. Pinker
(1980) shows how mental imagery phenomena extend also into the third
dimension. Finke et al. (1989) show that, given suitable
conditions, people can assign novel interpretations to ambiguous
images which have been constructed out of parts or mentally
transformed. For example, when asked to imagine the letter "D" on
its side, affixed to the top of the letter "J", subjects
spontaneously report "seeing" an umbrella. Kosslyn et al.
(1995) show that mental imagery plays a role not only in memory and
spatial reasoning tasks, but in fact imagery also plays a role in
abstract reasoning, skill learning, and language
comprehension. Mellet et al. (2000) present regional
cerebral blood flow (rCBF) data which provide strong evidence that
imagery based on verbal descriptions can recruit cortical regions
known to be engaged in high-order visual processing.

The Function of Conscious Experience

The concept of perceptual representation developed above relates
directly to the issue of the function of conscious experience, and
whether it is an epiphenomenon that has no direct functional
value. For once we accept the inescapable fact that the brain is
capable of generating three-dimensional volumetric spatial structures
in perception, the functional purpose of that spatial representation
becomes clear. It is to provide an internal replica of the external
world in order to guide our behavior through the world, for otherwise
we would have no knowledge of the structure of the world, or of our
location within it. Exactly how behavior is guided by conscious
experience can also be determined by phenomenological
observation. What that observation reveals is an analogical paradigm
of behavioral computation that is quite unlike the analytical
symbolic paradigm of computation embodied in the digital computer. In
order to illustrate the functional principle behind this unique
computational strategy I will present a spatial analogy that operates
on the same essential principle as human behavioral computation,
although in a much simplified form. I will then present the
phenomenological evidence that implicates that same principle of
spatial computation in human behavior.

The Plotting Room Analogy

During the Battle of Britain in the second world war, Britain's
Fighter Command used a plotting room as a central clearinghouse for
assembling information on both incoming German bombers, and defending
British fighters, gathered from a variety of diverse sources. A chain
of radar stations set up along the coast would detect the range,
bearing, and altitude of invading bomber formations, and this
information was continually communicated to the Fighter Command
plotting room. British fighter squadrons sent up to attack the
bombers reported their own position and altitude by radio, and
squadrons on the ground telephoned in their strength and state of
readiness. Additional information was provided by the Observer Corps,
from positions throughout the British Isles. All of this information
was transmitted to the central plotting room, where it was collated,
verified, and cross-checked, before being presented to controllers to
help them organize the defense. The information was presented in the
plotting room in graphical form, on a large table map viewed by
controllers from a balcony above. Symbolic tokens representing the
position, strength, and altitudes of friendly and hostile formations
were moved about on the map in order to maintain an up-to-date
graphical depiction of the battle as it unfolded. I propose that the
functional principle behind this concept of plotting information is
directly analogous to the strategy used for perceptual representation
in the brain.

From Perception to Behavior

Now the plotting room analogy diverges from perception in that the
plotting room does indeed have a "homunculus" or homunculi, in the
form of the plotting room controllers, who issue orders to their
fighter squadrons based on their observations of the plotting room
map. However the idea of a central clearinghouse for assembling
sensory information from a diverse array of sensory sources in a
unified representation is just as useful for an automated system as
it is for one designed for human operators. The automated system need
only be equipped with the appropriate spatial algorithms to make use
of that spatial data. In order to clarify the meaning of a spatial
algorithm that operates on spatial data, I will describe a
hypothetical mechanism designed to replace the human controllers in
the Fighter Command plotting room. The general principle of operation
of that mechanism, I propose, reflects the principle behind human
perception and how it relates to behavior. Let us begin by designing
a mechanism to command a squadron of friendly fighters to close with
an enemy formation depicted on the plotting room map. This objective
could be expressed in the plotting room model as a force of
attraction, like a magnetic or electrostatic force, that pulls the
fighter squadron token in the direction of the approaching bomber
formation token on the plotting room map. However the token cannot
move directly in response to that force. Instead, that attractive
force is automatically translated into instructions for the squadron
to fly in the direction indicated by that attractive force, and the
force is only relieved or satisfied as the radio, radar, and Observer
Corps reports confirm the actual movement of the squadron in the
desired direction. That movement is then reflected in the movement
of its token on the plotting room map. The force of attraction
between the squadron token and that of the bomber formation in the
plotting room model represents an analogical computational strategy
or algorithm, designed to convert a perceptual representation, the
spatial model, into a behavioral response, represented by the command
for the squadron to fly in the direction indicated by the force of
attraction. The feedback loop between the perceived environment and
the behavioral response that it provokes, is mediated through actual
behavior in the external world, as reflected in sensory or
"somatosensory" confirmation of that behavior back in the perceptual
model.
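
The translation from model force to flight command can be sketched in
a few lines of code. This is a toy illustration with hypothetical
names and a simple compass-bearing convention, not a reconstruction of
the historical system:

```python
import math

def heading_command(squadron_pos, target_pos):
    """Translate the attractive force acting on a squadron token into a
    fly-to bearing. The token itself never moves in response to the
    force; it is only repositioned when fresh reports arrive."""
    dx = target_pos[0] - squadron_pos[0]  # east-west displacement
    dy = target_pos[1] - squadron_pos[1]  # north-south displacement
    # Compass bearing in degrees, measured clockwise from north.
    return math.degrees(math.atan2(dx, dy)) % 360.0

def update_token(reported_pos):
    """Tokens are updated only from radar, radio, and Observer Corps
    reports, closing the loop through the real world, not the model."""
    return reported_pos

# A squadron at the origin attracted to a bomber formation due east:
bearing = heading_command((0.0, 0.0), (10.0, 0.0))  # bearing 90.0
```

The essential point of the sketch is that the force never moves the
token directly; it is discharged only when the world itself, as
reported back into the model, catches up with it.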

To demonstrate the power of this kind of computational strategy,
let us delve a little deeper into the plotting room analogy, and
refine the mechanism to show how it can be designed to be somewhat
more intelligent. When intercepting a moving target such as a bomber
formation in flight, it is best to approach it not directly, but with
a certain amount of "lead", just as a marksman leads a moving target
by aiming for a point slightly ahead of it. Therefore the bomber
formation is best intercepted by approaching the point towards which
it appears to be headed. This too can be calculated with a spatial
algorithm by using the recent history of the motion of the bomber
formation to produce a "leading token" placed in front of the moving
bomber token in the direction that it appears to be moving, advanced
by a distance proportional to the estimated speed of the bomber
formation. The leading token therefore represents the presumed future
position of the moving formation a certain interval of time into the
future. The fighter squadron token should therefore be designed to be
attracted to this leading token, rather than to the token
representing the present position of the bomber formation itself. But
in the real situation the invading bombers would often change course
in order to throw off the defense. It was important therefore to try
to anticipate likely target areas, and to position the defending
fighters between the bombers and their likely objectives. This
behavior could be achieved by marking likely target areas, such as
industrial cities, airports, or factories, etc., with a weaker
attractive force to draw friendly fighter squadron tokens towards
them. This force, in conjunction with the stronger attraction to the
hostile bombers, will induce the fighters to tend to position
themselves between the approaching bombers and their possible
targets, or to deviate their course towards those potential targets
on their way to the attacking bombers, and then to approach the
bombers from that direction. Additional forces or influences can be
added to produce even more complex behavior. For example as a fighter
squadron begins to exhaust its fuel and/or ammunition, its behavior
pattern should be inverted, to produce a force of repulsion from
enemy formations, and attraction back towards its home base, to
induce it to refuel and re-arm at the nearest opportunity. With this
kind of mechanism in place, fighter squadrons would be automatically
commanded to approach the enemy, attack, and return to base, all
without human intervention.
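
The leading-token computation described above amounts to a simple
linear extrapolation of the target's recent track. A minimal sketch,
in which the function name and the units of the lead time are
hypothetical:

```python
def leading_token(track, lead_time):
    """Place a 'leading token' ahead of a moving formation by
    extrapolating its two most recent reported positions.
    lead_time is expressed in report intervals."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0        # estimated velocity per interval
    return (x1 + vx * lead_time, y1 + vy * lead_time)

# A bomber formation heading steadily east, led by three intervals:
lead = leading_token([(0, 0), (1, 0), (2, 0)], lead_time=3)  # (5, 0)
```

The fighter squadron token is then attracted to this leading token
rather than to the formation's present position.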

The mechanism described above is of course rather primitive, and
would need a good deal of refinement to be at all practical, to say
nothing of the difficulties involved in building and maintaining a
dynamic analog model equipped with spatial field-like forces. But the
computational principle demonstrated by this fanciful analogy is very
powerful. For it represents a parallel analogical spatial computation
that takes place in a spatial medium, a concept that is quite unlike
the paradigm of digital computation, whose operational principles are
discrete, symbolic, and sequential. There are several significant
advantages to this style of computation. For unlike the digital
decision sequence with its complex chains of Boolean logic, the
analogical computation can be easily modified by inserting additional
constraints into the model. For example if the fighters were required
to avoid areas of intense friendly anti-aircraft activity, this
additional constraint can be added to the system by simply marking
those regions with a repulsive force that will tend to push the
fighter squadron tokens away from those regions without interfering
with their other spatial constraints. Since the proposed mechanism is
parallel and analog in nature, any number of additional spatial
constraints can be imposed on the system in similar manner, and each
fighter squadron token automatically responds to the sum total of all
of the analog forces acting on it in parallel. In an equivalent
Boolean system, every additional constraint added after the fact
would require re-examination of every Boolean decision in the
system, each of which would have to be modified to accommodate every
combination of possible contingencies. In other words adding or
removing constraints after the fact in a Boolean logic system is an
error-prone and time-consuming business requiring the attention of an
intelligent programmer, whereas in the analogical representation
spatial constraints are relatively easy to manipulate independently,
while the final behavior automatically takes account of all of those
spatial influences simultaneously.
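
The claim that constraints can be manipulated independently can be
made concrete with a sketch in which every influence is simply an
entry in a list, and each token responds to the vector sum of all of
them. The inverse-square falloff here is an arbitrary modeling
assumption:

```python
import math

def net_force(pos, attractors, repellers):
    """Sum all analog influences acting on a token in parallel.
    Each influence is ((x, y), strength); adding a new constraint
    is just appending to a list, with no Boolean logic to revisit."""
    fx = fy = 0.0
    for (ax, ay), strength in attractors:
        dx, dy = ax - pos[0], ay - pos[1]
        d = math.hypot(dx, dy) or 1e-9
        fx += strength * dx / d**3   # inverse-square pull toward (ax, ay)
        fy += strength * dy / d**3
    for (rx, ry), strength in repellers:
        dx, dy = pos[0] - rx, pos[1] - ry
        d = math.hypot(dx, dy) or 1e-9
        fx += strength * dx / d**3   # inverse-square push from (rx, ry)
        fy += strength * dy / d**3
    return fx, fy

# A squadron pulled east by a bomber token, pushed south by
# friendly anti-aircraft activity marked to the north:
fx, fy = net_force((0.0, 0.0), [((10.0, 0.0), 1.0)], [((0.0, 5.0), 1.0)])
```

Adding the anti-aircraft constraint after the fact changed nothing in
the algorithm, only the contents of one list.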

Analogical vs. Sequential Logic

Despite the advantages inherent in the analogical paradigm, there
are cases in which a Boolean or sequential component is required in a
control system. Suppose, for example, that a squadron is required to
proceed to a point B by way of an intermediate point A. This kind of
sequential logic can be incorporated in the analogical representation
by installing an attractive force to point A that remains active only
until the squadron token arrives there, at which point that force is
turned off, and an attractive force is applied to point B instead. Or
perhaps the attractive force can fade out gradually at point A in
analog fashion as the squadron token approaches, while a new force
fades in at point B, allowing the squadron to cut the corner with a
smooth curving trajectory instead of a sharp turn, or to adapt the
curve of their turn to account for other spatial influences acting on
it at that time. The analogical paradigm therefore can be designed to
subsume digital or sequential functions, while maintaining the basic
analogical nature of the elements of that logic, thereby preserving
the advantages of a parallel decision strategy within sequentially
ordered stages of processing.
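
The corner-cutting behavior can be sketched as a pair of weights that
shift smoothly from waypoint A to waypoint B as the token closes on
A. The linear fade and the fade_radius parameter are illustrative
assumptions:

```python
import math

def waypoint_weights(pos, point_a, fade_radius):
    """Weight on the attraction to waypoint A fades out as the token
    approaches it, while the weight on the attraction to waypoint B
    fades in, yielding a smooth curve rather than a sharp turn."""
    d_a = math.hypot(point_a[0] - pos[0], point_a[1] - pos[1])
    w_a = min(d_a / fade_radius, 1.0)  # 1.0 far from A, 0.0 on arrival
    return w_a, 1.0 - w_a              # weights on the A and B forces
```

Because the hand-off is itself analog, any other spatial influences
acting on the token continue to shape the turn throughout the
transition.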

Internal vs. External Representation

The analogical spatial strategy presented above is reminiscent of
the kind of computation suggested by Braitenberg (1984) in his book
Vehicles. Braitenberg describes very simple vehicles that exhibit a
kind of animal-like behavior by way of very simple analog control
systems. For example Braitenberg describes a light-powered vehicle
equipped with two photocells connected to two electric motors that
power two driving wheels. In the presence of light, the current from
the photocells drives the vehicle forward, but if the light
distribution is non-uniform and one photocell receives more light
than the other, the vehicle will turn either towards or away from the
light, depending on how the photocells are wired to the wheels. One
configuration produces a vehicle that exhibits light-seeking
behavior, like a moth around a candle flame, whereas with the wires
reversed the same vehicle will exhibit light-avoiding behavior, like
a cockroach scurrying for cover when the lights come on. The behavior
of these simple vehicles is governed by the spatial field defined by
the intensity profile of the ambient light, and therefore, like the
analogical paradigm, this type of vehicle also performs a spatial
computation in a spatial medium. However in the case of Braitenberg's
vehicles, the spatial medium is the external world itself, rather
than an internal replica of it. Rodney Brooks (1991) elevates this
concept to a general principle of robotics, whose objective is
"intelligence without representation". Brooks argues that there is no
need for a robotic vehicle to possess an internal replica of the
external world, because the world can serve as a representation of
itself. O'Regan (1992) extends this argument to human perception, and
insists that the brain does not maintain an internal model of the
external world, because the world itself can be accessed as if it
were an internal memory that merely happens to be external to the
organism: information can be extracted directly from the world
whenever needed, just like a data access of an internal memory
store (see also Pylyshyn 1998).
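
Braitenberg's wiring scheme is simple enough to state exactly: one
control step reads the two photocells and drives the two motors, and
the only "design decision" is whether the wires cross. A sketch of the
scheme as described above, with variable names of my own choosing:

```python
def braitenberg_step(left_light, right_light, crossed=True):
    """One control step of a Braitenberg vehicle: each photocell's
    signal drives one wheel motor. With crossed wiring the brighter
    side drives the far wheel, turning the vehicle toward the light
    (moth-like); uncrossed wiring turns it away (cockroach-like)."""
    if crossed:
        return right_light, left_light   # (left_motor, right_motor)
    return left_light, right_light

# Light off to the left: crossed wiring spins the right wheel faster,
# turning the vehicle toward the light.
motors = braitenberg_step(1.0, 0.2, crossed=True)  # (0.2, 1.0)
```

The point of the sketch is how little is represented: two input
values, two output values, and no model of space at all.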

However there is a fundamental flaw with this concept of
perceptual processing, at least as a description of human
perception. For unless we invoke mystical processes beyond the bounds
of science, then surely our conscious experience of the world must be
limited to that which is explicitly represented in the physical
brain. In the case of Braitenberg's vehicles, that consciousness
would correspond to the experience of only two values, i.e. the
brightness detected by the two photocells, and the conscious
decision-making processes of the vehicle (if it can be called such)
would be restricted to responding to those two values with two
corresponding motor signals. These four values therefore represent
the maximum possible content of the vehicle's "conscious
experience". The vehicle has no idea of its location or orientation
in space, and its complex spatial behavior is more a property of the
world around it than of anything going on in its "brain". In the case
of human perception, our consciousness would be restricted to a
sequence of two-dimensional images, as recorded by the retina, or
pairs of images in the binocular case. However our experience is very
different from the retinal representation. For when we stand in a
space, like a room, we experience the volume of the room around us as
a simultaneously present whole, every volumetric point of which
exists as a separate parallel entity in our conscious
experience. Braitenberg's vehicles can be programmed to go to the
center of a room by placing a light at that location, but the vehicle
cannot conceive of the void of the room around it or the concept of
its center, for those are spatial concepts that require a spatial
understanding. The world of visual experience therefore clearly
demonstrates that we possess an internal map of external space like
the Fighter Command plotting room, and the world we see around us is
exactly that internal representation.

Symbol Grounding by Spatial Analogy

The analogical spatial paradigm offers a solution to some of the
most enduring and troublesome problems of perception. For although
the construction and maintenance of a spatial model of external
reality is a formidable computational challenge, the rewards that it
offers make the effort very much worth the trouble. The greatest
difficulty with a more abstracted or symbolic approach to perception
has always been the question of how to make use of that abstracted
knowledge. This issue was known as the symbol grounding
problem (Harnad 1990) in the propositional paradigm of
representation promoted by the Artificial Intelligence (AI)
movement. The problem of vision, as conceptualized in AI, involves a
transformation of the two-dimensional visual input into a
propositional or symbolic representation. For example an image of a
street scene would be decomposed into a list of items recognized in
that scene, such as "street", "car", "person", etc., as well as the
relations between those items. Each of these symbolic tags or labels
is linked to the region of the input image to which it pertains. The
two-dimensional image is thereby carved up into a mosaic of distinct
regions, by a process of segmentation (Ballard & Brown 1982,
p. 6-12), each region being linked to the symbolic label by which it
is identified. Setting aside the practical issues of how such a
system can be made to work as intended (which itself turns out to be
a formidable problem), this manner of representing world information
is difficult to translate into practical interaction with the
world. For the algorithm does not "see" the street in the input image
as we do, but rather it sees only a two-dimensional mosaic of
irregular patches connected to symbolic labels. Consider the problem
faced by a robotic vehicle designed to find a mail box on the street
and post a letter in it. Even if an image region is identified as a
mail box, it is hard to imagine how that information could be used by
the robot to navigate down the street to the mail box avoiding
obstacles along the way. What is prominently absent from this system
is a three-dimensional consciousness of the street as a spatial
structure, the very information that is so essential for practical
navigation through the world.

An analogical representation of the street on the other hand would
involve a three-dimensional spatial model, like a painted cardboard
replica of the street complete with a model of the robot's own body
at the center of the scene. It is the presence of such a
three-dimensional replica of the world in an internal model that, I
propose, constitutes the act of "seeing" the street. Setting aside
the issue of how such a model can be constructed and updated from the
two-dimensional sensory image (which is also a formidable problem),
making practical use of such a representation is much easier than for
a symbolic or abstracted representation. For once the mailbox effigy
in the model is recognized as such, it can be marked with an
attractive force, and that force in turn draws the effigy of the
robot's body towards the effigy of the mailbox in the spatial
model. Obstacles along the way are marked with negative fields of
influence, and the spatial algorithm to get to the mailbox is to
follow the fields of force, like a charged particle responding to a
pattern of electric fields.
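
This field-following strategy is essentially what the robotics
literature calls potential-field navigation. A minimal sketch of the
idea follows; the step size, falloff law, and repulsion gain are
arbitrary assumptions, and real potential-field planners must also
contend with local minima:

```python
import math

def potential_field_path(start, goal, obstacles,
                         steps=200, step_size=0.1, rep_gain=0.1):
    """Move a model body through a model world by following the summed
    field: attraction toward the goal effigy, repulsion from each
    obstacle effigy, like a charged particle in an electric field."""
    pos = list(start)
    path = [tuple(pos)]
    for _ in range(steps):
        fx = goal[0] - pos[0]            # attraction toward the goal
        fy = goal[1] - pos[1]
        for ox, oy in obstacles:         # repulsion from each obstacle
            dx, dy = pos[0] - ox, pos[1] - oy
            d2 = dx * dx + dy * dy + 1e-6
            fx += rep_gain * dx / d2
            fy += rep_gain * dy / d2
        norm = math.hypot(fx, fy) or 1e-9
        pos[0] += step_size * fx / norm  # take a fixed-size step
        pos[1] += step_size * fy / norm  # along the net field direction
        path.append(tuple(pos))
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < step_size:
            break
    return path
```

Marking a newly recognized obstacle requires no change to the
algorithm, only a new entry in the list of repulsive sources.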

An essential component of this analogical concept of perceptual
processing therefore is a spatial replica of the percipient's own
body in the spatial replica of his environment. This perceptual
"homunculus" is not to be confused with the internal observer of the
"homunculus objection", for this body replica is not the observer of
the internal scene, but is merely another object in the perceived
world, for that world would be incomplete without a representation of
the percipient's own body as an object in the scene. The analogy of
the fighter command plotting room reveals the function of this body
percept, i.e. that it is required to perform the spatial computations
that elicit the appropriate behavioral response to that
environment. The fact that human perception employs such an internal
"body image" is plainly evident by inspection of our own apparent
body at the center of our phenomenal world. But this fact is only
evident to the indirect realist, for the naive realist mis-identifies
his perceived body with his actual physical body in physical space, a
paradigmatic error which has been shown to be untenable many times
over. (Russell 1927, Schilder 1942, 1950, Smythies 1953, Ramachandran
& Blakeslee 1998, Smythies & Ramachandran 1998)

The analogical paradigm can also be employed to compute the more
detailed control signals to the robot's wheels. The forward force on
the model of the robot's body applies a torque to the model
wheels, but the model wheels cannot respond to that force
directly. Instead, that torque in the model is interpreted as a motor
command to the wheels of the larger robot to turn, and as the larger
wheels begin to turn in response to that command, that turning is
duplicated in the turning of the model wheels, producing behavior as
if responding directly to the original force in the model world. Side
forces to steer the robot around obstacles can also be computed in
similar fashion. A side force on the model robot should be
interpreted as a steering torque, like the torque on the pivot of a
caster wheel. That pivoting torque in the model is interpreted as a
steering command to pivot the larger wheels, and the steering of the
larger wheels is then reflected in the steering of the model wheels
also. The forces impelling the model robot through the model world
are thereby transformed into motor commands to navigate the real
robot through the real world, and the physical response of the robot
to those commands is in turn communicated back into the model world
to keep it aligned with events in the external world.

The idea of motor planning as a spatial computation has been
proposed in field theories of motor control, (Gibson & Crooks
1938, Koffka 1935, Lewin 1969) in which the intention to walk towards
a particular objective in space is expressed as a field-like force of
attraction, or valence, between a model of the body, and a model of
the target, expressed in a spatial model of the local
environment. The target is marked with a positive valence, while
obstacles along the way are marked with negative valence. When we see
an attractive stimulus, for example a tempting delicacy in a shop
window at a time when we happen to be hungry, our subjective
impression of being physically drawn towards that stimulus is not
only metaphorically true, but I propose that this subjective
impression is a veridical manifestation of the mental mechanism that
drives our motor response. For the complex combination of joint
motions responsible for deviating our path towards the shop window
are computed in spatial fashion in a spatial model of the world,
exactly as we experience it to occur in subjective
consciousness. Indeed the spatial configuration of the positive and
negative valence fields evoked by a particular spatial environment
can be inferred from observation of their effects on behavior, in the
same way that the pattern of an electric field can be mapped out by
its effects on moving charged particles. For example the negative
valence field due to an obstacle such as a sawhorse placed on a busy
sidewalk can be mapped by observing its effect on the paths of people
walking by. The moving stream of humanity divides to pass around the
obstacle like water flowing around a rock in a stream in response to
the negative valence field projected by that obstacle. Although the
influence of this obstacle is observed in external space, the spatial
field that produces that behavioral response actually occurs in the
spatial models in the brains of each of the passers-by individually.

Another example of a spatial computational strategy can be
formulated for the problem of targeting a multi-jointed limb,
i.e. specifying the multiple angles required of the individual joints
of the limb in order to direct its end-effector to a target point in
three-dimensional space. This is a complex trigonometrical problem
that is underconstrained. However a simple solution to this complex
problem can be found by building a scale model of the multi-jointed
limb in a scale model of the environment in which the limb is to
operate. (McIntyre & Bizzi 1993) The joint angles required to direct
the limb towards a target point can be computed by simply pulling the
end-effector of the model arm in the direction of the target point in
the modeled environment, and recording how the model arm reacts to
this pull. Sensors installed at each individual joint in the model
arm can be used to measure the individual joint angles, and those
angles in turn can be used as command signals to the corresponding
joints of the actual arm to be moved. The complex trigonometrical
problem of the multi-jointed limb is therefore solved by analogy, as
a spatial computation in a spatial medium.
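
This pull-the-model-arm procedure corresponds closely to what the
robotics literature calls Jacobian-transpose inverse kinematics. A
sketch of the idea for a planar limb; the gain and iteration budget
are arbitrary assumptions:

```python
import math

def forward_kinematics(angles, lengths):
    """Joint positions of a planar model limb, root at the origin;
    the last entry is the end-effector."""
    x = y = a = 0.0
    joints = [(0.0, 0.0)]
    for theta, length in zip(angles, lengths):
        a += theta
        x += length * math.cos(a)
        y += length * math.sin(a)
        joints.append((x, y))
    return joints

def pull_model_arm(angles, lengths, target, iters=500, gain=0.05):
    """Target a multi-jointed model limb by 'pulling' its end-effector
    toward the target and letting each joint comply with the torque
    that the pull induces at it; the recorded joint angles would then
    drive the corresponding joints of the actual limb."""
    angles = list(angles)
    for _ in range(iters):
        joints = forward_kinematics(angles, lengths)
        ex, ey = joints[-1]
        fx, fy = target[0] - ex, target[1] - ey   # the pull on the tip
        if math.hypot(fx, fy) < 1e-3:
            break
        for i in range(len(angles)):
            rx, ry = ex - joints[i][0], ey - joints[i][1]
            angles[i] += gain * (rx * fy - ry * fx)  # torque at joint i
    return angles
```

Because each joint simply complies with the torque the pull induces at
it, the same loop works unchanged for any number of joints; the
trigonometry is never solved explicitly.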

There is psychophysical evidence to suggest that this kind of
strategy is employed in biological motion. For when a person reaches
for an object in space, their body tends to bend in a graceful arc,
whose total deflection is evenly distributed amongst the various
joints to define a smooth curving posture, i.e. the motor strategy
serves to minimize a configural constraint expressed in
three-dimensional space, thus implicating a spatial computational
strategy. The dynamic properties of motor control are also most
simply expressed in an external spatial context. For the motion of a
person's hand while moving towards a target describes a smooth arc in
space and time, accelerating uniformly through the first half of the
path, and decelerating to a graceful stop through the second
half. (Bizzi et al. 1995, 2000, Dornay et al. 1993,
Hollerbach 1990) In other words the observed behavior is exactly as
if the person's body were indeed responding lawfully to a spatial
force of attraction between the hand and the target object in three-
dimensional space, which in turn suggests that a spatial
computational strategy is being used to achieve that result. Further
evidence comes from the subjective experience of motor planning, for
we are unaware of the individual joint motions when planning such a
move, but rather our experience is more like a force of attraction
that seems to pull our hand towards the target object, and the joints
in our arm seem to simply follow our hand as it responds to that
pull. This computational strategy generalizes to any configuration of
limbs with any number of joints, as well as to continuous limbs like
a snake's body or an elephant's trunk.

Conclusion

The study of mind and its place in nature has always been a
principal focus of philosophy. But from the outset the investigation
has been plagued by a fundamental confusion over the object of this
inquiry, i.e. which aspects of phenomenal experience are a
manifestation of mind as distinct from the world beyond the mind. The
debate began three centuries ago with a choice between the two
epistemological monist alternatives of realism and idealism; either
the world we perceive around us is the real world itself, or we
cannot see the world directly, and all we can ever experience is our own
mind. The problem is that the phenomenal world shows evidence of both
origins. It appears to be the world external to our mind because it
seems to have an objective external existence independent of our
mental states. And yet there is also clear evidence that the world of
experience is a product of mind, as seen for example in visual
illusions, and even more clearly in the case of dreams and
hallucinations. But if the mind is but the activity of the brain, how
can a product of our brain, which is in our head, escape the confines
of our head to appear in the world around us? Both of these
epistemological monist alternatives are inconsistent with the facts
of the causal chain of visual processing.

An extraordinary variety of intermediate theories have been
proposed over the centuries, in an attempt to place phenomenal
experience partially inside, and partially outside the head, but in
neither place explicitly. From Descartes' dualist mind as a
non-spatial entity with no defined location in space, to
Malebranche's perceived colors, which are in the mind but also
somehow in the external object, to the sense data of the critical
realists, which are experienced but which do not, or may not, exist, to
Davidson's notion of supervenience, of which the mind/brain
relation is the only example in the known universe. All of these
explanations propose to make an exception in the laws of nature just
to accommodate the special case of conscious experience. The only
alternative which does not entail suspension of the normal laws of
nature is epistemological dualism. This theory explains how the
phenomenal world can appear external to the body while at the same
time actually being in the head. It explains how different
individuals can each have their own unique perspective on a commonly
viewed object. And it offers the only plausible explanation for those
most troublesome phenomena of dreams and hallucinations, as well as
for the data of mental imagery and neglect syndrome, which no longer
require heroic efforts of denial to account for their manifest
properties. All of these phenomena follow naturally from the indirect
view of perception.

But the indirect realist solution comes at a cost. In return for
resolving the epistemological question, indirect realism opens a new
paradox, and that is a glaring disparity between two primary sources
of knowledge, phenomenology and neurophysiology. Phenomenology
presents the mind as a three-dimensional colored structure or
analogical representation, while neurophysiology presents the brain
as an assembly of billions of discrete quasi-independent local
processors interconnected in a massively parallel network. Where in
that mass of neural circuitry are the three-dimensional volumetric
real-time moving pictures that we know so well in conscious
experience? The brain just seems to be the wrong kind of device to
create that kind of representation. Is consciousness therefore an
illusion with no direct neurophysiological correlate? Or is there
something fundamentally wrong with our understanding of
neurophysiology?

The information that phenomenal experience gives us about the
external world is known to be somewhat uncertain, as we are easily
fooled by illusions, and occasionally by outright hallucinations. But
when the object of our phenomenological investigation is conscious
experience itself, our knowledge of that particular entity is very
certain. In fact our knowledge of our own conscious state is more
certain and reliable than any other knowledge we can possibly have,
even when our conscious experience is itself only a
hallucination. Neuroscience on the other hand is a science very much
in its infancy, and is rife with uncertainty. In fact the "dirty
little secret" of neuroscience, as Searle (1997, p. 198) calls it, is
that the central principles of representation and computation in the
brain remain to be discovered. Very little is known with any real
certainty about how perceptual or cognitive information is encoded in
the brain, or what kind of computation the brain actually performs in
perception. And there are several prominent aspects of brain activity
whose functional significance remains almost entirely obscure, such
as the synchronous oscillations observed between neurons in remote
cortical areas, and the global oscillations of the brain as a whole
as seen in electroencephalogram (EEG) recordings. The
phenomenological inspection of conscious experience therefore offers
more reliable and certain knowledge of the essential principles of
mental representation and function than anything that modern
neuroscience has yet offered, because it gives us direct access to
the massive quantities of information encoded in the brain, presented
in a form that is immediately meaningful to us. If our observations
of the nature of phenomenal experience are in conflict with
contemporary concepts of neurocomputation, it is our
neurophysiological theories which are in urgent need of revision, in
order to bring them into line with the observed phenomenology. For a
neuroscience which explains everything about the brain except for how
it generates the mind is a neuroscience which essentially explains
nothing, because it is the mind that makes the brain interesting in
the first place.

So if we identify the world of experience as an internal spatial
model, what does that tell us about the function of conscious
experience? In the first place it tells us that one of the most
significant functions of conscious experience is to serve as a
structural model of the external world in an internal representation.
This function is completely transparent, or invisible, as long as
consciousness is viewed from the naïve realist perspective, which
is why this most obvious fact of perception has gone unnoticed for so
long. Now it might be argued that this is not a function of
conscious experience itself, but only of perception. For example one
could still imagine a hypothetical zombie that behaves in every way
like a conscious human being, but supposedly lacks all conscious
experience. However if perception is indeed indirect, and if behavior
is governed by analogical forces, this means that the zombie would
also have to be equipped with a volumetric spatial model of external
reality and an analogical computational strategy in order to
duplicate human behavior. And the zombie must also be able to report
the colors of the surfaces in that internal model, all in the absence
of conscious experience. This description of unconscious experience
comes so close to a description of consciousness itself as to leave
very little real distinction, because the structural and
representational aspects of consciousness are every bit as much an
essential part of visual consciousness as is the experiential, or
"what it is like" aspect. In any case, whether or not hypothetical
zombies can have an internal spatial model without a conscious
experience of it, we know for a fact that we ourselves do have
an internal spatial model, and that in our case we are also conscious
of it, which makes the whole question of zombies somewhat moot with
respect to human consciousness.

Information theory offers an interesting new angle on the
problem. For information cannot exist without some physical medium,
or carrier, because the information is encoded as modulations of that
carrier. In the brain the carrier is some kind of electrochemical
state, and the information encoded in the brain is presumably
expressed as modulations of that electrochemical state across space
and time. A similar information theoretic organization is observed on
the subjective side of the mind / brain barrier. Every point in the
three-dimensional matrix of phenomenal space can express every color
in the gamut of phenomenal color experience, including the experience
of transparency, or of empty space. Conscious experience is expressed
in perception as patterned modulations of those color qualia across
space and time. Information theory therefore suggests that the
qualia, such as the primal experience of color and space, are
themselves the carrier, or the mechanism by which experience is
represented or expressed in the brain, whereas the spatial and
temporal modulations of those basic qualia across the volume of
phenomenal space represent the information content of the
representation, i.e. the perceptual scene that is being currently
portrayed. For the most part perception is indirect, we view the
world through the medium of conscious experience. But there is one,
and only one entity that we do see directly, and that is the
representational mechanism itself, the inside of our own brain. The
volume of space we perceive around us is a data structure in our
physical brain, and the primal color qualia with which that world is
painted are different states of the physical mechanism of our own
physical brain. That does not mean that those parts of the brain
would actually appear colored to a micro-electrode inserted into that
part of the brain, nor would they appear colored under microscopic
examination. But that does not make them any the less colored, or any
the less an intrinsic property of the physical brain.
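The carrier-and-modulation picture above can be caricatured as a volumetric data structure in which every cell of a three-dimensional matrix can take on any color state, including transparency for experienced empty space. The following toy sketch is purely illustrative: the grid dimensions, the RGBA encoding, and the painted object are assumptions for demonstration, not a claim about how the brain actually encodes anything.

```python
# Toy caricature of the volumetric "carrier + modulation" picture:
# a 3D matrix whose every cell can express any color state,
# including transparent empty space. The grid size and RGBA
# encoding are illustrative assumptions only.

EMPTY = (0, 0, 0, 0)  # fully transparent: experienced empty space

def make_volume(nx, ny, nz):
    """The carrier: a 3D grid of cells, initially all empty space."""
    return [[[EMPTY for _ in range(nz)] for _ in range(ny)]
            for _ in range(nx)]

def paint_sphere(vol, center, radius, rgba):
    """The modulation: a perceived object expressed as a pattern of
    color states across the volume."""
    cx, cy, cz = center
    for x in range(len(vol)):
        for y in range(len(vol[0])):
            for z in range(len(vol[0][0])):
                if (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= radius**2:
                    vol[x][y][z] = rgba

vol = make_volume(16, 16, 16)
paint_sphere(vol, (8, 8, 8), 4, (255, 0, 0, 255))  # an opaque red object
filled = sum(1 for plane in vol for row in plane for c in row if c != EMPTY)
```

In this caricature the grid of cells corresponds to the carrier, and the particular pattern of color values at a given moment corresponds to the information content, i.e. the scene currently portrayed.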

But how does the brain make use of this structural information?
Who is the observer of this internal scene? Well for one thing it is
not the "homunculus" if by that is meant a miniature copy of the
entire brain. The data of consciousness need only be available to
other internal processes and mechanisms designed to read and
interpret that data, and to generate an appropriate behavioral
response. And once we recognize conscious experience for what it
really is, we can employ phenomenological observation to determine
not only the structure of conscious experience, but also the
principles of its function, i.e. the principle by which fears, urges,
pains, and desires, often stimulated by recognized patterns present
in the conscious representation, are expressed as forces in our
perceptual space that seem to thrust us away from aversive patterns
while drawing us towards attractive ones. And the object on which
these synthetic forces act in perception is a different kind of
"homunculus" at the center of our perceptual representation, the item
known in psychology as the "body image", most frequently
misidentified as our own physical body. It is the arms and legs and
torso that we perceive to sprout outward from the egocentric point of
our private representational space. It is this perceptual replica of
a human body that feels the influence of the analog forces that
appear in perceived space in response to perceived objects. And the
forces that act on the body image are interpreted as motor commands
to the larger external body which is beyond our direct experience. As
the greater external body moves in response to these internal
commands, the body image mirrors those movements in the internal
replica. The external physical body therefore moves in the world
exactly as if it were responding to analog field forces in the
external environment directly, although in fact it is responding
indirectly to the miniature forces in the internal replica. At the
same time the subjective experience of consciousness gives us the
impression of being a free agent in an external world, although in
fact our conscious experience is forever entombed within the walls of
our own physical skull. Until this most basic fact of conscious
experience is generally accepted as an essential fact of nature,
philosophy will be condemned to a view of consciousness as something
that is deeply mysterious, and forever beyond the capacity of human
comprehension. The indirect realist perspective reveals that in fact
it is the remote external world which is forever beyond human
comprehension, and that consciousness is perhaps the only
thing we can ever fully comprehend.
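The analogical force-field account of behavior sketched above has a close computational parallel in the potential-field method used in robot motion planning, where an agent follows the net of an attractive force toward a goal and repulsive forces away from aversive regions. The following is a minimal sketch of that parallel, not the paper's own model; the target and obstacle positions, the gains, and the step size are all illustrative assumptions.

```python
# A rough computational analogue (borrowed from potential-field
# motion planning in robotics) of attractive and aversive patterns
# acting as forces on the body image in perceptual space.
# Positions, gains, and step size are illustrative assumptions.

def force(pos, target, obstacle, k_att=1.0, k_rep=0.05):
    """Net force: attraction toward the target, repulsion from the obstacle."""
    fx = k_att * (target[0] - pos[0])
    fy = k_att * (target[1] - pos[1])
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d2 = dx * dx + dy * dy + 1e-9  # squared distance (guarded against 0)
    fx += k_rep * dx / d2
    fy += k_rep * dy / d2
    return fx, fy

pos = [0.0, 0.0]
target = (1.0, 1.0)
obstacle = (0.5, 0.4)
for _ in range(200):  # the "body image" follows the net field force
    fx, fy = force(pos, target, obstacle)
    pos[0] += 0.05 * fx
    pos[1] += 0.05 * fy
# The agent settles near the target while being deflected around
# the aversive region, tracing a smooth curved path.
```

The point of the parallel is only that an agent steered by such synthetic forces in an internal spatial replica would move through the external world exactly as if it were responding to analog field forces in the environment directly.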

References

Adams R. M. (1987) Flavors, Colors, and God. In The Virtue of Faith and Other Essays in Philosophical
Theology. New York: Oxford University Press.