An alternative approach to the mind-body problem was the idea that mental states are processes, that they are functional states of the brain. This view is now called functionalism, and it has gained great prominence in recent decades. Before functionalism could be truly accepted, however, it had to explain how it can be that mental phenomena are so qualitatively different from the physical brain phenomena which presumably give rise to them.

Two other issues deserve some discussion. During the 19th Century, studies of living organisms became separated into those addressing issues on the level of behavior and those addressing issues of bodily physiology. This was done for very practical reasons of scientific specialization, but it led to the growth of a huge conceptual gap between knowledge obtained within psychology and knowledge obtained through physiological studies of the nervous system. For over a hundred years, there was precious little interaction between the two.

… The computer metaphor for the mind quickly became the official foundation of modern psychology. Since digital computers were the only well-understood systems capable of complex reasoning, it was not clear which conclusions about their operation should be carried over to theories of the brain and which should not.

At first, the analogy was taken quite literally, and notions of symbolic computation and serial processing were seen as inseparable from the concept of functionalism, and as necessary ingredients for constructing any human-like intelligence. Meanwhile, the formal definitions of information used in computer science and communications technology found widespread use in psychological theory and practice as well…In recent decades, computationalism has become the basic axiom of most large-scale brain theories and the language
in which students entering brain-related fields are taught to phrase their questions. The task of brain science is
often equated to answering the question of how the brain computes.

Many debates remain, of course, about such issues as serial vs. parallel processing, analog vs. digital coding, and symbolic vs. non-symbolic representations, but these are all debated within the accepted metaphor: perception = input, action = output, and cognition = computation. Despite its popularity, certain problems have plagued computationalism from the very beginning. Notable among these are questions of consciousness, emotion, motivation, and meaning.

In this article I focus on the question of meaning. This, I will argue, is not a problem for the brain to solve through some dedicated “meaning assignment” mechanism. Instead, it is only a problem with our description of the brain – a symptom of the shortcomings of computationalism, and an argument for going beyond it. The question of meaning is a central problem in the philosophy of mind. If the brain is doing computation, defined as a transformation of one representation into another, how does the brain know what these representations mean? To illustrate the problem, an analogy to computers is usually employed…

The riddle of meaning is at least in part a symptom of a particularly inappropriate definition of “information” used by most psychologists…“the symbol grounding problem”. That is the label I will use here because I find his presentation of the problem to be the clearest…“How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?” (Harnad, 1990, p. 335)… the problem lies in the arbitrary nature of the assignment
between the symbolic tokens and the objects or states of affairs that these tokens stand for. To ground the symbols, he proposes a hybrid system where the representations of the symbolic system are linked to two kinds of nonsymbolic representations: icons which are analogs of the sensation patterns, and categorical representations which capture the invariant features of these icons. The fundamental symbols are arbitrary tokens assigned to the nonarbitrary patterns of the icons and categories, and higher-level symbols are composites of these. For example, the word “horse” is linked to all the images and all the categorical representations involved in the perception of horses, thereby being grounded.

In summary, Harnad is proposing that we solve the symbol grounding problem by backing up out of the premature analogy, made during the beginnings of Artificial Intelligence, that all thought is like symbolic logic. Though I believe that this is moving in the right direction, I suggest that we need to back out further. We need to step back all the way out of the computer metaphor and to consider whether there is a better alternative description of what it is that the brain does.

In the following section, I outline an alternative metaphor for describing the function of the brain. Those who believe that “information-processing” already captures this function adequately might question the utility of searching for an alternative. I ask these readers to bear with me.

Behavior as Control
In an attempt to develop a new metaphor, we must first break free from the preconceptions that our current one forces us into. This is not easy, but it may be possible if we step back from modern philosophical debates for a moment and consider issues which at first might appear unrelated. This lets us develop a discussion that is not filtered through the lens of the current metaphor…
We begin with a fundamental premise: The brain evolved. This is accepted as fact by almost everyone, but its
implications…are rarely acknowledged. The evolution of a biological system such as the brain is not merely a source of riddles for biologists to ponder. It is also a rich source of constraints for anyone theorizing about
how the brain functions and about what it does. It is a source of insight that is too often overlooked. An evolutionarily sound theory of brain function is not merely one which explains the selective advantage offered by some proposed brain mechanism. Lots of mechanisms may be advantageous. What is more useful toward the development and evaluation of brain theories is a plausible story of how a given mechanism may have evolved through a sequence of functional elaborations. The consideration of such a sequence offers powerful guidance toward the formulation of a theory, because the phylogenetic heritage of a species greatly constrains the kinds of mechanisms that may have evolved.

Therefore, we should expect to gain insight into the abilities of modern brains by considering the requirements faced by primitive brains, and the sequence of evolutionary changes by which these primitive brains evolved into modern brains. A contemplation of the most fundamental functional structure of behavior can start all the way back at the humble beginnings of life.

The earliest entities deserving of the term “living” were self-sustaining chemical systems called “autocatalytic sets”. There are various theories of how these systems came into existence…all theories of early life agree that living systems took an active part in ensuring that the conditions required for their proper operation were met. This means that any changes in critical variables such as nutrient concentration, temperature, pH, etc. have to be corrected and brought back within an acceptable range. This is a fundamental task for any living system if it is to remain living. Mechanisms which keep variables within a certain range are usually called “homeostatic” mechanisms. Biochemical homeostasis often works through chemical reactions where the compounds whose concentration is to be controlled affect their own rates of production and/or breakdown.
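The self-limiting chemistry described above can be sketched as a toy simulation. Everything numerical below is an illustrative assumption (the rate constants correspond to no real pathway); the point is only that a stable concentration emerges without any explicit representation of a goal state:

```python
# Toy model of biochemical homeostasis: a compound inhibits its own
# production while its breakdown rate grows with its concentration.
# All rate constants are invented for illustration.

def simulate(c0, steps=10_000, dt=0.01, k_prod=1.0, k_inhib=0.5, k_decay=0.25):
    """Euler integration of dc/dt = k_prod / (1 + k_inhib*c) - k_decay*c."""
    c = c0
    for _ in range(steps):
        production = k_prod / (1.0 + k_inhib * c)  # slows as c rises
        breakdown = k_decay * c                    # speeds up as c rises
        c += dt * (production - breakdown)
    return c

# Very different starting concentrations converge to the same equilibrium
# (c = 2 for these constants), with no setpoint stored anywhere.
print(simulate(0.0), simulate(10.0))
```

For these constants the equilibrium solves k_prod/(1 + k_inhib·c) = k_decay·c, giving c = 2; the “goal” is implicit in the reaction rates themselves, which is the sense in which such homeostasis needs no represented target.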

Mechanisms generated by evolution are products of a long sequence of modifications and elaborations,
all of these performed within living organisms. Because the modified organisms have to continue living, evolution does not have full freedom to redesign their internal mechanisms. Consequently, the modern form of these mechanisms is strongly constrained by their ancestral forms. Our theories should be similarly constrained. Therefore, an understanding of the fundamental architecture of the brain can greatly benefit from an understanding of the kinds of behaviors and mechanisms present at the time when that fundamental architecture was being laid down.

… If evolution can exploit reliable properties of biochemistry, then it should also be able to exploit reliable properties of geometry and statistics. That the second kind of homeostasis involves a mechanism which effectively extends its action past the membrane, moving the organism through the environment, does not make it do something other than homeostasis. Both kinds of mechanisms ultimately serve similar functions – they maintain the conditions necessary for life to continue. They may be described as control mechanisms.
As evolution produced increasingly more complex organisms, the mechanisms of control developed more
sophisticated and more convoluted solutions to their respective tasks. Mechanisms controlling internal variables such as body temperature or osmolarity evolved by exploiting the consistent properties of chemistry, physics, fluid dynamics, etc. Today we call these “physiology”. Mechanisms whose control extends out through the environment had to exploit consistent properties of that environment. These properties include statistics of nutrient distributions, Euclidean geometry, Newtonian mechanics, etc. Today we call such mechanisms “behavior”.

A functionally analogous mechanism is employed by modern woodlice. The mechanism operates under a simple rule – move more slowly when humidity increases – resulting in a concentration of woodlice in damp regions where they won’t dry out. The bacterium Escherichia coli uses a mechanism only slightly more sophisticated to find food. Its locomotion system increases its turning rate when the nutrient concentration decreases, and thus the cell tends to move up the nutrient gradient. This mechanism, called klinokinesis, exploits the reliable fact that in the world of E. coli, food sources are usually surrounded by a chemical gradient with a local peak.
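Klinokinesis is simple enough to simulate directly. The following sketch is a toy, not a model of real E. coli (the concentration field, step size, and tumble probabilities are all invented): an agent that only tumbles more often when its sensed concentration falls, with no knowledge of where the peak is, nevertheless ends up near it.

```python
import math
import random

# Toy klinokinesis: tumble (pick a random new heading) with high
# probability when the sensed concentration has just decreased.

def concentration(x, y):
    """Nutrient field with a single peak at the origin (arbitrary shape)."""
    return math.exp(-(x * x + y * y) / 200.0)

def run_agent(steps=5000, rng=None):
    rng = rng or random.Random(0)
    x, y = 20.0, 0.0                      # start well away from the peak
    heading = rng.uniform(0.0, 2.0 * math.pi)
    last_c = concentration(x, y)
    for _ in range(steps):
        c = concentration(x, y)
        p_tumble = 0.5 if c < last_c else 0.05  # worse -> tumble often
        if rng.random() < p_tumble:
            heading = rng.uniform(0.0, 2.0 * math.pi)
        last_c = c
        x += math.cos(heading)
        y += math.sin(heading)
    return math.hypot(x, y)               # final distance from the peak

# Averaged over several runs, the agent finishes far closer to the
# peak than the 20 units it started at.
mean_final = sum(run_agent(rng=random.Random(s)) for s in range(20)) / 20
print(mean_final)
```

Note that the rule never represents the peak’s location; it merely exploits the reliable regularity that moving down-gradient predicts moving away from food.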

The term “homeostasis” implies some constant goal-state, but we need not be too devoted to that implication. Much of the activity of living creatures is anything but constant. For that reason, I stay away from the term “homeostatic mechanism” and prefer the more general term “control mechanism”, implying only that control over a variable is maintained to keep it within a desirable range. How that range changes may be determined by various factors, including other, higher-level control mechanisms. Furthermore, it should not be assumed that a mechanism which exerts control over some state necessarily involves an explicit representation of the goal state (consider the examples described in the footnote).
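The point about higher-level mechanisms setting the range can be illustrated with two nested proportional loops (the gains, timescales, and targets below are arbitrary):

```python
# Hierarchical control sketch: the outer loop slowly shifts the inner
# loop's setpoint, while the inner loop quickly tracks that setpoint.
# No constant here is drawn from biology; all are illustrative.

def run_hierarchy(steps=2000, dt=0.05, outer_goal=30.0):
    variable = 0.0    # the quantity under inner-loop control
    setpoint = 10.0   # the inner loop's reference, itself controlled
    trace = []
    for _ in range(steps):
        setpoint += dt * 0.5 * (outer_goal - setpoint)  # slow outer loop
        variable += dt * 2.0 * (setpoint - variable)    # fast inner loop
        trace.append(variable)
    return trace

trace = run_hierarchy()
# The controlled variable ends up serving the higher-level goal even
# though the inner loop only ever "sees" its own setpoint.
```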

The alternative “control metaphor” being developed here may now be stated explicitly: the function of the brain is to exert control over the organism’s state within its environment.

The concept of stimulus-response, or reflex arc, focuses attention only on the events leading from the detection of stimulation to the execution of an action, and leads one to ignore the results of that action, which necessarily cause new patterns of stimulation. …The feedback nature of behavior has often been discussed within psychology as well…Feedback control has long been used to describe the physiological operation of the body…In ethology, the study of animal behavior, the feedback-control nature of behavior has been a foundation for years.

Why then has the control system metaphor been so neglected within mainstream psychology? Several possible reasons come to mind:

1) The experimental methodology in psychology deliberately prevents the response from affecting the stimulus in an effort to quantify the stimulus-response function. This is appropriate for the development of controlled experiments, but can be detrimental when it spills over into the interpretation of those experiments.
2) There is excess homage paid to the skin, and the structural organization of behavior (from receptors to
effectors) is mistaken to be its functional organization (from input to output).
3) The behavior of modern humans is so sophisticated that most actions that we tend to contemplate are performed for very long-range goals, where the ultimate control structure is more difficult to appreciate. Thus, because many traditions of brain science began by looking at human behavior, they were not likely to see the control structure therein.
4) Systems with linear cause and effect are much more familiar and easier to grasp than the dynamical interactions present in systems with a closed
loop structure.
5) Interdisciplinary boundaries have split the behavioral loop across several distinct sets of scientific
literature, making it difficult to study by any single person. The study of advanced behavior has become allocated among numerous scientific disciplines, none of which is given the mandate of putting it all together. Even philosophy has only recently begun to look into brain science and biology for insight into mental function.
6) Finally, the various attempts to reintroduce the control metaphor into brain theory must themselves take some of the blame. In an attempt to establish themselves as distinct entities, many of the movements listed above described themselves as revolutionary viewpoints that redefine the very foundations of scientific psychology or Artificial Intelligence research. Much criticism was leveled at mainstream theories, resulting in impassioned defenses. And during such defenses, the new movements were portrayed as already familiar and discredited viewpoints (usually as variants of the most extreme form of behaviorism), and thus quickly rejected. But it is not true that these movements redefine psychology – they merely present novel perspectives on existing data, data which continues to be relevant to the study of behavior.

This essay suggests that the control metaphor is a better way of describing brain function than the computer
metaphor. One advantage, of particular interest to philosophy of mind, is that it provides a simple answer to the question of meaning. Briefly, rather than viewing behavior as “producing the right response given a stimulus”, we should view it as “producing the response that results in the right stimulus”. These statements seem pretty similar at first, but there is a crucial difference. While the first viewpoint has a difficult time deciding what is “right”, the second does not:
• Animals have physiological demands which inherently distinguish some input (in the sense of “what the animal perceives as its current situation”) as “desirable”, and other input as “undesirable”. A full stomach is preferred over an empty one; a state of safety is preferred over the presence of an attacking predator.
• This distinction gives motivation to animal behavior – actions are performed in order to approach desirable input and avoid undesirable input. It also gives meaning to their perceptions – some perceptions are cues describing favorable situations, others are warnings describing unfavorable ones which must be avoided.
• The search for desirable input imposes functional design requirements on nervous systems that are quite different from the functional design requirements for input-output devices such as computers. In this sense, computers make poor metaphors for brains. For computers there is no notion of desirable input within the computing system, and hence there is the riddle of meaning, a.k.a. the symbol grounding problem.
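The contrast between the two phrasings can be made concrete in a toy closed loop. In this sketch (the desirable range, the drift, and the unit actions are all invented for illustration), the “right” response is defined purely by its effect on the next input:

```python
# "Producing the response that results in the right stimulus":
# the action is chosen for its effect on the *next* input, and the
# input is herded into a desirable range whatever the environment does.

def closed_loop(disturbance, steps=200, low=18.0, high=22.0):
    sensed = 0.0        # the input, e.g. a sensed temperature
    action_total = 0.0  # accumulated effect of the agent's actions
    for t in range(steps):
        if sensed < low:
            action = +1.0   # act to raise the input
        elif sensed > high:
            action = -1.0   # act to lower it
        else:
            action = 0.0    # input is already "right"; do nothing
        action_total += action
        # Crucially, the action feeds back into the next input.
        sensed = disturbance(t) + action_total
    return sensed

# Opposite environmental drifts, same outcome: input near the range.
print(closed_loop(lambda t: -0.3 * t), closed_loop(lambda t: 0.2 * t))
```

A fixed stimulus-response table has no way to say which of its outputs is “right”; here rightness is simply whatever keeps the sensed value inside the range, which is the sense in which the control view dissolves the question.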

Re-examining the problem of meaning
Most philosophical inquiries into meaning begin by contemplating the meaning of words and symbols. This is
undoubtedly due to the influence that computer analogies and “language of thought” theories have
historically had over the field. As discussed above, a few decades ago the paradigm of symbolic logic was perceived as the only mechanistic explanation of complex behavior available as an alternative to the emptiness of dualism and the ineffectiveness of behaviorism. It thus defined the default premises for modern theories of mind.

With that foundation, modern thought about the mind revolves around questions of how symbols acquire their
meaning. To repeat: “How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?” (Harnad, 1990).
It is usually assumed that if we can understand the meaning of symbolic tokens, then the meaning of non-symbolic representations (like sensorimotor schemata) will follow trivially. It is also usually assumed that the meaning of perceptual representations can be understood in isolation…One might suggest that we can avoid the symbol grounding problem by abandoning symbols in favor of connectionist representations. However, the problem exists for representations in general, be they symbolic or non-symbolic, as long as their role in controlling behavior is neglected.

Such conceptual isolation is a large part of the reason why meaning appears mysterious. As discussed above,
traditional disciplinary boundaries and the need for specialization in science have divided the large problem of
behavior into smaller sub-problems such as Perception, Cognition, and Action. These disciplinary boundaries then spill over into large-scale brain theories, yielding a model of the brain with distinct modules separated by putative internal representations.

The Perceptual module is separated from the Cognitive module by an internal unified representation of the external world, and the Cognitive module is separated from the Action module by a representation of the motor plan. These boundaries help to limit the volume of literature that scientists are confronted with, but at the same time they isolate them from potential insights. Those who study the mechanisms of perception and movement control seldom contemplate “philosophical” issues like meaning. Meaning is left for those who study cognition. And those who study cognition start with a perceptual representation and usually phrase the question of meaning in terms of how meaningful symbolic labels can be attached to that representation, and how
symbols can be about the things they refer to.

The symbol grounding problem has things backwards. Meaning comes long before symbols in both phylogeny (evolutionary history of a species) and ontogeny (developmental history of an individual). Animals interacted with their environment in meaningful ways millions of years before they started using symbols. Children learn to interact with their world well before they begin to label their perceptions. The invention of symbols, in both phylogeny and ontogeny, is merely an elaboration of existing mechanisms for behavioral control.

Again, let’s step back from the theoretical debate surrounding meaning and consider issues that are more
fundamental from a biological perspective.
• In order to survive, organisms have to take an active part in controlling their situation and keeping it within desirable states.
• For an organism to exert control over its environment, there must exist predictable relationships between an action and the resulting stimulation (“motor-sensory” relationships).
• Both physiology and behavior can function adaptively only if there exist reliable properties in the organism’s niche which may be exploited toward its survival.
• Biochemical control exploits reliable properties of chemistry, diffusion, fluid dynamics, etc., while behavioral control exploits reliable properties of statistics, geometry, rules of optics, etc.
• With a simple behavioral control mechanism such as klinokinesis, it is easy to see how the system makes use of the statistics of nutrient distribution.
• With more complex behaviors, the properties of the niche which are exploited by the organism (i.e.
brought within its control loop) are more subtle….

Affordances are opportunities for action for a particular organism…Thus, an affordance is objective in the sense that it is fully specified by externally observable physical reality, but subjective in the sense of being dependent on the behavior of a particular kind of organism…perception of the world is based upon perception of affordances, of recognizing the features of the environment which specify behaviorally relevant interaction.
An animal’s ecological niche is defined by what its habitat affords.

These relationships between animals and their habitats may be considered precursors to meaning. They are
properties of the environment which make adaptive control possible and which guide that control. They make it
possible for an organism to establish a behavioral control loop which can be used to approach favorable situations and avoid unfavorable ones. Because these properties tend to come packaged along with semi-permanent physical objects, we can speak of the “meaning” that these objects have to the organism in question. However, the crucial point is that the “meaning of an object” is secondary to the meaning of the interactions which the object makes possible.

The symbol grounding problem has been turned upside-down: The question we’re left with is not how meaning is assigned to the symbol “rattle”, but how the symbol is assigned to meaningful interactions with a rattle. How does a shorthand representation emerge within a system of sensorimotor control strategies? What regularities in the environment make the establishment of symbols in the organism possible? – Perhaps one example is the fact that the affordances of an external object tend to remain attached to it as a set. For what behavioral abilities is such a shorthand representation useful? – One likely use is in predicting events; perhaps another is when constructive imitation among social animals evolves into purposeful teaching (Bullock, 1987). And at what phylogenetic stage of behavioral sophistication are these abilities observed? These are the kinds of questions that research on symbolic thought can productively pursue once it moves beyond the riddle of meaning.

One of the major reasons why meaning may appear as a riddle is its presence in communication. When humans talk over a telephone line, somehow there is meaningful information passed from one to the other, even though nothing more than arbitrary electrical impulses are being transmitted between them. This article is another example of a set of arbitrary symbols which somehow convey meaningful information. How can this be?

Let’s consider communication from the evolutionary perspective which has been used throughout.
• In order to survive, an organism must exert control over its situation, and this control can only be achieved if there exist reliable relationships between actions and their results.
• While laws of chemistry create opportunity for biochemical homeostasis, laws of geometry, optics, etc. create opportunity for behavioral interaction.
• A behavioral control loop can be constructed wherever consistencies exist in the environment. And of course, such consistencies also exist in the behavior of other animals.

Animals respond in complex but predictable ways to various stimuli. When a threatening posture is assumed by one crayfish, another will either back off or respond with its own threat posture. This establishes a domain of interaction
between the two creatures where each can attempt to control the other into conceding submission…While the earliest symbols (such as the threat posture of crayfish) closely resemble the state of affairs (battle) which they signal, over time the symbol forms may diverge into arbitrary variations.

Thus, like physiology and behavior, communication is also an extension of control: one which encompasses other creatures in the environment. To describe communication merely as “transmission of information” is incomplete. While information is indeed transferred in communication, to miss the control purpose of the transmission is a fatal oversight in an attempt to understand the meaning of the communiqué.
The same case can be made for the meaning of words. As the system for communication grows in complexity and acquires syntax, the meaning of its elements derives from the behavioral control goals of the speaker. In humans, this ability has developed most impressively and meanings have been built upon meanings until even abstract concepts can be expressed. Nevertheless, most communication, including this article, is an attempt at persuasion.

Conclusions
… the control metaphor is more precise than the input-output metaphor. To describe brains as input-output devices is like describing cars as energy-conversion systems without adding something that distinguishes them from chainsaws and chloroplasts. Because the control metaphor is a more precise description, it better constrains the task of explaining behavior. Control problems, being a subset of input/output problems, present a smaller search-space of possibilities.

The historical interdependence in the development of functionalism and computationalism has resulted in the two concepts often being viewed as inseparable. However, they are not, and several prominent criticisms aimed at functionalism are really criticisms of computationalism…Brains interact with a world,
controlling their situation by performing actions that result in desirable input. They do this by discovering and
exploiting the reliable rules of output-to-input transformation that are made possible by the external world – this is what a pragmatic understanding of the world amounts to. Any system which is capable of performing such control over the environment will perforce contain internal representations that have meaning to it…

In summary, what I am advocating here comes down to correcting two crucial mistakes that psychology has been led to make:
1) severing the behaving organism from its environment; and
2) decomposing behavior into Perception, Cognition, and Action modules which are then studied in isolation.

Both of these are very old ideas, but they have become particularly entrenched in mainstream psychological thought with the development of computationalism. To reject these two mistakes by viewing the brain as a control system does not, however, invalidate the progress made under computationalism. For example, the idea of lawful transformations of internal patterns of activity is still useful.

We can still refer to this as the “processing” of “representations” as long as we focus on the pragmatic value
these representations have for the task of control and not dwell solely on how they may describe the world. The idea of specialized brain subsystems is also perfectly reasonable, as long as we delimit these subsystems for functional reasons and not because of conceptual traditions. Finally, many existing models of brain systems, developed in the context of the computer metaphor, are equally compatible with a high-level view of the brain as a control system – this is particularly true of connectionist models.

Making the change in perspective from viewing the brain as an input-output device to viewing it as a control system also leads to a number of important conceptual shifts. A major one is a shift from an emphasis on representations to an emphasis on behaviors, from the analysis of serial stages of processing to an analysis of parallel control streams. This lets one avoid some classic problems in philosophy of mind.
• First, as discussed above, once neural representations are viewed within the context of the behavioral control to which they contribute, their meaning is not a mystery.
• Second, motivated action is also not a mystery – when an animal’s physiological state no longer meets its internal demands (like a growing hunger), action is generated so as to bring it to a more satisfying state.
• Third, once one no longer assumes the presence of a complete internal representation of the external world, many forms of the “binding problem” are no longer difficult. When environmental regularities are allowed to take part in behavior, they can give it coherence without need of explicit internal mechanisms for binding perceptual entities together.
Finally, the shift away from serial representations leads one to reconsider some classic notions concerning
consciousness. Much of psychological theory in the last century has been developed in the context of theoretical viewpoints on the mind-body problem. Because dualism and its variants thrived at least until the 1950s, they had a great deal of influence on the foundations of psychology. This dualist backdrop led many psychologists to assume that the brain, somewhere within it, presents a model of the world for the mind to observe. Dualism called for a central representation, and computationalism provided that in the form of the internal world model upon which cognition presumably applies its computations. Thus, there has long existed a symbiotic relationship between dualism, a philosophical stance, and computationalism, a psychological viewpoint. And although dualism itself has largely been discredited, some of its influences, such as the assumption of a unified internal world model, remain with us still. Deconstructing that assumption by moving beyond computationalism will have profound effects on what we imagine the neural correlates of consciousness might be. Perhaps the shift will help us develop a more functional concept of consciousness than the currently prevalent dualistic one, freeing us from the persistence of the so-called “hard problem”.

The last three decades of brain science have witnessed a progressive backing away from several premature assumptions based on the computer analogy. It was quickly obvious that serial searches among a combinatorial set of possibilities cannot be the way that a human brain reasons, even if today such a process can be made fast enough to beat a chess grand-master. Neuroscience research has made it clear that the brain operates with large numbers of noisy elements working in parallel rather than with a single powerful CPU. Human memory appears to be stored in a distributed manner rather than in the sequential addresses of computer memory. The re-emergence of connectionism has questioned the notion that symbolic logic is the only form of computation to be considered…

All these developments demonstrate weaknesses of the aging analogy that brains are like computers, and motivate us to take a few steps back from some of the assumptions it originally generated. This article argues that one more step needs to be taken. We need to step back from the input-output metaphor of computationalism and ask what kind of information processing the brain does, and what its purpose is. The answer, suggested numerous times throughout the last hundred years, is that the brain exerts control over its environment. It does so by constructing behavioral control circuits which functionally extend outside of the body, making use of consistent properties of the environment, including the behavior of other organisms. These circuits and the control they allow are the very reason for having a brain.

To understand them, we must move beyond the input-output processing emphasized by computationalism and recognize the closed control-loop structure that is the foundation of behavior.