COGNITION AND LIFE: THE AUTONOMY OF COGNITION

Alvaro Moreno (#), Jon Umerez (#) & Jesús Ibañez (*)
(#) Dept. of Logic and Philosophy of Science
(*) Dept. of Languages and Information Systems
University of the Basque Country
P.O. Box 1249 / 20080 Donostia / Spain
Tel.: +34-43 31 06 00 (ext. 221)
Fax: +34-43 31 10 56
E-mail: ylpmobea@sf.ehu.es

Running title: Cognition and Life

Keywords: Artificial Intelligence, Artificial Life, autonomy, biological grounding, cognition, evolution, life, nervous system, universality.

Abstract.

In this paper we propose a philosophical distinction between biological and cognitive domains based on two conditions which are postulated in order to obtain a useful characterization of cognition: biological grounding and explanatory sufficiency. Accordingly, we argue that the origin of cognition in natural systems (cognition as we know it) is the result of the appearance of an autonomous system embedded in another, more generic one: the whole organism. This basic idea is complemented with another: the formation and development of this system, in the course of evolution, cannot be understood except as the outcome of a continuous process of interaction between organisms and environment, between different organisms and, especially, among the cognitive organisms themselves. Finally, we address the problem of the generalization of a theory of cognition (cognition as it could be) and conclude that this work would imply a grounding work on the problem of the origins, developed within the frame of a confluence between both AL and an embodied AI.

1.- Introduction.

In the second half of the present century modern science has witnessed an apparently contradictory process. On the one hand, the classical "field disciplines" have become fragmented into a variety of significantly specialized subareas with increasingly narrower scopes.
On the other hand, heterogeneous scientific communities have developed around multidisciplinary ideas, integrating the different epistemological, methodological and technological contributions of their members, and creating what have been called the sciences of complexity (see Pines, 1987; Cowan, Pines & Meltzer, 1994). The first significant milestones of this phenomenon were founded almost in parallel by Cybernetics (Wiener, 1948/1961; Ashby, 1956) and by General Systems Theory (Bertalanffy, 1968, 1975), and as a consequence we can today speak about Adaptation, Autonomy, Communication, Information, Second Order (observation-dependent) and, especially, Complexity Sciences, although in some cases the title is far from having a sound tradition. Undoubtedly the quick and huge development of Computer Science has been and continues to be an important factor in the spreading of these new disciplines, since it has provided empirical and experimental counterparts to the formal approaches necessary in the attempts at abstraction of these "Complexity Sciences".

Let us just mention some of the main conceptual issues which might help in drawing the wider context within which the subject of this paper is embedded.

1.1.- Artificial / Natural.

The extreme position in this direction has been the development of a new scientific strategy that (transcending the previous and well established practice of Engineering and other technological disciplines) has given origin to the Sciences of the Artificial (to borrow Simon's, 1969, denomination). This strategy consists in the study of complex systems by means of their artificial creation, in order to experimentally evaluate the main theories about them.
Computational technology is central (although not necessarily unique) in this new experimental paradigm, and its most outstanding applications concern precisely those fields where it becomes not just an alternative to other empirical approaches but a first-order validation tool (since the enormous complexity of their scientific objects makes it extremely difficult to keep the traditional experimental approaches operational). Up to now there are two main research projects that have resulted from its application to psychological and biological problems: Artificial Intelligence and Artificial Life, that is to say, the attempts at "explanation through recreation" of the natural phenomena of intelligence and life.

1.2.- Functionalism.

The application of this paradigm to an existing scientific field (the host science) has two consequences that will be relevant for the discussion attempted in the present work. First of all, the change of status of the epistemic relationship between scientific theories and reality: the pervasive comparison of models and artificially created systems produces what has been called the deconstruction of science's traditional object (Emmeche, 1994). This situation is clearly exemplified in Artificial Intelligence, where it is very common to find arguments against its comparison with natural cognitive processes whenever it fails to account for them adequately. This defense strategy of a research program's framework can also be detected among Artificial Life researchers, and its natural fate is to evolve into extreme functionalism by explicitly giving up the attempt to model real systems (Umerez, 1995).

1.3.- Universality.

Another issue concerns the universality of the host science. Sciences such as Physics, Chemistry or Thermodynamics either don't assume ontological restrictions about their field of study, or include a framework to reduce any constraint in their scope to the physical level.
Moreover, they provide a methodology that operationally includes their target objects and their direct manipulation. Thus they can be considered universal sciences of matter and energy, since their laws are intended to be valid up to contingencies. Unlike them, Biology studies phenomena about which we only have intuitive and empirically restricted knowledge (if any), and whose complexity is a barrier against their reduction to the lower physical and/or chemical levels (Moreno, Umerez & Fernández, 1994). Thus Biology (not to speak of Psychology) can only be intended as a science that studies the phenomenon of life through the experience of the living systems as we know them, and so we still have no means to distinguish, among the known characteristics of these systems, which of them are contingent and should not be demanded for the characterization of life or cognition in a generic context (Moreno, Etxeberria & Umerez, 1994).

In this respect Artificial Life has been more radical and more mature than its predecessor, AI. While Artificial Intelligence sets as its explicit goal the attainment of a general theory to explain intelligent behavior, implicitly assuming the anthropocentric perspective of considering that what we have is the most we can expect, Artificial Life (Langton, 1989) has had from its birth a clear and explicit attitude towards contributing to extend the actual biological realm (life-as-we-know-it) into a universal theory of the living organization that could go beyond our empirical experience (life-as-it-could-be) as a consequence of the generalization provided by the artificial models.

1.4.- Relation between Artificial Intelligence and Artificial Life.

In any case, we can see that, despite the efforts made by most Artificial Life founders to make it an epistemologically different discipline from Artificial Intelligence (stress on the bottom-up methodology), both research programs have relevant similar traits and have had to some extent similar attitudes in facing the study of
their respective main scientific targets (Pattee, 1989; Sober, 1992; Keeley, 1993; Umerez, 1995). In this sense, their methodological differences are contingent: each one has chosen its particular working hypotheses and, up to now, these have proven to be the best ones in their respective fields for universality purposes. We obviously do not want to say that they are perfect, not even that they are any good. We simply want to point out that at present there are no better alternatives for producing general theories of life and intelligence.

The true differences arise when these researchers deal with a subject that lies partially within the scope of both, and that is precisely what happens with cognition. On the one hand, Artificial Intelligence has widened its scope, especially through its insertion into Cognitive Science, and has started to study processes that do not necessarily imply the classical knowledge approach (e.g. perception instead of recognition) (Brooks, 1991). On the other hand, besides dealing with specifically biological problems, Artificial Life has shown itself capable of producing systems which come close to modelling low-level cognitive processes. Cognition is not the main target of either of them (and for the moment we do not have a research program aimed at Artificial Cognition), but a secondary objective in both.

The approach to the cognitive phenomenon is different from the perspectives of Artificial Intelligence and Artificial Life. The former, though claiming a physicalist and mechanistic stand, has tended to consider cognition in an abstract and disembodied way. The latter, notwithstanding, has brought a new sensibility: an effort has been made to address the understanding of the cognitive phenomenon from the bottom up and to insert it in an evolutionary and embodied frame.
From our point of view this constitutes a considerable advance, but it has been reached at the price of creating a confusion between what is generically biological and what is properly cognitive.

In this context it is legitimate to make the comparison between both approaches, their respective methodologies, theoretical models and experimental results. Moreover, this is probably the most interesting comparison test (if not the only one) that we can have between Artificial Life and Artificial Intelligence. This paper attempts a critical review of the subject.

2.- The Phenomenon of Cognition.

Cognition is not a concept standing out there, ready to be handled by any discipline that wishes to do so. There is considerable controversy about its definition and nature (Churchland & Sejnowski, 1993; Gibson, 1979; Pylyshyn, 1984; Smolensky, 1988; Van Gelder, 1992/1995; Varela, Thompson & Rosch, 1991), to the point of altering its meaning depending on the starting epistemological assumptions for its study. In this sense it is worth remembering its philosophical origin, which denotes an essentially human feature in whose realization awareness, acquaintance and knowledge should be involved. Today's situation is that most explicit definitions of cognition can be perfectly correct, though contradictory with each other, simply because they apply the same word to different concepts.

But in this controversy there are two aspects. On the one hand, there is the problem of the boundaries of the idea of cognition. Given that there is no scientific disagreement about considering human intelligence a form of cognition, the upper boundary seems to be delimited without controversy. Therefore, the main problem to deal with concerns the lower limits of cognitive phenomena.

On the other hand, there is the methodology of the question, i.e. what kind of definition we are looking for.
We suggest that, instead of arguing in favour of one or another type of definition, it is more useful to discuss the methodological implications that such definitions convey. In other words, one should not seek a precise definition of cognition, but a useful one, that is to say, one which allows the correct framing of a research project centered on it.

What is being discussed is mainly a conflict between two types of research program about cognition. On the one side, a research program which attempts to have cognitive phenomena emerge from a purely biological background. On the other, the more traditional research program in Artificial Intelligence, which seeks mainly to reproduce high-level cognitive functions as the result of symbol-manipulating programs in abstract contexts (Newell, 1980). This second position has the advantage of dealing with a distinctly cognitive phenomenology by focusing on high-level cognition without any precondition. As a matter of fact, most of the best models in AI are of this kind.
Nevertheless, this perspective also has well known but very serious problems: it implies an abstract and disembodied concept of cognition whose foundations ("symbol grounding problem", Harnad, 1990) are by no means clear.

Thus, according to the former considerations, we propose that a useful characterization of cognition should fulfil two conditions:

a) Biological grounding: to establish the biological conditions under which cognition is possible, and so to relate the understanding of cognition to its origins.

b) Explanatory sufficiency: any plausible minimal conditions to characterize cognitive phenomena should include all the primitives necessary to fully explain its more evolved forms: the higher-level forms involved in human intelligence.

Therefore, in the next two sections we will try to develop a concept of cognition simple enough to be derived from a biological frame and, at the same time, endowed with an autonomous identity which permits it to be useful also for supporting high-level cognitive phenomena.

3.- The lower bound: Life and Cognition.

As we have stated before, an important prerequisite for any research program involving a theoretical treatment of a complex phenomenon such as cognition is to work out an explanation of that phenomenon along with a characterization of the mechanisms that make it possible and originate it. Cognition only appears in Nature with the development of living systems. The inherent complexity of living processes renders the relationship between life and cognition a very interesting and difficult problem. In particular, it has traditionally been very hard not only to identify precisely the origins of cognitive activities, but even to distinguish which biological activities can be considered cognitive (Maturana & Varela, 1980; Heschl, 1990; Stewart, 1992).
We will try to trace back the different stages associated with the origin of cognition, addressed from the perspective of the origin of life itself.

Since its origin, life has provoked a set of continuous changes on the earth. Thus, living beings have had to develop several adaptive mechanisms in order to keep up their basic biological organization. At a phylogenetic scale the solution to this fundamental problem is given by evolutionary mechanisms, but when organisms are considered at their lifetime scale, we see that each one is also able to adapt (in a non-hereditary fashion in this case) to changes in the environment. Even the simplest biological entities known at present possess some sort of "sensor organs" that evaluate those physical or chemical parameters of their environment that are functionally relevant for them, subsequently triggering structural or behavioral changes that ensure a suitable performance of their living functions.

At this level, ontogenetic adaptability consists in the functional modulation of metabolism triggered by molecular detection mechanisms located in the membrane. Any biological system, no matter how primitive, includes relationships among different biochemical cycles that allow the existence of regulatory mechanisms which can imply modifications in different parts of the metabolic network. In this very elementary stage of the relations between organism and environment, the basic sensorimotor loops that constitute adaptive mechanisms do not differ significantly from the rest of the ordinary biological processes of the organism, e.g. its metabolic cycles. For instance, the flagellum movements involved in oriented locomotion in certain types of bacteria can be equivalently characterized as modifications in metabolic paths. From this starting scheme, evolution has developed organisms provided with more and more complex metabolic plasticity, whose control by the organisms themselves has in turn allowed more and more complex viable behavior patterns.
However, as long as the variety of possible responses was based only on the metabolic versatility of the organism, the complexity of the behavioral repertoire would remain strongly limited. That is why, according to the second condition assumed in the previous section, those kinds of adaptive responses to the detection of significant environmental variations through molecular mechanisms (certain membrane proteins) are essentially nothing but biological functions, and only in a very unspecific sense could such behavior be considered "cognitive".

The history of life shows, though, that the aforementioned limited responses have not been an insurmountable obstacle for the creation of progressively more complex organisms, as in the course of evolution some such organisms began to form pluricellular individuals. There is, however, an exception: those forms of life based on the search for food through movement, where speed in the sensorimotor loop is crucial. In this case a process of cellular differentiation was induced, leading to an internal specialized subsystem that could quickly link effector and sensory surfaces. This process was the origin of the nervous system. In its turn, the operation of such a system implied the development of an internal world of externally related patterns (because they are coupled with sensors and effectors) organized in a circular self-modifying network. As we will see in the next section, in organisms endowed with a nervous system (henceforth animals) adaptation takes place not through metabolic mechanisms of self-control, but through an informational meta-control of metabolic-motor functions.

When we consider the evolution of pluricellular organisms whose strategy of life was based on the search for food, the development of such a neural subsystem represented two significant advantages: higher speed and finer flexibility in the coordination of the sensorimotor loops.
Moreover, the organic changes attributable to nervous processes represented only a small amount (in terms of energetic costs) of the set of physiological processes that occur in the lifetime of the individual. For these reasons selective pressure determined that the development of pluricellular organisms whose adaptability relied on motor behaviors would become impossible without a neural subsystem.

As the nervous system became more complex, animals could use neural resources "off line" for exploring virtual, mentally simulated situations before taking actions in their real environments (Clark & Grush, 1996). Hence, the fundamental factor in the development of the Nervous System has not only been the relation between organisms whose way of life is based on movement and their non-cognitive environment, but also the co-evolution (cooperation and competition) with other cognitive organisms. Co-evolution is essential (not just a contingent fact) for the emergence of meaning and cognition, because the "autonomy" of the cognitive agents, and of every organism as a biological organization, cannot be understood without its collective dimension (and vice versa). "Movement", for instance, as has been pointed out by Ecological Realism (see Turvey & Carello, 1981), is not to be taken as a mere physical concept, but mainly as a network of interactions among other organisms equally endowed with a nervous system. Accordingly, the development of cognitive capacities occurred as a collective phenomenon which took the form of a "bootstrapping" process.

3.1.- The Blending Approach and its evaluation.

So far we have placed the discussion of the origin of cognitive capacities in an evolutionary frame. However, among those authors who agree on the necessity of a biological grounding of cognition, there is a (rather philosophical) discussion concerning the nature of life and cognition.
As we see it, the crux of this discussion is a discrepancy about the significance of the gap between what we have called mere adaptation and the world of phenomena arising out of the development of the nervous system.

Some authors (Maturana & Varela, 1980; Stewart, 1992; Heschl, 1990) consider that life itself necessarily involves cognitive abilities. Though significantly different among themselves, the positions of these and other authors share the assumption that life and cognition are, if not the same concept, inseparably linked phenomena, and in the ongoing discussion we will refer to them collectively as the Blending Approach (BA).

According to the BA, all these adaptive processes would constitute the simplest forms of cognition. Thus, there would not be any explanatory gap between life and cognition (the existence of biological systems is linked to the presence of cognitive abilities), and, moreover, the understanding of the nature of cognition is linked to an explanation of its own origin and of the origin of life (the simplest forms of cognition, and so the easiest ones to understand, would be present in the earliest organisms).

The main advantage of this position is that it is able to account for the biological origins of the epistemic phenomena. However, as we have pointed out, the concept of cognition proposed by the BA gets considerably reduced in its operational meaning. This is because it renders as ontologically equivalent the aforementioned basic sensorimotor loops, that is to say, interaction processes between organism and environment through metabolic nets, and much more complex and evolved interaction processes that explicitly involve other cognitive organisms.
In other words, the kind of processes considered in the BA paradigm are closer to other biological functions than to higher cognitive abilities.

Besides, there would be no profit in carrying the BA to its most extreme consequences: either cognition is reduced to life, which leads to abandoning the term "cognitive" because of its lack of specific meaning, or a more pragmatic argument is adopted in order to state that life and cognition represent operationally and epistemologically different concepts. In the first case those processes (like the basic sensorimotor loops) that are presented as cognitive can in fact be characterized as purely adaptive processes, in which the specifically cognitive dimension is not functionally distinguishable from the whole of the biological operations of the individual. In the second case, on the contrary, the problem we have is how to determine which biological processes could be categorized as specifically cognitive and which not. Thus we would not have simplified the concept of cognition, but merely translated the boundary problems to the biochemical level, since it is at that level that the earliest cognitive mechanisms are identified. Finally, it seems very hard to ground the primitives of cognitive science (like open-referential information and representation) without assuming the necessity of the aforementioned gap between purely biological phenomena and cognitive ones.

4.- The Autonomy of the Nervous System.

The existence of an important gap between purely adaptive behavior and high-level cognition suggests the importance of an explanation of the origin of cognition as an autonomous phenomenon with respect to biology, and the necessity of raising the lower boundary of cognition.
If we claim (as in fact we do) that cognition is not epistemologically equivalent to the basic biological functions, we need to identify not only its specific phenomenology, but also (the harder task) the biological conditions that produce this phenomenology.

This leads us to face the question of the origin of the nervous system (NS) in a new manner, namely, as a radically different kind of organization arising in a biological frame. As we have mentioned before, the emergence of the NS is the result of an evolutionary strategy carried out by pluricellular organisms whose survival depended on obtaining food through movement. This strategy ultimately attained the formation of a specialized subsystem of the organism to quickly channel the sensorimotor couplings. The organization of the nervous system is oriented towards density, speed, precision, plasticity, pattern-number maximization and energy-cost minimization. The combination of these features expresses the specific identity of the NS as the material support of the cognitive capacities in animals.

Functionally speaking, the specificity of the role played by the NS lies in the different way in which adaptive mechanisms take place. Organisms without a NS, when facing biologically significant changes in the environment, trigger a set of functional metabolic reactions that keep the organism biologically viable. Here adaptation occurs essentially as a result of biochemical changes induced by sensor surfaces that constrain the metabolism. Instead, when animals interact cognitively with their environment, the sensorial flow does not directly constrain metabolic states (the body), but rather a flow of nervous patterns within a recursive network. Effector organs are thus connected with sensors through this network, which allows the possibility that some internal patterns be coupled not only with present features of the environment.
For this reason, it seems more convenient to speak about this kind of coupling between internal nervous patterns and external events in informational terms, whose meaning we will discuss later.

As we will see, the specificity and potential open-endedness of the internal patterns arising in this network open up the specific phenomenology of the cognitive domain. Thus, the NS is the material support of the cognitive phenomenology as an autonomous level with regard to the rest of the biological domain. Cognition appears as the consequence of the emergence of the nervous system.

When we describe the NS as a functional network, it is worth distinguishing different levels within it. At the bottom, we have complex patterns of metabolic processes. Part of these processes produce, at a higher level, simple discrete events, and at even higher ones, patterns formed by groups of neurones. As a result of the functional interactions that an animal has with its environment, there arises a history of couplings between internal states (underlying complex metabolic processes) and events in the environment. So, meaning or cognitive information occurs at different hierarchical levels, implying both activation patterns of discrete units and the insertion of these patterns in a bodily frame endowed with an evolutionary history (Umerez & Moreno, 1995).

Here is where cognitive information appears. The fact that the sensorimotor loop is mediated through informational processes is precisely what distinguishes cognition from generic adaptation.

However, information is also a central concept in biology at large: for instance, essential processes like self-maintenance and self-reproduction both depend on the specific sequence of discrete units stored in DNA molecules, i.e., genetic information. Now, this information, though generically "epistemic" (because of its referentiality), is bound to self-referentiality.
Nevertheless, if information is to be a useful concept for cognition, it needs to convey open referentiality.

More precisely, let us compare these two senses of the term information. When we try to account for both its genetic and cognitive meanings, information should be understood as a set of patterns with causal effects that connect meta-stable states with physically independent events or processes, by virtue of some interpretation mechanisms autonomously constructed by the very system. Therefore, in the case of both genetic and neuronal information, we are dealing with self-interpreted functional information, and not just with a series of discrete units which have a certain probability assigned and whose meaning is externally attributed independently of its formal structure. In the frame of the NS, the term information corresponds to the functional description of those global metabolic patterns that in turn modulate a flow of chemical and physical processes connected to the outside, in a circular manner, through diverse organs, sensors and effectors. The dynamics of the informational flow is constrained both by the requirements of sustaining the entire organism's viability and by the constraints of the structure of the environment.

In a similar way to the generic biological organization, the nervous system primarily produces its own internal states as expression and condition of its self-coherence as an operationally closed network (Varela, this issue). But this autonomy is, in its turn, not independent of that of the whole organism. Once emerged and developed, the nervous system subsumes the purely metabolic adaptability functions. In this sense, the epistemological autonomy of the nervous system is the continuous production and reproduction of an informational flow coherent with the viability of the autonomy of those organisms that generate precisely these internal meta-autonomies.
Along with this, the nervous system is essentially immersed in (and coupled with) the external environment (mainly other cognitive organisms). The autonomy of the nervous system can also be stated in that frame of informational relationships.

Thus, the appearance of a new phenomenological domain whose primitives are these informational patterns is one of the most outstanding features of the nervous system. This domain relies on a set of features that configure the deep specificity of this system with respect to the rest of the organized structures of the individual organism.

As a consequence, the external environment of organisms endowed with a NS is constituted by informational interactions rather than by functional ones. But as this environment mainly consists of other cognitive organisms, the world of cognitive agents progressively becomes a communication world.

5.- The relationship between the cognitive and body features.

We have previously pointed out that the nervous system constitutes in its performance an operationally closed system, and this fact poses a fundamental problem: How can we understand the relationships between the nervous system and the rest of the organism (what we usually call the body) if the whole of it is also to be characterized as an autonomous system? If the self-maintenance of the body is expressed by means of metabolism, how can we interpret the set of constraints exerted by the nervous system on it?

This is a difficult question. Those who stress the embeddedness of the cognitive system normally blur its autonomy and ontological distinction with respect to the biological level, which hinders their possibilities of generating a useful research program in the Cognitive Sciences.
But those who, on the other hand, stress the autonomy of the cognitive phenomenon from the biological level tend to disembody it to a greater or lesser degree.

If we want to avoid the problems involved in disembodied theories of cognition, it is necessary to assume that the NS is subsumed in the wholeness of the body. But as the latter is itself also an operationally closed system, we would have to interpret "the whole organism", in its turn, as a higher form of operationally closed system, in which the body would perform the dynamical level and the nervous system the informational one, in a way similar to the concept of Semantic Closure proposed by H. Pattee (i.e., 1982, 1986, 1987, 1989, 1993, 1995; see also Thompson, this issue) to explain the complementary relation between DNA and proteins in the cellular frame. This interpretation seems to us more suitable than one dealing with the body as an "environment" for the nervous system (Clark, 1995).

How is this complementarity between body and nervous system reflected? The answer to this question could be that functional meanings emerge precisely through a self-interpretation process of the nervous information. The body (metabolic-energetic-functional system) specifies or determines the "readiness" of the informational relationships. What is functionally significant for the animal constrains the performance of the nervous system, and conversely. The body controls the nervous system, and conversely.

The autonomy of the body is, in a generically biological sense, more general and global than that of the nervous system. The body is energetically connected to the environment, while the nervous system is connected informationally. This doesn't mean that they are independent processes: in fact, what is informationally relevant for the organism depends on its internal state, like thirst, sexual readiness, tiredness, etc. (Etxeberria, 1995).
In addition, the phenomena of pain and pleasure are not understandable unless we conceive the relation between the NS and the rest of the body in a globally entangled manner. The functional output of neuronal activity is not only a set of motor actions (which, in their turn, constrain sensorial inputs), but a more encompassing effect of metabolic constraining action (hormonal secretion, etc.) which, ultimately, accounts for the whole sensorial flow (including the meaningfulness of 'painful' or 'pleasant' feelings in animals). So, the body biologically constrains the nervous system, for instance by determining its basic functioning, but the converse is also true: the (operationally closed) logic of the nervous system in turn constrains how the biological functioning of the body will take place. The nervous system has phylogenetically exerted fundamental evolutionary constraints on the body of the animal, thus conditioning the development of the different bodily structures.

Functional self-interpretation of the information (the question of the emergence of epistemic meanings) is only possible through this complementary relationship. The informational nature of the relation maintained by the nervous system with the environment expresses its autonomy and operational closure, whereas the entanglement between the biological and the cognitive structures expresses the embodiment of the latter. What confers informational character on some of the patterns produced at certain levels of the nervous system is its operational closure. In the nervous system, the neural patterns that assume an informational nature are those that establish causal connections with physically independent events, due both to the operational closure of the nervous system and to that formed globally between the nervous system and the body. 
The first of these operational closures renders the process that connects the sensor surfaces with the effector mechanisms autonomous from the rest of the biological processes, thus constituting it as cognitive. The second is the mechanism by which the processes that occur at the nervous level acquire a functional meaning: ultimately, the global biological self-maintaining logic is responsible for the functional interpretation of the information of the nervous system.

5.1.- Representation.

The present perspective is, in our opinion, the only one which allows a satisfactory approach to the problem of representation in cognitive science. This concept, in its classical formulations within computationalism, has been heavily criticized in the last decade, especially for its alleged incompatibility with Connectionism (see Andler, 1988). Recently, even its abandonment has been proposed (Varela, 1989; Van Gelder, 1992/1995; Brooks, 1991). Notwithstanding, the problem with these radical positions is that they throw the baby out with the bath water, for without the idea of representation it is hardly possible to build a Cognitive Science able to explain higher-level cognitive phenomena. Therefore, even if it is possible to discuss whether or not representation is dispensable to explain a certain level of behavior, the crucial problem is that without a concept of representation it is not easy to see how a research program in cognitive science could be articulated that went from its lowest to its highest level without cleavage (as we put forth in the first section). It is true indeed that a fair amount of the debates around the dispensability of representation are due to disagreements with respect to what is cognitive, but there are also serious discrepancies and confusion around the very meaning of representation itself. Clark & Toribio (1994) hold, however, that behind this diversity a basic implicit consensus exists around the definition proposed by Haugeland (1991). 
This definition states that representation is a set of informational internal states which hold a 'standing for' (referentiality) relation toward certain traits of the environment which are not always present, and which form a general scheme systematically generating a great variety of related (also representational) states. Most of the objections and difficulties posed to this definition proceed from its abstract and disembodied character. But if we situate the idea of these referential internal states in the context of the informational patterns generated by the operational closure of the nervous system, we think that these difficulties could be solved.

Thus, the mechanism of self-interpretation of information inside the nervous system is achieved by the complementarity relationship between body and NS within the organism as an integrated whole. This is the radical meaning of the statement that the biological is the lower ground of the cognitive level.

6.- Could we have a disembodied AI?

In the previous section, on the relations that hold in those natural cognitive systems that we know between the properly cognitive system (the nervous one) and that which globally provides for the identity of the whole system as an autonomous being (the body), we have seen the deep interrelation between the two. Should we infer that any cognitive system must be based upon a similar relationship? This question can be approached in two different ways. The first consists in asking, within the frame of the universalization of biology, what the conditions are for the appearance of natural cognitive systems. It is possible to argue that the conditions we have indicated in previous sections are only the instantiation of one phenomenology, a particular case of the various possible histories of generation of cognitive systems in Nature. Nevertheless, the main features we have used to define the cognitive phenomenon satisfy the requisites posed at the end of section 2. 
Also, in any biological setting, it is logical to suppose that any kind of natural cognitive agent, whatever the circumstances of its appearance, could only be conceived as the result of some sort of evolutionary process from previous organizational and generically lifelike stages. Accordingly, the relationship framework concerning the cognitive and biological levels would follow guidelines similar to the previously described scheme.

The second way to address the question of how to generalize a universally valid theory of cognition is the attempt to build artificial cognitive systems, which is commonly known as Artificial Intelligence. In this frame, one of the more important questions to investigate is how to determine the series of conditions, structures and material processes (what Smithers (1994) has called "infrastructure") required to support the emergence of cognitive abilities in artificial systems. So far, AI has mainly tried to simulate and build expert systems or neural networks that accomplish certain kinds of externally specified cognitive tasks. Despite the success obtained in this research, one could disagree with the idea that these systems are truly cognitive because, as we have previously argued, cognition is a capacity that should be understood through its own process of appearance and development. And this implies its embeddedness in a whole biological background. Recently, however, there has been an increasing interest in relating the cognitive and biological problems, mainly due to the promising research lines that try to study and design robots capable of developing (different degrees of) autonomous adaptive behavior —the so-called Behavior Based Paradigm (Maes, 1990; Meyer & Wilson, 1991; Meyer, Roitblat & Wilson, 1992; Cliff, Husbands, Meyer & Wilson, 1994). 
The fact that autonomy should be considered the basic condition for cognition is precisely one of the bridges between Artificial Intelligence and Artificial Life.

7.- Artificial Life as a Methodological Support of a New Artificial Intelligence.

Artificial Life poses the questions about cognitive phenomena from its own point of view, as something that has to be considered inside a biological frame (not necessarily within a terrestrial scope). Most works in AL related to cognition attempt to develop cognitive abilities from artificial biological systems (whether computational models or physical agents). In this sense, it can be said that these abilities, though low-level, are generically universal because they are generated from biologically universal systems. Furthermore, in all these works it is essential that the cognitive abilities appear not as the result of a predefined purpose but as an "emergent" outcome of simpler systems. Thus, if we consider that the preceding argument about the lack of universality of Biology can evidently be translated to Cognitive Science, it would be a natural step to produce a research program to fill the gap between cognition-as-we-know-it and cognition-as-it-could-be, in which the development of artificial systems would play a major role. This poses a number of interesting questions whose answers could be of great interest in the search for a general theory of cognition. First of all, it can be asked whether artificial cognition of any kind is a specific target for Artificial Life. The question arises because of the difficulty of joining this problem together with other, more essentially biological ones, such as the origins of life, evolution, collective behavior, morphogenesis, growth and differentiation, development, adaptive behavior or autonomous agency. 
Second, should the answer be positive, there would be a problem concerning the methodological status of studies on low-level cognition: since it can be a common area of interest for Artificial Intelligence and Artificial Life, it is not clear which methodology should be applied. And third, the study of the emergence of cognitive abilities in simple lifelike artificial systems might illuminate the evolutionary conditions for the origin of specialized cognitive systems. This could be essential for a correct approach to more complex forms of cognition.

Within Artificial Life itself, however, we may distinguish two basic perspectives for facing the problem of designing cognitive agents: the "externalist" one and the "internalist" one. In the externalist position, cognition is understood as a process that arises from an interactive dynamical relation, fundamentally alien to the very structure (body) of the cognitive agent, while according to the internalist position, cognition is the result of a (more) fundamental embodiment that makes it possible for evolution to create structures that are internally assigned interactive rules (Etxeberria, 1994). Most of the work done in computer simulations -and practically all in robotic realizations- belongs to the first perspective. For practical reasons, the "internalist" view can hardly be developed, for now, otherwise than by means of computational models. In both positions, autonomy and embodiment are established gradually. The externalist position is well represented by the aforementioned behavior based paradigm, one of whose main characteristics is the building of physical devices for evaluating cognitive models. 
This represents an advantage in many respects, because interactions in real, noisy environments turn out to be much more complex than in simulations. In the externalist position, the parameters to control the agent are measured from the situation in which the agent itself is placed, and the results are put in dynamic interaction with the effector devices. Its performance is controlled by adaptive mechanisms that operate from the point of view of the agent itself: but the agent's body is essentially only a place. Although this position represents a significant advance with respect to the position of classic Artificial Intelligence, and even with respect to some connectionist viewpoints, in fact it still falls within the traditional endeavor of designing cognitive agents while disregarding the conditions that constitute them as generically autonomous, i.e. (full-fledged) biological, systems. The consideration of the body as essentially a place means that the co-constructive (co-evolutionary) aspect of the interaction between agent and environment (Lewontin, 1982, 1983) is ignored. Autonomy (seen as the ability of self-modification) is restricted to the sensorimotor level (what Cariani (1989) has called the syntactic level of emergence). Thus, the plasticity of the agent's cognitive structure is ultimately independent of its global structure (which is neither self-maintained, nor self-produced, nor evolutionary). As long as the autonomy in the solution of the cognitive problems involved in these agents is considered fundamentally external to the process of constructive self-organization of the cognitive system itself (Moreno & Etxeberria, 1992; Etxeberria, Merelo & Moreno, 1994), their ability to create their own world of meanings by themselves (their autonomy) will be very limited (Smithers, this issue).

In the second perspective, the cognitive autonomy of the agent is approached in a much more radical way, since its frame is biological autonomy itself. 
Nevertheless, we will see that here too it is possible for positions to reappear that have been criticized in previous sections for their strict identification of cognitive and biological mechanisms. We certainly have to agree with the idea in Varela, Thompson & Rosch (1991) that the design of agents with cognitive functions should be understood in the frame of the very process that constitutes the agent as an autonomous entity (that is to say, its biological constitution). But, as we have said earlier, this ability is not enough to explain the emergence of cognitive capacities. Biology shows that the emergence of autonomous agents does not take place without 1) a process of constitution of a net of other autonomous agents and 2) a process that occurs through variations in reproduction and selection at its expression level. It is evident that, in the biological frame, the environment of a cognitive agent is mainly the result of the action (along with evolutionary processes) of the cognitive organisms themselves and of other biological entities with which they have co-evolved. This is important because it means that, while the environment of biological systems is itself a biological (eco)system, the environment of cognitive agents is, to a great extent, a cognitive environment (communication). Thus, the study of cognition in natural systems leads us to the conclusion that the functionality or cognitive meaning of the world for an agent emerges from this process of co-evolution. If we propose to apply this idea to the design of artificial cognitive systems, it is because only from this perspective can a research program be established that ends up in the creation of truly autonomous cognitive systems, i.
e., systems that define their cognitive interaction with their environment by themselves. This leads us to the necessity of adopting an Artificial Life research program in which evolutionary processes can have a fundamental role in the constitution, in the agent, of its own cognitive structure.

The so-called "evolutionary robotics" research project has tried to face this problem by redesigning the cognitive structure of the agent from an evolutionary perspective (in the current state of technology this cannot be done except in a purely computational universe4). In these models, a phenotype and a genotype are considered the fundamental primitives of an evolutionary process. But the phenotype as such is reduced to a nervous system scheme (that is, a neural net) (Floreano & Mondada, 1994; Yamauchi & Beer, 1994; Nolfi et al., 1995; Jakobi et al., 1995; Gomi & Griffith, 1996). One of the most interesting aspects of this research is the different attempts to evaluate, in a realistic, physical context, the evolutionary design of the cognitive system of the agents. In some cases there is even an on-going physical evaluation of the computational evolutionary design, as in Harvey et al. (1994). All this work represents a significant advance, but it involves a problematic identification between the phenotype of an agent and its nervous system. That is to say, the complementary relationship between nervous system and body, which we have argued to be fundamental in previous sections, is still absent (because there is no proper body). Hence, the problem of designing, in an evolutionary scenario, agents whose structure is set up as a complementary interplay between their metabolic and neural organizations remains basically unexplored. Some authors (Bersini, 1994; Parisi, this issue; Smithers, 1994) have presented critical proposals regarding this predominant approach of considering the phenotype of an agent to be only its nervous system. 
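The evolutionary scheme in question — a genotype evolved under a fitness measure, with the phenotype reduced to a neural-net controller — can be sketched in a few lines. This is a minimal illustrative sketch, not any of the cited systems: the network topology, the task and the fitness function are invented for the example.

```python
import math
import random

GENOME_LEN = 6  # weights of a 2-input, 2-hidden-unit, 1-output controller

def controller_output(genome, sensors):
    """Feedforward 'nervous system': 2 sensors -> 2 tanh units -> 1 motor."""
    w = genome
    h0 = math.tanh(w[0] * sensors[0] + w[1] * sensors[1])
    h1 = math.tanh(w[2] * sensors[0] + w[3] * sensors[1])
    return math.tanh(w[4] * h0 + w[5] * h1)

def fitness(genome):
    """Toy externally specified task (an XOR-like sensorimotor mapping)."""
    cases = [((0.0, 0.0), -1.0), ((0.0, 1.0), 1.0),
             ((1.0, 0.0), 1.0), ((1.0, 1.0), -1.0)]
    return -sum((controller_output(genome, s) - t) ** 2 for s, t in cases)

def evolve(pop_size=30, generations=100, sigma=0.3, seed=0):
    """Elitist truncation selection with Gaussian mutation on the weights."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))       # best-so-far is non-decreasing
        survivors = pop[: pop_size // 2]      # elitism: best half kept intact
        children = [[g + rng.gauss(0.0, sigma) for g in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness), history
```

The sketch makes the limitation discussed above concrete: only the controller weights evolve, while the "body" is frozen inside the fitness function, externally specified by the designer rather than co-determined by the agent's own organization.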
But the solution to this problem is linked to two deep questions that are very difficult to solve. One is how to generate, from a global evolutionary process of the organism, the structure of a system such as the nervous one. The other is how to generate cognitive abilities through a process of co-evolutionary interaction among agents. We think that research on the problem of the origin of cognition has to undertake, as its main task, the combined resolution of both kinds of problems. But the research program of evolutionary robotics is based on physical realizations, and this circumstance entails, given the level of current technology, a series of limitations for the exploration of the above-mentioned issues. Therefore, the study of such problems has to be carried out fundamentally by means of computational models.

With respect to the first of these issues, some recent work has been done which offers interesting insights. These works develop models in which neuronlike structures are generated from evolutionary processes that produce cellular differentiation. The model by Dellaert & Beer (1995) shows an effort to avoid the direct mapping from genotype to phenotype. This is achieved through the implementation of three successive levels of emergent structures (molecular, cellular and organismal). In that sense, it represents an attempt to design epigenetic (ontogenetic or morphogenetic) processes to develop more realistic phenotypic structures. More recently, Kitano (1995) has also developed a model in which the structure of a system similar to the nervous system appears through a process of cell differentiation. The most interesting aspect of Kitano's work is the generation of a "tissue" made of cells which are connected to one another through axonal structures. 
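The idea of an indirect, developmental genotype-to-phenotype mapping can be illustrated with a toy model of our own (not Dellaert & Beer's or Kitano's): the genome is a small table of division and differentiation rules, and the "tissue" is grown by iterating them, so that a single-rule mutation reshapes the whole phenotype rather than one locus of it.

```python
def develop(genome, steps=4):
    """Grow a 1-D 'tissue' from a single cell.

    genome maps a cell type to the tuple of daughter types it divides into;
    a type absent from the table is terminally differentiated (copies itself).
    """
    tissue = ["stem"]
    for _ in range(steps):
        new = []
        for cell in tissue:
            new.extend(genome.get(cell, (cell,)))
        tissue = new
    return tissue

genome = {
    "stem":   ("neural", "stem"),   # asymmetric division
    "neural": ("neuron", "glia"),   # differentiation step
}

print(develop(genome, steps=3))
# -> ['neuron', 'glia', 'neuron', 'glia', 'neural', 'stem']
```

Real models of this kind (Dellaert & Beer, 1995; Kitano, 1995) replace this lookup table with genetic regulatory dynamics, but the structural point is the same: the phenotype is grown, not read off the genome directly.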
Nevertheless, none of these works address the emergence of cognitive functionalities. There is another important question which these models do not address: in the process of constitution of cognitive structures (and, in general, in the whole morphogenetic process), the interaction with the environment is not considered, and, therefore, the role that coevolution with other cognitive organisms plays in the genesis and development of the cognitive system is ignored. If we want to understand in which way and under which conditions some organisms can give origin to a cognitive system, it is necessary to take as a starting point a collection of organisms that have developed a considerable level of complexity. An interesting work which confronts the development of cognitive abilities in an artificial world from a co-evolutionary perspective is that of Sims (1994). In contrast with the previously mentioned models, in this case the stress is placed on the emergence of cognitive functionalities. In this model there is a bodylike structure formed by rigid parts. Rather than being inspired by biochemical-type processes, these parts behave more as physical mechanical structures. The fitness function is based on a contest in which organisms compete with each other. An innovative advance of this model is that the neural net (though it is not metabolically embedded) is structured in two levels (local and global). But this structure is introduced more on the basis of considerations about the physics of cognitive processes than of globally biological ones. In this sense, Sims' model involves a greater abstraction of a series of processes situated at the interface between the metabolic and the neural levels. 
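The contest-based fitness just mentioned can be sketched abstractly. In this deliberately trivial sketch (ours, not Sims'), each "organism" is a single scalar strategy and the contest a simple comparison; the point is only that fitness is relative to the current population, a moving target in the Red Queen sense (Van Valen, 1973), rather than an externally fixed function.

```python
import random

def contest(a, b):
    """Pairwise contest: 1 if strategy a beats b, else 0.
    (A stand-in for Sims' physical competitions, e.g. reaching a cube first.)"""
    return 1 if a > b else 0

def relative_fitness(individual, population):
    """Fitness = contests won against the current population: there is
    no absolute, externally specified target to optimize."""
    return sum(contest(individual, other)
               for other in population if other is not individual)

def coevolve(pop_size=20, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda x: relative_fitness(x, pop),
                        reverse=True)
        winners = ranked[: pop_size // 2]
        pop = winners + [w + rng.gauss(0.0, 0.05) for w in winners]
    return pop
```

Because every individual's score depends on the others, improvement is only ever relative to the rest of the population. In Sims' model this relativity is realized through physical competitions between evolved creatures.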
Although Sims' model includes energetic considerations in the development of its cognitive functionalities, these considerations ignore the basic relation with the metabolic level (the network which ensures the self-maintenance of the whole system —the "infrastructure").

The problem is how to integrate these works with each other. In Kitano's model, the emergent functionality is manifested through the formation of a neuronlike structure. What is perhaps still lacking is two new levels in the model: firstly, a level at which newly formed neuronlike structures perform some control task —constraint— over the whole of the body; and, secondly, the appearance of a new level, derived from a co-evolutionary process among organisms, able to generate, in its turn, new functionalities as very basic cognitive behaviors. This task is one of great complexity. It is difficult to determine which fundamental elements have to be part of the model and which are dispensable. And the same happens at different levels, which makes the development of the model even more complicated. One of the biggest difficulties surely consists in searching for rules that transform genotypic structures into non-arbitrary phenotypic expressions (morphogenesis), which requires linking them to the realization of new functionalities. This, in its turn, is linked to the generation of forms of "downwards causation" (Campbell, 1974). All this implies serious difficulties, because the appearance of functional abilities must not be facilitated by means of an artificial simplification of the rules at the high level ("complex" parts) of the model.

What has been said to this point is more a review of the approaches to the problem of cognition within AL than a clear proposal of solutions. Notwithstanding, we think that a correct estimation of what the fundamental frame is (underlying levels of complexity, etc.) 
within which the issue of the appearance of cognitive abilities is posed constitutes by itself an important advance, considering the current context of AL (and of AI too). It is true that in the AL research program there is a characteristic emphasis on bottom-up methodology, as well as a greater insistence on the principle of embodiment with respect to the classical positions in AI. However, when reviewing most of the works that confront the study of cognitive functionalities from the AL perspective, it is easy to see the lack of unanimity and even the absence of clear criteria regarding the kind of problem we ought to solve in order to adequately state the emergence of such capacities.

8.- Conclusions.

In the preceding section we have seen that the complexity of the relation between the system supporting cognitive abilities and the whole organism has led to frequent misunderstandings. Sometimes the deep embeddedness of the cognitive system in its biological substrate has been ignored (as has happened, and still happens, in classical Artificial Intelligence, where the construction of disembodied artificial cognitive systems is attempted); at other times the autonomy of cognitive phenomena has been neglected, subsuming it under generic adaptive behavior. At the root of these difficulties lies the fundamental problem of the origin of cognition. On the answer given to this question depends the kind of research program in the cognitive sciences and, even more, the autonomy of the cognitive sciences with respect to biology, on the one hand, and their grounding, on the other. The problem is that neither Biology nor Cognitive Science today provides a satisfactory theory about the origin of cognitive systems. AL research can, however, help in developing such a theory. 
In this way, the knowledge that we gradually acquire about the conditions that make possible the arising of cognitive systems in artificial organisms will be endowed with a higher generality than that of classical biological studies.

What we have proposed here is that the origin of cognition in natural systems (cognition as we know it) is the result of the appearance of an autonomous system —the Nervous System— embedded in another, more generic one —the whole organism. This basic idea is complemented with another: the formation and development of this system, in the course of evolution, cannot be understood except as the outcome of a continuous process of interaction between organisms and environment, between different organisms, and, especially, among the cognitive organisms themselves. The possibility of generalizing this conception of the origin of cognition rests on AL. AL nowadays offers new tools which make it possible to establish the foundations of a theory about the origin of cognition as-it-could-be. This should be, precisely, the bridge between Artificial Life and Artificial Intelligence. Our suggestion is that those investigations in AL should satisfy the two previously mentioned conditions —autonomy and co-evolution— in order to be able to connect, in their turn, with the foundations of a new research program in AI. It is conceivable to hope that the result of all this might rearrange the research programs of both Artificial Life and Artificial Intelligence so that they gradually converge, though not necessarily in a global merging process, but rather by finding a well-established common research area. This mutual encounter seems more likely today, since within Artificial Intelligence there is a growing line of research on situated systems, with increasing degrees of autonomy, whose main ability does not concern the solution of very complex problems, but rather the ability to functionally modify the statement of easier ones. So to say, agents capable of doing simple things in a more autonomous way. 
And, in its turn, systems that could be considered "the primordial soup" allowing the emergence of agents with primitive cognitive functions are starting to be taken into consideration within Artificial Life. Should this confluence be achieved, Artificial Life would not only have contributed to establishing the bases of Biology as the science of all possible life, but also those of Cognitive Science as the science of all possible cognition.

References.

Andler, D. 1988. Representations in Cognitive Science: Beyond the pro and con. Manuscript.
Ashby, W. R. 1956. An Introduction to Cybernetics. London: Chapman & Hall.
Bersini, H. 1994. Reinforcement learning for homeostatic endogenous variables. In D. Cliff et al. (Eds.), From Animals to Animats 3, pp. 325-333.
Bertalanffy, L. von 1968. General Systems Theory: Foundations, Development, Applications. New York: George Braziller.
Bertalanffy, L. von 1975. Perspectives on General Systems Theory. Scientific-Philosophical Studies. New York: George Braziller.
Brooks, R. & Maes, P. (Eds.) 1994. Artificial Life IV. Cambridge, MA: MIT Press.
Brooks, R. A. 1991. Intelligence without representation. Artificial Intelligence, 47, 139-159.
Campbell, D. T. 1974. Downwards causation in hierarchically organized biological systems. In F. J. Ayala & T. Dobzhansky (Eds.) Studies in the Philosophy of Biology, London: Macmillan, pp. 179-186.
Cariani, P. 1989. On the design of devices with emergent semantic functions. Ph.D. Dissertation, State University of New York at Binghamton.
Clark, A. & Toribio, J. 1994. Doing without representing? Synthese, 101, 401-431.
Clark, A. 1995. Autonomous agents and real-time success: Some foundational issues. In IJCAI'95.
Clark, A. & Grush, R. 1996. Towards a cognitive robotics. Manuscript.
Cliff, D., Husbands, P., Meyer, J.-A. & Wilson, J. S. (Eds.) 1994. From Animals to Animats 3. Proceedings of the Third Conference on Simulation of Adaptive Behaviour (SAB94). Cambridge, MA: MIT Press.
Cowan, G. A., Pines, D. & Meltzer (Eds.) 
1994. Complexity. Reading, MA: Addison-Wesley.
Churchland, P. S. & Sejnowski, T. 1993. The Computational Brain. Cambridge, MA: MIT Press.
Dellaert, F. & Beer, R. 1995. Toward an evolvable model of development for autonomous agent synthesis. In R. Brooks & P. Maes (Eds.) Artificial Life IV, pp. 246-257.
DRABC'94 - Proceedings of the III International Workshop on Artificial Life and Artificial Intelligence "On the Role of Dynamics and Representation in Adaptive Behaviour and Cognition". Dept. of Logic & Philosophy of Science, University of the Basque Country.
Emmeche, C. 1994. The Garden in the Machine: The Emerging Science of Artificial Life. Princeton, NJ: Princeton University Press.
Etxeberria, A. 1994. Cognitive bodies. In DRABC'94, pp. 157-159.
Etxeberria, A. 1995. Representation and embodiment. Cognitive Systems, 4(2), 177-196.
Etxeberria, A., Merelo, J. J. & Moreno, A. 1994. Studying organisms with basic cognitive capacities in artificial worlds. Cognitiva, 3(2), 203-218; Intellectica, 10(4); Kognitionswissenschaft, 4(2), 75-84; Communication and Cognition-Artificial Intelligence, 11(1-2), 31-53; Sistemi Intelligenti.
Floreano, D. & Mondada, F. 1994. Automatic creation of an autonomous agent: Genetic evolution of a neural-network driven robot. In D. Cliff et al. (Eds.), From Animals to Animats 3, pp. 421-430.
Gibson, J. J. 1979. The Ecological Approach to Visual Perception. Boston, MA: Houghton-Mifflin.
Gomi, T. & Griffith, A. 1996. Evolutionary Robotics - An overview. In Proceedings of the 1996 IEEE International Conference on Evolutionary Computation (ICEC 96), Nagoya (Japan), May 20-22, pp. 40-49.
Harnad, S. 1990. The symbol grounding problem. Physica D, 42, 335-346.
Harvey, I., Husbands, P. & Cliff, D. 1994. Seeing the light: Artificial evolution, real vision. In D. Cliff et al. (Eds.), From Animals to Animats 3, pp. 392-401.
Haugeland, J. 1991. Representational genera. In W. Ramsey, S. Stich & D. Rumelhart (Eds.) Philosophy and Connectionist Theory, Hillsdale, NJ: L. Erlbaum, pp. 
61-90.
Heschl, A. 1990. L=C. A simple equation with astonishing consequences. Journal of Theoretical Biology, 185, 13-40.
Jakobi, N., Husbands, P. & Harvey, I. 1995. Noise and the reality gap: The use of simulation in Evolutionary Robotics. In F. Morán et al. (Eds.) Advances in Artificial Life, pp. 704-720.
Keeley, B. L. 1993. Against the global replacement: On the application of the philosophy of AI to AL. In C. Langton (Ed.) Artificial Life III, Reading, MA: Addison-Wesley, pp. 569-587.
Kitano, H. 1995. Cell differentiation and neurogenesis in evolutionary large scale chaos. In F. Morán et al. (Eds.) Advances in Artificial Life, pp. 341-352.
Langton, C. (Ed.) 1989. Artificial Life. Reading, MA: Addison-Wesley.
Lewontin, R. C. 1982. Organism and environment. In H. C. Plotkin (Ed.) Learning, Development, and Culture, New York: John Wiley & Sons, pp. 151-170.
Lewontin, R. C. 1983. The organism as the subject and the object of evolution. Scientia, 118, 65-82.
Maes, P. (Ed.) 1990. Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back. Cambridge, MA: MIT Press.
Maturana, H. R. & Varela, F. J. 1980. Autopoiesis and Cognition. Dordrecht: Reidel (Kluwer).
Meyer, J.-A. & Wilson, S. W. (Eds.) 1991. From Animals to Animats 1. Proceedings of the First International Conference on Simulation of Adaptive Behavior. Cambridge, MA: MIT Press.
Meyer, J.-A., Roitblat, H. L. & Wilson, S. W. (Eds.) 1992. From Animals to Animats 2. Proceedings of the Second International Conference on Simulation of Adaptive Behavior. Cambridge, MA: MIT Press.
Morán, F., Moreno, A., Merelo, J. J. & Chacón, P. (Eds.) 1995. Advances in Artificial Life. Proceedings of the 3rd European Conference on Artificial Life (ECAL95). Berlin: Springer.
Moreno, A. & Etxeberria, A. 1992. Self-reproduction and representation: The continuity between biological and cognitive phenomena. Uroboros, II(1), 131-151.
Moreno, A., Etxeberria, A. & Umerez, J. 1994. Universality without matter? In R. Brooks & P. Maes (Eds.) 
Artificial Life IV, pp. 406-410.
Moreno, A., Umerez, J. & Fernández, J. 1994. Definition of life and research program in Artificial Life. Ludus Vitalis. Journal of the Life Sciences, II(3), 15-33.
Newell, A. 1980. Physical symbol systems. Cognitive Science, 4, 135-183.
Nolfi, S. et al. 1995. How to evolve autonomous robots: Different approaches in evolutionary robotics. In R. Brooks & P. Maes (Eds.) Artificial Life IV, pp. 190-197.
Parisi, D. Artificial Life and Higher Level Cognition. (this issue).
Pattee, H. H. 1982. Cell psychology: An evolutionary approach to the symbol-matter problem. Cognition and Brain Theory, 5(4), 325-341.
Pattee, H. H. 1986. Universal principles of measurement and language functions in evolving systems. In J. L. Casti & A. Karlqvist (Eds.) Complexity, Language, and Life, Berlin: Springer-Verlag, pp. 268-281.
Pattee, H. H. 1987. Instabilities and information in biological self-organization. In F. E. Yates (Ed.) Self-Organizing Systems: The Emergence of Order, New York: Plenum, pp. 325-338.
Pattee, H. H. 1989. The measurement problem in artificial world models. BioSystems, 23, 281-290.
Pattee, H. H. 1993. The limitations of formal models of measurement, control, and cognition. Applied Mathematics and Computation, 56, 111-130.
Pattee, H. H. 1995. Evolving self-reference: Matter, symbols, and semantic closure. Communication and Cognition-Artificial Intelligence, 12(1-2), 9-27.
Pines, D. (Ed.) 1987. Emerging Syntheses in Science. Reading, MA: Addison-Wesley.
Pylyshyn, Z. 1984. Computation and Cognition. Cambridge, MA: MIT Press.
Simon, H. A. 1969 (1981, 2nd ed.). The Sciences of the Artificial. Cambridge, MA: MIT Press.
Sims, K. 1994. Evolving 3D morphology and behavior by competition. In R. Brooks & P. Maes (Eds.) Artificial Life IV, pp. 28-39.
Smithers, T. 1994. What the dynamics of adaptive behaviour and cognition might look like in agent-environment interaction systems. In DRABC'94, pp. 134-153.
Smithers, T. Autonomy in Robots and Other Agents. 
(this issue).
Smolensky, P. 1988. On the proper treatment of connectionism. Behavioral and Brain Sciences, 11, 1-74.
Sober, E. 1992. Learning from functionalism. Prospects for strong Artificial Life. In C. G. Langton, J. D. Farmer, S. Rasmussen & C. E. Taylor (Eds.) Artificial Life II, Reading, MA: Addison-Wesley, pp. 749-765.
Stewart, J. 1992. Life = Cognition. The epistemological and ontological significance of Artificial Intelligence. In F. Varela & P. Bourgine (Eds.) Toward a practice of autonomous systems. Proceedings of the 1st European Conference on Artificial Life (ECAL91), Cambridge, MA: MIT Press, pp. 475-483.
Thompson, E. Symbol grounding: A bridge from Artificial Life to Artificial Intelligence. (this issue).
Turvey, M. & Carello, C. 1981. Cognition: The view from Ecological Realism. Cognition, 10, 313-321.
Umerez, J. & Moreno, A. 1995. Origin of life as the first MetaSystem Transition - Control hierarchies and interlevel relation. World Futures, 45, 139-154.
Umerez, J. 1995. Semantic Closure: A guiding notion to ground Artificial Life. In F. Morán et al. (Eds.) Advances in Artificial Life, pp. 77-94.
Van Gelder, T. 1992/1995. What might cognition be if not computation? Technical Report 75, Indiana University, Cognitive Sciences / Journal of Philosophy, XCII(7), 345-381.
Van Valen, L. 1973. A New Evolutionary Law. Evolutionary Theory, 1, 1-30.
Varela, F. J., Thompson, E. & Rosch, E. 1991. The embodied mind. Cognitive science and human experience. Cambridge, MA: MIT Press.
Varela, F. J. 1989. Connaître. Les sciences cognitives: tendances et perspectives. Paris: Seuil.
Varela, F. J. Patterns of Life: Intertwining Identity and Cognition. (this issue).
Wiener, N. 1948 [1961]. Cybernetics or control and communication in the animal and the machine. Cambridge, MA: MIT Press.
Yamauchi, B. & Beer, R. 1994. Integrating reactive, sequential, and learning behavior using dynamical neural networks. In D. Cliff et al.
(Eds.) From Animals to Animats 3, pp. 382-391.

Acknowledgements.
The authors are very grateful to Arantza Etxeberria for her suggestions and ideas on the original drafts, and to her, Andy Clark, Pablo Navarro and Tim Smithers for the comments and discussions that helped to clarify some obscure passages of the final draft. This research was supported by Research Project PB92-0456 of the DGICYT-MEC (Ministerio de Educación y Ciencia, Spain) and by Research Project 230-HA 203/95 of the University of the Basque Country. Jon Umerez holds a Postdoctoral Fellowship from the Basque Government.

Footnotes.
1.- In this paper we use the concept of autonomy in two senses, which will be distinguished in each case: as a general idea of self-sustaining identity, and as the more concrete result of some kind of operational closure. See Smithers (this issue) for a fuller and more encompassing treatment of the plural and different uses of the concept and its related terms.
2.- As to the suggestion that the immune system could be considered cognitive, we would say that, rather than being cognitive, the immune system is a system in which processes similar to those of biological evolution take place, but inside an individual and in the span of a few hours or days (instead of across populations and millions of years). Its functionality and speed notwithstanding, it is better described as a very complex adaptive system operating in somatic time than as a cognitive one. Functionally, it is not in direct relationship with sensorimotor coordination (it is not functionally linked to directed movement). Furthermore, the immune system has developed only within the frame of certain cognitive organisms (vertebrate animals) and does not exist in non-cognitive evolved organisms.
It is therefore possible to ask whether the appearance of the immune system was not propitiated precisely by the development of complex forms of identity like the one arising through the entanglement between the metabolic and nervous operational closures.
3.- This leads us to interpret the information in the NS as metabolic global patterns that in turn modulate a flow of chemical and physical processes in a circular manner. The NS is connected to the outside through diverse organs, sensors and effectors (two levels of exteriority: outside the nervous system and outside the whole organism, the latter being the most important). Accordingly, we cannot interpret functional states correlated with external processes as informational when these states are merely metabolic ones (e.g. in bacteria, paramecia or plants). However, in the case of adaptive metabolic changes that take place in animals, both levels (metabolic and informational patterns) are surely strongly interconditioned.
4.- Even so, given the impossibility of artificially building agents capable of self-production and self-reproduction, and, even more, of disposing of the necessary long periods of time, we are obliged to resort to the computational simulation of evolutionary processes.