
Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian

Hubert L. Dreyfus

I. The Convergence of Computers and Philosophy

When I was teaching at MIT in the early sixties, students from the Artificial Intelligence Laboratory would come to my Heidegger course and say in effect: "You philosophers have been reflecting in your armchairs for over 2000 years and you still don't understand intelligence. We in the AI Lab have taken over and are succeeding where you philosophers have failed. We are now programming computers to exhibit human intelligence: to solve problems, to understand natural language, to perceive, and to learn."1

In 1968 Marvin Minsky, head of the AI Lab, proclaimed: "Within a generation we will have intelligent computers like HAL in the film 2001."2

As luck would have it, in 1963 I was invited by the RAND Corporation to evaluate the pioneering work of Allen Newell and Herbert Simon in a new field called Cognitive Simulation (CS). Newell and Simon claimed that both digital computers and the human mind could be understood as physical symbol systems, using strings of bits or streams of neuron pulses as symbols representing the external world. Intelligence, they claimed, merely required making the appropriate inferences from these internal representations. As they put it: "A physical symbol system has the necessary and sufficient means for general intelligent action."3

As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS and AI had learned a lot, directly and indirectly, from the philosophers. They had taken over Hobbes' claim that reasoning was calculating, Descartes' mental representations, Leibniz's idea of a "universal characteristic" -- a set of primitives in which all knowledge could be expressed -- Kant's claim that concepts were rules, Frege's formalization of such rules, and Wittgenstein's postulation of logical atoms in his Tractatus. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.

At the same time, I began to suspect that the insights formulated in existentialist armchairs, especially Heidegger's and Merleau-Ponty's, were bad news for those working in AI laboratories -- that, by combining representationalism, conceptualism, formalism, and logical atomism into a research program, AI researchers had condemned their enterprise to confirm a failure.

II. Symbolic AI as a Degenerating Research Program

Using Heidegger as a guide, I began to look for signs that the whole AI research program was degenerating. I was particularly struck by the fact that, among other troubles, researchers were running up against the problem of representing significance and relevance -- a problem that Heidegger saw was implicit in Descartes' understanding of the world as a set of meaningless facts to which the mind assigned what Descartes called values and John Searle now calls function predicates.

But, Heidegger warned, values are just more meaningless facts. To say a hammer has the function of being for hammering leaves out the defining relation of hammers to nails and other equipment, to the point of building things, and to our skills -- all of which Heidegger called readiness-to-hand -- and so attributing functions to brute facts couldn't capture the meaningful organization of the everyday world. "[B]y taking refuge in 'value'-characteristics," Heidegger said, "we are ... far from even catching a glimpse of being as readiness-to-hand ..."4

Minsky, unaware of Heidegger's critique, was convinced that representing a few million facts about objects, including their functions, would solve what had come to be called the commonsense knowledge problem. It seemed to me, however, that the real problem wasn't storing millions of facts; it was knowing which facts were relevant in any given situation. One version of this relevance problem is called the frame problem. If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which might have to be updated?

As Michael Wheeler in his recent book, Reconstructing the Cognitive World, puts it:

[G]iven a dynamically changing world, how is a nonmagical system ... to take account of those state changes in that world ... that matter, and those unchanged states in that world that matter, while ignoring those that do not? And how is that system to retrieve and (if necessary) to revise, out of all the beliefs that it possesses, just those beliefs that are relevant in some particular context of action?5

Minsky suggested that, to avoid the frame problem, AI programmers could use descriptions of typical situations like going to a birthday party to list and organize those, and only those, facts that were normally relevant. Perhaps influenced by a computer science student who had taken my phenomenology course, Minsky suggested a structure of essential features and default assignments -- a structure Husserl had already proposed and called a frame.6
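The kind of structure Minsky had in mind can be suggested with a toy sketch. The slot names and defaults below are invented for illustration, not Minsky's own; the shape is what matters: a stereotyped situation carries default assignments that observation can override.

```python
# A toy Minsky-style frame: a stereotyped situation described by slots
# with default assignments. (Slot names and defaults are invented here
# for illustration; Minsky's frames were considerably richer.)

birthday_party = {
    "event_type": "birthday-party",
    "slots": {
        "guest_of_honor": None,   # must be filled from the situation
        "gift": "present",        # default assignment
        "food": "cake",           # default assignment
        "activity": "games",      # default assignment
    },
}

def instantiate(frame, observations):
    """Fill a frame's slots from observed facts, keeping the defaults
    for everything the current situation leaves unspecified."""
    filled = dict(frame["slots"])
    filled.update(observations)
    return filled

print(instantiate(birthday_party, {"guest_of_honor": "Ann"}))
```

Note what the sketch quietly presupposes: some prior step has already decided that the birthday-party frame, and not, say, the restaurant frame, is the relevant one.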

But a system of frames isn't in a situation, so in order to select the possibly relevant facts in the current situation one would need frames for recognizing situations like birthday parties, and for telling them from other situations such as ordering in a restaurant. But how, I wondered, could the computer select from the supposed millions of frames in its memory the relevant frame for selecting the birthday party frame as the relevant frame, so as to see the current relevance of an exchange of gifts? It seemed to me obvious that any AI program using frames to organize millions of meaningless facts so as to retrieve the currently relevant ones was going to be caught in a regress of frames for recognizing relevant frames for recognizing relevant facts, and that, therefore, the commonsense knowledge storage and retrieval problem wasn't just a problem but was a sign that something was seriously wrong with the whole approach.

Unfortunately, what has always distinguished AI research from a science is its failure to face up to and learn from its failures. In this case, to avoid facing the relevance problem, the AI programmers at MIT in the sixties and early seventies limited their programs to what they called micro-worlds -- artificial situations in which the small number of features that were possibly relevant was determined beforehand. Since this approach obviously avoided the real-world frame problem, PhD students were compelled to claim in their theses that their micro-worlds could be made more realistic, and that the techniques they introduced could be generalized to cover commonsense knowledge. There were, however, no successful follow-ups.7

The work of Terry Winograd is typical. His "blocks-world" program, SHRDLU, responded to commands in ordinary English instructing a virtual robot arm to move blocks displayed on a computer screen. It was the parade case of a micro-world program that really worked -- but of course only in its micro-world. So to develop the expected generalization of his techniques, Winograd started working on a new Knowledge Representation Language (KRL). His group, he said, was "concerned with developing a formalism, or 'representation,' with which to describe ... knowledge." And he added: "We seek the 'atoms' and 'particles' of which it is built, and the 'forces' that act on it."8

But his approach wasn't working either. Indeed, Minsky has recently acknowledged in Wired magazine that AI has been brain dead since the early '70s, when it encountered the problem of commonsense knowledge.9 Winograd, however, unlike his colleagues, was courageous enough to try to figure out what had gone wrong. So in the mid-seventies he began having weekly lunches with John Searle and me to discuss his problems in a broader philosophical context. Looking back, Winograd says: "My own work in computer science is greatly influenced by conversations with Dreyfus over a period of many years."10

After a year of such conversations, and after reading the relevant texts of the existential phenomenologists, Winograd abandoned work on KRL and began including Heidegger in his Computer Science courses at Stanford. In so doing, he became the first high-profile deserter from what was, indeed, becoming a degenerating research program. John Haugeland now refers to the symbolic AI of that period as Good Old Fashioned AI -- GOFAI for short -- and that name has been widely accepted as capturing its current status. Michael Wheeler argues explicitly that a new paradigm is already taking shape. He maintains:

[A] Heideggerian cognitive science is ... emerging right now, in the laboratories and offices around the world where embodied-embedded thinking is under active investigation and development.11

Wheeler's well informed book could not have been more timely since there are now at least three versions of supposedly Heideggerian AI that might be thought of as articulating a new paradigm for the field: Rodney Brooks' behaviorist approach at MIT, Phil Agre's pragmatist model, and Michael Wheeler's extended-mind approach.

III. Heideggerian AI, Stage One: Eliminating Representations by Building Behavior-Based Robots

Winograd sums up what happened at MIT after he left for Stanford:

For those who have followed the history of artificial intelligence, it is ironic that [the MIT] laboratory should become a cradle of "Heideggerian AI." It was at MIT that Dreyfus first formulated his critique, and, for twenty years, the intellectual atmosphere in the AI Lab was overtly hostile to recognizing the implications of what he said. Nevertheless, some of the work now being done at that laboratory seems to have been affected by Heidegger and Dreyfus.13

Here's how it happened. In March 1986, the MIT AI Lab under its new director, Patrick Winston, reversed Minsky's attitude toward me and allowed, if not encouraged, several graduate students, led by Phil Agre and John Batali, to invite me to give a talk.14 I called the talk "Why AI Researchers Should Study Being and Time." In my talk I repeated what I had written in 1972 in What Computers Can't Do: "[T]he meaningful objects ... among which we live are not a model of the world stored in our mind or brain; they are the world itself."15 And I quoted approvingly a Stanford Research Institute report that pointed out that "It turned out to be very difficult to reproduce in an internal representation for a computer the necessary richness of environment that would give rise to interesting behavior by a highly adaptive robot,"16 and concluded that "this problem is avoided by human beings because their model of the world is the world itself."17

The year of my talk, Rodney Brooks, who had moved from Stanford to MIT, published a paper criticizing the GOFAI robots that used representations of the world and problem-solving techniques to plan their movements. He reported that, based on the idea that "the best model of the world is the world itself," he had "developed a different approach in which a mobile robot uses the world itself as its own representation -- continually referring to its sensors rather than to an internal world model."18 Looking back at the frame problem, he says:

And why could my simulated robot handle it? Because it was using the world as its own model. It never referred to an internal description of the world that would quickly get out of date if anything in the real world moved.19

Although he doesn't acknowledge the influence of Heidegger directly,20 Brooks gives me credit for "being right about many issues such as the way in which people operate in the world is intimately coupled to the existence of their body."21
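The contrast with GOFAI planning can be suggested with a schematic sense-act loop in the spirit of Brooks' behavior-based architecture. The sensor readings and behavior names below are invented placeholders, not Brooks' code; the point is that each cycle consults the world afresh rather than updating a stored model of it.

```python
import random

def read_sensors():
    """Stand-in for real sensors: the robot re-samples the world on
    every cycle instead of consulting a stored world model."""
    return {
        "bumper_pressed": random.random() < 0.1,
        "obstacle_close": random.random() < 0.3,
    }

def control_step(sensors):
    """Fixed priority layers, subsumption-style: a higher layer
    overrides (subsumes) the layers below it."""
    if sensors["bumper_pressed"]:
        return "back-up-and-turn"
    if sensors["obstacle_close"]:
        return "swerve"
    return "wander"

for _ in range(5):  # a few control cycles
    print(control_step(read_sensors()))
```

Nothing in the loop can get out of date, because nothing is stored.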

Brooks' approach is an important advance, but Brooks' robots respond only to fixed features of the environment, not to context or changing significance. They are like ants, and Brooks aptly calls them "animats." Brooks thinks he does not need to worry about learning; he proposes it as a subject for future research, but it is not currently his concern.22 But by operating in a fixed world and responding only to the small set of possibly relevant features that their receptors can pick up, Brooks' animats beg the question of changing relevance and so finesse rather than solve the frame problem.

Merleau-Ponty's work, on the contrary, offers a nonrepresentational account of the way the body and the world are coupled that suggests a way of avoiding the frame problem. According to Merleau-Ponty, as an agent acquires skills, those skills are "stored," not as representations in the mind, but as a bodily readiness to respond to the solicitations of situations in the world. What the learner acquires through experience is not represented at all but is presented to the learner as more and more finely discriminated situations, and, if the situation does not clearly solicit a single response or if the response does not produce a satisfactory result, the learner is led to further refine his discriminations, which, in turn, solicit more refined responses. For example, what we have learned from our experience of finding our way around in a city is sedimented in how that city looks to us. Merleau-Ponty calls this feedback loop between the embodied agent and the perceptual world the intentional arc. He says: "Cognitive life, the life of desire or perceptual life -- is subtended by an 'intentional arc' which projects round about us our past, our future, [and] our human setting."23


Brooks comes close to a basic existentialist insight spelled out by Merleau-Ponty,24 viz. that intelligence is founded on and presupposes the more basic way of coping we share with animals, when he says:

The "simple" things concerning perception and mobility in a dynamic environment ... are a necessary basis for "higher-level" intellect. ... Therefore, I proposed looking at simpler animals as a bottom-up model for building intelligence. It is soon apparent, when "reasoning" is stripped away as the prime component of a robot's intellect, that the dynamics of the interaction of the robot and its environment are primary determinants of the structure of its intelligence.25

Brooks is realistic in describing his ambitions and his successes. He says:

The work can best be described as attempts to emulate insect-level locomotion and navigation. ... There have been some behavior-based attempts at exploring social interactions, but these too have been modeled after the sorts of social interactions we see in insects.26

Surprisingly, this modesty did not deter Brooks and Daniel Dennett from repeating the extravagant optimism characteristic of AI researchers in the sixties. As in the days of GOFAI, on the basis of Brooks' success with insect-like devices, instead of trying to make, say, an artificial spider, Brooks and Dennett decided to leap ahead and build a humanoid robot. As Dennett explained in a 1994 report to the Royal Society of London:

A team at MIT of which I am a part is now embarking on a long-term project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities.27

Dennett seems to reduce this project to a joke when he adds in all seriousness: "While we are at it, we might as well try to make Cog crave human praise and company and even exhibit a sense of humor."28 (That should have been my put-down line.)

Of course, the "long-term project" was short-lived. Cog failed to achieve any of its goals and is already in a museum.29 But, as far as I know, neither Dennett nor anyone connected with the project has published an account of the failure and asked what mistaken assumptions underlay their absurd optimism. In response to my asking what had been learned, Dennett offered one of the usual AI lame excuses for failure -- in this case, the lack of graduate students -- and put the usual misleading positive spin on what had been accomplished:

Cog never advanced beyond the toddler stage in any competence (and never got out of neonate in many others). But then, after the first few years, only two or three grad students were working on it full time. Progress was being made on all the goals, but slower than had been anticipated.30

If progress was actually being made, the graduate students wouldn't have left, or others would have continued to work on the project. Clearly some specific assumptions must have been mistaken, but all we find in Dennett's assessment is the implicit assumption that human intelligence is on a continuum with insect intelligence, and that therefore adding a bit of complexity to what has already been done with animats counts as progress toward humanoid intelligence. At the beginning of AI research, Yehoshua Bar-Hillel called this way of thinking the first-step fallacy, and my brother quipped, "it's like claiming that the first monkey that climbed a tree was making progress towards flight to the moon."

Compared to Dennett's conviction that Brooks' AI research is progressing along a continuum that will eventually lead from animats to humanly intelligent machines, Brooks is prepared to entertain the possibility that he is barking up the wrong tree. He concludes a discussion of his animats with the sober comment that:

Perhaps there is a way of looking at biological systems that will illuminate an inherent necessity in some aspect of the interactions of their parts that is completely missing from our artificial systems. ... I am not suggesting that we need go outside the current realms of mathematics, physics, chemistry, or biochemistry. Rather I am suggesting that perhaps at this point we simply do not get it, and that there is some fundamental change necessary in our thinking in order that we might build artificial systems that have the levels of intelligence, emotional interactions, long term stability and autonomy, and general robustness that we might expect of biological systems.31

We can already see that Heidegger and Merleau-Ponty would say that, in spite of the breakthrough of giving up internal symbolic representations, Brooks, indeed, doesn't get it -- that what AI researchers have to face and understand is not only why our everyday coping couldn't be understood in terms of inferences from symbolic representations, as Minsky's intellectualist approach assumed, but also why it can't be understood in terms of responses caused by fixed features of the environment, as in Brooks' empiricist approach. AI researchers need to consider the possibility that embodied beings like us take as input energy from the physical universe and respond in such a way as to open them to a world organized in terms of their needs, interests, and bodily capacities, without their minds needing to impose meaning on a meaningless given, as Minsky's frames require, or their brains needing to convert stimulus input into reflex responses, as in Brooks' animats.

At the end of this talk, I'll suggest that Walter Freeman's neurodynamics offers a radically new Merleau-Pontian approach to human intelligence -- an approach compatible with physics and grounded in the neuroscience of perception and action. But first we need to examine another approach to AI contemporaneous with Brooks' that actually calls itself Heideggerian.

IV. Heideggerian AI, Stage 2: Programming the Ready-to-Hand

In my talk at the MIT AI Lab, I not only introduced Heidegger's non-representational account of the relation of Dasein (human being) and the world, I also explained that Heidegger distinguished two modes of being: the readiness-to-hand of equipment when we are involved in using it, and the presence-at-hand of objects when we contemplate them. Out of that explanation and the lively discussion that followed grew the second type of Heideggerian AI -- the first to acknowledge its lineage.

This new approach took the form of Phil Agre's and David Chapman's program, Pengi, which guided a virtual agent playing a computer game called Pengo, in which the player and penguins kick large and deadly blocks of ice at each other.32 Agre's approach, which he called "interactionism," was more self-consciously Heideggerian than Brooks', in that Agre proposed to capture what he calls "Heidegger's account of everyday routine activities."33

In his book Computation and Human Experience, Agre takes up where my talk left off, saying:

I believe that people are intimately involved in the world around them and that the epistemological isolation that Descartes took for granted is untenable. This position has been argued at great length by philosophers such as Heidegger and Merleau-Ponty; I wish to argue it technologically.34

Agre's interesting new idea is that the world of the game in which the Pengi agent acts is made up, not of present-at-hand facts and features, but of possibilities for action that require appropriate responses from the agent. To program this involved approach, Agre used what he called "deictic representations." He tells us:

This proposal is based on a rough analogy with Heidegger's analysis of everyday intentionality in Division I of Being and Time, with objective intentionality corresponding to the present-at-hand and deictic intentionality corresponding to the ready-to-hand.35

And he explains:

[Deictic representations] designate, not a particular object in the world, but rather a role that an object might play in a certain time-extended pattern of interaction between an agent and its environment. Different objects might occupy this role at different times, but the agent will treat all of them in the same way.36

Looking back on my talk at MIT and rereading Agre's book, I now see that, in a way, Agre understood Heidegger's account of readiness-to-hand better than I did at the time. I thought of the ready-to-hand as a special class of entities, viz. equipment, whereas the Pengi program treats what the agent responds to purely as functions. For Heidegger and Agre the ready-to-hand is not a what but a for-what.37
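Agre's idea can be suggested in a few lines. The role names and rules below are invented for illustration, and Pengi's actual circuitry was far more elaborate, but the point survives: the agent's state names a functional role, and different objects may occupy that role at different times.

```python
# A toy deictic representation: the agent tracks "the-block-next-to-me",
# a role in its ongoing activity, not the identity of any particular
# block. (Role and rule names are invented for illustration.)

def bind_roles(agent_pos, blocks):
    """Rebind the deictic roles from the current situation."""
    nearest = min(blocks, key=lambda b: abs(b - agent_pos))
    next_to_me = nearest if abs(nearest - agent_pos) <= 1 else None
    return {"the-block-next-to-me": next_to_me}

def act(roles):
    # The rule fires on the role, whatever object happens to fill it.
    if roles["the-block-next-to-me"] is not None:
        return "kick-it"
    return "move-toward-nearest-block"

print(act(bind_roles(agent_pos=3, blocks=[4, 9])))   # kick-it
print(act(bind_roles(agent_pos=3, blocks=[7, 9])))   # move-toward-nearest-block
```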

As Agre saw, Heidegger wants to get at something more basic than simply a class of objects defined by their use. At his best, Heidegger would, I think, deny that a hammer in a drawer has readiness-to-hand as its way of being. Rather, he sees that, for the user, equipment is encountered as a solicitation to act, not an entity with function features. He notes that: "When one is wholly devoted to something and 'really' busies oneself with it, one does not do so just alongside the work itself, or alongside the tool, or alongside both of them 'together'."38 And he adds: "the peculiarity of what is proximally ready-to-hand is that, in its readiness-to-hand, it must, as it were, withdraw in order to be ready-to-hand quite authentically."39

As usual with Heidegger, we must ask: what is the phenomenon he is pointing out? In this case he sees that to observe our hammer or to observe ourselves hammering undermines our skillful coping. We can and do observe our surroundings while we cope, and sometimes, if we are learning, monitoring our performance as we learn improves our performance in the long run, but in the short run such attention interferes with our performance. For example, while biking we can observe passers-by, or think about philosophy, but if we start observing how we skillfully stay balanced, we risk falling over.

Heidegger struggles to describe the special, and he claims basic, way of being he calls the ready-to-hand. The Gestaltists would later talk of "solicitations," and J.J. Gibson, even later, would introduce the idea of "affordances." In Phenomenology of Perception Merleau-Ponty speaks of "motivations" and later, of "the flesh." All these terms point at what is not objectifiable -- a situation's way of drawing one into it.

In his 1925 course, Logic: The Question of Truth, Heidegger describes our most basic experience of what he later calls "pressing into possibilities" not as dealing with the desk, the door, the lamp, the chair and so forth, but as directly responding to a "what for":

What is first of all 'given' ... is the 'for writing,' the 'for going in and out,' the 'for illuminating,' the 'for sitting.' That is, writing, going-in-and-out, sitting, and the like are what we are a priori involved with. What we know when we 'know our way around' and what we learn are these 'for-what's.40

It's clear here, unlike what some people take Heidegger to suggest in Being and Time, that this basic experience has no as-structure.41 That is, when absorbed in coping, I can be described objectively as using the door as a door, but I'm not experiencing the door as a door. In coping at my best, I'm not experiencing the door at all but simply pressing into the possibility of going out. The important thing to realize is that, when we are pressing into possibilities ... there is no experience of an entity doing the soliciting; just the solicitation. Such solicitations disclose the world on the basis of which we sometimes do step back and perceive things as things.

But Agre's Heideggerian AI did not try to program this experiential aspect of being drawn in by an affordance. Rather, with his deictic representations, Agre objectified both the functions and their situational relevance for the agent. In Pengi, when a virtual ice cube defined by its function is close to the virtual player, a rule dictates the response, e.g. kick it. No skill is involved and no learning takes place.

So Agre had something right that I was missing -- the transparency of the ready-to-hand -- but he also fell short of being fully Heideggerian. For Heidegger, the ready-to-hand is not a fixed function, encountered in a predefined type of situation that triggers a predetermined response that either succeeds or fails. Rather, as we have begun to see and will soon see further, readiness-to-hand is experienced as a solicitation that calls forth a flexible response to the significance of the current situation -- a response which is experienced as either improving one's situation or making it worse.

Although he proposed to program Heidegger's account of everyday routine activities,42 Agre doesn't even try to account for how our experience feeds back and changes our sense of the significance of the next situation and what is relevant in it. By putting his virtual agent in a virtual world where all possible relevance is determined beforehand, Agre can't account for how we learn to respond to new relevancies in our everyday routine activities, and so, like Brooks, he finessed rather than solved the frame problem. Thus, sadly, his Heideggerian AI turned out to be a dead end. Happily, however, Agre never claimed he was making progress towards building a human being.

V. Pseudo-Heideggerian AI: Situated Cognition and the Embedded, Embodied, Extended Mind

In Reconstructing the Cognitive World, Wheeler praises me for putting the confrontation between Cartesian and Heideggerian ontologies to an empirical test.

Wheeler claims, however, that I only made negative predictions about the viability of GOFAI and Cognitive Science research programs. The time has come, he says, for a positive approach, and he claims that the emerging embodied-embedded paradigm in the field is a thoroughly Heideggerian one.

As if taking up from where Agre left off with his objectified version of the ready-to-hand, Wheeler tells us:

[O]ur global project requires a defense of action-oriented representation. ... [A]ction-oriented representation may be interpreted as the subagential reflection of online practical problem solving, as conceived by the Heideggerian phenomenologist. Embodied-embedded cognitive science is implicitly a Heideggerian venture.43

He further notes:

As part of its promise, this nascent, Heideggerian paradigm would need to indicate that it might plausibly be able either to solve or to dissolve the frame problem.44

And he suggests:

The good news for the reoriented Heideggerian is that the kind of evidence called for here may already exist, in the work of recent embodied-embedded cognitive science.45

He concludes:

Dreyfus is right that the philosophical impasse between a Cartesian and a Heideggerian metaphysics can be resolved empirically via cognitive science. However, he looks for resolution in the wrong place. For it is not any alleged empirical failure on the part of orthodox cognitive science, but rather the concrete empirical success of a cognitive science with Heideggerian credentials, that, if sustained and deepened, would ultimately vindicate a Heideggerian position in cognitive theory.46

I agree it is time for a positive account of Heideggerian AI and of an underlying Heideggerian neuroscience, but I think Wheeler is looking in the wrong place. Merely in supposing that Heidegger is concerned with subagential problem solving and action-oriented representations, Wheeler's project reflects not a step beyond Agre but a regression to pre-Brooks GOFAI. Heidegger, indeed, claims that skillful coping is basic, but he is also clear that, at its best, coping doesn't involve representations or problem solving at all.47

Wheeler's cognitivist misreading of Heidegger leads to his overestimating the importance of Andy Clark's and David Chalmers' attempt to free us from the Cartesian idea that the mind is essentially inner by pointing out that in thinking we sometimes make use of external artifacts like pencil, paper, and computers.48 Unfortunately, this argument for the extended mind preserves the Cartesian assumption that our basic way of relating to the world is by thinking, that is, by using representations such as beliefs and memories, be they in the mind or in notebooks in the world. In effect, while Brooks and Agre dispense with representations where coping is concerned, all Chalmers, Clark, and Wheeler give us as a supposedly radical new Heideggerian approach to the human way of being in the world is the observation that memories and beliefs are not necessarily inner entities and that, therefore, thinking bridges the distinction between inner and outer representations.49

When we solve problems, we do sometimes make use of representational equipment outside our bodies, but Heidegger's crucial insight is that being-in-the-world is more basic than thinking and solving problems; it is not representational at all. That is, when we are coping at our best, we are drawn in by affordances and respond directly to them, so that the distinction between us and our equipment -- between inner and outer -- vanishes.50 As Heidegger sums it up:

I live in the understanding of writing, illuminating, going-in-and-out, and the like. More precisely: as Dasein I am -- in speaking, going, and understanding -- an act of understanding dealing-with. My being in the world is nothing other than this already-operating-with-understanding in this mode of being.51

Heidegger's and Merleau-Ponty's understanding of embedded embodied coping, therefore, is not that the mind is sometimes extended into the world but rather that, in our most basic way of being -- that is, as skillful copers -- we are not minds at all but one with the world. Heidegger sticks to the phenomenon when he makes the strange-sounding claim that, in its most basic way of being, "Dasein is its world existingly."52

When you stop thinking that mind is what characterizes us most basically and recognize, rather, that most basically we are absorbed copers, the inner/outer distinction becomes problematic. There's no easily askable question about where the absorbed coping is -- in me or in the world. Thus, for a Heideggerian, all forms of cognitivist externalism presuppose a more basic existentialist externalism where even to speak of "externalism" is misleading, since such talk presupposes a contrast with the internal. Compared to this genuinely Heideggerian view, extended-mind externalism is contrived, trivial, and irrelevant.

VI. What Motivates Embedded/Embodied Coping?

But why is Dasein called to cope at all? According to Heidegger, we are constantly solicited to improve our familiarity with the world. Five years before the publication of Being and Time he wrote:

Caring takes the form of a looking around and seeing, and as this circumspective caring it is at the same time ... concerned about developing its circumspection, that is, about securing and expanding its familiarity with the objects of its dealings.53

This pragmatic perspective is developed by Merleau-Ponty, and by Samuel Todes.54 These heirs to Heidegger's account of familiarity and coping describe how an organism, animal or human, interacts with the meaningless physical universe in such a way as to experience it as an environment organized in terms of that organism's need to find its way around. All such coping beings are motivated to get a more and more secure sense of the specific objects of their dealings. In our case, according to Merleau-Ponty:

My body is geared into the world when my perception presents me with a spectacle as varied and as clearly articulated as possible, and when my motor intentions, as they unfold, receive the responses they anticipate [attendent, not expect] from the world.55

To take Merleau-Ponty's example:

For each object, as for each picture in an art gallery, there is an optimum distance from which it requires to be seen, a direction viewed from which it vouchsafes most of itself: at a shorter or greater distance we have merely a perception blurred through excess or deficiency. We therefore tend towards the maximum of visibility, [as if seeking] a better focus with a microscope.56

In short, in our skilled activity we are drawn to move so as to achieve a better and better grip on our situation. For this movement towards maximal grip to take place, one doesn't need a mental representation of one's goal nor any subagential problem solving, as would a GOFAI robot. Rather, acting is experienced as a steady flow of skillful activity in response to one's sense of the situation. Part of that experience is a sense that when one's situation deviates from some optimal body-environment gestalt, one's activity takes one closer to that optimum and thereby relieves the "tension" of the deviation. One does not need to know what that optimum is in order to move towards it. One's body is simply solicited by the situation [the gradient of the situation's reward] to lower the tension. Minimum tension is correlated with achieving an optimal grip. As Merleau-Ponty puts it: "Our body is not an object for an 'I think', it is a grouping of lived-through meanings that moves towards its equilibrium."57 [Equilibrium being Merleau-Ponty's name for zero gradient.]
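The bracketed glosses invite a simple mathematical picture, offered here only as a gloss, since Merleau-Ponty of course defines no such function: let E(x) stand for the "tension" of the bodily-situational state x.

```latex
% A gloss, not Merleau-Ponty's formalism: E is a hypothetical
% "tension" function over bodily-situational states x.
\dot{x} = -\nabla E(x),
\qquad \text{with maximal grip at the equilibrium} \quad
\nabla E(x^{*}) = 0 .
```

Coping, on this picture, is descent along the tension gradient, with no representation of the minimum needed in order to move toward it.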

VII. Modeling Situated Coping as a Dynamical System

Describing the phenomenon of everyday coping as being "geared into" the world and moving towards "equilibrium" suggests a dynamic relation between the coper and the environment. Timothy van Gelder calls this dynamic relation coupling. He explains the importance of coupling as follows:

The fundamental mode of interaction with the environment is not to represent it, or even to exchange inputs and outputs with it; rather, the relation is better understood via the technical notion of coupling. ... The post-Cartesian agent manages to cope with the world without necessarily representing it. A dynamical approach suggests how this might be possible by showing how the internal operation of a system interacting with an external world can be so subtle and complex as to defy description in representational terms -- how, in other words, cognition can transcend representation.58

Van Gelder shares with Brooks the idea that thought is grounded in a more basic relation of agent and world. As van Gelder puts it:

Cognition can, in sophisticated cases, [such as breakdown, problem solving, and abstract thought] involve representation and sequential processing; but such phenomena are best understood as emerging from [i.e. requiring] a dynamical substrate, rather than as constituting the basic level of cognitive performance.59

This dynamical substrate is precisely the skillful coping first described by Heidegger and worked out in detail by Todes and Merleau-Ponty.

Van Gelder importantly contrasts the rich interactive temporality of real-time on-line coupling of coper and world with the austere step-by-step temporality of thought. Wheeler helpfully explains:

[W]hilst the computational architectures proposed within computational cognitive science require that inner events happen in the right order, and (in theory) fast enough to get a job done, there are, in general, no constraints on how long each operation within the overall cognitive process takes, or on how long the gaps between the individual operations are. Moreover, the transition events that characterize those inner operations are not related in any systematic way to the real-time dynamics of either neural biochemical processes, non-neural bodily events, or environmental phenomena (dynamics which surely involve rates and rhythms).60

Computation is thus paradigmatically austere:

Turing machine computing is digital, deterministic, discrete, effective (in the technical sense that behavior is always the result of an algorithmically specified finite number of operations), and temporally austere (in that time is reduced to mere sequence).61
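Van Gelder's stock example of such a coupled system is the Watt centrifugal governor, in which engine speed and valve opening continuously determine each other's rates of change. The toy below is in that spirit, with made-up linear dynamics and arbitrary constants: two variables form coupled differential equations, and the stable behavior falls out of the coupling, not out of any representation of a target state.

```python
# A toy of van Gelder-style coupling, loosely inspired by his
# Watt-governor example. (The dynamics and constants are invented;
# the real governor is nonlinear.)

dt = 0.01
speed, valve = 0.0, 1.0      # engine speed and valve opening
target = 1.0                 # speed at which the pair settles

for _ in range(5000):
    d_speed = 0.5 * valve - 0.2 * speed    # an open valve speeds the engine
    d_valve = -0.8 * (speed - target)      # excess speed closes the valve
    speed += d_speed * dt
    valve += d_valve * dt

print(round(speed, 3), round(valve, 3))    # settles near (1.0, 0.4)
```

Note that the coupling unfolds in real time: the rates and rhythms of the two variables matter, which is just what Wheeler says the computational picture leaves unconstrained.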


Ironically, Wheeler's highlighting the contrast between rich dynamic temporal coupling and austere computational temporality enables us to see clearly that his appeal to extended minds as a Heideggerian response to Cartesianism leaves out the essential temporal character of embodied embedding. Clark's and Chalmers' examples of extended minds dealing with representations are clearly a case of computational austerity. Wheeler is aware of this possible objection to his backing both the dynamical systems model and the extended-mind approach. He asks: "What about the apparent clash between continuous reciprocal causation and action-oriented representations? On the face of it this clash is a worry for our emerging cognitive science."62 But, instead of engaging with the incompatibility of these two opposed models of ground-level intelligence -- on the one hand, computation as in GOFAI, classical Cognitivism, and Agre-like action-oriented representations, and on the other, dynamical models as demonstrated by Brooks and described by van Gelder -- Wheeler punts. He simply suggests that we must somehow combine these two approaches and that "this is the biggest of the many challenges that lie ahead."63

Wheeler's ambivalence concerning the role of computation undermines his overall approach. This is not a mere local squabble about details, although Wheeler clearly wishes it were.64 It is, as Wheeler himself sees, the issue as to which approach is more basic -- the computational or the dynamic. The Heideggerian claim is that action-oriented coping, as long as it is involved (on-line, Wheeler would say), is not representational at all and does not involve any problem solving, and that all representational problem solving takes place off-line and presupposes this involved coping.65 Showing in detail how the representational un-ready-to-hand and present-at-hand in all their forms are derivative from non-representational ready-to-hand coping is one of Heidegger's priority projects.

More broadly, a Heideggerian cognitive science would require working out an ontology, phenomenology, and brain model that denies a basic role to austere computational processing, and defends a dynamical model like Merleau-Ponty's and van Gelder's that gives a primordial place to equilibrium, and in general to rich coupling. Ultimately, we will have to choose which sort of AI and which sort of neuroscience to back, and so we are led to our final questions: could the brain, as its most basic way of making sense of things, instantiate a richly coupled dynamical system, and is there any evidence it actually does so? If so, could this sort of non-computational coupling be modeled on a digital computer to give us Heideggerian AI?

Intermission

VIII. Walter Freeman's Heideggerian/Merleau-Pontian Neurodynamics

We have seen that our experience of the everyday world is organized in terms of significance and relevance and that this significance can't be constructed by giving meaning to brute facts -- both because we don't experience brute facts and, even if we did, no value predicate could do the job of giving them situational significance. Yet all that the organism can receive as input is mere physical energy. How can such senseless physical stimulation be experienced directly as significant? If we can't answer this question, the phenomenological observation that the world is its own best representation, and that the significance we find in our world is constantly enriched by our experience in it, seems to suggest that the brain is what Dennett derisively calls "wonder tissue."

Fortunately, there is at least one model of how the brain could provide the causal basis for the intentional arc. Walter Freeman, a founding figure in neuroscience and the first to take seriously the idea of the brain as a nonlinear dynamical system, has worked out an account of how the brain of an active animal can find and augment significance in its world. On the basis of years of work on olfaction, vision, touch, and hearing in alert and moving rabbits, Freeman proposes a model of rabbit learning based on the coupling of the brain and the environment. To bring out the relevance of Freeman's account to our phenomenological investigation, I propose to map Freeman's neurodynamic model onto the phenomena we have already noted in the work of Merleau-Ponty.

1. Involved action/perception. [Merleau-Ponty's being-absorbed-in-the-world (être au monde) -- his version of Heidegger's In-der-Welt-sein.]

The animal will sometimes sense a need to improve its current situation. When it does, an instinct or a learned skill is activated. Thus, according to Freeman's model, when hungry, frightened, etc., the rabbit sniffs around seeking food, runs toward a hiding place, or does whatever else prior experience has taught it is appropriate. The animal's neural connections are then changed on the basis of the quality of its resulting experience, that is, they are changed in a way that reflects the extent to which the result satisfied the animal's current need. This is not simple behaviorism, however, since, as we shall now see, the changes brought about by experience are global, not discrete.

2. Holism

The change is much more radical than adding a new mechanical response. The next time the rabbit is in a similar state of seeking and encounters a similar smell, the entire olfactory bulb goes into a state of global chaotic activity. Freeman tells us:

[E]xperiments show clearly that every neuron in the [olfactory] bulb participates in generating each olfactory perception. In other words, the salient information about the stimulus is carried in some distinctive pattern of bulb-wide activity, not in a small subset of feature-detecting neurons that are excited only by, say, foxlike scents.66

Freeman later generalizes this principle to "brain-wide activity" such that a perception involves and includes all of the sensory, motor and limbic systems.

3. Direct perception of significance

After each sniff, the rabbit's bulb exhibits a distribution of what neural modelers traditionally call energy states. The bulb then tends toward minimum energy the way a ball tends to roll towards the bottom of a container, no matter where it starts from within the container. Each possible minimal energy state is called an attractor. The brain states that tend towards a particular attractor are called that attractor's basin of attraction. The rabbit's brain forms a new basin of attraction for each new significant input.67
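The ball-in-a-container picture can be made concrete in one dimension. This is only a toy, not Freeman's model (his bulb dynamics are high-dimensional and chaotic): an invented energy function with two minima, where the attractor a state reaches depends only on the basin in which it starts.

```python
# A one-dimensional toy of basins of attraction. The energy
# E(x) = (x**2 - 1)**2 has two minima (attractors) at x = -1 and x = +1;
# states descend the gradient into whichever basin they start in.

def grad_E(x):
    return 4 * x * (x**2 - 1)   # derivative of (x**2 - 1)**2

def settle(x, steps=1000, eta=0.01):
    for _ in range(steps):
        x -= eta * grad_E(x)
    return round(x, 3)

print(settle(0.2))    # starts in the right-hand basin -> 1.0
print(settle(-0.7))   # starts in the left-hand basin  -> -1.0
```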

Thus, the significance of past experience is preserved in the set of basins of attraction. The set of basins of attraction that an animal has learned forms what is called an attractor landscape. According to Freeman:

The state space of the cortex can therefore be said to comprise an attractor landscape with several adjoining basins of attraction, one for each class of learned stimuli.68

Freeman argues that each new attractor does not represent, say, a carrot, or the smell of carrot, or even what to do with a carrot. Rather, the brain's current state is the result of the sum of the animal's past experiences with carrots, and this state is directly coupled with or resonates to the affordance offered by the current carrot. What in the physical input is directly picked up and resonated to when the rabbit sniffs, then, is the affords-eating.69 Freeman tells us: "The macroscopic bulbar patterns [do] not relate to the stimulus directly but instead to the significance of the stimulus."70 [Stuart asks: Are there attractors for carrot, celery, etc., or just for affords-eating, running-away-from, etc.?]

Freeman adds:

These attractors and behaviors are constructions by brains, not merely readouts of fixed action patterns. No two replications are identical: like handwritten signatures, they are easily recognized but are never twice exactly the same.71

4. The stimulus is not further processed or acted upon. [Merleau-Ponty: We normally have no experience of sense data.]

Since on Freeman's account the attractors are coupled directly to the significance of the current input, the stimulus need not be processed into a representation of the current situation on the basis of which the brain then has to infer what to do. So, after selecting and activating a specific attractor and modifying it, the stimulus has no further job to perform. As Freeman explains:

The new pattern is selected, not imposed, by the stimulus. It is determined by prior experience with this class of stimulus. The pattern expresses the nature of the class and its significance for the subject rather than the particular event. The identities of the particular neurons in the receptor class that are activated are irrelevant and are not retained72 ... Having played its role in setting the initial conditions, the sense-dependent activity is washed away.73

5. The perception/action loop.

The movement towards the bottom of a particular perceptual basin of attraction is correlated with the perception of the significance of a particular scent. It then leads to the animal's direct motor response to the current affordance, depending on how well that motor response succeeded in the past. According to Freeman, the perceptual "recognition" of the instrumental significance74 of the current scent places the animal's motor system into an appropriate basin of attraction. [Stuart asks: how?] For example, if the carrot affords eating, the rabbit is directly readied to eat the carrot, or perhaps readied to carry off the carrot, depending on which attractor is currently activated. Freeman tells us:

The same global states that embody the significance provide ... the patterns that make choices between available options and that guide the motor systems into sequential movements of intentional behavior.75

The readiness can change with each further sniff or shift in the animal's attention, like switching from frame to frame in a movie film.

But the changing attractor states are not fast enough to guide the animal's moment-by-moment motor responses to the changing situation. For that, the brain needs to switch to another form of processing that is directly responsive to the sensory input. This other form of processing must guide the moment-by-moment muscle contractions that control the animal's movements. It must therefore take account of how things are going and either continue on a promising path, or, if the overall action is not going as well as anticipated, it must signal the attractor system to jump to another attractor so as to increase the animal's sense of impending reward.76 If the rabbit achieves what it is seeking, a report of its success is fed back to reset the sensitivity of the olfactory bulb. And the cycle is repeated.
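The shape of this cycle can be set down schematically. Everything below is an invented stand-in, not Freeman's model: "settling into a basin" is a dictionary lookup and "the world" is a coin flip; only the loop structure -- settle, respond, feed success back -- is the point.

```python
import random

landscape = {"carrot-smell": "eat", "fox-smell": "flee"}  # learned basins

def settle(stimulus):
    """Stand-in for the bulb falling into a basin of attraction: what
    comes out is the significance of the stimulus, not a copy of it."""
    return landscape.get(stimulus, "explore")

def perception_action_cycle(cycles=3):
    for _ in range(cycles):
        stimulus = random.choice(["carrot-smell", "fox-smell", "dust"])
        response = settle(stimulus)          # significance -> motor readiness
        succeeded = random.random() < 0.8    # how the action went
        print(stimulus, "->", response, "| success:", succeeded)
        # On success, feedback would reset the bulb's sensitivity; on
        # failure, it would signal a jump to another attractor.

perception_action_cycle()
```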

6. Optimal grip.

The animal's movements are presumably experienced by the animal as tending towards getting an optimal perceptual take on what is currently significant, and, where appropriate, an actual optimal bodily grip on it. Freeman sees his account of the brain dynamics underlying perception and action as structurally isomorphic with Merleau-Ponty's. He explains:

Merleau-Ponty concludes that we are moved to action by disequilibrium between the self and the world. In dynamic terms, the disequilibrium ... puts the brain onto ... a pathway through a chain of preferred states, which are learned basins of attraction. The penultimate result is not an equilibrium in the chemical sense, which is a dead state, but a descent for a time into the basin of an attractor, giving an awareness of closure.77

[Stuart says moving from one attractor to another requires an impossible discontinuous change in brain state. And also asks: what decides which attractor to move into?]

Thus, according to Freeman, in governing action the brain normally moves from one basin of attraction to another, descending into each basin for a time without coming to rest in any one basin. If so, Merleau-Ponty's talk of reaching equilibrium or maximal grip is misleading. But Merleau-Pontians should be happy to improve their phenomenological description on the basis of Freeman's model. Normally, the coper moves towards a maximal grip but, instead of coming to rest when the maximal grip is achieved, as in Merleau-Ponty's example of standing and observing a picture in a museum, the coupled coper, without coming to rest, is drawn to move on in response to the call of another affordance [How do affordances call?] that solicits her to take up the same task from another angle, or to turn to the next task that grows out of the current one.

7. Experience feeds back into the look of the world. [Merleau-Ponty's intentional arc.]

Freeman claims his readout from the rabbit's brain shows that each learning experience that is significant in a new way sets up a new attractor and rearranges all the other attractor basins in the landscape:

I have observed that brain activity patterns are constantly dissolving, reforming and changing, particularly in relation to one another. When an animal learns to respond to a new odor, there is a shift in all other patterns, even if they are not directly involved with the learning. There are no fixed representations, as there are in [GOFAI] computers; there are only significances.78

Freeman adds:

I conclude that the context dependence is an essential property of the cerebral memory system, in which each new experience must change all of the existing store by some small amount, in order that a new entry be incorporated and fully deployed in the existing body of experience. This property contrasts with memory stores in computers ... in which each item is positioned by an address or a branch of a search tree. There, each item has a compartment, and new items don't change the old ones. Our data indicate that in brains the store has no boundaries or compartments. ... Each new state transition ... initiates the construction of a local pattern that impinges on and modifies the whole intentional structure.79

The whole constantly updated landscape of attractors is correlated with the agent's experience of the changing significance of things in the world.

The important point is that Freeman offers a model of learning which is not an associationist model according to which, as one learns, one adds more and more fixed connections, nor a cognitivist model based on off-line representations of objective facts about the world that enable inferences about which facts to expect next and what they mean. Rather, Freeman's model instantiates a genuine intentional arc according to which there are no linear causal connections nor a fixed library of data, but where, each time a new significance is encountered, the whole perceptual world of the animal changes, so that significance as directly displayed is contextual, global, and continually enriched.
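A Hopfield network is not Freeman's model -- his point is partly directed against fixed-point stores -- but it gives the simplest concrete picture of a memory without compartments: storing one new pattern perturbs the connection weights throughout the matrix, and hence shifts the previously learned attractors.

```python
import numpy as np

# A compartmentless store: Hebbian weights spread every stored pattern
# across the whole matrix, so learning a new pattern changes the store
# as a whole. (An illustration of the contrast Freeman draws with
# addressed memory, not a model of the olfactory bulb.)

rng = np.random.default_rng(0)
n = 16
patterns = [rng.choice([-1, 1], size=n) for _ in range(3)]

def weights(pats):
    W = sum(np.outer(p, p) for p in pats).astype(float)
    np.fill_diagonal(W, 0)      # no self-connections
    return W / len(pats)

W_before = weights(patterns)
W_after = weights(patterns + [rng.choice([-1, 1], size=n)])

# The typical entry of the store has shifted by a small amount:
print(np.abs(W_after - W_before).mean())
```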

8. Circular causality

Such systems are self-organizing. Freeman explains:

Macroscopic ensembles exist in many materials, at many scales in space and time, ranging from ... weather systems such as hurricanes and tornadoes, even to galaxies. In each case, the behavior of the microscopic elements or particles is constrained by the embedding ensemble, and microscopic behavior cannot be understood except with reference to the macroscopic patterns of activity ...80

Thus, the cortical field controls the neurons that create the field. In Freeman's terms, in this sort of circular causality the overall activity "enslaves" the elements. As he emphasizes:

Having attained through dendritic and axonal growth a certain density of anatomical connections, the neurons cease to act individually and start participating as part of a group, to which each contributes and from which each accepts direction. ... The activity level is now determined by the population, not by the individuals. This is the first building block of neurodynamics.81

Given the way the whole brain can be tuned by past experience to influence individual neuron activity, Freeman can claim:

Measurements of the electrical activity of brains show that dynamical states of Neuroactivity emerge like vortices in a weather system, triggered by physical energies impinging onto sensory receptors. ... These dynamical states determine the structures of intentional actions.82

Merleau-Ponty seems to anticipate Freeman's neurodynamics when he says:

It is necessary only to accept the fact that the physico-chemical actions of which the organism is in a certain manner composed, instead of unfolding in parallel and independent sequences, are constituted ... in relatively stable "vortices."83

In its dynamic coupling with the environment, the brain tends towards equilibrium while continually [discontinuously] switching from one attractor basin to another, like successive frames in a movie. In Freeman's terms:

Neocortical dynamics progresses through time by continual changes in state that adapt the cortices to the changing environment.84

The discreteness of these global state transitions from one attractor basin to another makes it possible to model the brain's activity on a computer. Freeman notes that:

At macroscopic levels each perceptual pattern of Neuroactivity is discrete, because it is marked by state transitions when it is formed and ended. ... I conclude that brains don't use numbers as symbols, but they do use discrete events in time and space, so we can represent them ... by numbers in order to model brain states with digital computers.85

That is, the computer can model the anticipation of input as well as the series of discrete transitions from basin to basin they trigger in the brain, thereby modeling how, on the basis of past experiences of success or failure, physical input acquires significance for the organism. When one actually programs such a model of the brain as a dynamic physical system, one has an explanation of how the brain does what Merleau-Ponty thinks the brain must be doing, and, since Merleau-Ponty is working out of Heidegger's ontology, one has developed Freeman's neurodynamics into Heideggerian AI.

Time will tell whether Freeman's Merleau-Pontian model is on the right track for explaining how the brain finds and feeds back significance into the meaningless physical universe. Only then would we find out if one could actually produce intelligent behavior by programming a model of the physical state transitions taking place in the brain. That would be the positive Heideggerian contribution to the Cognitive Sciences that Wheeler proposes to present in his book but which he fails to find. It would show how the emerging embodied-embedded approach, when fully understood, could, indeed, be the basis of a genuinely Heideggerian AI.

Meanwhile, the job of phenomenologists is to get clear concerning the phenomena that need to be explained. That includes an account of how we, unlike classical representational computer models, avoid the frame problem.

IX. How Would Heideggerian AI Dissolve the Frame Problem?

As we have seen, Wheeler rightly thinks that the simplest test of the viability of any proposed AI program is whether it can solve the frame problem. We've also seen that the two current supposedly Heideggerian approaches to AI avoid the frame problem. Brooks' empiricist/behaviorist approach, in which the environment directly causes responses, avoids it by leaving out significance and learning altogether, while Agre's action-oriented approach, which includes only a small fixed set of possibly relevant responses, also avoids the problem.

Wheeler's approach, however, by introducing flexible action-oriented representations, like any representational approach, has to face the frame problem head on. To see why, we need only slightly revise his statement of the problem (quoted earlier), substituting "representation" for "belief":

[G]iven a dynamically changing world, how is a nonmagical system ... to retrieve and (if necessary) to revise, out of all the representations that it possesses, just those representations that are relevant in some particular context of action?86

Wheeler's frame problem, then, is to explain how his allegedly Heideggerian system can determine in some systematic way which of the action-oriented representations it contains or can generate are relevant in any current situation, and keep track of how this relevance changes with changes in the situation.

Not surprisingly, the concluding chapter of the book, where Wheeler returns to the frame problem to test his proposed Heideggerian AI, offers no solution or dissolution of the problem. Rather, he asks us to "give some credence to [his] informed intuitions,"87 which I take to be on the scent of Freeman's account of rabbit olfaction, that nonrepresentational causal coupling must play a crucial role. But I take issue with his conclusion that:

in extreme cases the neural contribution will be nonrepresentational in character. In other cases, representations will be active partners alongside certain additional factors, but those representations will be action oriented in character, and so will realize the same content-sparse, action-specific, egocentric, context-dependent profile that Heideggerian phenomenology reveals to be distinctive of online representational states at the agential level.88

For Heidegger, all representational states are part of the problem. Any attempt to solve the frame problem by giving any role to any sort of representational states, even on-line ones, has so far proved to be a dead end. So nonrepresentational action had better not be understood to be merely the "extreme case." Rather, it must be, as Heidegger, Merleau-Ponty, and Freeman see, our basic way of responding directly to relevance in the everyday world, so that the frame problem does not arise.

Heidegger and Merleau-Ponty argue that, thanks to our embodied coping and the intentional arc it makes possible, our skill in sensing and responding to relevant changes in the world is constantly improved. In coping in a particular context, say a classroom, we learn to ignore most of what is in the room, but, if it gets too warm, the windows solicit us to open them.
We ignore the chalk dust in the corners and the chalk marks on the desks, but we attend to the chalk marks on the blackboard. We take for granted that what we write on the board doesn't affect the windows, even if we write, "open windows," and that what we do with the windows doesn't affect what's on the board. And as we constantly refine this background know-how, the things in the room and its layout become more and more familiar and take on more and more significance. In general, given our experience in the world, whenever there is a change in the current context we respond to it only if in the past it has turned out to be significant, and when we sense a significant change we treat everything else as unchanged, except what our familiarity with the world suggests might also have changed and so needs to be checked out. Thus the frame problem does not arise.

But the frame problem reasserts itself when we need to change contexts. How do we understand how to get out of the present context and what to anticipate when we do? Merleau-Ponty has a suggestion. When speaking of one's attention being drawn by an object, he uses the term summons to describe the influence of a perceptual object on a perceiver:

To see an object is either to have it on the fringe of the visual field and be able to concentrate on it, or else respond to this summons by actually concentrating on it.89

Thus, for example, as one faces the front of a house, one's body is already being summoned (not just prepared) to go around the house to get a better look at its back.90

Merleau-Ponty's treatment of what Husserl calls the inner horizon of the perceptual object, e.g. its insides and back, applies equally to our experience of the object's outer horizon of other potential situations. As I cope, other tasks are right now present on the horizon of my experience, summoning my attention as potentially (not merely possibly) relevant to the current situation. Likewise, my attention can be summoned by other potentially relevant situations already on the current situation's outer horizon.

If Freeman is right, this attraction of familiar-but-not-currently-fully-present aspects of what is currently ready-to-hand, and of potentially relevant other familiar situations on the horizon, might well be correlated with the fact that our brains are not simply in one attractor basin at a time but are influenced by other attractor basins in the same landscape, and by other attractor landscapes.
[How are they influenced?] According to Freeman, what makes us open to the horizonal influence of other attractors, instead of being stuck in the current attractor, is that the whole system of attractor landscapes collapses and is rebuilt with each new rabbit sniff, or, in our case, presumably with each shift in our attention. And once one correlates Freeman's neurodynamic account with Merleau-Ponty's description of the way the intentional arc feeds our past experience back into the way the world appears to us, so that the world solicits from us appropriate responses, the problem of how we are summoned by what is relevant in our current situation, as well as in other bordering situations, no longer seems insoluble.

But there is a generalization of the problem of relevance, and thus of the frame problem, that seems intractable. In What Computers Can't Do I gave as an example how, in placing a racing bet, we can usually restrict ourselves to such facts as the horse's age, jockey, past performance, and competition, but there are always other factors, such as whether the horse is allergic to goldenrod or whether the jockey has just had a fight with the owner, which may in some cases be decisive. Human handicappers are capable of recognizing the relevance of such facts when they come across them.91 But since anything in experience can be relevant to anything else, such an ability seems magical.

Jerry Fodor follows up on my pessimistic remark:

"The problem," he tells us, "is to get the structure of an entire belief system to bear on individual occasions of belief fixation. We have, to put it bluntly, no computational formalisms that show us how to do this, and we have no idea how such formalisms might be developed. ... If someone -- a Dreyfus, for example -- were to ask us why we should even suppose that the digital computer is a plausible mechanism for the simulation of global cognitive processes, the answering silence would be deafening."92
However, once we give up computational Cognitivism and see ourselves instead as basically coupled copers, we can see how the frame problem can be dissolved by an appeal to existential phenomenology and neurodynamics. In the light of how learning our way around in the world modifies our brain, and so builds significance and relevance into the world so that relevance is directly experienced in the way tasks summon us, even the general problem raised by the fact that anything in our experience could in principle be related to anything else no longer seems a mystery.
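As a purely illustrative sketch of this dissolution, the toy model introduced earlier can be extended so that experience reshapes the attractor landscape itself, in the spirit of the Hebbian story Freeman tells (see note 67): basins that have proved significant in the past deepen, and so come to capture inputs directly, with no search over representations. The weighting scheme, the settle and reinforce functions, and the learning rate are all assumptions of mine, not Freeman's model.

```python
# A hypothetical sketch, continuing the earlier toy model: experience deepens
# attractor basins, so past significance biases which basin captures the state.
import numpy as np

attractors = np.array([-2.0, 0.0, 2.0])
weights = np.ones(3)              # basin "depths"; larger = more attractive

def basin(x):
    # Weighted pull: deeper basins capture the state from farther away.
    return int(np.argmin(np.abs(attractors - x) / weights))

def settle(x):
    """Collapse-and-rebuild, crudely: on each 'sniff' the state jumps to the
    attractor of whichever basin wins, rather than drifting continuously."""
    return attractors[basin(x)]

def reinforce(b, reward, lr=0.5):
    """Deepen a basin after a significant (rewarded) outcome."""
    weights[b] += lr * reward

# Suppose acting out of basin 2 has repeatedly turned out to be significant:
for _ in range(5):
    x = settle(1.1)               # input near the boundary of basins 1 and 2
    reinforce(basin(x), reward=1.0)

print(weights)     # basin 2 has deepened (weights[2] == 3.5)
print(basin(0.6))  # now 2; with fresh weights this input would fall in basin 1
```

Nothing here is searched or inferred: relevance shows up as the altered shape of the landscape, which is the moral of the intentional arc as Freeman's account renders it.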
X. Conclusion

It would be satisfying if we could now conclude that, with the help of Merleau-Ponty and Walter Freeman, we can fix what is wrong with current allegedly Heideggerian AI by making it more Heideggerian. There is, however, a big remaining problem. Merleau-Ponty's and Freeman's account of how we directly pick up significance and improve our sensitivity to relevance depends on our responding to what is significant for us, given our needs, body size, ways of moving, and so forth, not to mention our personal and cultural self-interpretation. If we can't make our brain model responsive to the significance in the environment as it shows up specifically for human beings, the project of developing an embedded and embodied Heideggerian AI can't get off the ground.

Thus, to program Heideggerian AI, we would not only need a model of the brain functioning underlying coupled coping, such as Freeman's, but we would also need -- and here's the rub -- a model of our particular way of being embedded and embodied, such that what we experience is significant for us in the particular way that it is. That is, we would have to include in our program a model of a body very much like ours, with our needs, desires, pleasures, pains, ways of moving, cultural background, etc.93

So, according to the view I have been presenting, even if the Heideggerian/Merleau-Pontian approach to AI suggested by Freeman is ontologically sound in a way that GOFAI and the subsequent supposedly Heideggerian models proposed by Brooks, Agre, and Wheeler are not, a neurodynamic computer model would still have to be given a body and motivations like ours if things were to count as significant for it so that it could learn to act intelligently in our world. The idea of super-computers containing detailed models of human bodies and brains may seem to make sense in the wild imaginations of a Ray Kurzweil or a Bill Joy, but it hasn't a chance of being realized in the real world.


1 This isn't just my impression. Philip Agre, a PhD student at the AI Lab at this time, later wrote:

I have heard expressed many versions of the propositions ... that philosophy is a matter of mere thinking whereas technology is a matter of real doing, and that philosophy consequently can be understood only as deficient.

Philip E. Agre, Computation and Human Experience, (Cambridge: Cambridge University Press, 1997), 239.

2 Marvin Minsky as quoted in a 1968 MGM press release for Stanley Kubrick's 2001: A Space Odyssey.

3 Newell, A. and Simon, H.A., "Computer Science as Empirical Inquiry: Symbols and Search," Mind Design, John Haugeland, Ed., Cambridge, MA: MIT Press, 1988.

4 Martin Heidegger, Being and Time, J. Macquarrie & E. Robinson, Trans., (New York: Harper & Row, 1962), 132, 133.

5 Michael Wheeler, Reconstructing the Cognitive World: The Next Step, (Cambridge, MA: A Bradford Book, The MIT Press, 2005), 179.

6 Edmund Husserl, Experience and Judgment (Evanston: Northwestern University Press, 1973), 38. Roger Schank proposed what he called scripts, such as a restaurant script. "A script," he wrote, "is a structure that describes appropriate sequences of events in a particular context. A script is made up of slots and requirements about what can fill those slots. The structure is an interconnected whole, and what is in one slot affects what can be in another. A script is a predetermined, stereotyped sequence of actions that defines a well-known situation." R.C. Schank and R.P. Abelson, Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures (Hillsdale, NJ: Lawrence Erlbaum, 1977), 41. Quoted in: Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, John Preston and Mark Bishop, Eds., (Oxford: Clarendon Press, 2002).

7 After I published What Computers Can't Do in 1972 and pointed out this difficulty among many others, my MIT computer colleagues, rather than facing my criticism, tried to keep me from getting tenure on the grounds that my affiliation with MIT would give undeserved credibility to my "fallacies," and so would prevent the AI Lab from continuing to receive research grants from the Defense Department. The AI researchers were right to worry. I was considering hiring an actor to impersonate an officer from DARPA and to be seen having lunch with him at the MIT Faculty Club. (A plan cut short when Jerry Wiesner, the President of MIT, after consulting with Harvard and Russian computer scientists and himself reading my book, personally granted me tenure.) I did, however, later get called to Washington by DARPA to give my views, and the AI Lab did lose DARPA support during what has come to be called the AI Winter.

8 Winograd, T. (1976). "Artificial Intelligence and Language Comprehension," in Artificial Intelligence and Language Comprehension, Washington, D.C.: National Institute of Education, 9.

9 Wired Magazine, Issue 11:08, August 2003.

10 Heidegger, Coping, and Cognitive Science: Essays in Honor of Hubert L. Dreyfus, Vol. 2, Mark Wrathall, Ed., (Cambridge, MA: The MIT Press, 2000), iii.

11 Michael Wheeler, Reconstructing the Cognitive World, 285.

12 Reference??

13 Terry Winograd, "Heidegger and the Design of Computer Systems," talk delivered at the Applied Heidegger Conference, Berkeley, CA, Sept. 1989. Cited in What Computers Still Can't Do, Introduction to the MIT Press Edition, xxxi.

14 Not everyone was pleased. One of the graduate students responsible for the invitation reported to me: "After it was announced that you were giving the talk, Marvin Minsky came into my office and shouted at me for 10 minutes or so for inviting you."

15 Brooks uses what he calls the "subsumption architecture," according to which systems are decomposed not in the familiar way by local functions or faculties, but rather by global activities or tasks. ... Thus, Herbert has one subsystem for detecting and avoiding obstacles in its path, another for wandering around, a third for finding distant soda cans and homing in on them, a fourth for noticing nearby soda cans and putting its hand around them, a fifth for detecting something between its fingers and closing them, and so on ... fourteen in all. What's striking is that these are all complete input/output systems, more or less independent of each other. (John Haugeland, Having Thought: Essays in the Metaphysics of Mind, (Cambridge, MA: Harvard University Press, 1998), 218.)

19 Ibid. 42.

20 In fact he explicitly denies it, saying: In some circles, much credence is given to Heidegger as one who understood the dynamics of existence. Our approach has certain similarities to work inspired by this German philosopher (for instance, Agre and Chapman 1987), but our work was not so inspired. It is based purely on engineering considerations. (Ibid., 415)

21 Rodney A. Brooks, Flesh and Machines: How Robots Will Change Us, Vintage Books (2002), 168.

22 "Can higher-level functions such as learning occur in these fixed topology networks of simple finite state machines?" he asks. ("Intelligence without Representation," Mind Design, 420.)

23 Maurice Merleau-Ponty, Phenomenology of Perception, trans. C. Smith, Routledge & Kegan Paul, 1962, 136.

24 See Maurice Merleau-Ponty, The Structure of Behavior, A. L. Fisher, Trans., Boston: Beacon Press, 2nd edition, 1966.

25 Brooks, "Intelligence without Representation," 418. Insert Haugeland.

26 Rodney A. Brooks, "From earwigs to humans," Robotics and Autonomous Systems, vol. 20, 1997, 291.

33 Computation and Human Experience, 243. His ambitious goal was to "develop an alternative to the representational theory of intentionality, beginning with the phenomenological intuition that everyday routine activities are founded in habitual, embodied ways of interacting with people, places, and things in the world."

34 Ibid., xi.

35 Ibid., 332.

36 Ibid., 251. As Beth Preston sums it up in her paper, "Heidegger and Artificial Intelligence," Philosophy and Phenomenological Research 53 (1), March 1993: 43-69: What results is a system that represents the world not as a set of objects with properties, but as current functions (what Heidegger called in-order-tos). Thus, to take a Heideggerian example, I experience a hammer I am using not as an object with properties but as in-order-to-drive-in-this-nail.

37 Heidegger himself is unclear about the status of the ready-to-hand. When he is stressing the holism of equipmental relations, he thinks of the ready-to-hand as equipment, and of equipment as things like lamps, tables, doors, and rooms that have a place in a whole nexus of other equipment. Furthermore, he holds that breakdown reveals that these interdefined pieces of equipment are made of present-at-hand stuff that was there all along. (Being and Time, 97.) At one point Heidegger even goes so far as to include the ready-to-hand under the categories that characterize the present-at-hand: We call 'categories' -- characteristics of being for entities whose character is not that of Dasein. ... Any entity is either a "who" (existence) or a "what" (present-at-hand in the broadest sense). Page?


38 Being and Time, 405.

39 Ibid. 99.

40 Martin Heidegger, Logic: The Question of Truth, Trans. Thomas Sheehan manuscript. Gesamtausgabe, Band 21, 144. This is precisely what Brooks's animats cannot learn.

41 Heidegger goes on immediately to contrast the total absorption of coping he has just described with the as-structure of thematic observation: Every act of having things in front of oneself and perceiving them is held within [the] disclosure of those things, a disclosure that things get from a primary meaningfulness in terms of the what-for. Every act of having something in front of oneself and perceiving it is, in and for itself, a "having" something as something.

To put it in terms of Being and Time: the as-structure of equipment goes all the way down in the world, but not in our experience of absorbed coping. It's bad phenomenology to read the self or the as-structure into our experience when we are coping at our best.

42 New insert from G

43 Michael Wheeler, Reconstructing the Cognitive World, 222-223.

44 Ibid. 187.

45 Ibid. 188.

46 Ibid. 188-189.

47 Merleau-Ponty says the same: [T]o move one's body is to aim at things through it; it is to allow oneself to respond to their call, which is made upon it independently of any representation. (Phenomenology of Perception, 139.)

48 See Clark, A. and Chalmers, D., "The Extended Mind," Analysis 58 (1): 7-19, 1998.

49 According to Heidegger, intentional content isn't in the mind, nor in some 3rd realm (as it is for Husserl), nor in the world; it isn't anywhere. It's a way of being-towards.

50 As Heidegger puts it: "The self must forget itself if, lost in the world of equipment, it is to be able 'actually' to go to work and manipulate something." Being and Time, Page?


51 Logic, 146. It's important to realize that when he introduces the term "understanding," Heidegger explains (with a little help from the translator) that he means a kind of know-how: In German we say that someone can vorstehen something -- literally, stand in front of or ahead of it, that is, stand at its head, administer, manage, preside over it. This is equivalent to saying that he versteht sich darauf, understands in the sense of being skilled or expert at it, has the know-how of it. (Martin Heidegger, The Basic Problems of Phenomenology, A. Hofstadter, Trans., Bloomington: Indiana University Press, 1982, 276.)

52 Being and Time, 416. To make sense of this slogan, it's important to be clear that Heidegger distinguishes the human world from the physical universe.

53 Martin Heidegger, "Phenomenological Interpretations in Connection with Aristotle," in Supplements: From the Earliest Essays to Being and Time and Beyond, John Van Buren, Ed., State University of New York Press, 2002, 115. My italics. This way of putting the source of significance covers both animals and people. By the time he published Being and Time, however, Heidegger was interested exclusively in the special kind of significance found in the world opened up by human beings, who are defined by the stand they take on their own being. We might call this meaning. In this paper I'm putting the question of uniquely human meaning aside to concentrate on the sort of significance we share with animals.

54 Todes goes beyond Merleau-Ponty in showing how our world-disclosing perceptual experience is structured by the actual structure of our bodies. Merleau-Ponty never tells us what our bodies are actually like and how their structure affects our experience. Todes notes that our body has a front/back and up/down orientation. It moves forward more easily than backward, and can successfully cope only with what is in front of it. He then describes how, in order to explore our surrounding world and orient ourselves in it, we have to be balanced within a vertical field that we do not produce, be effectively directed in a circumstantial field (facing one aspect of that field rather than another), and appropriately set to respond to the specific thing we are encountering within that field. For Todes, then, perceptual receptivity is an embodied, normative, skilled accomplishment, in response to our need to orient ourselves in the world. (See Samuel Todes, Body and World, Cambridge, MA: The MIT Press, 2001.)


55 Merleau-Ponty, Phenomenology of Perception, 250. (Trans. modified.)

56 Ibid. 302.

57 Ibid. 153.

58 Van Gelder, "Dynamics and Cognition," Mind Design II, John Haugeland, Ed., A Bradford Book, (Cambridge, MA: The MIT Press, 1997), 439, 448.

59 Ibid.

60 Michael Wheeler, "Change in the Rules: Computers, Dynamical Systems, and Searle," in Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, John Preston and Mark Bishop, Eds., (Oxford: Clarendon Press, 2002), 345.

61 Ibid. 344, 345.

62 Wheeler, Reconstructing the Cognitive World, 280.

63 Ibid.

64 See Wheeler's footnote on the same subject declaring this confrontation a "minor spat" concerning "certain local nuances." (Ibid. 307.)

65 I'm oversimplifying here. Wheeler does note that Heidegger has an account of on-line, involved problem solving that Heidegger calls dealing with the un-ready-to-hand. The important points for Heidegger but not for Wheeler, however, are that (1) coping at its best deals directly with the ready-to-hand with no place for representations of any sort, and that (2) all un-ready-to-hand coping takes place on the background of an even more basic holistic coping which allows copers to orient themselves in the world. As we shall see, it is this basic coping, not any kind of problem solving, agential or subagential, that enables Heideggerian AI to avoid the frame problem.

66 Walter J. Freeman, "The Physiology of Perception," Scientific American, 264: 78-85, 1991; and W. J. Freeman and K. A. Grajski, "Relation of olfactory EEG to behavior: Factor Analysis," Behavioral Neuroscience, 101: 766-777, 1987. Page? My italics.

67 In Freeman's neurodynamic model, the input to the rabbit's olfactory bulb modifies the bulb's neuron connections according to the Hebbian rule that neurons that fire together wire together. Just how this Hebbian learning is translated into an attractor is not something Freeman claims to know in detail. He simply notes: The attractors are not shaped by the stimuli directly, but by previous experience with those stimuli, which includes preafferent signals and neuromodulators as well as sensory input. Together these modify the synaptic connectivity within the neuropil and thereby also the attractor landscape. [Walter Freeman, How Brains Make Up Their Minds, New York: Columbia University Press, 2000, 62]


68 Walter Freeman, How Brains Make Up Their Minds, New York: Columbia University Press, 2000, 62. (Quotations from Freeman's books have been reviewed by him and sometimes modified to correspond to his latest vocabulary and way of thinking about the phenomenon.)

69 Should have a footnote on this being the brain activity presupposed by Gibson's talk of resonating to affordances.

70 Walter Freeman, Societies of Brains: A Study in the Neuroscience of Love and Hate, The Spinoza Lectures, Amsterdam, Netherlands, (Hillsdale, NJ: Lawrence Erlbaum Associates), 1995, 59.

71 Walter Freeman, How Brains Make Up Their Minds, 62, 63. Merleau-Ponty is led to a similar conclusion. Ref. from Camillo.

72 Walter Freeman, Societies of Brains, 66.

73 Ibid. 67.

74 See Sean Kelly, "Logic of Motor Intentionality," Ref. Also, Corbin Collins describes the phenomenology of this motor intentionality and spells out the logical form of what he calls instrumental predicates. Ref.

75 Walter Freeman, How Brains Make Up Their Minds, 114.

76 Freeman does not attempt to account for this direct control of moment-by-moment movements. To understand it we would have to turn to another form of nonrepresentational learning and skill called TDRL. See my paper with Stuart Dreyfus, Ref.

77 Ibid. 121.

78 Ibid. 22.

79 Walter Freeman, Societies of Brains, 99.

89 Phenomenology of Perception, 67. My italics.

90 Sean D. Kelly, "Seeing Things in Merleau-Ponty," in The Cambridge Companion to Merleau-Ponty.

91 Hubert L. Dreyfus, What Computers Can't Do, (New York, NY: Harper and Row, 1997), 258.

92 Jerry A. Fodor, The Modularity of Mind, Bradford/MIT Press,

93 Dennett sees the "daunting" problem. He just doesn't see that it is a problem that we have no idea how to solve and which may well be insoluble. In a quotation I mentioned earlier he says further: Cog ... must have goal-registrations and preference-functions that map in rough isomorphism to human desires. This is so for many reasons, of course. Cog won't work at all unless it has its act together in a daunting number of different regards. It must somehow delight in learning, abhor error, strive for novelty, recognize progress. It must be vigilant in some regards, curious in others, and deeply unwilling to engage in self-destructive activity. While we are at it, we might as well try to make it crave human praise and company, and even exhibit a sense of humor. ("Consciousness in Human and Robot Minds," for IIAS Symposium on Cognition, Computation and Consciousness, Kyoto, September 1-3, 1994, forthcoming in Ito, et al., eds., Cognition, Computation and Consciousness, OUP.)


We can, however, make some progress towards animal AI. Freeman claims his neurodynamic theory can be used to model lower organisms. In fact, he is actually using his brain model to program simulated robots. (See: Kozma, R., and Freeman, W.J., "Basic principles of the KIV model and its application to the navigation problem," J. Integrat. Neurosci. 2 (2003): 125-145.) Freeman thinks that if he and his coworkers keep at it for a decade or so, they might be able to model the body and brain of the salamander sufficiently to simulate its foraging and self-preservation capacities. (Personal communication, 2/15/06)