
Workshop on the Challenges and Promises of an Ecological Approach to Robotics

Eco-robotics: The Evolutionary Intentional Dynamics

of Adaptive Systems

Robert Shaw1

and William Mace2

1Department of Psychology

University of Connecticut

Storrs

CT 06269

USA

Roberteshaw@aol.com

2Department of Psychology

Trinity College

300 Summit Street

Hartford, CT 06106

USA

William.Mace@trincoll.edu

MS Version 1.0

Aims

The paper has three major aims: Section I provides a brief overview of the state of the art in robotics. Recent assessments by experts are briefly discussed. A consensus seems to have developed in the field that the traditional approach to robotics using GOFAI (good old-fashioned AI) has failed, so something new needs to be tried.

Section II reviews some of the most promising new techniques that have come to prominence in recent years—such as behavior-based robotics (e.g., Brooks), genetic algorithms, and reconfigurable hardware (e.g., Thompson, Higuchi). The technique which many experts believe holds the most promise is evolvable hardware (EHW)—a wedding of genetic algorithms with reconfigurable hardware. This technique has become central to the current development of evolutionary robotics (ER), the generic name for the field.

Finally, Section III shows how ER using EHW can be given a natural interpretation within the methods of ecological psychology, particularly that branch called intentional dynamics (ID). By combining intentional dynamics with evolutionary techniques, we have evolutionary intentional dynamics (EID); and when EID is applied to robotics, evolutionary ecological robotics is the result.

Preliminaries

For convenience, here is a summary of the abbreviations to be found in the paper:

GOFAI = good old-fashioned artificial intelligence

GA = genetic algorithm

FF = fitness function

OHW = ordinary (fixed) hardware

RHW = reconfigurable hardware

FPGA = field-programmable gate array, a type of RHW

EHW = evolvable hardware

ER = evolutionary robotics

ID = intentional dynamics

EID = evolutionary intentional dynamics

EER = evolutionary ecological robotics

To anticipate, here is a general sketch of how the different methods to be discussed relate to one another to define EID, namely

EID = EHW + ID = (RHW + GA) + ID.

NOTE: A system is tractable in the formal sense if a set of equations in closed form describing its behavior can be given. By contrast, it is tractable in a practical sense if either an approximation to its equations can be given (say, by a Monte Carlo routine) or if a scheme is given by which the system can be brought into some desired state in some reasonable amount of time—say, by a GA applied to a population of systems (EHW).
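To illustrate the practical sense of tractability, here is a minimal sketch (in Python, purely for illustration) of a Monte Carlo routine: no closed-form solution is derived; an answer of usable accuracy simply emerges from repeated sampling.

```python
import random

def monte_carlo_pi(n_samples, seed=42):
    """Approximate pi by sampling random points in the unit square
    and counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples
```

The estimate is never exact, but it can be driven arbitrarily close to the true value in a reasonable amount of time, which is all that practical tractability demands.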

* * *

I. Current State-of-the-Art in Robotics

Humans are by nature inventive artificers, typically remaining undaunted even when faced by the most highly complicated and subtle engineering problems. Recently, however, many experts in the robotics community have concluded that traditional methods offer little hope for building robots that can learn to perform practical tasks in diverse real-world environments. Instead, roboticists seek new methods by which to evolve autonomous agents that can function effectively in ecologically valid situations.

A situation is ecologically valid if it is sufficiently rich in resources (e.g., affordances and information) to allow an evolutionarily attuned, autonomous agent adequate intrinsic means to perform tasks effectively. Hence an alternative to traditional GOFAI or typical cognitive neuroscience methods is needed. In later sections, we will consider some of these alternatives.

Challenges to Traditional Robotics

Practical robotics aims to develop agents capable of performing significant goal-directed tasks in diverse natural or man-made environments. It hopes to replace humans in tasks that are too dangerous, too difficult, or too boring for human agents to perform effectively, as well as to produce agents that can perform tasks impossible or inconvenient for humans.

Roboticists have proposed a variety of real-world tasks, any of which if accomplished would indicate significant progress in the field. They include:

• Architectural robots to help in the construction and maintenance of homes and buildings

• Hazardous-duty robots capable of working in places too small or too dangerous for humans to navigate or manipulate tools

• Domestic help robots to help cook, do housekeeping, make repairs, do gardening, and entertain or teach children


Progress toward such goals has been disappointing to many contemporary roboticists. For instance, a group of roboticists at Brandeis University recently made the following pessimistic assessment:

"The field of Robotics today faces a practical problem: most problems in the physical world are too difficult for the current state of the art. The difficulties associated with designing, building and controlling robots have led to a stasis (Moravec, 1999) and robots in industry are only applied to simple and highly repetitive tasks" (Pollack et al., www//paper, p. 1).

The problem is not so much the lack of engineering know-how, nor even the lack of appreciation of what real-world behaviors demand—but, unfortunately, runs much deeper. For there has been a failure to identify fundamental principles for explaining how goal-directed behaviors might be programmed into or learned by machines that are sufficiently competent to be of practical value. This lack has impeded the development of autonomous robots in all areas where significant real-world tasks are to be performed.

To be capable of performing real-world tasks, robots must possess complex hardware to support exquisite control; and yet, to be practical, they must operate by principles sufficiently tractable to be generalized over environmental contexts distinguished by variable task demands, as well as over robots of different design and complication. Current robots, however, cannot perform any real-world tasks adequately because they fall short in several fundamental ways.

Major deficiencies in current robotics technology were identified by a blue-ribbon committee convened in April 2002 by Information Society Technologies (IST), a European Community (EC) funding agency, whose aim is to initiate projects addressing problems of Future and Emerging Technologies (FET). Some of the problems identified were:

(1) Robotic principles must exhibit scalability over real-world environments.

Systems are needed that exhibit robust operation in spite of changes in task and environment. Ones that seem to work in simulated or artificially simple environments fail when placed in real environments fraught with noisy detail.

"None of the artifacts that have been designed this far have demonstrated capability to open ended domains or truly complex task environments. . . For operation in natural domains it is necessary for these systems to be able to cope with scalability that are several magnitudes beyond those available today" (the FET Beyond Robotics Work Group).

Likewise, the principles governing the coordination of subsystems responsible for learning, information detection, and control of action must be scalable over robots of different functional complexity.

(2) Criteria are needed to evaluate a robot's degree of success.

Although the performance of robots today is nearly always evaluated in application settings, criteria are too often qualitative, lacking in scientific rigor (IST report 2002). Unfortunately, psychologists have offered little help in this regard because there is no consensus on how to evaluate human performance on similar tasks over diverse settings.

(3) Robot subsystems require coordination and integration.

Many subsystems (e.g., sensors, effectors, controllers, electrical circuits, mechanical linkages, computational algorithms, environmental physics) must be integrated or coordinated in order for a robot to perform nontrivial tasks in real-world situations. Since no one person can be expert in all facets of this problem, a team of experts from diverse fields is required. The primary difficulty is finding a common ground for such diverse expertise—a general systems theory of autonomous agents.


"The need for large interdisciplinary teams or an operational theory makes this a hard problem. So far robots have almost exclusively been successful in areas where it has been possible to identify the 'ecological niche' for the system, i.e., where all relevant parameters can be identified or characterised sufficiently well to allow design of situated system—this is the case of 'insect robots' . . " (IST report, 2002).

This calls for identification of environmental invariants to underwrite robust performance in the presence of task and environmental changes—a clear task for ecological psychology and the special branch of intentional dynamics (see Section III).

(4) Robots require robust perception of environmental information in changing environments if their task-relevant actions are to be robust as well.

For a robot to be capable of robust performance on a wide variety of tasks in changing environments, it must also be capable of robust perception to direct those actions. (In Section III, we shall construe this problem of robust perception and robust action in ecological psychological terms: namely, a robot must be able to detect information specific to environmental affordances. Likewise, to act upon these affordances, a robot must have skillful control, or in ecological terms, it must act according to "rules" for the perceptual control of the relevant actions.)

(5) Robot learning must extend beyond fixed task domains.

Some of the most interesting work on robotic learning has emerged in the last decade, but, perhaps surprisingly, not from the popular areas such as artificial intelligence, artificial neural nets, humanoid simulation, or artificial life. These are all relevant but insufficient because, to be effective, they must restrict learning to fixed task domains (FET, 2002). In the next section, we consider three recent methods that have shown promise.

II. The Promise of a New Robotics

Traditional robots acquire their designs by human intervention (e.g., programming); they reach a degree of success only when they are designed to operate in closed task domains; but when placed in new task domains, their programs fail to generalize. If traditional methods fail, is there an alternative approach by which robots might succeed? Perhaps, if they can design themselves, then human intervention could be avoided.

Fortunately, there is a continuum of ways in which robots might have their designs evolved in lieu of human intervention. At one end are techniques where evolutionary algorithms are used to design simulated versions of robots performing simulated tasks in simulated environments; at the other end are populations of real robots whose hardware architectures evolve new configurations from repeated interactions with real environments, under the guidance of a fitness function which determines the 'selection pressure' for a given task function. Intermediate methods exist by which a robot's design is evolved by simulating a genetic algorithm for a simulated environment and then by placing the robot in the corresponding real environment for training in the specific exigencies of the task—characteristics too subtle to be explicitly described and too complicated to be handled by programming.

A major goal that remains elusive is how to make the evolutionary process free of human intervention, such as supervised learning (connectionism), selection criteria (evolutionary grammars), fitness functions (genetic algorithms), rejection criteria (Monte Carlo), or any other source of constraints on learning or evolution not intrinsically derived from interaction with the environmental situation itself.

"Evolvable hardware (EHW) refers to hardware that modifies its architecture and behaviour dramatically and autonomously by interacting with the environment. At present, almost all EHW uses an evolutionary algorithm (EA) as their main adaptive mechanism. One of the key motivations behind EHW is to learn from Nature since she has done so well in evolving wonders such as ourselves (i.e., human beings) without external forces" (Yao and Higuchi, 1996).

The techniques of EHW have the following virtues:


(1) The evolutionary design approach, it seems, can explore a much wider range of design alternatives for robots than those that can be programmed by humans.

(2) Evolutionary design does not assume a priori knowledge of any particular design domain.

(3) Evolutionary design can work with varying degrees of constraints and special requirements, if necessary, byincorporating them in "chromosome" representations and fitness functions.
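These virtues can be made concrete with a toy genetic algorithm. The sketch below (in Python; all names and parameter values are illustrative, not drawn from any actual EHW system) shows virtue (3) directly: constraints and special requirements live entirely in the chromosome representation and the fitness function, so nothing else in the algorithm needs to change when the task changes.

```python
import random

def evolve(fitness, chrom_len=16, pop_size=30, generations=60,
           mutation_rate=0.02, seed=0):
    """Minimal GA over bitstring 'chromosomes': truncation selection,
    one-point crossover, and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, chrom_len)     # one-point crossover
            child = [bit ^ (rng.random() < mutation_rate)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Example FF: count of 1-bits (the classic "ones-max" task)
best = evolve(fitness=sum)
```

Swapping in a different fitness function, or a chromosome that encodes design constraints, changes the design domain explored without touching the selection machinery.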

Weirdness in Evolutionary Circuits

Much of the power of EHW arises from several unusual properties of the FPGAs used:

Mysterious Couplings. The functional parts of the circuit consist of those logic modules which become connected (hardwired) and, surprisingly, also of some of the modules that are not even connected to the other modules (i.e., are isolates). Thompson (1998) reports that if some of these isolated modules are clamped, the performance of the circuit does not noticeably suffer. On the other hand, some isolated modules, if clamped, do cause the performance of the circuit to be noticeably degraded. How might this happen?

No one knows for sure, but several hypotheses have been offered: The isolated modules may be coupled to the rest of the circuit magnetically, or may be influencing it through the shared power supply wiring.

Digital or Analog Circuit? Although the final input/output behavior of an evolved FPGA is, by design, digital, at intermediate stages of evolution complex analog waveforms may be detected at the output. This suggests that the internal processes of the chip may be exploiting a rich continuous-time, continuous-value dynamics—a behavior that cannot be controlled from the outside by programming.

Over-specific Adaptation. A practical difficulty is encountered with these chips. Their free dynamics allow them to explore and exploit the subtle physics of the device or its context (e.g., occupying different positions on a thermal gradient). Thus chips designed to do the same task, but which evolved that competence in different situations, may achieve specific adaptation to the details of their distinct real-world context. Hence they may cease to function properly when moved to another context in which these details vary. Fortunately this fault can be remedied.

"Evolving circuits will potentially come to depend upon any properties that are sufficiently stable during evolution for at least the number of generations it takes to exploit them" (Thompson, 1998, p. 87).

Notice, this statement applies equally well to different devices evolving to do the same task in the same environment. The evolving circuits will likewise come to depend so much on the most stable properties of each distinct device, while adapting to the same task situation, that their circuit designs will be quite dissimilar. Hence they perform the same task in the same environment but do so differently, in ways unique to the physical properties of their bodies.

Thompson points out that this statement of the problem suggests a solution: If we subject the ". . . evolving circuits to the range of conditions in which they will be required to operate . . . The intrinsic evaluation of an individual circuit's fitness will be the measurement of its ability to perform under all of these conditions, with the circuit being instantiated on all the chips" [where one chip is adapted to each condition] (Thompson, 1998, p. 87).

The circuits for different devices, evolving a given task competence in distinct contexts (whether the contexts in question are distinct environments or distinct robot bodies), will exhibit a competence specific to the given context (situation, body). Hence competent performance on the given task may show degradation if generalized over either situations or devices. The most general solution to the autonomous agent problem dictates that the chips designed for a given range of tasks be exposed to the widest possible range of stable physical properties characteristic of the environments in which the agent must operate and of the robot bodies with which it must operate.
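Thompson's suggestion can be stated compactly: a candidate's intrinsic fitness is its worst-case score over the whole range of conditions. The fragment below (in Python; `circuit`, `conditions`, and `evaluate` are hypothetical placeholders, not part of Thompson's apparatus) sketches that scoring rule.

```python
def robust_fitness(circuit, conditions, evaluate):
    """Score a candidate by its worst performance across all operating
    conditions, so evolution cannot succeed by exploiting the quirks
    of any single context."""
    return min(evaluate(circuit, condition) for condition in conditions)
```

Selecting on the minimum, rather than the average, penalizes any design that excels in one context by free-riding on properties absent from the others.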


In the next section, we shall interpret the three properties for EER, and use the same argument to generalize over situations, tasks, and robot embodiments in order to obtain an autonomous agent with the most general competence.

III. Is an Eco-robotics Theory Possible?

As argued earlier, it is generally recognized that the most serious impediment to progress in robotics is how to design autonomous agents that not only can perform given tasks competently in specific environments but, more importantly, can learn or develop a broader competence for effective performance on many tasks across diverse real-world environments. And, as argued above, because each robot body will differ slightly in structural design and functional detail, the exact duplication of evolutionary contexts is impossible. Ecological robotics, or, more simply, eco-robotics (EER), is suggested as the name of the hybrid field which combines ER with the ecological approach.

In 2002, an evolutionary robotics conference was held in Fukui, Japan, with the stated goal of furthering this effort. Progress reports on on-going ER projects were presented: They ranged from the evolution of real-world obstacle-avoiding flying or rolling robots, and robots that walk over uneven terrain, to simulated robots that adapt to dynamic environments, such as RoboCup soccer or highway traffic. Nearly all of the projects were concerned with ER adapted to real or simulated dynamical environments, and thus could profit from the development of EER.

The natural question to ask is: can ecological psychology furnish a general account of autonomous agents to complement ER? And if so, what are the major issues to be resolved? We explore some of these issues in this last section of the paper.

Before doing so, let's clear up one point that has sometimes been a source of confusion. We shall expect EER to provide methods and concepts for understanding autonomous agents that exhibit effective performance in real-world task environments. Note, however, effective performance does not mean optimal performance, but only that it be tolerant, that is, successful to a limited but practical degree. The degree of effective performance is defined relative to some practical criterion for success—thus in the case of an EHW device, it either learns or evolves so as to satisfy a kind of FF. Indeed, finding a proper ecological interpretation of this FF will prove to be the key by which we can formulate a theory of EER relevant to our problem.

Evolving Ecological Robots with Affordance-effectivity Fits

Ecological psychology argues that effective learning is shaped by selective evolutionary pressures which guarantee a commensurability between an animal's action capabilities and what the environment demands of adaptive acts. That is to say, the agent's effectivities (its capabilities for goal-directed action) must be matched to the affordances of its environment.

It should be clear from the definitions given above that a measure of an agent's competence in reaching a goal is synonymous with how well the task affordances are matched by the agent's effectivities. The evolution of such competence can be construed as a GA-driven EHW process, where the degree of match required plays the role of the FF. The greater the affordance-effectivity match in a given generation, the better the FF score.

More specifically, imagine a population of robots whose members are endowed with the same type of EHW chips and which share the same FF for a given task. Further, assume that this population is partitioned into several sub-populations, with each sub-population being assigned to a different environmental situation. The environmental situations belong to a range of task-situations over which one would like an autonomous agent to be competent.

Consider an ecologized version of Thompson's suggestion for how to evolve autonomous agents with broad competencies: Let each situation-specific sub-population of robots evolve according to the same FF. Because they share the same FF, over many generations the emerging selection pressure evolves robots which are all competent to exploit the same affordances, namely, do the same task; but they do so in different ways since their effectivities were attuned to different situations. Thus the different strains of selection pressure will produce a class of circuit designs that support different effectivities but realize the same task-specific affordance. The common


affordance defines an invariant structure over all the robots and reflects those properties most stable over change in situation. The distinct effectivities, by contrast, provide the perspective structure specific to properties that are unstable over change in situation.
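The scheme just described can be sketched as a toy simulation. In the Python fragment below (wholly illustrative: a bitstring stands in for an EHW configuration, and "degree of fit to the situation" stands in for the affordance-effectivity match), every sub-population evolves under the same form of FF, yet each converges on a genotype attuned to its own situation.

```python
import random

def evolve_in_situation(situation, pop_size=20, generations=80, seed=0):
    """Evolve one sub-population under the shared FF: degree of match
    between genotype and situation (both bitstrings)."""
    rng = random.Random(seed)
    n = len(situation)

    def ff(genotype):  # the shared form of the fitness function
        return sum(g == s for g, s in zip(genotype, situation))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=ff, reverse=True)
        parents = pop[:pop_size // 2]             # truncation selection
        pop = [[bit if rng.random() > 0.05 else 1 - bit  # mutation
                for bit in rng.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=ff)
```

Running this for two different situations yields two dissimilar genotypes, each well fit to its own context: the shared FF plays the role of the invariant structure, while each situation-specific genotype plays the role of the perspective structure.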

Thus a robot whose FPGA circuit evolved for one task situation may not work properly when transferred to a new situation. To construct a robot that can solve the same task in different situations, the trick will be to couple all the different circuits into a single super-FPGA endowed with all the situation-specific, embodiment-specific competences.
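A minimal way to picture such coupling (again in illustrative Python; real super-FPGA coupling would be a hardware matter) is a controller that detects the current situation and dispatches to the circuit that evolved for it:

```python
def couple_circuits(circuit_bank):
    """Build a 'super' controller from situation-specific circuits:
    given the detected situation, dispatch to the circuit whose
    evolution attuned it to that situation."""
    def controller(situation, inputs):
        return circuit_bank[situation](inputs)
    return controller
```

The bank of circuits carries the embodiment- and situation-specific competences; the dispatcher supplies the generality across situations.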

A New Tractability

Critics of the ecological approach have often complained that the affordance concept is unclear, primarily because the notion of invariant structure has not been explicitly defined. They also have complained that effectivities, here construed as the situation-specific perspective structure, would not be scientifically acceptable until a mechanism can be given for their design (Kubovy & Pomerantz). We now reply to those critics: Any equation set adequate to express the defining characteristics of either of these real-world constructs would have too many dimensions to be solved in closed form. Their context-sensitivity makes them truly intractable in the mathematical sense.

But, here, by interpreting affordances and their matching effectivities in terms of ER, we have a constructive argument for their scientific tractability; they can be reliably evolved even if not formally or mechanistically described. It may be that the robotics field has run up against what might be called the von Neumann barrier to mathematical tractability.

A half-century ago, John von Neumann, the great Hungarian-American mathematician, and a father of the computer era, conjectured that there may exist a barrier to tractability that a theorist encounters in trying to state explicitly how systems of even moderate complexity work. By complexity, he was not referring to how complicated a system is as determined by counting parts; rather, he was referring to how competent the system was in a variety of real-world contexts, that is, how many effectivities (goal-directed behaviors) it was able to perform. When the complexity barrier is reached, he argued, the best model of such systems may not be mathematical or verbal but the system itself. This prompts the question: Is the deep level of physical detail that EHW modules engage and exploit with their rich digital-analog dynamics already on the wrong side of this tractability barrier?

It may be frustrating to the positivist to admit that possibility, but, by its very nature, the conjecture cannot be formally proven, since it tries to relate a mathematical domain to a nonmathematical domain—a semantic rather than formal problem. Still, the example of EHW may be informal evidence of its validity. If so, then EER methods demonstrating that a complex system can, in principle, be evolved to have certain competencies (specific affordance-effectivity matches) may offer another kind of tractability.

The Case for History Machines

Here are three significant ways that robots with EHW may be an improvement over those with OHW:

First, consider Thompson's aphorism "Implementation is design" (Thompson, 1998). It reminds us that by implementing EHW we immediately endow the system's architecture with the means to design (or redesign) itself repeatedly over time.

Second, analog hardware is constraint-free digital hardware, obtained by exploiting the transients of the pulses which are permitted when the constraints that render them "all-or-nothing" are removed.

Third, where ordinary hardware is characterized by states and state transitions, EHW, instead, is characterized by configurations and their histories (a history is defined as a succession of epochs, where a historical epoch is a segment of history preserved between a pair of generations).

The first and second points help explain the improvement of ER with EHW over robots with OHW. The third point is the basic thesis of EER, or any other evolutionary system for that matter; it asserts that any evolutionary system is best described by its unfolding history rather than by transitions over states. Indeed, given the tractability barrier, there may be no other choice. States, however, may be used where they specify directly the history to which the states, like symbols, refer (and thus are semantically grounded).

The developing EHW strategy (see Higuchi et al., 1995; Thompson, 1998; Nolfi & Floreano, 2000), when merged with ecological psychology, yields the basic ingredients for an EER. Earlier, a case was made that EER is both valid and (practically) tractable. To the extent that this is so, then, to that extent, it counters the Minsky (1967) and Wells (2002) claim that history machines are useless because they are too ill-defined and cumbersome to use.

In addition, it also redeems the Shaw & Todd (1981) claim that, contrary to the Minsky/Wells claim, history-driven machines not only are tractable (again, in a practical if not formal sense), but in many important ways can surpass state-controlled systems, especially in modeling how systems (e.g., robots) evolve the competence for learning to solve real-world tasks. For instance, Thompson (1998) shows how systems based on evolutionary principles (e.g., EHW) can explore circuit designs that transcend the scope of conventional ones. These circuits, being less constrained in spatial structure than ordinary silicon chips, exhibit considerably richer dynamics than usual.

Having more freedom, they achieve greater sensitivity to the properties of the physical medium in which the circuit is implemented. This results in a circuit that is better tailored to exploit all of the characteristics available in an implementation medium.

The late Robert Rosen, in his seminal book Life Itself, makes a strong case for the logic and design of biological systems being quite different from that of programmable machines (Rosen, 1991). Until the advent of evolvable hardware, machines could be endowed with artificial intelligence only through a human's intervention as designer and programmer. Now, it seems, we have systems that can evolve their own intelligence but which may not be so much artificial as ecological. If so, then evidence to support Rosen's case is evolving.