What makes a neural microcircuit computationally powerful?
Or more precisely, which measurable quantities could
explain why one microcircuit C is better suited for a
particular family of computational tasks than another
microcircuit C'? One potential answer comes from results
on cellular automata and random Boolean networks, where
some evidence was provided that their computational power
for offline computations is largest at the edge of chaos,
i.e. at the transition boundary between order and chaos. We
analyse in this article the significance of the edge of
chaos for real time computations in neural microcircuit
models consisting of spiking neurons and dynamic synapses.
We find that the edge of chaos predicts quite well those
values of circuit parameters that yield maximal
computational power. But it obviously makes no prediction
of a circuit's computational power for other parameter values.
Therefore, we propose a new method for predicting the
computational power of neural microcircuit models. The new
measure estimates directly the kernel property and the
generalization capability of a neural microcircuit. We
validate the proposed measure by comparing its prediction
with direct evaluations of the computational performance of
various neural microcircuit models. This procedure is
applied first to microcircuit models that differ with
regard to the spatial range of synaptic connections and
their strength, and then to microcircuit models that differ
with regard to the level of background input currents, the
conductance, and the level of noise on the membrane
potential of neurons. In the latter case the proposed
method allows us to quantify differences in the
computational power and generalization capability of neural
circuits in different dynamic regimes (UP- and DOWN-states)
that have been demonstrated through intracellular
recordings in vivo.
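The kernel measure described above can be approximated, for a generic recurrent circuit, by the rank of the matrix of circuit states reached under different input streams: higher rank means more input streams can be separated by a linear readout. A minimal sketch, assuming a simple random tanh echo-state-style network rather than the authors' spiking microcircuit model (all parameters here are illustrative):

```python
import numpy as np

def circuit_states(W, inputs, washout=20):
    """Run a simple tanh recurrent circuit and return its state
    after `washout` steps for each input stream (rows of `inputs`)."""
    n = W.shape[0]
    states = []
    for u in inputs:
        x = np.zeros(n)
        for _ in range(washout):
            x = np.tanh(W @ x + u)   # constant input stream, for simplicity
        states.append(x)
    return np.array(states)

rng = np.random.default_rng(0)
n, m = 50, 30
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))   # spectral radius near 1
inputs = rng.normal(0, 1, (m, n))

# Kernel measure: rank of the state matrix for m distinct inputs.
S = circuit_states(W, inputs)
kernel_rank = np.linalg.matrix_rank(S, tol=1e-6)
print(kernel_rank)
```

The generalization measure is estimated analogously, using the rank under many noisy variants of the *same* input: a low rank there indicates that the readout cannot overfit noise.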

Developmental robotics is an emerging field located at the
intersection of robotics, cognitive science and
developmental sciences. This paper elucidates the main
reasons and key motivations behind the convergence of
fields with seemingly disparate interests, and shows why
developmental robotics might prove to be beneficial for all
fields involved. The methodology advocated is synthetic and
two-pronged: on the one hand, it employs robots to
instantiate models originating from developmental sciences;
on the other hand, it aims to develop better robotic
systems by exploiting insights gained from studies on
ontogenetic development. This paper gives a survey of the
relevant research issues and points to some future research
directions.

Summary: A mobile robot should be designed to navigate
with collision avoidance capability in the real world,
flexibly coping with the changing environment. In this
paper, a novel limit-cycle navigation method is proposed
for a fast mobile robot using the limit-cycle
characteristics of a 2nd-order nonlinear function. It can
be applied to the robot operating in a dynamically changing
environment, such as in a robot soccer system. By adjusting
the radius of the motion circle and the direction of
obstacle avoidance, the navigation method proposed enables
a robot to maneuver smoothly towards any desired
destination. Simulations and real experiments using a robot
soccer system demonstrate the merits and practical
applicability of the proposed method.
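The second-order limit-cycle dynamics underlying such methods can be sketched as follows; this is our own minimal illustration of a vector field whose trajectories converge onto a circle of radius r (the variable names and the direction switch are assumptions, not the authors' exact formulation):

```python
import numpy as np

def limit_cycle_step(x, y, r, direction=+1, dt=0.01):
    """One Euler step of the limit-cycle dynamics
        dx/dt =  direction*y + x*(r**2 - x**2 - y**2)
        dy/dt = -direction*x + y*(r**2 - x**2 - y**2)
    Trajectories spiral onto the circle of radius r (e.g. drawn
    around an obstacle); `direction` selects the side of avoidance."""
    s = r**2 - x**2 - y**2
    dx = direction * y + x * s
    dy = -direction * x + y * s
    return x + dt * dx, y + dt * dy

# A point started off the circle converges onto radius r.
x, y, r = 2.0, 0.0, 1.0
for _ in range(5000):
    x, y = limit_cycle_step(x, y, r)
print(round(np.hypot(x, y), 3))
```

Flipping `direction` reverses the sense of rotation, which is how such a scheme chooses whether to pass an obstacle clockwise or counter-clockwise.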

Invariant features of temporally varying signals are
useful for analysis and classification. Slow feature
analysis (SFA) is a new method for learning invariant or
slowly varying features from a vectorial input signal. SFA
is based on a non-linear expansion of the input signal and
application of principal component analysis to this
expanded signal and its time derivative. It is guaranteed
to find the optimal solution within a family of functions
directly and can learn to extract a large number of
decorrelated features, which are ordered by their degree of
invariance. SFA can be applied hierarchically to process
high dimensional input signals and to extract complex
features. Slow feature analysis is applied first to complex
cell tuning properties based on simple cell output
including disparity and motion. Then, more complicated
input-output functions are learned by repeated application
of SFA. Finally, a hierarchical network of SFA-modules is
presented as a simple model of the visual system. The same
unstructured network can learn translation, size, rotation,
contrast, or, to a lesser degree, illumination invariance
for one-dimensional objects, depending only on the training
stimulus. Surprisingly, only a few training objects
sufficed to achieve good generalization to new objects. The
generated representation is suitable for object
recognition. Performance degrades if the network is
trained to learn multiple invariances simultaneously.
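The core SFA computation can be sketched in a few lines; here a purely linear variant (omitting the nonlinear expansion) on a toy signal of our own choosing: whiten the input, then take the principal components of its time derivative, where the smallest-variance direction is the slowest feature.

```python
import numpy as np

def sfa(x):
    """Linear slow feature analysis on signal x (time x dims).
    Returns features ordered from slowest to fastest."""
    x = x - x.mean(axis=0)
    # Whiten the signal (PCA, then rescale to unit variance).
    d, E = np.linalg.eigh(np.cov(x, rowvar=False))
    white = x @ E @ np.diag(1.0 / np.sqrt(d))
    # PCA on the time derivative; smallest variance = slowest feature.
    dot = np.diff(white, axis=0)
    d2, E2 = np.linalg.eigh(np.cov(dot, rowvar=False))
    return white @ E2   # columns sorted by increasing derivative variance

# Toy input: a slow sine and a fast sine, linearly mixed.
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(17 * t)
x = np.c_[slow + fast, slow - fast]
y = sfa(x)
# The first extracted feature recovers the slow source.
corr = abs(np.corrcoef(y[:, 0], slow)[0, 1])
print(corr > 0.99)
```

The full method applies the same procedure after a fixed nonlinear (e.g. polynomial) expansion of the input, which is what lets it extract invariances rather than just linear projections.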

J. Tani.
Learning to generate articulated behavior through the bottom-up and
the top-down interaction processes.
Neural Networks. The Official Journal of the International
Neural Network Society, European Neural Network Society, Japanese Neural
Network Society, 16(1):11-23, 2001.
[ bib ]

Summary: A novel hierarchical neural network architecture
for sensory-motor learning and behavior generation is
proposed. Two levels of forward model neural networks are
operated on different time scales while parametric
interactions are allowed between the two network levels in
the bottom-up and top-down directions. The models are
examined through experiments of behavior learning and
generation using a real robot arm equipped with a vision
system. The results of the learning experiments showed that
the behavioral patterns are learned by self-organizing the
behavioral primitives in the lower level and combining the
primitives sequentially in the higher level. The results
contrast with prior work by Pawelzik et al. [Neural
Comput. 8, 340 (1996)], Tani and Nolfi
[From animals to animats, 1998], and
Wolpert and Kawato [Neural Networks 11, 1317
(1998)] in that the primitives are represented in a
distributed manner in the network in the present scheme
whereas, in the prior work, the primitives were localized
in specific modules in the network. Further experiments of
on-line planning showed that the behavior could be
generated robustly against a background of real world noise
while the behavior plans could be modified flexibly in
response to changes in the environment. It is concluded
that the interaction between the bottom-up process of
recalling the past and the top-down process of predicting
the future enables both robust and flexible situated behavior.

G. Taga.
Nonlinear dynamics of the human motor control - real-time and
anticipatory adaptation of locomotion and development of movements.
In Proceedings of the International Symposium on Adaptive Motion
of Animals and Machines, 2000.
[ bib ]

The positive-feedback nature of Hebbian plasticity can
destabilize the properties of neuronal networks. Recent
work has demonstrated that this destabilizing influence is
counteracted by a number of homeostatic plasticity
mechanisms that stabilize neuronal activity. Such
mechanisms include global changes in synaptic strengths,
changes in neuronal excitability, and the regulation of
synapse number. These recent studies suggest that Hebbian
and homeostatic plasticity often target the same molecular
substrates, and have opposing effects on synaptic or
neuronal properties. These advances significantly broaden
our framework for understanding the effects of activity on
synaptic function and neuronal excitability.

[137]

R. Brooks.
Cambrian intelligence: The early history of the new AI, 1999.
[ bib ]

[138]

S. Nolfi and J. Tani.
Extracting regularities in space and time through a cascade of
prediction networks: The case of a mobile robot navigating in a structured
environment.
Connection Science, Journal of Neural Computing, Artificial
Intelligence and Cognitive Research, 11(2):125-148, 1999.
[ bib ]

Summary: We propose that the ability to extract
regularities from time series through prediction learning
can be enhanced if we use a hierarchical architecture in
which higher layers are trained to predict the internal
state of lower layers when such states change
significantly. This hierarchical organization has two
functions: (a) it forces the system to recode sensory
information progressively so as to enhance useful
regularities and filter out useless information; and (b) it
progressively reduces the length of the sequences which
should be predicted going from lower to higher layers.
This, in turn, allows higher levels to extract higher-level
regularities which are hidden at the sensory level. By
training an architecture of this type to predict the next
sensory state of a robot navigating in an environment
divided into two rooms, we show how the first-level
prediction layer extracts low-level regularities, while the
second-level prediction layer extracts higher-level
regularities.

G. G. Turrigiano.
Homeostatic plasticity in neuronal networks: The more things change,
the more they stay the same.
Trends in Neuroscience, 22:221-228, 1999.
[ bib ]

During learning and development, neural circuitry is
refined, in part, through changes in the number and
strength of synapses. Most studies of long-term changes in
synaptic strength have concentrated on Hebbian mechanisms,
where these changes occur in a synapse-specific manner.
While Hebbian mechanisms are important for modifying
neuronal circuitry selectively, they might not be
sufficient because they tend to destabilize the activity of
neuronal networks. Recently, several forms of homeostatic
plasticity that stabilize the properties of neural circuits
have been identified. These include mechanisms that
regulate neuronal excitability, stabilize total synaptic
strength, and influence the rate and extent of synapse
formation. These forms of homeostatic plasticity are likely
to go 'hand-in-glove' with Hebbian mechanisms to allow
experience to modify the properties of neuronal networks
selectively.

We propose that a regulation mechanism based on Hebbian
covariance plasticity may cause the brain to operate near
criticality. We analyze the effect of such a regulation on
the dynamics of a network with excitatory and inhibitory
neurons and uniform connectivity within and across the two
populations. We show that, under broad conditions, the
system converges to a critical state lying at the common
boundary of three regions in parameter space; these
correspond to three modes of behavior: high activity, low
activity, oscillation.
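Two of the three regimes (high and low activity) can be seen in a minimal mean-field rate model with uniform excitatory/inhibitory coupling; this is our own illustrative sketch with hand-picked parameters, not the paper's model or its regulation mechanism:

```python
import numpy as np

def simulate(w_ee, w_ei, w_ie, w_ii, steps=500, dt=0.1):
    """Mean-field rates of one excitatory (E) and one inhibitory (I)
    population with uniform coupling and a saturating nonlinearity."""
    f = lambda u: np.tanh(np.maximum(u, 0.0))   # rates in [0, 1)
    e = i = 0.1
    trace = []
    for _ in range(steps):
        e += dt * (-e + f(w_ee * e - w_ei * i + 0.05))
        i += dt * (-i + f(w_ie * e - w_ii * i))
        trace.append(e)
    return np.array(trace)

high = simulate(3.0, 0.5, 0.5, 0.5)   # strong recurrent excitation
low  = simulate(0.5, 3.0, 0.5, 0.5)   # dominant inhibition
print(round(high[-1], 2), round(low[-1], 2))
```

In the full parameter space of such a model, a third, oscillatory regime appears for suitable E-to-I and I-to-E coupling; the paper's point is that covariance plasticity drives the weights toward the common boundary of the three regimes.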

This paper provides an overview of the bottom-up approach
to AI, commonly referred to as behavior-oriented AI. The
behavior-oriented approach, with its focus on the
interaction between autonomous agents and their
environments, is introduced by contrasting it with the
traditional approach of knowledge-based AI. Different
notions of autonomy are discussed, and key problems of
generating adaptive and complex behavior are identified. A
number of techniques for the generation of behavior are
introduced and evaluated regarding their potential for
realizing different aspects of autonomy, as well as
adaptivity and complexity of behavior. It is concluded
that, in order to realize truly autonomous and intelligent
agents, the behavior-oriented approach will have to focus
even more on lifelike qualities in both agents and
environments.

The automated construction of dynamic system models is an
important application area for ILP. We describe a method
that learns qualitative models from time-varying
physiological signals. The goal is to understand the
complexity of the learning task when faced with numerical
data, what signal processing techniques are required, and
how this affects learning. The qualitative representation
is based on Kuipers' QSIM. The learning algorithm for model
construction is based on Coiera's GENMODEL. We show that
QSIM models are efficiently PAC learnable from positive
examples only, and that GENMODEL is an ILP algorithm for
efficiently constructing a QSIM model. We describe both
GENMODEL which performs RLGG on qualitative states to learn
a QSIM model, and the front-end processing and segmenting
stages that transform a signal into a set of qualitative
states. Next we describe results of experiments on data
from six cardiac bypass patients. Useful models were
obtained, representing both normal and abnormal
physiological states. Model variation across time and
across different levels of temporal abstraction and fault
tolerance is explored. The assumption made by many previous
workers that the abstraction of examples from data can be
separated from the learning task is not supported by this
study. Firstly, the effects of noise in the numerical data
manifest themselves in the qualitative examples. Secondly,
the models learned are directly dependent on the initial
qualitative abstraction chosen.
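The front-end abstraction step can be sketched concretely: a numeric series is reduced to a sequence of QSIM-style qualitative directions of change, merging runs of identical states. This is a minimal illustration of the idea, not the paper's actual segmentation pipeline (which also handles noise and temporal abstraction):

```python
def qualitative_states(signal, eps=1e-3):
    """Abstract a numeric series into qualitative directions of change
    ('inc', 'dec', 'std'), merging consecutive identical states."""
    dirs = []
    for a, b in zip(signal, signal[1:]):
        d = b - a
        dirs.append('inc' if d > eps else 'dec' if d < -eps else 'std')
    merged = [dirs[0]]
    for d in dirs[1:]:
        if d != merged[-1]:
            merged.append(d)
    return merged

print(qualitative_states([0.0, 0.5, 1.0, 1.0, 0.4]))
# -> ['inc', 'std', 'dec']
```

The choice of `eps` is exactly the kind of initial qualitative abstraction the paper warns about: the learned models depend directly on it.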

A new incremental network model for supervised learning is
proposed. The model builds up a structure of units each of
which has an associated local linear mapping (LLM). Error
information obtained during training is used to determine
where to insert new units whose LLMs are interpolated from
their neighbors. Simulation results for several
classification tasks indicate fast convergence as well as
good generalization. The ability of the model to also
perform function approximation is demonstrated by an
example.

1 Introduction

Local (or piece-wise) linear mappings (LLMs) are an
economic means of describing a well-behaved function
f : R^n -> R^m. The principle is to approximate the
function (which may be given by a number of input/output
samples in R^n x R^m) with a set of linear mappings, each
of which is constrained to a local region of the input
space R^n. LLM-based methods have been used earlier to
learn the inverse kinematics of robot arms [7], for
classificat...
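The basic LLM prediction step can be sketched as follows: each unit owns a reference vector and an affine map, and the nearest unit answers the query. This is a minimal sketch with fixed, hand-placed units; the paper's contribution, error-driven insertion of new units with interpolated LLMs, is omitted:

```python
import numpy as np

class LLMNet:
    """Piecewise-linear approximation: each unit k owns a centre c[k],
    an output offset o[k] and a local Jacobian A[k]; the winner
    (nearest centre) produces y = o[k] + A[k] @ (x - c[k])."""
    def __init__(self, centres, offsets, jacobians):
        self.c = np.asarray(centres, float)
        self.o = np.asarray(offsets, float)
        self.A = np.asarray(jacobians, float)

    def predict(self, x):
        k = np.argmin(np.linalg.norm(self.c - x, axis=1))
        return self.o[k] + self.A[k] @ (x - self.c[k])

# Approximate f(x) = x^2 on [0, 2] with two local linear pieces,
# each tangent to the parabola at its centre.
net = LLMNet(centres=[[0.5], [1.5]],
             offsets=[[0.25], [2.25]],
             jacobians=[[[1.0]], [[3.0]]])   # local slopes f'(c)
# 0.25 + 1.0 * (0.4 - 0.5) = 0.15, close to the true value 0.16.
print(net.predict(np.array([0.4]))[0])
```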

The paper proposes a mechanism for the spontaneous
formation of perceptually grounded meanings under the
selectionist pressure of a discrimination task. The
mechanism is defined formally and the results of some
simulation experiments are reported.

We present a tree-structured architecture for supervised
learning. The statistical model underlying the architecture
is a hierarchical mixture model in which both the mixture
coefficients and the mixture components are generalized
linear models (GLIM's). Learning is treated as a maximum
likelihood problem; in particular, we present an
Expectation-Maximization (EM) algorithm for adjusting the
parameters of the architecture. We also develop an on-line
learning algorithm in which the parameters are updated
incrementally. Comparative simulation results are presented
in the robot dynamics domain.
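The EM idea can be sketched on the simplest relative of this architecture: a flat mixture of two linear experts with an input-independent gate (the full method uses hierarchical, input-dependent GLIM gates). All starting values and data below are our own illustrative choices:

```python
import numpy as np

def em_mixture_of_experts(x, y, n_iter=100):
    """EM for a two-expert mixture: each expert k is a linear model
    y ~ N(w[k]*x + b[k], s2); pi is a flat mixing weight."""
    # Hand-picked starting point (illustrative, not from the paper).
    w, b = np.array([1.0, -0.5]), np.array([0.0, 1.0])
    pi, s2 = np.array([0.5, 0.5]), 1.0
    X = np.c_[x, np.ones_like(x)]
    for _ in range(n_iter):
        # E-step: responsibility of each expert for each data point.
        mu = np.outer(x, w) + b
        ll = -0.5 * (y[:, None] - mu) ** 2 / s2
        r = pi * np.exp(ll - ll.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted least squares per expert.
        for k in range(2):
            sw = np.sqrt(r[:, k])
            coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
            w[k], b[k] = coef
        mu = np.outer(x, w) + b
        s2 = float((r * (y[:, None] - mu) ** 2).sum() / len(x))
        pi = r.mean(axis=0)
    return w, b, pi

# Data drawn from two lines, y = 2x and y = -x + 1, with small noise.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 400)
z = rng.integers(0, 2, 400)
y = np.where(z == 0, 2 * x, -x + 1) + rng.normal(0, 0.05, 400)
w, b, pi = em_mixture_of_experts(x, y)
print(np.sort(w).round(1))
```

EM here alternates soft assignment of points to experts (E-step) with weighted refitting of each expert (M-step); the hierarchical version nests this recursion through the gating tree.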

We describe a project to capitalize on newly available
levels of computational resources in order to understand
human cognition. We will build an integrated physical
system including vision, sound input and output, and
dextrous manipulation, all controlled by a continuously
operating large scale parallel MIMD computer. The resulting
system will learn to "think" by building on its bodily
experiences to accomplish progressively more abstract
tasks. Past experience suggests that in attempting to build
such an integrated system we will have to fundamentally
change the way artificial intelligence, cognitive science,
linguistics, and philosophy think about the organization of
intelligence. We expect to be able to better reconcile the
theories that will be developed with current work in
neuroscience.

The dynamics of nonlinear systems often vary qualitatively
over their parameter space. Methodologies for designing
piecewise control laws for dynamical systems, such as gain
scheduling, are useful because they circumvent the problem
of determining a single global model of the plant dynamics.
Instead, the dynamics are approximated using local models
that vary with the plant's operating point. When a
controller is learned instead of designed, analogous issues
arise. This article describes a multi-network, or modular,
neural network architecture that learns to perform control
tasks using a piecewise control strategy. The
architecture's networks compete to learn the training
patterns. As a result, a plant's parameter space is
adaptively partitioned into a number of regions, and a
different network learns a control law in each region. This
learning process is described in a probabilistic framework
and learning algorithms that perform gradient ascent in a
log likelihood function are discussed. Simulations show
that the modular architecture's performance is superior to
that of a single network on a multipayload robot motion
control task.

This is the first volume in the series WAVELET ANALYSIS
AND ITS APPLICATIONS. It is an introductory treatise on
wavelet analysis, with an emphasis on spline wavelets and
time-frequency analysis. Among the basic topics covered
are time-frequency localization, integral wavelet
transforms, dyadic wavelets, frames, spline wavelets,
orthonormal wavelet bases, and wavelet packets. It is
suitable as a textbook for a beginning course on wavelet
suitable as a textbook for a beginning course on wavelet
analysis and is directed toward both mathematicians and
engineers who wish to learn about the subject.

Automatic classification of active sonar signals using the
Wigner-Ville transform (WVT), the wavelet transform (WT)
and the scalogram is addressed. Features are extracted by
integrating over regions in the time-frequency (TF)
distribution, and are classified by a decision tree.
Experimental results show classification and detection
rates of up to 92%, with the WT outperforming the WVT and
the scalogram, particularly at high noise levels. This can
be partially attributed to the absence of cross terms in
the WT.
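Extracting features by integrating energy over regions of a time-frequency distribution can be sketched with per-scale wavelet energies; here a minimal Haar-transform stand-in of our own, not the paper's WVT/scalogram features:

```python
import numpy as np

def haar_step(a):
    """One level of the Haar wavelet transform: split a signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    a = np.asarray(a, float)
    approx = (a[0::2] + a[1::2]) / np.sqrt(2)
    detail = (a[0::2] - a[1::2]) / np.sqrt(2)
    return approx, detail

def scale_energies(x, levels=3):
    """Energy of the detail coefficients at each scale -- one simple
    time-frequency 'region' feature per scale, usable as input to a
    classifier such as a decision tree."""
    feats, a = [], x
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(float((d ** 2).sum()))
    return feats

# A fast alternating signal puts all its energy in the finest scale ...
print(scale_energies(np.tile([1.0, -1.0], 8)))
# ... while a constant signal has no detail energy at any scale.
print(scale_energies(np.ones(16)))
```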

R. S. Sutton.
Integrated architectures for learning, planning and reacting based on
approximating dynamic programming.
In Proceedings of the Seventh International Conference on
Machine Learning, June 1990.
[ bib ]

The Behavior Language is a rule-based real-time parallel
robot programming language originally based on ideas from
[Brooks 86], [Connell 89], and [Maes 89]. It compiles into
a modified and extended version of the subsumption
architecture [Brooks 86] and thus has backends for a number
of processors including the Motorola 68000 and 68HC11, the
Hitachi 6301, and Common Lisp. Behaviors are groups of
rules which are activatable by a number of different
schemes. There are no shared data structures across
behaviors, but instead all communication is by explicit
message passing. All rules are assumed to run in parallel
and asynchronously. It includes the earlier notions of
inhibition and suppression, along with a number of
mechanisms for spreading of activation.

We argue that generally accepted methodologies of
artificial intelligence research are limited in the
proportion of human level intelligence they can be expected
to emulate. We argue that the currently accepted
decompositions and static representations used in such
research are wrong. We argue for a shift to a process based
model, with a decomposition based on task achieving
behaviors as the organizational principle. In particular we
advocate building robotic insects.

The authors' effort to build a situated knowledge-based
system resulted in a cooperative hybrid system called DDT
(device diagnostic tool). DDT is a hybrid
symbolic/connectionist system, embodying cooperativity and
self-tuning capabilities, and is thus able to face the
problem of the model explosion cycle. The approach is
illustrated using a real-life expert system in the domain
of technical troubleshooting.

The paper proposes a set of principles and a general
architecture that may explain how language and meaning may
originate and complexify in a group of physically grounded
distributed agents. An experimental setup is introduced for
concretising and validating specific mechanisms based on
these principles. The setup consists of two robotic heads
that watch static or dynamic scenes and engage in language
games, in which one robot describes to the other what it
sees. The first results from experiments showing the
emergence of distinctions, of a lexicon, and of primitive
syntactic structures are reported.

Summary: The paper surveys some of the mechanisms that
have been demonstrated to be relevant for evolving
communication systems in software simulations or robotic
experiments. In each case, precursors or parallels with
work in the study of artificial life and adaptive behaviour
are discussed.

Summary: The problem of category learning has been
traditionally investigated by employing disembodied
categorization models. One of the basic tenets of embodied
cognitive science states that categorization can be
interpreted as a process of sensory-motor coordination, in
which an embodied agent, while interacting with its
environment, can structure its own input space for the
purpose of learning about categories. Many researchers,
including John Dewey and Jean Piaget, have argued that
sensory-motor coordination is crucial for perception and
for development. In this paper we give a quantitative
account of why sensory-motor coordination is important for
perception and category learning.

Summary: The paper describes a system for open-ended
communication by autonomous robots about event descriptions
anchored in reality through the robot's sensori-motor
apparatus. The events are dynamic and agents must
continually track changing situations at multiple levels of
detail through their vision system. We are specifically
concerned with the question of how grounding can become
through the use of external (symbolic) representations,
such as natural language expressions.

Keywords: Autonomous Robots; Event Descriptions; Open-Ended

[344]

J. Tani.
Learning to generate articulated behavior through the bottom-up and
the top-down interaction processes.
Neural Networks. The Official Journal of the International
Neural Network Society, European Neural Network Society, Japanese Neural
Network Society, 16(1):11-23, 2001.
[ bib ]


A behavior system consists of the components and control
programs necessary to establish a particular behavior in a
robotic agent. The paper proposes a mathematical approach
for the analysis of behavior systems. The approach rests on
viewing a behavior system as a dynamical system whose
equilibrium state is attained when the behavior it is
responsible for is achieved.

The paper describes how symbolic processes are
self-organized in the navigational learning of a mobile
robot. Based on a dynamical system's approach, the paper
shows that the forward modeling scheme based on recurrent
neural network (RNN) learning is capable of extracting
grammatical structure hidden in the geometry of the
workspace from navigational experience. This robot is
capable of mentally simulating its own actions using the
acquired forward model. The paper shows that such a mental
process by the RNN can naturally be situated with respect
to the behavioural contexts, provided that the forward
model learned is embedded on the global attractor. The
internal representation obtained is proved to be grounded,
since it is self-organized solely through interaction with
the physical world. The paper shows also that structural
stability arises in the interaction between the neural
dynamics and the environment dynamics, accounting for the
situatedness of the internal symbolic process.

The paper discusses a novel scheme for sensory-based
navigation of a mobile robot. In our previous work (Tani
& Fukumura, 1994, Neural Networks, 7(3), 553-563), we
formulated the problem of goal-directed navigation as an
embedding problem of dynamical systems: desired
trajectories in a task space should be embedded in an
adequate sensory-based internal state space so that a
unique mapping from the internal state space to the motor
command could be established. In the current formulation a
recurrent neural network is employed, which shows that an
adequate internal state space can be self-organized,
through supervised training with sensorimotor sequences.
The experiment was conducted using a real mobile robot
equipped with a laser range sensor, demonstrating the
validity of the presented scheme by working in a noisy
real-world environment.

This paper describes the use of the C4.5 decision tree
learning algorithm in the design of a classifier for a new
approach to the mapping of a mobile robot's local
environment. The decision tree uses the features from the
echoes of an ultrasonic array mounted on the robot to
classify the contours of its local environment. The
contours are classified into a finite number of two
dimensional shapes to form a primitive map which is to be
used for navigation. The nature of the problem, noise, and
the practical timing constraints distinguish it from those
typically used in machine learning applications and
highlights some of the advantages of decision tree learning
in robotic applications.

Ideally, sensory information forms the only source of information to a robot. We consider an algorithm for the self-organization of a controller. At short time scales the controller is merely reactive but the parameter dynamics and the acquisition of knowledge by an internal model lead to seemingly purposeful behavior on longer time scales. As a paradigmatic example, we study the simulation of an underactuated snake-like robot. By interacting with the real physical system formed by the robotic hardware and the environment, the controller achieves a sensitive and body-specific actuation of the robot.

Homeokinetic learning provides a route to the
self-organization of elementary behaviors in autonomous
robots by establishing low-level sensorimotor loops.
Strength and duration of the internal parameter changes
caused by the homeokinetic adaptation provide a natural
evaluation of external states, which can be used to
incorporate information from additional sensory inputs and
to extend the function of the low-level behavior to more
general situations. We illustrate the approach with two
examples, a mobile robot and a human-like hand, which are
driven by the same low-level scheme but use the
second-order information in different ways to achieve
either risk avoidance and unconstrained movement or
constrained movement. While the low-level adaptation
follows a set of rigid learning rules, the second-order
learning exerts a modulatory effect on the elementary
behaviors and on the distribution of their inputs.

The paper presents a method to guide the self-organised
development of behaviours of autonomous robots. In earlier
publications we demonstrated how to use the homeokinesis
principle and dynamical systems theory to obtain
self-organised, playful but goal-free behaviour. Now we
extend this framework by reinforcement signals. We validate
the mechanisms with two experiments with a spherical robot.
The first experiment aims at fast motion, where the robot
reaches on average about twice the speed of a
non-reinforced robot. In the second experiment spinning
motion is rewarded, and we demonstrate that the robot
successfully develops pirouettes and curved motion which
only rarely occur among the natural behaviours of the
robot.

Self-organization and the phenomenon of emergence play an
essential role in living systems and form a challenge to
artificial life systems. This is not only because systems
become more lifelike but also since self-organization may
help in reducing the design efforts in creating complex
behavior systems. The present paper exemplifies a general
approach to the self-organization of behavior which has
been developed and tested in various examples in recent
years. We apply this approach to a spherical robot driven
by shifting internal masses. The complex physics of this
robotic object is completely unknown to the controller.
Nevertheless, after a short time the robot develops
systematic rolling movements, covering large distances with
high velocity. In a hilly landscape it is capable of
manoeuvring out of the basins, and in landscapes with a
fixed rotational geometry the robot more or less adapts its
movements to this geometry - the controller, so to say,
develops a kind of feeling for its environment, although
there are no sensors for measuring the position or the
velocity of the robot. We argue that this behavior is a
result of the spontaneous symmetry-breaking effects which
are responsible for the emergence of behavior in our
approach.

Self-organization and the phenomenon of emergence play an
essential role in living systems and form a challenge to artificial life systems.
This is not only because systems become more lifelike, but also since self-organization
may help in reducing the design efforts in creating complex behavior systems.
The present paper studies self-exploration based on a general approach to the
self-organization of behavior, which has been developed and tested in various
examples in recent years. This is a step towards autonomous early robot development.
We consider agents under the close sensorimotor coupling paradigm with a certain
cognitive ability realized by an internal forward model.
Starting from tabula rasa initial conditions we overcome the bootstrapping problem and
show emerging self-exploration.
Apart from that, we analyze the effect of limited actions,
which lead to deprivation of the world model.
We show that our paradigm explicitly avoids this
by producing purposive actions in a natural way.
Examples are given using a simulated simple wheeled robot and a
spherical robot driven by shifting internal masses.

Despite the tremendous progress in robotic hardware and in
both sensory and computing efficiency, the performance of
contemporary autonomous robots is still far below that of
simple animals. This has triggered an intensive search for
alternative approaches to the control of robots. The
present paper exemplifies a general approach to the
self-organization of behavior which has been developed and
tested in various examples in recent years. We apply this
approach to an underactuated snake-like artifact with a
complex physical behavior which is not known to the
controller. Due to the weak forces available, the
controller, so to say, has to develop a kind of feeling for
the body, which is seen to emerge from our approach in a
natural way, with meandering and rotational collective
modes being observed in computer simulation experiments.