We are investigating principles for designing or explaining
architectures for "whole" intelligent agents, combining many kinds of
functionality, whether natural or artificial.

Multiple approaches to mind

We try to approach an understanding of human-like
minds gradually, from several directions.

Much of the work is theoretical: informed by cross-disciplinary
themes from philosophy, AI, psychology, ethology,
neuroscience and biology.
For example:

Philosophical techniques help us detect, analyse, and, we hope,
remove or reduce conceptual confusions.

AI research, including robotics,
extends our ideas about varieties of mechanisms, varieties of
forms of information, varieties of non-obvious design
problems needing to be solved, reasons why specific mechanisms have
proved inadequate, and why others succeed on particular tasks.

Psychology, including developmental and clinical psychology,
provides a wealth of information about what humans are and can do, and a
multitude of examples of failed explanations.

Ethology extends this with information about other animals, some
like humans, but most very different.

Neuroscience helps to identify possible low level mechanisms that
can account for some of the higher level functionality, and helps us to
identify crucial questions that neuroscience cannot answer because its
conceptual tools are inappropriate.

Biology provides information about the huge variety of organisms and
the niches they inhabit, and about the huge variety of sub-organism
mechanisms required for individual survival, development and
reproduction. Evolutionary biology may provide constraints on theories
about how minds evolved, and may be informed by theories concerning
possible forms of information processing mechanisms and architectures.

TOPICS WE STUDY

It is not easy to give a simple high level definition of the work in the
Birmingham Cognition and Affect project, but it does include at least
the following:

Philosophical foundations of computing and AI

Architectures for intelligent agents of various sorts
(natural and artificial)

Architectures for human-like vision systems: multi-layered visual
perception and the
perception of affordances.

Diagrammatic/spatial reasoning and how these relate to visual
perception.

Consciousness - what it is and isn't, and how various types
might have evolved (consciousness in humans, in other animals,
and in machines)

Evolution and Co-evolution
(Evolution via concurrent interacting trajectories
in "design space" and "niche space": a non-mystical version of
Gaia? Co-evolution of different parts of the same architecture, e.g.
perceptual modules that serve the needs of higher level central modules
co-evolved with the central modules. Likewise high level action
modules.)

Evolvable architectures for human-like minds. (The evolutionary history
of an architecture has implications for what is in the architecture.)

The use of software tools in education: e.g. teaching students to think
about how complex systems work by designing, testing, debugging and
documenting working systems.

Philosophy of mind

Philosophy of Artificial Intelligence

Philosophy of Cognitive Science

Ontology

The nature of causation and whether events in virtual machines can be
real causes
(including "downward" causation)

Philosophy of Computation
E.g. the nature of virtual machines, and how the implementation of
virtual machines in physical machines is related to the philosopher's
notion of supervenience of minds on matter.

Emergence of various kinds

The development of different attachment patterns in infants (PhD work by
Dean Petters).

Since September 2004 this work has been substantially extended by the
EC-funded CoSy project (Cognitive Systems for Cognitive Assistants),
an attempt to understand how to design and build a working robot which
combines a wide range of capabilities normally investigated separately,
including perception, learning, reasoning, planning, communicating, and
self-understanding.

Science as well as engineering

Unlike many researchers on agent architectures, whose aim is primarily
to solve some engineering problem, our motivation is mainly to
understand the functioning of human minds in the context of a broader
study of possible designs for behaving systems, natural and artificial,
including processes of learning, development and evolution.

We believe that understanding the broader class of designs for minds is
essential for understanding the special features of human minds.
To understand something you need to understand not only what it is, but
also in which respects it might have been different, and what
differences those differences would have made.

Practical activities

Some of the work is practical: building actual examples of agents or
groups of agents with interesting architectures, both in order to help
us understand the problems and in order to demonstrate
some design ideas.
Eventually some of the implemented examples may be of practical as well
as theoretical use.

To help with this practical work we have designed and implemented a
sophisticated, and unusually flexible toolkit for implementing agents
composed of multiple interacting mechanisms performing different tasks
concurrently, e.g. perception, reasoning, learning, modifying motives,
producing emotions, making plans, executing plans, etc. This toolkit
is freely available on the Web with full source code, here:
http://www.cs.bham.ac.uk/research/poplog/packages/simagent.html
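The toolkit itself is written in Pop-11; as a rough illustration of the style of agent it supports, here is a minimal sketch in Python of a scheduler giving each module of each agent one step per cycle, with modules communicating through a shared database. All class, function and key names here are invented for illustration and are not part of the toolkit's API.

```python
# Sketch of a SIM_AGENT-style scheduler: each agent is a collection of
# modules (perception, reasoning, action, ...) that run pseudo-concurrently,
# one step per module per cycle, communicating via a shared database.
# All names are hypothetical illustrations, not the toolkit's actual API.

class Agent:
    def __init__(self, name):
        self.name = name
        self.database = {}   # shared workspace read and written by modules
        self.modules = []    # callables: module(agent) -> None

    def add_module(self, module):
        self.modules.append(module)

def run_scheduler(agents, cycles):
    """Give every module of every agent one step per cycle."""
    for _ in range(cycles):
        for agent in agents:
            for module in agent.modules:
                module(agent)

# Toy modules: perception accumulates percepts, deliberation adopts a
# goal once enough has been perceived, action executes the goal.
def perceive(agent):
    agent.database["percepts"] = agent.database.get("percepts", 0) + 1

def decide(agent):
    if agent.database.get("percepts", 0) >= 3:
        agent.database["goal"] = "act"

def act(agent):
    if agent.database.get("goal") == "act":
        agent.database["actions"] = agent.database.get("actions", 0) + 1

a = Agent("demo")
for m in (perceive, decide, act):
    a.add_module(m)
run_scheduler([a], cycles=5)
```

The point of the design is that no module ever blocks the others: each gets a bounded slice of processing per cycle, so perception, deliberation and action proceed concurrently in simulated time.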

Initially our simulated agents inhabit only simple simulated worlds,
though our work is largely inspired by much of what is known about real
animals (especially humans) in real physical and social environments.

We shall gradually make the simulations more and more complex as our
understanding and our resources grow, and as our theories develop
so as to explain an increasingly wide range of phenomena in increasingly
rich detail.

Potential applications

Although our primary goals are to advance our scientific and
philosophical understanding of the nature of mind whether in biological
systems or in robots, and how various kinds of minds can be implemented,
this research also has many potential applications, in the long term,
including the design of intelligent software of many kinds (e.g. factory
controllers, personal assistants, teaching systems, hazard warning
systems, aids to managing disasters, etc.), the design of more
"believable" agents in computer games and entertainments, and perhaps
robots to help the aged and infirm lead relatively independent lives.

The work may also achieve the practical goal of helping us understand
better how people work, how they learn, and how they go wrong, which
could have profound implications for education and therapy, and for
improving the quality of life generally.

In particular our survey of various kinds of architectures and how they
might have evolved should shed light on problems of controlling
attention in resource-limited agents (natural or artificial), and
explain how various kinds of emotional states are possible.
We have shown how at least three different sorts of emotions
(primary emotions, secondary emotions and tertiary emotions) can
arise in agents with the sorts of layered architectures we have been
studying. Further, more detailed, architecture-based classifications of
affective states will follow from an analysis of the states and
processes supported by our hypothesised architectures.

These emotional capabilities are shared to varying degrees with other
animals, depending on their information processing architectures.
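The architecture-based classification above can be illustrated with a small sketch, assuming the project's three-layer picture (reactive, deliberative, meta-management) in which primary, secondary and tertiary emotions arise in the respective layers. The trigger conditions and event names below are invented for illustration, not the project's actual model.

```python
# Illustrative sketch: a three-layer agent where each layer can raise an
# "alarm" corresponding to one sort of emotion. Primary emotions arise in
# the reactive layer, secondary emotions in the deliberative layer, and
# tertiary emotions when meta-management loses control of attention.
# All trigger conditions here are hypothetical examples.

def reactive_layer(percept):
    # Primary emotion: fast, pattern-triggered (e.g. being startled).
    if percept == "sudden-loud-noise":
        return ("primary", "startle")
    return None

def deliberative_layer(predicted_outcome):
    # Secondary emotion: triggered by evaluating what might happen.
    if predicted_outcome == "plan-likely-to-fail":
        return ("secondary", "apprehension")
    return None

def meta_management_layer(attention_requests):
    # Tertiary emotion: attention repeatedly captured against intentions.
    if attention_requests.count("intrusive-thought") >= 3:
        return ("tertiary", "perturbance")
    return None

events = [
    reactive_layer("sudden-loud-noise"),
    deliberative_layer("plan-likely-to-fail"),
    meta_management_layer(["intrusive-thought"] * 4),
]
emotions = [e for e in events if e is not None]
```

On this view an animal lacking a deliberative or meta-management layer simply has no mechanism in which secondary or tertiary emotions could occur, which is why the capabilities vary across species with their architectures.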

Gerd Ruebenstrunk has written a survey entitled Emotional Computers,
available in German and English, which includes an overview of much of
our work (up to 1988).

Related work by Marvin Minsky and Push Singh

Marvin Minsky's draft book The Emotion Machine, and other papers
available at his web site
http://www.media.mit.edu/~minsky/
develop many ideas closely related to ours, from which we have learnt
much.

Push Singh's PhD thesis is also very closely related; it is available
in two formats.

This includes understanding the relations between machines of various
sorts, e.g. explaining how a particular sort of information processing
machine may "inhabit" various types of physical machine, which leads to
new ideas about the nature of consciousness.

Collaboration is always welcome, as is critical comment.

FURTHER INFORMATION

The Leverhulme-funded project on
Evolvable virtual information processing architectures for human-like
minds
lasted from October 1999 to June 2003.
See this web site:
http://www.cs.bham.ac.uk/~axs/lev/