THIS IS AN OLD WEB SITE - NOT UPDATED SINCE ABOUT 2009
SEE THE LINKS BELOW TO MORE RECENT WORK

I stopped trying to obtain funding for this work several years ago.
It wasted too much time, and even if funding became available,
finding suitably educated researchers with the right sorts of
broad interests and the willingness to take risks with their own
careers would be too difficult. So I just get on with it, and anyone
interested can join in.

SOFTWARE TOOLS

SimAgent is a free, open-source toolkit for teaching and research on
architectures for more or less intelligent systems, built on the Free
Poplog system.

GOALS

The main goal of the CogAff project was to understand the types of
architectures that are capable of accounting for the whole range of
human (and non-human) mental states and processes, including not only
intelligent capabilities, such as the ability to learn to find your way
in an unfamiliar town and the ability to think about infinite sets, but
also moods, emotions, desires, and the like. For instance, we have
investigated whether the ability to have emotional states is an
accident of animal evolution or an inevitable consequence of design
requirements and constraints, for instance in resource-limited
intelligent robots.

We also hoped to show that many of our current mental concepts (including
"emotion", "consciousness" and many others) are inherently ambiguous
"cluster concepts" which can be refined and clarified by deriving more
precise and richer families of concepts from specifications of
architectures able to support human-like mental states and processes.

However, it is not possible to understand one type of mind fully without
understanding how it is similar to and how it differs from others, and
what the implications of those similarities and differences are. So the
study of human minds has to be part of a larger investigation, including
various kinds of animals, possible robots, and even minds which might
in principle have evolved but did not.

An example of the "architecture-based" approach is the realisation
that there are at least three very different kinds of emotions, related
to the different architectural layers described below. In particular,
there are primary and secondary emotions, shared to varying degrees with
other animals, and tertiary emotions, which use an architectural layer
that is perhaps found only in (adult) humans and a few other animals
(orangutans, chimpanzees and bonobos, perhaps, and possibly others).

The work also has implications for theories of different kinds of
consciousness, the nature of vision, forms of representation, varieties
of learning and development, and possible evolutionary trajectories. In
particular, we attempt to understand the trade-offs which led to the
evolution of hybrid multi-layer information processing architectures
implemented in human brains.

SUB-TOPICS

Our study of design principles for intelligent autonomous agents,
whether natural or artificial, includes the following topics:

The ontology of a human-like mind: what sorts of states,
properties, processes, capabilities can occur in various sorts of
minds, e.g. beliefs, desires, decisions, deliberation, intentions,
plans, suppositions, idle wishes, preferences, ambitions, motive
generators, personalities, emotions, moods, loss of control of
attention, and states and processes possible only in minds that are
unlike ours.

What kinds of architectures can support biological and artificial agents
with different kinds of intelligence?
This requires a study of "design space" and "niche space" and their
relationships, including the different sorts of trajectories possible in
these spaces, e.g. for an individual, for a naturally evolving species,
or for a system explicitly modified or repaired by an engineer.

To what extent do humans and other agents have simple and uniform
architectures,
and to what extent do they have hybrid architectures, e.g.
combining neural nets interacting with symbolic reasoning systems? Is a
human brain an unintelligibly complex morass of mechanisms, or is there
sufficient modularity of design to enable us to attain at least a partial
understanding of how we work?

How do various kinds of mental states in humans (and presumably other
intelligent agents) arise out of the architecture (e.g. emotional
states)?
What is the relationship between the philosopher's notion of a
mind SUPERVENING ON a body and the engineer's notion of a
virtual machine being IMPLEMENTED IN a physical machine?

What forms of motivation are there
(desires, wishes, pleasures,
dislikes, etc.), how are they generated, and how are they managed in
autonomous agents? What sorts of motivation can be generated within
different sorts of architectures? Is there a distinction between motives
that come from within an individual and those which are produced
entirely by "external" causes?

What kinds of learning and development
are possible in agents with
different sorts of architectures? This includes processes such as
acquiring new facts, new rules for internal or external
behaviour, new forms of representation, new links between components of
the architecture, and adding new sub-systems to the architecture. (A
newborn infant doesn't have the same architecture as an adult.)

How can we design interacting communicating agents?
How are the
possibilities for communication, cooperation, competition and
other interactions BETWEEN agents related to the architectures WITHIN
those agents?

How can resource-limited agents cope with time pressures and limited
knowledge in their deliberations?
If it is hard to get the design of
such a deliberative mechanism optimised for all situations, would it be
useful to have a higher order meta-management mechanism able to observe,
evaluate and modify the deliberative mechanisms?

Can we evolve artificial human-like architectures using genetic
algorithms and genetic programming, or similar techniques?
What are the
requirements for such evolution to succeed within a reasonable amount of
time, given the astronomical size of the search space in which complex
designs are embedded? Is such evolution truly a matter of random change
with selection-driven hill-climbing (a toy sketch of that baseline
follows this list), or are there more subtle knowledge-based control
mechanisms implicit in some of the processes (especially when
co-evolution is involved)?

Will the study of natural evolution and the study of artificial
evolution be mutually informative?
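
To make the "random change with selection-driven hill-climbing"
baseline concrete, here is a toy sketch in Python. Everything in it
(the bit-string genome, the all-ones "niche", the parameters) is an
invented placeholder, not project code: evolving a real architecture
would need a vastly richer genotype and fitness measure.

    import random

    GENOME_LEN = 40            # hypothetical size of a "design" description
    POP_SIZE = 60
    GENERATIONS = 200
    TARGET = [1] * GENOME_LEN  # stand-in "niche": all-ones is maximally fit

    def fitness(genome):
        # How well the design fits the niche: here, just matching bits.
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def mutate(genome, rate=0.02):
        # Random change: occasionally flip a bit.
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        # Recombine two parent designs at a random cut point.
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            break
        parents = population[:POP_SIZE // 2]  # selection: keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("best fitness after", gen + 1, "generations:",
          fitness(max(population, key=fitness)))

Even this toy shows why the size of the search space matters: with 40
bits there are already 2^40 possible designs, and mutation without
selection would almost never find the target in a reasonable time.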

ARCHITECTURAL LAYERS

We have conjectured that human-like
architectures require several different sorts of concurrently acting
sub-architectures to coexist and collaborate, including a "reactive"
layer, a "deliberative" layer and a "meta-management" layer, along with
one or more global "alarm mechanisms", a long term associative store
(e.g. for answering "what if?" questions), various motive generating
mechanisms, and layered perception and action mechanisms operating at
different levels of abstraction.
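
As a rough illustration only, the following Python skeleton shows one
way such layers could be interleaved within a single agent. It is a
hypothetical sketch, not the SimAgent toolkit's design: every class,
method and condition is invented, and a realistic system would run the
layers genuinely concurrently rather than in a fixed order within one
step.

    class Agent:
        def __init__(self):
            self.percepts = []   # incoming sensor data
            self.motives = []    # goals produced by motive generators
            self.plan = []       # current deliberative plan
            self.trace = []      # record of actions, for meta-management

        def alarm(self):
            # Fast but crude global override: pattern-driven, no analysis.
            if "looming-shape" in self.percepts:
                self.plan = []
                self.act("freeze")

        def reactive_layer(self):
            # Fast, automatic condition-action behaviour; no look-ahead.
            if "obstacle" in self.percepts:
                self.act("swerve")

        def deliberative_layer(self):
            # Slower: builds and follows plans, can consider "what if?".
            if self.motives and not self.plan:
                self.plan = self.make_plan(self.motives.pop(0))
            if self.plan:
                self.act(self.plan.pop(0))

        def meta_management_layer(self):
            # Observes and evaluates the other layers' recent behaviour,
            # here discarding a plan if the agent keeps being interrupted.
            if self.trace[-3:].count("swerve") == 3:
                self.plan = []

        def make_plan(self, goal):
            return ["step-towards-" + str(goal)]  # placeholder planner

        def act(self, action):
            self.trace.append(action)

        def step(self, percepts):
            self.percepts = percepts
            self.alarm()
            self.reactive_layer()
            self.deliberative_layer()
            self.meta_management_layer()

    agent = Agent()
    agent.motives.append("food")
    agent.step(["obstacle"])
    print(agent.trace)   # -> ['swerve', 'step-towards-food']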

The different components of the architecture will have evolved at
different times under the influence of different sorts of evolutionary
pressures and will be subject to different sorts of constraints and
tradeoffs. E.g. the global alarm mechanism may have to sacrifice
accuracy and correctness for speed. This may be why many emotional
reactions are inappropriate.

BROAD AND SHALLOW ARCHITECTURES

Like the OZ project of Bates and colleagues at CMU (see below),
we aim to start with "broad but shallow" architectures. That
is, the architectures should accommodate and integrate a wide range of
functions, such as vision and other forms of perception, various kinds
of action, motivation, various kinds of learning, skilled "automatic"
behaviour, explicitly planned behaviour, various kinds of problem
solving, planning, self-awareness, self-criticism, changing moods, etc.

A "broad" architecture contrasts with "deep and narrow" systems, like
most AI systems, e.g. systems to analyse images, or understand
sentences, or solve mathematical problems, or make plans, etc.

It may be necessary for a while to tolerate relatively shallow and
simplified components as we explore the problems of putting lots of
different components together. Later we can gradually add depth and
realism to the systems we build. Shallowness is not an end in itself.

EMERGENCE

It is to be expected in this context that many aspects of human minds
will not be a product of explicit mechanisms which evolved to produce
those capabilities, but will "emerge" as side-effects of the interaction
of many mechanisms whose primary function is different. For example, we
hope to show that certain motivational and emotional states and
processes for which other researchers postulate explicit rules, can
instead emerge from deeper and more general mechanisms in
resource-limited agents. Thus the ability to feel humiliation is not the
product of a humiliation module or any specific emotion module, but a
side-effect of the interaction between many different modules.
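
The point can be made with a deliberately tiny sketch (in Python, with
every name and number invented for illustration). Nothing below
implements an emotion: there is only a motive generator, an interrupt
filter with a threshold, and a single attention slot. Yet when
insistent motives keep breaching the filter, the agent's plan stalls,
and an observer might describe the resulting loss of control of
attention as an emotional state.

    import random

    def run(steps=40, threshold=0.5, seed=1):
        rng = random.Random(seed)
        plan_progress = 0
        stalled = 0
        for _ in range(steps):
            insistence = rng.random()   # motive generator's output
            if insistence > threshold:  # interrupt filter is breached
                stalled += 1            # attention captured, plan stalls
            else:
                plan_progress += 1      # attention stays on the plan
        print(f"plan advanced {plan_progress} steps, "
              f"stalled {stalled} steps")

    run()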

Others involved in the more general Cognition and Affect project are
listed here.

OTHER INFORMATION

There is an email list for discussions of cognition and affect (e.g.
architectures of autonomous agents, and processes involving motivation
and emotions). If you wish to join, send a message addressed to
majordomo@cs.bham.ac.uk
containing just one line

subscribe cognition_affect

After that you can send a message to the list itself,
cognition_affect@cs.bham.ac.uk, giving information about yourself -
who you are, what you do, where you are, and your email address.

Please do not advertise the list to other list managers: we are all
bombarded with too many irrelevant announcements.