PAPERS IN THE COGNITION AND AFFECT DIRECTORY
Produced or published in the period 1996-1999 (Approximately)
(Latest first)

Most of the papers listed here are in compressed or uncompressed
postscript format. Some are latex or plain ascii text. Some of the
postscript files are duplicated in PDF format.
For information on free browsers for these formats see
http://www.cs.bham.ac.uk/~axs/browsers.html

This abstract was included in the 'Philosophy' section of the
proceedings of this conference:
Toward a Science of Consciousness 1998
"Tucson III"
April 27-May 2, 1998
Tucson, Arizona
All the abstracts are available online.

Abstract:
A decision made by an autonomous system to adjust its autonomy status (e.g.
override manual control) must be based on
reliable information. In particular, the system's anomaly-detection mechanisms
must be intact. To ensure this, a high degree
of self-monitoring (reflective
coverage) is necessary. We propose a distributed reflective system, where the
participating agents monitor each other's performance and software execution
patterns. We focus on two things: monitoring of the anomaly-detection
components of an agent (which we call meta-observation) and evaluating the
"quality" of the agent's actions (does it make the world better or worse?).
Using a simple scenario, we argue that these features can
enhance the reliability of autonomy adjustment.
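
A minimal Python sketch of the two checks described above (all names,
classes and threshold values are invented for illustration, not taken
from the paper):

    class Agent:
        def __init__(self, name):
            self.name = name

        def detect_anomaly(self, reading):
            # Toy anomaly detector: flag readings outside an expected band.
            return not (0.0 <= reading <= 1.0)

    def meta_observe(observer, target):
        """Meta-observation: probe the target's detector with inputs whose
        correct classification is known; an intact detector gets both right."""
        return target.detect_anomaly(5.0) and not target.detect_anomaly(0.5)

    def quality(world_before, world_after):
        # Quality evaluation: did the action make the world better or worse?
        return world_after - world_before

    a, b = Agent("A"), Agent("B")
    if meta_observe(a, b):
        print("B's anomaly detection looks intact; autonomy adjustment allowed")
    print("action quality:", quality(0.3, 0.5))   # positive: world improved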

Filename: kennedy.immune0.ps
Filename: kennedy.immune0.pdf
Title: Evolution of Self-Definition
In Proceedings of the IEEE International Conference on Systems, Man and
Cybernetics Invited Track on "Immune Systems: Modelling and Simulation",
San Diego, October 1998.
Author: C. Kennedy
Date of paper: October 1998

Abstract:
When considering an architecture for an artificial immune system, it is
generally agreed that discrimination between self and non-self is required.
With current immune system models, the definition of "self" is usually
concerned with patterns associated with normal usage. However, this has the
disadvantage that the discrimination process itself may be disabled by a virus
and there is no way to detect this because the algorithms controlling the
pattern recognition are not included in the self-definition. To avoid an
infinite regress of increasingly higher levels of reflection, we propose a
model of mutual reflection based on a multi-agent network where each agent
monitors and protects a subset of other agents and is itself monitored and
protected by them. The whole network is then the self-definition. The paper
presents a conceptual framework for the evolution of algorithms to enable
agents in the network to become mutually protective. If there is no critical
dependence on a global management component, this property of symbiosis can
lead to a more robust form of distributed self-nonself distinction.
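
The monitoring topology can be sketched in a few lines of Python (a toy
ring of four agents, purely illustrative; the paper itself proposes
evolving such arrangements rather than fixing them by hand):

    agents = ["a0", "a1", "a2", "a3"]
    # Each agent watches the next one round the ring, so no global
    # management component is needed.
    monitors = {a: [agents[(i + 1) % len(agents)]]
                for i, a in enumerate(agents)}

    def covered(agent):
        # An agent is covered if at least one peer watches it.
        return any(agent in watched
                   for watcher, watched in monitors.items()
                   if watcher != agent)

    # The "self" is the whole network: every member is watched by a peer.
    assert all(covered(a) for a in agents)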

Abstract:
The autonomy of a system can be defined as its capability to recover
from unforeseen difficulties without any user intervention.
This thesis proposal addresses a small part of this problem, namely the
detection of anomalies within a system's own operation by the system
itself. It is a response to a challenge presented by immune systems
which can distinguish between "self" and "nonself", i.e. they can
recognise a "foreign" pattern (due to a virus or bacterium) as different
from those associated with the organism itself, even if the pattern was
not previously encountered. The aim is to apply this requirement to an
artificial system, where "nonself" may be any form of deliberate
intrusion or random anomalous behaviour due to a fault. When designing
reflective architectures or self-diagnostic systems,
it is simpler to rely on a single coordination mechanism to make the
system work as intended. However, such a coordination mechanism cannot
be inspected or repaired by the system itself, which means that there is
a gap in its reflective coverage. To try to overcome this limitation,
this thesis proposal suggests a conceptual framework based on a network
of agents where each agent monitors the whole network from a unique and
independent perspective and where the perspectives are not globally
"managed". Each agent monitors the fault-detection capability and
control algorithms of other agents (a process called meta-observation).
In this way, the agents can collectively achieve reflective coverage of
failures.

Abstract:
Patrice Terrier asks and Aaron Sloman attempts to answer questions about
AI, about emotions, about the relevance of philosophy
to AI, about Poplog, Sim_agent and other tools.
(EACE =
European Association for Cognitive Ergonomics.)

This paper has been superseded by a longer revised version with
the same name in Cognitive Processing, Vol 1, 2001, pp 1-22,
(Summer 2001), available online.

(Originally presented at
I3 Spring Days Workshop
on Behavior planning for life-like
characters and avatars Sitges, Spain, March 1999)

Author: Aaron Sloman
Date: 3 Aug 1999

Abstract:

There is much shallow thinking about emotions, and a huge diversity of
definitions of "emotion" arises out of this shallowness. Too often the
definitions and theories are inspired either by a mixture of
introspection and selective common sense, or by a misdirected
neo-behaviourist methodology, attempting to define emotions and other
mental states in terms of observables. One way to avoid such
shallowness, and perhaps achieve convergence, is to base concepts and
theories on an information processing architecture, which is subject to
various constraints, including evolvability, implementability, coping
with resource-limited physical mechanisms, and achieving required
functionality. Within such an architecture-based theory we can
distinguish primary emotions, secondary emotions, and tertiary emotions,
and produce a coherent theory which not only explains a wide range of
phenomena but also partly explains the diversity of theories: most of
them focus on only a subset of types of emotions.

Abstract: (This was a short abstract. See later version)
Because we apparently have direct access to the phenomena, it is
tempting to think we know exactly what we are talking about when we
refer to consciousness, experience, the "first-person" viewpoint, etc.
But this is as mistaken as thinking we fully understand what
simultaneity is just because we have direct access to the phenomena, for
instance when we see a flash and hear a bang simultaneously.

Einstein taught us otherwise. From the fact that we can recognise some
instances of a concept it does not follow that we know what is meant in
general by saying that something is or is not an instance. Endless
debates about which animals and which types of machines have
consciousness are among the many symptoms that our concepts of mentality
are more confused than we realise.

Too often people thinking about mind and consciousness consider only
adult human minds in an academic culture, ignoring people from other
cultures, infants, people with brain damage or disease, insects, birds,
chimpanzees and other animals, as well as robots and software agents in
synthetic environments. By broadening our view, we find evidence for
diverse information processing architectures, each supporting and
explaining a specific combination of mental capabilities.

When concepts connote complex clusters of capabilities, different
subsets may be present at different stages of development of a species
or an individual. Very different subsets may be found in different
species. Different subsets may be impaired by different sorts of brain
damage or degeneration. When we know what sorts of components are
implicitly referred to by our pre-theoretic "cluster concepts" we can
then define new more precise concepts in terms of different subsets. It
helps if we can specify the architectures which generate different
subsets of information processing capabilities. That also enables us to
ask new, deeper, questions not only about the development of individuals
but about the evolution of mentality in different species.

Architecture-based concepts generated in the framework of virtual
machine functionalism subvert familiar philosophical thought experiments
about zombies, since attempts to specify a zombie with the right kind
of virtual machine functionality, but lacking our mental states,
degenerate into incoherence when spelled out in great detail. When you
have fully described the internal states, processes, dispositions and
causal interactions within a zombie whose information processing
functions are alleged to be exactly like ours, the claim that something
might still be missing becomes incomprehensible.

Abstract:
(Intended as a partial antidote to widespread shallow views about
emotions, and over-simplified ontologies too easily accepted by AI and
HCI researchers now becoming interested in intelligence and affect.)

Our everyday attributions of emotions, moods, attitudes, desires, and
other affective states implicitly presuppose that people are information
processors. To long for something you need to know of its existence, its
remoteness, and the possibility of being together again. Besides these
semantic information states, longing also involves a
control state. One who has deep longing for X does not merely
occasionally think it would be wonderful to be with X. In deep longing
thoughts are often uncontrollably drawn to X.

We need to understand the architectural underpinnings of control of
attention, so that we can see how control can be lost. Having control
requires being able to some extent to monitor one's thought processes,
to evaluate them, and to redirect them. Only "to some extent" because
both access and control are partial. We need to explain why. (In
addition, self-evaluation can be misguided, e.g. after religious
indoctrination!)

"Tertiary emotions" like deep longing are different from "primary"
emotions (e.g. being startled or sexually aroused) and "secondary
emotions" (e.g. being apprehensive or relieved) which, to some extent,
we share with other animals. Can chimps, bonobos or human toddlers have
tertiary emotions? To clarify the empirical questions and explain the
phenomena we need a good model of the information processing
architecture.

Conjecture: various modules in the human mind (perceptual, motor, and
more central modules) all have architectural layers that evolved at
different times and support different kinds of functionality, including
reactive, deliberative and self-monitoring processes.

Different types of affect are related to the functioning of these
different layers: e.g. primary emotions require only reactive layers,
secondary emotions require deliberative layers (including "what if"
reasoning mechanisms) and tertiary emotions (e.g. deep longing,
humiliation, infatuation) involve additional self evaluation and self
control mechanisms which evolved late and may be rare among animals.
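
The layered conjecture can be summarised as a small lookup, sketched
here in Python (the layer names follow the abstract; the function and
data structure are invented for illustration):

    REQUIRES = {
        "primary":   {"reactive"},
        "secondary": {"reactive", "deliberative"},
        "tertiary":  {"reactive", "deliberative", "meta-management"},
    }

    def possible_emotions(layers):
        """Which classes of emotion could an architecture with these
        layers support, on this conjecture?"""
        return [kind for kind, needed in REQUIRES.items()
                if needed <= set(layers)]

    print(possible_emotions({"reactive"}))                  # ['primary']
    print(possible_emotions({"reactive", "deliberative"}))  # ['primary', 'secondary']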

An architecture-based framework can bring some order into the morass of
studies of affect (e.g. myriad definitions of "emotion"). This will help
us understand which kinds of emotions can arise in software agents that
lack the reactive mechanisms required for controlling a physical body.

HCI Designers need to understand these issues (a) if they want to model
human affective processes, (b) if they wish to design systems which
engage fruitfully with human affective processes, (c) if they wish to
produce teaching/training packages for would-be counsellors,
psychotherapists, psychologists.

Filename: Sloman.kd.pdf
Title: Architectural Requirements for Human-like Agents Both Natural and
Artificial.
(What sorts of machines can love?)
To appear in
Human Cognition And Social Agent Technology
Ed. Kerstin Dautenhahn, in the
"Advances in Consciousness Research" series, John Benjamins Publishing
Extended version of slides on love for
"Voice box" talk, presented in London (below)
Authors: Aaron Sloman
Date: 10 Jan 1999 (Book Published, March 2000)

Abstract:
This paper, an expanded version of a talk on love given to a literary
society, attempts to analyse some of the architectural requirements for
an agent which is capable of having primary, secondary and tertiary
emotions, including being infatuated or in love. It elaborates on work
done previously in the Birmingham Cognition and Affect group, describing
our proposed three level architecture (with reactive, deliberative and
meta-management layers), showing how different sorts of emotions relate
to those layers.

Some of the relationships between emotional states involving partial
loss of control of attention (e.g. emotional states involved in being in
love) and other states which involve dispositions (e.g. attitudes such
as loving) are discussed and related to the architecture.

The work of poets and playwrights can be shown to involve an implicit
commitment to the hypothesis that minds are (at least) information
processing engines. Besides loving, many other familiar states and
processes such as seeing, deciding, wondering whether, hoping,
regretting, enjoying, disliking, learning, planning and acting all
involve various sorts of information processing.

By analysing the requirements for such processes to occur, and relating
them to our evolutionary history and what is known about animal brains,
and comparing this with what is being learnt from work on artificial
minds in artificial intelligence, we can begin to formulate new and
deeper theories about how minds work, including how we come to think
about qualia, many forms of learning and development, and results of
brain damage or abnormality.

But there is much prejudice that gets in the way of such theorising, and
also much misunderstanding because people construe notions of
"information processing" too narrowly.

This entry has been moved.
Title: Towards a Grammar of Emotions,
in New Universities Quarterly, 36,3, pp 230-238, 1982.
Authors: Aaron Sloman

Abstract:
A discussion of some of the commonalities between brains and computers
as physical systems within which information processing machines can be
implemented. Includes a distinction between machines which manipulate
energy and forces, machines which manipulate matter and machines which
process information. Concludes that we still have much to learn about
computers and brains, and although it seems likely that brains are
computers we don't yet know what sorts of computers they are.

Abstract:
The HTML file is the abstract for an invited talk at the
DIGITAL BIOTA 2
Conference
The .ps and .pdf files are postscript and PDF files containing
slightly extended versions of the slides I presented at the conference.

Abstract:
This review summarises the main themes of Picard's book, some of which
are related to Damasio's ideas in Descartes' Error. In
particular, I try to show that not all secondary emotions need manifest
themselves via the primary emotion system, and therefore they will not
all be detectable by measurements
of physiological changes. I agree with
much of the spirit of the book, but disagree on detail.
NOTE: Rosalind Picard's reply to this review is available online
at
http://www.findarticles.com/cf_dls/m2483/1_20/54367782/p1/

Abstract:
The paper "What's an AI toolkit for", presented at
AAAI-98 Workshop on Software Tools for Developing Agents
at AAAI98 in Madison, USA, July 1998, is listed below. This file
contains the slides (two slides per A4 page) prepared for the
presentation.

Abstract:
Clearly we can solve problems by thinking about them. Sometimes we have
the impression that in doing so we use words, at other times diagrams or
images. Often we use both. What is going on when we use mental diagrams
or images? This question is addressed in relation
to the more general multi-pronged question: what are representations,
what are they for, how many different types are they, in how many
different ways can they be used, and what difference does it make
whether they are in the mind or on paper? The question is related to
deep problems about how vision and spatial manipulation work. It is
suggested that we are far from understanding what's going on. In
particular we need to explain how people understand spatial structure
and motion, and I'll try to suggest that this is a problem with hidden
depths, since our grasp of spatial structure is inherently a grasp of a
complex range of possibilities and their implications. Two
classes of examples discussed at length illustrate requirements for
human visualisation capabilities. One is the problem of removing
undergarments without removing outer garments. The other is thinking
about infinite discrete mathematical structures.

Abstract:
This paper attempts to characterise a unifying overview of the practice
of software engineers, AI designers, developers of evolutionary forms of
computation, designers of adaptive systems, etc. The topic overlaps with
theoretical biology, developmental psychology and perhaps some aspects
of social theory. Just as much of theoretical computer science follows
the lead of engineering intuitions and tries to formalise them, there
are also some important emerging high level cross disciplinary ideas
about natural information processing architectures and evolutionary
mechanisms that can perhaps be unified and formalised in the future.
There is some speculation about the evolution of human cognitive
architectures and consciousness.

Abstract:
This paper discusses some of the requirements for the control
architecture of an intelligent human-like agent with multiple
independent dynamically changing motives in a dynamically changing,
only partly predictable world. The architecture proposed includes a
combination of reactive, deliberative and meta-management mechanisms
along with one or more global "alarm" systems. The engineering design
requirements are discussed in relation to our evolutionary history,
evidence of brain function and recent theories of Damasio and others
about the relationships between intelligence and emotions.
(The paper was completed in haste for a deadline and I forgot to
explain why Descartes was in the title. See Damasio 1994.)
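
A hedged Python sketch of the "alarm" idea (threshold, percepts and
plan steps are all invented): a fast reactive check runs on every cycle
and can cut short slow deliberation.

    def alarm(percepts):
        # Fast, pattern-based check, evaluated on every cycle.
        return percepts.get("threat", 0.0) > 0.8

    def deliberate(step):
        return f"plan-step-{step}"

    percept_stream = [{"threat": 0.1}, {"threat": 0.2}, {"threat": 0.9}]
    for step, percepts in enumerate(percept_stream):
        if alarm(percepts):
            print("ALARM: abort deliberation, trigger reactive response")
            break
        print("deliberating:", deliberate(step))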

Abstract:
To select an appropriate tool or tools to build an agent-based system we need
to map from features of agent systems to implementation technologies. In this
paper we propose a simple scheme for classifying agent systems. Starting from
the notion of an agent as a cluster concept, we motivate an approach to
classification based on the identification of features of agent systems, and
use this to generate a high level taxonomy. We illustrate how the scheme can
be applied by means of some simple examples, and argue that our approach can
form the first step in developing a methodology for the selection of
implementation technologies.
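
As an illustration of feature-based classification (the features and
categories below are invented, not the paper's taxonomy), the mapping
from feature sets to classes can be as simple as:

    def classify(features):
        if {"deliberation", "learning"} <= features:
            return "deliberative-adaptive"
        if "deliberation" in features:
            return "deliberative"
        return "reactive"

    print(classify({"sensing", "deliberation", "learning"}))  # deliberative-adaptive
    print(classify({"sensing"}))                              # reactive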

Abstract:
This paper identifies a collection of high level questions which need to
be posed by designers of toolkits for developing intelligent agents
(e.g. What kinds of scenarios are to be developed? What sorts of agent
architectures are required? What are the scenarios to be used for? Are
speed and ease of development more or less important than speed and
robustness of the final system?). It then considers some of the toolkit
design options relevant to these issues, including some concerned with
multi-agent systems and some concerned with individual intelligent
agents of high internal complexity, including human-like agents. A
conflict is identified between requirements for exploring new types of
agent designs and requirements for formal specification, verifiability
and efficiency. The paper ends with some challenges for computer science
theorists posed by complex systems of interacting agents.

Abstract:
There is now a huge amount of interest in consciousness among
scientists as well as philosophers, yet there is so much confusion and
ambiguity in the claims and counter-claims that it is hard to tell
whether any progress is being made. This "position paper" suggests
that we can make progress by temporarily putting to one side questions
about what consciousness is or which animals or machines have it or how
it evolved. Instead we should focus on questions about the sorts of
architectures that are possible for behaving systems and ask what sorts
of capabilities, states and processes, might be supported by different
sorts of architectures. We can then ask which organisms and machines
have which sorts of architectures. This combines the standpoint of
philosopher, biologist and engineer.
If we can find a general theory of the variety of possible architectures
(a characterisation of "design space") and the variety of
environments, tasks and roles to which such architectures are well
suited (a characterisation of "niche space") we may be able to use
such a theory as a basis for formulating new more precisely defined
concepts with which to articulate less ambiguous questions about the
space of possible minds.
For instance our initially ill-defined concept ("consciousness") might
split into a collection of more precisely defined concepts which can be
used to ask unambiguous questions with definite answers.
As a first step this paper explores a collection of conjectures
regarding architectures and their evolution. In particular we explore
architectures involving a combination of coexisting architectural levels
including: (a) reactive mechanisms which evolved very early, (b)
deliberative mechanisms which evolved later in response to pressures on
information processing resources and (c) meta-management mechanisms that
can explicitly inspect, evaluate and modify some of the contents of
various internal information structures.
It is conjectured that in response to the needs of these layers,
perceptual and action subsystems also developed layers, and also that an
"alarm" system which initially existed only within the reactive layer
may have become increasingly sophisticated and extensive as its inputs
and outputs were linked to the newer layers.
Processes involving the meta-management layer in the architecture could
explain the origin of the notion of "qualia". Processes involving the
"alarm" mechanism and mechanisms concerned with resource limits in the
second and third layers give us an explanation of three main forms of
emotion, helping to account for some of the ambiguities which have
bedevilled the study of emotion. Further theoretical and practical
benefits may come from further work based on this design-based approach
to consciousness.
A deeper longer term implication is the possibility of a new science
investigating laws governing possible trajectories in design space and
niche space, as these form parts of high order feedback loops in the
biosphere.

Abstract:
This paper discusses agent architectures which are describable in terms
of the "higher level" mental concepts applicable to human beings,
e.g. "believes", "desires", "intends" and "feels". We
conjecture that such concepts are grounded in a type of information
processing architecture, and not simply in observable behaviour nor in
Newell's knowledge-level concepts, nor Dennett's "intentional stance."
A strategy for conceptual exploration of architectures in design-space
and niche-space is outlined, including an analysis of design trade-offs.
The
SIM_AGENT (SimAgent)
toolkit,
developed to support such exploration,
including hybrid architectures, is described briefly.

The slides begin to apply the ideas developed in the Cognition and
Affect project to the analysis of architectural requirements for love
and various other emotional and affective states.
[THE SLIDES ARE PARTLY OUT OF DATE. See
Filename: Sloman.kd.ps
]

Abstract:
Which agent architectures are capable of justifying descriptions in terms
of the 'higher level' mental concepts applicable to human beings? We
propose a new kind of architecture-based semantics for mentalistic
descriptions in which mental concepts (e.g. 'believes', 'desires',
'intends', 'mood', 'emotion', etc.) are grounded in assumptions
about information processing architectures, and not merely in concepts
based solely on Dennett's 'intentional stance'. These ideas have led to
the design of the SIM_AGENT toolkit which has been used to explore a
variety of such architectures.

Abstract:
How can a virtual machine X be implemented in a physical machine Y? We
know the answer as far as compilers, editors, theorem-provers, operating
systems are concerned, at least insofar as we know how to produce these
implemented virtual machines, and no mysteries are involved. This paper
is about extrapolating from that knowledge to the implementation of
minds in brains. By linking the philosopher's concept of supervenience
to the engineer's concept of implementation, we can illuminate both. In
particular, by showing how virtual machines can be implemented in
causally complete physical machines, and still have causal powers, we
remove some philosophical problems about how mental processes can be
real and can have real effects in the world even if the underlying
physical implementation has no causal gaps. This requires a theory of
ontological levels.
Note:
This is an extract from a much longer, evolving, paper, in part about
the relation between mind and brain, and in part about the more general
question of how high level abstract kinds of structures, processes and
mechanisms can depend for their existence on lower level, more concrete
kinds.
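
A toy Python illustration of the implementation relation (a made-up
stack machine, not from the paper): every virtual-machine step is fully
realised by the machine below it, yet the VM level remains the right
level at which to describe and explain what happens.

    def run(program):
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                # A VM-level event, wholly implemented in lower-level
                # operations with no causal gaps.
                stack.append(stack.pop() + stack.pop())
        return stack

    # "It added 2 and 3" is a true, causally explanatory description at
    # the virtual-machine level.
    print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None)]))   # [5]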

Abstract:
This is an attempt to characterise a new unifying generalisation of the
practice of software engineers, AI designers, developers of evolutionary
forms of computation, etc. This topic overlaps with theoretical biology,
developmental psychology and perhaps some aspects of social theory (yet
to be developed!). Much of theoretical computer science follows the lead
of engineering intuitions and tries to formalise them. Likewise there
are important emerging high level cross disciplinary ideas about
processes and architectures found in nature that can be unified and
formalised, extending work done in Alife and evolutionary computation.
This paper attempts to provide a conceptual framework for thinking about
the tasks.
Within this framework we can also find a new approach to the so-called
hard problem of consciousness, based on virtual machine functionalism,
and find a new defence for a version of the "Strong AI" thesis.

Abstract:
The objectives of this thesis are to elucidate adaptive change from a
design-stance, provide a detailed examination of the concept of
evolvability and computationally model agents which undergo both
genetic and cultural evolution. Using Sloman's (1994) design-based
methodology, Darwinian evolution by natural selection is taken as a
starting point. The concept of adaptive change is analysed and the
situations where it is necessary for survival are described. A wide
array of literature from biology and evolutionary computation is used
to support the thesis that Darwinian evolution by natural selection is
not a completely random process of trial and error, but has mechanisms
which produce trial-selectivity. A number of means of creating
trial-selectivity are presented, including reproductive, developmental,
psychological and sociocultural mechanisms. From this discussion, a
richer concept of evolvability than that originally postulated by
Dawkins (1989) is expounded. Computational experiments are used to show
that the evolvability producing mechanisms can be selected as they
yield, on average, 'fitter' members in the next generation that inherit
those same mechanisms. Thus Darwinian evolution by natural selection is
shown to be an inherently adaptive algorithm that can tailor itself to
searching in different areas of design space. A second set of
computational experiments is used to explore a trajectory in design
space made up of agents with genetic mechanisms, agents with learning
mechanisms and agents with social mechanisms. On the basis of design
work the consequences of combining genetic and cultural evolutionary
systems were examined; the implementation work demonstrated that agents
with both systems could adapt at a faster rate. The work in this thesis
supports the conjecture that evolution involves a change in replicator
frequency (genetic or memetic) through the process of selective-trial
and error-elimination.
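
The claim that evolvability mechanisms can themselves be selected can
be sketched in Python (a deliberately crude model: one real-valued
trait, a heritable per-lineage mutation rate, truncation selection; all
parameters invented):

    import random

    def offspring(parent):
        value, mut_rate = parent
        return (value + random.gauss(0, mut_rate),
                # The mutation rate is inherited and itself mutates,
                # so trial-selectivity can evolve.
                max(0.01, mut_rate + random.gauss(0, 0.05)))

    pop = [(0.0, 0.5) for _ in range(50)]
    for gen in range(100):
        pop = sorted((offspring(random.choice(pop)) for _ in range(100)),
                     key=lambda ind: -ind[0])[:50]   # select on value only
    print("mean evolved mutation rate:",
          sum(m for _, m in pop) / len(pop))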

Abstract:
The emotions are investigated from the perspective of an Artificial
Intelligence engineer attempting to understand the requirements and design
options for autonomous resource bound agents able to operate in complex and
dynamic worlds. Both natural and artificial intelligences are viewed as more
or less complex control systems. The field of agent architecture research is
reviewed and Sloman and Beaudoin's design for human-like autonomy introduced.
The agent architecture supports an emergent processing state, called
'perturbance', which is a loss of control of thought processes. Perturbances
are a characteristic feature of many human emotional states. A broad but
shallow implementation of the agent architecture, called MINDER1, is
described. MINDER1 can support perturbant states and is an example of a
'protoemotional' agent. Several interrupt theories of the emotions are
critically reviewed, including the theories of Simon, Sloman, Oatley and
Johnson-Laird and Frijda. Criticisms of the theories are presented, in
particular how they fail to account for both learning and the mental pain and
pleasure associated with some emotional states. The field of machine
reinforcement learning is reviewed and the concept of a scalar quantity form
of value introduced. Forms of value occur in control systems that meet a
requirement for trial and error learning. A philosophical argument that 'a
society of mind will require an economy of mind' is presented. The argument
draws on adaptive multi-agent system research and basic economic theory. It
generalises reinforcement learning to more complex systems with more complex
capabilities. A design hypothesis is proposed -- 'the currency flow
hypothesis' -- that states that a scalar quantity form of value is a common
feature of adaptive systems composed of many interacting parts. A design
specification is presented for a motivational subsystem conforming to the
currency flow hypothesis and theoretically integrated with Sloman and
Beaudoin's agent architecture. An explanation of a subset of mental pain and
pleasure is provided in terms of an agent architecture monitoring its own
processes of reinforcement, or virtual 'currency flows'. The theory is
compared to Freudian metapsychology, in particular how currency flow avoids
the vitalism associated with Freud's concept of 'libidinal energy'. The
explanatory power of the resulting theory of 'valenced perturbances', that
is painful or pleasurable loss of control of attention, is demonstrated by
providing an architecturally grounded analysis of grief. It is shown that,
amongst other phenomena, intense mental pain and loss of control of thought
processes can be readily explained in information processing terms. The thesis
concludes with suggestions for further work and prospects for building
artificial emotional agents.
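
One fragment of the thesis, the currency flow hypothesis, can be given
a toy Python rendering (subsystem names, amounts and the pleasure/pain
rule are all invented for illustration):

    class Subsystem:
        def __init__(self, name, credit=10.0):
            self.name, self.credit = name, credit

    def pay(payer, payee, amount):
        # A scalar quantity of value moves between parts of the agent.
        payer.credit -= amount
        payee.credit += amount

    perception, planning = Subsystem("perception"), Subsystem("planning")
    pay(planning, perception, 2.0)     # planning "buys" a useful percept

    # Valence as self-monitoring of one's own currency flows:
    net_flow = planning.credit - 10.0
    print("valence:", "pleasure" if net_flow > 0 else "pain")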

Filename: Sloman.what.arch.pdf
Title: What sort of architecture is required for a human-like agent?
Author: Aaron Sloman
In: M Wooldridge and A Rao (Eds),
Foundations of Rational Agency,
Kluwer Academic Publishers, 1999
(Expanded version of: Aaron.Sloman.aaai96.cog.ps)
Date: Installed 13 May 1997. Published 1999
Abstract:
This paper is about how to give human-like powers to complete
agents. For this the most important design choice concerns the overall
architecture. Questions regarding detailed mechanisms, forms of
representations, inference capabilities, knowledge etc. are best
addressed in the context of a global architecture in which different
design decisions need to be linked. Such a design would assemble
various kinds of functionality into a complete coherent working
system, in which there are many concurrent, partly independent, partly
mutually supportive, partly potentially incompatible processes,
addressing a multitude of issues on different time scales, including
asynchronous, concurrent motive generators. Designing human-like agents
is part of the more general problem of understanding design space, niche
space and their interrelations, for, in the abstract, there is no one
optimal design, as biological diversity on earth shows.
[[This version includes diagrams not in the original version.]]

Abstract:
Under what conditions are "higher level" mental concepts which are
applicable to human beings also applicable to artificial agents? Our
conjecture is that our mental concepts (e.g. "belief", "desire",
"intention", "experience", "mood", "emotion", etc.) are grounded
in implicit assumptions about an underlying information processing
architecture. At this level mechanisms operate on information structures
with semantic content, but there is no presumption of rationality. Thus
we don't need to assume Newell's knowledge-level, nor Dennett's
"intentional stance." The actual architecture will clearly be richer
than that naively presupposed by common sense. We outline a three tiered
architecture: with reactive, deliberative and reflective layers, and
corresponding layers in perceptual and action subsystems, and discuss
some implications.

Abstract:
Everybody seems to be talking about agents, though it's not clear when
the word "agent" adds anything beyond "system", "program", "tool", etc.
My concern is to understand some of the main features of human agency:
what they are, how they evolved, how they differ between individuals,
how they are implemented, and how far they can be implemented in
artificial systems. This is part of the general multi-disciplinary study
of "design space", "niche space", their interrelations, and the
trajectories possible within these spaces.

I outline a conjecture that many aspects of human mental functioning,
including emotional states, can be explained in terms of an architecture
approximately decomposable into three layers, with different
evolutionary origins, shared with different animals. The oldest and most
widespread is a *reactive* layer. A more recent development, probably
shared with fewer animals is a *deliberative* layer. The newest layer is
concerned with *meta-management* and may be found only in a few species.
The reactive layer involves highly parallel, dedicated and fast
mechanisms, capable of fine-tuning but no major structural changes. The
deliberative layer involves the ability to create, compare, evaluate,
select and act on new complex structures (e.g. plans, solutions to
problems, linguistic constructs), a process that requires much stored
knowledge and is inherently serial and resource limited, for several
different reasons.
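
The contrast between the two layers can be caricatured in Python (rules,
plans and the budget are invented; this is a sketch of the distinction,
not of any implemented system):

    def reactive(percept):
        # Highly parallel in principle, dedicated and fast: a direct
        # percept-to-action mapping with no search.
        rules = {"obstacle": "swerve", "light": "approach"}
        return rules.get(percept)

    def deliberative(candidates, budget=3):
        # Create, compare and select among new structures, but only as
        # many as the serial, resource-limited budget allows.
        best = None
        for plan, cost in candidates[:budget]:
            if best is None or cost < best[1]:
                best = (plan, cost)
        return best[0]

    plans = [("plan-A", 5), ("plan-B", 2), ("plan-C", 4), ("plan-D", 1)]
    print(reactive("obstacle"), deliberative(plans))   # swerve plan-B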

Perceptual and action subsystems had to evolve corresponding layered
architectures in order to engage with all these to greatest effect. The
third layer is linked to phenomena involving self consciousness and self
control (and explains the existence of qualia, as the contents of
attentive processes).

Different sorts of emotional states and processes correspond to
different architectural layers, and some of them are likely to arise
in sophisticated artificial agents of the future.

A short introduction is given to the SIM_AGENT toolkit developed in
Birmingham for research and teaching activities involving the design of
agents each of which has complex interacting internal mechanisms running
concurrently, including symbolic and "sub-symbolic" mechanisms. Some of
the material overlaps with the Synthetic Minds poster, below.

Abstract:
This paper discusses conditions under which some of the "higher level"
mental concepts applicable to human beings might also be applicable to
artificial agents. The key idea is that mental concepts (e.g.
"believes", "desires", "intends", "mood", "emotion", etc.) are
grounded in assumptions about information processing architectures, and
not merely Newell's knowledge-level concepts, nor concepts based solely
on Dennett's "intentional stance."

Abstract:
An implementation of an autonomous resource-bound agent able to operate in a
simulated dynamic and complex domain is described. The agent, called MINDER1,
is a partial realisation of an architecture for motive processing and
attention. It is shown that a global processing state, called perturbance, can
emerge from interactions of subcomponents of the architecture. Perturbant
states are characteristic features of many states that are commonly called
emotional. The agent is compared to other computer simulations of emotional
phenomena.
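
Perturbance, as characterised here, is easy to caricature in a few
lines of Python (motives, insistence values and the filter threshold
are invented):

    motives = {"prepare lecture": 0.4, "grieve loss": 0.9}
    filter_threshold = 0.6
    focus = "prepare lecture"

    for cycle in range(3):
        # Any motive insistent enough to pass the attention filter keeps
        # grabbing the focus, even though it was deliberately set aside.
        intruders = [m for m, insistence in motives.items()
                     if insistence > filter_threshold and m != focus]
        if intruders:
            print(f"cycle {cycle}: attention drawn back to {intruders[0]!r}")
        else:
            print(f"cycle {cycle}: focus held on {focus!r}")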

Abstract:
A society of mind will require an economy of mind; that is, multi-agent systems
(MAS) that meet a requirement for the adaptive allocation and reallocation of
scarce resources will need to use a quantitative universal representation of
value that mirrors the flow of agent products, much as money is used in simple
commodity economies. The money-commodity is shown to be an emergent exchange
convention that serves both to constrain and allow the formation of
commitments by functioning as an ability to buy processing power. MAS with
both currency flow and minimally economic agents can adaptively allocate and
reallocate control relations and scarce resources, in particular labour or
processing power. The implications of these views are outlined for MAS
research and cognitive science.
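
A minimal Python sketch of such an allocation mechanism (a one-shot
sealed-bid auction for processing slots; the agents, bids and auction
rule are invented for illustration):

    bids = {"planner": 3.0, "vision": 5.0, "reflexes": 1.0}
    slots = 2   # scarce processing power

    # Currency functions as the ability to buy processing power: the
    # highest bidders win the slots, with no central manager deciding.
    winners = sorted(bids, key=bids.get, reverse=True)[:slots]
    print("processing power allocated to:", winners)   # ['vision', 'planner']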

This is a philosophical 'position paper', starting from the observation
that we have an intuitive grasp of a family of related concepts of
"possibility", "causation" and "constraint" which we often use in
thinking about complex mechanisms, and perhaps also in perceptual
processes, which according to Gibson are primarily concerned with
detecting positive and negative affordances, such as support,
obstruction, graspability, etc. We are able to talk about, think about,
and perceive possibilities, such as possible shapes, possible pressures,
possible motions, and also risks, opportunities and dangers. We can also
think about constraints linking such possibilities. If such abilities
are useful to us (and perhaps other animals) they may be equally useful
to intelligent artefacts. All this bears on a collection of different
more technical topics, including modal logic, constraint analysis,
qualitative reasoning, naive physics, the analysis of functionality, and
the modelling design processes. The paper suggests that our ability to
use knowledge about "de-re" modality is more primitive than the
ability to use "de-dicto" modalities, in which modal operators are
applied to sentences. The paper explores these ideas, links them to
notions of "causation" and "machine", suggests that they are
applicable to virtual or abstract machines as well as physical machines.
The concept of "possibility-transducer" is introduced.
Some conclusions are drawn regarding the nature of mind and
consciousness.

Filename: Sloman.emotions.mit96.slides.pdf
Title: What sort of architecture can support emotionality?
(Slides for a talk at MIT Media Lab, Nov 1996. Now out of date.)
Authors: Aaron Sloman
Date: Nov 1996
Abstract:
Although much research on emotions is done on other animals (e.g. rats)
there seem to be certain characteristically human emotional states which
interest poets, novelists, and gossips, such as excited anticipation of
an election victory, humiliation at being dismissed. Similar states are
inevitable in intelligent robots. Obviously these states involve
conceptual abilities not shared by most other mammals. Less obviously,
they involve "perturbant" states in which there is partial loss of
control of thought processes: you want to prepare that lecture but
your mind is drawn back to the source of joy or pain. This presupposes
the ability to be in control: you cannot lose what you've never had. The
talk contrasts the design-based approach to the study of mind with other
approaches. The former involves explorations of "design space", "niche
space", and their interconnections. A design-based theory is presented
which shows how emotional (perturbant) states are possible.

Abstract:
This work investigates some uses of self-monitoring in classifier systems (CS)
using Wilson's recent XCS system as a framework. XCS is a significant advance
in classifier systems technology which shifts the basis of fitness evaluation
for the Genetic Algorithm (GA) from the strength of payoff prediction to the
accuracy of payoff prediction. Initial work consisted of implementing an XCS
system in Pop-11 and replicating published XCS multiplexer experiments from
(Wilson 1995, 1996a). In subsequent original work, the XCS Optimality
Hypothesis, which suggests that under certain conditions XCS systems can
reliably evolve optimal populations (solutions), is proposed. An optimal
population is one which accurately maps inputs to actions to reward
predictions using the smallest possible set of classifiers. An optimal XCS
population forms a complete mapping of the payoff environment in the
reinforcement learning tradition, in contrast to traditional classifier
systems which only seek to maximise classifier payoff (reward). The more
complete payoff map allows XCS to deal with payoff landscapes with more than 1
niche (i.e. those with more than 2 payoff levels) which traditional
payoff-maximising CS find very difficult. This makes XCS much more suitable as
the foundation of animat control systems than traditional CS. In support of
the Optimality Hypothesis, techniques were developed which allow the system to
highly reliably evolve optimal populations for logical multiplexer functions.
A technique for auto-termination of learning was also developed to allow the
system to recognise when an optimal population has been evolved. The
self-monitoring mechanisms involved in this work are discussed in terms of the
design space of adaptive systems.
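
The shift from strength to accuracy can be illustrated with a toy
Python comparison (two hand-made classifiers; the error-to-accuracy
formula is simplified, not Wilson's):

    classifiers = [
        # (name, predicted payoff, observed mean payoff)
        ("c1", 100.0, 40.0),   # bold but inaccurate
        ("c2",  30.0, 29.0),   # modest but accurate
    ]

    def strength(c):
        return c[1]                            # traditional CS fitness

    def accuracy(c):
        return 1.0 / (1.0 + abs(c[1] - c[2]))  # XCS-style fitness basis

    print(max(classifiers, key=strength)[0])   # c1 wins under strength
    print(max(classifiers, key=accuracy)[0])   # c2 wins under accuracy
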
Filename: Davis.atal96.pdf
Title: Reactive and Motivational Agents: Towards a Collective Minder

Author:
Darryl Davis, now at Hull University
In Proceedings Workshop on Agent Theories, Architectures, and Languages,
at 12th European Conference on Artificial Intelligence, Budapest,
Hungary, August 1996
Date: 29 Sep 1998 (inserted here)

Abstract:
This paper explores the design and implementation of a societal
arrangement of reflexive and motivational agents which will act as the
building blocks for a more abstract agent within which the current
agents act as distributed dynamic processing nodes. We contend that
reactive, deliberative and other behaviours are required in complete
(intelligent) agents. We provide some architectural considerations on
how these differing forms of behaviours can be cleanly integrated and
relate that to a discussion on the nature of motivational states and the
mechanisms used for making decisions.

Filename: ftp://ftp.cs.bham.ac.uk/pub/authors/B.S.Logan/plansig-96.ps.gz
Title: Route planning in the space of complete plans
Authors: Brian Logan and Riccardo Poli
In Proceedings of the 15th Workshop of the UK Planning and Scheduling
Special Interest Group, 21-22 November 1996, Liverpool John Moores
University, Liverpool, pp 233-240 (also available as University of
Birmingham School of Computer Science technical report CSRP-97-18).
Date: July 1996

Abstract:
Design requirements for a computational libidinal economy are presented that
constitute a preliminary theory of basic types of motivation and learning. The
theory avoids many of the difficulties of Freudian libido theory and has new
arguments in favour of it. A corollary is a circulation of value theory of
simple affect that builds upon existing information processing theories of
emotion. Such a theory can account for some forms of cognitive pleasure and
unpleasure, in particular the feelings involved in attachment and loss.

Filename: Ian.Wright_animat_emotions.ps.gz
Filename: Ian.Wright_animat_emotions.txt.gz
(Plain text version).
Title: Reinforcement learning and animat emotions
Authors: Ian Wright
Date: 24 Jan 1996
Abstract:
Emotional states, such as happiness or sadness, pose particular problems for
information processing theories of mind. Hedonic components of states,
unlike cognitive components, lack representational content. Research within
Artificial Life, in particular the investigation of adaptive agent
architectures, provides insights into the dynamic relationship between
motivation, the ability of control sub-states to gain access to limited
processing resources, and prototype emotional states. Holland's learning
classifier system provides a concrete example of this relationship,
demonstrating simple 'emotion-like' states, much as a thermostat demonstrates
simple 'belief-like' and 'desire-like' states.
This leads to the conclusion that valency, a particular form of pleasure or
displeasure, is a self-monitored process of credit-assignment. The importance
of the movement of a domain-independent representation of utility within
adaptive architectures is stressed. Existing information processing theories
of emotion can be enriched by a 'circulation of value' design hypothesis.
Implications for the development of emotional animats are considered.

Filename: Aaron.Sloman.rock.pdf (PDF 1996 version)
HTML version (updated 2017)
Title: What is it like to be a Rock? (Unpublished)
Author: Aaron Sloman
Date: 24 Jan 1996
Abstract:
This (semi-serious) paper
aims to replace deep-sounding, unanswerable, time-wasting
pseudo-questions which are often posed in the context of attacking some
version of the strong AI thesis, with deep, discovery-driving, real
questions about the nature and content of internal states of intelligent
agents of various kinds. In particular the question 'What is it like to
be an X?' is often thought to identify a type of phenomenon for which no
physical conditions can be sufficient, and which cannot be replicated in
computer-based agents. This paper tries to separate out (a) aspects of
the question that are important and provide part of the objective
characterisation of the states, or capabilities of an agent, and which
help to define the ontology that is to be implemented in modelling such
an agent, from (b) aspects that are incoherent.
The paper supports a philosophical position that is anti-reductionist
without being dualist or mystical.

Filename: Aaron.Sloman.vienna.pdf
Title: What sort of control system is able to have a personality?
Authors: Aaron Sloman
in
Robert Trappl and Paolo Petta (eds),
Creating Personalities for Synthetic Actors: Towards Autonomous
Personality Agents,
Springer (Lecture notes in AI), 1997 pp 166--208,
https://link.springer.com/book/10.1007/BFb0030565
(Originally presented at Workshop on Designing personalities for
synthetic actors, Vienna, June 1995.
Includes some edited transcripts of discussion following
presentation.)
Date: 24 Jan 1996

Abstract:
This paper outlines a design-based methodology for the study of mind as
a part of the broad discipline of Artificial Intelligence. Within that
framework some architectural requirements for human-like minds are
discussed, and some preliminary suggestions made regarding mechanisms
underlying motivation, emotions, and personality. A brief description is
given of the 'Nursemaid' or 'Minder' scenario being used at the
University of Birmingham as a framework for research on these problems.
It may be possible later to combine some of these ideas with work on
synthetic agents inhabiting virtual reality environments.

Abstract:
SIM_AGENT is a toolkit that arose out of a project concerned with
designing an architecture for an autonomous agent with human-like
capabilities. Analysis of requirements showed a need to combine a wide
variety of richly interacting mechanisms, including independent
asynchronous sources of motivation and the ability to reflect on which
motives to adopt, when to achieve them, how to achieve them, and so on.
These internal 'management' (and meta-management) processes involve a
certain amount of parallelism, but resource limits imply the need for
explicit control of attention. Such control problems can lead to
emotional and other characteristically human affective states. In order
to explore these ideas, we needed a toolkit to facilitate experiments
with various architectures in various environments, including other
agents. The paper outlines requirements and summarises the main design
features of a Pop-11 toolkit supporting both rule-based and
'sub-symbolic' mechanisms. Some experiments including hybrid
architectures and genetic algorithms are summarised.
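
The core scheduling idea can be sketched in Python (this is not the
Pop-11 SIM_AGENT API; class and method names are invented): each agent
gets a time-sliced run of its internal mechanisms on every cycle, so
the processing of many agents interleaves.

    class Agent:
        def __init__(self, name):
            self.name = name
        def sense(self, world): pass
        def run_mechanisms(self):
            # Rule-based and 'sub-symbolic' mechanisms would run here.
            print(self.name, "runs one slice of internal processing")
        def act(self, world): pass

    def scheduler(agents, world, cycles):
        for _ in range(cycles):
            for a in agents:   # every agent advances a little each cycle
                a.sense(world)
                a.run_mechanisms()
                a.act(world)

    scheduler([Agent("minder"), Agent("baby1")], world={}, cycles=2)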

(This is a revised version of the paper presented to the Geneva
Emotions Workshop, April 1995 entitled
The Architectural Basis for Grief.)

Abstract:

The design-based approach is a methodology for investigating mechanisms
capable of generating mental phenomena, whether introspectively or
externally observed, and whether they occur in humans, other animals or
robots. The study of designs satisfying requirements for autonomous
agency can provide new deep theoretical insights at the information
processing level of description of mental mechanisms. Designs for
working systems (whether on paper or implemented on computers) can
systematically explicate old explanatory concepts and generate new
concepts that allow new and richer interpretations of human phenomena.
To illustrate this, some aspects of human grief are analysed in terms of
a particular information processing architecture
being explored in our research group.

We do not claim that this architecture is part of the causal
structure of the human mind; rather, it represents an early stage in the
iterative search for a deeper and more general architecture, capable of
explaining more phenomena. However even the current early design
provides an interpretative ground for some familiar phenomena, including
characteristic features of certain emotional episodes, particularly the
phenomenon of perturbance (a partial or total loss of control of
attention).

The paper attempts to expound and illustrate the design-based approach
to cognitive science and philosophy, to demonstrate the potential
effectiveness of the approach in generating interpretative
possibilities, and to provide first steps towards an information
processing account of 'perturbant', emotional episodes.

Many of the architectural ideas have been developed further in later
papers and presentations, all available online.

Abstract:
What is the relation between intelligence and computation? Although the
difficulty of defining 'intelligence' is widely recognized, many are
unaware that it is hard to give a satisfactory definition of
'computational' if computation is supposed to provide a non-circular
explanation for intelligent abilities. The only well-defined notion of
'computation' is what can be generated by a Turing machine or a formally
equivalent mechanism. This is not adequate for the key role in
explaining the nature of mental processes, because it is too general, as
many computations involve nothing mental, nor even processes: they are
simply abstract structures. We need to combine the notion of
'computation' with that of 'machine'. This may still be too restrictive,
if some non-computational mechanisms prove to be useful for
intelligence. We need a theory-based taxonomy of 'architectures' and
'mechanisms' and corresponding process types. Computational machines
may turn out to be a sub-class of the machines available for implementing
intelligent agents. The more general analysis starts with the notion of
a system with independently variable, causally interacting sub-states
that have different causal roles, including both 'belief-like' and
'desire-like' sub-states, and many others. There are many significantly
different such architectures. For certain architectures (including
simple computers), some sub-states have a semantic interpretation for
the system. The relevant concept of semantics is defined partly in terms
of a kind of Tarski-like structural correspondence (not to be confused
with isomorphism). This always leaves some semantic indeterminacy, which
can be reduced by causal loops involving the environment. But the causal
links are complex, can share causal pathways, and always leave mental
states to some extent semantically indeterminate.