Dr. Tony Prescott, University of Sheffield, Computational
Ethology of an Active Sense System: Biological and Robotic
Investigations of Tactile Perception in the Rat

Sensing in animals and in robots is often an active process in
which the position and orientation of the sensory apparatus is
controlled so as to enhance the agent's capacity to obtain
behaviourally-relevant information. The rat whisker system is
considered to be a paradigmatic example of such an "active sense
system", and together with researchers at Bristol Robotics
Laboratory my research group has been conducting novel ethological,
computational neuroscience, and robotic investigations of whisking
viewed as an active tactile sense. A key focus of our research has been
on the feedback control strategies that the rat uses to position its
whiskers whilst exploring the world. Results from the study of both
natural and artificial whisking systems will be presented to show
how active control can increase both the quantity and quality of
information acquired by the whiskers. Humans, and other primates,
also use active control strategies to position the tactile sensory
surfaces on the fingertips. Therefore our findings may be relevant
to the wider understanding of the mammalian sense of touch as well
as to the design of active sense systems for autonomous robots.

Dr Mohamed Henini, School of Physics & Astronomy, University of
Nottingham, Molecular Beam Epitaxy from Research to Mass-Production

Most physics undergraduates will have encountered the potential
well as their first problem in undergraduate quantum mechanics. New
crystal growth techniques such as molecular beam epitaxy (MBE) have made
it possible to produce quantum wells in practice. This sophisticated
technology for the growth of high quality epitaxial layers of
compound semiconductor materials on single crystal semiconductor
substrates is becoming increasingly important for the development of
the semiconductor electronics industry. In this article I will
review the main achievements of MBE both in fundamental research and
manufacturing.

Alessandro Bassi, Hitachi, Data Management Challenges in the
Next Decade

One of the main challenges, if not the main one, that we will have to
face in the near future is how to manage an ever-growing amount of
data. It seems hard to believe that our current storage and
networking capabilities will be able to handle what Tony Hey, former
director of the UK e-Science program, now at Microsoft, calls "data
deluge". The emergence of new storage networking technologies such
as Fibre Channel and the ramp-up of the Internet in the mid-nineties
heralded a much longer-term transformation of data access and
management practices. Whilst Internet use was to some extent
homogenising access to web-based data repositories and services,
inside the data centres infrastructure complexity was, and still is,
increasing in order to fulfil business application requirements. As
data repositories in operation grow rapidly, and in
certain cases exponentially, IT users want more flexibility to
access, filter, update, distribute or analyse their data as well as
a simplification of the management interface and tools. Is the
current TCP/IP mechanism up to this challenge?

Machine learning can be utilized to build models that predict the
runtime of search algorithms for hard combinatorial problems. Such
empirical hardness models have previously been studied for complete,
deterministic search algorithms. In this work, we demonstrate that
such models can also make surprisingly accurate predictions of the
run-time distributions of incomplete and randomized search methods,
such as stochastic local search algorithms. We also show for the
first time how information about an algorithm's parameter settings
can be incorporated into a model, and how such models can be used to
automatically adjust the algorithm's parameters on a per-instance
basis in order to optimize its performance. Empirical results for
Novelty+ and SAPS on structured and unstructured SAT instances show
very good predictive performance and significant speedups of our
automatically determined parameter settings when compared to the
default and best fixed distribution-specific parameter settings.
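
As a rough illustration of empirical hardness modelling (not the
authors' actual model, features or solvers), a minimal sketch is a
linear regression from instance features and a parameter setting to
log-runtime, minimised over candidate settings for each new instance;
all numbers below are made up.

    import numpy as np

    # Hypothetical training data: each row is (n_vars, clause/var ratio, noise parameter),
    # and y is the observed log10 runtime of the randomized solver on that run.
    X = np.array([[120, 3.8, 0.2],
                  [120, 3.8, 0.5],
                  [250, 4.2, 0.2],
                  [250, 4.2, 0.5]], dtype=float)
    y = np.array([1.3, 0.9, 2.4, 2.8])

    # Least-squares fit of an empirical hardness model: log-runtime ~ w . [1, features, param].
    A = np.hstack([np.ones((len(X), 1)), X])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predicted_log_runtime(features, param):
        return float(np.dot(np.r_[1.0, features, param], w))

    # Per-instance tuning: pick the setting with the lowest predicted runtime.
    new_instance = [180, 4.0]
    best_param = min([0.1, 0.2, 0.3, 0.4, 0.5],
                     key=lambda p: predicted_log_runtime(new_instance, p))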

Prof. Lik-Kwan Shark, University of Central Lancashire, Image
Matching: From Large Aerostructure to Deformable Anatomy

Image matching is concerned with the alignment of multiple images
of the same scene/object acquired from different
times/views/sensors. A wide spectrum of issues related to applied
image matching will be covered in the talk, using examples from
automatic matching of non-destructive testing images of large
aerostructures with a 3D CAD model to assist structural integrity
evaluation, to automatic matching of radiotherapy images of patients
to assist cancer treatment. Drawn from a number of recent
collaborative research projects with leading aerospace companies in
Europe (such as BAE Systems and EADS) and major NHS Trusts in the
North-West (such as Christie Hospital), Professor Shark will present
several novel image matching techniques which include 2D-2D feature
based geometry matching based on higher geometric primitives, 2D-3D
matching based on silhouette via spherical projection combined with
the iterative closest point algorithm, and elastic image matching
based on adaptive deformation controlled by anatomical structures.
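
For readers unfamiliar with the iterative closest point algorithm
mentioned above, the sketch below shows one iteration of its basic
point-to-point form (nearest-neighbour matching followed by a
closed-form rigid transform); it is a generic textbook version, not
the speaker's silhouette-based 2D-3D variant, and in practice the
step is repeated until the alignment stops improving.

    import numpy as np

    def icp_step(src, dst):
        # Match every source point to its nearest destination point.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Best rigid transform between the matched sets (Kabsch algorithm).
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        return (R @ src.T).T + t, R, t    # transformed source, rotation, translation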

Gordon Wyeth, Director, Robotics Laboratory, University of
Queensland, Australia, Robotics Research at the University of
Queensland

The robotics laboratory at the University of Queensland has
conducted a variety of projects over the past five years. This talk
will present an overview of the activities of the lab in the
following areas:

Humanoid Robot: We have developed a functional 1.2m tall humanoid
robot, called "GuRoo". This robot has been an interesting
mechatronics project, and has been used as a testbed for
investigations in biologically inspired robot control techniques.
Our humanoid came fourth in the world in robot soccer in 2005.

Video-conferencing Robots: Five "Ambassador" robots were built
and delivered to SAP as prototypes of highly interactive video
conferencing robots. These robots are now in operation in Australia,
Singapore and Germany and have been presented to the SAP
international research meeting where they were enthusiastically
received.

SLAM: We have demonstrated a biologically inspired solution to
this widely acknowledged "holy grail" of autonomous robotics. The
RatSLAM system has been demonstrated mapping large, unmodified
indoor and outdoor areas using colour vision as the only sensor.

Autonomous Helicopters: Working in collaboration with CSIRO, we
have developed a system for learning elements of helicopter flight,
particularly focussing on the challenging problem of hover. We
demonstrated autonomous hover using both adaptive and non-adaptive
methods.

Small Size Robot Soccer: Our team of soccer robots has
demonstrated some of the most sophisticated multi-robot intelligence
and control in the world. The robots achieve sophisticated
coordination while moving at speeds of up to 2 m/s. They can
accurately pass and shoot the ball at over 4 m/s. This outstanding
performance has led the team to second place in the robot soccer
World Cup in 2003 and 2004, beaten by a single goal in both cases.

Autonomous control for MAVs such as the Micro-Flying Robot (µFR) will
be presented. In the event of a natural disaster such as an earthquake,
MAVs will be very effective for surveying the site and environment in
dangerous areas or narrow spaces that humans cannot access safely. In
addition, they will help to prevent secondary disasters.

This talk is concerned with autonomous hovering and guidance control
of the µFR. Next, the present state of research and development of
civil-use autonomous uninhabited aircraft (UAVs), which are larger
than MAVs, is introduced.

Many videos of autonomous flight are shown. In particular, research
and development worldwide and in Japan, and the issues and prospects
for control and operation systems of civil-use autonomous UAVs, are
presented. A trial of formation control will also be discussed.

Finally, six-legged robots serving as UGVs (Unmanned Ground Vehicles)
are introduced, together with vision-based locomotion.

CARMEN is a recently started e-Science pilot project on
neuroinformatics: that is, on the use of computing technology for
analysing neural signals. It is an interdisciplinary project
involving 11 Universities in the UK, and the investigators include
experimental neurophysiologists, computational neurophysiologists,
as well as computer scientists. The seminar will start with an
overview of the project, and then the particular research ongoing at
Stirling will be discussed; this concentrates on techniques for
detecting action potentials ('spikes') and associating them with
specific neurons (spike sorting) in extracellular recordings.

Phylogeny can be defined as the reconstruction of the
evolutionary history of a set of species identified by sequences of
molecular or morphological characters. The Maximum Parsimony problem
aims at reconstructing a phylogenetic tree according to the
criterion of minimizing the number of genetic transformations. To
solve this difficult problem, a number of heuristic methods have
been developed, many of them being based on the local search
paradigm. In this talk, we will present two new heuristics for the
Maximum Parsimony problem. The first one, called Progressive
Neighbourhood Search (PNS), is a descent method that is based on a
parametric and progressively decreasing neighbourhood [1]. The
second heuristic method is a hybrid genetic algorithm, which
combines the Progressive Neighbourhood Search and a Distance-based
Information Preservation (DiBIP) Tree Crossover [2]. We will show
experimental results obtained with these heuristics on a number of
benchmarks from the literature.
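
As a rough sketch of the progressively decreasing neighbourhood idea
(not the authors' tree-specific implementation; the neighbourhood
generator and the cost function, here the parsimony score, are
assumed to be supplied by the caller):

    def progressive_descent(initial, neighbours, cost, max_radius):
        # Descent with a parametric neighbourhood whose radius shrinks
        # whenever no improving move can be found at the current radius.
        current, radius = initial, max_radius
        while radius >= 1:
            improved = False
            for candidate in neighbours(current, radius):
                if cost(candidate) < cost(current):
                    current, improved = candidate, True
                    break
            if not improved:
                radius -= 1
        return current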

Prof. Eduardo Miranda, University of Plymouth, Disciplinary
Routes to Innovation in Computer Music Research

Sophisticated tools for the creation, generation and
dissemination of music have established clear synergies between
music and leisure industries, technology within art, the creative
industries and the creative economy of the twenty first century.
This can be seen in internet radio, internet self publishing, easy
email access to geographically distributed audiences with minority
interests, podcasts, mobile technologies and digital archives. This
lecture will demonstrate how research combining evolutionary
computation, neuroscience and music is leading to the development of
new intelligent interactive systems that will be able to evolve
their own rules for musical composition and the ability to interact
with musicians and listeners in much more sophisticated ways than
the present ones can do. We predict the emergence of new kinds of
music content, most of which will be generated on the fly, requiring
new modes of representation, access and interaction. The lecture
will be illustrated with snapshots of ongoing research at the
University of Plymouth on modelling musical evolution, interactive
music systems and brain-computer music interfacing.

Dr. Krzysztof R. Apt, CWI and University of Amsterdam, The
Netherlands, Infinite Qualitative Simulations by Means of Constraint
Programming

We introduce a constraint-based framework for studying infinite
qualitative simulations concerned with contingencies such as time,
space, shape, size, abstracted into a finite set of qualitative
relations. To define the simulations we combine constraints that
formalize the background knowledge concerned with qualitative
reasoning with appropriate inter-state constraints that are
formulated using linear temporal logic. We implemented this approach
in the constraint programming system Eclipse by drawing on the ideas
from bounded model checking. The implementation became realistic
only after several rounds of optimizations and experimentation with
various heuristics. The resulting system allows us to test and
modify the problem specifications in a straightforward way and to
combine various knowledge aspects. To demonstrate the expressiveness
and simplicity of this approach we discuss in detail two examples: a
navigation problem and a simulation of juggling.

Em. Professor Leslie Kay has been commercialising ultrasonic
mobility aids for blind people for 40 years. These sensing systems
use a Continuously Transmitted Frequency Modulated (CTFM) signal. Dr
McKerrow has used these sensors to classify objects, including leafy
plants, for landmark navigation of mobile robots. He believes that
the key to successful landmark navigation is in the quality of the
sensing of the landmarks. Classification involves both identifying
the object and measuring the correctness of the classification. He
has developed an outdoor 4-wheel drive mobile robot and is
investigating using surface roughness as a landmark in conjunction
with a Fuzzy map of paths. As echolocation is used by bats to
navigate in flight, his research has naturally progressed to the
development of an indoor aerial robot for use in disaster search.

Dr. John Levine, University of Strathclyde, Evolving Intelligence

Designing intelligent behaviour is very hard; recognising it is
much easier. Furthermore, given two behaviours we can generally
recognise the more intelligent one. If a population of diverse
behaviours can be created, we can use artificial evolution to find
behaviours which are more intelligent. Over time, intelligent
behaviours will emerge by competition with each other, rather than
needing to be designed. This simple and intuitive idea can be
applied to a wide range of tasks which need intelligence, such as
games playing, robotic control, planning, problem solving - indeed,
anything which involves perceptions, decisions and action. In the
talk I will present some diverse examples of evolution used to
produce intelligent behaviours, including some recent examples from
the L2Plan (Learn to Plan) and EvoTanks (evolving game playing
agents) projects at Strathclyde. I will also speak on the
limitations of the current process: it evolves agents which are
specialised to the task in hand and are usually only reactive rather
than reflective. Can we go beyond this and evolve true intelligence?

Dr. Jonathan M Garibaldi, University of Nottingham, Towards Modelling the Variation in Human
Decision Making

The purpose of developing an expert system is to encapsulate
the knowledge and expertise of human experts in a particular domain.
However, human experts exhibit variation in their decision making.
Variation may occur among the decisions of a panel of human experts
(inter-expert variability), as well as in the decisions of an
individual expert over time (intra-expert variability). A
conventional fuzzy expert system (FES) was developed to assess the
health of infants immediately after birth by analysis of the
biochemical status of blood taken from an infant's umbilical cord
(umbilical acid-base analysis). A database of fifty difficult to
interpret cases had previously been collected and six experts in
umbilical acid-base analysis had ranked these cases from `worst' to
`best'. It was observed that the experts' conclusions varied both
among themselves and from their previous conclusions, whereas the
FES produced the same output each time the same input was given. The
source of this variation was suspected to lie in minor variations of
the interpretations of linguistic terms used in the rules. In order
to model this, experiments were carried out by introducing small
changes in the membership functions associated with the linguistic
terms. Thus it was possible to explore the relationship between the
vagueness of these terms and the variation in the overall decision
making (the conclusions reached by means of the fuzzy inferencing).
The term `non-stationary' was introduced to describe such
time-varying fuzzy systems. In this seminar, Dr Garibaldi will
detail the specific decision making problem, will describe the fuzzy
expert system created and then will present the novel techniques
utilised for introducing variation into fuzzy expert systems.
Finally, results will be presented of comparisons between this novel
fuzzy inferencing technique and human decision making.
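
A minimal sketch of the non-stationary idea, using a hypothetical
triangular membership function and perturbation size rather than the
actual expert system's terms: before each inference the whole term is
shifted by a small random amount, so repeated runs on the same input
can reach slightly different conclusions.

    import random

    def triangular(x, a, b, c):
        # Standard triangular membership function with feet a, c and peak b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def non_stationary_membership(x, a, b, c, sigma=0.05):
        # Shift the whole term by a small random amount before evaluating it.
        shift = random.gauss(0.0, sigma)
        return triangular(x, a + shift, b + shift, c + shift)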

Professor Sameer Singh, University of Loughborough, Developing Autonomous Systems
with Computer Vision Technology: An Overview of Research at RSI,
Loughborough University

This talk will provide a broad overview of our research in the
area of developing computationally intelligent computer vision based
autonomous systems. The overview will discuss research problems and
key research results in the areas of biomedical imaging, scene
analysis and novelty detection, security and visual surveillance,
machine vision based industrial inspection, digital photography,
autonomous mobile systems, and multi-resolution pattern recognition.
The overview is intended to give the audience a broad appreciation
of the need for such systems in their respective
application areas, how to develop the methodology for solving such
complex problems, appropriate experimental design and key challenges
faced when developing real solutions. A number of experimental
results will be visually presented.

Dr. Heiko Hoffmann, University of Edinburgh, Using Sensorimotor Models to Make Sense
out of Sensory Input

Sensory input, such as the activation of the rods and
cones on the retina or the activation of tactile sensors on the
hand, does not resemble the geometric properties of the objects
observed (for example, lines are not mapped onto lines on the
retina, or the tactile patterns obtained from holding a ball do not
resemble a spherical shape). Yet, how do we understand, for example,
the rotational symmetry upon seeing or touching a ball and the
meaning of a dead-end when looking down a corridor? To shed light
on these questions, I present experiments in which a robot learns
a sensorimotor model by observing the sensory consequences of the
robot's actions. Using this model, the robot can link object
properties and symmetries with sensory input and has thus not only a
means to detect such properties, but can also assign to them a
behavioural meaning.

A brain-computer interface (BCI) is a communication system that
does not depend on the brain's normal pathways of peripheral nerves
and muscles.
Non-invasive BCIs are implemented based on the decoding of brain
waves recorded on the surface of the scalp, which are probably the most
acceptable systems to BCI users. Two non-invasive BCI systems
recently developed at Tsinghua University, a brain-controlled
telephone and a mind-controlled robot, will be presented with demo
videos.

P2P Networks are dynamic distributed systems which are
increasingly being used as a methodology for organising and
efficiently retrieving information and resources between end users
on a large scale. Current research has focused on optimising the
searching process through caching, topology adaptation, hashing and
probabilistic heuristics. In this talk I will firstly provide a
gentle introduction to P2P networking and secondly I'll present the
main research conducted in this field at the Department of Computing
of the University of Surrey. Our work has focused on two specific
aspects of P2P optimisation: Firstly on using the network
construction process in order to keep typical structured P2P
overlays just big enough to handle their current workload and
secondly on linking independent P2P networks together so that they
can share capacity on demand. The former principle enhances the
search process as the actual size of the network determines the
actual data traffic each user query produces and therefore smaller
networks exhibit less traffic for the same number of queries. The
latter enables a P2P network to dynamically acquire capacity from
other under-utilised networks when it experiences peaks in its
workload and therefore makes it more scalable in terms of the number
of user queries it can process. In this context, the talk will give
an overview of the design of two systems which are based on the
above principles and a set of simulation results that show their
effectiveness in enhancing the performance and scalability of P2P
networks.

This seminar addresses the problem of congestion resolution in
all-optical burst- or packet-switched networks.

The seminar presents the basic concepts of engineering optical
buffers realized with Delay Lines. The principle of operation of a
delay buffer is presented and compared to conventional queuing. The
related problems of performance analysis and engineering are
considered, presenting a review of analytical models, with
particular emphasis on the case of asynchronous variable length
packets.

Finally, examples of the engineering of delay buffers in various
scenarios of optical networking are presented, with particular
reference to a Dense WDM scenario, focusing on scheduling algorithms
that exploit buffering in conjunction with load balancing on the
wavelength set.

The aim is to explain which critical parameters must be dimensioned,
showing to what extent delay buffers can be effective in resolving
congestion in all-optical network nodes.

Dr. Jose A. Lozano, University of the Basque Country
(Spain), Probabilistic
Graphical Models in Optimization: Applications in Protein Folding
and Design

We present two different ways of using probabilistic graphical
models in combinatorial optimization: estimation of distribution
algorithms and message passing algorithms. Several instances of
these two sets of algorithms will be applied to solve problems in
the protein field. In particular, we deal with protein folding and
protein design. In the former problem two different protein models
will be used, the H-P model and side chain placement prediction. The
advantages of using optimization algorithms based on probabilistic
graphical models in these two problems will be highlighted.

Dr. Dariusz Kowalski, University of Liverpool, Distributed Fault-Tolerant Algorithms
with Small Communication

Distributed systems contain several procedures supporting the
communication between computers, synchronisation, scheduling tasks,
and many other basic operations involving more than one machine.
These procedures are implemented independently on the network layer,
and so they need to tolerate various types of failures which may
occur during the execution, including crashes, omissions, malicious
behaviour. This talk is about fundamental fault-tolerant primitives
in message-passing synchronous environment, such as gossip,
consensus, and performing tasks. The solutions presented are optimized
simultaneously for time and communication complexity, in the
presence of various types of failures. As an example, in the
Consensus problem, each processor starts with its input value and
the goal is to have all processors agree on exactly one value among
the inputs. We show that it can be solved in time proportional to
the number of failures and sending asymptotically n log^5 n
messages, where n stands for the number of processors in the system.
We also discuss some future research directions connected with
the presented topics.
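
To make the Consensus problem concrete, the sketch below is the
textbook synchronous flooding algorithm for crash failures (f + 1
rounds, decide the minimum value seen). It only illustrates the
problem setting; the algorithms discussed in the talk achieve the
same agreement with far fewer messages.

    def flooding_consensus(inputs, f):
        # 'inputs' maps processor id -> initial value; f bounds the number of crashes.
        known = {p: {v} for p, v in inputs.items()}
        for _ in range(f + 1):                       # f + 1 rounds suffice
            snapshot = {p: set(vals) for p, vals in known.items()}
            for p in known:                          # all-to-all exchange of known values
                for q in known:
                    known[p] |= snapshot[q]
            # (crashes are not simulated; a crashed processor simply stops sending)
        return {p: min(known[p]) for p in known}     # everyone decides the same value

    # flooding_consensus({1: 5, 2: 3, 3: 9}, f=1) gives every processor the decision 3.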

Prof. Eduardo Miranda, University of Plymouth, Trans-Disciplinary Routes to
Innovation in Computer Music Research

Sophisticated tools for the creation, generation and
dissemination of music have established clear synergies between
music and leisure industries, technology within art, the creative
industries and the creative economy of the twenty first century.
This can be seen in internet radio, internet self publishing, easy
email access to geographically distributed audiences with minority
interests, podcasts, mobile technologies and digital archives. This
lecture will demonstrate how research combining evolutionary
computation, neuroscience and music is leading to the development of
new intelligent interactive systems that will be able to evolve
their own rules for musical composition and the ability to interact
with musicians and listeners in much more sophisticated ways than
the present ones can do. We predict the emergence of new kinds of
music content, most of which will be generated on the fly, requiring
new modes of representation, access and interaction. The lecture
will be illustrated with snapshots of ongoing research at the
University of Plymouth on modelling musical evolution, interactive
music systems and brain-computer music interfacing.

Prof. Silvano Cincotti, University of Genoa (Italy). Power
Exchange

Imagination is commonly thought to amount to nothing more than a
creative faculty. But imagination as bodily expectation can be
revealed through examination of the imagination’s productive, bodily
character: that aspect of the mind that extrapolates, through bodily
consciousness or experience, an anticipation or, let us say,
an expectation of how our world will continue to be from moment to
moment, from sensation to sensation. It is a notion of imagination
that is fundamental to any conscious experiencing system, for it is
through the smooth functioning of this imagination that we are able:
(i) to have unconscious expectations about how our world will be
with regard to each of our senses and (ii) to recognise change when
our sensory expectations are not realised. Perhaps most importantly
it is from this power of the imagination, to build up unconscious
sensory expectations and recognise when they are not realised, that
we are able to develop our sense of the passage of time. Thus, our
sense of the passage of time is, at base, physiological and
unconscious, and derived from our plenisentient and dynamic coupling
with our environment.

Pierre Roduit, École Polytechnique Fédérale de Lausanne,
Switzerland, Using a Point
Distribution Model to extract behavioral features from the
trajectories of a mobile robot

The work that will be presented results from a collaboration
between the Vision Group of the Laboratoire de Production
Microtechnique
(LPM) and the Swarm Intelligent Systems Group (SWIS), both at the
École Polytechnique Fédérale de Lausanne in Switzerland. Combining
the knowledge and expertise of these groups in robotics (SWIS) and
in computer vision
(LPM) allows us to use mobile robots as trajectory generators to
test models in computer vision and to develop methods of analysis
for mobile-robot controllers based on their trajectories. The Point
Distribution Model is a deformable template widely used in computer
vision to model complex shapes. Applying it to the trajectories of a
mobile robot provides an effective tool for analyzing them, as
behavioral differences and hardware heterogeneities can be
classified in the PDM-transformed space. Moreover, the quantitative
dissimilarities observed in the PDM space can also be transformed
back into physical space, making it easier for us to develop an
understanding of their spatial meaning.
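
A minimal sketch of how a Point Distribution Model can be built from
trajectories (the data here are random placeholders; in practice the
trajectories would first be aligned and resampled to a fixed number
of points):

    import numpy as np

    # Hypothetical data: 50 trajectories, each resampled to 20 (x, y) points.
    trajectories = np.random.rand(50, 20, 2)
    X = trajectories.reshape(50, -1)          # one 40-dimensional vector per trajectory

    # The PDM is the mean shape plus the principal modes of deformation.
    mean_shape = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    modes = Vt

    # Project a trajectory onto the first few modes; behavioural differences
    # between controllers can then be compared in these few PDM coordinates.
    coords = (X[0] - mean_shape) @ modes[:5].T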

Prof. Ian Marshall, University of Kent, Autonomous
Systems and Their Role in Supporting Scientific Discovery

Dr Luciano Floridi, University of Oxford, The Informational
Nature of Reality

In recent years, there has been a renewed and growing interest in
ontological issues. One of the classic questions that has resurfaced
is whether the ultimate nature of reality might be digital
(discrete) or analogue (continuous). Digital Ontology (and its
corollary thesis of Pancomputationalism) and Informational
Structuralism seek to provide an answer to this fundamental and very
old question, by relying on original and innovative analyses in
terms of computational systems. They are fellow travellers, which
share a similar outlook but differ in the conclusions reached. In
this talk, I shall first provide a brief introduction to a digital
approach to ontology. I shall then argue that Digital Ontology is
mistaken insofar as it favours a one-sided interpretation of the
ultimate nature of reality. I shall present a thought experiment
against the dichotomy “digital vs. analogue” in order to argue that
either (a) the dichotomy is misapplied because of a confusion
regarding the nature of the object under analysis or (b) if properly
applied, it runs into counterintuitive consequences. This leaves
Informational Structuralism—according to which it is a matter of
levels of abstraction whether reality is to be described as digital
or analogue—as a more convincing option. Informational Structuralism
may not be the only game in town, but it is the only one that is
metaphysically safe to play.

Professor Mahdi Mahfouf, University of Sheffield, From
Biological Systems to Materials Processing: Investigations into the
Contributions of Intelligent Systems Modelling, Optimisation, and
Control

There is no doubt that clinical medicine is among the last areas
where systems engineering tools, such as modelling, system
identification, signal processing, and feedback control, have
proliferated. The real reason behind such a change of attitude rests
in the fact that clinical medicine had to be convinced first that
all the above systems engineering tools can be effective and
reliable but, most importantly, safe. During this seminar, the speaker
will review his research experiences in tackling some of the
problems and complexities embedded in the human system when looking
particularly at anaesthesia and intensive care therapies. In
contrast to patient care, the speaker will also emphasise the
generic nature of his research by explaining how some of his
previously developed techniques for knowledge extraction/fusion, and
control have been extended to the metal industry which, while
apparently different from the patient care industry, seems to be
driven by similar requirements/priorities.

There are many techniques being researched for solving
optimisation problems. These range from classic complete search
algorithms to complex variations of stochastic approaches. The
selection of the most suitable algorithm, coupled with the perennial
issue of how to model a problem effectively, means that choosing the
right algorithm/model combination is extremely important. The
difference between getting it right or wrong can result in
order-of-magnitude performance variations. This has been a hot topic
in research since the mid-1990s.

In this talk I shall be outlining some examples of constraint
based optimisation problems which occur in industry, how they might
be tackled, and some of the issues which can arise. These will
include transport scheduling, personnel scheduling and port
demurrage minimisation. There will also be the chance of winning a
mystery prize at the end of the talk.

The TrueSkill ranking system is a skill-based ranking system for
Xbox Live developed at Microsoft Research. The purpose of a ranking
system is to both identify and track the skills of gamers in a game
(mode) in order to be able to match them into competitive matches.
The TrueSkill ranking system only uses the final standings of all
teams in a game in order to update the skill estimates (ranks) of
all gamers playing in this game. Ranking systems have been proposed
for many sports but possibly the most prominent ranking system in
use today is Elo.
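
For reference, the Elo update mentioned above fits in a few lines
(standard formulation; the K-factor of 32 is just a common choice),
whereas TrueSkill itself maintains a Gaussian belief over each
gamer's skill and updates it by Bayesian inference from the reported
standings.

    def elo_update(r_a, r_b, score_a, k=32):
        # score_a is 1 for a win by A, 0.5 for a draw, 0 for a loss.
        expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
        expected_b = 1.0 - expected_a
        score_b = 1.0 - score_a
        return r_a + k * (score_a - expected_a), r_b + k * (score_b - expected_b)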

In this talk I will present my design and experiments on a planar
biped robot (RunBot) under the control of a sensor-driven neuronal
controller. This design has some special mechanical features, such
as small curved feet allowing rolling action and a properly
positioned centre of mass, which facilitate fast walking. The
sensor-driven controller is built with biologically inspired sensor-
and motor-neuron models, and does not employ any kind of position or
trajectory-tracking control algorithm.

Instead, it allows the RunBot to exploit its own natural dynamics
during critical stages of its walking gait cycle. Due to the
interaction between the neuronal controller and the properly
designed mechanics of the robot, RunBot can realize stable dynamic
walking gaits in a large domain of the neuronal parameters. In
addition, this structure allows the use of a policy gradient
reinforcement learning algorithm to tune the parameters of the
sensor-driven controller in real-time, during walking. In this way
RunBot can reach a relative speed of 3.5 leg-lengths per second
after only a few minutes of online learning, which is tremendously
faster than that of any other biped robot, and is also comparable
to the fastest relative speed of human walking.

Professor Maria Stokes,
University of Southampton, Brain-computer Interfacing in
Rehabilitation

Brain-computer interfacing (BCI) systems can enable
a person to control a computer using thoughts, detected by
electroencephalography (EEG). The field of BCI research is at a
relatively early stage of producing reliable, robust systems that
are widely accessible for everyday use. The clinical implications of
this technology in different areas of rehabilitation will be
reviewed.

Several BCI research groups are developing systems
for communication and environmental control in people with severe
disabilities. Another dimension to BCI in rehabilitation research is
using the technology for investigating mechanisms of dysfunction,
which would aid diagnosis and provide a screening tool to assess a
person’s ability to access BCI. Such applications would also extend
the use of BCI as an aid to re-training of movement and language.

Signal processing techniques continue to be refined
and are improving the accuracy and reliability of BCI technology but
translation into routine clinical use is limited by several factors
influencing accessibility and compliance (e.g. difficult user tasks
requiring long training periods, number of recording channels etc.).
Surface or implanted recording devices can be used. It is suggested
that for transient use in certain areas of rehabilitation, surface
electrodes are appropriate.

The Southampton BCI Research Programme is a joint
venture between the School of Health Professions & Rehabilitation
Sciences and the Institute of Sound & Vibration Research, and is
embedded in the University’s Life Sciences Interface Forum. The
Programme involves collaborative research between various
disciplines in rehabilitation (physiotherapy, speech and language
therapy, medicine, psychology) and engineering, computing and
biological sciences. An important aim is to bridge the gap between
major technological advances and the relatively limited success in
practical applications of BCI systems.

Dr Jonathan Rowe,
University of Birmingham, Propagation time in stochastic
communication networks

Dynamical processes taking place on networks have received much
attention in recent years, especially on various models of random
graphs (including “small world” and “scale free” networks). They
model a variety of phenomena, including the spread of information on
the Internet; the outbreak of epidemics in a spatially structured
population; and communication between randomly dispersed processors
in an ad hoc wireless network. Typically, research has concentrated
on the existence and size of a large connected component
(representing, say, the size of the epidemic) in a percolation
model, or uses differential equations to study the dynamics using a
mean-field approximation in an infinite graph. Here we investigate
the time taken for information to propagate from a single source
through a finite network, as a function of the number of nodes and
the network topology. We assume that time is discrete, and that
nodes attempt to transmit to their neighbours in parallel, with a
given probability of success. We solve this problem exactly for
several specific topologies, and use a large-deviation theorem to
derive general asymptotic bounds, which we use, for example, to show
that a scale-free network has propagation time logarithmic in the
number of nodes, and inversely proportional to the transmission
probability.
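
A direct simulation of the process described above is straightforward
and handy for checking the analytic bounds on small examples (the
adjacency structure below is a made-up ring; the graph is assumed to
be connected):

    import random

    def propagation_time(adj, source, p):
        # Discrete-time spread: each round, every informed node tries to
        # transmit to each neighbour independently with success probability p.
        informed, rounds = {source}, 0
        while informed != set(adj):
            new = set()
            for u in informed:
                for v in adj[u]:
                    if v not in informed and random.random() < p:
                        new.add(v)
            informed |= new
            rounds += 1
        return rounds

    ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
    print(propagation_time(ring, source=0, p=0.5))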

Dr James Anderson,
University of Reading, The Perspex Machine

The perspex machine unifies the Turing machine with
geometry so that any symbolic computation can be performed
geometrically, though some geometrical computations have no symbolic
counterpart. Even when simulated approximately on a digital
computer, the Turing computable, geometrical properties of the
perspex machine are useful. There will be a concise presentation of
three properties of the Perspex machine, followed by a discussion
that can be as wide ranging, or as focused, as the audience desires.
1) The co-ordinates used by the perspex machine form a total
arithmetic so there are no arithmetical exceptions in the machine.
This means that no error-handling is needed in the perspex machine
and that general software can be more secure. The total arithmetic
used illustrates a fundamental error in the way IEEE floating-point
arithmetic handles NaN. 2) In general, a Turing computable sequence
of numbers with less than quadratic convergence is non-monotonic.
It follows that a general, Turing computable, perspex computation with
less than quadratic convergence is non-monotonic, and hence that
general physical operations with less than quadratic convergence are
non-monotonic. Plausible examples include punctuated equilibrium in
genetic evolution, the emergence of biological species, and paradigm
shifts in science. 3) The perspex machine is implemented in terms of
a single instruction that has the geometry of an artificial neuron.
This neuron will be illustrated via a compiler that can compile a
sub-set of the C programming language into networks of perspex
neurons. In principle, any Turing program could be compiled into a
network of perspex neurons.

Dr Daniel Polani,
University of Hertfordshire, Organization of Information Flows in
the Perception-Action Loop of Agents

Information is an essential and omnipresent resource
and has long been suspected as a major factor shaping the emergence
of intelligence in animals and as a guideline to construct
artificial intelligent systems. In search for fundamental principles
guiding the self-organization of neural networks, Linsker (1988)
formulated a number of information-theoretic hypotheses. His model
(and most of its successors) was purely passive. However, recent
work by Touchette and Lloyd (2000) extending early work by Ashby
(1953), as well as some work by Polani et al. (2001) has shown that
actions can be incorporated into the information-theoretical
analysis.

As was found by Klyubin et al. (2004), incorporating
actions into an information-theoretic formalization of the
perception-action loop of agents has dramatic consequences in terms
of self-organization capabilities of the processing system. As
opposed to Linsker's model which required some significant
pre-structuring of its neural network, this new model makes only
minimal assumptions about the information processing architecture.
The agent's "embodiment", i.e. the coupling of its sensors and
actuators to the environment, is sufficient to give rise to
structured pattern detectors driven by optimization principles
applied to the information flow in the system.

In the present talk, we will motivate Shannon
information as a primary resource of information processing,
introduce a model which allows agents to be considered purely in terms of
information and show how this model gives rise to the aforementioned
observations. If there is time, the talk will discuss the use of
information-theoretic methods to structure the information
processing also in real robot systems.
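
Since the analysis rests on Shannon information, a small helper that
computes the mutual information between two discrete variables (say,
a sensor and an actuator variable) from their joint distribution
table may help fix ideas; the joint table in the example below is
made up.

    import numpy as np

    def mutual_information(p_xy):
        # I(X;Y) in bits from a joint probability table p_xy[x, y].
        p_x = p_xy.sum(axis=1, keepdims=True)
        p_y = p_xy.sum(axis=0, keepdims=True)
        nz = p_xy > 0
        return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

    # A noisy binary channel: about 0.28 bits flow from X to Y.
    print(mutual_information(np.array([[0.4, 0.1], [0.1, 0.4]])))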

Dr Andrew Davison,
Imperial College London, Real-Time SLAM with a Single Camera

Recently I have generalised the Simultaneous
Localisation and Mapping (SLAM) methodology, developed to enable
mobile robots to navigate in unknown environments, to demonstrate
real-time 3D motion estimation and visual scene mapping with an
agile single camera. A webcam attached to a laptop becomes a
high-performance position and mapping sensor, which can be used by
advanced mobile robots or coupled to a wearable computer for
personal localisation. This research improves on the previous state
of the art in 3D computer vision research, where emphasis had been
on off-line reconstruction algorithms inspired by photogrammetry.
When hard real-time performance (e.g. 30Hz) is required, the limited
processing resources of practical computers mean that fundamental
issues of uncertainty propagation and selective visual attention
must be addressed via the rigorous application of methods from
probabilistic inference and information theory. The result is an
algorithm which harnesses background knowledge of the scenario to
obtain maximum value from visual processing. Practically, this
technology has a host of interesting potential applications in many
areas of robotics, wearable computing and augmented reality. The
presentation will include a live demonstration.

Dr John McCall,
The Robert Gordon University, EDAs and Markov Networks

Estimation of Distribution Algorithms (EDAs) are
derived from Genetic Algorithms. In an EDA, a probabilistic model is
constructed from a population of solutions. The model is then
sampled repeatedly to generate a successor population of solutions,
thus replacing the traditional use of genetic operators to breed
children from selected solutions.

EDAs apply selective pressure by "estimating the
distribution" of good solutions. A common approach is to select good
solutions from the population and then construct a joint probability
distribution of solution variables. Sampling the jpd then generates
the next population.

Problem-dependent choices need to be made here about
the jpd. Simple (univariate) EDAs assume a fully factorised jpd.
More sophisticated approaches assume partial factorisations of the
jpd, traditionally classified as bivariate and multivariate EDAs. A
common approach is to represent the jpd using a Bayesian Network and
an armoury of techniques now exists to build a Bayesian Network from
a population of solutions.
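
A minimal sketch of the simplest, fully factorised case (a UMDA-style
univariate EDA; the population sizes and the onemax fitness used here
are illustrative only):

    import random

    def umda(fitness, n_bits, pop_size=100, n_select=50, generations=50):
        # The model is just one marginal probability per bit.
        p = [0.5] * n_bits
        for _ in range(generations):
            pop = [[int(random.random() < p[i]) for i in range(n_bits)]
                   for _ in range(pop_size)]
            pop.sort(key=fitness, reverse=True)
            selected = pop[:n_select]
            # "Estimate the distribution" of the good solutions, bit by bit.
            p = [sum(x[i] for x in selected) / n_select for i in range(n_bits)]
        return max(pop, key=fitness)

    best = umda(sum, n_bits=20)    # onemax: fitness is just the number of ones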

In this talk, I will explore an alternative approach
to modelling the distribution, which differs from the mainstream in
two key aspects. Firstly the jpd is modelled as an undirected graph
(Markov Network) and factorised as a Gibbs distribution. Secondly,
the construction of the model does not require selection but rather
uses solution fitness values directly to determine the parameters of
the distribution. The approach has benefits in improving evolution
but at some computational expense in building the model. I will
demonstrate the costs and benefits of a Markov Network EDA by
comparing with other EDAs on a range of problems.

The growth of service industries in recent times has been
phenomenal with services displacing manufacturing as the main driver
of western industrialised economies. In this context, systems for
service management are becoming increasingly important for companies
to achieve competitive advantage. Two areas of focus are
applications for defining/implementing Service Strategy (i.e. design
and launch of new services) and also Advanced Planning and
Scheduling (APS) solutions for automating and optimising Service
Operations.

Advanced Planning and Scheduling has been a well-known term in
manufacturing being the core of Supply Chain Management suites.
However, the applicability of that part of ERP/SCM systems has been
so far limited to product-driven industries. Despite the increased
importance of services, integrated APS systems for service
operations are yet to emerge from the main enterprise software
vendors. More importantly, solutions, where they exist, often
address only one industry vertical (e.g. airlines) and/or consider
only a part of the people and assets that may exist within an
organisation.

In this seminar, I present an APS template for service operations
based on technologies for forecasting demand, planning resources,
scheduling work and reserving/pricing capacity which links a service
enterprise with its customers and suppliers. Artificial Intelligence
and Operations Research are playing a major role within the
architecture in tackling complex decision making problems. The
approach has already been used to structure BT's systems for its
extensive field service operations while its applicability to other
verticals (e.g. transport, health) is also currently being pursued.

Machine Learning (ML) of language studies the
acquisition of linguistic knowledge from examples. While statistical
learning methods are widespread nowadays, they have their
limitations. The need for adequate modelling of semantics, and
acquisition of human-comprehensible theories has led to an increased
interest in symbolic learning. The talk demonstrates how Symbolic ML
(SML) is used across a range of natural language processing (NLP)
tasks. Word morphology learning is chosen to introduce a number of
SML techniques and compare supervised and unsupervised learning,
followed by a discussion of Inductive Logic Programming (ILP)
learning of syntax, lexical semantics and ontologies. Hybrid SML is
the last topic covered.

Data recovery is a conventional statistics paradigm
that has not received much attention in clustering so far, in spite of
the fact that such popular algorithms as K-Means partitioning, Ward
agglomeration, and CobWeb conceptual clustering can be derived
within the data recovery approach. Moreover, this approach gives
tools to tackle many issues of current interest in clustering.
Examples of such issues are: presence of categorical and
quantitative features in data, initial setting in K-Means,
interpretation aids for hierarchical clustering, incomplete
clustering, multiple data sources. I will introduce the data
recovery approach and associated Pythagorean decompositions and show
how the issues above can be tackled with these instruments.
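
One Pythagorean decomposition referred to above is the standard split
of the total sum of squares into between-cluster and within-cluster
parts, which holds exactly for any partition of the data; the data
and cluster assignment below are arbitrary placeholders.

    import numpy as np

    X = np.random.rand(100, 3)
    labels = np.arange(100) % 4              # an arbitrary assignment into 4 clusters

    grand_mean = X.mean(axis=0)
    total_ss = ((X - grand_mean) ** 2).sum()

    within_ss = sum(((X[labels == k] - X[labels == k].mean(axis=0)) ** 2).sum()
                    for k in range(4))
    between_ss = sum((labels == k).sum() *
                     ((X[labels == k].mean(axis=0) - grand_mean) ** 2).sum()
                     for k in range(4))

    assert np.isclose(total_ss, within_ss + between_ss)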

Since Chomsky (1957), poverty of stimulus arguments have been used
to motivate the claim that a powerful domain-specific language
faculty is required to account for the facts of first language
acquisition. This faculty is regarded as encoding the intricate
parameterized principles of a Universal Grammar (UG) that defines
the set of possible human languages. Recent work in the application
of machine learning methods to grammar induction tasks in natural
language processing provides support for an alternative view on
which domain general learning algorithms with language models that
encode relatively weak language specific assumptions provide the
basis for human language acquisition. There are two ways in which
this research program can be pursued. In supervised learning the
algorithms are trained on data sets annotated with the features to
be learned. This method attributes rich representational structure
to the data. A more interesting and fruitful approach is to use
unsupervised learning in which clustering and distributional
patterns are identified in unannotated data in order to extract
information on structural dependency relations. To the extent that
unsupervised learning with weak bias models succeeds in achieving
induction of credible grammars from corpora, it shows that poverty
of stimulus arguments for the necessity of UG, construed as a
strongly biased language model, do not hold. They demonstrate the
viability of an account that takes language acquisition to depend
primarily on domain general learning procedures, with minimal
assumptions on the nature of linguistic structure that impose weak
constraints on the set of possible grammars.

Dr Jonathan Shapiro,
University of Manchester, Scaling and sensitivity in
machine-learning based optimisation algorithms

Estimation of Distribution Algorithms (EDAs) are
optimisation algorithms which use machine learning to learn
promising search directions. These algorithms try to learn the
relationships between variables in the problem which are necessary
to search effectively. In practice, the algorithms have been shown
to be effective in certain hard problems.

However, there can also be problems in getting EDAs
to work effectively; in many cases the performance is very sensitive
to the details of the algorithm. We have shown that the performance
of many EDAs on simple problems obeys a scaling relation between the
control parameters of the algorithm and the number of variables in the
problem, and this relationship is problem-dependent. As an example,
it has been shown that EDAs may converge to a state in which the
optimum is never found. The probability of never finding the optimum
has been shown previously, both in empirical studies and
theoretically, to depend on the control parameters and the problem
size in a very sensitive and problem dependent way. It is
exponential in some problems and polynomial in others. This means
that it can be very difficult to set the control parameters of the
algorithm in an effective way.

We will reveal the causes of this sensitivity. There
are several ways to remove it, which we will present and discuss.

Professor Philippe De Wilde, A neural basis for uncertainty and
its implications for fuzzy logic

Fuzzy logic and Bayesian inference are two popular models of
uncertainty. They present a consistent set of results based on
mathematical constructions such as membership functions and
probability distributions. In this talk we look for neural
correlates that can support or suggest improvements to the ways we
model uncertainty. We use results from neuro-imaging and
neurophysiology that have been established in the last three years.
We discuss neuronal population codes, transformations and noise in
neural tuning curves, brain areas that are active when we are
uncertain, spontaneous activations, and slow noise and other slow
processes that affect neural processing. We draw conclusions for
fuzzy logic from each of these areas.

Dr Matthew Stone, Generating Animated Utterances

People's utterances in conversation are composed of short,
clearly-delimited phrases; in each phrase, gesture and speech go
together meaningfully and synchronize at a common point of maximum
emphasis. In our work on RUTH, the Rutgers University Talking Head,
we rely on this structure to create general-purpose annotation and
synthesis to animate an expressive talking face. We can also exploit
this structure to create application-specific animated characters
using databases of recorded speech and captured motion. Here we
start with tools that help create scripts for performers, help
annotate and segment performance data, and construct specific
messages for characters to use within application contexts. Our
animations then reproduce the structure of natural utterances; they
recombine motion samples with new speech samples to recreate
coherent phrases, and blend segments of speech and motion together
phrase-by-phrase into extended utterances. By framing problems for
utterance generation and synthesis so that they can draw closely on
a talented performance, our techniques support the rapid
construction of animated characters with rich and appropriate
expression.

Professor Jim Doran, Iruba: an Agent-Based Model of the
Guerrilla War Process

It is potentially insightful and useful to model guerrilla wars
in agent-based computational terms, as an extension to traditional
historical analysis. Accordingly, the Iruba agent-based model of a
typical guerrilla war has been designed and implemented. This model
will be described, and some experimental results obtained with it
will be reported along with the conclusions that may be drawn from
them. The results will be interpreted particularly in relation to
Guevara and Debray’s theory of “foco” and to a set of sufficient
conditions for insurgent success asserted by T. E. Lawrence. Then
the limitations of the Iruba and similar models will be considered
and the need for and possibility of direct analysis of an
agent-based computational model discussed. The advantages and
disadvantages of formulating a model as a production system will be
considered.

Dr William Harwin, Using Haptics to do Impossible Things

A new way of using computers is emerging based on the technology
of the haptic interface. Using these ideas the University of Reading
has developed the world's first virtual drum kit, a haptic torch
that could enable blind users to feel without touching, and the
first ever virtual building blocks. The technology also has some
important medical and therapeutic applications in areas such as
stroke rehabilitation. What is this technology? Nothing more than a
simple robot manipulator, but with a difference. Whereas in the past
robots had their own worlds and people were discouraged from entering
them, now robots are being designed with the specific intention of
interacting with people. Now the same technology that builds cars
might soon sit at your desk, at your gym, or in hospitals, and could
be used by humans to model cars, manipulate impossible objects
and teach us how to play golf like Tiger Woods, or even learn how to
use a yo-yo on Mars.

At the centre of the Clinical e-Science Framework (CLEF) is a
repository of well organised, detailed clinical histories, encoded
as data that will be available for use in clinical care and in-silico
medical experiments. This talk will present an overview of the CLEF
concept, and will describe two ways in which natural language
generation technologies are being employed to service the
requirements of clinicians and medical researchers as users of CLEF:
* a query editing interface, which makes use of natural language
generation techniques in order to alleviate some of the problems
generally faced by natural language and graphical query interfaces,
thereby allowing biomedical researchers and clinicians to query --
in an intuitive way -- the repository of patient data and to receive
dynamically generated multimedia responses
* a report generator, which produces on-demand a range of medical
reports on the clinical histories of individual patients, tailored
to different needs (e.g., perspectives and purposes) of the
clinician.

Professor Barry Smith, Why Computer Science Needs Philosophy

Human beings are becoming ever more dependent on
computers, not only in their everyday traffic with the world but
also in their special activities as scientists, doctors, managers,
administrators, soldiers. This means that software engineers are
increasingly confronted with challenges of a new kind. How can they
build software that can do justice in a useful and sophisticated way
to the reality of chemistry, or biology, or medicine, or of a large
enterprise, or battlefield? How, even more problematically, can they
do justice simultaneously to a plurality of such domains - for
example of genes/proteins/cells/toxins/diseases - in ways which
would allow inferences to be drawn automatically from the
corresponding data? One currently popular response to these
challenges from the computational side consists in the building of
what are called 'ontologies'. We shall examine some of the results
of this work, and show in what ways it is confused. We shall then
describe how philosophy is being used to resolve such confusions,
and indicate more generally the practical value that can be derived
from the confrontation between computer science and philosophy.

Discovering links and relationships among entities is one of the
main challenges in biomedical research, as scientists are interested
in uncovering entities that have similar functions, take part in the
same/related processes or are co-regulated. This is typically done
by mining experimental data. In this talk we will discuss the
extraction of semantically related entities from the biomedical
literature. The method combines various text-based aspects, such as
lexical, syntactic and contextual similarities between entities
(identified by domain terms). The results of initial experiments
will be presented and compared to other approaches. Several
applications that can make use of extracted relationships will be
discussed, including the possibilities to combine data from
experimental and textual resources.

The combination of information from multiple sources is a
widespread problem, mainly because it is rare for two sources to
adhere to precisely the same conventions. For example, do “author”,
“creator” and “composer” label the same property? Even where agreed
conventions exist, interpretations may differ – a supplier may take
"shippingDate" to be the time when goods are sent, whereas a
customer may interpret it as the time when they are delivered. This
is an example of a semantic gap – a term which can be interpreted in
multiple and possibly inconsistent ways.

The semantic web vision is for “islands of standardisation”
in which there are small, agreed ontologies for particular sets of
users or communities. Wherever two communities meet or overlap,
integration of information is a potential problem.

In this seminar I will outline the background to this problem,
and argue that it is essential to account for the uncertainty
inherent in the process. I will describe a framework for integrating
information sources, given a number of approximate mappings between
their attributes.

Some fundamental ideas in computer science just do not seem to
quite pan out. One fundamental idea in logic programming is that a
logic program consists of a logic component and a control component.
The former is a specification in predicate logic whereas the latter
defines the order of sub-goals selection. The programmer then
defines the control with delay declarations that specify that a
sub-goal is to suspend until some condition on its arguments is
satisfied -- or at least that is the idea. The problem is that
reasoning about delay declarations is notoriously tricky and it is
not unusual for a program and goal to reduce to a state that
contains a sub-goal that suspends indefinitely. Suspending sub-goals
are usually unintended and often indicate a programming error. This
talk will show how these programming errors can be located by
abstract interpretation. A debugger/analyser/verifier will be
demonstrated which has been used to locate concurrency bugs in real
programs.

This seminar will describe experiments aimed at adapting Ant
Colony Optimisation (ACO) techniques to an e-learning environment,
thanks to the fact that the available online material can be
organised in a graph by means of hyperlinks between educational
topics. The structure of this graph is to be optimised in order to
facilitate the learning process for students. ACO is based on an
ant-hill metaphor. In this case, however, the agents that move on
the graph are students who unconsciously leave pheromones in the
environment. The whole process is therefore referred to as a
"man-hill". Real-size tests have been performed, showing that
man-hills behave differently from ant-hills.
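
As a rough illustration only (not the experimental system described in
the talk), a pheromone-reinforcement loop over a hyperlink graph of
topics might be sketched as follows; the graph layout, evaporation rate
and deposit rule are assumptions made for the example.

    import random

    # Hypothetical sketch: "students" walk a hyperlink graph of topics,
    # implicitly depositing pheromone on the links of successful routes;
    # evaporation slowly forgets links that stop being used.
    def man_hill(graph, start, goal, n_students=200, evaporation=0.1, deposit=1.0):
        pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
        for _ in range(n_students):
            node, route = start, []
            while node != goal and len(route) < 50:
                options = graph[node]
                weights = [pheromone[(node, v)] for v in options]
                nxt = random.choices(options, weights=weights)[0]
                route.append((node, nxt))
                node = nxt
            for edge in pheromone:                 # evaporation everywhere
                pheromone[edge] *= 1.0 - evaporation
            if node == goal:                       # reinforce a successful route
                for edge in route:
                    pheromone[edge] += deposit / len(route)
        return pheromone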

Existing systems that use artificial intelligence rely on fixed
rule-bases or neural networks. They are trained off-line and do not
adapt to the environment. They do not develop their structure (do not
evolve). The environment in which real systems operate, however,
often changes unpredictably. This applies to robotic systems,
transport vehicles, technological processes, etc.
The challenge is to develop systems capable of higher level
adaptation to the environment in which they operate and to internal
changes.

Evolving systems aim to address this challenge. They should be
able to change their structure in addition to their parameters, which
is all that conventional adaptive systems adjust. They should be able
to grow, update and shrink when necessary. Having this ability, they
will be capable of adapting their goals, strategy and constraints
while performing a task. This concept is closely related to recent
advances in real-time evolving rule-based and neuro-fuzzy systems.
It lies at the crossroads of computational intelligence and systems
theory and borrows heavily from the biological and life sciences.
Thus the field is highly interdisciplinary and its applications
highly innovative.

The talk will concentrate on the problems and results the author
has encountered during the last several years of research in this
emerging area, as well as on one particular approach to on-line
identification of Takagi-Sugeno fuzzy models and some engineering
applications.

An agent-based artificial financial market (AFM) is used to study
market efficiency and learning in the context of the Neo-Austrian
economic paradigm. Efficiency is defined in terms of the 'excess'
profits associated with different trading strategies, where excess
for an active trading strategy is defined relative to a dynamic buy
and hold benchmark. We define an Inefficiency matrix that takes into
account the difference in excess profits of one trading strategy
versus another ('signal') relative to the standard error of those
profits ('noise') and use this statistical measure to gauge the
degree of market efficiency. A one-parameter family of trading
strategies is considered, the value of the parameter measuring the
relative 'informational' advantage of one strategy versus another.
Efficiency is then investigated in terms of the composition of the
market defined in terms of the relative proportions of traders using
a particular strategy and the parameter values associated with the
strategies. We show that markets are more efficient when
informational advantages are small (small signal) and when there are
many coexisting signals. Learning is introduced by considering
'copycat' traders that learn the relative values of the different
strategies in the market and copy the most successful one. We show
how such learning leads to a more informationally efficient market
but can also lead to a less efficient market as measured in terms of
excess profits. It is also shown how the presence of exogenous
information shocks that change trader expectations increases
efficiency and complicates the inference problem of copycats.
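
On the definitions given above, one plausible (hedged) reading of an
entry of the Inefficiency matrix, comparing strategies i and j via
their excess profits \pi_i and \pi_j relative to the dynamic
buy-and-hold benchmark, is

    I_{ij} = \frac{E[\pi_i] - E[\pi_j]}{\mathrm{SE}(\pi_i - \pi_j)}

i.e. a signal-to-noise ratio; the exact normalisation used in the talk
may differ.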

Simulation methods are frequently used to value financial
instruments whose payoffs depend upon a sequence of stock prices or
FX rates. Fast simulation methods are essential to enable traders to
quote prices in real time.

A standard assumption is that asset return processes are scaled
arithmetic Brownian motions. In this situation Brownian bridge
methods can be used as an effective variance reduction technique.

Empirically, asset return processes are not Gaussian. Models
based on processes such as the Variance-Gamma (VG) and Normal
Inverse Gaussian (NIG) Levy processes have recently been developed.
In this talk we develop bridge methods that can be used with these
processes. We demonstrate that considerable variance reduction can
be achieved for a range of path-dependent financial instruments.
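
For orientation, the standard Brownian-bridge construction used in the
Gaussian case can be sketched as below; the dyadic grid and recursion
are the textbook ones, and extending the idea to VG or NIG bridges, as
in the talk, requires the corresponding conditional laws.

    import numpy as np

    def brownian_bridge_path(T=1.0, n_levels=8, rng=np.random.default_rng(0)):
        """Sample Brownian motion on a dyadic grid: fix the endpoint first,
        then fill each midpoint conditional on its two neighbours."""
        n = 2 ** n_levels
        t = np.linspace(0.0, T, n + 1)
        w = np.zeros(n + 1)
        w[-1] = np.sqrt(T) * rng.standard_normal()        # terminal value first
        step = n
        while step > 1:
            half = step // 2
            for left in range(0, n, step):
                right, mid = left + step, left + half
                mean = 0.5 * (w[left] + w[right])          # linear interpolation
                var = (t[mid] - t[left]) * (t[right] - t[mid]) / (t[right] - t[left])
                w[mid] = mean + np.sqrt(var) * rng.standard_normal()
            step = half
        return t, w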

Satellite Positioning in the 21st Century, Dr David Park, IESSG, University of Nottingham

The American owned Global Positioning System (GPS) is still the
only fully operational satellite navigation system in the world.
However, in the coming 5 to 10 years, a number of additional global,
regional and local positioning and navigation systems will be
developed and deployed, including Europe’s Galileo system.

This talk will provide a quick introduction to the field of
satellite positioning including its history, how it works, and
current commercial, scientific and military applications. A closer
look at some of the new satellite-based systems likely to be
available in the next 10 years, including Galileo, GPS III and EGNOS,
will then be followed by a brief look at the various
terrestrial/on-board positioning sensors that are likely to be
integrated into every ‘Satnav’ system within the next 10 years (eg
inertial navigation systems).

It is hoped that an open question and answer session will be
possible at the end of the presentation where any outstanding issues
can be discussed further.

Face-to-face communication cannot be reduced to the exchange of
verbal information. While speaking, one can, for example, indicate the
size and shape of an object with a few hand strokes, direct attention to an
object by pointing or gaze, and modify what is being said by
emotional facial expression. The meanings transmitted in this way
are multimodally encoded and strongly situated in the present
context and especially the body.

With the artificial humanoid agent MAX under development at
Bielefeld University we investigate embodied communication by way of
computer simulations in virtual reality. Equipped with a synthetic
voice and an animated body and face, Max is able to speak and
gesture, and to mimic emotions. By means of microphones and tracker
systems, Max can also "hear" and "see" and is able to process spoken
instructions and gestures. Current research challenges include what
is required for enabling an artificial agent to imitate human
gestures, and how it could express event-related emotions.

In a multidisciplinary endeavour, embodied communication (in
humans and machines) is also the theme of a forthcoming ZiF research
year, which will bring together theoreticians, empirical and applied
researchers from the cognitive, neuro- and computer sciences. The
aim is to develop an integrated perspective of embodiment in
communication and study novel forms of human machine communication.

"Deep" grammars relate strings to information/meaning, usually in
the form of predicate-argument structures, deep dependency relations
or logical forms. "Shallow" grammars define languages as sets of
strings and may associate syntactic structures (e.g. CFG parse
trees) with strings. Natural languages do not always interpret
linguistic material locally where the material is encountered in the
string (tree etc.). In order to obtain accurate and complete
information/meaning representations, a hallmark of deep grammars is
that they usually involve a long-distance dependency (LDD)
resolution mechanism.

Traditionally, deep grammars are hand-crafted. Wide-coverage,
deep grammar development, particularly in rich constraint-based
formalisms such as LFG and HPSG, is knowledge-intensive,
time-consuming and expensive, constituting an instance of the
(in)famous "knowledge acquisition bottleneck" familiar from other
areas of traditional, rule-based AI and NLP. Very few deep grammars
(Riezler et al., 2002) have, in fact, been successfully scaled to
unrestricted input.

The last 15 years have seen extensive efforts on treebank-based
automatic grammar acquisition using a variety of machine-learning
techniques (e.g. Charniak, 1995; Collins, 1997; Johnson, 1998;
Charniak, 2000; Johnson, 2002). The grammars induced are
wide-coverage and robust and, in contrast to manual grammar
development, machine-learning-based grammar induction incurs
relatively low development cost. With few notable exceptions
(Collins' Model 3, Johnson, 2002), however, these treebank-induced
wide-coverage grammars are shallow: they usually do not attempt to
resolve long-distance dependencies nor associate strings with
information/meaning representations (in the form of
predicate-argument structure, dependency relations or logical
forms).

Traditional vector-based models use word co-occurrence counts
from large corpora to represent lexical meaning. In this talk we
present a novel framework for constructing semantic spaces that
takes syntactic relations into account. We introduce a formalisation
for this class of models which allows linguistic knowledge to guide
the construction of semantic spaces. We evaluate our framework on
tasks that are relevant for cognitive science and NLP: semantic
priming, lexicon acquisition and word sense disambiguation. In all
cases we show that our framework obtains results that are comparable
or superior to the state of the art.
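
As a toy illustration of the general idea (not the framework presented
in the talk), a dependency-based semantic space can be built from
(head, relation, dependent) triples and compared with cosine
similarity; the triples and weighting below are invented for the
example.

    from collections import defaultdict
    from math import sqrt

    def build_space(triples):
        """Represent each word by counts of (relation, co-occurring word) pairs."""
        space = defaultdict(lambda: defaultdict(float))
        for head, rel, dep in triples:
            space[head][(rel, dep)] += 1.0
            space[dep][("inv_" + rel, head)] += 1.0
        return space

    def cosine(u, v):
        num = sum(u[k] * v[k] for k in set(u) & set(v))
        den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return num / den if den else 0.0

    triples = [("drink", "obj", "coffee"), ("drink", "obj", "tea"),
               ("spill", "obj", "coffee"), ("spill", "obj", "tea")]
    space = build_space(triples)
    print(cosine(space["coffee"], space["tea"]))   # 1.0: identical syntactic contexts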

The extraordinary human construct that is the World Wide Web is a
truly Disruptive Technology. There are now hundreds of millions of
users, billions of indexed web resources, it is used in every
country on Earth and yet only a tiny percentage of users is
“trained” in any way. This remarkable construct is both massively
distributed and largely open.

With this amount of content and usage the integration of
information across space and time leads to new opportunities. From
on-line shopping to collaborative e-Science the web is changing how
information is generated, deployed and used. The military will have
few options but to take advantage of the huge investment that the
commercial and research sectors will make in web service solutions
and architectures.

This lecture will examine the extent to which intelligent web
services are evolving to cope with diverse sources of information on
a global scale. It will examine the particular way in which
Artificial Intelligence is being woven into the web. It will review
how these developments are likely to shape future military
capabilities.

On Concepts, Words and Syntax: the featural and unitary semantic
space hypothesis and beyond,
Dr Gabriella Vigliocco, University of Sheffield

No abstract available

The Power of Large-Scale Multi-Agent
Systems,
Dr George Rzevski, Multi-Agent & Distributed Intelligence
Research Alliance, Brunel University

Four years ago Professor Rzevski founded a software development
company to commercialise his research into Ontology-based
Multi-Agent Systems. The company is rapidly expanding and currently
has 100 employees producing large-scale multi-agent software for
diverse applications such as transportation logistics, design of
very large structures, data clustering and text understanding. The
key ideas will be illustrated by examples of large-scale commercial
systems.

In this seminar, I will address the problem of using vision for
tasks related to mobile robot navigation (in particular mapping and
homing). I will claim that the only method worth using is based only
on the appearance of the environment (and will give the audience the
opportunity to contest this claim ;-)), will discuss the implications
and limitations of this claim and will show how this can be done.
I'll show results for different tasks and talk about where we are
going with this.

Blind Source Separation (BSS) and Independent Component Analysis
(ICA) are increasing in popularity in the field of biomedical signal
processing. They are generally used when it is required to separate
measured multi-channel biomedical signals into their constituent
underlying components. The use of ICA has been facilitated in part
by the free availability of toolboxes that implement popular
flavours of the techniques. Fundamentally, ICA in biomedicine
involves the extraction and separation of statistically independent
sources underlying multiple measurements of biomedical signals.
Enhanced results are obtained with the introduction of ICA/BSS
techniques based on time, frequency and joint time-frequency
decomposition of the data.

This talk will focus on my work using such BSS techniques applied
to the analysis of electromagnetic (EM) brain signals - i.e. the
electroencephalogram and the magnetoencephalogram. These techniques
are used for decomposing measured EM brain signals into their varied
underlying sources - these have multiple uses, from predicting the
onset of an epileptic seizure, to monitoring changing brain-state in
a neurological Intensive Care Unit.
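
As a minimal illustration of the basic (time-domain) decomposition
step, rather than of the time-frequency extensions described above,
scikit-learn's FastICA can unmix a toy set of channels; the synthetic
sources below are invented for the example.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)
    sources = np.c_[np.sin(2 * t),                  # rhythmic component
                    np.sign(np.sin(3 * t)),         # square-wave artefact
                    rng.laplace(size=t.size)]       # heavy-tailed noise source
    mixing = rng.random((3, 3))
    channels = sources @ mixing.T                   # what the "electrodes" record

    ica = FastICA(n_components=3, random_state=0)
    recovered = ica.fit_transform(channels)         # estimated underlying sources
    print(recovered.shape)                          # (2000, 3)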

Agent-environment State Machines, Dr Tom Ziemke, University of
Skövde, Sweden

Traditional cognitive science and AI viewed cognition as, roughly
speaking, the computational manipulation of internal
representational knowledge of the external world through state
machines (e.g., of the Turing machine type). Most of this is
strongly questioned in recent work on embodied, situated and
distributed cognition. However, using examples from robotic
experiments and human case studies, this talk will illustrate that
the notion of state machines is also useful in describing
distributed cognitive processes emerging from the interaction of
embodied agents and their environments, where knowledge and states
are not purely internal, but distributed over agent and environment.

Aerobot technology is generating a good deal of interest in
planetary exploration circles. Balloon based aerobots have much to
offer ESA's Aurora programme, e.g. high resolution mapping, landing
site selection, rover guidance, data relay, sample site selection,
payload delivery, and atmospheric measurement. Aerobots could be
used in a variety of configurations from uncontrolled free-flying to
tethered rover operation, and are able to perform a range of
important tasks that other exploration vehicles cannot. In many ways
they provide a missing ‘piece’ of the exploration ‘jigsaw’, acting
as a bridge between the capabilities of in-situ rovers or landers
and non-contact orbiters. Technically, a Lighter than Air (LTA)
aerobot concept is attractive because it is low risk, low-cost,
efficient, and much less complex than Heavier than Air (HTA)
vehicles such as fixed wing gliders, and crucially, much of the
required technology ‘building blocks’ currently exist. Smart imaging
and localisation is a key enabling technology for remote aerobots.

The University of Wales Aberystwyth (UWA) has been working in the
area of aerobots for future planetary exploration for a number of
years, and is involved in an ESA funded contract to develop a
demonstrator imaging and localisation package (ILP) for a Martian
balloon. This ILP system will incorporate a unique combination of
image based relative and absolute localisation techniques.
Additionally the Aberystwyth team is working on a possible tethered
balloon system for the ESA Aurora ExoMars rover study.

This talk provides a background to the application of aerobot
technology for future planetary exploration, and details the
research being undertaken at Aberystwyth in the areas of aerobot
imaging and localisation, and tethered balloon/rover operations.

Computer checkers players that have been designed through
evolution are now available commercially. This talk details a study
into whether classic playing strategies can really be evolved from
nothing, or whether many of the evolved players seen in the
literature actually benefit from expert knowledge. The results
suggest that some common heuristics will evolve, but it is not
necessarily easy. The implications for other evolved players in the
literature are analysed and discussed.

Investigations into Graceful Degradation of Evolutionary
Developmental Software, Dr Peter Bentley, University College London

Today’s software is brittle. A tiny corruption in an executable
will normally result in terminal failure of that program. But nature
does not seem to suffer from the same problems. A multicellular
organism, its genes evolved and developed, shows graceful
degradation: should it be damaged, it is designed to continue to
work. This talk describes an investigation into software with the
same properties, focussing on the use of fractal proteins in a model
of evolutionary development.

Three programs, one human-designed, one evolved using genetic
programming, and one evolved and developed using the fractal
developmental system are compared. All three calculate the square
root of a number. The programs are damaged by corrupting their
compiled executable code, and the ability for each of them to
survive such damage is assessed. Experiments demonstrate that only
the evolutionary developmental code shows graceful degradation after
damage.

Reality is too complex and unpredictable; models cannot be
precompiled to cope with real data. The talk will present an ongoing
endeavour to design algorithms to deal with real data in complex
environments. Examples will be given in computer vision and computer
animation. Learning can be used in computer vision to identify
events of interest, calibrate cameras and combine information in a
multi-view system. Learning can be employed in computer animation to
build self-sustainable avatars, able to optimise their actions in a
complex search space. Learning can be devised in isolation, but,
most importantly, it can be distributed across a number of modules
that, when communicating, synergistically give birth to emerging
functionality.

Lightweight Formal Tool Support for Software Evolution, Tom Mens,
University of Mons-Hainaut

Since all software is subject to evolution, there is a clear need
for better tools to support and automate various aspects of software
evolution. Such tools are not only needed at the level of source
code (evolution of programs) but also at a design level (evolution
of models).

During my talk I will briefly sketch two formal approaches that
can be used to provide better support for software evolution. First,
I will present an approach for software refactoring that relies on
the formalism of graph transformation. Second, I will present an
approach for maintaining the consistency of evolving UML models
based on the formalism of description logics.

Parser combinators are higher order functions that transform
parsers into parsers. Parsing with context free grammars can be
handled by defining combinators for the key operations in context
free grammar rules: recognizing epsilon, recognizing a terminal,
choice between rewrite rules for a given nonterminal, and sequencing
under a nonterminal. After explaining and illustrating this in some
detail, we demonstrate how (stack) parser combinators can be used
for the analysis of movement (or extraction) in natural language.
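
As a minimal sketch of the combinators named above (in Python rather
than the functional notation presumably used in the talk), a parser
can be modelled as a function from an input position to the set of
positions it can reach:

    # A parser maps (tokens, position) to the set of positions reachable after it.
    def epsilon(tokens, i):
        return {i}

    def terminal(t):
        def parse(tokens, i):
            return {i + 1} if i < len(tokens) and tokens[i] == t else set()
        return parse

    def alt(p, q):    # choice between rewrite rules for a nonterminal
        return lambda tokens, i: p(tokens, i) | q(tokens, i)

    def seq(p, q):    # sequencing under a nonterminal
        return lambda tokens, i: {k for j in p(tokens, i) for k in q(tokens, j)}

    # Example grammar: S -> 'a' S | epsilon  (recognises a*)
    def S(tokens, i):
        return alt(seq(terminal("a"), S), epsilon)(tokens, i)

    tokens = list("aaa")
    print(len(tokens) in S(tokens, 0))   # True: the whole string is recognised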

Support vector machines are not directly suitable for data of
different lengths. Chain-coded hand-written digit recognition is a
problem of this type. In this talk I will discuss several ways of
using SVM on such data sets. In particular, we show that edit
distances between sequences can provide good features (attributes) for
transforming the original data into a form suitable for SVM. Good accuracy
on some real-world data sets will be presented. Furthermore, I will
discuss some implementation issues.
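
One of the simplest of the ways alluded to above can be sketched as
follows (a hypothetical example using scikit-learn, not the speaker's
system): each variable-length sequence is mapped to a fixed-length
vector of edit distances to a set of reference sequences, which is
then fed to a standard SVM.

    import numpy as np
    from sklearn.svm import SVC

    def edit_distance(a, b):
        """Levenshtein distance by dynamic programming."""
        d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        d[:, 0] = np.arange(len(a) + 1)
        d[0, :] = np.arange(len(b) + 1)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
        return int(d[len(a), len(b)])

    # Toy chain codes of different lengths; the training set doubles as the
    # reference set, giving every sequence a fixed-length distance vector.
    train, labels = ["0012", "0013", "4456", "44567"], [0, 0, 1, 1]
    X = np.array([[edit_distance(s, r) for r in train] for s in train])
    clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
    x_new = np.array([[edit_distance("0014", r) for r in train]])
    print(clf.predict(x_new))   # expected: class 0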

Modelling and Reasoning about Extended Business Transactions,
Professor Michael Butler, University Of Southampton

In this talk I will give an overview of a formal modelling
language called StAC (Structured Activity Compensation), which
resulted from a collaboration with a group at IBM Hursley.
Compensation is an important feature of StAC. It is a generalisation
of the commit/rollback approach of traditional transaction
processing, designed to deal with situations where an undo is
infeasible in the case of error. I will also talk about the
relationship between StAC and the BPEL4WS business coordination
language for web services.

Vision faces the problem of an extremely high degree of vagueness
and uncertainty in its low level processes such as edge detection,
optic flow analysis and stereo estimation. However, the human visual
system acquires visual representations which allow for actions with
high precision and certainty within the 3D world under rather
uncontrolled conditions. Humans can achieve the needed certainty and
completeness by integrating visual information across visual and
sensory modalities.

This integration is manifested in the huge connectivity between
brain areas in which the different visual modalities are processed
as well as in the large number of feedback connections between
higher and lower cortical areas. The essential need for integrating
information across modalities in addition to optimising single
modalities has been recognised in the vision community after a long
period of work on improving single modalities. The power of modality
fusion arises from the intrinsic relations given by deterministic
and statistical regularities across visual modalities and across
different sensors (e.g., haptic information and vision). Two
important regularities in visual data with distinct properties are
(1) motion (most importantly rigid body motion, RBM) and (2)
statistical interdependencies between features such as collinearity
and symmetry.

In my talk I will introduce a multi-modal representation that has
been applied to the problem of visual scene analysis. This
representation codes information in terms of local visual Primitives
in a sparse and condensed way. The Primitives code orientation, grey
value, colour, optic flow, stereo information as well as information
about "junction-ness" or "edge-ness". They are motivated by
hyper-column structures in the visual cortex. The Primitives
initialise a disambiguation process that utilizes statistical and
deterministic regularities. Besides its usefulness for computer
vision, our multi-modal representation has resulted in a new form of
computer art, called `symbolic pointillism'. Finally, a haptic
exploration of objects will be shown and its potential interaction
with vision will be discussed.

Generation of Multiple-choice Tests, Professor Ruslan Mitkov,
University of Wolverhampton

The objective of this on-going project is to deliver a
comprehensive computer-aided environment for generating
multiple-choice tests from electronic instructional documents
(textbooks). In addition to employing various Natural Language
Processing (NLP) techniques, including, among many other things, term
extraction and semantic distance computing, the system makes use of
language resources such as corpora and ontologies. It identifies
sentences containing important concepts and generates test questions
as well as distractors, offering the user the option to post-edit
the test items in a user-friendly interface. The talk will present
evaluation results for the current system, which show that by helping
test developers to produce tests in a significantly faster and more
expedient manner, without any compromise in quality, this new tool
saves both time and production costs.

This introductory talk is tailored for people who haven't come
across SDPs before. It begins with a novel example, 'The Restaurant
Game', to illustrate the principles behind SDPs; gives a taxonomy of
the key types of SDS (together with examples of SDPs in nature);
discusses properties of the algorithm; and concludes with a demo of
various different SDPs in action performing 'best-fit' string search.

CSP and Communicating B Machines, Professor Steve Schneider,
University of London

Recent work on combining CSP and B has provided ways of
describing systems composed of components described in both B (to
express requirements on state) and CSP (to express interactive and
controller behaviour). This approach is driven by the desire to
exploit existing tool support for both CSP and B, and by the need
for compositional proof techniques. This talk is concerned with the
theory underpinning the approach, and discusses a number of results
for the development and verification of systems described using a
combination of CSP and B, with particular focus on concurrency and
abstraction. The talk discusses how results obtained (possibly with
tools) on the CSP part of the description can be lifted to the
combination. The results are illustrated with a toy lift controller
running example.

A Predicative Model for General Correctness, Dr Steve Dunne,
University of Teesside

The predicative approach to program semantics was pioneered by
Hehner, and adopted by Hoare and He for their seminal work on
unifying theories of programming. I will formulate alternative
healthiness conditions intended to characterise predicates which
give us a general-correctness rather than total-correctness
description of sequential program behaviour. Surprisingly, these are
simpler and have more mathematically desirable properties than Hoare
and He's original ones, suggesting that general correctness might be
a more suitable basis for sequential program semantics than total
correctness.

The range of roles for thesauruses within NLP is presented and
the WASPS thesaurus is introduced. We argue that manual and
automatic thesauruses are alternative resources for the same NLP
tasks. This involves the radical step of interpreting manual
thesauruses as classifications of words rather than word senses: the
case for this is made. Thesaurus evaluation is now becoming urgent.
A range of evaluation strategies, all embedded within NLP tasks, is
proposed.

Real time simulation trials involve people and are particularly
subject to a number of natural constraints imposed by standard work
patterns as well as to the vagaries of the availability of
individuals and unscheduled upsets. They also typically involve many
factors. Well thought-out simulation experimental design is
therefore especially important if the resulting overall trial is to
be efficient and robust. We propose hybrid experimental designs that
combine the safety of matched runs with the efficiency of fractional
factorial designs. This article describes real experiences in this
area and the approach and methodology that has evolved from them and
has proved effective in practice.

The mechanisms whereby mature multicellular biological organisms
are constructed from the original fertilized egg are extraordinary
and complex. Living organisms are perpetually changing yet outwardly
they often give the appearance of stasis. This is accomplished by
cells constantly reproducing and dying. There is no central control
behind this. Such a process leads naturally to the self-repairing
properties of organisms. At present humans design systems using a
top down design process. Until now, this methodology of design has
served humanity well. However, when one looks at the problem of
constructing software and hardware systems with of the order of 10^13
or more components (roughly the number of cells in the human body),
it appears that we face a design crisis. As the number of
interacting components grows the complexity of the system grows
also. This can lead to many problems. It becomes difficult to
formally verify such systems due to the combinatorial explosion of
configurations or states. The cost of maintenance also grows
alarmingly, and unreliability increases. Electronic circuits are now
routinely constructed at sub-micron level. This is leading to
immense wiring problems. As the trend for increasing miniaturization
of circuits continues, it will become ever more difficult to supply
the huge amounts of data required to define or program these
devices. Living systems demonstrate very clever solutions to these
problems. The information defining the organism is contained within
each, and every, part. There is no central design. Consequently, it
is likely that designers will increasingly have to turn to methods
of constructing systems that mimic the developmental processes that
occur in living systems. To some extent, such an agenda has been
embraced in the nascent field of amorphous computing.

I will describe my recent work on evolving developmental programs
inside a cell to create multicellular organisms of arbitrary size
and characteristics. The cell genotype is evolved so that the
organism will organize itself into well defined patterns of
differentiated cell types (e.g. the French Flag). In addition the
cell genotypes are evolved to respond appropriately to environmental
signals that cause metamorphosis of the whole organism. A number of
experiments are described that show that the organisms exhibit
emergent properties of self-repair and adaptation.

Lucy and the Quest for the Holy Grail, Steve Grand, Cyber Life
Limited

The cerebral cortex seems
to be a general-purpose machine with the capacity to self-organise
into an assortment of useful specialist machines, guided only (or at
least mostly) by the information being supplied to it. This is such
a striking facility that one might expect models of cortical
function to lie at the heart of most AI research, yet in practice it
seems to receive very little attention. There are several good
reasons for this, but it doesn't follow that the problem is
intractable. Lucy the robot is my (amateur and largely unfunded)
attempt to discover the fundamental engineering principles that lie
behind cortical organisation. The project is still in its early
days, but nevertheless is starting to show promise. This is a
progress report.

A pedagogical introduction to financial markets, both real and
virtual, will be given, showing how one can model a financial market
using "intelligent" artificial agents. I will consider elements such
as: the organizational structure of the market; models of various types
of market participant, such as market makers; price formation; and
information and learning. In particular I will consider the
phenomenon of adaptation and learning, showing how it is,
essentially, a problem in signal identification in a noisy
environment. Implications for the efficiency of markets will be
considered.

Neuro Robots: Issues in Evolvability and Plasticity, Professor
Phil Husbands, University of Sussex

Recent years have seen the discovery of freely diffusing gaseous
neurotransmitters, such as nitric oxide (NO), in biological nervous
systems. A type of artificial neural network (ANN) inspired by such
gaseous signalling, the GasNet, has previously been shown to be more
evolvable than traditional ANNs when used as an artificial nervous
system in an evolutionary robotics setting. Evolvability is used in
the sense of consistent speed to very good solutions, here
appropriate sensorimotor behaviour generating systems. We present
two new versions of the GasNet which take further inspiration from
the properties of neuronal gaseous signalling. The plexus model is
inspired by the extraordinary NO producing cortical plexus structure
of neural fibres and the properties of the diffusing NO signal it
generates. The receptor model is inspired by the mediating action of
neurotransmitter receptors. Both models are shown to significantly
further improve evolvability. We describe a series of analyses
suggesting that the increase in evolvability is related to the
flexible, loose coupling of distinct signalling mechanisms, one
`chemical' and one `electrical'.

Learning Domain Theories, Professor Steve Pulman, Oxford
University

By `domain theories' we mean collections of non-linguistic facts
and generalisations or `commonsense theories' about some domain.
Traditional approaches to many problems in AI, and in NL
disambiguation, presupposed that such theories were to hand for the
relevant domain of application. Notoriously these were difficult to
formalise and fragile to use, and over recent years the traditional
approach has been almost entirely superseded by statistical learning
mechanisms. However, this does not mean that the traditional view
was entirely wrong: it is clear that many aspects of linguistic
interpretation are based on reasoning to novel conclusions of a type
that is unlikely to be replicable by purely statistical methods.
This talk reports some recent experiments on learning domain
theories automatically, starting with parsed corpora, and using
various machine learning techniques, particularly Inductive Logic
Programming, to derive some simple Domain Theories from them.

If we want dialogue systems to cope with the semantic content of
utterances, we need tools that build meaning representations, detect
inconsistencies, distinguish old from new information, extract
specific features, and determine speech acts (and possibly more).
Moreover, all of this needs to be accomplished in a systematic and
efficient way, taking situational and background knowledge into
account. In this talk I investigate how realistic this goal is by
illustrating the role of computational semantics and inference in
spoken dialogue systems in general, and for two tasks in particular:

In the last part of the talk I will discuss two applications of
spoken dialogue systems (a mobile robot and home automation) that
successfully use these techniques, showing that we are significantly
closer to our goal than even a few years ago. Finally, I will
suggest future research directions and challenges for the deployment
of computational semantics in dialogue systems with larger domains.

Graph Colouring as a Model of Parallel Computation, Dr Alexei
Vernitski, University of Essex

In graph theory, the problem of colouring a graph is the problem
of assigning a colour (from a restricted list of available colours)
to each vertex of the graph in such a way that no two connected
vertices are assigned the same colour. We can imagine solving a
graph colouring problem in a parallel architecture: let each vertex
of the graph have a simple processor, whose sole function is to
assign a colour to the vertex, making sure that this is not the same
colour as any of its adjacent vertices. Unexpectedly, this
architecture has excellent computational power: I shall show how
it can calculate every Boolean function, and, moreover, every
function of multiple-valued logic.
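
A toy sketch of the per-vertex rule described above (run here as
repeated sequential sweeps rather than on genuinely parallel hardware)
might look like this:

    # Each vertex repeatedly takes the first colour not used by its neighbours.
    def colour(graph, colours, max_rounds=100):
        assignment = {v: None for v in graph}
        for _ in range(max_rounds):
            changed = False
            for v in graph:
                taken = {assignment[u] for u in graph[v] if assignment[u] is not None}
                choice = next((c for c in colours if c not in taken), None)
                if choice != assignment[v]:
                    assignment[v], changed = choice, True
            if not changed:
                break
        return assignment

    triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
    print(colour(triangle, colours=[0, 1, 2]))   # e.g. {'a': 0, 'b': 1, 'c': 2}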

An Introduction to Drift Analysis for Estimating Computation
Time of Evolutionary Algorithms, Dr Jun He and Xin Yao, University
of Birmingham

Evolutionary algorithms (EAs) have been used widely in solving
combinatorial optimisation problems. However, few theoretical
results exist that analyse the time complexity of EAs for these
problems. Recently, this topic has been attracting a lot of
interest from researchers both in theoretical computer science and
in evolutionary computation. This talk will report the application
of drift analysis in estimating computation time of EAs. Drift
analysis aims to derive some important properties of a Markov chain
(or a general stochastic process), e.g. the expected first hitting
time to the optimum, by its one-step mean drift. Based on drift
analysis, we will present some useful drift conditions for analysing
the time complexity of EAs.
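
For orientation, the standard additive drift statement (a hedged
illustration, not necessarily the exact formulation used in the talk)
is: if d(X_t) >= 0 measures the distance of the current state from the
optimum and the one-step mean drift is bounded below by a constant
c > 0,

    E[\, d(X_t) - d(X_{t+1}) \mid d(X_t) > 0 \,] \ge c,

then the expected first hitting time T of the optimum satisfies
E[T] \le d(X_0) / c.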

This talk will provide a brief introduction to a challenging
research area in scheduling by focusing on the algorithmic aspects
of scheduling problems.

Simulations for failure semantics, Eerke Boiten, University
College London

Data refinement is a generally accepted and well established
method of refining sequential programs. Data refinement is normally
verified using so-called simulations, which provide a sound and
complete method.

This talk demonstrates how the simulations method can also be
applied to concurrent systems. In concurrent systems, notions of
refinement correspond to notions of "testing" of such systems, which
vary according to what can be observed in a "test". Such
observations can be added explicitly to those already made by
sequential programs, yielding a new semantics. In particular, when
these observations correspond to sets of refused operations, we
obtain a failures semantics. The established data refinement theory
then provides a notion of simulation, which can be used to verify
failures refinement.

Improving the performance of support vector methods, Prof Tom
Downs, School of Computer Science and Electrical Engineering,
University of Queensland

This seminar will give an introduction to support vector machines
(SVMs) for classification and for regression. It will also describe
some variations on the SVM theme that have been developed in recent
years to provide either simpler learning algorithms or improved
generalization performance. Some examples of the application of SVMs
to practical datasets will be given, with performance compared to
other leading techniques.

Finally the seminar will describe some recent work carried out at
the University of Queensland which allows the exact simplification of
the (sometimes quite complex) solutions provided by SVMs. An
additional method, that has grown out of this technique, will be
described that allows identification of redundant training data
(i.e. data that contribute nothing to the learning process beyond
what is provided by other data).

Search-Based Software Testing, Mark Harman, Brunel University

Search-based testing is the name given to a set of problems where
test data is to be generated using search-based algorithms, most
notably genetic algorithms. There is a growing interest in
search-based testing techniques, reflecting a similar increase in
awareness of search-based techniques for solving a wide class of
traditional problems in software engineering.

The search for good quality test data is a central plank of many
organisations' verification and validation strategies. It turns out
that many of the criteria that define "good quality" in test data
form ideal candidates for fitness functions. In this talk I will
present a survey of the field, relating both academic results and
industrial practice, focusing in particular, on my joint work with
DaimlerChrysler's Evolutionary Testing group. (This work is also
conducted in collaboration with Chris Fox of Essex University.)

Navigation of the Titan 4WD outdoor mobile robot using a custom-made
phased array CTFM sonar for landmark recognition, Dr Danny
Ratner, Aerial and Mobile Robotics Research Group, University of
Wollongong

The talk will present ongoing research at the Aerial and Mobile
Robotics Research Group at the University of Wollongong, focussing
on the continuous transmission frequency modulated sonar sensors
developed at Wollongong. These sensors can be used for range finding
as well as identifying objects by the spectrum of their sonar
response. The talk will also cover aspects of the work carried out
at the Israeli Air Force Laboratory (Rafael) and through the
DDARM2 project at CMU (with Takeo Kanade and others).

Research on cooperative, adaptive intelligent systems involves
studying, developing and evaluating architectures and methods to
solve complex problems using adaptive and cooperative systems. These
systems may range from simple software modules (such as a clustering
or a classification algorithm) to physical systems (such as
autonomous robots or sensors). The main characteristic of these
systems is that they are adaptive and cooperative. By adaptive, it
is meant that the systems have a learning ability that makes them
adjust their behaviour or performance to cope with changing
situations. By cooperative, it is meant that the systems work
together to solve complex problems or to achieve common goals.

In this talk we will report on the recent research in the Pattern
Analysis and Machine Intelligence group on Cooperative Intelligent
Systems. We will focus on adaptive aggregation methods and
architectures, use of agents and multilearning with applications in
classification, sensor planning and robotics.

Optimization in WDM Optical Networking: Some New Challenges, Dr.
Gaoxi Xiao, School of EEE, Nanyang Tech. University of Singapore

This talk is composed of two parts. The first part will be a very
brief introduction to the Network Technology Research Centre (NTRC)
of Nanyang Technological University (NTU), where the speaker
conducts most of his research.

In the second part, the speaker will provide a personal view of
some new challenges in WDM optical networking. Specifically, he
will discuss several problems that optimization specialists have
encountered; this information will probably be very beneficial.

A Practical Guide to Support Vector Classification, Chih-Jen
Lin, National Taiwan University

Support vector machine (SVM) is a popular technique for
classification. However, people who are not familiar with SVM often
get unsatisfactory results since they may miss some simple but
significant steps. In this talk, we first discuss a simple
procedure, which usually gives reasonable results. It has been fully
implemented in our package libsvm, one of the main SVM software
packages.
Secondly, we discuss challenging cases for which this simple
procedure may not perform well and then some possible research
issues.
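
The flavour of such a simple procedure (scale the data, use an RBF
kernel, tune C and gamma by cross-validation) can be illustrated with
scikit-learn rather than the libsvm command-line tools; the dataset
and parameter grid below are arbitrary choices made for the example.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 1) scale features, 2) RBF kernel, 3) cross-validate C and gamma on a grid
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10, 100],
                               "svc__gamma": [0.01, 0.1, 1]}, cv=5)
    grid.fit(X_train, y_train)
    print(grid.best_params_, grid.score(X_test, y_test))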

Some initial experiments using GP to evolve comprehensible robust
predictive models based on gene expression as measured by DNA chips
will be described. DNA chip data is notoriously difficult to data
mine. It is noisy, has thousands of features but often few training
examples.

The approach is based on selecting genes via a small number of
iterations of many GP runs in parallel. In a final iteration a
non-linear model is evolved using a few tens of genes. Bloat is
combated by size fair crossover.

Simple models using few genes are preferred over complex models
using more genes, as the simpler models may be of more interest to
biologists (even though the bigger models may have greater accuracy
when measured on the limited data available).

Comparison of data and process refinement, Dr David Streader,
University of Waikato, New Zealand

From what point of view is it reasonable, or possible, to refine
a one place buffer into a two place buffer? In order to answer this
question we characterise refinement based on substitution in
restricted contexts. We see that data refinement (specifically in Z)
and process refinement give differing answers to the original
question, and we compare the precise circumstances which give rise
to this difference by translating programs and processes into
labelled transition systems, so providing a common basis upon which
to make the comparison. We also look at the closely related area of
sub-typing of objects. Along the way we see how all these sorts of
computational construct are related as far as refinement is
concerned, and discover and characterise some (as far as we can
tell) new sorts of refinement.

Pattern Analysis in Practice, Prof. Ian Nabney, Aston University

The aim of this talk is to demonstrate that the key to success in
developing pattern analysis applications is to achieve a principled
pragmatism; flexibility without prejudice.

We need theory to provide a wide range of models (with different
motivating assumptions) and so that we can apply appropriate
techniques given the structure of a problem. However, an application
is not just about picking a model with the best accuracy: practical
questions are not settled completely by minimising a sum-of-squares
error function. In particular, a model forms part of a larger system
that must be accessible to users which leads to a broader range of
challenges.

The main differences from purely theoretical investigations will
be described and illustrated with real examples including wind field
modelling, oil drilling condition monitoring, ECG arrhythmia
analysis, bioinformatics and medical diagnosis.

A long and lasting problem in agent research has been to close
the gap between agent logics and agent programming frameworks. The
main reason for this problem of establishing a link is identified
and explained by the fact that agent programming frameworks have not
incorporated the concept of a declarative goal. Instead, such
frameworks have focused mainly on plans or goals-to-do instead of
the end goals to be realised, which are also called goals-to-be.
In this talk, a new programming language called GOAL
is introduced which incorporates such declarative goals. The notion
of a commitment strategy - one of the main theoretical insights due
to agent logics, which explains the relation between beliefs and
goals - is used to construct a computational semantics for GOAL.
Finally, a proof theory for proving properties of GOAL agents is
introduced. Thus, we offer a complete theory of agent programming in
the sense that our theory provides both for a programming framework
and a programming logic for such agents. An example program is
proven correct by using this programming logic.

Research in the UCL Bioinformatics Unit, Prof David Jones,
University College London

In this talk some of the research being carried out in the UCL
Bioinformatics Unit will be discussed, with a focus on our work on
protein structure prediction. The so-called "protein folding
problem" is perhaps the biggest of the remaining challenges in
molecular biology.The puzzle as to how the complex 3-D structure of
a protein is encoded purely by its amino acid sequence is being
tackled by both experimentalists and theoreticians alike. Although a
full understanding of how proteins self-assemble into their native
structures is a long way off, a more tractable sub-problem is the
challenge of programming a computer to predict the structure of a
given protein from its sequence. In computational terms, the protein
folding problem can be described as a "black box" into which a
string of 20 different letters is fed and a set of X,Y,Z coordinates
for each atom in the protein produced as the output. In this talk, I
will review some of the work from my own group in developing
different protein folding "black boxes", and how these can help in
the annotation of genome sequences.

The Current Opinion of Robots for Landmine Detection, Dr
Chandrasekhar Kambhampathi, University of Hull

Anti-personnel landmines are a significant barrier to economic and
social development in a number of countries. Several sensors have
been developed but each one will probably have to find, if it
exists, a specific area of applicability, determined by
technological as well as economical or even social factors, and
possibly other sensors to work with using some form of sensor
fusion. A significant issue concerns the safety of the deminer. For
every 5000 mines removed, one deminer is killed. The design of an
accurate sensor may reduce the amount of time needed to determine
whether a landmine exists, but does not increase the safety of the
deminer. Since the safety issues during the eradication process are
of great concern, the use and integration of cheap and simple robots
and sensors in humanitarian demining has been raised. This removes
the deminer from close contact with mines. This seminar is concerned
mainly with the work that has been done in the area of robotics and
landmine detection, the problems involved, the issues that have been
overlooked and the future of robots in the field.

Research into dialogue systems has been ongoing for several
decades, often funded and championed by large scale national
programs, such as DARPA Communicator in the US. Despite this,
advances in dialogue research have been small, both in terms of the
usability and performance of such systems.

The community now has access to large scale corpora of dialogues,
annotated both for dialogue act structure and in some cases semantic
content. The use of such corpora allows us to acquire large scale
models of dialogue, and to determine empirically mappings between
incoming user utterances and their structure and content. This in
turn allows dialogue systems to be broader, more robust, and
increasingly conversationally plausible.

This talk will focus on this research at the University of
Sheffield in the context of a number of large scale European funded
grants.

What is the essential difference between stochastic local search
(LS) and systematic backtracking (BT) that often gives LS superior
scalability? Answering this question might lead to new search
algorithms combining advantages of both. I argue that the key is
LS's lack of commitment to any given variable assignment, and I show
that a non-committing (and incomplete) form of backtracking scales
exactly like LS. In fact I claim that this is simply LS applied to a
different space, offering a clean way of combining LS with
techniques such as constraint propagation and relaxation.

Spoken dialogue has the potential to improve the usability of
individual home devices, to allow convenient control of multiple
networked devices, and to enable programming of interactions between
devices. However, current dialogue systems tend to be based on form
filling or making choices from a fixed set of alternatives. For
command and control applications, this kind of dialogue is
particularly unnatural. This talk describes some of the challenges
imposed by the home domain, and some solutions. In particular we
show how the use of a domain model allows for a more easily
reconfigurable dialogue system, which allows flexible interaction
with a user. The talk will conclude with a demonstration of spoken
dialogue control of a simulated home.

Swarms in a Dynamic Environment, Dr Tim Blackwell, University
College London

The search and tracking of optima in dynamic environments is a
difficult problem for evolutionary algorithms due to diversity loss.
However, another population based search technique, particle swarm
optimisation (PSO), is well suited to the dynamic search problem. In
particular, if some or all of the particles are “charged”,
population diversity can be maintained, and good tracking achieved
with a simple algorithm. Charged particle swarms are based on an
electrostatic analogy – inter-particle repulsions enable charged
particles to swarm around a nucleus of neutral particles. This work
extends the difficulty of the search tasks by considering a bi-modal
parabolic environment of high spatial and temporal severity. Two
types of charged swarms and an adapted neutral swarm are compared
for a number of different dynamic environments which include extreme
‘needle-in-the-haystack’ cases. The results suggest that charged
swarms perform best in the extreme cases, but neutral swarms are
better optimisers in milder environments.
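
A minimal sketch of a 'charged' velocity update, with the usual
cognitive and social terms plus a Coulomb-style repulsion between
charged particles, is given below; the constants, shell radii and
exact repulsion law are assumptions for illustration rather than the
parameterisation used in this work.

    import numpy as np

    def charged_pso_step(pos, vel, pbest, gbest, charge, rng,
                         w=0.72, c1=1.49, c2=1.49, r_core=1.0, r_limit=30.0):
        """One synchronous update; particles with non-zero charge repel."""
        n, dim = pos.shape
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        accel = np.zeros_like(pos)
        for i in range(n):
            for j in range(n):
                if i == j or charge[i] * charge[j] == 0.0:
                    continue
                diff = pos[i] - pos[j]
                dist = np.linalg.norm(diff)
                if r_core <= dist <= r_limit:      # repulsion only within a shell
                    accel[i] += charge[i] * charge[j] * diff / dist**3
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos) + accel
        return pos + vel, vel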

The E-MSD, EBI Macromolecular Structure Database for the Protein
Data Bank (PDB), http://www.ebi.ac.uk/msd, was set up in 1996 to
give Europe an autonomous and trusted facility to collect, organise
and make available data about biological macromolecular structures
and to integrate structure with the other biological databases at
the EBI. Since then, the E-MSD group has been working in the
following areas:

- Accepting and processing depositions to the PDB
- Transforming the PDB flat file archive to a relational database
system
- Developing services to search the PDB
- Developing standards for 3D-database information design and models
that describe the conceptual framework in which all data are
collected, stored and disseminated.
- Working in partnership with projects to address an urgent need to
develop an end-user environment that takes advantage of the rapid
development of high-throughput facilities associated with protein
crystallography.

The presentation will give an overview of the A-to-Z approach
adopted by the EBI for the application of data handling techniques,
from the experimentalist through to a final web search system for
macromolecular structures, via the MSD database components: the
relational database and the data warehouse. Both use Oracle as the
database engine and are designed and managed using the Oracle CASE
tool Designer.

This talk concerns the application of some statistical estimation
tools in trying to understand the nature of the combinatorial
landscapes induced by local search methods. One interesting property
of a landscape is the number of optima that are present. I shall
show that it is possible to compute a confidence interval on the
number of independent local searches needed to find all optima. By
extension, this also expresses the confidence that the global
optimum has been found.
In many cases this confidence may be too low to be acceptable, but
it is also possible to estimate the number of optima that exist.
Theoretical analysis and empirical studies are discussed, which show
that it may be possible to obtain a fairly accurate picture of this
property of a combinatorial landscape. The approach will be
illustrated by analysis of an instance of the flowshop scheduling
problem.

Sparse Kernel Methods, Dr Steve Gunn, University of Southampton

A widely acknowledged drawback of many statistical modelling
techniques, commonly used in machine learning, is that the resulting
model is extremely difficult to interpret. A number of new concepts
and algorithms have been introduced by researchers to address this
problem. They focus primarily on determining which inputs are
relevant in predicting the output. This work describes a
transparent, non-linear modelling approach that enables the
constructed predictive models to be visualised, allowing model
validation and assisting in interpretation. The technique combines
the representational advantage of a sparse ANOVA decomposition, with
the good generalisation ability of a kernel machine.

Summarising scientific articles: The use of robust document
analysis, Dr Simone Teufel, University of Cambridge

Most current summarisers still use sentence extraction as their
main paradigm. But this assumes that each sentence carries its
meaning in isolation -- an assumption that is over-simplistic and
brings with it many discourse, syntactic and semantic problems. My
approach to the summarisation of scientific articles (which are
thoroughly different from newspaper articles, the text type which is
mostly used in the summarisation community) uses machine learning to
improve on sentence extraction by adding rhetorical context to each
extracted sentence. Current work concentrates on the task of merging
sentences of the same rhetorical status, e.g. sentences describing
the research goal, into one new, coherent sentence.

In this talk, we will describe two new methods for large-scale
nonconvex quadratic programming. The first is an interior-point
trust-region method, which has proved capable of solving problems
involving up to half a million unknowns and constraints. The second,
an active-set method, has just been completed, and is intended
primarily for the case where a good estimate of the optimal active
set can be predicted. Key linear algebraic aspects of both methods
will be described, as will the results of tests on a large set of
convex and nonconvex quadratic programming examples. Both codes are
available as part of the GALAHAD optimisation software library.
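
For readers unfamiliar with the problem class, the toy instance
below shows a small (indefinite, hence nonconvex) quadratic
programme solved with SciPy's trust-region interior-point method; it
is only a sketch of the formulation, not the GALAHAD codes, and
assumes SciPy is available.

    import numpy as np
    from scipy.optimize import minimize, LinearConstraint, Bounds

    # Minimise 0.5*x'Hx + g'x subject to A x >= b and simple bounds.
    # H is indefinite, so only a local solution can be expected.
    H = np.array([[2.0, 0.0], [0.0, -1.0]])      # indefinite Hessian
    g = np.array([-1.0, 0.5])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])

    objective = lambda x: 0.5 * x @ H @ x + g @ x
    gradient  = lambda x: H @ x + g
    hessian   = lambda x: H

    result = minimize(objective, x0=np.array([0.5, 0.5]),
                      jac=gradient, hess=hessian,
                      method="trust-constr",
                      constraints=[LinearConstraint(A, b, [np.inf])],
                      bounds=Bounds([-2.0, -2.0], [2.0, 2.0]))
    print(result.x, result.fun)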

Generation eXtravert: language production and personality type,
Dr Jon Oberlander, University of Edinburgh

Work within Reeves and Nass's `Computers Are Social Actors'
(CASA)
paradigm suggests (a) that computer users reliably attribute
personality to computer interfaces, even in text-only environments;
and (b) that users apparently prefer interfaces which use language
associated with personality traits matching their own. We have been
studying in greater detail how dimensions like Extraversion or
Introversion influence people's language production. Applying bigram-based
techniques from statistical NLP to a corpus of email has replicated
earlier findings, and uncovered novel patterns of linguistic
behaviour. We discuss implications of these findings for
computational models of language production.
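
The bigram analysis can be illustrated with a small sketch that
compares relative bigram frequencies between two groups of writers;
the mini-corpus and grouping below are invented placeholders, not
the email corpus from the study.

    from collections import Counter
    import re

    def bigrams(text):
        tokens = re.findall(r"[a-z']+", text.lower())
        return list(zip(tokens, tokens[1:]))

    # Hypothetical mini-corpus labelled by the writer's Extraversion score.
    corpus = {
        "high": ["we should all go out tonight it will be great fun",
                 "can't wait to see everyone at the party"],
        "low":  ["perhaps i will stay in and read this evening",
                 "i am not sure whether i really want to go"],
    }

    profiles = {}
    for group, emails in corpus.items():
        counts = Counter(bg for email in emails for bg in bigrams(email))
        total = sum(counts.values())
        profiles[group] = {bg: c / total for bg, c in counts.items()}

    # Bigrams whose relative frequency differs most between the groups.
    all_bigrams = set(profiles["high"]) | set(profiles["low"])
    ranked = sorted(all_bigrams,
                    key=lambda bg: abs(profiles["high"].get(bg, 0.0)
                                       - profiles["low"].get(bg, 0.0)),
                    reverse=True)
    print(ranked[:5])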

An Introduction to Multi-objective Evolutionary
Optimisation, Professor Kalyanmoy Deb, University of Birmingham

Many real-world search and optimisation problems are naturally
posed as non-linear programming problems having multiple objectives.
Due to the lack of suitable solution techniques, such problems are
artificially converted into a single-objective problem and solved.
The difficulty arises because such problems give rise to a set of
Pareto-optimal solutions, instead of a single optimum solution. It
then becomes important to find not just one Pareto-optimal solution
but as many of them as possible.

Classical methods are not efficient because they require
repetitive application to find multiple Pareto-optimal solutions,
and on some occasions repeated application does not guarantee
finding distinct Pareto-optimal solutions. The population approach
of genetic algorithms (GAs) allows an efficient way to find multiple
Pareto-optimal solutions simultaneously in a single run. In this
talk, one implementation of a GA, which uses the non-domination concept,
will be discussed. Simulation results on a number of test problems
and on a couple of engineering design problems will show the
efficacy of the method. Clearly, GAs (or other evolutionary search
methods) have a niche in solving multi-objective optimisation
problems compared to classical methods. This is why multi-objective
optimisation using GAs has received growing attention in the
recent past. Since this is a comparatively new field of research, in
this talk, a number of future challenges in the research of
multi-objective optimisation will also be discussed.
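
The non-domination concept at the heart of such algorithms is easy
to state in code. The sketch below extracts the non-dominated
members of a set of objective vectors for a simple two-objective
problem; it illustrates the concept only and is not Professor Deb's
implementation (which would add non-dominated sorting, crowding and
the genetic operators).

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimisation)."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def non_dominated(population):
        """Return the non-dominated members of a list of objective vectors."""
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

    # Two conflicting objectives: f1 = x^2, f2 = (x - 2)^2.
    xs = [i / 10 for i in range(-10, 40)]
    objectives = [(x * x, (x - 2) ** 2) for x in xs]
    front = non_dominated(objectives)
    print(len(front), "non-dominated points approximate the Pareto front")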

In humans, concepts provide part of the cognitive apparatus
enabling generalization and behavioural flexibility. For example,
when we as adults identify a new game, joke, chair, cat, car, or
vehicle, we are in part using our conceptual skills. Conceptual
skills, in humans, are developed from an early age. In this talk I
consider the possibility of robots that developmentally acquire
skills with concepts. If we can create robots that develop their own
concepts, this may provide better generalization and behavioural
flexibility in these systems. Developmental (or epigenetic) robotics
is a relatively new area with central tenets of programming robots
with only a small set of “innate” skills (e.g., contingency
detection), and using task non-specific developmental algorithms (Weng
et al., 2001). I review some relevant literature and our initial
work in this area.

There has been much recent interest (stimulated principally by
the properties of the 'support vector machine') in 'sparse' machine
learning models. Such models, applicable to both regression and
classification, may be specified with a potentially large initial
set of basis functions, but as a result of some 'training'
procedure, only a fraction of the basis set is ultimately used for
prediction purposes. This talk will outline a practical and highly
effective such learning procedure derived from a Bayesian inference
perspective. I'll review the established 'relevance vector machine'
(a Bayesian counterpart to the support vector machine) as well as
presenting more recent related research, and attempt to convey the
core ideas behind sparse Bayesian learning in general.
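
A minimal sketch of the sparse Bayesian idea, assuming the standard
type-II maximum-likelihood updates for a linear-in-parameters model
(one precision hyperparameter per basis function, most of which grow
very large so that the corresponding weights are pruned). The data
and basis functions are invented, and none of the numerical
refinements of a real relevance vector machine implementation are
included.

    import numpy as np

    def sparse_bayes(Phi, t, n_iter=200, alpha_max=1e8):
        """Type-II ML updates for a sparse Bayesian linear model (RVM-style)."""
        N, M = Phi.shape
        alpha = np.ones(M)            # one precision per basis function
        beta = 1.0                    # noise precision
        for _ in range(n_iter):
            A = np.diag(alpha)
            Sigma = np.linalg.inv(beta * Phi.T @ Phi + A)
            mu = beta * Sigma @ Phi.T @ t
            gamma = 1.0 - alpha * np.diag(Sigma)
            alpha = gamma / (mu ** 2 + 1e-12)
            beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
            alpha = np.minimum(alpha, alpha_max)
        relevant = alpha < alpha_max      # basis functions that survive
        return mu, relevant

    # Toy example: noisy sinc data with Gaussian basis functions on a grid.
    rng = np.random.default_rng(1)
    x = np.linspace(-10, 10, 80)
    t = np.sinc(x / np.pi) + 0.05 * rng.normal(size=x.size)
    centres = x
    Phi = np.exp(-0.5 * (x[:, None] - centres[None, :]) ** 2)
    mu, relevant = sparse_bayes(Phi, t)
    print(f"{relevant.sum()} of {len(centres)} basis functions retained")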

Semistructured data, and in particular XML which can model it,
are fast
becoming perceived as the emerging de facto data format. Query
paradigms for such data are still in their infancy, however.
Various research projects (e.g. Lorel, Ozone) and commercial
standards (e.g. DOM, SAX, XSLT) give mechanisms for extracting
information from such data resources. However all of these require
interpretation over the semantics of the values modelled, which are
presented to the programmer as a tree or graph in all cases. This
allows important classes of errors, normally trapped by type-system
mechanisms, to cause failure at run-time or, even worse, to remain
undetected and give incorrect answers.

Designers of dialogue systems face a number of nontrivial choices
in dialogue strategy, including choices in initiative and
confirmation strategy. We apply Markov decision processes (MDPs) and
reinforcement learning to the problem of automated dialogue strategy
synthesis. An MDP is built from training data gathered from an
"exploratory" system. This provides a model of user reactions to
system actions, which is used to choose the optimal among many
dialogue strategies. We have applied this methodology in the NJFun
spoken dialogue system, yielding experimental results of
statistically significant improvements in system performance.
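
The optimisation step can be illustrated with a toy dialogue MDP and
value iteration; the states, actions, transition probabilities and
rewards below are invented placeholders (in the actual work they are
estimated from logs of the exploratory system), so this is a sketch
of the machinery rather than the NJFun model.

    # A toy dialogue MDP: which prompt style to use in each state.
    states = ["start", "got_value", "done"]

    # P[s][a] = list of (next_state, probability); R[s][a] = expected reward.
    P = {
        "start":     {"open_prompt":     [("got_value", 0.6), ("start", 0.4)],
                      "directed_prompt": [("got_value", 0.8), ("start", 0.2)]},
        "got_value": {"confirm":         [("done", 0.9), ("start", 0.1)]},
        "done":      {},
    }
    R = {"start": {"open_prompt": -1.0, "directed_prompt": -1.5},
         "got_value": {"confirm": -1.0},
         "done": {}}
    terminal_reward = {"done": 10.0}

    def value_iteration(gamma=0.95, n_iter=100):
        V = {s: 0.0 for s in states}
        for _ in range(n_iter):
            for s in states:
                if not P[s]:
                    V[s] = terminal_reward.get(s, 0.0)
                    continue
                V[s] = max(R[s][a]
                           + gamma * sum(p * V[s2] for s2, p in P[s][a])
                           for a in P[s])
        policy = {s: max(P[s], key=lambda a: R[s][a]
                         + gamma * sum(p * V[s2] for s2, p in P[s][a]))
                  for s in states if P[s]}
        return V, policy

    V, policy = value_iteration()
    print(policy)   # the prompt/confirmation action chosen in each state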

Future domestic robots will need to adapt to the special needs of
their users and to their environment. Programming by natural
language will be a key method enabling computer language-naive users
to instruct their robots. This talk describes the design of a
practical system that uses unconstrained speech to teach a
vision-based robot how to navigate in a miniature town. The robot
knows a set of primitive navigation procedures that the user can
refer to when giving route instructions. A particularity of this
project is that the primitive procedures are determined by analysing
a corpus of route instructions. These can be very complex procedures
for the robot. Recognition errors may occur because the lexicon and
the set of primitive procedures are unlikely to be closed. These
issues and the solutions explored are discussed.

Fuzzy models are attracting growing interest for implementing
control and interpretation systems for autonomous mobile robots.
Learning these models improves the design process, and adaptation
makes it possible to install mobile robots in new and changing
environments. Evolutionary and reinforcement learning are paradigms
that fit the needs of this type of application well, but their use
is often left to the crafting ability of the designer, and
methodological support is still lacking. We will present some
applications
developed at the Politecnico di Milano Artificial Intelligence and
Robotics Project, where we are working on the definition of a set of
methodological tools to engineer the learning and adaptation
processes. Applications include both simulated and real robots,
operating in tasks such as prey-predator co-evolution, multi-agent
coordination, adaptation to unknown, dynamical and hostile
environments (Robocup).

This talk will present an overview and first results on a
collaborative project between J.W. Goethe University of Frankfurt
and the University of Essex on computer simulation of pigeon
navigation.

Release experiments with carrier pigeons suggest that pigeons use
environmental gradients as navigational factors to determine the
correct compass course leading to their home loft, and that they
follow that direction using one of their compass mechanisms.

The values of most of the parameters that govern this process -
such as required accuracy of the compass sense, required accuracy of
detecting gradients, "recomputation" rate, etc - are unknown, and
very difficult to assess through experiments with pigeons.

We have therefore built a computer model of pigeon navigation
that allows the principled investigation of the influence of
individual parameters upon navigation efficiency and reliability.

The talk will present the biological basis of the model, and
compare the accuracy of the simulation's predictions with
biological findings.
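
A much-simplified sketch of such a model: the simulated bird
repeatedly estimates its displacement from the loft out of noisy
readings of two locally linear gradients, sets a compass course
home, and flies that course with compass noise, re-estimating every
few steps. The gradient fields, noise levels and recomputation rate
below are arbitrary placeholders, not the parameters of the
Frankfurt/Essex model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two hypothetical environmental gradients, assumed locally linear in
    # position, with known directions of increase.
    G = np.array([[1.0, 0.2],      # gradient 1 increases mostly eastwards
                  [-0.3, 1.0]])    # gradient 2 increases mostly northwards

    def gradient_values(pos, noise):
        return G @ pos + rng.normal(scale=noise, size=2)

    def simulate(start, compass_noise=0.2, sensing_noise=0.1,
                 recompute_every=10, step=1.0, max_steps=2000):
        """Fly home by re-estimating the home bearing from gradient readings."""
        home_values = G @ np.zeros(2)          # gradient values at the loft
        pos = np.array(start, dtype=float)
        heading = 0.0
        for t in range(max_steps):
            if t % recompute_every == 0:
                # Infer displacement from home out of the noisy readings,
                # then set the compass course back towards the loft.
                delta = np.linalg.solve(
                    G, gradient_values(pos, sensing_noise) - home_values)
                heading = np.arctan2(-delta[1], -delta[0])
            noisy = heading + rng.normal(scale=compass_noise)
            pos += step * np.array([np.cos(noisy), np.sin(noisy)])
            if np.linalg.norm(pos) < 2 * step:
                return t                        # steps needed to reach home
        return max_steps

    print(simulate(start=(150.0, -80.0)))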

Is That Your Final Answer? : The Role of Confidence in Question
Answering Systems, Dr Rob Gaizauskas, University of Sheffield

Question answering (QA) has suddenly become a hot topic in
applied natural language research. Driven by the appearance of Web
search engines such as "Ask Jeeves", apparently offering
capabilities beyond those of conventional search engines, and by the
appearance of a QA track in recent Text REtrieval Conference (TREC)
evaluations, QA has moved to centre stage in a remarkably short
time. Once again, QA raises issues of the relative value of
shallower versus deeper models of language when addressing applied
language processing tasks.

In this talk I will discuss the methodology and results of the
TREC QA track evaluations, describe the approaches we have used in
our TREC-8 and TREC-9 QA entries, and end by considering the
heretofore neglected issue of confidence: why it is important for QA
systems to supply confidence measures for the answers they return
and how we go about computing such measures.

TESSA: A Constrained Speech to Sign Language Translation System,
Dr Stephen Cox, University of East Anglia

TESSA is a system that aims to aid transactions between a deaf
person and a clerk in a Post Office. The system uses a speech
recogniser to recognise speech from the Post Office clerk, and then
synthesises the recognised phrase in British Sign Language (BSL)
using a specially developed avatar. The BSL phrases are pre-stored
as parameter files, and can be concatenated to sign variable
quantities such as amounts of money, days of the week etc. This
approach places restrictions on the translating power of the system,
but it is still effective because of the highly formulaic nature of
the transactions in a Post Office. The design, development and
evaluation of TESSA will be described, and recent developments that
allow unconstrained speech input by the clerk will be explained.

Bayesian harmonic models for polyphonic computer music
transcription, Simon Godsill, University of Cambridge

There is an increasing requirement for automatic recognition,
classification and information retrieval from musical audio in
web-based multimedia databases. In this work we study the problem of
multiple pitch estimation for polyphonic music. We develop novel
Bayesian hierarchical models for musical audio signals comprising
several fundamental pitches and their harmonics and incorporate
reasonable Bayesian prior information about musical audio. The
methods are found to be effective for a number of examples, provided
the number of simultaneous sources is not too large, allowing
automatic estimation of pitches, number of notes and number of
harmonics for each note. We predict that the models developed will
find application as the low-level signal model in more complex
Bayesian models which incorporate the long-term time-domain
behaviour of musical signals. We are currently investigating
applications in instrument recognition, transcription and source
separation.
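
A heavily simplified stand-in for such a model is sketched below:
each note contributes a fundamental plus harmonics that are linear
in their amplitudes, and candidate pitch pairs are scored by the
residual of a least-squares fit (rather than by a full Bayesian
treatment with priors, unknown note counts and unknown numbers of
harmonics). All signal parameters are invented.

    import numpy as np

    sr = 8000
    t = np.arange(0, 0.1, 1.0 / sr)

    def harmonic_basis(f0, n_harm=5):
        """Sine/cosine regressors for a fundamental and its harmonics."""
        cols = []
        for h in range(1, n_harm + 1):
            cols += [np.cos(2 * np.pi * h * f0 * t),
                     np.sin(2 * np.pi * h * f0 * t)]
        return np.column_stack(cols)

    # Synthetic two-note chord plus noise (amplitudes chosen arbitrarily).
    rng = np.random.default_rng(0)
    signal = (harmonic_basis(220.0) @ rng.normal(size=10)
              + harmonic_basis(277.2) @ rng.normal(size=10)
              + 0.1 * rng.normal(size=t.size))

    def residual(f0s):
        """Least-squares residual when amplitudes are fitted for given pitches."""
        Phi = np.column_stack([harmonic_basis(f) for f in f0s])
        amps, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
        return np.sum((signal - Phi @ amps) ** 2)

    # Crude exhaustive search over pairs of candidate fundamentals.
    candidates = np.arange(200.0, 320.0, 2.0)
    best = min(((f1, f2) for f1 in candidates for f2 in candidates if f2 > f1),
               key=residual)
    print("estimated pitches:", best)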

The building of intelligent systems for many real-world problems
requires modelling and inference methods that entail not only
computational efficiency and accuracy, but also user
understandability and interpretability. Existing approximate
model-based systems typically overlook the significance of such
requirements. In this talk, I shall introduce some of the recent
work on flexible descriptive modelling and reasoning, based on an
integrated use of rough and fuzzy set theories, that attempts to
address these requirements. To demonstrate the utility of such work
I shall present sample application results, in coping with problems
such as plant monitoring, e-text categorisation, and population
estimation. The talk will end with some thoughts on potential future
developments.

Foundations of Evolutionary Computation, Professor Riccardo Poli,
University of Essex

Since Holland's seminal work in the mid seventies and his well
known schema theorem, schemata have traditionally been used to
explain why genetic algorithms (GAs) and, more recently, genetic
programming (GP) work. However, in the last 7 or 8 years the
usefulness of schemata has been seriously questioned, and the
attention of GA theorists has moved away from schemata and onto
Markov chains.

In this talk, after explaining what schemata and schema theorems
are, I will show why one should be interested in schema theories. I
will do that by presenting some very recent results including an
exact schema theory for GAs and my own schema theory for GP. I will
show how these results can be used in important ways in theoretical
investigations, which are also very relevant for practitioners.

I will also illustrate how the exact schema theory for genetic
programming supersedes almost all previous work on schemata for GAs,
including Holland's schema theorem, thus offering itself as an
important step towards the unification of the theoretical
foundations of evolutionary computation.
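
For readers new to the topic, the sketch below shows what a schema
is and evaluates Holland's classical lower bound on the expected
number of schema instances after fitness-proportionate selection,
one-point crossover and mutation; the exact schema theories
discussed in the talk replace this bound with exact expectations.
The population, fitness function and parameters are invented.

    import random

    def matches(schema, individual):
        """True if the bit-string individual is an instance of the schema."""
        return all(s in ("*", b) for s, b in zip(schema, individual))

    def order(schema):              # number of fixed positions
        return sum(s != "*" for s in schema)

    def defining_length(schema):
        fixed = [i for i, s in enumerate(schema) if s != "*"]
        return fixed[-1] - fixed[0] if fixed else 0

    # Toy population of bit strings with onemax fitness.
    random.seed(0)
    L, N = 10, 50
    pop = ["".join(random.choice("01") for _ in range(L)) for _ in range(N)]
    fitness = lambda ind: ind.count("1")

    schema = "1*********"
    instances = [ind for ind in pop if matches(schema, ind)]
    m = len(instances)
    f_H = sum(map(fitness, instances)) / m
    f_bar = sum(map(fitness, pop)) / N

    # Holland's schema-theorem lower bound on the expected number of
    # instances of the schema in the next generation.
    pc, pm = 0.7, 0.01
    expected = (m * (f_H / f_bar)
                * (1 - pc * defining_length(schema) / (L - 1))
                * (1 - pm) ** order(schema))
    print(m, expected)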

The aim of this talk is to discuss the results of experiments
carried out using various representation frameworks and various
algorithmic methods for modelling and solving the large
combinatorial constrained optimization problems that appear when
managing the mission of Earth observation satellites.

The problem of the management of the mission of Earth observation
satellites will be presented, with all its physical and operational
constraints, especially in the context of the new generation of
agile satellites.

Its nature and its complexity will then be presented, as well as
the various modelling frameworks that can be used to represent it:
Integer Linear Programming, Constraint Programming, Valued
Constraint Satisfaction Problems, Graph Theory.

The various solving methods that have been used to attempt to
solve it will be presented too: Greedy Search, Local Search, Tree
Search, Dynamic Programming.

Finally, we will try to explain the results obtained with these
various modelling frameworks and solving methods and to discuss some
promising directions for research in combinatorial constrained
optimization.

Finite model theory, complexity theory and program schemes and
The MathFIT initiative, Professor Iain Stewart, University of
Leicester

Finite model theory is all about what one can say about classes
of finite structures (such as graphs, strings and so on) using
logic; and computational complexity is all about what one can
compute on finite inputs within given resources. There is a very
strong link between finite model theory and computational complexity
theory (exemplified by Fagin's Theorem that a problem is in NP if
and only if it can be defined in existential second-order logic).
Often, this link is strongest when the finite structures are
(essentially) strings: on arbitrary finite structures, the link
between resource-bounded computation and logical definability is
nowhere near as clear-cut. In this introductory talk, I will
introduce this subject, known as descriptive complexity, and I will
also introduce models of computation, program schemes, for computing
on arbitrary finite structures, and show how a consideration of
these models can lead to new results in finite model theory and
descriptive complexity. The talk will be introductory in nature and
suitable for a general audience.

This talk will describe our progress to date in investigating a
topic fundamental to cognitive robotics, namely the interplay
between high-level reasoning and low-level perception. In
particular, I'll present ongoing work on spatial reasoning and
visual perception for a humanoid robot that has been built at
Imperial College. Issues that will be discussed include active
perception, top-down versus bottom-up processing of sensor data, and
symbol anchoring.

Thorny dialogues: Challenges in interfacing with the nervous
system, Dr Francisco Sepulveda, University of Essex

“Easier said than done”. Few areas of endeavour would be more
prone to comments of this nature than neural interfaces, though it
has not always been this way. The last century saw a tremendous
increase in our knowledge of how the nervous system works. Our
understanding of neural function went from the merely systemic down
to the molecular level, at times giving us the impression that the
precious gray matter could be mastered after all. But, alas, our
enhanced understanding of the nervous system has yet to have a big
impact on our ability to communicate with it. While monitoring and
stimulating nerve cells in vitro is very straightforward and has
been done for more than fifty years, interfacing with the human
nervous system in vivo has proven to be a major challenge. Observed
signal-to-noise ratios are often below unity, signal variability is
high and fast, functional maps drift, and interface selectivity is
still very poor.

In this seminar I will present the state of the art in neural
interfaces. Surface and implanted devices will be addressed with
reference to two-way communication with the peripheral and central
nervous systems. Special emphasis will be given to brain-computer
interfaces.

Formal Methods in Software Design, Dr Amnon Eden, University of
Essex

The construction of software is generally divided into three
levels of abstraction: Architecture, design, and programming. From
the perspective of formal modelling, programming has received most
of the attention, involving various notations and formal languages
(most notably, the lambda calculus and Turing machines). The 1990s
gave rise to the recognition of software architecture as a separate
discipline, and with it to the formalization of architectural
styles. But although techniques and methodologies of software design
have been in wide use since the 1970s, software design has always
been perceived as an art rather than a science, and little formal
work has
been done in this domain.

We focus on the formalization of design techniques, most
prominently of object-oriented design patterns. We present a formal
ontology for the discussion of object-oriented design and for the
specification of patterns and programs at the design level.

Ubiquitous computing is a new paradigm for interactive systems
that moves computers into the background while using them to support
people's activities and interactions. The ubiquitous computing
paradigm is underpinned by the development of components small
enough to be embedded in practically anything, of networks that
provide dense inter-connection of large numbers of objects, and of
sensing technologies that enable systems to become contextually
aware. A fascinating research challenge is to leverage these trends
to build systems and technologies that transform everyday
environments unobtrusively into smart spaces.

This talk will discuss innovative technology concepts and
prototypes that we have developed to explore how we may facilitate
everyday environments with ubiquitous computing technology. The
examples we investigate are the augmentation of ordinary walls as a
network for the things attached to them; embedding and application
of load
sensing in common artefacts such as tables; and rapid prototyping of
networked smart objects with the Smart-Its device platform.

Tony will describe the German Lorenz cipher machine and how a
dreadful mistake by the German operators, in 1941, led to its
secrets being discovered by Bill Tutte in Bletchley Park, without
ever having seen the machine. This led to Colossus being designed by
Tommy Flowers at Dollis Hill, the Post Office Research laboratories.
10 Colossi were eventually built and Tony will describe their effect
on the Allied war effort. He will finish by describing his team's
efforts in rebuilding Colossus and show a short video of it working.
Further information can be found on his web site,
www.codesandciphers.org.uk

Smelling mobile robots, Professor Gurvinder Virk, University of
Leeds

The seminar will consider biologically inspired design of mobile
robots. The focus will be on navigational aspects and how animals
use a variety of techniques to search for food and for mates. These
methods have been shown to be remarkably robust and able to work in
realistic and practical situations. In contrast, man-made
navigational methods, based on a number of AI methods, have failed
miserably to work other than in perfect laboratory conditions. In
view of this it seems useful to turn to nature and see if these
biological methods can be understood and then mapped onto robotic
machines. Much of the work in this area is biased towards vision
systems, which are very compute-intensive. Prof Virk has been
investigating an alternative strategy for the navigation of mobile
robots based on smelling.

Work in this area will be described in which an innovative
smell-based navigational strategy has been developed. The work
covers the fundamental theoretical ideas of how unstable diffusion
fields can be tolerated to produce reliable strategies. The method
uses a biased random walking strategy. Both theoretical simulations
and practical robotic results will be described.
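
The flavour of a smell-guided biased random walk can be conveyed
with a toy simulation: the agent turns gently while the sensed
concentration is rising and tumbles sharply when it falls. The
smooth concentration field and the parameters below are
placeholders; real odour plumes are turbulent and unstable, which is
precisely the difficulty the work addresses.

    import numpy as np

    rng = np.random.default_rng(2)
    source = np.array([50.0, 30.0])

    def concentration(pos):
        """Hypothetical time-averaged odour field: smooth decay from source."""
        return np.exp(-np.linalg.norm(pos - source) ** 2 / 800.0)

    def biased_random_walk(start, steps=3000, step_len=1.0,
                           small_turn=0.3, large_turn=2.5):
        pos = np.array(start, dtype=float)
        heading = rng.uniform(0, 2 * np.pi)
        last_c = concentration(pos)
        path = [pos.copy()]
        for _ in range(steps):
            c = concentration(pos)
            # Bias: turn gently while the signal improves, tumble otherwise.
            spread = small_turn if c >= last_c else large_turn
            heading += rng.normal(scale=spread)
            last_c = c
            pos += step_len * np.array([np.cos(heading), np.sin(heading)])
            path.append(pos.copy())
            if np.linalg.norm(pos - source) < 2.0:
                break
        return np.array(path)

    path = biased_random_walk(start=(0.0, 0.0))
    print(f"reached source region after {len(path) - 1} steps"
          if np.linalg.norm(path[-1] - source) < 2.0
          else "did not reach the source")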

Research at the Bioinformatics Research Centre, University of
Glasgow, Professor David Gilbert, University of Glasgow

In this talk I will describe some current research in the newly
established Bioinformatics Research Centre. Attached to the
Department of Computing Science, this interdisciplinary Centre will
be physically located in the Institute of Biomedical and Life
Sciences from spring 2003, and hosts researchers from both Computing
Science and Life Sciences.

Specifically, I will describe some ongoing funded research
projects, highlighting the computational issues which are
characteristic of data-driven bioinformatics research:

- a graph-based approach to modelling and analysis of protein
structures, involving machine learning over graph data structures
- the use of concurrency theory to simulate and analyse biochemical
networks
- the use of functional genomics for the analysis of hypertension

I will also present some views on the ways in which the GRID may
play a part in enabling an eScience approach to research in
Bioinformatics.

The talk will describe an algorithm for the discrete minimax
problem and an algorithm for the continuous minimax problem. Both
are quasi-Newton algorithms but are radically different in the
construction of the search directions. The convergence properties
will be outlined. Time permitting, the application of these
algorithms to worst-case design problems in macroeconomics, finance
and process systems engineering will also be discussed.
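
For orientation, a discrete minimax problem min_x max_i f_i(x) can
be rewritten in epigraph form and handed to a general constrained
solver, as in the sketch below (using SciPy's SLSQP rather than the
quasi-Newton minimax algorithms of the talk); the functions are
invented and SciPy is assumed to be available.

    import numpy as np
    from scipy.optimize import minimize

    # Discrete minimax: min_x max_i f_i(x), rewritten in epigraph form as
    #   min_{x,t} t   subject to   f_i(x) <= t  for all i.
    fs = [lambda x: (x[0] - 1) ** 2 + x[1] ** 2,
          lambda x: (x[0] + 1) ** 2 + x[1] ** 2,
          lambda x: x[0] ** 2 + (x[1] - 1) ** 2]

    constraints = [{"type": "ineq", "fun": (lambda z, f=f: z[2] - f(z[:2]))}
                   for f in fs]

    result = minimize(lambda z: z[2], x0=np.array([0.0, 0.0, 10.0]),
                      method="SLSQP", constraints=constraints)
    x_opt, worst = result.x[:2], result.x[2]
    print(x_opt, worst, max(f(x_opt) for f in fs))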

The Evolution of Telecommunications Networks, Professor M
O'Mahony, Department of Electronic Systems Engineering, University
of Essex

The past three years have seen exponential growth in the use of
the Internet and mobile telephones. This has led to an enormous demand
for capacity within telecommunications networks. The only way this
can be met economically is through the use of advanced technologies
for transmission and switching based on optical techniques.
Telecommunications was transformed in the 1970s when the inventions
of the semiconductor laser and the optical fibre enabled the
realisation of low cost high capacity transmission. In 2001, it is
likely that optical switching will cause a similar leap in
capabilities. In the longer term optical packet switching will merge
with electronic packet switching to provide a high capacity global
network controlled by the user.

Computational Finance at University of Essex, Professor E Tsang,
Department of Computer Science

This talk gives an overview of the multi-disciplinary research in
computational finance at the University of Essex. I shall explain
the EDDIE architecture and its applications. I shall also explain
our new research direction in adaptive behavioural economics (the
so-called neo-classical economics by Holland's followers).

British Airways has now installed several applications which depend
upon constraint technology. Two successful applications are
"Aircraft Swapper" and "Operations Retimer". Prototype software
running these applications was developed at IC-Parc some years ago.
Since then, an enormous amount of ground has been covered in order
to achieve delivery and acceptance of these two applications. As an
employee of British Airways, I was closely involved in the delivery
phase of these applications. I shall chart the progress of the
delivery phase of the Retimer projects, give some limited background
to the mathematical basis of the constraint and search mechanisms
involved, and demo the product.

Domain constraints are central to many AI based planning systems,
and of relevance to a number of other areas. The majority of current
systems assume that these constraints will be provided by the domain
author. Experience suggests that this can be a difficult and error
prone task as domain constraints typically encode implicit and
obvious information, which is rarely articulated. Recent work has
begun to address the automatic synthesis of domain constraints from
STRIPS-like operators and to extend these techniques to more
expressive operator languages, such as ADL. The talk reviews this
work and discusses potential applications of the ideas both within
and outside planning systems.

Interest in personalisation technologies has grown rapidly in
recent years, inspired by the development of the Web, E-Business and
mobile communications. Profiling a user (or group of users) is an
essential pre-requisite of the personalisation process. The goal of
user profiling can be simply stated as follows: to acquire a model
of a user encoded in some representation which can be utilised to
deliver a targeted (or personalised) service, product or outcome.
Profiling techniques in use at present fall into two broad
categories: content-based and collaborative. Content-based methods
extract information (such as keywords) from items manipulated by a
user and create a profile which links this information with the
user's preferences concerning those items. Collaborative methods
attempt to characterise the preferences of individual users within a
community, and then identify commonalities between users.

Recently, a number of researchers have combined content-based and
collaborative profiling to create hybrid profiling methods. In this
presentation, I shall argue that software agents which aim to offer
personalised services should do so through the application of
machine learning technologies. I will present recent work on systems
such as MAGI, LAW and COBRA, all of which employ learning algorithms
to acquire and adapt user-profiles.
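
A minimal content-based sketch is shown below: keywords are
extracted from items the user liked, aggregated into a profile, and
unseen items are scored against it by cosine similarity; a
collaborative method would instead compare this user's preferences
with those of other users. All items are invented, and this is not
the MAGI, LAW or COBRA code.

    import math
    import re
    from collections import Counter

    def keyword_vector(text):
        tokens = re.findall(r"[a-z]+", text.lower())
        stop = {"the", "a", "and", "of", "to", "in", "is", "for"}
        return Counter(t for t in tokens if t not in stop)

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
        norm = (math.sqrt(sum(c * c for c in u.values()))
                * math.sqrt(sum(c * c for c in v.values())))
        return dot / norm if norm else 0.0

    # Content-based profile: aggregate keywords from items the user liked.
    liked_items = ["constraint programming tutorial for scheduling problems",
                   "new heuristics for the travelling salesman problem"]
    profile = Counter()
    for item in liked_items:
        profile.update(keyword_vector(item))

    # Score unseen items against the profile; highest scores are recommended.
    candidates = ["a survey of local search for scheduling",
                  "celebrity gossip and football results"]
    for item in candidates:
        print(item, "->", round(cosine(profile, keyword_vector(item)), 3))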

One way to do unsupervised machine learning is to build a
probabilistic generative model of the sensory data. I will describe
a Bayesian approach to learning the structure of such models. This
approach resolves the tension between fitting the data well and
keeping the models simple by averaging over all possible settings of
the model parameters. Unfortunately, these averages are usually
intractable, resulting in the need for approximations. I will
present variational approximations, which are deterministic,
generally fast, and have an objective function that is guaranteed
to increase monotonically. Some theoretical results show how
variational optimisation generalises both the EM algorithm and
belief propagation algorithms. This approach can be used (1) to
automatically infer the most probable number of clusters and the
intrinsic dimensionality of each cluster, (2) to find the number of
sources in a blind source separation (ICA) problem, (3) to infer the
state dimensionality of a linear dynamical system, and (4) to
optimise over the structure of a mixture of experts network.
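
Point (1) can be illustrated with an off-the-shelf variational
mixture model: fit far more components than needed and let the
variational objective drive the surplus weights towards zero. The
sketch below uses scikit-learn's BayesianGaussianMixture (assumed to
be installed) on synthetic data; it shows the behaviour, not the
speaker's own software.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Three well-separated Gaussian clusters in 2-D.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2))
                   for c in ([0, 0], [4, 0], [0, 4])])

    # Fit a variational mixture with deliberately too many components; the
    # variational objective prunes the surplus ones by driving their
    # mixing weights towards zero, effectively inferring the cluster count.
    vb = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        max_iter=500, random_state=0).fit(X)
    effective = np.sum(vb.weights_ > 0.01)
    print("effective number of clusters:", effective)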

Bionic Sonar for Feature and Texture Recognition for Robot
Navigation, Dr P Probert, Department of Engineering Science,
University of Oxford

This talk will describe how advances in sonar sensors, based on
the observation of biological systems, enable sonar to provide
reliable, efficient sensing for navigation. It will show how a
physical understanding of the sensor drives the development of a
classifier to recognise different features and textures in the
environment. Examples will be given on the classification of robot
pathways and on typical indoor scenes.

The Active Information Repository, Dr P Watson, Department of
Computer Science, University of Newcastle upon Tyne

In this talk I will describe the design of a scalable Grid node
to support computation on information stored in databases. It is
envisaged that huge amounts of information, both structured and
semi-structured, will be made available on The Grid. Simple
client-server style access to the information server will be
inadequate, as the network connecting the client to the server will
not have the required bandwidth to support computations that may
access petabytes of data. A solution that allows clients to submit
computations to be run close to the data is therefore required.
The Active Information Repository is designed for this. Data is
stored in a scalable object database server and computations, in the
form of agents, can be sent by clients for execution on a scalable
agent execution server that is tightly coupled, through a high
bandwidth network, to the database server. The talk will describe
the rationale for this approach, and the design of the major
components: the parallel database server, and the parallel agent
execution server.

The focus of this seminar will be a speculative list of ten
crucial advances I think we need to make in order to achieve
human-level artificial intelligence, starting with the construction
of simple robots that can reason about their actions, and ending with
metaphor, creativity and original thought. On a more concrete level,
I will present Imperial College's ongoing work on cognitive
robotics, which is a humble beginning to this research programme.

A new neighbourhood search technique is introduced that uses
dynamic programming to search an exponential size neighbourhood in
polynomial time. This technique is called dynasearch. We apply
dynasearch to three types of sequencing problems: the total weighted
tardiness scheduling problem, the linear ordering problem and the
travelling salesman problem. Computational results show that, for
restarts close to previous local minima, dynasearch is significantly
more effective than classical first improve or best improve descent.
As a consequence, our proposed algorithms use dynasearch within an
iterated local search framework, where each descent is started a few
random moves away from previous local minima. Computational results
show that the proposed dynasearch algorithms are more effective than
other known local search heuristics for the total weighted tardiness
problem and for the linear ordering problem, and are competitive
with other methods for the travelling salesman problem.
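
The essence of dynasearch is a dynamic programme over
non-overlapping moves. The sketch below restricts the neighbourhood
to sets of non-overlapping adjacent swaps for a total weighted
tardiness instance, which keeps the recursion short while still
searching an exponentially large set of combined moves in linear
time; the full dynasearch neighbourhood of general pairwise
interchanges follows the same pattern with an extra inner loop. The
instance data are invented.

    def weighted_tardiness(seq, p, w, d):
        """Total weighted tardiness of a job sequence."""
        t, total = 0, 0
        for j in seq:
            t += p[j]
            total += w[j] * max(0, t - d[j])
        return total

    def best_adjacent_swap_set(seq, p, w, d):
        """Dynasearch-style DP (adjacent swaps only): best set of
        non-overlapping adjacent interchanges of the current sequence,
        found in O(n) despite exponentially many candidate sets."""
        n = len(seq)
        csum = [0] * (n + 1)
        for j in range(n):
            csum[j + 1] = csum[j] + p[seq[j]]
        cost = lambda job, C: w[job] * max(0, C - d[job])
        best = [0.0] * (n + 1)
        choice = [False] * (n + 1)      # True if position j ends a swap
        for j in range(1, n + 1):
            best[j] = best[j - 1] + cost(seq[j - 1], csum[j])
            if j >= 2:
                swap = (best[j - 2]
                        + cost(seq[j - 1], csum[j - 2] + p[seq[j - 1]])
                        + cost(seq[j - 2], csum[j]))
                if swap < best[j]:
                    best[j], choice[j] = swap, True
        # Reconstruct the new sequence by applying the chosen swaps.
        new_seq, j = list(seq), n
        while j > 0:
            if choice[j]:
                new_seq[j - 2], new_seq[j - 1] = new_seq[j - 1], new_seq[j - 2]
                j -= 2
            else:
                j -= 1
        return new_seq, best[n]

    # Tiny instance: processing times, weights and due dates per job.
    p = [3, 2, 4, 1]; w = [2, 1, 3, 2]; d = [4, 6, 5, 7]
    seq = [0, 1, 2, 3]
    new_seq, value = best_adjacent_swap_set(seq, p, w, d)
    print(seq, weighted_tardiness(seq, p, w, d), "->", new_seq, value)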

A simple model of the routing problem involves the determination
of "waypoints" between straight line stages of a route between
prescribed start and finish points. An optimum route is one which
has minimum length consistent with the avoidance of no-fly zones and
subject to flyability constraints such as limitations on turn angle.
This problem can be formulated as an optimization calculation which
turns out to have multiple solutions. The objective function, in
practice, is not easily differentiable because it will involve
interpolation in tabulated geographical data. Therefore we need to
consider direct-search methods for global optimization. An algorithm
called DIRECT (Jones et al.) seems a promising contender for this
task, and the talk will describe the application of this, and other,
techniques to some sample problems.
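
A cut-down version of such a formulation is sketched below: the
decision variables are the waypoint coordinates, the objective is
route length plus penalties for entering a circular no-fly zone or
exceeding a turn-angle limit, and a derivative-free global optimiser
is applied. Differential evolution is used here simply because it is
readily available in SciPy (recent SciPy releases also ship an
implementation of DIRECT as scipy.optimize.direct); the zones,
penalties and bounds are invented, and the no-fly test checks only
the waypoints rather than whole route legs.

    import numpy as np
    from scipy.optimize import differential_evolution

    start, finish = np.array([0.0, 0.0]), np.array([10.0, 0.0])
    no_fly = [(np.array([4.0, 0.5]), 1.5),
              (np.array([7.0, -1.0]), 1.0)]    # (centre, radius) pairs
    max_turn = np.radians(60)
    n_waypoints = 3

    def route(z):
        pts = z.reshape(n_waypoints, 2)
        return np.vstack([start, pts, finish])

    def objective(z):
        pts = route(z)
        legs = np.diff(pts, axis=0)
        length = np.sum(np.linalg.norm(legs, axis=1))
        # Penalise waypoints that fall inside a no-fly zone (a crude proxy
        # for full leg/zone intersection tests).
        penalty = 100.0 * sum(max(0.0, r - np.linalg.norm(p - c)) ** 2
                              for p in pts for c, r in no_fly)
        # Penalise turn angles beyond the flyability limit.
        for a, b in zip(legs[:-1], legs[1:]):
            cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)
                                     + 1e-9)
            turn = np.arccos(np.clip(cosang, -1.0, 1.0))
            penalty += 10.0 * max(0.0, turn - max_turn) ** 2
        return length + penalty

    bounds = [(-2.0, 12.0), (-5.0, 5.0)] * n_waypoints
    result = differential_evolution(objective, bounds, seed=0)
    print(route(result.x), result.fun)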

Computational Models of the Emergence of Linguistic Diversity:
Problems using Artificial Life to Answer Real Problems, Dr Daniel
Livingstone, Department of Computing and Information Systems,
University of Paisley

Over the past decade there has been a dramatic rise in interest
in Artificial Life and a coincident resurgence of interest in the
evolution of language and languages. A significant, and growing,
number of researchers are using the former to try to answer a
variety of questions relating to the latter. My own work has
focussed on the emergence and evolution of linguistic diversity,
where debate continues amongst linguists on the causes of and
reasons for language diversity. The models I have developed show
that social function and adaptive benefits are not necessary forces
in linguistic evolution, and these are reviewed. However, other
computational models have results that lead to quite different
conclusions. A real problem for this work is how to resolve the
dispute between such contradictory findings. I conclude by looking
at how I can strengthen my own findings, while reviewing some
weaknesses of the various models mentioned (my own included) and of
the Artificial Life method.

Evolutionary Algorithms for Bus and Train Driver Scheduling, Dr
Raymond Kwan, School of Computer Studies, University of Leeds

At present the most successful systems for bus and train driver
scheduling are based on the set covering model solved by integer
linear programming (ILP) techniques. However, these systems still
rely on heuristics to cut down the solution space in finding
all-integer solutions. Large problems may have to be subdivided,
thereby losing optimality. The search for an all-integer solution
might be terminated prematurely because of practical computational
limits. Meta-heuristic approaches have the potential of yielding a
quicker and more robust method. In particular, Genetic Algorithms and
their derivatives have received much research attention in recent
years. Early research has concentrated on designing a suitable
genetic representation of the problem and the algorithms derived
were relatively simple and straightforward. While the computational
speed was very good and all-integer solutions were always
obtainable, the quality of solutions could not match up with that
obtained by ILP. Recent research has focussed on hybrid algorithms
that incorporate two main strategies. The first strategy is to
exploit strong domain knowledge, e.g. key patterns of work piece
combinations, so that important combinatorial characteristics in the
population members can be identified and utilised in the
evolutionary process. The second strategy is to fuzzify the criteria
in assessing the fitness of the population members, so that the
potential of intermediate solutions as stepping stones could be
better evaluated. Experiments and results based on real-life problem
data will be presented.
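
The underlying set covering model, and the kind of penalised fitness
function a GA might use, can be sketched as follows; the pieces of
work, candidate shifts and costs are invented, and the greedy
heuristic is shown only as a baseline, not as part of the systems
described in the talk.

    import random

    # Tiny set-covering instance: pieces of work 0..7 and candidate driver
    # shifts, each covering a set of pieces at a given cost (all invented).
    pieces = set(range(8))
    shifts = {0: ({0, 1, 2}, 3.0), 1: ({2, 3, 4}, 3.0), 2: ({4, 5}, 2.0),
              3: ({5, 6, 7}, 3.0), 4: ({0, 3, 6}, 3.5), 5: ({1, 4, 7}, 3.5)}

    def greedy_cover():
        """Classical greedy heuristic: pick the most cost-effective shift."""
        uncovered, chosen = set(pieces), []
        while uncovered:
            s = min((s for s in shifts if shifts[s][0] & uncovered),
                    key=lambda s: shifts[s][1] / len(shifts[s][0] & uncovered))
            chosen.append(s)
            uncovered -= shifts[s][0]
        return chosen

    def ga_fitness(chromosome, penalty=100.0):
        """GA fitness: cost of selected shifts plus a penalty for any piece
        of work left uncovered (lower is better)."""
        selected = [s for s, bit in zip(shifts, chromosome) if bit]
        covered = (set().union(*(shifts[s][0] for s in selected))
                   if selected else set())
        cost = sum(shifts[s][1] for s in selected)
        return cost + penalty * len(pieces - covered)

    print("greedy cover:", greedy_cover())
    print("fitness of a random chromosome:",
          ga_fitness([random.randint(0, 1) for _ in shifts]))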

One can distinguish two major approaches to automated theorem
proving, machine-orientated methods like the resolution method and
human-orientated methods. Proof planning as first introduced by Alan
Bundy belongs to the second category. It tries to model the
cognitive ability of humans to perform plausible reasoning, that is,
to master huge search spaces by making good guesses. The process of
plausible reasoning is viewed as a planning process; its strength
heavily relies on the availability of domain specific problem
solving knowledge.

In the talk, I want to address mainly the following questions: what
is proof planning, what are successful applications of it, what are
major open problems, and how can they be addressed.

Solving Non-Boolean Satisfiability Problems, Dr A Frisch,
University of York

State-of-the-art procedures for determining the satisfiability of
formulas of Boolean logic have developed to the point where they can
solve problem instances that are large enough to be of practical
importance. Many of the problems on which satisfiability procedures
have been effective are non-Boolean in that they are most naturally
formulated in terms of variables with domain sizes greater than two.
Boolean satisfiability procedures have been applied to these
non-Boolean problems by reformulating the problem with Boolean
variables. An alternative, previously unexplored, approach is to
generalise satisfiability procedures to handle non-Boolean formulas
directly. This talk compares these two approaches. We begin by
presenting two procedures that we have developed for solving
non-Boolean formulas directly. We also present three methods for
systematically transforming non-Boolean formulas to Boolean
formulas. Finally, experimental results are reported on using all
these methods to solve random formulas, graph colouring problems and
planning problems.
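
One of the standard transformations, the direct (one-hot) encoding,
is sketched below: each variable/value pair becomes a Boolean
variable, with clauses forcing exactly one value per variable, after
which non-Boolean constraints become ordinary clauses. This
illustrates the general idea only; the talk compares several such
encodings with procedures that handle non-Boolean formulas natively.

    from itertools import combinations

    def direct_encode(variables):
        """Direct (one-hot) Boolean encoding of finite-domain variables.
        Returns a map (var, value) -> Boolean literal and the CNF clauses
        forcing exactly one value per variable (DIMACS-style literals)."""
        lit, clauses, next_id = {}, [], 1
        for var, domain in variables.items():
            for value in domain:
                lit[(var, value)] = next_id
                next_id += 1
            ids = [lit[(var, v)] for v in domain]
            clauses.append(ids)                           # at least one value
            clauses += [[-a, -b]                          # at most one value
                        for a, b in combinations(ids, 2)]
        return lit, clauses

    # Toy non-Boolean problem: colour two adjacent nodes with 3 colours.
    variables = {"x": ["r", "g", "b"], "y": ["r", "g", "b"]}
    lit, clauses = direct_encode(variables)
    # The non-Boolean constraint x != y becomes one clause per colour.
    for colour in ["r", "g", "b"]:
        clauses.append([-lit[("x", colour)], -lit[("y", colour)]])
    print(len(lit), "Boolean variables,", len(clauses), "clauses")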

A Proof Planner based on Methodicals and Methodical
Continuations, Dr Julian Richardson, Computing and Electrical
Engineering, Heriot Watt University

LambdaClam is a higher-order proof planner developed at the
University of Edinburgh. Planning strategies are hierarchical,
defined using methodicals, which are analogues of tacticals in
tactic-based theorem proving. We can define a semantics for
methodical expressions which is naturally interpreted as a
depth-first planning algorithm. This is efficient, but there is no
notion of proof state - when planning the subgoals of a goal, each
child has no access to information held in the other children. There
are several reasons, notably for providing a user interface, and for
implementing proof plan critics, why we need to be able to access the
whole proof state during planning. I define transformation rules,
which allow methodical expressions to be converted to an equivalent
form consisting of an atomic method to apply, and a continuation to
apply afterwards. I will describe how the continuation mechanism
provided by these transformation rules has been used to define a
partial proof plan data structure and a single-stepping planner.
This is done in a declarative way, which allows us to retain the
advantages of hierarchical proof planning, and of LambdaProlog's
built-in mechanisms for handling variable binding and goal
implication. I have proved that the new planning algorithm is
equivalent to the standard planning algorithm. The partial proof
plan data structure and single-stepping planner are both implemented
in LambdaClam. They provide the infrastructure for a user interface,
and for proof plan critics.