About this book

The design of cognitive systems for assisting people poses a major challenge to the fields of robotics and artificial intelligence. The Cognitive Systems for Cognitive Assistance (CoSy) project was organized to address the issues of i) theoretical progress on the design of cognitive systems, ii) methods for implementation of systems, and iii) empirical studies to further understand the use of, and interaction with, such systems. To study, design and deploy cognitive systems, one must consider aspects of systems design, embodiment, perception, planning and error recovery, spatial insertion, knowledge acquisition and machine learning, dialog design, human-robot interaction, and systems integration. The CoSy project addressed all of these aspects over a period of four years and across two different domains of application: exploration of space and task/knowledge acquisition for manipulation. The present volume documents the results of the CoSy project. The CoSy project was funded by the European Commission as part of the Cognitive Systems Program within the 6th Framework Program.

Table of Contents

Frontmatter

Introduction

Frontmatter

The CoSy project was set up under the assumption that the visionary FP6 objective

“To construct physically instantiated ... systems that can perceive, understand ... and interact with their environment, and evolve in order to achieve human-like performance in activities requiring context-(situation and task) specific knowledge”

is far beyond the state of the art and will remain so for many years. From this vision several intermediate targets were defined. Achieving these targets would provide a launch pad for further work on the long-term vision.

Component Science

Frontmatter

The study of architectures to support intelligent behaviour is certainly the broadest, and arguably one of the most ill-defined, enterprises in AI and Cognitive Science. The basic scientific question we seek to answer is: “What are the trade-offs between the different ways that intelligent systems might be structured?” These trade-offs depend in large part on the kinds of tasks and environments a system operates in (its niche space), and also on which aspects of the design space we deem to be architectural. In CoSy we have tried to answer that question in several ways. First, by thinking about the requirements on architectures that arise from our particular scenarios (parts of niche space). Second, by building systems that follow well-defined architectural rules, and using these systems to carry out experiments on variations of those rules. Third, by using the insights from system building to improve our understanding of the trade-offs between different architectural choices, i.e. between different partial designs. Our objective in CoSy has not been to come up with just another robot architecture, but instead to try to make some small steps forward in a new science of architectures.

This chapter presents an algorithm implementing a basic requirement for any interface between sensory data and cognitive functions: dimensionality reduction. The algorithm extends the classical framework of dimensionality reduction to the case where sensory data are acquired through an embodied agent, by grounding the metric that is at the basis of the dimensionality reduction in the sensorimotor abilities of the agent. The final objective (which was not realized because of time constraints) within CoSy was to build on this to provide a demonstration of some basic unsupervised learning of interactions with space and objects, as would be required in an explorer-type scenario.
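To make the idea concrete, the classical framework being extended can be illustrated with classical multidimensional scaling (MDS), which embeds points so that Euclidean distances in a low-dimensional space approximate a given pairwise distance matrix. This is only an illustrative sketch of the classical baseline, not the CoSy algorithm itself; in the chapter's setting, the distance matrix would be grounded in the agent's sensorimotor abilities rather than the hypothetical hand-written numbers used below.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed n points in k dimensions so that Euclidean
    distances in the embedding approximate the symmetric n x n
    pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)      # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]     # take the top-k eigenpairs
    scale = np.sqrt(np.maximum(eigvals[order], 0.0))
    return eigvecs[:, order] * scale          # n x k coordinate matrix

# Toy distances between four "sensorimotor states" (illustrative numbers);
# these happen to be consistent with four points on a line.
D = np.array([[0., 1., 2., 3.],
              [1., 0., 1., 2.],
              [2., 1., 0., 1.],
              [3., 2., 1., 0.]])
X = classical_mds(D, k=1)   # recovers the 1-D layout up to sign and shift
```

Grounding the metric in the agent's embodiment would amount to replacing `D` with distances derived from sensorimotor experience, leaving the reduction step itself unchanged.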

The ability to recognize and categorize entities in its environment is a vital competence of any cognitive system. Reasoning about the current state of the world, assessing consequences of possible actions, as well as planning future episodes requires a concept of the roles that objects and places may possibly play. For example, objects afford to be used in specific ways, and places are usually devoted to certain activities. The ability to represent and infer these roles, or, more generally, categories, from sensory observations of the world, is an important constituent of a cognitive system’s perceptual processing (Section 1.3 elaborates on this with a very visual example).

A cornerstone for robotic assistants is their understanding of the space they are to operate in: an environment built by people for people to live and work in. The research questions addressed in this chapter concern spatial understanding and its connection to acting and interacting in indoor environments. Comparing the way robots typically perceive and represent the world with findings from cognitive psychology about how humans do it, a large discrepancy is evident. If robots are to understand humans and vice versa, robots need to use the same concepts to refer to things and phenomena as a person would. Bridging the gap between human and robot spatial representations is thus of paramount importance.

The capacity for planful behavior is one of the major characteristics of an intelligent agent. When acting in realistic environments, however, reasoning about how to achieve one’s goals is complicated significantly, both by the agent’s limited perception and by the dynamic nature of the environment, especially when other intelligent agents are present. Fortunately, when acting continuously in such an environment, agents can actively try to reduce their uncertainties, for example by deliberative exploration, cooperation with others, and monitoring of failures.
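The core of this uncertainty-reduction idea can be sketched with a minimal Bayesian belief update. This is purely illustrative and not the CoSy planner: the rooms, sensor readings, and likelihood values below are all hypothetical. An agent unsure which of three rooms contains its target maintains a probability distribution over them and sharpens it with each noisy observation gathered through exploration.

```python
def update_belief(belief, likelihoods):
    """Bayes rule: posterior is proportional to prior times likelihood,
    then renormalised to sum to one."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

belief = [1 / 3, 1 / 3, 1 / 3]   # initially no idea which room
# A noisy sensor reading that weakly favours room 0 (hypothetical values):
belief = update_belief(belief, [0.7, 0.2, 0.1])
# A second, independent reading, again favouring room 0:
belief = update_belief(belief, [0.6, 0.3, 0.1])
# After two consistent observations the belief concentrates on room 0.
```

Deliberative exploration then amounts to choosing the next observation that is expected to sharpen this distribution the most.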

The main topic of this chapter is learning, more specifically, multimodal learning.

In biological systems, learning occurs in various forms and at various developmental stages facilitating adaptation to the ever changing environment. Learning is also one of the most fundamental capabilities of an artificial cognitive system, thus significant efforts have been dedicated in CoSy to researching a variety of issues related to it.

In CoSy, our robots were to be able to interact with humans. These interactions served to help the robot learn more about its environment, or to plan and carry out actions. For a robot to make sense of such dialogues, it needs to understand how a dialogue can relate to, and refer to, “the world” – local visuo-spatial scenes, as in the PlayMate scenario (Chapter 9), or the spatial organization of an indoor environment, as in the Explorer scenario (Chapter 10).

Integration and Systems

Frontmatter

Research in CoSy was scenario driven. Two scenarios were created, the PlayMate and the Explorer. One of the integration goals of the project was to build integrated systems that addressed the tasks in these two scenarios. This chapter concerns the integrated system for the PlayMate scenario.

In the Explorer scenario we deal with the problems of modeling space, acting in this space, and reasoning about it. Compared with the motivating example in Section 1.3, the Explorer scenario focuses on issues related to the second bullet in the example. The setting is that of Fido moving around in an environment that is initially unknown (Fido was just unpacked from the box), large scale (it is a whole house, so the sensors cannot perceive everything from one spot), and inhabited by humans (the owners of Fido and possible visitors). These humans can be both users and bystanders. The version of Fido that we work with in the Explorer scenario can move around, but its interaction with the environment is limited to non-physical interaction such as “talking”. The main sensors of the system are a laser scanner and a camera mounted on a pan-tilt unit, enabling Fido to look around by turning its “neck”. Figure 10.1 shows a typical situation from the Explorer scenario.

From the very start the CoSy project set out to demonstrate and evaluate its progress in implemented, integrated systems. Chapters 9 & 10 set out both the two scenarios we chose to integrate around, and the contributions we made by studying problems following an integrative, rather than isolationist, methodology. However, these contributions did not come without a cost. Following an integrated systems methodology (and therefore delivering a genuinely integrated project) demands a large investment of person hours, a demand which is regularly underestimated in the planning phase (both of whole projects and of development cycles). In CoSy we put an extremely large amount of time and effort into the “integration process.” At some point or other, almost everyone associated with the project wrote code that was used in a demonstrator system. From undergraduates and masters students, to postgrads and postdocs, up to PIs and other faculty members, we all bought into the collective ingenuity or insanity required to produce a state-of-the-art intelligent robot. It is rare that so many people from so many different disciplines work together to integrate at this scale. Whilst many of us had built integrated systems before (some of which could do more within a single domain), none of us had worked to put so much from so many different fields into a single system.

Nick Hawes, Michael Zillich, Patric Jensfelt

Summary and Outlook

Frontmatter

This chapter reports work done mostly by one member of the team: a philosopher with substantial AI programming experience, whose primary interests were in the very long term goals of the project summarised in Chapter 1, including the goal of shedding light on problems solved by biological evolution. He was not directly involved in the coding, but interacted closely with people who were, and with people outside the project in several related disciplines. The majority of the work reported here concerns requirements, and the gaps between those requirements and the current state of the art in AI/Robotics and related disciplines. A key feature of this work is its emphasis on studying aspects of the 3-D environment that we and other animals inhabit, with which a Fido-like intelligent domestic robot (described in Chapter 1) would need to interact. This is an essential part of a strategy for developing a roadmap to bridge the gaps in the long term.

The CoSy project had a very ambitious set of goals from the outset. The effort was driven forward by the scientific and technical goals outlined in Chapter 1. Early in the project, integration efforts were undertaken to ensure that both component and systems issues could be addressed. After only 12 months, early demonstrators were already available for empirical studies of cognitive systems. Obviously, at that stage, the systems were brittle and of limited functionality, but they framed the problems well.