
CECIL: Complex Emotions in Communication, Interaction and Language

Funding: ANR | 2008 - 2012

Project leader: Sylvie Pesty

This project concerns new-generation interaction systems in which the human user is at the core of the interaction. These systems are designed to be believable, i.e. not only trustworthy and honest, but also capable of giving an illusion of life. Several studies show that such systems can only be designed by integrating advanced emotion processing into the system, so as to endow it with the capabilities to understand and adapt to the user's emotions, to reason about the user's emotions, to plan its actions while anticipating their effects on the user's emotions, and to express its own emotions. These capabilities are necessary prerequisites for a system that interacts with the user in a natural way. Emotions have been debated over the last twenty to thirty years in several disciplines such as psychology, philosophy, cognitive science and economics. More recently, computer science has started to focus on emotions. Several computational models of the role of emotions in cognition and of the expression of affective content have been proposed (e.g. the Affective Computing project at MIT or Kansei Information Processing in Japan). This research has formed the basis for the development of several prototypes, such as Embodied Conversational Agents (ECAs), used in different services (e.g. game platforms, simulators, tutoring agents, robotic assistants, etc.).

In the domain of emotion expression and emotion recognition, research has focused on anthropomorphic systems that interact with a human user in a multi-modal way. Nevertheless, current systems only consider a few basic emotions such as joy, sadness, fear and surprise, without considering more complex emotions such as regret, guilt, envy and shame. Several theoretical works (commonly called "appraisal theories") show that complex emotions (which are typical of humans) are closely related to epistemic attitudes (beliefs, predictions, expectations, ...) and motivational attitudes (goals, desires, intentions, ...). Logic is well suited to support such reasoning, and several logics with well-understood properties exist for representing these attitudes.

The aim of this project is the study of complex emotions, i.e. emotions based on counterfactual reasoning and on reasoning about norms, responsibility, power and abilities.

As a first step, we will investigate and formalize such emotions (regret, shame, guilt, jealousy, reproach, ...) in order to provide unambiguous definitions that an agent can use in its reasoning.
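As a purely illustrative sketch of what such a definition could look like (the operators Bel_i, Goal_i and Done_i are assumptions of this example, not the project's actual formalism), regret might be decomposed into belief, goal and counterfactual components along these lines:

```latex
% Illustrative only. Bel_i = belief of agent i, Goal_i = goal of agent i,
% Done_i(a) = agent i has just performed action a.
\mathrm{Regret}_i(a,\varphi) \;\triangleq\;
   \mathrm{Bel}_i\,\varphi
   \;\wedge\; \mathrm{Goal}_i\,\neg\varphi
   \;\wedge\; \mathrm{Bel}_i\bigl(\neg\mathrm{Done}_i(a) \Rightarrow \neg\varphi\bigr)
```

Here the third conjunct carries the counterfactual dimension: the agent believes that, had it not performed a, the undesired state of affairs φ would not hold.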

As a second step, we will exploit these definitions in order to specify and implement a library of expressive speech acts, i.e. the acts used to express feelings and emotions.
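To give a concrete sense of what such a library could look like in code, here is a minimal sketch; the act names, the fields, and the sincerity conditions are invented for the example and are not the project's actual library.

```python
from dataclasses import dataclass

# Illustrative sketch only: act names, emotion labels and conditions
# are hypothetical, not the CECIL library.

@dataclass(frozen=True)
class ExpressiveAct:
    name: str               # e.g. "apologize", "thank"
    expressed_emotion: str  # emotion the act conventionally expresses
    precondition: str       # mental-state condition for a sincere performance

LIBRARY = {
    "apologize": ExpressiveAct(
        "apologize", "regret",
        "speaker believes it caused harm and desires it had not"),
    "thank": ExpressiveAct(
        "thank", "gratitude",
        "speaker believes the hearer intentionally benefited it"),
    "congratulate": ExpressiveAct(
        "congratulate", "joy-for-other",
        "speaker believes an event desirable for the hearer occurred"),
}

def sincere(act_name: str, speaker_emotions: set[str]) -> bool:
    """An expressive act is sincere when the speaker actually has the
    emotion the act expresses (a simplification of the sincerity condition)."""
    return LIBRARY[act_name].expressed_emotion in speaker_emotions
```

For instance, `sincere("apologize", {"regret"})` holds, whereas thanking while feeling only regret does not satisfy the sincerity condition.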

As a third step, we will implement in an embodied conversational agent (ECA) a planning module that takes into account the emotions of the system and the emotions the system ascribes to the user. Once the system has chosen an action to perform (either physical or communicative), it will compute the information to express through the different modalities (facial expressions, bodily movements, language) and communicate it in a multi-modal way, possibly including an emotional content in its message.

Main Objectives:

This project has three main objectives:

to develop a logical model of complex emotions (see below);

to develop a logical model of expressive speech acts and to define a corresponding implemented library of these acts;

to implement a virtual agent capable of reasoning strategically about the expression of emotions, of anticipating the effects of its actions on the user's emotions, and of exploiting several modalities (face, gestures, language) to communicate its emotions.

The first objective concerns a logical formalization of complex emotions, viz. those involving dimensions such as counterfactuals (e.g. regret), causality, responsibility and norms (pride, guilt), and power and abilities (envy, pride). These dimensions have been largely neglected in computer science up to now. The logical model will be based on the integration of several existing formalisms, in which each emotion will be defined from more elementary mental ingredients such as beliefs and goals.

The second objective concerns the development of a logical model of expressive speech acts (apologizing, thanking, regretting, lauding, congratulating) and of an implemented library of these acts. Each expressive act will be formally defined in terms of the emotions its performance is supposed to express.
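As an illustration of the style such definitions could take (the notation below, loosely inspired by agent communication languages, and the operators Regret_i and Bel_j are assumptions of this example, not the project's actual model), an act of apologizing might be characterized by a sincerity precondition and an intended effect:

```latex
% Illustrative only: i = speaker, j = hearer, \varphi = the regretted fact.
\langle i,\ \mathrm{apologize}(j,\varphi) \rangle :
\quad \text{sincerity precondition: } \mathrm{Regret}_i(\varphi)
\qquad \text{intended effect: } \mathrm{Bel}_j\,\mathrm{Regret}_i(\varphi)
```

That is, a sincere apology requires the speaker to actually regret φ, and its intended effect is that the hearer comes to believe the speaker regrets φ.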

The third objective of the project concerns emotion, planning and multi-modality, in particular the implementation of a virtual agent capable of: anticipating the effects of its actions on the user's emotions, and selecting the right plan according to its current goal and its beliefs about the user's emotions; planning the performance of a certain expressive speech act as a means of achieving a certain goal; exploiting each of its modalities in order to select the appropriate ones for performing a certain expressive speech act. This requires knowing which output to produce for each selected modality.
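The first capability, anticipating emotional effects in order to select an action, can be sketched very schematically; the candidate acts, the numeric effect model and the scoring rule below are invented for the example and are not the CECIL planner.

```python
# Illustrative sketch: predicted effect of each candidate act on the
# user's emotional state, as a simple valence shift per emotion
# (negative = the act is predicted to reduce that emotion's intensity).
PREDICTED_EFFECT = {
    "apologize": {"anger": -0.6, "sadness": -0.1},
    "explain":   {"anger": -0.2, "confusion": -0.5},
    "joke":      {"anger": +0.3, "sadness": -0.4},
}

def select_act(user_emotions: dict[str, float]) -> str:
    """Pick the act whose predicted effects most reduce the user's
    negative emotions, weighted by their current intensity."""
    def score(act: str) -> float:
        # Intensity-weighted reduction: a strongly negative effect on a
        # strong emotion yields a high score.
        return -sum(user_emotions.get(e, 0.0) * delta
                    for e, delta in PREDICTED_EFFECT[act].items())
    return max(PREDICTED_EFFECT, key=score)
```

With an angry user (`{"anger": 0.8}`) this toy model selects "apologize"; with a confused one (`{"confusion": 0.9}`) it selects "explain".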

Workpackages:

The project CECIL is organized in three workpackages:

The first workpackage will focus on complex emotions such as regret, jealousy, envy, pride, guilt, remorse, reproach, shame, admiration and embarrassment. We will start with an overview of the relevant concepts involved in complex emotions: counterfactuals, responsibility, power and abilities, norms and ideals. Then, we will propose a general logical framework that enables reasoning about these concepts. This logical framework will be exploited for the development of a logical model of complex emotions.

The second workpackage will focus on emotion and communication. We will develop a general logical model of expressive speech acts, that is, those acts by which a speaker communicates his or her attitudes and emotions to the hearer. The logical model of expressive speech acts will provide the theoretical foundations for the development of a library of expressive speech acts to be implemented in a computational system.

The third workpackage will focus on the multi-modal expression of emotion. Its aim is to implement an embodied conversational agent capable of: anticipating the effects of its actions on the emotions of the human user; planning the performance of a certain expressive speech act as a means of achieving a certain goal, and managing the display of its emotions; exploiting each of its verbal and non-verbal modalities (gestures, facial expressions, language) in order to perform a certain expressive speech act.
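The last capability, distributing an expressive act over the available modalities, can be sketched as a simple lookup; the modality labels and outputs below are invented for the example, and the actual ECA would use its own behaviour-specification formats.

```python
# Illustrative sketch: hypothetical outputs per modality for two
# expressive acts (labels are invented, not Greta's actual repertoire).
MODALITY_MAP = {
    "apologize": {
        "language": "I am truly sorry.",
        "face": "sad_brows",       # hypothetical facial-expression label
        "gesture": "open_palms",   # hypothetical gesture label
    },
    "congratulate": {
        "language": "Congratulations!",
        "face": "smile",
        "gesture": "thumbs_up",
    },
}

def realize(act: str, available: set[str]) -> dict[str, str]:
    """Select, for a given expressive act, the output to produce on each
    modality the agent currently has available."""
    return {m: out for m, out in MODALITY_MAP[act].items() if m in available}
```

For example, an agent whose gesture channel is unavailable would realize an apology through language and facial expression only.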

Results:

The main results of the project will take the form of scientific publications in international conferences and journals, and of an implemented embodied conversational agent (ECA). The publications produced during the project will concern the following topics: a conceptual analysis of complex emotions; a logical framework supporting counterfactual reasoning and reasoning about agents' mental states, responsibility, power and norms; a logical model of complex emotions; a logical model of expressive speech acts; an evaluation study aimed at extracting the facial expressions associated with complex emotions; and an implemented ECA capable of reasoning strategically about the expression of emotions, of anticipating the effects of its expressions on the user's emotions, of managing the display of its emotions, and of exploiting several modalities (face, gestures, language) to communicate its emotions. The planning module of the ECA is intended to be implemented in the Jade Semantic Agent (JSA) framework (or an equivalent), whereas the module for multi-modal expression of emotions will be implemented in the agent system Greta.