Theory of mind

A theory of mind in a robot is a model of a user's (or another agent's) mental and emotional state, built by measuring affective state from sensor input, such as facial expression or tone of voice. In this way an agent may attempt to form an empathic relationship with a human. Such a model must include the user's:

Emotional state

Goals

Intentions

Beliefs

Personality
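The components listed above can be collected into a single data structure. The sketch below is a minimal, illustrative representation (the field names, value types, and blending rate are assumptions, not a standard model); the update method shows one simple way to fold new affective evidence from a sensor into the current estimate.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Illustrative theory-of-mind model of a single user."""
    emotional_state: dict = field(default_factory=dict)   # e.g. {"joy": 0.7}
    goals: list = field(default_factory=list)
    intentions: list = field(default_factory=list)
    beliefs: dict = field(default_factory=dict)           # proposition -> believed?
    personality: dict = field(default_factory=dict)       # trait -> score in [0, 1]

    def update_emotion(self, emotion: str, evidence: float, rate: float = 0.3) -> None:
        """Blend new sensor evidence (e.g. from facial expression or tone
        of voice) into the current estimate with an exponential moving average."""
        current = self.emotional_state.get(emotion, 0.0)
        self.emotional_state[emotion] = (1 - rate) * current + rate * evidence
```

Repeated calls to `update_emotion` let the estimate track the user gradually rather than jumping on each noisy reading.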

Memory and Adaptation

A companion needs to adapt and evolve based on past experiences. In particular, it must adapt to a user's personality so that the match improves over time.
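One simple way to realise this gradual adaptation is to nudge the companion's own trait values toward the user's observed traits a little on each interaction. The sketch below assumes traits are scores in [0, 1]; the trait names and learning rate are illustrative, not from the source.

```python
def adapt_traits(companion: dict, observed: dict, rate: float = 0.1) -> dict:
    """Move each companion trait a fraction of the way toward the
    corresponding observed user trait (exponential moving average)."""
    return {
        trait: companion.get(trait, 0.5) + rate * (value - companion.get(trait, 0.5))
        for trait, value in observed.items()
    }
```

With a small `rate`, a single unusual session barely shifts the model, while a consistent pattern over many sessions moves it decisively.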

Socially intelligent agents

Human intelligence includes both:

Efficient problem solving

Social and emotional intelligence

Socially intelligent agents need to have the appearance of social understanding and also recognise other agents or humans in order to establish relationships with them.

This social information is part of a robot's environment, and is sensed through its understanding of the outside world.

*The expectation of social interaction from an agent depends on its form: a human form is more natural, but also creates very high expectations, which cannot currently be met.*

Castelfranchi's ontology for social interaction

Goal-oriented agents: the actions the agent takes in the world (in other words, its behaviour) are aimed at producing some result.

Interference and dependence: sociality implies that there must be interference among the actions and goals of the agents, i.e., the effects of the actions of one agent are relevant for the goals of another.

Mindreading: representation of both beliefs and goals of the minds of other agents. This concept is frequently mentioned as Theory of Mind.

Coordination: when two agents coordinate their behaviour without influencing each other and without any explicit communication.

Delegation: an agent x needs or wants an action from agent y and includes it in its own plan. Delegation (or relying on another agent) presupposes trust among the agents.

Goals about others' actions/goals: besides having beliefs about other agents' goals, agents can also have goals about the minds of others. For instance, one agent might have the goal of changing something in another agent's mind.

Social goal adoption: when the mind of an agent x changes because of a goal of another agent y; in other words, when agent y succeeds in the task of changing agent x's mind.

Joint action: when two or more agents (socially) commit to each other and adopt the same goal.

Social structures and organisation: when a group of agents, with different goals and limited abilities and resources, interact in the same environment, a dependence structure emerges. Such emergence is what makes social goals evolve or be derived.
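The emergence of a dependence structure can be made concrete: agent x depends on agent y for a goal g when y has an ability that achieves g and x does not. The sketch below is an illustrative formalisation under that assumption; the agent names, goals, and abilities in the test are invented examples.

```python
def dependence_structure(abilities: dict, goals: dict) -> set:
    """Return (x, y, g) triples meaning: agent x depends on agent y
    to achieve goal g, given each agent's abilities and goals."""
    deps = set()
    for x, x_goals in goals.items():
        for g in x_goals:
            if g in abilities.get(x, set()):
                continue  # x can achieve g alone: no dependence arises
            for y, y_abilities in abilities.items():
                if y != x and g in y_abilities:
                    deps.add((x, y, g))
    return deps
```

Running this over a population of agents with mixed goals and limited abilities yields exactly the kind of dependence network from which, in Castelfranchi's account, social goals can evolve or be derived.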

Establishing relationships might be easy: for instance, it is quite possible for people to have relationships of a sort with inanimate objects. Maintaining and developing a relationship is harder.

Social Attraction

Heider's Balance Theory

The theory states that people tend to avoid unstable cognitive configurations. For instance, if agent A knows and likes agent B, and both are aware of and have positive feelings towards object C, then there is balance; similarly if both have negative feelings towards object C. If they disagree, there is imbalance, which can be resolved from agent A's perspective by one of three steps:

Agent A switches to dislike of agent B

Agent A decides to change its mind and agree with agent B about object C

Agent A attempts to change agent B's mind, in order to make B agree about object C

The last option takes more work than the other two, and so there is also a concept of cost for maintaining social balance.
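The balance condition has a compact form: encoding each relation as +1 (positive) or -1 (negative), a triad is balanced exactly when the product of the three signs is positive. The sketch below implements that check and, for an imbalanced triad, lists A's three resolution options; the wording of the options is paraphrased from the list above.

```python
def is_balanced(a_b: int, a_c: int, b_c: int) -> bool:
    """Heider triad check: balanced iff the product of the
    three relation signs (+1 or -1) is positive."""
    return a_b * a_c * b_c > 0

def resolutions(a_b: int, a_c: int, b_c: int) -> list:
    """From A's perspective, the ways to restore balance
    (empty if the triad is already balanced)."""
    if is_balanced(a_b, a_c, b_c):
        return []
    return [
        "A switches its attitude toward B",
        "A changes its mind about C to agree with B",
        "A tries to change B's mind about C",
    ]
```

Note that the product rule reproduces all four balanced cases: three positive relations, or exactly one positive relation and two negative ones.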

Examples of Socially Intelligent Agents

Embodied conversational agents - these use face-to-face conversation in an attempt to simplify human-computer communication.