Social cognition

Participatory sense-making

Research on social cognition needs to overcome a disciplinary fragmentation. On the one hand, in cognitive science and philosophy of mind – even in recent embodied approaches – the explanatory weight still rests largely on individual capacities. In social science, on the other hand, the investigation of the interaction process and of interactional behaviour is rarely brought to bear on individual aspects of social cognition. Keeping these approaches apart has unduly limited the range of possible explanations of social understanding to the postulation of complex or mysterious internal mechanisms (e.g., contingency detection modules, mirror neurons).

[Figure: The unjustified departure point of traditional social cognition research.]

Together with Hanne De Jaegher, we have extended the enactive concept of sense-making into the social domain. This approach takes as its departure point the process of interaction between individuals in a social encounter. It is a well-established finding that individuals can, and generally do, coordinate their movements and utterances in such situations. We argue that the interaction process itself can take on a form of autonomy (operationally defined). This allows us to reframe the problem of social cognition as that of how meaning is generated and transformed in the interplay between the unfolding interaction process and the individuals engaged in it. Sense-making in this realm becomes participatory sense-making. The notion defines a spectrum of participation, from simpler cases of orientation of individual sense-making to joint sense-making (exemplified in acts that can only be completed socially, such as handing over an object). The onus of social understanding thus no longer falls on the individual alone.

Modelling perceptual crossing

The combined use of minimalism in empirical studies and in evolutionary robotics modelling can lead to a fruitful interaction between experimental and synthetic approaches. We have built models of an experiment on perceptual crossing in human participants interacting through a tactile-feedback device; the models not only verify the explanation originally advanced for the results but also predict certain statistical properties of the behavioural strategies used by the participants.

In this experiment, conducted by a team at the University of Compiègne (and recently replicated at Sussex), participants interact by means of a tactile-feedback device while moving a computer mouse left and right. This movement controls the position of a sensor in a virtual 1D space where fixed or moving objects may be encountered. Touching an object with the virtual sensor produces a tap on the finger, and this is the only sensory information available to participants. They are told that one mobile object is controlled by the other person and are asked to click the mouse whenever they think they are in contact with that object. The task is made non-trivial by the presence of static objects and of a shadow object that moves exactly like the other person's sensor. In spite of this objective ambiguity, participants are able to click most frequently when in direct contact with the other person's sensor and not the shadow.
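The ambiguity at the heart of the task can be sketched in a few lines of Python. This is a schematic illustration, not the Compiègne software: the object positions, the shadow offset, and the sensor radius are made-up values.

```python
def taps(sensor_pos, objects, radius=1.0):
    """Names of the objects the virtual sensor currently overlaps."""
    return [name for name, pos in objects.items() if abs(sensor_pos - pos) < radius]

SHADOW_OFFSET = 15.0  # assumed fixed displacement of the shadow

def world(partner_pos):
    """One static lure, the partner's sensor, and its shadow."""
    return {
        "static": 30.0,                         # fixed object
        "partner": partner_pos,                 # the other person's sensor
        "shadow": partner_pos + SHADOW_OFFSET,  # moves exactly like the partner
    }

def tap(sensor_pos, partner_pos):
    """All the participant ever feels: one identical bit for every object."""
    return len(taps(sensor_pos, world(partner_pos))) > 0
```

A single tap carries no identity information: `tap` returns the same signal whether the sensor is over the static object, the shadow, or the partner, which is why discrimination can only come from the history of the interaction.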

Together with Marieke Rohde, we have evolved agents that are able to perform this same task (left), confirming the collective mechanisms proposed by the Compiègne team. In the figure, red and black trajectories indicate the receptive fields of agents 1 and 2; the other lines indicate the positions of their shadows.

The simulation model makes a counter-intuitive prediction. While agents can easily discriminate between the other's sensor and its shadow (even though both move in exactly the same way), they have more difficulty distinguishing the other agent from a static object. The plots at the right and below show two agents interacting (top) and one agent interacting with a static object (bottom). Their movements, patterns of crossing, and sensorimotor variables are very similar, although the agent eventually moves on after scanning the fixed object for a while. How is this discrimination achieved?

Detailed measures show that the neural controllers rely on the duration of stimulation during a crossing (i.e., on the apparent size of the scanned object). All objects in the world have the same objective size, but because the two agents cross in coordination at the same speed, scanning another agent produces a stimulation half as long as scanning a fixed object. This is effectively a socially-induced modulation of the individual's perception. The agents have evolved to distinguish between the two conditions by coordinating their perceptual activities so as to induce a subjective distinction between two objects that are objectively the same size.
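The half-duration effect is simple relative-speed arithmetic, sketched below with illustrative numbers (the width and speed are arbitrary):

```python
def stimulation_duration(obj_width, sensor_speed, obj_speed):
    """Time the sensor spends over an object of a given width,
    given the relative speed between sensor and object."""
    return obj_width / abs(sensor_speed - obj_speed)

width, v = 2.0, 1.0
static_scan = stimulation_duration(width, v, 0.0)   # fixed object: relative speed v
partner_scan = stimulation_duration(width, v, -v)   # partner sweeping the opposite way: 2v
assert partner_scan == static_scan / 2              # half the apparent size
```

Because the partner is moving toward the sensor at the same speed, the relative speed doubles and the stimulation lasts half as long, even though both objects are the same size.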

These results predict, counter-intuitively, that humans will be likely to signal with a mouse click after scanning a fixed object (probably believing they have scanned the other participant). This prediction has been confirmed by the team in Compiègne. The model has recently been extended to 2D spaces and to other tasks.

Rohde, M. and Di Paolo, E. A. (2008). Embodiment and perceptual crossing in 2D: A comparative evolutionary robotics study. In From Animals to Animats 10: Proceedings of the Tenth International Conference on the Simulation of Adaptive Behavior, Osaka, Japan, July 7-10, 2008.

Modelling agency detection

The ongoing mutuality of influences during social engagement is a key property of the interaction process, but its dynamical characteristics have not been sufficiently investigated. Important empirical evidence points to the central role of dynamic mutuality, or contingency, in sustaining and shaping several aspects of an ongoing interaction. This is clearly shown in the perceptual crossing experiments above and in Murray and Trevarthen's double TV-monitor experiments with infants. In these experiments, a mother and her baby are placed in separate rooms and allowed to interact only through video screens that display their faces to each other. During live interaction, mother and infant engage in coordinated utterances and affective expressions. However, if a delayed video recording of the mother is displayed to the baby, the baby becomes withdrawn and depressed. This shows that, for the baby to sustain the interaction, it is not sufficient that the mother's expressive actions be displayed on the monitor; the mother must react 'live' to the baby's own motions for the interaction to continue. So far, the cognitive mechanisms proposed to explain this result rely entirely on computational modules, internal to the infant, that detect social contingency. These explanations ignore the possibility that the collective dynamics of the interaction could produce similar results under more parsimonious assumptions.

We have produced a minimal model of contingent agency detection inspired by Murray and Trevarthen's experiments, with different versions developed by Hiro Iizuka and Tom Froese. Two agents moving left and right must interact by crossing their single sensors several times (top). However, if presented with a recording of a previous interaction (middle), the live agent must move away from it. Agents that can perform this task evolve successfully. Looking at their relative positions during a live interaction (bottom), the full line shows that the agents maintain a constant engagement; if one agent interacts with a recording (dashed line), however, it eventually moves away from it.
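The structural difference between live interaction and playback can be captured by a toy version of the setup. This is not the evolved neurocontroller model: the scanning ranges, speeds, and sensor radius are arbitrary, and the sketch only shows what playback breaks, namely that a recording, unlike a live partner, does not respond to the other agent at all.

```python
def sensor(a, b, radius=1.5):
    """Tactile sensor: fires only while the two positions overlap."""
    return abs(a - b) < radius

def sweep(pos, vel, stim, lo, hi):
    """Scan back and forth; reverse on a crossing or at the edge of the range."""
    if stim or not (lo <= pos + vel <= hi):
        vel = -vel
    return pos + vel, vel

def run(start1=0.0, live=True, recording=None, steps=60):
    """Two agents scanning overlapping ranges; agent 2 may be a playback."""
    x1, v1 = start1, 1.0
    x2, v2 = 10.0, -1.0
    traj1, traj2 = [x1], [x2]
    for t in range(steps):
        s1 = sensor(x1, x2)   # both stimuli computed from the current state
        s2 = sensor(x2, x1)
        x1, v1 = sweep(x1, v1, s1, 0.0, 6.0)
        if live:
            x2, v2 = sweep(x2, v2, s2, 4.0, 10.0)
        else:
            x2 = recording[t + 1]   # non-contingent playback: ignores agent 1
        traj1.append(x1)
        traj2.append(x2)
    return traj1, traj2

rec = run()[1]  # record agent 2 during a live interaction, then replay it
assert run(0.0, live=False, recording=rec)[1] == run(1.0, live=False, recording=rec)[1]
assert run(0.0)[1] != run(1.0)[1]
```

The assertions make the contingency explicit: in playback mode agent 2's trajectory is identical no matter what agent 1 does, whereas a live partner's trajectory changes when agent 1's behaviour changes. The sketch does not reproduce the evolved withdrawal behaviour itself, only the coupling structure that makes it detectable.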

An analysis of the neural and collective dynamics during interaction shows that the pattern of rhythmic crossing (and, consequently, of sensor stimulation) keeps the two neurocontrollers in an oscillatory transient and prevents them from falling into asymptotic point attractors (marked in red, top left). If the size of the pattern of stimulation is artificially changed, it can be shown that the bottom agent becomes progressively unable to sustain the oscillatory transient (regions going from blue to red to yellow, bottom left). The pattern of stimulation is stable to perturbations in the case of double feedback (live interaction), but it becomes unstable in the non-contingent case, and the agent eventually slides into a fixed behaviour of moving away from the interaction. Recently we have extended this result to agents selected only for live interaction: they also show withdrawal when presented with previous recordings, even though they have never been explicitly selected for it.
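The role of rhythmic stimulation can be illustrated with the simplest possible "controller", a single leaky integrator standing in for the evolved neurocontrollers (the time constant and input frequency are arbitrary): while oscillatory input lasts, the unit is held in a transient; when it stops, activity decays into the point attractor at the origin.

```python
import math

def leaky_integrator(inputs, tau=5.0, dt=0.1):
    """Euler integration of tau * dx/dt = -x + I(t), starting from x = 0."""
    x, traj = 0.0, []
    for I in inputs:
        x += dt * (-x + I) / tau
        traj.append(x)
    return traj

# rhythmic 'crossing' stimulation for 3000 steps, then withdrawn
drive = [math.sin(0.05 * t) for t in range(3000)] + [0.0] * 2000
traj = leaky_integrator(drive)

sustained = max(abs(x) for x in traj[2000:3000])  # held in the oscillatory transient
settled = abs(traj[-1])                           # collapsed onto the point attractor
```

In this sketch `sustained` stays well above zero while the drive lasts, and `settled` is essentially zero after it stops: without ongoing stimulation the dynamics fall into the fixed point, the caricature of what happens to the agent in the non-contingent condition.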

What is the importance of these results? They help expand the domain of possible explanations of social cognition to include explanations that rely on the collective dynamics of mutual and ongoing interaction. It may well be the case that specific empirical findings are explained by a combination of social and individual factors, which must be disentangled experimentally. But showing that relatively 'dumb' agents qualitatively replicate a result obtained in human infants opens up the possibility that such dynamical factors also play a significant role in these more sophisticated cases. Postulating individual, skull-bound mechanisms becomes an option, not a requirement, once social interaction dynamics are well understood.

Froese, T. and Di Paolo, E. A. (2008). Stability of coordination requires mutuality of interaction in a model of embodied agents. In From Animals to Animats 10: Proceedings of the Tenth International Conference on the Simulation of Adaptive Behavior, Osaka, Japan, July 7-10, 2008.