
… David was staring out of the window. “Teddy, you know what I was thinking? How do you tell what are real things from what aren’t real things?” The bear shuffled its alternatives. “Real things are good.”… David started to draw a jumbo jet on the back of his letter. “You and I are real, Teddy, aren’t we?” The bear’s eyes regarded the boy unflinchingly. “You and I are real, David.” It specialised in comfort.

– Brian Aldiss, Supertoys Last All Summer Long, 1969

Speaking in 2001 of his trilogy of stories, Supertoys, Brian Aldiss described the central character, an android boy named David, as having been programmed “to believe himself to be a real boy, and to love his adopted human mother” as part of a larger narrative that is “more about love and the inability to love than the progress of computer science”.1 The subject of emotion in the relationship between humans and machines has long been a source of fascination. Indeed, the idea of inanimate matter endowed with the gift of life is as old as myth and fairy tale itself. Within the discourse of Artificial Intelligence, however, a relatively recent area of research has emerged, referred to as affective computing, which specifically aspires to the design of mechanical systems that can recognise, interpret and process human emotions.2 This interdisciplinary field spans computer science, psychology and cognitive science. In the broadest sense, it aims to answer a basic question: ‘Will we succeed in building robots that think and feel like we do?’

When the Czech playwright Karel Capek wrote R.U.R. in 1921, a play about humanoid machines that turn against their creators, he decided to call his imaginary creations ‘robots’, from the Czech word for ‘slave labour’.3 Ever since then our thinking about robots has been dominated by a ‘master-servant’ relationship, i.e. robots are designed to do the mundane or difficult jobs we would prefer not to do but, at the same time, they are potentially dangerous. This mindless quasi-proletariat, which can outperform humans at most tasks and yet express no emotions, might one day rise up against its master. It was the Russian-American science fiction writer Isaac Asimov who attempted to emancipate robots from this downtrodden role by imagining a future in which intelligent machines might care for our children and even strike up reciprocal friendships with us.4 In the Three Laws of Robotics, Asimov sets out a charter designed to protect humans from their fears of being overpowered:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In recent times, however, the issue of human relations with intelligent machines has increasingly shifted away from an anxiety about power and domination towards the fantasy of reciprocal, emotional interaction. Of course, any ‘affective machine’ that is designed to interact with a human would need to be able to recognise expressions of emotion and to express its own quasi-emotions in ways that humans can recognise.5 However, this could amount to little more than a simulation of emotion, a superficial mimicry. The question arises as to whether we simply want to replicate the appearance of human affect in machines, as a kind of sophisticated toy, or whether the fantasy of robots developing their own independent ‘feelings’ could somehow be nurtured through an organic process, much as an infant’s subjectivity emerges through a gradual awareness of its own distinctiveness in a world of ‘object relations’.6

The Object Relations school of psychoanalysis, building on the ideas of Melanie Klein, placed an emphasis upon the primary experience of the infant’s relationship with the mother in the formation of a psychological subject. Rather than seeing the human being as simply a system of instinctual, biological drives, Object Relations theory places relationship at the very heart of what it is to be human, foregrounding the emerging subject’s development in relation to others, or ‘objects’, in the external world of shared reality. Emotions, such as love and hate, happiness or sadness, grow out of the subject’s original experience of being merged-in and then separated out from the mother, ranging from the oceanic sense of unbounded joy to the raging fury at unsatisfied desire or the all-consuming terror of annihilation by an overpowering maternal environment. If affective machines were to develop an equivalent to such embodied emotions, they would need to undergo an equivalent process, gradually developing their own subjectivity in relation to an external, object world.

Concerning the start of this journey towards independence, the child psychoanalyst Donald Winnicott famously declared that “there is no such thing as a baby”,7 by which he meant that wherever one finds an infant one also finds someone providing care for it, without which the child could not exist. The infant is completely enveloped within and conditional upon this holding environment (the mother), within which it can only gradually develop a sense of itself as an autonomous being. In the beginning the infant has no sense of separation at all. It feels completely merged in with the mother-environment, and this is accompanied by an illusion of omnipotence. The baby is all-powerful, the centre of existence. When it feels hungry it cries and, from the infant’s perception, the breast magically appears. It is as if its very desire had made it appear, as if it had actually created the breast itself. To the baby, in this primary state of omnipotence, the breast is a subjective thing, a part of ‘me’, over which it has total power.

However, the infant progressively becomes aware of ‘not-me’ aspects of reality, beyond its control, when the breast is not immediately available. As the mother gradually adapts her behaviour so as to disillusion the infant of its omnipotent powers, the schism between inner and outer worlds becomes increasingly undeniable and the baby has to begin to recognise the otherness of the mother, and thus its own separate being. As a way of accommodating this disillusionment, a third area of experiencing emerges, which Winnicott described as a potential space, “the hypothetical area that exists (but cannot exist) between the baby and the object (mother or part of mother) during the phase of the repudiation of the object as not me, that is at the end of being merged in with the object.”8 This intermediate area is at the interplay between there being nothing but ‘me’ and there being objects outside omnipotent control.

In this third intermediate area of experience, co-existing between inner reality and the outside world, the infant can still exercise its feelings of subjective omnipotence whilst trying out its newfound relationship with the object world of shared reality. Through creative play, initially with its first ‘not-me’ possession, or ‘transitional object’ (typically something soft, such as a blanket, a piece of cloth or a teddy bear), the infant can explore, and sometimes playfully deny, the separateness and relationships between self, mother (the breast) and the world. Paradoxically, the child’s experience of this special object is as both ‘me’ and ‘not-me’. The transitional object never comes under magical or ‘omnipotent’ control like an internal object or fantasy, but nor is it outside the infant’s control like the real mother (or world). It is “always on the theoretical line between the subjective and that which is objectively perceived”,9 occupying an intermediate area “between the thumb and the teddy bear”,10 between ‘me’ and ‘not-me’, as a bridge between inner and outer worlds.

According to Winnicott, the task of accepting objective reality is never completed: “no human being is free from the strain of relating inner and outer reality”.11 There is a direct continuity between the child’s use of play and the adult’s later use of culture. The relief from the strain of reconciling internal and external worlds, he maintained, is provided by the continuance of an intermediate area of creative living; the potential space is reproduced between child and family, and between individual and society or the world. Thus the potential space becomes the location of cultural experience itself. Here the individual can explore the interplay between self and the world, and can create imaginative transformations of the world, not as mere fantasy, but as cultural products. “It is in the space between inner and outer world, which is also the space between people; the transitional space; that intimate relationships and creativity occur.”12 The experience of the small child who is ‘lost in play’ is retained in “the intense experience that belongs to the arts and to religion and to imaginative living.”13

Whether affective machines are to develop merely as ever more sophisticated ‘super-toys’, occupying the potential space of creative play as reciprocating transitional objects, or whether they are to go further and start to develop their own subjective emotions through genuine ‘object relations’, the power relations between humans and robots will need to be renegotiated. Perhaps the replication of human feelings in a machine is really an excuse to liberate ourselves from the responsibility of reciprocal relationships, with super-toys becoming the fetish objects of our own projected desires? Or perhaps, rather than asking whether machines can ‘feel’, the real question should be ‘are we machines?’

One of the pioneers of ‘affective computing’, Cynthia Breazeal, a roboticist at the Massachusetts Institute of Technology, has built what is described as an ‘emotionally expressive humanoid head’ called Kismet. This anthropomorphic machine has moveable eyelids, eyes and lips that allow ‘him’ to make a variety of emotion-like expressions. When left alone, Kismet looks sad, but when he detects a human face he smiles, inviting attention. If the carer moves too fast, a look of fear warns that something is wrong. Humans who play with Kismet cannot help but respond sympathetically to these simple forms of emotional behaviour.

Another example, Feelix Growing, is a research project involving six countries and 25 roboticists, developmental psychologists and neuroscientists, which aims to build robots that “learn from humans and respond in a socially and emotionally appropriate manner” (Dr. Lola Canamero, University of Hertfordshire).