The development of robots that closely resemble human beings can contribute to cognitive research. An android provides an experimental apparatus that has the potential to be controlled more precisely than any human actor. However, preliminary results indicate that only very humanlike devices can elicit the broad range of responses that people typically direct toward each other. Conversely, to build androids capable of emulating human behavior, it is necessary to investigate social activity in detail and to develop models of the cognitive mechanisms that support this activity. Because of the reciprocal relationship between android development and the exploration of social mechanisms, it is necessary to establish the field of android science. Androids could be a key testing ground for social, cognitive, and neuroscientific theories, as well as a platform for their eventual unification. Nevertheless, subtle flaws in appearance and movement can be more apparent and eerie in very humanlike robots. This uncanny phenomenon may be symptomatic of entities that elicit our model of a human other but do not measure up to it. If so, very humanlike robots may provide the best means of pinpointing what kinds of behavior are perceived as human, since deviations from human norms are more obvious in them than in more mechanical-looking robots. In pursuing this line of inquiry, it is essential to identify the mechanisms involved in evaluations of human likeness. One hypothesis is that, by playing on an innate fear of death, an uncanny robot elicits culturally supported defense responses for coping with death’s inevitability. An experiment, which borrows from methods used in terror management research, was performed to test this hypothesis. [Thomson Reuters Essential Science Indicators: Fast Breaking Paper in Social Sciences, May 2008]

This paper reports our research on social robots that recognize interpersonal relationships. These investigations are carried out by observing group behaviors while the robot interacts with people. Our humanoid robot interacts with children by speaking and making various gestures. It identifies individual children using a wireless tag system, which promotes interaction such as the robot calling a child by name. Accordingly, the robot is capable of interacting with many children, eliciting spontaneous group behavior among the children around it. Here, group behavior reflects social relationships among the children themselves. For example, a child may be accompanied by his or her friends and play together with them. We propose the hypothesis that our interactive robot prompts a child’s friends to accompany him or her; thus, we can estimate their friendships simply by observing these accompanying behaviors. We conducted a two-week field experiment in a Japanese elementary school to verify this hypothesis. In the experiment, two “Robovie” robots were placed where children could freely interact with them during recesses. We found that the robots mostly prompted friend-accompanying behavior. Moreover, we could estimate some of the friendly relationships, in particular among the children who often appeared around the robots. For example, we could estimate 5% of all friendships with 80% accuracy, and 15% of them with nearly 50% accuracy. This result thus basically supports our hypothesis on friendship estimation by an interactive humanoid robot. We believe that the ability to estimate human relationships is essential for robots to behave socially.
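The estimation idea above — pairs of children who repeatedly appear together around the robot are likely friends — can be sketched as a simple co-occurrence counter with a threshold. This is a minimal illustration only; the function name, the episode data structure, and the fixed threshold are assumptions, not the paper’s actual estimator.

```python
from collections import Counter
from itertools import combinations

def estimate_friendships(episodes, min_cooccurrence=3):
    """Predict friendship pairs from co-occurrence near the robot.

    episodes: list of sets, each holding the wireless-tag IDs of
    children observed together in one interaction episode.
    A pair observed together at least `min_cooccurrence` times is
    predicted to be friends (hypothetical threshold rule).
    """
    counts = Counter()
    for group in episodes:
        # Count every unordered pair present in this episode.
        for pair in combinations(sorted(group), 2):
            counts[pair] += 1
    return {pair for pair, n in counts.items() if n >= min_cooccurrence}

# Toy example: children "a" and "b" repeatedly show up together.
episodes = [{"a", "b"}, {"a", "b", "c"}, {"a", "b"}, {"c", "d"}]
print(estimate_friendships(episodes))  # {('a', 'b')}
```

Precision/recall trade-offs like the 5%-at-80% figure reported above would correspond to sweeping the threshold: a higher `min_cooccurrence` predicts fewer pairs with higher accuracy.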

This study examined preschool children’s reasoning about and behavioral interactions with one of the most advanced robotic pets currently on the retail market, Sony’s robotic dog AIBO. Eighty children, equally divided between two age groups, 34–50 months and 58–74 months, participated in individual sessions with two artifacts: AIBO and a stuffed dog. Evaluation and justification results showed similarities in children’s reasoning across artifacts. In contrast, children engaged more often in apprehensive behavior and attempts at reciprocity with AIBO, and more often mistreated the stuffed dog and endowed it with animation. Discussion focuses on how robotic pets, as representative of an emerging technological genre, may be (a) blurring foundational ontological categories, and (b) impacting children’s social and moral development.

A great deal of research has recently been performed on robots that feature functions for communicating with humans in daily life, i.e., communication robots. We consider it important to develop methods to measure the attitudes and emotions that may prevent humans from interacting with communication robots, as indices for studying short-term and long-term human–robot interaction. This study explores the influence of negative attitudes toward robots, focusing on applications of communication robots to daily-life services. First, a scale of negative attitudes toward robots, consisting of three subscales, “negative attitudes toward situations of interaction with robots,” “negative attitudes toward the social influence of robots,” and “negative attitudes toward emotions in interaction with robots,” was developed based on a data sample of 263 Japanese university students. This scale was then administered to 240 Japanese university students to confirm its validity and reliability. In this paper, we report the results of analyses of these data samples. Moreover, we discuss some remaining issues, including a cross-national comparison of attitudes toward robots.
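A questionnaire of this kind is typically scored by averaging the Likert-type item ratings belonging to each subscale. The sketch below shows that scoring step only; the item-to-subscale assignments and item IDs are hypothetical placeholders, not the published key of this scale.

```python
def subscale_scores(responses, subscales):
    """Average Likert ratings (e.g., 1-5) into per-subscale scores.

    responses: dict mapping item ID -> numeric rating.
    subscales: dict mapping subscale name -> list of item IDs.
    Item groupings used here are illustrative assumptions.
    """
    return {
        name: sum(responses[item] for item in items) / len(items)
        for name, items in subscales.items()
    }

# Hypothetical key: which items feed which of the three subscales.
subscales = {
    "interaction_situations": ["S1", "S2"],  # situations of interaction with robots
    "social_influence":       ["S3", "S4"],  # social influence of robots
    "emotions":               ["S5"],        # emotions in interaction with robots
}
responses = {"S1": 4, "S2": 2, "S3": 5, "S4": 3, "S5": 1}
print(subscale_scores(responses, subscales))
# {'interaction_situations': 3.0, 'social_influence': 4.0, 'emotions': 1.0}
```

Validity and reliability checks such as those reported above would then be run on these per-respondent subscale scores (e.g., internal-consistency coefficients per subscale).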

A mobile service robot performing a task for its user(s) might not be able to accomplish its mission without help from other people in the shared environment. In previous research, collaborative control has been studied as an interactive mode of operation that compensates for a robot’s limitations in autonomy. However, few studies have examined, in real-world use contexts, robots that request assistance by detecting potential collaborators, directing their attention to them, addressing them, and finally obtaining help from them. This study focuses on a fetch-and-carry robot, Cero, designed to operate in an office environment as an aid for motion-impaired users. During its missions, the robot sometimes needs help with loading or unloading an object. The main question for the study was: under what conditions are people willing to help when requested to do so by the robot? We were particularly interested in bystanders, i.e., people who happened to be in the environment but who did not “have any official business” with the robot (they neither knew anything about the robot nor had access rights to it or its functions). To answer this question and to better understand human–robot help-seeking situations, we conducted an experimental study in which subjects who had not encountered our service robot before were asked to assist it with a task. The results confirm that bystanders can, to some degree, be expected to help in robot missions, but that their willingness depends on the situation and on how occupied they are when requested to interact with and assist the robot.

This article studies the impact of a robot’s appearance on interactions involving four children with autism. This work is part of the Aurora project, whose overall aim is to support interaction skills in children with autism, using robots as ‘interactive toys’ that can encourage and mediate interactions. Following an approach commonly adopted in assistive robotics, we worked with a small group of children with autism. This article investigates which robot appearances are suitable for encouraging interactions between a robot and children with autism. The children’s levels of interaction with and response to different appearances of two types of robots are compared: a small humanoid doll, and a life-sized ‘Theatrical Robot’ (a mime artist behaving like a robot). The small humanoid robot appeared either as a human-like ‘pretty doll’ or as a ‘robot’ with plain features. The Theatrical Robot was presented either as an ordinary human, or with plain clothing and a featureless, masked face. The results of these trials clearly indicate that, in their initial responses, the children preferred interacting with the plain, featureless robots over the human-like ones. In the case of the life-size Theatrical Robot, the children’s response to the plain/robotic version was notably more social and proactive. Implications of these results for our work on using robots as assistive technology for children with autism, and their possible use in autism research, are discussed.