World War II research into cryptography and computing produced methods, instruments and research communities that informed early research into artificial intelligence (AI) and semi-autonomous computing. Alan Turing and Claude Shannon in particular adapted this research into early theories and demonstrations of AI based on computers’ abilities to track, predict and compete with opponents. This formed a loosely bound collection of techniques, paradigms, and practices I call crypto-intelligence. Subsequent researchers such as Joseph Weizenbaum adapted crypto-intelligence but also reproduced aspects of its antagonistic precepts. This was particularly true in the design and testing of chat bots. Here the ability to trick, fool, and deceive human and machine opponents was at a premium, and practices of agent abuse were admired and rewarded. Recognizing the historical genesis of this particular variety of abuse can help researchers develop less antagonistic methodologies.

Robots have been introduced into our society, but their social role is still unclear. A critical issue is whether the robot’s exhibition of intelligent behaviour leads to the users’ perception of the robot as a social actor, similar to the way in which people treat computers and media as social actors. The first experiment mimicked Stanley Milgram’s obedience experiment, but with a robot as the victim. The participants were asked to administer electric shocks to a robot, and the results show that people have fewer concerns about abusing robots than about abusing other people. We refined the methodology for the second experiment by intensifying the social dilemma of the users. The participants were asked to kill the robot. In this experiment, the intelligence of the robot and the gender of the participants were the independent variables, and the users’ destructive behaviour towards the robot was the dependent variable. Several practical and methodological problems compromised the acquired data, but we can conclude that the robot’s intelligence had a significant influence on the users’ destructive behaviour. We discuss the encountered problems and suggest improvements. We also speculate on whether the users’ perception of the robot as being “sort of alive” may have influenced the participants’ abusive behaviour.

The state of the art in human–computer conversation leaves something to be desired and, indeed, talking to a computer can be downright annoying. This paper describes an approach to identifying “opportunities for improvement” in these systems by looking for abuse in the form of swear words. The premise is that humans swear at computers as a sanction and, as such, swear words represent a point of failure where the system did not behave as it should. Having identified where things went wrong, we can work backward through the transcripts and, using conversation analysis (CA), work out how things went wrong. Conversation analysis is a qualitative methodology and can appear quite alien — indeed unscientific — to those of us from a quantitative background. The paper starts with a description of conversation analysis in its modern form, and then goes on to apply the methodology to transcripts of frustrated and annoyed users in the DARPA Communicator project. The conclusion is that there is at least one species of failure caused by the inability of the Communicator systems to handle mixed initiative at the discourse structure level. Along the way, I hope to demonstrate that there is an alternative future for computational linguistics that does not rely on larger and larger text corpora.
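The abstract’s first step — locating swear words in transcripts as markers of system failure — can be sketched as follows. This is a minimal illustration only: the word list and transcript format are assumptions, not the criteria or data actually used in the Communicator study.

```python
import re

# Assumed, deliberately small lexicon of sanction words; the study's
# actual criteria for "abuse" are not reproduced here.
SWEAR_WORDS = {"damn", "hell", "stupid", "useless"}

def flag_failures(transcript):
    """Return indices of user turns containing swear words, i.e.
    candidate failure points to trace backward from with CA."""
    flagged = []
    for i, (speaker, utterance) in enumerate(transcript):
        if speaker != "user":
            continue
        tokens = set(re.findall(r"[a-z']+", utterance.lower()))
        if tokens & SWEAR_WORDS:
            flagged.append(i)
    return flagged

# Hypothetical transcript: (speaker, utterance) pairs.
transcript = [
    ("system", "What city are you departing from?"),
    ("user", "I already told you. Boston."),
    ("system", "What city are you departing from?"),
    ("user", "This is useless."),
]
print(flag_failures(transcript))  # -> [3]
```

The flagged turn indices are only the entry points; the qualitative CA work of reading backward through the surrounding turns is, as the abstract notes, the substantive part of the method.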

Computer-facilitated self-service technologies (SSTs) have become ubiquitous in today’s consumer-focused world. Yet, few human–computer interactions elicit such dramatically polarizing emotional reactions from users as those involving SSTs. ATMs, pay-at-the-pump gas stations, and self-scanning retail registers tend to produce both passionate supporters and critics. While negative comments often center on unpleasant personal user experiences, the actual “abuse” related to such systems is really much deeper and more complex. SSTs carry with them a number of potentially insidious consequences, including the exploitation of consumers as uncompensated temporary workers; the sacrifice of our inherent humanity as we delegate both skills and cognition to electronic helpers; and the enabling of a new type of posthuman consumer identity, where each transaction is completed by a cyborg entity constructed of the human on one side and the electronic mechanism on the other. As a result, we may ultimately lose the boundary between the human and the machine.

Numerous research groups around the world are attempting to build realistic and believable autonomous embodied agents that engage in natural interactions with users. Research into these entities has primarily focused on their potential to enhance human–computer interaction. As a result, there is little understanding of the potential for embodied entities to abuse and manipulate users for questionable purposes. We highlight the potential opportunities for abuse when interacting with embodied agents in virtual worlds and discuss how our social interactions with such entities can contribute to abusive behaviour. Suggestions for reducing such risks are also provided, along with suggestions for important future research areas.

This research provides a qualitative elaboration of the research of Reeves and Nass (1996) and Ferdig and Mishra (2004), examining the ways in which people relate to computers as social agents. Specifically, this paper investigates the ways in which humans, due to a natural tendency to anthropomorphize computers, may experience significant emotions of grief and loss when computers crash. A content analysis of narratives describing human reactions to computer crashes demonstrates that the metaphoric language used to describe computer failure frames humans’ experience with computer loss in language that highlights the negative impact of human/computer interaction and that references Kübler-Ross’s (1969) stage theory of grief: denial, anger, bargaining, depression, and acceptance.
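A content analysis of the kind described above can be loosely sketched as keyword tallying against the grief stages. The cue lists and sample narrative below are invented for illustration; the actual study relied on human coding of metaphoric language, not automated matching.

```python
from collections import Counter

# Assumed keyword cues for each Kubler-Ross stage; a real content
# analysis would use trained human coders, not string matching.
STAGE_CUES = {
    "denial": ["can't believe", "no way"],
    "anger": ["furious", "angry", "hate"],
    "bargaining": ["if only", "please"],
    "depression": ["hopeless", "devastated"],
    "acceptance": ["moved on", "let go"],
}

def code_narrative(text):
    """Tally which grief-stage cues appear in a crash narrative."""
    text = text.lower()
    counts = Counter()
    for stage, cues in STAGE_CUES.items():
        counts[stage] = sum(text.count(cue) for cue in cues)
    return counts

# Hypothetical crash narrative.
narrative = ("No way this was happening. I was furious, then just "
             "devastated, but eventually I moved on.")
print(code_narrative(narrative))
```

Even this toy version makes the paper’s framing concrete: the narrative above touches denial, anger, depression, and acceptance while skipping bargaining, the kind of pattern the qualitative analysis surfaces.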

This paper describes our general framework for the investigation of how human gestures can be used to facilitate the interaction and communication between humans and robots. Two studies were carried out to reveal which “naturally occurring” gestures can be observed in a scenario where users had to explain to a robot how to perform a home task. Both studies followed a within-subjects design: participants had to demonstrate to a robot how to lay a table using two different methods — utilizing only gestures or gestures and speech. The first study enabled the validation of the COGNIRON coding scheme for human gestures in Human–Robot Interaction (HRI). Based on the data collected in both studies, an annotated video corpus was produced, and characteristics such as frequency and duration of the different gestural classes were gathered to help capture requirements for the designers of HRI systems. The results from the first study regarding the frequencies of the gestural types suggest an interaction between the order of presentation of the two methods and the actual type of gestures produced. However, the analysis of the speech produced along with the gestures did not reveal differences due to ordering of the experimental conditions. The second study expands the issues addressed by the first study: we aimed at extending the role of the interaction partner (the robot) by introducing some positive acknowledgement of the participants’ activity. The results show no significant differences in the distribution of gestures (frequency and duration) between the two explanation methods, in contrast to the previous study. Implications for HRI are discussed focusing on issues relevant for the design of the robot’s communication skills to support the interaction loop with humans in home scenarios.
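The aggregation step mentioned above — gathering frequency and duration per gestural class from an annotated corpus — can be sketched as below. The class names and timings are hypothetical; the actual COGNIRON coding scheme and corpus are not reproduced here.

```python
from collections import defaultdict

# Hypothetical annotations: (gesture_class, start_s, end_s).
annotations = [
    ("pointing", 0.0, 1.0),
    ("manipulative", 2.0, 4.5),
    ("pointing", 5.0, 5.5),
]

def summarize(annotations):
    """Compute frequency and total duration (seconds) of each
    gestural class across an annotated video corpus."""
    freq = defaultdict(int)
    dur = defaultdict(float)
    for cls, start, end in annotations:
        freq[cls] += 1
        dur[cls] += end - start
    return dict(freq), dict(dur)

freq, dur = summarize(annotations)
print(freq)  # -> {'pointing': 2, 'manipulative': 1}
print(dur)   # -> {'pointing': 1.5, 'manipulative': 2.5}
```

Per-condition summaries of this kind (gestures-only vs. gestures-and-speech) are what the frequency and duration comparisons in both studies rest on.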