Soon, social and assistive robots will become an ever greater part of our lives. They could be in our homes, our hospitals, and our schools, helping with childcare, elderly care, and rehabilitation from injury or disease, and serving as social and assistive aids in all sorts of capacities.

But how much do we know about the psychology of our interactions with robots? What should any one social or assistive robot look like? How should it move and react to us, and to what sorts of information? Should it appear to show "emotions" and be responsive to our own emotions? How much like a person should an assistive robot be? And how innovative can we be in designing robots to serve as responsive assistants and reliable supports, including in times of stress or in tension-fraught situations?

Let’s take a look at two recent research studies that explore how we understand and respond to expressions of emotion in robots...