The New Face of Technology

High-tech fuzzy friends: Visitors to Okita's office meet Paro the seal and a robot cat that purrs when you pet it. Photograph by Samantha Isom.

How does someone who does research on robots fit into a school of education?

Sandra Okita: Applying educational content and pedagogy to robots is not new. However, until recently, much of the effort focused on making machines more intelligent—research that fits more in a school of engineering or computer science. My research focuses on the impact of robot interaction on human learning. I examine what features in robots make humans think and learn more. Does it matter if a robot has a human voice versus a robotic voice? Does a cooperative peer-like robot have a different impact on learning than an authoritative instructor robot? I explore different interaction styles to see how a humanoid robot may become a good learning companion. I design relationships and test interventions to see how learning partnerships may develop between humans and robots. This makes a school of education a better fit for me.

What is the potential value of robots in education, and how do you study that?

Sandra Okita: One way I approach my research is to examine how humans interpret, understand and behave around robots. This helps me design features and interventions that invite interaction. I then run empirical studies to test if specific features in robots contribute to human learning and behavior. Coming from a cognitive science and learning science background, I try to capitalize on the social components of technology and design interactions geared toward learning. I work with children from four to 13 years old, and with adults, to see if there are developmental differences.

Robots have several features that are valuable for education. One is their human-like appearance, which seems to elicit a social response. Familiar behaviors, such as petting a robotic cat, seem to trigger responses that tap into people's prior knowledge and experience. In previous research, we found that children had "scripts" about familiar play routines, such as playing house or school, through which they developed a relationship with robots. In a study we conducted with Honda Research Institute USA, children engaged in a table-setting task with the life-sized humanoid robot ASIMO. ASIMO exhibited different learning styles (authoritative, cooperative, parallel play) around familiar scripts. We found that younger children learned more from cooperative engagement.

Robots also have what I call a boundary-like property, and that, too, has implications for learning. Robots take on forms and motions similar to those of real humans and animals but still have machine-like properties. Because they belong simultaneously to two categories that seem mutually exclusive, I call them "technological boundary objects." A robotic cat, for example, resides simultaneously in the categories of animal and machine. This boundary-like quality often elicits strong responses that confound people's beliefs. For example, in work with Dr. Shibata of the National Institute of Advanced Industrial Science and Technology, we used various robotic animals, including a robotic baby seal, to probe young children's understanding of biology and agency. We found that the complex nature of these robots challenged children's beliefs and prompted serious thinking when they were asked to make inferences about biological properties. For instance, children inferred that robots couldn't grow and needed a remote control to move. Yet they also believed that robots need food, or could be bad pets and jump on the couch when no one was looking.

Another feature with value for learning is that robots seem to offer people room for imagination and creativity, which may make the learning experience more original and motivating. McCloud, in his book Understanding Comics, demonstrated that simpler cartoon figures leave more room for interpretation and elicit greater empathy. A similar balance is needed with robots, which can't be too human-like or too fake. The roboticist Mori warns of what he calls the "uncanny valley," in which robots that are too human-like, but not quite human, become distracting. People suddenly notice the ways the robot is not human, which reminds them of a zombie, and at that point they find the robot scary instead of appealing. So a robot has to have enough familiarity to trigger a response and let the learner imagine, but not run away.

I also study how special needs populations can benefit from robot interaction. For example, some autistic children have difficulty with joint attention, the ability to share information and to comprehend the thoughts and intentions of others. A robot, like an autistic child, responds very literally to a precise sequence of stimuli or commands, so it can model the problems the child faces. But we can also use robots to engage and assist an autistic child in order to increase the range of the child's reactions. So I examine which social cues from the robot the child responds to.