In all the Singularity-based angst over whether robots are going to take over, few have considered the human qualities that might allow our silicon cousins to prevail. Specifically, will robots dupe us into doing something dumb or dangerous because we trust them too much?

PCMag discussed this recently with robotics expert Dr. Ayanna Howard after her keynote at the first IEEE Multi-Robot Systems (MRS) conference. Dr. Howard spent 12 years as a senior robotics researcher at NASA's Jet Propulsion Laboratory (JPL) and is now the founder and CTO of Zyrobotics, which creates advanced technology to assist children living with disabilities. She's also director of the Human-Automation Systems Lab (HumAnS) at the Georgia Institute of Technology.

Here are edited and condensed excerpts from our conversation.

Firstly, can you talk about your work at NASA on future Mars missions? At NASA, I was looking at how rovers can mimic behaviors of the scientists, especially geologists, and that's when I started thinking about the whole human-in-the-loop scenario, knowing that humans have a lot to offer in the Human-Robot Interaction field. Back then, though, I considered the human primarily as the input factor from which the system could learn.

So you wanted the robot to have some measure of autonomy as opposed to Houston controlling it, saying "turn left, please"? Right, but by mimicking human behavior. In fact, when I came to Georgia Tech, my first project was the SnoMotes, funded by NASA, which focused on developing autonomous-exploration robots that could work in ice environments, collecting data in ways similar to how a scientist would get it, in an area where we couldn't send human geologists.

Then you went wider with your robotic application work. Yes. I was looking for what I wanted to do next, and I was interested in developing ideas that could benefit a wider human population, in everyday situations. At Georgia Tech, as Director of the Human-Automation Systems (HumAnS) Lab, we draw on the disciplines of robotics, cognitive sensing, machine learning, computational intelligence, and human-robot interaction, carrying out research around the concept of humanized intelligence and the process of embedding human cognitive capability into the control path of autonomous systems.

Sounds like you're coming at this field from the engineering perspective, as opposed to a psychological or design perspective. Yes, I'm a hybrid: a classical engineer by training and a computer scientist by trade. So I examine problems through systems thinking. I came to robotics looking at it as a system of components, with the human as one of those components, whose inputs need to be tweaked within well-defined parameters. For example, in my work, "emotions" provide a valuable input into the robot, and vice versa. Within this systems-thinking framework, human feedback influences robot behavior, robot behavior influences human behavior, and so on.

Which robots do you use in your research studies? We use several, including the Darwin-OP Humanoid Research Robot, Nao, and the Robotis Darwin Mini.

Do you program them using the ROS middleware? Yes, we use a range of software, including ROS, TensorFlow for machine learning to process the data, and the Microsoft Emotion API, which we extend with additional tools on top.

Can you give us some examples of your work? Once I'd transitioned to Georgia Tech, I started to pivot my research into robots within healthcare. There are currently 150 million children living with disabilities worldwide and, in the US alone, the pediatric rehabilitation industry is worth $1.6 billion, so it's a significant addressable market.

We have run studies with children with cerebral palsy, using a robot to extend their physical therapy into the home environment to improve outcomes. In a co-authored paper, which appeared in the journal Applied Bionics and Biomechanics, we show how robots are effective when integrated into therapy instruction for upper-arm rehabilitation.

These children are often in acute pain, but in your studies, you found they were willing to go the extra mile for the robot. Children don't always want to do an extensive exercise task for their human carer because it's physically uncomfortable, but we found that they do want to cooperate with the robot. Also, the robot doesn't get tired or lose patience with the child, which helps sustain longer-term interactions. We quantified the results of upper-arm exercises involving adduction and abduction and lateral and medial movements, observed in therapeutic situations by the robot, using a variety of computer vision techniques, including Motion History Imaging (MHI), edge detection, and Random Sample Consensus (RANSAC). The results showed improvement when the children did these physical rehabilitation exercises with the robot as an aide. Also, by recording their emotional state, we could see that their facial expressions correlated with a happy state when interacting with the robot.
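Motion History Imaging, one of the techniques mentioned above, condenses a sequence of frames into a single image in which brighter pixels mean more recent movement. Here is a minimal NumPy sketch of the idea; it is illustrative only, not the lab's actual pipeline, and the toy data and decay duration are invented:

```python
import numpy as np

def update_motion_history(mhi, motion_mask, timestamp, duration):
    """Update a motion history image: pixels where motion occurred this
    frame get the current timestamp; pixels whose last motion is older
    than `duration` decay to zero."""
    mhi = mhi.copy()
    mhi[motion_mask] = timestamp
    mhi[mhi < timestamp - duration] = 0.0
    return mhi

# Toy example: an "arm" region sweeps right across three frames.
h, w = 4, 6
mhi = np.zeros((h, w), dtype=float)
duration = 2.0
for t, col in enumerate([1, 2, 3], start=1):
    mask = np.zeros((h, w), dtype=bool)
    mask[1:3, col] = True          # region that moved in this frame
    mhi = update_motion_history(mhi, mask, float(t), duration)

# Higher values mark more recent motion; older motion fades out.
print(mhi[1])  # [0. 1. 2. 3. 0. 0.]
```

Read along row 1: the movement trail brightens from left to right, which is exactly the cue a classifier can use to recover the direction and recency of an arm movement.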

Can you now talk about the robots dancing as another example of your human-in-the-loop systems thinking? Children like to share what they know. So, in another one of our research studies on improving physical and cognitive abilities for children with disabilities, we asked them to instruct the robot to play Angry Birds on a tablet [video below]. In the game, if the building toppled and points increased, the robot's eyes lit up, it emitted a happy sound and did a little dance. This was to provide positive feedback to the child. We found that we could extend the length of the interaction by providing appropriate robot feedback.
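The feedback rule described above is simple: a successful game event (the building toppling, the score going up) triggers an animated, positive response from the robot. A toy sketch, with all function and action names hypothetical:

```python
def robot_feedback(prev_score, new_score):
    """Return the feedback actions the robot performs after a game event.
    Toy rule based on the study description: a score increase triggers
    positive, human-like feedback; otherwise the robot stays neutral."""
    if new_score > prev_score:
        return ["light_eyes", "happy_sound", "dance"]
    return ["idle"]

print(robot_feedback(100, 250))  # score went up -> celebrate
print(robot_feedback(250, 250))  # no change -> stay neutral
```

The point of the study is not the rule itself but its effect: tying the celebratory actions to the child's success is what extended the length of the interaction.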

You found that this feedback response increased efficacy. Yes, if the robot displays appropriate human-like behavior, such as joy, a form of trust develops, and the child is willing to engage longer in the game to teach the robot how to improve its score. Then, using our methods for extracting the child's motor performance, including position, arm trajectory, and movement units during active movements, we can show that the robot's animated state, which correlates with the child's behavior, is effective in engaging the child correctly.
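Movement units, one of the motor-performance measures mentioned above, are commonly counted as peaks in the speed profile of a reach: smoother movement produces fewer units. A toy Python sketch under that assumption; the threshold and trajectory below are invented for illustration, not the study's values:

```python
import numpy as np

def movement_units(positions, dt=0.1, min_peak=0.05):
    """Count movement units as local maxima of the speed profile that
    exceed a small threshold -- a common kinematic smoothness measure.
    (Illustrative; the threshold is made up, not the study's value.)"""
    positions = np.asarray(positions, dtype=float)
    velocity = np.gradient(positions, dt, axis=0)   # per-axis velocity
    speed = np.linalg.norm(velocity, axis=1)        # scalar speed
    peaks = [i for i in range(1, len(speed) - 1)
             if speed[i] > speed[i - 1]
             and speed[i] > speed[i + 1]
             and speed[i] > min_peak]
    return len(peaks)

# Toy 2D arm trajectory: two distinct reach sub-movements back to back.
t = np.linspace(0, 2, 41)
x = np.where(t < 1, 0.5 * (1 - np.cos(np.pi * t)),
             1 + 0.5 * (1 - np.cos(np.pi * (t - 1))))
traj = np.column_stack([x, np.zeros_like(x)])
print(movement_units(traj, dt=t[1] - t[0]))  # two speed peaks -> 2
```

A single smooth reach gives one unit; jerkier, stop-and-go movement gives more, which is why the count is useful for tracking rehabilitation progress over time.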

Through your spin-off company, Zyrobotics, you're now taking some of this research into the commercial realm. Can you talk about any specific products? In our Access4Kids research study [video below], we developed a wireless controller for tablet accessibility to allow people with limited fine motor control to still use the common pinch-and-swipe gestures required for tablet control. This work is now commercially available, licensed to Zyrobotics, as the TabAccess product.

In all your work, you found that trust is an essential part of human-in-the-loop robotics, but trusting the robot isn't always the wisest decision. Trust is not what people say; it's their actions, what they're doing in any given situation. At Georgia Tech, we've done several studies looking at trust. In our study of a simulated fire [video below], the subjects unquestioningly followed the robot because it was clear they perceived it as an authority figure, even after it made mistakes and led people into a room with no visible exits. We know that teaching the robot to mimic human behavior encourages interaction and builds trust.

But even when the robot doesn't have a clue? Worryingly so. In our research paper...we presented work that suggested people tend to be overly trusting and overly forgiving of robots in certain situations. Our experiments showed that, at best, human participants in our simulated emergencies focus on guidance provided by robots, regardless of a robot's prior performance or other guidance information, and at worst, believe that the robot is more capable than other sources of information.

You mean, as long as the robot demonstrated that it was learning from its mistakes, by doing a "re-think" and a possible reboot, people read that as intelligence and trusted it? We found that, even when the robots do break trust, a properly timed statement can convince a participant to follow it.

How are you using this work in your lab at Georgia Tech? Trust is a two-edged sword. We need trust in the healthcare domain to ensure that children are compliant with their exercise goals. We're pushing the trust aspect to help children see the robot as a friend who helps them to improve life outcomes, but we also need to ensure that they do not wholly over-trust the robot. The key is to maximize the rewards while minimizing any potential risk.

So what are roboticists doing about the way humans seem to trust robots? We're continuing to do research studies on this issue, in order to better prepare the industry when developing autonomous systems, as well as contributing to the growing body of work, and suggested guidelines, at IEEE.
