RESEARCH TOPICS

We investigate how humans can develop healthy, productive, and fun relationships with artificial intelligence.

Trust in Human-Robot Teams

Anthropomorphism and Trust

Politeness and Etiquette

Trust Repair in Autonomous Systems

Neural Correlates of Trust

Trust Cues

Trust in Human-Robot Teams

Robots are becoming increasingly common in settings ranging from medicine to the military to our own homes. Robots can lessen the workload of their human teammates by taking on tasks that are dangerous, dull, or dirty, allowing humans to focus on more complex and engaging work. A major challenge in creating good human-robot teams is that individuals must be willing to trust these agents and give them responsibility before they can become effective teammates. Our research investigates the causes and effects of trust between humans and autonomous teammates. Measuring and studying how robots and humans coordinate, communicate, and support one another helps realize the vision of human-robot teams that equal or outperform the best human-human teams.

Anthropomorphism and Trust

Between advances in computer-generated graphics and biomimetic robotics, many robots and agents of the near future will appear human-like, or display "anthropomorphic" features. Artificial agents that appear human may be treated differently than robots that are mechanical in appearance; in some instances they may be treated like humans, while in other cases they may be perceived as strange and untrustworthy. Anthropomorphic cues range from the obvious, such as eyes on a robot's face, to the subtle, such as eye movements, gestures, and facial expressions, and can quickly lead us to believe that an artificial agent is human-like. Such beliefs can change how we trust and act toward artificial agents. Our research seeks to understand how anthropomorphism affects behavior and performance, and to inform the future design of our artificial teammates.

Politeness and Etiquette

As robots adopt increasingly complex roles in everyday life, they must also adopt increasingly complex social skills to interact in a human world. Some robots already exhibit semi-complex social interaction: the industrial robot Baxter uses digital eyes to communicate attention and confusion to its human teammates, Lowe's OSHbots verbally interact with customers as they shop, and Honda's ASIMO moves out of the path of oncoming foot traffic. In the near future, service robots may be even more ubiquitous, employing social skills far beyond those of current agents such as Apple's Siri and Amazon's Alexa. If appropriately implemented, social skills such as politeness and etiquette have significant positive effects on robot-to-human interaction: our research has found that proper automation etiquette increases trust, situation awareness, and performance. Etiquette may also lead users to perceive agents as more human, which increases trust resilience and forgiveness even when an agent gives imperfect advice.

de Visser, E. J., Shaw, T., Rovira, E., & Parasuraman, R. (2009). Could you be a little nicer? Pushing the right buttons with automation etiquette. In Proceedings of the 17th International Ergonomics Association Meeting.

Trust Repair in Autonomous Systems

Autonomous systems will inevitably make mistakes just as humans do, but unlike humans, most machines cannot apologize for their mistakes or explain why errors occurred. As a result, mistakes can cause perfectly functional systems to be critically underutilized. Our lab is studying methods that give machines the capability to appropriately regain trust after errors, granting them some of the same interpersonal repair skills that humans have. This research includes studying how different types of errors and repairs match up across a range of interaction contexts. We are particularly focused on autonomous systems such as self-driving cars. This novel technology has the potential to dramatically change society, but potential passengers are frequently distrustful. If self-driving cars are to be widely adopted, they must be able to respond properly to errors before passengers will be comfortable riding in them. Through this work, we hope to understand which responses and repairs are most helpful in each situation, while avoiding excessive trust that can lead to dangerous over-reliance.

Neural Correlates of Trust

Humans' tendency to trust computers, robots, or other humans is driven by a complex system of brain structures and neurochemicals. Our research seeks to capture the effects of these systems. Recent topics include mapping the brain areas responsible for trust and understanding how the peptide oxytocin affects interaction between humans and machines. Understanding these neural correlates enables us to better understand and control the factors that influence trust. As automated agents become increasingly social, this work may be applied to create agents that are less susceptible to misuse or disuse.

Trust Cues

Our research includes mapping factors that help users understand how the "black box" of automation reaches complex decisions. Decision support systems can be extremely complex, incorporating multiple data streams, making it increasingly difficult for any given operator to comprehend how the system arrives at a decision. To address this, we developed a design methodology for creating trust cues that help operators calibrate their perceived trust in a system closer to its actual trustworthiness. A trust cue is any piece of information that informs the user about the trustworthiness of an agent, such as what the agent is doing, how it is doing it, and what goals it has. With these cues, users can better calibrate their trust, which can lead toward optimal decision-making with reduced workload.