Kara

conversational agent in ambulances

Helping medics transport patients

Transporting sick patients is hard, stressful work. Kara is a concept for an intelligent conversational agent designed to help medics transport patients through a friendly multimodal interface. As a helpful team member, Kara augments the foresight of emergency transport personnel with computational power. Through conversational patterns attuned to how medics communicate, Kara builds trust by sharing its navigational logic, patient case analytics, and vehicle maintenance alerts to support successful transport in stressful situations.

How Kara Works

Kara uses a pervasive multimodal UI system in the front of an ambulance to help medics transport patients. Rather than replacing medics, Kara acts as another team member. It establishes presence and shared context through visual light cues on the steering wheel and provides a gentle haptic response when it has information to contribute. With a heads-up display, Kara both shows and explains its logic in order to build trust with medics. With access to information through the cloud as well as computational power, Kara provides real-time, contextually relevant information that helps medics complete transport tasks while leaving the key decision-making moments to them.

Experiential Prototype

Kara's Interaction Model

Kara is designed with agency limited to three specific areas of concern. Navigation, finding patient case patterns, and monitoring equipment are well suited to being treated as optimization problems, where Kara's computational strengths are an asset. This lets medics head off unforeseen complications that arise in transport and keep their attention on responding quickly and appropriately to complex situations and providing patient care.

Kara employs a conversational approach modeled on a friendly teammate. Like a teammate, Kara contributes to conversations by finding appropriate moments to interject and by following social graces. It transitions smoothly between passive and active states of listening and speech, and it provides a consistent active speaking experience to demonstrate reliability over time.
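One way to make this interaction model concrete is as a small state machine. The sketch below is illustrative only: the state names, the signaling step, and the transition rules are our assumptions about how such an agent could enforce "finding appropriate moments to interject," not part of the actual prototype.

```python
from enum import Enum, auto

class AgentState(Enum):
    PASSIVE_LISTENING = auto()   # ambient monitoring, no cues
    ACTIVE_LISTENING = auto()    # a medic has addressed the agent directly
    SIGNALING = auto()           # light/haptic cue: agent has something to say
    SPEAKING = auto()            # agent is contributing verbally

# Hypothetical transition rules: the agent never jumps straight from
# passive monitoring to speech; it signals first, mirroring a teammate
# waiting for an appropriate moment to interject.
TRANSITIONS = {
    AgentState.PASSIVE_LISTENING: {AgentState.ACTIVE_LISTENING, AgentState.SIGNALING},
    AgentState.ACTIVE_LISTENING:  {AgentState.SPEAKING, AgentState.PASSIVE_LISTENING},
    AgentState.SIGNALING:         {AgentState.SPEAKING, AgentState.PASSIVE_LISTENING},
    AgentState.SPEAKING:          {AgentState.PASSIVE_LISTENING, AgentState.ACTIVE_LISTENING},
}

class ConversationModel:
    def __init__(self):
        self.state = AgentState.PASSIVE_LISTENING

    def transition(self, target: AgentState) -> bool:
        """Move to `target` only if the interaction model allows it."""
        if target in TRANSITIONS[self.state]:
            self.state = target
            return True
        return False
```

Under these rules, an unprompted jump to speech is refused until the agent has signaled, which is one way to encode "consistent active speaking experience" as enforceable behavior.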

Design Process

Project Background

As part of the Microsoft Design Expo, our project brief was to design a product, service or solution that demonstrates the value and differentiation of a conversational user interface.

Defining a Territory

We started by identifying an opportunity and territory for further investigation. With the advances in technology for driverless vehicles, we saw an opportunity to design conversational user interfaces that could make people feel more comfortable in these environments.

We began scoping the design challenge by choosing a user group to focus on. After speaking with various stakeholders, we brainstormed 12 scenarios with different user groups and decided that emergency medical transport was a rich design opportunity for conversational user interface, human computer symbiosis, and autonomous driving technology.

Framing the Opportunity

How might conversational user interfaces be used in the future of emergency transport?

Exploratory Research

Where stress happens

During the exploratory phase, we began to uncover the complexity of cases faced by EMTs through light-touch interviews. We discovered that emergency medical technicians have deep training and experience in protocols for stabilizing patients. Events that trigger stress tend to happen before, during, and after transport, arising from unforeseen issues.

A multistakeholder system

We also learned that medics perform many tasks in addition to transporting patients, such as communicating with hospitals and dispatchers, all while navigating many physical and informational infrastructures.

Generative Research

We met with Pittsburgh paramedics and volunteer EMTs and used generative research activities to build empathy and a deeper understanding of their challenges and pain points.

Touchstone tour

A touchstone tour of the inside of an ambulance to understand current interface and interaction challenges. We learned that EMTs currently use several complicated and outdated interfaces—both visual and auditory—to complete their transport and communication tasks.

Collage activity

A collage activity to help us design for human-computer symbiosis. We learned that EMT teams function fluidly and need to trust each other's intuition, judgment, and problem-solving skills.

Journey mapping

A journey mapping exercise to visualize the end-to-end experience and identify moments of stress. Specific scenarios emerged that highlighted the complexity of everyday shifts.

Role playing

A future-looking role-play exercise to test the boundaries of what participants were comfortable using and interacting with. The EMTs enjoyed ideating on tools from forward-looking technological futures that would help them do their jobs.

Ride-along

16 hours of ride-alongs with EMTs also gave us a more empathetic view of the reality of daily routines, challenges, coping mechanisms, and stresses.

Research Insights

Through the research, we synthesized key insights to guide our designs.

Key Insights

EMTs are active problem-solvers.

"So much of what I need to know has to be committed to memory." Paramedics know how to quickly assess and respond in the right way to complex and stressful human situations.

Design Principles

Let people do what they do best, let computers do the rest.

We don't want to remove or replace the job that EMTs really do. Let's design for where computers have strengths and augment what is working.

Communication is important, but trust is deeper than that.

"The nightmare scenario is a partner who doesn't know what to do." "Sometimes a gesture or look is enough." Each person on the team must play an active role. During moments of intense activity, each person is their own unit, and EMS professionals must and do trust each other's intuition, judgment, and response.

Build for trust through reliable action.

Conversation is the tip of the experiential iceberg; beneath it lies an agent that must demonstrate trustworthy behavior through consistent actions and approaches over time.

Some noise is good noise.

"It's good to hear what's happening around the city." EMS professionals need to multitask, and they rely on a variety of ambient signals to gather information.

Not reductive but additive.

Rather than reducing information, we should provide information to contextualize situations at the right time, in the right way.

Resulting directions

This resulted in initial directions for the appropriate roles of the CUI and its embodiment. We realized that some of these initial roles focused on more logistical tasks, which led us to concentrate on designing the 'front of the ambulance' experience.

Focused Question

How can we improve the front of ambulance transport experience with a conversational agent?

Generative Research Again

With these observations, combined with the resulting themes and directions, we brainstormed 100 conversational user interface concepts to fully externalize our findings as a team. We then ideated on and synthesized these into 12 narrative storyboards. We took these sacrificial concepts back to our participants in a second workshop to elicit reactions, test boundaries, and gather feedback. We also conducted a card-sorting exercise to determine which qualities should anchor the experiential design.

Concept Testing

While participants were excited by the potential to make their jobs easier, they were not impressed by the stress-relieving concepts, as each had effective personal coping methods. In discovering that our users weren't looking for direct stress relief, we realized our design would have to address stress indirectly. We focused our designs on three main scenarios: navigation, patient case patterns, and vehicle maintenance, supported by the design principles and initial heuristics.

Prototyping and Testing

We began designing multimodal interactions for the three scenarios through dozens of low-fidelity prototypes using scripts, audio recordings, mock-ups, and role-plays—first on ourselves and then with participants using a Wizard of Oz prototype. In the process, we explored the voice, tone, and personality of the conversational agent, as well as visual displays of information.

Refinement

In refining interactions, we codified the patterns we were learning into interaction models, refined our prototypes for the conversational agent, and then created the final design presentation and deliverables.

Opportunities for Future Development

We explored a lot of territory in designing conversational interfaces. Although the final concept may seem quite futuristic, it provoked ample discussion around the role and agency of computer agents via conversational, graphic, and haptic interfaces. Gaining trust in potentially high-stress situations was another challenge we tried to address in the design. As driverless cars and conversational agents continue to develop, more research is needed into how to design a symbiotic relationship between our tools and ourselves.

For the shorter term, the complex interfaces and outdated processes we observed in the current emergency medical transport experience point to a great opportunity to design better systems and experiences for this group of caretakers.