InMind – Rapport-Building Personal Assistant on Mobile Devices

Intelligent assistants of 2025

In 2014, Yahoo! and Carnegie Mellon University (CMU) announced a five-year, $10 million partnership called the InMind project. The goal of InMind is to invent, implement, experiment with, and iterate on an intelligent assistant for mobile devices. Major companies are already exploring intelligent assistants such as Siri, Google Now, M, and Cortana, and we expect they will continue to develop these. To make an impact as a university, our goal is not to compete with the incremental extensions these systems will make over the next year or two, but instead to use InMind as a research prototype to invent intelligent assistant capabilities that may become widespread on a 5-15 year horizon – the intelligent assistants of 2025.

Knowledge Inference for Reciprocal Self Disclosure
The dialogue system maintains an interpersonal user model representing the user's preferences and shared knowledge. The dialogue manager is connected to CMU's Never-Ending Language Learner (NELL) [Carlson et al. 2010] as a common-sense knowledge base. In this demo, as one of the conversational strategies for building rapport, we implemented "Reciprocal Self Disclosure," in which the agent discloses its own preferences by referring to the user's preferences.

U: “I like the Steelers.”
S: “Oh really? Well, my favorite team is the Eagles.”

The system supports knowledge inference and content planning. In this example, we assume the knowledge inferred by NELL is:

IsA(Steelers, SportsTeam)
TeamPlaysAgainstTeam(Steelers, Eagles)

Then a content planning rule would be:

IF UserMentions(likes(?x))
AND NELL_knows(IsA(?x, SportsTeam))
AND NELL_knows(TeamPlaysAgainstTeam(?x, ?y))
THEN InMind_Response = "Well, my favorite team is the ?y."
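
The rule above can be sketched in code. The following is a minimal, hypothetical Python sketch: the fact store, predicate names, and the `respond` trigger are toy stand-ins for NELL and the InMind dialogue manager, not the actual implementation.

```python
# Toy NELL-style fact store: each fact is a (predicate, arg1, arg2) triple.
NELL_FACTS = {
    ("IsA", "Steelers", "SportsTeam"),
    ("TeamPlaysAgainstTeam", "Steelers", "Eagles"),
}

def nell_knows(pred, *args):
    """Check whether a fact is in the (toy) knowledge base."""
    return (pred, *args) in NELL_FACTS

def rival_team(team):
    """Return some ?y such that TeamPlaysAgainstTeam(team, ?y), if known."""
    for pred, a, b in NELL_FACTS:
        if pred == "TeamPlaysAgainstTeam" and a == team:
            return b
    return None

def respond(user_utterance):
    """Apply the reciprocal self-disclosure rule to one user utterance."""
    # Crude trigger standing in for UserMentions(likes(?x)).
    prefix = "I like the "
    if user_utterance.startswith(prefix):
        x = user_utterance[len(prefix):].rstrip(".")
        if nell_knows("IsA", x, "SportsTeam"):
            y = rival_team(x)
            if y is not None:
                return f"Oh really? Well, my favorite team is the {y}."
    return None

print(respond("I like the Steelers."))
# -> Oh really? Well, my favorite team is the Eagles.
```

A real system would replace the string-prefix trigger with the dialogue system's language understanding and query NELL itself rather than a hard-coded fact set; the sketch only illustrates how the inference and content-planning rule compose.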