AI News: How Not to Order Water from a Robot Waiter


You might think that “Can you tell me what time it is?” is a similar question, but taken literally, it’s asking whether you have the capacity to relate the time through speech, so a correct answer would be “Yes, I can,” whether you know what time it is or not.

In a restaurant, for example, it would be much simpler if people would reliably say “Bring me x” when they wanted x, but many people think of that as being rude, and instead muddle things up with language like “Can you bring me x?” or “If you could bring me x, that would be great.” For a robot that consistently interprets indirect speech acts (ISAs) literally, this can result in some serious confusion.

To explore some of these issues, researchers from Tufts University’s Human-Robot Interaction Laboratory, led by Matthias Scheutz, and Colorado School of Mines’ MIRROR Lab, directed by Tom Williams, recruited 49 participants to interact with a robot in different scenarios, including a pretend restaurant where humans were tasked with ordering several items by talking with a waiter robot. The Waiterbot was remote controlled Wizard-of-Oz-style by researchers in a nearby room, and its voice was generated by a text-to-speech system.

One participant said, “I would speak to the robot very literally, not like a human.” These participants had attempted to speak to the robot in normal, polite, human-like ways, and acknowledged that they would have to give up that way of speaking in order to have successful interactions in the future.

My guess is that most conversational agents (Alexa, Google Assistant, Siri) do interpret ISAs as requests, because they don’t seem to differentiate between “Are you capable of doing x?” and “Do x.” I spent a little bit of time messing with my own Google Assistant to try to find a situation in which I could force it to make that differentiation.

Indeed, the study showed that “indirect speech acts were used by the majority of participants and constituted the majority of task-relevant utterances.” While humans who interacted with the robot quickly figured out that ISAs were not effective, and it’s likely that some instruction up front would have avoided the problems completely, it’s not necessarily reasonable to assume that a naive user would have a pleasant or efficient experience.

Ideally, future research on natural language understanding would develop mechanisms whereby robots can automatically learn to understand ISAs in general, or to understand specific newly encountered ISA forms, which would allow robots to adapt to their human users instead of requiring the opposite.

Robots would prefer you to be rude

The team wanted to understand how people’s tendencies toward manners would be interpreted by an AI that isn’t programmed for the nuances of polite conversation.

The robot, which was controlled remotely by a person, was programmed not to process indirect speech acts – requests phrased as questions – as imperatives.

When a person asked it if they could order water, for example, the waiter bot said they could, and then asked what they’d like to order.

The exchange continues like that for quite some time, until the human participants finally figure out a command that works: “My order is water.” The point is, when we communicate verbally with a non-living object, we still tend to talk to it like it’s a person.
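To make the robot’s behavior concrete, here is a toy sketch of the difference between a literal interpreter and one with a pragmatic layer. This is purely illustrative – the function names and regex rules are my own invention, not the study’s actual dialogue system:

```python
import re

def literal_interpret(utterance: str) -> str:
    """Toy literal interpreter: treats 'Can you ...?' as a yes/no
    question about capability; only direct orders trigger action."""
    text = utterance.strip().rstrip("?.!").lower()

    # An indirect request form is answered literally, not executed.
    if re.match(r"(?:can|could|would) you bring me (.+)", text):
        return "Yes, I can. What would you like to order?"

    # A bare imperative or explicit order statement triggers action.
    m = re.match(r"(?:bring me|my order is) (.+)", text)
    if m:
        return f"One {m.group(1)}, coming right up."

    return "I'm sorry, I didn't understand."

def pragmatic_interpret(utterance: str) -> str:
    """Toy pragmatic layer: maps the indirect form onto the same
    action as the direct imperative before falling back."""
    text = utterance.strip().rstrip("?.!").lower()
    m = re.match(r"(?:can|could|would) you bring me (.+)", text)
    if m:
        return f"One {m.group(1)}, coming right up."
    return literal_interpret(utterance)
```

With the literal interpreter, “Can you bring me water?” earns a polite non-answer, while “My order is water” finally works – which is roughly the loop participants found themselves stuck in.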

The onset of the AI era has led to the development of dedicated hardware that’ll soon begin flooding consumer markets.

And just like that one kid we all know who was an asshole to their parents because they could get away with it, we’re gonna get used to barking plain language imperatives at our robot servants.