Thursday, April 22, 2010

Robot Attitude

I was at the grocery store this morning and said hi to my checker friend Kim, who was serving as queen of the self-service checkout area. We chatted about how the system was working, and she was explaining that self-checkout is great when you have just a couple of things - maybe a coffee and a donut - but not so great when you have $250 worth of groceries and two impatient children. Because, she explained, if you pick anything up out of the bagging area before you've paid for it, the self-checkout system gets upset. Kim told me:

"She says, 'Put it back, put it back!'"

I was intrigued, and effectively said, "So she's like that, is she?" Kim explained that the checkout system is definitely a female, and "she has attitude."

"So she's OCD?" I asked. (obsessive-compulsive disorder)

"Yeah, pretty much," answered Kim. "She's all witchy."

We laughed about it and said goodbye, but as I was leaving, I knew I'd blog about it. These days we interact with computer systems all the time, and with talking computer systems too. The ATM might talk to us (it does all the time in Japan). The self-checkout system talks to us. The automatic flight information guy talks to us.

And darned if we don't feel that these things have personalities.

Our power to anthropomorphize is really quite astonishing, but at the same time, it must be taken into account. I'm absolutely sure that people have done A LOT of work to make sure that the computer guy who helps you with flight information is behaving really politely. I'm always impressed with him, in fact. He's polite, he's helpful and accurate, and if you're having trouble he'll immediately say, "It sounds like you need to talk to a representative. Let me get someone for you."

I'm sure there's a story there. Our assumption of the Cooperative Principle of conversation (H.P. Grice) is really strong. What if we ran into a real AI? Would we be able to tell? I've seen a bunch of stories where it's really clear the computer system is doing the impossible, i.e. thinking for itself, but I'm not sure in practice this would be easy to determine. The computers that talk to us now aren't using a language system like the one we use to generate natural language. They're dealing with a microscopic subset of topics and have fixed responses. On the other hand, I know from learning foreign languages myself that when you start out, you're pretty functional over a micro-subset of topics and then have to push yourself to get beyond them (even if your responses aren't entirely fixed).
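Just to illustrate what "a microscopic subset of topics with fixed responses" means in practice, here's a toy sketch of such a system. Everything in it - the topics, the flight numbers, the phrasing - is hypothetical, invented for this example; it's not how any real flight-information line is built:

```python
# A toy fixed-response system: it only "understands" a tiny set of
# topics, and every reply is canned. No real language understanding
# happens here -- just keyword lookup.

RESPONSES = {
    "arrivals": "Flight 12 is scheduled to arrive on time at 3:45 PM.",
    "departures": "Flight 7 is scheduled to depart on time at 9:10 AM.",
    "baggage": "Baggage from flight 12 will arrive at carousel 4.",
}

FALLBACK = ("It sounds like you need to talk to a representative. "
            "Let me get someone for you.")

def respond(utterance):
    """Look for a known topic word in the caller's utterance;
    otherwise hand off to a human."""
    words = utterance.lower().split()
    for topic, reply in RESPONSES.items():
        if topic in words:
            return reply
    return FALLBACK
```

Within its tiny domain it sounds perfectly competent and polite; one step outside, and all it can do is hand you off to a person - which is exactly the behavior I admire in the flight-information guy.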

I'm teaching my kids manners, and I always say to them, "If you're polite, people will like you and be happy to help you." It's amazing how much this is true. It's also true that politeness reflects on your personality, and that language learners can be wrongly thought to be bad people if they make errors of pragmatics. This principle that allows people to extrapolate back from your words to imagine the quality of your personality is the same one that allows Kim to tell me that the self-checkout is "witchy."

There are some computer systems that can almost pass the Turing test if you don't stray outside their area. It's pretty impressive.

Of course, as a language learner, you get to make mistakes that the computer just couldn't, so that's one difference. I think we're definitely moving towards a time when most people won't be able to tell a robot from a person, at least with the sort of automated messaging and help lines we have now. I suppose if you asked them to sing "Paradise", they'd probably have a little trouble.