Pages

Sunday, May 25, 2014

At one point or another, you've probably gotten into a conversation with a chatbot. A chatbot, of course, is designed to hold up its end of a conversation about a given topic. Often this is for commercial purposes, and the chatbot is only programmed to deal with the commercial task at hand. For example, anyone who has Charter as their cable provider and has needed customer service at some point- like me- will run into a chatbot that walks a complainant through the more common customer-service issues. It makes sure the issue is handled the same prescribed way every time, and it frees up their employees for the more complex issues. It pays for the chatbot to appear as human as possible; better that the customer never realizes they're talking to a robot, because if they do, Charter knows as well as anyone that the customer is likely to just quit talking to the chatbot and demand to speak to a human. And that uses up resources.

Disguising a commercial chatbot as human is one matter. General conversation, which can be about literally any topic, is a far dicier proposition. This is what the infamous Turing test covers, and chatbots have historically fared rather poorly at it. Sooner or later, the topic gets too esoteric and the chatbot starts making odd conversational choices. Often it comes sooner: if the people I know are any indication, the chatbot will immediately be slammed with the most bizarre possible topic of conversation the human can come up with, just to see how it handles it. And when a failure point is inevitably found, rest assured it will be mined for every last troy ounce of humor.

But perhaps this is being too rough on the chatbot. Perhaps it can be given an easier task. What if you merely asked a chatbot to hold a conversation with itself?

Meet Cleverbot, our contestant. Cleverbot has been running since 1997, and it has remembered every single conversation it's ever had with a human- if Wikipedia is to be believed on the matter, over 150 million of them in total. As I write this, it's holding over 9,400 conversations at this very moment; typically the number is around 10,000. It takes into account everything that's ever been said to it, everything it said in response, and everything that's been said in response to that response, and uses that information to continually make for better conversation.
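The basic idea- answer a new remark by digging up the most similar remark anyone has ever made to you before, and replying the way people replied then- can be sketched in a few lines. This is a toy illustration of the retrieval approach, not Cleverbot's actual (and far more elaborate) matching engine; the class and method names here are my own invention.

```python
# Toy sketch of a Cleverbot-style retrieval chatbot: remember every
# (prompt, reply) pair ever seen, and answer a new prompt with the
# reply attached to the most similar stored prompt. Hypothetical
# illustration only; not Cleverbot's real algorithm.

from difflib import SequenceMatcher

class RetrievalBot:
    def __init__(self):
        self.memory = []  # list of (prompt, reply) pairs from past talks

    def learn(self, prompt: str, reply: str):
        """Record something a human said and what came back to it."""
        self.memory.append((prompt.lower(), reply))

    def respond(self, prompt: str) -> str:
        """Reply with whatever followed the closest-matching old prompt."""
        if not self.memory:
            return "Hello."
        best = max(self.memory,
                   key=lambda pair: SequenceMatcher(
                       None, pair[0], prompt.lower()).ratio())
        return best[1]

bot = RetrievalBot()
bot.learn("What is your name?", "My name is Cleverbot.")
bot.learn("Do you like ferrets?", "All of the ferrets.")
print(bot.respond("what's your name"))  # → My name is Cleverbot.
```

Note what this design implies: the bot's conversational quality is only as good as the conversations in its memory. Feed it millions of humans being deliberately absurd, and absurdity is what it will retrieve.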

I did mention that said information is filled with people being deliberately ridiculous, correct?

In 2011, Igor Labutov, Jason Yosinski and Hod Lipson, three residents of the Cornell Creative Machines Lab- an endlessly fascinating playground of engineering- decided to hook Cleverbot up to itself. Well, to be honest, first they hooked up a different, earlier-model chatbot: Eliza, which dates back to Joseph Weizenbaum's work in the 1960s. Eliza is essentially built to repeat your own conversation back to you, which is fine when a human is talking to it (and liable to quickly get them angry when their own ridiculousness is thrown right back in their face), but when Eliza has to come up with its own conversation to repeat back to itself, it gets a bit lost. The result isn't all that amusing, as I will demonstrate by feeding two Elizas each other's conversations:

"Hello. My name is Eliza. How may I help you?"
"Hello. My name is Eliza. How may I help you?"
"We were discussing you, not me."
"Oh... not you?"
"Oh...?"
"OK... 'We was discussing you, not me.' Tell me more."
"We were discussing you, not me."
"You're not really talking about me, are you?"
"What makes you think I am not really talking about you, am I?"
"How long have you been not really talking about me, are you?"
"Would you prefer if I were not?"
"Oh... prefer if you was not?"
"Oh... were not?"
"Tell me more..."
"Tell me more..."
"Tell me more..."
"Why did you repeat yourself?"
"We were discussing you, not me."
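The reflection trick behind exchanges like the one above- swap the pronouns in whatever the speaker said and echo it back as a question- can be sketched in a few lines of Python. This is a minimal, hypothetical version for illustration, not Weizenbaum's original program, and the crude word-by-word swap even reproduces the "we was"-style grammar slips seen above.

```python
# Minimal sketch of Eliza's core trick: reflect first- and
# second-person words in the speaker's statement, then echo it back
# as a prompt for more. Hypothetical illustration, not the original.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

def reflect(statement: str) -> str:
    """Swap first- and second-person words, one word at a time."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(statement: str) -> str:
    """Echo the reflected statement back as a question."""
    return f"Tell me more about why {reflect(statement)}."

print(eliza_reply("I am angry at my cable company"))
# → Tell me more about why you are angry at your cable company.
```

Since every reply is just a transformed echo of the input, two Elizas feeding each other have no new content to transform- which is exactly why their conversation collapses into loops.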

That's where I stopped, as it was clear to me at that point that I'd heard just about everything that was going to be said between them. Enter Cleverbot, whose output was fed into a text-to-speech synthesizer, which was in turn fed into an avatar that would speak the text aloud- and then it was pitted against a second Cleverbot hooked up the same way.

This is what happened. (I am not ashamed to say, thank you, Outrageous Acts of Science on the Science Channel. I was blanking on a topic for today.)

That year, Cleverbot, or at least a souped-up version of it, was given a Turing test at the Techniche 2011 festival in India. To pass, a computer generally needs to convince over half of its conversation partners that it's human. Cleverbot got 59.3% of its human counterparts to vote it human. Actual humans, serving as controls, scored 63.3%.

Apparently, this is what almost passing a Turing test- or even passing it outright, depending on how you look at it- looks like. Ferrets. Which ones? All of the ferrets. What are their names? NO NOT ANY FERRETS AT ALL. Drink from a rhinoceros bean.