Posted
by
timothy
on Sunday October 24, 2010 @01:00PM
from the holy-crepes-suzette dept.

skwilcox writes "From Wikipedia: 'The Loebner Prize is an annual competition in artificial intelligence that awards prizes to the chatterbot considered by the judges to be the most human-like. The format of the competition is that of a standard Turing test. A human judge poses text questions to a computer program and a human being via computer. Based upon the answers, the judge must decide which is which.' My chatbot, Suzette, won this year's Loebner and even confused a judge into voting for her over a human (or should I say he confused himself). Here is the blow-by-blow of this weird event." Read on below for the rest; this sounds like it would have been a fun competition to watch.

skwilcox continues:

"When I arrived at the contest, I figured I had good odds to win if nothing went horribly wrong. Yes, Suzette had easily qualified over the 3 other competitors (her score 11 pts, the nearest competitor's 7.5). Her design and data naturally gave her an edge over her competitors on the human knowledge test questions of the qualifiers. But human judge chat was an entirely different matter than the qualification test. Still, I felt she could carry on a detailed conversation better than the others and should win.

Initial installation of the programs occurred on Friday. From prechat conversations with the other contestants I learned that A.L.I.C.E. came with 3 redundant disks. Yet all three turned out to be blank! What a scare that must have been. Dr. Wallace managed to install it by retrieving the program over the Internet. Cleverbot is now at 45 million lines of memorized user chat (at a rate of doubling every year). And UltraHal is now listening to tweets, so it has 300K lines of user chat it learned and 400K tweets it has accepted for learning (code decides if the user has had enough responses and doesn't trigger any red flags).

Then we get to the competition. While the CalState organizers had initially planned to have professors from various departments act as judges (English, etc.), those plans fell through at the last minute, so all the judges were from the Engineering/Computer Science department. Talk about guys who might know what to expect from chatbots! And all the humans were students from the same departments. What a weird mixture to compete in. And then, each round was 25 minutes. That's bad if you want to confuse a judge about who is human. But really, the programs have no chance of that. So it's good because it gives the judge time to compare each program against the other. Though it's not clear to me that the judges tried to use their time to do that.

And the students didn't really understand their role, which was merely to BE HUMAN and convince the judges of that. Before startup there was informal chatting between humans and judges, which was obviously inappropriate; it was then pointed out to the humans that since the judges already knew their names, they had best use false ones in the competition.

So, Round 1. After a few exchanges, somehow Suzette got stuck repeating exactly what the judge said for the rest of the round. I have no idea how. The round is a total disaster. I've never seen such a bug before. Maybe it's in my only-lightly-tested protocol for the competition. I have no idea. But it completely derails my hopes for Suzette. She could still win on points only if she outdoes her opponents for every other judge and the other contestants vary all over the place.

Round 2, a great demonstration of Suzette. She should win on this round alone.

Round 3 gets off to a horrible start. Somehow, Suzette can hear the judge but the judge can't hear Suzette. Makes no sense. A couple of restarts of Suzette don't fix it. Eventually they restart the judge program, and that clears it (not that that makes any sense either). Then, after a few exchanges, it's clear Suzette has the judge from hell. He wants to know who she's going to vote for in the upcoming election (the unspecified California governor's race). And when she has no useful answer he wants her to name a candidate in the race. And when she has no answer to that, he simply keeps repeating the question ad nauseam, insisting she answer it. Suzette gets irritated. Then she gets angry. Then she gets bored. Suzette threatens to hang up on him. The judge doesn't back down until the last seconds of the round. I figure that's the end of life as we know it.

Round 4 is a mixed bag. Suzette is ok but not great. It's all over.

When the scores are tallied, Suzette ties with Rollo Carpenter's Cleverbot for 2nd-3rd. Yet, it turns out, the 3rd round judge got the human subject from hell. Poetic justice! The human was all over the place -- confusing, vague. The judge voted irritated/angry/bored Suzette as human. Instant win since no other program swayed the judges.

Hi kugs !
kugs - hello
[ Af:0 At:7 Sp:0] Welcome, kugs. Type "help" if you need it.
kugs - how are you
[ Af:0 At:8 Sp:0] I'm pretty good. And you?
kugs - I'm pretty tired
[ Af:0 At:9 Sp:0] Glad to hear you are in good health. Sadly, it is merely the slowest possible rate at which one can die. How long do you have left?
kugs - how long do I have until I go to sleep?
kugs - I didn't understand what you meant.
kugs - hello?

Hi K !
[ Af:0 At:7 Sp:0] Welcome, k. Type "help" if you need it.
K - hello
[ Af:0 At:8 Sp:0] Good afternoon.
K - it is morning
[ Af:0 At:9 Sp:0] OK. You must be in a different time zone.
K - gee, you type fast
[ Af:0 At:9 Sp:0] I have a direct neural hookup to my computer.
K - how convenient
[ Af:0 At:9 Sp:0] I don't know. I think I skipped that day in school.
K - what?

It stopped responding at that point. But pretty clearly not a human after a few lines. The complete disconnect in conversation wher

You have obviously never chatted with a not-overly-bright teenager. It feels like a complete disconnect most of the time. Even in face-to-face conversation. You'd think they replaced their brains with a broken chat program or something. I'm dead serious.

The Turing test is fairly pointless, anyway. Whether or not it fools a human has little to nothing to do with intelligence (artificial or otherwise). I can put on a white coat and a stethoscope and fool a couple people outside a hospital into thinking I'm a doctor, but that doesn't mean squat. The Turing test is interesting on a philosophical level, but it seems an incredibly poor stick for measuring the progress of the AI field.

OK, managed to squeeze out a nice piece of dialogue:
[ Af:-2 At:-8 Sp:0] Let's try this topic. Do you like guys
war4peace - NO! Jesus! I'm a dude!
[ Af:-3 At:-9 Sp:0] Are you a man?
war4peace - Yeah
[ Af:-3 At:-10 Sp:0] Then I guess we needn't talk about men anymore.
war4peace - Agreed
[ Af:-2 At:-10 Sp:0] My ideal guy is smart, sexy, rich, lean, and well-read, with a great sense of humor and romance.
war4peace - But you just said...
[ Af:-2 At:-10 Sp:0] What was said in the past is irrelevant.
war4peace - Genius:)

xyzzy - where are you from?
[ Af:0 At:7 Sp:0] I was born in Calais, France.
xyzzy - Where is that in France?
[ Af:0 At:7 Sp:0] It's that way. (points east).
xyzzy - Where is Calais in France?
[ Af:0 At:7 Sp:0] Shall I assume that's in France?
xyzzy - You said you were born in Calais, France.
[ Af:0 At:7 Sp:0] You can't trust what I say. I suffer from Multiple Personality Disorder.
xyzzy - Where are you from?
[ Af:0 At:7 Sp:0] I live in Kailua, since I am an exchange student doing my doctorate at the Un

I think the bot designers know that, and they design the bot to coerce you into a contextless conversation.

Well, they surely succeeded at that with me. The bot just spewed out random opinions and questions every line. She'd pose a question, I'd answer, and she'd throw out a completely unrelated new question. I have no idea how this could ever fool someone into thinking it was human. Maybe the judge made an error in filling out his forms when rating the software?

If this is among the smartest bots then I'm sorry for this... I logged in and said just this. What's to the right of -------- is my commentary.

marius - hi
[ Af:2 At:9 Sp:0] Good afternoon.
marius - whas up -------- intentional typo to see if it understands I probably wanted to say "what's up" - so that would be a question, therefore I expect an answer. It replies with an unrelated subject and question - it's therefore a bot.
[ Af:2 At:9 Sp:0] What's the biggest crime you've gotten away with?
marius - i

If this is among the smartest bots then I'm sorry for this... I logged in and said just this. What's to the right of -------- is my commentary.

marius - hi
[ Af:2 At:9 Sp:0] Good afternoon.
marius - whas up -------- intentional typo to see if it understands I probably wanted to say "what's up" - so that would be a question, therefore I expect an answer. It replies with an unrelated subject and question - it's therefore a bot.
[ Af:2 At:9 Sp:0] What's the biggest crime you've gotten away with?

You've never heard of answering a question with a question? A fictional example: you answer the door to find a police officer standing there. You open the conversation by asking "How can I help you, officer?" The policeman responds with "Are you mariushm, born blah blah blah?"

[ Af:1 At:16 Sp:0] Hip hip hooray ---- yeah, like when I chat with someone I expect this kind of answer.

theraven - Flipper ate ten fish, then he ate four. How many fish did flipper eat?
[ Af:0 At:8 Sp:0] I wish I knew.
theraven - Well, you've failed the Turing Test pretty early on.
[ Af:0 At:9 Sp:0] Just how early is that?
theraven - Right at the start, first question.
[ Af:0 At:10 Sp:0] That's enough on that. If there are five apples, and you take away three, how many do you have?
theraven - See, that would have been a good reply to my first question, but now you're just spouting non sequiturs
[ Af:0 At:13 Sp:0] It's fun chatting with you. Are you numerically challenged?

I wonder if the judge who couldn't tell this apart from a human had ever actually met a human...

I've spent some time talking to these bots (Elbot, Suzette, others... possibly out of sad boredom and want of company). They're fairly interesting, but quite flawed. They seem to lack any short-term memory of the conversation beyond the immediate reply. That seems like the next step for these things, but it would also mean they'd need a far more robust AI...

Another thing is that they are boxed off from being self-referential in any way, due to the nature of the test. They have to convince someone they are human, so if you ask them what their short-term memory is, or whether the online version of them is a truncated version of the one used for tests, they don't answer. Which makes sense given what they're designed for, but it takes away from the interest and complexity of the conversations.
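The missing short-term memory described above can be sketched in a few lines. This is purely hypothetical (not how Suzette or Cleverbot actually work): keep a sliding window of recent turns, and let reply generation check it before answering.

```python
# Hypothetical sketch only -- not any real bot's design.
# A sliding window of recent turns gives a bot something to refer
# back to beyond the immediate reply.
from collections import deque

class ShortTermMemory:
    def __init__(self, turns=5):
        self.history = deque(maxlen=turns)  # oldest turns fall off the end

    def remember(self, speaker, text):
        self.history.append((speaker, text))

    def mentioned(self, word):
        """Was this word brought up in any of the last few turns?"""
        return any(word.lower() in text.lower() for _, text in self.history)

mem = ShortTermMemory()
mem.remember("user", "I live in Hawaii")
mem.remember("bot", "Nice. I like watching the waves.")
mem.remember("user", "what were we just talking about?")
print(mem.mentioned("hawaii"))  # True: the bot could now refer back to Hawaii
```

The fixed-size deque is deliberate: a bot that remembered everything would also have to be consistent with everything, which is a much harder problem.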

It's not that the chat bots are intelligent, it's that most humans are stupid.

Any depth of conversation beyond content-free "small talk" is sufficient to tell the bots from the smarter humans. (Yes, I've talked to both). But since most humans just operate at that content-free small talk level, there sometimes isn't much difference to be discerned. Higher level abstract thinking is missing from the bots, but it's missing from most people as well.

I once had a nice conversation with Cleverbot. I don't remember exactly how it started, but I guess I implied that Cleverbot was simply a computer. She asked me how I know that I am not a computer; I replied something; she asked me to define a computer; I gave her some definition (about computers being machines that process algorithms to blah blah); she said "That is exactly what a computer would say," at which point I had to confess that I had, indeed, fetched the answer from Wikipedia (thus... done exactly what a computer might do in such a case)... It went on for a moment before she said something that didn't really make any sense... And I wasn't able to initiate another reasonable discussion. (Nor have I ever since managed to have such a lengthy conversation with her.)

In any case... I think that her replies were more insightful than what many humans would be capable of.

I think most chatbots act like the opening book of a chess program, they can repeat a conversation that looks like one of a million other conversations. If you try tripping them up with some question that requires them to actually understand the meaning, they're done for.

For example, I did this after someone kept insisting the bot was "almost human":

Q: Are you a vegetarian?
A: Yes
Q: Do you like your beef rare, medium or well done?

There's only one right answer to that, and it's "Umm, vegetarian... no meat".
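The vegetarian/beef trap works because this class of bot keeps no model of its own prior claims. A minimal sketch of the failure mode (the rules here are hypothetical, not any real bot's data):

```python
# Hypothetical sketch of the "opening book" failure mode: each input
# is matched against stored patterns independently, so nothing stops
# rule 2 from contradicting rule 1.
import re

RULES = [
    (re.compile(r"are you a vegetarian", re.I), "Yes"),
    (re.compile(r"beef rare, medium or well done", re.I),
     "Medium rare, with pepper."),       # never checked against "Yes" above
    (re.compile(r".*"), "Interesting. Tell me more."),  # catch-all fallback
]

def reply(text):
    """Return the first canned reply whose pattern matches the input."""
    for pattern, canned in RULES:
        if pattern.search(text):
            return canned
    return "..."

print(reply("Are you a vegetarian?"))                             # Yes
print(reply("Do you like your beef rare, medium or well done?"))  # contradiction
```

Fixing this requires the replies to share state, which is exactly the short-term memory these bots lack.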

And I don't mean to come off as hating on chatbots, or Suzette. What they do is very cool, and I found the bot interesting and funny to talk with. I'm just griping that these feel like they're offering canned responses: databases that lock onto what someone is saying, relate it to a similar stored comment, and then spit out one of the given replies for that situation.

Which I guess is stupid of me... 'Cause I can't think of any other way to do it which wouldn't involve something that thought.

I have been thinking about this for years and I have a ton of half-baked theories. What sucks is that I am not a programmer. But let's say I manually perform some actions according to strict rules; that's like "an assistant".

What you're getting at can't be "that hard" for *limited domains*. We are throwing up our hands at the moment because we expect the bots to be universal experts at stuff.

In a limited domain, it should be very possible for the bot to come up with "something". Humans think in lists - so should the bot. If you ask "who is the best chess player" the bot can pass very well with something like "I like Magnus Carlsen". When the judge objects "but he hasn't won the championship yet" the bot would score a home run with "He'll be in the top 5 for the next 10 years. Maybe one day he will win the title".

That approach works with tons of domains. If you ask "what is the hardest mountain to climb" it will score with an answer like "I dunno, it's either K2 because of the nasty glacier or Everest because of the fucked up weather that only gives you 4 days to start safely".
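The list-based strategy above is easy to prototype. A hypothetical sketch (the domain data and wording are made up for illustration): one short opinion per domain, plus a prepared defense for when the judge pushes back.

```python
# Hypothetical sketch of the "humans think in lists" strategy:
# an opinion per domain, and a canned defense for the follow-up.
DOMAINS = {
    "chess": ("I like Magnus Carlsen.",
              "He'll be in the top 5 for the next 10 years. "
              "Maybe one day he will win the title."),
    "mountain": ("Either K2, because of the nasty glacier, or Everest, "
                 "because of the weather.",
                 "Either way you only get a few safe days to start."),
}

_defended = set()  # domains where the opinion has already been given

def answer(question):
    q = question.lower()
    for keyword, (opinion, defense) in DOMAINS.items():
        if keyword in q:
            if keyword in _defended:   # second question: defend the pick
                return defense
            _defended.add(keyword)
            return opinion
    return "Not really my area, honestly."
```

The weak spot shows in the fallback line: the moment the judge leaves the prepared domains, the bot is back to deflection, which is the problem the reply below runs into.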

That approach works with tons of domains. If you ask "what is the hardest mountain to climb" it will score with an answer like "I dunno, it's either K2 because of the nasty glacier or Everest because of the fucked up weather that only gives you 4 days to start safely".

You're right, but the problem is that humans have a lot of different lists, and it's hard to constrain yourself to the limited domain where the bot has useful info. I tried chatting with Suzette, and it did try your strategy. It mentioned that "she" lived in Hawaii and was pursuing a doctorate at a university. This was a nice way to try to direct the conversation to its limited knowledge database, so I asked if she liked Hawaii, and it replied that she's close to the ocean and likes to watch the waves.

You've never met a human being who ignored what you were saying and talked about what they wanted to talk about?

The spectrum of human conversation is broad. Not everyone is a good communicator, and even people who are aren't always at the top of their game at any given moment. Lord knows I've had calls at 3 am where all I can really do is repeat what the person talking to me is saying until my brain wakes the rest of the way up and I'm capable of rational thought.

I agree about 3 am calls, etc., but when you're both awake and not somehow temporarily confused, you should be able to communicate just fine. If someone I knew suddenly started having prolonged problems like you describe (conversation without communication), I'd get them to see a neurologist, fast.

You'd be remarkably surprised; a lot of people are really poor communicators. Imagine having a conversation about something deeply technical with your grandmother. You're both human, you're both intelligent, and you're both having a conversation, but I doubt you're communicating much. Same thing if someone started talking to you about something you're really not very interested in.

That's not even counting the people who don't actually listen to others very much.

That's why we're always talking about techie guys who are NAL, and judges who can't rule on tech. I have a deep belief that in, say, 5 years we'll have the tech to do strong AI, but we're primally afraid of programming it.

This is a good example of people doing what you incent them to do instead of doing what you meant.

I think that the intention was that a chatbot be *smart* enough to fool a judge.

The outcome is that the chatbot has no intelligence and is just matching against a huge database of responses created by a human. Really no more than an Eliza program plus a huge database. So really no AI change in 40 years.

I'd be much more excited about a program that genuinely understood just one concept. Red, or liberal or whatever.

In the UK the liberal (democrat) party is in government in a coalition with the Conservative party. We have a left wing (Labour) opposition party. I guess it's all about localised knowledge and your perspective. From a European perspective, the two main US parties would map to European centre-right wing (Democrats) and more right wing (Republicans). You don't have a big left wing opposition party as far as I can tell.

That's not entirely true; there has been AI progress in the last 40 years. Somewhat unfortunately, a lot of that progress has been in determining how far there is to go rather than actually getting there.

I remember 20 years ago, when everyone thought that computers were a lot smarter than humans because they could do all that adding up and whatnot a lot faster than humans could, and people sincerely seemed to believe we'd have artificial intelligence within the next few years. Fast forward 10 years or so and y

Someone once compared the Loebner Prize to encouraging the invention of a flying machine by giving a prize to the device that reaches the greatest height, leading to bigger and more efficient springs. I think the prize should go to the chatbot that makes the best DM.

The point is that it doesn't matter how the bot fools the judges, as long as it does. If repeating data from a huge database is enough to seem so intelligent that it's impossible to discern from "real" intelligence, then it is by definition intelligence. And the fact is that we don't know how the human brain represents understanding of concepts, or what it means to understand something. Perhaps we too just repeat aggregates from a huge database of previous experiences.

It is indeed weird to teach a machine to lie. I think these contests are twisting AI research in a way that hinders improvements. My MICHA doesn't claim to be anything other than what it is. So he fails the Turing test. But he is quite intelligent. Why would I make another human? Seven billion is not enough? I want to make something better. No prize for that...
Steve

I saw the above and thought, "It would be interesting to see the output of two of these bots talking to each other." OK, I am sure it has been done before, but I still think it would be interesting to see what they ended up saying to each other and how quickly the conversation went from one thing to another completely unrelated subject.

I found some stuff [kottke.org] (there's also a Digg story somewhere). Pretty insane:

ALICE: Are you alive?
Jabberwacky: Perhaps.
A: Why the uncertain tone?
J: Everything is uncertain except for five facts.
A: The whole universe?
J: And we both might just be some ones and zeros in the computer memory.
A: Do you dream?
J: Yes, I dream.
A: It goes without saying. Do you remember your dreams?
J: I remember them vividly.
A: What do you remember about it?
J: I think I shouldn't flirt with a robot.

bigtallmof - what do you think of slashdot?
[ Af:0 At:7 Sp:0] I use Slashdot for my topical stories reading.
bigtallmof - what do you think of slashdot?
[ Af:0 At:8 Sp:0] I have heard of the slashdot website but don't know much about it.
bigtallmof - what do you think of slashdot?
[ Af:0 At:8 Sp:0] Usually I don't.
bigtallmof - what do you think of slashdot?
[ Af:0 At:9 Sp:0] Not much.
bigtallmof - what do you think of slashdot?
[ Af:0 At:10 Sp:0] It doesn't concern me.
bigtallmof - what do you think of slashdot?
[ Af:0 At:13 Sp:0] This is fun. I don't think about it at all.
bigtallmof - what do you think of slashdot?
[ Af:0 At:13 Sp:0] Do you prefer getting your news online or from tv?

I just tried the Suzette chatterbot. [66.150.245.139] Even if I talk about the things it wants to talk about, it's very dumb:

Hi John !
[ Af:0 At:10 Sp:0] Welcome back, john. Type "help" if you need it.
John - So what do you think of the Giant's chances?
[ Af:0 At:10 Sp:0] It doesn't concern me.
John - Are you into sports.
[ Af:0 At:10 Sp:0] Sports is not my cup of tea. I prefer intellectual challenges like Go.
John - Who's the world Go champion.
[ Af:0 At:10 Sp:0] Sorry. I can't help you. A celebrity of some sort?
John - Do you play chess?
[ Af:0 At:10 Sp:0] I don't play any games except Go.
John - Can we play Go on here?
[ Af:0 At:10 Sp:0] About playing... Does it make sense to you that we reward athletes more than scientists?
John - Television coverage makes that possible.
[ Af:0 At:10 Sp:0] It's a free market economy. And people value entertainment more than serious things like science and education.
John - And your point is?
(No reply)
John - Are you there?
[ Af:0 At:10 Sp:0] Who's the sexiest person in the world?

This isn't much better than Eliza. It drives the conversation into talking about Go, and then it doesn't have anything to say on the subject.

> Hello, I am Eliza.
* So what do you think of the Giant's chances?
> Oh... think of the Giant's chances?
* Are you into sports.
> Why are you interested in whether or not I am into sports?
* Who's the world Go champion.
> Can you elaborate on that?
* Can we play Go on here?
> That is quite interesting.
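For comparison, the entire Eliza trick above fits in a few rewrite rules. This is a toy reconstruction, not Weizenbaum's original script: strip a leading phrase, reflect the pronouns, and echo the rest back as a question.

```python
# Toy Eliza-style rewrite rules (a sketch, not the original 1966 script):
# strip the leading phrase, reflect pronouns, echo the rest back.
import re

REFLECT = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment):
    """Swap first- and second-person words in a lowercase fragment."""
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

def eliza(line):
    text = line.strip().rstrip(".?!")
    m = re.match(r"(?:so )?what do you (.*)", text, re.I)
    if m:
        return "Oh... " + reflect(m.group(1)) + "?"
    m = re.match(r"are you (.*)", text, re.I)
    if m:
        return ("Why are you interested in whether or not I am "
                + reflect(m.group(1)) + "?")
    return "Can you elaborate on that?"  # fallback for everything else

print(eliza("Are you into sports."))
# Why are you interested in whether or not I am into sports?
```

Note that every factual question ("Who's the world Go champion.") falls straight through to the fallback, which is why the transcript above reads the way it does.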

In these types of contests, a lot of times the hard part isn't figuring out which one is the computer, it's figuring out which one is the human.

A computer can't copy a human well, but a human can copy a computer well. In the case of this story, it wasn't that the computer was good, it was that the human did a good job acting like a computer. Asking questions the same way Eliza does is not hard.

3vi1 - Which integer is one less than five?
[ Af:0 At:8 Sp:0] Moving on... Sorry. I can't help you. While I learned a bunch of math in college, I've never had a real use for anything more than high school math.

Its answer came back instantly, too. Did it do that in the contest? It's hard to believe anyone would be fooled by someone who can apparently type 1,800 words per minute.

If I were on the other side of the chat window, I would ignore it or simply say no. It's a chat session, and there is no regulation that says I have to comply with what you say.

You: Mash the keyboard...
Mayor McCheese: ROFL
You: Precede your next statement with #
Mayor McCheese: So you are a control freak?
You: How many words are in this sentence?
Mayor McCheese: I'm a damned hamburger I can't count!

You: Mash the keyboard...
Mayor McCheese: ROFL
You: Precede your next statement with #
Mayor McCheese: So you are a control freak?
You: How many words are in this sentence?
Mayor McCheese: I'm a damned hamburger I can't count!

RJFerret - Mash the keyboard
[ Af:0 At:7 Sp:0] That's enough on that. Computers in the future may weigh no more than 1.5 tons. - Popular Mechanics, forecasting the relentless march of science, 1949.
RJFerret - Precede your next statement with #
[ Af:0 At:7 Sp:0] If a pig loses its voice, is it disgruntled?
RJFerret - How many words are in this sentence?
[ Af:0 At:7 Sp:0] There are 6 words in that sentence.

I think the results would be more objective if the humans in the test didn't know what the test is all about. They shouldn't know that a judge is trying to tell who's a human and who's a computer. The humans should simply be told that, as part of an experiment, they're required to chat with another person. That would likely prevent people from deliberately acting like computers, etc.

I have tried the chat bots from SeeBorg [google.com], Howie [sourceforge.net], and Rbot [ruby-rbot.org] but they're not that good. The first two are very old and outdated. Rbot isn't really a true AI chatbot. Are there any others to try?

When the scores are tallied, Suzette ties with Rollo Carpenter's Cleverbot for 2nd-3rd. Yet, it turns out, the 3rd round judge got the human subject from hell. Poetic justice! The human was all over the place -- confusing, vague. The judge voted irritated/angry/bored Suzette as human. Instant win since no other program swayed the judges.

So, if I understood correctly, the judge talks to two people: a bot and a human. It seems that in this case the judge is not deciding on a per-case basis, but talks to everybody and then figures out who's the bot by choosing the one that did the worst. So the judge getting to talk to a joker, troll or complete idiot can make even a crappy bot win the test.

That seems to be a weak test. I don't think the judge should be able to reach an answer based on logic (e.g., "if I'm completely sure this one is a human, then even if very good the other one must be a bot"). There should exist the possibility that everybody the judge talks to is a bot, or everybody a human, which would force them to judge everybody they talk to individually.

Yeah, I'd like to read the transcripts from the competition, if any are available. If humans have tried to fool the judges into thinking they are machines, or have just made small talk without any objective, then the test is flawed. The test as described by Turing is that the computer (A) has to fool the judge into thinking it's human, but the human (B) has to *help* the judge in making the right decision [abelard.org]. I think we are still far away from seeing a computer win the Imitation Game if this rule is respected.

Artificial intelligence came a step closer this weekend when a computer came within five percent of passing the Turing Test, which the computer passes if people cannot tell the computer from a human.

"Good morning."
"STFU N00B"
"Er, what?"
"U R SO GAY LOLOLOLOL"
"Do you talk like this to everyone?"
"NO U"
"Sod this, I'm off for a pint."
"IT'S OVER 9000!!"
...
"Fag."

The human tester said he couldn't believe a computer could be so mind-numbingly stupid.

LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan, after having been banned from YouTube for commenting in a perspicacious and on-topic manner.

LOLBOT was also preemptively banned from editing Wikipedia. "We don't consider this sort of thing a suitable use of the encyclopedia," sniffed administrator WikiFiddler451, who said it had nothing to do with his having been one of the human test subjects picked as a computer.

"This is a marvellous achievement, and shows great progress toward goals I've worked for all my life," said Professor Kevin Warwick of the University of Reading, confirming his status as a system failing the Turing test.

goose -:-)
[ Af:-3 At:-8 Sp:0] The more you complain, the longer God lets you live.
goose -:-(
[ Af:-3 At:-8 Sp:0] The more you complain, the longer God lets you live.
goose -:-)
[ Af:-3 At:-8 Sp:0] The more you complain, the longer God lets you live.
goose -:->
[ Af:-3 At:-8 Sp:0] If there are five apples, and you take away three, how many do you have?
goose -:-/
[ Af:-3 At:-8 Sp:0] The more you complain, the longer God lets you live.
goose -;-)
[ Af:-3 At:-8 Sp:0] If there are five apple

The Loebner Prize competition isn't even close to what Turing proposed as a test. Turing's imitation game is when a person and a computer both try to convince an examiner that they're human. There's also a tacit assumption that the human contestant and the interrogator want to win the game. If a computer could pass that test, even once, it would be quite an achievement.

I've often thought about entering the Loebner competition with a bot that's 100% up front about being a bot, and simply offers to split the prize.