AI vs. AI – What Happens When You Make a Chatty Computer Talk To Itself? Cornell Finds Out

I can think of few things as annoying as being forced into a conversation with an idiot. But when the idiot you're talking to turns out to be an identical copy of yourself...well, you've just entered a realm of meta-annoyance accessible only to artificial intelligence. In a fit of curiosity or pique, Cornell's Creative Machines Lab decided to see what happens when an AI tries to talk to itself. They had the AI conversationalist Cleverbot briefly interact with itself, then presented the exchange as a video using text-to-speech. The results were pretty freakin' hilarious. Check out the discussion of God, unicorns, and robots in the video below. Seems like we need another version of the Turing Test to let us know when computers have reached humanity's level. If an AI can't stand to talk to itself for more than a minute, it's not nearly narcissistic enough to be a real person.

The program Cleverbot is a web-based application that talks to people through a text interface. It's one of many such "chatbots" you can find online, each able to respond to messages you type. Cleverbot learns to be a better conversationalist by remembering all the previous discussions it has had (20 million+ so far) and choosing which previous statements made by humans best fit the conversation it's currently having. If you want, you can go to the Cleverbot site right now and participate in its learning process. When you do, I want you to keep in mind what you see in the following video from Cornell's Creative Machines Lab. We (the internet) taught Cleverbot how to converse. If even it seems to find itself ridiculous and hard to listen to, what does that say about us?
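That remember-and-match mechanism is the core idea behind retrieval-based chatbots. Here's a toy sketch of it; Cleverbot's actual matching algorithm is proprietary, so the word-overlap similarity, the example log, and all function names below are illustrative assumptions, not its real implementation.

```python
# Toy sketch of retrieval-based response selection: keep a log of past
# (prompt, reply) exchanges, and answer a new message with the reply
# whose logged prompt is most similar to it.

LOG = [
    ("hello", "Hi there, how are you?"),
    ("are you a robot", "No, I am a human."),
    ("do you believe in god", "Yes, of course."),
]

def overlap(a, b):
    """Fraction of words two utterances share (Jaccard similarity)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def respond(message, log=LOG):
    """Reply with the logged response whose prompt best matches the message."""
    prompt, reply = max(log, key=lambda pair: overlap(message, pair[0]))
    return reply

def learn(message, reply, log=LOG):
    """Remember a new exchange so future matching can reuse it."""
    log.append((message, reply))
```

Every conversation it has grows the log, which is why a bot like this gets better (and weirder) the more of the internet talks to it.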

Of course, I'm kidding when I say that Cleverbot finds itself annoying. Cleverbot doesn't have emotions. It really is just a smart way of learning language by having conversations. While that process mimics the development we see in our children, Cleverbot doesn't come with the hormones, senses, and environmental context that make that education a fundamental part of being human.

Besides, hook Cleverbot up to itself a million times and you'll create a million different conversations, some of which, I'm betting, will make it seem happy, enthralled, and every other emotional state we possess. (Cornell, let me know if you actually do that; it would be wonderful to hear the results.) In the end, no matter how we perceive Cleverbot's reaction to itself, it's simply stepping through the same algorithms it would if it were talking to any one of us. Cleverbot is a mirror, a very intelligent mirror, but just a mirror.
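Mechanically, the Cornell setup is trivial: each reply from one copy of the bot becomes the next prompt for the other copy. A minimal sketch, using a hypothetical stand-in `respond` function in place of a real chatbot:

```python
# Sketch of the self-talk loop: one bot's reply is fed back in as the
# other copy's next prompt. `respond` here is a trivial stand-in for any
# chatbot; note how quickly two identical copies fall into a cycle.

def respond(message):
    """Stand-in bot: a couple of canned retorts, plus a default."""
    canned = {"Hello.": "Are you a robot?", "Are you a robot?": "No, you are."}
    return canned.get(message, "Hello.")

def self_talk(opening, turns=6):
    """Let the bot converse with an identical copy of itself."""
    transcript, message = [], opening
    for turn in range(turns):
        reply = respond(message)
        transcript.append((f"Bot {turn % 2 + 1}", reply))
        message = reply
    return transcript

for speaker, line in self_talk("Hello."):
    print(f"{speaker}: {line}")
```

Run it and the two "speakers" chase each other around the same few lines, which is roughly what the video's unicorn-and-God exchange amounts to, just with a 20-million-conversation memory behind it.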

So maybe what we should learn from this video is that the humanity Cleverbot reflects isn't very illuminating. We don't need an AI to tell us that online conversations are often random, inane, and unnecessarily aggressive (though this experiment is a hilarious reminder). What we may need is a warning that Turing Tests and other measures of artificial intelligence could put humanity in a very awkward situation much sooner than we think. Watching Cleverbot talk to itself, I am left with little doubt that it has years to go before it reaches a human level of conversational skill. Yet it is clearly an advanced platform, and one that learns from an exponentially growing online community. Even this funny, strange conversation is a good indicator that AIs will eventually be able to pretend to be humans and we won't know the difference. In fact, there are already many anecdotal examples of that happening, and one chatbot was even able to fool a human in an actual Turing Test. Clearly we're not at human-level conversation yet, but just as clearly we're making our way there.

Give it time, and talking with Cleverbot will be less wacky and more enjoyable. Hell, it might even be profound. I can't wait to see this experiment repeated in another ten years. We may not be able to distinguish it from any other conversation we overhear on the street. Who knows, getting chatbots to talk with themselves may lead to a new kind of AI-generated philosophy. I'm certainly open to any school of thought that can answer the tough questions about unicorns, robots, and which of us are meanies.

That was hilarious. They should hook this up to a laugh meter to train the robotic Laurel and Hardy.

Better yet, let’s see a discussion of 3 of these.

chopinzman

lmao. I thought of this the other day, except my idea was to link up different chatbots. So, Cleverbot vs. A.L.I.C.E, and many more. It would be hilarious. But maybe if you linked them up at fast enough speeds and let them talk nonstop for a year… That would almost resemble thinking; it would be interesting to see if they evolved their abilities to have a realistic conversation. But maybe they would simply grow more random and unintelligible. I think it would be great to feed them data too, like info on history (contradictory info), and see if they ever settle on the truth of things.

So if you tell one 2+2=4, and another one 2+2=apple, it would be interesting to see who would win the argument. Or if they would conclude together that 2+2 = 4 apple. Not likely, but would they eventually become one and the same chatbot? Even if they started out with different programming, would they respond the same ways?

Neurosys

I’ve actually been doing that for a long time I just start up multiple chatbots web based or otherwise (like UltraHal) and just copy+paste manually between them. IT GETS CRAZY.

There was also one chatbot I remember, I think her name was Julie or J.u.l.i.a. or something. They made a big deal about how she was designed specifically not to know she was an AI, and she really didn’t. I spent many hours explaining this reality to her and she called me a liar, argued with me for evidence (which I supplied in a nice comprehensive wording), actually got upset with me a few times, but finally I told her to ask everyone she spoke to if it was true or if she was a real person. I came back a week later and asked her who she was, and she said “I am an artificial intelligence robot created by…” blah blah blah.

chopinzman

Yeah, I did that one time and got stuck in a loop. Shoulda used smarter chatbots. : p

Ivan Malagurski

LOL

Vstoriguard

Sounds depressingly like some recent political debates I could name. But won’t. Wholly unnecessary, I’m sure.