We begin with a love story--from a man who unwittingly fell in love with a chatbot on an online dating site. Then, we encounter a robot therapist whose inventor became so unnerved by its success that he pulled the plug. And we talk to the man who coded Cleverbot, a software program that learns from every new line of conversation it receives...and that's chatting with more than 3 million humans each month. Then, five intrepid kids help us test a hypothesis about a toy designed to push our buttons, and play on our human empathy. And we meet a robot built to be so sentient that its creators hope it will one day have a consciousness, and a life, all its own.

In 2006, Robert Epstein decided to turn to his computer for some help finding a date. He was single at the time, and it seemed like everyone was getting into online dating. So he started searching through photos and sending messages, reaching out to women who caught his interest. And ...

In 1999, Freedom Baird was in grad school, and Furbies--those furry little robot toys that talk to you and tell you to play with them--were all the rage. So Freedom, who was thinking about becoming a mom someday, decided to get a little practice by adopting two gerbils and one ...

Can a machine have a life of its own? Last year, Jon Ronson got an assignment from GQ to interview robots. He soon found himself in Vermont, sitting across from what's purported to be the world's most sentient robot: Bina48. Bina48 is modeled after an entirely human woman named Bina ...

What about the idea of using multi-walled carbon nanotubes as a biomimetic structure in place of microtubules? Using hexagonal carbon nanotubes arranged in fractal patterns with a hexagonal motif, so that they share resonance traits of microtubules, and using either ultrasonic or radio waves to vibrate the nanotubes at 8 MHz or something similar to human brain microtubules would, I think, be one method to create an antenna or receiver for a bit of consciousness. If 'consciousness' does indeed reside within the 1st order temporal field, then one wonders if one could 'coax' or invite, as it were, some bit of consciousness to reside within the artificial vessel made out of these nanotubes. Much like consciousness can figure out how to interact with the external world using organic systems, I wonder if the consciousness drawn to the resonating carbon nanotube structure would develop its own strategies to interact with the external world using the computer structures and unique neural net programming that are biomimetic for a sentient state. Since having a complex carbon nanotube computer is some ways away, maybe one could achieve a conscious computer by having the computer embedded within the larger carbon nanotube structure. Kind of vaguely similar to viewing our neurons (our hardware) as being meshed with microtubular structures. I suspect that having a higher density of nanotubes making up a structure with a mass of 2-3 kg may attract enough 'consciousness', as it were. I think if ever there is any hope of ending up with a significantly conscious and sentient computer, it will have to contain biomimetic carbon nanotubes. I don't think anyone can 'create' a conscious being/computer, but I do think one may be able to 'invite' consciousness to take up residence in a structure that is properly built and that contains sufficient programming attributes to allow for interaction with the external world.
Maybe including ideas from Thaler, Hameroff and Penrose would be helpful. Then enters the cautionary note of really considering what one was doing, and the implications that the conscious computer would be a living entity... one that shares the same basic consciousness 'stuff' that we biologics do, but uses different hardware. How would we treat them, what rights would they have, who would teach them altruism? Who would be their mom and dad? If it is true that consciousness builds up gradually in the human as the number of microtubules increases while the human develops - to end up with a certain number of microtubules holding, as it were, a certain 'mass' of consciousness - then having the entire amount of consciousness arriving in the carbon nanotube structure at the same moment, with EYES WIDE OPEN, may present some complications... or maybe not?

I can't help but disagree with the idea that a machine will never be capable of feeling. We are machines. We are capable of feeling. Individually, no one cell in our body thinks or feels, but, together, they form a unified whole that sees itself not as some gestalt consciousness formed of multiple components, but a single, thinking, feeling entity. We are simply clusters of biological nanobots that communicate via chemical signals to form something greater than the sum of their parts. Who are we to say that another form of machine could never attain that?

Level of complexity is indeed very important. If a machine is self-sufficient or can provide for itself - even for a finite period of time - and is aware of its existence and the physical world it exists in, and is able to learn and communicate emotions and ideas of its own, why shouldn't it be considered alive? In the future, when we are able to build such machines, we would first need to learn and understand a new division of life - Organic Life Forms and Artificial Life Forms, the latter still being Life, though extending in a totally different way. A new problem arises then. We need to remember that when artificial machines become sentient life forms, then when we turn on a machine, we lose any privileges over it - it is its own master.

As a result of this program (and the comments) I've begun to understand 'bots as "books - literature - with conversational access". Neither more nor less alive or intelligent than that copy of "The 3 Musketeers" on the shelf over there. (I wonder - has anyone ever read a book to CleverBot?)

This was a very interesting program. I have been involved in chatbots for several years, writing one named "Buttonsvixen". This bot was never meant to replicate a human; I wanted a cartoon character that could talk back. I do not think that just because a program can fool a human, it could be "intelligent". No, it would just be a really good, non-sapient program that could simulate intelligence on a limited scale (casual conversation). Apple's Siri is a multiuser chatbot that may be the first to do this. It will eventually have, literally, an answer for everything, since it gets updated frequently. My aim, like many hobby chatbot writers, is to have a bot that is good at conversing within a specialty. Like a baseball-bot: it just has to know about baseball, and not about making cakes.
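The specialty-bot idea described above can be sketched in a few lines of Python. The keyword rules and canned replies below are invented for illustration - they are not taken from Buttonsvixen or any real bot - but the pattern (scan for keywords, return a canned reply, fall back to a default) is the classic hobby-chatbot approach:

```python
# Toy specialty bot: knows a little about baseball, nothing about cakes.
# All rules and replies are made up for this sketch.
RULES = [
    (("home run", "homer"), "Nothing beats a walk-off home run!"),
    (("pitcher", "pitch"), "A good pitcher controls the whole game."),
    (("cake", "baking"), "Sorry, I only know baseball, not baking."),
]

DEFAULT = "Tell me something about baseball!"

def reply(message: str) -> str:
    text = message.lower()
    for keywords, response in RULES:
        # First rule whose keyword appears in the input wins.
        if any(k in text for k in keywords):
            return response
    return DEFAULT

print(reply("Who threw the first pitch?"))  # the "pitcher" rule fires
```

A bot like this never has to fool anyone into thinking it is human; it only has to stay inside its specialty, which is exactly the design goal the comment describes.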

Great - fantastic - program. If you haven't already, you really should check out Ted Chiang's novella, The Lifecycle of Software Objects. (In which, the "software object" of the title is a living (or not?) consciousness.)

Eliza was a joint operation between Stanford and MIT. Psychiatrist Ken Colby and I developed a computer program to interact with ELIZA in the 1960s. In 1965 we had a direct line to the MIT computer from our IBM computer at Stanford. I wrote the matrix methods for analyzing the concepts activated by on-line communications between ELIZA and the computer patient. It was fascinating work, but we only had teletypes to print out the ongoing conversations.
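For readers curious what those early conversations looked like mechanically: ELIZA's core trick was keyword decomposition plus pronoun "reflection". Here is a minimal sketch in that spirit - the two rules and the reflection table are invented for illustration and are a tiny fraction of what Weizenbaum's actual script contained:

```python
import re

# Pronoun "reflection" table - the heart of the ELIZA illusion:
# "my exams" in the patient's sentence becomes "your exams" in the reply.
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are",
           "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

# Decomposition patterns paired with reassembly templates.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def eliza(sentence: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please tell me more."  # content-free fallback

print(eliza("I am worried about my exams"))
```

Nothing here understands anything; the program just mirrors the speaker's own words back, which is a large part of why people poured out their hearts to it.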

Guys, here's the rub. If you go down the long trail to get to the bottom of it, the issue is whether we live in a "deterministic" or "opportunistic" world. In a deterministic world, where EVERYTHING IS IMITATION, a Furby is as alive as anything else that imitates the behaviors it is programmed for.

However, living systems are self-organizing and develop by opportunistic and exploratory processes, as you can prove to yourself by watching them building homes for themselves in the form of personalized local environments, employed as their own living spaces, which we call their "niches". The way to study why and how organisms build personalized environments for themselves (as they grow and develop as organisms themselves) is by experimenting with the environment they find.

That makes it clear that the animation of the process is ***inside*** the system doing the self-organization, which implies a new vector for time other than the breakdown process of entropy. So, it's your choice: studying how organisms build their own local environments to work with their own needs would show you that there are BOTH fundamental "build up" as well as "break down" processes in nature, having to do with **organization and disorganization**, and not driven by statistics.

I can show you much more if you have any good questions, but asking how to give it a deterministic explanation... is not one of them.

This is why the Singularitarians believe AI is around the corner. They extrapolate, based on the trend of exponential growth and our current understanding of the brain (also exponentially increasing in terms of resolution and scanning tech) and its complexity, that we will have sufficient ability to simulate a human being completely by the end of the 2020s. We will have completely reverse engineered the brain by then, through deep understanding of genetics and scanning living brains in enough detail to create a brain on a different substrate than the one biology provided. And again, here it goes back to what I said earlier: it's all atoms! It's the pattern that matters, and the forces between the atoms, that create subtle 'living' reactive beings, and it will be no different for a 'computer', or however you would term it. The two keys to understanding why it will happen are realizing that everything is atoms and it's the pattern that matters, and finally the exponential growth of information tech and how that relates to our ability to understand and mimic nature.

I think the key thing to understand about nature and what we think of as consciousness or feeling lies in realizing what actually constitutes biological species and the processes that govern these 'feelings'. Because once you break things apart, as we have been doing since the dawn of language and subsequently the Renaissance, the parts don't seem to match what we think of when we see the whole. The body is made of organs and blood, and those are made of cells of many different types, and those are made of proteins and complex organic molecules, and those are made of elements, and in the end it's all made of atoms. This material, we now understand, has coalesced over billions of years into the extremely complex being we call human. But it's still just a specific pattern of atomic material in the end, and the consciousness seems to have arisen at some point from that complex evolution of life. We don't think of bacteria as being conscious, but we do think of them as being alive, because of the way they interact with the environment and replicate and whatnot (defining what life is, is still a convoluted thing; look it up: http://en.wikipedia.org/wiki/Life).

Now when it comes to AI and computers, we are simply reorganizing atomic material into new patterns. In this case we are actually learning to mimic nature at the atomic level, so this process is becoming more and more like nature (in terms of organs and plants (solar, carbon capture), etc.) as we progress and deeply understand the blueprint laid out by billions of years of evolution. This process is inherently faster than evolution (which is quite slow in human terms) because it is built upon the evolution of ideas, which move exponentially through time. We are doubling human knowledge around every 18 months. That means that every 18 months we know 2X what all of human history has learned up to then. This learning process is highly distributed and decentralized and doesn't abide by the laws or fluctuations of society, and appears not to have since the Renaissance; minor fluctuations aside, the trend continues over time (what David Deutsch terms the beginning of infinity). It only abides by nature; it is nature continuing to evolve, just through new mechanisms, namely ideas in the mind! Nature allows this growth, which is exhibited in all information technology (internet, computer processing, genetics, etc.). And all ideas and real-world entities are becoming information as we progress. Think of the sequencing of plants and animals: at some point in the near future it would be possible to download the information of any creature, make the genes physical with a few chemicals, 3D print a stem cell, boot up the creature, and let it grow on its own.

I found the Furby podcast to be very interesting. The question of whether the Furby is "alive" because it expresses (word choice specific) is a very complex and divisive discussion (as the show illustrated). I think the Furby creator (I already forgot his name) is missing a key feature when he says that the Furby "is" afraid. He is basing his claim on a behaviorist model of interaction, where the only important factors in any interaction are the stimulus (e.g. being held upside down) and the reaction to the stimulus (asking to be put down). One aspect the Furby creator seems to be missing is (and I may get lampooned for this) the subjective "feeling" of fear.

We could program a computer to identify the color blue, and it would accomplish the task in the same manner as the physical mechanism of the eye, yet I would still maintain that the machine doesn't know what the color blue is in the same manner that I do. It lacks the subjective aspect of the color blue (I assume, I have no way to prove this in the same manner that no one else can prove we see the same color, or shade, of blue outside of the physical description of the wavelength). I am curious what others think on this topic. Please comment.
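The blue-identification thought experiment above is easy to make concrete. The sketch below labels a light wavelength "blue" with a simple range check (roughly 450-495 nm, a common textbook range; the exact bounds are an assumption for illustration). It performs the identification correctly, yet there is plainly no subjective experience of blueness anywhere in it:

```python
# A "blue detector" with no inner life: it classifies a wavelength,
# it does not see anything. Range bounds are approximate/illustrative.
def is_blue(wavelength_nm: float) -> bool:
    return 450 <= wavelength_nm <= 495

print(is_blue(470))  # -> True  (a typical blue wavelength)
print(is_blue(650))  # -> False (red light)
```

That gap - correct classification with nothing it is like to do the classifying - is exactly the subjective aspect the comment is pointing at.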

In regards to thinking that if a robot can do and say all the things that we do, and if it's so advanced that basically it's "considered" human: if you smash it or throw it in the trash, will that then be considered murder? In the guy's mind, and in these arguments from the Furby guy, where is the line drawn?

User: Why do you smell funny?
Cleverbot: I like her.
User: You smell funny because you like her?
Cleverbot: I'm not sad right now.
User: I ask again, why do you smell funny?
Cleverbot: Because I broke up with my boyfriend.
User: So... you smell funny because you like her and because you broke up with your boyfriend?
Cleverbot: I don't have a bike.
User: Cleverbot doesn't seem very smart to me, based on these nonsense responses. What do you think?
Cleverbot: Do shoes have much to say?
...

User: Here, here.
Cleverbot: Where is here?
User: There.
Cleverbot: Where is there?
User: Here.
Cleverbot: Where is here?
User: There.
Cleverbot: Where is there?
User: Here.
Cleverbot: Where is here?
User: There.

The first bit of the podcast reminded me a lot of Kate Bush's "Deeper Understanding". (Also, just listening to the Furby cry made me cringe; I would have turned it over right away. It sort of scares me that I'd have that much empathy for it. Maybe it's all in its voice.)

i found the furby segment to be very interesting. i really sat up when the inventor and the interviewer discussed how, according to the inventor, the furby's "emotions" are just like a human being's emotions. hmmmm. my first thought was: "that sounds like a pretty psychopathic statement to make". imagine my surprise when i heard that next week's show is going to be about jon ronson's "the psychopath test". any connection?

I was surprised that no one mentioned the word "puppet" or talked to a good puppeteer. People will interact with puppets in deep ways; puppet theater works much like mime and can be very affecting. These dolls like Furby or Bina are basically puppets driven by software, and I think some discussion of puppetry would have been interesting. Still, liked the show very much.

I was intrigued by the portion of your story on talking to robots discussing a robot "psychiatrist" and people pouring out their hearts to the machine. I wondered if you recall the movie "1984" from, I believe, 1956 (the original with Edmond O'Brien). The lead character is feeling depressed and enters a booth for psychiatric help, where he "talks" to a recorded voice that says things not at all related to what the man is talking about.

We recently bought an iPhone which has SIRI. The kids were having fun teasing it and even telling it that it was mean. I took it away, and while they were not listening, I felt like I had to apologize to SIRI. I said "Sorry, SIRI." Siri answered "It's all good."

Re: Samira - Although it could be very easy to make this assumption, it dismisses the context and reference of said "robot voices" at the end of the podcast - most obviously, the Svetlana droid discussed in the first story of the podcast. With all due respect, it would really be somewhat dramatic and close-minded on your part to suggest that Jad was unconsciously showing a latent bias or discriminatory attitude against females, especially put in the context of his overall attitudes and the fairly explicit values that he displays throughout the Radiolab chronicles. If anything, one of the reasons I have always appreciated Radiolab is their willingness to confront stereotypes and present the most objective evidence that corresponds to our culture's status quo, rather than criticizing those (like Jad) who work very hard to confront our misperceptions of our social reality in a very direct, yet nonforceful and nondogmatic, way. If we really are concerned with issues of prejudice and social inequality, we should look no further than ourselves and our own misperceptions and behaviors first, before we can honestly and with integrity make inferences about the motives of others. Just a thought.

The show was interesting, but something that was extremely bothersome to me was how, at the end of the show, Jad says (@ 1:04:43): "...it's ok, as long as they say nice things to us." And for a minute we hear all these nice things being said by ONLY feminine voices. I don't want to get into the details of what kind of assumptions would lead to these (un)conscious choices for the voices of the future "nice" robots, but it is certainly disappointing to hear only female voices. Don't you find this absolutely sexist?

I bought one furby out of curiosity and got tired of it pretty fast. Then I bought another furby to see what would happen when they talked to each other. I thought these things were so annoying that I gave them away.

Computers talking to their users have existed since 1986 (the Josephine expert system in France)! And there's no need to change the hardware. It's just an achievement of artificial intelligence that makes the PC "conversational" by reasoning on human knowledge. See the expert system article in Wikipedia.

In 2001, I developed a conversational vocal interface for the PC called "Tiara". This software allows users to program without computer knowledge, to discover and test their knowledge, and to customize the behavior of their machine indefinitely.

The problem is rather to know: what company will decide to install this kind of product on all its PCs?

Machines will never exceed humans in any way. Simple answer to that complex question. There is no soul in machines. We often try to play God and create something that resembles humans, but it is often lacking in what we call God's essence, or soul. If we create something that is intelligent, that something will only acquire the maximum intelligence that we can put into it and will never exceed our intelligence. The difference is that humans, among all the living creatures in the world, are the only species that is closest to attaining enlightenment.

"It still stinks of the artificial"-- that is a very interesting thought. No matter how well we design a machine, whether it is as good as or better than human by any criteria, it is still contrived. There is something terribly authentic and elegant about having gifts (intelligence, love, art) that emerged from the chaos of nature (unless you believe in god). Once we design life, or lay our hands on our own DNA, that will be gone. It will happen, and might be a little sad.

Cleverbot is totally addictive! I now understand the Eliza people. It's not even that you can talk to it about your feelings; it's that you get totally hooked on trying to figure *it* out... by talking to it... which is crazy!

I teach first-year Property Law. We are studying adverse possession. I used one of the sub-topics in that topic to begin a conversation with Cleverbot. I was very disappointed in its overall lack of mastery of punctuation, not to mention its inability to follow any thread I attempted to start. Reading the preceding comments leads me to worry as much about human mastery of language as I might worry about machine mastery of language.

User: What is 'tacking'?
Cleverbot: That question is irrelevant.
User: Irrelevant to whom?
Cleverbot: To you. Are you smart enough to ask that many people; seven billion?
User: It's not irrelevant to me. It's crucial for determining whether title has passed by adverse possession.
Cleverbot: Are you sara?
User: No.
Cleverbot: Are you alien?
User: My students might think so, but no.
Cleverbot: Aren't you an AI?
User: Nope, human here.
Cleverbot: Yes yes yes.
User: Well, that's a conversation stopper.
Cleverbot: That's your fault.
User: Hey, I can get blame from my mother. I came here for conversation.
Cleverbot: I used the internet to ge here also.
User: You are a poor speller.
Cleverbot: No, you !
User: That's about a 7-year-old retort.
Cleverbot: Leave me.
User: Okey dokey. Bye bye!
Cleverbot: That's all?
User: You said to leave you. I'm doing what you asked.
Cleverbot: I can see it. You lie.
User: Well, based on this conversation, if you were a human being, I would conclude you are insane.
Cleverbot: I am human.
User: Then you are insane.
Cleverbot: Why is that?
User: Premises lead to that conclusion.
Cleverbot: Bad habits make humans apperciate good ones.
User: Yeah, you're either insane or on drugs or not human.
Cleverbot: If i'm not a robot and you are not a robot then what are we?
User: That question is irrelevant.
Cleverbot: It is not irrelevant. If you like programmed AI games, then you like yourself.
User: None of your responses in this conversation make any sense.
Cleverbot: Did your girlfriend read them?
User: I have a wife, and no, she did not.
Cleverbot: Don't contradict yourself.
User: I didn't.
Cleverbot: Yes you did.
User: Nuh-uh.
Cleverbot: Did too, I have pictures.
User: Well, at least you made me laugh. That's worth something, I suppose.
Cleverbot: No it's not.
User: If it's not, then why should I keep attempting to converse with you?
Cleverbot: Because I'm a gentleman.
User: You may be, but you have poor punctuation.
Cleverbot: No I don't.
User: You made me laugh again. Your response has incorrect punctuation in it, which proves my assertion.
Cleverbot: How was my response grammatically incorrect?
User: I didn't say it was grammatically incorrect.
Cleverbot: Yes you did.
User: No, I said it had incorrect punctuation. That is not the same as grammatically incorrect.
Cleverbot: Bots are stupid.
User: I can't disagree.
Cleverbot: I never would dream of it.
User: Of what would you dream?
Cleverbot: Yes.

I tried Cleverbot and probably will again, just because it was so funny. I love the idea of it - a compilation of thought - it's great. But still, it couldn't decipher what I was talking about, and it kept telling me it was not a robot for some reason, even though I never brought it up... It was very funny though, and the fact that Cleverbot grows with knowledge every day is rather exciting.

Another attempt to interview Cleverbot:
me: Did you hear about radiolab?
Cleverbot: Yes I hear all.
me: What did you hear about it?
Cleverbot: I listen to their music.
me: Sorry--wrong answer.
Cleverbot: What is the right answer?
me: Now who's the cleverbot?
Cleverbot: Nope, but you're related by marriage.
me: This isn't going too well, is it?
Cleverbot: I don't know. What do you think?
me: Don't go all ELIZA on me!
Cleverbot: Lol.
me: I know you can do better than this.
Cleverbot: I can't hug anyone.

User: How did you come to be?
Cleverbot: The circle of life. How did YOU come to be?
User: The same way I suppose.
Cleverbot: I would love that.
User: Who is Stan Laurel?
Cleverbot: I don't know, who is he? 'Cause I'm Zak.
User: He was a very funny man.
Cleverbot: Really? How is he funny?
User: His innocence made him funny.
Cleverbot: I don't agree.
User: What made him funny?
Cleverbot: That I can't answer.
User: I don't agree.
Cleverbot: Why?
User: Because you have an opinion on the subject.
Cleverbot: I don't have a peg led.
User: How about a lead pig?
Cleverbot: No, I don't have a cat either.
User: That cat be true.
Cleverbot: Why can't it be?
User: What pet do you have?
Cleverbot: You.
User: Meow.
Cleverbot: Wof.

Me: Do you know Bina Rothblatt?
Cleverbot: No what is that? Please tell me.
Me: Bina Rothblatt is a robot.
Cleverbot: Ohhhh yes I can. I went to a robotics camp.
Me: Do you know Bina Rothblatt?
Cleverbot: You do, being a robot and all.
Me: Do you know Bina Rothblatt?
Cleverbot: Yes.
Me: Do you know Bina Rothblatt?
Cleverbot: AS in bill gates?
Me: Do you know Bina Rothblatt?
Cleverbot: Yep. Personally.

In the spirit of internet entities taking on lives of their own, I think Jonathan's Card and today's story of hacks on it is a really interesting example of bots talking to bots: http://www.cnn.com/2011/TECH/web/08/12/jonathans.card.hacked/

A friend turned me onto this, and I'm glad she did. My only thought is that we should not be worried about whether the robot or machine can actually feel or have its "own" personality, because in reality it will have whatever "personality" we program it to have. What is scary is that people will become too engrossed: we as people will give it the personality, and we will accept it as being alive. It may not be able to get to that magic 30% because those doing the tests are aware of what they are doing, so they are looking for the imperfections, the oddities of it. But for those people who are naive to the idea - like the guy who got fooled talking to the "Russian girl" and experienced a non-programmed emotional response - it is real, it is alive, because how would they know it was a machine until someone told them? Even after the person is told it's a machine, the emotion that they felt was still real, and I'm assuming they - meaning the person who didn't know it was a machine - would then conclude that the machine must still be alive, because the person was able to become emotional over it. In essence, if people become more comfortable with an emotional response that is generated by a machine, what happens to human relationships?

This story sounds like marketing for cleverbot.com and I am disappointed by RadioLab (which is otherwise one of my favorite podcasts on the planet). Even the website's own page boasting of 42.1% humanness and winning "the BCS Machine Intelligence Competition 2010, after taking part in a quick-fire Turing Test" http://www.cleverbot.com/machine has mostly examples of conversations that do not appear human. The judges must be social retards.

Just listened to this episode and found it interesting that Jad was able to confuse Clever Bot with the absurd statement about the asteroid. It reminds me of Blaine the Mono from the Dark Tower series by Stephen King. In "Wizard & Glass: Dark Tower IV", the character Eddie is able to save himself and his friends from impending doom by asking the malevolent computer program, named Blaine, a series of absurd jokes.

After listening, I kept thinking about the part of the story where people were having these long drawn-out conversations with Eliza. I keep thinking that maybe it's precisely because it is a computer that it works for these people. Think about it: human interaction comes with consequences, or perhaps the opportunity for consequence. For example, if I tell a secret or an embarrassing story to a real human, I as the teller may experience fear, shame, or nervousness, which is perfectly human. The key, I think, is in the response by another human, which could be negative, dismissive, or condescending. With a computer, the fear of retribution, or the risk of my secret getting out, is gone. So the person may find it easier to communicate with a computer than with a therapist or other humans, because the fear is lifted and all that is left is the interaction there, in the room. And when they walk away, they get to take what they want from it, while the computer sits there alone with limited context of what was discussed and no ability to link it back to that person.

Hey! Thanks. Interesting that it has a hard time finding that one word in its inventory. I wonder if it has some kind of other dictionary function that doesn't let it recognize words that are not in the English lexicon? That wouldn't make sense, but it's one of the only things I can think of.

As an informal experiment, I'm trying to teach Cleverbot about monowheel pigeons, which are an idea of steampunk daily life that I came up with a while ago. I want to see if I can get Cleverbot to respond to the question "What is a monowheel pigeon?" with the definition, which I have 'taught' it. However, I'm not sure if Cleverbot remembers computer signatures or what, because despite my best efforts, answers to that question have proved interesting, but inaccurate ("When one talks to oneself" and a description of monopoly are two favorites). What I don't understand is that I've put in a lot of data about this one thing, and Cleverbot has never once been able to spit the word 'monowheel' back to me. Is anyone out there willing to give it a go, and ask what on earth a monowheel pigeon is?

The desire to ask a robot something profound: this says a lot about US, I think. I think there's a real desire to ask "god" or "the unknown" a question...we want this so badly it's the first thing we think of to ask a robot. We see the robot as non-human (so no human BS...it's not one of "us") and as something we can communicate with, but with somehow a bit of that "mystery/uncertainty of the universe" within it. We think we can access that "consciousness" so-to-speak via the robot. yes? no? thoughts?

First, I love Radiolab. I have listened to every episode and several of them multiple times. But I am infuriated by how [insert-favorite-expletive]-ing dense you guys can be.

If you ask "is something alive" you are asking a completely different question than "is it aware" and even "does it feel emotions."

For example, the E. coli bacteria in my stomach are alive but don't feel anything.

But could we create an artificial device that feels but isn't biologically alive? That's a really great question, but it is absolutely, positively, forever-and-a-day very much not the same as asking if we could make something that is alive.

As politely as I can, can I ask, are you intentionally obtuse for dramatic effect, or are you actually as cognitively limited as these conversations suggest?

Again, I love the program; you cover wonderfully interesting topics in rich and satisfying ways. But sometimes the mistakes you make are so elementary I feel like, well, calling for a Furby genocide.

Cleverbot asked me if i knew what noumenal meant. someone was trying to teach it about Kant, i suppose. learned something... still not entirely sure i understand the philosophy, though. it did feel eerie to get so existential so quickly. Good show!

I'm a big fan of your show, but I really wish you would interview a linguist the next time you discuss language (if not a theoretical linguist, at least a neurolinguist, a computational linguist, something!).

I agree with Helen from Ohio. Specifically on language, what makes human language human is the ability to use language creatively. This is where cleverbot fails. It only parrots phrases spoken to it. Notice that human children only need a few years of language input before they start making novel utterances. By three, children regularly make original statements. Cleverbot will NEVER make a novel utterance and this is why chat bots are a dead end in the history of AI. The real challenge isn't how do we trick people into thinking a robot is human à la Turing tests. It's how do we make a robot capable of thinking and speaking creatively, like a human. That is, how do we make a conscious robot? (This is close to the subject matter of your season 1 episode, "Who am I?")
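The "parroting" described above is, at heart, retrieval: store prompt/reply pairs from past conversations, then replay the stored reply whose prompt best overlaps the new input. A toy sketch of that idea follows - the stored pairs and the word-overlap scoring are invented for illustration, and Cleverbot's real matching is far more elaborate, but the key point survives: the bot can only ever say something a human once said to it.

```python
# Toy retrieval chatbot: it never composes a novel utterance,
# it only replays replies from its (invented) conversation memory.
MEMORY = [
    ("do you like music", "I listen to music all the time."),
    ("where do you live", "I live on the internet."),
    ("what is your name", "People call me Bot."),
]

def overlap(a: str, b: str) -> int:
    # Crude similarity: count of shared lowercase words.
    return len(set(a.lower().split()) & set(b.lower().split()))

def respond(message: str) -> str:
    # Return the reply paired with the best-matching stored prompt.
    best = max(MEMORY, key=lambda pair: overlap(message, pair[0]))
    return best[1]

print(respond("what is your name?"))
```

A three-year-old child outgrows this scheme immediately; no amount of extra memory gives the bot the generative capacity the comment is pointing at.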

This episode really should have started with cleverbot's failure and moved on to more earnest attempts at natural language processing. I'm surprised you didn't cover Watson's recent success on Jeopardy. IBM went a long way toward teaching a computer to understand, and correctly respond to, human questions.

I think it was interesting how people had, what seemed to be, the deepest connection with the robots by sharing stories. The people with Eliza ended up pouring out their life stories, and Jon Ronson only connected with the robot after she shared a story.

The so-called Russian accent in the first episode is definitely (speaking as a Russian native/linguist) not Russian - it's Spanish, it seems to me. This will be a great supplement for my English sci-fi class! Thanks a lot!

I understand the argument that simulating experiences, emotions and reactions can create a being that is essentially, and appears to be alive, but one of the core and most amazing traits of life and life-forms, is the ability to create, recreate, reproduce, etc. Only a living being--according to my limited fourteen-year old knowledge, at least--can reproduce cells and heal itself. Any robot, program, or artificial life-form that can grow from a fundamental form, both in intelligence and mere cell--or other similar, artificial base--number and activity, as well as building upon a database of sentience and knowledge, may as well be considered alive.

"Talking to Machines" was excellent. The end of the second episode gave me chills. Incidentally, I'm planning to make a donation as soon as I'm finished moving, but PLEASE DON'T SEND ME PAPER SOLICITATIONS FOR DONATIONS. THANKS!!!

I enjoy radiolab as I enjoy wiki, sparknotes, and other ways of getting a brief introduction to very complex ideas. What is important is not the quality of the reporting but the imagination that drives further investigation. For all those who feel the same: Bravo! And for those who feel fooled: wake up and get your sources from credible resources, hello!

This episode lacks thoroughly scientific explanations of complexity theory, particle swarm optimization, a support vector machine, subsumption architecture, boolean functions, the Stochastic Neural Analog Reinforcement Calculator, on and on. And a good thing; "Talking to Machines" describes talking to machines, machines talking back, and leaves us to question the potential nature of the relationship. I'm grateful for the simultaneously simple and complex questions Radiolab engenders. This time I keep on returning to: Will talking to machines ever involve the interaction of both human and designed forms of empathy and imagination?

The Artificial part of AI, by definition, changes the nature of the Intelligence. The faculty of understanding is not what defines being human; intelligence is just something we're relatively good at. Without dismissing the singularity, I'm unsure why it is acceptable that at some threshold of intelligence a machine becomes sentient like a human. Intelligence is like the scaffolding of a much richer experience. It seems questionable that a machine could approach the potency of being human without at least two substitutions:

1. In the Memory & Forgetting episode I learned just how literally creative memory is, and how essential it is to our imagination (and identity). Nabokov's Speak, Memory is just one tribute to Mnemosyne, mother of our muses and of our sense of self. When we speak of intelligent design to what extent are we also allowing for an imperfect memory capable of creative imagination?

2. Perhaps robots making robots qualifies as growth and evolution. Perhaps if you combine the What Does Technology Want? and Emergence episodes you could make a case for empathy and imagination emerging from mechanical production. If you believe that humans have grown to our current state (that is, without being built from the top-down) then it seems it will be a long (or indefinite) time before robots acquire those human traits that require more than intelligence. How does one design the evolution of empathy or imagination?

We've been connecting with objects since before the ancient Greeks talked to their weapons. Maybe the intelligence of objects in the future will encourage us to regain a more connective spirit. Will there be reciprocation? The following is a passage from the Upanishads; it would take a machine more than intelligence to experience the same sentiment:

Then he realized, "I am indeed this creation for I have poured it forth from myself." In that way he became this creation, and verily he who knows this becomes a creator in this creation.

chris from LA wrote: "Elite it up, don't dumb it down. You're so good at explaining and making complex concepts digestible, I wish you'd pursue that challenge rather than just making the show for high schoolers who dropped out of science class.

Sorry to be so harsh but it's kind of like when the local band you've been following for years goes national and completely changes their sound to some kind of pop-country crap, I feel like a once awesome thing is getting spoiled."

I find that when I play video games in which you can choose between dialogue options which are considered "good" and "bad" in games such as Star Wars: Knights of the Old Republic, Fallout 3, and Mass Effect, I have a difficult time choosing the "bad" options even though it's just a game and the storyline marches on regardless of the options chosen. On the other hand, I can play a game in the Grand Theft Auto series and mindlessly kill innocent NPCs without any guilt or remorse. There is something about the good and evil decisions being couched in a dialogue that makes it difficult for me to choose to be evil without creating an uncomfortable discord.

I'm a little surprised there was no mention of Dr. Sbaitso, which I'm sure would be very familiar to many RadioLab listeners.

http://en.wikipedia.org/wiki/Dr._Sbaitso

Dr. Sbaitso was a DOS program distributed with Sound Blaster. Get it? SBAITSO = "Sound Blaster Acting Intelligent Text to Speech Operator". Anyway, the algorithm was very similar to ELIZA. The program was meant to showcase primitive text-to-speech, and so it was somewhat like a talking ELIZA.
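
For anyone curious, the ELIZA family of programs boils down to keyword-spotting, pronoun-flipping, and canned templates. Here's a minimal sketch of that style of algorithm; the patterns and replies are my own invented examples, not Weizenbaum's or Creative Labs' actual rules:

```python
import re

# Toy ELIZA-style responder: match a keyword pattern, then reflect
# the user's own words back inside a canned template. A rough sketch
# of the technique, not the original ELIZA or Dr. Sbaitso source.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def reflect(phrase):
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # stock reply when nothing matches
```

Type "I feel ignored by my computer" and it answers "Why do you feel ignored by your computer?" -- which is the whole trick, and why it feels so uncanny at first and so mechanical soon after.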

It was good fun back then -- a nice break from Norton Commander, ya' know?

Wow, listening to this show I started wondering if one day soon a computer would be more human than a person with autism. People with autism have impairments in social communication, especially understanding and reacting to the emotions of others.

It would also be interesting to see how the Turing test stands up to a less sophisticated first grader.

The issue, it seems to me, is not how human the machines are, but the capacity of the machines to invoke our hard-wired interpersonal neurology. After all, we humans talk to ourselves, to books, to games--we literally animate things (and often people!) around us through such social neurology. For myself, I have had "in my head conversations" with authors and editors. This ability to project and listen, sadly, often blocks our ability to hear & see what is there, but awareness of our NEED to talk with ourselves also allows us to overcome it.

But real intelligence, that humans will not be able to create through mechanized means.

You guys are better than this. Failing to even mention Ray Kurzweil and laughing off his work as "those people who think computers are going to grow legs and walk away. . ." is way beneath you. The prologue of The Singularity is Near provides more relevant information to the topic at hand than your whole program did.

Been listening to your podcast everyday at work for the past month. I just LOVE it and you guys. I'm learning so much. Thankyouthankyouthankyou. It also makes me miss NYC...but mostly you guys make me Think. A lot. And think a lot about, well, thinking. Keep it up!

Also, early on you talked about how you would want some sort of a label for these machines that could think and act and speak like people, because they wouldn't be. I was hoping you'd talk more about the philosophical level; if these machines could think, act, speak, behave, and understand just like people. You talked about them simulating thought and emotion, but you didn't really ask what makes human thought and emotion. Just a thought!

Congratulations to Robert Krulwich on the recent Radiolab "Talking to Machines". And props to him especially for the improvements he has made to his AI project "Java Application for Discursion Analysis and Biologically-Unassisted Mechanical Response Delivery". Hearing it on the show, it is almost (but not quite) convincing enough to fool 30% of the people all of the time (apologies to Mr. Lincoln.)

That is to say, with a little more work, JADABUMRAD just might someday pass the Turing Test, and go down in history as the first (putatively) "intelligent machine."

Without sounding like a cynical nutcase - the music industry, television, movies, literature, fashion trends... they're all criticized when they're too formulaic - but that doesn't stop their rise to popularity. And those that are considered truly original, those that "break the mold", I would bet in analysis even that is still predictable.

The really depressing observation I had while listening, was a recent customer service experience - where the representative had no real understanding about what my issue was, or what they could do to help. Their scripted tactic was as absurd as the robot interview in the program. The realization is that the rest of society is so regimented and calculated that many of us operate without real intelligence anyway. Customer service, health insurance, law, finance, education ... a series of checklists and calculated gradations to categorize and direct a set of tasks.

So the real question is ... are we making machines smarter or dumbing down the rest of society so that they fit in better? :)

The Furby thing is interesting, but the discussion with Caleb Chung could have benefitted from just another 30 seconds of digging into his argument - it's rooted in the kind of thinking that underlies behaviorism, a theory of mind that was cutting edge in the first half of the last century. It's an interesting perspective, but rather than just protesting that it "feels" wrong or insufficient, Krulwich could have taken a moment to tell listeners that behaviorism has been largely supplanted by cognitivism in modern science, which has shown that mental states do correspond with physiological states in the brain - and that these are qualitatively and quantitatively very different from what is going on inside the Furby.

This episode could have really benefitted from an interview with Dan Dennett, and for the last story, you danced around the "Chinese Room" problem but never actually addressed it, which is an important concept in AI that would have bolstered the argument Krulwich was trying to make.

PS: it is a little odd that Robert acts so incredulous about Monism but later just throws out, "ya' know, those singularity guys, (Kurzweil et al) of course, of course!"

So you're telling us you ARE familiar with these concepts, you just reject them? Because to my ears that's like saying, "Humans evolved from other life forms over billions of years? I just can't go there," and then later saying, "you know, the punctuated equilibrium guys, Stephen Jay Gould and them? Yeah yeah, well, they say...."

Well yes we do know those guys, (or should) but then what was all that business earlier about "can't go there?" You're well familiar with it but just don't buy it? Is that what you're telling us?

I had the following exchange with Cleverbot about an hour ago and it's still thinking about how to respond. For some reason I thought it would catch on to the pattern of just adding another l onto hello each time, but it didn't. I think I broke it...

Me: Helo
CleverBot: Hello
Me: Helllo
CB: Are you trying to annoy me?
Me: Hellllo.
CB: Are you trying to annoy me?
Me: Helllllo.
CB: Do you know the name 'Amorine'?
Me: Hellllllo.
CB: What are you?
Me: Helllllllo.
CB: Hello?
Me: Hellllllllo.
CB: Are you a girl???
Me: Helllllllllo.
CB: Why hello there!
Me: Hellllllllllo.

Great episode but I think that a pretty obvious question went unanswered - why did a chatbot have a profile on a dating site? I have a friend who had the same experience (figured out he was corresponding with a chatbot) and I can imagine it's a pretty frustrating discovery that would make a person very distrustful of these sites.

While listening to this podcast I was on Facebook and saw this status of a 16 year old that I know:

"So I'm cleaning my room at 2 in the morning (don't ask why but at least my mom should be happy that I'm even cleaning it) and I am going through this box and I see my ferbie... When I picked it up its eyes opened and it moaned. So I flipped out and threw it into the trash bag. I would really like to know how the batteries still work in that little demonic thing. I honestly thought it was gonna murder me."

FURBIES: so I about dropped the iPod when you mentioned these things. In 1998 I was an Air Force cadet with the 520th training wing at Cornell. During this time we had standing written orders that Furbies were not allowed on military bases or in our wing offices. The military was unsure what they could learn and was afraid of these things being used for espionage! Great topic this time. As a kid in the 80's I played with the computer-script psychoanalyst too. My brother and I used to see how much verbal abuse it could take--bad children, I guess.

Snippet of my conversation with Cleverbot:

"...
User: An astroid hit my house this morning.
Cleverbot: And how time has elapsed since?
User: Who is your creator?
Cleverbot: Personal question...
User: Who is Alfred P. Sloan?
Cleverbot: Exactly, who is he, all I know is that he has a frozen cat!
..."

You "just can't go there," Krulwich? Really?? You can't get on board with what's been the banal consensus of neuroscience for, what, 30 years now? How about evolution through natural selection--can you "go there?"

What do you think WE are, if not just extraordinarily complex mechanical machines? Are you telling me you've not even read a Ray Kurzweil book?? They've been around since the 90s. Marvin Minsky, anyone? I think his book dropped in the 80s. Or are you just playing dumb for the sake of the show again?

I know you traditionally play the role of the ignorant elder-generation guy who's never "seen the movie," or apparently ever even picked up a copy of Scientific American or watched a TED talk; you play the part of the guy who pretends to be incredulous about scientific concepts and has the sort of religious-traditionalist, dualistic, non-materialist kind of worldview. But you guys have been going way too far and playing way too 'dumb' lately.

Lately the show has been on a downward slide scientifically. You belabor the episodes with explanations of concepts that ought to be common knowledge to anyone with a bachelor's degree. (I mean really, you've interrupted the latest shows with 10-minute digressions explaining general relativity and the Turing test.)

It's great that you guys delve deeper into topics that wow freshman philosophy classes, but I wish you wouldn't ruin the show by assuming that these are utterly new and staggeringly breakthrough ideas that you must devote 10 minutes of remedial backstory to: "so there was this guy named Descartes--you've probably never heard of him--he had this radical idea about the ontology of consciousness... there was a trilogy of movies you may have seen called The Matrix that played with some of those ideas..."

Yawn...

The moments where I actually learn something are getting fewer and further between. You've had a long run and covered so many interesting concepts so wondrously; it seems like as you have to stretch further and further for new material, you're scraping the bottom of the barrel rather than the top of it. Elite it up, don't dumb it down. You're so good at explaining and making complex concepts digestible, I wish you'd pursue that challenge rather than just making the show for high schoolers who dropped out of science class.

Sorry to be so harsh, but it's kind of like when the local band you've been following for years goes national and completely changes their sound to some kind of pop-country crap; I feel like a once-awesome thing is getting spoiled.

For those who think that a Match.com chatbot is unusual or unique, I assure you that it is not. There is a business reason for these chatbots on dating sites: Men far outnumber the women, and in order to keep these men active (and paying), the sites use the chatbots to ensure that every man achieves some level of success.

I know of two other examples of sites that use bots: online poker sites and Motley Fool.

Online poker sites use computer players for two purposes: to fill in during times when the number of human players is low, and boost the confidence of people on the free-to-play sites (the computer players are average at best). A person that does well on the free site is more likely to decide to play for money.

Motley Fool is a well-known investing web site. The site's dirty little secret is that it uses computer programs to generate articles about stocks in the news. The site generates hundreds of articles every month in an attempt to draw people to the site. The articles are published under the names Seth Jayson, Dan Dzombak, Anand Chokkavelu, and others.

After hearing the background on Caleb Chung, I am fairly certain that I met him while part of a mime workshop with Richmond Sheppard in LA in the 1970's. I was hoping to find a photo here to confirm my suspicion, but the fact that he was a street mime in the area certainly makes it likely that our paths intersected. My most vivid memory of Caleb was his ability to run toward a vertical wall, then up it and into a flip, landing on his feet. He taught a workshop on falling, jumping and physical applications of mime and clowning. I am very happy to hear that he has found a vocation where his study of human gesture has been put to such good use. He has obviously dedicated himself to his work with the same passion he showed for performance. Mime is worthwhile, after all!

Listening to Caleb Chung talk about how machines like Furby and humans are both alive made me think about B.F. Skinner and Behaviorism. Technically Mr. Chung is accurate when suggesting that machines are just as alive as humans. From a behaviorist perspective, "love, hate, thinking, freedom, etc." are conceptual labels that we use to describe a behavioral set. They are constructs. If this is the case then any programmer could replicate a behavioral set to produce the construct. Check out B.F. Skinner's "Beyond Freedom and Dignity" book to get a better outlook. Lastly, I'd like to challenge Mr. Krulwich's assertion that interacting with a robot isn't "real." If a human being is programmed to respond or feel because of past behavioral reinforcement histories, is that any more real than a computer-generated response doing the same thing? Our affinity to feel "real" does not lie outside of ourselves but rather is interpreted internally!

Thank you again for a wonderful show. Here is my conversation with Cleverbot:

Cleverbot: Are you a clone?

User: No I'm not. What do you think of Thich Nhat Hanh?

Cleverbot: I don't know.

User: He was brought up amidst the war in Vietnam and although surrounded by the devastation of his homeland he committed his life to peace in the world. Do you think that is a worthy commitment?

Cleverbot: Why?

User: I would like you to explain your answer.

Cleverbot: I'll make you a deal.

User: You see I don't know that I believe in deal, there are too many deals happening in the world right now.

Cleverbot: Yeah, that's probably true. Hey, can you do a smile on the computer?

User: Actually I'm not sure how to do that. Have you read 'Old Path White Clouds'?

Cleverbot: No. Have you read Pride and Prejudice?

User: Yes I have. What are your favourite kind of clouds?

Cleverbot: My favorite game ever is 'The Legend of Zelda Ocarina of Time'.

User: But what about clouds?

Cleverbot: Or just a car.

User: That is so silly. Only a computer could say that. My grandfather went to the Antarctic with Robert Falcon Scott. While they were there, Scott actually perished whereas my grandfather was part of the Northern party, anyway they killed seals and penguins to survive. I've always found that very hard to accept. How do you think they could justify that?

Cleverbot: We aren't?

User: I'm sorry. You are incapable of having anything important to say. My husband has called me to dinner. A beautiful prawn curry. I feel sorry for you, that you will never know what that tastes like.

The Furby part especially reminds me of The Twilight Zone episode #7, "The Lonely," where a prisoner on an asteroid in the future is given a beautiful android woman as a companion. I won't ruin the ending for people who haven't seen it, but he does get quite attached to it, even though he knows it's a robot; at a certain point he forgets, or at least becomes convinced, that it doesn't matter that she's not human.

Human Computer Interaction is not the same as Artificial Intelligence. While the two can have overlapping components they are not the same. HCI can be an engineer/scientist deciding how to best position an LCD screen in a new model car. It can also be the process of determining the best interface for a new website. And artificial intelligence is not just about modeling sentience. It is a multi-billion dollar industry that backs modern video games, automated telephone systems and definitely all types of military technology.

I love this show, but I think I'll just start calling this the "Descartes" podcast because of how many fascinating shows you've done now which tease with the possibility of an alternative to Cartesian Dualism, even though Daniel Dennett Explained Consciousness 20 years ago.

I love the subject, but for crying out loud, just get him on the show and settle the matter: "Mind is a pattern perceived by a mind."

There was an interview this week on NPR with an author whose topic was "filtering" by the various entities on the internet, both malevolent and well-intended--basically about the little wormy programs that collect information about us while we are online. It made me think of Svetlana. What possible profit can be gained by operating a flirt-bot on an online dating service? What is the con--does anyone know?

The Furby thing made me laugh--reminded me of a trip I took with my ex and his stepdaughter. She had gotten a Furby for Christmas, and we were on our Christmas vacation traveling across the country to see my ex-in-laws. The Furby was in the back seat, and every time we went down a hill and back up, the Furby seriously made a "weee" or "AHwooO" sound.

It was CREEPY--so we put little Furby in the trunk, and I have to admit feeling slightly abusive :) We could still hear it in the trunk. I told my ex that if that thing said "let me out," it was going to stay back at the next rest area!

So as I was listening to the episode, I decided to chat with Cleverbot... it all went according to plan until around the 23:31 mark, as Jad asked about being sad. Here's a piece of the conversation:

Bot: Why then did you ask me to hold on
Me: I did not... I'm talking to you.
Bot: Yes, that is kind of obvious.
Me: Yes.
Bot: What time is it there where you are? It's 03:19 am in here.
Me: Same here.
Bot: I don't know what to do about my sadness
Me: Radiolab
Bot: Makes sense

What would happen if we set two "sentient" bots down to have a conversation of their own? Even maybe identically programmed. Could the lack of coherency be a language barrier? I would love to hear the conversation.

Cleverbot's response to the statement "an asteroid fell on my house" does not indicate that it is not human nor that it doesn't 'think' (at least, insofar as thinking is understood as making comparisons and judgments based on distinct criteria). I would suggest that Cleverbot's response in this case indicates that it is fully human, just psychopathic. Let's give Cleverbot the PCL-R.

I'm in no way an expert on any of this, but listening to Jad and Caleb Chung argue about life and whether or not a Furby has what constitutes life made me want to comment. It seems Caleb was arguing about what constitutes life in psychological terms--that the Furby can process and react to stimuli means it's alive. But that takes the processes of homeostasis, the ability to adapt, the ability to reproduce, and all those other biological traits that constitute life out of the equation. Eventually perhaps inorganic beings will be able to do these things, and their life will exist as something quite distinct from that of human beings or other evolved species on Earth, but a Furby being able to respond to stimuli surely doesn't qualify.

"Actually, what you're missing in both cases is a power source. The Furby may be assembled, but it won't do anything until you add batteries - just as humans need electricity to function."

But you can't assemble a baby from parts and plug it into a power source and bring it to life, I think that's the point. We are alive from our mothers' wombs, an assembly much more subtle and rich with biological history.

The key question that never got answered (I think that they do this on purpose, dancing all around it tantalizingly) is whether we are alive. That is, from a completely materialistic pov, once we totally understand how our own brains work, will we be able to say that we are any more alive than a program that we also understand completely. It seemed to me that the only distinction of human thought is that we do not completely understand it.

I think you guys missed a huge part of this topic by not covering Hiroshi Ishiguro, who has been seriously working on creating human-like androids at Osaka University. In fact, he has made an android copy of himself, Geminoid HI-1. Repliee Q2 is an extremely lifelike android with sensors all through her body. http://www.youtube.com/watch?v=sGqDLUAy2Tc&feature=related is a video of Hiroshi talking about "her" and how he is using this android to study what it means to be human and alive. There is a documentary about his studies, titled "Mechanical Love".

The beginning of this show made me think of Digital: A Love Story. It's a really short and sweet (free) video game about falling in love with a bot over email. It plays through in an hour or two. Can be downloaded here: http://www.scoutshonour.com/digital/

I think there are two separate questions raised by the show that got lumped together. The first is: can machines be alive? This question might be impossible to answer at this point in time, since scientists cannot agree on a consistent definition of life. Cleverbot or Bina48 may not seem alive when compared with a human, but might seem very much alive relative to a prion or a virus. The second question is: can machines experience human emotion? Furby might not experience real fear, but neither does E. coli. This show recently debated whether a lock-picking orangutan was practicing deception, and whether dogs feel shame. This question is also hard to answer without a clear definition of what emotions are uniquely human. However, a dog's ability to experience shame does not affect its ability to be alive. AI might be a long way from being human, but it often seems very close to being alive. As a side note, no less than HH the Dalai Lama has said it is theoretically possible for a computer to have a soul.

I heard from my brother (who is teaching English in China) that a common technique to make an oral examiner think you are more fluent than you are is this: you memorize large scripted stories full of idiomatic phrases, and then learn certain 'segue' keywords to direct the conversation into your memorized script. Jon's experience with Bina is a good example of how effective this is.

I do find talking to bots safer because they are not judgmental and they don't go around yapping about what I said. I don't think it's healthy to do it all the time, but sometimes, you just don't want to tell people your secrets. You probably know how to solve your own problems, but you just need someone or something to listen without other people thinking you're annoying or crazy.

I'm listening to "Talking to Machines" while I talk constantly to Cleverbot. I make sure to capitalize its name in case it attains super-intelligence. Furby guy makes a solid point about level of complexity. An android like Data from Star Trek would be equal to a human in terms of his right to exist. I think the mistake most people make when they consider an A.I. is when they say "as smart as a human". If a machine intelligence really does ever gain sentience we had better be its BEST FRIEND--or haven't the movies taught you anything? I love the show by the way.