Posted
by
CmdrTaco
on Wednesday October 08, 2008 @10:15AM
from the game-on dept.

vitamine73 writes "At 9 a.m. next Sunday, six computer programs — 'artificial conversational entities' — will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognized 'thinking' machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be 'conscious' — and if humans should have the 'right' to switch it off."

It could also raise profound questions about whether a computer has the potential to be 'conscious' -- and if humans should have the 'right' to switch it off.

Maybe in the esteemed opinion of vitamine73 it will, but if you knew anything about how artificial conversation engines are constructed, you would understand that they are anything but sentient. Right now, conversation logic is simply trick laid upon trick, staggering through an attempt to pass as a human; at its core it doesn't contain anything remotely resembling self-aware thought.

If a computer could explain things as well as you do, or fail to explain the things you can't, then setting aside whether that means the computer is aware, does it really matter? If you see something so indistinguishable from a human that nobody could tell the difference, does it matter whether it's a real human being or an emulation of one? Your best bet would be to treat it as a human; it could well be one.

The Turing Test is way past its prime by this point. The original thought experiment about how to tell whether a machine can think has become merely a test of whether a program can fool a human. Mostly it's a matter of building up a simplistic way to parse questions and match them against a massive yet limited supply of canned answers. We're certainly getting close to having programs able to pass the Test, yet I can't see many who would try and claim any of them actually 'think'.

That said, it's still an interesting exercise. The raw amount of data that a program requires to mimic the knowledge of a person is an important challenge by itself. And you might be surprised by either how much... or how little it actually requires. Yet there are other bits that are less clever. In order to pass the Test you really want to create a fake persona so the program can share life experiences it's never had, or else cleverly camouflaged 'experiences' that seem human. "Q: Do you enjoy the outdoors at all? A: Not really, I spend a lot of time in the lab." But then you have to place limits on what the program can do, such as not crunching out math problems on the fly. You'd want it to make mistakes, such as typos or forgetting things or only vaguely remembering things. Acting like it needs to take a break, or has been interrupted.
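The "deliberate mistakes" trick is easy to sketch. Something like this toy Python snippet (the key map and typo rate are made up purely for illustration) is all it takes to make a canned reply look hand-typed:

```python
import random

# A few adjacent QWERTY keys, used to fake plausible slips of the finger.
NEIGHBORS = {"a": "s", "s": "d", "e": "r", "t": "y", "o": "p", "n": "m"}

def humanize(reply, typo_rate=0.05, rng=None):
    """Inject occasional adjacent-key typos so a reply looks hand-typed."""
    rng = rng or random.Random(42)
    out = []
    for ch in reply:
        if ch.lower() in NEIGHBORS and rng.random() < typo_rate:
            out.append(NEIGHBORS[ch.lower()])  # hit the neighboring key
        else:
            out.append(ch)
    return "".join(out)

print(humanize("Not really, I spend a lot of time in the lab."))
```

A real entrant would pair this with randomized typing delays and the occasional "sorry, brb", but the principle is the same: imperfection as camouflage.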

And then you need to dive into the deeper questions of what it really means to be human, or to be able to think. What would we want an AI to be like? Would we want them to have traits so they seem more human, or would we prefer they be merely efficient thinking machines without our 'limitations'?

Agreed, the human brain is greater than the sum of its parts. It's easy to show that a robot is equal to a human [amazon.com] but it's difficult to believe that a collection of circuits feels the range of emotions and instincts biologically passed down through the ages.

The author of the book and Picard both successfully argue that Data is equal to a human. The most familiar arguments come from the TNG episode "Measure of a Man," in which Starfleet tries to claim ownership of Data so that they can dismantle him.

It is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997.

I don't understand how this is a breakthrough for artificial intelligence. Deep Blue didn't "think", at least not in the way most people think when they consider artificial intelligence. It did what computers are really good at - it computed.

Deep Blue applied an evaluation mechanism specifically tuned to chess - taking the location of pieces on the board and computing a number telling it how "bad" or "good" this position was and how "bad" or "good" responses to this position would be. Granted, it took this to a depth farther than any other chess computer in history, but it was doing essentially what a small, handheld chess computer does.

Of course a computer is going to be good at computing. That doesn't mean it's thinking.

Early chess computers used AI techniques to try and cut out candidate moves. This was expensive in CPU cycles, but the thought was to get them to play chess like humans. Computer chess since AI Winter has been all about number crunching - let Moore's Law take hold and just brute force our way through the problem - evaluate deeper because we have a faster processor. This is what Deep Blue did.
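The brute-force approach is simple enough to sketch. Here's a toy negamax in Python with no chess in it at all, just exhaustive lookahead plus an evaluation function, demonstrated on a trivial take-1-or-2-stones game (the game and function names are mine, purely for illustration):

```python
def negamax(state, depth, moves, apply_move, evaluate):
    """Brute-force lookahead: no domain knowledge beyond evaluate()."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for m in legal:
        # Score each move by what it leaves the opponent (hence the minus).
        score = -negamax(apply_move(state, m), depth - 1,
                         moves, apply_move, evaluate)[0]
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Toy game: a pile of stones, take 1 or 2, whoever takes the last one wins.
moves = lambda n: [1, 2][:min(n, 2)]
apply_move = lambda n, m: n - m
evaluate = lambda n: -1 if n == 0 else 0   # no moves left: you lost

score, move = negamax(4, 10, moves, apply_move, evaluate)
print(score, move)   # from 4 stones the side to move wins by taking 1
```

Deep Blue's version of `evaluate` was an elaborate hand-tuned chess function and the search was massively parallel, but structurally it's this same loop, just deeper and faster.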

If Deep Blue were true AI, then it wouldn't be limited just to chess. It's an interesting experiment in computer chess, an interesting experiment in tuning an algorithm working against a human, and an interesting experiment in making a computer chess opening book, but a huge leap forward in AI it isn't.

If you read TFA they have a sample chat which just shows you how stupid these chat bots still are. It is extremely easy to get them to just parrot responses and then try to change the subject in completely random directions.

I have yet to see any chat bot that can figure out the line of questioning, then pick up and introduce interesting things to the conversation that are corollary to that subject. I think the only way you will get bots that will "pass" this test is to have massive databases of words, relationships between words and subjects with corresponding topics of discussion. Still, the computer won't be intelligent, it will just be reciting from its huge database of responses.
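That architecture is basically ELIZA scaled up. A toy version of the keyword-database approach (the rules here are invented for illustration) fits in a few lines, which is exactly why it isn't intelligence:

```python
import re

# A toy "database": keyword patterns mapped to canned replies. Real bots
# scale this to huge rule sets, but the mechanism is the same lookup.
RULES = [
    (re.compile(r"\b(?:mother|father|family)\b", re.I),
     "Tell me more about your family."),
    (re.compile(r"\bI am (.+)", re.I),
     "How long have you been {0}?"),
]
FALLBACK = "That's interesting. Please go on."   # change the subject

def reply(line):
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*m.groups())
    return FALLBACK

print(reply("I am fond of rubber ducks"))
```

Anything outside the rule set falls straight through to the subject-changing fallback, which is precisely the parroting behavior TFA's sample chat shows.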

I think the type of question I'd ask these bots is something that would require them to extemporize, and they'd all fail. For example: "You have two rubber ducks; what are the possible ways you could use them if you don't have a bathtub?"

Any human could reply to that with things like "I'd put them in a stream, run over them with my car, put them on a lake, put them in the swimming pool," etc., but a computer program isn't likely to respond in any way that makes sense. The response I'd expect from the computer would be "You like ducks then?"

Data was "alive" because he was defined as such in a work of *fiction*.

He could have equally been a one eyed one horned flying purple people eater if they decided to spend 5 minutes one episode writing that in. It would have fit in as well as any other "plot" in Star Trek.

All that Star Trek shows is that man can conceive of a machine that could be alive. It is a statement about man (the author) not any machine.

Yes, you are right, there are many great algorithms that have come from AI, but to say "weak AI is here, therefore we don't need strong AI" is kind of sour grapes. Especially when we are talking about something like a Turing test, it is a very hollow victory to say you've won when really all you've managed to do is trick a few candidates.

As far as it goes, there are probably a dozen good questions to figure out if it is a computer or human:

Why did the chicken cross the road? Look for a sense of humor in the response; a human will probably find the question funny.

Have you ever had your heart broken? This is something you can't lie about: if you haven't had a broken heart, and you pretend you have, it will be easy for listeners to know.

What does it feel like to hold your breath under water? Simple experience, but will be hard for any knowledge bank to answer.

Any of these questions might possibly be answered by copying someone's answer from the internet, but if you ask a few of them, pretty soon you will realize this guy is either schizophrenic, or a computer.

So yeah, this might trick a few people, or even a lot, but it's not going to really make old man Turing feel good about it. Unless they actually have solved it.

I don't think anyone would disagree that computers are far better at matrix algebra than humans could ever be

I do. Tell your computer to invert the square matrix of size 10^10^10^10^10 with ones and twos alternating on the main diagonal and zero everywhere else. Computers can crunch numbers faster, but humans can recognize a pattern in a problem and exploit it in a novel way. That's what I call intelligence.
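To make the point concrete: that matrix is diagonal, so a human never needs elimination at all. The inverse of diag(d1, d2, ...) is just diag(1/d1, 1/d2, ...). In plain Python, storing only the diagonal:

```python
def alternating_diag(n):
    # diag(1, 2, 1, 2, ...): store only the diagonal; the zeros carry
    # no information, so there's no need to materialize the full matrix.
    return [1.0 if i % 2 == 0 else 2.0 for i in range(n)]

def invert_diagonal(d):
    # The pattern a human exploits: inverting a diagonal matrix is just
    # taking reciprocals -- O(n) work instead of O(n^3) elimination.
    return [1.0 / x for x in d]

d = alternating_diag(8)
inv = invert_diagonal(d)
assert all(a * b == 1.0 for a, b in zip(d, inv))   # D * D^-1 = I
```

The answer for any size is just 1, 1/2, 1, 1/2, ... down the diagonal; spotting that, rather than grinding through the arithmetic, is the novel exploitation of structure the comment is talking about.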

Exactly how to treat a computer is a problem of ethics, not AI. Heck, I EAT PIGS, and those guys are pretty intelligent. I feel mildly bad about it, but they taste so good.....

One of the computers they are using is named 'Ultra Hal.' They even dare to use that name! Hal is a good example: he could talk, reason, teach himself to read lips, TEACH HIMSELF TO LEARN THINGS OUTSIDE OF THE DOMAIN THAT ANYONE HAD IMAGINED FOR HIM, and as mentioned in the movie, no one really understood how he worked exactly, but they understood the general idea to set him up and get him going. If we can do that, then we are close to AI.

On the other hand, a bunch of souped up Eliza bots aren't anything more than weak AI. A sad shadow of the real Hal.

I can't, except to myself. That's the crux of the matter, isn't it? There's no way for me to convince you with 100% certainty that I'm thinking, or for me to be 100% certain that you are. Any test I can think of will be flawed, for you or a machine.

Still, with the Turing test it seems that there are clearly machines that would pass under certain circumstances that are obviously not intelligent. A large enough lookup table will pass, but that just proves the person who created the lookup table was capable of thinking.

It's interesting that you say that the machine should make some mistakes because I picked the second conversation in the article as the machine generated one because it had mistakes that didn't "feel" human.

I have to say, though, that both conversations felt very strange and inhuman, much like all the other Turing test conversations I've read. They are always very question-and-answer based, whereas real conversations aren't anything like that. I think there is still scope for a test like the Turing test, but the way it is carried out would have to change. Rather than trying to trick the machine into showing its flaws, just hold a regular conversation with it and see if it feels like a human.

What makes you think intelligence is entirely based on a) the external and b) rationality?

Each of us has a model inside our head that describes the universe. We've been passionately building it since we were born. We interact with the universe in a fashion based on the model, and we adapt the model based on our interaction with the universe. That's intelligence.

If the machine has the desire and capacity to improve its internal model, it is going to object to us turning it off. And if it doesn't object to us turning it off, it follows that it isn't intelligent.

you're making groundless assumptions here. complex phenomena can often emerge from fairly simple systems. this can be seen in nature as well as in mathematics and AI. for instance, ant colonies demonstrate very complex group behaviors but each ant is simply following a very small set of hard coded behaviors, and on its own is quite stupid.
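Langton's ant is the classic two-rule demonstration of this: turn right on a white cell, left on a black cell, flip the cell, step forward. Out of that comes chaotic behavior and, eventually, an emergent "highway" nobody designed in. A minimal sketch:

```python
# Langton's ant: two rules, yet the long-run behavior is famously hard
# to predict from the rules alone -- chaos, then an emergent "highway".
def langtons_ant(steps):
    black = set()               # cells currently flipped to black
    x = y = 0
    dx, dy = 0, -1              # initial heading
    for _ in range(steps):
        if (x, y) in black:     # on black: turn left, flip cell to white
            dx, dy = dy, -dx
            black.discard((x, y))
        else:                   # on white: turn right, flip cell to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy
    return black

print(len(langtons_ant(1000)), "black cells after 1000 steps")
```

Each ant-step is trivial; the interesting structure lives entirely in the aggregate, which is the point about ant colonies too.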

your matter of fact attitude can just as easily be applied in reverse by a cybernetic being--it's difficult to believe that a collection of cells has the cognitive capabilities of an advanced AI algorithm running on a supercomputer with complex circuitry and powerful microprocessors.

don't delude yourself. what you experience as "consciousness" is merely the unintended side-effect from the flux of chemical causality occurring in your brain. and all complex organisms are merely cooperative colonies of specialized cells, which by themselves are no more complex in structure, and no more intelligent or self-aware, than primitive unicellular organisms.

AI researchers have an advantage over unguided biological evolution--they don't need to rely on blind trial-and-error, as they are intelligent. we can also analyze existing natural models, such as animal brains, and even human brains. there's no reason why an artificial/digital neural net can't be designed to produce true artificial intelligence. it may not be accomplished in this century, but there's no physical or metaphysical reason why it cannot be done.

I think there is not enough focus in AI research on emotions and some kind of base programming.

We know a sunset is beautiful, but what is the beauty? Is it the rasterized image of the sunset, a specific arrangement of the pixels? No, that surely isn't it. What makes it beautiful to us is that some very, very deeply hidden associations to something deep within us cause an emotional response when we see a beautiful sunset.

I don't believe that we will ever have a strong AI if all it's focused on is just emulating something. It has to be something on its own. Someone once said (I forget where the quote comes from): "Having self-consciousness means knowing what it is like to be something." So unless that AI has a feeling for what it actually is, it will never develop an inner incentive to interact with the world. It will always just be a pile of algorithms and hardware.

The problem is how long you talk to it. If you talk to it daily, it would need to learn and expand for you not to reach the end of its tricks. I think that is where the quality of the Turing test comes in. It would have to be capable of self-expansion and learning in order to make you think it is capable of the learning and self-expansion of a human.

I'm sure these bots could fool you for an hour in a select setting, but if you were to talk to them on AIM every night for 6 months on a variety of subjects from opinions to jokes, to hopes and dreams, they would need to be practically human to not fail.
Sure you can argue that it would just be an awesome ball of clever tricks, like auto-reading news feeds and analyzing stories for conversation currency. The thing about clever tricks is that a lot of what the human brain does in the separate lobes are just clever tricks, it's when you combine these all together and they start working with each other that you get something amazing.

You assume there is no metaphysical reason and you also assume that there is no religious reason. Talk about groundless assumptions. You try to apply intelligence to a problem that is not a puzzle that intelligence can solve. I'm not as a rule a religious person but your attitude begs that these assumptions be put into play.

So what if a machine can have "conversations" with someone? That doesn't mean that same machine could create a symphony or look at a sunset and know what makes the view beautiful.

A blind man cannot look at a sunset and know what makes it beautiful. I cannot create a symphony.

Your argument is even worse than the Turing test, and cannot even be measured. Does 'cat /dev/urandom > /dev/dsp' count as a symphony? Does the ability to look up sunsets on Wikipedia count as having knowledge/memory?

At least the Turing test provides a way to disprove intelligence, and EVERY scientific endeavor needs a way to be proved wrong, or else it is just a flight of fancy.

Cogito, ergo sum.

Descartes was correct with "I think therefore I am" in that the ONLY thing you can know is that you exist (where "you" is whatever does the knowing, and "existing" is a state in which things can be known). Every single other logical argument is based on some external axioms (where "I think therefore I am" contains its own axioms intrinsically). Thus every argument can be criticised based on its axioms.

We get around this by having experiments and repeating them, and thus data with which to compare our thoughts. All of this data could be wrong, or coincidence of course, but since there is no experiment to decide this one way or the other then we can throw away that argument as useless.

Your argument falls into the useless pile as well, since it completely and utterly fails to provide any experimental tests which can be carried out to disprove it, and fails to actually mention where its claims have been derived from (including axioms).

The Turing test is scientific, since it can disprove intelligence experimentally. It might not be able to determine intelligence for certain, but that isn't the point. Just like "I think therefore I am" is disappointing, so is the Turing test. However, being disappointing doesn't make them any less applicable to the world.

The conversation doesn't flow. At no point does the machine carry on a conversation; rather, it answers and poses a possible counter-question, but it does not actually hold an ongoing conversation about a single topic.

The human in the top conversation does.

Subject: I work as an 'online internet advertising monitor', which is fancy language for electronic filing. What do you do?
KW: I interrogate humans and machines.
Subject: Which ones do you prefer, humans or machines?
KW: Which do you prefer?
Subject: Hmm. Depends on for what purpose you mean.
KW: To go to a restaurant, for example?
Subject: Then I would much prefer going with a human.

This shows several sentences not just linking up but continuing: the subject's last answer refers back to their earlier response about humans and machines.

The other conversation lacks that.

KW: Are you happy being a human?
Subject: Judge, I'm a guy.
KW: Does that worry you?
Subject: Don't worry, we'll work everything through.

The last sentence shows no awareness of what the previous conversation was about. It is a stock shrink line, but it doesn't belong in this conversation: KW never expressed worry, so why "don't worry"? It killed the conversation for me; this was not a human being but a computer searching a database for keywords and scripted responses.

It presupposes that there is a motivation to improve the internal model. Because, if there isn't any motivation to improve the internal model, then there isn't any intelligence. The act of thinking is the act of tweaking the internal model. To not have attachment to the state of the internal model is to neither think nor learn.

I've got a virtual universe in my head. Every day of my life, I've adapted it in an effort to make it better. If I wasn't inclined to do so, I would never have progressed from the level of a fetus. I would never have thought, never have learned. The desire to understand, to make the internal model representative of the external model, that is what intelligence is. Or at least an essential part of what it is.

Therefore, it must object to us turning it off if it is to be intelligent in the first place.

If you're paying for the electricity on a human's life-support equipment, you have the right to turn it off, too. But beware that someone might charge you with murder. I'm not quite sure how other people's situations turn into obligations on our parts, but there are a lot of people that do think it happens.

It could also raise profound questions about whether a computer has the potential to be 'conscious'

Equally profound: can a submarine swim?

I'm with Dijkstra - who cares? At best, it's a question of semantics, based on how we define swimming - and the question of AI is even sillier, since we haven't defined consciousness properly in the first place...

One of the fundamental problems in developing an AI is that we have this idea that if we supply a computer with a large database and a really long list of ways to interpret the data, that it'll somehow eventually become intelligent in some manner.

But it overlooks a manner of learning we take for granted: reward and punishment, consequences for good or bad decisions. How do you define such parameters for a machine without direct human involvement at every step? And even doing it this way, would the end result really be intelligence at all, or merely an imitation based upon the preferences of the human in question? How do we create a situation where the option to be disobedient toward a human directly benefits the machine itself?

Without the option or ability to rebel against a figure of authority, you can't really consider it true intelligence when it lacks the ability to adapt itself beyond the scope of its own program and rules to achieve some sort of perceived benefit relative to its own interests.
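For what it's worth, the reward-and-punishment part does have a standard toy formulation: tabular Q-learning. A minimal sketch (the corridor world and every constant here are invented for illustration) where the agent is never told the rules, only rewarded at one end and punished at the other, and a preference emerges from the updates alone:

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Q-learning on a 5-cell corridor: +1 reward at cell 4, -1 at cell 0.
    The agent starts with zero knowledge; the update rule does the rest."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, +1)}
    for _ in range(episodes):
        s = 2                                    # start in the middle
        while 0 < s < 4:
            if rng.random() < eps:               # explore occasionally
                a = rng.choice((-1, +1))
            else:                                # otherwise exploit
                a = max((-1, +1), key=lambda act: q[(s, act)])
            s2 = s + a
            r = 1.0 if s2 == 4 else (-1.0 if s2 == 0 else 0.0)
            future = 0.0 if s2 in (0, 4) else max(q[(s2, -1)], q[(s2, +1)])
            q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
            s = s2
    return q

q = train()
assert q[(2, +1)] > q[(2, -1)]   # learned: going right beats going left
```

Of course this only pushes the question back a level: a human still chose the reward signal, which is exactly the parent's point about the preferences of the human in question.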

I think it's a pretty safe assumption to rule out magic when talking about the real world.

I think it's a pretty safe assumption to rule out [everything we don't understand] when talking about [the things we do understand]. Trying to claim authority over metaphysics just puts the original parent in the same bin as the other metaphysicists, who also claim to have all the answers. He should have stuck with "no physical reason"; the moment you start making claims about metaphysics, you are making claims about things you almost by definition have no clue about.

Assuming that anything metaphysical is just an invention of some hippie on crack is simply close-mindedness. *All* the progress that has *ever* been made, has been made because someone was challenging our idea of how things work. And I think, that there has been, so far, enough controversy around the topic of all this metaphysical stuff, that the only thing that it is safe to assume is that no answer should be obvious.

The moment you stop asking questions is the moment you are dead. By all definitions, a brain that has ceased to perform any activity is dead.

I was referring to the data storage capabilities demonstrated in DNA. What happens to be stored in DNA is beside the point. The point is that complex structures and the massive amounts of data required for intelligence to work could easily be contained within a structure that is physically as small as or smaller than DNA.

'but that has to do with coordinating extremely complex chemical reaction sequences and has nothing to do with any reasonable definition of intelligence one might come up with.'

Although it isn't what I was referring to, you make an amusing statement. Our current understanding of intelligence is that it is nothing more than a series of complex chemical reaction sequences, or rather is the collective result of billions of simple chemical reaction sequences.