
I'm posting this to clear up a few things that keep getting mentioned.

________________________________

Taken from Jack Copeland's Artificial Intelligence: A Philosophical Introduction, Ch. 3.4, section 3:

This objection commences with the observation that a simulated X is not an X. Simulated diamonds are not diamonds. A simulated X is X-like, but is not the real McCoy. Now, suppose a computer passes the Turing Test (and whatnot). How could this possibly show that the computer thinks? Success in the Test shows only that the computer has given a good simulation of a thinking thing, and this is not the same as actually being a thinking thing. Consider a version of the Test in which the interrogator is pitted not against a computer and a human but against a man and a woman. The interrogator tries to distinguish which of the two is the man, and the man tries to force a wrong identification. (Turing himself mentions this setup in the course of introducing the Test.) Suppose the man passes the Test (that is, forces a wrong identification frequently enough over a number of trials). Obviously this doesn't show that the man is a woman, just that in conversation he is capable of giving a good simulation of a woman. Similarly (concludes the objector) a computer's passing the Test doesn't show that it thinks, only that it is capable of providing an excellent simulation of a thinking respondent.

In my view this objection is confused, although in the end it does yield up a grain of truth. The first thing to do in exposing its weakness is to note that the assertion that a simulated X is never an X does not hold in general. Consider voice simulation. If a computer speaks (with a passable accent), isn't this simulated voice a voice, an artificial voice rather than a human voice, but nonetheless a voice? Similarly, artificial proteins are still proteins, and so on for a range of artificially produced substances and phenomena.

Comparison of these examples with the case of simulated death, for instance, indicates that there are two totally different sorts of ground for classifying something as a simulation. Call a simulation a simulation1 if it lacks essential features of whatever is being simulated. (bold is mine - Sclorch) Thus simulated death is a simulation1 of death because the person involved is still living, and simulated leather is a simulation1 of leather because it is composed of the wrong sorts of molecules. Call something a simulation2 if it is exactly like whatever is being simulated except that it hasn't been produced in the usual way but by some non-standard means, perhaps under laboratory conditions. Thus some coal produced artificially in a laboratory may be called simulated coal even though it is absolutely indistinguishable from naturally occurring coal...

____________________________________

The Turing Test is rubbish. Ever been kidnapped by super-intelligent computers and deemed un-aware because you're a bit rusty at TCP/IP? Why would imitating an inferior form of life be any sign of true intelligence in a machine?

Yeah I really hate this misconception too. AI is a popular term.... it's ARTIFICIAL intelligence - a simulation, and of course it's possible - I see it in videogames!

When AI is mentioned in philosophy, they are using the wrong term, but they are usually referring to REAL MAN-MADE SENTIENCE/INTELLIGENCE/FREE-WILL etc. The REAL THING. But there is no popular term for such a thing (which is probably impossible anyway) so AI will have to do for now.

I have mixed feelings about the Turing Test. I think that if I were the judge, no machine could fool me. If a machine could converse with me on the Shroomery (and understand ALL of my analogies), I'd be willing to say it was intelligent... then I'd throw it into a cauldron of molten metal (T2 anyone?).

It's like the Kasparov vs. Deep Blue battles... Kasparov kicked its ass in the first tournament. Then, in the second tournament, the programmers changed the rules of the match (and plugged in EVERY recorded match Kasparov had ever played). It was decidedly unfair from the get-go.

Kasparov kicked its ass in the first tournament. Then, in the second tournament, the programmers changed the rules of the match (and plugged in EVERY recorded match Kasparov had ever played). It was decidedly unfair from the get-go.

Good example. This really does prove that AI is only a reproduction or a semblance of intelligence. As it was programmed, Deep Blue was given almost limitless possibilities to use during chess. But Kasparov beat it. Why? Simply because he is human and can think. If a computer isn't programmed specifically with a solution, it cannot think of one. A human can reassess a given situation and come up with an entirely new strategy. Then in the second match it was programmed with all of Kasparov's matches, so it was able to beat him. It was merely simulating the superior skills of a human. Computers can only put out what was already put in. At this point in time, I should add. If they ever do decide to start thinking independently, I apologize to them in advance. Please don't assimilate me.

--------------------"You people voted for Hubert Humphrey, and you killed Jesus." F and L in L.V.

I totally support research into parallel distributed processing (which is in the infant stages of modeling the brain). Once research into PDP goes far enough, they'll figure out how the brain works...
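For anyone curious what a PDP-style model actually looks like, here is a minimal sketch of a single connectionist unit (a perceptron) learning logical AND by adjusting its weights from errors. This is a toy illustration of the idea, not a brain model; the task, learning rate, and epoch count are my own choices for the example.

```python
# A single PDP-style unit (perceptron) trained on logical AND.
# Knowledge ends up distributed in the weights, not stored as rules.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge weights by the error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for AND: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND]
print(preds)  # the unit has learned AND: [0, 0, 0, 1]
```

Nothing in the final weights looks like a rule for AND; the behavior emerges from the trained connection strengths, which is the connectionist point.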

This is a very interesting question. IMHO, if a puter without prompting asked me why it was created and made an attempt at understanding itself and its creator, then I would have to say the simulation ends and autonomy begins. Beyond that, if I caught it dosing on high-frequency current to alter its newfound consciousness, I'd be pretty sure then I was dealing with an individual entity, not a cyber-golem. WR

I totally support research into parallel distributed processing (which is in the infant stages of modeling the brain). Once research into PDP goes far enough, they'll figure out how the brain works...

At some point you will have to lecture us on this subject and recommend a few books to read. I am VERY interested in the philosophy of mind. In fact, a section of it is one of the cornerstones of my perennial philosophy.

I have a few comments to make on your initial post but I never seem to have the time to post them.

This really does prove that AI is only a reproduction or a semblance of intelligence. As it was programmed, Deep Blue was given almost limitless possibilities to use during chess. But Kasparov beat it. Why? Simply because he is human and can think.

Sorry, that's silly. If being intelligent means that you can beat the world chess champion, then none of us is intelligent. The surprising thing is not that there are professional chess players out there who can beat computers. What really rocks is that there are COMPUTERS that can beat PROFESSIONAL CHESS PLAYERS. That's fucking awesome. Just 40 years ago, people thought that a computer that could beat a professional chess player could easily pass the Turing Test, too.

The reason machines are good at playing chess is that chess is a game made for machines. Chess is mostly about calculation, and not much about pattern recognition. Go, on the other hand, is a game based on pattern recognition, and there is not a single computer on the planet that can beat a professional Go player (there's actually something like a $500,000 prize on that). If we had a machine that could play both Go and chess at a professional human level, I'd venture to say we had an intelligent machine in every sense of the word. A machine that had a human level of pattern recognition would dream and have hallucinations.
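To make the "chess is about calculating" point concrete, here is a minimal sketch of minimax, the brute-force look-ahead at the heart of engines like Deep Blue: try every move, assume the opponent replies with their best move, and pick the branch with the best guaranteed outcome. The toy game tree and its leaf scores below are made up for illustration; a real engine adds a position evaluator and pruning, but the core is just this exhaustive calculation.

```python
# Minimax over a toy game tree. Inner lists are choice points,
# numbers are scores for terminal positions (higher = better for
# the maximizing player). No pattern recognition anywhere -- just
# exhaustive calculation, which is why chess suits machines.

def minimax(node, maximizing):
    """Return the best score reachable with optimal play by both sides."""
    if isinstance(node, (int, float)):   # leaf: a final position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies: we pick a branch, then the opponent picks the leaf
# that is worst for us. Branch 1 guarantees min(3, 12) = 3,
# branch 2 guarantees 2, branch 3 guarantees 1 -- so we take branch 1.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, True))  # 3
```

Go defeats this approach in practice because its branching factor is vastly larger than chess's, so the tree cannot be searched deeply enough, and human play there leans on the pattern recognition the poster describes.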