A stunning misinterpretation of a point I have posted again and again...

In response to posts like this one, I have been asked again and again questions like "Why do you deny the possibility that machines might be able to think?"

This is rather stunning to me, as I have posted numerous times, on this very same blog, my affidavit that I do not now deny, and never have denied, this possibility for even a second! I have posted the example of a thermostat turning up the heat because it "feels cold," not because I deny the possibility that the thermostat feels cold, but because I want to see if my AI enthusiast correspondents are willing to be consistent, and admit that their thermostat might truly feel cold, just as their Turing-test-passing machine might truly be conversing.

So I do not deny the possibility that, say, when I type "ls" at a Unix system prompt, my Unix computer "knows" that I want it to list the files in my current directory. What I do deny is that a more involved program than "ls," simply because what it does is more complicated and harder to follow than "ls," should suddenly be deemed to be thinking, while the "ls" command is not so deemed. It seems to me that such people are simply punting: once a program gets too complicated for them to understand why it does what it does, they revert to cargo-cultism and declare, "Ooh, magic!"
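To make the point concrete: the essence of what "ls" does can be sketched in a few mechanical lines (this is a hypothetical illustration, not the actual GNU or BSD implementation):

```python
import os

def ls(path="."):
    # A mechanical directory read: the program maps an input (a path)
    # to an output (sorted entry names). Nothing here "knows" anything;
    # the question is whether adding more such steps would change that.
    return sorted(os.listdir(path))
```

A program that passes the Turing test differs from this only in having vastly more such steps, which is precisely the point at issue.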

So, people who have objected to my recent Turing posts: let me put forward a (hypothetical) metaphysical position in a series of assertions:

1) Electrons orbit around atomic nuclei because they feel very attracted to protons.
2) Hydrogen atoms unite with oxygen atoms because they know they will achieve a lower energy state by doing so.
3) Microorganisms understand that if they move away from toxins they will survive better.
4) The Unix "ls" program knows that I want it to list the files in my current directory.
5) The latest IBM chess program understands how to checkmate its opponents.

I have never denied that 5) might be true. But what I do deny is that any of my critics can formulate any reason to reject 1-4 while asserting 5. And the fact that they keep asserting that I think 5 is "impossible," I suggest, means that they have an ideological attachment to asserting proposition 5, even while denying propositions 1-4.

16 comments:

I have never denied that 5) might be true. But what I do deny is that any of my critics can formulate any reason to reject 1-4 while asserting 5.

Assume evolution is true. Then, thinking animals must have evolved from non-thinking animals. At an advanced enough level, machines must be able to think. Mr. Data was a "sentient being" -- though that was sometimes challenged (perhaps by crotchety bloggers).

2 is woo. 3 is exactly the point at issue in the Turing test. This is the "calling the bluff" aspect someone mentioned. In the case where finally people cannot distinguish, what do you say? Four legs good, two plugs bad?

You could define "knows," "feels," and "understands" in such a way that 1-5 become true.

Even if you stick to more conventional definitions you could never really tell if 1-5 were true or not.

But this seems beside the point as far as AI and the Turing test is concerned. The Turing test according to Wikipedia is "a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human."

It sets the bar quite high and none of the entities in 1-5 meets that criterion (though of course they may all have rich inner lives, but not be very good at communicating).

Your evasion of the central issue is becoming just a wee bit off-putting here, rob. And the fact that you think the problem might be that I don't know what the Turing test is, is even more so.

The fact that you think my 1-5 are "beside the point" is a symptom of the evasion. The issue is, "If X *appears* to be doing Y, do we have to concede it is doing Y?"

Turing's claim is "If a machine *gives the appearance* of thinking, then one MUST admit it IS thinking."

I am asking if you are willing to be consistent here. If a thermostat *appears* to be chilly (it turned the heat up!) are you willing to say that we MUST admit it IS chilly? Your bringing up that the thermostat did not pass the Turing test is a complete red herring and another evasion: I am not asking if it is as intelligent as a human being, just whether it feels chilly!
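The thermostat's entire observable behavior, after all, amounts to a single comparison (a hypothetical minimal model with made-up values, not any particular device's firmware):

```python
def thermostat(current_temp, setpoint):
    # The whole "mental life" of the device: compare and act.
    # When the room is colder than the setpoint, it "turns the heat up,"
    # which is exactly the behavior we are being asked to interpret.
    return "heat on" if current_temp < setpoint else "heat off"
```

The question is whether this appearance of feeling chilly must, on Turing's principle, be admitted as the real thing.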

"why would you assert that five must be true while denying 1-4 must be true?"

I don't think I do. I said that the answer depends upon the definition of terms used.

You paraphrase Turing's view as "If a machine *gives the appearance* of thinking, then one MUST admit it IS thinking.".

If you define "thinking" to be merely the act of running some sort of algorithm the way that a thermostat does, then yes, a thermostat is thinking. But I do not think that would be a very useful definition. AI is normally interested in higher levels of intelligence, which is why a definition of thinking as "running an algorithm that simulates human thinking" would (in my view) be more useful. And defined in this way, I would argue even 5 fails.

You could define thinking as "thinking exactly like a human thinks" and then even passing the Turing test would not prove that a machine thinks, and you would have to take the agnostic view that you refer to.

"If a thermostat *appears* to be chilly (it turned the heat up!) are you willing to say that we MUST admit it IS chilly?"

Yes, I would. But my thermostat doesn't appear to me to be capable of feeling chilliness. My definition of "chilly" would be that it is a feeling only applicable to sentient beings, and I do not think mine would pass whatever the equivalent of the Turing test is for sentience.

(BTW: I appear to have a writing style and tone that annoy you. It is rarely intentional, and certainly isn't here).

1. yes, as long as we define our terms to be consistent with the answer being yes. You rejected this as an evasion (and didn't like my dragging in the Turing test stuff).

2. yes, but that doesn't mean my thermostat feels chilly, because in my subjective view (and using a conventional view of feeling chilly) my thermostat doesn't appear to be feeling chilly. You rejected this because my thermostat apparently passes the (undefined) Turing test for chilliness, and therefore my subjective views on its feelings of chilliness don't count.

3. I suggested that the statement might make sense when applied to advanced algorithms but not when applied to more basic ones. You rejected this because it defines "thinking" as "appearing to think." So here we are having a discussion about under what criteria it may be valid to conclude that appearance is reality, but apparently it's against the rules to suggest that some things (such as human-style thinking) may actually meet these criteria.

For instance, 1) is an obvious evasion: you are dragging in "well, we could define it that way" when that is not what we are talking about. The issue is, why does your "subjective" impression count against the thermostat, but my "subjective" impression isn't allowed with a machine that appears to converse? (The Turing test is DESIGNED to rule out this "subjective" belief. Why is it suddenly allowed as relevant for the thermostat?)

By the way, I am sure you are not consciously evading this issue: you shy away from it because if you looked it squarely in the eye, your view would collapse. So it is quite natural for you to be baffled as to what you are evading!