Archive for ‘AI’

about Searle’s “Chinese Room” argument. My response is a bit long for a comment, so I’ll respond here.

Understanding

Here’s how Coel frames the issue:

You’ve just bought the latest in personal-assistant robots. You say to it: “Please put the dirty dishes in the dishwasher, then hoover the lounge, and then take the dog for a walk”. The robot is equipped with a microphone, speech-recognition software, and extensive programming on how to do tasks. It responds to your speech by doing exactly as requested, and ends up taking hold of the dog’s leash and setting off out of the house. All of this is well within current technological capability.

Did the robot understand the instructions?

My answer would be “obviously not.” So, according to Coel, that makes me a Searlite. If I had agreed that the robot understood, then he would say that I’m a Dennettite.

Could a computer ever be conscious? I think so, at least in principle.

As O’Brien says, people have very different intuitions on this question. My own intuition disagrees with that of O’Brien.

Assumptions

After a short introduction, O’Brien presents two starting assumptions, which he uses to support his intuition on the question.

Empirical assumption 1: I assume naturalism. If your objection to computationalism comes from a belief that you have a supernatural soul anchored to your brain, this discussion is simply not for you.

Personally, I do not assume naturalism. However, I also do not believe that I have a supernatural soul. I don’t assume naturalism, because I have never been clear on what such an assumption entails. I guess it is too much metaphysics for me.

That blog post has a link to the podcast. I listened to that podcast this morning, and will comment on it in this post.

I have been clear that I am skeptical of computationalism. And Pigliucci is equally clear that he, too, is a skeptic. But I don’t plan to repeat those earlier posts here.

Analog computation

What surprised me about the discussion was that O’Brien emphasized analog computation. Perhaps O’Brien is conceding that there might be problems with computationalism in the form of digital computation.

I remember, perhaps around 15 years ago, somebody arguing for analog computation rather than digital computation. This was in a usenet post, and possibly the poster was Stevan Harnad. I remember, at the time, that my response was something like:

there is a research effort beginning at MIT, aimed at coming up with some of the more human elements that have, up till now, been missing from AI projects (h/t Walter).

So here is my prediction. This project will fail. The project may come up with a lot that is interesting and perhaps valuable. It may be deemed to have been worth the cost. But I expect that it will fail to achieve the stated goal. In a way, this is an easy prediction. Thus far, AI research has a perfect record of failure when it comes to producing something that looks like human intelligence.

From the report:

At a new center based at the Massachusetts Institute of Technology, researchers will seek to craft intelligence that includes not just knowledge but also an infant’s ability to intuit basic concepts of psychology or physics.

I am sometimes asked to explain why I am skeptical about the possibility of AI (artificial intelligence). In this post, I shall discuss where I see the problems. I sometimes express my skepticism by way of expressing doubt about computationalism, the view of mind that is summed up with the slogan “cognition is computation.”

Terminology

I’ll start by clarifying what I mean by AI.

Suppose that we could give a complete map or specification of a person, listing all of the atoms in that person’s body, and listing their exact arrangement. Then, armed with that map, we set about creating an exact replica. Would the result of that be a living, thinking person? My personal opinion is that it would, indeed, be a living thinking person, a created twin or clone of the original person that was mapped.

Let’s use the term “synthetic person” for an entity constructed in that way. It is synthetic because we have put it together (synthesized it) from parts. You could summarize my view as saying that a synthetic person is possible in principle, though it would be extremely difficult in practice.

To build a synthetic person, we would not need to know how it functions. Simply copying a real biological person would do the trick. However, if we wanted to create some sort of “person” with perhaps different materials and without it being an exact copy, then we would need to understand the principles on which it operates. We can use the term “artificial person” for an entity so constructed.

My own opinion is that an artificial person is possible in principle, but would be very difficult to produce in practice. And to be clear, I am saying that even if we have full knowledge of all of the principles, we would still find it very difficult to construct such an artificial person.

As I shall use the term in this post, an artificial intelligence, or an AI, is an artificial person built primarily using computation. In the usual version, there are peripheral sensors (input devices) and effectors (output devices), but most of the work is done by a central computer, so it can be said to be computation.

Physicist David Deutsch has an interesting article on AI in aeon magazine. I thank Ant for bringing it to my attention in a comment on another blog. My view of AI is rather different from that of Deutsch, though I agree with some of what he has to say.

I started this blog in order to discuss some of what I have learned about human intelligence, as a result of my own study of AI. It turns out that I have not actually posted much that is directly on the topic of AI. So I am using this post mainly as a vehicle to present my own views, though I will present them in the form of commentary on Deutsch’s article. I’ll note that Deutsch uses the acronym AGI for Artificial General Intelligence, by which he means something like human intelligence created artificially.

I’m a bit late adding my two cents to the blog debate between PZ Myers and Ray Kurzweil. If you want to review the debate, then a good place to start would be with PZ’s August 21 post on “Kurzweil still doesn’t understand the brain”, and follow some of the links from that post.

I was reminded of the debate by a recent John Wilkins post. And when I reread Kurzweil’s response to the first PZ post, I noticed how well it raises some of the design-versus-evolution themes that I have been raising in this blog. This post will comment on some of what Kurzweil has posted.

For starters, I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports.

I’ll go on record as doubting that the brain will be reverse engineered within 100 years.

I presented a number of arguments as to why the design of the brain is not as complex as some theorists have advocated.

There we see a key point. Kurzweil is talking about the design of the brain. He is looking at the brain as a designed thing rather than as an evolved thing. We generally see “reverse engineering” as a way of retrieving the underlying design from a designed thing. However, the brain is not a designed thing; it is an evolved thing, and evolved things are very different from designed things. Since the brain is not a designed thing, there is no underlying design to retrieve, and thus the planned reverse engineering is bound to fail. Or, to put it differently, the brain was not engineered in the first place, so there is no engineering step that could be reversed.

A little later, Kurzweil says:

To summarize, my discussion of the genome was one of several arguments for the information content of the brain prior to learning and adaptation, not a proposed method for reverse-engineering.

And there is another illustration of the problem. For the developing embryo interacts with its environment from the start, even before the brain begins to form. Adaptation of the developing foetus is well underway before there is a brain. So talk of “content of the brain prior to learning and adaptation” is seriously confused.

As best I can tell, Kurzweil seems to be thinking of the brain as a fixed designed thing that is engineered from a detailed blueprint in the genome. And he takes this fixed designed thing to be an information processing system. He sees adaptation as an information processing detail.

By way of contrast, I see adaptation as distinctively biological. If I look at a tree in my back yard, I see how it adaptively grows, and thus modifies its own shape, so as to better gain access to the available light. And if I were to look at the roots, I expect that I would find the same kind of adaptive growth to better find nutrients in the soil. This cannot be a matter of information processing in the brain, for the tree has no brain and no neurons.

The goal of reverse-engineering the brain is the same as for any other biological or nonbiological system – to understand its principles of operation.

Fair enough. But Kurzweil assumes that those “principles of operation” are information processing principles. It is far more likely that they are biological principles, not information technology principles, and that those biological principles are already being intensively studied within biology. In terms of biological principles, my brain probably works in much the same way as Kurzweil’s brain. In terms of information processing principles, the chances are that any two brains are very different.