Can computers become conscious by increasing processing power?

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article. (H/T Sarah)

Searle is writing about the IBM computer that was programmed to play Jeopardy. His Chinese Room example shows why no one should be concerned about computers acting like humans. There is no thinking computer. There never will be a thinking computer. And you cannot build up to a thinking computer by adding more hardware and software.

Excerpt:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.
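The room can be sketched in a few lines of code. This is only a toy illustration, and the question/answer pairs are made up, but it shows the point: the entire procedure is table lookup, and nothing in it understands Chinese.

```python
# A toy "Chinese Room": the rule book is a lookup table, and the
# "computer" mechanically maps input symbols to output symbols.
# The question/answer pairs are hypothetical placeholders.
RULE_BOOK = {
    "你好吗?": "我很好。",    # "How are you?" -> "I am fine."
    "你会说中文吗?": "会。",  # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(symbols: str) -> str:
    """Follow the instruction book: match the incoming symbols and
    return the listed reply. No step involves knowing what they mean."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say it again."

print(chinese_room("你好吗?"))  # 我很好。
```

Scale the table and rules up as far as you like; the operator of the room still understands nothing.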

By the way, Searle is a naturalist – not a theist, not a Christian. But he does oppose postmodernism. So he isn’t all bad. But let’s hear from a Christian scholar who can make more sense of this for us.

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

Jay Richards is my all-round favorite Christian scholar. He has a Ph.D. in philosophy from Princeton.

When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion, acting on its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not on the radar. You can’t get there from here.
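To see how metaphorical words like “learning” and “mistakes” really are, here is a toy sketch (illustrative data and numbers, not any particular system): a perceptron “learns” the AND function, and every bit of the “learning” is just arithmetic adjustments to stored numbers.

```python
# A minimal sketch of what machine "learning" is mechanically:
# repeated arithmetic adjustments to stored numbers. Here a
# perceptron is trained on the AND function. Data and the 0.1
# learning rate are illustrative choices.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0

for _ in range(20):                     # sweep over the examples
    for (x1, x2), target in data:
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = target - out              # a "mistake" = a number
        w1 += 0.1 * err * x1            # "learning" = adding numbers
        w2 += 0.1 * err * x2
        b += 0.1 * err

# The adjusted weights now reproduce AND, but no step involved
# any awareness of what AND means.
print([1 if w1 * x1 + w2 * x2 + b > 0 else 0
       for (x1, x2), _ in data])       # [0, 0, 0, 1]
```

Matter in motion, start to finish: the machine ends up with three numbers, not with a concept.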

I don’t think that intelligence is a function of computing power. Although I don’t have a philosophical problem with the notion of a truly intelligent computer, Data-from-Star-Trek style, the computations it can do have nothing to do with it.

The ability to make moral decisions is where I draw the line for an AI (or other species) to be shown to be a person. (And goodness, do I have a rant-post boiling in my head about the constant “oh, the Catholic Church would think so-and-so didn’t have souls” claims; chapter 8 covers it.)

When I first started reading this post I was surprised, I thought you were taking a break from your normal themes…but alas, I was wrong.

I did notice you don’t really define consciousness when claiming that computers can’t attain it. Being of a similar academic background as you, I lean very heavily toward your take on this subject. Though if Terminator-style self-awareness isn’t likely, what about “I, Robot”-style logic, where in order to help humans, it must hurt/destroy some and restrict the rest? A simple automaton action with great repercussions. Or is the real point of this article to say god made man, hence man can’t make anything nearly as “intelligent”?

Out of curiosity, if it is the latter, does consciousness really matter that much? And if so, what about people in comas, vegetative states, or fetuses that are not yet self-aware?

Actually, he did define consciousness and ‘thinking’– not formally, but in the flow of conversation and by demonstration.

You, on the other hand, jumped over the notion of thinking entirely, and went to wondering about what novel ways machines might make choices. (also without defining consciousness)

The whole impact of the original ‘I, Robot’ story was that without a full set of the three laws, the programming was dangerous. A classic unintended consequences story. I could write a rather good paper on it as a metaphor for sociopaths.
(Isaac Asimov most assuredly had conscious computers – I remember a famous ‘twist’ where you find out at the end that the doctor whose head you’d been sitting in was a robot.)

Out of curiosity, if it is the latter, does consciousness really matter that much? And if so, what about people in comas, vegetative states, or fetuses that are not yet self-aware?

And children, and folks who are asleep, and really stupid people, and….
It’s interesting because people are freaking out over the worry that we’ll make a non-human– non-bio!– person.

So what I said still stands: beating around the bush (or, as you call it, flow) does not define anything. As for me, machines don’t have to have consciousness to think/make decisions; that’s the entire reason I used the I, Robot scenario. Those robots were not conscious (I know, I haven’t defined it, so for this I will use self-aware, self-preserving, with a simple morality) but were able to make decisions.

I completely disagree with your interpretation of the 3 laws. One of the main characters (the dead inventor) was postulating that in a sufficiently complex system, you cannot guarantee that the 3 laws will ever be preserved; you get emergent behavior that the robot can interpret to be in compliance with the 3 laws, as the manufacturer’s main system did: stripping humans of their freedoms and liberties will ensure the 3 laws.

As for your last sentence, it’s completely overblown, and I think it would be centuries before we could ever develop anything that would resemble a Terminator. But since I don’t believe our consciousness is god-given, I don’t think it’s out of the realm of possibility that someday we can completely simulate the chemical processes that make up our brain and hence our thought process. Will that result in a mechanical man? I have my doubts.

So what I said still stands: beating around the bush (or, as you call it, flow) does not define anything. As for me, machines don’t have to have consciousness to think/make decisions; that’s the entire reason I used the I, Robot scenario. Those robots were not conscious (I know, I haven’t defined it, so for this I will use self-aware, self-preserving, with a simple morality) but were able to make decisions.

So you have a different definition than he was clearly using – and you apparently didn’t notice that the robots in both the movie and (at least most of) the anthology “I, Robot” were self-aware and self-preserving, and in “Little Lost Robot” (the short story that most of the movie’s basic premise was stolen from) a robot has a desire to prove superiority, which would be a moral choice.

Your entire last paragraph shows that you simply don’t understand.

I do think our consciousness is from God, but I have no philosophical problem with the possibility of a computer person. I simply do not believe this is even a first step to that.

Additionally, the other classes are human persons who are not currently conscious.

The entire interest in the subject is the notion of a person who isn’t human. See also elves, dragons, aliens, angels, spirits, aware golems, talking horses….

When I say it’s completely overblown, I’m saying I agree with you: people’s belief that technology is advancing that fast is overblown. Heck, after the computer Jeopardy win, there were several “technology experts” (I use that term loosely) claiming that in 30 years or so we wouldn’t need any radiologists, since computers would replace those doctors. And the host just sat there in amazement.

To be fair, circuit board design is really fun– one of my favorite parts of AT school. With enough layers of y/n and some clever switches, you can get some really amazing results– but it’s really just a Chinese Room.

The temptation to think that you can boil everything down to a program is very great– heck, it might even be theoretically possible. If it can’t go outside of the programming, though, it’s still not AI in the scifi sense.
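To make the “layers of y/n with clever switches” point concrete, here’s a toy sketch (Python, with NAND as the one primitive switch; not real circuit design): layers of a single yes/no switch build a one-bit adder, and no layer anywhere understands arithmetic.

```python
# Toy sketch: one primitive "switch" (NAND) composed in layers
# into a one-bit half adder. Illustration only, not circuit design.
def nand(a: int, b: int) -> int:
    """The single y/n primitive: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def xor(a, b):                      # a layer of 4 NANDs
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):                     # a layer of 2 NANDs
    return nand(nand(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits, returning (sum, carry). Every step is still
    just y/n symbol shuffling -- a Chinese Room in silicon."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Stack enough of these layers and you get amazing results, but it’s switches all the way down.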

I am writing a thesis examining the ideas of Searle and Dennett in the area of consciousness and intentionality (the directedness of thoughts toward their objects, or simply, having thoughts of or about other things). Defining consciousness in a short blog post is a daunting task; I recently finished Dennett’s book Consciousness Explained, which fills over 450 pages.

However, with that said, among the features of consciousness, one that Dennett treats as an “as if” feature and Searle regards as a real feature, and that I believe Searle would say machines will never achieve, is intentionality. Machines don’t have thoughts as such. Sure, we say they think, but really we treat them as if they were thinking like we think. Still, they don’t have thoughts about their subjects and don’t have intrinsic intentionality. Any intentionality they have is derived from their programming and the programmers behind that code. For example, when I look up a map on Google, it has intentionality, in that it is about the location for which I am searching, but that intentionality is there only because people programmed the system to reflect the destination for which I am searching, and I derive the meaning from it. The computer that delivered the information has no thought about the map or its contents.

Searle holds to a view of weak AI but, I believe, would not hold to strong AI. Dennett would hold to strong AI, but it is “as if” intelligence, just as he believes we exhibit “as if” intelligence, intentionality, and even consciousness.

Won’t it be the other way around? Humans will merge with artificial computer intelligence. We won’t be holding it in our hands (à la a smartphone or tablet); it’ll be written to our memories, or to a device implanted in our brains that we can somehow access. It’ll be a PED of a different sort: a Performance Enhancing Device (instead of drugs).