Artificial Intelligence Is Lost in the Woods

But there’s a solution to these problems. Suppose we set aside the gigantic chore of building a synthetic human body and make do with a mind-in-a-box or a mind-in-an-anthropoid-robot, equipped with video cameras and other sensors–a rough approximation of a human body. Now we choose some person (say, Joe, age 35) and simply copy all his memories and transfer them into our software mind. Problem solved. (Of course, we don’t know how to do this; not only do we need a complete transcription of Joe’s memories, but we also need to translate them from the neural form they take in Joe’s brain to the software form that our software mind understands. These are hard, unsolved problems. But no doubt we will solve them someday.)

Nonetheless: understand the enormous ethical burden we have now assumed. Our software mind is conscious (by assumption) just as a human being is; it can feel pleasure and pain, happiness and sadness, ecstasy and misery. Once we’ve transferred Joe’s memories into this artificial yet conscious being, it can remember what it was like to have a human body–to feel spring rain, stroke someone’s face, drink when it was thirsty, rest when its muscles were tired, and so forth. (Bodies are good for many purposes.) But our software mind has lost its body–or had it replaced by an elaborate prosthesis. What experience could be more shattering? What loss could be harder to bear? (Some losses, granted, but not many.) What gives us the right to inflict such cruel mental pain on a conscious being?

In fact, what gives us the right to create such a being and treat it like a tool to begin with? Wherever you stand on the religious or ethical spectrum, you had better be prepared to tread carefully once you have created consciousness in the laboratory.

The Cognitivists’ Best Argument

But not so fast! say the cognitivists. Perhaps it seems arbitrary and absurd to assert that a conscious mind can be created if certain simple instructions are executed very fast; yet doesn’t it also seem arbitrary and absurd to claim that you can produce a conscious mind by gathering together lots of neurons?

The cognitivist response to my simple thought experiment (“Imagine you’re a computer”) might run like this, to judge from a recent book by a leading cognitivist philosopher, Daniel C. Dennett. Your mind is conscious; yet it’s built out of huge numbers of tiny unconscious elements. There are no raw materials for creating consciousness except unconscious ones.

Now, compare a neuron and a yeast cell. “A hundred kilos of yeast does not wonder about Braque,” writes Dennett, “… but you do, and you are made of parts that are fundamentally the same sort of thing as those yeast cells, only with different tasks to perform.” Many neurons add up to a brain, but many yeast cells don’t, because neurons and yeast cells have different tasks to perform. They are programmed differently.

In short: if we gather huge numbers of unconscious elements together in the right way and give them the right tasks to perform, then at some point, something happens, and consciousness emerges. That’s how your brain works. Note that neurons work as the raw material, but yeast cells don’t, because neurons have the right tasks to perform. So why can’t we do the same thing using software elements as raw materials–so long as we give them the right tasks to perform? Why shouldn’t something happen, and yield a conscious mind built out of software?