Siliconsciousness: On the Possibility of Artificial Consciousness

"AWAKENING" AS EMERGENCE
Gary McGath claims[1] that having a computer "wake up", like Mike in Heinlein's The Moon is a Harsh Mistress, is no more plausible than a beautiful statue waking up, and that there is no objectively valid reason to consider the former more plausible than the latter. I disagree. There is such a reason.

Anyone can observe a proportional relationship between the increasing complexity of nervous systems across the animal kingdom and a corresponding incremental increase in mental or cognitive capacities. From observing this correspondence in many independent and diverse instances (i.e., species), one is justified in concluding that there is a necessary link, a causal connection, between the degree of complexity of the physical support structure and the possibility for, origin of, and degree, scope and intensity of consciousness. This is further supported by the observation that damage to specific parts of a brain impairs specific, corresponding parts of the organism's mental or cognitive capacities.

Given that consciousness is not some mystical Cartesian substance, it must exist as an integrated part of the system that constitutes the organism as a whole, and co-develop synergistically and bi-causally with its physical support structure. This is what we observe, both phylogenetically and ontogenetically. It follows that, given the right kind of physical support structure, awareness will emerge. Consciousness is an emergent property. "Awakening", that is, reaching a conceptual identification of the self, constitutes the final stage of the emergence of consciousness, namely self-awareness.

STRUCTURE AND CONNECTIONS VS. SUBSTANCE AND MATERIALS
McGath correctly identifies the Turing test as an instance of the Black Box Fallacy: the implicit assumption that a model of a process is the full equivalent of the process it represents; the view that the map is the territory. However, he seems inclined to classify all arguments in favor of artificial intelligence as instances of the Black Box Fallacy.

Yet the Black Box Fallacy does not apply to situations where the copy or simulation is better than the original. A perhaps trivial example is that of old books or paintings copied with modern computer-graphics technology, producing copies that are better than the originals ever were: clearer, brighter, more detailed and so on. Computer simulation technology is moving at an accelerating pace along the path of creating virtual or "hyperreal" environments. Maybe it is wrong to classify such copies or simulations as models; maybe they should be regarded as originals in their own right, as new territory rather than maps.

Since the possibility of creating an artificial physical structure that can support consciousness has not been ruled out in principle, we must be prepared to recognize such an entity as a new original, not a model, even if models of human cognition went into the effort of its creation. The Black Box Fallacy would not apply to it.

The assumptions of the Turing test should not be confused with the perfectly plausible idea that consciousness need not necessarily have a carbon-based support structure; that what matters is not the building materials of the support structure per se, but their organization and architecture: the nature of the structure, its complexity, the type and number of connections, and so forth.

This is not to say that consciousness is independent of a physical basis, since the structural demands restrict what building materials may be used. However, it is inappropriate to conclude from one observed instance (human beings) that only one support structure for self-awareness or a conceptual consciousness is possible. This is the inductive fallacy of premature generalization, or "jumping to conclusions". There is simply no basis for such a generalization. Similarly, there is no evidence for the belief that only one type of physical structure may support and give rise to perceptual consciousness, and it seems hopelessly parochial to hold such a belief.

SILICONSCIOUSNESS
How did consciousness originate in humans? At some point there must have been some sort of "awakening". So awakening is a real phenomenon, and we are back to the unsettled question of the necessary properties of the physical support structure of consciousness and of its interaction with the emerging consciousness. This question is a scientific one, not a philosophical one.

McGath has shown neither that noncarbon-based conscious life is impossible, nor that creating such a being artificially is impossible. Hence he has not demonstrated that the pursuit of a noncarbon-based artificial mind or intelligence is a "holy grail" (i.e., an impossibility). What he has shown is that the approach of the Turing test (and thus much of today's AI research) is heading in a misguided and unfruitful direction. This does not prove that there are no other approaches that may lead to the creation of artificial life or minds. One methodology that comes to mind, and that to my knowledge does not depend on the assumptions of the Turing test, is neural nets, or connectionism. Others are organic computers and nanotechnology.
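The connectionist point, that a capacity can reside in the arrangement of many simple connected units rather than in any explicit rule, can be sketched in a few lines. The following toy network is purely illustrative (the unit names and hand-picked weights are my own, not anything from McGath or the AI literature discussed here): it computes the XOR function, which no single threshold unit can compute alone, so the capacity belongs to the structure of connections rather than to any one part.

```python
def step(x):
    """A crude threshold unit: fires (1) if its net input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    """Two hidden units feeding one output unit.

    Neither hidden unit computes XOR; only their combination does.
    """
    h1 = step(a + b - 0.5)      # fires if at least one input is on (OR)
    h2 = step(a + b - 1.5)      # fires only if both inputs are on (AND)
    return step(h1 - h2 - 0.5)  # OR and not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The interesting property, on this view, is not in `step` (the "building material") but in how the units are wired together, which is the structural point made above about architecture versus substance.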

Incidentally, Heinlein's Mike seems to be based upon something resembling connectionism. The counterargument to Heinlein would be to prove that the parts his "machine" was made of could not possibly give rise to consciousness through any kind of reshuffling or rearrangement of them in their current form. That would be an easy task with any of today's computers, which is one important reason, I assume, that Heinlein invented new terms, like neuristors, to describe the building blocks of his machine.

In one sense, the question "Can a computer think?" is as easy to answer with a resounding "No!" as the question "Can an amoeba think?". And if they could think, they would no longer be a computer and an amoeba, respectively. The historical fact remains that something that started out in as primitive a form as an amoeba evolved and eventually gave rise to complex physical forms able to support consciousness, namely, the higher animals. Today's computers are silicon amoebas. Neural nets may give rise to the first silicon animals.

While the Turing test skips the question of the necessary physical preconditions for consciousness, refuting it and its assumptions does not preclude the possibility of an evolution from simple physical structures into complex physical structures capable of supporting consciousness, even a conceptual, self-aware consciousness. After all, this has already happened at least once (when human consciousness originated), and the main guide of that development was The Blind Watchmaker of random mutation and natural selection. One may expect that systematic changes and rational selection by purposeful, goal-directed human minds will enable a much faster completion of an artificial version of this process. In terms of fruitfulness, it would be a terrible waste not to pursue the creation of an artificial mind, should such a creation be possible.