Wednesday, November 14, 2007

So... Artificial Intelligence. An outdated term, perhaps, but a recognizable one nonetheless, and it will be used here in its original sense. I tend to prefer artificial general intelligence (AGI), to preserve the distinction between the search for genuine sentience (strong artificial intelligence) and the desire for limited (but unquestionably useful) applications of AI.

Where does artificial intelligence stand today? Surely it has become more of a fanciful term, a sci-fi author's meal ticket, a crazy scientist's late-night obsession. The grand promises of the 1950s have yet to be realized, for the mind is a complicated game.

Sure, there has been progress. In October of this year, the simplest known universal Turing machine was proved universal (by a 20-year-old undergraduate, no less). Government agencies such as DARPA are pouring millions into both general and specific AI, from the Integrated Learner to the Grand Challenge. Ray Kurzweil and Ben Goertzel are shouting news of the Singularity from every street corner. Silicon Valley's prematurely rich are getting bored and doing AI research of their own (see Jeff Hawkins' Redwood Neuroscience Institute). Virtual worlds such as Second Life are becoming foster homes for baby AIs. Google is the new Bell Labs. Roboticists and computer scientists are finally beginning to collaborate with neuroscientists and cognitive scientists. Throw developmental biologists and philosophers into the mix, and we've got something serious beginning to cook. Could the age of strong AI be here at last?

The problem with progress in this field, I think, lies not with our ability to create intelligent machines... not yet. We must first transcend the disciplines into which we divide our science; it is almost shameful for a person to admit to being interested in philosophy or psychology, at least to anyone involved in mathematics, computer science, computational biology, or physics. The arrogance with which those in the "hard sciences" assert themselves is not only unseemly, it is dangerously counterproductive. It aggravates the tension between disciplines, and it seals the knowledge and wisdom of each within membranes as exclusive as the disciplines themselves.

May I be forgiven for all the time I looked down my nose at those in the social sciences or the humanities. May we hubristic scientists somehow prove ourselves worthy of the knowledge possessed by those immersed in the study of the human mind. May we somehow overcome our need for labels, our desperate desire to climb the scientific hierarchy, our need to satisfy ourselves with the knowledge that our work takes the most math or processing power to complete.

May we overcome the desire for the latest fashion in science: interdisciplinary teams of researchers, churning out papers with 20+ authors, treating scientific knowledge like a cheap, trendy accessory. Do take a moment, if you've got one, to read this spectacular article by Sean Eddy, a scientist who refuses to confine himself to one discipline. Some of the choicer quotes from the article:

"Progress is driven by new scientific questions, which demand new ways of thinking. You want to go where a question takes you, not where your training left you."

"Molecular biologists even worried about what to call themselves, like we argue over whether we're computational biologists or bioinformaticians. Any revolution needs to find the right slogan to unify under. Francis Crick explained, 'I myself was forced to call myself a molecular biologist because when inquiring clergymen asked me what I did, I got tired of explaining that I was a mixture of crystallographer, biophysicist, biochemist, and geneticist, an explanation which in any case they found too hard to grasp' [4]."

"Perhaps the whole idea of interdisciplinary science is the wrong way to look at what we want to encourage. What we really mean is "antedisciplinary" science—the science that precedes the organization of new disciplines, the Wild West frontier stage that comes before the law arrives. It's apropos that antedisciplinary sounds like "anti-disciplinary." People who gravitate to the unexplored frontiers tend to be self-selected as people who don't like disciplines—or discipline, for that matter.

One can't deny that science is getting more complex, because the sheer amount of knowledge is growing. But the history of science is full of ideas that seemed radical, unfathomable, and interdisciplinary at the time, but that now we teach to undergraduates. Every generation, we somehow compress our knowledge just enough to leave room in our brains for one more generation of progress. This is not going to stop.

It may take big interdisciplinary teams to achieve certain technical goals as they come tantalizingly within view, but someone also needs to synthesize new knowledge and make it useful to individual human minds, so the next generation will have a taller set of giants' shoulders to stand on. Computer science mythologizes the big teams and great computing engines of Bletchley Park cracking the Enigma code as much as we mythologize the Human Genome Project, but computer science rests more on the lasting visions of unique intellectual adventurers like Alan Turing and John von Neumann. Looking around my desk at the work I'm trying to build on, I do see the human genome paper, but even more, I see the work of individual pioneers who left old disciplines and defined new ones—writing with the coherence, clarity, and glorious idiosyncrasy that can only come from a single mind."

2 Comments:

Seven Theses On Consciousness, Intelligence, and Artificial Intelligence, Exclusive to "Adventures of a Wetware Hacker"!

1. We need big new ideas to understand consciousness, but not to achieve AGI. Some patchwork combination of already known algorithms and architectures will be sufficient for the latter.

2. The natural sciences, as presently constituted, cannot explain consciousness, because they are reducible to physics, and you cannot even get basic sensory qualities like color out of existing physics, which is conceived only in geometric and algebraic terms.

3. This can be overcome if the quest for the neural correlates of consciousness is combined with the study of consciousness as it presents itself to the individual. Phenomenology is a guide to ontology - whatever entities and relationships do exist in reality, we know that those which show up in consciousness must be among them. Conversely, physics can be reduced to a mathematics largely independent of ontology - all we need is for some part of the mathematics to relate to observations. Therefore, a phenomenological ontology of conscious states has a unique ability to tell us what physics is really about, by telling us what NCCs, physically described, really are. The relationships between ontology and physical formalism established in the case of NCCs could then, one hopes, be extended to the rest of physics.

4. The key to understanding consciousness (which I take to be an aspect of your concept of sentience) thus turns out to be something which must come, at least in part, from within. I might go further and suggest that the key is to be found in the concept of self. The intuitive pre-scientific experience of the world can be roughly described as an experience of things through the senses and an experience of the self through thought. The scientific experience of the world focuses very much on things, even though it employs thought to make progress. When science attempts ontology, it attempts to explain everything using concepts developed on the thing side of the thing/self dichotomy. But in fact a whole other set of concepts - such as 'sensation' and 'thought'! - can be developed by making the self the object of investigation. The synthesis of physics and ontology will lie in knowing some NCC-like thing, formally described in the sense-originated language of physics, to be the very same "thing" known introspectively as the self.

5. However, it is not likely that this degree of insight is necessary in order to achieve AGI, nor do I think it likely that an entity must actually possess consciousness to have intelligence as presently understood - because intelligence is presently understood in terms of functional competence, the ability to solve problems or achieve goals. Earlier, I distinguished between experience of things and experience of the self, though they are both just aspects of experience as a whole. Similarly, one could talk about intelligence regarding things and intelligence regarding the self - meaning a capacity to get the facts right, solve problems, etc., involving things or self, as the case may be.

6. In AI it is sometimes argued that self-intelligence is the key to consciousness: the day when Cyc knows that propositions about "Cyc" are about itself will be the day when Cyc wakes up. I do not agree, and until one has an ontological theory of selfhood, this is actually just mysticism. I can certainly exhibit a toy ontology in which this would not be so: simply suppose that consciousness is always and only monadic (i.e., that the only things which truly have consciousness are elementary in some sense), but that we assemble groups of these elementary things into causal aggregates which have the ability to report correctly on their own properties as an aggregate. Functionally this is self-intelligence, but ontologically it is not.

7. Revisiting proposition 1: we don't need big new ideas to achieve AGI, but we do need big new ideas if we are to understand what we are doing when we do it. We know enough to copy it functionally, but not enough to understand it ontologically. This has to be a dangerous situation, because even the most benevolently programmed AGI will still be capable of making a mess if its ontology is wrong.
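The toy ontology of thesis 6 can be sketched in a few lines of code. This is purely illustrative: the names Monad and Aggregate are hypothetical, invented for this sketch, and nothing here makes any claim about consciousness. It only demonstrates the functional half of the distinction, an aggregate that reports correctly on its own aggregate properties:

```python
from dataclasses import dataclass


@dataclass
class Monad:
    """An elementary unit; in the toy ontology, only these are candidates
    for 'truly having' consciousness."""
    state: float


class Aggregate:
    """A causal aggregate of monads that can report correctly on its own
    properties as an aggregate. This is functional self-intelligence;
    the toy ontology attributes no consciousness to the aggregate itself."""

    def __init__(self, monads):
        self.monads = list(monads)

    def self_report(self):
        # The aggregate describes itself accurately: its size and mean state.
        states = [m.state for m in self.monads]
        return {"size": len(states), "mean_state": sum(states) / len(states)}


agg = Aggregate([Monad(0.0), Monad(1.0), Monad(2.0)])
report = agg.self_report()
assert report == {"size": 3, "mean_state": 1.0}  # the self-report is correct
print(report)
```

The point of the sketch is that `self_report` is correct about the aggregate without any further fact about the aggregate being conscious: the functional and ontological readings of "self-intelligence" come apart exactly as thesis 6 claims.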