Mindless Intelligence

Posted by jofr

Jordan Pollack argues in one of his recent papers, Mindless Intelligence, that it was a mistake of AI to focus on symbolic reasoning. The processing of symbols and logical thought makes human minds unique, yet the minds themselves operate on principles that rely on neither symbols nor logical reasoning: “Most of what our brains are doing involves mindless chemical activity not even distinguishable from digestion of the food”. Our minds are built from mindless stuff. Our brains “are not instruction set computers”, as Pollack says; “they’re complicated biological networks with all kinds of feedback at all levels, like metabolisms, gene regulatory networks, and immune systems.” He defines mindless intelligence as intelligent behavior ascribed (by an observer) to any process lacking a mind-brain:

“we must recognize that many intelligent processes in Nature perform more powerfully than human symbolic reasoning, even though they lack any of the mind-like mechanisms long believed necessary for human “competence.” Once we recognize this and start to work out these scaleable representations and algorithms without anthropomorphizing them, we should be able to produce the kind of results that will get our work funded to the level necessary for growth and deliver beneficial applications to society, without promising the intelligent English-speaking humanoid robot slaves and soldiers of science fiction”

He identifies self-* properties and processes as essential for complex living beings:

“Wherever we look in Nature, we see amazingly complex processes to which we can ascribe intelligence, yet we observe symbolic cognition in only one place, and only there as a result of introspection. Many of these natural processes have been studied under the aegis of complex systems or have been given the prefix ‘self’ or ‘auto.'”

And he argues that examining these self-* processes will bring us closer to the original goals of AI. The problem of AI (or strong AI) is not a problem in the sense of “could it possibly exist?”; it is evidently an engineering problem (see here and here). Since we all agree on AI’s fundamental hypothesis, that physical machines have the capacity for human-level intelligence, is there any greater intellectual and engineering challenge? Pollack says:

“AI, which represents one of the greatest intellectual and engineering challenges in human history—and should command the same fiscal resources as efforts to cure cancer or colonize Mars—is sometimes relegated to a laughingstock, because we can’t prevent bogus claims from cropping up in newspapers and books.”