Our Cognitive Evolution

How did the human brain acquire its incomparable power? Our species emerged less than 200,000 years ago, yet it has no "new" modules compared to other primates. Our brains retain vestiges of our evolutionary ancestors. The vertebrate (e.g., fish) nervous system is very old, and we have kept elements of the vertebrate brain, especially in the organization of spinal cord and brainstem systems. One radical change in evolution occurred in the transition from the aquatic to the terrestrial environment: new "modules" arose to deal with the more complex demands of this environment, in the form of the thalamic, basal ganglia, and cortical "modules" evident in the mammalian brain. The changes in brain structure between lower and higher mammals are related to size rather than to any novel structures. There was dramatic growth in the size of the cerebral cortex between higher mammals and monkeys, but the difference between the monkey brain, the ape brain, and the human brain is again one of size. In comparing these three brains, we find that the primary cortical areas (those dealing with sensory and motor functions) are similar in size, whereas in higher species the secondary and especially tertiary cortical areas (those performing higher-level processing of sensory and motor information) undergo dramatic increases in size, especially in the human. That is, we have conserved a number of brain structures throughout evolution, but we seem simply to have more of everything, especially cortex (Donald 1991).

As individuals, the factors that determine the anatomy of our cortex are genes, environment, and enculturation (Donald 1991). For instance, the structure of the basic computational unit of the cortex, the cortical column, is set genetically. However, the connectivity between cortical columns, which confers great computational power based on experience, is set by the environment, especially during critical stages in development. Moreover, the process of enculturation determines the plastic anatomical changes that allow entire circuits to be engaged in everyday human performance. This can be demonstrated experimentally. Genetic mutations lead to dramatic deficits in function; but even when there is no genetic problem, if environmental exposure is prevented (for example, by covering the eyes during a critical period of development), lifelong deficits (blindness) result. If both genetic and environmental factors proceed normally but enculturation is withheld, symbolic skills and language fail to develop, with drastic effects.

The unprecedented growth of the cortex, exposed to culture, allowed us to develop more complex skills, language, and unmatched human performance. It is thought that our capacity to acquire symbolic skills is what has led to our higher intelligence. Once we added symbols, alphabets, and mathematics, biological memory became inadequate for storing our collective knowledge. That is, the human mind became a "hybrid" structure built from vestiges of earlier biological stages, newer evolutionarily driven modules, and external symbolic memory devices (cultural "peripherals" such as books and computers), which, in turn, have altered its organization, the way we "think" (Donald 1991). Just as we use our brain power to continue to develop technology, that technological enculturation shapes the way we process information, the way our brain is organized. This implies that we are more complex than any creatures before us, and that we may not yet have reached our final evolutionary form. Since we are still evolving, the inescapable conclusion is that nanotechnology can help drive our evolution. This should be the charge to our nanoscientists: develop nanoscale hybrid technology.

What kind of hybrid structures should we develop? It is tempting to focus nanotechnology research on brain-machine integration, to develop implantable devices (rather than peripheral devices) that "optimize" detection, perception, and responsiveness, or that increase "computational power" or memory storage. If we are ever to do this, we need to know how the brain processes information. Recent progress in the brain sciences, in a sense, parallels advances in computation. According to Moore's Law, advances in hardware enable a doubling of computing and storage power every 18 months, but this has not led to similar advances in software development, as faster computers seem to encourage less efficient software (Pollack 2002, this volume). Similarly, brain research has given us a wealth of information on the hardware of the brain, its anatomical connectivity and synaptic interactions, but this explosion of information has revealed little about the software the brain uses to process information and direct voluntary movement. Moreover, there is reason to believe that we tailor our software, developing more efficient "lines of code" as we grow and interact with the environment and culture. In neurobiological terms, the architecture of the brain is determined genetically, the connectivity pattern is set by experience, and we undergo plastic changes throughout our lives in the process of enculturation. Therefore, we need to hone our skills on the software of the brain.
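The compounding implied by the 18-month doubling cited above can be made concrete with a few lines of arithmetic. This is an illustrative sketch only; the 18-month figure comes from the text, while the 10-year horizon is an arbitrary example, not a number from the source.

```python
def growth_factor(months, doubling_period_months=18):
    """Capacity multiplier after `months`, doubling every `doubling_period_months`."""
    return 2 ** (months / doubling_period_months)

# Over a decade (120 months), capacity doubles 120/18 ≈ 6.7 times,
# a roughly hundredfold increase in raw computing and storage power.
decade = growth_factor(120)
```

The point of the passage survives the arithmetic: hardware capacity compounds geometrically, while software efficiency does not.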

What kind of software does the brain use? The brain does not work like a computer; it is not a digital device but an analog one. The majority of computations in the brain are performed in analog format, in the form of graded receptor and synaptic potentials, not the all-or-none action potentials that, after all, end up inducing other graded potentials. Even groups of neurons, entire modules, and multi-module systems all generate waveforms of activity, from the 40 Hz rhythm thought to underlie the binding of sensory events to slow potentials that may underlie long-term processes. Before we can ever hope to implant or drive machines at the macro, micro, or nano scale, the sciences of information technology and advanced computing need to sharpen our skills at analog computing. This should be the charge to our information technology colleagues: develop analog computational software. However, we do not have to wait until we make breakthroughs in that direction, because we can go ahead and develop nanoscale peripherals in the meantime.
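The contrast between graded potentials and all-or-none states can be sketched with a leaky integrator, a standard minimal model of a neuron's membrane potential. This is an illustrative toy, not a claim about the brain's actual "software": the parameter values (`tau`, `dt`, the input pulse) are assumptions chosen only to show a continuously varying, analog quantity.

```python
def leaky_integrator(inputs, tau=20.0, dt=1.0):
    """Integrate a sequence of input currents with an exponential leak.

    Returns the graded membrane potential at each time step: a continuous
    analog trace, not a discrete on/off state.
    """
    v = 0.0
    trace = []
    for i in inputs:
        v += dt * (-v / tau + i)  # leak toward rest, plus driving input
        trace.append(v)
    return trace

# A brief input pulse produces a smoothly rising, then smoothly decaying
# potential -- the graded behavior described in the text.
trace = leaky_integrator([1.0] * 5 + [0.0] * 20)
```

The value carried between time steps is a real number at every instant, which is what distinguishes this style of computation from the binary state transitions of a digital device.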

Have you ever been envious of people who seem to have no end of clever ideas, who are able to think quickly in any situation, or who seem to have flawless memories? Could it be that they're just born smarter or quicker than the rest of us? Or are there some secrets that they might know that we don't?