
Eugene McDermott Professor in the Department of Brain and Cognitive Sciences; Director of NSF Center for Brains, Minds and Machines, MIT

"Turing+" Questions

Recent months have seen an increasingly public debate taking shape around the risks of AI (Artificial Intelligence) and in particular of AGI (Artificial General Intelligence). A letter signed by Nobel prizewinners and other physicists described AI as a top existential risk to mankind. The robust conversation that has erupted among thoughtful experts in the field has, as yet, done little to settle the debate.

I argue here that research on how we think, and on how to make machines that think, is good for society. I call for research that integrates cognitive science, neuroscience, computer science, and artificial intelligence. Understanding intelligence and replicating it in machines goes hand in hand with understanding how the brain and the mind perform intelligent computations.

The convergence of recent progress in technology, mathematics, and neuroscience has created a new opportunity for synergies across fields. The dream of understanding intelligence is an old one. Yet, as the debate around AI shows, this is now an exciting time to pursue this vision. We are at the beginning of a new and emerging field, the Science and Engineering of Intelligence, an integrated effort that I expect will ultimately make fundamental progress of great value to science, technology, and society. I believe that we must push ahead with this research, not pull back.

A top priority for society

The problem of intelligence—what it is, how the human brain generates it, and how to replicate it in machines—is one of the great problems in science and technology, together with the problem of the origin of the universe and of the nature of space and time. It may be the greatest of all because it is the one with a large multiplier effect: almost any progress in making ourselves smarter, or in developing machines that help us think better, will lead to advances on all the other great problems of science and technology.

Research on intelligence will eventually revolutionize education and learning. Systems that recognize how culture influences thinking could help avoid social conflict. The work of scientists and engineers could be amplified to help solve the world's most pressing technical problems. Mental health could be understood on a deeper level, suggesting better ways to intervene. In summary, research on intelligence will help us understand the human mind and brain, build more intelligent machines, and improve the mechanisms for collective decisions. These advances will be critical to the future prosperity, education, health, and security of our society. This is the time to greatly expand research on intelligence, not the time to withdraw from it.

Thoughts on machines that think

We are often misled by "big", somewhat ill-defined, long-used words. Nobody so far has been able to give a precise, verifiable definition of what general intelligence or thinking is. The only definition I know that, though limited, can be practically used is Turing's. With his test, Turing provided an operational definition of a specific form of thinking—human intelligence.

Let us then consider human intelligence as defined by the Turing test. It is becoming increasingly clear that there are many facets of human intelligence. Consider, for instance, a Turing test of visual intelligence—that is, questions about an image or a scene. Questions may range from what is there, to who is there, to what this person is doing, to what this girl is thinking about this boy, and so on. We know by now, from recent advances in cognitive neuroscience, that answering these questions requires different competences and abilities, often rather independent from each other, often corresponding to separate modules in the brain.

For instance, the apparently very similar questions of object and face recognition (what is there vs. who is there) involve rather distinct parts of visual cortex. The word intelligence can be misleading in this context, as the word life was during the first half of the last century, when popular scientific journals routinely wrote about the problem of life, as if there were a single substratum of life waiting to be discovered that would completely unveil the mystery.

Of course, speaking today about the problem of life sounds amusing: biology is a science dealing with many different great problems, not just one. Intelligence is one word but many problems—not one Nobel prize but many. This is related to Marvin Minsky's view of the problem of thinking, well captured by his slogan "Society of Mind". In the same way, a real Turing test is a broad set of questions probing the main aspects of human thinking. For this reason, my colleagues and I are developing a framework around an open-ended set of Turing+ questions in order to measure scientific progress in the field. The plural in questions is to emphasize that there are many different intelligent abilities that have to be characterized, and possibly replicated in a machine, from basic visual recognition of objects, to the identification of faces, to gauging emotions, to social intelligence, to language, and much more.

The term Turing+ is meant to emphasize that a quantitative model must match human behavior and human physiology—the mind and the brain. The requirements are thus well beyond the original Turing test. An entire scientific field is required to make progress on these questions and to develop the related technologies of intelligence.
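To make the idea concrete, here is a purely illustrative sketch of how such an open-ended battery of Turing+ questions might be represented and scored against human reference answers. All names, categories, and answers below are hypothetical examples, not the actual benchmark under development; behavioral match is shown, while the physiological-match requirement is only noted in comments.

```python
from dataclasses import dataclass

# Hypothetical sketch of a Turing+ battery: each question probes one facet
# of intelligence (a separate ability, possibly a separate brain module)
# and carries a reference answer from human subjects. A full Turing+ test
# would also require matching human physiology, omitted here.
@dataclass
class TuringPlusQuestion:
    ability: str       # e.g. "object recognition", "face identification"
    prompt: str        # the question posed about a scene
    human_answer: str  # reference behavioral answer from human subjects

def score(questions, machine_answers):
    """Fraction of questions on which the machine matches human behavior."""
    matches = sum(
        q.human_answer == machine_answers.get(q.prompt, "")
        for q in questions
    )
    return matches / len(questions)

# A tiny example battery over a single imagined scene.
battery = [
    TuringPlusQuestion("object recognition", "What is there?", "a dog and a ball"),
    TuringPlusQuestion("face identification", "Who is there?", "a girl and a boy"),
    TuringPlusQuestion("action understanding", "What is this person doing?", "throwing the ball"),
]

answers = {
    "What is there?": "a dog and a ball",
    "Who is there?": "a girl and a boy",
    "What is this person doing?": "kicking the ball",  # behavioral mismatch
}

print(score(battery, answers))  # 2 of 3 answers match human behavior
```

The point of the sketch is simply that the test is a plural, extensible set: new abilities are added as new questions, each scored independently, rather than a single pass/fail judgment of "intelligence".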

Should we be afraid of machines that think?

Since intelligence is a whole set of solutions to rather independent problems, there is little reason to fear the sudden appearance of a superhuman machine that thinks, though it is always better to err on the side of caution. Of course, each of the many technologies that are emerging, and will emerge over time, to solve the different problems of intelligence is likely to be powerful in itself, and therefore potentially dangerous in its use and misuse, as most technologies are.

Thus, as is the case in other parts of science, proper safety measures and ethical guidelines should be in place. In addition, there is probably a need for constant monitoring—perhaps by an independent supranational organization—of the supralinear risk created by the combination of continuously emerging technologies of intelligence. All in all, however, not only am I not afraid of machines that think, but I find their birth and evolution one of the most exciting, interesting, and positive events in the history of human thought.