Minds and Machines

Last February, the IBM supercomputer Watson won an exhibition game of the American TV show “Jeopardy!” against two of its best contestants. This was a significant advance on Deep Blue, another IBM supercomputer, which had defeated the reigning world chess champion Garry Kasparov in 1997. Watson’s win was hailed in the popular media as heralding the “triumph of machine intelligence over the human”. Of course it was nothing of the sort. It was the triumph of a top team of human researchers at IBM, aided by hundreds of others from many of the leading technological universities in the US, who had programmed Watson over five years and at a cost of $3 million.

In my last post I referred to EP guru Steven Pinker’s claim that the human mind is a “system of organs of computation designed by natural selection to solve the problems faced by our natural ancestors.” In this “computational theory of the mind”, the mind is treated as a set of computer programs or “modules” that are being executed in the electrical wiring (“hardware”) of your brain even as you read this page. Linked to this is the key assumption that what the mind-brain essentially does is “process information”, and this is usually understood as the manipulation of symbols by rules or algorithms. By using a common terminology (e.g. “information”, “intelligence”, “neural networks”) when discussing minds, brains and computers, the human-machine barrier is easily straddled. The mind is both naturalized and computerized. And the brain can now be described as an incredibly powerful microprocessor, the mother of all motherboards.

It requires a certain philosophical sophistication to see through the sleights of hand that end up reducing human minds and persons to bundles of neural activity in the brain. As Ludwig Wittgenstein famously put it, when looking back on the naive philosophy of science (“logical positivism”) that had once seduced him in the 1920s: “A picture held us captive. And we could not get outside it, for it lay in our language and language seemed to repeat it to us inexorably.” (Philosophical Investigations, 1953)

There is nothing new in the way scientists take the most advanced machines of their day as models or analogies for human functioning. Steam engines and telegraph systems have served this purpose before. But there is a short (though calamitous) step from modelling to identification. We then imagine that machines which help us perform certain functions have those functions themselves. We humanize the machines even as we mechanize humans. When we speak of “clocks telling the time”, what we mean is just that they enable us (conscious human persons) to tell the time. Walking sticks don’t actually walk, and running shoes don’t run. The same applies to “radar searching for aircraft”, “telescopes discovering black holes” or “smart phones remembering our appointments”: they do not literally search, discover or remember. If there were no conscious human persons using these prosthetic tools, these activities would not happen.

In one of the most cited philosophical papers of recent decades (“Minds, Brains, and Programs”, 1980), John Searle invited us to imagine someone totally ignorant of Chinese seated in a closed room and receiving inputs of Chinese symbols. He is also given a rule-book for processing these symbols, so he can manipulate them and produce an output. Suppose that the input of Chinese is in the form of questions. It would appear, then, from the output symbols that the person in the room was answering the questions. However, he has not understood anything that was passing through his hands. Searle used this analogy to argue that electrical flows in computers do not count as the processing of symbols, since symbols are symbols only to those who understand them as symbols. It is wrong to imagine the mind as analogous to a super-computer, because in the absence of minds computers do not do what minds do.
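The room’s rule-following can be sketched as a toy program (the rule book and its entries below are hypothetical illustrations, not Searle’s own): the lookup proceeds purely by matching symbol shapes, and nothing in the code grasps what the symbols mean.

```python
# A toy "rule book": a purely syntactic mapping from input strings to
# output strings. To the program these are opaque tokens, just as the
# Chinese characters are opaque to the person in Searle's room.
RULE_BOOK = {
    "你好吗?": "我很好。",          # "How are you?" -> "I am well."
    "你叫什么名字?": "我叫王小明。",  # "What is your name?" -> "My name is Wang Xiaoming."
}

def chinese_room(question: str) -> str:
    """Produce an answer by rule-governed symbol manipulation alone."""
    # A stock "please say that again" reply for any unmatched input.
    return RULE_BOOK.get(question, "请再说一遍。")
```

To an outside observer the output can look like competent answering, yet nothing in the room (or the program) understands Chinese; understanding is supplied only by the conscious persons outside it.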

In our IT-obsessed age, not only is information confused with knowledge, but the special engineering use of the term is confused with meaning. A meaningful message may actually have less information (from a technical point of view) than a sentence made up of pure gibberish. It is all a matter of the range of alternatives from which the message is selected and their prior probabilities. As Claude Shannon, a pioneer of the mathematical theory of communication, reminds us, the “semantic aspects of communication are irrelevant to the engineering aspects.”
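Shannon’s point can be made concrete with a short sketch (the sample strings are my own illustrations): the empirical entropy of a string, H = −Σ pᵢ log₂ pᵢ over its symbol frequencies, measures bits per symbol. A repetitive meaningful sentence scores lower than near-gibberish whose letters are spread almost uniformly, because information in the engineering sense tracks statistical surprise, not meaning.

```python
import math
from collections import Counter

def entropy_bits(text: str) -> float:
    """Empirical Shannon entropy in bits per symbol:
    H = -sum(p_i * log2(p_i)) over observed symbol frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

meaningful = "the cat sat on the mat the cat sat"   # few, repeated symbols
gibberish = "qzjxkvwypf bghmcd ntrls euoai qzjx"    # near-uniform letters

print(entropy_bits(meaningful))  # lower: predictable, redundant
print(entropy_bits(gibberish))   # higher: more surprise per symbol
```

The gibberish carries more “information” in Shannon’s technical sense precisely because it is less predictable, while meaning plays no role in the calculation at all.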

In a loose sense of “inform”, the books I write (and this blog) may be said to be “filled with information” and stored in print (or on the internet) indefinitely. However, it is strictly only potential information that can be inscribed and stored outside a conscious mind. Once the concepts of information, informing and being informed are liberated from a conscious someone being informed or intending to inform, language goes on holiday (another Wittgensteinian expression) and reason disappears.

Distinguishing person-talk from neuro-talk, and neuro-talk from computer-talk, is indispensable if we are to explore the distinctively human and rescue the humanities and human sciences.


5 Responses to "Minds and Machines"

I see this common phenomenon in the practice of medicine. Modern investigations such as CT, MRI and ultrasound scans are spoken of by the experts as if “CT can diagnose a given condition better than MRI”, and so on. We rapidly forget the use of our minds in the synthesis of a diagnosis using these tools. The net result is poor clinicians who have little time to correlate clinical examination with imaging investigations (a to-and-fro process between the bedside and the tests). Hence interaction and debate about the patient’s reality is subordinated to trusting the report (rather than a robust discussion with colleagues, even at Multi-Disciplinary Meetings) which, when correlated with the clinical findings, could result in a different (more accurate) view of reality. “Blessed are the clinicians who enjoy the satisfaction of using both clinical examination and investigative tests with insight into their limitations.”

We are indeed obsessed with information, but one could argue that perhaps the human brain is also created to ‘forget’ some information. Brain-machine interfaces that mimic the human brain model this with a ‘forgetting factor’. Maybe we need to learn from machines now!

Working in the area of brain-machine interfaces, I know I could never humanize the machine I make. It is indeed a great thing to be able to interpret the intentions of a person and convert them into physical movement. But the machine itself would never BE a human. Yes, we work every day to perfect the system we have created, but take away the human user it was intended for, and it reduces to a machine that someone programmed.

On the other hand, I wonder if I may be guilty of reducing the human mind to a bundle of neural networks. I treat it as such because it is easier to simplify a complex system in order to come up with a solution to a problem. But then again I am almost happy that the brain does not behave within the simplified box I have put it in. Maybe as a scientist I am not threatened that the brain does not fit into my box; instead, maybe I am happy that, of all the things God created, the human mind is the least understood of all.

Consciousness: that is the element which differentiates us as humans from machines. If someday someone can create consciousness and embed it into a robot/machine, the human-machine distinction will no longer exist.

You might enjoy my recent book, THE MIND AND THE MACHINE: WHAT IT MEANS TO BE HUMAN AND WHY IT MATTERS (Brazos, 2011), which provides a fairly in-depth critique of the computational view of the human person espoused by Dennett, Dawkins, etc.