conversations and learning in the digital world


In week 3 we contemplate the question: what is it to have a mind? What are the special properties that beings with minds have? What sorts of things have those properties: animals? Infants? Computers? This week we discuss some of the approaches contemporary philosophers have taken to the question of what it is to have a mind. We begin with Cartesian dualism, which claims that the mind is immaterial; continue to identity theory, the view that the mind is identifiable with physical matter; and finish with functionalism, according to which a mental state is essentially identified by what it does. In the second part, we concentrate on the problems that the thought experiments of Alan Turing and John Searle pose for these accounts of mind.

A common philosophical technique for determining what properties something has is to compare it with something that lacks those properties. Example: compare a day in the life of a tennis ball, a dog, and a human to contrast the characteristics of the daily existence of each.

The human mind can think thoughts about thoughts, imagine unreal states, plan for the future, and change its environment for survival. It also has conscious awareness: the "what is it like?" quality of experience that accompanies each thought process.

How do we characterize this "what-it-is-like-ness" of having a particular experience? Any story of how the mind works is going to have to explain why we have this "what-it-is-like-ness" and how it is that we are able to think about things.

Cartesian (or Substance) Dualism:

René Descartes, the iconic 17th-century philosopher of mind, believed minds were made of an immaterial substance, different from the human body. He reached this conclusion by arguing that the nature of the mind (a thinking, non-extended thing) is completely different from that of the body (an extended, non-thinking thing), and therefore it is possible for one to exist without the other. This argument gives rise to the famous problem of mind-body causal interaction, still debated today: how can the mind cause some of our bodily limbs to move (for example, raising one's hand to ask a question), and how can the body's sense organs cause sensations in the mind, when their natures are completely different?

Problem of Causation:

Elisabeth of Bohemia, his correspondent, challenged Descartes' view. She asked: if we have this immaterial substance, how does it effect changes in the physical body? This is known as the Problem of Causation. For physical things to move, including human bodies, there must be some physical impetus changing their physical state.

The Problem of Causation: how does an immaterial substance cause a physical substance to move? Thoughts, beliefs, and desires can cause particular behaviours, yet behaviours happen in physical bodies.

Interestingly, Elisabeth introduces her own nature as female as one bodily "condition" that can impact reason. While Descartes concedes that a certain threshold of bodily health is necessary for the freedom that characterizes rational thought, he disregards Elisabeth's appeal to the "weakness of my sex" (http://plato.stanford.edu/entries/elisabeth-bohemia/).

Minds, Brains and Computers

Physicalism is the view that everything that exists is physical: minds and bodies are made of exactly the same kind of stuff. Physicalism is sometimes known as "materialism".

There are different views that spell out the idea of physicalism, some of which are:

Identity theory is the idea that mental states or properties are neurological states or properties; that is, if two things are physically identical, then they will be psychologically identical. It asserts that mental events can be grouped into types and then correlated with types of physical events in the brain. Identity theory is a reductionist view, as it reduces the psychological to the physical.

There are two ways of spelling out the Identity Theory:

Token/Type Identity:

Type = category
Token = instance

For example: a class of objects such as cars is a category, i.e. a type, while a particular car, say an Audi, may be called a token: a representative instance of the type car.

Type identity theory claims that for every type of mental phenomenon there is a corresponding type of physical phenomenon, i.e. certain types of brain state are identical with certain types of mental state. So all happy mental states would be identical with a certain sort of brain state: a type of psychological state is identical to a type of physical state. This form of the theory assumes two things:

Every time you are in a certain mood – such as being happy – there is the same corresponding brain state

The same mood/brain state relationship occurs in everyone else

Token identity theory, by contrast, only admits that for every token mental phenomenon there is some physical phenomenon; it allows that brains may not all function in exactly the same way to produce mental states. So, for example, according to token identity theory, two people might both be happy at the same time, yet their brain states could be different.

A problem for Type Identity theory:

Hilary Putnam argued that type identity theory is too narrow. The theory ties particular mental states to particular physical states, all reduced down to the human brain and the human body. But other species also feel pain. The problem is the same for actual species such as fish and hypothetical species such as aliens. Given that these species have very different ways (very different chemical brain states) of realising a sensation such as pain, how can we assume that such an experience is identical with only one kind of brain state? This greatly reduces the strength of identity theory. There are two options here: either we assume that such creatures do not have experiences similar to ours, or we admit that conscious experiences such as pain are "multiply realisable".

The key point for Putnam is that mental states are multiply realisable: a mental state such as pain can be realised in a variety of different physical states. Each species has a different chemical make-up and can realise pain in a different manner. Hence a particular psychological state can be realised in many different physical ways. This is the thesis of multiple realisability.
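A loose programming analogy (my own sketch, not from the original text) may help here: multiple realisability is the claim that the same "role" can be played by physically different underlying systems. The class names below are invented for illustration.

```python
# Two physically different "substrates" realising the same mental state.
class HumanBrain:
    def feel_pain(self):
        # realised by C-fibre firing (the classic identity-theory example)
        return "pain"

class OctopusBrain:
    def feel_pain(self):
        # realised by a very different, distributed nervous system
        return "pain"

def reacts_to_injury(creature):
    # Only the role matters here, not the underlying "hardware".
    return creature.feel_pain() == "pain"

print(reacts_to_injury(HumanBrain()))    # True
print(reacts_to_injury(OctopusBrain()))  # True
```

The point of the analogy: nothing in `reacts_to_injury` depends on how `feel_pain` is implemented, just as, for Putnam, nothing in the concept of pain depends on one particular brain chemistry.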

Functionalism

Hilary Putnam thought that we should understand mental states in terms of their function. Instead of thinking about what brains are made of, we should think about what functions they perform. Functionalism is the approach that concentrates on what the mind does, i.e. the function of mental activity. Functionalists claim that trying to give an account of mental states in terms of what they're made of is like trying to explain what a chair is in terms of what it's made of. What makes something a chair is whether it can function as a chair. Putnam's claim was that we should identify mental states not by what they're made of, but by what they do.

Mental states are caused by sensory stimuli and certain beliefs. They also cause behaviours and new mental states.
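That causal-role picture can be sketched as a tiny state table (my own toy example, not Putnam's; the state and stimulus names are invented). Each mental state is defined entirely by which stimuli lead into it and which behaviours and new states it leads to.

```python
# Functional roles as a transition table:
# (current mental state, sensory stimulus) -> (behaviour, new mental state)
transitions = {
    ("calm",   "sees_bear"): ("runs_away", "afraid"),
    ("afraid", "bear_gone"): ("relaxes",   "calm"),
}

def respond(state, stimulus):
    # A state is "fear" not because of what it is made of, but because
    # of where it sits in this causal network.
    behaviour, new_state = transitions[(state, stimulus)]
    return behaviour, new_state

print(respond("calm", "sees_bear"))  # ('runs_away', 'afraid')
```

Any system with the same table, whether neurons or silicon, would count as having the same mental states on this view, which is exactly how functionalism sidesteps Putnam's objection to type identity theory.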

Computational theory of mind

Mind as a computer is the view that the human mind and/or human brain is an information-processing system and that thinking is a form of computing. One could argue that minds are information-processing machines: they take information provided by our senses and our other mental states, process it, and produce new behaviours and mental states. We equate minds with computers in their input-process-output function.

Source: Wikipedia

Turing Machines

If minds are computing machines, then how complex does an information-processing system need to be before it counts as a mind? Alan Turing invented the Turing machine in 1936, calling it an "a-machine" (automatic machine), and later, in 1950, proposed a test as a response to this question. The Turing test is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed this scenario: a human judge engages in natural-language conversations with a human and with a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions; it checks how closely the answers resemble typical human answers.
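To make "computing machine" concrete, here is a minimal Turing-machine simulator (an illustrative sketch; the particular states and rules are my own toy example, not Turing's). The machine reads a tape one symbol at a time and, following a fixed rule table, writes a symbol, moves the head, and changes state; this one flips every bit and halts at the blank.

```python
def run(tape):
    # (state, symbol read) -> (symbol to write, head move, next state)
    rules = {
        ("flip", "0"): ("1", +1, "flip"),  # write 1, move right
        ("flip", "1"): ("0", +1, "flip"),  # write 0, move right
        ("flip", "_"): ("_",  0, "halt"),  # blank cell: halt
    }
    tape = list(tape) + ["_"]  # "_" marks the blank end of the tape
    state, head = "flip", 0
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("0110"))  # 1001
```

Despite its simplicity, this rule-table-plus-tape architecture is, in principle, enough to compute anything any digital computer can; that is what gives the question "is the mind such a machine?" its force.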

Problems for Turing Test:

It's language-based: all testing depends on intelligence conveyed through language, so we cannot test animal intelligence, since animals cannot talk.

It's too anthropocentric: we're testing for human intelligence, and it seems chauvinistic to think that the only intelligence worth studying is human intelligence. There could be other forms of intelligence out there.

It doesn't take the inner state of the machine into account. If a machine is able to pass the Turing test, one must still look into that particular machine to see what it's made of. Would this machine count as having a mind?

The idea that the mind is a computing machine is certainly an attractive one. However, there are problems with that view.

The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence. The argument centres on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese. The argument is intended to show that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Searle’s argument is a direct challenge to proponents of Artificial Intelligence, and the argument also has broad implications for functionalist and computational theories of meaning and of mind.
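Searle's point, that rule-following over symbols involves no grasp of meaning, can be made vivid by reducing the room to code (a deliberately crude sketch; the example sentences are my own invention). Nothing in the program represents what any Chinese sentence means; it only matches input strings to output strings.

```python
# Searle's rulebook as a lookup table: pure syntax, no semantics.
rulebook = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm well, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(symbols):
    # The "person in the room": applies a rule to the input symbols and
    # returns output symbols, understanding nothing about either.
    return rulebook.get(symbols, "请再说一遍。")  # default: "Please repeat."

print(room("你好吗？"))  # 我很好，谢谢。
```

From outside, the room "converses" in Chinese; inside, there is only string matching. Searle's claim is that a digital computer, however sophisticated its rulebook, is in exactly this position.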