Saturday, April 2, 2016

The Church-Turing Thesis, Turing Computable Functions, and the Human Mind

First, the Church-Turing Thesis.

The Church-Turing Thesis states that any mathematical function whose value can be obtained by an effective procedure or method (an algorithm) can also be computed, and its value obtained automatically, by a Turing machine (see Copeland 1997, “The Thesis and its History”).

To be “effective,” the algorithm or procedure must consist of a finite number of instructions, terminate in a finite number of steps, and be one that could in principle be carried out by a human being doing manual calculations (see Copeland 1997, “The Thesis and its History”).

A function that can be computed by a Turing machine following an effective method is Turing computable; conversely, for every Turing computable function there is an effective algorithm by which a Turing machine can obtain its value.

This is brought out very well in explicit statements by Turing in “Computing Machinery and Intelligence” (1950):

“The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer. The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail. We may suppose that these rules are supplied in a book, which is altered whenever he is put on to a new job. He has also an unlimited supply of paper on which he does his calculations.” (Turing 1950: 436).

“The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact mimic the actions of a human computer very closely.

The book of rules which we have described our human computer as using is of course a convenient fiction. Actual human computers really remember what they have got to do. If one wants to make a machine mimic the behaviour of the human computer in some complex operation one has to ask him how it is done, and then translate the answer into the form of an instruction table.” (Turing 1950: 438).
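Turing's picture of the human computer maps directly onto the formal machine: a “book of rules” (a finite instruction table), an “unlimited supply of paper” (the tape), and a current position and state. As a rough illustrative sketch (the rule-table format and the binary-increment example are my own toy illustration, not Turing's), such a machine can be simulated in a few lines of Python:

```python
def run_turing_machine(tape, rules, state="start", halt="halt"):
    """Follow a fixed rule table (state, symbol) -> (state, write, move)
    until the halting state is reached -- Turing's 'book of rules'."""
    tape = list(tape)
    head = 0
    while state != halt:
        if head < 0:               # extend the "unlimited paper" leftwards
            tape.insert(0, "_")
            head = 0
        if head >= len(tape):      # ...and rightwards
            tape.append("_")
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip("_")

# Rule table for incrementing a binary number: scan right, then carry left.
increment = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_turing_machine("1011", increment))  # 1011 (eleven) -> "1100" (twelve)
```

The point of the sketch is only that the whole procedure is effective in Turing's sense: a finite table of fixed rules, mechanically followed, with no deviation.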

But even Turing recognised unsolvable problems that cannot be computed by Turing machines, e.g., Hilbert’s Entscheidungsproblem (the “Decision Problem”) relating to the predicate calculus.

Copeland (1997, “Misunderstandings of the Thesis”) points to a number of myths that exist about the Church-Turing Thesis:

(1) people have forgotten the sense in which Turing stated that the function had to be “effective.”

(2) although Turing did prove that a Universal Turing Machine is capable of computing any function that any specific Turing machine can compute, he did not state that

(a) a Universal Turing Machine can carry out any procedure or set of explicitly stated rules of any kind,

(b) nor did he say that a Universal Turing Machine could compute any function that any type of computer, with any kind of architecture, was capable of computing, since there are machines, either hypothetical (such as hypercomputers) or possibly even physically possible using known or unknown physical processes, that could perform functions that are not Turing computable.

(3) the Church-Turing Thesis does not state that the human mind is equivalent to a Turing machine, since there may well be aspects of the mind that transcend Turing machines.

(4) the Church-Turing Thesis does not entail that the human brain or any physical or biological phenomenon can be simulated by a Turing machine. In reality, if the human mind or a natural or biological phenomenon or process involves functions which are not effectively calculable, then they cannot be simulated in a Turing machine. (Copeland 1997, “Misunderstandings of the Thesis”).

So a universal Turing machine has clear limitations.

And digital computers are a type of Turing machine.

Let us now review an important point about functions and computation.

Now a function is an operation which takes inputs and yields an output. The input of a function is its argument and its output is its value. An algorithm is a specific procedure, method or set of fixed rules used to find the value of the function, and any one function might have more than one algorithm for finding its value.

As long as there is an effective algorithm (in Turing’s sense of “effective”) that can be designed to give the value of a function, then the function is computable (Crane 2003: 88).
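The one-function, many-algorithms point can be made concrete with a toy example of my own (not from Crane): the greatest-common-divisor function is a single function, but there are several distinct effective procedures for obtaining its value.

```python
# One function (gcd), two different effective algorithms for its value.

def gcd_euclid(a, b):
    """Euclid's algorithm: repeated remainders -- few steps."""
    while b:
        a, b = b, a % b
    return a

def gcd_bruteforce(a, b):
    """Naive search downward from min(a, b) -- many steps, same function."""
    for d in range(min(a, b), 0, -1):
        if a % d == 0 and b % d == 0:
            return d

# Different algorithms, identical function: the same value for every argument.
print(gcd_euclid(48, 36), gcd_bruteforce(48, 36))  # both print 12
```

Either algorithm is effective in Turing's sense, so the function is computable; the function itself is indifferent to which procedure is used.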

Computing a function on a Turing machine, then, involves the use of an automatic algorithm, which represents the arguments and values of the function and the intermediate states, in order to find the value of the function. The fundamental concept here is representation.

Computing a function involves representation of information – that is, computers use symbols to represent something and then manipulate those symbols by means of algorithms. However, as John Searle has argued (Searle 1980, 1990), Turing machines engage in computation that is purely syntactic – there is no semantics. A computer, then, is an automated rule-governed symbol manipulation machine devoid of semantic understanding.

By the Church–Turing thesis, every effectively calculable function is computable by a Universal Turing Machine.

But there is a crucial distinction between (1) instantiating a function and (2) computing a function. Many physical processes instantiate what can be represented as a function, but they do so in a manner that is physically real and involves no symbolic representation of inputs or outputs.

Many such physical phenomena in nature are therefore not computational processes (Crane 1995: 104). For example, a person’s height can be represented with a number, but of course a person’s height is not actually a number.

Much activity in nature determined by physical and chemical laws is not representational and, therefore, not computational. A good example of this is Einstein’s famous equation E = mc², which means that energy is equivalent to mass multiplied by the speed of light squared.

For example, when matter is converted into energy in a nuclear explosion, this is a physical process in which the amount of energy released can be represented as the value of a function whose arguments are the mass of the matter converted and the speed of light.

But the explosion has performed no computation, since nothing has been explicitly represented. In a computer, however, the nuclear explosion can be simulated by computing a function. The computer would represent the arguments as numbers standing for physical quantities (the mass and the speed of light), and through a system of syntax (in this case a mathematical operation) effected by an algorithm it would find the value – another symbol that represents a quantity of the natural phenomenon (the energy released). Computation has occurred because explicit symbolic representation is present.
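The contrast can be made concrete. The simulation below represents the mass and the speed of light as symbols (numbers) and manipulates them by rule, which is exactly what the explosion itself does not do. (The one-gram example value is my own choice for illustration.)

```python
# Computing (not instantiating) the mass-energy function E = m * c**2:
# here the arguments and the value are explicit symbolic representations.

C = 299_792_458  # speed of light in metres per second

def energy_released(mass_kg):
    """Return the energy (in joules) equivalent to mass_kg of matter."""
    return mass_kg * C ** 2

# One gram of matter converted entirely into energy:
print(energy_released(0.001))  # roughly 9e13 joules
```

The program computes the function; the bomb merely instantiates it. Only the former involves representation.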

Some natural systems do, however, seem to perform computation or, at least, involve information processing – the human mind being the obvious example.

But here the human mind has traits that go well beyond mere syntactic information processing without semantics – e.g., sensation, perception, consciousness, emotion, intuition – and the view that these are not Turing computable is to be taken very seriously indeed.

When we turn to the actual biology and functioning of the brain, it isn’t like a digital computer.

The brain seems to be an integrated system resembling, if anything, an analogue computer (where there is no separation between hardware and software), not a digital computer. Information in the brain is embedded in its matter, and so cannot simply be extracted from its physical medium. In other words, information in human brains is substrate-dependent.

If consciousness requires physically necessary processes or substrates in our brain biology, as the biological naturalist theory of mind argues (and it has plenty of advocates among philosophers, cognitive scientists and neuroscientists), then Turing machines can never be conscious.

To see this, a simple analogy suffices: you can model human digestion in a digital computer simulation, but this is clearly not the physical process of digestion. Turing machines cannot reproduce digestion, nor could they produce consciousness, if consciousness is causally dependent on physically necessary biology, neurophysiology and physics in the brain.

25 comments:

Seriously? 'even Turing' recognized there were things Turing machines could not calculate? What next, even Einstein understood time is not absolute, even Darwin understood evolution sometimes happens, even Mises believed in human action?

(1) if human consciousness is causally dependent on physically necessary biology, neurophysiology and physics in the brain, then Turing machines can never be conscious?

(2) that Turing did not say that a Universal Turing Machine could compute any function that any type of computer, with any kind of architecture, was capable of computing, since there are machines, either hypothetical (such as hypercomputers) or possibly even physically possible using known or unknown physical processes, that could perform functions not Turing computable.----That last idea is a profound point: there may be types of computer we haven't even thought of yet.

I am also noticing a strange unwillingness by you to answer questions I've posed in the last post.

How do you *know* that even in a simple life form with a simple brain, its rudimentary conscious perception or primitive consciousness is just a property that could be reproduced by Turing computable functions on a Turing machine?

As far as I can see this post fails to address any of the arguments already made by KenB.

There are non-computable functions: for example, *any* function over arbitrary real numbers cannot in the general case be represented exactly by computers. Computers use limited precision for their arithmetic; for example, the full value of Pi can never be recorded by a computer (or written down symbolically). However, despite this, sufficiently many digits of Pi can be written down for the purposes of any calculation requiring Pi.
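The commenter's point about precision can be seen directly in a short sketch (my own illustration): a 64-bit float stores only about 15-16 significant decimal digits of Pi, yet that is ample for practical calculation.

```python
import math

# A computer stores pi to finite precision (about 15-16 significant
# decimal digits for a 64-bit float), not its full infinite expansion.
print(math.pi)  # 3.141592653589793

# Yet that finite precision suffices in practice: the area of a circle
# of radius 1 metre computed this way is accurate to far below a micron.
print(math.pi * 1.0 ** 2)

# Limited precision also means exact decimal fractions can misbehave:
print(0.1 + 0.2 == 0.3)  # False: 0.1 + 0.2 gives 0.30000000000000004
```

Whether "sufficient precision" is likewise all that an intelligence algorithm needs is, of course, exactly the point under dispute in this thread.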

In order to implement an intelligence algorithm on a computer, we will almost certainly need only sufficient precision in its calculations for it to be successful.

John Searle's argument incorrectly characterises the Chinese room as lacking intelligence, but it clearly has the trait of speaking Chinese (which is an emergent property of the human brain), and there is no reason it should not also have the similarly emergent property of intelligence.

In this regard the Turing test is clearly a fully sufficient test of intelligence.

The reason to say with confidence that a rudimentary consciousness could be reproduced in a Turing machine is that, as far as we have looked, no non-computable functions have been identified in the brain that could not be simulated with sufficient precision.

In your semantics parlance, there is no 'part' of the brain which represents the 'semantics' and which could not be reproduced on a computer. Sufficient is known about what makes this up in a brain to understand that semantics are an emergent property of the brain, and therefore about what kind of algorithm needs to be reproduced.

Having said that, the uncertain details and challenges of producing semantics on actual computer systems are immense. It seems likely from Chomsky's linguistics that large parts of the initial configuration of intelligence are hard-coded into our genes.

The underlying reason we feel emotion is probably due to some evolutionary traits we have developed. In that sense they may be pointless additions to an artificial intelligence, but I see no reason they would not be computable.

The issue with constructing one, however, is that we don't have a sufficient understanding of the psychology here to confidently describe what kind of function it is.

"No, a really sophisticated machine will still just be an expensive machine that manipulates symbols by rules with no understanding"

How do you know that (for example) an image search algorithm does not have any understanding of what it is searching for?

Nic the NZer, also don't fall into the trap of assuming the human brain can go beyond computable functions and robots can't. Most likely, both can or neither can. In either case it's the same result for our discussion.

You are profoundly confused. If I feel up to it I will try a little remediation next week.

It was Turing, in the papers that defined the TM and the UTM, who answered Hilbert's challenge by proving the halting problem uncomputable. I think your comment betrays how little you know about this stuff, LK.
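The core of that proof is a diagonal argument, which can be sketched informally in code (a standard textbook-style sketch, not Turing's own construction): assume a total, effective procedure `halts(program, input)` exists, and derive a contradiction.

```python
# Sketch of the diagonal argument behind the halting problem. Suppose,
# for contradiction, that a total effective procedure halts(program, x)
# existed, returning True exactly when program(x) would eventually halt.

def halts(program, argument):
    # Placeholder: the argument below shows no such procedure can exist.
    raise NotImplementedError("no such total procedure can exist")

def diagonal(program):
    # Do the opposite of whatever halts() predicts about a program
    # run on its own source.
    if halts(program, program):
        while True:        # loop forever if halts() says it halts
            pass
    return None            # halt if halts() says it loops

# Now ask: does diagonal(diagonal) halt? If halts(diagonal, diagonal)
# is True, then diagonal(diagonal) loops forever; if False, it halts.
# Either answer contradicts halts(), so the halting problem is not
# Turing computable.
```

The contradiction does not depend on any particular machine architecture, which is why it applies to the Universal Turing Machine as well.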

As for 2, that's the Church-Turing thesis. There have been several independent attempts to define mathematically what is computable, and they have all been proven to be equivalent.

And you are shifting the burden. As Nic put it well, to deny as you do the possibility of emergence from a system of Turing machines, you must prove they lack some element. And you cannot. But in any case the burden is yours.

(1) "As Nic put it well, to deny as you do the possibility of emergence from a system of Turing machines you must prove they lack some element"

lol.. Searle has already established that Turing machines are just things that manipulate symbols by rules with no understanding.

Your response is: we need a really sophisticated machine. No, a really sophisticated machine will still just be an expensive machine that manipulates symbols by rules with no understanding.

(2) and you've yet to answer:

If, for the sake of argument, we say that human consciousness is causally dependent on physically necessary biology, neurophysiology and physics in the brain, then is it true that Turing machines can never be conscious?

You imagine a purely imaginary or hypothetical world in which pigs can fly? Yes, within that purely imaginary or hypothetical world, pigs can fly. Just as, for example, if we speak of the purely imaginary or hypothetical world of Game of Thrones, there exist within that imaginary world of fiction dragons that fly.

In our actual concrete world of reality, pigs do not fly.

As nearly always, these puzzles or paradoxes tend to be based on fallacies of equivocation in the meanings of words.