On Sep 29, 2006, at 1:44 AM, praatika at mappi.helsinki.fi wrote:
> Robbie Lindauer <robblin at thetip.org>:
>> Actually, Lucas replied to this at length in the Freedom of the Will.
> Certainly, but whether the reply is any good is a different matter.
Someone had said, incorrectly, that Lucas hadn't responded to the
various objections, when in fact he has responded at length, both in
journals and in two useful books on the subject.
In particular, he has well-reasoned responses to the questions brought
up in this thread thus far, and even a brief look at his website would
have revealed that. Among them are the contentions:
1) That the human need not be told which machine it is that is
representing him.
2) That the human might be inconsistent (and so correctly represented
by the machine).
3) That the human need not understand the machine being proposed as
him.
None of these objections is damning to the argument in any way.
>> In particular, a machine which is inconsistent will produce "1 + 1 =
>> 3" as a theorem. A human (sane one) will be able to see that that is
>> obviously false.
> So can a machine, say, one which lists the theorems of Robinson
> arithmetic.
Not an inconsistent machine: it will prove both that 1 + 1 = 2 and
that 1 + 1 = 3.
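An inconsistent machine proves everything, by ex falso quodlibet. A
minimal sketch of the point in Lean 4 (a hypothetical example; the
`omega` tactic closes the goal by deriving a contradiction from the
false hypothesis):

```lean
-- If a system proves 1 + 1 = 3 (alongside the true 1 + 1 = 2),
-- it is inconsistent, and then any equation whatever follows,
-- including 1 = 0.
example (h : 1 + 1 = 3) : 1 = 0 := by
  omega
```

So "proving everything a human can prove" is trivially achieved by an
inconsistent machine; the interesting question is elsewhere.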
>> The argument is structured thus:
>> 1) IF the machine proposed as a model of the mind is consistent then
>> there exists a godel sentence G for the formalism represented by the
>> machine which a Human can recognize true and which that machine can
>> not produce as true. (Godel's Theorem)
> No, he/she can't. Only if he/she could "see" that the formal system
> is consistent. But that is not in general possible.
This is irrelevant to the point at hand, since the human has a handy
proof that they (themselves) are not inconsistent (e.g. that they
won't claim that 1=0), and we know that IF the machine being proposed
as him is inconsistent, then it will prove that 1=0. So we can
eliminate the possibility that the machine proposed is inconsistent.
The point being that we can rule out the possibility that the human is
inconsistent (since we don't prove that 1=0), and therefore that IF
the machine is a representation of us, then (supposedly) it is also
consistent. It follows that there is a Godel sentence for it.
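The step being invoked is Godel's first incompleteness theorem, which,
stated roughly in modern form (a LaTeX sketch), reads:

```latex
% For any consistent, recursively axiomatizable theory T extending
% Robinson arithmetic Q, there is a sentence G_T such that
T \nvdash G_T \qquad \text{while} \qquad \mathbb{N} \models G_T .
```

Consistency of T alone suffices here: G_T asserts its own
unprovability, so its unprovability makes it true.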
> Anyway, this reply demands that a mechanist must provide a particular
> machine as a model of the human mind.
The theory "There may be a machine that is a model of your mind, but
we can never say which one it is" is uninteresting, at best obtuse,
and certainly wouldn't qualify as a scientific hypothesis. For
instance, try entering this hypothetical machine into a competition
with the AliceBot!
I liken this objection to "There is an apple in the sky, but we can't
detect it."
Once a candidate machine for being the model of a particular human's
mind is specified, the Godel sentence can be generated (and it need
not be generated by the human), and it will be a sentence which the
human (logically) can decide but which the machine (logically) cannot.
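The mechanical generability claimed here is exactly what the diagonal
(fixed-point) lemma delivers: given a recursive axiomatization of the
machine's theory T, one can effectively construct a sentence asserting
its own unprovability (a LaTeX sketch):

```latex
% Diagonal lemma applied to the negated provability predicate:
T \vdash G_T \leftrightarrow
  \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner)
```

The construction uses only the machine's specification, not any
insight of the human, which is why G need not be generated by the
human.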
> But this amounts to changing the
> subject. Originally, the claim at stake was whether there could be a
> Turing
> machine which would be able to prove everything that a human mind can.
Obviously THAT's not the question. If it were, then an inconsistent
Turing machine could clearly do that, and that would be the end of the
discussion.
No, the question is whether or not any particular Turing machine could
be a model of a particular human mind.
Since that machine must be consistent (to repeat, because humans are
minimally consistent, in that we don't prove that 0=1), it follows
that there is a Godel sentence G which can be mechanically generated
from the machine's specification but which the machine itself cannot
prove. While there is no absolute criterion of identity for machine
specifications, certainly one requirement will have to be that they
can (logically) prove all and only the same theorems.
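That identity criterion amounts to extensional equivalence of the
generated theories (a LaTeX sketch):

```latex
% Two machine specifications count as the same theorem-prover iff
% they prove exactly the same sentences:
M_1 \equiv M_2 \iff
  \{\varphi : M_1 \vdash \varphi\} = \{\varphi : M_2 \vdash \varphi\}
```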
Robbie Lindauer