From Aaron Fri Jun 9 11:39:17 BST 1995
Newsgroups: comp.ai.philosophy,sci.cognitive
References: <1995May19.222112.9348@media.mit.edu> <801270729snz@longley.demon.co.uk> <3qjqmk$jnf@barros.cs.ucf.edu> <1995Jun5.031707.17309@media.mit.edu> <802595047snz@longley.demon.co.uk>
Subject: Quinean indeterminacy (Was: Society of Mind)
David Longley writes:
> Date: 8 Jun 1995 08:28:23 +0100
> Organization: Relational Technology
> I'm working on trying to present Quine's Indeterminacy of Translation
> argument concisely for the net. Since Quine is one of the most concise
> writers I have encountered, that is proving to be quite a challenge. I
> have the text as an ASCII file, it's the paraphrasing which is hard.
I'll try [very rashly?] to give a brief summary from memory
(dangerous because my memories of reading Quine are years old, and I
am typing very quickly and have no time to check references -- so I
may misrepresent Quine: but the issues are of some interest anyway.)
I think Quine was very close to being the sort of behaviourist who
thinks that really there are no internal phenomena other than
physical brain states and that mental states are definable entirely
in terms of externally observable behavioural dispositions.
Quine argued that if you (e.g. as an anthropologist from another
culture) are trying to determine what I mean by my words, by
watching my behaviour, i.e. watching me interact with you, with
others in my society, and with the environment, then, no matter how
long you observe me, there will always be different interpretations
of my meaning that are consistent with all the available evidence.
Quine concludes that you can never find out exactly what I mean,
exactly what conceptual framework I use, etc.
I think he also suggested that I am in no better position to
determine which of the various possible interpretations fits my
intention.
He concludes that what I mean, what I intend, are issues on which
there may be no right answers.
I think this is one of many cases where philosophers try to
interpret the world while ignoring much of the information available
about the world. Philosophy is important and needs to be done, but
armchair philosophy is often seriously misleading.
Like the vast majority of philosophers with no training as software
engineers, Quine did not realise that besides the visible external
behaviour of a system and the (inevitably limited) internally
accessible information that the system has about itself, there is
also another view, namely what a designer of the system might know
about how it works. I am not talking only about the designer of the
hardware (who may be unable to "decompile" the system) but also the
designer of software.
A simple example: most computing systems use bit patterns for a
variety of purposes. Some of those bit patterns are used as
addresses of locations in a virtual memory. Others are used for
other purposes. Note that the bits are interpreted BY THE MACHINE,
since otherwise it could not perform its function. The designers of
those machines know EXACTLY how the bit patterns are interpreted,
at the level of the virtual machine architecture defined by the
machine language.
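The point can be sketched in a few lines (Python here, with an
arbitrary example value): the same bit pattern serves as a number, as
text, or as an address, and which it is at any moment is fixed by the
design, not by external observation.

```python
import struct

# One 32-bit pattern (an arbitrary example value), interpreted three ways.
bits = 0x48454C50

# 1. As an unsigned integer.
as_int = bits

# 2. As four ASCII characters (big-endian byte order).
as_text = struct.pack(">I", bits).decode("ascii")   # "HELP"

# 3. As an address: a key into a (toy, sparse) virtual memory.
memory = {}
memory[bits] = "stored value"
fetched = memory[bits]

# The designers know EXACTLY which interpretation the machine applies
# at each point; an external observer can only guess from behaviour.
```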
An anthropologist studying the machine, with whatever electronic or
other aids, would find it very hard to reach the depth and clarity of
understanding that the designers have.
There are more complicated examples that can be used to make similar
points, but at a different virtual machine level, e.g.
organisational information systems and databases. These manipulate
information about employees, orders, bills, stocks, unpaid accounts,
etc. etc. Again, if the system is properly designed, the designers
know what information is processed: Quinean arguments about
ambiguity are irrelevant.
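A toy illustration (hypothetical schema, using Python's built-in
sqlite3): to the system's designer there is nothing indeterminate
about what a field denotes.

```python
import sqlite3

# A fragment of a designed order-processing system. The schema is
# made up, but the point is general: the designer fixed what 'unpaid'
# denotes (say, an outstanding balance in pence), so there is no
# residual ambiguity about what this information is about.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (customer TEXT, unpaid INTEGER)")
db.execute("INSERT INTO accounts VALUES ('Jones', 4250)")

(balance,) = db.execute(
    "SELECT unpaid FROM accounts WHERE customer = 'Jones'"
).fetchone()
```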
Here's a good task for contemporary and future non-armchair
philosophers: give physics a rest for a while, and examine these and
related cases of design carefully, and come out with good accounts
of what's going on, including how the relation of implementation
accounts for the coexistence of different superimposed ontologies
with rich causal interactions.
Of course, we did not design human beings. We therefore cannot use
that basis for grasping the semantics of the mental states and the
language of human beings. However, there are a few points that can
be made:
(a) That there's no designer does not mean there's no design. One
way of construing evolution is as a process that creates extremely
sophisticated designs. Finding out the design is a very complex and
difficult task for us in most cases, but in principle we may
eventually get it right. Of course, we'll never know with absolute
certainty that we have got it right, any more than any other
scientific theory can be known with absolute certainty. More likely
we'll get some parts right and some wrong. But it is at least as
conceivable that there is a right theory here as there is in
physics, chemistry, etc. It's just a different sort of theory.
(NB. I am assuming an ontology of designs. That's another topic
requiring further philosophical analysis.)
It is possible that when we have that right theory we shall see that
there are many aspects of the semantics of human intentions and
human language that are radically indeterminate, though not for
Quine's reasons, but for deeper reasons to do with the nature of
semantics. E.g. I have argued elsewhere that we shall find human
semantic states indeterminate (not only that there are N different
interpretations but that the interpretations leave undetected gaps
to be filled: and that's partly how our language, concepts, theories
develop.)
(b) Many philosophers, including sometimes Quine (and some
contributors to usenet discussions), construe the task of
accounting for human knowledge as if knowledge were a process of
starting with a totally blank mind and collecting information, then
performing logically valid deductions from the information. This
view has been criticised many times from many different directions:
e.g. a totally blank system would not be able to collect any
information to start with: it would have no concepts, no perceptual
algorithms, no internal language for storing information, etc.
Once you abandon the notion of such a "tabula rasa" and start asking
how evolution might have designed organisms of various kinds, then
you may admit the possibility that we actually start off with a
quite specific conceptual apparatus, e.g. a basic ontology for
acquiring, storing, transforming and using information about
ourselves and the environment -- a system that was selected by
evolutionary (trial and error) processes over millions of years,
instead of being learned in a short time by an initially blank (but
totally rational!) mind.
It is then possible that all human thought and language starts from
(roughly) the same conceptual system, and adds variations and
extensions to suit local conditions and needs. In that case, instead
of the anthropologist having to infer my conceptual system by
observing me, and being faced with a radically indeterminate
problem, he may actually start by sharing most of my conceptual
apparatus which he cannot help using, because he has no other.
(That's probably how evolution solved the "other minds" problem: we
don't use rational inference to conclude that other minds exist. We
are designed from the start to interpret various sorts of things in
the environment as having thoughts, beliefs, desires, etc., just as
we are designed to interpret sensory information in terms of spatial
structure and motion -- no mean task, especially for a new-born foal
that has to run with the herd within a few hours of birth.)
On this view we don't use Dennett's "intentional stance" to
interpret other agents as having beliefs, desires, etc. We use a
pre-programmed implicit commitment to other agents being designed in
such a way as to support certain kinds of states. (Exactly what sort
of commitment, and how the implicitly presupposed architecture
explains the states is a topic for another day.)
Of course, this still leaves the possibility that an anthropologist
from another planet, who does not share our evolutionary background,
may find it impossible to discern our conceptual system. Perhaps,
but that does not mean there's nothing to be discerned.
Similarly, if we are sufficiently different from other animals we
may never be able to find out exactly how they interpret the world.
We may never find out what it is like to be a bat. (Actually, even
if we could do the decompilation and find out, we'd still be
incapable of *experiencing* what it is like to be a bat if our
representational apparatus was sufficiently different from that of a
bat, which I suspect is the case.)
Summary:
Quinean arguments for indeterminacy of meaning, or translation, are
unconvincing, and ultimately depend on a purely behaviourist stance,
ignoring the design stance.
There's much we still need to understand about the basis of human
meanings, and about many different ways in which machines and
organisms may have semantic states.
There may remain kinds of semantic indeterminacy that will be fully
explained by a good theory of how the system works, but not by a
philosopher's argument about how hard it is to discover how the
system works.
Cheers.
Aaron
From Aaron Fri Jun 16 00:59:51 BST 1995
Newsgroups: comp.ai.philosophy,sci.cognitive
Distribution: world
References: <3r5750$fsp@barros.cs.ucf.edu> <802695870snz@longley.demon.co.uk> <3r9rjc$669@agate.berkeley.edu>
Subject: Re: Society of Mind
Edward Faith writes:
> Date: 9 Jun 1995 16:07:08 GMT
> Organization: America Online
>
> In article <3r98c5$s06@percy.cs.bham.ac.uk> Aaron Sloman,
> A.Sloman@cs.bham.ac.uk writes:
>
> >Quine concludes that you can never find out exactly what I mean,
> >exactly what conceptual framework I use, etc.
>
> [. . .]
>
> >Like the vast majority of philosophers with no training as software
> >engineers, Quine did not realise that besides the visible external
> >behaviour of a system and the (inevitably limited) internally
> >accessible information that the system has about itself, there is
> >also another view, namely what a designer of the system might know
> >about how it works.
>
> You ignore the fact that the designer of
> the system is herself a human, subject to Quine's indeterminacy.
That would be an acceptable response if Quine had given any good
reasons to believe in the indeterminacy. He hadn't (for the
reasons I tried to indicate, namely he assumed (roughly) that all
descriptions of mental states had to be based on external
observation of behaviour, whereas there are other bases, from the
design point of view).
So it is not a "fact" that I am ignoring, only an unjustified
Quinean assertion.
> How can you prove that this is not the case?
I gave a long answer in the posting to which you responded. Maybe
the answer was too long to be clear. Sorry about that.
> ...I think perhaps
> your mistake is similar to the mistake of people who propose
> sweeping government solutions to social problems, ignoring
> the fact that government is itself composed of people.
Yes, except that I am not trying to propose solutions to social
problems.
I am trying to draw attention to the importance of paying attention
to what complex systems are composed of instead of only viewing them
holistically. A designer would think of a mind as made up of many
interacting information processing mechanisms. There's lots that a
designer can say about such a system that Quine deems impossible.
(Of course, we have to watch out for unjustified anthropomorphisms,
and some people do over-interpret their designs, as Drew McDermott
pointed out long ago. But not everyone does, or has to.)
Aaron
---
From Aaron Sun Jun 11 04:05:59 BST 1995
Newsgroups: sci.lang,sci.psychology,comp.ai.philosophy
References: <3q2put$p07@Mercury.mcs.com>
Subject: Re: Chomsky on Consciousness and Dennett
[I've removed rec.arts.books]
jmc@SAIL.Stanford.EDU (John McCarthy) writes:
> Date: 09 Jun 1995 18:23:18 GMT
> Organization: Computer Science Department, Stanford University
>
> So Quine was not able to devise a logical system in which it was
> possible to believe each of one's beliefs and yet believe that some of
> them are incorrect. I think I can.
It's easy to formulate a set S of sentences:
I believe p1
I believe p2
I believe p3
I believe ....
I believe that for some X (I believe X & X is incorrect).
Several questions need to be separated
(1) Is the set of belief contents mentioned in S consistent?
If the set is finite, the answer seems to be No.
However, if infinite it might be omega-consistent?
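The 'No' for the finite case can be checked mechanically. A
brute-force sketch (Python, illustrative only): no assignment makes
p1, ..., pn all true while also making at least one of them false.

```python
from itertools import product

def jointly_satisfiable(n):
    """Can p1, ..., pn, and 'at least one pi is false' all hold at once?"""
    for assignment in product([False, True], repeat=n):
        each_belief_true = all(assignment)                   # p1 ... pn
        some_belief_false = any(not p for p in assignment)   # the extra belief
        if each_belief_true and some_belief_false:
            return True
    return False

# For every finite n the answer is No: the contents are inconsistent.
```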
(2) Is the set S of statements about the beliefs consistent?
I don't see why not. I think even these two statements are
consistent, even if the contents of the beliefs are not:
I believe p1
I believe not-p1
Someone who is not totally rational may have inconsistent
beliefs, which manifest themselves in different circumstances.
So the statement that he has them is consistent.
(3) Could a successful intelligent agent have a set of beliefs such
as the set S above, i.e. including the belief that some of its
beliefs are false?
I'd say that the answer is obviously yes. Examples are
all around us.
(4) Could a successful intelligent agent AVOID having such sets of
beliefs with inconsistent contents?
Maybe, but I suspect not because of the difficulty of
maintaining consistency in a complex and changing world.
Checking conclusively whether a large set of propositions is
consistent or not (satisfiable or not) is an intractable
problem (even if you use only propositional logic): no
physically implemented machine could do it, in this universe.
In general it's not even decidable. So finding and removing
inconsistencies is intractable.
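A minimal illustration of why: the obvious complete check enumerates
all 2**n truth assignments, so each extra proposition doubles the
work (Python sketch; clauses are lists of signed variable indices).

```python
from itertools import product

def satisfiable(clauses, n):
    """Naive SAT check over n variables: +i means pi, -i means not-pi.
    Complete but exponential: 2**n candidate assignments."""
    for bits in product([False, True], repeat=n):
        def holds(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (p1 or p2) and (not p1) and (not p2): no assignment works.
inconsistent = not satisfiable([[1, 2], [-1], [-2]], 2)
```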
I conclude that inconsistent beliefs are inevitable in resource
limited agents. That includes me, and probably any robot
intelligent enough to follow this argument.
Is there some clever way of indexing incoming information so that
incremental checking gets round the tractability problem?
Aaron
---
From cyang@research.nj.nec.com Mon Jun 12 21:04:03 1995
Date: Mon, 12 Jun 95 16:07:17 EDT
From: cyang@research.nj.nec.com (Charles Yang)
To: A.Sloman (Aaron Sloman)
Subject: Re: Quinean indeterminacy (Was: Society of Mind)
Dear Aaron,
I think that your memory is crisp and clear (despite your
worries) and that your remarks are right on target. I share
lots of your view from a linguistics perspective, among others
as well.
Charles Yang
MIT AI Lab
and
NEC research institute
From Aaron Tue Jun 13 23:08:02 BST 1995
Newsgroups: sci.lang,sci.psychology,comp.ai.philosophy
References: <3q2put$p07@Mercury.mcs.com> <3rdmi2$kat@percy.cs.bham.ac.uk> <3rimfn$31f@condor.ic.net>
Subject: Re: Chomsky on Consciousness and Dennett
pjackson@falcon.ic.net (Philip Jackson) writes:
> Date: 13 Jun 1995 00:35:02 GMT
I wrote
> : (4) Could a successful intelligent agent AVOID having such sets of
> : beliefs with inconsistent contents?
>
> : Maybe, but I suspect not because of the difficulty of
> : maintaining consistency in a complex and changing world.
>
> : Checking conclusively whether a large set of propositions is
> : consistent or not (satisfiable or not) is an intractable
> : problem (even if you use only propositional logic): no
> : physically implemented machine could do it, in this universe.
> : In general it's not even decidable. So finding and removing
> : inconsistencies is intractable.
Phil corrected me.
> At present, the satisfiability problem for propositional logic is
> widely considered, but not proven, to be intractable. Until it is
> proven that P does not equal NP, we cannot be certain that
> no physically implemented machine could solve this problem
> efficiently. Despite conventional wisdom, it may eventually be proven
> that P=NP in which case the problem could be tractable for physical
> machines, i.e. computers, depending on the degree of the polynomial
> describing the performance of a hypothetical efficient algorithm
> for satisfiability.
>
> Also, even if it is proven that P does not equal NP, in which case
> no Turing machine could efficiently solve the propositional satisfiability
> problem, the problem might still be tractable for quantum computers,
> if quantum computers can actually be built and perform as current
> theories predict -- this appears to still be an open question.
OK you are right. I was making some unproven assumptions including
P/=NP
and
there's no kind of physically implementable computational engine
that can do better than Turing machines as regards complexity
measures.
I could be wrong on both counts.
However, empirically I feel fairly confident that human brains are
not very good at detecting inconsistencies in complex sets of
formulas. Whether some new kind of engine will one day do much
better than we can is an open question.
If not, then sophisticated robots are likely to have inconsistent
sets of beliefs, for the reason I gave.
Even with the new engines they may have them anyway, if the benefits
of detecting all inconsistencies turn out not to be worth the costs.
I asked
> : Is there some clever way of indexing incoming information so that
> : incremental checking gets round the tractability problem?
Phil replied:
> If there were such a clever method then it could be used to tractably
> solve the propositional satisfiability problem. We could simply apply
> the clever method as each clause is added to a propositional formula
> to check whether the formula is still satisfiable.
>
> So, at present there does not appear to be any such clever method.
Thanks for such an elegant answer. I should have thought of
it myself!
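Phil's reduction can be made concrete (Python sketch, with the naive
exponential checker standing in for the hypothetical clever one): any
incremental clause-by-clause check would, on the last clause, have
decided satisfiability of the whole formula.

```python
from itertools import product

def satisfiable(clauses, n):
    """Naive exponential SAT check: +i means pi, -i means not-pi."""
    for bits in product([False, True], repeat=n):
        def holds(lit):
            return bits[abs(lit) - 1] if lit > 0 else not bits[abs(lit) - 1]
        if all(any(holds(lit) for lit in c) for c in clauses):
            return True
    return False

def add_and_check(formula, clause, n):
    """The hypothetical incremental method: accept one new clause and
    say whether the growing formula is still satisfiable. If this were
    cheap, SAT itself would be cheap -- hence Phil's point."""
    formula.append(clause)
    return satisfiable(formula, n)

formula = []
ok1 = add_and_check(formula, [1, 2], 2)   # satisfiable so far
ok2 = add_and_check(formula, [-1], 2)     # still satisfiable
ok3 = add_and_check(formula, [-2], 2)     # inconsistency detected here
```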
Aaron
---
From Aaron Fri Jun 16 00:47:37 BST 1995
Newsgroups: comp.ai.philosophy,sci.cognitive
References: <1995Jun5.031707.17309@media.mit.edu>
Subject: Re: Society of Mind
To: rv@tahoe (rodrigo vanegas)
Thank you for your comments, also sent to me by email.
> In-reply-to: A.Sloman@cs.bham.ac.uk's message of 15 Jun 1995 02:28:12 GMT
> cc: A.Sloman@cs.bham.ac.uk
(I wrote)
> > The trouble with all those philosophers (Quine, Fodor, Geach, etc.)
> > is that none of them has ever spent any time trying to design
> > anything that works. They have only the vaguest notion of what
> > processes, architectures, mechanisms, are, and they produce apriori
> > arguments that all sorts of things are impossible.
> >
> > Meanwhile, lots of other people blithely ignore those pontifications
> > and happily get on and do what's supposed to be impossible.
(rv commented)
> ....and then fail. As a scientist you should understand that the very
> idea of a proof (apriori or otherwise) is that its conclusion is
> essentially final.
> .......
As you guessed, I do not regard them as having proved anything of
the sort.
I (wrongly) allowed myself to be irritated by David Longley's
message because I had previously given a fairly long account of what
I thought Quine's argument was, and why I thought it failed. I had
stated that I was not sure I had remembered it accurately (though
one reader wrote to tell me very confidently that he thought I got
Quine's arguments exactly right).
I had expected David to tell me exactly where my attack on Quine
went wrong. Instead I was sent a long list of extracts from the
writings of a lot of philosophers, who knew little or nothing about
AI or computation, and which did not address the issue, or, insofar
as they attempted to, started from assumptions that I thought I had
already shown were incorrect.
In the circumstances I foolishly allowed my exasperation to get the
better of me.
(rv)
> But perhaps what you mean is that these philosophers did not succeed
> in their proofs? If so, one would be interested to know where you
> think they went wrong. Presumably, if you are even familiar with all
> these "apriori proofs" you know exactly where the faulty assumptions
> slip in, otherwise you'd agree with them right? Or is logical
> validity less than compelling to you?
I'll try to make my comments on Quine's position accessible as
http://www.cs.bham.ac.uk/~axs/cog_affect/usenet.quine
(I am not sure if my symbolic link will work for external readers.)
> It sometimes seems as if philosophers are being asked to accept a
> proof by extrapolation and analogy that the "farce" of all their
> paradoxes will eventually be exposed without the need for any focused
> non-scientific effort. Uh huh.
Although I started as a mathematician (not a very good one), and
have put a lot of effort into software development, and still do, I
am not against philosophy or philosophers: I regard myself as
*primarily* a philosopher, trying to understand what minds are by
finding out how to design working instances, and I regard many of
the great philosophers as allies (e.g. Frege, Kant). I'm merely
against ill-informed armchair philosophy, especially the kind that
argues from the premiss that there are only possibilities X and Y,
where any good engineer would see that P, Q and R are other
alternatives. (And yet more are revealed by brain scientists,
anthropologists, developmental psychologists, etc.)
Armchairs may make one feel safe, but the safety is an illusion if
one cares about what's out there.
In case anyone takes my critical comments on philosophers to imply
that I am against philosophy: please note that I am presenting a
"Philosophical Encounter" at IJCAI95 in Montreal in August (19-25th
Aug) at which I shall try to argue that AI needs (good) philosophy
and philosophy needs AI. John McCarthy has agreed to help, and
Marvin Minsky may join in, if he's not too busy to attend the
conference.
My four page summary introduction can be found at:
ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/
or
http://www.cs.bham.ac.uk/~axs/cog_affect
in the compressed postscript file
Sloman.ijcai95.ps.Z
John's addition is available via his Web page:
http://www-formal.stanford.edu/jmc/
near the end of the file.
It looks as if there'll be an unusual amount of philosophy at IJCAI
this year. E.g. Pat Hayes and Ken Ford have been given a slot to
present their views on why the Turing test should be considered
harmful. Also Herb Simon has been given an award for research
achievement and in his address to the conference we can expect some
comments on philosophy, not necessarily complimentary!
Cheers.
Aaron
---
From Aaron Sloman Sat Jun 17 18:44:15 BST 1995
Newsgroups: comp.ai.philosophy,sci.cognitive
References: <803310004snz@longley.demon.co.uk> <803318559snz@longley.demon.co.uk>
Subject: Re: Society of Mind (why we need intensional language)
David Longley writes:
> Date: 16 Jun 1995 17:36:19 +0100
> .....
> 'I should like to see a new conceptual apparatus of a
> logically and behaviourally straightforward kind by
> which to formulate, for scientific purposes, the sort of
> psychological information that is conveyed nowadays by
> idioms of propositional attitude.'
>
> W V O Quine (1978)
Like many philosophers he (Quine) wants conceptual apparatus in a
vacuum. (Or should I say "in an armchair".)
If we adopt the design stance and work out in detail which kinds of
states are and which are not possible in various sorts of
information-processing architectures, then we get different sorts of
conceptual frameworks (i.e. different ways of generating
descriptions of states and processes) for different sorts of
designs.
(Compare the way different theories of the architecture of matter
generate different sets of concepts for describing physical states
and processes.)
For instance, a design for an information processing system based
only on changing values of a fixed number of continuous variables
linked by a fixed set of partial differential equations (such as
control engineers typically think about) will support an
impoverished set of concepts -- compared with a design for a system
based on computations involving structural changes (e.g. creating
plans and sentences of varying complexity, and even generating new
independent goals at run time, instead of forever being doomed to
minimise the same error function, or whatever.)
For instance: you can get deadlock in a computing system of the
second kind, but the concept makes no sense in an old-fashioned
analog control system of the first kind. Similarly, because there
are well-defined metrics for machines of the first kind, the
distinction between positive and negative feedback makes
sense, whereas it may not be applicable to all the systems of the
second kind: how do you impose a uniform metric on plans, or sets
of beliefs?
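Deadlock, for instance, is a structural property, detectable as a
cycle in a "wait-for" graph (a toy Python sketch; the representation
is made up). The question cannot even be posed of a system described
only by differential equations over continuous variables.

```python
def has_deadlock(waits_for):
    """waits_for maps each blocked process to the process it waits on
    (a toy representation). Deadlock = a cycle of mutual waiting."""
    for start in waits_for:
        seen = set()
        node = start
        while node in waits_for:
            if node in seen:
                return True          # we came back round: deadlock
            seen.add(node)
            node = waits_for[node]
    return False

deadlocked = has_deadlock({"A": "B", "B": "A"})   # mutual wait
healthy = has_deadlock({"A": "B"})                # B runs freely
```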
Some designs, though by no means all, will support the ascription of
intentional states, requiring intensional language.
Even some existing office automation systems are sufficient to
support intensional descriptions (e.g. I can talk about the system
having information about a customer called Smith, and the system's
mistakenly sending Smith an invoice, even when there is no such
customer: i.e. the extension does not exist, though the intension
does).
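The invoice example can be sketched directly (Python; all names
hypothetical): the system's internal record of "Smith" does the
referring, and it persists whether or not any such customer exists.

```python
class OfficeSystem:
    """Toy billing system: its internal records are the mediating
    states -- the intensions -- through which it refers to customers."""

    def __init__(self):
        self.customers = {}

    def record_customer(self, name, owed):
        self.customers[name] = owed

    def send_invoice(self, name):
        return f"Invoice: {name} owes {self.customers[name]}"

real_customers = {"Jones"}            # who actually exists: the extensions

system = OfficeSystem()
system.record_customer("Smith", 100)  # a mistaken data entry
invoice = system.send_invoice("Smith")

# The system refers to Smith, and is wrong: an intension, no extension.
smith_exists = "Smith" in real_customers   # False
```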
Establishing that properly needs detailed analysis that I have not
yet done! Offers welcome....
Of course, there's always a different description of the same system
that's not intensional: that's as true of computers as of brains.
But if you want the non-intensional description to involve causal
connections, beware:
Some people have argued that even if you describe causal relations
you are committed to intensionality since the statement that X
caused Y may be true while the statement that Z caused Y is false,
even though X and Z are (in some sense) the same thing. That's
because causal relations (it is claimed) hold only under a
description, and the intension of the description can make a
difference to the truth-value. I.e. it's not determined solely by
the extensions of the linguistic expressions.
An example might be these two statements:
"His pressing the switch (X) caused the light to go on (Y)"
and
"His moving his thumb one cm further away from the south
wall (Z) caused the light to go on (Y)".
Both subject-phrases could refer to the same physical event, and yet
some people will say the first assertion could be true while the
second was false (unless the room was wired to detect thumb
movements and trigger relays...)
[Not everyone would agree that the same physical event is referred
to by X and Z.
But this leads to mysteries about extensions that are as obscure as
the intensions they were supposed to render unnecessary.
And not everyone agrees that the second statement is false when the
first is true.]
The source of the problem is that the concept of causation is
inherently "modal" i.e. connected with what's possible, impossible,
necessarily the case, etc. Modality is a deep philosophical bag of
worms. For more on causation see:
Taylor, C.N. (1992). *A Formal Logical Analysis of Causal
Relations*. DPhil Thesis, Sussex University. Available as
Cognitive Science Research Paper No. 257.
Email orders to celiam@cogs.sussex.ac.uk (6 pounds UK).
Compare other sorts of modal statements, like statements about
obligations, permissions, etc. E.g. "The chairman of the company is
allowed to sign company cheques" may be true whereas "The occupant
of the house on the corner of Petunia avenue and Elm street is
allowed to sign company cheques" is false (under at least one
interpretation), even though the occupant happens to be the
president. [Intuitions vary about these examples. Some people will
reject them. You have to set the context for the utterance, e.g. a
law court.]
In an earlier posting David quoted Quine (always a good source of
pithy provocation):
'Intensional and extensional ontologies are like oil and
water.'
W.V.O. Quine (1953)
From a Logical Point of View p.157
It often happens that people who have failed to find a good way to
think about some combination of characteristics deem it impossible.
Examples from the past have included irrational numbers, complex
(imaginary) numbers, everywhere continuous but nowhere
differentiable functions, infinite sets, action at a distance,
objects lacking definite speed and location, etc. etc.
This one's a bit better:
'The keynote of the mental is not the mind; it is the
content-clause syntax, the idiom 'that p'.'
W.V.O Quine (1990)
Intension
The Pursuit of Truth p.71
My version: the keynote of the mental is the ability to process
information.
But note that there's no sharp, clearly defined, boundary between
what does and what does not process information. Even a simple house
thermostat can be construed as processing information about
temperature in the house, and, as John McCarthy (or was it first Dan
Dennett?) has pointed out, it can be described as having, or using,
information that the temperature is so and so, whilst *being wrong*.
But here there's practically no support for intension because the
link between the information state and the environment is so close,
so direct, i.e. there's practically nothing to mediate the semantic
relation apart from physical laws. The intension, the answer to the
question, "HOW does it refer to the temperature?" is almost empty.
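A minimal sketch of the contrast (Python, illustrative): the
thermostat's entire information state is one number, causally coupled
to the room; it can be wrong, but almost nothing mediates how it
refers.

```python
class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.reading = None          # its entire 'information state'

    def sense(self, sensor_value):
        self.reading = sensor_value  # a direct causal link, nothing more

    def heater_on(self):
        return self.reading < self.setpoint

t = Thermostat(setpoint=20)
t.sense(18)              # suppose the sensor misreads a 22-degree room
heating = t.heater_on()  # it 'has the information' the room is cold -- wrongly
```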
Contrast MY information that the temperature in New York is now
above freezing. My current state is able to include a reference to
New York and its current temperature (or whatever you take the
extension to be) on account of having a rich supporting information
processing apparatus, whose current state largely determines the
intension. (There is also a small(?) contributory role for my
location in space time and various causal links with the
environment: so it is not all internally determined.)
Similarly the office information system: its ability to refer is
mediated, and the mediating states are intensions.
[NB: I am not talking about how I get *correct* information, but how
I can refer to New York at all, whether in correct, or incorrect,
beliefs.]
The more indirect the link between information state and thing
denoted, the more scope there is for asking HOW the reference is
achieved. And the answer gives the intension: that's what can differ
between two information states with the same extension.
The detailed form of the answer to "how?" will vary enormously
depending on the sophistication of the system being described.
(Exactly what the question means, remains to be explained.)
David gives us another quote from Quine 1992:
'At first the problem of mind was ontological and linguistic.
With the passing of mind as substance, there remained a
twofold problem of mentalistic language: syntactic and
semantic. .....'
Note that Quine rightly assumes that a particular old metaphysical
view of mind (as substance) has gone, but goes on (later) to assume
a particular sort of solution:
.....Quotational treatment of propositional attitudes
^^^^^^^^^^^^^^^^^^^^^
de dicto delivers them to the extensional domain of predicate
^^^^^^^^
logic, thanks to the reduction of quotation to spelling.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
W. V. O. Quine (1992)
Intension
The Pursuit of Truth p.72-73
I.e. according to Quine, when I talk about your mental state, I am
really only talking about sentences in your head (or something like
that).
However, he does not notice that if taken literally this is
unacceptable for humans for the same reason as it is unacceptable
for computational machines.
For example, when presented with three machines A, B and C, all of
which have the same extensional sorting capability (feed in a list
of words and they come out alphabetically sorted the same way) I can
talk about two of them sorting in the same way. E.g. I can say that
A and B both used an insert-sort algorithm, but not C. (Never mind
what that algorithm is.) Half-way through the sort I can ask about
which item A and B are inserting where in the output list. But that
question may make no sense for C, e.g. if it uses an algorithm that
builds up a lot of little sorted lists and then merges them.
However when I say all that, I am NOT saying that they had the same
bits of text (the same spelling) or any other similar concrete
manifestation of the insert-sort algorithm. I.e. there's nothing
quotational or de-dicto (about language) in my intensional
description.
For example one could be a SPARC and one a VAX, using totally
different machine codes. Moreover, one may have been programmed in
Pascal and one in Lisp. One might have been compiled, and one
interpreted. (NB: it makes no difference whether you include source
program, compiler and run time system, or only the run time system:
my point holds either way. For an interpreted language you can't
even make some of those distinctions. For an incrementally compiled
language they are only half there: for compile-time and run-time are
interleaved.)
There is NO de-dicto reality that those two machines share. And yet
they use the same algorithm, as their implementers may well know.
Much of computer science is about such algorithms and how to
describe them. (E.g. complexity theory includes the analysis of how
the number of steps in the algorithm depends on the size of the
input list.)
The machines A and B have (partly) the same intensional states and
processes in producing the sorted lists, while sharing no syntactic
or physical states. But machine C does not have those states,
despite producing exactly the same input-output behaviour.
Quine can perhaps be excused for not knowing much about such things.
He did not grow up surrounded by computers, compilers, programming
languages of various kinds, operating systems, data structures etc.
But we must not let contemporary youngsters be befuddled by
philosophy from an out-of-date (but not yet dead) culture.
So if I say that Quine and Dennett had the same thought, I am NOT
saying that you'll find the same spelling in their brains.
I.e. the "de dicto" "quotational" "spelling-based" or "syntactic"
solution to the nature of mentality fails. It fails for machines,
and it fails for humans.
We have abandoned the dualist metaphysics of mind as substance, but
we have instead a much richer metaphysics of multiple virtual
machines of many different kinds and (I claim that) our intensional
idioms refer to their states.
Of course, there's still work to be done clarifying all that: the
new metaphysics has not killed philosophy, just given it challenging
new jobs to do.
NOTE: all this is quite independent of whether predicate calculus,
or even first order predicate calculus suffices to enable us to
express everything we might want to say about intentional states,
virtual machines, etc. I think that question is quite orthogonal to
whether we should abandon intensionality. E.g. lots of people use
first order predicate calculus to study algorithms and their
properties.
It is also orthogonal to the question whether a working intelligent
agent could or should make do only with predicate calculus at run
time. We can use predicate calculus to talk about machines that
don't use predicate calculus.
And all these questions are orthogonal to the question whether AI is
possible.
Oh dear, I've gone on too long again.
Aaron
----
From Aaron Sloman Sat Jun 17 18:58:41 BST 1995
Newsgroups: comp.ai.philosophy,sci.cognitive
Distribution: world
References: <3r5750$fsp@barros.cs.ucf.edu> <802695870snz@longley.demon.co.uk> <3r9rjc$669@agate.berkeley.edu> <3rqhh2$nqe@percy.cs.bham.ac.uk> <803288011snz@longley.demon.co.uk>
Subject: Re: Society of Mind (Talking about designs and intensions)
David Longley writes:
> Date: 16 Jun 1995 08:36:54 +0100
>
> In article <3rqhh2$nqe@percy.cs.bham.ac.uk>
> A.Sloman@cs.bham.ac.uk "Aaron Sloman" writes:
> >
> > I am trying to draw attention to the importance of paying attention
> > to what complex systems are composed of instead of only viewing them
> > holistically. A designer would think of a mind as made up of many
> > interacting information processing mechanisms. There's lots that a
> > designer can say about such a system that Quine deems impossible.
> > (Of course, we have to watch out for unjustified anthropomorphisms,
> > and some people do over interpret their designs, as Drew McDermott
> > pointed out long ago. But not everyone does, or has to.)
> >
>
> The fact that people do interpret their designs should alert one to
> the fact that the designer has to translate his or her behaviour into
> language. This is where indeterminacy creeps in, and why we strive to
> work within non-natural languages.
If, like an earlier objector to my comments, you are going to start
by *assuming* that Quine is right and everything is indeterminate,
then you'll have to apply that to everything: including non-natural
languages, all your arguments, all Quine's arguments.
If we are all floating in a sea of indeterminacy, how can we engage
in any fruitful debate, since we are not clear what we are defending
or what we are attacking?
As it happens I can talk in a very precise way about the algorithms
used by my programs, and I can do it in English, or in predicate
calculus (though I have not tried the latter except for very
simple cases).
What's more my description in English can be precise enough to
enable people who program in six different languages each to
implement my algorithm. (There may be residual implementation
differences that are irrelevant, e.g. names of variables, which
machine the program runs on, how much the program is divided into
subroutines, etc.)
Actually it takes time for people to learn to express algorithms
with that kind of precision, and not everyone succeeds. Many
students in computer science courses find it incredibly difficult.
They end up writing programs they do not understand themselves and
cannot explain coherently. I am not talking about that sort of
designer.
Aaron
From Aaron Sloman Sat Jun 17 19:18:36 BST 1995
Newsgroups: sci.lang,sci.psychology,rec.arts.books,comp.ai.philosophy,sci.cognitive
References: <802988140snz@longley.demon.co.uk> <803222477snz@longley.demon.co.uk> <803235165snz@chatham.demon.co.uk> <803320333snz@longley.demon.co.uk>
Subject: Re: Chomsky on Consciousness and Dennett (and clinicians)
[I have added sci.cognitive, because this follows on a thread that
was posted there.]
I think I at last know what is bugging David Longley. He writes:
> Date: 16 Jun 1995 17:36:21 +0100
> ....
Quoting this
> HOUSE OF CARDS:
> Psychology and Psychotherapy Built on Myth
> ROBYN M DAWES 1994 FREE PRESS
>
> extract
>
> PREFACE
>
> As I argue throughout this book, behavior is influenced
> by multiple factors. My own decision to write the book
> has been motivated by two factors in particular:
> anger, and a sense of social obligation.
> .....
> ......Worse yet, far too much professional
> practice in psychology has grown and achieved status by
> espousing principles that are known to be untrue
> and by employing techniques known to be invalid.
>
> Instead of relying on research-based
> knowledge in their practice, too many mental health
> professionals rely on "trained clinical intuition".
> But there is ample evidence that such intuition does
> not work well in the mental health professions. (In
> fact, it is often no different from the intuition of
> people who have had no training whatsoever.) Forty
> years ago, professionals could be excused for
> believing in the power of their own intuitive judgment,
> because at that time there was very little evidence
> concerning its accuracy one way or the other. That is
> no longer true. Today there is plenty of evidence
> about the accuracy of their intuition, and it's
> negative.
>
> Thus, I am angered when I see my former
> colleagues make bald assertions based on their "years
> of clinical experience" in settings of crucial
> importance to others' lives -- such as in commitment
> hearings, or in court hearings about custody
> arrangements, or about suspected child sexual abuse. I
> am particularly infuriated when they base these
> assertions on results of psychological techniques
> that have been proven to be invalid but that "I myself
> have found to be of great help in my clinical
> practice." Those are real people out there about whom
> the judgments are being made.
In other words: important decisions are being made in the courts
(and, according to David, in prisons) that are based on confident
clinical judgements based on intuitions and empathy and untested
experience, ignoring scientific evidence showing that those
decisions are wrong.
Those intuitions and empathies concern mental states. David wishes
to attack the judges and other officials who abuse their positions
by relying on their own judgements instead of trying to find out
what scientists have already discovered about such cases.
However, for his attack he wheels in Quine's arguments against the
ascription of mental states using intensional idioms. I presume he
does this because the practitioners he is concerned about express
their judgements by making statements about the beliefs, desires,
intentions, attitudes, etc. of others (e.g. prisoners).
Thus, by showing that such statements are philosophically
unacceptable he can undermine such bad practices.
But that's using a broken sledge hammer to turn a screw. Stick with
the sorts of criticisms that Dawes (whom I had never previously
read) makes: i.e. show that the judgements are based on ignorance
and contradicted by well established generalisations (if you can --
I have my own doubts about that, but never mind for now).
You can show all that without showing that intensional language is
fundamentally misguided: for it isn't, as I've tried to show in
detail in my other responses to your postings in comp.ai.philosophy
and sci.cognitive.
[For some reason this thread has gone on in parallel in
sci.lang, sci.psychology, rec.arts.books, comp.ai.philosophy,
so only those who read c.a.p will have seen all the earlier
arguments.]
If you try to attack judges etc by using bad philosophy, they (or
their philosophical friends) will defend themselves by refuting the
bad philosophy.
The real criticism that they are full of prejudice and unjustified
hunches, and ignorant of well supported generalisations, will
then go unnoticed.
Cheers.
Aaron
---
From Aaron Tue Jun 20 00:00:17 BST 1995
Newsgroups: comp.ai.philosophy,sci.cognitive
References: <3r9rjc$669@agate.berkeley.edu> <803288011snz@longley.demon.co.uk> <3rv53s$j5f@percy.cs.bham.ac.uk> <803460951snz@longley.demon.co.uk> <3s1u0e$mii@agate.berkeley.edu>
Subject: Re: Society of Mind
Edward Faith writes:
> Date: 18 Jun 1995 19:15:26 GMT
>
> In article <3rv53s$j5f@percy.cs.bham.ac.uk> Aaron Sloman,
> A.Sloman@cs.bham.ac.uk writes:
> >Actually it takes time for people to learn to express algorithms
> >with that kind of precision, and not everyone succeeds. Many
> >students in computer science courses find it incredibly difficult.
> >They end up writing programs they do not understand themselves and
> >cannot explain coherently. I am not talking about that sort of
> >designer.
>
> If we're going to compare evolution to human design,
> this "confused" sort of designer would seem to be
> the most relevant sort of designer, since Nature ends
> up 'designing' machines that it does not understand
> itself and cannot explain coherently.
Yes. And that's true also of many human designs.
However, I was merely making the claim that in at least *some* cases
humans can communicate with as much precision as is required for the
task.
(Moreover, I have also been trying, in other postings, to draw
attention to the fact that there are precise and important things
that we can say about states and processes in information processing
systems that are not merely about their behaviour nor about
physics, but about the virtual machine states that explain their
behaviour and which are implemented in physical mechanisms.
Talking about the algorithms used by such a machine is an example.)
> >What's more my description in English can be precise enough to
> >enable people who program in six different languages each to
> >implement my algorithm.
>
> I suspect that the situation you describe, of designers
> understanding each other, is a highly artificial one,
> due to the development of a community with a limited
> range of ends.
Yes. I was merely claiming that in at least *some* cases there is no
serious problem of indeterminacy of content or of communication.
I would not claim that *all* human utterances or communications or
mental states are like that. Many are very vague, ambiguous,
unclear, or whatever. (Like the previous sentence.)
> ..I wonder if all designed objects could
> in fact be easily described in the "international" way
> you describe, or only those sorts of objects that you
> have a reason to design. I think it much more likely
> that new designed objects will sometimes need to be
> observed first, and a new language developed around
> them to describe them.
I think there are many sorts of designs for which we do not yet have
adequate design languages. In particular, I suspect we do not have
an adequate language yet for specifying designs for human-like minds
or for specifying the requirements (niches) which those designs fit.
Here as in many areas of science, conceptual development is a
requirement for significant progress (unlike the case where you
already have a language in which you can formulate theories
precisely and test them, etc. which seems to be the case for at
least some areas of physics, chemistry, biology, etc.)
Anyhow, the main point is that I was not making any general claim,
only saying what is possible in at least some cases. That's enough
to refute Quine's argument that every utterance is radically
indeterminate in meaning, an argument that ultimately rests (as I
understand it) on a behaviourist philosophy of mind.
Cheers.
Aaron
From Aaron Mon Jul 3 02:01:36 BST 1995
Newsgroups: comp.ai,comp.ai.philosophy,sci.logic,sci.cognitive
References: <3svu91$d4b@news.eecs.nwu.edu> <804498994snz@longley.demon.co.uk> <3t1lef$4dj@percy.cs.bham.ac.uk> <3t4as2$ea3@saba.info.ucla.edu>
Subject: Re: FIRST order? was: why Ginsberg grouses
Hello Michael
zeleny@oak.math.ucla.edu (Michael Zeleny) writes:
> Date: 1 Jul 1995 20:23:30 GMT
>
> In article <3t1lef$4dj@percy.cs.bham.ac.uk>
> A.Sloman@cs.bham.ac.uk (Aaron Sloman) writes:
>
> >David Longley writes:
>
> >> ....
(DL)
> >> What I would find very helpful would be some development of the
> >> Quinean thesis that deductive inference fails within intensional
> >> contexts.
(AS)
> >This is totally trivial.
(MZ)
> This is totally unimpressive.
Actually, I was not trying to impress. I only try to demystify. I
think some people appear to be deeply impressed by the fact that
certain sorts of logic can't handle certain sorts of phenomena. I am
trying to get them to stop being impressed, so that they can instead
start *thinking*.
(AS)
> >If an intensional context is DEFINED as one in which replacement of
> >a term by one that is referentially equivalent does not preserve truth
> >value then it follows trivially that the particular sort of
> >deductive logic that allows substitution of referential equivalents
> >will fail in intensional contexts. So what?
(MZ)
> Intensional contexts need not be so defined.
Well you can define words any way you like, as long as you are
clear about what you are doing. The standard definition
of "intensional" in my experience is in terms of failure of the
substitution of referentially equivalent terms to preserve truth. This
is the phenomenon that Frege noticed in connection with "the evening
star" and "the morning star". It's also the definition David was
clearly using.
Of course, there's a different question: how many different sorts of
uses of language are intensional, and why? I've tried to give
partial answers and hints of various kinds (I don't think the topic
is all sewn up yet). In particular, I have tried several times to
show that there are many sorts of things that are capable of
processing information, or following procedures or algorithms, and
when we talk about what they do we have the option of talking about
results using an extensional language, or talking about HOW the
results are achieved in which case we often need intensional
formulations. Frege saw this very clearly.
(Unfortunately, David is apparently so impressed by Quine's trivial
thesis, that he feels he has to give up the right to talk about what
goes on inside information-processing systems, as that cannot,
according to Quine, be a topic for science. I hope he will
eventually be cured of this phobia. After all, talking about
algorithms, and more generally how a computing system works, is
commonplace in computer science and software engineering, and it
would be silly to give up such disciplines just because a
philosopher, however famous, says that only what is handled by a
narrow kind of logic can be science.
It's like deciding in advance that the only place to look for your
lost keys is where the streetlamp is, instead of trying to find a
torch that works in other places.)
(MZ)
> ..Counterexample: the
> language of de re modality, suitably construed, sustains both the
^^^^^^^^^^^^^^^^
> substitution salva veritate of codesignative terms, and deductive
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> closure, in all contexts.
I don't know what point you are making. Any such counter example
will not be intensional, according to any definition of
"intensional" that I have ever come across. In what sense is this
intensional?
If the technical term "intensional" is ambiguous, I apologise. The
posts I responded to all seemed to accept what I think is the
"standard" definition, and DL even quoted such a definition more
than once. (E.g. the version he attributed to U.T.Place.)
(AS)
> >There's nothing to develop, except to notice that you had better not
> >use that narrow sort of logic in that sort of context.
(MZ)
> Proving too much, like protesting too much, is deleterious to the
> credibility of your case.
Sorry, I don't understand. What's too much about this?
By any logic I understand, if a tool T cannot work in context C, and
you want to study context C, then it follows that you need to look
for some tool other than T.
That's all I am saying. (I.e. what I am saying is as trivial as
Quine's thesis. It's so trivial that I don't see where its
credibility is at stake.)
(AS)
> >(a) intensionality (as I've pointed out previously) is not unique to
> >psychological contexts, and
> >
> >(b) the widely believed claim that ALL referring expressions in
> >psychological contexts are intensional is false.
(MZ)
> Strong words. See above.
OK: (a) and (b) are a bit less trivial. But I gave examples to back
them up.
(AS)
> >Here are some more examples of (a)
> >
> > "It's easy to prove that the sum of the first 5 odd
> > numbers is 25."
> > (replace "25" with a very complicated expression that
> > evaluates to 25)
> >
> > "The set of chairs in this room is easily identifiable."
> > (Replace "the set of chairs in this room" with an expression
> > that refers to the same set, but uses a different membership
> > criterion, e.g. "the set of objects in this room that were
> > all manufactured in Wigan in 1973")
(MZ)
> Ease of proof and identification can be readily cashed out in terms
> of psychological states.
OK, I should have noticed that my examples needed to be stated more
precisely, as they can be misconstrued.
Consider statements of the form:
"There exists a proof of less than N steps in logical system S
that the sum of the first 5 odd numbers is 25"
Where N is to be found by doing some exploration of logical system
S. Then replace "25" with a complex expression whose denotation
cannot be computed in S in less than N+1 steps. You'll replace one
expression with another co-designating expression, and switch from a
true to a false statement.
There's nothing deep about this. I think intensions and intensional
contexts are all over the place, and we need to understand them
instead of shutting our eyes to them because Quine says we must.
Similarly, instead of my second example use this pair of sentences:
"The set of chairs in this room can be detected by robots
manufactured according to principles X Y and Z."
and compare with
"The set of objects in this room that were all manufactured in
Wigan in 1973 can be detected by robots manufactured according
to principles X Y and Z."
You anticipate this sort of clarification with this comment:
> Likewise, alternative construals in terms
> of mechanical processes can be readily impeached as not exemplifying
> genuine instances thereof.
Well that comment just sounds like dogma to me, unless you have
reasons for it that I don't understand.
To show that my examples are not genuine instances of
non-psychological intensionality (which is all I claimed for them)
you must show
EITHER
(i) that the substitution does really preserve truth-value, despite
prima-facie appearance, so that they are not
examples of intensionality (in the sense that was under
discussion) after all
OR
(ii) that these statements are really about (human?) psychology even
though they don't appear to be.
(Of course, I can imagine someone defining psychology as the realm
of the intensional - e.g. someone who believes that intensionality
is intimately bound up with information processing capabilities and
that information processing capabilities, whether natural or
artificial, are the domain of psychology. However, I would not
expect such a person to say that psychology is beyond the realm of
science.)
(MZ)
> .."Nothing is itself identified or proven,
> but thinking that makes it so." Even if this is merely a dialectical
> position, nothing in the above examples suffices to dismiss it.
If you think that the very concept of proof in any logical system is
a psychological concept, then you are using "psychological" in a
very broad sense, and maybe I don't want to argue that in YOUR sense
there are non-psychological intensional contexts. It's always
difficult to anticipate all terminological differences, especially
in a forum like this.
(AS)
> >There are lots and lots of examples of (a) relating to computers,
> >e.g.
> > "The computer has the information that Fred Smith was
> > born in 1960"
> >
> >Even if Fred Smith is your brother and that statement is true, this
> >might be false
> > "The computer has the information that your brother was
> > born in 1960"
(MZ)
> The same objection applies to your proposed imputation of possession
> of information that something is the case, to "the computer," in
> contradistinction from imputing it to the individuals responsible
> for its programming and use.
Well, you may also be using "possess information" in some technical
sense that I do not understand.
It's perfectly OK to say in the language I and my colleagues use
every day that the computers round here possess lots of information
that neither their programmers nor their users have or want to have.
For example, the computers have information about precisely how long
all sorts of processes have been running, how much memory each
process currently consumes, which pages of each process are
currently in physical memory, how their virtual address spaces map
onto physical locations, where particular data-structures are
located in (virtual) memory, how often I have logged in over the
last few months, etc. etc.
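On a modern Unix system one can ask the machine for a sample of such
information directly (a sketch using Python's standard library; the
field names and units are those of the POSIX getrusage interface, and
ru_maxrss is in kilobytes on Linux but bytes on some other systems):

```python
# A process querying information the machine has been keeping about it
# all along -- information no user or programmer asked it to collect.
import os
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)

info = {
    "pid": os.getpid(),                          # which process this is
    "cpu_seconds": usage.ru_utime + usage.ru_stime,  # time spent computing
    "peak_rss": usage.ru_maxrss,                 # peak resident memory
}

# The machine had all of this before anyone asked.
assert info["pid"] > 0 and info["cpu_seconds"] >= 0
```

The point is not the particular fields but that the information exists
in the running system whether or not any human ever inspects it.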
Some of the information the machine I am using possesses is of
interest to me, and is given to me from time to time: e.g. when I've
exceeded my disk quota I get a warning from the computer (not the
system administrator, who does not know and normally does not need
to know, as all appropriate steps are taken by the machine, most of
the time). But the machine has the information BEFORE I do.
Of course there are always people who say that this language is a
distortion, that the machines don't *REALLY* calculate, infer, have
information, use information, and all the other things we think they
do (why else would we build them?).
I deal with such objections by saying: let's bifurcate our concepts.
E.g. there's calculateR ("Really calculate") and calculateM (do
what's common to the machine's activity and mine when I calculateR).
Similarly there's possessR information and possessM information, and
likewise for all the other concepts you may not wish to allow me to
apply to machines.
I then happily continue using my M concepts to talk in a convenient
way about machines and people, and you have to use a more cumbersome
vocabulary with twice as many concepts, or worse, with lots of
circumlocutions about imputed states and processes when you talk
about machines.
(AS)
> >There's nothing mysterious about psychological statements being
> >intensional (referentially opaque). The same is true of many
> >statements about other information processing systems besides
> >people.
(MZ)
> This cannot be maintained without arbitrarily fixing the onus for
> processing information, on the basis of some ideologically driven,
> pre-theoretical considerations.
I am not aware of pursuing any ideology. I merely make simple
observations about the things around me and the things we naturally,
and usefully, and, as far as I can tell, truthfully, say about them.
And then I try to understand what's going on. (That's why I like to
do my philosophy in a department with computer scientists and
software engineers: I get useful new concepts for thinking about
these problems, which previous philosophers sadly lacked.)
(AS)
> >But not all psychological statements are intensional (referentially
> >opaque).
> >
> >Another example of (b)
> >
> > "The policeman noticed the burglar climbing over the wall."
> > (I claim that for extensionally equivalent
> > substitutions of "the burglar" the truth value of
> > the sentence will not change.)
(MZ)
> Nonsense. If the semantics of noticing is compositional, your claim
> will never get off the ground -- for then, P's ability to notice B
> doing W, would have to depend on his ability to identify B as such.
> Moreover, the fundamental assumption is widely disputed in literature.
Well, this becomes an empirical question about how language is used,
and there may be different dialects and idiolects. In my version of
English, it follows from the fact that the burglar is the uncle of
Joe Bloggs and the fact that the policeman noticed the burglar
climbing over the wall, that the policeman noticed the uncle of Joe
Bloggs climbing over the wall, even if he did not know it was Joe's
uncle. So for me that's not an intensional utterance.
However this slightly different example (usually) is:
The policeman noticed that the burglar was climbing over the
wall.
I.e. the use of a sentential complement to "noticed" has different
implications from the use of a noun phrase (in my language -- maybe
not yours).
Similar comments apply to other "success" verbs like "saw", "heard".
E.g. Whoever heard the burglar climbing over the wall heard Joe's
uncle climbing over the wall. But he may have heard THAT the burglar
was climbing over the wall without hearing THAT Joe's uncle was
climbing over the wall.
By the way, an example I gave in an earlier posting seems to be
perfectly intelligible to most English speakers and cannot make sense
unless one of the occurrences of the repeated noun phrase is
non-intensional (or so it seems to me):
When the policeman made the mistaken arrest he did not know that
the owner of the house was the owner of the house.
^^^^^^^^^^^^^^^^^^^^^^
I think the underlined bit is extensional (referentially
transparent).
> See e.g. Dretske's recent discussions of the identity of percepts and
> the nature of noticing.
I have not read anything recent by him, but when we took part in a
conference in London a year ago he gave me some of his earlier
papers on the notion of information which I found pretty
unconvincing. (Like many other philosophers, he wants to restrict
certain concepts to phenomena that have biological evolutionary
explanations, where I just find this an unnecessary and arbitrary
linguistic straitjacket, for which I've never met any serious
justification. However I don't object if people wish to constrain
*their own* vocabularies so as to restrict what they talk about, as
long as they allow the rest of us to go on communicating with one
another without cumbersome circumlocutions.)
You seem to have some sympathy for such philosophers, as you want me
not to talk about what a computer can actually do, but about what
people impute to it.
> All one would have to claim, is that the computer's ostensible ability
> to process information is entirely imputed thereto by its human users
> and observers.
Well, I see no merit in encumbering my talk with reference to
hypothetical humans engaging in hypothetical imputations. I actually
believe, for the reasons indicated, that there are all sorts of
cases of information processed by machines that no human knows
about. Your claims sound to me similar to the desperate attempts to
cling on to the belief that we are at the centre of the universe.
Perhaps I am missing something.
> ..For the chronologically gifted among us, this might be
> easier to see by comparing it to a slide rule.
I had the privilege of using slide rules in my youth. The contrast
between a slide rule (which does nothing on its own) and all these
wonderful new machines is evident to anyone who doesn't approach
the latter with an ideological axe to grind.
> >As for whether Quine claimed that first order predicate calculus
> >was the language of science, all I can say is that IF he did, he
> >merely showed what a narrow view of science he had.
>
> And, presumably, still has.
I don't know. Some of the recent stuff of his quoted by DL was so
turgid that I could not see what he (Quine) was getting at.
(AS)
> >There are many extensions to first order predicate calculus that
> >people have found necessary for one purpose or another (e.g. modal
> >logics, which also produce intensional contexts).
(MZ)
> Not all of them are computationally tractable, however.
Well a system that is not totally computationally tractable may
still be very useful in many special cases. I use a Lisp-like
programming language (Pop-11) for which the halting problem is
unsolvable in general, yet I can be sure that many of my programs
will terminate, and that others (e.g. the infinite loop of an
editor) will not (unless interrupted by the user or operating
system or a power failure, etc.).
In general the language allows one to create programs that will
overflow the memory of the computer, or take too long to be useful
-- and yet many of the programs fit nicely into the space available,
and finish about when I expect them to.
Some people want to base science on perfect, infallible, totally
general tools. Just as some people seek the perfect programming
language.
I just don't believe either exists: so I use whatever tools are
available and try to use them with due care.
>...An easy way
> to see that is by considering the importation of intensional entities
> as a type-raising operation, along the lines of Kaplan's Russelling a
> Frege-Church.
Sorry I don't understand this. But I believe the conclusion that you
based on it anyway: lots of interesting things are intractable in
general.
Apologies for going on too long (again).
Cheers.
Aaron