Posted Sun Jul 28 16:34:47 BST 1996
Newsgroups: comp.ai.philosophy
References: <4rcir5$ioh@usenet.srv.cis.pitt.edu> <4s4blh$kkj@sun4.bham.ac.uk> <4s71cm$i83@usenet.srv.cis.pitt.edu> <4squb4$6jj@sun4.bham.ac.uk> <4t6jps$q70@usenet.srv.cis.pitt.edu> <4t9i5t$3vd@percy.cs.bham.ac.uk> <4ta1ms$tp6@zen.hursley.ibm.com>
Subject: Re: Note on truth-values, Frege -- Re Sloman Part 7
In my response to Anders Weinstein I think I partly answered some of
the questions Peter_Lupton@uk.ibm.com raises.
> Date: 26 Jul 1996 09:04:28 GMT
>
> In <4t9i5t$3vd@percy.cs.bham.ac.uk>, A.Sloman@cs.bham.ac.uk (Aaron Sloman) writes:
> ....
> >I take "sense" or "intentional" ("intensional"?) content to be
> >concerned with HOW the mapping is done. Thus
> > That figure is bounded by three straight lines
> > That figure is bounded by three straight lines meeting in three
> > corners.
> >
> >divide possible states of affairs into two classes: the same two
> >classes. But how they do it is different. They have different
> >senses. (This example, or something like it, is in Frege's
> >writing, by the way.)
> >
> >Anyhow, there's nothing normative about any of this. It's all a
> >matter of the *mechanisms* of language. And a solitary highly
> >intelligent thinker could in principle make use of the mechanisms
> >without ever having to have anything to do with a society, other
> >speakers or norms. (He might find it useful to keep records of
> >events, and geographical facts etc.)
[PL]
> Can I ask for clarification, here. Aaron says 'there is nothing
> normative here'. I take it that this means that there is no *social*
> norm here.
Yes
[PL]
> A rational person *ought* to be able to understand that
> the two definitions above include and exclude the same cases.
> This 'ought' is a norm - and the fact that we separate causal
> links (which are immune from error) from rational links (which
> involve the possibility of error) is in itself normative as I understand
> the word.
What sort of "ought" are you using? It is not often noticed that
"ought" and "should" have a purely instrumental use, e.g.
If you want to remain healthy then you ought to take more
exercise.
If you are trying to use the computer then you should switch
it on at the wall socket.
etc. These are statements about necessary conditions for achieving
some goal.
Similarly if a monkey is leaping from one branch to another then
it "ought" to judge the distance accurately. If a robot is to get
its batteries charged in time then it ought to start moving to the
recharge point before the batteries are fully run down.
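To make the instrumental reading concrete, here is a minimal sketch
in Python of the robot case. Everything in it (the names
battery_level, drain_per_metre, the numbers) is invented for
illustration, not a claim about any actual robot: the "ought" is
just a necessary condition on achieving the goal, and failing it
carries no social judgement, only a flat battery.

    # Minimal sketch of the instrumental "ought" (all names and
    # numbers are invented for illustration).  The robot "ought" to
    # start moving when the remaining charge is only just enough to
    # reach the recharge point.

    def ought_to_head_for_charger(battery_level, distance_to_charger,
                                  drain_per_metre, safety_margin=0.1):
        """True when starting now is a necessary condition for
        arriving with charge to spare."""
        needed = distance_to_charger * drain_per_metre
        return battery_level <= needed * (1.0 + safety_margin)

    # Ignoring the "ought" is not blameworthy, merely fatal to the
    # goal: the robot simply fails to get recharged.
    if ought_to_head_for_charger(battery_level=0.15,
                                 distance_to_charger=40.0,
                                 drain_per_metre=0.004):
        print("Head for the recharge point now.")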
[PL]
> So aren't there two statements here? The first statement (which
> I agree with) that rationality need not involve *social* norms;
> the second statement (that I would disagree with) that rationality
> needs no norms *of any sort*?
I don't think talking about norms is at all helpful in this second
context. It just causes confusion, because the normal notion of a
norm is either statistical (something to do with averages) or social
(something to do with what happens to be approved of or valued by a
group of people).
The sense in which an animal's or a robot's information processing
systems can get things right or wrong, make errors and correct them,
has nothing to do with those notions of norms. It is all to do with
conditions for a sub-system to perform its function or for the whole
system to achieve its goals.
[Qualification:
It's not quite as simple as that, because it turns out to be useful
for the more intelligent systems to devise an ontology that goes
beyond the specifics of feedback control mechanisms and sensory
values.
You have advanced the notion (I think) that that extended ontology
can be derived from a striving for parsimony. For now
I want to leave that open: in the case of animals SOME of it may be
pre-programmed into the animal's cognitive apparatus by the genes
because that is what evolution found happens to work, rather than
because some parsimony-seeking mechanism achieved a new
optimisation.
Either way, the notion of truth, or correctness, becomes far more
complex when the animal's ontology extends way beyond what it can
check with its sensors.]
[PL]
> Now I *think* I'm belabouring the obvious - that Aaron uses the
> word 'norm' to mean 'social norm'. But perhaps I'm mistaken.
Anyhow that's the notion of norm that I am claiming is irrelevant.
It seemed to be the notion of norm that Anders claimed was
inherently involved in the concepts of belief, intention,
perception, etc.
[PL]
> Now maybe I'm mistaken, but the idea of a norm (something one can
> succeed or fail at) with its associated notions of doing something
> wrongly, doesn't seem to be necessarily a social notion. Surely there
> is nothing wrong with the idea of non-social, individualistic norms,
> of which reason might be one.
I prefer not to bring notions of "reason" and "rationality" in here.
I don't think rationality is a requirement for intelligence or
intentionality, for reasons I have given previously. E.g. I believe
very young children, monkeys and perhaps also rats have percepts,
beliefs, desires, decisions. I have no reason to think they are
rational (or irrational). I think these notions are applicable only
when there's an extra level of sophistication, where you are able to
monitor your own reasons for doing things, and take decisions that
are based on, or ignore, your reasons.
[AS]
> >I help myself to the idea that the organism or robot can associate a
> >particular sentence or proposition with a way of dividing possible
> >states of affairs into two sets. That's enough.
[PL]
> Surely that couldn't be *sufficient*.
Sufficient for what? Sufficient for success as a chimp? Success as a
domestic robot?
> ..Surely you want consistency
> over time - inconsistency would need an *explanation*.
No more than consistency does.
We know that perfect consistency is not possible because achieving
it in complex sets of information is computationally intractable if
you allow normal forms of inference: even for purely propositional
beliefs, consistency checking amounts to satisfiability testing,
which is NP-complete.
(You can achieve consistency if you do nothing more than record
primitive sensory data. Then no item in the information store can
ever be inconsistent with any other, as long as you do not have any
generalisations. Such an information store would be useless for an
animal or robot since no predictions would ever be possible.)
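A toy illustration, assuming beliefs can be rendered as
propositional clauses (my assumption, purely for illustration):
brute-force consistency checking visits up to 2^n truth-value
assignments for n atoms, which is the intractability just mentioned,
and a store of bare positive literals with no generalisations can
never fail the check.

    from itertools import product

    # Toy sketch, assuming beliefs are propositional clauses: each
    # clause is a set of literals, "p" or "-p".  Brute force tries
    # all 2^n assignments -- the intractability mentioned above.

    def consistent(clauses):
        atoms = sorted({lit.lstrip("-") for c in clauses for lit in c})
        for values in product([False, True], repeat=len(atoms)):
            v = dict(zip(atoms, values))
            if all(any(v[l.lstrip("-")] != l.startswith("-") for l in c)
                   for c in clauses):
                return True   # found a model: the beliefs are consistent
        return False

    # Bare sensory records (single positive literals) cannot clash:
    print(consistent([{"red_at_t1"}, {"warm_at_t2"}]))      # True
    # A generalisation ("nothing red is warm") makes clashes possible:
    print(consistent([{"red_at_t1"}, {"warm_at_t2"},
                      {"-red_at_t1", "-warm_at_t2"}]))      # False

With only positive atomic records the check trivially succeeds (set
every atom true); it is the generalisation that introduces the
possibility of inconsistency, and with it the possibility of
prediction.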
[PL]
> ..Either a
> confession that the robot had made an error
I've already agreed that the possibility of perceptual or
inferential error is inherent in the design of the sort of system I
am talking about. (There are many different cases.)
> ...or that things aren't
> as one thinks, etc.,etc.
I don't understand this option, unless you mean that what appears
inconsistent isn't really inconsistent. If so, that's not the case
we are talking about.
It's a very old idea from work in AI (frequently pointed out by
Minsky I think) and also philosophy of science (Hesse? Toulmin?)
that as long as sets of beliefs are locally consistent it may be
possible to get along without detecting and removing all
inconsistencies. One requirement for this is (paradoxically perhaps)
that the agent's derivational mechanisms are resource-bounded:
otherwise it could derive anything from the inconsistencies (ex
falso quodlibet). But if it were not resource-bounded the problem
would not arise anyway, since the inconsistencies could be detected
and possibly eliminated, or at least replaced by disjunctive belief
sets.
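A sketch of the resource-bounded point, again under invented
assumptions (a crude forward-chainer with a fixed round limit, not
anyone's actual architecture): the locally inconsistent pair sits in
the store, but since only explicitly listed rules can fire, and only
for a bounded number of rounds, nothing licenses the classical
explosion from a contradiction to everything.

    # Sketch: depth-bounded forward chaining over invented rules.
    # A locally inconsistent store ("wet" and "not_wet") does no
    # global damage, because derivation is limited to firing the
    # listed rules for a fixed number of rounds.

    def forward_chain(facts, rules, max_rounds=3):
        derived = set(facts)
        for _ in range(max_rounds):          # the resource bound
            new = {concl for prems, concl in rules
                   if set(prems) <= derived and concl not in derived}
            if not new:
                break
            derived |= new
        return derived

    facts = {"wet", "not_wet", "dark"}
    rules = [({"wet"}, "slippery"), ({"dark"}, "lights_needed")]
    print(forward_chain(facts, rules))
    # {'wet', 'not_wet', 'dark', 'slippery', 'lights_needed'}:
    # the contradiction just sits there; "anything" is not derived.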
[PL]
> ..Without such consistencies and explanations
> for the inevitable inconsistencies, I think something is missing and that
> missing something is, surely, picked out by the word 'norm'.
I suspect that use of this word in this context will simply cause
confusion, because of its normal connotation and associations with
standards, conventions, social systems, and rationality.
[PL]
> Nor do these explanations and confessions have to be social. I often
> 'kick myself' for not seeing something, etc.. Many errors that I make I
> judge myself as having made them without outside interference.
There are two different things going on here. (a) One is the ability
to detect and correct errors. (b) The other is the ability to praise
or blame oneself. This may well be based on your being part of a
social system that uses praise and blame as mechanisms of social
control. Or it may be that there are similar mechanisms that can
work without a social context. Either way, I suspect that (a) occurs
in many animals that are not capable of (b). Maybe the same is true
of psychopaths also. Likewise some robots.
>
> >[AW]
> >> I don't see how you can deny that these
> >> notions are normative.
[AS]
> >I seem to be able to do lots of things that you think are
> >impossible.
> >
> >Sorry.
>
> Now I'm really not sure what's going on. Is Aaron denying normativity,
> or is Aaron denying the need for *social* norms?
>
> Cheers,
> Pete Lupton
I hope it's clearer now. If you and Anders wish to use (I first
wrote "stretch") the word "norm" to cover the sorts of things
described above that I agree are required for intelligence then I
won't argue that that's a mistake, as long as you are aware of the
difference between criteria for correctness or incorrectness that
are ultimately instrumental and those that are judgemental or
social.
I just think this wider usage of "norm" and "normative" will lead to
confusion among students and colleagues.
But that's a mere empirical conjecture!
Aaron