From Aaron Sloman Mon May 8 02:57:22 BST 1995
From Aaron Sloman Wed May 10 09:05:56 BST 1995
To: dave@twinearth.wustl.edu
Subject: consciousness and cognition (comments)
[I am re-sending the message you said you had not received. Thanks
for the new papers. Ascii is fine. It makes responding easier.
I've mainly expressed, sometimes tersely, points of disagreement.
I think there's agreement on the main ideas -- far more than I
expected after our interchange in late 1992.]
Dave,
I noticed in an old file of messages that you had referred me to an
online paper of yours which I did not follow up (possibly because I was
too busy at the time). So I have now fetched it and read it. The paper
is
Consciousness and Cognition
at the Indiana ftp directory.
I was very surprised to find on reading it that our positions are
actually very similar, apart from some terminological points and
difference of emphasis.
We both think that qualia exist and have causal roles, and claim that
they would exist in any well designed human like system. We both claim
that they can both be observed by the system within which they occur and
talked about "from the outside" using the design standpoint
(functionalist standpoint).
There are several differences:
1. You claim that patterns can have some sort of information content
merely in virtue of their structure, whereas I claim that what
information is in a pattern depends on the architecture (virtual machine
architecture, not physical machine architecture) within which it plays a
functional role.
2. I claim that there's not a well defined question whether things have
qualia (or consciousness) because there are so many different cases,
involving different collections of features and capabilities (arising
out of different architectures) and there's no clear notion of which
subsets of those features and capabilities suffice. You seem to think
it's a simple yes/no question, but don't discuss the point in any
detail, though you seem to be willing to attribute qualia to
thermostats.
3. I think that at present we do not have any really good terminology
for asking the questions or formulating the answers, and this leads to
all sorts of ill-defined questions and assertions, and much use of
rhetoric and metaphor instead of good scientific description. The only
way we'll arrive at good terminology is by developing a "meta-theory" of
types of architectures for behaving systems and the kinds of states,
events and processes that different architectures can support. Then
instead of asking whether thermostats, or fleas, or plants have qualia
we'll be able to ask precisely defined questions about whether they have
Q23 (i.e. qualia of type 23, or whatever).
4. We are both dissatisfied with theories that treat relations between
low level (e.g. neural, physical) phenomena and high level phenomena
(e.g. concerning mental states, etc.) as mere empirical correlations.
Rather we want them to have deep non-empirical connections. (This is a
variant of your Coherence Test. I'd prefer to call it the
Non-contingency Test.) I'd go further and say that the connections need
to be of the same type as exist between a high level virtual machine
(e.g. prolog machine) and its physical implementation. More generally, I
think the well understood notion of implementation, in software
engineering and computer science, provides a good basis for trying to
explain various more complicated types of supervenience. (A supervenes
on B = B implements A.)
Unfortunately most philosophers who discuss this kind of thing are
pretty ignorant of types of implementation, and consequently lack the
conceptual tools to think about their problems.
5. You seem to want very strong correspondences (correlations) between
mental (virtual machine) processes and the underlying physical
processes. I think that this is a mistake. There may be a global
implementation of a complex representing structure, but not a mapping
from particular bits of the structure to bits of the physical machine.
The simplest example is a sparse array, which may even contain far more
items of information than there are physical components in the machine.
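To make the sparse array point concrete, here is a toy sketch (in modern Python, purely illustrative, not from either paper): a virtual array with 10^12 cells backed by a handful of stored entries, so there is no one-to-one mapping from cells of the virtual structure to components of the underlying store.

```python
# Illustrative sketch: a sparse array at the virtual machine level.
# The logical cells vastly outnumber the stored "physical" entries.

class SparseArray:
    """A virtual array of `size` cells backed by a small dictionary."""

    def __init__(self, size, default=0):
        self.size = size          # logical extent of the virtual structure
        self.default = default    # value of every cell never written to
        self.store = {}           # the only explicitly stored components

    def __getitem__(self, i):
        if not 0 <= i < self.size:
            raise IndexError(i)
        return self.store.get(i, self.default)

    def __setitem__(self, i, value):
        if not 0 <= i < self.size:
            raise IndexError(i)
        if value == self.default:
            self.store.pop(i, None)   # cell reverts to the implicit default
        else:
            self.store[i] = value

a = SparseArray(10**12)           # a trillion virtual cells
a[999_999_999_999] = 42           # only one entry is physically stored
```

Every one of the 10^12 cells has a definite value, yet only one is realised by a distinct stored component: the implementation is global, not cell-by-cell.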
6. Closely related to the previous point, I think that one of the levels
of implementation between the level of mental phenomena and the level of
physical phenomena is something I call the information processing level.
At that level the architecture can support semantic relations (of the
sorts software engineers talk about in designing, e.g. an office
automation system.) These semantic states do not require Dennett's
intentional stance, nor any assumption of rationality. Compare my
IJCAI85 and Royal Society 1994 papers, listed below.
I've made a few notes on details of the paper, which follow now. In each
case I have quoted some of your text (extricated from the postscript
file) followed by a comment in [[double square brackets]].
I'll number the items using double braces e.g. {{23}}, for easy
reference. I've also appended references to some of my papers that take
the issues further. They are all available via our ftp directory:
ftp://ftp.cs.bham.ac.uk/pub/dist/cog_affect/0-INDEX.html
Apologies if the quotations on which I've commented are now totally out
of date. Was the paper published somewhere?
=======================================================================
{{1}}
In any event, it is uncontroversial that for every subjective mental
event there is a corresponding physical event;
[[It is very controversial and I reject the claim. Compare: if I sell
you half of my land, which physical event corresponds to the event of
division of the land, change of ownership, etc.]]
{{2}}
For every subjective sensation there corresponds a perception,
objectively characterizable, and so on.
[[Not sure what you mean here, unless it is what I would express by
saying that when we focus on our qualia we are, in effect, perceiving
our own mental state, possibly with some inaccuracy, incompleteness,
etc.]]
{{3}}
(1) The third-person approach is sufficient, in principle, to yield a
complete explanation of human behaviour. This follows from the
explanatory completeness of physical laws --- a hypothesis which is
relatively uncontroversial.
[[NO: depends on what kind of explanation you are offering. E.g. some
explanations are designed to evoke sympathy, not prediction. All that
can be completely explained in physical terms is the physical events and
processes. There are emergent phenomena for which non-physical concepts
are required, and whose laws have nothing to do with those of physics.
E.g. the laws of a chess playing machine are not derivable from physics,
they could be changed to suit different chess playing cultures. (Compare
Sloman 1994c)
]]
(1a) The first-person approach is not needed to explain anything about
human behaviour. Of course, the first-person approach may well prove to
be a useful short-cut in explaining behaviour, and a powerful tool, but
the point is that it is not necessary --- it is in principle
dispensable, as far as behaviour is concerned.
[[No. It would not have evolved if it played no useful role, e.g.
enabling people to introspect can help them and others. Also I think
that my "Non-contingency" version of your Coherence Test would require
the phenomena identified via the first-person approach to exist in human
like intelligent systems. I.e. they would not be redundant or
dispensable. (You may think you can refer to something redundant,
dispensable, etc. but I think that's an illusion because the
connections are not visible.)]]
(1b) Behaviour would still be exactly the same, even if we had no
subjective mental states. This is not meant to beg the question of
whether it is possible that we could lack subjective states, but the
idea at least seems coherent.
[[No. some of the kinds of behaviour might be incapable of being
produced except via the sorts of subjective states you discuss.]]
{{4}}
(4a) The first-person approach is not needed to explain our claims about
consciousness.
[[Whereas I'd say its possibility is part of what needs to be
explained.]]
{{5}}
Everything that we say about consciousness is, in principle, amenable to
the usual kind of physical/functional analysis. There is no need to
appeal to any mysterious metaphysical constructs to explain the things
we say. On this account, the last 2000 years of debate over the
Mind-Body Problem would have gone exactly the same in the absence of
subjective experience
[[No - we need the information processing level of analysis, and that is
metaphysically distinct from the physical level and is not the same as
functional analysis in general (such as might apply to a car engine).
See my 1994c]]
{{6}}
Most people will agree that consciousness is a surprising property.
[[Rash claim. Have you done a survey?]]
If it weren't for the fact that first-person experience was a brute fact
presented to us, there would seem to be no reason to predict its
existence. All it does is make things more complicated.
[[On the contrary, there could be a good functional explanation, and
I think there is. Moreover, I'd use design arguments to predict its
existence.]]
{{7}}
Somebody who knew enough about brain structure would be able
to immediately predict the likelihood of utterances such as `I feel
conscious, in a way that it seems no physical object could be', or even
Descartes' `Cogito ergo sum'.
[[Compare explaining the capabilities of a software system. We do not
do this via physics, and it may not even be possible to do it via
physics. Incidentally, even if the physical production of the
utterances can be predicted via physics it does not follow that those
statements with that semantic content would be predicted from
physics. That's because the contents of the utterances are not
expressible in the language of physics.]]
{{8}}
It becomes a third-person high-level term, playing a similar role in
cognitive science as `heat' does in thermodynamics. As for this weird
`first-person experience', then insofar as it is not a purely cognitive
phenomenon, it simply does not exist.
[[It does exist, but it can be misconstrued.]]
{{9}}
Everything we can say about consciousness is physically caused, and
somehow, at the back of our minds, we are aware of this. So when the
words come out of our mouths, we are aware of how easy it might be to
explain our claims without invoking any great mysteries.
^^^^^^^^^^^^^^^^^^
[[Do not confuse explaining claims with explaining the production of
verbal noises, etc. Compare my previous point. I think you make similar
points.]]
{{10}}
I nevertheless believe that there is a grain of truth in
epiphenomenalism, but that it needs to be spelt out more carefully.
[[Supervenient systems all exhibit a kind of epiphenomenalism, a kind of
causal dispensability. I.e. if A supervenes on B, or is implemented in
terms of B, it is possible to give a complete causal account of all the
phenomena at the level of description of B without mentioning phenomena
at the level of description of A. But that does not mean that A-type
phenomena have no causal powers on B type. E.g. a nation's wounded pride
(type A) can cause a war (type A) and that can cause lots of explosions,
and movement of matter (type B). ]]
{{11}}
If there is anything at all called `consciousness' that plays a causal
role, then it does so in exactly the same way that centers of gravity
play a causal role in physics, or that temperature plays a causal role
in thermodynamics: as a convenient third-person abstraction.
[[No there are subtle differences between physical abstractions and
information processing abstractions. Physical properties are (I think)
intrinsic, whereas information processing states are in an important
sense inherently relational. Compare the level of description of control
systems that refers to positive or negative feedback, or damping, etc.
These are not mere abstractions that abbreviate complex physical
descriptions. By contrast talk about centre of gravity is a way of
talking about physical properties.]]
{{12}}
The consideration of questions like (4c) has been used as an explicit
argument for the third-person approach in a few places. Foss (1989),
arguing against the Nagel/Jackson arguments, argues that a `Super
Neuroscientist' could in principle know everything that a being would say
about its qualia-experience, and everything that it might say. From
this, he draws the conclusion that the Super Neuroscientist would know
everything that there is to know about qualia.
[[No. For example, the Super Electronic Engineer would not know
everything there is to know about states of operating systems,
compilers, parsers. He/she might not even know the language required
for describing them, e.g. deadlock, virtual memory, parse tree,
control stack, lexically scoped identifiers, etc. etc. Similarly, a
physicist could not, as physicist, know everything there is to know
about qualia, beliefs, desires, etc.]]
{{13}}
Nevertheless, I believe there is a grain of truth in these arguments.
There is perhaps some sense in which qualia are physical. There is
perhaps some sense in which consciousness is understandable from the
third-person viewpoint. What must be explicated is the nature of this
sense.
[[I think it is important to distinguish:
(a) X is implemented in physical systems.
(b) X is physical in some sense.
(a) does not entail (b). An economic recession is not physical in any
sense.
]]
{{14}}
However, what needs to be explained is why Absent Qualia are impossible,
and how qualia could play a causal role. This question is nowhere
considered by functionalism.
[[ I take it as obvious that if they exist they must have a causal role,
which is partly why evolution selected mechanisms that have them.
Incidentally I am not unique: e.g. during discussions, Nico Frijda
(Amsterdam) put forward an exactly analogous position, against Harnad,
at a recent conference on Emotions, in Geneva. ]]
{{15}}
Another way in which to phrase (C2) might be `An account of why
consciousness seems so strange to us.'
[[NB Lots of things seem strange to scientists, e.g. gravity,
electromagnetic radiation, undecidability results, etc. ....]]
{{16}}
Another way of putting the Coherence Test is that we need accounts of
(C1a) The mind that we experience (a first-person question); and
(C2a) The mind that the brain perceives (a third-person question).
[[Both are grossly under-specified. But there is something right
about this: our consciousness (whatever that is) and our talk about it
must be closely related in the same explanatory framework. Also our
high level mental states and processes and our neural processes must
not be merely contingently related: The Non-Contingency Test]]
{{17}}
So when it comes to explaining the things we say and even believe about
consciousness,
[[i.e. not just the production of noises, etc.]]
there is little doubt that a functional analysis will be the way to go.
[[However there are different kinds of functional analysis. And the kind
of functional analysis that is adequate for a car engine or telephone
exchange need not be adequate for an information processing system,
which uses semantic relations.]]
{{18}}
It seems very likely that when an account of the processes which lead to
our consciousness-claims is found, it will be specifiable at a level
which lies above the biochemical. If we insist that consciousness
itself inheres at a lower level than this, then the source of
consciousness is independent of the source of our consciousness-claims,
and our theory fails the Coherence Test.
[[This is important. Not just consciousness, but all high level mental
phenomena must not be ARBITRARILY correlated with low level phenomena.
The possibility of their existence must be explainable at an appropriate
functional level.]]
{{19}}
For instance, the difference between red-sensations and green-sensations
is very difficult to articulate, and so the precise nature of these
sensations might be dependent on certain low-level facts.
[[But not in any special way. Those differences are part of a system of
information processing, with "red" and "green" labelling particular
nodes in a web of relationships. The notion that the web can be
"inverted" is not necessarily coherent, depending on its structure.
You seem to end up saying similar things below.]]
{{20}}
A more profound difficulty with functionalism is that it does not come
close to dealing with (C1). It has often been noted (recently by Searle
1989) that functionalism is not a theory which is motivated by the
existence of first-person mentality; rather, conscious mental states are
viewed as an obstacle for functionalism to deal with (witness all the
Absent Qualia and Inverted Spectrum objections).
[[Not my form of functionalism. The existence of something like qualia
is something which has to be explained by an adequate functionalist
account, just like the existence of all other mental phenomena: beliefs,
desires, emotions, etc. The only qualification is that some of our
pre-theoretical descriptions of mental phenomena may turn out to be
inadequate after the development of the explanatory theory. Often our
view of what we were trying to explain changes when we have found the
explanation, especially in science.]]
{{21}}
It is quite possible that functionalism is compatible with a correct
theory of consciousness, but taken alone it is not that theory.
[[ That's true of the simplest versions of functionalism. We need a good
taxonomy of varieties of functionalism. I think a suitably rich
functionalist theory of mind would entail whatever correct theory of
consciousness there is.]]
{{22}}
(Functionalism seems to do an excellent job of capturing
non-phenomenological mental states, such as propositional attitudes, but
that is not our concern here.)
[[If this is the claim that existing theories in AI or cognitive science
explain propositional attitudes I would deny that. Such states could not
be explained without a much deeper theory of the architecture(s) within
which they have their functional role. I think AI is still at a very
primitive stage.]]
{{23}}
...a theory must provide three things, on my estimation:
(1) A metaphysical account of why subjective experience is possible.
This will almost certainly include some new metaphysical construct, over
and above physical laws.
[[Why so much emphasis on PHYSICAL laws? They are insufficient for all
sorts of things. Physical laws do not explain how a prolog engine works.
E.g. there's no way of deducing the unification algorithm from physics.
Compare the analysis of emergence in Sloman 1994c]]
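To illustrate that point, the core of unification can be sketched in a few lines (an illustrative toy in Python, my own, without the occurs-check that a careful implementation would add). Nothing in the laws of physics mentions variables, substitutions or failure:

```python
# Toy Prolog-style unification (illustrative sketch only).
# Variables are strings starting with an uppercase letter;
# compound terms are tuples: (functor, arg1, arg2, ...).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Return a substitution unifying x and y, or None on failure."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

# unify(('likes', 'X', 'mary'), ('likes', 'john', 'Y'))
#   -> {'X': 'john', 'Y': 'mary'}
```

The concepts the algorithm trades in (variable, binding, failure) belong to the virtual machine level; no amount of physics yields them by deduction.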
{{24}}
(2) A functional account of why we think and claim that we are
conscious. This will presumably be in the idiom of cognitive science.
[[This idiom needs to be specified. I have no reason to believe that
CURRENT cognitive science (or AI) has anything like a sufficiently rich
vocabulary for talking about architectures for behaving systems, nor the
states and processes and causal relations that can occur within them.]]
{{25}}
But before I go on to elaborate on this particular theory, it might be
useful to say a couple of words about why double aspect theories are so
attractive. In short, this is because double aspect theories are at the
same time almost epiphenomenalism and almost identity theories,
combining the virtues of both with the vices of neither.
[[I agree. This is true of all cases of supervenience, implementation
hierarchies, talk of virtual machines, levels of description, etc.]]
{{26}}
The theory I will propose is almost an identity theory (by this term I
include both `Brain State' and `Functional State' Identity Theories,
though the proposed theory is nearer to the latter than the former), in
that it holds that first-person and third-person states are the same
thing --- but different aspects of the same thing, a crucial difference.
[[I think it can be both correct and incorrect to say that supervenient
states and the states that implement them are the same thing, depending
on how the identity claim is construed. In general I think the identity
claim is more confusing than illuminating, and I prefer to try to
clarify the notion of supervenience WITHOUT relying on identity. To
explicate this I think it is best to start with much simpler examples
than human minds and brains. Start with well understood software systems
for information processing. These exhibit most of the important aspects
of supervenience: different levels of description, different aspects, etc.
In particular, in a typical computing system there are some parts that
interpret other parts, e.g. the CPU using a bit pattern as an address or
instruction name.]]
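A toy illustration of that last point (my own sketch, not from the paper): in the little machine below the same bit pattern counts as an instruction when fetched by the execute loop and as plain data when fetched by a LOAD.

```python
# Illustrative toy machine with a 2-word instruction format [opcode, operand].
# Opcodes: 1 = LOAD mem[arg] into accumulator, 2 = ADD mem[arg], 0 = HALT.

def run(memory):
    """Fetch-execute loop: interprets patterns at even addresses as opcodes."""
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc], memory[pc + 1]
        if op == 1:          # LOAD: treat memory[arg] as data
            acc = memory[arg]
        elif op == 2:        # ADD: again, memory[arg] read as data
            acc += memory[arg]
        elif op == 0:        # HALT: return the accumulator
            return acc
        pc += 2

# Cell 6 holds the pattern 2: as an operand target it is "the number 2";
# fetched at an even address it would be "the ADD opcode".
program = [1, 6,   # LOAD mem[6]
           2, 6,   # ADD  mem[6]
           0, 0,   # HALT
           2]      # data cell: the very same bit pattern as the ADD opcode
```

What the pattern 2 *means* is fixed not by the pattern but by which part of the system interprets it, and how.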
{{27}}
It is almost a version of epiphenomenalism, as we can imagine the
subjective aspects `hanging off' the non-subjective aspects, allowing
the complete autonomy of the physical --- while at the same time
allowing that subjective states can be causally efficacious, as they are
but another aspect of the objective states.
[[We must get rid of these confusing metaphors (`hanging off') and
develop a rich and precise terminology. Compare the development of
concepts of feedback and types of control in cybernetics.]]
{{28}}
The two different `aspects' that I propose are pattern and information.
Wherever they occur, pattern and information occur together. All
information is carried by some pattern in the physical world; all
patterns carry some information.
[[I think you are espousing a view close to that of Newell and Simon
when they talk about physical symbol systems. I think this is
fundamentally wrong. Many of the patterns and symbol structures that
carry information exist not in physical mechanisms, but in some sort of
virtual machine implemented in physical mechanisms. However, I agree
that
(a) among the most important features of mind-like systems are the
storage, construction, manipulation and interpretation of
structures.
(b) another important feature is the interpretation of some
structures as having a semantics.
Maybe that's another way of talking about your pattern and your
information.
]]
{{29}}
My proposal is that third-person (objectively understandable) mental
events are patterns in the brain, while the corresponding subjective
(first-person) mental events are information.
[[I agree with the spirit of this, but it is what I have always thought
any sensible functionalist about mental states would say anyway. Of
course there are many philosophical functionalists who are regrettably
ignorant of information processing systems and how they work, and
understand very little about the multiplicity of ways in which they can
be implemented. E.g. Fodor seems to think that all implementations of
high level virtual machines use compilers that translate programs to
machine code. A minor quibble I have with your remark is that most
patterns used in the brain are not physical patterns but patterns in
some sort of virtual machine, just as addresses etc. in computers are
bit patterns that exist in some virtual machine, implemented using
tiny switches.]]
{{30}}
It is rarely made clear just what ontological claims a functionalist
would wish to make, though. Are they committed to patterns as part of
their natural ontology? They might not necessarily have to take this
step, as if we are considering mental events only from the third-person
viewpoint, then it is not clear that we have to admit them into our
ontology anymore than we have to admit centers of gravity --- both might
be `convenient fictions', in Dennett's terminology. But I am prepared
to bite the bullet, and reify patterns.
[[Well I have always taken it as blindingly obvious that most of the
things we talk about in real life are non-physical things implemented in
the physical world, e.g. families, feuds, wars, recessions, economic
inflation, ownership of property, poverty, social cohesion, etc. etc.
Anyone who says that only physical things are real and only physical
events can stand in causal relations or have causal powers would have to
give up being a normal human being interacting with other normal human
beings. Most of what is studied in universities would then be concerned
with convenient fictions.
I.e. your position is correct, but, to me totally non-controversial.
Functionalists and other philosophers who say otherwise are probably
using muddled (probably incoherent) notions of `real', `cause', etc.]]
{{31}}
Admitting patterns into our ontology is a surprisingly painless way of
doing this --- and once it's done, we get minds for free. Of course,
we've done a little more than reify pattern: we've reified
pattern/information.
[[I think you need to do more than that. You need to reify the
architecture within which the patterns exist, and you need to be very
explicitly concerned with kinds of causation and control. These are
ideas that I have been trying to work out, painfully slowly, over the
last ten years or so. (E.g. my paper in IJCAI85)]]
{{32}}
...I can't imagine two more appropriate or natural candidates
for the `aspects' than pattern and information.
[[But they are not enough, for the control architecture, which defines
causal relationships, is missing.
Also you had better say what you mean by `information'. Pattern is easy
enough: the concept of an abstract data-type in computer science probably
gives as good an account as we need.]]
{{33}}
Once we've made this posit, then conscious mentality falls out.
Third-person mental events are patterns in the brain: the corresponding
conscious mental events are the information that these patterns carry.
[[No. You have gone too far. There are many forms of information and
information processing in the brain that are unconnected with
consciousness as normally understood. So something more is needed to
differentiate conscious from unconscious or non-conscious phenomena
involving pattern and information.]]
{{34}}
(A nice way of putting this is `information is what pattern is like from
the inside'. I'm not quite sure what it means, but it certainly sounds
good.)
[[It is not good! There's work still to be done, to get it right.
In my 1995c I try to show how information bearing substates in a
complex control architecture may have a syntax, a pragmatics, a
semantics and possibly a role in reasoning. All of these depend on
how the substates relate to and are used by the rest of the system.
They are all part of what the control states are like `from the
inside']]
{{35}}
Anyway, conscious mentality arises from the one big pattern which I am.
[[Now you have gone over the top! One of the most important features of
information processing systems is their architectural intricacy combined
with functional differentiation. Statements like yours
(unintentionally?) distract attention from the important task of
analysing that intricacy, which is required for an understanding of how,
e.g., we differ from office automation systems, fleas, amoebas, etc. all
of which are information processing architectures.]]
{{36}}
That pattern, at any given time, carries a lot of information --- that
information is my conscious experience.
[[You are assuming that there is ONE well defined pattern somehow
identified by being you at a certain level of description. I think you
have fallen into the sort of trap that leads some philosophers to follow
Kant in talking about the `unity of consciousness'.]]
{{37}}
(I should be more careful, I know.
[[yes : -) ]]
I don't necessarily want to commit myself to the existence of a single
individual that the term `I' refers to. But the picture is much
the same whether one is committed to that or not.)
[[I think you will retract that comment after doing more work on the
details.]]
{{38}}
The last few paragraphs may strike the reader as wanton ontological
extravagance.
[[Only ill-informed philosophers who think everything is either physical
or magical. Gilbert Ryle, for example, knew better than that, when
he wrote The Concept of Mind.]]
{{39}}
The key point is that once the information flow has reached the central
processing portions of the brain, further brain function is not
sensitive to the precise original raw data, but only to the pattern (to
the information!) which is embodied in the neural structure.
[[I think that what you are saying is basically correct, except for the
implication that there is some well defined set of mechanisms referred
to as "the central processing portions". The sort of abstraction from
original data that you are referring to starts right at the lowest
levels, e.g. close to the retina, and goes on getting more and more
sophisticated and less and less closely related to the original details
as processing proceeds. Have you ever worked on 3-D vision?
I.e. centrality has nothing to do with it. Compare your later
comments on colour.]]
{{40}}
Anyway, here is why colour-perception seems strange. In terms of
further processing, we are sensitive not to the original data, not even
directly to the physical structure of the neural system, but only to the
patterns which the system embodies, to the information it contains. It
is a matter of access. When our linguistic system (to be homuncular
about things) wants to make verbal reports, it cannot get access to the
original data; it does not even have direct access to neural structure.
It is sensitive only to pattern.
[[ Several comments:
(a) it's not just the linguistic system. It's all the internal modules
that need to make use of visual information about what's out there: they
can't directly access what's out there, nor the photons hitting the
retina, nor the low level implementation details. (I think Dennett's
emphasis on linguistic mechanisms is quite misguided. They developed
long after most of the system was in place.)
(b) There's nothing strange or metaphysical about this: it is equally
true of the vast majority of computing systems. E.g. although BASIC
includes "peek" and "poke" those commands only access abstractions: bit
patterns in a virtual address space.
(c) how a pattern within the system is interpreted can depend not only
on the structure of the pattern but on
(i) the context provided by other patterns and control states,
(ii) the functional role of the sub-system accessing it.
I.e. there's a sense in which the systems are sensitive to more than
the patterns.]]
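Point (c) can be made concrete with a trivial sketch (illustrative only, my own example): two subsystems access the same stored pattern but, given their different functional roles, extract different information from it. The pattern alone does not fix its content.

```python
# One shared 8-bit pattern, read by two subsystems with different roles.

pattern = 0b11110000

def as_unsigned(p):
    # A subsystem whose role is to read magnitudes sees 240.
    return p

def as_signed(p):
    # A subsystem whose role is to read two's-complement offsets sees -16.
    return p - 256 if p >= 128 else p
```

The difference in content comes from the functional role of the accessing subsystem, not from anything intrinsic to the pattern.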
{{41}}
Thus, we know that we can make distinctions between certain wavelength
distributions, but we don't know how we do it. We've lost access to the
original wavelengths --- we certainly can't say `yes, that patch is
saturated with 500-600 nm reflections'.
[[In fact, I think it is even more complicated than that. As biological
systems we are not particularly concerned with wavelengths but with
properties of objects out there - edibility, rigidity, whether it's a
predator, etc. etc. Following Gibson, I think that visual mechanisms have
evolved to provide information about the objects, not about the proximal
pattern of radiation on the retina. Thus we'll see things looking the
same colour even though the illumination changes and the distributions
of energy among wavelengths changes. This is consistent with your
general claims, but not your particular comments on colours.]]
{{42}}
We have access to nothing more --- we can simply make raw
distinctions based on pattern --- and it seems very strange.
[[Notice that it is strange only to people with a restricted
philosophical viewpoint. The person in the street finds nothing strange
about it, any more than he finds it strange that a recession can cause
an increase in unemployment.]]
{{43}}
Shape-perception, for instance, strikes us as relatively non-strange;
[[I think you should stop making claims about "us" and "we" etc. There
are different people with different reactions, many of them culturally
determined and I include the culture of contact with professional
philosophers and scientists who are amateur philosophers. Having
worked on vision for some time, I think shape perception is one of
the hardest unsolved problems in AI, and very mysterious.]]
{{44}}
the visual system is extremely good at preserving shape information
through its neural pathways.
[[Clearly you have never done any detailed work on visual perception.
Most of what we see is TOTALLY different from what's on the retina.
E.g. retinal patterns are not three dimensional.]]
{{45}}
The story for `internal perception' is exactly the same. When
we reflect on our thoughts, information makes its way from one part of
the brain to another, and perhaps eventually to our speech center.
[[I suspect you have a grossly over-simplified notion of the
architecture of the mind. I've been struggling with these issues for
some time.
With research students I've written a draft paper on the architectural
basis for grief which includes some handwaving about the issue of
internal perception. After revision it will be published in Philosophy
Psychiatry and Psychology. The paper is available as
ftp://ftp.cs.bham.ac.uk/pub/dist/cog_affect/geneva.ps.Z
as it was originally prepared for the Geneva emotions week in April 95]]
{{46}}
That is why consciousness seems strange, and that is why the debate over
the Mind-Body Problem has raged for thousands of years.
[[Has it? Cathy Wilkes, a philosopher at Oxford, claims that even the
word "consciousness", and similar words in other languages, cannot be
found used in anything like the modern sense until about two or three
hundred years ago. It's an invention of philosophers and theologians,
with etymological roots connected with conscience, and God talking to
us! (If I remember correctly what she claims.)]]
{{47}}
On this account, there are two criteria for being a conscious entity: a
metaphysical (first-person) and a functional (third-person) criterion.
The metaphysical criterion is that one must be a pattern. All patterns
exist, presumably. It would seem strange to reify some patterns but not
others. But not all patterns are conscious. To be a conscious pattern,
one must be part of the right kind of pattern processor, bearing an
appropriate relation to it.
[[I don't think there's anything metaphysical about it, except
insofar as everything's metaphysical, including recessions, wars,
feedback, etc..]]
{{48}}
An interesting and difficult question concerns the status of patterns
which are not part of pattern processors.
[[Before getting into this huge jump between patterns that are part of
human like systems and patterns that are not part of any information
processing system, it would be useful to have a careful analysis of the
variety of cases in between, covering all sorts of different animals and
control systems.
Then we can get away from broad-brush metaphors and hand-waving to
really detailed explanatory theories linked to a much richer
theory-grounded vocabulary than we now have. (Compare how concepts of
kinds of stuff were enriched by development of the periodic table, on
the basis of a theory of the architecture of matter.)]]
{{49}}
Being the number 5 might be like something; if only like being asleep,
without the excitement of dreaming.
[[John Austin once said, in discussions of whether "exists" is a
predicate, something like: "some people think existence is like
breathing, only quieter".]]
{{50}}
There is no principled distinction between the kind of
pattern-processing performed by such a network and that performed by a
brain, except one of complexity. And there is no reason to believe
that complexity should make such a great difference ...
[[You'll probably change your mind when you have spent more time working
on architectures. There are other differences besides degree of
complexity. There are kinds of organisation and functional
differentiation. These are far more important than degree of
complexity. A thundercloud has a high degree of complexity, but no
mental states.]]
{{51}}
Certainly human qualia will be more complex and interesting,
reflecting the more complex processing; but then, even considering our
processing of `redness', there is no evidence that much more complex
processing goes into this than goes into a typical large connectionist
network.
[[Frankly I think you grossly underestimate the sophistication of our
visual system. That may be because you don't agree with, (or don't
understand?) JJ Gibson's point about the senses being concerned with
detecting affordances in the environment, as opposed to classifying
stimuli.]]
{{52}}
4 Connectionist networks help illustrate another point: that patterns
may supervene on functional descriptions.
[[You are now coming close to an important issue. There are not just two
levels of description: patterns/qualia and physical. Implementation can
go through complex hierarchies of levels, and need not even be fully
hierarchical, for circles of implementation are possible: e.g. if
portions of the microcode are implemented in terms of user-routines
which in turn are implemented via microcode.]]
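The microcode example can be sketched in a few lines (a toy, hypothetical machine, not any real microcoded system): user routines run on a tiny VM, while one of the VM's own "primitive" operations is itself implemented as a user routine running on that same VM.

```python
# Toy sketch (hypothetical) of a "circle of implementation":
# the VM implements user routines, and one VM primitive is
# "microcoded" as a user routine running on the same VM.

def run(program, stack):
    """Execute a list of (op, arg) instructions on a stack."""
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "DUP":
            stack.append(stack[-1])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "DOUBLE":
            # This 'primitive' is implemented by a user-level
            # routine, which in turn runs on this very VM.
            run(DOUBLE_ROUTINE, stack)
    return stack

# A user-level routine, expressed in the VM's own instructions.
DOUBLE_ROUTINE = [("DUP", None), ("ADD", None)]

print(run([("PUSH", 21), ("DOUBLE", None)], []))  # [42]
```

So the levels here are neither two in number nor strictly hierarchical, which is the point of the comment above.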
{{53}}
The current account is thus compatible with functionalism of a certain
variety. You might say that it takes functionalism as a starting point,
and adds what is necessary to deal with the problems of consciousness.
[[I think not. You will need to go much further and present a proper
theory of the architecture within which consciousness-supporting states
can arise.]]
{{54}}
Conscious experience is identified with the `information' aspect of
certain pattern/information states. This information can certainly make
a difference. It is just another aspect of the pattern, and there is no
question that the pattern plays a causal role. Changing the information
changes the pattern, and changing the pattern changes many consequences.
[[I am growing more and more uncomfortable with your use of the word
"information". I think you need to say more about it. E.g. are you
referring to semantic content (i.e. patterns have information in that
they can refer to something else)? In that case what's the difference
between patterns with information and those without? How many different
kinds of information are there? Is expressing information an intrinsic
property of the pattern or is it essentially dependent on its causal
role within a system? You begin to answer this later. See below.]]
{{55}}
Thus, in a single head, there might be two quite distinct
phenomenologies, without any overlapping first-person mental states.
[[Several people said this in response to Searle in the original
BBS commentaries. Mine was one. Searle was unmoved. Why should he be
moved by your version?]]
{{56}}
Some might claim that a given pattern can be interpreted as carrying any
information you like. I believe that this is using a different sense of
the word `information' to mine. The kind of information I have dealt
with here is not `information that' --- it does not need to refer. The
kind of information I believe in is intrinsic. In this sense, there is
a natural mapping from pattern to information.
[[Sounds as if you are close to the Shannon notion of information, which
is purely syntactic. (likewise the Algorithmic notion of information.)
Why you think this is relevant to mentality is not clear to me. In my
Royal Society paper I've tried to do something similar to you, but
starting from the notion of information used by software engineers, e.g.
when they talk about information about the employees in an organisation,
or information about which machines are currently up, etc. This involves
semantic content, and arises in a machine only where there is a suitable
supporting architecture. However, some aspects of the semantic content,
including reference to particular located individuals or events,
require causal links in the environment.]]
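The contrast is easy to demonstrate. Here is a minimal sketch of the standard Shannon measure (bits per symbol from symbol frequencies): two strings with opposite semantic content but the same statistics carry exactly the same Shannon "information", which is why that notion is purely syntactic.

```python
# Minimal Shannon entropy (bits per symbol) over symbol frequencies.
from collections import Counter
from math import log2

def entropy(s):
    """Shannon entropy in bits per symbol of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

a = "dog bites man"
b = "man bites dog"   # same symbols, opposite meaning
print(entropy(a) == entropy(b))  # True: the measure ignores meaning
```

Semantic content, by contrast, distinguishes the two strings, and on the software engineer's notion it depends on the supporting architecture, not on the symbol statistics alone.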
{{57}}
My maximal hope is that it has removed much of the confusion
surrounding the problem, and localized the mystery at one key, primitive
locus: the relationship between pattern and information.
[[It's a good start. But without bringing in the architecture within
which patterns and information can have functional roles you have not
done the job!]]
Some references follow.
=======================================================================
Sloman, A. (1992b)
'The emperor's real mind: review of Roger Penrose's The Emperor's New
Mind: Concerning Computers, Minds and the Laws of Physics,'
Artificial Intelligence, 56 (1992), pp 355-396

Sloman, A. (1993)
'The mind as a control system,'
in Philosophy and the Cognitive Sciences,
(eds) C. Hookway and D. Peterson,
Cambridge University Press, pp 69-110, 1993
(Supplement to Philosophy)

Sloman, A. (1994a)
'Explorations in design space,'
in Proc ECAI94, 11th European Conference on Artificial Intelligence,
ed. A. G. Cohn, John Wiley, pp 578-582, 1994

Sloman, A. (1994b)
'Computational modeling of motive-management processes,'
in Proceedings of the Conference of the International Society for
Research in Emotions, Cambridge, July 1994,
ed. N. Frijda, pp 344-348, ISRE Publications, 1994

Sloman, A. (1994c)
'Semantics in an intelligent control system,'
in Philosophical Transactions of the Royal Society: Physical Sciences
and Engineering, Vol 349, 1689, pp 43-58, 1994

Sloman, A. (1995a)
'Exploring design space and niche space,'
in Proceedings 5th Scandinavian Conf. on AI, Trondheim, May 1995,
IOS Press, Amsterdam, 1995

Sloman, A. (1995b)
'A philosophical encounter: an interactive presentation of some of the
key philosophical problems in AI and AI problems in philosophy,'
in Proc 14th International Joint Conference on AI, Montreal, 1995

Sloman, A. (1995c)
'Towards a general theory of representations,'
in D. M. Peterson (ed), Forms of Representation,
Intellect Press (to appear 1995)
[end]