Yes: I feel pretty sure that anyone reading this is indeed conscious. However, the NYT recently ran a short piece from Michael S. A. Graziano which apparently questioned it. A fuller account of his thinking is in this paper from 2011; the same ideas were developed at greater length in his book Consciousness and the Social Brain.

I think the startling headline on the NYT piece misrepresents Graziano somewhat. The core of his theory is that awareness is in some sense a delusion, the reality behind it being simple attention. We have ways of recognising the attention of other organisms, and what it is fixed on (the practical value of that skill in environments where human beings may be either hunters or hunted is obvious): awareness is just our garbled version of attention. He offers the analogy of colour. The reality out there is different wavelengths of light: colour, our version of that, is a slightly messed-up, neatened, artificial version which is nevertheless very vivid to us in spite of being artificial to a remarkably large extent.

I don’t think Graziano is really denying that awareness exists in some sense; as a phenomenon of some kind it surely does. What he means is rather that it isn’t veridical: what it tells us about itself, and what it tells us about attention, isn’t really true. As he acknowledges in the paper, there are labelling issues here, and I believe it would be possible to agree with the substance of what he says while recasting it in terms that look superficially much more conventional.

Another labelling issue may lurk around the concept of attention. On some accounts, it actually presupposes consciousness: to direct one’s attention towards something is precisely to bring it to the centre of one’s consciousness. That clearly isn’t what Graziano means: he has in mind a much more basic meaning. Attention for him is something simple like having your sensory organs locked on to a particular target. This needs to be clear and unambiguous, because otherwise we can immediately see potential problems over having to concede that cameras or other simple machines are capable of attention; but I’m happy to concede that we could probably put together some kind of criterion, perhaps neurological, that would fit the bill well enough and give Graziano the unproblematic materialist conception of attention that he needs.

All that looks reasonably OK as applied to other people, but Graziano wants the same system to supply our own mistaken impression of awareness. Just as we track the attention of others with the false surrogate of awareness, we pick up our own attentive states and make the same kind of mistake. This seems odd: when I sense my own awareness of something, it doesn’t feel like a deduction I’ve made from objective evidence about my own behaviour: I just sense it. I think Graziano actually wants it to be like that for other people too. He isn’t talking about rational, Sherlock Holmes-style reasoning about the awareness of other people; he has in mind something like a deep, old, lizard-brain kind of thing: the sense of somebody there that makes the hairs rise on the back of your neck and your eyes saccade quickly towards the presumed person.

That is quite a useful insight, because what Graziano is concerned to deny is the reality of subjective experience, of qualia, in a word. To do so he needs to be able to explain why awareness seems so special when the reality is nothing more than information processing. I think this remains a weak spot in the theory, but the idea that it comes from a very basic system whose whole function is to generate a feeling of ‘something there’ helps quite a bit, and is at least moderately compatible with my own intuitions and introspections. What Graziano really relies on is the suggestion that awareness is a second-order matter: it’s a cognitive state about other cognitive states, something we attribute to ourselves and not, as it seems to be, something directly about the real world. It just happens to be a somewhat mistaken cognitive state.

That still leaves us in some difficulty over the difference between me and other people. If my sense of my own awareness is generated in exactly the same way as my sense of the awareness of others, it ought to seem equally distant – but it doesn’t, it seems markedly more present and far less deniable.

More fundamentally, I still don’t really see why my attention should be misperceived. In the case of colours, the misrepresentation of reality comes from two sources, I think. One is the inadequacy of our eyes; our brain has to make do with very limited data on colour (and on distance and other factors) and so has to do things like hypothesising yellow light where it should be recognising both red and green, for example. Second, the brain wants to make it simple for us and so tries desperately to ensure that the same objects always look the same colour, although the wavelengths being received actually vary according to conditions. I find it hard to see what comparable difficulties affect our perception of attention. Why doesn’t it just seem like attention? Graziano’s view of it as a second-order matter explains how it can be wrong about attention, but not really why.
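The “hypothesising yellow” point can be made concrete in a few lines of code. The sketch below is a toy illustration only: the cone sensitivity curves are invented Gaussian approximations with made-up peaks and widths, not physiological data. The point it shows is that a single mid-wavelength light and a red-plus-green mixture can drive the three cone types into a broadly similar response pattern, and that pattern is all the brain has to go on.

```python
import math

# Toy Gaussian approximations of the three human cone sensitivities.
# Peak wavelengths (nm) and widths are illustrative values, not real data.
CONES = {"L": (565, 50), "M": (540, 45), "S": (445, 30)}

def cone_response(spectrum):
    """Integrate a spectrum (wavelength -> intensity) against each cone type."""
    out = {}
    for name, (peak, width) in CONES.items():
        out[name] = sum(
            intensity * math.exp(-((wl - peak) ** 2) / (2 * width ** 2))
            for wl, intensity in spectrum.items()
        )
    return out

# A single 'yellow' wavelength versus a red + green mixture:
pure_yellow = {580: 1.0}
red_plus_green = {620: 0.55, 540: 0.55}

print(cone_response(pure_yellow))     # strong L and M response, negligible S
print(cone_response(red_plus_green))  # a similar L > M >> S pattern
```

In both cases the qualitative pattern is the same (L and M strongly stimulated, S barely at all), which is why the visual system has to guess, and why physically different spectra can end up looking like the same colour.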

So I think the theory is less radical than it seems, and doesn’t quite nail the matter on some important points: but it does make certain kinds of sense and at the very least helps keep us roused from our dogmatic slumbers. Here’s a wild thought inspired (but certainly not endorsed) by Graziano. Suppose our sense of qualia really does come from a kind of primitive attention-detecting module. It detects our own attention and supplies that qualic feel, but since it also (in fact primarily) detects other people’s attention, should it not also provide a bit of a qualic feel for other people too? Normally when we think of our beliefs about other people, we remain in the explicit, higher realms of cognition: but what if we stay at a sort of visceral level, what if we stick with that hair-on-the-back-of-the-neck sensation? Could it be that now and then we get a whiff of other people’s qualia? Surely too heterodox an idea to contemplate…


Posted by Peter on October 19, 2014 at 11:21 am.

37 Comments

1. Yotam Shmargad says:

Awareness of our own states may have come about as a way of honing our awareness skills. A kind of benchmark against which to check the accuracy of our evaluations. It feels more special than awareness of others’ states simply because of the frequency with which we point the lens at ourselves.

Peter,
You address, at least implicitly, a possible evolutionary framework for the relations between attention and awareness. And it may be interesting to see what such a perspective can bring.
Perhaps ‘a kind of primitive attention-detecting module. …, at a sort of visceral level’ could be looked at as animals interpreting their environment in order to satisfy their vital constraints (like staying alive as an individual and as a species). This can be considered as the generation of meaningful information about elements of the environment in order to implement actions that satisfy the internal constraints.
When the elements of the environment have been experienced and memorized through different meanings and action scenarios, they can be considered as ‘represented’ within the animal (still for constraint satisfaction). And perhaps then we can speak of animals being aware of their environment through meaningful representations.
The picture changes when evolution brings in self-consciousness. The human animal is then capable of accessing his own internal mental states and thinking about them, which is very difficult to do with the mental states of other humans.
The same approach, based on meaning generation for internal constraint satisfaction, can be applied to machines subject to derived constraints that we have programmed into them (detecting an obstacle is meaningful for a robot programmed to avoid obstacles).
The difference with living entities is that they carry intrinsic constraints like staying alive and living a group life (for humans we can add constraints like ‘avoid anxiety’, ‘valorize ego’, ….).
I feel that a lot can be said on subjects related to computation and information following an evolutionary approach. More on this is available at http://philpapers.org/archive/MENCOI.pdf

Peter, I will just reproduce bits from my book to answer your posts. No point in tailoring, as it goes way over the heads of your readers. The book is free on skydrive at http://sdrv.ms/1a4HBbk

“I say a contained unit must exist to create ongoing subjective experience of its functions in a world in feelings and thoughts, even though they are the only means by which it can know it exists. Descartes correctly identifies subjective experience as primary, and chooses thoughts over feelings in this quote. The existence of the unit, “I”, “therefore” follows from being aware by thought. But Descartes’ “I”, as “thought”, can revert to a pure point of reference for Kant as “the self”. I cannot vouch for either of them beyond that, as they go off the rails.

In his Transcendental Apperception, Kant is basically saying all sense datum is integrated into a self by a soul, for an experience of a unified self in relation to all sense datum. The self and sense datum become objects in one soulful experience. I will say the sense datum is of an interface of my moves and a world, rather than just of a world. I will say the self does not comprise merely my own moves, as imperative immersion provides continual real feelings in a world that are mine upon capture. I will say the self is not an object, and nor is the sense datum, as both constitute the experience itself, which is subjective at every instant.

There is no soul, just processing to create the subjective experience automatically when it finalizes. The subject that lives at every instant of finalized processing can purport to apply itself in rigorous objectivity around itself. It is an experience of thoughts about real feelings, with freedom to know more about “oneself in the world” by the unified experience of “oneself in the world” setting its own course. We do not perceive ourselves, we are always ourselves, and we apply ourselves to perceive what we choose about oneself in the world.”

Christophe, “representation” is “the self” in my post above – it is the experience itself in every subjective instant, ongoing. It applies itself in every moment as it chooses, to explore its own past thoughts and feelings and its potential future with growing confidence that it knows itself ALSO as an object in a world of objects – but always as a subject doing that objective exploration. Simple. Don’t obfuscate.

And if, unlike anyone here, you read my work (btw it doesn’t interest me personally if you do, it’s just an opportunity for you to make progress), you can see computerization is a misplaced analogy that needs dumping. It’s fine to have preconceived ideas about storage, retrieval, and encoding, as we use those basics for computers, and we indeed have inputs to a brain, outputs from it, and processing in between. But that system and processing need not be anything like the computer equivalent.

Page 112:

“I do not use computational “encoding”, “storage”, and “retrieval” of memory by neurons. It is an automatic flow from sites and back, resolving preferences by integration, with comprehensive neural cognition as no more than coordinated site stimulation. I value sites rather than searching for abstruse networking, and I make identification an automatic consequence of a present flow across a base of past flow comprised of replenished, strengthened neurons. In my model, memory is automatic from identification across a brain, as a past base drawing down a present flow across it to match delineations in flow, or as a present flow awakening a past base to inform the processing of delineations, but the matched interface is vital, and emphasis either way is unimportant here. I rate the personal experience of real feelings and thoughts as friction or “shearing” between fresh ion current inwards across refreshed hormonal sediment as each single neuron capacity synchronizing across vast networks from one continual flow, to live continually.”

Chris, I will toss you another bone, as your work shows promise. You tightened up nicely in the April 2014 extract compared to the February 2014 extract and paper. Pity you didn’t give a link to a free paper for April 2014, but anyway the tightening to emphasize the words “oneself in an environment” was an improvement.

As one gets older, Chris, one realizes the importance of words, even if they get a bit spewed out. Locate the key concepts in pithy encapsulations and stick with them until overturned, which may never happen in the case of “oneself in an environment”. Did you catch up with my book between February and April? I post regularly at blogs and never get replies, except those like Bakker in the previous thread. I sometimes wonder how many academics over the past two years of intermittent blogging have actually read it but would not admit to it in blogs. Hard to imagine my incontrovertible posts could always be erroneously dismissed as trash. Still, never an exchange in two years; quite amazing.

Peter, I’m overloading your site with my ideas, but it’s just space taken up by type, I suppose. I just posted this on a site I haven’t visited for a while, Neurologica, and it might benefit your readers as a potted version.

Let me make two initial points. First – forget encoding for the time being. There is no “conversion” of anything in my view. Likewise forget a Homunculus brain that perceives thoughts and feelings as objects, like a soul in control, favoured by Kant. Both those notions have UNNECESSARY interpositions! Let me tell you, one wrong interposition can completely skew ideas in all directions, let alone two. The relation of neural flow to the experience of awareness is direct – receptors are directly represented upon finalization of current across refreshed neurons in continual flow and refreshment. Value the receptor source, as a brain merely integrates and finalizes for the experience of vision, touch, and even thoughts by TOTAL integration. Value sites, and do not interpose some “encoding” as a step between stimulation and experience – the effectors have delineated inputs that split and bundle in networking to be directly represented in the experience upon finalization of networking.

The representation is of an experience of “oneself in the world”. Your next step is to reject the interposition of a Homunculus. The “self” of Kant, as a point of reference to the experience, has nothing behind it. There is no perceiver of thoughts and feelings, just thoughts and feelings automatically from integration of site flow after delay for integrated processing. We feel and we also think about what we feel, as they finalize together after delay. Feelings are diverse, and THEIR thoughts are integrated, for seamless accompaniment at all times across those two levels in ONE experience.

As a “self”, or point of reference, we are in fact bound to that primary experience, even to know that we exist, following Descartes’ “I think, THEREFORE (it follows) I am (I exist as an object)”. By having thoughts and feelings as a SUBJECTIVE experience at every instant of finalization of processing, we can explore any OBJECTIVE reality to that subjective experience. The experience is of “oneself in a world” broadly stated, as it is of the INTERFACE of our own moves and a world of stimulation. It is one experience of awareness, with MY own thoughts and feelings, including capture of light and sound from the world, which are MINE in the experience, and obviously including my own moves to the extent I am aware of them. In the experience it is ONE STATE, and we then use rationality and observation to differentiate ONESELF from the WORLD in that interface between both represented in the experience. One interface, one experience, but differentiable.

They are differentiable because the subjective experience is manifestly, to humans, ABOUT oneself as an object in a world of objects. This is obvious because we have wilful motor control to move about freely amongst other objects, and yet all interfaces will combine our own moves and that world of objects in the one experience. In fact, a key to “schizophrenia”, loosely defined, is differentiating oneself from the world, and confusion about attributing experiences to oneself or the world. On the other hand Sensory Attenuation is usually healthily employed to lessen feeling our own moves to feel a world more clearly, but possibly with a drawback if taken too far. These are resolved by the subject from an experience unifying both but capable of differentiation using motor control to move about independently. It is easy to know oneself as an independent object amongst objects in the world TO SOME EXTENT, but we are always fully immersed and dependent on world stimulus at every moment, and grow from ONE CELL in full dependency – so true differentiation is a DEEP MATTER. Is that last breath you took “yours” or “the world’s”?

You seem bright enough to grasp these important concepts to make progress. However, in this context, CAUSATION is also DEEP. Even when genes are isolated we cannot simply attribute them as the cause of an illness, because it is a gene-environment relation, with timings, triggers, and therapies, and not dominoes falling as cause-to-effect to explain the complexity of any illness, and particularly when it involves subjective experiences in psychology. It will always be correlation, for reasons explained in my free book, but ever improving, and correlated also to neurons. Match the correlations to report and consensus views from evident behaviour (including language) compiled by the physician. In psychology the difference between the subjective experience and an objective explanation of it will be, I assume, a rock solid barrier forever, but we can correlate with ever greater precision. Read my work to make progress, much more in there. Good luck

It’s a blind brain approach he’s taking on the negative side – one that’s actually very similar to Nick Humphrey’s. I was glad to see he referenced Dennett as holding a precursor of his view in the NYT article. In the book he makes it sound like no soul in history has realized that the extraordinary properties of consciousness are better explained in terms of cognitive incapacity as opposed to the possession of some emergent super-capacity.

Personally, I think he simply has yet to achieve a high resolution understanding of the debates animating the field, and as a result glosses over many otherwise deep problems. As you note, Peter, his ‘social’ angle simply seems to confuse ‘mind-reading’ issues with consciousness issues. The idea that we use the same apparatus to interpret ourselves as we do others is an old, deeply researched/studied and argued one by now. It would be nice to see him work that into his view. And any approach that reduces consciousness to representation (in his case, via the ‘attention schema’) is spectacularly uninformative so long as conscious content remains mysterious.

The empirical biggie is that I’ve yet to see him account for the way attention and consciousness decouple – I’d be keen to hear if anyone knows of one. Have I missed something? Either way, since his basic claim is the same as my own, I’m very keen to see how he’s able to tie this into his actual neuroscientific research. Give me a lab in Princeton! man…

8. Sci says:

9. john davey says:

Sci

Probably. It’s pretty dull. There is a confusion between “function” – which is a value judgment – and observable phenomena. Mental states are observable phenomena – they have properties, causal origins, characteristics .. what function they have involves some kind of (usually juvenile) evolutionary analysis – although of course a natural phenomenon need have no “function” whatsoever and need never justify itself to any analytical framework. That’s called “science”, and a lot of computationalists simply don’t have a conception of it. In the computationalist framework, if it doesn’t make sense, it doesn’t exist. They might as well be the good christian folk they seek to confound.

And that old, idiotic, simply unbelievably cretinous confusion of cause and effect when it comes to colour. If I hit myself on the head with a hammer, it hurts. The hammer is not the pain. It is the cause of the pain. I don’t say “there is no such thing as pain, there are only hammers”. Likewise the EM radiation is not the colour. It is the cause of colour. EM radiation has no colour. Only an idiot would think that it did.

10. Sci says:

@John Davey: Yeah, it’s him (maybe Princeton should vet its people more carefully in the future):

“It seems crazy to insist that the puppet’s consciousness is real. And yet, I argue that it is. The puppet’s consciousness is a real informational model that is constructed inside the neural machinery of the audience members and the performer. It is assigned a spatial location inside the puppet. The impulse to dismiss the puppet’s consciousness derives, I think, from the implicit belief that real consciousness is an utterly different quantity, perhaps a ghostly substance, or an emergent state, or an oscillation, or an experience, present inside of a person’s head. Given the contrast between a real if ethereal phenomenon inside of a person’s head and a mere computed model that somebody has attributed to a puppet, then obviously the puppet isn’t really conscious. But in the present theory, all consciousness is a “mere” computed model attributed to an object. That is what consciousness is made out of. One’s brain can attribute it to oneself or to something else. Consciousness is an attribution…”
~ Michael Graziano (2013), Consciousness and the Social Brain, p. 208

John Davey: The computer people are working backwards to explain human function, generally facilitated by neurons, given that we do function. Agreed, evolution of function is currently no more than a “just so” narrative using retrospect, but evolution needs fundamental reworking anyway.

You seem to grasp the fact that we are aware of the interface of oneself and world, and that “light” out there (if it exists at all) is changed by our receptors for OUR experience of color, which is all we can know unless we use our subjective experience to speculate about what is “out there”. We are a subject in awareness at all times, and that subject explores itself as a “purported” object.

Think about it. You have a rational subjective experience you use as a subjective tool, always locked into your experience (as per color, we only know OUR receptor construction from world stimulus). The world purportedly has whatever properties we experience, but that is not “the world”. It’s impossible to say what “the world” is, because we only know our NEURAL construct of it. We “purport” about objects “out there” at all times, including ourselves as objects.

We even “purport” that WE are objects, rather than being able to simply say that is the case from the experience. As per Descartes, we only know we exist because we have the experience first, then work out our existence using the experience. It is ALWAYS an experience of a world INTERFACE and it always combines self and world in ONE experience, requiring us to CONJECTURE about the differentiation between oneself as “separate” object from a world of objects. We are NEVER separated from the world as an object, or in the experience, we just move around between different objects and use CONJECTURE about the “levels” of our supposed separation from them.

Dunno if you are a scientist or philosopher. I’m a lawyer and I’m used to using logic to sort rubbish. The level of intellect and logic in science and philosophy, compared to real battle in legal cases, is an absolute disgrace in my view (although the lawyer is likely to fleece you, true to say).

Sci, I think it might be a different kind of consciousness referred to, in particular how it’s attributed. It’s being used like the attribution of value in terms of money. If you got a coin from some old country that doesn’t exist anymore, well, back when it existed that coin would be attributed with value. Now the coin is simply a worthless bit of metal or pottery. In the same way, I think he’s trying to say, as his hypothesis, that the puppet can be attributed consciousness or not, just as a coin can be ‘obviously valuable’ or ‘worthless’.

There was a cognitive science experiment where people were asked to hold a joystick inside a box with a glass top. The trick was that what they saw was just a very realistic-looking hand, and their real hand was hidden lower down. After a while the researchers would make the fake hand move out of sequence with the commands the subject was giving his hand on the joystick, which the subjects often interpreted as some kind of possession or something of what was ostensibly their hand.

“When we introspect and seem to find that ghostly thing — awareness, consciousness, the way green looks or pain feels — our cognitive machinery is accessing internal models and those models are providing information that is wrong…there is no subjective impression; there is only information in a data-processing device.”

So he’s saying awareness, consciousness, the “what it is like,” is an illusion. We only seem to have experiences. However, I don’t find experiences to be ghostly, but very concrete. If they are mere seemings, these seemings have exactly the properties of experiences, namely being qualitative, non-decomposable and ineluctable (for basic qualia), and private (unobservable). So, despite his disavowal of consciousness, one could say that Graziano is actually offering a theory of how and why we have experience.

And indeed, later in the article he says “In this theory, awareness is *not* an illusion” (my emphasis). Seemings, impressions, and therefore, I would say *experiences*, actually exist for the subject, and he says they result from modeling the process of attention (a by-product, perhaps?). That modeling serves a behavior control function, so we can explain the existence of consciousness from an evolutionary standpoint. Of course it would be nice to fill in the story of how modeling attention entails the existence of qualitative states for the system doing the modeling.

16. Sci says:

Seems like Graziano is confused about what he means when using the words “consciousness”, “awareness”, and “attention”?

Beyond that I agree with John Davey -> computationalists got some push from Dennett who at times seems more interested in advancing New Atheist politics than good philosophy.

But really the person who might deserve the most blame for computationalism being taken too seriously is actually Chalmers. Both Tallis(1) and EJ Lowe(2) have interesting arguments asserting Chalmers’ separation of the Hard & Easy Problems amounts to functionalism+qualia, thus missing the complex intertwining of experience, intentionality, and human behavior.

Now those two might in turn be wrong about materialism being unable to explain mind, but I think their criticisms do a good job of challenging computationalism as it’s usually presented.

Tom: “However, I don’t find experiences to be ghostly, but very concrete. If they are mere seemings, these seemings have exactly the properties of experiences, namely being qualitative, non-decomposable and ineluctable (for basic qualia), and private (unobservable). So, despite his disavowal of consciousness, one could say that Graziano is actually offering a theory of how and why we have experience.”

Following these debates I’m always amazed at the ease with which ghosts become foundations depending on whose intuitions you consult. In my case, I’ve gone from Hume to Heidegger to Wittgenstein and back to Hume again! At each stage I could have sworn that it was soup, that it was ontologically anchored/anchoring, that it was norm anchored/anchoring, then back to the confidence that Hume was right all along. The capacity to experience such wild swings in metacognitive assurance in a single individual, let alone between countless others, has got to mean that the unreliability of human metacognitive capacity plays *some* pivotal role in all this. If so, it becomes difficult to credit concreteness over ephemerality on the basis of reflection: this is just not a problem metacognition possesses the capacity to solve.

People often forget it’s a data point because it’s so enormous, but if *millennia* of failed attempts to decide between gas or solid when it comes to consciousness don’t count as evidence of metacognitive incompetence regarding this issue then what does?

19. Vicente says:

This happens because information and knowledge are used interchangeably. The idea of information in ICT, understood mainly as binary coding of data, is directly equated to conscious knowledge: a big mistake. IMO we are lacking good definitions of information for each domain of application, in particular for conscious experience. In the case of conscious knowledge, information is meaningless unless it is referred to a general frame or scenario. To me, consciousness would be more related to the interdependence of data (or information) than to integration (though that is probably also important). Does knowing that it will rain tomorrow make sense ‘per se’? Not to me, unless it is related to many other pieces of information (place, plans, etc. etc.); in a computer it would be just 1 or 0. I think this is a psychological law: the more factors about anything that you can interrelate, the more aware you are of it, irrespective of the amount of information involved (or integrated). These relations are not really information themselves; how does the mind link all the elements? I think it is a creative process. This is, to me, one of the features of consciousness: that it can produce new information not included in the initial data. I would say that it is “relation” rather than “integration” that is the driving mechanism required for a meaningful conscious experience (otherwise it would just be perceptual). Similarly, intentionality (aboutness) relies on a web of relations. Isolated “aboutness” makes no sense; something is something with respect to something else.

This way of thinking about information has been helpful to me:
……………………………………………

Def. Information is any property of any object, event, or situation that can be detected, classified, measured, or described in any way.

1. The existence of information implies the existence of a complex physical system consisting of (a) a source with some kind of structured content (S), (b) a mechanism that systematically encodes the structure of S, (c) a channel that selectively directs the encoding of S, (d) a mechanism that selectively receives and decodes the encoding of S.

2. A distinction should be drawn between *latent* information and what might be called *kinetic* information. All structured physical objects contain latent information. This is as true for undetected distant galaxies as it is for the magnetic pattern on a hard disc or the ink marks on the page of a book. Without an effective encoder, channel, and decoder, latent information never becomes kinetic information. Kinetic information is important because it enables systematic responses with respect to the source (S) or to what S signifies. None of this implies consciousness.

3. A distinction should be drawn between kinetic information and *manifest* information. Manifest information is what is contained in our phenomenal experience. It is conceivable that some state-of-the-art photo –> digital translation system could output equivalent kinetic information on reading English and Russian versions of *War and Peace*, but a Russian printing of the book provides *me* no manifest information about the story, while an English version of the book allows me to experience the story. The explanatory gap is in the causal connection between kinetic information and manifest information.
……………………………………………
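
The four-part scheme in point 1 above (source, encoder, channel, decoder) can be sketched in code. This is only an illustrative toy, assuming a lossless text pipeline; none of the names come from the papers cited:

```python
# Minimal sketch of the latent -> kinetic distinction.
# All names and choices here are illustrative assumptions.

def encode(source: str) -> bytes:
    """Encoder: systematically captures the structure of the source S."""
    return source.encode("utf-8")

def channel(encoding: bytes) -> bytes:
    """Channel: selectively directs the encoding (here, a lossless pass-through)."""
    return encoding

def decode(encoding: bytes) -> str:
    """Decoder: selectively receives and decodes the encoding of S."""
    return encoding.decode("utf-8")

source_S = "one if by land, two if by sea"   # latent information: structure in S

# Latent information only becomes kinetic when the whole
# encoder -> channel -> decoder pathway actually runs:
kinetic = decode(channel(encode(source_S)))

# Kinetic information enables a systematic response with respect to S;
# none of this implies consciousness.
assert kinetic == source_S
```

Remove the encoder, channel, or decoder and the latent information in `source_S` never becomes kinetic, which is the point of the distinction.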

21. Sci says:

@Arnold: I don’t see how we can define information as any property available to our detection/description. Things just have properties, and information is our description of those properties via our biological & cultural interfaces.

As such, it seems to me there is a distinct separation between information and properties. Conflating the two smells like dualism to me.

23. Sci says:

@Arnold: Apologies if that last post came off as harsh. Like the materialist Lycan(1) I’m willing to accept dualism as a possible solution to the mind-body problem. Yet I think if/when we accidentally raise the spectre of dualism, it shows we’ve possibly taken a wrong turn.

Now all that said, it’s not clear to me whether information as you define it is itself an ontological primitive or simply a signifier for our knowledge of varied properties in the physical world. For example, when you say the shift from latent to kinetic information requires no consciousness it suggests information is something out in the world. Yet you also say “Kinetic information is important because it enables systematic responses with respect to the source (S) or to what S signifies.”

Perhaps I’m missing something, but to talk of signification suggests semiotics, which AFAICT requires the intrinsic intentionality of a mind as opposed to the derived intentionality of a book or thermometer.

To examine this issue in more depth, consider John S. Wilkins’ critique(2) of how the term information is used in scientific conversation:

“In this age of computers and internets, we have taken to mistaking the thing described for the thing itself, and treat information as a property out there in the world, not a representation in our heads and language…

…Aristotle, in contrast, explained the physical things in the world by supposing that they had matter, which filled space and gave weight (made from several admixtures of the four elements, two light and two heavy) which the scholastics called substance (substantia, meaning that which stands under), and form, the structure and mathematical properties of a thing. This matter/form dualism is called hylomorphism, from the two Greek words hule, meaning stuff (it originally meant “wood”) and morphe, or form. Hylomorphism was intended to be an alternative view to atomistic materialism, which had become a widely held (and generally atheistical) view in his day. Epicurus, his contemporary, had an entire philosophical school based upon the older Democritan atomism [see this excellent review just revised in the Stanford Encyclopedia].

Now hylomorphism was roundly demolished as a scientific hypothesis when Daltonian elements were named and investigated in the nineteenth century. By 1900, terms like “substance” (for matter that is propertyless apart from mass and extension in space) and “form” had taken on a largely philosophical sense that differed extensively from Aristotle’s own views. Instead, an increasingly elaborate atomism had won the day, far beyond anything Epicurus or Democritus had posited. The properties of things, including their mass and filling of space, were the result of fields in space-time…”

Now I’m not convinced he’s right that hylomorphism was “demolished” (see Cartwright’s No God, No Laws) but I do think he’s correct that many people have confused the abstraction involved with mathematical modeling for substances in themselves. Or, as I suspect here, that the abstraction is attributed to things in the world rather than the mind.

Where I think this ends up being a problem is that it becomes unclear what exactly “kinetic” information is. If it’s our thoughts about things, then this runs into the mystery of intentionality, namely how can one piece of matter be *about* any other? If this intentionality is not cashed out in physical terms, then it seems to me the concept of kinetic information is assuming a mind within the configuration of matter.

Sci: “For example, when you say the shift from latent to kinetic information requires no consciousness it suggests information is something out in the world. Yet you also say ‘Kinetic information is important because it enables systematic responses with respect to the source (S) or to what S signifies.'”

Yes, the universe is full of latent information. As for kinetic information, imagine a telescope that is locked on to a particular star, tracking it across the night sky and recording its spectra on the basis of the kinetic information in its opto-electro-mechanical system, without any human participation. This would be a systematic response to the star source (S) without signification/manifest information. If a human astronomer were to observe the event, then the kinetic information signifies something to the astronomer as manifest information.
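
The telescope example can be sketched as a purely mechanical feedback loop. The star positions and the tracking rule below are invented for illustration; the point is only that the system responds systematically to its source without anything being signified:

```python
# Hedged sketch of the telescope example: a systematic response to a
# source (S) driven by kinetic information, with no manifest information.
# The numbers and the tracking rule are illustrative assumptions.

def track(star_azimuth: int, telescope_azimuth: int) -> int:
    """Re-aim the telescope at the star: kinetic information at work."""
    error = star_azimuth - telescope_azimuth   # pointing error, arbitrary units
    return telescope_azimuth + error           # lock back on to the source S

recorded = []
telescope_az = 100
for star_az in [100, 101, 102]:    # the star drifts across the night sky
    telescope_az = track(star_az, telescope_az)
    recorded.append(telescope_az)  # "recording its spectra": raw data, no more

# `recorded` is kinetic information; it signifies nothing by itself.
# Only when an astronomer reads it does it become manifest information.
print(recorded)  # [100, 101, 102]
```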

Sci: “Where I think this ends up being a problem is that it becomes unclear what exactly “kinetic” information is. If it’s our thoughts about things, then this runs into the mystery of intentionality, namely how can one piece of matter be *about* any other? If this intentionality is not cashed out in physical terms, then it seems to me the concept of kinetic information is assuming a mind within the configuration of matter.”

Kinetic information does not imply intentionality, but kinetic information is necessary for intentionality to exist in the context of manifest information. And intentionality/aboutness is “cashed out in [bio]physical terms” in the *retinoid model* of consciousness. For details, see “Space, self, and the theater of consciousness”, “Where Am I? Redux”, and “A foundation for the scientific study of consciousness”, here:

25. John Davey says:

Sci

“..seems to me there is a distinct separation between information and properties. Conflating the two smells like dualism to me.”

I have come to hate the word “information” with a passion… there is no intrinsic information, anywhere, and ironically, for the computationalists who seek to explain away consciousness with a world of “information”, it strikes me that without consciousness and intentionality there can be no “information”. Information can only be a servant of intentional agents, who (thus far, in this universe) all seem to have consciousness. It is not a master, any more than a telephone decides what the content of the call will be, or a television cable decides what the content of the show will be. It’s a medium and no more than that.

“Information” seems to me to be just what the moniker implies: a technology designed to inform, a means of communication. It has no implicit semantics, but provides the syntactical mechanisms that enable one semantics-comprehending being to communicate with another, given that they share the same common objects – i.e. language, character code standards (ASCII, Unicode, etc.), whatever.

Whatever it is, as a concept (undeniably observer-relative) it clearly neither necessitates nor even implies a link to the phenomena of the mental, and as such is a complete waste of time.

29. Sci says:

@Arnold – thanks, will take a look at your work.

@John Davey – Yeah, I think computationalists have taken us on a wrong turn that we won’t reverse course from for at least another decade or two. It would be great to send Lanier’s “You Can’t Argue With a Zombie”(1) back in time, before people started getting so committed to uploading their brains into the cloud that they were willing to waste everyone’s time for the sake of their religious belief in the Singularity.

“Imagine a future in which your mind never dies. When your body begins to fail, a machine scans your brain in enough detail to capture its unique wiring. A computer system uses that data to simulate your brain. It won’t need to replicate every last detail. Like the phonograph, it will strip away the irrelevant physical structures, leaving only the essence of the patterns. And then there is a second you, with your memories, your emotions, your way of thinking and making decisions, translated onto computer hardware as easily as we copy a text file these days.”

I’d like to imagine a Soylent Green-inspired movie, where everyone is told they will live forever through uploading, and the protagonist receives what seems a fatal injury, so gets taken into hospital for upload. But then… wakes up, still in their mortal body.

And yet ‘he’ is in the system. But he isn’t. So how does that work?

And when the system becomes aware of the discrepancy, cue action movie sequences and the moral protagonist surviving by skin of his mortal teeth, as they try to cover up the whole thing.

With some kind of Soylent Green ending, twisted the other way: ‘The digital afterlife – it’s not made of people! It’s not made of people!’

33. Charles Wolverton says:

Information in the Shannon sense (as used by Tononi and Koch) is essentially a measure of the uncertainty in the outcome of a random event. A familiar example (at least to Americans with any exposure to our history) is “one if by land, two if by sea”. The random event was the mode of transportation used by the invading British, the channel was optical, and the capacity of the channel was one bit per use, i.e., each use conveyed one bit of “information” (assuming unobstructed line-of-sight, i.e., an error-free channel). [C = log2(N), where N is the number of possible outcomes; for a binary channel N = 2.]
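
The capacity formula for an error-free channel, C = log2(N), is easy to check directly. A short sketch (the lantern framing follows the comment above; nothing else is assumed):

```python
from math import log2

def capacity_bits(n_symbols: int) -> float:
    """Capacity of an error-free channel with N distinguishable symbols."""
    return log2(n_symbols)

# "One if by land, two if by sea": two distinguishable signals, so N = 2.
print(capacity_bits(2))   # 1.0 bit per use
# Four distinguishable lantern patterns would double the capacity:
print(capacity_bits(4))   # 2.0 bits per use
```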

As John Davey notes, use of the channel requires “agreement” between users as to the symbols to be used, i.e., the syntax – in the case of the colonists, hanging up either one or two lamps. I put agreement in quotes because one can argue that the agreement can be implicit. E.g., an animal taught to distinguish between two other kinds of animal – one a potential threat, the other a potential meal – could be said to obtain one bit of information upon distinguishing which type is approaching. That presumably is what Arnold has in mind by “latent” information. The astronomer has been trained to resolve uncertainties in observed celestial phenomena, thereby making that latent information “manifest”.

As John also observes, a separate issue is semantics – what the information “means”, or as I prefer, what one is to do having resolved the uncertainty. That requires a separate agreement between the parties. Again, it can be implicit – the observing animal can be taught to either run away from or attack the approaching animal depending on the information contained in the observation (in this simplified scenario, again the output from a binary channel with capacity of one bit per use).
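
The syntax/semantics split in the two paragraphs above can be made concrete: one agreement fixes the symbols, a separate agreement maps each resolved symbol to an action. The symbol and action names below are invented for illustration:

```python
# Sketch of "meaning = what one is to do having resolved the uncertainty".
# Symbols and actions are illustrative assumptions, not from the comment.

# Syntax: the agreed symbols of a binary channel (one bit per use).
SYMBOLS = {"threat", "prey"}

# Semantics: a *separate* agreement mapping each symbol to an action.
MEANING = {
    "threat": "run away",
    "prey": "attack",
}

def respond(observed_symbol: str) -> str:
    """Resolve the uncertainty, then act on it."""
    assert observed_symbol in SYMBOLS  # symbol must belong to the agreed syntax
    return MEANING[observed_symbol]    # its "meaning" is the action precipitated

print(respond("threat"))  # run away
print(respond("prey"))    # attack
```

Changing the MEANING table while keeping SYMBOLS fixed changes the semantics without touching the syntax, which is why the two agreements are separate.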

And FWIW, how this relates to mental phenomena escapes me as well even though I can follow the math.

35. Charles Wolverton says:

Sci –

As usual in these exchanges, I’m not sure how various terms are being used. Any implication that I was addressing “representations” was unintentional (I’m a Rorty disciple). I was only making the very simple – and of course unrealistic – assumption that the animal has learned to make a binary distinction between two patterns of neural activity – one due to a particular approaching threat, another to a particular retreating prey. Hence, if I understand your use of “intentionality” (aboutness?), I don’t see it as applicable. I’m not assuming that the animal has learned anything “about” either the approaching or retreating animal, just to distinguish the corresponding neural events and to execute an appropriate reaction.

As I said, I’m defining the “meaning” of each channel symbol (neural pattern) to be the action precipitated by its occurrence, which may not be the sense in which you are using that word.

And finally, I didn’t intend to address evolution at all.

In short, I think you’re attributing unwarranted sophistication to my comment. It was really a quite trivial observation (at least for someone with a background in info theory) intended to help with some confusions I sensed in the use of “information” in previous comments.