Thursday, May 06, 2010

Consciousness (13): The Interpreter versus the Scribe

Number thirteen in my series of posts on consciousness. Table of Contents is here.

---------------------------

While elaborating on the parallels between perception and language interpretation, we have unpacked many features of visual perception that should hold up even if we end up finding the view of perception-as-interpretation wanting. In this post I’ll briefly integrate the data and theory from the past seven posts into a tidier and (hopefully) more coherent story.

As we discussed in some detail in post ten, the contents of experience have properties that are, on the surface, quite different from the properties of the underlying neural machinery doing the experiencing. I can see an iridescent jewel two feet in front of me (that’s the content), but the vehicle doing the experiencing is neither iridescent nor two feet in front of me.

We can be intimately familiar with the contents of our experience while remaining in complete ignorance of facts about nervous systems. I hope I don’t offend my fellow neuroscientists when I claim that our species’ great artists, playwrights, musicians, and novelists have revealed more about the contents of experience than any neuroscientist. Yet most of these artists worked without knowing the most basic facts of neuroscience. The vehicles of experience are effectively invisible to us, while the contents of experience are as familiar as breathing. Anyone who has savored an authentic lobster roll from a rundown shack on the coast of Maine knows what it is like to revel in the contents of experience (and those who have not have yet to fully live).

In sum, the contents of our experience seem to be a neurally-constructed portrait of what is happening beyond the brain. The brain faces some rather severe obstacles if its goal is to make this portrait accurate. For one, a great deal of information is lost in the projection from the scene to the retina (a projection we discussed in some detail in post nine).

Consider the case in which a projection onto the retina is square-shaped. What can we say about the object that generated that projection? Assuming there are no distance cues present, the same square shape on the retina could be produced by a tiny square that is extremely close to the eye, a medium-sized square a moderate distance away, or a colossal square that is extremely far away. It could even be generated by non-square shapes transmitted through a distorting funhouse-type medium.
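To make the size-distance ambiguity concrete, here is a quick numerical sketch (the particular sizes and distances are invented for illustration): any object with the same size-to-distance ratio subtends the same visual angle, so all three squares below cast identical projections on the retina.

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle (in degrees) subtended by an object of a given
    size at a given distance, both in the same units."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# Three wildly different squares, identical retinal projection:
tiny   = visual_angle_deg(0.01, 0.1)      # 1 cm square, 10 cm from the eye
medium = visual_angle_deg(1.0, 10.0)      # 1 m square, 10 m away
huge   = visual_angle_deg(100.0, 1000.0)  # 100 m square, 1 km away
# All three subtend the same angle (about 5.7 degrees), so the retinal
# image alone cannot tell them apart.
```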

Purves and Lotto state the point nicely:

[T]he retinal output in response to a given stimulus can signify any of an infinite combination of illuminants, reflectances, transmittances, sizes, distances, and orientations in the real world. It is thus impossible to derive by a process of logic the combination of these factors that actually generated the stimulus[.]

In other words, given only retinal movies as data, the brain cannot determine with perfect accuracy the scene in the world that generated said movies. Given the often striking ambiguity of the source of a retinal projection, it is remarkable that our visual system usually locks in on a single perceptual response to a given stimulus. Even during bistable perception we typically experience one object at a time, not a superposition of two objects.

How does the brain settle on a unique percept when provided with an inherently ambiguous retinal projection? It seems the brain uses context (post eleven) as well as background assumptions and knowledge (post twelve) to help narrow down the range of reasonable interpretations. Bistable percepts merely serve to highlight those rare instances when these contributions from the brain are not sufficient to settle on a single interpretation for an extended period of time.
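One common way to formalize this narrowing-down is Bayesian: treat candidate scenes as hypotheses, and let context and background knowledge supply the priors. The sketch below is a toy illustration only (the hypothesis names and numbers are invented), not a model from the post itself.

```python
def posterior(likelihoods, priors):
    """Normalized posterior over hypotheses: P(h | data) is proportional
    to P(data | h) * P(h)."""
    unnormalized = {h: likelihoods[h] * priors[h] for h in likelihoods}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Both scenes predict the square-shaped retinal image equally well...
likelihoods = {"tiny square, very close": 1.0, "huge square, very far": 1.0}
# ...but background knowledge says one scene is far more common.
priors = {"tiny square, very close": 0.9, "huge square, very far": 0.1}

belief = posterior(likelihoods, priors)
# The retinal data alone cannot decide; the prior breaks the tie.
```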

In general, while we know that the retinal movies strongly influence the brain’s construction of experience, our experience is obviously not a mere report or transcription of what is happening in the retinae. If it were, ambiguous stimuli wouldn’t spontaneously reorganize in the drastic ways we observe in the Spinning Girl and Rotating Necker Cube (post seven), the angles in Purves’ Plumbing would look the same, the tabletop dimensions in Turning the Tables would look identical (post twelve), the yellow and blue squares in Purves’ Cubes would look grey, Shepard's subterranean monsters would look identical in size (post eleven), and so on.

Hopefully the previous seven posts have made it clear why psychologists often say that the brain constructs interpretations of stimuli in a context-sensitive way, based on background knowledge and assumptions, in the light of the sometimes intense ambiguity of the actual source of the stimulus. If we were forced to choose between the false dichotomy of saying that experience is an interpretation of what is happening on the retinae versus a mere transcription of it, I think the choice is clear.

Richard Gregory (1966) summed up the view quite well when he said that “Perception is not determined simply by the stimulus patterns; rather it is a dynamic searching for the best interpretation of the available data.” Our visual experience is clearly the result of neuronal events downstream from the stimulus, a construction of an experience whose contents mostly include worldly events beyond the eyes. It is such worldly events that we must engage with, after all, and such engagement with the world determines whether we eat, reproduce, flee, or die.

Next up: In the next couple of posts we’ll pursue the idea that perception is interpretation down more specific paths, looking at a prominent view that the mechanism of interpretation is a kind of unconscious inference. Finally, we’ll head into the brain, looking at the neuronal basis of these internal “portraits” of the world.

25 comments:

"Consider the case in which a projection onto the retina is square-shaped. What can we say about the object that generated that projection? Assuming there are no distance cues present, the same square shape on the retina could be produced by a tiny square that is extremely close to the eye, a medium-sized square a moderate distance away, or a colossal square that is extremely far away....In other words, given only retinal movies as data, the brain cannot determine with perfect accuracy the scene in the world that generated said movies. Given the often striking ambiguity of the source of a retinal projection, it is remarkable that our visual system usually locks in on a single perceptual response to a given stimulus."

Doesn't this assume that the perceiver is perfectly still? If we assume that the animal is capable of moving around the object, the invariants in the object would be nomothetically preserved in the retinal transformations as the animal moves towards or around it. If the object were a cube, and the animal was capable of moving around the cube, there would be no ambiguity about the shape of the object because the invariant structure (its cubeness) would be preserved in the possible retinal transformations given through motion around it. The fact that locomotion enables us to pick up invariant patterns of transformation in the optic array which reflect information about the cube should tell us that the brain does not need to "compute" or "assume" that the cube is 3D. This information is already available in the stimulus provided we don't assume that the animal is perfectly passive. In the same way, distorted-room illusions are defeated as soon as we move about and look at the room from multiple angles. James J. Gibson is really nice on this point.

Gary: good point that the ambiguity can sometimes be resolved by multiple factors. One example you point out is movement of the animal or its eyes. I had included a long caveat footnote about this, but it seemed a bit too much.

Regardless, it is clear the brain makes its own contributions to the experience, that it isn't merely a matter of picking up information that is there in the retinal images. There has to be some processing (or whatever you want to call it) past the retina. Otherwise, the rotating Necker wouldn't change, spinning girl would always be the same, etc (see the dozens of examples in previous posts: for instance, the context-dependent effects of Purves' cubes cannot be resolved by movement of the animal).

I have experienced bistability in real life. A car had a rotating skeleton key on top (some promotional deal for a locksmith) and the key kept changing its apparent direction of rotation. I asked the crowd of people with me which direction the key was rotating, and we got about 50/50! And we were moving, struggling, arguing to figure out the correct direction (it wasn't actually changing), and we could not.

Similarly, if I show you a wire frame of a cube it will be bistable, even if you move (this is one of the points of the rotating Necker cube, which reproduces the observer moving around the cube).

While the ambiguity can be resolved, what I find incredible is that the brain typically settles on one interpretation when it is radically ambiguous. We see one direction of rotation of the Necker Cube, not a superposition of the two possibilities. Similarly, the key on the top of the car settled into a single direction of rotation.

Sure, while it would be possible to resolve it by getting closer to the car, touching the key, etc, that wouldn't obviate the fact that the brain settled on one percept given the ineliminable ambiguity in the actual context (and sometimes the context is keeping the body quite still, as when a cat is watching a mouse).

I do have some familiarity with Gibson's approach. While I think it is interesting, my impression is that he underestimates the importance of what is happening inside the brain beyond the retinal movies (and the same goes for most of the other related neo-behaviorist theories of perception that look at perception as a kind of skilled interaction or coping with the world: they are horrible at dealing with things like hallucinations, dreams, the experiences of people who are paralyzed, etc.).

Now, whether we call the contributions from the brain to perception "assumptions" or "background knowledge" will turn out to be not all that important to me. What those phrases are meant to capture is merely that the brain constructs experiences that include more than is technically included in the stimulus itself. The Necker Cube (and the dozen-or-so other examples of ambiguous stimuli I provided in posts six and seven) doesn't include enough information to disambiguate the source, but our brain settles on a single percept anyway (consider also the barber pole illusion, which I didn't include in those posts for space considerations).

I'll get more literal about this once I start to dig under the hood and look at representational systems in brains. Right now I am speaking like a psychologist would speak, which is with a heavy use of analogy and such. My goal is to construct a kind of dialectic between a psychological way of thinking, and a more neuroscientific way of thinking.

While I certainly agree that the brain contributes a lot to experience, all the examples you use (necker cubes, spinning girls, etc.) seem irrelevant to the kinds of perceptual experience our evolutionary ancestors had for millions of years. All the examples seem "academically neat" or "lab neat". You say ecological approaches fail to adequately deal with hallucinations, dreams, etc., but aren't we putting the explanatory cart before the horse? Shouldn't we be explaining average everyday veridical experience before we explain rare cases of perceptual ambiguity and "construction"? We do not construct the ground, we do not construct the sky, so why should we explain experience of those things in terms of the special cases of hallucination and dreaming? The reliable pickup of information seems like a different explanandum than how those processes can go wrong. You don't explain how a car works in terms of how it breaks down, so why would you explain perception in terms of how it breaks down?

I am perfectly willing to accept that the brain processes information. In fact, I think the brain processes a HUGE amount of information in order to "cope" or "deal". This of course requires a very complicated story involving, yes, representations and other classic theoretical entities. However, representations which "stand in for" the external world and representations which topographically map the world so as to guide behavior are totally different ontologically. Indicator representations are problematic explanatorily; isomorphic representations not so tricky.

If we are going to tell a physical story about perception, Gibsonian theory is helpful because it tells us that we shouldn't expect the brain to be constructing the spatial layout of the ground from 2D inputs. This is computationally unnecessary according to the work being done in ecological optics.

But don't get me wrong, I am a "constructivist" in many ways, just not when it comes to the basic perceptual experience we have of the ground or the Umwelt broadly speaking. I think the brain makes many contributions in social situations, in how our self-concepts operate, in how folk psychological attitudes work, and in how narrative and reasoning processes operate. You could say then that I am a constructivist when it comes to higher-order processes but not to lower-order ones. The evolutionary need for the former is evident, but I don't see one for the latter.

Gary: I would say that ambiguous stimuli aren't of merely academic interest, for a couple of reasons. One, they don't just happen in ivory towers. The rotating key example I gave above was not from an experiment or lab; it happened while I was out walking around with a big crowd of people who all saw the illusion and couldn't figure out what was going on (I, on the other hand, had just written a post about ambiguous stimuli; otherwise they wouldn't even have noticed something was amiss). I'm sure the owner of the car had no clue he was generating bistable percepts.

While it is admittedly sometimes possible to resolve ambiguity by moving about, that doesn't address the initial ambiguity. It isn't as if the brain waits to settle on an experience until it has done all the proper movements. The key rotation was bistable.

Also there were many examples I mentioned that weren't cases of bistable perception (those were in posts 6/7 and we are up to 13 now!). For instance, Purves' Cubes, a color illusion, will not go away due to bodily movement. Color is notoriously ambiguous.

More to the point, there is no evidence that the brain switches to a different processing mode when there are illusions present (how would it even know to do so?). Hence, it seems perfectly fine to use visual illusions to pick apart the normal mechanisms of visual perception (normal mechanisms in abnormal circumstances: that's pretty much the hallmark of exploration in biology).

This relates to the following: "You don't explain how a car works in terms of how it breaks down, so why would you explain perception in terms of how it breaks down?"

In biology it is an experimental staple to probe a normal process by studying unusual and damaged cases. Neuropsychology is premised on looking at broken cars to help us understand how normal cars work (blindsight, HM, etc). When we want to know the contribution of some gene to the development of an animal, one of the things we do is knock the gene out of the animal's genome and determine what the effects of this perturbation are.

All of the above make me think that illusions reveal interesting features of normal perception, features that we usually don't notice precisely because in normal life they don't come to the fore. I don't take this as evidence that the features (e.g., context) aren't important, but that they usually work pretty well.

Moving past bistable perception, it is an empirical question whether consciousness during dreams/hallucinations is different than when we are awake, a point I made back in the third post in this series. I will consider such things later on. One person's falsification is another's anomaly. The fact that skillful interactions with the world are obviously not necessary for experiences is enough to kill some of the more naive theories of experience. It comes off as special pleading to ask for exceptions for hallucinations and dreams.

These are not conceptual issues as much as empirical issues, so experiments should settle them more conclusively if dreaming/hallucination isn't enough to convince the die-hard Merleau-Pontians out there that they are wrong.

As for the claim that the brain doesn't have to do as much work as people often assume, that is an interesting point, something I know the embodiment folks have been pushing. I look at it as an empirical question how much detail and accuracy is in a given neural representational system. It is something we can, and do, measure directly, so no need to make assumptions about it. Sure, maybe the brain doesn't have to build a model of a 3-d world to navigate a body out of the room (a la Rodney Brooks), but that doesn't mean it doesn't!

At any rate, we have fundamental disagreements here. I think this stuff is important for what you might call "lower level" experiences. E.g., seeing a sunset.

But at least it is good to have someone commenting who isn't a dualist :)

But yeah, it seems that we are both operating with a different set of assumptions about (1) what constitutes the perceptual stimulus and (2) how the brain reacts to that stimulus. You assume that the stimulus is quite impoverished (ambiguous) and the brain needs to bring a great deal of context to the stimulus. I think that the stimulus is incredibly rich and less ambiguous, and the brain needs to bring less context (particularly regarding invariant features specified in the ambient array like spatial layout and texture).

What I am not sure about in regards to your position is the extent to which you believe phenomenal experience itself is a construction or simulation. This of course runs into homunculus problems people like Metzinger are trying to deal with. The question becomes: is everything we experience a figment of our imagination generated from spots of sensation differing in brightness? This seems unbelievable to me; it goes against the "efficient" and "scrappy" engineering style of Mother Nature.

Moreover, Fodor and friends seemed to believe that even if representationalism was wildly miraculous, it was the "only game in town". Surely, dynamic systems theory now obviates this defense. If we can explain the same behaviors (navigation and behavioral reactivity) in nonrepresentational terms, and if DST is more plausibly realized in the brain than explicit token symbols somehow "standing in for" the Earth (and the body), then surely it becomes a matter of competing paradigms.

However, you mentioned something about Merleau-Pontian theory being most applicable to lower-level processes. I agree. This is why I like Andy Clark's approach. He recognizes that we are going to have to tell a story using both nonrepresentational and representational theory. However, I think the latest buzzword is "virtual space". To what extent can we explain the virtual workspace of executive function in classic computational terms? Perhaps a new vocabulary (possibly Deleuzian?) is necessary to explain concepts like the Global Workspace or visual-spatial sketchpads. I don't think functionalism is adequate to explain the workspace and neither is behaviorism. It seems then that we need a new ontological vocabulary. Personally, I think Julian Jaynes offers such a vocabulary. He has a radically behaviorist view of nonverbal animal behavior but when it comes to humans, he uses the concept of an "analog functional workspace" (grounded in the nonverbal behavior) to "scaffold" higher-level processes such as narrative consciousness, introspection, and executive function. You should check out his book if you have the time.

I don't actually think that all stimuli are always ambiguous. I just think that stimuli that are ambiguous are useful for revealing things about how normal perception works. It is an empirical question how much ambiguity there is in the retinal array, and likely it depends strongly on the type of stimulus (e.g., color versus shape versus distance versus size). (Note also that when I said "context" I meant the context of the stimulus, as discussed quite a bit a couple of posts ago.)

I spent a lot of time with the DST stuff in grad school, which is one reason I used to be sympathetic to an antirepresentational embodied approach. I briefly drank the van Gelder Kool-Aid.

Brains are obviously dynamical systems (in particular, Hodgkin and Huxley models of neurons are nonlinear dynamical systems) but that doesn't mean they don't trade in representations too! (For that matter, a Turing Machine can be modelled as a dynamical system, so really that is an orthogonal issue). Note to other readers: I haven't actually said anything about representations in my posts, so this is jumping the gun a bit.

Regardless of whether you call the obviously substantive neuronal contribution to experience a "representation", there are obvious problems with radical embodied approaches. To wit: if someone accepts that dreams are a type of experience, that paralyzed people can have experiences (which we know is true based on horrible anesthesia mishaps), then any view that movement or skilled interaction with the world is necessary for experience is immediately falsified. This is a different topic from what positive story I would give, whether it be representational or not. Ask anyone who has been paralyzed but not anesthetized during an operation if this is of merely academic concern :)

Note I don't think representations (in the way I will define them that we use in practice in neuroscience) are sufficient for consciousness, just necessary. There is no serious homunculus problem there, because we know how one part of the brain can "read out" the activity in another part of the brain. Again, these are not particularly thorny conceptual issues, but mostly handled empirically at this point.

For people reading, "the homunculus problem" is the supposed problem that you need someone to read out and use those representations, and this amounts to a devastating objection in some philosophers' minds. My PhD thesis in neuroscience basically was on this topic in an invertebrate model system: see Thomson and Kristan 'Encoding and Decoding Touch Information in the Leech CNS.' (Note my neuro thesis was not at all motivated by this homonculus "problem", but by a substantive debate within neuroscience about how sensory systems represent stimuli).

But this is to get ahead of ourselves. So far I've stuck pretty close to the psychological data, drawing out a hypothesis that seems reasonable. For those interested in representations and such, as discussed in practice in neuroscience, I wrote about this topic here. I haven't gotten there yet in this series of posts.

I think a few neuroscience courses might be the antidote to all this odd neobehaviorism that is spreading around philosophy departments. :)

Also, Jaynes was one of the first theorists to integrate Gazzaniga's research on split-brain patients into a comprehensive neurological model. His views are also being corroborated by work on inner speech; see:

Kuijsten, M. (2009). New Evidence for Jaynes's Neurological Model: A Research Update. The Jaynesian: Newsletter of the Julian Jaynes Society, 3(1).

Jaynes was light years ahead of his time, philosophically and empirically. I wouldn't laugh him off so easily. If you are interested in a defense of Jaynes' views on consciousness, I have a paper available to read on academia.edu (which also provides an extensive defense of Noe and Gibson-style approaches to perception). Check it out:

"To wit: if someone accepts that dreams are a type of experience, that paralyzed people can have experiences (which we know is true based on horrible anesthesia mishaps), then any view that movement or skilled interaction with the world is necessary for experience is immediately falsified."

I don't know any embodiment theorist who makes the claim that crude motor movement is necessary for experience. That's patently absurd. Besides, the paralyzed person can still move their eyes and head, which constitutes a kind of "sensorimotor skill". Even saccadic eye motion counts as "sensorimotor skill". Surely you wouldn't deny that saccadic motion is crucial for getting vision to work right. Moreover, I don't know any embodiment theorist who makes strong claims about dreaming, as if dreaming actually falsifies the thesis that the mind is fundamentally embodied. Finally, even the patient who is completely paralyzed is still "skillfully interacting" with the world provided we understand homeostatic regulation and breathing as a primitive kind of interaction with the world.

As for representations in the brain, many critics of Noe seem to have missed when he said that the rejection of internal world modeling is “compatible with there being all sorts of representations in the brain, and indeed, with the presence of such representations being necessary for perception" (2004, p. 22).

"For people reading, "the homunculus problem" is the supposed problem that you need someone to read out and use those representations, and this amounts to a devastating objection in some philosophers' minds."

I think this problem constitutes more of a challenge than you are willing to concede. Even if we don't require a literal "reader" of the re-presented information, how do we explain the functionality of such representations? William Ramsey calls this the "job description challenge". Basically, if you can't explain how a neuron "stands in for" something else without resorting to a causal story, then there are problems. For if you do just resort to the causal function of the neuron to "do the job" of representing, then why can't we say that the system is simply an extremely complex causal system rather than an "information bearing" system? To my mind, this challenge has not been met by ANY representational theorist. To brush it off as a philosophical concern is to ignore the deep ontological problems inherent in the idea of one thing "standing in for" or "indicating" something else, as opposed to simply acting as a causal mediator of some sort.

Gary: In cases of anesthesia paralysis the patients aren't moving at all; obviously, if they were making saccades the anesthesiologists would stop the surgery immediately! I'm not attacking any specific philosopher, but an attitude I've encountered in philosophy departments that is surprisingly naively behavioristic in a new dress.

I'm really not trying to engage philosophers right now, but just explaining how your garden-variety biologist is attacking the problem (see first few posts in this series).

Thanks for the Jaynes references. Frankly, I don't have the time to look into this; I'm pushing a more biological approach and am not interested in self-consciousness, language, etc. I know he has the alternate hypothesis that these are important for consciousness, but I look at it as a fringe view that I'd address in the same chapter I address Penrose. As I said, I'm taking the garden-variety biological perspective here.

I was out on a bike ride thinking about this discussion, and it seems we are doing a lot of talking past each other. If by "consciousness" you mean some language-dependent thing that comes into existence via some kind of cultural symbolic scaffolding operation, then we are not talking about the same thing. I am talking about basic perceptual awareness, which all the evidence suggests exists in monkeys (binocular rivalry, for instance, can be induced in them with results very similar to those seen in humans).

I delineated my target in post one in this series (basically perceptual awareness, as of the taste of a banana), so you might want to start there.

While I haven't talked about representations yet, if you are interested in discussing that you should read what I've written (the link above to my post at philosophy of brains, and perhaps my paper on neural coding/decoding in the leech). When out on my bike ride, I realized you were sort of giving generic arguments about representations, but hadn't really addressed what I have actually written on the subject.

At any rate, if your Jaynesian enactive embodied situated dynamical-systems people can handle dreams, hallucinations, and experiences during paralysis, then to that extent it isn't naive neobehaviorism. I know that in Noe's book Out of Our Heads he gives a very desultory and inadequate consideration to dreams/hallucinations. This is typical of that whole approach, but as I said, I don't really want to engage with it seriously right now; I'm too busy.

The evidence will settle things more conclusively eventually (to the extent we have not been talking past each other that is: you seem to have mostly attacked a view I haven't defended, so perhaps check out the first 12 posts in this series and my post on representations).

I may have overstated the importance of ambiguity in this post, but hopefully I made clear in my comments above that I look at it as an empirical question just how much ambiguity is present in naturalistic contexts. Regardless, that doesn't mean ambiguous stimuli can't reveal things about the mechanisms of perception.

Previously I had included the following in the post as a footnote to the discussion of ambiguous squares, but cut it just because of space considerations. I probably should have kept it in: "Obviously, in real-life perception, the world overflows with additional cues and constraints that would tell us how far away the square was. For instance, can you reach and touch it? If you move, how much does the square appear to move? (Closer objects appear to move more when we move.) Distant things appear with less resolution because of smog and other signs that air, or at least smoggy air, is present."

Hi, Jonathan Speke Laudly here. The operation of the brain as observed could be content too! Consciousness could be fundamental, giving rise to the physical world and to some sense of individuality or self. That consciousness is created by an organized bunch of atoms is an assumption. When a person becomes what is commonly called unconscious, as in deep sleep, one could argue that this is just the halting of the contents of consciousness; consciousness itself remains always and is always untouched. This point of view is no more or less an assumption than that the brain (atoms) gives rise to consciousness. What evidence would make either notion indubitable? I know this is peripheral to your immediate purpose, but I think it relevant generally to make clear what conceptual foundation underlies your point of view.

Jonathan: I think I'd want to say that without contents, there isn't consciousness. However, I would prefer to just have no opinion at this point, and say that while you could be right that there is a consciousness there even in deep sleep, it is more fruitful to start with instances where there is little question that we are conscious. That way we don't have to get bogged down in the philosophers' trap of trying to define the phenomenon before we have studied it thoroughly.

My overarching perspective is that of the flat-footed unreflective scientist who wants to treat conscious experience like he would treat digestion or other biological phenomena (I discussed this some starting in the second post).

To illustrate how I am trying to approach the issue, I'll consider the illusion you call "bistable perception". I'll start by doing what Gary Williams alluded to in one of his comments - flash back to the needs of our more primitive ancestors, or even to pre-humans, then work forward to more capable critters.

I assume that one of their primary needs was the ability to detect an object and its present motion and to predict its future motion. Focusing on vision, assume that in the subject's FOV there is a dark object against a light background. Call the pattern of retinal stimulation caused by this scenario the "object shadow". Then it would seem minimally sufficient for the "visual processing system" (the inputs to which are neural correlates in the brain of the retinal sensory stimuli) to have these capabilities:

- recognize neural activity corresponding to the presence of an object shadow, ie, to detect the presence of an object in the FOV

- recognize changes in neural activity corresponding to motion of the object across the FOV; eg, by detecting translation of the object shadow across the retina or coincidence of the object shadow in successive saccades

- recognize changes in neural activity corresponding to motion of the object toward (away from) the subject; eg, by detecting enlargement (reduction) of the object shadow

These capabilities seem pretty straightforward, something a reasonably good programmer could implement in software to process the outputs from a digital camera (reminiscent of Bach-y-Rita's sensory substitution experiment). Ie, so far no reason to toss this functionality into my set C of functions attributable to something we'll ultimately call "consciousness".
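To make that "reasonably good programmer" claim concrete, here is a minimal Python sketch of the three capabilities, operating on grayscale frames from a camera. This is purely illustrative: the thresholding scheme, the tolerances, and function names like `shadow_stats` and `classify_motion` are my own placeholder assumptions, not anything from the discussion.

```python
import numpy as np

def shadow_mask(frame, threshold=0.5):
    # A dark object against a light background: pixels below the
    # threshold are treated as the "object shadow".
    return frame < threshold

def shadow_stats(frame):
    # Capability 1: detect the presence of an object shadow,
    # returning its size and centroid, or None if no object is seen.
    mask = shadow_mask(frame)
    area = mask.sum()
    if area == 0:
        return None
    ys, xs = np.nonzero(mask)
    return area, (ys.mean(), xs.mean())

def classify_motion(prev, curr, pos_tol=1.0, area_tol=0.05):
    # Capabilities 2 and 3: compare two successive frames to detect
    # lateral motion (the centroid translates across the "retina")
    # and toward/away motion (the shadow enlarges or shrinks).
    prev_stats, curr_stats = shadow_stats(prev), shadow_stats(curr)
    if curr_stats is None:
        return ["no object"]
    if prev_stats is None:
        return ["object appeared"]
    (a0, (y0, x0)), (a1, (y1, x1)) = prev_stats, curr_stats
    events = []
    if abs(x1 - x0) > pos_tol or abs(y1 - y0) > pos_tol:
        events.append("lateral motion")
    if a1 > a0 * (1 + area_tol):
        events.append("approaching")
    elif a1 < a0 * (1 - area_tol):
        events.append("receding")
    return events or ["stationary"]
```

Feeding it two frames in which a dark square has both shifted sideways and grown would report lateral motion plus approach; whether anything like this deserves a spot in set C is exactly the question at issue.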

My next planned step is to speculate on how this might relate to bistable perception (hint: it's the prediction part that I'm guessing is critical). Unfortunately, it's late, I need to go to bed, and we'll be traveling tomorrow. I'll check back in a day or two, anticipating being shot down - which is OK since my primary objective is to learn, and this is an unexpected opportunity.

Charles, I think you are on the right track. One of the main lines of thought in the neural coding literature is that the brain engages in prediction of states of the world (whether these predictions be of future or present states usually depends on the task being analyzed). Bistable percepts are often described as two different estimates or predictions (or hypotheses) about what is in the world. The jargon is slightly different but the point is usually the same. This is all quite consonant with all the stuff on interpretation, especially post 14. I will talk about it quite a bit more when I turn to neuronal representations.

For one example from the neural literature, there's this paper, but it is pretty much the default way of thinking about how brains represent the world, so you'd find it in most papers on neural coding.

One thing you said seemed a little strange: "These capabilities seem pretty straightforward, something a reasonably good programmer could implement in software to process the outputs from a digital camera (reminiscent of Bach-y-Rita's sensory substitution experiment). Ie, so far no reason to toss this functionality into my set C of functions attributable to something we'll ultimately call 'consciousness'."

Generally I wouldn't take whether we could program something ourselves as evidence that it isn't important for consciousness. However, if it is something that we have good reason to think is done by the retina (which, as I discussed in post nine, is a powerful information processor in its own right) or the spinal cord (also a powerful information processor), then it is not likely to be essential for consciousness.

Luckily we can empirically determine which parts of the brain are important for consciousness, and empirically determine what those parts of the brain are doing at a causal/functional/information processing level. Ultimately I see the psychological and neural stories coming together this way.

"Generally I wouldn't take whether we could program something ourselves as evidence that it isn't important for consciousness."

All I meant to say was that IF we can create an inanimate entity that implements a specific functionality, we can't use that functionality as evidence of the existence of something we choose to label "consciousness", assuming that we intend "consciousness" to be a word uniquely applicable to humans and possibly "higher-order" animals. My objective is to start with simple functionalities and then try to exclude increasingly complex ones, hopefully including some that would usually be considered characteristic of whatever is typically envisioned when people use the word "consciousness".

In essence, I'm trying to pursue an eliminativist strategy with respect to a concept of "consciousness" as a collection of some supposedly "special" functionalities. I don't think this pursuit is accurately described as arguing that some functionality "isn't important for consciousness": first, because I don't think there is currently a consensus on the meaning of "consciousness"; second, because some function that isn't by itself eligible for inclusion in my set C might be a critical component of some more complex functionality that is.

I made a mistake in that first post in opining on implementability in software, not being competent to make such a judgment. Henceforth, I'll merely argue that IF some functionality is so implementable, then it must be eliminated as a stand-alone functionality eligible for inclusion in set C.

And in that pursuit, I next consider additional functionalities necessary to recognize bistability. In addition to the primitive ability of the subject organism to detect the presence of an object, recognize it as moving laterally across the FOV and either toward or away from the subject, and learn to predict its subsequent motion, I need to assume that the subject organism can recognize certain detected objects as solid (ie, 3D), recognize the lateral and to-and-fro motion of specific parts of the surface of one of those solid objects, and learn to predict the subsequent motion of those parts of the object's surface. (Note that I have clarified the prediction capability assumed by explicitly stating that it is a "learned" capability.)

Given this expanded capability, the subject organism can recognize that an area on the surface of the object moves in a regular pattern as follows: from (say) the right part of the FOV and farther away, to the center of the FOV and closer, then to the left of the FOV and farther away, and finally out-of-sight, subsequently to reappear at the original location; ie, can recognize motion we describe as "clockwise rotation about a vertical axis". It can also learn to predict an area's subsequent behavior consistent with that pattern based on sequences of observations of its past behavior.
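That right-far, center-near, left-far signature can be sketched as a toy model. Everything here is my own illustrative assumption: a surface patch on a drum rotating clockwise about a vertical axis, viewed from the front, and a recognizer (`looks_like_rotation`) that checks a visible track for the described pattern.

```python
import math

def patch_observation(theta, radius=1.0):
    # A surface patch at rotation angle theta on a drum turning
    # clockwise about a vertical axis, seen from the front.
    # x runs from right (+) to left (-) across the FOV as theta
    # increases; the patch is visible only on the near half.
    x = -radius * math.sin(theta)
    depth = radius * math.cos(theta)   # > 0 on the near (visible) half
    if depth <= 0:
        return None                    # rotated out of sight
    apparent_size = 1.0 + depth        # crude proxy: closer -> larger
    return x, apparent_size

def looks_like_rotation(track):
    # track: successive (x, apparent_size) samples while visible.
    # Checks the pattern described above: the patch sweeps right to
    # left, and its apparent size peaks near the center of the FOV,
    # where it is closest to the viewer.
    xs = [x for x, _ in track]
    sizes = [s for _, s in track]
    right_to_left = all(a > b for a, b in zip(xs, xs[1:]))
    peak = sizes.index(max(sizes))
    rises_then_falls = peak not in (0, len(sizes) - 1)
    peak_near_center = abs(xs[peak]) < 0.3
    return right_to_left and rises_then_falls and peak_near_center
```

Sampling `patch_observation` across the visible half-turn yields a track the recognizer accepts, while a left-to-right track is rejected; a learned predictor could then extrapolate when and where the patch should reappear.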

It seems straightforward to take the next step to recognizing bistability. But I have more to say about that than I have time for right this moment. So, I'll post this much and add the rest when I have time or after a response, whichever comes first.

Charles: "All I meant to say was that IF we can create an inanimate entity that implements a specific functionality, we can't use that functionality as evidence of the existence of something we choose to label 'consciousness'."

This doesn't follow. As I said, it could be necessary but not sufficient for consciousness. If we have a four-step process, then just because one step doesn't implement the entire process it doesn't mean it isn't necessary.

In general, I'm not sure of the motivation for your conceptual approach rather than studying systems that we know are conscious and seeing how they work. E.g., we already know a lot about the topic, so why start from scratch?

Correct - an editing oversight. Although I failed to change the part you quoted, I explicitly addressed the point in the last sentence of that paragraph, and added "stand-alone" in the next to cover it.

Charles: yes, I saw that qualifier, and thought it was necessary. My concern is that this throws into question the method of trimming away functions one by one the way you are trying to do. It seems a more fruitful approach to look at how multiple subsystems interact to produce consciousness, and take a more data-driven empirical approach.

I submitted this yesterday, but it seems to have disappeared.

You obviously have a much clearer idea of how to attack "consciousness" than I do - or, as nearly as I've been able to tell, than do numerous people who - unlike me - do have some credibility. Are the Koch/Crick information theoretical view, the computational view, Dennett's heterophenomenological view, the Noe/O'Regan extended system view, etc, just minor variants of an underlying consensus? If so, what is that consensus view?

To repeat part of an earlier comment on Neurologica, in "Conversations on Consciousness" Susan Blackmore asks each interviewee how they define "consciousness". Some responded with what seemed to me uselessly vague definitions, some declined to answer directly, and those who answered more clearly seemed to have notably different and often incompatible ideas. Now admittedly many were philosophers, but some weren't. But in any case, could those twenty-plus interviewees all have been ignorant of the consensus you seem to see? Or has that consensus congealed only in the intervening half-dozen years? If not, it seems that there may still be some open questions at the integrated system level in addition to those at the component level.

For example, consider "memory". We apparently have some pretty good insights into how memory works at the neuron level, and even at a higher level when it comes to learned repetitive behavior, AKA "knowing how". But my (possibly incorrect) understanding is that in the case of "knowledge", AKA "knowing that", we don't currently know what entities are actually "stored" in the brain, ie, what the content of our "memory" is. I read an essay a while back containing the claim that whatever is stored in memory when we "know" a fact, it certainly isn't the proposition asserting that fact. While I agree with that claim taken literally (ie, no alphanumeric strings stored in hexadecimal), it seems possible that in essence a proposition actually is stored - specifically, the motor neuron stimuli necessary to produce one or more variations of the vocalized proposition in response to appropriate stimuli. If so, it would seem to put "knowing that" E = mc^2 pretty much on par with "knowing how" to execute a topspin forehand in tennis - which would seem to blur the distinction between which activities go into the "conscious" bin and which go into the "unconscious" bin.

Charles: I don't think there is a consensus, but there is a lot known. They are different :)

I think it is a mistake to worry too much about a precise definition before the science is more developed (imagine if biologists had spent a lot of time defining 'life' rather than studying things that they already knew were living).

I have a fundamentally different perspective from those who want to take a more conceptual approach. This is probably more of a consensus than any particular definition or theory: to do what we always do in biology, to go for the concrete and specific rather than the generic and abstract.

Well, it appears there is an unbridgeable gap between what we consider "concrete and specific rather than generic and abstract". I view the various relevant fields - physics, chemistry, biology, physiology, psychology, et al - as differing in the level of integration they are addressing. At each level there are "concrete and specific" issues to be addressed using the vocabulary appropriate to that level, and the issues appropriate to one level may not help in addressing those at another. Eg, I find it hard to imagine that knowing the electro-chemical aspects of nerve cell behavior is going to reveal what gets stored in the "memory" implemented by such cells.

You alluded to intentions to further pursue some topics addressed in these posts. Is there a target time schedule for that?

Ultimately I see it as an empirical question what levels are needed to explain any given phenomenon. E.g., for retinal phototransduction we will likely need quantum mechanics. For the behavior of large (i.e., thousands or more) populations of neurons, the formalism from Hodgkin and Huxley (or other scaled-down models) starts to take precedence. The way we find out what is important is by studying what we are interested in, and the principles from studying one thing (e.g., retinal phototransduction) won't necessarily be helpful for another thing (e.g., consciousness).
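For a concrete sense of what a "scaled-down model" looks like at that level, here is a minimal sketch of a leaky integrate-and-fire unit, one common simplification of the full Hodgkin-Huxley equations. The parameter values are generic textbook-style numbers of my own choosing, not tied to anything in this discussion.

```python
def lif_simulate(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    # Leaky integrate-and-fire dynamics (Euler integration):
    #   tau * dV/dt = -(V - v_rest) + r_m * I(t)
    # The unit "fires" whenever V crosses v_thresh, then resets.
    # Returns the spike times (in the same units as dt, e.g. ms).
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_t) / tau
        v += dv * dt
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times
```

With a constant input strong enough to push the steady-state voltage past threshold the unit spikes repeatedly; below that it settles silently. The point is only that such a model deliberately discards the ion-channel detail of Hodgkin-Huxley because, at the population level, that detail often isn't the relevant level of description.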

What I'd resist is an approach which was more conceptual than empirical in nature. For instance, where the first order of business would be to spend a lot of time worrying about definitions of what we are studying (though I did characterize what I mean by 'consciousness' in the first post in this series). E.g., is it more important to define digestion or study digestion? To define 'life' or study living things? I take the same approach to consciousness, and that is probably the consensus (at least within biology--philosophers are a different story of course, and I'm avoiding a philosophical approach, giving Mr B first shot at the floor (as discussed in posts 2 and 3 and comments therein)).

In terms of where I'm headed here, I probably will stop publishing this stuff on the blog, as I want to write it up as a manuscript, potentially a book, and I've found the blog format a bit tedious for expressing serious thoughts that involve sometimes lengthy chains of reasoning. Plus nobody really reads it anyway :)

If interested you can give me your email and I'll send you chapters when I write them.

Our disconnect is that you consider "consciousness" to be in the same functional category as digestion, walking, vision, speaking, et al - the category of functions for which it is relatively clear what subfunctions play a role. For example, in the case of vision, you know where to burrow down into the details - the retina, specific parts of the cortex, etc.

In the case of "consciousness", it doesn't appear to me that the proper assignment of subfunctions is all that clear. So, all I'm really suggesting is that as the burrowing into various subfunctions progresses, it might be better to defer C/not-C assignments until more is known about how they interact. It seems inevitable that premature assignment will slant the burrowing.

And in any event, I rather doubt that "consciousness" is the binary ON-OFF phenomenon implied by making C/not-C assignments. And if not, such assignment seems a significant mistake - one with potentially serious consequences. One of my motivations for pursuing this area is its applicability to assignment of legal responsibility. For what strikes me as a questionable use of C/not-C assignment, see this essay co-authored by your mentor Pat Churchland:

Charles: "Our disconnect is that you consider 'consciousness' to be in the same functional category as digestion, walking, vision, speaking, et al"

Yes, for now I am assuming that consciousness is a biological process, as those other things are.

Then you added: "the category of functions for which it is relatively clear what subfunctions play a role."

Well, for digestion it is clear now that we have a biological decomposition of the process. 150 years ago it wasn't clear at all. There's a great description by Bill Bechtel of the great chemist Liebig, who generated an incredibly detailed functional decomposition of digestion (with specific chemical reactions) that turned out to be just flat-out wrong. Things can be unclear, and we can get it wrong in our decomposition of the underlying mechanisms of digestion, just as we can be wrong as we approach systems that are conscious.

(Bechtel discusses this in some detail here in a paper that deserves much broader readership).

"In the case of 'consciousness', it doesn't appear to me that the proper assignment of subfunctions is all that clear."

True, and it wasn't with digestion either in the 19th century. What was required was fallible theorizing based on limited data. That's the best we can ever do. Such work is not set in stone, but provisional, ladders that can bring us to the next iteration.

"And in any event, I rather doubt that 'consciousness' is the binary ON-OFF phenomenon implied by making C/not-C assignments. And if not, such assignment seems a significant mistake - one with potentially serious consequences."

Sure, but that doesn't mean we can't start somewhere. "Life" is probably not a binary category, but that doesn't mean we can't pick clear instances of living things and study them experimentally to generate data that are the ultimate engine of conceptual innovation.