This editorial piece notes that we still haven’t nailed down the neural correlates of consciousness (NCCs). It’s part of a Research Topic collection on the subject, and it mentions three candidates featured in the papers that were once well-favoured but now – arguably, at any rate – seem to have been found wanting. This old but still useful paper by David Chalmers lists several more of the old contenders. Though naturally a little downbeat, the editorial addresses some of the problems and recommends a fresh assault. However, if we haven’t succeeded after twenty-five or thirty years of trying, perhaps common sense suggests that there might be something fundamentally wrong with the project?

There must be neural correlates of consciousness, though, mustn’t there? Unless we’re dualists, and perhaps even if we are, it seems hard to imagine that mental events are not matched by events in the brain. We have by now a wealth of evidence that stimulating parts of the brain can generate conscious experiences artificially, and we’ve always known that damage to the brain damages the mind; sometimes in exquisitely particular ways. So what could be wrong with the basic premise that there are neural correlates of consciousness?

First, consciousness could itself be a mixed bag of different things, not one consistent phenomenon. Conscious states, after all, include such things as being visually aware of a red light; rehearsing a speech mentally; meditating; and waiting for the starting pistol. These things are different in themselves and it’s not particularly likely that their neuronal counterparts will resemble each other.

Second, consciousness could be realised in multiple ways. Even if we confine ourselves to one kind of consciousness, there’s no guarantee that the brain always does it the same way. If we assume for the sake of argument that consciousness arises from a neuronal function, then perhaps several different processes will do, just as a bucket, a hose, a fountain and a sewer all serve the function of moving water.

Third, it could well be that consciousness arises, not from any property of the neurons doing the thinking, but from the context they do it in. If the higher-order theorists were right, to take one example, for a set of neurons to be conscious would require that another set of neurons was directed at them – so that there was a thought about the thought. But whether another set of neurons is executing a function about our first set of neurons is not an observable property of the first set. As another example, it might be that theories of embodiment are true in a strong sense, implying that consciousness depends on context outside the brain altogether.

Fourth, consciousness might depend on finely detailed properties that require very complex decoding. Suppose we have a library and we want to find out which books in it mention libraries; we have to read them to find out. In a somewhat similar way we might have to read the neurons in our brain in detail to find out whether they were supporting consciousness.

Quite apart from these problems of principle, of course, we might reasonably have some reservations about the technology. Even the best scanners have their limitations, typically showing us proxies for the general level of activity in a broad area rather than pinpointing the activity of particular neurons; and it isn’t feasible or ethical to fill a subject’s brain with electrodes. With the equipment we had twenty-five years ago, it was staggeringly ambitious to think we could crack the problem, but even now we might not really be ready.

All that suggests that the whole idea of Neural Correlates of Consciousness is framed in a way which makes it unpromising or completely misconceived. And yet… understanding consciousness, for most people, is really a matter of building a bridge between the physical and the mental; even if we’re not out to reduce the mental to the physical, we want to see, as it were, diplomatic relations established between the two. How could that bridge ever be built without some work on the physical side, and how could that work not be, in part at least, about tracking neuronal activity? If we’re not going to succumb to mystery or magic, we just have to keep looking, don’t we?

I think there are probably two morals to be drawn. The first is that while we have to keep looking for neural correlates of consciousness in some sense (even if we don’t describe the project that way), it was probably always a little naive to look for the correlates, the single simple things that would infallibly diagnose the presence of consciousness. It was always a bit unlikely, at any rate, that something as simple as oscillating together at 40 Hertz just was consciousness; surely it was always going to be a lot more complicated than that?

Second, we probably do need a bit more of a theory, or at least a hypothesis. There’s no need to be unduly narrow-minded about our scientific method; sometimes even random exploration can lead to significant insights just as well as carefully constructed testing of well-defined hypotheses. But the neuronal activity of the brain is often, and quite rightly, described as the most complex phenomenon in the known universe. Without any theoretical insight into how we think neuronal activity might be giving rise to consciousness, we really don’t have much chance of seeing what we’re after unless it just happens by great good fortune to be blindingly obvious. Just having a bit of a look to see if we can spot things that reliably occur when consciousness is present is probably underestimating the task. Indeed, that is sort of the theme of the collection; Beyond the Simple Contrastive Approach. To put it crudely, if you’re looking for something, it helps to have an idea of what the thing you’re looking for looks like.

In another 25 or 30 years, will we still be looking? Or will we have given up in despair? Nil Desperandum!

3. Sci says:

While I accept we may have to revolutionize our thinking and pursue bold paths like Blind Brain Theory, it’s not clear to me how finding the relevant neuronal wiring is going to solve the Hard part of the problem – explaining how the qualitative aspect of subjective experience arises from arrangements of matter.

The physicist Smolin notes this in Time Reborn:

‘The problem of qualia, or consciousness, seems unanswerable by science because it’s an aspect of the world that is not encompassed when we describe all the physical interactions among particles. It’s in the domain of questions about what the world really is, not how it can be modeled or represented.

Some philosophers argue that qualia simply are identical to certain neuronal processes. This seems to me wrong. Qualia may very well be correlated with neuronal processes but they are not the same as neuronal processes. Neuronal processes are subject to description by physics and chemistry, but no amount of detailed description in those terms will answer the questions as to what qualia are like or explain why we perceive them.’

‘We don’t know what a rock really is, or an atom, or an electron. We can only observe how they interact with other things and thereby describe their relational properties. Perhaps everything has external and internal aspects. The external properties are those that science can capture and describe – through interactions, in terms of relationships. The internal aspect is the intrinsic essence, it is the reality that is not expressible in the language of interactions and relations. Consciousness, whatever it is, is an aspect of the intrinsic essence of brains.

‘One further aspect of consciousness is the fact that it takes place in time. Indeed, when I assert that it is always some time in the world, I am extrapolating from the fact that my experiences of the world always take place in time. But what do I mean by my experiences? I can speak about them scientifically as instances of recordings of information. To speak so, I need not mention consciousness or qualia. But this may be an evasion, because these experiences have aspects that are consciousness of qualia. So my conviction that what is real is real in the present moment is related to my conviction that qualia are real.’

Peter, so much to say about this!
First of all, “around here?”, “not here” and “definitely not”… 🙂 🙂 you made me giggle at work!
Second, for a radical take – and one that I would endorse without reservation, if I didn’t think that declaring this or that mental phenomenon “an illusion” is usually unhelpful (I agree with the gist, but have reservations about how the point is made) – you can revisit Sue Blackmore’s answer to last year’s Edge question (“What scientific idea is ready for retirement?”): The Neural Correlates of Consciousness.

Third, all this chimes with Aru and Bachmann’s views as well as with the additional considerations you are proposing. In short, I think we might agree (!!) that the mere search for NCCs seems to be yet another case of searching behind the lamp-post, or, if you prefer, the paradigmatic example of The Streetlight Effect. However, it is important to note that the over-simplifying hypotheses that inspired the search – like the assumption that you could find a simple phenomenon, detectable with crude measurements such as EEGs (cf. the P300 referred to by Aru and Bachmann), which would constitute the unequivocal signature of consciousness – have been extremely useful. They are wrong, but far less wrong than just assuming intractability or full-blown metaphysical dualism. Therefore they made it possible for the science of consciousness to take off, get grants, and study the problem empirically. As far as I know, the vast majority of actually funded research in the field rests on one or other simplistic assumption: that “oscillations bind the contents of conscious experience”, that a “global broadcast” or “information integration” generates “phenomenal experience”, etc.

This IS a problem, but a necessary problem: because of how science funding works, without such shortcomings, it would have been impossible to find enough grants to do anything but armchair philosophising.

Finally, there is a chicken and egg problem (did I mention this one before?). Yes, we need better theories, or at the very least, we need more convergence on the better theories: on one side, the simplifications that guided research so far are starting to show their limitations (I can optimistically detect a consensus rising on this over the last 5 years or so, at least on NCCs), but on the other side, there is no shortage of alternatives, as all your readers will know.
The chicken-and-egg problem arises because entirely theoretical efforts are, almost by definition, unable to attract enough consensus in the empirical camp (not even in the philosophical one!), not without strong data to support them; but at the same time, good new and alternative approaches will inevitably find it difficult to get funds, precisely because there are no data or precedents to support them. So, enough consensus can’t be reached without supporting evidence; new theories will lack their own supporting evidence (though they may attempt to borrow some), and thus will lack the ability to generate enough interest to fund appropriate research.
It’s a difficult conundrum, because simply calling for more imaginative efforts may potentially open the doors to all sorts of wacky pseudoscience, and no well-intentioned funding body would happily take that risk.

Is this downbeat enough? I’m depressed now 🙁

On a happier note, all this is another reason why people like you (independent, blue-sky thinkers with an allergy to fluff) are extremely useful. You’ve given a voice to plenty of alternative views, and applied your no-nonsense criticism to all. Perhaps one will stick, who knows?

5. Vicente says:

On the theoretical side completely new concepts are needed, and on the experimental side bioethical constraints make progress extremely slow, if not impossible (rat intellect research aside).

Peter, since you started C.E., what progress has been made in the field? Is there any significant breakthrough that you deem worth mentioning? Which would you say is the most important piece of knowledge that we have now and didn’t have eleven years ago?

I have argued that a solid foundation for the scientific study of consciousness can be built on three general principles. First is the metaphysical assumption of dual-aspect monism in which private (subjective) descriptions and public (objective) descriptions are separate accounts of a common underlying reality. Second is the adoption of the bridging principle of corresponding analogs between subjective/phenomenal events and biophysical brain events. And third is the adoption, as a working definition, that consciousness is a transparent brain representation of the world from a privileged egocentric perspective; i.e., subjectivity.

On the basis of the bridging principle and the definition of consciousness stated above, we can ask what kind of brain mechanism can generate neuronal activation patterns that are proper analogs of salient conscious events. I have proposed the retinoid model as an explanation for our conscious experience just because the specification of its detailed neuronal structure and dynamics enables us to explain previously inexplicable phenomena and predict novel conscious events. This goes beyond the search for the neural correlates of consciousness. On what grounds would one argue that this explanation of conscious experience is not within the current norms of science?

Vicente: in some ways you could say we know less than we did ten years ago! We’re not so euphoric about computation. We know a lot more about neurology but what we’ve mainly learned is how fantastically complex it is, I think even more than we realised. I think there’s a greater diversity of theoretical viewpoints now than there was, which is a good thing; but in the end there has to be some convergence again if we’re going to get real accepted answers. There still isn’t the level of cross-disciplinary understanding that we really need, though I think people try a little harder at that than they once did. In AI it seems as if brute force is gradually winning over subtle understanding, which is a shame.

But it’s still all to play for and it could be that the thing we’ll all recognise as decisive with hindsight has already happened, without (yet) being recognised.

9. Sci says:

@Sergio: I confess I don’t really see any resolution to the Hard Problem in Blackmore’s comments – I’ve seen her pull this “If you disagree with me you’re a dualist and that’s bad” card before, and with each iteration it rings more hollow.

I read her claim about illusions and look at the tag-line of this blog – who is it that is being fooled?

Beyond that, she seems to simply avoid the problem of intentionality materialism faces, aptly and directly presented to us in Rosenberg’s Atheist’s Guide to Reality:

“…What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all…When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong.”

Funny, it seems as if Rosenberg is referring to stuff here, or is the above quote mere marks on a page?

Anyway, re the NCC, I always point out that there’s been major progress in discovering the cognitive functions associated with conscious content (phenomenal experience), which in turn can suggest hypotheses about why systems carrying out such functions will of necessity end up with qualia. One is that representational systems like ourselves, to be behaviorally reliable, perforce employ basic, irreducible, cognitively impenetrable, *hence* qualitative-for-the-system elements with which to model the world. To be qualitative is just for a representation to not be second-guessable or transcendable by the system using it. And this qualitativeness is only a fact for the system, thus not a publicly observable fact about the world; hence the privacy of experience.

13. Hunt says:

Lack of progress could be due to general theoretical deficit, which is frustrating since it implies that advance must wait for the general advance of science. It might be like trying to understand what the kidney does before understanding blood chemistry. Frustrating also, since we don’t even know precisely where the theoretical limitations lie; they could be in any of a number of fields.

With proper theoretical backing, it’s ‘possible’ that consciousness might be a rather trivial elucidation — not ‘trivial’ as in “this is all there is to it?” but trivial in the sense that the truly satisfying answer we all yearn for might be easily apprehended. Something akin to when Watson and Crick entered the pub proclaiming that they had discovered the secret of life. Understanding that key information process at the heart of life didn’t diminish the wonder we know as the process of life. In a sense, it explained everything and nothing, but was deeply satisfying all the same. That’s the type of answer we seek, and it may well not yield to direct assault. Not yet, anyway.

@Sci
The short Blackmore essay I linked to does indeed sidestep the problem of intentionality, and even though I subscribe to (my own) sort of hard-core scientism, I don’t think the problem should be sidestepped or ignored. In this case, I don’t share your problem with that particular essay, because intentionality is outside its scope: you don’t need to tackle it to point out that the search for NCCs has often overlooked its limitations and shortcomings.

I do have a problem with saying “consciousness is an illusion” and leaving it at that. This is a circumstantial problem (with the essay itself), because Blackmore’s position is more nuanced: she believes that consciousness is not what it seems, that “Perhaps the answer here is to admit that there is no stream of conscious experiences on which we act”.
[This and the following quote come from a “New Scientist” article, and are both based on:
Blackmore, S. (2002). There Is No Stream of Consciousness. Journal of Consciousness Studies, 9(5-6), 17-28.]

In short, I don’t have a problem with the following:

Perhaps a new story is concocted whenever you bother to look. When we ask ourselves about it, it would seem as though there’s a stream of consciousness going on. When we don’t bother to ask, or to look, it doesn’t, but then we don’t notice so it doesn’t matter.

This again doesn’t address “aboutness” and in my opinion doesn’t need to: it may be true or false, but on its own terms, intentionality does not need to enter the picture. Actually, I’ll be even bolder and say: in all honesty, I don’t see why you raised the problem in the first instance.

On the other hand, I’m with you and Peter on “If the conscious self is an illusion – who is it that’s being fooled?”, and that’s exactly why I don’t like it when people say “this or that mental phenomenon is an illusion” when they actually mean “this mental phenomenon isn’t what we are intuitively inclined to think it is” (see BBT, of course). The word “illusion” suggests non-existence and therefore “feels” misleading to me (as a non-native speaker, I may be wrong).

As Darwin found, you cannot initially explain anything down the pub or anywhere else. Especially if there is only one of you, talking to restricted top-down thinkers who are unable to think outside of their brainwashed boxes.
Originally every brain evolved to serve the physical needs of its body, which is of course the driver of the mind. If you cannot see that this was, and still is, the reason for anything having a brain, then you will need to look for some transparent blind NCCs etc. Best of luck with that.

16. Vicente says:

Well, I’m afraid my view is very blurry at this moment. I quite share your situational analysis, but I am a bit more pessimistic. I suspect that progress towards solving the hard problem of consciousness has stalled. I feel that there are a few main aspects to look at:

– first, the problem statement itself. It is difficult to provide a coherent answer if the question is not clearly put and understood, and the problem specifically framed. In this sense, I am still missing a satisfactory definition of consciousness and its constituents. For the time being all definitions seem tautological to me (which could be reasonable if we are facing a fundamental phenomenon).

– Then, if we come to the conclusion that the problem statement cannot be strictly written down in orthodox scientific terms, i.e. the problem can’t be bound within a scientific framework, then we should start thinking of an alternative approach. Related to this post’s topic, I suggest that, since neuroscientists have been looking for the answer on the brain side of the system, quite unsuccessfully for the moment, maybe they should start paying more attention to the phenomenological side, to understand it better and look for inspiration that could then be used in the neuro-lab. The pharmaceutical industry would probably never have developed painkillers without the direct experience of pain, nor opticians glasses without their own experience of bad sight. So I suggest that neuroscientists start exploring as many states of mind as possible by meditation or chemical means. The first thing is to observe. So let’s observe the inner side of consciousness and hope that some innovative ideas and lines of research, both neurological and philosophical, will emerge from this exercise, with a higher yield. The main risk is that the neuroscience community achieves enlightenment and quits, to spend the rest of their lives contemplating a flower.

– Also, the search for NCCs has to rely on much more powerful instruments (non-invasive is a very stringent condition…) that can achieve much higher simultaneous spatial and temporal resolution, sufficient to cross-correlate events with the ongoing conscious experience. Sort of a very evolved SMTT experimental setup with neural activity monitored at the same time. It looks to me as if we are looking for dark matter with a magnifying glass in one hand and a newspaper in the other. A better understanding of, and new ideas about, basic scientific scaffolding – e.g. what life is, from a very theoretical biology point of view, or a new structure of matter and space-time (more dimensions) that could accommodate qualia – would help.

Regarding AI… I don’t know, Blade Runner is still one of my favorite films.

So Peter, sorry for saying really nothing, but honestly I haven’t got a clue, and the very little I could really contribute is quite ineffable.

On what grounds would one argue that this explanation of conscious experience is not within the current norms of science?

On no grounds at all. I went through a bit of not very systematic reading of your retinoid model a few years ago. The details have now faded, so I hope you will forgive me if I report only what’s left in me: the thoughts and impressions that I gained from reading. I don’t have time at this moment to re-read the papers, but I wanted to chip in, as I think it is somehow relevant to the current topic. What follows are my personal opinions – just a bloke who likes to study the subject.

First: your work is certainly within the current norms of science; I am very surprised that you felt the need to ask. (Were you asking me? This reply is somewhat justified by the possibility that you were.)

Second: your general principles are a notch above the ones that underlie many alternative scientific efforts. They aren’t perfect, but still solid. From the metaphysical point of view, I do think that the third one can’t be taken as a face-value postulate; it would require some philosophical groundwork first. The trouble is that doing so would inevitably lead to the chicken-and-egg problem: you can’t get philosophical consensus unless you have solid evidence to back up your conclusions, but you need the principle to start collecting evidence.

Third: the execution isn’t flawed by the same issues / questionable assumptions / narrow scope as the search for NCCs. However, my impression was that it still suffered from the Streetlight Effect, only of a different kind (and again: this is a necessary problem). In particular, the retinoid model seems limited to (or focused on – do correct me if I’m wrong!) visual/spatial domains, an approach that I assume is justified by the fact that we know a lot about how visual stimuli are processed and that it’s easy to objectively quantify/describe the stimuli used. What this leaves out, therefore, is the big mystery (which for me coincides with the supposedly-hard problem) implicit in your third principle: how does subjectivity arise? Also: other sensory inputs, possibly attention (I genuinely don’t remember!), proprioception (self-modelling?), and most importantly, motivational and/or emotional states (which may or may not be seen as the same thing).
I’d guess that doing neuroscience on motivational/emotional states, especially some years ago, was practically impossible: one would have been laughed at. Unfortunately, however, I don’t think one can reach a decent understanding of consciousness/subjectivity without accounting for them. As a result, my overall impression of your work was: everything you say is spot-on, but it’s only the portion of the puzzle that was relatively easy to illuminate (i.e. very difficult, but possible). What you don’t say is equally important, but missing, and potentially undermines the whole architecture. When I say “potentially” I don’t mean that I think it does, just that partial insights may contain a sort of “sampling” error.

Finally: what’s your view on the various incarnations of predictive/Bayesian brains? The modelling element is there as well, but perception works “backwards”, so I wonder whether you dismiss the hypothesis altogether or whether you have tried to complement both approaches (once more, I hope ignorance is forgiven!).

hypotheses about why systems carrying out such functions will of necessity end up with qualia

Exactly so. See for just one example “Structural Qualia” as discussed by Peter in these very pages. The same goes for BBT, which in my view is nothing but another way of expressing the same concept.

This brings me to a short remark @Peter:

it could be that the thing we’ll all recognise as decisive with hindsight has already happened, without (yet) being recognised.

We can’t know for sure, but I do think that’s the case for the supposedly hard problem (e.g. the above). Without the solution for the supposedly easy problems, however, we regretfully have to admit that we can’t really say if that’s the case. As Tom suggests, we have (fairly strong) hypotheses of actual functions, and these in turn suggest plausible predictions (second-order hypotheses) on the necessity of qualia.

Talk of the ‘cognitive impenetrability of qualia’ suggests that ‘cognitive impenetrability’ is a predicate of qualia, or that qualia are penetrable enough to be so predicated. In other words, you’re already assuming you have a cognitive relation with something called qualia. And again, the same old unanswerable questions I’ve been asking here apply. How could we have any relation with our internal states that isn’t radically heuristic?

We have this communicative system that atomizes the world, transforms it into particles and low-dimensional relationships between particles, and despite being so granular, actually allows us to solve innumerable problems. But the complexities of the world vary wildly while granularity of that system remains the same: in the case of ourselves, the complexities are astronomical, so the granularity renders our problem-solving heuristic in the extreme and severely constrains the kinds of problems that can be solved.

We atomize ourselves to solve narrow ranges of certain high-frequency problems. Thus the power of intentional and phenomenal terms (and thus the reliability cliff those terms fall from when used in theoretical contexts). The search for NCCs is bound to fail so long as we insist on using intentional atoms (like qualia) as our correlates. BBT provides a way to back out of this mess.

Isn’t the brain damage talked about rather suggestive, though? Imagine if you could reverse or advance it in a subject at will. Or even reverse or advance it in yourself? Surely as we reverse the damage, brain cells are returned and the subject (or yourself) might be able to register a change in feeling on the matter. Advance the damage and a similar change.

Then you could log that feeling of change to that cell or cells (perhaps it’ll take a few lost cells to feel the difference? The introspection resolution might not be cell perfect!)

Talk of the ‘cognitive impenetrability of qualia’ suggests that ‘cognitive impenetrability’ is a predicate of qualia, or that qualia are penetrable enough to be so predicated. In other words, you’re already assuming you have a cognitive relation with something called qualia. And again, the same old unanswerable questions I’ve been asking here apply. How could we have any relation with our internal states that isn’t radically heuristic?

Speaking of short cuts, I think there’s a sort of jump here that probably deserves longer explanation. So I’ll have a crack at it!

We’re jumping from the idea that you can’t say something is unknowable without having some knowledge of it to begin with, over to the idea of heuristics. I think maybe the middle stepping stone is left too much to inference – I think the longer explanation is along the lines that if one assumes a large lack of feedback or knowledge, then one would be working with a very rough sketch of how things are. So in saying something is unknowable, you’d actually have to have some scant knowledge of it to say that – scant knowledge is a sketch – sketches are heuristics. I.e., it turns the acknowledgement of the unknowable into what is really an acknowledgement of massive heuristics use. Or that’s the idea.

If that didn’t need explaining, ok, I’m terrible! 🙂 But if only my kind of terrible was the worst thing you find on the net! 🙂

22. Vicente says:

Arnold,

I think you are right; even more, the Bayesian approach just requires the brain to have a representation of the world (in fact this requirement is common to all global models), but not a *conscious* representation. Consciousness is an accidental feature in these models.

At least your theory proposes how a fundamental part of this representation could be generated by the brain.

@Scott
I confess you’ve lost me: your way of writing about these things is always somewhat arduous for me, and this time the difficulty is enough for me to admit that I don’t think I understand what you mean.

I’ll reply by vaguely gesturing in the direction I think is relevant, hoping I’ll strike somewhat close to the mark.
A few things we may agree on: we are able to internally represent the world. If you are in a room and close your eyes, you are still able to point in the direction of the door, even if with limited precision. Therefore, you’ve got to have a model of the room stored somehow inside yourself, presumably in the brain.
You are also able to think about this ability, and somehow “remember” perceiving the door (explicit/declarative memory implies metacognition), and what it looks like. To do so, you need to be able to construct symbolic representations of the world; if someone has an idea of how to do all the above in a way that can’t be described as symbolic, I’m all ears, because I can’t.

A quale, in the sense of the “redness of red”, is, the way I see it, what the internal symbol of red looks like when it is assessed. It seems to have the puzzling ineffability of philosophical qualia (and all that baggage) because of blindness (to use your terminology): our conscious processes are nothing but the tip of the iceberg when it comes to the inner workings of the mind/brain. They consume the symbols, but have no access to the mechanisms that produce them, nor any insight into the rules of how they interact.
This is inevitable: a mechanistic modelling engine can’t have the processing power that would be necessary to model itself, not in real time (and a mechanistic modelling engine that has been shaped by evolution has no reason to even try). It is also evolutionarily necessary: the symbol “pain” needs to produce an inescapable avoidance motivation. It might not be irresistible (we can endure some pain, when motivated to do so), but pain still needs to remain coupled with the desire to avoid it. Hence, “conscious me” needs to be unable to separate symbols from meaning (Sci: here comes intentionality!). I see a parallel between the above, Loorits’ structural qualia (conscious processes perceive qualia as unitary and ineffable, because they have no access to their structure) and BBT. Don’t you?

Now, the heuristic part would need an essay-length explanation. Yes, everything the brain does is heuristic; it has to be, because induction is ultimately the only source of knowledge, and induction is inherently heuristic. Furthermore, symbolic modelling is heuristic in itself: models are by definition a reduction of the real thing. They approximate what they describe; good models retain the information required to make the model behave in a way that closely resembles the modelled system, but this ability is guaranteed to be imperfect: sometimes it will fail. This applies to the blind induction of principles selected via evolution, as well as to conscious knowledge. The implication is that judgements that seem ineluctable to us are in fact the result of our particular ecological needs, and actually make no sense from an objective point of view (they make sense only by accounting for the ecological needs of the creatures that make them). This applies to colours and all perceptions, but also to aesthetic, conceptual and moral judgements. Is that radical enough?

The above is a rough sketch that may make some sense to you if and only if you already agree; I can’t fully unpack the argument here. I’ve kind of started my whole thinking from a closely related stance, so you may want to check my very first blog posts and see if they make more sense to you.

A little question for you: what’s the difference between “radically heuristic” and (just) “heuristic”? I ask because your writings suggest to me that you see a significant qualitative distinction, while I only see a quantitative one (making the choice of where to put the threshold arbitrary).

I agree with the above, but disagree with all the rest, sorry!
Will try to explain, starting from far away: let’s take a very simple single photosynthetic cell. It takes water, CO2 and light, and is thus able to produce everything it needs. During the day, its internal metabolism is adjusted to maximise photosynthesis; during the night, other metabolic processes are prioritised. In this way, time is used in the most efficient way. According to Friston, we can already say that this cell contains a model of the world, in the form of its metabolism-regulating mechanism. The two states (+photosynthesis and +otherStuff) “represent” night and day; the shifting between them, and its timescale, represent the expected alternation of night and day, and are even tuned to the expected speed of change. This cell models one binary aspect of the external environment.

Now, let’s imagine that this cell lives far from the equator. During winter the need to prioritise photosynthesis is even greater, so natural selection will favour those cells that are able to respond to this seasonal difference (for example, by slowing down the other metabolic processes, so as to avoid depleting the reserves of sugar too quickly). Thus, through evolution, the offspring of our starting cell will eventually gain the ability to account for the initial binary distinction plus the seasonal variance. The model is enriched with a quantitative element on top of the binary one.

In this case, Bayesian logic operates across generations of individuals, not within a single subject. However, the example serves to make an important point: living organisms, all of them, internally model the outside world; they need to. Thus, all living things already have a model. The Bayesian approach shows us that this is the case; it doesn’t presuppose the existence of the model, it explains why it is there.
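The “Bayesian logic across generations” idea can be sketched in a few lines. This is my own hypothetical toy, not Friston’s actual formalism: each cell carries an implicit guess about the day length of its habitat; fitness-proportional selection plus small mutations move the population’s distribution of guesses toward the true value, much as Bayesian updating moves a prior toward the data, but across generations rather than within one subject.

```python
import random

TRUE_DAY_LENGTH = 8.0  # hours of light in this cell's (wintry) habitat

def fitness(guess):
    # Cells whose implicit "model" matches the environment do better.
    return 1.0 / (1.0 + abs(guess - TRUE_DAY_LENGTH))

def next_generation(pop, rng):
    # Fitness-proportional selection, then small mutations.
    chosen = rng.choices(pop, weights=[fitness(g) for g in pop], k=len(pop))
    return [g + rng.gauss(0.0, 0.1) for g in chosen]

rng = random.Random(42)
population = [rng.uniform(0.0, 24.0) for _ in range(200)]  # a vague "prior"
for _ in range(100):
    population = next_generation(population, rng)

mean_guess = sum(population) / len(population)
# After selection, the population's implicit model sits near the true value,
# even though no individual cell ever "updated" anything within its lifetime.
assert abs(mean_guess - TRUE_DAY_LENGTH) < 1.5
```

The individuals here are strict automatons, exactly as in Sergio’s example; the “inference” lives entirely in the lineage.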

Where I agree is that none of this even starts to explain why the model is (sometimes?) conscious. Unless you take an IIT-like stance, I guess we can agree that most likely the cell I’ve described isn’t conscious.

Arnold: my (intended) question was another, more specific to neural-centric applications of the concept. Contents of the retinoid space are maintained by self-stimulating neurons, and what is there is the direct consequence of transduced stimuli. Predictive coding and similar approaches expect the model to come first, and to compute a difference between the transduced stimulus and the expected value; this seems physiologically incompatible with retinoid theory. Hence my question: am I right in seeing this incompatibility?
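The contrast being drawn here can be made concrete. A minimal predictive-coding sketch (a generic illustration of the scheme, not of any specific neural circuit or of retinoid theory): the prediction is generated first, and only the difference between it and the transduced stimulus is propagated and used to update the model, rather than the stimulus itself being represented directly.

```python
# Generic predictive-coding step: signal the error, not the stimulus.
def predictive_coding_step(prediction, stimulus, learning_rate=0.2):
    error = stimulus - prediction            # what actually gets propagated
    new_prediction = prediction + learning_rate * error
    return new_prediction, error

prediction = 0.0
for stimulus in [1.0] * 30:                  # a steady input
    prediction, error = predictive_coding_step(prediction, stimulus)

# With a stable stimulus the error is progressively "explained away":
# the model converges on the input and the propagated signal vanishes.
assert abs(error) < 0.01
assert abs(prediction - 1.0) < 0.01
```

The tension Sergio points at is visible even in this toy: here the maintained quantity is the prediction, and the stimulus only enters as an error term, whereas in the retinoid picture the maintained content is the direct consequence of the transduced stimulus itself.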

Re Arnold #10: I don’t know what to say. I know the experiment is pivotal for retinoid theory, but I can’t recollect why. In general, I don’t see a problem with the existence of perceptual illusions: given the inescapable heuristic nature of modelling-based perception (see #23), it would be surprising if such illusions didn’t exist. I could even stretch this argument and say that the very existence of perceptual illusions is a strong indication that a model is indeed accessible to conscious processes.

@Sergio: “According to Friston, we can already say that this cell contains a model of the world, in the form of the metabolism-regulating mechanism.”

If you take the metabolism-regulating mechanism as a cell’s model of the world in which the cell exists, then we are talking past each other because we have significantly different understandings of what an internal model of its surrounding world might be.

Re #10, the vivid perception of a triangle in horizontal motion is a hallucination, not an illusion, because there is nothing like a triangle in the visual field.

27. Sci says:

@Richard J.R.Miles:
“Most neuroscientists put conscious brain activity down to two or three squares of a chess board”

I’m not convinced most neuroscientists understand the gravity of the problem when it comes to an eliminativist account of intentionality. One who does, Tallis, actually rejects materialism due to the Hard Problem and the Problem of Intentionality. Of course he may not be right either, but he has some interesting writings out there.

“so it is amazing what the other sixty odd squares of the brain are doing isn’t it?”

28. Sci says:

@Richard J.R.Miles: Apologies, I forgot that the neuroscientist Koch has also rejected materialism for panpsychism –

“I used to be a proponent of the idea of consciousness emerging out of complex networks…But over the years, my thinking has changed. Subjectivity is too radically different from anything physical for it to be an emergent phenomenon…The phenomenal hails from a different kingdom than the physical and is subject to different laws. I see no way for the divide between unconscious and conscious creatures to be bridged by more neurons.”
-Consciousness, p. 119

30. Sci says:

@Sergio: ” in all honesty, I don’t see why you raised the problem in the first instance.”

Partly because, as far as I can tell, consciousness and intentionality are not necessarily separable in actuality, as argued in EJ Lowe’s There Are No Easy Problems of Consciousness and Tallis’s What Consciousness Is Not (sadly I can’t link without being caught in the filter).

Partly because, if Blackmore hasn’t addressed how materialism will account for intentionality – via BBT or some other mechanism – I would hesitate to take her understanding of the mind at face value, given the above. My understanding of Blackmore is that she’s a compatibilist, and I can’t square that with an eliminativist account of intentionality… though admittedly perhaps the intellectual deficiency is mine!

@Callan: I’m sure an actual account of how consciousness arises from matter would be accepted by Koch, who is an actual (neuro)scientist after all. But is there any reason for him to be a cheerleader for materialism in the meantime? There’s ideally room for nonmaterialist starting assumptions, so long as they are grounded.

31. Vicente says:

Sergio,

IMO consciousness entails a scenario opposite to the processes you have described in #24. It is precisely because of those limited and invariable processes that the vegetal cell is not conscious (in the sense we are discussing here).

I don’t see how the cell “models” or represents those natural processes; it is just part of them. E.g. the O2 produced by primitive algae completely changed the atmosphere’s chemistry, etc… Plants also adapt to the concentration of nitrogen and nutrients in the soil, and there are no cycles involved.

Well, if we say that any system that includes some inner processes that correlate, and allows the adaptation and interaction, with the external medium is somehow modelling and representing the world around, fine. But the idea here is other, I believe.

Consciousness starts to play a role when the conscious entity decouples itself from the underlying biological substrate to some extent. This is a problem that has always intrigued me a lot. Do you believe (à la Dawkins) that, for example, altruism is just a kind of egoism? Or that somebody who exercises regularly and diets, making a big effort and sacrifice, does it because at the end of the day it makes him feel better, and there is a direct payback in average pleasure – or is he really struggling against his own instincts?

@Vicente #31,
I think I understand your doubts, and I certainly share the difficulty of the change of perspective that I’m describing. In all fairness, I’m on the fence myself: I’m not sure it will bring valuable insights, but the more I study it, the more it looks promising. The promise is in how it redefines the problem, not in a ready-made solution; in fact, the solution isn’t there at all.

“Well, if we say that any system that includes some inner processes that correlate, and allows the adaptation and interaction, with the external medium is somehow modelling and representing the world around, fine. But the idea here is other, I believe. Consciousness starts to play a role when the conscious entity decouples itself from the underlying biological substrate to some extent.”

Well said; this is a good way to approach the issue. Friston’s position predicts that all living organisms will necessarily have “some inner processes that correlate [with the outside]”. Contrary to Arnold, I see this as a model, or at least a proto-model. Hence, the question becomes: given that living organisms all maintain a sort of model of the outside, which mechanically drives their behaviours, how and why does the decoupling happen?

The (first, hypothetical and sketchy) answer is twofold and straightforward. On one hand, if we leave our anthropocentric bias aside, we have to restate the question: how and why does the decoupling happen in some lineages, while it (probably) does not in the remaining majority?
Most living things are more similar to the primitive automata I’ve used in #24 than to the self-disciplined human you describe (also: there is no binary distinction; clearly, organisms should be classified along a gradient – most likely a multidimensional one).

The key is in the scale and speed of adaptability. Unicellular organisms (and many insects) can change their genetic makeup on roughly the same timescale as we change our minds, and certainly on a shorter timescale than the one we need to change the organisation of our societies. Here, the decoupling happens across generations, while individuals remain strict automatons.
For organisms that rely on resources that are stable and predictable over long periods of time, the same process of “across generations” decoupling (our evolutionary “default mode”) works just as well, but can happen across a much longer timescale. See trees for an example. The difference is that in the trees’ world, as long as there is light, rain and soil, most (but not all!) problems are solved. It would be interesting to explore the “not all!” side, as trees need to dynamically fend off parasitic attacks, and these can change quickly. CE is not the place for this discussion, though.

On the other hand, we end up with so-called “higher organisms”, those equipped with a brain and a mental life, able to generate mental plans of their future actions and so forth. The evolutionary drive towards developing such capacities is implicit in the above: these organisms are able to occupy less stable, less predictable ecological landscapes. Importantly, by doing so, they make the landscape even less stable (because their actions necessarily have some impact), thus increasing the selective pressure for more and more “impromptu adaptability”.

The above accounts for the “why” side of the question, leaving the “how” mostly untouched.
Thus, the final step comes from the early emergence of nervous systems: at first we had receptors, but as multicellular organisms grew in size, whole-body coordination required sending signals across distances, creating chains of cells specialised in communication, and eventually neurons. Thus, the proto-modelling regulatory mechanisms, operating at the level of the whole organism, became segregated into a specialised system (or organ, following Hohwy). This in turn facilitates the decoupling (as now we have something that even Arnold may recognise as a model, and we can change the effects it has on the organism simply by sending an axon in another direction), and we know some decoupling is useful because it allows adaptability. Importantly, it is also dangerous, because it may generate maladaptive behaviours.
Thus, for us “hard problem junkies”, the consequence is that we can take some sort of modelling for granted (a big step in the debate for or against some sort of Cartesian theatre), and look at how it can be used to produce robust, reliably useful, and dynamic (not univocally coupled) responses to stimuli.
Intuitively, some sort of “inner life” starts making sense, at least to me.

To share this view, one needs to turn a good number of well-trusted intuitions head over heels, but I do think the results are rewarding, so my own stance is that it is well worth a try. It would be very satisfying if someone were to share my hunch, but I’m old enough to know it isn’t likely to happen!

Koch writes: “I see no way for the divide between unconscious and conscious creatures to be bridged by more neurons.”

A way has been openly proposed and detailed in relevant publications. But Koch seems to avert his eyes and won’t mention the retinoid model – not more neurons, but a particular kind of neuronal mechanism – as a plausible candidate to bridge the gap, even if only to argue why it can’t do the job. He resigns himself to the refuge of panpsychism.

Marois said: “Consciousness appears to break down the modularity of these networks, as we observed a broad increase in functional connectivity between these networks with awareness.”

Vicente: “Expected results, coherent with IIT, support unified experience, and so on… But does this paper really clarify, or tell us anything really new, about consciousness?”

No, it does not. It doesn’t explain how the perspectival feature of subjectivity emerges from widespread connectivity. On the other hand, just from a schematic description, one can see how widespread connectivity is realized by the retinoid model of consciousness, because of the recurrent projections between multiple pre-conscious mechanisms and retinoid space via the core self (I!). For example, see Fig. 8 in “Space, self, and the theater of consciousness”, here:

39. Vicente says:

Arnold, you write:

…our phenomenal space is strictly limited and determined by the structural and dynamic properties of the 3D retinoid, which is an egocentric topological analog of our natural behavioral space. I suggest that all images and tokens which we experience as having denotable spatial locations are really neuronally projected, by mechanisms…

How would you project this theoretical topology (matrix), represented by a logical structure that is theoretically mapped at the level of a synaptic network, onto lower histological layers?

If I select an object in my visual field, it should be projected in some area of the retinoid space, and correspond to the activity patterns of certain neurons in that area (or not?). But if I want to get into more detail, how is this image element allocated to the different cellular structures? How is the image coded in the retinoid space, down to the maximum detail and the lowest biological structures you could take into account or refer to? Or is it that all this processing takes place in the visual cortex, whose activity is an input to the retinoid space just for location purposes?

In order to assign a real-system value to the retinoid space, I need to know what is the smallest anatomical structure involved in the retinoid space’s working, and what is projected onto it. How is an image, as I see it, decomposed and allocated to the different parts of the retinoid space, so that I see it spread in front of me in a continuous way? How is this continuity histologically and physiologically achieved by the retinoid system, which seems discrete to me in its logical formulation?

Vicente: “How is an image, as I see it, decomposed and allocated to the different parts of the retinoid space, so that I see it spread in front of me in a continuous way?”

As I reflected on your question, it seemed to me that it would be helpful if you first read my TCB chapter “Analysis and Representation of Object Relations”, linked below. Then we can both consider the relevant mechanisms that work together with the retinoid system to regulate image projection from pre-conscious mosaic arrays into retinoid space.