Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem

by rsbakker

Along with Taylor Webb, Michael Graziano has published an updated version of what used to be his Attention Schema Theory of Consciousness, but is now called the Attention Schema Theory of Subjective Awareness. For me, it epitomizes the kinds of theoretical difficulties neuroscientists face in their attempts to define their explananda, and why, as Gary Marcus and Jeremy Freeman note in their Preface to The Future of the Brain, “[a]t present, neuroscience is a collection of facts, still awaiting an overarching theory” (xi).

On Blind Brain Theory, the ‘neuroscientific explananda problem’ is at least twofold. For one, the behavioural nature of cognitive functions raises a panoply of interpretative issues. Ask any sociologist: finding consensus-commanding descriptions of human behaviour is far more difficult than finding consensus-commanding descriptions of, say, organ behaviour. For another, the low-dimensional nature of conscious experience raises a myriad of interpretative conundrums in addition to the problems of interpretative underdetermination facing behaviour. Ask any psychologist: finding consensus-commanding descriptions of conscious phenomena has hitherto proven impossible. As William Uttal notes in The New Phrenology: “There is probably nothing that divides psychologists of all stripes, more than the inadequacies and ambiguities of our efforts to define mind, consciousness, and the enormous variety of mental events and phenomena” (90). At least with behaviour, publicity allows us to anchor our theoretical interpretations in revisable data; experience, however, famously affords us no such luxury. So where the problem of behavioural underdetermination seems potentially soluble given enough elbow grease (one can imagine continued research honing canonical categorizations of behavioural functions as more and more information is accumulated), the problem of experiential underdetermination out and out baffles. We scarce know where to begin. Some see conscious experience as a natural phenomenon possessing properties that do not square with our present scientific understanding of nature. Others, like myself, see conscious experience as a natural phenomenon that only seems to possess such properties. Michael Graziano belongs to this camp also.

The great virtue of belonging to this deflationary pole of the experiential explananda debate is that it spares you the task of explaining inexplicable entities, or the indignity of finding rhetorical ways to transform manifest theoretical vices (like analytic opacity) into virtues (like ‘irreducibility’). In other words, it lets you drastically simplify the explanatory landscape. Despite this, Graziano’s latest presentation of his theory of consciousness (coauthored with Taylor Webb), “The attention schema theory: a mechanistic account of subjective awareness,” seems to be deeply–perhaps even fatally–mired in the neuroscientific explananda problem.

Very little in Webb and Graziano’s introduction to AST indicates the degree to which the theory has changed since the 2013 publication of Consciousness and the Social Brain. The core insight of Attention Schema Theory is presented in the same terms: the notion that subjective awareness, far from being a property perceived, is actually a neural construct, a tool the human brain uses to understand and manipulate both other brains and itself. They write:

This view that the problem of subjective experience consists only in explaining why and how the brain concludes that it contains an apparently non-physical property, has been proposed before (Dennett, 1991). The attention schema theory goes beyond this idea in providing a specific functional use for the brain to compute that type of information. The heart of the attention schema theory is that there is an adaptive value for a brain to build the construct of awareness: it serves as a model of attention. 2

They provide the example of visual attention upon an apple, how the brain requires, as a means to conclude it was ‘subjectively aware’ of the apple, information regarding itself and its means of relating to the apple. This ‘means of relating’ happens to be the machinery of attention, resulting in the attention schema, a low-dimensional representation of the high-dimensional complexities comprising things like visual attention upon an apple. And this, Graziano maintains, is what ‘subjective awareness’ ultimately amounts to: “the brain’s internal model of the process of attention” (1).

And this is where the confusion begins, as much for Webb and Graziano as for myself. For one, ‘consciousness’ has vanished from the title of the theory, replaced by the equally overdetermined ‘subjective awareness.’ For another, the bald claims that consciousness is simply a delusion have all but vanished. As recently as last year, Graziano wrote:

How does the brain go beyond processing information to become subjectively aware of information? The answer is: It doesn’t. The brain has arrived at a conclusion that is not correct. When we introspect and seem to find that ghostly thing — awareness, consciousness, the way green looks or pain feels — our cognitive machinery is accessing internal models and those models are providing information that is wrong. The machinery is computing an elaborate story about a magical-seeming property. And there is no way for the brain to determine through introspection that the story is wrong, because introspection always accesses the same incorrect information. “Are We Really Conscious,” The New York Times Sunday Review.

Here there simply is no such thing as subjective awareness: it’s a kind of cognitive illusion foisted on the brain by the low-dimensionality of the attention schema. Now, however, the status of subjective awareness is far less clear. Webb and Graziano provide the same blind brain explanation (down to the metaphors, no less) for the peculiar properties apparently characterizing subjective awareness: since the brain has no use for high-dimensional information, the “model would be more like a cartoon sketch that depicts the most important, and useful aspects of attention, without representing any of the mechanistic details that make attention actually happen” (2). As a result of this opportunistic simplification, it makes sense that a brain:

“would conclude that it possesses a phenomenon with all of the most salient aspects of attention – the ability to take mental possession of an object, focus one’s resources on it, and, ultimately, act on it – but without any of the mechanisms that make this process physically possible. It would conclude that it possesses a magical, non-physical essence, but one which can nevertheless act and exert causal control over behavior, a mysterious conclusion indeed.” 2

This is a passage that would strike any long-time followers of TPB as a canonical expression of Blind Brain Theory, but there are some key distinctions dividing the two pictures, which I’ll turn to in a moment. For the nonce, it’s worth noting that it’s not so much subjective awareness (consciousness) that now stands charged with deception as the kinds of impossible properties attributed to it. Given that subjective awareness is the explicit explanandum, there’s a pretty important ambiguity here between subjective awareness as attention schema and subjective awareness as impossible construct. Even though the latter is clearly a cognitive illusion, the former is real insofar as the attention schema is real.

For its part, Blind Brain Theory is a theory, not of consciousness, but of the appearance of consciousness. It provides a principled way to detect, diagnose and even circumvent the kinds of cognitive illusions the limits of deliberative metacognition inflict upon reflection. It only explains why, given the kind of metacognitive resources our brains actually possess, the problem of consciousness constitutes a ‘crash space,’ a domain where we continually run afoul the heuristic limitations of our tools. So when I reflect upon my sensorium, for instance, even though I am unencumbered by supernatural characterizations of phenomenology—subjective awareness—something very mysterious remains to be explained; it’s just nowhere near so mysterious as someone like Chalmers, for instance, is inclined to think.

Graziano, on the other hand, thinks he possesses a bona fide theory of consciousness. The attention schema, on his account, is awareness. So when he reflects upon his sensorium, he’s convinced he’s reflecting upon his ‘attention schema,’ that this is the root of what consciousness consists in—somehow.

I say ‘somehow,’ because in no way is it clear why the attention schema, out of all the innumerable schematisms the brain uses to overcome the ‘curse of dimensionality,’ should be the one possessing (the propensity to be duped by?) subjective awareness. In other words, AST basically suffers the same problem all neural identity theories suffer: explaining what makes one set of neural mechanisms ‘aware’ while others remain ‘dark.’ Our brains run afoul their cognitive limitations all the time, turning on countless heuristic schemas: why is the attention schema prone to elicit sensoriums and the like?

Note that he has no way of answering, ‘Because that’s how attention is modelled,’ without begging the question. We want to know what makes modelling attention so special as to result in what, mistaken or not, we seem to be enjoying this very moment now. Even though he bills Attention Schema Theory as a ‘mechanistic account of subjective awareness,’ there’s a real sense in which consciousness, or ‘subjective awareness,’ is left entirely unexplained. Why should a neurobiologically instantiated schema of the mechanisms of attention result in this mad hall of mirrors we are sharing (or not) this very moment?

Graziano and Webb have no more clue than anyone. AST provides a limited way to understand the peculiarities of experience, but it really has no way whatsoever of explaining the fact of experience.

He had no such problem with the earlier versions of AST simply because he could write off consciousness as an illusion entirely, as a ‘squirrel in the head.’ Once he had dispensed with the peculiarities of experience, he could slap his pants and go home. But of course, this stranded him with the absurd position of denying the existence of conscious experience altogether.

“The attention schema theory is consistent with these previous proposals, but also goes beyond them. In the attention schema theory, awareness does not arise just because the brain integrates information or settles into a network state, anymore than the perceptual model of color arises just because information in the visual system becomes integrated or settles into a state. Specific information about color must be constructed by the visual system and integrated with other visual information. Just so, in the case of awareness, the construct of awareness must be computed. Then it can be integrated with other information. Then the brain has sufficient information to conclude and report not only, “thing X is red,” or, “thing X is round,” but also, “I am aware of thing X.” 3

If this is the case, then subjective awareness has to be far more than the mere product of neural fiat, a verbal reporting system uttering the terms, “I am aware of X.” And it also has to be far more than simply paying attention to the model of attention. If AST extends beyond global workspace and information integration accounts, then the phenomenon of consciousness exceeds the explanatory scope of AST. Before, subjective awareness was a metacognitive figment: the judgment, “I am aware of thing X,” exhausted the phenomenology of experiencing X. Now subjective awareness is a matter of integrating the ‘construct of awareness’ (the attention schema) with ‘other information’ to produce the brain’s conclusion of phenomenology.

At the very least, the explanatory target of AST needs to be clarified. Just what is the explanandum of the Attention Schema Theory? And more importantly, how does the account amount to anything more than certain correlations between a vague model and the vague phenomena(lity) it purports to explain?

I actually think it’s quite clear that Graziano has conflated what are ultimately two incompatible insights into the nature of consciousness. The one is simply that consciousness and attention are intimately linked, and the other is that metacognition is necessarily heuristic. Given this conflation, he has taken the explanatory power of the latter as warrant for reducing subjective awareness to the attention schema. The explanatory power of the latter, of course, is simply the explanatory power of Blind Brain Theory, the way heuristic neglect allows us to understand a wide range of impossible properties typically attributed to intentional phenomena. Unlike the original formulation of AST, Blind Brain Theory has always been consilient with global workspace and information integration accounts simply because heuristic neglect says nothing about what consciousness consists in, only the kinds of straits the limits of the human brain impose upon the human brain’s capacity to cognize its own functions. It says a great deal about why we find ourselves still, after thousands of years of reflection and debate, completely stumped by our own nature. It depends on the integrative function of consciousness to be able to explain the kinds of ‘identity effects’ it uses to diagnose various metacognitive illusions, but beyond this, BBT remains agnostic on the nature of consciousness (even as it makes hash of the consciousness we like to think we have).

But even though BBT is consilient with global workspace and information integration accounts the same as AST, it is not consilient with AST. Unpacking the reasons for this incompatibility makes the nature of the conflation underwriting AST quite clear.

Graziano takes a great number of things for granted in his account, not the least of which is metacognition. Theory is all about taking things for granted, of course, but only the right things. AST, as it turns out, is not only a theory of subjective awareness, it’s also a theory of metacognition. Subjective awareness, on Graziano’s account, is a metacognitive tool. The primary function of the attention schema is to enable executive control of attentional mechanisms. As they write, “[i]n this perspective, awareness is an internal model of attention useful for the control of attention” (5). Consciousness is a metacognitive device, a heuristic the brain uses to direct and allocate attentional (cognitive) resources.

We know that it’s heuristic because, even though Webb and Graziano nowhere reference the research on fast and frugal heuristics, they cover the characteristics essential to them. The attention schema, we are told, provides only the information the brain requires to manage attention and nothing more. In other words, the attention schema possesses what Gerd Gigerenzer and his fellow researchers at the Adaptive Behaviour and Cognition Research Institute call a particular ‘problem ecology,’ one that determines what information gets neglected and what information gets used (see Ecological Rationality). This heuristic neglect in turn explains why, on Webb and Graziano’s account, subjective awareness seems to possess the peculiar properties it does. When we attend to our attention, the neglect of natural (neurobiological) information cues the intuition that something not natural is going on. Heuristic misapplications, as Wimsatt has long argued, lead to systematic errors.

But of course the feasibility of solving any problem turns on the combination of the information available and the cognitive capacity possessed. Social cognition, for instance, allows us to predict, explain, and manipulate our fellows on the basis of so little information that ‘computational intractability’ remains a cornerstone of mindreading debates. In other words, the absence of neurobiological information in the attention schema only explains the apparently supernatural status of subjective awareness given certain metacognitive capacities. Graziano’s attention schema may be a metacognitive tool, a way to manage cognitive resources, but it is the ‘object’ of metacognition as well.

For me, this is where the whole theory simply falls apart—and obviously so. The problem is that the more cognitive neuroscience learns about metacognition, the more fractionate and specialized it appears to be. Each of these ‘kluges’ represents adaptations to certain high impact, environmental problems. The information subjective awareness provides leverages many different solutions to many different kinds of dilemmas, allowing us to bite our tongues at Thanksgiving dinner, ponder our feelings toward so-and-so, recognize our mistakes, compulsively ruminate upon relationships, and so on, while at the same time systematically confounding our attempts to deduce the nature of our souls. The fact is, the information selected, stabilized, and broadcast via consciousness, enables far, far more than simply the ability to manage attention.

But if subjective awareness provides solutions to a myriad of problems given the haphazard collection of metacognitive capacities we possess, then in what sense does it count as a ‘representation of’ the brain’s attentional processes? Is it the case that a heuristic (evolutionarily opportunistic) model of that machinery holds the solution to all these problems? Prima facie, at least, the prospects of such a hypothesis seem dim. When trying to gauge our feelings about a romantic partner, is it a ‘representation’ of our brain’s attentional processes that we need, or is it metacognitive access to our affects?

“According to the attention schema theory, the brain constructs a simplified model of the complex process of attention. If the theory is correct, then the attention schema, the construct of awareness, is relevant to any type of information to which the brain can pay attention. The relevant domain covers all vision, audition, touch, indeed any sense, as well as internal thoughts, emotions, and ideas. The brain can allocate attention to all of these types of information. Therefore awareness, the internal representation of attention, should apply to the same range of information.” 9

So even though a representation of the brain’s attentional resources is not what we need when we inspect our feelings regarding another, it remains ‘applicable’ to such an inspection. If we accept that awareness of our feelings is required to inspect our feelings, does this mean that awareness somehow arises on the basis of the ‘applicability’ of the attention schema, or does it mean that the attention schema somehow mediates all such metacognitive activities?

Awareness of our feelings is required to inspect our feelings. This means the attention schema underwrites our ability to inspect our feelings, as should come as no surprise, given that the attention schema underwrites all conscious metacognition. But if the attention schema underwrites all conscious metacognition, it also underwrites all conscious metacognitive functions. And if the attention schema underwrites all conscious metacognitive functions, then, certainly, it models far, far more than mere attention.

The dissociation between subjective awareness and the attention schema seems pretty clear. Consciousness is bigger than attention, and heuristic neglect applies to far more than our attempts to understand the ‘attention schema’—granted there is such a thing.

But what about the post facto ‘predictions’ that Webb and Graziano present as evidence for AST?

Given that consciousness is the attention schema and the primary function of the attention schema is the control of attention, we should expect divergences between attention and awareness, and we should expect convergences between awareness and attentional control. Webb and Graziano adduce experimental evidence of both, subsequently arguing that AST is the best explanation, even though the sheer generality of the theory makes it hard to see the explanatory gain. As it turns out, awareness correlates with attentional control because awareness is an attentional control mechanism, and awareness uncouples from attention because awareness, as a representation of attention, is something different than attention. If you ask me, this kind of ‘empirical evidence’ only serves to underscore the problems with the account more generally.

Ultimately, I just really don’t see how AST amounts to a workable theory of consciousness. It could be applied, perhaps, as a workable theory for the appearance of consciousness, but then only as a local application of the far more comprehensive picture of heuristic neglect Blind Brain Theory provides. These limits become especially clear when one considers the social dimensions of AST, where Graziano sees it discharging some of the functions Dennett attributes to the ‘intentional stance.’ But since AST possesses no account of intentionality whatsoever (indeed, Graziano doesn’t seem to be aware of the problems posed by aboutness or content), it completely neglects the intentional dimensions of social cognition. Since social cognition is intentional cognition, it’s hard to understand how AST does much more than substitute a conceptually naïve notion of ‘attention’ for intentionality more broadly construed.

42 Comments to “Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem”

“Ultimately I think functional explanations need to be validated by mechanical explanations before we can claim true understanding. I don’t think we can really claim to know how the brain works until we can make one.”

If at present “neuroscience is a collection of facts, still awaiting an overarching theory” it seems to me that we don’t have nearly enough facts to justify the creation of any theory of consciousness. I guess that some sketch of a theory is useful in deciding what kinds of data are likely to be most useful, but so long as we can’t agree on what consciousness is we have no real chance of determining how it works. I suggest we declare a moratorium on presenting theories of consciousness until we can at least map every interneuronal connection in a living human brain.

Blind Brain Theory is truly a thing of beauty, and I have found it very rewarding to follow this blog and the development and elaboration of your theory. I could use a little clarification on this line:
“For its part, Blind Brain Theory is a theory, not of consciousness, but of the appearance of consciousness.”
This is a little confusing to me, as I’m not entirely certain how to distinguish between ‘consciousness’ and its ‘appearance’. In order for you to limit BBT’s explananda to the appearance of consciousness you would need a working criterion for its appearance, meaning to some extent you would need a working criterion for its reality – something you seem keen to avoid and for good reasons – reasons articulated by BBT. How can we help but comment on the reality of consciousness even when we limit ourselves to commenting on its ‘appearance’?

It is entirely possible that you furnished this criterion and I simply failed to make the connection. Damn my metacognitive shortcomings!

As a side note I also enjoyed your treatment of Alva Noë. A professor I had a while back was particularly enthusiastic about his enactivist account of sensory perception (in his Experience without the Head). We had it out in class and after class I found him apologizing for assigning the reading, which actually made me like him a lot.

Thanks, James. You really don’t need to presume all that much once you change the explanatory onus from the object of reports to the reports themselves. Think of Sasquatch, for instance. My guess is you could say a whole bunch about ‘Sasquatch science’ without knowing the first thing about Sasquatches, or whether there even is such a thing. Consciousness is like Sasquatch: hidden in the wilderness of the world or our imagination.

The problem for me is that switching the onus from the subject of a report to the report itself still requires judging the accuracy of said reports based on their purported subject matter. To assume that all subjects of all reports are ‘sasquatches’ so to speak, without having any idea whatsoever what a sasquatch is, strikes me as absurd (after all the ‘sasquatch’ could then also be the ‘report itself’). I’ll give it to you that it is up to the reporter to clarify when things become this ambiguous, and this is ‘apparently’ where the problem lies: when speaking of cognition the ‘apparently cognizant’ have problems being specific and/or addressing the situation from anything but the second person point of view, at least in part because we/they lack a sufficiently robust and precise language to furnish anything more than a vague third person point of view (to continue with the metaphor). Cognition just seems to have an equally vague and specific je ne sais quoi from inside the frame.

So I guess my question is whether or not you think the sciences are capable of dispelling this delusion of self, certainty, subjectivity, qualia, attention, intentionality, etc. Is it a lack of metacognitive concepts or metacognitive capacity?

Part of me feels like there is so much going on conceptually, so many different split hairs and postulations, that the string of consciousness is just so hard to even begin to tug at. The attention and awareness relationship is one that is often presented so messily. I am thinking especially of Graziano, Jesse Prinz in his The Conscious Brain (his AIR theory, Attended Intermediate-level Representation), and even the latest Spacetimemind podcast of Pete Mandik and Richard Brown, in which they immediately bog down in defining and delineating attention as opposed to awareness. To be fair, Graziano and Prinz obviously spend a lot of effort in trying to grasp and present the attention versus awareness distinction. There is a lot there that most of us gain by their efforts, especially for newcomers to these issues.

However, I like your distinction that Graziano’s theory is trying to explain both metacognition and subjective awareness. His conflation of those concepts leads to unwieldiness, at the least.

I wonder if at the end of the day we do not pave over a lot of this. That we will simplify a lot of it. Much of the discourse that brain/mind are presented in will dissipate. But we may still have storytellers that can elucidate why exactly those 20th century people were categorizing things the way they were, and why this led to the endless extolling of and arguing over rather empty categorical structures and rather empty concepts. Certainly a lot of that messiness comes because of our history of easing into this problem, as well as the general makeup of the problem.

One other note, Nicholas Humphrey ( https://www.youtube.com/watch?v=NHXCi6yZ-eA ) also tries to extend the Dennett idea that what we perceive as consciousness is a construct. That consciousness is an illusory, magical trick played on the brain by the brain. That the brain creates the illusion of consciousness to control our attention and to awe us (something like that). Anyways, just wanted to say that he presents a similar paradigm.

Prinz’s theory is far and away the more thorough, more careful one (though I think AIR falls flat as well). I actually think the most accurate observation in the whole of consciousness research belongs to Humphrey, the fact that theories of consciousness are like toothbrushes: no one likes using anyone else’s! His theory is what Graziano’s approach reminds me of the most: high altitude blind brainish observations, no real concern for the problems posed by intentionality. Humphrey still has a far better grasp on the conceptual snarls, probably because he’s also far better read in the literature than Graziano.

Have you had a chance to check out my Alien Philosophy posts (I & II) Lyndon? Taking this perspective seems to explain quite a bit…

“Here there simply is no such thing as subjective awareness: it’s a kind of cognitive illusion foisted on the brain by the low-dimensionality of the attention schema.”

“Given that subjective awareness is the explicit explanandum, there’s a pretty important ambiguity here between subjective awareness as attention schema and subjective awareness as impossible construct. Even though the latter is clearly a cognitive illusion, the former is real insofar as the attention schema is real.”

I agree, I think Graziano vacillates on that point, or has trouble explaining what he really believes.

I struggle here, because, again, there are several things that we often mean by subjective awareness. Loosely speaking, if you are staring at a box of red apples and then notice or attend to a green apple, your brain is perceiving and modeling that there is an odd green apple. You have noticed and are attending to (at least in some sense) that green apple. Under AST, your awareness may not be as robust as the cartoon model of attention that you have created, but there is a relationship between you and the apple, and there are many perceptual/sensory processes that your brain is attending to in that apple (or between you and that apple).

Graziano and others may be right, as I believe, that subjective awareness is not as robust as it seems, but at least some of the things that our internal “attentional modeling system” is presenting are accurate, such as that your brain is perceiving and modeling many things about that green apple.

The bald statement “I am subjectively aware of that green apple” or “I am conscious of that green apple” becomes a ripe muddle to try to parse.

When we turn back to the more basic question as to what constitutes consciousness or subjective awareness, given the plethora of meanings and broad properties that are often part of consciousness, it becomes difficult (even for Graziano himself) to simply say that it is an illusion. But so much of what we “see” is cartoonish that it is probably easier to just accept that we are going to massively overhaul our theories of consciousness, and certainly overhaul the first-person assessments that we make.

To sum that up: Following the attention schema theory, consciousness may be a cartoon model of attention. Though cartoonish, there seem to be at least some facets that are accurate within that attention schema. At least some of the things encompassed by our concept of consciousness will be present within that cartoon model of attention. That is not enough to save all the stuff that gets thrown into our concept, both personal and intellectual, of consciousness.

“Graziano and others may be right, as I believe, that subjective awareness is not as robust as it seems, but at least some of the things that our internal “attentional modeling system” is presenting are accurate, such as that your brain is perceiving and modeling many things about that green apple.”

But this insight is a theoretical achievement which has nothing to do with the attentional modelling system! That the brain is modelling anything isn’t given in whatever the result of the attentional modelling system is. We have no awareness, attention, subjective affect, or whatever you want to call it, that is sensitive to the fact that there is any modelling whatsoever going on.

Yea, I said that bad. Our awareness does not present itself as a modeling process, but instead just presents.

But there is something prima facie to awareness and attention in that I am aware of an apple, and not of a car. Or I am aware of green and not aware of blue. Through experience, we also come to accept that “I am aware of a green sensation, and this sensation is not like a blue one.” In that sense, it seems simple to say that my brain is representing the apple and not the car. That I am now attending to the apple, and not to the car.

Which follows my point: our consciousness is tracking at least one gross feature of the attentional processing in the brain, namely that right now more brain processes are being carried out on sensations and representations of that apple than of that car. This becomes one of the difficulties of claiming that “consciousness” is an illusion altogether, because at least some of the properties that people have reflectively given to consciousness are going to parallel those underlying brain processes. Consciousness is not completely empty, even if it is cartoon-like, or even if the attention schema model does not present itself as a model or as brain processes.

Side rant:

As we naively reflect on “What is the brain doing when I am conscious of something?”, it seems eminently intuitive that the brain is representing/modeling the object in front of me, as opposed to something else. Maybe that already accepts certain theoretical posits, such as that there is a modeling process and that consciousness involves brain processes. With that said, it is difficult to find reflective posits that have not been conditioned by a given experience and by a certain society. Once we became linguistic creatures, we became Kantian creatures.

It does not make sense to ask what Koko the gorilla thinks about the presentation of consciousness. To admit that we have the ability to reflect on consciousness already commits us to some theoretical posits, or at least we quickly began to create such posits once we began such reflective questioning.

Care needs to be taken with our metaphors, though, lest we run afoul of the bad habits that have strangled the tradition. Consciousness as metacognized is low dimensional, but consciousness as neurobiological process is going to be astronomically complicated. Metacognitive access/capacity severely limits our ability to ‘intuit consciousness.’ As with Metzinger’s PSM, the temptation for theorists is to presume that some kind of intrinsic integrity, somewhere, somehow, constrains the deliverances of metacognition, allowing us to pose cohesive, information-bearing ‘models’ that are switched on or switched off. But this is almost certainly an artifact of neglect, a theoretical confabulation: theorists are the real model-mongers here. Once you realize that, 1) computationally speaking, messiness is more expensive and identities/continuities are the default, and 2) metacognition is forced to make do on the radical cheap, which is to say, heuristically, then even ‘cartoon’ needs to be seen as a cartoon, as something useful in some ways, deleterious in others. What makes the attention schema attractive, I think, is that it promises a simple reduction of what we think we experience to a single cartoon possessing a single function. It seems to pack all the confusion into an apparently tidy box, and so long as no questions are asked, only-game-in-town effects lend the appearance of advancement. Given what we know about metacognition, I think it’s safe to say it’s another dead end, just another cogito.

Consciousness pretty clearly mediates a vast number of functions, but ‘mediating’ is far more open (and therefore far more honest to the facts) than ‘modelling,’ the notion that it is somehow ‘about’ something else.

I think what I agree with here is that it requires a dynamics, i.e. some kind of relationship or structure of transitions between attending or cognizing instances, to say there was an awareness, and it requires a number of ‘faculties’ like short-term memory, vision, and linguistic capability. But I don’t think that these dynamics are themselves, nor produce, ‘a model,’ even if they may perform some kind of informational or communicative function. And I think this is part of the message of how Scott tries to argue that heuristics are interacting and doing labor, but they aren’t representing or modeling anything. So with the apple, we don’t just die looking at the apple: we make a saccade around the periphery and maybe return to attending the apple, then we have short-term memory, and especially verbal cognition, and we might produce an observation report to a friend, that ‘despite looking at this apple, I am aware that I am looking at this apple but not that vase.’ But it’s important to highlight that this is spread out in time. If you died before making the saccade, you couldn’t say you were aware of anything. Then, because our brain can’t track the microstructure of how visual perception works, it just retrodicts the memory of saccading over the vase back into its earlier perception of the apple. Awareness, on a picture like this, is a narrative fusion or condensation that results from the bleeding at the edges of the temporal margins of the dynamic transitions between attending instances.

So is it a model of an apple and environment, a model of attention, a model of attending to an apple and environment? Why should any of these models possess subjectivity, as opposed to any other of the countless ‘models’ in the brain?

I just don’t see how anything is explained on AST. I think the attraction of attention is that it seems to give you intentionality, which is to say, seems to give you the first-personal, ‘directed’ structure of conscious experience in a manner that can be summed in a phrase which can itself be cashed out in terms of more mechanistic accounts of attention. I just don’t see how it delivers anything other than another set of fuzzy posits.

Is it actually a lack of questions in regard to AST that you have an issue with? Perhaps you feel BBT keeps raising questions about all the things we report, due to what we might not know, while AST kind of explains it but then leaves it there – it doesn’t attach questions to further reportings?

So then metacognition is subject, as heuristic expedience, to miscue, to ‘crash spaces.’ Graziano’s theory (arguably phenomenology in general!) as attribute substitution: trading ‘more mechanistic accounts’ for ‘another set of fuzzy posits.’ Where representing a high-dimensional reality as low-dimensional ‘cartoons’ once yielded favorable results, it now fails spectacularly.

Yeah, for whatever it’s worth, like Lyndon’s mention of split hairs, I’m not seeing the big distinction. I mean, we’re literally talking about theories whose details we can’t ever really detect within ourselves with our native senses. Therefore there are no criteria to fulfill here in order to be convincing somehow – it’s just a matter of what personal commitments one has that a theory seems to tick the boxes with. Fan of Occam’s razor? Then the idea of incapacity (which is a simpler state one could be in) probably ticks boxes with your commitments – so BBT ticks boxes. Need to think something is happening – that the brain is doing something? Okay, then some sort of theory on which the brain constructs an idea of awareness (drawing from this and that – a sort of ransom-note consciousness), like attention schema, appeals. If your commitments tend to think social cognition, aboutness, content, and intentionality are different mixes of attention types (more ransom-note combinations), then it’ll tick boxes for you. If not, then not.

The most embedded form of knowledge which we share with other creatures, or prime consciousness, is actually sensorimotor. I also pointed out a while back that the nerves in the spinal cord for motor and sense to the muscle groups are actually paired in the spinal column to form the hierarchy of body movement. In terms of movement there is a perfect physical schema (schematic) of action and feel, which meet in the adjacent motor and somaesthetic cortex areas (see my Sept post on Apple).

I think Graziano is onto something with AST because awareness and attention may be exaptations of these structures, including the spinal nerve structure for higher sensory functions.

Consider this: just about every muscle in the body is paired, so there is an ‘even’ muscle for every ‘odd’ muscle. Numerical Platonism, anyone?

One way in which we attend to something is by looking at it. We move our bodies in such a way that the central, higher resolution parts of our retinas are pointed at the thing. As a consequence of this, the things outside our central vision are (dare I say) neglected. Humans also have some ability to point our ears (but much less than dogs). Smells also have some directionality. We can follow a gradient of increasing concentration to find the source of a smell. Of course in order to taste something we have to put it in our mouths. Snatching a burnt finger away from a hot skillet is a sort of attending. Attending to things in the world is at least in part a sensorimotor phenomenon. To what extent, if any, can we follow the neurological activity associated with this sensorimotor attending from the sensorimotor apparatus into “higher” centers of the brain? To what extent, if any, are the neurological activities in “higher” centers observed while subjects are attending to objects in the external environment correlated with neurological activities observed when subjects are attending to memories and other objects in the “internal environment”? I think there’s a lot to be said for starting with the commonsense understanding of mental phenomena and moving from the commonsense to the neurological rather than from the commonsense to the theoretical.

The neurological activity associated with the “higher” centers is literally the language of movement of the body. Which is why I said all thinking is an exaptation of the sensorimotor machinery: the language which we turn outward, as opposed to the inner language of movement. Fundamentally the same neocortical and thalamocortical structures. Causality of movement is implicit in the two major areas linked together in the brain that I previously pointed out. The causal linkage of muscles extends into all types of sensory causality of eventfulness. Our own sensorimotor attention is turned outward or mapped via AST. “External” events are linked to internal sensorimotor events. What they can’t fathom is how the brain’s architecture can surround itself and do all of this. BBT captures the right metaphor because the brain essentially sees itself. This leads to the paradox of causation, which they think is an unsolvable problem. Without the proper background it is very difficult for them to build a commonsense architecture of the brain.

Awareness, or the mapping of the internal and external, is actually the emergence of time sense (for movement), which, like color, is a brain function.

Well, they argue consciousness and mind have no causal role, but C&M (language) are actually products of these ‘causal’ structures in the brain, exapted from the sensorimotor system. The non-causal arguments result from the fact that mind and philosophy are actually language, the usage of these structures.

Watch a Steve Jobs movie, the crazy genius was infatuated by his human mind, not the gadgets he invented necessarily.

But there is a counter-shaping here as well. “Language”, i.e. vocalizations with syntactic structure, DOES SOMETHING to a developing nervous system which alters its developmental course and normally yields a human who can speak and understand speech, while no exposure to vocalizations with syntactic structure will result in feral people who don’t speak. So, I don’t agree that language has no causal impacts!

I am eliminativist on many things surrounding consciousness or subjective experience. So, I do not think it is helpful to say that anything has subjective experience and that such is in need of explanation. The brain processes or models do not become subjective experiences, because nothing becomes or is a subjective experience as we often think of subjective experience. It is a unique set of representations and models. Only you are “sitting at this computer, in this city, with these thoughts, with these emotions.” If you turn your thoughts or speech onto who you are, you will quickly fill out unique representational structures. (That is subjective in some banal sense.)

So what is denied is what the experiential part consists of. Here, I think we have to shrug and say that it is all merely representational. Body models and attentional models fit into our explanations of how the brain structures representations. I am using “representation” here in a similar way to how a Google car is representing another car ahead, or any other machine or computer representation. What we have been calling subjective experience is merely one representation repeated after another. Awareness is merely creatures that have accumulated such enormous amounts of self and world representations (aided by speech) that they adequately model and represent their selves at the center of such a world. But each instance of their representational flow is not experiential but merely another instantaneous representation.

Anyways, something like that. So I can’t help but read AST as eliminativist, even if Graziano refuses that reading. But it provides at least one account of how modeling our selves and social actions can give us more complex representations of our selves in the world. Also, I think he is probably right that attention is a key to the complex representations that we have interpreted as being conscious representations.

All representation talk makes me itchy: content is itself a heuristic means of thinking differential relationships between organisms and environments. What else could it be? The only thing the brain ‘accumulates’ is behaviourally effective differential complexity. Talking representations inevitably invites confusing heuristic entities for what’s actually going on. Think of the PSM, again. Why talk of something so extravagant as ‘coherent representational models’ when really, only a handful of recipe responses are required? What is a self? The essential kernel of who we are? A convenient simulation that we run while conscious? Or an effective utterance in various socio-communicative ecologies? Given that nature is such a cheapskate…

Neural implementations of priors don’t make models of, they leverage behaviours out of sensitivities to, a vast, interactive symphony that we find far easier to deal with in heuristic clumps, bits that inherently possess what is actually a thoroughly systematic efficacy. Neurostructural isomorphisms inevitably arise (simply because we’re talking about systematic interrelationships), but to think that those rare isomorphisms we do find possess ‘content’ is to neglect their mechanical functions, which are all that matter.

Basically, I’m just giving you a longwinded, ‘Yeah, but…’!😉 The less you think in terms of representations, the less inclined you become to simply install versions of what seems to be experience in your brain.

I personally wonder whether attention won’t find itself eliminated, actually. But even if one of the functions of awareness is to ‘control attention,’ it’ll still just be one among very many functions. And it still leaves awareness utterly unexplained. I just don’t see what can be salvaged from this view.

I’ve been looking at some of the old, seemingly paradoxical statements in Nietzsche through this lens of late. I remember specifically the section on free will: he goes on to basically reject the philosopher’s conception of it, but then he goes on to say that neither can you posit, and act according to, an unfree will.

Yea, Scott, every new book I read in the neurosciences comes down to pushing a bunch of tinker toys around, hoping a new model might explain what is impossible to explain. It’s as if, with all the data we now have on the various aspects of the brain and the processes going on under the hood, and with all the new tools being built that work between brains and computers, the problem of consciousness may just become a metaphysical popcorn fest not worth pursuing anymore. What I’ve always liked about BBT is just that: it gives us a metaphor that explicitly affirms that we don’t know what we don’t know. Even the tautologies are models of unknown unknowns. The tail chasing the illusion of the tail.

I keep wondering if the illusion of consciousness has more to do with the physical placement of our eyes, ears, nose, mouth, etc. All of the portals of the senses are bicameral and locked so closely in our skull and near our brain that the illusion of consciousness is nothing but the form awareness takes in an animal with eyes, ears, nose, mouth all so closely aligned. That maybe we’ve made a big deal out of nothing all along.

Imagine if our eyes were on our knees, our ears on our elbows, our nose in our belly, etc. would consciousness be spread out across the whole body? Aren’t there other strange anomalies like this in the natural world? Other animals that have developed anomalous forms of consciousness? To me what is more fascinating is why did consciousness become locked in one place rather than floating throughout the body, why this sense of stability? Why not a fluid consciousness?

So kind of like the whole body is ‘head’? That’s an interesting one – kind of like Modok?

I wonder if that were the case – would people say you breathed in and out your soul, for example?

Perhaps as science has advanced, consciousness/spirituality has retreated across human anatomy?

On consciousness: can I have a crack at consciousness, only to ping how it fails for you, S.C.?

Okay, so take vision, which certainly grabs our attention a lot. The whole visual field, as whole as we take it (and as a whole, we treat ourselves as conscious of the visual field), could be broken down into a series of recursive steps. Like when we look at things, we don’t scan across them – our eyes move from feature to feature (like moving from one hotlink to another). How could our eye move from one feature to another if we haven’t already seen it? Preprocessing in the visual area.

If you take that as the case, it’s easier to see ‘the whole’ as potentially simply a bunch of parallel processors taking in electromagnetic stimuli and reducing the complex data to, say, only 90% as complex as before. The reduced-complexity outputs of these processors then have their own series of parallel processors (fewer of them) working on that and reducing the complexity yet again. There are several onion-like layers of this occurring – with the very last, in Blind Brain Theory style, not knowing it’s the last, for not having any processor to tell it that (much as we aren’t told we have an edge to our vision – we just run out, and we only really know that by thinking about it, rather than by any part of our natural-born anatomy/brain parts telling us).

The worst thing is that emphasis can occur – you focus on that bull running across a field at you more than that cow pattie over to the left, though really both are equal data.

The layers of processors, when they turn their capacity for emphasis from the alarming bull to the alarming ‘What the hell am I? What am I?’ question, start treating emphasis in certain layers of the processing (like the bull getting more emphasis from certain processors than others did) as a feedback in itself. The eye widens at the eye widening. Consciousness – what that word means to a bunch of peeps – is making a big deal of a question mark. Kind of like when you wonder if your kids have given you nits, and your head starts itching. Heck, I’m making my own head itch just thinking that…eww!
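Just to make the onion-layer picture above concrete, here is a minimal toy sketch of my own (the per-layer 90% figure comes from the comment; the function names, layer count, and list-of-numbers stand-in for “stimuli” are purely illustrative assumptions, not anything from AST or BBT):

```python
# Toy sketch of layered complexity reduction. Each stage keeps roughly
# 90% of its input, and crucially, no stage receives any signal marking
# the final stage as final -- the pipeline simply "runs out," like the
# unmarked edge of the visual field.

def compress(signal, keep=0.9):
    """Keep roughly the first `keep` fraction of the signal."""
    n = max(1, int(len(signal) * keep))
    return signal[:n]

def layered_pipeline(stimulus, n_layers=5):
    signal = stimulus
    for _ in range(n_layers):
        signal = compress(signal)
        # Nothing inside the loop tells a layer whether another layer
        # follows it; "being the last" is invisible from within.
    return signal

raw = list(range(100))            # 100 "units" of raw stimulus
out = layered_pipeline(raw)
print(len(raw), "->", len(out))   # complexity shrinks at every layer
```

The point of the sketch is only the structural one: each layer sees less than the one before, and the absence of a further layer is nowhere represented inside the system itself.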

You say: “Consciousness – what that word means to a bunch of peeps – is making a big deal of a question mark.” Yep. It’s an abyss from which there is no redemption. We keep stuffing it with everything we can find, trying to discover what will not emerge from it: the big-Other. The answer to all our questions, when in truth there is no answer coming, nor can there be one. We’re not even here to ask the questions. We’re not even here. We’re not… alone, indifferent, without meaning this accident cries out in the night of the world: bullpucky…

The illusion of the location of consciousness is because of this, but I think the cementing of the illusion as such comes from BBT’s ‘exemplar case’ of the blindness to the margins of temporal processing, producing the clearing of synchrony within which all change is ‘observed’.

I thought I’d wonder about this publicly (kinda off topic – I’m bad, I know! I tried to find another post here that it’d link with, but couldn’t find it): in terms of crash spaces and media, is storytelling, by a kind of human psychological default, really supposed to be a ‘whim’ activity? In other words, a kind of peacock display – telling stories about where the good fruit is (or even where the bad fruit is) makes you look good (and, like bees telling each other where the flowers are, it helps the hive/tribe).

So what happens if you try to make storytelling the very way you get your ‘good fruit’? Is the industry of getting people excited about books really just crashing people en masse, fulfilling the old expectation that the author just, on a whim, wants to tell about the good fruit? And hell, maybe authors tend to think that too – how much did Rowling think her work would become a fiscal juggernaut? Or did she just write it on a whim (while mourning her mother’s death)?

Sounds kinda akratic?

I’m just running into this a bit as I write, motivating myself by thinking of the income tied to the labour (yes, labour…rather than an act of love…rather than an act of whim…). I get a kind of ‘ehh’ feeling – like a numb pain, maybe, or the abstract feel of one, anyway. Looking towards getting good fruit by writing…i.e. the act which is supposed to be writing about where the good fruit is!! And frankly I’m not the finger-wagger type…I look at solutions/what people can do, rather than just telling them what they shouldn’t do (while ignoring what one does do to get the calories to wag that finger). I have managed to weave some, IMO, good fruit into my fantasy fiction…it’s just at a far smaller scale than the payoff from writing that I’m imagining. And the discrepancy is…unsettling.