
Phenomenal consciousness has a familiar guise but is frustratingly mysterious. Difficult to define (Goldman, 1993), it involves the sense of there being “something-it-is-like” for an entity to exist. Many theorists have studied phenomenal consciousness and concluded physicalism is false (Chalmers, 1995, 2003; Jackson, 1982; Kripke, 1972; Nagel, 1974). Other theorists defend physicalism on metaphysical grounds but argue there is an unbridgeable “explanatory gap” for phenomenal consciousness (Howell, 2009; Levine, 1983, 2001). “Mysterians” have argued the explanatory gap is intractable because of how the human mind works (McGinn, 1989, 1999). Whatever it is, phenomenal consciousness seems to lurk amidst biological processes but never plays a clearly identifiable causal role that couldn’t be performed nonconsciously (Flanagan & Polger, 1995). After all, some philosophers argue for the possibility of a “zombie” (Chalmers, 1996) physically identical to humans but entirely devoid of phenomenal consciousness.

Debates in the sprawling consciousness literature often come down to differences in intuition concerning the basic question of what consciousness actually is. One question we might have about its nature concerns its pervasiveness. First, is consciousness pervasive throughout our own waking life? Second, is it pervasive throughout the animal kingdom? We might be tempted to answer the first question by introspecting on our experience and hoping that will help us with the second question. However, introspecting on our experience generates a well-known puzzle called the “refrigerator light problem”.

2.0 The Refrigerator Light Problem

2.1 Thick vs thin

The refrigerator light problem is motivated by the question, “Consciousness seems pervasive in our waking life, but just how pervasive is it?” Analogously, we can ask whether the refrigerator light is always on. Naively, it seems like it’s on even when the door is closed, but is it really? The question is easily answered because we can investigate the design and function of refrigerators and conclude that the light is designed to turn off when the door is closed. We could even cut a hole in the door to see for ourselves. However, the functional approach won’t work with phenomenal consciousness because we currently lack a theory of how phenomenal consciousness works or any consensus on what its possible function might be, or whether it could even serve a function.

The refrigerator light problem is the problem of deciding between two mutually exclusive views of consciousness (Schwitzgebel, 2007):

The Thick View: Consciousness seems pervasive because it is pervasive, but we often cannot access or report this consciousness.

The Thin View: Consciousness seems pervasive, but this is just an illusion.

The thick view is straightforward to understand, but the thin view is prima facie counterintuitive. How could we be wrong about how our own consciousness seems to us? Many philosophers argue that a reality/appearance distinction for consciousness itself is nonsensical because consciousness just is how things seem. In other words, if consciousness seems pervasive, then it is pervasive.

On the thin view, however, the fact that it seems like consciousness is pervasive is a result of consciousness generating a false sense of pervasiveness. The thin theorist thinks that anytime we try to become aware of what-it-is-like to enjoy nonintrospective experience, we activate our introspection by inquiring and corrupt the data. The thin theorist is skeptical, for methodological reasons, about the idea of phenomenal consciousness existing without our ability to access or attend to it. If phenomenal consciousness can exist without any ability to report it, how can psychologists study it, given that subjects must issue a report that they are conscious? Anytime a subject reports being conscious, we cannot rule out that the reporting is doing all the work. The thin theorist challenges us to become aware of these nonintrospective experiences such that we can report on their existence and meaningfully theorize about them.

Philosophers might appeal to special phenomenological properties to falsify the thin view. This won’t work because, in principle, one could develop a thin view to accommodate any of the special phenomenological properties ascribed to phenomenal consciousness such as the pervasive “raw feeling” of redness when introspecting on what-it-is-like to look at a strawberry or the “painfulness” of pain. Thin theory can simply explain away the experience of pervasiveness as an illusion generated by a mechanism that itself isn’t pervasive. Julian Jaynes is famous for defending a strong thin view:

Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of…It is like asking a flashlight in a dark room to search around for something that doesn’t have any light shining on it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not. (1976, p. 23)

Thin vs thick views represent the two most common interpretations of the refrigerator light problem, and both seem to account for the data equally well. The problem is that from the perspective of introspection, both theories are indistinguishable. The mere possibility of the thin view being true motivates the methodological dilemma of the refrigerator light problem. How do we rule out thin explanations of thick phenomenology?

2.2 The Difference Introspection Makes

The intractability of the refrigerator light problem depends on the inevitable influence introspection has on nonintrospective experience. Consider the following case. Jones loves strawberries. He eats one a day at 3:00 pm. All day, Jones looks forward to 3:00 pm because it’s the one time of the day when he can savor the moment and take a break from the hustle-and-bustle of work. When 3:00 pm arrives, he first gazes longingly at the strawberry, his eyes soaking up its patterns of texture and color while his reflective mind contemplates how it will taste. Now Jones reaches out for the strawberry, puts it up to his mouth, and bites into it slowly, savoring and paying attention to the sweetness and delicate fibrosity that is distinctive of strawberries. What’s crucial is that Jones is not just enjoying the strawberry, but introspecting on the fact that he is enjoying the strawberry. That is, he is aware of the strawberry but also meta-aware of his first-order awareness.

Suppose we ask Jones what it’s like for him to enjoy the strawberry when he is not introspecting. The refrigerator light problem will completely stump him. Moreover, suppose we want to ascribe consciousness to Jones (or Jones wants to ascribe it to himself). Should we ascribe it before he starts introspecting or after? Naturally, the answer depends on whether we accept a thin or thick view. According to a thin view, whatever is present in Jones’ experience prior to introspection does not warrant the label “consciousness”. The thin theorist might call this pervasive property “nonconscious qualia” (Rosenthal, 1997), but they reserve the term “consciousness” to describe Jones’ metarepresentational awareness of his perceiving. The thin theorist would agree with William Calvin when he says, in defining “consciousness”, “The term should capture something of our advanced abilities rather than covering the commonplace” (1989, p. 78).

What about nonhuman animals? Whereas a thin theorist would say there is a difference in kind between human and rat consciousness, the thick theorist is likely to say that both the rat and Jones share the most important kind of pervasive consciousness. Is this merely a terminological squabble? Kriegel (2009) has argued that the debate is substantial because theorists have different intuitions about the source of mystery for consciousness. The thick theorist thinks the mystery originates with first-order pervasiveness; the thin theorist thinks it originates with second-order awareness. Unfortunately, a squabble over intuitions is just as stale as a terminological dispute.

3.0 The Generality of the Refrigerator Light Problem

3.1 Introducing the Stipulation Strategy

If you are a scientist wanting to tackle the Hard problem of phenomenal consciousness, how would you respond to the refrigerator light problem? If the debate between thin and thick theories is either terminological or based on conflicting intuitions, what do you do? The only strategy I can think of for circumventing the terminological arbitrariness is to embrace it using what I call the stipulation strategy. It works like this. You first agree that we cannot resolve the thin vs thick debate using introspection alone. Unfazed, you simply stipulate some criterion for pointing phenomenal consciousness out such that it can be detected with empirical methods.

Possible criteria are diverse and differ from scientist to scientist. Some theorists stipulate that you will find phenomenal consciousness anytime you can find first-order (FO) perceptual representations of the right kind (Baars, 1997; Block, 1995; Byrne, 1997; Dretske, 1993, 2006; Tye, 1997). This would allow us to find many instances of phenomenal consciousness throughout the biological world, especially in creatures with nervous systems. However, we might have a more restricted criterion that says you will find phenomenal consciousness anytime you have higher-order (HO) thoughts/perceptions (Gennaro, 2004; Lycan, 1997; Rosenthal, 2005), restricting the instantiations of phenomenal consciousness to mammals or maybe even primates depending on your understanding of higher-order cognition. Or, more controversially, you might have a panpsychist stipulation criterion that makes it possible to point out phenomenal consciousness in the inorganic world.

Once we understand how the stipulation strategy works, the significance of any possible reductive explanation becomes trivialized qua explanation of phenomenal consciousness. To apply this result to contemporary views, I will start with FO theory, apply the same argument to HO theory, and then discuss the more counterintuitive (but equally plausible) theory of panpsychism.

3.2 The First-order Gambit

FO theorists deny the transitivity principle and claim one does not need to be meta-aware in order for there to be something-it-is-like to exist. The idea is that we can be in genuine conscious states but completely unaware of being in them. That is, FO theorists think there can be something-it-is-like for S to exist without S being aware of what-it-is-like for S to exist, a possibility HO theorists think absurd if not downright incoherent because the phrase “for S” suggests meta-awareness.

FO approaches are characterized by their use of perceptual awareness as the stipulation criterion for consciousness. A representative example is Dretske, who says “Seeing, hearing, and smelling x are ways of being conscious of x. Seeing a tree, smelling a rose, and feeling a wrinkle is to be (perceptually) aware (conscious) of the tree, the rose, and the wrinkle” (1993, p. 265). Dretske argues that once you understand what consciousness is (perceptual awareness), you will realize that one can be pervasively conscious without being meta-aware that you are conscious.

However, there is a serious problem with trying to reconcile the implications of theoretical stipulation criteria with common intuitions about which creatures are conscious. The problem with using perceptual awareness as our criterion is that it casts its net widely, perhaps too widely if you think phenomenality is only realized in nervous systems. Since many FO theorists think that if we are going to have a scientific explanation of phenomenal consciousness at all it must be a neural explanation (Block, 2007; Koch, 2004), they will want to avoid ascribing consciousness to nonneural organisms. However, if we stipulate that a bat has phenomenal consciousness in virtue of its capacity for perceptual awareness, I see no principled way of looking at the phylogenetic timeline and marking the evolution of neural systems as the origin of perceptual awareness.

To see why, consider chemotaxis in unicellular bacteria (Kirby, 2009; Van Haastert & Devreotes, 2004). Recently chemotaxis has been modeled using informatic or computational theory rather than classical mechanistic biology (Bourret & Stock, 2002; Bray, 1995; Danchin, 2009; Shapiro, 2007). A simple demonstration of chemotaxis would occur if you stuck a bacterium in a petri dish that had a small concentration of sugar on one side. The bacterium would be able to intelligently discriminate the sugar side from the non-sugar side and regulate its swimming behavior to move up the gradient. Naturally we assume the bacterium is able to perceive the presence of sugar and respond appropriately. On this simplistic notion of perceiving, perceiving a stimulus is, roughly speaking, a matter of valenced behavioral discrimination of that stimulus. By valenced, I mean that the stimuli are valued as either attractive or aversive with respect to the goals of the organism (in this case, survival and homeostasis). If the bacterium simply moved around randomly when placed in a sugar gradient such that the sugar had no particular attractive or aversive force, we might conclude that the bacterium is not capable of perceiving sugar, or that sugar is not ecologically relevant to the goals of the organism. But if the bacterium always moved up the sugar gradient, it is natural to say that the bacterium is capable of perceiving the presence of sugar. Likewise, if there were a toxin placed in the petri dish, we would expect this to be valenced as aversive and the bacterium would react appropriately by avoiding it, with appropriateness understood in terms of the goal of survival.

Described in this minimal way, perceptual awareness in its most basic form does not seem so special that only creatures with nerve cells are capable of it. Someone might object that this is not a case of genuine perceptual awareness because there is nothing-it-is-like for the bacterium to sense the sugar or that its goals are not genuine goals. But how do we actually know this? How could we know this? For all we know, there is something-it-is-like for the bacterium to perceive the sugar. If we use perceptual awareness as our stipulation criterion, then we are fully justified in ascribing consciousness to even unicellulars.

Furthermore, it is misleading to say bacteria only respond to “proximal” stimulation, and therefore are not truly perceiving. Proximal stimulation implies an implausible “snapshot” picture of stimulation where the stimulation happens instantaneously at a receptor surface. But if stimuli can have a spatial (adjacent) component why can they not also have a temporal (successive) component? As J.J. Gibson put it, “Transformations of pattern are just as [biologically] stimulating as patterns are” (Gibson, 1966). And this is what researchers studying chemotaxis actually find: “for optimal chemotactic sensitivity [cells] combine spatial and temporal information” (Van Haastert & Devreotes, 2004, p. 626). The distinction between proximal stimulation and distal perception rests on a misunderstanding of what actually stimulates organisms.
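That combination of spatial and temporal information can be illustrated with a toy “run-and-tumble” sketch. This is my own deliberately simplified illustration, not a biological model: the linear gradient, step size, and function names are all assumptions. The simulated cell compares successive concentration readings (a temporal component of stimulation), keeps its heading while the reading rises, and tumbles to a random new heading when it falls; that alone produces the valenced, up-gradient behavior described above.

```python
import random

def concentration(x):
    """A toy 1-D sugar gradient: concentration simply increases to the right."""
    return x

def chemotaxis(steps=2000, seed=0):
    """Run-and-tumble sketch: compare successive concentration readings,
    keep the current heading while the reading rises, and tumble to a
    random new heading when it falls."""
    rng = random.Random(seed)
    x = 0.0
    direction = rng.choice([-1, 1])
    last_c = concentration(x)
    for _ in range(steps):
        x += direction * 0.1
        c = concentration(x)
        if c < last_c:                      # change valenced as aversive: tumble
            direction = rng.choice([-1, 1])
        last_c = c                          # rising reading: keep running
    return x

print(round(chemotaxis(), 1))  # ends far up the gradient
```

Nothing in the sketch forms a meta-representation of anything; a purely temporal comparison of stimulation suffices for the discriminating, goal-directed behavior that the FO criterion treats as perceptual awareness.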

Interestingly, the FO gambit offers resources for responding to the zombie problem. Since we have independent reasons to think bacteria are entirely physical creatures, if perceptual awareness is used as a stipulation criterion then the idea of zombie bacteria is inconceivable. Because bacterial perception is biochemical in nature, a perfect physical duplicate of a bacterium would satisfy the stipulation criterion we apply to creatures in the actual world. The problem, however, is that we have no compelling reason to choose FO stipulation criteria over any other, including HO criteria.

3.3 The Higher-order Gambit

HO theories are reductive and emphasize some kind of metacognitive representation as a criterion for ascribing phenomenal consciousness to a creature (e.g. awareness that you are aware). These HO representations are postulated in order to capture the “transitivity principle” (Rosenthal, 1997), which says that a conscious state is a state whose subject is, in some way, aware of being in it. A controversial corollary of the transitivity principle is that there are some genuinely qualitative mental states that are nonconscious e.g. nonconscious pain.

Neurologically motivated HO theories like Baars’ Global Workspace model (1988, 1997) and Dehaene’s Global Neuronal Workspace model (Dehaene et al., 2006; Dehaene, Kerszberg, & Changeux, 1998, 2001; Gong et al., 2009) have had great empirical success, but they are deeply unsatisfying as explanations of phenomenal consciousness. HO theory can explain our ability to report on or monitor our experiences, but many philosophers wonder how it could provide an explanation for phenomenal consciousness (Chalmers, 1995). Ambitious HO theorists reply by insisting they do in fact have an explanation of how phenomenal consciousness arises from nonconscious mental states.

However, ambitious HO approaches suffer from the same problem of arbitrariness that FO approaches did. In order to decide between FO and HO stipulation criteria we need to first decide on either a thick or thin interpretation of the refrigerator light problem. Since introspection is no help, we are forced to use the stipulation strategy. But why choose a HO stipulation strategy over a FO one? If everyone had the same intuitions concerning which creatures were conscious, we could generate stipulation criteria that perfectly match these intuitions. The problem is that theorists have different intuitions concerning which creatures (besides themselves) are in fact conscious. Surprisingly, some theorists might go beyond the biological world altogether and claim inorganic entities are conscious.

3.4 The Panpsychist Gambit

A more radical stipulation strategy is possible. If antiphysicalist arguments suggest that neurons and biology have nothing to do with phenomenal consciousness, we might think that phenomenal consciousness is a fundamental feature of reality. On this view, matter itself is intrinsically experiential. Another idea is that phenomenality is necessitated by an even more fundamental property, called a protophenomenal property (Chalmers, 2003).

Panpsychism is a less popular stipulation gambit, but at least one prominent scientist has recently used a stipulation criterion that leads to panpsychism (although he downplays this result). Giulio Tononi (2008) proposes integrated information as a promising stipulation criterion. The intellectual weight of the theory rests on a thought experiment involving a photodiode. A photodiode discriminates between light and no light. But does the photodiode see the light? Does it experience the light? Most people would say no. But the photodiode does integrate information (1 bit to be precise) and therefore, according to the theory of integrated information, has some experience, however dim. Whatever theoretical or practical benefits come with accepting the theory of integrated information, when it comes to the Hard problem of phenomenal consciousness we are left scratching our heads as to why integrated information is the best criterion for picking out phenomenal consciousness. Given that the criterion leads to ascriptions of phenomenality to a photodiode, many theorists will take this as good reason for thinking the criterion itself is wrong, given their pretheoretical intuitions about what entities are phenomenally conscious. But as we have learned, intuitions are as diverse as they are unreliable.
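The “1 bit” figure is just ordinary Shannon information and is easy to verify. The sketch below is my own illustration, not Tononi’s actual Φ measure (which is defined over a system’s causal structure); it only shows how little information the photodiode’s discrimination carries.

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A photodiode discriminating two equiprobable states (light / no light)
# registers at most one bit per reading.
print(entropy_bits([0.5, 0.5]))  # 1.0
```

On the stipulation criterion, even this single bit suffices for some dim experience, which is precisely what strikes many theorists as a reductio of the criterion.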

Conclusion

Unable to define phenomenal consciousness, theorists are tempted to use their introspection to “point out” the phenomenon. The refrigerator light problem is motivated by the problem of deciding between thin and thick views of your own phenomenal consciousness using introspection alone. If introspection is supposed to help us understand what phenomenal consciousness is, and the refrigerator light problem prevents introspection from deciding between thin and thick views, then we need some other methodological procedure. The only option available is the stipulation strategy, whereby we arbitrarily stipulate a criterion for pointing it out, e.g., integrated information or higher-order thoughts. The problem is that any proposed stipulation criterion is just as plausible as any other, given that we lack a pretheoretical consensus on basic questions such as the function of phenomenal consciousness. Our only hope is to push for the standardization of stipulation criteria.

In the literature, there are roughly two ways to pin down the explanandum of phenomenal consciousness: first-order approaches and second-order approaches. The difference is simple enough. For first-order theories, phenomenal consciousness is synonymous with awareness; for second-order theories, phenomenal consciousness is associated with the awareness of awareness. Fred Dretske is well-known for defending a first-order definition. In his 1993 paper “Conscious Experience”, he says:

[The] distinction between a perceptual experience of x and a perceptual belief about x is, I hope, obvious enough. I will spend some time enlarging upon it, but only for the sake of sorting out relevant interconnections (or lack thereof). My primary interest is not this distinction, but, rather, in what it reveals about the nature of conscious experience, and thus, consciousness itself. For unless one understands the difference between a consciousness of things (Clyde playing the piano) and a consciousness of facts (that he is playing the piano), and the way this difference depends, in turn, on a difference between a concept-free mental state (e.g., an experience) and a concept-charged mental state (e.g., a belief), one will fail to understand how one can have conscious experiences without being aware that one is having them. One will fail to understand, therefore, how an experience can be conscious without anything – including the person having it – being conscious of having it. Failure to understand how this is possible constitutes a failure to understand what makes something conscious and, hence, what consciousness is.

For Dretske, then, the explanandum of consciousness is simple: awareness. Take the famous truck-driver example from Armstrong:

After driving for long periods of time, particularly at night, it is possible to “come to” and realize that for some time past one has been driving without being aware of what one has been doing. The coming-to is an alarming experience. It is natural to describe what went on before one came to by saying that during that time one lacked consciousness.

Dretske thinks exactly the opposite. The truck-driver is conscious of the road the whole time, otherwise he wouldn’t be able to differentially respond to the road conditions. Dretske claims that in order to recognize differences (such as a road obstacle), we must be aware of both the road and the obstacle. If we weren’t aware that the obstacle is there, how would we be able to “see it” and then respond appropriately by driving around it? For first-order theorists, phenomenal consciousness is simply synonymous with awareness.

When asked to define awareness, first-order theorists often say that it means, roughly, “to experience”. But when asked what this means, they usually do not offer a robust definition. First-order theorists love to say that if you got to ask, you ain’t never going to know. In other words, they don’t provide arguments for this definition, they just claim it is obvious. Everyone knows what experience is, right? It’s that strange sense that it feels a certain way to be alive and perceive the world. There is “something it is like” to experience the world. It seems to be one way or another.

This is, of course, a complete circle of reasoning. But most first-order theorists acknowledge this; they just don’t think it’s a problem. They say that we can come up with a theory of experience later, but right now it is important to get our definitions straight: consciousness is awareness, and one doesn’t have to be aware that one is conscious in order to be conscious.

Second-order theorists deny this and claim that first-order experiences require a higher-order representational state in order to generate true “phenomenal feels”, or “what-it-is-likeness”. The most well-known second-order theorists in the analytic literature are David Armstrong, David Rosenthal, William Lycan, Peter Carruthers, Robert van Gulick, Uriah Kriegel, Rocco Gennaro, and a couple others. Second-order theorists are a fractious bunch. Armstrong and Lycan take what’s called a Higher-order Perception theory (HOP). This is often called an “inner sense” theory because it posits an internal perceptual “spotlight” that is scanning the lower-order states, and this scanning generates phenomenal feels. Rosenthal and Carruthers take what’s called a Higher-order Thought theory (HOT). This is pretty much the same as the HOP theory; they just don’t like the spotlight metaphor. Instead of a spotlight, they talk about higher-order beliefs and representations. Kriegel takes what’s called a self-representational higher-order approach where phenomenal feels are generated when the system represents itself to itself in a particular way. The one thing they all agree on though is that it is conceptually plausible to suppose that an agent could have nonconscious experiences, something the first-order theorists flat out deny as violating basic intuitions.

Second-order theorists also divide on the question of whether animals have higher-order mental states, and thus, phenomenal consciousness. Theorists like Van Gulick are reluctant to deny nonhuman animals phenomenal consciousness, and thus claim that higher-order representations aren’t that cognitively sophisticated and are likely a widespread phenomenon in the animal world. Theorists like Carruthers bite the bullet and deny that nonhuman animals have phenomenal consciousness. Carruthers thus claims that there is nothing “it is like” to be an animal. They have experiences, but these experiences are nonconscious and don’t “feel” in the way that our experiences “feel”. There is something special – phenomenal – about our own experiential states.

On my view, all lifeforms possess “phenomenal consciousness”. There is something-it-is-like to be a bacterium just as there is something-it-is-like to be a bat. There can be degrees of experiential richness, but it denies common sense to suppose that there is nothing it is like to be an embodied, living organism. However, I do not think that phenomenal feels require higher-order representations in order to feel one way or another. Phenomenal feels are generated at the first level of experience.

But the word “generated” is precisely wrong. Phenomenal feels are not “generated” as if they were objects or things the brain was literally squirting out. That is a homuncular theory right down to its core. We must be careful not to let our evolutionary disposition for object-oriented abstraction fool us into thinking that experiences are “generated” as if they were physical objects. Phenomenal feels are not generated; they are what-it-is-like to exist as a lived body. Existence is to be cashed out behaviorally. But not in terms of Skinner’s behaviorism, a dead theory based on antiquated notions of linear stimulus-response mechanics and simple associationist learning models. Behavioral models are now based on an understanding of dynamic systems theory and complex categorization and pattern-recognition learning models. The concept of stimulus-response is replaced by the concept of self-determining behavior and attention-salience models of decision making. The organism is a self-organized, self-determining, closed operational loop. The material products made by the organism are the components that play a role in making up the production factories that generate the very structural components of the organism. Organisms are organizationally closed but thermodynamically open. I think it is intuitive to understand these dynamic temporal processes as having the “right stuff” for phenomenal feeling. What-it-is-like to be a rock is radically different and of a different register than what-it-is-like to be an autonomous dynamic system.

But here is where I disagree with contemporary higher-order approaches. Whereas I do think consciousness requires a second-order explanation, I do not think that second-order theories of consciousness are supposed to be explaining phenomenal feeling. I think that phenomenal feels are a separate explanandum than consciousness. I thus take what’s called a narratological or social-constructivist approach to consciousness. Here, I follow Julian Jaynes in claiming that consciousness proper is “[T]he development on the basis of linguistic metaphors of an operation of space in which an ‘I’ could narratize out alternative actions to their consequences”. Recent defenders of social-constructivist approaches to conscious self-hood include Julian Jaynes, Gilles Deleuze, Charles Taylor, Daniel Dennett, Tor Norretranders, J. D. Velleman, Daniel Hutto, John Protevi, and James Austin (and many others).

I would thus say that an earthworm is “aware” of certain properties in the environment, but that it is, strictly speaking, not “conscious” because it does not have the right sort of higher-order metacognitive awareness. There are strong theoretical and empirical reasons for denying nonhuman animals the capacity for second-order cognition. While it is certainly possible to use a second-order explanation for nonhuman animal behavior, for any given case, I guarantee that there is a first-order explanation that is biologically plausible and theoretically adequate to account for all the facts. I also think that first-order explanations are more metaphysically parsimonious and have more predictive power precisely because they are more biologically realistic, given their dependence on dynamic systems theory and autopoietic, adaptive self-determination. So what is consciousness proper? Consciousness

…is an operation rather than a thing, a repository, or a function. It operates by way of analogy, by way of constructing an analog space with an analog “I” that can observe that space, and move metaphorically in it. It operates on any reactivity, [consciously selects] relevant aspects, narratizes and [assimilates] them together in a metaphorical space where such meanings can be manipulated like things in space. Conscious mind is a spatial analog of the world and mental acts are analogs of bodily acts. (Jaynes, 1976)

BONUS:

For a more systematic account of my theory of cognition and consciousness, check out my paper that was recently published in Phenomenology and the Cognitive Sciences, “What is it like to be nonconscious? A defense of Julian Jaynes”: