Let's start with representation. On Dretske's view, a system represents that x is F just in case that system has a subsystem with the function of entering state A only if x is F and that subsystem is in state A. Dretske's examples of systems with indicator functions include both artificial systems like fuel gauges and natural biological systems.

So we need to think a little bit about what a "system" is. Intuitive application of the label "system" wouldn't seem to exclude the United States. We can think of a traffic system or a social system or the military-industrial complex as a system. Systems, I'd be inclined to think, can be spatially distributed, as long as there is some sort of regular, predictable interaction among their parts. There seems to be no reason, on Dretske's view, not to treat the United States as a system. Dretske does not appear to employ restrictive criteria, such as a requirement of spatial contiguity, on what qualifies as a system. In fact, it would strain against the general spirit of Dretske's view to employ something like spatial contiguity as a criterion of systemhood: Something like a fuel gauge could easily operate via radio communication among its parts and that would make no difference to Dretske's basic analysis. What matters for Dretske are things like information and causation, not adjacency of parts.

If the United States is a system, then it can presumably at least be evaluated for the presence or absence of representations. And once it is evaluated in this way it seems clear that, by Dretske's criteria, the United States does in fact possess representations. The United States has subsystems with indicator functions. The Census Bureau is part of the United States and one of its functions is to tally up the residents. The CIA is part of the United States and one of its functions is to track the location of enemies.

Dretske is very liberal in granting "behavior" to systems: Even plants behave, on his view (e.g., by growing), and in some sense even stones do (e.g., by sinking when thrown in a pond). So it seems clear that if we grant that the United States is a system with representations, it also exhibits behavior that is influenced by those representations. Dretske's metaphysics, straightforwardly applied, would seem to imply that the United States is a behaving, self-representing system.

But would this behaving, representing system have conscious experience, on Dretske's view? Dretske says that in order to have conscious sensory experience, a system must possess representations that are (a.) natural, (b.) systemic, and (c.) enable the construction of new representations that can be calibrated to regulate behavior serving the system's needs and desires. Let's consider these criteria one at a time, b, c, and then a.

Does the U.S. have "systemic" representations? Systemic representations, per Dretske, are representations that are part of the very design of the system or subsystem, rather than representations acquired later. It seems clear that the United States has these, if it has representations at all. It's among the systemic functions of the Census Bureau that it tally up the residents. It's among the systemic functions of the Supreme Court that it represent laws as Constitutional or un-Constitutional. The systemic representations of these subsystems deliver behavior-guiding information to the system as a whole, as the systemic representations of an animal's sensory subsystems do.

Can the United States construct new representations derived from these systemic representations to further regulate its behavior? It seems clear it can. The United States, or a subsystem within the United States, could represent its rate of population growth among newly-defined demographic groups and adjust immigration policy in response. Does it do so in accord with its needs and desires? Well, needs and desires, on Dretske's account, don't require a lot of apparatus. A desire for some result R, per Dretske, is an internal state that (a.) helps cause movement that helps to yield R, (b.) was selected for because of that tendency to help yield R, and (c.) can be further modified conditionally upon its effectiveness in producing R, in a somewhat sophisticated way. Though these conditions rule out the inflexible inherited drives of many insects, this is still pretty simple stuff, as Dretske intends. The Census Bureau, the CIA, and the United States as a whole would seem to have desires by Dretske's criteria.

Finally, does the United States have "natural" representations that play the necessary roles? This might seem to be a sticking point. Dretske defines conventional representations as those arising when "a thing's informational functions are derived from the intentions and purposes of its designers, builders, and users" -- that is, "us" -- and he defines natural representations as just those that are not conventional (1995, pp. 7-8). Now clearly the informational functions of the United States depend on us. They wouldn't exist without us. Does that mean, then, that Dretske can dodge the conclusion that the U.S. is conscious?

I don't think so. After all, your own informational functions depend on you, wouldn't exist without you, and you're conscious. The motivation for Dretske's requirement seems to be that to give rise to consciousness a system's representational functions should be intrinsic to it, rather than assigned from outside. When you slap a label on a column of mercury and call it a thermometer, you don't thereby make the thermometer conscious (even if it were complex enough to meet the other criteria). But the cases of the thermometer and the U.S. are not at all analogous. U.S. citizens aren't external label-slappers. We are parts of the United States. We constitute it. We are internal to it. Although Dretske implicitly assumes that if a system's representational capacities depend on human beings then those capacities are not natural to the system, it's clear that in saying this he has ordinary physical artifacts in mind, not cases where human users themselves constitute (part of) the system.

The core idea of Dretske's metaphysics of mind is that minds are information-manipulating systems, where information is construed in terms of simple causes and probabilities, and where mentality and consciousness arise when a system's environmental responsiveness and its tracking of the world are intrinsically sophisticated and flexible. If we can set aside our natural prejudice against large, spatially distributed systems, it seems clear that the United States amply satisfies Dretskean criteria for conscious mentality.

15 comments:

Consider the mereological fusion of you and your hat. Call this fusion "Freddy". You, Eric, have naturally selected information bearing states that, by Dretskean lights, carry phenomenal content. You, in short, have experiences. And you, by stipulation, are part of Freddy. But that doesn't seem enough to make it the case that Freddy thereby has experiences.

Unless I missed something, your argument that America is disanalogous from thermometers in a relevant sense doesn't seem much better than the Freddy argument. And the Freddy argument seems too fast.

Nice example, Pete. No resources internal to Dretske's view come to mind for handling this kind of case. To be charitable to him, something will have to be invented, I suppose -- but that would open a can of worms. I guess the issue is that his account isn't complete without a proper ontology of systems and an account of which of a variety of nested systems and subsystems a mental state should be attributed to.

Hm, good! I'll have to think about this more. Further thoughts appreciated!

If “the United States” seems somewhat too large, or somewhat too small, or in some other way not quite the right group-level system to identify as the locus of the behavior guided by the representational subsystems, the main point here can still carry through with such adjustments. Compare the case of drawing precise boundaries around an individual person. Although the person case seems easier because the skin is an intuitive, natural, biological boundary, the same issues arise (e.g., with intestinal flora and artificial limbs), as should become increasingly evident if skin-penetrating technologies become more commonplace.

I am enjoying these posts on "why X should think that the United States is conscious", but still the size issue is something that needs to be addressed.

We can somehow imagine something the size of the US being conscious. But it is just one instance of the category "nation-state". So let me ask instead: should Dretske, Dennett, etc. think that Andorra is conscious? Or if you prefer a Western Hemisphere example, try Grenada.

If the answer is in the affirmative, then is the state of Rhode Island conscious? How about the city of Providence? How about a particular neighbourhood?

In other words, what is so special about the nation-state that we restrict our investigation of the possibility of its consciousness to that category? Five hundred years ago, the concept barely existed. Five hundred years from now, it may seem as laughably antiquated as the idea of feudal obligations to one's liege lord does now. Why is this the go-to category?

Clasqm: Yep, I agree that the same issues will arise for larger and smaller entities. I chose the nation as what seems to me the best-case scenario, for two reasons: (1.) Nations are very large and complex, unlike, say, the U.C. Riverside Philosophy Club, with a degree of interconnectedness between people that resembles the degree of neuronal interconnectedness in the human brain. (2.) Nations engage in a lot of what we seem to intuitively regard as collective action, unlike larger entities. It seems very true and natural to say "The U.S. invaded Iraq", "the U.S. sent astronauts to the moon", etc. It doesn't seem quite as true and natural to say that "the world" or the galaxy did such things, especially if we're thinking of intentional action.

A philosopher appealing to intuition? Why, Eric, we'll make a theologian out of you yet! :-)

But seriously,

(1) It seems reasonable to assume that for consciousness to arise, there needs to be a complex physical substrate like, say, a human brain or something analogously complex. However, that is an assumption built upon exactly one known example.

In fact, it is the human brain making that assumption about itself. Which makes it just that little bit open to counter-examples, should we ever find them.

Perhaps we will one day find conscious beings of startling simplicity. Or perhaps we will find conscious beings whose complexity will make the human brain appear like an undifferentiated blob of protoplasm. However that turns out, right now we are arguing from a position of profound ignorance.

(2) "The US invaded Iraq" is just human beings being typically lazy. It takes too much effort to say "The POTUS and Congress signed several pieces of paper and as a result the Pentagon issued orders that resyulted in thirty thousand soldiers being transported to a foreign country called Iraq where they proceeded to usurp power from the person that had recognised as the legitimate ruler of Iraq for forty years before that day."

Excuse me while I pause for breath.

How about this one? "Washington sent an ultimatum to Beijing"

Now this by itself does not mean that there is a conscious entity called Washington that actually drafted the ultimatum. It is a figure of speech. The linguists probably have a name for it.

This is also (like my comment on your 'Why Dennett should think that the US is conscious') a comment about what may appear to be the easier step of the two: that the US represents.

Dretske says that a system represents F if it has the function of indicating F. So the entity that represents is also the entity that has the function to indicate. If X has the function to indicate, then the only entity we can be sure represents is X (not any proper part of X, and not any mereological fusion of X and something else).

But then we have no reason to conclude that the US represents -- it does not have the function (biological or artifactual) to indicate anything, in the sense of function Dretske has in mind. Some subsystems of the US could be thought of as being designed to indicate, but not the US itself.

Complex systems typically can be (natural systems), or by design are, decomposed into subsystems. Between the subsystems are functional interfaces that can be, or are, relatively well-defined. (This decomposition can in principle be applied to the subsystems themselves, and to their subsystems, etc.)

From that perspective, the problem with "the US" as a system is that it isn't at all clear what the subsystems are. Your examples of subsystems are all parts of the formal US federal government, for which the subsystems and their functional interfaces are, of course, relatively well-defined. So, no conceptual problem there, and I assume that the US government being "conscious" suffices to make your point.

The problem with "Freddie" as a system is that the "hat" subsystem has no functional role, as opposed, say, to Otto's notebook. Ie, not every collection of entities can be meaningfully viewed as a system.

To the philosophical point of this series, doesn't accepting that almost any complex system can be viewed as "conscious" suggest that that word (ie, concept) isn't very useful except perhaps in casual conversation?

Thanks for the continuing comments, folks! I'm in Cincinnati right now, and I'll be giving a talk on some of this material tomorrow.

@ Clasqm: On your (1), I'm inclined to think that's a possibility. On (2): Well, I don't think it's just a figure of speech. I think it describes a real collective action by a group entity -- or at least I think that's what someone like Dretske or Dennett should say. Right now my basic stance on that question is this: It's analogous to human/animal action in lots of ways (setting aside spatial distribution of the parts and composition of the group entity by conscious subentities). And we say it as though it's a literal truth. And their theories don't seem to provide any mechanism by which one could derive the implication that it's not literally true. But lots of people have pressed on this point, so I'll need to think carefully about whether I should also commit to some more positive argument concerning group action. (Is "I drank tea" just a lazy way of saying "my brain underwent these changes... my arm did this... some water went into the cup..."?)

@ Bence: For Dretske, the natural representations that drive consciousness tend to occur in subsystems rather than the system as a whole. It is the function of such-and-such neurons to represent color in this part of the visual field. It is not *my* function as a whole organism to represent color in the visual field -- not unless someone else has given me that job, but then that's an "acquired" representation and not the kind of natural representation at the core of consciousness on Dretske's account. So I'm still inclined to think the analogy holds.

Charles: Thanks. I agree that something in that direction might prove a practical way to draw boundaries around "systems".

I don't know, though, if it follows from my remarks that almost any complex system could be viewed as conscious. The U.S. has a kind of complexity and self-representational sophistication that many other systems don't seem to have. But I don't know. How much is enough? That seems a pretty hard question to answer!

One possibility, as you say, is to jettison the word "conscious". But I'm too much of a "phenomenal realist" to want to do that. It would be convenient if I could let go of the idea that there really is a fact of the matter whether a system is conscious or not -- a fact that is important and not just a matter of convention or the like.

As a "phenomenal realist" perhaps you could speculate on how "such-and-such neurons ... represent color in this part of the visual field". (A real question, not a challenge.) Also, what purposes might phenomenal experience (eg, mental "picturing") serve? Dretske addresses this here (towards the end) in the context of blindsight:

http://users.ecs.soton.ac.uk/harnad/Papers/Py104/dretske.good.html

FWIW, I find the arguments based on Helen unconvincing since Dretske doesn't clearly distinguish between sensory input processing that produces phenomenal experience and all other sensory input processing.

Imagine a vision experiment ala Nicolelis ("Beyond Boundaries") in which the subject's retina is illuminated by light of different colors, and neural activity relevant to visual sensory input processing -- but prior to the subsystem whose processing produces phenomenal experience -- is successfully intercepted and then decoded in a computer to produce inputs to a voice synthesizer that outputs correct answers to "What color is this?" Does the system comprising the (hypothetical) subsystem "visual processing up to the point of interception" and the subsystem neural probes+computer+synthesizer (see note) have phenomenal experience? If the subject is human, is the system "conscious" ala Dretske (ie, aware of being aware) of colors? Again, questions rather than challenges -- I have no answers.

Note: a "system" seemingly simpler in principle than Nicolelis' Aurora-based system

Wow... our intuitions could not be more different! I certainly don't share the intuition that the label "system" obviously applies to large groups like the United States. But, presently, my worry here has more to do with the representational content that a nation-state "system" could possibly realize.

I'm certainly no expert on Dretske, so it's entirely possible that my worry actually is about his representational theory more generally. But, just given what you've said here, it's not especially obvious to me that the U.S. is capable of satisfying the criteria with respect to representing phenomenal content. Namely, I don't think the United States has a sub-system with the function of representing 'redness' (or 'sweetness' or whatever qualia) to the larger system. As far as I can tell, there's no obvious candidate federal or national department (or bureau or whatever) with the function of entering state A, where state A is an indicator of phenomenal, experiential properties.

My (admittedly limited) understanding of Dretske is that phenomenal representation derives from perceptual systems having acquired the function of indicating qualitative properties in the environment. And, I presume, that means that 'state A' is a qualitative/experiential state (so, for example, the visual system represents the tomato being red because the tomato is red, and because part of the visual system acquired the function of producing the qualitative experience of 'redness' to indicate _red_ in the environment).

It's possible that my understanding of Dretske's representationalism is clouded by some of my internalist leanings. But if my understanding is right, then my worry is specific to your extension of conscious representation to nations. So, my question for you is 'what is the national system that naturally acquired the function of indicating 'redness'(in the qualitative--as opposed to some merely propositional--sense) to the United States?'

My intuition is that no such sub-system exists. In which case, as far as I can tell, the most you might be able to argue is that the U.S. has intentionality (i.e., it has sub-systems that represent propositional content, such as that tomatoes are red).

Are you thinking of the sensory/conceptual distinction from Ch. 1 of his 1995, which Dretske maps onto the "systemic" vs. "acquired" distinction? If so, then the issue becomes whether the U.S. has subparts with systemic, as opposed to acquired, indicator functions.

Well, what is it to have a "systemic" function? On p. 12, Dretske writes that systemic indicator functions derive their indicator function "from the system of which it is a state". For example: "If a system (e.g., a thermometer) is supposed to provide information about temperature, and B is the state (e.g., mercury at such-and-such level) that is supposed to carry the information that the temperature is, say, 32°, then B has the systemic function (function-s) of indicating a temperature of 32°." The contrast case is one in which the system acquires an indicator function not intended in its design, e.g., if we write "DANGER" at some point on the mercury column.

So the question then -- if you're willing to grant that the U.S. is a "system" in the relevant sense -- is whether the U.S. has subsystems with systemic indicator functions in this sense. (Note that it is only the subsystems that have to have indicator functions, not the system as a whole. Compare: a rabbit may have no function, but its visual system does have a function.) It seems to me that the U.S. does have subsystems with systemic indicator functions -- subsystems designed with indicator functions in mind. The Census Bureau is one such subsystem. It was designed to tote up the populace. The CIA is another such subsystem. It was designed to monitor the activity of foreign enemies. At least this seems to me the result of the most straightforward application of Dretske's criteria. Would you disagree?

Maybe there's no subsystem with the systemic function of indicating redness. If so, it only follows that the phenomenology of the U.S. will be very different from our own. But that's only what one should expect!

About Me

Eric Schwitzgebel
Professor of Philosophy at University of California at Riverside. Visit my homepage (link below) to view most of my philosophical and psychological essays. Email me if you like at eschwitz at domain- ucr.edu