Wednesday, July 16, 2014

One of the most prominent theories of consciousness is Giulio Tononi's Integrated Information Theory. The theory is elegant and interesting, if a bit strange. Strangeness is not necessarily a defeater if, as I argue, something strange must be true about consciousness. One of its stranger features is what Tononi calls the Exclusion Postulate. The Exclusion Postulate appears to render the presence or absence of consciousness almost irrelevant to a system's behavior.

Here's one statement of the Exclusion Postulate:

The conceptual structure specified by the system must be singular: the one that is maximally irreducible (Φ max). That is, there can be no superposition of conceptual structures over elements and spatio-temporal grain. The system of mechanisms that generates a maximally irreducible conceptual structure is called a complex... complexes cannot overlap (Tononi & Koch 2014, p. 5).

The basic idea here is that conscious systems cannot nest or overlap. Whenever two information-integrating systems share any parts, consciousness attaches to the one that is the most informationally integrated, and the other system is not conscious -- and this applies regardless of temporal grain.

The principle is appealing in a certain way. There seem to be lots of information-integrating subsystems in the human brain; if we deny exclusion, we face the possibility that the human mind contains many different nesting and overlapping conscious streams. (And we can tell by introspection that this is not so -- or can we?) Also, groups of people integrate information in social networks, and it seems bizarre to suppose that groups of people might have conscious experience over and above the individual conscious experiences of the members of the groups (though see my recent work on the possibility that the United States is conscious). So the Exclusion Postulate allows Integrated Information Theory to dodge what might otherwise be some strange-seeming implications. But I'd suggest that there is a major price to pay: the near epiphenomenality of consciousness.

Consider an electoral system that works like this: On Day 0, ten million people vote yes/no on 20 different ballot measures. On Day 1, each of those ten million people gets the breakdown of exactly how many people voted yes on each measure. If we want to keep the system running, we can have a new election every day, and individual voters can be influenced in their Day N+1 votes by the Day N results (via their own internal information-integrating systems, which are subparts of the larger social system). Surely this is society-level information integration if anything is.

Now according to the Exclusion Postulate, whether the individual people are conscious or instead the societal system is conscious will depend on how much information is integrated at the person level vs. the societal level. Since "greater than" is sharply dichotomous, there must be an exact point at which societal-level information integration exceeds person-level information integration. (Tononi and Koch appear to accept a version of this idea in their 2014 paper, endnote xii [draft of 26 May 2014].) As soon as this crucial point is reached, all the individual people in the system will suddenly lose consciousness. However, there is no reason to think that this sudden loss of consciousness would have any appreciable effect on their behavior. All their interior networks and local outputs might continue to operate in virtually the same way, locally inputting and outputting very much as before. The only difference might be that individual people hear back about X+1 votes on the Y ballot measures instead of X votes. (X and Y here can be arbitrarily large, to ensure sufficient informational flow between individuals and the system as a whole. We can also allow individuals to share opinions via widely-read social networks, if that increases information integration.)
Tononi offers no reason to think that a small threshold-crossing increase in the amount of integrated information (Φ) at the societal level would profoundly influence the lower-level behavior of individuals. Φ is just a summary number that falls out mathematically from the behavioral interactions of the individual nodes in the network; it is not some additional thing with direct causal power to affect the behavior of those nodes.

I can make the point more vivid. Suppose that the highest-level Φ in the system belongs to Jamie. Jamie has a Φ of X. The societal system as a whole has a Φ of X-1. The highest-Φ individual person other than Jamie has a Φ of X-2. Because Jamie's Φ is higher than the societal system's, the societal system is not a conscious complex. Because the societal system is not a conscious complex, all those other individual people with Φ of X-2 or less can be conscious without violating the Exclusion Postulate. But Tononi holds that a person's Φ can vary over the course of the day -- declining in sleep, for example. So suppose Jamie goes to sleep. Now the societal system has the highest Φ and no individual human being in the system is conscious. Now Jamie wakes and suddenly everyone is conscious again! This might happen even if most or all of the people in the society have no knowledge of whether Jamie is asleep or awake and exhibit no changes in their behavior, including in their self-reports of consciousness.
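The Jamie scenario can be made concrete with a toy sketch. This is emphatically not Tononi's actual Φ calculus (computing real Φ is vastly more involved); the Φ values and element sets below are stipulated by hand purely to illustrate the exclusion rule: among overlapping systems, only the one with maximal Φ is a conscious complex, and every system sharing parts with it is excluded.

```python
# Toy illustration of the Exclusion Postulate (NOT Tononi's actual
# phi calculus; all phi values here are stipulated by hand).
# Rule: among overlapping systems, only the one with maximal phi is a
# conscious complex; any system sharing elements with it is excluded.

def conscious_complexes(systems):
    """systems: dict mapping name -> (phi, set_of_elements).
    Greedily pick the highest-phi system, exclude every system that
    overlaps it, and repeat until nothing remains."""
    remaining = dict(systems)
    winners = []
    while remaining:
        best = max(remaining, key=lambda n: remaining[n][0])
        winners.append(best)
        best_elems = remaining[best][1]
        remaining = {n: (p, e) for n, (p, e) in remaining.items()
                     if n != best and not (e & best_elems)}
    return winners

X = 10.0
people = {f"person{i}": (X - 2, {f"p{i}"}) for i in range(3)}
society = {"society": (X - 1, {"jamie"} | {f"p{i}" for i in range(3)})}

# Jamie awake: Jamie's phi (X) beats the societal system's (X - 1),
# so the society is excluded and every individual is conscious.
awake = {**people, {"Jamie": (X, {"jamie"})}.popitem()[0]: (X, {"jamie"}), **society}
print(sorted(conscious_complexes({**people, "Jamie": (X, {"jamie"}), **society})))

# Jamie asleep: Jamie's phi drops below the societal phi, so the
# society becomes the maximal complex and swallows everyone.
print(sorted(conscious_complexes({**people, "Jamie": (X - 5, {"jamie"}), **society})))
```

Nothing in the individual-level dynamics changes between the two calls; the only difference is one scalar, Jamie's Φ, yet the set of conscious entities flips wholesale. That is the near-epiphenomenality worry in miniature.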

More abstractly, if you are familiar with Tononi's node-network pictures, imagine two very similar largish systems, both containing a largish subsystem. In one of the two systems, the Φ of the whole system is slightly less than that of the subsystem. In the other, the Φ of the whole system is slightly more. The node-by-node input-output functioning of the subsystem might be virtually identical in the two cases, but in the first case, it would have consciousness -- maybe even a huge amount of consciousness if it's large and well-integrated enough! -- and in the other case it would have none at all. So its consciousness or lack thereof would be virtually irrelevant to its functioning.

It doesn't seem to me that this is a result that Tononi would or should want. If Tononi wants consciousness to matter, given the Exclusion Postulate, he needs to show why slight changes of Φ, up or down at the higher level, would reliably cause major changes in the behavior of the subsystems whenever the Φ(max) threshold is crossed at the higher level. There seems to be no mechanism that ensures this.

24 comments:

I've come across the work of Tononi recently (through Scott Aaronson's blog, http://www.scottaaronson.com/blog/ -- Aaronson is skeptical of IIT) and I don't pretend to understand a word of it. To your knowledge does it (or would it) have any practical, technological (empirical) applications? Or are we dealing with yet another mathematically elegant theory that ultimately will have no useful bearing on reality?

Hi Eric. I have not read this more recent Tononi work, and it's not like I understand phi very well anyway. But just so we can be clearer about your objection: Couldn't Tononi bite the bullet on your thought experiments, but suggest that they are really just almost inconceivable thought experiments and not at all likely given the facts on the ground? Couldn't he say that the complexity of the human brain is such that the level of phi in it (while awake at least) is not going to be exceeded by a group of humans, even acting in harmony? Given simpler brains in ants or bees, maybe the colony or hives would have more phi than the individuals, but that's not so counter-intuitive. Anyway, I'm probably misunderstanding the problem you're posing.

modvs1: Tononi thinks it's useful in thinking about anaesthesia and twilight states, and for modeling the structure of the stream of consciousness. I wouldn't rule out the possibility that if IIT were true, it could have implications for such cases.

Eddy: Yes, I think Tononi could and probably would say this. But (1.) It's so unclear how to calculate phi in such cases that it's not clear that he *can* legitimately be confident about this. And (2.) the theoretical objection stands in any case: Without a mechanism to ensure that small threshold-crossing variations in phi at the larger-system level (whatever the larger system might be) have major impact on the functioning of the subsystems, his theory would have the presumably unwelcome consequence that the consciousness or not of the lower systems will often be nearly irrelevant to their behavior.

It seems a refutation, but there seems some merit to the extension of the idea? I mean, assembly line workers, going through repetitive, mindless actions over and over as they serve a larger system? That sort of lines up with the idea, doesn't it? Or did I not really get the idea?

I wonder again if the domain of consciousness might not be a way of getting around this. In your democracy, the citizens are voting only on a few specific issues, one assumes. And I actually do think it's psychologically plausible to say that if you do a particular thing in concert with a fixed group of other people, getting continuous feedback on it, then you can enter into a form of group consciousness with those people *for that activity*. Sports are the obvious example.

But individuals have many other areas of their lives which are not part of the gestalt. What they eat, read, enjoy... in these domains, consciousness remains at the individual level. Presumably Tononi's theory could accommodate this because it includes information, and the domain of information has to be defined.

I think there's something about agency missing, though. I don't believe Tononi's idea because I don't think human beings are much like information processors, and the thing that makes us conscious in particular includes our agency and will. For instance, I think being conscious necessarily involves the ability to change our consciousness - that's why we're unconscious when we dream, and that's why lucid dreaming is so weird. But that's a separate argument.

Do quarks, neutrinos, and other particles have consciousness? They move and MAY have free will. Do molecules have consciousness? There may be communication and judgement at that level of being. We have no way of discovering.

Callan: I'm not sure -- were you meaning that as a positive example of consciousness at the group level or as a negative example?

chinaphil: It doesn't have to be voting -- voting is just informationally relatively simple and in principle numerically extendable. Maybe sports teams, or informal interactions at the nation level, would serve just as well or better. I think the 2004 and 2008 versions of Tononi's theory could accommodate this. It's his introduction of Exclusion in 2012 that raises the trouble. On agency: I have some sympathy with that idea, too, but it's not clear what justifies acceptance or denial of it.

Anon: Such panpsychism is, I think, on the table -- and Tononi's theory seems to imply that they would often have a tiny bit of consciousness, if they are not part of a larger complex of higher phi.

Anon Jul 22: Yes, I believe that would follow from his exclusion postulate. As to *why* he accepts the exclusion postulate, the main explicit reason he offers is Occam's Razor: It's simpler. This seems to me not an especially compelling reason in this context. He also defends exclusion by appeal to the intuition that individual people have only one stream of experience (not relevant in part/whole cases, I think, but maybe relevant to overlap cases); and to the intuition that two people having a conversation wouldn't form a third conscious entity that includes them as parts (an odd intuition to take seriously, given that his near-panpsychism seems to conflict with similar intuitions against the consciousness of simple diodes).

How does the exclusion postulate affect streams of consciousness in individuals? Is he implying that all thoughts are distinct conscious entities in that they require a cascade of different neural circuits to be active at different times? Or that, within a complete thought, the most dominant (Phi-intensive) circuit is that which "owns" the consciousness?

Eric, does it matter whether it would be positive or negative (in my subjective evaluation)? I was proposing it kind of makes sense as something that might just be. In my opinion the worker's position is a poor one inflicted on them (and not by PVE, if I may use an mmorpg term)

I disagree strongly with the claim that introspection suggests that there are no nested or overlapping consciousnesses in our heads. I regard the exclusion postulate as a counterintuitive bug at best, and not a feature of IIT.

As to IIT as a whole, I don't understand how the integration of information occurs except functionally, i.e. in such a way as to be defined in terms of an airy-fairy cloud of unrealized hypotheticals, and I don't see how those can make consciousness be or not be.

Callan: Are you imagining both the workers AND the larger entity to be conscious, or only the larger entity -- that was the question I meant to ask, though I didn't phrase it clearly.

Mambo: Then you'll probably like the 2004-2008 versions of IIT better than the more recent versions with Exclusion.

John: I'm not sure what introspection shows, so I don't really disagree with you on that point. I do also think it's a bit weird that hypotheticals could do so much work -- they do seem, in a way, pretty "airy-fairy"! But without hypotheticals I think you collapse pretty fast into Greg Egan's dust theory. So I feel like we're stuck giving them an important role. On Egan, see my post here: http://schwitzsplinters.blogspot.com/2009/01/dust-hypothesis.html

I know IIT v3.0. I could not find exactly the same expression of the exclusion postulate, but I found a similar one in the IIT v3.0 paper. Probably Tononi misunderstood. I think the exclusion postulate is not the most important part of IIT v3.0. I will think a little.

What you described, Eric, not me! :) It just seemed to resonate in a horrible way to me. I mean think of a small business - an employee might have an idea for how the business works and it might actually influence the procedures of that business. Now scale up the business in size - the very same employee might just be ignored at that point! Indeed they might not even bother trying to think of the idea, for how little they...I don't know how to describe it except perhaps for how little they matter?

The problem with your California election example is that the result of the election -- the victories in some combination of 40 or 50 different candidates or propositions -- is not an irreducible concept. It's a composite, which by Tononi's definition, cannot be the subject of consciousness. And it's not embodied in a singular experience that can be the subject of consciousness -- for example, reading about the results in the LA Times over breakfast.

Bill: I'm not sure I understand. The idea of what is composite or not, and what is singular or not, should flow out of the mathematics of his model, right? Is there some reason to think that the mathematics of his model is structured so as to avoid group consciousness in California as a result of integration through an election structured in the right way?

Take his two core examples: the dipolar switch and the digital camera. The dipolar switch integrates what little information it has in a singular event. The digital camera has no similar capacity for integration. The California election is like the digital camera. There is no single point at which the results in all of the races are integrated. They might all be reported on the same piece of paper, but integrating them in that way does not add any information to the results in each race that have already been calculated somewhere else. As I understand it, Tononi's theory requires both. Information has to be physically integrated (i.e. brought together) AND the integration has to add information that wasn't there before.

Eric, I'm very late to the game, but I guess it will soon be clear why. I was almost a fan of IIT V.2 (2008); I thought it was impressive in all possible ways, and plagued by only one problem. The problem became more and more relevant as people commented, and I do believe that it significantly shaped the changes introduced with version 3. Unfortunately, the solution offered by V.3 is worse than the problem, IMVHO. So, what is the problem? The assumption/postulate that Information Integration is necessary and sufficient for consciousness. Sufficiency inevitably leads to a form of quasi-panpsychism, which in turn attracts a lot of substantial criticism and scepticism. In short, it makes IIT unpalatable to lots of people (including me, I'm afraid).

Sitting on the fence, I was hoping that Tononi and collaborators would take the criticism seriously (they did) and "fix" IIT (I have my own preferred way) as a consequence. Instead, they made it worse (IMVHO), and my disappointment still hurts. If I had any doubts, the recent article by Tononi and Koch, "Consciousness: here, there and everywhere?", helped me to dispel them: V.3 is really trying to cut down its panpsychism implications as much as possible, and the main tool to get this done is indeed the exclusion principle.

Eric, I think your own criticism is the most elegant I know of, because it's brief and conclusive, but of course Aaronson's points are really well made too. (Bill Lane does raise an interesting counterargument, though.) Via the exclusion principle, V.3 becomes less panpsychist, but also less explanatory, without solving the problem at its heart: why on earth should Information Integration be both necessary AND sufficient for the emergence of consciousness? I can't see where Tononi explains this little detail, but I might just have got lost in the huge literature. (As far as I know, the only claim is: IIT explains the explananda, thus we need nothing more.)

For me, if you simply say "Information Integration is necessary but not necessarily sufficient for the emergence of consciousness," everything starts working much better; both premises and consequences become much more palatable as well. Panpsychism evaporates, while all the (absolutely fantastic) clinical (and research) implications of IIT and Phi remain perfectly valid. Sure, it won't be possible to market IIT as the "solution" to all the consciousness questions, but I'd say it never was: without making it clear where sufficiency comes from, it would always look incomplete to my eyes. Eric (and all), if you know where the sufficiency claim comes from, please do share!

Now, the self-serving part, with apologies (when in need, ask for help -- that's my policy). I'm writing all this only now because: (a) until the publication of "Consciousness: here, there and everywhere?" I wasn't so sure that my impressions were likely to be right; (b) I've finally made public my own take on consciousness, where I briefly address IIT and cite your criticism as "hard hitting"; (c) I am looking for criticism of my own approach. Eric, I would be delighted to hear your thoughts: if you'll apply the same unforgiving critical eye to my own take, you'll make me a very happy man.