Inverse spectrum argument

This is an argument in the philosophy of mind which aims to show that consciousness is not fully captured by a description of a brain-state, however detailed, whereas a brain-state is (asymptotically) so describable, and that therefore a conscious experience is not a brain-state. In other words, the argument is directed against the physicalist mind-brain identity theory, which holds the converse, and against the strong computationalist variants of it that have emerged, which appear to hold that a pattern of information of a certain form and level of complexity is sufficient for consciousness.

The argument is usually given by considering questions like:

"how do I know your experience of red is not qualitatively more similar to my experience of violet than to my experience of red?"

"How do I know that the spectrum you see is not the same as the spectrum I see, but inverted?"

Note that nothing in the argument presupposes that we disagree on testable data such as the frequency of red light, the names for the various colours, etc.; at issue is only our direct experience of the colours - the nature of the so-called qualia.

Because of the difficulty of constructions such as 'my experience of red', and so on, and also to avoid the question of minor physiological details that might affect our perception of colour, I prefer the argument in a slightly different form.

Suppose we construct an impossibly high-powered computer that can model, in as much detail as we like, the structure of a living brain. We then model the effect of red light hitting the retina to which the model of the optic nerve is connected, and ask the question: how can we extract from our model a description of the ensuing experience which is sufficient to convince us that the modelled experience of 'red' is like (or unlike) our own?

I'm not prejudging the question of whether the model has had a real experience of red, qualia and all.
The point is that we stipulated that all the physical information about the brain is in the model, and yet we can't get the determination of the brain's "qualia" out of the model. What would satisfy us? A set of figures? Using those figures to put something on a screen? Run the model through a conversation and have it tell us?

Consideration of the attendant difficulties of describing our own experience of red in order to meet this criterion will show, I think, the implausibility of extracting such information from our model.

Something about "what red is like" seems to resist capture by means we ordinarily use to communicate information, so that we need to have seen red things ourselves in order to successfully dereference "what red is like". In the case of our computer model, this seems to nullify the apparent advantage of having all the information we could wish available at our fingertips.

The proponents of strong computationalism will sometimes at this point mention the possibility of psycho-physical laws and attempt to defer the question for consideration by some future science. Presumably this would be called psycho-physics, or perhaps neuro-qualiology; but whatever the name, it doesn't exist at present.

That approach begs the question of how to bridge the gulf in describability between a brain-state and a conscious experience. Brain-states, like any 'physical' (or informational) state of affairs - spatial arrangements, neural firing patterns, and measurable properties of photons are other examples - are in principle describable in the formal way required (because in this sense, physics is no less a formal discipline than computer science), in as much detail as we like (and our knowledge of physics/IT will allow), whereas the familiar components of our conscious experience are not. (Try it!)

In my view, this version of the inverse spectrum argument is not a demonstration of the existence of some weird nonphysical mental substance or ontologically unique feature of mental events, though it is often taken as such.

The real import, it seems, is that the existence of that which is not exhaustively describable by formal systems (such as the mathematics used in physics and computer science) presents an immediate knockdown argument against naive physicalism and its computationalist cousins, which seem bound to hold that a formal approach can be ontologically complete. Rather than "mental substance", this view merely implies that descriptions of "what is" available through formal means give out at a certain point, and might be better thought of as exhaustive about one aspect of existence than about existence per se.

The only convincing counter-argument I can conceive would be a design for a system which could in principle enable someone who has never seen to know what it is like to experience colour - and to convey this in an exact way, which we don't seem to be able to manage even for sighted people, without reference to samples.

Of course, you can describe, in detail, the mathematical properties of the colour space, the physics of light, the behaviour of the neurons. Any causal property you can name. Is it realistic to expect that this type of description would suffice?
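The trouble with this type of description can be made concrete. Here is a minimal sketch (the RGB encoding and the channel-swap "inversion" are illustrative assumptions of mine, not anyone's actual theory of colour vision): a permutation of colour coordinates is an isometry of the space, so every distance-based, causal-structural fact about the colours survives the inversion untouched.

```python
# Illustrative sketch: a colour-space "inversion" that preserves all
# formal structure. RGB triples and the red/green swap are assumptions
# for the sake of the example, not a claim about actual perception.
import math

def invert(colour):
    """Swap the red and green channels -- one simple 'spectrum inversion'."""
    r, g, b = colour
    return (g, r, b)

def distance(c1, c2):
    """Euclidean distance in RGB space, standing in for judged similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

red, green, violet = (255, 0, 0), (0, 255, 0), (127, 0, 255)

# Every pairwise similarity relation comes out the same after inversion:
for a, b in [(red, green), (red, violet), (green, violet)]:
    assert distance(a, b) == distance(invert(a), invert(b))
```

The point is structural: any description phrased purely in terms of relations within the space cannot distinguish the original assignment of qualia from the inverted one.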

The writeup below presents one contrasting view. Regarding psychophysics, I'm inclined to ask: "intensity of what?" The reply "you'd have to ask them" essentially concedes that our "psychophysics" doesn't provide the information that we're after, else why would we have to ask them? The sort of psychophysics needed should enable us to go from the brain-states to the "qualia" directly.

"If their pulse goes up in a bright red room, and mine does too, if their threshold of perception is the same (say it's easier to detect a very faint red than a very faint yellow)... concede all the physiological inwardness, all the brain states, and there's nothing left [to invert]."

But tell us of this inwardness, Gritchka! I'm not sure I see a physiological sense defined in that node. (Of course, the mysterious disappearance of "what there is to invert" - something that we know damn well is there - once we restrict the way we look for it as G. suggests is precisely what my argument is pointing out; so, again, the objection fails.)


Another way of looking at the problem is to ask: if I wake up one morning and my spectrum is inverted, what happens? Will I get used to it over time? Let's say that I see "red" things, like apples and blood, as green. (The meaning of this "as" is something to ponder.) After a few weeks, I am able to correctly apply the term "red" to apples and blood. Can we say that I now see them as red?

The "as" is basically the idea of qualia. Check out the node to see what I mean.

What is really interesting about the inverse spectrum argument is that there are empirical examples of things very close to it. One experimenter gave people goggles that turned everything upside down. He asked them to wear them for a long period -- I think it was several months. After time, the people adapted. They could go about their lives, perform normal tasks -- some of them could even go skiing. But they just didn't have the vocabulary to explain the change that had taken place. They just became frustrated when people asked them: "So, do you still see everything upside down?"

I personally think that the inverse spectrum argument is based on assumptions about the nature of perception that simply don't work. These assumptions are brought to light very well by thinking about the inverse spectrum argument -- which is why it is such a good intuition pump. One of these assumptions is the idea that there is a kind of perceptual "space," where things get arranged, a world in miniature that models the perceived world. This is a wishful assumption, I think.

BTW, there is a science of psychophysics -- I've taken a course in it. Scientists like Weber and Fechner (see Weber's law) made very rigorous scientific investigations into psychophysical laws, without missing a methodological beat. They asked people to report how the intensity of their experience varied with variations in objective stimuli.

To answer the question "Intensity of what?" you'd have to ask the participants in the experiment. Weber simply varied something measurable (the amount of light, or sound, for instance) and correlated it with reported changes in the intensity of the subject's experience. That's science, not ontology. But ontology had better listen to it, to avoid ruling out things that have already happened. :)
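The Weber/Fechner relationship can be stated compactly. A minimal sketch (the constant k and the threshold I0 below are illustrative values, not measured data): Fechner's law, derived from Weber's, says that reported sensation grows as the logarithm of stimulus intensity, so equal ratios of stimulus produce equal increments of sensation.

```python
# Illustrative sketch of Fechner's law, S = k * log(I / I0).
# k and the threshold I0 are placeholder values for demonstration.
import math

def fechner_sensation(intensity, k=1.0, threshold=1.0):
    """Sensation magnitude for a stimulus intensity above threshold I0."""
    return k * math.log(intensity / threshold)

# Doubling the stimulus adds the same increment to sensation no matter
# where you start -- the signature of Weber/Fechner scaling:
step_low = fechner_sensation(2.0) - fechner_sensation(1.0)
step_high = fechner_sensation(200.0) - fechner_sensation(100.0)
assert abs(step_low - step_high) < 1e-9
```

Note that the law relates a physical quantity to a *reported* magnitude; it quantifies the structure of reports, which is exactly why the question "intensity of what?" remains open.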