Yawn: More Abuse of the Quantum

Binocular rivalry is a phenomenon which occurs when conflicting information is presented to each of our two eyes, and the brain has to cope with the contradiction. Instead of seeing a superimposition or “average” of the two, our perceptual machinery entertains both possibilities in turn, randomly flickering from one to the other. This presents an interesting way to stress-test our visual system and see how vision works. Unfortunately, talk of “perception” leads to talk of “consciousness,” and once “consciousness” has been raised, an invocation of quantum mechanics can’t be too far behind.

One must be careful to distinguish a model of a phenomenon which actually uses quantum physics from a model in which certain mathematical tools happen to be applicable. Linear algebra is a mathematical tool used in quantum physics, but describing a system with linear algebra does not make it quantum-mechanical. Long division and the extraction of square roots can also appear in the solution of a quantum problem, but this does not make dividing 420 lollipops among 25 children a correlate of quantum physics.

Just because the same equation applies doesn’t mean the same physics is at work. An electrical circuit containing a capacitor, an inductor and a resistor obeys the same differential equation as a mass on a spring: capacitance corresponds to “springiness,” inductance to inertia and resistance to friction. This does not mean that an electrical circuit is the same thing as a rock glued to a slinky.
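To make the correspondence concrete, here is a minimal Python sketch (all parameter values are made up for illustration) that integrates the shared differential equation twice, once with the numbers labeled as a mass on a spring and once as an RLC circuit:

```python
import numpy as np

def damped_oscillator(a, b, c, x0=1.0, v0=0.0, dt=1e-4, steps=20000):
    """Euler-integrate a*x'' + b*x' + c*x = 0 and return the trajectory."""
    x, v, traj = x0, v0, []
    for _ in range(steps):
        acc = -(b * v + c * x) / a
        v += acc * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# Mass on a spring: m*x'' + friction*x' + k*x = 0 (made-up values)
m, friction, k = 0.5, 0.1, 2.0
mass_spring = damped_oscillator(a=m, b=friction, c=k)

# Series RLC circuit: L*q'' + R*q' + q/C = 0, under the standard mapping
# L <-> m, R <-> friction, 1/C <-> k
L, R, C = 0.5, 0.1, 1.0 / 2.0
rlc = damped_oscillator(a=L, b=R, c=1.0 / C)

# Same coefficients, same trajectory: same equation, different physics.
print(np.allclose(mass_spring, rlc))  # True
```

The identical output is exactly the point: nothing about the shared solution tells you whether the system is electrical or mechanical.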

MIXING THE QUANTUM AND THE CLASSICAL

One interesting thing about this paper is that the hypothesis is really only half quantum, at best. In fact, three of the four numbers fed into Manousakis’ hypothesis pertain to a classical phenomenon, and here’s why:

Manousakis invokes the formalism of the quantum two-state system, saying that the perception of (say) the image seen by the left eye is one state and that from the right eye is the other. The upshot of this is that the probability of seeing the illusion one way — say, the left-eye version — oscillates over time as

[tex]P(t) = \cos^2(\omega t),[/tex]

where [tex]\omega[/tex] is some characteristic frequency of the perceptual machinery. The oscillation is always going, swaying back and forth, but every once in a while, it gets “observed,” which forces the brain into either the left-eye or the right-eye state, from which the oscillation starts again.

The quantum two-state system just provides an oscillating probability of favoring one perception, one which goes as the square of [tex]\cos(\omega t)[/tex]. Three of the four parameters fed into the Monte Carlo simulation actually pertain to how often this two-state system is “observed” and “collapsed”. These parameters describe a completely classical pulse train — click, click, click, pause, click click click click, etc.
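To see how little quantum machinery is actually involved, here is a toy Monte Carlo sketch of this logic; the function name and all numerical values are my own inventions for illustration, not Manousakis’:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dominance(omega, obs_rate, n_obs=50000):
    """Toy version of the model's logic: between classical "observation"
    clicks, the probability of keeping the current percept goes as
    cos^2(omega * t); each click collapses the state and restarts the
    oscillation.  Click times form a Poisson train of rate obs_rate.
    All numerical values are made up."""
    state = 0                 # 0 = left-eye image dominant, 1 = right-eye
    t_dominant = 0.0
    durations = []
    for _ in range(n_obs):
        dt = rng.exponential(1.0 / obs_rate)   # the classical click train
        t_dominant += dt
        p_stay = np.cos(omega * dt) ** 2       # the quantum ingredient
        if rng.random() > p_stay:              # collapse flips the percept
            state = 1 - state
            durations.append(t_dominant)
            t_dominant = 0.0
    return np.array(durations)

durations = simulate_dominance(omega=2.0, obs_rate=10.0)
print(len(durations), durations.mean())
```

Note that the entire two-state formalism enters through the single line computing `p_stay`; everything else, including the statistics of the click train, is classical bookkeeping.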

What’s more, the classical part is the higher-level one, the one which intrudes on the low-level processing. Crudely speaking, it’s like saying there’s a quantum two-state system back in the visual cortex, but all the processing up in the prefrontal lobes is purely classical.

UNDERDETERMINATION

Manousakis relies upon data from experiments some other people have conducted (no crime there, of course). The most interesting data comes from an experiment where subjects were tested in a binocular rivalry setup: conflicting information was fed to the two eyes, and the time for which the image from each eye was “dominant” was recorded. The fun part of the research is that the experiment was done in two variations, with LSD and without. Manousakis uses his model to fit a curve to the data in both cases. By itself, this doesn’t “explain” the difference between what happens with LSD and without. It just provides a formula for a curve with enough parameters so that the curve can be fit in both cases.

Here’s one problem I have with Manousakis’ results. His model includes three different timescales: the frequency [tex]\omega[/tex] of oscillation and two parameters describing how often the “observation” events occur. However, the graphs presented in the paper appear to have at most two characteristic timescales; the results of curve-fitting on such data would then be under-determined. Also, none of the parameters are the same across the situations, and no explanation is provided for why they might differ.

Two of his figures look like a Poisson distribution would be a reasonable first approximation. This would describe a situation where the image from each eye is entertained in turn, and the probability of flipping to the other eye is constant over time. Instead of comparing to a Poisson distribution, or any other reasonable first guess (like Fisher–Tippett), he compares his model to an exponential decay which looks nothing like the data. It’s fine and dandy to show that an exponential decay won’t fit the measurements, but that doesn’t help distinguish Manousakis’ hypothesis from reasonable null hypotheses like Poissonian behavior.

In fact, digging into the literature, one finds that the duration of dominance is described by a gamma distribution, the distribution of a sum of multiple independent, exponentially distributed random variables, each with the same mean. The gamma distribution has two parameters: the number of exponential variables being summed, and their common mean.
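This characterization is easy to check numerically. The sketch below (with made-up values for the two parameters) builds gamma-distributed samples by summing exponential stages, then compares the sample moments to the known gamma formulas:

```python
import numpy as np

rng = np.random.default_rng(42)

# A gamma-distributed duration is the sum of k independent exponential
# "stages" with a common mean.  The values k = 4 and stage_mean = 0.25
# are made up for illustration.
k, stage_mean = 4, 0.25
samples = rng.exponential(stage_mean, size=(100_000, k)).sum(axis=1)

# Gamma distribution moments: mean = k * stage_mean,
# variance = k * stage_mean**2.
print(samples.mean())  # ~ 1.0
print(samples.var())   # ~ 0.25
```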

Manousakis fits a two-parameter curve with a four-parameter hypothesis. The numbers he gets out are, at face value, meaningless (so it’s no surprise that they differ so radically between the different curves he fits). Two combinations of his parameters, one dimensionful and the other dimensionless, might have actual significance.
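A toy example makes the point. In the deliberately over-parameterized model below (my own construction, not Manousakis’), only two combinations of the four parameters ever enter the formula, so wildly different parameter sets trace identical curves, and a fit can pin down two numbers at most:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 200)

def toy_model(t, a, b, c, d):
    """Four parameters, but only the combinations a*b (an amplitude) and
    c+d (a rate) appear in the formula; the individual values of a, b, c
    and d are meaningless."""
    return (a * b) * t * np.exp(-(c + d) * t)

# Two very different parameter sets, yet identical curves:
curve1 = toy_model(t, a=2.0, b=3.0, c=0.5, d=1.5)
curve2 = toy_model(t, a=6.0, b=1.0, c=1.9, d=0.1)
print(np.allclose(curve1, curve2))  # True
```

Fitting such a model to data and then reporting the four raw parameters would be exactly the kind of underdetermination at issue here.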

A POINT ON NETWORKS

Furthermore, optical illusions arise from mental processing which occurs at a level “before” or “beneath” that which we call consciousness. (Talk of such different “levels” is, it appears, commonplace in the field.) We don’t choose of our own vaunted free will to see the dancer spinning to the left, or the lines of equal length, or the boxes facing upward: something in our brain does that for us. The “I” can then choose, with a certain “effort of will”, to force the perception into another possibility (the dancer turns in the opposite direction; the boxes flip upside-down).

A neural network implemented in a computer, with no spooky notion of “consciousness” whatsoever, can be susceptible to “optical illusions” if it is presented with stimuli unlike those upon which it had been trained. The network might, for example, be trained to distinguish up-arrows from down-arrows; its space of possible states would have two attractors and could be modeled with a bistable potential. An “optical illusion” would be an input stimulus which does not have an unambiguous interpretation. With some stochastic noise present in the system, the network’s state could flip from one attractor to the other, changing the perception from up to down.
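The flipping behavior is easy to reproduce without any appeal to consciousness. Here is a minimal sketch, assuming a double-well potential as the bistable model; the noise strength and step size are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Overdamped Langevin dynamics in a double-well potential
# V(x) = x**4/4 - x**2/2, whose minima at x = -1 and x = +1 stand in
# for the network's two attractors ("up-arrow" vs. "down-arrow").
dt, noise, steps = 0.01, 0.6, 100_000
x, prev_sign, flips = 1.0, 1, 0     # start in the "up" attractor
for _ in range(steps):
    drift = x - x**3                # -dV/dx pulls toward an attractor
    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    sign = 1 if x > 0 else -1
    if sign != prev_sign:           # the "perception" has flipped
        flips += 1
        prev_sign = sign
print(flips)
```

With the noise term switched off, the state simply settles into one well and stays there; the random flips are driven entirely by the stochastic forcing.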

Since transitions from one perceptual state to another can occur without consciousness, I find the assumption that (in Mo’s words) “conscious awareness is generated anew each time one flips an ambiguous figure” to be unfounded.

WHY DO WE CARE?

Over at Neurophilosophy, Mo made note of an approving quotation from a certain Henry Stapp:

If it is correct, this is a landmark paper that for the first time uses quantum mechanics to elucidate brain dynamics and both matches existing experimental data and provides testable predictions.

It is not too difficult to match an existing set of experimental observations; that’s what curve fitting is all about. The more challenging part is that bit about providing “testable predictions.” I’ve explained why I’m doubtful on that score: the numbers which come out are, I suspect, fundamentally underdetermined. Beyond that, the curve-fitting done so far doesn’t really test any deeply, intrinsically quantum aspect of the hypothesis — all the actual knowledge about “brain dynamics” goes into the classical part, the sequence of times at which the two-state system is “observed.”

Incidentally, just who is Henry Stapp? This is strictly speaking irrelevant to the scientific topic at hand, but it might be interesting to know. Turns out, he’s a fellow who has his own notion of “quantum consciousness.” The mathematician Ray F. Streater has pointed out three fatal flaws in Stapp’s argument: first, Stapp believes that thoughts must be arrived at instantaneously, whereas experiments show that brain activity can begin half a second before the “conscious mind” thinks it has made a decision. Second, Stapp thinks that classical mechanics cannot include correlations, which is a real WTF moment for me. Third, thanks to his belief that thought requires instantaneous communication, Stapp needs some way to send information faster than light, and he finds that mechanism in — surprise! — quantum entanglement. However, the real world doesn’t work that way: even the “spooky action at a distance” seen in entanglement experiments doesn’t send information FTL.

The Poisson distribution is discrete (in fact, it’s used for counts, i.e., non-negative integers), while the others you mention (Fisher–Tippett, gamma, etc.) are continuous, as are many other possibilities, such as the lognormal, Weibull, and inverse Gaussian.

Since the underlying response is presumably continuous (though, as always with continuous variables, measured to some level of accuracy), why would a Poisson be appropriate (since that would give 0 probability to non-integer durations, when clearly non-integer durations occur)?

If the reference to a Poisson was just meant to reflect something like “it looks unimodal and right-skewed, but not heavy-tailed,” perhaps that longer description would be more fruitful, and less confusing to readers who know that a Poisson would assign zero probability to events that clearly do occur.

Secondly, your reference to “the duration of dominance” in the paragraph after mentioning the Poisson distribution was confusing (I had to check the original paper to realize the variable being measured in your earlier paragraph was the same thing).

Your other comments seem to make sense. I agree that the negative exponential distribution doesn’t look anything near appropriate.

Thanks for the comments. I had been sitting on this post for a while, wondering how I could improve it (I wasn’t particularly happy with the underdetermination part), and eventually I decided just to publish what I had. Better to say something which can be criticized than to say nothing at all, yes?

First, it’s a blog post, not an encyclopedia entry, something quick is better than nothing at all.

Second, there’s nothing I appreciate better than someone who is both clear enough that you can take issue with them, and prepared to discuss it.

If you only posted perfect prose, what would there be left to comment on?

That’s one thing I enjoy about Richard Dawkins’ writing. I don’t always agree with him, but he’s prepared to say what he means, and he is so damn clear about what he’s saying you can quickly tell that you disagree, and where, and what about. Bliss!

[Hmm… it strikes me that maybe there’s a perception of a bad tone from my previous comment. Was I rude in my comments? — I guess I was at least abrupt (time was pressing). If what I wrote strikes you as rude, I am sorry.]

The excessive amount of time I spend on the Internet has probably desensitized me to rudeness, so I didn’t notice anything particularly harsh in your previous comment. No worries! :-)

(I also enjoy the aspect of Dawkins’ writing you mention.)

If more people were excited about this particular arXiv preprint, I would probably revise what I originally wrote, but it seems that everybody went right along to the next sensation. Some time ago, I promised Tyler DiPietro an overview essay of quantum woo, so I might return to this material there; unfortunately, given the number of tasks I’ve said “yes” to doing, I don’t know when that might be.