Quantum Randomness

Refuting Determinism

Bell’s theorem can also be understood in another way: as using the assumption of no faster-than-light communication to address the even more basic question of predictability and randomness. It’s probably easiest to explain this idea via an example.

In 2002, Stephen Wolfram published his 1,200-page book A New Kind of Science, which set out to explain the entire universe in terms of computer programs called cellular automata. You can think of a cellular automaton as a giant array of 0s and 1s that get updated in time by a simple, local, deterministic rule. For example, “if you’re a 0, then change to 1 only if three of your eight neighbors are 1s; if you’re a 1, then stay 1 only if either two or three of your eight neighbors are 1s.” (This rule defines Conway’s Game of Life, one of the most popular cellular automata, invented by mathematician John Conway in 1970.)
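The update rule just described fits in a few lines of code. Here is a minimal Python sketch (the function name `life_step` and the toy "blinker" grid are illustrative choices of mine, not anything from Wolfram's book; cells off the edge of the grid are treated as dead):

```python
def life_step(grid):
    """One update of Conway's Game of Life on a 2-D list of 0s and 1s.
    Cells beyond the edges of the grid are treated as dead (0)."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count live cells in the 3x3 block around (r, c), then subtract
        # the cell itself, leaving the count of its (up to) eight neighbors.
        return sum(grid[i][j]
                   for i in range(max(r - 1, 0), min(r + 2, rows))
                   for j in range(max(c - 1, 0), min(c + 2, cols))) - grid[r][c]

    # A live cell survives with 2 or 3 live neighbors;
    # a dead cell becomes live with exactly 3 live neighbors.
    return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and live_neighbors(r, c) == 3)
             else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker": three live cells in a row oscillate with period 2.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Running the step twice returns the blinker to its starting configuration, the simplest example of the complex dynamics such rules can generate.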

Despite their almost childish simplicity, cellular automata can produce incredibly complex behavior (see the figure at right), such as “particles” and other structures that travel across the array, collide, merge, and disintegrate, and sometimes even seem to act like living organisms. Witnessing this behavior, generations of programming hobbyists have speculated that our own universe might be a cellular automaton at its base, a speculation that Wolfram embraces with gusto.

Personally, I have no problem with the general idea of describing nature as a simple computation—in some sense, I’d say, that’s the entire program of science! When I read Wolfram’s book, however, I had difficulties with the specific kind of computation he asserted could do the job. In particular, Wolfram insisted that the known phenomena of physics could be reproduced using a classical cellular automaton: that is, one where the bits defining “the state of the universe” are all either definitely 0 or definitely 1, and where the apparent randomness of quantum mechanics arises solely from our own ignorance about the bits. Wolfram knew that such a model would imply that quantum mechanics was only approximate, so that (for example) quantum computers could never work. However, it seemed to me that Wolfram’s idea was ruled out on much simpler grounds, namely Bell’s theorem. That is, because a classical cellular automaton would allow only correlation, rather than quantum entanglement, it would lead us to predict wrongly that the CHSH game can be won at most 75 percent of the time.
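The 75 percent bound can be checked by brute force. Without communication, a deterministic classical strategy is just a pair of answer tables, one for Alice and one for Bob, each depending only on that player's own question; shared randomness merely mixes such tables, so it can't raise the maximum. A short illustrative Python enumeration (the CHSH winning condition used here is the standard one: the XOR of the answers must equal the AND of the questions):

```python
from itertools import product

# Enumerate every deterministic strategy: Alice answers a0 on question 0
# and a1 on question 1; Bob answers b0 and b1 likewise.
best = 0.0
for a0, a1, b0, b1 in product((0, 1), repeat=4):
    wins = 0
    for x, y in product((0, 1), repeat=2):  # the four equally likely question pairs
        a = a1 if x else a0
        b = b1 if y else b0
        if (a ^ b) == (x & y):  # win condition: XOR of answers = AND of questions
            wins += 1
    best = max(best, wins / 4)

print(best)  # 0.75 -- no classical strategy wins more than 75 percent of the time
```

The best strategies win exactly 3 of the 4 question pairs (for example, both players always answering 0), which is where the 75 percent figure comes from.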

Wolfram had a further complication up his sleeve, however. Noticing the difficulty with explaining entanglement, he added a conjecture that, when two particles become entangled, a “long-range thread” forms in the cellular automaton, connecting two locations that would otherwise be far apart. Crucially, these long-range threads would not be usable for sending messages faster than light, or for picking out a “preferred frame of reference”; Wolfram knew that he didn’t want to violate special relativity. Rather, in some way that Wolfram never explained, the threads would only be useful for reproducing certain predictions of quantum mechanics, such as the one that the CHSH game can be won 85.4 percent of the time.
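For reference, the 85.4 percent figure is cos²(π/8) ≈ 0.8536, and it can be verified directly by simulating measurements on a shared entangled pair. Here is a hedged sketch in Python with NumPy; the measurement angles below are the standard optimal CHSH choices from the textbook analysis, not anything specific to Wolfram's model:

```python
import numpy as np

def basis(t):
    """Measurement basis for one qubit at angle t: row 0 is the vector for
    outcome 0, row 1 the vector for outcome 1."""
    return np.array([[np.cos(t),  np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

# The shared entangled state (|00> + |11>) / sqrt(2).
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Standard optimal measurement angles for the CHSH game.
alice_angle = {0: 0.0,         1: np.pi / 4}
bob_angle   = {0: np.pi / 8,   1: -np.pi / 8}

total = 0.0
for x in (0, 1):
    for y in (0, 1):
        A, B = basis(alice_angle[x]), basis(bob_angle[y])
        win = 0.0
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):  # winning outcomes for this question pair
                    amp = np.kron(A[a], B[b]) @ phi_plus
                    win += abs(amp) ** 2
        total += win / 4  # each question pair occurs with probability 1/4

print(float(total))  # ≈ 0.8536, i.e., cos^2(pi/8)
```

Each of the four question pairs is won with probability exactly cos²(π/8), which is why the overall success probability lands at 85.4 percent rather than the classical 75 percent.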

It turned out that the idea was still unworkable. In a 2002 review of Wolfram’s book, I proved that the long-range thread idea can’t possibly do what Wolfram wanted. More precisely, I showed that if a long-range thread can be used to win the CHSH game more than 75 percent of the time, then that thread also picks out a preferred frame of reference, or (worse yet) creates closed timelike curves, where time loops back on itself, which would allow Alice and Bob to send messages to their own pasts. (Note that Bohmian mechanics doesn’t contradict this theorem, because it does pick a preferred frame of reference. But such a frame—that is, something that tells you whether Alice or Bob made their measurement “first”—is what Wolfram had been trying to avoid.)

At the time I wrote down this observation, I didn’t think much of it, because the argument was just a small variation on ones that Bell, and later Clauser, Horne, Shimony, and Holt (the namesakes of the CHSH game), had made decades earlier. In 2006, however, the observation attracted widespread attention, when John Conway (the same one from Conway’s Game of Life) and Simon Kochen presented a sharpened and more general version, under the memorable name “The Free Will Theorem.” Conway and Kochen phrased their conclusion as follows: “if indeed there exist any experimenters with a modicum of free will, then elementary particles must have their own share of this valuable commodity.” To put it differently: Assuming no preferred reference frames or closed timelike curves, if Alice and Bob have genuine “freedom” in deciding how to measure entangled particles, then the particles must also have “freedom” in deciding how to respond to the measurements.

Unfortunately, Conway and Kochen’s use of the term “free will” generated a lot of avoidable confusion. For the record, what they meant by “free will” has almost nothing to do with what philosophers mean by it, and certainly nothing to do with human agency. A better name for the Free Will Theorem might’ve been the “Freshly-Generated Randomness Theorem.” For if you want to understand what the theorem says, you might as well imagine that Alice and Bob are dice-throwing robots rather than humans, and that the “free will of the elementary particles” just means quantum indeterminacy. The theorem then says:

Suppose you agree that the observed behavior of two entangled particles is as quantum mechanics predicts (and as experiment confirms); that there’s no preferred frame of reference telling you whether Alice or Bob measures “first” (and no closed timelike curves); and finally, that Alice and Bob can both decide “freely” how to measure their respective particles after they’re separated (i.e., that their choices of measurements aren’t determined by the prior state of the universe). Then the outcomes of their measurements also can’t be determined by the prior state of the universe.

Although the assumption that Alice and Bob can “measure freely” might seem strong, all it amounts to in essence is that there’s no “cosmic conspiracy” that predetermined how they were going to measure. At least one distinguished physicist, the Nobel laureate Gerard ’t Hooft, has actually advocated such a cosmic conspiracy as a way of escaping the implications of Bell’s theorem. To my mind, however, such a conspiracy is no better than believing in a God who planted fossils in the ground to confound paleontologists.