[Update (8/26): Inspired by the great responses to my last Physics StackExchange question, I just asked a new one—also about the possibilities for gravitational decoherence, but now focused on Gambini et al.’s “Montevideo interpretation” of quantum mechanics.

Also, on a completely unrelated topic, my friend Jonah Sinick has created a memorial YouTube video for the great mathematician Bill Thurston, who sadly passed away last week. Maybe I should cave in and set up a Twitter feed for this sort of thing…]

[Update (8/26): I’ve now posted what I see as one of the main physics questions in this discussion on Physics StackExchange: “Reversing gravitational decoherence.” Check it out, and help answer if you can!]

[Update (8/23): If you like this blog, and haven’t yet read the comments on this post, you should probably do so! To those who’ve complained about not enough meaty quantum debates on this blog lately, the comment section of this post is my answer.]

[Update: Argh! For some bizarre reason, comments were turned off for this post. They’re on now. Sorry about that.]

I’m in Anaheim, CA for a great conference celebrating the 80th birthday of the physicist Yakir Aharonov. I’ll be happy to discuss the conference in the comments if people are interested.

In the meantime, though, since my flight here was delayed 4 hours, I decided to (1) pass the time, (2) distract myself from the inanities blaring on CNN at the airport gate, (3) honor Yakir’s half-century of work on the foundations of quantum mechanics, and (4) honor the commenters who wanted me to stop ranting and get back to quantum stuff, by sharing some thoughts about a topic that, unlike gun control or the Olympics, is completely uncontroversial: the Many-Worlds Interpretation of quantum mechanics.

Proponents of MWI, such as David Deutsch, often argue that MWI is a lot like Copernican astronomy: an exhilarating expansion in our picture of the universe, which follows straightforwardly from Occam’s Razor applied to certain observed facts (the motions of the planets in one case, the double-slit experiment in the other). Yes, many holdouts stubbornly refuse to accept the new picture, but their skepticism says more about sociology than science. If you want, you can describe all the quantum-mechanical experiments anyone has ever done, or will do for the foreseeable future, by treating “measurement” as an unanalyzed primitive and never invoking parallel universes. But you can also describe all astronomical observations using a reference frame that places the earth at the center of the universe. In both cases, say the MWIers, the problem with your choice is its unmotivated perversity: you mangle the theory’s mathematical simplicity, for no better reason than a narrow parochial urge to place yourself and your own experiences at the center of creation. The observed motions of the planets clearly want a sun-centered model. In the same way, Schrödinger’s equation clearly wants measurement to be just another special case of unitary evolution—one that happens to cause your own brain and measuring apparatus to get entangled with the system you’re measuring, thereby “splitting” the world into decoherent branches that will never again meet. History has never been kind to people who put what they want over what the equations want, and it won’t be kind to the MWI-deniers either.

This is an important argument, which demands a response by anyone who isn’t 100% on-board with MWI. Unlike some people, I happily accept this argument’s framing of the issue: no, MWI is not some crazy speculative idea that runs afoul of Occam’s razor. On the contrary, MWI really is just the “obvious, straightforward” reading of quantum mechanics itself, if you take quantum mechanics literally as a description of the whole universe, and assume nothing new will ever be discovered that changes the picture.

Nevertheless, I claim that the analogy between MWI and Copernican astronomy fails in two major respects.

The first is simply that the inference, from interference experiments to the reality of many-worlds, strikes me as much more “brittle” than the inference from astronomical observations to the Copernican system, and in particular, too brittle to bear the weight that the MWIers place on it. Once you know anything about the dynamics of the solar system, it’s hard to imagine what could possibly be discovered in the future, that would ever again make it reasonable to put the earth at the “center.” By contrast, we do more-or-less know what could be discovered that would make it reasonable to privilege “our” world over the other MWI branches. Namely, any kind of “dynamical collapse” process, any source of fundamentally-irreversible decoherence between the microscopic realm and that of experience, any physical account of the origin of the Born rule, would do the trick.

Admittedly, like most quantum folks, I used to dismiss the notion of “dynamical collapse” as so contrived and ugly as not to be worth bothering with. But while I remain unimpressed by the specific models on the table (like the GRW theory), I’m now agnostic about the possibility itself. Yes, the linearity of quantum mechanics does indeed seem incredibly hard to tinker with. But as Roger Penrose never tires of pointing out, there’s at least one phenomenon—gravity—that we understand how to combine with quantum-mechanical linearity only in various special cases (like 2+1 dimensions, or supersymmetric anti-de Sitter space), and whose reconciliation with quantum mechanics seems to raise fundamental problems (i.e., what does it even mean to have a superposition over different causal structures, with different Hilbert spaces potentially associated to them?).

To make the discussion more concrete, consider the proposed experiment of Bouwmeester et al., which seeks to test (loosely) whether one can have a coherent superposition over two states of the gravitational field that differ by a single Planck length or more. This experiment hasn’t been done yet, but some people think it will become feasible within a decade or two. Most likely it will just confirm quantum mechanics, like every previous attempt to test the theory for the last century. But it’s not a given that it will; quantum mechanics has really, truly never been tested in this regime. So suppose the interference pattern isn’t seen. Then poof! The whole vast ensemble of parallel universes spoken about by the MWI folks would have disappeared with a single experiment. In the case of Copernicanism, I can’t think of any analogous hypothetical discovery with even a shred of plausibility: maybe a vector field that pervades the universe but whose unique source was the earth? So, this is what I mean in saying that the inference from existing QM experiments to parallel worlds seems too “brittle.”

As you might remember, I wagered $100,000 that scalable quantum computing will indeed turn out to be compatible with the laws of physics. Some people considered that foolhardy, and they might be right—but I think the evidence seems pretty compelling that quantum mechanics can be extrapolated at least that far. (We can already make condensed-matter states involving entanglement among millions of particles; for that to be possible but not quantum computing would seem to require a nasty conspiracy.) On the other hand, when it comes to extending quantum-mechanical linearity all the way up to the scale of everyday life, or to the gravitational metric of the entire universe—as is needed for MWI—even my nerve falters. Maybe quantum mechanics does go that far up; or maybe, as has happened several times in physics when exploring a new scale, we have something profoundly new to learn. I wouldn’t give much more informative odds than 50/50.

The second way I’d say the MWI/Copernicus analogy breaks down arises from a closer examination of one of the MWIers’ favorite notions: that of “parochial-ness.” Why, exactly, do people say that putting the earth at the center of creation is “parochial”—given that relativity assures us that we can put it there, if we want, with perfect mathematical consistency? I think the answer is: because once you understand the Copernican system, it’s obvious that the only thing that could possibly make it natural to place the earth at the center, is the accident of happening to live on the earth. If you could fly a spaceship far above the plane of the solar system, and watch the tiny earth circling the sun alongside Mercury, Venus, and the sun’s other tiny satellites, the geocentric theory would seem as arbitrary to you as holding Cheez-Its to be the sole aim and purpose of human civilization. Now, as a practical matter, you’ll probably never fly that spaceship beyond the solar system. But that’s irrelevant: firstly, because you can very easily imagine flying the spaceship, and secondly, because there’s no in-principle obstacle to your descendants doing it for real.

Now let’s compare to the situation with MWI. Consider the belief that “our” universe is more real than all the other MWI branches. If you want to describe that belief as “parochial,” then from which standpoint is it parochial? The standpoint of some hypothetical godlike being who sees the entire wavefunction of the universe? The problem is that, unlike with my solar system story, it’s not at all obvious that such an observer can even exist, or that the concept of such an observer makes sense. You can’t “look in on the multiverse from the outside” in the same way you can look in on the solar system from the outside, without violating the quantum-mechanical linearity on which the multiverse picture depends in the first place.

The closest you could come, probably, is to perform a Wigner’s friend experiment, wherein you’d verify via an interference experiment that some other person was placed into a superposition of two different brain states. But I’m not willing to say with confidence that the Wigner’s friend experiment can even be done, in principle, on a conscious being: what if irreversible decoherence is somehow a necessary condition for consciousness? (We know that increase in entropy, of which decoherence is one example, seems intertwined with and possibly responsible for our subjective sense of the passage of time.) In any case, it seems clear that we can’t talk about Wigner’s-friend-type experiments without also talking, at least implicitly, about consciousness and the mind/body problem—and that that fact ought to make us exceedingly reluctant to declare that the right answer is obvious and that anyone who doesn’t see it is an idiot. In the case of Copernicanism, the “flying outside the solar system” thought experiment isn’t similarly entangled with any of the mysteries of personal identity.

There’s a reason why Nobel Prizes are regularly awarded for confirmations of effects that were predicted decades earlier by theorists, and that therefore surprised almost no one when they were finally found. Were we smart enough, it’s possible that we could deduce almost everything interesting about the world a priori. Alas, history has shown that we’re usually not smart enough: that even in theoretical physics, our tendencies to introduce hidden premises and to handwave across gaps in argument are so overwhelming that we rarely get far without constant sanity checks from nature.

I can’t think of any better summary of the empirical attitude than the famous comment by Donald Knuth: “Beware of bugs in the above code. I’ve only proved it correct; I haven’t tried it.” In the same way, I hereby declare myself ready to support MWI, but only with the following disclaimer: “Beware of bugs in my argument for parallel copies of myself. I’ve only proved that they exist; I haven’t heard a thing from them.”

I’d prefer to say that I understand the case for MWI and agree with part of it: yes, MWI really is just the “obvious” story you would tell if you wanted to apply quantum mechanics to the entire universe. (The zillions of other worlds aren’t “added” per se; rather, they seem unavoidable once you accept that the Schrödinger equation applies always and everywhere.) Furthermore, all of the concrete alternatives to MWI on the market today are contrived and unsatisfactory in various ways.

On the other hand, for the reasons explained in the post, I reject the further step some people take these days: of asserting that MWI is as obviously true as the Copernican system, and that anyone who refuses to see that is an idiot.

“The problem is that, unlike with my solar system story, it’s not at all obvious that such an observer can even exist, or that the concept of such an observer makes sense.”

If physics is computable, we can always imagine some “alien god” building a computer to simulate physics and look at the universe from outside the simulation—even if the computer only simulates a small subset of the universe. In fact, we’ll be able to do it ourselves.

Can a photon arrive at one place having traveled different paths of different lengths to get there—meaning that, since the speed of light is constant and the photon arrived at one time, it must have left at different times? If so, how does this happen without worlds combining?

Is worlds combining not a plausible ingredient in an explanation of Born’s rule?

Great post! But I wonder just what the implications would be if the Bouwmeester et al. experiment shows no interference.

It certainly gives an opportunity to falsify Penrose’s theory of gravitationally induced collapse, and a no-interference result would make that theory much more credible.

But couldn’t the mirror also undergo something much like conventional environmental decoherence in a version of quantum gravity where the gravitational field is a thermal state of Planck-scale degrees of freedom? I’m not sure that any of those theories are sufficiently well defined to predict the outcome of this experiment, but as a wild guess I’d expect them to produce environmental decoherence under the same kind of conditions as Penrose’s theory produces a collapse.

Brian #6: No, it’s not a necessary aspect of MWI that the branches never again meet—in fact they will meet in the thermodynamic limit. All I meant was that, on the timescales relevant to us (and on the conventional picture), their chance of meeting is so absurdly small that it can safely be neglected.

Please let me commend to everyone the first chapter of David Deutsch’s book The Fabric of Reality (1997), not for the arguments that Deutsch develops in favor of MWI (arguments that are — literally — debatable), but rather for the wonderful arguments that Deutsch develops for the proposition that science is all about understanding, relative to which predictive power is merely supplementary.

Here are excerpts from Deutsch’s chapter, arrayed so as to argue (what I take to be) Deutsch’s main points:

“Being able to predict things or to describe them, however accurately, is not at all the same thing as understanding them. Facts cannot be understood just by being summarized in a formula, any more than being listed on paper or committed to memory. They can be understood only by being explained. Fortunately, our best theories embody deep explanations as well as accurate predictions.”

“To say that prediction is the purpose of a scientific theory is to confuse means with ends. It is like saying the purpose of a spaceship is to burn fuel. Passing experimental tests is only one of the many things a theory has to do to achieve the real purpose of science, which is to explain the world.”

“Even in purely practical applications, the explanatory power of a theory is paramount and its predictive power only supplementary. A scientific theory stripped of its explanatory content would be of strictly limited utility. Let us be thankful that real scientific theories do not resemble that ideal, and that scientists in reality do not work toward that ideal.”

“Knowledge does not come into existence fully formed. It exists only as the result of creative processes, which are step-by-step, evolutionary processes, always starting with a problem and proceeding with tentative new theories, criticism and the elimination of errors to a new and preferable problem-situation.”

“This is how Shakespeare wrote his plays. It is how Einstein discovered his field equations. It is how all of us succeed in solving any problem, large or small, in our lives, or in creating anything of value.”

“In the future, all explanations will be understood against the backdrop of universality, and every new idea will automatically tend to illuminate not just a particular subject, but, to varying degrees, all subjects.”

The final sentence quoted above is my favorite expression from all of Deutsch’s Fabric, because it is such a wonderfully sharp, wonderfully double-edged sword, which we will call “Deutsch’s Universal Double-Edged Sword” (DUDES).

By the way, “DUDES” is also a tribute to a much-used word of Ryan North’s wonderfully hilarious creation Dinosaur Comics. Check it out, DUDES!

As a concrete and realistic example of quantum-relevant DUDES, let us suppose — as many researchers are discovering — that we come to a collective appreciation, as a humbly empirical finding, that the introduction of small quantities of high-order noise can greatly increase the efficiency of computational simulation, not for all classes of dynamical system, but for a class sufficiently broad as to encompass all naturally occurring systems, as well as all laboratory experiments conducted to date.

The principle of DUDES affirms that this empirical finding must “illuminate all subjects”, and so in particular, it must illuminate the reality (or not) of MWI.

What might the mathematical and physical substance of this illumination be? By the principle of DUDES, this is an interesting question, and therefore a fine research opportunity, for which this appreciation and thanks are extended to David Deutsch!

Scott, what do you think of combining Wigner’s friend experiment with Peres’s delayed choice for entanglement swapping?

Suppose Alice, Bob, and Victor (Figure 1 of Ma et al. 2012) become conscious of their measurements (bases chosen at random) at spacelike separation. Then don’t you think each individual would need to consider the other two as superposed consciousnesses?

If this experiment was conducted, would you agree Penrose would have been proven wrong on this particular topic? If yes, do you actually need a space-like separation to get convinced?

Greg Egan #7: Yes, absolutely, there might be “gravity-induced environmental decoherence,” of a kind that left quantum-mechanical linearity formally intact. But even then, if the decoherence were irreversible for some fundamental reason (e.g., if the differences in the gravitational metric in the two branches propagated outward at the speed of light, and the cosmology was such that the branches could never recohere), then I’d tend to say that unitarity “remained on its throne only as a ceremonial monarch”! In other words, as soon as we postulate any decoherence (whatever its source) that occurs below the level of everyday experience, and that’s truly irreversible for fundamental physical reasons … at that point, I would say that we can now fully explain our experience without any reference to parallel copies of ourselves in other branches, and are therefore not forced into MWIism. And MWIism isn’t something that has great appeal to me unless I’m forced into it. But I suspect Deutsch would disagree here.

Scott #11: Sorry I was unclear. Wigner’s original proposal could hardly be done in the near future (except maybe using a conscious computer, but then Penrose could argue these computers are not truly conscious). To the contrary, Peres’s experiment has already been conducted, with Alice, Bob and Victor being instruments rather than conscious beings.

So, if Victor, Alice and Bob were conscious of their own measurements, would you agree that Peres’s proposal would be in essence a valid variation of the Wigner’s friend experiment?

Jiav #13: Thanks for clarifying! Alas, no, I don’t think an experiment where someone else measures a quantum state at spacelike separation from me says anything about these issues. For I’d only describe the other person as being in superposition if I already accepted MWI! If instead I believed in some dynamical-collapse theory, then I’d say the other person was in a definite configuration just like I was. And nothing in an experiment like this is able to distinguish the two possibilities. The only way I know to distinguish them is to look for interference between different mental states of a superposed observer, and that’s something that we’re obviously an extremely long way from doing.

Here is a suggestion for a viewpoint from which the belief that our universe is the only one seems parochial. Suppose for a second that we accept the MWI. Then, it is likely that there exists some universe in which:
1. There is a consensus on the MWI.
2. The people in this universe, by their belief in MWI, guess the existence of a universe like ours – i.e., a universe whose inhabitants believe it to be the only one.

Then, from the point of view of those people, our view would seem parochial.

Well, one of the spacelike-separated people could consciously choose a polarized photon to send to the other person, based on the result of their measurement. If this photon is in a superposition presumably so is the person who hand-crafted it.

Let’s restate this philosophical problem as a problem of ontology. Imagine that you want to write a computer program that perfectly simulates what’s going on at the quantum level. The problem then comes down to asking how many classes you need to define in your domain model: when you run your program, will there be only one class of object instantiated (the wave class), or two different types of objects (of wave class and particle class)?

The many-worlds interpretation is equivalent to saying that you only need to define one class in your model (the wave class), because wave objects are all there are. Other interpretations are equivalent to saying that you need to define at least two different classes (waves and particles), since both types of object can be instantiated—and you therefore also need to define the interface governing the message passing between the two different types of object, as per the rules of object-oriented programming.

When the problem is restated this way, much confusion immediately clears. It should be obvious that the many-worlds interpretation has much greater simplicity and clarity, and that all other interpretations are in fact a return of dualism in disguise (with all the associated problems thereof). It is for that reason that many worlds wins hands down.

It’s as extreme a violation of the Occam razor principle (entities should not be multiplied without necessity) as possible. I challenge anyone to try to come up with something that would violate it more. The most incredible thing about it is that it doesn’t offer even a single practical benefit over other interpretations like the truly minimalist statistical/ensemble interpretation.

Only physicists can be so arrogant as to postulate billions upon billions of even in principle unobservable universes just to make their theory a bit nicer in mathematical terms, or to avoid having to admit to their own ignorance.

Imagine if something like that were tried by, say, a doctor, who would come up with a theory of Alzheimer’s, claiming that it’s actually caused by the deaths of your parallel selves in alternative unobservable planes of biological existence. See, that’s why we haven’t found its cause till now and why we don’t have a cure for it! Makes perfect sense…

Does the notion of superdeterminism remove the need for MWI? And if so, is there a good reason why this is not an explanation that receives more attention? To my limited knowledge, Gerard ’t Hooft seems to be the only one whose work on this topic is taken somewhat seriously.

Also, if the concept of “consciousness” is integral to certain interpretations of quantum mechanics, does this not raise alarm bells? From what I’ve seen, there seems to be a consensus amongst the neuroscience community that “free will” is an illusion (see Sam Harris’s recent short text for references). I suppose you might have “consciousness” without “free will”, but that seems problematic, even at the level of then defining these terms.

Or Meir #15: LOL, I hadn’t thought of that! The trouble is, there’s also some branch of the wavefunction where they all agree that MWI is parochial, and that the only way to escape the parochial-ness is through a dynamical-collapse theory…

Well, one of the spacelike-separated people could consciously choose a polarized photon to send to the other person, based on the result of their measurement. If this photon is in a superposition presumably so is the person who hand-crafted it.

Sorry, while that argument sounds superficially plausible, the rules of QM prevent it from working! If the photon is entangled with the person who sent it, then I won’t see interference when I measure the photon (i.e., the photon will be in a mixed state). The only way for the photon to be in a pure-state superposition is if it’s unentangled with the sender. But in that case, the photon could just as well have been sent by a person in a definite configuration as by a superposed sender.

So in neither case can I conclude anything at all, by measuring only the photon, about whether the person who sent it was in superposition.
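The point above is really a two-line calculation with density matrices, and can be checked directly. Here is a minimal numpy sketch (purely illustrative: two qubits stand in for the sender’s state and the photon’s polarization), showing that tracing out an entangled sender leaves the photon in a maximally mixed state with no off-diagonal coherence, while an unentangled sender leaves it in a pure superposition:

```python
import numpy as np

def reduced_density_matrix(psi):
    """Trace out the sender (first qubit) from a two-qubit pure state."""
    m = psi.reshape(2, 2)      # row index: sender, column index: photon
    return m.conj().T @ m      # rho_photon = Tr_sender |psi><psi|

# Entangled sender+photon: (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Unentangled: sender definite, photon in superposition (|0>+|1>)/sqrt(2)
product = np.kron(np.array([1, 0]), np.array([1, 1]) / np.sqrt(2))

for label, psi in [("entangled", bell), ("product", product)]:
    rho = reduced_density_matrix(psi)
    purity = np.trace(rho @ rho).real   # 1 for a pure state, 1/2 if maximally mixed
    coherence = abs(rho[0, 1])          # off-diagonal term: the source of interference
    print(label, round(purity, 3), round(coherence, 3))
```

The entangled case gives purity 1/2 and zero coherence (a mixed state, so no interference fringes when only the photon is measured); the product case gives purity 1 and nonzero coherence—but says nothing about the sender.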

mjgeddes #17: While I love your coding analogy, the irony is that I draw the opposite conclusion from it than you do!

I’d say: whether your program should have “wave” objects only, or both “wave” and “particle” objects, strongly depends on what kind of functionality you think the program has to support.

To illustrate, suppose that I ask two people to code up a spaceship game. The first person dutifully starts defining objects corresponding to spaceships, asteroids, planets, etc. But then the second person says, “ha! what needless complexity! I’ve simply written a program that dovetails over all possible Turing machines, at least one of which is obviously a great spaceship game. Because of its greater simplicity and clarity, my program wins hands down.” I think we’d be justified to respond that, while simplicity and clarity are criteria that we use in evaluating code, they’re not the sole criteria.
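For readers unfamiliar with the term: “dovetailing” is the standard trick for interleaving infinitely many computations so that each one eventually gets unboundedly many steps. A toy Python sketch, with trivial counter “programs” standing in for Turing machines (the names and programs here are made up for illustration):

```python
from itertools import islice

def dovetail(make_program):
    """Interleave infinitely many programs: at round n, start program n,
    then advance every program started so far by one step, yielding each
    (program_id, result).  No program is ever starved of steps."""
    running = []
    n = 0
    while True:
        running.append(make_program(n))   # start program n (a generator)
        n += 1
        for i, prog in enumerate(running):
            yield i, next(prog)

def counter(i):
    """Toy 'program' i: a generator that just counts its own steps."""
    step = 0
    while True:
        step += 1
        yield step

# First 10 scheduled steps: program 0 runs in every round, program 1
# from round 1 onward, and so on.
schedule = list(islice(dovetail(counter), 10))
print(schedule)
```

Of course, the joke is that the dovetailer is “simpler” than any one spaceship game only if you refuse to count the cost of picking your game out of the haystack.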

It’s as extreme a violation of the Occam razor principle (entities should not be multiplied without necessity) as possible.

The trouble is that “entities should not be multiplied without necessity” is an archaic formulation of Occam’s Razor, which matches neither how it is invoked in science, nor how it should be invoked according to modern statistical theories.

To illustrate, imagine a theory that says that atoms don’t exist except when we look at them through a microscope, or at least when we observe some phenomenon whose explanation requires atomic theory. Otherwise, if you just stare at (say) your table and don’t try to take it apart, then it really is just a solid, continuous wood-substance with no internal structure. Now, notice that this theory can achieve an incredible reduction in “the number of entities in the universe,” while matching (because we insist it do so) the predictions of conventional physics. Even so, the theory is stupid, and no one (including you, I’m sure) seriously advocates it.

What’s going on here is that this theory reduced the number of “entities,” only at the expense of a staggering increase in the complexity of the laws. For example, how does the universe “know” whether anyone is looking at the table or not, so that it knows whether it needs to render the table down to the level of atoms?

This is the reason why modern versions of Occam’s Razor talk about the simplicity of theories themselves (either the number of bits needed to specify the theory in some way, or some other looser criterion), rather than the number of “entities” postulated by the theories. And thus, calling MWI “the most extreme violation of Occam’s Razor possible” gets it ironically backwards. MWI is one of the most extreme possible applications of Occam’s Razor—so extreme, in fact, that debate over whether the Razor should be taken that far seems entirely legitimate to me (see my comment #25).
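The description-length version of the Razor can be made concrete with a toy calculation. Below, two entirely made-up “theories” are written out as rule strings: the observer-dependent theory postulates vastly fewer entities (no atoms, most of the time), yet its rulebook—with its special cases for each observation—compresses to more bits than the uniform atomic theory:

```python
import zlib

# Two toy "theories," written as rule strings (entirely illustrative;
# neither is a serious physical theory).
atomic_theory = (
    b"every object is made of atoms obeying the same local rules"
)
observer_theory = (
    b"objects are continuous substance with no internal structure, "
    b"except: when viewed through a microscope, render atoms; "
    b"when a diffraction experiment is run, render atoms; "
    b"when chemistry is observed, render atoms; "
    b"when Brownian motion is observed, render atoms"
)

def description_bits(rules: bytes) -> int:
    """Crude proxy for a theory's complexity: compressed length in bits."""
    return 8 * len(zlib.compress(rules))

print(description_bits(atomic_theory), description_bits(observer_theory))
```

Compressed length is only a rough stand-in for Kolmogorov complexity, but it captures the modern reading of the Razor: count the bits needed to state the laws, not the entities the laws produce.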

Mitchell, before answering, please let me first disclaim (a) any implication that the quest to find good QM approximation algorithms is solely mine, and/or (b) any ambition to encompass the entirety of QM with PTIME approximations.

The reasons are pure common-sense: (a) many eminent researchers for many decades have sought (and found!) a multitude of ingenious PTIME approximations to quantum dynamical systems — of which many (perhaps even the majority) methods resort to non-Hilbert state-spaces — and there is no particular reason to think this progress will stop any time soon, and (b) as Scott often notes, there are excellent complexity-theoretic reasons to expect that scalable quantum computing architectures on perfectly flat state-spaces will forever be beyond the reach of PTIME simulation.

That said, this summer our Quantum Systems Engineering group is mainly interested to explore (both experimentally and theoretically) the question “How does Onsager/Lindblad-style transport theory work on non-flat Hamiltonian/Kählerian state-spaces?”, to which the satisfying answer — that in retrospect is mathematically unsurprising — is simply this: naturally and universally.

Here the notion of “transport” phenomena is understood pretty broadly; e.g., the subtle and intricate phenomena associated to single-photon emission-and-detection in low-loss optical cavities are regarded as instances of the dynamical transport of conserved quantities from sources/emitters to sinks/detectors.

Thus viewed broadly, transport theory is associated to a class of phenomena that (obviously) very many physicists have studied for (obviously) very many decades using (obviously) very many physical insights and mathematical toolsets.

What is *not* obvious — per David Deutsch’s DUDES principle discussed above — is all the ways that these transport-related ideas can be arrayed so as to mutually illuminate one another! It will be quite a while (IMHO) before we achieve a reasonably natural and unified understanding even of the transport-related ideas that are already extant in the theoretical and experimental literature … to say nothing of fascinating new classes of experiments that quantum information theorists (like Scott and Alex Arkhipov) are proposing.

So there are plenty of good ideas in the air, and ample useful work for everyone. This is good, eh?

Do you think Hilbert spaces are the best representation of QM? Maybe there is a Super QM with an unknown mathematical representation? Would a more sophisticated representation help? I believe Yuri Manin once suggested the following:
‘On the fundamental level our world is neither real nor p-adic; it is adelic. For some reasons, reflecting the physical nature of our kind of living matter (e.g. the fact that we are built of massive particles), we tend to project the adelic picture onto its real side. We can equally well spiritually project it upon its non-Archimedean side and calculate most important things arithmetically. The relations between “real” and “arithmetical” pictures of the world is that of complementarity, like the relation between conjugate observables in quantum mechanics.’ (Y. Manin, in Conformal Invariance and String Theory, Academic Press, 1989, pp. 293–303)

1ly1 #30: I’ve certainly heard of Manin, but I confess that I don’t understand that passage. With dynamical-collapse proposals, even if the details aren’t worked out, at least it’s clear what kind of thing we have in mind. By contrast, if someone tells me that we should switch from Hilbert space to a more “sophisticated” representation, or that “our world is adelic,” then I’m not even sure what we’re talking about until there’s some more concrete idea on the table.

I was about to write a long essay about why we should not believe in the many-worlds interpretation given our present understanding of quantum theory, but I watched The Avengers last night and I am afraid that it might make me turn into the Hulk. On the other hand, thanks very much for the link to the Aharonov conference. I will watch a few talks, even though I have a feeling that I might find some of them Hulk-inducing as well.

This Copernicanism thing reminds me of another alleged “Copernican revolution,” this one in philosophy, which might be relevant here. Kant claimed to have effected such a revolution by stating that our knowledge cannot be about the world as it is in itself, but only about the world as shaped by our understanding (thus providing a synthesis of empiricism and rationalism).

I tend to agree that there is a kind of copernican turn with quantum physics, but I think it is much more akin to Kant’s copernican turn than the advocates of MWI might think. It is something like this: our best physical theories are about relations between particulars (including relations to the observers), not about particulars themselves as existing independently of any observer.

Obviously a single particular cannot be an object of knowledge except in virtue of its relations to something else (a category, …), or if unrelated to an observer… MWI fails by taking too seriously the content of our theories as referring to particulars instead of relations, and by supposing that our theories somehow exist “outside our knowledge.”

In my opinion, this kind of relational interpretation satisfies Occam’s razor just as well (and in a sense even better: you don’t have to postulate an absolute reality-in-itself!) without the problematic “extravaganza” of MWI.

One reason I’m uncomfortable with MWI is the large number of unobservable ‘unused’ universes that end up just sort-of lying around. My question is whether this large number is infinite: Specifically, does MWI require admitting infinite quantities into physical theories?

To make the discussion more concrete, consider the proposed experiment of Bouwmeester et al., which seeks to test (loosely) whether one can have a coherent superposition over two states of the gravitational field that differ by a single Planck length or more.

I am a grad student currently working in the Bouwmeester group on this project. A minor technical point: it seeks a superposition of two coherent states of an object separated by more than the width of its wavepacket. For our proposal and related proposals, in practical terms this will be in the femtometer or picometer range. Still pretty small, but much larger than a Planck length!

What do you think about the argument in this paper: arxiv.org/abs/1109.6424?

The authors emphasize what they call ‘entanglement relativity’: e.g., in a hydrogen atom the electron and proton are entangled, but if one uses center-of-mass plus relative coordinates, the wave function separates.

They show that something similar can be done for a particle coupled to a heat bath of harmonic oscillators (a toy model for decoherence) and ‘entanglement relativity’ now means that one is dealing with two different decoherence processes.

They claim that an inconsistency follows for the original (and also modern) Everett interpretation, about when/how the world splits.

#33… From what I remember from a class a long while back (very primitive), adeles are “vectors” with coordinates corresponding to every possible valuation (the first coordinate being the real one, which is the p-adic valuation at p = infinity, with the remaining coordinates taken at all finite primes p). Under suitable definitions of the product they form a ring. I believe what he is saying is that, due to our inherent bias, we may very well look only at the first coordinate, leaving out all the other valuations. These may provide data that are not new, but may very well be complementary.

One easy way to see this is to think of all experimental data as it is… meaning we know only a finite precision of the actual data. We project them to the real world, meaning the p = infinity completion. It is very well possible that some suitable p-adic completion may also hold, where p is finite. On this idea vsv has a note; I only half understood it, so I may very well be wrong. http://www.math.ucla.edu/~vsv/papers/arithphys1.pdf
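To make the “one coordinate per valuation” picture concrete, here is a minimal Python sketch (the function names are my own, purely illustrative) of p-adic valuations and norms, together with the product formula that ties the real and p-adic “sides” together:

```python
from fractions import Fraction
from math import prod

def p_adic_valuation(x: Fraction, p: int) -> int:
    """Exponent of the prime p in the factorization of the nonzero rational x."""
    num, den = x.numerator, x.denominator
    v = 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p^(-v_p(x)), with |0|_p = 0 by convention."""
    if x == 0:
        return Fraction(0)
    return Fraction(1, p) ** p_adic_valuation(x, p)

# Product formula: |x| times the product over primes p of |x|_p equals 1,
# so the "real coordinate" and the p-adic ones jointly carry
# redundant-but-complementary information about x.
x = Fraction(63, 550)  # 63 = 3^2 * 7, 550 = 2 * 5^2 * 11
norms = [p_adic_norm(x, p) for p in (2, 3, 5, 7, 11)]
assert abs(x) * prod(norms) == 1
```

Only the primes dividing the numerator or denominator contribute a norm different from 1, which is why the finite product above suffices.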

OK, your counterexample of an “atoms are only there when you look” theory is valid against a quantitative interpretation of Occam’s razor, but not against a qualitative one, since it introduces more concepts than the original theory – in addition to all the usual atoms, it also has fundamental surfaces (or whatever atoms turn into when not being precisely observed).

But MWI violates both interpretations: not only does it postulate infinitely many additional universes, it also introduces some gargantuan hyperuniverse which contains ours and all the other branches, plus it adds new ill-defined concepts like the splitting of universes. Also, our universe is certainly not equivalent to the other branches, since unlike them it is observable, so something must pick it out from the ensemble – in other words, why did my consciousness follow this branch and not another?

Also, Occam’s razor certainly doesn’t favor many worlds even if you interpret it using some sort of simplicity or information-content measure. The major error I see here is the assumption that mathematics is all that defines a physical theory. That is simply false. Mathematics is only a part of it, and not the most important part (all physical theories can in principle be restated without math); by far the most important part of any physical theory is its interpretation, which tells us how its concepts and its mathematics relate to the real world.

An equation stating that a = db/dc is not Newton’s law; it only turns into one with a proper interpretation relating the symbols a, b, and c to the real world.

Now, the problem with MWI is that the math may be simpler without having to postulate measurement, but the interpretation gets far more complex and ill-defined, completely outweighing any benefit.

So even if we go with your interpretation of Occam’s razor as one which favors simplicity, the ensemble interpretation still obliterates MWI. To see that, imagine you had to write two papers, one explaining the ensemble interpretation and one explaining MWI, together with all the concepts behind them, to someone who had never had any contact with QM. Which of the two papers would be shorter? Which could be encoded in fewer bits?

The ensemble interpretation is concise and straightforward, while MWI leads to a mess of ill-defined concepts. What does a parallel universe even mean? How can universes split and still conserve energy? How can separate universes interfere? How can you get probability out of it? Why did my consciousness follow this branch during the split and not another? Etc., etc.

Scott remarks “If someone tells me that we should switch from Hilbert space to a more `sophisticated’ representation, or that `our world is adelic,’ then I’m not even sure what we’re talking about until there’s some more concrete idea on the table.”

Now let’s combine this with a third observation, one so commonplace that we are seldom conscious of it:

Commonplace Observation “We know well how to `tinker’ with quantum dynamics by adding Lindbladian noise to the dynamics …”

Combining these three observations yields two common-sense predictions:

Common-Sense Prediction A When the day comes that we switch from Hilbert space to “a more sophisticated representation”, the leading order dynamical predictions of the new theory will look like noise (will it be noise compatible with scalable error-corrected quantum computing? no one knows).

Common-Sense Prediction B In consequence of David Deutsch’s DUDES principle, we can reasonably hope to make progress toward a natural and universal post-Hilbert dynamics by naturalizing and universalizing our present understanding of noise dynamics.

The key point is that naturalizing and universalizing our present understanding of quantum noise dynamics is a broad research path upon which any researcher can set forth immediately, in the reasonable hope that it will lead to illumination of one form or another … if not fundamental insights, then perhaps practical engineering applications.

This ease-of-starting stands in marked contrast to quantum research paths that require genius-level inspiration even to identify the trail-head (e.g., Manin’s “our world is neither real nor p-adic; it is adelic”). Genius-level quantum insights are a great starting point, but (obviously) as a general rule they are not available.

1) Just as Ockham’s razor, properly interpreted, argues for dispensing with measurement as an unanalyzed primitive, it also (it seems to me) argues for dispensing with physical reality (as something distinct from mathematical reality) as an unanalyzed primitive. On this view, the many branches of MWI are just a tiny tip of the iceberg of reality. Mathematical structures exist; some mathematical structures are sufficiently complex in the right sorts of ways that they contain self-conscious substructures; our Universe, with its many branches, is one of many such structures. To single out our universe from all the other mathematical structures, and to declare it uniquely “real”, strikes me as (to adapt your words) an unmotivated perversity that mangles the simplicity of mathematics for no better reason than a parochial urge to place our own experiences at the center of reality.

2) One might adopt your anti-MWI arguments for service here, asking, e.g., “Who is the observer who can in principle take a bird’s-eye view and see that our own many-branched universe occupies no particular privileged place among the many other mathematical universes that we know it’s possible to describe?” The answer, of course, is the mathematical physicist, who has a lifetime of experience writing down different models of the universe and therefore sees something of the entire landscape. It’s true that all of the math-physicists’ models are far too primitive to include anything like self-conscious beings, but it’s also true that your theoretical Copernican observer flies too far above the earth to see the details. Our math physicist, like your Copernican observer, glimpses a highly undetailed picture of something that is nevertheless undoubtedly there. And there is absolutely nothing in the mathematics to suggest that, e.g., the Einstein-de Sitter universe is any more or less “real” (whatever that means) than the Gödel universe or any of a hundred other universes that we see in the literature, many of which look like extremely rough sketches of the universe we happen to inhabit.

Bottom line: Ockham’s razor tells me not to add unnecessary primitives, especially (as you point out) when it’s easy to imagine an observer to whom those unnecessary primitives would seem entirely arbitrary. That’s an argument for the reality of the MWI branches, but it’s an even better argument for ditching the unnecessary primitive of physical reality, because in that case we know who the key observer is. And this has the nice side effect of relieving us from having to ask why the Universe exists in the first place. Mathematical objects exist; the Universe is a mathematical object; therefore the Universe exists. Of course this leaves the question of why mathematical objects exist, but I think I’ll stop here for now.

1.
>> “On the other hand, when it comes to extending quantum-mechanical linearity all the way up to the scale of everyday life, or to the gravitational metric of the entire universe—as is needed for MWI—even my nerve falters.”

Could you please tell us precisely what it was that led to _your_ nerve faltering: (a) the fact that it’s a _linear_ theory, or (b) the part that it’s the _quantum_ theory (with all its peculiarities included in it)?

2.
About Occam’s razor: “plurality should not be posited without necessity.” Ayn Rand’s formulation of it: “concepts are not to be multiplied beyond necessity.”

IMO, the reason none should take MWI as a serious hypothesis of physics (let alone as a serious theory) has, in a way, little to do with Occam’s razor. The fallacy involved here is, IMO, much more crude.

Concepts of physics (i.e. certain mental objects which together form the contents of physics as a knowledge-discipline) are supposed to stand for something in the perceivable reality. Sense-perception is the basis of all knowledge.

When a mental object is posited to stand for something that can in principle never be perceived, it ceases to have any valid epistemological status, and must not be taken seriously.

Historical examples of such mental objects include things such as angels and fairies pushing the planets around. In comparison, the late-20th-century version of the error posits not just a few imaginary conscious objects but an infinity of in-principle-imperceptible universes. It’s supernaturalism taken to its logical extreme. And supernaturalism it is, even if it appears in a non-religious guise—there evidently are secular ways to be a mystic, too!

3.
If one has to accept MWI as a serious hypothesis, then, by the same token (i.e. literally so, i.e. simply applying MWI), one must also take such things as the following seriously:

(i) Every man always wins … in some or the other universe.
(ii) Every man is immortal … in some or the other universe.
(iii) Hitler is busy in a forever loop, packing chocolates and gifts he sends to Jewish children … in some or the other universe.

When I came to (iii), I caught myself. It reminded me that I wanted to ask you (or anyone else who takes MWI seriously enough to read more than a page or two about it) the following:

How is the issue of consciousness handled in MWI? Any idea? In specific terms, I wanted to point out two different kinds of (supernaturally) imaginary scenarios:

(a) One is to say that when a bullet aimed at you arrives near you, somehow the EM field and the gravity field and the wind pressure and whatnot physical conditions so combine that the bullet disintegrates into a harmless gas of its constituent atoms and molecules at just a safe distance from you (and these atoms and molecules also disperse rapidly, even while obeying the c limit). Such an outcome is extremely improbable, but, hey, with MWI, it _necessarily_ and _actually_ happens in some or the other universe.

Notice, the bullet-disintegration and all is all a purely physical effect; it’s purely an inanimate behaviour; consciousness has nothing to do with it.

(b) Another possibility is to say that for every conscious choice, both the choice-path actually taken and also all the alternatives to it, also begin to exist in the correspondingly spun-out parallel universes. Thus, every act of free will also spins another _physical_ universe. Subtly implicit here is a physically active and causal role that consciousness plays in splitting the universes.

The loving, jolly, Santa Claus-competing (and in fact Santa Claus-bettering) version of Hitler requires (b)—it involves his free will choice. However, for the Jews (and, logically, everyone) to survive Hitler’s army (and, logically, all armies) despite bullets and nukes hurled at them, it is merely sufficient that (a) holds.

My question is: Do the MWI advocates mean the version (a) or (b) of their supernaturalistic outpouring aka “interpretation” or “theory”?

[I hope that I never would have to write even this much on/about MWI in my future life in _this_ universe—the only universe that is meaningful to talk of/about/etc.]

Nex #44: While at some level I share your skepticism about MWI, there are two important things I found missing from your comment.

The first is the number of times in the history of physics that enormous discoveries have been made, simply by people trying to keep their equations as clean and simple as possible, and then taking completely literally what the equations said. Some famous examples include: the discovery of quantum mechanics in the first place, Dirac’s discovery of the relativistic wave equation and his prediction of antiparticles, the prediction of black holes and an expanding universe from GR, Gell-Mann’s discovery of quarks… Rightly or wrongly, the MWI proponents consider themselves in that same tradition.

Second, I’m not sure exactly what you mean by “the ensemble interpretation.” Do you mean Copenhagen? If so, I’d consider it less an “interpretation” than a decision to treat quantum mechanics instrumentally, and simply not to ask certain questions.

Foremost among those questions is the following: what happens if you try to put a human being such as yourself in a coherent superposition state, as the laws of unitary evolution seem to allow?

If you say that nothing much happens—i.e., quantum mechanics continues to hold, another person could observe the interference pattern revealing that you were, indeed, in a superposition of two different mental states, etc.—then I’d say that MWI, or something like it, has basically been vindicated by experiment. Or in other words, any remaining debates at that point are semantic: I’m willing to say that the main substantive question at issue would have been resolved in favor of MWI.

This then leads to the core of my position: that in order to kill MWI (which would, of course, be a great scientific advance!), it’s necessary and sufficient to explain what’s wrong about the idea of putting a conscious being into superposition. I suspect that this position makes some people uncomfortable, since it reminds them of something they’d rather ignore: that the real reason why the quantum interpretation debate is so contentious, why it induces such a feeling of vertigo and of little or no progress being made, is that the even bigger debate over consciousness and the mind/body problem is always lurking in the background.

Scott – Are you omitting the (purported) problems about the intelligibility of having probabilities without uncertainty or frequencies because you think there’s no problem, or because you think they’re not an argument against MWI but only against the completeness of its current state?

Scott, during your talk [very interesting, but beware of white socks :-)] you proposed a “Turing test” for free will. Namely, you proposed the revised question “Is it physically possible to build a machine that, given some human’s environmental stimuli as an input feed, predicts that human’s future choices to any desired accuracy and arbitrarily far into the future – at least in the probabilistic sense (…)”.

Have you considered a diagonalization argument? If this theoretical machine were computable, then humans could simulate it and base their decisions on defeating the machine; thus the machine cannot exist.

Peli Grietzer #51: I omitted those arguments because I know full well what the MWIers want to say—that the relevant uncertainty in this picture is “indexical” uncertainty, that is, uncertainty about which of the copies you are. Such uncertainty occurs in plenty of thought experiments having nothing to do with QM: for example, you know that in the world there are exactly two people, one blond and one redhead, and you wake up in a room with no mirror and with amnesia about which person you are. In such a case, you’re uncertain, but for reasons having nothing to do with any uncertainty about the “objective state of the world.”

And no, I don’t find it likely that future “development” of MWI is going to clarify this issue: it is what it is! In particular, if we agree to talk in this way about a probability distribution over various copies of yourself in the different Everett branches, then lots of well-known arguments (Gleason’s theorem, Zurek’s “envariance” argument, etc.) make a strong case that, given the framework of Hilbert spaces and unitary evolution, the distribution must be given by the standard Born rule and not some other rule. So the question “why the Born rule?” is not one that keeps me up at night; and at any rate, it certainly isn’t a special problem for MWI (one could just as well ask it in any interpretation).

As I see it, the real questions are: do we want to accept the view on offer, involving indexical uncertainty over different copies of yourself in different branches of the wavefunction? Are we forced by the evidence to accept that view? Is there any sensible alternative? And I think those questions, in turn, ultimately hinge on the empirical question of whether it is, in fact, possible to place a conscious observer into a coherent superposition of mental states, or whether there’s a not-yet-discovered obstruction to that (see my comment #50).

Scott #51, if I understand correctly, you are saying that there are no problems with the introduction of probabilities and the Born rule in MWI? For example, why not use the amplitude to the 4th power, or, better, a linear combination of the second and 4th powers with a small parameter in front of the fourth? Please don’t tell me about Gleason’s theorem; it simply says that a world with such a law would be a terrible place.

Jiav #52: Sorry about my unfashionable attire; I had to pack in a hurry!

Yes, I’m familiar with the diagonalization argument, but I don’t see it as relevant here, for the simple reason that there’s no reason why the mad scientists predicting you would choose to give you access to their prediction machine! (Note that it’s not enough for you and the prediction machine to be doing something computable, since you don’t have introspective access to which computable function you’re computing. Determining which function you’re computing would presumably require a brain scan, at the least.)

Furthermore, even if you can “merely” be predicted by a black box to which you don’t have access—the predictions becoming invalid if and when you do get access—that already seems to me to raise all the bizarre consequences for personal identity that I mentioned in the talk. For example, provided only that you don’t open the box, such a box would suffice for making backup copies of yourself or for faxing yourself to Mars!

And actually, even if you did open the box, my definition of “prediction” allows the predictor to get access to whatever you’re seeing, hearing, touching, etc. as a continuous input feed. Because of this, the predictor wouldn’t need to predict its own future behavior, which would indeed generate a contradiction. Instead, it would “merely” need to predict your behavior as a function of whatever you learn on opening the box.

What does a parallel universe even mean? How can universes split and still conserve energy? How can separate universes interfere?

I think these questions are red herrings. “Splitting” and “parallel universes” are just the verbal patter around MWI. Fundamentally, I take MWI to be the claim that measurement is to be understood as a sort of parochial illusion, since what really exists is a state vector evolving via the Schrödinger equation, full stop. If you accept that unitary evolution is all there is, then it follows that in standard measurement situations, you’ll get observers becoming entangled with the quantum systems they observe, like so:

(α|0>+β|1>)|E> → α|0>|E0>+β|1>|E1>

And it seems very natural to describe such an evolution in terms of a “splitting” into “parallel worlds.” But if we don’t know exactly what that means, then in some sense that’s our problem, not the theory’s!

Because MWI really just means “unitary evolution and nothing more,” it follows that MWI can’t fail for some simple technical reason like violating the conservation of energy. The laws of physics are as happy as a clam in this picture! If there’s any obvious problem, it’s that—much like with turning yourself into a clam, in fact!—we’ve arguably bought “happiness” only at the cost of erasing our actual experiences, of ignoring the “we” who are happy or sad and who came to know about quantum mechanics in the first place.
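The displayed evolution is easy to play with numerically. Here is a minimal NumPy sketch (a toy, with a single qubit standing in for the entire environment) of the premeasurement step and the resulting loss of off-diagonal interference terms in the system’s reduced state:

```python
import numpy as np

# (α|0> + β|1>)|E>  →  α|0>|E0> + β|1>|E1>
# A CNOT plays the role of the "measurement" unitary: it copies the
# system's basis state into the environment qubit.
alpha, beta = 0.6, 0.8                 # arbitrary real amplitudes, α² + β² = 1
system = np.array([alpha, beta])
env = np.array([1.0, 0.0])             # environment starts in |E> = |0>
psi = np.kron(system, env)             # joint state before the interaction

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi_out = CNOT @ psi                   # = α|0>|E0> + β|1>|E1>

# Trace out the environment: the system's reduced density matrix is
# diagonal, carrying the Born-rule weights of the two "branches".
rho = np.outer(psi_out, psi_out.conj()).reshape(2, 2, 2, 2)
rho_sys = np.einsum('iaja->ij', rho)   # partial trace over the env index
print(rho_sys)                         # diag(α², β²); off-diagonals vanish
```

Before the CNOT, the same partial trace would give nonzero off-diagonals, i.e. a system still capable of interference; in this toy it is the entanglement with the environment, not any collapse, that kills them.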

Could you please tell us precisely what it was that led to _your_ nerves faltering: (a) the fact that it’s a _linear_ theory or (b) the part that it’s the _quantum_ theory (with all its peculiarities included in it)?

I was referring to the particular implication of quantum-mechanical linearity that an observer (me, for example) could exist in a coherent superposition of different mental states, which states could then interfere with each other. That’s about as radical a revision to my concept of personal identity as I can imagine, and it’s not a step that I personally want to take unless I’m forced to take it. Hence my comments above (and my remarks in the OP) about the possibility of a yet-undiscovered obstruction to carrying the requisite experiment out.

Human: Dude! That’s unfortunate, but you’re right, that’s obvious from the math. I can’t argue, but… wait a minute, why are you telling me this? Don’t your computations stipulate that I should not know this result?

Predictor: Well, that’s the problem. I’m a bit ill at ease that my prediction is somewhat restricted. But look at this new simulation! Now that you know your determinism, it is clear you’ll act as a decent being in the future. And, cherry on the sundae, this prediction doesn’t change when you know it. So my prediction is now perfect in every case: you’ll be kind and have no free will!

Human: Let me have a look… yeah, true, again one can’t argue with an equation, and the solution is stable upon my knowing it. It seems being kind and having no free will is what I’ll choose from now on. Thank you so much!

if I understand correctly, you are saying that there are no problems with the introduction of probabilities and the Born rule in MWI?

No, I don’t think there’s no problem about justifying the specific form of the Born rule—just that

(1) the problem isn’t worse for MWI than for any other interpretation, and

(2) leaving aside the interpretation debate, the problem isn’t nearly as bad as many people seem to think. For we really do have detailed arguments explaining why you can’t change the Born rule even a little, without turning QM into a pile of crap (allowing superluminal signalling, instantaneous solution of NP-complete problems, etc). How much more “justification” for a fundamental rule of physics do you want, or can you reasonably expect?
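One small facet of those arguments can even be checked numerically: among power-law rules, only the exponent 2 yields “probabilities” that stay normalized under arbitrary unitary evolution. A quick sketch (an illustration only, not a proof, and certainly not a substitute for Gleason’s theorem):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random normalized 3-dimensional state
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

# Random unitary: QR decomposition of a complex Gaussian matrix
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

for k in (2, 4):
    before = np.sum(np.abs(psi) ** k)
    after = np.sum(np.abs(U @ psi) ** k)
    print(f"sum of |amplitude|^{k}: {before:.4f} -> {after:.4f}")
# k = 2: both sums equal 1 for every unitary (the 2-norm is preserved).
# k = 4: the sum changes under U, so an |amplitude|^4 rule could not
#        assign unitarily invariant, normalized probabilities.
```

The same check fails for any mixture of exponents, which is one very crude way of seeing why tampering with the Born rule is harder than it first looks.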

Great post, Scott! Here is something I truly don’t understand in your approach. Why would an obstruction to having an observer like you exist in a coherent superposition of two mental states necessarily violate the linearity of quantum mechanics? And, more generally, why isn’t it possible that such an obstruction could be discovered within the framework of quantum mechanics?

(I also don’t understand why a revision of the notion of personal identity based on such a hypothetical possibility disturbs you…)

My skepticism that the MWI(s) have any scientific content whatever led to Chris Fuchs buying me lunch at the most recent APS March meeting. As a PhD student, I find that a standard of success difficult to dismiss.

(Oh, OK, it’s also led me into some mathematics I find rather interesting, but I’ve yet to make those ideas into a publishable package.)

… and if we want to argue parochialism, well, if one takes the Schrödinger Equation as describing the actual dynamics of observer-independent, ontic stuff — of ψ-flavoured goop — then saying that the same dynamical rule has to hold for the 96% of the universe we can’t experiment upon because it’s dark matter or dark energy or His Dark Materials — I find that astonishingly parochial.

Hey Scott, I just want to remind that a year has passed since you last held a 24-hour-ask-Scott-anything event. Have you scheduled this year’s question day? And will you commit now to answering questions continuously for the entire 24 hours, rather than wimping out like last year and going to bed for a while?

Scott: “I think these questions are red herrings. “Splitting” and “parallel universes” are just the verbal patter around MWI. I take MWI to be the claim that measurement is to be understood as a sort of parochial illusion, since what really exists is a state vector evolving via the Schrödinger equation, full stop. If you accept that unitary evolution is all there is, then it follows that in standard measurement situations, you’ll get observers becoming entangled with the quantum systems they observe, like so:
(α|0>+β|1>)|E> → α|0>|E0>+β|1>|E1>”

OK, so we have a nice simple equation, but the problem is that it doesn’t illuminate anything; on the contrary, it obscures the problem with needless metaphysics.

To see why, let’s say we do a trivial experiment: I have a timer linked to a detector, and two unstable atoms; the first decay starts the timer, the second decay stops it. After running this experiment I have some measured value for the time between the two decays – real, solid information, stored in bits on my computer – and I want to know: where did this information come from?

Can MWI tell me anything intelligible about this? I don’t think so. Although it supposedly does away with the measurement problem, to me the problem is still there; only now it’s not “why does the wavefunction collapse to this value?” but “why is my consciousness entangled with this value / why did it choose to be on this branch of the hyperuniverse?” I certainly don’t see this as an improvement.

What benefit does said unitarity offer us here? I can’t see any. However we dress up the question, the truth is we simply don’t know the answer; we don’t know where this information comes from. Invoking MWI doesn’t make the problem any better; on the contrary, it entangles it with another famously thorny problem – that of consciousness – which is precisely the last thing we should want if we were serious about trying to solve it.

Nex #67: OK, then I’d call the “ensemble interpretation” not so much an interpretation as just a restatement of the problem, or better, an attempt to divert attention from it. If “the wavefunction isn’t applicable to individual systems,” then how should we describe the state of an individual system? Even more importantly, what is it that comes in and stops the Schrödinger equation from applying at the level of measuring devices or brains?

Personally, I tend toward the agnosticism that you yourself seem to express in your last paragraph: I’d say there’s something deep about measurement that we have yet to understand. If I have to account for (or rather, explain away) measurement using existing concepts, then I think MWI is, as Steven Weinberg once said, “like democracy: terrible except for all the alternatives!” But what I really hope for is some new discovery that will change the outlook entirely.

Blake Stacey #64: I don’t see what bearing dark matter or dark energy could have on MWI, unless you’re prepared to argue that they could change the framework of quantum mechanics in some way. For dark energy, that’s actually on the border of plausibility: after all, dark energy does shape the causal structure of the universe on the largest scales, making the Hilbert space accessible to us finite- rather than infinite-dimensional. And it’s one of the barriers to using the AdS/CFT correspondence to get a fully unitary quantum theory of gravity (right now, that correspondence only works for certain anti de Sitter spaces, not de Sitter spaces like ours). For dark matter, on the other hand, I have no clue how such an argument would go: we’ve discovered plenty of other new particles over the past century, and all of them have been perfectly describable with good old quantum mechanics…

how does the fact that we can’t change the Born Rule give any plus points to MWI?

It doesn’t. I was simply pointing out that the “why the Born Rule and not some other rule?” question can be asked in any interpretation, so that to whatever extent MWI doesn’t answer it, that doesn’t seem to me like a special problem for MWI.

Ockham’s razor tells me not to add unnecessary primitives, especially (as you point out) when it’s easy to imagine an observer to whom those unnecessary primitives would seem entirely arbitrary. That’s an argument for the reality of the MWI branches, but it’s an even better argument for ditching the unnecessary primitive of physical reality…

That was a wonderful comment; sorry for taking so long to respond to it!

I should point out that some people would probably use your argument in a way you don’t seem to have intended: as a reductio ad absurdum of MWI! “Sure, it would simplify things to ditch measurement as a primitive, and leave only the Schrödinger equation. But it would simplify things even more to ditch the Schrödinger equation and leave nothing at all—or rather, ‘the totality of mathematical objects’ or something like that! Ergo, we see that simplicity can’t possibly be the sole criterion for evaluating physical theories.”

However, one place where the analogy between MWI and Max Tegmark’s “Mathematical Everythingism” breaks down, is that MWI is at least trying to account for an actual fact of our experience (namely, quantum interference experiments). If we judge a theory by the ratio of the amount it explains to the amount it needs to postulate arbitrarily, then Mathematical Everythingism seems to me to have a ratio of 0/0 … which makes it wonderful, but also terrible!

I am a grad student currently working in the Bouwmeester group on this project. A minor technical point: it seeks a superposition of two coherent states of an object separated by more than the width of its wavepacket. For our proposal and related proposals, in practical terms this will be in the femtometer or picometer range. Still pretty small, but much larger than a Planck length!

Thanks so much for the informative comment, and sorry for the delay responding!

I wasn’t talking about the separation of the actual object’s wavepacket, but about the separation that would theoretically be induced by that wavepacket separation in the state of the gravitational metric. As you probably know, Penrose believes in a gravitational “objective reduction” process, and as a criterion for when the “objective reduction” takes place, he’s proposed that it happens when the mass/energy distributions in two components of a superposition become different enough that (loosely speaking) “their gravitational fields differ by one Planck length or more.” And as I understand it, testing that and related ideas was Bouwmeester’s original motivation for the experiments you’re working on—though I’m sure you could tell me more about how far you are from starting to test Penrose’s quantitative conjectures!

1ly1 #74: Not quite. I do think that the known limits of quantum computers (for example, the optimality of Grover’s algorithm) mean that quantum computing can’t provide nearly as strong an argument for MWI as its proponents would like. For when you examine how quantum algorithms actually work, you find that they all look like extremely clever changes of basis—ways of exploiting interference to concentrate probability mass where you want. They don’t look, as the popular articles would have it, like “trying all possible answers in parallel,” which is the vague image most people probably have in mind when they say that a QC would “exploit parallel universes.”
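To make the “concentrating probability mass” picture concrete, here’s a minimal numpy sketch (an illustration of my own, with an arbitrarily chosen marked item) of Grover-style amplitude amplification on an 8-element search space. Each iteration uses interference, not parallel trial, to pile amplitude onto the marked item:

```python
import numpy as np

# Grover-style amplitude amplification on an 8-element search space (3 qubits).
# The "marked" index is an arbitrary choice for illustration.
N = 8
marked = 5

state = np.ones(N) / np.sqrt(N)               # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1.0                 # phase-flip the marked amplitude

s = np.ones(N) / np.sqrt(N)
diffusion = 2.0 * np.outer(s, s) - np.eye(N)  # inversion about the mean

for _ in range(2):                            # ~ (pi/4) * sqrt(8), i.e. 2 iterations
    state = diffusion @ (oracle @ state)

probs = state ** 2                            # amplitudes here are all real
print(probs[marked])                          # ~94% of the probability mass is now on index 5
```

Nothing in this loop “tries all answers at once”: the oracle and diffusion steps are just rotations of an 8-dimensional vector, and the resulting speedup over classical search is the quadratic one that Grover’s optimality theorem says is the best possible.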

Furthermore, while a scientific theory can’t strictly be held responsible for its misinterpretations, if the many-worlds imagery consistently leads people to grievously-wrong ideas about how a quantum computer would work, then to my mind, that’s a strong argument against using many-worlds imagery when popularizing quantum mechanics.

On the other hand, I’ve also said before that perhaps the strongest argument in favor of MWI is that taking it seriously led David Deutsch to think of quantum computing.

That’s rewriting history. Everyone in the 20s, pro or con (except Bohr), agreed that what Bohr said was that consciousness causes collapse.

Could you give me some sources for that?

I’ve read Bohr, and I thought he was perfectly clear in his obscurity! That is, he said again and again that what defined the split between the quantum and classical worlds was not the experimenter’s consciousness, but rather “the very conditions of measurement, by means of which the description of the atomic phenomena is… [blah blah blah]”

Sure, but only in the same sense that superdeterminism removes the need for any confrontation with actual features of the world! Much like with Godwin’s Law, “superdeterminism” strikes me as the sort of thing that you resort to after you realize you’ve lost an argument.

Look, according to superdeterminism, you’re allowed to say about any experimental result: “well, maybe that happened because of a giant universe-wide conspiracy involving both the particles you measured and the atoms of your own brain—which allowed the particles to know in advance which experiment you were going to do, and to get into just the right state, thereby fooling you into thinking that, had you chosen to do a different experiment (which is actually impossible, since you lack free will), you would’ve continued to see results consistent with standard physical theory. So it all looks like the standard physical theory is valid, but really it’s not.”

With these universe-as-magician rules, I agree that you can “explain” any conceivable scientific discovery. But precisely because of that flexibility, I’d say your victory is a hollow one, devoid of explanatory value.

And if so, is there a good reason why this is not an explanation that receives more attention?

Why does an obstruction to having an observer like you existing in coherent superposition of two mental states necessarily violate the linearity of quantum mechanics? And, more generally, why isn’t it possible that such an obstruction be discovered within the framework of quantum mechanics?

We’ve had this discussion many times in the related context of quantum computing! In standard QM, there seems to be no principled obstruction to creating states like α|0>|You0>+β|1>|You1>. Hence, if there is such an obstruction, then something new has to be discovered that would radically change our understanding of QM. As explained in comment #12, I admit the possibility that the new discovery might be one that would somehow leave QM’s unitarity and reversibility “formally on the books,” while also giving a fundamental reason why the reversibility could never be realized in certain experiments like the Wigner’s friend one. But if so, then I’d personally regard that as a revolution in our understanding of QM every bit as far-reaching as a flat-out dynamical collapse mechanism.

(I also don’t understand why a revision of the notion of personal identity based on such a hypothetical possibility disturbs you…)

On reflection, I suppose the “threat to personal identity” from the Wigner’s-friend experiment is basically similar to the “threat” from hypothetical classical brain-uploading or brain-duplication technologies. Here’s one way to make the point vividly: suppose it’s possible to put me into a coherent superposition of two states, corresponding to different subjective experiences, and then observe interference between those states. Then by the reversibility of unitary evolution, it must also be possible to perform the same experiment backwards. (Indeed, the simplest way to observe an interference pattern between the two copies of me, would be to “uncompute” whatever they had experienced that made them different!)
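To see the reversibility point in miniature, here is a toy numpy sketch (a generic random unitary standing in, very loosely, for the vastly complicated evolution that entangles an observer with a measured system): whatever U does, applying its conjugate transpose uncomputes it exactly.

```python
import numpy as np

# Build a random 4x4 unitary: the Q factor of a QR decomposition
# of a random complex matrix has orthonormal (unitary) columns.
rng = np.random.default_rng(42)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                           # initial product state

evolved = U @ psi                      # run the "experiment" forward
restored = U.conj().T @ evolved        # run it backward: U†U = I

print(np.allclose(restored, psi))      # True: the initial state is recovered exactly
```

In standard QM this uncomputation is always available in principle; the entire question is whether anything fundamental (rather than merely practical) forbids carrying it out on a system as large as a brain.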

So now one can ask: what’s it like to be run backward in time? Does it feel similar to being run forward in time, except that now your subjective experience goes “the other direction”? This has a similar flavor to various “classical” conundrums: “what’s it like to have an exact duplicate of you created on Mars, while the original remains on earth? when you wake up from the brain scan, should you expect to find yourself on earth? On Mars? On one or the other with 50/50 odds? Somehow on both?” Or: “can ‘you’ be brought into being by a colony of termites simulating all the neurons in your brain by their wiggling motions—taking, let’s say, a thousand years to simulate each millisecond of neural activity?” Of course, the similarity to these old chestnuts hardly makes the Wigner’s-friend one less perplexing!

One reason I’m uncomfortable with MWI is the large number of unobservable ‘unused’ universes that end up just sort-of lying around. My question is whether this large number is infinite: Specifically, does MWI require admitting infinite quantities into physical theories?

Sorry for the delay! It’s a good question. If the Hilbert space is finite-dimensional, then certainly the number of “universes” will also be finite. And there are indications today—specifically, from the dark energy together with the holographic principle—that the total Hilbert space accessible to any one observer has a “mere” ~e^(10^123) dimensions. So as long as you only care what happens inside our causal horizon, you should be able to get away with only finitely many universes! If you also want to include what happens outside our causal horizon, then the answer would depend on whether the universe is spatially infinite or not, which no one knows (or possibly will ever know). But notice that, if the universe is spatially infinite, then you already have infinity even without MWI! In that case, MWI would “merely” take you up from Aleph0 to 2^Aleph0.

Bottom line, if within each world you take there to be ~S independent degrees of freedom, then you should expect ~exp(S) possible worlds.
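As a back-of-the-envelope illustration of that ~exp(S) scaling (with S chosen arbitrarily for the example):

```python
# S two-level degrees of freedom give a Hilbert space of dimension 2^S,
# hence at most ~2^S decoherent branches. Even a tiny S is astronomical:
S = 300                     # arbitrary illustrative value
dim = 2 ** S
print(dim > 10 ** 90)       # True: already more branches than atoms in the visible universe
```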

Ockham’s razor is a useful principle for choosing our best mathematical model of reality, and to that extent, no doubt a model without any ‘collapse’ or ‘measurement’ primitive is better.

Now comes the problem of interpreting the model (what exactly do we mean by ‘exist’, what is a viewpoint, etc.). Unfortunately this problem cannot be expressed in mathematical terms, and I doubt that Ockham’s razor is of any help here.

You seem to be committed to a specific metaphysical interpretation akin to mathematical platonism, which is defensible, but not necessarily the sole one. Relational interpretations use the exact same mathematical model, but with a different interpretation of it.

… In case you are curious to know: the reason I raised that question was to check whether you see some (any) theoretical problems in having a theory that is linear at a basic level but which nevertheless goes on to produce (or at least allow) nonlinear manifestations when “scaled up.”

Anyway, thanks for clarifying the viewpoint you had while writing that line of yours.

@Scott #83:

Let me change your line a bit, while adding an emphasis to it: If the _Hilbert_ space is infinite-dimensional…

Two main points: (i) the relativistic horizon is not the only consideration here. (You call it the “causal” horizon.) (ii) The “spatial” space, i.e. the real Euclidean space R^n, may be finite-dimensional with n a finite integer, say <= 3, and yet the Hilbert space may be infinite-dimensional, e.g., the space L^2(R^n) of square-integrable complex-valued functions defined on that R^n.

Wow! Since I already happened to have written so much math_s_ [evidently, I can do that], I now should leave aside the other highly enticing baits related to the issue of the infinity or otherwise of "space," here. [Yes, I can do that, too! ]

I don’t think “introducing infinite quantities” is something to be worried about in the first place. Firstly, math can handle infinite quantities just fine. Sure, usually when you encounter an infinity in physics this indicates you messed up, but there’s no a priori reason the universe has to work that way. Secondly, if you’re trying to count the worlds in MWI, you’re focusing on the wrong thing. Thirdly, infinite stuff seems much less philosophically worrying than infinitary stuff (e.g. real numbers), but physics already has plenty of that and hardly anyone complains.

Scott, most (*) of your comments on this thread were even more insightful than the original, wonderful post. Let me humbly suggest collecting them into their own follow-up post for increased visibility.

(*) If you ask me, all of them excluding #79 and #80.

#79: I think the version of superdeterminism you are arguing against is a straw-man, or at least an uninteresting interpretation of the term. Whatever its name, I’m more interested in your opinion on this approach by Huw Price: http://prce.hu/w/preprints/QT7.pdf

#80: Running backward in time feels exactly like running forward in time. The only problem is, if you run backward in time, then your input and output systems will have a hard time properly interacting with an outside world running forward in time.

Re @Scott 80: this comment is listing dangerously in the off-topic direction, but I just want to ask this question for the sake of my qualitative understanding:

About those termites, what’s so absurd about it? Can we not say that’s what our neurons are? As for the scope of time… this reminds me how Newton thought gravity’s action over vast distances seemed absurd. But distance is relative. Is the action of forces between atoms any more or less absurd?

It’s pretty easy to calculate concretely (full details below) that the spin-up versus spin-down trajectories of the IBM group’s 92-picogram cantilever were sufficiently separated in space that a naive Penrose-style gravitational decoherence mechanism would have reduced the IBM experiment’s signal amplitude by a factor of 1/3, and the signal-to-noise ratio by a factor of 1/3^4, relative to the observed experimental values.

This constitutes (AFAICT) rather strong experimental disconfirmation of at least some Penrose-style decoherence mechanisms!

Among quantum spin microscopists it is widely appreciated that the IBM/MRFM cantilever is (by far) the largest mass ever observed to be in a quantum superposition of states. I happen to be personally conversant with these figures, because the IBM group asked me to compute them *prior* to committing to the experiment, for the reason that Penrose-style decoherence (if it were present) would have rendered the cantilever dynamics classical, and thus the experimental SNR unobservably low — yikes! — and so the IBM group desired some measure of theoretical assurance that Penrose-style decoherence would *not* be observed.

Conclusion Penrose-style quantum decoherence would have had substantial adverse consequences for some varieties of quantum spin imaging applications, and so it is fortunate (from an engineering point-of-view) that this quantum decoherence is *not* observed in practical quantum spin imaging contexts.

Caveat The above analysis, and the above conclusion, both are over-simplified pretty dramatically, in order to fit within the confines of an entertaining and thought-provoking web-log post. A more circumspect conclusion is that the IBM data verify that a mass-spin system can preserve internal quantum coherence over space-time separations that a naive Penrose analysis would suggest would be decoherent, and this observation has significant practical implications for the capabilities of quantum spin microscopy (for example).

Question How much does the IBM experiment increase our rational Bayesian confidence that MWI is true?

I’m always unnerved by the failure to distinguish between QM and QFT. Yes, QM correctly describes a handful of particles that have been isolated from their environment. But decoherence requires full QFT to correctly describe it: it’s an “MBI”, a “Many-Body Interaction”. And we already know that the mathematics of MBI is full of surprises.

Let me provide a pro-dynamical-collapse argument, by analogy. It’s based on the asymptotic equipartition property: http://en.wikipedia.org/wiki/Asymptotic_equipartition_property So: if you flip a coin N times, there are 2^N possible outcomes, right? If you take the limit N → ∞, you get how many states? Ahem, 2^(NH), where H ≤ 1 is the entropy. (H = 1 for a perfectly fair coin, but H < 1 in general.) A somewhat counterintuitive result. What happened to all those other possibilities? Their probability shrank to zero in the limit.
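The coin-flip arithmetic can be checked concretely; here is a short Python computation (with an arbitrary bias p = 0.9 for illustration) showing how small a fraction of the 2^N outcomes the typical set occupies:

```python
import math

p = 0.9                                              # arbitrary coin bias
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy, H < 1

N = 1000
typical = 2.0 ** (N * H)        # ~number of "typical" sequences, 2^(NH)
total = 2.0 ** N                # all 2^N sequences
print(H)                        # ~0.469 for p = 0.9
print(typical / total)          # vanishingly small: everything else carries probability -> 0
```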

So, by analogy: replace “coin” by “quantum trajectory” and replace “2^N” by “many interactions”. Perhaps many of the possibilities simply shrink to zero probability. So, writing the analogous expression for QFT, we have Z = ∫ Dφ exp(i·(action)/ħ), and we know that we experience the classical world located at the peak of the stationary phase above, which is the classical action. And we know that there are other trajectories all around us, at a distance ħ away from us. That’s what we know for sure.

From what I can tell, MWI argues that there are other trajectories, very very far away, that contribute to Z. Really? We don’t ever seem to need to take these into account when computing things with Z. No textbook ever says “gee, this is where the classical action bifurcated into two, and one must be careful to sum over both branches.” Because I think that most people envision MWI as saying “there are two worlds, and the cat lives in one and dies in the other”, which means that the CLASSICAL action had to bifurcate into two, to follow these two possibilities. But none of our practical calculations ever have to treat this case. Why not? Perhaps because it never happens?

To conclude my hand-waving: perhaps these other possibilities almost happen, but then fade away to a probability of zero. The “coin” of quantum mechanics may be perfectly fair, but when it interacts with the unfair reality of many bodies, one possibility ends up with a slight edge over all the others, and all the other possibilities that one might be able to COUNT, those all sum up to a grand total probability of zero. Yeah?

Because I think that most people envision MWI as saying “there are two worlds, and the cat lives in one and dies in the other”, which means that the CLASSICAL action had to bifurcate into two, to follow these two possibilities. But none of our practical calculations ever have to treat this case. Why not? Perhaps because it never happens?

Interesting, but I don’t see how your suggestion can be made coherent (no pun intended), unless QFT can somehow allow an actual dynamical-collapse mechanism, or some other fundamental source of irreversibility.

For in the usual view of both standard QM and QFT, “the reason none of our practical calculations ever have to treat this case” is extremely simple: it’s because none of our practical experiments ever involve recoherence between different basis states of a macroscopic object like a cat! If we could do such experiments—and nothing in the known laws of physics seems to prohibit our eventually doing so—then our practical QFT calculations would need to include the macroscopic interference term.

To reiterate, I agree that it would be wonderful if something killed off the macroscopic interference terms, but whatever that something was, it would have to be profoundly new; it can’t arise just from pushing around existing QFT machinery.

Thanks for the reply about superdeterminism. Another quick question: do you believe that MWI allows so-called “nightmare states” to exist? That is, states where extremely low-probability (and potentially awful) phenomena are occurring.

For example, one might unluckily inhabit a universe where all people have survived to the age of 3000, but have aged very badly. Or where everyone has a cellphone with MC Hammer’s “You Can’t Touch This” as an obnoxiously loud ringtone. And I suppose more awful states might exist too.

The existence of such strange and awful states has always made MWI seem somewhat unattractive to me.

Anon #91: If you believe MWI, then it’s straightforwardly true that there would exist such “nightmare states” … as well as utopian dream states and everything in between. From an MWI standpoint, the question is not about the “existence” of these states but only about their relative probabilities.

Scott has provided a good starting point for appreciating the distinction between QM and QFT as follows:

Scott posts “I agree that it would be wonderful if something killed off the macroscopic interference terms, but whatever that something was, it would have to be profoundly new; it can’t arise just from pushing around existing QFT machinery.”

To gently disagree with Scott’s assertion, by concretely illustrating (what I take to be) Linas Vesptas’ main point, let us reason as follows:

(1) The physical reality of vacuum field fluctuations is well-established.

(2) The dynamical validity of fluctuation-dissipation relations is well established.

(3) The role of fluctuation-dissipation-entanglement relations in quantum computation is *not* well-analyzed (at present).

From a purely practical point-of-view, fluctuation-dissipation-entanglement relations enter significantly in quantum spin microscopy … for example, our own group’s “The Classical and Quantum Theory of Thermal Magnetic Noise, with Applications in Spintronics and Quantum Microscopy” (Proc. IEEE, 2003) is yet another example of the ubiquity of Deutsch’s DUDES duality between applied and fundamental research.

More broadly, as quantum computing experiments become more sophisticated, the field-theoretic renormalization effects that enter prominently in cavity QED are moving to center stage. This is grounds for theoretical humility, because even after very many decades of work by very many, very ingenious quantum researchers, we are far from possessing an understanding of quantum renormalization effects in field theory that is simple, natural, and universal.

And it is good news for young researchers too, in the sense that naturalizing and unifying the quantum renormalization-related ideas that are already extant in the theoretical and experimental literature is a reasonable path for launching a substantive research program, that does *not* require startling / transformational insights to begin.

From an MWI standpoint, the question is not about the “existence” of these states but only about their relative probabilities.

It is important to note that these worlds are in every sense of the word inaccessible to us. From an MWI standpoint they exist, but when dealing with issues of ethics and such, it is better to treat them as nonexistent. If someone’s moral decisions depend on the existence or nonexistence of such worlds, then one is most probably committing a category mistake.

Hi Scott, thanks for the answer (#80) and also for the reference to #12. I also think that understanding decoherence, and how fundamental it is, is probably at the crux of matters, and I appreciate your point of view that a theory in which decoherence is fundamentally irreversible, even one that keeps all the unitaries of QM intact, would surprise you as much, or almost as much, as a theory that violates QM.

About running quantum processes backwards, raised in #80: the question of whether any computer simulation can run back in time came up in our QC debate. Joe Fitzsimons mentioned this property as characteristic of both classical and quantum computer simulations of physical devices. I regard it as a property of classical simulations (and even classically, one with some subtle issues) that we should not take for granted for quantum emulations. And I would conjecture that you cannot always run quantum computer programs backwards. (http://rjlipton.wordpress.com/2012/06/20/can-you-hear-the-shape-of-a-quantum-computer/#comment-23241)

Peter #94: I read that paper by Michael Cuffaro, liked it, and actually had some correspondence with him soon after it came out. I just reread the correspondence, in order to remember exactly what I thought about the paper!

Bottom line: His basic observation, about the gap in the usual argument from quantum computing to the existence of parallel universes, is one that I agree with and in fact made independently in Why Philosophers Should Care About Computational Complexity (see the section on quantum computing). However, I wouldn’t go nearly as far as him in saying that MWI should be rejected “tout court.” I think MWI is a perfectly legitimate way to think about quantum computation if you want to. In the QC context, the question is not so much whether MWI is “true” or “false”, as whether it’s more or less useful as a mental aid! In particular, I see no problem of principle in describing measurement-based quantum computation in MWI terms—indeed, an MWI true-believer like Deutsch would have no difficulty arguing why the “parallel universes” are no less essential for MBQC than any other kind of QC. What MBQC does is simply to provide a way of implementing QC that makes its “exponential parallelism” aspect less noticeable and salient—which I suppose has both good and bad aspects.

About those termites, what’s so absurd about it? Can we not say that’s what our neurons are?

I’ve certainly gone through phases of my life where I thought there was no difficulty whatsoever in such thought experiments. You’ve simply caught me while I’m not in such a phase.

One place where I think it gets difficult, is where you start asking what exactly a simulation of your brain needs to do in order to count as a simulation. For example, consider an astronomical-sized lookup table, which caches, for every possible question you could be asked of (say) 1000 bits or fewer, the answer you would give to that question. Does that count as a simulation of you, sufficient to “conjure your conscious experience into being” or whatever? If so, then does it matter if anyone actually consults the lookup table, or if it just sits there inert?
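Just to fix the scale of that thought experiment, the table is finite but comically large:

```python
# Number of entries needed to cache an answer for every question of
# at most 1000 bits: one entry per bit string of length 0 through 1000.
entries = sum(2 ** k for k in range(1001))   # = 2^1001 - 1
print(entries == 2 ** 1001 - 1)              # True
print(entries > 10 ** 300)                   # True: "astronomical" is an understatement
```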

Gil Kalai posits “Understanding decoherence and how fundamental it is, is probably at the crux of matters.”

Gil, what you say is reasonable, and yet the magnitude of this undertaking is daunting. For if by “decoherence” we understand more specifically “entropy-increasing dynamics”, and if by “entropy-increasing dynamics” we understand yet more specifically “entropically spontaneous transport processes”, then our quest for fundamental understanding leads to a very recent and eminently practical arxiv preprint by Jacquod, Whitney, Meair and Markus Büttiker, titled “Onsager Relations in Coupled Electric, Thermoelectric and Spin Transport: The Ten-Fold Way” (http://arxiv.org/abs/1207.1629, 6 Jul 2012).

This preprint’s “Ten-Fold Way” analysis systematically catalogs a reasonably comprehensive suite of fluctuation-dissipation relations, of which the fluctuations are (obviously) of fundamental relevance in scalable quantum computing error-correction, and their Onsager-associated dissipative currents are (obviously) of practical relevance to a broad class of sensing, separation, and storage technologies. Here we encounter yet another (vivid) example of Deutsch’s DUDES duality between fundamental and applied quantum research.

Scott #98: an old Chinese proverb says “all that matters for us is that the lookup table would be finite, by the assumption that there is a finite upper bound on the conversation length (…) From these simple considerations, we conclude that if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory.” It still seems hard to dismiss this very simple and powerful point!

But you’re not actually wondering about intelligence and the Turing Test. You’re wondering about whether the Turing Test is enough to prove consciousness. That is a hard problem, the Chinese proverb says.

This has to be some of the most brilliant writing from Scott ever; the clarity and abundance of insights in the original post and the comments is amazing. The mention of a relation between entropy and the perception of time really piqued my curiosity; the only idea about this I was aware of is that the direction in which time is perceived to advance is the direction in which entropy increases.

Scott #70, maybe I misunderstand Blake Stacey #64, but Pullman indeed very briefly touched on a possible relation between dark matter and MWI in his fantasy novels. Yet when one of my colleagues had to make an intensive bibliographic search of the scientific literature (because of a grant they got a few years ago), he claimed that he did not see any article on a link between MWI and dark matter (and venomously suggested that I write the first one).

So to caricature your thought process – and that of many MWI opponents – the *reason* that you dislike MWI is that it gives you a distressing philosophical problem to do with personal identity, but you hope that some as-yet-undiscovered physical mechanism to do with quantum gravity will come along and defeat MWI just enough that existing objects like lasers and quantum computers still work for basically the same reasons we think they do at the moment, but that your brain is saved from being put into a superposition.

Have you considered the possibility that maybe the real theory of quantum gravity might make the philosophical problem *worse* rather than better? I personally have an inkling that when we finally solve quantum gravity, the answer will add a lot of weight to the Tegmarks of this world, i.e. QG might well necessitate a “big world” which is bigger than what MWI necessitates. (Basically, pretty much every major paradigm shift has been philosophically distressing; QG is unlikely to be an exception.)

Or the possibility that perhaps there *is* some length-scale-based cutoff above which superposition doesn’t occur, but that the human brain comes in below the cutoff? This may sound unlikely, and forgive me if I am being stupid, but wasn’t there an experiment not long ago which put a millimetre-sized object into a superposition?

My knowledge of the beliefs of the pioneers of quantum mechanics is largely based on this article, which used to have an ungated copy here.

In that case, MWI would “merely” take you up from Aleph0 to 2^Aleph0.

It doesn’t even do that. Just because QM exponentiates finite numbers doesn’t mean it exponentiates infinite cardinals. I don’t think there is any sense in which continuum QM has more than countably many parameters. The Hilbert spaces have countable (Hilbert) bases. There may, in some sense, be uncountably many worlds, but they are linearly dependent.

Or maybe a new Turing Test, call it the “quantum Turing Test”: it would be like the usual Turing Test, but with the topic of conversation restricted to the foundations and interpretation of quantum mechanics.

Scott #90: “it can’t arise just from pushing around existing QFT machinery.” But why? This is an assertion, and it’s exactly the assertion that I’m attacking.

The avenue of attack is to claim that quantum measurement is inherently a many-body interaction. So: for a quantum system with just a few degrees of freedom, a few bra-kets here and there, one is forever forced to believe in MWI (or to argue about it). And also: for an isolated quantum system, it is valid to factor it into a quantum piece and a rest-of-the-universe piece. However, once the system starts interacting, then the factorization cannot be properly done, and doing so invariably leads to a certain hand-waving. As long as one thinks one can factorize, then, yes, believing in MWI seems inescapable.

At best, all I have are some hand-waving arguments to the effect that the N → ∞ limit is very counterintuitive, and has had many, many surprising results in the history of mathematics. I claim that quantum measurement is an N → ∞ situation. The hard part is, of course, to find a tractable example that resembles an actual quantum measurement.

John Sidles, comment #93: thanks. I never actually think of fluctuation-dissipation; it has a certain air of semi-classical approximation, which is dangerous to flirt with when arguing such a contentious topic. I really am always going back to the QFT partition function as the fundamentally correct description: the trick is how to calculate a many-body (N → ∞) interaction with it.

Indeed, were there some interesting insights, ideas, speculations, or discoveries that you can tell us about?

Well, I think I now understand what the Aharonov-Bohm effect is and why it was important. And I understand what weak measurements and Yakir’s “two-vector formalism” for quantum mechanics are all about—although I disagree with Yakir’s tendency to state his results in the most “paradoxical” way possible, even when a less paradoxical way to explain the same thing is available! I also had very interesting conversations with Sean Carroll and David Albert about the issue of “self-locating belief”: that is, how to do probability theory in cases where there might be many identical copies of you in the universe or multiverse. Sean thinks he has a general way to deal with that sort of problem, but it doesn’t address the issue that interests me the most: how do you decide exactly what counts as a “copy” of you? I also learned something new and interesting from David Albert’s talk: that in special relativity, the Lorentz transformations can’t be seen as just a simple matter of “turning your head sideways” and looking at the history of spacetime from a different angle. For in order to map the history of spacetime in one Lorentz frame to the history in another Lorentz frame, in general (at least in quantum mechanics) you need to know the Hamiltonian—or in other words, the actual differential equation that’s mapping earlier states to later ones. He had a very simple example to show that. (David also had a great story about an American visiting Israel, who meant to ask about a bathroom, “זה רק בשביל ילדים?”—“is this only for children?”—but instead asked “זה רק בשביל יהודים?”—“is this only for Jews?”)

So to caricature your thought process – and that of many MWI opponents – the *reason* that you dislike MWI is that it gives you a distressing philosophical problem to do with personal identity, but you hope that some as-yet-undiscovered physical mechanism to do with quantum gravity will come along and defeat MWI just enough that existing objects like lasers and quantum computers still work for basically the same reasons we think they do at the moment, but that your brain is saved from being put into a superposition.

Actually, I don’t think that’s the thought process of many MWI opponents. Most of them say that MWI is absurd pseudoscience, it violates Occam’s Razor and/or conservation of energy, it can’t explain where probabilities come from (and this is a special problem for MWI), etc. Or, if they’re like many high-energy physicists (Tom Banks and Lubos Motl are two good examples), they seem to agree with MWI on every substantive question, yet hate the language of MWI—considering it a superfluous and wrongheaded attempt to translate the perfectly-clear mathematical formalism of QM into florid sci-fi imagery. As they see it, we should accept that everything in the world is quantum-mechanical, and even that we ourselves could be placed in coherent superposition, but not use phrases like “parallel universes” that have nothing to do with the calculation of scattering amplitudes.

But I’ve rejected both of those positions. I explained at length, in the comments (see #26, #50, #53, #56), what I see as the problems with all the usual anti-MWI arguments. And I’ve also explained that, if conscious observers can demonstrably be placed in coherent superposition, then (as I see it) the MWI position would have been essentially vindicated: any remaining debates at that point are “mere semantics.”

So, having (I think) pretty fairly described what is true, am I not also entitled to say what I hope is true?

Seriously, I’d give the following argument: we know that something enormous has to give when quantum mechanics is brought together with GR, since we don’t yet really know what it means to have superpositions over different spacetime geometries. And while you might disagree with me here, I’d say it’s also clear that there’s something huge we don’t understand about “self-locating belief,” personal identity, and yes, free will and consciousness. For example, if you want to say that the mind is just a computer program running on meat hardware, that’s fine … but then what happens when the program gets copied? Assuming you’re “merely” self-interested, should you agree to a perfect computer simulation of yourself undergoing terrible simulated tortures, in exchange for millions of real dollars? Either possible answer can be made to sound bizarre from a traditional scientific rationalist standpoint, but presumably one of the answers is right!

Furthermore, thought experiments like Wigner’s friend (#80) seem to indicate that there’s some connection between these two large sources of confusion in our existing scientific worldview. While I disagree with Penrose’s proposed solutions to the problem, I do now agree with him (as I didn’t used to) about the existence of a problem! It looks to me like we have something profoundly new to learn, just as philosophically important as (say) quantum mechanics or evolution were.

Having said that, I actually strongly agree with your sentiment that the world doesn’t care what we want, and that the more we’ve learned about the universe, the more alien it’s seemed, and the further away from human preconceptions and prejudices. So we should expect that a quantum gravity theory, if and when it’s ever found, will overthrow yet more of our prejudices. But that’s only one meta-principle that we can apply to guess what such a theory will look like. Another meta-principle, I’d say, is that whatever the right theory is, it should ultimately render the universe comprehensible—because incomprehensibility can only arise from an incomplete or inaccurate map of the world, not from the world itself. But I’d better stop before this gets too grandiose.

Scott #90: “it can’t arise just from pushing around existing QFT machinery.” But why? This is an assertion, and it’s exactly the assertion that I’m attacking.

I don’t quite understand your position—but in any case, the reason why you can’t “decisively” suppress macroscopic interference within the framework of QM is extremely simple, and extremely independent of details. Basically, it’s that QM is a reversible theory. If a unitary transformation U represents a possible evolution of a physical system, then U^-1 also represents a possible evolution. And, therefore, if decoherence can take place, then recoherence can also take place, by simply inverting whatever unitary transformation caused the decoherence. The only reason why we don’t see that happen in practice is the Second Law of Thermodynamics: the same reason why we don’t see omelettes unscrambling themselves, ash unburning back into wood, etc. But all those things are just statistical consequences of the initial conditions, and have nothing to do with the dynamical laws themselves. With fine enough control, you could in principle unscramble an omelette, and you could recohere a decohered macroscopic system.

I think that any argument to the contrary, as you and John Sidles seem to want to make, would need to confront the reversibility of quantum mechanics directly—either by denying it outright, or else by explaining why other aspects of fundamental physics can prevent the reversibility from being “realized,” not just in practice but in principle. I don’t see any “sneaky” way to avoid talking about this!
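To make the reversibility argument concrete, here is a toy two-qubit sketch (my own illustration, not anything from the thread): a CNOT plays the role of “decoherence,” copying a system qubit’s basis information into an environment qubit, and applying the inverse unitary recoheres the system exactly.

```python
import numpy as np

# One "system" qubit plus one "environment" qubit.
# System starts in the superposition (|0> + |1>)/sqrt(2); environment in |0>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
env0 = np.array([1.0, 0.0])
psi = np.kron(plus, env0)            # joint pure state, basis order |s,e>

# "Decoherence": a CNOT (control = system) copies basis info into the environment.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def reduced_system(state):
    """Partial trace over the environment qubit: rho_system."""
    m = state.reshape(2, 2)          # rows index system, columns index environment
    return m @ m.conj().T

rho_before = reduced_system(psi)                         # off-diagonals 0.5: coherent
rho_after = reduced_system(CNOT @ psi)                   # off-diagonals 0: "decohered"
rho_back = reduced_system(CNOT.conj().T @ (CNOT @ psi))  # apply U^-1: "recohered"

print(np.round(rho_before, 3))
print(np.round(rho_after, 3))
print(np.round(rho_back, 3))
```

Nothing irreversible happened: the same unitary machinery that killed the off-diagonal terms restores them, exactly as the Second-Law argument requires.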

Scott suggests “We need to confront the reversibility of quantum mechanics directly — either by denying it outright, or else by explaining why other aspects of fundamental physics can prevent the reversibility from being ‘realized,’ not just in practice but in principle. I don’t see any ‘sneaky’ way to avoid talking about this!”

Our ground rule will be that it’s OK to modify (standard) quantum dynamics, but it’s *not* OK to tamper with (microscopic) reversibility. After all, we are all-of-us so fond of thermodynamics that we refuse to modify the First and Second Laws, which are founded upon microscopic reversibility. To say it more formally, the Laws require that dynamical systems are Hamiltonian flows. To say it geometrically, dynamical flows are symplectomorphisms.

Once this thermodynamic constraint is in place, there’s not a lot we can do (AFAICT) to modify quantum dynamics on flat state-spaces. But fortunately, there’s *plenty* that we can do to pull back quantum mechanics onto non-flat state-spaces.

We first apply the engineering edge of DUDES. Surely mathematicians/physicists/chemists/engineers must *already* be working on non-flat complex state-spaces? This proves to be true … and not just true, but ubiquitously & ridiculously true. The generic challenge is to craft a pullback state-space that preserves as much as possible of the “quantum / symplectic goodness” of flat Hilbert space, while enabling efficient trajectory integration on non-flat spaces that are propitious by virtue of lower dimension and/or favorable algebraic structures. Because the literature already holds (literally) thousands of examples of this method, the great challenge is to naturalize and universalize our understanding of these extant methods.

Now let’s apply the fundamental edge of DUDES. Surely pullback methods must be useful for discovering new physical laws (as contrasted with efficient simulation of known Hamiltonian physics)? This research is way above my pay-grade, and (seemingly) orthogonal to my technical objectives, and yet it is clear that Faddeev-Popov/BRST field quantization (as a much-studied example) preserves the congenial fiction of a flat Hilbert space only by introducing an intricate set of nonlinear interactions among a non-physical family of particles.

As for the natural state-space of String/M theory, heck, don’t ask me! There’s enough mathematics and physics already extant to make it reasonable for physicists to be agnostic regarding Hilbert space as the fundamental state-space of Nature (or not).

Conclusion: The algebraic framework of Hilbert-space QM was constrained to allow physicists of the era 1925-1975 to avoid learning geometric dynamics … and nowadays we are relaxing that constraint. Good!

The nub for me is whether these two statements (which I assent to)
1. Schrödinger’s equation describes the MWI multiverse.
2. Schrödinger’s equation applies at all scales, with no “collapse”.
logically imply
3. The MWI multiverse has physical existence.

I think Deutsch would argue 3. as a contingent fact based on the two-slit experiment, but (like Scott) I demur on its being necessary. See if my own analogy for “why-not” holds water. Suppose you are given an NFA N in a form where you can be said to experience a computation path by it. Then you can say:
1. The subset construction describes a DFA M whose single run tracks all of N’s computation paths.
2. The subset construction applies to every NFA, with no exceptions.
—but this need not entail M having anything more than mathematical existence, nothing that you “experience”. Indeed in applications such as “grep” you don’t want to deal with M at all—the simulation that gets actuated uses N as the data structure.

(?)

My analogy might be more realistic with a quantum finite automaton in place of N—noting that various QFA models covered in this paper by Rusins Freivalds still accept only regular languages.
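To make the grep point concrete, here is a minimal sketch (toy NFA and helper names are my own) of simulating an NFA N directly: the set of currently-live states is, in effect, a single state of the powerset DFA M computed on the fly, but M itself is never built.

```python
# Toy NFA over {'a','b'} accepting strings that end in "ab".
# delta maps (state, symbol) -> set of next states; start state 0; accepting {2}.
delta = {
    (0, 'a'): {0, 1},   # nondeterministic: stay, or guess "this 'a' precedes the final 'b'"
    (0, 'b'): {0},
    (1, 'b'): {2},
}
START, ACCEPT = {0}, {2}

def accepts(s):
    """Simulate the NFA by tracking the set of live states -- each such set
    is one state of the subset-construction DFA M, but M is never built."""
    states = set(START)
    for ch in s:
        states = set().union(*(delta.get((q, ch), set()) for q in states))
    return bool(states & ACCEPT)

print(accepts("aab"))   # True
print(accepts("aba"))   # False
```

The simulation “experiences” exactly one pass over the input using N as the data structure; M exists only as the mathematical closure of the sets this loop could visit.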

Rationalist #106: “the *reason* that you dislike MWI is that it gives you a distressing philosophical problem to do with personal identity”

The reason criticisms of MWI revolve around personal identity is because MWI has not yielded any objective concept of a “world”.

MWI believers just say “all that exists is the wavefunction, it’s all in there somewhere”, but are unwilling and unable to say exactly where.

So in an attempt to get MWI advocates to focus on the need to say *exactly* how their theory relates to empirical reality, MWI critics talk about conscious experiences and so forth, this always being the last resort when one tries to communicate with someone who either denies reality entirely (quantum “antirealists”) or who is only willing to talk about their favorite theoretical construct (MWI believers).

If MWI had a clear and objective and universally agreed answer to the question, “what is a ‘world’, in your theory of ‘many worlds’?”, this debate about personal identity would be a lot more in the background. Now anyone who knows QM, knows that the theory in its essence doesn’t give you a preferred basis or a natural factorization of all states, so looking for well-defined parts in the universal wavefunction is problematic.

But instead of considering this a point against their theory, MWI advocates consider it a mark of sophistication to say, “Oh, a ‘world’ is only a vague, heuristic concept for the use of human beings, but we know that at the deeper level of reality, the wavefunction doesn’t divide cleanly and objectively into worlds.”

MWI advocates, when you take this line of thought, you are abandoning your responsibility for connecting your theory to reality. The theory is vague, therefore reality is vague? Reality can’t be vague. Vagueness is only in concepts, not in reality. So my syllogism is: the theory is vague, reality is not vague, therefore the theory has a problem.

The Michael Mensky e-print mentioned by vince #108 is indeed about the things I myself usually associate with “links between MWI and dark matter”, and that caused the venomous remark of my colleague a few years ago. Indeed, I have read the e-print only now: it was cross-referenced neither to quant-ph nor to gr-qc, and I can consider that as indirect confirmation of the “hard-printability” of such ideas, even for people with a very strong reputation like Mensky. For me, submitting a preprint about such ideas to quant-ph could quite possibly cause rejection, inclusion in a blacklist, etc.

Ken, there is considerable literature to indicate that this postulate is attended by risks, e.g. Ford and O’Connell “There is No Quantum Regression Theorem” (Phys. Rev. Lett. 1996). Here the point is that the algebraic properties of flat-space Schrödinger physics are so enticing, as to encourage us to ignore a very considerable body of difficulties and intricacies that arise when we attempt to describe open-system dynamics within this framework.

The situation is perhaps analogous to nineteenth century physics, in which Newtonian physics worked just fine, except for a few (seemingly minor) technical points associated with questions like “why is atomic structure stable at atomic scales?”

In both cases, mathematical methods were evolved that successfully swept awkward dynamical questions under the carpet. Such sweeping is a blessing in the sense that it permits practical calculations to move forward, and yet that same sweeping is regrettable in that clues to conceiving more general dynamical frameworks can be cloaked by it.

That is why it is prudent to study these dynamical difficulties with two objectives in view: first to evolve techniques that evade the difficulties in practical calculations, second to conceive alternative dynamical frameworks that naturally exclude the difficulties (these being the twin edges of David Deutsch’s sword).

Scott: “I’d say it’s also clear that there’s something huge we don’t understand about “self-locating belief,” personal identity, and yes, free will and consciousness. For example, if you want to say that the mind is just a computer program running on meat hardware, that’s fine … but then what happens when the program gets copied? Assuming you’re “merely” self-interested, should you agree to a perfect computer simulation of yourself undergoing terrible simulated tortures, in exchange for millions of real dollars? Either possible answer can be made to sound bizarre from a traditional scientific rationalist standpoint, but presumably one of the answers is right!”

Can you explain this “self-locating belief” problem in more detail?

Yes, to me the mind is just a program running on meat hardware (free will an illusion) and I don’t see any problems with this view.

What happens when the program gets copied? Same as with regular software – you end up with 2 programs. They will have shared memory and personality but will diverge after split.

Now if it were possible to place both copies in perfectly identical conditions they would act exactly the same, but of course in the real world there are sources of randomness which cannot be controlled.

As to the other question – assuming you’re “merely” self-interested, should you agree to a perfect computer simulation of yourself undergoing terrible simulated tortures, in exchange for millions of real dollars?

If you were to completely discard ethics, then yes: you certainly won’t feel any of those tortures. Of course the other copy would also say the same thing if asked the same question, but I don’t see this as a problem.

But the question is mostly about ethics as it’s analogous to subjecting someone else to such tortures for money. Only in this case that someone else is very similar to you.

So what is bizarre here? And where does this interpretation of mind run into problems?

Doesn’t the outcome of the two-slit experiment constitute something more than mere mathematical existence?

Isn’t it something that we “experience”?

Perhaps not, but then I would like to understand the “explanation” for the outcome — not simply that QM “predicts” the outcome, but rather an explanation of what physically in the world of experience is causing the outcome.

Related to what are ‘reasonable’ visualizations of quantum mechanics, isn’t the Schrödinger cat experiment very off-base, because interference only works for linear interactions, and dying is about as non-linear as you can possibly get, since it destroys information? If you die a second time you’re still dead.

Bram #123: No, if we’re talking about Schrödinger’s cat at all, we presumably must be imagining that technology has advanced to the point where one could rotate unitarily between the cat’s “live” and “dead” states, so that “death” wouldn’t be irreversible at all. This is why that experiment isn’t nearly as macabre as many people think!

Ad John #119: Thanks. I found the paper, but need hand-holding to see the connection. However, I do frequently ask people when it is clear that the entire system described by an instance of Schrödinger’s equation is within our “branch”, versus when not—as per descriptions of the two-slit experiment.

Ad Mike #122: I did nod to what you say in my first comment #116, but I’m not convinced the “other” photon—or what is perhaps more properly described as “the” photon regarded as a “multiversal object”—has physical existence. Before getting there, my Q is whether its physical existence must follow from accepting that QM linearity holds at all scales.

Ad Mitchell #117: I move from “world” to “history”—as in “consistent histories” (whose relation to MWI can be another topic)—and then try to analogize a history to a path through states in an NFA. The states of the NFA represent configurations of matter-energy, understanding that a 10^123-ish size limit applies somewhere.

“[M]y Q is whether its physical existence must follow from accepting that QM linearity holds at all scales.”

I guess my question is that if one assumes that the “other” photon isn’t “real,” what then is the explanation for the single photon interference outcome. And, I suppose a related question is that absent any good evidence [yet ;)] that QM linearity doesn’t hold, why not prefer the MWI?

The reason criticisms of MWI revolve around personal identity is because MWI has not yielded any objective concept of a “world” … So in an attempt to get MWI advocates to focus on the need to say *exactly* how their theory relates to empirical reality, MWI critics talk about conscious experiences and so forth, this always being the last resort when one tries to communicate with someone who either denies reality entirely (quantum “antirealists”) or who is only willing to talk about their favorite theoretical construct (MWI believers).

The problem with this view is that I know exactly how the MWI folks would respond to it. They’d say: “but ‘world,’ in the sense you mean by it, is just as much an abstract theoretical construct as ‘wavefunction’ is! Indeed, the only relevant difference between the two is that ‘wavefunction’ is a better theoretical construct, the one that our deepest theory of the physical world actually talks about.”

So, that’s the reason to bring in consciousness: because if I know nothing else, at least I know that my consciousness isn’t a theoretical construct (as you presumably know yours isn’t!).

Admittedly, many scientists seem to adopt a methodological rule, according to which you have to avoid saying the word “consciousness” for as long as possible, instead using various proxies for it. But my own attitude is different: because I know from experience that that’s where this discussion has to go eventually, if pursued to the bitter end, I simply go there immediately!

Sure! “Self-locating belief” refers to the issue that, even after all the “objective” facts about the physical world have been specified, you still need certain additional facts before you can make useful predictions. In particular, you need what the philosophers call “indexical” facts: among all the possible agents contained in your objective world-description, which agent are YOU? This is obviously a huge issue for MWI folks, but it’s also an issue for reasons having nothing to do with quantum mechanics. For example, self-locating questions are always in the background when people argue about the anthropic principle, the Doomsday Argument, or the likelihood of extraterrestrial intelligence.

The best introduction to self-locating belief I’ve seen is Nick Bostrom’s wonderful book Anthropic Bias: Observation Selection Effects in Science and Philosophy, which fortunately is available free on the web. Personally, I don’t claim that I know the right way to think about these issues: my only real goal, when this subject comes up, is to convince people who claim not to be confused that they should be confused!

And while Bostrom can probably do a vastly better job of confusing you than I can, let me try one thing:

In my example from comment #113, where a simulation of you undergoes terrible simulated tortures, suppose instead that it’s a thousand simulations of you being tortured in a thousand computers. Then even if you’re completely self-interested (i.e., if we leave ethics out of it), why on earth should you agree to this? As you deliberate over this decision, shouldn’t you reason that you yourself are almost certainly one of the simulations (since by assumption, they all have exactly the same mental state as the “real” you), and therefore it’s overwhelmingly likely that you yourself will be tortured once the deal is agreed to?
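For what it’s worth, the self-sampling arithmetic behind that worry (a numeric gloss in the style of Bostrom’s Self-Sampling Assumption, not anything Scott states explicitly) fits in a few lines:

```python
# Self-sampling sketch: 1 "real" you plus 1000 tortured simulations,
# all sharing exactly the same mental state. If you reason as a random
# sample from the 1001 subjectively indistinguishable instantiations:
real, sims = 1, 1000
p_tortured = sims / (real + sims)
print(f"P(you are one of the tortured copies) = {p_tortured:.4f}")
```

Under that assumption the odds of being the one copy that collects the money are about one in a thousand, which is the whole sting of the example.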

(I feel a little like Lisa Simpson now. Lisa: “if a tree falls in a forest and there’s no one there to hear it, does it make a sound?” Bart: “Sure it does! EWWW-PLUNK!” Lisa: “OK, what is the sound of one hand clapping?” Bart: “that’s easy!” [repeatedly claps his fingers against his palm])

Mike #127: Well, I could try one of the theories at the bottom of Scott’s notes here. Admittedly, my own objections to MWI are only “m(etaph)or(ic)al”, on top of what Scott says in his philosophy paper about MWI as an explanation of quantum computing (esp. his second issue).

QM is a reversible theory. If a unitary transformation U represents a possible evolution of a physical system, then U^-1 also represents a possible evolution. And, therefore, if decoherence can take place, then recoherence can also take place, by simply inverting whatever unitary transformation caused the decoherence. The only reason why we don’t see that happen in practice is the Second Law of Thermodynamics:… . But all those things are just statistical consequences of the initial conditions, and have nothing to do with the dynamical laws themselves.

The reason we don’t observe U^-1 (per Drescher’s account in Good and Real) lies in the “observe” part: specifically, that we could never have memories of U^-1 happening, because memory formation is itself an entropy-increasing process. In other words, the only state(s) s1 that can store information about state(s) s2 are those in which s1 has higher entropy than s2. (Obviously there are other constraints; that’s just the minimum.)

So even if the U^-1 evolution were to “happen” (whatever that would mean), all of the observers further along in that direction would only have memories of things even further in that direction (i.e. further “pastward”). The only observers left to “remember” t = 0 are those in states resulting from the (increasing-entropy) evolution in the U direction.

So, that’s the reason to bring in consciousness: because if I know nothing else, at least I know that my consciousness isn’t a theoretical construct (as you presumably know yours isn’t!).

Admittedly, many scientists seem to adopt a methodological rule, according to which you have to avoid saying the word “consciousness” for as long as possible, instead using various proxies for it.

But do you need consciousness for any of this, or merely the ability to experience qualia (some would argue – do argue – that you don’t even need that)? Or do you mean the two to be roughly equivalent? I think there’s an important difference myself.

ScentOfViolets #133, I’m extremely curious: what, in your view, is the relevant difference between “consciousness” and “qualia”? I confess that I’ve always regarded those—together with “mind,” “sentience,” “first-person experience,” and other frequent bedfellows—as totally interchangeable for these discussions. Or rather, from my standpoint, there’s one mysterious thing that all these terms are trying to get at (the most precise designation I could think of was, “that which David Chalmers writes about”). So the differences between the terms merely arise from the different ways in which they can be misunderstood to mean something other than that thing.

(On reflection, there might also be what Steven Pinker calls a “linguistic treadmill” going on here. In other words, just like the English language has cycled between “lavatory,” “bathroom,” “restroom,” “facilities,” etc.—as each euphemism, in its turn, becomes too distasteful for polite company, yet the need for some word remains—so too “consciousness,” “qualia,” “sentience,” etc. might each in their turn pick up too many woolly connotations for hardheaded scientists’ comfort, so that a new term keeps coming into vogue that actually means exactly the same thing!)

Scott: Sure! “Self-locating belief” refers to the issue that, even after all the “objective” facts about the physical world have been specified, you still need certain additional facts before you can make useful predictions. In particular, you need what the philosophers call “indexical” facts: among all the possible agents contained in your objective world-description, which agent are YOU?…”

Ok, thanks for the explanation, though it does seem to me that it’s only relevant for metaphysics: MWI, the anthropic principle, the Doomsday Argument, the likelihood of extraterrestrial intelligence… none of that counts as solid science in my book.

As for the torture example: “As you deliberate over this decision, shouldn’t you reason that you yourself are almost certainly one of the simulations (since by assumption, they all have exactly the same mental state as the “real” you), and therefore it’s overwhelmingly likely that you yourself will be tortured once the deal is agreed to?”

I don’t see how (assuming that were possible) them having exactly the same mental state would imply that one of them is me. The real me is the one that raises its hand when I raise my hand. Each piece of software occupies its own meat hardware; trashing the hardware of all the other copies won’t do anything to me. Exactly the same as with regular software and hardware.

How do you explain partial traces in MWI? The preferred basis problem? What is the coarse graining resolution for the branchings? What about branches merging due to “forgetting”? If there really are parallel worlds, how come we can only extract information from them obtainable from coherent constructive and destructive interference, and not any more?

Regarding whether consciousness causes wavefunction collapse, can’t we learn a bit more just within the double-slit experiment? Namely, let Alice observe which slit the particle goes through (so she will never see the interference pattern) while Bob watches the wall from afar, so he should have no information to collapse the particle’s wavefunction (assuming we isolated him from Alice and her observing device) and should see the interference pattern. But is it really possible that one conscious being sees the pattern and another one does not? Or is one conscious observer (even isolated) enough for the collapse? Have any experiments like this been done before?

Stas #140: In the situation you describe, neither one will see an interference pattern. Alice observing which slit the photon goes through decoheres the photon, turning it from a pure state to a mixed state. Note that exactly the same would happen if “Alice” were a mechanical recording device or even just environmental noise. From the lack of an interference pattern, Bob can’t conclude anything at all about whether a conscious being learned which path the photon went through—all he knows is that the “which-path” information became entangled with something in the external environment. This is all just standard QM, and yes, it’s been done.
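Stas’s scenario is easy to check numerically. Here is a toy calculation (arbitrary illustrative phases, not a model of any real apparatus): once the which-path information is recorded anywhere, the cross term that produces the fringes drops out of the screen statistics, whether or not any conscious being reads the record.

```python
import numpy as np

def screen_prob(phi, which_path_recorded):
    """(Unnormalized) probability at a screen point where the two
    slit paths differ by relative phase phi."""
    a1 = 1 / np.sqrt(2)                  # amplitude via slit 1
    a2 = np.exp(1j * phi) / np.sqrt(2)   # amplitude via slit 2
    if which_path_recorded:
        # Path info stored anywhere (Alice, a device, stray environment):
        # the record states are orthogonal, so probabilities add -- no fringes.
        return abs(a1) ** 2 + abs(a2) ** 2
    # No record: amplitudes add, and the cross term produces the fringes.
    return abs(a1 + a2) ** 2

for phi in (0.0, np.pi / 2, np.pi):
    print(f"phi={phi:.2f}  no record: {screen_prob(phi, False):.3f}  "
          f"recorded: {screen_prob(phi, True):.3f}")
```

With no record the probability swings between 0 and 2 as the phase varies (fringes); with a record it is flat at 1 for every phase, which is exactly what Bob sees regardless of whether Alice is conscious.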

I haven’t read all the comments in detail, so this might just be a repetition of stuff that has already been discussed above.

I was imagining today that there existed some kind of pan-dimensional being that could record the wave function of our universe and play it back at will.

We could all very well be part of one of these playbacks. Knowing that’s possible doesn’t seem to preclude us being conscious, even though we have no choice but to repeat everything, including what I am writing right now, each and every time the recording is “played” again.

I’m extremely curious: what, in your view, is the relevant difference between “consciousness” and “qualia”? I confess that I’ve always regarded those—together with “mind,” “sentience,” “first-person experience,” and other frequent bedfellows—as totally interchangeable for these discussions.

I don’t know if there’s a difference either; bear in mind this is in the context of scientists loath to refer explicitly to the trait that dare not speak its name: consciousness. Everyone knows why of course: if a human can ‘collapse the wave function’, how about a dog or a dolphin? How about a doodlebug or a volvox or a single streptococcus? Otoh, (to my mind at least) asking about qualia seems to be much more specific. At what point do we stop saying a detector is registering the presence of photons of a certain energy and start saying it is seeing the color red? It’s the same problem, admittedly, but at least it’s a smaller slice.

This is all just blue-skying, of course. Given what we know of consciousness already – namely that it’s an epiphenomenon that strips away vast quantities of data in preference of a pared-down inner form or narrative or whatever other descriptor you prefer – it may turn out that consciousness is in part defined by the fact it only takes notice of small parts of the wave function, those decohered ‘branches’ if you will. Certainly any agent cognizant of multiple branches of the wave function would not experience any form of consciousness as we know it. Qualia, otoh . . . well, that to my mind (and this is just imho, mind you) seems to be more fundamental in some sense.

Pardon me if I’ve missed something here; classes started for us this week. But when you say this:

No, it’s not a necessary aspect of MWI that the branches never again meet—in fact they will meet in the thermodynamic limit.

I’m wondering what you mean here. I would think (naively, of course; I’m an algebraic geometer), that living in a cosmos dominated by dark energy would ensure this would never happen! Am I wrong in thinking that dark energy does not imply some sort of exponential increase in accessible states? All other things being equal, of course.

ScentOfViolets #145: That’s an excellent point. With dark energy in the picture, it’s entirely possible that the branches would never again meet; thanks! But a major difficulty here is that no one really understands how to apply quantum mechanics in a cosmological context—e.g., should our Hilbert space include only the ~10^122 qubits inside our causal horizon (in which case, the dynamics on that Hilbert space wouldn’t be perfectly unitary, but would include an external “bath”)? Or should we include “the whole shebang,” even if we have no possible way of knowing how large it is, and can’t even in principle construct observables involving the stuff outside our causal region?

ScentOfViolets #144: In these discussions, I think it’s crucial to distinguish between consciousness and wavefunction collapse. For even if someone believes there’s a link between the two, there remains at least one huge difference: namely, wavefunction collapse has testable consequences, at least in principle. For example, you could check whether a dog or dolphin collapses the wavefunction, by shooting the poor animal through a superposition of two slits and then seeing whether you get interference fringes. (Yeah, there’s a lot of animal cruelty in this business.) But regardless of what answer you got, you still wouldn’t know what it was “like” to be a dog or dolphin—a different and much harder question that’s not even obviously within the scope of science. (Though in this case, I imagine that the sheer terror while flying through the diffraction grating wouldn’t be all that different in the human, dog, and dolphin cases.)

Suppose I’m given a container filled with gas– my task is to work out the thermodynamic theory and then take measurements. I work out the theory, and then my assistant, Igor, runs in and says “I’m sorry to tell you this but the container is actually filled with a mixture of two gases.”

I see three possibilities:
1) I’m an ordinary physics-monkey. I say “Ooopsie, mistakes have been made,” and I toss out what I’ve done so far.
2) I’m a Bayesian. I say “No mistakes have been made, my subjective knowledge of the system has changed,” and I toss out what I’ve done so far.
3) I’m an MWI-er. I say “No mistakes have been made, nothing has changed, I am simply not in the universe I thought I was in,” and I toss out what I’ve done so far.

I have a vague impression that, many years ago, David Deutsch explained to me that branches do not meet again for purely thermodynamic reasons… Yet the definition of a “branch” is a rather classical description of a quantum picture, and it is not clear how to define it in more complex cases.

The structural aspect of the deductive logic you have is right. However, I was disturbed by the “observe” thing in there. The two are unrelated.

Consciousness isn’t necessary in a basic physical description/theory (e.g. a differential equation), though it _can_ participate as a causal agent in a physical system, but only in an auxiliary way (e.g. as a part of well-posed auxiliary conditions). When you swing a bat to impart motion to an initially motionless ball, there _is_ a conscious action involved as a cause. However, such causal participation of consciousness does not alter the basic physics of the force imparted to the ball; F = dp/dt still holds. Moral: Your best bet for discovering physics is always to use the context of inanimate objects; it simplifies things to such a degree that physical analysis becomes possible at all. If the physics you get this way is right, then the use of consciousness in the physics theory will take care of itself automatically.

@Scott #147:

Think about it. The last time you walked through a door, you _were_ diffracted in the process. Would you therefore stay indoors? … Come to think of it, even entertaining such a possibility is meaningless. You would be diffracted during any movement, any motion whatsoever. In fact, you would be diffracted even if you hypothetically were to remain perfectly still and some one thing—just one thing—in the rest of the universe were to move relative to you… All in all, flying through a diffraction grating isn’t all that terrifying.

John Sidles #88:
So you are saying Rugar et al. produced a coherent wavefunction representing a total mass of 92 pg with a width of roughly 45 pm which was demonstrated to obey linear quantum mechanics for 760 ms? That would indeed falsify Penrose’s conjecture as well as certain other theories predicting gravitationally-induced nonlinearities. If so, your analysis seems deserving of its own peer-reviewed paper.

Jim, there are multiple ways (of course) to unravel the quantum trajectories of the IBM experiment, all of which predict the same ensemble of experimental records (of course!) and only *some* of these unravelings invalidate Penrose’s conjecture (of course!!).

In consequence, any serious analysis has to tackle head-on the conceptually subtle, computationally tricky, and experimentally difficult question “What was the *real* quantum trajectory unraveling during the IBM experiment?”

Closing all of the loopholes that are naturally associated to this question will take more than one generation of physicists (if the history of Bell inequalities is any guide).

It seems to me that a comparably consequential question to “What was the *real* quantum trajectory unraveling?” is “What is the most *computationally efficient* quantum trajectory unraveling?”

This latter is the question that I mostly think about, because (1) it has greater practical applications, (2) its philosophical baggage is lighter, and (3) DUDES duality suggests that in the long run these two questions will be answered within a common framework, and so it may not matter much which gets worked on.

On update (8/26):
(1) I think the experiment mentioned in #47 is closely related to the question.
(2) “Outgoing spheres of gravitational influence” is not a very good idea.
(3) The question itself is about a theory of “proper quantum gravity” that we still do not have.

Scott, maybe I’m wrong about your idea on SE, but it seems to me that you are looking for an example in which reversal of the evolution is prohibited for some reason, and in such a case gravitation is indeed overkill.

@Ajit_R._Jadhav #150: I thought my explanation was orthogonal to consciousness — indeed, it applies just as much to non-conscious beings as to conscious beings. The specific example Drescher uses in his exposition is a system of giant balls bouncing around with numerous small balls. In that system, you have an arrow of time, but clearly no conscious beings.

The “arrow of time” is that you can look at a snapshot and see the “wakes” that the large balls have cleared out, thus identifying which snapshots are “pastward” and which are “futureward”. The kicker is that if you run the simulation backwards past its initial state, you see the same phenomena, just with all the signs reversed except entropy! That is, you see the large balls clearing out the small ones and leaving wakes, just in the opposite directions. But you can still identify which snapshots are closer to t=0. And you also see that it’s not the “negative time” direction in which entropy decreases, but the “closer to initial” direction. Even more interestingly, there is no observational distinction that can tell you whether you’re in the “increasing time” or “decreasing time” direction—it’s purely a matter of labels whether you say that “the large ball moving left” is the positive time or negative time direction.
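Drescher’s picture can be sketched numerically. The following is my own minimal toy (not Drescher’s actual simulation, and with free particles on a circle rather than large and small balls): start time-reversible dynamics from a low-entropy state at t=0, and check that coarse-grained entropy increases in *both* time directions away from t=0.

```python
import math
import random

random.seed(1)
N = 2000
# Low-entropy "initial" state at t=0: all particles clustered near the center.
pos0 = [0.5 + random.uniform(-0.02, 0.02) for _ in range(N)]
vel = [random.gauss(0, 1) for _ in range(N)]

def positions(t):
    """Free, reversible motion on the unit circle; the same law applies for t < 0."""
    return [(x + v * t) % 1.0 for x, v in zip(pos0, vel)]

def coarse_entropy(pos, bins=20):
    """Coarse-grained (histogram) entropy of the particle positions."""
    counts = [0] * bins
    for x in pos:
        counts[min(int(x * bins), bins - 1)] += 1
    n = len(pos)
    return -sum(c / n * math.log(c / n) for c in counts if c)

s0 = coarse_entropy(positions(0.0))
s_fwd = coarse_entropy(positions(+0.5))   # "futureward" snapshot
s_bwd = coarse_entropy(positions(-0.5))   # snapshot past the initial state
# Entropy increases away from t=0 in both time directions:
assert s_fwd > s0 and s_bwd > s0
```

The dynamics is symmetric under t → −t; only the distance from the special low-entropy state at t=0 distinguishes the snapshots, which is the point of the example.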

Certainly, people can have false memories of past events—indeed, that’s what dreams are—but for them to stably persist (i.e., be “true” memories), they have to be the result of an increasing-entropy process.

Sorry if I have offended you in the above comment by addressing you as @Silas. I had no idea of the naming system you seem to follow.

I have not read Drescher, nor, going by the Wiki page about him (http://en.wikipedia.org/wiki/Gary_Drescher), do I intend to even browse his book(s). Feel free to briefly point out if the description in the Wiki page about him is inaccurate (by Scott’s permission, here, or otherwise, direct to me (no spaces etc): a j 1 7 5 t p at yahoo=dot * co#dot+in).

And, no, your above explanation was not orthogonal to consciousness—not inasmuch as you specifically said memories, and not, say, photographs. However, I won’t press or debate this point too much. Apart from the broad aside I noted above, it doesn’t interest me much.

I don’t think the foil for false memories is the stably persisting ones. Stably persisting memories could easily be false ones.

However, yes, I would agree that, speaking in broader terms, life is a process in which the entropy of the universe increases. Yet, I have no particular opinion (nor any particular interest) whether only the memory-forming sub-part of the life processes, taken by themselves, would necessarily increase the entropy of the universe or not.

Scott #98: Hi Scott, I’ve thought about your question. This is off-topic, but I hope it’s okay for me to comment, as I’ve been thinking about it for a few days. In addition to passing the Turing test in the sense of straightforward synchronous input-output, I think an entity would also have to pass an “asynchronous” Turing test. That is, it would have to demonstrate behaviour of its own that is not directly linked to its previous input. In human terms, I am referring to creativity in all of its forms. When an author writes a novel, he or she isn’t doing so in response to one specific input. That seems important to me. What do you think?

Other than that, I can’t really think of anything else. If an entity could pass the Turing test on both counts then I wouldn’t care if it was some kind of lookup table or anything else; I’d have a hard time rejecting its claim of consciousness…

I wrote a slightly longer reply on my Google+ feed. If you feel like reading it, go ahead and let me know what you think. Since this is already way off topic, I think this comment is long enough as it is.

After John Preskill’s excellent answer to your question about gravitational decoherence, maybe there are still a couple of points to worry about:
(1) We may not find “which-way” information by measuring the field of a single electron, but we may resolve the problem by measuring the “static” gravitational field of a macroscopic body near us, even with present-day technology. I think the Preskill model is more related to the fascinating prospect of measuring gravitational waves from faraway objects.
(2) If we are talking about gravitons, we most likely mean linearized gravity. In such a case, maybe there is indeed not a big difference in comparison with some spin-2 particle, or even with “qupentits” prepared in some multi-photon experiment. But the initial reason to ask about gravitation may be just a hope for some difference from the “usual case”…

I have posted a reasonably quantitative answer to the question Scott posed, “Reversing gravitational decoherence,” on Physics StackExchange. This same answer will provide technical background to the final exchange in the ongoing debate between Aram Harrow and Gil Kalai regarding the feasibility (or not) of scalable quantum computing.

Folks who are familiar with the celebrated joke — variously told of rabbis / priests / ministers / monks — whose punch-line is “You’re absolutely right,” will be unsurprised by the gist of this answer.

John Sidles #162, it was a pleasure for me to read that review, but what is the relation to Scott’s question? Certainly it is a thought experiment with a simplified model where the second law “is disabled by definition”.

Here on Shtetl Optimized it is traditional to be a little bit more provocative. In keeping with this tradition, the strongest fundamental physics postulates that (IMHO) might be true are these: (1) no bosonic emission into the vacuum (gravitational or otherwise) can be scalably quenched with arbitrarily low residual decoherence, and in consequence (2) the state-space of Nature is effectively not a (flat) Hilbert space, and hence (3) it is credible that the state-space of Nature is in fact not a Hilbert space; however, (4) non-flat quantum state-spaces are difficult to observe because they mimic experimental decoherence so near-perfectly.

A young mathematician comes to present to a famous mathematician his conjecture and ideas. “You are absolutely wrong,” the famous mathematician dismissed the young one. Next enters another young mathematician and presents precisely the opposite conjecture. “You are absolutely wrong,” replies the famous mathematician. The famous mathematician’s wife interferes. “How could you tell both of them that they are wrong?” she says. “They have made completely opposite claims; one of them must be right!” “You are also wrong,” replied the famous mathematician.

John #164, it seems to me Physics.SE is not well suited for such discussions, for purely technical (software) and other reasons. Joe Fitzsimons et al. tried to organize a “physicsoverflow,” first inside SE (but TP.SE was closed after about 200+ days of operation) and then outside it (http://discussion.tpqa.org/), but I have not seen any activity there for a long period of time.

I am a complete idiot when it comes to general relativity and the attempts to integrate it with QM. (I first wrote novice/newbie, but then revised the draft, in the larger interests of accuracy.)

One dumb thing, however, that no one seems to mention in this context is this:

_If_ there has to be a particle of gravity, why does it have to be quantum mechanical in nature, i.e., carrying a wave-particle dual nature, undergoing wave collapse/decoherence, etc.? Why can’t it be a simple classical particle?

Since there is no experimental evidence anyway, the only issue that can now be raised is: What kind of aesthetics leads to that kind of an assumption?

Is the aesthetics in question completely inspired in reference to the fact that one has already invested a lot of years mastering the usual QM and the same/similar mathematical machinery can then be easily put to a good reuse (good in the sense: allowing easiest route to enhance one’s publication record)? Or is there a more robust, physical, consideration behind the choice of that assumption?

Historically, atoms and molecules initially were simple particles without any quantum nature. That’s how Boltzmann thought of them. The spectral density graph of the cavity radiation, the photoelectric effect, and the discrete atomic spectral lines together thrust the quantum nature onto the theorists. (There is one more effect that the introductory texts invariably mention: the Compton effect. However, though discovered by an American, it was a piece of evidence that actually came only later.) In contrast, Boltzmann could still explain thermodynamical properties of gases assuming a simple, pre-quantum (or classical) molecule.

If the Poisson-Laplace equation is all that is (locally) to be explained in reference to a particles-based model, why not keep it a classical particle? Where precisely, then, do we hurt the attempts at GR+QM integration—if at all we do? Since no experimental observation yet exists to suggest any diffraction/interference effects due to gravity, why take a giant leap of faith of assuming a quantum nature for gravity (say, in disregard of the famous Occam’s razor (which could be, and still is, invoked to reject the idea of aether, anyway))?

I of course know in advance that I am right. However, here I am looking for succinct, direct, and preferably conceptual answers to these questions, esp. the last one.

why take a giant leap of faith of assuming a quantum nature for gravity…

It’s an excellent question, but I think there’s an excellent answer to it, which was provided by Feynman in a 1957 conference. See this article by Zeh for details.

The short version: if there’s something that can be put in superposition, then the principles of quantum mechanics (provided you accept them) imply that anything else can also be put in superposition, since what happens when you do an experiment that entangles the first thing with the second thing? The principle of superposition is like a “universal acid”: once you introduce it anywhere, it tolerates nothing in the universe that fails to abide by it!

So no, this isn’t a question of “saving effort” by reusing the same mathematical machinery: it’s a simple question of logical consistency. Either gravity has to be quantum-mechanical, or else the principles of quantum mechanics themselves have to be overthrown (surviving only as approximations). But no one knows how to revise QM even slightly and get a sensible theory—that’s exactly what we’ve been discussing here! So instead they think about how to quantize gravity, and (if they’re string theorists) claim things like AdS/CFT as large if incomplete successes in that program. At any rate, I don’t see any way to escape the choice between quantizing gravity and changing QM.
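For what it’s worth, the core of that consistency argument can be written in two lines (my notation; a schematic sketch, not Feynman’s actual presentation):

```latex
% A mass in a spatial superposition, coupled to an initially definite field state:
\[
\frac{1}{\sqrt{2}}\bigl(|x_1\rangle + |x_2\rangle\bigr)\otimes|g_0\rangle
\;\xrightarrow{\ \text{gravitational interaction}\ }\;
\frac{1}{\sqrt{2}}\bigl(|x_1\rangle|g_1\rangle + |x_2\rangle|g_2\rangle\bigr),
\]
% where |g_i> denotes the field configuration sourced by the mass at x_i.
```

The right-hand side puts the gravitational degrees of freedom themselves into a superposition—unless the interaction fails to be unitary, which is precisely the “change QM” horn of the dilemma.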

Cool! The paper to which you refer seems neat. However, I might take some time before returning on this one. (Am thinking of writing something for the latest FQXi contest. Haven’t started yet, but may be, will start doing so tomorrow afternoon or so… Which means, I will go through Zeh’s paper on September 1st or so, at the earliest).

Yet, let me slip in something here right away, for the time being.

I have to read and understand Feynman’s argument, but I am not too sure I would get convinced. The essential reason I am hesitant to accept the quantum mechanical superposition as the “universal acid” (BTW, a neat term, that one) is this:

Superposition is a part of a mathematical (or even a physics-theoretical) method; it’s not a physical quantity. In fact, even if you stick only to the mainstream QM, it’s not even a QM “observable.” Gravitational force, on the other hand, is (inasmuch as momentum is, anyway).

Though I have yet to go through the paper, here is my _hunch_: Feynman (or others like him), logically speaking, _must_ be first assuming that a proper unification of EM (or QM i.e. QED) and gravity is in principle possible.

Now, note, that is just a hypothesis. Historical evidence supports it (e.g. mechanical theory of heat; E+M = EM; Optics as EM; QM theory of matter; etc.). But there is no fundamental reason why Mother Nature must always oblige us in supporting the projections into future of our historical trends.

_If_ you accept that hypothesis (that unification is possible), and then, as you yourself rightly pointed out, if you also further suppose that the present mainstream understanding of QM is the final word on those matters (I mean, the observations and experimental findings regarding the QM phenomena), then, sure, for the consistency reasons, I would immediately agree that gravity would have to be quantum in nature.

OK, that was a bit long and winding writeup, but, yes, in essence, I wanted to point out the difference of superpositions from physical quantities/observables. (In a way, superpositions/entanglements would be different even from hidden variables, but more on that, sometime later, may be next week or so.)

Thanks, and bye for now. (Will keep my ‘net browsing to the minimum until August 31.)

Feynman (or others like him), logically speaking, _must_ be first assuming that a proper unification of EM (or QM i.e. QED) and gravity is in principle possible.

No, there’s not really any assumption about unification. Gravity and EM could be totally different forces, but as long as

(1) they inhabit the same logically-consistent universe,
(2) there’s some means by which the two interact (e.g., even charged particles fall if you drop them!), and
(3) the degrees of freedom of one of them obey the superposition principle,

the degrees of freedom of the other seem like they have to obey the superposition principle as well.

Probably not worth discussing further until you at least understand the argument for this conclusion! (The argument is an old and standard one; Feynman simply gives one of the best presentations of it.)

Cogito ergo sum.
1. Is thought a physical reality? We have measurements of thought in the form of scientific papers and working devices.
2. If it is a physical reality, is it described by classical physics, quantum mechanics, or neither?
3. What are the particles responsible for thought? Thoughtons, ideons, evrikons? (We have effective abstract particles like phonons and holes.)
4. What is consciousness, even in operational form? What are the measurable consequences of consciousness?

Consider an inclined black box with one central hole on the top surface and n holes on the bottom surface. You release initially motionless balls into the top hole, one at a time, and, as each ball exits from the bottom surface after a while, it falls into one of n collection bins, one below each hole.

[What we have here is _not_ the Galton box. For the inner working of the box, you may want to imagine a small hammer just near the entry hole; it gives a variable amount of gentle horizontal push to each ball. An n-way combination of splitting grooves/chutes sits just below. The ball, so diverted horizontally, then exits directly through the selected pathway—without a further horizontal jump.]

We agree to describe the dynamics of the system in discrete terms for the motion of the balls along the horizontal x-axis, and in continuous terms for the motion along the vertical (or inclined) y-axis. (That’s primarily because we may neglect each ball’s horizontal deflection time, since the deflection happens just once.) The motion of a ball along the y-axis is governed by gravity; v = u + gt applies, with u = 0. Motion along the x-axis is governed by something we don’t understand under the black-box description. However, statistically, it happens to show a distribution. [It’s not necessarily the binomial one—it’s determined by the distribution of whimsicalness the machine has built into the horizontal hammer.]

The system obviously has two basically different mechanisms governing the components of motion along the two axes (i.e., it has two sets of forces).
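As a sanity check on the setup, here is a minimal simulation of the black box as described (the bin weights, height, and number of bins are made-up illustrative numbers): the hammer’s discrete horizontal choice and the continuous vertical fall are generated by entirely independent mechanisms, so the exit time comes out the same for every ball regardless of which bin it lands in.

```python
import math
import random

G = 9.8          # m/s^2, acceleration along the (vertical) y-axis
HEIGHT = 1.0     # m, vertical drop inside the box (illustrative)
N_BINS = 5       # the "n" holes on the bottom surface

# Hypothetical built-in "whimsicalness" of the hammer: any discrete
# distribution over the n bins (not necessarily binomial).
BIN_WEIGHTS = [1, 4, 2, 8, 1]

def drop_ball(rng):
    """One ball: a discrete horizontal selection, plus a continuous vertical fall."""
    # x-axis: the hammer's push picks a bin (discrete DOF).
    bin_index = rng.choices(range(N_BINS), weights=BIN_WEIGHTS)[0]
    # y-axis: free fall from rest, v = u + g*t with u = 0, so
    # HEIGHT = (1/2) g t^2  =>  t = sqrt(2*HEIGHT/G)  (continuous DOF).
    exit_time = math.sqrt(2 * HEIGHT / G)
    return bin_index, exit_time

rng = random.Random(0)
results = [drop_ball(rng) for _ in range(10_000)]
times = {t for _, t in results}
# The exit time is identical for every ball, whatever bin it lands in:
assert len(times) == 1
```

This is only the classical version of the box (question (i)); the point it illustrates is that the statistics of the exit time are completely decoupled from the distribution over bins.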

Questions:

(i) Just because the DOFs along the x-axis are discrete, does it mean our theory must discretize the motion along the y-axis, too? Why does the “universal acid” fail to dissolve the continuous nature of the y-axis DOFs here?

(ii) Now, assume that the black box has some mechanism other than the one given above. Suppose that it involves some QM-like superpositions. (Note, the word is: QM-like, not the QMechanical.) The ball, once released in the top hole, goes in a state of that QM-like superposition. Assume further that the collapse occurs for some unknown reasons at the time of the ball’s exit from one of the ‘n’ holes at the bottom. Note, the fact of superpositions affects only the horizontal component of the motion. The question: Does the existence in theory of superpositions now go on to change your answer to question (i)? Why? Why not? What if the question is generalized to any aspect of the dynamics (i.e. not just to the continuous/discrete nature of the DOFs)? Would the existence of a superposition mechanism, operating strictly only in the horizontal direction to select one of the n bins, imply that the vertical motion must also obey some superposition? As this example shows, the average time of a ball to exit the box (in fact even the statistical distribution of the time to exit) would be completely independent of superpositions. What went wrong?

>> “Probably not worth discussing further until you at least understand the argument for this conclusion!”

Fine as an expression of some mild exasperation. However, if meant by any chance more seriously than that, then let me ask: What would it take for you to be convinced that I do?

BTW, I can always leave commenting on any blog, including yours. It really is easier to actually do it than what it sounds like on the first reading. The collectivists sort of “connectors” have culturally made the idea of an easily un-connecting kind of a man difficult to digest. But, that’s not reality. It is pretty easy. (I recently did it for an IITians group at LinkedIn, too.) So, feel free to indicate (or drop me a line by email) if I should do that.

Bye for now.

Last attempt over.

Ajit
PS: (BTW, haven’t yet begun writing that essay—as any spyware on my machine could easily verify. Perhaps may not do so, at all… Will decide later tonight or tomorrow.)
[E&OE]

Gil Kalai, I think your joke is pretty good, but it could be improved. I’ve reworked it:

A young blog commenter tells famous Prof Scott Aaronson his conjecture and ideas. “You are absolutely wrong,” the famous professor dismissed the young one. Next enters another young blog commenter and presents precisely the opposite conjecture. “You are absolutely wrong” replies the famous professor. The famous professor’s wife interferes. “How could you tell both of them that they are wrong,” she says. “They have made completely opposite claims, one of them must be right!” “You are also wrong,” replied the famous professor.

Scott #147, I have actually done that experiment. I sent all 435 members of the US House of Representatives through the double slit and found that only 17% of them were conscious, and that consciousness was highly correlated with subcommittee assignments in a most surprising way. Before I could publish my findings, I was kidnapped by CIA operatives and my entire lab was relocated to Gitmo. It is only a quantum doppelganger of me posting here now, and similarly for the ghosts now occupying those seats in Congress. Thus, MWI is confirmed ;-).

No, I don’t think I had to invent that blackbox. However, conceptually, it’s a powerful toy, and it does help ground arguments while talking with folks who are given to taking flights into the abstract and the symbolic far too easily.

Anyway, the two main points on which _you_ are _inconsistent_, are these:

(i) You first say that G and EM are _totally_ different forces, but then you also go on to add that they _interact_.

The expression “interacting _forces_” has no meaning unless one first assumes the existence of a more basic, unifying, force. In which case, G & EM cease to be _totally_ different; they simply become two different contextual manifestations of the same underlying force. Which precisely is what I had suggested.

Tch… Do you at least now realize that all that you have actually succeeded in doing is to lend support to my position?

(ii) A different, second point. Here, you (and also many, many others, including those at MIT/Berkeley/Cambridge etc., those winning Nobels etc., and Indians, esp. IITians, revering all such aforementioned) may not agree. However, it’s something I believe in. BTW, it’s a direct consequence of Ayn Rand’s philosophy.

Strictly speaking, there cannot be any interaction between forces. Interactions happen between objects (entities), not between their attributes, characteristics, actions, etc. A force is nothing but a kind of an action taken by an entity. Actions (e.g. motions) do not exist independent of the entities which act.

To quote Ayn Rand: “They proclaim that there are no entities, that nothing exists but motion, and blank out the fact that motion presupposes the thing which moves, that without the concept of entity, there can be no such concept as ‘motion.'”

To suggest that two “totally different” forces can interact follows the same basic pattern as suggesting that two unrelated attributes interact. For instance, that size and texture interact. That bigness can interact with surface roughness.

So long as one is willing to drop the context and blank out necessary facts, one can always come up with a lot of argumentation, perhaps an intelligent one, perhaps a socially satisfying one, perhaps one that leads to money, perks and career advancements, etc. Perhaps. But, always, ultimately, in rebellion against reality.

If I were in your place, rather than writing a new blog post involving interaction, I would have come back and on my own clarified this point—viz., that I had smartly, almost cheatingly, slipped in that interaction thingie in that argument above. And, I would have agreed that all the G + QM programs are at best only tentative in nature.

Tch. Berkeley/MIT/etc. folks. Remarkably like IITians. No point expecting such things from them.

Where is the discussion of the Born rule problem, which is so persistent and seemingly insoluble? Also, a reply to Dibby’s questions would be interesting… right now a naive reader would think that the biggest objection to MWI is the lack of detection of these other worlds, which obviously isn’t true at all.

A better use of time would be to ask the question: what are the Many-Worlds like? If the future is close to us it is because the clock is always ticking and the future is always becoming the present and the present is always becoming the future, and this has been going on since the beginning of time. To see the future close to us we have to see the contrast with the surrounding area. We require a good to better outline of our futures. If we recognize these goals and accept the responsibilities, the future will be opened to us and we will see we can share information with the future in the present. The solutions to the questions of computer science must focus on positive affirmative outcomes, and the same principles must be adapted to physics to continue to innovate and expand our capabilities and intelligence as a species.

Scott @171: I follow the argument that gravity must obey the superposition principle, but does this necessarily imply that gravity must be *quantized*, strictly speaking? I.e., must it have a minimum non-zero energy, or could it be “classical” in that sense?

Jon Lennox #181: I don’t know! For me, “being quantum” means obeying the superposition principle. Maybe someone else can explain what, if any, is the argument from first principles that there must be a “quantum of gravity” (i.e., a graviton).

It is terribly late but here are several questions related to the post that I am curious about.

The post asserts that either QM “goes all the way” and allows macroscopic cat states and other counterintuitive scenarios, or that there is some theory of forced decoherence that kicks in at the macroscopic scale; and Scott (having no clear opinion) gives 50:50 odds for each possibility. My questions are:

1) Is MWI the only interpretation that supports QM going “all the way”? E.g., why not think of QM as a mathematical theory of noncommutative probability, as (if I understood him correctly) Steve Landsburg suggests in #46?

2) Cat states are fairly simple. Why regard macroscopic cat states as harder to get than the very entangled states we see in quantum algorithms or quantum error correction?

3) Isn’t it more likely that the distinction between microscopic and macroscopic systems emerges from the physics, rather than that there are different a priori principles for microscopic and macroscopic systems?

4) The way I look at it, a theory of decoherence (or noise) is simply a theory of approximation for large quantum systems when you neglect some (or many) degrees of freedom. Viewed this way, many computational methods in quantum physics can be seen as such approximation recipes. Are such approximation methods expected (or even known) to follow “from first principles” from the basic framework of QM, or rather to supplement it?

5) A related question: Should we regard QM as a mathematical language that allows us to express every law of physics, or, more strongly, as a theory that allows us to derive every law of physics?

6) Of course, the case of thermodynamics in the context of question 5 is especially interesting. What about thermodynamics?

The beauty of MWI is that everything emerges from just taking the Schroedinger equation (and its siblings) literally. But MWI has issues, like the dependence of branch counting on the level of coarse-graining, or the problems with the emergence of the Born rule (without additional postulates). So it’s not obvious at all that MWI is the final answer, even if the approach is highly attractive.

I’ve been working on an alternative to MWI that also includes information restrictions for local mechanisms in a quantum universe (like an observer). The constraints come from interaction locality, and while sharing its starting point with MWI, they lead to a very different picture. My blog contains a gentle introduction to the approach and a pointer to more rigorous explanations: http://aquantumoftheory.wordpress.com/2012/09/12/does-quantum-theory-have-to-be-interpreted/