Basically, what the many-worlds theory says is that parallel universes are constantly branching off of ours whenever any sort of observation occurs: rather than a wavefunction collapsing to a single outcome, every possible outcome is realized, each in its own branch.

Of course, events like those are happening all the time. Where are these parallel universes? And I'm not even going to try to think about how many of them there are, because the number is obscenely ridiculous. How many wavefunctions are collapsing around you right now? And the whole structure is branching. Let's say that every second, somewhere in the universe, exactly one wavefunction collapses, and there are two possible pure states that it could collapse to. The universe is about 5 × 10^17 seconds old, so the number of universes you'd need to make this work is two to that power. Occam is rolling over in his grave. Except maybe in the universe these people are in, Occam never existed.
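To get a feel for that back-of-the-envelope count (under the same toy assumption of exactly one binary branching event per second), the number 2^(5 × 10^17) is too large to write down, but we can at least count its decimal digits:

```python
import math

AGE_SECONDS = 5e17  # rough age of the universe in seconds

# Under the toy assumption of one binary branch per second, the branch
# count is 2**AGE_SECONDS. The number itself is far too large to compute,
# so count its decimal digits instead: digits of 2**n = floor(n*log10(2)) + 1.
digits = math.floor(AGE_SECONDS * math.log10(2)) + 1
print(digits)  # about 1.5e17 decimal digits
```

So the branch count isn't just big; merely *writing it down* would take on the order of 10^17 digits.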

Besides, we can't see these other universes. Isn't that convenient? Perhaps the two major interpretations of quantum mechanics -- the probabilistic interpretation and the many-worlds interpretation -- are just two different formalisms for understanding the same thing. It's easy to picture this giant combinatorial tree of universes; it's harder to picture superpositions.

You're offended by the idea that "God plays dice with the universe". So you call zillions of other universes into existence because you can't handle it? Einstein would be ashamed.

There are many [viable] interpretations of Quantum Mechanics and, to my knowledge, there are no "2 most important ones" — the Copenhagen view is the one we all use in everyday life (so, you could call this one the "important" one), but it's my understanding that all of the other interpretations have "equal weight" (maybe with small perturbations ;-).

And, in this sense, I have to agree with the Anonymous above: when it comes to wave-function collapse... all bets are off!

Also, Penrose discusses this every time he gets a chance: he did so in "The Road to Reality" and had already done so in "The Emperor's New Mind". In his case, he speculates that quantum mechanical wave-function collapse is related to Quantum Gravity.

There's much more than just a 'crackpot' point to be made by these various interpretations/ontologies of Quantum Mechanics. So, although your numbers don't lie, things are not so straightforward.

As for "Landscape"-like analogies... here's something very rarely mentioned in this fashion: Let's consider a vanilla Gauge Field Theory, with an Action given by S[A] = (1/g) int_{M} F ^ *F, where F = d A, * is the Hodge-* operator and M is the manifold/spacetime where this theory is defined. The vacuum solutions to this Action are all labeled by the coupling constant g, which ranges over the real numbers. Therefore, in this clear sense, the Moduli Space of this theory (its Solution Space, its Vacuum Manifold) has uncountably many vacuum states! This is, in principle, much worse than the so-called "Landscape" problem!
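For concreteness, the setup described above can be written out as follows (a sketch; conventions vary, and many authors normalize the action with 1/(2g^2) instead):

```latex
S[A] \;=\; \frac{1}{g}\int_{M} F \wedge \star F,
\qquad F = \mathrm{d}A,
\qquad g \in \mathbb{R}.
```

Varying with respect to $A$ gives the equation of motion $\mathrm{d}\,{\star F} = 0$ (the Bianchi identity $\mathrm{d}F = 0$ holds automatically since $F = \mathrm{d}A$), and the trivial vacuum $F = 0$ solves it for every real value of $g$, which is the uncountable family of vacua being pointed at.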

However, the trick here comes under the following name: Superselection Rules. That is, in QFT we usually have "ways" to determine which states are physically realizable; we have a "Vacuum Selection Mechanism". The same is not true for String Theory... at least so far. ;-)

The detail hidden behind all of this is the following: the analysis of the QFT equations of motion in terms of their parameters is something quite new... which some folks would rather gather under the name of Renormalization. The point being that "dynamical systems"-like analysis is not something "common" in QFT... and this is exactly the stage we're entering now. :-)

"Q21 Does many-worlds violate Ockham's Razor?

William of Ockham (1285-1349?), English philosopher and one of the founders of logic, proposed a maxim for judging theories which says that hypotheses should not be multiplied beyond necessity. This is known as Ockham's razor and is interpreted, today, as meaning that to account for any set of facts the simplest theories are to be preferred over more complex ones. Many-worlds is viewed as unnecessarily complex, by some, by requiring the existence of a multiplicity of worlds to explain what we see, at any time, in just one world.

This is to mistake what is meant by "complex". Here's an example. Analysis of starlight reveals that starlight is very similar to faint sunlight, both with spectroscopic absorption and emission lines. Assuming the universality of physical law we are led to conclude that other stars and worlds are scattered, in great numbers, across the cosmos. The theory that "the stars are distant suns" is the simplest theory and so to be preferred by Ockham's Razor to other geocentric theories.

Similarly, many-worlds is the simplest and most economical quantum theory because it proposes that the same laws of physics apply to animate observers as have been observed for inanimate objects. The multiplicity of worlds predicted by the theory is not a weakness of many-worlds, any more than the multiplicity of stars is for astronomers, since the non-interacting worlds emerge from a simpler theory.

(As an historical aside it is worth noting that Ockham's razor was also falsely used to argue in favour of the older geocentric theories against Galileo's notion of the vastness of the cosmos. The notion of vast empty interstellar spaces was too uneconomical to be believable to the Medieval mind. Again they were confusing the notion of vastness with complexity [15].)"

Thanks for the link, although I don't have the time to read it right now. Everything I've read about this gave the impression that the many worlds were "somewhere else", because I've never read about the many-worlds interpretation in a setting where the authors felt comfortable invoking superposition. (Lay readers aren't so comfortable with that idea.) The theory seems ridiculously anti-Occam to me if the universes are in some separate space, but if they're superposed on ours, suddenly I'm a lot more comfortable with it.

No, steven, it doesn't follow "directly from the math", despite what an overzealous promoter says.

All the mathematics can tell you is what the "shut up and calculate" interpretation says. And that is the interpretation that holds sway among actual physicists (again, despite what that author may say). In practice, physicists have abandoned the philosophy of quantum mechanics as much as mathematicians have abandoned the Platonism/Formalism debate.

Many-Worlds is even less of a scientific theory than string theory. It "resolves" the measurement problem by pushing it into alternative universes that we can by definition never observe. Much more satisfactory is Penrose's attempt to explain decoherence from the uncertainty associated with gravitational energy. Not only does it unify the non-unitary evolution with the unitary, but it has the absolute advantage of being physically testable.

John, we may be using a different definition of "the mathematics". What I'm saying is that once you take the unitary part of the dynamics as describing something, you're forced to conclude that this something has branching worlds in it, unless most of those magically disappear because of some sort of extra collapse process for which there is no evidence.
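A minimal sketch of that point, assuming a single two-level system and a one-qubit "pointer" that records the outcome, with the measurement interaction modeled as a CNOT (a standard toy model, not anyone's specific proposal):

```python
import numpy as np

# System qubit in an equal superposition; the apparatus pointer starts
# in a |ready> = |0> state.
system = np.array([1.0, 1.0]) / np.sqrt(2)
pointer = np.array([1.0, 0.0])
state = np.kron(system, pointer)  # joint state in the |system, pointer> basis

# Model the measurement interaction as a CNOT: the pointer copies the
# system's basis value. This is purely unitary; no collapse is applied.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
after = CNOT @ state

# The result has exactly two nonzero terms, |0,0> and |1,1>: two
# "branches" in which the pointer agrees with the system, produced by
# the unitary dynamics alone.
print(np.round(after, 3))
```

Whether those two correlated terms count as "worlds" or as something awaiting a further collapse is exactly the interpretational question; the point is only that the branching structure is already present in the unitary part of the math.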

Many worlds is untestable (which is why it's a question for philosophers of physics), but so are almost all other interpretations, and many worlds makes the fewest unnecessary assumptions. Being successfully tested makes a theory more probable, but being potentially testable does not.

So in what sense can Many-Worlds be said to be "right" (as your link claims it is) if it differs in no material way from any other interpretation?

Oh yes, that link claims to outline a prediction, but it's complete hand-waving at that point. Basically he claims that it resolves the measurement problem by making certain things reversible that wouldn't be otherwise. However, it really gives no mechanism to actually effect this process.

On the other hand, there are theories out there for revising quantum mechanics and resolving the measurement problem that don't invoke myriad alternate realities in the process, that give very specific mechanisms for their workings, and give specific, predictable tests. These revisions lead to interpretations of their own, which your link conveniently avoids mentioning. I wonder why?