Why is many-worlds winning the foundations debate?

Almost every time the foundations of quantum theory are mentioned on another science blog, the comments contain a lot of debate about many-worlds. I find it kind of depressing the extent to which many people are happy to jump on board with this interpretation without asking too many questions. In fact, it is almost as depressing as the fact that Copenhagen has been the dominant interpretation for so long, despite the fact that most of Bohr’s writings on the subject are pretty much incoherent.

Traditionally, philosophers have made a distinction between analytic and synthetic truths. Analytic truths are those things that you can prove to be true by deduction alone. They are necessary truths and essentially they are just the tautologies of classical logic, e.g. either this is a blog post or this is not a blog post. On the other hand, synthetic truths are things we could imagine to have been another way, or things that we need to make some observation of the world in order to confirm or deny, e.g. Matt Leifer has never written a blog post about his pet cat.

Perhaps the central problem of the philosophy of science is whether the correctness of the scientific method is an analytic or a synthetic truth. Of course this depends a lot on how exactly you decide to define the scientific method, which is a topic of considerable controversy in itself. However, it’s pretty clear that the principle of induction is not an analytic truth, and even if you are a falsificationist you have to admit that it has some role in science, i.e. if a single experiment contradicts the predictions of a dominant theory then you call it an anomaly rather than a falsification. Of the other alternatives, if you’re a radical Kuhnian then you’re probably not reading this blog, since you are busy writing incoherent postmodern junk for a sociology journal. If you are a follower of Feyerabend then you are a conflicted soul and I sympathize. Anyway, back to the plot for people who do believe that induction has some role to play in science.

Kant’s resolution to this dilemma was to divide the synthetic truths into two categories, the a priori truths and the rest (I don’t know a good name for non-a priori synthetic truths). The a priori synthetic truths are things that cannot be directly deduced, but are nevertheless so basic to our functioning as beings living in this world that we must assume them to be true, i.e. it would be impossible to make any sense of the world without them. For example, we might decide that the fact that the universe is regular enough to perform approximately repeatable scientific experiments and to draw reliable inferences from them should be in the category of a priori truths. This seems reasonable because it is pretty hard to imagine that any kind of intelligent life could exist in a universe where the laws of physics were in continual flux.

One problem with this notion is that we can’t know a priori exactly what the a priori truths are. We can write down a list of what we currently believe to be a priori truths – our assumed a priori truths – but this is open to revision if we find that we can in fact still make sense of the world when we discard some of these assumed truths. The most famous example of this comes from Kant himself, who assumed that the way our senses are hooked up meant that we must describe the world in terms of events happening in space and time, implicitly assuming a Euclidean geometry. As we now know, the world still makes sense if we drop the Euclidean assumption, unifying space and time and working with much more general geometries. Still, even in relativity we have the concept of events occurring at spacetime locations as a fundamental primitive. If you like, you can modify Kant’s position to take this as the fundamental a priori truth, and explain that he was simply misled by the synthetic fact that our spacetime is approximately flat on ordinary length scales.

At this point, it is useful to introduce Quine’s pudding-bowl analogy for the structure of knowledge (I can’t remember what kind of bowl Quine actually used, but he’s making pudding as far as we are concerned). If you make a small chip at the top of a pudding bowl, then you won’t have any problem making pudding with it and the chip can easily be fixed up. On the other hand, if you make a hole near the bottom then you will have a sticky mess in the oven. It will take some considerable surgery to fix up the bowl and you are likely to consider just throwing out the bowl and sitting down at the pottery wheel to build a new one. The moral of the story is that we should be more skeptical of changes in the structure of our knowledge that seem to undermine assumptions that we think are fundamental. We need to have very good reasons to make such changes, because it is clear that there is a lot of work to be done in order to reconstruct all the dependent knowledge further up the bowl that we rely on every day. The point is not that we should never make such changes – just that we should be careful to ensure that there isn’t an equally good explanation that doesn’t require making such a drastic change.

Aside: Although Quine has in mind a hierarchical structure for knowledge – the parts of the pudding bowl near the bottom are the foundation that supports the rest of the bowl – I don’t think this is strictly necessary. We just need to believe that some areas of knowledge have higher connectivity than others, i.e. more other important things that depend on them. It would work equally well if you think knowledge is structured like a power-law graph for example.

The Quinian argument is often levelled against proposed interpretations of quantum theory, e.g. the idea that quantum theory should be understood as requiring a fundamental revision of logic or probability theory rather than these being convenient mathematical formalisms that can coexist happily with their classical counterparts. The point here is that it is bordering on the paradoxical for a scientific theory to entail changes to things on which the scientific method itself seems to depend, since we did use logical and statistical arguments to confirm quantum theory in the first place. Thus, if we revise logic or probability then the theory seems to be “eating its own tail”. This is not to say that this is an actual paradox, because it could be the case that when we reconstruct the entire edifice of knowledge according to the new logic or probability theory we will still find that we were right to believe quantum theory, but just mistaken about the reasons why we should believe it. However, the whole exercise is question begging because if we allow changes to such basic things then why not make a more radical change and consider the whole space of possible logics or probability theories? There are clearly some possible alternatives under which all hell breaks loose and we are seriously deluded about the validity of all our knowledge. In other words, we’ve taken a sledgehammer to our pudding bowl and we can’t even make jelly (jello for North American readers) any more.

At this point, you might be wondering whether a Quinian argument can be levelled against the revision of geometry implied by general relativity as well. The difference is that we do have a good handle on what the space of possible alternative geometries looks like. We can imagine writing down countless alternative theories in the language of differential geometry and figuring out what the world looks like according to them. We can adopt the assumed a priori truth that the world is describable in terms of events in some spacetime geometry and then we find the synthetic fact that general relativity is in accordance with our observations, while most of the other theories are not. We did some significant damage close to the bottom of the bowl, but it turned out that we could fix it relatively easily. There are still some fancy puddings – like the theory of quantum gravity (baked Alaska) – that we haven’t figured out how to make in the repaired bowl, but we can live without them most of the time.

Now, is there a Quinian argument to be made against the many-worlds interpretation? I think so. The idea is that when we apply the scientific method we assume we can do experiments which have actual definite outcomes. These are the basic data from which we build a confirmation or refutation of our theories. Many-worlds says that this assumption is wrong, there are no fundamental definite outcomes – it just appears that way to us because we are all entangled up in the wavefunction of the universe. This is a pretty dramatic assertion and it does seem to be bordering on the “theory eating its own tail” type of assertion. We need to be pretty sure that there isn’t an equally good alternative explanation in which experiments really do have definite outcomes before accepting it. Also, as with the case of revising logic or probability, we don’t have a good understanding of the space of possible theories in which experiments do not have definite outcomes. I can think of one other theory of this type, namely a bizarre interpretation of classical probability theory in which all outcomes that are assigned nonzero probability occur in different universes, but two possible theories do not amount to much in the grand scheme of things. The problem is that on dropping the assumption of definite outcomes, we have not replaced it with an adequate new assumed a priori truth. That the world is describable by vectors in Hilbert space that evolve unitarily seems much too specific to be considered as a potential candidate. Until we do come up with such an assumption, I can’t see why many-worlds is any less radical than proposing a revision of logic or probability theory. Until then, I won’t be making any custard tarts in that particular pudding bowl myself.

25 Comments on “Why is many-worlds winning the foundations debate?”

I am also puzzled by the MWI- it does not seem specific to quantum mechanics (in the sense that, as you say, anytime we have probabilities we can declare all of them to “exist”, in some inherently unverifiable and thus empty way). The facts specific to QM are that the “worlds” are in fact not independent- they interfere, and also they are certainly not “worlds” in the same way we conceive of our own classical reality.

As for removing the preferred status of the observer, or the quantum-classical divide, I don’t see it, in my mind it just reformulates the question (which can be sometimes a useful thing to do).

I have to confess, since I have a low tolerance for philosophising, I skipped directly from your warning message down to the bottom of your post. So I might be repeating something you just said…

I’ve always thought that the many-worlds interpretation is the most literal violation one could come up with for Occam’s dictum that “entities should not be multiplied unnecessarily.”

I had supposed that it was just a convenient mathematical model for whatever is really going on in the physical world, and that it is basically a visualization aid. Does anyone regard this as actually being literally true?

I’ve always thought that the many-worlds interpretation is the most literal violation one could come up with for Occam’s dictum that “entities should not be multiplied unnecessarily.”

Well, the many-worlds fans see it the other way. QM is about state vectors evolving unitarily, so what could be simpler than taking this to be literally true, i.e. dropping all the extraneous baggage of the measurement postulates. The difficulty with Occam is that people can disagree on what simplicity is. In the face of such a difficulty it is a blunt instrument.

It seems to me that MWI appeals to the public because it’s sort of sexy. I’m not sure that the number of people in the field that subscribe to it is all that large, though, is it? It may have some high-profile disciples (such as Deutsch) but what fraction of people in the field really buy into it?

Interesting philosophical exposition, by the way. It’s an aside to your main point, but you are right that falsificationism gets a bit sticky in terms of deciding when falsification occurs (and also in picking between two competing theories). I think that where falsificationism has it over old-fashioned Bacon-style inductivism (that will lead inexorably to Truth) is that it recognises that inductive reasoning won’t get us there (or, at least, that we’ll never be able to say with justifiable certainty that we have Truth); this comes at the cost of abandoning the belief that there’s any real way to connect with synthetic Truth. It doesn’t matter if people use inductive reasoning, or any other sort, so long as they can form hypotheses that lead to falsifiable predictions. Induct away.

I’m not sure that the number of people in the field that subscribe to it is all that large, though, is it? It may have some high-profile disciples (such as Deutsch) but what fraction of people in the field really buy into it?

It depends what you mean by “in the field”. If you mean any physicist who does any work that uses quantum mechanics, i.e. almost all physicists, then you are almost certainly right. This is not too surprising because most of them just don’t spend their time thinking about foundations. On the other hand, if you focus on areas where foundations is considered to be important by a significant fraction of people, e.g. quantum cosmology or quantum information, then you’ll find quite a substantial number of many-worlds followers. Also, anyone who just mutters something vague about decoherence when you mention the measurement problem to them should be counted as a many-worlds supporter because that’s the only interpretation where decoherence makes any sense as an answer to the measurement problem.

To be honest, I don’t have good enough data to back up any of those claims, but it is true that many-worlds usually rates pretty highly in polls when these have been taken at relevant places, e.g. the Tegmark poll or the student poll that was taken after the Interpretations course we held at PI a couple of years ago. In fact, it seems to rank higher than Copenhagen and is often second only to Shut Up and Calculate.

The problem is that the well-known alternatives to the MWI are also costly in the Quinean sense. Copenhagen’s observer dependency breaks the pudding bowl in its own way. And I take Shut Up and Calculate to be a brand of instrumentalism, which scientific realists (which most scientists implicitly are) find costly, even if they are willing to pay that cost to avoid worrying about quantum foundations.

I know very little about quantum foundations, though, so for all I know there may be a more obscure non-costly alternative to the MWI.

I think that MWI certainly fires up the imagination, especially of science fiction enthusiasts.

Having said that, I have a question. I admit that I haven’t read much on the topic, but every popular science MWI model that I’ve seen shows the universe as branching. That doesn’t fit with my mental model of quantum information theory, where different “worlds” can and do exactly cancel each other out. How do MWI adherents deal with this?

For me, that’s the strongest argument against MWI. If it was just branching, that’d be one thing. But because “universes” have to meet and annihilate each other, that’s more baggage than a physical theory needs.

Well, given that I’m not too keen on MWI, I am also not keen on describing the details of how they get out of such dilemmas, but I’ll give you a short answer.

There are actually different versions of MWI, and their adherents would have different answers to that question. The one that seems most reasonable to me is the idea that the branching universe picture is not accurate at all scales. In reality, the ontology of many-worlds is just a global wavefunction evolving unitarily. When there is enough decoherence in the wavefunction to support complex persistent structures according to some decomposition then you say that branching has occurred. It is an emergent phenomenon at the macro-level and it only happens in situations where re-interference of the branches is extremely unlikely.
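The emergent-branching story above can be illustrated with a toy calculation (my own sketch, not drawn from any particular MWI author): a system qubit in superposition stays coherent when the environment carries no record of which branch is which, and the interference terms vanish from its reduced state once the environment keeps orthogonal records.

```python
import numpy as np

# Toy decoherence model: a qubit in an equal superposition of two "branches".
# Whether the branches can still re-interfere depends on how distinguishable
# the records left in a (here, one-qubit) environment are.

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def reduced_system_state(env0, env1):
    """Form (|0>|env0> + |1>|env1>)/sqrt(2) and trace out the environment."""
    joint = (np.kron(ket0, env0) + np.kron(ket1, env1)) / np.sqrt(2)
    rho = np.outer(joint, joint.conj())
    # Partial trace over the 2-dimensional environment.
    rho_sys = np.zeros((2, 2), dtype=complex)
    for i in range(2):
        for j in range(2):
            rho_sys[i, j] = rho[2 * i, 2 * j] + rho[2 * i + 1, 2 * j + 1]
    return rho_sys

# No decoherence: the environment learns nothing (same state in both branches).
coherent = reduced_system_state(ket0, ket0)
# Full decoherence: the environment keeps an orthogonal record of each branch.
decohered = reduced_system_state(ket0, ket1)

print(coherent)   # off-diagonals 0.5: the branches can still interfere
print(decohered)  # off-diagonals 0: re-interference is suppressed
```

In the MWI picture sketched in the comment above, it is the second situation, extrapolated to macroscopic environments, that licenses talk of branches that have effectively split.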

A general trend I’ve noticed is that philosophers of physics and theoretical/mathematical physicists tend to favour no-collapse interpretations. Copenhagen is “ugly” from a mathematical point of view because it essentially involves using our [physical] intuition that a system must have only one final state after observation to mandate that a mathematical object (the state vector) must collapse into one of its components.

It hasn’t ever seemed ‘ugly’ to me, although I perhaps lack the finely-tuned mathematical sensibilities of many. The disquieting thing, for me, is the matter of locating the classical-quantum divide (so I was interested for a time in the Gell-Mann and Hartle decoherence functional approach and unbothered by the loss of unique retrodiction) that mediates the ‘collapse’, although in the end I decided to Shut Up and Calculate.

I don’t think inductionism is an adequate model of science methods, so I don’t see the point in distinguishing between different contexts of facts or derivations of theories.

IIRC Andrew Jaffe goes on to claim that it is a category mistake to compare logical models with science, but I’m not sure that is true. What is apparent, I think, is that “truth” is an inadequate description when “we don’t know” is an acceptable answer in the world of facts and theories.

If the description of Quine’s analogy is referring to infinite recursion, I think it is another problem. Anyone who gets that result in a model should take a long hard look in a mirror and repeat “Now I made a dumb mistake”.

But I need to read Quine to understand exactly what he is discussing. More specifically here, another description of measurements or mechanics doesn’t change the validity of our methods. Otherwise, QM would have invalidated science when it replaced (or rather, complemented) classical mechanics.

When we finally get to the MWI analysis, there is an equivocation in the post between “actual definite outcomes” and “fundamental definite outcomes”. We need the former, but not the latter. In fact, we may never be guaranteed to get those.

So what about choosing interpretations? Isn’t this a model problem like choosing descriptions of science (inductionism, falsification, et cetera)? We don’t have to do it, but it can be comforting and convenient, until such time there are distinguishing tests.

MWI is appealing since it in principle resolves some problems by foisting them off to a contingency, like other multiverse theories. And I guess it has the same problems with falsifiability. But as other multiverse theories it also seems to have elegance on its side. IIRC Tegmark notes that one QM axiom is dropped and another is derived in MWI. Parsimony seems to be comparable, not surprisingly perhaps considering that different interpretations describe the same system.

Finally a side comment and a few nitpicks that don’t detract from the argument in the post.

First, I really liked the observation that connectivity between theories is more fundamental than hierarchy.

Second, I’m not sure that “we do have a good handle of what the space of possible alternative geometries looks like”. I’m a non-expert here, but it seems to me there is a constant complaint that GR solutions far from flat space are hard to find even though there are many energy conditions to try. Isn’t this part of the reason why universal energy and entropy laws can’t be stated?

Third, demanding unitarity doesn’t seem very limiting. Conservation of probability should be equivalent to asking for a consistent logic, I think.

Matt, first, thanks for the excellent post. The measure of a good post, of course, is how much it gets one thinking. In any case, I would have to say that I tend to be somewhat neutral about such things and must confess to only cursory knowledge of MWI, but it has always struck me as being vaguely akin to (perhaps one step beyond) the so-called fundamental assumption of statistical mechanics in which, roughly, all possible outcomes are equally probable in the long run – MWI simply extends this to say that they all actually occur (technically the fundamental assumption is about accessible microstates, but I did qualify it with a “vaguely”). The problem here, of course, is how to prove it. In addition, the fundamental assumption of statistical mechanics has its own problems.
As a note, I have always referred to non-a priori truths (knowledge) as a posteriori which I may have gotten from Eddington or Copi or someone. As a note, Eddington’s epistemology was, what is sometimes called, selective subjectivism and held the Rob Spekkens-like view that quantum systems specified knowledge as distinct from truths (viewed as experimental results). In selective subjectivism, a priori knowledge puts specific limitations on a posteriori knowledge (most scientists, I would think, would likely think it the other way round). MWI seems to me to be precisely this kind of theory, i.e. it is selectively subjectivist since it puts limitations on the amount of future knowledge we may glean from a bit of present or past knowledge (specifically since our existence is limited to the subjective viewpoint of a single “world”).
Of course, this does nothing to solve the problem of precisely what knowledge is a priori. For example, it could be Kantian in the manner discussed by Matt above, or it could be more vague in the sense that Eddington was getting at (e.g. to him, the entire field of general relativity was an a priori truth beyond Matt’s example above about spacetime geometry in which GR is a consequence and not the truth itself). Eddington’s point – and this is where I see a similarity to MWI – was that epistemologically knowledge in quantum systems is limited to probabilities while in relativistic systems it is limited to relations (Eddington saw this as a starting point for an early quantum gravity theory). This is interesting since both epistemologies are essentially irreversible (I don’t want to bore anyone so I’ll simply say this idea is better explained on p. 29 of this).
As an additional note, I have had the idea bouncing around in my head for some time that permutation invariance (see for example van Fraassen) could be used to argue against MWI on the basis that, if represented as such, the branching spacetimes could never interact with one another. Of course, I never followed up on that idea and have since lost my scribbled notes on the subject.
Finally, I will add the semi-historical comment that anyone who thinks that the Copenhagen interpretation is singular and self-consistent should read Mara Beller’s Quantum Dialogue in which she makes a very strong case for the view that it was simply cobbled together out of sometimes inconsistent and competing views whose adherents skillfully used each other to silence their mutual critics.

So what about choosing interpretations? Isn’t this a model problem like choosing descriptions of science (inductionism, falsification, et cetera)? We don’t have to do it, but it can be comforting and convenient, until such time there are distinguishing tests.

Regular readers of this blog will know that I think it DOES matter, i.e. it’s a matter of relevance for the science as well as for the philosophical interpretation, but I’m not going to rehash this issue again in a comment right now.

Second, I’m not sure that “we do have a good handle of what the space of possible alternative geometries looks like”. I’m a non-expert here, but it seems to me there is a constant complaint that GR solutions far from flat space is hard to find even though there are many energy conditions to try.

True, all I meant is that it is pretty easy to imagine formulating a vast array of different theories in which events are the primitive objects with all sorts of different geometries. I’m not talking about different solutions to GR per se, but rather situating GR within a framework of possible theories, all of which share the feature that spacetime is not a fixed, flat background structure. It may not be easy to actually find interesting solutions to any of these theories, but that’s beside the point because we are not talking about the practicalities.

Third, demanding unitarity doesn’t seem very limiting. Conservation of probability should be equivalent of asking for a consistent logic, I think.

Again, here I am talking about placing MWI QM within a framework of possible theories that may be very different from QM, all of which share the feature that experiments don’t have definite outcomes. Even if such theories are based on Hilbert space, the probabilities need not necessarily follow the Born rule, in which case conservation of probability need not imply unitarity. Even if we do have the Born rule, conservation of probability alone does not rule out anti-unitary dynamics – we need considerations to do with composite systems in order to do that. Further, it is possible to imagine theories with irreversible dynamics, in which case we only require complete positivity rather than unitarity.
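The point that conservation of probability alone does not force unitarity can be made concrete with a small numerical sketch (my own toy example; the damping parameter is an arbitrary illustrative choice). The standard amplitude-damping channel is trace-preserving and completely positive, so probabilities always sum to one, yet it is irreversible and hence not unitary:

```python
import numpy as np

# Amplitude-damping channel: trace-preserving (probability-conserving)
# but not unitary. gamma is an arbitrary damping strength for illustration.
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

def apply_channel(rho):
    """Apply the channel via its Kraus operators: sum_k K_k rho K_k^dagger."""
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # the pure state |+><+|
out = apply_channel(rho)

print(np.trace(out))        # 1.0: probability is conserved
print(np.trace(out @ out))  # purity < 1: no unitary map can do this
```

The Kraus operators satisfy K0†K0 + K1†K1 = I, which is all that probability conservation requires; unitarity is a strictly stronger demand.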

However, even these considerations are a lot narrower than what I was imagining. I can’t see how the idea that measurements don’t have definite outcomes should imply that we have a Hilbert space, so the framework should include much more exotic possibilities to which the above considerations do not apply.

MWI, like all interpretations of quantum mechanics, exposes our hidden conceptual setup that places space relative to motion. Consequently, the more we know about the one, the less we know about the other. Doesn’t that sound familiar? The uncertainty principle, maybe? Indeed, the uncertainty principle is an incomplete statement. It is actually an unscientific one that neglects to acknowledge the one most important observation of particle physics: the observation that momentum and location, and thus motion and space, are interchangeable. Thus, if we are at 100% motion we have no space definition, and if we are 100% particle we don’t move (sorry Newton). And thus whatever is moving has no mass and is not matter any more; it becomes something else. Wave? Energy? Motion? This is the source of the wave-particle duality as well, and so the more something is a wave the less it is a particle. Einstein himself got a little confused. It is not time that is relative to motion; it is space and mass which are relative to motion. Time is only a tool: time is the way we express motion in space coordinates.

Ok, I found my notes on permutation invariance and MWI. As a warning, it references some works by philosophers of physics and might stray a bit into that realm at times. It is completely speculative and represents my thinking on the subject from two years ago. At some point I’ll think harder about it. Until then, have a look at my somewhat dusty thoughts which have been posted to my blog. I would, of course, love some feedback if only since no one ever posts to my blog (and yet I continue to write it…).

It’s nice to see actual physicists talking about these questions! Being a philosopher, I’m mainly commenting to point out a few quibbles.

I think you’ve blurred the synthetic/analytic distinction and the a priori/a posteriori (that’s the word you were looking for) distinction. The former is about meanings, and the latter is about how we learn things or could imagine them to be. So the synthetic a priori is supposed to be the stuff that isn’t true just in virtue of meaning, but is still so fundamental that we couldn’t imagine it being any other way.

And actually, Quine’s major point is that the synthetic/analytic distinction doesn’t really make any sense. His view seems much closer to what you describe as a revision of him, than to your original description. He doesn’t think there are some foundational truths – the picture he gives in “Two Dogmas of Empiricism” is the “web of belief”. Some things are more central than others, but anything could in principle be challenged. So some sort of graph with a power-law distribution of vertex degrees would be an appropriate model.

Anyway, I consider myself generally Quinean about metaphysics, and that inclines me towards the MWI. As others have pointed out, Copenhagen is just really weird. Admittedly, I don’t know enough about the Bohm “pilot wave” interpretation to have any idea how it compares. And of course, people have given broadly Quinean or Occamite arguments for just about all of the different interpretations – which is why there are still so many candidates.

Now, is there a Quinian argument to be made against the many-worlds interpretation? I think so. The idea is that when we apply the scientific method we assume we can do experiments which have actual definite outcomes. These are the basic data from which we build a confirmation or refutation our theories. Many-worlds says that this assumption is wrong, there are no fundamental definite outcomes – it just appears that way to us because we are all entangled up in the wavefunction of the universe.

The problem with this argument is that the MWI models exactly the kind of world we do in fact experience, and as far as we know, is perfectly consistent with the evolution of animals such as we are, who are capable of formulating theories such as it. I don’t see how the restriction that physical theories have to support a universe where the development of physics is possible is a whit stronger than the restriction that physical theories have to fit physical observations. At least, not without some proof or demonstration that the mind requires physics above and beyond what is necessary for atoms, stars, and quarks.

The human mind is capable of formulating many theories that are perfectly consistent with experience. Here is one for example: The universe as we know it does not exist – instead there is a giant green goblin who is manipulating my senses to make me believe that it does.

The problem with such a theory is not that it isn’t consistent with experience, but that it is not scientific. It undermines a fundamental assumption that we have to make in order for scientific reasoning to go through, i.e. the assumption that the universe exists and we obtain reliable data about it through observation.

My argument is not that MWI is inconsistent with experience – people have put a lot of effort into explaining how it can be made so and I think that they have developed a pretty good story at this stage. The question is simply whether it undermines an assumption that is basic to the scientific method, and is therefore not a valid scientific hypothesis. Now, admittedly the case of MWI is much closer to the borderline than the green goblin theory, but I still think that having objective data, i.e. definite outcomes of experiments, is a pretty basic assumption behind the scientific method. Quantum theory is obviously consistent with this assumption, since there are other consistent interpretations that assert it, so I don’t see why so many people feel compelled to give up such a basic assumption by adopting MWI.

The problem with such a theory is not that it isn’t consistent with experience, but that it is not scientific. It undermines a fundamental assumption that we have to make in order for scientific reasoning to go through, i.e. the assumption that the universe exists and we obtain reliable data about it through observation.

It seems to me if a giant green goblin exists, then the universe exists. The notion of a hidden substrate on which our perceived universe hangs doesn’t strike me as particularly non-scientific. There is a plausible argument to be made that the quantum world is just such an example of that. In other words, the universe we perceive doesn’t exist, but instead a weird quantum universe of particles that only physicists can detect, which work in such a fashion and with such rules that we perceive the “normal” universe of macro phenomena. Of course, quantum theory does have all sorts of experimental corroboration, even though very little accessible to most people, and is falsifiable, and the giant green goblin is not. On those grounds, I would say the green goblin theory isn’t scientific.

It seems to me a lot of philosophizing around this issue is an attempt to make our knowledge seem more than it is. More certain. More universal. More relevant to whatever is the basis of reality. All science needs is that our observations are reliable enough to generate generally useful theories. Where there are holes, we try to do more science. Sometimes that succeeds, though in modern times it has produced some theories, like quantum mechanics, that put a strain on philosophers.

So far, no one (at least publicly) has discovered any rabbit hole whose other side makes the entire rest of physics seem the contrivance of another intelligence, whether green goblins or Leibnizean gods. Consider, though, that in the various science fiction stories that use that plot device, it does not make science impossible. It merely means there is an apparent physics, and a physics behind that which we haven’t yet discovered. And there might be one behind that.

Surely one of the lessons of science and philosophy is that we can never know the ground of being, the bottom-most substrate, the ur-reality that is not itself produced by something more basic. If that is what you mean by science, and what is required to make science possible, then you’re right: science isn’t possible. We’re not just watching shadows on walls, but don’t even know whether the things casting those shadows are at the bottom, or are themselves shadows. That doesn’t make it impossible to learn some of the rules of the wall we see.

“True, all I meant is that it is pretty easy to imagine formulating a vast array of different theories in which events are the primitive objects with all sorts of different geometries. I’m not talking about different solutions to GR per se, but rather situating GR within a framework of possible theories, all of which share the feature that spacetime is not a fixed, flat background structure. It may not be easy to actually find interesting solutions to any of these theories, but that’s beside the point because we are not talking about the practicalities.”

Are you working on such a theory? If so, since “events are the primitive objects” and have no temporal duration, how do you use events to construct trans-temporal objects?

Since I’ve never been polled, I thought I’d add my voice to those who find the MWI to be nonsense as a solution to the measurement problem. In fact, I have a similar feeling about the decoherence approach too. In looking through the comments here, I don’t find a counterargument to the point that nothing ever actually happens in the MWI, so what could the probabilities be referring to? Nor do I see any real response to the question of how the different possibilities can interfere with each other if they are in different universes.
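The interference worry can be made concrete with a textbook two-branch superposition (a standard illustration, not anything specific to this thread). For a state split equally between branches $|A\rangle$ and $|B\rangle$:

```latex
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|A\rangle + |B\rangle\bigr),
\qquad
P(x) = |\langle x|\psi\rangle|^{2}
     = \tfrac{1}{2}|\langle x|A\rangle|^{2}
     + \tfrac{1}{2}|\langle x|B\rangle|^{2}
     + \operatorname{Re}\,\langle A|x\rangle\langle x|B\rangle .
```

The last term is the interference between the branches, and it is experimentally measurable (the double slit being the classic case). If $|A\rangle$ and $|B\rangle$ were literally confined to causally separate universes, it is hard to see where this cross term could come from.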

I think the attraction of the MWI is something like this: measurement in quantum mechanics is so obscure, and has these pesky associations with consciousness (and therefore with mysticism, etc.), that many scientistic-minded people want to get as far away from these things as possible. The conceptually cleanest way to resolve this (psychological discomfort) seems to be to deny the troublesome phenomenon.

It reminds me of how, in an attempt to be “scientific” (not to say positivistic), certain philosophers and scientists will deny the existence of consciousness itself because it does not (so far) have a place in our physical theories.

Am I the first in this thread to mention that we are not forced to choose between the Copenhagen Interpretation and the MWI? The natural alternatives are “objective collapse” theories. With those we can have genuine ontological measurement events that actualize particular definite outcomes, so that the probabilities in our theory actually refer to something. And we can avoid the frightening apparent dependence on human observers that might have led us to run to the MWI, because the state-vector reduction is induced by some objective mechanism or criterion (e.g. a coherent mass threshold, or something).

I don’t understand why this interpretation is not the default. Maybe to some it seems to flirt with panpsychism? Maybe because it would appear to violate Lorentz invariance (but no more so than non-local EPR correlations)?
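The best-known objective-collapse proposal along these lines is the GRW (Ghirardi–Rimini–Weber) model, where each nucleon spontaneously localizes at a tiny fixed rate, so collapse is negligible for single particles but nearly instantaneous for macroscopic objects. A rough order-of-magnitude sketch, using the standard GRW parameter value (this is illustrative arithmetic, not a commitment of anyone in the thread):

```python
# Order-of-magnitude GRW collapse-time estimate.
# Standard GRW localization rate per nucleon (roughly 1e-16 per second).
LAMBDA = 1e-16  # 1/s

def expected_collapse_time(n_nucleons):
    """Mean time (seconds) before some nucleon in an entangled object
    spontaneously localizes, collapsing the whole superposition."""
    return 1.0 / (LAMBDA * n_nucleons)

# A single nucleon survives in superposition for ~1e16 s (longer than
# the age of the universe), so micro-interference is untouched.
print(expected_collapse_time(1))      # 1e16 seconds

# A dust grain of ~1e18 nucleons collapses in about a hundredth of a
# second, which is why we never see superposed macroscopic objects.
print(expected_collapse_time(1e18))   # 0.01 seconds
```

The amplification from one rate parameter is the whole trick: no observer, no consciousness, just particle number.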
