Archive for June 2006

As some of you know, my alter ego works on quantum information and computation (I’ll leave you to decide which of us is Clark Kent and which is Superman). My foundations personality sometimes feels a twinge of professional jealousy and I’ll tell you why.

In quantum computation we have a set of criteria for evaluating proposed experimental implementations, known as the DiVincenzo criteria. These tell you what is required to implement the circuit model of quantum computation, and include things like the ability to prepare pure input states and the ability to perform a universal gate set. Of course, you might choose to implement an alternative model of computation, such as the measurement-based models, in which case a different set of criteria applies. Nevertheless, talks about proposed implementations often proceed by explaining how each of the criteria is to be met in turn. This makes it very clear what the weak and strong points of the implementation are, since there are usually one or two criteria that present a significant experimental challenge.

In contrast, there is no universally accepted set of criteria that an interpretation of quantum mechanics is supposed to meet. They are usually envisioned as attempts to solve the nefarious “measurement problem”, which is actually a catch-all term for a bunch of related difficulties to which different researchers attach different degrees of significance. The question of exactly what an interpretation is supposed to do also varies according to where one is planning to apply it. Is it supposed to explain the emergence of classical mechanics, help us understand why quantum computation works, give us some clues as to how to construct quantum gravity, or simply stand as a work of philosophical elegance?

It seems to me that the foundations community should have, by now, cracked their heads together and come up with a definitive list of issues on which an interpretation has to make a stand, before we are prepared to accept it as a viable contender. Then, instead of reading lots of lengthy papers and spending a lot of time trying to work out exactly where the wool has been pulled over our eyes, we can simply send each new interpreter a form to fill in and be done with it. Of course, this is bound to be slightly more subjective than the DiVincenzo criteria, but hopefully not by all that much. For what it's worth, here is my attempt at the big list.

The first six criteria would probably be agreed upon by most people who think seriously about foundations.

An interpretation should have a well-defined ontology.

To begin with, you need to tell me which things are supposed to correspond to the stuff that actually exists in reality. This can be some element of the quantum formalism, e.g. the state vector, something you have added to it, e.g. hidden variables, or something much more exotic, e.g. relations between things without any definite state for the things that are related, correlations without correlata etc. This is all fine at this stage, but of course the more exotic possibilities are going to get into trouble with the later criteria.

At this stage, I am even prepared to allow you to say that only detector clicks exist in reality, so long as you are clear about this and are prepared to face the later challenges.

As a side note, some people might want to add that the interpretation should explicitly state whether the quantum state vector is ontological, i.e. corresponds to something in reality, or epistemic, i.e. something more like a probability distribution. I am inclined to believe that if you have a clear ontology then it should also be clear what the answer to this question is without any need for further comment. I am also inclined to believe that this fixation on the role of the state vector is an artifact of taking the Schroedinger picture deadly seriously, and ignoring other formalisms in which it plays a lesser role. For instance, why don’t we ask whether operators or Wigner functions are ontological or epistemic instead?

An interpretation should not conflict with my direct everyday experience.

In everyday life, objects appear to be in one definite place and I have one unique conscious experience. If you have adopted a bizarre ontology, wherein this is not the case at the quantum level, you have to explain why it appears that it is the case to me. This is a particularly relevant question for relationalists, Everettistas and correlationalists of course. It is also not the same thing as…

An interpretation should explain how classical mechanics emerges from quantum theory.

Why do systems exist that appear to have states represented by points in phase space, evolving according to the classical evolution equations?

Note that it is not enough to give some phase space description. It must correspond to the description that we actually use to describe classical systems.

Some people might want to phrase this as “Why don’t we see macroscopic superpositions?”. I’m not quite sure what it would mean to “see” a macroscopic superposition, and I think that this is the more general issue in any case.

Similarly, you may be bothered by the fact that I haven’t mentioned the “collapse of the wavefunction” or the “reduction of the wavevector”. Your solution to that ought to be immediately apparent from combining your ontology with the answer to the present issue.

Some physicists seem to think that the whole question of interpretation can be boiled down to this one point, or that it is identical with the measurement problem. I hope you are convinced that this is not the case by now.

An interpretation should not conflict with any empirically established facts.

For example, I don’t mind if you believe that wavefunction collapse is a real physical process, but your theory should be compatible with all the systems that have been observed in superposition to date.

An interpretation should provide a clear explanation of how it applies to the "no-go" theorems of Bell and Kochen-Specker.

A simple answer would be to explain in what sense your interpretation is nonlocal and contextual. If you claim locality or noncontextuality for your interpretation then you need to give a clear explanation of which other premises of the theorems are violated by your interpretation. They are theorems, so some premise must be violated.
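To make concrete what the Bell theorem rules out, here is a minimal numerical sketch (my own illustration, not from the post, with made-up names like `correlation`): local hidden variable models bound the CHSH combination of correlations by 2, while quantum mechanics on a maximally entangled state reaches 2*sqrt(2).

```python
# CHSH sketch (illustrative): quantum correlations on a maximally entangled
# state exceed the local-hidden-variable bound of 2, so some premise of the
# theorem must fail in any interpretation that reproduces quantum theory.
import numpy as np

def correlation(A, B, psi):
    """E(A, B) = <psi| A (x) B |psi> for +/-1-valued observables A, B."""
    return (psi.conj().T @ np.kron(A, B) @ psi).real.item()

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z

# Maximally entangled two-qubit state (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]).reshape(4, 1) / np.sqrt(2)

# Alice measures Z or X; Bob measures (Z + X)/sqrt(2) or (Z - X)/sqrt(2)
A0, A1 = sz, sx
B0, B1 = (sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)

S = (correlation(A0, B0, psi) + correlation(A0, B1, psi)
     + correlation(A1, B0, psi) - correlation(A1, B1, psi))
print(S)  # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```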

An interpretation should be applicable to multiparticle systems in nonrelativistic quantum theory.

Some interpretations take the idea that the wavefunction is like a wave in real 3d space very seriously (the transactional interpretation comes to mind here). Often such ideas can only be worked out in detail for a single particle. However, the move to wavefunctions on multiparticle configuration space is unavoidable, and it needs to be convincingly accomplished.

The next four criteria are things that I regard as important, but probably some people would not give them such great importance.

An interpretation should provide a clear explanation of the principles it stands upon.

For example, if you claim that your interpretation is minimal in some sense (as many-worlds and modal advocates often do) then you need to make clear what the minimality assumption is and derive the interpretation from it if possible.

If you claim that “quantum theory is about X” then a full derivation of quantum theory from axioms about the properties that X should satisfy would be nice. Examples of X might be nonstandard logics, complementarity, or information.

No factitious sample spaces.

OK this is a bit of a personal bugbear of mine. Some interpretations introduce classical sample spaces (over hidden variable states for instance) or generalizations of the notion of a sample space (as in consistent histories). Quantum theory is then thought of as being a sort of probability theory over these spaces. Often, however, the “quantum states” on these sample spaces are a strict subset of the allowed measures on the sample space, and the question is why?

I allow the explanation to be dynamical, in analogy to statistical mechanics. There we tend to see equilibrium distributions even though many other distributions are possible. The dynamics ensures that “most” distributions tend to equilibrium ones. Of course, this gets into the thorny issues of the foundations of statistical mechanics, but provided you can do at least as good a job as is done there I am OK with it.

I also allow a principle explanation, e.g. some sort of fundamental uncertainty principle. However, unlike the standard uncertainty relations, you should actually be able to derive the set of allowed measures from the principle.

An interpretation should not be ambiguous about whether it is consistent with the scientific method.

Some interpretations seem to undermine the very method that was used to discover quantum theory in the first place. For example, we assumed that experiments really had outcomes and that it was OK to reason about the world using ordinary deductive logic. If you deny any of these things then you need to explain why it was valid to use the scientific method to arrive at the theory in the first place. How do you know that an even more radical revision of these concepts isn’t in order, perhaps one that could never be arrived at by empirical means?

An interpretation should take the great probability debate into account.

Quantum theory involves probabilities and some interpretations take a stand on the fundamental significance of these. Is the interpretation consistent with all the major schools of thought on the foundations of probability (propensities, frequentism and subjectivism), at least as far as these are themselves consistent? If not, you need to be clear on what notion of probability is actually needed and address the main arguments in the great probability debate. Good luck, because you could spend a whole career just doing this.

The final three criteria are not strictly required for me to take your interpretation seriously, but addressing them would score you extra bonus points.

An interpretation should be consistent with relativistic quantum field theory and the standard model.

Obviously, you need to be consistent with the most fundamental theories of physics that we have at the moment. However, the conceptual leap from nonrelativistic to relativistic physics is nontrivial and it has implications for ontology even if we forget about quantum theory. Therefore, it is OK to just focus on the nonrelativistic case when developing an interpretation. QFT might require significant changes to the ontology of your interpretation, and this is something that should be addressed eventually.

An interpretation should suggest experiments that might exhibit departures from quantum theory.

It’s good to have something which can be tested in the lab. Interpretations such as spontaneous collapse theories make predictions that depart from quantum theory and these should be investigated and tested.

However, even if your interpretation is entirely consistent with quantum theory, it might suggest novel ways in which the theory can be modified. We should be constantly on the lookout for such things and test them wherever possible.

An interpretation should address the phenomenology of quantum information theory.

This reflects my personal interests quite a bit, but I think it is a worthwhile thing to mention. Several quantum protocols, such as teleportation, suggest a strong analogy between quantum states (even pure ones) and probability distributions. If your interpretation makes light of this analogy, e.g. the state is treated ontologically, then it would be nice to have an explanation of why the analogy is so effective in deriving new results.

Having just returned from several evenings of Bayesian discussion in Vaxjo, I was inspired to read Facts, Values and Quanta by Marcus Appleby. Whilst not endorsing a completely subjectivist view of probability, the paper is an appropriate remedy for anyone who thinks that the frequentist view is the way to understand probability in physics, and particularly in quantum theory.

In fact, Appleby's paper provides good preparation for tackling a recent paper by Buniy, Hsu and Zee, pointed out by the Quantum Pontiff. The problem they address is how to derive the Born rule within the many-worlds interpretation, or simply from the eigenvalue-eigenstate (EE) link. The EE link says that if you have a system in an eigenstate of some operator, then the system possesses a definite value (the corresponding eigenvalue) for the associated physical quantity with certainty. Note that this is much weaker than the Born rule, since it says nothing about the probabilities for observables of which the system is not in an eigenstate.

An argument dating back to Everett, and discussed further by Graham, by Hartle, and by Farhi, Goldstone and Gutmann, runs as follows. Suppose you have a long sequence of identically prepared systems in a product state:

|psi>|psi>|psi>…|psi>

For the sake of definiteness, suppose these are qubits. Now suppose we are interested in some observable, with an eigenbasis given by |0>,|1>. We can construct a sequence of relative frequency operators, the first few of which are:

F_1 = |1><1|
F_2 = (1/2) ( |1><1| ⊗ I + I ⊗ |1><1| )
F_3 = (1/3) ( |1><1| ⊗ I ⊗ I + I ⊗ |1><1| ⊗ I + I ⊗ I ⊗ |1><1| )

with F_n in general being the average of the projector |1><1| applied to each of the first n qubits.

It is straightforward to show that in the limit of infinite copies, the state |psi>|psi>|psi>…|psi> becomes an eigenstate of F_n with eigenvalue |<psi|1>|^2. Thus, in this limit, the infinite system possesses a definite value for the relative frequency operator, given by the Born probability rule. The argument is also relevant for many-worlds, since one can show that if the |0> vs. |1> measurement is repeated on the state |psi>|psi>|psi>…|psi>, then the total squared norm of the worlds in which non-Born-rule relative frequencies were found tends to zero.
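The concentration can be checked numerically. Here is a small sketch (my own illustration, with an assumed helper name `weight_near_born`): the squared norm of the branches whose relative frequency of |1> outcomes lies close to p = |<psi|1>|^2 tends to 1 as the number of copies grows.

```python
# Frequency-operator concentration sketch (illustrative). After measuring n
# copies of |psi> in the |0>,|1> basis, the branch with k ones has squared
# norm C(n, k) p^k (1-p)^(n-k), where p = |<psi|1>|^2.
from math import comb

def weight_near_born(n, p, tol=0.05):
    """Total squared norm of branches whose frequency k/n is within tol of p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k / n - p) <= tol)

p = 0.7  # an example Born probability |<psi|1>|^2
for n in (10, 100, 1000):
    print(n, weight_near_born(n, p))  # tends to 1 as n grows
```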

Of course, there are many possible objections to this argument (see Caves and Schack for a rebuttal of the Farhi, Goldstone and Gutmann version). One is that there are no infinite sequences available in the real world. For finite but large sequences, one can show that although the squared norm of the worlds with non-Born-rule frequencies is small, there are actually still far more of them than worlds that do have Born-rule frequencies. Therefore, since we have no a priori reason to assign a small probability to worlds with small amplitudes (and we cannot simply assume one, because that is precisely what we are trying to derive), we should expect to see non-Born-rule frequencies.

Buniy, Hsu and Zee point out that this problem can be avoided if we assume that the state space is fundamentally discrete, i.e. if the distance between |psi> and |phi> is smaller than some small epsilon, then they are actually the same physical state. They provide a way of discretizing the Hilbert space such that the small-amplitude worlds disappear for some large but finite number of copies of the state. They also argue that this discreteness of the state space might be derived from some future theory of quantum gravity.

I have to say that I do not buy their argument at all. For one thing, I hope that the conceptual problems of quantum theory have good answers independently of anything to do with quantum gravity. In any case, the question of whether the successful theory will really entail a discrete state space is still open to doubt. More seriously, it should be realized that the problem they are trying to solve is not unique to quantum mechanics. The same issue exists if one tries to give a frequentist account of classical probability based on large but finite ensembles. In that case, their solution would amount to the Procrustean method of simply throwing away probabilities that are smaller than some epsilon. Hopefully, this already seems like a silly thing to do, but if you still have doubts then you can find persuasive arguments against this approach in the Appleby paper.

For me, the bottom line is that the problem being addressed has nothing to do with quantum theory, but is based on an erroneous frequentist notion of probability. Better to throw out frequentism and use something more sensible, i.e. a Bayesian approach. Even then, the notion of probability in many-worlds remains problematic, but I think that Wallace has given the closest we are likely to get to a derivation of the Born rule for many-worlds along Bayesian lines.

I returned this weekend from the meeting on Foundations of Probability and Physics at the University of Vaxjo in Sweden. There were many interesting talks, so I'll just mention a few of them that I found particularly inspiring.

– Giacomo Mauro d'Ariano explained his axiomatization of quantum theory, inspired by observations from quantum state and process tomography. One of the nice features of this is that he gives an operational definition of the adjoint. Why the observables of QM should form an algebra from an operational point of view has been a topic of recent debate amongst foundational people here at Perimeter, so this could be a piece of the puzzle.

– Rüdiger Schack explained what it might mean for quantum randomness to be "truly random" from a Bayesian point of view, using the concept of "inside information" that he has developed with Carlton Caves.

– Philip Goyal gave another axiomatization of quantum theory. I'm not sure whether the framework he uses is that well-motivated (especially the sneaky way that complex numbers are introduced). On the other hand, one of his axioms has the flavor of an "epistemic constraint", which gels nicely with ideas that have been expressed earlier by Chris Fuchs and Rob Spekkens.

– Joseph Altepeter gave another excellent talk about the state-of-the-art Bell inequality experiments currently going on in Paul Kwiat's group.

As is traditional with physics blogs, it is time to indulge in a spot of shameless self-promotion of my own work. I have just posted a paper on quantum dynamics as an analog of conditional probability on the arXiv. This is about a generalization of the isomorphism between bipartite quantum states and completely positive maps that is often used in quantum information. The main point is that it provides a good quantum analog of conditional probability, so it may be of interest to foundations-types who like to think of quantum theory as a generalization of classical probability theory.
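For readers unfamiliar with the isomorphism in question, here is a sketch of the standard Choi-Jamiolkowski form (my own illustration; the generalization in the paper differs in detail): applying a completely positive map to one half of a maximally entangled state yields a bipartite state that encodes the map.

```python
# Choi-Jamiolkowski sketch (illustrative): apply a CP map, given by Kraus
# operators, to the first half of |Omega> = (1/sqrt(d)) sum_i |i>|i>.
import numpy as np

def choi_state(kraus_ops, d):
    """Bipartite state (E (x) id)(|Omega><Omega|) for the CP map E."""
    omega = np.zeros((d * d, 1), dtype=complex)
    for i in range(d):
        omega[i * d + i, 0] = 1 / np.sqrt(d)
    rho = omega @ omega.conj().T
    out = np.zeros_like(rho)
    for K in kraus_ops:
        KI = np.kron(K, np.eye(d))  # act with K on the first subsystem only
        out += KI @ rho @ KI.conj().T
    return out

# The identity channel returns the maximally entangled state itself
rho_id = choi_state([np.eye(2)], 2)
print(np.round(rho_id.real, 3))
```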

The paper was completed in somewhat of a hurry, to get it out in time for the conference on Foundations of Probability and Physics in Vaxjo taking place this week, where I am due to give a talk on the subject. No doubt it still contains a few typos, so you can expect it to get updated in the next couple of weeks. Any comments would be appreciated.