The Role of Decoherence in Quantum Mechanics

Interference phenomena are a well-known and crucial aspect of quantum
mechanics, famously exemplified by the two-slit experiment. There are
situations, however, in which interference effects are artificially or
spontaneously suppressed. The theory of decoherence is
precisely the study of (spontaneous) interactions between a system and
its environment that lead to such suppression of interference. We
shall make more precise what we mean by this in
Section 1,
which discusses the concept of suppression of interference and
gives a simplified survey of the theory, emphasising features that
will be relevant to the following discussion. In fact, the term
decoherence refers to two largely overlapping areas of research. The
characteristic feature of the first (often called
‘dynamical’ or ‘environmental’ decoherence) is
the study of concrete models of (spontaneous) interactions between a
system and its environment that lead to suppression of interference
effects. That of the second (the theory of ‘decoherent
histories’ or ‘consistent histories’) is an abstract
(and in fact more general) formalism that captures the essential
features of the phenomenon of decoherence. The two are obviously
closely related, and will both be reviewed in turn
in Section 1.

Decoherence is relevant (or is claimed to be relevant) to a variety of
questions ranging from the measurement problem to the arrow of time,
and in particular to the question of whether and how the
‘classical world’ may emerge from quantum mechanics. This
entry mainly deals with the role of decoherence in relation to the
main problems and approaches in the foundations of quantum
mechanics. Specifically,
Section 2
analyses the claim that decoherence solves the
measurement problem. It also
discusses the exacerbation of the problem through the inclusion of
environmental interactions, the idea of emergence of classicality, and
the motivation for discussing decoherence together with approaches to
the foundations of quantum mechanics.
Section 3
then reviews the relation of decoherence to some of the main
foundational approaches. Finally, in
Section 4
we mention suggested applications that would push the role of
decoherence even further.

Suppression of interference has of course featured in many papers
since the beginning of quantum mechanics, such as Mott's (1929)
analysis of alpha-particle tracks. The modern foundation of
decoherence as a subject in its own right was laid by H. D. Zeh in the
early 1970s (Zeh 1970; 1973). Equally influential were the papers by
W. Zurek from the early 1980s (Zurek 1981; 1982). Some of these
earlier examples of decoherence (e.g., suppression of interference
between left-handed and right-handed states of a molecule) are
mathematically more accessible than more recent ones. A concise and
readable introduction to the theory is provided by Zurek
in Physics Today (1991). (This article was followed by
publication of several letters with Zurek's replies (1993),
which highlight controversial issues.) More recent surveys are the
ones by Zeh (1995), which devotes much space to the interpretation of
decoherence, Zurek (2003), and the books on decoherence by Giulini
et al. (1996) and Schlosshauer
(2007).[1]

The two-slit experiment is a paradigm example of an
interference experiment. One repeatedly sends electrons or
other particles through a screen with two narrow slits, the electrons
impinge upon a second screen, and we ask for the probability
distribution of detections over the surface of the screen. In order to
calculate this, one cannot just take the probabilities of passage
through the slits, multiply with the probabilities of detection at the
screen conditional on passage through either slit, and sum over the
contributions of the two
slits.[2]
There is an additional so-called interference term in the correct
expression for the probability, and this term depends on both
wave components that pass through one or the other slit.
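In symbols: if \(\psi_1\) and \(\psi_2\) are the wave components
passing through the upper and lower slit, the probability density for
detection at a point \(x\) on the screen is

\[
|\psi_1(x) + \psi_2(x)|^2 \;=\; |\psi_1(x)|^2 + |\psi_2(x)|^2
+ 2\,\mathrm{Re}\bigl(\psi_1^*(x)\,\psi_2(x)\bigr) ,
\]

where the last term is the interference term, and the classical
formula would retain only the first two terms.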

There are, however, situations in which this interference term (for
detections at the screen) is not observed, i.e. in which the classical
probability formula applies. This happens for instance when we perform
a detection at the slits, whether or not we believe that measurements
are related to a ‘true’ collapse of the wave function
(i.e. that only one of the components survives the
measurement and proceeds to hit the screen). The disappearance of the
interference term, however, can also happen spontaneously, when no
collapse (true or otherwise) is presumed to happen: namely, when some
other systems (say, sufficiently many stray cosmic particles
scattering off the electron) suitably interact with the wave between
the slits and the screen. In this case, the reason the
interference term is not observed is that the electron has become
entangled with the stray
particles.[3]
The phase relation between the two components of the wave function,
which is responsible for interference, is well-defined only at the
level of the larger system composed of electron and stray particles,
and can produce interference only in a suitable experiment including
the larger system. Probabilities for results of measurements performed
only on the electron are calculated as if the wave function
had collapsed to one or the other of its two components, but in fact
the phase relations have merely been distributed over a larger
system.[4] It is
this phenomenon of suppression of interference through suitable
interaction with the environment that we call ‘dynamical’
or ‘environmental’ decoherence.
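A schematic calculation (standard in the decoherence literature)
makes this precise. If the electron state \(c_1\psi_1 + c_2\psi_2\)
becomes entangled with environmental states \(E_1\) and \(E_2\), the
total state is \(c_1\,\psi_1\otimes E_1 + c_2\,\psi_2\otimes E_2\),
and tracing out the environment leaves the electron in the reduced
state

\[
\rho \;=\; |c_1|^2\, |\psi_1\rangle\langle\psi_1|
+ |c_2|^2\, |\psi_2\rangle\langle\psi_2|
+ c_1 c_2^*\, \langle E_2|E_1\rangle\, |\psi_1\rangle\langle\psi_2|
+ c_2 c_1^*\, \langle E_1|E_2\rangle\, |\psi_2\rangle\langle\psi_1| .
\]

As the environmental states become (and remain) nearly orthogonal,
\(\langle E_2|E_1\rangle \approx 0\), and the interference terms are
suppressed for any measurement on the electron alone, even though the
total state remains a superposition.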

The study of ‘dynamical’ decoherence consists to a large
extent in the exploration of concrete spontaneous interactions that
lead to suppression of interference. Several features of interest
arise in models of such interactions (although by no means are all
such features common to all models).

One feature of these environmental interactions is that they suppress
interference between states from some preferred set, be it a discrete
set of states (e.g. left- and right-handed states in models of chiral
molecules, or the upper and lower component of the wave function in
our simple example of the two-slit experiment), or some continuous set
(e.g. the coherent states of a harmonic oscillator). The intuitive
picture is one in which the environment monitors the system of
interest by continuously ‘measuring’ some quantity
characterised by the set of preferred states (‘eigenstates of
the decohering variable’). Formally, this is reflected in the
(at least approximate) diagonalisation of the reduced state of the
system of interest in the basis of privileged states (whether discrete
or continuous).

These preferred states can be characterised in terms of their
robustness or stability with respect to the interaction with the
environment. Roughly speaking, the system gets entangled with the
environment, but the states between which interference is suppressed
are the ones that would themselves get least entangled with
the environment under further interaction. The robustness of the
preferred states is related to the fact that information about them is
stored in a redundant way in the environment (say, because a
Schrödinger cat has interacted with so many stray particles:
photons, air molecules, dust). This information can later be acquired
by an observer without further disturbing the system (we
observe—however that may be interpreted—whether the cat is
alive or dead by intercepting on our retina a small fraction of the
light that has interacted with the cat).

In this connection, one also says that decoherence induces
‘effective superselection rules’. The concept of a
(strict) superselection rule means that there are some
observables—called classical in technical terminology—that
commute with all observables (for a review, see Wightman
(1995)). Intuitively, these observables are infinitely robust, since
no possible interaction can disturb them (at least as long as the
interaction Hamiltonian is considered to be an observable). By
an effective superselection rule one means, analogously, that
certain observables (e.g. chirality) will not be disturbed by the
interactions that actually take
place.[5]

Interaction potentials are functions of position, so the preferred
states will tend to be related to position. In the case of the chiral
molecule, the left- and right-handed states are indeed characterised
by different spatial configurations of the atoms in the molecule. In
the case of the harmonic oscillator, one should think of the
environment coupling to (‘measuring’) approximate
eigenstates of position, or rather approximate joint eigenstates of
position and momentum (since information about the time of flight is
also recorded in the environment), thus leading to coherent states
being preferred. (Rough intuitions should suffice here; see also the
entries on
quantum mechanics
and the section on the measurement problem in
the entry on
philosophical issues in quantum theory.)

The resulting localisation can be on a very short length scale,
i.e. the characteristic length above which coherence is dispersed
(‘coherence length’) can be very short. A speck of dust of
radius \(a = 10^{-5}\) cm floating in the air will have
interference suppressed between (position) components with a width of
\(10^{-13}\) cm. Even more strikingly, the time scales for this
process are minute. This coherence length is reached after a
microsecond of exposure to air, and suppression of interference on a
length scale of \(10^{-12}\) cm is achieved already after a
nanosecond.[6]
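The scaling behind such numbers can be illustrated as follows (a
simplified form of the scattering models of Joos and Zeh (1985),
neglecting the system's own dynamics): the position-representation
matrix elements of the reduced state evolve as

\[
\rho(x, x', t) \;=\; \rho(x, x', 0)\, e^{-\Lambda t\, (x - x')^2} ,
\]

where \(\Lambda\) is a localisation rate determined by the scattering
environment. Coherence between components separated by \(x - x'\)
thus decays exponentially, and the coherence length at time \(t\)
shrinks as \((\Lambda t)^{-1/2}\).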

One can thus argue that generically the states privileged by decoherence
at the level of components of the quantum state are localised in position or
both position and momentum, and therefore kinematically classical. (One
should be wary of overgeneralisations, as already pointed out, but this is
certainly a feature of many concrete examples that have been investigated.)

What about classical dynamical behaviour? Interference is a dynamical
process that is distinctively quantum, so, intuitively, lack of
interference might be thought of as classical-like. To make the
intuition more precise, think of the two components of the wave going
through the slits. If there is an interference term in the probability
for detection at the screen, it must be the case that both components
are indeed contributing to the particle manifesting itself on the
screen. But if the interference term is suppressed, one can at least
formally imagine that each detection at the screen is a manifestation
of only one of the two components of the wave function, either the one
that went through the upper slit, or the one that went through the
lower slit. Thus, there is a sense in which one can recover at least
one dynamical aspect of a classical description, a trajectory of
sorts: from the source to either slit (with a certain probability),
and from the slit to the screen (also with a certain
probability). That is, one recovers a ‘classical
trajectory’ at least in the sense used in classical stochastic
processes.

In the case of continuous models of decoherence based on the analogy
of approximate joint measurements of position and momentum, one can do
even better. In this case, the trajectories at the level of the
components (the trajectories of the preferred states) will approximate
surprisingly well the corresponding classical (Newtonian)
trajectories. Intuitively, one can explain this by noting that if the
preferred states (which are wave packets that are narrow in position
and remain so because they are also narrow in momentum) are the states
that tend to get least entangled with the environment, they will tend
to follow the Schrödinger equation more or less undisturbed. But
in fact, narrow wave packets follow approximately Newtonian
trajectories, at least if the external potentials in which they move
are uniform enough along the width of the packets (results of this
kind are known as ‘Ehrenfest theorems’). Thus, the
resulting ‘histories’ will be close to Newtonian ones (on
the relevant
scales).[7]
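The relevant Ehrenfest-type statement can be put in one line: for
expectation values one has exactly

\[
\frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m} , \qquad
\frac{d\langle p\rangle}{dt} = -\,\bigl\langle V'(x)\bigr\rangle
\;\approx\; -\,V'\bigl(\langle x\rangle\bigr) ,
\]

where the final approximation is good precisely when the packet is
narrow compared with the scale of variation of the potential \(V\),
so that the centre of the packet follows an approximately Newtonian
trajectory.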

The most intuitive physical example of this is given by the observed
trajectories of alpha particles in a bubble chamber, which are indeed
extremely close to Newtonian ones, except for additional tiny
‘kinks’. As a matter of fact, one should expect slight
deviations from Newtonian behaviour. These are due both to the
tendency of the individual components to spread and to the
detection-like nature of the interaction with the environment, which
further enhances the collective spreading of the components (a
narrowing in position corresponds to a widening in momentum). These
deviations appear as noise, i.e. particles being kicked slightly off
course.[8]
Depending on the type of system and the details of the interaction,
the noise component might actually dominate the motion, and one
obtains (classical) Brownian-motion-type behaviour.

Other examples include trajectories of a harmonic oscillator in
equilibrium with a thermal bath, and trajectories of particles in a
gas (without which the classical derivation of thermodynamics from
statistical mechanics would make no sense; see below
Section 4).

None of these features are claimed to obtain in all cases of
interaction with some environment. It is a matter of detailed physical
investigation to assess which systems exhibit which features, and how
general the lessons are that we might learn from studying specific
models. In particular, one should beware of common overgeneralisations.
For instance, decoherence does not affect only and all
‘macroscopic systems’. True, middle-sized objects, say, on
the Earth's surface will be very effectively decohered by the
air in the atmosphere, and this is an excellent example of decoherence
at work. On the other hand, there are also very good examples of
decoherence-like interactions affecting microscopic systems, such as
in the interaction of alpha particles with the gas in a bubble
chamber. And further, there are arguably macroscopic systems for
which interference effects are not suppressed. For instance, it has
been shown to be possible to sufficiently shield SQUIDs (a type of
superconducting device) from decoherence for the purpose of observing
superpositions of different macroscopic currents—contrary to
what one had expected (see e.g. Leggett 1984, and esp. 2002, Section
5.4). Anglin, Paz and Zurek (1997) examine some less well-behaved
models of decoherence and provide a useful corrective as to the limits
of decoherence.

As we have just discussed, when interference is suppressed, e.g. in a
two-slit experiment, we can also speak (at least formally) about the
‘trajectory’ followed by an individual electron. In
particular, we can assign probabilities to the alternative
trajectories, so that probabilities for detection at the screen can be
calculated by summing over intermediate events. The decoherent
histories formalism (originating with Griffiths 1984; Omnès
1988, 1989; and Gell-Mann and Hartle 1990) takes this as the defining
feature of decoherence.

In a nutshell, the formalism is as
follows.[9]
Take orthogonal families of projections with

(1)

\[
\sum_{\alpha_1} P_{\alpha_1} = 1, \;\ldots\;,
\sum_{\alpha_n} P_{\alpha_n} = 1
\]

Given times \(t_1, \ldots, t_n\)
one defines histories as time-ordered sequences of projections
at the given times, choosing one projection from each family,
respectively. Such histories form a so-called alternative and
exhaustive set of histories.

Take a state \(\rho(t_0)\). We wish to define probabilities for the set of
histories. If one takes the usual probability formula based on
repeated application of the Born rule, one obtains

(2)

\[
\mathrm{Tr}\bigl(P_{\alpha_n} U_{t_n t_{n-1}} \cdots
P_{\alpha_1} U_{t_1 t_0}\, \rho(t_0)\,
U^*_{t_1 t_0} P_{\alpha_1} \cdots\, U^*_{t_n t_{n-1}} P_{\alpha_n}\bigr)
\]

(where \(U_{ts}\) represents the unitary evolution
operator from time \(s\) to time \(t\), and its
adjoint \(U^*_{ts}\) the inverse
evolution).

We shall take (2) as defining ‘candidate
probabilities’. In general these probabilities exhibit
interference, in the sense that if one sums over intermediate events
(if one ‘coarse-grains’ the histories), one does not
obtain probabilities of the same form (2). But we can impose, as
a consistency or (weak) decoherence condition, precisely
that interference terms should vanish for any pair of distinct
histories. It is easy to see that this condition takes the form

(3)

\[
\mathrm{Re}\,\mathrm{Tr}\bigl(P_{\alpha'_n} U_{t_n t_{n-1}} \cdots
P_{\alpha'_1} U_{t_1 t_0}\, \rho(t_0)\,
U^*_{t_1 t_0} P_{\alpha_1} \cdots\, U^*_{t_n t_{n-1}} P_{\alpha_n}\bigr)
= 0
\]

for any pair of distinct histories. If this is satisfied, we can view
(2) as defining the distribution functions for a stochastic process
with the histories as trajectories. (There are some differences
between the various authors, but we shall gloss them over.)

Decoherence in the sense of this abstract formalism is thus defined
simply by the condition that (quantum) probabilities for wave
components at a later time may be calculated from (quantum)
probabilities for wave components at an earlier time and (quantum)
conditional probabilities according to the standard classical
formula, i.e. as if the wave had collapsed.
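As an illustration, here is a minimal numerical sketch (in Python
with NumPy; the two-qubit model and all names are ours, chosen purely
for illustration). It computes the decoherence functional for
two-time histories of a qubit, whose diagonal entries are the
candidate probabilities (2), and checks the decoherence condition
(3), first for an isolated qubit and then with an environment qubit
that records the first projection:

    import numpy as np

    # Orthogonal family of projections onto the computational basis
    P = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]

    # Hadamard as the unitary evolution between the two times
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    def dec_functional(rho, U, P1, P2):
        """D[(a,b),(a',b')] = Tr(P2[b] U P1[a] rho P1[a'] U^dag P2[b'])."""
        D = np.zeros((2, 2, 2, 2), dtype=complex)
        for a in range(2):
            for b in range(2):
                C = P2[b] @ U @ P1[a]      # 'class operator' of history (a, b)
                for a2 in range(2):
                    for b2 in range(2):
                        C2 = P2[b2] @ U @ P1[a2]
                        D[a, b, a2, b2] = np.trace(C @ rho @ C2.conj().T)
        return D

    def max_interference(D):
        """Largest |Re D| over pairs of distinct histories, cf. condition (3)."""
        return max(abs(D[a, b, a2, b2].real)
                   for a in range(2) for b in range(2)
                   for a2 in range(2) for b2 in range(2)
                   if (a, b) != (a2, b2))

    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    ket0 = np.array([1, 0], dtype=complex)

    # Isolated qubit: the histories interfere, condition (3) fails
    rho = np.outer(plus, plus.conj())
    print(max_interference(dec_functional(rho, H, P, P)))   # 0.25

    # Environment qubit records the first projection via a CNOT
    I2 = np.eye(2, dtype=complex)
    CNOT = np.diag([1, 1, 0, 0]).astype(complex)
    CNOT[2, 3] = CNOT[3, 2] = 1.0
    Pbig = [np.kron(p, I2) for p in P]
    Ubig = np.kron(H, I2) @ CNOT       # record, then evolve the system
    Psi = np.kron(plus, ket0)
    rho_tot = np.outer(Psi, Psi.conj())
    print(max_interference(dec_functional(rho_tot, Ubig, Pbig, Pbig)))  # ~0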

Models of dynamical decoherence fall under the scope of decoherence
thus defined, but the abstract definition is much more general. A
stronger form of the decoherence condition, namely the vanishing of
both the real and imaginary part of the trace expression in (3) (the
‘decoherence functional’), can be used to prove theorems
on the existence of (later) ‘permanent records’ of
(earlier) events in a history, which is a generalisation of the idea
of ‘environmental monitoring’. For instance, if the state
\(\rho\) is a pure state \(|\psi\rangle\langle\psi|\), this strong decoherence
condition is equivalent, for all \(n\), to the orthogonality of the
vectors

(4)

\[
P_{\alpha_n} U_{t_n t_{n-1}} \cdots P_{\alpha_1} U_{t_1 t_0}\,
|\psi\rangle
\]

and this in turn is equivalent to the existence of a set of orthogonal
projections \(R^{t_i}_{\alpha_1 \ldots \alpha_i}\)
(for any \(t_i \le t_n\))
that extend consistently the given set of histories and are perfectly
correlated with the histories of the original set (Gell-Mann and
Hartle 1990). Note, however, that these ‘generalised
records’ need not be stored in separate degrees of freedom, such
as an environment or measuring
apparatus.[10]

Various authors have taken the theory of decoherent histories as
providing an interpretation of quantum mechanics. For instance,
Gell-Mann and Hartle sometimes talk of decoherent histories as a
neo-Everettian approach, while Omnès appears to think of
histories along neo-Copenhagen lines (perhaps as an experimental
context creating a ‘quantum phenomenon’ that can stretch
back into the
past).[11]
Griffiths (2002) has probably developed
the most detailed of these interpretational approaches (trying to do
justice to various earlier criticisms, e.g. by Dowker and Kent
(1995, 1996)).[12]

In itself, however, the formalism is interpretationally neutral and
has the particular merit of bringing out two crucial conceptual
points: that wave components can be reidentified over time, and that
if we do so, we can formally identify ‘trajectories’ for
the system. As such, it is particularly useful as a tool for
describing decoherence in connection with attempts to solve the
problem of the classical regime in the context of various different
interpretational approaches to quantum mechanics. In particular, it
has become a standard tool in discussions of Everett interpretations,
where ‘worlds’ can be formally described as histories in a
consistent family (see, e.g., Saunders 1993).

The fact that interference is typically very well suppressed between
localised states of macroscopic objects suggests that it is relevant
to why macroscopic objects in fact appear to us to be in localised
states. A stronger claim is that decoherence is not only relevant to
this question but by itself already provides the complete answer. In
the special case of measuring apparatuses, it would explain why we never
observe an apparatus pointing, say, to two different results, i.e.
decoherence would provide a solution to the
measurement problem of quantum mechanics.
As pointed out by many authors, however (e.g. Adler 2003;
Zeh 1995, pp. 14–15), this claim is not tenable.

The measurement problem, in a nutshell, runs as follows. Quantum
mechanical systems are described by wave-like mathematical objects
(vectors) of which sums (superpositions) can be formed (see the entry
on
quantum mechanics).
Time evolution (the Schrödinger equation) preserves such
sums. Thus, if a quantum mechanical system (say, an electron) is
described by a superposition of two given states, say, spin in
x-direction equal +1/2 and spin in x-direction equal
-1/2, and we let it interact with a measuring apparatus that couples
to these states, the final quantum state of the composite will be a
sum of two components, one in which the apparatus has coupled to (has
registered) x-spin = +1/2, and one in which the apparatus has
coupled to (has registered) x-spin = -1/2. The problem is
that, while we may accept the idea of microscopic systems being
described by such sums, the meaning of such a sum for the (composite
of electron and) apparatus is not immediately obvious.
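Schematically, with \(|\pm\rangle\) the two \(x\)-spin states,
\(|A_0\rangle\) the ready state of the apparatus and \(|A_\pm\rangle\)
the states in which it has registered the two spin values:

\[
\bigl(c_+ |{+}\rangle + c_- |{-}\rangle\bigr) \otimes |A_0\rangle
\;\longrightarrow\;
c_+ |{+}\rangle \otimes |A_+\rangle
\;+\; c_- |{-}\rangle \otimes |A_-\rangle .
\]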

Now, what happens if we include decoherence in the description?
Decoherence tells us, among other things, that plenty of interactions
are taking place all the time in which differently localised states of
macroscopic systems couple to different states of their
environment. In particular, the differently localised states of the
macroscopic system could be the states of the pointer of the apparatus
registering the different x-spin values of the electron. By
the same argument as above, the composite of electron, apparatus and
environment will be a sum of (i) a state corresponding to the
environment coupling to the apparatus coupling in turn to the value
+1/2 for the spin, and of (ii) a state corresponding to the
environment coupling to the apparatus coupling in turn to the value
-1/2 for the spin. Again, the meaning of such a sum for the composite
system is not obvious.
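In the same schematic notation, with \(|E_0\rangle\) the initial
state of the environment:

\[
\bigl(c_+ |{+}\rangle \otimes |A_+\rangle
+ c_- |{-}\rangle \otimes |A_-\rangle\bigr) \otimes |E_0\rangle
\;\longrightarrow\;
c_+ |{+}\rangle \otimes |A_+\rangle \otimes |E_+\rangle
\;+\; c_- |{-}\rangle \otimes |A_-\rangle \otimes |E_-\rangle ,
\]

again a sum of two components, now at the level of the even larger
composite.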

We are left with the following choice whether or not we
include decoherence: either the composite system is not described by
such a sum, because the Schrödinger equation actually breaks down
and needs to be modified, or it is described by such a sum, but then
we need to understand what that means, and this requires giving an
appropriate interpretation of quantum mechanics. Thus, decoherence as
such does not provide a solution to the measurement problem, at least
not unless it is combined with an appropriate interpretation of the
theory (whether this be one that attempts to solve the
measurement problem, such as Bohm, Everett or GRW; or one that
attempts to dissolve it, such as various versions of the
Copenhagen interpretation). Some of the main workers in the field such
as Zeh (2000) and (perhaps) Zurek (1998) suggest that decoherence is
most naturally understood in terms of Everett-like interpretations
(see below
Section 3.3,
and the entries on
Everett's relative-state interpretation
and on the
many-worlds interpretation).

Unfortunately, naive claims of the kind that decoherence gives a
complete answer to the measurement problem are still to some extent
part of the ‘folklore’ of decoherence, and deservedly attract
the wrath of physicists (e.g. Pearle 1997) and philosophers (e.g.
Bub 1997, Chap. 8) alike. (To be fair, this ‘folk’
position has at least the merit of attempting to subject measurement
interactions to further physical analysis, without assuming that
measurements are a fundamental building block of the theory.)

Decoherence is clearly neither a dynamical evolution contradicting the
Schrödinger equation, nor a new interpretation of the theory. As
we shall discuss, however, it both reveals important dynamical
effects within the Schrödinger evolution, and may
be suggestive of possible interpretations of the theory.

As such it has much to offer to the philosophy of quantum
mechanics. At first, however, it seems that discussion of
environmental interactions should actually exacerbate the existing problems. Intuitively,
if the environment is carrying out, without our intervention, lots of
approximate position measurements, then the measurement problem ought
to apply more widely, also to these spontaneously occurring
measurements.

Indeed, while it is well-known that localised states of macroscopic
objects spread very slowly with time under the free Schrödinger
evolution (i.e., if there are no interactions), the situation turns
out to be different if they are in interaction with the
environment. Although the different components that couple to the
environment will be individually incredibly localised, collectively
they can have a spread that is many orders of magnitude larger. That
is, the state of the object and the environment could be a
superposition of zillions of very well localised terms, each with
slightly different positions, and that are collectively spread over
a macroscopic distance, even in the case of everyday
objects.[13]

Given that everyday macroscopic objects are particularly subject to
decoherence interactions, this raises the question of whether quantum
mechanics can account for the appearance of the everyday world even
apart from the measurement problem in the strict sense. To put it
crudely: if everything is in interaction with everything else,
everything is generically entangled with everything else, and that is
a worse problem than measuring apparatuses being entangled with the
measured systems. And indeed, discussing the measurement problem
without taking decoherence (fully) into account may not be enough, as
we shall illustrate by the case of some versions of the modal
interpretation in
Section 3.4.

What suggests that decoherence may be relevant to the issue of the
classical appearance of the everyday world is that at the level of
components of the wave function the quantum description of
decoherence phenomena can display tantalisingly classical aspects. The
question is then whether, if viewed in the context of any of the main
foundational approaches to quantum mechanics, these classical aspects
can be taken to explain corresponding classical aspects of the
phenomena. The answer, perhaps unsurprisingly, turns out to depend on
the chosen approach, and in the next section we shall discuss in turn
the relation between decoherence and several of the main
approaches to the foundations of quantum mechanics.

Even more generally, one can ask whether the results of decoherence
could thus be used to explain the emergence of the entire
classicality of the everyday world, i.e. to explain both
kinematical features such as macroscopic localisation and dynamical
features such as approximately Newtonian or Brownian trajectories in all cases where such descriptions happen to be phenomenologically
adequate. As we have mentioned already, there are cases in which a
classical description is not a good description of a phenomenon, even
if the phenomenon involves macroscopic systems. There are also cases,
notably quantum measurements, in which the classical aspects of the
everyday world are only kinematical (definiteness of pointer
readings), while the dynamics is highly non-classical (indeterministic
response of the apparatus). In a sense, if we follow Bohr in requiring
the world of classical concepts in order to describe in the first
place ‘quantum phenomena’ (see the entry on the
Copenhagen interpretation), then,
if decoherence gives us indeed the everyday classical world, the
quantum phenomena themselves would become a
consequence of decoherence (Zeh 1995, p. 33; see also Bacciagaluppi
2002, Section 6.2). The question of explaining the classicality of the
everyday world becomes the question of whether one can derive
from within quantum mechanics the conditions necessary to discover
and practise quantum mechanics itself, and thus, in Shimony's
(1989) words, close the epistemological circle.

In this generality the question is clearly too hard to answer,
depending as it does on how far the physical programme of
decoherence (Zeh 1995, p. 9) can be successfully developed. We
shall thus postpone the (partly speculative) discussion of how far this
programme might go until
Section 4.

There is a wide range of approaches to the foundations of quantum
mechanics. The term ‘approach’ here is more appropriate
than the term ‘interpretation’, because several of these
approaches are in fact modifications of the theory, or at
least introduce some prominent new theoretical aspects. A convenient
way of classifying these approaches is in terms of their strategies
for dealing with the measurement problem.

Some approaches, so-called collapse approaches, seek to modify the
Schrödinger equation, so that superpositions of different
‘everyday’ states do not arise or are very unstable. Such
approaches may have intuitively little to do with decoherence since
they seek to suppress precisely those superpositions that are created
by decoherence. Nevertheless their relation to decoherence is
interesting. Among collapse approaches
(Section 3.1), we shall discuss
von Neumann's collapse postulate and theories of spontaneous
localisation (for which see also the entry on
collapse theories).

Other approaches, known as ‘hidden variables’ approaches,
seek to explain quantum phenomena as equilibrium statistical effects
arising from a deeper-level theory, rather strongly in analogy
with attempts at understanding thermodynamics in terms of statistical
mechanics (see the entry on
philosophy of statistical mechanics).
Of these, the most developed are the so-called pilot-wave theories (Section 3.2),
in particular the theory by de Broglie and Bohm (see also the entry on
Bohmian mechanics).

Finally, there are approaches that seek to solve (or dissolve) the
measurement problem strictly by providing an
appropriate interpretation of the theory. Slightly tongue in
cheek, one can group together under this heading approaches as diverse
as Everett interpretations (see the entries on
Everett's relative-state interpretation
and on the
many-worlds interpretation),
modal interpretations
and the
Copenhagen interpretation.
We shall be analysing these approaches specifically in their relation
to decoherence (we discuss the Everett interpretation in Section
3.3, the modal interpretations in Section
3.4, and the Copenhagen interpretation in
Section
3.5).

3.1.1 Von Neumann

It is notorious that von Neumann (1932) proposed that the
observer's consciousness is somehow related to what he called
Process I, otherwise known as the collapse postulate or the projection
postulate, which in his book is treated on a par with the
Schrödinger equation (his Process II). There is some ambiguity in
how to interpret von Neumann. He may have been advocating some sort of
special access to our own consciousness that makes it appear to us
that the wave function has collapsed; this would suggest a
phenomenological reading of Process I. Alternatively, he may have
proposed that consciousness plays some causal role in precipitating
the collapse; this would suggest that Process I is a physical process
taking place in the world on a par with Process
II.[14]

In either case, von Neumann's interpretation relies on the
insensitivity of the final predictions (for what we consciously
record) to exactly where and when Process I is used in modelling the
evolution of the quantum system. This is often referred to as the
movability of the von Neumann cut between the subject and the
object, or some similar phrase. Collapse could occur anywhere along
the so-called von Neumann chain: when a particle impinges on a screen,
or when the screen blackens, or when an automatic printout of the
result is made, or in our retina, or along the optic nerve, or when
ultimately consciousness is involved. Von Neumann thus needs to show
that all of these models are equivalent, as far as the final
predictions are concerned, so that he can indeed maintain that
collapse is related to consciousness, while in practice applying the
projection postulate at a much earlier (and more practical) stage in
the description.

Von Neumann poses this problem in Section VI.1 of his book. In Section
VI.2, by way of preparation, he discusses the relation between states
of systems and subsystems, in particular the partial trace, and the
biorthogonal decomposition theorem, i.e. the theorem stating that an entangled
quantum state can always be written in the special form

(5)

\[
\sum_k c_k\, \varphi_k \otimes \xi_k
\]

for two suitable bases (note the perfect correlations in (5)). Then in
Section VI.3, after discussing his insolubility argument (see again
footnote 14), von Neumann shows that there always is a Hamiltonian
that will lead from a state of the form \(\sum_k c_k\, \varphi_k \otimes \xi_0\) to
a state of the form (5). This concludes von Neumann's
argument.
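Numerically, finding the biorthogonal decomposition (5) amounts to a
singular value decomposition of the coefficient matrix of the
entangled state. A minimal sketch (in Python with NumPy; the random
example state is purely illustrative):

    import numpy as np

    dS, dE = 2, 3                        # dimensions of system and environment
    rng = np.random.default_rng(0)
    psi = rng.standard_normal(dS * dE) + 1j * rng.standard_normal(dS * dE)
    psi /= np.linalg.norm(psi)           # a generic entangled pure state

    C = psi.reshape(dS, dE)              # psi = sum_ij C_ij |i>|j>
    phi, c, xi = np.linalg.svd(C)        # C = phi @ diag(c) @ xi (leading rows)

    # Columns of phi and rows of xi are the two Schmidt bases; the
    # singular values c are the non-negative coefficients c_k of (5).
    recon = sum(c[k] * np.kron(phi[:, k], xi[k, :])
                for k in range(min(dS, dE)))
    print(np.allclose(recon, psi))       # True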

What von Neumann has shown is that, under suitable modelling of the
measurement interaction, applying the collapse postulate directly to
the measured observable or applying it to the pointer observable of
the apparatus (or by extension to the ‘optic nerve signal
observable’, etc.) leads to the same statistics of results.
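In symbols: if the measurement coupling takes
\(\varphi_k \otimes \xi_0\) to \(\varphi_k \otimes \xi_k\), then
collapsing the state \(\sum_k c_k \varphi_k\) on the measured
observable before the interaction, or collapsing the final state of
the form (5) on the pointer observable after it, both yield

\[
p(k) \;=\; |c_k|^2 ,
\]

so the statistics of the recorded results do not depend on where
along the chain the collapse is applied.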

What he has not shown is that the assumption that the collapse
occurs at the level of consciousness is equivalent to the assumption
that it happens at any other earlier stage if one considers
also other possible measurements that could be carried out
along the von Neumann chain. Indeed, if collapse occurs only at the
level of consciousness, it is in principle possible, instead of
looking at the pointer, to perform a different measurement on the
composite of system and apparatus that would detect interference
between the different components of (5).

This is now precisely where decoherence plays a role. Indeed, while
such measurements are possible in principle, decoherence will make
them impossible to perform in practice. Therefore, if we assume that Process I is
a real physical process, decoherence makes it in practice
impossible to detect where along the measurement chain this process
takes place, thus allowing von Neumann to postulate that it happens
when consciousness gets involved. This aspect will be relevant also in
the next subsection.

3.1.2 Spontaneous collapse theories

The best known theory of spontaneous collapse is the so-called GRW
theory (Ghirardi, Rimini & Weber 1986), in which a material
particle spontaneously undergoes localisation in the sense
that at random times it experiences a collapse of the form used to
describe approximate position
measurements.[15]
In the original model, the collapse occurs independently for each
particle (a large number of particles thus ‘triggering’
collapse much more frequently); in later models the frequency for each
particle is weighted by its mass, and the overall frequency for
collapse is thus tied to mass
density.[16]
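Schematically (with the standardly quoted parameter values of the
original model): each particle suffers, at a mean rate
\(\lambda \approx 10^{-16}\,\mathrm{s}^{-1}\), a spontaneous ‘hit’

\[
\psi \;\longrightarrow\; \frac{L_x \psi}{\|L_x \psi\|} , \qquad
L_x \;\propto\; \exp\!\Bigl(-\frac{(\hat{q} - x)^2}{2 r_C^2}\Bigr) ,
\]

with localisation width \(r_C \approx 10^{-5}\) cm and with the
centre \(x\) distributed with probability density
\(\|L_x \psi\|^2\); this is formally the transformation used to
describe an approximate position measurement.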

Thus, formally, the effect of spontaneous collapse is the same as in
some of the models of decoherence, at least for one
particle.[17]
Two crucial differences on the other hand are that we have
‘true’ collapse instead of suppression of interference
(cf. above
Section 1),
and that spontaneous collapse occurs without there being any
interaction between the system and anything else, while in the case of
decoherence suppression of interference generally arises through
interaction with the environment.

Can decoherence be put to use in GRW? The situation may be rather
complex when the decoherence interaction does not approximately
privilege position (e.g. when it selects for currents in a SQUID
instead), because collapse and decoherence might actually
‘pull’ in different
directions.[18]
But in those cases in which the decoherence interaction also takes the
form of approximate position measurements, the answer presumably boils down to a
quantitative comparison. If collapse happens faster than decoherence,
then the superposition of components relevant to decoherence will not
have time to arise, and insofar as the collapse theory is successful
in recovering classical phenomena, decoherence plays no role in this
recovery. Instead, if decoherence takes place faster than collapse,
then (as in von Neumann's case) the collapse mechanism can find
‘ready-made’ structures onto which to truly collapse the
wave function. Simple comparison of the relevant rates in models of
decoherence and in spontaneous collapse theories (Tegmark 1993,
esp. Table 2) suggests that this is generally the case. Thus, it seems
that decoherence should play a role also in spontaneous collapse
theories.

A further aspect of the relation between decoherence and spontaneous
collapse theories relates to the
experimental testability of spontaneous collapse
theories. Exactly as we have just discussed in the previous subsection
in the context of von Neumann's Process I, if we assume that
collapse is a real physical process, decoherence will make it
extremely difficult in practice to detect empirically when and where
exactly spontaneous collapse takes place (see the nice discussion of
this point in Chapter 5 of Albert (1992)).

Even worse, if decoherence can indeed be put
to use also in no-collapse approaches such as pilot-wave or Everett
(possibilities that we discuss in the next sub-sections), then in all
cases in which decoherence is faster than collapse, what might be
interpreted as evidence for collapse could be reinterpreted as
‘mere’ suppression of interference (for instance in the
case of measurements), and only those cases in which the collapse
theory predicts collapse but the system is shielded from decoherence
(or perhaps in which the two pull in different directions) could be
used to test collapse theories experimentally.

One particularly bad scenario for experimental testability is related
to the speculation (in the context of the ‘mass density’
version) that the cause of spontaneous collapse may be connected with
gravitation. Tegmark 1993 (Table 2) quotes some admittedly uncertain
estimates for the suppression of interference due to a putative
quantum gravity, but they are quantitatively very close to the rate of
destruction of interference due to the GRW collapse (at least outside
of the microscopic domain). Similar conclusions are arrived at by Kay
(1998). If there is indeed such a quantitative similarity between
these possible effects, then it would become extremely difficult to
distinguish between the two. In the presence
of gravitation, any positive effect could be interpreted as support
for either collapse or decoherence (with the above proviso). And in those cases in which the
system is effectively shielded from decoherence (say, if the
experiment is performed in free fall), if the collapse mechanism is
indeed triggered by gravitational effects, then no collapse should be
expected either.

The relation between decoherence and spontaneous
collapse theories is thus indeed far from straightforward.

3.2.1 De Broglie-Bohm and related theories

Pilot-wave theories are no-collapse formulations of quantum mechanics
that assign to the wave function the role of determining the evolution
of (‘piloting’, ‘guiding’) the variables
characterising the system, say particle configurations, as in de
Broglie's (1928) and Bohm's (1952) theory, or fermion number density,
as in Bell's (1987, Chap. 19) ‘beable’ quantum field
theory, or again field configurations, as in various proposals for
pilot-wave quantum field theories (for a recent survey, see Struyve 2011).

De Broglie's idea was to modify classical Hamiltonian mechanics
in such a way as to make it analogous to classical wave optics, by
substituting for Hamilton and Jacobi's action function the phase
S of a physical wave. Such a ‘wave mechanics’ of
course yields non-classical motions, but in order to understand how de
Broglie's dynamics relates to typical quantum phenomena, we must
include Bohm's (1952, Part II) analysis of the appearance of
collapse. In the case of measurements, Bohm argued that the wave
function evolves into a superposition of components that are and
remain separated in the total configuration space of measured system
and apparatus, so that the total configuration is
‘trapped’ inside a single component of the wave
function, which will guide its further evolution, as if the wave had
collapsed (‘effective’ wave function). This analysis
allows one to recover qualitatively the measurement collapse and by
extension such typical quantum features as the
uncertainty principle
and the perfect correlations in an
Einstein-Podolsky-Rosen
experiment. (The quantitative aspects of the theory are also very well
developed, but we shall not describe them here.)
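In the simplest case of a single spinless particle, writing the wave
function in polar form \(\psi = R\, e^{iS/\hbar}\), de Broglie's
guidance equation reads

\[
\frac{dQ}{dt} \;=\; \frac{\nabla S(Q, t)}{m} ,
\]

a first-order equation in which the phase \(S\) of the wave plays
precisely the role of Hamilton and Jacobi's action function.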

It is natural to extend this analysis from the
case of measurements induced by an apparatus to that of
‘spontaneous measurements’ as performed by the environment in
the theory of decoherence, thus applying the same strategy to
recover both quantum and classical phenomena. The resulting picture
is one in which de Broglie-Bohm theory, in cases of decoherence,
describes the motion of particles that are trapped inside one of the
extremely well localised components selected by the decoherence
interaction. Thus, de Broglie-Bohm trajectories will partake of the
classical motions on the level defined by decoherence (the width of
the components).

This use of decoherence would arguably resolve the puzzles discussed,
e.g., by Holland (1996) with regard to the possibility of a
‘classical limit’ of de Broglie's theory. One
baffling problem, for instance, is that trajectories with different
initial conditions cannot cross in de Broglie-Bohm theory, because the
wave guides the particles by way of a first-order equation, while, as
is well known, Newton's equations are second-order and possible
trajectories in Newton's theory do cross. Now, however, the
non-interfering components produced by decoherence can indeed cross,
and so will the trajectories of particles trapped inside them.

The above picture is natural, but it is not obvious. De Broglie-Bohm
theory and decoherence contemplate two a priori distinct
mechanisms connected to apparent collapse: respectively, separation of
components in configuration space and suppression of interference.
While the former obviously implies the latter, it is equally obvious
that decoherence need not imply separation in configuration space. One
can expect, however, that decoherence interactions of the form of
approximate position measurements will.

If the main instances of decoherence are indeed coextensive with
instances of separation in configuration, de Broglie-Bohm theory can
thus use the results of decoherence relating to the formation
of classical structures, while providing an interpretation of quantum
mechanics that explains why these structures are indeed
observationally relevant. In that case, the question that arises for
de Broglie-Bohm theory is not only the standard question of whether
all apparent measurement collapses can be associated with
separation in configuration (by arguing that at some stage all
measurement results are recorded in macroscopically different
configurations), but also whether all appearance of
classicality can be associated with separation in configuration
space.[19]

A discussion of the role of decoherence in pilot-wave theory in the
form suggested above is still largely outstanding. An informal
discussion is given in Bohm and Hiley (1993, Chap. 8), partial results
are given by Appleby (1999), some simulations have been realised by
Sanz and co-workers (e.g. Sanz and Borondo 2009); and a different
approach is suggested by Allori (2001; see also Allori &
Zanghì 2009). Appleby discusses Bohmian trajectories in a model
of decoherence and obtains approximately classical trajectories, but
under a special
assumption.[20]
The simulations currently published by
Sanz and co-workers are based on simplified models, but fuller results
have been
announced.[21]
Allori investigates in the first place
the ‘short wavelength’ limit of de Broglie-Bohm theory
(suggested by the analogy to the geometric limit in wave optics). The
role of decoherence in her analysis is crucial but limited to
maintaining the classical behaviour obtained under the
appropriate short wavelength conditions, because the behaviour would
otherwise break down after a certain time.

While, as argued above, it appears plausible that decoherence might be
instrumental in recovering the classicality of pilot-wave trajectories
in the case of the non-relativistic particle theory, it is less clear
whether this strategy might work equally well in the case of field
theory. Doubts to this effect have been raised, e.g., by Saunders
(1999) and by Wallace (2008). Essentially, these authors doubt whether
the configuration-space variables, or some coarse-grainings thereof,
are, indeed, decohering
variables.[22]
At least in the opinion of the present
author, further detailed investigation is needed.

3.2.2 Nelson's stochastic mechanics

Nelson's (1966, 1985) stochastic mechanics is strictly speaking
not a pilot-wave theory. It is a proposal to recover the wave function
and the Schrödinger equation as effective elements in the
description of a fundamental diffusion process in configuration
space. Insofar as the proposal is successful, however, it then shares
many features with de Broglie-Bohm theory. In particular, the current
velocity for the particles in Nelson's theory turns out to be
equal to the de Broglie-Bohm velocity, and the particle distribution
in Nelson's theory is equal to that in de Broglie-Bohm theory
(in equilibrium).
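In symbols (again writing \(\psi = R\, e^{iS/\hbar}\)): the particles
in Nelson's theory undergo a diffusion process with diffusion
coefficient \(\hbar/2m\), whose current velocity is

\[
v \;=\; \frac{\nabla S}{m} ,
\]

i.e. exactly the de Broglie-Bohm velocity, supplemented by an osmotic
velocity \(u = (\hbar/2m)\,\nabla \ln R^2\) that maintains the
equilibrium distribution \(R^2 = |\psi|^2\).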

It follows that many results from pilot-wave theories can be imported
into Nelson's stochastic mechanics. However, decoherence has
been very little discussed in the literature on stochastic mechanics,
if at all, and the strategies used in pilot-wave theories to recover
the appearance of collapse and the emergence of a classical regime
still need to be applied specifically in the case of stochastic
mechanics. This would presumably also resolve some conceptual puzzles
specific to Nelson's theory, such as the problem of two-time
correlations raised in Nelson (2006).

Over the years, since the original paper by Everett (1957), some very
diverse ‘Everett interpretations’ have been proposed,
which possibly only share the core intuition that a single
wave function of the universe should be interpreted in terms of
a multiplicity of ‘realities’ at some level or
other. This multiplicity, however understood, is formally associated
with components of the wave function in some
decomposition.[23]

Various such Everett interpretations, roughly speaking, differ as to how to
identify the relevant components of the universal wave
function, and how to justify such an identification (the
so-called problem of the ‘preferred basis’ —
although this may be a misnomer), and differ as to how to
interpret the resulting multiplicity (various
‘many-worlds’ or various ‘many-minds’
interpretations), in particular with regard to the interpretation of
the (emerging?) probabilities at the level of the components
(problem of the ‘meaning of probabilities’).

The last problem is perhaps the most hotly debated aspect of
Everett. Clearly, decoherence enables reidentification over time of
both observers and of results of repeated measurement (and thus
definition of empirical frequencies). In recent years progress has
been made especially along the lines of interpreting the probabilities
in decision-theoretic terms for a ‘splitting’ agent (see
in particular Deutsch (1999) and Wallace (2003b,
2007)).[24]

The most useful application of decoherence to Everett, however, seems
to be in the context of the problem of the preferred basis.
Decoherence yields a natural solution to the problem,
in that it identifies a class of ‘preferred’
states (not necessarily an orthonormal basis!), and allows one to
reidentify them over time, so that one can identify
‘worlds’ with the trajectories defined by decoherence (or
more abstractly with decoherent
histories).[25]
If part of the aim of Everett is to interpret quantum mechanics
without introducing extra structure, in particular without
postulating the existence of some preferred basis, then one
will try to look for potentially relevant structures that are already
present in the wave function. In this sense, decoherence is the ideal
candidate for identifying ‘worlds’ (see e.g. Wallace
2003a).

A justification for this identification can be variously
given by suggesting that a ‘world’ should be a
temporally extended structure and thus reidentification over
time will be a necessary condition for defining worlds; or similarly
by suggesting that in order for observers to have evolved
there must be stable records of past events (Saunders 1993,
and the unpublished Gell-Mann & Hartle 1994) (see the
Other Internet Resources section below);
or that observers must be able to
access robust states, preferably through the existence of
redundant information in the environment (Zurek's ‘existential
interpretation’, 1998).

Alternatively to some global notion of ‘world’, one can
look at the components of the (mixed) state of a (local) system,
either from the point of view that the different components defined by
decoherence will separately affect (different components of the state
of) another system, or from the point of view that they will
separately underlie the conscious experience (if any) of the
system. The former sits well with Everett's (1957) original
notion of relative state, and with the relational interpretation of
Everett preferred by Saunders (e.g. 1993) and, it would seem, Zurek
(1998) (see the entry on
Everett's relative-state interpretation).
The latter leads directly to the idea of many-minds
interpretations.[26]

The idea of many minds was suggested early on by Zeh (2000; also 1995,
p. 24). As Zeh puts it, von Neumann's motivation for introducing
collapse was to save what he called ‘psycho-physical
parallelism’ (arguably to be understood as supervenience of the
mental on the physical: only one mental state is experienced, so there
should be only one corresponding component in the physical state). In
a decohering no-collapse universe one can instead introduce
a new psycho-physical parallelism, in which individual minds
supervene on each non-interfering component in the physical state. Zeh
indeed suggests that, given decoherence, this is the most natural
interpretation of quantum
mechanics.[27]

Modal interpretations originated with Van Fraassen (1973, 1991) as
pure reinterpretations of quantum mechanics (other later versions
coming more to resemble pilot-wave theories). Van Fraassen's
basic intuition was that the quantum state of a system should be
understood as describing a collection of possibilities, represented by
components in the (mixed) quantum state. His proposal considers only
decompositions at single instants, and is agnostic about
reidentification over time. Thus, it can directly exploit only the
fact that decoherence produces descriptions in terms of classical-like
states, which will count as possibilities in Van Fraassen's
interpretation. This ensures ‘empirical adequacy’ of the
quantum description (a crucial concept in Van Fraassen's philosophy of
science). The dynamical aspects of decoherence can be exploited
indirectly, in that single-time components will exhibit
records of the past, which ensure adequacy with respect to
observations, but about whose veridicity Van Fraassen remains
agnostic.

A different strand of modal interpretations is loosely associated with
the (distinct) views of Kochen (1985), Healey (1989) and Dieks and
Vermaas (e.g. 1998). We focus on the last of these to fix ideas.
In these approaches, Van Fraassen's possible decompositions are
restricted to a single one, singled out by a mathematical criterion
(related to the biorthogonal decomposition theorem mentioned above in
Section 3.1), and a dynamical picture is explicitly sought
(and was later developed). In the case of an ideal (non-approximate)
quantum measurement, this special decomposition coincides with that
defined by the eigenstates of the measured observable and the
corresponding pointer states, and the interpretation thus appears to
solve the measurement problem (for this case at least).

At least in Dieks's original intentions, however, the approach was
meant to provide an attractive interpretation of quantum mechanics
also in the case of decoherence interactions, since at least in simple
models of decoherence the same kind of decomposition singles out more
or less also those states between which interference is suppressed
(with a proviso about very degenerate states).

However, this approach fails badly when applied to other models of
decoherence, e.g., that in Joos and Zeh (1985, Section III.2). Indeed,
it appears that in more general models of decoherence the components
singled out by this version of the modal interpretation are given
by delocalised states, and are unrelated to the localised
components naturally privileged by decoherence (Donald 1998;
Bacciagaluppi 2000). Note that Van Fraassen's original
interpretation is untouched by this problem, and so are possibly some
more recent modal or modal-like interpretations by Spekkens and Sipe
(2001), Bene and Dieks (2002) and Berkovitz and Hemmo (2006).

Finally, some of the views espoused in the decoherent histories
literature could be considered as cognate to Van Fraassen's views,
identifying possibilities, however, at the level of possible courses
of world history. Such ‘possible worlds’ would be those
temporal sequences of (quantum) propositions satisfying the
decoherence condition and in this sense supporting a description in terms
of a probabilistic evolution. This view would be using decoherence as
an essential ingredient, and in fact may turn out to be the most
fruitful way yet of implementing modal ideas; a discussion in these
terms has been outlined by Hemmo
(1996).

Bohr is often credited with more or less the following view. Everyday
concepts, in fact the concepts of classical physics, are indispensable
to the description of any physical phenomena (in a way and
terminology somewhat reminiscent of Kant's transcendental
arguments). However, experimental evidence from atomic phenomena shows
that classical concepts have fundamental limitations in their
applicability: they can only give partial (complementary) pictures of
physical objects. While these limitations are quantitatively
negligible for most purposes in dealing with macroscopic objects, they
apply also at that level (as shown by Bohr's willingness to apply the
uncertainty relations to parts of the experimental apparatus in the
Einstein-Bohr debates), and they are of paramount importance
when dealing with microscopic objects. Indeed, they shape the
characteristic features of quantum phenomena, e.g., indeterminism. The
quantum state is not an ‘intuitive’ (anschaulich,
also translated as ‘visualisable’) representation of a
quantum object, but only a ‘symbolic’ representation, a
shorthand for the quantum phenomena that are constituted by applying the
various complementary classical pictures.

While it is difficult to pinpoint exactly what Bohr's views were (the
concept and even the term ‘Copenhagen interpretation’
have been argued to be a later construct; see Howard 2004), it is clear that
according to Bohr, classical concepts are autonomous from, and indeed
conceptually prior to, quantum theory. If we understand the theory of
decoherence as pointing to how classical concepts might in fact emerge
from quantum mechanics, this seems to undermine Bohr's basic
position. Of course it would be a mistake to say that decoherence (a
part of quantum theory) contradicts the Copenhagen approach
(an interpretation of quantum theory). However, decoherence does
suggest that one might want to adopt alternative interpretations, in
which it is the quantum concepts that are prior to the classical ones,
or, more precisely, the classical concepts at the everyday level
emerge from quantum mechanics (irrespective of whether there are
even more fundamental concepts, as in pilot-wave theories). In this
sense, if the programme of decoherence is successful in the sense sketched in
Section 2.3,
it will indeed be a blow to Bohr's interpretation coming
from quantum physics itself.

On the other hand, Bohr's intuition that quantum mechanics as
practised requires a classical domain would in fact be
confirmed by decoherence, if it turns out that decoherence is
indeed the basis for the phenomenology of quantum mechanics, as the
Everettian and possibly the Bohmian analysis
suggest.[28] As a matter of
fact, Zurek (2003) locates his existential interpretation half-way
between Bohr and Everett.

We have already mentioned in
Section 1.1
that some care has to be taken lest one overgeneralise conclusions
based on examining only well-behaved models of decoherence. On the
other hand, in order to assess the programme of explaining the
emergence of classicality using decoherence (together with appropriate
foundational approaches), one has to probe how far the
applications of decoherence can be pushed. In this final section, we
survey some of the further applications that have been proposed for
decoherence, beyond the easier examples we have seen such as chirality
or alpha-particle tracks. Whether decoherence can indeed be
successfully applied to all of these fields will be in part a matter
for further assessment, as more detailed models are proposed and investigated.

In a straightforward application of the techniques that allow one to
derive Newtonian trajectories at the level of components, Zurek and
Paz (1994) obtain chaotic trajectories in
quantum mechanics. The problem with the quantum description of chaotic
behaviour is that prima facie there should be none. Chaos is
characterised, roughly, as extreme sensitivity of a system's behaviour
to its initial conditions, in the sense that the distance between
trajectories arising from nearby initial conditions increases
exponentially in time. Since the Schrödinger evolution is
unitary, it preserves all scalar products and all distances
between quantum state vectors. Thus, it would seem, close initial
conditions lead to trajectories that are uniformly close throughout
all of time, and no chaotic behaviour is possible (‘problem of
quantum chaos’). The crucial point that enables Zurek and Paz's
analysis is that the relevant trajectories defined by decoherence are
at the level of components of the state of the
system. Unitarity is preserved because the vectors in the environment,
to which these different components are coupled, are and remain
orthogonal: how the components themselves evolve in detail is immaterial.
Explicit modelling yields a picture of quantum chaos in which
different trajectories branch (a feature absent from classical chaos,
which is deterministic) and then indeed diverge exponentially. As with
the crossing of trajectories in de Broglie-Bohm theory
(Section 3.2),
one has behaviour at the level of components that is qualitatively
different from the behaviour derived for wave functions of an
isolated system.
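To make the norm-preservation step explicit (a one-line sketch in
standard Hilbert-space notation, not a formula from Zurek and Paz's
paper): if \( U(t) \) is the unitary Schrödinger evolution, then for
any two initial states \( \psi \) and \( \varphi \),

\[
\| U(t)\psi - U(t)\varphi \| = \| U(t)(\psi - \varphi) \| = \| \psi - \varphi \|
\quad \text{for all } t,
\]

since unitary operators preserve the scalar product and hence the
norm. Two initially close state vectors of an isolated system thus
remain exactly as close for all time, whereas classical chaos requires
a distance growing as \( d(t) \sim d(0)\, e^{\lambda t} \) with
positive Lyapunov exponent \( \lambda \).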

The idea of effective superselection rules was mentioned in
Section 1.1.
As pointed out by Giulini, Kiefer and Zeh (1995, see also Giulini
et al. 1996, Section 6.4), the justification for the (strict)
superselection rule for charge in quantum field theory can also be
phrased in terms of decoherence. The idea is simple: an electric
charge is surrounded by a Coulomb field (which, electrostatically, is
infinitely extended; the argument can, however, also be carried
through using the retarded field). States of different electric charge of a
particle are thus coupled to different, presumably orthogonal, states
of its electric field. One can consider the far-field as an
effectively uncontrollable environment that decoheres the particle
(and the near-field), so that superpositions of different charges are
indeed never observed.
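Schematically (a sketch in generic notation, not taken from Giulini,
Kiefer and Zeh's paper): a superposition of charge eigenstates becomes
correlated with the corresponding far-field states,

\[
\Bigl( \sum_q c_q\, |q\rangle \Bigr) \otimes |E_0\rangle
\;\longrightarrow\;
\sum_q c_q\, |q\rangle \otimes |E_q\rangle,
\]

and if the far-field states are strictly orthogonal,
\( \langle E_q | E_{q'} \rangle = \delta_{qq'} \), tracing out the
far-field yields a reduced state that is exactly diagonal in charge,

\[
\rho_S = \sum_{q, q'} c_q c_{q'}^{*}\, \langle E_{q'} | E_q \rangle\,
|q\rangle\langle q'|
= \sum_q |c_q|^2\, |q\rangle\langle q|,
\]

so that interference between different charges is strictly unobservable.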

Another claim about the significance of decoherence relates to time
asymmetry (see e.g. the entries on
time asymmetry in thermodynamics
and
philosophy of statistical mechanics),
in particular to whether decoherence can explain the apparent
time-directedness in our (classical) world. The issue is again one of
time-directedness at the level of components emerging from a
time-symmetric evolution at the level of the universal wave function
(presumably with special initial conditions). Insofar as (apparent)
collapse is indeed a time-directed process, decoherence will have
direct relevance to the emergence of this ‘quantum mechanical
arrow of time’ (for a spectrum of discussions, see Zeh 2001,
Chap. 4; Hartle 1998, and references therein; Bacciagaluppi 2002,
Section 6.1, and Bacciagaluppi 2007). Whether decoherence
is connected to the other familiar
arrows of time is a more specific question, various discussions of
which are given, e.g., by Zurek and Paz (1994), Hemmo and Shenker
(2001) and the unpublished Wallace (2001) (see the
Other Internet Resources below).

Starting from the idea that decoherence can explain
‘quantum phenomena’ such as particle detections,
Zeh (2003) argues that the concept of a particle in quantum field
theory is itself a consequence of decoherence. That is, only fields
need be included among the fundamental concepts, and
‘particles’ are a derived concept, contrary to what might
be suggested by the customary introduction
of fields through a process of ‘second quantisation’. Thus
decoherence seems to provide a further powerful argument for the
conceptual primacy of fields over particles in the question of the
interpretation of quantum field theory.

Finally, it has been suggested that decoherence could be a useful
ingredient in a theory of quantum gravity, for two reasons. First,
because a suitable generalisation of decoherence theory to a full
theory of quantum gravity should yield suppression of interference
between different classical spacetimes (Giulini et al. 1996,
Section 4.2). Second, it is speculated that decoherence might solve
the so-called problem of time, which arises as a prominent
puzzle in (the ‘canonical’ approach to) quantum
gravity. This is the problem that the candidate fundamental equation
(in this approach)—the Wheeler-DeWitt equation—is an
analogue of a time-independent Schrödinger equation, and
does not contain time at all. The problem is thus in a sense simply:
where does time come from? In the context of decoherence theory, one
can construct toy models in which the analogue of the Wheeler-DeWitt
wave function decomposes into non-interfering components (for a
suitable sub-system) each satisfying a time-dependent
Schrödinger equation, so that decoherence appears in fact as the
source of
time.[29]
An accessible introduction to and
philosophical discussion of these models is given by Ridderbos (1999),
with references to the original papers.
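Schematically, and only to illustrate the structure of such toy models
(the notation is generic rather than taken from any particular paper):
the Wheeler-DeWitt equation and its semiclassical (WKB-type)
decomposition read

\[
\hat{H}\, \Psi[g, x] = 0, \qquad
\Psi[g, x] \approx \sum_k e^{\, i S_k[g]/\hbar}\; \psi_k(x; g),
\]

where g stands for the gravitational and x for the matter degrees of
freedom. If decoherence suppresses interference between the components
labelled by k, one can define along each component a time parameter
\( \tau \) from the gravitational phase \( S_k \), and each matter
wave function then satisfies an effective time-dependent
Schrödinger equation,

\[
i\hbar\, \frac{\partial \psi_k}{\partial \tau}
= \hat{H}_{\mathrm{matter}}\, \psi_k.
\]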

Bibliography

Adler, S. L., 2003, ‘Why Decoherence has not Solved the
Measurement Problem: A Response to P. W. Anderson’, Studies in
History and Philosophy of Modern Physics, 34B: 135–142.
[Preprint available online]

Bene, G., and Dieks, D., 2002, ‘A Perspectival Version of the
Modal Interpretation of Quantum Mechanics and the Origin of Macroscopic
Behavior’, Foundations of Physics, 32:
645–672.
[Preprint available online]

Wallace, D., 2003a, ‘Everett and Structure’,
Studies in History and Philosophy of Modern Physics,
34B: 87–105.
[Preprint available online]

–––, 2003b, ‘Everettian Rationality: Defending
Deutsch's Approach to Probability in the Everett
Interpretation’, Studies in History and Philosophy of Modern
Physics, 34B: 415–439.
[Preprint available online]
[See also the longer, unpublished version titled
‘Quantum Probability and Decision Theory, Revisited’
referenced in the
Other Internet Resources.]

–––, 2007, ‘Quantum Probability from Subjective
Likelihood: Improving on Deutsch's Proof of the Probability
Rule’, Studies in History and Philosophy of Modern
Physics, 38: 311–332.
[Preprint available online]

–––, 2008, ‘Philosophy of Quantum Mechanics’,
in D. Rickles (ed.), The Ashgate Companion to Contemporary
Philosophy of Physics, Aldershot: Ashgate,
pp. 16–98. [Preliminary version
available online
as ‘The Quantum Measurement Problem: State of Play’, December 2007.]

Other Internet Resources

Felline, L. (Universidad Autónoma de Barcelona) and Bacciagaluppi, G. (University of Aberdeen), 2011, ‘Locality and Mentality in Everett Interpretations: Albert and Loewer's Many Minds’, available online
in the Pittsburgh Phil-Sci Archive.

A Many-Minds Interpretation Of Quantum Theory,
maintained by Matthew Donald (Cavendish Lab, Physics, University of
Cambridge). This page contains details of his many-minds
interpretation, as well as discussions of some of the books and papers
quoted above (and others of interest). Follow also the link to the
‘Frequently Asked Questions’, some of which (and the
ensuing dialogue) contain useful discussion of decoherence.

Quantum Mechanics on the Large Scale,
maintained by Philip Stamp (Physics, University of British Columbia).
This page has links to the available talks from the Vancouver
workshop mentioned in footnote 1; see especially the papers
by Tony Leggett and by Philip Stamp.

Decoherence Website,
maintained by Erich Joos. This is a site with information,
references and further links to people and institutes working on
decoherence, especially in Germany and the rest of Europe.

Acknowledgments

I wish to thank the many people in discussion with whom I have shaped my
understanding of decoherence over the years, in particular Marcus
Appleby, Matthew Donald, Beatrice Filkin, Meir Hemmo, Simon Saunders,
Max Schlosshauer, David Wallace and Wojtek Zurek. For more recent
discussions and correspondence relating to this article I wish to
thank Valia Allori, Bob Griffiths, Peter Holland, Martin Jones, Tony
Leggett, Hans Primas, Alberto Rimini, Philip Stamp and Bill Unruh. I
also gratefully acknowledge my debt to Steve Savitt and Philip Stamp
for an invitation to the University of British Columbia, to Claudius
Gros for an invitation to the University of the Saarland, and for the
opportunities for discussion arising from these talks. Finally I wish
to thank the following: the referee for this entry, again David
Wallace, for his clear and constructive commentary; my subject editor,
John Norton, who corresponded with me extensively over a previous
version of part of the material and whose suggestions I have taken to
heart; our editor-in-chief, Ed Zalta, for his saintly patience; and my
late friend, Rob Clifton, who invited me to write on this topic in the
first place.