
A measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured, the eigenvalue this eigenstate belongs to being equal to the result of the measurement.

Measurement plays an important role in quantum mechanics, and it is viewed in different ways among various interpretations of quantum mechanics. In spite of considerable philosophical differences, different views of measurement almost universally agree on the practical question of what results from a routine quantum-physics laboratory measurement. To understand this, the Copenhagen interpretation, which has been commonly used,[1] is employed in this article.

In classical mechanics, a simple system consisting of a single particle is fully described by the particle's position x(t) and momentum p(t). As an analogue, in quantum mechanics a system is described by its quantum state, which contains the probabilities of possible positions and momenta. In mathematical language, all possible pure states of a system form an abstract vector space called Hilbert space, which is typically infinite-dimensional. A pure state is represented by a state vector in the Hilbert space.

Once a quantum system has been prepared in the laboratory, some measurable quantity such as position or energy is measured. For pedagogic reasons, the measurement is usually assumed to be ideally accurate. The state of a system after measurement is assumed to "collapse" into an eigenstate of the operator corresponding to the measurement. Repeating the same measurement without any evolution of the quantum state will lead to the same result. If instead the preparation is repeated, subsequent measurements will likely lead to different results.

The predicted values of the measurement are described by a probability distribution, or an "average" (or "expectation") of the measurement operator based on the quantum state of the prepared system.[2] The probability distribution is either continuous (such as position and momentum) or discrete (such as spin), depending on the quantity being measured.
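As a minimal numerical sketch of the expectation value ⟨ψ|Ô|ψ⟩, consider a hypothetical spin-1/2 example (the observable and state below are chosen purely for illustration):

```python
import numpy as np

# Hypothetical two-level example: spin measured along z.
# The observable is the Pauli-z matrix; the state is an equal superposition.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Expectation value <O> = <psi| O |psi>  (np.vdot conjugates its first argument)
expectation = np.vdot(psi, sigma_z @ psi).real
print(expectation)  # 0.0: outcomes +1 and -1 are equally likely
```

The expectation value averages the discrete outcomes ±1 weighted by their Born-rule probabilities, here 1/2 each.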

The measurement process is often considered as random and indeterministic. Nonetheless, there is considerable dispute over this issue. In some interpretations of quantum mechanics, the result merely appears random and indeterministic, whereas in other interpretations the indeterminism is core and irreducible. A significant element in this disagreement is the issue of "collapse of the wavefunction" associated with the change in state following measurement. There are many philosophical issues and stances (and some mathematical variations) taken—and near universal agreement that we do not yet fully understand quantum reality. In any case, our descriptions of dynamics involve probabilities, not certainties.

The mathematical relationship between the quantum state and the probability distribution is, again, widely accepted among physicists, and has been experimentally confirmed countless times. This section summarizes this relationship, which is stated in terms of the mathematical formulation of quantum mechanics.

Hermitian operators' eigenvalues are real. The possible outcomes of a measurement are precisely the eigenvalues of the given observable.

For each eigenvalue there are one or more corresponding eigenvectors (eigenstates). A measurement results in the system being in the eigenstate corresponding to the eigenvalue result of the measurement. If the eigenvalue determined from the measurement corresponds to more than one eigenstate ("degeneracy"), instead of being in a definite state, the system is left in the eigenspace of the measurement operator spanned by all the eigenstates having that eigenvalue.

Important examples of observables are:

The Hamiltonian operator Ĥ, which represents the total energy of the system. In nonrelativistic quantum mechanics it is given by Ĥ = T̂ + V̂ = p̂²/2m + V(x̂).

The position operator x̂ is given by x̂ = x (in the position basis), or x̂ = iℏ ∂/∂p (in the momentum basis).

Operators can be noncommuting. Two Hermitian operators commute if (and only if) there is at least one basis of vectors, each of which is an eigenvector of both operators (this is sometimes called a simultaneous eigenbasis). Noncommuting observables are said to be incompatible and cannot in general be measured simultaneously. In fact, they are related by an uncertainty principle, as discovered by Werner Heisenberg.
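The commutation criterion can be checked numerically. A small sketch using the Pauli matrices (chosen here purely as a familiar pair of incompatible observables):

```python
import numpy as np

# Pauli-x and Pauli-z are Hermitian but do not commute.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    """[A, B] = AB - BA; zero iff A and B share a full eigenbasis."""
    return a @ b - b @ a

print(np.allclose(commutator(sigma_x, sigma_z), 0))  # False: incompatible observables
print(np.allclose(commutator(sigma_z, sigma_z), 0))  # True: any operator commutes with itself
```

Since [σx, σz] = −2iσy ≠ 0, no basis diagonalizes both, so spin along x and spin along z cannot be measured simultaneously.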

There are a few possible ways to mathematically describe the measurement process (both the probability distribution and the collapsed wavefunction). The most convenient description depends on the spectrum (i.e., set of eigenvalues) of the observable.

Consider a system prepared in state |ψ⟩. Since the eigenstates of the observable Ô form a complete basis (the eigenbasis), the state vector |ψ⟩ can be written in terms of the eigenstates as

|ψ⟩ = c₁|1⟩ + c₂|2⟩ + c₃|3⟩ + ⋯

where c₁, c₂, … are complex numbers in general. The eigenvalues O₁, O₂, O₃, … are all the possible values of the measurement. The corresponding probabilities are given by

Pr(Oₙ) = |⟨n|ψ⟩|² = |cₙ|²

If the result of the measurement is Oₙ, then the system (after measurement) is in the pure state |n⟩. That is,

|ψ′⟩ = |n⟩

so any immediately repeated measurement of Ô will yield the same result Oₙ. When there is a discontinuous change in state due to a measurement that involves discrete eigenvalues, that is called wavefunction collapse. For some, this is simply a description of a reasonably accurate discontinuous change in a mathematical representation of physical reality; for others, depending on philosophical orientation, it is a fundamentally serious problem with quantum theory; still others see it as a statistically justified approximation resulting from the fact that the entity performing the measurement has been excluded from the state representation. In particular, multiple measurements of certain physically extended systems demonstrate predicted statistical correlations which would not be possible under classical assumptions.
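The discrete-spectrum recipe (expand in the eigenbasis, read off the Born-rule probabilities |cₙ|², then collapse to the selected eigenstate) can be sketched with NumPy. The observable and state below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Hermitian observable with a nondegenerate discrete spectrum.
O = np.array([[1.0, 0.5], [0.5, -1.0]])
eigvals, eigvecs = np.linalg.eigh(O)       # columns of eigvecs are eigenstates |n>

psi = np.array([0.8, 0.6], dtype=complex)  # normalized prepared state
c = eigvecs.conj().T @ psi                 # coefficients c_n = <n|psi>
probs = np.abs(c) ** 2                     # Born rule: Pr(O_n) = |c_n|^2

n = rng.choice(len(eigvals), p=probs)      # random measurement outcome
psi_after = eigvecs[:, n]                  # collapse: |psi'> = |n>
# An immediately repeated measurement of O now yields eigvals[n] with probability 1.
```

Because the coefficients are amplitudes, the probabilities sum to 1 whenever |ψ⟩ is normalized, and the collapsed state reproduces the same outcome on remeasurement.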

Consider a system prepared in state |ψ⟩. Since the eigenstates of the observable Ô form a complete (continuous) basis, the state vector |ψ⟩ can be written in terms of the eigenstates as

|ψ⟩ = ∫ c(x)|x⟩ dx

where c(x) is a complex-valued function. The eigenvalues fill a continuous interval of possible measurement values, so the outcome is described by a probability density: the probability that the measured value lies in the interval (a, b) is

Pr(a < x < b) = ∫ₐᵇ |c(x)|² dx

If the result of the measurement is x, then the system (after measurement) is in the pure state |x⟩. That is,

|ψ′⟩ = |x⟩.

Alternatively, it is often possible and convenient to analyze a continuous-spectrum measurement by taking it to be the limit of a different measurement with a discrete spectrum. For example, an analysis of scattering involves a continuous spectrum of energies, but by adding a "box" potential (which bounds the volume in which the particle can be found), the spectrum becomes discrete. By considering larger and larger boxes, this approach need not involve any approximation, but rather can be regarded as an equally valid formalism in which this problem can be analyzed.

If there are multiple eigenstates with the same eigenvalue (called degeneracies), the analysis is a bit less simple to state, but not essentially different. In the discrete case, for example, instead of finding a complete eigenbasis, it is a bit more convenient to write the Hilbert space as a direct sum of eigenspaces. The probability of measuring a particular eigenvalue is the squared component of the state vector in the corresponding eigenspace, and the new state after measurement is the projection of the original state vector onto the appropriate eigenspace (suitably renormalized).
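The eigenspace-projection rule can be illustrated with a small hypothetical example: an observable diag(1, 1, −1), whose eigenvalue +1 is doubly degenerate:

```python
import numpy as np

# Projector onto the two-dimensional eigenspace of eigenvalue +1
# for the hypothetical observable O = diag(1, 1, -1).
P1 = np.diag([1.0, 1.0, 0.0])

psi = np.array([1, 1, 1], dtype=complex) / np.sqrt(3)  # prepared state

# Probability of measuring +1: squared component of psi in the eigenspace.
prob = np.vdot(psi, P1 @ psi).real                     # = 2/3 here

# New state after measuring +1: projection, renormalized.
psi_after = P1 @ psi
psi_after /= np.linalg.norm(psi_after)
```

The projected state retains the relative amplitudes within the eigenspace; only the component outside the eigenspace is removed.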

Instead of performing quantum-mechanics computations in terms of wavefunctions (kets), it is sometimes necessary to describe a quantum-mechanical system in terms of a density matrix. The analysis in this case is formally slightly different, but the physical content is the same, and indeed this case can be derived from the wavefunction formulation above. The result for the discrete, degenerate case, for example, is as follows:

Let Ô be an observable, and suppose that it has discrete eigenvalues O₁, O₂, O₃, …, associated with eigenspaces V₁, V₂, … respectively. Let Pₙ be the projection operator onto the space Vₙ.

Assume the system is prepared in the state described by the density matrix ρ. Then measuring Ô can yield any of the results O₁, O₂, O₃, …, with corresponding probabilities given by

Pr(Oₙ) = Tr(Pₙρ)

where Tr denotes the trace. If the result of the measurement is n, then the new density matrix will be

ρ′ = PₙρPₙ / Tr(Pₙρ)

Alternatively, one can say that the measurement process results in the new density matrix

ρ″ = Σₙ PₙρPₙ

where the difference is that ρ″ is the density matrix describing the entire ensemble, whereas ρ′ is the density matrix describing the sub-ensemble whose measurement result was n.
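The density-matrix rules Pr(Oₙ) = Tr(Pₙρ), ρ′ = PₙρPₙ/Tr(Pₙρ) and ρ″ = Σₙ PₙρPₙ can be sketched directly, here reusing a hypothetical degenerate observable diag(1, 1, −1):

```python
import numpy as np

# Projectors P_n onto the two eigenspaces of the hypothetical observable diag(1, 1, -1).
P = [np.diag([1.0, 1.0, 0.0]), np.diag([0.0, 0.0, 1.0])]

psi = np.array([1, 1, 1], dtype=complex) / np.sqrt(3)
rho = np.outer(psi, psi.conj())                  # pure-state density matrix |psi><psi|

probs = [np.trace(Pn @ rho).real for Pn in P]    # Pr(O_n) = Tr(P_n rho)

n = 0                                            # suppose the first outcome occurred
rho_sub = P[n] @ rho @ P[n] / probs[n]           # sub-ensemble state rho'
rho_ens = sum(Pn @ rho @ Pn for Pn in P)         # whole-ensemble state rho''
```

Both ρ′ and ρ″ have unit trace; ρ″ is their probability-weighted mixture over all outcomes.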

Suppose that we have a particle in a 1-dimensional box, set up initially in the ground state |ψ₁⟩. As can be computed from the time-independent Schrödinger equation, the energy of this state is E₁ = π²ℏ²/(2mL²) (where m is the particle's mass and L is the box length), and the spatial wavefunction is ⟨x|ψ₁⟩ = √(2/L) sin(πx/L). If the energy is now measured, the result will certainly be E₁, and this measurement will not affect the wavefunction.

Next suppose that the particle's position is measured. The position x will be measured with probability density

Pr(x) = |⟨x|ψ₁⟩|² = (2/L) sin²(πx/L)

If the measurement result was x = S, then the wavefunction after measurement will be the position eigenstate |x=S⟩. If the particle's position is immediately measured again, the same position will be obtained.

The new wavefunction |x=S⟩ can, like any wavefunction, be written as a superposition of eigenstates of any observable. In particular, using the energy eigenstates |ψₙ⟩, we have

|x=S⟩ = Σₙ ⟨ψₙ|x=S⟩ |ψₙ⟩

If we now leave this state alone, it will smoothly evolve in time according to the Schrödinger equation. But suppose instead that an energy measurement is immediately taken. Then the possible energy values Eₙ will be measured with relative probabilities

Pr(Eₙ) ∝ |⟨ψₙ|x=S⟩|² = (2/L) sin²(nπS/L)

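The relative probabilities sin²(nπS/L) for the lowest few energy levels can be tabulated numerically. The values of L and S below are arbitrary illustrative choices, and the normalization over a truncated set of levels is only for display, since |x=S⟩ itself is not normalizable:

```python
import numpy as np

L, S = 1.0, 0.3          # box length and measured position (arbitrary illustrative values)
n = np.arange(1, 6)      # lowest five energy levels

# Overlap of energy eigenstates with the position eigenstate |x=S>:
# <psi_n | x=S> = sqrt(2/L) * sin(n*pi*S/L), so the relative probabilities are
rel = np.sin(n * np.pi * S / L) ** 2

# Only relative weights are meaningful; normalize over the truncated set for display.
print(rel / rel.sum())
```

The spread of weight across many energy levels reflects the fact that a sharp position measurement leaves the energy highly uncertain.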
The process in which a quantum state becomes one of the eigenstates of the operator corresponding to the measured observable is called "collapse", or "wavefunction collapse". The final eigenstate appears randomly with a probability equal to the squared magnitude of its overlap with the original state.[2] The process of collapse has been studied in many experiments, most famously in the double-slit experiment. Wavefunction collapse raises serious questions regarding "the measurement problem",[3] as well as questions of determinism and locality, as demonstrated in the EPR paradox and later in GHZ entanglement. (See below.)

In the last few decades, major advances have been made toward a theoretical understanding of the collapse process. This new theoretical framework, called quantum decoherence, supersedes previous notions of instantaneous collapse and provides an explanation for the absence of quantum coherence after measurement. Decoherence correctly predicts the form and probability distribution of the final eigenstates, and explains the apparent randomness of the choice of final state in terms of einselection.[4]

The von Neumann measurement scheme, the ancestor of quantum decoherence theory, describes measurements by taking into account the measuring apparatus which is also treated as a quantum object.

"Measurement" of the first kind — premeasurement without detection

Let the quantum state be in the superposition |ψ⟩ = Σₙ cₙ|ψₙ⟩, where the |ψₙ⟩ are eigenstates of the operator for the so-called "measurement", prior to von Neumann's second apparatus. In order to make the "measurement", the system described by |ψ⟩ needs to interact with the measuring apparatus described by the quantum state |ϕ⟩, so that the total wave function before the measurement and interaction with the second apparatus is |ψ⟩|ϕ⟩. During the interaction of object and measuring instrument the unitary evolution is supposed to realize the following transition from the initial to the final total wave function:

|ψ⟩|ϕ⟩ → Σₙ cₙ|ψₙ⟩|ϕₙ⟩

where the |ϕₙ⟩ are orthonormal states of the measuring apparatus. The unitary evolution above is referred to as premeasurement. The relation with wave function collapse is established by calculating the final density operator of the object, Σₙ |cₙ|² |ψₙ⟩⟨ψₙ|, from the final total wave function. This density operator is interpreted by von Neumann as describing an ensemble of objects, each being after the measurement with probability |cₙ|² in the state |ψₙ⟩.
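The reduced density operator Σₙ|cₙ|²|ψₙ⟩⟨ψₙ| can be obtained by a partial trace over the apparatus. A two-level sketch with hypothetical coefficients:

```python
import numpy as np

# Two-level object premeasured by a two-level pointer (von Neumann scheme).
c = np.array([0.6, 0.8], dtype=complex)   # hypothetical object coefficients c_n

# Final total state sum_n c_n |psi_n>|phi_n> in the 4-dim product space:
basis = np.eye(2)
total = sum(c[n] * np.kron(basis[n], basis[n]) for n in range(2))

# Reduced density operator of the object: partial trace over the apparatus.
rho_total = np.outer(total, total.conj())
rho_obj = np.trace(rho_total.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.round(rho_obj.real, 2))  # diagonal entries 0.36 and 0.64, i.e. |c_n|^2
```

The off-diagonal (coherence) terms of the object vanish after tracing out the orthonormal pointer states, which is exactly how premeasurement reproduces the collapse statistics without invoking collapse itself.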

If the measured observable has a degenerate spectrum, von Neumann projection is generalized to Lüders projection,

|ψ⟩ = Σₙ Σᵢ cₙᵢ|ψₙᵢ⟩ → (Σᵢ cₙᵢ|ψₙᵢ⟩) / √(Σᵢ |cₙᵢ|²)   (for measurement result Oₙ),

in which the vectors |ψₙᵢ⟩ for fixed n are the degenerate eigenvectors of the measured observable. For an arbitrary state described by a density operator ρ, Lüders projection is given by

ρ → PₙρPₙ / Tr(Pₙρ),

where Pₙ is the projection operator onto the eigenspace belonging to Oₙ.

In a measurement of the second kind, the unitary evolution during the interaction of object and measuring instrument is instead supposed to be

|ψ⟩|ϕ⟩ = Σₙ cₙ|ψₙ⟩|ϕ⟩ → Σₙ cₙ|χₙ⟩|ϕₙ⟩,

in which the states |χₙ⟩ of the object are determined by specific properties of the interaction between object and measuring instrument. They are normalized but not necessarily mutually orthogonal. The relation with wave function collapse is analogous to that obtained for measurements of the first kind, the final state of the object now being |χₙ⟩ with probability |cₙ|². Note that many measurement procedures are measurements of the second kind, some even functioning correctly only as a consequence of being of the second kind. For instance, a photon counter detects a photon by absorbing and hence annihilating it, thus ideally leaving the electromagnetic field in the vacuum state rather than in the state corresponding to the number of detected photons; likewise, the Stern–Gerlach experiment would not function at all if it really were a measurement of the first kind.[5]

One can also introduce the interaction with the environment |e⟩, so that, in a measurement of the first kind, after the interaction the total wave function takes the form

|ψ⟩|ϕ⟩|e⟩ → Σₙ cₙ|ψₙ⟩|ϕₙ⟩|eₙ⟩

The above is completely described by the Schrödinger equation, and there are no interpretational problems with it. Now the problematic wavefunction collapse does not need to be understood as a process |ψ⟩ → |ψₙ⟩ on the level of the measured system, but can also be understood as a process |ϕ⟩ → |ϕₙ⟩ on the level of the measuring apparatus, or as a process |e⟩ → |eₙ⟩ on the level of the environment. Studying these processes provides considerable insight into the measurement problem by avoiding the arbitrary boundary between the quantum and classical worlds, though it does not explain the presence of randomness in the choice of final eigenstate. If the set of states

{|e₁⟩, |e₂⟩, |e₃⟩, …}

represents a set of states that do not overlap in space, the appearance of collapse can be generated by either the Bohm interpretation or the Everett interpretation, both of which deny the reality of wavefunction collapse. Their supporters state that both predict the same probabilities for collapses to various states as the conventional interpretation does. The Bohm interpretation is held to be correct only by a small minority of physicists, since there are difficulties with its generalization for use with relativistic quantum field theory. However, there is no proof that the Bohm interpretation is inconsistent with quantum field theory, and work to reconcile the two is ongoing. The Everett interpretation easily accommodates relativistic quantum field theory.

Interaction without interaction is a proposed quantum-mechanical measurement effect: even when a motion A does not interact with another motion B in a system, a measurement of the physical quantity associated with the motion A can nevertheless be affected by the motion B. The coinage follows John Wheeler's style; Wheeler's coinages include 'mass without mass', 'charge without charge' and 'law without law'.[6]

The motion B is usually a classical harmonic vibration. Two measured quantities associated with the motion A have been proposed: the quantum entanglement of two two-level atoms in a single-mode polarized cavity field, and the number of atoms in an atomic beam reaching the atomic detector. Because the trace over the state of a classical harmonic oscillator is not unity during a time interval Δt shorter than its period T, the measured entanglement concurrence between the two atoms is modified by the vibrant factor (Δt)²/T²,[7] and the registered number of atoms of the translational motion should be multiplied by another vibrant factor Δt/T.[8][9]

Indeed, if the Hamiltonian of the system is given by H = H_A + H_B, in which the coupling term between the motion A and the motion B is absent, then the state of the system factorizes as ρ = ρ_A ⊗ ρ_B. The measured quantity a belonging to the motion A should then be ⟨a⟩ = Tr_A(ρ_A a) · Tr_B(ρ_B I_B). Usually the trace over the motion B, Tr_B(ρ_B I_B), is unity, and our conventional intuition holds. However, under some conditions (for instance, a classical harmonic vibration observed during a time interval shorter than its period) this trace is claimed to be less than unity, and the measurement effect of interaction without interaction then appears.

Surprisingly, the measurement effect for an atomic beam is also a macroscopic quantum phenomenon, because the classical harmonic vibration and the process of registering the number of atoms by the atomic detector are both macroscopic events. The measurement effect for an atomic beam is potentially important for the detection of gravitational waves, because the vibrant factor Δt/T is independent of the amplitude and the initial phase; this implies that an atomic beam could be used to detect the extremely weak classical harmonic vibrations induced by gravitational waves.

Until the advent of quantum decoherence theory in the late 20th century, a major conceptual problem of quantum mechanics and especially the Copenhagen interpretation was the lack of a distinctive criterion for a given physical interaction to qualify as "a measurement" and cause a wavefunction to collapse. This is illustrated by the Schrödinger's cat paradox. Certain aspects of this question are now well understood in the framework of quantum decoherence theory, such as an understanding of weak measurements, and quantifying what measurements or interactions are sufficient to destroy quantum coherence. Nevertheless, there remains less than universal agreement among physicists on some aspects of the question of what constitutes a measurement.

The question of whether (and in what sense) a measurement actually determines the state is one which differs among the different interpretations of quantum mechanics. (It is also closely related to the understanding of wavefunction collapse.) For example, in most versions of the Copenhagen interpretation, the measurement determines the state, and after measurement the state is definitely what was measured. But according to the many-worlds interpretation, measurement determines the state in a more restricted sense: In other "worlds", other measurement results were obtained, and the other possible states still exist.

As described above, there is universal agreement that quantum mechanics appears random, in the sense that all experimental results yet uncovered can be predicted and understood in the framework of quantum mechanics measurements being fundamentally random. Nevertheless, it is not settled[10] whether this is true, fundamental randomness, or merely "emergent" randomness resulting from underlying hidden variables which deterministically cause measurement results to happen a certain way each time. This continues to be an area of active research.[11]

In physics, the Principle of locality is the concept that information cannot travel faster than the speed of light (also see special relativity). It is known experimentally (see Bell's theorem, which is related to the EPR paradox) that if quantum mechanics is deterministic (due to hidden variables, as described above), then it is nonlocal (i.e. violates the principle of locality). Nevertheless, there is not universal agreement among physicists on whether quantum mechanics is nondeterministic, nonlocal, or both.[10]