A wavefunction in quantum mechanics encodes the probability of finding a particle in a particular quantum state. Particle wavefunctions can be used to describe the probability distribution for position, momentum, spin, or any other observable quantity. The classically measured value of a physical observable is obtained from a wavefunction by taking the expectation value of the corresponding operator acting on the wavefunction. The evolution of both the wavefunction and its expectation values over time is governed by the Schrödinger equation.

Wavefunctions as Probability Distributions

A position wavefunction \(\Psi (x)\) is in general a complex-valued function. The phase of the wavefunction is responsible for quantum properties like interference and diffraction of particle waves. The probability density of measuring the particle at position \(x\) is defined to be \(P(x) = \Psi^{\ast} (x) \Psi (x) = |\Psi (x)|^2\), where \({ \Psi }^{\ast} (x)\) is the complex conjugate of \(\Psi (x) \). Mathematical and physical consistency of this definition requires the following constraints:

1) The wavefunction must be normalized, so that the total probability of finding the particle somewhere in space is one:
\[\int _{ -\infty }^{ \infty }{ { \Psi }^{\ast} (x) \Psi (x) } \, dx = 1.\]

Arbitrary square-integrable wavefunctions can be normalized such that the above equation holds by dividing by their norm. Note that square-integrability is equivalent to the wavefunction decaying to zero faster than \(\frac{1}{\sqrt{|x|}}\) as \(x \rightarrow \pm \infty \).

Let a harmonic oscillator be described by the wavefunction

\[\psi (x)=Ax{ e }^{ -{ x }^{ 2 } }.\]

Determine the normalization constant \(A\).
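As a sketch of how such a normalization constant can be found, the integral can be evaluated symbolically; this uses SymPy on the wavefunction given above:

```python
import sympy as sp

x = sp.symbols('x', real=True)
A = sp.symbols('A', positive=True)

psi = A * x * sp.exp(-x**2)

# Normalization condition: the integral of |psi|^2 over all space equals 1
norm = sp.integrate(psi**2, (x, -sp.oo, sp.oo))
A_val = sp.solve(sp.Eq(norm, 1), A)[0]

print(sp.simplify(A_val))  # equal to 2*(2/pi)**(1/4)
```

The same pattern (integrate \(|\psi|^2\), solve for the constant) works for any square-integrable wavefunction.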

Calculate the normalization constant \(A\) if the wavefunction is

\[\Psi(x,t) = A\exp(-2\pi |x|-i\pi t).\]

2) The wavefunction stays normalized as it evolves in time, so that the total probability of finding the particle somewhere is always 100%. This is guaranteed for any wavefunction that solves the Schrödinger equation:
\[ \frac { d }{ dt } \int _{ -\infty }^{ \infty }{ { \Psi }^{\ast} (x) \Psi (x) } \, dx=0.\]

3) The average of any physical observable is defined to be the expectation value. For instance, the average position \(\langle x \rangle \) of a wavefunction is
\[\langle x \rangle = \int _{ -\infty }^{ \infty }{ { \Psi }^{\ast} (x) x\Psi (x) } dx = \int_{-\infty}^{\infty} xP(x)\, dx. \]

The expectation value is analogous to the classical average obtained from repeated measurements on identically prepared systems.

In the quantum description of the world, a particle can exist anywhere in space, with probabilities prescribed by its wavefunction. Once a measurement locates the particle at some position, the wavefunction collapses to a sharp peak \(\Psi (x) \sim \delta (x)\), so that the probability of finding the particle at that location becomes 100%. If the position of the particle were immediately measured again, the sharp peak would remain where it was last measured. However, if the wavefunction of the particle were allowed to evolve over some time via the Schrödinger equation, the sharp peak would gradually spread out.

Spreading out over time of a localized particle via the Schrödinger equation.

When the position or momentum of a particle with some wavefunction \(\Psi \) is measured, there is no guarantee that the observed value will match the expectation value of position or momentum, respectively. The wavefunction still has some spread in position and/or momentum space: \(\Delta x = x - \langle x\rangle\) or \(\Delta p = p - \langle p\rangle\). The average amount of spread over repeated measurements is quantified by the statistical measure called the variance, denoted \({\sigma}^{2}\), which is calculated as the expectation value of the square of the spread:
\[\sigma_x^2 = \big\langle (x - \langle x \rangle)^2 \big\rangle = \langle x^2 \rangle - \langle x \rangle^2.\]

The square root of the variance is the standard deviation, which quantifies the uncertainty in position or momentum of a particle. The standard deviations of position and momentum satisfy a curious relationship called the Heisenberg uncertainty principle:
\[{\sigma}_{x}{\sigma}_{p} \ge \frac{\hbar}{2}.\]
This inequality is derived via the mathematical properties of the vector space in which the wavefunction lives.

A particular position wavefunction is given by:

\[\Psi (x) = Ae^{-x^2/2}\]

for some constant \(A\). Find the constant \(A\) assuming the wavefunction is normalized and compute the position uncertainty \(\sigma_x\).
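A SymPy sketch of this computation follows; it also evaluates the momentum uncertainty (in units where \(x\) is dimensionless, with \(\hbar\) kept symbolic) to show that this Gaussian saturates the Heisenberg bound:

```python
import sympy as sp

x = sp.symbols('x', real=True)
A, hbar = sp.symbols('A hbar', positive=True)

psi = A * sp.exp(-x**2 / 2)

# Normalization: solve <psi|psi> = 1 for A  (gives A = pi**(-1/4))
A_val = sp.solve(sp.Eq(sp.integrate(psi**2, (x, -sp.oo, sp.oo)), 1), A)[0]
psi = psi.subs(A, A_val)

# Position uncertainty: sigma_x^2 = <x^2> - <x>^2
x_mean = sp.integrate(psi * x * psi, (x, -sp.oo, sp.oo))
x2_mean = sp.integrate(psi * x**2 * psi, (x, -sp.oo, sp.oo))
sigma_x = sp.sqrt(x2_mean - x_mean**2)   # 1/sqrt(2)

# Momentum uncertainty, using p = -i*hbar*d/dx
p_mean = sp.integrate(psi * (-sp.I * hbar * sp.diff(psi, x)), (x, -sp.oo, sp.oo))
p2_mean = sp.integrate(psi * (-hbar**2 * sp.diff(psi, x, 2)), (x, -sp.oo, sp.oo))
sigma_p = sp.sqrt(p2_mean - p_mean**2)   # hbar/sqrt(2)

print(A_val, sigma_x, sigma_x * sigma_p)  # product is hbar/2
```

The product \(\sigma_x \sigma_p = \hbar/2\) shows the Gaussian is a minimum-uncertainty state.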


Hilbert Spaces and Operators

Wavefunctions inhabit a type of mathematical space called a Hilbert space as a result of their mathematical properties. Hilbert spaces are particularly important for their generalization of the dot product (the inner product) to arbitrarily many dimensions (up to infinite dimensions). Below is a summary of the properties of Hilbert spaces as used in QM.

1) Vectors in Hilbert spaces are typically written in Dirac notation or bra-ket notation. An arbitrary vector is a "ket":
\[|a\rangle = (a_1 \: a_2\: \ldots \: a_n)^T,\]
where \(T\) denotes the transpose of a vector. If the Hilbert space is infinite-dimensional, the right-hand side above is a function rather than a vector in a finite-dimensional vector space.

The adjoint \(v^{\dagger}\) of a vector \(v\) is defined as its complex conjugate transpose. If the vector is represented by a ket, its adjoint is represented by a "bra":
\[\langle a | = (a_1^{\ast} \: a_2^{\ast} \:\ldots \:a_n^{\ast}).\]

In an infinite-dimensional Hilbert space, the adjoint of a state is the complex conjugate of the corresponding function, since transpose is meaningless on functions.

The Dirac notation is also convenient because it allows ready generalization to the case of infinite-dimensional Hilbert spaces. The inner product of two functions \(f(x), g(x)\) over the interval \([a,b]\) is

\[\langle f|g\rangle = \int_{a}^{b} f^{\ast}(x)g(x)dx.\]

Compute the inner product over one period \([0,2\pi)\) of \(\sin(mx)\) and \(\cos (nx)\) where \(m\) and \(n\) are integers.

\[\int_{0}^{2\pi} \sin(mx)\cos(nx)\, dx = \frac{1}{2}\int_{0}^{2\pi} \big[\sin\big((m+n)x\big) + \sin\big((m-n)x\big)\big]\, dx = 0,\]
where the last equality follows because each term gives an integral of the \(\sin\) function over an integer number of periods (and when \(m = n\), the second term vanishes identically). This formula holds regardless of \(m\) and \(n\), as expected: the orthogonality of the sine and cosine functions is why Fourier series are well-defined.
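This orthogonality can be spot-checked symbolically; a small sketch using SymPy over concrete integer values of \(m\) and \(n\):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# <sin(mx)|cos(nx)> over one period [0, 2*pi) for a few concrete integers m, n
vals = [sp.integrate(sp.sin(m * x) * sp.cos(n * x), (x, 0, 2 * sp.pi))
        for m in range(1, 4) for n in range(1, 4)]

print(vals)  # all zero: sine and cosine are orthogonal over a full period
```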

In QM, the interval for the inner product of functions is often (but not always) \((-\infty,\infty)\), for example if \(f(x)\) and \(g(x)\) are spatial wavefunctions that represent particles that could be anywhere in space. Generally, if \(f(x)\) is confined to some region, the inner product is defined as an integral over that region.

2) A linear transformation on a finite-dimensional Hilbert space is represented by a square matrix \(T\), which acts on a ket by the matrix product
\[T \left|a \right>=Ta.\]
If the Hilbert space is infinite-dimensional, the linear transformation \(T\) is promoted to a linear operator on functions, which takes a function as input and outputs a new function. For instance, the derivative operator \(\frac{d}{dx}\) is a linear operator on functions.

3) A function is normalized if \(\left<f|f \right>=1\). Two functions are orthogonal if \(\left<f|g\right>=0\). A set of functions is orthonormal if \( \langle f_{m}|f_{n}\rangle={\delta}_{mn},\) where \({\delta}_{mn}\) is the Kronecker delta:
\[\delta_{mn} = \begin{cases} 1 \quad &m=n \\ 0 \quad& m\neq n. \end{cases}\]

4) A set of functions is complete if any other function can be expressed as a linear combination of elements of the set of functions:
\[f(x)=\sum_{n=1}^{\infty}{c}_{n} f_{n}(x).\]

Assuming the \(f_n\) are orthonormal, the constants \(c_n\) are given by the overlap integrals:
\[c_n = \langle f_n | f \rangle = \int f_n^{\ast}(x) f(x)\, dx.\]

The sine and cosine functions \(\sin(mx)\) and \(\cos(nx)\), where \(m\) and \(n\) are integers, form a complete set for functions that are periodic on a compact interval. The linear combination of sine and cosine functions representing an arbitrary periodic function is called the Fourier series of that function. Write down the Fourier series of the sawtooth wave:

Note: the definition above is meant to indicate that \(f(x)\) is periodic outside of the defined region as shown in the picture below:

One period of a sawtooth wave. The pattern repeats itself for \(x\) outside \([0,2)\). Image from [1].

If shifted down by \(\frac12\), the sawtooth wave is an odd function. Also note that, as given, the sawtooth wave is already normalized in amplitude. Therefore the only coefficients that need to be computed are the \(c_n\) corresponding to the sine functions:

This gives the decomposition of the sawtooth wave into a linear combination over an infinite set of orthogonal functions, the trigonometric functions.
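The coefficients can also be checked numerically. The sketch below assumes the sawtooth is \(f(x) = x/2\) on \([0,2)\) with period 2 (an assumption consistent with the figure) and computes the sine coefficients by direct integration:

```python
import numpy as np

# Assumed sawtooth (consistent with the figure): f(x) = x/2 on [0, 2), period 2
N = 200_000
x = np.linspace(0.0, 2.0, N, endpoint=False)
dx = x[1] - x[0]
f = x / 2

# Over [0, 2), the basis functions sin(n*pi*x) are already normalized:
# <sin(n pi x)|sin(n pi x)> = 1, so c_n = <sin(n pi x)|f>, here via a Riemann sum.
c = [float(np.sum(f * np.sin(n * np.pi * x)) * dx) for n in range(1, 5)]

print(np.round(c, 4))  # approximately -1/(n*pi) for n = 1, 2, 3, 4
```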

In QM, position, momentum, and energy are called observables, and each observable is represented by a Hermitian or self-adjoint operator. These two terms have distinct meanings in functional analysis, but are for all practical purposes identical in quantum mechanics. The adjoint of an operator is its conjugate transpose; an operator is self-adjoint or Hermitian if it is equal to its adjoint. Hermitian operators play a major role in QM because their expectation values are always real, so they can consistently describe the measured values of observables. Furthermore, according to the spectral theorem from functional analysis, the eigenvectors of a Hermitian operator form a complete set. If the Hermitian operator acts on an infinite-dimensional Hilbert space, its eigenvectors are called eigenfunctions.

To show that an operator is Hermitian, consider the integral definition of the expectation value of an operator \(\hat{A}\):
\[\langle \hat{A}\rangle =\langle \Psi|\hat{A}|\Psi\rangle=\int_{-\infty}^{\infty} \Psi^{\ast} (x) \hat{A}\Psi (x)\, dx.\]
The operator \(\hat{A}\) is Hermitian if this expression equals \(\langle \hat{A}\Psi|\Psi\rangle = \int_{-\infty}^{\infty} \big(\hat{A}\Psi (x)\big)^{\ast} \Psi (x)\, dx\) for every \(\Psi\); in that case \(\langle \hat{A}\rangle\) equals its own complex conjugate and is therefore real.
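As an illustration that Hermitian operators have real expectation values, one can evaluate \(\langle \hat{p} \rangle\) for a sample Gaussian wavepacket with a nontrivial phase (the wavefunction below is chosen purely for demonstration):

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)

# Sample normalized wavefunction with a complex phase (chosen for illustration)
psi = sp.pi**sp.Rational(-1, 4) * sp.exp(-x**2 / 2) * sp.exp(sp.I * x)

# Expectation value of the momentum operator p = -i*hbar*d/dx
integrand = sp.simplify(sp.conjugate(psi) * (-sp.I * hbar * sp.diff(psi, x)))
p_expect = sp.integrate(integrand, (x, -sp.oo, sp.oo))

print(sp.simplify(p_expect))  # hbar: a real number, as Hermiticity requires
```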

To write any function in terms of a basis of eigenfunctions of an operator, one can use a so-called resolution of the identity:

\[\hat{1} = \sum_n |\psi_n\rangle\langle \psi_n|,\]

where the \(\psi_n\) are the (orthonormal) eigenfunctions of the operator. If there are uncountably many eigenfunctions, the sum becomes an integral. Above, the \(\hat{1}\) on the left-hand side denotes the identity operator. Applying the identity to a state \(|\psi\rangle\) gives:

\[|\psi\rangle = \sum_n |\psi_n\rangle \langle \psi_n | \psi \rangle.\]

The right-hand side above is just \(|\psi\rangle\) expanded in a basis of the \(|\psi_n\rangle\), with expansion coefficients given by the overlaps \(\langle \psi_n | \psi \rangle\).
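A finite-dimensional sketch makes the resolution of the identity concrete; here a randomly generated orthonormal basis of \(\mathbb{C}^3\) stands in for the eigenfunctions:

```python
import numpy as np

np.random.seed(0)

# A random 3x3 unitary: its columns form an orthonormal basis of C^3,
# standing in for the eigenfunctions |psi_n> (hypothetical example)
U, _ = np.linalg.qr(np.random.randn(3, 3) + 1j * np.random.randn(3, 3))
psi = np.array([1.0, 2.0 - 1.0j, 3.0])

# Resolution of the identity: sum_n |psi_n><psi_n| equals the identity matrix
identity = sum(np.outer(U[:, n], U[:, n].conj()) for n in range(3))

# Applying it to |psi> expands psi in the basis with coefficients <psi_n|psi>
expansion = sum((U[:, n].conj() @ psi) * U[:, n] for n in range(3))

print(np.allclose(identity, np.eye(3)), np.allclose(expansion, psi))
```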

Write the eigenfunctions of the position operator in the momentum basis and vice versa.

The eigenfunctions of the position operator are those functions for which multiplying by \(x\) is equivalent to multiplying by a constant. This is true only of the Dirac delta function \(\delta(x - x_0)\) (technically a distribution rather than a function), which satisfies \(x\,\delta(x - x_0) = x_0\,\delta(x - x_0)\). The eigenfunctions are written in Dirac notation as \(|x\rangle\).

Writing them instead in a momentum basis, one obtains

\[|x\rangle = \int dp\, |p\rangle \langle p | x \rangle .\]

Compare to the integral representation of the delta function,

\[\delta(x'-x) = \int \frac{dp}{2\pi\hbar}\, e^{ip(x'-x)/\hbar} = \int dp\, \langle x'|p\rangle \langle p|x\rangle. \]

The inclusion of \(\hbar\) in the exponent and normalization factor is the standard convention in quantum mechanics, chosen so that the momentum eigenstates are correctly normalized. The operators transform similarly between bases: in the momentum basis, \(\hat{x} = i\hbar \frac{\partial}{\partial p}\) and \(\hat{p} = p\). One can read off from the above that in the momentum basis the eigenfunctions of position look like

\[\langle p | x \rangle = \frac{e^{-ipx/\hbar}}{\sqrt{2\pi\hbar}}.\]

Measurement and the Commutator

Once a measurement of the position or momentum of a particle is made, the wavefunction of that particle collapses to an eigenfunction of the corresponding operator. As a result, the order in which measurements are made affects the results, since the first measurement collapses the wavefunction to a different state than the second does. The probability of measuring a wavefunction \(\Psi\) to take the eigenvalue \(\lambda_n\) corresponding to some eigenfunction \(\phi_n\) of an operator is the square of the overlap:

\[P(\lambda_n) = |\langle \phi_n | \Psi \rangle|^2.\]

Let there be two operators \(\hat{A}\) and \(\hat{B}\) that represent their respective observables \(A\) and \(B\).

\(\hat{A}\) has two eigenvalues \({a}_{1}\) and \({a}_{2}\), each corresponding to respective normalized eigenstates:

\[{\psi}_{1} = \frac{1}{5}(3{\phi}_{1} + 4{\phi}_{2})\]

\[{\psi}_{2} = \frac{1}{5}(4{\phi}_{1} - 3{\phi}_{2}).\]

\(\hat{B}\) also has two eigenvalues \({b}_{1}\) and \({b}_{2}\), which correspond respectively to normalized eigenstates \({\phi}_{1}\) and \({\phi}_{2}\).

You make an initial measurement of \(A\), recording a value of \(a_1\). You then measure \(B\), then \(A\) again. What is the probability that you record \(a_1\) again?
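To check the arithmetic, one can represent the states as concrete vectors in the \(\hat{B}\) eigenbasis (an assumed representation, used only for this calculation):

```python
import numpy as np

# Eigenstates of B as the standard basis (assumed concrete representation)
phi1, phi2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Eigenstates of A, expanded in the B eigenbasis as in the problem
psi1 = (3 * phi1 + 4 * phi2) / 5
psi2 = (4 * phi1 - 3 * phi2) / 5

# After measuring a1 the state is psi1. Measuring B collapses it to phi_n with
# probability |<phi_n|psi1>|^2; the final A measurement then returns a1 with
# probability |<psi1|phi_n>|^2. Sum over the two intermediate outcomes:
p_a1 = sum(abs(phi @ psi1)**2 * abs(psi1 @ phi)**2 for phi in (phi1, phi2))

print(p_a1)  # 337/625 = 0.5392
```

The intermediate \(B\) measurement scrambles the state, so the probability of recording \(a_1\) twice is less than one.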

This ordering-dependence of measurements is captured by the fact that observables are represented by operators, which do not necessarily commute. Mathematically, whether or not operators commute is given by their commutator, denoted in square brackets:

\[[\hat{A},\hat{B}] = \hat{A}\hat{B} - \hat{B} \hat{A}.\]

If two operators commute, then \(\hat{A}\hat{B} = \hat{B} \hat{A}\) and their commutator vanishes. For example, the multiplication of numbers is commutative; hence, \([2,5]=2(5)-5(2)=0\). Consider instead the position and momentum operators, \(\hat{x} = x\) and \(\hat{p} = -i\hbar \frac{\partial}{\partial x}\), acting on a wavefunction \(\Psi(x)\):

\[[\hat{x},\hat{p}]\Psi = -i\hbar x \frac{\partial \Psi}{\partial x} + i\hbar \frac{\partial}{\partial x}\big(x\Psi\big) = i\hbar \Psi \quad\implies\quad [\hat{x},\hat{p}] = i\hbar.\]

This equation is known as the canonical commutation relation between position and momentum. Since the operators corresponding to position and momentum do not commute, the two quantities cannot be simultaneously measured to arbitrary precision, since a measurement of one disturbs the state on which the other is measured. This means that at all times there is an uncertainty in both position and momentum. The failure of Hermitian operators corresponding to observables to commute is thus deeply linked to uncertainty principles for those observables, like the Heisenberg uncertainty principle.
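The canonical commutation relation can be verified symbolically by applying both operator orderings to an arbitrary test function; a quick SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x)  # arbitrary test wavefunction

x_op = lambda f: x * f                         # position operator
p_op = lambda f: -sp.I * hbar * sp.diff(f, x)  # momentum operator

# [x, p] psi = (x p - p x) psi
commutator = x_op(p_op(psi)) - p_op(x_op(psi))

print(sp.simplify(commutator))  # I*hbar*psi(x), i.e. [x, p] = i*hbar
```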

By exploiting the commutator in a clever way, one can derive the time evolution of the expectation value of observables, a result called Ehrenfest's theorem. Consider some Hermitian operator \(\hat{A}\). Differentiating the definition of the expectation value and using the Schrödinger equation for the time derivatives of \(\Psi\) and \(\Psi^{\ast}\) yields

\[\frac{d}{dt}\langle \hat{A} \rangle = \frac{i}{\hbar}\left\langle [\hat{H},\hat{A}] \right\rangle + \left\langle \frac{\partial \hat{A}}{\partial t} \right\rangle.\]

For any operator without explicit time dependence, like position or momentum, the time dependence of the expectation value is governed entirely by the commutator of the operator with the Hamiltonian. Notably, for position and momentum with a time-independent potential \(V(x)\), Ehrenfest's theorem reduces to

\[\frac{d\langle x \rangle}{dt} = \frac{\langle p \rangle}{m}, \qquad \frac{d\langle p \rangle}{dt} = -\left\langle \frac{\partial V}{\partial x} \right\rangle,\]

which reproduces Newton's second law at the level of expectation values.