g-OLOGY

Science is often portrayed as a kind of tennis match between theory
and experiment. This description is nowhere more apt than in the
study of a physical constant known as the g factor of the
electron and the muon. For more than 50 years g has been
batted back and forth by theorists and experimenters striving always
to append another decimal place to the known value. The game depends
on having well-matched players on either side of the net, so that
what's predicted theoretically can be checked experimentally. In
this case the players are very good indeed. The g factor of
the electron has been both calculated and measured so finely that
the uncertainty is only a few parts per trillion. The current
experimental value is 2.0023193043718 ± 0.0000000000075.

Measuring a property of matter with such extraordinary precision is
a labor of years; a single experiment could well occupy the better
part of a scientific career. It's not always appreciated that
theoretical calculations at this level of accuracy are also arduous
and career-consuming. Getting to the next decimal place is not
back-of-the-envelope work. It calls for care and patience and for
mastery of specialized mathematical methods. These days it also
requires significant computer resources for both algebraic and
numerical calculations. Only a few groups of workers worldwide have
the necessary expertise. My own role in this tennis game is purely
that of a spectator, but I have been watching the ball bounce for
some time, and I would like to give a brief account of the game from
a fan's point of view, emphasizing the action on the theoretical
side of the net.

The study of g is not just an exercise in accumulating
decimal places for their own sake. The g factor represents
an important test for fundamental theories of the forces of nature.
So far, theory and experiment are in excellent agreement on the
g factor of the electron. But for the muon—the
heavier sibling of the electron—the situation is not so clear.
Calculations and measurements of the muon g factor have not
yet reached the precision of the electron results, but already there
are hints of possible discrepancies. Those hints could be early
signs of "new physics." Or they could be signs that we
don't understand the old physics as well as we think we do.

QED

The naive mental picture of an electron is a blob of mass and
electric charge, spinning on its axis like a tiny planet. If we take
this image seriously, the moving charge on the spinning particle's
surface has to be regarded as an electric current, which ought to
generate a magnetic field. The g factor (also known as the
gyromagnetic ratio) is the constant that determines how much
magnetic field arises from a given amount of charge, mass and spin.
The formula is:

µ = g(e/2m)s

where µ is the magnetic moment, e the electric
charge, m the mass and s the spin angular momentum
(all expressed in appropriate units). Early experimental evidence
suggested that the numerical value of g is approximately 2.
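
To make the relation concrete, here is a minimal numerical sketch of my
own (in Python, with approximate CODATA constants): setting g = 2 and
s = ħ/2, the formula yields the Bohr magneton, the natural unit for the
electron's magnetic moment.

    # Sketch: evaluate mu = g * (e / (2*m)) * s for an electron with g = 2.
    # Constants are approximate CODATA values in SI units.
    e = 1.602176634e-19      # elementary charge, C
    m_e = 9.1093837015e-31   # electron mass, kg
    hbar = 1.054571817e-34   # reduced Planck constant, J*s

    g = 2.0                  # Dirac's value
    s = hbar / 2             # spin angular momentum of the electron

    mu = g * (e / (2 * m_e)) * s
    print(f"magnetic moment: {mu:.4e} J/T")   # ~9.274e-24 J/T, the Bohr magneton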

In the 1920s P. A. M. Dirac created a new and not-so-naive theory of
electrons in which g was no longer just an arbitrary
constant to be measured experimentally; instead, the value of
g was specified directly by the theory. For an electron in
total isolation, Dirac calculated that g is exactly 2. We
now know that this result was slightly off the mark; g is
greater than 2 by roughly one part in a thousand. And yet Dirac's
mathematics was not wrong. The source of the error is that no
electron is ever truly alone; even in a perfect vacuum, an electron
is wrapped in a halo of particles and antiparticles, which are
continually being emitted and absorbed, created and annihilated.
Interactions with these "virtual" particles alter various
properties of the electron, including the g factor.

Methods for accurately calculating g were devised in the
1940s as part of a thorough overhaul of the theory of
electrons—a theory called quantum electrodynamics, or QED.
That the calculation of g can be honed to such a razor edge
of precision is something of a fluke. The mass, charge and magnetic
moment of the electron are known only to much lower accuracy; so how
can g, which is defined in terms of these quantities, be
pinned down more closely? The answer is that g is a
dimensionless ratio, calculated and measured in such a way that
uncertainties in all those other factors cancel out.

Experimental measurements of g benefit from another fortunate
circumstance. The experiments can be arranged to determine not
g itself but the difference between g and 2; thus
the measurements have come to be known as "g minus 2
experiments." Because g–2 is only about a
thousandth of g, the measurement gains three decimal places
of precision for free.
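
A toy calculation of my own makes the bookkeeping explicit: pinning
down the small quantity g–2 to a given relative precision determines
g itself roughly a thousand times more tightly.

    # Sketch: why measuring g-2 instead of g gains ~3 decimal places.
    g = 2.0023193           # approximate electron g factor
    anomaly = g - 2         # about 2.3e-3

    rel_precision = 1e-9    # suppose g-2 is measured to 1 part per billion
    abs_error = anomaly * rel_precision           # absolute error in g-2
    print(f"relative error in g: {abs_error / g:.1e}")   # ~1.2e-12: parts per trillion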

Bottled Electrons

One good way to measure g is to capture an electron and
keep it in a bottle formed out of electric and magnetic fields. In
confinement, the electron executes an elegant dance of twirls and
pirouettes. The various modes of motion are quantized, meaning that
only certain discrete energy states are possible. In some of these
states the electron's intrinsic magnetic moment is oriented parallel
to the external magnetic field, and in other states it is
antiparallel. The energy difference between two such states is
proportional to g. Thus a direct approach to determining
g is simply to measure the energy of a
"spin-flip" transition between parallel and antiparallel states.

The drawback of this straightforward experimental design is that you
cannot know g with any greater accuracy than you know the
strength of the external field. A clever trick sidesteps this
problem: Measure the energies of two transitions, both of which
depend on the magnetic field but only one of which involves a spin
flip. For the non-flip transition, the constant of proportionality
that sets the energy scale is exactly 2, whereas for the spin-flip
transition the constant is g. Taking the ratio of the two
energies eliminates dependence on the strength of the field.
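
The cancellation is easy to verify numerically. In a sketch of my own
devising (the field strengths are invented), both transition
frequencies scale linearly with the field B, so the ratio returns g/2
no matter what B happens to be:

    import math

    e_over_m = 1.75882001076e11   # electron charge-to-mass ratio, C/kg
    g = 2.00231930436             # approximate electron g factor

    def non_flip_freq(B):         # non-flip transition: the constant is exactly 2
        return 2 * (e_over_m / 2) * B / (2 * math.pi)

    def spin_flip_freq(B):        # spin-flip transition: the constant is g
        return g * (e_over_m / 2) * B / (2 * math.pi)

    for B in (1.0, 5.0, 5.3):     # tesla; pretend B is not known precisely
        ratio = spin_flip_freq(B) / non_flip_freq(B)
        print(f"B = {B} T  ->  2 * ratio = {2 * ratio:.11f}")  # recovers g; B cancels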

Experiments with isolated electrons were pioneered by Hans Dehmelt
of the University of Washington, who kept them penned up for weeks
at a time—long enough that some of them were given names, like
family pets. Although the technique may sound simple in its
principles, getting results accurate to 11 significant figures is
not a project for a high school science fair.

In the case of the muon, measuring g is even more challenging. The
muon is an unstable particle, with a lifetime of a few microseconds,
and so keeping a pet muon in a cage is not an option. The best muon
g–2 measurements come from a 20-year-long experiment
designated E821, carried out at the Brookhaven National Laboratory
by workers from 11 institutions. Clouds of muons with their spins
aligned circulate in a toroidal vacuum chamber immersed in a strong
magnetic field. The apparatus is adjusted so that if g were
exactly 2, the particles would complete each orbit with the same
orientation they had at the outset. But because g differs
from 2, the spin axis precesses slowly, drifting about 12 degrees on
each circuit of the ring. When a muon decays, it emits an electron
preferentially in the direction of the spin axis. The spatial
distribution of these electrons reveals the rate of precession and
thus the value of g–2.
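
Where does the per-orbit drift come from? A back-of-the-envelope sketch,
assuming the "magic" Lorentz factor γ ≈ 29.3 used at Brookhaven (a
detail not given above), reproduces the figure: the spin gains on the
momentum by a fraction a·γ of a full turn each circuit, where
a = (g–2)/2.

    # Sketch: spin precession relative to momentum, per orbit, in a g-2 ring.
    # Assumes the "magic" Lorentz factor of the stored muons (gamma ~ 29.3).
    g_muon = 2.0023318416          # E821 value quoted in the text
    a = (g_muon - 2) / 2           # anomalous magnetic moment, ~1.17e-3
    gamma = 29.3                   # assumed relativistic factor

    advance_per_orbit = a * gamma * 360   # extra degrees of spin rotation per circuit
    print(f"{advance_per_orbit:.1f} degrees per orbit")   # ~12 degrees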

The latest value of the muon g factor reported by the E821
group works out to 2.0023318416 ± 0.0000000012. This number
differs from the electron g factor in the fifth decimal
place, and its precision is only at the parts-per-billion level
rather than parts-per-trillion. Despite the lesser precision,
however, the confrontation between theory and experiment turns out
to be more dramatic in the case of the muon.

g-Whiz

Calculating g from theoretical principles might seem to be
far easier than measuring it experimentally. After all, the theorist
can leave behind all the messy imperfections of the physical world
and operate in an abstract realm where vacuums and magnetic fields
are always ideal, and no one ever spills coffee on the control
panel. But theory has challenges of its own, and in the saga of the
g factor, 20-year-long experiments are matched by
30-year-long calculations.

What needs to be calculated is the strength of a charged particle's
interaction with a magnetic field. The problem can be phrased in
terms of something directly observable: Given a particle of known
mass, charge and momentum, and a magnetic field of known intensity,
how much will the particle's path be deflected when it passes
through the field? Classical physics envisions magnetic lines of
flux that induce a curvature in the particle's trajectory. Quantum
electrodynamics takes a different approach. Instead of a field
exerting its influence throughout a volume of space, QED posits a
discrete, localized "scattering event," where an electron
either emits or absorbs a photon (the quantum of the electromagnetic
field); the recoil from this emission or absorption alters the
electron's own motion.
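
For the classical side of the comparison, a textbook-style sketch (the
numbers are arbitrary choices of mine): a particle of momentum p in a
field B bends along an arc of radius r = p/(qB), and over a short path
of length L it is deflected by roughly L/r radians.

    # Sketch: classical magnetic deflection of a charged particle.
    # r = p / (qB); over a path of length L the deflection angle is ~ L / r.
    q = 1.602176634e-19     # charge of an electron, C
    p = 5.344e-22           # momentum, kg*m/s (about 1 MeV/c, an arbitrary choice)
    B = 0.01                # field strength, tesla
    L = 0.01                # path length through the field, m

    r = p / (q * B)         # radius of curvature, m
    theta = L / r           # small-angle deflection, radians
    print(f"radius {r:.3f} m, deflection {theta:.4f} rad")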

A key tool for understanding such scattering events is the
diagrammatic method introduced in the 1940s by Richard P. Feynman. A
Feynman diagram plots position in space along one axis and time
along another, so that a particle moving with constant velocity is
represented by an oblique straight line. The Feynman diagram for a
simple scattering event might have an electron moving diagonally
until it collides with a photon coming from the opposite direction;
at this "vertex" of the diagram the photon disappears and
the electron reverses course.

There is more to a Feynman diagram, however, than just a spacetime
depiction of particles colliding like billiard balls. As a matter of
fact, in QED a particle cannot be assigned a unique, definite
trajectory; all you can calculate is the probability that the
particle will make its way from point A to point B. A Feynman
diagram represents an entire family of possible trajectories,
corresponding to collisions taking place at various positions and
times. Each such trajectory has an associated "amplitude";
adding all the amplitudes and squaring the result yields the
probability for the overall process.
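
The rule "add the amplitudes, then square" can be demonstrated with a
toy model; the three complex amplitudes below are pure inventions,
chosen only to show how paths interfere.

    import cmath

    # Toy model: three hypothetical paths from A to B, each with a
    # complex amplitude (magnitude and phase invented for illustration).
    amplitudes = [
        0.60 * cmath.exp(1j * 0.0),    # "direct" path
        0.30 * cmath.exp(1j * 2.0),    # a detour, with a different phase
        0.10 * cmath.exp(1j * -1.0),   # a rarer detour
    ]

    total = sum(amplitudes)            # add all the amplitudes...
    probability = abs(total) ** 2      # ...then square the result
    print(f"probability = {probability:.4f}")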

The simplest scattering event—one electron bouncing off one
photon—was the process considered by Dirac in his first
computation of g in the 1920s. As noted above, Dirac got an
exact result of g=2. The reason this value needs correcting
is that the simplest, one-photon scattering process is not the only
way for an electron to get from point A to point B. The direct route
may well be the most important path, but in QED you dare not ignore
detours or distractions along the way.

One such distraction is for the electron to emit a photon and then
reabsorb it, somewhat like a child throwing a ball in the air and
running to catch it herself. The evanescent photon is called a
virtual particle, because it can never be detected directly, but its
effects on g are certainly real. Adding a virtual photon to the
Feynman diagram is easy enough—it forms a loop, diverging from
and then rejoining the electron path—but computing the
photon's effect on g is more difficult. The problem is that
the virtual photon can have unlimited energy. For an accurate
computation, you have to add up the amplitudes associated with every
possible energy—and without an upper limit, this sum comes out
infinite. These implausible infinities stymied the further
development of QED for two decades.
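
A cartoon of the problem (my own, not an actual QED integral): if each
energy scale contributes roughly in proportion to dE/E, the running
total grows without bound as the cutoff is raised.

    import math

    # Cartoon of a logarithmically divergent loop integral: contributions
    # proportional to dE/E, summed up to an energy cutoff we keep raising.
    for cutoff in (1e3, 1e6, 1e9, 1e12):
        total = math.log(cutoff / 1.0)   # integral of dE/E from 1 to the cutoff
        print(f"cutoff {cutoff:.0e}: running total = {total:.1f}")  # never settles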

The solution was a trick called renormalization, worked out by
Feynman, Julian Schwinger, Sin-Itiro Tomonaga and Freeman Dyson. In
1947 Schwinger finally succeeded in calculating the contribution of
a single virtual-photon loop to the g factor of the
electron. The answer was given in terms of another fundamental
constant of nature, known as α,
which measures the electric charge of the electron and has a
numerical value of about 1/137. Schwinger showed that the one-loop
contribution to the "anomalous magnetic moment" of the
electron is α/2π, or
approximately 0.00116. The anomalous magnetic moment is defined as
one-half of g–2, and so the corrected value of
g comes out to about 2.00232.
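
Schwinger's correction is simple enough to check on a calculator, or in
a few lines of Python (using the approximate value α ≈ 1/137.036):

    import math

    alpha = 1 / 137.036              # fine-structure constant (approximate)
    a1 = alpha / (2 * math.pi)       # Schwinger's one-loop anomalous moment
    g = 2 * (1 + a1)                 # g = 2(1 + a), since a = (g - 2)/2

    print(f"alpha/2pi = {a1:.5f}")   # ~0.00116
    print(f"g = {g:.5f}")            # ~2.00232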

g-Willikers

If an electron can get away with spontaneously tossing around a
virtual photon, what's to stop it from juggling two or three of
them? Nothing at all: A Feynman diagram decorated with a single
photon loop can just as well be festooned with two loops.
Furthermore, it turns out there are seven distinct two-loop diagrams
(see Figure 3).

Drawing the seven two-loop Feynman diagrams is actually the easy
part of understanding their effect; the hard part is calculating
each diagram's contribution to the value of g. The
mathematical expression associated with a diagram takes the form of
an integral, summing up the amplitudes of an infinite family of
particle paths. Some of the two-loop integrals are complicated, and
early attempts to evaluate them went astray; the task was not
completed until 1957. The result is again expressed in terms of
α, but—reflecting the much lower probability of a
two-loop event—the α term is now squared. And it is
multiplied by a curious coefficient that combines various rational
fractions, logarithms and the Riemann zeta function—this last
item being familiar in number theory but an exotic interloper in physics.
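
For the curious, that two-loop coefficient can be evaluated
numerically. Writing the anomaly as a series in α/π (whose first
coefficient is Schwinger's 1/2), the 1957 coefficient, as I transcribe
it from the standard literature, works out to about –0.328:

    import math

    # The two-loop coefficient of (alpha/pi)^2 in the electron anomaly,
    # combining rational fractions, logarithms and the Riemann zeta function.
    zeta3 = 1.2020569031595943      # Riemann zeta(3), "Apery's constant"

    C2 = (197 / 144
          + math.pi ** 2 / 12
          + 0.75 * zeta3
          - (math.pi ** 2 / 2) * math.log(2))

    print(f"C2 = {C2:.9f}")          # ~ -0.328478965

Multiplied by (α/π)² ≈ 5.4 × 10⁻⁶, this coefficient shifts the anomaly
by about –1.8 × 10⁻⁶.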

What comes next is no surprise: If two loops are good, three must
be better. However, for an electron-photon event with three loops
there are 72 Feynman diagrams, representing integrals of daunting
difficulty (see Figure 4). When work on evaluating those
integrals got under way in the 1960s, it soon became clear that the
methods of pencil-and-paper algebra had reached their limits. In
this way Feynman-diagram calculations became a major impetus to the
development of computer-algebra systems—programs that can
manipulate and simplify symbolic expressions.

Despite such computational power tools, some of the three-loop
diagrams resisted analytic solution for 30 years. To fill in the
gaps, physicists tried numerical methods of evaluating the
integrals—an even more computer-intensive task. A simple
example of numerical integration is estimating the area of a
geometric figure by randomly throwing darts at it and counting the
hits and misses. The same basic idea can be applied to a Feynman
integral, but the object being measured is now a complicated volume
in a high-dimensional space; this makes the dart-throwing process
painfully inefficient. Merely deciding whether or not a dart has hit
the target becomes time-consuming. It was not until 1995 that a
reliable, high-precision value of the three-loop contribution was
published, by Toichiro Kinoshita of Cornell University. He evaluated
all 72 diagrams numerically, comparing and combining his results
with analytic values that were then known for 67 of the diagrams. A
year later the last few diagrams were solved analytically by Stefano
Laporta and Ettore Remiddi of the University of Bologna.
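
Here is the dart-throwing idea in miniature, a toy of my own that
estimates the area of a disk rather than a Feynman integral; the same
scheme in a space of many dimensions is what makes the real
calculations so costly.

    import random

    # Monte Carlo "dart throwing": estimate the area of the unit disk
    # by sampling random points in the enclosing 2-by-2 square.
    random.seed(1)
    n_darts = 1_000_000
    hits = 0
    for _ in range(n_darts):
        x = random.uniform(-1, 1)
        y = random.uniform(-1, 1)
        if x * x + y * y <= 1:      # did the dart land inside the disk?
            hits += 1

    area = 4 * hits / n_darts       # the square has area 4
    print(f"estimated area: {area:.4f} (exact: pi = 3.1416...)")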

The three-loop correction is proportional to α³, which makes its order of magnitude less than one part
per million. Even so, to match the precision of the experimental
measurement it's necessary to go on to the four-loop diagrams, of
which there are 891. Attacking all those intricately tangled
diagrams by analytic methods is hopeless for now. Numerical
computations have been under way since the early 1980s. A
thousandfold increase in the computer time invested in the task has
brought a thirtyfold improvement in precision—but the best
results still amount to only a few significant digits.
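
The shrinking payoff of each additional loop can be tabulated in a few
lines (a sketch; the true contributions also carry coefficients of
order one, which I omit):

    import math

    alpha = 1 / 137.036
    for loops in range(1, 6):
        # Each extra loop suppresses the contribution by another factor of alpha/pi.
        print(f"{loops}-loop term ~ (alpha/pi)^{loops} = {(alpha / math.pi) ** loops:.2e}")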

The Muon's Story

The electron and the muon are twins (or triplets, since there is a
third sibling called the tau). The only apparent difference between
them is mass, the muon being 200 times as heavy. But mass matters
mightily in the calculation of g. Because certain effects
are proportional to the square of the mass, they are enhanced 40,000
times in the muon. As a result, the muon g factor depends
not just on electromagnetic interactions but also on manifestations
of the weak and the strong nuclear forces. The virtual particles
that appear in muon Feynman diagrams include the usual photons and
electrons and also heavier objects such as the W and Z (quanta of
the weak force) and the strongly interacting particles known
collectively as hadrons.
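
The enhancement factor quoted above is just the mass ratio squared, as
a one-line check (using the more precise ratio of about 207) confirms:

    mass_ratio = 206.768          # muon mass / electron mass (approximate)
    print(f"enhancement ~ {mass_ratio ** 2:,.0f}")   # ~42,800, i.e. roughly 40,000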

A theoretical framework called the Standard Model extends the ideas
of QED to the strong and weak forces. Unfortunately, however, the
theory does not always allow high-precision calculations from first
principles in the way QED does. The strong-force contributions have
to be computed on a more empirical basis; in effect, even the
theoretical value of the muon g factor is based in part on
experimental findings.

The muon g factor has attracted much attention lately
because the theoretical and experimental values seem to be
diverging. The latest measurements from the E821 group differ from
accepted theoretical values by roughly two standard deviations.
Physicists have not been reticent about speculating on the possible
meaning of this discrepancy, suggesting it could be our first
glimpse of physics beyond the Standard Model. Perhaps the muon is
not truly an elementary particle but has some kind of substructure?
Another popular notion is supersymmetry, which predicts that all
particles have shadowy "superpartners," with names such as
selectrons, smuons and photinos.

One of these adventurous interpretations of the muon results could
well turn out to be true. On the other hand, it seems prudent to
keep in mind that the g-factor experiments and calculations are
fearfully difficult, and it's always possible an error has crept in
somewhere along the way. It would not be the first time. Feynman, in
his book QED: The Strange Theory of Light and Matter, tells
the story of an early computation of the two-loop electron
g factor:

It took two 'independent' groups of physicists two
years to calculate this next term, and then another year to find out
there was a mistake—experimenters had measured the value to be
slightly different, and it looked for a while that the theory didn't
agree with experiment for the first time, but no: it was a mistake
in arithmetic. How could two groups make the same mistake? It turns
out that near the end of the calculation the two groups compared
notes and ironed out the differences between their calculations, so
they were not really independent.

The story has been re-enacted more recently. In the mid-1990s two
groups independently calculated a small, troublesome contribution to
the muon g factor called hadronic light-by-light
scattering. Kinoshita's group and a European collaboration of Johan
Bijnens, Elisabetta Pallante and Joaquim Prades got
compatible results. Then, six years later, Marc Knecht and Andreas
Nyffeler recalculated the effect by another method and came up with
an answer that was the same in magnitude but opposite in sign. The
other groups rechecked their work, and both found they had made
essentially the same mistake entering formulas into a
computer-algebra program. The correction slightly diminished the
disagreement between theory and experiment.

In mentioning such incidents, my aim is certainly not to embarrass
the participants. They are working far out on the frontier of
computational science, where no maps or signposts show the way. But
for that very reason a certain amount of caution is in order when
evaluating the results.

A definitive understanding of the muon g factor will have
to await further refinements of both the experimental and the
theoretical values. Incremental improvements can be expected soon,
but major advances may be some time in coming. On the experimental
side, the E821 project has been shut down by the Department of
Energy, at least for the time being. As for theory, the next major
stage will require serious attention to the five-loop Feynman
diagrams. There are 12,672 of those. Don't hold your breath.