Quantum entanglement

Entangled particles remain intimately and instantaneously
linked throughout their existence. This image, which first appeared
on the cover of Nature in October 2006 (vol 2 no 10), is an artist's
impression of how quantum teleportation of particles is achieved via
the phenomenon of entanglement.

Identical twins, it's said, can sometimes sense when one of the pair is
in danger, even if they're oceans apart. Tales of telepathy abound. Scientists
cast a skeptical eye over such claims, largely because it isn't clear how
these weird connections could possibly work. Yet they've had to come to
terms with something that's no less strange in the world of physics: an
instantaneous link between particles that remains strong, secure, and undiluted
no matter how far apart the particles may be – even if they're on
opposite sides of the universe. It's a link that Einstein went to his grave denying, yet its existence is now beyond dispute. This
quantum equivalent of telepathy is demonstrated daily in laboratories around
the world. It holds the key to future hyperspeed computing and underpins
the science of teleportation.
Its name is entanglement.

The discovery
of entanglement

The concept, though not the name, of entanglement was first put under the scientific
spotlight on May 15, 1935, when a paper by Einstein and two younger associates,
Nathan Rosen and Boris Podolsky, appeared in the journal Physical Review.1 Its title – "Can a Quantum-Mechanical Description of Physical Reality
Be Considered Complete?" – leaves no doubt that the paper was a challenge
to Niels Bohr and his vision of the subatomic
world. On June 7, Erwin Schrödinger,
himself no lover of quantum weirdness, wrote to Einstein, congratulating
him on the paper and using in his letter the word entanglement – or, rather, its German equivalent Verschränkung –
for the first time. This new term soon found its way into print in an article
– sent to the Cambridge Philosophical Society on August 14 – that was
published a couple of months later.2 In it he wrote:

When two systems ... enter into temporary physical interaction
... and when after a time of mutual influence the systems separate again,
then they can no longer be described in the same way as before, viz. by
endowing each of them with a representative of its own. I would not call
that one but rather the characteristic trait of quantum mechanics, the
one that enforces its entire departure from classical lines of thought.
By the interaction the two representatives [the quantum states] have become
entangled.

The characteristic trait of quantum mechanics ... the one that enforces
its entire departure from classical lines of thought – here was
an early sign of the importance attached to this remarkable effect. Entanglement
lay at the very heart of quantum reality – its most startling and
defining feature. And Einstein would have none of it.

For the best part of a decade, the man who revealed the particle nature
of light (see Einstein
and the photoelectric effect) had been trying to undermine Bohr's interpretation
of quantum theory. Einstein couldn't stomach the notion that particles didn't
have properties, such as momentum and position, with real, determinable
(if only we knew how), preexisting values. Yet that notion was spelled out
in a relationship discovered in 1927 by Werner Heisenberg.
Known as the uncertainty principle,
it stems from the rule that the result of multiplying together two matrices
representing certain pairs of quantum properties, such as position and momentum,
depends on the order of multiplication. The same oddball math that says X times Y doesn't have to equal Y times X implies that we can never know simultaneously the exact values of both position
and momentum. Heisenberg proved that the product of the uncertainties in position
and momentum can never be smaller than a particular number that involves Planck's
constant. In one sense, this relationship quantifies wave-particle
duality. Momentum is a property that waves can have (related to their
wavelength); position is a particlelike property because it refers to a
localization in space. Heisenberg's formula reveals the extent to which
one of these aspects fades out as the other becomes the focus of attention.
In a different but related sense, the uncertainty principle tells how much
the complementary descriptions of a quantum object overlap. Position and
momentum are complementary properties because to pin down one is to lose
track of the other; they coexist but are mutually exclusive, like the opposite
sides of the same object. Heisenberg's formula quantifies the extent to
which knowledge of one limits knowledge of the other.
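The order-dependence of matrix multiplication mentioned above is easy to check directly. A minimal sketch in Python (the 2×2 Pauli matrices here stand in for the operators representing quantum properties, which for position and momentum are actually infinite-dimensional; the example is illustrative, not Heisenberg's own calculation):

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0, 1], [1, 0]]   # Pauli sigma_x
Z = [[1, 0], [0, -1]]  # Pauli sigma_z

# X times Z is not the same as Z times X:
print(matmul(X, Z))  # [[0, -1], [1, 0]]
print(matmul(Z, X))  # [[0, 1], [-1, 0]]
```

It is exactly this failure of XY to equal YX, applied to the matrices for position and momentum, that generates the uncertainty relation.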

Einstein didn't buy this. He believed that a particle does have
a definite position and momentum all the time, whether we're watching it
or not, despite what quantum theory says. From his point of view, the Heisenberg
uncertainty principle isn't a basic rule of nature; it's just an artifact
of our inadequate understanding of the subatomic realm. In the same way,
he thought, wave-particle duality isn't grounded in reality but instead arises
from a statistical description of how large numbers of particles behave.
Given a better theory, there'd be no wave-particle duality or uncertainty
principle to worry about. The problem was, as Einstein saw it, that quantum
mechanics wasn't telling the whole story: it was incomplete.

Einstein versus Bohr

Intent on exposing this fact to the world and championing a return to a
more classical, pragmatic view of nature, Einstein devised several thought
experiments between the late 1920s and the mid-1930s. Targeted specifically
at the idea of complementarity,
these experiments were designed to point out ways to simultaneously measure
a particle's position and momentum, or its precise energy at a
precise time (another complementary pair), thus pulling the rug from under
the uncertainty principle and wave-particle duality.

The first of these gedanken experiments was talked about informally in 1927, in hallway
discussions at the fifth Solvay Conference in Brussels. Einstein put to
Bohr a modified version of the famous double-slit experiment in which quantum
objects – electrons, say –
emerging from the twin slits are observed by shining light onto them. Photons bouncing off a particle would have their momenta changed by an amount that
would reveal the particle's trajectory and, therefore, which slit it had
passed through. The particle would then go on to strike the detector screen
and contribute to the buildup of an interference
pattern. Wave-particle duality would be circumvented, Einstein argued,
because we would have simultaneously measured particlelike behavior (the
trajectory the particle took) and wavelike behavior (the interference pattern
on the screen).

But Bohr spotted something about this thought experiment that Einstein had
overlooked. To be able to tell which slit a particle went through, you'd
have to fix its position with an accuracy better than the distance between
the slits. Bohr then applied Heisenberg's uncertainty principle, which demands
that if you pin down the particle's position to such and such a precision,
you have to give up a corresponding amount of knowledge of its momentum.
Bohr said that this happens because the photons deliver random kicks as
they bounce off the particle. The result of these kicks is to inject uncertainty
into the whereabouts of the particle when it strikes the screen. And here's
the rub: the uncertainty turns out to be roughly as large as the spacing
between the interference bands. The pattern is smeared out and lost. And
with it disappears Einstein's hoped-for contradiction.

On several other occasions, Einstein confronted Bohr with thought experiments
cunningly contrived to blow duality out of the water. Each time, Bohr used
the uncertainty principle to exploit a loophole and win the day against
his arch rival (and, incidentally, good friend). In the battle for the future
of quantum physics, Bohr defeated Einstein and, in the process, showed just
how important Heisenberg's little formula was in the quantum scheme of things.

Such is the version of this clash of 20th-century titans that's been dutifully
repeated in textbooks and spoon-fed to physics students for many years.
But evidence has recently come to light that Bohr had unwittingly hoodwinked
Einstein with arguments that were fundamentally unsound. This disclosure
doesn't throw quantum mechanics back into the melting pot, but it does mean
that the record needs setting straight, and that the effect that really
invalidates Einstein's position should be given proper credit.

The revisionist picture of the Bohr-Einstein debates stems partly from a
suggestion made in 1991 by Marlan Scully, Berthold-Georg Englert, and Herbert
Walther of the Max Planck Institute for Quantum Optics in Garching, Germany.3 These researchers proposed using atoms as quantum objects in a
version of Young's two-slit experiment. Atoms have an important advantage
over simpler particles, such as photons or electrons: they have a variety
of internal states, including a ground
state (lowest energy state) and a series of excited
states. These different states, the German team reckoned, could be used
to track the atom's path.

Seven years later, Gerhard Rempe and his colleagues at the University of
Konstanz, also in Germany, brought the experiment to life – and made
a surprising discovery.4 Their technique involved cooling atoms
of rubidium down to within a hair's breadth
of absolute zero. (Cold atoms have long wavelengths, which make their interference
patterns easier to observe.) Then they split a beam of the atoms using thin
barriers of pure laser light. When the two beams were combined, they created
the familiar double-slit interference pattern. Next, Rempe and his colleagues
looked to see which path the atoms followed. The atoms going down one path
were left alone, but those on the other path were nudged into a higher energy
state by a pulse of microwaves (short wavelength radio waves). Following
this treatment, the atoms, in their internal states, carried a record of
which way they'd gone.

The crucial factor in this version of the double-slit experiment is that
the microwaves have hardly any momentum of their own, so they can cause
virtually no change to the atom's momentum – nowhere near enough to
smear out the interference pattern. Heisenberg's uncertainty principle can't
possibly play a significant hand in the outcome. Yet with the microwaves
turned on so that we can tell which way the atoms went, the interference
pattern suddenly vanishes. Bohr had argued that when such a pattern
is lost, it happens because a measuring device gives random kicks to the
particles. But there aren't any random kicks to speak of in the rubidium
atom experiment; at most, the microwaves deliver momentum taps ten thousand
times too small to destroy the interference bands. Yet, destroyed the bands
are. It isn't that the uncertainty principle is proved wrong, but there's
no way it can account for the results.

The only reason momentum kicks seemed to explain the classic double slit
experiment discussed by Bohr and Einstein turns out to be a fortunate conspiracy
of numbers. There's a mechanism at work far deeper than random jolts and
uncertainty. What destroys the interference pattern is the very act of trying
to get information about which path is followed. The effect at work is
entanglement.

Nonlocality

Ordinarily, we think of separate objects as being independent of one another.
They live on their own terms, and anything tying them together has to be
forged by some tangible means. Now consider two particles, A and B, which have
come into contact, interacted for a brief while, and then flown apart. Each
particle is described by (among other properties) its own position and momentum.
The uncertainty principle insists that one of these can't be measured precisely
without destroying knowledge of the other. However, because A and B have interacted and, in the eyes of quantum physics, have effectively
merged to become one interconnected system, it turns out that the momentum
of both particles taken together and the distance between them can be measured
as precisely as we like. Suppose we measure the momentum of A,
which we'll assume has remained behind in the lab where we can keep an eye
on it. We can then immediately deduce the momentum of B without
having to do any measurement on it at all. Alternatively, if we choose to
observe the position of A, we would know, again without having
to measure it, the position of B. This is true whether B is in the same room or a great distance away.

From Heisenberg's relationship, we know that measuring the position of,
say, A will lead to an uncertainty in its momentum. Einstein, Podolsky,
and Rosen pointed out, however, that by measuring the position of A,
we gain precise knowledge of the position of B. Therefore, if we
take quantum mechanics at face value, gaining precise knowledge of B's
position introduces an uncertainty into B's momentum.
In other words, the state of B depends on what we choose to do with A in our lab. And, again, this is true whatever the
separation distance may be. EPR considered such a result patently absurd.
How could B possibly know whether it should have a precisely defined
position or momentum? The fact that quantum mechanics led to such an unreasonable
conclusion, they argued, showed that it was flawed – or, at best,
that it was only a halfway house toward some more complete theory.

At the core of EPR's challenge is the notion of locality: the commonsense
idea that things can only be affected directly if they're nearby. To change
something that's far away, there's a simple choice: you can either go there
yourself or send some kind of signal. Either way, information or energy
has to pass through the intervening space to the remote site in order to
affect it. The fastest this can happen, according to Einstein's special
theory of relativity, is the speed of
light.

The trouble with entanglement is that it seems to ride roughshod over this
important principle. It's fundamentally nonlocal. A measurement
of particle A affects its entangled partner B instantaneously,
whatever the separation distance, and without signal or influence passing
between the two locations. This bizarre quantum connection isn't mediated
by fields of force, like gravity or electromagnetism. It doesn't weaken
as the particles move apart, because it doesn't actually stretch across
space. As far as entanglement is concerned, it's as if the particles were
right next to one another: the effect is as potent at a million light-years
as it is at a millimeter. And because the link operates outside space, it
also operates outside time. What happens at A is immediately known
at B. No wonder Einstein used words such as "spook" and "telepathic"
to describe – and deride – it. No wonder that as the author
of relativity he argued that the tie that binds entangled particles is a
physical absurdity. Any claim that an effect could work at faster-than-light
speeds, that it could somehow serve to connect otherwise causally isolated
objects, was to Einstein an intellectual outrage.

A close look at the EPR scenario reveals that it doesn't actually violate
causality, because no information passes between the entangled particles.
The information is already, as it were, built into the combined system,
and no measurement can add to it. But entanglement certainly does throw
locality out the window, and that development is powerfully counterintuitive.
It was far too much for Einstein and his colleagues to accept, and they
were firmly convinced that quantum mechanics, as it stood, couldn't be the
final word. It was, they suggested, a mere approximation of some as yet
undiscovered description of nature. This description would involve variables
that contain missing information about a system that quantum mechanics doesn't
reveal, and that tell particles how to behave before a measurement is carried
out. A theory along these lines – a theory of so-called local hidden
variables – would restore determinism and mark a return to the principle
of locality.

The shock waves from the EPR paper quickly reached the shores of Europe.
In Copenhagen, Bohr was once again cast into a fever of excitement and concern
as he always was by Einstein's attacks on his beloved quantum worldview.
He suspended all other work in order to prepare a counterstrike. Three months
later, Bohr's rebuttal was published in the same American journal that had
run the EPR paper. Basically, it argued that the nonlocality objection to
the standard interpretation of quantum theory didn't represent a practical
challenge. It wasn't yet possible to test it, and so physicists should just
get on with using the mathematics of the subject, which worked so well,
and not fret about the more obscure implications.

Bohm

Most scientists, whose interest was simply in using quantum tools to probe
the structure of atoms and molecules, were happy to follow Bohr's advice.
But a few theorists continued to dig away at the philosophical roots. In
1952, David Bohm, an American at Birkbeck College,
London, who'd been hounded out of his homeland during the McCarthy inquisitions,
came up with a variation on the EPR experiment that paved the way for further
progress in the matter.5 Instead of using two properties, position
and momentum, as in the original version, Bohm focused on just one: the
property known as spin.

The spin of subatomic particles, such as electrons, is analogous to spin
in the everyday world but with a few important differences. Crudely speaking,
an electron can be thought of as spinning around the way a basketball does
on top of an athlete's finger. But whereas spinning basketballs eventually
slow down, all electrons in the universe, whatever their circumstances,
spin all the time and at exactly the same rate. What's more, they can only
spin in one of two directions, clockwise or counterclockwise, referred to
as spin-up and spin-down.

Bohm's revised EPR thought experiment starts with the creation, in a single
event, of two particles with opposite spin. This means that if we measure
particle A and find that it's spin-down, then, from that point on, B must be spin-up. The only other possible result is that A is measured to be spin-up, which forces B to be spin-down. Taking
this second case as an example, we're not to infer, says quantum mechanics,
that A was spin-up before we measured it and therefore
that B was spin-down, in a manner similar to a coin being heads
or tails. Quantum interactions always produce superpositions. The state
of each particle in Bohm's revised EPR scenario is a mixed superposition
that we can write as: psi = (A spin-up and B spin-down)
+ (A spin-down and B spin-up). A measurement to determine A's spin causes this wave function to collapse and a random choice
to be made of spin-up or spin-down. At that very same moment, B also ceases to be in a superposition of states and assumes the opposite
spin.
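The collapse rule just described is simple enough to mimic in a few lines of Python. This is only a toy sketch of measurement along a single axis (it captures none of the multi-axis quantum behavior that Bell's analysis turns on), but it shows the two signature features: A's result alone looks completely random, yet the pair is always perfectly anticorrelated.

```python
import random

def measure_pair():
    """Measuring A collapses the superposition at random;
    B's result is then fixed instantly to the opposite spin."""
    a = random.choice(["up", "down"])
    b = "down" if a == "up" else "up"
    return a, b

results = [measure_pair() for _ in range(10_000)]
ups = sum(1 for a, _ in results if a == "up")

print(ups / len(results))               # close to 0.5: A alone looks random
print(all(a != b for a, b in results))  # True: pairs always anticorrelated
```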

This is the standard quantum mechanical view of the situation and it leads
to the same kind of weird conclusion that troubled Einstein and friends.
No matter how widely separated the spinning pair of particles may be, measuring
the spin of one causes the wave function of the combined system to collapse
instantaneously so that the unmeasured twin assumes a definite (opposite)
spin state, too. The mixed superposition of states, which is the hallmark
of entanglement, ensures nonlocality. Set against this is the Einsteinian
view that "spooky action at a distance" stems not from limitations about
what the universe is able to tell us but instead from limitations in our
current knowledge of science. At a deeper, more basic level than that of
wave functions and complementary properties lie hidden variables that will
restore determinism and locality to physics.

Bell's inequality

Bohm's new version of the EPR paradox didn't in itself offer a way to test
these radically different worldviews, but it set the scene for another conceptual
breakthrough that did eventually lead to a practical experiment.
This breakthrough came in 1964 from a talented Irish physicist, John Bell,
who worked at CERN, the European center for high-energy particle research
in Switzerland. Colleagues considered Bell to be the only physicist of his
generation to rank with the pioneers of quantum mechanics, such as Niels
Bohr and Max Born, in the depth of his philosophical understanding of the
implications of the theory. What Bell found is that it makes an experimentally
observable difference whether the particles described in the EPR experiment
have definite properties before measurement, or whether they're entangled
in a ghostlike hybrid reality that transcends normal ideas of space and
time.

Bell's test hinges on the fact that a particle's spin can be measured independently
in three directions, conventionally called x, y, and z,
at right angles to one another. If you measure the spin of particle A along the x direction, for example, this measurement also affects
the spin of entangled particle B in the x direction, but
not in the y and z directions. In the same way, you can
measure the spin of B in, say, the y direction without
affecting A's spin along x or z. Because of these
independent readings, it's possible to build up a picture of the complementary
spin states of both particles. Because the effect is statistical, many measurements
are needed in order to reach a definite conclusion. What Bell showed is
that measurements of the spin states in the x, y, and z directions on large numbers of real particles could in principle
distinguish between the local hidden variable hypothesis championed by the
Einstein-Bohm camp and the standard nonlocal interpretation of quantum mechanics.

If Einstein was right and particles really did always have a predetermined
spin, then, said Bell, a Bohm-type EPR experiment ought to produce a certain
result. If the experiment were carried out on many pairs of particles, the
number of pairs of particles in which both are measured to be spin-up, in
both the x and y directions ("xy up"), can never be greater
than the combined total of measurements showing xz up and yz up. This statement became known as Bell's inequality. Standard quantum theory,
on the other hand, in which entanglement and nonlocality are facts of life,
would be upheld if the inequality worked the other way around. The decisive
factor is the degree of correlation between the particles, which is significantly
higher if quantum mechanics rules.
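The violation can be seen with nothing more than the quantum prediction that, for a pair created with opposite spins, the probability of finding both particles spin-up along two axes separated by an angle theta is (1/2)sin²(theta/2). Plugging that into the inequality above (the three coplanar measurement directions below are illustrative choices of mine, not from the text):

```python
import math

def p_both_up(theta_deg):
    """Quantum prediction for an opposite-spin pair: probability that both
    particles give 'up' along axes separated by theta degrees."""
    return 0.5 * math.sin(math.radians(theta_deg) / 2) ** 2

# Three coplanar measurement directions (illustrative angles).
x, z, y = 0.0, 45.0, 90.0

lhs = p_both_up(y - x)                     # "xy up"
rhs = p_both_up(z - x) + p_both_up(y - z)  # "xz up" + "yz up"

print(lhs, rhs)  # 0.25 vs ~0.146: lhs exceeds rhs, violating the inequality
```

Any local hidden variable theory requires lhs ≤ rhs; the quantum correlations break that bound.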

This was big news. Bell's inequality, although the subject of a modest little
paper and hardly a popular rival to the first Beatles tour of America going
on at the same time, provided a way to tell by actual experiment which of
the two major, opposing visions of subatomic reality was closer to the truth.6 Bell made no bones about what his analysis revealed: Einstein's ideas about
locality and determinism were incompatible with the predictions of orthodox
quantum mechanics. Bell's paper offered a clear choice between
the EPR/Bohmian local hidden variables viewpoint and Bohrian, nonlocal
weirdness. The way that Bell's inequality was set up, its violation would
mean that the universe was inherently nonlocal, allowing particles to form
and maintain mysterious connections with each other no matter how far apart
they were. All that was needed now was for someone to come along and set
up an experiment to see for whom Bell's inequality tolled.

But that was easier said than done. Creating, maintaining, and measuring
individual entangled particles is a delicate craft, and any imperfection
in the laboratory setup masks the subtle statistical correlations being
sought. Several attempts were made in the 1970s to measure Bell's inequality
but none was completely successful. Then a young French graduate student,
Alain Aspect, at the Institute of Optics in Orsay, took up the challenge
for his doctoral research.

Aspect's experiment

Aspect was set upon his way by his supervising professor, Bernard d'Espagnat,
whose career centered around gathering experimental evidence to uncover
the deep nature of reality. "I had the luck," said d'Espagnat, "to discover
in my university a young physicist, Alain Aspect, who was looking for a
thesis subject and I suggested that testing the Bell inequalities might
be a good idea. I also suggested that he go and talk to Bell, who convinced
him it was a good idea and the outcome of this was that quantum mechanics
won."

Aspect's experiment used particles of light – photons – rather
than material particles such as electrons or protons. Then, as now, photons
are by far the easiest quantum objects from which to produce entangled pairs.
There is, however, a minor complication concerning the property that is
actually recorded in photon-measuring experiments such as Aspect's or
those of other researchers we'll be talking about later. Both Bell and Bohm
presented their theoretical arguments in terms of the particle spin. Photons
do have a spin (they're technically known as spin-1 particles), but because
they travel at the speed of light, their spin axes always lie exactly along
their direction of motion, like that of a spinning bullet shot from a rifle
barrel. You can imagine photons to be right-handed or left-handed depending
on which way they rotate as you look along their path of approach. What's
actually measured in the lab isn't spin, however, but the very closely related
property of polarization.

Effectively, polarization is the wavelike property of light that corresponds
to the particlelike property of spin. Think of polarization in terms of
Maxwell's equations, which tell us that the electric and magnetic fields
of a light wave oscillate at right angles to each other and also to the
direction in which the light is traveling. The polarization of a photon
is the direction of the oscillation of its electric field: up and
down, side to side, or any orientation in between. Ordinarily, light consists
of photons polarized every which way. But if light is passed through a polarizing
filter, like that used in Polaroid sunglasses, only photons with a particular
polarization – the one that matches the slant of the filter –
can get through. (The same happens if two people make waves by flicking
one end of a rope held between them. If they do this through a gap between
iron railings only waves that vibrate in the direction of the railings can
slip through to the other side.)
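For a single photon, the chance of getting through a polarizing filter is the square of the cosine of the angle between the photon's polarization and the filter's axis (Malus's law, applied photon by photon). A quick Monte Carlo sketch of this rule (my own illustration, not part of Aspect's analysis):

```python
import math
import random

def passes_filter(photon_angle_deg, filter_angle_deg):
    """A photon polarized at one angle passes a filter set at another
    with probability cos^2 of the angle between them."""
    delta = math.radians(photon_angle_deg - filter_angle_deg)
    return random.random() < math.cos(delta) ** 2

# Light polarized at 0 degrees hitting a filter tilted to 60 degrees:
# cos^2(60 deg) = 0.25, so about a quarter of the photons get through.
n = 100_000
passed = sum(passes_filter(0, 60) for _ in range(n))
print(passed / n)  # roughly 0.25
```

It is correlations in exactly these pass/fail outcomes, measured on entangled photon pairs, that Aspect's experiment compared against Bell's inequality.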

Aspect designed his experiment to examine correlations in the polarization
of photons produced by calcium atoms – a technique that had already
been used by other researchers. He shone laser light onto the calcium atoms,
which caused the electrons to jump from the ground state to a higher energy
level. As the electrons tumbled back down to the ground state, they cascaded
through two different energy states, like a two-step waterfall, emitting
a pair of entangled photons – one photon per step – in the process.

The photons passed through a slit, known as a collimator,
designed to reduce and guide the light beam. Then they fell into an automatic
switching device that randomly sent them in one of two directions before
arriving, in each case, at a polarization analyzer – a device that
recorded their polarization state.

An important consideration in Aspect's setup was the possibility, however
small, that information might leak from one photon to its partner. It was
important to rule out a scenario in which a photon arrived at a polarization
analyzer, found that polarization was being measured along, say, the vertical
direction, and then somehow communicated this information to the other photon.
(How this might happen doesn't matter: the important thing was to exclude
it as an option.) By carefully setting up the distances through which the
photons traveled and randomly assigning the direction in which the polarization
would be measured while the photons were in flight, Aspect ensured that
under no circumstances could such a communicating signal be sent between
photons. The switches operated within 10 nanoseconds, while the photons
took 20 nanoseconds to travel the 6.6 meters to the analyzers. Any signal
crossing from one analyzer to the other at the speed of light would have
taken 40 nanoseconds to complete the journey – much too long to have
any effect on the measurement.

In a series of these experiments in the 1980s, Aspect's team showed what
most quantum theorists expected all along: Bell's inequality was violated.7 The result agreed completely with the predictions of standard quantum mechanics
and discredited any theories based on local hidden variables. More recent
work has backed up this conclusion. What's more, these newer experiments
have included additional refinements designed to plug any remaining loopholes
in the test. For example, special crystals have enabled experimenters to
produce entangled photons that are indistinguishable, because each member
of the pair has the same wavelength. Such improvements have allowed more
accurate measurements of the correlation between the photons. In all cases,
however, the outcomes have upheld Aspect's original discovery. Entanglement
and nonlocality are indisputable facts of the world in which we live.

Practical applications

The phenomenon of entanglement has already begun to be exploited for practical
purposes. In the late 1980s, theoreticians started to see entanglement not
just as a puzzle and a way to penetrate more deeply into the mysteries of
the quantum world, but also as a resource. Entanglement could be exploited
to yield new forms of communication and computing. It was a vital missing
link between quantum mechanics and another field of explosive growth: information
theory. The proof of nonlocality and the quickly evolving ability to work
with entangled particles in the laboratory were important factors in the
birth of a new science. Out of the union of quantum mechanics and information
theory sprang quantum information science – the fast-developing field
whose most important fields of development are quantum cryptography, quantum
teleportation, and quantum computers.