Open Questions in Physics

While for the most part a FAQ covers the answers to frequently asked questions whose
answers are known, in physics there are also plenty of simple and interesting questions
whose answers are not known. Here we list some of these. We could
have called this section Frequently Unanswered Questions, but the resulting
acronym would have been rather rude.

Before you set about answering these questions on your own, it's worth noting that while
nobody knows what the answers are, a great deal of work has already been done on most
of these subjects. So, do plenty of research and ask around before you try to cook
up a theory that'll answer one of these and win you the Nobel prize! You'll probably
need to really know physics inside and out before you make any progress on these.

The following partial list of open questions is divided into five groups:

However, given the implications of particle physics and nonlinear dynamics for
cosmology, and other connections between the groups, the division is somewhat artificial,
and the classification here is correspondingly arbitrary.

There are many other interesting and fundamental questions in other fields, and many more
in these fields besides those listed here. Their omission is not a judgement about
importance, but merely a decision about the scope of this article.

Since this article was last updated in 1997, a lot of progress has been made in answering
some big open questions in physics. We include references on some of these
questions. There is also a lot to read about the other open
questions—especially the last one, which we call The Big Question. But we
haven't had the energy to list it all.

What causes sonoluminescence? Sonoluminescence is the generation of small light
bursts in liquids caused by sound. Bubbles form in the liquid at low pressure points
of the sound wave, then collapse again as a high pressure wave passes. At the point
of collapse a small flash of light is produced. The exact cause has been the subject
of intense speculation and research.

What causes high temperature superconductivity? Is it possible to make a material
that is a superconductor at room temperature? Superconductivity at very low
temperatures has been understood since 1957 in terms of the BCS theory, but high
temperature superconductors discovered in 1986 are still unexplained.

To learn more about superconductivity, see this web page and its many links:

How can turbulence be understood and its effects calculated? One of the oldest
problems of them all. A vast amount is known about turbulence, and we can simulate
it on a computer, but much about it remains mysterious.

The Navier-Stokes equations are the basic equations describing fluid flow.
Do these equations have solutions that last for all time, given arbitrary sufficiently
nice initial data? Or do singularities develop in the fluid flow, which prevent the
solution from continuing?
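For reference, here is the standard incompressible form of these equations, with u the velocity field, p the pressure, ρ the constant density, and ν the kinematic viscosity:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  \;=\; -\,\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0
```

The open problem asks whether smooth solutions of these equations in three dimensions, given sufficiently nice initial data, exist for all time.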

This is more of a question of mathematical physics than physics per se—but it's related
to the previous question, since (one might argue) how can we deeply understand turbulence
if we don't even know that the equations for fluid motion have solutions? At the
turn of the millennium, the Clay Mathematics Institute offered a $1,000,000 prize for
solving this problem. For details, see:

How should we think about quantum mechanics? For example, what is meant by a
"measurement" in quantum mechanics? Does "wavefunction collapse" actually happen as
a physical process? If so, how, and under what conditions? If not, what
happens instead?

Many physicists think these issues are settled, at least for most practical
purposes. However, some still think the last word has not been heard. Asking
about this topic in a roomful of physicists is the best way to start an argument, unless
they all say "Oh no, not that again!". There are many books to read on this
subject, but most of them disagree.

Can we build a working quantum computer big enough to do things ordinary computers
can't easily do?

This question is to some extent impacted by the previous one, but it also has a strong
engineering aspect to it. Some physicists think quantum computers are impossible in
principle; more think they are possible in principle, but are still unsure if they will
ever be practical.

What happened at or before the Big Bang? Was there really an initial
singularity? Does the history of the Universe go back in time forever, or only a
finite amount? Of course, these questions might not make sense, but they might.

Are there really three dimensions of space and one of time? If so, why? Or is
spacetime higher-dimensional, or perhaps not really a manifold at all when examined on a
short enough distance scale? If so, why does it appear to have three
dimensions of space and one of time? Or are these unanswerable questions?

Is the Universe infinite in spatial extent? More generally: what is the topology of
space?

We still don't know, but in 2003 some important work was done on this issue:

Briefly, the Wilkinson Microwave Anisotropy Probe (WMAP) was used to rule out nontrivial
topology within a distance of 78,000 million light years—at least for a large class of
models. For the precise details, you'll have to read the article!

Why is there an arrow of time; that is, why is the future so different from the
past?

Here are two pieces of required reading for anyone interested in this tough question:

Huw Price, Time's Arrow and Archimedes' Point: New Directions for a Physics of
Time, Oxford University Press, Oxford, 1996.

H. D. Zeh, The Physical Basis of the Direction of Time, second edition, Springer
Verlag, Berlin, 1992.

Will the future of the Universe go on forever or not? Will there be a "big crunch"
at some future time, will the Universe keep on expanding forever, or what?

There's been some progress on this one recently. Starting in the late 1990s, a bunch
of evidence has accumulated suggesting that the universe is not slowing down enough to
recollapse in a so-called "big crunch". In fact, it seems that some form of "dark
energy" is making the expansion speed up! We know very little about dark
energy; it's really just a name for any invisible stuff that has enough negative pressure
compared to its energy density that it makes the expansion of the universe tend to
accelerate, rather than slow down. (In general relativity, energy density tends to
make the expansion slow down, but negative pressure has the opposite effect.)
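In equation form, the acceleration of the expansion is governed by a standard result from the Friedmann equations, stated here for reference:

```latex
\frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
```

Here a is the scale factor of the universe, ρ the energy density (expressed as a mass density), and p the pressure. The expansion accelerates precisely when p < -ρc^2/3, which is why sufficiently negative pressure does the trick.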

Einstein introduced dark energy to physics under the name of "the cosmological constant"
when he was trying to explain how a static universe could fail to collapse. This
constant simply said what the density of dark energy was supposed to be, without providing
any explanation for its origin. When Hubble observed the redshift of light from
distant galaxies, and people concluded the universe was expanding, the idea of a
cosmological constant fell out of fashion and Einstein called it his "greatest
blunder". But now that the expansion of the universe seems to be accelerating, a
cosmological constant or some other form of dark energy seems plausible.

For an examination of what an ever-accelerating expansion might mean for our universe,
see:

But, we still can't be sure the universe will expand forever, because the possibility
remains that at some point the dark energy will go away, switch sign, or get bigger!
Here's a respectable paper suggesting that the dark energy will change sign and make the
universe recollapse in a big crunch:

But, before you launch into wild speculations, it's worth emphasizing that the late 1990s
and early 2000s have seen a real revolution in experimental cosmology, which answered many
open questions (for example: "how long ago was the Big Bang?") in shockingly precise ways
(about 13,700 million years). For a good introduction to this material, try:

Our evidence concerning the expansion of the universe, dark energy, and dark matter now
comes from a wide variety of sources, and what makes us confident we're on the right track
is how nicely all this data agrees. People are getting this data from various
sources including:

As mentioned above, evidence has been coming in that suggests the universe is full of some
sort of "dark energy" with negative pressure. For example, an analysis of data from
the Wilkinson Microwave Anisotropy Probe in 2003 suggested that 73% of the energy density
of the universe is in this form! But even if this is right and dark energy exists,
we're still in the dark about what it is.

The simplest model is a cosmological constant, meaning that so-called "empty" space
actually has a negative pressure and positive energy density, with the pressure exactly
equal to minus the energy density in units where the speed of light is 1. However,
nobody has had much luck explaining why empty space should be like this, especially with
an energy density as small as what we seem to be observing: about 6 x 10^-30
grams per cubic centimeter if we use Einstein's E = mc^2
to convert it into a mass density. Other widely studied possibilities for dark
energy include various forms of "quintessence". But, this term means little more
than "some mysterious field with negative pressure", and there's little understanding of
why such a field should exist.
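As a sanity check on that figure, one can recompute it from the critical density; the sketch below assumes a Hubble constant of about 71 km/s/Mpc and the 73% dark energy fraction quoted elsewhere in this article, both illustrative round numbers:

```python
import math

# Rough check of the quoted dark energy density (illustrative numbers:
# Hubble constant ~71 km/s/Mpc, dark energy fraction ~73%).
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22             # one megaparsec in meters
H = 71e3 / Mpc             # Hubble constant in s^-1

rho_crit = 3 * H**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_de = 0.73 * rho_crit                  # dark energy's share of it

# Convert to grams per cubic centimeter: 1 kg/m^3 = 1e-3 g/cm^3
rho_de_cgs = rho_de * 1e-3
print(rho_de_cgs)          # roughly 7e-30 g/cm^3, close to the quoted 6e-30
```

The answer lands within a factor of order one of the quoted 6 x 10^-30 g/cm^3; the exact value depends on which measured Hubble constant you plug in.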

The third is the most detailed, and it has lots of good references for further study.

Why does it seem like the gravitational mass of galaxies exceeds the mass of all
the stuff we can see, even taking into account our best bets about invisible stuff like
brown dwarfs, "Jupiters", and so on? Is there some missing "dark matter"? If
so, is it ordinary matter, neutrinos, or something more exotic? If not, is there
some problem with our understanding of gravity, or what?

Since the late 1990s, a consensus has emerged that some sort of "cold dark matter" is
needed to explain all sorts of things we see. For example, in 2003 an analysis of
data from the Wilkinson Microwave Anisotropy Probe suggested that the energy density of
the universe consists of about 23% cold dark matter, as compared to only 4% ordinary
matter. (The rest is dark energy.)

Unfortunately nobody knows what this cold dark matter is! It probably can't be
ordinary matter we've neglected, or neutrinos, since these wouldn't have been sufficiently
"cold" in the early universe to collapse into the lumps needed for galaxy formation.
There are many theories about what it might be. There's also still a possibility that we
are very confused about something, like our theory of gravity.

The last of these three is the most detailed, and it has lots of references for further
study.

The Horizon Problem: why is the Universe almost, but not quite, homogeneous on the
very largest distance scales? Is this the result of an "inflationary epoch"—a
period of rapid expansion in the very early history of the universe, which could flatten out
inhomogeneities? If so, what caused this inflation?

In 2003 the case for inflation was bolstered by the Wilkinson Microwave Anisotropy Probe,
which made detailed measurements of "anisotropies" (slight deviations from perfect
evenness) in the cosmic microwave background radiation. The resulting "cosmic
microwave background power spectrum" shows peaks and troughs whose precise features should
be sensitive to many details of the very early history of the Universe. Models that
include inflation seem to fit this data very well, while those that don't, don't.

However, the mechanism behind inflation remains somewhat mysterious. Inflation can be
nicely explained using quantum field theory by positing the existence of a special
particle called the "inflaton", which gave rise to extremely high negative pressure before
it decayed into other particles. This may sound wacky, but it's really not.
The only problem is that nobody has any idea how this particle fits into known
physics. For example, it's not part of the Standard Model.

Gamma ray bursters (GRBs) appear as bursts of gamma rays coming from points randomly
scattered in the sky. These bursts are very brief, lasting from a few
milliseconds to a few hundred seconds. For a long time there were hundreds of
theories about what caused them, but very little evidence for any of these theories, since
nothing was ever seen at the location where one of these bursts occurred. Their
random distribution eventually made a convincing case that they occurred not within our
solar system or within our galaxy, but much farther away. Given this, it was clear
that they must be extraordinarily powerful.

Starting in the late 1990s, astronomers made a concerted effort to catch gamma ray
bursters in the act, focusing powerful telescopes to observe them in the visible and
ultraviolet spectrum moments after a burst was detected. These efforts paid off in
1999 when one was seen to emit visible light for as long as a day after the burst
occurred. A redshift measurement of z = 1.6 indicated that the gamma ray burster was
about 10,000 million light years away. If the burst of gamma rays was
omnidirectional, this would mean that its power was about 10^16 times that of
our sun—for a very short time. For details on this discovery, see:

A more detailed observation of a burst on March 29, 2003 convinced many astrophysicists
that at least some gamma-ray bursters are so-called "hypernovae". A
hypernova is an exceptionally large supernova formed by the nearly instantaneous collapse
of the core of a very large star, at least 10 times the mass of the sun, which has already
blown off most of its hydrogen. Such stars are called Wolf-Rayet stars. The
collapse of such a star need not be spherically symmetric, so the gamma ray burst could be
directional, reducing the total power needed to explain the brightness we see here (if the
burst happened to point towards us). For more, try:

Here is the complete story about GRB 030329, as the astronomers now read it.

Thousands of years prior to this explosion, a very massive star, running out of
hydrogen fuel, let loose much of its outer envelope, transforming itself into a bluish
Wolf-Rayet star. The remains of the star contained about 10 solar masses worth of helium,
oxygen and heavier elements.

In the years before the explosion, the Wolf-Rayet star rapidly depleted its remaining
fuel. At some moment, this suddenly triggered the hypernova/gamma-ray burst event. The
core collapsed, without the outer part of the star knowing. A black hole formed inside,
surrounded by a disk of accreting matter. Within a few seconds, a jet of matter was
launched away from that black hole.

The jet passed through the outer shell of the star and, in conjunction with vigorous winds
of newly formed radioactive nickel-56 blowing off the disk inside, shattered the
star. This shattering, the hypernova, shines brightly because of the presence of
nickel. Meanwhile, the jet plowed into material in the vicinity of the star, and created
the gamma-ray burst which was recorded some 2,650 million years later by the astronomers
on Earth. The detailed mechanism for the production of gamma rays is still a matter of
debate but it is either linked to interactions between the jet and matter previously
ejected from the star, or to internal collisions inside the jet itself.

This scenario represents the "collapsar" model, introduced in 1993 by American astronomer
Stan Woosley (University of California, Santa Cruz), a member of the current team, and it
best explains the observations of GRB 030329.

"This does not mean that the gamma-ray burst mystery is now solved", says Woosley. "We
are confident now that long bursts involve a core collapse and a hypernova, likely
creating a black hole. We have convinced most skeptics. We cannot reach any conclusion
yet, however, on what causes the short gamma-ray bursts, those under two seconds
long."

Indeed, there seem to be at least two kinds of gamma-ray bursters, the "long" and "short"
ones. Nobody has caught the short ones in time to see their afterglows, so they are more
mysterious. For more information, try these:

Cosmic rays are high-energy particles, mainly protons and alpha particles, which come from
outer space and hit the Earth's atmosphere producing a shower of other particles.
Most of these are believed to have picked up their energy by interacting with shock waves
in the interstellar medium. But the highest-energy ones remain mysterious—nobody
knows how they could have acquired such high energies.

The record is a 1994 event detected by the Fly's Eye in Utah, which recorded a shower of
particles produced by a cosmic ray of about 300 EeV. A similar event has been
detected by the Japanese scintillation array AGASA. An EeV is an
"exa-electron-volt", which is the energy an electron picks up going through a potential of
10^18 volts. 300 EeV is about 50 joules—the energy of a one-kilogram mass
moving at 10 meters/second, presumably all packed into one particle!
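The unit conversion is easy to check; the sketch below just redoes the arithmetic in the paragraph above:

```python
# Sanity check of the 300 EeV figure: convert to joules and compare with
# the kinetic energy of a one-kilogram mass moving at 10 m/s.
eV = 1.602e-19            # one electron volt in joules
E = 300e18 * eV           # 300 EeV in joules
print(E)                  # about 48 J

m, v = 1.0, 10.0          # one kilogram at 10 meters/second
print(0.5 * m * v**2)     # 50 J -- the same ballpark
```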

Nobody knows how such high energies are attained—perhaps as a side effect of the shock
made by a supernova or gamma-ray burster? The puzzle is especially acute because
particles with energies like these are expected to interact with the cosmic
microwave background radiation and lose energy after travelling only moderate
extragalactic distances, say 100 mega light years. This effect is called the
Greisen-Zatsepin-Kuz'min (or GZK) cutoff. So, either our understanding of the GZK
cutoff is mistaken, or ultra-high-energy cosmic rays come from relatively nearby—in
cosmological terms, that is.
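For the curious, here is a back-of-the-envelope version of the GZK threshold. The head-on-collision formula and the 6 x 10^-4 eV "typical" CMB photon energy are simplifying assumptions, so this is only an order-of-magnitude estimate:

```python
# Order-of-magnitude GZK threshold: a proton starts losing energy once it
# can make a pion off a CMB photon.  We treat a head-on collision with a
# "typical" CMB photon of energy ~6e-4 eV (an assumed round number).
m_p = 938.3e6     # proton rest energy, eV
m_pi = 139.6e6    # charged pion rest energy, eV
E_gamma = 6e-4    # typical CMB photon energy, eV

# Threshold for p + gamma -> n + pi (or p + pi) in the head-on case:
E_th = m_pi * (m_p + m_pi / 2) / (2 * E_gamma)
print(E_th)       # about 1e20 eV -- consistent with the ~5e19 eV cutoff
                  # quoted once angles and the photon spectrum are averaged
```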

Right now the data is confusing, because two major experiments on ultra-high-energy cosmic
rays have yielded conflicting results. The Fly's Eye seems to see a sharp drop-off in
the number of cosmic rays above 100 EeV, while the AGASA detector does not. People
hope that the Pierre Auger cosmic ray observatory, being built in western Argentina, will
settle the question.

Do gravitational waves really exist? If so, can we detect them? If so,
what will they teach us about the universe? Will they mainly come from expected
sources, or will they surprise us?

Perhaps the most ambitious physics experiments of our age are the attempts to detect
gravitational waves. Right now the largest detector is LIGO—the Laser
Interferometer Gravitational-Wave Observatory. This consists of two facilities: one
in Livingston, Louisiana, and one in Hanford, Washington. Each facility consists of
laser beams bouncing back and forth along two 4-kilometer-long tubes arranged in an L
shape. As a gravitational wave passes by, the tubes should alternately stretch and
squash—very slightly, but hopefully enough to be detected via changing interference
patterns in the laser beam.

LIGO is coming into operation in stages. The first stage, called LIGO I, is supposed
to allow detection of gravitational waves made by binary neutron stars within 65 mega
light years of us. These binaries emit lots of gravitational radiation, spiral into
each other, and eventually merge. In the last few minutes of this process you've got
two objects heavier than the sun whipping around each other about 100 times a second,
faster and faster, and they should emit a "chirp" of gravitational waves increasing in
amplitude and frequency until the final merger. It's these "chirps" that LIGO is
optimized for detecting. Later, in LIGO II, they'll try to boost the sensitivity to
allow detection of in-spiralling binary neutron stars within 1000 mega light years of
us.

To give you an idea of what these distances are like: the radius of the Milky Way is about
50,000 light years. The distance to the Andromeda galaxy is about 2.3 mega light
years. The radius of the "Local Group" consisting of three dozen nearby galaxies is
about 6 mega light years. The distance to the "Virgo Cluster", the nearest large
cluster of galaxies, is about 50 mega light years. The radius of the observable
universe is roughly 10,000 mega light years. So, if everything works as planned,
we'll be able to see quite far with gravitational waves.

However, binary neutron stars don't merge very often! The current best guess is that
with LIGO I we will be able to see such an event somewhere between once every 3000 years
and once every 3 years. I know, that's not a very precise estimate! Luckily,
the volume of space we survey grows as the cube of the distance we can see out to, so LIGO
II should see between 1 and 1000 events per year.
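The cube-of-the-distance scaling is easy to verify against the numbers quoted above:

```python
# The volume surveyed grows as the cube of the range, so scaling the
# LIGO I rate estimates by (1000/65)**3 gives the LIGO II figures.
boost = (1000 / 65) ** 3
print(round(boost))                 # about 3600

low, high = 1 / 3000, 1 / 3        # LIGO I: events per year
print(low * boost, high * boost)   # roughly 1 to 1000 events per year
```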

The really scary thing is how good LIGO needs to be to work as planned. Roughly
speaking, LIGO I aims to detect gravitational waves that distort distances by about 1 part
in 10^21. Since the laser bounces back and forth between the mirrors about
50 times, the effective length of the detector is 200 kilometers. Multiply this by
10^-21 and you get 2 x 10^-16 meters. By comparison, the radius
of a proton is 8 x 10^-16 meters! So, we're talking about measuring
distances to within a quarter of a proton radius! And that's just LIGO I. LIGO
II aims to detect waves that distort distances by a mere 2 parts in 10^23, so it
needs to do 50 times better.
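Here is that arithmetic spelled out, using the figures from the paragraph above:

```python
# The arithmetic behind the "quarter of a proton radius" claim.
arm = 4e3                 # arm length in meters
bounces = 50              # round trips of the laser
L_eff = bounces * arm     # effective length: 200 km
strain = 1e-21            # LIGO I design sensitivity

dx = L_eff * strain
print(dx)                 # 2e-16 m

r_proton = 8e-16          # proton radius in meters
print(dx / r_proton)      # 0.25 -- a quarter of a proton radius
```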

Actually all this is a bit misleading. The goal is not really to measure distances,
but rather to detect vibrations at a given frequency. However, it will still be an
amazing feat... if it works.

Getting LIGO to work has been a heroic endeavor: so far two earthquakes have caused damage
to the equipment, and problems from tree logging in Livingston to wind-blown tumbleweeds
in Hanford have made life more difficult than expected. To keep up with the latest news,
try the "LIGO Web Newsletter" here:

LIGO is working in collaboration with the British/German GEO 600 detector in Hanover,
Germany, a smaller detector that tests out lots of new technology. Other gravitational
wave detectors include the French/Italian collaboration VIRGO, the Japanese TAMA 300
project, and ACIGA in Australia. For information on these and others try:

The idea behind the proposed space-based detector LISA is to orbit 3 satellites in an
equilateral triangle with sides 5 million kilometers long, and constantly measure the
distance between them to an accuracy of a tenth of an angstrom (10^-11 meters) using
laser interferometry. The
big distances would make it possible to detect gravitational waves with frequencies of
0.0001 to 0.1 hertz, much lower than the frequencies for which the ground-based detectors
are optimized. The plan involves a really neat technical trick to keep the
satellites from being pushed around by solar wind and the like: each satellite will have a
free-falling metal cube floating inside it, and if the satellite gets pushed to one side
relative to this mass, sensors will detect this and thrusters will push the satellite back
on course.

For more details on what people hope to see with all these detectors, try this:

Do black holes really exist? (It sure seems like it.) Do they really
radiate energy and evaporate the way Hawking predicts? If so, what happens when,
after a finite amount of time, they radiate completely away? What's left? Do
black holes really violate all conservation laws except conservation of energy, momentum,
angular momentum and electric charge? What happens to the information contained in
an object that falls into a black hole? Is it lost when the black hole
evaporates? Does this require a modification of quantum mechanics?

Is the Cosmic Censorship Hypothesis true? Roughly: for generic collapsing
isolated gravitational systems, are the singularities that might develop guaranteed to be
hidden behind a smooth event horizon? If Cosmic Censorship fails, what are these
naked singularities like? That is, what weird physical consequences would they
have?

Proving the Cosmic Censorship Hypothesis is a matter of mathematical physics rather than
physics per se, but doing so would increase our understanding of general relativity.
There are actually at least two versions: Penrose formulated the "Strong Cosmic Censorship
Conjecture" in 1986, and the "Weak Cosmic Censorship Hypothesis" in 1988. A fairly
precise mathematical version of the former one states:

Every maximal Hausdorff development of generic initial data for Einstein's equations,
compact or asymptotically flat, is globally hyperbolic.

That's quite a mouthful, but roughly speaking, "globally hyperbolic" spacetimes are those
for which causality is well-behaved, in the sense that there are no closed timelike curves
or other pathologies. Thus this conjecture states that for generic initial
conditions, Einstein's equations lead to a spacetime in which causality is
well-behaved.

The conjecture has not been proved, but there are a lot of interesting partial results so
far. For a nice review of this work see:

Piotr Chrusciel, On the uniqueness in the large of solutions of Einstein's
equations ("Strong cosmic censorship"), in Mathematical Aspects of Classical Field
Theory, Contemp. Math. 132, American Mathematical Society, 1992.

Why are the laws of physics not symmetrical between left and right, future and
past, and between matter and antimatter? I.e., what is the mechanism of CP
violation, and what is the origin of parity violation in Weak interactions? Are
there right-handed Weak currents too weak to have been detected so far? If so, what
broke the symmetry? Is CP violation explicable entirely within the Standard Model,
or is some new force or mechanism required?

Why is there more matter than antimatter, at least around here? Is there
really more matter than antimatter throughout the universe? This seems related to the
previous question, since most attempts at explaining the prevalence of matter over
antimatter make use of CP violation.

Are there really just three generations of leptons and quarks? If so,
why? For example, the muon is a particle almost exactly like the electron except
much heavier, and the tau particle is also almost the same, but heavier still. Why
do these three exist and no more? Or, are these unanswerable questions?

Besides the particles that carry forces (the photon, the W and Z bosons, and the gluons), all
elementary particles we have seen so far fit neatly into three "generations" of particles
called leptons and quarks. The first generation consists of:

the electron

the electron neutrino

the up quark

the down quark

The second consists of:

the muon

the muon neutrino

the charmed quark

the strange quark

and the third consists of:

the tau

the tau neutrino

the top quark

the bottom quark

How do we know there aren't more?

Ever since particle accelerators achieved the ability to create Z bosons in 1983, our best
estimates on the number of generations have come from measuring the rate at which Z bosons
decay into completely invisible stuff. The underlying assumption is that when this
happens, the Z boson is decaying into a neutrino-antineutrino pair as predicted by the
Standard Model. Each of the three known generations contains a neutrino which is
very light. If this pattern holds up, the total rate of "decay into invisible stuff"
should be proportional to the number of generations!

Experiments like this keep indicating there are three generations of this
sort. So, most physicists feel sure there are exactly three generations of quarks
and leptons. The question then becomes "why?"—and so far we haven't a clue!
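To make the counting concrete, here is the arithmetic with rounded LEP-era values; the two widths below are approximate figures quoted for illustration only:

```python
# Counting neutrino generations from Z decay (rounded LEP-era values;
# treat the numbers as illustrative, not definitive).
Gamma_inv = 499.0    # measured "invisible" decay width of the Z, in MeV
Gamma_nu = 167.2     # predicted width per light neutrino species, in MeV

N_generations = Gamma_inv / Gamma_nu
print(N_generations)  # about 2.98 -- consistent with exactly three
```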

Honesty compels us to point out a slight amount of wiggle room in the remarks above.
Conservation of energy prevents the Z from decaying into a neutrino-antineutrino pair if
the neutrino in question is of a sort that has more than half the mass of the Z. So, if
there were a fourth generation with a very heavy neutrino, we couldn't detect it by
studying the decay of Z bosons. However, all three
known neutrinos have a mass less
than 1/3000 times the Z mass, so a fourth neutrino would have to be much heavier than the
rest to escape detection this way.

Another bit of wiggle room lurks in the phrase "decaying into a neutrino-antineutrino pair
in the manner predicted by the Standard Model". If there were a fourth generation
with a neutrino that didn't act like the other three, or no neutrino at all, we might not
see it. However, in this case it would be stretching language a bit to speak of a
"fourth generation", since the marvelous thing about the three known generations is how
they're completely identical except for the values of certain constants like masses.

Why does each generation of particles have precisely this structure: two leptons
and two quarks?

If you're familiar with particle physics, you'll know it goes much deeper than this: the
Standard Model says every generation of particles has precisely the same mathematical
structure except for some numbers that describe Higgs couplings. We don't know
any reason for this structure, although the requirement of "anomaly cancellation" puts
some limits on what it can be.

If you're not an expert on particle physics, perhaps these introductions to the Standard
Model will help explain things:

Do the quarks or leptons have any substructure, or are they truly elementary
particles?

Is there really a Higgs boson, as predicted by the Standard Model of particle
physics? If so, what is its mass? If not, what breaks the symmetry between the
electromagnetic and weak forces, and gives all the elementary particles their masses?

The Standard Model predicts the existence of a spin-0 particle called the Higgs boson,
which comes in two isospin states, one with charge +1 and one neutral. (It also
predicts that this particle has an antiparticle.) According to the Standard Model,
the interaction of the Higgs boson with the electroweak force is responsible for a
"spontaneous symmetry breaking" process that makes this force act like two very different
forces: the electromagnetic force and the weak force. Moreover, it is primarily the
interaction of the Higgs boson with the other particles in the Standard Model that endows
them with their masses! The Higgs boson is very mysterious, because in addition to
doing all these important things, it stands alone, very different from all the other
particles. For example, it is the only spin-0 particle in the Standard Model.
To add to the mystery, it is the only particle in the Standard Model that has not yet been
directly detected!

On the 4th of July, 2012, two experimental teams looking for the Higgs
boson at the Large Hadron Collider (LHC) announced the discovery of a
previously unknown boson with mass of roughly 125-126
GeV/c^2. Using the combined analysis of two interaction
types, these experiments reached a statistical significance of 5
sigma, meaning that if no such boson existed, the chance of seeing
what they saw would be less than 1 in a million.
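For readers wondering where the "1 in a million" comes from: "5 sigma" refers to the one-sided tail probability of a standard normal distribution, which is easy to compute:

```python
import math

# The one-sided tail probability of a standard normal distribution
# beyond 5 standard deviations -- what "5 sigma" means here.
p = 0.5 * math.erfc(5 / math.sqrt(2))
print(p)   # about 2.9e-7, i.e. less than 1 in a million
```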

However, it has not yet been confirmed that this boson behaves as the
Standard Model predicts of the Higgs. Some particle physicists hope
that this boson will turn out to work a bit differently than the
Standard Model predicts. For example, some variants of the
Standard Model predict more than one type of Higgs boson. LHC
may also discover other new phenomena when it starts colliding
particles at energies higher than ever before explored. For
example, it could find evidence for supersymmetry, providing indirect
support for superstring theory.

What is the correct theory of neutrinos? Why are they almost but not quite
massless? Do the three known neutrinos—electron, muon, and tau—all have a
mass? Could any neutrinos be Majorana spinors? Is there a fourth kind of neutrino,
such as a "sterile" neutrino?

Starting in the 1990s, our understanding of neutrinos has dramatically improved, and the
puzzle of why we see about 1/3 as many electron neutrinos coming from the sun as naively
expected has pretty much been answered: the different neutrinos can turn into each other
via a process called "oscillation". But, there are still lots of loose ends. For
details, try:

The first of these has lots of links to the web pages of research groups doing experiments
on neutrinos. It's indeed a big industry!

Is quantum chromodynamics (QCD) a precise description of the behavior of quarks
and gluons? Can we prove using QCD that quarks and gluons are confined at low
temperatures? Is it possible to calculate masses of hadrons (such as the proton,
neutron, pion, etc.) correctly from the Standard Model, with the help of QCD? Does
QCD predict that quarks and gluons become deconfined and form plasma at high
temperature? If so, what is the nature of the deconfinement phase transition?
Does this really happen in Nature?

Most physicists believe the answers to all these questions are "yes". There are
currently a number of experiments going on to produce and detect a quark-gluon
plasma. It's believed that producing such a plasma at low pressures requires a
temperature of 2 million million kelvins. Since this is about 100,000 times hotter
than the center of the Sun, and such extreme temperatures were last prevalent in our Universe only 1 microsecond
after the Big Bang, these experiments are lots of fun. The largest, the Relativistic
Heavy Ion Collider on Long Island, New York, began operation in 2000. It works by
slamming gold nuclei together at outrageous speeds. For details, see:
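That temperature comparison is one line of arithmetic, taking the Sun's central temperature to be roughly 1.5 × 10^7 kelvins (a standard textbook figure, assumed here):

```python
# Back-of-envelope check of the quark-gluon plasma temperature ratio.
T_plasma = 2.0e12     # kelvins: roughly 2 million million K
T_sun_core = 1.5e7    # kelvins: approximate temperature at the Sun's center (assumed)

ratio = T_plasma / T_sun_core
print(f"plasma / solar core temperature ratio: {ratio:.2e}")   # about 1.3e5
```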

But, in addition to such experimental work, a lot of high-powered theoretical work is
needed to understand just what QCD predicts, both in extreme situations like these, and
for ordinary matter. In fact, it's a great challenge to use QCD to predict the
masses of protons, neutrons, pions and the like to an accuracy better than about
10%. Doing so makes heavy use of supercomputers, but there are also fundamental
obstacles to good numerical computations, like the "fermion doubling problem", where
bright new ideas are needed. See for example:

Is there a mathematically rigorous formulation of a relativistic quantum field
theory describing interacting (not free) fields in four spacetime dimensions? For
example, is the Standard Model mathematically consistent? How about Quantum
Electrodynamics? Even the classical electrodynamics of point particles does not yet
have a satisfactory mathematically rigorous formulation. Does one exist or is this theory
inconsistent?

These are questions of mathematical physics rather than physics per se, but they are
important. At the turn of the millennium, the Clay Mathematics Institute offered a
$1,000,000 prize for providing a mathematically rigorous foundation for the quantum
version of SU(2) Yang-Mills theory in four spacetime dimensions, and proving that there's
a "mass gap"—meaning that the lightest particle in this theory has nonzero mass.
For details see:

Most "grand unified theories" (GUTs) predict that the proton decays, but so far
experiments have (for the most part) only put lower limits on the proton lifetime.
As of 2002, the lower limit on the mean life of the proton was somewhere between
10^31 and 10^33 years, depending on the presumed mode of decay, or 1.6
x 10^25 years regardless of the mode of decay.

Proton decay experiments are heroic undertakings, involving some truly huge
apparatus. Right now the biggest one is "Super-Kamiokande". This was built in
1995, a kilometer underground in the Mozumi mine in Japan. This experiment is mainly
designed to study neutrinos, but it doubles as a proton decay detector. It consists of a
tank holding 50,000 tons of pure water, lined with 11,200 photomultiplier tubes which can
detect very small flashes of light. Usually these flashes are produced by neutrinos
and various less interesting things (the tank is deep underground to minimize the effect
of cosmic rays). But, flashes of light would also be produced by certain modes of proton
decay, if this ever happens.
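A rough count of protons in the tank shows why such a huge detector can probe lifetimes vastly longer than the age of the Universe. The figures below (water mass, Avogadro's number, 10 protons per water molecule) are standard; the mean life of 10^33 years is just an illustrative value near the upper end of the quoted experimental range:

```python
# Back-of-envelope: expected proton decays per year in a 50,000-ton water tank,
# if the proton mean life were 10^33 years.
AVOGADRO = 6.022e23          # molecules per mole
water_grams = 5.0e10         # 50,000 metric tons in grams
molar_mass_water = 18.0      # grams per mole
protons_per_molecule = 10    # 8 protons in oxygen + 2 hydrogen nuclei

n_protons = (water_grams / molar_mass_water) * AVOGADRO * protons_per_molecule
decays_per_year = n_protons / 1.0e33   # illustrative mean life of 10^33 years

print(f"protons in tank: {n_protons:.1e}")
print(f"expected decays per year: {decays_per_year:.0f}")
```

Even at this sensitivity only a handful of decays per year would be expected, so seeing none over several years of running steadily pushes the lower bound on the lifetime upward.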

Super-Kamiokande was beginning to give much improved lower bounds on the proton lifetime,
and excellent information on neutrino oscillations, when a strange disaster happened on
November 12, 2001. The tank was being refilled with water after some burnt-out
photomultiplier tubes had been replaced. Workmen standing on styrofoam pads on top of
some of the bottom tubes made small cracks in the neck of one of the tubes, causing that
tube to implode. The resulting shock wave started a chain reaction in which about 7,000 of
the photomultiplier tubes were destroyed! Luckily, after lots of hard work the experiment
was rebuilt by December 2002.

In 2000, after about 20 years of operation, the Kolar Mine proton decay experiment claimed
to have found proton decay, and their team of physicists gave an estimate of
10^31 years for the proton lifetime. Other teams are skeptical.

Why do the particles have the precise masses they do? Or is this an
unanswerable question?

Of course their mass in kilograms depends on an arbitrary human choice of units, but their
mass ratios are fundamental constants of nature. For example, the muon is about 206.76828
times as heavy as the electron. We have no explanation of this sort of number! We
attribute the masses of the elementary particles to the strength of their interaction with
the Higgs boson (see above), but we have no understanding of why these interactions are as
strong as they are.
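The quoted mass ratio is easy to reproduce from the measured particle masses (the values below are standard compilation figures; treat the final digits as approximate):

```python
# Reproducing the muon-to-electron mass ratio from the measured masses.
m_muon_mev = 105.6583745       # muon mass, MeV/c^2 (measured)
m_electron_mev = 0.5109989461  # electron mass, MeV/c^2 (measured)

ratio = m_muon_mev / m_electron_mev
print(f"muon/electron mass ratio: {ratio:.5f}")   # about 206.76828
```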

Why are the strengths of the fundamental forces (electromagnetism, weak and strong
forces, and gravity) what they are? For example, why is the fine structure constant,
that measures the strength of electromagnetism, about 1/137.036? Where do such
dimensionless constants come from? Or is this an unanswerable question?
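The number 1/137.036 can itself be reproduced from the defining SI constants via α = e²/(4π ε₀ ħ c). A quick sketch using CODATA values:

```python
import math

# Fine structure constant from SI constants: alpha = e^2 / (4 pi epsilon_0 hbar c)
e = 1.602176634e-19           # elementary charge, C (exact since 2019)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34        # reduced Planck constant, J s
c = 299792458.0               # speed of light, m/s (exact)

alpha = e**2 / (4.0 * math.pi * epsilon_0 * hbar * c)
print(f"1/alpha = {1.0/alpha:.3f}")   # about 137.036
```

Of course, this computation only restates the measured value in different units; it does not explain why the dimensionless number comes out as it does, which is exactly the open question.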

Particle masses and strengths of the fundamental forces constitute most of the 26
fundamental dimensionless constants of nature. Another one is the cosmological
constant—assuming it's constant. Others govern the oscillation of neutrinos (see
below). So, we can wrap a bunch of open questions into a bundle by asking: Why
do these 26 dimensionless constants have the values they do?

Perhaps the answer involves the Anthropic Principle, but perhaps not. Right now, we have
no way of knowing that this question has any answer at all!

The Pioneer 10 and Pioneer 11 spacecraft are leaving the Solar System. Pioneer
10 sent back radio information about its location until January 2003, when it was about 80
times farther from the Sun than the Earth is. Pioneer 11 sent back signals until
September 1995, when its distance from the Sun was about 45 times the Earth's.

The Pioneer missions have yielded the most precise information we have about navigation
in deep space. However, analysis of their radio tracking data indicates a small
unexplained acceleration towards the Sun! The magnitude of this acceleration is
roughly 10^-9 meters per second per second. This is known as the
"Pioneer anomaly".

This anomaly has also been seen in the Ulysses spacecraft, and possibly also in the
Galileo spacecraft, though the data is much more noisy, since these were Jupiter probes,
hence much closer to the Sun, where there is a lot more pressure from solar
radiation. The Viking mission to Mars did not detect the Pioneer anomaly —
and it would have, had an acceleration of this magnitude been present, because its radio
tracking was accurate to about 12 meters.
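To see why tracking at the 12-meter level would reveal so tiny an acceleration, one can estimate how far an unmodeled acceleration of 10^-9 m/s² displaces a spacecraft over a year. This is a crude constant-acceleration estimate that ignores the orbital dynamics entirely:

```python
# Crude estimate: displacement from a constant unmodeled acceleration over one year.
a = 1.0e-9                  # m/s^2, rough size of the Pioneer anomaly
seconds_per_year = 3.156e7  # seconds in one year

drift = 0.5 * a * seconds_per_year**2   # s = (1/2) a t^2
print(f"drift after one year: {drift/1000:.0f} km")   # hundreds of km, far above 12 m
```

Even over a few days the displacement accumulates to meters, which is why ranging data accurate to about 12 meters could have detected an effect of this size at Mars.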

Many physicists and astronomers have tried to explain the Pioneer anomaly using
conventional physics, but so far nobody seems to have succeeded. There are many
proposals that try to explain the anomaly using new physics — in particular, modified
theories of gravity. But there is no consensus that any of these explanations are
right, either. For example, explaining the Pioneer anomaly using dark matter would
require more than 0.0003 solar masses of dark matter within 50 astronomical units of the
Sun (an astronomical unit is the distance between Sun and Earth). But, this is in
conflict with our calculations of planetary orbits.

Are there important aspects of the Universe that can only be understood using the
Anthropic Principle? Or is this principle unnecessary, or perhaps inherently
unscientific?

Very roughly speaking, the Anthropic Principle says that our universe must be
approximately the way it is for intelligent life to exist, so that the mere fact we are
asking certain questions constrains their answers. This might "explain" the values
of fundamental constants of nature, and perhaps other aspects of the laws of physics as
well. Or, it might not.

Different ways of making the Anthropic Principle precise, and a great deal of evidence
concerning it, can be found in a book by Barrow and Tipler:

This book started a heated debate on the merits of the Anthropic Principle, which
continues to this day. Some people have argued the principle is vacuous.
Others have argued that it distracts us from finding better explanations of the facts of
nature, and is thus inherently unscientific. For one interesting view, see:

In 1994 Lee Smolin advocated an alternative but equally mind-boggling idea, namely that
the parameters of the Universe are tuned, not to permit intelligent life, but to maximize
black hole production! The mechanism he proposes for this is a kind of cosmic
Darwinian evolution, based on the (unproven) theory that universes beget new baby
universes via black holes. For details, see:

Does some version of string theory or M-theory give specific predictions about the
behavior of elementary particles? If so, what are these predictions? Can we test
these predictions in the near future? And: are they correct?

Despite a huge amount of work on string theory over the last few decades, it has still
made no predictions that we can check with our particle accelerators and whose failure
would falsify the theory. The closest it comes so far is predicting the existence of a
"superpartner" for each of the observed types of particle. None of these
superpartners have ever been seen. It is possible that the Large Hadron Collider will
detect signs of the lightest superpartner. It's also possible that dark matter is
due to a superpartner! But, these remain open questions.

It's also interesting to see what string theorists regard as the biggest open questions in
physics. At the turn of the millennium, the participants of the conference Strings
2000 voted on the ten most important physics problems. Here they are:

Are all the (measurable) dimensionless parameters that characterize the physical
universe calculable in principle or are some merely determined by historical or quantum
mechanical accident and uncalculable?

How can quantum gravity help explain the origin of the universe?

What is the lifetime of the proton and how do we understand it?

Is Nature supersymmetric, and if so, how is supersymmetry broken?

Why does the universe appear to have one time and three space dimensions?

Why does the cosmological constant have the value that it has? Is it zero, and is it
really constant?

What are the fundamental degrees of freedom of M-theory (the theory whose low-energy
limit is eleven-dimensional supergravity and which subsumes the five consistent
superstring theories) and does the theory describe Nature?

What is the resolution of the black hole information paradox?

What physics explains the enormous disparity between the gravitational scale and the
typical mass scale of the elementary particles?

Can we quantitatively understand quark and gluon confinement in Quantum
Chromodynamics and the existence of a mass gap?