The postdoc will be based at the University of Nevada, Reno and will collaborate directly with Dr. Andrei Derevianko (Physics) and Dr. Geoffrey Blewitt (Nevada Geodetic Laboratory). Strong computational skills and familiarity with statistical analysis are preferred.

After our paper with Maxim Pospelov, "Hunting for topological dark matter with atomic clocks," was published, I received quite a number of e-mails with questions about our proposal. There was even an offer of free-of-charge use of a powerful computational cluster (thank you!). I apologize for not answering all e-mails individually - there is just not enough time. A friend also sent me a link to this reddit thread - there is genuine interest in the details of the proposal. This post is intended to answer some of these questions.

First of all, see the previous post that outlines the basic idea of the search.

Topological dark matter:
There are two components that go into dark-matter model building: (i) what the dark matter objects are and (ii) how these objects interact non-gravitationally with us (baryonic or ordinary matter). I emphasize the word non-gravitationally, as the gravitational interaction is a must due to multiple observations of gravitational interactions between dark and ordinary matter (and consistency with general relativity).

Additional model constraints come from various observations and cosmological simulations. Still, the allowed parameter space is enormous: even if one assumes that the dark matter objects are made of elementary particles, the allowed masses span 50 orders (!) of magnitude. This is a testament to the current state of confusion in modern physics and cosmology. The field is ripe for discoveries.

I admit that our model (due to Maxim Pospelov) is speculative, but it is as good as any model out there. WIMPs and axions have additional attractive features, as they might also solve other outstanding problems in physics (for example, axions can solve the strong-CP problem).

So what is the model? (If you get lost here, just read on.) For experts, a technical discussion can be found in the extensive supplementary material to our paper.

Well, you start with a quantum field that has some self-interaction built in. The interaction is such that it allows for several identical minima. For example, the same minimum value of the potential could be reached at two distinct values of the field, +A and -A. Now, as the Universe expands, it cools down and the field has to settle at a minimum of the potential. The field is torn over which value to choose - the choices +A and -A are equivalent. So in some regions of space it picks +A and in other regions it picks -A. This is called "spontaneous symmetry breaking".

Nature does not like discontinuities, so the +A and -A domains have to be connected smoothly. This transition region is the topological defect, or cosmic wall. The thickness of the wall is given by the particle Compton wavelength = h/(m c), where m is the particle mass, h is the Planck constant and c is the speed of light.

This example is overly simplistic, but it demonstrates how topological defects form as the Universe cools down. In fact, for a dark-matter model you would like the field to be zero everywhere except inside the defects (see the supplement), so that all the energy (or mass) is stored in the topological defects.

Depending on the field's degrees of freedom (scalar vs. vector fields) and the self-interaction potential, one may form defects of various geometries: monopoles, strings or domain walls. Especially interesting is the case of monopoles (spherically-symmetric objects), as a gravitationally-interacting gas of monopoles mimics dark matter. The size of the defect is a free parameter - we have no constraints on how large it could be. GPS would be sensitive to Earth-sized monopoles (a huge Compton wavelength translating into a particle mass ~10^-14 eV).
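To get a feel for the numbers, here is an order-of-magnitude check of my own (not from the paper) of how the quoted ~10^-14 eV mass maps onto a roughly Earth-sized defect via the Compton-wavelength formula above. Using the reduced constant ħ = h/2π instead of h changes the result by 2π, so only the order of magnitude is meaningful:

```python
# Order-of-magnitude check (my own sketch): defect thickness from the
# Compton wavelength h/(m c) for the quoted mass scale ~1e-14 eV.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

m_eV = 1e-14                 # assumed particle mass, eV/c^2
m_kg = m_eV * eV / c**2      # mass in kg
compton = h / (m_kg * c)     # defect thickness, m

earth_diameter = 1.27e7      # m
print(f"Compton wavelength ~ {compton:.1e} m")
print(f"vs. Earth diameter ~ {earth_diameter:.1e} m")
```

The result lands within an order of magnitude of the Earth's size (and within a factor of a few of the GPS constellation's diameter), which is all such an estimate can claim.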

Here is a real-life example of spontaneous symmetry breaking and topological defects (due to Rafael Lang, from an interview to appear in Sensing Our Planet magazine):

“There’s a wedding and a hundred people are sitting at this big round table. Somebody starts eating the salad. They pick up the fork on their left, so the person next to them has to pick up the fork on the left. Now the bride also starts eating, picking up the fork on the right, so everybody around the bride picks up the right fork. At some point in between this poor guy will be sitting with no fork; on his other side will be someone with two forks. Those two guys are called a topological defect. There’s nothing special going on around the left, the right, but where those two guys are sitting, there’s a disruption of the forks.”

OK, so we are done with choosing the dark-matter objects. The second ingredient is the non-gravitational interaction between dark-matter objects and us. Here you do need to pick one that is "reasonable" (e.g., Lorentz-invariant) and sufficiently weak to have gone unnoticed in dedicated experiments and observations. The interaction we picked is of this kind. Effectively, when the defect overlaps with us, it pulls on the particle (electron, proton, neutron, etc.) masses and on the forces acting between the particles. Mind you, this pull is really weak - otherwise we would have noticed it. However, there are ultra-sensitive devices, like atomic clocks (see this post), that may be sensitive to such pulls. You might ask why it might have gone unnoticed in atomic clocks before - some of the reasons are purely psychological and are related to how an experimentalist discerns signal from noisy background (see this post).

I was asked to write a news story for the American Physical Society's Forum on International Physics newsletter. Here is my contribution.

As I type this text away, I become aware that time just continues its quiet flow and my keyboard clicks measure its passage. And whatever poetical, philosophical or religious meaning one might assign to "time," that is how time is defined: as a measurable sequence of events.

And time being measurable naturally means that physicists are in business.

Atomic clocks are arguably the most accurate devices ever built. While a typical wristwatch keeps time accurate to about a second over a week, modern atomic clocks aim at neither gaining nor losing a second over the age of the Universe. Imagine that if some poor soul had built a clock like that at the beginning of time, at the Big Bang, and for some good reason it had survived through all the cosmic cataclysms, today it would be off by less than a heartbeat.

Atomic clocks are ubiquitous, and one could buy a slightly used one on the internet. Among many places, they tick away on stock exchanges, in data centers, and in the hearts of GPS satellites. However, there is a truly special collection of several hundred atomic clocks, distributed among 50 or so industrialized countries, that defines the world's time. This timescale is known as TAI (from the French "Temps Atomique International"), the International Atomic Time.

A collection of atomic clocks at the Physikalisch-Technische Bundesanstalt (PTB), Germany. These clocks substantially contribute to the TAI timescale, the world’s time. Credit: PTB

BIPM (Bureau International des Poids et Mesures) is at the heart of defining the world's time. This international organization is located in a white wooden two-story building on the forested bank of the Seine River in the Parisian suburbs. Judah Levine from NIST-Boulder explains that BIPM was established in 1875 by the international "Treaty of the Meter," which defined the kilogram and the meter. Later the second was added to the convention (SI units), and the meter was redefined in terms of the fixed speed of light and the second. The modern legal definition of the second involves a certain number of beats derived from the hyperfine splitting of the cesium-133 atom.

Judah Levine has been contributing US data to TAI for nearly half a century. He explains that BIPM collects clock data from metrology labs and averages them. BIPM then distributes a document called "the Circular T," which reports by how much the national timescales were off from the average about a month ago. In turn, based on this circular, national labs steer their local timescales to correct for the drifts from TAI. This protocol keeps the world's time stable at the level of a nanosecond over a month.
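The average-report-steer loop just described can be caricatured in a few lines. This is a toy sketch only: the lab names and offsets below are made up, and the real BIPM algorithm weights each clock by its demonstrated stability rather than averaging equally:

```python
# Toy sketch of the Circular T feedback loop (illustrative numbers only;
# the actual BIPM procedure weights clocks by their past stability).
clock_offsets_ns = {"Lab A": 12.0, "Lab B": -5.0, "Lab C": 2.0}

# Step 1: form the ensemble average of the reported clock offsets.
average = sum(clock_offsets_ns.values()) / len(clock_offsets_ns)

# Step 2: "Circular T" reports each lab's deviation from that average...
circular_t = {lab: off - average for lab, off in clock_offsets_ns.items()}

# Step 3: ...and each lab steers its timescale to cancel its deviation.
steered = {lab: off - circular_t[lab] for lab, off in clock_offsets_ns.items()}

print(circular_t)  # deviations sum to zero by construction
print(steered)     # after steering, every lab sits at the ensemble average
```

The month-long delay in the real system means the steering corrects past drift rather than instantaneous offsets, which is why TAI is called a "paper timescale" below.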

The most advanced metrology labs rely on so-called primary frequency standards, super-precise cesium clocks, says Peter Rosenbusch of the Laboratoire Nationale de Métrologie et d'Essais and the Paris Observatory. The primary standards are occasionally used to calibrate the local "workhorse" continuously-running atomic clocks as closely as possible to the SI definition of time. In the US, the primary frequency standard is the cesium fountain clock at NIST-Boulder.

So if the world's time is the time counted by atomic clocks, is it the same as cosmic time? In principle, one could measure time using pulsars - magnetized, rotating neutron stars. Pulsars, however, tend to slow down over time as they radiate away rotational energy, and, moreover, Judah Levine points out that the very shape of the pulses also changes over time, making counting the pulses imprecise. We joke that, perhaps, to define the Standard Galactic Time one needs to find more stable cosmic sources.

Nevertheless, space and satellite technology are anticipated to improve TAI. Christophe Salomon of Ecole Normale Supérieure in Paris is involved with the ACES (Atomic Clock Ensemble in Space) mission of the European Space Agency. He explains that the goal is to operate the most precise primary Cs frequency standard on board the International Space Station (ISS). The clock is expected to become operational in space in two years. The ISS will broadcast a microwave time signal down to several Earth-based stations; in the USA, the stations will be installed at JPL in Pasadena and at NIST-Boulder. Through the ACES mission, national labs around the globe will establish high-precision links to compare primary standards and thus remove some uncertainties in their contributions to the world's time.

Neither time nor its definition stands still. New generations of atomic clocks based on ultracold atoms and ions already outperform the primary frequency standards. Pushing these quantum devices to their limits is a friendly competition between several labs around the world. Just over the past year, the crown of the world's most precise clock has been shared by the USA (two teams, at JILA and NIST-Boulder), Japan, and Germany. These advances were summarized in recent talks by E. Peik (PTB, Germany) and A. Ludlow (NIST-Boulder, USA) at the International Conference on Atomic Physics, held last July in the historic Mayflower Hotel in Washington, D.C.

Considering this rapid progress in atomic horology, the international community is discussing how to redefine the second in terms of these novel classes of clocks. This would mean retiring Cs from the SI units and redefining the world's time.

Clock-comparison technology is improving as well. The European Union is building a trans-European clock network that uses existing optical-fiber communication links to compare clocks at metrology labs directly, removing the uncertainties of over-the-air and over-space comparisons. The first 920-km-long link, between the northern and southern parts of Germany, has already been tested.

One apparent limitation of the TAI timescale is that it is a "paper timescale" - it only shows what the world's time was a month ago. What if the dedicated clocks were compared and averaged continuously, or, even better, formed one single geographically-distributed clock? This was envisioned recently by a group of physicists led by Mikhail Lukin at Harvard and Jun Ye at JILA in Colorado. They proposed a quantum network of atomic clocks (for example, placed on satellites orbiting the Earth) that would use quantum entanglement to create one giant distributed clock, with each nation contributing satellites to the network. Jun Ye comments, "this is definitely a futuristic proposal, and we must achieve substantial technological advances. However, all of the different building blocks for the network have in principle been demonstrated in small scales." Maybe this is how the world's time will be measured in the far future.

I would like to also thank Jeff Sherman of NIST-Boulder, Ekkehard Peik of PTB, and Peter Komar of Harvard for illuminating discussions.

About the author: Dr. Andrei Derevianko is a Russian-American theoretical physicist and a professor at the University of Nevada, Reno. He has contributed to the development of several novel classes of atomic clocks and precision tests of fundamental symmetries with atoms and molecules.

The closing session of the Perimeter workshop on "New ideas in low-energy tests of fundamental physics" was a stimulating discussion on open questions at the intersection of precision measurements and fundamental physics. The discussion was guided by Derek Kimball's list of questions, which he kindly shares with you below. The video/audio record of the entire discussion can be found online here: part 1 and part 2.

Are there boring answers to exciting mysteries?

If one assumes, from the experimental perspective, the most boring solutions to mysteries: for example, a cosmological constant driving the accelerating expansion of the universe and dark matter that has no couplings to Standard Model particles, what mysteries still cannot be resolved?

Technical naturalness: for now should we not be overly concerned about this issue for experiments?

Hierarchy problem and its relation to the observed Higgs mass, cosmological constant, BICEP-2, Planck scale: how does this relate to the scale of new physics and where we should search?

New (?) idea of searching for fast-varying constants: could this be done in an astrophysical spectroscopic search?

It was noted that a phase transition process (or evolving couplings) could be introduced to "avoid" technical naturalness problems... could there be phase transitions with very small effects that occur frequently, perhaps even today? (Something for GNOME or clock networks to look for?)

Impact of BICEP-2 results

BICEP-2 results: if assumed to be correct, what do they imply about the best regimes/scenarios/experiments to search for new physics?

Does BICEP-2 imply that lots of interesting new physics stuff inflates away?

How plausible is scale evolution of physics to avoid BICEP-2 “problems” and what are experimental signatures of scale evolution?

Relation between astrophysical and laboratory searches

Ideas like chameleon fields: what kind of mechanisms exist to hide interactions in laboratory tests and allow astrophysically, or allow in laboratory tests and hide astrophysically? How plausible are these, and how seriously should constraints be taken?

What is the state of knowledge about coupling between dark matter particles? Would coupling between DM particles make some difference between laboratory vs. astrophysical bounds? What if DM is more complex (not just one species) and 5% is coupled strongly to itself: could one have, for example, axion stars, etc.? Could such objects give transient signals?

Transient and time-dependent new physics signals

What kind of new physics can GNOME, clock network, CASPEr, or related experiments access that laboratory experiments cannot access?

Is there anything that can be said about scale of domains, time between transient signals?

Higher dimension topological defects and textures were mentioned. What are these, and what are interesting signatures and characteristics?

It was noted that the photon mass could be altered inside a topological defect: could this be measured with the GNOME or the clock network experiment?

Symmetry tests of gravity

How do we test if standard gravity violates parity or time-reversal invariance?

What is the impact of the ThO electron EDM constraint (also Hg and neutron EDM limits) on new physics scenarios?

Dark Energy (DE)

What is the range of viable ideas outside of the cosmological constant, and among these, which have the best motivation? What "hand-waving arguments" motivate where to search?

What is the relation of inflation to the CP problem and baryogenesis (does a CP-violating inflaton do anything, or are the Sakharov conditions not satisfied)?

What is the connection between the inflaton and dark energy?

Non-quantum fields?

It was noted that loop corrections complicate the physics of light scalar fields. Can one imagine “non-quantizable” fields (something like gravity that can’t be quantized simply)? For example, torsion or chiral gravity that is not quantized (at least in the usual way)?

Do such fields have distinct signatures compared to quantized spin-0, spin-1 quantum fields? What is plausibility, for example, of long-range torsion gravity?

Transients in astronomical spectroscopy

Could we detect new physics "passing through" the line of sight between Earth and an astrophysical object?

Could we search for transients using a telescope? There was a suggestion to use an astro-comb…

For example, it was suggested that a quintessence field coupled electromagnetically could generate Faraday rotation: if \phi is clumped or forms a topological defect, is this something observable?

Impact of proton radius measurements

What kind of new physics might it imply?

Hidden sector

Tests of the spin-statistics connection have been conducted, and more are planned. Could these be sensitive to some of the hidden-sector physics?

What are observable signatures of hidden sector supersymmetry?

Large extra dimensions

What is the present status and is there strong motivation to go to particular length scales in tests? Is 100 microns a special length? What would show up in atoms?

Experimentally, what is the status of patch potential systematics?

Lense-Thirring effect for intrinsic spins

How does the Lense-Thirring effect for intrinsic spin compare with that for orbital angular momentum? Is there a way to test this?

Suppose an experimentalist has a sensitive device, the conventional physics of which is well under control. Now let's assume that once in a while the device is perturbed by some unanticipated "new physics" event, such as an interaction with a lump of dark matter, and suppose the device has enough sensitivity to "new physics". When would such an unanticipated event become apparent to an unsuspecting experimentalist?

This is quite different from particle colliders, where experimentalists do hunt for unusual events. There, a specific signal or signature is sometimes anticipated (e.g., the Higgs), so let me emphasize that our unsuspecting fellow experimentalist does not specifically look for new physics.

So when would “new physics” be noticed? An obvious answer would be: when the new physics perturbs the expected signal in a significant way.

Even this simple statement requires qualifiers. Suppose new physics provides a uniform background to the signal and the signal itself cannot be computed exactly from first principles. For example, transition frequencies of many-electron atoms can be computed only to 3-4 significant figures, while experimentalists can determine some of these frequencies to 18 significant figures. Then (unless there are symmetry arguments, e.g., parity or time-reversal violation with the associated external-field reversals implemented in an experiment) there is no way to separate the new-physics background from the conventional one.

This leaves us with time- or space-dependent new-physics signals. That is, new physics could be noticed if it leads to some noise or drift in time- or space-dependent signals. For example, a uniform-in-time drift in atomic frequencies could reveal a variation of fundamental constants.

What about "new-physics" noise- or spike-like events, such as perturbations by "lumps" of dark matter? Suppose the conventional signal is interrupted by new-physics events. We could characterize such events by how long an event lasts (short/long interaction times), how frequent the events are (rare/frequent), and whether the device is sensitive to the event. For simplicity we assume that the events do not overlap, i.e., the average time between individual events is much longer than the event duration.

An event would be missed if the average time between consecutive events is longer than the typical time of continuous operation of the device (i.e., the events are rare). Unfortunately, even a single bona fide event could be discarded by an experimentalist as an outlier. Indeed, one can never guarantee that everything is fully under control, as there may still be occasional perturbations present, such as a student bumping into an optical table or a misbehaving power supply.

Essentially, rare events would be registered as such only if they are anticipated.

This argument brings us to the following conclusion: unanticipated "new physics" events would be noticed only if sizable events are frequent on the time-scale of the experiment.

There is another caveat: suppose the events are so frequent that they look like white or flicker noise in the signal. After all, it is natural to assume a Poissonian distribution of time intervals between consecutive events. Then there is a danger of "integrating out" the events.

Thereby we have to revise our statement: unanticipated "new physics" events would be noticed only if sizable events are frequent (but not too frequent) on the time-scale of the experiment.
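To put rough numbers on this criterion, here is a small sketch assuming Poissonian event statistics. The rates, run length, and the factor-of-ten "resolvability" threshold are my own illustrative choices, not quantities from any experiment:

```python
import math

def expected_detections(rate_per_day, run_days, event_duration_s):
    """Toy detectability criterion for transient events: they should be
    frequent enough to occur during a run, yet sparse enough that
    consecutive events remain resolved in time."""
    n_expected = rate_per_day * run_days
    # Poisson statistics: probability of at least one event during the run.
    p_at_least_one = 1.0 - math.exp(-n_expected)
    # "Not too frequent": mean gap between events far exceeds their duration
    # (factor of 10 is an arbitrary illustrative threshold).
    mean_gap_s = 86400.0 / rate_per_day
    resolved = mean_gap_s > 10.0 * event_duration_s
    return n_expected, p_at_least_one, resolved

# One event per year is almost surely missed in a one-day run:
print(expected_detections(rate_per_day=1 / 365, run_days=1, event_duration_s=1.0))

# Hundreds of well-separated one-second events per day are hard to dismiss:
print(expected_detections(rate_per_day=300, run_days=1, event_duration_s=1.0))
```

The two extremes bracket the "frequent but not too frequent" window argued for above.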

In practical terms, for a typical atomic-physics experiment, the events should last longer than a second and there should be hundreds of them per day of operation. Even then, the experimentalist should be gutsy enough to put his/her credibility and comfort at risk and publicly report the data as unusual. "New physics" looks for the right fellow to notice and appreciate it.

P.S. I would like to thank Dima Budker and Jeff Sherman for discussions on this topic

By monitoring the correlated time discrepancy between two spatially-separated clocks, one could search for the passage of topological defects (TD), such as the domain wall pictured here. The domain wall moves at galactic speeds, ~300 km/s. Here the clocks are assumed to be identical. Before the TD arrives at the first clock, the apparent time difference is zero, as the clocks are synchronized. As the TD passes the first clock, that clock runs faster (or slower, depending on the TD-SM coupling), and the clock time difference reaches its maximum value. The time difference stays at that level while the defect travels between the two clocks. Finally, as the defect sweeps through the second clock, the time difference vanishes. For an intercontinental-scale network, l ~ 10,000 km, the characteristic time is ~30 seconds.

Despite solid observational evidence for the existence of dark matter, its nature remains a mystery. A large and ambitious research program in particle physics assumes that dark matter is composed of heavy particle-like matter. That community hopes to see events of dark-matter particles scattering off individual nuclei. Considering the null results of the latest particle-detector experiments (see the excellent discussion here), this assumption may not hold true, and there is significant interest in alternatives.

Now what about atomic clocks? Atomic clocks are arguably the most accurate scientific instruments ever built. Modern clocks approach a fractional inaccuracy at the 10^-18 level, which translates into astonishing timepieces guaranteed to keep time to within a second over the age of the Universe. Attaining this accuracy requires that the quantum oscillator be well protected from environmental noise and that perturbations be well controlled and characterized. This opens intriguing prospects for using clocks to study subtle effects, and it is natural to ask if such accuracy can be harnessed for dark-matter searches.
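The arithmetic behind "a second over the age of the Universe" is a one-liner; the age of the Universe is taken here as 13.8 billion years:

```python
# One-line check of "a second over the age of the Universe".
seconds_per_year = 3.156e7
age_universe_s = 13.8e9 * seconds_per_year  # ~4.4e17 s

fractional_inaccuracy = 1.0 / age_universe_s
print(f"1 s over the age of the Universe ~ {fractional_inaccuracy:.1e}")
```

The result, a couple of parts in 10^18, matches the quoted clock performance.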

Posing and answering this question is the subject of our recent paper: Hunting for topological dark matter with atomic clocks, A. Derevianko and M. Pospelov, arXiv:1311.1244.

We consider one of the alternatives to heavy-particle dark matter and focus on so-called topological dark matter. The argument is that, depending on the initial quantum-field configuration at early cosmological times, light fields could lead to dark matter via coherent oscillations around the minimum of their potential, and/or form non-trivial stable field configurations in space (topological defects). The stability of this type of dark matter can be dictated by topological reasons.

I know, this sounds a bit far-fetched to an atomic physicist. Well, ferromagnets could serve as a familiar analogy. There, topological defects are domain walls separating domains of well-defined magnetization. Above the Curie point the sample is uniform, but as the temperature is lowered, the domains start to form. So one could argue that as the Universe cooled down after the Big Bang, quantum fields underwent a similar phase transition.

Generically, one could talk about 0D topological defects (monopoles), 1D defects (strings), and 2D defects (domain walls). Dark matter would form out of such defects. The light masses of the fields forming the defects could lead to a large, macroscopic size for a defect. Based on observations and simulations, astronomers have a good idea of how dark matter moves around the Solar system. The defects would fly through the Earth at galactic velocities, ~300 km/s. Now, if the defects couple (non-gravitationally) to ordinary matter, one could think of a detection scheme using sensitive listening devices, e.g., atomic clocks. In fact, one would benefit from a network of clocks, as one could cross-correlate events occurring at different locations.

Phenomenologically, the dark-matter interaction with ordinary matter can be described as a transient variation of fundamental constants. The coupling would shift atomic frequencies and thus affect time readings. During the encounter with a topological defect, as it sweeps through the network, initially synchronized clocks would become desynchronized. This is illustrated in the figure.
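The figure's step-like two-clock signature can be sketched as an idealized signal. This is my own illustrative toy: the nanosecond amplitude is an arbitrary placeholder, clock noise is ignored, and the wall is taken to hit the first clock at t = 0:

```python
# Idealized two-clock signature of a domain-wall crossing (toy sketch:
# unit nanosecond amplitude, no clock noise, wall hits clock 1 at t = 0).
v = 300e3         # defect speed, m/s (typical galactic velocity)
l = 10_000e3      # separation between the two clocks, m (intercontinental)
transit = l / v   # time the wall spends between the clocks, s

def time_difference(t, amplitude_ns=1.0):
    """Apparent time difference between the two clocks at time t."""
    if t < 0:
        return 0.0            # wall has not reached clock 1: synchronized
    if t < transit:
        return amplitude_ns   # wall between the clocks: maximum difference
    return 0.0                # wall passed clock 2: re-synchronized

print(f"characteristic time l/v = {transit:.0f} s")  # ~30 s, as in the caption
print([time_difference(t) for t in (-10.0, 15.0, 50.0)])
```

The zero-plateau-zero shape, with plateau duration l/v, is the cross-correlation template one would search for in the network data.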

The real advantage of clocks is that they are ubiquitous. Several networks of atomic clocks are already operational. Perhaps the best known are the Rb and Cs atomic clocks on board the satellites of the Global Positioning System (GPS) and other satellite navigation systems. Currently there are about 30 satellites in the GPS constellation, orbiting the Earth at an orbital radius of 26,600 km with a period of half a sidereal day. As defects sweep through the GPS constellation, satellite clock readings are affected. For two diametrically-opposed satellites, the maximum time delay between clock perturbations would be ~200 s, assuming a sweep at a typical speed of 300 km/s. Different types of topological defects (e.g., domain walls versus monopoles) would yield distinct cross-correlation signatures. While GPS is affected by a multitude of systematic effects, e.g., solar flares and temperature and clock-frequency modulations as the satellites come in and out of the Earth's shadow, none of the conventional effects would propagate through the network at 300 km/s. Additional constraints can come from analyzing the extensive terrestrial network of atomic clocks at GPS tracking stations.
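The ~200 s figure is easy to check with a back-of-the-envelope sketch, assuming the wall sweeps along the line joining two diametrically-opposed satellites so that it crosses the full constellation diameter:

```python
# Back-of-the-envelope check of the ~200 s maximum lag quoted above
# (wall assumed to sweep along the line joining two diametrically-
# opposed satellites, i.e. across the full constellation diameter).
r_orbit = 26_600e3   # GPS orbital radius, m
v_defect = 300e3     # typical galactic sweep speed, m/s

max_lag = 2 * r_orbit / v_defect
print(f"max delay between clock perturbations ~ {max_lag:.0f} s")
```

Oblique sweep directions shorten the lag, so ~200 s is the upper end of the expected range.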

The performance of GPS on-board clocks certainly lags behind state-of-the-art laboratory clocks. Focusing on laboratory clocks, one could carry out a dark-matter search employing the vast network of atomic clocks at national standards laboratories used for evaluating the TAI timescale. Moreover, several elements of high-quality optical links for clock comparisons have already been demonstrated in Europe, with a 920 km link connecting two laboratories in Germany.

Naturally, I hope that this proposal motivates dark-matter searches with atomic-physics tools, pushing our "listening capabilities" to the next level. It could provide fundamental-physics motivation for building high-quality terrestrial and space-based networks of clocks. As the detection schemes would benefit from improved accuracy of short-term time and frequency determination, following this path could stimulate advances in ultra-stable atomic clocks and Heisenberg-limited time-keeping.

When invited to present colloquium talks, I usually talk about atomic clocks, and I am sometimes asked what a theorist is doing in this highly experimental field. I thought about this question for a while; then, on a plane, browsing through an in-flight catalog (you know, the one that advertises stuff you may want but really do not need), I realized how to answer it.

There was an ad for a beautiful wristwatch, accompanied by a story on the fine craft of clockmaking. The story compared making a wristwatch to building a skyscraper out of matchsticks. So I thought: by that account, experimentalists working on atomic clocks are performing miracles. And if these experimentalists are walking on water, my role as a theorist is to show them where the stepping stones are.