On negative temperatures, negative pressures, and (maybe) dark energy

A portion of an xkcd comic dealing with intuitive temperature measurements. (The full comic also deals with length, mass, and so forth. Warning: contains swears.)

Probably the most covered physics story from last week involved a new paper achieving negative absolute temperatures in an unusual physical system. Negative temperatures are both less weird and far more weird than we might expect, so it’s worth taking a little time to dissect their meaning and the potential implications for studying dark energy in the lab—one of the more intriguing claims made by the authors.

Temperature is a concept we tend to understand on an intuitive level: we understand that some temperatures are cold, some are hot. Those with a little physics or chemistry background also know about absolute temperature, measured in Kelvins. Absolute temperature is usually described as a measure of the average motion of particles; we learn that absolute zero is the temperature at which all motion stops. Those concepts aren’t lies, but they aren’t the whole story either. The truth is that absolute temperature isn’t quite a measure of motion, but something more subtle.

Disclaimer: I will use anthropomorphic language in this post, speaking of systems “wanting” to do things, or ascribing emotions to them. That’s a rhetorical convenience, so please don’t take it literally.

To understand the true meaning of temperature, let’s start with a set of atoms in a row. (I feel like I should have Linus from Peanuts come out onto an empty stage to explain this. “That’s what temperature is all about, Charlie Brown!”) These atoms are paramagnetic: each has an unpaired electron in its outer shell, and when an external magnetic field is applied, those electrons tend to align parallel with the field. In other words, paramagnets are materials that are only magnetic when exposed to an external magnetic field.

A real system of atoms has a lot of interactions: the electrons interact with each other and with their nuclei, the atoms themselves can jiggle around, and so forth. Let’s ignore all that for the moment and just look at the spins. (For those keeping track at home, this is known as the Ising model of magnetism.) When all the spins are aligned with the magnetic field, that’s the happiest the paramagnet can be: it’s the lowest energy the system can have. Recall that low energy is good, from a physics point of view: just like a car wants to roll to the bottom of a hill, a system wants to settle into the lowest energy configuration. On the other hand, if all the spins were aligned antiparallel to the magnetic field, then the system would be in its highest energy configuration; you could induce that behavior by letting everything settle parallel to the magnetic field, then abruptly switching the field’s direction. That means our ideal paramagnet has both a maximum and a minimum amount of available energy, set by the strength of the magnetic field.
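To make the bookkeeping concrete, here’s a minimal sketch of the energy of such an ideal paramagnet; the function name and the units (the magnetic moment times the field strength set to 1) are my own invention, not anything from the paper. Each aligned spin lowers the energy, each anti-aligned spin raises it:

```python
# Energy of an ideal paramagnet in the Ising picture (no spin-spin
# interactions): each spin aligned with the field contributes -mu*B,
# each anti-aligned spin contributes +mu*B.  Units are arbitrary.

def paramagnet_energy(n_up, n_total, mu_B=1.0):
    """Total energy of n_total spins, n_up of them parallel to the field."""
    n_down = n_total - n_up
    return -mu_B * (n_up - n_down)

N = 100
print(paramagnet_energy(N, N))       # all aligned: minimum energy, -100.0
print(paramagnet_energy(0, N))       # all anti-aligned: maximum energy, +100.0
print(paramagnet_energy(N // 2, N))  # half and half: 0.0
```

Note that the energy is bounded on both ends, which is exactly the unusual feature everything below depends on.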

Entropy, energy, and the true meaning of temperature

Entropy as a function of energy in an ideal paramagnet. For simplicity, I made the maximum entropy correspond to zero energy (since what matters is energy differences, not the absolute number for energy).

Energy is just one side of the story. We must also consider entropy, the measure of the disorder a system possesses. (See my previous post about entropy and its meaning in gas physics.) Both the highest and the lowest energy states of the system are as ordered as can be, so they both have zero entropy. The system has its maximum entropy when the spins are split equally between aligned and anti-aligned with the magnetic field. As the graph shows, we can draw the entropy as a function of energy; it makes a nice smooth arc, nearly a parabola.
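That arc comes straight from counting: the entropy is (Boltzmann’s constant times) the logarithm of the number of spin arrangements sharing the same energy. A small sketch, with k_B set to 1 and a helper function of my own naming:

```python
from math import comb, log

def entropy(n_up, n_total):
    """Boltzmann entropy S = ln(multiplicity), with k_B = 1.

    The multiplicity is the number of ways to choose which
    n_up of the n_total spins point along the field."""
    return log(comb(n_total, n_up))

N = 100
print(entropy(0, N), entropy(N, N))  # both 0.0: the fully ordered states
print(entropy(N // 2, N))            # the maximum, at half up / half down
```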

Here’s where things get fun. The Second Law of Thermodynamics tells us that all systems tend to evolve toward the state with highest entropy, in the absence of external influences. Since low energy corresponds to high order, and thus low entropy, such a system tends to move from low energy states to higher energy states: it has a deficit of energy, and is willing to accept more to reach the maximum entropy state. That’s what we see on the left side of the diagram: a normal kind of system.

However, the right side of the curve is very different: here the highly ordered (low entropy) state is higher energy! That means the system likes to give up energy to achieve maximum entropy, the opposite of the usual state of affairs: it’s energetically desirable to give up energy and become more disordered.

The temperature as a function of energy in our ideal paramagnet. Notice that absolute zero is still inaccessible! You can’t cool down the magnet to zero. However, the states where more spins are down than up – the right side of the graph – want to give up energy to reach maximum entropy. That corresponds to negative absolute temperature.

So now we’re finally able to define temperature: it measures how a system’s entropy changes as its energy changes. A system that likes to accept energy to achieve maximum entropy has a low temperature. At very low temperatures, things tend to be very ordered, but the slightest disturbance can mess up that order…and it’s very easy to increase the temperature of such a system. On the other hand, a system that’s close to maximum disorder is at high temperature. You can keep increasing the temperature forever, but you won’t make the entropy grow much more.

However, if a system likes to give up energy to achieve maximum entropy, the same relationship between entropy and energy yields a temperature that is negative. It’s not “normal” to want to give up energy: it can only happen if there’s a maximum energy available in the system that also corresponds to an ordered state. The plot on the right illustrates this point.
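In symbols, 1/T is the slope of the entropy-versus-energy curve. A finite-difference sketch for the ideal paramagnet (again with k_B and the spin energy scale set to 1, and a helper name of my own invention) shows the slope flipping sign between the two sides of the curve:

```python
from math import comb, log

# 1/T = dS/dE with k_B = 1.  For the ideal paramagnet, flipping one
# spin from anti-aligned to aligned changes the energy by -2*mu*B, so
# a finite difference over neighboring macrostates approximates the slope.

def inverse_temperature(n_up, n_total, mu_B=1.0):
    dS = log(comb(n_total, n_up + 1)) - log(comb(n_total, n_up))
    dE = -2.0 * mu_B  # energy change when one more spin aligns with the field
    return dS / dE

N = 100
print(inverse_temperature(90, N))  # mostly aligned (low energy): 1/T > 0
print(inverse_temperature(10, N))  # mostly anti-aligned (high energy): 1/T < 0
```

The right side of the entropy curve, where the system gains entropy by shedding energy, is exactly where 1/T (and hence T) comes out negative.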

Note that absolute zero doesn’t really lie between the branches of the temperature curves: if you want to get from the right side to the left side of the graph, you have to create a radical discontinuity in behavior. In other words, you don’t get to negative temperatures by cooling the system to absolute zero, then past it. Absolute zero is still a barrier. (In this sense, it’s kind of analogous to the speed of light: you can have particles that always move slower than light, and you can (maybe) have particles moving faster than light, but nothing can cross that barrier. Don’t take the analogy too far, though! Negative temperatures are real; tachyons probably aren’t.)

Negative temperatures in the lab

Edward Purcell and R. V. Pound performed the first experiment achieving negative temperatures in 1951, using a paramagnetic system. It’s not as easy as I’ve described here: in real life the spins of the electrons interact with each other, and atoms in a magnetic material vibrate. That means that on top of the energy and entropy I discussed, there are a bunch of additional effects, which skew the results and can prevent negative temperature states from existing. Similarly, in a gas or ordinary solid, you can always increase the energy of the system. (There’s a limit to the speed of a particle, but no limit on its energy.) Thus, the usual common-sense relationship between the average speed of particle motion and temperature usually holds up.

Some reporting (which I’ll be nice and call “lazy”) missed the point of the recent research paper, and made nonsensical comments mentioning refutations of thermodynamics or violations of absolute zero. (In the interests of full disclosure: in a Twitter conversation, I made a couple of statements that weren’t correct. I said that negative temperatures were an inherently quantum effect, which isn’t actually true. In practice, experiments involving negative temperatures have involved exploiting quantum phenomena, including the paramagnetic system described above, but theoretically you could make a negative temperature system without that.)

The new experiment is exciting because it didn’t exploit electron spins. Instead, the researchers trapped very cold atoms in an optical lattice: an arrangement of crossed laser beams that produce standing waves of crests and troughs, with the atoms sitting in the troughs. In other words, it’s kind of like a solid in the sense that the atoms are ordered, but they aren’t electrically bound to each other as they would be in a real material. However, if the troughs are sufficiently shallow, the atoms can move from one to another, via the process of quantum tunneling.

The state of the system is then defined by the occupation of the different lattice sites; the system’s energy is determined by the occupancy and the shape of the optical lattice. The researchers achieved a maximum energy for the system by carefully tuning the interactions between the atoms. They used potassium atoms, which are Mott insulators: metals that don’t conduct electricity due to electron-electron repulsion in a material. (The previous sentence was completely in error. Please see the first comment below.) However, these atoms are also bosons: particles that like to pile into the same quantum state at sufficiently cold temperatures. That careful balance of attractive and repulsive properties produces a bound on energy both above and below, akin to the paramagnetic case above, but for atom movement.

That’s incredibly interesting in its own right, but it has another property. We can construct a similar relationship between entropy and volume, as we did for entropy and energy. The analogous quantity to temperature in this relationship is pressure. In other words, the negative temperature states correspond to negative pressures: the atoms react to being squeezed by collapsing into smaller volumes, whereas increasing their separation increases the energy of the system. That’s analogous to dark energy: the stupid name we give to the mysterious substance that drives cosmic acceleration.
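For the record, the parallel is standard thermodynamics: temperature and pressure are both defined as derivatives of the entropy, one with respect to energy and one with respect to volume (sketched here with k_B folded in):

```latex
\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V},
\qquad
\frac{p}{T} = \left(\frac{\partial S}{\partial V}\right)_{E}
```

So if the entropy still grows with volume while T is negative, the pressure p must come out negative as well, which is the connection the paragraph above is drawing.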

I say “analogous” deliberately: the negative temperature, negative pressure states were carefully constructed to have maximum energy, whereas dark energy seems to correspond to a constant (or nearly constant) density of energy. No atoms are involved in dark energy! However, such bizarre systems could allow us to simulate dark energy in the lab, since experimenters can tune the parameters of the system to behave in all sorts of interesting ways. I look forward to seeing where this type of experiment will lead us.

References/Further Reading

Many other people have written about the negative temperature story and the concepts underlying it, so I dithered quite a bit before writing this post. Here are some of the excellent posts I’ve seen, and some other resources for further reading.

The always excellent Chad Orzel takes a slightly different approach, talking about temperature in terms of probabilities. I avoided probability language in my post, but Chad’s post is more rigorous in that respect.

John Timmer, my editor at Ars Technica, focuses on the recent research paper, so he provides a lot more context for the experiment.


You write that:
” They used potassium atoms, which are Mott insulators: metals that don’t conduct electricity due to electron-electron repulsion in a material.”

Actually, the Mott insulator portion of this has nothing to do with intrinsic properties of potassium– they used potassium because it’s a reasonably convenient atom for laser cooling experiments, and has magnetic properties that allow them to change the collisional interactions in a useful way. The Mott insulating phase they use involves the atoms in the optical lattice– by adjusting the strength of the lattice, they can create a phase of the gas of atoms analogous to the Mott insulating phase in a solid, where there is one and only one atom at each site of the lattice.

The Mott insulator doesn’t have anything to do with achieving the negative temperatures directly, either, it’s just a convenient trick that they use to move between two different configurations of the experiment.

I hope to have time this weekend or next week to write up a post about the experimental details, and put that up. In my copious free time.

Their discussion of Mott insulators in the paper was really brief and confusing. I wouldn’t have any idea what it meant if I hadn’t been following this group’s work for a long time (my post-doc work did something closely related, but we wrote about it in different language).

Great post and great analogies! I’m just a bit concerned with the portion that states that the 2nd law expresses a system looks for its state of higher entropy when in fact it is the entropy of the universe the one that has to increase in order for a process to take place spontaneously. Isn’t it a bit misleading to just compare the energy and the entropy of a system without considering the entropy change within the surroundings? I know it would make for a more complex analogy. Am I missing something here?

This is an isolated system in the thought experiment: no need to consider the wider surroundings (much less the whole Universe). Much of the time, you can consider real systems to be only loosely attached to the environment, at least on the first level of approximation.

Great post. I’m a bit confused by the example of paramagnetic atoms in a “normal state”: how is the system at rest with positive temperature in terms of energy and entropy? At the beginning you implicitly say that it is at energy 0 and maximum entropy, but afterwards I understand it would spontaneously evolve to lower energy AND lower entropy (to the left of the graph), which seems odd…

You can’t have zero energy at positive temperatures, simply because the spins of the atoms will be fluctuating around. Without a place to dump that extra energy, you end up violating the conservation of energy – no energy created or destroyed.