Category Archives: Statistical Mechanics

I’ve referred to the fluctuation-dissipation theorem many times on this blog (see here and here for instance), but I feel like it has been somewhat of an injustice that I have yet to devote a post to this topic. A specialized form of the theorem was first formulated by Einstein in his 1905 paper on Brownian motion. It was then extended to electrical circuits by Nyquist and generalized by several authors, including Callen and Welton (pdf!) and R. Kubo (pdf!). The Callen and Welton paper is particularly superlative, not just for its content but also for its lucid scientific writing. The fluctuation-dissipation theorem relates the fluctuations of a system (an equilibrium property) to the energy dissipated by a perturbing external source (a manifestly non-equilibrium property).

In this post, which is the first part of two, I’ll deal mostly with the non-equilibrium part. In particular, I’ll show that the response function of a system is related to the energy dissipation, using the harmonic oscillator as an example. I hope that this post will provide a justification as to why it is the imaginary part of a response function that quantifies the dissipated energy. I will also avoid the use of Green’s functions in these posts, which, for some reason, often get thrown in when teaching linear response theory but are absolutely unnecessary for understanding the basic concepts.

It is useful to think about the response function, $\chi(t-t')$, as how the harmonic oscillator responds to an external source. This can best be seen by writing the following suggestive relation:

$x(t) = \int_{-\infty}^{\infty} \chi(t-t')F(t')\,dt'$

Response functions tend to measure how systems evolve after being perturbed by a point-source (i.e. a delta-function source) and therefore quantify how a system relaxes back to equilibrium after being thrown slightly off balance.
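To make this concrete, here is a quick numerical sketch (the oscillator parameters are arbitrary choices of mine): kick a damped harmonic oscillator with a delta-function force, and the trajectory it traces out as it relaxes back to equilibrium is precisely the response function $\chi(t)$.

```python
import numpy as np

# Damped oscillator m*x'' + m*gamma*x' + m*w0^2*x = F(t); parameters are arbitrary
m, gamma, w0 = 1.0, 0.3, 2.0
Omega = np.sqrt(w0**2 - gamma**2 / 4)  # damped oscillation frequency

# Analytic response function chi(t): the response to F(t) = delta(t)
def chi_t(t):
    return np.exp(-gamma * t / 2) * np.sin(Omega * t) / (m * Omega)

# A delta-function kick at t = 0 just sets v(0) = 1/m; afterwards the
# oscillator evolves freely, so integrate the force-free equation of motion
dt = 1e-4
t = np.arange(0.0, 10.0, dt)
x, v = 0.0, 1.0 / m
xs = []
for _ in t:
    xs.append(x)
    v += (-gamma * v - w0**2 * x) * dt  # semi-implicit Euler step
    x += v * dt
xs = np.array(xs)

err = np.max(np.abs(xs - chi_t(t)))
print(err)  # kicked trajectory matches chi(t) to integration accuracy
```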

Now, look at what happens when we examine the energy dissipated by the damped harmonic oscillator. In this system the energy dissipated can be expressed as the time integral of the force multiplied by the velocity, and we can write this in the Fourier domain like so:

$\Delta E = \int_{-\infty}^{\infty} F(t)\dot{x}(t)\,dt = \int\!\!\int\!\!\int \frac{d\omega}{2\pi}\frac{d\omega'}{2\pi}\, F(\omega)\,(-i\omega')\chi(\omega')F(\omega')\,e^{-i(\omega+\omega')t}\,dt$

One can write this more simply by carrying out the integral over $t$ (which yields a delta function, $2\pi\delta(\omega+\omega')$) and using $F(-\omega) = F^*(\omega)$ for a real force:

$\Delta E = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, (-i\omega)\chi(\omega)|F(\omega)|^2$

Noticing that the energy dissipated has to be real, and that $|F(\omega)|^2$ is also real, it turns out that only the imaginary part of the response function, $\chi''(\omega)$, can contribute to the dissipated energy (the term containing the real part, $\chi'(\omega)$, is odd in $\omega$ and integrates to zero), so that we can write:

$\Delta E = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, \omega\chi''(\omega)|F(\omega)|^2$

Although I try to avoid heavy mathematics on this blog, I hope that this derivation was not too difficult to follow. It turns out that only the imaginary part of the response function is related to energy dissipation.
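As a sanity check, one can verify this numerically. The snippet below (with arbitrary oscillator and drive parameters) drives the damped oscillator sinusoidally and integrates $F\dot{x}$ over one steady-state period; for a monochromatic drive $F(t) = F_0\cos(\omega t)$ the formula above reduces to $\Delta E_{\rm cycle} = \pi F_0^2 \chi''(\omega)$.

```python
import numpy as np

# Arbitrary oscillator and drive parameters
m, gamma, w0 = 1.0, 0.3, 2.0
w, F0 = 1.5, 1.0

# Response function chi(w) for m*x'' + m*gamma*x' + m*w0^2*x = F(t)
chi = 1.0 / (m * (w0**2 - w**2 - 1j * gamma * w))

# Steady-state motion under F(t) = F0*cos(w*t) = Re[F0 e^{-iwt}]
T = 2 * np.pi / w
t = np.linspace(0.0, T, 200001)
x = np.real(chi * F0 * np.exp(-1j * w * t))
v = np.real(-1j * w * chi * F0 * np.exp(-1j * w * t))
F = F0 * np.cos(w * t)

# Energy dissipated over one cycle: time integral of F(t)*v(t)
dt = t[1] - t[0]
E_numeric = np.sum(F[:-1] * v[:-1]) * dt
E_formula = np.pi * F0**2 * np.imag(chi)  # only Im(chi) enters

print(E_numeric, E_formula)
```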

Intuitively, one can see that the imaginary part of the response has to be related to dissipation, because it is the part of the response function that possesses a phase lag. The real part, on the other hand, is in phase with the driving force and does not possess a phase lag (i.e. $\chi = \chi'$ when there is no phase lag). One can see from the plot below that damping (i.e. dissipation) is quantified by a phase lag.

Damping is usually associated with a 90 degree phase lag
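One can also read the phase lag straight off the response function of the damped oscillator, $\chi(\omega) = 1/[m(\omega_0^2 - \omega^2 - i\gamma\omega)]$. A short check (again with arbitrary parameters): far below resonance the response is nearly in phase, at resonance it lags by exactly 90 degrees, and far above resonance the lag approaches 180 degrees.

```python
import numpy as np

# Arbitrary oscillator parameters
m, gamma, w0 = 1.0, 0.3, 2.0

def chi(w):
    # Response function of the damped oscillator, e^{-i w t} convention
    return 1.0 / (m * (w0**2 - w**2 - 1j * gamma * w))

def lag_deg(w):
    # Phase by which x(t) lags the driving force, in degrees
    return np.degrees(np.angle(chi(w)))

print(lag_deg(0.1))   # far below resonance: nearly in phase
print(lag_deg(w0))    # on resonance: 90 degree lag
print(lag_deg(10.0))  # far above resonance: approaching 180 degrees
```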

Next up, I will show how the imaginary part of the response function is related to equilibrium fluctuations!

When writing on this blog, I try to share nuggets here and there of phenomena, experiments, sociological observations and other peoples’ opinions I find illuminating. Unfortunately, this format can leave readers wanting when it comes to some sort of coherent message. Precisely because of this, I would like to revisit a few blog posts I’ve written in the past and highlight the common vein running through them.

Condensed matter physicists of the last couple generations have grown up with the ingrained idea that “More is Different”, a concept first coherently put forth by P. W. Anderson and carried further by others. Most discussions of these ideas tend to concentrate on the notion that there is a hierarchy of disciplines where each discipline is not logically dependent on the one beneath it. For instance, in solid state physics, we do not need to start out at the level of quarks and build up from there to obtain many properties of matter. More profoundly, one can observe phenomena which distinctly arise in the context of condensed matter physics, such as superconductivity, the quantum Hall effect and ferromagnetism, that one wouldn’t necessarily predict by just studying particle physics.

While I have no objection to these claims (and actually agree with them quite strongly), it seems to me that one rather simple (almost trivial) fact is infrequently mentioned when these concepts are discussed. That is the role of consistency.

While it is true that one does not necessarily require the lower level theory to describe the theories at the higher level, these theories do need to be consistent with each other. This is why, after the publication of BCS theory, there were a slew of theoretical papers that tried to come to terms with various aspects of the theory (such as the approximation of particle number non-conservation and features associated with gauge invariance (pdf!)).

This requirement of consistency is what makes concepts like the Bohr-van Leeuwen theorem and Gibbs paradox so important. They bridge two levels of the “More is Different” hierarchy, exposing inconsistencies between the higher level theory (classical mechanics) and the lower level (the micro realm).

In the case of the Bohr-van Leeuwen theorem, it shows that classical mechanics, when applied to the microscopic scale, is not consistent with the observation of ferromagnetism. In the Gibbs paradox case, classical mechanics, when not taking into consideration particle indistinguishability (a quantum mechanical concept), is inconsistent with the idea that the entropy must remain the same when dividing a gas tank into two equal partitions.

Today, we have the issue that ideas from the micro realm (quantum mechanics) appear to be inconsistent with our ideas on the macroscopic scale. This is why matter interference experiments are still carried out today. It is imperative to know why it is possible for a C60 molecule (or a 10,000 amu molecule) to be described with a single wavefunction in a Schrodinger-like scheme, whereas this seems implausible for, say, a cat. There does again appear to be some inconsistency here, though there are frameworks, like decoherence, that attempt to get around it (no consensus has been reached, however). I also can’t help but mention that non-locality, à la Bell, also seems totally at odds with one’s intuition on the macro-scale.

What I want to stress is that the inconsistency theorems (or paradoxes) contained seeds of some of the most important theoretical advances in physics. This is itself not a radical concept, but it often gets neglected when a generation grows up with a deep-rooted “More is Different” scientific outlook. We sometimes forget to look for concepts that bridge disparate levels of the hierarchy and subsequently look for inconsistencies between them.

A couple weeks ago, I wrote a post about the Gibbs paradox, a case where failing to take particle indistinguishability into account leads to some bizarre consequences on the macroscopic scale. In particular, it suggested that entropy should increase when a partition separating two identical volumes of a monatomic gas is removed. This paradox therefore contained within it the seeds of quantum mechanics (through particle indistinguishability), unbeknownst to Gibbs and his contemporaries.

Another historic case where a logical disconnect between the micro- and macroscale arose was in the context of the Bohr-van Leeuwen theorem. Colloquially, the theorem says that magnetism of any form (ferro-, dia-, paramagnetism, etc.) cannot exist within the realm of classical mechanics in equilibrium. It is quite easy to prove actually, so I’ll quickly sketch the main ideas. Firstly, the Hamiltonian with any electromagnetic field can be written in the form:

$H = \sum_i \frac{(\mathbf{p}_i - q\mathbf{A}(\mathbf{x}_i))^2}{2m} + U(\mathbf{x}_1, ..., \mathbf{x}_N)$

Now, because the classical partition function is of the form:

$Z \propto \int \prod_i d^3x_i\, d^3p_i\; e^{-\beta H}$

we can just make the substitution:

$\mathbf{p}_i' = \mathbf{p}_i - q\mathbf{A}(\mathbf{x}_i)$

without having to change the limits of the integral. Therefore, with this substitution, the partition function ends up looking like one without the presence of the vector potential (i.e. the partition function is independent of the vector potential and therefore cannot exhibit any magnetism!).
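The momentum shift at the heart of the proof is easy to check numerically. Here is a one-dimensional sketch (units and the value of the vector-potential term are arbitrary): shifting the Gaussian momentum integrand by $q\mathbf{A}$ slides it along the infinite integration axis without changing its area, so no $\mathbf{A}$-dependence survives in the partition function.

```python
import numpy as np

beta, m, qA = 1.0, 1.0, 3.7  # arbitrary; qA stands in for q*A(x) at some point x

# Momentum integral of the classical Boltzmann weight, with and without A
p = np.linspace(-50.0, 50.0, 400001)
dp = p[1] - p[0]
Z_with_A = np.sum(np.exp(-beta * (p - qA)**2 / (2 * m))) * dp
Z_without = np.sum(np.exp(-beta * p**2 / (2 * m))) * dp

# The substitution p' = p - q*A leaves the infinite integral unchanged:
# the partition function is blind to the vector potential
print(Z_with_A, Z_without)
```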

This theorem suggests, like in the Gibbs paradox case, that there is a logical inconsistency when one tries to apply macroscale physics (classical mechanics) to the microscale and attempts to build up from there (by applying statistical mechanics). The impressive thing about this kind of reasoning is that it requires little experimental input but nonetheless exhibits far-reaching consequences regarding a prevailing paradigm (in this case, classical mechanics).

Since the quantum mechanical revolution, it seems like we have the opposite problem, however. Quantum mechanics resolves both the Gibbs paradox and the Bohr-van Leeuwen theorem, but presents us with issues when we try to apply the microscale ideas to the macroscale!

What I mean is that while quantum mechanics is the rule of law on the microscale, we arrive at problems like the Schrodinger cat when we try to apply such reasoning on the macroscale. Furthermore, Bell’s theorem seems to disappear when we look at the world on the macroscale. One wonders whether such ideas, similar to the Gibbs paradox and the Bohr-van Leeuwen theorem, are subtle precursors suggesting where the limits of quantum mechanics may actually lie.

An Interesting Research Avenue: A couple months ago, Stephane Mangin of the Institut Jean Lamour gave a talk on all-optical helicity-dependent magnetic switching (what a mouthful!) at Argonne, which was fascinating. I was reminded of the talk yesterday when a review article on the topic appeared on the arXiv. The basic phenomenon is that in certain materials, one is able to send in a femtosecond laser pulse onto a magnetic material and switch the direction of magnetization using circularly polarized light. This effect is reversible (in the sense that circularly polarized light in the opposite direction will result in a magnetization in the opposite direction) and is reproducible. During the talk, Mangin was able to show us some remarkable videos of the phenomenon, which unfortunately, I wasn’t able to find online.

The initial study that sparked a lot of this work was this paper by Beaurepaire et al., which showed ultrafast demagnetization in nickel films in 1996, a whole 20 years ago! The more recent study that triggered most of the current work was this paper by Stanciu et al. in which it was shown that the magnetization direction could be switched with a circularly polarized 40-femtosecond laser pulse on ferromagnetic film alloys of GdFeCo. For a while, it was thought that this effect was specific to the GdFeCo material class, but it has since been shown that all-optical helicity-dependent magnetic switching is actually a more general phenomenon and has been observed now in many materials (see this paper by Mangin and co-workers for example). It will be interesting to see how this research plays out with respect to the magnetic storage industry. The ability to read and write on the femtosecond to picosecond timescale is definitely something to watch out for.

Update: After my post on the Gibbs paradox last week, a few readers pointed out that there exists some controversy over the textbook explanation that I presented. I am grateful that they provided links to some articles discussing the subtleties involved in the paradox. Although one commenter suggested Appendix D of E. Atlee Jackson’s textbook, I was not able to get a hold of this. It looks like a promising textbook, so I may end up just buying it, however!

The links that I found helpful about the Gibbs paradox were Jaynes’ article (pdf!) and this article by R. Swendsen. In particular, I found Jaynes’ discussion of Whifnium and Whoofnium interesting for the role that ignorance and knowledge play in our ability to extract work from partitioned gases. Swendsen tries to redefine entropy classically (what he calls Boltzmann’s definition of entropy), which I have to think about a little more. But at the moment, I don’t think I buy his argument that this resolves the Gibbs paradox completely.

Thomas Kuhn, the famous philosopher of science, envisioned that scientific revolutions take place when “an increasing number of epicycles” arise, resulting in the untenability of a prevailing theory. Just in case you aren’t familiar, the “epicycles” are a reference to the Ptolemaic world-view with the earth at the center of the universe. To explain the trajectories of the other planets, Ptolemaic theory required that the planets circulate the earth in complicated trajectories called epicycles. These convoluted epicycles were no longer needed once the Copernican revolution took place, and it was realized that our solar system was heliocentric.

This post is specifically about the Gibbs paradox, which provided one of the first examples of an “epicycle” in classical mechanics. If you google Gibbs paradox, you will come up with several different explanations, which are all seemingly related, but don’t quite all tell the same story. So instead of following Gibbs’ original arguments, I’ll just go by the version which is the easiest (in my mind) to follow.

Imagine a large box that is partitioned in two, with volume V on either side, filled with helium gas of the same pressure, temperature, etc. and at equilibrium (i.e. the gases are identical). The total entropy in this scenario is $S_{tot} = 2S$, where $S$ is the entropy of the gas on each side. Now, imagine that the partition is removed. The question Gibbs asked himself was: does the entropy increase?

Now, from our perspective, this might seem like an almost silly question, but Gibbs had asked himself this question in 1875, before the advent of quantum mechanics. This is relevant because in classical mechanics, particles are always distinguishable (i.e. they can be “tagged” by their trajectories). Hence, if one calculates the entropy increase assuming distinguishable particles, one gets the result that the entropy increases by $\Delta S = 2Nk_B\ln{2}$.

This is totally at odds with one’s intuition (if one has any intuition when it comes to entropy!) and with the extensive nature of entropy (that entropy scales with the system size). Since the size of the larger container of volume $2V$ containing identical gases (i.e. same pressure and temperature) does not change when removing the partition, neither should the entropy. And most damningly, if one were to place the partition back where it was before, one would naively think that the entropy would return to $2S$, suggesting that the entropy decreased when returning the partition.
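The two ways of counting can be made explicit with a few lines of Python (a toy calculation; terms linear in $N$ that cancel in the before/after comparison are dropped). With distinguishable-particle counting, $S/k_B \sim N\ln V$ and removing the partition “creates” $2Nk_B\ln 2$ of entropy; dividing the phase-space volume by $N!$ gives $S/k_B \sim N\ln(V/N)$, and the spurious increase vanishes.

```python
import numpy as np

N, V = 1000.0, 1.0  # particles and volume on each side of the partition (arbitrary)

# Ideal-gas entropy in units of k_B, keeping only the V- and N-dependence
# that survives the before/after comparison below
def S_distinguishable(N, V):
    return N * np.log(V)        # classical counting, no N! correction

def S_indistinguishable(N, V):
    return N * np.log(V / N)    # phase space divided by N! (via Stirling)

# Before: two isolated volumes V with N particles each.
# After: one volume 2V with 2N particles.
dS_dist = S_distinguishable(2 * N, 2 * V) - 2 * S_distinguishable(N, V)
dS_indist = S_indistinguishable(2 * N, 2 * V) - 2 * S_indistinguishable(N, V)

print(dS_dist)    # spurious entropy of "mixing" a gas with itself: 2*N*ln(2)
print(dS_indist)  # extensivity restored: no change
```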

The resolution to this paradox is that the particles (helium atoms in this case) are completely indistinguishable. Gibbs had indeed recognized this as the resolution to the problem at the time, but considered it a counting problem.

Little did he know that the seeds giving rise to this seemingly benign problem required the complete overthrow of classical mechanics in favor of quantum mechanics. Only in quantum mechanics do truly identical particles exist. Note that nowhere in the Gibbs paradox does it suggest what the next theory will look like – it only points out a severe shortcoming of classical mechanics. Looked at in this light, it is amusing to think about what sorts of epicycles are hiding within our seemingly unshakable theories of quantum mechanics and general relativity, perhaps even in plain sight.