Disclaimer

Fermi gas

Review

Continuing a discussion of the section 8.1 content of [1].

We found

With no spin

Fig 1.1: Occupancy at low temperature limit

Fig 1.2: Volume integral over momentum up to Fermi energy limit

gives

This is for periodic boundary conditions \footnote{I filled in details in the last lecture using a particle in a box, whereas these periodic conditions were intended. Both approaches achieve the same result}, where

Moving on

with

this gives

Over all dimensions

so that

Again

Example: Spin considerations

{example:basicStatMechLecture16:1}{

This gives us

and again

}

High Temperatures

Now we want to look at the higher temperature range, where the occupancy may look like fig. 1.3

Fig 1.3: Occupancy at higher temperatures

so that for large we have

Mathematica (or integration by parts) tells us that

so we have

Introducing for the thermal de Broglie wavelength,

we have
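A quick numerical sketch of the scale of this wavelength (the standard definition \(\lambda = h/\sqrt{2\pi m k_{\mathrm{B}} T}\) is assumed here, and the helium-at-room-temperature values are illustrative, not from the text):

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength [m] for mass m [kg] at temperature T [K]."""
    return H / math.sqrt(2.0 * math.pi * m * KB * T)

m_He = 6.6464731e-27          # helium-4 mass [kg]
lam = thermal_wavelength(m_He, 300.0)
```

For helium at 300 K this comes out to roughly half an angstrom, much smaller than typical interatomic spacings, which is why the classical high temperature treatment applies there.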

Does it make any sense to have density as a function of temperature? A plot of the density, inappropriately extended down to low temperatures, is found in fig. 1.4 for a few arbitrarily chosen numerical values of the chemical potential , where we see that it drops to zero with temperature. I suppose that makes sense if we are not holding volume constant.

Fig 1.4: Density as a function of temperature

We can write

or (taking (and/or volume?) as a constant) we have for large temperatures

The chemical potential is plotted in fig. 1.5, whereas this function is plotted in fig. 1.6. The contributions to from the term are dropped for the high temperature approximation.


Last time we found that the low temperature behaviour of the chemical potential was quadratic, as in fig. 1.1.

Fig 1.1: Fermi gas chemical potential

Specific heat

where

Low temperature

The only change in the distribution of fig. 1.2 that is of interest is over the step portion of the distribution, and over this range of interest is approximately constant, as in fig. 1.3.

Fig 1.2: Fermi distribution

Fig 1.3: Fermi gas density of states

so that

Here we’ve made a change of variables , so that we have near cancellation of the factor

Here we’ve extended the integration range to since this doesn’t change much. FIXME: justify this to myself? Taking derivatives with respect to temperature we have

With , we have for

Using eq. 1.1.4 at the Fermi energy and

we have

Giving

or

This is illustrated in fig. 1.4.

Fig 1.4: Specific heat per Fermion
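For reference, the low temperature result that this Sommerfeld-style expansion leads to is the standard linear-in-temperature specific heat (quoted here from the standard result, in terms of the Fermi temperature \(T_{\mathrm{F}} = \epsilon_{\mathrm{F}}/k_{\mathrm{B}}\), since the numeric factor cannot be recovered from the elided equations):

```latex
\frac{C_V}{N} = \frac{\pi^2}{2} \, k_{\mathrm{B}} \, \frac{T}{T_{\mathrm{F}}},
\qquad T \ll T_{\mathrm{F}}.
```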

Relativistic gas

graphene

massless Dirac Fermion

Fig 1.5: Relativistic gas energy distribution

We can think of this state distribution in a condensed matter view, where we can have a hole to electron state transition by supplying energy to the system (i.e. shining light on the substrate). This can also be thought of in a relativistic particle view, where the same state transition can be thought of as a positron-electron pair transition. A round trip transition will have to supply energy as illustrated in fig. 1.6.

Fig 1.6: Hole to electron round trip transition energy requirement

Graphene

Consider graphene, a 2D system. We want to determine the density of states ,
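A sketch of how that density of states comes out, assuming the massless Dirac dispersion \(\epsilon = \hbar v_{\mathrm{F}} k\) discussed above (the symbols \(v_{\mathrm{F}}\) and area \(A\) are my own, not from the text): counting states in a \(\mathbf{k}\)-space annulus,

```latex
N(\epsilon)\, d\epsilon
= A \, \frac{2 \pi k \, dk}{(2\pi)^2}
= \frac{A}{2\pi} \, \frac{\epsilon \, d\epsilon}{\hbar^2 v_{\mathrm{F}}^2},
```

so the two dimensional density of states is linear in \(\epsilon\) (before including any spin or valley degeneracy factors).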

Motivation

I was wondering how to generalize the arguments of [1] to relativistic systems. Here’s a bit of blundering through the non-relativistic arguments of that text, tweaking them slightly.

I’m sure this has all been done before, but was a useful exercise to understand the non-relativistic arguments of Pathria better.

Generalizing from energy to four momentum

Generalizing the arguments of section 1.1.

Instead of considering that the total energy of the system is fixed, it makes sense that we’d have to instead consider the total four-momentum of the system fixed, so if we have particles, we have a total four momentum

where is the total number of particles with four momentum . We can probably expect that the ‘s in this relativistic system will be smaller than those in a non-relativistic system since we have many more states when considering that we can have both specific energies and specific momentum, and the combinatorics of those extra degrees of freedom. However, we’ll still have

Only given a specific observer frame can these four-momentum components be expressed explicitly, as in

where is the velocity of the particle in that observer frame.
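The elided equations presumably take the following standard forms (a reconstruction using my own symbols, \(p_i^\mu\) for the four momentum of a particle of type \(i\) and \(n_i\) for its occupation number):

```latex
P^\mu = \sum_i n_i \, p_i^\mu,
\qquad
N = \sum_i n_i,
\qquad
p_i^\mu = \gamma_i m_i \left( c, \mathbf{v}_i \right),
\quad
\gamma_i = \frac{1}{\sqrt{1 - \mathbf{v}_i^2/c^2}},
```

where \(\mathbf{v}_i\) is the velocity of the particle in the chosen observer frame.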

Generalizing the number of microstates, and the notion of thermodynamic equilibrium

Generalizing the arguments of section 1.2.

We can still count the number of all possible microstates, but that number, denoted , for a given total energy needs to be parameterized differently. First off, any given volume is observer dependent, so we likely need to map

Let’s still call this , but know that we mean this to be a four volume element, bounded in both space and time, referred to a fixed observer’s frame. So, let’s write the total number of microstates as

where is the total four momentum of the system. Now suppose we have a system subdivided into two systems in contact as in fig. 1.1, where the two systems have total four momentum and respectively.

Fig 1.1: Two physical systems in thermal contact

In the text the total energy of both systems was written

so we’ll write

so that the total number of microstates of the combined system is now

As before, if denotes an equilibrium value of , then maximizing eq. 1.0.8 requires all the derivatives (no sum over here)

With each of the components of the total four-momentum separately constant, we have , so that we have

as before. However, we now have one such identity for each component of the total four momentum which has been held constant. Let’s now define

Our old scalar temperature is then

but now we have three additional such constants to figure out what to do with. A first start would be figuring out how the Boltzmann probabilities should be generalized.
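A plausible reconstruction of the elided definitions (my guess at the intended notation, patterned on the non-relativistic \(1/k_{\mathrm{B}} T = \partial \ln \Omega / \partial E\)):

```latex
\beta_\mu \equiv \frac{\partial \ln \Gamma}{\partial P^\mu},
\qquad
\frac{1}{k_{\mathrm{B}} T} = \beta_0,
```

leaving the three spatial components \(\beta_1, \beta_2, \beta_3\) as the additional constants referred to above.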

Equilibrium between a system and a heat reservoir

Generalizing the arguments of section 3.1.

As in the text, let’s consider a very large heat reservoir and a subsystem as in fig. 1.2 that has come to a state of mutual equilibrium. This likely needs to be defined as a state in which the four vector is common, as opposed to just the temperature field being common.

Fig 1.2: A system A immersed in heat reservoir A’

If the four momentum of the heat reservoir is with for the subsystem, and

Writing

for the number of microstates in the reservoir, so that a Taylor expansion of the logarithm around (with sums implied) is

Here we’ve inserted the definition of from eq. 1.0.11, so that at equilibrium, with , we obtain

Next steps

This looks consistent with the outline provided in http://physics.stackexchange.com/a/4950/3621 by Lubos to the stackexchange “is there a relativistic quantum thermodynamics” question. I’m sure it wouldn’t be too hard to find references that explore this, as well as explain why non-relativistic stat mech can be used for photon problems. Further exploration of this should wait until after the studies for this course are done.

Impressed with the clarity of Baez’s entropic force discussion on differential forms [1], let’s use that methodology to find all the possible identities that we can get from the thermodynamic identity (for now assuming is fixed, ignoring the chemical potential.)

This isn’t actually that much work to do, since a bit of editor regular expression magic can do most of the work.

Our starting point is the thermodynamic identity

or

It’s quite likely that many of the identities that can be obtained will be useful, but this should at least provide a handy reference of possible conversions.


Question: Sackur-Tetrode entropy of an Ideal Gas

Find the temperature of this gas via . Find the energy per particle at which the entropy becomes negative. Is there any meaning to this temperature?

Answer

Taking derivatives we find

or
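A numerical cross-check of this derivative (the Sackur-Tetrode form used below is the standard one, and is my assumption about the entropy expression elided from the question; the helium-like values are illustrative):

```python
import math

# Standard Sackur-Tetrode entropy, expressed in terms of internal energy U:
#   S(U, V, N) = N k [ ln( (V/N) (4 pi m U / (3 N h^2))^{3/2} ) + 5/2 ]
K = 1.380649e-23     # Boltzmann constant [J/K]
H = 6.62607015e-34   # Planck constant [J s]

def entropy(U, V, N, m):
    arg = (V / N) * (4.0 * math.pi * m * U / (3.0 * N * H * H)) ** 1.5
    return N * K * (math.log(arg) + 2.5)

def temperature(U, V, N, m):
    """T = (dS/dU)^{-1} at fixed V, N, via a central finite difference."""
    dU = U * 1e-6
    dSdU = (entropy(U + dU, V, N, m) - entropy(U - dU, V, N, m)) / (2.0 * dU)
    return 1.0 / dSdU

# Illustrative values: helium-like mass, 1 litre of gas.
N_, V_, m_ = 1.0e22, 1.0e-3, 6.65e-27
U_ = 1.5 * N_ * K * 300.0   # energy corresponding to T = 300 K
```

Evaluating `temperature(U_, V_, N_, m_)` recovers 300 K to high accuracy, confirming that \(1/T = \partial S/\partial U\) reproduces \(U = \tfrac{3}{2} N k_{\mathrm{B}} T\).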

The energies for which the entropy is negative are given by

or

In terms of the temperature this negative entropy condition is given by

or

There will be a particle density for which this distance will start approaching the distance between atoms. This distance constrains the validity of the ideal gas law entropy equation. Putting this quantity back into the entropy eq. 1.1.1 we have

We see that a positive entropy requirement puts a bound on this distance (as a function of temperature) since we must also have

for the gas to be in the classical domain. I’d actually expect a gas to liquefy before this transition point, making such a low temperature nonphysical. To get a feel for whether this is likely the case, we should expect the logarithm argument

at the point where gases liquefy (at which point we assume the ideal gas law is no longer accurate) to be well above unity. Checking this for 1 liter of a gas with atoms for hydrogen, helium, and neon respectively, we find the values for eq. 1.1.10 are

At least for these first few cases we see that the ideal gas law has lost its meaning well before the temperatures below which the entropy would become negative.

Question: Ideal gas thermodynamics

An ideal gas starts at in the pressure-volume diagram (x-axis = , y-axis = ), then moves at constant pressure to a larger volume , then moves to a larger pressure at constant volume to , and finally returns to , thus undergoing a cyclic process (forming a triangle in plane). For each step, find the work done on the gas, the change in energy content, and heat added to the gas. Find the total work/energy/heat change over the entire cycle.

Answer

Our process is illustrated in fig. 1.1.

Fig 1.1: Cyclic pressure volume process

Step 1

This problem is somewhat underspecified. From the ideal gas law, regardless of how the gas got from the initial to the final states, we have

So a volume increase with fixed implies that there is a corresponding increase in . We could have, for example, an increase in the number of particles, as in the evaporation process illustrated in fig. 1.2, where a piston held down by (fixed) atmospheric pressure is pushed up as the additional gas boils off.

Fig 1.2: Evaporation process under (fixed) atmospheric pressure

Alternately, we could have a system such as that of fig. 1.3, where a fixed amount of gas is in contact with a heat source that supplies the energy required to induce the required increase in temperature.

Fig 1.3: Gas of fixed mass absorbing heat

Regardless of the source of the energy that accounts for the increase in volume, the work done on the gas (the negation of the positive work the gas is performing on the system, perhaps a piston as in the picture) is

Let’s now assume that we have the second sort of configuration above, where the total amount of gas is held fixed. From the ideal gas relations of eq. 1.0.12, and with , , and , we have

The change in energy of the ideal gas, assuming three degrees of freedom, is

The energy balance then requires that the total heat absorbed by the gas must include that portion that has done work on the system, plus the excess kinetic energy of the gas. That is

Step 2

For this leg of the cycle we have no work done on the gas

We do, however, have a change in energy. The energy of the gas is

With , the change of energy of the gas, which is also the total heat absorbed by the gas, is

Step 3

For the final leg of the cycle, the work done on the gas is

Unlike the first part of the cycle, the work done on the gas is positive this time (work must be done on the gas to compress it). The change in energy of the gas, however, is negative, with the difference between final and initial energy being

The simultaneous compression and the pressure reduction require energy to be removed from the gas. We must have a negative change in heat , with heat emitted in this phase of the cycle. This can be verified explicitly

Changes over the complete cycle

Summarizing the results from each of the phases, we have

Summing the changes in the work we have

This is the area of the triangle, as expected. Since it is positive, there is net work done on the gas.

We expect the energy changes to sum to zero, and this can be verified explicitly finding

With net work done on the gas and no change in energy, there should be no net heat absorption by the gas, with a total change in heat that should equal, in magnitude, the total work done on the gas. This is confirmed by summation
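A small numerical sketch of the whole cycle (the specific pressure and volume values are elided in the problem statement, so the numbers below are hypothetical; `W` denotes work done on the gas, and \(U = \tfrac{3}{2} p V\) for a monoatomic ideal gas):

```python
# Cycle: start at (p1, V1), isobaric expansion to V2, isochoric pressure
# increase to p2, then a straight-line return in the p-V plane to (p1, V1).

def leg(p_i, V_i, p_f, V_f, W_on_gas):
    """Return (work on gas, energy change, heat absorbed) for one leg."""
    dU = 1.5 * (p_f * V_f - p_i * V_i)   # U = (3/2) p V
    Q = dU - W_on_gas                    # first law: dU = Q + W_on_gas
    return W_on_gas, dU, Q

p1, V1, p2, V2 = 1.0e5, 1.0e-3, 3.0e5, 4.0e-3   # Pa, m^3 (illustrative)

# Leg 1: isobaric expansion; W = -p1 (V2 - V1)
w1, du1, q1 = leg(p1, V1, p1, V2, -p1 * (V2 - V1))
# Leg 2: isochoric pressure increase; no p dV work
w2, du2, q2 = leg(p1, V2, p2, V2, 0.0)
# Leg 3: straight-line compression; W = +(p1 + p2)/2 * (V2 - V1)
w3, du3, q3 = leg(p2, V2, p1, V1, 0.5 * (p1 + p2) * (V2 - V1))

W_total = w1 + w2 + w3   # equals the triangle area (p2 - p1)(V2 - V1)/2
U_total = du1 + du2 + du3  # zero over a closed cycle
Q_total = q1 + q2 + q3     # equals -W_total
```

The totals confirm the expectations stated above: the net work on the gas equals the enclosed triangle area, the energy change over the closed cycle vanishes, and the net heat matches the work in magnitude with opposite sign.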

Question: Adiabatic process for an Ideal Gas

Show that when an ideal monoatomic gas expands adiabatically, the temperature and pressure are related by

Answer

From (3.34b) of [1], we find that the adiabatic condition can be expressed algebraically as

With

this is

Dividing through by , this becomes a perfect differential, and we can integrate
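Filling in the likely steps (a reconstruction of the elided equations, using \(U = \tfrac{3}{2} N k_{\mathrm{B}} T\) and \(pV = N k_{\mathrm{B}} T\) for the monoatomic ideal gas):

```latex
0 = dU + p \, dV
= \frac{3}{2} N k_{\mathrm{B}} \, dT + \frac{N k_{\mathrm{B}} T}{V} \, dV
\quad \Rightarrow \quad
\frac{3}{2} \frac{dT}{T} = -\frac{dV}{V}
\quad \Rightarrow \quad
T^{3/2} V = \text{constant}.
```

Eliminating \(V = N k_{\mathrm{B}} T/p\) then gives \(T^{5/2}/p = \text{constant}\), the desired relation between temperature and pressure.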

Question: Rotation of diatomic molecules ([2] problem 3.6)

In our first look at the ideal gas we considered only the translational energy of the particles. But molecules can rotate, with kinetic energy. The rotational motion is quantized, and the energy levels of a diatomic molecule are of the form

where is any positive integer including zero: . The multiplicity of each rotation level is .

a

Find the partition function for the rotational states of one molecule. Remember that is a sum over all states, not over all levels — this makes a difference.

b

Evaluate approximately for , by converting the sum to an integral.

c

Do the same for , by truncating the sum after the second term.

d

Give expressions for the energy and the heat capacity , as functions of , in both limits. Observe that the rotational contribution to the heat capacity of a diatomic molecule approaches 1 (or, in conventional units, ) when .

e

Sketch the behavior of and , showing the limiting behaviors for and .

Answer

a. Partition function

To understand the reference to multiplicity, recall (section 4.13 of [1]) that the rotational Hamiltonian was of the form

where the eigenvectors satisfied

\begin{subequations}

\end{subequations}

and , where is a positive integer. We see that is of the form

and our partition function is

We have no dependence on in the summand, so we are able to sum over trivially, which is where the multiplicity comes from; the terms remaining to be summed are like those of fig 1.

Fig 1: Summation over m

To get a feel for how many terms are significant in these sums, we refer to the plot of fig 2. The partition function itself, truncated at terms, is plotted in fig 3.

Fig 2: Plotting the partition function summand

Fig 3: Z_R(tau) truncated after 30 terms in log plot

b. Evaluate partition function for large temperatures

If , so that , all our exponentials are close to unity. Employing an integral approximation of the partition function, we can somewhat miraculously integrate this directly
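The direct integration works because the multiplicity \(2j+1\) is exactly the derivative of the exponent’s argument \(j(j+1)\) (writing the level spacing constant as \(\epsilon_0\) and \(\tau = k_{\mathrm{B}} T\), my own notation for the elided symbols): substituting \(u = j(j+1)\), \(du = (2j+1)\,dj\),

```latex
Z_{\mathrm{R}} \approx \int_0^\infty (2j+1) \, e^{-j(j+1)\,\epsilon_0/\tau} \, dj
= \int_0^\infty e^{-u \, \epsilon_0/\tau} \, du
= \frac{\tau}{\epsilon_0}.
```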

c. Evaluate partition function for small temperatures

When , so that , all our exponentials are increasingly close to zero as increases. Dropping all but the first two terms, we have

d. Energy and heat capacity

In the large domain (small temperatures) we have

The specific heat in this domain is

For the small (large temperatures) case we have

The heat capacity in this large temperature region is

which is unity as described in the problem.
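A numerical cross-check of both limits (units with \(\epsilon_0 = 1\) and \(k_{\mathrm{B}} = 1\); the truncation level and finite-difference step sizes are my own choices):

```python
import math

def Z(tau, jmax=60):
    """Rotational partition function, truncated at jmax + 1 terms."""
    return sum((2 * j + 1) * math.exp(-j * (j + 1) / tau)
               for j in range(jmax + 1))

def energy(tau, h=1e-4):
    """U = tau^2 d(ln Z)/d(tau), via a central finite difference."""
    return tau ** 2 * (math.log(Z(tau + h)) - math.log(Z(tau - h))) / (2.0 * h)

def heat_capacity(tau, h=1e-3):
    """C = dU/d(tau), via a central finite difference."""
    return (energy(tau + h) - energy(tau - h)) / (2.0 * h)
```

At large temperature, `heat_capacity(30.0)` comes out very close to 1, the classical equipartition value described in the problem, while at low temperature both `energy(0.2)` and `heat_capacity(0.2)` are exponentially small, matching the truncated-sum approximation.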

e. Sketch

The energy and heat capacities are roughly sketched in fig 4.

Fig 4: Energy and heat capacity

It seems somewhat odd that we have a zero point energy at zero temperature. Plotting the energy (truncating the sums to 30 terms) in fig 5, we don’t see such a zero point energy.

Fig 5: Exact plot of the energy for a range of temperatures (30 terms of the sums retained)

That plotted energy is as follows, computed without first dropping any terms of the partition function

To avoid the zero point energy, we have to use this and not the truncated partition function to do the integral approximation. Doing that calculation (which isn’t as convenient, so I cheated and used Mathematica), we obtain

This approximation, which has taken the sums to infinity, is plotted in fig 6.

Fig 6: Low temperature approximation of the energy

From eq. 1.0.12, we can take one more derivative to calculate the exact specific heat