Motivation.

Work problem 4.1 from [1], the calculation of the eigensolution for an infinite square well, with boundaries . It’s actually a bit tidier to generalize this slightly to boundaries , which also implicitly solves the original problem. This is surely a problem that is done in 700 other QM texts, but I liked the way I did it this time, so am writing it down.

Guts

Our equation to solve is . Separation of variables gives us

With , we have

and the usual boundary conditions give us

We must have a zero determinant, which gives us the constraints on immediately

So our constraint on in terms of integers , and the corresponding integration constant

One of the constants can be eliminated directly by picking either of the two zeros from 2.4

So we have

Or,

Because probability densities, currents and the expectations of any operators will always have paired and factors, any constant phase factors like above can be dropped, or absorbed into the constant , and we can write

The only thing left is to fix by integrating , for which we have

This last trig term vanishes over the integration region and we are left with

which essentially completes the problem. A final substitution back into 2.8 allows for one last tidy up
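As a numeric sanity check on that normalization, here is a small Python sketch. The boundary choice [0, a] and the sin(n pi x/a) form of the eigenfunctions are assumptions for concreteness (the usual textbook convention, not necessarily the exact boundaries used above):

```python
import math

def norm_integral(n, a, steps=100000):
    """Midpoint-rule approximation of the integral of sin^2(n pi x / a) over [0, a]."""
    dx = a / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += math.sin(n * math.pi * x / a) ** 2 * dx
    return total

a = 2.0
for n in (1, 2, 3):
    # The integral evaluates to a/2 for any integer n, so the
    # normalization constant is sqrt(2/a).
    assert abs(norm_integral(n, a) - a / 2) < 1e-6
```

The check confirms the familiar result that the squared sinusoid averages to half its peak over the well, independent of the quantum number n.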

Motivation.

Previously, while working a Liboff problem, I wondered what conditions were required for exponentials to commute. In those problems the exponential arguments were operators. Exponentials of bivectors, as in quaternion-like spatial rotations or Lorentz boosts, are also good examples of (sometimes) non-commutative exponentials. It appears likely that the key requirement is that the exponential arguments commute, but how does one show this? Here this is explored a bit.

Guts

If one could show that it was true that

Then it would also imply that

Let’s perform the schoolboy exercise to prove 2.1 and explore the restrictions for such a proof. We assume a power series definition of the exponential operator, and do not assume the values are numeric, just that they can be multiplied. A commutative multiplication will not be assumed.

By virtue of the power series exponential definition we have

To attempt to put this into form we’ll need to change the order in which we evaluate the double sum, and here a picture (\ref{fig:gridSummation}) is helpful.

Diagonal grid double summation

For somebody who has seen this summation trick before, the picture probably says it all. We want to iterate over all pairs , and could do so in order as in our sum. This covers all the pairs of points in the upper right hand side of the grid. We can also cover these grid coordinates in a different order. In particular, they can be iterated over the diagonals: the first diagonal having the point , the second the points , the third the points .

Observe that along each diagonal the sum of the coordinates is constant, increasing by one from each diagonal to the next. Also observe that the number of points in a diagonal is one more than that sum. These observations provide a natural way to index the new grid traversal. Labeling each of these diagonals with index , and points on that subset with , we can express the original loop indexes and in terms of these new (coupled) loop indexes and as follows

Our sum becomes

With one small rearrangement, by introducing a in both the numerator and the denominator, the goal is almost reached.
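Before worrying about commutation at all, the diagonal re-indexing itself can be spot-checked numerically. Here is a Python sketch with scalar arguments (so multiplication is trivially commutative), summing the exponential series terms over the triangle of index pairs with n + m <= N both row-by-row and diagonal-by-diagonal:

```python
import math

def row_sum(x, y, N):
    # Iterate pairs (n, m) with n + m <= N in row order.
    return sum(x**n / math.factorial(n) * y**m / math.factorial(m)
               for n in range(N + 1) for m in range(N + 1 - n))

def diagonal_sum(x, y, N):
    # Same pairs, iterated along the diagonals: diagonal s holds
    # the points (n, m) = (s - r, r) for r = 0, ..., s.
    return sum(x**(s - r) / math.factorial(s - r) * y**r / math.factorial(r)
               for s in range(N + 1) for r in range(s + 1))

x, y, N = 0.3, 0.7, 30
assert math.isclose(row_sum(x, y, N), diagonal_sum(x, y, N))
# For commuting (scalar) arguments each diagonal sums to (x + y)^s / s!
# by the binomial theorem, so the total converges to exp(x + y).
assert math.isclose(diagonal_sum(x, y, N), math.exp(x + y))
```

Both orderings visit exactly the same grid points, so they agree regardless of commutation; it is only the collapse of each diagonal into a binomial power that needs the arguments to commute.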

This shows where we have a requirement that and commute, because only in that case do we have a binomial expansion

in the interior sum. This reduces the problem to a consideration of the implications that possible non-commutation has on the binomial expansion. Consider the simple special case of . If and do not necessarily commute, then we have

whereas the binomial expansion formula has no such allowance for non-commutative multiplication and just counts the number of times a product can occur in any ordering as in

One sees the built-in requirement for commutative multiplication here. Now this doesn’t unconditionally prove that when and do not commute, but we do see that commutative multiplication is sufficient if we want equality of such commuted exponentials. In particular, the end result of the Liboff calculation where we had

which assumed this to be unity even for the differential operators under consideration, is now completely answered (since we have ).
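The scalar result carries over to commuting non-scalar objects, which can be spot-checked with matrices. Here is a Python sketch using a truncated power-series matrix exponential, where B is deliberately built as a polynomial in A so that the two commute (the specific matrices are my own illustrative choices):

```python
import numpy as np

def expm_series(M, terms=30):
    """Truncated power-series exponential: sum of M^k / k! for k < terms."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

A = np.array([[0.1, 0.5], [0.2, 0.3]])
B = 2.0 * A + 0.5 * np.eye(2)   # a polynomial in A, so [A, B] = 0

assert np.allclose(A @ B, B @ A)                      # arguments commute,
assert np.allclose(expm_series(A) @ expm_series(B),   # so the exponentials
                   expm_series(A + B))                # combine as exp(A + B)
```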

Motivation.

In [1], Feynman writes the Pauli wave equation for a non-relativistic treatment of a mass in a scalar and vector potential electrodynamic field. That is

Is this amenable to a Fourier transform solution like so many other PDEs? Let’s give it a try. It would also be interesting to apply such a computation to see if it is possible to calculate , and the first two derivatives of this expectation value. I would guess that this would produce the Lorentz force equation.

Prep

Fourier Notation.

Our transform pair will be written
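For definiteness, one common symmetric convention for such a 3D transform pair (the specific normalization is a choice, not forced) is

```latex
\psi(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}} \int \tilde{\psi}(\mathbf{k}, t)\,
   e^{i \mathbf{k} \cdot \mathbf{x}} \, d^3 k,
\qquad
\tilde{\psi}(\mathbf{k}, t) = \frac{1}{(2\pi)^{3/2}} \int \psi(\mathbf{x}, t)\,
   e^{-i \mathbf{k} \cdot \mathbf{x}} \, d^3 x.
```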

Interpretation of the squared momentum operator.

Feynman actually wrote

I’m not familiar with that notation, and have written this as a plain old vector square. If were not an operator, then this would be a scalar, but as written it actually also includes a bivector term proportional to . To see that, let’s expand this operator explicitly.

This anticommutator of the vector potential and the gradient is only a scalar if the vector potential has zero divergence. More generally, expanding by chain rules, and using braces to indicate the scope of the differential operations, we have

where is the spatial unit trivector, and .

This is assuming should be treated as a complex-valued scalar, and not a complex-like geometric object of any sort. Does this bivector term have physical meaning? Should it be discarded or retained? If we assume it is discarded, then we really want to write the Pauli equation utilizing an explicit scalar selection, as in

Assuming that to be the case, our squared momentum operator takes the form

The Pauli equation, written out explicitly in terms of the gradient is then

Confirmation.

Instead of guessing what Feynman means when he writes Pauli’s equation, it would be better to just check what Pauli says. In [2] he uses the more straightforward notation

for the vector potential dependent part of the Hamiltonian operator. This is just the scalar part as was guessed.

Guts

Using the expansion 2.7 of the Pauli equation, and writing for the effective complex potential we have

Let’s now apply each of these derivative operations to our assumed Fourier solution from 2.2. Starting with the Laplacian we have

For the operator application we have

Putting both together we have

We can tidy this up slightly by completing the square, yielding

If this is to be zero for all , it seems clear that we need to be the solution of the first order non-linear PDE

Somewhere along the way this got a bit confused. Our Fourier transform function is somehow a function of not just the wave number, but also position, by virtue of being a solution to a differential equation involving , and . Can we pretend not to have noticed this and continue on anyway? Let’s try to simplify the system further by imposing a constraint of constant-in-time potentials (). That allows for direct integration of the wave function’s Fourier transform

And inverse transforming this

By inserting the inverse Fourier transform of , we have the time evolution of the wave function as a convolution integral

Splitting out the convolution kernel, this takes a slightly tidier form

Verification attempt.

If we apply the Pauli equation 1.1 to 3.18 does it produce the correct answer?

For the LHS we have

but for the RHS we have

So if it were not for the spatial dependence of and , the LHS would equal the RHS. It appears that ignoring the odd dependence in the differential equation definitely leads to trouble, and only works for constant potential distributions, a rather boring special case.

Motivation.

I got a nice present today which included one of Feynman’s QED books. I noticed some early mistakes, and since I can’t find an errata page anywhere, I’ll collect them here.

Third Lecture

Page 6 typos.

The electric field is given in terms of only the scalar potential

and should be

The invariant gauge transformation for the vector and scalar potentials are given as

But these should be

The sign is flipped on the scalar potential transformation. Feynman is also probably used to setting , but he doesn’t do that explicitly at a different point on the page, so including it here is proper.
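For reference, the standard gauge transformation in Gaussian units (writing the gauge function as $\chi$, a symbol chosen here just for illustration) is

```latex
\mathbf{A}' = \mathbf{A} + \boldsymbol{\nabla} \chi,
\qquad
\phi' = \phi - \frac{1}{c} \frac{\partial \chi}{\partial t},
```

under which both $\mathbf{E} = -\boldsymbol{\nabla} \phi - (1/c)\, \partial_t \mathbf{A}$ and $\mathbf{B} = \boldsymbol{\nabla} \times \mathbf{A}$ are unchanged.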

Page 7 notes.

The units in the transformation for the wave function don’t look right. We want to transform the Pauli equation

with a transformation of the form

where is presumed, and we want to find the proportionality constant required for invariance. With we have

so

For the time partial we have

and the scalar potential term transforms as

Putting the pieces together we have

We need one more intermediate result, that of

So we have

To get rid of the , and time partials we need

Or

This also kills off all the additional undesirable terms in the transformed operator (with ), leaving the invariant transformation completely specified

This is a fair bit different from Feynman’s result, but since he starts with the wrong electrodynamic gauge transformation, that’s not too unexpected.

Second Lecture

This isn’t errata, but I found the following required slight exploration. He gives (implicitly)

Is this an average over space and time? How would one do that? What do we get just integrating this over the volume? That dot product is . Our average over the volume, for , using Wolfram Alpha to do the dirty work

Since the sine integral vanishes, we have just as expected, regardless of the angular frequency . Okay, that makes sense now. It looks like is only relevant for the single Fourier component, but that likely doesn’t matter since I seem to recall that the Fourier component of this oscillators-in-a-box problem was entirely constant (and perhaps zero?).
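That vanishing sine integral can be spot-checked numerically. Here is a Python sketch using cos^2(k x - omega t) as a generic stand-in for the squared field term (my own choice of integrand, since the exact expression is not reproduced here), averaged over one full spatial period; the average should be 1/2 regardless of omega and t:

```python
import math

def volume_average_cos2(k, omega, t, steps=200000):
    """Midpoint-rule average of cos^2(k x - omega t) over one period [0, 2 pi / k]."""
    L = 2 * math.pi / k
    dx = L / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += math.cos(k * x - omega * t) ** 2 * dx
    return total / L

# The cross (sine) term integrates to zero over a full period, so the
# average is 1/2, independent of omega and t.
for t in (0.0, 0.37, 1.9):
    assert abs(volume_average_cos2(2.0, 5.0, t) - 0.5) < 1e-6
```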

and have often wondered if there’s an easier way. One way would be to put something like this in a script and avoid re-creating a command line like this every time. I tried this in perl, making a stdin/stdout filter by default, and a file-modifying helper when files are listed explicitly (not really a filter anymore, but often how I’d like to be able to invoke c++filt). Here’s that beastie:

This also works, but is clunkier than I expected. If anybody knows of some way to use or abuse the in-place filtering capability of perl (i.e. perl -p -i) to do something like this, or some other clever way to do it, I’d be curious what it is.

Recently some of our code started misbehaving only when compiled with the GCC compiler. Our post mortem stacktrace and data collection tools didn’t deal with this trap very gracefully, and dealing with that (or even understanding it) is a different story.

Observe that there are two sets of frames. One is from the original SIGILL, and another is from the signal that our “main” thread ends up sending to all the rest of the threads as part of our process for freezing things, to be able to take a peek and see what’s up.

This has got the si_addr value 0x00002AB821393257, which also matches frame 9 in the stack for sqluInitLoadEDU. What was at that line of code doesn’t appear to be something that ought to generate a SIGILL:

Hmm. What is a ud2a instruction? Google is our friend, and we find that the Linux kernel uses this as a “guaranteed invalid instruction”. It is used to fault the processor and halt the kernel in case you did something really, really bad.

Other similar references can be found, also explaining the use in the Linux kernel. So what is this doing in userspace code? It seems like something too specific to get there by accident, and since the instruction stream itself contains this, stack corruption or any other sneaky nasty mechanism doesn’t seem likely. The instruction doesn’t immediately follow a callq, so a runtime loader malfunction or something else equally odd doesn’t seem likely either.

Perhaps the compiler put this instruction into the code for some reason. A compiler bug perhaps? A new google search for GCC ud2a instruction finds me

...generates this warning (using gcc 4.4.1 but I think it applies to most
gcc versions):
main.cpp:12: warning: cannot pass objects of non-POD type 'class A'
through '...'; call will abort at runtime
1. Why is this a "warning" rather than an "error"? When I run the program
it hits a "ud2a" instruction emitted by gcc and promptly hits SIGILL.

Oh my! It sounds like GCC has cowardly refused to generate an error, but also bravely refused to generate bad code for whatever this code sequence is. Do I have such an error in my build log? In fact, I have three, all of which look like:

It turns out that agtRqstCB is a rather large structure, and certainly doesn’t match the %p format specifier that the developer used in this debug-build-only special-case code. The debug code actually makes things worse, and certainly won’t help on any platform. It probably also won’t crash on any platform (except when using the GCC compiler), since there are no subsequent %s format parameters to get messed up by placing gob-loads of structure data in the varargs data area inappropriately.

This should resolve the issue and allow me to go back to avoiding the (much slower!) Intel compiler that is used by our nightly build process.

Problem 3.19.

What is the effect of operating on an arbitrary function with the following two operators

On the surface with and it appears that we have just

but is this justified when the sinusoids are functions of operators? Let’s look at the first case. For some operator we have

Can we assume that these cancel for general operators? How about for our specific differential operator ? For that one we have

Since the differentials commute, so do the exponentials and we can write the slightly simpler

I’m pretty sure the commutative property of this differential operator would also allow us to say (in this case at least)

I’ll have to look up the combinatoric argument that allows one to write, for numbers,

If this only assumes that and commute, and no other numeric properties, then we have the supposed result 1.3. We also know of algebraic objects where this does not hold. One example is exponentials of non-commuting square matrices, and another is non-commuting bivector exponentials.
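That matrix counterexample is easy to exhibit numerically. Here is a Python sketch with the standard pair of non-commuting nilpotent 2x2 matrices (my own illustrative choice), again using a truncated power-series exponential:

```python
import numpy as np

def expm_series(M, terms=30):
    """Truncated power-series exponential: sum of M^k / k! for k < terms."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

assert not np.allclose(A @ B, B @ A)  # [A, B] != 0
# With non-commuting arguments the exponentials no longer combine:
assert not np.allclose(expm_series(A) @ expm_series(B), expm_series(A + B))
# ... and the exponentials themselves do not commute either:
assert not np.allclose(expm_series(A) @ expm_series(B),
                       expm_series(B) @ expm_series(A))
```

Since A and B are nilpotent, exp(A) = I + A and exp(B) = I + B exactly, which makes the failure of the product rule easy to verify by hand as well.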

Motivation.

In [1] is problem 3.14: describe the time evolution of the following wavefunctions

This isn’t really a QM problem, but seems worthwhile anyway, because it isn’t obvious from looking at the functions what they represent.

Let’s start in reverse order with , but in a slightly more general form that is less error-prone to manipulate. These wavefunctions can be viewed as superpositions, and expanding out as exponentials temporarily gets us to a form that makes this more obvious.

Now the problem is simplified to observing how a wave of the form propagates, or really the interaction of two such waves moving together or against each other, depending on the signs of the constants. Let’s now put in the constants for to get a better feel for it

It wasn’t obvious from the original product-of-sinusoids form that the resulting waveform stays in phase as it propagates in time, but we see that to be the case above. This really just leaves some thought about the standing wave itself to understand what is happening. For that, at time 0, we have a destructive-interference superposition of two standing waves of almost identical period. That near perfect cancellation will likely leave an envelope, and a Mathematica plot gives a better feel for this waveform. This in turn will propagate at light speed down the x-axis. Because is so small we have a nearly linear, and nearly flat, envelope for the , as can be expected near the origin since we have there

Comparing to a smaller range plot, one sees that it is necessary to increase the plot range significantly before seeing the oscillatory nature of the envelope.
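The beat envelope in these plots is just the product-to-sum identity at work. A quick Python spot-check of that identity, with nearly equal wave numbers of my own choosing:

```python
import math

def product_form(k1, k2, x):
    return math.sin(k1 * x) * math.sin(k2 * x)

def sum_form(k1, k2, x):
    # Product-to-sum identity: the slow cos((k1 - k2) x) factor is the
    # envelope, the fast cos((k1 + k2) x) factor is the carrier.
    return 0.5 * (math.cos((k1 - k2) * x) - math.cos((k1 + k2) * x))

k1, k2 = 10.0, 10.5   # nearly equal wave numbers -> slow beat envelope
for i in range(200):
    x = -5.0 + 0.05 * i
    assert abs(product_form(k1, k2, x) - sum_form(k1, k2, x)) < 1e-12
```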

For we have almost the same wave function, but our sine term has no time variation. What does this do to the waveform? Let’s see if a sum-and-difference-of-angles form sheds some light on that. We have

Specifically for , and , we have

Again at we have a very widely spread envelope with rapid oscillations within it. There is a very small difference in the rate at which the two component waveforms go out of phase, and the component waveforms are moving in opposite directions. What does that phase change do to the evolution? Looking back to the original product-of-sinusoids form for , I believe this will just mean the phase shifts with time within the envelope.

Finally for the first wave function we have both of the sinusoid factors with time variation. What does that expand out to in terms of superposition of fundamental frequencies?

or

We have the superposition of two wave forms, destructively interfering with each other, one with phase changing at the angular rate , and the other at the rate . What is the overall waveform? It’s still not obvious. I’m actually inclined not to treat these analytically, but to pull out some graphing software. Something like the real Mathematica software would be nice, since it would allow for the use of sliders to vary parameters and then animate the graphs as time varies.

As a check for the torus segment center of mass calculation, there should be agreement in the limit where the radius of the torus goes to zero (with a non-zero correction otherwise).

Center of mass for a circular wire segment.

As an additional check for the correctness of the result above, we should be able to compare with the center of mass of a circular wire segment, and get the same result in the limit .

For that we have

So we have

Observe that this is

which is consistent with the previous calculation for the solid torus when we let that solid diameter shrink to zero.

In particular, for of the torus, we have , and

We are a little bit up the imaginary axis as expected.
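The wire-segment result can be checked numerically. Here is a Python sketch, assuming a uniform arc of radius R spanning the angle interval [-theta/2, theta/2] (my parametrization choice); the center of mass should sit on the symmetry axis at distance R sin(theta/2)/(theta/2):

```python
import math

def arc_center_of_mass(R, theta, steps=200000):
    """Midpoint-rule center of mass of a uniform arc over [-theta/2, theta/2]."""
    dphi = theta / steps
    cx = cy = 0.0
    for i in range(steps):
        phi = -theta / 2 + (i + 0.5) * dphi
        cx += R * math.cos(phi) * dphi
        cy += R * math.sin(phi) * dphi
    return cx / theta, cy / theta

R, theta = 1.0, math.pi / 2
cx, cy = arc_center_of_mass(R, theta)
# On the symmetry axis, at distance R sin(theta/2) / (theta/2):
assert abs(cx - R * math.sin(theta / 2) / (theta / 2)) < 1e-9
assert abs(cy) < 1e-9
# As theta -> 0 the center of mass approaches the arc itself (distance R):
cx_small, _ = arc_center_of_mass(R, 1e-3)
assert abs(cx_small - R) < 1e-6
```

The small-theta limit is the same consistency check made above for the solid torus: a vanishingly short segment of wire has its center of mass on the wire.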

I’d initially somehow thought I’d been off by a factor of two compared to the result by The Virtuosi, without seeing a mistake in either calculation. But that now appears not to be the case; I just screwed up plugging in the numbers. Once again, I should go to my eight-year-old son when I have arithmetic problems, and restrict myself to just the calculus and algebra bits.

Center of mass.

With the prep done, we are ready to move on to the original problem. Given a toroidal segment over angle , then the volume of that segment is

Our center of mass position vector is then located at

Evaluating the integrals we lose the and terms and are left with and . This leaves us with

Since , we have a conjugate commutation with the for just

A final reassembly, provides the desired final result for the center of mass vector

Presuming no algebraic errors have been made, how about a couple of sanity checks to see whether this result seems plausible?

We are pointing in the -axis direction as expected by symmetry. Good. For , our center of mass vector is at the origin. Good, that’s also what we expected. If we let , and , we have as also expected for a tiny segment of “wire” at that position. Also good.