Reading.

Covering lecture notes pp. 136-146: continued reminder of the electrostatic Green’s function (136); the retarded Green’s function of the d’Alembert operator: derivation and properties (137-140); the solution of the d’Alembert equation with a source: retarded potentials (141-142)

Solving the forced wave equation.

See the notes for a complex variables and Fourier transform method of deriving the Green’s function. In class, we’ll just pull it out of a magic hat. We wish to solve

(with a gauge choice).

Our Green’s method utilizes

If we know such a function, our solution is simple to obtain

Proof:

Claim:

This is the retarded Green’s function of the operator , where

Proof of the d’Alembertian Green’s function

Our Prof is excellent at motivating any results that he pulls out of magic hats. He’s said that he’s included a derivation using Fourier transforms and tricky contour integration arguments in the class notes for anybody who is interested (and who also knows how to do contour integration). For those who don’t know contour integration yet (some people are taking it concurrently), one can actually prove this by simply applying the wave equation operator to this function. This treats the delta function as a normal function that one can differentiate, something that can be made well defined in the context of generalized functions. Chugging ahead with this approach we have

This starts things off and now things get a bit hairy. It’s helpful to consider a chain rule expansion of the Laplacian

In vector form this is

Applying this to the Laplacian portion of 2.6 we have

Here we make the identification

This could be considered a given from our knowledge of electrostatics, but it’s not too much work to just prove it.

An aside. Proving the Laplacian Green’s function.

If is a Green’s function for the Laplacian, then the Laplacian of the convolution of this with a test function should recover that test function

We can directly evaluate the LHS of this equation, following the approach in [2]. First note that the Laplacian can be pulled into the integral and operates only on the presumed Green’s function. For that operation we have

It will be helpful to compute the gradient of various powers of

In particular, when , this gives us

For the Laplacian of , at the points where this is well defined we have

So we have a zero. This means that the Laplacian operation

can only have a value in a neighborhood of point . Writing we have

Observing that we can put this in a form that allows for use of Stokes’ theorem, we can convert this to a surface integral

where we use as the outwards normal for a sphere centered at of radius . This integral is just , so we have

The convolution of with produces , allowing an identification of this function with a delta function, since the two have the same operational effect
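The key surface-integral step above can be spot-checked numerically. The sketch below (parameters are choices for this example, not from the notes) confirms that the flux of $\nabla(1/r) = -\hat{r}/r^2$ through a sphere of radius $R$ is $-4\pi$ independent of $R$, which is what lets us identify $\nabla^2 (1/4\pi r)$ with $-\delta^3(\mathbf{r})$:

```python
import math

# Midpoint-rule surface integral of grad(1/r) . r_hat over a sphere of
# radius R. On the sphere grad(1/r) . r_hat = -1/R^2 and dA = R^2 sin(th) dth dph,
# so the R dependence cancels and the answer is -4 pi for every R.
def flux_of_grad_inverse_r(R, n=2000):
    dth = math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        # the integrand is independent of phi, so the phi integral gives 2*pi
        total += (-1.0 / R**2) * R**2 * math.sin(th) * dth * (2.0 * math.pi)
    return total
```

The $R$-independence is the point: shrinking the sphere to an arbitrarily small neighborhood of the source still yields $-4\pi$.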

Returning to the d’Alembertian Green’s function.

We need two additional computations to finish the job. The first is the gradient of the delta function

Consider . This is

so we have

The Laplacian is similar

so we have

With , we’ll need the Laplacian of this vector magnitude

So that we have

Now we have all the bits and pieces of 2.8 ready to assemble

Since we also have

The terms cancel out in the d’Alembertian, leaving just

Noting that the spatial delta function is non-zero only when , which means in this product, and we finally have

We write

Elaborating on the wave equation Green’s function

The Green’s function 2.26 is a distribution that is non-zero only on the future light cone. Observe that for we have

We say that is supported only on the future light cone. At , only the contributions for matter. Note that in the “old days”, Green’s functions used to be called influence functions, a name that works particularly well in this case. There are other Green’s functions for the d’Alembertian. The one above is called the retarded Green’s function, and we also have an advanced Green’s function. Writing for advanced and for retarded, these are

There are also causal and non-causal variations that won’t be of interest for this course.

This arms us now to solve any problem in the Lorentz gauge

The additional EM waves are the possible contributions from the homogeneous equation.

Since is non-zero only when , the non-homogeneous parts of 3.28 reduce to

Our potentials at time and spatial position are completely specified in terms of the sums of the currents acting at the retarded time . The field can only depend on the charge and current distribution in the past. Specifically, it can only depend on the charge and current distribution on the past light cone of the spacetime point at which we measure the field.

Example of the Green’s function. Consider a charged particle moving on a worldline

( for classical)

For this particle

PICTURE: light cones, and curved worldline. Pick an arbitrary point , and draw the past light cone, looking at where this intersects with the trajectory

For the arbitrary point we see that this point and the retarded time obey the relation
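That retarded-time condition, $|\mathbf{x} - \mathbf{x}_0(t_{ret})| = c(t - t_{ret})$, can be solved numerically. A minimal sketch, assuming a charge on a constant-velocity worldline $\mathbf{x}_0(t) = (vt, 0, 0)$ with $v < c$ (these are choices for this example, in units with $c = 1$):

```python
import math

c = 1.0
v = 0.5  # must satisfy v < c so the past light cone meets the worldline exactly once

def worldline(s):
    return (v * s, 0.0, 0.0)

def retarded_time(x, t, steps=200):
    """Bisect f(s) = |x - x0(s)| - c (t - s): f is negative as s -> -inf
    (since v < c) and non-negative at s = t, so a root exists in between."""
    def f(s):
        return math.dist(x, worldline(s)) - c * (t - s)
    lo, hi = t - 1e6, t
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $v \ge c$ the bracketing argument fails, which mirrors the physical statement that the past light cone of the observation point must intersect the worldline.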

Reading.

Covering lecture notes pp. 128-135: energy flux and momentum density of the EM wave (128-129); radiation pressure, its discovery and significance in physics (130-131); EM fields of moving charges: setting up the wave equation with a source (132-133); the convenience of the Lorentz gauge in the study of radiation (134); reminder on Green’s functions from electrostatics (135) [Tuesday, Mar. 8]

Review. Energy density and Poynting vector.

Last time we showed that Maxwell’s equations imply

In the lecture, Professor Poppitz said he was free here to use a full time derivative. When asked why, it was because he was considering and here to be functions of time only, since they were measured at a fixed point in space. This is really the same thing as using a time partial, so in these notes I’ll just be explicit and stick to using partials.

Any change in the energy must be due either to currents, or to energy escaping through the surface.

The energy flux of the EM field: this is the energy flowing through in unit time ().

How about electromagnetic waves?

In a plane wave moving in direction .

PICTURE: , , .

So, since .

for a plane wave is the amount of energy through unit area perpendicular to in unit time.

So we see that is indeed rightly called “the momentum density” of the EM field.

We will later find that and are components of a rank-2 four tensor

where is the stress tensor. We will get to all this in more detail later.

For EM wave we have

(this is the energy flux)

(the momentum density of the wave).

(recall for massless particles).
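The plane wave relations above can be illustrated numerically. A sketch in Gaussian units (which these notes appear to use; the field values are arbitrary choices for this example): with $\mathbf{E} \perp \mathbf{B}$ and $|\mathbf{E}| = |\mathbf{B}|$, the energy flux satisfies $|\mathbf{S}| = cu$ and the momentum density is $\mathbf{g} = \mathbf{S}/c^2$, so $|\mathbf{g}| = u/c$:

```python
import math

c = 3.0e10  # cm/s

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

E = (0.0, 2.0, 0.0)  # along y (arbitrary amplitude)
B = (0.0, 0.0, 2.0)  # along z, equal magnitude -> propagation along x
S = tuple(c / (4 * math.pi) * s for s in cross(E, B))            # energy flux
u = (sum(e*e for e in E) + sum(b*b for b in B)) / (8 * math.pi)  # energy density
g = tuple(s / c**2 for s in S)                                   # momentum density
```

This also mirrors the massless-particle relation: energy density $u$ and momentum density magnitude $u/c$ stand in the same ratio as $E$ and $p$ for a photon.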

EM waves carry energy and momentum so when absorbed or reflected these are transferred to bodies.

Kepler speculated that this was the case, because he had observed that the tails of comets face away from the sun, as if pushed by the sunlight.

Maxwell also suggested that light would exert a force (presumably he wrote down the “Maxwell stress tensor” that is named after him).

This was actually measured later in 1901, by Peter Lebedev (Russia).

PICTURE: pole with flags in vacuum jar. Black (absorber) on one side, and Silver (reflector) on the other. Between the two of these, momentum conservation will introduce rotation (in the direction of the silver).

This is actually a tricky experiment and requires the vacuum, since the black surface warms up, and heats up the nearby gas molecules, which causes a rotation in the opposite direction due to just these thermal effects.

Another example: one factor that prevents stars from collapsing under gravitation is the radiation pressure of their light.

Moving on. Solving Maxwell’s equation

Our equations are

where we assume that is a given. Our task is to find , the fields.

Proceed by finding . First, as usual when . The Bianchi identity is satisfied so we focus on the current equation.

In terms of potentials

or

We want to work in the Lorentz gauge . This is justified by the simplicity of the remaining problem

Write

where

This is the d’Alembert operator (“d’Alembertian”).

Our equation is

(in the Lorentz gauge)

If we learn how to solve (**), then we’ve learned all.

Method: Green’s functions

In electrostatics where , only, we have

Solution

PICTURE:

(a small box)

acting through distance , acting at point . With , we have

Also, since is a linear operator, we have , and we find

We end up finding that

thus solving the problem. We wish next to do this for the Maxwell equation 4.20.

The Green’s function method is effective, but I can’t help but consider it somewhat of a cheat, since one has to somehow already know what the Green’s function is. In the electrostatics case, at least, we can work from the potential function and take its Laplacian to find that this is equivalent (thus implicitly solving for the Green’s function at the same time). It will be interesting to see how we do this for the forced d’Alembertian equation.

Introduce the center of mass coordinates.

We’ll want to solve this using the formalism we’ve discussed. The general problem is a proton, positively charged, with a nearby negative charge (the electron).

Our equation to solve is

Here is the total kinetic energy term.
For hydrogen we can consider the potential to be the Coulomb potential energy function that depends only on . We can transform this using a center of mass transformation. Introduce the center of mass coordinate and relative coordinate vectors

The notation represents the Laplacian for the position of the kth particle, so that if is the position of the first particle, the Laplacian for it is:

Here is the center of mass coordinate, and is the relative coordinate. With this transformation we can reduce the problem to a single coordinate PDE.

We set and , and get

and

where is the total mass, and is the reduced mass.
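The center of mass reduction can be made concrete with numbers. A sketch, where the proton-to-electron mass ratio used is the standard textbook value (an assumption of this example, not taken from the notes):

```python
# Total mass M = m1 + m2 carries the free center-of-mass motion, while the
# relative coordinate sees the reduced mass mu = m1 m2 / (m1 + m2).
m_e = 1.0          # electron mass, in units of itself
m_p = 1836.15267   # proton mass in electron masses (standard value, assumed)

M = m_e + m_p
mu = m_e * m_p / (m_e + m_p)
# mu is just under one electron mass; as m_p -> infinity, mu -> m_e
```

The fixed-nucleus approximation amounts to the $m_p \to \infty$ limit, which is why it is so accurate for hydrogen: $\mu$ differs from $m_e$ by only about five parts in ten thousand.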

Aside: WHY do we care (slide of Hydrogen line spectrum shown)? This all started because when people looked at the spectrum of the hydrogen atom, a continuous spectrum was not found. Instead, what was found was a set of discrete, quantized frequencies. All this abstract Hilbert space notation with its bras and kets is a way of representing observable phenomena.

Also note that we have the same sort of problems in electrodynamics and mechanics, so we are able to recycle this sort of work, either applying it in those problems later, or using those techniques here.

In Electromagnetism these are the problems involving the solution to

or for

where is the electric field and is the electric potential.

We need to solve 6.127 for . In spherical coordinates

where

This all follows by the separation of variables technique that we’ll use here, in E and M, in PDEs, and so forth.

FIXME: picture drawn. Theta measured down from axis to the position and measured in the plane measured in the to orientation.

Large limit.

but keep and note that is also a solution in the limit of , where is a polynomial.

Let where .

Small limit.

We also want to consider the small limit, and piece together the information that we find. Think about the following. The small or limit gives

\paragraph{Question:} Is this correct?

Not always. Also: we will think about the case later (where would probably need to be retained).

We need:

Instead of using 6.142 as in the text, we must substitute into the above to find

for this equality for all we need

Solutions and can be found to this, and we need s positive for normalizability, which implies

Now we need to find what restrictions we must have on . Recall that we have . Substitution into 6.142 gives

We get

For this to be valid for all ,

or

For large we have

Recall that for the exponential Taylor series we have

for which we have

is behaving like , and if we had that

This is divergent, so for normalizable solutions we require to be a polynomial of a finite number of terms.

The polynomial must stop at , and we must have

From 6.150 we have

so we require

Let , an integer and so that says for

If

we have

where is the Bohr radius, and . In the lecture was originally used for the reduced mass. I’ve switched to earlier so that this cannot be mixed up with this use of for the azimuthal quantum number associated with .
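The resulting spectrum $E_n = -E_1/n^2$ is easy to sanity-check numerically. A hedged sketch, using the standard Rydberg energy $E_1 = 13.6057\,\mathrm{eV}$ (a textbook value assumed here, not derived in these notes):

```python
E1 = 13.6057  # eV, standard Rydberg energy (assumed)

def E(n):
    """Hydrogen bound-state energy for principal quantum number n."""
    return -E1 / n**2

# the n = 2 -> 1 transition releases a photon of energy E(2) - E(1) = (3/4) E_1
lyman_alpha = E(2) - E(1)
```

Transition energies like this one are exactly the discrete line frequencies that motivated the whole problem.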

PICTURE ON BOARD. Energy level transitions on graph with differences between to shown, and photon emitted as a result of the to transition.

From Chapter 4 and the story of the spherical harmonics, for a given , the quantum number varies between and in integer steps. The radial part of the solution of this separation of variables problem becomes

where the functions are the Laguerre polynomials, and our complete wavefunction is

Notes.

Chapter IV notes and problems for [1].

There’s a lot of magic related to the spherical harmonics in this chapter, with identities pulled out of the Author’s butt. It would be nice to work through that, but I need a better reference to work from (or to skip ahead to chapter 26 where some of this is apparently derived).

Other stuff pending background derivation and verification are

\begin{itemize}
\item Antisymmetric tensor summation identity.

This is obviously the coordinate equivalent of the dot product of two bivectors

We can prove 1.1 by expanding the LHS of 1.2 in coordinates

\item Question on raising and lowering arguments.

How equation (4.240) was arrived at is not clear. In (4.239) he writes

Shouldn’t that Hermitian conjugation be just complex conjugation? If so, one would have

How does he end up with the and the interchanged? What justifies this commutation?

A much clearer discussion of this can be found in The operators , where Dirac notation is used for the normalization discussion.

\item Another question on raising and lowering arguments.

The reasoning leading to (4.238) isn’t clear to me. I fail to see how the commutation with implies this.

\end{itemize}
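The antisymmetric tensor summation identity from the first item above can be brute-force verified. The specific identity was stripped from these notes, so as an assumption this sketch checks the standard one, $\sum_i \epsilon_{ijk}\epsilon_{ilm} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl}$:

```python
from itertools import product

def eps(i, j, k):
    """Levi-Civita symbol on indices 0, 1, 2."""
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

def delta(a, b):
    return 1 if a == b else 0

# exhaustively check all 81 free-index combinations
identity_holds = all(
    sum(eps(i, j, k) * eps(i, l, m) for i in range(3))
    == delta(j, l) * delta(k, m) - delta(j, m) * delta(k, l)
    for j, k, l, m in product(range(3), repeat=4)
)
```

Since the free indices range over only $3^4 = 81$ cases, exhaustive checking is a perfectly rigorous proof here.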

Problems

Problem 1.

Statement.

Write down the free particle Schr\”{o}dinger equation for two dimensions in (i) Cartesian and (ii) polar coordinates. Obtain the corresponding wavefunction.

Cartesian case.

For the Cartesian coordinates case we have

Application of separation of variables with gives

Immediately, we have the time dependence

with the PDE reduced to

Introducing separate independent constants

provides the pre-normalized wave function and the constraints on the constants

Rectangular normalization.

We are now ready to apply normalization constraints. One possibility is a rectangular periodicity requirement.

or

This provides a more explicit form for the energy expression

We can also add in the area normalization using

Our eigenfunctions are now completely specified

The interesting thing about this solution is that we can make arbitrary linear combinations

and then “solve” for , for an arbitrary by taking inner products

This gives the appearance that any function is a solution, but the equality of 2.18 only applies for functions in the span of this function vector space. The procedure works for arbitrary square integrable functions , but the equality really means that the RHS will be the periodic extension of .
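The rectangular-periodicity quantization above can be sketched numerically. With $k_x = 2\pi n/L_x$, $k_y = 2\pi m/L_y$ the energies are $E = \hbar^2(k_x^2 + k_y^2)/2m$; the code below sets $\hbar = m = 1$ (a unit choice for this example):

```python
import math

def energy(n, m, Lx, Ly):
    """Energy of the (n, m) mode of a periodic Lx x Ly box, hbar = mass = 1."""
    kx = 2 * math.pi * n / Lx
    ky = 2 * math.pi * m / Ly
    return (kx**2 + ky**2) / 2
```

For a square box the $(n, m) \leftrightarrow (m, n)$ symmetry produces the expected degeneracies.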

Infinite space normalization.

An alternate normalization is possible by using the Fourier transform normalization, in which we substitute

Our inner product is now

And the corresponding normalized wavefunction and associated energy constant are

Now via this Fourier inner product we are able to construct a solution from any square integrable function. Again, this will not be
an exact equality since the Fourier transform has the effect of averaging across discontinuities.

Polar case.

In polar coordinates our gradient is

with

Squaring the gradient for the Laplacian we’ll need the partials, which are

The Laplacian is therefore

Evaluating the derivatives we have

and are now prepared to move on to the solution of the Hamiltonian . With separation of variables again using we have

Rearranging to separate the term we have

The angular solutions are given by

where the normalization is given by

and the radial part by the solution of the PDE

Problem 2.

Statement.

Use the orthogonality property of

confirm that at least the first two terms of (4.171)

are correct.

Solution.

Taking the inner product using the integral of 2.34 we have

To confirm the first two terms we need

On the LHS for we have

On the LHS for note that

So, integration in gives us

Now compare to the RHS for , which is

which matches 2.41. For we have

which in turn matches 2.42, completing the exercise.
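The orthogonality relation itself was stripped from these notes; as a stand-in, here is a numerical check of the Legendre polynomial relation $\int_{-1}^{1} P_l(x) P_m(x)\, dx = 2\delta_{lm}/(2l+1)$, the one typically used for expansions of this kind (an assumption, not confirmed by the notes):

```python
# Explicit low-order Legendre polynomials
P = [
    lambda x: 1.0,                    # P_0
    lambda x: x,                      # P_1
    lambda x: 0.5 * (3 * x * x - 1),  # P_2
]

def inner(l, m, n=20000):
    """Midpoint-rule integral of P_l P_m over [-1, 1]."""
    dx = 2.0 / n
    return sum(
        P[l](-1.0 + (i + 0.5) * dx) * P[m](-1.0 + (i + 0.5) * dx) * dx
        for i in range(n)
    )
```

The off-diagonal inner products vanish while the diagonal ones give $2/(2l+1)$, which is exactly the machinery that isolates individual expansion coefficients.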

Problem 3.

Statement.

Obtain the commutation relations by calculating the vector using the definition directly instead of introducing a differential operator.

Solution.

Expressing the product in determinant form sheds some light on this question. That is

We see that evaluating this cross product in turn requires evaluation of the set of commutators. We can do that with the canonical commutator relationships directly using like so

The first two terms cancel, and we can employ (4.179) to eliminate the antisymmetric tensors from the last two terms

For , this is , so we can write

In [2], the commutator relationships are summarized this way, instead of using the antisymmetric tensor (4.224)

as here in Desai. Both say the same thing.
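The commutation relations can be double-checked in a concrete representation. A sketch using the standard $l = 1$ (3×3) angular momentum matrices with $\hbar = 1$; these explicit matrices are textbook results assumed for this example, not derived in the notes:

```python
s = 2 ** -0.5
Lx = [[0, s, 0], [s, 0, s], [0, s, 0]]
Ly = [[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]]
Lz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

# [Lx, Ly] should equal i Lz (and cyclic permutations)
C = commutator(Lx, Ly)
```

A matrix check like this verifies the relations in one representation; the operator derivation in the problem shows they hold in general.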

Problem 4.

Statement.

Solution.

TODO.

Problem 5.

Statement.

A free particle is moving along a path of radius . Express the Hamiltonian in terms of the derivatives involving the polar angle of the particle and write down the Schr\”{o}dinger equation. Determine the wavefunction and the energy eigenvalues of the particle.

Solution.

In classical mechanics our Lagrangian for this system is

with the canonical momentum

Thus the classical Hamiltonian is

By analogy the QM Hamiltonian operator will therefore be

For , separation of variables gives us

from which we have

Requiring single valued , equal at any multiples of , we have

or

Suffixing the energy values with this index we have

Allowing both positive and negative integer values for we have

where the normalization was a result of the use of an inner product over the angles
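The particle-on-a-ring solution can be sketched numerically: $\psi_m(\phi) = e^{im\phi}/\sqrt{2\pi}$ with $E_m = \hbar^2 m^2 / (2 M R^2)$, where the code below sets $\hbar = M = R = 1$ (a unit choice for this example):

```python
import cmath
import math

def psi(m, phi):
    """Normalized ring eigenfunction."""
    return cmath.exp(1j * m * phi) / math.sqrt(2 * math.pi)

def E(m):
    """Energy eigenvalue, hbar = mass = R = 1; note the m <-> -m degeneracy."""
    return m**2 / 2.0

# single-valuedness, psi(m, phi + 2 pi) == psi(m, phi), forces integer m
```

The $\pm m$ degeneracy corresponds to the two senses of circulation around the ring.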

Problem 6.

Statement.

Determine and .

Solution.

Since contain only and partials, . For the position vector, however, we have an angular dependence, and are left to evaluate . We’ll need the partials for . We have

Evaluating the partials we have

With

where , and , we have

For the partial we have

We are now prepared to evaluate the commutators. Starting with the easiest we have

So we have

Observe that by virtue of chain rule, only the action of the partials on itself contributes, and all the partials applied to cancel out due to the commutator differences. That simplifies the remaining commutator evaluations. For reference the polar form of , and are

where the sines and cosines are written with , and respectively for short.

Motivation.

In [1] was a Geometric Algebra derivation of the 2D polar Laplacian by squaring the gradient. In [2] was a factorization of the spherical polar unit vectors in a tidy compact form. Here both of these ideas are utilized to derive the spherical polar form of the Laplacian, an operation that is strictly algebraic (squaring the gradient) provided we operate on the unit vectors correctly.

Our rotation multivector.

Our starting point is a pair of rotations. We rotate first in the plane by

Then apply a rotation in the plane

The composition of rotations now gives us

Expressions for the unit vectors.

The unit vectors in the rotated frame can now be calculated. With we can calculate

Performing these we get

and

and

Summarizing these are

Derivatives of the unit vectors.

We’ll need the partials. Most of these can be computed from 3.9 by inspection, and are
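Those unit vector partials can be spot-checked numerically against the standard component forms $\hat{r} = (\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta)$, $\hat{\theta} = (\cos\theta\cos\phi, \cos\theta\sin\phi, -\sin\theta)$, $\hat{\phi} = (-\sin\phi, \cos\phi, 0)$ (standard expressions assumed here). For example, $\partial\hat{r}/\partial\theta = \hat{\theta}$ and $\partial\hat{r}/\partial\phi = \sin\theta\,\hat{\phi}$:

```python
import math

def r_hat(t, p):
    return (math.sin(t) * math.cos(p), math.sin(t) * math.sin(p), math.cos(t))

def theta_hat(t, p):
    return (math.cos(t) * math.cos(p), math.cos(t) * math.sin(p), -math.sin(t))

def phi_hat(t, p):
    return (-math.sin(p), math.cos(p), 0.0)

h = 1e-6  # central-difference step

def d_dtheta(vec, t, p):
    return tuple((a - b) / (2 * h) for a, b in zip(vec(t + h, p), vec(t - h, p)))

def d_dphi(vec, t, p):
    return tuple((a - b) / (2 * h) for a, b in zip(vec(t, p + h), vec(t, p - h)))
```

A check like this catches sign errors quickly when reading derivative tables such as the one above.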

Expanding the Laplacian.

We note that the line element is , so our gradient in spherical coordinates is

We can now evaluate the Laplacian

Evaluating these one set at a time we have

and

and

Summing these we have

This is often written with a chain rule trick to consolidate the and partials
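That chain rule consolidation of the radial terms, $(1/r^2)\,\partial_r(r^2 \partial_r f) = (1/r)\,\partial_r^2(r f)$, is easy to verify numerically. A sketch using central differences on an arbitrary test function (my choice, not from the notes):

```python
import math

def f(r):
    return math.exp(-r) * r**2  # arbitrary smooth test function

h = 1e-4

def radial_lhs(r):
    """(1/r^2) d/dr (r^2 df/dr), via nested central differences."""
    g = lambda rr: rr**2 * (f(rr + h) - f(rr - h)) / (2 * h)  # r^2 f'
    return (g(r + h) - g(r - h)) / (2 * h) / r**2

def radial_rhs(r):
    """(1/r) d^2 (r f)/dr^2."""
    g = lambda rr: rr * f(rr)
    return (g(r + h) - 2 * g(r) + g(r - h)) / h**2 / r
```

For this particular $f$ both sides equal $f'' + 2f'/r = e^{-r}(r^2 - 6r + 6)$, which the test checks as well.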

Dedication.

To all tyrannical old Professors driven to cruelty by an unending barrage of increasingly ill prepared students.

Motivation.

The text [1] has an excellent general derivation of a number of forms of the gradient, divergence, curl and Laplacian.

This is actually done, not starting with the usual Cartesian forms, but with more general definitions.

These are then shown to imply the usual Cartesian definitions, plus provide the means to calculate the general relationships in whatever coordinate system you like. All in all one can’t beat this approach, and I’m not going to try to replicate it, because I can’t improve it in any way by doing so.

Given that, what do I have to say on this topic? Well, way way back in first year electricity and magnetism, my dictator of a prof, the intimidating but diminutive Dmitrevsky, yelled at us repeatedly that one cannot just dot the gradient to form the Laplacian. As far as he was concerned one can only say

and never never never, the busted way

Because “this only works in Cartesian coordinates”. He probably backed up this assertion with a heartwarming and encouraging statement like “back in the days when University of Toronto was a real school you would have learned this in kindergarten”.

This detail is actually something that has bugged me ever since, because my assumption was that, provided one is careful, a change to an alternate coordinate system shouldn’t matter. The gradient is still the gradient, so it seems to me that this ought to be a general way to calculate things.

Here we explore the validity of the dictatorial comments of Prof Dmitrevsky. The key to reconciling intuition and his statement turns out to lie with the fact that one has to let the gradient operate on the unit vectors in the non Cartesian representation as well as the partials, something that wasn’t clear as a first year student. Provided that this is done, the plain old dot product procedure yields the expected results.

This exploration will utilize a two dimensional space as a starting point, transforming from Cartesian to polar form representation. I’ll also utilize a geometric algebra representation of the polar unit vectors.

The gradient in polar form.

Let’s start off with a calculation of the gradient in polar form, starting with the Cartesian form. Writing , , , and , we want to map

into the same form using , and . With we have

Next we need to do a chain rule expansion of the partial operators to change variables. In matrix form that is

To calculate these partials we drop back to coordinates

From this we calculate

for

We can now write down the gradient in polar form, prior to final simplification

Observe that we can factor a unit vector

so the element of the matrix product in the interior is

Similarly, the element of the matrix product in the interior is

The exponentials cancel nicely, leaving, after a final multiplication, the polar form of the gradient

That was a fun way to get the result, although we could have just looked it up. We want to use this now to calculate the Laplacian.

Polar form Laplacian for the plane.

We are now ready to look at the Laplacian. First let’s do it the first year electricity and magnetism course way. We look up the formula for polar form divergence, the one we were supposed to have memorized in kindergarten, and find it to be

We can now apply this to the gradient vector in polar form which has components , and , and get

This is the expected result, and what we should get by performing in polar form. Now, let’s do it the wrong way, dotting our gradient with itself.

This is wrong! So is Dmitrevsky right that this procedure is flawed, or do you spot the mistake? I have also cruelly written this out in a way that obscures the error and highlights the source of the confusion.

The problem is that our unit vectors are functions, and they must also be included in the application of our partials. Using the coordinate polar form without explicitly putting in the unit vectors is how we go wrong. Here’s the right way

Now we need the derivatives of our unit vectors. The derivatives are zero since these have no radial dependence, but we do have partials

and

(One should be able to get the same results if these unit vectors were written out in full as , and , instead of using the obscure geometric algebra quaternionic rotation exponential operators.)

Having calculated these partials we now have

Exactly what it should be, and what we got with the coordinate form of the divergence operator when applying the “Laplacian equals the divergence of the gradient” rule blindly. We see that the expectation that is the Laplacian in more than the Cartesian coordinate system is not invalid, but that care is required to apply the chain rule to all functions. We also see that expressing a vector in coordinate form when the basis vectors are position dependent is also a path to danger.
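The whole point can be checked numerically: the polar-form Laplacian $\partial_r^2 f + (1/r)\partial_r f + (1/r^2)\partial_\theta^2 f$ must match the Cartesian $\partial_x^2 f + \partial_y^2 f$. A sketch with central differences on an arbitrary test function (the function and evaluation point are choices for this example):

```python
import math

def f_xy(x, y):
    return x**3 * y + y**2   # arbitrary test function; Laplacian is 6xy + 2

def f_polar(r, th):
    return f_xy(r * math.cos(th), r * math.sin(th))

h = 1e-4

def lap_cartesian(x, y):
    return ((f_xy(x + h, y) - 2 * f_xy(x, y) + f_xy(x - h, y)) / h**2
            + (f_xy(x, y + h) - 2 * f_xy(x, y) + f_xy(x, y - h)) / h**2)

def lap_polar(r, th):
    d2r = (f_polar(r + h, th) - 2 * f_polar(r, th) + f_polar(r - h, th)) / h**2
    dr = (f_polar(r + h, th) - f_polar(r - h, th)) / (2 * h)
    d2t = (f_polar(r, th + h) - 2 * f_polar(r, th) + f_polar(r, th - h)) / h**2
    return d2r + dr / r + d2t / r**2
```

Note that `lap_polar` contains the correct $(1/r)\partial_r$ term; dropping it reproduces exactly the "busted" naive dotting that omits the unit vector derivatives.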

Is this anything that our electricity and magnetism prof didn’t know? Unlikely. Is this something that our prof felt that could not be explained to a mob of first year students? Probably.

Having covered a fairly wide range in the previous Geometric Algebra exploration of the angular momentum operator, it seems worthwhile to attempt to summarize what was learned.

The exploration started with a simple observation that the use of the spatial pseudoscalar for the imaginary of the angular momentum operator in its coordinate form

allowed for expressing the angular momentum operator in its entirety as a bivector valued operator

The bivector representation has an intrinsic complex behavior, eliminating the requirement for an explicitly imaginary as is the case in the coordinate representation.

It was then assumed that the Laplacian can be expressed directly in terms of . This isn’t an unreasonable thought since we can factor the gradient with components projected onto and perpendicular to a constant reference vector as

and this squares to a Laplacian expressed in terms of these constant reference directions

a quantity that has an angular momentum like operator with respect to a constant direction. It was then assumed that we could find an operator representation of the form

Where was to be determined, and was found by subtraction. Thinking ahead to relativistic applications this result was obtained for the n-dimensional Laplacian and was found to be

For the 3D case specifically this is

While the scalar selection above is good for some purposes, it interferes with observations about simultaneous eigenfunctions for the angular momentum operator and the scalar part of its square as seen in the Laplacian. With some difficulty and tedium, by subtracting the bivector and quadvector grades from the squared angular momentum operator it was eventually found that (76) can be written as

In the 3D case the quadvector vanishes and (77) with the scalar selection removed is reduced to

In 3D we also have the option of using the duality relation between the cross and the wedge to express the Laplacian

Since it is customary to express angular momentum as , we see here that the imaginary in this context should perhaps necessarily be viewed as the spatial pseudoscalar. It was that guess that led down this path, and we come full circle back to this considering how to factor the Laplacian in vector quantities. Curiously this factorization is in no way specific to Quantum Theory.

A few verifications of the Laplacian in (80) were made. First it was shown that the directional derivative terms containing , are equivalent to the radial terms of the Laplacian in spherical polar coordinates. That is

Employing the quaternion operator for the spherical polar rotation

it was also shown that there was explicitly no radial dependence in the angular momentum operator which takes the form

Because there is a , and dependence in the unit vectors , , and , squaring the angular momentum operator in this form means that the unit vectors are also operated on. Those vectors were given by the triplet

Using for the spatial pseudoscalar, and (a possibly confusing switch of notation) for the bivector of the x-y plane we can write the spherical polar unit vectors in exponential form as

These or related expansions were used to verify (with some difficulty) that the scalar squared bivector operator is identical to the expected scalar spherical polar coordinates parts of the Laplacian

Additionally, by left or right dividing a unit bivector from the angular momentum operator, we are able to find that the raising and lowering operators are left as one of the factors

Both of these use , the bivector for the plane, and not the spatial pseudoscalar. We are then able to see that in the context of the raising and lowering operator for the radial equation the interpretation of the imaginary should be one of a plane.

Using the raising operator factorization, it was calculated that was an eigenfunction of the bivector operator with eigenvalue . This results in the simultaneous eigenvalue of for this eigenfunction with the scalar squared angular momentum operator.

There are a few things here that have not been explored to their logical conclusion.

The bivector Fourier projections do not obey the commutation relations of the scalar angular momentum components, so an attempt to directly use these to construct raising and lowering operators does not produce anything useful. The raising and lowering operators in a form that could be used to find eigensolutions were found by factoring out from the bivector operator. Making this particular factorization was a fluke, done only because it was desirable to express the bivector operator entirely in spherical polar form. It is curious that this results in raising and lowering operators for the x,y plane, and understanding this further would be nice.

In the eigensolutions for the bivector operator, no quantization condition was imposed. I don’t understand the argument that Bohm used to do so in the traditional treatment, and revisiting this once that is done is in order.

I am also unsure exactly how Bohm knows that the inner product for the eigenfunctions should be a surface integral. This choice works, but what drives it? Can that be related to anything here?

After a bit more manipulation we find that the angular momentum operator polar form representation, again using , is

Observe how similar the exponential free terms within the braces are to the raising operator as given in Bohm’s equation (14.40)

In fact since , the match can be made even closer

This is a surprising factorization, but noting that we have

It appears that factoring a unit bivector (in this case ) out from the left of the bivector angular momentum operator leaves the raising operator as one of the remainders.

Similarly, noting that anticommutes with , we have the right factorization

Now in the remainder, we see the polar form representation of the lowering operator .

I wasn’t expecting the raising and lowering operators to “fall out” as they did by simply expressing the complete bivector operator in polar form. This is actually fortuitous since it shows why this peculiar combination is of interest.

If we find a zero solution to the raising or lowering operator that is also a solution of the eigenproblem , then this is necessarily also an eigensolution of . A secondary implication is that this is then also an eigensolution of . This was the starting point in Bohm’s quest for the spherical harmonics, but why he started there wasn’t clear to me.

Saying this without the words, let’s look for eigenfunctions for the non-raising portion of (58). That is

Since we want solutions of

Solutions are

Demanding that this be a zero eigenfunction of the raising operator means we are looking for solutions of

It is sufficient to find zero eigenfunctions of

Evaluation of the partials and rearrangement leaves us with an equation in only

This has solutions , where, because of the partial derivatives in (65), we are free to make the integration constant a function of . Since this is the functional dependence that is a zero of the raising operator, including it as the dependence of (62) means that we have a simultaneous zero of the raising operator, and an eigenfunction of eigenvalue for the remainder of the angular momentum operator.

This seems very similar to the process of adding homogeneous solutions to particular ones, since we augment the specific eigenvalued solutions for one part of the operator with ones that produce zeros for the rest.

As a check, let’s apply the angular momentum operator to this and see if the results match our expectations.

From (38) we have

and from (37) we have

Putting these together shows that is an eigenfunction of ,

This negation surprised me at first, but I don’t see any errors here in the arithmetic. Observe that if this is correct, then it provides a demonstration that the previously suspected calculation leading to (7) is in fact wrong, as guessed. That suspected incorrect result, a product of very messy calculation, was

The one half factor seemed unaesthetic, with the following somehow preferable

If (67) is the correct version then calculating the operator effect of for the eigenvalue we have

So the eigenvalue is . We do know this to be the case, in fact, so a second look at the messy algebra leading to (68) is justified (or an attempt at a coordinate free expansion).