This should be possible using Geometric Algebra too, and trying this made for a good exercise.

Setup

The starting point can be the same, the source free Maxwell’s equations. Writing , we have

Multiplying the last two equations by the spatial pseudoscalar , and using , we can write the curl equations in their dual bivector form

Now adding the dot and curl equations using eliminates the cross products

These can be further merged without any loss, into the GA first order equation
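For reference, assuming the conventional CGS total field $F = \mathbf{E} + I \mathbf{B}$ (an assumption about the convention in use here), the standard merged source-free equation takes the form:

```latex
% Source-free Maxwell's equations merged into one GA equation (CGS units),
% with total multivector field F = E + I B:
\left( \boldsymbol{\nabla} + \frac{1}{c} \frac{\partial}{\partial t} \right) F = 0,
\qquad F = \mathbf{E} + I \mathbf{B}
```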

We are really after solutions to the total multivector field . For this problem where separate electric and magnetic field components are desired, working from (7) is perhaps what we want?

Following Eli and Jackson, write , and

Evaluating the and partials we have

For the remainder of these notes, the explicit dependence will be assumed for and .

An obvious thing to try with these equations is to just substitute one into the other. Doing so, we get the pair of second order harmonic equations

One could consider the problem solved here. Separately equating both sides of this equation to zero, we have the constraint on the wave number and angular frequency, and the second order Laplacian on the left hand side is solved by the real or imaginary parts of any analytic function, especially when one considers that we are after a multivector field that is of intrinsic complex nature.

However, that is not really what we want as a solution. Doing the same on the unified Maxwell equation (9), we have

Selecting scalar, vector, bivector and trivector grades of this equation produces the following respective relations between the various components

From the scalar and pseudoscalar grades we have the propagation components in terms of the transverse ones

But this is the opposite of the relations that we are after. On the other hand from the vector and bivector grades we have

A clue from the final result.

From (22) and a lot of messy algebra we should be able to get the transverse equations. Is there a slicker way? The end result that Eli obtained suggests a path. That result was

The numerator looks like it can be factored, and after a bit of playing around a suitable factorization can be obtained:

Observe that the propagation components of the field can be written in terms of the symmetric product

Now the total field in CGS units was actually , not , so the factorization above isn’t exactly what we want. It does, however, provide the required clue. We probably get the result we want by forming the symmetric product (a hybrid dot product selecting both the vector and bivector terms).

Symmetric product of the field with the direction vector.

Rearranging Maxwell’s equation (15) in terms of the transverse gradient and the total field we have

With this our symmetric product is

The antisymmetric product on the right hand side should contain the desired transverse field components. To verify, multiply it out

Now, with multiplication by the conjugate quantity , we can extract these transverse components.

Rearranging, we have the transverse components of the field

With left multiplication by , and writing we have

While this is a complete solution, we can additionally extract the electric and magnetic fields to compare results with Eli’s calculation. We take vector grades to do so with , and . For the transverse electric field

and for the transverse magnetic field

Thus the split of transverse field into the electric and magnetic components yields

Compared to Eli’s method using messy traditional vector algebra, this method also has a fair amount of messy, tricky algebra, but of a different sort.

Suppose you have a set of files that you want to make a systematic, simple change to, replacing all instances of some pattern with another. For illustration purposes, suppose that replacement is a straight replacement of the simplest sort, changing all instances of the variable type:

blah_t

to something else:

MyBlahType

Such a search and replace can be done in many ways. For one file it wouldn’t be unreasonable to run:

vim filename

then:

:%s/blah_t/MyBlahType/g
:wq

For multiple files, this isn’t the most convenient way (although in a pinch you can do it with vim plus a command line for loop and a vim command script if you really wanted to). An easier way is the following one-liner:

perl -p -i -e 's/blah_t/MyBlahType/g' `cat r`

Let’s break this down. First is the `cat r` part. This presumes you are running in a unix shell, where backquotes (not regular quotes like ') mean “take the output of the back-quoted command and embed it in the new command”. This means the command above is equivalent to:

Next is the -i flag for the perl command. This specifies a suffix for a backup file. When no suffix is specified (as here) this means do an in-place modification of the existing file (particularly convenient if you are working with a version control system, since with a recently checked out source file you don’t have to worry as much about saving backups in case the search and replace goes wrong). If you wanted a backup with suffix .bak, then replace -i with -i.bak (no space between -i and .bak).

The -e option says to treat the parameter as the entire perl program. For example, if you had the following small perl command in a file (say ./mySearchAndReplace):

The remaining worker option in the perl command is the -p. This is really a convenience option and says to “wrap” the entire command (be that in a file or via -e) in a loop that processes standard input and outputs the results. You could do the same thing explicitly like so:

A command or script, such as sed, that takes all input from stdin and provides an altered stdout, is called a filter. In perl, while (<$filehandle>) is the syntax to process all lines in an opened file, and omitting the filehandle means the current default input (usually stdin). So a final decoding of the one liner is a command like:

In the previous factorization of the Laplacian, a projection of the gradient along a constant direction vector we found

The vector was arbitrary, and just needed to be constant with respect to the factorization operations. The transition to non-constant vectors was largely guesswork and was in fact wrong. This guess was that we had

The radial factorization of the gradient relied on the direction vector being constant. If we evaluate (2), then there should be a non-zero remainder compared to the Laplacian. Evaluation by coordinate expansion is one way to verify this, and should produce the difference. Let’s do this in two parts, starting with . Summation will be implied by mixed indexes, and for generality a general basis and associated reciprocal frame will be used.

For the dot product we have

So, forming the difference we have

Or

Going back to the quantum Hamiltonian we do still have the angular momentum operator as one of the distinct factors of the Laplacian. As operators we have something akin to the projection of the gradient onto the radial direction, as well as terms that project the gradient onto the tangential plane to the sphere at the radial point

Spatial bivector representation of the angular momentum operator.

Reading ([1]) on the angular momentum operator, the form of the operator is suggested by analogy, where components of with the position representation are used to expand the coordinate representation of the operator.

The result is the following coordinate representation of the operator

It is interesting to put these in vector form, and then employ the freedom to use for the spatial pseudoscalar.

The choice to use the pseudoscalar for this imaginary seems a logical one, and the end result is a pure bivector representation of the angular momentum operator

The choice to represent angular momentum as a bivector is also natural in classical mechanics (encoding the orientation of the plane and the magnitude of the momentum in the bivector), although its dual form the axial vector is more common, at least in introductory mechanics. Observe that there is no longer any explicit imaginary in (1), since the bivector itself has an implicit complex structure.

Factoring the gradient and Laplacian.

The form of (1) suggests a more direct way to extract the angular momentum operator from the Hamiltonian (i.e. from the Laplacian). Bohm uses the spherical polar representation of the Laplacian as the starting point. Instead let’s project the gradient itself in a specific constant direction , much as we can do to find the polar form angular velocity and acceleration components.

Write

Or

The Laplacian is therefore

So we have for the Laplacian a representation in terms of projection and rejection components
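As a sketch of that split in generic notation (symbols mine), for a constant unit vector $\hat{\mathbf{a}}$:

```latex
% Projection/rejection split of the gradient along a constant unit vector:
\boldsymbol{\nabla}
  = \hat{\mathbf{a}} \hat{\mathbf{a}} \boldsymbol{\nabla}
  = \hat{\mathbf{a}} ( \hat{\mathbf{a}} \cdot \boldsymbol{\nabla} )
  + \hat{\mathbf{a}} ( \hat{\mathbf{a}} \wedge \boldsymbol{\nabla} )

% Squaring, the cross terms cancel for constant \hat{a}, leaving
\boldsymbol{\nabla}^2
  = ( \hat{\mathbf{a}} \cdot \boldsymbol{\nabla} )^2
  - ( \hat{\mathbf{a}} \wedge \boldsymbol{\nabla} )^2
```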

The vector was arbitrary, and just needed to be constant with respect to the factorization operations. Setting , the radial position from the origin, we have

So in polar form the bivector form of the angular momentum operator is quite evident, just by application of projection of the gradient onto the radial direction and the tangential plane to the sphere at the radial point

Obsolete with potential errors.

This post may be in error. I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.

Original Post:

The basic idea of using duality to express the 4D divergence integral as a Stokes boundary surface integral has been explored. Let’s consider this in more detail, picking a specific parametrization, namely rectangular four vector coordinates. For the volume element write

As seen previously (but not separately), the divergence can be expressed as the dual of the curl

So we have . Putting things together, and writing we have

It is straightforward to reduce each of these dot products. For example

The rest proceed the same and rather anticlimactically we end up coming full circle

This is however nothing more than the definition of the divergence itself and no need to resort to Stokes theorem is required. However, if we are integrating over a rectangle and perform each of the four integrals, we have (with c=1) from the dual Stokes equation the perhaps less obvious result

When stated this way one sees that this could just as easily have followed directly from the left hand side. What’s the point then of the divergence theorem or Stokes theorem? I think that the value must really be the fact that the Stokes formulation naturally builds the volume element in a fashion independent of any specific parametrization. Here in rectangular coordinates the result seems obvious, but would the equivalent result seem obvious if non-rectangular spacetime coordinates were employed? Probably not.

Although I now rate myself as a guru level vi user, there was a time that I totally hated the vi editor. Back in university, if you tried to run the newsreader (trn in those days) without having configured a “usable” editor, vi would be invoked and then you were stuck.

So, … if you don’t want to use vi, and end up in it by mistake, how do you exit? The following is promising looking, but doesn’t actually work:

~
~
~
:!@$!@^! shit, how the hell do you exit this damn thing

Here’s the real way.

Step 1: If you are in edit mode (you have on purpose or accidentally hit the ‘i’ character) you’ll need to hit the Escape key on your keyboard. If you don’t know whether you are in edit mode or not, you can hit Esc anyway. You can hit it five times if you like; once you are out of edit mode, you’ll stay there.

Step 2: type ‘:’ (the colon character), then wq, or q, or q! or wq! Example output at the bottom of the vi screen will look something like:

~
~
~
:wq!

This last one means write and quit, and I really mean it, even if I have multiple files being edited. Plain old “:w” is just write, “:q” is quit, and “:q!” is “quit damnit, yes I really mean it”.

While my vi hating days are gone, the transition from vi hating to loving requires one first baby step: being able to exit the damn editor. Eventually, if, like me, you are forced to work on multi platform Unix development where the only editor you can depend on is vi, taking further steps away from vi hating may be possible.

Obsolete with potential errors.

This post may be in error. I wrote this before understanding that the gradient used in Stokes Theorem must be projected onto the tangent space of the parameterized surface, as detailed in Alan MacDonald’s Vector and Geometric Calculus.

Original Post:

Motivation

Relying on pictorial means and a brute force ugly comparison of left and right hand sides, a verification of Stokes theorem for the vector and bivector cases was performed ([1]). This was more of a confirmation than a derivation, and the technique fails the transition to the trivector case. The trivector case is of particular interest in electromagnetism since that and a duality transformation provides a four-vector divergence theorem.

The fact that the pictorial means of defining the boundary surface doesn’t work well in four vector space is not the only unsatisfactory aspect of the previous treatment. The requirement to perform a coordinate expansion of the hypervolume element and hypersurface element in the LHS and RHS comparisons is particularly ugly. It is a lot of work and essentially has to be undone on the opposing side of the equation. Compared to previous attempts to come to terms with Stokes theorem in ([2]) and ([3]), this more recent attempt at least avoids the requirement for a tensor expansion of the vector or bivector. It should be possible to build on this, minimize the amount of coordinate expansion required, and go directly from the volume integral to the expression of the boundary surface.

Do it.

Notation and Setup.

The desire is to relate the curl hypervolume integral to a hypersurface integral on the boundary

To put meaning to this statement, the volume and surface elements need to be properly defined. In order that this be a scalar equation, the object in the integral is required to be of grade , where is the dimension of the vector space that generates the object .

Reciprocal frames.

As evident in equation (2.1) a metric is required to define the dot product. If an affine non-metric formulation
of Stokes theorem is possible it will not be attempted here. A reciprocal basis pair will be utilized, defined by
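Sketching the standard defining relation in generic index notation (the symbols here are mine, not necessarily those of the original):

```latex
% Reciprocal frame {x^\mu} for a (not necessarily orthogonal) basis {x_\mu}:
x^\mu \cdot x_\nu = {\delta^\mu}_\nu
```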

Both of the sets and are taken to span the space, but are not required to be orthogonal. The notation is consistent with the Dirac reciprocal basis, and there will not be anything in this treatment that prohibits the Minkowski metric signature required for such a relativistic space.

Vector decomposition in terms of coordinates follows by taking dot products. We write

Gradient.

When working with a non-orthonormal basis, the reciprocal frame can be utilized to express the gradient.

This contains what may seem like an odd mix of upper and lower indexes in this definition. This is how the gradient is defined in [4]. Although it is possible to accept this definition and work with it, this form can be justified by requiring consistency of the gradient with the definition of the directional derivative. A definition of the directional derivative that works for single and multivector functions, in and other more general spaces, is
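In generic notation (symbols mine), the mixed-index form of the gradient and the limit definition it is checked against can be sketched as:

```latex
% Gradient: reciprocal frame vectors paired with coordinate partials,
% summation over repeated indexes implied, with x = x^\mu e_\mu:
\boldsymbol{\nabla} = e^\mu \frac{\partial}{\partial x^\mu}

% Directional derivative of a (multivector valued) function F along a:
(a \cdot \boldsymbol{\nabla}) F(x)
  = \lim_{t \rightarrow 0} \frac{F(x + t a) - F(x)}{t}
```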

Taylor expanding about in terms of coordinates we have

The lower index representation of the vector coordinates could also have been used, so using the directional derivative to imply a definition of the gradient, we have an additional alternate representation of the gradient

Volume element

We define the hypervolume in terms of parametrized vector displacements . For the vector x we can form a pseudoscalar for the subspace spanned by this parametrization by wedging the displacements in each of the directions defined by variation of the parameters. For let

so the hypervolume element for the subspace in question is

This can be expanded explicitly in coordinates

Observe that when is also the dimension of the space, we can employ a pseudoscalar and specify our volume element in terms of the Jacobian determinant.

This is

However, we won’t have a requirement to express the Stokes result in terms of such Jacobians.
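In a generic parametrization $\alpha_1, \ldots, \alpha_k$ (symbols mine), the construction sketched above is:

```latex
% Displacement due to variation of each parameter:
dx_i = \frac{\partial x}{\partial \alpha_i} \, d\alpha_i

% Hypervolume element spanned by the k displacements:
d^k x = dx_1 \wedge dx_2 \wedge \cdots \wedge dx_k
```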

Expansion of the curl and volume element product

We are now prepared to go on to the meat of the issue. The first order of business is the expansion of the curl and volume element product

The wedge product within the scalar grade selection operator can be expanded in symmetric or antisymmetric sums, but this is a grade dependent operation. For odd grade blades (vector, trivector, …), and vector we have for the dot and wedge product respectively

Similarly for even grade blades we have

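In generic notation (symbols mine), for a vector $a$ and a grade-$k$ blade $A_k$, these two decompositions are:

```latex
% Odd grade k (vector, trivector, ...):
a \cdot A_k = \tfrac{1}{2} ( a A_k + A_k a ), \qquad
a \wedge A_k = \tfrac{1}{2} ( a A_k - A_k a )

% Even grade k (bivector, ...):
a \cdot A_k = \tfrac{1}{2} ( a A_k - A_k a ), \qquad
a \wedge A_k = \tfrac{1}{2} ( a A_k + A_k a )
```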
First treating the odd grade case for we have

Employing cyclic scalar reordering within the scalar product for the first term

we have

The end result is

For even grade (and thus odd grade ) it is straightforward to show that (2.11) also holds.

Expanding the volume dot product

We want to expand the volume integral dot product

Picking will serve to illustrate the pattern, and the generalization (or degeneralization to lower grades) will be clear. We have

This avoids the requirement to do the entire Jacobian expansion of (2.9). The dot product of the differential displacement with can now be made explicit without as much mess.

We now have products of the form

Now we see that the differential form of (2.11) for this example is reduced to

While (2.11) was a statement of Stokes theorem in this Geometric Algebra formulation, it was really incomplete without this explicit expansion of . This expansion for the case serves to illustrate that we would write Stokes theorem as

Here the indexes have the range . This, with the definitions (2.7) and (2.8), is really Stokes theorem in its full glory.

Observe that in this Geometric Algebra form, the one forms are nothing more abstract than plain old vector differential elements. In the formalism of differential forms, these would be vectors, and would be a form. In a context where we are working with vectors, or blades already, the Geometric Algebra statement of the theorem avoids a requirement to translate to the language of forms.

With a statement of the general theorem complete, let’s return to our case where we can now integrate over each of the parameters. That is

This is precisely Stokes theorem for the trivector case and makes the enumeration of the boundary surfaces explicit. As derived there was no requirement for an orthonormal basis, nor a Euclidean metric, nor a parametrization along the basis directions. The only requirement of the parametrization is that the associated volume element is non-trivial (i.e. none of ).

For completeness, note that our boundary surface and associated Stokes statement for the bivector and vector cases is, by inspection respectively

and

These three expansions can be summarized by the original single statement of (2.1), which repeating for reference, is

Where it is implied that the blade is evaluated on the boundaries and dotted with the associated hypersurface boundary element. However, having expanded this we now have an explicit statement of exactly what that surface element is for any desired parametrization.

Duality relations and special cases.

Some special (and more recognizable) cases of (2.1) are possible considering specific grades of , and in some cases employing duality relations.

curl surface integral

One important case is the vector result, which can be expressed in terms of the cross product.

Write . Then we have

This recovers the familiar cross product form of Stokes law.
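That familiar form, for a vector field $\mathbf{f}$ over a surface $S$ with boundary $\partial S$, is:

```latex
\int_S ( \boldsymbol{\nabla} \times \mathbf{f} ) \cdot \hat{\mathbf{n}} \, dA
  = \oint_{\partial S} \mathbf{f} \cdot d\mathbf{l}
```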

3D divergence theorem

Duality applied to the bivector Stokes result provides the divergence theorem in . For bivector , let , , and . We then have

Similarly

This recovers the divergence equation
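That is, the familiar 3D statement for a volume $V$ with boundary $\partial V$:

```latex
\int_V \boldsymbol{\nabla} \cdot \mathbf{f} \, dV
  = \oint_{\partial V} \mathbf{f} \cdot \hat{\mathbf{n}} \, dA
```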

4D divergence theorem

How about the four dimensional spacetime divergence? Express a trivector as a dual four-vector , and write the four volume element . This gives

For the boundary volume integral write , for

So we have

where the orientation of the fourspace volume element and the boundary normal is defined in terms of the parametrization, the duality relations, and our explicit expansion of the 4D Stokes boundary integral above.

4D divergence theorem, continued.

The basic idea of using duality to express the 4D divergence integral as a Stokes boundary surface integral has been explored. Let’s consider this in more detail, picking a specific parametrization, namely rectangular four vector coordinates. For the volume element write

As seen previously (but not separately), the divergence can be expressed as the dual of the curl

So we have . Putting things together, and writing we have

It is straightforward to reduce each of these dot products. For example

The rest proceed the same and rather anticlimactically we end up coming full circle

This is however nothing more than the definition of the divergence itself and no need to resort to Stokes theorem is required. However, if we are integrating over a rectangle and perform each of the four integrals, we have (with ) from the dual Stokes equation the perhaps less obvious result

When stated this way one sees that this could just as easily have followed directly from the left hand side. What’s the point then of the divergence theorem or Stokes theorem? I think that the value must really be the fact that the Stokes formulation naturally builds the volume element in a fashion independent of any specific parametrization. Here in rectangular coordinates the result seems obvious, but would the equivalent result seem obvious if non-rectangular spacetime coordinates were employed? Probably not.

Let’s break this down. The first part :,+3 specifies a range of lines. For example

:1,3 s/something/else/

This would replace something with else on lines 1-3. If the first line number is left off, the current line is implied, and as above the end line number can be a computed expression relative to the current line (i.e. three additional lines on top of the current line).

The line numbers themselves can be patterns. For example:

:,/^}/ s/something/else/

would make the substitution something -> else from the current line to the line that starts with } (^ matches the beginning of the line). Now, how about the original search and replace? There a regular expression capture was used. The match pattern was essentially:

/.* = .*;/

but we want to put the two interesting bits in regular expression variables (back references) that can be referred to in the replacement expression. In vim the back references go like \1, \2, …. Different regular expression engines do this differently; in perl you’d use $1, $2 for the back references, and to generate them you just write /(.*) = (.*);/ with plain parentheses instead of \(.*\).

If you use vi as your editor (and by vi I assume vi == vim), then you want to know about the vim -q option, and grep -n to go with it.

This can be used to navigate through code (or other files) looking at matches to patterns of interest. Suppose you want to look at calls of strchr() that match some pattern. One way to do this is to find the subset of the files that are of interest. Say:

and edit all those files, searching again for the pattern of interest in each file. If there aren’t many such matches, your job is easy and can be done manually. Suppose however that there are 20 such matches, and 3 or 4 are of interest for editing, but you won’t know till you’ve seen them with a bit more context. What’s an easy way to go from one to the next? The trick is grep -n plus vim. Example:

vim will bring you right to line 710 of sqlecatd.C in this case. To go to the next pattern, which in this case is also in the next file, use the vim command

:cn

You can move backwards with :cN, and see where you are and the associated pattern with :cc.

vim -q understands a lot of common filename/linenumber formats (and can probably be taught more but I haven’t tried that). Of particular utility is compile error output. Redirect your compilation error output (from gcc/g++ for example) to a file, and when that file is stripped down to just the error lines, you can navigate from error to error with ease (until you muck up the line numbers too much).
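A quick sketch of the file:line format that vim -q consumes (demo files invented):

```shell
printf 'int a;\nchar *p = strchr(s, c);\n' > demo1.c
printf 'int b;\nint c;\nchar *q = strchr(t, d);\n' > demo2.c

# grep -n emits file:line:match, which is what vim -q parses:
grep -n 'strchr' demo1.c demo2.c > matches
cat matches
# demo1.c:2:char *p = strchr(s, c);
# demo2.c:3:char *q = strchr(t, d);

# Then (interactively): vim -q matches, with :cn / :cN / :cc to navigate.
```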

A small note. If you are grepping only one file, then the grep -n output won’t have the filename and vim -q will get confused. Example: