In my last post, Why does touch include a utimensat() syscall?, I pointed out that strace reports a utimensat syscall from touch, and noted that this appeared to have the effect of clearing the subsecond portion of the file's mtime.

It turns out that strace was slightly lying, and we actually have a call to futimens( fd, NULL ). I was able to see that by debugging into coreutils's touch.c and watching what it does. This futimens() call is supposed to set the file time to the current time. It appears that ClearCase V8 + MVFS + futimens() doesn't respect the microsecond granular times that were implemented in V8: as currently implemented, the call sets the file's mtime to the current time, but only respects the seconds portion of that time.

With ClearCase V8 MVFS view private files having the capability for subsecond granularity, the end result is that this pushes the modification time backwards if you are unlucky enough to execute the futimens() in the same second as the file creation, which is almost always the case. It can be seen to work correctly if you put a long enough sleep in the code between the initial creation point for one file and the "touch" of the second. Here's a bit of standalone code that illustrates:
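The original standalone illustration isn't reproduced in this extract. The same experiment can be sketched in Python (the file name and sleep interval here are arbitrary choices of mine, not from the original): os.utime(path, None) makes the same set-timestamps-to-now call that touch's futimens(fd, NULL) does.

```python
import os
import time

# Hypothetical scratch file; the original experiment used files inside a
# ClearCase dynamic view.
path = "touch-demo.tmp"

# Create the file.  On most modern filesystems the mtime is recorded with
# subsecond (often nanosecond) resolution.
with open(path, "w") as f:
    f.write("x")
created_ns = os.stat(path).st_mtime_ns

time.sleep(0.5)

# Equivalent of touch's futimens(fd, NULL): set the timestamps to "now".
os.utime(path, None)
touched_ns = os.stat(path).st_mtime_ns

# On an ordinary filesystem the touched time is at or after the creation
# time.  Under ClearCase V8 MVFS the subsecond part of the new time is
# dropped, so when both operations land in the same second the mtime
# appears to go *backwards*.
print(touched_ns >= created_ns)

os.remove(path)
```

On a regular filesystem this prints True; the bug described above would make the comparison fail when the creation and the touch land in the same second.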

The first is the buggy call, and the time goes backwards despite a half second sleep. The second call is okay, because 0.9 + 0.5 seconds ends up in the next second, so the second "touched" file has a timestamp after the creation of the second file (as expected).

Notice that the file that is touched by doing a perl “open” ends up with a later time, despite the fact that it was done logically earlier than the touch.

Running this command outside of a ClearCase dynamic view shows only zeros in the subsecond times (also the behaviour of ClearCase V7). Needless to say, this mismatch between the file times and their creation sequence wreaks havoc on make.

I was curious how the two touch methods differed, and stracing them shows that the touch differs by including a utimensat() syscall. The perl touch is:

Following the principle that one should always relate new formalisms to things previously learned, I’d like to know what Maxwell’s equations look like in tensor form when magnetic sources are included. As a verification that the previous Geometric Algebra form of Maxwell’s equation that includes magnetic sources is correct, I’ll start with the GA form of Maxwell’s equation, find the tensor form, and then verify that the vector form of Maxwell’s equations can be recovered from the tensor form.

Tensor form

With four-vector potential \( A \), and bivector electromagnetic field \( F = \grad \wedge A \), the GA form of Maxwell’s equation is
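The equation itself is not reproduced in this extract. Assuming the conventions of the earlier GA post (electric current density four-vector \( J \), magnetic current density four-vector \( M \), and spacetime pseudoscalar \( I \)), it presumably has the form

\begin{equation}
\grad F = J - I M.
\end{equation}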

The electric source equation can be unpacked into tensor form by dotting with the four-vector basis vectors. With the usual definition \( F^{\alpha \beta} = \partial^\alpha A^\beta - \partial^\beta A^\alpha \), that is

By forming the dot product sequence \( F^{\alpha \beta} = \gamma^\beta \cdot \lr{ \gamma^\alpha \cdot F } \), the electric and magnetic field components can be related to the tensor components. The electric field components follow by inspection and are
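Those component equations are not reproduced in this extract. Assuming a \( (+,-,-,-) \) signature with \( \gamma_0 \) timelike (my assumption here, matching the author's other posts), the dot product sequence presumably recovers the standard identifications

\begin{equation}
E^i = F^{i 0}, \qquad B^i = -\inv{2} \epsilon^{i j k} F^{j k}.
\end{equation}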

This takes things full circle, going from the vector differential Maxwell's equations, to the Geometric Algebra form of Maxwell's equation, to Maxwell's equations in tensor form, and back to the vector form. Not only is the tensor form of Maxwell's equations with magnetic sources now known, the translation between the tensor and vector formalisms has also been verified, and miraculously no signs or factors of 2 were lost or gained in the process.

Observe that the perfect cancellation of the time derivative terms only occurs when the cross product differences are those of the phasors. When those cross product differences are those of the actual fields, like those in the Poynting theorem, there is a frequency dependent term in that expansion.

Maxwell’s equations with magnetic sources

The form of Maxwell's equations to be used here is expressed in terms of \( \boldsymbol{\mathcal{E}} \) and \( \boldsymbol{\mathcal{H}} \), assumes linear media, and does not assume a phasor representation
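Those equations are not reproduced in this extract; for free space they presumably take the form used in antenna texts such as [1]

\begin{equation}
\begin{aligned}
\spacegrad \cross \boldsymbol{\mathcal{E}} &= -\boldsymbol{\mathcal{M}} - \mu_0 \frac{\partial \boldsymbol{\mathcal{H}}}{\partial t} \\
\spacegrad \cross \boldsymbol{\mathcal{H}} &= \boldsymbol{\mathcal{J}} + \epsilon_0 \frac{\partial \boldsymbol{\mathcal{E}}}{\partial t} \\
\spacegrad \cdot \boldsymbol{\mathcal{E}} &= \rho/\epsilon_0 \\
\spacegrad \cdot \boldsymbol{\mathcal{H}} &= \rho_m/\mu_0.
\end{aligned}
\end{equation}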

The usual relationship is only modified by one additional term. Recall from electrodynamics [2] that \ref{eqn:energyMomentumWithMagneticSources:40} (when the magnetic current density \( \boldsymbol{\mathcal{M}} \) is omitted) is just one of four components of the energy momentum conservation equation

Note that \ref{eqn:energyMomentumWithMagneticSources:80} was likely not in SI units. The next task is to generalize this classical relationship to incorporate the magnetic sources used in antenna theory. With an eye towards the relativistic nature of the energy momentum tensor, it is natural to assume that the remainder of the energy momentum tensor conservation relation can be found by taking the time derivatives of the Poynting vector.
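As a sketch of that energy component with the magnetic source term included (assuming the free space form of Maxwell's equations with magnetic sources), dotting the two curl equations with \( \boldsymbol{\mathcal{H}} \) and \( \boldsymbol{\mathcal{E}} \) respectively and subtracting presumably gives

\begin{equation}
\spacegrad \cdot \lr{ \boldsymbol{\mathcal{E}} \cross \boldsymbol{\mathcal{H}} }
+ \frac{\partial}{\partial t} \lr{ \frac{\epsilon_0}{2} \boldsymbol{\mathcal{E}}^2 + \frac{\mu_0}{2} \boldsymbol{\mathcal{H}}^2 }
= -\boldsymbol{\mathcal{J}} \cdot \boldsymbol{\mathcal{E}} - \boldsymbol{\mathcal{M}} \cdot \boldsymbol{\mathcal{H}},
\end{equation}

the Poynting theorem with the usual \( -\boldsymbol{\mathcal{J}} \cdot \boldsymbol{\mathcal{E}} \) work term joined by its magnetic dual.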

The \( \mu_0 \boldsymbol{\mathcal{J}} \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} \cross \BB \) term is a portion of the Lorentz force equation in its density form. To put \ref{eqn:energyMomentumWithMagneticSources:220} into the desired form, the remainder of the Lorentz force equation \( \rho \boldsymbol{\mathcal{E}} = \epsilon_0 \boldsymbol{\mathcal{E}} \spacegrad \cdot \boldsymbol{\mathcal{E}} \) must be added to both sides. To extend the magnetic current term to its full dual (magnetic) Lorentz force structure, the quantity to add to both sides is \( \rho_m \boldsymbol{\mathcal{H}} = \mu_0 \boldsymbol{\mathcal{H}} \spacegrad \cdot \boldsymbol{\mathcal{H}} \). Performing these manipulations gives

It seems slightly surprising that the magnetic equivalents of the Lorentz force terms show up with alternating signs. This is, however, consistent with the duality transformations outlined in ([1] table 3.2).

Comfortable that the LHS has the desired structure, the RHS can be expressed as a divergence. Just expanding one of the differences of vector products on the RHS does not obviously show that this is possible, for example

This smells like something that can probably be related to the combined electromagnetic field multivector in some sort of structured fashion. Guessing that this is related to the antisymmetric sum of two electromagnetic field multivectors turns out to be correct. Let

This has two components: the first is a bivector (pseudoscalar times vector) that includes all the non-mixed products, and the second is a vector that includes all the mixed terms. We can therefore write the antisymmetric difference of the reciprocity theorem by extracting just the grade one terms of the antisymmetric sum of the combined electromagnetic field

over a surface, where the RHS was a volume integral involving the fields and (electric and magnetic) current sources.
The idea was to consider two different source loading configurations of the same system, and to show that the fields and sources in the two configurations can be related.

To derive the result in question, a simple way to start is to look at the divergence of the difference of cross products above. This will require the phasor form of the two cross product Maxwell’s equations
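Those phasor equations are not shown in this extract; with an \( e^{j \omega t} \) time convention and linear media they presumably read

\begin{equation}
\spacegrad \cross \BE = -\BM - j \omega \mu \BH, \qquad
\spacegrad \cross \BH = \BJ + j \omega \epsilon \BE.
\end{equation}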

Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught from slides that match the textbook so closely that there is little value in taking notes that just replicate the text. Instead, I am annotating my copy of the textbook with little details. My usual notes collection for the class will contain musings on details that were unclear, or in some cases, details that were provided in class but are not in the text (and are too long to pencil into my book).

Magnetic Vector Potential.

In class and in the problem set \( \BA \) was referred to as the Magnetic Vector Potential. I had only heard this called the Vector Potential. Prefixing it with "magnetic" seemed counterintuitive to me, since it is generated by electric sources (charges and currents).
This terminology can be justified by the fact that \( \BA \) generates the magnetic field through its curl. Some mention of this can be found in [4], which also points out that the Electric Potential refers to the scalar \( \phi \). Prof. Eleftheriades points out that the Electric Vector Potential refers to the vector potential \( \BF \) generated by magnetic sources (because in that case the electric field is generated by the curl of \( \BF \)).
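In equation form, the naming is simply a matter of which field each potential generates through its curl. In the conventions of [1] (my reading, for a configuration with only magnetic sources in the second case) that is presumably

\begin{equation}
\BB = \spacegrad \cross \BA, \qquad
\BE = -\inv{\epsilon} \spacegrad \cross \BF.
\end{equation}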

Plots of infinitesimal dipole radial dependence.

In section 4.2 of [1] are some discussions of the \( kr < 1 \), \( kr = 1 \), and \( kr > 1 \) radial dependence of the fields and power of a solution to an infinitesimal dipole system. Here are some plots of that \( k r \) dependence, along with the \( k r = 1 \) contour as a reference. All the \( \theta \) dependence and any scaling is left out.

The CDF notebook visualizeDipoleFields.cdf is available to interactively plot these, rotate the plots and change the ranges of what is plotted.

A plot of the real and imaginary parts of \( H_\phi = \frac{j k}{r} e^{-j k r} \lr{ 1-\frac{j}{k r} } \) can be found in fig. 1 and fig. 2.

fig 1. Radial dependence of Re H_phi

fig 2. Radial dependence of Im H_phi

A plot of the real and imaginary parts of \( E_r = \inv{r^2} \lr{1-\frac{j}{k r}} e^{-j k r} \) can be found in fig. 3 and fig. 4.
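For readers without Mathematica, the data behind these figures can be sketched in Python (the numpy usage here is my own, not from the original post; \( k = 1 \) and all scaling is dropped, as in the figures):

```python
import numpy as np

# Radial dependence of the infinitesimal dipole fields, with k = 1 and all
# theta dependence and scaling dropped, as in fig. 1-4.
k = 1.0
r = np.linspace(0.1, 10.0, 500)
kr = k * r

# H_phi = (j k / r) e^{-j k r} (1 - j/(k r))
H_phi = (1j * k / r) * np.exp(-1j * kr) * (1 - 1j / kr)

# E_r = (1 / r^2) (1 - j/(k r)) e^{-j k r}
E_r = (1 / r**2) * (1 - 1j / kr) * np.exp(-1j * kr)

# The kr = 1 contour separates the near field (kr < 1), where the 1/(kr)
# correction dominates, from the far field (kr > 1), where H_phi falls off
# like 1/r and E_r like 1/r^2.
print(abs(H_phi[-1]) * r[-1])  # approaches |1 - j/(kr)| -> 1 for large kr
```

The real and imaginary parts of H_phi and E_r can then be plotted against kr to reproduce the shapes in fig. 1 through fig. 4.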

Observe the perfect, somewhat miraculous seeming, cancellation of all the radial components of the field. If \( \BA_{\textrm{T}} \) is the non-radial projection of \( \BA \), the electric far field is just

the magnetic far field can be expressed in terms of the electric far field as
\begin{equation}\label{eqn:chapter4Notes:260}
\boxed{
\BH = \inv{\eta} \rcap \cross \BE.
}
\end{equation}

Plane wave relations between electric and magnetic fields

I recalled an identity of the form \ref{eqn:chapter4Notes:260} in [3], but didn't think that it required a far field approximation.
That is because the Jackson identity assumes a plane wave representation of the field, something that the far field assumptions also require locally.

which also finds \ref{eqn:chapter4Notes:260}, but with much less work and less mess.

Transverse only nature of the far-field fields

Also observe that it's possible to tell that the far field fields have only transverse components using the same argument: they are locally plane waves at that distance. The plane waves must satisfy the zero divergence Maxwell's equations

Vertical dipole reflection coefficient

In class a ground reflection scenario was covered for a horizontal dipole. Reading the text, I was surprised to see what looked like the same sort of treatment in section 4.7.2, but ending up with a quite different result. It turns out the difference is because the text treats the vertical dipole configuration, whereas Prof. Eleftheriades treated a horizontal dipole configuration; the two have different reflection coefficients, due to the differing polarizations of the field.

To understand these differences in reflection coefficients, consider first the field due to a vertical dipole as sketched in fig. 7, with a wave vector directed from the transmission point downwards in the z-y plane.

\( \BE \) lies in the plane of incidence, and the magnetic field is completely parallel to the plane of reflection. For the no transmission case, allowing \( v_t \rightarrow 0 \), the index of refraction is \( n_t = c/v_t \rightarrow \infty \), and the reflection coefficient is \( 1 \), as claimed in section 4.7.2 of [1]. Because of the symmetry of this dipole configuration, the azimuthal angle along which the wave vector is directed does not matter.

Horizontal dipole reflection coefficient

In the class notes, a horizontal dipole coming out of the page is indicated. With the page representing the z-y plane, this is a magnetic vector potential directed along the x-axis direction

This far field electric field lies in the plane of incidence (a direction of \( \thetacap \) rotated by \( \pi/2 \)), not in the plane of reflection. The corresponding magnetic field should be directed along the plane of reflection, which is easily confirmed by calculation
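Both limits can be read off the Fresnel reflection coefficients (a sketch; sign conventions vary by text). With incident and transmitted indices \( n_i, n_t \) and angles \( \theta_i, \theta_t \), one common convention is

\begin{equation}
\Gamma_\parallel = \frac{n_t \cos\theta_i - n_i \cos\theta_t}{n_t \cos\theta_i + n_i \cos\theta_t}, \qquad
\Gamma_\perp = \frac{n_i \cos\theta_i - n_t \cos\theta_t}{n_i \cos\theta_i + n_t \cos\theta_t}.
\end{equation}

As \( n_t \rightarrow \infty \) (with \( \theta_t \rightarrow 0 \) by Snell's law), the in-plane polarization of the vertical dipole has \( \Gamma_\parallel \rightarrow 1 \), while the perpendicular polarization of the horizontal dipole has \( \Gamma_\perp \rightarrow -1 \), consistent with the sign difference between the two configurations.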

For reasons not completely known to myself, I bought a Raspberry Pi recently. Here's a first circuit, a collaboration between myself and Lance. Lance added a switch between the GPIO output port and the LED, so that the port has to be enabled both by software and by the physical switch.

pi and the circuit

the circuit

I tried two different ways of controlling the GPIO, the first using a command line tool: