So-called pseudovectors pop up in physics when discussing quantities defined by cross products, such as angular momentum $\mathbf L=\mathbf r\times\mathbf p$. Under the active transformation $\mathbf x \mapsto -\mathbf{x}$, we claim that such a vector gets mapped to itself, because $(-\mathbf r) \times (-\mathbf p) = \mathbf r\times\mathbf p$. (Or, under the equivalent passive transformation, a pseudovector turns into its negative.) But it seems like we're just pretending that a linear transformation $T$ preserves cross products, so that $T(\mathbf a \times \mathbf b) = T(\mathbf a) \times T(\mathbf b)$, and then when things don't go as expected we label the result a pseudovector. Is there more to the story?
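For concreteness, here is the cancellation checked numerically (a throwaway numpy sketch; the vectors are arbitrary examples):

```python
import numpy as np

# Arbitrary example vectors for position and momentum.
r = np.array([1.0, 2.0, 3.0])
p = np.array([-4.0, 0.5, 2.0])

L = np.cross(r, p)        # angular momentum L = r x p
L_inv = np.cross(-r, -p)  # the same quantity after the inversion x -> -x

# The two sign flips cancel, so L is unchanged even though r and p both flip.
print(np.array_equal(L, L_inv))  # True
```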

This question might be more appropriate to the physics SE.
–
FabianDec 24 '12 at 23:13

As far as I know, the difference between vectors and pseudovectors appears when one considers inversion (or mirror reflection).
–
FabianDec 24 '12 at 23:15

Yes, there is more to the story. If you want to fully formalize those things in a convenient mathematical framework you need to move into the realm of the differential forms. Try looking for the keyword "pseudovector" in the book "The Geometry of Physics" by Theodore Frankel.
–
Giuseppe NegroDec 24 '12 at 23:30

Thanks, I will take a look at Frankel's book, although I don't really have any experience with differential geometry yet.
–
yuvalDec 25 '12 at 0:15

2 Answers

In three dimensions, pseudovectors are a simple way to treat bivectors, oriented planar subspaces. True vectors are oriented linear subspaces with a weight (their magnitudes); bivectors are planar instead of linear. The normal vectors to these oriented subspaces are what we usually call pseudovectors, and it is for this reason that various operations (like reflections or inversions through the origin) produce "wrong" results.

Notationally, we deal directly with a bivector by forming a wedge product of vectors. That is, the bivector formed by vectors $a,b$ is $a \wedge b$. Given a linear operator $\underline T$, we define the action of the linear operator on a bivector by the following law:

$$\underline T(a \wedge b) = \underline T(a) \wedge \underline T(b)$$

Let us consider the simple case of $\underline T(a) = -a$ for any $a$. Then the associated bivector transforms as $\underline T(a \wedge b) = (-a) \wedge (-b) = a \wedge b$, as you observe. Doing it this way--by defining the action of a linear operator on a bivector--makes the result sensible, rather than simply decreeing that pseudovectors transform differently from regular vectors. Here, you build the operator according to a specific rule, and the result is deterministic.
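Here is a minimal numerical sketch of this induced action (my own illustration, not any library's standard representation): if we represent the bivector $a \wedge b$ by the antisymmetric matrix $ab^{\mathsf T} - ba^{\mathsf T}$, then the induced action of $\underline T$ becomes $B \mapsto T B T^{\mathsf T}$, and the inversion case works out as described:

```python
import numpy as np

def wedge(a, b):
    """Represent the bivector a ^ b as the antisymmetric matrix a b^T - b a^T."""
    return np.outer(a, b) - np.outer(b, a)

def apply_to_bivector(T, B):
    """Induced action of a linear map T on a bivector, T(a ^ b) = T(a) ^ T(b);
    in this matrix representation that is T B T^T."""
    return T @ B @ T.T

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, -1.0])
B = wedge(a, b)

T = -np.eye(3)  # the inversion x -> -x

# The induced action agrees with wedging the transformed vectors...
print(np.allclose(apply_to_bivector(T, B), wedge(T @ a, T @ b)))  # True
# ...and the two minus signs cancel: the bivector is invariant under inversion.
print(np.allclose(apply_to_bivector(T, B), B))  # True
```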

Note that we can continue to build things with wedges that traditional formulations of vector algebra and calculus tend to gloss over. We can define the action of a linear operator on three vectors wedged together.

The quantity $a \wedge b \wedge c$ is called a trivector or pseudoscalar. In three dimensions, there is only one linearly independent unit trivector, $\hat x \wedge \hat y \wedge \hat z$. The action of $\underline T$ on this object is very interesting. It happens that

$$\underline T(a \wedge b \wedge c) = \underline T(a) \wedge \underline T(b) \wedge \underline T(c) = (\det \underline T) \, a \wedge b \wedge c$$

This can be taken as a definition of the determinant, defined in a wholly geometric way.
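In 3D the weight of the trivector $a \wedge b \wedge c$ is the scalar triple product, so this determinant-as-volume-scaling statement is easy to check numerically (a quick numpy sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))        # an arbitrary linear operator
a, b, c = rng.normal(size=(3, 3))  # three arbitrary vectors

def trivector_weight(a, b, c):
    """In 3D, a ^ b ^ c is the scalar triple product times the unit
    pseudoscalar; return just the scalar weight."""
    return np.dot(a, np.cross(b, c))

vol_before = trivector_weight(a, b, c)
vol_after = trivector_weight(T @ a, T @ b, T @ c)

# T scales every trivector by det T -- a geometric definition of the determinant.
print(np.isclose(vol_after, np.linalg.det(T) * vol_before))  # True
```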

Ultimately, though, yes: linear operators should act individually on the vectors that make up these products--they should preserve wedge products. Cross products are closely related to wedges, and so most of the time, applying a linear operator as though it preserved cross products works out; but there are some cases (inversions and reflections among them) where it does not.

Edit: on the relation between operators on duals and duals of operators. The Hodge star is much, much better treated in geometric algebra as multiplication by the pseudoscalar. We define $i \equiv \hat x \wedge \hat y \wedge \hat z$ and make sense of expressions like $\star a = i a$ and $\star (a \wedge b) = -i (a \wedge b)$ through the geometric product. Here are the canonical properties of the geometric product: it is associative, $a(bc) = (ab)c$; it distributes over addition, $a(b+c) = ab + ac$; and the geometric product of a vector with itself is a scalar, $aa = |a|^2$. It follows that the geometric product of two orthogonal vectors reduces to their wedge product, so orthogonal vectors anticommute: $\hat x \hat y = \hat x \wedge \hat y = -\hat y \hat x$.

You should be able to show then that $i = \hat x \hat y \hat z$ and that $\star a = i a$ captures the Hodge star operation on a vector.

Now, why bother with this stuff? Because it makes formulas that would be ugly and clumsy with the Hodge star very simple. There exists a simple formula relating the adjoint (in Euclidean space, the transpose) of an operator with the inverse. That is,

$$\overline T^{-1}(a) = [\underline T(i)]^{-1} \underline T(ia)$$

for any multivector $a$, where $\overline T$ is the adjoint operator to $\underline T$. Written with Hodge stars, we would need a term of $(-1)^k$ that would alternate based on grade, and it would all be a royal mess. This formula, however, written in geometric algebra, is entirely simple.
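Specialized to vectors and their bivector duals, this formula reduces to the classical cofactor identity $(\underline T u) \times (\underline T v) = (\det \underline T)\, \overline T^{-1}(u \times v)$, which can be verified numerically for a general (not necessarily orthogonal) operator; in Euclidean coordinates the inverse adjoint is just the inverse transpose:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 3))  # arbitrary operator (invertible with probability 1)
u, v = rng.normal(size=(2, 3))

lhs = np.cross(T @ u, T @ v)
# The inverse adjoint (here, the inverse transpose) carries the dual of u ^ v,
# up to a factor of det T.
rhs = np.linalg.det(T) * (np.linalg.inv(T).T @ np.cross(u, v))

print(np.allclose(lhs, rhs))  # True
```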

Now then, rotations and reflections all belong to the group of orthogonal linear operators, obeying $\overline T^{-1} = \underline T$, so for rotations and reflections we get instead

$$\underline T(ia) = \underline T(i) \, \underline T(a) = (\det \underline T) \, i \, \underline T(a),$$

that is, $\underline T(\star a) = (\det \underline T) \star [\underline T(a)]$.

For a rotation, the determinant is $+1$, and as such, the $i$ just pulls out. Rotating the vector and then finding the dual is the same as rotating the dual. For an inversion, the determinant is $-1$, and you can see how the inversion of the vector gets canceled by the determinant's factor.
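Both cases are quick to check with numpy; in cross-product language the identity being tested is $(\underline T a) \times (\underline T b) = (\det \underline T)\, \underline T(a \times b)$ for orthogonal $\underline T$ (the example rotation about $\hat z$ is my own choice):

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5])
b = np.array([3.0, 1.0, -1.0])

# A rotation (det +1): 90 degrees about the z-axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
# The inversion (det -1).
P = -np.eye(3)

for T in (R, P):
    d = np.linalg.det(T)
    # For orthogonal T: (T a) x (T b) = det(T) * T(a x b).
    print(np.allclose(np.cross(T @ a, T @ b), d * T @ np.cross(a, b)))  # True
```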

Very helpful reply, thanks! I read up a bit on wedge products and their relation to the cross product via the Hodge dual. Is there any meaningful relationship between, say, $T(\star(\mathbf a \wedge \mathbf b))$ and $\star(T(\mathbf a \wedge \mathbf b))$?
–
yuvalDec 25 '12 at 23:10

@yuval I've added a section on orthogonal operators and operations of duals compared to duals of operators. The core result is that $(\det \underline T) \star[\underline T(a)] = \underline T(\star a)$ for any vector $a$ when $\underline T$ is orthogonal.
–
MuphridDec 26 '12 at 2:10