I admit, I have lots of pet peeves; although I like to think of it not so much as having pet peeves as having a heightened sense of what’s right and what’s wrong! Something that gets my spider sense of wrongness tingling is seeing research papers use Plücker coordinates. (Computer graphics and computational geometry papers predominantly, because those are the fields I follow.) Why the sense of wrongness? Because these days, more and more people in these fields are grasping for Plücker coordinates to solve problems that would be both more elegantly and more efficiently solved with simpler, more traditional tools. Let me explain myself.

Plücker coordinates the inelegant and confusing way
Let us first start by looking at how Plücker coordinates are, sadly, often defined. In computer graphics one of the worst expositions around seems to have originated with Teller (based on Stolfi’s work), see [Teller92] or [Teller99]. (In the unlikely case you’re reading this, sorry to single you out Seth, but, well, I believe you started it.) For reasons unknown, Teller’s presentation has been copied in several works, for example in [Mahovsky04]. As Teller’s presentation suffers from greek-symbol-itis, I’ll quote Mahovsky’s presentation of Teller’s presentation (of Stolfi’s presentation?) of Plücker coordinates:

A line L passing through two points A and B in three-space with coordinates (Ax, Ay, Az) and (Bx, By, Bz) can be expressed as six Plücker coordinates L0…L5:

L0 = AxBy – BxAy
L1 = AxBz – BxAz
L2 = Ax – Bx
L3 = AyBz – ByAz
L4 = Az – Bz
L5 = By – Ay

A ray R can also be expressed as Plücker coordinates R0…R5. The first point in space is the ray origin O with coordinates (Ox, Oy, Oz). The second point is the origin O plus the direction vector D with components (Di, Dj, Dk), giving coordinates (Ox + Di, Oy + Dj, Oz + Dk). Substituting into the Plücker coefficient equations gives

R0 = OxDj – DiOy
R1 = OxDk – DiOz
R2 = –Di
R3 = OyDk – DjOz
R4 = –Dk
R5 = Dj

An important relation is side(R,L), which provides information about the relative orientation of the ray and the line.

An expression can be derived for side(R,L) as the permuted inner product of the ray’s Plücker coordinates Rn and the line’s Plücker coordinates Ln.

side(R,L) = R2L3 + R5L1 + R4L0 + R1L5 + R0L4 + R3L2

side(R,L) is positive if the relative orientation is clockwise, negative if counterclockwise, and zero if the lines intersect.

(Here, “relative orientation” refers to which way the other vector is directed, if the first vector is pointing straight at you.) So, did you follow all that? Or did you go “Argh! My eyes!” Yeah, me too.
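For concreteness, here’s the quoted recipe transcribed more or less directly into C. This is a sketch of my own; the L0…L5 component layout follows the indexing implied by the side(R,L) formula above. Note how little the code tells you about what is actually being computed:

```c
typedef struct { double x, y, z; } Point;

/* The six Plücker coordinates of the line through A and B,
   using the L0..L5 component layout quoted above. */
static void PluckerFromPoints(Point a, Point b, double l[6])
{
    l[0] = a.x * b.y - b.x * a.y;
    l[1] = a.x * b.z - b.x * a.z;
    l[2] = a.x - b.x;
    l[3] = a.y * b.z - b.y * a.z;
    l[4] = a.z - b.z;
    l[5] = b.y - a.y;
}

/* The "permuted inner product": positive means clockwise relative
   orientation, negative counterclockwise, zero means the lines meet. */
static double Side(const double r[6], const double l[6])
{
    return r[2]*l[3] + r[5]*l[1] + r[4]*l[0]
         + r[1]*l[5] + r[0]*l[4] + r[3]*l[2];
}
```

The zero test does work — two lines that meet yield side(R,L) = 0 — but good luck reading any geometry out of those index pairs.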

The above exposition is inelegant and confusing for a number of reasons: (1) by defining the individual components through determinant expressions, it thoroughly obscures that L3, L1, and L0 are the components of a vector cross product and that L2, L5, and L4 are actually the components of a vector subtraction, (2) the inner product is “permuted” for no apparent reason, (3) individual coordinates are exposed left and right, and (4) there is no insight at all, just numbers. Together, these issues hinder any and all real (geometric) understanding! Strike one!

How Plücker coordinates should be introduced (and why it doesn’t help)
Despite what the above mess may lead you to believe, you can present Plücker coordinates in a coherent and readable way. Perhaps the best presentation around is due to Ken Shoemake and can be found in Ray Tracing News (Volume 11, Number 1). (Ken, btw, is a long-time participant on netnews and a brilliant computer graphics educator.) As my point here is that you probably shouldn’t be using Plücker coordinates at all, I’ll just briefly summarize in the next paragraph the information in Ken’s presentation relevant to this discussion.

Briefly, in Ken’s formulation a directed line through two points A and B is represented by the pair of vectors (U, V) = (B – A, A × B), and two such lines (U1, V1) and (U2, V2) intersect exactly when S = Dot(U1, V2) + Dot(U2, V1) is zero, with the sign of S otherwise giving their relative orientation. How nice and simple! Notice how everything is neatly wrapped up in familiar dot and cross products of the points that define the directed lines. That’s great. But wait, there’s still something missing from the big picture. Why do we need pairs of vector subtractions and cross products? Why the mysterious S = Dot(U1, V2) + Dot(U2, V1) expression? Where is the geometric insight; what does it all mean?! Strike two!
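To show just how compact this form is, here’s a minimal C sketch (names are mine, not Ken’s; I’m assuming the convention U = B – A, V = A × B for the line through A then B):

```c
typedef struct { double x, y, z; } Vec3;

static Vec3 Sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 Cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static double Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Directed line through A then B in (U : V) form:
   U is the direction B - A, V is the moment A x B. */
typedef struct { Vec3 u, v; } PluckerLine;

static PluckerLine LineFromPoints(Vec3 a, Vec3 b)
{
    PluckerLine l = { Sub(b, a), Cross(a, b) };
    return l;
}

/* Relative orientation of two directed lines; zero iff they are
   coplanar (intersecting or parallel). */
static double S(PluckerLine l1, PluckerLine l2)
{
    return Dot(l1.u, l2.v) + Dot(l2.u, l1.v);
}
```

That’s the whole machinery: one subtraction and one cross product per line, and S is two dot products and an add.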

Scalar triple products and signed volumes, that’s where it’s at!
This is already too long, so I won’t go into derivations and stuff, but if you crank the math crank it turns out that all that Plücker coordinate nonsense to determine relative orientation of two directed lines is really just the sign of a scalar triple product!

Recall from elementary linear algebra (or the link above, whichever works for you) that the scalar triple product, p ⋅ (q × r), is the (signed) volume of the parallelepiped spanned by the three vectors p, q, and r. The result is signed because swapping any two of the vectors flips the sign; the sign thus encodes the relative orientation (handedness) of p, q, and r. Now, let’s say we have two directed line segments, AB and CD, and want to find out their relative orientation. Without loss of generality, we can assume we have a configuration like the one illustrated below.
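In code, the scalar triple product is a one-liner (a sketch, names mine):

```c
typedef struct { double x, y, z; } Vec3;

static Vec3 Cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static double Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* [p q r] = p . (q x r): the signed volume of the parallelepiped
   spanned by p, q, and r. Swapping any two arguments flips the sign. */
static double ScalarTriple(Vec3 p, Vec3 q, Vec3 r)
{
    return Dot(p, Cross(q, r));
}
```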

Our S-expression for testing the relative orientation of segments AB and CD then simply becomes [(D – C) (A – C) (B – C)]. Now we have something that both gives geometric insight and is dead simple. What does this mean? It means scalar triple products 2, Plücker coordinates 0!
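As a C sketch (names mine), the test for segments AB and CD is just:

```c
typedef struct { double x, y, z; } Vec3;

static Vec3 Sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 Cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static double Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Sign of [(D - C) (A - C) (B - C)]: the relative orientation of the
   directed segments AB and CD. Zero means the two lines are coplanar
   (intersecting or parallel). */
static double SegmentOrientation(Vec3 a, Vec3 b, Vec3 c, Vec3 d)
{
    Vec3 dc = Sub(d, c), ac = Sub(a, c), bc = Sub(b, c);
    return Dot(dc, Cross(ac, bc));
}
```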

Scalar triple products run (clockwise) circles around Plücker coordinates
So, what’s left is to show that using Plücker coordinates is inferior to using scalar triple products not just when it comes to simplicity and insight, but also when it comes to efficiency. For ease of exposition, I’ll do this last one by example. Consider the problem of intersecting a ray with a tetrahedron. A Plücker coordinate approach is described in [Platis03] and code for their approach has been made available (which you can find cached here; check the RayTetraPluecker.cpp file specifically).

Working with scalar triple products we can come up with more efficient, and much simpler code, and in less time. Here’s how we do it. Let the ray be defined by the directed segment PQ. Let the tetrahedron be ABCD, with the vertices arranged so that faces ABC, BAD, CDA, and DCB are clockwise when viewed from in front (from outside ABCD). First, let’s get rid of P by translating PQ and ABCD simultaneously such that P lies at the origin. To avoid excessive notation, let A, B, C, D, and Q now refer to the points after the translation, instead of the original points.

Now, the ray lies inside a face iff the ray lies clockwise of each edge of the face. This means, for the four faces, we need to test the following scalar triple products:

Face ABC: [Q A B], [Q B C], [Q C A]
Face BAD: [Q B A], [Q A D], [Q D B]
Face CDA: [Q C D], [Q D A], [Q A C]
Face DCB: [Q D C], [Q C B], [Q B D]

(The ray passes through a given face when all three of its edge tests agree in sign.)

By inspection, we can see that there are six unique scalar triple products (up to a sign) that need to be computed: [Q A B], [Q A C], [Q A D], [Q B C], [Q B D], and [Q C D]. Because a tetrahedron has 6 edges, and all 6 edges can be seen when 3 of its 4 faces are seen, we know that we would have to evaluate at least 6 scalar triple products in the worst case, so the above tests are as good as it gets.

Looking at the above terms, we can immediately see that we can evaluate these using 3 cross products (Q × A, Q × B, and Q × C) and 6 dot products. In contrast, the Greek code has a worst case of something like this: create 6 Plücker line coordinates, then perform up to 5 Plücker dot products (the permuted inner products). Creating the line coordinates requires 6 cross products; the permuted inner products require 10 dot products. That is, as written, the method I just presented does about half the work in the worst case! And it took all of 5 minutes to figure out, using scalar triple products. (Granted, some additional math is needed to figure out the actual intersection point, but it’s about the same for both methods.)
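Here’s a rough C sketch of the face-selection step just described. This is my own illustration, not the code from [Platis03]: it assumes the clockwise face winding stated earlier (ABC, BAD, CDA, DCB viewed from outside), classifies the infinite line rather than handling the ray’s extent, and punts on degenerate grazing cases:

```c
typedef struct { double x, y, z; } Vec3;

static Vec3 Sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 Cross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static double Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Which face does the ray P + tQ enter tetrahedron ABCD through?
   Returns 0 = ABC, 1 = BAD, 2 = CDA, 3 = DCB, or -1 if the line
   through P with direction Q misses the tetrahedron. Faces are
   assumed clockwise when viewed from outside. */
static int RayEnterFace(Vec3 p, Vec3 q, Vec3 a, Vec3 b, Vec3 c, Vec3 d)
{
    /* Translate everything so the ray origin lies at the origin */
    a = Sub(a, p); b = Sub(b, p); c = Sub(c, p); d = Sub(d, p);

    /* 3 cross products... */
    Vec3 qa = Cross(q, a), qb = Cross(q, b), qc = Cross(q, c);

    /* ...and 6 dot products give the six unique triple products */
    double qab = Dot(qa, b);  /* [Q A B] */
    double qac = Dot(qa, c);  /* [Q A C] */
    double qad = Dot(qa, d);  /* [Q A D] */
    double qbc = Dot(qb, c);  /* [Q B C] */
    double qbd = Dot(qb, d);  /* [Q B D] */
    double qcd = Dot(qc, d);  /* [Q C D] */

    /* The ray enters a face when all three of its edge tests agree
       in sign; the remaining signs follow from [Q X Y] = -[Q Y X]. */
    if ( qab >= 0 &&  qbc >= 0 && -qac >= 0) return 0; /* ABC */
    if (-qab >= 0 &&  qad >= 0 && -qbd >= 0) return 1; /* BAD */
    if ( qcd >= 0 && -qad >= 0 &&  qac >= 0) return 2; /* CDA */
    if (-qcd >= 0 && -qbc >= 0 &&  qbd >= 0) return 3; /* DCB */
    return -1;
}
```

Exactly as counted above: 3 cross products and 6 dot products in the worst case.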

What does this mean? Strike three! Plücker coordinates, you’re out! It also means I don’t want to see another computer graphics paper use Plücker coordinates for orientation tests ever again! Using Plücker coordinates for that is, well, plain silly is what it is. (I guess I’m not making friends in academia with this rant, but this isn’t rocket science folks, so stop it with the Plücker orientation tests already.)

A light at the end of the tunnel
Of course, nothing that I’ve said here is new. It has been known for a long time that the scalar triple product [p q r] is mathematically equivalent to the “permuted” inner product of two Plücker coordinates. It’s just that people in computer graphics haven’t been paying attention for the last 20 years or so. But there are some notable exceptions. One is a recent paper by Kensler and Shirley on fast ray-triangle intersection [Kensler06]. Their paper presents very clearly what I’ve shown (or at least discussed) here: that Plücker coordinates and scalar triple products are mathematically equivalent when it comes to orientation testing. Now that this equivalence has appeared in a published academic paper, hopefully other academics will see that they might as well just use scalar triple products. (But somehow I doubt it, because academics love the abstract over the concrete and the obscure over the obvious. I know, because I was once like that myself, in my previous life as an academic!)

(Btw, just to make it clear, I’m not saying that Plücker coordinates shouldn’t be used at all, merely that Plücker coordinates are the wrong choice for performing orientation tests. In that Plücker coordinates are a higher-level abstraction, they are still useful, in the same sense that we prefer to work with vectors instead of individual components.)


I went through similar reasoning in 2002 when working on optimizing the ray-tracing routines of a game. At the time, people mentioned the name Plücker coordinates to me, but I had simple formulas without any need for a name. Finding out that they matched was amusing :D

Your strike 2 point did not really explain where the two-part formula came from, nor its potential value.

If you have a segment (AB) and you want to see if the other segment (SE) is on the right side of it, you can simply consider the plane (ABS) and the dot product of SE with its normal SA^SB. Therefore the value d = SE.(SA^SB) = SE.((A-S)^(B-S)) = SE.(A^B - S^B - A^S) [reminder: S^S = 0]
then d = SE.(A^B+AB^S) = SE.(A^B) + AB.(S^SE) [rotation on the left of the triple product]
S^SE=S^(E-S)=S^E
therefore d = SE.(A^B) + AB.(S^E) = U1.V2 + U2.V1

Now, this is not completely (nor always) useless: you replace your formula (D – C).((A – C)^(B – C)), which is 4 reads, 3 subs, 1 cross, and 1 dot, with U1.V2 + U2.V1, which is 4 reads, 2 dots, and 1 add, as long as you prestore AB and A^B for every segment.

I might have forgotten other details since it has been a long time, but I hope this helps.

Hi Cedrick! Yes, you can typically reduce the total number of floating-point operations by expressing something using Plücker coordinates and storing the P – Q and P × Q terms. (I didn’t mention it above, but I talked about this briefly on pp. 187-188 of my book.) Of course, trading extra storage for a few saved flops is rarely the right thing to do these days, with flops at around 1 cycle and cache misses approaching 1000 cycles. Then again, if the target machine is some low-end hardware it might be a worthwhile tradeoff, and people should definitely be aware of this option so they can make that value judgment.
