Structuring a matrix library

I am writing a C++ matrix library, mainly to learn more about linear algebra and improve my programming skills, even though I know plenty of libraries already exist. I have used one such library and quickly discovered that if you don't fully understand how it is structured, even basic multiplication can become confusing. Case in point:

Whether you think of matrices as row-major or column-major, a matrix * matrix has a clear mathematical meaning and implementation.

Thus, whether you think of a row-major ABC or a column-major CBA, the actual code you write should be in "calculation order" and thus A * B * C in both situations. Does this make sense? If you instead implemented a matrix library to allow a column-major notated order of C * B * A in actual code, you could easily confuse users (including yourself), since the implementation of "*" would need to deviate from the standard mathematical meaning.

But this presents additional confusion when you consider vector * matrix. If we implement all code in "calculation order" as suggested above, then a user would think of column-major CBAv (v is a vector) as coded v*A*B*C. Your vector* operator would therefore need to be implemented as if the vector were actually on the right of the matrix, which is what both the notation and the calculation order intend.

The problem is that it reads like the vector is on the left, which OpenGL also actually allows in the shader, by treating the vector as a row-vector, even though the OpenGL matrix is column-major (this v*M would then have the same effect as multiplying the column vector * the transpose of the matrix).

So in my library, I am opting not to provide a vector* overload, to avoid this confusion. The user might wonder, "am I treating v as a row or a column? Is it on the right or on the left?" But providing only Mv in code leads to the bizarre order of A * B * C * v to get CBAv (or vABC).

Of course, in most situations you wouldn't concatenate a matrix multiplication and a vector multiplication in the same expression. But the reality remains that part of the library (matrix * matrix) needs to be in "calculation order" while matrix * vector needs to be in column-major notation order, considering these issues.

I am curious how some of you have implemented a matrix * vector and matrix * matrix in your libraries with regards to calculation order.

the actual code you write should be in "calculation order" and thus A * B * C in both situations. Does this make sense?

No. You have made the mistake of confusing row vs. column major ordering with the contents of those matrices.

Mathematicians long since defined the canonical ordering of values in a matrix. The simplest way to understand this is to answer one question: where does the translation go?

Code :

[ 1 0 0 x ]
[ 0 1 0 y ]
[ 0 0 1 z ]
[ 0 0 0 1 ]

This matrix uses the canonical ordering. You could store that matrix in a 16-float array in row-major or column-major. If you store it in column-major ordering, then the xyz will be in the 12, 13, and 14 indices. If you store it in row-major, it will be in the 3, 7, and 11 indices.

However, a few decades ago, some idiot graphics people decided to overturn centuries of established mathematical conventions, under the mistaken assumption that this would be faster:

Code :

[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ x y z 1 ]

This matrix uses transposed ordering (so named because it is the transpose of the canonical ordering). You can again store this in row major or column-major. In column-major, the xyz will be in the 3, 7, and 11 indices. In row-major, it will be in the 12, 13, and 14 indices.

In short, a column-major/canonical ordered matrix looks exactly like a row-major/transposed ordered matrix. This is why Direct3D matrices, despite being described as "row-major," work just fine in OpenGL without requiring the transpose flag: because they are row-major/transposed, they are identical in representation to OpenGL's column-major/canonical ordered matrices.

The row/column distinction is all about how you stick the numbers for a 2D matrix into a 1D array of floats. The canonical/transposed distinction is about what those numbers are. That's defined by how you generate those numbers, what you get when you create a transformation matrix.

Back to your question. The "actual code you write" should be in accord with what your matrix functions generate. Column-major/canonical matrices should be multiplied in the canonical compositional ordering: transformation matrices that you intend to come before the current transform are right multiplied. Since row-major/canonical is functionally equivalent to column-major/transposed, it's easy to see that using row-major/canonical means transposing your matrix multiplication ordering compared to canonical.

So if you want matrices to be in "calculation order" (and you shouldn't), then you want column-major/transposed, aka row-major/canonical. Either way you look at it, you get the same numbers in the array.

So in my library, I am opting to not allow a vector* overload to avoid this confusion. The user might wonder, "am I treating v as a row or a column? is on right or on left?" But providing only Mv in code leads to the bizarre order of A * B * C * v to get CBAv (or vABC).

That makes no sense. What the user sees should be exactly what happens. If they see you right-multiplying a vector, then the vector should be right-multiplied. You're already diverging from the OpenGL standard by using the transposed ordering; if you stick with that convention, then put the vector on the left.

Otherwise, you're just confusing the user. Either consistently use the standard mathematical convention, or consistently use the transposed mathematical convention.

I am also reading through this very interesting compilation of thoughts from the guy that came up with the column-major scheme in OpenGL and it is quite entertaining and educational. I may have more to comment after reading it all: http://steve.hollasch.net/cgindex/ma...olumn-vec.html

Indeed you have resolved a critical gap in my understanding, and I had assumed that row/column major referred not just to memory but also visual ordering. Separating the two answers a lot of questions indeed.

I have looked over the third-party matrix library I've been using, and based on your descriptions it is column-major with transposed ordering, which has no doubt created all the confusion for me: I do multiply transforms left-to-right in transform order (i.e. the translation transform comes last, scale usually comes first), yet the translation component of the matrix is indeed in the last 4 elements of the 16-element array.

Transposed (DirectX) order works with row vectors, OpenGL with column vectors. E.g.

Code :

[x']   [a b c] [x]
[y'] = [d e f].[y]
[z']   [g h i] [z]

and

Code :

                       [a d g]
[x' y' z'] = [x y z] . [b e h]
                       [c f i]

both specify the same sequence of equations.

Note that the matrices are transposed.

Also, concatenation works the other way around. OpenGL matrix operations (glRotate etc) multiply with the current transformation matrix on the left and the operation's matrix on the right; with transposed matrices, the two are the other way around. In each case, the operation's matrix comes between the current matrix and the vector.

I'm afraid that what you have written doesn't make a lot of sense and differs from everything I've seen everywhere else. The two main points: You've written OpenGL matrix as column-major but a vector as row-major. You do not typically mix these two; if you have a column-major system, your matrix and your vectors are column-major. Second, just look at a typical OpenGL shader to see that vectors are usually multiplied on the right, not the left as you have written.

In fact what you have written would give an upper left first element of the resulting multiplied matrix as (ax+by+cz). That is not what you would actually get if you wanted to transform the written vector by the written matrix in OpenGL. It would be (ax+dy+gz) unless you deliberately wanted to multiply your vector by the transpose of the matrix, which is an entirely different issue not being discussed here.

While this is somewhat unconventional and will not result in the simplest possible code, I regularly argue for creating code that requires no prior knowledge to read. In the case of a matrix library, that could mean only ever using functions with naming accurate enough that you can easily tell what they're doing.

As an example, matrix multiplication becomes something like postMultiplyBy(mat4x4) (I believe Managed DirectX had this function)

Code :

mat4x4 a, b, c;
c = a.postMultiplyBy( b );

- no way to get it wrong, a first - then b, no matter the underlying conventions. And similarly for vectors

Code :

vec4 v0, v1;
v1 = a.transform( v0 );

(alternatively, C-style can be used: transform(mat4x4,vec4)).

IF an operator* is provided for vector-matrix mult, only overload ONE way, and for the love of math, don't make the mistake of glsl/hlsl/cg and make the mul(mat4x4,vec4) "cleverly" multiply by the transpose.

Additionally, on a slight tangent: IF an operator* is supplied for matrix multiplication (seeing as mul( mul(model, view), proj ) arguably is a bit horrific), remember to name your matrices according to row-major/column-major conventions.

Originally Posted by openlearner

I'm afraid that what you have written doesn't make a lot of sense and differs from everything I've seen everywhere else. The two main points: You've written OpenGL matrix as column-major but a vector as row-major.

It appears you still don't understand what "row major" and "column major" mean. Those terms refer to the way a two-dimensional array is stored in memory. A row-major matrix is stored in the order:

Code :

0 1 2
3 4 5
6 7 8

while a column-major matrix is stored in the order:

Code :

0 3 6
1 4 7
2 5 8

To put it another way, given the matrix

Code :

a b c
d e f
g h i

row-major order is

Code :

a b c d e f g h i

while column-major order is

Code :

a d g b e h c f i

Originally Posted by openlearner

You do not typically mix these two; if you have a column-major system, your matrix and your vectors are column-major. Second, just look at a typical OpenGL shader to see that vectors are usually multiplied on the right, not the left as you have written.

Matrix multiplication only requires that the "middle" dimensions are equal, i.e. the number of columns of the left-hand matrix must equal the number of rows of the right-hand matrix. The resulting matrix has the same number of rows as the left-hand matrix and the same number of columns as the right-hand matrix.
So you can multiply an M-element row vector (i.e. a 1xM matrix) by an MxN matrix to obtain an N-element row vector (i.e. a 1xN matrix). Or you can multiply an NxM matrix by an M-element column vector to obtain an N-element column vector. OpenGL uses the latter convention, DirectX the former.
None of this has anything to do with "row major" versus "column major" storage, though.

Originally Posted by openlearner

In fact what you have written would give an upper left first element of the resulting multiplied matrix as (ax+by+cz). That is not what you would actually get if you wanted to transform the written vector by the written matrix in OpenGL. It would be (ax+dy+gz) unless you deliberately wanted to multiply your vector by the transpose of the matrix, which is an entirely different issue not being discussed here.

Both multiplications describe the same system of equations, namely:

Code :

x' = a.x+b.y+c.z
y' = d.x+e.y+f.z
z' = g.x+h.y+i.z

If you don't understand this, you should start by learning the definition of matrix multiplication for the general case. An N-element vector is just a 1xN matrix (row vector) or an Nx1 matrix (column vector).