The function of the imagination is not to make strange things settled, so much as to make settled things strange. - G.K. Chesterton

Why is matrix multiplication defined so very differently from matrix addition? If we didn’t know these procedures, could we derive them from first principles? What might those principles be?

This post gives a simple semantic model for matrices and then uses it to systematically derive the implementations that we call matrix addition and multiplication. The development illustrates what I call “denotational design”, particularly with type class morphisms. On the way, I give a somewhat unusual formulation of matrices and accompanying definition of matrix “multiplication”.

Note: I’m using MathML for the math below, which appears to work well on Firefox but on neither Safari nor Chrome. I use Pandoc to generate the HTML+MathML from markdown+lhs+LaTeX. There’s probably a workaround using different Pandoc settings and requiring some tweaks to my WordPress installation. If anyone knows how (especially the WordPress end), I’d appreciate some pointers.

Matrices

For now, I'll write matrices in the usual form:

$$\begin{pmatrix} A_{11} & \cdots & A_{1m} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nm} \end{pmatrix}$$

Addition

To add two matrices, we add their corresponding components. If

$$A = \begin{pmatrix} A_{11} & \cdots & A_{1m} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nm} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} B_{11} & \cdots & B_{1m} \\ \vdots & \ddots & \vdots \\ B_{n1} & \cdots & B_{nm} \end{pmatrix},$$

then

$$A + B = \begin{pmatrix} A_{11}+B_{11} & \cdots & A_{1m}+B_{1m} \\ \vdots & \ddots & \vdots \\ A_{n1}+B_{n1} & \cdots & A_{nm}+B_{nm} \end{pmatrix}.$$

More succinctly, $(A+B)_{ij} = A_{ij} + B_{ij}$.

Multiplication

Multiplication, on the other hand, works quite differently. If

$$A = \begin{pmatrix} A_{11} & \cdots & A_{1m} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nm} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} B_{11} & \cdots & B_{1p} \\ \vdots & \ddots & \vdots \\ B_{m1} & \cdots & B_{mp} \end{pmatrix},$$

then

$$(A \bullet B)_{ij} = \sum_{k=1}^{m} A_{ik} \cdot B_{kj}.$$

This time, we form the dot product of each A row and B column.

Why are these two matrix operations defined so differently? Perhaps these two operations are implementations of more fundamental specifications. If so, then making those specifications explicit could lead us to clear and compelling explanations of matrix addition and multiplication.

Transforming vectors

Simplifying from matrix multiplication, we have transformation of a vector by a matrix. If

$$A = \begin{pmatrix} A_{11} & \cdots & A_{1m} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nm} \end{pmatrix} \quad\text{and}\quad x = \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix},$$

then

$$A \cdot x = \begin{pmatrix} A_{11} \cdot x_1 + \cdots + A_{1m} \cdot x_m \\ \vdots \\ A_{n1} \cdot x_1 + \cdots + A_{nm} \cdot x_m \end{pmatrix}.$$

More succinctly, $(A \cdot x)_i = \sum_{k=1}^{m} A_{ik} \cdot x_k$.
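As a concrete (if naive) rendering of this formula, here is a small list-based sketch in Haskell. The `transform` name and the list-of-rows representation are mine, purely for illustration; they are not the representation this post goes on to develop:

```haskell
-- Naive list-of-rows matrices, just to illustrate (A.x)_i = sum_k A_ik * x_k.
type Vec    = [Double]
type Matrix = [Vec]          -- each inner list is one row

transform :: Matrix -> Vec -> Vec
transform a x = [sum (zipWith (*) row x) | row <- a]

-- e.g. transform [[1,2],[3,4]] [5,6] == [17,39]
```

Each output component is the dot product of one row of A with x, which is exactly the "row times column" reading of the formula.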

What’s it all about?

We can interpret matrices as transformations. Matrix addition then adds transformations:

$$(A + B)\,x = A\,x + B\,x$$

Matrix “multiplication” composes transformations:

$$(A \bullet B)\,x = A\,(B\,x)$$

What kinds of transformations?

Linear transformations

Matrices represent linear transformations. To say that a transformation (or "function" or "map") f is "linear" means that f preserves the structure of addition and scalar multiplication. In other words,

$$f(x + y) = f\,x + f\,y$$
$$f(c \cdot x) = c \cdot f\,x$$

Equivalently, f preserves all linear combinations:

$$f(c_1 \cdot x_1 + \cdots + c_m \cdot x_m) = c_1 \cdot f\,x_1 + \cdots + c_m \cdot f\,x_m$$

What does it mean to say that "matrices represent linear transformations"? As we saw in the previous section, we can use a matrix to transform a vector. Our semantic function will be exactly this use, i.e., the meaning of a matrix is a function (map) from vectors to vectors. Moreover, these functions will satisfy the linearity properties above.

Representation

For simplicity, I'm going to structure matrices in an unconventional way. Instead of a rectangular arrangement of numbers, I'll use the following generalized algebraic data type (GADT):
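A sketch of what such a GADT might look like, rendered in ASCII with `:-*` standing in for "⊸" and ordinary pairs standing in for "×", and assuming the `vector-space` package for `InnerSpace` and `Scalar` (the precise class constraints in the real definition may differ):

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}

import Data.VectorSpace (InnerSpace, Scalar)  -- from the vector-space package

infixr 1 :-*

-- A linear map from a to b is either a dot product with a fixed vector
-- (when the range is a scalar) or a pair of linear maps with simpler ranges.
data a :-* b where
  Dot   :: InnerSpace a => a -> (a :-* Scalar a)
  (:&&) :: (a :-* c) -> (a :-* d) -> (a :-* (c, d))
```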

I’m using the notation “c × d” in place of the usual “(c,d)”. Precedences are such that “×” binds more tightly than “⊸”, which binds more tightly than “→”.

This definition builds on the VectorSpace class, with its associated Scalar type and InnerSpace subclass. Using VectorSpace is overkill for linear maps. It suffices to use modules over semirings, which means that we don’t assume multiplicative or additive inverses. The more general setting enables many more useful applications than vector spaces do, some of which I will describe in future posts.

The idea here is that a linear map results in either (a) a scalar, in which case it’s equivalent to dot v (partially applied dot product) for some v, or (b) a product, in which case it can be decomposed into two linear maps with simpler range types. Each row in a conventional matrix corresponds to Dot v for some vector v, and the stacking of rows corresponds to nested applications of (:&&).

Semantics

The semantic function, apply, interprets a representation of a linear map as a function (satisfying linearity):
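Assuming the `Dot`/`(:&&)` representation described above, a sketch of `apply` might read as follows, using `dot` from `Data.VectorSpace` and `(&&&)` from `Control.Arrow`, where `(f &&& g) x = (f x, g x)`:

```haskell
apply :: (a :-* b) -> (a -> b)
apply (Dot v)   = dot v                -- one row: a partially applied dot product
apply (f :&& g) = apply f &&& apply g  -- stacked rows, applied to the same input
```

The two equations mirror the two readings of a matrix: a single row denotes a dot product, and a stack of rows denotes the pairing of the functions its parts denote.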

Deriving matrix operations

Addition

Following the principle of semantic type class morphisms, the specification simply says that the meaning of the sum is the sum of the meanings:

apply (f ^+^ g) ≡ apply f ^+^ apply g

which is half of the definition of “linearity” for apply.

The game plan (as always) is to use the semantic specification to derive (or “calculate”) a correct implementation of each operation. For addition, this goal means we want to come up with a definition like

f ^+^ g = <rhs>

where <rhs> is some expression in terms of f and g whose meaning is the same as the meaning of f ^+^ g, i.e., where

apply (f ^+^ g) ≡ apply <rhs>

Since Haskell has convenient pattern matching, we'll use it for our definition of (^+^) above. Since addition has two arguments and our data type has two constructors, there are at most four different cases to consider.
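Only two of those cases can type-check, since `Dot` and `(:&&)` produce different range types. A sketch of where the derivation lands (hedged: the actual definition in the accompanying library may differ in detail):

```haskell
Dot v     ^+^ Dot w     = Dot (v ^+^ w)              -- add corresponding row vectors
(f :&& g) ^+^ (h :&& k) = (f ^+^ h) :&& (g ^+^ k)    -- add stacked maps pairwise
```

The first equation relies on `dot` being linear in its vector argument, so that `dot (v ^+^ w) ≡ dot v ^+^ dot w`; the second simply pushes addition through the pairing.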

10 Comments

rdm:

Note that the derivation is a lot simpler if you start with a form of multiplication which follows the same structural rules as addition and build from that an “inner product” operation which does the multiply-and-sum thing.

in what way is this a “reimagining”? That matrices represent linear transformations is absolutely fundamental.

Thanks for asking.
What I mean by “reimagining” is (a) packaging of linear maps via the Category & Arrow vocabulary (more explicit in the library), (b) structuring the representation and semantics to match the algebraic structure of dot and (&&&), and (c) derivation of operations from semantics.

my big big big question is : can we do arbitrary computation using just matrix addition and multiplication (and maybe transpose)… i.e., can we capture the lambda calculus or the universal turing machine in a little bit of linear algebra? I have managed to find McCarthy COND using only primitive matrix ops. Is this enough, or do I need NAND?

Before I got to the comment section, I had almost exactly the same thought as @DrMathochist in my head, so I was glad I kept that on hold and continued on, and your response clarified what exactly you meant by “reimagining”. (In college, for my first introduction to linear algebra, we went quite far without ever seeing or using a matrix, finally getting there, and deriving the matrix operations, only when it became necessary as a concrete representation of a linear transformation.) I suspect that people seeing only the title and first paragraphs of this post might be confused, however.

There is a lovely demonstration somewhere vaguely remembered in one of my favorite books http://matrixeditions.com/UnifiedApproach4th.html that Determinant is the UNIQUE multilinear, antisymmetric function of matrices. In other words, it ought to be derivable from those properties, just in the way you derived mat mul from linearity and composition. dreaming

Jonathan Fischoff:

I can’t shake this feeling that Arrow erupted from the bowels of Base and tried to take possession of Vect.

It seems more natural to express Vect as a symmetric bimonoidal category (http://ncatlab.org/nlab/show/bimonoidal+category) using the tensor product and direct sum. (***) is basically the tensor product, but I think the direct sum of transformations would be more like (+++).

I’m probably revealing a weakness in my jargon knowledge, but what do you mean when you use the word “preserves,” here? You’ve used that one word to describe multiple things that look like mathematical laws (e.g. distributivity) each of which has its own name.

Jason Turner:

I am a mathematical physicist, lately working in software engineering – and somewhat new to the practice of functional programming; I came across your publications by way of listening to your HaskellCast re FRP and denotational design and find it very very interesting.

I think that you may find some interesting reading on a similar vein and with some curious twists in the book “Probability Theory, The Logic of Science” by E T Jaynes, specifically chapter 2. This derives the quantitative rules of probability theory from the ‘basic desiderata’ of ‘plausible reasoning’. What I find interesting is firstly another fine example of the approach you have applied here and from what I surmise quite widely elsewhere; the approach is the same and the particular steps are also similar although as the problem is more general the equations involved are more general functional equations, one being ‘The Associativity Equation’. But it is also interesting that the author took this approach, with some inspiration from G Polya, around 65 years ago and while active in applying computation to probability and statistics was certainly not of a theoretical computer science orientation.