Quick Review of Matrix Algebra in R

Lately, I’ve been running a series of fMRI experiments on visual perception. In the interests of understanding the underlying properties of the images I’m using as stimuli, I’ve been trying to learn more about the matrix transformations commonly used for image compression and image manipulation. Thankfully, R provides simple-to-use implementations for all of the matrix operations I wanted to play around with, so it’s been quite easy to get started. For the next few posts, I thought that I’d review the standard matrix techniques for image compression and editing, giving full examples of their implementation in R and demonstrations of their real-world value.

Before I start, I should make sure that you’re familiar with basic matrix operations in R. I imagine almost every R user knows a little bit about matrix algebra and probably knows the basics of using R to perform matrix algebra, but here’s a quick review to make sure I don’t leave anyone in the dark:

Be careful with the * operator: it does not perform matrix multiplication, but rather an entry-wise multiplication:

```r
# The 2x2 example matrix used throughout this post
# (its value is pinned down by the transpose and inverse shown below)
m <- matrix(c(0, 1, 2, 0), nrow = 2, ncol = 2)

m * m
#      [,1] [,2]
# [1,]    0    4
# [2,]    1    0
```
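For true matrix multiplication, R uses the `%*%` operator instead. A quick sketch with the same m:

```r
# Matrix product, not entry-wise: row-by-column dot products
m %*% m
#      [,1] [,2]
# [1,]    2    0
# [2,]    0    2
```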

Matrix Transposes and Inverses

The next few matrix operations are a little more complex: transposition and inversion. Transposition is the easier of the two. To get the transpose of a matrix, you simply call the t function:

```r
t(m)
#      [,1] [,2]
# [1,]    0    1
# [2,]    2    0
```
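A related shortcut worth knowing: base R's `crossprod` computes `t(m) %*% m` in a single call, which is both shorter and a little more efficient than transposing explicitly:

```r
# Equivalent to t(m) %*% m
crossprod(m)
#      [,1] [,2]
# [1,]    1    0
# [2,]    0    4
```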

In contrast, inversion is a little more complex, partly because the function you’d want to use has a non-obvious name: solve.

```r
solve(m)
#      [,1] [,2]
# [1,]  0.0    1
# [2,]  0.5    0
```

The reason that solve is called solve is that it's a general-purpose function you can use to solve matrix equations of the form m %*% x == b without wasting time computing the full inverse, which is often inefficient. If you want to know more about the computational efficiency issues, you should look into qr.solve, a variant based on the QR decomposition.
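For instance, with the same m as above, passing a right-hand side to solve finds x directly, without ever forming the inverse:

```r
# Solves m %*% x == b for x in one step
b <- c(2, 1)
solve(m, b)
# [1] 1 1
```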

Now, you probably know this already, but the defining property of a matrix's inverse is that the product of the matrix and its inverse is the identity matrix, when the inverse exists. I always find this a good way to make sure that I'm correctly computing the inverse of a matrix:
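As a sketch of that check with the m from the earlier examples:

```r
# The product of m and its inverse should be the identity matrix
m %*% solve(m)
#      [,1] [,2]
# [1,]    1    0
# [2,]    0    1
```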

In practice, you can usually get by without the pseudoinverse, which generalizes inversion to matrices that aren't invertible, but it's nice to know that it's at hand all the time.
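For reference, a minimal sketch of the pseudoinverse in R, using ginv from the MASS package (assuming MASS is installed) on a singular matrix that solve would reject:

```r
library(MASS)  # provides ginv, the Moore-Penrose pseudoinverse

s <- matrix(c(1, 2, 2, 4), nrow = 2, ncol = 2)  # singular: det(s) == 0
# solve(s) would throw an error here, but ginv(s) still works:
ginv(s)
#      [,1] [,2]
# [1,] 0.04 0.08
# [2,] 0.08 0.16
```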

Eigenvalues and Eigenvectors

Eigenvectors are surely the bane of every beginning student of linear algebra, though their considerable power to simplify problems makes them the darling of every applied mathematician. Thankfully, R's eigen function makes it easy to compute them for any square matrix:
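A sketch, again using the m from earlier (whose eigenvalues satisfy lambda^2 = 2):

```r
e <- eigen(m)
e$values   # the eigenvalues of m: sqrt(2) and -sqrt(2)
e$vectors  # one unit-length eigenvector per column
```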

Matrix Metadata
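As a quick sketch of the basic size queries, using the same m as everywhere above:

```r
# Querying a matrix's shape
dim(m)   # [1] 2 2
nrow(m)  # [1] 2
ncol(m)  # [1] 2
```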

Hopefully those operations were already familiar to any readers out there, as I doubt they'd be clear from this short review without prior knowledge. I just felt compelled to go over them before using any of them in my next set of posts. Tomorrow, I'll start going through more interesting matrix algorithms in R, beginning with the SVD.