Mathematics for the interested outsider

I hear joints popping as I stretch and try to get back into the main line of my posts.

We left off defining what we mean by a matrix element of a linear transformation. Let’s see how this relates to adjoints.

We start with a linear transformation $T:V\rightarrow W$ between two inner product spaces. Given any vectors $v\in V$ and $w\in W$ we have the matrix element $\langle w,T(v)\rangle_W$, using the inner product on $W$. We can also write down the adjoint transformation $T^*:W\rightarrow V$, and its matrix element $\langle v,T^*(w)\rangle_V$, using the inner product on $V$.

But the inner product on $V$ is (conjugate) symmetric. That is, we know that the matrix element $\langle v,T^*(w)\rangle_V$ can also be written as $\overline{\langle T^*(w),v\rangle_V}$. And we also have the adjoint relation $\langle T^*(w),v\rangle_V=\langle w,T(v)\rangle_W$. Putting these together, we find

$$\langle v,T^*(w)\rangle_V=\overline{\langle w,T(v)\rangle_W}$$

So the matrix elements of $T$ and $T^*$ are pretty closely related: one is the complex conjugate of the other, with the two vectors swapped.
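As a quick sanity check (my own sketch, not from the post itself), here is how the relation looks numerically, with a made-up complex matrix standing in for $T$ and the standard inner products on $\mathbb{C}^3$ and $\mathbb{C}^2$:

```python
import numpy as np

# Hypothetical 2x3 complex matrix standing in for T : V -> W,
# with V = C^3, W = C^2, and the standard inner products.
A = np.array([[1 + 2j, 0 - 1j, 3 + 0j],
              [2 - 1j, 1 + 1j, 0 + 4j]])

v = np.array([1 + 1j, 2 - 3j, 0 + 1j])  # a vector in V
w = np.array([2 + 0j, -1 + 2j])         # a vector in W

# np.vdot conjugates its first argument, matching an inner
# product that is conjugate-linear in the first slot.
lhs = np.vdot(v, A.conj().T @ w)  # <v, T*(w)>_V
rhs = np.vdot(w, A @ v)           # <w, T(v)>_W

# The adjoint's matrix element is the conjugate of the original's.
assert np.isclose(lhs, np.conj(rhs))
```

Here the adjoint is implemented as the conjugate transpose `A.conj().T`, which is exactly the claim the rest of the post verifies in general.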

What if we pick whole orthonormal bases $\{e_i\}$ of $V$ and $\{f_j\}$ of $W$? Now we can write out an entire matrix of $T$ as $t_{ji}=\langle f_j,T(e_i)\rangle_W$. Similarly, we can write a matrix of $T^*$ as

$$t^*_{ij}=\langle e_i,T^*(f_j)\rangle_V=\overline{\langle f_j,T(e_i)\rangle_W}=\overline{t_{ji}}$$

That is, we get the matrix for the adjoint transformation by taking the original matrix, swapping the two indices, and taking the complex conjugate of each entry. This “conjugate transpose” operation on matrices reflects adjunction on transformations.
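To see the whole-matrix statement in action, here is a small NumPy sketch (again my own illustration, using the same hypothetical matrix as above): it builds the matrix of the adjoint entry by entry from the relation $t^*_{ij}=\overline{t_{ji}}$, taking the standard bases of $\mathbb{C}^3$ and $\mathbb{C}^2$ as the orthonormal bases, and checks that the result is the conjugate transpose.

```python
import numpy as np

# Hypothetical 2x3 matrix for T, with the standard (orthonormal)
# bases e_i of C^3 and f_j of C^2.
A = np.array([[1 + 2j, 0 - 1j, 3 + 0j],
              [2 - 1j, 1 + 1j, 0 + 4j]])

e = np.eye(3)  # columns are the basis vectors e_i of V
f = np.eye(2)  # columns are the basis vectors f_j of W

# Entry (i, j) of the adjoint's matrix: the conjugate of the
# matrix element <f_j, T(e_i)>_W, i.e. conj(t_{ji}).
A_star = np.array([[np.conj(np.vdot(f[:, j], A @ e[:, i]))
                    for j in range(2)]
                   for i in range(3)])

# Swapping indices and conjugating gives the conjugate transpose.
assert np.allclose(A_star, A.conj().T)
```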

About this weblog

This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).

I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so please bear with me in the meantime.