First of all, since an eigenspace generalizes a kernel, let’s consider a situation where we repeat the eigenvalue $0$:

$$\begin{pmatrix}0&1\\0&0\end{pmatrix}$$

This kills off the vector $\begin{pmatrix}1\\0\end{pmatrix}$ right away. But the vector $\begin{pmatrix}0\\1\end{pmatrix}$ gets sent to $\begin{pmatrix}1\\0\end{pmatrix}$, where it can be killed by a second application of the matrix. So while there may not be two independent eigenvectors with eigenvalue $0$, there can be another vector that is eventually killed off by repeated applications of the matrix.
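If you like, you can watch this happen concretely. Here’s a quick NumPy sketch of that same $2\times2$ matrix (the names `N`, `e1`, `e2` are my own):

```python
import numpy as np

# The 2x2 matrix above: eigenvalue 0, repeated twice
N = np.array([[0, 1],
              [0, 0]])

e1 = np.array([1, 0])  # killed right away
e2 = np.array([0, 1])  # sent to e1 first...

print(N @ e1)        # [0 0]
print(N @ e2)        # [1 0]
print(N @ (N @ e2))  # [0 0] -- killed by the second application
```

So `e2` is not in the kernel of the matrix, but it is in the kernel of its square.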

More generally, consider a strictly upper-triangular matrix $N$ — upper-triangular with all of its diagonal entries zero as well:

$$N=\begin{pmatrix}0&n_{1,2}&\cdots&n_{1,n}\\0&0&\ddots&\vdots\\\vdots&\vdots&\ddots&n_{n-1,n}\\0&0&\cdots&0\end{pmatrix}$$

That is, $n_{i,j}=0$ for all $j\leq i$. What happens as we compose this matrix with itself? I say that for $N^2$ we’ll find the $(i,j)$ entry to be zero for all $j\leq i+1$. Indeed, we can calculate it as a sum of terms like $n_{i,k}n_{k,j}$. For each of these factors to be nonzero we need $k>i$ and $j>k$. That is, $j>i+1$, or else the matrix entry must be zero. Similarly, every additional factor of $N$ pushes the nonzero matrix entries one step further from the diagonal, and eventually they must fall off the upper-right corner. That is, some power of $N$ must give the zero matrix. The vectors may not all have been killed by the transformation $N$ itself, so they may not all have been in the kernel, but they will all be in the kernel of some power of $N$.
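This band-pushing argument is easy to watch numerically. Here’s a sketch with a randomly filled strictly upper-triangular matrix (the size $5$ and the particular entries are arbitrary choices of mine; any would do):

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
# Strictly upper-triangular: keep only the entries with j > i
N = np.triu(rng.integers(1, 10, size=(n, n)), k=1)

for m in range(1, n + 1):
    P = np.linalg.matrix_power(N, m)
    # In N^m, every entry (i, j) with j <= i + m - 1 is zero:
    # the nonzero band sits at least m steps above the diagonal.
    assert np.allclose(np.tril(P, k=m - 1), 0)

# By the n-th power the band has fallen off the upper-right corner.
print(np.linalg.matrix_power(N, n))  # the zero matrix
```

Each multiplication by `N` shifts the nonzero band one step up and to the right, so after $n$ steps nothing is left.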

Similarly, let’s take a linear transformation $T:V\rightarrow V$ and a vector $v\in V$. If $(T-\lambda1_V)v=0$ we said that $v$ is an eigenvector of $T$ with eigenvalue $\lambda$. Now we’ll extend this by saying that if $(T-\lambda1_V)^kv=0$ for some power $k$, then $v$ is a generalized eigenvector of $T$ with eigenvalue $\lambda$.
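To see the definition in action, here’s a small made-up example (not from the discussion above): a transformation with the single repeated eigenvalue $1$, and a vector that fails to be an eigenvector but is a generalized one.

```python
import numpy as np

# T has the single repeated eigenvalue 1
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0
A = T - lam * np.eye(2)  # the shifted transformation T - lambda*1_V

v = np.array([0.0, 1.0])
print(A @ v)                             # [1. 0.] -- v is NOT an eigenvector
print(np.linalg.matrix_power(A, 2) @ v)  # [0. 0.] -- but (T - 1_V)^2 kills it
```

Shifting by the eigenvalue turns $T$ into a strictly upper-triangular matrix, which is exactly the situation from before: $v$ isn’t killed by one application of $T-\lambda1_V$, but it is by two.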
