So, you want to prove that the product of the eigenvalues is the determinant first?
– Guess who it is. Dec 30 '10 at 6:33

You don't really take all pairs. It seems that you're assuming that $v_1,\ldots,v_n$ is a basis, which means that $M$ is diagonalizable.
– Jonas Meyer, Dec 30 '10 at 6:33

You need the $v_i$ to be linearly independent, right? Are you sure the way you are defining things this is the determinant?
– Andres Caicedo, Dec 30 '10 at 6:34

It's equal to the determinant only if the matrix is diagonalizable. In general you need to look at the Jordan form.
– Yuval Filmus, Dec 30 '10 at 6:35

First, change your definition from $\mathbf{Mv}_i = k\mathbf{v}_i$ to $(\mathbf{M}-k\mathbf{I})\mathbf{v}_i = \mathbf{0}$. Then consider $(\mathbf{v}_i,k,n)$, where $(\mathbf{M}-k\mathbf{I})^n\mathbf{v}_i = \mathbf{0}$ with $n$ minimal. If you are working over $\mathbb{C}$, this will help you get the necessary multiplicity.
– Arturo Magidin, Dec 30 '10 at 17:13

1 Answer

If I understand you correctly, you want to prove that, for square matrices with complex coefficients (or, more generally, with coefficients in any algebraically closed field), the product of the eigenvalues (with appropriate multiplicity) is a multiplicative function of the matrix, but without using the fact that the product of the eigenvalues is the determinant.

(It is pretty easy to show that the product of the eigenvalues, with appropriate multiplicity, is the determinant, by using the Jordan canonical form. The determinant is invariant under conjugation, so the determinant of a matrix is the same as the determinant of its Jordan canonical form. This form is upper triangular, and its diagonal contains the eigenvalues, each with the appropriate algebraic multiplicity, so the determinant is indeed the product of the eigenvalues; a Jordan canonical basis yields the triples you have. Once you have that the product is indeed the determinant, the conclusion follows.)
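
The claim in the parenthetical is easy to check numerically. The following is a small sketch (assuming NumPy is available) comparing the product of the eigenvalues, counted with algebraic multiplicity, against the determinant for a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# A random 4x4 complex matrix; over C, the eigenvalues counted with
# algebraic multiplicity always multiply to the determinant.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

eigenvalue_product = np.prod(np.linalg.eigvals(M))
determinant = np.linalg.det(M)

print(np.isclose(eigenvalue_product, determinant))  # True (up to rounding)
```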

There are a number of obstacles to the project. First, we need to show that $D(\mathbf{M})$ is well-defined; that is, that it does not depend on the choice of maximal family of vectors. This can be achieved pretty much the way one shows that the generalized eigenspaces yield a decomposition of the vector space (see any book that covers the Jordan canonical form), but the simplest proofs I know will use the Cayley-Hamilton Theorem at some point, which involves the determinant. (The more complicated ones consider the vector space as a module over $\mathbb{C}[x]$ and use the structure theorem for modules over PIDs... don't ask.)

But assuming this is taken for granted (that you can find a basis of generalized eigenvectors, so that your maximal family of triples has $n$ elements, and that the number of times each $k_i$ occurs is independent of the choice), we can proceed along the same lines as the usual proof that the determinant is multiplicative.

Lemma. If there is a vector $\mathbf{v}\neq \mathbf{0}$ such that $\mathbf{M}\mathbf{v}=\mathbf{0}$, then $D(\mathbf{M})=0$. Conversely, if $D(\mathbf{M})=0$, then there exists a vector $\mathbf{v}\neq \mathbf{0}$ such that $\mathbf{M}\mathbf{v}=\mathbf{0}$.

Proof. Immediate: $D(\mathbf{M})=0$ if and only if some $k_i=0$, which happens if and only if $0$ is an eigenvalue of $\mathbf{M}$. QED

Proposition. $D(\mathbf{M})=0$ if and only if $\mathbf{M}$ is not invertible.

Proof. $\mathbf{M}$ is invertible if and only if it is of full rank. By the Rank-Nullity Theorem, this occurs if and only if the nullity of $\mathbf{M}$ is zero, which happens if and only if the nullspace is trivial, if and only if for all $\mathbf{v}$, $\mathbf{M}\mathbf{v}=\mathbf{0}\Rightarrow \mathbf{v}=\mathbf{0}$. The previous lemma then completes the proof. QED

Proposition. If $D(\mathbf{A})=0$ or $D(\mathbf{B})=0$, then $D(\mathbf{AB})=0$.

Proof. If $D(\mathbf{B})=0$, then $\mathbf{B}$ is not invertible by the previous proposition, so $\mathbf{AB}$ is not invertible and the result follows, again by the previous proposition. If $D(\mathbf{B})\neq 0$, then $\mathbf{B}$ is invertible (as above), and by hypothesis $D(\mathbf{A})=0$, so there exists $\mathbf{v}\neq \mathbf{0}$ such that $\mathbf{A}\mathbf{v}=\mathbf{0}$. Then
$$(\mathbf{AB})(\mathbf{B}^{-1}\mathbf{v}) = \mathbf{A}\mathbf{v}=\mathbf{0} = 0(\mathbf{B}^{-1}\mathbf{v}),$$
and since $\mathbf{B}^{-1}\mathbf{v}\neq\mathbf{0}$, then $D(\mathbf{AB}) = 0$, as desired. QED
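
As a sanity check, here is a sketch (assuming NumPy, and computing $D$ as the product of the eigenvalues, which is what it amounts to over $\mathbb{C}$) showing that a singular factor forces the eigenvalue product of the product matrix to vanish:

```python
import numpy as np

def D(M):
    """Product of the eigenvalues of M, counted with algebraic multiplicity."""
    return np.prod(np.linalg.eigvals(M))

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))   # generically invertible
A = np.ones((3, 3))               # rank 1, hence singular: D(A) = 0

# A singular factor makes AB singular, so 0 is an eigenvalue of AB.
print(np.isclose(D(A @ B), 0.0))  # True
```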

Lemma. If $\mathbf{B}$ is an elementary matrix, then

If $\mathbf{B}$ is obtained from the identity by transposing two rows, then $D(\mathbf{B}) = -1$.

If $\mathbf{B}$ is obtained from the identity by adding a multiple of one row to another row, then $D(\mathbf{B}) = 1$.

If $\mathbf{B}$ is obtained from the identity by multiplying a row by $\lambda\neq 0$, then $D(\mathbf{B}) = \lambda$.

Proof.

Suppose we exchange rows $i$ and $j$. That means that $\mathbf{B}(e_i) = e_j$ and $\mathbf{B}(e_j) = e_i$, while $\mathbf{B}(e_k) = e_k$ if $k\neq i,j$. We get the $n-2$ triples $(e_k,1,1)$, $k\neq i,j$, together with the triples $(e_i+e_j,1,1)$ and $(e_i-e_j,-1,1)$. Then $D(\mathbf{B}) = 1^{n-1}(-1) = -1$, as claimed.

If we add $r$ times the $i$th row to the $j$th row ($i\neq j$), then $\mathbf{B}(e_k) = e_k$ for $k\neq i$, while $\mathbf{B}(e_i) = e_i + re_j$. Since $(\mathbf{B}-\mathbf{I})e_i = re_j$ and $(\mathbf{B}-\mathbf{I})^2e_i = \mathbf{0}$, the $n-1$ triples $(e_k,1,1)$ with $k\neq i$, together with $(e_i,1,2)$, make a maximal family, and $D(\mathbf{B}) = 1$, as claimed.

If we multiply the $i$th row by $\lambda$, then the $n-1$ triples $(e_j,1,1)$ with $j\neq i$, together with $(e_i,\lambda,1)$, make a maximal family and we have $D(\mathbf{B}) = 1^{n-1}\lambda = \lambda$. QED
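
The three values in the Lemma can be confirmed numerically. A sketch (assuming NumPy, and again computing $D$ as the product of the eigenvalues):

```python
import numpy as np

def D(M):
    # Product of the eigenvalues of M, counted with algebraic multiplicity.
    return np.prod(np.linalg.eigvals(M))

n = 4
swap = np.eye(n); swap[[0, 1]] = swap[[1, 0]]   # exchange rows 0 and 1
add = np.eye(n); add[2, 0] = 5.0                # add 5 * (row 0) to row 2
scale = np.eye(n); scale[3, 3] = 7.0            # multiply row 3 by 7

print(np.isclose(D(swap), -1.0))   # True
print(np.isclose(D(add), 1.0))     # True
print(np.isclose(D(scale), 7.0))   # True
```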

Lemma. If $\mathbf{A}$ and $\mathbf{B}$ are elementary matrices, then $D(\mathbf{AB}) = D(\mathbf{A})D(\mathbf{B})$.

Sketch. There are several cases to consider, depending on which kind of elementary matrices they are and how they interact. For example, if $\mathbf{A}$ is obtained from the identity by adding $r$ times the $i$th row to the $j$th row, and $\mathbf{B}$ is obtained by multiplying the $\ell$th row by $\lambda$, then we must consider:

If $\ell\neq i,j$, then we can take the $n-3$ triples $(e_k,1,1)$ with $k\neq i,j,\ell$, the triple $(e_{\ell},\lambda,1)$, the triple $(e_j,1,1)$, and the triple $(e_i,1,2)$. Then $D(\mathbf{AB}) = 1^{n-1}\lambda = \lambda = D(\mathbf{A})D(\mathbf{B})$.

If $\ell = j$, then $\mathbf{AB}(e_k) = e_k$ if $k\neq i,j$; $\mathbf{AB}(e_j) = \lambda e_j$, and $\mathbf{AB}(e_i) = e_i+re_j$. If $\lambda=1$, take the $n-2$ triples $(e_k,1,1)$ with $k\neq i,j$, together with $(e_j,1,1)$ and $(e_i,1,2)$. If $\lambda\neq 1$, note that $e_i$ is no longer a generalized eigenvector of rank $2$, since $(\mathbf{AB}-\mathbf{I})^2e_i = r(\lambda-1)e_j\neq\mathbf{0}$; instead, the vector $w = e_i + \frac{r}{1-\lambda}e_j$ satisfies $\mathbf{AB}w = w$, so take the $n-2$ triples $(e_k,1,1)$ with $k\neq i,j$, together with $(e_j,\lambda,1)$ and $(w,1,1)$. Either way, $D(\mathbf{AB}) = 1^{n-1}\lambda = \lambda = D(\mathbf{A})D(\mathbf{B})$.

If $\ell=i$, then $\mathbf{AB}(e_k) = e_k$ if $k\neq i$, and $\mathbf{AB}(e_i) = \lambda e_i + \lambda re_j$. If $\lambda=1$, take the $n-1$ triples $(e_k,1,1)$, $k\neq i$, and the triple $(e_i,1,2)$. If $\lambda\neq 1$, the vector $w = e_i + \frac{\lambda r}{\lambda-1}e_j$ satisfies $\mathbf{AB}w = \lambda w$, so take the $n-1$ triples $(e_k,1,1)$, $k\neq i$, together with $(w,\lambda,1)$. Either way, $D(\mathbf{AB})=1^{n-1}\lambda = \lambda = D(\mathbf{A})D(\mathbf{B})$.
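
In the $\ell=i$ case with $\lambda\neq 1$, one can check numerically that $w = e_i + \frac{\lambda r}{\lambda-1}e_j$ really is an eigenvector of $\mathbf{AB}$ with eigenvalue $\lambda$. A sketch (assuming NumPy; the sizes and parameters are arbitrary examples):

```python
import numpy as np

n, i, j, r, lam = 4, 0, 1, 3.0, 2.0   # example parameters, with lam != 1
A = np.eye(n); A[j, i] = r            # add r * (row i) to row j
B = np.eye(n); B[i, i] = lam          # multiply row i by lam

# Candidate eigenvector w = e_i + (lam * r / (lam - 1)) * e_j.
w = np.zeros(n); w[i] = 1.0; w[j] = lam * r / (lam - 1.0)

print(np.allclose(A @ B @ w, lam * w))  # True: AB w = lam * w
```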

Similar arguments hold for the other combinations of products of elementary matrices. QED

Corollary. If $\mathbf{A}$ and $\mathbf{B}$ are products of elementary matrices, then $D(\mathbf{AB}) = D(\mathbf{A})D(\mathbf{B})$.

Sketch. Write each of $\mathbf{A}$ and $\mathbf{B}$ as a product of elementary matrices and induct on the total number of factors, extending the case analysis of the Lemma to track how each successive elementary factor transforms a maximal family of triples. QED

THEOREM. If $\mathbf{A}$ and $\mathbf{B}$ are $n\times n$ matrices with coefficients in $\mathbb{C}$, then $D(\mathbf{AB}) = D(\mathbf{A})D(\mathbf{B})$.

Proof. If either $D(\mathbf{A}) = 0$ or $D(\mathbf{B})=0$, then the result follows from the previous proposition. If $D(\mathbf{A})\neq 0 \neq D(\mathbf{B})$, then each of $\mathbf{A}$ and $\mathbf{B}$ is invertible, hence a product of elementary matrices, and the result follows from the Corollary above. QED
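
Finally, the Theorem itself can be spot-checked. A sketch (assuming NumPy): for random complex matrices, the eigenvalue products multiply.

```python
import numpy as np

def D(M):
    # Product of the eigenvalues of M, counted with algebraic multiplicity.
    return np.prod(np.linalg.eigvals(M))

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

print(np.isclose(D(A @ B), D(A) * D(B)))  # True (up to rounding)
```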