Let $R$ be a commutative ring (with $1$) and $R^{n \times n}$ be the ring of $n \times n$ matrices with entries in $R$.

In addition, suppose that $R$ is a ring in which every non-zero element is either a zero divisor or a unit. [For example, take any finite ring or any field.] My question:

Is every non-zero element of $R^{n \times n}$ a zero divisor or a unit as well?

We know that if $A \in R^{n \times n}$, then $AC=CA=\mathrm{det}(A)I_n$ where $C$ is the classical adjoint of $A$ and $I_n$ is the identity matrix.

This means that if $\mathrm{det}(A)$ is a unit of $R$, then $A$ is a unit of $R^{n \times n}$ (since $A^{-1}=(\mathrm{det}(A))^{-1}C$). The converse holds as well: if $A$ is a unit of $R^{n \times n}$, then $\mathrm{det}(A)$ is a unit (apply $\mathrm{det}$ to $AA^{-1}=I_n$).
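The identity $AC=\mathrm{det}(A)I_n$ and the inverse formula can be checked concretely. A minimal sketch over $R=\mathbb{Z}/6\mathbb{Z}$; the particular matrix and the helper functions are illustrative choices, not from the question:

```python
N = 6  # work over R = Z/6Z

def det2(A):
    """Determinant of a 2x2 matrix over Z/N."""
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % N

def adj2(A):
    """Classical adjoint (adjugate) of a 2x2 matrix over Z/N."""
    return [[A[1][1] % N, -A[0][1] % N],
            [-A[1][0] % N, A[0][0] % N]]

def matmul2(A, B):
    """Product of two 2x2 matrices over Z/N."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % N
             for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 5]]          # det(A) = 5 - 6 = -1 = 5 (mod 6), a unit
d = det2(A)
C = adj2(A)
assert matmul2(A, C) == [[d, 0], [0, d]]    # A*C = det(A)*I_2

# det(A) = 5 is a unit in Z/6 (5*5 = 25 = 1), so A is invertible:
d_inv = next(u for u in range(N) if (d * u) % N == 1)
A_inv = [[(d_inv * c) % N for c in row] for row in C]
assert matmul2(A, A_inv) == [[1, 0], [0, 1]]
```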

I would like to know if one can show $0 \not= A \in R^{n \times n}$ is a zero divisor if $\mathrm{det}(A)$ is zero or a zero divisor.

Things to consider:

1) This is true when $R=\mathbb{F}$ is a field. Over a field there are no zero divisors, and if $\mathrm{det}(A)=0$ then $Ax=0$ has a non-trivial solution $x$, so $B=[x|0|\cdots|0]$ is a non-zero matrix with $AB=0$; thus $A$ is a (left) zero divisor.

2) You can't use the classical adjoint to construct a zero divisor in general, since it can be zero even when $A$ is not zero. For example, for $n=3$ the matrix $A=\mathrm{diag}(1,0,0)$ is non-zero, yet every $2 \times 2$ minor of $A$ vanishes, so its classical adjoint is the zero matrix.

3) This is true when $R$ is finite (since $R^{n \times n}$ would be finite as well).

4) Of course the assumption that every non-zero element of $R$ is either a zero divisor or a unit is necessary: otherwise take an element $r$ that is non-zero, not a zero divisor, and not a unit (e.g. $r=2$ in $R=\mathbb{Z}$) and construct the diagonal matrix $D = \mathrm{diag}(r,1,\dots,1)$. Then $D$ is non-zero, not a zero divisor, and not a unit.

5) This is definitely true when $n=1$ and $n=2$. It is true for $n=1$ by the assumption on $R$. To see that it holds for $n=2$, notice that the classical adjoint contains the same elements as $A$ (up to sign): if $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then $C=\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$.

Thus if $\mathrm{det}(A)b=0$ for some $b \not=0$, then either $bC=0$, in which case every entry of $C$ (and hence of $A$) is annihilated by $b$, so that $A(bI_2)=0$; or $bC \not=0$, and then $A(Cb)=\mathrm{det}(A)bI_2 =0$. Either way $A$ is a zero divisor.

6) Apparently strange behavior can occur when $R$ is non-commutative (not surprising). For instance, a matrix can be both a left inverse and a left zero divisor. [The determinant keeps this from happening in the commutative case.]
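The $n=2$ argument above can be checked on a concrete example. A sketch over $R=\mathbb{Z}/4\mathbb{Z}$, with an illustrative choice of $A$ and annihilator $b$:

```python
N = 4  # work over R = Z/4Z

def matmul2(A, B):
    """Product of two 2x2 matrices over Z/N."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % N
             for j in range(2)] for i in range(2)]

A = [[2, 0], [0, 1]]     # det(A) = 2, a zero divisor in Z/4
C = [[1, 0], [0, 2]]     # classical adjoint of A
b = 2                    # det(A) * b = 4 = 0 (mod 4)

bC = [[(b * c) % N for c in row] for row in C]
# Second case of the argument: bC != 0, so Cb itself witnesses that
# A is a zero divisor, since A(Cb) = det(A)*b*I_2 = 0.
assert bC != [[0, 0], [0, 0]]
assert matmul2(A, bC) == [[0, 0], [0, 0]]
```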

Actually, as far as I've seen definitions, $0$ is usually not called a zero divisor (so an integral domain is a non-trivial commutative ring without zero divisors). But in the current setting saying "zero divisor or zero" all the time would indeed be tiresome.
–
Marc van Leeuwen Mar 10 '13 at 11:15

1 Answer

The answer given by David Speyer can be strengthened as follows. If $A$ is a non-invertible $n\times n$ matrix with entries in $R$ as described in the problem, then the linear maps $R^n \to R^n$ given by left and right multiplication by $A$ are non-injective. In particular, $A$ is both a left zero divisor and a right zero divisor.

This is a consequence of McCoy's rank theorem. You can find a nice, brief account of this in Section 2 of this paper by Kodiyalam, Lam, and Swan. One consequence of the theorem is that for any commutative ring $R$, a square matrix $A$ over $R$ has linearly independent columns if and only if its determinant is not a zero-divisor, if and only if its rows are linearly independent.

So if every element of $R$ is either invertible or a zero-divisor, this means that every square matrix over $R$ defines a linear transformation that is either invertible or non-injective.
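This dichotomy can be verified exhaustively in small cases. A brute-force sketch for $2 \times 2$ matrices over $R=\mathbb{Z}/6\mathbb{Z}$ (the exhaustive search is purely illustrative, not part of the McCoy-based argument):

```python
from itertools import product

N = 6  # work over R = Z/6Z, where every non-zero element is a unit or zero divisor
UNITS = {u for u in range(N) if any((u * v) % N == 1 for v in range(N))}

def det2(A):
    """Determinant of a 2x2 matrix over Z/N."""
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % N

invertible = noninjective = 0
for a, b, c, d in product(range(N), repeat=4):
    if (a, b, c, d) == (0, 0, 0, 0):
        continue  # skip the zero matrix
    if det2([[a, b], [c, d]]) in UNITS:
        invertible += 1
        continue
    # det is zero or a zero divisor: a non-zero kernel vector must exist.
    assert any((a * x + b * y) % N == 0 and (c * x + d * y) % N == 0
               for x, y in product(range(N), repeat=2) if (x, y) != (0, 0))
    noninjective += 1

# Every non-zero matrix fell into exactly one of the two classes.
assert invertible + noninjective == N ** 4 - 1
```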

To repeat a comment from my answer, once you know that $A:R^n \to R^n$ has a kernel, say $Av=0$, then let $V$ be the $n \times n$ matrix whose columns are all copies of $v$ and you get $AV=0$. That's a nice proof, thanks for finding it!
–
David Speyer Oct 11 '11 at 17:13
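The construction in David Speyer's comment can be illustrated over $\mathbb{Z}/6\mathbb{Z}$; the particular matrix $A$ and kernel vector $v$ below are illustrative choices:

```python
N = 6  # work over R = Z/6Z

A = [[2, 3], [0, 3]]     # det(A) = 6 = 0 (mod 6): not invertible
v = [3, 2]               # A v = (12, 6) = (0, 0) (mod 6), a non-zero kernel vector
assert all(sum(A[i][k] * v[k] for k in range(2)) % N == 0 for i in range(2))

# V has every column equal to v, so AV = 0 while V != 0:
# A is a left zero divisor.
V = [[v[0], v[0]], [v[1], v[1]]]
AV = [[sum(A[i][k] * V[k][j] for k in range(2)) % N for j in range(2)]
      for i in range(2)]
assert V != [[0, 0], [0, 0]]
assert AV == [[0, 0], [0, 0]]
```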

Right, David - thanks for being explicit for me!
–
Manny Reyes Oct 11 '11 at 17:22