(a) Show that for each $\vec b$, $A\vec x=\vec b$ has a unique solution.

OR

(b) Show that $A$ is invertible.

For (a), $AD\vec b=I\vec b = \vec b$, so the given equation clearly has at least one solution, namely $\vec x =D\vec b$. But how do I show that this solution is unique?

For (b), I'm guessing we should show either that $DA=I$ or that $DA=AD$ (and thus $D=A^{-1}$), but I'm not sure how to do this without assuming that $A$ is invertible, which is exactly what we need to show.

It is quite clever to invoke the previously established $1 \implies 5$ within the crucial last step $6 \implies 1$! And yes, the fact that your proof involves only beginning linear algebra concepts, instead of vector spaces or group theory, makes it immensely useful and worthwhile, IMO. This is precisely the type of first-principles proof I was after, although I really like the simplicity of Ed's determinant argument too.
– Ryan Jul 19 '12 at 18:25

Does this still work if we omit the initial assumption that $A$ has a left inverse?
– Arkamis Jul 19 '12 at 18:49

This proof shows that all the statements are equivalent. But your problem statement does not assert any of those statements. In other words, there is nothing here that guarantees that your matrix $A$ must have a left inverse, or that $D$ is the right inverse of $A$. All it says is that if $D$ is the right inverse of $A$, then $A$ is invertible, so we still need to prove that this is the case.
– Arkamis Jul 19 '12 at 19:10

@Ed For the purpose of answering my original question, $5 \implies 6$ can be omitted, but 1 is important because $1 \implies 5$ is used to prove $6 \implies 1$. What Shahab has done is to prove part of the "Invertible Matrix Theorem", which contains a set of equivalent statements about the invertibility of $A$. He has done a better job than my textbook author David Lay, who made a small but critical mistake. This is actually the motivation for my original question. :)
– Ryan Jul 19 '12 at 19:16

@Ed The supposition is found in my original question: "$A$ is a square matrix, and there is a matrix $D$ such that $AD=I$." That is, I was asking for a proof that $6 \implies 1$. Math StackExchange is one of the best things about the internet.
– Ryan Jul 19 '12 at 19:19

$A$ is not known to be invertible; by assumption it only has an inverse on the right. If you assume that the invertible matrices form a group under matrix multiplication, then you implicitly assume that $A$ is invertible, which is not yet clear.
– Louis La Brocante Jul 19 '12 at 15:37

This is nice @Ed, yet I can't understand why you can't deduce that both $\,A,D\,$ are invertible after you showed $\,\det A\cdot \det D=1\neq 0\,$...?
– DonAntonio Jul 19 '12 at 17:02

Yeah, actually I just noticed that, too. I have updated the response to use this stronger argument. Group theoretic considerations are still used to show uniqueness -- probably not necessary, but illuminating.
– Arkamis Jul 19 '12 at 17:02

@DonAntonio if either $\det(A)$ or $\det(D)$ were equal to zero, then $\det(A)\cdot\det(D)$ could not be $1$ :).
– Louis La Brocante Jul 19 '12 at 18:10
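
For reference, the determinant argument discussed in these comments presumably runs along the following lines (a sketch based only on what the comments state, not necessarily Ed's exact wording):

$$\det(A)\cdot\det(D)=\det(AD)=\det(I)=1\neq 0,$$

so $\det(A)\neq 0$ and $\det(D)\neq 0$, hence both $A$ and $D$ are invertible; multiplying $AD=I$ on the left by $A^{-1}$ then gives $D=A^{-1}$, and in particular $DA=I$.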

@Ed This is a great proof, and by far the most straightforward, thus the most elegant, IMO. Thank you!
– Ryan Jul 19 '12 at 18:17

Added: Following the remarks in the comments, and since we cannot assume any of $\,D,\,A,\,DA,\,AD\,$ is invertible (and thus talking of a group is out of the question), I shall try an approach proposed by Rolando (assume the matrices are $\,n\times n\,$):

We're given $\,AD=I\,$. Either in terms of matrices or of linear operators, this means that $\,\dim\operatorname{Im}(AD)=n\,$ (the image is the full space, i.e. $\,AD\,$ is onto). Now we have a general claim whose proof is elementary:
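
Presumably the claim in question is the standard rank inequality for a product, which finishes the argument roughly as follows (a sketch under that assumption, not necessarily the original wording):

$$\operatorname{rank}(AD)\le\min\{\operatorname{rank}(A),\operatorname{rank}(D)\}\quad\Longrightarrow\quad n=\operatorname{rank}(AD)\le\operatorname{rank}(A)\le n,$$

so $\operatorname{rank}(A)=n$, i.e. $A$ is onto, hence bijective as an endomorphism of an $n$-dimensional space, hence invertible; multiplying $AD=I$ on the left by $A^{-1}$ then gives $D=A^{-1}$.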

I think you want to say "if $G$ is a group with unit $1$ and $x\in G$ is such that $xx=x$, then $x=1$.", or something like that.
– Matthew Pressland Jul 19 '12 at 15:32
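
For completeness, the usual one-line proof of that claim, assuming $x$ is invertible (which is exactly the sticking point raised in the next comments):

$$x=(x^{-1}x)\,x=x^{-1}(xx)=x^{-1}x=1.$$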

But to use that you must be aware that the matrix product is a group law for invertible matrices, so you assume that $DA$ is invertible, which is not clear, IMO.
– Louis La Brocante Jul 19 '12 at 15:35

Well, I do not need $\,DA\,$ to be invertible, but rather each of them to be, and you're right: the claim, which is painfully easy to prove, cannot be used unless we already know we're working in, at least, a monoid (no need of a group for this). Good point.
– DonAntonio Jul 19 '12 at 15:37

For any $b$ in your vector space, you have
$$AD(b)=I(b)=b$$
In particular $A$ is surjective, and $D$ is injective. Since an endomorphism of a finite-dimensional vector space is surjective if and only if it is bijective (by the rank-nullity theorem), $A$ is invertible.
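
Spelled out a little more (a sketch of the same reasoning, under the same finite-dimensionality assumption): surjectivity of $A$ holds because for any $\vec b$, $A(D\vec b)=(AD)\vec b=\vec b$; injectivity of $D$ holds because $D\vec b_1=D\vec b_2$ implies $\vec b_1=AD\vec b_1=AD\vec b_2=\vec b_2$. By rank-nullity,

$$\dim\operatorname{Im}(A)+\dim\ker(A)=n,\qquad \dim\operatorname{Im}(A)=n\;\Longrightarrow\;\ker(A)=\{\vec 0\},$$

so $A$ is also injective, hence bijective and invertible. Finally, $D=(A^{-1}A)D=A^{-1}(AD)=A^{-1}$, so $DA=I$ as well.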

Since $AD=I$, the range of $A$ is the full vector space: for any vector $\vec v$ you can find a vector $\vec w$ such that $A\vec w=\vec v$: just take $\vec w=D\vec v$. Denote the dimension of the vector space by $d$ (I assume we are in a finite-dimensional vector space, because otherwise I would not know how to define a square matrix). Thus the range of $A$ has dimension $d$.

Now assume there are two vectors $\vec v_1\neq\vec v_2$ such that $A\vec v_1=A\vec v_2$. Then by linearity, $A(\vec v_1-\vec v_2)=\vec 0$. Now you can take the vector $\vec b_1 := \vec v_1-\vec v_2$ (which by assumption is nonzero) and extend it with $d-1$ other vectors to a basis $\vec b_1,\dots,\vec b_d$ of the vector space. Now look at the image of an arbitrary vector $\vec w=\sum_{i=1}^d w_i\vec b_i$: since $\vec b_1$ is, by construction, mapped to $\vec 0$, every vector is mapped to $\sum_{i=2}^d w_i A\vec b_i$. But the space spanned by those vectors has dimension at most $d-1$, which contradicts the previously shown fact that the range of $A$ is the whole vector space. Thus the assumption that $\vec v_1\neq\vec v_2$ but $A\vec v_1=A\vec v_2$ cannot be true.
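
Tying this back to part (a) of the original question (a short sketch using only what was shown above): existence follows from

$$A(D\vec b)=(AD)\vec b=I\vec b=\vec b,$$

so $\vec x=D\vec b$ is a solution, and uniqueness follows because $A\vec x_1=A\vec x_2=\vec b$ forces $\vec x_1=\vec x_2$ by the injectivity argument just given. Hence $A\vec x=\vec b$ has exactly one solution for every $\vec b$, and $A$ is invertible.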