$A$ is a square matrix of size $n$ and $b$ is a vector in $\mathbb{R}^n$. For every $k = 1, 2, \dots, n$, $A_k$ is the matrix obtained from $A$ by switching the $k$-th column with the vector $b$. Given: $\det A = 0$.

Prove that if $\det A_k \neq 0$ for some $k$, then there is no solution to $Ax = b$.
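Equivalently, in contrapositive form (my restatement, for clarity):

$$\det A = 0 \ \text{and}\ Ax = b \ \text{solvable} \implies \det A_k = 0 \ \text{for every } k = 1, \dots, n.$$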

This is what the question means if it was unclear:

$$A = \left(\, a_1 \;\; a_2 \;\; \cdots \;\; a_n \,\right)$$

$$b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$$

I simply replace the first column of $A$ with the vector $b$:

$$A_1 = \left(\, b \;\; a_2 \;\; \cdots \;\; a_n \,\right)$$
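Here's a concrete NumPy sketch of the setup; these matrices are made up by me for illustration, not the ones from the original post:

    import numpy as np

    # A made-up singular 3x3 matrix: third column = first column + second column
    A = np.array([[1., 2., 3.],
                  [4., 5., 9.],
                  [7., 8., 15.]])
    b = np.array([1., 0., 0.])

    print(np.linalg.det(A))   # ~0: A is singular

    A1 = A.copy()
    A1[:, 0] = b              # replace the first column of A with b
    print(np.linalg.det(A1))  # 3.0, nonzero: the theorem predicts Ax = b has no solution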

Now that it's all cleared up, how do I solve this? It has something to do with $\det A = 0$, which makes the matrix singular... and most likely it has something to do with all those properties of an invertible matrix.

And the question is telling us that there is only a solution to $Ax = b$ if $A_1$ is singular like $A$! Why is that so? Why can't $A_1$ be invertible?

Hmm. I think you're right. I missed that detail. Still, I bet you could use the derivation of Cramer's Rule (I mis-spelled it Kramer) to get the proof you need. The proof of Cramer's Rule, I'm betting, has a construction you could use to prove your point. For example, instead of having the denominator $\det A$ be zero, and having a problem there, do all your constructions with that determinant on the same side of the equation as $x_k$, i.e. work with $x_k \det A = \det A_k$ rather than $x_k = \det A_k / \det A$. Perhaps you can get a contradiction or something.
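To spell that identity out (this derivation is my own filling-in of the standard Cramer's Rule step, not something from the original post): if $x$ solves $Ax = b$, then $b = \sum_{i=1}^n x_i a_i$, where $a_i$ are the columns of $A$. Substituting this into the $k$-th column of $A_k$ and expanding by multilinearity of the determinant gives

$$\det A_k = \det(a_1, \dots, \underbrace{b}_{k}, \dots, a_n) = \sum_{i=1}^n x_i \det(a_1, \dots, \underbrace{a_i}_{k}, \dots, a_n) = x_k \det A,$$

since every term with $i \neq k$ has a repeated column and vanishes. So if a solution exists and $\det A = 0$, then $\det A_k = 0$ for every $k$, contradicting $\det A_k \neq 0$.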

Ya, that's what I'm thinking. So I'll try it out. But if anyone else has an idea, I'd greatly appreciate it!!

Well, that second example shows that the theorem you are trying to prove is not true! I recommend you go back and re-read the original problem. If $Ax = b$ is a solvable matrix equation (system of equations) in which none of the solutions is 0, then neither $|A|$ nor $|A_k|$, for any $k$, is 0, yet there are solutions.

I don't understand what you just wrote. The second problem doesn't prove or disprove anything. I was just showing different variations of problems where I change columns by a vector. I was just throwing out ideas that I thought were in the right direction, hoping someone could finish the idea for me.

To disprove the theorem, you would have to exhibit a case where $\det A = 0$, $\det A_k \neq 0$, and yet there is a solution to $Ax = b$. jayshizwiz has not provided such an example. In his first example, both the hypothesis and the conclusion of the theorem are satisfied. In his second example, the hypothesis of the theorem is not satisfied, because $\det A \neq 0$.

jayshizwiz,

I need to think about this a bit more. The answer certainly isn't popping out at me.

I'm thinking along the lines of using linear independence (abbr. LI) and linear dependence (abbr. LD). Let $a_k$ be the $k$th column of matrix $A$. Since $\det A_1 \neq 0$, we know that columns 2 through $n$ are LI, since $\{b, a_2, \dots, a_n\}$ is LI. Since $\det A = 0$, and columns 2 through $n$ are LI, it follows that $a_1$ can be written as a linear combination of columns 2 through $n$, since $\{a_1, a_2, \dots, a_n\}$ is LD. We'll say that

$$a_1 = c_2 a_2 + c_3 a_3 + \cdots + c_n a_n,$$

where $c_2, \dots, c_n$ are scalars.
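As a numerical aside, using my made-up $A$ from above, you can recover those coefficients with a least-squares solve:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [4., 5., 9.],
                  [7., 8., 15.]])

    # Coefficients c_2, ..., c_n expressing column 1 in terms of columns 2..n
    c, *_ = np.linalg.lstsq(A[:, 1:], A[:, 0], rcond=None)
    print(c)  # [-1.  1.]: a_1 = -a_2 + a_3 for this example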

I'm thinking about elementary column operations. You might be able to manipulate those to get what you need.

Elementary column operations change the underlying system in a predictable way. For example, suppose I have the system

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 5 \\ 11 \end{pmatrix}.$$

The solution is $x = (1, 2)^T$.

I perform the elementary column operation $C_1 \to C_1 + 2C_2$. The system becomes

$$\begin{pmatrix} 5 & 2 \\ 11 & 4 \end{pmatrix} \begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} 5 \\ 11 \end{pmatrix},$$

with solution $x' = (1, 0)^T$. But now $x_2' = x_2 - 2x_1$, which is precisely the column operation I performed, but with the indices switched. I think you could prove that, in general, if I perform the elementary column operation $C_i \to C_i + cC_j$, then with the new solution vector $x'$, I can say that $x_j' = x_j - cx_i$ and $x_k' = x_k$ for $k \neq j$.
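A quick NumPy check of that index-switching rule, with the same numbers (the example matrix is mine, chosen just for illustration):

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    b = np.array([5., 11.])

    x = np.linalg.solve(A, b)        # [1. 2.]

    Ap = A.copy()
    Ap[:, 0] += 2 * Ap[:, 1]         # column operation C_1 -> C_1 + 2*C_2

    xp = np.linalg.solve(Ap, b)      # [1. 0.]
    print(xp[1], x[1] - 2 * x[0])    # both 0.0: x_2' = x_2 - 2*x_1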

I think your proof could go something like this:

We can achieve the equation

$$\left(\, 0 \;\; a_2 \;\; \cdots \;\; a_n \,\right) x' = b$$

as a series of elementary column operations ($C_1 \to C_1 - c_2 C_2 - \cdots - c_n C_n$, using the coefficients from above), which will, incidentally, leave the value of $\det A$ unchanged. Since the remainder of the columns (2 through $n$) are linearly independent, we can row reduce this equation (as an augmented matrix) such that the last row has all zeros in the non-augmented part. You would then need to show that there is a corresponding nonzero entry in the vector $b$, after the same row operations, for that row, thus showing that there is no solution.
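Numerically, with my made-up singular $A$, the inconsistency shows up as a nonzero least-squares residual; np.linalg.lstsq finds the best approximate solution, and if $Ax - b$ can't be driven to zero, there is no exact solution:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [4., 5., 9.],
                  [7., 8., 15.]])
    b = np.array([1., 0., 0.])

    x, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    print(rank)                       # 2 < 3: A is singular
    print(np.linalg.norm(A @ x - b))  # nonzero: b is outside the column space, so no solution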

You can do this by sleight-of-hand: construct the matrix

$$B = \left(\, a_2 \;\; a_3 \;\; \cdots \;\; a_n \;\; b \,\right).$$

You know this matrix is invertible, because its determinant is $(-1)^{n-1}$ times the determinant of $A_1$ (you'd have flipped two adjacent columns $n-1$ times, which flips the sign of the determinant $n-1$ times).
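A quick NumPy sanity check of that sign claim, on my example with $n = 3$:

    import numpy as np

    # A_1 = (b  a_2  a_3) from my made-up example
    A1 = np.array([[1., 2., 3.],
                   [0., 5., 9.],
                   [0., 8., 15.]])

    B = A1[:, [1, 2, 0]]  # move b to the last column: n-1 = 2 adjacent swaps

    print(np.linalg.det(A1))                     # 3.0
    print(np.linalg.det(B))                      # 3.0 = (-1)^(n-1) * det(A_1) with n = 3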

Because this matrix is invertible, you can perform elementary row operations on it such that the matrix is upper triangular with 1's on the main diagonal. Now you perform an inverse sleight-of-hand and put this upper triangular matrix back into the augmented matrix $\left(\, 0 \;\; a_2 \;\; \cdots \;\; a_n \;\mid\; b \,\right)$: the same row operations turn its last row into $(0 \;\; 0 \;\; \cdots \;\; 0 \mid 1)$, which reads $0 = 1$, so there is no solution.
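To see that last step concretely, here is my example again, row-reduced with SymPy; the last row of the reduced augmented matrix comes out as $(0, 0, 0 \mid 1)$, i.e. the impossible equation $0 = 1$:

    from sympy import Matrix

    # Augmented matrix (0 | a_2  a_3 | b) after the column operations
    M = Matrix([[0, 2, 3, 1],
                [0, 5, 9, 0],
                [0, 8, 15, 0]])

    R, pivots = M.rref()
    print(R)  # last row is [0, 0, 0, 1] -> 0 = 1, so the system has no solution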