As an augmented matrix $$\left(\begin{array}{c|c}M & V\end{array}\right)\, .$$ Here our plan would be to perform row operations until the system looks like $$\left(\begin{array}{c|c}I & M^{-1}V\end{array}\right)\, ,$$ (assuming that \(M^{-1}\) exists).

As a matrix equation $$MX=V\, ,$$ which we would solve by finding \(M^{-1}\) (again, if it exists), so that $$X=M^{-1}V\, .$$

As a linear transformation $$L:\mathbb{R}^{n}\longrightarrow \mathbb{R}^{n}$$ via $$\mathbb{R}^{n}\ni X \longmapsto MX \in \mathbb{R}^{n}\, .$$ In this case we have to study the equation \(L(X)=V\) because \(V\in \mathbb{R}^{n}\).

Let's focus on the first two methods. In particular, we want to think about how the augmented matrix method can give information about finding \(M^{-1}\), and how it can be used for handling determinants.

The main idea is that row operations change the augmented matrix, but we also know how to change a matrix \(M\) by multiplying it by some other matrix \(E\), so that \(M\to EM\). In particular, can we find ``elementary matrices'' that perform row operations?

Once we find these elementary matrices, it is \(\textit{very important}\) to ask how they affect the determinant; you can start thinking about that on your own right now.

To finish off the video, here is how all these elementary matrices work for a \(2\times 2\) example. Let's take
$$
M=\begin{pmatrix}a&b\\c&d\end{pmatrix}\, .
$$
A good thing to think about is what happens to \(\det M = ad-bc\) under the operations below.
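For concreteness, here is a sketch (using the \(E\), \(R\), \(S\) notation for row swap, row scaling, and row addition) of what these elementary matrices look like in the \(2\times 2\) case:
$$
E^{1}_{2}=\begin{pmatrix}0&1\\1&0\end{pmatrix}\, ,\qquad
R^{1}(\lambda)=\begin{pmatrix}\lambda&0\\0&1\end{pmatrix}\, ,\qquad
S^{1}_{2}(\lambda)=\begin{pmatrix}1&0\\\lambda&1\end{pmatrix}\, ,
$$
so that, for example,
$$
E^{1}_{2}M=\begin{pmatrix}c&d\\a&b\end{pmatrix}\, ,\qquad
R^{1}(\lambda)M=\begin{pmatrix}\lambda a&\lambda b\\c&d\end{pmatrix}\, ,\qquad
S^{1}_{2}(\lambda)M=\begin{pmatrix}a&b\\c+\lambda a&d+\lambda b\end{pmatrix}\, .
$$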

This video will show you how to calculate determinants of elementary matrices. First remember that the job of an elementary row matrix is to perform row operations, so that if \(E\) is an elementary row matrix and \(M\) some given matrix, $$EM$$ is the matrix \(M\) with a row operation performed on it.

The next thing to remember is that the determinant of the identity is \(1\). Moreover, we also know what row operations do to determinants:

Row swap \(E^{i}_{j}\): flips the sign of the determinant.

Scalar multiplication \(R^{i}(\lambda)\): multiplying a row by \(\lambda\) multiplies the determinant by \(\lambda\).

Row addition \(S^{i}_{j}(\lambda)\): adding some amount of one row to another does not change the determinant.
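The three rules above can be checked numerically. Here is a minimal sketch; the matrix and the helper name `det2` are our own choices for illustration, not from the text.

```python
# Check what each elementary row operation does to a 2x2 determinant.

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

M = [[1, 2], [3, 4]]                     # det M = 1*4 - 2*3 = -2

swap = [M[1], M[0]]                      # row swap: R1 <-> R2
scaled = [[5 * x for x in M[0]], M[1]]   # R1 -> 5*R1
added = [M[0], [M[1][i] + 7 * M[0][i] for i in range(2)]]  # R2 -> R2 + 7*R1

print(det2(M))       # -2
print(det2(swap))    # 2   (sign flipped)
print(det2(scaled))  # -10 (multiplied by 5)
print(det2(added))   # -2  (unchanged)
```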

Let's figure out the relationship between determinants and invertibility. If we have a system of equations \(Mx=b\) and we have the inverse \(M^{-1}\), then multiplying both sides by \(M^{-1}\) gives \(x = M^{-1}Mx= M^{-1}b\). If the inverse exists, we can solve for \(x\) and get a solution that looks like a point.
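In the \(2\times 2\) case this can be made fully explicit using the closed-form inverse \(M^{-1}=\frac{1}{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}\). A minimal sketch; the function name `solve2` is our own.

```python
# Solve a 2x2 system M x = b via the explicit inverse formula.

def solve2(M, b):
    (a, bb), (c, d) = M
    det = a * d - bb * c
    if det == 0:
        raise ValueError("M is not invertible")
    # inverse of [[a, bb], [c, d]] is (1/det) * [[d, -bb], [-c, a]]
    inv = [[d / det, -bb / det], [-c / det, a / det]]
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

print(solve2([[1, 2], [3, 4]], [5, 6]))  # [-4.0, 4.5]
```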

So what could go wrong when we want to solve a system of equations and get a solution that looks like a point? Something would go wrong if we didn't have enough equations; for example, if we were just given
\[
x+y = 1
\]
or maybe, to make this a square matrix \(M\) we could write this as
\begin{align*}
x+y &= 1\\
0 &= 0
\end{align*}
The matrix for this would be
\(M =\begin{bmatrix}
1 & 1\\
0& 0
\end{bmatrix}\)
and \(\det(M) = 0\). When we compute the determinant, this row of all zeros gets multiplied into every term. If instead we were given redundant equations

\begin{align*}
x+y &= 1\\
2x+2y &= 2
\end{align*}
The matrix for this would be
\(M =\begin{bmatrix}
1 & 1\\
2& 2
\end{bmatrix}\) and \(\det(M) = 0\). But we know that with an elementary row operation, we could replace the second row with a row of all zeros. Somehow the determinant is able to detect that there is only one equation here. Even if we had a contradictory set of equations such as
\begin{align*}
x+y &= 1\\
2x+2y &= 0,
\end{align*}
where it is not possible for both of these equations to be true, the matrix \(M\) is still the same, and still has a determinant zero.
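A quick numerical check of the two \(2\times 2\) cases above; the helper name `det2` is our own, not from the text.

```python
# Both "degenerate" coefficient matrices above have determinant zero:
# the missing-equation case and the redundant/contradictory case.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

missing = [[1, 1], [0, 0]]     # x + y = 1 padded with 0 = 0
redundant = [[1, 1], [2, 2]]   # x + y = 1 together with 2x + 2y = 2 (or = 0)

print(det2(missing))    # 0
print(det2(redundant))  # 0
```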

Let's look at a three-by-three example, where the third equation is the sum of the first two equations.

If we were trying to find the inverse to this matrix using elementary matrices
$$ \left( \begin{array}{ccc | ccc}
1 & 1 &1 & 1 & 0 & 0\\
0 & 1 & 1 & 0 & 1 & 0 \\
1 & 2 & 2 & 0 & 0 & 1
\end{array} \right)
\sim
\left( \begin{array}{ccc | rrr}
1 & 1 &1 & 1 & 0 & 0\\
0 & 1 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 & -1 & 1
\end{array} \right)
$$
And we would be stuck here. The last row of all zeros cannot be converted into the bottom row of a \(3 \times 3\) identity matrix. This matrix has no inverse, and the row of all zeros ensures that the determinant will be zero. It can be difficult to see when one of the rows of a matrix is a linear combination of the others; what makes the determinant a useful tool is that with this reasonably simple computation we can find out whether the matrix is invertible, and whether the system will have a solution that is a single point or column vector.
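The two row operations that get us stuck can be replayed directly; a minimal sketch on the coefficient part of the matrix above.

```python
# Row-reduce the 3x3 coefficient matrix from the example and watch
# a zero row appear, killing any hope of reaching the identity.

M = [[1.0, 1.0, 1.0],
     [0.0, 1.0, 1.0],
     [1.0, 2.0, 2.0]]

# R3 -> R3 - R1
M[2] = [M[2][i] - M[0][i] for i in range(3)]
# R3 -> R3 - R2
M[2] = [M[2][i] - M[1][i] for i in range(3)]

print(M[2])  # [0.0, 0.0, 0.0] -- no pivot is possible in row 3
```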

Alternative Proof

Here we will prove more directly that the determinant of a product of matrices is the product of their determinants. First recall that for a matrix \(M\) with rows \(r_{i}\), if \(M^{\prime}\) is the matrix with rows \(r^{\prime}_{j} = r_{j} + \lambda r_{i}\) for \(j \neq i\) and \(r^{\prime}_{i} = r_{i}\), then \(\det(M) = \det(M^{\prime})\). Essentially, \(M^{\prime}\) is \(M\) multiplied by the elementary row-addition matrices \(S^{i}_{j}(\lambda)\). Hence we can create an upper-triangular matrix \(U\) such that \(\det(M) = \det(U)\) by first using the first row to set \(m_{i}^{1} \mapsto 0\) for all \(i > 1\), then iteratively (increasing \(k\) by 1 each time) for fixed \(k\) using the \(k\)-th row to set \(m_{i}^{k} \mapsto 0\) for all \(i > k\).
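The reduction just described also gives a practical way to compute determinants: row additions leave the determinant unchanged, a swap flips its sign, and the determinant of the resulting triangular matrix is the product of its diagonal entries. A sketch under those rules; the function name is our own.

```python
# Determinant via reduction to upper-triangular form, using only
# row-addition operations (det unchanged) and row swaps (sign flip).

def det_by_elimination(m):
    m = [row[:] for row in m]   # work on a copy
    n = len(m)
    sign = 1.0
    for k in range(n):
        # find a nonzero pivot in column k, swapping rows if needed
        pivot = next((i for i in range(k, n) if m[i][k] != 0), None)
        if pivot is None:
            return 0.0          # no pivot in this column: det = 0
        if pivot != k:
            m[k], m[pivot] = m[pivot], m[k]
            sign = -sign
        # clear the entries below the pivot with S-type row additions
        for i in range(k + 1, n):
            factor = m[i][k] / m[k][k]
            m[i] = [m[i][j] - factor * m[k][j] for j in range(n)]
    det = sign
    for k in range(n):
        det *= m[k][k]          # product of the diagonal of U
    return det

print(det_by_elimination([[1, 2, 3], [3, 1, 2], [0, 0, 1]]))  # -5.0
```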

Now we can look at three by three matrices and see a few ways to compute the determinant. We have a similar pattern for \(3\times 3\) matrices.
Consider the example
\[
{\rm det}
\begin{pmatrix}
1 & 2 & 3 \\
3 & 1 & 2 \\
0 & 0 & 1 \\
\end{pmatrix}
= ( (1\cdot 1\cdot 1)+ (2\cdot 2\cdot 0) + (3\cdot 3\cdot 0)) - ((3\cdot 1\cdot 0)+ (1\cdot 2\cdot 0) + (2\cdot 3\cdot 1)) = -5
\]
We can draw a picture with similar diagonals to find the terms that will be positive and the terms that will be negative.

Another way to compute the determinant of a matrix is to use a recursive formula. Here I take each entry of the first row and multiply it by its cofactor: the determinant of the corresponding minor, with an alternating sign. Then we can use the formula for a two-by-two determinant to compute the determinants of the minors.
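The recursive formula can be written out directly; a minimal sketch, expanding along the first row (the function name is our own).

```python
# Cofactor expansion along the first row, applied recursively until
# the base case of a 1x1 matrix.

def det(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # cofactor signs alternate +, -, +, ... along the row
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [3, 1, 2], [0, 0, 1]]))  # -5
```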

Here we have taken the subspace \(W\) to be a plane through the origin and \(U\) to be a line through the origin. The hint now is to think about what happens when you add a vector \(u\in U\) to a vector \(w\in W\). Does this live in the union \(U\cup W\)?

For the second part, we take a more theoretical approach. Let's suppose that \(v\in U\cap W\) and \(v'\in U\cap W\). This implies
$$
v\in U \quad \mbox{and} \quad v'\in U\, .
$$
So, since \(U\) is a subspace and all subspaces are vector spaces, we know that the linear combination
$$
\alpha v+\beta v'\in U\, .
$$
Now repeat the same logic for \(W\) and you will be nearly done.
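To sketch that remaining step: since \(v, v'\in W\) as well, and \(W\) is also a subspace,
$$
\alpha v+\beta v'\in W\, ,
$$
so \(\alpha v+\beta v'\) lies in both \(U\) and \(W\), that is, in \(U\cap W\). Hence the intersection is closed under linear combinations and is itself a subspace.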
