Gauss-Jordan Elimination

Row reduction is the process of performing row operations to transform a matrix into (reduced) row echelon form. In reduced row echelon form, each successive row of the matrix has fewer dependencies than the previous, so solving systems of equations is a much easier task. The idea behind row reduction is to convert the matrix into an "equivalent" version in order to simplify certain matrix computations. Its two main purposes are solving systems of linear equations and calculating the inverse of a matrix.

Carl Friedrich Gauss championed the use of row reduction, to the extent that it is commonly called Gaussian elimination. It was further popularized by Wilhelm Jordan, who attached his name to the process by which row reduction is used to compute matrix inverses, Gauss-Jordan elimination.

Explanation

A system of equations can be represented in a couple of different matrix forms. One way is to realize the system as the matrix multiplication of the coefficients in the system and the column vector of its variables. The square matrix is called the coefficient matrix because it consists of the coefficients of the variables in the system of equations.

An alternate representation, called an augmented matrix, is created by adjoining the coefficient matrix and the column of constants, separated by a vertical bar. The coefficient matrix is placed on the left of this vertical bar, while the constants from the right-hand side of each equation are placed on the right of the vertical bar.
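For instance, a two-equation system and its augmented matrix look like this (the specific numbers here are an illustrative example, not taken from the text):

```latex
\begin{cases} x + 2y = 5 \\ 3x + 4y = 6 \end{cases}
\qquad\longrightarrow\qquad
\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & 4 & 6 \end{array}\right]
```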

The matrices that represent these systems can be manipulated in such a way as to provide easy to read solutions. This manipulation is called row reduction. Row reduction techniques transform the matrix into reduced row echelon form without changing the solutions to the system.

The reduced row echelon form of a matrix \(A\) (denoted \( \text{rref}(A)\)) is a matrix of equal dimensions that satisfies:

The leftmost non-zero element in each row is \( 1 \). This element is known as a pivot.

Any column can have at most \( 1 \) pivot. If a column has a pivot, then the rest of the elements in the column will be \( 0 \).

For any two columns \(C_{1} \) and \(C_{2}\) that have pivots in rows \( R_{1} \) and \( R_{2}, \) respectively, if the pivot in \( C_{1} \) is to the left of the pivot in \( C_{2}\), then \( R_{1} \) is above \( R_{2} \). In other words, for any two pivots \( P_{1} \) and \(P_{2}\), if \( P_{2} \) is to the right of \( P_{1} \), then \( P_{2} \) is below \( P_{1}\).

Rows that consist of only zeroes are in the bottom of the matrix.
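For example, the following matrix satisfies all four conditions (an illustrative example, not one from the text): pivots appear in columns \(1, 2,\) and \(4\), each pivot column is zero elsewhere, pivots step down and to the right, and the zero row sits at the bottom.

```latex
\begin{bmatrix}
1 & 0 & 2 & 0 \\
0 & 1 & 3 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0
\end{bmatrix}
```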

To convert any matrix to its reduced row echelon form, Gauss-Jordan elimination is performed. There are three elementary row operations used to achieve reduced row echelon form: swapping two rows, multiplying a row by a non-zero constant, and adding a multiple of one row to another row.

The top left element is a pivot, so the rest of the elements in the first column must be 0. This can be done by subtracting the first row from the second row. Furthermore, the first row can be added to the third row to obtain the necessary 0s in the first column.
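The full procedure can be sketched in Python using the three elementary row operations above. This is a minimal implementation for illustration (the function name, partial-pivoting choice, and tolerance are assumptions, not from the text):

```python
import numpy as np

def rref(M, tol=1e-12):
    """Return the reduced row echelon form of M via Gauss-Jordan elimination."""
    A = np.array(M, dtype=float)
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Choose the row with the largest entry in this column (partial pivoting).
        candidate = np.argmax(np.abs(A[pivot_row:, col])) + pivot_row
        if abs(A[candidate, col]) < tol:
            continue  # no pivot in this column
        # Operation 1: swap rows to bring the pivot into position.
        A[[pivot_row, candidate]] = A[[candidate, pivot_row]]
        # Operation 2: scale the pivot row so the pivot equals 1.
        A[pivot_row] /= A[pivot_row, col]
        # Operation 3: add multiples of the pivot row to zero out the column.
        for r in range(rows):
            if r != pivot_row:
                A[r] -= A[r, col] * A[pivot_row]
        pivot_row += 1
    return A
```

Partial pivoting (swapping in the largest available entry) is not required for exact arithmetic, but it keeps the computation numerically stable with floating-point numbers.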

Given that \(a,b,c\) and \(d\) are numbers satisfying the system of equations above, find \( \dfrac{b+c}{a-b} \).

Computing Inverses

Given an equation \(A x = b\) with a unique solution for \(x\), it is possible to solve the equation by calculating \(A^{-1}\) and then evaluating \(A^{-1}A x = x = A^{-1} b.\) However, it takes much more time to calculate \(A^{-1}\) (and then multiply by \(b\)!) than just to solve for \(x\). For that reason, \(A^{-1}\) is only worth computing when it is needed in its own right, perhaps to solve a family of equations with the same coefficient matrix.

Perform row reduction on the augmented matrix \([A \mid I]\) to obtain a matrix of the form \([I \mid B]\). Then \(B = A^{-1}\) provides a solution to the inverse problem.
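The \([A \mid I]\) procedure can be sketched as follows. This is a minimal illustration (the function name and singularity test are assumptions, not from the text):

```python
import numpy as np

def inverse_via_gauss_jordan(A, tol=1e-12):
    """Invert a square matrix A by row-reducing the augmented matrix [A | I]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])  # build [A | I]
    for col in range(n):
        # Partial pivoting: pick the largest entry in this column.
        pivot = np.argmax(np.abs(M[col:, col])) + col
        if abs(M[pivot, col]) < tol:
            raise ValueError("matrix is singular; no inverse exists")
        M[[col, pivot]] = M[[pivot, col]]   # swap rows
        M[col] /= M[col, col]               # scale pivot row to make pivot 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]  # zero out the rest of the column
    return M[:, n:]  # the left half is now I, so the right half is A^{-1}
```

The same row operations that turn the left half into \(I\) accumulate, on the right half, exactly the product of elementary matrices equal to \(A^{-1}\).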