Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the appendix entitled GNU Free Documentation License.

Cofactors

In practice, a computer finds the determinant of a square matrix from the pivots after the matrix is reduced to upper triangular form
by Gaussian elimination. Historically, however, the determinant was defined through cofactor expansion.
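The pivot-based computation can be sketched as follows. This is a minimal illustration in Python with NumPy (the function name is hypothetical, not part of any library): after elimination with partial pivoting, the determinant is the product of the diagonal pivots, with the sign flipped once for each row swap.

```python
import numpy as np

def det_from_pivots(A):
    """Determinant via Gaussian elimination: the product of the pivots,
    with the sign flipped once per row interchange."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))    # partial pivoting
        if U[p, k] == 0.0:
            return 0.0                          # singular matrix
        if p != k:
            U[[k, p]] = U[[p, k]]               # row swap flips the sign
            sign = -sign
        # eliminate entries below the pivot
        U[k+1:, k:] -= np.outer(U[k+1:, k] / U[k, k], U[k, k:])
    return sign * np.prod(np.diag(U))

A = [[2, 1, 1], [4, -6, 0], [-2, 7, 2]]
d = det_from_pivots(A)      # agrees with np.linalg.det(A)
```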

A minor of a matrix \( {\bf A} = \left[ a_{i,j} \right] \) is the determinant of some
smaller square matrix, cut down from A by removing one or more of its rows and columns.
The minor of entry \( a_{i,j} \) of a square n-by-n matrix A is denoted by
\( M_{i,j} \) and is defined to be the determinant of the \( (n-1) \times (n-1) \) submatrix
that remains after the i-th row and j-th column are deleted from A. The number
\( (-1)^{i+j} M_{i,j} \) is denoted by \( C_{i,j} \) and is
called the cofactor of entry \( a_{i,j} . \)
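These definitions translate directly into code. The following sketch (with hypothetical helper names, using NumPy) computes a minor by deleting a row and a column, attaches the sign \( (-1)^{i+j} \) to obtain the cofactor, and checks that cofactor expansion along the first row reproduces the determinant:

```python
import numpy as np

def minor(A, i, j):
    """M_{i,j}: determinant of A with row i and column j deleted.
    Indices here are 0-based, unlike the 1-based indices in the text."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """C_{i,j} = (-1)**(i+j) * M_{i,j}."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]], dtype=float)
# Cofactor expansion along the first row reproduces det(A):
d = sum(A[0, j] * cofactor(A, 0, j) for j in range(3))
```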

The term minor is apparently due to the English mathematician James Sylvester, who used it in an 1850 paper.

Example: The span of the empty set \( \varnothing \) consists of a single element, the zero vector 0.
Therefore, \( \varnothing \) is linearly independent and is a basis for the trivial vector space
whose only element is zero. Its dimension is zero.
■

Example: Consider the set of all real \( m \times n \)
matrices, and let \( {\bf M}_{i,j} \) denote the matrix whose only nonzero entry is a 1 in
the i-th row and j-th column. Then the set \( \left\{ {\bf M}_{i,j} \ : \ 1 \le i \le m , \ 1 \le j \le n \right\} \)
is a basis for the vector space of all such real matrices. Its dimension is \( mn . \)
■

Example: The set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n \right\} \)
forms a basis for the vector space of all polynomials of degree at most n. Its dimension is \( n+1 . \)
■

Example: The infinite set of monomials \( \left\{ 1, x, x^2 , \ldots , x^n , \ldots \right\} \)
forms a basis for the vector space of all polynomials.
■

Theorem: Let V be a vector space and
\( \beta = \left\{ {\bf u}_1 , {\bf u}_2 , \ldots , {\bf u}_n \right\} \) be a subset of
V. Then β is a basis for V if and only if each vector v in V can be uniquely
decomposed into a linear combination of vectors in β, that is, can be uniquely expressed in the form

\[ {\bf v} = a_1 {\bf u}_1 + a_2 {\bf u}_2 + \cdots + a_n {\bf u}_n \]

for unique scalars \( a_1 , a_2 , \ldots , a_n . \)
■
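The unique coefficients are the coordinates of v relative to the basis β. For a basis of \( \mathbb{R}^3 \) they can be found by solving one linear system; the sketch below (with an arbitrarily chosen basis matrix, using NumPy) illustrates this:

```python
import numpy as np

# The columns of B are an arbitrarily chosen basis of R^3
# (they are independent, since det(B) = 2 is nonzero).
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
v = np.array([2.0, 3.0, 3.0])

# Because B is a basis, the coordinate vector a with B @ a = v
# exists and is unique.
a = np.linalg.solve(B, v)
```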

Theorem: Let S be a linearly independent subset of a vector space V,
and let v be an element of V that is not in S. Then
\( S \cup \{ {\bf v} \} \) is linearly dependent if and only if v
belongs to the span of the set S.
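This theorem can be tested numerically: a list of vectors is linearly dependent exactly when the rank of the matrix they form is smaller than the number of vectors. A sketch with hypothetical data (using NumPy):

```python
import numpy as np

def is_dependent(vectors):
    """True iff the vectors are linearly dependent, i.e. the rank of
    the matrix with these columns is less than the number of vectors."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) < len(vectors)

S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]  # independent
v_in  = np.array([2.0, 3.0, 5.0])   # = 2*S[0] + 3*S[1], lies in span(S)
v_out = np.array([0.0, 0.0, 1.0])   # does not lie in span(S)

# Appending a vector from span(S) makes the set dependent;
# appending a vector outside span(S) keeps it independent.
dep_in = is_dependent(S + [v_in])
dep_out = is_dependent(S + [v_out])
```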

Theorem: If a vector space V is generated by a finite set S, then
some subset of S is a basis for V. ■

A vector space is called finite-dimensional if it has a basis consisting of a finite
number of elements. Every basis for such a space V contains the same number of elements; this common
number is called the dimension of V and is denoted by dim(V). A vector space that is not finite-dimensional is called
infinite-dimensional.

The next example demonstrates how Mathematica can determine a basis, or a set of linearly independent
vectors, from a given set. Note that a basis is not unique: even changing the order of the input vectors
can lead the software to return a different set of linearly independent vectors.
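As a language-agnostic sketch of the same idea (in Python with NumPy rather than Mathematica, with hypothetical data), a basis can be extracted from a spanning set greedily: keep each vector only if it raises the rank, i.e. enlarges the span of the vectors kept so far. Reordering the input list can produce a different, equally valid basis.

```python
import numpy as np

# A spanning set with redundancy: the third vector is the sum of the
# first two, so at most three of the four are independent.
vectors = [[1, 0, 1], [0, 1, 1], [1, 1, 2], [0, 0, 1]]

basis = []
for v in vectors:
    candidate = basis + [np.array(v, dtype=float)]
    # keep v only if it enlarges the span, i.e. raises the rank
    if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
        basis.append(np.array(v, dtype=float))
# basis now holds an independent subset of the original vectors
```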