Chapter 2

Vector Spaces

2.1 INTRODUCTION

In various practical and theoretical problems, we come across a set V whose elements may be vectors in two or three dimensions or real-valued functions, which can be added and multiplied by a constant (number) in a natural way, the result being again an element of V. Such concrete situations suggest the concept of a vector space. Vector spaces play an important role in many branches of mathematics and physics. In analysis, infinite dimensional vector spaces (in fact, normed vector spaces) are more important than finite dimensional vector spaces, while in linear algebra finite dimensional vector spaces are used, because they are simple and linear transformations on them can be represented by matrices. This chapter mainly deals with finite dimensional vector spaces.

2.2 DEFINITION AND EXAMPLES

VECTOR SPACE. An algebraic structure (V, F, ⊕, ⊙) consisting of a non-void set V, a field F, a binary operation ⊕ on V and an external mapping ⊙ : F × V → V associating each a ∈ F, v ∈ V to a unique element a ⊙ v ∈ V is said to be a vector space over the field F, if the following axioms are satisfied:

V-1 (V, ⊕) is an abelian group.

V-2 For all u, v ∈ V and a, b ∈ F, we have
  (i) a ⊙ (u ⊕ v) = (a ⊙ u) ⊕ (a ⊙ v),
  (ii) (a + b) ⊙ u = (a ⊙ u) ⊕ (b ⊙ u),
  (iii) (ab) ⊙ u = a ⊙ (b ⊙ u),
  (iv) 1 ⊙ u = u.

The elements of V are called vectors and those of F are called scalars. The mapping ⊙ is called scalar multiplication and the binary operation ⊕ is termed vector addition.
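The axioms V-2(i)-(iv) can be spot-checked numerically. Below is an illustrative sketch (not part of the text) for V = Q² over F = Q, with vector addition and scalar multiplication defined componentwise; a finite check only illustrates the axioms, it does not prove them.

```python
from fractions import Fraction

def add(u, v):          # vector addition ⊕ on Q^2, componentwise
    return (u[0] + v[0], u[1] + v[1])

def smul(a, u):         # scalar multiplication ⊙ : F × V → V
    return (a * u[0], a * u[1])

def check_axioms(a, b, u, v):
    ok = True
    ok &= smul(a, add(u, v)) == add(smul(a, u), smul(a, v))   # V-2(i)
    ok &= smul(a + b, u) == add(smul(a, u), smul(b, u))       # V-2(ii)
    ok &= smul(a * b, u) == smul(a, smul(b, u))               # V-2(iii)
    ok &= smul(Fraction(1), u) == u                           # V-2(iv)
    return ok

a, b = Fraction(2, 3), Fraction(-5, 7)
u, v = (Fraction(1), Fraction(4)), (Fraction(-2), Fraction(3))
print(check_axioms(a, b, u, v))   # True for these sample values
```

Exact rational arithmetic (Fraction) is used so the equality tests are not disturbed by floating-point rounding.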

115116 • Theory and Problems of Linear Algebra

If there is no danger of any confusion we shall say V is a vector space over a field F, whenever the algebraic structure (V, F, ⊕, ⊙) is a vector space. Thus, whenever we say that V is a vector space over a field F, it would always mean that (V, ⊕) is an abelian group and ⊙ : F × V → V is a mapping such that V-2 (i)-(iv) are satisfied.

V is called a real vector space if F = R (field of real numbers), and a complex vector space if F = C (field of complex numbers).

REMARK-1 V is called a left or a right vector space according as the elements of a skew-field F are multiplied on the left or right of vectors in V. But, in case of a field these two concepts coincide.

REMARK-2 The symbol '+' has been used to denote the addition in the field F. For the sake of convenience, in future, we shall use the same symbol '+' for vector addition ⊕ and addition in the field F. But, the context would always make it clear as to which operation is meant. Similarly, multiplication in the field F and scalar multiplication will be denoted by the same symbol '·'.

REMARK-3 In this chapter and in future also, we will be dealing with two types of zeros. One will be the zero of the additive abelian group V, which will be known as the vector zero, and the other will be the zero element of the field F, which will be known as the scalar zero. We will use the symbol 0V to denote the zero vector and 0 to denote the zero scalar.

REMARK-4 Since (V, ⊕) is an abelian group, for any u, v, w ∈ V the following properties hold:
  (i) u ⊕ v = u ⊕ w ⇒ v = w

Hence, F is a vector space over S.

Since S is an arbitrary subfield of F, every field is a vector space over any of its subfields.

REMARK. The converse of the above example is not true, i.e. a subfield is not necessarily a vector space over its overfield. For example, R is not a vector space over C, because the product of a real number and a complex number is not necessarily a real number.

EXAMPLE-2 R is a vector space over Q, because Q is a subﬁeld of R.

EXAMPLE-3 C is a vector space over R, because R is a subﬁeld of C.

EXAMPLE-4 Every ﬁeld is a vector space over itself.

SOLUTION Since every field is a subfield of itself, the result follows directly from Example 1.

REMARK. In order to decide whether a given non-void set V forms a vector space over a field F, we must proceed as follows:
  (i) Define a binary operation on V and call it vector addition.
  (ii) Define scalar multiplication on V, which associates each scalar in F and each vector in V to a unique vector in V.
  (iii) Define equality of elements (vectors) in V.
  (iv) Check whether V-1 and V-2 are satisfied relative to the vector addition and scalar multiplication thus defined.

EXAMPLE-5 The set F^(m×n) of all m × n matrices over a field F is a vector space over F with respect to the addition of matrices as vector addition and multiplication of a matrix by a scalar as scalar multiplication.

EXAMPLE-6 The set R^(m×n) of all m × n matrices over the field R of real numbers is a vector space over R with respect to the addition of matrices as vector addition and multiplication of a matrix by a scalar as scalar multiplication.

SOLUTION Proceed parallel to Example 5.

REMARK. The set Q^(m×n) of all m × n matrices over the field Q of rational numbers is not a vector space over the field R of real numbers, because the product of a matrix in Q^(m×n) and a real number need not be in Q^(m×n). For example, if √2 ∈ R and A ∈ Q^(m×n) is a non-zero matrix, then √2 A ∉ Q^(m×n), because the non-zero elements of the matrix √2 A are not rational numbers.
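The failure of closure in this remark can be made concrete. A small sketch (illustrative, with a sample 2 × 2 matrix): scaling a matrix of exact rationals by the real number √2 produces entries that are no longer rational.

```python
import math
from fractions import Fraction

# A sample matrix with rational entries, standing in for an element of Q^(m×n).
A = [[Fraction(1, 2), Fraction(3)],
     [Fraction(7, 4), Fraction(-2, 5)]]

sqrt2 = math.sqrt(2)

# Scalar multiplication by √2: Fraction * float yields a float, mirroring the
# fact that √2 · (1/2) is irrational and cannot stay inside Q.
B = [[sqrt2 * entry for entry in row] for row in A]

print(all(isinstance(x, Fraction) for row in A for x in row))  # True
print(any(isinstance(x, Fraction) for row in B for x in row))  # False
```

So Q^(m×n) is closed under rational scalars but not under arbitrary real scalars, which is exactly why it fails to be a vector space over R.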

EXAMPLE-7 The set of all ordered n-tuples of the elements of any field F is a vector space over F.

SOLUTION Recall that if a1, a2, . . . , an ∈ F, then (a1, a2, . . . , an) is called an ordered n-tuple of elements of F. Let V = {(a1, a2, . . . , an) : ai ∈ F for all i ∈ n} be the set of all ordered n-tuples of elements of F. Now, to give a vector space structure to V over the field F, we define a binary operation on V, scalar multiplication on V and equality of any two elements of V as follows:

Since λ ai ∈ F for all i ∈ n, we have λu ∈ V.

Since we have defined vector addition, scalar multiplication on V and equality of any two elements of V, it remains to check whether V-1 and V-2(i) to V-2(iv) are satisfied.

SOLUTION Since the sum of two continuous functions is a continuous function, vector addition is a binary operation on V. Also, if f is continuous and λ ∈ R, then λf is continuous. We know that for any f, g ∈ V,

f = g ⇔ f(x) = g(x) for all x ∈ [a, b].

Thus, we have deﬁned vector addition, scalar multiplication and equality in V .

(i) (a, b) + (c, d) = (a + d, b + c) and k(a, b) = (ka, kb)

19. The set V of all convergent real sequences is a vector space over the field R of all real numbers.

20. Let p be a prime number. The set Zp = {0, 1, 2, . . . , (p − 1)} is a field under addition and multiplication modulo p as binary operations. Let V be the vector space of polynomials of degree at most n over the field Zp. Find the number of elements in V.
[Hint: Each polynomial in V is of the form f(x) = a0 + a1 x + · · · + an x^n, where a0, a1, a2, . . . , an ∈ Zp. Since each ai can take any one of the p values 0, 1, 2, . . . , (p − 1), there are p^(n+1) elements in V.]
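The counting argument in the hint of Problem 20 can be verified by brute force for small p and n: enumerate all coefficient tuples (a0, . . . , an) with entries in Zp and count them. A minimal sketch:

```python
from itertools import product

def count_polynomials(p, n):
    # Each polynomial of degree at most n over Z_p is determined by its
    # n+1 coefficients, each ranging over {0, 1, ..., p-1}.
    return sum(1 for _ in product(range(p), repeat=n + 1))

p, n = 3, 2
print(count_polynomials(p, n))            # 27
print(count_polynomials(p, n) == p ** (n + 1))  # True, matching the hint
```

The enumeration agrees with the closed form p^(n+1) from the hint.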

EXERCISE 2.2

1. Let V be a vector space over a field F. Then prove that a(u − v) = au − av for all a ∈ F and u, v ∈ V.

2. Let V be a vector space over the field R and u, v ∈ V. Simplify each of the following:
   (i) 4(5u − 6v) + 2(3u + v)
   (ii) 6(3u + 2v) + 5u − 7v
   (iii) 5(2u − 3v) + 4(7v + 8)
   (iv) 3(5u + 2/v)

3. Show that the commutativity of vector addition in a vector space V can be derived from the other axioms in the definition of V.

4. Mark each of the following as true or false.
   (i) The null vector in a vector space is unique.
   (ii) Let V(F) be a vector space. Then scalar multiplication on V is a binary operation on V.
   (iii) Let V(F) be a vector space. Then au = av ⇒ u = v for all u, v ∈ V and for all a ∈ F.
   (iv) Let V(F) be a vector space. Then au = bu ⇒ a = b for all u ∈ V and for all a, b ∈ F.

5. If V(F) is a vector space, then prove that for any integer n, λ ∈ F and v ∈ V, n(λv) = λ(nv) = (nλ)v.

ANSWERS

2. (i) 26u − 22v (ii) 23u + 5v (iii) The sum 7v + 8 is not defined, so the given expression is not meaningful. (iv) Division by v is not defined, so the given expression is not meaningful.

4. (i) T (ii) F (iii) F (iv) F

2.4 SUBSPACES

SUBSPACE. Let V be a vector space over a field F. A non-void subset S of V is said to be a subspace of V if S itself is a vector space over F under the operations on V restricted to S.

If V is a vector space over a field F, then the null (zero) space {0V} and the entire space V are subspaces of V. These two subspaces are called trivial (improper) subspaces of V and any other subspace of V is called a non-trivial (proper) subspace of V.

THEOREM-1 (Criterion for a non-void subset to be a subspace) Let V be a vector space over a field F. A non-void subset S of V is a subspace of V iff for all u, v ∈ S and for all a ∈ F
  (i) u − v ∈ S and,
  (ii) au ∈ S.

PROOF. First suppose that S is a subspace of the vector space V. Then S itself is a vector space over the field F under the operations on V restricted to S. Consequently, S is a subgroup of the additive abelian group V and is closed under scalar multiplication. Hence, (i) and (ii) hold.

Conversely, suppose that S is a non-void subset of V such that (i) and (ii) hold. Then,
  (i) ⇒ S is an additive subgroup of V and therefore S is an abelian group under vector addition.
  (ii) ⇒ S is closed under scalar multiplication.
Axioms V-2(i) to V-2(iv) hold for all elements in S as they hold for all elements in V. Hence, S is a subspace of V. Q.E.D.

THEOREM-2 (Another criterion for a non-void subset to be a subspace) Let V be a vector

space over a field F. Then a non-void subset S of V is a subspace of V iff au + bv ∈ S for all u, v ∈ S and for all a, b ∈ F.

PROOF. First let S be a subspace of V. Then,
  au ∈ S, bv ∈ S for all u, v ∈ S and for all a, b ∈ F   [By Theorem 1]
⇒ au + bv ∈ S for all u, v ∈ S and for all a, b ∈ F   [∵ S is closed under vector addition]

Conversely, let S be a non-void subset of V such that au + bv ∈ S for all u, v ∈ S and for all a, b ∈ F. Since 1, −1 ∈ F, we have 1u + (−1)v ∈ S for all u, v ∈ S.
⇒ u − v ∈ S for all u, v ∈ S   (i)
Again, since 0 ∈ F, we have au + 0v ∈ S for all u ∈ S and for all a ∈ F.
⇒ au ∈ S for all u ∈ S and for all a ∈ F   (ii)

From (i) and (ii), we get u − v ∈ S and au ∈ S for all u, v ∈ S and for all a ∈ F. Hence, by Theorem 1, S is a subspace of the vector space V. Q.E.D.
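Theorem 2's criterion can be exercised on a concrete subset. A sketch (the subspace S = {(a, b, c) ∈ Q³ : a + b = 0} is a sample choice, not taken from the text); checure over finitely many sample scalars and vectors illustrates, but does not prove, closure.

```python
from fractions import Fraction
from itertools import product

def in_S(w):
    # Membership test for the sample subspace S = {(a, b, c) : a + b = 0}.
    return w[0] + w[1] == 0

def combo(a, u, b, v):
    # Forms au + bv componentwise, as in Theorem 2's criterion.
    return tuple(a * x + b * y for x, y in zip(u, v))

vectors = [(Fraction(1), Fraction(-1), Fraction(5)),
           (Fraction(-2), Fraction(2), Fraction(0))]
scalars = [Fraction(0), Fraction(1), Fraction(-3, 4)]

closed = all(in_S(combo(a, u, b, v))
             for a, b in product(scalars, repeat=2)
             for u, v in product(vectors, repeat=2))
print(closed)  # True: every sampled au + bv stays in S
```

That au + bv stays in S for every choice follows from a(u1 + u2) + b(v1 + v2) = 0 whenever u, v ∈ S, which is the content of the criterion.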


EXAMPLE-7 Let V be the vector space of all real valued continuous functions over the field R of all real numbers. Show that the set S of solutions of the differential equation

2 d²y/dx² − 9 dy/dx + 2y = 0

is a subspace of V.

EXAMPLE-10 Let V be the vector space of all 2×2 matrices over the ﬁeld R of all real numbers.Show that: (i) the set S of all 2 × 2 singular matrices over R is not a subspace of V . (ii) the set S of all 2 × 2 matrices A satisfying A2 = A is not a subspace of V .

SMALLEST SUBSPACE CONTAINING A GIVEN SUBSET. Let V be a vector space over a field F, and let W be a subset of V. Then a subspace S of V is called the smallest subspace of V containing W, if
  (i) W ⊂ S, and
  (ii) S′ is a subspace of V such that W ⊂ S′ ⇒ S ⊂ S′.
The smallest subspace containing W is also called the subspace generated by W or the subspace spanned by W, and is denoted by [W]. If W is a finite set, then S is called a finitely generated space. In Example 5, Fv is a finitely generated subspace of V containing {v}.

EXERCISE 2.3

1. Show that the set of all upper (lower) triangular matrices over the field C of all complex numbers is a subspace of the vector space V of all n × n matrices over C.

2. Let V be the vector space of real-valued functions. Then, show that the set S of all continuous functions and the set T of all differentiable functions are subspaces of V.

3. Let V be the vector space of all polynomials in indeterminate x over a field F and S be the set of all polynomials of degree at most n. Show that S is a subspace of V.

4. Let V be the vector space of all n × n square matrices over a field F. Show that:
   (i) the set S of all symmetric matrices over F is a subspace of V.
   (ii) the set S of all upper triangular matrices over F is a subspace of V.
   (iii) the set S of all diagonal matrices over F is a subspace of V.
   (iv) the set S of all scalar matrices over F is a subspace of V.

5. Let V be the vector space of all functions from the real field R into R. Show that S is a subspace of V where S consists of all:
   (i) bounded functions. (ii) even functions. (iii) odd functions.

6. Let V be the vector space R³. Which of the following subsets of V are subspaces of V?
   (i) S1 = {(a, b, c) : a + b = 0}
   (ii) S2 = {(a, b, c) : a = 2b + 1}

12. Which of the following sets of vectors α = (a1 , a2 , . . . , an ) in Rn are subspaces of Rn ?

(n ≥ 3).

(i) all α such that a1 ≥ 0

(ii) all α such that a1 + 3a2 = a3
(iii) all α such that a2 = a1²
(iv) all α such that a1 a2 = 0
(v) all α such that a2 is rational.

13. Let V be the vector space of all functions from R → R over the field R of all real numbers. Show that each of the following subsets of V is a subspace of V.

REMARK. In general, the union of two subspaces of a vector space is not necessarily a subspace. For example, S1 = {(a, 0, 0) : a ∈ R} and S2 = {(0, b, 0) : b ∈ R} are subspaces of R³. But their union S1 ∪ S2 is not a subspace of R³, because (2, 0, 0), (0, −1, 0) ∈ S1 ∪ S2, but (2, 0, 0) + (0, −1, 0) = (2, −1, 0) ∉ S1 ∪ S2.

As remarked above, the union of two subspaces is not always a subspace. Therefore, unions are not very useful in the study of vector spaces and we shall say no more about them.
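The counterexample from the remark, made concrete in code: S1 and S2 are the two coordinate-axis subspaces of R³, and the sum of one vector from each escapes their union.

```python
def in_S1(w):
    return w[1] == 0 and w[2] == 0   # S1 = {(a, 0, 0) : a ∈ R}

def in_S2(w):
    return w[0] == 0 and w[2] == 0   # S2 = {(0, b, 0) : b ∈ R}

def in_union(w):
    return in_S1(w) or in_S2(w)

u, v = (2, 0, 0), (0, -1, 0)
s = tuple(x + y for x, y in zip(u, v))   # (2, -1, 0)

print(in_union(u), in_union(v))  # True True: both lie in S1 ∪ S2
print(in_union(s))               # False: their sum does not
```

This is exactly the closure failure that disqualifies S1 ∪ S2 as a subspace.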

THEOREM-4 The intersection of a family of subspaces of V(F) containing a given subset S of V is the subspace generated by S (the smallest subspace of V containing S).

PROOF. Let {Si : i ∈ I} be a family of subspaces of V(F) containing a subset S of V. Let T = ∩_{i∈I} Si. Then by Theorem 2, T is a subspace of V. Since Si ⊃ S for all i, we have T = ∩_{i∈I} Si ⊃ S.

Hence, T is a subspace of V containing S.

Now it remains to show that T is the smallest subspace of V containing S.

Let W be a subspace of V containing S. Then W is one of the members of the family {Si : i ∈ I}. Consequently, W ⊃ ∩_{i∈I} Si = T.

Thus, every subspace of V that contains S also contains T. Hence, T is the smallest subspace of V containing S. Q.E.D.

DIRECT SUM OF SUBSPACES. A vector space V(F) is said to be the direct sum of its two subspaces S and T if every vector u in V is expressible in one and only one way as u = v + w, where v ∈ S and w ∈ T. If V is the direct sum of S and T, then we write V = S ⊕ T.
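A concrete instance of a direct sum, sketched in code (R² split into its two coordinate axes is a standard example, chosen here for illustration rather than taken from the text): every (a, b) decomposes uniquely as (a, 0) + (0, b).

```python
def decompose(w):
    # Unique decomposition of w ∈ R^2 as v + u with v ∈ S (x-axis)
    # and u ∈ T (y-axis).
    return (w[0], 0), (0, w[1])

v, w = decompose((3, -5))
print(v, w)                                    # (3, 0) (0, -5)
print((v[0] + w[0], v[1] + w[1]) == (3, -5))   # True: the pieces sum back
```

Uniqueness here comes from S ∩ T = {0V}: if (a, 0) + (0, b) = (a′, 0) + (0, b′), then a = a′ and b = b′.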

LINEAR VARIETY. Let S be a subspace of a vector space V(F), and let v ∈ V. Then,

v + S = {v + u : u ∈ S}

is called a linear variety of S by v, or a translate of S by v, or a parallel of S through v. S is called the base space of the linear variety and v a leader.

THEOREM-9 Let S be a subspace of a vector space V(F) and let P = v + S be the parallel of S through v. Then,
  (i) every vector in P can be taken as a leader of P, i.e. u + S = P for all u ∈ P.
  (ii) two vectors v1, v2 ∈ V are in the same parallel of S iff v1 − v2 ∈ S.

Since S is a subgroup of the additive abelian group V, for any u + S, v + S ∈ V/S

u+S = v+S ⇔ u−v ∈ S

Thus, we have defined vector addition, scalar multiplication on V/S and equality of any two elements of V/S. Now we proceed to prove that V/S is a vector space over the field F under the above defined operations of addition and scalar multiplication.

THEOREM-1 Let V be a vector space over a ﬁeld F and let S be a subspace of V . Then, the set

V /S = {u + S : u ∈ V }

is a vector space over the field F for the vector addition and scalar multiplication on V/S defined as follows:

EXERCISE 2.5

1. If S is a subspace of a vector space V(F), then show that there is a one-one correspondence between the subspaces of V containing S and the subspaces of the quotient space V/S.

2. Define quotient space.

2.6 LINEAR COMBINATIONS

LINEAR COMBINATION. Let V be a vector space over a field F, let v1, v2, . . . , vn be n vectors in V and let λ1, λ2, . . . , λn be n scalars in F. Then the vector λ1 v1 + · · · + λn vn (or Σ_{i=1}^{n} λi vi) is called a linear combination of v1, v2, . . . , vn. It is also called a linear combination of the set S = {v1, v2, . . . , vn}. Since there are a finite number of vectors in S, it is also called a finite linear combination of S.

If S is an infinite subset of V, then a linear combination of a finite subset of S is called a finite linear combination of S.

or,  0 1 1   v  =  6  Applying R3 → R3 + 10R2

REMARK. The equation (i) obtained in the above solution is an identity in x, that is, it holds for any value of x. So, the values of u, v and w can be obtained by solving three equations, which can be obtained by giving any three values to the variable x.

Then each row of B is a row of A or a linear combination of rows of A. So, the row space of B is contained in the row space of A. On the other hand, we can apply the inverse elementary row operations on B to obtain A. So, the row space of A is contained in the row space of B. Consequently, A and B have the same row space. Q.E.D.

THEOREM-2 Let A and B be row canonical matrices. Then A and B have the same row spaceif and only if they have the same non-zero rows.

SOLUTION There are two ways to show that [S] = [T]. Show that each of v1, v2, v3 is a linear combination of v4 and v5, and show that each of v4 and v5 is a linear combination of v1, v2, v3. But this method is not very convenient. So, let us discuss an alternative method. Let A be the matrix whose rows are v1, v2, v3 and B be the matrix whose rows are v4 and v5. That is,

1 2 −1 3

SOLUTION In order to show that the vectors v1 and v2 span R²(R), it is sufficient to show that any vector in R² is a linear combination of v1 and v2. Let v = (a, b) be an arbitrary vector in R²(R). Further, let
  v = x v1 + y v2
⇒ (a, b) = x(1, 1) + y(1, 2)
⇒ (a, b) = (x + y, x + 2y)
⇒ x + y = a and x + 2y = b
This is a consistent system of equations for all a, b ∈ R. So, every vector in R²(R) is a linear combination of v1 and v2.
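The system x + y = a, x + 2y = b can be solved explicitly: subtracting the equations gives y = b − a, and back-substitution gives x = 2a − b. A minimal sketch that checks the resulting combination really reproduces (a, b):

```python
from fractions import Fraction

def coefficients(a, b):
    # Explicit solution of x + y = a, x + 2y = b.
    y = b - a
    x = 2 * a - b
    return x, y

a, b = Fraction(7), Fraction(-3)
x, y = coefficients(a, b)

# Reassemble x*v1 + y*v2 with v1 = (1, 1), v2 = (1, 2), componentwise.
v = (x * 1 + y * 1, x * 1 + y * 2)
print(v == (a, b))  # True: (a, b) is a combination of v1 and v2
```

Since the solution (x, y) exists for every (a, b), the two vectors span R², as the solution above argues.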

EXAMPLE-3 Let V = Pn(x) be the vector space of all polynomials of degree less than or equal to n over the field R of all real numbers. Show that the polynomials 1, x, x², . . . , x^(n−1), x^n span V.

This is a homogeneous system of linear equations. The determinant of the coefficient matrix A is

        | 1 1 4 |
  |A| =  | 1 3 9 | = 0
        | 0 2 5 |

So, the system has non-trivial solutions. Hence, the given vectors are linearly dependent in R³(R).
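The determinant computation above can be checked directly; a minimal sketch using cofactor expansion along the first row (plain Python, no libraries):

```python
def det3(m):
    # 3x3 determinant by cofactor expansion along the first row.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 1, 4],
     [1, 3, 9],
     [0, 2, 5]]
print(det3(A))  # 0: the matrix is singular, so the homogeneous system
                # has non-trivial solutions and the vectors are dependent
```

Expanding: 1·(15 − 18) − 1·(5 − 0) + 4·(2 − 0) = −3 − 5 + 8 = 0, agreeing with the text.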

Aliter. In order to check the linear independence or dependence of vectors, we may use the following algorithm:

ALGORITHM

Step I Form a matrix A whose columns are given vectors.

Step II Reduce the matrix in Step I to echelon form.

Step III See whether all columns have pivot elements or not. If all columns have pivot elements, then the given vectors are linearly independent. If there is a column not having a pivot element, then the corresponding vector is a linear combination of the preceding vectors, and hence the vectors are linearly dependent.

In Example 2, the matrix A whose columns are v1, v2, v3 is

1 1 4
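The three steps of this algorithm can be sketched as a small Gaussian-elimination routine that returns the pivot columns; applied to the coefficient matrix of Example 2 computed earlier, it confirms the dependence. (This is an illustrative implementation, not code from the text.)

```python
from fractions import Fraction

def pivot_columns(rows):
    # Steps I-III: reduce to echelon form, report which columns get pivots.
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        # Find a row at or below r with a non-zero entry in column c.
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue                      # no pivot in this column
        m[r], m[pr] = m[pr], m[r]         # swap the pivot row up
        for i in range(r + 1, len(m)):    # eliminate below the pivot
            factor = m[i][c] / m[r][c]
            m[i] = [x - factor * y for x, y in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# Matrix whose columns are the vectors of Example 2.
A = [[1, 1, 4],
     [1, 3, 9],
     [0, 2, 5]]
print(pivot_columns(A))  # [0, 1]: the third column has no pivot, so the
                         # three vectors are linearly dependent
```

Exact Fraction arithmetic avoids the false pivots that floating-point round-off can create.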

0 0 0 0 0 0 0

We observe that the rows R2, R3, R4 have 0's in the second column below the non-zero pivot in R1, and hence any linear combination of R2, R3 and R4 must have 0 as its second component, whereas R1 has a non-zero entry 2 as its second component. Thus, R1 cannot be a linear combination of the rows below it. Similarly, the rows R3 and R4 have 0's in the third column below the non-zero pivot in R2, and hence R2 cannot be a linear combination of the rows below it. Finally, R3 cannot be a multiple of R4, as R4 has a 0 in the fifth column below the non-zero pivot in R3. Thus, if we look at the rows from the bottom and move upward, we find that of the rows R4, R3, R2, R1, no row is a linear combination of the preceding rows. So, R1, R2, R3, R4 are linearly independent vectors in R⁷(R). The above discussion suggests the following theorem.

THEOREM-1 The non-zero rows of a matrix in echelon form are linearly independent.

2.9 BASIS AND DIMENSION

BASIS. A non-void subset B of a vector space V(F) is said to be a basis for V , if

(i) B spans V, i.e. [B] = V, and
(ii) B is linearly independent (l.i.).

In other words, a basis of a vector space V is a linearly independent set of vectors in V that spans V.

FINITE DIMENSIONAL VECTOR SPACE. A vector space V(F) is said to be a finite dimensional vector space if there exists a finite subset of V that spans it. A vector space which is not finite dimensional may be called an infinite dimensional vector space.

REMARK. Note that the null vector cannot be an element of a basis, because any set containing the null vector is always linearly dependent.

For any field F and a positive integer n, the set B = {e1^(n), e2^(n), . . . , en^(n)} spans the vector space F^n(F) and is linearly independent. Hence, it is a basis for F^n. This basis is called the standard basis for F^n.

REMARK. Since the void set φ is linearly independent and spans the null space {0V}, the void set φ is the only basis for the null space {0V}.

Consider the subset B = {e1^(3), e2^(3), e3^(3)}, where e1^(3) = (1, 0, 0), e2^(3) = (0, 1, 0), e3^(3) = (0, 0, 1), of the real vector space R³. The set B spans R³, because any vector (a, b, c) in R³ can be written as a linear combination of e1^(3), e2^(3) and e3^(3), namely,

(a, b, c) = a e1^(3) + b e2^(3) + c e3^(3).
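The spanning property of the standard basis of R³ is easy to exhibit in code: any (a, b, c) is recovered as the combination a·e1 + b·e2 + c·e3.

```python
# Standard basis vectors of R^3.
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def combine(a, b, c):
    # Forms a*e1 + b*e2 + c*e3 componentwise.
    return tuple(a * x + b * y + c * z for x, y, z in zip(e1, e2, e3))

print(combine(4, -7, 9) == (4, -7, 9))  # True: the combination returns (a, b, c)
```

The same pattern works for the standard basis of F^n for any n.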

SOLUTION Let λ0 , λ1 , . . . , λn ∈ R be such that

⇒ λ0 = λ1 = · · · = λn = 0
Therefore, the set B is linearly independent. Also, the set B spans the vector space Pn(x) of all real polynomials of degree not exceeding n, because every polynomial of degree less than or equal to n is a linear combination of elements of B. Hence, B is a basis for the vector space Pn(x) of all real polynomials of degree not exceeding n.

EXAMPLE-2 Show that the infinite set {1, x, x², . . . } is a basis for the vector space R[x] of all polynomials over the field R of real numbers.

SOLUTION The set B = {1, x, x², . . . } is linearly independent. Also, the set B spans R[x], because every real polynomial can be expressed as a linear combination of elements of B.

Hence, B is a basis for R[x]. Since B is an infinite set, R[x] is an infinite dimensional vector space over R.

EXAMPLE-3 Let a, b ∈ R such that a < b. Then the vector space C[a, b] of all real valued continuous functions on [a, b] is an infinite dimensional vector space.

EXAMPLE-4 The field R of real numbers is an infinite dimensional vector space over its subfield Q of rational numbers. But it is a finite dimensional vector space over itself.

EXAMPLE-5 The set {1, i} is a basis for the vector space C of all complex numbers over the field R of real numbers.

We have defined the basis of a vector space and we have seen that a basis of a vector space need not be unique. Now a natural question arises: does a basis always exist? The answer is in the affirmative, as shown in the following theorem.

THEOREM–1 Every ﬁnite dimensional vector space has a basis.

PROOF. Let V be a finite dimensional vector space over a field F. If V is the null space, then the void set φ is its basis, and we are done. So, let V be a non-null space. Since V is finite dimensional, there exists a finite subset S = {v1, v2, . . . , vn} of V that spans V. If S is a linearly independent set, we are done. If S is not linearly independent, then by Theorem 2 on page 180, there exists a vector vk ∈ S which is a linear combination of the previous vectors. Remove vk from S and let S1 = S − {vk}. By Theorem 3 on page 167, S1 spans V.

If S1 is a linearly independent set, the theorem is proved. If not, we repeat the above process on S1 and omit one more vector to obtain S2 ⊂ S1. Continuing in this manner, we obtain successively S1 ⊃ S2 ⊃ S3 ⊃ . . . , where each Si spans V.

Since S is a finite set and each Si contains one vector less than Si−1, we ultimately arrive at a linearly independent set that spans V. Note that this process terminates before we exhaust all vectors in S, because if not earlier, then after (n − 1) steps we shall be left with a singleton set containing a non-zero vector that spans V. This singleton set will form a basis for V, because each singleton set containing a non-zero vector in V forms a linearly independent set. Hence, V has a basis. Q.E.D.

PROOF. By Theorem 1, a basis for V is obtained by removing those vectors from S which are linear combinations of previous vectors in S. Hence, S contains a basis for V. Q.E.D.

THEOREM-4 A subset B of a vector space V(F) is a basis for V iff every vector in V has a unique representation as a linear combination of vectors of B.

PROOF. First suppose that B is a basis for V. Then B spans V and so every vector in V is a linear combination of vectors of B. To prove the uniqueness, we consider the following two cases:

⇒ λ1 = λ2 = · · · = λn = 0
Thus, the only linear combination of vectors of B that equals the null vector is the trivial linear combination. Hence, B is a basis for V. Q.E.D.

The following theorem proves an extremely important result: one cannot have more linearly independent vectors than the number of vectors in a spanning set.

THEOREM-5 Let V be a vector space over a field F. If V is spanned by the set {v1, v2, . . . , vn} of n vectors in V and if {w1, w2, . . . , wm} is a linearly independent set of vectors in V, then m ≤ n. Moreover, V can be spanned by a set of n vectors containing the set {w1, w2, . . . , wm}.

PROOF. We shall prove both the results together by induction on m.

First we shall prove the theorem for m = 1. Let {w1} be a linearly independent set. Then w1 ≠ 0V.
Now, w1 ∈ V
⇒ w1 is a linear combination of v1, v2, . . . , vn   [∵ {v1, v2, . . . , vn} spans V]
⇒ {w1, v1, v2, . . . , vn} is a linearly dependent set   [By Theorem 2 on page 180]
⇒ there exists vk ∈ {w1, v1, . . . , vn} such that vk is a linear combination of the preceding vectors.

Let us rearrange the vectors w1, v1, . . . , vn in such a way that vn is a linear combination of the previous vectors. Removing vn from the set {w1, v1, . . . , vn}, we obtain the set {w1, v1, . . . , vn−1} of n vectors containing w1 such that it spans V, and n − 1 ≥ 0 ⇒ n ≥ 1 = m. Note that the vector removed is one of the v's, because the set of w's is linearly independent. Hence, the theorem holds for m = 1.

Now suppose that the theorem is true for m. This means that for a given set {w1, . . . , wm} of m linearly independent vectors in V, we have (i) m ≤ n and, (ii) there exists a set of n vectors w1, . . . , wm, v1, . . . , vn−m in V that spans V. To prove the theorem for m + 1, we have to show that for a given set of (m + 1) linearly independent vectors w1, w2, . . . , wm, wm+1 in V, we have (i) m + 1 ≤ n and, (ii) there exists a set of n vectors containing w1, . . . , wm, wm+1 that spans V.

we repeat the above process. Continuing in this manner, we remove, one by one, every vector which is a linear combination of the preceding vectors, till we obtain a linearly independent set spanning V. Since S is a linearly independent set, no vi is a linear combination of the preceding vectors. Therefore, the vectors removed are some of the bi's. Hence, the reduced linearly independent set consists of all the vi's and some of the bi's. This reduced set {v1, v2, . . . , vm, b1, b2, . . . , bn−m} is a basis for V containing S. Q.E.D.

This theorem can also be stated as:

“Any l.i. set of vectors of a ﬁnite dimensional vector space is a part of its basis.”

COROLLARY-1 In a vector space V (F) of dimension n

(i) any set of n linearly independent vectors is a basis,

and, (ii) any set of n vectors that spans V is a basis.

PROOF. (i) Let B = {b1, b2, . . . , bn} be a set of n linearly independent vectors in the vector space V. Then by Theorem 7, B is a part of a basis. But a basis cannot have more than n vectors. Hence, B itself is a basis for V.

(ii) Let B = {b1, b2, . . . , bn} be a set of n vectors in V that spans V. If B is not linearly independent, then there exists a vector in B which is a linear combination of the preceding vectors, and by removing it from B we will obtain a set of (n − 1) vectors in V that will also span V. But this is a contradiction to the fact that a set of (n − 1) vectors cannot span V. Hence, B is a linearly independent set. Consequently, it is a basis for V. Q.E.D.

COROLLARY-2 Let V(F) be a vector space of dimension n, and let S be a subspace of V. Then,

(i) every basis of S is a part of a basis of V .

(ii) dim S ≤ dim V.

(iii) dim S = dimV ⇔ S = V .

and, (iv) dim S < dimV , if S is a proper subspace of V .

B is a linearly independent set of vectors in V and hence it is a part of a basis for V.

(ii) Since a basis for S is a part of a basis for V, we have dim S ≤ dim V.

(iii) Let dim S = dim V = n. Let B = {b1, b2, . . . , bn} be a basis for S, so that [B] = S.

Since B is a linearly independent set of n vectors in V and dim V = n, B spans V,

SOLUTION (i) Since dim R³ = 3, a basis of R³(R) must contain exactly 3 elements. Hence, B1 is not a basis of R³(R).

(ii) Since dim R³ = 3, B2 will form a basis of R³(R) if and only if the vectors in B2 are linearly independent. To check this, let us form the matrix A whose rows are the given vectors, as given below.

1 1 1

⇒ A∼0 1 2 Applying R3 → R3 + 3R2

0 0 5

Clearly, the echelon form of A has no zero rows. Hence, the three vectors are linearly independent and so they form a basis of R³.

(iii) Since (n + 1) or more vectors in a vector space of dimension n are linearly dependent, B3 is a linearly dependent set of vectors in R³(R). Hence, it cannot be a basis of R³.

(iv) The matrix A whose rows are the vectors in B4 is given by

1 1 2

SOLUTION The given four vectors can form a basis of R⁴(R) iff they are linearly independent, as the dimension of R⁴ is 4. The matrix A having the given vectors as its rows is

A = | 1 1 1 1 |
    | 1 2 3 2 |
    | 2 5 6 4 |

0 0 0 0

The echelon matrix has a zero row. So, the given vectors are linearly dependent and do not form a basis of R⁴. Since the echelon matrix has three non-zero rows, the four vectors span a subspace of dimension 3.

TYPE II ON EXTENDING A GIVEN SET TO FORM A BASIS OF A GIVEN VECTOR SPACE

SOLUTION Let us first form a matrix A with rows u1 and u2, and reduce it to echelon form:

A = | 1 1 1 1 |
    | 2 2 3 4 |

⇒ A ∼ | 1 1 1 1 |   Applying R2 → R2 − 2R1
      | 0 0 1 2 |

We observe that the vectors v1 = (1, 1, 1, 1) and v2 = (0, 0, 1, 2) span the same space as that spanned by the given vectors u1 and u2. In order to extend the given set of vectors to a basis of R⁴(R), we need two more vectors u3 and u4 such that the set of four vectors v1, v2, u3, u4 is linearly independent. For this, we choose u3 and u4 in such a way that the matrix having v1, v2, u3, u4 as its rows is in echelon form. Thus, if we choose u3 = (0, a, 0, 0) and u4 = (0, 0, 0, b), where a, b are non-zero real numbers, then v1, u3, v2, u4 in that order form a matrix in echelon form. Thus, they are linearly independent, and they form a basis of R⁴. Hence, u1, u2, u3, u4 also form a basis of R⁴.
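The extension argument above can be checked concretely. A small sketch (the sample values a = 1, b = 1 are our own choice; the vectors v1, v2, u3, u4 come from the example): arranged in the order v1, u3, v2, u4, the rows form a triangular matrix with non-zero diagonal, hence an echelon matrix with four pivots.

```python
rows = [(1, 1, 1, 1),   # v1
        (0, 1, 0, 0),   # u3 with a = 1
        (0, 0, 1, 2),   # v2
        (0, 0, 0, 1)]   # u4 with b = 1

# In this order the matrix is upper triangular; a non-zero diagonal means
# four pivots, so the rows are linearly independent and form a basis of R^4.
diagonal = [rows[i][i] for i in range(4)]
print(all(d != 0 for d in diagonal))  # True
```

Any non-zero a and b work equally well, since only the positions of the pivots matter.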

SOLUTION Let A be the matrix having v1 and v2 as its two rows. Then,

A = | −1 1 0 |
    |  0 1 0 |

Clearly, A is in echelon form. In order to form a basis of R³(R), we need one more vector such that the matrix having that vector as its third row and v1, v2 as its first and second rows is in echelon form. If we take v3 = (0, 0, a), where a (≠ 0) ∈ R, then the matrix having v1, v2, v3 as its three rows is in echelon form. Thus, v1, v2, v3 are linearly independent and they form a basis of R³(R).

REMARK. Sometimes we are given a list of vectors in the vector space Rⁿ(R) and we want to find a basis for the subspace S of Rⁿ spanned by the given vectors, that is, a basis of [S]. The following two algorithms help us in finding such a basis.

ALGORITHM 1 (Row space algorithm)

Step I Form the matrix A whose rows are the given vectors.
Step II Reduce A to echelon form by elementary row operations.
Step III Take the non-zero rows of the echelon form. These rows form a basis of the subspace spanned by the given set of vectors.

In order to find a basis consisting of vectors from the original list of vectors, we use the following algorithm.

ALGORITHM 2 (Casting-out algorithm)

Step I Form the matrix A whose columns are the given vectors.
Step II Reduce A to echelon form by elementary row operations.
Step III Delete (cast out) those vectors from the given list which correspond to columns without pivots, and select the remaining vectors in S which correspond to columns with pivots. The vectors so selected form a basis of [S].
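Algorithm 1 (the row space algorithm) can be sketched as a short elimination routine that keeps the non-zero rows. As input we reuse u1 = (1, 1, 1, 1) and u2 = (2, 2, 3, 4) from the Type II example above, plus a third vector in their span (u1 + u2, chosen by us for illustration).

```python
from fractions import Fraction

def echelon_basis(rows):
    # Steps I-III of the row space algorithm: reduce to echelon form and
    # return the non-zero rows, which form a basis of the row space.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue                      # no pivot in this column
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return m[:r]                          # the non-zero rows

vectors = [[1, 1, 1, 1],    # u1
           [2, 2, 3, 4],    # u2
           [3, 3, 4, 5]]    # u1 + u2, so it adds nothing to the span
basis = echelon_basis(vectors)
print(len(basis))  # 2: the three vectors span a 2-dimensional subspace
```

The casting-out algorithm (Algorithm 2) differs only in that the vectors are placed as columns and the pivot columns are reported instead of the reduced rows.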

TYPE III ON FINDING THE DIMENSION OF A SUBSPACE SPANNED BY A GIVEN SET OF VECTORS

EXAMPLE-7 Let S be the set consisting of the following vectors in R5 :

We observe that the pivots (encircled entries) in the echelon form of A appear in the columns C1, C2, C4. So, we "cast out" the vectors u3 and u5 from the set S, and the remaining vectors u1, u2, u4, which correspond to the columns in the echelon matrix with pivots, form a basis of [S]. Hence, dim[S] = 3.

⇒ B ∼  0 1 1  Applying R2 → R2 − 3R1 , R3 → R3 − 3R1

⇒ B∼ 0 1 1 Applying R1 → R1 + 2R2

0 0 0

Clearly, A and B have the same row canonical form. So, the row spaces of A and B are equal. Hence, [S] = [T].

EXERCISE 2.9

1. Mark each of the following true or false.
   (i) The vectors in a basis of a vector space are linearly dependent.
   (ii) The null (zero) vector may be a part of a basis.
   (iii) Every vector space has a basis.
   (iv) Every vector space has a finite basis.
   (v) A basis cannot have the null vector.
   (vi) If two bases of a vector space have one common vector, then the two bases are the same.
   (vii) A basis for R³(R) can be extended to a basis for R⁴(R).
   (viii) Any two bases of a finite dimensional vector space have the same number of vectors.
   (ix) Every set of n + 1 vectors in an n-dimensional vector space is linearly dependent.
   (x) Every set of n + 1 vectors in an n-dimensional vector space is linearly independent.
   (xi) An n-dimensional vector space can be spanned by a set of n − 1 vectors in it.
   (xii) Every set of n linearly independent vectors in an n-dimensional vector space is a basis.
   (xiii) A spanning set of n vectors in an n-dimensional vector space is a basis.