Inner products on R^n, and more

Peyam Ryan Tabrizian
Friday, April 12th

1 Introduction

You might be wondering: are there inner products on R^n that are not the usual dot product x . y = x_1 y_1 + ... + x_n y_n? The answer is yes and no. For example, the following are inner products on R^2:

<x, y> = 2 x_1 y_1 + 3 x_2 y_2

<x, y> = x_1 y_1 + 2 x_1 y_2 + 2 x_2 y_1 + 5 x_2 y_2

But there are no fancier examples than these! In fact, this result is even true for finite-dimensional vector spaces over F!

Note: In what follows, we will denote vectors in R^n and C^n by column vectors, not row vectors; this will simplify matters later on. For example, instead of writing the point x = (1, 2), we will write x = [1; 2].

2 Inner products on R^n

In this section, we will prove the following result:

Prop: <x, y> is an inner product on R^n if and only if <x, y> = x^T A y, where A is a symmetric matrix whose eigenvalues are strictly positive. (Such a matrix is called symmetric positive-definite.)
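If you want to see the proposition in action, here is a minimal pure-Python sketch for the 2x2 case. The matrix A = [[1, 2], [2, 5]] is one assumed concrete choice (symmetric, with eigenvalues 3 +/- 2*sqrt(2), both positive); the code checks positive eigenvalues and then verifies symmetry and positive-definiteness of <x, y> = x^T A y on random vectors:

```python
import math
import random

# Candidate inner product <x, y> = x^T A y for a symmetric 2x2 matrix A.
# A = [[1, 2], [2, 5]] has eigenvalues 3 +/- 2*sqrt(2), both positive.
A = [[1.0, 2.0], [2.0, 5.0]]

def ip(x, y):
    """<x, y> = x^T A y, written out entrywise for the 2x2 case."""
    return (A[0][0]*x[0]*y[0] + A[0][1]*x[0]*y[1]
            + A[1][0]*x[1]*y[0] + A[1][1]*x[1]*y[1])

def eigenvalues_2x2_symmetric(M):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]]."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    mean, radius = (a + c) / 2, math.hypot((a - c) / 2, b)
    return mean - radius, mean + radius

lo, hi = eigenvalues_2x2_symmetric(A)
assert lo > 0 and hi > 0  # A has strictly positive eigenvalues

random.seed(0)
for _ in range(100):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    y = [random.uniform(-5, 5), random.uniform(-5, 5)]
    assert abs(ip(x, y) - ip(y, x)) < 1e-9  # symmetry: <x, y> = <y, x>
    assert ip(x, x) > 0                     # positive-definiteness for x != 0
```

A random test like this is no substitute for the proof below, but it makes the two conditions on A (symmetric, positive eigenvalues) concrete.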

However, <v_i, v_i> > 0, hence λ_i v_i^T v_i > 0; but since v_i^T v_i > 0 (v_i^T v_i is the usual dot product of v_i with itself), this implies λ_i > 0, so all the eigenvalues of A are positive.

(⇐) Suppose <x, y> = x^T A y. Then you can check that <,> is linear in each variable. Moreover:

<y, x> = y^T A x = (x^T A^T y)^T = (x^T A y)^T = x^T A y = <x, y>

where the third equality follows from A^T = A and the fourth equality follows because x^T A y is just a scalar. Finally:

<x, x> = x^T A x

Now since A is symmetric, A is normal (you will see that later), and hence there exists an invertible matrix P with P^{-1} = P^T such that A = P D P^T (you will learn that later too; i.e. A is orthogonally diagonalizable), where D is the diagonal matrix of the eigenvalues λ_i of A, and by assumption λ_i > 0 for all i. Then:

<x, x> = x^T A x = x^T P D P^T x = (P^T x)^T D (P^T x) = y^T D y

where y = P^T x. But then if y has components a_1, ..., a_n and you calculate this out, you should get:

<x, x> = y^T D y = λ_1 a_1^2 + ... + λ_n a_n^2 >= 0 (since λ_i > 0)

So <x, x> >= 0. Moreover, if <x, x> = 0, then λ_1 a_1^2 + ... + λ_n a_n^2 = 0, but then a_1 = ... = a_n = 0 (since λ_i > 0), but then y = 0, and so x = (P^T)^{-1} y = (P^T)^T y = P y = P 0 = 0. Hence <,> satisfies all the requirements for an inner product, hence <,> is an inner product!
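The heart of this argument, writing <x, x> = x^T A x as a weighted sum of squares λ_1 a_1^2 + ... + λ_n a_n^2 with y = P^T x, can be checked numerically. Here is a minimal pure-Python sketch for the assumed 2x2 example A = [[1, 2], [2, 5]], building the orthonormal eigenvectors by hand:

```python
import math

# Diagonalize the symmetric A = [[a, b], [b, c]] = P D P^T and verify
# x^T A x = l1*a1^2 + l2*a2^2, where (a1, a2) = P^T x.
a, b, c = 1.0, 2.0, 5.0
mean, radius = (a + c) / 2, math.hypot((a - c) / 2, b)
l1, l2 = mean - radius, mean + radius  # eigenvalues 3 -/+ 2*sqrt(2), both > 0

def unit(v):
    n = math.hypot(v[0], v[1])
    return [v[0] / n, v[1] / n]

# (b, l - a) is an eigenvector of [[a, b], [b, c]] for eigenvalue l (b != 0).
u1 = unit([b, l1 - a])  # first column of P
u2 = unit([b, l2 - a])  # second column of P

x = [0.7, -1.3]                       # any test vector
a1 = u1[0]*x[0] + u1[1]*x[1]          # first entry of y = P^T x
a2 = u2[0]*x[0] + u2[1]*x[1]          # second entry of y = P^T x

xAx = a*x[0]*x[0] + 2*b*x[0]*x[1] + c*x[1]*x[1]   # x^T A x directly
assert abs(xAx - (l1*a1**2 + l2*a2**2)) < 1e-9    # the sum-of-squares identity
assert xAx > 0                                     # hence <x, x> > 0
```

Since λ_1, λ_2 > 0, the sum of squares can only vanish when a_1 = a_2 = 0, which is exactly the definiteness step in the proof.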

3 Inner products on C^n

In fact, a similar proof works for C^n, except that you have to replace all the transposes by adjoints (i.e. replace every T with *). Hence, we get the following result:

Prop: <x, y> is an inner product on C^n if and only if <x, y> = x^* A y, where A is a self-adjoint matrix whose eigenvalues are strictly positive. (Note that A self-adjoint implies that A has only real eigenvalues.)

4 Inner products on finite-dimensional vector spaces

In fact, if V is a finite-dimensional vector space over F, then a version of the above result still holds, using the following trick: let n = dim(V) and let (v_1, ..., v_n) be a basis for V. Here, we will prove the following result, which gives an explicit description of all the inner products on V:

Theorem: <x, y> is an inner product on V if and only if

<x, y> = (M(x))^* A M(y)

where A is a self-adjoint matrix with positive eigenvalues (if V is a vector space over R, replace "self-adjoint" with "symmetric" and * with T), and where M : V -> F^n is the usual coordinate map given by:

M(v) = M(a_1 v_1 + ... + a_n v_n) = [a_1; a_2; ...; a_n]

Example: If V = P_2(R), then the following is an inner product on V:

<a_0 + a_1 x + a_2 x^2, b_0 + b_1 x + b_2 x^2>
= a_0 b_0 + 2 a_0 b_1 + 3 a_0 b_2 + 2 a_1 b_0 + 5 a_1 b_1 + 4 a_1 b_2 + 3 a_2 b_0 + 4 a_2 b_1 + 14 a_2 b_2

Here

A = [1  2  3 ]
    [2  5  4 ]
    [3  4  14]

which is symmetric, and its leading principal minors are 1, 1, 1, so its eigenvalues are strictly positive.
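To see the complex version concretely, here is a small pure-Python sketch. The self-adjoint matrix A below is an assumed example (it equals its conjugate transpose, and its eigenvalues solve t^2 - 5t + 4 = 0, i.e. t = 1 and t = 4, both positive); the code checks conjugate symmetry and positive-definiteness of <x, y> = x^* A y on random complex vectors:

```python
import random

# <x, y> = x^* A y on C^2, with A self-adjoint and positive-definite:
# A = [[2, 1+1j], [1-1j, 3]] has eigenvalues 1 and 4.
A = [[2 + 0j, 1 + 1j], [1 - 1j, 3 + 0j]]

def ip(x, y):
    """<x, y> = x^* A y: the FIRST argument gets conjugated."""
    return sum(x[i].conjugate() * A[i][j] * y[j]
               for i in range(2) for j in range(2))

random.seed(1)
for _ in range(100):
    x = [complex(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(2)]
    y = [complex(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(2)]
    # conjugate symmetry: <y, x> = conjugate of <x, y>
    assert abs(ip(y, x) - ip(x, y).conjugate()) < 1e-9
    # <x, x> is real and strictly positive for x != 0
    assert abs(ip(x, x).imag) < 1e-9 and ip(x, x).real > 0
```

Note that over C we only get conjugate symmetry, not symmetry, which is exactly why the transpose must become an adjoint.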

Proof: (⇐): Check that <x, y> is an inner product on V (this is similar to the proof in section 2).

(⇒): First of all, note from chapter 3 that M is invertible, with inverse:

M^{-1}([a_1; ...; a_n]) = a_1 v_1 + ... + a_n v_n

Now let <,> be an inner product on V.

Lemma: Then (x', y') := <M^{-1}(x'), M^{-1}(y')> is an inner product on F^n.

Proof: The only tricky thing to prove is that (x', x') = 0 implies x' = 0. However:

(x', x') = 0 => <M^{-1}(x'), M^{-1}(x')> = 0 => M^{-1}(x') = 0 => x' = 0

where in the second implication we used that <,> is an inner product on V, and in the third implication we used that M^{-1} is injective.

But since (x', y') is an inner product on F^n, by sections 2 and 3 we get that:

(x', y') = (x')^* A y'

for some self-adjoint (or symmetric) matrix A with only positive eigenvalues. But then it follows that:

<M^{-1}(x'), M^{-1}(y')> = (x')^* A y'

Now let x and y be arbitrary vectors in V. Then we can write x = M^{-1}(M(x)) and y = M^{-1}(M(y)), so:

<x, y> = <M^{-1}(M(x)), M^{-1}(M(y))> = (M(x))^* A M(y)

where in the second equality we used the above result with x' = M(x) and y' = M(y).
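The theorem's recipe for V = P_2(R) can be sketched in a few lines of pure Python: a polynomial p = a_0 + a_1 x + a_2 x^2 has coordinate vector M(p) = [a_0; a_1; a_2], and <p, q> = M(p)^T A M(q) for any symmetric matrix A with positive eigenvalues, such as the assumed example A = [[1, 2, 3], [2, 5, 4], [3, 4, 14]] (its leading principal minors are 1, 1, 1, so it is positive-definite by Sylvester's criterion):

```python
import random

# Inner product on P_2(R) built from the coordinate map M and a
# symmetric positive-definite matrix A: <p, q> = M(p)^T A M(q).
A = [[1, 2, 3], [2, 5, 4], [3, 4, 14]]

def ip(p, q):
    """<p, q> = M(p)^T A M(q), with p, q given as coefficient lists
    [a0, a1, a2] for a0 + a1*x + a2*x^2."""
    return sum(p[i] * A[i][j] * q[j] for i in range(3) for j in range(3))

random.seed(2)
for _ in range(100):
    p = [random.uniform(-4, 4) for _ in range(3)]
    q = [random.uniform(-4, 4) for _ in range(3)]
    assert abs(ip(p, q) - ip(q, p)) < 1e-9  # symmetry
    assert ip(p, p) > 0                     # positive-definiteness for p != 0
```

The same pattern works for any finite-dimensional V: pick a basis, send vectors to their coordinate columns, and pull back a matrix inner product from F^n.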
