22 Matrix exponent. Equal eigenvalues

22. Matrix exponent

Consider a first order differential equation of the form y' = ay, a \in \mathbb{R}, with the initial condition y(0) = y_0. Of course, we know that the solution to this IVP is given by y(t) = e^{at} y_0. However, let us apply the method of iterations to this equation. First note that instead of the differential equation plus the initial condition we can write one integral equation

    y(t) = y_0 + \int_0^t a y(\tau)\, d\tau.

Now we plug y_0(\tau) = y_0 into the right hand side and find the first iteration y_1(t):

    y_1(t) = y_0 + \int_0^t a y_0\, d\tau = y_0 + a y_0 t = (1 + at) y_0.

Plugging this into the right hand side again gives the second iteration:

    y_2(t) = y_0 + \int_0^t a y_1(\tau)\, d\tau = \left(1 + at + \frac{a^2 t^2}{2}\right) y_0.

In general we find

    y_n(t) = y_0 + \int_0^t a y_{n-1}(\tau)\, d\tau = \left(1 + \frac{at}{1!} + \frac{a^2 t^2}{2!} + \dots + \frac{a^n t^n}{n!}\right) y_0.

You should recognize inside the parentheses the partial sums of the Taylor series for e^{at}; hence we recover again our familiar solution y(t) = e^{at} y_0.

So what is the point of these iterations? Let us do the same trick with a system:

    \dot{y} = A y, \quad y(0) = y_0.    (1)

Instead of (1) we can write the integral equation

    y(t) = y_0 + \int_0^t A y(\tau)\, d\tau,

MATH266: Intro to ODE by Artem Novozhilov, Fall 2013
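Since the n-th iterate is exactly the n-th partial sum of the Taylor series of e^{at}, the scalar iteration is easy to check numerically. The following Python sketch is my own illustration, not part of the notes; it evaluates the closed form of y_n(t) and compares it with the exact solution:

```python
import math

def picard_iterate(a, y0, t, n):
    """n-th Picard iterate for y' = a*y, y(0) = y0, evaluated at time t.
    Each iteration appends one more Taylor term, so
    y_n(t) = (1 + at + ... + (at)^n / n!) * y0."""
    return sum((a * t) ** k / math.factorial(k) for k in range(n + 1)) * y0

a, y0, t = 0.5, 2.0, 1.0
approx = picard_iterate(a, y0, t, 20)   # 20 iterations
exact = math.exp(a * t) * y0            # the known solution e^{at} y0
```

For moderate |at| a handful of iterations already matches e^{at} y_0 to machine precision.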

where the integral of a vector is understood componentwise. We plug the right-hand side y_0 in and find the first iteration

    y_1(t) = y_0 + t A y_0 = (I + At) y_0.

Similarly to the previous case, we find

    y_n(t) = \left(I + At + \frac{A^2 t^2}{2!} + \dots + \frac{A^n t^n}{n!}\right) y_0.

The expression in the parentheses is a sum of n \times n matrices, and hence a matrix itself. Therefore it is natural to define a matrix, which is called the matrix exponent, as the infinite sum

    e^{At} = \exp(At) := I + At + \frac{A^2 t^2}{2!} + \dots + \frac{A^n t^n}{n!} + \dots

Note that we can include the scalar t in the matrix A.

Definition 1. The matrix exponent e^A of A is the series

    e^A = \exp A := I + A + \frac{A^2}{2!} + \dots + \frac{A^n}{n!} + \dots    (2)

To make sure that the definition makes sense we need to specify what we understand under an infinite series of matrices. I will skip this point here and just mention that series (2) converges absolutely for any matrix A, which allows us to multiply this series by another matrix, differentiate it term by term, or integrate it if necessary.

The matrix exponent has a lot of properties similar to those of the usual exponent. Here are the ones that I will need in the following:

1. As I already mentioned, series (2) converges absolutely, which means that there is a well defined limit of the partial sums of this series.

2. \frac{d}{dt} e^{At} = A e^{At} = e^{At} A. This property can be proved by term by term differentiation and factoring out A (left as an exercise). Note here that both A and e^{At} are n \times n matrices, and it is not obvious that A e^{At} = e^{At} A. Matrices for which AB = BA are called commuting.

3. If A and B commute, then e^{A+B} = e^A e^B. In particular, A and B commute if one of them is a scalar matrix, i.e., of the form \lambda I.

4. e^{\lambda I t} v = e^{\lambda t} v for any \lambda \in \mathbb{R} and v \in \mathbb{R}^n. The proof follows from the definition.
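Definition 1 translates directly into a numerical sketch: truncate series (2) after enough terms and test property 3 on a commuting pair, taking B to be the scalar matrix \lambda I. The 2 \times 2 matrix below is an arbitrary example of mine, not one from the notes:

```python
import numpy as np

def expm_series(A, n_terms=30):
    """Approximate e^A by the partial sum I + A + A^2/2! + ... of series (2)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ A / k            # now term == A^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])           # rotation generator: e^A is a rotation
lam = 0.7

# Property 3 with B = lam*I, which commutes with every A:
# e^{A + lam I} = e^{lam} e^{A}.
lhs = expm_series(A + lam * np.eye(2))
rhs = np.exp(lam) * expm_series(A)
```

For this particular A the exponent is also known in closed form, e^A = [[cos 1, sin 1], [-sin 1, cos 1]], which gives an independent check of the truncated series.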

Before using the matrix exponent to solve problems with equal eigenvalues, I would like to state the fundamental theorem for linear first order homogeneous ODE with constant coefficients:

Theorem 2. Consider problem (1). Then this problem has a unique solution y(t) = e^{At} y_0. Moreover, for any vector v \in \mathbb{R}^n, y(t) = e^{At} v is a solution to the system \dot{y} = A y.

Dealing with equal eigenvalues

It is important to note that the matrix exponent is not that easy to calculate for each particular example. However, the expression e^{At} v can be easily calculated for some special vectors v without the knowledge of the explicit form of e^{At}.

Example 3. For an eigenvector v with eigenvalue \lambda we have that e^{At} v = e^{\lambda t} v. To show this, write At = \lambda I t + (At - \lambda I t); then

    e^{At} v = e^{\lambda I t + (A - \lambda I)t} v
             = e^{\lambda I t} e^{(A - \lambda I)t} v                                        (by property 3)
             = e^{\lambda t} e^{(A - \lambda I)t} v                                          (by property 4)
             = e^{\lambda t} \left(I + (A - \lambda I)t + \frac{(A - \lambda I)^2 t^2}{2!} + \dots\right) v    (by definition)
             = e^{\lambda t} \left(I v + t(A - \lambda I) v + \frac{t^2}{2!}(A - \lambda I)^2 v + \dots\right)
             = e^{\lambda t} (I v) = e^{\lambda t} v,                                        (by properties of eigenvectors)

since (A - \lambda I) v = 0 for an eigenvector. We actually found exactly those solutions to the system \dot{y} = A y that can be written down using the distinct eigenvalues.

Definition 4. A nonzero vector v is called a generalized eigenvector of the matrix A associated with the eigenvalue \lambda of algebraic multiplicity k if

    (A - \lambda I)^k v = 0.

Now assume that the vector v is a generalized eigenvector with k = 2. Exactly as in the last example, we find

    e^{At} v = e^{\lambda I t + (A - \lambda I)t} v
             = e^{\lambda I t} e^{(A - \lambda I)t} v                                        (by property 3)
             = e^{\lambda t} e^{t(A - \lambda I)} v                                          (by property 4)
             = e^{\lambda t} \left(I + t(A - \lambda I) + \frac{t^2}{2!}(A - \lambda I)^2 + \frac{t^3}{3!}(A - \lambda I)^3 + \dots\right) v    (by definition)
             = e^{\lambda t} \left(I v + t(A - \lambda I) v + \frac{t^2}{2!}(A - \lambda I)^2 v + \frac{t^3}{3!}(A - \lambda I)^3 v + \dots\right)
             = e^{\lambda t} \left(I v + t(A - \lambda I) v\right)                           (by properties of generalized eigenvectors)
             = e^{\lambda t} \left(I + t(A - \lambda I)\right) v,

since (A - \lambda I)^2 v = 0 kills all terms of order t^2 and higher.
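Both computations, Example 3 and the k = 2 case, can be verified numerically. The 2 \times 2 matrix below is a hypothetical Jordan block of my own choosing (double eigenvalue \lambda = 3 with only one ordinary eigenvector); a truncated series stands in for e^{At}:

```python
import numpy as np

def expm_series(M, n_terms=40):
    """Partial sums of the matrix-exponent series for e^M."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, n_terms):
        term = term @ M / k
        out = out + term
    return out

lam, t = 3.0, 0.4
A = np.array([[lam, 1.0],
              [0.0, lam]])           # Jordan block: eigenvalue lam twice
v = np.array([1.0, 0.0])             # ordinary eigenvector: (A - lam I) v = 0
u = np.array([0.0, 1.0])             # generalized, k = 2: (A - lam I)^2 u = 0

eAt = expm_series(A * t)
# Example 3: e^{At} v = e^{lam t} v.
ev_check = np.exp(lam * t) * v
# k = 2 case: e^{At} u = e^{lam t} (I + t (A - lam I)) u.
gen_check = np.exp(lam * t) * (np.eye(2) + t * (A - lam * np.eye(2))) @ u
```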

Hence we found that for a generalized eigenvector v with k = 2 the solution to our system can be taken as

    y(t) = e^{\lambda t} \left(I + t(A - \lambda I)\right) v,

which does not require much computation. The only remaining question is whether we are always able to find enough linearly independent generalized eigenvectors for a given matrix. The answer is positive. Hence we obtain an algorithm for matrices with equal eigenvalues:

Assume that we have a real eigenvalue \lambda_i of multiplicity 2 and we found only one linearly independent eigenvector v_i corresponding to this eigenvalue (if we are able to find two, the problem is solved). Then the first particular solution is given, as before, by

    y_i(t) = v_i e^{\lambda_i t}.

To find a second particular solution accounting for this multiplicity we look for a generalized eigenvector that solves the equation

    (A - \lambda_i I)^2 u_i = 0.

Note that we are looking for a u_i such that the previous equation holds and (A - \lambda_i I) u_i \ne 0. We can always find such a solution u_i, which is linearly independent of v_i. In this case the second particular solution is given by

    y_{i+1}(t) = e^{\lambda_i t} \left(I + (A - \lambda_i I)t\right) u_i.

This case can be generalized to the case when the multiplicity of an eigenvalue is bigger than 2 (see an example below) and to complex conjugate eigenvalues of multiplicity two and higher (we will not need this case for the quizzes and exams).

Example 5. Find the general solution to \dot{y} = A y, where A is a given 3 \times 3 matrix [its entries did not survive the transcription]. The eigenvalues are \lambda_1 = 2 and \lambda_2 = 1 (multiplicity 2). An eigenvector for \lambda_1 can be taken as v_1; for \lambda_2 we find a single eigenvector v_2,
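The algorithm above can be sketched in code. Since the matrix of Example 5 did not survive the transcription, the sketch uses a stand-in 3 \times 3 matrix of my own with the same spectral structure (eigenvalues 2 and 1, the latter of multiplicity 2 with a single eigenvector); the generalized eigenvector is taken from the null space of (A - \lambda I)^2:

```python
import numpy as np

# Stand-in matrix (my choice, not from the notes): eigenvalues 2 and 1,
# where lam = 1 has algebraic multiplicity 2 but only one eigenvector.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
lam = 1.0
B = A - lam * np.eye(3)

# Solution basis of (A - lam I)^2 u = 0: the rows of Vt belonging to the
# (near-)zero singular values of B^2 span its null space.
_, s, Vt = np.linalg.svd(B @ B)
null_basis = Vt[s < 1e-10]

# A generalized eigenvector u must also satisfy (A - lam I) u != 0.
u = next(w for w in null_basis if np.linalg.norm(B @ w) > 1e-8)

def y(t):
    """Second particular solution y(t) = e^{lam t} (I + t (A - lam I)) u."""
    return np.exp(lam * t) * (np.eye(3) + t * B) @ u

# Check y' = A y at one time point with a central finite difference.
t, h = 0.3, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
```

The finite-difference comparison of deriv against A y(t) confirms that the formula really solves the system.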

and we are short one more linearly independent solution to form a basis for the solution set. Consider

    (A - \lambda_2 I)^2 u = 0,

which has solutions u_1 and u_2 [the explicit vectors were lost in the transcription]. The first one is exactly v_2, therefore we keep only u_2. Finally, one finds that

    y_3(t) = e^{t} \left(I + (A - \lambda_2 I)t\right) u_2,

and the general solution is

    y(t) = C_1 v_1 e^{2t} + C_2 v_2 e^{t} + C_3 y_3(t).

Example 6. Solve the IVP \dot{y} = A y with a given initial condition y(0), where A is a 3 \times 3 matrix [its entries were lost in the transcription]. I find that \lambda = 2 is the only eigenvalue, of multiplicity 3. Its eigenvector is v, and a first linearly independent solution is given by

    y_1(t) = v e^{2t}.

To find two more linearly independent solutions we need to look for generalized eigenvectors. Consider first

    (A - 2I)^2 u = 0,

which has two vectors u_1 and u_2 as a solution basis.
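The transcription of Example 6 breaks off here, but the multiplicity-3 pattern it illustrates is easy to state: for a generalized eigenvector u with (A - \lambda I)^3 u = 0 the exponent series truncates after the t^2 term. The sketch below uses an assumed Jordan-block matrix with the single eigenvalue 2 of multiplicity 3, not the matrix from the notes:

```python
import numpy as np

lam = 2.0
# Assumed stand-in for Example 6: single eigenvalue lam of multiplicity 3.
A = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])
B = A - lam * np.eye(3)              # nilpotent here: B^3 = 0

# Generalized eigenvector with (A - lam I)^3 u = 0 but (A - lam I)^2 u != 0.
u = np.array([0.0, 0.0, 1.0])

def y(t):
    """y(t) = e^{lam t} (I + t B + t^2/2 B^2) u: the exponent series
    truncated by B^3 u = 0."""
    return np.exp(lam * t) * (np.eye(3) + t * B + t**2 / 2.0 * (B @ B)) @ u

# Verify y' = A y with a central finite difference at one time point.
t, h = 0.2, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
```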
