A note on companion matrices


Linear Algebra and its Applications 372 (2003) 325-331

Miroslav Fiedler
Academy of Sciences of the Czech Republic, Institute of Computer Science, Pod vodárenskou věží, Praha 8, Czech Republic

Received 8 October 2002; accepted 2 April 2003
Submitted by H. Schneider

Abstract

We show that the usual companion matrix of a polynomial of degree n can be factored into a product of n matrices, n - 1 of them being the identity matrix in which a 2 × 2 identity submatrix in two consecutive rows (and columns) is replaced by an appropriate 2 × 2 matrix, the remaining one being the identity matrix with the last entry replaced by a possibly different entry. By a certain similarity transformation we obtain a simple new companion matrix in a pentadiagonal form. Some generalizations are also possible. © 2003 Elsevier Inc. All rights reserved.

AMS classification: 15A23; 15A57; 65F15

Keywords: Companion matrix; Characteristic polynomial; Pentadiagonal matrix; Zeros of polynomials

1. Introduction

Let

    p(x) = x^n + a_1 x^{n-1} + ... + a_{n-1} x + a_n    (1)

be a polynomial with coefficients over an arbitrary field. As is well known, the matrix

    A = \begin{pmatrix}
          -a_1 & -a_2 & \cdots & -a_{n-1} & -a_n \\
           1   &  0   & \cdots &  0       &  0   \\
               &  1   & \ddots &          &      \\
               &      & \ddots &  0       &  0   \\
               &      &        &  1       &  0
        \end{pmatrix}    (2)

has the property that det(xI - A) = p(x). The matrix A, or some of its modifications, is called a companion matrix of the polynomial p(x), since its characteristic polynomial is p(x). In the sequel we find another simple matrix Â which is similar to A and thus has the same property with respect to the polynomial p(x).

2. Results

We start with a simple lemma.

Lemma 2.1. For k = 1, ..., n - 1, denote by A_k the matrix

    A_k = \begin{pmatrix} I_{k-1} & & \\ & C_k & \\ & & I_{n-k-1} \end{pmatrix},    (3)

where C_k is the 2 × 2 matrix

    C_k = \begin{pmatrix} -a_k & 1 \\ 1 & 0 \end{pmatrix},    (4)

and by A_n the matrix

    A_n = diag{1, ..., 1, -a_n}.    (5)

Then A = A_1 A_2 ... A_{n-1} A_n.

Proof. Follows easily by induction: the partial product A_1 A_2 ... A_{k-1} is the matrix whose first row is (-a_1, ..., -a_{k-1}, 1, 0, ..., 0), whose rows 2, ..., k form the block (I_{k-1}  0), and whose last n - k rows form the block (0  I_{n-k}). Multiplying this matrix by A_k on the right replaces the entry 1 in the first row by the pair -a_k, 1 in positions k, k + 1 and extends the subdiagonal identity block by one row; the final multiplication by A_n from (5) replaces the last entry 1 of the first row by -a_n and yields A. □

Lemma 2.2. All the matrices obtained as products A_{i_1} ... A_{i_n}, for some permutation (i_1, ..., i_n) of (1, ..., n), have the same spectrum, including multiplicities. They are even all similar.
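Lemma 2.1 is easy to check numerically before proceeding. A minimal sketch in Python/NumPy (the helper name `factor` and the sample coefficients are ours; the lemma itself holds over an arbitrary field, while the check below assumes real coefficients):

```python
import numpy as np

def factor(a, k):
    """A_k of Lemma 2.1: the identity with C_k = [[-a_k, 1], [1, 0]] placed in
    rows/columns k, k+1 (1-based); for k = n, A_n = diag(1, ..., 1, -a_n)."""
    n = len(a)
    M = np.eye(n)
    if k < n:
        M[k-1:k+1, k-1:k+1] = [[-a[k-1], 1.0], [1.0, 0.0]]
    else:
        M[n-1, n-1] = -a[n-1]
    return M

a = [2.0, -3.0, 1.0, 4.0, -5.0]   # p(x) = x^5 + 2x^4 - 3x^3 + x^2 + 4x - 5
n = len(a)

A = np.zeros((n, n))              # the usual companion matrix (2)
A[0, :] = [-c for c in a]
A[1:, :n-1] = np.eye(n - 1)

prod = np.eye(n)
for k in range(1, n + 1):         # A = A_1 A_2 ... A_{n-1} A_n
    prod = prod @ factor(a, k)

assert np.allclose(prod, A)                 # Lemma 2.1
assert np.allclose(np.poly(A), [1.0] + a)   # det(xI - A) = p(x)
```

The factored form is what makes the rest of the paper work: each factor touches only two consecutive rows and columns, so the factors can be reordered and regrouped.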

Proof. Clearly A_i A_k = A_k A_i if |i - k| > 1. This allows us to bring every such product A_{i_1} ... A_{i_n} to the form

    (A_{j_1} A_{j_1 - 1} ... A_1)(A_{j_2} A_{j_2 - 1} ... A_{j_1 + 1}) ... (A_n A_{n-1} ... A_{j_s + 1})

with the property that in both permutations (i_1, ..., i_n) and (j_1, j_1 - 1, ..., 1, j_2, j_2 - 1, ..., j_1 + 1, ..., n, n - 1, ..., j_s + 1) each pair (k, k + 1) is ordered in the same way: either positively, i.e. k precedes k + 1, or negatively, i.e. k + 1 precedes k. Moreover, by a well-known theorem [4, Theorem 1.3.20], if A and B are square matrices then AB and BA have the same spectrum, including multiplicities; these matrices are even similar if one of the matrices A, B is non-singular. This allows us to rotate the permutations which determine the product, without changing the spectrum, in the sense that the permutation (i_1, ..., i_k, i_{k+1}, ..., i_n) can be replaced by (i_{k+1}, ..., i_n, i_1, ..., i_k). It thus suffices to prove that the matrix corresponding to any permutation can be obtained by these operations from the matrix A(1, 2, ..., n)_n; here we denote by A(i_1, ..., i_n)_n the matrix A_{i_1} A_{i_2} ... A_{i_n}, and the subscript n means the number of elements in the permutation. Observe that all the resulting matrices are similar, since at most the matrix A_n from (5) can be singular.

We prove the assertion by induction with respect to n. It is immediate that for n = 2 and n = 3 the assertion is true. Let thus n > 3, suppose the assertion is true for n - 1, and let (i_1, ..., i_n) be a permutation. We distinguish two cases.

Case 1. The pair (1, 2) is positive. Since A_1 commutes with every A_j for j > 2, we can move the element 1 to the right until it hits 2 and then treat the pair (1, 2) as a single element: denoting it by 1 and diminishing all the remaining indices by one, we obtain a permutation of (1, ..., n - 1). By the induction hypothesis, this permutation can be obtained by the operations above from (1, 2, ..., n - 1)_{n-1}, which corresponds to (1, 2, 3, ..., n)_n with the pair (1, 2) put together. Going back, we reconstruct the chain of operations which brings the permutation (1, 2, ..., n)_n into (i_1, ..., i_n).

Case 2. The pair (1, 2) is negative. By rotation we can arrange that 1 will be the first element in the permutation. The pair (1, 2) will then be positive, and by Case 1 the assertion is correct. □

Theorem 2.3. All matrices A_{i_1} ... A_{i_n}, for any permutation (i_1, ..., i_n) of (1, ..., n), are companion matrices of p(x) and are similar to the matrix (2). In particular, this holds for the matrix Â = BC, where B is the matrix A_1 A_3 A_5 ... and C is the matrix A_2 A_4 ..., the A_i being the matrices from (3). The matrix B is the direct sum of the matrices C_1, C_3, etc.; the matrix C is the direct sum of the 1 × 1 identity matrix and the matrices C_2, C_4, etc. from (4). For n even

the matrix C ends with the 1 × 1 block (-a_n); for n odd, B ends with the block (-a_n). The matrix Â is pentadiagonal and contains the same entries as the usual companion matrix (2).

Proof. The first part follows from Lemma 2.2, since by Lemma 2.1 the matrix A_1 A_2 ... A_n is the usual companion matrix. The last assertion follows from the fact that both matrices B and C are tridiagonal and from the comment in the following example. □

Example 2.4. Let us present explicitly the matrices B, C and BC for n = 5 and n = 6 (the even and odd cases differ slightly).

For n = 5,

    B = \begin{pmatrix} -a_1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -a_3 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -a_5 \end{pmatrix}, \qquad
    C = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & -a_2 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -a_4 & 1 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix},

    BC = \begin{pmatrix} -a_1 & -a_2 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & -a_3 & 0 & -a_4 & 1 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -a_5 & 0 \end{pmatrix}.

For n = 6,

    B = \begin{pmatrix} -a_1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -a_3 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -a_5 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}, \qquad
    C = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -a_2 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -a_4 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -a_6 \end{pmatrix},

    BC = \begin{pmatrix} -a_1 & -a_2 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & -a_3 & 0 & -a_4 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -a_5 & 0 & -a_6 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix}.
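The example, together with Theorem 2.3, can be confirmed numerically. A sketch in Python/NumPy (sample coefficients of our choosing; the helper name `factor` is ours) builds B, C and BC for n = 5, checks the pentadiagonal pattern, and verifies that the product over every permutation has characteristic polynomial p(x):

```python
import numpy as np
from itertools import permutations

a = [1.0, 2.0, 3.0, 4.0, 5.0]     # p(x) = x^5 + x^4 + 2x^3 + 3x^2 + 4x + 5

def factor(k):
    """A_k of Lemma 2.1 for n = 5 (C_k at rows/cols k, k+1; A_5 = diag(1, ..., 1, -a_5))."""
    M = np.eye(5)
    if k < 5:
        M[k-1:k+1, k-1:k+1] = [[-a[k-1], 1.0], [1.0, 0.0]]
    else:
        M[4, 4] = -a[4]
    return M

B = factor(1) @ factor(3) @ factor(5)   # direct sum of C_1, C_3 and (-a_5)
C = factor(2) @ factor(4)               # direct sum of (1), C_2 and C_4
Ahat = B @ C
assert all(Ahat[i, j] == 0              # Â is pentadiagonal
           for i in range(5) for j in range(5) if abs(i - j) > 2)

for perm in permutations(range(1, 6)):  # Theorem 2.3: every ordering works
    P = np.eye(5)
    for k in perm:
        P = P @ factor(k)
    assert np.allclose(np.poly(P), [1.0] + a)
```

Comparing characteristic polynomials (rather than sorted eigenvalue lists) sidesteps any ordering issues in the numerical spectra.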

We see that, and this is true in general, the first two rows of BC contain non-zero entries in the first three columns only:

    \begin{pmatrix} -a_1 & -a_2 & 1 \\ 1 & 0 & 0 \end{pmatrix};

the following pairs of rows 2j - 1 and 2j, j = 2, 3, ..., contain non-zero entries only in the four columns with indices 2j - 2, ..., 2j + 1, and the submatrices

    \begin{pmatrix} -a_{2j-1} & 0 & -a_{2j} & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix}

in these rows and columns contain two entries -a_i and two ones; the remaining entries are zero. The last two rows in the even case contain non-zeros only in the last three columns:

    \begin{pmatrix} -a_{n-1} & 0 & -a_n \\ 1 & 0 & 0 \end{pmatrix};

the last row in the odd case is as in the example above: (0, ..., 0, -a_n, 0).

Remark 2.5. The matrix Â from Theorem 2.3 can be transformed by a permutational similarity, starting with the odd rows and columns and continuing with the even rows and columns, to the block form

    \begin{pmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{pmatrix},

where

    Z_{11} = \begin{pmatrix} -a_1 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix}, \qquad
    Z_{12} = \begin{pmatrix} -a_2 & & & \\ -a_3 & -a_4 & & \\ & -a_5 & -a_6 & \\ & & \ddots & \ddots \end{pmatrix},

Z_{21} has the single non-zero entry 1 in position (1, 1), and Z_{22} is the lower shift matrix (ones on the subdiagonal, zeros elsewhere). If n is even, Z_{12} and Z_{21} are square. If n is odd, Z_{12} is of size (n + 1)/2 × (n - 1)/2.
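The block form of Remark 2.5 can be checked directly for n = 5 (Python/NumPy; the coefficient values are arbitrary sample values):

```python
import numpy as np

a1, a2, a3, a4, a5 = 1.0, 2.0, 3.0, 4.0, 5.0
Ahat = np.array([[-a1, -a2, 1, 0, 0],       # Â for n = 5, as in Example 2.4
                 [1, 0, 0, 0, 0],
                 [0, -a3, 0, -a4, 1],
                 [0, 1, 0, 0, 0],
                 [0, 0, 0, -a5, 0]])
order = [0, 2, 4, 1, 3]                     # odd rows/columns first: 1, 3, 5, 2, 4
M = Ahat[np.ix_(order, order)]              # permutational similarity P Â P^T

assert np.array_equal(M[:3, :3], [[-a1, 1, 0], [0, 0, 1], [0, 0, 0]])   # Z_11
assert np.array_equal(M[:3, 3:], [[-a2, 0], [-a3, -a4], [0, -a5]])      # Z_12
assert np.array_equal(M[3:, :3], [[1, 0, 0], [0, 0, 0]])                # Z_21
assert np.array_equal(M[3:, 3:], [[0, 0], [1, 0]])                      # Z_22
```

Since n = 5 is odd, Z_12 here is of size 3 × 2 = (n + 1)/2 × (n - 1)/2, as stated.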

Using this lemma, one can find explicitly the moduli |B| and |C| of the matrices B and C in Theorem 2.3. The singular values of BC are the eigenvalues of |B| |C|; these can thus be obtained as the eigenvalues of the symmetric heptadiagonal matrix |B|^{1/2} |C| |B|^{1/2} and used for estimation of the roots.

In our opinion, a particularly interesting feature of the matrices B and C from Theorem 2.3 is the following: if g(x) and h(x) are polynomials for which p(x) = g(x^2) + x h(x^2), then for n odd B depends on the coefficients of g only, whereas C depends on the coefficients of h. For n even it is the other way round.

It is also easy to find explicitly the QR-, RQ-, etc. decompositions of both matrices B and C from Theorem 2.3 and use them for manipulation. As a sample, if B = Q_1 R_1 and C = R_2 Q_2, then (Q_2 Q_1)(R_1 R_2) is the QR-decomposition of another companion matrix, which is a banded matrix with a small number of bands. For n = 5 one gets

    Q = \begin{pmatrix} -a_1 w_1 & w_1 & 0 & 0 & 0 \\ 0 & 0 & -a_3 w_3 & w_3 & 0 \\ w_1 & a_1 w_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & w_3 & a_3 w_3 & 0 \end{pmatrix}, \qquad
    R = \begin{pmatrix} w_1^{-1} & -a_1 w_1 & a_1 a_2 w_1 & 0 & 0 \\ 0 & w_1 & -a_2 w_1 & 0 & 0 \\ 0 & 0 & w_3^{-1} & -a_3 w_3 & a_3 a_4 w_3 \\ 0 & 0 & 0 & w_3 & -a_4 w_3 \\ 0 & 0 & 0 & 0 & -a_5 \end{pmatrix},

where w_k is set as (a_k^2 + 1)^{-1/2} for k = 1, 3.

Observe also that some of the matrices considered above have superdiagonal rank (sometimes subdiagonal rank) one. Here, as in [3], we call the subdiagonal rank (respectively, superdiagonal rank) of a square matrix the order of the maximal nonsingular submatrix all of whose entries are in the subdiagonal (respectively, superdiagonal) part. In fact, in the matrix R the superdiagonal rank is one even if we add to the superdiagonal part all the even diagonal positions (in the sense of [3]). In some of the cases mentioned a similar property holds for the subdiagonal or superdiagonal rank, too.

References

[1] S. Barnett, Congenial matrices, Linear Algebra Appl. 41 (1981).
[2] L. Brand, Applications of the companion matrix, Amer. Math. Monthly 75 (1968).
[3] M. Fiedler, Structure ranks of matrices, Linear Algebra Appl. 179 (1993).
[4] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[5] H. Linden, Bounds for the zeros of polynomials from eigenvalues and singular values of some companion matrices, Linear Algebra Appl. 271 (1998) 41-82.
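A numerical footnote to the singular-value remark above: for any square matrix, the moduli of its eigenvalues lie between its smallest and largest singular values, so the singular values of the companion matrix BC already bound the moduli of the zeros of p(x), in the spirit of [5]. A minimal sketch in Python/NumPy (this uses only the generic two-sided singular-value bound, not the heptadiagonal construction described in the text):

```python
import numpy as np

a = [1.0, 2.0, 3.0, 4.0, 5.0]     # p(x) = x^5 + x^4 + 2x^3 + 3x^2 + 4x + 5
B = np.zeros((5, 5))              # B = A_1 A_3 A_5: direct sum of C_1, C_3, (-a_5)
B[0:2, 0:2] = [[-a[0], 1], [1, 0]]
B[2:4, 2:4] = [[-a[2], 1], [1, 0]]
B[4, 4] = -a[4]
C = np.eye(5)                     # C = A_2 A_4: direct sum of (1), C_2, C_4
C[1:3, 1:3] = [[-a[1], 1], [1, 0]]
C[3:5, 3:5] = [[-a[3], 1], [1, 0]]

s = np.linalg.svd(B @ C, compute_uv=False)   # singular values, descending
r = np.abs(np.roots([1.0] + a))              # moduli of the zeros of p
assert np.all(r <= s.max() + 1e-9)
assert np.all(r >= s.min() - 1e-9)
```

Because Â = BC is banded and sparse, its singular values are cheaper to estimate than those of the full first-row companion matrix (2).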
