Joint spectral radius

In mathematics, the joint spectral radius is a generalization of the classical notion of spectral radius of a matrix, to sets of matrices. In recent years this notion has found applications in a large number of engineering fields and is still a topic of active research.


The joint spectral radius of a set of matrices is the maximal asymptotic growth rate of products of matrices taken from that set. For a finite (or, more generally, compact) set of matrices $\mathcal{M}=\{A_1,\dots,A_m\}\subset\mathbb{R}^{n\times n}$, the joint spectral radius is defined as

$$\rho(\mathcal{M}) \;=\; \lim_{k\to\infty}\,\max\left\{ \|A_{i_1}\cdots A_{i_k}\|^{1/k} : A_{i_j}\in\mathcal{M} \right\}.$$

It can be proved that the limit exists and that the quantity does not depend on the chosen matrix norm (this is true for any norm, but is particularly easy to see when the norm is sub-multiplicative). The joint spectral radius was introduced in 1960 by Gian-Carlo Rota and Gilbert Strang,[1] two mathematicians from MIT, but it started attracting attention with the work of Ingrid Daubechies and Jeffrey Lagarias.[2] They showed that the joint spectral radius can be used to describe smoothness properties of certain wavelet functions.[3] A wide range of applications has been proposed since then. The joint spectral radius is known to be NP-hard to compute or even to approximate, already when the set $\mathcal{M}$ consists of only two matrices whose nonzero entries are constrained to be equal.[4] Moreover, the question "$\rho\leq 1$?" is an undecidable problem.[5] Nevertheless, much progress has been made in recent years on its understanding: in practice the joint spectral radius can often be computed to satisfactory precision, and it brings useful insight into engineering and mathematical problems.
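As an illustration (not part of the original text), the definition suggests simple brute-force bounds from products of a fixed length $t$: the maximum of $\rho(P)^{1/t}$ over all length-$t$ products $P$ is a lower bound on the joint spectral radius, and the maximum of $\|P\|^{1/t}$ is an upper bound. The sketch below uses a hypothetical helper name; the example pair is a classic one whose joint spectral radius is known to be the golden ratio.

```python
import itertools
import functools
import numpy as np

def jsr_bounds(matrices, t):
    """Brute-force t-step bounds on the joint spectral radius:
    max spectral radius^(1/t)  <=  rho(M)  <=  max operator norm^(1/t),
    maximized over all m^t products of length t."""
    lower = upper = 0.0
    for combo in itertools.product(matrices, repeat=t):
        P = functools.reduce(np.matmul, combo)
        lower = max(lower, max(abs(np.linalg.eigvals(P))) ** (1.0 / t))
        upper = max(upper, np.linalg.norm(P, 2) ** (1.0 / t))
    return lower, upper

# Classic example pair: its joint spectral radius is the golden ratio,
# attained by the length-2 product A1 @ A2.
A1 = np.array([[1.0, 1.0], [0.0, 1.0]])
A2 = np.array([[1.0, 0.0], [1.0, 1.0]])
lower, upper = jsr_bounds([A1, A2], 6)
```

The gap between the two bounds shrinks as $t$ grows, but the number of products to examine grows exponentially, which is one reason more refined algorithms are needed.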

In spite of the negative theoretical results on the computability of the joint spectral radius, methods have been proposed that perform well in practice. Algorithms are even known that can reach arbitrary accuracy in an a priori computable amount of time. These algorithms can be seen as trying to approximate the unit ball of a particular vector norm, called the extremal norm.[6] One generally distinguishes two families of such algorithms: the first family, the polytope norm methods, constructs the extremal norm by computing long trajectories of points.[7][8] An advantage of these methods is that, in favorable cases, they can find the exact value of the joint spectral radius and provide a certificate that this value is exact.
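The scaling idea behind such methods can be probed numerically (a minimal sketch, not any published algorithm; the helper name and thresholds are hypothetical): if every matrix is divided by a candidate value $\gamma$ larger than the joint spectral radius, all products of the scaled matrices stay bounded, whereas for $\gamma$ below it some products grow without bound. Boundedness up to a finite length is only evidence, not a proof.

```python
import itertools
import functools
import numpy as np

def products_bounded(matrices, gamma, max_len=8, bound=10.0):
    """Heuristic probe (hypothetical helper): check whether every product of
    the scaled matrices A_i / gamma, up to length max_len, stays below
    `bound` in spectral norm."""
    scaled = [A / gamma for A in matrices]
    for t in range(1, max_len + 1):
        for combo in itertools.product(scaled, repeat=t):
            if np.linalg.norm(functools.reduce(np.matmul, combo), 2) > bound:
                return False
    return True

# Example pair with joint spectral radius equal to the golden ratio (~1.618):
A1 = np.array([[1.0, 1.0], [0.0, 1.0]])
A2 = np.array([[1.0, 0.0], [1.0, 1.0]])
ok_at_2 = products_bounded([A1, A2], 2.0)  # 2 exceeds the growth rate
ok_at_1 = products_bounded([A1, A2], 1.0)  # 1 is below it: products blow up
```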

A natural question is whether the joint spectral radius is always attained by a finite product. The finiteness conjecture asserted that, for any finite set of matrices $\mathcal{M}$, there exist a length $t$ and matrices $A_1,\dots,A_t\in\mathcal{M}$ such that

$$\rho(\mathcal{M}) = \rho(A_1\cdots A_t)^{1/t}.$$

In the above equation "$\rho(A_1\cdots A_t)$" refers to the classical spectral radius of the matrix $A_1\cdots A_t$.
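Sets satisfying the finiteness property do exist. For instance, for the classic pair below the length-2 product is known to attain the joint spectral radius, whose value is the golden ratio; a small numerical check of the defining identity, under that assumption:

```python
import numpy as np

A1 = np.array([[1.0, 1.0], [0.0, 1.0]])
A2 = np.array([[1.0, 0.0], [1.0, 1.0]])

# Finiteness property for this pair: rho({A1, A2}) = rho(A1 @ A2)^(1/2),
# and that value equals the golden ratio (1 + sqrt(5)) / 2.
P = A1 @ A2                             # [[2., 1.], [1., 1.]]
rho_P = max(abs(np.linalg.eigvals(P)))  # spectral radius (3 + sqrt(5)) / 2
candidate = rho_P ** 0.5
```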

This conjecture, proposed in 1995, was proved false in 2003.[15] The counterexample provided in that reference uses advanced measure-theoretic ideas. Subsequently, many other counterexamples have been provided, including an elementary counterexample that uses simple combinatorial properties of matrices[16] and a counterexample based on dynamical systems properties.[17] More recently, an explicit counterexample has been proposed in [18]. Many questions related to this conjecture are still open, for instance whether it holds for pairs of binary matrices.[19][20]

The joint spectral radius is the generalization of the spectral radius of a matrix to a set of several matrices. However, many more quantities can be defined when considering a set of matrices: the joint spectral subradius characterizes the minimal rate of growth of products in the semigroup generated by $\mathcal{M}$; the p-radius characterizes the rate of growth of the $L_p$ average of the norms of the products in the semigroup; the Lyapunov exponent of the set of matrices characterizes the rate of growth of the geometric average.
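All of these quantities can be estimated crudely from products of a fixed length $t$ (a sketch with a hypothetical helper name, not a convergent algorithm). The diagonal example below is chosen so that the $t$-step values already coincide with the exact limits.

```python
import itertools
import functools
import numpy as np

def growth_estimates(matrices, t, p=2):
    """t-step estimates (hypothetical helper) of the growth quantities,
    over all products P of length t:
      joint spectral radius    ~ max  ||P||^(1/t)
      joint spectral subradius ~ min  ||P||^(1/t)
      p-radius                 ~ (mean of ||P||^p)^(1/(p*t))
      Lyapunov exponent        ~ (geometric mean of ||P||)^(1/t)"""
    norms = [np.linalg.norm(functools.reduce(np.matmul, c), 2)
             for c in itertools.product(matrices, repeat=t)]
    jsr = max(norms) ** (1.0 / t)
    subradius = min(norms) ** (1.0 / t)
    p_radius = float(np.mean([n ** p for n in norms])) ** (1.0 / (p * t))
    lyapunov = float(np.exp(np.mean(np.log(norms)))) ** (1.0 / t)
    return jsr, subradius, p_radius, lyapunov

# Diagonal pair: every product is a scalar multiple of the identity, so the
# estimates are exact: jsr = 2, subradius = 0.5, Lyapunov exponent = 1,
# and the p-radius lies in between.
A = np.array([[0.5, 0.0], [0.0, 0.5]])
B = np.array([[2.0, 0.0], [0.0, 2.0]])
jsr, sub, pr, lyap = growth_estimates([A, B], 3)
```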