Abstract

A general class of matrices, covering, for instance, an important set of proper rotations, is considered. Several characteristics of the class are established, which deal with such notions and properties as determinant, eigenspaces, eigenvalues, idempotency, Moore-Penrose inverse, or orthogonality.

1. Introduction and Basic Properties

Let ℂ𝑚,𝑛 denote the set of 𝑚×𝑛 complex matrices. The symbols 𝐋′, 𝐋∗, ℛ(𝐋), and rk(𝐋) will stand for the transpose, conjugate transpose, column space, and rank, respectively, of 𝐋∈ℂ𝑚,𝑛. Further, 𝐋†∈ℂ𝑛,𝑚 will be the Moore-Penrose inverse of 𝐋∈ℂ𝑚,𝑛, that is, the unique matrix satisfying the equations
𝐋𝐋†𝐋 = 𝐋,  𝐋†𝐋𝐋† = 𝐋†,  (𝐋𝐋†)∗ = 𝐋𝐋†,  (𝐋†𝐋)∗ = 𝐋†𝐋, (1.1)
and 𝐈𝑛 will mean the identity matrix of order 𝑛. The Moore-Penrose inverse 𝐋† is useful in representing the orthogonal (in the sense of the standard inner product) projectors onto ℛ(𝐋) and ℛ(𝐋∗), denoted by 𝐏𝐋 and 𝐏𝐋∗, as well as the orthogonal projectors onto the orthogonal complements of these subspaces, denoted by 𝐐𝐋 and 𝐐𝐋∗. To be precise, for 𝐋∈ℂ𝑚,𝑛,
𝐏𝐋=𝐋𝐋†,𝐏𝐋∗=𝐋†𝐋,𝐐𝐋=𝐈𝑚−𝐋𝐋†,𝐐𝐋∗=𝐈𝑛−𝐋†𝐋.(1.2)
With respect to a scalar, say 𝛼∈ℂ, the inverse 𝛼† is defined as 𝛼† = 0 when 𝛼 = 0 and 𝛼† = 𝛼⁻¹ when 𝛼 ≠ 0.

The considerations of the present paper concern 3×3 matrices and 3×1 vectors having either complex or real entries; in the latter case their subsets will be denoted by ℝ3,3 and ℝ3,1, respectively. Customarily, the symbol ‖𝐥‖ will stand for the Euclidean norm of 𝐥∈ℂ3,1, that is, ‖𝐥‖ = √(𝐥∗𝐥). Let
𝐓𝐚 = ( 0, −𝑎3, 𝑎2 ; 𝑎3, 0, −𝑎1 ; −𝑎2, 𝑎1, 0 ) (1.3)
be generated by 𝐚 = (𝑎1,𝑎2,𝑎3)′∈ℝ3,1. The matrix 𝐓𝐚 can be used to define the vector cross product in ℝ3,1, with 𝐓𝐚𝐥 = 𝐚×𝐥 for any 𝐥∈ℝ3,1; see [1]. Further properties of 𝐓𝐚 are listed in the following lemma, whose proof is easy and thus omitted.

Relationships listed in Lemma 1.1 are available in the literature; see Trenkler [2, 3], Groß et al. [4], Bernstein [5, Ch. 3], and G. Trenkler and D. Trenkler [6]. It is noteworthy that the three scalars involved in point (x) represent scalar triple products, for example, 𝐜′𝐓𝐚𝐛 = (𝐚×𝐛)′𝐜. Moreover, the right-hand side equalities in points (xi) and (xii) are known as the Grassmann and Lagrange identities, respectively. From the point of view of the present paper, conditions (iii)–(v) of the lemma are of particular importance and will be extensively utilized in the subsequent derivations.
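The cross-product action of 𝐓𝐚 can be sketched numerically; the following is a minimal check in plain Python (helper names such as `skew` and `cross` are ours, not from the paper):

```python
# T_a from (1.3), generated by a = (a1, a2, a3)'
def skew(a):
    return [[0, -a[2], a[1]],
            [a[2], 0, -a[0]],
            [-a[1], a[0], 0]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def cross(a, l):  # the classical cross product a x l
    return [a[1] * l[2] - a[2] * l[1],
            a[2] * l[0] - a[0] * l[2],
            a[0] * l[1] - a[1] * l[0]]

a, l = [1.0, 2.0, 3.0], [-1.0, 0.5, 4.0]
assert matvec(skew(a), l) == cross(a, l)      # T_a l = a x l
assert matvec(skew(a), a) == [0.0, 0.0, 0.0]  # a x a = 0, i.e., a spans the null space of T_a
```

The second assertion reflects the fact, used repeatedly below, that 𝐓𝐚𝐚 = 0.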

It is known that if 𝐚∈ℝ3,1 generating 𝐓𝐚 given in (1.3) is such that ‖𝐚‖=1, then the matrices of the form
𝐑 = sin𝜃 𝐓𝐚 + 𝐈3 + (1−cos𝜃)𝐓𝐚² (1.4)
describe proper rotations in ℝ3,1 with 𝜃 being the angle of rotation about the axis given by 𝐚; see Noble [7, Ch. 12] or Murray et al. [8, Ch. 2]. In what follows we consider a more general class of matrices than the one spanned by matrices of the form (1.4), namely,
Υ = {𝐓∈ℂ3,3 : 𝐓 = 𝛼𝐓𝐚 + 𝛽𝐈3 + 𝛾𝐚𝐚′, 𝛼,𝛽,𝛾∈ℂ, 𝐚∈ℝ3,1, 𝐚≠0}. (1.5)
It can be verified that Υ covers all proper rotations of the form (1.4). Moreover, it comprises also improper rotations, that is, orthogonal matrices with determinant equal to −1 [9, Ch. VIII], symmetric elementary matrices [10, Sec. 1], and all matrices commuting with 𝐓𝐚 [11].
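That the matrices (1.4) are indeed proper rotations can be verified numerically; a small sketch in plain Python, under the stated assumption of a unit axis (helper names are ours):

```python
import math

def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def det(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

s = 1 / math.sqrt(3)
a, th = [s, s, s], 0.7                  # unit axis, arbitrary angle
Ta = skew(a)
Ta2 = mm(Ta, Ta)
# R = sin(th) T_a + I_3 + (1 - cos(th)) T_a^2, as in (1.4)
R = [[math.sin(th) * Ta[i][j] + (1 if i == j else 0) + (1 - math.cos(th)) * Ta2[i][j]
      for j in range(3)] for i in range(3)]
Rt = [[R[j][i] for j in range(3)] for i in range(3)]
RtR = mm(Rt, R)
assert all(abs(RtR[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(3) for j in range(3))
assert abs(det(R) - 1) < 1e-12          # det = 1: a proper rotation
```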

The purpose of the present paper is to identify various properties of the class of matrices specified in (1.5). As a result, several characteristics of the class Υ are established, dealing with such notions and properties as idempotency, determinant, eigenvalues, Moore-Penrose inverse, orthogonality, or eigenspaces.

2. Results

Subsequently, the symbol 𝑖 is interpreted as 𝑖 = √−1. The theorem below states that Υ is closed under multiplication.

Theorem 2.1. Let 𝐓1,𝐓2∈Υ with 𝐓𝑘 = 𝛼𝑘𝐓𝐚 + 𝛽𝑘𝐈3 + 𝛾𝑘𝐚𝐚′, 𝑘 = 1,2. Then 𝐓1𝐓2∈Υ.

Proof. Direct calculations show that
𝐓1𝐓2=𝛼1𝛽2+𝛽1𝛼2𝐓𝐚+𝛽1𝛽2−𝛼1𝛼2‖𝐚‖2𝐈𝟑+𝛼1𝛼2+𝛽1𝛾2+𝛽2𝛾1+𝛾1𝛾2‖𝐚‖2𝐚𝐚,(2.1)
establishing the assertion.

It is clear that, besides ensuring the closure property, multiplication in Υ is also associative. Furthermore, the identity element 𝐈3 belongs to Υ. On the other hand, since Υ includes also singular matrices, not every 𝐓∈Υ has an inverse element in Υ. Thus, the set Υ is a semigroup under (matrix) multiplication. (As will be seen subsequently, the Moore-Penrose inverse of every 𝐓∈Υ belongs to Υ; in particular, 𝐓⁻¹∈Υ for each nonsingular 𝐓∈Υ.)
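The coefficient formula (2.1) behind the closure property can be checked numerically; a minimal sketch in plain Python (helper names such as `T_mat` are ours):

```python
def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def outer(a):  # the rank-one matrix aa'
    return [[x * y for y in a] for x in a]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def T_mat(al, be, ga, a):  # alpha*T_a + beta*I_3 + gamma*aa'
    Ta, aat = skew(a), outer(a)
    return [[al * Ta[i][j] + (be if i == j else 0) + ga * aat[i][j]
             for j in range(3)] for i in range(3)]

a = [1.0, 2.0, 2.0]
n2 = sum(x * x for x in a)                  # ||a||^2 = 9
a1, b1, g1 = 1 + 2j, 0.5, -1.0
a2, b2, g2 = -0.5j, 2.0, 1 + 1j

P = mm(T_mat(a1, b1, g1, a), T_mat(a2, b2, g2, a))   # product computed directly
Q = T_mat(a1 * b2 + b1 * a2,                         # ...and via the coefficients in (2.1)
          b1 * b2 - a1 * a2 * n2,
          a1 * a2 + b1 * g2 + b2 * g1 + g1 * g2 * n2, a)
assert all(abs(P[i][j] - Q[i][j]) < 1e-9 for i in range(3) for j in range(3))
```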

The next theorem provides necessary and sufficient conditions for 𝐓² = 𝐓.

Theorem 2.2. Let 𝐓∈Υ. Then 𝐓 is idempotent if and only if
2𝛼𝛽 = 𝛼,  𝛽²−𝛼²‖𝐚‖² = 𝛽,  𝛼²+2𝛽𝛾+𝛾²‖𝐚‖² = 𝛾. (2.2)

Proof. The equivalence established in the theorem follows straightforwardly from Theorem 2.1.

In view of Theorem 2.2, we can distinguish two subsets of idempotent matrices belonging to Υ corresponding to 𝛼 ≠ 0 and 𝛼 = 0. In the former of them, 𝛼 = ±𝑖/(2‖𝐚‖), 𝛽 = 1/2, and 𝛾 = ±1/(2‖𝐚‖²). Hence, since 𝐚† = (1/‖𝐚‖²)𝐚′, it follows that 𝐓∈{𝐏1,𝐏2,𝐏3,𝐏4}, with
𝐏1=12𝑖𝐓‖𝐚‖𝐚+𝐈3+𝐏𝐚,𝐏2=12𝑖𝐓‖𝐚‖𝐚+𝐐𝐚,𝐏3=12−𝑖𝐓‖𝐚‖𝐚+𝐈3+𝐏𝐚,𝐏4=12−𝑖𝐓‖𝐚‖𝐚+𝐐𝐚.(2.3)
(Note that matrices 𝐏𝑘, 𝑘 = 1,…,4, were considered by Trenkler [3] to characterize certain eigenspaces.) On the other hand, if 𝛼 = 0, then Theorem 2.2 entails four cases, namely, 𝛽 = 0, 𝛾 = 0; 𝛽 = 0, 𝛾 = 1/‖𝐚‖²; 𝛽 = 1, 𝛾 = 0; and 𝛽 = 1, 𝛾 = −1/‖𝐚‖², leading to 𝐏5 = 0, 𝐏6 = 𝐏𝐚, 𝐏7 = 𝐈3, and 𝐏8 = 𝐐𝐚, respectively.
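Idempotency of the matrices above is easy to confirm numerically; a sketch in plain Python for 𝐏1 and 𝐏6 (helper names are ours):

```python
import math

def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def outer(a):  # aa'
    return [[x * y for y in a] for x in a]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def T_mat(al, be, ga, a):  # alpha*T_a + beta*I_3 + gamma*aa'
    Ta, aat = skew(a), outer(a)
    return [[al * Ta[i][j] + (be if i == j else 0) + ga * aat[i][j]
             for j in range(3)] for i in range(3)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

a = [1.0, 2.0, 2.0]
n = math.sqrt(sum(x * x for x in a))          # ||a|| = 3
# the alpha != 0 family: alpha = i/(2||a||), beta = 1/2, gamma = 1/(2||a||^2)
P1 = T_mat(1j / (2 * n), 0.5, 1 / (2 * n * n), a)
assert close(mm(P1, P1), P1)                  # P_1 is idempotent
# one member of the alpha = 0 family: P_6 = P_a
P6 = T_mat(0, 0, 1 / (n * n), a)
assert close(mm(P6, P6), P6)
```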

The next task is to characterize the eigenvalues of matrices belonging to Υ. The subsequent theorem expresses the determinant of 𝐓∈Υ in terms of the scalars 𝛼, 𝛽, and 𝛾. Its proof is based on the so-called Leverrier–Souriau–Frame algorithm, which provides a useful tool to calculate the coefficients of a characteristic polynomial. Since the algorithm is not widely known, it is restated in the following lemma; see, for example, Meyer [12, page 504]. Customarily, tr(⋅) denotes the trace of a matrix argument.

Lemma 2.3. Let 𝐋∈ℂ𝑛,𝑛, and let 𝜇𝑛+𝑐1𝜇𝑛−1+𝑐2𝜇𝑛−2+⋯+𝑐𝑛=0 be the characteristic equation for 𝐋. Then
𝑐1 = −tr(𝐋),  𝑐𝑘 = −(1/𝑘) tr(𝐋𝐁𝑘−1),  𝑘 = 2,3,…,𝑛, (2.4)
where 𝐁1=𝑐1𝐈𝑛+𝐋 and 𝐁𝑘=𝑐𝑘𝐈𝑛+𝐋𝐁𝑘−1, 𝑘=2,3,…,𝑛−1.
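The recursion of Lemma 2.3 is straightforward to implement; a minimal sketch in plain Python for the 3×3 case, checked on a diagonal matrix with known characteristic polynomial (helper names are ours):

```python
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def tr(M):
    return sum(M[i][i] for i in range(3))

def lsf(L):
    # c_1 = -tr(L); then B_k = c_k I + L B_{k-1} and c_k = -(1/k) tr(L B_{k-1})
    cs, B = [], None
    for k in range(1, 4):
        M = L if B is None else mm(L, B)
        ck = -tr(M) / k
        cs.append(ck)
        B = [[(ck if i == j else 0) + M[i][j] for j in range(3)] for i in range(3)]
    return cs

L = [[2, 0, 0], [0, 3, 0], [0, 0, 5]]   # eigenvalues 2, 3, 5
# characteristic polynomial: mu^3 - 10 mu^2 + 31 mu - 30
assert lsf(L) == [-10.0, 31.0, -30.0]
```

Note that det(𝐋) is recovered as (−1)³𝑐3 = 30, the product of the eigenvalues, which is exactly how the algorithm is used in the proof of Theorem 2.4.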

Theorem 2.4. Let 𝐓∈Υ, and let 𝜏 = 𝛼²‖𝐚‖²+𝛽². Then det(𝐓) = 𝜏(𝛽+𝛾‖𝐚‖²).

Proof. From Lemma 2.3 it follows that 𝑐1 = −tr(𝐓) is given by 𝑐1 = −(3𝛽+𝛾‖𝐚‖²), whence it is seen that 𝐁1 = 𝑐1𝐈3+𝐓 takes the form
𝐁1=𝛼𝐓𝐚−2𝛽+𝛾‖𝐚‖2𝐈3+𝛾𝐚𝐚.(2.5)
Further, if 𝑘=2, then 𝑐2=−(1/2)tr(𝐓𝐁1), where
𝐓𝐁1=−𝛼𝛽+𝛼𝛾‖𝐚‖2𝐓𝐚−𝛼2‖𝐚‖2+2𝛽2+𝛽𝛾‖𝐚‖2𝐈3+𝛼2−𝛽𝛾𝐚𝐚.(2.6)
Since tr(𝐓𝐚) = 0, in consequence we get 𝑐2 = 𝛼²‖𝐚‖²+3𝛽²+2𝛽𝛾‖𝐚‖², leading to
𝐁2 = 𝑐2𝐈3 + 𝐓𝐁1 = −𝛼(𝛽+𝛾‖𝐚‖²)𝐓𝐚 + (𝛽²+𝛽𝛾‖𝐚‖²)𝐈3 + (𝛼²−𝛽𝛾)𝐚𝐚′. (2.7)
Finally, if 𝑘=3, then 𝑐3=−(1/3)tr(𝐓𝐁2), where
𝐓𝐁2=𝛼2𝛽‖𝐚‖2+𝛼2𝛾‖𝐚‖4+𝛽3+𝛽2𝛾‖𝐚‖2𝐈3.(2.8)
Hence, straightforward calculations yield 𝑐3 = −𝜏(𝛽+𝛾‖𝐚‖²). Combining this result with the property det(𝐓) = (−1)³𝑐3 completes the proof.
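The determinant formula of Theorem 2.4 can be cross-checked numerically against a cofactor expansion; a sketch in plain Python with arbitrary complex parameters (helper names are ours):

```python
def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def outer(a):
    return [[x * y for y in a] for x in a]

def det(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

a = [1.0, -2.0, 0.5]
n2 = sum(x * x for x in a)
al, be, ga = 0.3 + 1j, -1.2, 2.0 - 0.5j
Ta, aat = skew(a), outer(a)
T = [[al * Ta[i][j] + (be if i == j else 0) + ga * aat[i][j]
      for j in range(3)] for i in range(3)]
tau = al * al * n2 + be * be
# det(T) = tau * (beta + gamma * ||a||^2), as claimed in Theorem 2.4
assert abs(det(T) - tau * (be + ga * n2)) < 1e-9
```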

By virtue of Theorem 2.4, it is easy to determine the spectrum of matrices belonging to Υ.

Theorem 2.5. Let 𝐓∈Υ. Then the eigenvalues of 𝐓 are the solutions 𝜇 of the equation [𝛼²‖𝐚‖²+(𝛽−𝜇)²][𝛽−𝜇+𝛾‖𝐚‖²] = 0.

Proof. It is clear that
det𝐓−𝜇𝐈3=det𝛼𝐓𝐚+(𝛽−𝜇)𝐈3+𝛾𝐚𝐚.(2.9)
Hence, on account of Theorem 2.4, we get
det𝐓−𝜇𝐈3𝛼=det2‖𝐚‖2+(𝛽−𝜇)2𝛽−𝜇+𝛾‖𝐚‖2,(2.10)
establishing the assertion.

Corollary 2.6. Let 𝐓∈Υ. Then the eigenvalues of 𝐓 are
𝜇1 = 𝛽+𝛾‖𝐚‖²,  𝜇2 = 𝛽+𝑖𝛼‖𝐚‖,  𝜇3 = 𝛽−𝑖𝛼‖𝐚‖. (2.11)

An expected result originating from Corollary 2.6 is that 𝜇1𝜇2𝜇3 = det(𝐓), with det(𝐓) given in Theorem 2.4. Furthermore, when 𝛼 = 1, 𝛽 = 0, and 𝛾 = 0, the eigenvalues given in (2.11) reduce to 𝜇1 = 0, 𝜇2 = 𝑖‖𝐚‖, and 𝜇3 = −𝑖‖𝐚‖, that is, to the eigenvalues of 𝐓𝐚; see [3, Theorem 2].
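Both observations can be checked numerically for arbitrary parameters; a sketch in plain Python (helper names are ours):

```python
import math

def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def outer(a):
    return [[x * y for y in a] for x in a]

def det(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

a = [2.0, -1.0, 2.0]
n2 = sum(x * x for x in a)     # ||a||^2 = 9
n = math.sqrt(n2)
al, be, ga = 0.7, -0.4 + 0.2j, 1.1j
Ta, aat = skew(a), outer(a)
T = [[al * Ta[i][j] + (be if i == j else 0) + ga * aat[i][j]
      for j in range(3)] for i in range(3)]
mus = [be + ga * n2, be + 1j * al * n, be - 1j * al * n]   # the eigenvalues (2.11)
for mu in mus:
    Tm = [[T[i][j] - (mu if i == j else 0) for j in range(3)] for i in range(3)]
    assert abs(det(Tm)) < 1e-9                 # each mu_j annihilates det(T - mu I_3)
assert abs(mus[0] * mus[1] * mus[2] - det(T)) < 1e-9   # mu1 mu2 mu3 = det(T)
```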

The following theorem will be useful in the subsequent calculations of the Moore-Penrose inverses of 𝐓∈Υ.

Theorem 2.7. Let 𝐀∈ℂ3,3 be such that 𝐀 = 𝛼𝐓𝐚+𝛽𝐈3, with 𝛼,𝛽∈ℂ and 𝐓𝐚 of the form (1.3) generated by nonzero 𝐚∈ℝ3,1. Moreover, let 𝜆 = 1+𝛾𝐚′𝐀†𝐚, with 𝛾∈ℂ, and 𝜏 = 𝛼²‖𝐚‖²+𝛽². Then:
(i) det(𝐀) = 𝛽𝜏;
(ii) the eigenvalues of 𝐀 are 𝜎1 = 𝛽, 𝜎2 = 𝛽+𝑖𝛼‖𝐚‖, 𝜎3 = 𝛽−𝑖𝛼‖𝐚‖;
(iii) if 𝛽 = 0, then 𝐀† = −(𝛼†/‖𝐚‖²)𝐓𝐚 and 𝜆 = 1;
(iv) if 𝛽 ≠ 0, 𝜏 = 0, then 𝐀† = (1/4𝛽)((𝛼/𝛽)𝐓𝐚+𝐈3+3𝐏𝐚) and 𝜆 = 1+(𝛾/𝛽)‖𝐚‖²;
(v) if 𝛽 ≠ 0, 𝜏 ≠ 0, then 𝐀⁻¹ = (1/𝜏)(−𝛼𝐓𝐚+𝛽𝐈3+(𝛼²/𝛽)𝐚𝐚′) and 𝜆 = 1+(𝛾/𝛽)‖𝐚‖².

Proof. Assertion (i) follows from Theorem 2.4 by setting 𝛾 = 0. Statement (ii) is a consequence of (i), whereas the validity of points (iii) and (v) can be confirmed by straightforward calculations; see [3, Theorem 1]. For the proof of statement (iv) note that 𝛽 ≠ 0, 𝜏 = 0 imply 𝛼/𝛽 = ±𝑖/‖𝐚‖, that is, 𝛼/𝛽 is purely imaginary. Taking this fact into account, in view of 𝐚† = (1/‖𝐚‖²)𝐚′, the validity of the formula for 𝐀† given in point (iv) is seen by direct verification of conditions (1.1). Similarly, the formula for 𝐀⁻¹ provided in point (v) can be confirmed by examining the condition 𝐀𝐀⁻¹ = 𝐈3. The proof is thus complete, for the expressions for 𝜆 given in points (iv) and (v) are easily obtainable.
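The verification of conditions (1.1) mentioned for point (iv) can be replayed numerically; a sketch in plain Python with 𝛽 ≠ 0 and 𝛼 chosen so that 𝜏 = 0 (helper names are ours):

```python
def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def ct(M):  # conjugate transpose
    return [[complex(M[j][i]).conjugate() for j in range(3)] for i in range(3)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

a = [1.0, 2.0, 2.0]
n2, n = 9.0, 3.0
be = 1.5
al = 1j * be / n                      # alpha/beta = i/||a||, hence tau = 0
Ta = skew(a)
Pa = [[a[i] * a[j] / n2 for j in range(3)] for i in range(3)]
A = [[al * Ta[i][j] + (be if i == j else 0) for j in range(3)] for i in range(3)]
# A-dagger from point (iv): (1/4 beta)((alpha/beta) T_a + I_3 + 3 P_a)
Adag = [[((al / be) * Ta[i][j] + (1 if i == j else 0) + 3 * Pa[i][j]) / (4 * be)
         for j in range(3)] for i in range(3)]

AAd, AdA = mm(A, Adag), mm(Adag, A)
assert close(mm(AAd, A), A)           # A A† A = A
assert close(mm(AdA, Adag), Adag)     # A† A A† = A†
assert close(ct(AAd), AAd)            # (A A†)* = A A†
assert close(ct(AdA), AdA)            # (A† A)* = A† A
```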

Note that, regardless of whether 𝛽 and/or 𝜏 in Theorem 2.7 are zero, the matrix 𝐀 satisfies 𝐏𝐀 = 𝐏𝐀∗, or, in other words, ℛ(𝐀) = ℛ(𝐀∗), that is, 𝐀 is an EP matrix. Another observation is that setting 𝛼 = 1, 𝛽 = 1 in Theorem 2.7 leads to the relationship
𝐓𝐚+𝐈3−1=1‖𝐚‖2+1−𝐓𝐚+𝐈3+𝐚𝐚.(2.12)
Hence, the so-called Cayley transform of 𝐓𝐚 (see [13, p. 219]), 𝐒 = (−𝐓𝐚+𝐈3)(𝐓𝐚+𝐈3)⁻¹, which is known to be orthogonal (see [9, Theorem 8.1.10]), takes the form
1𝐒=‖𝐚‖2+1−2𝐓𝐚+1−‖𝐚‖2𝐈3+2𝐚𝐚.(2.13)
Since det(𝐒)=1, the matrix 𝐒 represents in fact a proper rotation.
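Both the closed form (2.13) and its properties can be confirmed numerically; a sketch in plain Python (helper names are ours):

```python
def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def det(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

a = [1.0, 0.5, -2.0]
n2 = sum(x * x for x in a)
t = n2 + 1
Ta = skew(a)
aat = [[x * y for y in a] for x in a]
I3 = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]

# S from (2.13)
S = [[(-2 * Ta[i][j] + ((1 - n2) if i == j else 0) + 2 * aat[i][j]) / t
      for j in range(3)] for i in range(3)]
St = [[S[j][i] for j in range(3)] for i in range(3)]
assert close(mm(St, S), I3)            # S is orthogonal
assert abs(det(S) - 1) < 1e-12         # det(S) = 1: a proper rotation

# agreement with S = (-T_a + I_3)(T_a + I_3)^(-1), using (2.12) for the inverse
inv = [[(-Ta[i][j] + (1 if i == j else 0) + aat[i][j]) / t for j in range(3)] for i in range(3)]
M = [[-Ta[i][j] + (1 if i == j else 0) for j in range(3)] for i in range(3)]
assert close(mm(M, inv), S)
```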

We now have the tools necessary to establish formulae for the Moore-Penrose inverses of 𝐓∈Υ.

Proof. The first observation is that if 𝛽 = 0, then on account of Theorem 3.1.1 in [14] we have 𝐓† = (𝛼𝐓𝐚)†+(𝛾𝐚𝐚′)†. Hence, by utilizing 𝐓𝐚† = −(1/‖𝐚‖²)𝐓𝐚 and (𝐚𝐚′)† = (1/‖𝐚‖²)𝐏𝐚, the formula for 𝐓† given in point (i) follows.

Assume now that 𝛽 ≠ 0, in which case we can still have 𝜏 = 0 or 𝜏 ≠ 0. In the former of these situations, point (iv) of Theorem 2.7 implies 𝐏𝐀 = (1/2)((𝛼/𝛽)𝐓𝐚+𝐈3+𝐏𝐚), whence 𝐏𝐀𝐚 = 𝐚, or, in other words, 𝐚∈ℛ(𝐀). This inclusion is clearly satisfied also when 𝜏 ≠ 0, for then det(𝐀) ≠ 0.

In order to apply the results of Baksalary et al. [15], we introduce 𝐛 = 𝛾𝐚, 𝐜 = 𝐚. Then 𝐓 = 𝐀+𝐛𝐜′ = 𝐀+𝐛𝐜∗, that is, 𝐓 is a rank-one modification of 𝐀. As in [15], we define also the vectors 𝐝,𝐞,𝐟,𝐠∈ℂ3,1 according to
𝐝=𝐀†𝐀𝐛,𝐞=†∗𝐜,𝐟=𝐐𝐀𝐛,𝐠=𝐐𝐀∗𝐜,(2.14)
and denote the squares of the norms of the first two of them by 𝛿 and 𝜂, that is, 𝛿 = ‖𝐝‖², 𝜂 = ‖𝐞‖². As is seen from Theorem 2.7, the scalar 𝜆∈ℂ specified in [15] by 𝜆 = 1+𝐜∗𝐀†𝐛 now takes the form 𝜆 = 1+𝛾𝐚′𝐀†𝐚 = 1+(𝛾/𝛽)‖𝐚‖².

Let us first consider case (ii) of the theorem, characterized, in addition to 𝛽 ≠ 0, by 𝜏 = 0, 𝜆 = 0. On account of Theorem 1.1 in [15], this case corresponds to rk(𝐓) = rk(𝐀)−1. As can be directly verified with the use of 𝐀† given in point (iv) of Theorem 2.7, 𝐝 = (𝛾/𝛽)𝐚 and 𝐞 = (1/𝛽)𝐚, from where 𝛿 = |𝛾/𝛽|²‖𝐚‖² and 𝜂 = (1/|𝛽|²)‖𝐚‖². Moreover, we get 𝐀†𝐞 = (1/|𝛽|²)𝐚, implying 𝐝∗𝐀†𝐞 = (𝛾/(|𝛽|²𝛽))‖𝐚‖² and 𝐀†𝐞𝐞∗ = (1/(|𝛽|²𝛽))𝐚𝐚′. Furthermore, 𝐝𝐝∗ = |𝛾/𝛽|²𝐚𝐚′ and 𝐝𝐞∗ = (𝛾/𝛽²)𝐚𝐚′. Thus,
𝛿⁻¹𝐝𝐝∗ = 𝐏𝐚,  𝜂⁻¹𝐀†𝐞𝐞∗ = (1/𝛽)𝐏𝐚,  𝛿⁻¹𝜂⁻¹(𝐝∗𝐀†𝐞)𝐝𝐞∗ = (1/𝛽)𝐏𝐚. (2.15)
In consequence, from formula (2.1) in [15] we get 𝐓† = 𝐐𝐚𝐀†, whence the expression for 𝐓† claimed in point (ii) of the theorem follows.

According to Theorem 1.1 in [15], another case which corresponds to rk(𝐓) = rk(𝐀)−1 is given in point (iv) of the theorem, where 𝜏 ≠ 0, 𝜆 = 0. Direct calculations with the use of 𝐀⁻¹ given in point (v) of Theorem 2.7 show that formulae (2.15) remain valid also in this case. Hence, from relationship (2.1) in [15] we obtain 𝐓† = 𝐐𝐚𝐀⁻¹, leading to the expression claimed in point (iv) of the theorem.

Another conclusion originating from Theorem 1.1 in [15] is that case (iii) of the theorem, in which 𝜏 = 0, 𝜆 ≠ 0, corresponds to rk(𝐓) = rk(𝐀). Direct calculations with the use of 𝐀† given in point (iv) of Theorem 2.7 show that
𝜆⁻¹𝐝𝐞∗ = (𝛾/(𝛽(𝛽+𝛾‖𝐚‖²)))𝐚𝐚′, (2.16)
and substituting this relationship into formula (2.2) in [15] leads to the expression for 𝐓† given in the theorem.

Case (v), in which 𝜏 ≠ 0, 𝜆 ≠ 0, remains to be considered. According to the remark on p. 210 in [15], in such a situation 𝐓 is nonsingular. Hence, 𝐓† = 𝐓⁻¹, and the validity of the formula given in the theorem can be confirmed by direct verification of the condition 𝐓𝐓⁻¹ = 𝐈3.

A conclusion originating from Theorem 2.8 is that the Moore-Penrose inverse of every 𝐓∈Υ belongs to Υ, a property already mentioned in the remark following Theorem 2.1. Further consequences of Theorem 2.8 are given in what follows.

Point (v) of Theorem 2.8 enables us to formulate necessary and sufficient conditions for 𝐓∈Υ to be orthogonal.

Theorem 2.10. Let 𝐓∈Υ with nonzero 𝛼,𝛽,𝛾∈ℝ. Moreover, let 𝜏 = 𝛼²‖𝐚‖²+𝛽². Then 𝐓∈Υ is orthogonal if and only if
𝜏 = 1,  (𝛽+𝛾‖𝐚‖²)² = 1. (2.17)

Proof. The matrix 𝐓 is orthogonal if and only if it is nonsingular and 𝐓⁻¹ = 𝐓′. On account of point (v) of Theorem 2.8, it is seen that 𝐓⁻¹ = 𝐓′ is equivalent to
𝛼 = 𝛼/𝜏,  𝛽 = 𝛽/𝜏,  𝛾 = (𝛼²−𝛽𝛾)/(𝜏(𝛽+𝛾‖𝐚‖²)), (2.18)
or, in other words, 𝜏 = 1 and 𝛾(𝛽+𝛾‖𝐚‖²) = 𝛼²−𝛽𝛾. Taking into account that 𝜏 = 1 implies 𝛼² = (1/‖𝐚‖²)(1−𝛽²), the assertion follows.

Observe that the right-hand side condition in (2.17) admits two possibilities, namely, either 𝛽+𝛾‖𝐚‖² = 1 or 𝛽+𝛾‖𝐚‖² = −1. Since 𝜏 = 1, Theorem 2.4 ensures that in the former situation 𝐓 is a proper rotation, whereas in the latter 𝐓 is an improper rotation.
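Both possibilities can be illustrated numerically; a sketch in plain Python with 𝐚 = (1,0,−1)′ and parameters chosen to satisfy (2.17) in each of the two cases (helper names are ours):

```python
import math

def skew(a):
    return [[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def det(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

a = [1.0, 0.0, -1.0]
n2 = 2.0
Ta = skew(a)
aat = [[x * y for y in a] for x in a]
I3 = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
be = 0.6
al = math.sqrt((1 - be * be) / n2)            # enforces tau = 1

# gamma chosen so that beta + gamma||a||^2 = +1 (proper) or -1 (improper)
for ga, d in [((1 - be) / n2, 1.0), ((-1 - be) / n2, -1.0)]:
    T = [[al * Ta[i][j] + (be if i == j else 0) + ga * aat[i][j]
          for j in range(3)] for i in range(3)]
    Tt = [[T[j][i] for j in range(3)] for i in range(3)]
    assert close(mm(Tt, T), I3)               # T is orthogonal in both cases
    assert abs(det(T) - d) < 1e-12            # det = +1 or -1 as predicted
```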

The next theorem concerns eigenspaces attributed to 𝐓∈Υ.

Theorem 2.11. Let 𝐓∈Υ, and let ℰ(𝜇𝑗), 𝑗 = 1,2,3, be the eigenspaces of 𝐓 associated with its eigenvalues 𝜇𝑗 given in (2.11). Then:
(i) if 𝛼 = 0, then ℰ(𝜇2) = ℛ(𝐈3−𝛾𝛾†𝐏𝐚) and ℰ(𝜇3) = ℛ(𝐈3−𝛾𝛾†𝐏𝐚);
(ii) if 𝛼 ≠ 0, then ℰ(𝜇2) = ℛ(𝐐2) provided that 𝛾/𝛼 = 𝑖/‖𝐚‖ and ℰ(𝜇2) = ℛ(𝐐1) otherwise and, simultaneously, ℰ(𝜇3) = ℛ(𝐐4) provided that 𝛾/𝛼 = −𝑖/‖𝐚‖ and ℰ(𝜇3) = ℛ(𝐐3) otherwise;
(iii) if 𝛾 = 0, then ℰ(𝜇1) = ℛ(𝐈3−𝛼𝛼†𝐐𝐚);
(iv) if 𝛾 ≠ 0, then ℰ(𝜇1) = ℛ(𝐐2) provided that 𝛼/𝛾 = −𝑖‖𝐚‖, ℰ(𝜇1) = ℛ(𝐐4) provided that 𝛼/𝛾 = 𝑖‖𝐚‖, and ℰ(𝜇1) = ℛ(𝐚) otherwise,
where 𝐐𝑘 = 𝐈3−𝐏𝑘, 𝑘 = 1,…,4, with 𝐏𝑘 as specified in (2.3).

Proof. It is known that ℰ(𝜇𝑗) = ℛ[𝐈3−𝐓(𝜇𝑗)𝐓(𝜇𝑗)†], where 𝐓(𝜇𝑗) = 𝐓−𝜇𝑗𝐈3, 𝑗 = 1,2,3; see, for example, [3]. Clearly, for each 𝜇𝑗, the matrix 𝐓(𝜇𝑗) can be written as
𝐓𝜇𝑗=𝛼𝐓𝐚+̃𝛽𝐈3+𝛾𝐚𝐚,(2.19)
where 𝛽̃ = 𝛽−𝜇𝑗. For 𝜇1 = 𝛽+𝛾‖𝐚‖², we have 𝛽̃ = −𝛾‖𝐚‖². By virtue of the equivalence 𝛽̃ = 0 ⇔ 𝛾 = 0, statement (i) of Corollary 2.9 leads to point (iii) of the theorem. If, however, 𝛽̃ ≠ 0, that is, 𝛾 ≠ 0, then 𝜆 = 1+(𝛾/𝛽̃)‖𝐚‖² = 0, which means that cases (iii) and (v) of Corollary 2.9, characterized by 𝜆 ≠ 0, are not attainable in the present situation. Further observations are that 𝜏 = 𝛼²‖𝐚‖²+𝛽̃² = (𝛼²+𝛾²‖𝐚‖²)‖𝐚‖² and 𝛼/𝛽̃ = −(𝛼/𝛾)(1/‖𝐚‖²). In view of these facts, it is seen that statements (ii) and (iv) of Corollary 2.9 lead to the characterizations of ℰ(𝜇1) given in point (iv) of the theorem.

Next we consider the eigenvalue 𝜇2 = 𝛽+𝑖𝛼‖𝐚‖, which ensures that 𝛽̃ occurring in (2.19) is given by 𝛽̃ = −𝑖𝛼‖𝐚‖. Since 𝛽̃ = 0 is equivalent to 𝛼 = 0, on account of statement (i) of Corollary 2.9 we arrive at the characterization of ℰ(𝜇2) given in point (i) of the theorem. On the other hand, if 𝛽̃ ≠ 0, that is, 𝛼 ≠ 0, then 𝜏 is necessarily equal to zero, which means that cases (iv) and (v) of Corollary 2.9 are to be excluded from the present considerations. Furthermore, it is seen that 𝜆 = 1+𝑖(𝛾/𝛼)‖𝐚‖ and 𝛼/𝛽̃ = 𝑖/‖𝐚‖. With these facts taken into account, we conclude that statements (ii) and (iii) of Corollary 2.9 lead to the characterizations of ℰ(𝜇2) provided in point (ii) of the theorem.

The last eigenvalue to be considered is 𝜇3 = 𝛽−𝑖𝛼‖𝐚‖, for which 𝛽̃ = 𝑖𝛼‖𝐚‖. In this case, arguments analogous to the ones used with respect to 𝜇2 lead to the eigenspaces ℰ(𝜇3) in points (i) and (ii) of the theorem. The proof is complete.

Observe that if 𝛼=1, 𝛽=0, and 𝛾=0, then from Theorem 2.11 we get ℰ(𝜇1)=ℛ(𝐚), ℰ(𝜇2)=ℛ(𝐐1), and ℰ(𝜇3)=ℛ(𝐐3), that is, the eigenspaces of 𝐓𝐚 identified by Trenkler [3, Sec. 3].

Theorem 2.11 is supplemented with examples demonstrating its applicability. Let 𝐚 = (1,0,−1)′. Then,
𝐓𝐚 = ( 0, 1, 0 ; −1, 0, −1 ; 0, 1, 0 ),  𝐚𝐚′ = ( 1, 0, −1 ; 0, 0, 0 ; −1, 0, 1 ), (2.20)
leading to
𝐓 = ( 𝛽+𝛾, 𝛼, −𝛾 ; −𝛼, 𝛽, −𝛼 ; −𝛾, 𝛼, 𝛽+𝛾 ), (2.21)
with eigenvalues 𝜇1 = 𝛽+2𝛾, 𝜇2 = 𝛽+𝑖√2𝛼, 𝜇3 = 𝛽−𝑖√2𝛼. From the right-hand side formula in (2.20) we get
𝐏𝐚 = ( 1/2, 0, −1/2 ; 0, 0, 0 ; −1/2, 0, 1/2 ), (2.22)
and the projectors 𝐐𝑘, 𝑘=1,…,4, involved in Theorem 2.11 are of the forms
𝐐1 = ( 1/4, −𝑖√2/4, 1/4 ; 𝑖√2/4, 1/2, 𝑖√2/4 ; 1/4, −𝑖√2/4, 1/4 ),
𝐐2 = ( 3/4, −𝑖√2/4, −1/4 ; 𝑖√2/4, 1/2, 𝑖√2/4 ; −1/4, −𝑖√2/4, 3/4 ),
𝐐3 = ( 1/4, 𝑖√2/4, 1/4 ; −𝑖√2/4, 1/2, −𝑖√2/4 ; 1/4, 𝑖√2/4, 1/4 ),
𝐐4 = ( 3/4, 𝑖√2/4, −1/4 ; −𝑖√2/4, 1/2, −𝑖√2/4 ; −1/4, 𝑖√2/4, 3/4 ). (2.23)
Hence, from Theorem 2.11 we obtain what follows.

If 𝛼 = 0, then ℰ(𝜇2) = ℰ(𝜇3) = ℂ3,1 provided that 𝛾 = 0 and
ℰ(𝜇2) = ℰ(𝜇3) = span{(1,0,1)′, (0,1,0)′} (2.24)
otherwise.

Next, if 𝛼 ≠ 0, then
ℰ(𝜇2) = span{(1,0,−1)′, (0,1,−𝑖√2)′} (2.25)
provided that 𝛾/𝛼 = 𝑖√2/2 and
ℰ(𝜇2) = span{(1,𝑖√2,1)′} (2.26)
otherwise and, simultaneously,
ℰ(𝜇3) = span{(1,0,−1)′, (0,1,𝑖√2)′} (2.27)
provided that 𝛾/𝛼 = −𝑖√2/2 and
ℰ(𝜇3) = span{(1,−𝑖√2,1)′} (2.28)
otherwise.

Further, if 𝛾=0, then ℰ(𝜇1)=ℂ3,1 provided that 𝛼=0 and ℰ(𝜇1)=ℛ(𝐚) otherwise.

Finally, if 𝛾 ≠ 0, then
ℰ(𝜇1) = span{(1,0,−1)′, (0,1,−𝑖√2)′} (2.29)
provided that 𝛼/𝛾 = −𝑖√2,
ℰ(𝜇1) = span{(1,0,−1)′, (0,1,𝑖√2)′} (2.30)
provided that 𝛼/𝛾 = 𝑖√2, and ℰ(𝜇1) = ℛ(𝐚) otherwise.
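The generic case can be verified directly; a sketch in plain Python checking that 𝐚 itself spans ℰ(𝜇1) and that (1,𝑖√2,1)′ is an eigenvector for 𝜇2 when 𝛾/𝛼 is not one of the exceptional values (helper names are ours):

```python
import math

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

al, be, ga = 1.0, 0.3, 0.7          # generic real parameters (gamma/alpha real)
T = [[be + ga, al, -ga],            # the matrix (2.21)
     [-al, be, -al],
     [-ga, al, be + ga]]
r2 = math.sqrt(2.0)
mu1, mu2 = be + 2 * ga, be + 1j * al * r2

v1 = [1.0, 0.0, -1.0]               # = a itself, spanning E(mu1) in the generic case
v2 = [1.0, 1j * r2, 1.0]            # spanning E(mu2) in the generic case
assert all(abs(matvec(T, v1)[i] - mu1 * v1[i]) < 1e-12 for i in range(3))
assert all(abs(matvec(T, v2)[i] - mu2 * v2[i]) < 1e-12 for i in range(3))
```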

Further consequences of Theorem 2.11 deal with the proper rotation matrices; for a more detailed discussion see [16]. G. Trenkler and D. Trenkler [6, Sec. 3] identified three types of proper rotations, namely: Type I, covering matrices of the form 𝐑 = 𝐈3; Type II, covering matrices of the form 𝐑 = 2𝐚𝐚′−𝐈3, where ‖𝐚‖ = 1; and Type III, covering matrices of the form 𝐑 = 𝐈3+𝑓𝐚(𝐓𝐚+𝐓𝐚²), where 𝑓𝐚 = 2/(‖𝐚‖²+1). Moreover, it was pointed out in [6] that each proper rotation can be attributed to one of these types. Direct calculations show that rotations of Type I are obtained from the representation (1.5) by taking 𝛼 = 0, 𝛽 = 1, 𝛾 = 0; of Type II by taking 𝛼 = 0, 𝛽 = −1, 𝛾 = 2, and ‖𝐚‖ = 1; and of Type III by taking 𝛼 = 𝑓𝐚, 𝛽 = 1−𝑓𝐚‖𝐚‖², 𝛾 = 𝑓𝐚. Combining these observations with Corollary 2.6 leads to the conclusion that rotations of Type I have eigenvalues 𝜇1 = 𝜇2 = 𝜇3 = 1; of Type II, 𝜇1 = 1, 𝜇2 = 𝜇3 = −1; and of Type III, 𝜇1 = 1, 𝜇2 = 1−𝑓𝐚‖𝐚‖(‖𝐚‖−𝑖), 𝜇3 = 1−𝑓𝐚‖𝐚‖(‖𝐚‖+𝑖). Furthermore, from Theorem 2.11 we obtain the following characterizations of the eigenspaces.

Corollary 2.12. Let 𝐑∈Υ be a proper rotation, and let ℰ(𝜇𝑗), 𝑗 = 1,2,3, be the eigenspaces of 𝐑 associated with its eigenvalues 𝜇𝑗. Then:
(i) if 𝐑 is of Type I, then ℰ(𝜇1) = ℰ(𝜇2) = ℰ(𝜇3) = ℂ3,1;
(ii) if 𝐑 is of Type II, then ℰ(𝜇1) = ℛ(𝐚), ℰ(𝜇2) = ℛ(𝐐𝐚), ℰ(𝜇3) = ℛ(𝐐𝐚);
(iii) if 𝐑 is of Type III, then ℰ(𝜇1) = ℛ(𝐚), ℰ(𝜇2) = ℛ(𝐐1), ℰ(𝜇3) = ℛ(𝐐3),
where 𝐐𝑘 = 𝐈3−𝐏𝑘, 𝑘 = 1,3, with 𝐏𝑘 as specified in (2.3).