Conditions (2.5) and (2.6) show that the generalized residual r̃(t_k^(n)) is obtained by projecting the vector Δs_n onto the subspace W̃_{k,n}, orthogonally to L̃_{k,n}.
In matrix form, r̃(t_k^(n)) can be written as

The determinant in the numerator of (2.9) is the vector obtained by expanding this determinant with respect to its first row by the classical rule.
Note that the determinant in the denominator of (2.9) is equal to det(L_{k,n}^T Δ²S_{k,n}), which is assumed to be nonzero. The computation of the approximation t_k^(n) requires the values of the terms s_n, s_{n+1}, …, s_{n+k+1}.

Instead of applying the extrapolation methods directly to (2.10), we will use them for the preconditioned linear system
M^{-1}Cx = M^{-1}f, (2.11)
where M is a nonsingular matrix.
Starting from an initial vector s_0, we construct the sequence (s_j)_j by
s_{j+1} = Bs_j + b, j = 0, 1, …, (2.12)
with B = I − A, A = M^{-1}C and b = M^{-1}f.
Note that if the sequence (s_j) is convergent, its limit s = x* is the solution of the linear system (2.10).
From (2.12) we have Δs_j = b − As_j = r(s_j), the residual of the vector s_j. Therefore, using (2.3) and (2.12), it follows that the generalized residual of the approximation t_k^(n) is the true residual
r̃(t_k^(n)) = r(t_k^(n)) = b − At_k^(n). (2.13)
Note also that, since Δ²s_n = −AΔs_n, we have Δ²S_{k,n} = −AΔS_{k,n}.
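These identities are easy to check numerically. The following sketch (illustrative only; the matrix C, the diagonal preconditioner M and the sizes are arbitrary choices, not taken from the text) runs the basic iteration (2.12) and verifies that Δs_j = b − As_j = r(s_j) and Δ²S_k = −AΔS_k.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary small test system Cx = f with a diagonal preconditioner M.
N = 4
C = np.eye(N) + 0.1 * rng.standard_normal((N, N))
f = rng.standard_normal(N)
M = np.diag(np.diag(C))

A = np.linalg.solve(M, C)      # A = M^{-1} C
b = np.linalg.solve(M, f)      # b = M^{-1} f
B = np.eye(N) - A              # iteration matrix of (2.12)

# Basic iteration (2.12): s_{j+1} = B s_j + b, starting from s_0 = 0.
S = np.zeros((N, 8))           # columns s_0, ..., s_7
for j in range(7):
    S[:, j + 1] = B @ S[:, j] + b

dS = np.diff(S, axis=1)        # columns Delta s_0, ..., Delta s_6
d2S = np.diff(S, n=2, axis=1)  # columns Delta^2 s_0, ..., Delta^2 s_5

# Delta s_j = b - A s_j = r(s_j)  and  Delta^2 S = -A Delta S:
assert np.allclose(dS, b[:, None] - A @ S[:, :-1])
assert np.allclose(d2S, -A @ dS[:, :-1])
```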
For simplicity, and unless specified otherwise, we set n = 0, denote t_k = t_k^(0) and drop the index n in our notations. Let d be the degree of the minimal polynomial P_d of B for the vector s_0 − x*; as A = I − B is nonsingular, P_d is also the minimal polynomial of B for r_0 = Δs_0. Therefore, the matrices ΔS_k = [Δs_0, …, Δs_{k−1}] and Δ²S_k = [Δ²s_0, …, Δ²s_{k−1}] have full rank for k ≤ d. We also note that the approximation t_d exists and is equal to the solution of the linear system (2.10).
The three extrapolation methods implicitly make use of the polynomial P_d; since this polynomial is not known in practice, the aim of these methods is to approximate it.
When applied to the sequence generated by (2.12), the vector extrapolation methods above produce approximations t_k such that the corresponding residuals r_k = b − At_k satisfy the relations
r_k ∈ W̃_k = AṼ_k (2.14)
and
r_k ⊥ L̃_k, (2.15)
where Ṽ_k = span{Δs_0, …, Δs_{k−1}} and L̃_k ≡ W̃_k for RRE, L̃_k ≡ Ṽ_k for MPE and L̃_k ≡ Ỹ_k = span{y_1, …, y_k} for MMPE, where y_1, …, y_k are linearly independent vectors.
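Conditions (2.14) and (2.15) translate directly into a small linear solve: writing t_k = s_0 + ΔS_k ξ, the residual is r_k = Δs_0 + Δ²S_k ξ (using Δ²S_k = −AΔS_k), and orthogonality to L̃_k gives L_k^T Δ²S_k ξ = −L_k^T Δs_0. The sketch below is a naive dense implementation of this characterization for the three choices of L̃_k; it illustrates the definitions only and is not one of the stable algorithms of [32, 19]. The demo system is an arbitrary contractive iteration chosen for illustration.

```python
import numpy as np

def extrapolate(S, method="RRE", Y=None):
    """Compute t_k = s_0 + DeltaS_k xi from the iterates S = [s_0, ..., s_{k+1}].

    xi solves the orthogonality condition (2.15),
    L_k^T (Delta s_0 + Delta^2 S_k xi) = 0,
    with L_k a basis of W_k (RRE), V_k (MPE) or Y_k (MMPE).
    """
    dS = np.diff(S, axis=1)        # [Delta s_0, ..., Delta s_k]
    d2S = np.diff(S, n=2, axis=1)  # [Delta^2 s_0, ..., Delta^2 s_{k-1}]
    k = d2S.shape[1]
    if method == "RRE":
        L = -d2S                   # A Delta S_k = -Delta^2 S_k spans W_k
    elif method == "MPE":
        L = dS[:, :k]              # Delta S_k spans V_k
    else:                          # MMPE: arbitrary independent vectors y_i
        L = Y[:, :k]
    xi = np.linalg.solve(L.T @ d2S, -L.T @ dS[:, 0])
    return S[:, 0] + dS[:, :k] @ xi

# Demo: a small convergent iteration s_{j+1} = B s_j + b with A = I - B.
rng = np.random.default_rng(1)
N = 5
B = rng.standard_normal((N, N))
B *= 0.5 / np.linalg.norm(B, 2)    # make the basic iteration contractive
A = np.eye(N) - B
b = rng.standard_normal(N)

S = np.zeros((N, N + 2))           # iterates s_0 = 0, ..., s_{N+1}
for j in range(N + 1):
    S[:, j + 1] = B @ S[:, j] + b

for meth in ("RRE", "MPE", "MMPE"):
    t = extrapolate(S, meth, Y=rng.standard_normal((N, N)))
    assert np.allclose(A @ t, b)   # t_d solves Ax = b, as noted above
```

For k = d all three methods return the exact solution; for k < d they differ through the test subspace L̃_k, which is what the angle comparison below exploits.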
Note that, since W̃_k ≡ K_k(A, Ar_0), the extrapolation methods above are Krylov subspace methods. RRE is an orthogonal projection method and is theoretically equivalent to GMRES, while MPE and MMPE are oblique projection methods and are equivalent to the method of Arnoldi and to the Hessenberg method [38], respectively. From this observation, we conclude that for k ≤ d the approximation t_k exists and is unique, unconditionally, for RRE; this is not always the case for MPE and MMPE. In fact, for the last two methods the approximation t_k (k < d) exists if and only if det(ΔS_k^T Δ²S_k) ≠ 0.

• From relations (1), (2) and (4) of Theorem 1, we see that the residuals of RRE are always defined, while those produced by MPE and MMPE may not exist.
• We also observe that if stagnation occurs in RRE (||r_k^{rre}|| = ||r_0|| for some k < d), then the corresponding angle cosine is zero and, from (2.19), the angle cosines associated with MPE and MMPE vanish as well; hence the approximations produced by MPE and MMPE are not defined.

When the linear process (2.12) is convergent, it is more useful in practice to apply the extrapolation methods after a fixed number p of basic iterations. We note also that, when these methods are used in their complete form, the required work and storage grow linearly with the iteration step. To overcome this drawback, we use them in a cycling mode, which means that we restart the algorithms after a chosen number m of iterations.
The algorithm is summarized as follows:
1. k = 0; choose x_0 and the numbers p and m.
2. Basic iteration:
   set t_0 = x_0,
   z_0 = t_0,
   z_{j+1} = Bz_j + b, j = 0, …, p − 1.
3. Extrapolation scheme:
   s_0 = z_p,
   s_{j+1} = Bs_j + b, j = 0, …, m,
   compute the approximation t_m by RRE, MPE or MMPE.
4. Set x_0 = t_m, k = k + 1 and go to 2.
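A minimal Python sketch of steps 1-4, using RRE as the extrapolation step (function and parameter names are illustrative; a dense least-squares solve stands in for the stable implementations of [32, 19]):

```python
import numpy as np

def rre_step(S):
    """RRE from the iterates S = [s_0, ..., s_{m+1}]: minimize
    ||Delta s_0 + Delta^2 S_m xi|| and return t_m = s_0 + Delta S_m xi."""
    dS = np.diff(S, axis=1)
    d2S = np.diff(S, n=2, axis=1)
    xi, *_ = np.linalg.lstsq(d2S, -dS[:, 0], rcond=None)
    return S[:, 0] + dS[:, :d2S.shape[1]] @ xi

def cycled_extrapolation(B, b, x0, p=2, m=5, max_cycles=20, tol=1e-10):
    """Steps 1-4: p basic iterations, then m + 1 further iterations feeding
    one extrapolation, restarted from t_m."""
    x = x0
    for _ in range(max_cycles):
        z = x
        for _ in range(p):                 # 2. basic iteration
            z = B @ z + b
        S = np.empty((len(b), m + 2))      # 3. extrapolation scheme
        S[:, 0] = z
        for j in range(m + 1):
            S[:, j + 1] = B @ S[:, j] + b
        x = rre_step(S)                    # 4. restart from t_m
        if np.linalg.norm(b - (np.eye(len(b)) - B) @ x) < tol:
            break
    return x
```

Cycling keeps the storage bounded by m + 2 vectors while still driving the residual of Ax = b (A = I − B) to zero when the basic iteration converges.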

Stable schemes for the computation of the approximation t_k are given in [32, 19]. In [32], Sidi gave an efficient implementation of the MPE and RRE methods based on the QR decomposition of the matrix ΔS_k. In [19], we used an LU decomposition of ΔS_k with a pivoting strategy. These implementations require little work and storage and are numerically more stable.
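As a rough illustration of the QR idea (a hedged sketch, not Sidi's exact algorithm from [32]): for MPE the coefficients solve the least-squares problem ΔS_k c ≈ −Δs_k, and a QR factorization of [ΔS_k, Δs_k] reduces this to a triangular solve involving only the factor R.

```python
import numpy as np

def mpe_qr(S):
    """MPE via QR of the difference matrix, in the spirit of [32].

    S holds the iterates s_0, ..., s_{k+1} as columns.
    """
    dS = np.diff(S, axis=1)          # [Delta s_0, ..., Delta s_k]
    k = dS.shape[1] - 1
    Q, R = np.linalg.qr(dS)          # thin QR of [Delta S_k, Delta s_k]
    # Least squares Delta S_k c ~ -Delta s_k, solved using only R:
    c = np.linalg.solve(R[:k, :k], -R[:k, k])
    c = np.append(c, 1.0)            # set c_k = 1
    gamma = c / c.sum()              # normalize the weights to sum to 1
    return S[:, :k + 1] @ gamma      # t_k = sum_j gamma_j s_j
```

Since only R enters the coefficient computation, the factorization can be updated column by column as new iterates arrive, which is what makes QR-based implementations economical in work and storage.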

2.3. Application to nonlinear systems

Consider the system of nonlinear equations
G(x) = x, (2.20)
where G : R^N → R^N, and let x* be a solution of (2.20).
For any vector x, the residual is defined by
r(x) = G(x) − x.
Let (s_j)_j be the sequence of vectors generated from an initial guess s_0 as follows: