Abstract

A new iterative method for polynomial root-finding, based on the development of two novel recursive functions, is proposed. In addition, the concept of polynomial pivots associated with these functions is introduced. The pivots have the property of lying close to some of the roots under certain conditions; this closeness leads us to propose them as efficient starting points for the proposed iterative sequences. Conditions for local convergence are studied, demonstrating that the new recursive sequences converge with linear velocity. Furthermore, an a priori checkable test of global convergence inside pivot-centered balls is proposed. In order to accelerate the convergence from linear to quadratic velocity, new corrected recursive functions together with their associated sequences are constructed. Both the original recursive functions (linear convergence) and the corrected ones (quadratic convergence) are validated with two nontrivial numerical examples, which illustrate the efficiency of the pivots as starting points, the quadratic convergence of the proposed functions, and the validity of the theoretical results.

1. Introduction

Perhaps the oldest problem in numerical analysis is the search for polynomial roots. Since Abel and Galois proved the nonexistence of radical-based solutions for general polynomials of order higher than four, the only way to obtain the complete set of roots has been numerical computation, and in particular iterative methods. Any iterative method requires a recursive function together with an initial guess. Most methods focus on the local efficiency of the recursive schemes, convergence conditions, and velocity at the roots, whereas the study of where (and why) to start the iterative sequences has received less attention. Hence, the challenge is to find reasonably efficient initial guesses, that is, starting points for the iterative sequences lying close to some of the roots.

The problem of searching for zeros of functions has been extensively discussed in several books on numerical analysis; see, for instance, [1–5]. In particular, a survey of methods specially developed for polynomials has recently been published by McNamee [6]. The latter has compiled an extensive bibliography [7–10] in which probably most of the published methods for root-finding are included. In addition, McNamee [6] has proposed an indicator to measure the efficiency of an iterative method and has applied it to the most common approaches. Other relevant reviews of algorithms for searching zeros have been presented by Pan [11–14] and by Pan and Zheng [14].

As mentioned, the location of initial approximations is of special relevance in iterative schemes, so that success or failure may largely depend on it. The book of Kyurkchiev [15] lists a selection of initial approximations specially developed for simultaneous methods. Root bounds obtained from the polynomial coefficients have traditionally been a rough but, in some cases, useful tool for locating zeros; an interesting survey of these bounds is given in [6]. The quotient-difference method [16] additionally provides an initial approximation, although its construction requires all polynomial coefficients to be nonzero. Through the study of the convergence spectrum, Hubbard et al. [17] have proposed a finite and relatively small set of starting points such that the convergence of Newton's method is guaranteed from at least one of them. Bini et al. [18] developed improved initial conditions for the well-known QR method. Petković et al. [19–21] have proposed parametric families of simultaneous root-finding methods based on the Hansen-Patrick formula [22]. Monsi et al. [23, 24] have introduced the so-called point symmetric single step procedure and its variants for finding zeros simultaneously. In addition, the initial conditions necessary to guarantee convergence using Smale's point estimation theory [25] have been reviewed in [26–28]. Zhu [29–31] has analyzed the initial conditions of convergence for Durand-Kerner's method and for the Newton-like simultaneous methods based on the parallel circular iteration. Lázaro et al. [32] proposed efficient recursive functions to reach eigenvalues in vibrating systems independently of the chosen starting point, showing global convergence in the whole complex plane. Kornerup and Muller [33] and Kjurkchiev [34] have also discussed the influence of starting points for certain Newton-Raphson iterations and for Euler-Chebyshev's method, respectively. Saidanlu et al. [35] studied the conditions for determining initial approximations of exact roots for a certain iterative matrix zero-finding method. The recently published book of Petković et al. [36] explores the development of powerful multipoint algorithms to solve nonlinear equations involved in research problems.

In this paper, we consider the -order polynomial , where for and , and propose a pair of novel recursive functions for polynomial root-finding. In addition, associated with each of these functions, a characteristic complex number can be calculated from the polynomial coefficients and . Both complex numbers, named pivots, lie close to some root of the polynomial when they become large (in absolute value) with respect to the rest of the polynomial coefficients. In fact, it is demonstrated that the pivots are attractive fixed points at complex infinity. Conditions for local and global convergence of the new iterative scheme are provided; the convergence is demonstrated within a family of closed balls centered at the pivots under certain a priori verifiable conditions. In fact, based on the theorem of global convergence, a test to identify the polynomial class for which convergence is ensured is proposed. It is also proved that the velocity of convergence is linear; in order to accelerate the iterative process up to quadratic order, new corrected recursive functions are proposed based on the Steffensen acceleration approach. The corrected recursive functions present the same properties as the originals with respect to the pivots, but with quadratic convergence.

In order to validate the theoretical results, two numerical examples are analyzed: in the first, the proposed recursive functions are studied for single roots, and in the second, for multiple roots. In addition, the influence of the pivots is discussed and the test of global convergence is applied, directly relating the proposed pivots to the success of the iteration scheme.

2. The New Recursive Functions

2.1. Definitions and Previous Results

Based on the polynomial of (1), the following definition introduces two complex-valued functions of complex variable of special interest in this paper.

Definition 1 (recursive functions). Associated with the polynomial of (1), one introduces two functions defined as where that will be named recursive functions (RF).

As mentioned above, represents the square root of a complex number whose branch line is . Since the origin is the only branch point of the square root function so defined, the region of analyticity can consequently be expressed as where

The following proposition relates the polynomial roots with the fixed points of the defined functions.

Proposition 2. If , then a complex number is a root of the polynomial if and only if it is a fixed point of either the function or .

Proof. Starting from the general form of the polynomial given by (1), we obtain an equivalent expression , where is the function defined in (4). Now, the right-hand side of (7) can be manipulated to obtain the product of three terms in which we find the RF introduced in (2), (3): Since , the value cannot be a root. Hence, a complex number satisfies if and only if it is a fixed point of one of the RF; that is, either or .

The following proposition presents a characterization of the multiple roots of through the derivatives of the functions and :

Proposition 3. Let be a root of .
(i) If , then the root has multiplicity if and only if and for .
(ii) If , then the root has multiplicity if and only if and for .

Proof. (i) If with multiplicity , it is verified that , for . The function can be calculated directly by taking derivatives in (8). The first derivative evaluated at is , where the equality has been used. If , this equality would imply , where , which contradicts the hypothesis; therefore, necessarily holds. Evaluating now the th derivative for at results in and hence for . Conversely, from (10) it is immediately verified that due to . From (11), if , for , holds by induction.
(ii) If , then the polynomial derivatives evaluated at lead to , and hence the relationship between root multiplicity and the derivatives is obtained following the same arguments as in (i).

Since each root of the polynomial is a fixed point of one of the functions or , the question arises whether, starting at a certain point and iterating these functions, convergence to any of the roots holds. The first step in our study is to introduce the recursive sequences associated with the equally named functions, together with the concept of point of attraction.

Definition 4 (recursive sequence). Let . One defines recursive sequences, and , associated with the functions and as the set of complex numbers calculated as

Definition 5 (point of attraction). A complex number is said to be a point of attraction of if there exists an initial point , such that Similarly, a point of attraction of is defined as the limit of the sequence . Obviously, any point of attraction is a root of , but the converse is not true.
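The recursive sequences of Definition 4 are ordinary fixed-point iterations. Since the closed-form RF of (2), (3) are not reproduced here, the driver below is a generic sketch: the name `fixed_point_iterate` is an assumption, and the demo map (Heron's iteration for the square root of 2, not one of the paper's RF) is chosen only because it is a well-known contractive example.

```python
def fixed_point_iterate(F, z0, tol=1e-12, max_iter=500):
    """Iterate z_{k+1} = F(z_k) until |z_{k+1} - z_k| < tol.

    F is any complex fixed-point map (e.g. one of the paper's RF);
    returns the limit and the number of iterations used."""
    z = complex(z0)
    for k in range(max_iter):
        z_next = F(z)
        if abs(z_next - z) < tol:
            return z_next, k + 1
        z = z_next
    raise RuntimeError("no convergence within max_iter iterations")

# Demo: F(z) = (z + 2/z)/2 has attractive fixed points at +/- sqrt(2);
# starting in the right half-plane, the iteration reaches +sqrt(2).
root, iters = fixed_point_iterate(lambda z: (z + 2 / z) / 2, 1 + 1j)
```

A sequence such as of Definition 4 is obtained by passing the corresponding RF as `F` and a pivot as `z0`.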

2.2. Local Convergence

This subsection deals with the necessary conditions for the local convergence of the recursive sequences. The main result is presented in Theorem 8. Beforehand, we introduce the lemma of McLeod in order to prove the Lipschitz continuity of complex-valued functions of a complex variable. This result will be used repeatedly throughout this work.

Lemma 6 (McLeod [37]). Let be a complex function, analytic in a convex domain . If is the (closed) segment between any two points , then there exist two complex numbers and a certain , such that

Corollary 7. If , then .

Proof. Consider

Theorem 8. Let one assume that is a single root of the equation and that . If , then there exist two positive real numbers and , such that
(i) the ball ;
(ii) is a point of attraction of , ;
(iii) , .
Otherwise, if , then there exist two positive real numbers and , such that
(i) the ball ;
(ii) is a point of attraction of , ;
(iii) , .

Proof. The proof is developed for the first case, that is, ; for the function the procedure would be analogous. The demonstration is based on the application of the well-known Banach contraction principle; for that, it is only necessary to prove the contractivity and the self-mapping property of in a closed ball centered at the root. Let us evaluate in : Since and is an open set, there exists such that the (closed) ball . In addition, is continuous at ; therefore there exist certain real numbers and such that , . If , it follows from Corollary 7 that is contractive in the ball . Moreover, is a self-mapping, as can easily be demonstrated from the contractivity. Indeed, Therefore, the hypothesis of Banach's fixed point theorem is verified [38] and the convergence of the sequence to the (unique) fixed point in is guaranteed. Furthermore, the error rate in each iteration can be bounded by

Theorem 8 does not cover the case of multiple roots since for all of them holds, although this fact does not imply that these roots cannot be points of attraction. However, in such cases the sequences will converge more slowly [2].
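The contraction requirement of Theorem 8 (a derivative bound strictly below one on a ball around the root) can be probed numerically when the recursive function is available as a callable. The sketch below is an illustration under that assumption: the helper name, the random sampling scheme, and the demo map are not from the paper. Since the maps involved are analytic, a central difference along the real direction suffices to estimate the complex derivative.

```python
import cmath
import random

def sampled_contraction_bound(F, center, radius, n_samples=2000, h=1e-6, seed=0):
    """Estimate sup |F'(z)| over the closed ball B(center, radius) by random
    sampling; F is assumed analytic, so a real-direction central difference
    recovers the complex derivative."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n_samples):
        # Uniform sample in the disk: radius scaled by sqrt of a uniform draw.
        r = radius * rng.random() ** 0.5
        theta = 2 * cmath.pi * rng.random()
        z = center + r * cmath.exp(1j * theta)
        dF = (F(z + h) - F(z - h)) / (2 * h)
        worst = max(worst, abs(dF))
    return worst

# Demo map F(z) = (z + 2/z)/2 near its fixed point sqrt(2), where F' vanishes:
# a small ball should report a contraction factor well below 1.
q = sampled_contraction_bound(lambda z: (z + 2 / z) / 2, 2**0.5, 0.1)
```

A value of `q` below one on some ball is consistent with the hypotheses of Theorem 8; a value at or above one means the test is inconclusive on that ball.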

The necessary condition imposed by (17) in the previous theorem allows predicting certain characteristics of the roots that present local convergence. Assuming that the absolute value of a root is greater than certain , that is, , and denoting , after some operations we obtain

In view of this reasoning, it seems that roots with , that is, those with large absolute value, will present local convergence for some of the proposed recursive sequences. However, although intuitive, this result is not valid in general, since the inequalities given by (21) do not guarantee (17). Let us refine the necessary conditions to be imposed on the polynomial coefficients in order to check convergence. For that, we define the concept of the pivots of a polynomial.

2.3. Global Convergence

For any iterative numerical scheme, it is always desirable to provide a priori information about the convergence. If a certain recursive sequence is convergent for any starting point inside a certain complex set, such a sequence is said to be globally convergent. This section aims to study the global convergence of the recursive sequences within closed balls centered at two characteristic points of the polynomial, and , named pivots of the polynomial.

Definition 9 (pivots). One defines the pivots of the polynomial as the following complex numbers: where

The pivots of a polynomial have the property of lying close to some root when they are relatively large (in absolute value) with respect to the rest of the polynomial coefficients. This may be an important advantage because the pivots can then be used as effective initial guesses in a recursive scheme. This behavior is explained by the results of this section. The first result (Proposition 10) states that the pivot is a point of attraction at (complex) infinity of . The same argument holds for the pivot and .

Proposition 10. Let be the pivots of the polynomial ; then
(i) , ;
(ii) , .

Proof. (i) From the definition of given in (4), let us calculate the following limits: Now, from the expressions of and given by (2), (18) and by (22), (24) The proof for the function is analogous.

This result justifies the use of the family of closed balls centered at and as suitable sets for the global convergence of and , respectively. In the remainder of this section, only the case of the recursive function will be rigorously analyzed; the proofs of the lemmas and theorems can easily be extrapolated to the case of .

Let us assume that and denote by the radius of the closed ball centered at ; that is, .

Lemma 11. Let be the distance between the origin and the ball ; that is, . Let one denote . If , then for any
(i) ;
(ii) ;
(iii) ;
(iv) ;
(v) , .

Proof. (i) From the definition of , holds for all . Consequently , where the expression of the general term of the sum has been used.
(ii) Using the previous result,
(iii) Following the same reasoning as that of (27), , where now the expression of the sum has been used.
(iv) From the bounds calculated in (i) and (ii), . Here the number has been introduced in order to simplify the notation in subsequent developments.
(v) Let us define the function as the one that verifies the following identity: ; after some direct operations and using the definition of , we obtain . Consequently, . Now, using Lemma 11(iv), it is verified that , . Hence, assuming that and expanding in power series,

The conclusions of this lemma can easily be extrapolated to the pivot by simply changing to in the above expressions. In this case, is defined as the distance between the origin and the ball ; that is, .

Lemma 12. Under the same conditions of Lemma 11, let us consider the positive real numbers that depend on the radius of the ball centered at pivot and on : If and , then
(i) , ;
(ii) , .

Proof. (i) Evaluating the derivative of at a point and using Lemmas 11(ii), (iii), and (v),
(ii) From the definition of given in (32) and Lemmas 11(i), (iv), and (v), the distance between and can be bounded by

With the help of the above lemmas, the main result on global convergence can already be presented.

Theorem 13. Let , , and be the numbers defined in Lemmas 11 and 12. Also, let us assume that, for the radius , the ball . If , , and , then there exists a unique single root of the polynomial , which is a point of attraction of the function for any . Moreover,

Proof. Let us demonstrate that is a contractive self-mapping. Indeed, the contractivity arises from the inequality , deduced from Lemma 12(ii). Furthermore, if , the self-mapping property can be demonstrated from Lemma 12(ii) and the hypothesis : Hence, the complex number , . Therefore, the Banach contraction principle can be applied, ensuring that there exists a unique fixed point of the function so that , for any . Furthermore

Lemma 12 and Theorem 13 describe the conditions under which global convergence towards fixed points of in balls of the form can be guaranteed. Analogous versions of these results hold for the function , for the pivot , and for the family of balls . It is clear that, in such a case, , , and have exactly the same mathematical form, although is now the radius of a ball centered at and is the distance between the origin and . For consistency of presentation, we also state the associated lemma and theorem on the convergence of sequences within balls .

Lemma 14. Under the same conditions of Lemma 11, let one consider the positive real numbers that depend on the radius of the ball centered at pivot and on : If and , then
(i) , ;
(ii) , .

Theorem 15. Let , , and be the numbers defined in Lemmas 11 and 14. Also, let one assume that, for the radius , the ball . If , , and , then there exists a unique single root of the polynomial , which is a point of attraction of the function for any . Moreover,

From the definition of we have the two inequalities Comparing these expressions with (35), (36), it is clear that the bounds and have the same mathematical form as those of and in . Consequently, the proofs of Lemma 14 and Theorem 15 can be omitted, since they can be deduced directly from the proofs of Lemma 12 and Theorem 13.

The necessary conditions of this theorem are given in terms of the three numbers , , and ((30), (34)), called convergence indexes and introduced in Lemmas 11, 12, and 14. These indexes can be used to construct an a priori test of global convergence, since their calculation requires no previous root-computing. For a chosen radius , the indexes ensure the convergence of the RF provided that and . If these inequalities are verified for the ball as well as for , then we can ensure that the two sequences and are convergent and the method reaches two different roots. Conversely, if no radius exists verifying , , and for some ball or , global convergence cannot be ensured a priori. The latter is obviously not synonymous with nonconvergence, because local convergence could still arise. Let us illustrate the application of this test with an example, considering the 16th-order polynomial The test is equivalent to finding at least one radius so that and for both the sequences and . Thus, the indexes associated with the pivot (global convergence of ) are represented as a function of the radius in Figure 1. It can be observed that in the range , consequently ensuring global convergence for any ball centered at with radius in this range. Furthermore, an a priori estimation of the (linear) velocity of convergence can be calculated for , resulting in . Otherwise, testing balls of the form , centered at the other pivot (global convergence of ), we observe that in the whole range . Note that although the recursive sequence does not pass the test, it locally converges to a root. The polynomial of (55) is an example that presents good behavior with both recursive sequences, but only the sequence passes the test. In general, the proposed test is somewhat conservative; in fact, several numerical experiments have shown that convergence of recursive sequences can succeed (starting at the pivots) for polynomials that do not pass the test.
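Since the closed-form convergence indexes of (30) and (34) are not reproduced here, the a priori test can still be organized as a sweep over candidate radii, with the inequalities supplied as callables. Everything below (the function name, the toy index) is an illustrative skeleton under that assumption, not the paper's formulas.

```python
def global_convergence_test(conditions, radii):
    """Return the candidate radii for which every a priori condition holds.

    Each condition is a callable r -> bool standing in for one of the
    inequalities on the convergence indexes (e.g. lambda r: q(r) < 1)."""
    return [r for r in radii if all(cond(r) for cond in conditions)]

# Toy index q(r) = 0.2 + r**2: the test passes only while q(r) < 1,
# i.e. for radii below sqrt(0.8), roughly 0.894.
candidates = [0.1 * k for k in range(1, 15)]
passing = global_convergence_test([lambda r: 0.2 + r**2 < 1.0], candidates)
```

If `passing` is nonempty for the ball around one pivot, global convergence of the corresponding sequence is ensured there; an empty result is inconclusive, matching the conservative character of the test noted above.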

Figure 1: Test of global convergence for polynomial .

To conclude this section, we make some remarks on the convergence to multiple roots. As proved in Proposition 3, a root with multiplicity verifies . Therefore, cannot be contained in any closed ball verifying . In this case, the Banach contraction principle does not provide information on the convergence, although when convergence holds the scheme presents sublinear velocity [2], as shown in the numerical examples.

3. Corrected Recursive Functions

3.1. Definitions and Previous Results

Since the velocity of the recursive scheme given by the proposed functions and is linear for single roots, the objective is to propose two new functions whose associated recursive sequences present quadratic convergence. For that, we construct the following two functions based on Steffensen's acceleration method [39] for iterative fixed-point schemes (more details can be found in [2]): We name these functions corrected recursive functions (CRF); consequently, associated with them there exist two recursive fixed-point schemes, named corrected recursive sequences and presented in the following definition.

Definition 16 (corrected recursive sequences). Let be two complex numbers and let and be the CRF. Let one define the associated corrected recursive sequences as

As will be demonstrated, the introduction of the functions and considerably improves the convergence velocity. In fact, both functions have the same properties as Steffensen's method: (a) any fixed point of and is a point of attraction of and , even for multiple roots, and (b) the convergence to the fixed points is quadratic for single roots.
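The CRF are built on Steffensen's acceleration, so a minimal sketch of the standard Steffensen construction for a generic fixed-point map illustrates the idea; the exact CRF of the paper are given by the (elided) formulas above, and the helper name and demo map (the classical cosine fixed-point problem, not one of the paper's RF) are assumptions.

```python
import cmath

def steffensen_map(F):
    """Build the corrected map G from a fixed-point map F via Steffensen's
    formula G(z) = z - (F(z) - z)**2 / (F(F(z)) - 2*F(z) + z), which upgrades
    linear convergence of z -> F(z) to quadratic at single fixed points."""
    def G(z):
        fz = F(z)
        ffz = F(fz)
        denom = ffz - 2 * fz + z
        if denom == 0:  # already at (or numerically on) a fixed point
            return fz
        return z - (fz - z) ** 2 / denom
    return G

# Demo on a linearly convergent map: F(z) = cos(z) has a fixed point near
# 0.739085; the corrected map reaches it in a handful of iterations.
G = steffensen_map(cmath.cos)
z = 0.5 + 0j
for _ in range(6):
    z = G(z)
```

The same wrapper applied to one of the paper's RF would play the role of the corresponding CRF, started at the associated pivot.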

3.2. Local Convergence

The present subsection deals with the local convergence of the sequences and . Let us see that the introduction of and effectively accelerates the convergence. Indeed, naming and , it is easy to prove that the corrected recursive sequences are Newton schemes for and ; that is, Consequently, the convergence can be summarized in the following theorem. The details of the proof can be found in reference [2].

Theorem 17. Let be a root of the equation with multiplicity .
(i) If , then and . Furthermore, the sequence locally converges to with quadratic velocity when and with linear velocity when .
(ii) If , then and . Furthermore, the sequence locally converges to with quadratic velocity when and with linear velocity when .

The previous theorem states that any root of the polynomial is a point of attraction of some corrected recursive function. If the initial guess is close enough to a root, the sequence is assured to converge. However, since the corrected recursive sequences are, after all, two Newton schemes, they are sensitive to the chosen initial guesses. Fortunately, the behavior of the CRF with respect to the pivots is the same as that of the RF. Thus, when the pivots and (see Definition 9) are relatively large in absolute value, at least one of them lies close to some root, which in turn is also large (in absolute value) with respect to the other roots.

3.3. Global Convergence

As shown in Section 2.3, the necessary conditions for global convergence of the RF and are related to the location of , defined in (22). The main results were presented in Proposition 10 and Theorem 13. As with and , are also points of attraction of the corrected recursive functions at infinity. This property is demonstrated in the following proposition.

Proposition 18. If are the complex numbers defined by (22), then
(i) , ;
(ii) , .

Proof. From the results obtained in Proposition 10, Differentiating (4) twice, Consequently, also holds. Hence In the same way, it is verified that . Finally, calculating the derivatives of and and taking limits results in

The previous proposition suggests that, under conditions similar to those imposed for and , global convergence in a closed ball centered at or can also be assured for and . Therefore, choosing and , the sequences and/or may converge to a root provided that , are large in absolute value. This choice considerably improves the efficiency of the recursive scheme because two good approximations are available as starting points. In fact, as will be shown in the numerical examples, the approximation given by or (and) becomes a very accurate estimation of one (two) root(s) of the polynomial.

We think that the polynomial pivots , and their one-step initial approximations , , represent the main advantage of the present method, since they themselves constitute good estimations of one or even two roots for certain classes of polynomials. Hence, they could be used as initial guesses not only for the proposed recursive sequences but also within other efficient root-finding algorithms [40]. Another contribution of the present paper is to identify this class of polynomials quantitatively. For that, Theorem 13 on global convergence inside closed balls centered at the pivots and is used. The theory shows that the polynomials that pass the test present global convergence in previously defined closed balls. Numerical experiments show that the pivots of these classes of polynomials usually lie close to some root. Furthermore, and/or represent in such cases a much more accurate approximation of this root, which in general coincides with the largest one. This last claim was qualitatively anticipated in Theorem 8 and in the subsequent comments; see (21). The following subsection, focused on the convergence region, also validates this claim, showing a close relationship between the root's size (in the sense of its absolute value) and the quality of the proposed method.

3.4. Remarks on the Convergence Region

As proved in the theorems presented in this section, local convergence is always guaranteed for the corrected recursive functions. Therefore, choosing a starting point close enough to a root, the recursive sequence will converge. However, the following question arises: which complex number should be chosen as starting point? As is well known, this is a key issue, but a very difficult one to answer for any non-globally convergent root finder. It was proved in the previous subsections that the pivots of the polynomial are a good choice under certain conditions. But how can the convergence region, that is, the set of valid initial points for which convergence holds, be related to the functions and ? As expected from the previous argumentation, roots with larger absolute value also present a wider convergence radius.

To this end, firstly it will be demonstrated that if is a fixed point of and , it is verified that

From the definition of and given in (4) and (2), it follows that the approximation order is

Now, assuming that is a single root (for multiple roots, the analysis would be analog and will be omitted), the second and the third derivative of at can be directly calculated from its definition given by (44), resulting in

Note that since has multiplicity , it is verified that , from Proposition 3. Introducing the approximation order of (52) in (53), it follows that These expressions can easily be generalized by induction giving and consequently (51) holds.

Secondly, let us consider the closed ball centered at the root with radius . From the theorem of Earle-Hamilton [41] on fixed points of analytical self-mappings, the sequence starting at any point is convergent towards if there exists a positive real number (in general depending on ) so that Since is a single root, and . The expansion of function around leads to

Approximating to the second order, the distance between and the root can be bounded by where holds only if Therefore, a second order approximation of generates a radius of convergence inversely proportional to , which in turn tends to zero when the root increases in absolute value, as shown in (54).

Approximating to the third order, the distance between and the root can be bounded by and holds only if Since and , the radius of convergence when .

For the th order approximation of , new values of the radius of convergence can be calculated as result of the following inequality:

Intuitively, it can be generalized that the radius of convergence associated with the th approximation of also increases with the roots, so that

Therefore, the larger the root in absolute value, the larger the convergence region, so the proposed numerical method is expected to work better in the sense that there exist many more starting points from which convergence holds. In the numerical examples this behavior will be validated, showing that the proposed method presents convergence towards the largest roots. In practice, polynomials with relatively high absolute values of the pivots with respect to the rest of the polynomial coefficients will present good convergence behavior.

4. Numerical Examples

4.1. Example 1

Let us consider the following order polynomial: The local convergence of the recursive functions is directly related to the first derivatives and evaluated at the roots (see (18)), shown in absolute value in Table 1. In view of the results, it can be ensured that the convergence of the sequence holds only for the root , since it is the only one for which . Let us take the initial values of the sequences equal to the pivots calculated from (22); that is, Hence, the recursive sequences , and the corrected , can be built. To visualize the convergence process, the iteration errors are represented in Figure 2, which shows the linear decay of the sequence , as demonstrated in Theorem 8. By contrast, as predicted by Theorem 17, the convergence of the corrected recursive sequences and is quadratic, which allows reaching the roots and with four and seven iterations, respectively. A tolerance of for the iteration error has been assumed. Moreover, testing the recursive sequences for other starting points (not shown here), we find that they also converge in the great majority of cases. From Table 1, one can notice that and take their lowest values at these roots, and therefore the first estimation of the convergence radius is higher than at the other roots; see (58). This fact explains the satisfactory behavior of the method in this example and with these roots.

Table 1: Results of local convergence for Example 1.

Figure 2: Example 1. Iteration errors. Pivots are and .

Some remarks on the influence of the pivots and can be made. The relative error between the root and is approximately . This confirms the theoretical result obtained in Proposition 10 and Theorem 13: roots relatively larger (in absolute value) than the rest are close to the values and defined by (22). If, for example, instead of the root we take , the pivot is updated to and the relative error becomes . It is important to note that the approximations given by and are closed forms that depend on the polynomial coefficients, in particular on and . These pivots have been proved to be excellent initial estimations under certain conditions. Moreover, if these conditions hold, improved closed forms can be constructed by simply evaluating or , which now depend on the complete set of coefficients. Thus, it can easily be verified that for the current example , and the relative distance to the root is only , a very accurate one-step estimation.

In order to compare the proposed method with another one of quadratic convergence, the iteration error for Newton's method has also been represented in Figure 2. If is Newton's sequence, it is well known that the sequence is Since the pivots of the polynomial, and , are relatively close to the roots and , it should be expected that Newton's sequence also converges to these roots if are the starting points. However, both sequences converge to and , and they do so quadratically, as shown in Figure 2. Note that some iterations are needed before Newton's scheme finds the convergence region of the roots, after which the quadratic convergence can be visualized. This shortcoming does not occur in this example with the proposed method, which provides not only recursive functions but also efficient initial guesses given by the pivots.
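The Newton comparison above can be reproduced in a few lines. Since the coefficients of the example polynomial are not reproduced here, the sketch below uses Wallis's classical cubic as a placeholder, with p and p' evaluated in a single Horner pass; all names are illustrative.

```python
def horner_with_derivative(coeffs, z):
    """Evaluate p(z) and p'(z) in one Horner pass, for coeffs given
    highest degree first: coeffs = [a_n, ..., a_1, a_0]."""
    p, dp = 0j, 0j
    for a in coeffs:
        dp = dp * z + p
        p = p * z + a
    return p, dp

def newton_polynomial(coeffs, z0, tol=1e-12, max_iter=100):
    """Newton's iteration z <- z - p(z)/p'(z) starting from z0,
    stopping when |p(z)| falls below tol."""
    z = complex(z0)
    for _ in range(max_iter):
        p, dp = horner_with_derivative(coeffs, z)
        if abs(p) < tol:
            return z
        z -= p / dp
    return z

# Placeholder cubic z**3 - 2*z - 5 (Wallis's classic Newton example);
# its real root is approximately 2.0945514815.
root = newton_polynomial([1, 0, -2, -5], 2.0)
```

Passing the coefficients of the example polynomial and a pivot as `z0` would reproduce the Newton curves of Figure 2.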

4.2. Example 2

In this example, the behavior of the method for multiple roots is examined using the polynomial

Note that it has the same roots as those of Example 1, though now has multiplicity . As in Example 1, the relevant values related to convergence are listed in Table 2. It can be observed that the double root exactly verifies , as predicted by Theorem 13. Although this does not guarantee the convergence of , it may occur that, depending on the initial point , the approaching path lies inside the region where but close to unity. In this case the convergence order becomes sublinear, as can be seen for in Figure 3. As predicted by Theorem 17, the acceleration process through leads, for this double root, to linear convergence, which can clearly be visualized in Figure 3 for . However, the iteration error is not able to decrease below approximately . This limit is due to the singularity of at , which, although removable, produces some numerical instabilities when the function is evaluated in the vicinity of the root. The radius of convergence can also be estimated for the double root using a development parallel to that for single roots. The root presents the largest value of , a fact that can explain why the sequence, although linearly, converges to this root for initial values relatively far from it; for example, the freely chosen .

Table 2: Results of local convergence for Example 2.

Figure 3: Example 2. Iteration errors. Pivots are and .

The results of the proposed method for the functions and are similar to those of Example 1. As shown in Table 2, the sequence again presents local convergence towards the root , with an approximate convergence radius significantly higher than those of the rest of the fixed points of .

5. Conclusions

In this paper, a new method for polynomial root-finding is presented. The key idea is the construction of two complex functions, called recursive functions, which can be used in a fixed-point recursive scheme. Necessary conditions for local and global convergence are provided. The latter are studied in closed balls centered at two characteristic points, called pivots of the polynomial. It is demonstrated that the pivots are fixed points of the recursive functions at complex infinity. Necessary a priori conditions for global convergence are given in terms of the polynomial coefficients. In practice, if a root is relatively larger in absolute value than the others, one of the pivots lies in the proximity of that root. In such cases, the pivots have been demonstrated to be very good initial guesses, not only for the proposed recursive sequences but also for any existing iterative root-finding method. In practice, polynomials whose pivots are relatively larger than the rest of the coefficients usually present convergence. In addition, the higher the absolute value of the pivots, the faster the convergence.

The convergence of the recursive sequences is linear and is not guaranteed for any initial point in the complex plane. For these reasons, corrected recursive functions are constructed to accelerate the convergence. These functions are inspired by the well-known Steffensen acceleration method and present the same properties: local convergence, with quadratic error-decay velocity for single roots and linear for multiple ones. The convergence region around the roots, that is, the set of complex values from which convergence holds, is