A geometric modification to the Newton-Secant method to obtain the root of a nonlinear equation is described and analyzed. With the same number of evaluations, the modified method converges faster than Newton's method and the convergence order of the new method is 1 + √2 ≈ 2.4142. The numerical examples and the dynamical analysis show that the new method is robust and converges to the root in many cases where Newton's method and other recently published methods fail.

1. Introduction

One of the most important problems in numerical analysis is finding the solution(s) of nonlinear equations. Given x_0, a close approximation to the root of a nonlinear equation f(x) = 0, Newton's method [1–3], defined as

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad k \ge 0, \tag{1}$$

produces a sequence that converges quadratically to a simple root r of f(x) = 0. Many variations of Newton's method have recently been published [4–6], with different kinds of modifications of the basic scheme. In particular, in [7] the authors presented a two-step iterative method with memory, based on a simple modification of Newton's method, with the same number of function and derivative evaluations but with convergence order 1 + √2. This method, starting with x_0 sufficiently close to the root r, is defined as

$$x_k^* = x_k - \frac{f(x_k)}{f'\bigl((x_{k-1} + x_{k-1}^*)/2\bigr)}, \qquad x_{k+1} = x_k - \frac{f(x_k)}{f'\bigl((x_k + x_k^*)/2\bigr)} \tag{2}$$

for k ≥ 0 and with x_{-1}^* = x_{-1} = x_0. Observe that in the first step this method performs three functional evaluations, and only two in the subsequent iteration steps. For further discussion of this method, see [8].
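As an illustration, method (2) can be sketched in a few lines; the function names and the double-precision setting below are ours (the original works in multiprecision), and the stored derivative value is reused so that each iteration after the first costs one evaluation of f and one of f′.

```python
# Sketch of the two-step method with memory (2) of McDougall and
# Wotherspoon; names and tolerances are ours, not the paper's.
def mcdougall(f, fprime, x0, tol=1e-12, max_iter=100):
    """Order 1 + sqrt(2): after the first step, each iteration costs one
    evaluation of f and one of f', since f' at the previous midpoint is
    reused by the predictor."""
    x = x0
    d = fprime(x0)                       # f'((x_{-1} + x*_{-1})/2) = f'(x_0)
    for _ in range(max_iter):
        fx = f(x)
        x_star = x - fx / d              # predictor reuses the stored slope
        d = fprime(0.5 * (x + x_star))   # the only new derivative value
        x_new = x - fx / d               # corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# e.g. the positive root of x^2 - 2:
root = mcdougall(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```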

2. Development of the Method

Suppose that r is a root of a nonlinear equation f(x) = 0, where f: I → R is a scalar function on an open interval I. Assume that x_{-1} ∈ I is close to r and f'(x_{-1}) ≠ 0; then we define

$$x_0 = x_{-1} - \frac{f(x_{-1})}{f'(x_{-1})} \tag{3}$$

(Newton's method).

Now, as can be seen in Figure 1(a), if f(x) is a convex function, we have the inequality

$$f'(x_0) \le \frac{f(x_0) - f(x_{-1})}{x_0 - x_{-1}}; \tag{4}$$

then, using the two points (x_0, f(x_0)) and (x_{-1}, f(x_{-1})f'(x_0)(x_0 - x_{-1})/(f(x_0) - f(x_{-1}))), we have the line equation

$$y - f(x_0) = \frac{f(x_0) - \dfrac{f(x_{-1})f'(x_0)(x_0 - x_{-1})}{f(x_0) - f(x_{-1})}}{x_0 - x_{-1}}\,(x - x_0); \tag{5}$$

thus, setting y = 0, we can define

$$x_1 = x_0 - \frac{f(x_0)\bigl(f(x_0) - f(x_{-1})\bigr)(x_0 - x_{-1})}{f(x_0)\bigl(f(x_0) - f(x_{-1})\bigr) - f(x_{-1})f'(x_0)(x_0 - x_{-1})}. \tag{6}$$

Figure 1

Geometric modification of the Newton-Secant method. x_0 is the point given by Newton's method. In the next step, using condition (4), we have f(x_{-1}) ≥ f(x_{-1})f'(x_0)(x_0 - x_{-1})/(f(x_0) - f(x_{-1})). Thus the line passing through the points (x_0, f(x_0)) and (x_{-1}, f(x_{-1})f'(x_0)(x_0 - x_{-1})/(f(x_0) - f(x_{-1}))) gives a better approximation x_1 to the root r than the Newton iterate x_0 - f(x_0)/f'(x_0). The concave case is analogous to the convex case.

(a)

Convex case

(b)

Concave case

In this way, given x_{-1}, we define x_0 = x_{-1} - f(x_{-1})/f'(x_{-1}) and

$$x_{k+1} = x_k - \frac{f(x_k)\bigl(f(x_k) - f(x_{k-1})\bigr)(x_k - x_{k-1})}{f(x_k)\bigl(f(x_k) - f(x_{k-1})\bigr) - f(x_{k-1})f'(x_k)(x_k - x_{k-1})} \tag{7}$$

for k ≥ 0, which uses two functional evaluations per step.
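A minimal double-precision sketch of this iteration follows; the names are ours, and the paper's experiments actually use multiprecision arithmetic. Note that f(x_{k-1}) is carried over from the previous step, so each iteration evaluates f and f' once each.

```python
def nsm(f, fprime, x_m1, tol=1e-12, max_iter=100):
    """Newton-Secant modification (NSM): seed x_0 with one Newton step
    from x_{-1} (equation (3)), then apply iteration (7)."""
    x_prev, f_prev = x_m1, f(x_m1)
    x = x_prev - f_prev / fprime(x_prev)    # x_0 by a Newton step
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)           # the two new evaluations
        num = fx * (fx - f_prev) * (x - x_prev)
        den = fx * (fx - f_prev) - f_prev * dfx * (x - x_prev)
        x_new = x - num / den
        if abs(x_new - x) < tol:
            return x_new
        x_prev, f_prev, x = x, fx, x_new
    return x

# e.g. the positive root of f(x) = x^2 - 2:
root = nsm(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```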

In the case of Figure 1(b), the concave case, we have the inequality f'(x_0) ≥ (f(x_0) - f(x_{-1}))/(x_0 - x_{-1}), and, analogously to the convex case, we obtain the same result.

3. Convergence Analysis

Theorem 1.

Let f: I ⊂ R → R be a sufficiently differentiable function and let r ∈ I be a simple zero of f(x) = 0 in an open interval I, with f'(x) ≠ 0 on I. If x_{-1} ∈ I is sufficiently close to r, then the method NSM, as defined by (7), has convergence order equal to 1 + √2.

Proof.

We consider the following equalities:

$$x_{k+1} - r = \varepsilon_{k+1}, \qquad x_k - r = \varepsilon_k, \qquad x_{k-1} - r = \varepsilon_{k-1}, \tag{8}$$

and the following Taylor expansions around the root r:

$$f(x_k) = f'(r)\varepsilon_k + \frac{f''(r)}{2}\varepsilon_k^2 + A\varepsilon_k^3, \qquad f(x_{k-1}) = f'(r)\varepsilon_{k-1} + \frac{f''(r)}{2}\varepsilon_{k-1}^2 + B\varepsilon_{k-1}^3, \qquad f'(x_k) = f'(r) + f''(r)\varepsilon_k + C\varepsilon_k^2, \tag{9}$$

where A = f'''(ξ1)/6, B = f'''(ξ2)/6, and C = f'''(ξ3)/2 for some ξ1, ξ2, ξ3 ∈ I. Using (7) and (8), and since x_k - x_{k-1} = ε_k - ε_{k-1}, we have

$$\varepsilon_{k+1} = \varepsilon_k - \frac{f(x_k)\bigl(f(x_k) - f(x_{k-1})\bigr)(\varepsilon_k - \varepsilon_{k-1})}{f(x_k)\bigl(f(x_k) - f(x_{k-1})\bigr) - f(x_{k-1})f'(x_k)(\varepsilon_k - \varepsilon_{k-1})}. \tag{10}$$

Using (9), the denominator of the fraction in (10) can be written as (1/4)(S_1 + S_2 + S_3 + S_4), where

$$S_1 = \varepsilon_k^2\bigl(2f'(r) + 2A\varepsilon_k^2 + f''(r)\varepsilon_k\bigr)^2,$$

$$S_2 = -2f'(r)\varepsilon_{k-1}\varepsilon_k\bigl(4f'(r) + 2A\varepsilon_k^2 + 2C\varepsilon_k^2 + 3f''(r)\varepsilon_k\bigr),$$

$$S_3 = \varepsilon_{k-1}^2\bigl(4f'^2(r) - 3f''^2(r)\varepsilon_k^2 - 2Af''(r)\varepsilon_k^3 + 4Cf'(r)\varepsilon_k^2 - 2Cf''(r)\varepsilon_k^3\bigr),$$

$$S_4 = -2\varepsilon_{k-1}^3\Bigl(B\varepsilon_k\bigl(4f'(r) + 3f''(r)\varepsilon_k + 2C\varepsilon_k^2 + 2A\varepsilon_k^2\bigr) - f''(r)\bigl(f'(r) + f''(r)\varepsilon_k + C\varepsilon_k^2\bigr)\Bigr). \tag{11}$$

In the numerator we have (1/4)ε_{k-1}ε_k^2(T_1 + T_2 + T_3 + T_4), where

$$T_1 = \varepsilon_k^2\bigl(4A^2\varepsilon_k^2 + 8Af'(r) - 4Cf'(r) + f''^2(r) + 4Af''(r)\varepsilon_k - 2Cf''(r)\varepsilon_k\bigr),$$

$$T_2 = -2\varepsilon_{k-1}\varepsilon_k\bigl(2Af'(r) - 2Cf'(r) + f''^2(r)\bigr),$$

$$T_3 = \varepsilon_{k-1}^2\Bigl(f''^2(r) - 4Bf'(r) - 2\varepsilon_k\bigl(2BC\varepsilon_k + Af''(r) + 2Bf''(r) - Cf''(r)\bigr)\Bigr),$$

$$T_4 = -2B\varepsilon_{k-1}^3\bigl(-f''(r) + 2A\varepsilon_k - 2C\varepsilon_k\bigr). \tag{12}$$

Simplifying, dividing numerator and denominator by ε_{k-1}^2, and considering that the quotient ε_k/ε_{k-1} is at least of order O(ε_{k-1}), we have

$$\varepsilon_{k+1} = \varepsilon_k^2\,\varepsilon_{k-1}\,\frac{f''^2(r) - 4Bf'(r) + O(\varepsilon_{k-1})}{4f'^2(r) + O(\varepsilon_{k-1})}. \tag{13}$$

Now, suppose that ε_{k+1} is asymptotic to Kε_k^α, with α > 1. Consequently, by expressing (13) in terms of ε_{k-1}, we obtain

$$K^{\alpha+1}\varepsilon_{k-1}^{\alpha^2} = K^2\varepsilon_{k-1}^{2\alpha}\,\varepsilon_{k-1}\,\frac{f''^2(r) - 4Bf'(r) + O(\varepsilon_{k-1})}{4f'^2(r) + O(\varepsilon_{k-1})}. \tag{14}$$

In order to satisfy the previous asymptotic equation, α has to be the positive root of α^2 - 2α - 1 = 0, that is, α = 1 + √2, which is the convergence order of the method. Moreover, the asymptotic constant can be calculated as

$$\lim_{k\to\infty}\frac{f''^2(r) - 4Bf'(r) + O(\varepsilon_{k-1})}{4f'^2(r) + O(\varepsilon_{k-1})} = \frac{3f''^2(r) - 2f'(r)f'''(r)}{12f'^2(r)}. \tag{15}$$
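This predicted order can be checked numerically. The sketch below, using Python's standard `decimal` module as a stand-in for a multiprecision library, runs iteration (7) on f(x) = x^3 - 2 and estimates the computational order of convergence from the last three differences; the test function, seed, and tolerances are our choices, not the paper's.

```python
from decimal import Decimal, getcontext

getcontext().prec = 200          # high-precision arithmetic (our choice)

def f(x):  return x**3 - 2
def df(x): return 3 * x**2

# NSM iteration (7), seeded by one Newton step from x_{-1} = 1
x_prev, f_prev = Decimal(1), f(Decimal(1))
xs = [x_prev, x_prev - f_prev / df(x_prev)]
x = xs[-1]
for _ in range(12):
    fx, dfx = f(x), df(x)
    num = fx * (fx - f_prev) * (x - x_prev)
    den = fx * (fx - f_prev) - f_prev * dfx * (x - x_prev)
    x_prev, f_prev, x = x, fx, x - num / den
    xs.append(x)
    if abs(xs[-1] - xs[-2]) < Decimal(10) ** -60:
        break

# computational order of convergence from the last three differences
d1, d2, d3 = abs(xs[-1] - xs[-2]), abs(xs[-2] - xs[-3]), abs(xs[-3] - xs[-4])
rho = (d1 / d2).ln() / (d2 / d3).ln()    # expected close to 1 + sqrt(2)
```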

4. Numerical Examples

In this section we check the effectiveness of the new method (NSM) introduced in this paper and compare it with Newton's classical method (NM) and McDougall's method (MM). All computations are done using ARPREC C++ [9]. The iteration is stopped when one of the stopping criteria |x_{k+1} - x_k| < 10^{-100} or |f(x_{k+1})| < 10^{-100} is satisfied. We test the iterative methods on the following smooth functions, which are the same as those used in [4, 6, 7]:

$$f_1(x) = \sin^2 x - x^2 + 1, \qquad f_2(x) = x^2 - e^x - 3x + 2, \qquad f_3(x) = x e^{x^2} - \sin^2 x + 3\cos x + 5, \qquad f_4(x) = e^{x^2+7x-30} - 1. \tag{16}$$
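As a rough reproducibility check, the run for f_2(x) = x^2 - e^x - 3x + 2 with x_{-1} = 2 and the 10^{-100} stopping criteria can be sketched with Python's `decimal` module at 250 digits instead of ARPREC (our choice of tooling); note that iteration-counting conventions may differ from those of Table 1.

```python
# NSM on f2(x) = x^2 - e^x - 3x + 2, x_{-1} = 2, in 250-digit decimal
# arithmetic, with the stopping criteria of the text.
from decimal import Decimal, getcontext

getcontext().prec = 250

def f(x):  return x * x - x.exp() - 3 * x + 2
def df(x): return 2 * x - x.exp() - 3

tol = Decimal(10) ** -100
x_prev, f_prev = Decimal(2), f(Decimal(2))
x = x_prev - f_prev / df(x_prev)          # initial Newton step
iterations = 0
while iterations < 50 and abs(x - x_prev) >= tol and abs(f(x)) >= tol:
    fx, dfx = f(x), df(x)
    num = fx * (fx - f_prev) * (x - x_prev)
    den = fx * (fx - f_prev) - f_prev * dfx * (x - x_prev)
    x, x_prev, f_prev = x - num / den, x, fx
    iterations += 1
```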

Table 1 presents, for each method, the number of iterations (N), the number of functional evaluations (NOFE), and the final residual |f(x_{k+1})|.

Table 1

Numerical results for smooth functions.

Function                              x_{-1}   Method   N    NOFE   |f(x_{k+1})|
sin^2(x) - x^2 + 1                    1        NM       8    16     3.4e-101
                                               MM       7    15     8.8e-113
                                               NSM      7    14     4.0e-125
                                      3        NM       8    16     2.0e-88
                                               MM       7    15     1.2e-129
                                               NSM      7    14     5.0e-131
x^2 - e^x - 3x + 2                    2        NM       6    12     2.9e-55
                                               MM       6    13     3.5e-107
                                               NSM      6    12     7.8e-56
                                      3        NM       8    16     4.1e-104
                                               MM       7    15     7.4e-122
                                               NSM      7    14     3.3e-74
x e^{x^2} - sin^2(x) + 3cos(x) + 5    -2       NM       10   20     3.8e-81
                                               MM       9    19     3.6e-155
                                               NSM      8    16     2.6e-122
e^{x^2+7x-30} - 1                     3.25     NM       10   20     5.5e-66
                                               MM       9    19     1.2e-124
                                               NSM      8    16     2.7e-110
                                      3.5      NM       14   28     1.2e-94
                                               MM       12   25     7.0e-136
                                               NSM      11   22     2.1e-225

The new method requires a number of iterations that is always equal to or lower than that of the other methods, with the lowest number of functional evaluations. When the number of iterations coincides, the final residual of NSM is in most cases smaller.

5. Dynamical Analysis

For the study of complex dynamics [10, 11] we consider a rational map G: Ĉ → Ĉ, where Ĉ is the Riemann sphere. For z ∈ Ĉ, we define its orbit as the set orb(z) = {z, G(z), G^2(z), …}. A point z_0 ∈ Ĉ is called a periodic point with minimal period m if G^m(z_0) = z_0, where m is the smallest integer with this property; a periodic point with minimal period 1 is called a fixed point. A fixed point z_0 is attracting if |G'(z_0)| < 1, repelling if |G'(z_0)| > 1, and neutral otherwise. The Julia set of a nonlinear map G, denoted J(G), is the closure of the set of its repelling periodic points; its complement is the Fatou set F(G), in which the basins of attraction of the different roots lie. We use basins of attraction to compare the iteration algorithms, since the basin of attraction is a way to visually comprehend how an algorithm behaves as a function of the starting point [12, 13]. In this section we consider the following polynomial and rational functions, which are the same functions that appear in [14–16]:

(1) f_1(z) = z^2 - 1;

(2) f_2(z) = z^3 - 1;

(3) f_3(z) = z^4 - 1;

(4) f_4(z) = z^5 - 1;

(5) f_5(z) = (z^2 - 1)/z;

(6) f_6(z) = (z^3 - 1)/(2z + 1);

(7) f_7(z) = (z^3 - 1)/z;

(8) f_8(z) = (z^4 - 1)/z.

For the dynamical analysis of an iterative method we consider the region [-2, 2] × [-2, 2] of the complex plane, discretized with 400 × 400 points, and we apply the iterative method starting at every x_{-1} in this region. If the sequence generated by the iterative method reaches a zero r of the function with tolerance |x_{k+1} - r| < 10^{-3} within a maximum of 100 iterations, we decide that x_{-1} is in the basin of attraction of that zero and we paint this point in a color previously assigned to the root. Within the same basin of attraction, the number of iterations needed to reach the solution is shown with different shades. Black denotes lack of convergence to any of the roots within the maximum number of iterations, or divergence to infinity.
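The scan just described can be sketched as follows for f_1(z) = z^2 - 1 under iteration (7); to keep the example fast we use a coarse 40 × 40 grid instead of 400 × 400, and all names are ours.

```python
# Basin-of-attraction scan for f(z) = z^2 - 1 under the NSM iteration (7).
def basin_index(z_start, roots=(1.0, -1.0), tol=1e-3, max_iter=100):
    f = lambda z: z * z - 1
    df = lambda z: 2 * z
    z_prev = complex(z_start)
    if z_prev == 0:                        # Newton seed undefined at 0
        return -1
    f_prev = f(z_prev)
    z = z_prev - f_prev / df(z_prev)       # initial Newton step
    for _ in range(max_iter):
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i                   # converged: index of the root
        fz = f(z)
        den = fz * (fz - f_prev) - f_prev * df(z) * (z - z_prev)
        if den == 0:
            return -1
        z, z_prev, f_prev = z - fz * (fz - f_prev) * (z - z_prev) / den, z, fz
    return -1                              # "black" point: no convergence

# Scan the square [-2, 2] x [-2, 2]; each entry would be colored by index.
n = 40
grid = [basin_index(complex(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1)))
        for j in range(n) for i in range(n)]
```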

For example, for the first function, f_1(z) = z^2 - 1, we have f'(z) = 2z, and the NSM iteration (7) reduces to the rational map

$$z_{k+1} = \frac{z_k^2 z_{k-1} + 2z_k + z_{k-1}}{z_k^2 + 2z_{k-1}z_k + 1}. \tag{17}$$

Observe that expression (17) is defined for all values of z_{k-1}, z_k at which the denominator does not vanish; the only restriction on the initial step, which uses Newton's method, is z_{-1} ≠ 0. Expression (17) illustrates the strength of method (7) in the case of the function f_1, and its basins are represented in Figure 2(a).
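The equivalence of the general step (7) and the closed form (17) for f_1 can be verified numerically at arbitrary points; the helper names below are ours.

```python
# One NSM step (7) for f(z) = z^2 - 1, and the simplified rational map (17).
def nsm_step(zk, zk1):
    f = lambda z: z * z - 1
    fk, fk1 = f(zk), f(zk1)
    num = fk * (fk - fk1) * (zk - zk1)
    den = fk * (fk - fk1) - fk1 * (2 * zk) * (zk - zk1)
    return zk - num / den

def rational_step(zk, zk1):
    return (zk**2 * zk1 + 2 * zk + zk1) / (zk**2 + 2 * zk1 * zk + 1)

# The two expressions agree at arbitrary complex test points:
z, z_prev = 1.2 - 0.4j, 0.3 + 0.7j
agree = abs(nsm_step(z, z_prev) - rational_step(z, z_prev)) < 1e-12
```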

Figure 2

Basins of attraction for the functions.

(a)

f_1(z) = z^2 - 1

(b)

f_2(z) = z^3 - 1

(c)

f_3(z) = z^4 - 1

(d)

f_4(z) = z^5 - 1

(e)

f_5(z) = (z^2 - 1)/z

(f)

f_6(z) = (z^3 - 1)/(2z + 1)

(g)

f_7(z) = (z^3 - 1)/z

(h)

f_8(z) = (z^4 - 1)/z

In this section, we observe that the new method produces basins of attraction with simple boundaries and shows no chaotic behavior on these test functions. The figures also show very few diverging points (black areas): compared with the methods studied in [14–16], the new method has a lower number of diverging points and larger basins of attraction.

6. Conclusion

In this paper, we have developed a new iterative method, based on a geometric modification of the Newton-Secant method, to find simple roots of nonlinear equations. The proposed method is obtained without adding functional evaluations. Numerical and dynamical comparisons have been presented to show the performance of the new method; from them we can conclude that the new method is efficient and robust and competes well with some existing methods.

Finally, further research is needed to extend this iterative method to systems of nonlinear equations. Such an extension may be based on divided difference operators of first or second order, in the sense of [17, 18].

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

1. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
2. A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, NY, USA, 1960.
3. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New York, NY, USA, 1964.
4. P. Wang, "A third-order family of Newton-like iteration methods for solving nonlinear equations," Journal of Numerical Mathematics and Stochastics, vol. 3, no. 1, pp. 13–19, 2011.
5. H. H. Homeier, "A modified Newton method for rootfinding with cubic convergence," Journal of Computational and Applied Mathematics, vol. 157, no. 1, pp. 227–230, 2003.
6. S. Weerakoon and T. G. I. Fernando, "A variant of Newton's method with accelerated third-order convergence," Applied Mathematics Letters, vol. 13, no. 8, pp. 87–93, 2000.
7. T. J. McDougall and S. J. Wotherspoon, "A simple modification of Newton's method to achieve convergence of order 1+√2," Applied Mathematics Letters, vol. 29, pp. 20–25, 2014.
8. H. Ren and I. K. Argyros, "On the convergence of King-Werner-type methods of order 1+√2 free of derivatives," Applied Mathematics and Computation, vol. 256, pp. 148–159, 2015.
9. ARPREC, C++/Fortran-90 arbitrary precision package, http://crd-legacy.lbl.gov/~dhbailey/mpdist/.
10. P. Blanchard, "Complex analytic dynamics on the Riemann sphere," Bulletin of the American Mathematical Society, vol. 11, pp. 85–141, 1984.
11. P. Blanchard, "The dynamics of Newton's method," Proceedings of Symposia in Applied Mathematics, vol. 49, pp. 139–154, 1994.
12. S. Amat, S. Busquier, and S. Plaza, "A construction of attracting periodic orbits for some classical third-order iterative methods," Journal of Computational and Applied Mathematics, vol. 189, no. 1-2, pp. 22–33, 2006.
13. S. Amat, C. Bermúdez, S. Busquier, and S. Plaza, "On the dynamics of the Euler iterative function," Applied Mathematics and Computation, vol. 197, no. 2, pp. 725–732, 2008.
14. T. Lotfi, S. Sharifi, M. Salimi, and S. Siegmund, "A new class of three-point methods with optimal convergence order eight and its dynamics," Numerical Algorithms, vol. 68, no. 2, pp. 261–288, 2015.
15. A. Singh and J. P. Jaiswal, "An efficient family of optimal eighth-order iterative methods for solving nonlinear equations and its dynamics," Journal of Mathematics, vol. 2014, Article ID 569719, 2014.
16. C. Chun, B. Neta, and S. Kim, "On Jarratt's family of optimal fourth-order iterative methods and their dynamics," Fractals, vol. 22, no. 4, 1450013, 2014.
17. M. Grau-Sánchez, À. Grau, and M. Noguera, "Frozen divided difference scheme for solving systems of nonlinear equations," Journal of Computational and Applied Mathematics, vol. 235, no. 6, pp. 1739–1743, 2011.
18. J. A. Ezquerro, M. Grau-Sánchez, M. A. Hernández-Verón, and M. Noguera, "A family of iterative methods that uses divided differences of first and second orders," Numerical Algorithms, vol. 70, no. 3, pp. 571–589, 2015.