Before we get into the technical definition of Newton’s method, let us consider a realistic problem that makes the concept easier to grasp. Suppose that a car dealer offers to sell you a car for $18,000 or for payments of $375 per month for five years. You would like to know what monthly interest rate the dealer is, in effect, charging you. To find the answer, you have to solve the equation

48x(1+x)^{60}-(1+x)^{60}+1=0
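Where does this equation come from? Assuming the standard present-value formula for an ordinary annuity (the usual model for a car loan, a step the problem statement leaves implicit), the $18,000 price must equal the present value of 60 monthly payments of $375 at monthly rate x:

18000=375\cdot\frac{1-(1+x)^{-60}}{x}

Dividing both sides by 375 and multiplying through by x gives 48x=1-(1+x)^{-60}, and multiplying by (1+x)^{60} yields

48x(1+x)^{60}-(1+x)^{60}+1=0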

The problem now is how to solve this.

Going back to basic algebra, for a quadratic equation ax^2+bx+c=0 there is a well-known formula for its roots. Formulas also exist for third- and fourth-degree equations, though they are already very complicated. For fifth- and higher-degree equations, however, the Norwegian mathematician Niels Abel proved that there is no general formula for the roots in terms of radicals. Likewise, there is no formula that will enable us to find the exact roots of a transcendental equation such as cos x = x.

What we can do instead is use a graphical approach. We can find an approximate solution by plotting the left side of the equation, which gives a graph like the one shown below.

If we zoom in on this figure near 0, we find that the root is approximately 0.0076. For a more accurate result we could keep zooming in, but that quickly becomes tiresome. As an easier and faster alternative, we can use a numerical rootfinder or a computer algebra system, which gives the root to nine decimal places: 0.007628603.
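A numerical rootfinder automates exactly this zooming. As a minimal sketch (not necessarily the algorithm any particular tool uses), the following repeatedly halves an interval on which the left-hand side of the loan equation changes sign:

```python
def f(x):
    # Left-hand side of the loan equation: 48x(1+x)^60 - (1+x)^60 + 1
    return 48 * x * (1 + x) ** 60 - (1 + x) ** 60 + 1

def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b] by bisection; f(a) and f(b) must differ in sign."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b = m          # sign change in [a, m], so the root is there
        else:
            a, fa = m, fm  # otherwise the root is in [m, b]
    return (a + b) / 2

root = bisect(f, 0.001, 0.01)
print(root)
```

Run on the bracket [0.001, 0.01], this reproduces the nine-place value quoted above, a monthly rate of about 0.76%.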

A numerical rootfinder typically uses Newton’s method, also called the Newton-Raphson method, though modern tools use a variety of methods. Basically, Newton’s method is a method of approximating solutions to equations.

Let’s suppose that we want to approximate a solution of f(x)= 0, and let’s also suppose that we have somehow found an initial approximation to this solution, say x_0. This initial approximation is probably not all that good, so we’d like to find a better one. This is easy enough to do. First we find the tangent line to the graph of y = f(x) at x_0:

y=f(x_0)+f'(x_0)(x-x_0)

Now, take a look at the graph below.

The blue line is the tangent line at x_0. We can see that this line crosses the x-axis much closer to the actual solution of the equation than x_0 does. Let’s call the point where the tangent at x_0 crosses the x-axis x_1, and use it as our new approximation to the solution. So, how do we find this point? Well, we know its coordinates, (x_1,0), and we know that it lies on the tangent line, so we plug this point into the tangent line equation and solve for x_1 as follows:

0=f(x_0)+f'(x_0)(x_1-x_0)

(x_1-x_0)=-\frac{f(x_0)}{f'(x_0)}

x_1=x_0-\frac{f(x_0)}{f'(x_0)}

Therefore, provided the derivative isn’t zero at the original approximation, we can find the new approximation. Now we repeat the whole process to find an even better approximation: we form the tangent line to f(x) at x_1 and use its x-intercept, which we’ll call x_2, as a new approximation to the actual solution. Doing this gives the following formula:

x_2=x_1-\frac{f(x_1)}{f'(x_1)}

This point is also shown on the graph above, and we can see from this graph that if we continue this process we will get a sequence of numbers that get closer and closer to the actual solution. This process is called Newton’s Method.

In general, we can express Newton’s method as follows.

If x_n is an approximation to a solution of f(x)= 0 and f'(x_n )≠ 0, then the next approximation is given by

x_{n+1}=x_{n}-\frac{f(x_n)}{f'(x_n)}
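This recurrence translates directly into code. The sketch below is illustrative (the tolerance and iteration cap are arbitrary choices, not part of the method itself); it applies the formula to the car-loan equation from the opening, starting from the rough graphical estimate x_0 = 0.0076:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0:
            raise ZeroDivisionError("f'(x_n) = 0: Newton's method fails here")
        x_next = x - f(x) / fp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge within max_iter steps")

# The loan equation and its derivative (by the product rule)
f = lambda x: 48 * x * (1 + x) ** 60 - (1 + x) ** 60 + 1
fprime = lambda x: 48 * (1 + x) ** 60 + 2880 * x * (1 + x) ** 59 - 60 * (1 + x) ** 59

print(newton(f, fprime, 0.0076))
```

Starting that close to the root, only a few iterations are needed to recover the nine-place value 0.007628603 found earlier.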

If the numbers x_n become closer and closer to r as n becomes large, then we say that the sequence converges to r and we write

\lim_{n \to \infty}x_{n}=r
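For a concrete look at this convergence, here is a short illustrative run of the iteration on the transcendental equation cos x = x mentioned earlier, rewritten as f(x) = cos x − x = 0, starting from x_0 = 1:

```python
import math

f = lambda x: math.cos(x) - x
fprime = lambda x: -math.sin(x) - 1

x = 1.0  # initial approximation x_0
for n in range(1, 6):
    x = x - f(x) / fprime(x)
    print(f"x_{n} = {x:.9f}")
```

The printed iterates settle on r ≈ 0.739085 within just a few steps, with the number of correct digits roughly doubling each iteration.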

Although the sequence of successive approximations converges to the desired root for functions of the type illustrated below,

in certain circumstances the sequence may not converge. For example, consider the situation shown in the next figure.

You can see that x_2 is a worse approximation than x_1. This is likely to happen when f'(x_1) is close to 0. It might even happen that an approximation falls outside the domain of f. In such cases Newton’s method fails, and a better initial approximation should be chosen.
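A standard textbook illustration of such failure (not the figure above, but the classic case f(x) = x^{1/3}, whose tangent lines always overshoot the root at 0) shows the iterates moving away from the solution: each step sends x_n to −2x_n, so the sequence diverges no matter how close to 0 we start.

```python
import math

def f(x):
    # real cube root, defined for negative x as well
    return math.copysign(abs(x) ** (1 / 3), x)

def fprime(x):
    # derivative (1/3) x^(-2/3); undefined at the root x = 0 itself
    return (1 / 3) * abs(x) ** (-2 / 3)

x = 0.1  # start close to the root r = 0
for n in range(1, 5):
    x = x - f(x) / fprime(x)
    print(f"x_{n} = {x:.4f}")
```

Each printed iterate doubles in magnitude while flipping sign, carrying the sequence ever farther from the root.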