Yes. Once you find a root $a$, divide the polynomial by $x-a$ and then apply the algorithm again.
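A minimal sketch of this "find one root, deflate, repeat" loop (all function names are my own, illustrative, and not from any library; the deflation step is synthetic division, and, as discussed in the answers below, it can lose accuracy when roots are close together):

```python
def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Newton's method on p(x) given by coeffs [a_n, ..., a_0]."""
    x = x0
    for _ in range(max_iter):
        p = dp = 0.0
        for c in coeffs:          # Horner's rule evaluates p and p' together
            dp = dp * x + p
            p = p * x + c
        if dp == 0:
            break                 # flat spot: give up with current iterate
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x

def deflate(coeffs, root):
    """Synthetic division of p(x) by (x - root); discards the remainder."""
    out = [coeffs[0]]
    for c in coeffs[1:-1]:
        out.append(c + root * out[-1])
    return out

def all_roots(coeffs, x0=0.0):
    """Find one root, divide it out, and apply the algorithm again."""
    roots = []
    while len(coeffs) > 1:
        r = newton_root(coeffs, x0)
        roots.append(r)
        coeffs = deflate(coeffs, r)
        x0 = r                    # reuse the last root as a starting guess
    return roots
```

For example, `all_roots([1.0, -6.0, 11.0, -6.0])` recovers the roots $1, 2, 3$ of $x^3-6x^2+11x-6$.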
–
Ben Webster♦Dec 9 '10 at 5:45

Are these real or complex polynomials? Are the coefficients integers? Rational? Newton's method can be slow near multiple or closely spaced roots, although exact multiple roots can be found using the $\gcd$ of a polynomial and its derivative. Once the roots are approximately located (Sturm's theorem is helpful here), Newton's method can be fairly robust.
–
Aaron MeyerowitzDec 9 '10 at 5:46

The coefficients, and the roots wanted, are real integer values.
–
George TylerDec 9 '10 at 6:22

How do we choose good intervals to test for Sturm's theorem?
–
George TylerDec 9 '10 at 7:44

1

Your usage of "multiple roots" seems erroneous; "multiple" is used to denote a root that is counted more than once, e.g. 1 is a multiple (double) root of the polynomial $x^2-2x+1$.
–
J. M.Dec 9 '10 at 9:21

5 Answers
5

Newton's method has attracted a lot of attention during the past decades in the complex-analysis community, because of its nice behaviour as a dynamical system. From the point of view of approximating the roots of a polynomial $P$, it has mixed properties. On the one hand, it is a fast method for calculating one root $a$. But once you have a good approximation $\alpha$ of it, you face the problem of dividing $P$ by $X-\alpha$. This division is not exact, and it is unstable if $P$ has another root close to $a$. The latter case happens frequently with large matrices, either random or with physical meaning.

This is why the codes employed by computers to calculate the set of roots follow another idea. Form the companion matrix $B_P$, which turns out to be of Hessenberg form ($b_{ij}=0$ if $i\ge j+2$), then apply the QR method. This method enjoys crucial properties:

it preserves the Hessenberg form,

one iteration is fast when applied to a Hessenberg matrix ($O(n^2)$ operations),

it is stable, in the sense that roundoff errors are not amplified along the iterations, because one conjugates by unitary matrices.

Naively, we might think that the calculation of the eigenvalues of a matrix must be done in two steps: first compute the characteristic polynomial, then solve it. But the QR method is so efficient that the opposite is actually true: to find the roots of a polynomial, form the companion matrix and apply QR to it.
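A sketch of this "roots via the companion matrix" route (this is essentially what `numpy.roots` does internally: build the companion matrix and hand it to the LAPACK QR eigensolver through `numpy.linalg.eigvals`; the function name here is my own):

```python
import numpy as np

def roots_via_companion(coeffs):
    """coeffs = [a_n, ..., a_0] with a_n != 0."""
    c = np.array(coeffs, float)
    c = c / c[0]                 # make the polynomial monic
    n = len(c) - 1
    B = np.zeros((n, n))
    B[1:, :-1] = np.eye(n - 1)   # subdiagonal of ones: Hessenberg form
    B[:, -1] = -c[:0:-1]         # last column carries the coefficients
    return np.linalg.eigvals(B)  # QR iteration on the Hessenberg matrix
```

Applied to $x^3-6x^2+11x-6$, i.e. `roots_via_companion([1, -6, 11, -6])`, this returns the eigenvalues $1, 2, 3$ of the $3\times 3$ companion matrix.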

If you fear that $P$ has multiple roots, you had better eliminate them first by calculating the g.c.d. of $P$ and $P'$. This is not easy in general, but if $P\in\mathbb Z[X]$, you can do it exactly.
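A sketch of that exact elimination for $P\in\mathbb Z[X]$ (function names are my own): compute $\gcd(P, P')$ by the Euclidean algorithm in exact rational arithmetic, then divide it out; the quotient $P/\gcd(P,P')$ has the same roots as $P$, all simple.

```python
from fractions import Fraction

def polydiv(p, q):
    """Long division of p by q (coefficient lists, highest degree first);
    returns (quotient, remainder)."""
    p, quo = list(p), []
    while len(p) >= len(q):
        f = p[0] / q[0]
        quo.append(f)
        for i, c in enumerate(q):
            p[i] -= f * c
        p.pop(0)                  # leading term is now exactly zero
    return quo, p

def polygcd(p, q):
    """Euclidean algorithm; returns a monic gcd."""
    while q:
        _, r = polydiv(p, q)
        while r and r[0] == 0:    # strip leading zeros of the remainder
            r.pop(0)
        p, q = q, r
    return [c / p[0] for c in p]

def squarefree_part(coeffs):
    """Exact square-free part P / gcd(P, P') for integer coefficients."""
    p = [Fraction(c) for c in coeffs]
    dp = [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]  # P'
    quo, _ = polydiv(p, polygcd(p, dp))
    return quo
```

For example, $P=(x-1)^2(x-2)=x^3-4x^2+5x-2$ gives `squarefree_part([1, -4, 5, -2])` equal to `[1, -3, 2]`, i.e. $(x-1)(x-2)$, with the double root reduced to a simple one.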

Later. I realize that you seem to be interested only in polynomials whose roots are real. If this is correct, then you can accelerate the calculation by the following trick. First form a Sturm sequence of polynomials $P_0=P,P_1=P',\ldots$. Then use it to form a symmetric tridiagonal matrix $H$ whose characteristic polynomial is $P$: $P_{n-j}$ is the characteristic polynomial of the leading principal $j\times j$ submatrix. Now apply QR to $H$. This is really fast: each iteration requires only $O(n)$ operations, and the convergence becomes of cubic order.
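What makes this construction possible is that the characteristic polynomials of the leading principal submatrices of a symmetric tridiagonal matrix satisfy the three-term recurrence $p_j(x)=(x-a_j)\,p_{j-1}(x)-b_{j-1}^2\,p_{j-2}(x)$, which is exactly the shape of a Sturm sequence. A sketch of the forward direction only (the function name is my own; the inverse step, reading $a_j$ and $b_j$ off a given Sturm sequence, is not shown):

```python
import numpy as np

def leading_minor_charpolys(a, b):
    """a: diagonal, b: off-diagonal of a symmetric tridiagonal matrix.
    Returns p_0, ..., p_n as coefficient arrays (highest degree first),
    via p_j(x) = (x - a_j) p_{j-1}(x) - b_{j-1}**2 p_{j-2}(x)."""
    polys = [np.array([1.0])]     # p_0 = 1
    prev = np.array([0.0])        # p_{-1} = 0 by convention
    for j, aj in enumerate(a):
        cur = polys[-1]
        new = np.polymul([1.0, -aj], cur)
        if j > 0:
            new = np.polysub(new, b[j - 1] ** 2 * prev)
        prev = cur
        polys.append(new)
    return polys
```

As a check, for $H$ with diagonal $(2,3,4)$ and off-diagonal $(1,1)$ the last entry of `leading_minor_charpolys([2, 3, 4], [1, 1])` agrees with the characteristic polynomial `np.poly(H)` of the full matrix.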

I always feel some misgivings when recommending the companion matrix scheme; while it can be accurate (assuming you balanced your companion matrix before subjecting it to QR, if the coefficients vary widely in magnitude), using an $O(n^3)$ method for a problem with $O(n)$ parameters (and which can thus ostensibly be solved in $O(n^2)$ flops) is a bit of a waste, if you ask me. On the other hand, it might come as a surprise that there are in fact modifications of the QR scheme that exploit the sparsity of the Frobenius companion matrix, e.g. scg.ece.ucsb.edu/publications/oadv2.pdf
–
J. M.Dec 9 '10 at 9:24

1

Regarding the edit: if the polynomial with all real roots has multiple roots, the symmetric tridiagonal companion matrix has the side benefit of acquiring zero sub- and superdiagonal entries, so that the eigenproblem actually decouples (your eigenroutine processes a series of small matrices instead of one big matrix).
–
J. M.Dec 9 '10 at 10:01

This is in fact very similar to the (Weierstrass-)Durand-Kerner method I mentioned in the comments, in that both can be considered as Newton-Raphson methods applied to an appropriate system of $n$ equations in $n$ unknowns. There is a good discussion of these methods in McNamee's book (which I also linked to in the comments). The advantage of (Ehrlich-)Aberth(-Maehly) over (W)DK is that it has a cubic convergence rate (over (W)DK's quadratic convergence rate) if the polynomial's roots are all simple.
–
J. M.Dec 9 '10 at 13:44

FWIW, if your input polynomial has all its roots real, one can use a Sturmian scheme to construct a symmetric tridiagonal companion matrix, which can then be subjected to the QR algorithm (which is more efficient in the symmetric case than in the unsymmetric case).
–
J. M.Dec 9 '10 at 9:27