Richard P. Brent

This is a specialized monograph in numerical analysis, developing methods for finding zeroes and minima of functions without using their derivatives. It is an expanded version of the author’s 1971 Ph.D. thesis. The present volume is an unaltered reprint of the 1973 Prentice-Hall edition, with a new preface by the author and a pointer to errata on the author’s web site.

The motivation for avoiding derivatives is that they may be difficult or time-consuming to evaluate. This book is not related to Ivan Niven's Maxima and Minima Without Calculus, where the motivation is to find extrema analytically with methods simpler than calculus.

The general idea is to develop a sequence of approximations to the desired point by using interpolating polynomials and finding an exact solution for each polynomial. Thus, to find a zero of a function we start with two points, find the linear function that interpolates the function at these points, and find where the interpolating function has a zero; this is the next point in the sequence. Geometrically we are drawing a secant line through the curve and expecting that the point where the secant crosses the x-axis is close to a zero.
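The secant iteration just described can be sketched as follows. This is a minimal illustration in Python, not Brent's safeguarded algorithm from the book, which combines such steps with bisection to guarantee convergence; the function names and tolerances here are the reviewer's own choices.

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Find a zero of f by repeated secant steps (unsafeguarded sketch)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:  # secant line is horizontal; cannot take a step
            break
        # Zero of the line interpolating (x0, f0) and (x1, f1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Example: the zero of cos(x) - x between 0 and 1
root = secant(lambda x: math.cos(x) - x, 0.0, 1.0)
```

Started near a simple zero of a smooth function, this converges superlinearly; started badly, the secant step can shoot far away, which is exactly the failure mode the book's safeguards address.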

Finding minima is algorithmically similar: we interpolate through three points with a quadratic polynomial and pick the location of the minimum of the interpolating polynomial as our next estimate. These methods work well if the function is smooth and we start near the solution point, but may fail otherwise. A lot of the complication in developing an automatic algorithm is in detecting and correcting for non-convergence, and the book investigates and proves the properties of complete practical algorithms.
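The minimization step, successive parabolic interpolation, can be sketched the same way. Again this is an unsafeguarded illustration assuming the standard vertex formula for the parabola through three points; the rule for which old point to discard is a simplistic choice of the reviewer, not the book's (Brent's algorithm adds golden-section safeguards precisely because this bare iteration can fail).

```python
def parabolic_min(f, a, b, c, tol=1e-10, max_iter=100):
    """Estimate a minimum of f by fitting a quadratic through three
    points and jumping to its vertex (unsafeguarded sketch)."""
    fa, fb, fc = f(a), f(b), f(c)
    for _ in range(max_iter):
        # Vertex of the parabola through (a, fa), (b, fb), (c, fc)
        p = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        q = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if q == 0:  # points are collinear; the parabola degenerates
            break
        x = b - 0.5 * p / q
        if abs(x - b) < tol:
            return x
        # Naive update: keep the three most recently used points
        a, fa, b, fb, c, fc = b, fb, x, f(x), a, fa
    return b

# Example: minimum of (x - 2)^2 + 1
xmin = parabolic_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 1.0, 3.0)
```

On a function that is itself quadratic, one step lands on the exact minimum; on a general smooth function the iteration converges fast near the solution but, like the secant method, needs the safeguards the book develops to be reliable.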

The book is very much oriented toward implementation of these algorithms, and in fact most of them are fully specified only in the ALGOL source code, not in the body of the text. The text does include complete proofs of convergence.