I have a question about the relationship between numerical integration and the solution of ordinary differential equations (ODE). Suppose I want to evaluate the integral
$I(x) = \int_{0}^{x} f(t)\,dt$, where $f$ is a continuous function, for some fixed $x = x_{0}$. By the fundamental theorem of calculus I should be able to evaluate this integral by solving the initial value problem $\dot{I} = f(x)$, $I(0) = 0$, on $x\in(0,x_{0}]$, right? However, I have seldom seen this method implemented; instead one uses specialized numerical integration codes. How come?
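To make the question concrete, here is a small sketch (using SciPy's `quad` and `solve_ivp`; the integrand $f = \cos$ is just an illustration) showing that both approaches do work numerically:

```python
# Evaluate I(x0) = integral of cos(t) from 0 to x0 in two ways:
# a specialized quadrature routine and an ODE initial-value solver.
# Exact answer: sin(x0).
import numpy as np
from scipy.integrate import quad, solve_ivp

f = np.cos
x0 = 2.0
exact = np.sin(x0)

# Specialized quadrature routine.
I_quad, _ = quad(f, 0.0, x0)

# The same integral posed as the IVP  I' = f(x), I(0) = 0.
sol = solve_ivp(lambda x, I: f(x), (0.0, x0), [0.0],
                rtol=1e-10, atol=1e-10)
I_ode = sol.y[0, -1]

print(I_quad - exact, I_ode - exact)  # both errors are small
```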

6 Answers

They work better. The definite integrals we are typically interested in that don't have a known closed-form solution (e.g., elliptic integrals) tend to behave relatively badly at the endpoints of the interval of integration (discontinuous derivative, infinite derivative, oscillation), but are very well behaved on subintervals bounded away from the endpoints (e.g., smooth there). Because of this, numerical integration codes are designed to pay much more attention to the function near the endpoints than near the middle of the interval, which it would be awkward and silly to have an ODE solver do. Furthermore, a numerical integrator can add its summands from smallest to largest, which reduces round-off error, whereas an ODE solver must march from left to right.
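The summation-order point can be seen directly. A sketch (the series $\sum 1/k^2$ and the use of single precision are just an illustration): accumulating the small terms first loses less accuracy than adding them after the total has grown large.

```python
# Sum 1/k^2 for k = 1..N in float32, in two orders, and compare both
# against an accurate float64 reference. Adding smallest-to-largest
# lets the tiny terms accumulate before they are swamped by the total.
import math
import numpy as np

N = 100_000
terms = [1.0 / k**2 for k in range(1, N + 1)]
reference = math.fsum(terms)  # accurate float64 reference sum

def f32_sum(seq):
    s = np.float32(0.0)
    for t in seq:
        s = np.float32(s + np.float32(t))
    return float(s)

err_large_first = abs(f32_sum(terms) - reference)            # left to right
err_small_first = abs(f32_sum(reversed(terms)) - reference)  # smallest first

print(err_large_first, err_small_first)  # the second is smaller
```

In the large-first order, once the running sum reaches about 1.64, terms below single-precision resolution no longer contribute at all, so the entire tail of the series is lost.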

Yes, even if you go with an ODE solver with built-in error estimation, remember that the solution of an ODE is ultimately an extrapolation, while the solution of a definite integral is an interpolation, and generally interpolations behave better than extrapolations. (For indefinite integrals, you're stuck with having to use an ODE solver.)
– J. M., Nov 29 '10 at 12:59

@J.M.: if you could expand a bit more on your comment, it would be really helpful.
– Thierry Zell, Nov 29 '10 at 13:29


In any event, I'm not fond of using elliptic integrals as an example for displaying the utility of numerical quadrature; there exist much quicker methods for computing them than numerical quadrature (though pedagogically, they are a good nontrivial example of integrals that possess singularities at endpoints).
– J. M., Nov 29 '10 at 13:40

Well, I didn't know there were faster methods for computing them. Do they use the hypergeometric function, such as at mathworld.wolfram.com/…?. Also, the only other examples I know of are from experimental mathematics, rather than having a more direct use.
– Ricky Demer, Nov 29 '10 at 17:22


@Ricky: Gauss's arithmetic-geometric mean is quite flexible for computing elliptic integrals of both the first and second kinds. The third kind is admittedly much more delicate, and in fact some packages use Gauss-Legendre quadrature for it instead, since the AGM-based algorithm for the third kind is rather finicky.
– J. M., Nov 29 '10 at 20:10

Here's my take on the matter: the difference of philosophy between quadrature routines and ODE solving routines, I believe, is this:

Extrapolation is riskier than interpolation.

Remember that numerical quadrature routines all boil down to approximating your presumably more complicated integrand, within the interval of integration, by something easier to integrate exactly, and then integrating that. For instance, Newton-Cotes (and in essence Romberg as well) constructs an interpolating polynomial through your integrand at equispaced abscissas and integrates that. Gaussian or Clenshaw-Curtis quadrature is equivalent to interpolating your function at "specially spaced" abscissas (Legendre polynomial roots in the former, Chebyshev polynomial roots in the latter) that give better convergence in the limit. In effect, we assume that the interpolating function behaves so similarly to the actual integrand within the interval of interest that a sufficient amount of sampling there captures the behavior of your integrand, and thus gives a result close to the actual value of the integral.
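Gaussian quadrature fits in a few lines using NumPy's `leggauss` (a sketch: an $n$-point rule samples the integrand at the roots of the degree-$n$ Legendre polynomial, and integrates polynomials up to degree $2n-1$ exactly; the integrand $e^x$ is just an example of a smooth function):

```python
# n-point Gauss-Legendre quadrature: sample the integrand at the roots
# of the degree-n Legendre polynomial and take a weighted sum. This is
# exactly "integrate the interpolant at specially spaced abscissas".
import numpy as np

def gauss_legendre(f, a, b, n):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    # Map the standard nodes from [-1, 1] to [a, b].
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Smooth integrand: integral of exp(x) over [0, 1] is e - 1, captured
# to near machine precision with only eight samples.
approx = gauss_legendre(np.exp, 0.0, 1.0, 8)
print(approx - (np.e - 1.0))
```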

In contrast, remember that ODE solvers usually have only initial values to start with. The reason for building a lot of machinery into current ODE solvers, whether Runge-Kutta, Bulirsch-Stoer, Adams/Gear multistep, or some of the fancier modern techniques, is that extrapolation is inherently unstable. Knowing what the solution looks like at the beginning gives no guarantee of how it will behave as the solver marches on; the solution may well be violently oscillatory, or decay quite fast (so-called "stiff" problems). Thus, modern ODE solvers contain a fair amount of code for checking whether the step sizes being taken are reasonable, along with other such fail-safes.

As I mentioned in some previous comments, some ODE solving methods are equivalent to quadrature methods when applied to the initial-value problem $y^{\prime}=f(x)$: using classical Runge-Kutta for quadrature is equivalent to applying Simpson's rule, for instance.
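That equivalence is easy to check numerically (a sketch: for $y^{\prime}=f(x)$ the RK4 stages $k_2$ and $k_3$ coincide, and one step collapses to Simpson's rule on $[x, x+h]$; the test function $\cos$ is arbitrary):

```python
import math

def rk4_step(f, x, y, h):
    # Classical fourth-order Runge-Kutta applied to y' = f(x):
    # the stages do not depend on y, so k2 == k3.
    k1 = f(x)
    k2 = f(x + h / 2)
    k3 = f(x + h / 2)
    k4 = f(x + h)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def simpson(f, a, b):
    # Simpson's rule on a single interval [a, b].
    return ((b - a) / 6) * (f(a) + 4 * f((a + b) / 2) + f(b))

f = math.cos
a, h = 0.3, 0.1
print(rk4_step(f, a, 0.0, h), simpson(f, a, a + h))  # agree to rounding
```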

The point is that ODE solvers tend to be more careful ("tiptoeing", if you will) and thus more effort-intensive than numerical quadrature routines, because they make no assumptions about how your integrand behaves. On that note, you should know that there are integrands (and corresponding intervals) for which using an ODE solver makes more sense than using a numerical quadrature routine. One instance that comes to mind: if you know (through graphing, for instance) that your integrand has crazy behavior in a relatively tiny subinterval of the interval of integration, the sampling done by a numerical quadrature routine might miss such features entirely (or take a long time to notice them). An ODE solver, being careful from the start, is less likely to miss the crazy behavior, and will shrink its step sizes as appropriate until it has gone past that subinterval.
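A toy illustration of that failure mode (a sketch using SciPy's `quad`; the bump location and interval are made up): a narrow spike placed away from the sample points of the initial rule can be missed entirely unless the routine is told where to look.

```python
import numpy as np
from scipy.integrate import quad

# A sharp bump at x = 300 inside the huge interval [0, 1000]; the true
# integral is essentially sqrt(pi), since the bump decays fast.
f = lambda x: np.exp(-(x - 300.0) ** 2)
exact = np.sqrt(np.pi)

# With default sampling, the initial rule may evaluate f only at points
# where it is numerically zero, report a tiny error estimate, and stop.
naive, _ = quad(f, 0.0, 1000.0)

# The `points` argument flags the spike's location for the subdivision.
hinted, _ = quad(f, 0.0, 1000.0, points=[300.0])
print(naive, hinted)  # hinted is accurate; naive may be wildly off
```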

Your idea is not bad, and there are many connections between the theories of numerical integration and the numerical solution of ODEs. For instance, the Gauss-Legendre methods for solving ODEs reduce to the Gauss-Legendre method for numerical integration.

However, the differential equation $\frac{dI}{dx}=f(x)$ has a special form (the general form is $\frac{dI}{dx}=f(x,I)$) which the methods can exploit. For example, the Gauss-Legendre method for solving ODEs is an implicit method which has to solve an algebraic equation at every step, but this issue disappears when applying it to an ODE of the form $\frac{dI}{dx}=f(x)$. Furthermore, when evaluating an integral numerically, you can split the integration interval into two pieces and add the resulting integrals over the two subintervals. You cannot do the same with an initial-value problem, because the solution over the first subinterval gives you the initial condition for the second subinterval. This means that it's harder to implement adaptive methods for ODEs than for integrals.
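The splitting property is just additivity of the integral, and it is what makes divide-and-conquer adaptivity so natural for quadrature. A quick check (a sketch with SciPy's `quad`; the integrand and split point are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x) * np.sin(5 * x)
a, c, b = 0.0, 1.3, 4.0

whole, _ = quad(f, a, b)
left,  _ = quad(f, a, c)
right, _ = quad(f, c, b)

# The two subintervals can be refined completely independently and then
# summed; an IVP solver must finish [a, c] before it can start [c, b].
print(whole, left + right)  # agree to quadrature tolerance
```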

For these reasons, and the ones mentioned by Ricky Demer, methods written specifically for evaluating integrals perform better than methods for solving ODEs.

Gauss-Legendre and Gauss-Lobatto tend to be used for "stiff" systems (in general, implicit methods are used for stiff systems); as a probably more familiar example, it can be shown that the usual fourth-order Runge-Kutta method, when applied to the ODE $y^{\prime}=f(x)$, is equivalent to using Simpson's rule on the integral of $f(x)$. Additionally, in general codes for adaptively solving ODEs tend to be more elaborate than adaptive quadrature codes, since as I said in the comment to Ricky's answer, one has to be more careful at extrapolating than interpolating.
– J. M., Nov 29 '10 at 13:15

Once you teach numerical analysis, you recognize that some problems are related. This is your case: ODEs vs. integrals. It is also the case with roots of polynomials vs. eigenvalues of matrices (via the companion matrix). It turns out that the naive approach is generally wrong. For instance, computing the characteristic polynomial first and then solving the polynomial equation is the wrong way to compute the eigenvalues of a matrix. Instead, we go the other way: to find the roots of a univariate polynomial, we form its companion matrix and apply the QR method for the eigenvalue problem.
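Concretely, here is a sketch with NumPy (the cubic is just an example with known roots): build the companion matrix of a monic polynomial and compute its eigenvalues, which is essentially what `numpy.roots` does internally.

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3), written as
# x^3 + a2*x^2 + a1*x + a0 with a2 = -6, a1 = 11, a0 = -6.
a2, a1, a0 = -6.0, 11.0, -6.0

# Companion matrix: its characteristic polynomial is p, so its
# eigenvalues (found via QR iteration) are exactly the roots of p.
C = np.array([[0.0, 0.0, -a0],
              [1.0, 0.0, -a1],
              [0.0, 1.0, -a2]])

roots = np.sort(np.linalg.eigvals(C).real)
print(roots)  # approximately [1. 2. 3.]
```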

The same is true here. We use numerical methods for the computation of integrals in order to design efficient methods for solving ODEs, not the converse. This is a matter of stability.

Yes, integration of ODEs involves extrapolation, whereas integration of functions involves interpolation.
Take, for example, Euler's method: we project the slope at the initial point and assume that it applies throughout the time step (which is obviously not the case in general). Worse, the errors accumulate across time steps; that is, the starting value for the second time step is already inaccurate.
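Euler's method applied to $y^{\prime}=f(x)$ is exactly the left Riemann sum, and its first-order error accumulation is easy to observe (a sketch; the test problem $y^{\prime}=\cos x$ is arbitrary, and halving the step roughly halves the error):

```python
import math

def euler(f, x0, x1, y0, n):
    # Forward Euler with n equal steps; for y' = f(x) this reduces to
    # the left Riemann sum for the integral of f.
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x)
        x += h
    return y

f, exact = math.cos, math.sin(1.0)  # y' = cos(x), y(0) = 0, so y(1) = sin(1)
err_h  = abs(euler(f, 0.0, 1.0, 0.0, 100) - exact)
err_h2 = abs(euler(f, 0.0, 1.0, 0.0, 200) - exact)
print(err_h, err_h2, err_h / err_h2)  # ratio near 2: first-order method
```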

Integration of functions, by contrast, involves fitting an interpolating polynomial to the function values and evaluating the area under that polynomial.

In fact, many researchers have tried to apply quadrature methods to the integration of ODEs, and not the other way round! Gaussian quadrature is among the most efficient and accurate numerical integration techniques around, and several ODE solvers are based on it, e.g. the fully implicit Gauss-Legendre method developed by Hollingsworth (1955) and later generalized by Butcher (1964) to arbitrary orders as the Gauss methods (implicit Runge-Kutta processes).

Still later, such implicit Runge-Kutta processes were shown to be identical to collocation methods (Wright, 1970).

Thank you very much for your answers! As I understand it, three important advantages of treating numerical integration (of a continuous function) as a special problem, rather than as an initial value problem, are that we can

add the summands from smallest to largest to reduce round-off error,

pay more attention to the endpoints of the integration interval, and

split the integration interval into several pieces to ease the construction of adaptive methods.