for some point $x$ in the domain $\Omega$ of the problem. If $\Omega$ is $\mathbb{R}^n$, then this problem has a closed-form solution given by the Euclidean heat kernel

$$k_t(x,y) = \frac{1}{(4\pi t)^{n/2}}e^{-|x-y|^2/4t};$$

in general, solutions to this problem exhibit the same type of behavior: the magnitude of $u$ decays roughly exponentially as you travel away from the "source" $x$.

Now consider solving this problem numerically by constructing a finite-dimensional linear system whose solution approximates the solution of the original PDE (e.g., using a Galerkin method). Typical numerical linear algebra libraries (which work in fixed-precision floating-point arithmetic) guarantee that the maximum error in any component of the solution is no larger than some small fraction $\epsilon > 0$ of the largest element of the right-hand side.

For instance, in the problem outlined above the right-hand side looks like a Kronecker delta, so the guaranteed absolute error is no worse than $\epsilon$. Unfortunately, since the magnitude of the solution decays exponentially, $\epsilon$ may be much larger than the smallest entries of the true solution.
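To make this concrete, here is a minimal 1D sketch (a single backward-Euler step of the heat equation starting from a discrete delta; the grid size, spacing, and time step are illustrative assumptions, not anything prescribed above). The entries of the solution far from the source fall many orders of magnitude below $\epsilon \cdot \|b\|_\infty$:

```python
import numpy as np

# One backward-Euler step of u_t = u_xx from a discrete delta:
# solve (I - dt*D2) u = b, where D2 is the standard second-difference matrix.
# The solution decays roughly exponentially away from the source, so its
# far-field entries sit far below eps * max|b|, the usual normwise guarantee.

n, h = 201, 0.05          # grid size and spacing (illustrative choices)
dt = h**2                 # time step chosen so dt/h^2 = 1

main = np.full(n, 1.0 + 2.0 * dt / h**2)
off = np.full(n - 1, -dt / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

b = np.zeros(n)
b[n // 2] = 1.0 / h       # discrete Kronecker delta at the center

u = np.linalg.solve(A, b)

eps_bound = np.finfo(float).eps * np.abs(b).max()   # ~ eps * ||b||_inf
# u[n//2] is O(1), while u[0] is astronomically smaller than eps_bound:
# an absolute error of size eps_bound would swamp the far-field entries.
```

Here the entry at the boundary is around $10^{-41}$, while the normwise error guarantee only promises errors below roughly $10^{-15}$, which is exactly the mismatch described above.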

Question: using floating-point computations only, can one obtain a solution to a linear system that achieves a desired accuracy relative to the magnitude of each element of the solution vector, rather than relative to the right-hand side?

The fundamental obstacle to better accuracy seems to be cancellation: when you add two floating-point numbers of very different magnitude, the smaller one is essentially ignored. I am aware of algorithms that use alternative numerical representations (e.g., Dixon's method and so on), but answers of this kind are not particularly interesting to me for efficiency reasons.

My apologies if this question is "too numerical" for MO -- the Numerical Modeling & Simulation Stack Exchange site does not yet exist!
– TerronaBell Oct 30 '11 at 15:02


Recent versions of LAPACK include routines that compute solutions with componentwise error bounds. Doing this is expensive and may not be worth it in practice, but see the LAPACK documentation.
– Brian Borchers Oct 30 '11 at 17:11


@fuzzytron: on the contrary, I am a "numerical mathematician" and I find that there are too few numerical questions on MO. One could argue about whether numerical linear algebra belongs here rather than on a hypothetical AppliedMathOverflow site, but as long as the pure math people do not get too snobbish, I think both groups can benefit from hanging around in the same environment.
– Federico Poloni Nov 3 '11 at 14:32

2 Answers

This subject has been studied in the literature --- it turns out that for some classes of matrices (such as M-matrices) you can obtain componentwise accurate solutions. A good starting point is the section on "accurate floating point computation" on Jim Demmel's home page; other good search terms are "accurate linear algebra" and "componentwise error analysis". Recent literature has focused more on accurate eigenvalue computation than on linear systems, but there are results for linear systems as well.
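A central quantity in this literature is the componentwise (Oettli–Prager) backward error, which certifies that a computed solution solves a nearby system with small *relative* perturbations in every entry of $A$ and $b$. It is cheap to evaluate directly; a small sketch on a dense system (the matrix and right-hand side here are arbitrary illustrative choices):

```python
import numpy as np

# Componentwise (Oettli-Prager) backward error of a computed solution xhat:
#   omega = max_i |b - A xhat|_i / (|A| |xhat| + |b|)_i
# A small omega means xhat exactly solves (A + dA) x = b + db with
# |dA| <= omega |A| and |db| <= omega |b| entrywise -- the componentwise
# analogue of the usual normwise residual test.

def componentwise_backward_error(A, b, xhat):
    r = b - A @ xhat
    denom = np.abs(A) @ np.abs(xhat) + np.abs(b)
    # Where the denominator vanishes, the residual must vanish too.
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(denom > 0, np.abs(r) / denom,
                         np.where(r == 0, 0.0, np.inf))
    return ratio.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)  # well-conditioned example
b = rng.standard_normal(50)
xhat = np.linalg.solve(A, b)
omega = componentwise_backward_error(A, b, xhat)
# omega is typically a modest multiple of machine epsilon for a system
# this well conditioned
```

Note that a small componentwise backward error does not by itself guarantee small componentwise *forward* error; that also requires the componentwise (Skeel) condition number of the system to be modest, which is where matrix classes like M-matrices come in.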

EDIT: And let me add that discretizing $\Delta$ with the usual finite-difference scheme results in an M-matrix.
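As a quick sanity check on that claim, one can build the standard 1D finite-difference Laplacian with Dirichlet boundary conditions and verify the two defining M-matrix properties (nonpositive off-diagonal entries and an entrywise nonnegative inverse); a minimal sketch:

```python
import numpy as np

# Standard 1D finite-difference Laplacian with Dirichlet boundary conditions:
# tridiagonal with 2 on the diagonal and -1 off it, scaled by 1/h^2.
n, h = 10, 0.1
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h**2

# M-matrix check 1: off-diagonal entries are nonpositive.
off_diag = A - np.diag(np.diag(A))

# M-matrix check 2: the inverse is entrywise nonnegative (here the inverse
# is the discrete Green's function, which is strictly positive).
Ainv = np.linalg.inv(A)
```

The same checks pass for the 2D and 3D five/seven-point stencils; it is this sign structure that lets elimination proceed without cancellation.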

I assume that you're interested in solving the boundary value problem in a situation where the boundaries are far away from the source, so that the solution is close to the analytical solution on an unbounded domain.

One option here would be to write the solution to your problem as the sum of analytical solution to the problem on an unbounded domain plus a correction term. That is,

$$ u(x,y,t) = k_{t}(x,y) + v(x,y,t). $$

Since the PDE is linear, your correction term $v(x,y,t)$ would also have to satisfy the heat equation. You could easily derive boundary conditions for $v$ in terms of the boundary conditions for $u$ and the values of $k_{t}(x,y)$ at the boundaries.
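For a concrete (if simplified) illustration of this decomposition, here is the steady-state analogue: the screened Poisson problem $-u'' + u = \delta(x)$ on $[-L, L]$ with homogeneous Dirichlet conditions, where the free-space solution $G(x) = e^{-|x|}/2$ plays the role of $k_t$. All discretization choices below are illustrative assumptions:

```python
import numpy as np

# Steady 1D analogue of u = (free-space solution) + (correction):
# solve  -u'' + u = delta(x)  on [-L, L],  u(+-L) = 0.
# Free-space solution: G(x) = exp(-|x|)/2.  The correction v satisfies the
# homogeneous equation -v'' + v = 0 with boundary values v(+-L) = -G(+-L).

L_dom, n = 5.0, 501                  # domain half-width, interior grid points
h = 2 * L_dom / (n + 1)
x = np.linspace(-L_dom + h, L_dom - h, n)

# Discrete operator A = -D2 + I with Dirichlet boundary conditions.
A = (np.diag(np.full(n, 2.0 / h**2 + 1.0))
     + np.diag(np.full(n - 1, -1.0 / h**2), 1)
     + np.diag(np.full(n - 1, -1.0 / h**2), -1))

# Direct solve with a discrete delta at the center node.
b = np.zeros(n)
b[n // 2] = 1.0 / h
u_direct = np.linalg.solve(A, b)

# Decomposition: exact free-space solution plus a computed correction whose
# boundary values cancel G at the endpoints.
G = np.exp(-np.abs(x)) / 2
vbc = -np.exp(-L_dom) / 2            # v(+-L) = -G(+-L)
rhs_v = np.zeros(n)
rhs_v[0] = vbc / h**2                # Dirichlet data enters the RHS
rhs_v[-1] = vbc / h**2
v = np.linalg.solve(A, rhs_v)
u_split = G + v
# u_direct and u_split approximate the same exact solution, but in u_split
# the singular part is carried analytically by G.
```

The payoff is that the correction $v$ is smooth and of uniformly small magnitude, so the floating-point solve only has to resolve $v$, not the sharply peaked solution itself.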

Thanks Brian. Actually, I'm interested in geometries for which there is no known analytical solution for $k_t$. Often these are compact spaces without boundary.
– TerronaBell Oct 30 '11 at 16:11

On compact Riemannian manifolds, you have an asymptotic expansion of the heat kernel, which is basically the Euclidean heat kernel with correction terms coming from the metric and its derivatives.
– Matthias Ludewig Oct 30 '11 at 18:09