I have a rather specific question regarding the condition number. I run FEM simulations involving multiple length scales, which results in a huge disparity between the largest and smallest entries in my matrix. The condition number can get as high as $10^{15}$ in some circumstances.

In numerical analysis I often see the error bound involving the condition number as it applies to the error in a solution computed by direct methods. My question is whether this logic also applies to the error in an iterative solver like CG or GMRES. I know that the convergence rate is affected by the eigenvalues of the matrix, and I do notice huge slowdowns when running problems of this type, but I am uncertain about the accuracy. Any help would be appreciated.

$\begingroup$It could be my lack of understanding of FEM, but in multiscale modeling problems the ratio of the volume of my largest element to my smallest element is about $10^{10}$. I know that those parameters go into the entries of the matrix. What I do not know is whether this sort of thing is accounted for in FEM linear solvers (though I don't know how it would be, which is why I asked the question). So to answer your question: the mesh is refined, in that the elements all have acceptable quality, but the disparate element sizes led me to estimate that my condition number would be on this order.$\endgroup$
– CraigJ Mar 12 '15 at 13:53

2 Answers
2

Ill-conditioning is a feature of the system of equations, not of the algorithm used to solve it. If your systems are that badly conditioned ($10^{15}$), then you can expect the solution to be extremely sensitive to any perturbation of the problem data, even if the solve is done in extremely high precision (e.g. 500 digits) arithmetic using direct factorization.

Your iterative method is unlikely to converge to a solution in any reasonable amount of time. Even if you were willing to wait around for centuries, the solution you got would still be extremely sensitive to any perturbations in the problem data.
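A minimal sketch of this sensitivity (not part of the original answer; the Hilbert matrix is used here as a stand-in ill-conditioned system, assuming NumPy is available). A perturbation of one part in $10^8$ in the right-hand side, aimed along the direction the system amplifies most, changes the solution by many orders of magnitude more:

```python
import numpy as np

n = 10
i = np.arange(n)
# Hilbert matrix: a classic ill-conditioned example, cond(A) ~ 1e13 for n = 10
A = 1.0 / (1.0 + i[:, None] + i[None, :])

x_true = np.ones(n)
b = A @ x_true

# Perturb b by one part in 1e8 along the left singular vector of the
# smallest singular value (the worst-case direction for amplification).
U, s, Vt = np.linalg.svd(A)
db = 1e-8 * np.linalg.norm(b) * U[:, -1]

x = np.linalg.solve(A, b)
x_pert = np.linalg.solve(A, b + db)

rel_input = np.linalg.norm(db) / np.linalg.norm(b)           # 1e-8
rel_output = np.linalg.norm(x_pert - x) / np.linalg.norm(x)  # vastly larger
print(rel_input, rel_output)
```

The amplification factor here approaches $\kappa(A)$, since the perturbation lies in the direction associated with the smallest singular value; random perturbations are amplified less, but still catastrophically for data known only to 5-10%.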

$\begingroup$So, if I have a parameter with 5-10% variability that influences the values in the matrix, then this uncertainty will be magnified many-fold regardless of the solution method? Thanks, just asking for a bit of clarification.$\endgroup$
– CraigJ Mar 6 '15 at 20:34

1

$\begingroup$Yes, if your data are 5-10% accurate and you have this badly conditioned a system you're in deep trouble. You really need to consider some kind of regularization.$\endgroup$
– Brian Borchers Mar 6 '15 at 20:52

$\begingroup$This is interesting. My element sizes are roughly the same, but I have ill-conditioned matrices due to the scaling of each kernel in our PDE. You mention regularization; any suggestions on how to resolve this? For example, in a relatively small version of our problem, -pc_svd_monitor in PETSc revealed about 540 of 620 near-zero singular values of order $10^{-20}$. We rescaled the kernels with the length scale and reduced this to about 280 of 620 near-zero singular values. The problem is that the largest singular values are still of order $10^{12}$ or so, and the problem is not converging.$\endgroup$
– John M Apr 26 '15 at 22:45

We should be more precise here. The simplest estimate you can give is, using $A x^* = b$,
$$
||x^* - x|| = ||A^{-1} (b - A x)|| \le ||A^{-1}||\,||b - A x||,
$$
so if you terminate your iteration based on the residual, you can be off by a factor of $||A^{-1}||$; for the relative residual the factor is
$$
\kappa(A) = ||A||\,||A^{-1}||.
$$
This gives you a simple estimate of how many digits you are losing:
$$
\log_{10}\kappa(A).
$$
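A small numerical illustration of this estimate (my addition, not part of the answer; it assumes NumPy and uses the Hilbert matrix as an example of a badly conditioned $A$). The relative residual after a direct solve is near machine precision, while the relative error is larger by roughly $\kappa(A)$, i.e. about $\log_{10}\kappa$ digits are lost:

```python
import numpy as np

n = 8
i = np.arange(n)
# Hilbert matrix: cond(A) ~ 1e10 for n = 8
A = 1.0 / (1.0 + i[:, None] + i[None, :])

x_true = np.linspace(1.0, 2.0, n)
b = A @ x_true

x = np.linalg.solve(A, b)

rel_residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
rel_error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
kappa = np.linalg.cond(A)

# The tiny residual hides an error up to kappa times larger:
# roughly log10(kappa) of the 16 double-precision digits are gone.
print(rel_residual, rel_error, np.log10(kappa))
```

The same gap between residual and error applies when an iterative solver is stopped on a residual tolerance, which is exactly the situation in the question.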