Nils,
On Mon, 20 Nov 2006 09:45:47 +0100, Nils Wagner wrote:
ARPACK uses iterative techniques to find eigenvalues. If it converges, the
eigenvalues are always right, but a part of the spectrum may be missed in some
cases. ARPACK is particularly good when you want a small number of eigenvalues
of a big system.
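To make the "small number of eigenvalues of a big system" point concrete, here is a deliberately artificial sketch. Note it uses the scipy.sparse.linalg.eigs interface of later SciPy releases rather than the sandbox module discussed in this thread, so the names differ from the snippets below:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# A big but trivial sparse matrix: diagonal entries 1..n, so the
# exact eigenvalues are known and the largest ones are n, n-1, ...
n = 10000
A = diags(np.arange(1.0, n + 1.0), 0, format="csr")

# Ask ARPACK for just the 4 eigenvalues of largest magnitude;
# a dense eig() on a 10000x10000 matrix would be far more work.
vals = eigs(A, k=4, which="LM", return_eigenvectors=False)

print(np.sort(vals.real))  # should be close to [n-3, n-2, n-1, n]
```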
The problem you're trying to solve is therefore very badly suited to
ARPACK. Also, using a random matrix may on the odd occasion produce a matrix
with a very badly conditioned eigenspace (the matrix made up of all the
eigenvectors) and thereby cause large numerical errors. It also needed
more iterations than the maximum I had set previously (based on the size
of the problem); adding an absolute minimum solved that.
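To see the conditioning point concretely, you can compare the condition number of the eigenvector matrix of a random matrix with that of a nearly defective one. This is a plain NumPy sketch of my own; the perturbed Jordan block is just a stock example of a badly conditioned eigenspace, not something from this thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Typical case: the eigenvectors of a random matrix are usually
# reasonably independent, so cond(V) stays moderate.
a = rng.random((10, 10))
_, v_good = np.linalg.eig(a)

# Bad case: a slightly perturbed Jordan block has nearly parallel
# eigenvectors, so cond(V) blows up and eigenvalue computations
# based on it carry large numerical errors.
j = np.eye(5) + np.diag(np.ones(4), 1)
j[4, 0] = 1e-10  # tiny perturbation so the matrix is diagonalizable
_, v_bad = np.linalg.eig(j)

print(np.linalg.cond(v_good))  # modest
print(np.linalg.cond(v_bad))   # enormous
```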
Solving very big matrices generated by my code I've yet to miss any eigenvalues
(I'm comparing to analytical results), so it does seem to be fairly
reliable. Convergence is affected by, amongst other things, the number of Arnoldi
vectors ARPACK uses (the ncv parameter). Let me try to demonstrate this a bit:
> from scipy.sandbox import arpack
> from scipy import *
> a = random.rand(10,10)
> k = 4
> ws, vs = arpack.eigen(a,k)
> wd, vd = linalg.eig(a)
I've changed the function name but it works as before:
import numpy as N
from scipy.sandbox import arpack
from scipy import *
a = random.rand(10,10)
k = 4
#sparse, requesting the k eigenvalues with the smallest magnitude (the default)
ws, vs = arpack.speigs.ARPACK_eigs(get_matvec(a),a.shape[0],k)
sort_ind = N.abs(ws).argsort()
ws = ws[sort_ind]
vs = vs[:,sort_ind]
#dense
wd, vd = linalg.eig(a)
sort_ind = N.abs(wd).argsort()
wd = wd[sort_ind]
vd = vd[:,sort_ind]
Here I sorted both the sparse and dense results by the absolute value of the
eigenvalues.
The outputs are:
In [41]: wd
Out[41]:
array([-0.17472186+0.j , 0.19316651+0.12446598j,
0.19316651-0.12446598j, 0.02219263+0.3638865j ,
0.02219263-0.3638865j , -0.53332444+0.j ,
-0.73274754+0.j , 0.82145598+0.28728599j,
0.82145598-0.28728599j, 5.25910771+0.j ])
In [42]: ws
Out[42]:
array([-0.17472186+0.j , 0.19316651+0.12446598j,
0.02219263+0.3638865j , 0.02219263-0.3638865j ])
Note that we got 4 eigenvalues, but they aren't the ones with the smallest
magnitude. A couple were missed.
In [43]: N.abs(wd[[0, 1, 3, 4]] - ws)
Out[43]:
array([ 0.00000000e+00, 2.58261119e-15, 2.77555756e-17,
2.77555756e-17])
This shows that all the eigenvalues computed by the ARPACK code are valid and
agree with the dense calculation. Which is more accurate is open to
speculation. You can see how to determine this by looking in the test_speigs.py
file, where a PDP^-1 factorisation is used to construct a matrix with known
eigenvalues and eigenvectors. In my experience there is no clear winner.
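For reference, the construction in test_speigs.py looks roughly like this. This is a re-sketch of mine in plain NumPy, not the actual test code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Build a matrix with known eigenvalues via A = P D P^-1, so any
# eigensolver's output can be checked against `known` exactly.
known = np.array([10.0, 5.0, 3.0, 2.0, 1.0, 0.5])
p = rng.random((6, 6))                       # random eigenvector basis
a = p @ np.diag(known) @ np.linalg.inv(p)    # A = P D P^-1

# Sanity check with the dense solver:
w = np.linalg.eigvals(a)
print(np.sort(w.real)[::-1])  # should be close to `known`
```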
Now let's try solving the same matrix using 8 Arnoldi vectors:
#sparse
ws, vs = arpack.speigs.ARPACK_eigs(get_matvec(a),a.shape[0],k, ncv=8)
sort_ind = N.abs(ws).argsort()
ws = ws[sort_ind]
vs = vs[:,sort_ind]
In [50]: wd
Out[50]:
array([-0.17472186+0.j , 0.19316651+0.12446598j,
0.19316651-0.12446598j, 0.02219263+0.3638865j ,
0.02219263-0.3638865j , -0.53332444+0.j ,
-0.73274754+0.j , 0.82145598+0.28728599j,
0.82145598-0.28728599j, 5.25910771+0.j ])
In [51]: ws
Out[51]:
array([-0.17472186+0.j , 0.19316651+0.12446598j,
0.19316651-0.12446598j, 0.02219263+0.3638865j ])
In [52]: N.abs(wd[0:4]-ws)
Out[52]:
array([ 3.60822483e-16, 8.69330150e-16, 8.69330150e-16,
4.33555951e-16])
Now you can see that none of the eigenvalues are missed. Interestingly, the ncv
specified here is less than the default (which should be k*2+1 == 9). Anyway,
this is likely to change on every run, since you're using random matrices.
If I remember and understood the ARPACK manual correctly, it is better at
finding eigenvalues of large magnitude than small. The symmetric case should
also be easier to solve.
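When you do want the small-magnitude end of the spectrum, the usual trick is shift-invert mode. Here is a sketch for the symmetric case using the eigsh interface of later SciPy releases; this is an assumption on my part, as the sandbox wrapper may not expose shift-invert:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# The 1-D Laplacian; its smallest eigenvalues are the hard
# (small-magnitude) end of the spectrum for plain ARPACK.
n = 500
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# sigma=0 switches ARPACK to shift-invert mode: the smallest
# eigenvalues of A become the largest ones of (A - sigma*I)^-1,
# which is exactly where ARPACK converges best.
vals = eigsh(A, k=4, sigma=0, return_eigenvectors=False)

# This operator's spectrum is known analytically:
# lambda_j = 2 - 2*cos(j*pi/(n+1))
exact = 2.0 - 2.0 * np.cos(np.arange(1, 5) * np.pi / (n + 1))
print(np.sort(vals))
print(exact)
```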
Regards
Neilen
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
--
you know its kind of tragic
we live in the new world
but we've lost the magic
-- Battery 9 (www.battery9.co.za)