I am trying to run an eigenvalue/eigenvector calculation for huge sparse matrices (400,000 × 400,000). Ideally I would like to be able to run the following in a parallelised script:

[v,d] = eigs(A, B, length(A));

If I run the command this way, one problem is that I run out of memory, so I need some kind of codistributed array for the eigenvector matrix. I would also like to speed up the calculation as much as possible when running it on a cluster.
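As a rough sketch of one way to cut the memory cost (assuming A and B are sparse and the problem is well posed for eigs): asking eigs for only k eigenpairs instead of all length(A) of them makes the eigenvector matrix n-by-k rather than n-by-n. The value of k and the 'smallestabs' target below are illustrative assumptions, not anything from the original question.

```matlab
% Sketch: compute only k << n eigenpairs of the generalized problem
% A*x = lambda*B*x.  With n = 400,000 and k = 50, v is 400,000-by-50
% instead of 400,000-by-400,000.
k = 50;                              % hypothetical number of eigenpairs needed
[v, d] = eigs(A, B, k, 'smallestabs');   % target region is an assumption
```

Whether this is acceptable depends on how many eigenpairs you genuinely need; eigs (an iterative Krylov method) is only efficient when k is a small fraction of n, and requesting nearly all eigenpairs of a 400,000-square matrix is usually infeasible regardless of storage.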

What is the best and most efficient way of doing this?

Thank you,

George

PS: I only need a few components of each normalised eigenvector, not all their values. Is there any way I could tweak the MATLAB function to discard unneeded values as they are calculated, and so reduce the storage needed for the result?
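A minimal sketch of the post-processing you describe, assuming the needed components are known row indices (the indices and k below are hypothetical). This does not change how eigs computes internally, but it frees the full eigenvector matrix as soon as the wanted rows are copied out:

```matlab
% Sketch: keep only selected components (rows) of each eigenvector.
rows = [1, 250, 1000];               % hypothetical indices of needed components
k = 50;                              % hypothetical number of eigenpairs
[v, d] = eigs(A, B, k, 'smallestabs');
vSub = v(rows, :);                   % just the components you need, 3-by-k
clear v                              % release the full n-by-k matrix
```

Discarding rows *during* the iteration would require modifying eigs itself, which is not straightforward; the Krylov iteration needs the full-length vectors until convergence, so the peak memory is set by the n-by-k working set rather than by the final result you keep.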