ScaLAPACK 2.0.0


Release date: Friday, November 11, 2011.

This material is based upon work supported by the National Science Foundation
and the Department of Energy under Grant No. NSF-OCI-1032861, NSF-CCF-00444486,
NSF-CNS 0325873, NSF-EIA 0122599, NSF-ACI-0090127, DOE-DE-FC02-01ER25478,
DOE-DE-FC02-06ER25768.

ScaLAPACK is a software package provided by Univ. of Tennessee, Univ. of
California, Berkeley, and Univ. of Colorado Denver.

7. More details

7.1. PxHSEQR: Nonsymmetric Eigenvalue Problem

Robert Granat, Umeå University and HPC2N,
Bo Kågström, Umeå University and HPC2N,
Daniel Kressner, École Polytechnique Fédérale de Lausanne, and
Meiyue Shao, Umeå University and HPC2N

LAPACKer: Rodney James (UC Denver)

Contribution: computes the eigenvalues of a nonsymmetric real matrix.
Implements the parallel distributed Hessenberg QR algorithm with the
small-bulge multi-shift QR algorithm together with aggressive early deflation.

PxHSEQR computes the eigenvalues of an upper Hessenberg matrix H and,
optionally, the matrices T and Z from the Schur decomposition H = Z*T*Z^T,
where T is an upper quasi-triangular matrix (the Schur form), and Z is the
orthogonal matrix of Schur vectors. Optionally, Z may be postmultiplied into an
input orthogonal matrix Q so that this routine can give the Schur factorization
of a matrix A which has been reduced to the Hessenberg form H by the orthogonal
matrix Q: A = Q*H*Q^T = (QZ)*T*(QZ)^T.
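The same two-stage factorization can be illustrated sequentially with SciPy (an assumption for illustration only: this is not ScaLAPACK's distributed API, just the identical mathematics on a small dense matrix):

```python
# Sequential sketch of the factorization PxHSEQR targets, using SciPy.
import numpy as np
from scipy.linalg import hessenberg, schur

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

# Stage 1: reduce A to Hessenberg form H with orthogonal Q, A = Q*H*Q^T
H, Q = hessenberg(A, calc_q=True)

# Stage 2: real Schur form of H, H = Z*T*Z^T with T quasi-triangular
T, Z = schur(H, output='real')

# Combining both stages: A = (QZ)*T*(QZ)^T
QZ = Q @ Z
assert np.allclose(QZ @ T @ QZ.T, A)
```

The eigenvalues of A are read off the 1x1 and 2x2 diagonal blocks of T; complex conjugate pairs correspond to the 2x2 blocks.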

PxGEBAL balances a general real matrix A. This involves, first, permuting A by
a similarity transformation to isolate eigenvalues on the diagonal; and second,
applying a diagonal similarity transformation to make the rows and columns as
close in norm as possible. Both steps are optional. Balancing may reduce the
1-norm of the matrix, and improve the accuracy of the computed eigenvalues
and/or eigenvectors.
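A small sequential sketch of the same balancing idea, using SciPy's `matrix_balance` as a stand-in for the distributed PxGEBAL (an assumption; the distributed routine has a different interface):

```python
# Balancing sketch: permutation plus diagonal scaling, eigenvalues preserved.
import numpy as np
from scipy.linalg import matrix_balance

# A matrix with badly imbalanced row/column norms
A = np.array([[1.0, 1e4, 0.0],
              [1e-4, 1.0, 1e-2],
              [0.0, 1e2, 1.0]])

# B is the balanced matrix; T encodes the permutation and scaling,
# with A = T @ B @ inv(T) (a similarity transformation)
B, T = matrix_balance(A)

# Similarity preserves the eigenvalues exactly
assert np.allclose(T @ B @ np.linalg.inv(T), A)
```

Because only a permutation and powers-of-two diagonal scaling are applied, the transformation introduces no rounding error of its own.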

The graph is extracted from reference [1]; the experiments were performed by
Robert Granat, Bo Kågström, and Daniel Kressner on Intel Xeon quad-core nodes,
computing the Schur form of a dense random matrix reduced to Hessenberg form.
(a) Execution times for a 4000 x 4000 matrix using 1 or 4 cores of a single
node. (b) Execution times for a 16,000 x 16,000 matrix using 100 cores of
25 nodes.

7.2. MRRR: Symmetric Eigenvalue Problem

This parallel algorithm is derived from Parlett and Dhillon's SIAG/LA
prize-winning work on sequential MRRR. Compared to other algorithms, parallel
MRRR has some striking advantages. First, for an n x n matrix on p processors,
tridiagonal inverse iteration can require up to O(n^3) operations and
O(n^2) memory on a single processor to guarantee the correctness of the
computed eigenpairs; MRRR is guaranteed to produce the right answer with
O(n^2/p) memory, and it needs no reorthogonalization. Second, MRRR allows
the computation of subsets of eigenpairs at reduced cost, whereas QR and
Divide & Conquer do not: for computing k eigenpairs, tridiagonal parallel
MRRR requires O(nk/p) operations per processor.
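The subset capability can be illustrated sequentially with SciPy's tridiagonal eigensolver (an assumption for illustration: this is the sequential LAPACK path, not the distributed ScaLAPACK driver):

```python
# Computing only k eigenpairs of a symmetric tridiagonal matrix.
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 50
d = np.full(n, 2.0)        # diagonal entries
e = np.full(n - 1, -1.0)   # off-diagonal entries (1-D Laplacian stencil)

# Request only the k smallest eigenpairs by index (0-based, inclusive)
k = 5
w, v = eigh_tridiagonal(d, e, select='i', select_range=(0, k - 1))
assert w.shape == (k,) and v.shape == (n, k)

# Residual check: T v_j = w_j v_j for each computed pair
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
assert np.allclose(T @ v, v * w)
```

Only the requested k columns of eigenvectors are formed, which is where the O(nk/p) cost per processor in the distributed setting comes from.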

7.3. BLACS revamping

LAPACKer: Rodney James (UC Denver)

Contribution: with ScaLAPACK 2.0, the (MPI) BLACS is now completely integrated
into ScaLAPACK. Linking a ScaLAPACK application now requires only
libscalapack.a, liblapack.a, libblas.a, and possibly the MPI libraries.
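As a sketch, a link line might now look like the following (the compiler wrapper name, library search paths, and the application name `myapp` are illustrative and vary by system):

```shell
# Link an MPI Fortran application against ScaLAPACK 2.0:
# no separate -lblacs is needed anymore.
mpif90 -o myapp myapp.f90 -lscalapack -llapack -lblas
```

In earlier releases the BLACS had to be linked as separate libraries; that step disappears with this release.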