Algebraic solutions

Definitions

Matrices

Definition (Matrix). A matrix is a linear transformation between finite-dimensional vector spaces.

Assembling a matrix
Assembling a matrix means computing its entries and storing them in a sparse or dense format. In the finite element context, for example, the storage format is sparse to take advantage of the many zero entries.
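For example, in Feel++ the matrix of a bilinear form is assembled into sparse storage behind the scenes. A minimal sketch, assuming a mesh and a function space Vh have already been built (names are illustrative):

auto u = Vh->element();
auto v = Vh->element();
// the bilinear form accumulates its entries into a sparse matrix
auto a = form2( _test=Vh, _trial=Vh );
a = integrate( _range=elements(mesh), _expr=gradt(u)*trans(grad(v)) );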

Preconditioners

Definition

A preconditioner \$\mathcal{P}\$ is a method for constructing a matrix \$P^{-1} = \mathcal{P}(A,A_p)\$ (just a linear function, not assembled!) from a matrix \$A\$ and extra information \$A_p\$, such that the spectrum of \$P^{-1}A\$ (left preconditioning) or \$A P^{-1}\$ (right preconditioning) is well-behaved. Applying the preconditioner improves the conditioning of the linear system to be solved.
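For instance, with left preconditioning one solves \$P^{-1}Ax = P^{-1}b\$ instead of \$Ax=b\$; with right preconditioning one solves \$AP^{-1}y = b\$ and recovers \$x = P^{-1}y\$. Both systems have the same solution \$x\$, but the preconditioned operator has a better-behaved spectrum.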

All these parameters are optional, which means that the simplest call reads, for example:

auto b = backend();

They take default values as described in the following table:

| parameter  | type      | default value            |
|------------|-----------|--------------------------|
| _name      | string    | "" (empty string)        |
| _rebuild   | boolean   | false                    |
| _kind      | string    | "petsc"                  |
| _worldcomm | WorldComm | Environment::worldComm() |

_name

Backends are stored in a Backend factory and handled by a manager that
keeps track of allocated backends. They are registered by name and
kind. The default name is the empty string (""), which designates the
default Backend. The _name parameter is important because it also
provides the name of the command line/config file option section
associated with the Backend.

Only the command line/config file options for the default backend are
registered. Developers have to register the options for each new
Backend they define; otherwise, failures at run time are to be expected
whenever a Backend command line option is accessed.

Consider that you create a Backend named projection in your code, for
example like this:
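A minimal sketch, assuming the backend_options helper is used to register the option section for the new Backend (the exact generated option names are version-dependent):

int main( int argc, char** argv )
{
    using namespace Feel;
    // register the option section of the Backend named "projection"
    Environment env( _argc=argc, _argv=argv,
                     _desc=feel_options().add( backend_options( "projection" ) ) );
    // create (or retrieve) the Backend named "projection"
    auto b = backend( _name="projection" );
}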

_rebuild

If you want to reuse a Backend rather than allocate a new one (and add
the corresponding options to the command line/configuration file), you
can rebuild the Backend in order to delete the data structures already
associated with it, in particular the preconditioner. This is mandatory
when you first solve a linear system of dimension \$m\times m\$ and
then solve another one of a different dimension \$n \times n\$:
otherwise the Backend will throw an error saying that the dimensions
are incompatible. To avoid that, you have to rebuild the Backend, as
shown below.
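A minimal sketch, reusing the hypothetical Backend named projection from the previous example:

// drop the data structures (matrix sizes, preconditioner, ...) attached
// to the Backend named "projection" before solving a problem of a new size
auto b = backend( _name="projection", _rebuild=true );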

Although this feature might appear useful, you have to make sure that
the solving strategy applies to all the problems sharing the Backend,
because you won't be able to customize the solver/preconditioner for
each problem. If you have different problems to solve and need custom
solvers/preconditioners, it is best to use different Backends.

_worldcomm

One of the strengths of Feel++ is the ability to change the communicator, which in Feel++ is the WorldComm. This allows for example to

solve sequential problems

solve a problem on a subset of the MPI processes

For example, passing a sequential WorldComm to backend() makes all subsequent uses of the Backend generate sequential data structures (e.g. IndexSet, Vector and Matrix) and algorithms (e.g. Krylov solvers).
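A sketch, assuming the sequential WorldComm is available through an accessor such as Environment::worldCommSeqPtr() (the accessor name may vary across Feel++ versions):

// a Backend whose vectors, matrices and solvers are sequential
auto seqBackend = backend( _name="seq", _worldcomm=Environment::worldCommSeqPtr() );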

Info The default WorldComm is provided by Environment::worldComm() and corresponds to the default MPI communicator MPI_COMM_WORLD.

Eigen Problem

To solve standard and generalized eigenvalue problems, Feel++
interfaces SLEPc, a library which extends PETSc to provide the
functionality necessary for the solution of eigenvalue problems. It
comes with many strategies for both standard and generalized problems,
Hermitian or not.

We want to find \$(\lambda_i,x_i)\$ such that \$Ax_i = \lambda_i x_i\$.
To do that, most eigensolvers project the problem onto a
low-dimensional subspace; this is called a Rayleigh-Ritz projection.
Let \$V_j=[v_1,v_2,\ldots,v_j]\$ be an orthogonal basis of this
subspace; then the projected problem reads:

Find \$(\theta_i,s_i)\$ for \$i=1,\ldots,j\$ such that \$B_j
s_i=\theta_i s_i\$ where \$B_j=V_j^T A V_j\$.

Then the approximate eigenpairs \$(\lambda_i,x_i)\$ of the original
problem are recovered as \$\lambda_i=\theta_i\$ and \$x_i=V_j s_i\$.

The eigensolvers differ from each other in the way the subspace is built.

Code

In Feel++, there are two functions that can be used to solve this type
of problem: eigs and veigs.
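A hedged sketch of a call to veigs on two bilinear forms a and b built on Vh (the keyword names _formA and _formB are assumptions in the usual Feel++ style):

// solve the generalized eigenproblem associated with the forms a and b;
// solver, problem type, spectrum and transformation are read from the options
auto eigenmodes = veigs( _formA=a, _formB=b );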

where eigenmodes is a std::vector<std::pair<value_type,
element_type> > with value_type the type of the eigenvalue, and
element_type the type of the eigenvector, a function of the space
Vh.

The eigs function does not take the bilinear forms but two
matrices. Also, the solver used, the type of the problem, the position
of the spectrum and the spectral transformation are not read from the
options; they are passed as arguments, as sketched below.
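A hedged sketch of a call to eigs (keyword names and enum values are assumptions in the usual Feel++ style):

auto eigenmodes = eigs( _matrixA=A,
                        _matrixB=B,
                        _nev=10,                          // number of eigenpairs
                        _ncv=25,                          // subspace dimension
                        _problem=(EigenProblemType)GHEP,  // generalized Hermitian
                        _solver=(EigenSolverType)KRYLOVSCHUR,
                        _spectrum=SMALLEST_MAGNITUDE,
                        _transform=SINVERT );             // shift-and-invert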

where eigenmodes is a std::map<real_type, eigenmodes_type>, keyed by
real_type, the magnitude of the eigenvalue. eigenmodes_type is a
boost::tuple<real_type, real_type, vector_ptrtype> where the first
real_type is the real part of the eigenvalue, the second real_type the
imaginary part, and vector_ptrtype the eigenvector as an algebraic
vector, not an element of a function space.

The two functions take a parameter _nev that tells how many eigenpairs
to compute. This can be set from the command line option
--solvereigen.nev. Another important parameter is _ncv, which is
the size of the subspace, \$j\$ above. This parameter should always be
greater than nev. SLEPc recommends setting it to max(nev+15,
2*nev). This can be set from the command line option
--solvereigen.ncv.
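For example, to compute 10 eigenpairs with the recommended subspace size max(10+15, 2*10) = 25 (the application name is a placeholder):

./myapp --solvereigen.nev 10 --solvereigen.ncv 25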

Problem type

The standard formulation reads:

Find \$\lambda\in \mathbb{R}\$ such that \$Ax = \lambda x\$

where \$\lambda\$ is an eigenvalue and \$x\$ an eigenvector.

But in the case of the finite element method, we will deal with the
generalized form:

Find \$\lambda\in\mathbb{R}\$ such that \$Ax = \lambda Bx\$

A standard problem is Hermitian if the matrix \$A\$ is Hermitian
(\$A=A^*\$). A generalized problem is Hermitian if the matrices
\$A\$ and \$B\$ are Hermitian and \$B\$ is positive definite. If
the problem is Hermitian, then the eigenvalues are real. A special
case of the generalized problem is when the matrices are not Hermitian
but \$B\$ is positive definite.

The type of the problem can be specified using the EigenProblemType,
or at run time with the command line option --solvereigen.problem
and one of the following values:
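For instance, to select a generalized Hermitian problem at run time, using the key from the table below (the application name is a placeholder):

./myapp --solvereigen.problem ghep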

Table 4. Table of problem types

| Problem type                                | EigenProblemType | command line key |
|---------------------------------------------|------------------|------------------|
| Standard Hermitian                          | HEP              | "hep"            |
| Standard non-Hermitian                      | NHEP             | "nhep"           |
| Generalized Hermitian                       | GHEP             | "ghep"           |
| Generalized non-Hermitian                   | GNHEP            | "gnhep"          |
| Positive definite Generalized non-Hermitian | PGNHEP           | "pgnhep"         |

Position of spectrum

You can choose which eigenpairs will be computed. The user can set it
programmatically with PositionOfSpectrum or at run time with the
command line option --solvereigen.spectrum and the following value :

Table 5. Table of position of spectrum

| Position of spectrum | PositionOfSpectrum | command line key     |
|----------------------|--------------------|----------------------|
| Largest magnitude    | LARGEST_MAGNITUDE  | "largest_magnitude"  |
| Smallest magnitude   | SMALLEST_MAGNITUDE | "smallest_magnitude" |
| Largest real         | LARGEST_REAL       | "largest_real"       |
| Smallest real        | SMALLEST_REAL      | "smallest_real"      |
| Largest imaginary    | LARGEST_IMAGINARY  | "largest_imaginary"  |
| Smallest imaginary   | SMALLEST_IMAGINARY | "smallest_imaginary" |

Spectral transformation

The algorithms used to solve eigenvalue problems tend to find solutions
at the extremities of the spectrum. To improve convergence towards
interior eigenvalues, one needs to compute the eigenpairs of a
transformed operator. Such spectral transformations make it possible to
compute solutions that are not on the boundary of the spectrum.

There are three types of spectral transformation:

Shift

\$A-\sigma I\$ or \$B^{-1}A-\sigma I\$

Shift and invert

\$(A-\sigma I)^{-1}\$ or \$(A-\sigma B)^{-1}B\$

Cayley

\$(A-\sigma I)^{-1}(A+\nu I)\$ or \$(A-\sigma B)^{-1}(A+\nu B)\$

By default, shift and invert is used. You can change it with
--solvereigen.transform.

Eigensolvers

The default solver is Krylov-Schur, but it can be changed using
EigenSolverType or the option --solvereigen.solver.

Table 7. Table of eigensolvers

| Solver       | EigenSolverType | command line key |
|--------------|-----------------|------------------|
| Power        | POWER           | power            |
| Lapack       | LAPACK          | lapack           |
| Subspace     | SUBSPACE        | subspace         |
| Arnoldi      | ARNOLDI         | arnoldi          |
| Lanczos      | LANCZOS         | lanczos          |
| Krylov-Schur | KRYLOVSCHUR     | krylovschur      |
| Arpack       | ARPACK          | arpack           |

Be careful: not all solvers can compute all the problem types and
positions of the spectrum. The possibilities are summarized in the
following table.

Table 8. Supported problem types for the eigensolvers

| Solver       | Position of spectrum | Problem type                       |
|--------------|----------------------|------------------------------------|
| Power        | Largest magnitude    | any                                |
| Lapack       | any                  | any                                |
| Subspace     | Largest magnitude    | any                                |
| Arnoldi      | any                  | any                                |
| Lanczos      | any                  | standard and generalized Hermitian |
| Krylov-Schur | any                  | any                                |
| Arpack       | any                  | any                                |

Special cases of spectrum

Computing a large portion of the spectrum

If you want to compute a large number of eigenpairs, the rule for ncv
given above implies a huge amount of memory. To improve performance,
you can set the mpd (maximum projected dimension) parameter, which
limits the dimension of the projected problem.

You can set it via the command line with --solvereigen.mpd <mpd>.

Computing all the eigenpairs in an interval

If you want to compute all the eigenpairs in a given interval, you
need to use the option --solvereigen.interval-a to set the beginning
of the interval and --solvereigen.interval-b to set the end.
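For example, to compute all the eigenpairs with eigenvalues in \$[0,100]\$ (the bounds and application name are placeholders):

./myapp --solvereigen.interval-a 0 --solvereigen.interval-b 100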

In this case, be aware that the problem needs to be generalized and
Hermitian. The solver will be set to Krylov-Schur and the
transformation to shift-and-invert. Besides, you will need a linear
solver that can compute the inertia of the matrix; this is set to a
Cholesky factorization, using MUMPS if it is available. For now, this
method is only implemented in the eigs function.