m_elpa/elpa_func_allocate [ Functions ]

mpi_comm_parent=Global communicator for the calculations
process_row=Row coordinate of the calling process in the process grid
process_col=Column coordinate of the calling process in the process grid
[gpu]= -- optional -- Flag (0 or 1): use GPU version

m_elpa/elpa_func_cholesky_complex [ Functions ]

aa(local_nrows,local_ncols)=Distributed matrix to be factorized.
Distribution is as in ScaLAPACK.
Only the upper triangle needs to be set.
On return, the upper triangle contains the Cholesky factor
and the lower triangle is set to 0.
elpa_hdl(type<elpa_hdl_t>)=handle for the ELPA object

m_elpa/elpa_func_cholesky_real [ Functions ]

aa(local_nrows,local_ncols)=Distributed matrix to be factorized.
Distribution is as in ScaLAPACK.
Only the upper triangle needs to be set.
On return, the upper triangle contains the Cholesky factor
and the lower triangle is set to 0.
elpa_hdl(type<elpa_hdl_t>)=handle for the ELPA object
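
In serial terms, the two Cholesky routines above replace the upper triangle of aa with the factor U such that aa = U^H U (U^T U in the real case) and zero the lower triangle. A minimal NumPy sketch of that result (a serial stand-in, not the distributed ELPA call):

```python
import numpy as np

# Build a small symmetric positive-definite test matrix.
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))
a = m @ m.T + 4 * np.eye(4)

# Mimic the in-place result: upper triangle holds the Cholesky factor U
# with a = U^T U, lower triangle set to 0 as the ELPA routine does.
u = np.linalg.cholesky(a).T    # NumPy returns the lower factor L; U = L^T
result = np.triu(u)            # lower triangle zeroed

assert np.allclose(result.T @ result, a)
```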

m_elpa/elpa_func_get_communicators [ Functions ]

mpi_comm_parent=Global communicator for the calculations (in)
process_row=Row coordinate of the calling process in the process grid (in)
process_col=Column coordinate of the calling process in the process grid (in)
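
The (process_row, process_col) coordinates locate the calling process on the 2-D grid laid over the parent communicator. A hypothetical helper illustrating one possible rank-to-coordinate mapping (row-major ordering is an assumption here; BLACS-style grids are often column-major, so the actual convention depends on the caller):

```python
# Hypothetical helper, not part of the m_elpa API: map a global MPI rank
# to (process_row, process_col) on an nprow x npcol grid, assuming
# row-major ordering of ranks. The real ordering may differ.
def grid_coords(rank, nprow, npcol):
    return rank // npcol, rank % npcol  # (process_row, process_col)

# e.g. on a 2 x 3 grid, rank 4 sits at row 1, column 1
```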

m_elpa/elpa_func_hermitian_multiply_complex [ Functions ]

uplo_a='U' if A is upper triangular
'L' if A is lower triangular
anything else if A is a full matrix
Please note: This pertains to the original A (as set in the calling program)
whereas the transpose of A is used for calculations
If uplo_a is 'U' or 'L', the other triangle is not used at all,
i.e. it may contain arbitrary numbers
uplo_c='U' if only the upper triangular part of C is needed
'L' if only the lower triangular part of C is needed
anything else if the full matrix C is needed
Please note: Even when uplo_c is 'U' or 'L', the other triangle may be
written to a certain extent, i.e. one shouldn't rely on the content there!
ncb=Number of columns of B and C
aa(local_nrows,local_ncols)=Matrix A
bb(ldb,local_ncols_b)=Matrix B
local_nrows_b=Local rows of matrix B
local_ncols_b=Local columns of matrix B
local_nrows_c=Local rows of matrix C
local_ncols_c=Local columns of matrix C
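
As the note above says, the transpose (conjugate transpose) of A is what enters the product: in serial terms the routine computes C = A^H * B. A NumPy sketch of that operation for a full complex A (uplo_a neither 'U' nor 'L'); this is a serial analogue, not the distributed call:

```python
import numpy as np

rng = np.random.default_rng(1)
na, ncb = 4, 3
a = rng.standard_normal((na, na)) + 1j * rng.standard_normal((na, na))
b = rng.standard_normal((na, ncb)) + 1j * rng.standard_normal((na, ncb))

# Serial equivalent of the distributed operation: C = A^H * B.
c = a.conj().T @ b

assert c.shape == (na, ncb)
```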

m_elpa/elpa_func_hermitian_multiply_real [ Functions ]

uplo_a='U' if A is upper triangular
'L' if A is lower triangular
anything else if A is a full matrix
Please note: This pertains to the original A (as set in the calling program)
whereas the transpose of A is used for calculations
If uplo_a is 'U' or 'L', the other triangle is not used at all,
i.e. it may contain arbitrary numbers
uplo_c='U' if only the upper triangular part of C is needed
'L' if only the lower triangular part of C is needed
anything else if the full matrix C is needed
Please note: Even when uplo_c is 'U' or 'L', the other triangle may be
written to a certain extent, i.e. one shouldn't rely on the content there!
ncb=Number of columns of B and C
aa(local_nrows,local_ncols)=Matrix A
bb(ldb,local_ncols_b)=Matrix B
local_nrows_b=Local rows of matrix B
local_ncols_b=Local columns of matrix B
local_nrows_c=Local rows of matrix C
local_ncols_c=Local columns of matrix C
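
The uplo_a remark above has a practical consequence: when uplo_a is 'U', the lower triangle of A is never referenced, so arbitrary values there must not change the result. A serial NumPy sketch of that property (an illustration, not the ELPA routine itself):

```python
import numpy as np

rng = np.random.default_rng(2)
na, ncb = 4, 3
a_upper = np.triu(rng.standard_normal((na, na)))  # the "real" upper triangular A
b = rng.standard_normal((na, ncb))

# With uplo_a='U' the lower triangle is never referenced, so filling it
# with arbitrary numbers must not change C = A^T * B.
a_garbage = a_upper + np.tril(rng.standard_normal((na, na)), k=-1)

c_clean = a_upper.T @ b
c_mimic = np.triu(a_garbage).T @ b  # read only the declared triangle

assert np.allclose(c_clean, c_mimic)
```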

m_elpa/elpa_func_invert_triangular_complex [ Functions ]

aa(local_nrows,local_ncols)=Distributed upper triangular matrix to be inverted.
Distribution is as in ScaLAPACK.
Only the upper triangle needs to be set.
The lower triangle is not referenced.
elpa_hdl(type<elpa_hdl_t>)=handle for the ELPA object

m_elpa/elpa_func_invert_triangular_real [ Functions ]

aa(local_nrows,local_ncols)=Distributed upper triangular matrix to be inverted.
Distribution is as in ScaLAPACK.
Only the upper triangle needs to be set.
The lower triangle is not referenced.
elpa_hdl(type<elpa_hdl_t>)=handle for the ELPA object
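
Serially, these routines compute the inverse of an upper triangular matrix, which is again upper triangular. A minimal NumPy sketch of the invariant (a serial stand-in for the distributed call):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# Well-conditioned upper triangular test matrix.
u = np.triu(rng.standard_normal((n, n))) + 3 * np.eye(n)

# Serial analogue: invert the upper triangular matrix.
# The inverse of an upper triangular matrix is again upper triangular.
u_inv = np.linalg.inv(u)

assert np.allclose(u_inv, np.triu(u_inv))  # stays upper triangular
assert np.allclose(u @ u_inv, np.eye(n))
```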

m_elpa/elpa_func_solve_evp_1stage_complex [ Functions ]

ev(na)=Eigenvalues of aa; every processor gets the complete set
qq(local_nrows,local_ncols)=Eigenvectors of aa
Distribution is as in ScaLAPACK.
Must always be dimensioned to the full size (corresponding to (na,na)),
even if only a part of the eigenvalues is needed.

SIDE EFFECTS

aa(local_nrows,local_ncols)=Distributed matrix whose eigenvalues are to be computed.
Distribution is as in ScaLAPACK.
The full matrix must be set (not only one half as in ScaLAPACK).
Destroyed on exit (upper and lower halves).
elpa_hdl(type<elpa_hdl_t>)=handle for the ELPA object

m_elpa/elpa_func_solve_evp_1stage_real [ Functions ]

ev(na)=Eigenvalues of aa; every processor gets the complete set
qq(local_nrows,local_ncols)=Eigenvectors of aa
Distribution is as in ScaLAPACK.
Must always be dimensioned to the full size (corresponding to (na,na)),
even if only a part of the eigenvalues is needed.

SIDE EFFECTS

aa(local_nrows,local_ncols)=Distributed matrix whose eigenvalues are to be computed.
Distribution is as in ScaLAPACK.
The full matrix must be set (not only one half as in ScaLAPACK).
Destroyed on exit (upper and lower halves).
elpa_hdl(type<elpa_hdl_t>)=handle for the ELPA object
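
In serial terms, these 1-stage solvers diagonalize a full symmetric (real) or Hermitian (complex) matrix: ev receives the eigenvalues and the columns of qq the eigenvectors. A NumPy sketch of the result's meaning (a serial analogue; ascending eigenvalue order is NumPy's convention and assumed here):

```python
import numpy as np

rng = np.random.default_rng(4)
na = 5
m = rng.standard_normal((na, na))
a = m + m.T  # full symmetric matrix (both halves set, as the routine requires)

# Serial analogue of the 1-stage solver: eigenvalues in ev,
# eigenvectors in the columns of qq.
ev, qq = np.linalg.eigh(a)

assert np.allclose(a @ qq, qq * ev)  # A q_i = ev_i q_i, column by column
```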