NAG Toolbox: nag_matop_real_gen_sparse_lu (f01br)

Purpose

nag_matop_real_gen_sparse_lu (f01br) factorizes a real sparse matrix. The function either forms the LU factorization of a permutation of the entire matrix, or, optionally, first permutes the matrix to block lower triangular form and then only factorizes the diagonal blocks.

Description

Given a real sparse matrix A, nag_matop_real_gen_sparse_lu (f01br) may be used to obtain the LU factorization of a permutation of A,

PAQ = LU

where P and Q are permutation matrices, L is unit lower triangular and U is upper triangular. The function uses a sparse variant of Gaussian elimination, and the pivotal strategy is designed to compromise between maintaining sparsity and controlling loss of accuracy through round-off.

Optionally the function first permutes the matrix into block lower triangular form and then only factorizes the diagonal blocks. For some matrices this gives a considerable saving in storage and execution time.
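The PAQ = LU idea can be illustrated with SciPy's SuperLU-based sparse factorization (this is an analogy, not the NAG interface; all names below are SciPy's, not f01br's):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small sparse matrix; splu chooses row/column permutations that balance
# sparsity against stability, much as f01br's pivotal strategy does.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [0.0, 3.0, 2.0],
                         [1.0, 0.0, 5.0]]))
lu = splu(A)

n = A.shape[0]
Pr = np.eye(n)[lu.perm_r].T   # row permutation, playing the role of P
Pc = np.eye(n)[lu.perm_c]     # column permutation, playing the role of Q

# Verify P A Q = L U up to round-off.
residual = np.abs(Pr @ A.toarray() @ Pc - (lu.L @ lu.U).toarray()).max()
print(residual < 1e-12)  # True
```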

Extensive data checks are made; duplicated nonzeros can be accumulated.

If abort(1) = true, nag_matop_real_gen_sparse_lu (f01br) will exit immediately on detecting a structural singularity (one that depends on the pattern of nonzeros) and return ifail = 1; otherwise it will complete the factorization (see Section [Singular and Rectangular Systems]).

If abort(2) = true, nag_matop_real_gen_sparse_lu (f01br) will exit immediately on detecting a numerical singularity (one that depends on the numerical values) and return ifail = 2; otherwise it will complete the factorization (see Section [Singular and Rectangular Systems]).

If abort(3) = true, nag_matop_real_gen_sparse_lu (f01br) will exit immediately (with ifail = 5) when the arrays a and icn are filled up by the previously factorized, active and unfactorized parts of the matrix; otherwise it continues so that better guidance on necessary array sizes can be given in idisp(6) and idisp(7), and will exit with ifail in the range 4 to 6. Note that there is always an immediate error exit if the array irn is too small.

If abort(4) = false, nag_matop_real_gen_sparse_lu (f01br) proceeds using a value equal to the sum of the duplicate elements. In either case details of each duplicate element are output on the current advisory message unit (see nag_file_set_unit_advisory (x04ab)), unless suppressed by the value of ifail on entry.

Optional Input Parameters

1:
licn – int64/int32/nag_int scalar

Default:
The dimension of the arrays a, icn. (An error is raised if these dimensions are not equal.)

The dimension of the arrays a and icn as declared in the (sub)program from which nag_matop_real_gen_sparse_lu (f01br) is called. Since the factorization is returned in a and icn, licn should be large enough to accommodate this and should ordinarily be 2 to 4 times as large as nz.

2:
lirn – int64/int32/nag_int scalar

The dimension of the array irn as declared in the (sub)program from which nag_matop_real_gen_sparse_lu (f01br) is called. It need not be as large as licn; normally it will not need to be very much greater than nz.

3:
pivot – double scalar

Should have a value in the range 0.0 ≤ pivot ≤ 0.9999 and is used to control the choice of pivots. If pivot < 0.0, the value 0.0 is assumed, and if pivot > 0.9999, the value 0.9999 is assumed. When searching a row for a pivot, any element is excluded which is less than pivot times the largest of those elements in the row available as pivots. Thus decreasing pivot biases the algorithm to maintaining sparsity at the expense of stability.

Default:
0.1
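The selection rule described above can be sketched in a few lines of plain Python (the function name is illustrative, not part of the NAG interface):

```python
def eligible_pivots(row, pivot=0.1):
    """Indices of elements in `row` that pass the relative pivot tolerance:
    an element is eligible only if its magnitude is at least `pivot` times
    the largest magnitude in the row available as a pivot."""
    mags = [abs(x) for x in row]
    biggest = max(mags)
    return [j for j, m in enumerate(mags) if m > 0.0 and m >= pivot * biggest]

# With pivot = 0.1 the small entry 0.05 is excluded (0.05 < 0.1 * 2.0);
# lowering pivot toward 0.0 would admit it, favouring sparsity over stability.
print(eligible_pivots([0.05, -2.0, 0.3, 0.0], pivot=0.1))  # [1, 2]
print(eligible_pivots([0.05, -2.0, 0.3, 0.0], pivot=0.0))  # [0, 1, 2]
```

Among the eligible candidates, a sparsity criterion (such as minimum Markowitz cost) then picks the actual pivot.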

4:
lblock – logical scalar

If lblock = true, the matrix is preordered into block lower triangular form before the LU factorization is performed; otherwise the entire matrix is factorized.

Default:
true

5:
grow – logical scalar

If grow = true, then on exit w(1) contains an estimate (an upper bound) of the increase in size of elements encountered during the factorization. If the matrix is well-scaled (see Section [Scaling]), then a high value for w(1) indicates that the LU factorization may be inaccurate and you should be wary of the results and perhaps increase the parameter pivot for subsequent runs (see Section [Accuracy]).
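The connection between the pivot threshold and element growth can be seen with SciPy's SuperLU interface, whose diag_pivot_thresh plays a role broadly similar to pivot (a sketch, not the NAG API; here growth is approximated from the explicit U factor):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

A = csc_matrix(np.array([[1e-4, 1.0],
                         [1.0,  1.0]]))

# Strict threshold (1.0): rows are interchanged, so growth stays modest.
stable = splu(A, permc_spec='NATURAL', diag_pivot_thresh=1.0)
# Zero threshold: the tiny diagonal entry is accepted as pivot and
# elements in U grow by roughly a factor of 1e4.
risky = splu(A, permc_spec='NATURAL', diag_pivot_thresh=0.0)

def growth(factors):
    """Rough growth estimate: largest element of U over largest of A."""
    return np.abs(factors.U.toarray()).max() / np.abs(A.toarray()).max()

print(growth(stable), growth(risky))
```

A large ratio for the low-threshold run is exactly the kind of warning that a high w(1) conveys.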

idisp(3) and idisp(4) monitor the adequacy of ‘elbow room’ in the arrays irn and a (and icn) respectively, by giving the number of times that the data in these arrays has been compressed during the factorization to release more storage. If either idisp(3) or idisp(4) is quite large (say greater than 10), it will probably pay you to increase the size of the corresponding array(s) for subsequent runs. If either is very low or zero, then you can perhaps save storage by reducing the size of the corresponding array(s).

idisp(5), when lblock = false, gives an upper bound on the rank of the matrix; when lblock = true, it gives an upper bound on the sum of the ranks of the lower triangular blocks.

idisp(6) and idisp(7) give the minimum size of arrays irn and a (and icn) respectively which would enable a successful run on an identical matrix (but some ‘elbow-room’ should be allowed – see Section [Further Comments]).

lirn is too small: there is not enough space in the array irn to continue the factorization. You are recommended to try again with lirn (and the length of irn) equal to at least idisp(6) + n/2.

licn is too small: there is not enough space in the arrays a and icn to store the factorization. If abort(3) was false on entry, the factorization has been completed but some of the LU factors have been discarded to create space; idisp(7) then gives the minimum value of licn (i.e., the minimum length of a and icn) required for a successful factorization of the same matrix.

Duplicate elements have been found in the input matrix and the factorization has been abandoned (abort(4) = true on entry).

Accuracy

The factorization obtained is exact for a perturbed matrix whose (i,j)th element differs from a(i,j) by less than 3ερm(i,j), where ε is the machine precision, ρ is the growth value returned in w(1) if grow = true, and m(i,j) is the number of Gaussian elimination operations applied to element (i,j). The value of m(i,j) is not greater than n and is usually much less. Small ρ values therefore guarantee accurate results, but unfortunately large ρ values may give a very pessimistic indication of accuracy.

Further Comments

Timing

The time required may be estimated very roughly from the number τ of nonzeros in the factorized form (output as idisp(2)) and for nag_matop_real_gen_sparse_lu (f01br) and its associates is

where our unit is the time for the inner loop of a full matrix code (e.g., solving a full set of equations takes about (1/3)n^3 units). Note that the faster nag_matop_real_gen_sparse_lu_reuse (f01bs) time makes it well worthwhile to use this for a sequence of problems with the same pattern.

It should be appreciated that τ varies widely from problem to problem. For network problems it may be little greater than nz, the number of nonzeros in A; for discretization of two-dimensional and three-dimensional partial differential equations it may be about 3n log2(n) and (1/2)n^(5/3), respectively.

The time taken by nag_matop_real_gen_sparse_lu (f01br) to find the block lower triangular form (lblock = true) is typically 5–15% of the time taken by the function when it is not found (lblock = false). If the matrix is irreducible (idisp(9) = 1 after a call with lblock = true) then this time is wasted. Otherwise, particularly if the largest block is small (idisp(10) ≪ n), the consequent savings are likely to be greater.

The time taken to estimate growth (grow = true) is typically under 20% of the overall time.

The overall time may be substantially increased if there is inadequate ‘elbow-room’ in the arrays a, irn and icn. When the sizes of the arrays are minimal (idisp(6) and idisp(7)) it can execute as much as three times slower. Values of idisp(3) and idisp(4) greater than about 10 indicate that it may be worthwhile to increase array sizes.

Scaling

The use of a relative pivot tolerance pivot essentially presupposes that the matrix is well-scaled, i.e., that the matrix elements are broadly comparable in size. Practical problems are often naturally well-scaled but particular care is needed for problems containing mixed types of variables (for example millimetres and neutron fluxes).
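A minimal equilibration sketch in NumPy (illustrative only; f01br itself performs no scaling and assumes the matrix is supplied well-scaled):

```python
import numpy as np

# Rows on wildly different scales (e.g. millimetres vs neutron fluxes).
A = np.array([[2.0e6, 5.0e5],
              [3.0,   4.0]])

# Scale each row by the reciprocal of its largest magnitude so that the
# relative pivot tolerance compares like with like.
r = 1.0 / np.abs(A).max(axis=1)
A_scaled = r[:, None] * A
print(np.abs(A_scaled).max(axis=1))  # [1. 1.]
```

After solving the scaled system, the row scaling must of course be undone on the right-hand side (or applied to it), which is the user's responsibility.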

Singular and Rectangular Systems

It is envisaged that nag_matop_real_gen_sparse_lu (f01br) will almost always be called for square nonsingular matrices and that singularity indicates an error condition. However, even if the matrix is singular it is possible to complete the factorization. It is even possible for nag_linsys_real_sparse_fac_solve (f04ax) to solve a set of equations whose matrix is singular provided the set is consistent.

Two forms of singularity are possible. If the matrix would be singular for any values of the nonzeros (e.g., if it has a whole row of zeros), then we say it is structurally singular, and continue only if abort(1) = false. If the matrix is structurally nonsingular but singular by virtue of the particular values of the nonzeros, then we say that it is numerically singular and continue only if abort(2) = false, in which case an upper bound on the rank of the matrix is returned in idisp(5) when lblock = false.
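The distinction can be illustrated with SciPy (a sketch, not the NAG interface): structural_rank inspects only the pattern of nonzeros, while the numerical rank depends on their values.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import structural_rank

# A whole row of zeros: singular for ANY values of the nonzeros.
A = csr_matrix(np.array([[1.0, 2.0, 0.0],
                         [0.0, 0.0, 0.0],
                         [0.0, 3.0, 4.0]]))
print(structural_rank(A))  # 2, less than n = 3: structurally singular

# Full structural rank, but these particular values make row 2 a
# multiple of row 1: numerically singular.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(structural_rank(csr_matrix(B)), np.linalg.matrix_rank(B))  # 2 1
```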

Rectangular matrices may be treated by setting n to the larger of the number of rows and the number of columns and setting abort(1) = false.

Note: the soft failure option should be used (last digit of ifail = 1) if you wish to factorize singular matrices with abort(1) or abort(2) set to false.

Duplicated Nonzeros

The matrix A may consist of a sum of contributions from different sub-systems (for example finite elements). In such cases you may rely on nag_matop_real_gen_sparse_lu (f01br) to perform assembly, since duplicated elements are summed.
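The assembly behaviour can be mimicked with SciPy's COO format, which likewise sums duplicated entries on conversion (an analogy, not the NAG interface):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Two element contributions land on position (0, 0) and one on (1, 1);
# conversion to CSR sums the duplicates, performing the assembly.
rows = np.array([0, 0, 1])
cols = np.array([0, 0, 1])
vals = np.array([1.5, 2.5, 3.0])
A = coo_matrix((vals, (rows, cols)), shape=(2, 2)).tocsr()
print(A[0, 0], A[1, 1])  # 4.0 3.0
```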

Determinant

The following code may be used to compute the determinant of A (as the double variable deta) after a call of nag_matop_real_gen_sparse_lu (f01br):