My apologies if the question sounds too trivial, but I am new to LAPACK and can't figure out what's going wrong with the code. OK, so here it goes. I need to solve for x in an Ax = b system where the matrix A (about 100 x 100) is square (but could sometimes also be rectangular), complex, dense, and highly ill-conditioned (condition number \kappa ~ 1.0E+17). The best way to solve such a system is obviously the SVD.

First I computed the SVD using Matlab, then used the inverse of A to get the solution vector x ( A^{-1} b ), and that gives me the correct answer (I am solving a problem for which a theoretical solution is available).

But when I tried to solve the same problem with ZGESVD, I got a wrong solution. Wrong decomposition? Obviously! Yes, the matrix entries for U and V from ZGESVD and Matlab ([u,s,v] = svd(A)) are very different. Only a few entries in the upper left corners of U and V match (and those only to 4 or 5 decimal places). Surprisingly, I get a very good match between ZGESVD and Matlab for the singular values S.

When I test the ZGESVD code on a small matrix with a reasonable \kappa (~ 1.0E+01), a randomly generated matrix from Matlab, the comparison is very good for U, V and S (the numbers match to double precision). I am not sure what goes wrong when I try a bigger, highly ill-conditioned complex matrix with ZGESVD. I checked the correctness of the input given to ZGESVD (precision, sequence of arguments, etc.); everything looks OK. I don't see why I can't get the correct SVD from ZGESVD when Matlab uses the same routine for its svd command. Is there anything I am missing?

Just for the sake of completeness – I use the gfortran compiler (from MinGW) with LAPACK libraries built using CMake. I compile and link the libraries as follows:

Check whether the residuals || A - U*S*VT || / || A ||, || I - UT*U || and || I - VT*V || are small, up to machine precision. If that is the case, then the answer is correct. Two answers which look very different might well both be correct; neither is better than the other. The first check verifies that A = U*S*VT; the second and third check the orthogonality of the singular vectors (left and right, respectively).
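The three checks above can be sketched in a few lines of Python/NumPy (which calls the same LAPACK SVD routines underneath) rather than Fortran, for brevity. The random test matrix here is only an illustrative stand-in for the poster's A:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# A random dense complex test matrix (stand-in for the poster's system).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# numpy.linalg.svd dispatches to LAPACK's complex SVD for a complex input.
U, s, Vh = np.linalg.svd(A)

eps = np.finfo(float).eps
# Check 1: the factorization reproduces A.
resid = np.linalg.norm(A - U @ np.diag(s) @ Vh) / np.linalg.norm(A)
# Checks 2 and 3: the left and right singular vectors are orthonormal
# (conjugate transpose, since A is complex).
orth_U = np.linalg.norm(np.eye(n) - U.conj().T @ U)
orth_V = np.linalg.norm(np.eye(n) - Vh @ Vh.conj().T)

tol = 100 * n * eps  # a modest multiple of machine precision
print(resid < tol, orth_U < tol, orth_V < tol)
```

Any (U, S, V) triple passing all three checks is an equally valid answer, which is the point of the discussion that follows.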

In exact arithmetic, you do have uniqueness of U, S and VT when the singular values are distinct. In finite precision arithmetic, the concept of distinctness itself becomes fuzzy, so there is not really uniqueness of the answer up to machine precision (in particular when the matrix is ill-conditioned).

I agree with you that both could be solutions to the problem, considering the conditioning of the matrix. But since I am solving a problem which has an exact and unique solution, it baffles me that the same algorithm gives different solutions. I am not sure why Matlab gives the solution I need and ZGESVD does not.

Hello, so all is well. If you want to understand the difficulty, you need to read some literature about the condition number; that is the key phrase. Let me try to explain. There is uniqueness of the answer if you ask for exact answers. We cannot give you exact answers, and nor can Matlab. This results in non-uniqueness of the correct answers. If you look at the set of triplets (U, S, V) that satisfy the three conditions || A - U*S*VT || / || A || <= eps, || I - U^T*U || <= eps, and || I - V^T*V || <= eps, this set can be large.

Just as an example, imagine that A is n-by-n with n-2 large singular values, and you happen to find the exact pieces of U (the first n-2 columns), the exact pieces of S (the n-2 largest singular values) and the exact pieces of VT (the first n-2 rows). Now imagine that the 2 remaining singular values are extremely small. This means the condition || A - U*S*VT || / || A || <= eps is satisfied no matter what you put in the last two columns of U and the last two rows of VT. The only remaining constraints are the orthogonality of U and VT, which gives you a lot of flexibility: you can complete U with pretty much any orthogonal basis of the orthogonal complement of the first n-2 columns, and likewise for the rows of VT.

So that's the idea. If the singular values are not that small, but still smallish, only the first few digits of the singular vectors matter. The smaller the singular values, the more flexibility you have. Cheers, Julien.
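Julien's thought experiment can be demonstrated numerically. The sketch below (in Python/NumPy for brevity; the construction is an illustration, not the poster's matrix) builds A from a known SVD with two singular values far below machine precision, then rotates the last two columns of U and rows of VT by arbitrary unitaries. Both factorizations reconstruct A to machine precision even though the singular vectors differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

def rand_unitary(k):
    # A random unitary matrix via QR of a complex Gaussian matrix,
    # with the phases of R's diagonal absorbed to make the result unique.
    Z = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

U = rand_unitary(n)
Vh = rand_unitary(n)
s = np.array([1.0] * (n - 2) + [1e-17, 1e-18])  # two values below eps
A = U @ np.diag(s) @ Vh

# Replace the trailing singular vectors with arbitrarily rotated ones.
U2 = U.copy()
U2[:, -2:] = U[:, -2:] @ rand_unitary(2)
Vh2 = Vh.copy()
Vh2[-2:, :] = rand_unitary(2) @ Vh[-2:, :]

r1 = np.linalg.norm(A - U @ np.diag(s) @ Vh) / np.linalg.norm(A)
r2 = np.linalg.norm(A - U2 @ np.diag(s) @ Vh2) / np.linalg.norm(A)
print(r1, r2)  # both tiny, although U2 and Vh2 differ visibly from U and Vh
```

This is exactly why ZGESVD and Matlab can legitimately disagree in the trailing columns of U and V while agreeing on S.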

With a condition number that high, your matrix is numerically singular in double precision arithmetic, so it really does not make sense to compute the inverse. Even for a non-singular matrix, solving systems of equations by computing the inverse is not recommended, and similarly, solving via a pseudo-inverse is not recommended.

What I imagine you require is a least squares solution. See the chapter on "Linear Least Squares (LLS) Problems" in the LAPACK Users' Guide (www.netlib.org/lapack/lug/node27.html) for recommendations on LAPACK driver routines.
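As one concrete illustration of that recommendation, SciPy's `lstsq` exposes the SVD-based LAPACK least-squares drivers (for complex input, `lapack_driver='gelss'` corresponds to ZGELSS). The sketch below builds a synthetic ill-conditioned complex system, which stands in for the poster's matrix; the `cond` argument is the relative cutoff below which singular values are treated as zero:

```python
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(2)
n = 100

# Synthetic ill-conditioned complex matrix: unitary factors from QR,
# singular values spanning 1 down to 1e-17 (condition number ~ 1e17).
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
s = np.logspace(0, -17, n)
A = (U * s) @ V.conj().T

x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = A @ x_true

# ZGELSS-backed solve; singular values below cond * s_max are discarded.
x, res, rank, sv = lstsq(A, b, cond=1e-12, lapack_driver='gelss')
print(rank, n)  # effective rank is below n: the system is rank-deficient
```

The driver reports the effective numerical rank it used, which makes the "numerically singular" diagnosis explicit rather than hidden inside an explicit inverse.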

In the meantime, I tried several cases (all of which result in an ill-conditioned A) with \kappa ~ 10^16 - 10^17 and a matrix size of 450 x 450. Although I get what you and Julien said, I am a little puzzled by Matlab's capacity to deliver a spot-on solution compared to the analytical solution in 'all' the cases.

I have to be honest in admitting that the only knowledge I have about SVD is that it gives you a good solution to an (almost) singular system (maybe not always). From Matlab's help I gathered that it returns with an error message if the limit on QR steps is reached (75 in its case). I am not sure if the same happens with ZGESVD (is it not the routine ZBDSQR which takes care of the QR steps for ZGESVD? And the default iteration limit in ZBDSQR is 6). I might be completely wrong to doubt the accuracy of ZGESVD, but looking back at Matlab's results worries me.

One quick thing: I cut off my singular values at 1.0E-16, which is close to machine precision. I am not sure if the rounding errors at such high condition numbers would make the difference. Do you suggest using quadruple precision? (I am not sure that is possible with LAPACK, but it appears to be available in Maple/Mathematica.)
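The cutoff strategy the poster describes amounts to a truncated-SVD solve. A minimal sketch in Python/NumPy, with a synthetic matrix standing in for the poster's A and a relative (rather than absolute) cutoff, might look like:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50

# Synthetic ill-conditioned complex system with a known right-hand side.
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
s_true = np.logspace(0, -16, n)
A = (Q1 * s_true) @ Q2.conj().T
b = A @ (np.ones(n) + 0j)

U, s, Vh = np.linalg.svd(A)

# Truncated-SVD solve: keep only singular values above a relative cutoff,
# then x = V * diag(1/s) * U^H * b restricted to the kept components.
cutoff = 1e-12 * s[0]
keep = s > cutoff
x = Vh[keep].conj().T @ ((U[:, keep].conj().T @ b) / s[keep])
print(keep.sum(), n)  # fewer than n singular values retained
```

Note the cutoff here is relative to the largest singular value; an absolute cutoff like 1.0E-16 only behaves the same when ||A|| is of order one.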

As Julien has described and your results confirm, both MATLAB and LAPACK are giving results of comparable accuracy for the SVD. (By the way, MATLAB uses LAPACK underneath.) What is not clear is what precisely you do once you have computed the SVD. Since you are regarding some singular values as zero you are solving a reduced order system, so perhaps you can tell us exactly what you are computing.

Sven, thanks for asking what I was doing after I compute the SVD. There was a bug: I was using a single-precision allocation for the complex part of the transpose of U (UT) and of V (VT) in the step where I compute the inverse of A. I double-checked the final solution and it matches Matlab to double precision. Problem solved.

Julien, Sven - my apologies for this, but I gained valuable information on this forum in the process. Many thanks.