4 comments:

The whole point of using the SVD to compute the solution to Ax = b is that you may have an ill-conditioned or even rank-deficient matrix, meaning the smallest singular values are very small or zero, respectively. These types of matrices arise all the time in practice (for example, you'd run into this in your example if you used rand(100,100) instead of rand(4,4)). In these cases, you can truncate those small singular values (and the corresponding columns of U and V), and the SVD lets you compute the pseudo-inverse.
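A minimal sketch of that truncated-SVD solve in NumPy (the matrix sizes and the tolerance rule here are illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((100, 100))      # analogous to rand(100,100)
x_true = rng.random(100)
b = A @ x_true

# Thin SVD: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Drop singular values below a tolerance (a common machine-precision cutoff)
tol = s.max() * max(A.shape) * np.finfo(float).eps
k = int(np.sum(s > tol))        # number of singular values kept

# Truncated pseudo-inverse solution: x = V_k @ diag(1/s_k) @ U_k^T @ b
x_svd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

print(k, np.linalg.norm(A @ x_svd - b))
```

For a truly rank-deficient A the same code still runs, since the zero singular values simply fall below `tol` and are discarded; `np.linalg.pinv` applies the same idea internally via its `rcond` cutoff.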

Solving a system by computing the inverse, which doesn't exist for a rank-deficient matrix and is very inaccurate for an ill-conditioned matrix, is a very poor numerical method. You should mention this. You say that, as "aspected we have the same solutions," but in fact if you took the norm of the difference of each of these two solutions versus x, you would find the truncated xSVD solution was much more accurate.

But the thing is, it's weird that I have negative numbers, because my data suggests that I should not have negative numbers. Am I doing something wrong in the calculation, or is there something wrong in the basic idea of generating the equations?

I am reforming Linear Algebra to make the SVD central and accessible early in undergraduate courses. Many more significant modelling and application situations then become immediately available, as well as better theory. The book would be an excellent answer to all these queries. I have completed the first draft: download via https://raw.github.com/uoa1184615/LinearAlgebraGit/master/larxxia-a1a.pdf

Is there a way to label the variables for SVD? Using Python is great for SVD if you have a plain matrix. But how can I employ SVD to determine which values I should keep for a regression? Baseline SVD in Python looks pretty useless, since I just get a bunch of unlabeled numbers in my results.