Notes

A vast number of applications of traditional mathematics ultimately rest on linear equations of the form u == m . v, where u and v are vectors (lists) and m is a matrix (list of lists), all containing ordinary continuous numbers. If v is known, such equations in essence provide explicit rules for computing u. If only u is known, the equations can instead be thought of as implicit constraints on v. Even in this case, however, v can still be found fairly straightforwardly using LinearSolve[m, u].

With vectors of length n it generically takes about n^2 steps to compute u given v, and a little less than n^3 steps to compute v given u (the best known algorithms, which are based on fast matrix multiplication, currently involve about n^2.4 steps).

But as soon as the original equation is nonlinear, say u == m1 . v + m2 . v^2, the situation changes dramatically. It still takes only about n^2 steps to compute u given v, but it becomes vastly more difficult to compute v given u, taking perhaps 2^2^n steps. (Generically there are 2^n solutions for v; even for integer coefficients in the range -r to +r, with n = 2 there are already 4 solutions in 95% of cases as soon as r >= 6.)
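The contrast between the explicit and implicit directions can be sketched in Python with NumPy standing in for LinearSolve. The matrices here, and the choice of n = 1 for the nonlinear case (where u == m1 v + m2 v^2 is just a quadratic with 2 = 2^1 solutions), are illustrative assumptions, not values from the text:

```python
import numpy as np

# Linear case: u == m . v  (a hypothetical nonsingular 3x3 example)
rng = np.random.default_rng(0)
n = 3
m = rng.standard_normal((n, n))
v = rng.standard_normal(n)

u = m @ v                            # explicit rule: about n^2 multiply-adds
v_recovered = np.linalg.solve(m, u)  # implicit constraint: about n^3 steps (LU-based)
print(np.allclose(v, v_recovered))   # the linear constraint pins down v uniquely

# Nonlinear case for n = 1: u == m1*v + m2*v^2 is a quadratic in v,
# so generically there are 2 (= 2^1) solutions; for vectors of length n, 2^n.
m1, m2 = 3.0, 1.0
v0 = 2.0
u0 = m1 * v0 + m2 * v0**2        # computing u from v is still easy
roots = np.roots([m2, m1, -u0])  # solve m2*v^2 + m1*v - u0 == 0
print(sorted(roots.real))        # two solutions, one of which is v0 = 2.0
```

For n = 1 the quadratic formula recovers all solutions directly; it is for systems of polynomial equations in many variables that the generic 2^n count, and the associated doubly exponential cost, takes hold.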