I still think there’s a lot of truth in the summary above. Linear algebra is very important, and a great deal of applied math does ultimately depend on efficient solutions of large linear systems. The weakest link in the argument may be the first one: there’s a lot more to applied math than mathematical physics. Mathematical physics hasn’t declined, but other areas have grown. Still, areas of applied math outside of mathematical physics and outside of differential equations often depend critically on linear algebra.

I’d certainly recommend that someone interested in applied math become familiar with numerical linear algebra. If you’re going to be an expert in differential equations, optimization, or many other fields, you need to be at least familiar with numerical linear algebra to compute anything. As Stephen Boyd points out in his convex optimization class, many of the breakthroughs in optimization over the last 20 years have at their core breakthroughs in numerical linear algebra. Improved algorithms have sped up the solution of very large systems more than Moore’s law has.

It may seem questionable to say that linear algebra is at the heart of applied math because it’s linear. What about nonlinear applications, such as nonlinear PDEs? Nonlinear differential equations lead to nonlinear algebraic equations when discretized. But these nonlinear systems are typically solved by iterative methods such as Newton’s method, which solves a linear system at every step, so we’re back to linear algebra.
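To make the iteration concrete, here is a minimal sketch of Newton’s method on a small nonlinear system (the equations x² + y² = 4 and xy = 1 are chosen purely for illustration). Each step linearizes the system and solves a *linear* system J·δ = −F for the update, which is exactly where linear algebra re-enters.

```python
# Hypothetical example: solve the nonlinear system
#   x^2 + y^2 = 4,   x*y = 1
# by Newton's method. Every iteration solves a small linear system.

def solve_2x2(a, b):
    """Solve a 2x2 linear system a @ x = b by Cramer's rule."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    return ((a22 * b[0] - a12 * b[1]) / det,
            (a11 * b[1] - a21 * b[0]) / det)

def newton(x, y, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f = (x * x + y * y - 4.0, x * y - 1.0)   # residual F(x, y)
        if max(abs(f[0]), abs(f[1])) < tol:
            break
        jac = ((2.0 * x, 2.0 * y),               # Jacobian of F
               (y, x))
        dx, dy = solve_2x2(jac, (-f[0], -f[1]))  # the linear solve
        x, y = x + dx, y + dy
    return x, y

x, y = newton(2.0, 0.5)
```

For large discretized PDEs the 2×2 solve is replaced by a large sparse linear solve, but the structure of the iteration is the same.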

This especially goes for a lot of data science and computational problems. Often you’ll find yourself working with something that could be called a vector and something that could be called a matrix, doing matrix-vector calculations.
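As a small illustration of that pattern, here is a hand-rolled matrix-vector product in the shape it usually takes in data work: rows of a matrix X as samples, a weight vector w, and predictions as X·w. The data and weights are made up for the example.

```python
# Hypothetical data-science-flavored sketch: each row of X is one
# sample's features, w holds model weights, and the predictions are
# the matrix-vector product of X and w.

def matvec(matrix, vector):
    """Matrix-vector product: one dot product per row."""
    return [sum(a * b for a, b in zip(row, vector)) for row in matrix]

X = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]
w = [0.5, -1.0]

predictions = matvec(X, w)  # [-1.5, -2.5, -3.5]
```

In practice you’d reach for a library like NumPy rather than writing this loop, but the underlying operation is the same.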

I think that one of the reasons linear algebra is so important in numerical analysis is that systems of linear equations are exactly solvable by algorithms of polynomial complexity (solving an n×n system by Gaussian elimination requires O(n³) operations, for example). From a numerical analysis point of view, linear algebra can solve problems without generating consistency errors, so you can estimate the approximation errors (due to discretization) separately from the rounding errors (due to linear algebra computations). It’s possible to keep the same approach (reduction to linear equations) even for nonlinear problems, where computational stability becomes very important.
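For reference, here is the textbook Gaussian elimination algorithm (with partial pivoting) behind that O(n³) figure: the triple loop in the elimination phase is where the cubic cost comes from. This is a sketch for illustration, not a substitute for a tuned library like LAPACK.

```python
# Minimal Gaussian elimination with partial pivoting: solve a @ x = b.

def gauss_solve(a, b):
    n = len(a)
    a = [row[:] for row in a]   # work on copies
    b = b[:]
    for k in range(n):
        # Partial pivoting: move the largest entry in column k to row k.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate below the pivot; this triple loop is the O(n^3) work.
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])  # x ≈ [0.8, 1.4]
```

Apart from rounding, the answer is exact after a fixed, predictable amount of work, which is what distinguishes the direct linear solve from general nonlinear root-finding.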