Something that has always intrigued me

I've heard, a few times in my life, that something like 90%+ of mathematics that currently exists cannot be solved. That's amazing to me.... Anyone have anything to share on this?
One of my professors was explaining to me that a lot of the equations we're being taught (e.g. spring forces, aero drag...) are approximations to nonlinear equations.
Call me weird, but I love hearing about this stuff. It's pretty humbling.
Anyone?

... One of my professors was explaining to me that a lot of the equations we're being taught (e.g. spring forces, aero drag...) are approximations to nonlinear equations

Well, IMHO, spring forces are physics. To me, most mathematical equations and theorems have been proved rigorously. Of course, that's only most of them; there are also some special exceptions: theorems that are so difficult that we cannot prove them at the moment, but that have been tested and found to hold in every case checked so far.
Physics and chemistry are completely different. Most of the equations you study in physics come from experiments and, of course, cannot be proven.

I don't fully agree with VietDao29 that most equations are found experimentally. While yes, many equations are derived experimentally, there is some kind of logic behind every equation in physics; every proportionality between one variable and another has to make some kind of sense. In some cases experimentation precedes theory, but when theory catches up with the result, we find a satisfying explanation. For example, Newton introduced the inverse square law not because it seemed logical to him, but so that his equations would match Kepler's results. The relationship is logical nonetheless: the surface area of a sphere is proportional to its radius squared, so anything radiating uniformly outward gets diluted by a factor of r^2. Still, to answer the question more directly: most equations in physics are approximations because of the assumptions we make when trying to model nature. We cannot possibly take into account every bit of information that nature has access to, so we simplify and make approximations.
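The sphere argument can even be checked numerically. A quick sketch of my own (not part of the original argument): if intensity falls off as 1/r^2, the total flux through any enclosing sphere comes out the same.

```python
import math

def intensity(power, r):
    """Inverse-square law: source power spread uniformly over a sphere of radius r."""
    return power / (4 * math.pi * r**2)

# The 1/r^2 falloff exactly cancels the r^2 growth of the sphere's
# surface area, so the total flux through any enclosing sphere is constant.
P = 100.0  # watts (arbitrary source strength)
for r in (1.0, 2.0, 10.0):
    flux = intensity(P, r) * 4 * math.pi * r**2
    print(r, flux)  # flux is always 100.0
```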

Well since we use physics to describe the world around us, and mathematics is the language of physics, it's only logical that they work hand-in-hand. It's just quite intimidating to see the complexity of some of the systems and equations that we use, and to think that they are "simplified." Pretty cool.

Describing a physical system accurately, to maximum precision where nothing is "simplified," requires large systems of partial differential equations. Finding exact, closed-form solutions of partial differential equations is an area of mathematics that is not well developed; most recent progress has been numerical. Even so, these methods are extremely demanding even for a computer. So just think what a system of many of these equations would require.
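To give a flavor of what a numerical method looks like here, a minimal sketch of my own (a toy, not a production solver): the simplest explicit finite-difference scheme for the 1-D heat equation u_t = alpha * u_xx. Real problems couple many such equations in three dimensions, which is why accurate simulation gets expensive so quickly.

```python
def heat_step(u, alpha, dx, dt):
    """Advance the temperature profile u one time step (ends held fixed)."""
    new = u[:]
    for i in range(1, len(u) - 1):
        # Discrete Laplacian: (u[i-1] - 2*u[i] + u[i+1]) / dx^2
        new[i] = u[i] + alpha * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
    return new

# Hot spike in the middle of a cold rod; it should diffuse outward.
u = [0.0] * 21
u[10] = 100.0
for _ in range(100):
    # Stability requires alpha * dt / dx^2 <= 0.5 for this explicit scheme.
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.25)
```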

Even if we can't find the exact solutions, we can often bound the maximum error we are making when "approximating" with linear equations. We know that this error is quite small : )
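A concrete case where the error can be bounded exactly (my example, not the poster's): the small-angle approximation sin(theta) ~ theta used for the pendulum, whose Taylor remainder is at most |theta|^3 / 6.

```python
import math

# Small-angle approximation sin(theta) ~ theta, with the Taylor-remainder
# bound |sin(theta) - theta| <= |theta|**3 / 6.
for theta in (0.05, 0.1, 0.2):
    err = abs(math.sin(theta) - theta)
    bound = abs(theta)**3 / 6
    assert err <= bound
    print(theta, err, bound)  # the error sits comfortably under the bound
```

At 0.1 radians (about 6 degrees) the guaranteed error is under 0.0002, which is why the linearized pendulum works so well for small swings.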

Another factor is that it is pointless to chase arbitrary accuracy in physics, since Heisenberg's uncertainty principle already puts an irreducible error on measurements. That error can be larger than the error introduced by the approximations!

Although the claim made is so imprecise as not to admit a precise response, my answer anyway is that in fact more than 90% of problems that can be posed are not solvable exactly; in fact, in the areas known to me and of interest, the solvable ones have measure zero.

I think someone proved that it is impossible to find a formula for this as well

This is just a "forum mis-read", but it draws attention to the fact that I did not reference any of the theories that established these impossibility/unsolvability milestones.

1) Elementary Antiderivatives. There are many results in this area, and improvements are always being made (partially as a result of advances in symbolic computation). I tried to find a necessary and sufficient condition for a huge class of functions that I recall being due to Abel, but I was only able to find a much smaller theorem:

Chebyshev's theorem says that

[tex] x^p(a + bx^r)^q [/tex]

has an elementary antiderivative (for rational p, q, r) if and only if at least one of (p+1)/r, q, or (p+1)/r + q is an integer.
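Since the criterion is just rational arithmetic, it's easy to check mechanically. A small sketch of my own, using exact fractions from Python's standard library:

```python
from fractions import Fraction

def has_elementary_antiderivative(p, q, r):
    """Chebyshev's criterion for integrands of the form x**p * (a + b*x**r)**q:
    elementary iff (p+1)/r, q, or (p+1)/r + q is an integer."""
    p, q, r = Fraction(p), Fraction(q), Fraction(r)
    t = (p + 1) / r
    return any(x.denominator == 1 for x in (t, q, t + q))

# x * sqrt(1 + x**2): p=1, q=1/2, r=2 -> (p+1)/r = 1, so it IS elementary.
print(has_elementary_antiderivative(1, Fraction(1, 2), 2))   # True
# sqrt(1 + x**3): p=0, q=1/2, r=3 -> 1/3, 1/2, 5/6 -- none is an integer.
print(has_elementary_antiderivative(0, Fraction(1, 2), 3))   # False
```

So the innocent-looking integral of sqrt(1 + x^3) has no elementary antiderivative at all.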

2) Differential Equations. The original impossibility results in this field arose from efforts of mathematical physicists such as Hamilton, Jacobi, and Poincare. Sometimes amazing connections in pure mathematics can be realized by equivalent formulations of physics, in this case classical mechanics. Today nonlinear dynamics is a huge rapidly progressing field, even without formulae.

3) Polynomial roots. The impossibility of representing these roots in terms of algebraic operations (addition, subtraction, multiplication, division, root extraction) was already hinted at by Gauss, but the first proof is due to Abel. A rich theory including necessary and sufficient conditions is due originally to Galois, popularized by Weierstrass, and is the primary citation for this impossibility result.
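To illustrate (my example, not from the post above): the quintic x^5 - x - 1 is a standard example whose real root provably cannot be written in radicals, yet approximating it numerically is trivial. A minimal bisection sketch:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Bisection: repeatedly halve a sign-changing bracket [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def f(x):
    # No formula in radicals exists for the root of this quintic,
    # but f(1) < 0 < f(2) brackets it for bisection.
    return x**5 - x - 1

root = bisect_root(f, 1.0, 2.0)
print(root)  # about 1.1673
```

"Unsolvable by radicals" says nothing about the root failing to exist or being hard to compute; it only rules out one particular kind of closed-form expression.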

4) The recognition of undecidable propositions occurred in the middle of the twentieth century, and the most visible result of this kind is the undecidability of the continuum hypothesis, proven by Paul Cohen using an original technique that has come to be known as "forcing," wherein two distinct models of set theory are demonstrated, one satisfying CH and one satisfying the negation of CH.

Sorry for the long-winded regurgitation of what is common knowledge to many on this board, but results of this type were very intriguing to me as a beginner, and so I enjoyed revisiting them at that level.