I'm trying to think through something, but I don't know if my basic premise is plausible. Here we go.

Sometimes when I'm talking with people about pure mathematics, they dismiss it because it has no practical utility. But from what I know of the history of mathematics, much of the math that is useful today was once pure mathematics (I'm not so sure, but I'd guess that when calculus was invented, it had no practical application).

I also think the development of pure mathematics is important because it lets us reason about non-intuitive objects before we encounter phenomena that resemble them. With this in mind, can you provide historical examples of pure mathematics becoming "useful"?

Newton invented his fluxions (i.e. his calculus) in order to compute the orbits of celestial objects that move according to his law of gravitation. The foundations of calculus as pure mathematics were not established until the 18th century.
– Ron Gordon, Jan 17 '13 at 5:42

@JavaMan: I think there might be some debate as to whether string theory is useful ...
– Henry B., Jan 17 '13 at 5:49

@HenryB or whether an application of pure mathematics to pure mathematics is what the OP had in mind.
– Willie Wong, Jan 17 '13 at 8:48

Two similar posts under the heading "Useless math that became useful" here and on MO.
– Martin, Jan 17 '13 at 8:52

@Brad I think rglordonma was answering the OP's point about calculus being invented without a practical application, which is false as it was invented exactly for a practical application.
– user50229, Jan 17 '13 at 20:02

To add to the first bullet, many types of cryptography are based on pure number theory that was developed long before the cryptography. I also think vectors started out on the pure side before physics started using them, but I don't have a reference off hand.
– TimothyAWiseman, Jan 17 '13 at 17:14

@N.S. Of course it's in NP - I think you mean "We don't know it's NP-complete." In fact, it almost certainly isn't, since we (rather quickly) devised a way to quickly factor using quantum computers, but there is no known algorithm for quickly solving any NP-complete problem with quantum computers. In any case, I don't see how that's at all relevant - the fact is, RSA is the most widely used public-key crypto algorithm in use today, is still considered secure, and relies on the difficulty of factoring, so it fits the question.
– BlueRaja - Danny Pflughoeft, Jan 18 '13 at 5:57

Negative numbers and complex numbers were regarded as absurd and useless by many mathematicians prior to the $15^{th}$ century. For instance, Chuquet referred to negative numbers as "absurd numbers," and Michael Stifel has a chapter on negative numbers in his book "Arithmetica integra" titled "numeri absurdi". The same went for complex/imaginary numbers: Gerolamo Cardano, in his book "Ars Magna", calls the square root of a negative number a completely useless object.

I guess the same attitude towards quaternions and octonions would have been prevalent when they were initially discovered.

By the time of quaternions things had actually changed, and they had been sought for a long time in the hope they would be as good for modeling 3d movements as complex numbers are for 2d. Unfortunately they came a bit too late, and linear algebra had already eaten most of the cake.
– Thomas Ahle, Jan 17 '13 at 11:52

I hope people who have difficulty accepting complex numbers know why they accept real numbers; those are much harder to describe.
– AD., Jan 18 '13 at 11:28

This answer doesn't really say what is "useful" about complex numbers. Or negative numbers, for that matter...
– AShelly, Jan 18 '13 at 13:54

@HerngYi The point is that (at least once you have the reals) the complex numbers are just a 2-dimensional vector space over the reals (a complex number $a+bi$ is easily described as the pair of reals $(a,b)$ with a strange-but-simple multiplication rule). By contrast, the reals have to be constructed in a fundamentally infinitary manner, and almost all reals have no finite description.
– Steven Stadnicki, Jan 19 '13 at 16:52

@N.S. You got a point there, perhaps we should vote for a change there? How about better numbers? :)
– AD., Jan 27 '13 at 6:37

Control theory is used to strengthen signals in the telecom industry, as well as to calibrate CD drives. Control theory is pretty much based on Fourier analysis and on the theory of $H^\infty(\mathbb{D})$ (i.e. the space of bounded analytic functions on the unit disc).
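To make the Fourier-analysis connection concrete, here is a minimal Python sketch (my illustration, not from the answer above) of a naive discrete Fourier transform picking out the frequency of a pure tone; signal processing and control engineering rely on exactly this time-domain/frequency-domain passage:

```python
import cmath
import math

# Naive O(N^2) discrete Fourier transform, directly from the definition.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A pure tone at 3 cycles per window shows up as a spike in frequency bin 3.
N = 32
signal = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
spectrum = [abs(c) for c in dft(signal)]
peak = max(range(N // 2), key=lambda k: spectrum[k])
print(peak)   # 3
```

(Real implementations use the FFT, which computes the same thing in $O(N\log N)$.)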

Stochastic analysis came from finance. Louis Bachelier was the first one to treat Brownian motion mathematically in his thesis on speculation. I would also be curious where optimal control is supposed to have originated outside applied math.
– Michael Greinecker♦, Jan 17 '13 at 8:55

I am doubtful that the subjects of PDE or (discrete) Fourier transforms could be considered pure math, historically.
– KCd, Jan 17 '13 at 10:50

@MichaelGreinecker, you're splitting hairs - it doesn't really matter if it's BM versus geometric BM versus long-tail Lévy flights etc. These are all models; they're not the real world. - A recent graphic in The Economist showed that every year starting in 2005, on every continent, as many hedge funds fold as are created. So are they hedging or speculating?
– alancalvitti, Jan 18 '13 at 16:31

@MichaelGreinecker, the models are applied to decision-making in the real world. I can give you several examples where, after a crash, people realize, gee we were confusing the models for the real world, but not before the crash.
– alancalvitti, Jan 19 '13 at 20:11

The discussion of conic sections by the ancient Greeks (see the Wikipedia article) gave the basic definitions required by Kepler to formulate his law of planetary orbits. Of course the Greeks did not have the term "pure mathematics".

People also forget that the notion of the graph of a function was invented by Descartes and of course is now ubiquitous in our daily papers, to show clearly how bad things are getting! For more information on the invention of Cartesian coordinates, see the wikipedia entry on Descartes.

Just a comment on the term "abstract nonsense": abstraction is about analogies, and thus about saving work, doing several things at the same time. The term "abstract nonsense" comes from those who think maths ought to be hard, about solving hard problems, whereas others think one job of maths is to make difficult things easy, by developing the "right" language. It was said by Bott that Grothendieck was prepared to work very hard to make things tautological!
– Ronnie Brown, Jul 26 '13 at 8:47

Actually, it's Fermat's little theorem that is the basis of RSA. There is a widespread misconception that it is based on Euler's theorem (even though the original paper used Fermat's little theorem). The problem with relying on Euler's theorem is that it suggests the encoding and decoding procedures may not be inverses on messages that are not relatively prime to the modulus. But in fact there is no problem, by the proof using Fermat's little theorem. See the last section of the Wikipedia page on RSA at en.wikipedia.org/wiki/RSA_%28algorithm%29
– KCd, Jan 17 '13 at 10:46
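The point about Fermat's little theorem can be checked directly. Here is a toy Python sketch (primes 5 and 11, far too small for real use) showing that RSA decryption recovers every message, including messages that share a factor with the modulus, which is the case the Euler-theorem argument alone doesn't cover:

```python
# Toy RSA with tiny primes (illustration only; real RSA uses ~2048-bit moduli).
p, q = 5, 11
n = p * q                      # modulus: 55
phi = (p - 1) * (q - 1)        # 40
e = 3                          # public exponent, gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi); Python 3.8+

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# Decryption works even for m = 5, which shares the factor 5 with n = 55;
# the proof via Fermat's little theorem covers exactly this case.
for m in range(n):
    assert decrypt(encrypt(m)) == m
```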

Complex numbers are very useful in electrical engineering. An imaginary number is a hare-brained idea if you think about it: the square of this thing is $-1$?!

And yet, it's very valuable when calculating alternating currents.
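As an illustration of the AC point, here is a small Python sketch (with made-up component values) computing the phasor current of a series RLC circuit; the complex arithmetic stands in for solving a differential equation:

```python
import cmath
import math

# Series RLC circuit driven at angular frequency w: impedances simply add
# as complex numbers, turning a differential equation into algebra.
R, L, C = 100.0, 0.5, 1e-6               # ohms, henries, farads (illustrative)
w = 2 * math.pi * 50                     # 50 Hz mains

Z = R + 1j * w * L + 1 / (1j * w * C)    # total complex impedance
V = 230.0                                # RMS source voltage
I = V / Z                                # complex phasor current

print(abs(I))                            # current magnitude (amperes)
print(math.degrees(cmath.phase(I)))      # phase shift relative to the voltage
```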

The "trouble" with pure mathematics or ideas is that the empirical world is an open world (not closed like mathematics), and as we build newer and newer practical things on top of it, you never know what will turn out to be useful.

Take lambda calculus and functional programming. If you had asked a software engineer 30 years ago what functional programming was good for, you'd most often have gotten the answer "feh! silly academic toy! useless!".

Fast forward 20 years to MapReduce applied by Google and it turns out that yes, it's actually quite practical.
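A toy illustration of the map/reduce idea in Python (word counting, the standard example, on invented input): a pure "map" step followed by a pure "reduce" step, with no mutable shared state between them, which is what makes the pattern so easy to distribute.

```python
from collections import Counter
from functools import reduce

docs = ["pure math becomes useful", "useful math is pure math"]

mapped = map(lambda doc: Counter(doc.split()), docs)   # map: doc -> word counts
totals = reduce(lambda a, b: a + b, mapped)            # reduce: merge the counts

print(totals["math"])   # 3
```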

Wernher von Braun: "Research is what I'm doing when I don't know what I'm doing." Combine that with Einstein's "There is nothing as practical as a good theory." The result of this combo: since we do not know which theory is good, we have to test them; but how do you test something that has not even been formulated as pure theory first?

"Bottom up" is such an approach, but not everything can be worked out this way.

Although I feel you focus on the wrong problem: the applicability of a pure theory is trivial to check, just see if it works in practice. Try to apply Aristotle's theory of gravity to shooting cannonballs and see that it doesn't work (a stone goes up on a curve and, at the highest point of its trajectory, falls vertically down to the ground - had Aristotle never thrown stones or something?).

A harder problem is when a pure theory deceives us into a wrong representation of the real world. For example, classical logic has done huge conceptual damage to knowledge representation in AI and to the way we think about problems (all those silly logical rules that don't work, akin to the "witch" skit from Monty Python's Holy Grail).

P.S. A certain paper on fast resolution of big Horn clauses is the theory behind the pattern matching used for programming in Prolog and Erlang (maybe there are more applications I don't know of), although I can't remember the name of the paper.

@Dan-GeorgeFilimon: here it is: dropbox.com/s/o6mb944e5phsdh6/… , although only a fraction of it concerns the issues I have described, since the thesis subject is mostly "what has worked in AI re knowledge representation and processing", so it is not a direct critique of logic in the context of KM as such.
– mrkafk, Jan 20 '13 at 18:11

Most of our current mathematical knowledge was developed to explain something already observed empirically. Going way back, many early civilizations had no concept of "zero" as being a numerical quantity; however, the concept of "nothing" or "none" existed, and eventually the Babylonians, around 2000 BC, began using symbols for "none" or "zero" alongside numerals, equating the concepts. Newton laid the foundations of what we know today as calculus (also developed independently by Leibniz) in order to mathematically explain and calculate the motion of celestial bodies (and also of projectiles here on earth). Einstein developed tensor calculus in order to establish the mathematical backing for general relativity.

It can also, however, happen in reverse. Usually, this is when "pure" math exhibits some "oddity", such as a divergence or discontinuity of an "ideal" formula that otherwise models real-world behavior very closely, or something originally thought of as a practical impossibility. Then, we find that in fact the real-world behavior actually follows the math even in these "edge cases", and it was our understanding of the way things worked that was wrong. Here's one from physics which touches on some of the most basic grade-school math and yet challenges those very foundations of thought: negative absolute temperature.

Temperature, classically, is the measure of thermal energy in a system. By that definition, you can never have less than no energy in the system; hence, the concept of "absolute zero". Most "normal" people hold to this concept and think of zero kelvin as a true absolute; you can't go lower than that.

However, the theoretical, more rigorous definition of temperature has as its defining characteristic the ratio between the change in energy and the change in entropy. As you add total energy to a system, some remains "useful" as energy, while some is lost to entropy (natural disorder). It's still there (First Law of Thermodynamics), but cannot do work (Second Law of Thermodynamics).
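In symbols, the definition this paragraph describes is usually written

$$
\frac{1}{T}=\frac{\partial S}{\partial E},
$$

so the temperature $T$ is negative precisely when entropy $S$ decreases as energy $E$ is added.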

The graph of temperature using this definition has computable negative values; if entropy and energy are ever inversely related (entropy reduces as energy increases, or vice-versa), then this fraction, and thus the temperature, is negative. Even more interesting is that the graph of temperature as a function of energy over entropy diverges at absolute zero; the delta of entropy approaches zero for deltas of energy around absolute zero, producing infinitely positive or negative values with an undefined division by zero at the origin. That graph, therefore, predicts that absolute zero is actually a state not of zero energy, but of zero change in entropy, regardless of the amount of energy in the system. Absolute zero, therefore, could in fact be observed in systems with extreme (even infinite) amounts of energy, as long as no additional energy added was ever lost to entropy.

This used to be discounted out-of-hand; until recently, every thermal system known to man always exhibited a direct relationship between energy and entropy. You could keep adding all the energy you wanted, to infinity, and entropy would continue to increase as well. You could keep cooling a system all you wanted, until you took out all you could possibly remove, and entropy would decrease as well. Again, this is borne out by our everyday observations of the world; solid, crystalline ice, when heated, becomes more chaotic but generally predictable water, which when further heated becomes less predictable gas, and eventually decomposes into its even less predictable component atoms, which would further decompose into plasma.

However, work with lasers, and the theoretical behavior of same, gave us a thermal system that has an "upper bound" to the amount of possible energy we could add that remains contained within the system, and moreover, that limit was pretty easy to reach. This allows us to observe a system that actually becomes less chaotic as more energy is added to it, because the more energy that is in the system, the closer it gets to its upper limit of total energy state, and thus the fewer the number of particles in the system that are at a state less than the highest state (and thus the ability to accurately predict the energy state of any arbitrary particle is increased).

On the other side of the spectrum, recent news has reported that scientists have produced the opposite; they can get entropy to increase by removing energy from the system. Work with superfluids at extremely cold temperatures has demonstrated that at a critical point of energy removal from the system, particles within it no longer have sufficient energy to sustain the electromagnetic force that attracts them to and repels them from each other in their lowest energy state (which is also their most ordered state). They lose the ordered structure that defines conventional matter, and begin to "flow" around each other without resistance (zero viscosity). At that critical point, you have increased entropy as the result of removing energy; the particles become less predictable as to position and direction of motion when they're cooled, instead of our classical idea that things which are cooled become more orderly. At this point, we have reached "negative absolute temperature".

Thus, temperature seems to exhibit a "wraparound"; as energy increases to infinity, eventually the amount of it that can be in entropy will decrease, seemingly breaking the First Law of Thermodynamics and allowing us to get more energy back from the system than the incremental amount we added (but not more than the total amount of energy ever introduced to the system, so the First Law still holds). Because that threshold is attained (in an unbound system) at infinite energy states, we'll never get there with most of our everyday thermal systems, but we can see it in a bound system, and we can "wrap around" from the low end by removing energy to reach a negative absolute temperature. This is backed up by observance of the reciprocal of temperature, which is the thermodynamic beta or "perk". This fraction, by placing the zero entropy delta in the numerator, is perfectly continuous for all real values of the domain, including zero.

Turing's development of computability theory led to the theoretical basis of computing.

As a personal note, I take pride in dealing with models of ZF without the axiom of choice and all sorts of strange consistency results. The only way amorphous sets and D-finite combinatorics could be utilized for "practical uses" is if we prove that the universe is actually a good model for an infinite D-finite set, and we can then apply all sorts of crazy non-AC theorems to argue about properties of the universe.

Too many to count; much of the "pure mathematics" of the past has become "applied mathematics" now.

The problem with pure mathematics is that it has advanced too far for science and engineering to catch up.

Btw, doing a PhD in any serious science or engineering discipline (even some social science subjects) means doing some mathematics in the end, and of course much of the mathematics used there was regarded as "pure mathematics" 100-200 years ago.

I'd say that basically all technological achievements are founded in pure mathematics. The relationship is often long and distant, but I'd say without pure mathematics they wouldn't be possible. In fact, I think it'd be rather hard to find a technological achievement that wouldn't be based on results of pure mathematics.

To give a few examples:

Computer science. Computers are based on Turing's and Church's research about which mathematical functions are computable in some sense. At that time, it was pure mathematics, yet now it's the basis of what we use every day. CS uses many concepts from pure mathematics, starting from binary numbers, number theory, etc.

Physics. Physics evolved hand in hand with mathematics. Things that used to be purely mathematical were subsequently used in physics. Without this pure math, we wouldn't have many achievements in physics, simply because physicists wouldn't have the required theoretical tools to work with. And that means we wouldn't have the engineering achievements that use them. To give some examples:

Without calculus and infinitesimals, we wouldn't have statics, which is indispensable for most today's complex architecture.

Lie groups, a purely theoretical idea, became very useful in particle physics, which is the basis of many of today's technological advancements.

Probability and statistics are used everywhere. All empirical research is (or should be) validated using statistical methods.

IMO any pure mathematics generated by a human brain (and there probably exist, and most certainly will exist, other kinds in the near future) is at least motivated by something which actually exists in the world of human experience.

But once the work actually gets underway on a new idea in some area it takes on a life of its own and will, when polished & refined, look very different from how it did at the outset. Calculus is a great example of a very refined area of mathematics - you can see this in the notation, which has been polished smooth by generations of heavy usage and is very powerful & expressive (and typically takes students a long time to learn well).

And the magic is that every time a human brain learns a new piece of pure mathematics, it monitors its own (human) experience for any relevance/connections and the chances increase for the discovery of a new application.

So I'm not sure it has ever happened that a piece of pure mathematics was invented for no reason and was absolutely useless until an application was discovered later. And conversely, I'd be willing to bet that almost every aspect of applied mathematics has been the inspiration for pure theoretical work of some sort (whether it led to any significant advances or not).

I guess what I'm trying to say is that in mathematics (as in all of science) the dialogue between theory and practice goes in both directions and never stops.

Fractals were invented specifically to explore areas of geometry which were thought to exist only in the world of imagination of pure mathematics. They failed miserably, because it turned out that the world is chock full of fractals. Nowadays, fractals are used heavily in computer graphics and to describe the patterns of nautilus shells, pine cones, coastlines, and lightning, among many other natural phenomena.

According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense). In his writings, Leibniz used the term "fractional exponents", but lamented that "Geometry" did not yet know of them. Indeed, according to various historical accounts, after that point few mathematicians tackled the issues and the work of those who did remained obscured largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical "monsters". (Wikipedia)
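As a small illustration of how simply such "monsters" arise, here is a Python sketch (mine, not from the answer) of the standard escape-time membership test for the Mandelbrot set, the recipe behind many computer-graphics renderings of fractals:

```python
# Membership test for the Mandelbrot set: iterate z -> z^2 + c starting
# from z = 0 and watch whether the orbit escapes.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit is guaranteed to escape
            return False
    return True                 # orbit stayed bounded: c is (likely) in the set

print(in_mandelbrot(0j))        # True: the orbit stays at 0
print(in_mandelbrot(1 + 0j))    # False: the orbit 0, 1, 2, 5, 26, ... escapes
```

Coloring each pixel by how fast its orbit escapes produces the familiar images.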

When CDs were first being discussed, the engineers from Philips were in discussion in Japan with the company Sony on standards, and those from Sony said they were not happy with the error correction standards set by Philips. So the engineers went back to Eindhoven and called people together to ask who was the best expert in Europe on this new science of error correction. They were told it was a professor of number theory, J. van Lint, in Eindhoven! I did check this story with him.

I have been told that the high quality of the pictures from the Voyager space probes would not be possible without error correction, because of the weak signals, and the noisy space.

Error correction is quite widespread, from hard disks, to simple ones in the ISBN, and the advanced ones, see for example the wikipedia article, use sophisticated pure mathematics.

The first such code, the Hamming code, was invented by a researcher at Bell Labs, when he ran programs over the weekend, and came back to find "your program has an error". He swore to himself, and thought: "If it can find an error, why can't it correct it?"
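Hamming's insight fits in a few lines. This is a sketch of the classic Hamming(7,4) code in Python, with bit positions numbered 1-7 so that the syndrome directly names the flipped bit:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single flipped
# bit can be located and corrected - Hamming's "why can't it correct it?"
def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword, positions 1..7

def correct(w):
    s = 0
    for i, bit in enumerate(w, start=1):
        if bit:
            s ^= i               # syndrome: XOR of positions holding a 1
    if s:                        # nonzero syndrome = position of flipped bit
        w[s - 1] ^= 1
    return w

word = encode([1, 0, 1, 1])
word[4] ^= 1                     # corrupt one bit "in transit"
assert correct(word) == encode([1, 0, 1, 1])   # the error is repaired
```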

Wavelet and Fourier transforms are used in a very long list of medical equipment (MRA, blood pressure monitors, diabetes monitors, just to mention a few), in audio-video compression (MP3, JPEG, JPEG 2000, H.264 et al.) and in audio-video effects (audio equalization, image enhancement, etc.).
Linear algebra is the basis of the Google PageRank algorithm and of some face-recognition algorithms. This is not by any means an exhaustive list of applications, just a few that I remember.
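To illustrate the PageRank remark, here is a hedged Python sketch of power iteration on an invented three-page link graph; the ranks converge to the dominant eigenvector of the damped link matrix, which is a plain linear-algebra computation:

```python
# Tiny link graph: page 0 -> 1, page 1 -> 2, page 2 -> 0 and 1.
links = {0: [1], 1: [2], 2: [0, 1]}
n, d = 3, 0.85                       # number of pages, damping factor

rank = [1 / n] * n
for _ in range(100):                 # power iteration: r <- (1-d)/n + d*M*r
    new = [(1 - d) / n] * n
    for page, outs in links.items():
        share = rank[page] / len(outs)   # each page splits its rank evenly
        for dest in outs:
            new[dest] += d * share
    rank = new

print([round(r, 3) for r in rank])
assert abs(sum(rank) - 1.0) < 1e-9   # ranks form a probability distribution
```

Here page 1 ends up ranked highest, since both other pages link to it.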

Coming from a software development background, I can say that functional programming languages were influenced to some degree by lambda calculus, a formal system introduced by the mathematician Alonzo Church in the 1930s.
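A tiny illustration of that lineage: Church numerals, natural numbers encoded purely as functions, written here directly as Python lambdas (an encoding exercise, not how any real language implements numbers):

```python
# Church numerals: the number n is "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode by counting how many times f gets applied.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
```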

How about calculating orbital patterns (i.e. before the first satellite was ever launched)? Without the work of pure mathematics laying the groundwork for astrophysics, Apollo 13 would have been lost.

Counting processes and martingales are objects I view as purely mathematical/probabilistic. Nevertheless, they are fundamental objects in the theory of survival analysis - survival analysis being a branch used in many registry-based studies in e.g. epidemiology.

A simple model (a model without censoring) of survival analysis is the following: Let $X_1,\ldots,X_r$ be iid random variables with values in $(0,\infty)$, where $X_i$ is the lifetime of the $i$th individual. Let $X_i$ have density $f$ and distribution function $F$ with $F(t)<1$ for all $t\in (0,\infty)$. Put
$$
N_t^i=1_{\{X_i\leq t\}},\quad i=1,\ldots,r,
$$
and
$$
N_t=\sum_{i=1}^r N_t^i,
$$
i.e. $N_t$ is the number of individuals dead before $t$.
Then $(N_t^1,\ldots,N_t^r)_{t\geq 0}$ is an $r$-dimensional counting process and $(N_t)_{t\geq 0}$ is a counting process. Now, theory of local martingales and predictable covariation can be used to derive estimators such as the Nelson-Aalen estimator of the cumulative hazard rate, i.e. the function
$$
\Lambda(t)=-\log S(t),
$$
where $S(t)=1-F(t)$ is the survival function.
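For the no-censoring model above, the Nelson-Aalen estimator of $\Lambda(t)$ has a very short implementation: at each observed death time, add (deaths at that time) divided by (number still at risk). A Python sketch with an invented sample:

```python
# Nelson-Aalen estimator for the simple model without censoring:
# Lambda_hat(t) = sum over death times s <= t of dN(s) / Y(s).
lifetimes = [2.0, 3.0, 3.0, 5.0, 8.0]   # hypothetical sample X_1, ..., X_r
r = len(lifetimes)

cum_hazard = {}
H = 0.0
for t in sorted(set(lifetimes)):
    at_risk = sum(1 for x in lifetimes if x >= t)   # Y(t): still alive just before t
    deaths = lifetimes.count(t)                      # dN(t): jumps of N at t
    H += deaths / at_risk
    cum_hazard[t] = H

print(cum_hazard)
```

For this sample the estimate jumps to $1/5$ at $t=2$, then by $2/4$ at $t=3$, and so on.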

There are uses in Digital Media, Cryptography, Physics and Engineering.

A lot of pure math is knowing how to apply it. Most theories have a specific problem set they are known to solve, either because they were designed that way or because they were found to solve that problem set. But when you apply theories in ways that are not typical of the solution, you incite innovation and expand your horizons.