Why is mathematics possible?

Spectacular advances in number theory

Last week we heard about two spectacular results in number theory. As announced in Nature, Yitang Zhang proved that there are infinitely many pairs of consecutive primes which are at most 70 million apart! This is a sensational achievement. Pushing 70 million down to 2 would settle the ancient conjecture on twin primes, but this is already an amazing breakthrough. An earlier breakthrough came in 2005, when Daniel Goldston, János Pintz, and Cem Yıldırım proved that the gap between consecutive primes is infinitely often smaller than any fixed fraction of the average gap.
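To make the statements concrete, here is a minimal Python sketch that sieves the primes up to 1000 and looks at the gaps between consecutive ones. It is purely illustrative: Zhang's theorem is about gaps occurring infinitely often, far beyond any range one can enumerate.

```python
# Sieve of Eratosthenes, then examine gaps between consecutive primes.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

ps = primes_up_to(1000)
gaps = [b - a for a, b in zip(ps, ps[1:])]
twins = [(a, b) for a, b in zip(ps, ps[1:]) if b - a == 2]

print(len(twins))   # number of twin prime pairs below 1000
print(max(gaps))    # largest gap between consecutive primes below 1000
```

The twin prime conjecture asserts that pairs like those in `twins` never run out; Zhang's theorem guarantees this only for some gap size at most 70 million.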

Harald Helfgott proved that every odd integer greater than 5 is the sum of three primes. Here the story starts with Vinogradov, who proved it for sufficiently large integers; pushing down what “sufficiently large” means, and pushing up the computerized methods needed to take care of the “small” integers, required much work and ingenuity.
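A toy version of the computational side of such proofs is easy to write: brute-force checking that every small odd number is a sum of three primes. This sketch bears no resemblance to Helfgott's actual methods, which verified a vastly larger range; it only illustrates the statement.

```python
# Brute-force check of the weak Goldbach statement for small odd n.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def three_prime_sum(n, primes, prime_set):
    """Return (p, q, r) with p + q + r == n and p <= q <= r all prime, or None."""
    for p in primes:
        if 3 * p > n:          # with p <= q <= r, p can be at most n/3
            break
        for q in primes:
            if q < p:
                continue
            if 2 * q > n - p:  # with q <= r, q can be at most (n - p)/2
                break
            if (n - p - q) in prime_set:
                return (p, q, n - p - q)
    return None

LIMIT = 10_000
primes = primes_up_to(LIMIT)
prime_set = set(primes)
assert all(three_prime_sum(n, primes, prime_set) for n in range(7, LIMIT, 2))
print("every odd n with 7 <= n <", LIMIT, "is a sum of three primes")
```

For example, 7 = 2 + 2 + 3 and 9 = 2 + 2 + 5; the assertion confirms that no odd number below the limit lacks such a representation.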

Why is Mathematics possible?

The recent news, and a little exchange of views I had with Boaz Barak, bring us back to the question: “Why is mathematics possible?” This is an old question that David Kazhdan outlined in a lovely 1999 essay “Reflection on the development of mathematics in the twentieth century.” The point (from modern view) is this: We know that mathematical statements can, in general, be undecidable. We also know that a proof for a short mathematical statement can be extremely long. And we also know that even if a mathematical statement admits a short proof, finding such a proof can be computationally intractable. Given all that, what are the reasons that mathematics is at all possible?

It is popular to associate “human creativity” with an answer. The problem with incorrect (or, at least, incomplete) answers is that they often represent missed opportunities for better answers. I think that for the question “why is mathematics possible” there are opportunities (even using computational complexity thinking) to offer better answers.

Also, to reply to the actual question: apparently, although there are undecidable statements, and statements whose minimum proof length is much, much larger than the statement length, the fraction of interesting statements which fall into those categories is small enough that we can still do stuff.

The answer depends on the philosophical standpoint. Say, if one believes in evolution according to Darwin’s suggestion and also in the standard “complexity assumptions” (and I suspect that this aesthetically-ridiculous combination today is the most common one), then our ability to reason could have “self-developed” only in a universe where quite efficient heuristics exist for solving many of those problems, which are “natural” with respect to the same universe. Mathematics aims to deal with natural questions, and therefore heuristics exist.

Furthermore, the fact that we are asking this question means that our brain has developed quite considerably; in particular, by the time we can ask this sort of question, the brain is likely to be able to apply at least some of those existing heuristics.

So, under the vulgar assumptions, mathematics is possible because the question “Why is mathematics possible?” has been asked. Choosing a different philosophical standpoint would make this type of question more interesting.

Mathematics and computation are, at a much deeper level, rooted in fractals and in the use of computation/algorithms to alternate/map/convert/transform between order and “disorder,” i.e. (pseudo-)randomness. I think there is a remarkable demonstration of this in some recent algorithmic analysis of the Collatz conjecture [cooked up only yesterday!], but unfortunately it’s only embodied in a small program so far, and few are available to review it until the results are written up… For anyone interested, please drop me a comment on my blog.

My sense is that because the world is full of relationships, and because we are able to generalize, those two can be combined to form systems that, for mathematics at least, can be specified rigorously. These abstract systems are then useful to compare back to more complex but informal systems in the physical world, allowing us to predict future events.

Why is mathematics possible? Why is e^(πi) + 1 = 0? As far as our species can see and think about it, that is the way this reality works. Are there realities without a mathematical foundation? Who can really know?
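The identity quoted in the comment above, Euler's identity, can at least be checked numerically; floating-point arithmetic leaves a residue on the order of 1e-16 rather than an exact zero.

```python
import cmath
import math

# Euler's identity: e^(i*pi) + 1 = 0 (exactly, in the reals/complexes).
# In floating point the result is tiny but nonzero.
residue = cmath.exp(1j * math.pi) + 1
print(abs(residue))  # tiny, but not exactly zero
```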

A short answer is that human mathematicians look only at a very special case of the undecidable problem. However, that answer is unsatisfactory without a good understanding of what it is that marks out the proofs that humans are able to find, an understanding that seems to me to be in its infancy. (I think that it will be much better understood by, say, 2100, and that mathematics will look very different as a result.) Clearly one component of the answer is that the proofs we are able to find are based on a hierarchical structure of definitions and results. This allows us to condense arguments hugely, and gives us a method for finding long proofs. Roughly speaking, the method is to find short proof-generating ideas and then expand them (a process that can be iterated), rather than doing something disastrously inefficient like a depth-first search.

Dear Tim, probably the process of finding proofs leads to more efficient processes than some greedy or depth-first search. But the question is more about the theorems and less about the proofs. What is it about the Poincaré conjecture that allows a proof at all, and a relatively short proof, and even a relatively short proof that is not hopeless to find?

In that case, maybe the following would be a way of rephrasing your question. We know that undecidability results don’t show that mathematics is impossible, since we are interested in a tiny fraction of mathematical statements, and in practice only in a tiny fraction of possible proofs (roughly speaking, the comprehensible ones). But why is it that these two classes match up so well? Why is it that nice mathematical statements so often have proofs that are of the kind that we are able to discover?

I think part of the answer is that the evolution of mathematics follows what we can prove: it is quite easy to come up with simple statements that look hopeless (is zeta(3) a normal number? etc. etc.), so the match-up just mentioned is far from perfect. I think we develop an instinct for what kinds of statements are likely to be amenable to our techniques, and from time to time we are surprised by a statement like the Poincaré conjecture that turns out to be very very hard.

I wonder whether the true answer might be something like this: there is a “random variable” associated with “natural” mathematical statements, which takes as its value the length of the shortest “humanly discoverable” proof. We know that this variable can take very large values, or even be infinite, but on average it is fairly small (if only because a lot of statements you can write down have simple counterexamples). We get interested in the statements for which the random variable takes a large but, it seems to us, not too large, value. And empirically such statements exist.

Of course, all this is wild speculation. I don’t know how to formulate a precise question, but it would be fascinating if one could somehow do rigorous justice to the intuition that most natural statements have short proofs but a few are more difficult and a few are much more difficult. It would be something like a “quantitative statistical version of Gödel’s theorem”.

Will and Jaden Smith contribute their two pennies: http://www.vulture.com/2013/05/will-and-jaden-smith-on-working-together.html
—————
Jaden: I think that there is that special equation for everything, but I don’t think our mathematics have evolved enough for us to even—I think there’s, like, a whole new mathematics that we’d have to learn to get that equation.
Will: I agree with that.
Jaden: It’s beyond mathematical. It’s, like, multidimensional mathematical, if you can sort of understand what I’m saying.