Any pure mathematician will from time to time discuss, or think about, the question of why we care about proofs, or to put the question in a more precise form, why we seem to be so much happier with statements that have proofs than we are with statements that lack proofs but for which the evidence is so overwhelming that it is not reasonable to doubt them.

That is not the question I am asking here, though it is definitely relevant. What I am looking for is good examples where the difference between being pretty well certain that a result is true and actually having a proof turned out to be very important, and why. I am looking for reasons that go beyond replacing 99% certainty with 100% certainty. The reason I'm asking the question is that it occurred to me that I don't have a good stock of examples myself.

The best outcome I can think of for this question, though whether it will actually happen is another matter, is that in a few months' time if somebody suggests that proofs aren't all that important one can refer them to this page for lots of convincing examples that show that they are.

Added after 13 answers: Interestingly, the focus so far has been almost entirely on the "You can't be sure if you don't have a proof" justification of proofs. But what if a physicist were to say, "OK I can't be 100% sure, and, yes, we sometimes get it wrong. But by and large our arguments get the right answer and that's good enough for me." To counter that, we would want to use one of the other reasons, such as the "Having a proof gives more insight into the problem" justification. It would be great to see some good examples of that. (There are one or two below, but it would be good to see more.)

Further addition: It occurs to me that my question as phrased is open to misinterpretation, so I would like to have another go at asking it. I think almost all people here would agree that proofs are important: they provide a level of certainty that we value, they often (but not always) tell us not just that a theorem is true but why it is true, they often lead us towards generalizations and related results that we would not have otherwise discovered, and so on and so forth. Now imagine a situation in which somebody says, "I can't understand why you pure mathematicians are so hung up on rigour. Surely if a statement is obviously true, that's good enough." One way of countering such an argument would be to give justifications such as the ones that I've just briefly sketched. But those are a bit abstract and will not be convincing if you can't back them up with some examples. So I'm looking for some good examples.

What I hadn't spotted was that an example of a statement that was widely believed to be true but turned out to be false is, indirectly, an example of the importance of proof, and so a legitimate answer to the question as I phrased it. But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth. There are a few below. The more the merrier.

There's a clear advantage to knowing a 'good' proof of a statement (or even better, several good proofs), as it is an intuitively comprehensible explanation of why the statement is true, and the resulting insight probably improves our hunches about related problems (or even about which problems are closely related, even if they appear superficially unrelated). But if we are handed an 'ugly' proof whose validity we can verify (with the aid of a computer, say), but where we can't discern any overall strategy, what do we gain?
–
Colin Reid Sep 3 '10 at 13:53

9

What kind of person do you have in mind who would suggest proofs are not important? I can't imagine it would be a mathematician, so exactly what kind of mathematical background do you want these replies to assume?
–
KConrad Sep 3 '10 at 15:33

8

Colin Reid: I think one can differentiate between a person understanding and a technique understanding. The latter applies even if we cannot understand the proof. We know that the tools themselves "see enough" and "understand enough", and that in itself is a significant advance in our understanding. But we still want a "better proof", because a hard proof makes us feel that our techniques aren't really getting to the heart of the problem: we want techniques which understand the problem more clearly.
–
Daniel Moskovich Sep 3 '10 at 16:26

13

Concerning the Zeilberger link that Jonas posted, sorry but I think that essay is absurd. If Z. thinks that the fact that only a small number of mathematicians can understand something makes it uninteresting, then he should reflect on the fact that most of the planet won't understand a lot of Z's own work, since most people don't remember any math beyond high school. Is Z's work therefore dull and pointless? He has written other essays that take extreme viewpoints (like that R should be replaced with Z/p for some unknown large prime p).
–
KConrad Sep 5 '10 at 1:39

15

Every proof has its own "believability index". A number of years ago I was giving a lecture about a certain algorithm related to Galois theory. I mentioned that there were two proofs that the algorithm was polynomial time. The first depended on the classification of finite simple groups, and the second on the Riemann Hypothesis for a certain class of L-functions. Peter Sarnak remarked that he'd rather believe the second.
–
Victor Miller Sep 6 '10 at 15:56

I don't think that proofs are about replacing 99% certainty with 99.99% (or 100%, if the proof is simple enough). In one of the problems he studied early on, Fermat stated that it was important to find out whether a prime divides only numbers $a^n-1$, or also numbers of the form $a^n+1$. For $a = 2$ and $a = 3$ he saw that the answer seemed to depend on the residue class of $p$ modulo $4a$. He did not really come back to investigate this problem more closely; Euler did, but couldn't find the proof. Gauss's proofs did not merely remove the remaining 1% of uncertainty; they brought in structure and made it possible to ask the next question.
Just looking at patterns of prime divisors of $a^n \pm 1$ wouldn't have led to Artin reciprocity.
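In modern terms, the structure Gauss found is quadratic reciprocity. For the case $a = 2$ mentioned above (this formulation is my addition, not part of the original answer), it reads:

```latex
\[
\left(\frac{2}{p}\right) = (-1)^{(p^2-1)/8} =
\begin{cases}
  +1 & \text{if } p \equiv \pm 1 \pmod{8},\\
  -1 & \text{if } p \equiv \pm 3 \pmod{8}.
\end{cases}
\]
```

That $p$ divides some $2^n + 1$ exactly when the multiplicative order of $2$ modulo $p$ is even is an elementary observation; the reciprocity law is what brings the residue class of $p$ modulo $4a = 8$ into the picture.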

Surely calculus is the ultimate treasure trove for such examples. In antiquity, the Egyptians, Greeks, Indians, Chinese, and many others could calculate integrals with a pretty good degree of certainty via the method of exhaustion and its variants. But it is not without reason that Newton and Leibniz are credited with the invention of calculus. Once you had a formalism, a proof, of the product rule, chain rule, Taylor expansions, and the calculation of an integral (in fact, once you had the formalism in hand to make such a proof possible), then with that came an understanding, and from that sprang the most powerful analytic machine known to man, that is, calculus. Without a formalism, Zeno's paradox was just that: a paradox. With the concept of limits and of epsilon-delta proofs, it becomes a triviality.
Thus, in my opinion, proof is important in that it leads to mathematics. Mathematics is important in that it leads to understanding patterns, and patterns govern all of science and the universe. If you can prove something, you understand it, or at least "your concepts understand it". If you can't prove it, you're nothing more than a goat, knowing the sun will rise in the morning from experience or from experiment, but having not the slightest inkling of why.
The specific example, then, is "calculating integrals" and "solving differential equations".

With the reader's indulgence, an example of a mathematical proof saving lives. My friend's mum is an aeronautical engineer at a place which designs fighter jets. There was some wing design whose wind resistance satisfied some PDE. They simulated it numerically by computer, and everything was perfect. My friend's mum, who had studied PDEs seriously in university and thought this one could be solved rigorously, set about finding an exact solution, and lo and behold, there was some unexpected singularity: if wind were to blow at some speed from some direction, the wing would shear off. She pointed this out, was awarded a medal, and the wing design was changed. Lives saved by a proof. I'm sure there are a thousand examples like that.

Do you have a citation for the wing story? Otherwise if I repeat it the story becomes "I read about this guy on the internet who had a friend whose mother...".
–
Dan Piponi Sep 3 '10 at 17:12

47

Not to politicise this, but is it clear which of (1) having a properly-working fighter jet or (2) the opposite, saves more lives in the end?
–
José Figueroa-O'Farrill Sep 3 '10 at 18:31

39

All stories about wings falling off airplanes due to design errors are jokes or urban legends. In practice wings are not only tested extensively, but are built with huge error tolerances. And I doubt that anyone could find an exact solution of a realistic differential equation modeling 3 dimensional air flow over an airplane wing.
–
Richard Borcherds Sep 3 '10 at 22:03

Richard Lipton recently blogged about this question in the context of why a potential proof of $P \neq NP$ would be important. I am probably bastardizing his words, but one of the reasons he gives is that a proof may give new insight and methods of attack to other problems. He cites Wiles' proof of Fermat's Last Theorem as an example of this phenomenon.

Isn't the point that human reason is generally frail altogether, especially when making conclusions by using long serial chains of arguments? So in mathematics where such extended arguments are routine, we want their soundness to be as close to ideal as possible. Of course, even generally accepted proofs are occasionally later seen to be lacking, but to give up proofs as the ideal changes the very nature of mathematics.

I have heard that parts of the Italian school of algebraic geometry of the late 19th and early 20th centuries furnish an important example of this overextension of intuition.

Furthermore, it is only in the attempt at proof that the real nature of the reasons why a statement is true is finally exposed. So the reformulation and refoundation of algebraic geometry in the 20th century is said to have exposed revolutionary new ways of seeing mathematics in general.

Finally, it is only by proof that the limits of applicability of a theorem are really understood. This comes into play many times in physics, say where some "no-go theorem" is circumvented because its assumptions are not valid in some new realm.

I'm not downvoting this, but it seems overly vague to me. I think for this issue specific examples of problems that intuitive methods get wrong are more convincing than generalities about the nature of human reasoning. The Italian school example is a good one but again needs to be made more specific.
–
David Eppstein Sep 3 '10 at 18:18

1

Regarding the topic of Italian algebraic geometry, this question and some of the comments and answers may be of interest: mathoverflow.net/questions/19420/… (The linked email of David Mumford, in the comment by bhwang, is particularly interesting.)
–
Emerton Sep 3 '10 at 18:49

Nonexistence theorems cannot be demonstrated with numerical evidence. For example, the impossibility of classical geometric construction problems (trisecting the angle, doubling the cube) could only be shown with a proof that the efforts in the positive direction were futile. Or consider the equation $x^n + y^n = z^n$ with $n > 2$. [EDIT: Strictly speaking my first sentence is not true. For example, the primality of a number is a kind of nonexistence theorem -- this number has no nontrivial factorization -- and one could prove the primality of a specific number by just trying out all the finitely many numerical possibilities, whether by naive trial division or a more efficient rigorous primality test.
Probabilistic primality tests, such as the Solovay--Strassen or Miller--Rabin tests, allow one to present a short amount of compelling numerical evidence, without a proof, that a number is quite likely to be prime. What I should have written is that nonexistence theorems are usually not (or at least some of them are not) demonstrable by numerical evidence, and the geometric impossibility theorems which I mentioned illustrate that. I don't see how one can give real evidence for those theorems, short of a proof. Lack of success in making the constructions is not convincing: the Greeks couldn't construct a regular 17-gon by their rules, but Gauss showed much later that it can be done.]
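As a concrete illustration of the probabilistic tests mentioned above, here is a minimal Miller--Rabin sketch in Python (my own illustrative code, not part of the original answer):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: for composite n, each random base detects
    compositeness with probability at least 3/4, so 20 rounds leave an
    error probability below 4**-20 -- compelling evidence, not a proof."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a is a witness that n is composite
    return True  # overwhelmingly likely prime, but still no proof

print(is_probable_prime(2 ** 61 - 1))  # a known Mersenne prime
```

A "True" from this test is exactly the kind of 99.9999...% certainty the answer contrasts with a rigorous primality proof.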

You can't apply a theorem to all commutative rings unless you have a proof of the result which works that broadly. Otherwise math just becomes conjectures upon conjectures, or you have awkward hypotheses: "For a ring whose nonzero quotients all have maximal ideals, etc." Emmy Noether revolutionized abstract algebra by replacing her predecessors' tedious computational arguments in polynomial rings with short conceptual proofs valid in any Noetherian ring, which not only gave a better understanding of what was done before but revealed a much broader terrain where earlier work could be used. Or consider the true scope of harmonic analysis: it can be carried out not just in Euclidean space or Lie groups, but in any locally compact group. Why? Because, to get things started, Weil's proof of the existence of Haar measure works that broadly. How are you going to collect 99% numerical evidence that all locally compact groups have a Haar measure? (In number theory and representation theory one integrates over the adeles, which are in no sense like Lie groups, so the "topological group" concept, rather than just "Lie group", is really crucial.)

Proofs tell you why something works, and knowing that explanatory mechanism can give you the tools to generalize the result to new settings. For example, consider the classification of finitely generated torsion-free abelian groups, finitely generated torsion-free modules over any PID, and finitely generated torsion-free modules over a Dedekind domain. The last classification is very useful, but I think its statement is too involved to believe it is valid as generally as it is without having a proof.

Proofs can show in advance how certain unsolved problems are related to each other. For instance, there are tons of known consequences of the generalized Riemann hypothesis because the proofs show how GRH leads to those other results. (Along the same lines, Ribet showed how modularity of elliptic curves would imply FLT, which at the time were both open questions, and that work inspired Wiles.)

Re 1: Not all evidence is numerical! It had been known for a long time that $x^n+y^n=z^n$ doesn't admit a solution in non-constant polynomials in 1 variable for $n\geq 3,$ Kummer proved FLT for regular prime exponents, etc. I wouldn't try to assign numerical "confidence rating" to these developments prior to Wiles and Taylor-Wiles.
–
Victor Protsak Sep 3 '10 at 23:32

2

The kind of thing I had in mind was the following argument for Goldbach's conjecture. The mere fact that it is true up to some large $n$ is not that convincing, but the fact that if you assume certain randomness heuristics for the primes you can predict how many solutions there ought to be to $p_1+p_2=2n$, and that more refined prediction is true up to a large $n$, is, to my mind, very convincing indeed.
–
gowers Sep 4 '10 at 17:21

3

Since you bring up prime heuristics on the side of numerical evidence in lieu of a proof, I will point out one problem with them. Cramer's "additive" probabilistic model for the primes, which suggested results that seemed consistent with numerical data, does predict relations that are known to be wrong. See Granville's paper dms.umontreal.ca/~andrew/PDF/icm.pdf, especially starting on page 5.
–
KConrad Sep 5 '10 at 1:49

1

I think what we learn from that example is that those kinds of heuristics have to be treated with care when we start looking at very delicate phenomena such as gaps between primes. But the randomness properties you'd need for Goldbach to be true (in its more precise form where you approximate the number of solutions) are much more robust, so far more of a miracle would be needed for the predictions they make to be false.
–
gowers Sep 5 '10 at 19:18

2

Oh, I'm not disputing the value of the Hardy--Littlewood type prime distribution heuristics. At the same time, I should point out that if you apply their ideas on prime values of polynomials (so not Goldbach, but still good w.r.t. numerical data) to the distribution of irreducible values of polynomials with coefficients in F_q[u], there are provable counterexamples and this has an explanation: the Mobius function on F_q[u] has predictable behavior along some irreducible polynomial progressions. For a survey on this, see math.uconn.edu/~kconrad/articles/texel.pdf.
–
KConrad Sep 6 '10 at 1:20

Mumford, in "Rational equivalence of 0-cycles on surfaces", gave an example where an intuitive result of Severi, who claimed the space of rational equivalence classes was finite dimensional, was just completely wrong: it is infinite dimensional for most surfaces. This is a typical example of why the informal non-rigorous style of algebraic geometry was abandoned: too many of the "obvious" but unproved results turned out to be incorrect.

When I teach our "Introduction to Mathematical Reasoning" course for undergraduates, I start out by describing a collection of mathematical "facts" that everybody "knew" to be true, but which, with increasing standards of rigor, were eventually proved false. Here they are:

Non-Euclidean geometry: The geometry described by Euclid is the only possible "true" geometry of the real world.

Zeno's paradox: It is impossible to add together infinitely many positive numbers and get a finite answer.

Cardinality vs. dimension: There are more points in the unit square than there are in the unit interval.

Nowhere-differentiable functions: A continuous real-valued function on the unit interval must be differentiable at "most" points.

The Cantor Function: A function that is continuous and satisfies f'(x)=0 almost everywhere must be constant.

The Banach-Tarski paradox: If a bounded solid in R^3 is decomposed into finitely many disjoint pieces, and those pieces are rearranged by rigid motions to form a new solid, then the new solid will have the same volume as the original one.

Regarding 5: cf. the comment here: mathoverflow.net/questions/22189/… which gives a strictly increasing function whose derivative is zero almost everywhere. Intuitively such a thing shouldn't exist, but applying the definitions rigorously shows that it does.
–
David Roberts Sep 4 '10 at 4:08

23

Historical examples tend to retroactively attribute stupid errors that were not the original, and still subtle, issue. In 3 and 7 the equivalences are not geometric (see Feynman's deconstruction of Banach-Tarski as "So-and-So's Theorem of Immeasurable Measure"). For #1, Riemannian geometry doesn't address the historical/conceptual issue of non-Euclidean geometry, which was about the logical status of the Parallel Axiom, the categoricity of the axioms, and the lack of a 20th-century mathematical logic framework. Zeno's contention that motion is mysterious remains true today, despite the theory of infinite sums.
–
T.. Sep 4 '10 at 8:15

4

It seems to me that items 3-7 are regarded by most people as "monsters" and as such not really worthy of serious consideration. As for items 1 and 2, I think that not only have most people not heard of them, when they do hear of them, they regard them either as jokes or don't really get the point at all. So it doesn't seem to me that these are convincing arguments for most people. (They are, to be sure, convincing arguments for me.) To some extent, I'm sure this is something that can only be appreciated by some experience. I think for instance that the Pythagorean theorem is [out of space]
–
Carl Offner Apr 28 '11 at 2:58

This question is begging for someone to state the obvious, so here goes.

Take for example the existence and uniqueness of solutions to differential equations. Without these theorems, the mathematical models used in many branches of the physical sciences are incapable of making actual predictions. If the DE potentially has no solutions, or if the model provides infinitely many solutions, your model has no predictive power. So the model isn't really science.

In that regard, the point of proof in mathematics is to give the quantitative physical sciences a firm philosophical foundation.

Moreover, the proofs of existence and uniqueness shed light on the behaviour of solutions, allowing one to make precise predictions about how good various approximations are to actual solutions -- giving a sense for how computationally expensive it is to make reliable predictions.
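To make the uniqueness point concrete, here is a standard textbook example (my addition, not part of the answer): the initial value problem $y' = \sqrt{|y|}$, $y(0) = 0$ has more than one solution, because the right-hand side is not Lipschitz at $0$, so the Picard-Lindelöf uniqueness theorem does not apply:

```python
import math

# y' = sqrt(|y|) with y(0) = 0 admits (at least) two distinct solutions.
def y_zero(x):
    return 0.0                                 # the trivial solution

def y_quad(x):
    return x * x / 4.0 if x >= 0 else 0.0      # another solution: y = x^2/4

def dy_zero(x):
    return 0.0

def dy_quad(x):
    return x / 2.0 if x >= 0 else 0.0

# Both satisfy the ODE at sample points...
for x in [0.0, 0.5, 1.0, 2.0, 3.0]:
    assert abs(dy_zero(x) - math.sqrt(abs(y_zero(x)))) < 1e-12
    assert abs(dy_quad(x) - math.sqrt(abs(y_quad(x)))) < 1e-12

# ...and share the same initial condition, yet differ for x > 0:
assert y_zero(0.0) == y_quad(0.0) == 0.0
assert y_zero(1.0) != y_quad(1.0)
```

A physical model whose governing equation behaves like this cannot say which solution nature will pick; the Lipschitz hypothesis in the uniqueness theorem is exactly what rules such behaviour out.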

I regret to say that most physicists seem to neither know nor care about rigorous proofs of existence and uniqueness theorems. But they seem to have no trouble doing good physics without them.
–
Richard Borcherds Sep 3 '10 at 21:55

4

In physics, having a formula or approximation scheme depending on N parameters (= dim. of phase space) shows existence and uniqueness locally, with global questions of singularities, attractors, topology etc understood by calculation and simulation. For classical ODE this is almost always enough and there are very few cases where careful analysis and error estimates overturned accepted physics ideas. There are more cases where physics heuristics drove the mathematics and some where they changed intuitions that prevailed in the math community.
–
T.. Sep 4 '10 at 7:58

4

@T it sounds like you're assuming existence and uniqueness. What are you approximating? Approximations don't matter if you don't know what you're approximating. Moreover, if you're approximating one of infinitely-many solutions, this gives you no sense for how all the solutions (with certain initial conditions) behave, and limits your ability to predict anything.
–
Ryan Budney Sep 4 '10 at 18:26

4

May I point out that models which do not satisfy existence or uniqueness can be very useful. For an example, consider the Navier-Stokes equations (you get the Clay Prize for proving existence). It is quite possible that there are initial conditions where the solution does not exist. This could happen because the Navier-Stokes equations assume that the fluid is incompressible, and all real fluids are (at least to some very small degree) compressible. Even if existence were to fail to be satisfied, these equations would still have enormous predictive power, and be real science.
–
Peter Shor Sep 5 '10 at 17:04

5

From a physical point of view, uniqueness is a claim about causality. Given our belief in causality, any proposed law of nature that does not obey uniqueness may well be considered physically defective. But perhaps more important than uniqueness is the stronger property of continuous dependence on the data. (The former asserts that the same data will lead to the same solution; the latter asserts that nearby data leads to nearby solutions.) Once one has this, one has some confidence that one's model can be numerically simulated with reasonable accuracy, and also be resistant to noise.
–
Terry Tao Sep 8 '10 at 5:46

[Edited to correct the Galileo story] An old example of a plausible result
that was overthrown by rigor is the 17th-century example of the hanging chain.
Galileo once said (though he later said otherwise), and Girard claimed to have
proved, that the shape was a parabola. But this was disproved by Huygens
(then aged 17) by a more rigorous analysis. Some decades later, the
exact equation for the catenary was found by the Bernoullis,
Leibniz, and Huygens.

In the 20th century, some people thought it plausible
that the shape of the cable of a suspension bridge is
also a catenary. Indeed, I once saw this claim in a very
popular engineering mathematics text. But a rigorous
argument shows (with the sensible simplifying assumption
that the weight of the cable is negligible compared with
the weight of the road) that the shape is in fact a
parabola.
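The rigorous argument alluded to is short enough to sketch (my formulation; here $H$ is the constant horizontal component of the tension, $w$ the roadway's weight per unit horizontal length, and $\mu g$ the chain's weight per unit arc length):

```latex
\[
H\,y'(x) = w\,x
\quad\Longrightarrow\quad
y(x) = \frac{w}{2H}\,x^{2} \quad\text{(a parabola)},
\]
\[
H\,y'(x) = \mu g\,s(x), \qquad s'(x) = \sqrt{1 + y'(x)^{2}}
\quad\Longrightarrow\quad
y(x) = \frac{H}{\mu g}\,\cosh\frac{\mu g\,x}{H} + C \quad\text{(a catenary)}.
\]
```

The only difference between the two problems is whether the load is distributed per unit horizontal length (the road) or per unit arc length (the chain's own weight), which is exactly the simplifying assumption mentioned above.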

The story about Galileo and the hanging chain is a myth: he was well aware that it is approximately but not exactly a parabola, and even commented that the approximation gets better for shorter chains. If you take a very long chain with the ends close together, which Galileo was perfectly capable of doing, it is obviously not a parabola.
–
Richard Borcherds Sep 3 '10 at 21:43

3

It may be a mistranslation: my guess is he meant that a catenary RESEMBLES a parabola. Later on in the same book he makes it clear that he knows they are different.
–
Richard Borcherds Sep 3 '10 at 22:44

2

Years ago, I saw in "Scientific American" a 2-page ad for some calculator. One of the two pages was a photograph of a suspension bridge. Across the top was the equation of a catenary.
–
Andreas Blass Sep 4 '10 at 1:27

9

I was very fascinated by Richard Borcherds's comments and looked at two different translations of Galileo's book (I also found a quote from the original text, but my Italian is not good enough). The hanging line is definitely described as taking the shape of a parabola, but this statement is given in a section describing quick ways of sketching parabolae. Indeed, later in the book, Galileo talks about the shape of a hanging chain being parabolic only approximately: "the Fitting shall be so much more exact, by how much the describ'd Parabola is less curv'd, i.e. more distended".
–
Aleksey Pichugin Sep 4 '10 at 11:15

4

@J.M.: Jungius may have been the first to publish a proof that the catenary is not a parabola (1669), but the proof of Huygens in his letter to Mersenne was earlier (1646).
–
John Stillwell Sep 8 '10 at 15:10

The evidence for both quantum mechanics and for general relativity is overwhelming. However, one can prove that without serious modifications, these two theories are incompatible. Hence the (still incomplete) quest for quantum gravity.

Devil's Advocate: Couldn't one argue that this demonstrates the opposite? Our theories are mathematically incompatible, yet they can compute the outcome of any fundamental physical experiment to a half-dozen digits. Clearly, this shows that mathematical consistency is overrated!
–
David Speyer Sep 6 '10 at 12:06

17

@David, you may be right: quantum mechanics and general relativity are incompatible, and the first time that they come into mathematical conflict the universe will end. This should be around the time when the first black hole evaporates, around $10^{66}$ years from now.
–
Peter Shor Sep 6 '10 at 14:01

3

Indeed nonrigorous mathematical computations and heuristic arguments in physics are spectacularly successful for good experimental predictions and even for mathematics. Yet the accuracy David talked about is only in small fragments of standard physics. Of course, just as we ask what the purpose of rigor is, we can also ask what the purpose of gaining the 7th accurate digit in experimental predictions is. The answer that better predictions, like rigorous proofs, enlighten us is a good partial answer.
–
Gil Kalai Sep 7 '10 at 5:17

I once got a letter from someone who had overwhelming numerical evidence that the sum of the reciprocals of the primes is slightly bigger than 3 (he may have conjectured the limit was π). The sum is in fact infinite, but diverges so slowly (like $\log \log n$) that one gets no hint of this by computation.
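One can see the letter writer's predicament directly. The sketch below (my own code, not part of the answer) sieves the primes up to a limit and compares $\sum_{p \le n} 1/p$ with Mertens' approximation $\log \log n + M$, where $M \approx 0.2615$ is Mertens' constant:

```python
import math

MERTENS = 0.2614972128  # Mertens' constant

def sum_inverse_primes(limit):
    """Sum of 1/p over primes p <= limit, via a sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return sum(1.0 / p for p in range(2, limit + 1) if is_prime[p])

for limit in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    s = sum_inverse_primes(limit)
    approx = math.log(math.log(limit)) + MERTENS
    print(f"n = {limit:>8}: sum = {s:.4f}, log log n + M = {approx:.4f}")

# Even at n = 10**6 the sum is still below 3; by the log log n growth it
# first passes 4 only around n ~ 10**18, far beyond direct computation.
```

So any feasible computation really does suggest a limit a little above 3, exactly as the letter claimed.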

This reminds me of a classical piece of mathematical folklore, a "get rich" scheme (or scam). The ad in a newspaper says: send in 10 dollars and get back an unlimited amount of money in monthly installments. The dupe follows the instructions and receives 1 dollar the first month, 1/2 dollar the second month, 1/3 dollar the third month, ...
–
Victor Protsak Sep 6 '10 at 2:37

21

I remember the first time I learned that the harmonic series is divergent. I was in high school, in my first calculus class; it was in 2001. I was really surprised and couldn't really believe that it could be divergent, so I programmed my TI-83 to compute partial sums of the harmonic series. I let it run for the entire day, checking in on the progress periodically. If I recall correctly, by the end of the day the partial sum remained only in the 20s. Needless to say, I was not convinced of the divergence of the series that day.
–
Kevin H. Lin Sep 6 '10 at 7:09

11

If one wants to carry this to the extreme, any divergent series with the property that the n-th term goes to zero will converge on a calculator as the terms will eventually fall below the underflow value for the calculator, and hence be considered to be zero.
–
Chris Leary Jan 1 '12 at 23:54

1

@KevinLin To be fair, your TI-83 calculator uses floating-point numbers, which cannot carry out that computation for very long. At some point, the number you're adding to the sum will be so small that (after floating-point rounding) the sum will literally remain unchanged.
–
BlueRaja Aug 12 '13 at 18:46
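The floating-point point in the last comment is easy to check directly (a quick sketch of my own):

```python
# IEEE-754 doubles carry about 16 significant decimal digits, so once a
# partial sum is around 20, any term much smaller than ~1e-15 is rounded away:
s = 20.0
term = 1e-18          # roughly 1/n for n around 10**18
assert s + term == s  # the computed "sum" stops changing long before it diverges
print("adding", term, "to", s, "leaves it unchanged")
```

So a naive numerical summation of a slowly divergent series eventually stalls at a finite value, reinforcing the false impression of convergence.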

I think that the question itself is entirely misleading. It tacitly assumes that mathematics can be separated into two parts: mathematical results and their proofs. Mathematics is nothing other than the proofs of mathematical results. Mathematical statements lack any value; they are neither good nor bad. From the mathematical point of view, it is entirely immaterial whether the answer to a mathematical question like "Is there an even integer greater than two that is not the sum of two primes?" is yes or no. Mathematicians are simply not interested in the right answer. What they would like to do is to solve the problem. That is the main difference between natural sciences or engineering on the one hand, and mathematics on the other. A physicist would like to know the right answer to his question and is not interested in the way it is obtained. An engineer needs a tool that he can use in the course of his work. He is not interested in the way a useful device works. Mathematics is nothing other than a specific set consisting of different solutions to different problems and, of course, some unsolved problems waiting to be solved. Proofs are not merely important for mathematics; they constitute the body of knowledge we call mathematics.

This is a very Bourbakist view. Much of interesting mathematics does not conform to it, because ideas and open problems are just as important in mathematics as rigorous proofs (even leaving aside the distinction between theory and proofs that is not appreciated by non-mathematicians).
–
Victor Protsak Sep 5 '10 at 4:27

4

Victor, Gyorgy's point of view does not conflict with the importance of ideas and open problems. Still, for a large part of mathematics, proofs are the essential part. The relation between a mathematical result and its proof can often be compared to the relation between the title of a picture or a poem or a musical piece and its content.
–
Gil Kalai Sep 5 '10 at 5:20

6

Gil, gowers' question addressed the distinction between an intuitive proof and a rigorous proof and my comment was written with that in mind. Using your artistic analogies, let me say that a piece of music cannot be reduced to the set of notes, nor a poem to the set of words, that comprise it (the analogy obviously breaks down for "modern art", such as atonal music and dadaist poetry).
–
Victor Protsak Sep 5 '10 at 7:28

The way current computer algebra systems (that I know of) are designed is a compromise between ease of use and mathematical rigor. Although in practice, most of the answers given by CASes are correct, the lack of rigor is still a problem because the user cannot fully trust the results (even under the assumption that the software is bug-free). Now, it might sound like just another case of "99% certainty is enough," but in practice it means having to verify the results independently afterwards, which could be considered unnecessary extra work.

The root of the problem seems to be that a CAS manipulates expressions when it should output theorems instead. In many cases, the expressions simply don't have any rigorous interpretation. For example, variables are usually not introduced explicitly and thus not properly quantified; in the result of an indefinite integral they might even appear out of nowhere. Dealing with undefinedness is another problem.
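To illustrate the point about unquantified rules, here is a toy sketch in plain Python (not the behavior of any particular CAS): a textbook-style integration rule, applied at the level of expressions with no quantifier over its parameter, silently excludes the one case it cannot handle.

```python
# A toy rewrite rule in the style of a CAS table: "the antiderivative of
# x**n is x**(n+1)/(n+1)".  As an expression-level rule with no
# quantification over n, it silently assumes n != -1.
def power_rule_antiderivative(n, x):
    return x ** (n + 1) / (n + 1)

# Fine for n = 2: d/dx (x**3 / 3) really is x**2 ...
x, h = 1.7, 1e-6
deriv = (power_rule_antiderivative(2, x + h)
         - power_rule_antiderivative(2, x - h)) / (2 * h)
assert abs(deriv - x ** 2) < 1e-6

# ... but at n = -1 the rule breaks down (the true antiderivative is log x):
try:
    power_rule_antiderivative(-1, x)
    rule_defined_at_minus_one = True
except ZeroDivisionError:
    rule_defined_at_minus_one = False
assert not rule_defined_at_minus_one
```

A theorem-producing system would have to attach the side condition $n \neq -1$ to the output; an expression-manipulating one has nowhere to put it.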

All of this is inherent in the architecture of computer algebra systems, so it cannot be fixed properly without switching to a different design. The extra 1% of certainty may indeed not justify such a change. But if rigor had been emphasized more from the start, maybe we would have trustworthy CASes now.

I think this line of thought can be generalized. (As a non-mathematician) I can't help but wonder how mathematics would have progressed without the widespread introduction of rigor in the 19th century. I can't really imagine what things would be like if we still didn't have a proper definition of what a function is. So maybe rigor is indeed not strictly necessary in particular cases, but it has shaped mathematical practice in general.

I tend to not use CAS packages that aren't open-source. Knowing the precise details of the implementation of the algorithm allows you to understand its limitations, when it will and will not function. There is still uncertainty in this of course -- did I really understand the algorithm? Is the computer hardware faulty? Does the compiler not compile the code correctly? And so on. Open-source also has the advantage that the algorithms are re-usable and not "canned".
–
Ryan BudneySep 4 '10 at 20:40

Here is an example: 19th-century geometers extended Euler's formula V-E+F=2 to higher dimensions: the alternating sum of the numbers of i-faces of a d-dimensional polytope is 2 in odd dimensions and 0 in even dimensions. The 19th-century proofs were incomplete, and the first rigorous proof came with Poincaré and used homology groups. Here, what enabled a rigorous proof was arguably even more important than the theorem itself.
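For simplices the statement can at least be spot-checked by machine, since a d-simplex has $\binom{d+1}{i+1}$ faces of dimension i; a minimal sketch:

```python
from math import comb

# A d-simplex has comb(d+1, i+1) faces of dimension i (choose i+1 of its
# d+1 vertices).  The generalized Euler formula says the alternating sum
# of the proper face numbers is 2 for odd d and 0 for even d.
def alternating_face_sum(d):
    return sum((-1) ** i * comb(d + 1, i + 1) for i in range(d))

for d in range(1, 11):
    assert alternating_face_sum(d) == (2 if d % 2 == 1 else 0)
```

Of course, such spot checks are precisely the kind of evidence that falls short of a proof; the point of Poincaré's argument is that it covers all polytopes at once.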

This is not so much an example of increased care in an argument as the development of critical technology needed to prove a result. The need for such technology is not always clear to mathematicians when they begin to formulate such arguments. The important question here is whether or not the general form of Euler's formula could have been proven WITHOUT it. In dimensions less than or equal to 3, there are many alternate proofs using purely combinatorial arguments. I'm not sure it can be proven without homology in higher-dimensional spaces.
–
Andrew LSep 4 '10 at 23:39

It does happen that techniques developed in order to give a proof are even more important. In this case, as it turned out a few decades after Poincaré, the high-dimensional Euler theorem can be proved without algebraic topology, and in the 70s even the specific gaps in the 19th-century proofs were fixed, but the new technology allows for extensions of the theorem that cannot be proved by elementary methods, and it also shed light on the original Euler theorem: that the Euler characteristic is a topological invariant.
–
Gil KalaiSep 5 '10 at 5:10

Based on the recent update to the question, Fermat's Last Theorem seems like the top example of a proof being far more valuable than the truth of the statement. Personally, it is rare for me to use the nonexistence of a rational point on a Fermat curve, but, for instance, it is quite common for me to use class numbers.

There's a lot of discussion over not only the role of rigor in mathematics, but whether or not this is a function of time and point in history. Clearly, what was a rigorous argument to Euler would not pass muster today in a number of cases.

Passing from generalities to specific cases, I think the prototype of statements which were almost universally accepted as true without proof was the early-19th-century notion that globally continuous real-valued functions had to have at most a finite number of nondifferentiable points. Intuitively, it's easy to see why, in a world ruled by handwaving and graph drawing, this would be seen as true. Which is why the counterexamples by Bolzano and Weierstrass were so conceptually devastating to this way of approaching mathematics.

Edit: I see Jack Lee already mentioned this example "below" in his excellent list of such cases.
But to be honest, I don't think his first example is really about rigor so much as a related but more profound change in our understanding of how mathematical systems are created. The main reason no one thought non-Euclidean geometries made any sense was that most scientists believed Euclidean geometry was an empirical statement about the fundamental geometry of the universe. Studies of mechanics supported this until the early 20th century; as long as one stays in the "common sense" realm of the world our five senses perceive and does not approach relativistic velocities or distances, this is more or less true. Eddington's eclipse experiments finally vindicated not only Einstein's conceptions but, indirectly, non-Euclidean geometry, which until that point was greeted with skepticism outside of pure mathematics.

Here's an example:
In the Mathscinet review of "Y-systems and generalized associahedra", by Sergey Fomin and Andrei Zelevinsky, you find:

Let $I$ be an $n$-element set and $A=(a_{ij})$, $i,j\in I$, an indecomposable Cartan matrix of finite type. Let $\Phi$ be the corresponding root system (of rank $n$), and $h$ the Coxeter number. Consider a family $(Y_i(t))_{i\in I,\,t\in\Bbb{Z}}$ of commuting variables satisfying the recurrence relations $$Y_i(t+1)Y_i(t-1)=\prod_{j\ne i}(Y_j(t)+1)^{-a_{ij}}.$$ Zamolodchikov's conjecture states that the family is periodic with period $2(h+2)$, i.e., $Y_i(t+2(h+2))=Y_i(t)$ for all $i$ and $t$.

That conjecture claims that an explicitly described algebraic map is periodic.
The conjecture can be checked numerically by plugging in real numbers with 30 digits,
and iterating the map the appropriate number of times. If you see that time after time, the numbers you get back agree with the initial values with a 29 digit accuracy, then you start to be pretty confident that the conjecture is true.
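The check described above is easy to run in the smallest interesting case, type $A_2$ (so $a_{12}=a_{21}=-1$, Coxeter number $h=3$, conjectured period $2(h+2)=10$); a sketch in plain Python, assuming only the recurrence as stated:

```python
# Type A_2 Y-system: Y_i(t+1) Y_i(t-1) = (Y_j(t) + 1) for j != i,
# i.e. Y_1(t+1) = (Y_2(t)+1)/Y_1(t-1) and symmetrically for Y_2.
def step(prev, cur):
    return ((cur[1] + 1.0) / prev[0], (cur[0] + 1.0) / prev[1])

prev, cur = (2.0, 3.0), (5.0, 7.0)   # arbitrary positive initial values
orbit = [prev, cur]
for _ in range(12):
    prev, cur = cur, step(prev, cur)
    orbit.append(cur)

# Zamolodchikov periodicity: Y(t + 10) == Y(t), up to floating-point error
assert all(abs(orbit[t][i] - orbit[t + 10][i]) < 1e-9
           for t in range(4) for i in range(2))
```

With these initial values one can also watch the half-period symmetry: the values at $t+5$ are those at $t$ with the two indices swapped.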

For the $E_8$ case, the proof presented in the above paper involves a massive amount of symbolic computations done by computer.
Is it really much better than the numerical evidence?

Conclusion: I think that we only like proofs when we learn something from them.
It's not the property of "being a proof" that is attractive to mathematicians.

Gian-Carlo Rota would agree, for he said (in "The Phenomenology of Mathematical Beauty") that we most value a proof that enlightens us.
–
Joseph O'RourkeSep 5 '10 at 3:00

2

That's a very interesting example, even if it is of the opposite of what I asked. My instinct is to think it's good that there's a proof, but I'm not sure how to justify that. And obviously I'd prefer a conceptual argument.
–
gowersSep 5 '10 at 15:26

14

And now, of course, we finally understand exactly what Gian-Carlo meant: A proof enlightens us if: 1) it is the first proof, 2) it is accepted, and 3) it has at least 10 up votes!
–
Gil KalaiSep 6 '10 at 21:05

I would like to preface this long answer by a few philosophical remarks. As noted in the original posting, proofs play multiple roles in mathematics: for example, they assure that certain results are correct and give insight into the problem.

A related aspect is that in the course of proving an intuitively obvious statement, it is often necessary to create a theoretical framework, i.e. definitions that formalize the situation and new tools that address the question; this may lead to vast generalizations, whether in the course of the proof itself or in the subsequent development of the subject. Often it is the proof, not the statement itself, that generalizes, hence it becomes valuable to know multiple proofs of the same theorem based on different ideas. The greatest insight is gained from proofs that subtly modify an original statement that turned out to be wrong or incomplete. Sometimes a whole subject may spring forth from the proof of a key result, which is especially true for proofs of impossibility statements.

Most examples below, chosen among different fields and featuring general interest results, illustrate this thesis.

Differential geometry

a. It had been known since ancient times that it was impossible to create a perfect (i.e. undistorted) map of the Earth. The first proof was given by Gauss and relies on the notion of intrinsic curvature introduced by Gauss especially for this purpose. Although Gauss's proof of the Theorema Egregium was complicated, the tools he used became standard in the differential geometry of surfaces.

b. The isoperimetric property of the circle has been known in some form for over two millennia. Part of the motivation for Euler's and Lagrange's work on variational calculus came from the isoperimetric problem. Jakob Steiner devised several different synthetic proofs that contributed technical tools (Steiner symmetrization, the role of convexity), even though they didn't settle the question because they relied on the existence of the absolutely minimizing shape. Steiner's assumption led Weierstrass to consider the general question of existence of solutions to variational problems (later taken up by Hilbert, as mentioned below) and to give the first rigorous proof. Further proofs gained new insight into the isoperimetric problem and its generalizations: for example, Hurwitz's two proofs using Fourier series exploited abelian symmetries of closed curves; the proof by Santaló using integral geometry established the more general Bonnesen inequality; E. Schmidt's 1939 proof works in $n$ dimensions. Full solution of related lattice packing problems led to such important techniques as Dirichlet domains and Voronoi cells and the geometry of numbers.

Algebra

a. For more than two and a half centuries since Cardano's Ars Magna, no one was able to devise a formula expressing the roots of a general quintic equation in radicals. The Abel–Ruffini theorem and Galois theory not only proved the impossibility of such a formula and provided an explanation for the success and failure of earlier methods (cf Lagrange resolvents and casus irreducibilis), but, more significantly, put the notion of group on the mathematical map.

b. Systems of linear equations were considered already by Leibniz. Cramer's rule gave the formula for a solution in the $n\times n$ case, and Gauss developed a method for obtaining the solutions, which yields the least-squares solution in the overdetermined case. But none of this work yielded a criterion for the existence of a solution. Euler, Laplace, Cauchy, and Jacobi all considered the problem of diagonalization of quadratic forms (the principal axis theorem). However, the work prior to 1850 was incomplete because it required genericity assumptions (in particular, the arguments of Jacobi et al. didn't handle singular matrices or forms). Proofs that encompass all linear systems, matrices and bilinear/quadratic forms were devised by Sylvester, Kronecker, Frobenius, Weierstrass, Jordan, and Capelli as part of the program of classifying matrices and bilinear forms up to equivalence. Thus we got the notion of the rank of a matrix, the minimal polynomial, the Jordan normal form, and the theory of elementary divisors, all of which became cornerstones of linear algebra.
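The solvability criterion that emerged, rank of the coefficient matrix equal to rank of the augmented matrix, holds with no genericity assumption at all; a toy sketch in plain Python (the `rank` helper below is an illustrative Gaussian elimination, with a crude tolerance, not production numerics):

```python
# Rank criterion (Kronecker-Capelli): Ax = b is solvable iff
# rank(A) == rank([A | b]), singular matrices included.
def rank(rows):
    m = [row[:] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-12), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > 1e-12:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1.0, 2.0], [2.0, 4.0]]                        # singular matrix
consistent = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]     # b in the column space
inconsistent = [[1.0, 2.0, 3.0], [2.0, 4.0, 7.0]]   # b not in the column space

assert rank(A) == rank(consistent) == 1     # solvable
assert rank(inconsistent) == 2 > rank(A)    # unsolvable
```

Cramer's rule is silent on both of these systems (the determinant vanishes); the rank criterion decides them instantly.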

Topology

a. Attempts to rigorously prove the Euler formula $V-E+F=2$ led to the discovery of non-orientable surfaces by Möbius and Listing.

b. Brouwer's proof of the Jordan curve theorem and of its generalization to higher dimensions was a major development in algebraic topology. Although the theorem is intuitively obvious, it is also very delicate, because various plausible sounding related statements are actually wrong, as demonstrated by the Lakes of Wada and the Alexander horned sphere.

Analysis

The work on existence, uniqueness, and stability of solutions of ordinary differential equations and the well-posedness of initial and boundary value problems for partial differential equations gave rise to tremendous insights into theoretical, numerical, and applied aspects. Instead of imagining a single transition from 99% ("obvious") to 100% ("rigorous") confidence, it is more helpful to think of a series of progressive sharpenings of statements that become natural or plausible after the last round of work.

a. Picard's proof of the existence and uniqueness theorem for a first order ODE with Lipschitz right hand side, Peano's proof of the existence for continuous right hand side (uniqueness may fail), and Lyapunov's proof of stability introduced key methods and technical assumptions (contraction mapping principle, compactness in function spaces, Lipschitz condition, Lyapunov functions and characteristic exponents).
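Picard's method of successive approximations is concrete enough to run. A minimal sketch for $y'=y$, $y(0)=1$, representing each iterate as a list of polynomial coefficients, shows the iterates converging to the partial sums of $e^x$:

```python
import math

# Picard iteration for y' = y, y(0) = 1: y_{n+1}(x) = 1 + integral_0^x y_n(t) dt.
# A polynomial is a coefficient list: p[k] is the coefficient of x**k.
def picard_step(p):
    integral = [0.0] + [c / (k + 1) for k, c in enumerate(p)]
    integral[0] = 1.0   # the constant term comes from the initial condition
    return integral

y = [1.0]
for _ in range(15):
    y = picard_step(y)

value_at_1 = sum(y)   # evaluate the polynomial at x = 1
print(value_at_1)     # converges to e = 2.71828...
assert abs(value_at_1 - math.e) < 1e-10
```

After $n$ steps the iterate is exactly $\sum_{k=0}^{n} x^k/k!$, which is the contraction mapping principle made visible.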

b. Hilbert's proof of the Dirichlet principle for elliptic boundary value problems and his work on the eigenvalue problems and integral equations form the foundation for linear functional analysis.

c. The Cauchy problem for hyperbolic linear partial differential equations was investigated by a whole constellation of mathematicians, including Cauchy, Kowalevski, Hadamard, Petrovsky, L.Schwartz, Leray, Malgrange, Sobolev, Hörmander. The "easy" case of analytic coefficients is addressed by the Cauchy–Kowalevski theorem. The concepts and methods developed in the course of the proof in more general cases, such as the characteristic variety, well-posed problem, weak solution, Petrovsky lacuna, Sobolev space, hypoelliptic operator, pseudodifferential operator, span a large part of the theory of partial differential equations.

Dynamical systems

Universality for one-parameter families of unimodal continuous self-maps of an interval was experimentally discovered by Feigenbaum and, independently, by Coullet and Tresser in the late 1970s. It states that the ratio between the lengths of intervals in the parameter space between successive period-doubling bifurcations tends to a limiting value $\delta\approx 4.669201$ that is independent of the family. This could be explained by the existence of a nonlinear renormalization operator $\mathcal{R}$ in the space of all maps with a unique fixed point $g$ and the property that all but one of the eigenvalues of its linearization at $g$ belong to the open unit disk, the exceptional eigenvalue being $\delta$ and corresponding to the period-doubling transformation. Later, computer-assisted proofs of this assertion were given, so while Feigenbaum universality had initially appeared mysterious, by the late 1980s it had moved into the "99% true" category.
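The experimental side of the discovery is easy to reproduce. A sketch for the logistic family $f_r(x)=rx(1-x)$: locate the superstable parameters (where the critical point $1/2$ is periodic with period $2^n$) by Newton's method, and watch the ratio of successive gaps settle near $\delta$. The known value 4.669 appears below only to seed each root search, not as an assumed answer.

```python
# f_r iterated 2**n times from the critical point; superstable parameters
# are the roots of g(r, n) = 0.
def g(r, n):
    x = 0.5
    for _ in range(2 ** n):
        x = r * x * (1.0 - x)
    return x - 0.5

def superstable(r, n, steps=40, h=1e-8):
    # Newton's method with a central finite-difference derivative
    for _ in range(steps):
        d = (g(r + h, n) - g(r - h, n)) / (2 * h)
        r -= g(r, n) / d
    return r

s = [2.0, 1.0 + 5 ** 0.5]   # exact superstable r for periods 1 and 2
delta = 4.669               # seed for extrapolating the next guess
for n in range(2, 8):
    s.append(superstable(s[-1] + (s[-1] - s[-2]) / delta, n))
    delta = (s[-2] - s[-3]) / (s[-1] - s[-2])

print(delta)   # approaches 4.6692..., independently of the family
```

This is exactly the kind of evidence that made universality "99% true" long before the renormalization picture was proved.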

The full proof of universality for quadratic-like maps by Lyubich (MR) followed this strategy, but it also required very elaborate ideas and techniques from complex dynamics due to a number of people (Douady–Hubbard, Sullivan, McMullen) and yielded hitherto unknown information about the combinatorics of non-chaotic quadratic maps of the interval and the local structure of the Mandelbrot set.

Number theory

Agrawal, Kayal, and Saxena proved that PRIMES is in P, i.e. primality testing can be done deterministically in polynomial time. While the result had been widely expected, their work was striking in at least two respects: it used very elementary tools, such as variations of Fermat's little theorem, and it was carried out by a computer science professor and two undergraduate students. The sociological effect of the proof may have been even greater than its numerous consequences for computational number theory.

I meant the inspirational effect due to (a) elementary tools used; and (b) the youth of 2/3 of the authors.
–
Victor ProtsakSep 7 '10 at 8:09

30

It is indeed great and inspiring to see very young people cracking down famous problems. Recently, I find it no less inspiring to see old people cracking down famous problems.
–
Gil KalaiSep 7 '10 at 8:57

Circle division by chords, http://mathworld.wolfram.com/CircleDivisionbyChords.html, leads to a sequence whose first terms are 1, 2, 4, 8, 16, 31. It's simple and effective to draw the first five cases on a blackboard, count the regions, and ask the students what's the next number in the sequence.
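The true count, which follows from Euler's formula applied to the chord arrangement, is a sum of binomial coefficients, and a one-liner makes the breakdown at the sixth term visible:

```python
from math import comb

# n points on a circle in general position, all chords drawn.
# regions(n) = 1 + C(n,2) + C(n,4): one starting region, plus one per
# chord, plus one per interior crossing of two chords.
def regions(n):
    return 1 + comb(n, 2) + comb(n, 4)

print([regions(n) for n in range(1, 8)])
# [1, 2, 4, 8, 16, 31, 57] -- the doubling pattern breaks at n = 6
```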

I like that example and have used it myself. However, the conjecture that the number of regions doubles each time has nothing beyond a very small amount of numerical evidence to support it, and the best way of showing that it is false is, in my view, not to count 31 regions but to point out that the number of crossings of lines is at most n^4, from which it follows easily that the number of regions grows at most quartically.
–
gowersSep 5 '10 at 15:23

The fundamental lemma is an example of a statement that most believed and on whose truth several results depend. According to Wikipedia, Professor Langlands has said

... it is not the fundamental lemma as such that is critical for the analytic theory of automorphic forms and for the arithmetic of Shimura varieties; it is the stabilized (or stable) trace formula, the reduction of the trace formula itself to the stable trace formula for a group and its endoscopic groups, and the stabilization of the Grothendieck–Lefschetz formula. None of these are possible without the fundamental lemma and its absence rendered progress almost impossible for more than twenty years.

and Michael Harris has also commented that it was a "bottleneck limiting progress on a host of arithmetic questions."

The fundamental lemma is not even 50% obvious by any stretch of the imagination. Like most of the Langlands program, it is not a specific result that admits a precise, uniform statement; rather, it is a guiding principle that needs to be fine-tuned in order to be compatible with other things that we, following Langlands, would like to believe in; then, and only then, does it become meaningful to talk about proving it.
–
Victor ProtsakSep 6 '10 at 21:35

1

Thank you, Victor. I'm not proposing that the Fundamental Lemma is obvious, but it seems that it was accepted as likely to be true, because others based new work on it. PC below gives the example of Skinner and Urban, and Peter Sarnak says here time.com/time/specials/packages/article/…, that "It's as if people were working on the far side of the river waiting for someone to throw this bridge across," ... "And now all of sudden everyone's work on the other side of the river has been proven."
–
Anthony PulidoSep 6 '10 at 22:45

2

I was under the impression that the fundamental lemma, at least, is a set of statements clear enough to be amenable to proof attempts. I think we have on page 3 here arxiv.org/abs/math/0404454 and Theorems 1 and 2 here arxiv.org/abs/0801.0446 precise statements for the various fundamental lemmas... It's possible I'm misunderstanding you.
–
Anthony PulidoSep 6 '10 at 22:55

4

The fundamental lemma had a precise formulation, due to Langlands and Shelstad, in the 1980s (following earlier special cases). It is a collection of infinitely many equations, each side involving an arbitrarily large number of terms (i.e. by choosing an appropriate case of the FL, you can make the number of terms as large as you like). It was universally believed to be true because otherwise the theory of the trace formula (some of which was proved, but some of which remained conjectural), as developed by Langlands and others, would be internally inconsistent, something which no-one ...
–
EmertonSep 7 '10 at 4:05

3

... believed could be true. This a typical phenomenon in the Langlands program, I would say: certain very general principles, which one cannot really doubt (at least at this point, when the evidence for Langlands functoriality and reciprocity seems overwhelming), upon further examination, lead to exceedingly precise technical statements which in isolation can seem very hard to understand, and for which there is no obvious underlying mechanism explaining why they are true. But one could note that class field theory (which one knows to be true!) already has this quality.
–
EmertonSep 7 '10 at 4:13

As I understand from listening to a talk by Emmanuel Candes - please correct me if I get anything wrong - the recent advances in compressed sensing began with an empirical observation that a certain image reconstruction algorithm seemed to perfectly reconstruct some classes of corrupted images. Candes, Romberg, and Tao collaborated to prove this as a mathematical theorem. Their proof captured the basic insight that explained the good performance of the algorithm: $l_1$ minimization finds a sparse solution to a system of equations for many classes of matrices. It was then realized this insight is portable to other problems and analogous tools could work in many other settings where sparsity is an issue (e.g., computational genetics).

If Candes, Romberg, and Tao had not published their proof, and if only the empirical observation that a certain image reconstruction works well was published, it is possible (likely?) that this insight would never have penetrated outside the image processing community.
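A toy, one-equation version of the underlying phenomenon (nothing like the restricted-isometry setting of the actual theorems) can be sketched as follows: for an underdetermined system, the minimum-$\ell_2$ solution spreads over all coordinates, while the minimum-$\ell_1$ solution is sparse.

```python
# One equation, three unknowns: a . x = b.
a, b = (1.0, 2.0, 4.0), 4.0

# Minimum-l2 (least-norm) solution x = a*b/||a||^2: every entry nonzero.
norm_sq = sum(ai * ai for ai in a)
x_l2 = tuple(ai * b / norm_sq for ai in a)
assert all(abs(v) > 1e-9 for v in x_l2)

# Minimum-l1 solution: |b| = |sum a_i x_i| <= max|a_i| * sum|x_i|, so every
# solution has l1-norm >= |b| / max|a_i|, attained by a 1-sparse vector
# supported where |a_i| is largest.
i = max(range(3), key=lambda j: abs(a[j]))
x_l1 = tuple(b / a[i] if j == i else 0.0 for j in range(3))

assert sum(abs(v) for v in x_l1) <= sum(abs(v) for v in x_l2)
assert sum(1 for v in x_l1 if v != 0.0) == 1   # sparse, as l1 promises
```

The content of the compressed sensing theorems is that this sparsity-seeking behavior of $\ell_1$ persists, provably, for large random systems.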

I tend to think that the need for absolute certainty is related to the arborescent structure of mathematics. The mathematics of today rests upon layers of more ancient theories, and after piling up 50 layers of concepts, if you are only sure of the previous layers with a confidence of 99%, a disaster is bound to happen and a beautiful branch of the tree will disappear with all the mathematicians living on it. This is rather unique among the natural sciences, with the exception of extremely sophisticated computer programs; but in mathematics, you will have to fix the equivalent of the Y2K bug by yourself.

Of course, there are people who are willing to take the risk to see what they have achieved collapse in front of their eyes by working under the assumption that an unproven, but highly plausible, result is true (like Riemann hypothesis or Leopoldt conjecture). In some cases this is actually a good way to be on top of the competition (think of the work of Skinner and Urban on the main conjecture for elliptic curves which rests upon the existence of Galois representations that were not proven to exist before the completion of the proof of the Fundamental Lemma).

Claim

The trefoil knot is knotted.

Discussion

One could scarcely find a reasonable person who would doubt the veracity of the above claim. None of the 19th-century knot tabulators (Tait, Little, and Kirkman) could rigorously prove it, nor could anybody before them. It's not clear that anyone was bothered by this.

Yet mathematics requires proof, and proof was to come. In 1908 Tietze proved the beknottedness of the trefoil using Poincaré's new concept of a fundamental group. Generators and relations for fundamental groups of knot complements could be found using a procedure of Wirtinger, and the fundamental group of the trefoil complement could be shown to be non-commutative by representing it in $SL_2(\mathbb{Z})$, while the fundamental group of the unknot complement is $\mathbb{Z}$. In general, to distinguish even fairly simple knots, whose difference was blatantly obvious to everybody, it was necessary to distinguish non-abelian fundamental groups given in terms of Wirtinger presentations, via generators and relations. This is difficult, and the Reidemeister-Schreier method was developed to tackle this difficulty. Out of these investigations grew combinatorial group theory, not to mention classical knot theory.
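The final step of this argument is finite enough to check by machine: the trefoil group has the presentation $\langle a,b \mid aba=bab\rangle$, and sending $a,b$ to the standard unipotent matrices gives a representation into $SL_2(\mathbb{Z})$ whose image is non-abelian, so the group cannot be $\mathbb{Z}$.

```python
# Trefoil group <a, b | aba = bab> represented in SL_2(Z) by the
# standard unipotent generators.
def mul(m, n):
    return ((m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]),
            (m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]))

A = ((1, 1), (0, 1))
B = ((1, 0), (-1, 1))

assert mul(mul(A, B), A) == mul(mul(B, A), B)   # the relation aba = bab holds
assert mul(A, B) != mul(B, A)                   # the image is non-abelian
```

An abelian quotient can never detect this, which is why the fundamental group, and not any simpler invariant, was the key.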

All because beknottedness of a trefoil requires proof.

Claim

Kishino's virtual knot is knotted.

Discussion

We are now in the 21st century, and virtual knot theory is all the rage. One could scarcely find a reasonable person who would argue that Kishino's knot is trivial. But the trefoil's lesson has been learnt well, and it was clear to everyone right away that proving beknottedness of Kishino's knot was to be a most fruitful endeavour. Indeed, that is how things have turned out, and proofs that Kishino's knot is knotted have led to substantial progress in understanding quandles and generalized bracket polynomials.

Summary

Above we have claims which were obvious to everybody, and were indeed correct, but whose proofs directly led to greater understanding and to palpable mathematical progress.

I very much like your answer although I'd put what I consider to be a different spin on it. To me the key point of interest in showing the trefoil is non-trivial is that it shows that one can talk in a quantitative, analytical way about a concept that at first glance seems to have nothing to do with standard conceptions of what mathematics is about. A trefoil has no obvious quantitative, numerical thing associated to it. In contrast, the statement $\pi > 3$ is very much steeped in traditional mathematical language so it's rather unsurprising that mathematics can say things about it.
–
Ryan BudneySep 6 '10 at 20:18

7

Let me make my point more hyperbolically. That one can rigorously show that a trefoil can't be untangled is one of the most effective mechanisms one can use to communicate to people that modern mathematics deals with sophisticated and substantial ideas. Mathematics as a subject wasn't solved with the development of the Excel spreadsheet. :)
–
Ryan BudneySep 7 '10 at 0:28

I tend to think that mathematics or---better---the activity we mathematicians do, is not so much defined by (let me use what's probably nowadays rather old fashioned language) its material object of study (whatever it may be: it is surely very difficult to try to pin down exactly what it is that we are talking about...) but by its formal object, by the way we know what we know. And, of course, proofs are the way we know what we know.

Now: rigour is important in that it allows us to tell apart what we can prove from what we cannot, what we know in the way that we want to know it.

(By the way, I don't think that it is fair to say that, for example, the Italians were not rigorous: they were simply wrong)

I once heard a less accurate but punchier version of this in some idiosyncratic history of mathematics lectures: "in mathematics, we don't know what it is we are doing, but we know how to do it".
–
Yemon ChoiSep 8 '10 at 5:08

3

Yemon, there is a famous definition of mathematics, perhaps, from "What is mathematics?" by Courant and Robbins: mathematics is what mathematicians do.
–
Victor ProtsakSep 8 '10 at 8:53

But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth.

How about Stokes' Theorem?

The two-dimensional version involving line and surface integrals is "proved" in most physics textbooks using a neat little picture dividing up the surface into little rectangles and shrinking them to zero.

Similarly, the Divergence Theorem, relating volume and surface integrals, is demonstrated with very intuitive ideas about liquid flowing out of tiny cubes.

But to prove these rigorously requires developing the theory of differential forms, whose consequences go way beyond the original theorems.
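For the two-dimensional (Green's theorem) case, the equality that the tiny-rectangles picture asserts is easy to verify numerically for a specific region and field; a sketch on the unit square with $P=-y^2$, $Q=x^2$, so that $\partial Q/\partial x - \partial P/\partial y = 2x+2y$:

```python
# Midpoint-rule check of Green's theorem on the unit square,
# boundary oriented counterclockwise.
def P(x, y): return -y * y
def Q(x, y): return x * x

M = 400
h = 1.0 / M
mid = [(k + 0.5) * h for k in range(M)]

# Double integral of (Q_x - P_y) = 2x + 2y over the square
double = sum(2 * x + 2 * y for x in mid for y in mid) * h * h

# Line integral of P dx + Q dy over the four sides
line = (sum(P(t, 0.0) for t in mid) * h      # bottom: x increases
        + sum(Q(1.0, t) for t in mid) * h    # right:  y increases
        - sum(P(t, 1.0) for t in mid) * h    # top:    x decreases
        - sum(Q(0.0, t) for t in mid) * h)   # left:   y decreases

assert abs(double - 2.0) < 1e-9 and abs(line - 2.0) < 1e-9
```

The numbers agree, as the physicists' picture predicts; the differential-forms proof is what turns this one check into a statement about all dimensions and all manifolds at once.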

In fact, the original motivation for the Bourbaki project was the scandalous state of affairs that no rigorous proof of the Stokes theorem could be located in the (French?) literature. The rest is history...
–
Victor ProtsakSep 8 '10 at 8:44

2

I don't see how this example demonstrates the importance of rigour. Most engineers or physicists (e.g. those doing electrodynamics) can get by perfectly well with the "tiny-cubes" proof of Stokes' theorem, and nothing ever goes wrong from using this intuitive approach.
–
Michal KotowskiSep 8 '10 at 22:35

3

@MichalKotowski: that is not inconsistent with the notion that mathematics has benefited hugely from a rigorous theory of differential forms.
–
gowersSep 10 '10 at 17:19

One can rigorously prove that pyramid schemes cannot run forever, and that no betting system with finite monetary reserves can guarantee a profit from a martingale or submartingale.

But there are countless examples of people who have suffered monetary loss due to their lack of awareness of the rigorous nature of these non-existence proofs. Here is a case in which having a non-rigorous 99% plausibility argument is not enough, because one can always rationalise that "this time is different", or that one has some special secret strategy that nobody else thought of before.
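For the classic doubling strategy the impossibility is not just qualitative; it is an exact, two-line computation. A sketch with a fair coin and a bankroll covering exactly $k$ doubled stakes:

```python
# Doubling ("martingale") strategy on a fair coin with stakes
# 1, 2, 4, ..., 2**(k-1): with probability 1 - 2**-k the first win nets +1;
# with probability 2**-k every bet loses and the whole stake 2**k - 1 is gone.
def expected_profit(k):
    p_ruin = 0.5 ** k
    return (1 - p_ruin) * 1 - p_ruin * (2 ** k - 1)

# The near-certain small win is exactly cancelled by the rare huge loss.
assert all(abs(expected_profit(k)) < 1e-12 for k in range(1, 30))
```

No choice of $k$ helps: enlarging the bankroll only trades a smaller ruin probability for a proportionally larger ruin, which is the optional stopping theorem in miniature.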

In a similar spirit: a rigorous demonstration of a barrier (e.g. one of the three known barriers to proving P != NP) can prevent a lot of time being wasted on pursuing a fruitless approach to a difficult problem. (In contrast, a non-rigorous plausibility argument that an approach is "probably futile" is significantly less effective at preventing an intrepid mathematician or amateur from trying his or her luck, especially if they have excessive confidence in their own abilities.)

[Admittedly, P!=NP is not a great example to use here as motivation, because this is itself a problem whose goal is to obtain a rigorous proof...]

I doubt if the main problem is that people are not aware of the rigorous nature of non-existence proofs. First, for most people the meaning of a rigorous mathematical proof makes no sense and has no importance. The empirical fact that pyramid schemes have always ended in a collapse in the past should be more convincing. But even people who realize that the pyramid scheme cannot run forever (from past experience or from mathematics) may still think that they can make money by entering early enough. (The concept "run forever" is an abstraction whose relevance should also be explained.)
–
Gil KalaiSep 8 '10 at 7:01

2

@Gil: this is where a proof can give more than what you set out to prove. For the pyramid scheme, not only can we prove it cannot run forever, but we can also extract quantitative evidence to show that the odds of you getting in early enough are close to zero. Of course, this will not convince the numerically illiterate, but I'm convinced there is a non-negligible portion of the population that you could reach in this way.
–
Thierry ZellMar 19 '11 at 5:20

In response to the request for an example of a statement that was widely but erroneously believed to be true: does Gauss's conjecture that $\pi(n) < \operatorname{li}(n)$ for every integer $n \geq 2$, disproved by Littlewood in 1914, qualify?
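The numerical evidence that misled people is easy to regenerate, a sieve for $\pi(n)$ and the standard series for $\operatorname{li}$: in any computationally accessible range the inequality holds comfortably, which is precisely why only a proof (Littlewood's) could reveal that it eventually fails.

```python
import math

def primepi(n):
    # Sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return sum(sieve)

def li(x):
    # li(x) = gamma + ln ln x + sum_{k>=1} (ln x)**k / (k * k!)
    gamma = 0.57721566490153286
    lx = math.log(x)
    total, term = gamma + math.log(lx), 1.0
    for k in range(1, 200):
        term *= lx / k
        total += term / k
    return total

n = 10 ** 5
print(primepi(n), li(n))    # pi(n) = 9592, with li(n) comfortably larger
assert primepi(n) < li(n)   # true for every accessible n, false eventually
```

Littlewood showed the sign of $\operatorname{li}(n)-\pi(n)$ changes infinitely often, with the first change occurring far beyond anything checkable by hand or machine.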

It's a good example of the need for rigor. But I've always been skeptical of the story that it was widely believed to be true, since any competent mathematician familiar with Riemann's 1859 explicit formula for π(n) would have realized that there are almost certainly going to be occasional exceptions. (Littlewood removed the word "almost" by a more careful analysis.)
–
Richard BorcherdsSep 8 '10 at 19:58

Mathematics wasn't that rigorous before N. Bourbaki: in the Italian school of Algebraic Geometry at the beginning of the 20th century, the standard procedure was Theorem, Proof, Counterexample.
Also, at the time of Cauchy, some theorems in analysis began with "If the reader doesn't choose a specially bad function we have..."

The use of rigour in analysis, which Cauchy began, avoided that by making it possible to say what a "good function" was in each case: analytic, $C^{\infty}$, allowing term-by-term differentiation of its series expansions...

@Gabriel : This seems like a delicate historical claim. I would actually be interested to see some sources on the evolution of standards of rigor within the mathematical community.
–
Andres CaicedoDec 6 '10 at 23:11

1

@Andres www-history.mcs.st-and.ac.uk/Biographies/Weil.html "Nicolas Bourbaki, a project they began in the 1930s, in which they attempted to give a unified description of mathematics. The purpose was to reverse a trend which they disliked, namely that of a lack of rigour in mathematics. The influence of Bourbaki has been great over many years but it is now less important since it has basically succeeded in its aim of promoting rigour and abstraction."
–
Gabriel FurstenheimDec 7 '10 at 8:26

2

The unified description of mathematics was not the initial intent of the Bourbaki project. By all accounts, it was to write an up to date analysis textbook (losing a whole generation to World War 1 had left a gap). Of course, the whole thing got out of hand pretty quickly...
–
Thierry ZellMar 19 '11 at 5:15

A rich source of examples may be found in the study of finite element methods for PDEs in mixed form. Proving that a given mixed finite element method provided a stable and consistent approximation strategy was usually done 'a posteriori': one had a method in mind, and then sought to establish well-posedness of the discretization. This meant a proliferation of methods and strategies tailored for very specific examples.

In the bid for a more comprehensive treatment and unifying proofs, the finite element exterior calculus was developed and refined (e.g., the 2006 Acta Numerica paper by Arnold, Falk and Winther). The proofs revealed the importance of using discrete subspaces which form a subcomplex of the Hilbert complex, as well as bounded cochain projections (what we now call the 'commuting diagram property'). These ideas, in turn, provided an elegant design strategy for stable finite element discretizations.