I think we all secretly hope that in the long run mathematics becomes easier, in that with advances of perspective, today's difficult results will seem easier to future mathematicians. If I were cryogenically frozen today, and thawed out in one hundred years, I would like to believe that by 2110 the Langlands program would be reduced to a 10-page pamphlet (with complete proofs) that I could read over breakfast.

Is this belief plausible? Are there results from a hundred years ago that have not appreciably simplified over the years? From the point of view of a modern mathematician, what is the hardest theorem proven a hundred years ago (or so)?

The hardest theorem I can think of is the Riemann Mapping Theorem, which was first proposed by Riemann in 1851 and (according to Wikipedia) first rigorously proven by Caratheodory in 1912. Are there harder ones?

To clarify: are you asking about what was hard for them then (a history-of-maths point of view) or what was done then but still remains hard even with today's tools?
– Yemon Choi, Nov 2 '10 at 17:54


How do you define hard? The Riemann Mapping Theorem is sort of standard material on a lot of complex analysis quals (especially if you use the version of the proof given in Ahlfors [due to Koebe?], instead of going through Dirichlet's principle like Riemann did), so I'd say the proof did simplify quite a bit over the years. Anyway, I think the premise of this question is subjective, and I am dubious about its value here on MO.
– Willie Wong, Nov 2 '10 at 17:56


Also, I find "hardness" extremely vague and subjective... I am not convinced that this question will generate more light than heat.
– Yemon Choi, Nov 2 '10 at 17:58


Willie, the approach to the RMT via Montel's theorem is due to Fejer and Riesz, later simplified by Ostrowski and Caratheodory, using a couple of important ideas from the Caratheodory-Koebe theory.
– timur, Nov 2 '10 at 18:13


@Willie, Kevin: do not confuse en.wikipedia.org/wiki/Riemann_mapping_theorem (which is sort of standard) and en.wikipedia.org/wiki/… (which certainly isn't). Both theorems are 100 years old. The proof of the Uniformization theorem discussed in the books by Forster or Hubbard is younger and simpler, but also old. Since I taught a course on it, I can assure you: it is also pretty difficult! I think it is an excellent example of an important result that has not been substantially simplified for a long time.
– Johannes Ebert, Nov 2 '10 at 19:03

6 Answers

Difficulty is not additive, and measuring the difficulty of proving a single result is not a good measure of the difficulty of understanding the body of work in a given field as a whole.

Suppose for instance that 100 years ago, there were ten important theorems in (say) complex analysis, each of which took 30 pages of elementary arguments to prove, with not much in common between these separate arguments. (These numbers are totally made up for the purposes of this discussion.) Nowadays, thanks to advances in understanding the "big picture", we can now describe the core theory of complex analysis in, say, 40 pages, but then each of the ten important theorems becomes a one-page consequence of this theory. By doing so, we have actually made the total number of pages required to prove each theorem longer (41 pages, instead of 30 pages); but the net number of pages needed to comprehend the subject as a whole has shrunk dramatically (from 300 pages to 50). This is generally a worthwhile tradeoff (although knowing the "low tech" elementary proofs is still useful to round out one's understanding of the subject).
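The page-count tradeoff in the paragraph above can be tallied directly; all of the numbers are the made-up ones from the answer, not real data:

```python
# Tallying the made-up numbers above: ten theorems, 30 elementary pages each,
# versus a 40-page core theory from which each theorem follows in one page.
theorems = 10
old_pages_each = 30
core_theory_pages = 40
new_pages_each = 1

old_total = theorems * old_pages_each                      # pages to cover the field the old way
new_total = core_theory_pages + theorems * new_pages_each  # pages to cover it the new way
single_theorem_new = core_theory_pages + new_pages_each    # pages to prove any one theorem now

print(old_total, new_total, single_theorem_new)  # 300 50 41
```

The per-theorem cost rises from 30 to 41 pages, while the cost of the whole field drops from 300 to 50, which is the non-additivity the answer is pointing at.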

There are very slick and short proofs now of, say, the prime number theorem, but actually this is not the best measure of how well we understand such a result, and more importantly how it fits in with the rest of its field. The fact that we can incorporate the prime number theorem into a much more general story of L-functions, number fields, Euler products, etc. which then ties in with many other parts of number theory is a much stronger sign that we understand number theory as a whole.

I didn't mean to rule out "improvement in big picture understanding" as a way in which mathematics gets easier -- that's my own personal definition of difficulty, though I didn't want to require that definition in the question.
– arsmath, Nov 3 '10 at 8:46

Global class field theory. Statements are relatively simple (of the form "the abelianisation of the absolute Galois group of a number field (or, more generally, a global field) looks like this"). This is only perhaps 90 or so years old; it all depends on exactly what you mean by global class field theory. I'm no historian, but precursors to the theorems in their current form are over 100 years old, and Hilbert already raised the question of making them more "explicit" as one of his problems (he wanted to see concrete generation of abelian extensions of an arbitrary number field rather than an abstract isomorphism of a Galois group with another group). Original proofs were surely longer than current proofs, but current proofs are still very, very long. If you want a proof of the main theorems of class field theory, there are nowadays a number of books to choose from. I think this is a strong contender for hard evidence that the belief in the original question is way too optimistic. I think that things will, in some sense, only get worse!

To strengthen your point, Milne claims, if I understand correctly, that the modern proofs are longer than the original proofs, just popular because they buy you more. Davidac asks what to do and decides to learn the original way first. mathoverflow.net/questions/6932/…
– Ben Wieland, Nov 2 '10 at 23:21


Dear Kevin, I think that already Kummer's work on cyclotomic fields from the 1860s remains quite difficult. Of course it is placed in a conceptual framework that makes parts of it easier to grasp, but it remains an elaborate piece of mathematics. The idea that Langlands will ever be reduced to 10 pages (let alone in a mere 100 years) seems very unlikely to me!
– Emerton, Nov 3 '10 at 4:32

Though the idea behind it all is childishly simple, yet the method of analytic geometry is so powerful that very ordinary boys of seventeen can use it to prove results which would have baffled the greatest of the Greek geometers: Euclid, Archimedes, and Apollonius.
– E.T. Bell, "Men of Mathematics"

I would like to believe that by 2110 the Langlands program would be reduced to a 10-page pamphlet (with complete proofs) that I could read over breakfast. Is this belief plausible?

In a way, no. The problem is that there cannot be a computable function $f:\mathbb N\rightarrow\mathbb N$ such that each question that can be stated with $n$ symbols (in the language of set theory, say) has a proof, disproof, or proof that it is independent of set theory, that is no more than $f(n)$ symbols long.

Indeed, this would contradict the result (going back to Gödel) that there is no algorithm that determines which statements are provable: if such an $f$ existed the algorithm would just be to search through all proofs of length at most $f(n)$.
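The reduction in the previous paragraph can be sketched in a few lines. Everything here is a toy stand-in for illustration: proofs are strings over a two-letter alphabet and `toy_checks` is a trivial "proof checker"; the point is only the shape of the argument, that a computable bound on proof length would make provability decidable by brute-force search.

```python
from itertools import product

ALPHABET = "ab"  # toy proof alphabet; a real formal system would use its own symbols

def bounded_search(statement, f_bound, checks):
    """If a computable bound f(n) on proof length existed, this loop would
    decide provability: enumerate every candidate proof of length at most
    f_bound and run the proof checker on each one."""
    for length in range(f_bound + 1):
        for candidate in product(ALPHABET, repeat=length):
            proof = "".join(candidate)
            if checks(proof, statement):
                return proof  # found a valid proof within the bound
    return None  # no proof of length <= f_bound exists

# Toy checker: a "proof" of statement s is any string containing s.
toy_checks = lambda proof, s: s in proof

print(bounded_search("ab", 3, toy_checks))  # finds the proof "ab"
print(bounded_search("ba", 1, toy_checks))  # None: nothing works within the bound
```

Since Gödel-style results rule out any such decision procedure for actual set theory, no computable $f$ can exist, which is exactly the contradiction the answer invokes.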

I don't think this is a problem because we are constantly changing the underlying language in which we formulate the proofs. Previously complicated theorems suddenly become easy when viewed from the right point of view, and we change the language to capture that. The net effect is that over time, mathematics develops a language where all the interesting theorems and proofs are encoded in a fairly short sequence of symbols. The point is probably that we are not interested in the uninteresting theorems anyway, so they can occupy all the long proofs, for all I care.
– Greg Graviton, Nov 2 '10 at 18:39


Some of the old results have been made rather simple by the introduction of abstract language, e.g. the Riemann-Roch theorem via cohomology. Some old results remain difficult, because the new language apparently only scratches the surface of the problem (e.g. the Riemann uniformization theorem). And don't say these results are uninteresting.
– Johannes Ebert, Nov 2 '10 at 19:15


@Graviton: It is a problem, since you may shorten the proof but the translation becomes longer. Imagine that "asdfewfgg44" were the written-out proof of the four colour problem: it would be unreadable unless you knew the translation.
– Mlazhinka Shung Gronzalez LeWy, Nov 2 '10 at 19:21


As a concrete example, consider the theorem "Checkers is a draw." I can't imagine anyone ever "conceptualizing" the proof of this theorem so dramatically that a single human being could rigorously check it in a day without any machine assistance. If current trends are anything to go by, proofs that involve an essential computational component are only going to increase in the future.
– Timothy Chow, Nov 2 '10 at 22:52


@GregGraviton: the log of a non-computable function is still, hum, rather large.
– Benoît Kloeckner, Dec 20 '13 at 8:48

Proofs will simplify, but there is a limit to simplification. More important, I think, is an organization of mathematical knowledge and education that makes it possible to fit the proof of any reasonable result into a 10-page pamphlet that a person of average education could read over breakfast.

One measure of hardness might be how long it takes to write up a computer-checkable proof of the theorem in question. It took a few months for a colleague of mine to produce a completely formal proof of the prime number theorem, so in that sense it is still hard.

The same goes for the Jordan curve theorem. As for the Riemann mapping theorem, I don't know that there is a computer-checkable proof yet.