15 Answers

For what it's worth, here's a trivial one: when explaining induction to students, I sometimes stress that it can be easier to prove a stronger result by induction than a weaker one---you're trying to get more out, but you're also putting more in. As a concrete example I note that proving that the sum of the first 100 odd numbers is a square sounds like it might be tricky; proving that the sum of the first $n$ odd numbers is a square for all $n\geq1$ sounds like it might be accessible by induction, but is in fact still too weak for the inductive step to go through; and proving that the sum of the first $n$ odd numbers is $n^2$ is really rather easy. In some sense, the stronger the statements get, the easier they become.
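The strengthened identity is easy to check by machine. Here is a minimal Python sanity check (the helper name is mine) that also mirrors the inductive step:

```python
# Check that the sum of the first n odd numbers equals n^2 --
# the strengthened statement that makes the induction easy.
def sum_of_first_odd(n):
    """Sum 1 + 3 + 5 + ... + (2n - 1)."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# Inductive step: if the sum up to n is n^2, adding the next odd
# number (2n + 1) gives n^2 + 2n + 1 = (n + 1)^2.
for n in range(1, 101):
    assert sum_of_first_odd(n) == n ** 2

print(sum_of_first_odd(100))  # 10000, answering the original question
```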

The first problem on the first homework in my number theory class last semester had something similar, but with finding a bound. I don't remember the exact problem, but it ended up being easier to prove the sharper upper bound $2-\frac{1}{n}$ than just $2$.
– Harry Gindi, Apr 13 '10 at 17:01

Another way of proving that something is nonzero is to prove that it is odd. One good example of that idea is the proof of Sperner's lemma.

More generally, one can often prove that something is nonzero by computing it mod $p$.
That is the idea used in the Chevalley-Warning theorem (one proves that the number of solutions is divisible by $p$; since the trivial solution exists, there must be at least $p$ solutions, hence a nontrivial one), and in the standard proof of Cauchy's theorem.
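The one-dimensional case of Sperner's lemma already shows the parity trick at work: in any string of 0/1 labels that starts with 0 and ends with 1, the number of adjacent pairs with different labels is odd, hence nonzero, so a "fully labeled" edge must exist. A small sketch (function name is mine):

```python
import random

def boundary_edges(labels):
    """Count adjacent pairs with different labels (the 'fully labeled cells')."""
    return sum(1 for a, b in zip(labels, labels[1:]) if a != b)

# 1-D Sperner: if the labeling starts with 0 and ends with 1, the number
# of label changes is odd -- in particular nonzero, so a 01-edge exists.
random.seed(0)
for _ in range(1000):
    labels = [0] + [random.randint(0, 1) for _ in range(20)] + [1]
    assert boundary_edges(labels) % 2 == 1
```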

Frequently in mathematics the best way to determine the value of a sequence at a particular index is to compute its value at every index, even though the latter seems on the surface like a harder problem.

Here is one of my favorite examples of this phenomenon. Suppose you want to know how many vectors of a particular norm there are in some lattice $L$. On the surface, this seems like a hard problem - it involves figuring out how many times some quadratic form takes some value. One can solve this problem by solving the harder problem of determining the answer for every possible norm by writing down the theta function
$$\Theta_L(\tau) = \sum_{v \in L} e^{\pi i \tau \left< v, v \right>}.$$

If $L$ satisfies certain technical properties, $\Theta_L$ is a modular form with respect to some congruence subgroup, and modular forms live in finite-dimensional vector spaces; moreover, a lot is known about how to write down modular forms. For example, the theta function of the $E_8$ lattice is a modular form of weight $4$ and level $1$. The space of such forms is one-dimensional - in fact, it's spanned by an Eisenstein series - and it then follows that
$$\Theta_{E_8}(\tau) = 1 + 240 \sum_{n \ge 1} \sigma_3(n) q^n$$

where $q = e^{2\pi i \tau}$. Similar considerations lead to the well-known formulas for the number of ways to represent an integer as the sum of two or four squares.
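The first coefficient can be sanity-checked by brute force. The sketch below uses the standard coordinate model of $E_8$ (points of $\mathbb{Z}^8 \cup (\mathbb{Z}+\tfrac12)^8$ with even coordinate sum), doubling coordinates to stay in integer arithmetic; the function names are mine:

```python
from itertools import product

# Model E8 with doubled coordinates w = 2v, so everything is an integer:
# v is in E8 iff the w_i are all even or all odd, and sum(w) is divisible
# by 4 (i.e. the coordinates of v sum to an even integer).
def e8_vectors_of_norm(norm2, box):
    """Count E8 vectors v with <v, v> == norm2, where 2v has coords in [-box, box]."""
    count = 0
    for w in product(range(-box, box + 1), repeat=8):
        if sum(x * x for x in w) != 4 * norm2:
            continue
        if len({x % 2 for x in w}) == 1 and sum(w) % 4 == 0:
            count += 1
    return count

def sigma3(n):
    """Sum of cubes of the divisors of n."""
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

# First theta coefficient: 240 vectors of norm 2, matching 240 * sigma_3(1).
assert e8_vectors_of_norm(2, 2) == 240 * sigma3(1)
```

The 240 vectors split as the 112 vectors $\pm e_i \pm e_j$ and the 128 half-integer vectors $(\pm\tfrac12)^8$ with an even number of minus signs.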

Some of the previous answers have said that sometimes the easiest way to prove a set is non-empty is to show that it's large, or even infinite. A variation on that idea is to show that something exists by showing that the probability of selecting it at random from some larger set is positive. As one of my professors used to say, some things are so hard to find that the best way to look for them is at random.

A nice example of looking at random is in the area of error correcting codes. There is lots of fine theory for generating codes involving all kinds of beautiful mathematics from group theory and algebraic geometry. But it turns out that if you want to get close to the Shannon limit, random codes (subject to some constraints) will do the job just fine: en.wikipedia.org/wiki/Low-density_parity-check_code
– Dan Piponi, Apr 13 '10 at 23:19

The only way to prove that there's at least one prime in every arithmetic progression $a \bmod n$ with $\gcd(a,n)=1$ is by proving that there are infinitely many primes in every such progression (Dirichlet's theorem). Intuitively this is a fairly tremendous jump in difficulty relative to the rather modest initial result.

I imagine that most examples of this phenomenon take the form that the question as asked is "more difficult" only in the sense that it's been phrased in such a way as to mask what's "really going on." I think this is probably at the core of hundreds of problem-solving-type puzzles: the difficulty of a puzzle comes from masking the influence of the governing theorem, which is often easier to prove in its general form than it is to recognize which parts of the puzzle are the important ones. In short, puzzles have red herrings; good theorems do not.

When we come across a (research) problem, we may not at first see the full picture. And at each step toward a good theorem we may not need its full generality; a special case is often enough.
– Sunni, Apr 13 '10 at 16:09


A good example of such a puzzle is this: Let x = sqrt(2) and y = 2+sqrt(2). Let X be the set { floor(nx) | n a positive integer }, and define Y similarly. Prove that X and Y are disjoint, and that their union is the set of positive integers. The not-so-obvious key is that x and y are (positive) irrational numbers satisfying 1/x + 1/y = 1; and in fact the statement holds (and is easier to prove) for all such pairs.
– villemoes, Apr 13 '10 at 17:01


You can delete the word "known": the two are equivalent, since if there's at least one prime in every arithmetic progression then given an arithmetic progression a mod n, there's a prime congruent to a+n mod n^2, a prime congruent to a+n^2 mod n^3, etc.
– Qiaochu Yuan, Apr 13 '10 at 21:04

This might not be correct---perhaps someone can confirm---but I was once told (when I was a graduate student) that the way that Leopoldt's conjecture was proved for abelian number fields was as follows: first do the standard reduction to show that Leopoldt is true if certain special values of certain $p$-adic $L$-functions $L_p(1,\chi)$ are non-zero, and then prove that these numbers are non-zero by showing that they are transcendental! As I say, I don't know for sure if this is true, but my source was pretty reliable. The emphasis was on the observation that (at the time at least), apparently the only way of proving the numbers were non-zero was by showing they were transcendental.

I don't know about "needed": my understanding is that people simply didn't believe that such things were possible before Matiyasevich's theorem was proven. The actual construction was done explicitly a few years later by Jones et al.: en.wikipedia.org/wiki/…
– Qiaochu Yuan, Apr 13 '10 at 21:00

Somewhat similar to this answer: one asks "is there a regular expression matching the numbers divisible by 7?" and the answer is "Yes: just prove that any DFA can be converted to a regular expression."
– Jisang Yoo, Dec 3 '13 at 17:07

Occasionally, when trying to prove that a certain type of object exists, it is easier to show that the set of such objects is very large.

For instance, it's difficult to exhibit a specific number and prove it transcendental over the rationals. However, it is quite easy to show that the set of algebraic numbers is only countably infinite, so almost every real number is transcendental.
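The countability of the algebraic numbers can be made explicit by listing every nonzero integer polynomial exactly once, ordered by a "height"; each polynomial has finitely many roots, so the algebraic numbers are a countable union of finite sets. A sketch (the enumeration scheme and names are mine):

```python
from itertools import count, product

# List every nonzero integer polynomial exactly once, ordered by
# "height" = degree + sum of |coefficients|.  There are finitely many
# polynomials of each height, so this is a genuine enumeration.
def polynomials():
    """Yield coefficient tuples (a_0, ..., a_d) with a_d != 0."""
    for height in count(1):
        for d in range(height):
            budget = height - d  # the |coefficients| must sum to this
            for coeffs in product(range(-budget, budget + 1), repeat=d + 1):
                if coeffs[-1] != 0 and sum(abs(c) for c in coeffs) == budget:
                    yield coeffs

# x^2 - 2, whose root sqrt(2) is algebraic, shows up at a finite position.
for p in polynomials():
    if p == (-2, 0, 1):
        break
assert p == (-2, 0, 1)
```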

A related example: it's not completely trivial to write down a language that can't be recognized by a Turing machine, but there are uncountably many languages and countably many Turing machines.
– Qiaochu Yuan, Apr 13 '10 at 18:13


Also related: it is difficult to give an example of a continuous real-valued function on [0,1] that is monotone on no interval, but it is not too difficult to prove using the Baire category theorem that, in the sense of category, almost all continuous functions are like this.
– user4977, Apr 14 '10 at 3:55

(John, the original said "almost real numbers". I made the obvious correction.)
– François G. Dorais♦, Apr 14 '10 at 0:05


Thanks, François. That will teach me always to proofread, even a single sentence!
– John Stillwell, Apr 14 '10 at 0:11

Proving that almost all real numbers are normal does not solve the problem of whether $\sqrt{2}$ is normal, so this isn't really an example of what the question asks for.
– gowers, Sep 26 '10 at 8:38


On second thoughts, it does if you regard the problem as being "Prove that there exists a normal number." But then it's not obvious that proving that almost all real numbers are normal is easier than proving that 0.12345678910111213141516... is normal.
– gowers, Sep 26 '10 at 10:22

I believe Fermat's Last Theorem was proved by proving the Modularity Theorem (for the case of semistable elliptic curves), but I don't know the proof well enough to say whether FLT is a direct corollary. The Modularity Theorem is not in any sense easy anyway, but at least it has been proved successfully. :)

I wouldn't call the modularity theorem a "generalization" of FLT; it's just a large and difficult theorem that happens to imply FLT, and this implication is itself a theorem: en.wikipedia.org/wiki/Ribet%27s_theorem . I would reserve the term "generalization" for when the specialization can be obtained by specializing some universal quantifiers.
– Qiaochu Yuan, Apr 14 '10 at 17:09

@Qiaochu: Would you say that ZMT is a generalization of the Nullstellensatz?
– Harry Gindi, Apr 14 '10 at 17:27

@Qiaochu: True. As I said, I didn't know the deduction from the Modularity Theorem is itself a theorem. Btw, does MT imply information about solutions of a certain class of equations, or is FLT really a one-of-a-kind thing?
– elfking, Apr 14 '10 at 18:16

The Carlson Lemmas in combinatorics. He told me at the time that he had struggled with the simple and natural way the problem was originally posed, but was able to push through a far more elaborate, stronger, and much less intuitive version.

The generating function proof of Cayley's theorem counting labeled trees (i.e., the theorem that there are $n^{n-2}$ labeled trees on $n$ vertices) is a good example. In the lecture notes I linked to, the more general question is Theorem 1 and the particular question is Corollary 1.
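Cayley's formula is easy to verify by brute force for small $n$: a labeled tree on $n$ vertices is a subgraph with exactly $n-1$ edges that connects all vertices. A sketch (function name is mine):

```python
from itertools import combinations

# Brute-force check of Cayley's formula: the number of labeled trees on
# n vertices is n^(n-2).  A tree is a connected subgraph with exactly
# n - 1 edges, so we enumerate (n-1)-edge subsets of the complete graph.
def count_labeled_trees(n):
    edges = list(combinations(range(n), 2))
    trees = 0
    for subset in combinations(edges, n - 1):
        # union-find to test whether the n - 1 chosen edges connect everything
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        components = n
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        if components == 1:
            trees += 1
    return trees

for n in range(2, 6):
    assert count_labeled_trees(n) == n ** (n - 2)
```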