I had a funny idea for proving an identity in Euclidean geometry. While it didn't end up being a very nice proof strategy in my case, I would still like to collect nice examples of where the proof strategy works out, because I think such a collection of examples would be nice for impressing students of high school algebra.

I was trying to prove the "Power of a point" theorem (although I did not know it by that name at the time). My idea for a proof strategy was as follows:

Set up a coordinate system in which the circle is centered at the origin and the point $P$ lies on the x-axis.

Define a function $f(y)$ as follows: first connect $P$ and $(0,y)$ with a line. This line intersects the circle at two points, call them $A$ and $B$. Then let $f(y) = (AP \cdot BP)^2$. It is not too hard to see that $f$ is a polynomial of degree at most 8 in $y$.

Find 9 values of $y$ where $f(y)$ is easy to compute - they should all have a common value.

Since a degree at most 8 polynomial is determined by 9 points, we see that $f$ must be constant, which proves the theorem.

After having this idea, we found the proof by similar triangles. While 3 values of $y$ were pretty easy to compute with, it was hard to find 9 easy ones.
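For what it's worth, the constancy that $f$ is meant to capture is easy to test numerically. Here is a quick sketch of the product $AP \cdot BP$ itself; the radius, the position of $P$, and the sample values of $y$ are arbitrary choices of mine, picked so that each line actually meets the circle:

```python
import math

r, p = 2.0, 5.0          # circle radius; P = (p, 0) lies outside the circle

def chord_product(y):
    """AP * BP for the line through P = (p, 0) and (0, y)."""
    # Unit direction from P toward (0, y).
    dx, dy = -p, y
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    # Points on the line are (p + t*dx, t*dy).  Intersecting with x^2+y^2=r^2:
    #   t^2 + 2*t*(p*dx) + (p^2 - r^2) = 0
    b, c = p * dx, p * p - r * r
    t1 = -b + math.sqrt(b * b - c)
    t2 = -b - math.sqrt(b * b - c)
    # |t| is the distance from P, since the direction has unit length.
    return abs(t1 * t2)

values = [chord_product(y) for y in (-2.0, -1.0, 0.0, 0.5, 2.0)]
print(values)  # each entry is p^2 - r^2 = 21.0, up to rounding
```

By Vieta, the product of the two roots is exactly $p^2 - r^2$, which is the power of the point; the numerics just confirm it.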

So even though the method of proof didn't pan out in this case, it still seemed pretty cute, and hinted that "baby algebraic geometry" ideas could be presented to students in geometry or algebra classes this way.

I am asking for a list of similar situations where the proof strategy does pan out, so that they could be pulled out when teaching students of algebra or geometry.

7 Answers

If you want research-level mathematics, the joints theorem is an excellent example of the polynomial technique that can be presented to high-school students. The statement is

$n$ lines in space can form at most $Cn^{3/2}$ joints (points where at least three non-coplanar lines intersect).

The proof (for an expert) consists of 3 "elementary" steps:

1) Let $(m+1)^3$ be the least cube greater than the number of joints formed. Then we can find a polynomial $P(x,y,z)$, not identically zero, of degree at most $3m$ vanishing at all joints (the number of free parameters, i.e., coefficients, exceeds the number of linear conditions).
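The dimension count behind step 1 can be checked directly: polynomials of degree at most $3m$ in three variables form a vector space of dimension $\binom{3m+3}{3}$, while vanishing at each joint imposes one linear condition, and the number of joints is strictly below $(m+1)^3$. A one-line verification:

```python
from math import comb

# Polynomials of degree <= 3m in x, y, z form a space of dimension C(3m+3, 3).
# Since the number of joints is strictly less than (m+1)^3, the inequality
# below guarantees a nonzero polynomial vanishing at every joint.
for m in range(0, 100):
    assert comb(3 * m + 3, 3) >= (m + 1) ** 3
print("dimension count verified for m = 0..99")
```

Of course the inequality holds for all $m$, not just the range sampled; expanding $\binom{3m+3}{3} = \tfrac{(3m+3)(3m+2)(3m+1)}{6}$ against $(m+1)^3$ makes it a routine comparison.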

2) Take such a polynomial of least degree. If every line contained at least $3m+1$ joints, the polynomial would vanish at more than $\deg P$ points of each line, hence identically on every line. Then, at each joint, three directional derivatives along independent directions would vanish, so the full differential would vanish there. Thus some partial derivative $P_x$, $P_y$, or $P_z$ would be a polynomial of lower degree with the same properties, which is impossible. Therefore, at least one line contains at most $3m$ joints.

3) Remove that line and repeat the argument. Each time we remove at most $3m$ joints, and there are at least $m^3$ joints in total, so we must remove at least $m^2/3$ lines before we get rid of all the joints. Hence there were at least that many lines in the configuration.

Presenting it to high-school kids may easily take two full lectures, but there is nothing in it beyond their comprehension.

If you want an "olympiad flavor" purely geometric problem, my vote goes to the famous pentagonal area identity:

If you have a convex pentagon $A_1\dots A_5$, $S$ is its area, and $S_j$ is the area of the triangle $A_{j-1}A_jA_{j+1}$ (indices taken mod 5), then
$$
S^2-S(S_1+S_2+S_3+S_4+S_5)+S_1S_2+S_2S_3+S_3S_4+S_4S_5+S_5S_1=0
$$
The slickest proof I know is to start with a pentagon and move $A_1$ along the line parallel to $A_5A_2$ until you get a degenerate pentagon, which is actually a convex quadrilateral with one extra vertex on a side. Note that the left-hand side is a linear function of the position of $A_1$ along that line ($S$ and $S_1$ do not change under this motion, and every other term is at most linear in the displacement), so if we have the identity at the endpoints, we have it throughout. For the degenerate pentagon, repeat the trick, moving the point that lies on a side along that side. You'll see that it suffices to prove the identity for a pentagon that is a convex quadrilateral with one vertex doubled (say $A_1=A_2$). But then it becomes obvious, because $S_1=S_2=0$, so we are left with
$$
S^2-S(S_3+S_4+S_5)+S_3S_4+S_4S_5=0
$$
but $S_3+S_5=S$ (those two triangles now tile the quadrilateral), and substituting this makes the left-hand side vanish.
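For skeptics (or students), the identity is also easy to test numerically. A quick sketch using the shoelace formula, with the pentagon's vertices sampled on a circle (an arbitrary choice of mine to guarantee convexity):

```python
import math, random

def shoelace(pts):
    """Signed area (positive for counterclockwise vertex order)."""
    n = len(pts)
    return sum(pts[i][0] * pts[(i + 1) % n][1]
               - pts[(i + 1) % n][0] * pts[i][1] for i in range(n)) / 2

random.seed(0)
# A random convex pentagon: five sorted angles on the unit circle.
angles = sorted(random.uniform(0, 2 * math.pi) for _ in range(5))
A = [(math.cos(t), math.sin(t)) for t in angles]

S = shoelace(A)
# S_j = area of triangle A_{j-1} A_j A_{j+1}, indices mod 5.
Ssub = [shoelace([A[j - 1], A[j], A[(j + 1) % 5]]) for j in range(5)]

lhs = S * S - S * sum(Ssub) + sum(Ssub[j] * Ssub[(j + 1) % 5] for j in range(5))
print(abs(lhs) < 1e-9)  # True
```

Any convex pentagon works here; the circle is just a convenient way to generate one.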

I seem to have a vague memory that something like this is used in Guth/Katz?!
– Igor Rivin, Jan 6 '12 at 19:58


The first solution of the joints problem was by Guth and Katz, but after that a few people cleaned it up quite a bit, which finally resulted in the above argument. I do not remember now who dotted the last i or crossed the last t, but the final result is very nice, indeed. I also do not know who provided more inspiration here: Alon or Dvir. History is a very tangled subject, you know...
– fedja, Jan 6 '12 at 21:49

The Reed-Solomon code. A simple example: you have 45 GB of data to back up on ten DVDs, but you're worried some discs might get damaged, so you want some redundancy. Erasure codes to the rescue. Write out the 10 discs, and for each byte position on a disc, fit a degree-9 Lagrange interpolation polynomial over GF(256) through the byte values across the discs. For example, polynomial #23 would pass through $(1,b_{1,23}),(2,b_{2,23}),\dots,(10,b_{10,23})$, where $b_{d,n}$ is the data at byte #$n$ of disc #$d$. Then you can start burning more discs at additional points on the polynomials; e.g., disc #15's data at byte 23 would be $b_{15,23}$ computed from the interpolation polynomial. (You can do the finite-field arithmetic with a table of "logarithms" and "antilogarithms" for efficiency, but that's a side issue.) Now if you make 5 spare discs, you can reconstruct the original data from any ten of the total 15 that you made. This is probably more robust than making a whole backup set, since if you just duplicate all 10 discs and then have 5 failures, there's a high chance that you'll lose both a primary and its backup. Fancy RAID systems work something like this.
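Here is a toy sketch of the scheme in Python, one byte position across the discs. The choice of the AES reduction polynomial $x^8+x^4+x^3+x+1$ (0x11B) and of generator 3 for the log tables are my own assumptions, not part of the description above:

```python
# Build exp/log tables for GF(256) with generator 0x03.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    # Multiply x by the generator 0x03 = x + 1 (reduce mod 0x11B).
    x ^= (x << 1) ^ (0x11B if x & 0x80 else 0)
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gmul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def ginv(a):
    return EXP[255 - LOG[a]]

def interpolate(points, x0):
    """Evaluate the Lagrange polynomial through `points` at x0, over GF(256).
    Addition/subtraction in GF(2^8) are both XOR."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = gmul(num, x0 ^ xj)
                den = gmul(den, xi ^ xj)
        total ^= gmul(yi, gmul(num, ginv(den)))
    return total

# One byte position across 10 "primary discs" (discs numbered 1..10).
data = [17, 250, 3, 99, 0, 42, 8, 8, 200, 77]
points = list(zip(range(1, 11), data))

# Burn 5 spare discs: evaluate the degree-<=9 polynomial at x = 11..15.
spares = [(x0, interpolate(points, x0)) for x0 in range(11, 16)]

# Lose discs 2, 5, 7, 9, 10; recover everything from any 10 survivors.
survivors = [pt for pt in points if pt[0] not in (2, 5, 7, 9, 10)] + spares
recovered = [interpolate(survivors, x0) for x0 in range(1, 11)]
print(recovered == data)  # True: all 10 original bytes reconstructed
```

A real Reed-Solomon decoder works with syndromes rather than re-interpolating from scratch, but the "10 points determine a degree-9 polynomial" principle is exactly the one above.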

I think this should make sense and be interesting for high-school algebra students who use computers. Edit: I guess it is also pretty similar to Shamir secret sharing. Reed-Solomon is of course more widely used as an error-correcting code, with a slightly more involved explanation.

Many computer-generated proofs use this technique. For instance, there is an entire Euclidean geometry textbook written this way! It is available on Doron Zeilberger's web page.

It is titled "Plane Geometry: an elementary textbook", and it is attributed thus:
"By Shalosh B. Ekhad, XIV (Circa 2050), downloaded from the future by Doron Zeilberger". Zeilberger, it seems, always names his current computer Shalosh B. Ekhad and frequently cites it as a coauthor.

All the theorems are proven by computer-generated proofs, and they rely upon this property of polynomials. As such, the proofs are most easily read if one understands Maple's programming language, so whether they'd work as a good math education tool would strongly depend on your audience.

Okay, that was embarrassing. These proofs actually do not use this mechanism at all; they rely instead on the correctness of Maple's symbolic algebra code (specifically the solve function).
The proof I was thinking of was in Section 1 of A=B, by Marko Petkovšek, Herbert Wilf, and Doron Zeilberger; it's a proof that the angle bisectors of a triangle meet in a point. Other, non-geometric theorems are proved there using the same trick.

Oh no! I'm rereading the textbook carefully, and I was completely wrong about the mechanism these proofs use. They DON'T use the mechanism I thought they did. They seem to rely instead upon the correctness of the Maple solve algorithm. How do I remove this post?
– Benjamin Young, Jan 6 '12 at 19:57


Benjamin -- I am almost sure that I have read somewhere something written by Zeilberger where he uses precisely this method to prove something, so perhaps your precise reference is not what you were thinking of, but I am fairly confident that Zeilberger has used the trick to great effect in some of his writings. My memory is that Zeilberger was pointing out that a statement which looked like it was not a "finite" statement could indeed be checked by verifying a finite number of cases, which was easy for a computer to do, because when you thought about it the right way it was a polynomial.
– Kevin Buzzard, Jan 6 '12 at 20:04

Yeah, I figured it out, it was in A=B. See my extremely sheepish edit above.
– Benjamin Young, Jan 6 '12 at 20:08


Yes, that was it! Why not just delete the post and make a new one? Or edit it substantially so it says what you want it to say, rather than just crossing it out? People get a bit annoyed when others edit their questions substantially (thus making some comments/answers nonsensical or even stupid-looking), but I'm not so sure the same holds for answers.
– Kevin Buzzard, Jan 6 '12 at 22:05

No need to be sheepish, Benjamin. It was an innocent mistake, and the truth wasn't far off.
– Tom Leinster, Jan 7 '12 at 11:18

This is profoundly less interesting to me than the other two answers already posted, but I think when I was in high school it would have been otherwise. Especially after having seen a bit of calculus.

I recall being amazed that there were formulas --- polynomial formulas! --- for sums like $$s(n) := \sum_{i=1}^n i^k.$$ The first cases, $k=0,1$, made sense. For $k=2$, things seemed a bit lucky, and for higher $k$, miraculous indeed. Here's the proof/formula I have in mind.

First, define $\Delta f(x) := f(x+1)-f(x)$ and the falling factorial $x^{\underline 0} := 1$, $x^{\underline n} := (x-n+1)\, x^{\underline{n-1}}$. Then $\Delta x^{\underline n} = n x^{\underline{n-1}}$ (for $n\geq 1$). Next, prove that if $\Delta f = \Delta g$, then $f(x)-g(x)= c$, and from this (by induction, subtracting off falling factorials) get that if $\Delta^{k+1} f(x)=0$, then $f(x)$ is a polynomial of degree at most $k$. Since $\Delta s(n) = (n+1)^k$ has degree $k$, we get $\Delta^{k+2} s = 0$, so $s$ is a polynomial of degree at most $k+1$, and we can get the formula using only $s(0),s(1),s(2),\dots,s(k),s(k+1)$ and Lagrange interpolation:
$$ s(n) = \sum_{i=0}^{k+1} s(i) \frac{n(n-1)(n-2)\cdots (n-k-1)}{n-i} \cdot
\frac{i-i}{i(i-1)(i-2)\cdots (i-k-1)}$$
with the need to cancel before multiplying.

That's ugly enough that I hesitate to press the "Post Your Answer" button, but it has so very many ingredients that would have rocked my world in high school. And the final formula is easy enough to remember.
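The recipe is also easy to carry out mechanically. A small sketch in exact rational arithmetic (the helper name power_sum_poly is mine): it interpolates through the $k+2$ values $s(0),\dots,s(k+1)$ and then evaluates the resulting polynomial anywhere.

```python
from fractions import Fraction

def power_sum_poly(k):
    """Return a function evaluating the degree-(k+1) interpolating
    polynomial for s(n) = 1^k + 2^k + ... + n^k."""
    xs = list(range(k + 2))
    # The k+2 interpolation values s(0), ..., s(k+1), computed by brute force.
    ys = [sum(i**k for i in range(1, x + 1)) for x in xs]
    def s(n):
        total = Fraction(0)
        for i, yi in zip(xs, ys):
            term = Fraction(yi)
            for j in xs:
                if j != i:
                    term *= Fraction(n - j, i - j)  # Lagrange basis factor
            total += term
        return total
    return s

s2 = power_sum_poly(2)
# The polynomial now gives s(n) for every n, not just the interpolated ones.
print(s2(10))   # 385 = 1 + 4 + ... + 100
print(s2(100))  # 338350
```

Comparing the output against the closed form $n(n+1)(2n+1)/6$ for a few values makes a convincing classroom demonstration that the interpolant really is the "formula".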

I agree totally! But there's got to be something wrong with the numerator of the second fractional factor!
– Lubin, Jan 7 '12 at 1:18

Nothing is wrong, I think. Remember that you need to cancel with the "$i-i$" factor in the denominator, leaving just a factor of "1" in that numerator.
– Kevin O'Bryant, Jan 7 '12 at 1:22


Also really nice is that you can prove $s(n)$ is always divisible by $n(n+1)$, and by $2n+1$ for $k$ even, and by $n^2(n+1)^2$ for $k>1$ odd, by considering $s(-n)$ and using induction, thus getting formulae for $s(-x)$ in terms of $s(x)$, and then considering zeros of $s$ and its derivative $s'$. I still love the fact that values at non-integers, and derivatives, can be relevant to a question originally only about positive integers. This is also motivation for how the Riemann zeta function and analytic continuation can tell us about prime numbers.
– Zen Harper, Jan 7 '12 at 3:44

Very relevant -- I am teaching integration right now, and this seems perfect for the curious student.
– Steven Gubkin, Jan 11 '12 at 15:22

This one is at the undergraduate level rather than the high-school level because it involves finite fields, but it certainly fits the title of this question: Dvir's proof of the finite field Kakeya conjecture. The stunningly short argument took the experts by surprise, and the fact that a nonzero one-variable polynomial of degree $d$ has at most $d$ zeros is one of the two key ingredients in the proof (the other being a parameter count producing a low-degree polynomial vanishing on the whole set).