Continuing from a previous post, we found that if we begin with the $n$th powers of $n+1$ consecutive integers and then repeatedly take successive differences, it seems like we always end up with the factorial of $n$, that is, $n!$. We then derived an algebraic expression for the result of the iterative difference procedure. So the goal now is to prove that

$$\sum_{i=0}^n (-1)^{n-i} \binom{n}{i} (k+i)^n = n!,$$

that is,

$$(k+n)^n - \binom{n}{1}(k+n-1)^n + \binom{n}{2}(k+n-2)^n - \cdots + (-1)^n k^n = n!.$$
Now, it’s possible (probable, in fact) that this can be proved by purely algebraic means. If you come up with such a proof I would love to see it! But I must confess that I spent several hours banging my head against it (algebraically speaking) without making any progress. Eventually I turned to the idea of a combinatorial proof.
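Before turning to combinatorics, the identity is at least easy to check numerically. Here is a quick sketch in Python (the helper name `alternating_sum` is my own):

```python
from math import comb, factorial

def alternating_sum(n, k):
    """Sum_{i=0}^{n} (-1)^(n-i) * C(n,i) * (k+i)^n: the result of taking
    n rounds of successive differences of k^n, (k+1)^n, ..., (k+n)^n."""
    return sum((-1) ** (n - i) * comb(n, i) * (k + i) ** n
               for i in range(n + 1))

# The result should be n! regardless of the starting integer k.
for n in range(1, 8):
    for k in range(-5, 6):
        assert alternating_sum(n, k) == factorial(n)
print("verified for n = 1..7 and k = -5..5")
```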

What do I mean by that? Combinatorics is the subfield of mathematics concerned with counting. The essence of a combinatorial proof is to show that two different expressions are just two different ways of counting the same set of objects—and must therefore be equal. I’ve described some combinatorial proofs before, in counting the number of ways to distribute cookies.

As a first example, consider Pascal’s rule: $\binom{n}{k} = \binom{n-1}{k} + \binom{n-1}{k-1}$. It’s certainly possible to prove this algebraically, by expanding out the binomial coefficients (using $\binom{n}{k} = \frac{n!}{k!(n-k)!}$), but we can give a more elegant proof, based on the fact that $\binom{n}{k}$ is the number of ways to choose a subset of $k$ things out of a set of $n$ things. For example, here are the $\binom{5}{3} = 10$ ways to choose three things out of a set of five:

$$\{1,2,3\}, \{1,2,4\}, \{1,2,5\}, \{1,3,4\}, \{1,3,5\}, \{1,4,5\}, \{2,3,4\}, \{2,3,5\}, \{2,4,5\}, \{3,4,5\}$$
Consider the first element of the size-$n$ set. Every subset of size $k$ either includes this first element, or it doesn’t. The number of size-$k$ subsets which do not include the first element is $\binom{n-1}{k}$, since that’s the number of ways to choose $k$ things from the remaining $n-1$ elements. The number of size-$k$ subsets which do include the first element is $\binom{n-1}{k-1}$, because they correspond to choosing $k-1$ of the remaining $n-1$ things. Therefore $\binom{n}{k} = \binom{n-1}{k} + \binom{n-1}{k-1}$.

Here’s an illustration of how this works in the particular case when $n = 5$ and $k = 3$:

Notice how the ten subsets from above have been split into two groups: the first group, on the left, are those that don’t include the first element; you can see that each of them corresponds to one of the size-$3$ subsets of the remaining four elements. The second group, on the right, are those that do include the first element; each corresponds to one of the size-$2$ subsets of the remaining four elements.
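This splitting of the ten subsets can also be checked by brute-force enumeration; here is a small Python sketch (the variable names are my own):

```python
from itertools import combinations
from math import comb

n, k = 5, 3
subsets = list(combinations(range(1, n + 1), k))

# Split the size-k subsets by whether they contain the first element, 1.
with_first = [s for s in subsets if 1 in s]
without_first = [s for s in subsets if 1 not in s]

assert len(without_first) == comb(n - 1, k)      # choose k of the other n-1
assert len(with_first) == comb(n - 1, k - 1)     # choose k-1 of the other n-1
assert comb(n, k) == comb(n - 1, k) + comb(n - 1, k - 1)  # Pascal's rule
```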

So that’s the idea of a combinatorial proof. And we want to do something similar for the identity we are trying to prove—although, of course, it’s going to be a bit more difficult!

You might have fun trying to think about what a combinatorial proof of our target equation might look like; although if you don’t have much experience with combinatorics you may have trouble coming up with what sorts of things the two sides of the equation might be counting! That’s what I’ll talk about in my next post.

12 Responses to Combinatorial proofs

Well, I don’t know about algebraic, but if you notice that you are taking $x^n$ and differentiating it $n$ times, the fact that you get $n!$ as a result becomes not as surprising. And yes, this argument can be expanded into a proof.

Ah, interesting! I think I see what you are getting at—taking pairwise differences is a discrete analog of differentiation. I’m not sure I see how to work out the details, though. If you have time to expand it into something a bit more detailed I would certainly be interested to see it.

It does look wrong, doesn’t it? But surprisingly, the $k$ is in fact correct! It turns out that the equation is true for all values of $k$. This corresponds to the fact that we can start with any sequence of $n+1$ consecutive integers and we still end up with the factorial of $n$; $k$ represents the first integer in the sequence.

I have a proof, but I wouldn’t call it algebraic, since I’m taking derivatives. Start with the original sum. Expand out the $(k+i)^n$ (via the binomial theorem) and switch the order of summation, factoring out of the inner sum everything that does not depend on its index.

Now go to the binomial theorem: $(1-x)^n = \sum_{i=0}^n (-1)^i \binom{n}{i} x^i$. Taking derivatives and plugging in $x = 1$ gives you 0 on the left, and the terms of the inner sum on the right. At least for the first $n-1$ derivatives. The $n$th derivative gives you $(-1)^n n!$ on the left, and the very last term of the inner sum (and only for one value of the outer sum). So everything cancels except for one term: $n!$.
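One way to sanity-check this derivative argument without symbolic algebra is to expand $(1-x)^n$ into its coefficients and differentiate term by term at $x = 1$. A Python sketch (the helper name `deriv_at_one` is my own):

```python
from math import comb, factorial

def deriv_at_one(n, j):
    """j-th derivative of (1-x)^n = sum_i (-1)^i * C(n,i) * x^i, at x = 1.
    The j-th derivative of x^i at x = 1 is i!/(i-j)! (zero when i < j)."""
    return sum((-1) ** i * comb(n, i) * factorial(i) // factorial(i - j)
               for i in range(j, n + 1))

n = 6
for j in range(n):                 # derivatives 0 through n-1 all vanish
    assert deriv_at_one(n, j) == 0
assert deriv_at_one(n, n) == (-1) ** n * factorial(n)  # n-th gives (-1)^n n!
```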

Intriguing identity! You may want to have a look at the following short article I came across: http://cms.math.ca/crux/v27/n8/page510-513.pdf
It explains how to prove finite sums using sine and cosine rules and differentiation. It looks rather clever and more creative than induction (which obviously was my first idea for the one above, but it turns rather awkward quickly). The article’s technique is nice and I had never seen it before.
In your case I guess you have to squeeze the $k$ into the cosine arguments, and probably you need to differentiate $n$ times to get the $(k+i)^n$. I haven’t tried to work it out (maybe later ;)) so I don’t know whether the binomial term drops out nicely, but it might be worth the try. The simpler examples of Lin suggest some factorial build-up ($n(n-1)\cdots$). Best of luck! 😉

If you take consecutive values of a polynomial of degree $n$, and take first differences, you get values of a polynomial of degree $n-1$ whose leading coefficient (the one on $x^{n-1}$) is $n$ times the leading coefficient of your original polynomial (the one on $x^n$).

Similarly, if you start with every $k$th value of a polynomial of degree $n$ and keep taking differences, you end up with $n!\,k^n$ times the original leading coefficient.
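A short sketch of this claim, using a concrete cubic (the polynomial and all names below are my own example, not from the comment):

```python
from math import factorial

def diffs(values):
    """One round of successive differences."""
    return [b - a for a, b in zip(values, values[1:])]

# p has degree n = 3 with leading coefficient 2.
p = lambda x: 2 * x**3 - 7 * x + 5
n, k = 3, 4

values = [p(k * t) for t in range(10)]   # every k-th value of p
for _ in range(n):
    values = diffs(values)

# After n rounds, every entry equals n! * k^n times the leading coefficient.
assert all(v == factorial(n) * k**n * 2 for v in values)
print(values[0])  # 6 * 64 * 2 = 768
```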