Proof that $\sum_{n=1}^{\infty} 1/n^2 = \pi^2/6$

I am trying to prove $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$. I've done quite a lot:

Taking the derivative of both sides and using part (a), we see that the two sides differ by at most a constant on the interval (in part (a) I have basically shown that the derivatives of the two sides are equal). Evaluating at a suitable point, we see that the constant difference is zero, which proves the equality. Likewise, evaluating at 0 we obtain the identity.

(Note: this argument depends on a claim about the value of the function at the point where I evaluate. ...

So here's where I'm stuck. I have a hint that I'm supposed to use the following theorem:

If a function $f$ is continuous and periodic on $[-\pi, \pi]$ with period $2\pi$, then, using the usual notation for the Fourier coefficients $a_n$ and $b_n$,

$$\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)^2\,dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right).$$
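(For concreteness: this is Parseval's identity, and the standard way to extract the sum from it, assuming the function in play is $f(x) = x$ on $[-\pi, \pi]$, which is my assumption rather than something stated in the thread, goes as follows. Since $f$ is odd, every $a_n = 0$, while integration by parts gives

$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} x\sin(nx)\,dx = \frac{2(-1)^{n+1}}{n},$$

so the identity reads

$$\frac{1}{\pi}\int_{-\pi}^{\pi} x^2\,dx = \frac{2\pi^2}{3} = \sum_{n=1}^{\infty}\frac{4}{n^2},$$

and dividing by 4 gives $\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$.)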

My first question is, since my function isn't periodic, how do I use this theorem?

Anyway, I started trying this for the function I'm working with, and I have to say, what I get already looks messed up. Am I doing something wrong?

My first question is, since my function isn't periodic, how do I use this theorem?

The answer is that your function is periodic. Or you can make it periodic: just think of copies of the function repeating their values every period of length $2\pi$. Remember that your original function's domain was restricted to $[-\pi, \pi]$. Therefore, it's not the same function as the one given by the same formula for all real numbers $x$.
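As a minimal sketch of this idea, here is a hypothetical Python helper (the choice $f(x) = x$ below is only an example, not necessarily the book's function):

import math

def periodic_extension(f, a, b):
    # Extend f, defined on [a, b), to all of R with period b - a
    # by folding every real x back into [a, b).
    period = b - a
    return lambda x: f(a + (x - a) % period)

# Example: f(x) = x on [-pi, pi), extended with period 2*pi.
g = periodic_extension(lambda x: x, -math.pi, math.pi)
print(g(0.5))                # 0.5
print(g(0.5 + 2 * math.pi))  # 0.5 again, one period later

The extended function agrees with the original on $[-\pi, \pi)$ and repeats those values everywhere else, which is exactly the function the theorem should be applied to.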

I had a hunch that was why the theorem would hold on the interval, but I'm still worried about the rest. If I crunch some numbers on a blackboard to compute what I have at the end of my last message, the sum seems to diverge... And here, I don't even know for a fact that the function I should be putting into this theorem is the one I've been using, since after all, my goal is just to compute $\sum_{n=1}^{\infty} \frac{1}{n^2}$...


First observe that

$$\frac{1}{n^2} = \int_0^1\!\!\int_0^1 (xy)^{n-1}\,dx\,dy,$$

since the double integral factors as $\int_0^1 x^{n-1}\,dx \cdot \int_0^1 y^{n-1}\,dy = \frac{1}{n}\cdot\frac{1}{n}$.

Therefore

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \int_0^1\!\!\int_0^1 \sum_{n=1}^{\infty}(xy)^{n-1}\,dx\,dy = \int_0^1\!\!\int_0^1 \frac{dx\,dy}{1-xy},$$

as $0 \le xy < 1$ on the interior of the square, so the geometric series sums to $\frac{1}{1-xy}$.
We can switch the order of summation and integration here as everything converges absolutely.
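As a quick numerical sanity check of this value (my own addition, using mpmath, with the inner integral $\int_0^1 \frac{dy}{1-xy} = -\frac{\ln(1-x)}{x}$ computed by hand first):

from mpmath import mp, quad, log, pi

mp.dps = 15
# Inner integral in y done by hand: int_0^1 dy/(1 - x*y) = -ln(1-x)/x,
# so the double integral collapses to a single integral in x.
val = quad(lambda x: -log(1 - x) / x, [0, 1])
print(val)         # 1.64493406684823...
print(pi**2 / 6)   # same value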

Perform the substitution $u = \frac{x+y}{2}$, $v = \frac{y-x}{2}$, i.e. $x = u - v$, $y = u + v$, so that $1 - xy = 1 - u^2 + v^2$ and $dx\,dy = 2\,du\,dv$.

We then obtain

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \iint_S \frac{2\,du\,dv}{1-u^2+v^2},$$

where $S$ is the diamond with corners at $(0,0)$, $\left(\tfrac12,-\tfrac12\right)$, $(1,0)$, $\left(\tfrac12,\tfrac12\right)$.

Evaluating the inner integral in $v$ over each half of the diamond, we get

$$\sum_{n=1}^{\infty}\frac{1}{n^2} = 4\int_0^{1/2}\frac{\arctan\left(\frac{u}{\sqrt{1-u^2}}\right)}{\sqrt{1-u^2}}\,du \;+\; 4\int_{1/2}^{1}\frac{\arctan\left(\frac{1-u}{\sqrt{1-u^2}}\right)}{\sqrt{1-u^2}}\,du.$$

Skipping the mess of integration (it's straightforward, and Wolfram|Alpha can help if need be), $\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}$.
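A numerical check of the two arctan integrals above (again my own addition, in mpmath):

from mpmath import mp, quad, atan, sqrt, pi

mp.dps = 15
f1 = lambda u: atan(u / sqrt(1 - u**2)) / sqrt(1 - u**2)
f2 = lambda u: atan((1 - u) / sqrt(1 - u**2)) / sqrt(1 - u**2)

# The two pieces come out to pi^2/72 and pi^2/36 respectively.
total = 4 * (quad(f1, [0, 0.5]) + quad(f2, [0.5, 1]))
print(total)       # 1.64493406684823...
print(pi**2 / 6)   # same value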

When I put it into Wolfram I get something very different from what you got, Tonio.

I looked at the link, Mr. Fantastic, but the part which gives the value of the sum just sort of states the result and I don't get the argument.

And I appreciate the many responses, but a lot of them use techniques that seem just too advanced, or they expand a Fourier series; even if I can blindly crunch the numbers, I still have no idea why I should have thought of that particular series, so I feel as though I still don't understand what's going on.

When I put it into Wolfram I get something very different from what you got, Tonio.

I really don't care, since I don't know what algorithms/programs Wolfram is implementing. The interesting question is: what do you get when you do that integral by parts (twice, as said)?!
Note also that you can multiply the integral by two and integrate from zero to $\pi$ instead (why??).

Tonio
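(If the integral Tonio has in mind is the Fourier-coefficient integral $\int_{-\pi}^{\pi} x^2\cos(nx)\,dx$, which is my assumption since the thread never displays it, then integrating by parts twice gives

$$\int_{-\pi}^{\pi} x^2\cos(nx)\,dx = \left[\frac{x^2\sin(nx)}{n}\right]_{-\pi}^{\pi} - \frac{2}{n}\int_{-\pi}^{\pi} x\sin(nx)\,dx = 0 - \frac{2}{n}\cdot\frac{2\pi(-1)^{n+1}}{n} = \frac{4\pi(-1)^n}{n^2},$$

and the factor-of-two trick works because $x^2\cos(nx)$ is even, so its integral over $[-\pi,\pi]$ is twice its integral over $[0,\pi]$.)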
