As I have heard, people did not trust Euler when he first discovered the formula (the solution of the Basel problem)
$$\zeta(2)=\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}.$$
However, Euler was Euler and he gave other proofs.

I believe many of you know some nice proofs of this; could you please share them with us?

makes no sense to have an Euler tag... maybe Eulerian but that's pushing it.
– anon, Oct 30 '10 at 10:46


Probably Robin should answer with a link to his note. I know I've pointed people to it when they ask precisely this, and they've always been more than satisfied!
– Mariano Suárez-Alvarez♦, Oct 30 '10 at 14:09


What I like the most about this thread is that I know most of the proofs that have been posted so far; it makes me think that perhaps I was given an adequate mathematical education after all :)
– Asaf Karagila, Nov 1 '10 at 12:57

25 Answers

OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from THE BOOK" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9
(EDIT: ...which is actually the proof that I read in Aigner & Ziegler).

Consider the sum $S_n=\sum_{k=1}^{2^n-1}\frac{1}{\sin^2\left(\frac{k\pi}{2^{n+1}}\right)}$, taken over the gridpoints obtained by splitting the interval $(0,\pi/2)$ into $2^n$ equal parts. Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with,
$$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$
Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short,
$$S_n = 4 S_{n-1} + 2.$$
Since $S_1=2$, the solution of this recurrence is
$$S_n = \frac{2(4^n-1)}{3}.$$
(For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)

For $0<x<\pi/2$ we have $\sin x < x < \tan x$, hence $\frac{1}{\sin^2 x}-1=\frac{1}{\tan^2 x}<\frac{1}{x^2}<\frac{1}{\sin^2 x}$. Summing this over the gridpoints $x_k=\frac{k\pi}{2^{n+1}}$, $k=1,\dots,2^n-1$, we now have
$$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$
Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!

I might add that, as an alternative, one can evaluate the equivalent sum $\sum_{m=0}^{\infty} (2m+1)^{-2}=\pi^2/8$ by summing only over the odd-numbered gridpoints. Then the midpoint $\pi/4$ never enters the computation, and one gets an even simpler recurrence, of the form $T_n = 4 T_{n-1}$.
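As a quick numerical sanity check, here is a short Python sketch; the explicit form $S_n=\sum_{k=1}^{2^n-1} 1/\sin^2\!\big(k\pi/2^{n+1}\big)$ of the gridpoint sum is an assumption read off from the construction above:

```python
import math

def S(n):
    # gridpoint sum: split (0, pi/2) into 2^n parts, sum 1/sin^2 over interior gridpoints
    return sum(1 / math.sin(k * math.pi / 2 ** (n + 1)) ** 2
               for k in range(1, 2 ** n))

# closed form of the recurrence S_n = 4 S_(n-1) + 2 with S_1 = 2
for n in range(1, 10):
    assert abs(S(n) - 2 * (4 ** n - 1) / 3) < 1e-6

# the squeeze: S_n - (2^n - 1) <= (4^(n+1)/pi^2) * sum_{k < 2^n} 1/k^2 <= S_n
n = 12
partial = sum(1 / k ** 2 for k in range(1, 2 ** n))
lo = math.pi ** 2 / 4 ** (n + 1) * (S(n) - (2 ** n - 1))
hi = math.pi ** 2 / 4 ** (n + 1) * S(n)
assert lo <= partial <= hi and abs(hi - math.pi ** 2 / 6) < 1e-3
```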
– Hans Lundmark, Oct 30 '10 at 21:20

@Downvoter: Well, yes, at least from a modern perspective, since we define series using limits. I don't know if Euler thought about it that way. What's your point?
– Hans Lundmark, Nov 12 '11 at 10:13


@Downvoter: it's hard to know whether you're really serious, but if so...Euler probably did more calculus-y things than any other mathematician in history (including Newton and Leibniz).
– Pete L. Clark, Mar 4 '12 at 19:36


@Downvoter Are you confusing Euler with Euclid?
– columbus8myhw, Sep 30 '14 at 3:26

This is a very cool peek into the way math was done in the 18th century. I love the total kamikaze approach of the initial assumption, which, as the Sandifer paper discusses on p. 6, is obviously not strictly justifiable. Sandifer gives $e^x\sin x$ as an alternative function with the same zeroes.
– Ben Crowell, Feb 11 '12 at 15:47

Alfredo Z has given a similar presentation of this below with some interesting differences.
– Ben Crowell, Feb 11 '12 at 16:14

@BenCrowell I love this argument too. But as you say one should be aware that it is not "Cauchy stringent"; I mean Euler did not have any $\epsilon$-$\delta$ arguments in his proofs, yet his "feeling" is often correct.
– AD., Feb 11 '12 at 17:27


It seems the Sandifer link is now a dead link.
– asmeurer, Nov 15 '14 at 20:53

Typically, when an answer is months old (more than half a year, in this case) and very good answers with many upvotes have already been provided, it's best to leave it alone (answering it "bumps" the question to the forefront under "active" questions, diverting attention from legitimately active questions). So unless you have some striking, novel, ingenious contribution to an "old" question, it's best to put your efforts into answering active, or new, questions.
– amWhy, Jun 14 '11 at 2:33


@amWhy: Keep the doors open. This is the first answer of Alfred Z.
– AD., Jun 14 '11 at 5:05

Of course, Alfred: welcome! (I just wanted to give you "heads up" 'cuz others around here will literally "bite your head off" for "bumping" an old question.) I, personally, would like to have us revisit "old" questions periodically. I'm sorry if I came across as unwelcoming! +1 Keep at it: you've got a lot to offer, no doubt, here at math.SE @AD. Understood. Apologies if I came across as "shutting doors" to new users!
– amWhy, Jun 14 '11 at 13:03


This is closely related to the method of Euler already described above by AD.
– Ben Crowell, Feb 11 '12 at 16:14

I have two favorite proofs. One is the last proof in Robin Chapman's collection; you really should take a look at it.

The other is a proof that generalizes to the evaluation of $\zeta(2n)$ for all $n$, although I'll do it "Euler-style" to shorten the presentation. The basic idea is that meromorphic functions have infinite partial fraction decompositions that generalize the partial fraction decompositions of rational functions.

The particular function we're interested in is $B(x) = \frac{x}{e^x - 1}$, the exponential generating function of the Bernoulli numbers $B_n$. $B$ is meromorphic with poles at $x = 2 \pi i n, n \in \mathbb{Z}$, and at these poles it has residue $2\pi i n$. It follows that we can write, a la Euler,

$$\frac{x}{e^x-1}=1-\frac{x}{2}+\sum_{n=1}^{\infty}\left(\frac{x}{x-2\pi in}+\frac{x}{x+2\pi in}\right)=1-\frac{x}{2}+2\sum_{m=1}^{\infty}(-1)^{m-1}\frac{\zeta(2m)}{(2\pi)^{2m}}x^{2m},$$
because, after pairing the terms for $n$ and $-n$ and rearranging, the sum over odd powers cancels out and the sum over even powers doesn't. (This is one indication of why there is no known closed form for $\zeta(2n+1)$.) Equating coefficients on both sides, it follows that

$$\frac{B_{2n}}{(2n)!} = (-1)^{n+1} \frac{2 \zeta(2n)}{(2\pi)^{2n}}$$

or

$$\zeta(2n) = (-1)^{n+1} \frac{B_{2n} (2\pi)^{2n}}{2\,(2n)!}$$

as desired. To compute $\zeta(2)$ it suffices to compute that $B_2 = \frac{1}{6}$, which then gives the usual answer.
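For readers who want to verify the resulting formula numerically, here is a small Python check; the values $B_2=\tfrac16$, $B_4=-\tfrac1{30}$ are standard, and the tail-corrected partial sum is just a convenient stand-in for $\zeta(s)$:

```python
import math

B = {2: 1 / 6, 4: -1 / 30}  # Bernoulli numbers B_2 and B_4

def zeta_partial(s, N=200000):
    # partial sum of zeta(s) plus an integral estimate of the tail
    return sum(1 / k ** s for k in range(1, N)) + 1 / ((s - 1) * N ** (s - 1))

for n in (1, 2):
    # zeta(2n) = (-1)^(n+1) * B_(2n) * (2 pi)^(2n) / (2 * (2n)!)
    formula = (-1) ** (n + 1) * B[2 * n] * (2 * math.pi) ** (2 * n) / (2 * math.factorial(2 * n))
    assert abs(formula - zeta_partial(2 * n)) < 1e-8
```

This recovers $\zeta(2)=\pi^2/6$ and $\zeta(4)=\pi^4/90$.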

This is my favorite proof and the one I was going to post, although Qiaochu's explanation is better than mine would have been. :) Instead, I will just add that there's a nice discussion in Concrete Mathematics (2nd edition, pp 285-286) that relates this argument to proof #7 in Robin's list.
– Mike Spivey, Oct 30 '10 at 19:59

Lemma: Let $Z$ be a complex curve in $\mathbb{C}^2$. Let $R(Z) \subset \mathbb{R}^2$ be the projection of $Z$ onto its real parts and $I(Z)$ the projection onto its imaginary parts. If these projections are both one to one, then the area of $R(Z)$ is equal to the area of $I(Z)$.

Given a point on the curve $e^{-z_1} + e^{-z_2} = 1$, consider the triangle with vertices at $0$, $e^{-z_1}$ and $e^{-z_1} + e^{-z_2} = 1$. The inequalities on the $y$'s state that the triangle should lie above the real axis; the inequalities on the $x$'s state that the horizontal base should be the longest side.

Projecting onto the $x$ coordinates, we see that the triangle exists if and only if the triangle inequality $e^{-x_1} + e^{-x_2} \geq 1$ is obeyed. So $R(Z)$ is the region under the curve $x_2 = - \log(1-e^{-x_1})$. The area under this curve is
$$\int_{0}^{\infty} - \log(1-e^{-x})\, dx = \int_{0}^{\infty} \sum_{k=1}^{\infty} \frac{e^{-kx}}{k}\, dx = \sum_{k=1}^{\infty} \frac{1}{k^2}.$$

Now, project onto the $y$ coordinates. Set $(y_1, y_2) = (-\theta_1, \theta_2)$ for convenience, so the angles of the triangle are $(\theta_1, \theta_2, \pi - \theta_1 - \theta_2)$. The largest angle of a triangle is opposite the largest side, so we want $\theta_1$, $\theta_2 \leq \pi - \theta_1 - \theta_2$, plus the obvious inequalities $\theta_1$, $\theta_2 \geq 0$. So $I(Z)$ is the quadrilateral with vertices at $(0,0)$, $(0, \pi/2)$, $(\pi/3, \pi/3)$ and $(\pi/2, 0)$ and, by elementary geometry, this has area $\pi^2/6$.
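Both areas can be checked numerically with a few lines of Python (the series value stands in for the integral of $-\log(1-e^{-x})$, and the shoelace formula computes the quadrilateral's area):

```python
import math

# area of R(Z): integral of -log(1 - e^(-x)) over (0, inf),
# computed via the series sum_k e^(-kx)/k -> sum_k 1/k^2
series_area = sum(1 / k ** 2 for k in range(1, 200000)) + 1 / 200000  # + tail estimate

# area of I(Z): shoelace formula for the quadrilateral
# (0,0), (pi/2,0), (pi/3,pi/3), (0,pi/2)
pts = [(0, 0), (math.pi / 2, 0), (math.pi / 3, math.pi / 3), (0, math.pi / 2)]
shoelace = 0.0
for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
    shoelace += x1 * y2 - x2 * y1
quad_area = abs(shoelace) / 2

assert abs(quad_area - math.pi ** 2 / 6) < 1e-12
assert abs(series_area - quad_area) < 1e-5
```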

Very nice indeed! (Although it took me a while to understand that the triangle lives in its own complex plane, not related to the $z_1$ and $z_2$ planes.) But I think it should be $x_1\ge 0$, $x_2\ge 0$, $e^{-x_1}+e^{-x_2} \le 1$, and the quadrilateral should have vertices at $(0,0)$, $(0,\pi/2)$, $(\pi/3,\pi/3)$ and $(\pi/2,0)$.
– Hans Lundmark, Oct 31 '10 at 9:35

Thanks for the corrections! I still think $e^{- x_1} + e^{- x_2} \geq 1$ is right, but I've fixed the others.
– David Speyer, Oct 31 '10 at 12:12

Ah, you're right about that one, of course. Sorry.
– Hans Lundmark, Oct 31 '10 at 14:44

Consider the function
$$R(z)=\sum\frac{1}{\log^2 z},$$
where the sum is taken over all branches of the logarithm, for $z$ in the domain $D=\mathbb{C}\setminus\{0,1\}$. Each point in $D$ has a neighbourhood on which the branches of $\log(z)$ are analytic. Since the series converges uniformly away from $z=1$, $R(z)$ is analytic on $D$.

Now a few observations:

(i) Each term of the series tends to $0$ as $z\to0$. Thanks to the uniform convergence this implies that the singularity at $z=0$ is removable and we can set $R(0)=0$.

(ii) The only singularity of $R$ is a double pole at $z=1$ due to the contribution of the principal branch of $\log z$. Moreover, $\lim_{z\to1}(z-1)^2R(z)=1$.

(iii) $R(1/z)=R(z)$.

By (i) and (iii) $R$ is meromorphic on the extended complex plane, therefore it is rational. By (ii) the denominator of $R(z)$ is $(z-1)^2$. Since $R(0)=R(\infty)=0$, the numerator has the form $az$. Then (ii) implies $a=1$, so that
$$R(z)=\frac{z}{(z-1)^2}.$$
Now evaluate at $z=-1$: the branches of the logarithm there take the values $i\pi(2n+1)$, $n\in\mathbb{Z}$, so
$$-\frac{2}{\pi^2}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}=R(-1)=-\frac{1}{4},$$
giving $\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}=\frac{\pi^2}{8}$ and hence $\zeta(2)=\frac{4}{3}\cdot\frac{\pi^2}{8}=\frac{\pi^2}{6}$.

This is not really an answer, but rather a long comment prompted by David Speyer's answer.
The proof that David gives seems to be the one in How to compute $\sum 1/n^2$ by solving triangles by Mikael Passare,
although that paper uses a slightly different way of seeing that
the area of the region $U_0$ (in Passare's notation)
bounded by the positive axes and the curve $e^{-x}+e^{-y}=1$,
$$\int_0^{\infty} -\ln(1-e^{-x}) dx,$$
is equal to $\sum_{n\ge 1} \frac{1}{n^2}$.

This brings me to what I really wanted to mention, namely another curious
way to see why $U_0$ has that area; I learned this from
Johan Wästlund.
Consider the region $D_N$ illustrated below for $N=8$:

Although it's not immediately obvious,
the area of $D_N$ is $\sum_{n=1}^N \frac{1}{n^2}$.
Proof: The area of $D_1$ is 1. To get from $D_N$ to $D_{N+1}$ one removes the boxes along the
top diagonal, and adds a new leftmost column of rectangles of width $1/(N+1)$
and heights $1/1,1/2,\ldots,1/N$,
plus a new bottom row which is the "transpose" of the new column,
plus a square of side $1/(N+1)$ in the bottom left corner.
The $k$th rectangle from the top in the new column
and the $k$th rectangle from the left in the new row (not counting the
square) have a combined area which exactly matches the $k$th box in the removed diagonal:
$$ \frac{1}{k} \frac{1}{N+1} + \frac{1}{N+1} \frac{1}{N+1-k} = \frac{1}{k} \frac{1}{N+1-k}. $$
Thus the area added in the process is just that of the square, $1/(N+1)^2$.
Q.E.D.
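The bookkeeping in this proof is easy to verify with exact rational arithmetic; the following Python sketch checks the per-box identity and the resulting area recurrence:

```python
from fractions import Fraction as F

# check: k-th rectangle of the new column plus k-th rectangle of the new row
# exactly match the k-th box of the removed diagonal, for 1 <= k <= N
for N in range(1, 30):
    for k in range(1, N + 1):
        added = F(1, k) * F(1, N + 1) + F(1, N + 1) * F(1, N + 1 - k)
        removed = F(1, k) * F(1, N + 1 - k)
        assert added == removed

# hence area(D_(N+1)) - area(D_N) = 1/(N+1)^2, so area(D_N) = sum_{n<=N} 1/n^2
area = F(1)  # area of D_1
for N in range(1, 50):
    area += F(1, (N + 1) ** 2)
assert area == sum(F(1, n * n) for n in range(1, 51))
```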

(Apparently this shape somehow comes up in connection with the "random
assignment problem", where there's an expected value of something which
turns out to be $\sum_{n=1}^N \frac{1}{n^2}$.)

Now place $D_N$ in the first quadrant, with the lower left corner at the origin.
Letting $N\to\infty$ gives nothing but the region $U_0$:
for large $N$ and for $0<\alpha<1$,
the upper corner of column number $\lceil \alpha N \rceil$ in $D_N$ lies at
$$ (x,y) =
\left(
\sum_{n=\lceil (1-\alpha) N \rceil}^N \frac{1}{n},
\sum_{n=\lceil \alpha N \rceil}^N \frac{1}{n}
\right)
\sim
\left(\ln\frac{1}{1-\alpha}, \ln\frac{1}{\alpha}\right),$$
hence (in the limit) on the curve $e^{-x}+e^{-y}=1$.

Tracing through the proof that the area of $D_N$ is $\sum_{d=1}^{N} 1/d^2$, I discovered the following curiosity: If you look at all the rectangles in $D_N$ of the form $1/j \times 1/k$ with $\gcd(j,k)=d$, their total area is $1/d^2$. In particular, if you look at the rectangles of the form $1/j \times 1/k$ with $\gcd(j,k)=1$, in the limit they are spread everywhere across the region $e^{-x} + e^{-y} \geq 1$, with density equal to the probability that two randomly chosen integers are relatively prime, namely $6/\pi^2$.
– David Speyer, Aug 22 '14 at 0:46

Note that
$$ \frac{\pi^2}{\sin^2\pi z}=\sum_{n=-\infty}^{\infty}\frac{1}{(z-n)^2} $$
from complex analysis, and that both sides are analytic everywhere except at $z=0,\pm 1,\pm 2,\cdots$. Then one can obtain
$$ \frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}=\sum_{n=1}^{\infty}\frac{1}{(z-n)^2}+\sum_{n=1}^{\infty}\frac{1}{(z+n)^2}. $$
Now the right hand side is analytic at $z=0$ and hence
$$\lim_{z\to 0}\left(\frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}\right)=2\sum_{n=1}^{\infty}\frac{1}{n^2}.$$
Note
$$\lim_{z\to 0}\left(\frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}\right)=\frac{\pi^2}{3}.$$
Thus
$$\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}.$$
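A quick numerical check of the partial-fraction identity and of the limit at $z=0$ (the $2/N$ term is a crude tail estimate for the truncated sum):

```python
import math

def lhs(z):
    return math.pi ** 2 / math.sin(math.pi * z) ** 2

def rhs(z, N=100000):
    # 1/z^2 plus the paired terms, plus ~2/N to estimate the two tails
    s = sum(1 / (z - n) ** 2 + 1 / (z + n) ** 2 for n in range(1, N))
    return 1 / z ** 2 + s + 2 / N

for z in (0.3, 0.47, 1.7):
    assert abs(lhs(z) - rhs(z)) < 1e-4

# the limit pi^2/sin^2(pi z) - 1/z^2 -> pi^2/3 as z -> 0
z = 1e-4
assert abs(lhs(z) - 1 / z ** 2 - math.pi ** 2 / 3) < 1e-4
```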

In response to a request here: Compute $\oint z^{-2k} \cot (\pi z)\, dz$, where the integral is taken around a square of side $2N+1$ centered at the origin. Routine estimates show that the integral goes to $0$ as $N \to \infty$.

Now, let's compute the integral by residues. At $z=0$, the residue is $\pi^{2k-1} q$, where $q$ is some rational number coming from the power series for $\cot$; for example, if $k=1$, then we get $- \pi/3$. At $z=n \neq 0$, the residue is $\frac{1}{\pi n^{2k}}$. Since the residues sum to zero, $\frac{2}{\pi}\zeta(2k) = -\pi^{2k-1} q$, which for $k=1$ gives $\zeta(2)=\frac{\pi^2}{6}$.

Common variants: We can replace $\cot$ with $\tan$, with $1/(e^{2 \pi i z}-1)$, or with similar formulas.

This is reminiscent of Qiaochu's proof but, rather than actually establishing the relation $\pi^{-1} \cot(\pi z) = \sum (z-n)^{-1}$, one simply establishes that both sides contribute the same residues to a certain integral.

At risk of contravening group etiquette w.r.t. old questions, I'm going to take this opportunity to post my own version. I don't see it in a transparent form in any of the other posts or in Robin Chapman's article, so I invite anyone to point out the correspondence if it's there. I like this argument because it's physical and can be followed without mathematical formalism.

We start by assuming the well-known series $\pi/4 = 1 - 1/3 + 1/5 - \cdots$ in alternating odd fractions. We can recognize it as the value at the origin of the Fourier series of the square wave:

$\cos(x) - \cos(3x)/3 + \cos(5x)/5 - \cdots$

It is easily argued on physical grounds that this adds up to a square wave; that the height of the wave is $\pi/4$ follows from the alternating series already mentioned. Now we are going to interpret this wave as an electric current flowing through a resistor. There are two ways of calculating the power, and they must agree. First, we can just take the square of the amplitude; in the case of this square wave, this is obviously a constant and it is just $\,\,\pi^2/16$. The other way is to add up the power of the sinusoidal components. These are the squares of the individual amplitudes:

$1 + 1/9 + 1/25 + \cdots \overset{?}{=} \pi^2/16\,$??

No, not quite; I've been a little sloppy and neglected to mention that when calculating the power of a sine wave, you use its RMS amplitude and not its peak amplitude. This introduces a factor of two, so in fact the series as written adds up to $\,\pi^2/8.$ This isn't quite what we want; remember we've just added up the odd fractions. But the even fractions contribute in a rather picturesque way: grouping every term by the power of two in its denominator gives a geometric sum leading to the desired result,
$$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{8}\left(1+\frac{1}{4}+\frac{1}{16}+\cdots\right)=\frac{\pi^2}{8}\cdot\frac{4}{3}=\frac{\pi^2}{6}.$$
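The power bookkeeping can be checked numerically; the script below verifies that the odd series gives $\pi^2/8$ and that the geometric regrouping of the even terms yields $\pi^2/6$:

```python
import math

# mean power of the square wave: amplitude^2 = (pi/4)^2
square_wave_power = (math.pi / 4) ** 2

# same power from the harmonics: each sine of peak amplitude a
# contributes a^2 / 2 (RMS), with a = 1, 1/3, 1/5, ...
odd = sum(1 / (2 * m + 1) ** 2 for m in range(200000))
assert abs(odd / 2 - square_wave_power) < 1e-5   # so odd sum = pi^2/8

# even denominators give a geometric correction:
# zeta(2) = (pi^2/8) * (1 + 1/4 + 1/16 + ...) = (pi^2/8) * 4/3
geom = sum(4.0 ** -j for j in range(60))
zeta2 = (math.pi ** 2 / 8) * geom
assert abs(zeta2 - math.pi ** 2 / 6) < 1e-12
```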

At the risk of being rude, you've used "It is easily argued on physical grounds" in place of a theorem on pointwise convergence of Fourier series, and a particular physical manifestation/application of Plancherel's theorem. You gain "intuition" for why the result is plausible (assuming you have the corresponding physics background), but you lose both rigor and clarity. The problem with making a physical argument for any mathematical fact is that even if you know that certain calculations work for physically relevant examples, it's hard to say what condition "physically relevant" imposes.
– Aaron, Aug 14 '11 at 1:14


Thanks for the feedback. I'm understanding that my argument wasn't so sketchy that you weren't able to fill in the details as necessary. I am blown away by the mathematical sophistication of the people who post here, but I still wish I would see more arguments made the way I do.
– Marty Green, Aug 14 '11 at 1:51


Well, you lucked out that I had seen the argument before (though not phrased with such language), and I remembered enough physics to understand what you were doing. I appreciate how you feel: technical arguments can be difficult to digest and sometimes offer no intuition about the result. A heuristic explanation, even if it isn't fully rigorous, is often a wonderful addition. However, for mathematics, the heuristic cannot be everything, as the mathematical battleground is littered with the bodies of proofs which are simple, intuitive, and wrong.
– Aaron, Aug 14 '11 at 2:08

A short way to get the sum is to use Fourier's expansion of $x^2$ in $x\in(-\pi,\pi)$. Recall that Fourier's expansion of $f(x)$ is
$$ \tilde{f}(x)=\frac{1}{2}a_0+\sum_{n=1}^\infty(a_n\cos nx+b_n\sin nx), x\in(-\pi,\pi)$$
where
$$ a_0=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\;dx,\quad a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\; dx,\quad b_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\; dx,\quad n=1,2,3,\cdots $$
and
$$ \tilde{f}(x)=\frac{f(x-0)+f(x+0)}{2}. $$
Easy calculation shows
$$ x^2=\frac{\pi^2}{3}+4\sum_{n=1}^\infty(-1)^n\frac{\cos nx}{n^2}, x\in[-\pi,\pi]. $$
Letting $x=\pi$ on both sides gives $\pi^2=\frac{\pi^2}{3}+4\sum_{n=1}^\infty\frac{1}{n^2}$, that is,
$$ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$

Another way to get the sum is to use Parseval's Identity for Fourier's expansion of $x$ in $(-\pi,\pi)$. Recall that Parseval's Identity is
$$ \frac{1}{\pi}\int_{-\pi}^{\pi}|f(x)|^2dx=\frac{1}{2}a_0^2+\sum_{n=1}^\infty(a_n^2+b_n^2). $$
Note
$$ x=2\sum_{n=1}^\infty(-1)^{n+1}\frac{\sin nx}{n}, x\in(-\pi,\pi). $$
Using Parseval's Identity gives
$$ 4\sum_{n=1}^\infty\frac{1}{n^2}=\frac{1}{\pi}\int_{-\pi}^{\pi}x^2\,dx=\frac{2\pi^2}{3}$$
or
$$ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$
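Here is a small numeric check of the Parseval computation for $f(x)=x$ (partial sums stand in for the infinite series):

```python
import math

# Parseval for f(x) = x on (-pi, pi): a_n = 0, b_n = 2(-1)^(n+1)/n,
# so (1/pi) * integral of x^2  =  sum b_n^2  =  4 * sum 1/n^2
lhs = (1 / math.pi) * (2 * math.pi ** 3 / 3)   # (1/pi) * int_{-pi}^{pi} x^2 dx
rhs = 4 * sum(1 / n ** 2 for n in range(1, 400000))
assert abs(lhs - rhs) < 1e-4
assert abs(lhs / 4 - math.pi ** 2 / 6) < 1e-12   # recovers zeta(2)
```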

Let $f\in \mathrm{Lip}(S^{1})$, where $\mathrm{Lip}(S^{1})$ is the space of Lipschitz functions on $S^{1}$. Then for each $k\in \mathbb{Z}$ the Fourier coefficient of $f$, $$\hat{f}(k)=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta)e^{-ik\theta}\,d\theta,$$ is well defined.

By the inversion formula, we have $$f(\theta)=\sum_{k\in\mathbb{Z}}\hat{f}(k)e^{ik\theta}.$$

Now take $f(\theta)=|\theta|$, $\theta\in [-\pi,\pi]$. Note that $f\in \mathrm{Lip}(S^{1})$. A direct computation gives $\hat{f}(0)=\frac{\pi}{2}$ and, for $k\neq0$, $\hat{f}(k)=\frac{(-1)^k-1}{\pi k^2}$, which vanishes for even $k$ and equals $-\frac{2}{\pi k^2}$ for odd $k$. Applying the inversion formula at $\theta=0$ gives
$$0=\frac{\pi}{2}-\frac{4}{\pi}\sum_{m=0}^{\infty}\frac{1}{(2m+1)^2},$$
so $\sum_{m=0}^{\infty}\frac{1}{(2m+1)^2}=\frac{\pi^2}{8}$ and hence $\zeta(2)=\frac{4}{3}\cdot\frac{\pi^2}{8}=\frac{\pi^2}{6}$.

Theorem:
Let $\lbrace a_n\rbrace$ be a nonincreasing sequence of positive numbers such that $\sum a_n^2$ converges. Then both series
$$s:=\sum_{n=0}^\infty(-1)^na_n,\,\delta_k:=\sum_{n=0}^\infty a_na_{n+k},\,k\in\mathbb N $$
converge. Moreover $\Delta:=\sum_{k=1}^\infty(-1)^{k-1}\delta_k$ also converges, and we have the formula
$$\sum_{n=0}^\infty a_n^2=s^2+2\Delta.$$
Proof: Konrad Knopp, Theory and Application of Infinite Series, page 323.

If we let $a_n=\frac1{2n+1}$ in this theorem, then we have
$$s=\sum_{n=0}^\infty(-1)^n\frac1{2n+1}=\frac\pi 4$$
$$\delta_k=\sum_{n=0}^\infty\frac1{(2n+1)(2n+2k+1)}=\frac1{2k}\sum_{n=0}^\infty\left(\frac1{2n+1}-\frac1{2n+2k+1}\right)=\frac{1}{2k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)$$
Hence,
$$\sum_{n=0}^\infty\frac1{(2n+1)^2}=\left(\frac\pi 4\right)^2+\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)=\frac{\pi^2}{16}+\frac{\pi^2}{16}=\frac{\pi^2}{8}$$
and now
$$\zeta(2)=\frac4 3\sum_{n=0}^\infty\frac1{(2n+1)^2}=\frac{\pi^2}6.$$
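The per-term identity behind the computation of $\delta_k$, and the final numerical value, can be checked with the following Python sketch (averaging consecutive partial sums of the alternating series is just a cheap acceleration trick):

```python
from fractions import Fraction as F
import math

# per-term identity used for delta_k:
# 1/((2n+1)(2n+2k+1)) = (1/2k) * (1/(2n+1) - 1/(2n+2k+1))
for n in range(20):
    for k in range(1, 20):
        assert F(1, (2 * n + 1) * (2 * n + 2 * k + 1)) == \
            F(1, 2 * k) * (F(1, 2 * n + 1) - F(1, 2 * n + 2 * k + 1))

# numeric check of sum a_n^2 = s^2 + 2*Delta for a_n = 1/(2n+1):
# partial sums of the alternating series for 2*Delta bracket its limit
K = 100000
h, total, prev = 0.0, 0.0, 0.0
for k in range(1, K + 1):
    h += 1 / (2 * k - 1)                 # h = 1 + 1/3 + ... + 1/(2k-1)
    prev, total = total, total + (-1) ** (k - 1) * h / k
two_delta = (total + prev) / 2           # average of consecutive partial sums
s = math.pi / 4
assert abs(s * s + two_delta - math.pi ** 2 / 8) < 1e-4
```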

Consider the function $\pi \cot(\pi z)$, which has poles at the integers $z=n$. Using l'Hôpital's rule you can see that the residue at each of these poles is $1$.

Now consider the integral $\oint_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz$, where the contour $\gamma_N$ is the square with corners at $\pm(N + 1/2) \pm i(N + 1/2)$, so that the contour avoids the poles of $\cot(\pi z)$. The integral is bounded in the following way:
$$\left|\oint_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz\right| \le \max_{z\in\gamma_N}\left|\frac{\pi\cot(\pi z)}{z^2}\right| \cdot \operatorname{length}(\gamma_N).$$
It can easily be shown that on the contour $\gamma_N$ we have $|\pi \cot(\pi z)| < M$ for some constant $M$ independent of $N$. Then we have
$$\left|\oint_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz\right| \le \frac{M(8N+4)}{(1/2+N)^2},$$
where $8N+4$ is the length of the contour and $1/2+N$ is the distance from the origin to the nearest point of $\gamma_N$. This bound tends to $0$ as $N$ goes to infinity, so
$$\lim_{N\to\infty}\oint_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz = 0.$$

By the Cauchy residue theorem we have $2\pi i\,\operatorname{Res}(z=0) + 2\pi i\sum \operatorname{Res}(z=n\neq 0) = 0$. At $z=0$ we have $\operatorname{Res}(z=0)=-\frac{\pi^2}{3}$, and $\operatorname{Res}(z=n)=\frac{1}{n^2}$ for $n\neq0$, so we have
$$-\frac{\pi^2}{3} + 2\sum_{n=1}^\infty \frac{1}{n^2} = 0, \qquad\text{i.e.}\qquad \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}.$$

Here's a proof based upon periods and the fact that $\zeta(2)$ and $\frac{\pi^2}{6}$ are periods forming an accessible identity.

The definition of periods below and the proof is from the fascinating introductory survey paper about periods by M. Kontsevich and D. Zagier.

Periods are defined as complex numbers whose real and imaginary parts are values of absolutely convergent integrals of rational functions with rational coefficients, over domains in $\mathbb{R}^n$ given by polynomial inequalities with rational coefficients.

The set of periods is therefore a countable subset of the complex numbers. It contains the algebraic numbers, but also many famous transcendental constants.

In order to show the equality $\zeta(2)=\frac{\pi^2}{6}$ we have to show that both are periods and that $\zeta(2)$ and $\frac{\pi^2}{6}$ form a so-called accessible identity.

First step of the proof: $\zeta(2)$ and $\pi$ are periods

There are a lot of different representations showing that $\pi$ is a period. In the paper referred to above, the following expressions for $\pi$ (besides others) are stated:
$$\pi=\iint_{x^2+y^2\le 1}dx\,dy=\int_{-1}^{1}\frac{dx}{\sqrt{1-x^2}}=\int_{-\infty}^{\infty}\frac{dx}{1+x^2}.$$
That $\zeta(2)$ is a period follows for instance from the representation $\zeta(2)=\iint_{0<x<y<1}\frac{dx\,dy}{(1-x)y}$, which is also given there.

Second step: $\zeta(2)$ and $\frac{\pi^2}{6}$ form an accessible identity.

An accessible identity between two periods $A$ and $B$ is given, if we can transform the integral representation of period $A$ by application of the three rules: Additivity (integrand and domain), Change of variables and Newton-Leibniz formula to the integral representation of period $B$.

This implies equality of the periods and the job is done.

In order to show that $\zeta(2)$ and $\frac{\pi^2}{6}$ are accessible identities we start with the integral $I$

$$I=\int_{0}^{1}\int_{0}^{1}\frac{1}{1-xy}\frac{dxdy}{\sqrt{xy}}$$

Expanding $1/(1-xy)$ as a geometric series and integrating term-by-term gives
$$I=\sum_{n=0}^{\infty}\int_{0}^{1}\int_{0}^{1}(xy)^{n-\frac{1}{2}}\,dx\,dy=\sum_{n=0}^{\infty}\frac{1}{\left(n+\frac{1}{2}\right)^2}=4\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}=3\,\zeta(2).$$
On the other hand, a change of variables (using the involution $(\xi,\eta) \mapsto (\xi^{-1},\eta^{-1})$) together with the last integral representation of $\pi$ above yields
$$I=\frac{\pi^2}{2}.$$

So, since term-by-term integration gives $I=3\,\zeta(2)$ while the change of variables gives $I=\frac{\pi^2}{2}$, we have shown that $\zeta(2)$ and $\frac{\pi^2}{6}$ form an accessible identity, and the equality $\zeta(2)=\frac{\pi^2}{6}$ follows.
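The term-by-term integration can be checked numerically; the Python sketch below sums $1/(n+\tfrac12)^2$ with a crude tail estimate and compares against $\pi^2/2$ and $3\zeta(2)$:

```python
import math

# term-by-term integration of the double integral I:
# I = sum over n >= 0 of (int_0^1 t^(n - 1/2) dt)^2 = sum 1/(n + 1/2)^2
N = 1000000
I = sum(1 / (n + 0.5) ** 2 for n in range(N)) + 1 / (N + 0.5)  # + tail estimate

assert abs(I - math.pi ** 2 / 2) < 1e-5
# and I = 3 * zeta(2) recovers zeta(2) = pi^2/6
assert abs(I / 3 - math.pi ** 2 / 6) < 1e-5
```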