797
$\begingroup$

As I have heard, people did not trust Euler when he first discovered the formula (the solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$

However, Euler was Euler and he gave other proofs.

I believe many of you know some nice proofs of this; can you please share them with us?

$\endgroup$

49 Answers

364
$\begingroup$

OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from THE BOOK" by Aigner & Ziegler, but later I found more or less the same proof in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this one is number 9 (EDIT: ...which is actually the proof that I read in Aigner & Ziegler).

When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus $$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$ Note that $1/\tan^2 x = 1/\sin^2 x - 1$. Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$: $$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$ Denoting the sum on the right-hand side by $S_n$, we can write this as $$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$

Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with, $$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$

Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short, $$S_n = 4 S_{n-1} + 2.$$ Since $S_1=2$, the solution of this recurrence is $$S_n = \frac{2(4^n-1)}{3}.$$ (For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)

We now have $$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$ Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
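As a quick numerical sanity check (a sketch I added, not part of the proof), the closed form for $S_n$ and the squeeze on the partial sums can be verified in a few lines of Python:

```python
import math

def S(n):
    # S_n = sum of 1/sin^2(x_k) over the inner gridpoints x_k = (pi/2) * k / 2^n
    return sum(1/math.sin(math.pi/2 * k/2**n)**2 for k in range(1, 2**n))

n = 10
closed = 2*(4**n - 1)/3                       # closed form from the recurrence
assert abs(S(n) - closed)/closed < 1e-9

partial = sum(1/k**2 for k in range(1, 2**n))
lower = (closed - (2**n - 1)) * math.pi**2 / 4**(n + 1)
upper = closed * math.pi**2 / 4**(n + 1)
assert lower < partial < upper                # the squeeze on the partial sums
assert upper - lower < 1e-2                   # both bounds approach pi^2/6
```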

$\endgroup$
7
  • 27
    $\begingroup$ I might add that, as an alternative, one can evaluate the equivalent sum $\sum_{m=0}^{\infty} (2m+1)^{-2}=\pi^2/8$ by summing only over the odd-numbered gridpoints. Then the midpoint $\pi/4$ never enters the computation, and one gets an even simpler recurrence, of the form $T_n = 4 T_{n-1}$. $\endgroup$ Oct 30, 2010 at 21:20
  • 8
    $\begingroup$ @Downvoter: Well, yes, at least from a modern perspective, since we define series using limits. I don't know if Euler thought about it that way. What's your point? $\endgroup$ Nov 12, 2011 at 10:13
  • 35
    $\begingroup$ @Downvoter: it's hard to know whether you're really serious, but if so...Euler probably did more calculus-y things than any other mathematician in history (including Newton and Leibniz). $\endgroup$ Mar 4, 2012 at 19:36
  • 29
    $\begingroup$ @Downvoter Are you confusing Euler with Euclid? $\endgroup$ Sep 30, 2014 at 3:26
  • 19
    $\begingroup$ @AkivaWeinberger: Just saw this (sorry it's 3 years late), but I must have been, because I'm not sure what else I could've been thinking either... $\endgroup$
    – Downvoter
    Feb 19, 2017 at 22:20
246
$\begingroup$

We can use the function $f(x)=x^{2}$ with $-\pi \leq x\leq \pi $ and find its expansion into a trigonometric Fourier series

$$\dfrac{a_{0}}{2}+\sum_{n=1}^{\infty }(a_{n}\cos nx+b_{n}\sin nx),$$

which is periodic and converges to $f(x)$ in $[-\pi, \pi] $.

Observing that $f(x)$ is even, it is enough to determine the coefficients

$$a_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }f(x)\cos nx\;dx\qquad n=0,1,2,3,...,$$

because

$$b_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }f(x)\sin nx\;dx=0\qquad n=1,2,3,... .$$

For $n=0$ we have

$$a_{0}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }x^{2}dx=\dfrac{2}{\pi }\int_{0}^{\pi }x^{2}dx=\dfrac{2\pi ^{2}}{3}.$$

And for $n=1,2,3,...$ we get

$$a_{n}=\dfrac{1}{\pi }\int_{-\pi }^{\pi }x^{2}\cos nx\;dx$$

$$=\dfrac{2}{\pi }\int_{0}^{\pi }x^{2}\cos nx\;dx=\dfrac{2}{\pi }\times \dfrac{ 2\pi }{n^{2}}(-1)^{n}=(-1)^{n}\dfrac{4}{n^{2}},$$

because

$$\int x^2\cos nx\;dx=\dfrac{2x}{n^{2}}\cos nx+\left( \frac{x^{2}}{ n}-\dfrac{2}{n^{3}}\right) \sin nx.$$

Thus

$$f(x)=\dfrac{\pi ^{2}}{3}+\sum_{n=1}^{\infty }\left( (-1)^{n}\dfrac{4}{n^{2}} \cos nx\right) .$$

Since $f(\pi )=\pi ^{2}$, we obtain

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+\sum_{n=1}^{\infty }\left( (-1)^{n}\dfrac{4}{ n^{2}}\cos \left( n\pi \right) \right) $$

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+4\sum_{n=1}^{\infty }\left( (-1)^{n}(-1)^{n} \dfrac{1}{n^{2}}\right) $$

$$\pi ^{2}=\dfrac{\pi ^{2}}{3}+4\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}.$$

Therefore

$$\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}=\dfrac{\pi ^{2}}{4}-\dfrac{\pi ^{2}}{12}= \dfrac{\pi ^{2}}{6}$$
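As a sanity check on the coefficient computation (a Python sketch I added, using the antiderivative quoted above):

```python
import math

def F(x, n):
    # antiderivative of x^2 cos(nx) quoted above
    return 2*x/n**2 * math.cos(n*x) + (x**2/n - 2/n**3) * math.sin(n*x)

for n in range(1, 10):
    # a_n = (2/pi) * integral_0^pi x^2 cos(nx) dx
    a_n = (2/math.pi) * (F(math.pi, n) - F(0, n))
    assert abs(a_n - (-1)**n * 4/n**2) < 1e-9
```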


Second method (available on-line a few years ago) by Eric Rowland. From

$$\log (1-t)=-\sum_{n=1}^{\infty}\dfrac{t^n}{n}$$

and making the substitution $t=e^{ix}$ one gets the series expansion

$$w=\text{Log}(1-e^{ix})=-\sum_{n=1}^{\infty }\dfrac{e^{inx}}{n}=-\sum_{n=1}^{ \infty }\dfrac{1}{n}\cos nx-i\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx,$$

whose radius of convergence is $1$; for $t=e^{ix}$ with $0<x<2\pi$ the series still converges, e.g. by Dirichlet's test. Now if we take the imaginary part of both sides, the RHS becomes

$$\Im w=-\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx,$$

and the LHS

$$\Im w=\arg \left( 1-\cos x-i\sin x\right) =\arctan \dfrac{-\sin x}{ 1-\cos x}.$$

Since

$$\arctan \dfrac{-\sin x}{1-\cos x}=-\arctan \dfrac{2\sin \dfrac{x}{2}\cdot \cos \dfrac{x}{2}}{2\sin ^{2}\dfrac{x}{2}}$$

$$=-\arctan \cot \dfrac{x}{2}=-\arctan \tan \left( \dfrac{\pi }{2}-\dfrac{x}{2} \right) =\dfrac{x}{2}-\dfrac{\pi }{2},$$

the following expansion holds

$$\dfrac{\pi }{2}-\frac{x}{2}=\sum_{n=1}^{\infty }\dfrac{1}{n}\sin nx.\qquad (\ast )$$

Integrating the identity $(\ast )$, we obtain

$$\dfrac{\pi }{2}x-\dfrac{x^{2}}{4}+C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}\cos nx.\qquad (\ast \ast )$$

Setting $x=0$, we get the relation between $C$ and $\zeta (2)$

$$C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}=-\zeta (2).$$

And for $x=\pi $, since

$$\zeta (2)=2\sum_{n=1}^{\infty }\dfrac{(-1)^{n-1}}{n^{2}},$$

we deduce

$$\dfrac{\pi ^{2}}{4}+C=-\sum_{n=1}^{\infty }\dfrac{1}{n^{2}}\cos n\pi =\sum_{n=1}^{\infty }\dfrac{(-1)^{n-1}}{n^{2}}=\dfrac{1}{2}\zeta (2)=-\dfrac{1}{ 2}C.$$

Solving for $C$

$$C=-\dfrac{\pi ^{2}}{6},$$

we thus prove

$$\zeta (2)=\dfrac{\pi ^{2}}{6}.$$

Note: this second method can generate all the even zeta values $\zeta (2n)$ by repeatedly integrating $(\ast\ast )$. This is the reason why I appreciate it. Unfortunately, it does not work for $\zeta (2n+1)$.

Note also that $C=-\dfrac{\pi ^{2}}{6}$ can be obtained by integrating $(\ast\ast )$ and substituting $x=0$ and $x=\pi$, respectively.
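A quick numerical check of the key identity $(\ast)$, truncating the series at a large $N$ (a sketch I added, not part of the proof):

```python
import math

def sawtooth(x, N=100000):
    # partial sum of sum_{n>=1} sin(nx)/n, which should approach (pi - x)/2 on (0, 2pi)
    return sum(math.sin(n*x)/n for n in range(1, N + 1))

for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(sawtooth(x) - (math.pi - x)/2) < 1e-3
```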

$\endgroup$
9
  • 3
    $\begingroup$ Would using fractional calculus to integrate $0.5$ times allow you to obtain $\zeta(2n+1)$? $\endgroup$
    – Alice Ryhl
    Feb 17, 2015 at 15:45
  • 4
    $\begingroup$ Definitely the best answer! Awesome job. I never really understood a proof of this until I read your post. $\endgroup$
    – Neil
    Apr 6, 2015 at 5:51
  • 2
    $\begingroup$ @KristofferRyhl Sorry to revive a year-old comment, but... I tried to integrate $(**)$ $.5$ times and (if I did it correctly) got a denominator of $n^{2.5}$, so that doesn't work. Integrating $(**)$ once gives us a denominator of $n^3$, but that also gives us $\sin(nx)$, which equals $0$ when $x=0$ - so that seems to be why this method doesn't work for $\zeta(2n+1)$ $\endgroup$ Apr 27, 2016 at 19:23
  • 1
    $\begingroup$ @zerosofthezeta I played a bit around with it myself just now, I can see why it only works for $\zeta(2n)$ and not $\zeta(2n+a)$ for any $0<a<2$: in order to do the trick you need to substitute $x=\text{something}$ such that $f=d^n\sin/dx^n$ applied as $f(nx) = 1$, for any integer $n$. However it is only when the argument to $\zeta$ is even that you can find such an $x$. $\endgroup$
    – Alice Ryhl
    Apr 27, 2016 at 20:01
  • 1
    $\begingroup$ It is far from clear how you can integrate the expression $(\ast)$, which doesn't seem to converge uniformly, or even converge at all, for $|x|<1$. $\endgroup$
    – Masacroso
    Sep 15, 2018 at 15:02
208
$\begingroup$

Here is another one, which is more or less what Euler did in one of his proofs.

The function $\sin x$, $x\in\mathbb{R}$, is zero exactly at $x=n\pi$ for each integer $n$. If we factorize it as an infinite product, we get

$$\sin x = \cdots\left(1+\frac{x}{3\pi}\right)\left(1+\frac{x}{2\pi}\right)\left(1+\frac{x}{\pi}\right)x\left(1-\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1-\frac{x}{3\pi}\right)\cdots =$$ $$= x\left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{2^2\pi^2}\right)\left(1-\frac{x^2}{3^2\pi^2}\right)\cdots\quad.$$

We can also represent $\sin x$ as a Taylor series at $x=0$:

$$\sin x = x - \frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots\quad.$$

Multiplying the product and identifying the coefficient of $x^3$ we see that

$$\frac{x^3}{3!}=x\left(\frac{x^2}{\pi^2} + \frac{x^2}{2^2\pi^2}+ \frac{x^2}{3^2\pi^2}+\cdots\right)=x^3\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}$$ or $$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$
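The infinite product itself is easy to test numerically; truncating it at $10^5$ factors already matches $\sin x$ to several digits (a sketch I added; it is of course no justification of the factorization):

```python
import math

def sin_product(x, terms=100000):
    # truncated Euler product: x * prod_{n=1}^{terms} (1 - x^2 / (n^2 pi^2))
    p = x
    for n in range(1, terms + 1):
        p *= 1 - x**2/(n*math.pi)**2
    return p

for x in (0.3, 1.0, 2.5):
    assert abs(sin_product(x) - math.sin(x)) < 1e-4
```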


$\endgroup$
11
  • 45
    $\begingroup$ This is a very cool peek into the way math was done in the 18th century. I love the total kamikaze approach of the initial assumption, which, as the Sandifer paper discusses on p. 6, is obviously not strictly justifiable. Sandifer gives $e^x\sin x$ as an alternative function with the same zeroes. $\endgroup$
    – user13618
    Feb 11, 2012 at 15:47
  • 28
    $\begingroup$ @BenCrowell I think that Euler would argue that $e^x\sin x$ has an infinite-degree zero at $-\infty$, requiring $(1+\frac x\infty)^\infty$ to be appended to the infinite product… which is correct, when interpreted correctly. $\endgroup$ Jan 15, 2016 at 17:11
  • 1
    $\begingroup$ @Ant Append $\lim_{N\to\infty}(1+\frac xN)^N=e^x$ to the infinite product $\endgroup$ Jun 20, 2016 at 23:08
  • 3
    $\begingroup$ @Ant I do not think Euler was known for his rigor. :) $\endgroup$ Jun 21, 2016 at 13:09
  • 1
    $\begingroup$ @Ant This can, like so many things Euler did, be rigorously expressed in the language of Nonstandard Analysis: pick a positive unlimited number $\omega\in {}^*\mathbb R$, i.e. $\omega>x\forall x\in\mathbb R$. Then one can show that $(1+\tfrac{x}{\omega})^{\omega}$ is infinitesimally close to $\exp(x)=\sum_{k\in\mathbb N} \tfrac{1}{k!}x^k$ for all $x\in\mathbb R$, and indeed it does have a zero of infinite order at $x=-\omega$. Which $\omega$ you pick is irrelevant, since the behaviour of the function on the reals is the same up to infinitesimal change. $\endgroup$
    – Hyperplane
    Jul 20, 2021 at 11:48
162
$\begingroup$

Define the following series for $ x > 0 $

$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+\cdots\quad.$$

Now substitute $ x = \sqrt{y}\ $ to arrive at

$$\frac{\sin \sqrt{y}\ }{\sqrt{y}\ } = 1 - \frac{y}{3!}+\frac{y^2}{5!}-\frac{y^3}{7!}+\cdots\quad.$$

If we find the roots of $\frac{\sin \sqrt{y}\ }{\sqrt{y}\ } = 0 $, we find that

$ y = n^2\pi^2\ $ for each nonzero integer $ n $.

With all of this in mind, recall that for a polynomial

$ P(x) = a_{n}x^n + a_{n-1}x^{n-1} +\cdots+a_{1}x + a_{0} $ with roots $ r_{1}, r_{2}, \cdots , r_{n} $

$$\frac{1}{r_{1}} + \frac{1}{r_{2}} + \cdots + \frac{1}{r_{n}} = -\frac{a_{1}}{a_{0}}$$

Treating the above series for $ \frac{\sin \sqrt{y}\ }{\sqrt{y}\ } $ as a polynomial, we see that

$$\frac{1}{1^2\pi^2} + \frac{1}{2^2\pi^2} + \frac{1}{3^2\pi^2} + \cdots = -\frac{-\frac{1}{3!}}{1}$$

then multiplying both sides by $ \pi^2 $ gives the desired series.

$$\frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6}$$
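The finite Vieta relation used above can be checked on a concrete cubic, say $(x-2)(x-3)(x-6)$ (a hypothetical example of mine, in exact rational arithmetic):

```python
from fractions import Fraction

# P(x) = (x-2)(x-3)(x-6) = x^3 - 11x^2 + 36x - 36, so a_1 = 36 and a_0 = -36
roots = [2, 3, 6]
a1, a0 = 36, -36
# sum of reciprocal roots: 1/2 + 1/3 + 1/6 = 1 = -a_1/a_0
assert sum(Fraction(1, r) for r in roots) == Fraction(-a1, a0)
```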

$\endgroup$
5
  • 3
    $\begingroup$ Does the formula you use actually hold for all entire functions defined by power series? Are there conditions that need to be present for this to work? As an entire function is not determined by its roots (e.g. $f(z)$ versus $e^{g(z)}f(z)$), is it clear that making such a change wouldn't affect the answer? Or does this rely on Euler's formula for $\sin x$ as an infinite product? This is certainly an interesting idea, but I fear it could be a misleading coincidence. $\endgroup$
    – Aaron
    Aug 14, 2011 at 1:45
  • 10
    $\begingroup$ This is closely related to the method of Euler already described above by AD. $\endgroup$
    – user13618
    Feb 11, 2012 at 16:14
  • $\begingroup$ @BenCrowell Yes, but slightly different anyway, love this one too.. :) $\endgroup$ Feb 11, 2012 at 17:29
  • $\begingroup$ @Alfredo Z. Crazy that we think exactly alike (⊙o⊙) Must upvote! $\endgroup$
    – Vim
    Feb 20, 2015 at 3:01
  • $\begingroup$ However I am confused with this problem (take a look at my question here if you don't mind): How to rectify that the fundamental theorem of algebra also holds for an infinite polynomial? $\endgroup$
    – Vim
    Feb 20, 2015 at 3:27
146
$\begingroup$

This method apparently was used by Tom Apostol in $1983$. I will outline the main ideas of the proof; the details can be found here or in this presentation (page $27$).

Consider

$$\begin{align} \int_{0}^{1} \int_{0}^{1} \frac{1}{1 - xy} dy dx &= \int_{0}^{1} \int_{0}^{1} \sum_{n \geq 0} (xy)^n dy dx \\ &= \sum_{n \geq 0} \int_{0}^{1} \int_{0}^{1} x^n y^n dy dx \\ &= \sum_{n \geq 1} \frac{1}{n^2} \\ \end{align}$$

You can verify that the left hand side is indeed $\frac{\pi^2}{6}$ by letting $x = u - v$ and $y = v + u.$
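The term-by-term integration step is easy to check: $\int_{0}^{1}\int_{0}^{1}(xy)^n\,dy\,dx = \frac{1}{(n+1)^2}$, so a truncated version of the series should approach $\pi^2/6$ (a quick Python check I added, not part of the proof):

```python
import math

# each term integrates to 1/(n+1)^2 over the unit square, so the double
# integral equals sum_{n>=0} 1/(n+1)^2
N = 10**6
s = sum(1/(n + 1)**2 for n in range(N))
assert abs(s - math.pi**2/6) < 2e-6   # the tail of the series is about 1/N
```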

$\endgroup$
1
95
$\begingroup$

I have two favorite proofs. One is the last proof in Robin Chapman's collection; you really should take a look at it.

The other is a proof that generalizes to the evaluation of $\zeta(2n)$ for all $n$, although I'll do it "Euler-style" to shorten the presentation. The basic idea is that meromorphic functions have infinite partial fraction decompositions that generalize the partial fraction decompositions of rational functions.

The particular function we're interested in is $B(x) = \frac{x}{e^x - 1}$, the exponential generating function of the Bernoulli numbers $B_n$. $B$ is meromorphic with poles at $x = 2 \pi i n$ for $n \in \mathbb{Z} \setminus \{0\}$, and at these poles it has residue $2\pi i n$. It follows that we can write, a la Euler,

$$\frac{x}{e^x - 1} = \sum_{n \in \mathbb{Z} \setminus \{0\}} \frac{2\pi i n}{x - 2 \pi i n} = \sum_{n \in \mathbb{Z} \setminus \{0\}} - \left( \frac{1}{1 - \frac{x}{2\pi i n}} \right).$$

Now we can expand each of the terms on the RHS as a geometric series, again a la Euler, to obtain

$$\frac{x}{e^x - 1} = - \sum_{n \in \mathbb{Z} \setminus \{0\}} \sum_{k \ge 0} \left( \frac{x}{2\pi i n} \right)^k = \sum_{n \ge 1} (-1)^{n+1} \frac{2 \zeta(2n)}{(2\pi )^{2n}} x^{2n}$$

because, after rearranging terms, the sum over odd powers cancels out and the sum over even powers doesn't. (This is one indication of why there is no known closed form for $\zeta(2n+1)$.) Equating terms on both sides, it follows that

$$\frac{B_{2n}}{(2n)!} = (-1)^{n+1} \frac{2 \zeta(2n)}{(2\pi)^{2n}}$$

or

$$\zeta(2n) = (-1)^{n+1} \frac{B_{2n} (2\pi)^{2n}}{2\,(2n)!}$$

as desired. To compute $\zeta(2)$ it suffices to compute that $B_2 = \frac{1}{6}$, which then gives the usual answer.
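A quick check of the closed form $\zeta(2n) = (-1)^{n+1} B_{2n}(2\pi)^{2n}/(2\,(2n)!)$ against the first two even zeta values, using the known values $B_2 = 1/6$ and $B_4 = -1/30$ (a sketch I added):

```python
import math

def zeta_even(n, B2n):
    # zeta(2n) = (-1)^(n+1) * B_{2n} * (2 pi)^(2n) / (2 * (2n)!)
    return (-1)**(n + 1) * B2n * (2*math.pi)**(2*n) / (2*math.factorial(2*n))

assert abs(zeta_even(1, 1/6) - math.pi**2/6) < 1e-12
assert abs(zeta_even(2, -1/30) - math.pi**4/90) < 1e-12
# ...and against a direct partial sum of 1/k^2
assert abs(zeta_even(1, 1/6) - sum(1/k**2 for k in range(1, 10**5))) < 1e-4
```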

$\endgroup$
7
  • 5
    $\begingroup$ This is my favorite proof and the one I was going to post, although Qiaochu's explanation is better than mine would have been. :) Instead, I will just add that there's a nice discussion in Concrete Mathematics (2nd edition, pp 285-286) that relates this argument to proof #7 in Robin's list. $\endgroup$ Oct 30, 2010 at 19:59
  • $\begingroup$ In your last equation, shouldn't it be $(2\pi)^{2n}$? See en.wikipedia.org/wiki/Riemann_zeta_function#Specific_values $\endgroup$ Nov 29, 2013 at 8:54
  • $\begingroup$ @evil: yes, thanks for the correction. Edited. $\endgroup$ Nov 29, 2013 at 22:30
  • $\begingroup$ Actually, your partial fraction decomposition of $\frac{x}{\mathrm e^x - 1}$ does not converge :/ $\endgroup$
    – Célestin
    Aug 15, 2017 at 15:52
  • $\begingroup$ @Phoenix: yes, that's what makes this proof "Euler-style." $\endgroup$ Aug 16, 2017 at 2:12
84
$\begingroup$

Here is one more nice proof, I learned it from Grisha Mikhalkin:

Lemma: Let $Z$ be a complex curve in $\mathbb{C}^2$. Let $R(Z) \subset \mathbb{R}^2$ be the projection of $Z$ onto its real parts and $I(Z)$ the projection onto its imaginary parts. If these projections are both one-to-one, then the area of $R(Z)$ is equal to the area of $I(Z)$.

Proof: There is an obvious map from $R(Z)$ to $I(Z)$, given by lifting $(x_1, x_2) \in R(Z)$ to $(x_1+i y_1, x_2 + i y_2) \in Z$, and then projecting to $(y_1, y_2) \in I(Z)$. We must prove this map has Jacobian $1$. WLOG, translate $(x_1, y_1, x_2, y_2)$ to $(0,0,0,0)$ and let $Z$ obey $\partial z_2/\partial z_1 = a+bi$ near $(0,0)$. To first order, we have $x_2 = a x_1 - b y_1$ and $y_2 = a y_1 + b x_1$. So $y_1 = (a/b) x_1 - (1/b) x_2$ and $y_2 = (a^2 + b^2)/b x_1 - (a/b) x_2$. So the derivative of $(x_1, x_2) \mapsto (y_1, y_2)$ is $\left( \begin{smallmatrix} a/b & - 1/b \\ (a^2 + b^2)/b & -a/b \end{smallmatrix} \right)$ and the Jacobian is $1$. QED

Now, consider the curve $e^{-z_1} + e^{-z_2} = 1$, where $z_1$ and $z_2$ obey the following inequalities: $x_1 \geq 0$, $x_2 \geq 0$, $-\pi \leq y_1 \leq 0$ and $0 \leq y_2 \leq \pi$.

Given a point on $e^{-z_1} + e^{-z_2} = 1$, consider the triangle with vertices at $0$, $e^{-z_1}$ and $e^{-z_1} + e^{-z_2} = 1$. The inequalities on the $y$'s state that the triangle should lie above the real axis; the inequalities on the $x$'s state that the horizontal base should be the longest side.

Projecting onto the $x$ coordinates, we see that the triangle exists if and only if the triangle inequality $e^{-x_1} + e^{-x_2} \geq 1$ is obeyed. So $R(Z)$ is the region under the curve $x_2 = - \log(1-e^{-x_1})$. The area under this curve is $$\int_{0}^{\infty} - \log(1-e^{-x}) dx = \int_{0}^{\infty} \sum \frac{e^{-kx}}{k} dx = \sum \frac{1}{k^2}.$$

Now, project onto the $y$ coordinates. Set $(y_1, y_2) = (-\theta_1, \theta_2)$ for convenience, so the angles of the triangle are $(\theta_1, \theta_2, \pi - \theta_1 - \theta_2)$. The largest angle of a triangle is opposite the largest side, so we want $\theta_1$, $\theta_2 \leq \pi - \theta_1 - \theta_2$, plus the obvious inequalities $\theta_1$, $\theta_2 \geq 0$. So $I(Z)$ is the quadrilateral with vertices at $(0,0)$, $(0, \pi/2)$, $(\pi/3, \pi/3)$ and $(\pi/2, 0)$ and, by elementary geometry, this has area $\pi^2/6$.
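Both areas can be confirmed numerically: a midpoint rule for the area of $R(Z)$, and the shoelace formula for the quadrilateral $I(Z)$ (a Python sketch I added, not part of the argument):

```python
import math

# area of R(Z): integral of -log(1 - e^{-x}) over (0, infinity), midpoint rule
N, X = 200000, 40.0
h = X/N
area_R = sum(-math.log(1 - math.exp(-(i + 0.5)*h)) * h for i in range(N))
assert abs(area_R - math.pi**2/6) < 1e-3

def shoelace(pts):
    # standard shoelace formula for the area of a simple polygon
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1*y2 - x2*y1
    return abs(s)/2

# area of I(Z): the quadrilateral with the vertices listed above
quad = [(0, 0), (math.pi/2, 0), (math.pi/3, math.pi/3), (0, math.pi/2)]
assert abs(shoelace(quad) - math.pi**2/6) < 1e-12
```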

$\endgroup$
5
  • 1
    $\begingroup$ Very nice indeed! (Although it took me a while to understand that the triangle lives in its own complex plane, not related to the $z_1$ and $z_2$ planes.) But I think it should be $x_1\ge 0$, $x_2\ge 0$, $e^{-x_1}+e^{-x_2} \le 1$, and the quadrilateral should have vertices at $(0,0)$, $(0,\pi/2)$, $(\pi/3,\pi/3)$ and $(\pi/2,0)$. $\endgroup$ Oct 31, 2010 at 9:35
  • $\begingroup$ Thanks for the corrections! I still think $e^{- x_1} + e^{- x_2} \geq 1$ is right, but I've fixed the others. $\endgroup$ Oct 31, 2010 at 12:12
  • $\begingroup$ Ah, you're right about that one, of course. Sorry. $\endgroup$ Oct 31, 2010 at 14:44
  • 2
    $\begingroup$ I have another comment too, which I posted as a separate answer because it was too long, and also because I wanted to include an image: math.stackexchange.com/questions/8337/… $\endgroup$ Nov 1, 2010 at 12:37
  • $\begingroup$ @DavidSpeyer , do you think your method (or a similar) can be applied here: math.stackexchange.com/questions/1284161/… ? $\endgroup$
    – VividD
    May 24, 2015 at 8:00
77
+100
$\begingroup$

I'll post the one I know, since it is Euler's, and it is quite easy and stays in $\mathbb{R}$. (I'm guessing Euler didn't have tools like residues back then.)

Let

$$s = {\sin ^{ - 1}}x$$

Then

$$\int\limits_0^{\frac{\pi }{2}} {sds} = \frac{{{\pi ^2}}}{8}$$

But then

$$\int\limits_0^1 {\frac{{{{\sin }^{ - 1}}x}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{{\pi ^2}}}{8}$$

Since

$${\sin ^{ - 1}}x = \int {\frac{{dx}}{{\sqrt {1 - {x^2}} }}} = x + \frac{1}{2}\frac{{{x^3}}}{3} + \frac{{1 \cdot 3}}{{2 \cdot 4}}\frac{{{x^5}}}{5} + \frac{{1 \cdot 3 \cdot 5}}{{2 \cdot 4 \cdot 6}}\frac{{{x^7}}}{7} + \cdots $$

We have

$$\int\limits_0^1 \frac{\sin^{-1}x}{\sqrt{1-x^2}}\,dx = \int\limits_0^1 \left( x + \frac{1}{2}\frac{x^3}{3} + \frac{1 \cdot 3}{2 \cdot 4}\frac{x^5}{5} + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6}\frac{x^7}{7} + \cdots \right)\frac{dx}{\sqrt{1-x^2}}$$

But

$$\int\limits_0^1 {\frac{{{x^{2n + 1}}}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{2n}}{{2n + 1}}\int\limits_0^1 {\frac{{{x^{2n - 1}}}}{{\sqrt {1 - {x^2}} }}dx} $$

which yields

$$\int\limits_0^1 {\frac{{{x^{2n + 1}}}}{{\sqrt {1 - {x^2}} }}dx} = \frac{{\left( {2n} \right)!!}}{{\left( {2n + 1} \right)!!}}$$

since all powers are odd.

This ultimately produces:

$$\frac{{{\pi ^2}}}{8} = 1 + \frac{1}{2}\frac{1}{3}\left( {\frac{2}{3}} \right) + \frac{{1 \cdot 3}}{{2 \cdot 4}}\frac{1}{5}\left( {\frac{{2 \cdot 4}}{{3 \cdot 5}}} \right) + \frac{{1 \cdot 3 \cdot 5}}{{2 \cdot 4 \cdot 6}}\frac{1}{7}\left( {\frac{{2 \cdot 4 \cdot 6}}{{3 \cdot 5 \cdot 7}}} \right) \cdots $$

$$\frac{{{\pi ^2}}}{8} = 1 + \frac{1}{{{3^2}}} + \frac{1}{{{5^2}}} + \frac{1}{{{7^2}}} + \cdots $$

Let

$$1 + \frac{1}{{{2^2}}} + \frac{1}{{{3^2}}} + \frac{1}{{{4^2}}} + \cdots = \omega $$

Then

$$\frac{1}{{{2^2}}} + \frac{1}{{{4^2}}} + \frac{1}{{{6^2}}} + \frac{1}{{{8^2}}} + \cdots = \frac{\omega }{4}$$

Which means

$$\frac{\omega }{4} + \frac{{{\pi ^2}}}{8} = \omega $$

or

$$\omega = \frac{{{\pi ^2}}}{6}$$
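Numerically, the products of coefficients above do telescope to $1/(2n+1)^2$; a short Python check with running double-factorial ratios (my addition, not part of Euler's argument):

```python
import math

s, a, b = 0.0, 1.0, 1.0    # a = (2n-1)!!/(2n)!!, b = (2n)!!/(2n+1)!!
for n in range(2000):
    if n > 0:
        a *= (2*n - 1)/(2*n)
        b *= (2*n)/(2*n + 1)
    s += a/(2*n + 1)*b      # each term equals 1/(2n+1)^2
assert abs(s - math.pi**2/8) < 1e-3
```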

$\endgroup$
1
  • 6
    $\begingroup$ @Downvote This is how Euler did it, if you're interested. $\endgroup$
    – Pedro
    Feb 12, 2012 at 1:12
70
$\begingroup$

The most recent issue of The American Mathematical Monthly (August-September 2011, pp. 641-643) has a new proof by Luigi Pace based on elementary probability. Here's the argument.

Let $X_1$ and $X_2$ be independent, identically distributed standard half-Cauchy random variables. Thus their common pdf is $p(x) = \frac{2}{\pi (1+x^2)}$ for $x > 0$.

Let $Y = X_1/X_2$. Then the pdf of $Y$ is, for $y > 0$, $$p_Y(y) = \int_0^{\infty} x p_{X_1} (xy) p_{X_2}(x) dx = \frac{4}{\pi^2} \int_0^\infty \frac{x}{(1+x^2 y^2)(1+x^2)}dx$$ $$=\frac{2}{\pi^2 (y^2-1)} \left[\log \left( \frac{1+x^2 y^2}{1+x^2}\right) \right]_{x=0}^{\infty} = \frac{2}{\pi^2} \frac{\log(y^2)}{y^2-1} = \frac{4}{\pi^2} \frac{\log(y)}{y^2-1}.$$

Since $X_1$ and $X_2$ are equally likely to be the larger of the two, we have $P(Y < 1) = 1/2$. Thus $$\frac{1}{2} = \int_0^1 \frac{4}{\pi^2} \frac{\log(y)}{y^2-1} dy.$$ This is equivalent to $$\frac{\pi^2}{8} = \int_0^1 \frac{-\log(y)}{1-y^2} dy = -\int_0^1 \log(y) (1+y^2+y^4 + \cdots) dy = \sum_{k=0}^\infty \frac{1}{(2k+1)^2},$$ which, as others have pointed out, implies $\zeta(2) = \pi^2/6$.
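Both the symmetry argument and the pdf computation can be checked numerically, sampling half-Cauchy variables as $|\tan(\pi(U-\tfrac12))|$ for uniform $U$ and applying a midpoint rule to the integral (a sketch I added, not part of the argument):

```python
import math
import random

random.seed(0)

def half_cauchy():
    # |tan(pi*(U - 1/2))| is standard half-Cauchy for uniform U on (0, 1)
    return abs(math.tan(math.pi*(random.random() - 0.5)))

N = 200000
count = sum(half_cauchy() < half_cauchy() for _ in range(N))
assert abs(count/N - 0.5) < 0.01    # P(X1 < X2) = 1/2 by symmetry

# midpoint rule for int_0^1 (4/pi^2) log(y)/(y^2 - 1) dy, which should be 1/2
M = 100000
h = 1.0/M
integral = sum(4/math.pi**2 * math.log((i + 0.5)*h) / (((i + 0.5)*h)**2 - 1) * h
               for i in range(M))
assert abs(integral - 0.5) < 1e-3
```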

$\endgroup$
0
65
$\begingroup$

This is not really an answer, but rather a long comment prompted by David Speyer's answer. The proof that David gives seems to be the one in How to compute $\sum 1/n^2$ by solving triangles by Mikael Passare, although that paper uses a slightly different way of seeing that the area of the region $U_0$ (in Passare's notation) bounded by the positive axes and the curve $e^{-x}+e^{-y}=1$, $$\int_0^{\infty} -\ln(1-e^{-x}) dx,$$ is equal to $\sum_{n\ge 1} \frac{1}{n^2}$.

This brings me to what I really wanted to mention, namely another curious way to see why $U_0$ has that area; I learned this from Johan Wästlund. Consider the region $D_N$ illustrated below for $N=8$:

A shape with area = sum of reciprocal squares

Although it's not immediately obvious, the area of $D_N$ is $\sum_{n=1}^N \frac{1}{n^2}$. Proof: The area of $D_1$ is 1. To get from $D_N$ to $D_{N+1}$ one removes the boxes along the top diagonal, and adds a new leftmost column of rectangles of width $1/(N+1)$ and heights $1/1,1/2,\ldots,1/N$, plus a new bottom row which is the "transpose" of the new column, plus a square of side $1/(N+1)$ in the bottom left corner. The $k$th rectangle from the top in the new column and the $k$th rectangle from the left in the new row (not counting the square) have a combined area which exactly matches the $k$th box in the removed diagonal: $$ \frac{1}{k} \frac{1}{N+1} + \frac{1}{N+1} \frac{1}{N+1-k} = \frac{1}{k} \frac{1}{N+1-k}. $$ Thus the area added in the process is just that of the square, $1/(N+1)^2$. Q.E.D.
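The box-matching identity in the proof is exact; it can be verified for many $N$ and $k$ with rational arithmetic (a quick check I added):

```python
from fractions import Fraction

# the k-th rectangle pair in the new column/row exactly matches
# the k-th removed diagonal box
for N in range(1, 40):
    for k in range(1, N + 1):
        pair = Fraction(1, k*(N + 1)) + Fraction(1, (N + 1)*(N + 1 - k))
        assert pair == Fraction(1, k*(N + 1 - k))
```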

(Apparently this shape somehow comes up in connection with the "random assignment problem", where there's an expected value of something which turns out to be $\sum_{n=1}^N \frac{1}{n^2}$.)

Now place $D_N$ in the first quadrant, with the lower left corner at the origin. Letting $N\to\infty$ gives nothing but the region $U_0$: for large $N$ and for $0<\alpha<1$, the upper corner of column number $\lceil \alpha N \rceil$ in $D_N$ lies at $$ (x,y) = \left( \sum_{n=\lceil (1-\alpha) N \rceil}^N \frac{1}{n}, \sum_{n=\lceil \alpha N \rceil}^N \frac{1}{n} \right) \sim \left(\ln\frac{1}{1-\alpha}, \ln\frac{1}{\alpha}\right),$$ hence (in the limit) on the curve $e^{-x}+e^{-y}=1$.

$\endgroup$
7
  • 10
    $\begingroup$ That's a neat observation. $\endgroup$ Nov 1, 2010 at 14:53
  • 6
    $\begingroup$ Tracing through the proof that $D_N = \sum_{d=1}^{N} 1/d^2$, I discovered the following curiosity: If you look at all the rectangle in $D_N$ of the form $1/j \times 1/k$ with $GCD(j,k)=d$, their total area is $1/d^2$. In particular, if you look at the rectangles of the form $1/j \times 1/k$ with $GCD(x,y)=1$, in the limit they are spread everywhere across the region $e^{-x} + e^{-y} \geq 1$, with density equal to the probability that two randomly chosen integers are relatively prime, namely $6/\pi^2$. $\endgroup$ Aug 22, 2014 at 0:46
  • $\begingroup$ @DavidSpeyer: That's also a neat observation! :-) $\endgroup$ Aug 22, 2014 at 9:43
  • 3
    $\begingroup$ @VividD: Feel free to try, I'm not going to stop you! ;-) $\endgroup$ May 24, 2015 at 8:18
  • 3
    $\begingroup$ Kiran Kedlaya pointed out to the following to me: In the interval $[0,1]$, consider all fractions $p/q$ with $q \leq N$. For example, when $N=3$, look at $(0/1, 1/3, 1/2, 2/3, 1/1)$. Look at the blocks of $D_3$ of size $1/q \times 1/q'$ with $GCD(q,q')=1$. The pairs $(q,q')$ which occur are precisely the successive denominators. For example, $1 \times 1/3$, $1/3 \times 1/2$, $1/2 \times 1/3$, $1/3 \times 1$ in $D_3$. We have $1/(q q') = (p/q)-(p'/q')$ (this is a well known property of Farey fractions) so the areas add up to $1$ because this is the length of the interval. $\endgroup$ Jul 8, 2016 at 19:35
64
$\begingroup$

Note that $$ \frac{\pi^2}{\sin^2\pi z}=\sum_{n=-\infty}^{\infty}\frac{1}{(z-n)^2} $$ from complex analysis and that both sides are analytic everywhere except at $z=0,\pm 1,\pm 2,\cdots$. Then one can obtain $$ \frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}=\sum_{n=1}^{\infty}\frac{1}{(z-n)^2}+\sum_{n=1}^{\infty}\frac{1}{(z+n)^2}. $$ Now the right-hand side is analytic at $z=0$ and hence $$\lim_{z\to 0}\left(\frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}\right)=2\sum_{n=1}^{\infty}\frac{1}{n^2}.$$ Note $$\lim_{z\to 0}\left(\frac{\pi^2}{\sin^2\pi z}-\frac{1}{z^2}\right)=\frac{\pi^2}{3}.$$ Thus $$\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}.$$
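Both displayed facts are easy to confirm numerically, truncating the partial-fraction sum and evaluating the limit at a small $z$ (a sketch I added, not part of the proof):

```python
import math

# truncation of sum_n 1/(z - n)^2 against pi^2 / sin^2(pi z)
z, N = 0.3, 100000
s = sum(1/(z - n)**2 for n in range(-N, N + 1))
assert abs(s - math.pi**2/math.sin(math.pi*z)**2) < 1e-4

# the limit of pi^2/sin^2(pi z) - 1/z^2 as z -> 0 is pi^2/3
z = 1e-4
val = math.pi**2/math.sin(math.pi*z)**2 - 1/z**2
assert abs(val - math.pi**2/3) < 1e-4
```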

$\endgroup$
4
  • 3
    $\begingroup$ (+1)Nice! The first equality can be derived without complex analysis as well! $\endgroup$
    – L. F.
    Mar 8, 2013 at 14:08
  • $\begingroup$ @L.F. Thank you for telling me the link. It is a short proof without using complex analysis and I like it. $\endgroup$
    – xpaul
    Apr 21, 2013 at 17:35
  • $\begingroup$ (+1) I was considering posting a similar answer based on the proof in this answer of $\sum\limits_{n=-\infty}^\infty\frac1{z+n}=\pi\cot(\pi z)$, then differentiating. $\endgroup$
    – robjohn
    Dec 21, 2014 at 17:42
  • 2
    $\begingroup$ I was about to comment "Curious though how one proves the Weierstrass form of $\Gamma$ without complex analysis" but it seems the usual proof of Weierstrass factorization for entire functions can be adapted to real analytic functions. $\endgroup$ Oct 28, 2017 at 10:21
64
$\begingroup$

Just as a curiosity, here is a one-line real-analytic proof I found by combining different ideas from this thread and this question:

$$\begin{eqnarray*}\zeta(2)&=&\frac{4}{3}\sum_{n=0}^{+\infty}\frac{1}{(2n+1)^2}=\frac{4}{3}\int_{0}^{1}\frac{\log y}{y^2-1}dy\\&=&\frac{2}{3}\int_{0}^{1}\frac{1}{y^2-1}\left[\log\left(\frac{1+x^2 y^2}{1+x^2}\right)\right]_{x=0}^{+\infty}dy\\&=&\frac{4}{3}\int_{0}^{1}\int_{0}^{+\infty}\frac{x}{(1+x^2)(1+x^2 y^2)}dx\,dy\\&=&\frac{4}{3}\iint_{0<z<x}\frac{dx\,dz}{(1+x^2)(1+z^2)}=\frac{4}{3}\cdot\frac{1}{2}\cdot\frac{\pi}{2}\cdot\frac{\pi}{2}=\frac{\pi^2}{6},\end{eqnarray*}$$ where the substitution $z=xy$ (for fixed $x$) maps $0<y<1$ onto $0<z<x$, and the integral over the region $0<z<x$ is half the integral over the whole quarter-plane by the symmetry $x\leftrightarrow z$.


Update. By collecting pieces, I have another nice proof. By Euler's acceleration method or just an iterated trick like my $(1)$ here we get: $$ \zeta(2) = \sum_{n\geq 1}\frac{1}{n^2} = \color{red}{\sum_{n\geq 1}\frac{3}{n^2\binom{2n}{n}}}\tag{A}$$ and the last series converges pretty fast. Then we may notice that the last series comes out from a squared arcsine. That just gives another proof of $ \zeta(2)=\frac{\pi^2}{6}$.
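The accelerated series $(\mathrm{A})$ indeed converges very fast; forty terms already give full double precision (a quick check I added, using `math.comb` from Python 3.8+):

```python
import math

# zeta(2) = sum_{n>=1} 3/(n^2 * C(2n, n)); the terms decay roughly like 4^{-n}
s = sum(3/(n**2 * math.comb(2*n, n)) for n in range(1, 40))
assert abs(s - math.pi**2/6) < 1e-12
```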


A proof of the identity $$\sum_{n\geq 0}\frac{1}{(2n+1)^2}=\frac{\pi}{2}\sum_{k\geq 0}\frac{(-1)^k}{2k+1}=\frac{\pi}{2}\cdot\frac{\pi}{4}$$ is also hidden in tired's answer here. In short, the integral $$ I=\int_{-\infty}^{\infty}e^y\left(\frac{e^y-1}{y^2}-\frac{1}{y}\right)\frac{1}{e^{2y}+1}\,dy $$ is clearly real, so the imaginary part of the sum of the residues of the integrand has to be zero.


Still another way (and a very efficient one) is to exploit the reflection formula for the trigamma function: $$\psi'(1-z)+\psi'(z)=\frac{\pi^2}{\sin^2(\pi z)}$$ immediately leads to: $$\frac{\pi^2}{2}=\psi'\left(\frac{1}{2}\right)=\sum_{n\geq 0}\frac{1}{\left(n+\frac{1}{2}\right)^2}=4\sum_{n\geq 0}\frac{1}{(2n+1)^2}=3\,\zeta(2).$$


2018 update. We may consider that $\mathcal{J}=\int_{0}^{+\infty}\frac{\arctan x}{1+x^2}\,dx = \left[\frac{1}{2}\arctan^2 x\right]_0^{+\infty}=\frac{\pi^2}{8}$.
On the other hand, by Feynman's trick or Fubini's theorem $$ \mathcal{J}=\int_{0}^{+\infty}\int_{0}^{1}\frac{x}{(1+x^2)(1+a^2 x^2)}\,da\,dx = \int_{0}^{1}\frac{-\log a}{1-a^2}\,da $$ and since $\int_{0}^{1}-\log(x)x^n\,dx = \frac{1}{(n+1)^2}$, by expanding $\frac{1}{1-a^2}$ as a geometric series we have $$ \frac{\pi^2}{8}=\mathcal{J}=\sum_{n\geq 0}\frac{1}{(2n+1)^2}. $$

$\endgroup$
2
  • 2
    $\begingroup$ Thanks to @genepeer, I just know that the argument is almost the same of the one presented by Daniele Ritelli and discussed here: euler.genepeer.com/?p=212. $\endgroup$ Dec 25, 2013 at 9:17
  • 3
    $\begingroup$ Four lines now, just to make it more readable :) $\endgroup$ Jul 12, 2014 at 23:24
60
$\begingroup$

Here is a complex-analytic proof.

For $z\in D=\mathbb{C}\setminus\{0,1\}$, let

$$R(z)=\sum\frac{1}{\log^2 z}$$

where the sum is taken over all branches of the logarithm. Each point in $D$ has a neighbourhood on which the branches of $\log(z)$ are analytic. Since the series converges uniformly away from $z=1$, $R(z)$ is analytic on $D$.

Now a few observations:

(i) Each term of the series tends to $0$ as $z\to0$. Thanks to the uniform convergence this implies that the singularity at $z=0$ is removable and we can set $R(0)=0$.

(ii) The only singularity of $R$ is a double pole at $z=1$ due to the contribution of the principal branch of $\log z$. Moreover, $\lim_{z\to1}(z-1)^2R(z)=1$.

(iii) $R(1/z)=R(z)$.

By (i) and (iii) $R$ is meromorphic on the extended complex plane, therefore it is rational. By (ii) the denominator of $R(z)$ is $(z-1)^2$. Since $R(0)=R(\infty)=0$, the numerator has the form $az$. Then (ii) implies $a=1$, so that $$R(z)=\frac{z}{(z-1)^2}.$$

Now, setting $z=e^{2\pi i w}$ yields $$\sum\limits_{n=-\infty}^{\infty}\frac{1}{(w-n)^2}=\frac{\pi^2}{\sin^2(\pi w)}$$ which implies that $$\sum\limits_{k=0}^{\infty}\frac{1}{(2k+1)^2}=\frac{\pi^2}{8},$$ and the identity $\zeta(2)=\pi^2/6$ follows.

The proof is due to T. Marshall (American Mathematical Monthly, Vol. 117(4), 2010, P. 352).
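The partial-fraction identity obtained along the way lends itself to a direct numerical check (the truncation point $N$ and the test point $w$ below are arbitrary; the discarded tail is of order $2/N$):

```python
import math

# Compare the truncated sum over n in [-N, N] of 1/(w-n)^2 with
# pi^2 / sin^2(pi w) at a non-integer test point w.
w = 0.3
N = 100_000
lhs = sum(1.0 / (w - n) ** 2 for n in range(-N, N + 1))
rhs = math.pi ** 2 / math.sin(math.pi * w) ** 2
```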

$\endgroup$
1
  • $\begingroup$ Cool, I never thought about working around the branching problem of $\log$ like this! $\endgroup$ Oct 28, 2017 at 10:18
43
$\begingroup$

In response to a request here: Compute $\oint z^{-2k} \cot (\pi z) dz$ where the integral is taken around a square of side $2N+1$. Routine estimates show that the integral goes to $0$ as $N \to \infty$.

Now, let's compute the integral by residues. At $z=0$, the residue is $\pi^{2k-1} q$, where $q$ is some rational number coming from the power series for $\cot$. For example, if $k=1$, then we get $- \pi/3$.

At $z=m$, for each nonzero integer $m$, the residue is $m^{-2k} \pi^{-1}$. So $$\pi^{-1} \lim_{N \to \infty} \sum_{\substack{-N \leq m \leq N \\ m \neq 0}} m^{-2k} + \pi^{2k-1} q=0$$ or $$\sum_{m=1}^{\infty} m^{-2k} = -\pi^{2k} q/2$$ as desired. In particular, $\sum m^{-2} = - \pi^2(-1/3)/2 = \pi^2/6$.

Common variants: We can replace $\cot$ with $\tan$, with $1/(e^{2 \pi i z}-1)$, or with similar formulas.

This is reminiscent of Qiaochu's proof but, rather than actually establishing the relation $\pi^{-1} \cot(\pi z) = \sum (z-n)^{-1}$, one simply establishes that both sides contribute the same residues to a certain integral.

$\endgroup$
36
$\begingroup$

Another variation. We make use of the following identity (proved at the bottom of this note):

$$\sum_{k=1}^n \cot^2 \left( \frac {2k-1}{2n} \frac{\pi}{2} \right) = 2n^2 - n. \quad (1)$$

Now $1/\theta > \cot \theta > 1/\theta - \theta/3 > 0$ for $0< \theta< \pi/2 < \sqrt{3}$ and so $$ 1/\theta^2 - 2/3 < \cot^2 \theta < 1/\theta^2. \quad (2)$$

With $\theta_k = (2k-1)\pi/4n,$ summing the inequalities $(2)$ from $k=1$ to $n$ we obtain

$$2n^2 - n < \sum_{k=1}^n \left( \frac{2n}{2k-1}\frac{2}{\pi} \right)^2 < 2n^2 - n + 2n/3.$$

Hence

$$\frac{\pi^2}{16}\frac{2n^2-n}{n^2} < \sum_{k=1}^n \frac{1}{(2k-1)^2} < \frac{\pi^2}{16}\frac{2n^2-n/3}{n^2}.$$

Taking the limit as $n \rightarrow \infty$ we obtain

$$ \sum_{k=1}^\infty \frac{1}{(2k-1)^2} = \frac{\pi^2}{8},$$

from which the result for $\sum_{k=1}^\infty 1/k^2$ follows easily.

To prove $(1)$ we note that

$$ \cos 2n\theta = \text{Re}(\cos\theta + i \sin\theta)^{2n} = \sum_{k=0}^n (-1)^k {2n \choose 2k}\cos^{2n-2k}\theta\sin^{2k}\theta.$$

Therefore

$$\frac{\cos 2n\theta}{\sin^{2n}\theta} = \sum_{k=0}^n (-1)^k {2n \choose 2k}\cot^{2n-2k}\theta.$$

And so setting $x = \cot^2\theta$ we note that

$$f(x) = \sum_{k=0}^n (-1)^k {2n \choose 2k}x^{n-k}$$

has roots $x_j = \cot^2 (2j-1)\pi/4n,$ for $j=1,2,\ldots,n,$ from which $(1)$ follows since ${2n \choose 2n-2} = 2n^2-n.$
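Identity $(1)$ can be spot-checked numerically before trusting it (the value of $n$ below is an arbitrary choice):

```python
import math

# sum_{k=1}^n cot^2((2k-1)/(2n) * pi/2) should equal 2n^2 - n exactly.
n = 500
lhs = sum(1.0 / math.tan((2 * k - 1) * math.pi / (4 * n)) ** 2
          for k in range(1, n + 1))
rhs = 2 * n ** 2 - n
```

For $n=1$ this is just $\cot^2(\pi/4)=1=2\cdot1^2-1$.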

$\endgroup$
34
$\begingroup$

A short way to get the sum is to use Fourier's expansion of $x^2$ in $x\in(-\pi,\pi)$. Recall that Fourier's expansion of $f(x)$ is $$ \tilde{f}(x)=\frac{1}{2}a_0+\sum_{n=1}^\infty(a_n\cos nx+b_n\sin nx), \quad x\in(-\pi,\pi)$$ where $$ a_0=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\;dx, \quad a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\; dx, \quad b_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\; dx, \quad n=1,2,3,\cdots $$ and $$ \tilde{f}(x)=\frac{f(x-0)+f(x+0)}{2}. $$ Easy calculation shows $$ x^2=\frac{\pi^2}{3}+4\sum_{n=1}^\infty(-1)^n\frac{\cos nx}{n^2}, \quad x\in[-\pi,\pi]. $$ Letting $x=\pi$ in both sides gives $$ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$

Another way to get the sum is to use Parseval's Identity for Fourier's expansion of $x$ in $(-\pi,\pi)$. Recall that Parseval's Identity is $$ \frac{1}{\pi}\int_{-\pi}^{\pi}|f(x)|^2dx=\frac{1}{2}a_0^2+\sum_{n=1}^\infty(a_n^2+b_n^2). $$ Note $$ x=2\sum_{n=1}^\infty(-1)^{n+1}\frac{\sin nx}{n}, \quad x\in(-\pi,\pi). $$ Using Parseval's Identity gives $$ 4\sum_{n=1}^\infty\frac{1}{n^2}=\frac{1}{\pi}\int_{-\pi}^{\pi}|x|^2dx=\frac{2\pi^2}{3}$$ or $$ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}.$$
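The cosine expansion of $x^2$ can be sanity-checked numerically at an arbitrary point (the truncation level below is an ad hoc choice; the tail of the series is bounded by $4\sum_{n>N}1/n^2\approx 4/N$):

```python
import math

# Partial sum of pi^2/3 + 4 * sum_{n>=1} (-1)^n cos(n x)/n^2, which
# should converge to x^2 for x in [-pi, pi].
x = 1.0
N = 100_000
approx = math.pi ** 2 / 3 + 4.0 * sum((-1) ** n * math.cos(n * x) / n ** 2
                                      for n in range(1, N + 1))
```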

$\endgroup$
0
$\begingroup$

Theorem: Let $\lbrace a_n\rbrace$ be a nonincreasing sequence of positive numbers such that $\sum a_n^2$ converges. Then both series $$s:=\sum_{n=0}^\infty(-1)^na_n,\,\delta_k:=\sum_{n=0}^\infty a_na_{n+k},\,k\in\mathbb N $$ converge. Moreover $\Delta:=\sum_{k=1}^\infty(-1)^{k-1}\delta_k$ also converges, and we have the formula $$\sum_{n=0}^\infty a_n^2=s^2+2\Delta.$$ Proof: Konrad Knopp, Theory and Application of Infinite Series, page 323.

If we let $a_n=\frac1{2n+1}$ in this theorem, then we have $$s=\sum_{n=0}^\infty(-1)^n\frac1{2n+1}=\frac\pi 4$$ $$\delta_k=\sum_{n=0}^\infty\frac1{(2n+1)(2n+2k+1)}=\frac1{2k}\sum_{n=0}^\infty\left(\frac1{2n+1}-\frac1{2n+2k+1}\right)=\frac{1}{2k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)$$ Hence, $$\sum_{n=0}^\infty\frac1{(2n+1)^2}=\left(\frac\pi 4\right)^2+\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k}\left(1+\frac1 3+...+\frac1 {2k-1}\right)=\frac{\pi^2}{16}+\frac{\pi^2}{16}=\frac{\pi^2}{8}$$ and now $$\zeta(2)=\frac4 3\sum_{n=0}^\infty\frac1{(2n+1)^2}=\frac{\pi^2}6.$$
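Knopp's formula can be checked numerically for this particular sequence; all truncation points below are ad hoc, and the $\delta_k$ are computed from the closed form above via a running odd harmonic sum:

```python
import math

# a_n = 1/(2n+1): check sum a_n^2 = s^2 + 2*Delta with plain partial sums.
N = 100_000
s = sum((-1) ** n / (2 * n + 1) for n in range(N))        # ~ pi/4
sum_sq = sum(1.0 / (2 * n + 1) ** 2 for n in range(N))    # ~ pi^2/8

# delta_k = (1 + 1/3 + ... + 1/(2k-1)) / (2k)
Delta = 0.0
odd_harmonic = 0.0
for k in range(1, N + 1):
    odd_harmonic += 1.0 / (2 * k - 1)
    Delta += (-1) ** (k - 1) * odd_harmonic / (2 * k)
```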

$\endgroup$
25
$\begingroup$

At risk of contravening group etiquette w.r.t. old questions, I'm going to take this opportunity to post my own version. I don't see it in a transparent form in any of the other posts or in Robin Chapman's article, so I invite anyone to point out the correspondence if it's there. I like this argument because it's physical and can be followed without mathematical formalism.

We start by assuming the well-known series for $\pi/4$ in alternating odd fractions. We can recognize it as the sum of the Fourier series of the square wave, evaluated at the origin:

$\cos(x) - \cos(3x)/3 + \cos(5x)/5 - \cdots$

It is easily argued on physical grounds that this adds up to a square wave; and that the height of the wave is $\pi/4$ follows from the alternating sequence already mentioned. Now we are going to interpret this wave as an electric current flowing through a resistor. There are two ways of calculating the power and they must agree. First, we can just take the square of the amplitude; in the case of this square wave, this is obviously a constant and it is just $\pi^2/16$. The other way is to add up the power of the sinusoidal components. These are the squares of the individual amplitudes:

$1 + 1/9 + 1/25 + \cdots \overset{?}{=} \pi^2/16\,?$

No, not quite; I've been a little sloppy and neglected to mention that when calculating the power of a sine wave, you use its RMS amplitude and not its peak amplitude. This introduces a factor of two; so in fact the series as written adds up to $\pi^2/8.$ This isn't quite what we want; remember we've just added up the odd fractions. But the even fractions contribute in a rather picturesque way; it's easy to group them by powers of two into a geometric sum leading to the desired result of $\,\,\pi^2/6.$

$\endgroup$
3
  • 1
    $\begingroup$ At the risk of being rude, you've used "It is easily argued on physical grounds" in place of a theorem on pointwise convergence of fourier series, and a particular physical manifestation/application of Plancherel's theorem. You gain "intuition" for why the result is plausible (assuming you have the corresponding physics background), but you lose both rigor and clarity. The problem with making a physical argument for any mathematical fact is that even if you know that certain calculations work for physically relevant examples, it's hard to say what condition "physically relevant" imposes. $\endgroup$
    – Aaron
    Aug 14, 2011 at 1:14
  • 2
    $\begingroup$ Thanks for the feedback. I'm understanding that my argument wasn't so sketchy that you weren't able to fill in the details as necessary. I am blown away by the mathematical sophistication of the people who post here but I still wish I would see more arguments made the way I do. $\endgroup$ Aug 14, 2011 at 1:51
  • 2
    $\begingroup$ Well, you lucked out that I had seen the argument before (though not phrased with such language), and I remembered enough physics to understand what you were doing. I appreciate how you feel: technical arguments can be difficult to digest and sometimes offer no intuition about the result. A heuristic explanation, even if it isn't fully rigorous, is often a wonderful addition. However, for mathematics, the heuristic cannot be everything, as the mathematical battleground is littered with the bodies of proofs which are simple, intuitive, and wrong. $\endgroup$
    – Aaron
    Aug 14, 2011 at 2:08
24
$\begingroup$

I like this one:

Let $f\in Lip(S^{1})$, where $Lip(S^{1})$ is the space of Lipschitz functions on $S^{1}$. Then for each $k\in \mathbb{Z}$ the number (the $k$-th Fourier coefficient of $f$) $$\hat{f}(k)=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta)e^{-ik\theta}\,d\theta$$ is well defined.

By the inversion formula, we have $$f(\theta)=\sum_{k\in\mathbb{Z}}\hat{f}(k)e^{ik\theta}.$$

Now take $f(\theta)=|\theta|$, $\theta\in [-\pi,\pi]$. Note that $f\in Lip(S^{1})$.

We have $$ \hat{f}(k) = \begin{cases} \dfrac{\pi}{2} &\text{if } k=0 \\ 0 &\text{if } k\neq 0 \text{ and } k \text{ is even} \\ -\dfrac{2}{k^{2}\pi} &\text{if } k \text{ is odd} \end{cases} $$

Using the inversion formula, we have at $\theta=0$ that $$0=\sum_{k\in\mathbb{Z}}\hat{f}(k).$$

Then,

\begin{eqnarray} 0 &=& \frac{\pi}{2}-\sum_{k\in\mathbb{Z},\ k\ \text{odd}}\frac{2}{k^{2}\pi} \nonumber \\ &=& \frac{\pi}{2}-\sum_{k\in\mathbb{N},\ k\ \text{odd}}\frac{4}{k^{2}\pi} \nonumber \end{eqnarray}

This implies $$\sum_{k\in\mathbb{N},\ k\ \text{odd}}\frac{1}{k^{2}} =\frac{\pi^{2}}{8}$$

If we multiply the last equation by $\frac{1}{2^{2n}}$ with $n=0,1,2,\ldots$, we get $$\sum_{k\in\mathbb{N},\ k\ \text{odd}}\frac{1}{(2^{n}k)^{2}} =\frac{\pi^{2}}{2^{2n}\cdot 8}$$

Now $$\sum_{n=0}^{\infty}\left(\sum_{k\in\mathbb{N},\ k\ \text{odd}}\frac{1}{(2^{n}k)^{2}}\right) =\sum_{n=0}^{\infty}\frac{\pi^{2}}{2^{2n}\cdot 8}$$

The sum on the left is equal to $\sum_{k\in\mathbb{N}}\frac{1}{k^{2}}$, since every natural number can be written uniquely as $2^{n}k$ with $k$ odd.

The sum on the right is a geometric series, equal to $\frac{\pi^{2}}{8}\cdot\frac{1}{1-\frac{1}{4}}=\frac{\pi^{2}}{6}$.

So we conclude: $$\sum_{k\in\mathbb{N}}\frac{1}{k^{2}}=\frac{\pi^{2}}{6}$$

Note: This is problem 9, page 208, from the book of Michael Eugene Taylor, Partial Differential Equations, Volume 1.

$\endgroup$
23
+25
$\begingroup$

Here's a proof based upon periods and the fact that $\zeta(2)$ and $\frac{\pi^2}{6}$ are periods forming an accessible identity.

The definition of periods below and the proof are from the fascinating introductory survey paper about periods by M. Kontsevich and D. Zagier.

Periods are defined as complex numbers whose real and imaginary parts are values of absolutely convergent integrals of rational functions with rational coefficients over domains in $\mathbb{R}^n$ given by polynomial inequalities with rational coefficients.

The set of periods is therefore a countable subset of the complex numbers. It contains the algebraic numbers, but also many famous transcendental constants.

In order to show the equality $\zeta(2)=\frac{\pi^2}{6}$ we have to show that both are periods and that $\zeta(2)$ and $\frac{\pi^2}{6}$ form a so-called accessible identity.

First step of the proof: $\zeta(2)$ and $\pi$ are periods

There are many different representations of $\pi$ showing that this constant is a period. In the paper referred to above, the following expressions (among others) for $\pi$ are stated:

\begin{align*} \pi= \iint \limits_{x^2+y^2\leq 1}dxdy=\int_{-\infty}^{\infty}\frac{dx}{1+x^2} \end{align*}

showing that $\pi$ is a period. The known representation

\begin{align*} \zeta(2)=\iint_{0<x<y<1} \frac{dxdy}{(1-x)y} \end{align*}

shows that $\zeta(2)$ is also a period.


Second step: $\zeta(2)$ and $\frac{\pi^2}{6}$ form an accessible identity.

An accessible identity between two periods $A$ and $B$ is given, if we can transform the integral representation of period $A$ by application of the three rules: Additivity (integrand and domain), Change of variables and Newton-Leibniz formula to the integral representation of period $B$.

This implies equality of the periods and the job is done.

In order to show that $\zeta(2)$ and $\frac{\pi^2}{6}$ form an accessible identity we start with the integral $I$

$$I=\int_{0}^{1}\int_{0}^{1}\frac{1}{1-xy}\frac{dxdy}{\sqrt{xy}}$$

Expanding $1/(1-xy)$ as a geometric series and integrating term-by-term, we find that

$$I=\sum_{n=0}^{\infty}\left(n+\frac{1}{2}\right)^{-2}=(4-1)\zeta(2),$$

providing another period representation of $\zeta(2)$.

Changing variables:

$$x=\xi^2\frac{1+\eta^2}{1+\xi^2},\qquad\qquad y=\eta^2\frac{1+\xi^2}{1+\eta^2}$$

with Jacobian $\left|\frac{\partial(x,y)}{\partial(\xi,\eta)}\right|=\frac{4\xi\eta(1-\xi^2\eta^2)}{(1+\xi^2)(1+\eta^2)} =4\frac{(1-xy)\sqrt{xy}}{(1+\xi^2)(1+\eta^2)}$, we find

$$I=4\iint_{0<\eta,\xi\leq 1}\frac{d\xi}{1+\xi^2}\frac{d\eta}{1+\eta^2} =2\int_{0}^{\infty}\frac{d\xi}{1+\xi^2}\int_{0}^{\infty}\frac{d\eta}{1+\eta^2},$$

the last equality being obtained by considering the involution $(\xi,\eta) \mapsto (\xi^{-1},\eta^{-1})$ and comparing this with the last integral representation of $\pi$ above we obtain: $$I=\frac{\pi^2}{2}$$
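The Jacobian used in this change of variables can be checked by finite differences at an arbitrary interior point (the test point $(\xi,\eta)$ and step $h$ below are ad hoc choices):

```python
import math

# Finite-difference Jacobian of the substitution
#   x = xi^2 (1+eta^2)/(1+xi^2),  y = eta^2 (1+xi^2)/(1+eta^2),
# compared with the closed form quoted above.
def xy(xi, eta):
    x = xi ** 2 * (1 + eta ** 2) / (1 + xi ** 2)
    y = eta ** 2 * (1 + xi ** 2) / (1 + eta ** 2)
    return x, y

xi, eta, h = 0.4, 0.7, 1e-6
dx_dxi = (xy(xi + h, eta)[0] - xy(xi - h, eta)[0]) / (2 * h)
dx_deta = (xy(xi, eta + h)[0] - xy(xi, eta - h)[0]) / (2 * h)
dy_dxi = (xy(xi + h, eta)[1] - xy(xi - h, eta)[1]) / (2 * h)
dy_deta = (xy(xi, eta + h)[1] - xy(xi, eta - h)[1]) / (2 * h)
jacobian = abs(dx_dxi * dy_deta - dx_deta * dy_dxi)
claimed = 4 * xi * eta * (1 - xi ** 2 * eta ** 2) / ((1 + xi ** 2) * (1 + eta ** 2))
```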

So, we have shown that $\frac{\pi^2}{6}$ and $\zeta(2)$ form an accessible identity, and equality follows.

$\endgroup$
22
$\begingroup$

As taken from my upcoming textbook:

There is yet another solution to the Basel problem as proposed by Ritelli (2013). His approach is similar to the one by Apostol (1983), where he arrives at

$$\sum_{n\geq1}\frac{1}{n^2}=\frac{\pi^2}{6}\tag1$$

by evaluating the double integral

$$\int_0^1\int_0^1\dfrac{\mathrm{d}x\,\mathrm{d}y}{1-xy}.\tag2$$

Ritelli in this case evaluates the double integral shown in $(4)$. The starting point comes from realizing that $(1)$ is equivalent to

$$\sum_{n\geq0}\frac{1}{(2n+1)^2}=\frac{\pi^2}{8}\tag3$$

To evaluate the above sum we consider the double integral

$$\int_0^\infty\int_0^\infty\frac{\mathrm{d}x\,\mathrm{d}y}{(1+y)(1+x^2y)}.\tag4$$

We evaluate $(4)$ first with respect to $x$ and then with respect to $y$:

$$\begin{align} \int_0^\infty\left(\frac{1}{1+y}\int_0^\infty\frac{\mathrm{d}x}{1+x^2y}\right)\mathrm{d}y &=\int_0^\infty\left(\frac{1}{1+y}\left[\frac{\tan^{-1}(\sqrt{y}\,x)}{\sqrt{y}}\right]_{x=0}^{x=\infty}\right)\mathrm{d}y\\ &=\frac\pi2\int_0^\infty\frac{\mathrm{d}y}{\sqrt{y}(1+y)}\\ &=\frac\pi2\int_0^\infty\frac{2u}{u(1+u^2)}\mathrm{d}u=\frac{\pi^2}{2},\tag5 \end{align}$$

where we used the substitution $y\leadsto u^2$ in the last step. If we instead reverse the order of integration, one gets

$$\begin{align} \int_0^\infty\left(\int_0^\infty\frac{\mathrm{d}y}{(1+y)(1+x^2y)}\right)\mathrm{d}x&=\int_0^\infty\frac{1}{1-x^2}\left(\int_0^\infty\left(\frac{1}{1+y}-\frac{x^2}{1+x^2y}\right)\mathrm{d}y\right)\mathrm{d}x\\ &=\int_0^\infty\frac{1}{1-x^2}\ln\frac1{x^2}\mathrm{d}x=2\int_0^\infty\frac{\ln x}{x^2-1}\mathrm{d}x.\tag6 \end{align}$$

Hence, since $(5)$ and $(6)$ are equal, we have

$$\int_0^\infty\frac{\ln x}{x^2-1}\mathrm{d}x=\frac{\pi^2}{4}.\tag7$$

Furthermore

$$\begin{align} \int_0^\infty\frac{\ln x}{x^2-1}\mathrm{d}x&=\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x+\int_1^\infty\frac{\ln x}{x^2-1}\mathrm{d}x\\ &=\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x+\int_0^1\frac{\ln u}{u^2-1}\mathrm{d}u,\tag8 \end{align}$$

where we used the substitution $x\leadsto1/u$. Combining $(7)$ and $(8)$ yields

$$\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x=\frac{\pi^2}{8}.\tag{9}$$

By expanding the denominator of the integrand in $(10)$ into a geometric series and using the Monotone Convergence Theorem,

$$\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x=\int_0^1\frac{-\ln x}{1-x^2}\mathrm{d}x=\sum_{n\ge0}\int_0^1(-x^{2n}\ln x)\mathrm{d}x.\tag{10}$$

Using integration by parts one can see that

$$\int_0^1(-x^{2n}\ln x)\mathrm{d}x=\left[-\frac{x^{2n+1}}{2n+1}\ln x\right]^1_0+\int_0^1\frac{x^{2n}}{2n+1}\mathrm{d}x=\frac{1}{(2n+1)^2}\tag{11}$$

Hence from $(10)$, and $(11)$

$$\int_0^1\frac{\ln x}{x^2-1}\mathrm{d}x=\sum_{n\geq0}\frac{1}{(2n+1)^2},\tag{12}$$

which finishes the proof. ${\square}$
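Equation $(11)$ is easy to confirm numerically for a particular $n$ (the choice of $n$ and the grid size are arbitrary):

```python
import math

# Midpoint-rule check of int_0^1 -x^(2n) ln(x) dx = 1/(2n+1)^2 for one n.
n = 2
N = 200_000
h = 1.0 / N
value = sum(-((k + 0.5) * h) ** (2 * n) * math.log((k + 0.5) * h)
            for k in range(N)) * h
target = 1.0 / (2 * n + 1) ** 2   # 1/25 for n = 2
```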

References:

Daniele Ritelli (2013), Another Proof of $\zeta(2)=\frac{\pi^2}{6}$ Using Double Integrals, The American Mathematical Monthly, Vol. 120, No. 7, pp. 642-645

T. Apostol (1983), A proof that Euler missed: Evaluating $\zeta(2)$ the easy way, Math. Intelligencer 5, pp. 59–60, available at http://dx.doi.org/10.1007/BF03026576.

$\endgroup$
1
21
$\begingroup$

Here is Euler's Other Proof by Gerald Kimble

\begin{align*} \frac{\pi^2}{6}&=\frac{4}{3}\frac{(\arcsin 1)^2}{2}\\ &=\frac{4}{3}\int_0^1\frac{\arcsin x}{\sqrt{1-x^2}}\,dx\\ &=\frac{4}{3}\int_0^1\frac{x+\sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!}\frac{x^{2n+1}}{2n+1}}{\sqrt{1-x^2}}\,dx\\ &=\frac{4}{3}\int_0^1\frac{x}{\sqrt{1-x^2}}\,dx +\frac{4}{3}\sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!(2n+1)}\int_0^1x^{2n}\frac{x}{\sqrt{1-x^2}}\,dx\\ &=\frac{4}{3}+\frac{4}{3}\sum_{n=1}^{\infty}\frac{(2n-1)!!}{(2n)!!(2n+1)}\left[\frac{(2n)!!}{(2n+1)!!}\right]\\ &=\frac{4}{3}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}\\ &=\frac{4}{3}\left(\sum_{n=1}^{\infty}\frac{1}{n^2}-\frac{1}{4}\sum_{n=1}^{\infty}\frac{1}{n^2}\right)\\ &=\sum_{n=1}^{\infty}\frac{1}{n^2} \end{align*}

$\endgroup$
21
$\begingroup$

I saw this proof in an extract of the College Mathematics Journal.

Consider the integral $I = \int_0^{\pi/2}\ln(2\cos x)\,dx$.

From $2\cos x = e^{ix} + e^{-ix}$, we have:

$$\int_0^{\pi/2}\ln\left(e^{ix} + e^{-ix}\right)dx = \int_0^{\pi/2}\ln\left(e^{ix}(1 + e^{-2ix})\right)dx=\int_0^{\pi/2}ixdx + \int_0^{\pi/2}\ln(1 + e^{-2ix})dx$$

The Taylor series expansion of $\ln(1+x)$ is $x -\frac{x^2}{2} +\frac{x^3}{3}-\cdots$

Thus $\ln(1+e^{-2ix}) = e^{-2ix}- \frac{e^{-4ix}}{2} + \frac{e^{-6ix}}{3} - \cdots$, and so for $I$:

$$I = \frac{i\pi^2}{8}+\left[-\frac{e^{-2ix}}{2i}+\frac{e^{-4ix}}{2\cdot 4i}-\frac{e^{-6ix}}{3\cdot 6i}+\cdots\right]_0^\frac{\pi}{2}$$

$$I = \frac{i\pi^2}{8}-\frac{1}{2i}\left[\frac{e^{-2ix}}{1^2}-\frac{e^{-4ix}}{2^2}+\frac{e^{-6ix}}{3^2}-\cdots\right]_0^\frac{\pi}{2}$$

Evaluating at the limits (note that $e^{-2nix}$ equals $-1$ at $x=\frac{\pi}{2}$ when $n$ is odd, equals $1$ there when $n$ is even, and equals $1$ at $x=0$) we get

$$I = \frac{i\pi^2}{8}-\frac{1}{2i}\left[\frac{-2}{1^2}-\frac{0}{2^2}+\frac{-2}{3^2}-\cdots\right]$$

Hence

$$\int_0^{\pi/2}\ln(2\cos x)dx=\frac{i\pi^2}{8}-i\sum_{k=0}^\infty \frac{1}{(2k+1)^2}$$

So now we have a real integral equal to a purely imaginary number; both must therefore be zero.

Thus, $\sum_{k=0}^\infty \frac{1}{(2k+1)^2}=\frac{\pi^2}{8}$

Now let $\sum_{k=1}^\infty \frac{1}{k^2}=E$. We get $\sum_{k=0}^\infty \frac{1}{(2k+1)^2}=\frac{3}{4} E$

And as a result $$\sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}$$
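The starting identity $\int_0^{\pi/2}\ln(2\cos x)\,dx=0$ can be confirmed numerically with the midpoint rule (grid size arbitrary; the logarithmic singularity at $\pi/2$ is integrable and the midpoints avoid it):

```python
import math

# Midpoint-rule estimate of I = int_0^{pi/2} ln(2 cos x) dx, which the
# argument above shows must vanish.
N = 400_000
h = (math.pi / 2) / N
I = sum(math.log(2.0 * math.cos((k + 0.5) * h)) for k in range(N)) * h
```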

$\endgroup$
20
$\begingroup$

This popped up in some reading I'm doing for my research, so I thought I'd contribute! It's a more general twist on the usual pointwise-convergent Fourier series argument.


Consider the eigenvalue problem for the negative Laplacian $\mathcal L$ on $[0,1]$ with Dirichlet boundary conditions; that is, $\mathcal L f_n:=-f_n'' = \lambda_n f_n$ with $f_n(0) = f_n(1) = 0$. By inspection we find that the admissible eigenvalues are $\lambda_n = n^2\pi^2$ for $n=1,2,\ldots$, with eigenfunctions $f_n(x)=\sqrt{2}\sin(n\pi x)$.

One can verify that the integral operator $$\mathcal Gf(x) = \int_0^1 G(x,y)f(y)\,dy,$$ where $$G(x,y) = \min(x,y) - xy = \frac{1}{2}\left( -|x-y| + x(1-y) + y(1-x) \right)~~,$$

inverts the negative Laplacian, in the sense that $\mathcal L \mathcal G f = \mathcal G \mathcal L f = f$ on the admissible class of functions (twice weakly differentiable, satisfying the boundary conditions). That is, $G$ is the Green's function for the Dirichlet Laplacian. Because $\mathcal G$ is a self-adjoint, compact operator, we can form an orthonormal basis for $L^2([0,1])$ from its eigenfunctions, and so may express its trace in two ways: $$ \sum_n \langle f_n,\mathcal G f_n\rangle = \sum_n \frac{1}{\lambda_n} $$and $$\sum_n \langle f_n,\mathcal G f_n\rangle = \int_0^1 \sum_n f_n(x) \langle G(x,\cdot),f_n\rangle\,dx = \int_0^1 G(x,x)\,dx~~.$$

The latter quantity is $$ \int_0^1 x(1-x)\,dx = \frac 1 2 - \frac 1 3 = \frac 1 6~~.$$

Hence, we have that $$\sum_n \frac 1 {n^2\pi^2} = \frac 1 6~~\text{, or}~~ \sum_n \frac 1 {n^2} = \frac {\pi^2} 6~~.$$
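Two quick numerical checks of the Green's function computation (the grid size and the test point $x_0$ below are arbitrary choices):

```python
import math

def G(x, y):
    # Green's function of -d^2/dx^2 on [0,1] with Dirichlet conditions
    return min(x, y) - x * y

N = 100_000
h = 1.0 / N
mids = [(k + 0.5) * h for k in range(N)]

# (i) trace: int_0^1 G(x,x) dx = int_0^1 x(1-x) dx = 1/6
trace = sum(G(t, t) for t in mids) * h

# (ii) G inverts -d^2/dx^2: u(x) = int_0^1 G(x,y) sin(pi y) dy should equal
# sin(pi x)/pi^2, since -u'' = sin(pi x) with u(0) = u(1) = 0.
x0 = 0.3
applied = sum(G(x0, y) * math.sin(math.pi * y) for y in mids) * h
expected = math.sin(math.pi * x0) / math.pi ** 2
```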

$\endgroup$
0
17
$\begingroup$

Consider the function $\pi \cot(\pi z)$, which has simple poles at the integers $z=n$. Using L'Hôpital's rule you can see that the residue at each of these poles is $1$.

Now consider the integral $\int_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2} dz$, where the contour $\gamma_N$ is the square with corners $\pm(N + 1/2) \pm i(N + 1/2)$, so that the contour avoids the poles of $\cot(\pi z)$. The integral is bounded in the following way: $$\left|\int_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz\right|\le \max_{z\in\gamma_N} \left|\frac{\pi\cot(\pi z)}{z^2}\right| \cdot \operatorname{Length}(\gamma_N)$$

It can easily be shown that on the contour $\gamma_N$ we have $|\pi \cot(\pi z)|< M$, where $M$ is some constant independent of $N$. Then we have

$$\left|\int_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\,dz\right|\le M \max_{z\in\gamma_N} \left|\frac{1}{z^2}\right| \operatorname{Length}(\gamma_N) = (8N+4) \frac{M}{(N+1/2)^2}$$

where $8N+4$ is the length of the contour and $N+1/2$ is the minimum of $|z|$ on $\gamma_N$. This bound tends to $0$ as $N$ goes to infinity, so we have $$\lim_{N\to\infty}\int_{\gamma_N} \frac{\pi\cot(\pi z)}{z^2}\, dz =0$$

By the Cauchy residue theorem we have $2\pi i\,\mathrm{Res}(z = 0) + 2\pi i\sum \mathrm{Res}(z\ne 0) = 0$ in the limit. At $z=0$ we have $\mathrm{Res}(z=0)=-\frac{\pi^2}{3}$, and $\mathrm{Res}(z=n)=\frac{1}{n^2}$, so dividing by $2\pi i$ we have

$$\mathrm{Res}(z = 0) + \sum \mathrm{Res}(z\ne 0) = -\frac{\pi^2}{3}+2\sum_{1}^{\infty} \frac{1}{n^2} =0$$

where the $2$ in front of the sum appears because the residue $\frac{1}{n^2}$ occurs twice, at $\pm n$.

We now have the desired result $\sum_{1}^{\infty} \frac{1}{n^2}=\frac{\pi^2}{6}$.

$\endgroup$
1
  • $\begingroup$ can you please explain why you divide $M \cdot \text{Length}(\gamma_{N})$ by half the diagonal of $\gamma_{N}$? The only thing I can think of is that it's some kind of bounds on $Max \vert \frac{1}{z^{2}} \vert$. But if you could explain it that would be great. $\endgroup$
    – user100463
    May 4, 2016 at 23:15
17
$\begingroup$

I have another method as well. From skimming the previous solutions, I don't think it is a duplicate of any of them.

In complex analysis, we learn that $\sin(\pi z) = \pi z\prod_{n=1}^{\infty}\Big(1 - \frac{z^2}{n^2}\Big)$, which is an entire function with simple zeros at the integers. We can differentiate termwise by uniform convergence. So by logarithmic differentiation we obtain a series for $\pi\cot(\pi z)$. $$ \frac{d}{dz}\ln(\sin(\pi z)) = \pi\cot(\pi z) = \frac{1}{z} - 2z\sum_{n=1}^{\infty}\frac{1}{n^2 - z^2} $$ Therefore, $$ -\sum_{n=1}^{\infty}\frac{1}{n^2 - z^2} = \frac{\pi\cot(\pi z) - \frac{1}{z}}{2z} $$ We can expand $\pi\cot(\pi z)$ as $$ \pi\cot(\pi z) = \frac{1}{z} - \frac{\pi^2}{3}z - \frac{\pi^4}{45}z^3 - \cdots $$ Thus, \begin{align} \frac{\pi\cot(\pi z) - \frac{1}{z}}{2z} &= \frac{- \frac{\pi^2}{3}z - \frac{\pi^4}{45}z^3-\cdots}{2z}\\ -\sum_{n=1}^{\infty}\frac{1}{n^2 - z^2}&= -\frac{\pi^2}{6} - \frac{\pi^4}{90}z^2 - \cdots\\ -\lim_{z\to 0}\sum_{n=1}^{\infty}\frac{1}{n^2 - z^2}&= \lim_{z\to 0}\Big(-\frac{\pi^2}{6} - \frac{\pi^4}{90}z^2 - \cdots\Big)\\ -\sum_{n=1}^{\infty}\frac{1}{n^2}&= -\frac{\pi^2}{6}\\ \sum_{n=1}^{\infty}\frac{1}{n^2}&= \frac{\pi^2}{6} \end{align}
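The limiting step can be spot-checked numerically; the value of $z$ below is an arbitrary small number, chosen small enough for accuracy but not so small that $\frac{1}{z}-\pi\cot(\pi z)$ is lost to floating-point cancellation:

```python
import math

# (1/z - pi*cot(pi*z)) / (2z) -> pi^2/6 as z -> 0; evaluate at a small z.
z = 1e-3
value = (1.0 / z - math.pi / math.tan(math.pi * z)) / (2.0 * z)
```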

$\endgroup$
14
$\begingroup$

I would like to present a method I found recently here.

Let $A_n=\int_0^{\pi/2}\cos^{2n}x\;\mathrm{d}x$ and $B_n=\int_0^{\pi/2}x^2\cos^{2n}x\;\mathrm{d}x$.

The first integral is well known; integration by parts gives the recurrence relation:

$$A_{n}=\frac{2n-1}{2n}A_{n-1}\tag{1}$$

Integrating by parts twice to bring in the second integral:

$$A_n=\int_0^{\pi/2}\cos^{2n}x\;\mathrm{d}x=x\cos^{2n}x\bigg{|}_0^{\pi/2}-\frac{x^2}{2}(\cos^{2n}x)'\bigg{|}_0^{\pi/2}+\frac{1}{2}\int_0^{\pi/2}x^2(\cos^{2n}x)''\;\mathrm{d}x$$

The first two terms vanish, so we are left only with the integral, and since $(\cos^{2n}x)''=2n(2n-1)\cos^{2n-2}x-4n^2\cos^{2n}x$ we have:

$$A_n=(2n-1)nB_{n-1}-2n^2B_{n}\tag{2}$$

for $n\geq 1$. Rearranging and substituting $(2n-1)=2n\frac{A_n}{A_{n-1}}$ from $(1)$ we get:

$$\frac{1}{n^2}=2\frac{B_{n-1}}{A_{n-1}}-2\frac{B_n}{A_n}\tag{3}$$

Summing from $n=1$ to a natural number $k$, we get by the telescoping property

$$\sum_{n=1}^k\frac{1}{n^2}=2\frac{B_0}{A_0}-2\frac{B_k}{A_k}=\frac{\pi^2}{6}-2\frac{B_k}{A_k}\tag{4}$$

Next, using the inequality $\sin x\geq \frac{2x}{\pi}$ on $(0,\frac{\pi}{2})$ and by $(1)$ :

$$\frac{4}{\pi^2}B_{n-1}=\frac{4}{\pi^2}\int_0^{\pi/2}x^2\cos^{2n-2}x\;\mathrm{d}x<\int_0^{\pi/2}\sin^2x\cos^{2n-2}x\;\mathrm{d}x=A_{n-1}-A_n=\frac{A_{n-1}}{2n}$$

so $0<\frac{B_k}{A_k}<\frac{\pi^2}{8(k+1)}$, and in the limit the last term vanishes by the squeeze theorem; we are left with

$$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}\tag{5}$$

This concludes the proof.
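Relation $(3)$ can be verified numerically for a particular $n$, computing $A_n$ and $B_n$ by the midpoint rule (the choice of $n$ and the grid size are arbitrary):

```python
import math

# Midpoint-rule values of A_n = int_0^{pi/2} cos^{2n}x dx and
# B_n = int_0^{pi/2} x^2 cos^{2n}x dx, used to test relation (3).
N = 100_000
h = (math.pi / 2) / N
mids = [(k + 0.5) * h for k in range(N)]

def A(n):
    return sum(math.cos(t) ** (2 * n) for t in mids) * h

def B(n):
    return sum(t * t * math.cos(t) ** (2 * n) for t in mids) * h

n = 4
lhs = 1.0 / n ** 2                                  # = 1/16
rhs = 2 * B(n - 1) / A(n - 1) - 2 * B(n) / A(n)
```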

$\endgroup$
2
  • $\begingroup$ I think you mean $\sin x\geq 2x/\pi$. But this is a clever approach. Another reason to appreciate integration by parts. $\endgroup$
    – user123641
    Jun 9, 2017 at 0:51
  • $\begingroup$ Thanks for sharing, very easy to follow and totally new to me. Fixed the typo mentioned by @Bryan. $\endgroup$ Jun 9, 2017 at 10:11
14
$\begingroup$

Applying the usual trick¹ of transforming a series into an integral, we obtain

$$\sum_{n=1}^\infty\frac1{n^2}=\int_0^1\int_0^1\frac{dxdy}{1-xy}$$

where we use the Monotone Convergence Theorem to integrate term-wise.

Then there's this ingenious change of variables², which I learned from Don Zagier during a lecture, and which he in turn got from a colleague:

$$(x,y)=\left(\frac{\cos v}{\cos u},\frac{\sin u}{\sin v}\right),\quad0\leq u\leq v\leq \frac\pi2$$

One verifies that it is bijective between the rectangle $[0,1]^2$ and the triangle $0\leq u\leq v\leq \frac\pi2$, and that its Jacobian determinant is precisely $1-x^2y^2$, which means $\frac1{1-x^2y^2}$ would be a neater integrand. For the moment, we have found

$$J=\int_0^1\int_0^1\frac{dxdy}{1-x^2y^2}=\frac{\pi^2}8$$ (the area of the triangular domain in the $(u,v)$ plane).
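The Jacobian determinant of this map can be checked by finite differences at an arbitrary point of the triangle (the test point and step below are ad hoc choices):

```python
import math

# Finite-difference Jacobian of (u,v) -> (cos v / cos u, sin u / sin v)
# at an interior point of 0 < u < v < pi/2; it should match 1 - x^2 y^2.
def xy(u, v):
    return math.cos(v) / math.cos(u), math.sin(u) / math.sin(v)

u, v, h = 0.4, 0.9, 1e-6
x, y = xy(u, v)
dxu = (xy(u + h, v)[0] - xy(u - h, v)[0]) / (2 * h)
dxv = (xy(u, v + h)[0] - xy(u, v - h)[0]) / (2 * h)
dyu = (xy(u + h, v)[1] - xy(u - h, v)[1]) / (2 * h)
dyv = (xy(u, v + h)[1] - xy(u, v - h)[1]) / (2 * h)
jacobian = abs(dxu * dyv - dxv * dyu)
claimed = 1.0 - x ** 2 * y ** 2
```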


There are two ways to transform $\int\frac1{1-xy}$ into something $\int\frac1{1-x^2y^2}$ish:

  • Manipulate $S=\sum_{n=1}^\infty\frac1{n^2}$: We have $\sum_{n=1}^\infty\frac1{(2n)^2}=\frac14S$ so $\sum_{n=0}^\infty\frac1{(2n+1)^2}=\frac34S$. Applying the series-integral transformation, we get $\frac34S=J$ so $$S=\frac{\pi^2}6$$

  • Manipulate $I=\int_0^1\int_0^1\frac{dxdy}{1-xy}$: Substituting $(x,y)\leftarrow(x^2,y^2)$ we have $I=\int_0^1\int_0^1\frac{4xydxdy}{1-x^2y^2}$ so $$J=\int_0^1\int_0^1\frac{dxdy}{1-x^2y^2}=\int_0^1\int_0^1\frac{(1+xy-xy)dxdy}{1-x^2y^2}=I-\frac14I$$ whence $$I=\frac43J=\frac{\pi^2}6$$

(It may be seen that they are essentially the same methods.)


After looking at the comments, it seems that this is essentially Proof 2 in the article by R. Chapman.

See also: Multiple Integral $\int\limits_0^1\!\!\int\limits_0^1\!\!\int\limits_0^1\!\!\int\limits_0^1\frac1{1-xyuv}\,dx\,dy\,du\,dv$

¹ See e.g. Proof 1 in Chapman's article.
² It may have been a different one; maybe as in the above article. Either way, the idea to do something trigonometric was not mine.

$\endgroup$
1
14
$\begingroup$

This is, by no measure, the best nor the simplest approach, but I think the approach is pretty peculiar.

We estimate the number $N(x)$ of integer solutions to $a^2+b^2+c^2+d^2\leq x$ as $x\rightarrow\infty$. On one hand, this is the number of lattice points inside the $4$-ball of radius $\sqrt{x}$, which has volume $\frac{1}{2}\pi^2x^2$; hence $N(x)=\frac{\pi^2}{2}x^2+O(x^{3/2})$.

On the other hand, let $r_4(n)$ be the number of solutions to $a^2+b^2+c^2+d^2=n$. Following the derivation in the book by Iwaniec-Kowalski, by Jacobi's four-square identity we can write $$N(x)=\sum_{n\leq x}r_4(n)=8\sum_{m\leq x}(2+(-1)^m)\sum_{dm\leq x,d\text{ odd}} d \\ =8\sum_{m\leq x}(2+(-1)^m)\left(\frac{x^2}{4m^2}+O\left(\frac{x}{m}\right)\right)\\ =2x^2\sum_{m\leq x}(2+(-1)^m)m^{-2}+O(x\log x)\\ =3x^2\zeta(2)+O(x\log x)$$ (I have copied the steps as they were in the book, it's a neat exercise to justify every transition). In particular, we have $$\zeta(2)=\lim\limits_{x\rightarrow\infty}\frac{N(x)}{3x^2}=\frac{\pi^2}{6}.$$
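The lattice-point count itself can be reproduced by machine, which makes a pleasant sanity check of the asymptotics (the cutoff $X$ below is an arbitrary choice, and the counting-by-convolution scheme is just one convenient way to enumerate quadruples):

```python
import math

# Count N(x) = #{(a,b,c,d) in Z^4 : a^2+b^2+c^2+d^2 <= x} exactly, via
# r2(n) = #{(a,b) in Z^2 : a^2+b^2 = n} and a cumulative convolution,
# then compare N(x)/(3x^2) with zeta(2) = pi^2/6.
X = 250_000
r2 = [0] * (X + 1)
A = math.isqrt(X)
for a in range(-A, A + 1):
    aa = a * a
    b = 0
    while aa + b * b <= X:
        r2[aa + b * b] += 1 if b == 0 else 2   # b and -b counted together
        b += 1

# cumulative counts: R2[j] = #{(c,d) : c^2 + d^2 <= j}
R2 = [0] * (X + 1)
running = 0
for j in range(X + 1):
    running += r2[j]
    R2[j] = running

N = sum(r2[m] * R2[X - m] for m in range(X + 1))
estimate = N / (3 * X ** 2)   # should be close to pi^2/6
```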

$\endgroup$
1
  • $\begingroup$ (+1) I wonder if one can do the same by only exploiting the fact that the average value of $r_2(n)$ is $\pi$ by Gauss circle problem. $\endgroup$ Nov 9, 2017 at 4:50
13
$\begingroup$

See evaluations of the Riemann zeta function $\zeta(2)=\sum_{n=1}^\infty\frac{1}{n^2}$ at mathworld.wolfram.com and a solution by D. P. Giesy in Mathematics Magazine:

D. P. Giesy, Still another elementary proof that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$, Math. Mag. 45 (1972) 148–149.

Unfortunately I could not find a link to this article, but there is a link to a note from Robin Chapman which seems to me a variation of Giesy's proof.

$\endgroup$