Just curious, how do you calculate an irrational number? Take $\pi$ for example. Computers have calculated $\pi$ to the millionth digit and beyond. What formula/method do they use to figure this out? How does it compare to other irrational numbers such as $\varphi$ or $e$?

Minor aside: your question is not "how do you calculate an irrational number", but "how do you calculate the decimal expansion of an irrational number".
– Hurkyl, May 20 '12 at 12:20


This is a really good question, because it's simple to ask but has no simple answer. It depends a lot on which number you have in mind. For an interesting counterpoint, consider Euler's constant $\gamma$$\approx 0.57721\ldots$. Methods are known for calculating $\gamma$ with great precision, but it is not known whether it is irrational or not!
– MJD, May 20 '12 at 13:12

@Sean All these formulas have been proven equivalent; otherwise they wouldn't be called "formulas for $\pi$". In general, showing that two formulas are equivalent is very hard and requires a great deal of mathematics.
– Alex Becker, Jul 6 '12 at 2:50

5 Answers

$\pi$

For computing $\pi$, many rapidly convergent methods are known. Historically, popular methods include estimating $\arctan$ with its Taylor series and calculating $\pi/4$ with a Machin-like formula. The basic one is Machin's own formula:

$$\frac{\pi}{4} = 4 \arctan\frac{1}{5} - \arctan\frac{1}{239}$$
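As a concrete illustration, here is a minimal Python sketch evaluating Machin's formula via the Taylor series for $\arctan$ (the helper name `arctan_series` is just illustrative):

```python
import math

def arctan_series(x, terms=25):
    """Taylor series: arctan(x) = x - x^3/3 + x^5/5 - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# pi/4 = 4*arctan(1/5) - arctan(1/239), so:
pi_approx = 4 * (4 * arctan_series(1 / 5) - arctan_series(1 / 239))
```

Because both arguments are small, 25 terms already exhaust double precision; estimating $\arctan 1$ directly to comparable accuracy would take an enormous number of terms.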

The reason these formulas are used instead of estimating $\arctan 1 =\frac{\pi}{4}$ directly is that the series for $\arctan x$ converges much faster for $x \approx0$. Thus, small values of $x$ are better for estimating $\pi/4$, even if one is required to compute $\arctan$ several times. A good example of this approach is Hwang Chien-Lih's formula, which combines several arctangents with very small arguments.

Iterative algorithms, such as Borwein's algorithm or the Gauss–Legendre algorithm, converge to $\pi$ extremely fast (the Gauss–Legendre algorithm finds 45 million correct digits in 25 iterations), but each iteration requires substantial computational effort. Because of this, the linearly convergent Ramanujan and Chudnovsky series are often preferred (these methods are mentioned in other answers here as well); they produce about 6–8 digits and 14 digits per term added, respectively.
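To see the quadratic convergence of the Gauss–Legendre iteration, here is a minimal Python sketch using the `decimal` module (the precision and iteration count are arbitrary choices; the number of correct digits roughly doubles every pass):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # working precision in decimal digits

# Gauss-Legendre (arithmetic-geometric mean) iteration for pi.
a = Decimal(1)
b = Decimal(1) / Decimal(2).sqrt()
t = Decimal("0.25")
p = Decimal(1)

for _ in range(5):
    a_next = (a + b) / 2
    b = (a * b).sqrt()
    t -= p * (a - a_next) ** 2
    a = a_next
    p *= 2

pi_approx = (a + b) ** 2 / (4 * t)
```

Five iterations already saturate the 60-digit working precision.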
It is interesting to mention that the Bailey–Borwein–Plouffe formula can calculate the $n$th hexadecimal (or binary) digit of $\pi$ without needing to compute any of the preceding $n-1$ digits (such algorithms are known as "spigot algorithms"). Bellard's formula is similar but about 43% faster.
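A sketch of BBP digit extraction in Python (the function name is illustrative, and this float-based version is only reliable for modest $n$; it uses the formula $\pi=\sum_k 16^{-k}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}\right)$):

```python
def pi_hex_digit(n):
    """Return the (n+1)-th hexadecimal digit of pi after the point,
    computed directly via the BBP formula."""
    def S(j):
        # Fractional part of 16^n * sum_k 1 / (16^k * (8k + j)).
        s = 0.0
        for k in range(n + 1):        # exact head, via modular exponentiation
            m = 8 * k + j
            s = (s + pow(16, n - k, m) / m) % 1.0
        k = n + 1                     # rapidly vanishing tail
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                return s
            s += term
            k += 1
    frac = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return "%x" % int(16 * frac)
```

The key trick is `pow(16, n - k, m)`: three-argument `pow` does modular exponentiation, so only the fractional part of each huge power is ever carried.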

Each term of the Chudnovsky series increases the accuracy by about 14 decimal places.

$e$

The most popular method for computing $e$ is its Taylor series expansion, which requires little computational effort and converges very quickly (and the convergence keeps accelerating, since the factorials in the denominators grow so fast):
$$e=\sum_{n=0}^\infty \frac{1}{n!}$$
The partial sums converge rapidly: the first ten terms (through $n=9$) already give six correct decimal digits.
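Accumulating the series in double precision is a one-loop sketch (keeping a running factorial instead of recomputing $n!$ each time):

```python
import math

total, term = 0.0, 1.0        # term holds 1/n!
for n in range(1, 20):
    total += term
    term /= n                  # term becomes 1/n!
# total is now the partial sum of e through n = 18
```

About 18 terms suffice to reach machine precision, since $1/18!$ is already below $10^{-15}$.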

One should also note that the limit definition of $e$ and the series may be used in conjunction. The canonical limit for $e$ is

$$e=\lim_{n \to \infty}\left(1+\frac{1}{n}\right)^n$$

Noting that $\left(1+\frac{1}{n}\right)^n$ is just the first two terms of the Taylor series for $\exp(\frac{1}{n})$, raised to the power $n$, it is clear that $\exp(\frac{1}{n})$ can be computed to higher accuracy in fewer terms than $e^1$, because even two terms give a better and better estimate as $n \to \infty$. This means that if we take a few more terms of the expansion of $\exp(\frac{1}{n})$, we can find the $n$th root of $e$ to high accuracy (higher than either the limit or the plain series gives) and then simply raise the answer to the $n$th power (easy, if $n$ is an integer).

As a formula, we have, if $m$ and $a$ are large:

$$e \approx \left(\sum_{n=0}^m \frac{1}{n!a^n}\right)^a$$

If we use the series to find the $100$th root of $e$ (i.e. take $a=100$ in the formula above), each successive term is more than $100$ times smaller than the last, so only a handful of terms are needed.
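A quick numerical check of this formula, with the illustrative choices $a=100$ and $m=5$ (a minimal Python sketch):

```python
import math

# e ~ (sum_{n=0}^{m} 1/(n! * a^n))^a : truncate the series for e^(1/a),
# then raise the result to the a-th power.
a, m = 100, 5
root = sum(1 / (math.factorial(n) * a ** n) for n in range(m + 1))
e_approx = root ** a
```

Just six terms of the series, followed by one exponentiation, recover $e$ to roughly 13 decimal places.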

$\varphi$

The golden ratio is
$$\varphi=\frac{\sqrt{5}+1}{2}$$
so once $\sqrt{5}$ is computed to sufficient accuracy, so is $\varphi$. To estimate $\sqrt{5}$, many methods can be used, perhaps the simplest being the Babylonian method. Newton's root-finding method may also be applied directly to $\varphi$, because $\varphi$ and $-1/\varphi$ are the two roots of
$$0=x^2-x-1$$

If $\xi$ is a root of $f(x)$, Newton's method finds $\xi$ as the limit of the iteration (from a suitable starting point $x_0$):

$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$
$$\xi=\lim_{n \to \infty}x_n$$

We thus assign $f(x)=x^2-x-1$ and $f'(x)=2x-1$. Then
$$x_{n+1}=x_n-\frac{x_n^2-x_n-1}{2x_n-1}=\frac{x_n^2+1}{2x_n-1}$$
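Iterating this recurrence from a rough starting guess converges to $\varphi$ in a handful of steps (a minimal Python sketch; the starting value 1.5 is an arbitrary choice near the root):

```python
import math

# Newton iteration x -> (x^2 + 1)/(2x - 1) for the positive root of x^2 - x - 1
x = 1.5
for _ in range(6):
    x = (x * x + 1) / (2 * x - 1)

phi = (1 + math.sqrt(5)) / 2   # reference value for comparison
```

Since Newton's method converges quadratically near a simple root, six iterations are far more than enough for machine precision.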

$\log 2$

The Taylor series for $\log$ converges disappointingly slowly, so alternative methods are needed to compute $\log 2$ efficiently. Common ways include "Machin-like" formulae built from the $\operatorname{arcoth}$ function, analogous to the $\arctan$ formulae used for $\pi$ above. The simplest is $\log 2 = 2\operatorname{arcoth} 3$, since $\operatorname{arcoth} 3 = \frac{1}{2}\log\frac{3+1}{3-1}$, which gives the series $$\log 2 = 2\sum_{k=0}^{\infty}\frac{1}{(2k+1)\,3^{2k+1}}$$
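For instance, the identity $\log 2 = 2\operatorname{arcoth} 3 = 2\sum_{k\ge 0} \frac{1}{(2k+1)\,3^{2k+1}}$ (which holds because $\operatorname{arcoth} 3 = \frac12\log\frac{3+1}{3-1}$) can be summed in a short Python sketch:

```python
import math

total = 0.0
power = 1 / 3                  # holds 3^-(2k+1)
for k in range(25):
    total += power / (2 * k + 1)
    power /= 9                 # advance to the next odd power of 1/3
log2_approx = 2 * total
```

The terms shrink by nearly a factor of 9 each step, so 25 terms already exceed double precision; the raw series $\log 2 = 1 - \frac12 + \frac13 - \cdots$ would need billions of terms for the same accuracy.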

Different irrationals call for different techniques. $\phi=(1+\sqrt5)/2$ just involves calculating $\sqrt5$, which can be done easily by Newton's method from introductory calculus. The infinite series $$e=1+1+1/2+1/6+1/24+\cdots$$ whose denominators are the factorials, can be used to calculate $e$. For $\pi$, this article on the Gauss–Legendre algorithm will give you some ideas.

Gerry Myerson's answer above is correct in saying that different irrational numbers lead to different techniques. In essence, though, all those techniques boil down to one idea: Find some sort of method (formula, infinite series, algorithm, etc.) that when used, will yield a decimal expansion that will converge to the value of the irrational (or rational, for that matter!). Naturally, certain techniques are more useful in certain circumstances (e.g., in computing, techniques that converge very quickly, but also result in as few processor instructions as possible are preferred).

As an aside, my personal favorite formula for $\pi$ was given by Ramanujan:
$$\frac{1}{\pi}=\frac{2\sqrt{2}}{9801}\sum_{k=0}^{\infty}\frac{(4k)!\,(1103+26390k)}{(k!)^4\,396^{4k}}$$

$$\zeta(3)=1.20205690315959428539973816151144\ldots$$

The number $$\zeta (3)=\sum_{n=1}^\infty \frac{1}{n^3} \tag{1}$$ is called Apéry's constant, because its irrationality was first proved by Roger Apéry. The following series, which converges to $\zeta (3)$ much faster than $(1)$, can be used to compute it: $$\zeta(3)=\frac{5}{2}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^3\binom{2n}{n}}$$
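The accelerated series $\zeta(3)=\frac{5}{2}\sum_{n\ge 1}\frac{(-1)^{n-1}}{n^3\binom{2n}{n}}$, used by Apéry, has terms that shrink roughly like $4^{-n}$ thanks to the central binomial coefficient; a minimal Python sketch:

```python
import math

# Apery's accelerated series for zeta(3).
total = 0.0
for n in range(1, 35):
    total += (-1) ** (n - 1) / (n ** 3 * math.comb(2 * n, n))
zeta3 = 2.5 * total
```

A few dozen terms reach double precision, whereas the defining series $(1)$ converges only like $1/n^2$.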

There are many other methods to compute $\pi$, including algorithms able to find any digit of $\pi$'s hexadecimal expansion independently of the others. As I remember, Wikipedia has a lot on methods to compute $\pi$. Moreover, as $\pi$ is a number intrinsic to mathematics, it shows up in many unexpected places, e.g. in a card game called Mafia; for details see this paper.

As for $e$, there are also power series and continued fractions, but there exist more sophisticated algorithms that can compute $e$ much faster. And for $\phi$, there is a simple recurrence based on Newton's method, e.g. $\phi_{n+1} = \frac{\phi_n^2+1}{2\phi_n-1}$. It is also worth mentioning that the continued fraction for the golden ratio contains only ones, i.e. $[1;1,1,1,\ldots]$, and the successive convergents are ratios of consecutive Fibonacci numbers, $\frac{F_{n+1}}{F_n}$.
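The Fibonacci-ratio approximation of $\phi$ is easy to check numerically (a minimal sketch; 40 steps is an arbitrary but more than sufficient choice):

```python
# Successive Fibonacci ratios F(n+1)/F(n) converge to the golden ratio.
a, b = 1, 1                   # consecutive Fibonacci numbers
for _ in range(40):
    a, b = b, a + b
ratio = b / a                 # approximates phi = (1 + sqrt(5)) / 2
```

The error shrinks geometrically (like $\phi^{-2n}$), so 40 ratios already agree with $\phi$ to machine precision.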

To conclude, most of the methods shown here take one of two forms: compute better and better ratios (with each fraction calculated exactly), or work with approximations throughout but set up a process that eventually converges to the desired number. In practice this distinction is not sharp, but the tools used in the two approaches are usually different. Useful tools: power series, continued fractions, and root-finding.