Click Iterate Once to get started. Then keep
clicking. Notice how the result slowly converges on 3.14159265358979.
If you get tired, you can switch to 10 Times. Then switch from
Zeta to Machin and click Iterate Once several
times. See how much more quickly it converges? Then try the other
algorithms! Note: this app is crippled by the fact that JavaScript only
performs 64-bit floating-point math. [JavaScript source]

[Disclaimer: I'm not a math guy, so there are probably all kinds
of inexactitudes in here. Feel free to add corrections to the
comments.]

A post went by on reddit recently showing an employer trying to weed
out candidates. The idea was that you were to find the first 7-digit
palindrome in π, and that was the email address to send your
resume stuffs. (The first 10-digit palindrome is 0136776310.)

Now, searching a string for palindromes is a pretty trivial exercise,
as is finding the first million digits of π on the Internet.

So the only thing that's left is to write a program to compute π for
us! How do we do that? Let us count the ways!

No, let's not. There are too many. We'll just cover a few, and then
show an implementation of the fastest in C using the GNU MultiPrecision
library, GMP.

First, a bit of history

People have been figuring ways to compute π for ages, and most of
them were very very tedious and time-consuming. Archimedes figured it
out to 3 decimal places. 750 years later, we managed to get 4
digits. Another 900 years passed, and we got to 16. In the late 1800s,
hapless William
Shanks busted his buns for 15 years to compute π to 707
places using John
Machin's formula (seen in the app, above). In a stroke of bad luck,
however, he blew it 527 digits in, so the remainder of the estimate is
incorrect. [Nooooooo...!] Fortunately, no one was willing to
spend the time to double-check his work, and he died in the blissful
happiness of a job well-done.

Archimedes had used a geometric calculation to estimate π, but later
on we mathed-up and figured out infinite series.
It turns out, this opens up some serious doors, since π has a habit of
showing up all over the place.

The "go to" formula for many years was John Machin's. Following that
in the early 20th century, super genius Srinivasa
Ramanujan came up with some good speed-demons. In 1987 the Chudnovsky
brothers came up with the blistering Chudnovsky algorithm, which
computes over 14 digits of π per iteration! (These three algorithms are
featured in the app at the top of this page.)

Estimation with Infinite Series

If we have a formula that we can solve for π, and there is an
infinite series that can produce a result for that solution, then we're
golden.

What does that mean?

Here's an example: we know sin π/2 = 1, so
we can solve for π like so: π = 2 arcsin 1.
Now... is there an infinite series for arcsin? Sure, it's:

arcsin z  =  Σ (k = 0 → ∞)  (2k)! z^(2k+1) / [2^(2k) (k!)^2 (2k + 1)]

Notes for non-math types: sigma, Σ,
is a summation, and
basically means we're going to loop k from 0 to infinity, and
keep a running sum of the results of the formula inside the summation.
The exclamation point, !, is the factorial, which is the
product of all integers between that number and 1. And, of course, the
superscripts are exponents.

So the basic plan of attack is to write a for-loop to do the
summation, and then compute the final result, like so:

You might have noticed a slight implementation detail in the
above code. Notably, the summation runs k clear to infinity, and you
have to be somewhere by sometime, so who can wait that long?

So we give up after a while. We run it for a few thousand
iterations, and we decide that's good enough, and call it a day. But as
you saw with the app at the top of this page, some series
converge (approach π) very very VERY slowly. So the quest has
been on for the last few hundred years to come up with equations that
converge more and more quickly so we can wrap it up and have some
dinner.

Using arcsin to compute π this way is
really abysmally slow, so we'll be focusing on Chudnovsky instead.

Getting Precise with GMP

You might have noticed that the app at the top of this page gives out
after just a few digits. This is because JavaScript uses 64-bit IEEE-754 floating point
numbers, and that's as much precision as they offer. Also, some
intermediate calculations might eventually overflow and fail, as well
(since there are a lot of powers and factorials involved).

GMP allows you to set the precision of your computations,
and then has a variety of useful functions to help (e.g. arithmetic,
exponentiation, factorial, etc.) Here's an example that computes √2 to 1024 bits:

As you can see, we have way more digits than will fit in a standard
long double; we have 1024-bits-worth, to be precise.

How many decimal digits are there in 1024 bits, anyway? How can we
figure it out? (Fun Fact: did you know "bit" is a contraction of
"binary digit"? Of course you did!)

Thinking about it intuitively, we know that 1 bit can encode two
values: 1 or 0. We know that 2 bits can store 4 values: 0, 1, 2, and 3.
But for decimal numbers, we need to store the values 0-through-9... so
we need more bits.

3 bits gets us 0-7 (2^3 different values),
which is almost there, but 4 bits gets us 0-15 (2^4), which is too many. Sure we can encode decimal
numbers at one decimal digit per nibble—that's called Binary Coded
Decimal—but it wastes part of a bit.

So intuitively, it's between 3 and 4 bits. Like 3.4 bits per decimal
digit or something. How can we do better?

In general, an x-decimal-digit number can encode 10^x different values. Similarly, a
y-bit (binary digit) number can encode 2^y different values. And for any y-bit
value, there is some x-decimal-digit number that can hold the
same value.

And we know the relation between x and y is some
constant factor (we speculated 3.4 bits-per-decimal-digit, earlier). So
we can say that y = fx, where f
is probably around 3.4, and have fun:

10^x  =  2^y  =  2^(fx)        since we declared y = fx

fx  =  log2(10^x)              solve for fx using the definition of the logarithm

fx  =  x log2(10)

f  =  log2(10)  =  3.3219280948873626

Hey, so it's 3.32192-blahblahblah bits-per-decimal-digit! Our 3.4
guess wasn't bad at all. And now we know how many bits we need to hold
a certain number of digits:

GMP can do a lot more than just square roots. There are oodles of functions, some of which
will make you especially happy if you're trying to implement RSA. For
Chudnovsky, we just need square root, exponent, factorial, and
arithmetic.

Chudnovsky, we meet at last

I'm not even going to begin to pretend I know how the Chudnovsky
equation came about:

426880 √10005 / π  =  Σ (k = 0 → ∞)  (6k)! (13591409 + 545140134k) / [(3k)! (k!)^3 (-640320)^(3k)]
but there it is in all its glory. Solve for π, and you're done!

Again, it's just like the arcsine example, above, where we run the
series for some number of iterations until we feel the estimate is good enough.
Now, for Chudnovsky, we know how many decimal digits it generates per
iteration, so it's easy to compute the number of iterations needed for a
desired result. (I don't know how they figured how many digits per
iteration; maybe someone can enlighten us.)

Here's the inner part of the loop from the source file, chudnovsky.c, that runs the
summation. Obviously the larger iterations is, the longer it's
going to run, and the better the π approximation will be.

You should be able to match the comments in the source up to the
equation to see the individual bits at work. (Note this code is not
optimized for speed in the least; it merely matches the equation as
closely as possible.)

Faster!

One cool thing about the summation is that you can break it up
between cores. You could have one core do half the work, and another
core do the other half, and then add their results together when they're
both done. It's embarrassingly
parallel. It should be possible to take something like OpenMP, which I blogged about earlier, and
speed up the computation considerably.

I'm not going to bother implementing that, since Hanhong Xue and
David Carver did it better. Links to that source are below.