"How does your record compares to the previous one ?The previous Pi computation record of about 2577 billion decimal digits was published by Daisuke Takahashi on August 17th 2009. The main computation lasted 29 hours and used 640 nodes of a T2K Open Supercomputer (Appro Xtreme-X3 Server). Each node contains 4 Opteron Quad Core CPUs at 2.3 GHz, giving a peak processing power of 94.2 Tflops (trillion floating point operations per second).

My computation used a single Core i7 Quad Core CPU at 2.93 GHz giving a peak processing power of 46.9 Gflops. So the supercomputer is about 2000 times faster than my computer. However, my computation lasted 116 days, which is 96 times slower than the supercomputer for about the same number of digits. So my computation is roughly 20 times more efficient. This can be explained by the following facts:

* The Pi computation is I/O bound, so it needs very high communication speed between the nodes on a parallel supercomputer. So the full power of the supercomputer cannot really be used.
* The algorithm I used (Chudnovsky series evaluated using the binary splitting algorithm) is asymptotically slower than the Arithmetic-Geometric Mean algorithm used by Daisuke Takahashi, but it makes a more efficient use of the various CPU caches, so in practice it can be faster. Moreover, some mathematical tricks were used to speed up the binary splitting. " ( http://bellard.org/pi/pi2700e9/faq.html [bellard.org] )


If you reply to the first post, it's almost just as good as having the 1st post. You get more views. So then the next guy replies to that reply, so on and so forth. Mods be on the lookout for offtopic.

In another thread someone had posted that there was no need for modern CPUs; the idea being that anything one could reasonably want to do with a computer was possible with decade-old hardware.

This.. *This* article is why I enjoy the breakneck pace of processor speed improvements. The thought of being able to do some pretty serious computing on a relatively inexpensive bit of hardware -- even if it takes half a year to get results -- does what the printing press did. It allows the unwashed masses (of which I am one) a chance to do things that were once only the realm of researchers in academia or the corporate world. Sure, all that you need to do some serious mathematics is a pen and paper, but more and more discoveries occur using methods that can only be performed with a computer.

There's always the argument that cheap computers and cheap access to powerful software pollutes the space with hacks and dilettantes. People have said this about desktop publishing, ray tracing, and even the growth of Linux. But it's this ability to do some amazing things with computers that makes it all worthwhile.

Large number theory, factorization, optimizations that offer 2000x speedups in this field, specific information for desktop computers... what is this story about, if not all of that?

Psychologically, it's rather obvious that it's peacocking. That said, it may have some positive side effects down the road, if nothing else for making more people understand that tailoring solutions around bottlenecks can often give better results than raw power.

If the expression of pi in any base is random, eventually any message you want will be found within it. You do not need 1000K monkeys to output all of some dead guy's work, which is good, because that would be a lot of monkey doo you would have to clean up waiting for them to finish. And you would have copyright problems to boot.

Like a Mandelbrot fractal, or the number e, Pi has the interesting property that it's full of detail, but derived from a simple rule. So yes, it is interesting.

What I find most interesting about such things is their universality. If one was to suggest the existence of some God, then go on to say that this God created the universe where we live, one could still never claim that Pi had been created or assigned a value at that time. No, Pi was never created, it simply is, and it is everywhere always the same.

No, Pi was never created, it simply is, and it is everywhere always the same. A universe with a different value of Pi would be impossible.

Last I checked, the value of Pi we use is the one for Euclidean geometry. In non-Euclidean geometries, the circumference-to-diameter ratio takes different values, depending on the curvature of the surface that is the basis for the geometry.

Note, by the by, that we don't actually live in a Euclidean geometry, but the curvature is small enough that the ideal Euclidean value of Pi still works.
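To make the non-Euclidean point concrete, here's a small sketch (the function name and the spherical setup are my own illustration, not from the posts above): on a sphere of radius R, a circle of geodesic radius r has circumference 2*pi*R*sin(r/R), so the circumference-to-diameter ratio measured on the surface falls below the Euclidean Pi.

```python
import math

def cd_ratio_on_sphere(r, R):
    """Circumference-to-diameter ratio for a circle of geodesic radius r
    drawn on a sphere of radius R. The circumference is 2*pi*R*sin(r/R);
    the 'diameter' measured along the surface is 2*r."""
    return math.pi * R * math.sin(r / R) / r

print(cd_ratio_on_sphere(1.0, 1.0))   # well below math.pi
print(cd_ratio_on_sphere(1e-6, 1.0))  # essentially math.pi: the Euclidean limit
```

For small circles the ratio approaches the usual 3.14159..., which is the "curvature is small enough" point above.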

Hmm, for such a record attempt, do you actually have to calculate all these earlier digits? They're already known. Can anyone prove the computer calculated the already known digits first (instead of getting them from a table) before finally getting to the 120 million new ones?

Pi is interesting in that regard -- there are algorithms that can compute the Nth digit without needing to compute the intermediate digits. If you want to compute all digits from 0 to N, however, there are more efficient algorithms.
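For the curious, the digit-extraction idea can be sketched in a few lines. This is a textbook-style implementation of the Bailey-Borwein-Plouffe formula using plain floats (fine for modest positions; serious use needs more care with precision), not Bellard's actual code:

```python
def pi_hex_digit(n):
    """Hex digit of Pi at position n after the point (n >= 1),
    via the Bailey-Borwein-Plouffe digit-extraction formula."""
    def series(j):
        # fractional part of sum over k of 16^(n-1-k) / (8k+j)
        s = 0.0
        for k in range(n):  # terms with non-negative powers of 16,
            d = 8 * k + j   # reduced with modular exponentiation
            s = (s + pow(16, n - 1 - k, d) / d) % 1.0
        for k in range(n, n + 15):  # a short tail of tiny terms
            s += 16.0 ** (n - 1 - k) / (8 * k + j)
        return s % 1.0

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return "%x" % int(x * 16)

print("".join(pi_hex_digit(i) for i in range(1, 9)))  # 243f6a88, matching Pi = 3.243f6a88...
```

The key trick is that the big powers of 16 are only ever needed modulo a small denominator, so no arbitrary-precision arithmetic is required.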

Do they verify these numbers somehow?
Anyone can write down a series of numbers and claim it's a specific sequence.

Not saying these numbers aren't correct, just a thought.

Perhaps this is why you should read the article. The press release [bellard.org] answers this question directly.

The binary result was verified with a BBP-type formula found by the author, which directly gives the n'th hexadecimal digit of Pi. With this formula, the last 50 hexadecimal digits of the binary result were checked. A checksum modulo a 64 bit prime number done in the last multiplication of the Chudnovsky formula evaluation ensured a negligible probability of error.

The conversion from binary to base 10 was verified with a checksum modulo a 64 bit prime number.
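The modular-checksum trick is easy to illustrate. Below is my own minimal sketch of the idea (not Bellard's code): verify a huge multiplication by checking it modulo a 64-bit prime, which catches a random corruption of the result with overwhelming probability.

```python
# Largest prime below 2^64; any large prime would do for this sketch.
P = 2**64 - 59

def checked_mul(a, b):
    """Multiply two big integers, then verify the product with a checksum
    modulo a 64-bit prime. A random bit error in c almost surely changes
    c mod P, so a mismatch flags a hardware or software fault."""
    c = a * b
    if c % P != ((a % P) * (b % P)) % P:
        raise ArithmeticError("checksum mismatch: likely bit error")
    return c
```

The check costs only a few small modular operations on top of the huge multiplication, which is why it's cheap enough to run on every step of a 100-day computation.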

The Bailey-Borwein-Plouffe formula [wikipedia.org] for quickly generating an arbitrary digit of Pi is fascinating. Reading the FAQ, he doesn't offer the entire expansion of Pi, but does offer excerpts. The devil's advocate in me can't help but point out that he could have prepared these excerpts using the above formula, rather than actually calculating Pi to that many places. I suppose even if he did offer the entire result, nobody could verify it except by generating it themselves.

No. Most software is large and complex, so doing something like mathematical induction over all the code is infeasible.

In contrast, code to calculate something like this is comparatively tiny.

Effectively, the reason your argument doesn't hold is that although we can fairly trivially prove some algorithms correct, the method doesn't scale to the size of pretty much any piece of modern software.

The implementation (compiled or uncompiled) is itself an algorithm which can equally be checked, because the language follows pre-defined logical rules which may act as axioms; depending on the details of the algorithm, it may even be trivial to just use induction.

It's not like we're checking a full operating system or office suite here, so size isn't a restrictive problem in such a proof.

It may be that the processor itself hasn't been checked, so the result of executing that algorithm might not be correct either.

A standard desktop PC was used, using a Core i7 CPU at 2.93 GHz. This CPU contains 4 physical cores. We put only 6 GiB of RAM to reduce the cost. It means the amount of RAM is about 170 times smaller than the size of the final result, so the I/O performance of the mass storage is critical. Unfortunately, the RAM had no ECC (Error Correcting Code), so random bit errors could not be corrected nor detected. Since the computation lasted more than 100 days, such errors were likely [12]. Hopefully, the computations included verification steps which could detect such errors.
For the mass storage, five 1.5 TB hard disks were used. The aggregated peak I/O speed is about 500 MB/s.

Verification was actually quite easy, due to the (totally unexpected at the time!) discovery of the Bailey-Borwein-Plouffe [wikipedia.org] algorithm, which allows you to directly calculate the N'th hexadecimal digit of Pi without having to determine any other digits.

He used this on a bunch of the last digits and they all came out correct, which makes the probability of an error extremely small.

Last year I used his algorithm to calculate 1e9 (i.e. a US billion) digits of pi and made them searchable:

There is a program package for Linux called Sage where you can get your constants to a lot of digits. For example, to accurately calculate the circumference of a circular table with diameter=1000 mm you could type:

1000*pi().n(digits=1000000)

All you need now is a decent measuring tape...
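If you don't have Sage installed, a plain-Python equivalent takes only a few lines of fixed-point integer arithmetic. This sketch uses Machin's formula (my choice for brevity; Sage uses faster internals) to get Pi, and hence the table's circumference, to as many digits as you like:

```python
def arctan_inv(x, one):
    """arctan(1/x) in fixed point, scaled by 'one', via the Taylor series."""
    power = one // x
    total = power
    k = 1
    while power:
        power //= x * x
        k += 2
        total += -(power // k) if (k // 2) % 2 else power // k
    return total

def machin_pi(n):
    """Pi to n digits after the decimal point, as a string, using
    Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    one = 10 ** (n + 10)  # 10 guard digits against truncation error
    p = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    s = str(p // 10 ** 10)
    return s[0] + "." + s[1:]

print(machin_pi(30))  # 3.141592653589793238462643383279
```

Multiplying by the 1000 mm diameter is then just a matter of shifting the decimal point three places.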

These two also work, in case you're worried about not getting good enough accuracy when you calculate Fourier coefficients or something:

He also wrote the Obfuscated Tiny C Compiler (http://bellard.org/otcc/) in 2002 for the Obfuscated C contest; otcc could compile itself. This became the Tiny C Compiler (TCC), a capable, fast C90/C99 compiler, which was later picked up by Rob Landley (but subsequently dropped a while later).

His projects page (http://bellard.org/) and the older projects (http://bellard.org/projects.html) contain a lot of interesting projects.

As he points out himself, he doesn't really care about calculating digits of Pi; it's a convenient hook on which to hang an interesting algorithms challenge. From the FAQ:

I am not especially interested in the digits of Pi, but in the various algorithms involved to do arbitrary-precision arithmetic. Optimizing these algorithms to get good performance is a difficult programming challenge.

He also mentions elsewhere that of his code, "The most important part is an arbitrary-precision arithmetic library able to manipulate huge numbers stored on hard disks."

There is an algorithm now for calculating the nth digit of Pi at a whim.

The algorithm [wikipedia.org] only works for hexadecimal digits. There is no known formula or algorithm for calculating the n-th decimal digit directly.

Having said that, the existence or non-existence of an n-th digit algorithm has no bearing on the silliness or non-silliness of computing trillions of digits of Pi, unless the algorithm is extremely cheap (i.e. computing the digit takes less CPU time than a byte of I/O), which is not the case here.

I present here a way of computing the nth decimal digit of pi (or any other base) by using more time than the [BBP] algorithm but still with very little memory.

The algorithm you linked to requires cubic time in n. It hardly qualifies as "calculating the n-th decimal digit directly" given that the naive approach (calculating every single digit between 1 and n, and throwing away all but the last digit) is faster than cubic time.

The only advantage of the algorithm you linked to is that it requires constant space.

Knowing how to calculate the nth digit of Pi is, in itself, slightly silly.

The observable universe is about 50 billion light years across, which is about 4.7 * 10^26 meters. If we take a ring of atoms each roughly 1 Angstrom (10^-10 meters) apart with a diameter the size of the observable universe and want to determine the circumference of the resulting circle, then knowing Pi to 40 or so places is sufficient that the error caused by the atoms themselves is greater than that introduced by using an approximation of Pi.
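The arithmetic is easy to reproduce with Python's decimal module. The figures below are the parent post's ballpark numbers (universe diameter roughly 4.7*10^26 m, atom spacing ~1 Angstrom), not precise cosmology:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

diameter = Decimal("4.7e26")  # observable universe in metres (ballpark)
pi_40 = Decimal("3.141592653589793238462643383279502884197")
pi_39 = Decimal("3.14159265358979323846264338327950288419")  # one digit fewer

# Circumference error from dropping the last digit of Pi above:
error = (pi_40 - pi_39) * diameter
print(error)  # a few 1e-12 m, far below the 1e-10 m atom spacing
```

So at 40 significant digits the truncation error is already smaller than an atom; every digit beyond that is physically meaningless for measurement.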

That is a great engineering answer. Technically correct, with all the factors calculated brilliantly.

However Pi is not just used for measuring physical objects.

An engineer, a physicist, and a mathematician were on a train heading north, and had just crossed the border into Scotland.

The engineer looked out of the window and said "Look! Scottish sheep are black!"
The physicist said, "No, no. Some Scottish sheep are black."
The mathematician looked irritated. "There is at least one field in Scotland, containing at least one sheep, at least one side of which is black."

I believe in "Contact" (the book by Carl Sagan, not the movie), the travelers ask the superintelligent aliens "Do you believe in God? To which they reply: "Yes" When asked why, they say "We have proof" in the finding of a message in a transcendental number (pi?).

After reading the Wikipedia summary I understand that when the travelers come home and are accused of fabricating the whole thing, one of them tries to "find" this message by running their own computer program. She finds a message, or does she?

Well given (I think, though may be wrong on this) that pretty much any finite sequence of digits will show up in the decimal expansion of pi at some point, there should be a raster image of a circle in 1s and 0s buried in it somewhere. Along with a greyscale raster of Goatse.

In any large enough collection of random numbers you are all but guaranteed to find whatever pattern you're looking for, whether it's a hundred thousand zeros in a row or the text of the collected works of Shakespeare. You can test statistically how likely you are to find particular patterns in a collection of numbers of a particular size, though.
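A quick back-of-the-envelope for the "how likely" question. Assuming independent uniform digits (which the digits of Pi are conjectured, but not proven, to resemble), the expected number of hits for a fixed pattern is roughly:

```python
def expected_occurrences(pattern_len, n_digits):
    """Expected number of times a fixed pattern of pattern_len decimal digits
    appears in n_digits independent uniform random digits (ignoring the
    small corrections for self-overlapping patterns)."""
    return (n_digits - pattern_len + 1) / 10 ** pattern_len

# An 8-digit string like a birthday should show up ~27000 times
# in 2.7 trillion digits; a 20-digit string almost certainly not at all.
print(expected_occurrences(8, 2.7e12))
print(expected_occurrences(20, 2.7e12))
```

So short patterns are everywhere, but anything much past 12 digits is unlikely to appear even once in this record.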

Finding patterns can be hard. If you have an idea of what you're looking for you can do much better than if you just want to find any pattern. SETI at Home has a page on this.

Interestingly, if someone has 2.7 trillion digits of Pi stored on their hard drive, they could actually have more than just a message. They may, unwittingly, be in possession of material that breaks copyright law! :-)

From TFA's technical notes: "Unfortunately, the RAM had no ECC (Error Correcting Code), so random bit errors could not be corrected nor detected. Since the computation lasted more than 100 days, such errors were likely [12]."

Great, we have all these digits, but they're mostly useless bits and their reliability is suspect.

He mentions in the "press release" page that the most important thing developed in his code is "an arbitrary-precision arithmetic library able to manipulate huge numbers stored on hard disks", which sounds basic-research-y. There's some more on that in the technical-details PDF, although unfortunately he says he doesn't plan to release the code (somewhat unusual, since most of his projects are free software).

Improving the algorithms for arbitrary precision arithmetic -- that is the area that Fabrice is interested in, not necessarily computing X number of digits of pi. That, and (a) it is interesting, (b) it is a challenge and (c) let's do it for fun.

Well, 'til now I saw the Pi-calculating e-peen waving as something like basic research. Ya know, where you build better computers and then you don't find anything sensible to do with them, so let's have them, say, find the next big prime (ok, being in cryptography I can see an application for that...)

He developed a highly efficient library for arbitrary precision floating-point number calculations, capable of having a desktop machine best a supercomputer. Now go change your signature to "For lack of a better question...";-)

I mean, apart from sheer nerd value, this has absolutely no worth to science or humanity.

Are you sure? I did not read the original article. It would be useless, since I doubt I would understand the math his program is based on.
But when the previous record was done on a multi-million-dollar machine and he did it on a single desktop computer, I think it is not far fetched to assume he must have found some significant improvements to the Pi calculating algorithm.

If this is possible for pi calculation, could something like that also be possible for prime number calculations? Maybe someone like him f