Calculating Pi to 10 Trillion Digits; the last digit is 5

In August 2010, [Alexander Yee] and [Shigeru Kondo] won a respectable amount of praise for calculating pi to more digits than anyone else. They’re back again, this time doubling the count to 10 trillion digits.

The previous calculation of 5 trillion digits of pi took 90 days on a beast of a workstation: two Xeon processors running at 3.33 GHz, 96 Gigabytes of RAM, and 32 Terabytes worth of hard drives. The 10 trillion digit attempt used the same hardware, but needed 48 Terabytes of disk to store everything.

Unfortunately, the time needed to calculate 10 Trillion digits didn’t scale linearly. [Alex] and [Shigeru] waited three hundred and seventy-one days for the computer to finish the calculations. The guys used y-cruncher, a multithreaded pi benchmarking tool written by [Alex]. y-cruncher calculates hexadecimal digits of pi; conveniently, it’s fairly easy to find the nth hex digit of pi for verification.
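The hex-digit verification mentioned here relies on the Bailey-Borwein-Plouffe (BBP) formula, which lets you compute the nth hexadecimal digit of pi without computing any of the digits before it. A minimal sketch of the idea (not y-cruncher's actual verification code) in Python:

```python
def bbp_hex_digit(n):
    """Return the hex digit of pi at position n (0-indexed after the
    point) via the Bailey-Borwein-Plouffe digit-extraction formula."""
    def series(j):
        # Fractional part of sum over k of 16^(n-k) / (8k + j).
        # Three-argument pow does the modular exponentiation cheaply.
        total = 0.0
        for k in range(n + 1):
            total = (total + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        # A few tail terms where 16^(n-k) < 1
        for k in range(n + 1, n + 10):
            total = (total + 16.0 ** (n - k) / (8 * k + j)) % 1.0
        return total

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)

# pi in hex is 3.243F6A88..., so the first fractional digits are 2, 4, 3, F
print([bbp_hex_digit(i) for i in range(4)])
```

Floating-point round-off limits this naive version to modest positions; serious digit-extraction runs use extra precision, but the structure is the same.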

If you’re wondering if it would be faster to calculate pi on a top 500 supercomputer, you’d be right. Those boxes are a little busy predicting climate change, nuclear weapons yields, and curing cancer, though. Doing something nobody else has ever done is still an admirable goal, especially if it means building an awesome computer.

Calculating Pi with the methods described in the OP is useful for checking whether the second link I attached really does what it’s supposed to: calculate a given place in Pi without having to calculate the preceding digits.

All of this is useful for the first link that I posted, which aims to “compress” files by addressing parts of their data into sections of Pi. Which is really cool, as the address list you derive in the first pass can be compressed again before storing or sending to a remote server for reconstruction.

I plan to use these methods in the third link that I posted, where I’m also combining cryptocurrencies and MPI for ARM CPUs. It will also work on RPis and microcontrollers.

Hmm. Well, I’d heard long ago that you could create a circle around the visible universe with an error of less than the width of a proton with only X digits of pi, but when I googled it, I found X = 32, 39, 41, 43, 47, and 50. Well, a bit less than 10 trillion anyway. :}
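The back-of-the-envelope version of that claim is quick to check. Assuming a rough observable-universe diameter of ~8.8×10^26 m and a proton width of ~1.7×10^-15 m (both figures are my assumptions, not from the comment), an error δ in pi shifts a circumference C = πd by δ·d, so you need about log10(d/w) digits:

```python
import math

# Assumed figures: observable-universe diameter and proton width (meters)
universe_diameter = 8.8e26
proton_width = 1.7e-15

# An error delta in pi changes the circumference by delta * diameter,
# so we need delta < proton_width / diameter.
digits_needed = math.ceil(math.log10(universe_diameter / proton_width))
print(digits_needed)  # ~42, consistent with the 32-50 range above
```

Different choices for the two input figures explain the spread of answers the commenter found.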

i remember deducing it as the limit of some equation for the radius of a polygon with the number of sides going to +inf.
it was just a matter of putting in like 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 sides and you get a lot of accuracy in the result
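That polygon limit is essentially Archimedes’ method: inscribe a regular polygon in a unit circle and keep doubling the number of sides. A quick sketch, using the numerically stable form of the side-doubling recurrence (the textbook form `sqrt(2 - sqrt(4 - s^2))` loses precision to cancellation as s shrinks):

```python
import math

def archimedes_pi(doublings=20):
    """Approximate pi by doubling the sides of a polygon inscribed in a
    unit circle, starting from a hexagon (side length exactly 1)."""
    sides, s = 6, 1.0
    for _ in range(doublings):
        # Stable rearrangement of s' = sqrt(2 - sqrt(4 - s^2))
        s = s / math.sqrt(2.0 + math.sqrt(4.0 - s * s))
        sides *= 2
    # Half the perimeter of the inscribed polygon tends to pi
    return sides * s / 2.0

print(archimedes_pi())
```

Each doubling roughly quadruples the accuracy, so you don’t actually need an absurd number of sides: 20 doublings (about 6.3 million sides) already gives pi to ~13 decimal places.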

I remember reading an awesome article on the Chudnovsky brothers (I think the New Yorker piece linked on Wikipedia); it sounded as if their whole house was given over to a supercomputer, and it also outlined their various hardships. It would be nice to see an update.

I actually have access to a top 500 computer for supersonic flow analysis, so I might be tempted to give this a go. I’m not sure if I should say exactly which computer, but I will say it’s got over 1500 Xeon 5660 processors and nearly 20 TB of RAM.

I have followed Alexander Yee’s work for a while, and there are some very good reasons that GPUs are not ideal for calculating pi. The main reason, I think, is memory bandwidth: when you’re multiplying very long numbers, you’re constantly having to go out to RAM, and that is far from speedy on current GPUs. Clustering has similar issues, but the main issue with clusters, I think, is effective multithreading. Alex discusses that too at his website.
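For context on why long multiplication is so memory-hungry: past a few thousand digits it’s done with FFT-based convolution rather than the schoolbook method, which means streaming entire digit arrays through transforms. A toy base-10 version (real programs like y-cruncher use far more sophisticated, cache-aware transforms; this just shows the shape of the algorithm):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return a
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x, y):
    """Multiply two non-negative ints by convolving their digit lists."""
    a = [int(d) for d in str(x)][::-1]  # least-significant digit first
    b = [int(d) for d in str(y)][::-1]
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = fft([complex(d) for d in a] + [0j] * (n - len(a)))
    fb = fft([complex(d) for d in b] + [0j] * (n - len(b)))
    # Pointwise product in the frequency domain = convolution of digits
    prod = fft([u * v for u, v in zip(fa, fb)], invert=True)
    digits = [round(c.real / n) for c in prod]
    carry, out = 0, []
    for d in digits:           # propagate carries back to base 10
        carry += d
        out.append(carry % 10)
        carry //= 10
    while carry:
        out.append(carry % 10)
        carry //= 10
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return int("".join(map(str, out[::-1])))

print(multiply(123456789, 987654321))
```

Every multiplication touches all the digits of both operands several times over, which is exactly why RAM (and disk, at 10 trillion digits) bandwidth ends up dominating.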