parallel random number generation

I would like to use the VSL random number generators in a parallel Monte Carlo simulation, i.e. with the possibility to distribute the simulation across all processor cores. Regarding this, I have two different cases.

In the first case I would like to accelerate a simulation by distributing it over multiple processor cores. For example, let's say I need to simulate 10000 runs, each run containing 5000 timesteps. That means I need to generate 10000*5000 random variates. My simulation would look something like this:

#define SEED 1
#define BRNG VSL_BRNG_MCG31
#define METHOD VSL_RNG_METHOD_GAUSSIAN_ICDF
#define N 5000

// initialize VSL random generator stream
VSLStreamStatePtr stream;
double a = 0.0, sigma = 0.3;
errcode = vslNewStream( &stream, BRNG, SEED );

for (int i = 0; i < 10000; i++) {
    // simulate one path by generating N = 5000 variates
    double r[N];
    vdRngGaussian( METHOD, stream, N, r, a, sigma );
    for (int j = 0; j < N; j++) {
        // simulate random walk using the variates
    }
}

I would like to parallelize the outer loop. My question is: is it safe to call vdRngGaussian from multiple threads? And am I guaranteed to get independent variates?

The second scenario would be to parallelize multiple simulations. In this case I would like to do one full simulation per thread, and I need to generate independent variates for all simulations. My question here is: what is the right approach to generating the random variates? Should I use one RNG per thread and initialize them with different seeds? I have been told that this is not the best way of getting independent variates. Another method would be to use the leapfrog method. Which is best?

Intel MKL Random Number Generators support parallel Monte Carlo simulations by means of the following methodologies:

1. Block-splitting, which allows you to split the original sequence into k non-overlapping blocks, where k is the number of independent streams. The first stream generates the random numbers x(1),...,x(nskip), the second stream generates x(nskip+1),...,x(2*nskip), etc. The service function vslSkipAheadStream( stream, nskip ) is the way to use this methodology in your application.

2. Leapfrogging, which allows you to split the original sequence into k disjoint subsequences, where k is the number of independent streams. The first stream generates the random numbers x(1), x(k+1), x(2k+1),..., the second stream generates x(2), x(k+2), x(2k+2),..., etc. The service function vslLeapfrogStream( stream, k, nstreams ) will help you parallelize the application by means of this approach.

Both techniques give you streams of independent variates. Once you create nstreams random streams, you can safely call the MKL Gaussian (or any other) RNG with the k-th stream, stream[k], in a thread-safe way. Below are additional notes related to parallelization of Monte Carlo simulations:

1. To parallelize one simulation in your code you might use the block-splitting or leapfrog methodologies with MCG31, MCG59, or another MKL BRNG which supports them. Before running the simulations, please check that the BRNG period covers your application's need for random numbers.

2. Please avoid using one VSL RNG per thread initialized with different seeds. That can result in biased output of the Monte Carlo simulations. Instead, please use one of the methodologies described above.

3. To parallelize multiple simulations you can use one of the methodologies above. The considerations related to the BRNG period apply to this scenario too. Also, please keep in mind the following aspects:

- To apply the skip-ahead technique you will need to compute the number of variates per simulation, the parameter nskip. If this number is not available in advance (or is difficult to compute) for some reason, you might want to use the leapfrog technique.

- The leapfrog technique, however, is recommended only when the number of independent streams k is fairly small.

- The WH or MT2203 BRNG families could also be a suitable option for your needs.

Thanks a lot for your feedback, and please forgive me for this delayed response. Indeed, your explanations clarified things a lot. I will now play around with the various generators using Intel TBB and come back to you with further questions :)

Andrey, just to make sure I understood everything correctly, is the following assertion correct: even in the case of one simulation distributed over many cores, I should initialize one stream per thread using leapfrogging or block-splitting. If I share the same stream among several threads, then not only is it not thread-safe, but I will also introduce correlations in the simulation paths. Correct?

Thanks in advance for your answer.

Regards,
Anwar

Anwar, to share the same random stream among several threads in a thread-safe and "correlation-free" way, you would need to manage access to the random number generation through thread synchronization primitives (so that at any time only one thread uses the stream to produce random numbers). Otherwise, you would be using the stream in a non-thread-safe manner, which can potentially put correlations into your results. To split one simulation across cores, I would suggest using one of the parallelization approaches supported by Intel MKL Random Number Generators and shortly described above. Using those methodologies would also be advantageous from the perspective of performance and effective use of multi-core resources, and would help avoid unnecessary thread synchronization.

Thanks,
Andrey

I'm hitting another stumbling block. I'm trying to use the random generators with Intel TBB, and I'm not sure how/when to initialize them correctly. As an example, let's consider the calculation of pi through Monte Carlo. I would like to use a TBB parallel_reduce algorithm for this. Here's an example I've taken from the Tina RNG manual that I would like to adapt to MKL. For one thing, I can't know in advance how many streams I will need, because this is taken care of by TBB. My guess would be that I need to pass a different stream to each splitting constructor, but how can I know in advance how many streams I will need? Any help would be greatly appreciated!

Regards,
Anwar

I have a few quick suggestions you might want to have a look at.

1. You have the number of samples whose processing you want to split across threads, samples = 1M. I assume that you can check the maximal number of cores on your CPU, say ncore. You can then assign the processing of a range of size k = samples/ncore to one core.

For a TBB blocked range you can specify the grainsize parameter: your blocked_range will not be split into two sub-ranges if the size of the range is less than grainsize (please see Section 4.2.1 of the Intel TBB Manual for more details).

If you set the grain size to k, you will avoid undersubscription (the number of logical threads is not enough to keep the physical threads working) and oversubscription (the number of logical threads exceeds the number of physical threads). In the next step you would associate a VSL random stream with a thread in operator()(const tbb::blocked_range &range) by using the boundaries of the range, e.g. idx = range.end()/k - 1. So, at a high level, the general scheme could look like this (modifications of this scheme are possible):

a) Determine the number of cores on the CPU, ncore.

b) Create ncore MKL random streams by applying one of the VSL parallelization techniques.

c) Construct an object of type parallel_pi, providing the number of cores, the number of samples, and the array of random streams.

d) Compute the index idx of the block being processed and obtain random numbers from the stream indexed idx. Use nsamples/ncore for the grainsize in blocked_range.

2. Also, please have a look at the chapter "Catalog of Recommended Task Patterns". The methodology described there allows creating and spawning k tasks; each task would process the random numbers obtained from a specific VSL random stream created using one of the parallelization techniques.

Hi Andrey, I really want to thank you for your time in answering all of my questions :). I tried to implement the second method, i.e. spawning tasks and using the leapfrog method. I must be doing something wrong, because my program crashes just when the RNG is called, before generating the variates.

OK, I reduced the samples from 1M to 100000 and now the program no longer crashes. Also, there was a bug in class PiCalculator: variable "in" should be declared as long& and not long. However, I still don't understand why the program crashes when the sample size is large. Normally VSL_BRNG_MCG31 has a period of 2^31, which should be enough in this case. Besides, from my understanding, if the period were not long enough the variates would simply repeat?

The period of the Intel MKL MCG31m1 BRNG should be sufficient for the goals of this demo application. However, if the application requests a number of random variates that exceeds the period, you would see repeated random numbers in the sequence. The root of the crash seems to be in operator(): the size of the buffer which is allocated on the stack and used for the random numbers is 2 * samples * sizeof(double) = 2 * 1M * 8 = 16 MB. To avoid the issues with buffer size and to improve the performance of the application, I modified your code as shown below. The essence of the changes is the use of a fixed-size buffer for the random numbers. The size of the buffer is chosen to get the best performance of the application (you would probably need several experiments to choose the best size).

Hi Andrey, I'm trying to do something similar in Fortran, but am having a bit of a hard time following the above posts (and the vslnotes.pdf file) as I know nothing about C. In particular, I'd like to parallelize a simple simulation using OpenMP. The idea is to generate NSIM independent samples of N observations, calculating some statistic of interest for each sample. For example:

      PROGRAM MC
      IMPLICIT NONE
      INTEGER I, N, NSIM
      PARAMETER (N=100, NSIM=1000000)
      DOUBLE PRECISION X(N), MUHAT(NSIM)
!$OMP PARALLEL SHARED(MUHAT) PRIVATE(I,X)
!$OMP DO
      DO I=1,NSIM
        CALL RANDOM_NUMBER(X)
        MUHAT(I) = SUM(X)/DBLE(N)
      END DO
!$OMP END DO
!$OMP END PARALLEL
      END PROGRAM MC

I have no idea whether RANDOM_NUMBER() is actually thread-safe, but, in reality, I would be using some non-uniform (e.g., normal) generator from the MKL anyway. Any suggestions on which BRNG to use and whether to use leapfrogging or block-splitting (or neither) would be very helpful. To put the problem in context, NSIM will be very large (1 million), N is medium-sized (100 or 1000), and my machine has 12 cores (i.e., the parallel parts above are spread across 12 threads). Thanks in advance.

Hi Andrey, thanks a lot for your advice! I modified the code accordingly, and now everything works like a charm! Your trick of breaking the simulation down into blocks in order to fit in memory is quite nice. I also played around with the number of TBB tasks, and I get good speedup results with 50 tasks: the simulation runs in 70 sec with one task and in 19 sec using 50 tasks. Looking at my CPU bar I can see that all processor cores are used while running the simulation, showing that TBB correctly distributes the tasks across all physical threads. Here's the final version of the code with your modifications, in case anyone wants to play around with it:

At first glance, the problem you are solving is similar to the problem Anwar described earlier. You need to parallelize Monte Carlo simulations and process the random numbers using some algorithm (say, compute some statistics).

The dimensions you mentioned in the post indicate that MKL Random Number Generators together with OpenMP* look like a suitable choice for your simulations. The methodology for parallelization of the simulations with Intel MKL Random Number Generators is "language independent". Before choosing the type of basic RNG and the parallelization methodology, you would need to better understand the requirements on the generator: 1. How many numbers will you need (especially if the number of simulations could increase)?

The list of requirements will help you choose an MKL BRNG that meets the needs of your problem. If you choose the MT2203 BRNG family, which supports up to 6024 streams (in MKL 10.3.3) and has a period of ~10^663, you might want to have an array of ncore VSL random streams (ncore is 12 in your case). Each of them is initialized in the usual way by means of the Fortran version of the NewStream function, vslnewstream. You then need to split the NSIM simulations across ncore threads, that is, assign blocks of simulations to each thread. Assume for simplicity NSIM=120. Then random stream #1 would serve the block of simulations 1-10, the second stream the block 11-20, i.e. the number of simulations per core is 10. Using the RNG, you will generate the arrays of observations in each block. At a high level it would look like this

Please note that, from a performance perspective, it is more effective to call the VSL RNGs with vector lengths of a few thousand. If the number of observations is 100, you might want to group several vectors of observations into one call to the generator.

If you choose a BRNG with the skip-ahead or leapfrog feature for your simulations, e.g. the MCG59 BRNG, the computations would look similar. The only change is the initialization by means of the service functions vslleapfrogstream or vslskipaheadstream. The MKL installation directory contains example/vslf with Fortran examples that demonstrate RNG use; the skip-ahead and leapfrog features are among them.

If you need to compute simple statistical estimates like the mean or covariance, you might want to do it with the Summary Statistics component of VSL. Like the VSL RNGs, it provides a Fortran API and can be called from your application. Please note that the Summary Statistics routines incorporate threading while processing a dataset of size p x n, when appropriate.

Please let me know when/if you parallelize the computations, or ask more questions on the statistical features of MKL.

Hi Anwar, it is great to know that you got a speed-up of the app with MKL RNGs and TBB on a multi-core CPU. Please feel free to ask questions on the Statistics Component of Intel MKL; we would be ready to discuss and help.

Best,
Andrey

I don't know if this can be useful, but here's a simple C program using OpenMP that I guess you could easily translate to Fortran. Here's a quick overview of what it does. First I start by finding how many threads I have available using omp_get_num_procs(). This gives me the number of random streams to initialize, using an array of VSLStreamStatePtr. Then each stream is created and "configured" using the leapfrog method. In the parallel region I use the OpenMP runtime function omp_get_thread_num to determine the thread id. This gives me the index of the corresponding stream to use from the array of streams previously initialized. I'm also doing a toy reduction, but this really could be anything you decide to calculate using the variates. The result of the reduction is available at the end of the parallel region and printed to the console.

Hi Andrey, that is very helpful - thank you. I have included my modified code below, but still have a few questions.

(1) Is it fine to set SEED as a shared scalar that is initialized outside my OMP directives? Other options would be to (a) set it as a private scalar that is initialized within each thread, or (b) set it as a shared vector (of length NCORE) that is initialized outside my OMP directives.

(2) Similar to (1), is it fine that STREAM is set as a private variable? I guess these questions have more to do with OMP, but I'm not sure how to deal with them when using the VSL.

(3) Can I delete each stream independently? On the second-last line of your code snippet, you had status = vsldeletestream( stream ), but I think status = vsldeletestream( stream(i) ) would be more appropriate.

More generally, am I correct to think that methods like skip-ahead or leapfrog would only be useful if there were more threads than streams? In my case, since the number of threads and streams are both equal to NCORE, I can't see any benefit to such methods. Thanks again!

      PROGRAM MC
      IMPLICIT NONE
      INCLUDE "mkl_vsl.f77"
      INTEGER N, NSIM, NCORE
      PARAMETER (N=1000, NSIM=100000, NCORE=10)
c     make sure that NSIM/NCORE is an integer
      INTEGER STREAM(NCORE), SEED, I, J, STATUS
      DOUBLE PRECISION X(N), MUHAT(NCORE,NSIM/NCORE), MU, SIGMA
      SEED = 42
      MU = 0.0D0
      SIGMA = 1.0D0
      CALL OMP_SET_NUM_THREADS(NCORE)
!$OMP PARALLEL DEFAULT(PRIVATE) SHARED(SEED, MU, SIGMA)
!$OMP DO
      DO I=1,NCORE
        STATUS=VSLNEWSTREAM(STREAM(I),VSL_BRNG_MT2203+I,SEED)
        DO J=1,(NSIM/NCORE)
          STATUS=VDRNGGAUSSIAN(VSL_RNG_METHOD_GAUSSIAN_ICDF,
     &      STREAM(I),N,X,MU,SIGMA)
c         now do whatever with x
        END DO
        STATUS=VSLDELETESTREAM(STREAM(I))
      END DO
!$OMP END DO
!$OMP END PARALLEL
      END PROGRAM MC

1. I do not expect any issues if the seed is a shared scalar - in the threads, the library reads this value once to initialize the state of the basic random generator. As another option, you could initialize the array of streams prior to your OpenMP directives, in the serial part of the program.

2. It is important that the i-th entry of the array stream is assigned to the thread indexed i; the expectation is that the i-th thread/core will use only the i-th entry (stream) of the array to produce random numbers, and will not use the j-th entry. From this perspective, the array can be shared among the threads.

4. The choice of the parallelization methodology (skip-ahead, leapfrog, or a BRNG family) is entirely defined by you and the requirements of your problem. In some cases (due to the specifics of the problem) it might be more suitable to use the leapfrog or skip-ahead methodologies for parallelization of Monte Carlo simulations. How many streams (or blocks) to use in your environment is defined on your side - you initialize as many Intel MKL random streams as you need/want. In the example we are considering for a 12-core computer, you could set the number of VSL random streams to 6 if, say, you plan to use the rest of the cores for other computations.

Hi, I need to find the default probability of a derivative via Monte Carlo simulation with C++ in VS2010, with Intel TBB and MKL installed, and only 1 GB of memory.

Let S(t) denote the price of the derivative at time t. The current price is S(0) = 100.

For simplicity, the derivative is defined by (the real derivative is more complex):
With a probability of 99 percent S(t+1) = S(t) * exp(X), with X ~ N(mu=0, sigma=0.01)
With a probability of 1 percent the derivative jumps down to S(t+1) = S(t) * 0.4

If the derivative falls below 10 somewhere in 0 < t <=250 the following happens:
With a probability of 80 percent the price of the derivative at S(t) is set to 1.5 * S(t), or with a probability of 10 percent we have a default.

Let's say I need to run at least 10mn simulations of this derivative and count all losses.
Then my simulated default probability is #defaults / 10mn. The serial code somehow looks like:

How do I parallelize the code?
What is the best strategy for the random number generation? I cannot generate the random numbers of type double a priori, because 10mn * 250 * 2 (at least) = 500mn random number * 8Byte = 4GB.

Is it a good idea to divide the number of simulations into chunks of 10 * number of processors?

1. The blocking approach you are looking at is worth trying, but could be further improved in terms of memory usage and performance. Say, if your system has 8 processors, then you would need 1mn / 8 * 250 * 8 bytes ~ 250 MB for the Gaussian numbers. You also need Bernoulli numbers, which means additional memory.

2. To split the simulations between processors, it makes sense to keep in mind the number of processors, the number of cores per processor, and the size of the last-level cache, in addition to the size of memory (I assume you use a shared-memory computer). I would also pay attention to the first-level cache. The important aspect is that your threads should not "compete" for cache.

3. It appears that the second set of Bernoulli numbers is used once S(t) < 10 at any time t. This means that the number of Bernoulli variates required to process one path is not exactly 250. It can be smaller than 250, say 1, 2, or even zero (we do not know in advance). This reveals an opportunity to save memory and random numbers. The same is the case for the Gaussian numbers (however, we expect that 99% of the Gaussian array will be used).

So, I would imagine the following approach (which, of course, does not exclude other options, as missing details can also be important).
Let M be the number of cores (we can also decouple M from the number of cores and try smaller or larger values):

paths = 10000000;
paths_per_core = paths / M;
// for simplicity, assume paths % M == 0; otherwise the remaining paths must be processed separately

const int T = 250;  // number of Gaussian numbers per path
const int K = 4;    // number of paths generated per call to the Gaussian RNG; try values from 1 up to several
const int N = 1024; // number of Bernoulli random numbers used to simulate a default; try different values

Another quick observation: if you used the Intel(R) C++ compiler, the loop over time could be vectorized, at first glance, and you would get an additional performance speed-up. However, this is the case only if you generate the maximal number of Bernoulli variates for a default in advance, in the block of calls to the RNGs (both calls to the Bernoulli RNG can be combined into one). In this case the loop over time would not contain any calls to MKL functions.

As your algorithm uses the exp() function, you also might want to exploit the vector math functions available in Intel(R) MKL.
The algorithm appears to estimate the probability of a default, and one possible way to do this is to use the Intel(R) MKL Summary Stats algorithm for computation of the mean.

Thank you so much for your very helpful reply. I followed some of your instructions and finally divided the number of simulations into chunks of 100,000 and stored all random numbers needed for one chunk in vector<double> data types. Furthermore, I introduced another serial for loop (serial_iter) to avoid memory problems...

The output was a nice Excel library (xll) containing one function. Calling the function from Excel gave me a speedup of 5.5 on two Xeon X5650s in comparison to the serial code. I was happy to see Excel using 11 cores at roughly 80% for a couple of minutes :-) Unfortunately, I cannot use the Intel(R) C++ compiler in my company (we only have the MKL library available) and couldn't figure out how to do the memory alignment needed for SIMD (SSE) in VS2010. Do you have any ideas how to vectorize my code? Can I make the following run faster using other data types with memory alignment?

vector<double> vecUniform(number_sims_chunk * number_days, 0.0);
vdRngUniform( VSL_RNG_METHOD_UNIFORM_STD_ACCURATE, stream[serial_iter * processors + p], vecUniform.size(), &vecUniform[0], 0.0, 1.0 );

I'm glad to know that you developed the algorithm which uses HW resources in the effective way, excellent progress!

To my understanding, in many (though not all) cases Monte Carlo simulations can demonstrate close-to-linear scalability over the number of cores. You mentioned about a 5.5x speed-up on ~11 cores. Does that mean your code still spends a visible amount of time in serial sections? What scalability do you expect for your algorithm - say, 10x for 11 cores, or something different?

One possible way to vectorize the code in your environment is to use SSE2 intrinsics; see, for example, http://msdn.microsoft.com/en-us/library/9b07190d(v=vs.80).aspx . However, the intrinsics-based approach is not "scalable": once you want to migrate your code to Intel(R) AVX with 256-bit registers, or try Intel(R) Xeon Phi with 512-bit vector registers, you would need to re-develop your vectorized code again to make effective use of the wider vector registers. The Intel(R) C++ compiler does this work for you.

Yes, please apply memory alignment to your arrays wherever possible (however, I'm not sure that this would make the application faster in that specific part of the code).

You also might want to try the VSL_RNG_METHOD_UNIFORM_STD method for the generation of uniform random numbers, if you do not have specific reasons to use the accurate method.

I tried running an example with independent streams in Intel Fortran 12.1.5, and I'm a little surprised by the results. Why are the starting values from streams 1 and 5, and the values from streams 2 and 4, so similar? Is this normal? The output and the code are given below:

Further, I need to run one independent Markov Chain Monte Carlo sequence per stream. The problem is that I do not know beforehand exactly how many random numbers I will need in each chain (stream), though perhaps I can calculate a minimum number. Is using the Vector Statistical Library still the right approach?

The answer to your second question is yes: you can use streams of independent random numbers generated by the MT2203 BRNGs to produce your MCMC. If, for some reason, the MT2203 approach does not work, you might want to try, for example, MCG59 properly initialized with the leapfrog service function. The specific advantage of the leapfrog blocking approach is that you do not need to know in advance how many random numbers will be necessary for one path/chain. The parameter you need to provide to the function is a stride, which is basically the number of threads you are going to use to parallelize your application.

Thanks for your reply and the link explaining the specific nature of certain seeds like 777. I can initialize the streams with different seeds to get widely dispersed starting values. I will proceed now based on your advice on the following matter: for a Markov chain, the decision to move to the next sampling point is made using ONE random number, based on a ratio of probabilities of the previous to the current sampling point. What scheme would you recommend I use to generate ONE random number at a time for each independent sequence? Also, I intend to use OpenMPI or something like it to run the code in parallel on at least 56 cores on a cluster.

If you need N chains (say, N > Ncores), split them between the cores by assigning ~N / Ncores chains to each core. In turn, each core should be supplied with a source of random numbers used to generate those chains. One possible option is to use Ncore streams of MT2203, each of them serving a different core (the maximal number of streams supported by MT2203 is ~6K). Using the random numbers and the blocking technique described above, you would be able to generate the chains one by one on each core, and all cores would produce their blocks of chains in parallel.

Of course, there are more options for parallelizing your MCMC code; the choice of the most suitable one would be defined by the requirements and details of your app, for example:

- would number of nodes/cores and/or number of paths increase in future?

- do you expect that the output would not change numerically when you add additional cores/nodes to your cluster?

From what I understand, then, I should pre-initialize the random numbers for each stream (one stream per Markov chain) in an array, and then use these numbers in each step of the MCMC chain. I do indeed intend to use Ncore streams of the Mersenne Twister. It is just that I will have to be a little smart about how many numbers per stream (where Ncores = Nstreams = N_parallel_chains) to initialize in each chain, because I have more than one probable move per step - and sometimes the ratio does not need to be calculated if the proposed candidate sample is outside the prior probability bounds. However, this is not a major issue, as I can safely calculate the maximum number of random numbers per chain (~10^7) I will need, and can have them pre-initialized in an array (per chain) as you suggested.

About the questions you raise:

1) I do not expect the number of cores to go up significantly beyond 96 cores

2) I do not expect the output to change significantly on adding more than 96 cores

3) I am not sure I need an extremely large period, as this is not purely random Monte Carlo sampling, but Markov Chain Monte Carlo sampling which is a directed random walk, the directions being given by a model likelihood function. If the starting models are dispersed widely enough, then a period as low as 10^7 (per chain) is fine for me.

Based on the additional details you provided, the MT2203 streams appear to be a reasonable way to start parallelizing your MCMC. I wonder if you would be able to apply blocking similar to what I described earlier? Also, another question is about the number of chains: what are the reasons you correlate the number of MCMC chains with the number of cores, Ncores = N_chains? What if the number of chains you need is much greater than the number of available cores?

Perhaps the blocking technique will be useful, but I just tried the separate-streams approach you outlined above (pre-initialize the random numbers before MCMC). Separate streams are quite easy to implement: for each chain, I implemented 2 separate streams, one with vRngUniform and the other with vRngGaussian, as I needed both uniform and normally distributed (Gaussian) numbers (see code below).

I correlate the number of chains and the number of cores (N_cores = N_chains) because we're working on massive clusters where a lot of cores are available at any one time. Also, we're working with geophysical problems where one independent chain on one core takes a day to finish. Running more than one chain per core would take too long, I suspect.

I ran into exactly the same problem. Thanks for both the question and the great answer!!
