some days ago i started thinking about a fast and simple random number generator of a few lines of code, wondering about the use of MUL and RDTSC. after some "deviations" from the theory :D i stumbled upon the following idea of mine:

1gb of data generated by this rand, tested with ent:

Entropy = 8.000000 bits per byte.
Optimum compression would reduce the size of this 1073741824 byte file by 0 percent.
Chi square distribution for 1073741824 samples is 268.19, and randomly would exceed this value 50.00 percent of the times.
Arithmetic mean value of data bytes is 127.4989 (127.5 = random).
Monte Carlo value for Pi is 3.141590272 (error 0.00 percent).
Serial correlation coefficient is 0.000092 (totally uncorrelated = 0.0).

it's fairly late, but i'll chime in on this some more tomorrow.

that's very, very nice. with those values it deserves more than passing attention. i must say i was thinking of something like a delta between 2 or more RDTSC instructions, precisely because it is a non-serializing instruction... i need some time for a diehard battery test on it, and eventually to port it to 64-bit SSE. stay tuned, and thanks a lot,

well, after diehard testing, i must say that the algo performs very well. all values are as expected, or within the average. i can share the results if someone needs them. the algo is very simple and straightforward. now, calling ECX the "state" and EBX the seed, the very interesting part is part 2, after the second RDTSC, where the seed gets elaborated. i find the following line funny,

add eax, esi

because the fact that an offset in memory is being added to the final random number makes me guess that, with some small modifications, this algo may be totally reversible, i.e. given a state and a seed, one should be able to rebuild the memory offset from the random number.

Don't forget to execute EMMS after using MMX, else the next FPU instruction will cause an exception because the FPU stack is in a bad state. (Not required for SSE, which is one of the best reasons to always use SSE registers for MMX-style operations.)

if i remember correctly diehard has been unsupported for quite some time; most people nowadays are using NIST's randomness testing suite: http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_software.html

if i have some time this weekend i'll run my code through the new NIST tests and see how it pans out.

yes, diehard is somewhat old and verbose, but good. i have taken a look at the NIST package and the code inside is clean and effective. i think it deserves a port to assembly, because it is almost 100% optimizable. when i have some time i will start doing it as a coding exercise on the contained functions. thanks for sharing.

i have managed to make my algo behave in a more stable way with respect to the chi-square, at the moment about +20% above the expected value. considering the other values are ok, the whole is acceptable, but somehow not fully satisfying.

For each step (i.e. a request for a random number):

1. Generate a Park-Miller random, and set it as the seed for Mersenne.
2. Generate a Mersenne random, and set it as the seed for Park-Miller. This is the random number for your request.
3. At some count of iterations, periodically regenerate Mersenne's internal table of values using values generated from the above scheme.

In order to properly evaluate the efficacy of an RNG, I analyze the numerical distribution of the output values in N dimensions, which is basically asking: for the bounded numerical domain of N bits, what does the distribution look like? Are there dense clusters, or is it well distributed? Park-Miller is not very well distributed, and MTB has ribbon-like distribution properties; mixing them creates a better distribution than either could accomplish alone. With very little execution overhead we get a better result in terms of numerical distribution across the keyspace.