In my preceding introduction about randomness, viewtopic.php?f=11&t=533, we exclusively considered random stars from the simplest, so-called uniform Probability Density Function (PDF), which is just a positive constant with area normalized to 1, i.e.

p(x) = 1 for x in (0,1),

with

∫ p(x) dx = 1, integrated over the full range (0,1) of the random variable x.

Normally, the random number generator commands implemented in the various computer languages refer to a uniform PDF.
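As a minimal sketch (in Python here; the scripting snippets later in this post are Maple), the built-in generator of essentially every language draws from exactly this uniform PDF:

```python
import random

# Draw from the uniform PDF p(x) = 1 on (0, 1); every value is equally
# likely, so the sample mean converges to 1/2 for large samples.
draws = [random.random() for _ in range(20000)]
mean = sum(draws) / len(draws)
```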

However, for modelling randomly generated star distributions in celestia.Sci we needed completely different shapes of PDFs as determined from actual astronomical measurements! Moreover, in order to arrive at realistic random star populations, several measured PDFs need to be invoked consecutively, corresponding to:

position (x,y,z) (surface brightness distribution)

luminosity (absolute magnitude distribution)

color index (V-I or B-V color-magnitude relations)

surface temperature <-> RGB color table

As an example of such non-trivial PDFs, I already discussed the data for the absolute magnitude (M_V) distribution of galactic halo stars.

The purpose of this presentation is to introduce two advanced methods of generating random stars for non-constant shapes of the underlying PDF. While they may be used individually, it is through their combination that a high efficiency for arbitrary, measured PDFs was achieved in celestia.Sci!

I) Acceptance-Rejection Method by John von Neumann

This method is used a lot in my professional research field, in the form of so-called Monte Carlo generators of elementary-particle processes at particle colliders or in astrophysics/cosmology.

I thought it is easiest to grasp if I walk you through the method step by step for a specific example:

First, I made up a "crazy" fake PDF, p(x), whose shape is certainly far from a constant, uniform PDF.

I generically called the random variable x. You may think of this PDF as an exotic surface brightness distribution, with x identified with the distance r from the center of some spherically symmetric star cloud... With this interpretation, the shape suggests that the stars generated from this PDF will accumulate around the cloud's center and, moreover, form a kind of ring structure further out near the cloud periphery.

The analytic formula for this PDF looks crazy, indeed. Here it is for completeness:

Don't forget, a proper PDF needs to have a unit area. Hence we need to divide by the PDF's integral along the full range (0,1) of the random variable x:

Hence this properly normalized p(x) has indeed a unit area:

How can we now generate stars from this crazy, non-constant PDF, although we only have a random number generator based on a uniform PDF in our computers!?

According to von Neumann's ingenious method, we first need a simpler, so-called majorant function p0(x), satisfying these two requirements:

The majorant p0(x) needs to be normalized to unit area (just like p(x)), but, after multiplication by a suitable constant C, must enclose our actual PDF p(x) entirely, i.e. p(x) ≤ C p0(x) for all x in (0,1).

The majorant p0(x) must be simple enough that we can easily generate stars from it!

An obvious (but often not optimal) choice would be to take a constant uniform p0(x) as majorant.

This is illustrated here:

The majorant p0(x) is just our normalized uniform PDF displayed in Fig. 1 above. Then the constant C may obviously be taken as C = p(x=0) ≈ 3.

The crucial step follows next: to generate an x according to the (red) PDF p(x), first generate a candidate x according to the uniform (black) p0(x). Evaluate p(x) and the height of the envelope C*p0(x) at this x value; then generate another random number u in (0,1) with your computer's uniform PDF and test whether

u * C p0(x) ≤ p(x).

If so, accept x; if not, reject x and try again.
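The steps above can be sketched in code. Since the analytic form of the "crazy" p(x) is not reproduced in this text, the sketch uses a hypothetical stand-in PDF, p(x) = 3(1-x)², which is likewise normalized to unit area on (0,1) and takes its maximum at x = 0, so that C = p(0) = 3 as in the example:

```python
import random

def p(x):
    # Hypothetical stand-in for the post's "crazy" PDF: unit area on
    # (0, 1), maximum C = p(0) = 3 at the left edge.
    return 3.0 * (1.0 - x) ** 2

def accept_reject(pdf, C):
    """Von Neumann acceptance-rejection with the uniform majorant p0(x) = 1."""
    while True:
        x = random.random()        # candidate x from the majorant p0
        u = random.random()        # uniform test number u in (0, 1)
        if u * C <= pdf(x):        # test u * C * p0(x) <= p(x), with p0(x) = 1
            return x               # accept x ...
        # ... otherwise reject x and try again

C = p(0.0)                         # C * p0(x) encloses p(x) everywhere
sample = [accept_reject(p, C) for _ in range(20000)]
```

For this stand-in, roughly one in C = 3 candidates survives the test.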

The efficiency is simply the ratio of the area under the PDF p(x) to the area under the envelope C p0(x), i.e. 1/C, or here (in %) just about 33%.

To achieve a much higher efficiency, one has to optimize the shape of the majorant such that the normalization constant C can take values near 1! Here it was large (C ~ 3), since we chose a uniform PDF as majorant, which differs in shape a lot from the PDF p(x) of interest. The 2nd method below will, in general, allow us to improve this situation significantly.

Here are the few lines of Maple scripting code that you may consult as a cross-check of the method's logic. The syntax is actually very close to Lua scripting...

Note that the position vectors of the stars are accumulated in the list variable vv, while the corresponding random radius values accumulate in the list variable s.

Let us now identify x with the distance r of the random stars from the cloud center and see how the resulting 3D star "cloud" looks for our crazy PDF p(x). As in viewtopic.php?f=11&t=533, besides r I took the uniformly distributed angular variables cosΦ and θ.
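The assembly of 3D star positions from a random radius plus random angles can be sketched like this (a Python sketch; I write the isotropic direction in the common convention of a uniform cos θ in (-1, 1) and a uniform azimuth φ in (0, 2π), and `radius_sampler` is a placeholder for whichever radial generator, e.g. the acceptance-rejection loop, is in use):

```python
import math
import random

def random_direction():
    # Isotropic unit vector: uniform cos(theta) and uniform azimuth phi
    cos_t = random.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = random.uniform(0.0, 2.0 * math.pi)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

def random_star(radius_sampler):
    # Scale an isotropic direction by a radius r drawn from the radial PDF
    r = radius_sampler()
    ux, uy, uz = random_direction()
    return (r * ux, r * uy, r * uz)

# Example: 20 000 stars on the unit sphere (fixed radius r = 1)
stars = [random_star(lambda: 1.0) for _ in range(20000)]
```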

Here is the star cloud with 20 000 random stars!

Attachment: 3Dpoints.jpg

Indeed, the random stars are strongly concentrated around the center (small r), but also show a clear ringlike structure near the periphery associated with the hump in our underlying PDF at larger values of r.

But how can we be sure that the resulting random stars are distributed correctly?

The crucial cross check is shown in the next plot.

Attachment: histogram.jpg

In this histogram you see, first of all, 15 (dark blue) distance bins of width dr. The height of each bin is proportional to the number of generated stars falling into it, i.e. having a distance value between r and r + dr. The green curve is nothing but an overlay of our "crazy" PDF p(r). From the excellent match you see clearly that the generated sample of stars corresponds to our underlying PDF p(r)! The cyan horizontal line indicates the constant bin height you would get from using a uniform PDF instead.
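Such a histogram cross-check can be sketched as follows; again a hypothetical stand-in PDF, p(r) = 3(1-r)² with its exact sampler r = 1 - (1-u)^(1/3), replaces the "crazy" p(r), whose analytic form is not reproduced in this text:

```python
import random

def pdf(r):
    # Hypothetical stand-in PDF with unit area on (0, 1)
    return 3.0 * (1.0 - r) ** 2

def sample():
    # Exact sampler for the stand-in: invert F(r) = 1 - (1 - r)**3 = u
    return 1.0 - (1.0 - random.random()) ** (1.0 / 3.0)

def histogram_check(draws, pdf, nbins=15):
    # Count draws in nbins bins of width dr on (0, 1); the bin density
    # count / (n * dr) should match the PDF at each bin center.
    dr = 1.0 / nbins
    counts = [0] * nbins
    for r in draws:
        counts[min(int(r / dr), nbins - 1)] += 1
    n = len(draws)
    return [(counts[i] / (n * dr), pdf((i + 0.5) * dr)) for i in range(nbins)]

pairs = histogram_check([sample() for _ in range(20000)], pdf)
```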

So much for the first advanced method!

II) Inverse Transform Method

This method only works in particular cases, but is very useful in providing improved majorants compared to the trivial uniform PDF.

It rests on the fact that the so-called cumulant distribution F(r) of a general PDF p(x) is itself a random variable that is distributed according to a uniform PDF, no matter what the shape of p(x)! In terms of p(x), the cumulant F(r) is defined as

F(r) = ∫ p(x) dx, integrated from 0 to r.

F(r) specifies the integrated probability for an x value up to the point r, i.e. in 0 ≤ x ≤ r.

If p(x) is properly normalized to unit area, it is easy to see that F(r) = u takes uniformly distributed values in (0,1): since F is monotonically increasing, Prob(F(x) ≤ u) = Prob(x ≤ F^(-1)(u)) = F(F^(-1)(u)) = u.

As it stands, F = u = function(r). If we manage to invert this mapping analytically, we are done, since then the inverted function

r = function( u )

provides the values r (distributed as the desired PDF p(r)) as a function of the uniform random variable u that the computer can generate!

Finally, here is a simple example:

Let us take an exponential exp(-a*x) as PDF and normalize the area to 1. This means

Next compute the cumulant F(r) according to the above integral. This can easily be done analytically, and the result can indeed be inverted as r = r(u):
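In code, and assuming the exponential is normalized on the same range (0,1) used throughout this post, so that p(x) = a·e^(-ax)/(1 - e^(-a)) and F(r) = (1 - e^(-ar))/(1 - e^(-a)) = u, the inversion reads r = -(1/a)·ln(1 - u·(1 - e^(-a))):

```python
import math
import random

def sample_exponential(a):
    # Inverse transform for the exponential PDF on (0, 1), assumed
    # normalized as p(x) = a * exp(-a*x) / (1 - exp(-a)).
    u = random.random()                        # uniform u = F(r)
    return -math.log(1.0 - u * (1.0 - math.exp(-a))) / a

rs = [sample_exponential(4.0) for _ in range(20000)]   # a = 4, as in the plot
```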

Here is a respective plot with a = 4.

Attachment: inverse.jpg

Note that with this analytical method of generating, e.g., exponentially distributed r values, we enjoy an efficiency of 100%! The drawback is that this method only works for sufficiently simple PDFs where the integral and the subsequent inversion can be done analytically. But in combination, i.e. as a majorant for the above von Neumann method, it is simply great!
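Sketching that combination in code: take a hypothetical target PDF, made up purely for illustration, f(x) = e^(-4x)·(1 + 0.5·sin²(10x)) (up to normalization), whose cumulant cannot be inverted analytically. The exponential majorant from method II, scaled by (1 + 0.5), encloses it, and the acceptance-rejection test of method I does the rest; the acceptance efficiency here comes out around 83%, far better than with a uniform majorant:

```python
import math
import random

A, EPS, K = 4.0, 0.5, 10.0    # hypothetical parameters for illustration

def target(x):
    # Hypothetical target (up to normalization): an exponential with a
    # wiggle; its cumulant cannot be inverted in closed form.
    return math.exp(-A * x) * (1.0 + EPS * math.sin(K * x) ** 2)

def sample_majorant():
    # Method II: inverse transform for the exponential majorant on (0, 1)
    u = random.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-A))) / A

def sample_combined():
    # Method I with the tight envelope (1 + EPS) * exp(-A*x) >= target(x);
    # the accepted x follow the (normalized) target PDF.
    while True:
        x = sample_majorant()
        u = random.random()
        if u * (1.0 + EPS) * math.exp(-A * x) <= target(x):
            return x

xs = [sample_combined() for _ in range(20000)]
```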
