Tuesday, August 7, 2012

Metropolis Algorithm: Discrete Position Probabilities

I was asked by a reader how I created Figure 7.2 of the book, reproduced at right, which shows the progression of discrete position probabilities at each step in a simple Metropolis algorithm. I couldn't find the original program, so I made a new one, reported here, with an additional example that illustrates a bimodal target distribution.

The main point is mentioned in the middle of p. 128: "At every time step, we multiply the current position probability vector w by the transition probability matrix T to get the position probabilities for the next time step. We keep multiplying by T over and over again to derive the long-run position probabilities. This process is exactly what generated the graphs in Figure 7.2."
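As a tiny numeric illustration of that update (a hypothetical two-state chain, not the book's example), each multiplication by the transition matrix advances the position probabilities one time step:

```r
# Hypothetical 2-state transition matrix; each row sums to 1.
Tmat = matrix( c( 0.9 , 0.1 ,
                  0.5 , 0.5 ) , nrow=2 , byrow=TRUE )
w = c( 1 , 0 )        # all probability starts at position 1
w = w %*% Tmat        # after one step:  0.90 , 0.10
w = w %*% Tmat        # after two steps: 0.86 , 0.14
```

Repeating the multiplication drives w toward the stationary distribution of this Tmat, which is (5/6, 1/6).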

The code consists of two main parts. First, build the transition matrix. Second, specify an initial position vector and repeatedly multiply it by the transition matrix, plotting the result at each step.
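In outline, those two parts might look like the following sketch. It assumes the book's seven-position example with target probability proportional to position; names such as pTarget and positionVec are illustrative choices, and the actual program may differ:

```r
# Target distribution at 7 discrete positions (assumed: proportional to position).
pTarget = 1:7
pTarget = pTarget / sum(pTarget)
nPos = length(pTarget)

# Part 1: Build the transition matrix.
# From position i, propose i-1 or i+1 with probability 0.5 each, and
# accept with probability min(1, pTarget[proposed]/pTarget[current]).
# Rejected proposals (including moves off the ends) stay put.
Tmat = matrix( 0 , nrow=nPos , ncol=nPos )
for ( i in 1:nPos ) {
  if ( i > 1 )    { Tmat[i,i-1] = 0.5 * min( 1 , pTarget[i-1]/pTarget[i] ) }
  if ( i < nPos ) { Tmat[i,i+1] = 0.5 * min( 1 , pTarget[i+1]/pTarget[i] ) }
  Tmat[i,i] = 1 - sum( Tmat[i,] )   # probability of staying put
}

# Part 2: Start with all probability at one position, then repeatedly
# multiply by the transition matrix to get each step's probabilities.
positionVec = rep( 0 , nPos )
positionVec[4] = 1                       # arbitrary starting position
for ( stepIdx in 1:99 ) {
  positionVec = positionVec %*% Tmat     # one time step
  # plot( as.vector(positionVec) , ... ) here would produce each panel
}
```

Because the acceptance rule satisfies detailed balance, pTarget is the chain's stationary distribution, so positionVec approaches pTarget as the steps accumulate. A bimodal target needs only a different pTarget vector, for example pTarget = c(3,1,1,1,1,1,3)/11.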


P.S. The graphs in this example would have been better if their y-axes started at zero, as in Figure 7.2 of the book. You can do that by adding a ylim argument to the plot() commands: ylim=c(0,max(positionVec)) for the position probabilities and ylim=c(0,max(pTarget)) for the target distribution.