Markov chain for Geometric Brownian Motion parameters

A Markov chain is a discrete-time random process with the Markov property. Its components are states and transition probabilities between them. The Markov property states that the probability of the next state depends only on the current state.

So a Markov chain is fully described by its set of states and all transition probabilities between those states.

Simple chain for drift

Let’s assume that estimation of the drift parameter leads to one of the following 2 states:

Positive, i.e. the drift is greater than or equal to zero

Negative, i.e. the drift is less than zero

So one could construct a Markov chain for these states, as shown below.

In 2010, the daily drift estimates for USD/CHF give the following probabilities:

P(+) = 53.0769 %

P(-) = 46.9231 %

P(+|+) = 51.09 %

P(-|+) = 48.91 %

P(-|-) = 45.08 %

P(+|-) = 54.92 %
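Such state and transition frequencies can be estimated by counting sign transitions in a return series. A minimal sketch (the return series here is a hypothetical toy example, not the 2010 USD/CHF data):

```python
def drift_chain_probs(returns):
    """Estimate state and transition probabilities of the +/- drift chain
    from a sequence of daily drift estimates (here: daily returns)."""
    states = ['+' if r >= 0 else '-' for r in returns]
    counts = {(p, c): 0 for p in '+-' for c in '+-'}
    for prev, cur in zip(states, states[1:]):
        counts[(prev, cur)] += 1
    probs = {'P(+)': states.count('+') / len(states),
             'P(-)': states.count('-') / len(states)}
    for prev in '+-':
        total = counts[(prev, '+')] + counts[(prev, '-')]
        for cur in '+-':
            probs[f'P({cur}|{prev})'] = counts[(prev, cur)] / total if total else 0.0
    return probs

# hypothetical toy series of daily returns
probs = drift_chain_probs([0.002, -0.001, 0.003, 0.001, -0.004, 0.002])
```

On real data one would feed in a full year of daily returns; the dictionary then holds the empirical analogues of the probabilities listed above.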

A Markov chain has a stationary distribution. It is calculated by putting the state probabilities equal on successive steps, π(+) and π(-), and therefore, by the law of total probability:

π(+) = P(+|+) π(+) + P(+|-) π(-)

π(+) + π(-) = 1

Thus the stationary probabilities of positive and negative drifts are:

π(+) = P(+|-) / (P(+|-) + P(-|+)) = 0.5492 / (0.5492 + 0.4891) ≈ 52.89 %

π(-) = P(-|+) / (P(+|-) + P(-|+)) ≈ 47.11 %
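For a two-state chain the stationary distribution has a closed form; a minimal sketch using the 2010 USD/CHF transition estimates from above:

```python
def stationary_2state(p_pp, p_pm):
    """Stationary distribution of a 2-state Markov chain.

    p_pp = P(+|+), probability of staying in '+'
    p_pm = P(+|-), probability of moving from '-' to '+'
    Solves pi(+) = p_pp*pi(+) + p_pm*pi(-) with pi(+) + pi(-) = 1.
    """
    p_mp = 1.0 - p_pp                  # P(-|+)
    pi_plus = p_pm / (p_pm + p_mp)
    return pi_plus, 1.0 - pi_plus

# 2010 USD/CHF daily drift transition estimates from the text
pi_plus, pi_minus = stationary_2state(p_pp=0.5109, p_pm=0.5492)
print(pi_plus, pi_minus)   # roughly 0.529 and 0.471
```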

Unfortunately, this doesn’t give any hints on how to predict the future drift. Let’s see what the same approach can give for the volatility parameter.

This asymmetry in transition probabilities seems to have its roots in the definition of “up” and “down”. It is relatively easy to show that if the volatility estimates are independent draws of a random variable with continuous cumulative distribution function F(x), then the probabilities of “up” and “down” states and their transitions should be:

P(+) = 1/2

P(-) = 1/2

P(+|+) = 1/3

P(-|+) = 2/3

P(-|-) = 1/3

P(+|-) = 2/3
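These theoretical values are easy to verify numerically: draw a long i.i.d. sample from any continuous distribution and count the up/down transitions. A quick Monte Carlo sketch:

```python
import random

random.seed(0)
# i.i.d. draws from any continuous distribution; only the ordering matters
xs = [random.random() for _ in range(200_000)]
moves = ['+' if b > a else '-' for a, b in zip(xs, xs[1:])]

# P(+): fraction of upward moves, expected 1/2
p_up = moves.count('+') / len(moves)

# P(+|+): fraction of up moves that are followed by another up, expected 1/3
pairs = list(zip(moves, moves[1:]))
n_up_prev = sum(1 for prev, _ in pairs if prev == '+')
p_up_up = sum(1 for prev, cur in pairs if prev == '+' and cur == '+') / n_up_prev

print(p_up, p_up_up)  # about 0.5 and about 1/3
```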

State probabilities are derived as follows. For successive independent estimates X(t) and X(t+1):

P(+) = P(X(t+1) > X(t)) = ∫ (1 − F(x)) dF(x)

where F(x) is a cumulative distribution function. Integration of the inner expression (substituting u = F(x), which is uniform on [0, 1]) results in:

P(+) = ∫₀¹ (1 − u) du = 1/2

And the down probability is accordingly:

P(-) = 1 − P(+) = 1/2

Transition probabilities require a little bit of math. By the definition of conditional probability, the up-up probability is:

P(+|+) = P(X(t+2) > X(t+1), X(t+1) > X(t)) / P(X(t+1) > X(t))

Let’s calculate the joint probability of 2 “ups”:

P(X(t) < X(t+1) < X(t+2)) = ∫ ∫_{y > x} (1 − F(y)) dF(y) dF(x)

Integration by y and x (substituting u = F(x), v = F(y)) yields:

∫₀¹ ∫ᵤ¹ (1 − v) dv du = ∫₀¹ (1 − u)² / 2 du = 1/6

Therefore, the conditional probability of an up given the previous up is:

P(+|+) = (1/6) / (1/2) = 1/3

And this corresponds well with the experimental data. The up-down case is solved by taking the complement:

P(-|+) = 1 − P(+|+) = 2/3
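The joint-probability integral can also be checked numerically. After the substitution u = F(x), v = F(y), the integral lives on the unit square regardless of F, and the inner integral over v reduces to (1 − u)² / 2; a midpoint-rule sketch:

```python
# midpoint-rule evaluation of the joint probability of two "ups":
#   integral over 0..1 of (inner integral over u..1 of (1 - v) dv) du
# the inner integral equals (1 - u)^2 / 2, so integrate that over u
N = 100_000
h = 1.0 / N
total = sum((1.0 - (i + 0.5) * h) ** 2 / 2 * h for i in range(N))
print(total)  # close to 1/6
```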