Simulation and Bootstrapping

Explain the use of antithetic and control variates in reducing Monte Carlo sampling error.

Describe the bootstrapping method and its advantage over Monte Carlo simulation.

Describe pseudo-random number generation.

Describe situations where the bootstrapping method is ineffective.

Describe the disadvantages of the simulation approach to financial problem-solving.

Introduction and Definitions

Simulation is a way of modeling random events to match real-world outcomes. By observing simulated results, researchers gain insight into real problems. Applications of simulation include calculating option payoffs and assessing the accuracy of an estimator. Two widely used simulation methods are Monte Carlo simulation and bootstrapping.

Monte Carlo simulation approximates the expected value of a random variable using numerical methods. It generates random variables from an assumed data generating process (DGP) and then applies one or more functions to create realizations from the unknown distribution of the transformed random variables. This process is repeated many times to improve accuracy, and the statistic of interest is then approximated using the simulated values.

Bootstrapping is a type of simulation that uses the observed data to simulate from the unknown distribution that generated those data. In other words, bootstrapping combines the observed data with resampling to create new samples that are related to, but different from, the observed data.

The notable similarity between Monte Carlo simulation and bootstrapping is that both aim to calculate the expected value of a function using simulated data (usually generated by computer).

The contrasting feature is that Monte Carlo simulation relies entirely on a specified data generating process (DGP) to simulate the data, whereas bootstrapping uses the observed data to generate the simulated data without specifying an underlying DGP.

Simulation of Random Variables

Simulation requires the generation of random variables from an assumed distribution, usually by computer. Computer-generated numbers are not truly random, however, and are therefore termed pseudo-random numbers. Pseudo-random numbers are produced by complex deterministic functions called pseudo-random number generators (PRNGs) whose output appears random. A PRNG is initialized with a seed value; running the generator with the same seed always reproduces the same sequence of values.

The reproducibility of values simulated from PRNGs makes it possible to use pseudo-random numbers across multiple experiments, because the same sequence of random variables can be regenerated from the same seed value. This feature can be used to compare candidate models on identical inputs or to reproduce results in the future, for example to meet regulatory requirements. Moreover, the same sequence of random variables can be generated on different computers.
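As a minimal sketch of this reproducibility property, using Python's built-in Mersenne Twister generator as an illustrative PRNG (the seed value 42 is arbitrary):

```python
import random

# Seeding the PRNG fixes the starting state of its deterministic sequence.
random.seed(42)
first_run = [random.random() for _ in range(5)]

# Re-seeding with the same value restarts the exact same sequence.
random.seed(42)
second_run = [random.random() for _ in range(5)]

print(first_run == second_run)  # True: the two sequences are identical
```

The same seed on a different machine (with the same generator) produces the same draws, which is what allows results to be reproduced across experiments and computers.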

Simulating Random Variables from a Specific Distribution

Simulating random variables from a specific distribution begins by generating a random number from a uniform (0,1) distribution. The cumulative distribution function (CDF) of the target distribution is then used to transform this uniform draw into a draw from that distribution. That is, we first generate a random number U from the U(0,1) distribution, and then we use it to simulate a random variable X with pdf f(x) by inverting the CDF, F(x).

Let U be the probability that X takes a value less than or equal to x, that is,

$$ \text U=\text P(\text X \le \text x)=\text F(\text x) $$

Then we can derive the random variable x as:

$$ \text x=\text F^{-1} (\text U) $$

To put this in a more straightforward perspective, the algorithm for simulating a random variable from a specific distribution involves:

Generating a random number U from the uniform (0,1) distribution, and

Computing \(\text x=\text F^{-1}(\text U)\), where F is the CDF of the target distribution.
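A minimal sketch of this inverse-transform method, using the exponential distribution as an illustrative target (the rate parameter `lam` and sample size are arbitrary choices): for Exp(lam), F(x) = 1 - exp(-lam * x), so F^(-1)(u) = -ln(1 - u) / lam.

```python
import math
import random

def simulate_exponential(lam, n, seed=0):
    """Inverse-transform sampling from Exp(lam):
    draw U ~ Uniform(0, 1), then return x = F^{-1}(U) = -ln(1 - U) / lam."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

draws = simulate_exponential(lam=2.0, n=100_000)
sample_mean = sum(draws) / len(draws)
print(sample_mean)  # close to the true mean 1 / lam = 0.5
```

The same pattern works for any distribution whose inverse CDF can be evaluated, which is why the uniform draw is the common starting point.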

Monte Carlo Simulation

Monte Carlo simulation is used to estimate population moments or functions of them. The procedure is as follows:

Assume that X is a random variable that can be simulated and let g(X) be a function that can be evaluated at the realizations of X. The simulation generates multiple copies of g(X) by drawing \(\text X=\text x_{\text i}\) and calculating \(\text g_{\text i}=\text g(\text x_{\text i})\).

This process is repeated b times so that a set of iid variables is generated from the unknown distribution of g(X), which can then be used to estimate the desired statistic.

For instance, if we wish to estimate the expected value of g(X), the estimate is the mean of the simulated values:

$$ \hat{\text E}[\text g(\text X)]=\cfrac {1}{\text b} \sum_{\text i=1}^{\text b} \text g_{\text i} $$

The standard error of the simulated expectation measures the accuracy of the estimation; thus, the choice of b determines the accuracy of the simulation.

Another quantity that can be calculated from the simulation is the \(\alpha\)-quantile, obtained by arranging the b draws in ascending order and then selecting the value in position \(\text b \alpha\) of the sorted set.

Moreover, simulation can be used to determine the finite-sample properties of estimated parameters, which is especially useful when the sample size n is not large enough for the CLT approximation to be adequate. Consider the finite-sample distribution of an estimator \(\hat \theta\). Using the assumed DGP, n random values are generated so that:

$$ \text X=[\text x_1,\text x_2,…,\text x_{\text n} ] $$

From this sample, we estimate the parameter \(\hat \theta\).

Repeating the simulation and estimation b times yields draws \((\hat \theta_1,\hat \theta_2,…,\hat \theta_{\text b})\) from the finite-sample distribution of the estimator of \(\theta\). From these values, we can deduce the properties of the estimator \(\hat \theta\). For instance, the bias is defined as:

$$ \text{Bias}(\hat \theta)=\text E[\hat \theta]-\theta \approx \cfrac {1}{\text b} \sum_{\text i=1}^{\text b} \hat \theta_{\text i}-\theta $$

From the replications \(\left\{\text g_1,\text g_2,…,\text g_{\text b}\right\}\), calculate the statistic of interest.

Determine the accuracy of the estimated quantity by calculating the standard error. If the standard error is large, increase the number of replications b to reduce the error to an acceptable level.
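The Monte Carlo recipe above can be sketched as follows. The standard normal DGP and the choice g(x) = x² are illustrative assumptions, chosen because the true value E[X²] = 1 is known and serves as a check:

```python
import math
import random
import statistics

def monte_carlo_mean(g, sampler, b, seed=0):
    """Estimate E[g(X)] from b simulated draws, along with its standard error."""
    rng = random.Random(seed)
    gs = [g(sampler(rng)) for _ in range(b)]        # replications g_1, ..., g_b
    est = statistics.fmean(gs)                      # sample mean of the replications
    se = statistics.stdev(gs) / math.sqrt(b)        # standard error ~ sigma_g / sqrt(b)
    return est, se

# Illustrative DGP: X ~ N(0, 1) with g(x) = x^2, so the true value is E[X^2] = 1.
est, se = monte_carlo_mean(lambda x: x * x,
                           lambda rng: rng.gauss(0.0, 1.0),
                           b=50_000)
print(est, se)
```

Increasing `b` shrinks the standard error at the rate 1/√b, which is the trade-off between accuracy and computation discussed above.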

Example: Using the Monte Carlo Simulation to Estimate the Price of a Call Option

Recall that the payoff of a call option at maturity is given by:

$$ \text{max}⁡(0,\text S_{\text T}-\text K) $$

\(\text S_{\text T}\) is the price of the underlying stock at the time of maturity T, and K is the strike price. The payoff is a non-linear function of the underlying stock price at the expiration date, and thus we can estimate the option price by simulating the terminal stock price.

Assuming that the log of the stock price is normally distributed, the terminal log price can be modeled as the sum of the initial log price, a drift term, and a normally distributed error. Mathematically:

$$ \ln \text S_{\text T}=\ln \text S_0+\left(\text r-\cfrac {\sigma^2}{2}\right)\text T+\sigma \sqrt{\text T}\ \epsilon, \quad \epsilon \sim \text N(0,1) $$
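A sketch of Monte Carlo option pricing under this lognormal model. The parameter values, and the use of the risk-free rate r as the drift (i.e., risk-neutral pricing with discounting), are illustrative assumptions:

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, b=100_000, seed=0):
    """Monte Carlo price of a European call under the lognormal model:
    ln S_T = ln S_0 + (r - sigma^2 / 2) T + sigma sqrt(T) eps, eps ~ N(0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(b):
        eps = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((r - 0.5 * sigma ** 2) * t
                            + sigma * math.sqrt(t) * eps)
        total += max(0.0, s_t - k)          # call payoff at maturity
    return math.exp(-r * t) * total / b     # discount the average payoff

# Illustrative (hypothetical) parameters: S0 = K = 100, r = 5%, sigma = 20%, T = 1 year.
price = mc_call_price(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0)
print(price)
```

With these inputs the estimate should lie close to the corresponding Black-Scholes value (roughly 10.45), with the residual gap shrinking as `b` grows.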

Reducing Monte Carlo Sampling Error

Sampling error in Monte Carlo simulation is reduced by two complementary methods:

Antithetic Variables, and

Control Variates.

These methods can be used simultaneously.

To set the stage, recall that the estimation of expected values in simulation depends on the Law of Large Numbers (LLN) and that the standard error of the estimated expected value is proportional to \(1/\sqrt{\text b}\). Therefore, the accuracy of the simulation also depends on the variance of the simulated quantities.

Antithetic variables exploit this last result. They reduce the sampling error by incorporating a second set of variables that are generated to be negatively correlated with the initial iid simulated variables. That is, each simulated variable is paired with an antithetic variable, so the draws occur in negatively correlated pairs.

If \(\text U_1\) is a uniform random variable, then:

$$ \text F^{-1} (\text U_1 ) \sim \text F_{\text x} $$

Define the antithetic variable \(\text U_2\) as:

$$ \text U_2=1-\text U_1 $$

Note that \(\text U_2\) is also a uniform random variable so that:

$$ \text F^{-1} (\text U_2 )\sim \text F_{\text x} $$

Then, by the definition of antithetic variables, the correlation between \(\text U_1\) and \(\text U_2\) is negative, as is the correlation between their mappings through the inverse CDF \(\text F^{-1}_{\text x}\).

Using the antithetic random variables is analogous to typical Monte Carlo simulation only that values are constructed in pairs \(\left[\left\{\text U_1,1-{\text U}_1 \right\}, \left\{\text U_2,1-{\text U}_2 \right\},…,\left\{\text U_{\frac {\text b}{2}},1-\text U_{\frac {\text b}{2}} \right\} \right]\) which are then transformed to have the desired distribution using the inverse CDF.

Note that the number of random draws is b/2, since the simulated values come in pairs. Antithetic variables reduce the sampling error only if the function g(X) is monotonic in x, so that the negative correlation between the paired inputs carries over to the negative correlation between \(\text g(\text F^{-1}(\text U_{\text i}))\) and \(\text g(\text F^{-1}(1-\text U_{\text i}))\).
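A sketch of the antithetic approach, estimating E[e^U] for U ~ Uniform(0,1). The choice of g(u) = e^u is illustrative: it is monotonic, and the true value e - 1 is known as a check:

```python
import math
import random
import statistics

def antithetic_estimate(g, b, seed=0):
    """Estimate E[g(U)], U ~ Uniform(0, 1), using b/2 antithetic pairs (U, 1 - U)."""
    rng = random.Random(seed)
    pair_means = []
    for _ in range(b // 2):
        u = rng.random()
        pair_means.append(0.5 * (g(u) + g(1.0 - u)))  # average within each pair
    est = statistics.fmean(pair_means)
    se = statistics.stdev(pair_means) / math.sqrt(len(pair_means))
    return est, se

# g is monotonic, so g(U) and g(1 - U) are negatively correlated.
est, se = antithetic_estimate(math.exp, b=20_000)
print(est, se)  # est close to e - 1
```

Comparing `se` here with the plain iid standard error for the same total number of evaluations shows the variance reduction the negative correlation buys.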

Notably, antithetic random variables reduce the sampling error through the correlation coefficient. Recall that the standard error using b iid simulated values is

$$ \cfrac {\sigma_{\text g}}{\sqrt{\text b}} $$

But by introducing antithetic random variables with correlation \(\rho < 0\) between the paired values, the standard error becomes:

$$ \cfrac {\sigma_{\text g}}{\sqrt{\text b}} \sqrt{1+\rho} $$

Since \(\rho\) is negative, this standard error is smaller than in the iid case.

Control Variates

Control variates reduce the sampling error by incorporating values that have a mean of zero and are correlated with the simulated quantity of interest. Because a control variate has a mean of zero, it does not bias the approximation. Given that the control variate and the desired function are correlated, an effective combination (with optimal weights) of the control variate and the initial simulated values reduces the variance of the approximation.

Denote the control variate by \(\text h(\text X_{\text i})\), so that by definition \(\text E[\text h(\text X_{\text i})]=0\), and note that it is correlated with \(\text g(\text X_{\text i})\).

An ideal control variate should be inexpensive to construct and highly correlated with g(X), so that the optimal combination parameter \(\beta_0\) that minimizes the estimation error can be approximated from the regression equation:

$$ \text g_{\text i}=\alpha+\beta_0 \text h_{\text i}+\eta_{\text i} $$

The control variate estimator then averages the adjusted values \(\text g_{\text i}-\hat \beta_0 \text h_{\text i}\).
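A sketch of the control variate approach. The target E[e^U] and the control variate h(u) = u - 1/2 (which has mean zero under U(0,1) and is highly correlated with e^u) are illustrative choices, with the combination parameter estimated by OLS:

```python
import math
import random
import statistics

def control_variate_estimate(g, h, b, seed=0):
    """Estimate E[g(U)], U ~ Uniform(0, 1), using a mean-zero control variate h.
    beta is the OLS slope from regressing the g_i on the h_i."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(b)]
    gs = [g(u) for u in us]
    hs = [h(u) for u in us]
    g_bar = statistics.fmean(gs)
    h_bar = statistics.fmean(hs)
    num = sum((hi - h_bar) * (gi - g_bar) for hi, gi in zip(hs, gs))
    den = sum((hi - h_bar) ** 2 for hi in hs)
    beta = num / den
    adjusted = [gi - beta * hi for gi, hi in zip(gs, hs)]  # mean-zero correction
    est = statistics.fmean(adjusted)
    se = statistics.stdev(adjusted) / math.sqrt(b)
    return est, se

est, se = control_variate_estimate(math.exp, lambda u: u - 0.5, b=20_000)
print(est, se)  # est close to e - 1, with a much smaller se than plain simulation
```

Because E[h(U)] = 0, subtracting beta * h_i leaves the estimate unbiased while stripping out the variation that h explains.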

Disadvantages of Simulation

Monte Carlo simulation can produce unreliable approximations of moments if the assumed DGP does not adequately describe the observed data. This mostly occurs due to misspecification of the DGP.

Simulation can also be costly: running multiple simulation experiments is computationally expensive and time-consuming.

Bootstrapping

As stated earlier, bootstrapping is a type of simulation that uses the observed data to simulate from the unknown distribution that generated them. Note, however, that bootstrapping does not directly model the observed data or make any assumption about its distribution; rather, it treats the observed data as draws from the unknown distribution of interest.

There are two types of bootstraps:

iid Bootstrap

Circular Block Bootstrap (CBB)

iid Bootstrap

The iid bootstrap constructs samples with replacement from the observed data. Assume that a simulation sample of size m is to be created from observed data with n observations. The iid bootstrap constructs observation indices by randomly sampling with replacement from the values 1, 2, …, n. These random indices are then used to draw from the observed data to form the simulated data (the bootstrap sample).

For instance, assume we want to draw 10 observations from a sample of 50 data points: \(\left\{\text x_1,\text x_2,\text x_3,…,\text x_{50} \right\}\). The first simulation could use the observations \(\left\{\text x_1,\text x_{12},\text x_{23},\text x_{11},\text x_{32},\text x_{43},\text x_1,\text x_{22},\text x_2,\text x_{22} \right\}\), the second simulation could use \(\left\{\text x_{50},\text x_{21},\text x_{23},\text x_{19},\text x_{32},\text x_{49},\text x_{41},\text x_{22},\text x_{12},\text x_{39} \right\}\), and so on until the desired number of simulations is reached.

In other words, the iid bootstrap is analogous to Monte Carlo simulation, with bootstrap samples used in place of simulated samples. Under the iid bootstrap, expected values are estimated as:

$$ \hat{\text E}[\text g(\text X)]=\cfrac {1}{\text b} \sum_{\text j=1}^{\text b} \text g(\text x_{\text j}^*) $$

where \(\text x_{\text j}^*\) denotes the j-th bootstrap draw.
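A sketch of the iid bootstrap. The simulated "observed" returns are illustrative data, and the statistic of interest is the sample mean, so the spread of the bootstrap means estimates the standard error of the mean:

```python
import random
import statistics

def iid_bootstrap_means(data, b, m=None, seed=0):
    """Generate b bootstrap samples of size m (default: len(data)) by drawing
    observation indices with replacement, and return each sample's mean."""
    rng = random.Random(seed)
    n = len(data)
    m = m or n
    means = []
    for _ in range(b):
        sample = [data[rng.randrange(n)] for _ in range(m)]  # draw with replacement
        means.append(statistics.fmean(sample))
    return means

# Illustrative (hypothetical) observed data: 50 daily returns.
rng = random.Random(1)
returns = [rng.gauss(0.001, 0.02) for _ in range(50)]
boot_means = iid_bootstrap_means(returns, b=1_000)
boot_se = statistics.stdev(boot_means)  # bootstrap standard error of the mean
print(boot_se)
```

No DGP is specified anywhere: every value in every bootstrap sample comes from the observed data itself.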

Circular Block Bootstrap (CBB)

The circular block bootstrap differs from the iid bootstrap in that, instead of sampling each data point with replacement, it samples blocks of size q with replacement. For instance, assume that we have 50 observations divided into five blocks of size q = 10.

The blocks are sampled with replacement until the desired sample size is produced. In the case that the number of observations in sampled blocks is larger than the required sample size, some of the observations are omitted in the last block.

The block size should be large enough to capture the dependence between observations, but not so large that few distinct blocks are available. Conventionally, the block size is set to the square root of the sample size (\(\sqrt{\text n}\)).

The general steps of generating sample using the CBB are:

Decide on the block size q.

Select a block start index i from (1, 2, …, n) and transfer \(\left\{\text x_{\text i},\text x_{\text i+1},…,\text x_{\text i+\text q-1} \right\}\) to the bootstrap sample, where indices larger than n wrap around to the beginning of the data. Repeat until the bootstrap sample contains at least m elements.

If the bootstrap sample has more than m elements, omit values from the end of the bootstrap sample until the sample size is m.
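The CBB steps above can be sketched as follows; the block size and data are illustrative, and indices wrap circularly as described:

```python
import random

def circular_block_bootstrap(data, q, m, seed=0):
    """Build one CBB sample of size m from blocks of size q; block start
    indices are drawn with replacement and wrap around the end of the data."""
    rng = random.Random(seed)
    n = len(data)
    sample = []
    while len(sample) < m:
        start = rng.randrange(n)                          # random block start
        block = [data[(start + j) % n] for j in range(q)]  # wrap circularly
        sample.extend(block)
    return sample[:m]  # trim any excess from the final block

# Illustrative data: 50 observations, block size q = 7 (so the last block is trimmed).
data = list(range(50))
boot = circular_block_bootstrap(data, q=7, m=50)
print(len(boot))  # 50
```

Keeping consecutive observations together inside each block is what preserves the local dependence structure that the iid bootstrap destroys.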

Application of Bootstrapping

One of the applications of bootstrapping is the estimation of value at risk (VaR) in financial markets. Recall that p-VaR is defined as the loss that will be exceeded with probability 1 − p:

$$ \text P(\text L > \text{VaR}_{\text p})=1-\text p $$

If the loss is measured as a percentage of a particular portfolio, then p-VaR can be seen as a quantile of the return distribution. For instance, if we wish to calculate a one-year VaR of a portfolio, we simulate one year of data (252 trading days) and then find the quantile of the simulated annual returns.

The VaR is then calculated by sorting the b bootstrapped annual returns from lowest to highest and selecting the value in position (1 − p)b, which is the empirical (1 − p) quantile of the annual returns.
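A sketch of bootstrap VaR estimation along these lines. The simulated daily returns, the confidence level p = 0.95, and the sign convention (VaR reported as a positive loss) are illustrative assumptions:

```python
import random

def bootstrap_annual_var(daily_returns, p=0.95, b=1_000, horizon=252, seed=0):
    """Bootstrap b annual returns by compounding `horizon` daily returns drawn
    with replacement, then take the empirical (1 - p) quantile as the VaR."""
    rng = random.Random(seed)
    n = len(daily_returns)
    annual = []
    for _ in range(b):
        growth = 1.0
        for _ in range(horizon):
            growth *= 1.0 + daily_returns[rng.randrange(n)]  # iid bootstrap draw
        annual.append(growth - 1.0)
    annual.sort()                      # lowest to highest
    idx = int((1.0 - p) * b)           # position of the (1 - p) empirical quantile
    return -annual[idx]                # report VaR as a positive loss

# Illustrative (hypothetical) history: 500 daily returns.
rng = random.Random(2)
daily = [rng.gauss(0.0004, 0.01) for _ in range(500)]
var_est = bootstrap_annual_var(daily)
print(var_est)
```

A block bootstrap could be substituted for the iid draws when the daily returns are serially dependent.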

Simulations Where Bootstrap Will be Ineffective

The following are the two situations where bootstraps will not be sufficiently effective:

Presence of outliers in the data: outliers are resampled repeatedly, so the bootstrap's conclusions are likely to be distorted.

Non-independent data: the iid bootstrap assumes that the observations are independent of one another; when the data are dependent (e.g., time series), resampling individual observations destroys this dependence.

Disadvantages of Bootstrapping

Bootstrapping uses the observed data to generate the simulated samples, so the simulated sample may be unreliable when present conditions differ from the past. For example, the current state of a financial market might be different from its historical state.

In particular, bootstrapping historical data can be unreliable when market conditions change. For instance, if we bootstrap market interest rates, past and present market forces may differ substantially, causing interest rates to fluctuate significantly.

Comparison between Monte Carlo Simulation and Bootstrapping

Monte Carlo simulation uses a complete statistical model, including assumptions about the distribution of the shocks; therefore, the results are inaccurate if the model is poor, even when the number of replications is large.

On the other hand, bootstrapping does not specify a model but instead assumes that the past resembles the present. By resampling the observed data, bootstrapping can also preserve the dependence in the observed data while still reflecting sampling variation.

Both Monte Carlo simulation and bootstrapping are affected by the "Black Swan" problem: the resulting simulations in both methods closely resemble historical data. In other words, the simulations are anchored to historical experience and thus are not very different from what has already been observed.

Question

Which of the following statements correctly describes an antithetic variable?

They are variables that are generated to have a negative correlation with the initial simulated sample.

They are mean zero values that are correlated to the desired statistic that is to be computed from through simulation.

They are the mean zero variables that are negatively correlated with the initial simulated sample.

None of the above

Solution

The correct answer is A.

Antithetic variables are used to reduce the sampling error in Monte Carlo simulation. They are generated to have a negative correlation with the initial simulated sample, so that the overall standard error of the approximation is reduced. Choice B instead describes control variates, which are mean-zero values correlated with the statistic being computed through simulation.