I am not sure that you want to discard observation $X_i$ just because it produces an estimate that falls outside your target interval. You raise the number of iterations $n$ to lower the standard error of your estimate; I don't think there's a way around this. (Although if you wanted a better idea of what the distribution of estimates looks like in a certain neighborhood, you could draw $X_i$ from a specific portion of the distribution to produce estimates in that neighborhood.)
– Richard Herron, Mar 8 '11 at 1:17

1 Answer

If your variable of integration is truly one-dimensional, as you seem to be saying, then you should be using quadrature to evaluate the expectation integral. In one dimension, quadrature is far more computationally efficient than Monte Carlo (even after accounting for modified sampling schemes).
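As a minimal sketch of the one-dimensional case (the integrand here is an illustrative toy, not your actual payoff): Gauss-Hermite quadrature evaluates $E[f(X)]$ for a standard normal $X$ with a handful of nodes, no random sampling at all.

```python
import numpy as np

# Gauss-Hermite quadrature: for X ~ N(0, 1),
#   E[f(X)] = (1 / sqrt(pi)) * sum_i w_i * f(sqrt(2) * x_i),
# where (x_i, w_i) are the Gauss-Hermite nodes and weights.
def f(x):
    return x ** 2  # toy integrand; true E[X^2] = 1 for X ~ N(0, 1)

nodes, weights = np.polynomial.hermite.hermgauss(20)
estimate = weights @ f(np.sqrt(2.0) * nodes) / np.sqrt(np.pi)
print(estimate)  # 1.0 to near machine precision with 20 nodes
```

Twenty nodes integrate any polynomial integrand of degree up to 39 exactly, whereas a Monte Carlo estimate of the same accuracy would need astronomically many draws.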

If your problem is actually multidimensional, your best bet is to use the first few iterations (you suggest 500 above) to help choose a scheme for importance sampling. Your windowing scheme is a different trick, sometimes called stratified sampling, and it tends to get tricky from a coding perspective.
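For concreteness, here is a minimal one-dimensional sketch of the stratified idea (the integrand and stratum counts are illustrative): partition the sampling domain into equal-probability strata and draw the same number of points in each, which removes the between-strata component of the variance.

```python
import numpy as np

# Stratified sampling: estimate E[f(U)] for U ~ Uniform(0, 1) by splitting
# [0, 1] into equal strata and drawing the same number of points in each.
def f(x):
    return x ** 2  # toy integrand; true E[f(U)] = 1/3

rng = np.random.default_rng(0)
n_strata, per_stratum = 100, 100
edges = np.linspace(0.0, 1.0, n_strata + 1)

# Uniform draws inside each stratum [edges[i], edges[i+1]).
u = rng.random((n_strata, per_stratum))
samples = edges[:-1, None] + u * (edges[1:] - edges[:-1])[:, None]

# Each stratum carries equal probability mass, so the estimate is a plain
# average over all stratified draws.
estimate = f(samples).mean()
print(estimate)  # close to 1/3, with far lower variance than plain Monte Carlo
```

The coding headache mentioned above shows up once the domain is multidimensional: the number of strata grows exponentially with dimension, which is one reason importance sampling is often preferred there.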

To perform importance sampling, you modify the distribution of your random samples, replacing it with what is known as an equivalent measure, so that most of the samples fall in the "interesting" region. The easiest technique is to ensure you are sampling from a multivariate normal distribution, and then shift the mean and variance of your samples so that, say, 90% of them fall within the "interesting" region.
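As a concrete one-dimensional sketch of this shift-and-reweight idea (the target probability, shift size, and sample count are all illustrative): to estimate a rare-event probability under $N(0,1)$, sample from a mean-shifted normal so that draws land in the interesting region, and correct each draw by the ratio of the two densities.

```python
import numpy as np

# Importance sampling: estimate P(X > 3) for X ~ N(0, 1) by drawing from the
# shifted measure N(mu, 1) with mu = 3, so roughly half the draws land in the
# "interesting" region, then reweighting by the likelihood ratio
#   phi(x) / phi_mu(x) = exp(-mu * x + mu^2 / 2)   (pure mean shift, unit variance).
rng = np.random.default_rng(0)
mu, n = 3.0, 100_000

x = rng.normal(loc=mu, scale=1.0, size=n)    # draws under the shifted measure
w = np.exp(-mu * x + 0.5 * mu ** 2)          # likelihood ratio N(0,1) / N(mu,1)
estimate = np.mean((x > 3.0) * w)            # weighted Monte Carlo sum
print(estimate)  # close to 1 - Phi(3), about 1.35e-3
```

Under the original measure, only about 1 in 740 draws would exceed 3; under the shifted measure about half do, which is why the reweighted estimator has a far smaller standard error at the same sample size.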

Having shifted your samples, you then need to track their likelihood ratio (or Radon-Nikodym derivative) with respect to the original distribution, because each sample must now be weighted by that ratio in your Monte Carlo sum. In the case of a shift in normal distributions, this is fairly easy to compute for each sample $\vec{x}$, as