Just wanted to find out what formula Mplus uses to calculate the bias in bias-corrected bootstrap confidence intervals. Is it the formula that Efron established, or is the bias calculated some other way?

Thank you for getting back to me. From what I know, Efron's formula considers both bias and acceleration; however, MacKinnon (2004) considered only the bias in his simulation (he found that accounting for acceleration in addition to bias did not improve the accuracy of the confidence intervals). Hence, to elaborate on my earlier question: does Mplus consider both bias and acceleration when computing the bias-corrected bootstrap confidence intervals (as Efron's formula suggests), or does it consider the bias only (as MacKinnon, 2004, did)?
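For readers comparing the two variants discussed above, here is a minimal sketch of Efron's bias-corrected percentile construction, with acceleration as an optional parameter (setting a = 0 gives the bias-only variant MacKinnon used). The function name and indexing convention are my own; this illustrates the published formula, not Mplus's actual implementation.

```python
from statistics import NormalDist

def bc_ci(boot_estimates, theta_hat, alpha=0.05, a=0.0):
    """Bias-corrected (BC) percentile bootstrap CI; a != 0 gives BCa.

    Illustrative only -- not Mplus's implementation. Assumes the
    proportion of bootstrap estimates below theta_hat is strictly
    between 0 and 1.
    """
    nd = NormalDist()
    B = len(boot_estimates)
    # Bias-correction constant z0: inverse-normal transform of the
    # proportion of bootstrap estimates falling below the sample estimate.
    prop = sum(t < theta_hat for t in boot_estimates) / B
    z0 = nd.inv_cdf(prop)

    def adj(z):
        # Efron's BCa percentile adjustment; with a = 0 this reduces
        # to the plain BC form Phi(2*z0 + z).
        return nd.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))

    lo_level = adj(nd.inv_cdf(alpha / 2))
    hi_level = adj(nd.inv_cdf(1 - alpha / 2))
    s = sorted(boot_estimates)

    def pick(level):
        return s[min(B - 1, max(0, int(level * B)))]

    return pick(lo_level), pick(hi_level)
```

When the bootstrap distribution is centered on the sample estimate, z0 = 0 and the BC interval reduces to the ordinary percentile interval; a nonzero z0 shifts which order statistics are reported as the endpoints.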

Does Mplus have a way to change the seed of the bootstrap resampling algorithm? None of the seed options under ANALYSIS: seem to do anything, and the bootstrap results do not vary across runs. The only way I can get variation across bootstrap results is to change the number of bootstrap draws.

I am using a subset of a longitudinal data set (not everyone attended a focus group). Those who dropped out were mostly disadvantaged individuals with less education, from lower SES levels, etc. I want to run an SEM model with 4 latent variables and 3 observed variables (all binary), using the WLSMV estimator. My questions are: 1) Is it possible to run an SEM model with a Heckman correction for the missing data? 2) If I run the model on the full sample (regardless of whether they attended the focus group), Mplus will do some kind of imputation; would that be similar to a Heckman correction or not?

Missing data are properly handled by ML or Bayes assuming MAR, which is the Mplus default. WLSMV is less suited to handling missing data (see the UG). Heckman modeling can be done in Mplus, but I would simply use ML or Bayes with our default settings. For more on missing data modeling, see
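For readers unfamiliar with the Heckman modeling mentioned above, here is a minimal sketch of the core quantity in the textbook two-step version (probit selection equation, then an extra regressor in the outcome equation). This is the classic econometric setup, not a description of how Mplus would implement it.

```python
from statistics import NormalDist

nd = NormalDist()

# Textbook two-step Heckman sketch (not Mplus-specific): step 1 fits a
# probit model for selection (e.g., attended the focus group or not);
# step 2 adds the inverse Mills ratio of each selected case's probit
# linear predictor z_i as an extra regressor in the outcome equation,
# absorbing the selection-induced correlation between the errors.
def inverse_mills(z):
    # phi(z) / Phi(z): the conditional mean of a standard-normal error
    # given selection; it shrinks toward 0 as selection becomes certain.
    return nd.pdf(z) / nd.cdf(z)
```

If the coefficient on the inverse Mills ratio in the outcome equation is near zero, selection on unobservables is not distorting the estimates much; that is the intuition behind the reviewer-requested correction.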

Thank you very much. That would have been much easier (and maybe better) than applying a Heckman correction, but unfortunately I do not have the computing capacity to run the ML estimator. It gives this error:

"*** FATAL ERROR THERE IS NOT ENOUGH MEMORY SPACE TO RUN THE PROGRAM ON THE CURRENT INPUT FILE. THE ANALYSIS REQUIRES 4 DIMENSIONS OF INTEGRATION RESULTING IN A TOTAL OF 0.50625E+05 INTEGRATION POINTS. THIS MAY BE THE CAUSE OF THE MEMORY SHORTAGE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. NOTE THAT THE MODEL MAY REQUIRE MORE MEMORY THAN ALLOWED BY THE OPERATING SYSTEM.***"

Also, a reviewer suggested that I use a Heckman correction, so I would appreciate it if you could direct me somewhere I can learn more about how to do that.
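As an aside on the error message itself: the 0.50625E+05 figure is 50625 = 15^4, which is consistent with 15 quadrature points per dimension (assumed here; check the UG for the actual default) over the 4 dimensions of integration. A quick sketch of why the cost explodes with dimensions:

```python
# The numerical-integration grid grows as points_per_dim ** dimensions,
# which is why 4 dimensions of integration exhaust memory so quickly.
# (15 points per dimension is an assumption; it matches the 50625 in
# the error message above.)
def total_integration_points(dims, points_per_dim=15):
    return points_per_dim ** dims

print(total_integration_points(4))  # 50625, i.e. 0.50625E+05
```

This is why the usual workarounds are reducing the number of integration points or dimensions, or switching to a Monte Carlo integration option rather than a full grid.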

I will send the output as soon as possible, but I managed to run my model using estimator = Bayes (it took 2 hours, but I have it now! So thank you very much for the suggestion).

I have 3 questions: 1) Is Bayes similar to the WLSMV estimator, in that if the endogenous variable is categorical the coefficients are probit, while for a latent or continuous variable they are linear regression coefficients?

2) Can I use MODEL CONSTRAINT to test indirect effects?

3) When I run my model with WLSMV I get some fit indices (CFI and TLI), but with Bayes I did not get any of those, so how would I know whether my model fits?

I am running a number of bias-corrected bootstrap mediation models, each involving one predictor variable, two mediating variables, and three outcome variables.

I used 1000 bootstrap replications and the CINTERVAL(BCBOOTSTRAP) option to run my models.

I have run several of these models, but for one of them I obtain 2.5% and 97.5% confidence interval estimates for the indirect effects that are beyond -1 and 1. I recognize that my models do not imply mediation; however, I am concerned that I might have missed something. These out-of-range values do not occur for any of my other models. Could you please point me to a few references that might help me understand why this might have occurred?

I'm confused, though, as the output from the MODEL INDIRECT command gives me a p-value of .10 for the indirect effect, but the CINTERVAL output says that my indirect effect ranges from .02 (lower 2.5%) to .052 (upper 2.5%). Given that this 95% CI excludes zero, shouldn't the p-value for the indirect effect be < .05?

Thank you for the suggestion. I reran the model with CINTERVAL(BOOTSTRAP) rather than CINTERVAL(BCBOOTSTRAP). The 95% CI for the indirect effect is still .001 to .05, implying a significant indirect effect, but the p-value in the MODEL INDIRECT output is .09.

Which would you suggest interpreting when concluding whether or not the indirect effect is significant? Also, do you know why the two would be different?
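One common source of the discrepancy described above can be shown with a toy simulation (all numbers below are made up for illustration; this is not a reproduction of any Mplus computation). The indirect effect a*b has a skewed sampling distribution, so a symmetric normal-theory (Sobel-type) p-value and an asymmetric bootstrap percentile CI can disagree near the significance boundary:

```python
import random
from statistics import NormalDist

random.seed(7)
nd = NormalDist()

# Assumed path estimates and standard errors for one mediated path a -> b.
a_hat, se_a = 0.40, 0.15
b_hat, se_b = 0.30, 0.12

# Parametric bootstrap of the product a*b: its distribution is skewed,
# so the percentile CI is asymmetric around the point estimate.
boot = sorted(random.gauss(a_hat, se_a) * random.gauss(b_hat, se_b)
              for _ in range(5000))
ci = (boot[int(0.025 * 5000)], boot[int(0.975 * 5000)])

# Normal-theory (Sobel-type) test of the same indirect effect, which
# forces a symmetric sampling distribution onto the product.
se_ab = (b_hat**2 * se_a**2 + a_hat**2 * se_b**2) ** 0.5
z = (a_hat * b_hat) / se_ab
p = 2 * (1 - nd.cdf(abs(z)))

# With these numbers the percentile CI excludes zero while the symmetric
# p-value is above .05 -- the same qualitative disagreement as in the post.
```

Because the two procedures answer the question with different approximations to the same sampling distribution, near-boundary cases can flip between them; the bootstrap CI respects the skew of the product, while the tabled p-value does not.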