I have a follow-up question arising from my afternoon spent playing around with this.

In the version history on the website, under "Analysis Conditional on Covariates", the last paragraph discusses bringing covariates into the model as the ML approach to dealing with missingness in x, but it gives the impression that a different approach is needed when using Bayesian imputation.

Say that, as part of a larger model, I have missingness in x but also an auxiliary variable z that carries information about the missing x values. How do I build this in? Regress x on z? Covary x with z? Clearly both render x dependent, so perhaps it doesn't matter that much.
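Not Mplus syntax, but the reason the two choices coincide under joint normality can be sketched in a few lines of numpy: the "covary x and z" parameterization (a joint covariance matrix) and the "regress x on z" parameterization (slope plus residual variance) carry exactly the same information, since each can be rebuilt from the other. The numbers below are made up purely for illustration:

```python
import numpy as np

# Joint-normal covariance of (x, z): Var(x), Var(z), Cov(x, z)
# (illustrative values, not from any real data set)
var_x, var_z, cov_xz = 2.0, 1.5, 0.9

# "Regress x on z" parameterization: slope and residual variance
b = cov_xz / var_z                  # regression slope of x on z
resid_var = var_x - b**2 * var_z    # residual variance of x given z

# Rebuild the joint covariance entries from the regression parameters
var_x_back = resid_var + b**2 * var_z
cov_xz_back = b * var_z

print(np.isclose(var_x_back, var_x))    # True
print(np.isclose(cov_xz_back, cov_xz))  # True
```

Since the two parameterizations are one-to-one, a Bayesian imputation model that covaries x with z and one that regresses x on z should draw the missing x values from the same conditional distribution.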

I'm attempting some two-level imputation to replicate the findings obtained using REALCOM-Impute, a package developed by the multilevel-modellers here in the UK.

Their final model predicts literacy from a handful of covariates, allowing for within-school clustering. Because one of these covariates suffers from missing data, they have an initial step in which they derive an imputation model for that covariate, using the other variables as predictors and again allowing for school-level clustering.

I've attempted to achieve the same objective by setting up an H0 model in Mplus. Whilst I can get this to work using the following (below, where X1 has missing data), and also using a simpler approach with type = basic (still with clustering), I am unable to turn my within model from simple covariances into a regression model. For instance, if I mimic the REALCOM approach and specify "X1 on Y X2 X3;", I lose all the cases for which X1 is missing.
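The case-loss behaviour described above is the usual listwise deletion that happens when a variable with missing values is placed on the left-hand side of a regression estimated from complete cases only. A toy numpy illustration (ordinary least squares on simulated data, not the Mplus estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Toy data: Y, X2, X3 fully observed; X1 has roughly 30% missingness
Y, X2, X3 = rng.normal(size=(3, n))
X1 = 0.5 * Y + 0.3 * X2 + rng.normal(size=n)
X1[rng.random(n) < 0.3] = np.nan

# Regressing X1 on Y, X2, X3 via complete-case OLS silently drops
# every subject whose X1 is missing
complete = ~np.isnan(X1)
design = np.column_stack([np.ones(n), Y, X2, X3])
beta, *_ = np.linalg.lstsq(design[complete], X1[complete], rcond=None)

print(f"cases used in the X1 regression: {complete.sum()} of {n}")
```

A joint (H0) model, by contrast, can keep the dropped subjects in the likelihood through the marginal distribution of Y, X2, and X3, which is the behaviour the question is trying to obtain.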

which would handle missingness on Y, X2, and X3, as well as not delete subjects with missing X1. But regarding missingness on X1: the regression slopes in the X1 regression are estimated using only the subjects with X1 observed; subjects with missing X1 contribute only to the estimation of the parameters in the marginal part for Y, X2, and X3.
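The factorization behind that statement can be sketched: writing the joint density as f(Y, X2, X3) * f(X1 | Y, X2, X3), a subject with X1 missing contributes only the first factor to the log-likelihood, so the X1 regression slopes never touch that subject's contribution. A minimal numpy sketch with a single predictor z standing in for the observed covariates (illustrative only, not the actual Mplus likelihood):

```python
import numpy as np

def norm_logpdf(x, mean, var):
    # log density of a univariate normal
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

rng = np.random.default_rng(1)
n = 100
z = rng.normal(size=n)                  # fully observed covariate
x1 = 0.8 * z + rng.normal(size=n)       # X1, about 40% missing
x1[rng.random(n) < 0.4] = np.nan
obs = ~np.isnan(x1)

def loglik(a, b, resid_var, mu_z, var_z):
    # every subject contributes the marginal part f(z) ...
    ll = norm_logpdf(z, mu_z, var_z).sum()
    # ... but only subjects with X1 observed contribute f(x1 | z),
    # which is the only place the slope b appears
    ll += norm_logpdf(x1[obs], a + b * z[obs], resid_var).sum()
    return ll

print(f"{obs.sum()} of {n} subjects contribute to the X1 | z part")
```

Changing the slope b moves only the conditional block, so the subjects with missing X1 inform the marginal parameters (mu_z, var_z) but not the X1 regression, matching the description above.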