Why Bayesian analysis?

You may be interested in Bayesian analysis if

you have some prior information available from previous studies that you
would like to incorporate in your analysis. For example, in a study of
preterm birthweights, it would be sensible to incorporate the prior
information that the probability of a mean birthweight above 15 pounds is
negligible.

your research problem requires you to answer the question: What is the
probability that my parameter of interest belongs to
a specific range? For example, what is the probability that an odds ratio
is between 0.2 and 0.5?

you want to assign a probability to your research hypothesis.
For example, what is the probability that a person accused of a
crime is guilty?

Stata provides a suite of features for performing Bayesian analysis. The main
estimation commands are bayes: and bayesmh. The bayes
prefix is a convenient command for fitting Bayesian regression
models—simply prefix your estimation command with bayes:. The
bayesmh command fits general Bayesian models—you can choose
from a variety of built-in models or
program your own. The main simulation
method is an adaptive Metropolis–Hastings (MH) Markov chain Monte Carlo (MCMC) method.
Gibbs
sampling is also supported for selected likelihood and prior combinations.
Commands for checking convergence and efficiency of MCMC, for obtaining
posterior summaries for parameters and functions of parameters, for hypothesis
testing, and for model comparison are also provided.
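
For example, here is a quick sketch of both approaches using the auto dataset shipped with Stata; the explicit priors in the bayesmh call are illustrative choices, not recommendations:

. sysuse auto
. bayes: regress mpg weight
. bayesmh mpg weight, likelihood(normal({var}))
>        prior({mpg:}, normal(0, 10000)) prior({var}, igamma(0.01, 0.01))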

Let's see it work

Your Bayesian analysis can be as simple or as complicated as your research
problem. Here's an overview.
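
The example that follows models the mean of car mileage, mpg, from the auto data, using a normal likelihood with a known variance and a flat (noninformative) prior on the mean, {mpg:_cons}. We first run regress to obtain the OLS estimate of the mean for later comparison. Here is a sketch of that setup; the known-variance value of 36 is illustrative, chosen only to be near the sample variance of mpg:

. sysuse auto, clear
. regress mpg
. bayesmh mpg, likelihood(normal(36)) prior({mpg:_cons}, flat)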

bayesmh discarded the first 2,500 burn-in iterations and used the
subsequent 10,000 MCMC iterations to produce the results. The estimated
posterior mean (the mean of the posterior distribution) of the parameter
{mpg:_cons} is close to the OLS estimate obtained earlier, as is expected with
the noninformative prior. The estimated posterior standard deviation is close
to the standard error of the OLS estimate.

The MCSE of the posterior mean estimate is 0.016. The MCSE measures the
accuracy of our simulation results: we would like it to be zero, but that
would take an infinite number of MCMC iterations. With 10,000 iterations, our
results are accurate to about 1 decimal place. That is good enough here, but
if we wanted more accuracy, we could increase the MCMC sample size.
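
For example, the mcmcsize() option controls the number of iterations retained after the burn-in; a sketch, reusing the illustrative specification from above:

. bayesmh mpg, likelihood(normal(36)) prior({mpg:_cons}, flat) mcmcsize(50000)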

According to the credible interval, the probability that the mean of mpg
is between 19.92 and 22.65 is about 0.95. Although the confidence interval
reported in our earlier regression has similar values, it does not have the same
probabilistic interpretation.

Because bayesmh uses MCMC, a simulation-based method, the results will be
different every time we run the command. (Inferential conclusions should stay
the same provided MCMC converged.) You may want to specify a random-number
seed for reproducibility in your analysis.
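
For example, the rseed() option fixes the seed, so rerunning the same specification reproduces the same results; the seed value below is arbitrary:

. bayesmh mpg, likelihood(normal(36)) prior({mpg:_cons}, flat) rseed(14)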

Checking MCMC convergence

The interpretation of our results is valid only if MCMC converged. Let's
explore convergence visually.

. bayesgraph diagnostics {mpg:_cons}, histopts(normal)

The trace plot of {mpg:_cons} demonstrates good mixing. The
autocorrelation dies off quickly. The posterior distribution of
{mpg:_cons} resembles the normal distribution, as is expected for the
specified likelihood and prior distributions. We have no reason to suspect
nonconvergence.

We can now proceed with further analysis.

Hypothesis testing

We can test an interval hypothesis that the mean mileage is greater than 21.
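
This is done with bayestest interval; specifying only a lower bound tests whether the parameter exceeds it:

. bayestest interval {mpg:_cons}, lower(21)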

The estimated probability of this interval hypothesis is 0.67. This is in
contrast with classical hypothesis testing, which provides a deterministic
decision about whether to reject the null hypothesis that the mean is
greater than 21 based on some prespecified significance level. Frequentist
hypothesis testing does not assign probabilistic statements to the tested
hypotheses.

Informative priors

Suppose that based on previous studies, we have prior information that the
mean mileage is normally distributed with mean 30 and variance 5. We can
easily incorporate this prior information in our Bayesian model. We will also
store our MCMC and estimation results for future comparison.
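
A sketch of this step, keeping the illustrative known variance of 36 and assuming the earlier flat-prior model was refit with saving() and stored as model1 in the same way; saving() retains the MCMC sample, estimates store saves the estimation results, and bayestest model compares the models' posterior probabilities:

. bayesmh mpg, likelihood(normal(36)) prior({mpg:_cons}, normal(30, 5))
>        saving(model2_sim) rseed(14)
. estimates store model2
. bayestest model model1 model2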

The posterior probability of the first model is very low compared with that of the
second model. In fact, the posterior probability of the first model is near 0,
whereas the posterior probability of the second model is near 1.

Normal model with unknown variance

Continuing our car-mileage example, we now relax the assumption of a known
variance of the normal distribution and model it as a parameter {var}.
We specify a noninformative Jeffreys prior for the variance parameter.
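
A sketch of this specification; the flat prior on the mean carries over from before:

. bayesmh mpg, likelihood(normal({var}))
>        prior({mpg:_cons}, flat) prior({var}, jeffreys) rseed(14)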

Note that the MCSE for parameter {mpg:_cons} is larger in this
model than it was in the model with a fixed variance. As the number of
model parameters increases, the efficiency of the MH algorithm decreases,
and the task of constructing an efficient algorithm becomes more and
more important. In the above, for example, we could have improved the
efficiency of MH by specifying the variance parameter in a separate
block, block({var}), to be sampled independently of the mean
parameter.
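
Concretely, the blocked variant adds a single option to the same specification:

. bayesmh mpg, likelihood(normal({var})) prior({mpg:_cons}, flat)
>        prior({var}, jeffreys) block({var}) rseed(14)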

Even without adding the blocking,
convergence diagnostics for both mean and variance look good.

. bayesgraph diagnostics _all

We can compute posterior summaries for linear and nonlinear expressions of
our parameters. Let's do this for the standardized mean, which is a function
of both the mean and variance parameters.
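
For example, bayesstats summary accepts labeled expressions of parameters; here is a sketch for the standardized mean:

. bayesstats summary (stdmean: {mpg:_cons}/sqrt({var}))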

Simple linear regression

See Linear regression
for how to fit linear regression models using the bayes prefix.
Continuing with bayesmh, the command makes it easy to include explanatory
variables in our Bayesian models. The syntax for regressions looks just as it does
in other Stata estimation commands. For example, we can include an indicator of
whether the car is foreign or domestic when modeling the mean car mileage.
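
For example, here is a sketch with illustrative priors, with the matching bayes prefix call shown for comparison:

. bayesmh mpg foreign, likelihood(normal({var}))
>        prior({mpg:}, normal(0, 10000)) prior({var}, igamma(0.01, 0.01))
. bayes: regress mpg foreign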

The main difference between the bayes prefix and the bayesmh
command is that bayes: builds all model parameters automatically and
assigns default priors for them. Depending on the regression model, bayes:
may also use different sampling settings than bayesmh, such as blocking
of model parameters to improve the efficiency of the sampling algorithm.

In the above, bayes: used the default normal priors with 0 mean and
variance of 10,000 for the regression coefficients and the default
inverse-gamma prior with scale and shape parameters of 0.01 for the error
variance. Also, because the regression coefficients and the error variance
are a priori independent, bayes: samples them separately in two
different blocks. The following bayesmh specification produces
identical results, provided that the same random-number seed is specified.
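
A sketch of that specification; the seed value here is arbitrary and would need to match the one used with the bayes prefix:

. bayesmh mpg foreign, likelihood(normal({var}))
>        prior({mpg:}, normal(0, 10000)) prior({var}, igamma(0.01, 0.01))
>        block({var}) rseed(14)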

Multivariate linear regression

We can fit a multivariate normal regression to model two size characteristics of
automobiles—trunk space, trunk, and turn circle, turn—as a function
of where the car is manufactured, foreign, which records whether the car is
foreign or domestic. The
syntax for the regression part of the model is just like the syntax for
Stata's mvreg (multivariate regression) command.

We model the covariance matrix of trunk and turn as the matrix
parameter {Sigma,matrix}. We specify
noninformative normal priors with large variances for all regression
coefficients and use a Jeffreys prior for the covariance matrix. The MH algorithm
has very low efficiencies for sampling covariance matrices, so we use
Gibbs sampling instead. The regression coefficients are sampled by using
the MH method.
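
A sketch of such a specification; the prior variance of 10,000 is an illustrative "large" value, and jeffreys(2) requests a Jeffreys prior for a 2 x 2 covariance matrix:

. bayesmh trunk turn = foreign, likelihood(mvnormal({Sigma,matrix}))
>        prior({trunk:} {turn:}, normal(0, 10000))
>        prior({Sigma,matrix}, jeffreys(2))
>        block({Sigma,matrix}, gibbs) rseed(14)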

Nonlinear model: Change-point analysis

As an example of a nonlinear model, we consider a change-point analysis of the
British coal-mining disaster dataset for the period of 1851 to 1962. This
example is adapted from Carlin, Gelfand, and Smith (1992). In these data, the
count variable records the number of disasters involving 10 or more
deaths.

The graph below suggests a fairly abrupt decrease in the rate of disasters
around the 1887–1895 period.

Let's estimate the date when the rate of disasters changed.

We will fit the model

count ~ Poisson(mu1), if year < cp
count ~ Poisson(mu2), if year >= cp

cp—the change point—is the main parameter of interest.

We will use noninformative priors for the parameters: flat priors for the
means and a uniform on [1851,1962] for the change point.

We will model the mean of the Poisson distribution as a mixture of mu1
and mu2.
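
A sketch of the full specification, assuming variables count and year as described above; the sign() expressions switch the mean between {mu1} and {mu2} at the change point, and the initial values are illustrative. The second command summarizes the factor of decrease, {mu1}/{mu2}:

. bayesmh count,
>        likelihood(dpoisson({mu1}*sign(year<{cp}) + {mu2}*sign(year>={cp})))
>        prior({mu1} {mu2}, flat) prior({cp}, uniform(1851, 1962))
>        initial({mu1} 1 {mu2} 1 {cp} 1906) rseed(14)
. bayesstats summary (ratio: {mu1}/{mu2})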

After 1890, the mean number of disasters decreased by a factor of about 3.4,
with a 95% credible interval of [2.5, 4.5].

The interpretation of our change-point results is valid only if MCMC
converged. We can explore convergence visually.

. bayesgraph diagnostics {cp} (ratio: {mu1}/{mu2})

The graphical diagnostics for {cp} and the ratio look reasonable. The
marginal posterior distribution of the change point has the main peak at about
1890 and two smaller bumps around the years 1886 and 1896, which correspond to
local peaks in the number of disasters.

Reference

Carlin, B. P., A. E. Gelfand, and A. F. M. Smith. 1992. Hierarchical Bayesian
analysis of changepoint problems. Applied Statistics 41: 389–405.

Tell me more

Stata's Bayesian analysis features are documented in their own manual.
You can read more about Bayesian analysis and about Stata's Bayesian
features, and see many worked examples, in the Stata Bayesian Analysis
Reference Manual.