Welcome to the Institute for Digital Research and Education

Stata Data Analysis Examples
Multivariate Regression Analysis

Version info: Code for this page was tested in Stata 12.

As the name implies, multivariate regression is a technique that estimates a
single regression model with more than one outcome variable. When there is more
than one predictor variable in a multivariate regression model, the model is a
multivariate multiple regression.

Please Note: The purpose of this page is to show how to use various data analysis commands.
It does not cover all aspects of the research process which researchers are expected to do. In
particular, it does not cover data cleaning and checking, verification of assumptions, model
diagnostics and potential follow-up analyses.

Examples of multivariate regression

Example 1. A researcher has collected data on three psychological variables,
four academic variables (standardized test scores), and the type of educational
program the student is in for 600 high school students. She is interested in how
the set of psychological variables is related to the academic variables and the
type of program the student is in.

Example 2. A doctor has collected data on cholesterol, blood pressure, and
weight. She also collected data on the eating habits of the subjects
(e.g., how many ounces of red meat, fish, dairy products, and chocolate consumed
per week). She wants to investigate the relationship between the three
measures of health and eating habits.

Example 3. A researcher is interested in determining what factors influence
the health of African Violet plants. She collects data on the average leaf
diameter, the mass of the root ball, and the average diameter of the blooms, as
well as how long the plant has been in its current container. For predictor variables,
she measures several elements in the soil, as well as the amount of light
and water each plant receives.

Description of the data

Let's pursue Example 1 from above. We have a hypothetical dataset with 600
observations on seven variables. The psychological variables are locus of control
(locus_of_control), self-concept (self_concept), and
motivation (motivation). The academic variables are standardized test scores in
reading (read), writing (write), and science (science), as well as a categorical
variable (prog) giving the type of program the student is in (general,
academic, or vocational).

Let's look at the data (note that there are no missing values in this data set).
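The commands below sketch this step. The dataset filename is an assumption, so substitute the path to your own copy of the data:

```stata
* Load the data (filename assumed -- use the path to your own copy)
use mvreg, clear

* Inspect the variables; with no missing values, all counts should be 600
describe
summarize locus_of_control self_concept motivation read write science
tabulate prog
```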

Analysis methods you might consider

Below is a list of some analysis methods you may have encountered.
Some of the methods listed are quite reasonable while others have either
fallen out of favor or have limitations.

Multivariate multiple regression, the focus of this page.

Separate OLS Regressions - You could analyze these data using separate
OLS regression analyses for each outcome variable. The individual
coefficients, as well as their standard errors, will be the same as those
produced by the multivariate regression. However, the OLS regressions will
not produce multivariate results, nor will they allow for testing of
coefficients across equations.
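As a sketch, the separate OLS analyses would use the same predictors in each equation:

```stata
* One OLS regression per outcome; coefficients and standard errors
* match those from mvreg, but no cross-equation tests are available
regress locus_of_control read write science i.prog
regress self_concept read write science i.prog
regress motivation read write science i.prog
```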

Canonical correlation analysis might be feasible if you don't want to
consider one set of variables as outcome variables and the other set as
predictor variables.

Multivariate regression

To conduct a multivariate regression in Stata, we need to use two commands,
manova and mvreg. The manova command will indicate if
all of the equations, taken together, are statistically significant. The
F-ratios and p-values for four
multivariate criteria are given, namely Wilks' lambda, Lawley-Hotelling
trace, Pillai's trace, and Roy's largest root. Next, we use the mvreg
command to obtain the coefficients, standard errors, etc., for each of the predictors in
each part of the
model. We will also show the use of the test command after the
mvreg command. The use of the test command is one of the
compelling reasons for conducting a multivariate regression analysis.

Below we run the manova command. Note the use of c. in front of the
names of the continuous predictor variables -- this is part of the factor variable
syntax introduced in Stata 11. It is necessary to use the c. to identify
the continuous variables, because, by default, the manova command assumes all
predictor variables are categorical.
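The command looks like this:

```stata
* c. marks the continuous predictors; prog is treated as categorical by default
manova locus_of_control self_concept motivation = c.read c.write c.science prog
```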

The tests for the overall model, shown in the section labeled Model (under
Source), indicate that the model is statistically significant, regardless of which
multivariate criterion is used (i.e. all of the p-values are less than 0.0001). If the
overall model were not statistically significant, you might want to modify it
before running mvreg.

Below the overall model tests are the multivariate tests for each of the predictor variables. Each of the
predictors is statistically significant overall, regardless of which test is
used.

We can use mvreg to obtain estimates of the coefficients in our model.
Normally mvreg requires the user to specify both outcome and predictor
variables. However, because we have just run the manova command, we can use the mvreg command, without
additional input, to run a multivariate regression corresponding to the model just
estimated by manova. (Note that this feature was introduced in Stata 11; if
you are using an earlier version of Stata, you'll need to use the full syntax for mvreg.)
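In other words:

```stata
* Replay the model just estimated by manova (Stata 11 and later)
mvreg

* Equivalent full syntax, using factor-variable notation for prog:
* mvreg locus_of_control self_concept motivation = read write science i.prog
```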

The output from the mvreg command looks much like the output from
the regress command, except that the output is for three equations (one for each
outcome measure) instead of one. In addition to looking like the output from an
OLS regression, the output is interpreted much like the output from an OLS regression.

The first table gives the number of observations, number of parameters, RMSE,
R-squared, F-ratio, and p-value for each of the three models.

Looking at the column labeled P, we see that each of the three
univariate models is statistically significant.

In the column labeled R-sq, we see that the five predictor variables explain
19%, 5%, and 15% of the variance in the outcome variables locus_of_control,
self_concept, and
motivation, respectively. (Note this value is a standard R-squared, not an
adjusted R-squared.)

The second table contains the coefficients, their standard errors, test statistics (t), p-values,
and 95% confidence intervals for each predictor variable in the model, grouped
by outcome. As mentioned above, the coefficients are interpreted in the
same way coefficients from an OLS regression are interpreted. For example, looking at the top of
the table, a one unit change in read is associated with a 0.013 unit
change in the predicted value of locus_of_control.

If you ran a separate OLS regression
for each outcome variable, you would get exactly the same coefficients, standard
errors, t- and
p-values, and confidence intervals as shown above. So why conduct a
multivariate regression? As we mentioned earlier, one of the advantages of using mvreg is that you
can conduct tests of the coefficients across the different outcome variables. (Please
note that many of these tests can be performed after the manova command,
although the process can be more difficult because a series of contrasts needs
to be created.) In the
examples below, we test four different hypotheses.

For the first test, the null hypothesis is that the coefficients for the variable read
are equal to 0 in all three equations. (Note that this duplicates the
test for the variable read in the manova output above.)
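The test command for this hypothesis is:

```stata
* H0: the coefficient on read is 0 in all three equations
test read
```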

The results of this test reject the null hypothesis that the coefficients for
read across the three equations are simultaneously equal to 0, in other
words, the coefficients for read, taken for all three outcomes together,
are statistically significant.

Second, we can test the null hypothesis that the coefficients for prog=2
(identified as 2.prog) and prog=3 (identified as 3.prog) are simultaneously equal to 0 in the
equation for
locus_of_control. When used to test the coefficients for dummy variables
that form a single categorical predictor, this type of test is sometimes called an overall test
for the effect of the categorical predictor (i.e. prog). Note that the variable name in brackets (i.e.
locus_of_control) indicates which equation the coefficient being tested
belongs to, with the equation identified by the name of the outcome variable.
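For example:

```stata
* H0: both prog coefficients are 0 in the locus_of_control equation
test [locus_of_control]2.prog [locus_of_control]3.prog
```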

The results of the above test indicate that the two coefficients together are
significantly different from 0, in other words, the overall effect of prog
on locus_of_control
is statistically significant.

The next example tests the null hypothesis that the coefficient for the variable
write in the equation with
locus_of_control as the outcome is equal to the coefficient for write
in the equation with self_concept as the outcome. The null hypothesis
printed by the test command is that the difference in the coefficients is 0,
which is another way of saying the two coefficients are equal. Another way of
stating this null hypothesis is
that the effect of write on locus_of_control is equal to the
effect of write on self_concept.
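The cross-equation test looks like this:

```stata
* H0: the effect of write is the same in the locus_of_control
* and self_concept equations
test [locus_of_control]write = [self_concept]write
```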

The results of this test indicate that the difference between the
coefficients for write with locus_of_control and
self_concept as the outcome is significantly different from 0, in other
words, the coefficients are significantly different.

For the final example, we test the null hypothesis that the
coefficient of science in the equation for
locus_of_control is equal to the coefficient for science in the
equation for self_concept, and that the coefficient for the variable
write in the equation with the outcome variable
locus_of_control equals the coefficient for write in the
equation with the outcome variable self_concept. We tested the
difference in the coefficients for write in the last example, so we can use
the accum option to add the test of the difference in coefficients
for science, allowing us to test both sets of coefficients at the
same time.
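Using the accum option:

```stata
* Add the constraint for science to the previous test of write,
* testing the two equality constraints jointly
test [locus_of_control]science = [self_concept]science, accum
```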

The results of the above test indicate that, taken together, the differences in the two
sets of coefficients are statistically significant.

Things to consider

The residuals from multivariate regression models are assumed to be multivariate normal.
This is analogous to the assumption of normally distributed errors in univariate linear
regression (i.e. OLS regression).

Multivariate regression analysis is not recommended for small samples.

The outcome variables should be at least moderately correlated for the
multivariate regression analysis to make sense.

If the outcome variables are
dichotomous, then you will want to use either mvprobit or
biprobit.