In this chapter we
will go into various commands that go beyond OLS. This chapter is a bit different from
the others in that it covers a number of different concepts, some of which may be new
to you. These extensions, beyond OLS, have much of the look and feel of OLS but will
provide you with additional tools to work with linear models.

4.1 Robust Regression Methods

It seems to be a rare dataset that meets all of the assumptions underlying multiple
regression. We know that failure to meet assumptions can lead to biased estimates of
coefficients and especially biased estimates of the standard errors. This fact explains a
lot of the activity in the development of robust regression methods.

The idea behind robust regression methods is to make adjustments in the estimates that
take into account some of the flaws in the data itself. We are going to look at three
approaches to robust regression: 1) regression with robust standard errors including the cluster
option, 2) robust regression using iteratively reweighted least squares, and 3) quantile
regression, more specifically, median regression.

Before we look at these approaches, let’s look at a standard OLS regression using the
elementary school academic performance index (elemapi2.dta) dataset.

use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/elemapi2

We will look at a model that predicts the 2000 API score (api00) using the average class
size in kindergarten through 3rd grade (acs_k3), the average class size in 4th through 6th
grade (acs_46), the percent of fully credentialed teachers (full), and the size of the
school (enroll). First let’s look at the descriptive statistics for these variables and
then run the regression. Note the missing values for acs_k3 and acs_46.
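
The commands for the descriptive statistics and the regression are:

summarize api00 acs_k3 acs_46 full enroll
regress api00 acs_k3 acs_46 full enroll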

Here is the residual versus fitted plot for this regression. Notice that the pattern of
the residuals is not exactly as we would hope. The spread of the residuals is
somewhat wider toward the middle right of the graph than at the left, where the
variability of the residuals is somewhat smaller, suggesting some heteroscedasticity.

rvfplot

Below we show the avplots. Although the plots are small, you can see some
points that are of concern. There is not a single extreme point (like we saw in chapter
2) but rather a handful of points that stick out. For example, in the top right graph you
can see several points that diverge from the rest. If these were just one or two
points, we might look for mistakes or for outliers, but we would be more reluctant to
consider such a large number of points as outliers.

avplots

Here is the lvr2plot for this regression. We see 4 points that are
somewhat high in both their leverage and their residuals.

lvr2plot

None of these results are dramatic problems, but the rvfplot suggests that there
might be some outliers and some possible heteroscedasticity; the avplots have some
observations that appear to have high leverage, and the lvr2plot shows some
points in the upper right quadrant that could be influential. We might wish to use
something other than OLS regression to estimate this model. In the next several sections
we will look at some robust regression methods.

4.1.1 Regression with Robust Standard Errors

The Stata regress command includes a robust option for
estimating the standard errors using the Huber-White sandwich estimator. Such robust
standard errors can deal with a collection of minor concerns about failure to meet
assumptions, such as minor problems with normality, heteroscedasticity, or some
observations that exhibit large residuals, leverage or influence.

With the robust option, the point estimates of the coefficients are exactly the
same as in ordinary OLS, but the standard errors take into account issues concerning
heterogeneity and lack of normality. Here is the same regression as above using the robust
option. Note the changes in the standard errors and t-tests (but no change in the
coefficients). In this particular example, using robust standard errors did not change any
of the conclusions from the original OLS regression.
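
The command simply adds the robust option to the regression above (in current
versions of Stata, vce(robust) is an equivalent spelling):

regress api00 acs_k3 acs_46 full enroll, robust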

4.1.2 Using the Cluster Option

As described in Chapter 2, OLS regression assumes that the residuals are independent.
The elemapi2 dataset contains data on 400 schools that come from 37 school
districts. It is very possible that the scores within each school district may not be
independent, and this could lead to residuals that are not independent within districts.
We can use the cluster option to indicate that the observations
are clustered into districts (based on dnum) and that the observations
may be correlated within districts, but would be independent between districts.

By the way, if we did not know the number of districts, we could quickly find out how
many there are, as shown below, by quietly tabulating dnum
and then displaying the stored result r(r), which gives the number of rows in the
table, i.e., the number of school districts in our data.

quietly tabulate dnum
display r(r)
37

Now, we can run regress with the cluster option, as shown below. We do not need to
include the robust option since robust is implied with cluster. Note that the standard
errors have changed substantially, much more so than the change caused by the robust
option by itself.
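
For example (current Stata also accepts vce(cluster dnum)):

regress api00 acs_k3 acs_46 full enroll, cluster(dnum)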

As with the robust option, the estimates of the coefficients are the
same as the OLS estimates, but the standard errors take into account that the observations
within districts are non-independent. Even though the standard errors are larger in
this analysis, the three variables that were significant in the OLS analysis are
significant in this analysis as well. These standard errors are computed based on
aggregate scores for the 37 districts, since these district-level scores should be
independent. If you have a very small number of clusters compared to your overall sample
size, it is possible that the standard errors could be considerably larger than the OLS
results. For example, if there were only 3 districts, the standard errors would be
computed on the aggregate scores for just 3 districts.

4.1.3 Robust Regression

The Stata rreg command performs a robust regression using iteratively reweighted
least squares, i.e., rreg assigns a weight to each observation with higher weights given to
better behaved observations. In fact, extremely deviant cases, those with Cook’s D greater than 1,
can have their weights set to missing so that they are not included in the analysis at all.

We will use rreg with its weight-generating option so that we can
inspect the weights used to weight the observations, as shown below. Note that in this
analysis both the coefficients and the standard errors differ from the original OLS
regression.
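
A sketch of the command (in current Stata the option is spelled genwt(); older
versions of Stata accepted gen()):

rreg api00 acs_k3 acs_46 full enroll, genwt(wt)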

If you compare the robust regression results (directly above) with the OLS results
previously presented, you can see that the coefficients and standard errors are quite
similar, and the t values and p values are also quite similar. Despite the minor problems
that we found in the data when we performed the OLS analysis, the robust regression
analysis yielded quite similar results, suggesting that indeed these were minor problems.
Had the results been substantially different, we would have wanted to further
investigate the reasons why the OLS and robust regression results were different, and
of the two sets of results the robust regression results would probably be the more
trustworthy.

Let’s calculate and look at the predicted (fitted) values (p), the
residuals (r), and the leverage (hat) values (h). Note
that we are including if e(sample) in the commands because rreg can generate
weights of missing and you wouldn’t want to have predicted values and residuals for those
observations.
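
A sketch of those commands (predict after rreg supports the residuals and hat
statistics):

predict p if e(sample)
predict r if e(sample), residuals
predict h if e(sample), hat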

Now, let’s check on the various predicted values and the weighting. First, we will sort
by wt, and then we will look at the first 15 observations. Notice that the smallest
weights are near one-half but quickly get into the .7 range.
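
For example (the exact variables listed are a matter of taste):

sort wt
list api00 p r h wt in 1/15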

Now, let’s look at the last 10 observations. The weights for observations 391 to 395
are all very close to one. The values for observations 396 to the end are missing due to
the missing predictors. Note that the observations above that have the lowest weights are
also those with the largest residuals (residuals over 200) and the observations below with
the highest weights have very low residuals (all less than 3).
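
The listing just described can be produced with:

list api00 p r h wt in -10/l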

After using rreg, it is possible to generate predicted values, residuals and
leverage (hat), but most of the regression diagnostic commands are not available after rreg.
We will have to create some of them ourselves. Here is the graph of
residuals versus fitted (predicted) values with a line at zero. This plot looks much like
the OLS plot, except that in OLS all of the observations would be weighted equally
whereas, as we saw above, the observations with the greatest residuals are weighted less
and hence have less influence on the results.

scatter r p, yline(0)

To get an lvr2plot we are going to have to go through several steps in order to
get the normalized squared residuals and the means of both the residuals and the leverage
(hat) values.

First, we generate the residual squared (r2) and then divide it by the
sum of the squared residuals. We then compute the mean of this value and save it as a
local macro called rm. We do the same for the leverage values, saving their mean in a
local macro called hm. We will use both macros for creating the leverage vs.
residual plot.
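
A sketch of these steps:

generate r2 = r^2
quietly summarize r2
replace r2 = r2 / r(sum)    // normalized squared residual
quietly summarize r2
local rm = r(mean)          // mean of the normalized squared residuals
quietly summarize h
local hm = r(mean)          // mean of the leverage values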

Now, we can plot the leverage against the residual squared as shown below. Comparing
the plot below with the plot from the OLS regression, this plot is much better behaved.
There are no longer points in the upper right quadrant of the graph.

scatter h r2, yline(`hm') xline(`rm')

Let’s close out this analysis by deleting our temporary variables.

drop wt p r h r2

4.1.4 Quantile Regression

Quantile regression, in general, and median regression, in particular, might be
considered as an alternative to rreg. The Stata command qreg does quantile
regression. qreg without any options will actually do a median regression in which
the coefficients are estimated by minimizing the sum of the absolute residuals.
Of course, as an estimate of central tendency, the median is a resistant measure that is
not as greatly affected by outliers as is the mean. It is not clear, however, that median
regression is a resistant estimation procedure; in fact, there is some evidence that it
can be affected by high leverage values.

Here is what the quantile regression looks like using Stata’s qreg command. The
coefficient and standard error for acs_k3 are considerably different when
using qreg as compared to OLS using the regress command
(the coefficients are 1.2 vs 6.9 and the standard errors are 6.4 vs 4.3). The coefficients
and standard errors for the other variables are also different, but not as dramatically
different. Nevertheless, the qreg results indicate that, like the OLS
results, all of the variables except acs_k3 are significant.
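
For reference, the command for the median regression discussed here is:

qreg api00 acs_k3 acs_46 full enroll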

iqreg estimates interquantile regressions, regressions of the difference in
quantiles. The estimated variance-covariance matrix of the estimators is obtained via
bootstrapping.

sqreg estimates simultaneous-quantile regression. It produces the same
coefficients as qreg for each quantile. sqreg obtains a bootstrapped
variance-covariance matrix of the estimators that includes between-quantiles blocks. Thus,
one can test and construct confidence intervals comparing coefficients describing
different quantiles.

bsqreg is the same as sqreg with one quantile. When several quantiles are of
interest, a single run of sqreg is therefore faster than running bsqreg once per
quantile.

4.2 Constrained Linear Regression

Let’s begin this section by looking at a regression model using the hsb2 dataset.
The hsb2 file is a sample of 200 cases from the High School and Beyond
Study (Rock, Hilton, Pollack, Ekstrom & Goertz, 1985). It includes the
following variables: id, female, race, ses, schtyp,
prog, read, write, math, science and socst.
The variables read, write, math, science and socst
are the results of standardized tests on reading, writing, math, science and
social studies (respectively), and the variable female is coded 1 if
female, 0 if male.

use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/hsb2

Let’s start by doing an OLS regression where we predict the socst score
from read, write, math, science
and female (gender), as shown below.
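
The command is:

regress socst read write math science female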

Notice that the coefficients for read and write are very similar, which
makes sense since they are both measures of language ability. Also, the coefficients
for math and science are similar (in that they are both
not significantly different from 0). Suppose that we have a theory that suggests that read
and write should have equal coefficients, and that math and science
should have equal coefficients as well. We can test the equality
of the coefficients using the test command.
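
Two equivalent ways to state this test with the test command are sketched below:

test read = write
test read - write = 0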

Both of these results indicate that there is no significant difference in the
coefficients for the reading and writing scores. Since it appears that the coefficients
for math and science are also equal, let’s test the
equality of those as well (using the testparm command).
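
The equal option of testparm tests that the listed coefficients are equal:

testparm math science, equal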

Let’s now perform both of these tests together, simultaneously testing that the
coefficient for read equals write and math
equals science. We do this using two test
commands, the second using the accum option to accumulate the first test
with the second test to test both of these hypotheses together.
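
We do this as follows:

test read = write
test math = science, accum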

Note this second test has 2 df, since it is testing both of the hypotheses listed, and
this test is not significant, suggesting these pairs of coefficients are not significantly
different from each other. We can estimate regression models where we constrain
coefficients to be equal to each other. For example, let’s begin on a limited scale
and constrain read to equal write. First, we will define a constraint and
then we will run the cnsreg command.
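
A sketch of the two steps:

constraint define 1 read = write
cnsreg socst read write math science female, constraints(1)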

Notice that the coefficients for read and write are identical, along with
their standard errors, t-test, etc. Also note that the degrees of freedom for the F test
is four, not five, as in the OLS model. This is because only one coefficient is estimated
for read and write, estimated like a single variable equal to the sum of
their values. Notice also that the Root MSE is slightly higher for the constrained
model, but only slightly higher. This is because we have forced the model to
estimate the coefficients for read and write that are
not as good at minimizing the Sum of Squares Error (the coefficients that would minimize
the SSE would be the coefficients from the unconstrained model).

Next, we will define a second constraint, setting math equal to science.
We will also abbreviate the constraints option to c.
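
For example:

constraint define 2 math = science
cnsreg socst read write math science female, c(1 2)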

Now the coefficients for read and write are constrained to be equal, as are the
coefficients for math and science, and the degrees of freedom for the model have
dropped to three. Again, the Root MSE is slightly larger than in the prior model, but we
should emphasize only very slightly larger. If indeed the population coefficients for
read and write are equal, and the population coefficients for math and science are
equal, then these combined (constrained) estimates may be more stable and generalize
better to other samples. So although these estimates may lead to a slightly higher
standard error of prediction in this sample, they may generalize better to the population
from which they came.

4.3 Regression with Censored or Truncated Data

Analyzing data that contain censored values or are truncated is common in many research
disciplines. According to Hosmer and Lemeshow (1999), a censored value is one whose value
is incomplete due to random factors for each subject. A truncated observation, on the
other hand, is one which is incomplete due to a selection process in the design of the
study.

We will begin by looking at analyzing data with censored values.

4.3.1 Regression with Censored Data

In this example we have a variable called acadindx which is a weighted
combination of standardized test scores and academic grades. The maximum possible score on
acadindx is 200 but it is clear that the 16 students who scored 200 are not exactly
equal in their academic abilities. In other words, there is variability in academic
ability that is not being accounted for when students score 200 on acadindx. The variable acadindx
is said to be censored; in particular, it is right-censored.

Let’s look at the example. We will begin by looking at a description of the data, some
descriptive statistics, and correlations among the variables.
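
For example:

use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/acadindx, clear
describe
summarize
correlate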

The tobit command is one of the commands that can be used for regression with
censored data. The syntax of the command is similar to regress, with the addition of the
ul option to indicate that the right-censored value is 200. We will follow the tobit
command by predicting p2 containing the tobit predicted values (p1, used in the listing
below, holds the predicted values from the corresponding OLS regression).
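
A sketch, assuming the file’s predictors are female, reading and writing:

regress acadindx female reading writing
predict p1
tobit acadindx female reading writing, ul(200)
predict p2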

When we look at a listing of p1 and p2 for all students who scored the
maximum of 200 on acadindx, we see that in every case the tobit predicted value is
greater than the OLS predicted value. These predictions represent an estimate of what the
values would be if acadindx could exceed 200.
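
That listing can be produced with:

list p1 p2 acadindx if acadindx == 200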

You can declare both lower and upper censored values. The censored values are fixed in
that the same lower and upper values apply to all observations.

There are two other commands in Stata that allow you more flexibility in doing
regression with censored data.

cnreg estimates a model in which the censored values may vary from observation
to observation.

intreg estimates a model where the response variable for each observation is
either point data, interval data, left-censored data, or right-censored data.

4.3.2 Regression with Truncated Data

Truncation occurs when some observations are not included in the analysis because
of the value of a variable. We will illustrate analysis with truncation using the
dataset, acadindx, that was used in the previous section. If acadindx is no
longer loaded in memory you can get it with the following use command.

use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/acadindx
(max possible on acadindx is 200)

Let’s imagine that in order to get into a special honors program, students need to
score above 160 on acadindx. So we will drop all observations in which the value
of acadindx is 160 or less.

drop if acadindx <= 160
(56 observations deleted)

Now, let’s estimate the same model that we used in the section on censored data, only
this time we will pretend that a 200 for acadindx is not censored.
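
A sketch, with the same assumed predictors as before:

regress acadindx female reading writing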

It is clear that the estimates of the coefficients are distorted due to the fact that
56 observations are no longer in the dataset. This amounts to restriction of range on both
the response variable and the predictor variables. For example, the coefficient for
writing dropped from .79 to .59. What this means is that if our goal is to find the
relation between acadindx and the predictor variables in the population, then the
truncation of acadindx in our sample is going to lead to biased estimates. A better
approach to analyzing these data is to use truncated regression. In Stata this can be
accomplished using the truncreg command where the ll option is used to
indicate the lower limit of acadindx scores used in the truncation.
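
For example:

truncreg acadindx female reading writing, ll(160)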

The coefficients from the truncreg command are closer to the original OLS results;
for example, the coefficient for writing is .77, which is closer to the OLS
result of .79. However, the results are still somewhat different for the other
variables; for example, the coefficient for reading is .52 in the truncreg,
as compared to .72 in the original OLS with the unrestricted data, and better than the OLS
estimate of .47 with the restricted data. While truncreg may
improve the estimates on a restricted data file as compared to OLS, it is certainly no
substitute for analyzing the complete unrestricted data file.

4.4 Regression with Measurement Error

As you will most likely recall, one of the assumptions of regression is that the
predictor variables are measured without error. The problem is that measurement error in
predictor variables leads to underestimation of the regression coefficients. Stata’s eivreg
command takes measurement error into account when estimating the coefficients for the model.

The predictor read is a standardized test score. Every test has measurement error. We
don’t know the exact reliability of read, but using .9 for the reliability would
probably not be far off. We will now estimate the same regression model with the Stata eivreg
command, which stands for errors-in-variables regression.
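
The model being re-estimated is not shown in this excerpt; as a sketch, assuming the
hsb2 regression of write on read and female, the commands would be:

use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/hsb2, clear
regress write read female
eivreg write read female, reliab(read .9)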

Note that the overall F and R2 went up, but that the coefficient for read is
no longer statistically significant.

4.5 Multiple Equation Regression Models

If a dataset has enough variables we may want to estimate more than one regression model.
For example, we may want to predict y1 from x1 and also predict y2 from x2. Even though there
are no variables in common, these two models are not independent of one another because
the data come from the same subjects. This is an example of one type of multiple equation regression
known as seemingly unrelated regression.
We can estimate the coefficients and obtain standard errors taking into account the correlated
errors in the two models. An important feature of multiple equation models is that we can
test predictors across equations.

Another example of multiple equation regression is if we wished to predict y1, y2 and y3 from
x1 and x2. This is a three equation system, known as multivariate regression, with the same
predictor variables for each model. Again, we have the capability of testing coefficients across
the different equations.

Let’s look at an example using the hsb2 data, where we predict science from math and
female, and write from read and female. It is the case that the errors (residuals) from
these two models would be correlated. This would be true even if the predictor female
were not found in both models. The errors would be correlated because all of the values
of the variables are collected on the same set of observations. This is a situation
tailor-made for seemingly unrelated regression using the sureg command. Here is our
first model using OLS, followed by the second.
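
Sketches of the two models:

regress science math female
regress write read female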

With the sureg command we can estimate both models simultaneously while
accounting for the correlated errors, leading to more efficient estimates of
the coefficients and standard errors. By including the corr option with sureg
we can also obtain an estimate of the correlation between the errors of the two models,
as shown below. Note that both the estimates of the coefficients and their standard
errors are different from the OLS model estimates shown above. The bottom of the output
provides a Breusch-Pagan test of whether the residuals from the two equations are
independent (in this case, we would say the residuals were not independent, p=0.0407).
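
For example:

sureg (science math female) (write read female), corr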

Now that we have estimated our models let’s test the predictor variables. The test for female
combines information from both models. The tests for math and read are
actually equivalent to the z-tests above except that the results are displayed as
chi-square tests.
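
For example (after sureg, test female expands to the female coefficient in each
equation):

test female
test math
test read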

Now suppose we wish to predict read, write and math from female, prog1 and prog3,
using three separate OLS regressions (sketched below). These regressions provide fine
estimates of the coefficients and standard errors, but these results assume the residuals
of each analysis are completely independent of the others. Also, if we wish to test
female, we would have to do it three times and would not be able to combine the
information from all three tests into a single overall test.
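
A sketch of the three regressions (the tabulate line creates the prog1-prog3 dummies
from prog if they do not already exist):

tabulate prog, generate(prog)
regress read female prog1 prog3
regress write female prog1 prog3
regress math female prog1 prog3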

Now let’s use sureg to estimate the same models. Since all 3 models have
the same predictors, we can use the syntax shown below, which says that read,
write and math will each be predicted by female,
prog1 and prog3. Note that the coefficients are identical
in the OLS results above and the sureg results below; however, the
standard errors differ, if only slightly, due to the correlation among the residuals
in the multiple equations.
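
The syntax looks like this:

sureg (read write math = female prog1 prog3)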

In addition to getting more appropriate standard errors, sureg allows
us to test the effects of the predictors across the equations. We can test the
hypothesis that the coefficient for female is 0 for all three outcome
variables, as shown below.
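
For example:

test female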

Below we use mvreg to predict read, write and math
from female, prog1 and prog3. Note that the top part of
the output is similar to the sureg output in that it gives an overall
summary of the model for each outcome variable; however, the results are somewhat
different in that sureg uses a chi-square test for the overall fit
of each model, while mvreg uses an F-test. The lower part
of the output appears similar to the sureg output; however, when you
compare the standard errors you see that the results are not the same. These standard errors
correspond to the OLS standard errors, so these results below do not take into account the
correlations among the residuals (as do the sureg results).
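
The command:

mvreg read write math = female prog1 prog3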

Now, let’s test female. Note, that female was statistically significant
in only one of the three equations. Using the test command after mvreg allows us to
test female across all three equations simultaneously. And, guess what? It is
significant. This is consistent with what we found using sureg (except
that sureg did this test using a Chi-Square test).
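
The same test command works after mvreg:

test female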

Many researchers familiar with traditional multivariate analysis may not recognize the
tests above. They don’t see Wilks’ Lambda, Pillai’s Trace or the Hotelling-Lawley Trace
statistics, statistics with which they are familiar. It is possible to obtain these
statistics using the mvtest command written by David E. Moore of the University of
Cincinnati. mvtest, which UCLA updated to work with Stata 6 and above,
can be downloaded over the internet like this.
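
One way is with findit, which searches Stata’s online resources (note that current
versions of Stata also ship an official mvtest command, which is not the same program):

findit mvtest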

The sureg and mvreg commands both allow you to test
multi-equation models while taking into account the fact that the equations are not
independent. The sureg command allows you to get estimates for each
equation which adjust for the non-independence of the equations, and it allows you to
estimate equations which don’t necessarily have the same predictors. By contrast, mvreg
is restricted to equations that have the same set of predictors, and the estimates it
provides for the individual equations are the same as the OLS estimates. However, mvreg
(especially when combined with mvtest) allows you to perform more
traditional multivariate tests of predictors.

4.6 Summary

This chapter has covered a variety of topics that go beyond ordinary least
squares regression, but there still remain a variety of topics we wish we could
have covered, including the analysis of survey data, dealing with missing data,
panel data analysis, and more. And, for the topics we did cover, we wish we
could have gone into even more detail. One of our main goals for this chapter
was to help you be aware of some of the techniques that are available in Stata
for analyzing data that do not fit the assumptions of OLS regression and some of
the remedies that are possible. If you are a member of the UCLA research
community, and you have further questions, we invite you to use our consulting
services to discuss issues specific to your data analysis.

4.7 Self Assessment

1. Use the crime data file that was used in chapter 2 (use
https://stats.idre.ucla.edu/stat/stata/webbooks/reg/crime ) and look at a regression model
predicting murder from pctmetro, poverty, pcths
and single using OLS, and make avplots and an lvr2plot
following the regression. Are there any states that look worrisome? Repeat this analysis
using regression with robust standard errors and show avplots
for the analysis. Repeat the analysis using robust regression and make a
manually created lvr2plot. Also run the results using qreg.
Compare the results of the different analyses. Look at the weights from the
robust regression and comment on the weights.

2. Using the elemapi2 data file (use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/elemapi2
) pretend that 550 is the lowest score that a school could achieve on api00,
i.e., create a new variable with the api00 score and recode it
such that any score of 550 or below becomes 550. Use meals, ell
and emer to predict api scores using 1) OLS to predict the
original api score (before recoding), 2) OLS to predict the recoded score where
550 was the lowest value, and 3) tobit to predict the
recoded api score, indicating the lowest value is 550. Compare the results of
these analyses.

3. Using the elemapi2 data file (use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/elemapi2
) pretend that only schools with api scores of 550 or higher were included in
the sample. Use meals, ell and emer
to predict api scores using 1) OLS to predict api from the full set of
observations, 2) OLS to predict api using just the observations with api scores
of 550 or higher, and 3) truncreg to predict api using
just the observations where api is 550 or higher. Compare the results of these
analyses.

4. Using the hsb2 data file (use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/hsb2
) predict read from science, socst, math and write.
Use the testparm and test commands to test
the equality of the coefficients for science, socst
and math. Use cnsreg to estimate a model where
these three parameters are equal.

5. Using the elemapi2 data file (use https://stats.idre.ucla.edu/stat/stata/webbooks/reg/elemapi2
), use meals, ell and emer to predict api00
and api99, estimating the coefficients for these predictors while taking into
account the non-independence of the schools. Test the overall contribution of each of the
predictors in jointly predicting api scores in these two years. Test whether the
contribution of emer is the same for api00 and api99.