Truncated Regression | R Data Analysis Examples

Truncated regression is used to model dependent variables for which some of the
observations are not included in the analysis because of the value of the
dependent variable.

This page uses the following packages. Make sure that you can load
them before trying to run the examples on this page. If you do not have
a package installed, run: install.packages("packagename"), or
if you see the version is out of date, run: update.packages().
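The two packages the examples on this page actually call are truncreg (for the truncreg function) and boot (for boot.ci), so loading them up front is a one-liner each:

```r
# truncreg supplies the truncreg() fitting function;
# boot supplies boot() and boot.ci() for bootstrap inference
require(truncreg)
require(boot)
```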

Please note: The purpose of this page is to show how to use various data
analysis commands. It does not cover all aspects of the research process which
researchers are expected to do. In particular, it does not cover data
cleaning and checking, verification of assumptions, model diagnostics or
potential follow-up analyses.

Examples of truncated regression

Example 1. A study of students in a special GATE (gifted and talented education) program
wishes to model achievement as a function of language skills and the type of
program in which the student is currently enrolled. A major concern is
that students are required to have a minimum achievement score of 40 to enter
the special program. Thus, the sample is truncated at an achievement score
of 40.

Example 2. A researcher has data for a sample of Americans whose income is
above the poverty line. Hence, the lower part of the distribution of
income is truncated. If the researcher had a sample of Americans whose
income was at or below the poverty line, then the upper part of the income
distribution would be truncated. In other words, truncation is a result of
sampling only part of the distribution of the outcome variable.

Description of the data

Let’s pursue Example 1 from above. We have a hypothetical data file,
truncreg.dta, with 178 observations. The
outcome variable is called achiv, and the language test score
variable is called langscore. The variable prog is a
categorical predictor variable with three levels indicating the type
of program in which the students were enrolled.

Let’s look at the data. It is always a good idea to start with descriptive
statistics.
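As a sketch, the data can be read with read.dta() from the foreign package (the file name truncreg.dta comes from the description above; the path is an assumption, so point it at your own copy) and then summarized:

```r
require(foreign)  # read.dta() reads Stata .dta files

# path is an assumption -- use the location of your copy of truncreg.dta
dat <- read.dta("truncreg.dta")

# basic descriptive statistics
summary(dat)
sd(dat$achiv)    # standard deviation of the outcome
table(dat$prog)  # counts for each program type
```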

Analysis methods you might consider

Below is a list of some analysis methods you may have
encountered. Some of the methods listed are quite reasonable, while others have
either fallen out of favor or have limitations.

OLS regression – You could analyze these data using OLS regression.
OLS regression will not adjust the estimates of the coefficients to take
into account the effect of truncating the sample at 40, and the coefficients
may be severely biased. This can be conceptualized as a model
specification error (Heckman, 1979).
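For comparison, the naive OLS fit is a one-liner (a sketch assuming the data frame dat and the variable names described above); its estimates ignore the truncation at 40:

```r
# OLS fit that ignores the truncation -- estimates may be severely biased
ols <- lm(achiv ~ langscore + prog, data = dat)
summary(ols)
```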

Truncated regression – Truncated regression addresses the bias
introduced when using OLS regression with truncated data. Note that with truncated regression, the
variance of the outcome variable is reduced compared to the distribution
that is not truncated. Also, if the lower part of the distribution is
truncated, then the mean of the truncated variable will be greater than the
mean from the untruncated variable; if the truncation is from above, the
mean of the truncated variable will be less than the untruncated variable.

These types of models can also be conceptualized as Heckman
selection models, which are used to correct for sampling selection bias.

Censored regression – Sometimes the concepts of truncation and censoring
are confused. With censored data we have all of the observations, but we don’t know the “true” values of
some of them. With truncation, some of the observations are not included in the analysis
because of the value of the outcome variable. It would be inappropriate to analyze
the data in our example using a censored regression model.

Truncated regression

Below we use the truncreg function in the truncreg package
to estimate a truncated regression model. The point argument indicates
where the data are truncated, and the direction indicates whether it is
left or right truncated.
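A minimal sketch of the fit, assuming the data frame dat from above; point = 40 and direction = "left" encode the truncation described in Example 1:

```r
require(truncreg)

# left truncation at 40: only students with achievement above 40
# appear in the sample
m <- truncreg(achiv ~ langscore + prog, data = dat,
              point = 40, direction = "left")
summary(m)
```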

In the table of coefficients, we have the truncated regression coefficients,
the standard errors of the coefficients, the Wald tests (coefficient/standard
error, labeled as t-values in the output), and the p-value associated with each test.

The ancillary statistic sigma is equivalent to the standard error of
estimate in OLS regression. Its value of 8.76 can be compared with the
standard deviation of achievement, which was 8.96; this is a modest reduction.
The output also contains an estimate of the standard error of sigma.

The variable langscore is statistically significant. A unit
increase in language score leads to a .71 increase in predicted achievement.
One of the indicator variables for prog is also statistically
significant. Compared to general programs, academic programs are about 4.07 higher.
To determine whether prog itself is statistically significant,
we can compare models with and without it, which gives a two-degree-of-freedom test of this variable.
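One way to run that test is a likelihood-ratio comparison of the model with and without prog; this sketch assumes the fitted truncreg model m from above:

```r
# refit without prog, then compare log-likelihoods
m2 <- update(m, . ~ . - prog)

# the LR statistic is twice the drop in log-likelihood; prog
# contributes two indicator terms, hence df = 2
lr <- 2 * (logLik(m)[1] - logLik(m2)[1])
pchisq(lr, df = 2, lower.tail = FALSE)  # p-value for prog
```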

The two degree-of-freedom chi-square test indicates that prog is a
statistically significant predictor of achiv. We can get the
expected means for each program at the mean of langscore by
reparameterizing the model.
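One way to reparameterize, sketched under the same assumptions (dat and the truncation at 40): drop the intercept so each prog coefficient becomes a group mean, and center langscore so those means are evaluated at the mean language score.

```r
# center langscore at its mean; removing the intercept (the 0 term)
# makes each prog coefficient an expected program mean
dat$langscore_c <- dat$langscore - mean(dat$langscore)
m3 <- truncreg(achiv ~ 0 + prog + langscore_c, data = dat,
               point = 40, direction = "left")
summary(m3)  # prog coefficients = expected means at mean langscore
```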

We could use the bootstrapped standard error to get a normal approximation
for a significance test and confidence intervals for every parameter. However,
instead we will get the percentile and bias adjusted 95 percent confidence
intervals, using the boot.ci function.
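A sketch of that bootstrap, assuming dat and the model specification above: the statistic function refits the truncated regression on each resample and returns the coefficients (including sigma), and boot.ci() then reports percentile ("perc") and bias-corrected ("bca") intervals for one parameter at a time:

```r
require(boot)
require(truncreg)

# refit the truncated regression on a bootstrap resample of rows
boot_fit <- function(data, indices) {
  fit <- truncreg(achiv ~ langscore + prog, data = data[indices, ],
                  point = 40, direction = "left")
  coef(fit)
}

set.seed(1234)  # make the resampling reproducible
res <- boot(dat, boot_fit, R = 1000)

# percentile and bias-corrected 95% intervals for one coefficient;
# index = 2 assumes langscore is the second parameter
boot.ci(res, index = 2, type = c("perc", "bca"))
```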

The conclusions are the same as from the default model tests. You can compute
a rough estimate of the degree of association for the overall model
by correlating achiv with the predicted values and squaring the result.

dat$yhat <- fitted(m)

# correlation
(r <- with(dat, cor(achiv, yhat)))

## [1] 0.552

# rough variance accounted for
r^2

## [1] 0.305

The calculated value of .31 is a rough estimate of the R-squared you would find in an OLS
regression. The squared correlation between the observed and predicted
academic aptitude values is about 0.31, indicating that these predictors
account for over 30% of the variability in the outcome variable.

Things to consider

The truncreg function is designed to work when the
truncation is on the outcome variable in the model. It is possible to
have samples that are truncated based on one or more predictors. For
example, modeling college GPA as a function of high school GPA (HSGPA)
and SAT scores involves a sample that is truncated based on the predictors,
i.e., only students with higher HSGPA and SAT scores are admitted into the college.

You need to be careful about what value is used as the truncation value,
because it affects the estimation of the coefficients and standard errors.
In the example above, if we had used point = 39 instead of
point = 40, the results would have been slightly different.
It does not matter that there were no values of 40 in our sample.
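Checking that sensitivity is just a refit away (a sketch assuming the model m and data frame dat from above):

```r
# same model with the truncation point moved down by one
m39 <- truncreg(achiv ~ langscore + prog, data = dat,
                point = 39, direction = "left")

# compare the two sets of estimates side by side
cbind(point40 = coef(m), point39 = coef(m39))
```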