In the previous chapter, we learned how to do ordinary linear regression with Stata,
concluding with methods for examining the distribution of our variables.
Without verifying that your data have met the assumptions underlying OLS regression, your results may
be misleading. This chapter will explore how you can use Stata to check on how well your
data meet the assumptions of OLS regression. In particular, we will consider the
following assumptions.

Linearity - the relationships between the predictors and the outcome variable should be
linear

Normality - the errors should be normally distributed. Technically, normality is
necessary only for the hypothesis tests to be valid; estimation of the coefficients only requires
that the errors be identically and independently distributed

Homogeneity of variance (homoscedasticity) - the error variance should be constant

Independence - the errors associated with one observation are not correlated with the
errors of any other observation

Errors in variables - predictor variables are measured without error (we will cover this
in Chapter 4)

Many graphical methods and numerical tests have been developed over the years for
regression diagnostics. Stata has many of these methods built-in, and others are available
that can be downloaded over the internet. In particular, Nicholas J. Cox (University
of Durham) has produced a collection of convenience commands which can be
downloaded from SSC (ssc install commandname). These commands include indexplot,
rvfplot2, rdplot, qfrplot and ovfplot. In this chapter,
we will explore these methods and show how to verify
regression assumptions and detect potential problems using Stata.

2.1 Unusual and influential data

A single observation that is substantially different from all other observations can
make a large difference in the results of your regression analysis. If a single
observation (or small group of observations) substantially changes your results, you would
want to know about this and investigate further. There are three ways that an
observation can be unusual.

Outliers: In linear regression, an outlier is an observation with a large
residual. In other words, it is an observation whose dependent-variable value is unusual
given its values on the predictor variables. An outlier may indicate a sample peculiarity
or may indicate a data entry error or other problem.

Leverage: An observation with an extreme value on a predictor variable is called
a point with high leverage. Leverage is a measure of how far the value of a predictor
variable deviates from its mean. These leverage points can have an effect on the estimate of
regression coefficients.

Influence: An observation is said to be influential if removing the observation
substantially changes the estimate of coefficients. Influence can be thought of as the
product of leverage and outlierness.

How can we identify these three types of observations? Let's look at an example dataset
called crime. This dataset appears in Statistical Methods for Social
Sciences, Third Edition by Alan Agresti and Barbara Finlay (Prentice Hall, 1997). The
variables are state id (sid), state name (state), violent crimes per 100,000
people (crime), murders per 1,000,000 (murder), the percent of the
population living in metropolitan areas (pctmetro), the percent of the population
that is white (pctwhite), percent of population with a high school education or
above (pcths), percent of population living under poverty line (poverty),
and percent of population that are single parents (single).

Let's say that we want to predict crime by pctmetro, poverty, and single. That is to say, we want to build a linear regression model between the response
variable crime and the independent variables pctmetro, poverty and single.
We will first look at the scatter plots of crime against each of the predictor variables
before the regression analysis so we will have some ideas about potential problems. We can
create a scatterplot matrix of these variables as shown below.

graph matrix crime pctmetro poverty single

The graphs of crime with other variables show some potential problems.
In every plot, we see a data point that is far away from the rest of the data
points. Let's make individual graphs of crime with pctmetro and poverty and single
so we can get a better view of these scatterplots. We will add the
mlabel(state)
option to label each marker with the state name to identify outlying states.

scatter crime pctmetro, mlabel(state)

scatter crime poverty, mlabel(state)

scatter crime single, mlabel(state)

All the scatter plots suggest that the observation for state = dc is a point
that requires extra attention since it stands out away from all of the other points. We
will keep it in mind when we do our regression analysis.

Now let's try the regression command predicting crime from pctmetro, poverty
and single. We will go step-by-step to identify all the potentially unusual
or influential points afterwards.
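Here is the regression command (the same model we return to later in this chapter):

regress crime pctmetro poverty single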

Let's examine the studentized residuals as a first means for identifying outliers.
Below we use the predict command with the rstudent option to generate
studentized residuals and we name the residuals r. We can choose any name
we like as long as it is a legal Stata variable name. Studentized residuals are a type of
standardized residual that can be used to identify outliers.

predict r, rstudent

Let's examine the residuals with a stem and leaf plot. We see three residuals that
stick out, -3.57, 2.62 and 3.77.
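We can produce the display with the stem command on the residuals we just created:

stem r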

The stem and leaf display helps us see some potential outliers, but we cannot see
which state (which observations) are potential outliers. Let's sort the data
on the residuals and show the 10 largest and 10 smallest residuals along with the state id
and state name. Note that in the second list command below, the range -10/l ends with
the letter "l" (for "last"), NOT the number one.
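The sorting and listing can be done as follows, using the residual variable r created above:

sort r

list sid state r in 1/10

list sid state r in -10/l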

We should pay attention to studentized residuals that exceed +2 or -2, and get even
more concerned about residuals that exceed +2.5 or -2.5 and even yet more concerned about
residuals that exceed +3 or -3. These results show that DC and MS are the most
worrisome observations followed by FL.

Let's show all of the variables in our regression where the studentized residual
exceeds +2 or -2, i.e., where the absolute value of the residual exceeds 2. We see
the data for the three potential outliers we identified, namely Florida, Mississippi and
Washington D.C. Looking carefully at these three observations, we couldn't find any data
entry error, though we may want to do another regression analysis with an extreme point
such as DC deleted. We will return to this issue later.
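This listing can be produced as follows:

list r crime pctmetro poverty single if abs(r) > 2

Leverage can be obtained with predict as well. A minimal sketch, using the (2k+2)/n cut-off from the rule-of-thumb table later in this section (here k = 3 predictors and n = 51 observations):

predict lev, leverage

list state lev if lev > (2*3+2)/51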

As we have seen, DC is an observation that both has a large residual and large
leverage. Such points are potentially the most influential. We can make a plot
that shows the leverage by the residual squared and look for observations that are jointly
high on both of these measures. We can do this using the lvr2plot command.
lvr2plot stands for leverage versus residual squared plot. Using residual
squared instead of residual itself, the graph is restricted to the first
quadrant and the relative positions of data points are preserved. This is a quick way of checking potential influential observations and outliers at the
same time. Both types of points are of great concern for us.

lvr2plot, mlabel(state)

The two reference lines are the means for leverage (horizontal) and for the normalized
residual squared (vertical). The points that immediately catch our attention are DC (with the
largest leverage) and MS (with the largest residual squared). We'll look at those
observations more carefully by listing them.
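For example:

list state crime pctmetro poverty single if state=="dc" | state=="ms"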

Now let's move on to overall measures of influence, specifically let's look at Cook's D
and DFITS. These measures both combine information on the residual and leverage.
Cook's D and DFITS are very similar except that they scale differently but they give us
similar answers.

The lowest value that Cook's D can assume is zero, and the higher the Cook's D, the
more influential the point. The conventional cut-off point is 4/n. We can list any
observation above the cut-off point by doing the following. We do see that the Cook's
D for DC is by far the largest.
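A sketch, with n = 51 states:

predict d, cooksd

list crime pctmetro poverty single state d if d > 4/51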

Now let's take a look at DFITS. The cut-off point for DFITS is 2*sqrt(k/n).
DFITS can be either positive or negative, with numbers close to zero corresponding to the
points with small or zero influence. As we see, dfit also indicates that DC is, by
far, the most influential observation.
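A sketch, with k = 3 predictors and n = 51 observations:

predict dfit, dfits

list crime pctmetro poverty single state dfit if abs(dfit) > 2*sqrt(3/51)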

The above measures are general measures of influence. You can also consider more
specific measures of influence that assess how each coefficient is changed by deleting
the observation. This measure is called DFBETA and is created for each of
the predictors. This is more computationally intensive than a summary
statistic such as Cook's D because the more predictors a model has, the more
computation it involves. We can restrict our attention to only those
predictors that we are most concerned with to see how well behaved
those predictors are. In Stata, the dfbeta command will produce the DFBETAs for each of
the predictors. The names for the new variables created are chosen by Stata automatically
and begin with the letters DF.
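For our model, a minimal sketch; the names DFpctmetro, DFpoverty and DFsingle follow the naming pattern just described:

dfbeta

list state DFpctmetro DFpoverty DFsingle in 1/5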

The value of DFsingle for Alaska is .14, which means that by being
included in the analysis (as compared to being excluded), Alaska increases the coefficient for single
by 0.14
standard errors, i.e., .14 times the standard error for the coefficient of single, or by (0.14 *
15.5). Since the inclusion of an observation could contribute to either an
increase or a decrease in a
regression coefficient, DFBETAs can be either positive or negative. A DFBETA value
in excess of 2/sqrt(n) merits further investigation. In this example, we
would be concerned about absolute values in excess of 2/sqrt(51), or .28.

We can plot all three DFBETA values against the state id in one graph shown below. We add
a line at .28 and -.28 to help us see potentially troublesome observations. We see
the largest value is about 3.0 for DFsingle.

We can repeat this graph with the mlabel() option in the graph command to label the
points. With the graph above we can identify which DFBETA is a problem, and with the graph
below we can associate that observation with the state that it originates from.
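Both graphs can be sketched as below, using the DFBETA variables created above (sid is the numeric state id); the second command adds the state labels:

scatter DFpctmetro DFpoverty DFsingle sid, ylabel(-1(.5)3) yline(.28 -.28)

scatter DFpctmetro DFpoverty DFsingle sid, ylabel(-1(.5)3) yline(.28 -.28) mlabel(state state state)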

The following table summarizes the general rules of thumb we use for these
measures to identify observations worthy of further investigation (where k is the number
of predictors and n is the number of observations).

Measure          Value
leverage         > (2k+2)/n
abs(rstu)        > 2
Cook's D         > 4/n
abs(DFITS)       > 2*sqrt(k/n)
abs(DFBETA)      > 2/sqrt(n)

We have used the predict command to create a number of variables associated with
regression analysis and regression diagnostics. The help regress command not only
gives help on the regress command, but also lists all of the statistics that can be
generated via the predict command. Below we show a snippet of the Stata help
file illustrating the various statistics that can be computed via the predict
command.

We have explored a number of the statistics that we can get after the regress
command. There are also several graphs that can be used to search for unusual and
influential observations. The avplot command graphs an added-variable plot.
It is also called a partial-regression plot and is very useful in identifying
influential points. For example, in the avplot for single shown below, the graph
shows crime by single after both crime and single have been
adjusted for all other predictors in the model. The line plotted has the same slope
as the coefficient for single. This plot shows how the observation for DC
influences the coefficient. You can see how the regression line is tugged upwards
trying to fit through the extreme value of DC. Alaska and West Virginia may also
exert substantial leverage on the coefficient of single.

avplot single, mlabel(state)

Stata also has the avplots command that creates an added variable plot for all
of the variables, which can be very useful when you have many variables. It does
produce small graphs, but these graphs can quickly reveal whether you have problematic
observations based on the added variable plots.

avplots

DC has appeared as an outlier as well as an influential point in every analysis.
Since DC is really not a state, we can use this to justify omitting it from the analysis
saying that we really wish to just analyze states. First, let's repeat our analysis
including DC by just typing regress.

Now, let's run the analysis omitting DC by including if state != "dc"
on the regress command (here != stands for "not equal to" but you
could also use ~= to mean the same thing). As we expected, deleting DC made a large
change in the coefficient for single. The coefficient for single dropped
from 132.4 to 89.4. After having deleted DC, we would repeat the process we have
illustrated in this section to search for any other outlying and influential observations.
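A sketch of the two runs (typing regress with no arguments redisplays the previous estimates):

regress

regress crime pctmetro poverty single if state != "dc"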

Finally, we showed that the avplot command can be used to search for outliers
among existing variables in your model, but we should note that the avplot command
not only works for the variables in the model, it also works for variables that are not in
the model, which is why it is called an added-variable plot. Let's use the regression
that includes DC, as we want to continue to see the ill behavior caused by DC as a
demonstration for doing regression diagnostics. We can do an avplot on the variable pctwhite.

regress crime pctmetro poverty single
avplot pctwhite

At the top of the plot, we have "coef=-3.509". This is the coefficient that pctwhite
would have if it were added to the model. We can check that by doing a regression as below.
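For example:

regress crime pctmetro poverty single pctwhite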

In this section, we explored a number of methods of identifying outliers
and influential points. In a typical analysis, you would probably use only some of these
methods. Generally speaking, there are two types of methods for assessing
outliers: statistics such as residuals, leverage, Cook's D and DFITS, that
assess the overall impact of an observation on the regression results, and
statistics such as DFBETA that assess the specific impact of an observation on
the regression coefficients.

In our example, we found that DC was a point of major concern. We
performed a regression with it and without it and the regression equations were very
different. We can justify removing it from our analysis by reasoning that our model
is to predict crime rate for states, not for metropolitan areas.

2.2 Checking Normality of Residuals

Many researchers believe that multiple regression requires normality. This is not the case.
Normality of residuals
is only required for valid hypothesis testing, that is, the normality assumption assures that the
p-values for the t-tests and F-test will be valid. Normality is not required in order to obtain
unbiased estimates of the regression coefficients. OLS regression merely requires that the
residuals (errors) be identically and independently distributed. Furthermore, there is no
assumption or requirement that the predictor variables be normally distributed. If this were the
case, then we would not be able to use dummy coded variables in our models.

After we run a regression analysis, we can use the predict command to create
residuals and then use commands such as kdensity, qnorm and pnorm to
check the normality of the residuals.

Let's use the elemapi2 data file we saw in Chapter 1 for these analyses. Let's predict academic performance (api00) from percent receiving free meals (meals),
percent of English language learners (ell), and percent of teachers with emergency
credentials (emer).
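A minimal sketch, assuming the elemapi2 data are in memory (the residual variable r is used in the plots below):

regress api00 meals ell emer

predict r, resid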

Below we use the kdensity command to produce a kernel density plot with the normal
option requesting that a normal density be overlaid on the plot. kdensity stands
for kernel density estimate. It can be thought of as a histogram with narrow bins
and a moving average.

kdensity r, normal

The pnorm command graphs a standardized normal probability (P-P) plot while qnorm
plots the quantiles of a variable against the quantiles of a normal distribution. pnorm
is sensitive to non-normality in the middle range of data and qnorm is sensitive to
non-normality near the tails. As you see below, the results from pnorm show no
indications of non-normality, while the qnorm command shows a slight deviation from
normal at the upper tail, as can be seen in the kdensity above. Nevertheless,
this seems to be a minor and trivial deviation from normality. We can accept that
the residuals are close to a normal distribution.
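The two commands, applied to our residuals:

pnorm r

qnorm r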

Another check uses the user-written iqr command (you can locate it by typing findit iqr). iqr
stands for inter-quartile range and assumes symmetry of the
distribution. Severe outliers consist of those points that are either 3
inter-quartile-ranges below the first quartile or 3 inter-quartile-ranges above the third
quartile. The presence of any severe outliers should be sufficient evidence to reject
normality at a 5% significance level. Mild outliers are common in samples of any size. In
our case, we don't have any severe outliers and the distribution seems fairly symmetric.
The residuals have an approximately normal distribution.
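Since iqr is user-written, you may need to install it first; a minimal call on the residuals:

iqr r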

Another test available is the swilk test which performs the Shapiro-Wilk W test
for normality. The null hypothesis is that the distribution of the residuals is
normal. In our example the p-value is very large (.51), indicating that we cannot reject the
hypothesis that r is normally distributed.
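The command is run on the residuals:

swilk r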

2.3 Checking Homoscedasticity of Residuals

One of the main assumptions for ordinary least squares regression is the
homogeneity of variance of the residuals. If the model is well-fitted, there should be no
pattern in the residuals plotted against the fitted values. If the variance of the
residuals is non-constant, then the residual variance is said to be
"heteroscedastic." There are graphical and non-graphical methods for detecting
heteroscedasticity. A commonly used graphical method is
to plot the residuals versus fitted (predicted) values. We do this by
issuing the rvfplot command. Below we use the rvfplot
command with the yline(0) option to put a reference line at y=0. We see
that the pattern of the data points is getting a little narrower towards the
right end, which is an indication of heteroscedasticity.

rvfplot, yline(0)

Now let's look at a couple of commands that test for heteroscedasticity.
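We use the older standalone command names here to match the text; in current versions of Stata these are run as estat imtest and estat hettest:

imtest

hettest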

The first test for heteroscedasticity, given by imtest, is White's
test, and the second, given by hettest, is the Breusch-Pagan test. Both
test the null hypothesis that the variance of the residuals is homogeneous.
Therefore, if the p-value is very small, we would have to reject the null hypothesis
and accept the alternative hypothesis that the variance is not homogeneous. So in
this case, the evidence is against the null hypothesis that the variance is
homogeneous. These tests are very sensitive to model assumptions, such as the
assumption of normality. Therefore it is a common practice to combine the tests
with diagnostic plots to make a judgment on the severity of the
heteroscedasticity and to decide if any correction is needed for
heteroscedasticity. In our case, the plot above does not show overly strong
evidence of heteroscedasticity, so we are not going to go into the details of how to
correct for it, even though methods are available.

2.4 Checking for Multicollinearity

When there is a perfect linear relationship among the predictors, the estimates for a
regression model cannot be uniquely computed. The term collinearity implies that two
variables are near perfect linear combinations of one another. When more than two
variables are involved it is often called multicollinearity, although the two terms are
often used interchangeably.

The primary concern is that as the degree of multicollinearity increases, the
regression model estimates of the coefficients become unstable and the standard errors for
the coefficients can get wildly inflated. In this section, we will explore some Stata
commands that help to detect multicollinearity.

We can use the vif command after the regression to check for multicollinearity. vif
stands for variance inflation factor. As a rule of thumb, a variable whose VIF
value is greater than 10 may merit further investigation. Tolerance, defined as 1/VIF, is
used by many researchers to check on the degree of collinearity. A tolerance value lower
than 0.1 is comparable to a VIF of 10. It means that the variable could be considered a
linear combination of other independent variables. Let's first look at the regression we
did from the last section, the regression model predicting api00 from meals, ell
and emer and then issue the vif command.
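For example:

regress api00 meals ell emer

vif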

The VIF values for that model turn out to be fine. Consider instead an example that
includes several measures of the parents' education, such as avg_ed, grad_sch
and col_grad. There, the VIF and tolerance (1/VIF) values for avg_ed, grad_sch
and col_grad are worrisome. All of these variables measure education of the
parents, and the very high VIF values indicate that these variables are possibly
redundant. For example, once you know grad_sch and col_grad, you
can probably predict avg_ed very well. In this example, multicollinearity
arises because we have put in too many variables that measure the same thing: parent
education.
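A sketch of such a model (the exact covariate list is an assumption; a plausible version also includes acs_k3 and some_col):

regress api00 acs_k3 avg_ed grad_sch col_grad some_col

vif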

Let's omit one of the parent education variables, avg_ed. Note that the
VIF values in the analysis below appear much better. Also, note how the standard
errors are reduced for the parent education variables, grad_sch and col_grad.
This is because the high degree of collinearity caused the standard errors to be inflated.
With the multicollinearity eliminated, the coefficient for grad_sch, which
had been non-significant, is now significant.
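A sketch of the re-run without avg_ed:

regress api00 acs_k3 grad_sch col_grad some_col

vif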

Let's introduce another command on collinearity. The collin command displays
several different measures of collinearity. For example, we can test for collinearity
among the variables we used in the two examples above. Note that the collin
command does not need to be run in connection with a regress command, unlike the vif
command, which follows a regress command. Also note that only predictor
(independent) variables are used with the collin command. You can download
collin from within Stata by
typing findit collin (see
How can I use the findit command to search for programs and get additional
help? for more information about using findit).
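For example, for the parent-education model sketched above (the variable list is an assumption):

collin acs_k3 avg_ed grad_sch col_grad some_col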

The condition number is a commonly used index of the global instability of the
regression coefficients -- a large condition number, 10 or more, is an indication of
instability.

2.5 Checking Linearity

When we do linear regression, we assume that the relationship between the response
variable and the predictors is linear. This is the assumption of linearity. If this
assumption is violated, the linear regression will try to fit a straight line to data that
does not follow a straight line. Checking the linearity assumption in the case of simple
regression is straightforward, since we only have one predictor. All we have to do is make a
scatter plot of the response variable against the predictor to see if nonlinearity is
present, such as a curved band or a big wave-shaped curve. For example, recall we did a
simple linear regression in Chapter 1 using dataset elemapi2.

Below we use the scatter command to show a scatterplot
predicting api00 from enroll and use lfit to show a linear
fit, and then lowess to show a lowess smoother predicting api00
from enroll. We clearly see some
degree of nonlinearity.
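The three plots can be combined into a single graph:

twoway (scatter api00 enroll) (lfit api00 enroll) (lowess api00 enroll)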

Checking the linearity assumption is not so straightforward in the case of multiple
regression. We will try to illustrate some of the techniques that you can use. The most
straightforward thing to do is to plot the standardized residuals against each of the
predictor variables in the regression model. If there is a clear nonlinear pattern, there
is a problem of nonlinearity. Otherwise, we should see for each of the plots just a random
scatter of points. Let's continue to use dataset elemapi2 here. Let's use a
different model.
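A minimal sketch, assuming a model predicting api00 from meals and some_col (the two predictors whose plots are discussed next); the variable name rstd is ours:

regress api00 meals some_col

predict rstd, rstandard

scatter rstd meals

scatter rstd some_col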

The two residual-versus-predictor plots above do not strongly indicate a clear
departure from linearity. Another command for detecting non-linearity is acprplot. acprplot
graphs an augmented component-plus-residual plot, a.k.a. augmented partial residual plot.
It can be used to identify nonlinearities in the data. Let's use the acprplot
command for meals and some_col and use the lowess lsopts(bwidth(1))
options to request lowess smoothing with a bandwidth of 1.
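That is:

acprplot meals, lowess lsopts(bwidth(1))

acprplot some_col, lowess lsopts(bwidth(1))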

In the first plot below the smoothed line is very close to the ordinary regression
line, and the entire pattern seems pretty uniform. The second plot does seem more
problematic at the right end. This may come from some potential influential points.
Overall, they don't look too bad and we shouldn't be too concerned about non-linearities
in the data.

We have seen how to use acprplot to detect nonlinearity. However our last
example didn't show much nonlinearity. Let's look at a more interesting example. This
example is taken from "Statistics with Stata 5" by Lawrence C. Hamilton (1997,
Duxbury Press). The dataset we will use is called nations.dta. We can get the
dataset from the Internet.

Let's build a model that predicts birth rate (birth), from per capita gross
national product (gnpcap), and urban population (urban). If this were a
complete regression analysis, we would start with examining the variables, but for the
purpose of illustrating nonlinearity, we will jump directly to the regression.
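A sketch of the regression and the two diagnostic plots, assuming nations.dta is loaded:

regress birth gnpcap urban

acprplot gnpcap, lowess

acprplot urban, lowess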

The acprplot plot for gnpcap shows clear deviation from linearity and the
one for urban does not show nearly as much deviation from linearity. Now, let's
look at these variables more closely.

graph matrix birth gnpcap urban, half

We see that the relation between birth rate and per capita gross national product is
clearly nonlinear and the relation between birth rate and urban population is not too far
off from being linear. So let's focus on variable gnpcap. First let's look at the
distribution of gnpcap. We suspect that gnpcap may be very skewed. This may
affect the appearance of the acprplot.

kdensity gnpcap, normal

Indeed, it is very skewed. This suggests to us that some transformation of the variable
may be necessary. One of the commonly used transformations is log transformation. Let's try
it here.
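A sketch, using the natural log and the variable name lggnp of our choosing:

generate lggnp = log(gnpcap)

regress birth lggnp urban

acprplot lggnp, lowess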

The plot above now shows much less deviation from linearity than before, though the problem
of nonlinearity has not been completely solved yet.

2.6 Model Specification

A model specification error can occur when one or more relevant variables are omitted
from the model or one or more irrelevant variables are included in the model. If relevant
variables are omitted from the model, the common variance they share with included
variables may be wrongly attributed to those variables, and the error term is inflated. On
the other hand, if irrelevant variables are included in the model, the common variance
they share with included variables may be wrongly attributed to them. Model specification
errors can substantially affect the estimate of regression coefficients.

Consider the model below. This regression suggests that as class size increases the
academic performance increases. Before we publish results saying that increased class size
is associated with higher academic performance, let's check the model specification.
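The model in question, assuming class size is the variable acs_k3 referred to later in this section:

regress api00 acs_k3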

There are a couple of methods to detect specification errors. The linktest command performs a model specification link test for
single-equation models. linktest is based on the idea that if a regression is
properly specified, one should not be able to find any additional independent variables
that are significant except by chance. linktest creates two new variables, the
variable of prediction, _hat, and the variable of squared prediction, _hatsq.
The model is then refit using these two variables as predictors. _hat
should be significant, since it is the predicted value. On the other hand, _hatsq
shouldn't be, because if our model is specified correctly, the squared predictions should not have much
explanatory power. That is, we wouldn't expect _hatsq to be a
significant predictor if our model is specified correctly. So we will be looking at the p-value for _hatsq.
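We run it right after the regression:

linktest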

From the above linktest, the test of _hatsq is not significant.
This is to say that linktest has failed to reject the assumption that the model
is specified correctly. Therefore, it seems to us that we don't have a
specification error. But now, let's look at another test before we jump to the
conclusion.

The ovtest command performs another test of regression model specification. It
performs a regression specification error test (RESET) for omitted variables.
The idea behind ovtest is very similar to linktest. It also
creates new variables based on the predictors and refits the model using those
new variables to see if any of them would be significant. Let's try ovtest
on our model.
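Again, right after the regression:

ovtest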

The ovtest command indicates that there are omitted variables. So we
have tried both the linktest and ovtest, and one of them (ovtest)
tells us that we have a specification error. We therefore have to
reconsider our model.

Let's try adding the variable full to the model. Now, both the linktest
and ovtest are significant, indicating that we still have a specification error. Let's
add one more variable, meals, to the model, as sketched below.
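A sketch of the sequence, building on the api00/acs_k3 model above:

regress api00 acs_k3 full

linktest

ovtest

regress api00 acs_k3 full meals

linktest

ovtest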

With meals added, the linktest is once again non-significant, while the p-value for ovtest
is slightly greater than .05. Note that after including meals and full, the
coefficient for class size is no longer significant. While acs_k3 does have a
positive relationship with api00 when no other variables are in the model, when we
include, and hence control for, other important variables, acs_k3 is no
longer significantly related to api00 and its relationship to api00
is no longer positive.

linktest and ovtest are tools available in Stata for checking
specification errors, though linktest can actually do more than check for
omitted variables as we have used it here, e.g., checking the correctness of the link
function specification. For more details on these tests, please refer to the Stata
manual.

2.7 Issues of Independence

The statement of this assumption, that the errors associated with one observation are not
correlated with the errors of any other observation, covers several different situations.
Consider the case of collecting data from students in eight different elementary schools. It is
likely that the students within each school will tend to be more like one another
than students
from different schools; that is, their errors are not independent. We will deal with this type
of situation in Chapter 4 when we demonstrate the regress command with the cluster option.

Another way in which the assumption of independence can be broken is when data are collected on the
same variables over time. Let's say that we collect truancy data every semester for 12 years. In
this situation it is likely that the errors for observation between adjacent semesters will be
more highly correlated than for observations more separated in time. This is known as
autocorrelation. When you have data that can be considered to be time-series, you should use
the dwstat command, which performs a Durbin-Watson test for correlated residuals.

We don't have any time-series data, so we will use the elemapi2 dataset and
pretend that snum indicates the time at which the data were collected. We will also need to
use the tsset command to let Stata know which variable is the time variable.
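A sketch:

tsset snum

dwstat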

The Durbin-Watson statistic has a range from 0 to 4 with a midpoint of 2. The observed value in
our example is very small, close to zero, which is not surprising since our data are not truly
time-series. A simple visual check would be to plot the residuals versus the time variable.

predict r, resid

scatter r snum

2.8 Summary

In this chapter, we have used a number of tools in Stata for determining whether our
data meet the regression assumptions. Below, we list the major commands we demonstrated,
organized according to the assumption each command was shown to test.

Detecting Unusual and Influential Data

predict -- used to create predicted values, residuals, and measures of influence.

rvpplot -- graphs a residual-versus-predictor plot.

rvfplot -- graphs a residual-versus-fitted plot.

lvr2plot -- graphs a leverage-versus-squared-residual plot.

dfbeta -- calculates DFBETAs for all the independent variables in the linear
model.

2.9 Self Assessment

1. The following data set consists of measured weight, measured height,
reported weight and reported height of some 200 people. You can get it from
within Stata by typing use http://www.ats.ucla.edu/stat/stata/webbooks/reg/davis
We tried to build a model to predict measured weight by reported weight, reported height and measured height. We did an lvr2plot after the regression and here is what we have. Explain what you see in the graph and try to use other Stata commands to identify the problematic observation(s). What do you think the problem is and
what is your solution?

2. Using the data from the last exercise, what measure would you use if
you want to know how much change an observation would make on a coefficient
for a predictor? For
example, show how much change there would be for the coefficient of the predictor reptht
if we omit observation 12 from our regression analysis. What are the other
measures that you would use to assess the influence of an observation on
regression? What are the cut-off values for them?

3. The following data file is
called bbwt.dta and it is from Weisberg's Applied Regression Analysis.
You can obtain it from within Stata by typing use http://www.ats.ucla.edu/stat/stata/webbooks/reg/bbwt
It consists of the body weights and brain weights of some 60 animals. We want to predict the brain weight by body
weight, that is, a simple linear regression of brain weight against body
weight. Show what you have to do to verify the linearity assumption.
If you think that it violates the linearity assumption, show some possible remedies that you
would consider.

4. We did a regression analysis using the data file elemapi2 in chapter 2. Continuing with the analysis we did, we did an avplot
here. Explain what an avplot is and what type of information you would
get from the plot. If variable full were put in the model, would it be a
significant predictor?

5. The data set wage.dta is from a national sample of 6000 households
with a male head earning less than $15,000 annually in 1966. You can get this
data file by typing use http://www.ats.ucla.edu/stat/stata/webbooks/reg/wage from
within Stata. The data were classified
into 39 demographic groups for analysis.
We tried to predict the average hours worked by average age of respondent and average yearly non-earned income.

6. Continue to use the previous data set.
This time we want to predict the average hourly wage by average percent of white
respondents.
Carry out the regression analysis and list the Stata commands that you can use to check for
heteroscedasticity. Explain the result of your test(s).

Now we want to build another model to predict the average percent of white respondents
by the average hours worked. Repeat the analysis you performed on the previous regression model.
Explain your results.

7. We have a data set that consists of volume, diameter and height
of some objects. Someone did a regression of volume on diameter and height.