Welcome to the Institute for Digital Research and Education

FAQ
What is complete or quasi-complete separation in logistic/probit regression and
how do we deal with it?

Occasionally when running a logistic/probit
regression we run into the problem of so-called complete separation or
quasi-complete separation. On this page, we will discuss what complete or
quasi-complete separation is and how to deal with the problem when it occurs.

Note that the example data sets used on this page are extremely small and are
for the purpose of illustration only.

What is complete separation and what do some of the most commonly used
software packages do when it happens?

Complete separation happens when the outcome variable separates a predictor
variable or a combination of predictor variables completely. Albert and Anderson (1984) define this
condition as existing when "there is a vector α that correctly allocates all observations to their group." Below is a small example.

Y X1 X2
0 1 3
0 2 2
0 3 -1
0 3 -1
1 5 2
1 6 4
1 10 1
1 11 0

In this example, Y is the outcome variable, and X1 and X2 are predictor variables. We can see that
observations with Y = 0 all have values of X1 <= 3, and observations with
Y = 1 all have values of X1 > 3.
In other words, Y separates X1 perfectly. Another way to see it is
that X1 predicts Y perfectly, since X1 <= 3 corresponds to Y
= 0 and X1 > 3 corresponds to Y = 1. By chance, we have found a perfect predictor X1 for
the outcome variable Y. In terms of predicted probabilities, we have Prob(Y
= 1 | X1 <= 3) = 0 and Prob(Y = 1 | X1 > 3) = 1, without the need for estimating a
model.
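The perfect prediction described above is easy to check directly. The short Python sketch below (the page's own examples use SAS; Python is used here purely for illustration) encodes the table above and confirms that the simple rule X1 > 3 reproduces Y exactly:

```python
# Rows (y, x1, x2) from the example table above.
data = [
    (0, 1, 3), (0, 2, 2), (0, 3, -1), (0, 3, -1),
    (1, 5, 2), (1, 6, 4), (1, 10, 1), (1, 11, 0),
]

# Predict y = 1 whenever x1 > 3; under complete separation this rule
# classifies every observation correctly, with no model fitting at all.
predictions = [1 if x1 > 3 else 0 for (_, x1, _) in data]
correct = all(pred == y for pred, (y, _, _) in zip(predictions, data))
print(correct)  # True: X1 predicts Y perfectly
```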

Complete separation or perfect prediction can occur for several reasons. One common
example arises when several categorical predictors have categories coded by indicator variables.
For example, if one is studying an age-related disease (present/absent) and age is one of the
predictors, there may be subgroups (e.g., women over 55) all of whom have the disease.
Complete separation may also occur if there is a coding error, or if you mistakenly included
another version of the outcome as a predictor. For example, we might have dichotomized a
continuous variable X into a binary variable Y and then wanted to study the
relationship between Y and some predictor variables. If we included X as a
predictor variable, we would run into the problem of perfect prediction, since
by definition Y separates X completely. Another scenario in which
complete separation can happen is when the sample size is very small. In our
example data above, there is no structural reason why Y has to be 0 when X1 is
<= 3. If the sample were large enough, we would probably have some observations
with Y = 1 and X1 <= 3, breaking up the complete separation on X1.

What happens when we try to fit a logistic
or a probit regression model of Y on X1 and X2? Mathematically, the maximum
likelihood estimate for X1 does not exist. In particular, with this example, the
larger the coefficient for X1, the larger the likelihood. In other words, the
coefficient for X1 should be as large as it can be, which would be infinity!
In terms of the behavior of statistical software packages, below is what SAS (version
9.2), SPSS (version 18), Stata
(version 11) and R (version 2.11.1) do when we run the model on the sample data. We present these results in the
hope that understanding how logistic/probit regression behaves under separation
in our familiar software package will help us identify
the problem of complete separation more quickly.
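The claim that the likelihood keeps growing with the X1 coefficient can be checked numerically. The Python sketch below is illustrative only: it uses a simplified one-parameter model P(Y = 1) = sigmoid(b * (x1 - 4)), where the cut point 4 (which lies between the separated groups) and the reparameterization are our own choices, not the full model of Y on X1 and X2 that the packages fit.

```python
import math

data = [(0, 1), (0, 2), (0, 3), (0, 3), (1, 5), (1, 6), (1, 10), (1, 11)]  # (y, x1)

def loglik(b, cut=4.0):
    """Log-likelihood of the simplified model P(Y = 1) = sigmoid(b * (x1 - cut))."""
    total = 0.0
    for y, x1 in data:
        p = 1.0 / (1.0 + math.exp(-b * (x1 - cut)))
        total += math.log(p) if y == 1 else math.log(1.0 - p)
    return total

lls = [loglik(b) for b in (0.5, 1, 2, 5, 10)]
print(lls)
# The log-likelihood keeps rising toward 0 as b grows: the MLE diverges.
assert all(a < b for a, b in zip(lls, lls[1:]))
```

Because every y = 0 case sits strictly below the cut and every y = 1 case strictly above it, each term of the log-likelihood increases with b, so no finite coefficient can maximize it.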

We can see that the first related message is that SAS detected complete
separation of the data points; it then gives further warning messages indicating
that the maximum likelihood estimate does not exist, yet it continues and finishes the
computation. Also notice that SAS does not tell us which variable or
variables are completely separated by the outcome variable, and that the
parameter estimate for X1 is incorrect.

We see that SPSS detects a perfect fit and immediately stops the rest of the
computation. It does not provide any parameter estimates. Neither does it provide us
with any further information on the set of variables that gives the perfect fit.

The only warning message that R gives comes right after fitting the logistic model. It
says that "fitted probabilities numerically 0 or 1 occurred".
Combining this piece of information with the parameter estimate for x1 being
very large (> 15), we suspect a problem of complete or quasi-complete separation. The standard errors
for the parameter estimates are also far too large, which
usually indicates a convergence issue or some degree of data separation.
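R's warning is driven by fitted probabilities that are numerically indistinguishable from 0 or 1. The sketch below mimics that check in Python on a hypothetical vector of fitted values; the tolerance of 10 machine epsilons mirrors, to the best of our knowledge, the one R's glm uses internally.

```python
import sys

# Hypothetical fitted probabilities from a logistic fit; the last two are
# numerically indistinguishable from 1, as a diverging fit would produce.
fitted = [0.02, 0.35, 0.80, 1 - 1e-16, 1.0]

eps = 10 * sys.float_info.epsilon  # tolerance assumed to match R's glm check
flagged = [p for p in fitted if p < eps or p > 1 - eps]
print(len(flagged))  # 2 fitted values numerically 0 or 1 -> check for separation
```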

What is quasi-complete separation and what do some of the most commonly used
software packages do when it happens?

Quasi-complete separation in a logistic/probit regression happens when the outcome
variable separates a predictor variable or a combination of predictor variables
to a certain degree, but not completely. Here is an example.
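The data below are a hypothetical (y, x1) set, not the page's original example, constructed to match the description that follows: ten observations in which x1 predicts y perfectly except at x1 = 3, where both outcomes occur. The sketch confirms where the overlap is.

```python
# Hypothetical (y, x1) pairs with quasi-complete separation at x1 = 3.
data = [
    (0, 1), (0, 2), (0, 3), (0, 3),
    (1, 3), (1, 4), (1, 5), (1, 6), (1, 10), (1, 11),
]

# x1 values that appear with both outcomes -- the reason the separation is
# only quasi-complete rather than complete.
x1_with_y0 = {x1 for y, x1 in data if y == 0}
x1_with_y1 = {x1 for y, x1 in data if y == 1}
print(sorted(x1_with_y0 & x1_with_y1))  # [3]
```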

What happens when we try to fit a logistic or a probit regression model of Y on X1 and X2
using the data above? It turns out that the maximum likelihood estimate for X1
again does not exist. With this example, the larger the parameter for X1, the larger
the likelihood. In practice,
however, coefficient values of about 15 or larger make little difference: they all
correspond to a predicted probability of essentially 1. Statistical
software packages differ in how they deal with the issue of quasi-complete
separation. Below is what each of SAS, SPSS, Stata
and R does with our sample data and the logistic regression model of Y on X1 and
X2. As before, we present these results in the
hope that understanding how logistic/probit regression behaves
within our familiar software package will help us identify
the problem of separation more quickly.

We see that SAS used all 10 observations and gave warnings at various
points. It informed us that it detected quasi-complete separation of the data
points. It is worth noticing that neither the parameter estimate for X1 nor
the one for the intercept means much at all.

Stata detected that there was quasi-complete separation and informed us which
predictor variable was part of the issue. It tells us that the predictor variable x1
predicts the outcome perfectly except when x1 = 3. It therefore drops all the cases
in which x1 predicts the outcome variable perfectly, keeping only the three
observations with x1 = 3. Since x1 is then a constant (= 3) in this small remaining
sample, it is also dropped from the analysis.
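Stata's behavior can be sketched as follows, again using a hypothetical quasi-separated data set with the properties described above (ten observations, mixed outcomes only at x1 = 3): keep only the cases whose x1 value is compatible with both outcomes, then note that x1 becomes constant.

```python
# Hypothetical (y, x1) pairs with quasi-complete separation at x1 = 3.
data = [
    (0, 1), (0, 2), (0, 3), (0, 3),
    (1, 3), (1, 4), (1, 5), (1, 6), (1, 10), (1, 11),
]

# x1 values seen with both outcomes; everywhere else x1 predicts y perfectly.
overlap = {x1 for y, x1 in data if y == 0} & {x1 for y, x1 in data if y == 1}

# Mimic Stata: drop every case where x1 predicts y perfectly ...
kept = [(y, x1) for y, x1 in data if x1 in overlap]
print(len(kept))  # 3 observations remain, all with x1 = 3

# ... and since x1 is constant in the remaining sample, it would then be
# dropped from the model as well.
assert len({x1 for _, x1 in kept}) == 1
```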

SPSS iterated up to its default maximum number of iterations, could not
reach a solution, and stopped the iteration process. It did not tell us
anything about quasi-complete separation, so it is up to us to figure out why
the computation did not converge. One obvious piece of evidence in this example is the
large magnitude of the
parameter estimate for x1: it is very large, and its standard error is even
larger. Based on this evidence, we should look at the relationship
between the outcome variable y and x1. For instance, we can take a look at
the cross tabulation of x1 by y as follows.
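Such a cross tabulation can be produced in any package; as a language-neutral illustration, here is a small Python sketch that tabulates the hypothetical quasi-separated data used above (ten observations, mixed outcomes only at x1 = 3):

```python
from collections import Counter

# Hypothetical (y, x1) pairs with quasi-complete separation at x1 = 3.
data = [
    (0, 1), (0, 2), (0, 3), (0, 3),
    (1, 3), (1, 4), (1, 5), (1, 6), (1, 10), (1, 11),
]

counts = Counter((x1, y) for y, x1 in data)
x1_values = sorted({x1 for _, x1 in data})

# Print a cross tabulation of x1 by y.
print("x1   y=0  y=1")
for x1 in x1_values:
    print(f"{x1:<4} {counts[(x1, 0)]:<4} {counts[(x1, 1)]:<4}")
# Only the x1 = 3 row has nonzero counts in both columns: y varies there
# and nowhere else, the signature of quasi-complete separation.
```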

Visual inspection reveals that there is a problem of quasi-complete
separation involving x1. In practice, this process of identifying the issue can be
lengthy, since
multiple predictor variables may be involved.

The only warning we get from R is right after the glm command about
predicted probabilities being 0 or 1. From the parameter estimates we can see
that the coefficient for x1 is very large and its standard error is even larger,
an indication that the model might have some issues with x1. Based on this piece
of evidence, we should look at the relationship between the outcome variable y
and x1 descriptively as shown below. Visual inspection tells us that there is a problem with
quasi-complete separation involving variable x1.

What are the techniques for dealing with complete separation or quasi-complete separation?

Now that we have some understanding of what complete or quasi-complete separation
is, an immediate question is what techniques are available for dealing with it.
We will give a brief, general description of a few techniques for dealing
with the issue, with illustrative sample code in SAS. Note that these techniques
may also be available in other packages, for example through Stata's user-written
firthlogit command. Let's say that the
predictor variable involved in the complete or quasi-complete separation is called X.

In the case of complete separation, first make sure that the outcome variable
is not simply a dichotomized version of a variable already in the model.

If it is quasi-complete separation, the easiest strategy is the "do nothing" strategy, because the maximum likelihood estimates for the other predictor variables are
still valid. The drawback is that we do not get any reasonable estimate for
the variable X, the one that actually predicts the outcome variable effectively.
This strategy does not work well for the situation of complete separation.

Another simple strategy is to not include X in the model. The problem is
that this leads to biased estimates for the other predictor variables in
the model. Thus, this is not a recommended strategy.

If X is a categorical variable, we might be able to collapse some of its
categories, provided doing so makes substantive sense.

Exact logistic regression is a good strategy when the data set is small and
the model is not very large.

Firth logistic regression is another good strategy. It uses a penalized maximum
likelihood estimation method, and the Firth bias correction is considered an
ideal solution to the separation issue
for logistic regression. For more information on logistic regression with the
Firth bias correction, we refer our readers to the article by Georg
Heinze and Michael Schemper.

proc logistic data = t2 descending;
model y = x1 x2 /firth;
run;

A Bayesian method can be used when we have some additional information on the
parameter estimates of the predictor variables.