We have had a brief introduction to logistic regression in R with a single
numerical predictor; now what if we have multiple numerical predictors?

We are using the following dataset:
Seen: coded as 0=no and 1=yes.
C: score derived from naming colors.
W: score derived from reading a list of color words.
CW: the Stroop task; score derived from the subject's attempt to name the ink
color of conflicting color words.

The Stroop scale scores are moderately positively correlated with each other,
but none of them appears to be related to the "seen" response variable, at
least not to any impressive extent. There doesn't appear to be much here to
look at. Let's have a go at it anyway.

Since the response is a binomial variable, a logistic regression can be done
as follows...
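A sketch of what that call might look like (the data frame name `stroop` and
the model object name `model` are assumptions here, but the formula is
consistent with the 7 degrees of freedom reported below: C * W * CW expands to
three main effects, three two-way interactions, and one three-way interaction):

> model = glm(Seen ~ C * W * CW, family=binomial, data=stroop)
> summary(model)
> anova(model, test="Chisq")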

Two different extractor functions have been used to see the results of our
analysis. What do they mean?

The first gives us what amount to regression coefficients with standard
errors and a z-test, as we saw in the single variable example above. None of the
coefficients are significantly different from zero (but a few are close). The
deviance was reduced by 8.157 points on 7 degrees of freedom, for a p-value
of...

> 1 - pchisq(8.157, df=7)
[1] 0.3189537
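The same numbers can be pulled straight from the fitted model object (again
assuming it is named `model`; a glm fit stores its null and residual deviances
and their degrees of freedom as components), and pchisq() will give the upper
tail directly:

> with(model, null.deviance - deviance)    # the drop in deviance
> with(model, df.null - df.residual)       # degrees of freedom for the drop
> pchisq(8.157, df=7, lower.tail=FALSE)
[1] 0.3189537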

Overall, the model appears to have performed poorly, showing no significant
reduction in deviance (no significant difference from the null model).

The second printout shows the same overall reduction in deviance, from
65.438 to 57.281 on 7 degrees of freedom. In this printout, however, the
reduction in deviance is shown term by term, with terms added sequentially, first to last.
Of note is the three-way interaction term, which produced a nearly significant
reduction in deviance of 3.305 on 1 degree of freedom (p = 0.069).
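Because the three-way term enters last in the sequential table, the same test
can be obtained by comparing the full model against one with that term dropped.
A sketch, assuming the model object is named `model` as above:

> reduced = update(model, . ~ . - C:W:CW)
> anova(reduced, model, test="Chisq")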

In the event you are encouraged by any of this, the following graph might be
revealing...
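One simple possibility (the original graph is not reproduced here, so this is
only a sketch, again assuming the data frame `stroop` and model object `model`):
plot the observed 0/1 responses against the Stroop (CW) score and overlay the
model's fitted probabilities.

> plot(Seen ~ CW, data=stroop)
> points(stroop$CW, fitted(model), pch=19)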