Multiple Regression as Canonical Correlations

While I was cleaning up some files on my computer, I found an old homework problem set where I just casually wrote in one answer "it can be trivially shown that a Canonical Correlation analysis with only one dependent variable reduces to a Multiple Regression analysis". Scatter-minded student that I was, I never really bothered to show it. But now that I have a little more time to work on these things, I think it would be interesting to highlight some of the connections between the univariate and multivariate versions of the General Linear Model, and how things reduce naturally to the traditional methods we all know and love when you jump back from multiple dimensions to one.

I’m assuming people who are reading this are familiar with both Multiple Regression and Canonical Correlation Analysis, but perhaps not with how they are connected. Now, at the core of Canonical Correlation Analysis are, of course, the pairs of Canonical Variates and their respective Canonical Correlations. Consider the case of two data matrices $X$ and $Y$. We’re not necessarily referring to $X$ as the predictors and $Y$ as the criterion. Just as with correlations, each one can take whichever role the researcher prefers. Finding the Canonical Variates and Canonical Correlations is equivalent to finding the eigendecomposition of the following matrix of correlations:

$$\mathbf{R}_{xx}^{-1}\,\mathbf{R}_{xy}\,\mathbf{R}_{yy}^{-1}\,\mathbf{R}_{yx}$$

Where $\mathbf{R}_{xx}$ is the correlation matrix of the variables we decided to call $X$, $\mathbf{R}_{yy}$ is the correlation matrix of the variables we decided to call $Y$, and $\mathbf{R}_{xy}$, $\mathbf{R}_{yx}$ are the cross-correlation matrices with elements of $X$ correlating with elements of $Y$. The derivation of this matrix can be found all over the place. Wikipedia has a very nice derivation of it but for some reason I find this one much clearer. Now compare it with the formula for R-squared in Multiple Regression:

$$R^2 = \mathbf{r}_{xy}^{\top}\,\mathbf{R}_{xx}^{-1}\,\mathbf{r}_{xy}$$

Where $\mathbf{R}_{xx}$ is the correlation matrix of the predictors in a Multiple Regression setting and $\mathbf{r}_{xy}$ is the vector of correlations that each predictor has with the criterion variable.

Notice the similarities? In Multiple Regression we only have one vector-valued (*not* matrix-valued) variable $\mathbf{y}$. And since we’re working with correlation matrices (so variables are standardized), the matrix $\mathbf{R}_{yy}$ becomes 1, because it is just the variance of the standardized $\mathbf{y}$. Also, $\mathbf{R}_{xy}$ and $\mathbf{R}_{yx}$ are now *vectors* as opposed to matrices because, again, we only have one variable $\mathbf{y}$. When you reduce the dimensionality of $Y$ back to 1, the matrices needed to solve for the Canonical Correlations become the matrix and vector needed to solve for R-squared. Let’s try it with an example:
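The post presumably worked this example in R; here is a NumPy sketch of the same idea on hypothetical synthetic data (three predictors, one criterion, all names my own), showing that the one-dependent-variable CCA eigenproblem collapses to the R-squared of an ordinary least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 3 predictors X and one criterion y
n = 500
X = rng.standard_normal((n, 3))
y = X @ np.array([0.5, 0.3, 0.2]) + rng.standard_normal(n)

# Correlation blocks of the joint matrix [X, y]
R = np.corrcoef(np.column_stack([X, y]), rowvar=False)
Rxx, rxy = R[:3, :3], R[:3, 3]

# Univariate-Y case: Ryy = 1, so the CCA eigenproblem collapses to the
# scalar  r_xy' Rxx^{-1} r_xy  -- no eigendecomposition left to do
cca_eigenvalue = rxy @ np.linalg.solve(Rxx, rxy)

# Classical R-squared from least squares on standardized variables
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
resid = ys - Xs @ b
r_squared = 1 - (resid @ resid) / (ys @ ys)

print(cca_eigenvalue, r_squared)  # identical up to rounding
```

The only nonzero "canonical correlation" squared and the regression R-squared are the same number, as the matrix algebra above promises.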

Pretty cool, huh? Now, what about the coefficients? Well, those are slightly trickier. You see, although the problems tackled by Canonical Correlation Analysis and Multiple Regression are related, they are not exactly the same. That’s why they scale their coefficients differently. For instance, have a look at how the Multiple Regression coefficients compare to the Canonical Correlation ones:

Yeah, not even close. But this really is just a matter of scaling more than anything else, so, with the proper scaling constant, you can find the relationship between the Canonical Correlation coefficients (aka “loadings”) and the Multiple Regression ones. A very useful package for this is ‘matlib’: consider the set of Canonical coefficients as linear equations related to the set of Multiple Regression ones. In more traditional linear algebra terms, we are solving the equation $\mathbf{a}\,k = \boldsymbol{\beta}$, where $\mathbf{a}$ are the Canonical loadings and $\boldsymbol{\beta}$ are the Multiple Regression ones. So something like this:
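The post points at R’s ‘matlib’ package here; as a stand-in, the same solve can be sketched in NumPy with a least-squares call on the hypothetical data used above. Under the unit-variance normalization assumed in the previous block, the scaling constant that links the two coefficient vectors turns out to be the canonical (= multiple) correlation itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.standard_normal((n, 3))
y = X @ np.array([0.5, 0.3, 0.2]) + rng.standard_normal(n)

R = np.corrcoef(np.column_stack([X, y]), rowvar=False)
Rxx, rxy = R[:3, :3], R[:3, 3]

beta = np.linalg.solve(Rxx, rxy)           # regression weights
a = beta / np.sqrt(beta @ Rxx @ beta)      # canonical weights (unit-variance variate)

# Treat "a * k = beta" as an overdetermined linear system and solve for the
# single scaling constant k, mimicking what matlib::Solve does in R
k, *_ = np.linalg.lstsq(a.reshape(-1, 1), beta, rcond=None)

# The constant is the canonical correlation, i.e. the multiple
# correlation R = sqrt(R-squared)
R_c = np.sqrt(rxy @ beta)
print(k[0], R_c)
```

In other words, dividing the regression weights by the multiple correlation recovers the canonical weights, which is all the “different scaling” amounts to in the one-dependent-variable case.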