George Box and Sir David Cox collaborated on one paper (Box and Cox, \(1964\)). The story is that while Cox was visiting Box at Wisconsin, they decided they should write a paper together because of the similarity of their names (and because both are British). In fact, Professor Box is married to the daughter of Sir Ronald Fisher.

The Box-Cox transformation of the variable \(x\) is also indexed by \(λ\), and is defined as

\[ X_{\lambda }^{'} = \dfrac{x^\lambda-1}{\lambda} \label{eq1}\]

At first glance, although the formula in Equation \ref{eq1} is a scaled version of the Tukey transformation \(x^\lambda\), it does not appear to be the same as the Tukey formula in Equation (2). However, a closer look shows that when \(λ < 0\), both \(x_\lambda\) and \(X_{\lambda }^{'}\) change the sign of \(x^\lambda\) to preserve the ordering. Of more interest is the fact that when \(λ = 0\), the Box-Cox variable takes the indeterminate form \(0/0\). Rewriting the Box-Cox formula as

\[ X_{\lambda }^{'} = \dfrac{e^{\lambda \log(x)}-1}{\lambda} = \dfrac{\left( 1 + \lambda \log(x) + \tfrac{1}{2}\lambda ^2 \log(x)^2 + \cdots \right) - 1}{\lambda} \rightarrow \log(x)\]

as \(\lambda \rightarrow 0\), we see that the log transformation emerges as the limiting case. This same result may also be obtained using l'Hôpital's rule from your calculus course. This gives a rigorous explanation for Tukey's suggestion that the log transformation (which is not an example of a polynomial transformation) may be inserted at the value \(λ = 0\).
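These limiting cases are easy to check numerically. Below is a minimal sketch in plain NumPy (the function name `box_cox` is our own, not a library routine), with the log inserted at \(λ = 0\):

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform (x**lam - 1) / lam, with log(x) inserted at lam = 0."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x**lam - 1.0) / lam

x = np.array([0.5, 1.0, 2.0, 10.0])

# x = 1 maps to 0 for every value of lambda
print([float(box_cox(1.0, lam)) for lam in (-1.0, 0.0, 0.5, 1.0)])  # all zeros

# near lambda = 0 the transform is numerically close to log(x)
print(np.max(np.abs(box_cox(x, 1e-6) - np.log(x))))  # very small
```

In practice, `scipy.stats.boxcox` offers a vetted implementation that also estimates \(λ\) by maximum likelihood.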

Figure \(\PageIndex{1}\): Examples of the Box-Cox transformation \(X_{\lambda }^{'}\) versus \(x\) for \(λ = −1, 0, 1\). In the second row, \(X_{\lambda }^{'}\) is plotted against \(\log(x)\). The red point is at \((1, 0)\).

Notice with this definition of \(X_{\lambda }^{'}\) that \(x = 1\) always maps to the point \(X_{\lambda }^{'} = 0\), whatever the value of \(λ\). To see how the transformation works, look at the examples in Figure \(\PageIndex{1}\). In the top row, the choice \(λ = 1\) simply shifts \(x\) to the value \(x−1\), which is a straight line. In the bottom row (on a semi-logarithmic scale), the choice \(λ = 0\) corresponds to a logarithmic transformation, which now appears as a straight line. We superimpose a larger collection of transformations on a semi-logarithmic scale in Figure \(\PageIndex{2}\).

Transformation to Normality

Another important use of variable transformation is to eliminate skewness and other distributional features that complicate analysis. Often the goal is to find a simple transformation that leads to normality. In the article on \(q-q\) plots, we discuss how to assess the normality of a set of data,

\[x_1,x_2, \ldots ,x_n.\]

Data that are normal lead to a straight line on the q-q plot. Since the correlation coefficient is maximized when a scatter diagram is linear, we can use the same approach as above to find the most normal transformation: choose the value of \(λ\) that maximizes the correlation of the \(n\) pairs

\[ \left( \Phi ^{-1}\left( \dfrac{i-0.5}{n} \right),\, x_{(i)}^{'} \right), \qquad i = 1, \ldots ,n,\]

where \(\Phi ^{-1}\) is the inverse CDF of the normal density and \(x_{(i)}\) denotes the \(i^{th}\) sorted value of the data set. As an example, consider a large sample of British household incomes taken in \(1973\), normalized to have mean equal to one (\(n = 7125\)). Such data are often strongly skewed, as is clear from Figure \(\PageIndex{3}\). The data were sorted and paired with the \(7125\) normal quantiles. The value of \(λ\) that gave the greatest correlation (\(r = 0.9944\)) was \(λ = 0.21\).
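The income data themselves are not reproduced here, so the sketch below applies the same recipe to simulated lognormal data (an assumption for illustration); for lognormal data the log transform is exactly normalizing, so the best \(λ\) should land near \(0\). All function and variable names are our own.

```python
import numpy as np
from statistics import NormalDist

def box_cox(x, lam):
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

rng = np.random.default_rng(0)
# Simulated skewed "incomes" (stand-in for the unavailable 1973 data)
x = np.sort(rng.lognormal(mean=0.0, sigma=0.8, size=2000))
n = len(x)

# Normal quantiles paired with the sorted data, as in the q-q plot
q = np.array([NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)])

# Grid search: the most normalizing lambda maximizes the correlation
lams = np.arange(-100, 201) / 100.0          # -1.00, -0.99, ..., 2.00
r = [np.corrcoef(q, box_cox(x, lam))[0, 1] for lam in lams]
best = lams[int(np.argmax(r))]
print(best, max(r))  # best lambda near 0 for lognormal data
```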

Figure \(\PageIndex{3}\): (L) Density plot of the \(1973\) British income data. (R) The best value of \(λ\) is \(0.21\).

The kernel density plot of the optimally transformed data is shown in the left frame of Figure \(\PageIndex{4}\). While this distribution is much less skewed than that in Figure \(\PageIndex{3}\), there is clearly an extra "component" in the distribution that might reflect the poor. Economists often analyze the logarithm of income, corresponding to \(λ = 0\); see the right frame of Figure \(\PageIndex{4}\). The correlation is only \(r = 0.9901\) in this case, but the log transform will probably be preferred for convenience.

Figure \(\PageIndex{4}\): (L) Density plot of the \(1973\) British income data transformed with \(λ = 0.21\). (R) The log-transform with \(λ = 0\).

Other Applications

Regression analysis is another application where variable transformation is frequently applied. For the model

\[ y = \beta _0 + \beta _1 x_1 + \beta _2 x_2 + \cdots + \beta _p x_p + \epsilon ,\]

each of the predictor variables \(x_j\) can be transformed. The usual criterion is the variance of the residuals, given by

\[\frac{1}{n} \sum_{i=1}^{n} (\widehat{y}_i-y_i)^2\]
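As a hedged illustration of this criterion, the sketch below transforms a single predictor over a grid of \(λ\) and picks the value minimizing the residual variance; the data are simulated so that \(y\) is linear in \(\log(x)\), and the search should therefore land near \(λ = 0\). Function and variable names are our own.

```python
import numpy as np

def box_cox(x, lam):
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 20.0, size=500)
y = 2.0 + 3.0 * np.log(x) + rng.normal(scale=0.3, size=500)  # truth uses log(x)

def resid_var(lam):
    # Least-squares fit of y on an intercept plus the transformed predictor
    X = np.column_stack([np.ones_like(x), box_cox(x, lam)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return np.mean(r**2)

lams = np.arange(-100, 201) / 100.0
best = lams[int(np.argmin([resid_var(lam) for lam in lams]))]
print(best)  # near 0, recovering the log relationship
```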

Occasionally, the response variable \(y\) may be transformed. In this case, care must be taken because the variance of the residuals is not comparable as \(λ\) varies. Let \(\bar{g}_y\) represent the geometric mean of the response variable. Then the scaled transformation

\[ y' = \dfrac{y^\lambda -1}{\lambda \, \bar{g}_y^{\,\lambda -1}} \]

keeps the residual variances on a comparable scale as \(λ\) varies.
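A minimal numerical sketch of this geometric-mean scaling (the function name `box_cox_scaled` is our own): the transform \((y^\lambda - 1)/(\lambda\, \bar{g}_y^{\,\lambda-1})\), with the limiting case \(\bar{g}_y \log(y)\) inserted at \(λ = 0\).

```python
import numpy as np

def box_cox_scaled(y, lam):
    """Box-Cox transform of a response, scaled by the geometric mean so that
    residual variances remain comparable across values of lambda."""
    y = np.asarray(y, dtype=float)
    g = np.exp(np.mean(np.log(y)))   # geometric mean of y
    if lam == 0:
        return g * np.log(y)         # limiting case as lambda -> 0
    return (y**lam - 1.0) / (lam * g**(lam - 1.0))

y = np.array([1.0, 2.0, 5.0, 10.0])
# lambda = 1 reduces to the simple shift y - 1 (the units of y are preserved)
print(box_cox_scaled(y, 1.0))  # [0. 1. 4. 9.]
```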


The LibreTexts libraries are Powered by MindTouch® and are supported by the National Science Foundation under grant numbers 1246120, 1525057, and 1413739 and the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions, and Merlot. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. Have questions or comments? For more information contact us at info@libretexts.org.