Many biomedical, social, engineering and science applications involve the analysis of relationships, if any, between two or more variables involved in a process of interest. We begin with the simplest of all situations, where bivariate data (X and Y) are measured for a process and we are interested in determining the association, relation or an appropriate model for these observations (e.g., fitting a straight line to the pairs of (X,Y) data). If we are successful in determining a relationship between X and Y, we can use this model to make predictions, i.e., given a value of X, predict the corresponding Y response. Note that in this design the data consist of paired observations (X,Y), for example, the height and weight of individuals.

==Lines in 2D==

There are three types of lines in the 2D plane: vertical lines, horizontal lines and oblique lines. In general, the mathematical representation of a line in 2D is given by an equation of the form <math>aX + bY = c</math>, most frequently expressed in slope-intercept form as <math>Y = aX + b</math>, provided the line is not vertical.

Recall that there is a one-to-one correspondence between any line in 2D and (linear) equations of the form:

: If the line is vertical (<math>X_1 = X_2</math>): <math>X = X_1</math>;

: If the line is horizontal (<math>Y_1 = Y_2</math>): <math>Y = Y_1</math>;

: Otherwise (oblique line): <math>\frac{Y-Y_1}{Y_2-Y_1} = \frac{X-X_1}{X_2-X_1}</math> (for <math>X_1 \not= X_2</math> and <math>Y_1 \not= Y_2</math>),

where <math>(X_1,Y_1)</math> and <math>(X_2,Y_2)</math> are two points on the line of interest (two distinct points in 2D determine a unique line).
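This three-case construction can be turned directly into code. Below is a minimal sketch in plain Python (the helper name <code>line_through</code> and the sample points are illustrative assumptions, not part of the SOCR materials):

<pre>
# Return a printable equation of the unique line through two distinct 2D points,
# following the vertical / horizontal / oblique cases described above.
def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    if (x1, y1) == (x2, y2):
        raise ValueError("two distinct points are required to determine a line")
    if x1 == x2:                  # vertical line
        return f"X = {x1}"
    if y1 == y2:                  # horizontal line
        return f"Y = {y1}"
    a = (y2 - y1) / (x2 - x1)     # slope of the oblique line
    b = y1 - a * x1               # intercept, from Y - Y1 = a*(X - X1)
    return f"Y = {a}X + {b}"

print(line_through((1, 2), (3, 6)))   # oblique:    Y = 2.0X + 0.0
print(line_through((4, 0), (4, 9)))   # vertical:   X = 4
print(line_through((0, 5), (7, 5)))   # horizontal: Y = 5
</pre>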

==The Correlation Coefficient==

The correlation coefficient (<math>-1 \leq \rho \leq 1</math>) is a measure of linear association, or clustering around a line, of multivariate data. The main relationship between two variables (X, Y) can be summarized by <math>(\mu_X,\sigma_X)</math>, <math>(\mu_Y,\sigma_Y)</math> and the correlation coefficient, denoted by <math>\rho = \rho(X,Y) = R(X,Y)</math>.

* If ρ = 1, we have a perfect positive correlation (a straight-line relationship between the two variables).
* If ρ = 0, there is no correlation (random cloud scatter), i.e., no linear relation between X and Y.
* If ρ = − 1, there is a perfect negative correlation between the variables.

==Computing ρ = R(X,Y)==

The protocol for computing the correlation involves standardizing, multiplication and averaging.

Suppose <math>\{x_1, x_2, \ldots, x_N\}</math> and <math>\{y_1, y_2, \ldots, y_N\}</math> are bivariate observations of the same process, and <math>(\mu_X,\sigma_X)</math> and <math>(\mu_Y,\sigma_Y)</math> are the means and standard deviations for the X and Y measurements, respectively. Then the (population) correlation coefficient is

:: <math>\rho_{X,Y} = \frac{1}{N}\sum_{i=1}^N {\left(\frac{x_i-\mu_X}{\sigma_X}\right) \left(\frac{y_i-\mu_Y}{\sigma_Y}\right)}.</math>

Sample correlation: as we only have sampled data, we replace the (unknown) expectations and standard deviations by their sample analogues (sample-mean and sample-standard deviation) to compute the sample correlation

:: <math>r_{X,Y} = \frac{1}{n-1}\sum_{i=1}^n {\left(\frac{x_i-\bar{x}}{s_x}\right) \left(\frac{y_i-\bar{y}}{s_y}\right)},</math>

where <math>\bar{x}</math> and <math>\bar{y}</math> are the sample means of X and Y, <math>s_x</math> and <math>s_y</math> are the sample standard deviations of X and Y, and the sum is from i = 1 to n. We may rewrite this as

:: <math>r_{X,Y} = \frac{\sum_{i=1}^n {x_i y_i} - n \bar{x}\bar{y}}{(n-1) s_x s_y}.</math>

Note: The correlation is defined only if both standard deviations are finite and nonzero. It is a corollary of the Cauchy-Schwarz inequality that the correlation is always bounded: <math>-1 \leq \rho_{X,Y} \leq 1</math>.
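To make the "standardize, multiply and average" protocol concrete, the following sketch computes the sample correlation from scratch and checks it against <code>numpy.corrcoef</code> (the toy height/weight values are illustrative assumptions, not SOCR data):

<pre>
import math
import numpy as np

x = [63.0, 67.0, 70.0, 72.0, 75.0]        # e.g., heights (hypothetical values)
y = [127.0, 142.0, 151.0, 160.0, 171.0]   # e.g., weights (hypothetical values)
n = len(x)

x_bar, y_bar = sum(x) / n, sum(y) / n
s_x = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n - 1))
s_y = math.sqrt(sum((yi - y_bar) ** 2 for yi in y) / (n - 1))

# Standardize, multiply and average: r = (1/(n-1)) * sum of products of z-scores
r = sum(((xi - x_bar) / s_x) * ((yi - y_bar) / s_y)
        for xi, yi in zip(x, y)) / (n - 1)

print(r)                          # from-scratch sample correlation
print(np.corrcoef(x, y)[0, 1])    # agrees with numpy's built-in estimate
</pre>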

Of course, these calculations become tedious for more than a few paired observations, and that is why we use the Simple Linear Regression applet in SOCR Analyses to compute the correlation and other linear associations in the bivariate case.

==Properties of the Correlation Coefficient==

A trivial correlation, <math>\rho_{X,Y} = 0</math>, only implies that there is no linear relation between X and Y; there may still be other relations (e.g., quadratic). Note that statistical independence of X and Y does imply that <math>\rho_{X,Y} = 0</math>; however, the converse is false: <math>\rho_{X,Y} = 0</math> does not imply independence! See the short simulation below.
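The following simulation illustrates this caveat numerically: Y = X<sup>2</sup> is completely determined by X, yet its correlation with X is essentially zero when X is symmetric about 0 (the simulated data are our own illustration):

<pre>
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=100_000)   # X symmetric around zero
y = x ** 2                             # Y is perfectly (nonlinearly) dependent on X

print(np.corrcoef(x, y)[0, 1])         # ~ 0: no *linear* relation despite full dependence
</pre>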

A high correlation between X and Y does not imply causality (i.e., it does not mean that one of the variables causes the observed behavior in the other). For example, consider X = {math scores} and Y = {shoe sizes} for all K-12 students. X and Y are very highly positively correlated, yet larger shoe sizes do not imply better math skills; age is a lurking (confounding) variable driving both.

==Statistical inference on correlation coefficients==

There is a simple statistical test for the correlation coefficient. Suppose we want to test whether the correlation between X and Y is trivial, i.e., <math>H_o: \rho = 0</math> vs. <math>H_a: \rho \not= 0</math>. If our bivariate sample is of size N and the observed sample correlation is r, then the test statistic is

:: <math>t_o = \frac{r}{\sqrt{\frac{1-r^2}{N-2}}}</math>, which has a [[AP_Statistics_Curriculum_2007_StudentsT |T-distribution with df = N-2]].
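As a sketch of carrying out this test in practice (SciPy is assumed to be available; the values of r and N are hypothetical):

<pre>
import math
from scipy import stats

r, N = 0.62, 30                               # hypothetical sample correlation and size
t_o = r / math.sqrt((1 - r ** 2) / (N - 2))   # the test statistic above
p_value = 2 * stats.t.sf(abs(t_o), df=N - 2)  # two-sided p-value, T-distribution, df = N - 2

print(t_o, p_value)
</pre>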

* Comparing two correlation coefficients: The [http://en.wikipedia.org/wiki/Fisher_transformation Fisher's transformation] provides a mechanism for comparing two correlation coefficients using the [[AP_Statistics_Curriculum_2007#Chapter_V:_Normal_Probability_Distribution |Normal distribution]]. Suppose we have two independent paired samples <math>\{(X_i,Y_i)\}_{i=1}^{n_1}</math> and <math>\{(U_j,V_j)\}_{j=1}^{n_2}</math>, with <math>r_1=corr(X,Y)</math> and <math>r_2=corr(U,V)</math>, and we are testing <math>H_o: r_1=r_2</math> vs. <math>H_a: r_1\not= r_2</math>. The Fisher's transformation of the two correlations is defined by

:: <math>z_1 = \frac{1}{2}\ln\left(\frac{1+r_1}{1-r_1}\right)</math> and <math>z_2 = \frac{1}{2}\ln\left(\frac{1+r_2}{1-r_2}\right).</math>

Under <math>H_o</math>, the test statistic

:: <math>z_o = \frac{z_1 - z_2}{\sqrt{\frac{1}{n_1-3} + \frac{1}{n_2-3}}}</math>

approximately follows the Standard Normal distribution.
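A minimal numeric sketch of this two-sample comparison (SciPy is assumed to be available; the correlations and sample sizes are hypothetical):

<pre>
import math
from scipy import stats

r1, n1 = 0.45, 80    # r1 = corr(X,Y) and its sample size
r2, n2 = 0.25, 95    # r2 = corr(U,V) and its sample size

z1 = 0.5 * math.log((1 + r1) / (1 - r1))   # Fisher transformation of r1
z2 = 0.5 * math.log((1 + r2) / (1 - r2))   # Fisher transformation of r2

# Under H_o the standardized difference is approximately Standard Normal
z_o = (z1 - z2) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
p_value = 2 * stats.norm.sf(abs(z_o))      # two-sided p-value

print(z_o, p_value)
</pre>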