Regression (OLS) - overview

This page offers structured overviews of one or more selected methods.

H0: the population proportions in each of the $J$ conditions are $\pi_1$, $\pi_2$, $\ldots$, $\pi_J$

or equivalently

H0: the probability of drawing an observation from condition 1 is $\pi_1$, the probability of drawing an observation from condition 2 is $\pi_2$, $\ldots$,
the probability of drawing an observation from condition $J$ is $\pi_J$

H0: $\mu = \mu_0$

$\mu$ is the population mean; $\mu_0$ is the population mean according to the null hypothesis

H0: $\mu_1 = \mu_2$

$\mu_1$ is the population mean for group 1, $\mu_2$ is the population mean for group 2

H0: $\rho_s = 0$

$\rho_s$ is the unknown Spearman correlation in the population. The Spearman correlation is a measure of the strength and direction of the monotonic relationship between two variables of at least ordinal measurement level.

In words, the null hypothesis would be:

H0: there is no monotonic relationship between the two variables in the population

Alternative hypothesis

$F$ test for the complete regression model:

H1: not all population regression coefficients are 0

or equivalently

H1: the variance explained by all the independent variables together (the complete model) is larger than 0 in the population, i.e. $\rho^2 > 0$

$t$ test for individual regression coefficient $\beta_k$:

H1 two sided: $\beta_k \neq 0$

H1 right sided: $\beta_k > 0$

H1 left sided: $\beta_k < 0$

H1: the population proportions are not all as specified under the null hypothesis

or equivalently

H1: the probabilities of drawing an observation from each of the conditions are not all as specified under the null hypothesis

In the population, the residuals are normally distributed at each combination of values of the independent variables

In the population, the standard deviation $\sigma$ of the residuals is the same for each combination of values of the independent variables (homoscedasticity)

In the population, the relationship between the independent variables and the mean of the dependent variable $\mu_y$ is linear. If this linearity assumption holds, the mean of the residuals is 0 for each combination of values of the independent variables

The residuals are independent of one another

Often ignored additional assumption:

Variables are measured without error

Also pay attention to:

Multicollinearity

Outliers

Sample size is large enough for $X^2$ to be approximately chi-squared distributed. Rule of thumb: all $J$ expected cell counts are 5 or more

Sample is a simple random sample from the population. That is, observations are independent of one another

Scores are normally distributed in the population

Population standard deviation $\sigma$ is known

Sample is a simple random sample from the population. That is, observations are independent of one another

Within each population, the scores on the dependent variable are normally distributed

The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$

Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another

Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another

Note: this assumption is only important for the significance test, not for the correlation coefficient itself. The correlation coefficient itself just measures the strength of the monotonic relationship between two variables.

If only one independent variable: $SE_{b_1} = \dfrac{\sqrt{\sum (y_j - \hat{y}_j)^2 / (N - 2)}}{\sqrt{\sum (x_j - \bar{x})^2}} = \dfrac{s}{\sqrt{\sum (x_j - \bar{x})^2}}$, with $s$ the sample standard deviation of the residuals, $x_j$ the score of subject $j$ on the independent variable $x$, and $\bar{x}$ the mean of $x$. For models with more than one independent variable, computing $SE_{b_k}$ becomes complicated
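
For a simple regression, $SE_{b_1}$ can be computed directly from this formula. Below is a minimal Python sketch (the toy data and variable names are hypothetical, chosen only for illustration; numpy is assumed to be available):

```python
import numpy as np

# Hypothetical toy data: one independent variable x, dependent variable y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
N = len(y)

# Least squares fit of y-hat = b0 + b1 * x
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

# s: sample standard deviation of the residuals (df = N - 2)
s = np.sqrt(np.sum((y - y_hat)**2) / (N - 2))

# Standard error of b1, as in the formula above
SE_b1 = s / np.sqrt(np.sum((x - x.mean())**2))
print(b1, SE_b1)
```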

Note 1: mean square model is also known as mean square regression; mean square error is also known as mean square residual
Note 2: if only one independent variable ($K = 1$), the $F$ test for the complete regression model is equivalent to the two sided $t$ test for $\beta_1$
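
As a quick check of Note 2, the following sketch (hypothetical toy data again) computes $F$ as mean square model over mean square error for a model with $K = 1$, and confirms it equals the square of the $t$ statistic for $b_1$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
N, K = len(y), 1

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
y_hat = y.mean() + b1 * (x - x.mean())

ss_model = np.sum((y_hat - y.mean())**2)   # sum of squares model (regression)
ss_error = np.sum((y - y_hat)**2)          # sum of squares error (residual)

F = (ss_model / K) / (ss_error / (N - K - 1))   # MS model / MS error

s = np.sqrt(ss_error / (N - 2))
t = b1 / (s / np.sqrt(np.sum((x - x.mean())**2)))

print(F, t**2)  # identical (up to floating point) when K = 1
```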

$X^2 = \sum{\frac{(\mbox{observed cell count} - \mbox{expected cell count})^2}{\mbox{expected cell count}}}$
where the expected count in cell $j$ is $N \times \pi_j$, the observed cell count is the observed sample count in that same cell, and the sum is over all $J$ cells
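
A minimal Python sketch of this computation (the observed counts and $H_0$ proportions are made-up example values; scipy is assumed to be available):

```python
import numpy as np
from scipy import stats

# Hypothetical example: N = 60 observations over J = 3 conditions,
# H0: pi_1 = pi_2 = pi_3 = 1/3
observed = np.array([25, 20, 15])
pi_0 = np.array([1/3, 1/3, 1/3])
expected = observed.sum() * pi_0              # N * pi_j per cell

X2 = np.sum((observed - expected)**2 / expected)
p = stats.chi2.sf(X2, df=len(observed) - 1)   # df = J - 1
print(X2, p)

# The same test in one call:
print(stats.chisquare(f_obs=observed, f_exp=expected))
```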

$z = \dfrac{\bar{y} - \mu_0}{\sigma / \sqrt{N}}$
$\bar{y}$ is the sample mean, $\mu_0$ is the population mean according to the null hypothesis, $\sigma$ is the population standard deviation,
$N$ is the sample size.
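
In code, the $z$ statistic is a one-liner once $\bar{y}$, $\mu_0$, $\sigma$, and $N$ are known. A sketch with hypothetical data (scipy is used only for the normal tail probability):

```python
import numpy as np
from scipy import stats

# Hypothetical example: sample of N scores, known population sd sigma
y = np.array([102.0, 98.5, 105.2, 99.8, 101.3, 103.7, 97.9, 104.1])
mu_0 = 100.0     # population mean under H0
sigma = 3.0      # known population standard deviation
N = len(y)

z = (y.mean() - mu_0) / (sigma / np.sqrt(N))
p_two_sided = 2 * stats.norm.sf(abs(z))
print(z, p_two_sided)
```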

$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
$\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2,
$s_p$ is the pooled standard deviation,
$n_1$ is the sample size of group 1, $n_2$ is the sample size of group 2. The 0 represents the difference in population means according to the null hypothesis.
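
A sketch of the same computation in Python (hypothetical group data; the pooled standard deviation $s_p$ is computed from the two sample variances), with scipy's built-in equal-variance $t$ test as a cross-check:

```python
import numpy as np
from scipy import stats

# Hypothetical example: two independent groups
y1 = np.array([5.1, 6.3, 5.8, 7.0, 6.4])
y2 = np.array([4.2, 5.0, 4.8, 5.5, 4.4, 5.1])
n1, n2 = len(y1), len(y2)

# Pooled standard deviation (assumes equal population sds)
s_p = np.sqrt(((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1))
              / (n1 + n2 - 2))

t = (y1.mean() - y2.mean()) / (s_p * np.sqrt(1/n1 + 1/n2))
p_two_sided = 2 * stats.t.sf(abs(t), df=n1 + n2 - 2)
print(t, p_two_sided)

# Cross-check against scipy's equal-variance t test
print(stats.ttest_ind(y1, y2, equal_var=True))
```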

In all formulas, the critical value $t^*$ is the value under the $t_{N - K - 1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).
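
The df = 20 example can be verified with scipy's inverse $t$ distribution function (assuming scipy is available):

```python
from scipy import stats

C = 95                       # confidence level in percent
df = 20
t_star = stats.t.ppf((1 + C / 100) / 2, df)
print(round(t_star, 3))      # 2.086
```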


$\bar{y} \pm z^* \times \dfrac{\sigma}{\sqrt{N}}$
where $z^*$ is the value under the normal curve with the area $C / 100$ between $-z^*$ and $z^*$ (e.g. $z^*$ = 1.96 for a 95% confidence interval)
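
A sketch of this confidence interval in Python (hypothetical data, continuing the one sample $z$ test example; $z^*$ is obtained from the inverse normal distribution):

```python
import numpy as np
from scipy import stats

# Hypothetical example, continuing the one sample z test setting
y = np.array([102.0, 98.5, 105.2, 99.8, 101.3, 103.7, 97.9, 104.1])
sigma = 3.0
C = 95

z_star = stats.norm.ppf((1 + C / 100) / 2)     # 1.96 for C = 95
half_width = z_star * sigma / np.sqrt(len(y))
print(y.mean() - half_width, y.mean() + half_width)
```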

Proportion variance explained $R^2$:
Proportion variance of the dependent variable $y$ explained by the sample regression equation (the independent variables):
$$
\begin{align}
R^2 &= \dfrac{\sum (\hat{y}_j - \bar{y})^2}{\sum (y_j - \bar{y})^2}\\ &= \dfrac{\mbox{sum of squares model}}{\mbox{sum of squares total}}\\
&= 1 - \dfrac{\mbox{sum of squares error}}{\mbox{sum of squares total}}\\
&= r(y, \hat{y})^2
\end{align}
$$
$R^2$ is the proportion variance explained in the sample by the sample regression equation. It is a positively biased estimate of the proportion variance explained in the population by the population regression equation, $\rho^2$. If there is only one independent variable, $R^2 = r^2$: the correlation between the independent variable $x$ and dependent variable $y$ squared.

Wherry's $R^2$ / shrunken $R^2$:
Corrects for the positive bias in $R^2$ and is equal to
$$R^2_W = 1 - \frac{N - 1}{N - K - 1}(1 - R^2)$$
$R^2_W$ is a less biased estimate than $R^2$ of the proportion variance explained in the population by the population regression equation, $\rho^2$

Stein's $R^2$:
Estimates the proportion of variance in $y$ that we expect the current sample regression equation to explain in a different sample drawn from the same population. It is equal to
$$R^2_S = 1 - \frac{(N - 1)(N - 2)(N + 1)}{(N - K - 1)(N - K - 2)(N)}(1 - R^2)$$
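
The three estimates can be compared side by side. A minimal numpy sketch (simulated data, so the numbers themselves are arbitrary) that fits a $K = 2$ model by least squares and computes $R^2$, Wherry's $R^2_W$, and Stein's $R^2_S$:

```python
import numpy as np

# Simulated (hypothetical) data: K = 2 independent variables
rng = np.random.default_rng(1)
N, K = 50, 2
X = rng.normal(size=(N, K))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=N)

# Least squares fit with an intercept column
X1 = np.column_stack([np.ones(N), X])
b, *_ = np.linalg.lstsq(X1, y, rcond=None)
y_hat = X1 @ b

R2 = np.sum((y_hat - y.mean())**2) / np.sum((y - y.mean())**2)
R2_W = 1 - (N - 1) / (N - K - 1) * (1 - R2)                 # Wherry
R2_S = 1 - ((N - 1) * (N - 2) * (N + 1)) \
           / ((N - K - 1) * (N - K - 2) * N) * (1 - R2)     # Stein
print(R2, R2_W, R2_S)
```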

Per independent variable:

Correlation squared $r^2_k$: the proportion of the total variance in the dependent variable $y$ that is explained by the independent variable $x_k$, not corrected for the other independent variables in the model

Semi-partial correlation squared $sr^2_k$: the proportion of the total variance in the dependent variable $y$ that is uniquely explained by the independent variable $x_k$, beyond the part that is already explained by the other independent variables in the model

Partial correlation squared $pr^2_k$: the proportion of the variance in the dependent variable $y$ not explained by the other independent variables, that is uniquely explained by the independent variable $x_k$
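
One way to obtain $sr^2_k$ and $pr^2_k$ is from the $R^2$ of the full model and of the model with $x_k$ removed: $sr^2_k = R^2_{full} - R^2_{without\,k}$ and $pr^2_k = sr^2_k / (1 - R^2_{without\,k})$. A sketch with simulated data (the helper function fit_r2 is made up for this example):

```python
import numpy as np

def fit_r2(X, y):
    """R^2 of an OLS fit of y on the columns of X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    y_hat = X1 @ b
    return np.sum((y_hat - y.mean())**2) / np.sum((y - y.mean())**2)

# Simulated (hypothetical) data with two independent variables
rng = np.random.default_rng(2)
N = 50
X = rng.normal(size=(N, 2))
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=N)

k = 0                                  # variable of interest: x_1
R2_full = fit_r2(X, y)
R2_without_k = fit_r2(np.delete(X, k, axis=1), y)

sr2_k = R2_full - R2_without_k         # unique contribution of x_k
pr2_k = sr2_k / (1 - R2_without_k)     # relative to unexplained variance
print(sr2_k, pr2_k)
```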

Cohen's $d$:
Standardized difference between the mean in group $1$ and in group $2$:
$$d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$$
Indicates how many standard deviations $s_p$ the two sample means are apart
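
A minimal sketch (hypothetical data, reusing the pooled standard deviation from the $t$ test formula above):

```python
import numpy as np

# Hypothetical example: same two groups as in the t test sketch
y1 = np.array([5.1, 6.3, 5.8, 7.0, 6.4])
y2 = np.array([4.2, 5.0, 4.8, 5.5, 4.4, 5.1])
n1, n2 = len(y1), len(y2)

s_p = np.sqrt(((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1))
              / (n1 + n2 - 2))
d = (y1.mean() - y2.mean()) / s_p
print(d)
```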

If you also have code (dummy) variables as independent variables, you can put these in the box below Covariates as well

Instead of transforming your categorical independent variable(s) into code variables, you can also put the untransformed categorical independent variables in the box below Factors. Jamovi will then make the code variables for you 'behind the scenes'

Frequencies > N Outcomes - $\chi^2$ Goodness of fit

Put your categorical variable in the box below Variable

Click on Expected Proportions and fill in the population proportions / probabilities according to $H_0$ in the boxes below Ratio. If $H_0$ states that they are all equal, you can leave the ratios equal to the default values (1)