
When the logarithm of the likelihood ratio is used, the statistic is known as a log-likelihood ratio statistic, and the probability distribution of this test statistic, assuming that the null model is true, can be approximated using Wilks' theorem.

In the case of distinguishing between two models, each of which has no unknown parameters, use of the likelihood ratio test can be justified by the Neyman–Pearson lemma, which demonstrates that such a test has the highest power among all competitors.[1]
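In this simple-vs-simple setting, the likelihood ratio reduces to a ratio of two directly computable likelihoods, with no maximization required. A minimal sketch (the sample values and the two candidate means are hypothetical, chosen only for illustration) for two normal distributions with known unit variance:

```python
import math

def normal_loglik(data, mu, sigma=1.0):
    # log-likelihood of i.i.d. normal observations with mean mu, std sigma
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in data)

data = [0.2, -0.3, 0.5, 0.1]        # hypothetical observations
ll0 = normal_loglik(data, mu=0.0)   # H0: mu = 0 (fully specified)
ll1 = normal_loglik(data, mu=1.0)   # H1: mu = 1 (fully specified)
ratio = math.exp(ll0 - ll1)         # likelihood ratio L(H0)/L(H1)
print(ratio)
```

A ratio above 1 indicates the data are more probable under H0 than under H1; the Neyman–Pearson lemma says that thresholding this ratio gives the most powerful test at a given level.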

Note that in this special case, under either hypothesis, the distribution of the data is fully specified; there are no unknown parameters to estimate. The likelihood ratio test is based on the likelihood ratio, which is often denoted by Λ (the capital Greek letter lambda). The likelihood ratio is defined either as[2][3]

Λ(x) = sup{ L(θ ∣ x) : θ ∈ Θ0 } / sup{ L(θ ∣ x) : θ ∈ Θ }

or

Λ(x) = sup{ L(θ ∣ x) : θ ∈ Θ0 } / sup{ L(θ ∣ x) : θ ∈ Θ0∁ }

where θ ↦ L(θ ∣ x) is the likelihood function, and sup is the supremum function.

The two definitions are not the same function, but they are monotone functions of each other, and so equivalent for present purposes. Some references may use the reciprocal of the first function above as the definition.[4] In the form stated here, the likelihood ratio is small if the alternative model is better than the null model.
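To make the first definition concrete, a minimal sketch (the sample values are hypothetical) for testing H0: μ = 0 in a normal model with known unit variance. Here Θ0 is the single point {0}, so the supremum in the numerator is just the likelihood at μ = 0, while the supremum over the whole parameter space Θ is attained at the maximum-likelihood estimate, the sample mean:

```python
import math

def loglik(data, mu):
    # normal log-likelihood with known sigma = 1
    return sum(-0.5 * math.log(2 * math.pi) - (x - mu)**2 / 2 for x in data)

data = [0.9, 1.4, 0.3, 1.1]        # hypothetical sample
mu_hat = sum(data) / len(data)     # MLE over the whole parameter space
lam = math.exp(loglik(data, 0.0) - loglik(data, mu_hat))  # sup over Theta0 / sup over Theta
print(lam)                         # always between 0 and 1
```

Because the numerator maximizes over a subset of the space maximized over in the denominator, the ratio `lam` can never exceed 1; small values signal evidence against H0.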

The likelihood ratio test provides the decision rule as follows:

If Λ > c, do not reject H0;
If Λ ≤ c, reject H0.

The critical value c is chosen so that the test attains a specified significance level.

A null hypothesis is often stated by saying the parameter θ is in a specified subset Θ0 of the parameter space Θ. The alternative hypothesis is thus that θ is in the complement of Θ0, i.e. in Θ ∖ Θ0, which is denoted by Θ0∁.

A likelihood ratio test is any test with critical region (or rejection region) of the form {x ∣ Λ ≤ c}, where c is any number satisfying 0 ≤ c ≤ 1. Many common test statistics, such as the Z-test, the F-test, Pearson's chi-squared test and the G-test, are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof.
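As an illustration of that nesting claim (the counts here are hypothetical), the G-test statistic for multinomial counts is exactly −2 log Λ for the nested model of equal cell probabilities, and Pearson's chi-squared statistic is a second-order approximation to it:

```python
import math

observed = [25, 35, 40]              # hypothetical category counts
n = sum(observed)
expected = [n / 3] * 3               # H0: all three cells equally likely

# G-test statistic: exactly -2 log Lambda for this nested multinomial model
g = 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected))

# Pearson's chi-squared statistic: an approximation to the same quantity
chi2 = sum((o - e)**2 / e for o, e in zip(observed, expected))
print(g, chi2)
```

The two statistics are close for counts like these and share the same asymptotic chi-squared reference distribution.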

Being a function of the data x, the likelihood ratio is therefore a statistic. The likelihood ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. on what probability of Type I error is considered tolerable ("Type I" errors consist of the rejection of a null hypothesis that is true).

The numerator corresponds to the maximum likelihood of the observed outcome under the null hypothesis, with parameters restricted to Θ0. The denominator corresponds to the maximum likelihood of the observed outcome, varying parameters over the whole parameter space. Since the numerator maximizes over a subset of the space maximized over in the denominator, the numerator is no greater than the denominator, and the likelihood ratio hence lies between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative. High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as under the alternative, and so the null hypothesis cannot be rejected.

The likelihood-ratio test requires nested models – models in which the more complex one can be transformed into the simpler model by imposing a set of constraints on the parameters. If the models are not nested, then a generalization of the likelihood-ratio test can usually be used instead: the relative likelihood.

If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined then it can directly be used to form decision regions (to sustain or reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine[citation needed].

A fundamental result by Samuel S. Wilks shows that as the sample size n approaches ∞, the test statistic −2 log(Λ) for a nested model asymptotically will be chi-squared distributed (χ²) with degrees of freedom equal to the difference in dimensionality of Θ and Θ0, when H0 holds true.[6] This means that for a great variety of hypotheses, a practitioner can conveniently estimate the likelihood ratio Λ for the data and compare −2 log(Λ) to the χ² value corresponding to a desired statistical significance as an approximate statistical test. Other extensions exist.
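A hedged sketch of this procedure, continuing the hypothetical normal-mean example from above: the test H0: μ = 0 against the unrestricted alternative constrains one parameter, so under Wilks' theorem −2 log Λ is compared to a χ² distribution with 1 degree of freedom, whose 0.95 quantile is the standard table value ≈ 3.841:

```python
import math

def loglik(data, mu):
    # normal log-likelihood with known sigma = 1
    return sum(-0.5 * math.log(2 * math.pi) - (x - mu)**2 / 2 for x in data)

data = [0.9, 1.4, 0.3, 1.1]        # hypothetical sample
mu_hat = sum(data) / len(data)     # MLE over the whole parameter space
stat = -2 * (loglik(data, 0.0) - loglik(data, mu_hat))  # -2 log Lambda
crit = 3.841                       # chi-squared 0.95 quantile, 1 df (table value)
print(stat, stat > crit)           # reject H0 at the 5% level only if stat > crit
```

Here the statistic falls below the critical value, so H0 would not be rejected at the 5% level; with more data or a larger deviation of the sample mean from 0, the statistic grows and eventually crosses the threshold.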