It has the key feature that its functional dependence on the likelihood $L$ is invariant under reparameterization of the parameter vector $\vec{\theta}$ (the functional form of the prior density function itself is not invariant under reparameterization, of course: only the measure that is identically zero has that property; see below). This makes it of special interest for use with scale parameters.[1]

That is, the functional form of the prior $p(\cdot)$ can be derived from that of the likelihood $L(\cdot)$ using the same procedure for both parametrizations.

Note, however, that the form of the prior is different for the two parametrizations. For example, if $p(\theta) = 1/\theta$ (as in the case of the normal distribution, see below), and $\varphi = \ln(\theta)$, then $p(\varphi) = 1$, which is obviously different from $1/\varphi$.
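This change of variables can be checked directly with a quick numerical sketch (the function names are illustrative, not from the original text):

```python
import math

# Prior on theta (as for the normal scale parameter below): p(theta) = 1/theta
def p_theta(theta):
    return 1.0 / theta

# Reparameterize phi = ln(theta), so theta = exp(phi) and d(theta)/d(phi) = exp(phi).
# Change of variables: p(phi) = p(theta(phi)) * |d(theta)/d(phi)|
def p_phi(phi):
    theta = math.exp(phi)
    return p_theta(theta) * theta  # = (1/theta) * theta = 1

# The transformed density is the constant 1, not 1/phi:
for phi in [-2.0, 0.5, 3.0]:
    print(phi, p_phi(phi))
```

The Jacobian factor $e^{\varphi}$ exactly cancels the $1/\theta$, which is why the prior becomes flat in $\varphi$.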

From a practical and mathematical standpoint, a valid reason to use this non-informative prior instead of others, such as those obtained through a limit in conjugate families of distributions, is that its construction from the likelihood does not depend upon the set of parameter variables chosen to describe parameter space. It is not the only prior with this property, however. As is clear from the derivation above, instead of $\ln(L)$ we could use any other smooth function $f(L)$, and the resulting prior would still have the same kind of invariance property.

Sometimes the Jeffreys prior cannot be normalized, and is thus an improper prior. For example, the Jeffreys prior for the distribution mean is uniform over the entire real line in the case of a Gaussian distribution of known variance.

Use of the Jeffreys prior violates the strong version of the likelihood principle, which is accepted by many, but by no means all, statisticians. When using the Jeffreys prior, inferences about $\vec{\theta}$ depend not just on the probability of the observed data as a function of $\vec{\theta}$, but also on the universe of all possible experimental outcomes, as determined by the experimental design, because the Fisher information is computed from an expectation over the chosen universe. Accordingly, the Jeffreys prior, and hence the inferences made using it, may be different for two experiments involving the same $\vec{\theta}$ parameter even when the likelihood functions for the two experiments are the same, a violation of the strong likelihood principle.
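A standard illustration of this (a sketch, not from the original text) compares estimating the same success probability $\gamma$ from a binomial experiment (a fixed number of trials) and from a negative-binomial experiment (sampling until a fixed number of heads). The likelihoods share the form $\gamma^h(1-\gamma)^t$, but the Fisher information, computed as an expectation over each experiment's outcome space, differs:

```python
import math

def fisher_binomial(gamma, n=10):
    # X ~ Binomial(n, gamma); score = d/dgamma log p = x/gamma - (n - x)/(1 - gamma).
    # I(gamma) = E[score^2], computed exactly by summing over outcomes x = 0..n.
    total = 0.0
    for x in range(n + 1):
        p = math.comb(n, x) * gamma**x * (1 - gamma)**(n - x)
        score = x / gamma - (n - x) / (1 - gamma)
        total += p * score**2
    return total  # equals n / (gamma * (1 - gamma))

def fisher_negbinomial(gamma, r=10, max_x=2000):
    # X = number of tails before the r-th head: p(x) = C(x+r-1, x) gamma^r (1-gamma)^x.
    # Score = r/gamma - x/(1 - gamma); the sum is truncated at max_x (tail is negligible).
    total = 0.0
    for x in range(max_x + 1):
        p = math.comb(x + r - 1, x) * gamma**r * (1 - gamma)**x
        score = r / gamma - x / (1 - gamma)
        total += p * score**2
    return total  # approximately r / (gamma**2 * (1 - gamma))

# Different information functions, hence different Jeffreys priors,
# despite the likelihoods having the same functional form in gamma:
print(fisher_binomial(0.3), fisher_negbinomial(0.3))
```

The binomial design yields a Jeffreys prior proportional to $\gamma^{-1/2}(1-\gamma)^{-1/2}$, while the negative-binomial design yields one proportional to $\gamma^{-1}(1-\gamma)^{-1/2}$.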

In the minimum description length approach to statistics, the goal is to describe data as compactly as possible, where the length of a description is measured in bits of the code used. For a parametric family of distributions one compares a given code with the best code based on one of the distributions in the parameterized family. The main result is that in exponential families, asymptotically for large sample size, the code based on the distribution that is a mixture of the elements in the exponential family with the Jeffreys prior is optimal. This result holds if one restricts the parameter set to a compact subset in the interior of the full parameter space[citation needed]. If the full parameter space is used, a modified version of the result should be used.

That is, the Jeffreys prior for $\mu$ does not depend upon $\mu$; it is the unnormalized uniform distribution on the real line: the distribution that is 1 (or some other fixed constant) for all points. This is an improper prior, and is, up to the choice of constant, the unique translation-invariant distribution on the reals (the Haar measure with respect to addition of reals), corresponding to the mean being a measure of location and translation-invariance corresponding to no information about location.
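The underlying fact, that the Fisher information for the mean of a Gaussian with known variance is constant in $\mu$, can be verified numerically (a sketch; the quadrature scheme and the value $\sigma = 2$ are illustrative):

```python
import math

def fisher_info_mean(mu, sigma=2.0):
    # I(mu) = E[score^2] for X ~ Normal(mu, sigma^2), where the score is
    # d/dmu log f(x; mu) = (x - mu)/sigma^2.  Midpoint-rule integration
    # over mu +/- 10 sigma (the tails beyond that are negligible).
    n, lo, hi = 40000, mu - 10 * sigma, mu + 10 * sigma
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        f = math.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        total += f * ((x - mu) / sigma**2) ** 2 * dx
    return total  # approximately 1/sigma^2, regardless of mu

# Same value at every mu, so sqrt(I(mu)) is constant: the Jeffreys prior is flat.
print(fisher_info_mean(0.0), fisher_info_mean(7.0))
```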

Equivalently, the Jeffreys prior for $\log \sigma = \int d\sigma/\sigma$ is the unnormalized uniform distribution on the real line, and thus this distribution is also known as the logarithmic prior. Similarly, the Jeffreys prior for $\log \sigma^2 = 2\log \sigma$ is also uniform. It is the unique (up to a multiple) prior (on the positive reals) that is scale-invariant (the Haar measure with respect to multiplication of positive reals), corresponding to the standard deviation being a measure of scale and scale-invariance corresponding to no information about scale. As with the uniform distribution on the reals, it is an improper prior.
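The scale invariance can be seen concretely: the mass that the $1/\sigma$ prior assigns to an interval is $\int_a^b d\sigma/\sigma = \ln(b/a)$, which is unchanged when the interval is rescaled by any $c > 0$. A minimal sketch:

```python
import math

def mass(a, b):
    # Prior "mass" of the improper density p(sigma) = 1/sigma on [a, b]
    return math.log(b / a)

# Rescaling the interval by any c > 0 leaves the mass unchanged (scale invariance):
c = 7.3
print(mass(2.0, 5.0), mass(2.0 * c, 5.0 * c))
```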

For a coin that is "heads" with probability $\gamma \in [0,1]$ and is "tails" with probability $1-\gamma$, for a given $(H,T) \in \{(0,1),(1,0)\}$ the probability is $\gamma^H (1-\gamma)^T$. The Jeffreys prior for the parameter $\gamma$ is

$$p(\gamma) \propto \frac{1}{\sqrt{\gamma(1-\gamma)}}.$$

This is the arcsine distribution and is a beta distribution with $\alpha = \beta = 1/2$. Furthermore, if $\gamma = \sin^2(\theta)$ then the Jeffreys prior for $\theta$ is uniform in the interval $[0, \pi/2]$. Equivalently, $\theta$ is uniform on the whole circle $[0, 2\pi]$.
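A small sketch (assuming a single Bernoulli trial; the names are illustrative) checks that the normalized Jeffreys prior is the Beta(1/2, 1/2) density and that the $\sin^2$ substitution flattens it:

```python
import math

def jeffreys_bernoulli(gamma):
    # Fisher information for one Bernoulli trial is I(gamma) = 1/(gamma*(1-gamma));
    # sqrt(I)/pi is the normalized Beta(1/2, 1/2) (arcsine) density on (0, 1).
    return 1.0 / (math.pi * math.sqrt(gamma * (1.0 - gamma)))

def prior_theta(theta):
    # Push forward through gamma = sin^2(theta): p(theta) = p(gamma) * |dgamma/dtheta|
    gamma = math.sin(theta) ** 2
    dgamma = 2.0 * math.sin(theta) * math.cos(theta)
    return jeffreys_bernoulli(gamma) * abs(dgamma)

# The transformed prior is the constant 2/pi on (0, pi/2): theta is uniform there.
for theta in [0.3, 0.8, 1.4]:
    print(prior_theta(theta))
```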

Similarly, for a throw of an $N$-sided die with outcome probabilities $\vec{\gamma} = (\gamma_1, \ldots, \gamma_N)$, each non-negative and satisfying $\sum_{i=1}^N \gamma_i = 1$, the Jeffreys prior for $\vec{\gamma}$ is the Dirichlet distribution with all (alpha) parameters set to one half. This amounts to using a pseudocount of one half for each possible outcome.
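The pseudocount interpretation can be sketched via the posterior mean under a symmetric Dirichlet prior (the die counts below are made-up illustrative data):

```python
def posterior_mean(counts, alpha=0.5):
    # Dirichlet(alpha, ..., alpha) prior + multinomial counts gives a posterior
    # whose mean adds a pseudocount of alpha to every outcome.
    # alpha = 1/2 corresponds to the Jeffreys prior.
    total = sum(counts) + alpha * len(counts)
    return [(c + alpha) / total for c in counts]

# A 6-sided die thrown 10 times with these outcome counts (illustrative data):
probabilities = posterior_mean([3, 2, 1, 0, 0, 4])
print(probabilities)
```

Note that outcomes never observed still receive positive estimated probability, unlike the maximum-likelihood estimate.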

Alternatively, if we write $\gamma_i = \phi_i^2$ for each $i$, then the Jeffreys prior for $\vec{\phi}$ is uniform on the $(N-1)$-dimensional unit sphere (i.e., it is uniform on the surface of an $N$-dimensional unit ball).
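This correspondence can be checked by simulation: normalizing a standard Gaussian vector gives a point uniform on the unit sphere, and squaring its coordinates should then produce a draw from Dirichlet(1/2, ..., 1/2). A sketch with illustrative sample sizes:

```python
import random

def sample_gamma(n_outcomes, rng):
    # Draw phi uniformly on the unit sphere (a normalized Gaussian vector),
    # then set gamma_i = phi_i^2; gamma is then Dirichlet(1/2, ..., 1/2).
    z = [rng.gauss(0.0, 1.0) for _ in range(n_outcomes)]
    s = sum(v * v for v in z)
    return [v * v / s for v in z]

rng = random.Random(0)
N = 4
samples = [sample_gamma(N, rng) for _ in range(20000)]
# Each coordinate's sample mean should be close to the Dirichlet mean 1/N = 0.25:
means = [sum(s[i] for s in samples) / len(samples) for i in range(N)]
print(means)
```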

Jeffreys, H. (1946). "An Invariant Form for the Prior Probability in Estimation Problems". Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences. 186 (1007): 453–461. doi:10.1098/rspa.1946.0056. JSTOR 97883.