Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey a principle of maximum Fisher information.[3] The level of the maximum depends upon the nature of the system constraints.

The Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ upon which the probability of X depends. Let f(X; θ) be the probability density function (or probability mass function) for X conditional on the value of θ. This is also the likelihood function for θ. It describes the probability that we observe a given sample X, given a known value of θ. If f is sharply peaked with respect to changes in θ, it is easy to indicate the “correct” value of θ from the data; equivalently, the data X provide a lot of information about the parameter θ. If the likelihood f is flat and spread out, then it would take many samples of X to estimate the actual “true” value of θ that would be obtained using the entire population being sampled. This suggests studying some kind of variance with respect to θ.

Formally, the partial derivative with respect to θ of the natural logarithm of the likelihood function is called the “score”. Under certain regularity conditions, if θ is the true parameter (i.e. X is actually distributed as f(X; θ)), it can be shown that the expected value (the first moment) of the score is 0:[4]
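$$\operatorname{E}\!\left[\left.\frac{\partial}{\partial\theta}\log f(X;\theta)\,\right|\,\theta\right]=0.$$

The Fisher information is defined to be the variance of the score:

$$\mathcal{I}(\theta)=\operatorname{E}\!\left[\left.\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{\!2}\,\right|\,\theta\right]=\int\left(\frac{\partial}{\partial\theta}\log f(x;\theta)\right)^{\!2}f(x;\theta)\,dx.$$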

Note that 0 ≤ I(θ) < ∞. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable X has been averaged out.

If log f(x; θ) is twice differentiable with respect to θ, and under certain regularity conditions,[4] then the Fisher information may also be written as[5]
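$$\mathcal{I}(\theta)=-\operatorname{E}\!\left[\left.\frac{\partial^{2}}{\partial\theta^{2}}\log f(X;\theta)\,\right|\,\theta\right].$$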

Thus, the Fisher information may be seen as the curvature of the support curve (the graph of the log-likelihood). Near the maximum likelihood estimate, low Fisher information therefore indicates that the maximum appears "blunt", that is, the maximum is shallow and there are many nearby values with a similar log-likelihood. Conversely, high Fisher information indicates that the maximum is sharp.

The second bracketed factor is defined to be the Fisher information, while the first bracketed factor is the expected mean-squared error of the estimator θ̂. By rearranging, the inequality tells us that
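$$\operatorname{Var}\bigl(\hat{\theta}\bigr)\;\geq\;\frac{1}{\mathcal{I}(\theta)}.$$

This is the Cramér–Rao bound: the variance of an unbiased estimator of θ can be no smaller than the reciprocal of the Fisher information.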

A Bernoulli trial is a random variable with two possible outcomes, "success" and "failure", with success having a probability of θ. The outcome can be thought of as determined by a coin toss, with the probability of heads being θ and the probability of tails being 1 − θ.

Let X be a Bernoulli trial. The Fisher information contained in X may be calculated to be
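$$\mathcal{I}(\theta)=\operatorname{E}\!\left[\left.\left(\frac{\partial}{\partial\theta}\log\bigl(\theta^{X}(1-\theta)^{1-X}\bigr)\right)^{\!2}\,\right|\,\theta\right]=\frac{1}{\theta(1-\theta)},$$

since the score is X/θ − (1 − X)/(1 − θ) = (X − θ)/[θ(1 − θ)], whose variance over X ∼ Bernoulli(θ) is θ(1 − θ)/[θ(1 − θ)]² = 1/[θ(1 − θ)].

As a minimal numerical sanity check of this result (a sketch only; the helper names bernoulli_score and bernoulli_fisher_information are illustrative, not from any particular library):

```python
import numpy as np

def bernoulli_score(x, theta):
    # Score: d/d(theta) of log f(x; theta), with f(x; theta) = theta^x (1 - theta)^(1 - x).
    return x / theta - (1 - x) / (1 - theta)

def bernoulli_fisher_information(theta):
    # Fisher information = variance of the score over X ~ Bernoulli(theta).
    # Both outcomes are enumerated exactly, so no simulation is needed.
    outcomes = np.array([0.0, 1.0])
    probs = np.array([1.0 - theta, theta])
    scores = bernoulli_score(outcomes, theta)
    mean_score = np.sum(probs * scores)              # equals 0, as expected for the score
    return np.sum(probs * (scores - mean_score) ** 2)

theta = 0.3
print(bernoulli_fisher_information(theta))           # 4.7619...
print(1.0 / (theta * (1.0 - theta)))                 # matches 1 / (theta * (1 - theta))
```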

When there are N parameters, so that θ is an N × 1 vector θ = [θ1, θ2, …, θN]^T, the Fisher information takes the form of an N × N matrix. This matrix is called the Fisher information matrix (FIM) and has typical element
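$$\bigl[\mathcal{I}(\theta)\bigr]_{i,j}=\operatorname{E}\!\left[\left.\left(\frac{\partial}{\partial\theta_{i}}\log f(X;\theta)\right)\left(\frac{\partial}{\partial\theta_{j}}\log f(X;\theta)\right)\,\right|\,\theta\right].$$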

We say that two parameters θi and θj are orthogonal if the element of the ith row and jth column of the Fisher information matrix is zero. Orthogonal parameters are easy to deal with in the sense that their maximum likelihood estimates are independent and can be calculated separately. When dealing with research problems, it is very common for the researcher to invest some time searching for an orthogonal parametrization of the densities involved in the problem.[citation needed]
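For example, for a normal distribution with unknown mean μ and variance σ², the two parameters are orthogonal: the 2 × 2 Fisher information matrix is diagonal,

$$\mathcal{I}(\mu,\sigma^{2})=\begin{bmatrix}1/\sigma^{2}&0\\0&1/(2\sigma^{4})\end{bmatrix},$$

and, correspondingly, the maximum likelihood estimators of μ and σ² (the sample mean and the uncorrected sample variance) are independent.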

In this case the Fisher information matrix may be identified with the coefficient matrix of the normal equations of least squares estimation theory.
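As a sketch of why this identification holds: if X is multivariate normal with mean μ(θ) depending on the parameters and a covariance matrix Σ that does not depend on θ, the entries of the Fisher information matrix reduce to

$$\mathcal{I}_{m,n}=\frac{\partial\mu^{\mathrm{T}}}{\partial\theta_{m}}\,\Sigma^{-1}\,\frac{\partial\mu}{\partial\theta_{n}},$$

so for a mean that is linear in the parameters, μ(θ) = Aθ (with A a fixed design matrix, a notation used here only for illustration), the matrix becomes AᵀΣ⁻¹A, the coefficient matrix of the generalized least-squares normal equations.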

Another special case occurs when the mean and covariance depend on two different vector parameters, say, β and θ. This is especially popular in the analysis of spatial data, which often uses a linear model with correlated residuals. In this case,[9]

The information jointly carried by two random variables X and Y obeys the chain rule

$$\mathcal{I}_{X,Y}(\theta)=\mathcal{I}_{X}(\theta)+\mathcal{I}_{Y\mid X}(\theta),$$

where I_Y∣X(θ) is the Fisher information of Y relative to θ calculated with respect to the conditional density of Y given a specific value X = x.

As a special case, if the two random variables are independent, the information yielded by the two random variables is the sum of the information from each random variable separately:
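$$\mathcal{I}_{X,Y}(\theta)=\mathcal{I}_{X}(\theta)+\mathcal{I}_{Y}(\theta).$$

Consequently, the information in a random sample of n independent and identically distributed observations is n times the information in a single observation.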

Under a reparametrization θ = θ(η) in terms of a new parameter vector η, the Fisher information transforms as

$$\mathcal{I}_{\eta}(\eta)={\boldsymbol{J}}^{\mathrm{T}}\,\mathcal{I}_{\theta}\bigl(\theta(\eta)\bigr)\,{\boldsymbol{J}},$$

where J is the Jacobian matrix with entries J_ij = ∂θ_i/∂η_j and Jᵀ is the matrix transpose of J.

In information geometry, this is seen as a change of coordinates on a Riemannian manifold, and the intrinsic properties of curvature are unchanged under different parametrization. In general, the Fisher information matrix provides a Riemannian metric (more precisely, the Fisher–Rao metric) for the manifold of thermodynamic states, and can be used as an information-geometric complexity measure for a classification of phase transitions, e.g., the scalar curvature of the thermodynamic metric tensor diverges at (and only at) a phase transition point.[13]

In the thermodynamic context, the Fisher information matrix is directly related to the rate of change in the corresponding order parameters.[14] In particular, such relations identify second-order phase transitions via divergences of individual elements of the Fisher information matrix.

Fisher information is widely used in optimal experimental design. Because of the reciprocity of estimator-variance and Fisher information, minimizing the variance corresponds to maximizing the information.

When the linear (or linearized) statistical model has several parameters, the mean of the parameter estimator is a vector and its variance is a matrix. The inverse of the variance matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized.
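For illustration, the following is a minimal Python sketch (with made-up numbers) of two such real-valued summaries: the determinant of the information matrix, which D-optimal design maximizes, and the trace of its inverse, which A-optimal design minimizes.

```python
import numpy as np

# Illustrative 2 x 2 information matrix for a two-parameter model (values are made up).
info_matrix = np.array([[4.0, 1.0],
                        [1.0, 2.0]])

# D-optimality: maximize the determinant of the information matrix,
# i.e. minimize the generalized variance of the parameter estimator.
d_criterion = np.linalg.det(info_matrix)

# A-optimality: minimize the trace of the inverse information matrix,
# i.e. the sum of the variances of the individual parameter estimates.
a_criterion = np.trace(np.linalg.inv(info_matrix))

print(d_criterion)   # 7.0
print(a_criterion)   # 6/7 ≈ 0.857
```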

Traditionally, statisticians have evaluated estimators and designs by considering some summary statistic of the covariance matrix (of an unbiased estimator), usually with positive real values (like the determinant or matrix trace). Working with positive real numbers brings several advantages: If the estimator of a single parameter has a positive variance, then the variance and the Fisher information are both positive real numbers; hence they are members of the convex cone of nonnegative real numbers (whose nonzero members have reciprocals in this same cone).
For several parameters, the covariance matrices and information matrices are elements of the convex cone of nonnegative-definite symmetric matrices in a partially ordered vector space, under the Loewner (Löwner) order. This cone is closed under matrix addition and inversion, as well as under the multiplication of positive real numbers and matrices. An exposition of matrix theory and Loewner order appears in Pukelsheim.[15]

The Fisher information has been used to find bounds on the accuracy of neural codes. In that case, X is typically the joint responses of many neurons representing a low-dimensional variable θ (such as a stimulus parameter). In particular, the role of correlations in the noise of the neural responses has been studied.[17]

Fisher information is related to relative entropy.[20] Consider a family of probability distributions f(x; θ), where θ is a parameter which lies in a range of values. Then the relative entropy, or Kullback–Leibler divergence, between two distributions in the family can be written as
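$$D\bigl(\theta,\theta'\bigr)=\operatorname{KL}\bigl(f(\cdot\,;\theta)\,\big\|\,f(\cdot\,;\theta')\bigr)=\int f(x;\theta)\,\log\frac{f(x;\theta)}{f(x;\theta')}\,dx.$$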

If θ is fixed, then the relative entropy between two distributions of the same family is minimized at θ′ = θ. For θ′ close to θ, one may expand the previous expression in a series up to second order:
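$$D\bigl(\theta,\theta'\bigr)=\frac{1}{2}\,(\theta'-\theta)^{\mathrm{T}}\,\mathcal{I}(\theta)\,(\theta'-\theta)+o\!\left(\lVert\theta'-\theta\rVert^{2}\right).$$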

Thus the Fisher information represents the curvature of the relative entropy.

Schervish (1995: §2.3) says the following.

One advantage Kullback-Leibler information has over Fisher information is that it is not affected by changes in parameterization. Another advantage is that Kullback-Leibler information can be used even if the distributions under consideration are not all members of a parametric family.

...
Another advantage to Kullback-Leibler information is that no smoothness conditions on the densities … are needed.

The Fisher information was discussed by several early statisticians, notably F. Y. Edgeworth.[21] For example, Savage[22] says: "In it [Fisher information], he [Fisher] was to some extent anticipated (Edgeworth 1908–9 esp. 502, 507–8, 662, 677–8, 82–5 and references he [Edgeworth] cites including Pearson and Filon 1898 [. . .])."
There are a number of early historical sources[23] and a number of reviews of this early work.[24][25][26]