More specifically, the probability distribution is a mathematical description of a random phenomenon in terms of the probabilities of events.[3]

For instance, if the random variable X is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of X would take the value 0.5 for X = heads, and 0.5 for X = tails (assuming the coin is fair). Examples of random phenomena can include the results of an experiment or survey.

A probability distribution is a mathematical function that has a sample space as its input, and gives a probability as its output. The sample space is the set of all possible outcomes of a random phenomenon being observed; it may be the set of real numbers or a set of vectors, or it may be a list of non-numerical values. For example, the sample space of a coin flip would be {heads, tails}.

Probability distributions are generally divided into two classes. A discrete probability distribution (applicable to scenarios where the set of possible outcomes is discrete, such as a coin toss or a roll of dice) can be encoded by a discrete list of the probabilities of the outcomes, known as a probability mass function. On the other hand, a continuous probability distribution (applicable to scenarios where the set of possible outcomes can take on values in a continuous range, e.g. the real numbers, such as the temperature on a given day) is typically described by a probability density function (with the probability of any individual outcome actually being 0). The normal distribution is a commonly encountered continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.
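As a brief, hedged sketch (not from the article; the names are illustrative), a discrete distribution can be stored as a mapping from outcomes to probabilities, while a continuous one is given by a density function such as the standard normal pdf:

    import math

    # Discrete: probability mass function of a fair coin, as a dict.
    coin_pmf = {"heads": 0.5, "tails": 0.5}
    assert abs(sum(coin_pmf.values()) - 1.0) < 1e-12  # probabilities sum to 1

    # Continuous: probability density function of the normal distribution.
    def normal_pdf(x, mu=0.0, sigma=1.0):
        """Density of N(mu, sigma^2) at x; the value is a density, not a probability."""
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

    print(coin_pmf["heads"])  # 0.5, an actual probability
    print(normal_pdf(0.0))    # ~0.3989, a density; any single point has probability 0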

Introduction

The probability mass function (pmf) p(S) specifies the probability distribution for the sum S of counts from two dice. For example, p(11) = 2/36 = 1/18, since two of the 36 equally likely outcomes sum to 11. The pmf allows the computation of probabilities of events such as P(S > 9) = 1/12 + 1/18 + 1/36 = 1/6, and all other probabilities in the distribution.
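A brief sketch, not part of the original text: enumerating the 36 equally likely outcomes of two fair dice reproduces the probabilities quoted above.

    from collections import Counter
    from fractions import Fraction

    # Count how many of the 36 equally likely outcomes give each sum S.
    counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
    pmf = {s: Fraction(n, 36) for s, n in counts.items()}

    print(pmf[11])                                  # 1/18
    print(sum(p for s, p in pmf.items() if s > 9))  # 1/6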

To define probability distributions for the simplest cases, it is necessary to distinguish between discrete and continuous random variables. In the discrete case, it is sufficient to specify a probability mass function p assigning a probability to each possible outcome: for example, when throwing a fair die, each of the six values 1 to 6 has the probability 1/6. The probability of an event is then defined to be the sum of the probabilities of the outcomes that satisfy the event; for example, the probability of the event "the die rolls an even value" is

p(2) + p(4) + p(6) = 1/6 + 1/6 + 1/6 = 1/2.

In contrast, when a random variable takes values from a continuum then typically, any individual outcome has probability zero and only events that include infinitely many outcomes, such as intervals, can have positive probability. For example, the probability that a given object weighs exactly 500 g is zero, because the probability of measuring exactly 500 g tends to zero as the accuracy of our measuring instruments increases. Nevertheless, in quality control one might demand that the probability of a "500 g" package containing between 490 g and 510 g should be no less than 98%, and this demand is less sensitive to the accuracy of measurement instruments.

Continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. The probability that the possible values lie in some fixed interval can be related to the way sums converge to an integral; therefore, continuous probability is based on the definition of an integral.
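The following is a minimal sketch, not from the article, of how such an integral can be approximated by a sum (a midpoint Riemann sum over the standard normal density):

    import math

    def normal_pdf(x, mu=0.0, sigma=1.0):
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

    def interval_prob(pdf, a, b, n=100_000):
        """Approximate P(a <= X <= b) by a midpoint Riemann sum of the density."""
        h = (b - a) / n
        return sum(pdf(a + (i + 0.5) * h) for i in range(n)) * h

    # About 0.6827 of a standard normal's mass lies within one standard deviation.
    print(interval_prob(normal_pdf, -1.0, 1.0))  # ~0.6827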

On the left is the probability density function. On the right is the cumulative distribution function, which is the area under the probability density curve.

The cumulative distribution function describes the probability that the random variable is no larger than a given value; the probability that the outcome lies in a given interval can be computed by taking the difference between the values of the cumulative distribution function at the endpoints of the interval. The cumulative distribution function is the antiderivative of the probability density function, provided that the latter function exists. The cumulative distribution function is the area under the probability density function from minus infinity to x, as illustrated in the figure above.[4]
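As a hedged sketch (the closed form of the normal cdf via the error function is standard; the interval is an arbitrary choice), interval probabilities follow from differences of cdf values:

    import math

    def normal_cdf(x, mu=0.0, sigma=1.0):
        """Cumulative distribution function of N(mu, sigma^2) via the error function."""
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    # P(a < X <= b) = F(b) - F(a); matches integrating the density over [a, b].
    a, b = -1.0, 1.0
    print(normal_cdf(b) - normal_cdf(a))  # ~0.6827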

The probability density function (pdf) of the normal distribution, also called Gaussian or "bell curve", the most important continuous probability distribution. As noted on the figure, the probabilities of intervals of values correspond to the area under the curve.

Relative frequency distribution: A frequency distribution where each value has been divided (normalized) by the total number of outcomes in the sample, i.e. the sample size.

Discrete probability distribution function: general term to indicate the way the total probability of 1 is distributed over all possible outcomes (i.e. over the entire population) of a discrete random variable.

Functions for continuous variables

Probability density function (PDF): function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample

Basic terms

Mode: for a discrete random variable, the value with highest probability (the location at which the probability mass function has its peak); for a continuous random variable, a location at which the probability density function has a local peak.

Median: the value such that the set of values less than the median, and the set greater than the median, each have probabilities no greater than one-half.

Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution.

Standard deviation: the square root of the variance, and hence another measure of dispersion.

Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value is a mirror image of the portion to its right.

Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean. The third standardized moment of the distribution.

Kurtosis: a measure of the "fatness" of the tails of a pmf or pdf. The fourth standardized moment of the distribution.
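The moment-based terms above can be computed directly from a pmf; the following sketch uses a made-up example distribution:

    import math

    # A made-up pmf for illustration: values and their probabilities.
    pmf = {0: 0.2, 1: 0.5, 2: 0.2, 5: 0.1}

    mean = sum(x * p for x, p in pmf.items())
    variance = sum((x - mean) ** 2 * p for x, p in pmf.items())  # 2nd central moment
    std = math.sqrt(variance)
    skewness = sum(((x - mean) / std) ** 3 * p for x, p in pmf.items())  # 3rd standardized moment
    kurtosis = sum(((x - mean) / std) ** 4 * p for x, p in pmf.items())  # 4th standardized moment
    mode = max(pmf, key=pmf.get)  # outcome with the highest probability

    print(mean, variance, std, skewness, kurtosis, mode)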

Cumulative distribution function

Because a probability distribution P on the real line is determined by the probability of a scalar random variable X being in a half-open interval (−∞, x], the probability distribution is completely characterized by its cumulative distribution function:

F(x) = P(X ≤ x) for all x ∈ ℝ.

Discrete probability distribution

The probability mass function of a discrete probability distribution. The probabilities of the singletons {1}, {3}, and {7} are respectively 0.2, 0.5, 0.3. A set not containing any of these points has probability zero.

The cumulative distribution function of a distribution which has both a continuous part and a discrete part.

A discrete probability distribution is a probability distribution that can take on a countable number of values.[5] For the probabilities to add up to 1, they have to decline to zero fast enough. For example, if P(X = n) = 1/2^n for n = 1, 2, ..., the sum of probabilities would be 1/2 + 1/4 + 1/8 + ... = 1.
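A quick illustrative check (not in the original): the partial sums of these probabilities approach 1 at a geometric rate.

    from fractions import Fraction

    # Partial sums of P(X = n) = 1/2**n approach the total probability 1.
    total = Fraction(0)
    for n in range(1, 21):
        total += Fraction(1, 2**n)
    print(total)         # 1048575/1048576
    print(float(total))  # ~0.9999990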

When a sample (a set of observations) is drawn from a larger population, the sample points have an empirical distribution that is discrete and that provides information about the population distribution.

Measure theoretic formulation

A measurable function X : A → B between a probability space (A, 𝒜, P) and a measurable space (B, ℬ) is called a discrete random variable provided that its image is a countable set. In this case measurability of X means that the pre-images of singleton sets are measurable, i.e., X⁻¹({b}) ∈ 𝒜 for all b ∈ B.
The latter requirement induces a probability mass function f_X : X(A) → ℝ via f_X(b) := P(X⁻¹({b})). Since the pre-images of disjoint sets are disjoint,

∑_{b ∈ X(A)} f_X(b) = P(⋃_{b ∈ X(A)} X⁻¹({b})) = P(A) = 1,

which shows that f_X is indeed a probability mass function.

Cumulative distribution function

Equivalently to the above, a discrete random variable can be defined as a random variable whose cumulative distribution function (cdf) increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant between those jumps. Note however that the points where the cdf jumps may form a dense set of the real numbers. The points where jumps occur are precisely the values which the random variable may take.
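As an illustrative sketch using the three-point pmf from the caption above, the cdf of a discrete random variable can be computed as a cumulative sum, constant between jumps:

    # Three-point pmf from the caption above: P(1)=0.2, P(3)=0.5, P(7)=0.3.
    pmf = {1: 0.2, 3: 0.5, 7: 0.3}

    def cdf(x):
        """P(X <= x): constant between jumps, jumping by pmf[u] at each value u."""
        return sum(p for u, p in pmf.items() if u <= x)

    print(cdf(0.9), cdf(1), cdf(5), cdf(7))  # 0.0 0.2 0.7 1.0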

Delta-function representation

Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part.[6]
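In this representation (the standard form of such generalized densities; the values u_i and probabilities p_i are as in the indicator-function passage below), the density is written as

f(x) = ∑_i p_i δ(x − u_i),  where p_i = P(X = u_i),

so that integrating f over an interval recovers the sum of the probabilities of the discrete values lying in that interval.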

Indicator-function representation

For a discrete random variable X, let u_0, u_1, ... be the values it can take with non-zero probability. Denote

Ω_i = X⁻¹(u_i) = {ω : X(ω) = u_i},  i = 0, 1, 2, ...

These are disjoint sets, and the probability that X takes any value other than u_0, u_1, ... is zero; hence one can write, except on a set of probability zero,

X(ω) = ∑_i u_i 1_{Ω_i}(ω),

where 1_A denotes the indicator function of the set A.

Continuous probability distribution

A continuous (more precisely, absolutely continuous) random variable X is described by a probability density function f, with P(a ≤ X ≤ b) given by the integral of f from a to b. In particular, the probability for X to take any single value a (that is, a ≤ X ≤ a) is zero, because an integral with coinciding upper and lower limits is always equal to zero.

Note on terminology: some authors use the term "continuous distribution" to denote distributions whose cumulative distribution functions are continuous, rather than absolutely continuous. These distributions are the ones μ such that μ({x}) = 0 for all x. This definition includes the (absolutely) continuous distributions defined above, but it also includes singular distributions, which are neither absolutely continuous nor discrete nor a mixture of those, and do not have a density. An example is given by the Cantor distribution.

Some properties

The probability distribution of the sum of two independent random variables is the convolution of each of their distributions.
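As a hedged sketch (the dice example is mine, not the article's): for discrete distributions the convolution reduces to a sum, and convolving the pmf of one fair die with itself gives the two-dice pmf from the introduction.

    from fractions import Fraction

    def convolve_pmf(p, q):
        """pmf of X + Y for independent discrete X ~ p and Y ~ q."""
        out = {}
        for x, px in p.items():
            for y, qy in q.items():
                out[x + y] = out.get(x + y, 0) + px * qy
        return out

    die = {k: Fraction(1, 6) for k in range(1, 7)}
    two_dice = convolve_pmf(die, die)
    print(two_dice[11])  # 1/18, matching the introduction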

Random number generation

Most algorithms are based on a pseudorandom number generator that produces numbers X that are uniformly distributed in the half-open interval [0, 1). These random variates X are then transformed via some algorithm to create a new random variate having the required probability distribution. With this source of uniform pseudo-randomness, realizations of any random variable can be generated.[10]

For example, suppose U has a uniform distribution between 0 and 1. To construct a random Bernoulli variable for some 0 < p < 1, we define

X = 1 if U < p, and X = 0 if U ≥ p.

This random variable X has a Bernoulli distribution with parameter p.[10] Note that this construction transforms a continuous random variable into a discrete one.
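A minimal sketch of this construction (the function name and sample size are illustrative):

    import random

    def bernoulli(p):
        """Transform a uniform variate on [0, 1) into a Bernoulli(p) variate."""
        u = random.random()  # U ~ Uniform[0, 1)
        return 1 if u < p else 0

    # Empirical check: the sample mean should be close to p.
    p = 0.3
    samples = [bernoulli(p) for _ in range(100_000)]
    print(sum(samples) / len(samples))  # ~0.3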

For a distribution function F of a continuous random variable, a continuous random variable with that distribution must be constructed. F^{inv}, an inverse function of F, relates to the uniform variable U through the event identity

{U ≤ F(x)} = {F^{inv}(U) ≤ x},

so that X = F^{inv}(U) has F as its distribution function.

For example, suppose a random variable that has an exponential distribution F(x) = 1 − e^{−λx} must be constructed. Solving u = F(x) = 1 − e^{−λx} for x gives

F^{inv}(u) = −(1/λ) ln(1 − u),

and if U has a U(0, 1) distribution, then the random variable X defined by X = F^{inv}(U) = −(1/λ) ln(1 − U) has an exponential distribution with rate parameter λ.[10]
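A short sketch of this inverse-transform step (the sample size and rate are illustrative choices of mine):

    import math
    import random

    def exponential(lam):
        """Inverse-transform sampling: X = -ln(1 - U)/lam is Exponential(lam)."""
        u = random.random()  # U ~ Uniform[0, 1)
        return -math.log(1.0 - u) / lam

    lam = 2.0
    samples = [exponential(lam) for _ in range(100_000)]
    print(sum(samples) / len(samples))  # ~0.5, the mean 1/lam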

Common probability distributions and their applications

The concepts of probability distributions and the random variables they describe underlie the mathematical discipline of probability theory and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate.

The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, continuous, multivariate, etc.).

All of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution.

Linear growth (e.g. errors, offsets)

Normal distribution (Gaussian distribution), for a single such quantity; the most commonly used continuous distribution

Absolute values of vectors with normally distributed components

Rayleigh distribution, for the distribution of vector magnitudes with Gaussian distributed orthogonal components. Rayleigh distributions are found in RF signals with Gaussian real and imaginary components.

Rice distribution, a generalization of the Rayleigh distribution for the case where there is a stationary background signal component. Found in Rician fading of radio signals due to multipath propagation and in MR images with noise corruption on non-zero NMR signals.

In quantum mechanics, the probability density of finding the particle at a given point is proportional to the square of the magnitude of the particle's wavefunction at that point (see Born rule). Therefore, the probability distribution function of the position of a particle is described by

P_{a ≤ x ≤ b}(t) = ∫_a^b |Ψ(x, t)|² dx,

the probability that the particle's position x will be in the interval a ≤ x ≤ b in dimension one, and by a similar triple integral in dimension three. This is a key principle of quantum mechanics.[12]
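As a hedged illustration using the textbook particle-in-a-box ground state (an example of mine, not from the article), integrating |Ψ|² over an interval yields the probability of finding the particle there:

    import math

    L = 1.0  # box width (illustrative)

    def psi(x):
        """Ground-state wavefunction of a particle in a 1-D infinite well of width L."""
        return math.sqrt(2.0 / L) * math.sin(math.pi * x / L)

    def position_prob(a, b, n=100_000):
        """P(a <= x <= b) = integral of |psi(x)|^2 over [a, b] (midpoint rule)."""
        h = (b - a) / n
        return sum(abs(psi(a + (i + 0.5) * h)) ** 2 for i in range(n)) * h

    print(position_prob(0.0, L))              # ~1.0: the particle is somewhere in the box
    print(position_prob(0.25 * L, 0.75 * L))  # ~0.818: the central half is more likely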

Probabilistic load flow in power-flow studies treats the uncertainties of input variables as probability distributions and expresses the power flow calculation itself in terms of probability distributions.[13]