The concept of independence extends to collections of more than two events or random variables. In that case the events are pairwise independent if every pair of them is independent, and mutually independent if every event is independent of every combination of the other events.

Two events A and B are independent (often written as A ⊥ B or A ⫫ B) if and only if their joint probability equals the product of their probabilities:

P(A ∩ B) = P(A) P(B).

Equivalently, when P(B) > 0 this condition can be rewritten in terms of conditional probability as P(A | B) = P(A), and when P(A) > 0 as P(B | A) = P(B).

Thus, the occurrence of B does not affect the probability of A, and vice versa. Although the derived conditional expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P(A) or P(B) is 0. Furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A.
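
As a concrete illustration, the multiplication condition can be verified by brute-force enumeration on a finite sample space. The following is a minimal sketch, assuming two fair dice and the illustrative events "first roll is even" and "second roll is at least 5"; the names omega, prob, A and B are ad hoc to this sketch.

```python
from fractions import Fraction
from itertools import product

# Two fair dice (assumed example), with exact arithmetic via fractions.
omega = list(product(range(1, 7), repeat=2))           # all 36 ordered outcomes
prob = lambda event: Fraction(len(event), len(omega))  # uniform probability measure

A = {w for w in omega if w[0] % 2 == 0}   # first roll is even
B = {w for w in omega if w[1] > 4}        # second roll is 5 or 6

print(prob(A & B) == prob(A) * prob(B))   # True: 1/6 == 1/2 * 1/3
print(prob(A & B) / prob(B) == prob(A))   # True: the derived form P(A | B) = P(A)
```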

A finite set of events is mutually independent if every event is independent of any intersection of the other events[2]; that is, if and only if for every k-element subset {A_{i_1}, …, A_{i_k}} of {A_i}_{i=1}^n (with k ≤ n),

P(A_{i_1} ∩ A_{i_2} ∩ ⋯ ∩ A_{i_k}) = P(A_{i_1}) P(A_{i_2}) ⋯ P(A_{i_k}).

This is called the multiplication rule for independent events. Note that it is not a single condition involving only the product of all the probabilities of all single events (see below for a counterexample); it must hold true for all subsets of events.
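
The subset-by-subset nature of the rule translates directly into a check over all subsets of two or more events. A minimal sketch follows, assuming a toy space of three fair coin flips; the helper names are ad hoc.

```python
from fractions import Fraction
from itertools import combinations, product

# Toy space: three fair coin flips, 8 equally likely outcomes.
omega = set(product("HT", repeat=3))
prob = lambda event: Fraction(len(event), len(omega))

def mutually_independent(events):
    """The multiplication rule must hold for every subset of two or more events."""
    for k in range(2, len(events) + 1):
        for subset in combinations(events, k):
            rhs = Fraction(1)
            for e in subset:
                rhs *= prob(e)
            if prob(set.intersection(*subset)) != rhs:
                return False
    return True

heads = [{w for w in omega if w[i] == "H"} for i in range(3)]  # "flip i is heads"
print(mutually_independent(heads))  # True: the three flips are mutually independent
```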

For more than two events, a mutually independent set of events is (by definition) pairwise independent; but the converse is not necessarily true (see below for a counterexample).

Two random variables X and Y are independent if and only if (iff) the elements of the π-system generated by them are independent; that is to say, for every a and b, the events {X ≤ a} and {Y ≤ b} are independent events (as defined above). That is, X and Y with cumulative distribution functions F_X(x) and F_Y(y), and probability densities f_X(x) and f_Y(y), are independent iff the combined random variable (X, Y) has a joint cumulative distribution function

F_{X,Y}(x, y) = F_X(x) F_Y(y) for all x and y,

or equivalently, if the densities exist, a joint probability density function

f_{X,Y}(x, y) = f_X(x) f_Y(y) for all x and y.
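
For discrete random variables the CDF factorization can be checked directly at the support points. A minimal sketch, assuming X and Y are the two rolls of a pair of fair dice; the names are illustrative only.

```python
from fractions import Fraction
from itertools import product

# X is the first roll, Y the second roll of two fair dice (assumed example).
omega = list(product(range(1, 7), repeat=2))
prob = lambda event: Fraction(len(event), len(omega))

F_joint = lambda a, b: prob({w for w in omega if w[0] <= a and w[1] <= b})  # P(X <= a, Y <= b)
F_X = lambda a: prob({w for w in omega if w[0] <= a})                       # P(X <= a)
F_Y = lambda b: prob({w for w in omega if w[1] <= b})                       # P(Y <= b)

# For these discrete variables it suffices to check the factorization on the support grid.
print(all(F_joint(a, b) == F_X(a) * F_Y(b)
          for a in range(1, 7) for b in range(1, 7)))  # True
```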

A set of random variables is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent as defined next.

A set of random variables is mutually independent if and only if for any finite subset X_1, …, X_n and any finite sequence of numbers a_1, …, a_n, the events {X_1 ≤ a_1}, …, {X_n ≤ a_n} are mutually independent events (as defined above).

The measure-theoretically inclined may prefer to substitute events {X ∈ A} for events {X ≤ a} in the above definition, where A is any Borel set. That definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space (which includes topological spaces endowed with appropriate σ-algebras).

Intuitively, two random variables X and Y are conditionally independent given Z if, once Z is known, the value of Y provides no additional information about X. For instance, two measurements X and Y of the same underlying quantity Z are not independent, but they are conditionally independent given Z (unless the errors in the two measurements are somehow connected).

Formally, X and Y are conditionally independent given Z if and only if

P(X ≤ x, Y ≤ y | Z = z) = P(X ≤ x | Z = z) P(Y ≤ y | Z = z)

for any x, y and z with P(Z = z) > 0. That is, the conditional distribution for X given Y and Z is the same as that given Z alone. A similar equation holds for the conditional probability density functions in the continuous case.
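
The measurement intuition above can be checked by enumeration on a small discrete model. The sketch below is an assumed toy construction (a fair bit Z with two independent noisy readings X and Y, each correct with probability 3/4); all names and the 3/4 accuracy are choices of this sketch, not part of the definition.

```python
from fractions import Fraction
from itertools import product

# Toy model: Z is a fair bit; X and Y are independent noisy readings of Z.
joint = {}                                    # joint[(x, y, z)] -> probability
for z, ex, ey in product([0, 1], repeat=3):   # ex, ey are independent error flags
    p = (Fraction(1, 2)
         * (Fraction(3, 4) if ex == 0 else Fraction(1, 4))
         * (Fraction(3, 4) if ey == 0 else Fraction(1, 4)))
    x, y = z ^ ex, z ^ ey                     # reading = true value XOR error
    joint[(x, y, z)] = joint.get((x, y, z), Fraction(0)) + p

P = lambda pred: sum(p for (x, y, z), p in joint.items() if pred(x, y, z))

# Marginally, X and Y are dependent:
print(P(lambda x, y, z: x == 1 and y == 1) ==
      P(lambda x, y, z: x == 1) * P(lambda x, y, z: y == 1))    # False

# But they are conditionally independent given Z:
ok = True
for zv, xv, yv in product([0, 1], repeat=3):
    pz = P(lambda x, y, z: z == zv)
    lhs = P(lambda x, y, z: (x, y, z) == (xv, yv, zv)) / pz
    rhs = (P(lambda x, y, z: (x, z) == (xv, zv)) / pz) * (P(lambda x, y, z: (y, z) == (yv, zv)) / pz)
    ok = ok and (lhs == rhs)
print(ok)                                                        # True
```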

Independence can be seen as a special kind of conditional independence, since probability can be seen as a kind of conditional probability given no events.

The definitions above are both generalized by the following definition of independence for σ-algebras. Let (Ω, Σ, P) be a probability space and let 𝒜 and ℬ be two sub-σ-algebras of Σ. 𝒜 and ℬ are said to be independent if, whenever A ∈ 𝒜 and B ∈ ℬ,

P(A ∩ B) = P(A) P(B).

Likewise, a finite family of σ-algebras is said to be independent if the analogous product rule holds for every choice of one event from each σ-algebra, and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent.

The new definition relates to the previous ones very directly:

Two events are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by an event E ∈ Σ is, by definition,

σ(E) = {∅, E, Ω ∖ E, Ω}.
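
On a finite space this equivalence is easy to verify mechanically: the generated σ-algebra has just four elements, and independence of the σ-algebras is the product rule over all pairs. A minimal sketch, assuming two fair dice and ad hoc helper names.

```python
from fractions import Fraction
from itertools import product

omega = frozenset(product(range(1, 7), repeat=2))      # two fair dice (assumed example)
prob = lambda event: Fraction(len(event), len(omega))

sigma = lambda E: {frozenset(), E, omega - E, omega}   # sigma-algebra generated by E

def independent_algebras(A_alg, B_alg):
    """Product rule for every pair A in A_alg, B in B_alg."""
    return all(prob(A & B) == prob(A) * prob(B) for A in A_alg for B in B_alg)

E = frozenset(w for w in omega if w[0] == 6)   # first roll is a 6
F = frozenset(w for w in omega if w[1] == 6)   # second roll is a 6

print(prob(E & F) == prob(E) * prob(F))           # True: the events are independent
print(independent_algebras(sigma(E), sigma(F)))   # True: so are their sigma-algebras
```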

Two random variables X and Y defined over Ω are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by a random variable X taking values in some measurable space S consists, by definition, of all subsets of Ω of the form X^{-1}(U), where U is any measurable subset of S.
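
On a finite space the generated σ-algebra can likewise be enumerated as the preimages X^{-1}(U) over all subsets U of the (finite) range. The sketch below assumes two fair dice and two illustrative random variables X and Y of its own choosing.

```python
from fractions import Fraction
from itertools import chain, combinations, product

omega = frozenset(product(range(1, 7), repeat=2))      # two fair dice (assumed example)
prob = lambda event: Fraction(len(event), len(omega))

def sigma_of(rv):
    """Sigma-algebra generated by rv on a finite space: preimages of subsets of its range."""
    values = sorted({rv(w) for w in omega})
    subsets = chain.from_iterable(combinations(values, k) for k in range(len(values) + 1))
    return {frozenset(w for w in omega if rv(w) in U) for U in subsets}

X = lambda w: w[0] % 2       # parity of the first roll
Y = lambda w: min(w[1], 3)   # second roll, capped at 3

print(all(prob(A & B) == prob(A) * prob(B)
          for A in sigma_of(X) for B in sigma_of(Y)))  # True: X and Y are independent
```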

Using this definition, it is easy to show that if X and Y are random variables and Y is constant, then X and Y are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra {∅, Ω}. Probability-zero events cannot affect independence, so independence also holds if Y is only Pr-almost surely constant.

The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent. By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second rolls is 8 are not independent.
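
Both claims can be confirmed by the same kind of enumeration used above, assuming two fair dice:

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))           # two fair dice, ordered outcomes
prob = lambda event: Fraction(len(event), len(omega))

six_first  = {w for w in omega if w[0] == 6}
six_second = {w for w in omega if w[1] == 6}
sum_eight  = {w for w in omega if w[0] + w[1] == 8}

print(prob(six_first & six_second) == prob(six_first) * prob(six_second))  # True:  1/36 == 1/6 * 1/6
print(prob(six_first & sum_eight)  == prob(six_first) * prob(sum_eight))   # False: 1/36 != 1/6 * 5/36
```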

If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent. By contrast, if two cards are drawn without replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are not independent, because a deck that has had a red card removed has proportionately fewer red cards.
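
The arithmetic behind the card example, assuming a standard 52-card deck with 26 red cards:

```python
from fractions import Fraction

p_red = Fraction(26, 52)                           # probability of red on any single draw

p_red_second_with_replacement = Fraction(26, 52)   # deck restored: unchanged
p_red_second_given_red_without = Fraction(25, 51)  # one red card already removed

print(p_red_second_with_replacement == p_red)      # True  -> independent
print(p_red_second_given_red_without == p_red)     # False -> not independent (25/51 < 1/2)
```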

Consider two probability spaces, in both of which P(A) = P(B) = 1/2 and P(C) = 1/4. The events in the first space are pairwise independent because P(A|B) = P(A|C) = 1/2 = P(A), P(B|A) = P(B|C) = 1/2 = P(B), and P(C|A) = P(C|B) = 1/4 = P(C); but the three events are not mutually independent. The events in the second space are both pairwise independent and mutually independent. To illustrate the difference, consider conditioning on two events. In the pairwise independent case, although any one event is independent of each of the other two individually, it is not independent of the intersection of the other two: conditioning on that intersection changes its probability. In the mutually independent case it does not.

It is also possible to construct a three-event example in which P(A ∩ B ∩ C) = P(A) P(B) P(C) and yet no two of the three events are pairwise independent (and hence the set of events is not mutually independent).[4] This example shows that mutual independence involves requirements on the products of probabilities of all combinations of events, not just the single events as in this example.
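
To make the first kind of failure concrete, the following is a standard construction (often attributed to Bernstein, and not necessarily the example referred to above or in [4]) of three events that are pairwise independent but not mutually independent, assuming two fair coin flips:

```python
from fractions import Fraction
from itertools import combinations, product

# Two fair coin flips; A, B, C are pairwise but not mutually independent.
omega = set(product("HT", repeat=2))
prob = lambda event: Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == "H"}    # first flip is heads
B = {w for w in omega if w[1] == "H"}    # second flip is heads
C = {w for w in omega if w[0] == w[1]}   # the two flips agree

# Every pair satisfies the product rule ...
print(all(prob(X & Y) == prob(X) * prob(Y) for X, Y in combinations([A, B, C], 2)))  # True
# ... but the triple does not, so the events are not mutually independent.
print(prob(A & B & C), prob(A) * prob(B) * prob(C))   # 1/4 versus 1/8
```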