What is the point of $\pi$-systems and $\mathcal{D}$ / Dynkin / $\lambda$-systems?

I am an analyst in the process of consolidating my measure theory knowledge before moving on to harder/newer things, having first been introduced to measure theory in a course taught from a probability viewpoint rather than an analysis one. So far, everything I've needed from elementary measure theory for analysis can be done (and is done in all of my analysis textbooks) without mention of the $\pi$-systems and $\mathcal{D}$-systems that were used in my first course. Do these set systems belong strictly to probability and not to analysis? Heuristically, are they useful or important in any way? Why?

These are incredibly useful in the strangest of places. On my measure theory midterm last year, there was a problem that was harder than our professor intended it to be. The problem came down to some nasty analysis where you had to come up with $n$ separate bounds, each controlled by another bound. However, a friend of mine told us later that he had come up with a beautiful, concise proof using Dynkin systems (which was confirmed when he received full marks).
– Harry Gindi, Jul 17 '10 at 15:44


I also am puzzled, Spencer. When I teach Real Analysis, I use the $\pi$-$\lambda$ theorem even though the book we use does not mention it.
– Bill Johnson, Jul 17 '10 at 16:53

I think this is nothing but a historical accident. I believe that Dynkin, a probabilist, proved the $\pi$-$\lambda$ theorem after the modern foundations of measure theory had basically been set up. Probabilists have since used it in writing their textbooks, which are usually (at least presented as) books on probability as opposed to analysis. Analysis textbooks, on the other hand, are mostly written by functional analysts, harmonic analysts, PDEists, etc., who may never have opened a probability textbook, and so $\pi$-$\lambda$ systems have been slow to cross the divide.
– Mark Meckes, Jul 20 '10 at 17:57

I seem to recall, in fact, hearing someone say he'd taken measure theory from Dynkin as an undergraduate, then took it again without the $\pi$-$\lambda$ theorem in grad school, and was surprised at how much more complicated it seemed the second time.
– Mark Meckes, Jul 20 '10 at 17:59

2 Answers

I'm not sure that $\pi$-systems and $\lambda$-systems are important objects in their own right, not in the same way that $\sigma$-algebras are. I think they're convenient names attached to two sets of technical conditions that appear in Dynkin's theorem.
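For the record, the two sets of conditions and the theorem they feed into can be stated as follows (in the standard form found in most probability texts):

```latex
% Definitions and Dynkin's pi-lambda theorem.
% Let $\Omega$ be a set and let $\mathcal{P}, \mathcal{L}$ be collections
% of subsets of $\Omega$.
\begin{itemize}
  \item $\mathcal{P}$ is a \emph{$\pi$-system} if it is closed under finite
        intersections: $A, B \in \mathcal{P} \implies A \cap B \in \mathcal{P}$.
  \item $\mathcal{L}$ is a \emph{$\lambda$-system} (Dynkin system) if
        $\Omega \in \mathcal{L}$; if $A, B \in \mathcal{L}$ with
        $A \subseteq B$ implies $B \setminus A \in \mathcal{L}$; and if
        $A_n \uparrow A$ with each $A_n \in \mathcal{L}$ implies
        $A \in \mathcal{L}$.
\end{itemize}
\textbf{Theorem (Dynkin).} If $\mathcal{P}$ is a $\pi$-system,
$\mathcal{L}$ is a $\lambda$-system, and $\mathcal{P} \subseteq \mathcal{L}$,
then $\sigma(\mathcal{P}) \subseteq \mathcal{L}$.
```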

The theorem itself, though, is a huge convenience. It's properly a theorem of measure theory (measurable theory, if you want to be pedantic, since it doesn't have any measures in its statement), and so it belongs to both probability and analysis. It does seem to be more widely used in probability, most likely because Dynkin himself was a probabilist, and some popular books from the Cornell probability school use it, such as Durrett and Resnick. But it's also very useful in analysis, especially in the functional form cited by Peter (hi!). For instance, lots of approximation theorems about things being dense in $L^p$ spaces can be obtained from it.

My guess is that they are more useful in probability than in analysis. Many people have the impression that probability is just analysis on spaces of measure 1. However, this is not exactly true. One way to tell analysts and probabilists apart: ask them if they care about independence of their functions.

Suppose that $\mathcal{F}_1,\mathcal{F}_2,\dots,\mathcal{F}_n$ are families of subsets of some space $\Omega$. Suppose further that for any choice of $A_i\in \mathcal{F}_i$ we know that $P(A_1\cap A_2 \cap \dots \cap A_n)=P(A_1)P(A_2)\cdots P(A_n)$. Does it follow that the $\sigma(\mathcal{F}_i)$ are independent? No. But if the $\mathcal{F}_i$ are $\pi$-systems, then the answer is yes.
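On a finite toy space this implication can even be checked mechanically. Here is a minimal Python sketch (the names are my own, and the brute-force $\sigma$-algebra closure is only feasible because $\Omega$ is finite):

```python
import itertools
from fractions import Fraction

# Toy space: two fair coin flips, Omega = {0,1}^2 with uniform probability.
omega = list(itertools.product([0, 1], repeat=2))

def prob(A):
    return Fraction(len(A), len(omega))

def generated_sigma_algebra(family):
    """Brute-force sigma(family): on a finite Omega, close under
    complement, union, and intersection until nothing new appears."""
    sets = {frozenset(A) for A in family} | {frozenset(omega), frozenset()}
    changed = True
    while changed:
        changed = False
        for A, B in itertools.product(list(sets), repeat=2):
            for C in (frozenset(omega) - A, A | B, A & B):
                if C not in sets:
                    sets.add(C)
                    changed = True
    return sets

# pi-systems (each has a single set, so trivially intersection-closed):
F1 = [frozenset(w for w in omega if w[0] == 1)]  # "first flip is heads"
F2 = [frozenset(w for w in omega if w[1] == 1)]  # "second flip is heads"

# Independence checked only on the pi-systems...
assert all(prob(A & B) == prob(A) * prob(B) for A in F1 for B in F2)

# ...propagates to the generated sigma-algebras, as the pi-lambda
# argument predicts:
S1 = generated_sigma_algebra(F1)
S2 = generated_sigma_algebra(F2)
assert all(prob(A & B) == prob(A) * prob(B) for A in S1 for B in S2)
print(len(S1), len(S2))  # each is the 4-element sigma-algebra of one flip
```

Of course, the theorem is exactly what lets you skip this exhaustive check in the infinite case.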

When proving the uniqueness of the product measure for $\sigma$-finite measure spaces, one can use the $\pi$-$\lambda$ lemma, though I think there is a way to avoid it (I believe Bartle avoids it, for instance). However, do you know of a text which avoids using the monotone class theorem for Fubini's theorem? This, to me, has a similar feel to the $\pi$-$\lambda$ lemma. Stein and Shakarchi might avoid it, but as I recall their proof was fairly arduous.
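For concreteness, the standard $\pi$-$\lambda$ argument for uniqueness runs as follows (a sketch, for finite measures; the symbols are my own):

```latex
% Sketch: uniqueness of measures agreeing on a generating pi-system.
% Let $\mu, \nu$ be finite measures on $\sigma(\mathcal{P})$ with
% $\mu(\Omega) = \nu(\Omega)$, agreeing on a $\pi$-system $\mathcal{P}$.
% Consider
\[
  \mathcal{D} = \{\, A \in \sigma(\mathcal{P}) : \mu(A) = \nu(A) \,\}.
\]
% Then $\mathcal{D}$ is a $\lambda$-system containing $\mathcal{P}$:
% finite additivity gives closure under proper differences, and
% continuity from below gives closure under increasing limits.
% By the $\pi$-$\lambda$ theorem, $\sigma(\mathcal{P}) \subseteq
% \mathcal{D}$, i.e.\ $\mu = \nu$ on $\sigma(\mathcal{P})$.
% The $\sigma$-finite case follows by exhausting $\Omega$ with sets
% of finite measure lying in $\mathcal{P}$.
```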

Here is a direct consequence of the $\pi$-$\lambda$ lemma when you work on probability spaces:

Let a linear space $H$ of bounded functions contain $1$ and be closed under bounded convergence. If $H$ contains a multiplicative family $Q$, then it contains all bounded functions measurable with respect to the $\sigma$-algebra generated by $Q$.

Why is this useful? Suppose that I want to check that some property $P$ holds for all bounded, measurable functions. Then I only need to check three things:

If $P$ holds for $f$ and $g$, then $P$ holds for $f+g$.

If $P$ holds for a bounded, convergent sequence $f_n$, then $P$ holds for $\lim f_n$.