Some Basic Theory for Statistical Inference

London: Chapman and Hall, 1979
"Monographs on Applied Probability and Statistics" series

Intermediate Statistics from an Advanced Point of View

This little book has its origins in a feeling of dissatisfaction with the
literature on theoretical statistics. Pitman felt, it seems, that even the
best of it was just too messy and jerry-rigged, too kludgy, to meet the
standards of the rest of applied mathematics, of which it is a part.

This book is an attempt to present some of the basic mathematical
results required for statistical inference with some elegance as well as
precision, and at a level which will make it readable by most students of
statistics.

It is a largely successful attempt. In somewhat less than a hundred pages
of main text, Pitman covers many of the key issues of the theory of statistical
inference: using parametric probability distributions to model real-world
phenomena, using data to learn about the parameters of those models, gauging
the ability of different statistics to support that learning, estimation of
parameters, hypothesis testing and its efficiency, maximum likelihood
estimation, and the convergence of empirical distribution functions.

Throughout, Pitman is generally clear, usually giving short, direct proofs
of theorems with straightforward hypotheses. (I intend to steal his proof of
the Cramér-Rao inequality when I teach that.)
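To fix ideas, here is the inequality in its standard scalar form, for an
unbiased estimator T of \theta based on data X with likelihood f(X;\theta),
under the usual regularity conditions; this is the textbook statement, not
necessarily Pitman's exact formulation:

\[
\mathrm{Var}_\theta(T) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) \;=\;
\mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}
\log f(X;\theta)\right)^{2}\right].
\]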
He makes abundant and ingenious use of the Hellinger metric for distances
between probability distributions (though he doesn't call it that).
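For readers who haven't met it, one common normalization of the squared
Hellinger distance between distributions P and Q, with densities p and q
relative to a common dominating measure \mu, is the following (conventions
about the factor of 1/2 vary, and I am not asserting this matches Pitman's):

\[
H^2(P,Q) \;=\; \frac{1}{2}\int\left(\sqrt{p}-\sqrt{q}\right)^2 d\mu
\;=\; 1 - \int\sqrt{p\,q}\;d\mu .
\]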
He also has some harsh words, in several places, for the Fisher information,
with examples of how it doesn't always behave in the way the word
"information" leads one to expect. (He thinks it should be called
"sensitivity".)
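One standard illustration of the point, my example rather than one of his:
the Fisher information I(\theta) defined above is not invariant under
reparametrization. If \eta = g(\theta) for a smooth, invertible g, then

\[
I_\eta(\eta) \;=\; \frac{I_\theta(\theta)}{g'(\theta)^2} ,
\]

so it rescales like a squared derivative, that is, like a sensitivity to
changes in the parameter, not like an amount of anything.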
Two weaknesses did strike me, however. (Assuming all samples are independent
is a weakness, but so common as to not be striking.) One is that he often
silently uses a curious, non-standard version of the dominated
convergence theorem, which he states in an appendix; this repeatedly had me
going "huh?" in mid-proof. The other is the chapter on the convergence of
empirical distribution functions, and Kolmogorov-Smirnov-type tests. There he
avoids "crossing the Brownian bridge", that is, using advanced probability (the
functional central limit theorem, empirical
processes, etc.), by forcing readers to trudge through opaque, multi-page
combinatorial calculations.
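For contrast, here is the one-line payoff of the route he avoids, in its
standard form (this is textbook empirical-process material, not Pitman's
development): for i.i.d. draws with continuous distribution function F and
empirical distribution function F_n,

\[
\sqrt{n}\,\sup_x\left|F_n(x)-F(x)\right|
\;\xrightarrow{\;d\;}\;
\sup_{0\le t\le 1}\left|B(t)\right| ,
\]

where B is a standard Brownian bridge, and the limit has the Kolmogorov
distribution,
\Pr\left(\sup_t|B(t)|\le x\right)
= 1 - 2\sum_{k=1}^{\infty}(-1)^{k-1}e^{-2k^2x^2}.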

Pitman presumes his readers can find their way around in theoretical
statistics and measure-theoretic probability, sort of, and wants to show them
how different parts of the territory fit together, why the highways go where
they do, and why they can't always just take them. It's strongly recommended
if any of this sounds at all interesting.