Data mining is the art of extracting useful patterns from large bodies of
data; finding seams of actionable knowledge in the raw ore of information. The
rapid growth of computerized data, and the computer power available to analyze
it, creates great opportunities for data mining in business, medicine, science,
government, etc. The aim of this course is to help you take advantage of these
opportunities in a responsible way. After taking the class, when you're faced
with a new problem, you should be able to (1) select appropriate methods, and
justify their choice, (2) use and program statistical software to implement
them, and (3) critically evaluate the results and communicate them to
colleagues in business, science, etc.

Data mining is related to statistics and to machine learning, but has its own
aims and scope. Statistics is a mathematical science, studying how reliable
inferences can be drawn from imperfect data. Machine learning is a branch of
engineering, developing a technology of automated induction. We will freely
use tools from statistics and from machine learning, but we will use them as
tools, not things to study in their own right. We will do a lot of
calculations, but will not prove many theorems, and we will do even more
experiments than calculations.

Outline, Notes, Readings

This is a rough outline of the material; details may change depending on
time and class interests.

Lecture Notes and Supplementary Readings

Note: Solutions are no longer available online, since there have
been too many instances of their being turned in as original work. I regret the inconvenience this causes those wanting to use the notes for self-study.

Page Rank (31 August). Links as
pre-existing feedback. How to exploit link information? The random walk on
the graph; using the ergodic theorem. Eigenvector formulation of page-rank.
Combining page-rank with textual features. Other applications. Further
reading on information retrieval.
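
Not the lecture's own code, but a minimal sketch of the power-iteration view of PageRank in R, on a made-up four-page web (the function, the damping-factor default, and the toy adjacency matrix are illustrative assumptions, not from the notes):

    # PageRank by power iteration (sketch).  A[i,j] = 1 if page i links to
    # page j; d is the damping ("teleportation") factor.
    pagerank <- function(A, d = 0.85, tol = 1e-10) {
      n <- nrow(A)
      out.deg <- rowSums(A)
      P <- A / pmax(out.deg, 1)      # row-normalize: random-walk transitions
      P[out.deg == 0, ] <- 1/n       # dangling pages jump uniformly
      r <- rep(1/n, n)               # start from the uniform distribution
      repeat {
        r.new <- d * as.vector(r %*% P) + (1 - d)/n
        if (sum(abs(r.new - r)) < tol) return(r.new)
        r <- r.new
      }
    }
    A <- rbind(c(0,1,1,0), c(0,0,1,0), c(1,0,0,1), c(0,0,1,0))
    pagerank(A)    # page 3, with the most in-links, comes out on top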

Image Search, Abstraction and
Invariance (2 September). Similarity search for images. Back to
representation design. The advantages of abstraction: simplification,
recycling. The bag-of-colors representation. Examples. Invariants.
Searching for images by searching text. An example in practice.
Slides for this lecture.
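
A minimal sketch of the bag-of-colors idea in R, assuming the image has already been read in as an n-pixels-by-3 matrix of RGB values in [0,1] (reading the file depends on your image library; the function and the number of bins are illustrative):

    # Bag of colors (sketch): quantize each channel, count pixels per color bin
    bag.of.colors <- function(pixels, levels = 4) {
      q <- pmin(floor(pixels * levels), levels - 1)        # channel value -> bin
      bin <- q[, 1] * levels^2 + q[, 2] * levels + q[, 3]  # one integer per color
      counts <- tabulate(bin + 1, nbins = levels^3)
      counts / sum(counts)     # normalize, so image size doesn't matter
    }
    pixels <- matrix(runif(3000), ncol = 3)  # stand-in for a real 1000-pixel image
    length(bag.of.colors(pixels))            # a 64-dimensional representation

Note what the abstraction throws away: all spatial arrangement. Two images with the same colors in different places get identical vectors.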

Information Theory I (4
September). Good features help us guess what we can't represent. Good
features discriminate between different values of unobserved variables.
Quantifying uncertainty with entropy. Quantifying reduction in uncertainty/
discrimination with mutual information. Ranking features based on mutual
information. Examples, with code, of informative words for
the Times. Code.
Supplementary reading: David
P. Feldman, Brief Tutorial on
Information Theory, chapter 1
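
The linked code computes these quantities for real Times stories; here is a free-standing sketch of entropy and mutual information on toy data (the variable names and data are mine):

    # Entropy of a discrete distribution, in bits (sketch, not the lecture's code)
    entropy <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }
    # Mutual information I[X;Y] = H[X] + H[Y] - H[X,Y], from raw observations
    mutual.info <- function(x, y) {
      joint <- table(x, y) / length(x)
      entropy(rowSums(joint)) + entropy(colSums(joint)) - entropy(as.vector(joint))
    }
    # Toy example: a word much more common in one class of documents
    set.seed(1)
    class <- rep(c("art", "music"), each = 50)
    word <- c(rbinom(50, 1, 0.8), rbinom(50, 1, 0.2))
    mutual.info(word, class)     # well above zero: an informative feature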

Clustering II (14 September).
Distances between partitions; variation-of-information distance.
Hierarchical clustering by agglomeration and its varieties. Picking the
number of clusters by merging costs. Performance of different clustering
methods on various doodles. Why we would like to pick the number of clusters
by predictive performance, and why it is hard to do at this stage. Reifying clusters.
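
Base R's hclust does agglomerative clustering; a sketch of the merging-cost heuristic on made-up data (the data, the linkage choice, and the cut rule's details are illustrative):

    set.seed(2)
    x <- rbind(matrix(rnorm(40), ncol = 2),            # one blob at the origin,
               matrix(rnorm(40, mean = 4), ncol = 2))  # another at (4,4)
    tree <- hclust(dist(x), method = "complete")   # try also "single", "average"
    plot(tree)                                     # the dendrogram
    # Merging costs are the successive heights; cut before the biggest jump
    biggest.jump <- which.max(diff(tree$height))
    k <- nrow(x) - biggest.jump          # clusters remaining before that merge
    table(cutree(tree, k = k))           # ought to recover the two blobs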

Principal Components I (18
September). Principal components are the directions of maximum variance.
Derivation of principal components as the best approximation to the data in a
linear subspace. Equivalence to variance maximization. Avoiding explicit
optimization by finding eigenvalues and eigenvectors of the covariance matrix.
Example of principal components with cars; how to tell a sports car from a
minivan. The standard recipe for doing PCA. Cautions in interpreting
PCA. Data-set used in the notes.
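
The notes' data set is linked above; as a stand-in, the standard recipe runs on R's built-in mtcars data (a sketch, not the notes' code):

    x <- scale(mtcars)           # center and scale the features
    ev <- eigen(cov(x))          # eigendecomposition of the covariance matrix
    round(ev$values / sum(ev$values), 2)   # share of variance per component
    scores <- x %*% ev$vectors   # project each car onto the components
    # The built-in shortcut agrees (up to the signs of the eigenvectors):
    pca <- prcomp(mtcars, scale. = TRUE)
    head(pca$x[, 1:2])           # each car's coordinates on the first two PCs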

Factor Analysis (23 and 25
September). From PCA to factor analysis by adding noise. Roots of factor
analysis in causal discovery: Spearman's general factor model and the tetrad
equations. Problems with estimating factor models: more unknowns than
equations. Solution 1, "principal factors", a.k.a. estimation through heroic
feats of linear algebra. Solution 2, maximum likelihood, a.k.a. estimation
through imposing distributional assumptions. The rotation problem: the factor
model is unidentifiable; the number of factors may be meaningful, but
the individual factors are not.
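
Base R's factanal implements the maximum-likelihood approach (solution 2); a sketch on stand-in data, illustrating the rotation problem (the data and the pair of rotations are my choices):

    x <- scale(mtcars)            # stand-in data, as in the PCA example
    fa.v <- factanal(x, factors = 2, rotation = "varimax")
    fa.p <- factanal(x, factors = 2, rotation = "promax")
    fa.v$loadings    # "the factors" under one rotation...
    fa.p$loadings    # ...and different ones from the same data, fitting equally well
    fa.v$PVAL        # likelihood-ratio test of the two-factor restriction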

The Truth about PCA and Factor
Analysis (28 September). PCA is data reduction without any probabilistic
assumptions about where the data came from. Picking number of components.
Faking predictions from PCA. Factor analysis makes stronger, probabilistic
assumptions, and delivers stronger, predictive conclusions --- which could be
wrong. Using probabilistic assumptions and/or predictions to pick how many
factors. Factor analysis as a first, toy instance of a graphical causal
model. The rotation problem once more with feeling. Factor models and mixture
models. Factor models and Thomson's sampling model: an outstanding fit to a
model with a few factors is actually evidence of a huge number
of badly measured latent variables. Final advice: it all depends, but
if you can only do one, try PCA.
R code for the Thomson sampling
model.
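
The linked code is the thing to study; as a hint of what it does, a bare-bones version of Thomson's model might look like this (the sizes and noise level are my choices):

    # Thomson's model (bare-bones sketch): each of 10 observable test scores
    # sums a random half of 500 independent latent "abilities"; correlations
    # between tests come only from the overlapping subsets
    set.seed(3)
    n <- 1000; tests <- 10; abilities <- 500
    a <- matrix(rnorm(n * abilities), nrow = n)
    picks <- matrix(rbinom(abilities * tests, 1, 0.5), ncol = tests)
    scores <- a %*% picks + matrix(rnorm(n * tests), nrow = n)
    factanal(scores, factors = 1)$PVAL  # one factor often fits startlingly well,
                                        # though 500 latent variables are at work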

Nonlinear Dimensionality Reduction I:
Locally Linear Embedding (5 and 7 October). Failure of PCA and all other
linear methods for nonlinear structures in data; spirals, for example.
Approximate success of linear methods on small parts of nonlinear structures.
Manifolds: smoothly curved surfaces embedded in higher-dimensional Euclidean
spaces. Every manifold looks like a linear subspace on a sufficiently small
scale, so we should be able to patch together many small local linear
approximations into a global manifold. Locally linear embedding: approximate
each vector in the data as a weighted linear combination of its k
nearest neighbors, then find the low-dimensional vectors best reconstructed by
these weights. Turning the optimization problems into linear algebra. Coding
up LLE. A spiral rainbow. R code for this lecture.
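
The linked code is what the lecture walks through; a compressed sketch of the whole algorithm (the regularization constant and the interface are mine):

    # Locally linear embedding (sketch): data x (n x p), k neighbors, d dimensions
    lle <- function(x, k, d, reg = 1e-3) {
      n <- nrow(x)
      D <- as.matrix(dist(x))
      W <- matrix(0, n, n)
      for (i in 1:n) {
        nbrs <- order(D[i, ])[2:(k + 1)]                 # k nearest neighbors
        Z <- sweep(x[nbrs, , drop = FALSE], 2, x[i, ])   # center them on x_i
        G <- Z %*% t(Z)                                  # local Gram matrix
        G <- G + reg * sum(diag(G)) * diag(k)            # regularize when k > p
        w <- solve(G, rep(1, k))
        W[i, nbrs] <- w / sum(w)                         # weights sum to one
      }
      M <- crossprod(diag(n) - W)                        # (I-W)'(I-W)
      e <- eigen(M, symmetric = TRUE)
      e$vectors[, (n - 1):(n - d)]     # bottom eigenvector is constant; skip it
    }
    set.seed(4)
    theta <- sort(runif(300, 0, 3 * pi))                 # a noisy spiral...
    spiral <- cbind(theta * cos(theta), theta * sin(theta)) + rnorm(600, sd = 0.05)
    plot(theta, lle(spiral, k = 10, d = 1))              # ...roughly unrolled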

Nonlinear Dimensionality Reduction II:
Diffusion Maps (9 October). Making a graph from the data; random walks on
this graph. The diffusion operator, a.k.a. Laplacian. How the Laplacian
encodes the shape of the data. Eigenvectors of the Laplacian as coordinates.
Connection to page-rank. Advantages when data are not actually on a manifold.
Example.
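
A sketch of the construction (the bandwidth and interface are mine; serious implementations are more careful about scaling and sparsity):

    # Diffusion map (sketch): Gaussian affinities -> random-walk transition
    # matrix -> leading non-trivial eigenvectors as coordinates
    diffusion.map <- function(x, sigma, d = 2) {
      A <- exp(-as.matrix(dist(x))^2 / (2 * sigma^2))  # heat-kernel affinities
      P <- A / rowSums(A)         # row-normalize: the walk's transition matrix
      e <- eigen(P)               # P isn't symmetric, but eigenvalues are real;
      Re(e$vectors[, 2:(d + 1)])  # Re() just drops numerical noise
    }
    # The top eigenvector is constant (eigenvalue 1), hence columns 2:(d+1);
    # e.g., plot(diffusion.map(spiral, sigma = 0.5, d = 1)) on the LLE spiral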

Regression I: Basics (19
October). Guessing a real-valued random variable; why expectation values are
mean-square optimal point forecasts. The regression function; why its
estimation must involve assumptions beyond the data. The bias-variance
decomposition and the bias-variance trade-off. First example of improving
prediction by introducing variance. Ordinary least squares linear regression
as smoothing. Other linear smoothers: k-nearest-neighbors and kernel
regression. How much should we
smooth? R, data
for the running example.
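
The linked code and data are the lecture's; a free-standing sketch of three linear smoothers on simulated data (the data are mine):

    # Three linear smoothers on the same data (sketch)
    set.seed(5)
    x <- sort(runif(100, 0, 3))
    y <- sin(2 * x) + rnorm(100, sd = 0.3)
    plot(x, y)
    abline(lm(y ~ x), col = "grey")    # OLS as a (very heavy-handed) smoother
    lines(ksmooth(x, y, kernel = "normal", bandwidth = 0.5), col = "blue")
    knn.reg <- function(x0, k = 10)    # k-nearest-neighbor smoother by hand
      mean(y[order(abs(x - x0))[1:k]])
    lines(x, sapply(x, knn.reg), col = "red")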

Regression II: The Truth About Linear
Regression (21 October). Linear regression is optimal linear (mean-square)
prediction; we do this because we hope a linear approximation will work well
enough over a small range. What linear regression does: decorrelate the input
features, then correlate them separately with the response and add up. The
extreme weakness of the probabilistic assumptions needed for this to make
sense. Difficulties of linear regression; collinearity, errors in variables,
shifting distributions of inputs, omitted variables. The usual extra
probabilistic assumptions and their implications. Why you should always
look at residuals. Why you generally shouldn't use regression for causal
inference. How to torment angels. Likelihood-ratio tests for restrictions of
nice models.
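
The "decorrelate, then correlate" description can be checked directly: the OLS slope vector is Var(X)^{-1} Cov(X, Y). A short verification on simulated data (the data are mine):

    # Checking the decorrelate-then-correlate view of OLS (sketch)
    set.seed(6)
    x1 <- rnorm(200); x2 <- 0.7 * x1 + rnorm(200)  # deliberately correlated inputs
    y <- 2 * x1 - x2 + rnorm(200)
    X <- cbind(x1, x2)
    solve(var(X), cov(X, y))     # decorrelate the inputs, correlate with response
    coef(lm(y ~ x1 + x2))[-1]    # the same slopes, from lm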

Smoothing
Methods in Regression (30 October). How much smoothing should we do?
Approximation by local averaging. How much smoothing we should do to
find the unknown curve depends on how smooth the curve really is,
which is unknown. Adaptation as a partial substitute for actual knowledge.
Cross-validation for adapting to unknown smoothness. Application: testing
parametric regression models by comparing them to nonparametric fits. The
bootstrap principle. Why ever bother with parametric
regressions? R
code for some of the examples.
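
A sketch of leave-one-out cross-validation for the bandwidth of a kernel smoother (the grid and data are mine; the linked code does the lecture's real examples):

    # Leave-one-out cross-validation for bandwidth choice (sketch)
    set.seed(7)
    x <- sort(runif(100, 0, 3)); y <- sin(2 * x) + rnorm(100, sd = 0.3)
    cv.mse <- function(h) {
      preds <- sapply(seq_along(x), function(i)
        ksmooth(x[-i], y[-i], kernel = "normal", bandwidth = h,
                x.points = x[i])$y)
      mean((y - preds)^2, na.rm = TRUE)  # NA when no neighbor is in range
    }
    h.grid <- seq(0.2, 2, by = 0.1)
    h.grid[which.min(sapply(h.grid, cv.mse))]  # the adaptively chosen bandwidth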

Support Vector Machines (20
November). Turning nonlinear problems into linear ones by expanding into
high-dimensional feature spaces. The dual representation of linear
classifiers: weight training points, not features. Observation: in the dual
representation, only inner products of vectors matter. The kernel trick:
kernel functions let us compute inner products in feature spaces without
computing the features. Some bounds on the generalization error of linear
classifiers based on "margin" and the number of training points with non-zero
weight ("support vectors"). Learning support vector machines by trading
in-sample performance against bounds on over-fitting.
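
The kernel trick can be seen in miniature (this worked example is mine, not the lecture's): the polynomial kernel (1 + x·z)^2 equals an ordinary inner product between explicit quadratic features, without ever computing those features.

    # The kernel trick in miniature (illustrative sketch)
    phi <- function(x)    # explicit quadratic feature map for 2-d inputs
      c(1, sqrt(2) * x[1], sqrt(2) * x[2], x[1]^2, x[2]^2, sqrt(2) * x[1] * x[2])
    k.poly <- function(x, z) (1 + sum(x * z))^2
    x <- c(1, 2); z <- c(3, -1)
    sum(phi(x) * phi(z))  # inner product in the 6-d feature space: 4
    k.poly(x, z)          # the same number, computed from the 2-d vectors: 4

In practice one hands the kernel to a quadratic-programming solver rather than coding it oneself; in R, packages such as e1071 and kernlab provide working SVM implementations.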

Blackboard

Textbook

Our required textbook is Principles of Data
Mining
by Hand, Mannila
and Smyth. It should be at the
campus bookstore already, but you can also buy it online (I
like Powell's),
or directly from MIT
Press.

Berk's Statistical Learning from a Regression Perspective (Powell's; publisher) is
an optional book, which covers some topics (mostly from the
second half of the course) in greater detail.

R

R is a free, open-source software
package/programming language for statistical computing. (It is a dialect of a
language called S, whose commercial version is S-plus.) You can expect at
least one assignment every week which uses R. If you do not have ready access
to a computer which can run it, contact me at once.

Here are some resources for learning R:

The official intro, "An Introduction to R", available online in
HTML
and PDF