Machine Learning: Clustering & Retrieval

University of Washington

About this course: Case Studies: Finding Similar Documents
A reader is interested in a specific news article and you want to find similar articles to recommend. What is the right notion of similarity? Moreover, what if there are millions of other documents? Each time you want to retrieve a new document, do you need to search through all the other documents? How do you group similar documents together? How do you discover new, emerging topics that the documents cover?
In this third case study, finding similar documents, you will examine similarity-based algorithms for retrieval. In this course, you will also examine structured representations for describing the documents in the corpus, including clustering and mixed membership models, such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce.
Learning Outcomes: By the end of this course, you will be able to:
-Create a document retrieval system using k-nearest neighbors.
-Identify various similarity metrics for text data.
-Reduce computations in k-nearest neighbor search by using KD-trees.
-Produce approximate nearest neighbors using locality sensitive hashing.
-Compare and contrast supervised and unsupervised learning tasks.
-Cluster documents by topic using k-means.
-Describe how to parallelize k-means using MapReduce.
-Examine probabilistic clustering approaches using mixture models.
-Fit a mixture of Gaussian model using expectation maximization (EM).
-Perform mixed membership modeling using latent Dirichlet allocation (LDA).
-Describe the steps of a Gibbs sampler and how to use its output to draw inferences.
-Compare and contrast initialization techniques for non-convex optimization objectives.
-Implement these techniques in Python.

WEEK 1

Welcome

Clustering and retrieval are among the most high-impact machine learning tools out there. Retrieval is used in almost every application and device we interact with, such as providing a set of products related to the one a shopper is currently considering, or a list of people you might want to connect with on a social media platform. Clustering can be used to aid retrieval, but it is also a more broadly useful tool for automatically discovering structure in data, such as uncovering groups of similar patients.

This introduction to the course provides you with an overview of the topics we will cover and the background knowledge and resources we assume you have.

4 videos, 3 readings

Reading: Slides presented in this module

Video: Welcome and introduction to clustering and retrieval tasks

Video: Course overview

Video: Module-by-module topics covered

Video: Assumed background

Reading: Software tools you'll need for this course

Reading: A big week ahead!

WEEK 2

Nearest Neighbor Search

We start the course by considering a retrieval task of fetching a document similar to one someone is currently reading. We cast this problem as one of nearest neighbor search, which is a concept we have seen in the Foundations and Regression courses. However, here, you will take a deep dive into two critical components of the algorithms: the data representation and metric for measuring similarity between pairs of datapoints. You will examine the computational burden of the naive nearest neighbor search algorithm, and instead implement scalable alternatives using KD-trees for handling large datasets and locality sensitive hashing (LSH) for providing approximate nearest neighbors, even in high-dimensional spaces. You will explore all of these ideas on a Wikipedia dataset, comparing and contrasting the impact of the various choices you can make on the nearest neighbor results produced.
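Before turning to KD-trees and LSH, it helps to see the naive pipeline end to end. Below is a minimal sketch, not from the course materials, of brute-force nearest neighbor retrieval over tf-idf vectors with cosine distance; the scikit-learn calls and the toy corpus are illustrative assumptions (the course assignments use their own tooling and the Wikipedia data).

```python
# A minimal sketch (not the course's implementation) of brute-force
# nearest neighbor retrieval: tf-idf representation + cosine distance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

corpus = [
    "the quarterback threw a touchdown pass",
    "the senate passed a new budget bill",
    "the striker scored a late winning goal",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)  # one tf-idf row per document

# Compute the distance from a query document to every corpus document.
query = vectorizer.transform(["a goal was scored in the final minute"])
distances = cosine_distances(query, X).ravel()

# The nearest neighbor is the document at the smallest distance.
print("nearest document:", corpus[distances.argmin()])
```

The cost of this search grows linearly with the number of documents, which is exactly the burden that KD-trees and LSH are designed to reduce.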

WEEK 3

Clustering with k-means

In clustering, our goal is to group the datapoints in our dataset into disjoint sets. Motivated by our document analysis case study, you will use clustering to discover thematic groups of articles by "topic". These topics are not provided in this unsupervised learning task; rather, the idea is to output cluster labels that can be post-facto associated with known topics like "Science", "World News", etc. Even without such post-facto labels, you will examine how the clustering output can provide insights into the relationships between datapoints in the dataset. The first clustering algorithm you will implement is k-means, the most widely used clustering algorithm out there. To scale up k-means, you will learn about the general MapReduce framework for parallelizing and distributing computations, and then see how the iterative steps of k-means can utilize this framework. You will show that k-means can provide an interpretable grouping of Wikipedia articles when appropriately tuned.
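For concreteness, here is a minimal sketch of the two alternating k-means steps described above, written in plain NumPy; the toy data, the random initialization (rather than k-means++), and the stopping rule are illustrative choices, not the course's reference implementation.

```python
# A minimal NumPy sketch of the k-means iterations: assign points to
# their closest centers, then move each center to the mean of its points.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers at k distinct datapoints (k-means++, covered
    # in this module, is the smarter initialization).
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: each point joins its closest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its points
        # (assumes no cluster goes empty, which holds for this toy data).
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers, labels = kmeans(X, k=2)
```

In the MapReduce formulation covered in this module, the assignment step corresponds to the map phase (each datapoint is processed independently) and the center update corresponds to the reduce phase (a per-cluster aggregation).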

13 videos, 2 readings

Reading: Slides presented in this module

Video: The goal of clustering

Video: An unsupervised task

Video: Hope for unsupervised learning, and some challenge cases

Video: The k-means algorithm

Video: k-means as coordinate descent

Video: Smart initialization via k-means++

Video: Assessing the quality and choosing the number of clusters

Reading: Clustering text data with k-means

Video: Motivating MapReduce

Video: The general MapReduce abstraction

Video: MapReduce execution overview and combiners

Video: MapReduce for k-means

Video: Other applications of clustering

Video: A brief recap

Graded: k-means

Graded: Clustering text data with k-means

Graded: MapReduce for k-means

WEEK 4

Mixture Models

In k-means, observations are each hard-assigned to a single cluster, and these assignments are based just on the cluster centers, rather than also incorporating shape information. In our second module on clustering, you will perform probabilistic model-based clustering that (1) provides a more descriptive notion of a "cluster" and (2) accounts for uncertainty in assignments of datapoints to clusters via "soft assignments". You will explore and implement a broadly useful algorithm called expectation maximization (EM) for inferring these soft assignments, as well as the model parameters. To gain intuition, you will first consider a visually appealing image clustering task. You will then cluster Wikipedia articles, handling the high dimensionality of the tf-idf document representation considered.
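As a preview of the EM iterations you will implement, here is a minimal sketch for a two-component univariate Gaussian mixture in NumPy/SciPy; the synthetic data and initial parameter guesses are illustrative, and the actual assignment works with multivariate Gaussians and high-dimensional tf-idf vectors.

```python
# A minimal sketch of EM for a two-component univariate Gaussian mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

# Initial guesses for mixture weights, means, and standard deviations.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: soft assignments (responsibilities) of each point to each
    # cluster, proportional to weight times Gaussian density.
    dens = w * norm.pdf(x[:, None], mu, sigma)       # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters from the soft assignments.
    n_k = resp.sum(axis=0)
    w = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)

print("weights:", w, "means:", mu, "stds:", sigma)
```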

15 videos, 4 readings

Reading: Slides presented in this module

Video: Motivating probabilistic clustering models

Video: Aggregating over unknown classes in an image dataset

Video: Univariate Gaussian distributions

Video: Bivariate and multivariate Gaussians

Video: Mixture of Gaussians

Video: Interpreting the mixture of Gaussian terms

Video: Scaling mixtures of Gaussians for document clustering

Video: Computing soft assignments from known cluster parameters

Video: (OPTIONAL) Responsibilities as Bayes' rule

Video: Estimating cluster parameters from known cluster assignments

Video: Estimating cluster parameters from soft assignments

Video: EM iterates in equations and pictures

Video: Convergence, initialization, and overfitting of EM

Video: Relationship to k-means

Reading: (OPTIONAL) A worked-out example for EM

Video: A brief recap

Reading: Implementing EM for Gaussian mixtures

Reading: Clustering text data with Gaussian mixtures

Graded: EM for Gaussian mixtures

Graded: Implementing EM for Gaussian mixtures

Graded: Clustering text data with Gaussian mixtures

WEEK 5

Mixed Membership Modeling via Latent Dirichlet Allocation

The clustering model inherently assumes that data divide into disjoint sets, e.g., documents by topic. But often our data objects are better described via memberships in a collection of sets, e.g., multiple topics. In our fourth module, you will explore latent Dirichlet allocation (LDA) as an example of such a mixed membership model that is particularly useful in document analysis. You will interpret the output of LDA and explore the various ways it can be utilized, such as a set of learned document features. The mixed membership modeling ideas you learn about through LDA for document analysis carry over to many other interesting models and applications, like social network models where people have multiple affiliations.

Throughout this module, we introduce aspects of Bayesian modeling and a Bayesian inference algorithm called Gibbs sampling. You will be able to implement a Gibbs sampler for LDA by the end of the module.
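To make collapsed Gibbs sampling concrete before the worked example, here is a minimal sketch on a toy corpus; the corpus, hyperparameters, and variable names are illustrative assumptions, not the course's reference implementation.

```python
# A minimal sketch of collapsed Gibbs sampling for LDA on a toy corpus.
import numpy as np

docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 1, 4, 4]]  # word ids per document
V, K = 5, 2                                        # vocabulary size, topics
alpha, beta = 0.1, 0.01                            # Dirichlet priors
rng = np.random.default_rng(0)

# Random initial topic for every token, plus the count tables the
# collapsed sampler maintains: doc-topic, topic-word, and topic totals.
z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
ndk = np.zeros((len(docs), K))
nkw = np.zeros((K, V))
nk = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        ndk[d, z[d][i]] += 1
        nkw[z[d][i], w] += 1
        nk[z[d][i]] += 1

for _ in range(200):  # Gibbs sweeps over every token in the corpus
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1   # remove this token's current topic
            nkw[k, w] -= 1
            nk[k] -= 1
            # Resample: p(topic) is proportional to
            # (doc-topic count + alpha) * (topic-word count + beta)
            #                           / (topic total + V * beta).
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k      # add the token back under its new topic
            ndk[d, k] += 1
            nkw[k, w] += 1
            nk[k] += 1

# Point estimates of the topic-word and doc-topic distributions, drawn
# from the final counts (the module covers richer uses of the samples).
phi = (nkw + beta) / (nk[:, None] + V * beta)
theta = (ndk + alpha) / (ndk.sum(axis=1, keepdims=True) + K * alpha)
```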

12 videos, 2 readings

Reading: Slides presented in this module

Video: Mixed membership models for documents

Video: An alternative document clustering model

Video: Components of latent Dirichlet allocation model

Video: Goal of LDA inference

Video: The need for Bayesian inference

Video: Gibbs sampling from 10,000 feet

Video: A standard Gibbs sampler for LDA

Video: What is collapsed Gibbs sampling?

Video: A worked example for LDA: Initial setup

Video: A worked example for LDA: Deriving the resampling distribution

Video: Using the output of collapsed Gibbs sampling

Video: A brief recap

Reading: Modeling text topics with Latent Dirichlet Allocation

Graded: Latent Dirichlet Allocation

Graded: Learning LDA model via Gibbs sampling

Graded: Modeling text topics with Latent Dirichlet Allocation

WEEK 6

Hierarchical Clustering & Closing Remarks

In the conclusion of the course, we will recap what we have covered. This spans both techniques specific to clustering and retrieval and foundational machine learning concepts that are more broadly useful.

We provide a quick tour of an alternative clustering approach called hierarchical clustering, which you will experiment with on the Wikipedia dataset. Following this exploration, we discuss how clustering-type ideas can be applied in other areas, like segmenting time series. We then briefly outline some important clustering and retrieval ideas that we did not cover in this course.

We conclude with an overview of what's in store for you in the rest of the specialization.
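As a taste of the hierarchical clustering tour, here is a minimal sketch of agglomerative (bottom-up) clustering with a dendrogram, using SciPy and matplotlib on toy 2-D data; the course assignment applies the same ideas to the Wikipedia dataset with its own tooling.

```python
# A minimal sketch of agglomerative clustering plus a dendrogram; the
# toy 2-D data stands in for tf-idf document vectors.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(6, 1, (10, 2))])

# Ward linkage greedily merges the pair of clusters whose merge least
# increases total within-cluster variance, building a binary merge tree.
Z = linkage(X, method="ward")

# Cut the tree to recover flat cluster labels (here, 2 clusters).
labels = fcluster(Z, t=2, criterion="maxclust")

dendrogram(Z)   # visualize the full merge hierarchy
plt.show()
```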

12 videos, 2 readings

Reading: Slides presented in this module

Video: Module 1 recap

Video: Module 2 recap

Video: Module 3 recap

Video: Module 4 recap

Video: Why hierarchical clustering?

Video: Divisive clustering

Video: Agglomerative clustering

Video: The dendrogram

Video: Agglomerative clustering details

Video: Hidden Markov models

Reading: Modeling text data with a hierarchy of clusters

Video: What we didn't cover

Video: Thank you!

Graded: Modeling text data with a hierarchy of clusters


How It Works

Coursework

Each course is like an interactive textbook, featuring pre-recorded videos, quizzes and projects.

Help from Your Peers

Connect with thousands of other learners and debate ideas, discuss course material, and get help mastering concepts.

Certificates

Earn official recognition for your work, and share your success with friends, colleagues, and employers.

Creators

University of Washington

Founded in 1861, the University of Washington is one of the oldest state-supported institutions of higher education on the West Coast and is one of the preeminent research universities in the world.

Pricing

                                      Purchase Course   Audit
Access to course materials            Available         Available
Access to graded materials            Available         Not available
Receive a final grade                 Available         Not available
Earn a shareable Course Certificate   Available         Not available

Ratings and Reviews

Rated 4.6 out of 5 based on 1,034 ratings

Since I took courses 1, 2, and 3 of this series, I really enjoyed this fourth part a lot!

Now I'm really looking forward to doing some clustering!

Excellent course! A must for machine learning beginners!

Great instruction, great course, and it provided information I used directly in my work.