Quick Tutorial: Topic Modeling with LDA

Jul 22, 2017

After thinking (and reading) about Wikipedia scraping and topic modeling today, I wanted to provide a (really) simple but clear example of topic modeling using LDA (Latent Dirichlet Allocation). Technically, the example will make use of Python (3.6), NLTK, wikipedia, and Gensim.

The Plan

In this example, I want to do the following:

Download (scrape) a number of random Wikipedia articles

Download (scrape) two additional Wikipedia articles (‘Car’ and ‘Bus’)

Use LDA to model (or rather discover) abstract ‘topics’ in these articles (excluding ‘Bus’)

Use the model to predict the topic of a new/unknown article (in this case ‘Bus’)

Topic Modeling and LDA

Topic Modeling is primarily concerned with identifying ‘topics’ (in this sense, a pattern of co-occurring words; ultimately, a topic is a distribution over a given vocabulary) in a corpus (= set of documents). Put simply, a Topic Model is an abstract statistical model of these topics in a given corpus.

Latent Dirichlet Allocation is a generative probabilistic model, rooted in Bayesian thinking, developed by Blei et al. (2003). Without going into the details, LDA will lead us to a list of ‘topics’, each consisting of multiple words. These words are what define the ‘topic’ - there is no explicit name or label for each topic.

Wikipedia Example

Before going into the example, I want to issue a warning: the corpus used is extremely small and the results will likely be somewhat skewed. Also, I will not go into fine-tuning the model (e.g. finding the optimal number of topics). While both issues are fairly severe, they will not really matter for this simple example.

Getting the Data

First, I want to download a number of random Wikipedia articles (just the content) in order to introduce some randomness into the corpus and ultimately the topics. Then, I want to add the articles ‘Car’ and ‘Bus’ to the corpus. These two will serve as the actual basis for the example.

The wikipedia module for Python makes accessing the Wikipedia API very straightforward.

This will lead us to a list of tuples. Each item comprises the title and the actual content of an article (title, article_content). Now we have to clean, tokenize, and stem the articles. With the help of NLTK, this can be done fairly quickly:

In the first step, for the sake of simplicity, we build a list that just contains the text of all articles. Then gensim is used to construct a dictionary - “a mapping between words and their integer ids”
(Řehůřek 2017).

Constructing a vector representation of an article is equally simple:

bag_of_words = [dictionary.doc2bow(article_content)]

LDA Model

Having these two things in place, we can build (train) the LDA model. We need to provide (at least) two arguments: a predefined number of topics and a number of passes.

First, a corpus of all articles is constructed and vectorized. Then, an LDA model is trained with five topics over 100 passes. I’ve chosen the number of topics based on intuition - ideally, one would have to experiment here. A higher number of passes usually leads to more precise results, but increases the computational complexity. For this example, 100 is just good enough.

The above output is based on seven articles (‘Ese Que Va Por Ahí’, ‘Just Got Paid, Let's Get Laid’, ‘The Nearly Man’, ‘Invergordon F.C.’, ‘Caswell House (Troy, Michigan)’, ‘Car’, ‘Bus’) and represents the five topics. For each topic, the first three defining words are given. Hence, one topic (Topic 3) consists of vehicl, car, and use.

With this in mind, we can now try to apply our model to a new text. As indicated above, we will use the ‘Bus’ article, which has not been part of the training data. Intuitively, we would expect Topic 3 to also be assigned to this article.

print(list(lda_model[[dictionary.doc2bow(article_contents[-1])]]))

While this looks rather complicated, we are just feeding the last article into the model as a vector. The model returns the following predictions: