Topicmodels, topicmodels, …

I have previously done some topic modelling using LDA (Latent Dirichlet Allocation). Back then I learned it from a nice video tutorial by some nice guy, but somehow I cannot find the video with search engines anymore. Too bad. I implemented LDA in Java based on that tutorial, and learned how it works, but not why it works. I still don't quite get why the set of topics emerges from the algorithm.

Actually, I found a reasonably good explanation on Quora. Well, it is a good one if you already know most of how LDA works. Eh. There is also a tutorial briefly summarizing how online LDA works, which is a nice improvement over the original, and I guess what the tools use these days.

The number of topics LDA produces is given as a parameter, and it is always a bit of a puzzle for me how to pick the best number of topics. Googling for it, I found various references to using "perplexity" to choose the best number of topics. I still have not found a good "for dummies" explanation of what that really means in practice for LDA, or how to implement it. Maybe some of the libraries out there will do it for me? Python seems to be all the rage in data science these days, because whatever. So after a few searches, gensim it is.

Gensim seems to have some perplexity options and a bunch of weird formulas to apply. Is it so hard to write some simple docs and explain these things? I guess nobody pays people to do it, and doing it for free would just go against the goal of making oneself important. Sort of makes sense, and applies to most OSS software I have used. Or maybe I am just bad at using stuff.

Anyway. There is also something called topic coherence in Gensim, which is supposed to be a way to evaluate the number of topics. Somehow the explanation does not work for me; I did not quite grasp how it works for real. So I just gave it a try to see what I get, which is what matters most to me regardless.

I start with the English Wikipedia (a May 2017 dump), because it is sort of big, everyone knows it, it is public data, and I can put the results here. Gensim conveniently comes with a script to parse it into a dictionary and corpus:

python -m gensim.scripts.make_wiki

Then some code to build different sizes of topic models (25 to 200 topics, in increments of 25):

The code above drops a set of 9 different-sized topic models into matching directories, both for default parameters and autotuned parameters. It takes a while to run. The machine I ran it on has 32GB RAM and a quad-core Core i7 processor (hyperthreading to 8 virtual cores). Resource use? I found the Gensim implementations are quite nicely optimized: they do not take huge amounts of memory, and they pretty much make use of all the cores in a system. Except perhaps the topic coherence ones, which still seemed to run single-core. Perhaps because they are relatively new?

My first mistake here was to think of LDA as a single-core algorithm. I implemented the original algorithm some time back, and did not see it becoming anything else. But the online version processes the corpus in batches, which I guess makes it more parallelizable. The Gensim docs also nicely describe how the online algorithm merges the results in a way that you do not necessarily need to run large numbers of passes (iterations) over the corpus to converge on a good model. Chunksize 10000 in the code above causes this merge after every 10000 documents, and with Wikipedia having about 4 million articles, that amounts to quite a few merges. Maybe somewhat equivalent to the iterations of old.

With logging enabled, Gensim logs a "topic diff" between each batch and merge, which seems to indicate how much the topic model changed between runs. So I plotted the topic diff over the Wikipedia run (while generating the LDA models) to see how much the topics drift. See the figure below for the 9 sizes I used, with Gensim's default LDA parameters:

And for using the autotuned parameters:

From this, the topic model seems to pretty much "converge" quite early in the process: the topic diff drops to a small number and the topics become quite stable across merges/iterations. Maybe because there is so much data in this dataset? The autotuned version also seems to converge more directly, so I will use that from here on.

After this, I ran the same analysis on a bunch of document sets I have from different Finnish organizations. I won't be putting the exact data for those documents online, but I will show some statistics on the runs and the models produced, as well as my impressions from looking at the generated topics and the stats. Some stats from running the autotuned version (since it seemed to converge faster on Wikipedia, at about equal quality):

type id   doc count
1         3651
2         1930
3         679
4         5596
5         1058
6         343
7         228
8         1069
9         333
10        213
11        279
12        316
13        592
14        397
15        104
16        1076
17        1648

Since these have a very small number of documents compared to Wikipedia, I ran the Gensim LDA model generator for them in online mode using a batch size of 1000, separately with 10 iterations and 100 iterations, to get some comparable data on the impact of iteration counts. Listing all the 3×3 grids for the 17 document sets would be a bit much to show here. After looking at them, I figured they were mostly similar, with maybe a few minor differences, so I picked three types (based on my impressions of the figures):

Type 1 (this grid is for doc set with type id 6 from above):
10 iterations:

100 iterations:

Type 2 (this grid is for doc set with type id 5 from above):
10 iterations:

100 iterations:

Type 3 (this grid is for doc set with type id 7 from above):
10 iterations:

100 iterations:

Remember, the types are just something I made up myself. Type 1 refers to models where there was a big difference between 10 and 100 iterations in the final topic diff for the 25-topic run. In the example Type 1 figures here (for doc set 6), the 10-iteration run ends up around 0.25 final diff. In my set for Type 1, document sets 2, 16, and 17 had the biggest final diff, about 0.5 after 10 iterations. Document sets 3, 6, 9, 12, 13, and 14 were close to 0.2 after 10 iterations, and document sets 10 and 11 close to 0.1. Each of these was close to 0 final diff after 100 iterations.

Type 2 refers to models where the 25-topic line has a noticeable "jiggly" effect to it. Maybe this happens between the iterations (or "passes")? I am not sure how Gensim restarts iterations, so it could have something to do with that. Document sets 5 and 8 had the biggest such effects, as shown in the Type 2 figure above for document set 5. For document sets 1 and 4, the effect was smaller but still seemed to be there.

Type 3 refers to models where there was no big difference in final topic diff between 10 and 100 iterations. This covered just the models for document sets 7 and 15, which are also the two smallest document sets. Maybe smaller sets converge with fewer iterations?

Looking at the document count table above, there is no clear correlation between document count and the types (1, 2, 3) I assigned. There could be other differences in the properties of the documents (e.g., length, or the number of real distinct topics embedded in each). It is not in my scope to investigate further; the reasons could be anything, what do I know.

The properties I used to select the types are mostly visible at the smaller topic counts; with higher numbers of topics, the models all look quite similar. Maybe the algorithm has to work harder to fit the data into fewer topics? Or maybe I have so little data that a larger number of topics always produces uniformly garbage topics? No idea, really.

And once the models are built, the Gensim coherence estimator can be run to evaluate which of them is best according to Gensim. I used the u_mass measure here, since it does not require the corpus to be reloaded. According to this website, others such as c_v are more accurate, while u_mass is faster. For my experiments I am just looking for a general feel for the usefulness of the coherence measure. If I had more motivation and resources I might try the others as well. Mostly resources, since my results are not too good and further exploration would be needed to make them better. But let's not jump too far ahead. Code:

Have to say, I am maybe not very excited. The topics mostly make at least some sense, but many of the coherence measures show higher values for bigger topic counts. For example, the 100-iteration coherence for document sets 7 and 15 suggests that around 150 topics would be great; doc set 15 even has fewer documents than that. Manually looking at the generated topics, a large number of them are actually almost the same topic: they have mostly the same words, and very low weights for the topic words, meaning very few words in the docs got assigned to those topics. So it would seem that for most purposes, a lower topic count is better for these document sets. Unless maybe you want to capture really fine-grained differences in topics. Not sure what that would be good for, but maybe it has some use cases.

So if a smaller number of topics is better, maybe I need to try even smaller topic counts. That seems reasonable given the smallish number of documents I have. Let's try topic counts of 5, 10, 15, and 20 and see where that takes me. Here we go:

[Coherence plots per doc set id (1–17), autotuned parameters, 100 iterations]

Comparing these figures with the earlier ones for topic counts 25-200, the lower topic counts generally scored better. For a quick comparison: most of these 2-20 sizes have their highest score around -0.5 to -0.7, while the best scores for 25-200 were closer to -1.0. The exception is again doc set 15, which trolls us again with a value close to -0.8 at both 3 and 150 topics. Eh.

For a final comparison, and to see what I think of the topics found at different sizes, I simply examined the topics manually by printing them to files like so:

After dumping all my doc sets (1-17) like this, and looking at the ones with the highest and lowest coherence values, I could not really say that the topics were better for the higher coherence values. Certainly for these small document sets, the smaller topic counts were better if looking for clearly distinct topics, which I think is what most people would look for. Still, I am sure there is some value here, and trying the more accurate coherence metrics such as c_v (as mentioned earlier in this post) would probably give better results. Maybe someday.

Alternatively, for a more visual exploration, there is also the option of using the pyLDAvis package. Wikipedia example:

This dumps the whole LDAvis thing into an HTML file you can load up any time later and play with. The nice thing is that it can be run on a headless remote server and produces a single HTML file (a bit large, but anyway), which can then be downloaded and opened as a local file. So no webserver is needed anywhere, and the interactive visualization can be shared as a single file.

How does it look? To continue avoiding dumping the Finnish datasets here, I use examples for 25, 100 and 200 topics from Wikipedia:

25:

100:

200:

The first (and biggest) topic in the list of 25 is related to movies; same for the 100 topics. In the 200-topic model, music takes the first spot, novels (books) come second, football third, and movies fourth.

In the LDAvis figure here for 25 topics, the cluster of four smaller topics on the right is related to Asian countries. In the topic word list below for 25 topics, these are topics 4, 14, 16, and 20; the numbering differs only because they are ordered differently. The LDAvis figure above for 200 topics also has a cluster of small topics on the left, many of them for countries/states but also some for other topics such as chess, church, weightlifting, and more. Why the PCA projection groups them together would be an interesting topic to study in itself.

In general, there are a number of parameters to play with in LDAvis, and I don't pretend to know all of them. For example, you can also cycle through the topics using the controls at the top. A handy tool for topic exploration.

But I do also prefer just using the textual outputs of the topics, as shown below, to see a large number of topics at once rather than cycling through them one at a time. Maybe some combination would work best.

The 25- and 100-topic models from Wikipedia, using my text output code above:

25 Wikipedia topics (I manually tried to cut these to the top 20 words from the 100 I printed, so it's ~20 words each):

I find these Wikipedia topics to be pretty good and clear. More data obviously gives better topics. I am still running the coherence metrics for Wikipedia: even though u_mass is supposed to be fast, it took me 4 days to run it just for the 25-topic model. So it would take weeks to run it for all the 25-200 topic counts. If I ever finish it, maybe I will post an update.

I am sure there would be lots of interesting things to explore via Wikipedia by increasing topic counts, looking at the relations between topics, how they evolve as the numbers increase, and so on. Unfortunately, I am not paid for this and have too many other things to do…

So if I want to apply topic models right now, what would I do (NLP is getting lots of attention, so who knows in a few years…)? Try a number of different topic counts and parameters if possible, look at the models manually, both as text and visually, and pick a nice configuration. It really depends on whether the topics are for human consumption as such, or just some form of automated input.

If I needed to model large numbers of separate document sets that evolve over time, I might just use the coherence metrics along with some heuristics (e.g., number of docs vs. number of topics) to make automated choices, run the whole thing as micro-services at intervals, and use the results automatically. Tune as needed over time.

Fewer and more static sets might benefit from more tailored approaches.