Tuesday, November 6, 2012

A Wordcloud in Python

Last week I was at PyCon DE, the German Python conference. After hacking on scikit-learn a lot, I decided to do something different on my way back, something I had planned for quite a while: building a wordle-like word cloud.

I know, word clouds are a bit out of style, but I kind of like them anyway. My motivation for thinking about word clouds was that they could be combined with topic models to give somewhat more interesting visualizations.

So I looked around to find a nice open-source implementation of word clouds ... only to find none. (This was a while ago; maybe that has changed since.)

While I was bored in the train last week, I came up with this code.
A little today-themed taste:

The next step is to extract words and give each word a weight - for example, how often it occurs in the document.
I used scikit-learn's CountVectorizer for that as it is convenient and fast, but you could also use nltk or just some regexp.
I get the counts of the 200 most common non-stopwords and normalize by the maximum count (to be somewhat invariant to document size).
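As a rough sketch of what that step does (using just the standard library instead of scikit-learn, and with a tiny illustrative stopword list), the counting and normalization could look like:

```python
import re
from collections import Counter

# toy stopword list for illustration; CountVectorizer ships a real one
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that"}

def word_frequencies(text, max_words=200):
    # tokenize on runs of word characters, drop stopwords, keep the
    # most common words, and normalize by the maximum count
    words = re.findall(r"\w\w+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    most_common = counts.most_common(max_words)
    max_count = float(most_common[0][1])
    return [(word, count / max_count) for word, count in most_common]
```

The normalization means the most frequent word always gets weight 1.0, regardless of document length.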

Now the real work starts. The basic idea is to randomly sample a place on the canvas and draw a word with a size related to its importance (frequency).
We have to take care not to make the words overlap, though.

There seems to be no good alternative to the Python Imaging Library (PIL), which is really, really horrible. There are no docstrings. You specify colors using strings. The module structure is weird. Did I mention there are no docstrings?

The font_path here is an absolute path to a TrueType font on your system. I found no way to get around this (I didn't look very hard, though).

Ok, now we can draw random positions and check whether we can place a word there without touching any other words.
There is a handy method, ImageDraw.Draw.textsize, which tells you how large a piece of text will be once rendered. We can use that to test for overlap.
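A PIL-free sketch of the idea (here `occupied` is a hypothetical boolean mask of pixels already covered by words, and the box dimensions would come from ImageDraw.Draw.textsize):

```python
import random
import numpy as np

def find_free_position(occupied, box_w, box_h, tries=1000):
    # occupied: 2d boolean array marking pixels already used by words;
    # box_w / box_h: rendered size of the text to place
    height, width = occupied.shape
    for _ in range(tries):
        x = random.randint(0, width - box_w)
        y = random.randint(0, height - box_h)
        # the spot is usable only if the whole box is empty
        if not occupied[y:y + box_h, x:x + box_w].any():
            return x, y
    return None  # gave up: no free spot found in `tries` attempts
```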

Unfortunately, random sampling any place in the image turns out to be very inefficient: if a lot of the room is already taken, we have to try quite often to find some space.

My next idea was first to find out all possible free places in the image and then sample randomly from those. The easiest way to find free positions is to convolve the current image with a box of size ImageDraw.textsize(next_word). The places where the result is zero are exactly the places that have enough room for the text.
Using scipy.ndimage.uniform_filter that worked quite nicely.
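Roughly, that step could be sketched as follows (not the exact code; the tolerance guards against floating point residue in the filter output):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def free_centers(occupied, box_w, box_h):
    # uniform_filter computes a box average; wherever the average over
    # a (box_h, box_w) window is zero, the whole window is free.
    # mode="constant", cval=1 treats everything outside as occupied,
    # so windows sticking over the border are rejected.
    averaged = uniform_filter(occupied.astype(float),
                              size=(box_h, box_w),
                              mode="constant", cval=1.0)
    # centers of windows with enough room for the text
    return np.argwhere(averaged < 1e-10)
```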

But what do we do if there is not enough room to draw a word in the size we want?
Then we have to make the font smaller and try again. Which means convolving the image again, this time with a somewhat smaller box.

The code wasn't very fast and this seemed pretty wasteful, so I turned to another approach: integral images! An integral image is a simple precomputed 2d structure from which the sum over arbitrary rectangles of the image can be extracted in constant time.
The integral image is basically a 2d cumulative sum and can be computed as
integral_image = np.cumsum(np.cumsum(image, axis=0), axis=1).
This can be done once, and then we can look up rectangles of any size very fast.
If we are interested in windows of size (w, h) we can find the sum over all possible windows of this size via
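Reconstructed from the description, the numpy expression looks roughly like this (padding the integral image with a leading zero row and column handles windows that touch the border):

```python
import numpy as np

def all_window_sums(image, h, w):
    # integral image: 2d cumulative sum, padded with a zero row and
    # column so border windows need no special casing
    ii = np.pad(np.cumsum(np.cumsum(image, axis=0), axis=1),
                ((1, 0), (1, 0)), mode="constant")
    # the classic four-corner integral image query, evaluated for
    # every possible top-left window corner at once via slicing
    return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]
```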

This is a combination of the integral image query (see Wikipedia) and my favorite numpy trick to query all positions simultaneously.
So basically this does the same as the convolution above, except that it precomputes a structure from which we can query all possible window sizes.

After drawing a word, we have to compute the integral image again.
Unfortunately, the fancy indexing with the integral image was a bit sluggish.

Except for the last two lines ... lists are not fast.
I couldn't get it much faster (the array module doesn't have a C API, afaik).

I wanted to sample from all possible positions anyway, so I just ran the above code twice: once to count how many possible positions there are, then sampling an index, then walking to the sampled position.
Using C++ lists would probably be easier, but I was too lazy to try...
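In plain Python (the real implementation is Cython), the two-pass sampling could be sketched as:

```python
import random

def sample_free_position(window_sums):
    # first pass: count positions where the window sum is zero
    total = sum(1 for row in window_sums for v in row if v == 0)
    if total == 0:
        return None  # no room anywhere
    # draw one of them uniformly; second pass: walk to that position
    target = random.randrange(total)
    for i, row in enumerate(window_sums):
        for j, v in enumerate(row):
            if v == 0:
                if target == 0:
                    return i, j
                target -= 1
```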

Anyhow, now I had pretty decent integral images :)
Building it still took some time, though... so I lazily recompute only the part that changes after drawing a new word.
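A sketch of that partial recomputation in numpy (this mirrors the idea, not the exact Cython code): after changing `image` inside the region starting at (x, y), only integral image entries at or below/right of that point are stale, and they can be rebuilt from the still-valid row above and column to the left.

```python
import numpy as np

def update_integral(ii, image, x, y):
    # recompute the cumulative sum of the changed region only
    partial = np.cumsum(np.cumsum(image[x:, y:], axis=0), axis=1)
    ii[x:, y:] = partial
    if x > 0:
        # add the still-valid row just above the region
        ii[x:, y:] += ii[x - 1, y:][np.newaxis, :]
    if y > 0:
        # add the still-valid column just left of the region
        ii[x:, y:] += ii[x:, y - 1][:, np.newaxis]
    if x > 0 and y > 0:
        # the top-left corner block was counted twice
        ii[x:, y:] -= ii[x - 1, y - 1]
```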
Check out the full code on github.
It is not very pretty, but I think it should be quite readable.

Less talk more pictures:

To scale the fonts I used some arbitrary logarithmic dependence on the frequency that I felt looked decent.
A word can also simply become smaller if there is no more room.
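For illustration, one arbitrary choice of such a scaling (hypothetical numbers, not the exact formula from the code):

```python
import math

def font_size(freq, max_size=120, min_size=10):
    # map a normalized frequency in (0, 1] to a font size on a
    # logarithmic scale: freq = 1 gives max_size, small frequencies
    # bottom out at min_size
    return int(max(min_size, max_size * (math.log10(freq) + 1)))
```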

Oh, and of course I allowed flipping the words :)
I also played with arbitrary colors. I didn't see anything like colormaps in PIL, so I just used HSL space and sampled the hue. More elaborate schemes are obviously possible.
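Sampling only the hue can be done with the standard library's colorsys; PIL also accepts "hsl(h, s%, l%)" color strings, which amounts to the same thing. A pure-Python sketch:

```python
import colorsys
import random

def random_hue_color(saturation=0.8, lightness=0.5):
    # fix saturation and lightness, sample the hue uniformly
    hue = random.uniform(0.0, 1.0)
    # note the argument order: hue, lightness, saturation
    r, g, b = colorsys.hls_to_rgb(hue, lightness, saturation)
    return int(r * 255), int(g * 255), int(b * 255)
```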

Again, I used a slight trick for a bit more speed: I first computed everything in grey-scale, saved all the positions, and then redid it in color.

One more, this time a bit more with the theme of the blog (can you guess what this is?)

And with less saturation:

There is definitely some room for improvement w.r.t. the look of it, but I feel this is already a nice start if you want to play around.

One last comment: I thought about improving performance (apparently the only thing on my mind during this little project) by doing the whole thing at a lower resolution and then recreating it at a higher one.
This has two problems: if you use too small a resolution, some text might actually become invisible as it is too small to render. The other problem is that PIL's font sizes don't scale linearly, so it is not possible to say "I want this font 4 times larger".
You can work around that, but it's not pretty.
So I went with the Cython / integral image way, which I think is kind of cool :)

Great job Andreas ... I did an implementation of a wordle-like cloud in Python years back using PyQt and it was great fun ... Your output is much better than mine. It's truly a fun exercise to do, is what I recall. http://uptosomething.in

Thanks for the package and for your reply. I am trying to generate a word cloud for Nepali text. I installed the font 'preeti' and gave the path to the font in the wordcloud function. Now the characters in the image are similar to those in the text, but the words in the image are random; they are not present in the text I supplied. Can you suggest something? I can send you my text and the output I got if it is not clear to you. Thanks!

It is not clear to me how to do that. The words have different sizes and shapes, so if you start from, say, the top right, the shape will become "unorderly" very soon, and the collision detection will be as hard as it is with random placements, I would guess.

Hi Andreas! Thank you for the great post! I tried your script and I got this error message; I tried to google it but no luck. Any idea?

def query_integral_image(unsigned int[:,:] integral_image, int size_x, int size_y):
^
SyntaxError: invalid syntax

I ran "python setup.py build_ext -i" and got the message "running build_ext"; then I ran "python wordcloud.py" and still get the error. Maybe it is something to do with my Ubuntu configuration?

And which version of Cython are you calling there? Can you try ``cython --version`` and ``python -c "import Cython; print(Cython.__version__)"``? I would guess you have an older Cython somewhere in your path.

That error is weird, as it is inside PIL. Did you change the font path in the file? You need to set "FONT_PATH" to a TrueType font that exists on your system; the default will only work under Linux. The code uses the constitution by default, but you can just pass another text file as a command line argument. Hth, Andy

Thanks Andy. After a lot of Google searching I found this, which resolved the error: to get it to work, change line 189 in C:\Python33\Lib\site-packages\PIL\ImageFont.py from "w, h = self.font.getsize(text)[0]" to "w, h = self.font.getsize(text)".

So that is a bug in PIL under Python 3? For Persian: basically yes, if: 1) you pick a font that supports the glyphs, 2) your text is properly encoded (utf8, and hopefully my code reads that correctly), and 3) the regular expression in the scikit-learn vectorizer makes sense for the language (which is probably fine). The vectorizer tokenizes the text into words based on a simple regular expression that basically separates words at whitespace and punctuation, iirc. For languages where that is not meaningful, you would need to adjust the regular expression (an optional argument to the vectorizer).

Thanks for the explanation for the Persian language. I used a Persian font and debugged the code. It reads a Persian text fine and creates the correct "words" and "counts", but at the end the generated image is just a bunch of rectangles! Do you know what I should do to create an image with Persian words in it? Thanks again for all your help.

So do the extracted "words" make sense? And what is their encoding? The code just renders the words using PIL. I am not very familiar with PIL, sorry. You could try writing a stand-alone script that tries to render some word using PIL and see if the problem persists.

It means running the program "make", the way most software is built on most operating systems. You can just run "python setup.py build_ext -i" as I said above. Feel free to send a PR improving the README.

This is really interesting, though my brain can't comprehend the stuff about integral images. I've been playing with making word clouds using bash scripting and ImageMagick, starting from a state of pretty much total ignorance of how to do it. Rather than randomly selecting points in the canvas and trying to put a word there, I've been starting by putting the most common word in the centre of the canvas and then checking for free space spiralling out from the centre.

Your post provides an answer to a question I've been wondering about which is how do people get clouds to fit a specified shape, even just a simple rectangle:

"But what do we do if there is not enough room to draw a word in the size we want? Then we have to make the font smaller and try again."

However, this seems to conflict with the premise of a word cloud. As you put it:

"…draw a word with a size related to its importance (frequency)."

If you're fitting words into spaces by way of shrinking their size, then aren't you destroying the relationship between the size of the word and its frequency? Especially because, as I read it, if a word won't fit in a space you just shrink it until it fits. Doesn't this approach mean that you can potentially end up with a word of frequency N being drawn larger than one with frequency 2N? Or have I misunderstood something?

Hey. I think my approach to word clouds is very non-standard. I also started from ignorance and tried something out. There is a paper about the wordle way, which I can't find at the moment. I think this JavaScript implementation uses the same algorithm: it also relies on a spiral and a dynamic that moves words apart if they overlap.

Actually, the way I present the algorithm here (and the way it is implemented), it is true that the size does not correspond to the frequency. BUT the ranking of the words is preserved: I sort the words by frequency before I start drawing, and the size only ever decreases. Maybe that wasn't clear from my description.

How difficult would it be to create an image where the background is white? I've tried playing around in the code - specifically, adding a color="white" parameter when the images are created - but was unsuccessful.

I've been using word cloud and enjoying the results. I was curious if it's possible to enable a HD mode that would support zooming without a loss of detail? Or if this isn't a current feature would it be possible to add it?

Hi. Currently it only produces a bitmap, not a vector graphic, so there is no lossless zooming. You can set "scale" to a higher number to get a higher resolution image at no extra computational cost (setting width and height to larger values makes the computation slower). I'm planning to rewrite the code to create vector graphics and HTML, but don't hold your breath.


Thanks for the wonderful package! I want to use it to display topic model results for an academic paper (i.e., the LDA and Dynamic Topic Model of the Gensim package), but unfortunately that's not ideal with the current wordcloud package. Specifically, I would like one word cloud with the top 30 words of each of the 3 topics in a different color. The 'color by group' example on your website is great for that type of thing, were it not that topic models like LDA allow words to occur in all topics. Hence, there is overlap between the top 30 words of the 3 topics. As the words and frequencies are passed as dictionary items, it is not possible to include the same word (with a different probability) twice. The only way to work around it now is to omit words that appear in the top 30 of more than one topic before computing the dictionary. I was wondering whether you might know how to work around this issue. If you could adjust the code to make this possible, I think many people would use it to display topic model results this way.

Hi Myrthe. Feel free to send a PR to allow a word to appear multiple times. Personally, to visualize LDA I would either color a word according to the topic it is most strongly associated with, or color it using a mixture of the topic colors. I think showing a word multiple times will make it hard to see the correspondences.

I am trying to create custom colors for my word cloud and found the function below in the documentation. But it has HSL values only for the single color we choose, as in the official documentation (for grey).

I tried changing the colormap parameter, but the colors were too bright.