Posts Tagged ‘nlp’

There are quite a few well-known libraries for doing various NLP tasks in Java and Python, such as the Stanford Parser (Java) and the Natural Language Toolkit (Python). For Ruby, there are a few resources out there, but they are usually derivative or not as mature. By derivative, I mean they are ports from other languages or extensions using code from another language. And I’m responsible for two of them! :)

There are also a number of fledgling or orphaned projects out there purporting to be ports of or interfaces to various other libraries like the Stanford POS Tagger and Named Entity Recognizer. Ruby (straight Ruby, not just JRuby) can interface with just about any Java library using the Ruby Java Bridge (RJB). RJB can be a pain, and I could only initialize it once per run (a second attempt never succeeds), so there are some limitations. But using it, I was able to easily interface with the Stanford POS tagger.
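As a sketch, the RJB calls look roughly like this. The jar and model paths are placeholders (point them at wherever your Stanford tagger lives), and remember the caveat above: the JVM can only be loaded once per process.

```ruby
# Sketch of calling the Stanford POS tagger from plain Ruby via RJB.
# Paths below are placeholders for your own stanford-postagger install.
STANFORD_JAR = 'stanford-postagger.jar'
TAGGER_MODEL = 'models/english-left3words-distsim.tagger'

def tag_sentence(text)
  require 'rjb'
  # Boots the JVM with the tagger jar on the classpath; works once per process.
  Rjb::load(STANFORD_JAR)
  tagger_class = Rjb::import('edu.stanford.nlp.tagger.maxent.MaxentTagger')
  tagger = tagger_class.new(TAGGER_MODEL)
  tagger.tagString(text)   # returns a tagged string like "Ruby_NNP rocks_VBZ"
end
```

Since `Rjb::load` can't be called a second time, it's worth structuring your program so all the Java setup happens in one place.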

So while there aren’t terribly many libraries for NLP tasks in Ruby, the ability to interface with Java directly widens the scope quite a bit. You can also incorporate a C library using extensions.

Naturally, if I missed anything, no matter how small, please let me know.

A while back I ported David Blei’s lda-c code for performing Latent Dirichlet Allocation to Ruby. Basically I just wrapped the C methods in a Ruby class, turned it into a gem, and called it a day. The result was a bit ugly and unwieldy, like most research code. A few months later, Todd Fisher came along and discovered a couple bugs and memory leaks in the C code, for which I am very grateful. I had been toying with the idea of improving the Ruby code, and embarked on a mission to do so. The result is a hopefully much cleaner gem that can be used right out of the box with little screwing around.

Unfortunately, I did something I’m ashamed of. Ruby gems are notorious for breaking backwards compatibility, and I have done just that. The good news is, your code will almost work, assuming you didn’t start diving into the Document and Corpus classes too heavily. If you did, then you will probably experience a lot of breakage. I hope the result is a more sensible implementation, however, so maybe you won’t hate me. Of course, I could be wrong and my implementation is still crap. If that’s the case, please let me know what needs to be improved.

A Twitter friend (@communicating) tipped me off to the UEA-Lite Stemmer by Marie-Claire Jenkins and Dan J. Smith. Stemmers are NLP tools that get rid of inflectional and derivational affixes from words. In English, that usually means getting rid of the plural -s, progressive -ing, and preterite -ed. Depending on the type of stemmer, that might also mean getting rid of derivational suffixes like -ful and -ness. Sometimes it’s useful to be able to reduce words like consolation and console to the same root form: consol. But sometimes that doesn’t make sense. If you’re searching for video game consoles, you don’t want to find documents about consolation. In this case, you need a conservative stemmer.
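Here's a toy sketch of what "conservative" means in practice. This is my own illustration, not the UEA-Lite rule set: strip only the regular inflectional endings, and leave short words, proper nouns, and acronyms alone.

```ruby
# Toy conservative stemmer (illustration only; NOT the UEA-Lite rules).
def conservative_stem(word)
  return word if word.length <= 4      # too short to stem safely
  return word if word =~ /\A[A-Z]/     # leave proper nouns and acronyms alone

  if word.end_with?('ing')
    word[0..-4]                        # progressive -ing: "consoling" -> "consol"
  elsif word.end_with?('ed')
    word[0..-3]                        # preterite -ed: "consoled" -> "consol"
  elsif word.end_with?('s') && !word.end_with?('ss')
    word[0..-2]                        # plural -s: "consoles" -> "console"
  else
    word                               # when in doubt, do nothing
  end
end
```

A real conservative stemmer like UEA-Lite has many more rules and exceptions, but the spirit is the same: when in doubt, leave the word alone.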

The UEA-Lite Stemmer is a rule-based, conservative stemmer that handles regular words, proper nouns, and acronyms. It was originally written in Perl and has since been ported to Java. Since I usually code in Ruby these days, I thought it’d be nice to make it available to the Ruby community, so I ported it over last night.

The code is open source under the Apache 2 License and hosted on GitHub. So please check out the code and let me know what you think. Heck, you can even fork the project and make some improvements yourself if you want.

One direction I’d like to be able to go is to turn all of the rules into finite state transducers, which can be composed into a single large deterministic finite state transducer. That would be a lot more efficient (and even fun!), but Ruby lacks a decent FST implementation.
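To make the FST idea concrete, here's a toy deterministic transducer in plain Ruby (my own sketch, and a classic textbook example rather than a stemming rule): it rewrites "qu" as "kw" in a single left-to-right pass. Each suffix rule would get a small machine like this, and composition would merge them all into one big deterministic transducer.

```ruby
# Toy deterministic finite-state transducer: rewrites "qu" as "kw" in one
# left-to-right pass. The :q state buffers a 'q' until we see whether a
# 'u' follows, which is what makes this a transducer rather than gsub.
class TinyFST
  def initialize
    @state = :start
  end

  # Consume one character and return the output emitted for it.
  def step(ch)
    if @state == :q
      @state = :start
      ch == 'u' ? 'kw' : "q#{ch}"
    elsif ch == 'q'
      @state = :q
      ''                               # hold the 'q' until the next character
    else
      ch
    end
  end

  # Emit any buffered 'q' at end of input.
  def finish
    out = @state == :q ? 'q' : ''
    @state = :start
    out
  end

  def transduce(word)
    word.chars.map { |c| step(c) }.join + finish
  end
end
```

The win with real FST libraries is exactly the composition step: dozens of rules collapse into one machine that does all the rewriting in a single pass.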

So I am on the market after getting my master’s. I’ve posted my resume to Dice and Monster and a couple others. Monster gets the most unsolicited calls. I’m finding that recruiters are an odd lot. There are some who are pleasant, though to a man (or woman) they’ve never heard of NLP or computational linguistics and have no idea how to help me (with the exception of the one or two recruiters I’ve contacted for NLP jobs). For the most part, they don’t seem to even read my resume. Oh, you have Java skills? How about this Java grunt job that only requires a bachelor’s degree? Waste of time. The best are the ones who contact me in broken English with a multitude of typos. Yeah, right.

I have been told that with my CMU degree, I should be looking exclusively at the big corporations: Google, Microsoft, Amazon, Yahoo, etc. If I do my time there, I can get a job anywhere and have a good career. That’s true, I’m sure. Something about startups is really attractive to me, though, so I’ve been looking at a lot of them. What if the only job I can get at a Googlosoftazonahoo is not NLP-related? Everything is so rushed. I have a September 1st exit date for CMU and I want to be in the city of my chosen job by then. Add lease problems. The problem is my decision to abandon academia didn’t come at the right time: back in the winter. I am, however, more confident than ever that it was the right decision.

So I decided to finally fart around with OpenCalais a little. There’s a nice video on the site that gives you an impression of what it is capable of, but it’s also like all videos about software: propaganda. Calais is basically Named Entity Recognition (NER) software that can be accessed via a web API. Whereas a regular NER system might recognize named entities like people, organizations, and places, Calais also recognizes relationships like corporate acquisitions. To be a little more clear if you aren’t familiar with NER, it is basically the task of identifying the proper nouns in a body of text. Named entities aren’t always proper nouns, but that is one starting point. Examples would be: John Hancock (Person), New York (Place), and Apple (Organization). Calais recognizes relationships, which means we get an extra layer of information: Acquisition(Microsoft, Yahoo!).

Calais is put out by Reuters, which has a long history of helping out the NLP and IR research communities with data sets. Being Reuters, the data sets are all newswire stuff, and Calais is produced in that spirit. Currently the relationships and named entities available reflect that bias, but the list is expanding and it is probably flexible enough for most domains. Their claim is that with each new release, there will be additional entities and relationships available. Also, the service is completely free for both commercial and private use. For this, I give Reuters props.

OpenCalais uses SOAP or HTTP POST to issue requests, and you can take a look at their tutorials for exactly how to use it. After some very shallow digging on the googles, I found an open source project called python-calais, which is basically just a script that wraps some text and sends it to the Calais service, then processes the output. The output is in RDF (Resource Description Framework), a type of XML document that is not very friendly to the human eye but is nice and powerful otherwise. The python-calais script uses an RDF library for Python, so you’ll need to download that if you don’t already have it.
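The HTTP POST route is simple enough to sketch in Ruby with nothing but the standard library. The endpoint and parameter names below are illustrative (check the OpenCalais tutorials for the current ones), and you need to sign up for an API key first.

```ruby
require 'net/http'
require 'uri'

# Illustrative endpoint and parameter names; consult the OpenCalais docs
# for the real API details. An API key (license ID) is required.
CALAIS_URI = URI.parse('http://api.opencalais.com/enlighten/rest/')

# POSTs raw text to the Calais service and returns the RDF/XML response body.
def calais_annotate(text, api_key)
  Net::HTTP.post_form(CALAIS_URI,
                      'licenseID' => api_key,
                      'content'   => text).body
end
```

From there you'd hand the RDF off to an RDF library, just as python-calais does on the Python side.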

As it picks up everything on the page, there is a lot included there that isn’t related to the post about Old English translation. Also, it picks up some weird so-called industry terms like “steel.” If you filter out just the text (manually), the output is a little more sensible:

(The codes are unique identifiers.) Unfortunately, some important terms are still missed, like Old English. So it appears Calais has some growing to do, but it’s off to a good start. Part of the problem might be that that blog post is out of domain. I imagine with time, it will continue to improve. We’ll see.

NLP app idea: construct random songs by scraping lyrics websites and stringing together common phrases. It’s a Pandora night for me, and here are a couple of lyrics that struck me as particularly meaningful. Both are by Regina Spektor, introduced to me by Pandora before she became (semi-)famous.

And then you take that love you made
And stick it into some
Someone else’s heart
Pumping someone else’s blood
– “On the Radio”

Beneath the stars came fallin’ on our heads
But they’re just old light, they’re just old light
– “Samson”

I love how she takes the beautiful image of stars falling on their heads and strips it bare of all romanticism and attached meanings, exposing them for what they are: old light.

Now here’s a great idea. StupidFilter is an open-source project with the goal of rooting out and destroying stupid comments in blogs, wikis, YouTube, Flickr, and just about any place morons are allowed to voice their opinions. Pulling this off would allow me to read the comments on Flickr without wanting to rip my eyeballs out. No longer will I gag when I accidentally allow myself to glance at the comments on a YouTube video. Yes, the world will be a better place.

The best part will be the complaints by users who are no longer able to leave comments.