Friday, October 24, 2008

Concordances, Part 3: Positioning Tokens

Today, we're going to extend the processing to keep track of where each
token appears in the original document.

Tokens Today

Currently, a token is just the string containing the token data.

Easy enough.

Tokens Tomorrow

To hold more information about each token, we'll need a richer data type.
Here's a struct for tokens:

(defstruct token :text :raw :line :start :end :filename)

This gives us slots to hold the token's text; its original text before case
normalization, stemming, or whatever; the line it occurred on; the start and
end indices where it can be found on that line; and the name of the file the
token was read from.
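
For example, here's what a token for the word "Aprille" might look like at
the REPL (the values are illustrative):

user=> (struct token "aprille" "Aprille" 0 10 17)
{:text "aprille", :raw "Aprille", :line 0, :start 10, :end 17, :filename nil}

Slots we don't supply, like :filename here, stay nil until something fills
them in.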

Again, pretty simple.

Updating the Tokenization

The big changes happen in the tokenization procedure. Currently, it doesn't
take lines into account.

Let's start with the highest-level functions and drill down to the lowest.
First, these functions tokenize either a file or a string.

split-lines breaks a string into lines based on a regex of line endings.
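
Here's a minimal sketch of what it might look like (the exact line-ending
pattern is an assumption):

(defn- split-lines
  "This breaks a string into a sequence of lines."
  [text]
  ;; Split on \r\n, \r, or \n; the exact pattern is an assumption.
  (seq (.split text "\r\n|[\r\n]")))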

tokenize-str uses split-lines to break its input into lines and passes them
to tokenize-str-seq. A second overload then filters the resulting tokens
against a stop list.
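
A sketch of both overloads, assuming the stop list is a set of normalized
token texts; tokenize-str-seq is defined further down, so we declare it
first:

(declare tokenize-str-seq)

(defn tokenize-str
  "This tokenizes a string into a lazy sequence of tokens."
  ([input]
     (tokenize-str-seq (split-lines input)))
  ([input stop-list]
     ;; Drop any token whose normalized text is in the stop list (a set).
     (remove #(stop-list (:text %)) (tokenize-str input))))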

tokenize opens a file with a java.io.BufferedReader and passes the lines it
reads to tokenize-str-seq. It also sets the :filename key on each token
structure.
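
Sketched out, it might look like this (the reader setup and the assoc of
:filename onto each token are assumptions about the details):

(defn tokenize
  "This tokenizes a file into a sequence of tokens tagged with the filename."
  [filename]
  (with-open [reader (java.io.BufferedReader.
                       (java.io.FileReader. filename))]
    ;; doall forces the lazy sequence while the reader is still open.
    (doall
     (map #(assoc % :filename filename)
          (tokenize-str-seq (line-seq reader))))))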

doall is thrown in there because map is lazy, but with-open isn't.
doall forces map to evaluate everything. Without it, with-open would
close the file before its contents could be read. This is a common mistake,
and it will probably bite you regularly. It does me.
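
To see the pitfall in isolation, compare two ways of reading a file's lines
(illustrative helpers, not part of the concordance code):

(defn lines-broken
  "Returns a lazy seq that fails when realized: with-open has already
  closed the reader by the time anyone asks for the lines."
  [filename]
  (with-open [reader (java.io.BufferedReader.
                       (java.io.FileReader. filename))]
    (line-seq reader)))

(defn lines-ok
  "doall realizes the whole sequence while the reader is still open."
  [filename]
  (with-open [reader (java.io.BufferedReader.
                       (java.io.FileReader. filename))]
    (doall (line-seq reader))))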

tokenize-str-seq tokenizes a sequence of strings. It walks through the
sequence, numbering each line (line-no). For each input line, it constructs
a lazy sequence by concatenating the tokens for that line (from
tokenize-line) with the tokens for the rest of the lines.
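
Here's one way to write it; instead of walking the sequence with explicit
recursion, this sketch pairs each line with its number by mapping over an
infinite counter, and the pattern in token-regex is an assumption:

(declare tokenize-line)

;; The real pattern could be anything; a run of word characters is a guess.
(def token-regex (re-pattern "\\w+"))

(defn- tokenize-str-seq
  "This tokenizes a sequence of lines into one lazy sequence of tokens."
  [lines]
  (mapcat (fn [line-no line]
            (tokenize-line line-no (re-matcher token-regex line)))
          (iterate inc 0)   ; line numbers 0, 1, 2, ...
          lines))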

(defn- tokenize-line
  "This tokenizes a single line into a lazy sequence of tokens."
  ([line-no matcher]
     (tokenize-line line-no matcher 0))
  ([line-no matcher start]
     (when (.find matcher start)
       (lazy-cons (mk-token line-no matcher)
                  (tokenize-line line-no matcher (.end matcher))))))

mk-token constructs a token struct from a regex matcher and a line number.

(defn- mk-token
  "This creates a token given a line number and regex matcher."
  [line-no matcher]
  (let [raw (.group matcher)]
    (struct token (.toLowerCase raw) raw line-no
            (.start matcher) (.end matcher))))

That's it. tokenize and tokenize-str create a sequence of strings of input
data. Each item in the sequence is a line in the input.

tokenize-str-seq takes that input sequence and creates a lazy sequence of
the tokens from the first line and the tokens from the rest of the input
sequence.

tokenize-line takes a line and constructs a lazy sequence of the tokens in
it, as defined by the regex held in token-regex.

Finally, mk-token constructs the token from the regex Matcher and the line
number.
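
Putting it together, feeding a couple of lines of Chaucer through the
sketched pipeline gives something like this at the REPL (output formatted
for readability):

user=> (take 2 (tokenize-str "Whan that Aprille\nwith his shoures soote"))
({:text "whan", :raw "Whan", :line 0, :start 0, :end 4, :filename nil}
 {:text "that", :raw "that", :line 0, :start 5, :end 9, :filename nil})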

If you've made it this far, you've probably got Clojure up and running, but if
not, Bill Clementson has a great post on how to set up
Clojure+Emacs+SLIME. In the future, he'll be exploring Clojure in more
detail. He's got a lot of good posts on Common Lisp and Scheme, and I'm
looking forward to seeing what he does with Clojure.