Parsing? Tokenization? Analysis!

Parsing

Applications that build their search capabilities upon Lucene may support documents in various formats – HTML, XML, PDF, Word – just to name a few.
Lucene does not care about the parsing of these and other document formats, and it is the responsibility of the
application using Lucene to use an appropriate parser to convert the original format into plain text before passing that plain text to Lucene.

Tokenization

Plain text passed to Lucene for indexing goes through a process generally called tokenization. Tokenization is the process
of breaking input text into small indexing elements – tokens.
The way input text is broken into tokens heavily influences how people will then be able to search for that text.
For instance, sentence beginnings and endings can be identified to provide for more accurate phrase
and proximity searches (though sentence identification is not provided by Lucene).

In some cases simply breaking the input text into tokens is not enough
– a deeper Analysis may be needed. Lucene includes both
pre- and post-tokenization analysis facilities.

Pre-tokenization analysis can include (but is not limited to) stripping
HTML markup, and transforming or removing text matching arbitrary patterns
or sets of fixed strings.

There are many post-tokenization steps that can be done, including
(but not limited to):

Stemming –
Replacing words with their stems.
For instance, with English stemming "bikes" is replaced with "bike";
now the query "bike" can find both documents containing "bike" and those containing "bikes".

Stop Words Filtering –
Common words like "the", "and" and "a" rarely add any value to a search.
Removing them shrinks the index size and increases performance.
It may also reduce some "noise" and actually improve search quality.

Text Normalization –
Stripping accents and other character markings can make for better searching.

Synonym Expansion –
Adding in synonyms at the same token position as the current word can mean better
matching when users search with words in the synonym set.

Core Analysis

The analysis package provides the mechanism to convert Strings and Readers
into tokens that can be indexed by Lucene. There are four main classes in
the package from which all analysis processes are derived. These are:

Analyzer – An Analyzer is
responsible for building a
TokenStream which can be consumed
by the indexing and searching processes. See below for more information
on implementing your own Analyzer.

CharFilter – CharFilter extends
Reader to perform pre-tokenization substitutions,
deletions, and/or insertions on an input Reader's text, while providing
corrected character offsets to account for these modifications. This
capability allows highlighting to function over the original text when
indexed tokens are created from CharFilter-modified text with offsets
that are not the same as those in the original text. Tokenizers'
constructors and reset() methods accept a CharFilter. CharFilters may
be chained to perform multiple pre-tokenization modifications.

Tokenizer – A Tokenizer is a
TokenStream and is responsible for
breaking up incoming text into tokens. In most cases, an Analyzer will
use a Tokenizer as the first step in the analysis process. However,
to modify text prior to tokenization, use a CharFilter subclass (see
above).

TokenFilter – A TokenFilter is
also a TokenStream and is responsible
for modifying tokens that have been created by the Tokenizer. Common
modifications performed by a TokenFilter are: deletion, stemming, synonym
injection, and down casing. Not all Analyzers require TokenFilters.

Hints, Tips and Traps

The synergy between Analyzer and
Tokenizer is sometimes confusing. To ease
this confusion, some clarifications:

The Analyzer is responsible for the entire task of
creating tokens out of the input text, while the Tokenizer
is only responsible for breaking the input text into tokens. Very likely, tokens created
by the Tokenizer would be modified or even omitted
by the Analyzer (via one or more
TokenFilters) before being returned.

Lucene Java provides a number of analysis capabilities, the most commonly used one being the StandardAnalyzer.
Many applications will have a long and industrious life with nothing more
than the StandardAnalyzer. However, there are a few other classes/packages that are worth mentioning:

PerFieldAnalyzerWrapper – Most Analyzers perform the same operation on all
Fields. The PerFieldAnalyzerWrapper can be used to associate a different Analyzer with different
Fields.
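
A minimal sketch (the field names and analyzer choices are illustrative; constructor signatures may vary across Lucene versions):

Map<String, Analyzer> analyzerPerField = new HashMap<>();
analyzerPerField.put("firstname", new KeywordAnalyzer());
analyzerPerField.put("lastname", new KeywordAnalyzer());

PerFieldAnalyzerWrapper wrapper =
    new PerFieldAnalyzerWrapper(new StandardAnalyzer(), analyzerPerField);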

The analysis library located at the root of the Lucene distribution has a number of different Analyzer implementations to solve a variety
of different problems related to searching. Many of the Analyzers are designed to analyze non-English languages.

There are a variety of Tokenizer and TokenFilter implementations in this package. Take a look around, chances are someone has implemented what you need.

Analysis is one of the main causes of performance degradation during indexing. Simply put, the more you analyze the slower the indexing (in most cases).
Perhaps your application would be just fine using the simple WhitespaceTokenizer combined with a StopFilter. The benchmark/ library can be useful
for testing out the speed of the analysis process.

Invoking the Analyzer

Applications usually do not invoke analysis – Lucene does it for them:

At indexing, as a consequence of
addDocument(doc),
the Analyzer in effect for indexing is invoked for each indexed field of the added document.

At search, a QueryParser may invoke the Analyzer during parsing. Note that for some queries, analysis does not
take place, e.g. wildcard queries.

However, an application might invoke analysis of any text for testing or for any other purpose, with something like this (a sketch; the analyzer, field name and sample text are placeholders):
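
Analyzer analyzer = new StandardAnalyzer(); // or any other Analyzer
try (TokenStream ts = analyzer.tokenStream("myfield", new StringReader("some text goes here"))) {
  OffsetAttribute offsetAtt = ts.addAttribute(OffsetAttribute.class);
  CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);

  ts.reset(); // resets the stream to the beginning (required)
  while (ts.incrementToken()) {
    System.out.println("token: " + termAtt.toString());
    System.out.println("token start offset: " + offsetAtt.startOffset());
    System.out.println("  token end offset: " + offsetAtt.endOffset());
  }
  ts.end(); // performs end-of-stream operations, e.g. setting the final offset
}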

Indexing Analysis vs. Search Analysis

Selecting the "correct" analyzer is crucial
for search quality, and can also affect indexing and search performance.
The "correct" analyzer differs between applications.
Lucene Java's wiki page
AnalysisParalysis
provides some data on "analyzing your analyzer".
Here are some rules of thumb:

Test test test... (did we say test?)

Beware of over-analysis – it might hurt indexing performance.

Start with the same analyzer for indexing and search; otherwise searches will not find what they are supposed to...

In some cases a different analyzer is required for indexing and search, for instance:

Certain searches require more stop words to be filtered (i.e., more than those that were filtered at indexing time).

Query expansion by synonyms, acronyms, auto spell correction, etc.

This might sometimes require a modified analyzer – see the next section on how to do that.

Implementing your own Analyzer

Creating your own Analyzer is straightforward. Your Analyzer can wrap
existing analysis components — CharFilter(s) (optional), a
Tokenizer, and TokenFilter(s) (optional) — or components you
create, or a combination of existing and newly created components. Before
pursuing this approach, you may find it worthwhile to explore the
analyzers-common library and/or ask on the
java-user@lucene.apache.org mailing list first to see if what you
need already exists. If you are still committed to creating your own
Analyzer, have a look at the source code of any one of the many samples
located in this package.

The following sections discuss some aspects of implementing your own analyzer.

Field Section Boundaries

When document.add(field)
is called multiple times for the same field name, we could say that each such call creates a new
section for that field in that document.
In fact, a separate call to
tokenStream(field,reader)
would take place for each of these so-called "sections".
However, the default Analyzer behavior is to treat all these sections as one large section.
This allows phrase search and proximity search to seamlessly cross
boundaries between these "sections".
In other words, if a certain field "f" is added like this:
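
// sketch – assumes an existing Document 'document' and IndexWriter 'indexWriter'
document.add(new TextField("f", "first ends", Field.Store.NO));
document.add(new TextField("f", "starts two", Field.Store.NO));
indexWriter.addDocument(document);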

Then, a phrase search for "ends starts" would find that document.
Where desired, this behavior can be modified by introducing a "position gap" between consecutive field "sections",
simply by overriding
Analyzer.getPositionIncrementGap(fieldName):
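
// sketch – a custom Analyzer that introduces a gap of 10 positions between field sections
Analyzer myAnalyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    return new TokenStreamComponents(new WhitespaceTokenizer());
  }

  @Override
  public int getPositionIncrementGap(String fieldName) {
    return 10; // larger than any query slop, so phrases cannot cross section boundaries
  }
};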

Token Position Increments

By default, all tokens created by Analyzers and Tokenizers have a
position increment of one.
This means that the position stored for that token in the index would be one more than
that of the previous token.
Recall that phrase and proximity searches rely on position info.

If the selected analyzer filters the stop words "is" and "the", then for a document
containing the string "blue is the sky", only the tokens "blue", "sky" are indexed,
with position("sky") = 3 + position("blue"). Now, a phrase query "blue is the sky"
would find that document, because the same analyzer filters the same stop words from
that query. But the phrase query "blue sky" would not find that document because the
position increment between "blue" and "sky" is only 1.

If this behavior does not fit the application needs, the query parser needs to be
configured to not take position increments into account when generating phrase queries.

Note that a StopFilter MUST increment the position increment in order not to generate corrupt
tokenstream graphs. Here is the logic used by StopFilter to increment positions when filtering out tokens:
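
// simplified sketch of the logic in FilteringTokenFilter (which StopFilter extends);
// posIncrAtt is a PositionIncrementAttribute obtained via addAttribute()
@Override
public boolean incrementToken() throws IOException {
  int skippedPositions = 0;
  while (input.incrementToken()) {
    if (accept()) { // for StopFilter: true if the current token is not a stop word
      if (skippedPositions != 0) {
        posIncrAtt.setPositionIncrement(posIncrAtt.getPositionIncrement() + skippedPositions);
      }
      return true;
    }
    skippedPositions += posIncrAtt.getPositionIncrement();
  }
  return false; // end of stream
}

A few more use cases for modifying position increments are: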

Inhibiting phrase and proximity matches across sentence boundaries – for this, a tokenizer that
identifies a new sentence can add 1 to the position increment of the first token of the new sentence.

Injecting synonyms – here, synonyms of a token should be added after that token,
with their position increment set to 0.
As a result, all synonyms of a token are considered to appear at exactly the
same position as that token, and that is how phrase and proximity searches see them.

Token Position Length

By default, all tokens created by Analyzers and Tokenizers have a
position length of one.
This means that the token occupies a single position. This attribute is not indexed
and thus not taken into account for positional queries, but it is used by, e.g., suggesters.

The main use case for position lengths is multi-word synonyms. With single-word
synonyms, setting the position increment to 0 is enough to denote the fact that two
words are synonyms, for example:

Term:                red    magenta
Position increment:  1      0

Given that position(magenta) = 0 + position(red), they are at the same position, so anything
working with analyzers will return the exact same result if you replace "magenta" with "red"
in the input. However, multi-word synonyms are trickier. Let's say that you want to build
a TokenStream where "IBM" is a synonym of "International Business Machines". Position increments
are not enough anymore:

Term:                IBM    International    Business    Machines
Position increment:  1      0                1           1

The problem with this token stream is that "IBM" is at the same position as "International",
although it is a synonym of "International Business Machines" as a whole. Setting
the position increment of "Business" and "Machines" to 0 wouldn't help, as it would mean
that "International" is a synonym of "Business". The only way to solve this issue is to
make "IBM" span across 3 positions; this is where position lengths come to the rescue.

Term:                IBM    International    Business    Machines
Position increment:  1      0                1           1
Position length:     3      1                1           1

This new attribute makes clear that "IBM" and "International Business Machines" start and end
at the same positions.

How to not write corrupt token streams

There are a few rules to observe when writing custom Tokenizers and TokenFilters:

The first position increment must be > 0.

Positions must not go backward.

Tokens that have the same start position must have the same start offset.

Tokens that have the same end position (taking into account the position length) must have the same end offset.

Although these rules might seem easy to follow, problems can quickly happen when chaining
badly implemented filters that play with positions and offsets, such as synonym or n-gram
filters. Here are good practices for writing correct filters:

Token filters should not modify offsets. If you feel that your filter would need to modify offsets, then it should probably be implemented as a tokenizer.

Token filters should not insert positions. If a filter needs to add tokens, then they should all have a position increment of 0.

When they remove tokens, token filters should increment the position increment of the following token.

Token filters should preserve position lengths.

TokenStream API

"Flexible Indexing" summarizes the effort of making the Lucene indexer
pluggable and extensible for custom index formats. A fully customizable
indexer means that users will be able to store custom data structures on
disk. Therefore an API is necessary that can transport custom types of
data from the documents to the indexer.

Attribute and AttributeSource

Classes Attribute and
AttributeSource serve as the basis upon which
the analysis elements of "Flexible Indexing" are implemented. An Attribute
holds a particular piece of information about a text token. For example,
CharTermAttribute
contains the term text of a token, and
OffsetAttribute contains
the start and end character offsets of a token. An AttributeSource is a
collection of Attributes with a restriction: there may be only one instance
of each attribute type. TokenStream now extends AttributeSource, which means
that one can add Attributes to a TokenStream. Since TokenFilter extends
TokenStream, all filters are also AttributeSources.

CharTermAttribute, for example, holds the term text of a token. It implements CharSequence
(providing the methods length() and charAt(), and allowing, e.g., direct
use with regular expression Matchers) and
Appendable (allowing the term text to be appended to).

Using the TokenStream API

There are a few important things to know in order to use the new API efficiently which are summarized here. You may want
to walk through the example below first and come back to this section afterwards.

Please keep in mind that an AttributeSource can only have one instance of a particular Attribute. Furthermore, if
a chain of a TokenStream and multiple TokenFilters is used, then all TokenFilters in that chain share the Attributes
with the TokenStream.

Attribute instances are reused for all tokens of a document. Thus, a TokenStream/-Filter needs to update
the appropriate Attribute(s) in incrementToken(). The consumer, commonly the Lucene indexer, consumes the data in the
Attributes and then calls incrementToken() again until it returns false, which indicates that the end of the stream
was reached. This means that in each call of incrementToken() a TokenStream/-Filter can safely overwrite the data in
the Attribute instances.

For performance reasons a TokenStream/-Filter should add/get Attributes during instantiation; i.e., create an attribute in the
constructor and store references to it in an instance variable. Using an instance variable instead of calling addAttribute()/getAttribute()
in incrementToken() will avoid attribute lookups for every token in the document.

All methods in AttributeSource are idempotent, which means calling them multiple times always yields the same
result. This is especially important to know for addAttribute(). The method takes the type (Class)
of an Attribute as an argument and returns an instance. If an Attribute of the same type was previously added, then
the already existing instance is returned, otherwise a new instance is created and returned. Therefore TokenStreams/-Filters
can safely call addAttribute() with the same Attribute type multiple times. Even consumers of TokenStreams should
normally call addAttribute() instead of getAttribute(), because addAttribute() does not fail if the TokenStream does not have this
Attribute (getAttribute() would throw an IllegalArgumentException if the Attribute is missing). More advanced code
could simply check with hasAttribute() whether a TokenStream has a given Attribute, and conditionally leave out processing for
extra performance.

Example

In this example we will create a WhitespaceTokenizer and use a LengthFilter to suppress all words that have
only two or fewer characters. The LengthFilter is part of the Lucene core and its implementation will be explained
here to illustrate the usage of the TokenStream API.

Then we will develop a custom Attribute, a PartOfSpeechAttribute, and add another filter to the chain which
utilizes the new custom attribute, and call it PartOfSpeechTaggingFilter.
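
Here is a first version of such an analyzer, together with a main() method that consumes it (a sketch; constructor signatures vary slightly across Lucene versions):

public class MyAnalyzer extends Analyzer {

  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    return new TokenStreamComponents(new WhitespaceTokenizer());
  }

  public static void main(String[] args) throws IOException {
    // text to tokenize
    final String text = "This is a demo of the new TokenStream API";

    MyAnalyzer analyzer = new MyAnalyzer();
    try (TokenStream stream = analyzer.tokenStream("field", new StringReader(text))) {
      // get the CharTermAttribute from the TokenStream
      CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);

      stream.reset();
      // print all tokens until the stream is exhausted
      while (stream.incrementToken()) {
        System.out.println(termAtt.toString());
      }
      stream.end();
    }
  }
}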

In this simple example, plain whitespace tokenization is performed. In main(), a loop consumes the stream and
prints the term text of the tokens by accessing the CharTermAttribute that the WhitespaceTokenizer provides.
Here is the output:

This
is
a
demo
of
the
new
TokenStream
API

Adding a LengthFilter

We want to suppress all tokens that have two or fewer characters. We can do that
easily by adding a LengthFilter to the chain. Only the
createComponents() method in our analyzer needs to be changed:
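
@Override
protected TokenStreamComponents createComponents(String fieldName) {
  final Tokenizer source = new WhitespaceTokenizer();
  // sketch – in some Lucene versions LengthFilter takes additional arguments
  TokenStream result = new LengthFilter(source, 3, Integer.MAX_VALUE);
  return new TokenStreamComponents(source, result);
}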

In LengthFilter, the CharTermAttribute is added and stored in the instance
variable termAtt. Remember that there can only be a single
instance of CharTermAttribute in the chain, so in our example the
addAttribute() call in LengthFilter returns the
CharTermAttribute that the WhitespaceTokenizer already added.

The tokens are retrieved from the input stream in FilteringTokenFilter's
incrementToken() method (see below), which calls LengthFilter's
accept() method. By looking at the term text in the
CharTermAttribute, the length of the term can be determined and tokens that
are either too short or too long are skipped. Note how
accept() can efficiently access the instance variable; no
attribute lookup is necessary. The same is true for the consumer, which can
simply use local references to the Attributes.
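
For reference, here is a sketch of LengthFilter itself (simplified; the incrementToken() machinery, including the skipped-position handling shown earlier for StopFilter, is inherited from FilteringTokenFilter):

public final class LengthFilter extends FilteringTokenFilter {

  private final int min, max;
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  /** Creates a LengthFilter that removes words shorter than min or longer than max. */
  public LengthFilter(TokenStream in, int min, int max) {
    super(in);
    this.min = min;
    this.max = max;
  }

  @Override
  public boolean accept() {
    final int len = termAtt.length();
    return len >= min && len <= max;
  }
}

Adding a custom Attribute

Now we will implement our own custom Attribute for part-of-speech tagging. First we need to define the interface of the new Attribute (a sketch; the set of parts of speech is illustrative):

public interface PartOfSpeechAttribute extends Attribute {
  public enum PartOfSpeech {
    Noun, Verb, Adjective, Adverb, Pronoun, Preposition, Conjunction, Article, Unknown
  }

  public void setPartOfSpeech(PartOfSpeech pos);
  public PartOfSpeech getPartOfSpeech();
}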

Now we also need to write the implementing class. The name of that class is important here: By default, Lucene
checks if there is a class with the name of the Attribute with the suffix 'Impl'. In this example, we would
consequently call the implementing class PartOfSpeechAttributeImpl.

This should be the usual behavior. However, there is also an expert API that allows changing these naming conventions:
AttributeSource.AttributeFactory. The factory accepts an Attribute interface as argument
and returns an actual instance. You can implement your own factory if you need to change the default behavior.

Now here is the actual class that implements our new Attribute. Notice that the class has to extend
AttributeImpl:
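
// sketch – recent Lucene versions also require implementing reflectWith()
public final class PartOfSpeechAttributeImpl extends AttributeImpl implements PartOfSpeechAttribute {

  private PartOfSpeech pos = PartOfSpeech.Unknown;

  public void setPartOfSpeech(PartOfSpeech pos) {
    this.pos = pos;
  }

  public PartOfSpeech getPartOfSpeech() {
    return pos;
  }

  @Override
  public void clear() {
    pos = PartOfSpeech.Unknown;
  }

  @Override
  public void copyTo(AttributeImpl target) {
    ((PartOfSpeechAttribute) target).setPartOfSpeech(pos);
  }
}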

This simple Attribute implementation has only a single variable that
stores the part of speech of a token. It extends the
AttributeImpl class and therefore implements its abstract methods
clear() and copyTo(). Now we need a TokenFilter that
can set this new PartOfSpeechAttribute for each token. In this example we
show a very naive filter that tags every word with a leading upper-case letter
as a 'Noun' and all other words as 'Unknown'.
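
A sketch of such a filter:

public static class PartOfSpeechTaggingFilter extends TokenFilter {
  PartOfSpeechAttribute posAtt = addAttribute(PartOfSpeechAttribute.class);
  CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  protected PartOfSpeechTaggingFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    posAtt.setPartOfSpeech(determinePOS(termAtt.buffer(), 0, termAtt.length()));
    return true;
  }

  // very naive part-of-speech guesser: capitalized words are tagged as nouns
  protected PartOfSpeech determinePOS(char[] term, int offset, int length) {
    if (length > 0 && Character.isUpperCase(term[offset])) {
      return PartOfSpeech.Noun;
    }
    return PartOfSpeech.Unknown;
  }
}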

Just like the LengthFilter, this new filter stores references to the
attributes it needs in instance variables. Notice how you only need to pass
in the interface of the new Attribute; instantiating the correct implementing
class is taken care of automatically.
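
The new filter is then added to the chain, again by adjusting createComponents() (a sketch):

@Override
protected TokenStreamComponents createComponents(String fieldName) {
  final Tokenizer source = new WhitespaceTokenizer();
  TokenStream result = new LengthFilter(source, 3, Integer.MAX_VALUE);
  result = new PartOfSpeechTaggingFilter(result);
  return new TokenStreamComponents(source, result);
}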

If we run the example again now, the output has not changed, which shows that adding a custom attribute to a
TokenStream/Filter chain does not affect any existing consumers, simply because they don't know about the new
Attribute. Now let's change the consumer to make use of the new PartOfSpeechAttribute and print it out:
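
// sketch of the adjusted consumer loop in main()
CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
PartOfSpeechAttribute posAtt = stream.addAttribute(PartOfSpeechAttribute.class);

stream.reset();
while (stream.incrementToken()) {
  System.out.println(termAtt.toString() + ": " + posAtt.getPartOfSpeech());
}
stream.end();

With the LengthFilter still in the chain, the output should look something like this:

This: Noun
demo: Unknown
the: Unknown
new: Unknown
TokenStream: Noun
API: Noun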

Each word is now followed by its assigned PartOfSpeech tag. Of course this is naive
part-of-speech tagging. The word 'This' should not even be tagged as a noun; it is only capitalized because it
is the first word of a sentence. Actually this is a good opportunity for an exercise. To practice the usage of the new
API, the reader could now write an Attribute and TokenFilter that specify for each word whether it was the first token
of a sentence or not. Then the PartOfSpeechTaggingFilter could make use of this knowledge and only tag capitalized words
as nouns if they are not the first word of a sentence (we know, this is still not correct behavior, but hey, it's a good exercise).
As a small hint, this is how the new Attribute class could begin:
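
// a possible starting point – the naming and methods are only a suggestion
public interface FirstTokenOfSentenceAttribute extends Attribute {
  public void setFirstToken(boolean firstToken);
  public boolean getFirstToken();
}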

Adding a CharFilter chain

Analyzers take Java Readers as input. Of course you can wrap your Readers with FilterReaders
to manipulate content, but this would have the big disadvantage that character offsets might be inconsistent with your original
text.

CharFilter is designed to allow you to pre-process input like a FilterReader would, but also
preserve the original offsets associated with those characters. This way mechanisms like highlighting still work correctly.
CharFilters can be chained.
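
For example, an Analyzer can install a CharFilter chain by overriding initReader() (a sketch; MyTokenizer, FirstCharFilter and SecondCharFilter stand in for any concrete Tokenizer and CharFilter implementations):

public class MyAnalyzer extends Analyzer {

  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    return new TokenStreamComponents(new MyTokenizer());
  }

  @Override
  protected Reader initReader(String fieldName, Reader reader) {
    // wrap the Reader in a chain of CharFilters – the innermost filter wraps the original input
    return new SecondCharFilter(new FirstCharFilter(reader));
  }
}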