Advanced Text Indexing with Lucene

Lucene is a free text-indexing and -searching API written in Java. To
appreciate indexing techniques described later in this article, you need a
basic understanding of Lucene's index structure. As I mentioned in the previous
article in this series, a typical Lucene index is stored in a single
directory in the filesystem on a hard disk.

The core elements of such an index are segments, documents, fields, and
terms. Every index consists of one or more segments. Each segment contains
one or more documents. Each document has one or more fields, and each field
contains one or more terms. Each term is a pair of Strings
representing a field name and a value. A segment consists of a series of
files. The exact number of files that constitute each segment varies from
index to index, and depends on the number of fields that the index contains.
All files belonging to the same segment share a common prefix and differ in the
suffix. You can think of a segment as a sub-index, although each segment is
not a fully-independent index.
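
For illustration, the directory holding a small index that consists of a single
segment with two fields might contain files such as these (the segment prefix
and the exact set of files vary with the index contents and the Lucene version;
this listing assumes the non-compound file format):

_lfyc.f1
_lfyc.f2
_lfyc.fdt
_lfyc.fdx
_lfyc.fnm
_lfyc.frq
_lfyc.prx
_lfyc.tii
_lfyc.tis
deletable
segments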

Note that all files that belong to this segment start with a common prefix:
_lfyc. Because this index contains two fields, you will notice
two files with the fN suffix, where N is a number. If
this index had three fields, a file named _lfyc.f3 would also be
present in the index directory.

The number of segments in an index is fixed once the index is fully built,
but it varies while indexing is in progress. Lucene adds segments as new
documents are added to the index, and merges segments every so often. In the
next section we will learn how to control creation and merging of segments in
order to improve indexing speed.

For more information about the files that make up a Lucene index, please see
the File Formats document on Lucene's web site. You can find the URL in the Reference section at the end of this article.

The previous article demonstrated how to index text using the
LuceneIndexExample class. Because the example was so basic, there
was no need to think about speed. If you are using Lucene in a non-trivial
application, you will want to ensure optimal indexing performance. The
bottleneck of a typical text-indexing application is the process of writing
index files onto a disk. Therefore, we need to instruct Lucene to be smart
about adding and merging segments while indexing documents.

When new documents are added to a Lucene index, they are initially stored in
memory instead of being immediately written to the disk. This is done for
performance reasons. The simplest way to improve Lucene's indexing performance
is to adjust the value of IndexWriter's mergeFactor
instance variable. This value tells Lucene how many documents to store in
memory before writing them to the disk, as well as how often to merge multiple
segments together. With the default value of 10, Lucene will store 10
documents in memory before writing them to a single segment on the disk. A
mergeFactor of 10 also means that once 10 segments of comparable size
have accumulated on the disk, Lucene will merge them into a single, larger
segment. (There is a small exception to this rule, which I shall explain
shortly.)

For instance, if we set mergeFactor to 10, a new segment will
be created on the disk for every 10 documents added to the index. When the
10th segment of size 10 is added, all 10 will be merged into a single segment
of size 100. When 10 such segments of size 100 have been added, they will be
merged into a single segment containing 1000 documents, and so on. Therefore,
at any time, there will be no more than 9 segments at each size level (10, 100,
1000 documents, and so on).

The exception noted earlier has to do with another IndexWriter
instance variable: maxMergeDocs. While merging segments, Lucene
will ensure that no segment containing more than maxMergeDocs
documents is created. For instance, if we set maxMergeDocs to 1000, when we
add the 10,000th document, instead of merging multiple segments into a single
segment of size 10,000, Lucene will create a 10th segment of size 1000, and
keep adding segments of size 1000 for every 1000 documents added.

The default value of maxMergeDocs is
Integer.MAX_VALUE. In my experience, one rarely needs to change
this value.

Now that I have explained how mergeFactor and
maxMergeDocs work, you can see that using a higher value for
mergeFactor will cause Lucene to use more RAM, but will let Lucene
write data to disk less frequently, which will speed up the indexing process.
A smaller mergeFactor will use less memory and will cause the
index to be updated more frequently, which will make it more up-to-date, but
will also slow down the indexing process. Similarly, a larger
maxMergeDocs is better suited for batch indexing, and a smaller
maxMergeDocs is better for more interactive indexing.

To get a better feel for how different values of mergeFactor
and maxMergeDocs affect indexing speed, take a look at the
IndexTuningDemo class below. This class takes three arguments on
the command line: the total number of documents to add to the index, the value
to use for mergeFactor, and the value to use for
maxMergeDocs. All three arguments must be specified, must be
integers, and must be in this order. In order to keep the code short and
clean, there are no checks for improper usage.
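
A minimal sketch of such a class, assuming (as elsewhere in this article) that
mergeFactor and maxMergeDocs are public instance variables
on IndexWriter, might look like this:

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class IndexTuningDemo
{
    public static void main(String[] args) throws Exception
    {
        // arguments: number of documents, mergeFactor, maxMergeDocs (in that order)
        int docsInIndex = Integer.parseInt(args[0]);

        // create an index called 'index' in a temporary directory
        // (as with the other examples, create that directory beforehand)
        String indexDir =
            System.getProperty("java.io.tmpdir", "tmp") +
            System.getProperty("file.separator") + "index";
        Analyzer analyzer = new StopAnalyzer();
        IndexWriter writer = new IndexWriter(indexDir, analyzer, true);

        // tune the writer before adding any documents
        writer.mergeFactor = Integer.parseInt(args[1]);
        writer.maxMergeDocs = Integer.parseInt(args[2]);

        long start = System.currentTimeMillis();
        for (int i = 0; i < docsInIndex; i++)
        {
            Document doc = new Document();
            doc.add(Field.Text("fieldname", "Bibamus, moriendum est"));
            writer.addDocument(doc);
        }
        writer.close();
        long stop = System.currentTimeMillis();

        System.out.println("Total time: " + (stop - start) + " ms");
    }
}

You could then invoke it twice, for example like this (assuming the class and
the Lucene JAR are on your classpath; the last argument only needs to be large
enough to stay out of the way):

prompt> java IndexTuningDemo 100000 10 1000000
prompt> java IndexTuningDemo 100000 1000 1000000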

Both invocations create an index with 100,000 documents, but the first one
takes much longer to complete. That is because it uses a mergeFactor
of 10 (the default value), which causes Lucene to write documents to the disk
much more often than the mergeFactor of 1000 used in the second
invocation.

Note that while these two variables can help improve indexing performance,
they also affect the number of file descriptors that Lucene uses, and can
therefore cause the "Too many open files" exception. If you get this error, you
should first see if you can optimize the index, as will be described shortly.
Optimization may help indexes that contain more than one segment. If
optimizing the index does not solve the problem, you could try increasing the
maximum number of open files allowed on your computer. This is usually done at
the operating-system level and varies from OS to OS. If you are using Lucene
on a computer that uses a flavor of the UNIX OS, you can see the
maximum number of open files allowed from the command line.

Under bash, you can see the current settings with the built-in
ulimit command:

prompt> ulimit -n

Under tcsh, the equivalent is:

prompt> limit descriptors

To change the value under bash, use this:

prompt> ulimit -n <max number of open files here>

Under tcsh, use the following:

prompt> limit descriptors <max number of open files here>

To estimate a setting for the maximum number of open files allowed while
indexing, keep in mind that the maximum number of files Lucene will open is
(1 + mergeFactor) * FilesPerSegment.

For instance, with a default mergeFactor of 10 and an index of
1 million documents, Lucene will require 110 open files on an unoptimized
index. When IndexWriter's optimize() method is
called, all segments are merged into a single segment, which minimizes the
number of open files that Lucene needs.

In the previous section, I mentioned that new documents added to an index
are stored in memory before being written to the disk. You also saw how to
control the rate at which this is done via IndexWriter's instance
variables. The Lucene distribution contains the RAMDirectory
class, which gives even more control over this process. This class implements
the Directory interface, just like FSDirectory does,
but stores indexed documents in memory, while FSDirectory stores
them on disk.

Because RAMDirectory does not write anything to the disk, it
is faster than FSDirectory. However, since computers usually come
with less RAM than hard disk space, RAMDirectory is not suitable
for very large indices.

The MemoryVsDisk class demonstrates how to use
RAMDirectory as an in-memory buffer in order to improve the
indexing speed.
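
A minimal sketch of that idea (the class name and document count are
illustrative) first adds documents to a RAMDirectory-based index and
then merges it into the on-disk index in a single step with
IndexWriter's addIndexes(Directory[]) method:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class RamBufferDemo
{
    public static void main(String[] args) throws Exception
    {
        // the final, on-disk index lives in a temporary directory
        // (as with the other examples, create that directory beforehand)
        String indexDir =
            System.getProperty("java.io.tmpdir", "tmp") +
            System.getProperty("file.separator") + "index";
        Analyzer analyzer = new StopAnalyzer();

        // first, build the index entirely in memory
        Directory ramDir = new RAMDirectory();
        IndexWriter ramWriter = new IndexWriter(ramDir, analyzer, true);
        for (int i = 0; i < 10000; i++)
        {
            Document doc = new Document();
            doc.add(Field.Text("fieldname", "Bibamus, moriendum est"));
            ramWriter.addDocument(doc);
        }
        ramWriter.close();

        // then merge the in-memory index into the on-disk index in one step
        IndexWriter diskWriter = new IndexWriter(indexDir, analyzer, true);
        diskWriter.addIndexes(new Directory[] { ramDir });
        diskWriter.close();
    }
}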

If you want to improve indexing performance with Lucene, and manipulating
IndexWriter's mergeFactor and
maxMergeDocs proves insufficient, you can use
RAMDirectory to create in-memory indices. You could create a
multi-threaded indexing application that uses multiple
RAMDirectory-based indices in parallel, one in each thread, and
merges them into a single index on the disk using IndexWriter's
addIndexes(Directory[]) method. Taking this idea further, a
sophisticated indexing application could even create in-memory indices on
multiple computers in parallel. To make full use of this approach, one needs
to ensure that the thread that performs the actual indexing on the disk is
never idle, as that translates to wasted time.
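
As a rough sketch of that multi-threaded variant (the class name, thread count,
and per-thread document count are illustrative assumptions), each worker thread
below fills its own RAMDirectory-based index, and the main thread
then merges all of them into the on-disk index:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class ParallelRamIndexer
{
    public static void main(String[] args) throws Exception
    {
        // the final, on-disk index lives in a temporary directory
        // (as with the other examples, create that directory beforehand)
        String indexDir =
            System.getProperty("java.io.tmpdir", "tmp") +
            System.getProperty("file.separator") + "index";
        final Analyzer analyzer = new StopAnalyzer();

        final int threadCount = 4;
        final int docsPerThread = 1000;
        final Directory[] ramDirs = new Directory[threadCount];
        Thread[] workers = new Thread[threadCount];

        for (int i = 0; i < threadCount; i++)
        {
            final int id = i;
            ramDirs[id] = new RAMDirectory();
            workers[id] = new Thread()
            {
                public void run()
                {
                    try
                    {
                        // each thread writes only to its own in-memory index
                        IndexWriter ramWriter =
                            new IndexWriter(ramDirs[id], analyzer, true);
                        for (int j = 0; j < docsPerThread; j++)
                        {
                            Document doc = new Document();
                            doc.add(Field.Text("fieldname",
                                "document " + id + "-" + j));
                            ramWriter.addDocument(doc);
                        }
                        ramWriter.close();
                    }
                    catch (Exception e)
                    {
                        e.printStackTrace();
                    }
                }
            };
            workers[id].start();
        }

        // wait for all in-memory indices to be built
        for (int i = 0; i < threadCount; i++)
        {
            workers[i].join();
        }

        // merge them into a single on-disk index in one call
        IndexWriter diskWriter = new IndexWriter(indexDir, analyzer, true);
        diskWriter.addIndexes(ramDirs);
        diskWriter.close();
    }
}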

While multiple threads or processes can search (i.e. read) a single Lucene
index simultaneously, only a single thread or process is allowed to modify
(write) an index at a time. If your indexing application uses multiple
indexing threads that are adding documents to the same index, you must
serialize their calls to the IndexWriter.addDocument(Document)
method. Leaving these calls unserialized may cause threads to get in each
other's way and modify the index in unwanted ways, causing Lucene to throw
exceptions. In addition, to prevent misuse, Lucene uses file-based locks in
order to stop multiple threads or processes from creating
IndexWriters with the same index directory at the same time.
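
One simple way to serialize those calls is to synchronize on the shared
IndexWriter instance; the fragment below assumes that
writer is the single writer shared by all indexing threads and that
doc is the Document the current thread has just built:

// each indexing thread adds its document inside a synchronized block
synchronized (writer)
{
    writer.addDocument(doc);
}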

The file-based lock is easy to demonstrate. The following code tries to open
two IndexWriters on the same index directory; the second constructor
call throws an exception because the first writer still holds the lock:

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopAnalyzer;

/**
 * Demonstrates how Lucene uses locks to prevent multiple processes from
 * writing to the same index at the same time.
 * Note: before running this for the first time, manually create the
 * directory called 'index' in your temporary directory.
 */
public class DoubleTrouble
{
    public static void main(String[] args) throws Exception
    {
        // create an index called 'index' in a temporary directory
        String indexDir =
            System.getProperty("java.io.tmpdir", "tmp") +
            System.getProperty("file.separator") + "index";

        Analyzer analyzer = new StopAnalyzer();
        IndexWriter firstWriter = new IndexWriter(indexDir, analyzer, true);

        // the following line will cause an exception
        IndexWriter secondWriter = new IndexWriter(indexDir, analyzer, false);

        // the following two lines will never even be reached
        firstWriter.close();
        secondWriter.close();
    }
}

I have mentioned index optimization a few times in this article, but I have
not yet explained it. To optimize an index, one has to call
optimize() on an IndexWriter instance. When this
happens, all in-memory documents are flushed to the disk and all index segments
are merged into a single segment, reducing the number of files that make up the
index. However, optimizing an index does not help improve indexing
performance. As a matter of fact, optimizing an index during the indexing
process will only slow things down. Despite this, optimizing may sometimes be
necessary in order to keep the number of open files under control. For
instance, optimizing an index during the indexing process may be needed in
situations where searching and indexing happen concurrently, since both
processes keep their own set of open files. A good rule of thumb is that if
more documents will be added to the index soon, you should avoid calling
optimize(). If, on the other hand, you know that the index will
not be modified for a while, and the index will only be searched, you should
optimize it. That will reduce the number of segments (files on the disk), and
consequently improve search performance--the fewer files Lucene has to open
while searching, the faster the search.

To illustrate the effect of optimizing an index, we can use the
IndexOptimizeDemo class:

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

/**
 * Creates an index called 'index' in a temporary directory.
 * Pass the '-o' command line argument to optimize the index at the end;
 * run the class without arguments (or with any other value) to leave the
 * index unoptimized.
 *
 * Note: before running this for the first time, manually create the
 * directory called 'index' in your temporary directory.
 */
public class IndexOptimizeDemo
{
    public static void main(String[] args) throws Exception
    {
        // create an index called 'index' in a temporary directory
        String indexDir =
            System.getProperty("java.io.tmpdir", "tmp") +
            System.getProperty("file.separator") + "index";

        Analyzer analyzer = new StopAnalyzer();
        IndexWriter writer = new IndexWriter(indexDir, analyzer, true);

        // add enough documents to leave the unoptimized index with
        // more than one segment
        for (int i = 0; i < 15; i++)
        {
            Document doc = new Document();
            doc.add(Field.Text("fieldname", "Bibamus, moriendum est"));
            writer.addDocument(doc);
        }

        // optimize only when the caller asked for it with '-o'
        if (args.length > 0 && "-o".equalsIgnoreCase(args[0]))
        {
            System.out.println("Optimizing the index...");
            writer.optimize();
        }
        writer.close();
    }
}

As you can see from the class Javadoc and code, the created index will be
optimized only if the -o command line argument is given. To create an
unoptimized index with this class, run it without any arguments (this assumes
the compiled class and the Lucene JAR are on your classpath):
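
prompt> java IndexOptimizeDemo

To create an optimized index instead, pass the -o flag:

prompt> java IndexOptimizeDemo -o

If you list the index directory after each run, you will see that the optimized
index is made up of fewer files.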

This article has discussed the basic structure of a Lucene index and has
demonstrated a few techniques for improving indexing performance. You also
learned about potential problems with indexing in multi-threaded environments,
about what it means to optimize an index, and how this affects indexing. This
knowledge should allow you to gain more control over Lucene's indexing process
to improve its performance. The next article will examine Lucene's text-searching capabilities.