Custom tokenization and Lucene's FastVectorHighlighter

NOTE: The approach described below is wrong; you may want to read the follow-up post instead.

Perhaps you have tackled this before: you wanted to use Lucene's FastVectorHighlighter (aka FVH), but since you have a custom CharFilter in your analysis chain, the highlighter fails to produce valid fragments.

In my particular case, I used HTMLStripCharFilter (available to Lucene.Net through my pet contrib project) to extract the text content from HTML pages and then pass it through the rest of the analysis chain. This confused FVH: it was reading the full content from the stored field, where the HTML was still present, while the token offsets did not take the stripped markup into account. Any other custom CharFilter added to the analysis chain will cause the same trouble.
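To make the mismatch concrete, here is a small illustration (not the post's code, and in Java rather than .NET) of why offsets computed over stripped text don't line up with the raw stored content; a trivial regex stands in for HTMLStripCharFilter:

```java
// Illustration: offsets over stripped text vs. offsets in the stored HTML.
public class OffsetMismatch {
    // Naive tag stripper, standing in for HTMLStripCharFilter.
    static String stripTags(String html) {
        return html.replaceAll("<[^>]*>", "");
    }

    public static void main(String[] args) {
        String stored = "<b>hello</b> world"; // what FVH reads back from the stored field
        String analyzed = stripTags(stored);  // what the tokenizer actually saw

        // "world" starts at offset 6 in the stripped text...
        int strippedOffset = analyzed.indexOf("world");
        // ...but at offset 13 in the stored HTML, so a highlight placed at the
        // stripped offset lands in the middle of the markup.
        int storedOffset = stored.indexOf("world");

        System.out.println(strippedOffset + " vs " + storedOffset); // prints "6 vs 13"
    }
}
```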

To overcome this, I needed to make FVH aware of any content stripping that happens before or during tokenization. All I had to do was implement a custom FragmentsBuilder, which looks as follows (.Net code; a Java version would look almost identical):
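The post's actual FragmentsBuilder isn't reproduced here, but the idea can be sketched as follows (in Java): override the hook in the FragmentsBuilder base class that loads the stored field values, and strip the HTML there, so fragments are cut from the same text the tokenizer saw. The class name, the `getFieldValues` override shown in the comment, and the regex stripper are all illustrative assumptions; exact hook names depend on your Lucene / Lucene.Net version, and a real implementation would reuse HTMLStripCharFilter:

```java
// Sketch only: stripStoredValue is the testable core idea; the Lucene
// integration point is shown as a (hypothetical) comment below.
public class HtmlFragmentsBuilderSketch {

    // Strip markup from a stored field value so that fragment offsets,
    // which were computed over the stripped token stream, line up again.
    static String stripStoredValue(String storedHtml) {
        // A regex stands in for HTMLStripCharFilter in this self-contained sketch.
        return storedHtml.replaceAll("<[^>]*>", "");
    }

    /*
     * In a real subclass of SimpleFragmentsBuilder, the override might
     * look roughly like this (hypothetical shape, version-dependent):
     *
     *   protected String[] getFieldValues(IndexReader reader, int docId, String fieldName) {
     *       String[] values = super.getFieldValues(reader, docId, fieldName);
     *       for (int i = 0; i < values.length; i++)
     *           values[i] = stripStoredValue(values[i]);
     *       return values;
     *   }
     */

    public static void main(String[] args) {
        System.out.println(stripStoredValue("<p>foo <em>bar</em></p>")); // prints "foo bar"
    }
}
```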

If you're using Lucene.Net, you'll have to make sure this patch is applied to your FVH before this will compile.

That was the easiest and fastest way to get this working. Perhaps I could make it more generic, or change the original implementation to allow this and submit it as a patch. Maybe I'll do it someday. Or you could...

Comments

Itamar, please see Sujit Pal's approach to customizing the FVH (in Java, but the principle is similar):

FVH works well with custom analyzers too. If you use the same analyzer while indexing and searching, there should be no need for HtmlFragmentsBuilder.
e.g.
public class HtmlStripAnalyzer : Analyzer
{
    public override TokenStream TokenStream(string fieldName, TextReader reader)
    {
        return new LowerCaseFilter(
            new WhitespaceTokenizer(
                new HTMLStripCharFilter(Lucene.Net.Analysis.CharReader.Get(reader))));
    }
}