I got the chance to do a barebones Lucene implementation for a client with 40 million records. They wanted to introduce faceting on the author field. I was tempted to just go ahead with Solr, but that would have been counterproductive for the project: they didn't need the full package Solr provides. My client only wanted to build facets on top of their existing index with minimal changes. That made Bobo the obvious choice. Bobo is remarkably simple to use, and it delivers decent performance as well.
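To give a sense of how little code this takes, here is a minimal sketch of author faceting with Bobo-Browse on top of an existing Lucene index (Lucene 3.x-era API). The index path, result counts, and query are illustrative, not the client's actual setup:

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.store.FSDirectory;

import com.browseengine.bobo.api.BoboBrowser;
import com.browseengine.bobo.api.BoboIndexReader;
import com.browseengine.bobo.api.BrowseFacet;
import com.browseengine.bobo.api.BrowseRequest;
import com.browseengine.bobo.api.BrowseResult;
import com.browseengine.bobo.api.FacetSpec;
import com.browseengine.bobo.facets.FacetHandler;
import com.browseengine.bobo.facets.impl.SimpleFacetHandler;

public class AuthorFacets {
    public static void main(String[] args) throws Exception {
        // Open the existing Lucene index; path is a placeholder.
        IndexReader reader =
            IndexReader.open(FSDirectory.open(new File("/path/to/index")));

        // Wrap the plain reader with Bobo; one facet handler per facet field.
        List<FacetHandler<?>> handlers =
            Arrays.<FacetHandler<?>>asList(new SimpleFacetHandler("author"));
        BoboIndexReader boboReader = BoboIndexReader.getInstance(reader, handlers);

        // A browse request is a regular query plus facet specs.
        BrowseRequest req = new BrowseRequest();
        req.setQuery(new MatchAllDocsQuery());
        req.setCount(10);

        FacetSpec spec = new FacetSpec();
        spec.setOrderBy(FacetSpec.FacetSortSpec.OrderHitsDesc); // most frequent first
        spec.setMaxCount(20);                                   // top 20 authors
        req.setFacetSpec("author", spec);

        BrowseResult result = new BoboBrowser(boboReader).browse(req);
        for (BrowseFacet f : result.getFacetMap().get("author").getFacets()) {
            System.out.println(f.getValue() + " (" + f.getFacetValueHitCount() + ")");
        }
        boboReader.close();
    }
}
```

The only change to the existing search stack is wrapping the `IndexReader` in a `BoboIndexReader`, which is exactly why this approach beat a full Solr migration here.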

The biggest roadblock we faced with this implementation was the memory footprint. When the author index was loaded through Bobo, it allocated 12G of memory. We had initially set our young generation size way too small, so the GC algorithm we selected, CMS (Concurrent Mark Sweep), fell back to a full, stop-the-world collection every 2-3 searches. Each full collection halted the entire service for about a minute before returning. That was unacceptable; it pretty much killed search altogether. It appeared that Bobo allocates quite a bit of temporary memory to compute facet counts. Perhaps the nature of our data, with heavy overlap between authors, caused the excessive usage. We gradually increased the young generation size from 2G (yes, I know, very small) to around 8G, which gave us a stable system with virtually zero full collections.
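For reference, the kind of JVM configuration we converged on can be sketched as flags like the following. Only the 8G young generation comes from the tuning described above; the total heap size, jar name, and log path are illustrative assumptions:

```shell
# Illustrative CMS setup: the 8G young generation matches the tuning above;
# the 24G heap (12G index plus working space) and jar name are assumptions.
java -Xms24g -Xmx24g \
  -Xmn8g \
  -XX:+UseConcMarkSweepGC \
  -XX:+UseParNewGC \
  -XX:+PrintGCDetails -Xloggc:gc.log \
  -jar search-service.jar
# -Xmn8g         sizes the young generation (sets NewSize = MaxNewSize = 8G)
# UseParNewGC    the parallel young-generation collector that pairs with CMS
# gc.log         lets you verify that the full collections are actually gone
```

Sizing the young generation large enough that Bobo's short-lived facet-counting allocations die there, instead of being promoted into the CMS-managed old generation, is what made the full collections disappear.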