I’m a Pivotal Labs developer at our NYC offices working on the Casebook development team. Casebook is a child-welfare-focused web application used by governments and non-profit organizations. Our users are social workers, caseworkers, and their leadership who work with children, families, and the broader community to provide services that ensure children are safe and healthy.

Search worries

Our users need to quickly find accurate information about the people on their workload so that they can respond appropriately in crises and keep a high-quality written record of their work with the children and families.

Solr powered Casebook’s initial search engine. Solr is built in Java, so we set up our application servers to run Java alongside our Ruby on Rails web application. We maintained a real-time copy of our important searchable data, such as people’s names, in our Solr index.

Our Solr-based approach ran into a few problems. Sometimes users would see outdated search results or, even worse, errors. This was annoying and also potentially damaging to our users’ ability to keep up with emergency situations.

Keeping our data synced in multiple locations caused most of our problems with Solr. Some of our more complex code paths would update the database but not propagate those changes to the search index. Users also saw search-related error messages whenever there were communication problems with our Solr instances.

We had some fail-safes in place.

We wrote code that automatically restarted the Solr instances when they crashed. When we found the search data diverged from our application data, we manually rebuilt the search index to get the two data stores back in sync. These solutions just managed our problems rather than solving them.

These problems aren’t unique to Solr. Other tools like Lucene, Ferret, and Sphinx have the same shortcomings when combined with Ruby on Rails.

Using the database itself as the search index

So we decided to try using the database itself as the search index. We use a PostgreSQL database, and PostgreSQL 8.3 and later have built-in support for full-text search. PostgreSQL is a popular, mature SQL database that works great with Active Record. If you use Heroku, you are already on a PostgreSQL 8.3 database that supports full-text search.

Since full-text search in PostgreSQL uses fairly complex SQL queries, we decided that the best approach would be to take advantage of Active Record’s scopes. The idea is to make it easy to write code that looks like this:
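A minimal sketch of the kind of scope we had in mind (the model and scope names here are illustrative, and the options reflect pg_search's API):

```ruby
# A model whose search scope is powered by PostgreSQL full-text search.
class Book < ActiveRecord::Base
  include PgSearch  # mixes in the pg_search_scope class method

  # Defines Book.search_title, which searches the "title" column.
  pg_search_scope :search_title, :against => :title
end

# The generated scope behaves like any other Active Record scope,
# so it can be chained with other scopes and finder options:
Book.search_title("Winnie the Pooh").all(:limit => 10)
```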

Adding more features

We took cues from the texticle gem to figure out how to generate our SQL code. Thanks to Aaron Patterson for this wonderful gem! However, our Solr solution had several features that texticle and the basic PostgreSQL full-text search alone don’t currently provide, like ignoring diacritical marks (accents like ü), searching for soundalikes, and searching for words that are misspelled.

We spent a day or two trying to hack texticle into something we could use, but realized that if we started from scratch we could more easily build a gem that could combine more than one PostgreSQL feature into a single search scope. That way, we could improve our Book.search_title scope by using unaccent to ignore accent marks, Double Metaphone to match soundalikes, and trigrams to match misspellings.
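Combining those features in one scope might look something like this (a sketch using pg_search's `:using` and `:ignoring` options; the model is illustrative):

```ruby
class Book < ActiveRecord::Base
  include PgSearch

  # One scope, three PostgreSQL features layered together:
  #   :tsearch    - built-in full-text search
  #   :dmetaphone - Double Metaphone, matches soundalikes ("Geoff" ~ "Jeff")
  #   :trigram    - trigram similarity, matches misspellings ("Gandolf" ~ "Gandalf")
  # :ignoring => :accents uses unaccent so "ü" matches "u".
  pg_search_scope :search_title,
                  :against  => :title,
                  :using    => [:tsearch, :dmetaphone, :trigram],
                  :ignoring => :accents
end
```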

Except for :tsearch, the built-in full-text search implementation, these features require you to install certain contrib packages into your database. For now, installing them is an exercise for the reader, but we hope to automate this process soon.
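On PostgreSQL 9.1 and later, the contrib modules can be enabled with CREATE EXTENSION, for example from a migration like the hypothetical one below (on 8.3 through 9.0 you instead run the SQL scripts shipped in the contrib directory, and unaccent itself is only available from 9.0 on):

```ruby
# Hypothetical migration enabling the contrib modules pg_search can use.
# Requires PostgreSQL 9.1+ for CREATE EXTENSION.
class InstallPgSearchExtensions < ActiveRecord::Migration
  def self.up
    execute "CREATE EXTENSION IF NOT EXISTS pg_trgm"       # trigram matching
    execute "CREATE EXTENSION IF NOT EXISTS fuzzystrmatch" # Double Metaphone
    execute "CREATE EXTENSION IF NOT EXISTS unaccent"      # accent folding
  end

  def self.down
    execute "DROP EXTENSION IF EXISTS unaccent"
    execute "DROP EXTENSION IF EXISTS fuzzystrmatch"
    execute "DROP EXTENSION IF EXISTS pg_trgm"
  end
end
```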

Our gem development approach

We started with our application's existing Solr-based search intact and boosted our test coverage to cover all of the different cases (misspellings, soundalikes, etc.) for some of our most complicated searchable models. Once we were satisfied with the coverage, we removed the Solr search code entirely and were left with dozens of failing tests.

We then created a blank gem and started adding features to it one by one to get each of our application's tests to pass. First we made sure that simple cases were solid, such as when the search query string exactly matches the searchable text.
Then we moved on to the complicated parts.

Our existing application uses Ruby 1.8 and Rails 2.3, while a second, newer project uses Ruby 1.9 and Rails 3. So we made sure that all of our code worked in both environments. I will write another blog post soon about how we used two instances of autotest to make this easy.

The great thing about this approach is that we started by defining a set of behaviors based on what our real-world application actually needed, which kept our code lean. Whenever we needed a new capability, we would just add an option to one of our calls to pg_search_scope and code until it worked as desired. We also got to define our own syntax for the pg_search_scope method; by mimicking the Active Record scope syntax, we hope we have created something that is easy to pick up.

User impact

Our users noticed the difference after we deployed our updated search implementation. Previously, we had been rebuilding the search index or troubleshooting a search-related bug a few times a week; we haven't seen a single search-related help request from our users since we made the changes. In addition, our developers are happier because code deployments are much more reliable and easier to understand.

Overall, the project has been a resounding success!

Getting involved

pg_search isn’t complete yet (will it ever be?). There are many more features we’d like to have to improve performance, search quality, and overall user experience.

For example, right now our developers have to hand-build SQL indexes to improve query speed. pg_search should automatically generate those indexes for us based on which PostgreSQL features are in use.
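As a sketch of the kind of index we currently hand-build (a GIN expression index for a :tsearch scope; the table, column, and dictionary are illustrative, and the expression must match the one pg_search generates for the scope to use the index):

```ruby
class AddBooksTitleSearchIndex < ActiveRecord::Migration
  def self.up
    # GIN expression index so full-text queries against "title"
    # don't have to compute to_tsvector for every row at query time.
    execute <<-SQL
      CREATE INDEX books_title_search_idx
      ON books
      USING gin(to_tsvector('simple', coalesce(title, '')))
    SQL
  end

  def self.down
    execute "DROP INDEX books_title_search_idx"
  end
end
```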

That's just one example. We'd love to hear your ideas about how pg_search can improve to meet users' needs.