DataStax Developer Blog

Caching in Cassandra 1.1

Cassandra has offered built-in key and row caches for a long time — since versions 0.5 and 0.6, respectively. In Cassandra 1.1, we’ve completely revamped cache tuning to make these caches easier to use effectively, so this is a good time to review how they work and when to use them.

Why integrate caching with the database?

Integrated caching provides several benefits over a multi-tier design:

Distributed by default: Cassandra takes care of distributing your data around the cluster for you. You don’t need to rely on an external library like ketama to spread your cache across multiple servers.

Architectural simplicity: The fewer moving pieces in your architecture, the fewer combinations of things there are to go wrong. Troubleshooting a separate caching tier is something experienced ops teams prefer to avoid.

Cache coherence: When your cache is external to your database, your client must update both. Whether it updates the cache first or the database first, a server failure at the wrong moment will leave you with data in the cache that is indefinitely older (or newer) than what is in the database. With an integrated cache, cached data always exactly matches what’s in the database.

Redundancy: If you temporarily lose a memcached server, your database will receive the read load from that set of keys until that server comes back online. With Cassandra’s fully distributed cache, your client can read from another (cached) replica of the data instead, reducing the impact on your cluster.

Solving the cold start problem: the cold start problem refers to the scenario where, after an outage such as a power failure, all your caches have restarted and are thus empty (“cold”), so your database is hit with the entire application read load, including a storm of retries as initial requests time out. Usually the reason you have memcached in the first place is that your database can’t handle this entire read load, so your ops team will be sweating bullets trying to throttle things until the caches are reheated. Since Cassandra provides a durable database behind the cache, it can save the cache contents to disk periodically and read them back in on restart, so you never have to start with a cold cache.

Thus, most high-profile Cassandra users like Netflix make heavy use of Cassandra’s key and row caches.

What are key and row caches?

The key cache is essentially a cache of the primary key index for a Cassandra table. It will save CPU time and memory over relying on the OS page cache for this purpose. However, enabling just the key cache will still result in disk (or OS page cache) activity to actually read the requested data rows.

The row cache is more similar to a traditional cache like memcached: when a row is accessed, the entire row is pulled into memory (merging from multiple sstables, if necessary) and cached so that further reads against that row can be satisfied without hitting disk at all.

Typically you’ll enable exactly one of the key and row caches per table. The main exception is for archive tables that are infrequently read, where you should disable caching entirely.

Configuring caches in Cassandra 1.1

The main settings are key_cache_size_in_mb and row_cache_size_in_mb in cassandra.yaml, as well as settings for how often to save the caches to disk.
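As a sketch, the relevant cassandra.yaml entries look like this (the sizes and save periods shown are illustrative values, not recommendations):

```
# cassandra.yaml -- global cache settings (illustrative values)
key_cache_size_in_mb: 100      # leave empty for the automatic default
row_cache_size_in_mb: 200      # 0 disables the row cache
key_cache_save_period: 14400   # seconds between key cache saves to disk
row_cache_save_period: 0       # 0 disables periodic row cache saving
```

Because these sizes are global, a single budget covers all tables on the node, and Cassandra decides how to divide it between them.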

Unlike in earlier Cassandra versions, cache sizes do not need to be specified per table. Just set caching to all, keys_only, rows_only, or none (the default is keys_only), and Cassandra will weight the cached data by size and access frequency, making optimal use of the cache memory without manual tuning.
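For example, the per-table caching mode can be set from cassandra-cli (the keyspace and column family names here are made up for illustration):

```
use MyKeyspace;
update column family users with caching = 'rows_only';
update column family archive with caching = 'none';
```

This keeps per-table control over *which* cache a table uses, while the global size settings above control *how much* memory the caches get in total.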

There is one other setting to know about: row_cache_provider specifies which implementation to use for the row cache. The default, SerializingCacheProvider, is more memory-efficient — between 5x and 10x for applications that are not blob-intensive — but can perform worse under update-heavy workloads, since it invalidates cached rows on update instead of updating them in place the way ConcurrentLinkedHashCacheProvider does.
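In cassandra.yaml this is a one-line choice (a sketch; since SerializingCacheProvider is already the default, the line is only needed to change it):

```
# cassandra.yaml -- row cache implementation
row_cache_provider: SerializingCacheProvider
# Alternative on-heap provider that updates cached rows in place:
# row_cache_provider: ConcurrentLinkedHashCacheProvider
```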

OpsCenter can tell you if your caches are effective; typically, you want to see at least a 90% hit rate for row caches. If you can’t achieve that for a given table, you may want to consider switching to just the key cache instead, freeing up cache memory for other tables that can use it more effectively.
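If you are not running OpsCenter, you can get a rough read on the same numbers from the command line; in 1.1 the global cache statistics appear in nodetool info (the exact output format varies by version):

```
nodetool -h 127.0.0.1 info
# Look for lines like:
#   Key Cache : size ..., capacity ..., ... hits, ... requests, ... recent hit rate
#   Row Cache : size ..., capacity ..., ... hits, ... requests, ... recent hit rate
```

Dividing hits by requests over a sampling window gives you the hit rate to compare against the 90% rule of thumb above.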

Limitations and future work

The main limitation is that the row cache pulls entire (physical) rows into memory, making it unable to cache queries on compound-key rows like “give me the most recent 50 data points from a series of millions.”

We’re working on a solution for this in Cassandra 1.2. Two approaches have been proposed.

Comments

As you have mentioned, “Unlike in earlier Cassandra versions, cache sizes do not need to be specified per table”: this setting is now global across all the keyspaces and column families.
But consider a case where I have some hot rows in a particular CF and want to enable the row cache for it, while for the rest of the CFs I want no cache, or only the key cache. I no longer have this control over caching, which I think was nice.
Is there any provision to handle this kind of scenario?

How do Cassandra key caches work when the same key exists in more than one sstable?

1) As per DataStax, the key cache stores the primary key index entry for a row key.

2) In our case we have enough memory allocated for the key cache, and the same key is present in multiple sstables with different columns.

3) If calls are made to access this same key across multiple sstables, how are the index entries stored in the key cache? Will it store index entries for all the sstables, or just for the last sstable from which the key was accessed most recently?