Monday, June 17, 2013

Application Caching

Caching is one of the most important performance optimizations you can make. It is often left until the end, patched in as an afterthought, and badly botched. I have seen many implementations that would qualify for posts on TDWTF.

I'll be writing a series of blog posts about caching in your application, from which technologies to use down to how to model caching at the object level.
In this post, I'll discuss caching technologies, their pros and cons, and some as-scientific-as-I-can metrics.

Memory Cache

The new .NET 4.0 MemoryCache is really nice. You can configure the size of named caches in your config file, and there are even events you can subscribe to when an item is removed. It is very fast (the fastest option here), thread-safe, and has a mock-able base class (ObjectCache) that you can inject into your repository. The only drawback is that it is neither distributed nor synchronized across your servers: each server must fetch and cache the data independently, and when the data is invalidated, you have to notify every server that the cached item must be refreshed.
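As a minimal sketch of the API — the expiration policy, the removal callback mentioned above, and the ObjectCache base class you would inject (the key and value here are made up for illustration):

```csharp
using System;
using System.Runtime.Caching;

class MemoryCacheDemo
{
    static void Main()
    {
        // MemoryCache.Default is a process-wide, thread-safe cache.
        // ObjectCache is the abstract base class, which makes it easy to mock.
        ObjectCache cache = MemoryCache.Default;

        var policy = new CacheItemPolicy
        {
            // Evict this entry five minutes after it is added.
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5),
            // Fires when the item is removed: expired, evicted under
            // memory pressure, or explicitly deleted.
            RemovedCallback = args =>
                Console.WriteLine("Removed {0} ({1})",
                    args.CacheItem.Key, args.RemovedReason)
        };

        cache.Set("user:42", "Alice", policy);

        // Get returns null on a miss, so callers must handle that case.
        var name = (string)cache.Get("user:42");
        Console.WriteLine(name);
    }
}
```

Because the cache is per-process, this exact code running on two web servers produces two independent copies of "user:42" — which is the synchronization problem described above.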

AppFabric

This distributed cache was a huge disappointment in just about every way: it drastically underperforms, it is difficult to work with, and its feature set is lacking. I highly recommend staying away.

Memcached

Memcached has been around for a while now. It has your basic operational feature set, open-source multi-platform client libraries, and an incredible web-based interface. From an operations standpoint, it is the best solution out there: it is incredibly easy to manage your cache cluster, add and remove nodes, and so on.
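A sketch of that basic feature set from .NET, assuming the EnyimMemcached client library (one of the common open-source clients) with a node such as localhost:11211 listed in the app.config section it reads; the key and value are illustrative:

```csharp
using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

class MemcachedDemo
{
    static void Main()
    {
        // The parameterless constructor reads the memcached client section
        // from app.config, which lists the nodes in the cluster.
        using (var client = new MemcachedClient())
        {
            // StoreMode.Set overwrites any existing value;
            // StoreMode.Add would fail if the key already exists.
            client.Store(StoreMode.Set, "greeting", "hello");

            // Get returns default(T) on a miss.
            var value = client.Get<string>("greeting");
            Console.WriteLine(value);
        }
    }
}
```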

Redis

Redis is an incredible product. Every time I work with it, I learn new and interesting ways to leverage its operational features. It can be used as a simple key/value store just like Memcached, but it also offers more advanced data types such as lists and hashes. It even has a great pub/sub system, where it becomes the transport for sending events to listening subscribers, and it can serve as a queue server, replacing other infrastructure. The only negative is that the data set is limited to physical memory; if that becomes an obstacle, you can shard your values across multiple servers.
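To make those features concrete, here is a sketch using the ServiceStack.Redis client (one of the common .NET clients; the key and channel names are made up) against a server on localhost:

```csharp
using System;
using ServiceStack.Redis;

class RedisDemo
{
    static void Main()
    {
        // Assumes a Redis server listening on localhost:6379.
        using (var redis = new RedisClient("localhost"))
        {
            // Plain key/value, just like Memcached.
            redis.SetEntry("greeting", "hello");
            Console.WriteLine(redis.GetValue("greeting"));

            // A list used as a simple work queue:
            // push onto the tail, pop from the head.
            redis.AddItemToList("jobs", "send-welcome-email");
            var job = redis.RemoveStartFromList("jobs");

            // A hash groups several fields under a single key.
            redis.SetEntryInHash("user:42", "name", "Alice");
            Console.WriteLine(redis.GetValueFromHash("user:42", "name"));

            // Pub/sub: deliver an event to any subscribers of this channel.
            redis.PublishMessage("events", "user:42:updated");
        }
    }
}
```

The list push/pop pair is the queue-server pattern mentioned above: producers push work items onto the list and workers pop them off.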