Why Come2Play Chose Membase

Tom Rosenfeld of Come2Play | Published December 13, 2010

Guest post: Tom Rosenfeld, Team Lead, Come2Play

Come2Play serves multiplayer web games online. Our site serves over 4 million players a month, with tens of thousands of players online at any given moment. We use MySQL as our database and IIS on our web servers, which run ASP.NET and Classic ASP (legacy code).

Running a large-scale web application, we found we needed to scale out. One of the most sensible and common ways to aid scaling in today's web architecture is a caching layer, and the most widely used software for that is Memcached.

When I searched for a Memcached library for ASP.NET, I found the Enyim Memcached client, which led me to Membase (then named NorthScale). After a bit of experimentation with "regular" Memcached and a comparison against Membase, several advantages were immediately clear:

• Bucket separation
• Very simple configuration
• Ease of installation
• A very polished and intuitive web UI management console
• A secure way to use both Memcached and Membase in the same implementation

Regarding the configuration:
Instead of having to manage a list of all our Memcached servers in a configuration file, I defined a single entry point to a Membase server. I installed Membase on our IIS servers and let localhost be the entry point, dodging the dreaded single-point-of-failure bullet while keeping our configuration files succinct.
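The configuration difference described above can be sketched roughly as follows (a Python illustration with made-up host names, not our real config files):

```python
# With plain memcached, every web server's config must list the
# whole cluster, and each list must be kept in sync by hand.
MEMCACHED_SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]

# With Membase installed on each IIS server, the config shrinks to a
# single local entry point; the local Membase node handles routing
# keys to the rest of the cluster on its own.
MEMBASE_ENTRY_POINT = "localhost:11211"
```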

We chose to adopt Memcached and Membase gradually, in order to avoid risk and to avoid throttling new-feature development (a very agile point of view).

Phase 1: Heavy queries
The preliminary challenge here was creating a library suitable for both .NET and COM. Once the library was in place, the rest was easy. The effect on our MySQL servers was positive and noticeable – a significant reduction in temp tables meant more free memory for indexes and a massive improvement in overall site performance, not to mention the expected drop in queries per second.

With the baseline Memcached implementation in place, I was able to implement the first phase quickly by using the query text with its parameters appended as the cache key and writing a method named "GetDataTableCached," which returns data from the cache or, on a miss, fetches it from the DB and then caches it. Then, by going through all the calls to "GetDataTable," I was able to find and replace every call that was not user-specific with "GetDataTableCached."

I set a 15-minute timeout on these keys so that the data in the cache can't become too stale. So, if someone changes a system setting through our CMS, it takes effect within 15 minutes at most. The advantage of this approach is obvious – there's no need to write code to invalidate or update these keys.

Also note that caching a method's result could be made even simpler with AOP (e.g. PostSharp). I chose to avoid that because of the increase in build times.

Phase 2: User-specific data
This move is very gradual as we are moving data that is:

• In very high use – we check the MySQL logs in order to find these
• Easy to move – data that is changed from many places in the code (e.g. the user's virtual currency) is much harder to move than data that never changes (e.g. the user's registration date)
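The second criterion above comes down to invalidation: for mutable data, every code path that writes the value must also invalidate or update the cache entry, or readers see stale data. A small Python sketch of the contrast (dict standing in for the cache and DB; keys and values are made up):

```python
_cache = {}
_db = {"user:1:registered": "2009-04-01", "user:1:coins": 100}

def get_cached(key):
    # Immutable data (registration date) is trivially safe to cache:
    # once loaded, the entry can never go stale.
    if key not in _cache:
        _cache[key] = _db[key]
    return _cache[key]

def add_coins(user_key, amount):
    # Mutable data (virtual currency) is the hard case: EVERY write
    # site in the codebase must remember to drop the cache entry.
    _db[user_key] = _db[user_key] + amount
    _cache.pop(user_key, None)  # forget one of these and readers see stale coins
```

That is why data written from many places is moved last, while write-once data can be moved almost for free.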

Phase 3: User-specific data with disk persistence
This step was very easy after the previous phases, thanks to a very simple and elegant solution (and, by the way, what I think is Membase's main strong point) – it uses the exact same on-the-wire protocol as Memcached. This is useful for two things: