Speed up your web server with memcached distributed caching

Fast Cache

This practical caching tool can reduce the load on a web database server by as much as 90%.

Brad Fitzpatrick was frustrated: Although the LiveJournal.com blogging platform that he had founded – and for which he had done most of the development work – was up and running on more than 70 powerful machines, its performance still left much to be desired. Not even a database server cache of up to 8GB seemed to help. Something had to be done, and quickly. The typical measures in scenarios like this are to generate some content up front or to cache pages that have been served previously. Of course, these remedies require redundant storage of any elements that occur on multiple pages – a sure-fire way of bloating the cache with junk. If the systems ran out of RAM, data could be swapped out to disk, of course, but again this would be fairly slow.

In Fitzpatrick's opinion, the solution had to be a new and special kind of cache system – one that would store the individual objects on a page separately, thus avoiding slow disk access. He soon gave up searching for a suitable existing solution and designed his own cache. The servers that he wanted to use for this task had enough free RAM. At the same time, all the machines needed to access the cache simultaneously, and modified content had to be available to any user without delay. These considerations finally led to memcached, which reduced the load on the LiveJournal database by an amazing 90 percent while at the same time accelerating page delivery for users and improving resource utilization on the individual machines.

Memcached [1] is a high-performance, distributed caching system. Although it is designed to be application-neutral for the most part, memcached is typically used to cache time-consuming database access in dynamic web applications.

Now major players such as Slashdot, Fotolog.com, and, of course, its creator LiveJournal.com rely on memcached for faster web performance. Since the initial development, LiveJournal.com has been acquired and sold various times, and memcached, which is available under a BSD open source license, is now the responsibility of Danga Interactive.

New Clothes

Setting up a distributed cache with memcached is easy. You need to launch the memcached daemon on every server with RAM to spare for the shared cache. If necessary, you can enable multiple cache areas on a single machine. This option is particularly useful with operating systems that only give a process access to part of the total available RAM. In this case, you need to start multiple daemons that each grab as much memory as the operating system will give them, thus using the maximum available free memory space for the cache.

A special client library provides an interface to the server application. It accepts the data to be stored and files it on one of the available servers under a freely selectable key (Figure 1). The client library uses a hash function to decide which of the memcached daemons is handed the data and asked to park it in its RAM.
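
The selection step can be sketched roughly as follows. This is a deliberately simplified modulo-hash scheme – real client libraries typically use consistent hashing so that adding or removing a server remaps only a fraction of the keys – and the server list and key below are invented for illustration:

```php
<?php
// Hypothetical server list; a real deployment would list its own daemons.
$servers = ['192.168.1.111:11211', '192.168.1.112:11211', '192.168.1.113:11211'];

// Simplified selection: hash the key, then pick a server by modulo.
function pick_server(array $servers, string $key): string {
    // crc32() hashes the key to an integer; the mask keeps the value
    // non-negative on 32-bit builds, where crc32() can return a negative int.
    $hash = crc32($key) & 0x7fffffff;
    return $servers[$hash % count($servers)];
}

echo pick_server($servers, 'user:42'), "\n";
```

Because the hash is deterministic, every client that knows the same server list computes the same home server for a given key – which is what lets many web servers share one cache without any coordination.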

You could compare this procedure with the cloakroom in a theater: You hand your coat to the attendant behind the counter and call out a number. The attendant takes your coat, locates the right stand, and hangs up your coat on the peg with your chosen number. At the end of the performance, the whole process is repeated in reverse: you tell the cloakroom attendant – I mean, the client library – the number again, and the client runs off to the corresponding daemon, takes the data off the peg, and delivers the data to your application.

The whole model is very much reminiscent of a distributed database or a distributed filesystem. But when you are working with memcached, you should always remember that it is just a cache. In other words, the cloakroom attendant is not reliable and has trouble remembering things. If there is insufficient space for new elements, one of the daemons will dump the least recently used data to free up some space. A similar thing happens if a memcached daemon fails – in this case, any information it had in storage simply disappears. In other words, you wouldn't get your coat back at the end of the performance, and the application is forced to talk to the database again. The memcached system does not do redundancy, but there is no need for it: After all, it's just a cache, and its job is to store information temporarily and hand it out as quickly as it can. In line with this philosophy, it is impossible to iterate over all the elements in the cache or to dump the entire cache content to disk.
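
In application code, this philosophy leads to a simple read-through pattern: ask the cache first, and fall back to the database on a miss. The sketch below uses a trivial in-memory stub in place of a live Memcache connection, and the database lookup function is invented for illustration – the point is the miss-then-repopulate control flow:

```php
<?php
// Stub standing in for a Memcache connection (illustration only).
class StubCache {
    private array $data = [];
    public function get(string $key) {
        return $this->data[$key] ?? false;   // Memcache::get() returns false on a miss
    }
    public function set(string $key, $value, int $expire = 0): bool {
        $this->data[$key] = $value;          // expiry ignored in this stub
        return true;
    }
}

// Hypothetical expensive lookup, standing in for a database query.
function load_from_database(string $key): string {
    return "row for $key";
}

// Read-through: try the cache, fall back to the database on a miss,
// then store the result so the next request is served from RAM.
function fetch(StubCache $cache, string $key): string {
    $value = $cache->get($key);
    if ($value === false) {
        $value = load_from_database($key);
        $cache->set($key, $value, 10);
    }
    return $value;
}

$cache = new StubCache();
echo fetch($cache, 'user:42'), "\n";  // miss: falls through to the "database"
echo fetch($cache, 'user:42'), "\n";  // hit: served from the cache
```

Because a daemon may evict or lose any entry at any time, the fallback branch is not an error path but a normal part of every request.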

Listener

Danga Interactive posts the memcached daemon on its homepage for users to download [1]. The only dependencies it has are the libevent library and the corresponding developer package. The daemon is easily built with the normal three-step procedure:

./configure
make
sudo make install

Some of the major distributions have pre-built, but typically obsolete, memcached packages in their repositories. After completing the installation, the following command – or similar – launches the daemon:

memcached -d -m 2048 -l 192.168.1.111 -p 11211 -u USERNAME

This command launches memcached in daemon mode (-d), telling it to use a maximum of 2048MB of RAM for the distributed cache on this machine (-m 2048). The daemon listens for client requests on port 11211 at IP address 192.168.1.111. The -u option specifies the account the daemon runs under; if you omit it, the daemon simply runs under the account of the user who launched it.
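
To confirm that a freshly launched daemon is answering, you can speak memcached's plain-text protocol directly. A hypothetical quick check, assuming the daemon above is reachable and netcat (nc) is installed:

```shell
# Send the "stats" command from memcached's text protocol and show
# the first few lines of the reply (PID, uptime, version, hit counters, ...).
printf 'stats\r\nquit\r\n' | nc 192.168.1.111 11211 | head -n 5
```

Each line of the reply has the form STAT <name> <value>; the list ends with END.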

Security experts are probably frothing at the mouth: By default, any user on a Linux system can launch their own memcached daemon. To prevent this, you need to take steps yourself, such as withdrawing access privileges – just one of several security issues that memcached deliberately leaves to the administrator (more on that later).

Choose Your Partners

After lining up all the daemons, you need to choose one of the numerous client libraries, which are now available for many programming and scripting languages: In some cases you even have a choice of packages [2]. If you prefer to create your own client, you will find a detailed description of the underlying protocol in the memcached wiki at the Google Code site [3].

Because memcached is often used to accelerate web applications, most people are likely to opt for a PHP client. For more information on using memcached with C and C++, see the box titled "libmemcached."

libmemcached

The most popular memcached client library for C and C++ applications right now is libmemcached [4] – not to be confused with its now discontinued predecessor libmemcache (without a trailing "d"). Even if you are not a C or C++ programmer, you might want to take a look at the package; it contains a couple of interesting (server) diagnostic command-line tools. For example, memcat retrieves the data for a key from the cache and outputs the results at the console, and memstat queries the current status of one or multiple servers. To build libmemcached, you need both a C and a C++ compiler on your system; apart from that, it's business as usual: ./configure; make; sudo make install.

The basic approach is the same for any language: After locating and installing the correct client library, the developer needs to include it in their own program. The following line creates a new Memcache object using PHP and the memcache client from the PECL repository, as included in Ubuntu's php5-memcache package (this extension's class is named Memcache, matching the $memcache-> calls below):

$memcache = new Memcache;

Following this, a function call tells the library on which servers memcached daemons are listening – for example, using the address and port of the daemon launched earlier:

$memcache->addServer('192.168.1.111', 11211);

From now on, you can use more function calls to populate the cache with your own content:

$memcache->set('key', 'test', false, 10);

This instruction writes the string test to the cache under the key key and keeps the entry for 10 seconds (the false argument disables the optional compression flag). Key lengths are currently restricted to 250 characters – a limit imposed by the memcached daemon.

To retrieve the data, you pass the key to the client library and accept the results it returns:

$value = $memcache->get('key');

The option of writing multiple sets of data to the cache, or retrieving them in a single call, is interesting: the client library automatically parallelizes your request across the memcached servers. Unfortunately, some client libraries do not provide this function; this PHP example