Making WordPress Stable on EC2-Micro

EC2 Micro Instance Limitations

EC2 offers a number of advantages over typical web hosting options. I am a bit of a control freak and like having full control over my web server, which means more work but more flexibility. Amazon also offers a free EC2 micro instance for a year to new users, so it is an attractive option for hosting a web site. Running a WordPress blog on a micro instance, however, can be a serious challenge. I have fought to get my site to a minimum level of stability, and here are some notes on what helped.
The EC2 micro instance is cheap compared to the other instance types Amazon offers, but there are some caveats that may shock you after using it for a while. There are two major problems with using it to host a website:

CPU usage restrictions: If you use 100% CPU for more than a few minutes, Amazon will “steal” CPU time from the instance, meaning that they throttle it. From my observations this can last as long as five minutes; then you get a few seconds at 100% again before the restrictions return. This will cripple your website, making it slow and even causing requests to time out.

Limited memory: The instance is limited to 613MB of RAM and has no swap partition by default. If you run out of memory, the system will panic and reboot.

Here is one symptom of CPU throttling from EC2, looking at the CPU usage from the “top” command:
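The output looked something like the following (an illustrative sample rather than my exact numbers); the value to watch is the %st column:

```
Cpu(s):  0.6%us,  0.2%sy,  0.0%ni,  1.7%id,  0.0%wa,  0.0%hi,  0.0%si, 97.5%st
```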

According to the top man page: ”st = steal (time given to other DomU instances)”

If you have more than 1000 visitors or so a day, a micro instance probably isn’t worth your time. But for many small sites (like mine) it does make sense. I wasn’t aware of these limitations before setting up my site, and I very quickly ran into site reliability issues. Here are a few of the things that I did to make my site more stable.
You can save a lot of money by purchasing a reserved instance for a year, but my advice is to run for a few months before making the leap. If you find that your micro instance doesn’t cut it, you have just thrown away a chunk of cash.
So, let’s look at a few of the things you can do to make a WordPress site run reasonably well on a Micro Instance:

Configuration:

Tune Apache to run the correct number of server processes.

Use the minimum required memory for MySQL.

Pre-cache your web pages.

Use a content distribution network (CDN) such as CloudFront.

Set up a swap partition.

Reacting to site overload:

Configure alerting for CPU usage and network traffic.

Be ready to rent a larger instance if you get a big traffic spike.

Use a 32 bit operating system.

Proactive Steps:

There are a few settings that can have a big impact on site stability. Remember, a micro instance is really limited, so consider carefully whether you really want to host a site on it! The examples here use Amazon’s Linux AMI (which appears to be based on the CentOS distribution).

Tuning Apache:

You should have your site up and running in WordPress before attempting to tune it; otherwise you won’t have realistic numbers.
Start by tuning Apache. This will greatly reduce memory usage, but it does make your site more susceptible to denial-of-service attacks (not that a micro instance is particularly resistant to a DoS in the first place). At least normal traffic (probably) won’t crash the site.
The first step is to figure out how much RAM each prefork process requires. You can get the memory usage in KB using the ‘ps’ command:
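For example, a command along these lines lists both values for every Apache process (a sketch; it assumes the daemon is named httpd, as on Amazon Linux, and ps option spellings can vary slightly between distributions):

```shell
# Show PID, virtual memory (VSZ), and resident memory (RSS), in KB,
# for every Apache prefork process
ps -C httpd -o pid,vsz,rss
```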

The two fields we are interested in are the virtual memory (VSZ) and resident memory (RSS) used by each process. We don’t need a precise number, so just pick values that look representative. We will then estimate the actual memory use of a process with the formula:

real * ( 1 - (real / virtual) )

So, sampling one of these processes, we get:

65132 * (1 - (65132 / 419842)) ≈ 55027.78
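The same arithmetic can be checked from the shell, plugging in the VSZ and RSS values from the sampled process:

```shell
# Estimate per-process memory as rss * (1 - rss/vsz),
# with vsz = 419842 KB and rss = 65132 KB
echo "419842 65132" | awk '{ printf "%.1f\n", $2 * (1 - $2 / $1) }'
# prints 55027.8
```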

Or roughly 53MB per process. To ensure Apache doesn’t run the system out of memory, we want to limit the number of processes so that their combined memory use stays below the physical memory available. In this case, ten processes should be safe. This means that if we receive more than ten simultaneous requests, the extra requests will be queued until a worker process is available. To maximize performance, we will also configure Apache to keep this number of processes available at all times.
In Apache’s configuration file (/etc/httpd/conf/httpd.conf) we will modify the prefork section to look like the following:
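With the 53MB-per-process estimate above, a prefork section along these lines caps Apache at ten processes and keeps all ten alive (the values are a sketch; tune them to your own measurements):

```apache
<IfModule prefork.c>
StartServers        10
MinSpareServers     10
MaxSpareServers     10
ServerLimit         10
MaxClients          10
MaxRequestsPerChild 4000
</IfModule>
```

Restart Apache (service httpd restart) after changing the file. MaxRequestsPerChild recycles worker processes periodically, which helps keep any slow memory growth in check.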

Limiting MySQL’s Memory:

Once we set up caching on the site, we will greatly reduce the number of database calls, and a small site doesn’t have heavy database requirements anyway. It isn’t necessary to perform extensive analysis here; just use the low-memory example configuration that is provided with the MySQL RPM package.
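On the Amazon Linux / CentOS family, the sample configurations ship with the MySQL server package; putting the smallest one in place looks roughly like this (the paths are from my recollection and may differ by package version):

```shell
# Back up the current config, then use the low-memory sample shipped with MySQL
cp /etc/my.cnf /etc/my.cnf.orig
cp /usr/share/mysql/my-small.cnf /etc/my.cnf
```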

There is one modification I would recommend to this file, which is to limit connections to the localhost address by adding the following line to the [mysqld] section of the configuration:

bind-address=127.0.0.1

Then restart the service.

[root@domU etc]# service mysqld restart

Pre-cache the pages:

Most of the work a WordPress site does is just serving pages. There really is no reason to dynamically recreate the page from the database every time that a visitor requests an article. The easiest option is to use a WordPress plugin to create static pages. The tool I chose for this is WP Super Cache.
I won’t walk through the setup here; the plugin’s configuration is straightforward.

Use a CDN:

Not serving images and static files directly from the web server has a few benefits: fewer connections to Apache, and regional distribution of content. Since you are already hosted on Amazon’s services, it makes sense to use CloudFront for this, and setup is very easy.
Once you have CloudFront set up and working, configure the CDN options in WP Super Cache to serve the files from it.

With both caching and a CDN up and running, you will have greatly reduced the performance requirements for your site. It will probably run a lot faster, and may earn a higher page rank as a result.

Set Up a Swap Partition:

Whether you take this step is really up to you. Once the system starts swapping it will run extremely slowly. But a panic and reboot can be even worse, especially if it causes database corruption. Another downside is that swapping consumes a lot of I/O, because the swap file lives on network-backed EBS storage, and that I/O isn’t free on EC2. There are good reasons not to use swap, but I personally use one as a safety net.
Here I create a 1GB file, format it as swap, activate it, and configure it to be mounted at boot time:
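The commands themselves (run as root) look like this; I use /swapfile as the location, but any path on the root volume works:

```shell
# Create a 1GB file, restrict access to root, and format it as swap
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile

# Activate it now, and mount it automatically at boot
swapon /swapfile
echo "/swapfile swap swap defaults 0 0" >> /etc/fstab
```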

Reactive Steps:

Even with these settings, there is no guarantee the site will run smoothly. If something comes up and you suddenly have a ton of traffic, you will need to react to keep your site online.

Configure Alerting:

Included with the EC2 subscription are ten CloudWatch alarms. These are very valuable: they give you an idea of when something is happening with your website, and if a sudden spike in traffic overloads the site, they let you react.
I use two alerts: CPU utilization and network out. When my CPU runs at 80% or greater for 10 minutes, or when I send more than 500KB of data for 5 minutes, I receive an email letting me know what is going on.
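Alarms can be created in the AWS console; with the aws CLI, the CPU alert looks roughly like this (the instance ID and SNS topic ARN are placeholders you must supply, and the thresholds mirror the ones I use):

```shell
# Alert (via an SNS topic that emails me) when average CPU >= 80%
# for two consecutive 5-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name micro-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts
```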

Be Prepared to Move to a Larger Instance:

If you do suddenly get hit with a lot of traffic, it is easy to move to a larger instance for a short period: create an AMI image of your server, bring it up as a new instance, and temporarily point your Elastic IP at that instance.
First, create a snapshot of your EBS volume:
Then create an AMI from the snapshot:
Once created, you can start a new instance from the AMI:
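With the aws CLI, the three steps look roughly like this (all IDs are placeholders; substitute your own volume, snapshot, and AMI IDs):

```shell
# 1. Snapshot the root EBS volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
  --description "pre-resize backup"

# 2. Register an AMI that boots from that snapshot
aws ec2 register-image --name "my-blog-backup" \
  --root-device-name /dev/sda1 \
  --block-device-mappings \
  '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-0123456789abcdef0"}}]'

# 3. Launch a larger instance from the new AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --instance-type m1.small --count 1
```

Afterwards, reassociate your Elastic IP with the new instance (aws ec2 associate-address) and terminate it when the traffic subsides.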
I highly suggest using a “Spot Request” to save money. Bid up to the standard on-demand price for the instance type and you should be safe; you will get a larger instance for a fraction of the cost.

Use a 32 Bit Image!!!

It may seem attractive to use a 64-bit image in EC2, but it isn’t necessarily a good idea. Why? The next two instance sizes up from micro (small and medium) are 32-bit only. So if your site does get overwhelmed, or if you decide to upgrade permanently to a larger instance, your options with a 64-bit image are to go directly to a large instance ($$$) or rebuild from scratch. Neither is a great option.
Many of these tips and tricks were hard-learned lessons, and I hope they make your life on a humble EC2-micro instance a little easier!