Tips for deploying a LAMP stack on Amazon EC2

If you’re interested in using Amazon EC2 and other services to deploy a LAMP (Linux, Apache, MySQL, and PHP) stack, you will probably find this post invaluable. I spent about three full days migrating all my sites over from a physical dedicated server to an EC2 instance, and what follows are several things I learned during the process.

This post will cover the following (in varying levels of detail):

Selecting and setting up an AMI (Amazon Machine Image) with Apache, MySQL, and PHP.

Setting up an elastic IP address.

Setting up an EBS (Elastic Block Store).

Sending email from an EC2 instance (not as easy as one might think).

Backing up your data and web applications.

First Things First

The first thing you need to do is sign up for Amazon EC2, which you can do on the Amazon Elastic Compute Cloud homepage. There’s no need to get into the details of how to do so here. Just follow the instructions and download the necessary security certificates.

Setting Up Your AMI

Once you have registered for EC2, you can sign into the AWS Management Console to manage several aspects of EC2. Many people recommend the Elasticfox Firefox extension, which I’ve heard very good things about. I didn’t use it myself because I mostly use Safari these days, but either one should serve you well.

I launched a few different AMIs from the management console and played around with them before deciding which one I wanted to use (if you only leave them running for an hour or so while you get to know them, this is very cheap research). Remember to shut old instances down when you’re finished with them so you don’t continue to incur costs, and remember that when you shut down an instance, everything you installed or changed will be lost. (I’ll get into how to deal with this issue shortly.)

I ended up going with a community 64-bit Fedora AMI with Apache 2, PHP, and MySQL already installed (AMI ID “ami-5ba34432”). My plan was to customize the instance, then create my own AMI from it. I ended up customizing it so much that I probably should have just started with a clean 64-bit Fedora instance, because I essentially recompiled and reinstalled everything before I was done. Using an instance with LAMP already installed probably saved me some time, though, since many of the dependencies were already in place.

Whichever you decide to do, the idea is to get the server into a fully configured and clean state with everything you want already installed and ready to go so that you can create your own AMI which you can then deploy or shutdown at will. Everything that you want to preserve between instance launches (databases, for instance) will go on an Elastic Block Store which I’ll talk about soon.

I had a lot of trouble getting PHP to compile on a 64-bit system. In fact, in my opinion, PHP isn’t ready for 64-bit yet. But after several hours of effort and hacks, I got it to configure, compile, and to pass all but a few of its tests. (Unless you need a 64-bit system, compiling PHP on a 32-bit system will save you a lot of time and trouble.)

Key things to remember while setting up your AMI:

Use yum to install the missing dependencies that PHP needs (yum will already be installed on your Fedora instance). You can even use yum to install Apache, PHP, and MySQL if you want, but I didn’t do that because I like to customize them quite a bit.

Don’t worry about setting up your Apache document root yet. Just stick with the default for now. We’re going to change where it lives later.

Don’t worry about getting any databases set up yet. We’re going to change where the actual database files live, as well, so they don’t get destroyed when your instance gets shut down.

Download the latest version of the Amazon EC2 AMI tools. Don’t use the version that comes on your AMI. The tools on my AMI were out of date and therefore created AMIs that I couldn’t get to deploy. I removed all the existing tools and installed the newest version from the Amazon AWS site.

Go ahead and install the latest Amazon API tools, as well. You’ll have to install Java (use yum) and set up several environment variables to get them working right, but they are very useful.

Once you have Apache, PHP, and MySQL set up, all the Amazon API and AMI tools installed and configured, and once you have all your accounts, permissions, groups, and start-up scripts set up like you want them, use the ec2-bundle-vol command to create a fresh AMI, and then use ec2-upload-bundle to upload your new AMI to S3 (better yet, create shell scripts to do this for you so you can repeat the process easily). This will allow you to launch a fresh instance of your AMI at any point and will protect you from losing all your work should your instance somehow get shut down.

Remember that you have to register your AMI before you can launch an instance of it. This can be done through your management console of choice, or through the AMI command line tools.
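The bundle, upload, and register steps above can be collected into a small shell script. This is just a sketch: the key and certificate paths, account number, bucket name, and architecture flag are all placeholders you would replace with your own values, and it requires both the EC2 AMI tools and API tools to be installed.

```shell
#!/bin/sh
# Sketch: bundle the running instance into an AMI, upload it to S3,
# and register it. All paths, the account number, and the bucket
# name below are placeholders.
ec2-bundle-vol -d /mnt -k /mnt/pk.pem -c /mnt/cert.pem \
    -u 123456789012 -r x86_64
ec2-upload-bundle -b my-ami-bucket -m /mnt/image.manifest.xml \
    -a "$AWS_ACCESS_KEY" -s "$AWS_SECRET_KEY"
ec2-register my-ami-bucket/image.manifest.xml
```

Keeping this in a script makes it painless to re-bundle after every configuration change.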

Setting up an Elastic IP Address

I’ve been watching EC2 since the day it was first announced, but one of the things that prevented me from ever using it was the lack of static IP addresses. EC2 was originally intended as a way to quickly and easily launch and shut down servers as needed, primarily to scale various kinds of CPU-intensive applications and tools. Amazon quickly discovered, however, that web developers wanted to use virtual servers as replacements for dedicated servers. The problem was that EC2 instances have unreliable public host names and IP addresses, so in order to map domain names to EC2 instances, web developers needed static IP addresses.

Amazon responded by implementing something they call elastic IP addresses. In the AWS Management Console, you can simply allocate a new static IP address and map it to any instance you wish. You can then use that IP address to set up your domains’ DNS records. If you need to relaunch your instance for any reason, just map your existing elastic IP address to your new instance, and that IP address will point to your new instance instantly which saves you from having to change your DNS settings (which often take up to 24 hours to propagate throughout the internet).

You can allocate a new IP address and map it to your instance whenever you want. I did it very early on and mapped some spare domain names I have to it in order to test applications as I migrated them from my dedicated server to my EC2 instance. I should also point out that although I have several sites running on my EC2 instance, I only allocated a single IP address (after all, we’re running out of IPv4 addresses). Rather than using several different addresses, I’m using virtual hosts configured through Apache. The advantage is that I only have to keep track of a single IP address, and I can just tweak my Apache configuration in order to manage my sites.
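From the command line, the elastic IP workflow boils down to two API-tool commands. The instance ID and address below are made up; you’d substitute the values from your own account.

```shell
# Allocate a new elastic IP address; the tools print something like
# "ADDRESS  198.51.100.17" (the address here is a placeholder).
ec2-allocate-address

# Map the allocated address to a running instance (IDs are placeholders).
ec2-associate-address -i i-12345678 198.51.100.17
```

Re-running the second command against a fresh instance is all it takes to move your sites after a relaunch.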

Setting Up an Elastic Block Store

Using an EBS is critical to having a solid EC2 deployment. EBSs are essentially virtual disks that you can create through your management console (or command line tools), then mount through your EC2 instance. They can range in size from 1GB to 1TB, and they are persistent so that they don’t go away when your EC2 instance shuts down. You can format them any way you want, store anything you want on them (including your home directory, database files, web root, etc.), and they are easily backed up.

I created a 100GB EBS which I use for my MySQL database files, web root, and for a bunch of files that I want to make sure are always available. Anything you don’t want to disappear when your EC2 instance shuts down belongs on your EBS (or backed up through some other process).

Some tips for using an EBS:

You pay for EBSs separately from your EC2 instance, and you pay by the GB. They are cheap, though, so it’s better to allow yourself some room to grow.

To configure Apache to use EBS as its web root, simply change the web root in your httpd.conf file to point to some directory on your EBS. If you’re using virtual hosts like I am, just point each host’s DocumentRoot to a location on your EBS.

Make sure you create your EBS in the same zone as your EC2 instance or you won’t be able to attach it. If you create it in a different zone, you’ll discover your error soon enough (before you’re able to format it and copy any data to it), but it’s much easier if you get it right the first time.
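Here’s a sketch of the whole EBS workflow with made-up IDs and an example zone; note that the volume is created in the same zone as the instance, per the tip above.

```shell
# From your workstation, with the EC2 API tools installed:
ec2-describe-availability-zones            # confirm your instance's zone
ec2-create-volume -s 100 -z us-east-1a     # 100GB volume, same zone as the instance
ec2-attach-volume vol-aaaa1111 -i i-12345678 -d /dev/sdh

# Then, on the instance itself:
mkfs -t ext3 /dev/sdh
mkdir /vol
mount /dev/sdh /vol
```

With the volume mounted, each virtual host’s DocumentRoot can simply point at a directory under the mount point (e.g. /vol/www/example.com).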

Sending Email From an EC2 Instance

At this point, you’re ready to start moving applications over to your EC2 instance and to begin testing them. I thought I was just about finished until one of my applications tried to send an email and it was flagged as spam by my email provider. I put a lot of time and effort into making sure the email my applications sent from my old dedicated server didn’t get flagged as spam, and I found that most of what I did wouldn’t work on my EC2 instance.

I don’t know all the reasons why email from EC2 instances gets flagged as spam, but I believe it’s because:

Spammers have tried using EC2 to send large amounts of email in the past, so I think email servers assume anything coming from an EC2 instance is suspicious.

Since EC2 IP addresses and host names change, it’s difficult to get an SPF record set up properly.

The answer was to use an SMTP service to send all my mail. Several people on the AWS forums recommended AuthSMTP, so I decided to go with them. Once you sign up, you have to add the email addresses from which you can send mail through their SMTP servers. When sending mail through AuthSMTP from a typical client like Mail.app or Outlook, it’s easy to set up and everything just seems to work; when sending email programmatically via PHP, however, more configuration is necessary. Here are some very valuable tips:

You don’t want to send email directly through the SMTP server from a PHP process because connecting to an SMTP server takes time which slows down your application and blocks a thread. Additionally, SMTP servers will limit the number of connections you can make which means only about five threads (requests) would be able to send email at a time. A better configuration is to use something like postfix to queue up the mail and send it out as it can. For details on configuring postfix to send email through an SMTP server, see Paul Dowman’s post entitled A rock-solid setup for sending SMTP mail from your EC2 web server.
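For reference, the relay setup amounts to a few lines in postfix’s main.cf. The host name and port here are assumptions for illustration; use the values from your own AuthSMTP account details, and see Paul Dowman’s post for the complete walkthrough including the SASL password file.

```
# /etc/postfix/main.cf additions (sketch; the relay host and port are
# assumptions -- use the ones AuthSMTP gives you)
relayhost = [mail.authsmtp.com]:2525
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
```

With this in place, PHP hands mail to the local postfix queue instantly, and postfix relays it out in the background.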

Unfortunately, you’re still not done. Every “from” header in an email sent through AuthSMTP has to be in your list of approved from addresses, including reply-to and return-path addresses. These are not headers that you ordinarily have much control over, so in order to programmatically send email through AuthSMTP, you have to find a way to set those headers. (More on this below…)

If you’re using PHPMailer, make sure you set the Sender property to an approved from address, and make sure you call AddReplyTo, passing in an approved from address. WordPress uses PHPMailer, which meant I had to change some of the WordPress source code (wp-includes/pluggable.php) to get this to work. This is far from ideal, but I didn’t have much choice. WordPress should probably include these changes in their code to make their platform more EC2-friendly and to make sending email from WordPress more robust.

If you’re using PHP’s mail function (or a platform like MediaWiki which uses PHP’s mail function), the last argument needs to be “-r” with an approved address passed in. Again, I had to change the MediaWiki source code (UserMailer.php) to get this to work.

Don’t forget to add an SPF record for your new SMTP server. The article referenced above gives some information on how to do this, and there is also information located here on AuthSMTP’s site about SPF records. This step is critical for getting email through spam filters.
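An SPF record is just a DNS TXT record on your domain. It looks something like the following sketch; the include domain here is a guess for illustration, so check AuthSMTP’s documentation for the exact value they ask you to publish.

```
; BIND zone file sketch; example.com and the include domain are placeholders
example.com.  IN  TXT  "v=spf1 a mx include:spf.a.authsmtp.com ~all"
```

This tells receiving mail servers that AuthSMTP’s servers (plus your domain’s own A and MX hosts) are legitimate senders for your domain.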

Getting email to work reliably from an EC2 instance was a pretty big hassle and required me to take on some additional expense (an AuthSMTP account); however, now that I have everything working, my email implementation is extremely solid and even more robust than it was on my dedicated server.

Backing Up Your Data

Now that you have all your applications running on an EC2 instance, it’s time to think about backing up your data. I use three kinds of backups:

Custom AMI. Once I got my server configured like I wanted it, I created an AMI out of it and stored it on S3 so I can redeploy it at any time. This is critical. You can’t assume that your instance is going to live forever. Whenever you make a change to anything on your server, make a new AMI and upload it to S3. If you find you’re having to do this too often, consider moving whatever it is you’re constantly changing to your EBS.

EBS Snapshots. You can take snapshots of your entire EBS (actually, only data that has changed since the last snapshot is backed up) either through your management console or programmatically through command-line tools and scripts. I have a cron job set up to snapshot my EBS once a week. Once a week isn’t nearly frequent enough to back up critical data, though, so I have one more level of backup…
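The weekly snapshot amounts to a one-line script plus a crontab entry. The volume ID below is a placeholder, and the script needs the EC2 API tools and their environment variables set up.

```shell
#!/bin/sh
# /usr/local/bin/snapshot-ebs.sh (sketch): snapshot the EBS volume.
# vol-aaaa1111 is a placeholder for your real volume ID.
ec2-create-snapshot vol-aaaa1111

# crontab entry to run it every Sunday at 4am:
#   0 4 * * 0 /usr/local/bin/snapshot-ebs.sh
```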

S3 Backups. I have a cron job that runs each night which backs up all my critical databases using mysqldump, zips them up, and moves them to a secure bucket on S3. Additionally, I tar and gzip all my critical applications’ web roots and move those over to S3 as well. (You only need to do this step if your web applications routinely write files to disk, like uploaded images; otherwise, a single backup of each web root is enough.)
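A nightly job along these lines looks roughly like the following sketch. The database names, paths, bucket name, and the choice of s3cmd as the upload tool are all stand-ins for whatever you actually use.

```shell
#!/bin/sh
# Nightly backup sketch: dump each database, compress it, copy it to S3,
# then archive the web roots. All names and paths are placeholders.
DATE=$(date +%Y-%m-%d)

for DB in blog wiki; do
    mysqldump --single-transaction "$DB" | gzip > "/tmp/$DB-$DATE.sql.gz"
    s3cmd put "/tmp/$DB-$DATE.sql.gz" "s3://my-backup-bucket/mysql/"
done

tar czf "/tmp/webroots-$DATE.tar.gz" /vol/www
s3cmd put "/tmp/webroots-$DATE.tar.gz" "s3://my-backup-bucket/webroots/"
```

Run it from cron each night, and prune old dumps from the bucket occasionally so storage costs stay predictable.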

With all three of these backup processes in place, I can recover from even the most catastrophic mishaps in a matter of minutes, and at the absolute worst, I might lose one day’s worth of data. Of course, you can always perform backups more frequently (several times a day — even hourly) to reduce the risk of data loss even more. I’ll probably do this eventually once I get a good feel for what the backups are costing me through S3 storage.

Conclusion

I hope this post has given you some valuable tips on moving from a physical server to a virtual one. I really believe that virtual servers living in the cloud are the future, and although the migration wasn’t easy (I’m a software developer — not a system administrator), I believe my sites are much better off now than they were on a dedicated machine.

If you have any questions or comments, leave them below so we can make this an increasingly valuable resource.

Nice work. I happened upon your post in a search for an alternate LAMP stack. You are doing this all in 100MB? I was curious… how much per month do you estimate this EC2 deployment will cost you? How is the performance of this deployment compared to a regular light-weight server?

Jeff,
I’m paying just about $300/mo for this instance, including flexible storage — exactly what I was paying for a physical server with only 2GB of RAM. The performance seems to be very good. I haven’t had a chance to really push it yet, but it seems faster than my old physical server.

Christian,
If I may make a couple of suggestions: I run my instances with an extra EBS volume to do rsync backups and to hold nightly tarballs. I also use symbolic links to point to my PHP code on /mnt instead of updating the config, because AWS sometimes has issues with the EBS stores, and when that happens, you can corrupt your database (it happened to me). Unless you’re doing software RAID/LVM or something like that with the EBS volumes, it’s just safer.
Erik

What I do with EBS volumes for MySQL:
I created 2 volumes, set up a software RAID 0 (striping) with them, and then formatted that using the XFS filesystem, as opposed to EXT3. Benefits:
Striping increases your overall disk speed, and you can easily add more volumes to leverage that. You do pay a small price in CPU for handling the software RAID, but ultimately the biggest limiter for just about any database is simply getting data off the disk.
XFS has one big advantage over EXT3 in that you can freeze the filesystem while a snapshot is taken, making it a WHOLE lot safer to do backups. My script for full/incremental backups runs in literally under 3 seconds: “FLUSH TABLES WITH READ LOCK” in MySQL, xfs_freeze, ec2-create-snapshot, xfs_freeze -u, “UNLOCK TABLES” in MySQL.
So no need to wait for mysqldump, tar, gzip, etc., and it all ends up safely on S3. Moreover, I can create new EBS volumes from that S3 snapshot with essentially real-time data for other testing/dev work. And ec2-create-snapshot is smart enough to handle incrementals transparently for me — I don’t even have to think about full vs. incremental.
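Erik’s freeze-and-snapshot sequence might be scripted roughly like this. It relies on the mysql client’s system command to run shell commands while the read lock is still held (the lock is released when the session ends, so everything must happen in one session); the mount point and volume ID are placeholders.

```shell
#!/bin/sh
# Sketch of the lock/freeze/snapshot/thaw/unlock sequence. The whole
# thing runs in a single mysql session so the read lock is held
# throughout; /vol and vol-aaaa1111 are placeholders.
mysql -u root <<'EOF'
FLUSH TABLES WITH READ LOCK;
system xfs_freeze -f /vol
system ec2-create-snapshot vol-aaaa1111
system xfs_freeze -u /vol
UNLOCK TABLES;
EOF
```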