This wiki page is used to share some information about using Magento on Amazon’s EC2 cloud hosting environment.

We have tested with m1.small (1.7 GB RAM) and m1.large (7.5 GB RAM) instances, first with Apache and later also with the less widely used (only 1-4% market share) Nginx webserver.

You can launch your own instance with our Magento-optimized (m1.small) Amazon Machine Image (AMI). It has been registered in the US region and can be launched e.g. with the AWS Management Console or with the EC2 API command line utilities. You can easily find the AMI by typing magento into the management console’s search bar.

We’ve also created a Magento-optimized m1.large AMI in the EU region, which is likewise registered and publicly available, as well as an m1.small AMI using Nginx instead of Apache. See below for more info on Nginx and the exact AMI names.

The default PHP memory_limit was set to 16M, which triggered memory errors on the product listing page, so I raised it to 512M, which solved the problem. I initially chose such a large value because this whole setup is aimed at running a single Magento store on one EC2 instance, but after finding out that reducing the value to 64M yielded the same result, I preferred to keep it that way, i.e. at 64M.
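For reference, this is the relevant php.ini setting; the exact file path depends on your distribution and SAPI (on Debian it is typically /etc/php5/apache2/php.ini or /etc/php5/cgi/php.ini), so treat the path below as an assumption to verify:

```ini
; /etc/php5/apache2/php.ini (path may vary by distribution/SAPI)
; 64M proved sufficient for a single store; raise it again if you
; see "Allowed memory size exhausted" errors.
memory_limit = 64M
```

Remember to restart the webserver (or the FastCGI processes) after changing this value.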

Modify the configuration of the MySQL server to take better advantage of the server’s RAM.

Most Linux distributions provide a conservative MySQL package out of the box to ensure it will run on a wide array of hardware configurations. If you have ample RAM (e.g. 1 GB or more), then you may want to try tweaking the configuration. An example my.cnf is below, though you will want to consult the MySQL documentation for a complete list of configuration directives and recommended settings.
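The my.cnf fragment below is an illustrative starting point for an instance with roughly 1.7 GB of RAM (the m1.small); the specific values are assumptions, not measurements from these AMIs, and should be checked against the MySQL documentation and your own workload:

```ini
# /etc/mysql/my.cnf (fragment) -- example values for ~1.7 GB RAM
[mysqld]
key_buffer              = 64M   # MyISAM index cache
table_cache             = 512
sort_buffer_size        = 2M
query_cache_type        = 1
query_cache_size        = 32M
query_cache_limit       = 1M    # raising this yielded no improvement here
innodb_buffer_pool_size = 256M  # Magento relies heavily on InnoDB tables
```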

I double-checked all query_cache_ variables and raised query_cache_limit from 1 MB to 16 MB. Result: no further improvement, so I reset the value to 1 MB.

Although others have reported huge performance improvements after tweaking the MySQL config, it does not seem to make a big difference for the demo store with only a couple of products. This might be different for stores that have more than 1,000 or 10,000 products and many product attributes.

Although I have not done any precise benchmarking so far, the performance improvement, based on the parse times reported by the Magento profiler, was not more than 100 ms (milliseconds).

Finally, I ran the MySQL Performance Tuning Primer Script and got a couple of warnings, but I think the configuration is still valid because there has not been a lot of traffic so far. The 48 hours of uptime the script recommends have not yet passed, but I think the results are already representative:

KeepAlives are a trick where multiple HTTP requests can be funneled through a single TCP connection. Since the setup of each TCP connection incurs additional time, this can significantly reduce the time it takes to download all the files (HTML, JavaScript, images) for a website.
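In Apache this is controlled by a few directives; the values below are a common starting point rather than the exact settings from the AMI:

```apache
# httpd.conf / apache2.conf -- example values, adjust to taste
KeepAlive On
MaxKeepAliveRequests 100   # requests served per connection before it closes
KeepAliveTimeout 5         # seconds to wait for the next request; keep this
                           # low so idle connections don't tie up workers
```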

Use a PHP opcode cache. This can deliver significant improvements to PHP’s responsiveness by caching compiled PHP code in an intermediate bytecode format, which saves the interpreter from recompiling the PHP code for each and every request.
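As an example, APC (one of the common PHP opcode caches; whether it is preinstalled on these AMIs is an assumption you should verify) can be installed via PECL and enabled with a few php.ini lines:

```ini
; /etc/php5/conf.d/apc.ini -- example APC settings
; install first with: pecl install apc
extension    = apc.so
apc.enabled  = 1
apc.shm_size = 64     ; MB of shared memory for the bytecode cache
apc.stat     = 1      ; re-check file mtimes; set to 0 for maximum speed
                      ; in production (requires a restart after deploys)
```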

Use a memory-based filesystem for Magento’s var directory. Magento makes extensive use of file-based storage for caching and session storage. The slowest component in a server is the hard drive, so if you use a memory-based filesystem such as tmpfs, you can save all those extra disk IO cycles by storing these temporary files in memory instead of storing them on your slow hard drive.
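A minimal sketch of such a mount, assuming Magento lives under /usr/local/nginx/html/magento (adjust the path and size for your setup); note that a tmpfs is wiped on every reboot, so cached data and sessions stored there will be lost:

```shell
# /etc/fstab entry -- mount a 64 MB tmpfs over Magento's var directory
tmpfs  /usr/local/nginx/html/magento/var  tmpfs  size=64m  0  0
```

After adding the line, mount /usr/local/nginx/html/magento/var activates it without a reboot.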

Note for Nginx AMI: When you launch your own instance with the magento-nginx AMI, you have to manually start Varnish (HTTP accelerator) with varnishd -f /usr/local/etc/varnish/default.vcl. In production use, you might want to add a start script to make this happen automatically.
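One simple way to start it automatically is to append the command to a boot script such as /etc/rc.local (the use of rc.local here is an assumption; a proper init.d script would be cleaner for production):

```shell
# /etc/rc.local (fragment) -- start Varnish at boot
varnishd -f /usr/local/etc/varnish/default.vcl
```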

After launching a new instance you can access the store in the browser by appending /apache2-default/magento/ to your instance URL. In case you want to log in to the admin control panel, please use the username admin and password 4KKEzgn9zZ for the US m1.small instance and admin/magento for the EU m1.large instance.

I have no experience with the ab (ApacheBench) utility and wonder why 960 out of 1000 requests are failing. One likely explanation is that ab counts responses whose body length differs from the first response as “Length” failures, which is common with dynamic pages; maybe someone with more experience can confirm this so we can get representative benchmarking results.

I’ve also installed osCommerce v3.0 Alpha 5 on an m1.small instance and got ApacheBench reports where all requests were successful, so it seems that either ApacheBench has problems with resource-hungry scripts or there’s some other problem I’m not aware of.

I’m not yet sure what to think of Pingdom, because the total page load times it indicates are often a lot longer than how pages load on my machine, but it still seems to be a good indicator of overall performance, as parse time is not everything!

m1.large instances have very fast parse times, but the total page load time as measured by Pingdom for the store home page is over 6 seconds, and on an m1.small instance even around 10 seconds, let alone the product detail or product listing pages.

After testing and tweaking with Apache Prefork (mod_php), we also did some testing and tweaking with Nginx, an open-source, high-performance HTTP server and as a result got total page load times of 4.5 seconds (see archived Pingdom test) on an m1.small instance even though the parse times themselves were between 0.8 and 1.0 seconds! This was only with php_fastcgi and no other tweaks.

We have not yet tested Apache together with php_fastcgi, but it seems that Nginx is the way to go if you’re after high performance Magento hosting.
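The server block below is an illustrative sketch of an Nginx + FastCGI PHP setup, not the exact configuration shipped in the AMI; the paths, the server name and the backend port 9000 are assumptions to adapt:

```nginx
# /usr/local/nginx/conf/nginx.conf (fragment) -- example only
server {
    listen       80;
    server_name  example.com;
    root         /usr/local/nginx/html;

    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_pass   127.0.0.1:9000;   # PHP FastCGI backend
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```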

After that, total page load time of the store home page according to Pingdom was between under 2.4 seconds (see this archived Pingdom test!) and under 3.2 seconds. Total page load time for both the product listing and product detail page was also below 3.2 seconds! This is really awesome for an m1.small instance!

Further PHP and MySQL configuration optimization has not been done yet, but from our previous experience under Apache, it seems that those are only worth it if you either have a lot of products in your store or you have a lot of traffic, so we will leave those for later.

The following is a hack and a work in progress; it is functional, but it can and should be improved. I’m not responsible for any data loss, use at your own risk XD

The following examples are tailored to the debian-4.0-etch-32-magento-nginx-2009-03-15 AMI used as a development host. If you prefer Apache or any other setup, you need to adjust the scripts.

If you want data to persist between instances and don’t want to build your own AMI (which would be impractical), you need to create an EBS volume.

You can do so using the AWS Management Console under the “Volumes” tab. When creating the volume you can specify the size (from 1 GB to several TB). Pay attention to the “Availability Zone”: you will only be able to attach the volume to instances within the same zone (e.g. eu-west-1a or eu-west-1b).

Once it’s created, select the volume, click the “Attach” button and select your running instance. If your instance doesn’t appear in the dropdown, it’s probably running in a different availability zone, or hasn’t booted yet. You need to remember which device node you attach the volume to (e.g. /dev/sdf).

Now you need to initialize your volume so it can be used. SSH into your instance. Before you can use the volume, you need to format it (substitute your device node: mke2fs -j /dev/sdXX).

Create a base mountpoint (mkdir /mnt/store1)

Stop the services

/etc/init.d/nginx stop; /etc/init.d/mysql stop

Copy the directories you want to persist between instances to the EBS volume (adjust the list as you need)

cp -a /etc/php5 /mnt/store1/php5.conf

cp -a /var/lib/mysql /mnt/store1/mysql

cp -a /etc/mysql /mnt/store1/mysql.conf

mkdir -p /mnt/store1/nginx /usr/local/nginx/vhosts

cp -a /usr/local/nginx/conf /mnt/store1/nginx/conf

cp -a /usr/local/nginx/vhosts /mnt/store1/nginx/vhosts

cp -a /usr/local/nginx/html /mnt/store1/nginx/html

Create the mount scripts

mkdir -p /mnt/store1/scripts

Contents of /mnt/store1/scripts/mount.sh

#!/bin/bash
# Bind-mount the persistent directories from the EBS volume, then restart services

STORE=$(dirname $0)/..

echo "stopping services..."
/etc/init.d/nginx stop
/etc/init.d/mysql stop

echo "mounting directories..."
mount -obind $STORE/nginx/conf/ /usr/local/nginx/conf/
mkdir -p /usr/local/nginx/vhosts/
mount -obind $STORE/nginx/vhosts/ /usr/local/nginx/vhosts/
mount -obind $STORE/nginx/html/ /usr/local/nginx/html/
mount -obind $STORE/php5.conf/ /etc/php5/
mount -obind $STORE/mysql.conf/ /etc/mysql/
mount -obind $STORE/mysql/ /var/lib/mysql/

echo "starting services..."
/etc/init.d/mysql start
/etc/init.d/nginx start

echo "done"

Contents of /mnt/store1/scripts/umount.sh

#!/bin/bash
# Unmount the bind mounts, then restart services from the instance-local copies

echo "stopping services..."
/etc/init.d/nginx stop
/etc/init.d/mysql stop

echo "unmounting directories..."
umount /usr/local/nginx/conf/
umount /usr/local/nginx/vhosts/
umount /usr/local/nginx/html/
umount /etc/php5/
umount /etc/mysql/
umount /var/lib/mysql/

echo "starting services..."
/etc/init.d/mysql start
/etc/init.d/nginx start

echo "done"

Make the scripts executable (chmod 0744 /mnt/store1/scripts/*.sh)

Run the mount script /mnt/store1/scripts/mount.sh

And that’s it. Now, whenever you boot an instance...

Attach the volume in the AWS Management Console

ssh in

mkdir /mnt/store1

mount /dev/sdf /mnt/store1

/mnt/store1/scripts/mount.sh

Before terminating the instance, run the unmount script to make sure all databases are closed correctly.