A highly scalable WordPress self-hosting that is not a burden on your wallet

How to host a scalable WordPress setup yourself with W3 Total Cache, Varnish and Amazon Cloudfront.

united-coders.com is not that big. We're currently getting between 500 and 1,000 unique visits a day. As we're coders, we like to host ourselves. We were previously on a real root server, which was a bit overkill. So we decided to try something smaller and cheaper, and we're currently hosting this site for around $7 per month with enough headroom for more traffic. We do see the occasional spike in the 5,000 range when we publish something that catches on, so that needs to be covered.

If you’re interested in the setup and a detailed description, you can find it on RichWP.

The Components

The virtual Server (vServer)

A virtual server is a slice of a real server with a certain amount of resources dedicated to you. For us (speaking for united-coders.com), the smallest version is sufficient. We got one CPU share (admittedly hard to measure) and a guaranteed 1GB of RAM (up to 2GB dynamically, but we only plan with what we will constantly have).

That currently sells for around $6 here. The vServer is our backbone. We set up our WordPress there and then added the other components once everything was running.

Caching

W3 Total Cache and Varnish are our local caching solution. We're serving mostly static pages, so most of the content will be in the cache and the website will be fast. Invalidation and updates are handled by WordPress, so apart from the initial setup you don't need to worry about much. You can do a bit of fine-tuning to suit your specific needs.

Content Delivery Network (CDN)

To take this a step further, most of the files (images, CSS, JS, etc.) are hosted on a CDN. That means they will not be fetched from the vServer but from the CDN, which uses servers closer to the visitor's location. This takes some stress off the vServer and also boosts speed.

How does the site end up in my browser?

When you make a request to united-coders.com, Varnish will serve the site if it's in the cache. The cache lives in memory, which improves access times, as memory is a lot faster than disk, especially if you're getting hits on lots of different pages on your domain.

If Varnish does not find the page, it requests it from the webserver and places it in the cache. For this to work, Varnish sits in front of the webserver (on port 80) and has the real webserver as a backend to refer to when it's missing something.
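As a sketch, the backend definition in Varnish's VCL for this arrangement could look like the following (the backend address and port 8080 are assumptions about where the real webserver was moved; the syntax shown is Varnish 3.x style):

```vcl
# /etc/varnish/default.vcl
# Varnish itself listens on port 80; the real webserver,
# moved to another port on the same machine, is the backend
# Varnish falls back to on a cache miss.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```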

The HTML sent to your browser refers to files residing on the CDN. So the browser will fetch most of the static assets there, which relieves our cache and webserver.

How much does it cost?

The virtual server is around $6 a month. That part of the bill will stay the same every month.

In addition to that, we have to pay Amazon for hosting the files we want served by the CDN. They reside in the Amazon Simple Storage Service (S3) and are accessed by the CDN. With our current traffic this is less than $1 per month. This part of the bill will vary, depending on how much traffic you get and how much you decide to put on S3/CDN.

Pitfalls and Lessons Learned

Varnish cache size and response times

I have run into two problems so far.

Depending on the overall load of the vServer (other guests using up a lot of CPU), Varnish's response to the management thread was too slow and led to crashes. After some googling, a possible solution seemed to be increasing the cli_timeout parameter, which in my case worked.
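The parameter can be raised at runtime through varnishadm, or permanently at startup with -p; the value of 25 seconds below is just an illustration, not the value used on this site:

```shell
# Raise the CLI timeout at runtime; 25 seconds is an illustrative value.
# Older Varnish versions may additionally need -T <host:port> and
# -S <secret file> to reach the management interface.
varnishadm param.set cli_timeout 25

# Alternatively, set it once at startup:
# varnishd ... -p cli_timeout=25
```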

The second issue had to do with the flexible RAM assignment. This is more of a trial-and-error assumption based on the observed behavior: the cache would fill up beyond the guaranteed 1GB of RAM, and when the dynamically allocated RAM was taken away from our vServer (reduced to the guaranteed 1GB), the cache crashed. So I decided to limit the cache memory to 256MB, which is more than plenty.

Here’s an example of how to manually start it with 600MB memory cache:
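A minimal sketch of such an invocation (the backend address and the ports are assumptions; adjust them to your setup):

```shell
# Listen on port 80, use the real webserver on port 8080 as the backend,
# expose the management interface on 6082, and cap the in-memory cache
# at 600MB via the malloc storage backend.
varnishd -a :80 -b 127.0.0.1:8080 -T 127.0.0.1:6082 -s malloc,600M
```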

Invalidating files on CloudFront

At one point the site kept serving an outdated file. Tracing the error from the source to the front – WordPress -> Varnish -> S3 -> CloudFront – showed that only CloudFront was serving the outdated file. In this case I had to manually invalidate the file in CloudFront.
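With the AWS CLI, a manual invalidation looks roughly like this (the distribution ID and file path are placeholders, not the actual values from this incident):

```shell
# Invalidate a single stale file on a CloudFront distribution.
# EDFDVBD6EXAMPLE and the path are placeholders for your own values.
aws cloudfront create-invalidation \
    --distribution-id EDFDVBD6EXAMPLE \
    --paths "/wp-content/uploads/stale-image.png"
```

Note that CloudFront charges for invalidation requests beyond a monthly free allowance, so this is a tool for the occasional fix rather than routine cache busting.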

Conclusion

After some issues in the beginning and during setup, the site is now operating smoothly. I did some load testing and performance looked good. Of course it cannot compete with our root server setup, but it costs only a fraction as much.

The processing power we get from our current host seems below average, a clear sign of an oversold machine. But caching seems to cover most of it. In my tests the site loaded in just over one second on average.

Nico Heid

I work as a software engineer during the day and sometimes hack a bit in my free time. That currently includes anything from software, systems and networks to the Raspberry Pi and hardware.
You can find most of my results on this blog.

3 thoughts on “A highly scalable WordPress self-hosting that is not a burden on your wallet”

server4you.com had/has an offer: you get a 12-month contract (they have since changed the price to 15 USD) in which you don't pay for the first 6 months. So that comes to $7.50. At the end of the period I just hop on the next vServer offer or talk to their sales for reduced prices.

Worked well in the past years.

I keep the server setup fairly simple, so that I can migrate it in one evening. So just a bare Debian or CentOS, no fancy configs.

We've been using XenLayer.com for our hosting needs for over 2 years now and have been very happy with the uptime, service and features. Not only is the network awesome, they also have auto-installers for a lot of software, and the support is always willing to help out with whatever we need. They also do WordPress migrations from server to server.

