Also, a point of clarification: in the talk, I mention that EFS’ performance is directly tied to the instance type that connects to it. That’s incorrect; EFS’ performance is directly tied to how much data is stored on the EFS file system. The more data, the more performance you can eke out of it. Many people will write empty bytes to the EFS file system to make it appear larger and thereby get more performance! Refer to the documentation for more: http://docs.aws.amazon.com/efs/latest/ug/performance.html
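A minimal sketch of that trick: writing a file of zeros with `dd` to bulk up the stored size (the mount point and file name here are assumptions; point them at your own EFS mount, and use a much larger `count` in practice):

```shell
#!/bin/sh
# EFS_MOUNT is a hypothetical mount point; override it with your real EFS mount.
EFS_MOUNT=${EFS_MOUNT:-/tmp/efs-demo}
mkdir -p "$EFS_MOUNT"

# Write 16 MiB of zeros. Zeros written via dd count toward the stored data
# that EFS uses to size baseline throughput (sparse files would not).
dd if=/dev/zero of="$EFS_MOUNT/ballast.bin" bs=1M count=16 2>/dev/null

# Confirm the file occupies real bytes on the file system.
wc -c < "$EFS_MOUNT/ballast.bin"
```

Keep in mind this inflates your storage bill, since EFS charges per GB stored.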

Published December 10, 2017

WordPress at its heart is a blogging platform, designed to serve a site that’s largely read-only. Logging in isn’t necessary unless you’re an admin looking to write a blog post or adjust settings. This is a good thing! Scaling a site that’s predominantly read-only is very easy because you can place a CDN like CloudFront or a page cache like Varnish in front of a single server and serve many, many requests from hardware as cheap as $5 per month.

But what happens when you have a site that isn’t read-only? What happens when you have, for example, a WooCommerce site with a couple hundred transactions a day? Or perhaps you run a news site with millions of pageviews a month? All of a sudden, that poor $5 server is catching on fire and asking what it did to deserve this!

Running a single server like this makes it a “single point of failure,” which is a very big no-no in a production environment. Cloud servers are ephemeral in nature and aren’t guaranteed to stay up 100% of the time. Any number of things can go wrong, which is why designing your infrastructure to respond to bursts of traffic, as well as to keep serving requests when servers go down, is paramount to a reliable production environment.

This talk gives an overview of what is involved in moving from a single-server setup to a scalable, highly available infrastructure on AWS.