Using the same approach, we've been able to set up a site that tested at up to 5,000 concurrent visitors, though once we undid some changes that were causing stability problems we pulled back to about 3,500. The site currently has a fail-over server, but we could put a load balancer in front and roughly double that capacity very quickly. At that level, a client with a million page views a month ticks over on a cheap 8GB VPS with a typical load average of 0.6 on a four-processor machine.

We ran a site that had very few logged-in users (it was basically just serving up pages). It served 10k+ visitors and 100k+ page impressions a day using the Nginx FastCGI cache on a Linode 512 ($20/month). That's without any other caching.
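For readers who haven't set up the Nginx FastCGI cache before, a minimal sketch looks something like the following. The zone name, cache path, socket path, sizes, and cookie check are illustrative assumptions, not the exact configuration of the site described above.

```nginx
# Hypothetical FastCGI cache setup for a mostly-anonymous site.
# Cache path, zone name (PAGECACHE), and sizes are illustrative.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PAGECACHE:10m
                   max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;  # hypothetical

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;  # adjust to your PHP-FPM socket
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        fastcgi_cache PAGECACHE;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 302 10m;

        # Bypass the cache for logged-in users (WordPress-style cookie shown).
        fastcgi_cache_bypass $cookie_wordpress_logged_in;
        fastcgi_no_cache $cookie_wordpress_logged_in;
    }
}
```

The important part is the bypass for logged-in users: a site like the one above, with almost no logged-in traffic, gets nearly a 100% cache hit rate, which is why such modest hardware can cope.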

Tom, one thing to be aware of with local caching is that without a CDN you can quickly saturate your network connection. If each page, with images etc., is 1MB (not so unusual these days!), then a 100Mb/s connection is only about 12.5MB/s, so you can serve roughly a dozen of those pages per second at best, and protocol overhead eats into even that. That's not a lot of users for a busy site, although it's still high traffic.
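The bandwidth arithmetic above is easy to sanity-check. A small sketch, with the assumed numbers (100 Mb/s link, 1 MB pages) passed in as parameters:

```python
# Back-of-the-envelope check of the bandwidth maths above.
# Assumptions: 100 Mb/s uplink, 1 MB average page weight, no protocol overhead.

def pages_per_second(link_mbps: float, page_mb: float) -> float:
    """How many full pages a link can push out each second, ignoring overhead."""
    link_mb_per_s = link_mbps / 8  # megabits per second -> megabytes per second
    return link_mb_per_s / page_mb

print(pages_per_second(100, 1))  # -> 12.5
```

Real throughput will be lower once TCP/HTTP overhead and concurrent slow clients are accounted for, which is exactly why a CDN in front of a locally cached site matters.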

One of the key things with getting scaling right isn't the overall traffic but handling the peaks. 5,000 concurrent visitors would be equivalent to 21 million+ visits per day if they were neatly spread out, but sadly they never are! A tweet from somebody very famous can easily send a lot of traffic. I've worked out that the surge is roughly one concurrent visitor per 50 active followers (the latter being tricky to work out – e.g. light-entertainment celebrities have far fewer active followers than niche players), so somebody with 25,000 genuine followers will create a traffic surge of around 500 concurrent visitors.
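That rule of thumb is simple enough to write down as code. A minimal sketch, where the 1-per-50 ratio is the author's empirical estimate rather than anything exact:

```python
# Rule of thumb from the text: a tweet generates roughly one concurrent
# visitor per 50 *active* followers. The ratio is an empirical estimate.

ACTIVE_FOLLOWERS_PER_VISITOR = 50

def surge_from_tweet(active_followers: int) -> int:
    """Estimated concurrent-visitor surge from a single tweet."""
    return active_followers // ACTIVE_FOLLOWERS_PER_VISITOR

print(surge_from_tweet(25_000))  # -> 500
```

The hard part in practice is estimating *active* followers, since (as noted above) the active fraction varies wildly between a light-entertainment celebrity and a niche specialist.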