Xfive.co – Designed and Built for Performance

In these times of website obesity, we've made our website four times leaner than the average. How did we do it?

The average web page size reached 2.3 MB in April 2016. According to WebPagetest, our home page weighs 538 kB across 13 requests on the initial load. I’d like to share some insights for designing and developing a performance-oriented website.

Thinking performance: The rebranding phase

Our company underwent a major rebrand in January 2016. We started planning the rebrand in early 2015. One of the main tasks was to redesign the old website so that it was fast, secure and easy to maintain.

The new version of our website should perform better than the previous one.

First, we tested our old XHTMLized website. It weighed 1.7 MB – slightly below the average at that time – and made 81 requests.

XHTMLized’s home page with a robot mascot in April 2015

Once you become interested in performance, you see your website through different eyes:

Do we need a chat script on the homepage or blog posts?

Do our users care about how many likes we have on Facebook?

And why on earth don’t we have gzip compression enabled?
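The gzip fix, at least, is usually only a few lines of server configuration. A minimal nginx sketch (the compression level and MIME types are illustrative; text/html is compressed by default):

```nginx
# Compress text-based responses before sending them over the wire
gzip on;
gzip_comp_level 5;       # balance CPU cost vs. compression ratio
gzip_min_length 256;     # skip tiny responses where gzip doesn't pay off
gzip_types text/css application/javascript application/json image/svg+xml;
```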

With just those fixes we were able to reduce our old website’s home page to 732 kB and 37 requests. That gave us a good baseline to compare the new website against.

Designing for performance: The redesign phase

For the design phase, we teamed up with talented designer Adrian Hayes from Pan Fried Pixels. We ended up with the following design:

New Xfive home page showing the human face of our company

One of the goals of our new website was to show the human face of our company. Human face equals photos. Photos equal kilobytes, and usually a lot. Add two typefaces, sans serif for titles and serif for texts, and your kilobyte diet plan starts to fall apart.

Once you sense the delicious smell of pixels roasting, you can easily forget your New Year’s resolution of staying lean.

Designing for performance at this stage requires discipline as performance is often something distant and abstract. That’s why design decisions shouldn’t be set in stone, and you should be able to revise them during development.

Let me share two examples with you:

Using Disqus vs. WordPress for user comments

We planned to use the Disqus system for the comments on our blog. Later we found out that Disqus was adding 2.5 MB of resources to the site!

That was simply too much, even though Disqus loads asynchronously. Since our site is built on top of WordPress, we decided to stay with native WordPress comments. We could make this trade-off because advanced commenting functionality wasn’t an essential feature of our site.

To further increase the maintainability and transferability of our code base, we moved some non-theme functionality (e.g., custom post type and taxonomy definitions) into a custom functionality plugin.
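Such a plugin is an ordinary WordPress plugin file hooking into init; the sketch below is illustrative, and the "case_study" post type is hypothetical rather than one of our actual types:

```php
<?php
/*
Plugin Name: Site Functionality (illustrative sketch)
*/

// Registering the post type in a plugin rather than the theme means
// the content type survives a theme switch.
add_action( 'init', function () {
    register_post_type( 'case_study', array(
        'label'  => 'Case Studies',
        'public' => true,
    ) );
} );
```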

While Timber and ACF can increase your productivity and make your code cleaner and more maintainable, they are also responsible for a significant portion of page generation time, so you should avoid regenerating pages on each hit. Timber has built-in cache support, but we settled on the more transparent Varnish HTTP cache.

Managed WordPress hosting plans like Pantheon often use Varnish or other types of server caching, so you don’t have to worry about a performance hit.
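If server-level caching isn't available, Timber's built-in cache is a small change – you pass an expiry time to Timber::render (the 600-second TTL below is illustrative):

```php
// Cache the rendered Twig output for 10 minutes instead of
// regenerating it on every hit.
Timber::render( 'single.twig', $context, 600 );
```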

npm build tools

Once we have the basic WordPress setup done, we start theme development. Instead of developing the front-end separately and merging it later, we develop it right away with the theme functionality. This saves us around 20% of development time, but it also has an important psychological effect: we know this is the final stage and that we are building the final product, not a throwaway front-end to be reworked later.

We set up the following npm tasks to help us with the theme development:
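The exact task list isn't reproduced here, but a simplified package.json sketch conveys the setup (tool choices, task names and paths are illustrative):

```json
{
  "scripts": {
    "build:css": "node-sass assets/scss -o assets/css",
    "build:js": "browserify assets/js/main.js -o dist/bundle.js",
    "watch:css": "node-sass assets/scss -o assets/css --watch",
    "serve": "browser-sync start --proxy local-xfive.co --files 'assets/css/*.css'",
    "dev": "npm-run-all build:* --parallel watch:css serve"
  }
}
```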

While using npm as a build tool looks good in theory (just one file and a bunch of commands), next time we would use Gulp. npm commands can easily become cluttered, and the Browsersync reload and style injection weren’t very smooth. (Last time I checked, this worked better with parallelshell replaced by npm-run-all.)

For WordPress development with Browsersync, you need to proxy your WP site. This is a benefit because it allows you to run two versions side by side:

At localhost:3000 you can run a dev mode with quickly built, unminified CSS, and you don’t have to include critical CSS.

At the virtual host (in our case local-xfive.co) you can include prefixed and minified CSS as well as critical CSS.

Timber also has a handy resize filter which you can use directly in templates for your custom-sized images.
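In a Twig template that looks roughly like this (the 600 px width is illustrative):

```twig
{# Generate a 600px-wide variant of the thumbnail on the fly #}
<img src="{{ post.thumbnail.src | resize(600) }}" alt="{{ post.title }}">
```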

We use progressive JPEGs and lazy load images below the fold using the lazysizes library. To avoid page jumps and repaints as a user scrolls down, we implemented Intrinsic Placeholders for these images.
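A minimal sketch of that markup (dimensions are illustrative): the wrapper's padding-bottom reserves the image's aspect ratio before it loads, and lazysizes swaps in data-src as the image approaches the viewport.

```html
<!-- padding-bottom = height / width, e.g. 600 / 800 = 75% -->
<div style="position: relative; padding-bottom: 75%;">
  <img class="lazyload"
       data-src="team-photo-800x600.jpg"
       alt="The Xfive team"
       style="position: absolute; top: 0; left: 0; width: 100%;">
</div>
```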

For the icons, we use inline SVG. They are scalable, reduce the number of HTTP requests and are easy to maintain. Check out the following article for instructions on how to create them.
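The basic pattern: inline the icon definitions once as symbol elements, then reference them anywhere with use (the icon ID below is illustrative):

```html
<!-- Icon definitions, inlined once near the top of <body> -->
<svg style="display: none;">
  <symbol id="icon-twitter" viewBox="0 0 32 32">
    <!-- <path> data for the icon goes here -->
  </symbol>
</svg>

<!-- Reused wherever the icon is needed; styled via CSS -->
<svg class="icon"><use xlink:href="#icon-twitter"></use></svg>
```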

We have also created guidelines for image optimization, so content editors know how to find the right balance between image quality and file size.

JavaScript

We separated our JavaScript functionality into small independent modules and used Browserify to bundle all dependencies into a single minified file.
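As a sketch of that module style (the toggle module below is illustrative, not our actual code): each file exports one small piece of behaviour, and Browserify resolves the require() calls into a single bundle.

```javascript
// toggle.js – one small, dependency-free module.
// In the real project this would live in its own file and be
// pulled in with: var createToggle = require('./toggle');
function createToggle(state) {
  return {
    isOpen: function () { return state.open; },
    toggle: function () {
      state.open = !state.open;
      return state.open;
    }
  };
}

// main.js would wire the module to DOM events; here we just
// exercise it directly.
var menu = createToggle({ open: false });
console.log(menu.toggle()); // true
console.log(menu.toggle()); // false

module.exports = { createToggle: createToggle };
```

Bundling then becomes a single command, e.g. `browserify main.js -o bundle.js`, followed by minification.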

When you use jQuery and its countless plugins, it's easy to forget about their impact on the overall page size. Removing jQuery from your workflow is usually driven by performance goals, and once it's gone you consider the effect of each particular plugin on your site's size and performance more carefully.

Want a way to stop WordPress from including jQuery by default?

Here is a small snippet from our functionality plugin to deregister jQuery and defer all JavaScript:
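It boils down to two standard WordPress hooks; the sketch below is illustrative rather than our exact code:

```php
// Don't enqueue WordPress's bundled jQuery on the front end.
add_action( 'wp_enqueue_scripts', function () {
    wp_deregister_script( 'jquery' );
} );

// Add the defer attribute to every front-end <script> tag.
add_filter( 'script_loader_tag', function ( $tag, $handle ) {
    if ( is_admin() ) {
        return $tag;
    }
    return str_replace( ' src=', ' defer src=', $tag );
}, 10, 2 );
```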

Eliminating render blocking resources

To avoid render-blocking resources like JavaScript or CSS, we have deferred execution of all JavaScript (see above for how to do that in WordPress) and implemented critical CSS.

Critical CSS means that the CSS for the above-the-fold content is inlined in the HTML. It can be a bit tricky to do, but it's still considered a state-of-the-art method of loading CSS, at least until more convenient and flexible methods become widespread.

We don't use any automatic method for generating critical CSS. We have a main stylesheet with all styles, and we use the loadCSS function to load it asynchronously. Then we have a common critical stylesheet (Normalize.css, common styles, and styles for common above-the-fold components) and individual critical stylesheets for most site templates. In the templates, we include the common critical CSS plus template-specific critical CSS if needed.
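Put together in the document head, the approach looks roughly like this (paths are illustrative; the rel="preload" trick is loadCSS's recommended usage pattern):

```html
<head>
  <!-- Common + template-specific critical CSS, inlined -->
  <style>
    /* contents of the critical stylesheets go here */
  </style>

  <!-- Full stylesheet, loaded asynchronously -->
  <link rel="preload" href="/css/main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
  <!-- inline loadCSS + its rel=preload polyfill here -->
</head>
```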

As you can see, we achieve pretty good results in the various performance tests. The only problem is a long time to first byte (TTFB) reported by these tools. In all tests, the page was generated from the Varnish cache, which can be verified in the response headers, so this shouldn't be an issue with WordPress itself.

So what could the problem be? We run our website on a server in Germany, and most of these tests are run from US servers. Indeed, if we check the site from European locations, the time to first byte decreases significantly.

Moreover, we use Cloudflare, so the request from a US visitor goes to the nearest Cloudflare data center in the US, then Cloudflare sends it to our server and passes the response back to the visitor.

Could it be that these requests simply take more time because they need to travel a longer physical distance from the US to Europe? Possibly, although the differences shouldn't be that big. Cloudflare and TTFB (time to first byte) have never been best friends, and this topic has sparked some controversial discussion. A high TTFB doesn't automatically mean slow loading times, but we will continue to investigate this.

We use Pingdom for uptime checks, transaction tests and Real User Monitoring (RUM). RUM shows that the median loading time of our home page from the US is around 2–2.5 seconds. That's not bad, but we feel it could be better if we improve TTFB.

Conclusion

Are the results what we hoped for? Not exactly, but they are pretty close. Web performance is a complicated matter and even if you do many things right, there still might be some things you need to work on or that are out of your control.

Was the effort worth it? Definitely. Not only did we learn a lot while building Xfive.co, but once we started focusing on performance, our mindset changed completely.

Lean websites are not only performant websites – they are also less annoying websites.

Some of our decisions, like not using popups to collect blog subscribers, were influenced partly by performance concerns and partly by our desire not to annoy our visitors.

If you care about your users, you care about every byte you send their way.

--

Looking for developers who care about performance? Send us your project details and we’ll be happy to provide you with a free review and quote.



About the author

Lubos Kmetko started to work for Xfive (formerly XHTMLized) as a front-end developer in 2006. He currently helps with business operations and writes for the Xfive blog.

Hi guys. Great job with the site optimization. I know that you specialize in WordPress development, and I've found something that is missing on the market which maybe you could build. When someone tries to optimize a website with Google's PageSpeed tool, they will most likely see a problem with loading CSS asynchronously. To get rid of the problem we need to use loadCSS from Filament Group in WordPress – and there is no WordPress plugin to achieve this goal. It would be nice to have one, and to publish a blog entry on how you made it, so that people around the world could use it.

Lubos Kmetko • July 1, 2016

Thanks, Tomasz. There are a couple of other performance issues such a plugin could solve, so it's a good idea. But I don't know if we will work on something like that anytime soon.