Who says travel websites need to be slow?

According to sources such as Travolution and Econsultancy, travel websites are "seven times slower than recommended" and are among the worst-performing websites on the internet. This can be down to a number of reasons. In this article we will take a look at our client, Not Just Travel, and how we rebuilt their websites to score over 90/100 on Google PageSpeed.

Not Just Travel is a franchise-based company; each franchisee receives a free website to help promote their digital presence and increase their sales. With over 400 franchisees, they have a lot of websites.

step one

Assess the original platform and process

The original platform was built as a WordPress multisite, which was crippled at about 200 sites; the total website size was over 120GB and it had some major uptime issues. Before starting the new project we did an exploratory exercise to see what work we could do to the existing platform to ensure better uptime and improve some key performance metrics. What we found was quite interesting:

Over 8,000 database tables within MySQL

Each new “site” in WordPress would create new tables within the database.

When using the WordPress admin area, it would try to join lots of tables together, resulting in some page loads taking nearly 4-5 minutes.

Every time a new site was created, the assets folder was copied, leaving multiple copies of the same files scattered across the platform.

Because of the quantity of tables within MySQL and the way WordPress multisite works, the sites would crash repeatedly throughout the day, despite the system running on a huge server.

Publishing a new page required “broadcasting” to copy the page to each site, which caused high CPU/MySQL load and took over 12 hours to complete.

Backups were failing

Database backups failed because MySQL timeouts were being hit, and because not enough tables could be opened at the same time to maintain locks and ensure data consistency.

Filesystem backups failed - these backups were copied to another drive, then replicated off-site, but only one copy of a backup could be stored locally because of the size of the website folder.

Because the same content existed across all the sites, there were major duplicate-content issues.

step two

Define our objectives

There were a lot of issues to address, and steering away from WordPress was an easy decision. Having previously built and managed websites that operate in this way, I knew of a solution that could work for Not Just Travel, and we started mapping out the idea.

For the new project we had 3 main objectives:

Must be able to handle over 1,000 sites

Website performance and page speed is a priority

Site performance must remain high after launch, with no degradation as more sites are added.

step three

Agree on parameters and processes to achieve success

There are a few basic rules to follow when building a website platform that has to scale in this manner:

No content should be duplicated, and should only ever exist in one place.

The process for adding a new site should be completable without any developer intervention.

When a new site is added, no code changes or database changes (either new tables or columns) should be required.

The first rule is to ensure that content is easily editable and that the admins can manage content in a quick and trivial manner. The second and third rules go hand-in-hand to ensure that nothing is “hard coded” or that when scaling sites we do not increase our footprint. Only the data we store increases.

Previously each site ran as a “subfolder” of the main website, for example notjusttravel.com/stevewitt. We decided to change this to work at subdomain level, e.g. stevewitt.notjusttravel.com, for two reasons. Firstly, if we need to separate out DNS records for individual subdomains, this is simple; doing so at a subfolder level is impossible and would require more infrastructure to achieve the same objective. Secondly, subdomains make it possible for franchisees to run their own paid marketing.

We also implemented canonicalisation to ensure there was one “source of truth” for content, so none of the sites would suffer duplicate-content penalties.

Once it came around to starting the build, we began work on the initial “platform”, building simple proofs of concept that demonstrated how we would separate site data and ensure the correct content loads on the correct sites. Using Laravel this was a simple task. The concept is to accept all domains that come into the application and validate the incoming domain against the sites configured in the database; if a valid site matches, we build the page for that request.
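The domain-validation concept can be sketched as follows. This is an illustrative sketch in Python (the production system is Laravel/PHP), and the site records and field names here are hypothetical:

```python
# Hypothetical sketch: resolve the requested hostname to a site record
# before building the page. In production this lookup would hit the
# database; here a dict stands in for the sites table.

SITES = {
    "notjusttravel.com": {"id": 0, "name": "Main site"},
    "stevewitt.notjusttravel.com": {"id": 1, "name": "Steve Witt"},
}

def resolve_site(host):
    """Return the site record for a hostname, or None for unknown domains."""
    hostname = host.lower().split(":")[0]  # normalise case, strip any port
    return SITES.get(hostname)

def handle_request(host):
    """Build a page only when the domain matches a configured site."""
    site = resolve_site(host)
    if site is None:
        return 404, "Unknown site"
    return 200, f"Rendering pages for {site['name']}"
```

Because every domain funnels through one lookup, adding a new site is purely a data change - a new row, not new code.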

All the content areas of the platform are separated as Modules, with each module having different versions - this allows us to enable/disable different Modules (Blog / Offers / Insights / etc) on various sites, but also allow us to test different designs of a module on sites - so we can make a new layout and test it on a few sites and if it performs well roll it out across the platform. An added bonus of this is giving the individual site owners their own custom admin area with only their specific content available.
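The per-site module configuration might look something like the sketch below. This is a hypothetical Python illustration of the idea (the real system is Laravel); the module names come from the article, but the data shapes are assumptions:

```python
# Hypothetical sketch: each site enables a set of modules, and each
# enabled module pins a layout version - which is how a new design can
# be trialled on a few sites before rolling out platform-wide.

site_modules = {
    1: {"blog": "v1", "offers": "v2"},                    # site 1 trials offers v2
    2: {"blog": "v1", "offers": "v1", "insights": "v1"},  # site 2 on stable layouts
}

def enabled_modules(site_id):
    """Modules switched on for a site (drives its custom admin area too)."""
    return sorted(site_modules.get(site_id, {}))

def module_version(site_id, module):
    """Layout version to render for a module, or None if not enabled."""
    return site_modules.get(site_id, {}).get(module)
```

The same configuration that controls the front end also scopes each owner's admin area, since only enabled modules are exposed there.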

Internally - content is stored within specific tables so all “blog posts” are in one table, with relationships tying them to either specific sites, or site types. This allows us to automatically populate a new site with existing data as soon as it goes live and doesn’t require a large amount of work adding content. Any image assets uploaded are passed through basic optimizers for jpg/png/gif to ensure they are stripped of any unnecessary data, they are also resized to our largest image size (if they are bigger).
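The shared-content model can be sketched like this. Again a hedged Python illustration rather than the actual schema - the field names are assumptions - but it shows why a new site can be populated by a query instead of by copying rows:

```python
# Hypothetical sketch: all blog posts live in one table, each linked to
# either a specific site or a site type. A new site's content is just
# the result of a query, so nothing is duplicated when a site launches.

posts = [
    {"id": 1, "title": "Top 10 cruises", "site_id": None, "site_type": "cruise"},
    {"id": 2, "title": "Steve's local news", "site_id": 1, "site_type": None},
    {"id": 3, "title": "Travel insurance tips", "site_id": None, "site_type": "general"},
]

def posts_for_site(site_id, site_type):
    """Posts owned by this specific site, plus shared posts for its site type."""
    return [p for p in posts
            if p["site_id"] == site_id or p["site_type"] == site_type]
```

A brand-new "cruise" site sees all cruise-type posts the moment it goes live, with zero content copied.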

When we started the production build we kept page speed front of mind by setting a further set of objectives:

We wanted to use no JS or CSS frameworks

Every image should be as efficient as possible

Not using a framework for JS and CSS was a fun exercise; in fact, the only external components we used were flickity (a vanilla JS carousel) and axios (simple JS ajax functions). This kept our file sizes down and also reduced the number of requests needed (as we could minify everything into a single file).

We could have looked at inlining CSS, but this would have had very little return for quite a complex piece of work, and could still be done later to push the speed score higher. As many assets as possible are served from our CloudFront CDN with long expiration times, meaning the majority of the imagery is cached between page loads.

When we request an image/asset on the front end, we request the size of the asset we would like - for example “Homepage hero - medium”. A thumbnail is then created at those specific dimensions and served from the CDN, with a filename that is a hash of the original file. If the thumbnail already exists, it is served immediately. If the original image is changed, the thumbnail’s filename no longer matches, so a new one is created and sent through the CDN automatically - meaning no one is ever served a stale image.
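The hash-based naming scheme can be sketched in a few lines. This is an illustrative Python sketch (the real pipeline is Laravel + CloudFront, and the exact naming format here is an assumption): because the thumbnail filename is derived from the original file's content, changing the source image automatically yields a new CDN path.

```python
import hashlib

def thumbnail_name(original_bytes, width, height, ext="jpg"):
    """Derive a thumbnail filename from the source image's content hash.

    Same source image -> same name (CDN cache hit).
    Changed source image -> new name (CDN fetches the fresh file).
    """
    digest = hashlib.sha256(original_bytes).hexdigest()[:16]
    return f"{digest}_{width}x{height}.{ext}"
```

This is effectively content-based cache busting: no manual invalidation requests to the CDN are ever needed.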
