Over the last few weeks we have been putting effort into reducing the load time of our web applications. In the mobile world the two usual suspects are network and processing power, but it is hard to significantly reduce requests and resource usage without cutting features from our users. So what if we could cut the features the user cannot see?

A quick analysis of our pages shows that, like many web applications, the user initially sees much less content than what we actually serve. In our case, the ratio of initially visible content easily reached 1:13 in some of our main entry points. Below is a rough illustration of the three scenarios we tested, along with the viewport size of some popular mobile phones.

Page a) is one of the flavours of our home page. It contains basic elements such as the header and footer, followed by a collection of modules, each with considerable rendering complexity and some requiring separate requests to our middleware data servers. To save resources, we already collapse some of the modules until the user expands them by clicking their headers, which is far from ideal in terms of user experience. For this scenario our aim was to find an easy way to make each module aware of whether it was in the viewport, and to delay its rendering and data fetching until the user could actually see it.

Pages b) and c) are two different scenarios of betting events. Unlike the previous example, these pages do not have many modules but are actually big lists with many betting possibilities. The difference between them is that page b) needs extra requests to render the whole list, while page c) fetches all the data in one request. Here we intended to find an easy way to render only a visible fraction of the list and make the list grow as the user scrolled down.

We ended up creating two AngularJS directives:

lazy-module – place this attribute on any DOM element and, instead of immediately rendering its content, the directive will render only a placeholder until the user is actually able to see it. This way you save all the data fetching, scripting and rendering until the user scrolls the placeholder into the viewport. As soon as the placeholder becomes visible, the directive starts rendering the actual content. Since the browser may take some time scripting and fetching data, choose a placeholder that gives the user some feedback about what is happening.

lazy-repeater – use this attribute on the same element as any ng-repeat directive. Instead of rendering a huge list all at once, you get a sublist roughly as big as your viewport. As the user approaches the end of the sublist, it automatically doubles in size until the full list is rendered. This postpones all the rendering until the user starts scrolling down the list and, depending on the logic in each list item's controller, may also save some data requests, since controllers that are never rendered are never instantiated.
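As a sketch of how the two attributes might be applied in a template (the attribute names come from the directives above; the surrounding markup and the expensive-widget element are our assumptions, so check the package documentation for the exact API):

```html
<!-- lazy-module: only a placeholder is rendered until this section scrolls
     into the viewport, postponing its data fetching and rendering -->
<section lazy-module>
  <expensive-widget></expensive-widget>
</section>

<!-- lazy-repeater: starts with a viewport-sized sublist that doubles
     as the user approaches its end -->
<ul>
  <li ng-repeat="bet in bets" lazy-repeater>{{ bet.name }}</li>
</ul>
```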

The results of applying just these two directives were quite impressive. With very few changes we achieved a considerable performance improvement. The table below shows the loading time improvements for all the scenarios above on both an i7 MacBook Pro (Google Chrome) and an older iPhone 4S (Safari).

| Scenario | Improvement (MBP) | Improvement (iPhone 4S) | Directive used |
|----------|-------------------|--------------------------|----------------|
| a)       | -40%              | -64%                     | lazy-module    |
| b)       | -21%              | -28%                     | lazy-repeater  |
| c)       | -8%               | -26%                     | lazy-repeater  |

If you feel like trying this in your application, both directives are bundled in a package called ng-lazy-render. Try it and let us know what you think!

Today I had the pleasure of opening the doors of our company to about 500 engineers at Commit Porto 2016. This presentation is mostly about problems we found while developing our new Betfair mobile web application, but its principles apply to any big single-page application with client-side rendering. Take a peek now; I'll post a video as soon as it is available!

Hope you enjoy it and if you have any questions just post them here or drop me a line!

One of my most recent and interesting challenges was building a module where you can scroll, search and filter through a really long list of horse racing-related bets. These can quickly add up to more than a thousand entries, meaning a lot of information must be shown on a small device with a slow network and limited processing power.

The first idea that comes to mind is probably pagination. It solves most network issues by making only small requests, it scales really well, and since you are only processing small chunks of data at a time, you probably won't suffer from performance issues.

But when you want to scroll through all those horses without feeling like you're using a website, pagination – even if performed automatically as you scroll down – just can't deliver the responsive feel that native applications usually offer. Furthermore, in such a scenario all searches must be done server-side, and those results also need pagination. So we aimed to handle 10,000 list items with the best possible experience and fast rendering on my four-year-old iPhone 4S.

A quick search on the web shows that the new trend is virtual scrolling, and since we have an AngularJS app, we immediately considered angular-vs-repeat, an amazing directive that wraps your good old ng-repeat and gives it a huge performance boost by rendering only a visible subset of a potentially huge list. Unfortunately this out-of-the-box solution didn't work for us, since we needed some very specific fine-tuning in order to save some extra requests.
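For reference, angular-vs-repeat is applied by decorating the scrollable container of an existing ng-repeat. A minimal sketch based on the library's documented usage (someArray is a placeholder name of ours):

```html
<div vs-repeat>
  <div ng-repeat="item in someArray">{{ item }}</div>
</div>
```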

So we opted for angular-inview, a simpler AngularJS directive that just informs you when an element becomes visible or hidden. This way we could easily build a custom solution, so we tried four different approaches:
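angular-inview works by evaluating an expression whenever an element's visibility changes, exposing an $inview boolean you can act on. A minimal sketch (loadContent and row are placeholder names of ours, not from our production code):

```html
<div in-view="$inview && loadContent(row)">
  {{ row.title }}
</div>
```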

| Approach | Loading time (MBP) | Loading time (iPhone 4S) | Watcher count |
|----------|--------------------|---------------------------|---------------|
| (a)      | 120 seconds        | N/A (crashed)             | 42,000        |
| (b)      | 105 seconds        | N/A                       | 10,000        |
| (c)      | 8 seconds          | 100 seconds               | 1,200         |
| (d)      | 5 seconds          | 30 seconds                | 1,500         |

(a) First we measured how much it would cost to render a full list of 10,000 elements, each with a lot of code in it (directives, filters, etc.). The results were not surprising. My MacBook Pro took around two minutes to get everything ready, and my iPhone's Safari just crashed.

(b) Then we applied the angular-inview directive to every row. If the row was in the viewport, we'd render the content; if it was outside the viewport, we'd render a placeholder with no data. My laptop's timing improved a little, but a quick analysis of what was happening showed that the browser was losing a ton of time in angular-inview.js, in a handler bound to the scroll event. Every row had its own scroll handler, and that's a very, very bad idea.

(c) So instead of attaching the in-view directive to every element, we tried bucketing the data in groups of 30 rows, leaving us with 1/30 of the handlers. The results were quite surprising: Chrome now took only 8 seconds to render everything, and the iPhone finally started showing our list with no problems… in 100 seconds. A module that takes 100 seconds to render is as good as no module at all, so we still needed a better solution.
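The bucketing step itself is simple. A sketch of the idea (the chunk helper is ours, not the actual production code; only the 30-row bucket size comes from the text):

```javascript
// Split the flat list into buckets of 30 rows, so angular-inview watches
// one container element per bucket instead of one per row.
function chunk(items, size) {
  var buckets = [];
  for (var i = 0; i < items.length; i += size) {
    buckets.push(items.slice(i, i + size));
  }
  return buckets;
}

// 10,000 rows become 334 buckets: roughly 1/30 of the scroll handlers.
var buckets = chunk(new Array(10000).fill(null), 30);
console.log(buckets.length); // 334
```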

(d) It turns out the iPhone was really slow at rendering lots of HTML. It wasn't a matter of scripting anymore, but of the fact that we were rendering almost 10,000 placeholders containing some simple styled HTML. As soon as we understood this, we started rendering placeholders only for the buckets close to the viewport; the others rendered a single empty placeholder of the appropriate size. We got slightly better numbers for Chrome, but a major improvement for the iPhone.
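Step (d) boils down to reserving the right amount of vertical space without giving the browser any real HTML to lay out. A sketch of the idea with an assumed fixed row height (the function name, markup and the 44px constant are ours, for illustration only):

```javascript
var ROW_HEIGHT = 44; // px per row; an assumed constant for illustration

// Buckets near the viewport get a real placeholder (and later the content);
// far ones get one empty element that just reserves the correct height,
// so the browser lays out a single cheap div instead of ~30 styled rows.
function bucketHtml(bucket, isNearViewport) {
  if (isNearViewport) {
    return '<div class="bucket-placeholder">Loading ' + bucket.length + ' rows…</div>';
  }
  return '<div style="height:' + (bucket.length * ROW_HEIGHT) + 'px"></div>';
}

console.log(bucketHtml([1, 2, 3], false)); // <div style="height:132px"></div>
```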

Is 30 seconds still a lot? Sure. Could we improve this solution even more? Probably. But if premature optimization is the root of all evil, and testing this solution in a real scenario gives us a snappy application even on older devices, maybe that's a sign we can take the rest of the day off. 🙂

I’ve recently discovered that angular-touch doesn’t do a great job on link-based navigation. Although it prevents the 300ms delay that mobile browsers introduce while trying to figure out whether your tap on the screen was a click or a double-tap zoom gesture, it only does so for click handlers (through the use of ng-click), not for anchors.

Since many Angular apps use links to navigate through their multiple views (and they should, SEO-wise), we either implement click handlers for all links or live with a very annoying delay during which the app does nothing, when it could already be loading the next page.

With that in mind, I’ve just created a tiny project called angular-touch-faster. It is intended to be a small set of directives to give our Angular mobile apps that extra performance kick.

For now, I’ve only included an extra ‘a’ directive. It simply registers a click handler on all existing anchors, replacing the browser’s location with the content of the anchor’s href attribute. Just include the module as a dependency of your app and you can immediately feel the difference.
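A minimal sketch of what such a directive could look like (this is our reconstruction of the approach, not the package’s actual source; the isInternalHref helper and its rules are our assumptions):

```javascript
// Only intercept same-app links; external URLs keep default navigation.
function isInternalHref(href) {
  return !!href && href.indexOf('://') === -1;
}

// Guarded so this sketch also loads outside a browser.
if (typeof angular !== 'undefined') {
  angular.module('angular-touch-faster', ['ngTouch'])
    // An element directive named 'a' matches every anchor in the app.
    .directive('a', ['$location', function ($location) {
      return {
        restrict: 'E',
        link: function (scope, element, attrs) {
          element.on('click', function (event) {
            if (!isInternalHref(attrs.href)) { return; }
            event.preventDefault();
            // ngTouch has already turned this tap into a fast click,
            // so navigation starts without the 300ms delay.
            scope.$apply(function () {
              $location.url(attrs.href);
            });
          });
        }
      };
    }]);
}
```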

Not only that, but you also don’t lose the active state on the element you clicked, giving the user some much needed feedback while the browser is navigating away.