Thursday, 28 April 2016

On CSS/jQuery Selector Performance

My post describing the use of a simple selector to identify page spinners was originally going to be about performance, but then I learned something I found very interesting.

I likened what I learned to Tom Kyte's essay (Asktom->Resources->Presentations->FalseKnowledge.htm) on Correlation vs Causation. The essence was that things change over time, that we can't always trust authorities on a topic, and that we must always test in our own environments. This aligns with skepticism in general, and here we are applying it to web development.

I was going to suggest simply prefixing the provided class with the relevant HTML tag.

span.u-Processing {
  display: none;
}

But I discovered a thing or two while drafting the post to explain my reasoning.

A while ago I picked up much of my jQuery performance knowledge from this article: http://www.artzstudio.com/2009/04/jquery-performance-rules/
It was concise and it made sense. It also harmonised with other information I found on the topic. However, the post is now 7 years old, so it is probably worth re-testing its assertions.

Consider the provided solution using .u-Processing. The period prefix means it's looking for an element on the page with the class u-Processing, as per the definition of the spinner:

<span class="u-Processing" ...

If the element had an id, you could use the # prefix to reference the ID, much like using a unique* index. (*In the web world, an ID is not guaranteed to be unique, but it should be as a best practice, particularly to avoid logic errors.)

As I had previously interpreted it, the problem is that the class-only selector needs to look at every single element on the page. If the selector is prefixed with the tag name, the browser's workload is reduced by first filtering the DOM tree to just the span tags:

span.u-Processing

And of those span tags, only the ones with the provided class match.
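That filtering intuition can be sketched in plain JavaScript. This is a hypothetical model, not real browser internals: the page is modelled as a flat list of mock elements, using the rough element counts mentioned later in this post (about 3000 elements, 148 spans, one spinner), and we count how many class-list checks each strategy performs.

```javascript
// Build a mock "page": 3000 elements, of which 148 are spans,
// and one span carries the u-Processing class.
const page = [];
for (let i = 0; i < 3000; i++) {
  page.push({ tag: i < 148 ? 'span' : 'div', classes: [] });
}
page[0].classes.push('u-Processing');

// Strategy 1: '.u-Processing' — check the class list of every element.
let classChecksAll = 0;
const byClass = page.filter(el => {
  classChecksAll++;
  return el.classes.includes('u-Processing');
});

// Strategy 2: 'span.u-Processing' — a cheap tag-name filter first,
// then class-list checks only on the surviving candidates.
const spans = page.filter(el => el.tag === 'span');
let classChecksSpansOnly = spans.length;
const byTagAndClass = spans.filter(el => el.classes.includes('u-Processing'));

console.log(classChecksAll);       // 3000 class-list checks
console.log(classChecksSpansOnly); // 148 class-list checks
console.log(byClass.length === 1 && byTagAndClass.length === 1); // true
```

Both strategies find the same single element; the difference is how many of the more expensive class-list checks were needed along the way. (Real engines also index elements by tag name, which is part of why the tag prefix can help.)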

The reasoning seems sound, right? Part of it relates to native JavaScript methods available for locating content. Now if only there was a way to test these assertions...

Well, on jsperf.com you can set up such test cases and run them against different browsers. Similar websites are available for SQL (livesql.oracle.com) and JavaScript test cases (jsfiddle.net).

[Note: jsperf.com was alive when I started drafting this post in March 2016, but it has been down for a few weeks as of April.]
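The idea behind those ops/sec comparisons is simple enough to sketch yourself. This is an illustration of the concept, not jsperf's actual implementation; the function name opsPerSecond is mine.

```javascript
// Minimal micro-benchmark sketch: run a function repeatedly for a fixed
// time budget and report operations per second.
function opsPerSecond(fn, budgetMs = 200) {
  const start = Date.now();
  let ops = 0;
  while (Date.now() - start < budgetMs) {
    fn();
    ops++;
  }
  const elapsedSeconds = (Date.now() - start) / 1000;
  return ops / elapsedSeconds;
}

// In a browser console you might compare selector strategies like so:
//   opsPerSecond(() => document.querySelectorAll('.u-Processing'));
//   opsPerSecond(() => document.querySelectorAll('span.u-Processing'));
const rate = opsPerSecond(() => Math.sqrt(12345));
console.log(rate > 0); // true
```

Micro-benchmarks like this are sensitive to JIT warm-up and timer resolution, which is one reason results vary so much between browsers and runs.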

And true to Tom's essay, when I tried the same tests across different browsers on a different platform, the results varied widely.

But like anything, it really depends on the page you're running it on. I did find the test case a little contrived since it was based on a very small portion of HTML, so I attempted the same type of test using HTML extracted from a typical page generated from APEX to see how many entries it's actually sifting through.

To give you a sense of scale, I ran some jQuery statements in the browser console of an even bigger page, one including a region selector so plenty of regions & reports.

$('*').length

This returns the number of elements on the page you could identify with a jQuery selector. In my case it returned about 3000.

There are only 148 spans:

$('span').length

And only one spinner:

$('.u-Processing-spinner')
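If you want to run the same counts without jQuery, the native DOM equivalents are standard methods. The helper name countSelectors is mine, for illustration; the DOM calls themselves are real.

```javascript
// Native DOM equivalents of the jQuery counts above.
function countSelectors(doc) {
  return {
    all: doc.querySelectorAll('*').length,          // jQuery: $('*').length
    spans: doc.getElementsByTagName('span').length, // jQuery: $('span').length
    spinners: doc.getElementsByClassName('u-Processing-spinner').length
  };
}

// Guarded so this sketch is harmless outside a browser; in a browser
// console on an APEX page, pass the real document object.
if (typeof document !== 'undefined') {
  console.log(countSelectors(document));
}
```

getElementsByTagName and getElementsByClassName are the kinds of fast native lookups that selector-performance advice of that era leaned on.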

Considering the number of operations per second browsers are capable of, these numbers may seem small, but everything adds up. It's worth keeping up with best practices to trim performance wherever possible.

So the lesson I read out of this is: if you're developing APEX applications and you know basic jQuery mechanics, or even just use jQuery selectors in your dynamic actions, then perhaps stick with a readable, logical option that you're comfortable with.

Either that or I still have a lot to learn about jQuery performance ;p

If anyone has good, current resource recommendations on the topic, I'd be happy to hear. Particularly if it relates to use in the #orclapex environment.

Yes, maybe :) Everyone, us included, tends to want simple answers to questions about systems we don't fully understand, so generally, it ALWAYS depends on more factors than we currently know of. Perhaps the difference between good developers and poor ones is curiosity: the drive to want to understand the details of the systems we're manipulating.