Good question, but always remember: performance is not everything until it is the only thing left to worry about. Only start worrying about performance if you're sure you absolutely need to save as much as possible and your design allows for it.
–
BoltClock♦Jun 27 '12 at 3:32

1 Answer

At runtime, an HTML document is parsed into a DOM tree containing N elements with an average depth D, and the stylesheets applied contain a total of S CSS rules.

Elements' styles are applied individually, meaning there is a direct relationship between N and overall complexity. Worth noting, this can be somewhat offset by browser logic such as reference caching and recycling styles from identical elements. For instance, the following list items will have the same CSS properties applied (assuming no pseudo-classes such as :nth-child are involved):

<ul class="sample">
  <li>one</li>
  <li>two</li>
  <li>three</li>
</ul>

Selectors are matched right-to-left to determine a rule's eligibility: if the right-most key does not match a particular element, there is no need to process the selector further and it is discarded. This means that the right-most key should match as few elements as possible. Below, the p key will match more elements, including paragraphs outside of the target container (which, of course, will not have the rule applied but will still result in more eligibility checks for that particular selector):

.custom-container p {}
.container .custom-paragraph {}
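A toy sketch of that filtering step may make the difference concrete. This is not real engine code; the mini-DOM, the element representation, and the `matches_key` helper are all made up for illustration:

```python
# Hypothetical mini-DOM: (tag, classes) pairs for all elements on a page.
elements = [
    ("div", {"custom-container"}),
    ("p", set()),                  # paragraph inside .custom-container
    ("div", {"container"}),
    ("p", {"custom-paragraph"}),   # paragraph inside .container
    ("p", set()),                  # unrelated paragraph elsewhere
]

def matches_key(element, key):
    """Test one element against a single right-most key (tag or .class)."""
    tag, classes = element
    if key.startswith("."):
        return key[1:] in classes
    return tag == key

# ".custom-container p": the right-most key "p" matches every paragraph,
# so each one triggers further (ancestor) checks before it can be discarded.
candidates_tag = [e for e in elements if matches_key(e, "p")]

# ".container .custom-paragraph": the class key narrows candidates at once.
candidates_class = [e for e in elements if matches_key(e, ".custom-paragraph")]

print(len(candidates_tag), len(candidates_class))  # 3 1
```

With the tag as the key, three elements survive the first check; with the specific class, only one does, so the engine has far less left-hand matching to do.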

Relationship selectors: a descendant selector requires up to D elements to be iterated over. For instance, successfully matching .container .content may take only one step if the elements are in a parent-child relationship, but the DOM tree may need to be traversed all the way up to html before an element can be confirmed a mismatch and the rule safely discarded. This applies to chained descendant selectors as well, with some allowances.

On the other hand, a > child selector, a + adjacent sibling selector, or :first-child still requires an additional element to be evaluated, but only has an implied depth of one and will never require further tree traversal.
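The traversal-cost difference between the two cases can be sketched as follows. Again, this is a made-up parent-pointer DOM, not how an engine actually stores the tree:

```python
# Toy parent-pointer DOM node for illustrating selector traversal cost.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def descendant_match(node, ancestor_name):
    """'ancestor node': may walk all the way to the root (up to D steps)."""
    steps = 0
    current = node.parent
    while current is not None:
        steps += 1
        if current.name == ancestor_name:
            return True, steps
        current = current.parent
    return False, steps  # full walk to the root before the rule is discarded

def child_match(node, parent_name):
    """'parent > node': a single check, implied depth of one."""
    return node.parent is not None and node.parent.name == parent_name

# html > body > container > p
html = Node("html")
body = Node("body", html)
container = Node("container", body)
p = Node("p", container)

print(descendant_match(p, "container"))  # (True, 1): matched in one step
print(descendant_match(p, "missing"))    # (False, 3): walked up to html
print(child_match(p, "container"))       # True: single comparison, no walk
```

The mismatch case is the expensive one: the descendant walk only stops at the root, while the child check costs the same single comparison whether it succeeds or fails.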

The behavior definition of pseudo-elements such as :before and :after implies they are not part of the RTL paradigm. The logic behind this assumption is that there is no pseudo-element per se until a rule instructs for it to be inserted before or after an element's content (which in turn requires extra DOM manipulation, but no additional computation to match the selector itself).

I couldn't find any information on pseudo-classes such as :nth-child() or :disabled. Verifying an element's state would require additional computation, but from the rule-parsing perspective it would only make sense for them to be excluded from RTL processing.

Given these relationships, the computational complexity O(N*D*S) should be lowered primarily by minimizing the depth of CSS selectors and keeping right-most keys specific, as described above. This will result in quantifiably stronger improvements than minimizing the number of CSS rules or HTML elements alone.
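A back-of-the-envelope calculation with entirely made-up numbers shows the scale of that product and why trimming D pays off:

```python
# Hypothetical page: N elements, average depth D, S rules (numbers invented).
N, D, S = 1000, 10, 100

worst_case_checks = N * D * S   # upper bound on matching work: 1,000,000

# Halving the depth walked per rule (shallower selectors) halves the bound:
shallow = N * (D // 2) * S      # 500,000

# Removing 10% of the rules only trims the bound by 10%:
fewer_rules = N * D * (S - 10)  # 900,000

print(worst_case_checks, shallow, fewer_rules)
```

Since the three factors multiply, the biggest win comes from whichever factor selector structure lets you cut the most, and on a typical page that is the traversal depth per rule rather than the raw rule or element count.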

Shallow, preferably one-level, specific selectors are processed faster. Google takes this to a whole new level (programmatically, not by hand!): three-key selectors are rare, and most of the rules in its search results rely on a single class or id key.
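As a hypothetical illustration (these are invented selectors, not Google's actual rules), such shallow rules look like:

```css
/* Single-key rules: the right-most (and only) key accepts or rejects
   an element immediately, with no ancestor traversal at all. */
#header { height: 22px; }
.result-title { font-size: 16px; }
.spacer { margin-top: 1em; }
```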

In Selectors, all pseudo-elements (not just ::before and ::after) are subject to the same rule: they may only be applied to the subject of a selector and are only evaluated after selector matching is completed (see w3.org/TR/selectors/#pseudo-elements). From CSS1 to CSS3, this is always the key selector; in CSS4, however, this may change.
–
BoltClock♦Jun 27 '12 at 3:48


RTL parsing is an implementation detail and it may vary from engine to engine, but the general concept of starting at the key selector and working backwards is agreed upon among vendors. Which simple selectors in each compound selector are evaluated first seems to be something only the source code can answer... more on that here.
–
BoltClock♦Jun 27 '12 at 3:50

@BoltClock: good point; I'd be willing to speculate that the CSS4 parent selector will effectively be a pseudo-class with a has-children condition; otherwise there could be a lot of redundant matching cycles. As for code-specific implementations, they are probably better off sticking to the suggested behavior; a notable example of messing it up is IE7's caching of :first-child references.
–
o.v.Jun 27 '12 at 4:51