Authoring Critical Above-the-Fold CSS

The following is a guest post by Ben Edwards. I saw Ben tweet about a simple Sass @mixin that allowed you to designate bits of CSS as being "critical" - the idea being to load that critical CSS first and defer the loading of the rest of the CSS until later. A clever idea, and one that is getting very popular in the web performance crowd. I thought I'd get Ben to introduce these ideas in more detail for us.

Google PageSpeed Insights and my web pages; it was a match made in heaven, until things changed... PageSpeed started telling me I needed to optimise my CSS delivery, that my CSS files were render-blocking, that none of the above-the-fold content of my page could render without waiting for those files to load, and that I should in-line the critical portions of those files directly into my HTML.

Go home PageSpeed, I cried, who in their right mind wants a mass of CSS in their HTML? I'm a legitimate professional, I have a workflow, don't you know? I scoffed.

This had long been my standpoint until I read the following tweet:

I'd like to see a site like CSS Zen Garden, but where developers try to make the same responsive site score better on webpagetest.org
Scott Jehl, June 12, 2014

I've long committed myself to getting my web pages the best possible scores from webpagetest.org, and that required a change of workflow, so why shouldn't I change it for PageSpeed? Now, if you're already using Google's mod_pagespeed module, put your feet up and give yourself a pat on the back as the module has you covered. For those of you like myself who aren't, here's how I went about it.

Here comes the science

To solve the problem, I first needed to understand what PageSpeed was telling me. External stylesheets (read: those included via link tags) are render-blocking. This means that the browser won't paint content to the screen until all of your CSS has been downloaded. On top of that, if the amount of data required to render the page exceeds the initial congestion window (typically 14.6kB compressed), additional round trips between the server and the user's browser are required. This all adds up to extra network latency, and for users on high-latency networks, such as mobile, it can cause significant delays to page loading.

PageSpeed's recommendation is to split your CSS into two parts; an in-line part that's responsible for styling the above-the-fold portion of the content, and the rest, which can be deferred. Now before we get hung up on whether the fold exists or not, let's just agree that anything we can do to get our data to our users as quickly as possible is a good thing, right?
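In practice (a rough sketch; the file name, rules, and markup here are purely illustrative, and Filament Group's loadCSS does the deferral part more robustly), that split looks something like this:

```html
<head>
  <!-- Critical, above-the-fold rules are in-lined so the first paint
       doesn't wait on a network request (placeholder rules) -->
  <style>
    body { margin: 0; font: 100%/1.5 sans-serif; }
    .masthead { background: #333; color: #fff; }
  </style>
  <script>
    // Defer the full stylesheet: a link appended from script after
    // onload no longer blocks rendering
    window.addEventListener('load', function () {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/noncritical.css'; // hypothetical path
      document.head.appendChild(link);
    });
  </script>
  <noscript><link rel="stylesheet" href="/css/noncritical.css"></noscript>
</head>
```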

Determining what is critical

Determining which portions of my CSS were critical required inspecting my web pages at "mobile" and "desktop" sizes, then taking a snapshot of the CSS rules applied to the elements visible in the viewport. This seemed like a daunting task, but fear not, some very smart people were there to help:

A workflow for the future

Excellent news! PageSpeed is elated! It no longer complains of render-blocking CSS and is satisfied that above-the-fold content has been given the priority it deserves, but in this modern world of CSS preprocessors and front-end tooling, a manual process like the one above just isn't going to hack it.

An automated approach

Those of you looking for an automated mod_pagespeed-style approach, and also familiar with Node (apologies to those who aren't, but here at Clock it's a massive part of everything we do), will definitely want to look into Penthouse and Addy Osmani's experimental Node module, Critical, both of which provide means for in-lining or manipulating critical CSS as determined via the PageSpeed API. Now while a fully automated workflow sounds like heaven, the one thing that irks me with the current tools is that they don't address the fact that any CSS rules that are in-lined are served again once the below-the-fold CSS is downloaded. In the spirit of sending as little data as needed to our users, this feels like an unnecessary duplication.
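To see why that duplication is avoidable, here's a deliberately naive sketch (this is not how Penthouse or Critical actually work; it assumes both files were minified identically, where a real tool would compare parsed rules) of stripping already-in-lined rules from the full stylesheet before serving it:

```javascript
// Naive sketch: remove rules that were already in-lined as critical CSS
// from the full stylesheet, so they aren't downloaded twice.
// Assumes both strings were minified the same way (textual matching only).
function stripInlinedRules(fullCss, criticalCss) {
  var criticalRules = criticalCss.match(/[^{}]+\{[^}]*\}/g) || [];
  var result = fullCss;
  criticalRules.forEach(function (rule) {
    result = result.split(rule).join(''); // drop every textual duplicate
  });
  return result;
}

// e.g. stripInlinedRules('a{color:red}b{color:blue}', 'a{color:red}')
// leaves only 'b{color:blue}' for the deferred file
```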

CSS preprocessors to the rescue

Making use of your favourite CSS preprocessor for authoring above and below-the-fold CSS seems like a no-brainer to me and is something the Front-end team is currently experimenting with at Clock.

New projects lend themselves very well to this approach, and critical and noncritical CSS could be authored via some well structured @import rules:
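For example (the partial names here are purely illustrative), each output stylesheet imports only the partials it needs:

```scss
// critical.scss – compiled, minified and in-lined into the <head>
@import "shared/variables";
@import "shared/typography";
@import "components/masthead";
@import "components/navigation";

// noncritical.scss – compiled to a file and loaded asynchronously
@import "shared/variables";
@import "components/comments";
@import "components/footer";
```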

Should your partials not lend themselves to this sort of structuring, Team Sass's conditional-styles Compass plug-in, Jacket, can come in very handy. For example, if your partial _shared.scss contained rules for both above and below-the-fold elements, the critical and noncritical rules could be wrapped by Jacket like so:
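A sketch of that (the selector and context names are illustrative; check Jacket's own documentation for the exact API, but the pattern is a `$jacket` context variable plus a `jacket()` mixin):

```scss
// _shared.scss – one partial feeding both stylesheets
.masthead {
  @include jacket(critical) {
    background: #333;  // needed for first paint
  }
}
.site-footer {
  @include jacket(noncritical) {
    background: #eee;  // safe to defer
  }
}

// critical.scss – only rules jacketed as "critical" are output
$jacket: critical;
@import "shared";

// noncritical.scss – only the deferred rules are output
$jacket: noncritical;
@import "shared";
```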

This approach also feels in keeping with the way much of the community is authoring media queries at a component level rather than in a global location, and could feasibly be used to define critical and noncritical CSS rules at a component level.

I hope that giving you some insight into the way I've handled authoring critical CSS will entice you to incorporate it into your workflow. And make sure to keep a close eye on the tools outlined above, as most are in the early stages of development and I expect exciting changes ahead.

Comments

I think I like the “critical vs. noncritical” way of thinking about this rather than “above the fold”. Above the fold kind of implies “let’s make sure the top of the website has all the styles it needs to look complete and lower down stuff can wait.” Maybe that’s OK, but I think that’s a lot harder to author and maintain than breaking CSS into “critical vs. noncritical” in which you make the call as you see fit as an author. Perhaps you indicate some top-of-page stuff as critical, but also anything layout or typography related so the page is structurally solid, usable, and non-jumpy as the rest of the styles come down.

The loaded nature of “above-the-fold” was one of the reasons I initially dismissed the technique – I’ll be the first to argue that in the traditional sense the fold isn’t really a thing, so treating it as critical and non-critical is the standpoint I’ve taken in my mind, even naming my CSS files accordingly.

Additionally, the vagueness of my “inspect at ‘mobile’ and ‘desktop’ sizes” line in the article is down to the fact that PageSpeed Insights treats the sizes as heuristics and doesn’t actually give you a specific set of dimensions to test against. It’s really a case of seeing what works for you, with your web pages, and with your workflow.

That’s the way it reads to me, and this is my house I’m afraid. The point about webpagetest.org scoring badly on itself is a little insane. It’s like saying a cheese knife doesn’t taste very good in a sandwich.

Fair point on not providing the results of a before and after. This is something I will follow up with once I have applied this change to Clock’s website in the coming week or so. And agreed that complicated and poorly written CSS isn’t going to be remedied by these tips; this needs to go hand-in-hand with as much performance enhancement as possible, as inline CSS alone does not a fast website make. But the point about webpagetest.org scoring badly is neither here nor there – if you’re passionate about it following its own advice, get involved and see if they’d like someone to help them – http://www.webpagetest.org/about

Hi ben — thank you for replying to this. I look forward to actual before and after results, perhaps in a follow up post.

Not sure what’s going on with Chris today (one post “ain’t smart”, this post is “a little insane”). He misses the point completely – they are not even following their own pedantic advice! But yes, it is “his house”, he can call people names all he wants.

How complicated is this going to be? ‘Above the fold’ depends on the viewport, right? sigh

Additionally, I don’t like the idea of inline CSS and splitting stylesheets this way. All along it has been about separating design from semantic HTML. Now Google tells us to combine it again? No. Really no.

I agree that above the fold is arbitrary and hard to define and thus a bit less useful here (see my first comment). But to just write off ideas like this because you have decided you don’t like it is weird. It’s a new idea that improves speed of websites and there are tools presented here that help make it less painful. Feel free to be skeptical or critical, but closing your eyes to it ain’t smart.

As I said in the article, I too was skeptical of this – heck, it’s almost been a year since Google PageSpeed Insights was updated and started reporting this stuff. The important thing is to not dismiss it without trying, and to come at it from the perspective that anything you do to improve your scores is only going to be a good thing for your users. But it really boils down to personal preference and workflow – decide how much you want to do, work out what works for you for a particular site, and how you need to change your workflow to make it easier in the future.

It sounded to me like Lars was being skeptical and critical, and was not “closing [his] eyes to it”. The passive-aggressive folksy “that ain’t smart” isn’t constructive either. He has reservations that are more substantive than “above the fold” terminology if you ask me.

This is generally a good blog, but this article and your comments fall short. Thanks for listening.

I love the technology but the concept scares me a little. Is this creating a better user experience or just gaining a better score (something that is a great educational challenge)? Either way the user knows that mobile sites don’t load instantly and probably doesn’t care. Even on WiFi, sites load slowly on mobile as layout and content become increasingly complex. Also, I wonder how this is affected by Mobile Chrome’s bandwidth management (or pre-compression)? I would assume this makes the issue redundant. However, if the situation were to arise that you needed instant load on the above-the-fold content, it’s good to know there is a solution. Preprocessors FTW

The effect of this rewriter can be observed on pssdemos.com before and after rewriting. You will need to reload the page a few times to see the effect, since this rewriter computes the critical rules for the page based on previous page renders.

I agree, it is a scary concept, and it also seems real dirty to go dumping CSS outside of its own file where we’re all so used to putting it. Ultimately it is about creating a better user experience; we shouldn’t rely on users expecting things to be slow on mobile, and if it takes a bit of gamification via scores from an online tool I’m up for that. As you rightly say, education is the key, and both Chris and I feel this is something that is likely to gain even more traction in the coming months. (Keep a weather eye on the folks at Filament Group, as they are doing great things in the performance space!)

That’s an interesting point RE Mobile Chrome, and something that I’ll make sure I test and report on when I implement the above changes to clock.co.uk. My initial thought is that in-lining CSS would make it render even faster rather than becoming redundant, but I’ll share my findings once I have them.

It was only scary when I saw it as writing code to perform better in a performance test, of course the beauty of the web is that every site is different (and every client) so while this solution is very specific it demonstrates a new technique. It may even lead to innovation amongst the browser vendors and even future web specifications – the idea of serving up critical CSS first is wholly new to me (except for loading CSS chunks inline with modules etc).
If you could find the perfect example it would make a great talk!

Thanks Kyle, and thanks for the link. I’ll have to admit that Filament Group caused me a few last minute additions to the article with all the great stuff they’ve been releasing lately, but this one slipped through the gap :)

Using the term “above the fold” is like taking a step backwards in the war for responsive web design. This is an old newspaper/print term. By using it, you’re encouraging other developers to use it, only further complicating things. There is no page fold.

Seeing this article littered with the term makes me want to completely dismiss the article, having had many conversations with traditional print designers about this misconception that there is a page fold.

I respect what you’re trying to do and I appreciate your talent and time spent writing this article, but please recognize how this term can be harmful to the progression of RWD. Especially in the agency world where it is chock-full of traditional print designers trying to learn the basics of web design.

Doesn’t matter what term is used, “above the fold” or just line-of-sight ;-)

In my surfing experience, loading a web page, it makes a big difference if I can see something sooner rather than later. Doesn’t matter what it is, as long as I am engaged with the page sooner rather than later, it makes a difference in my browsing experience, and will definitely affect my frustration level and “bounce” rate, sooner = happier.

Responsive design just complicates it a bit more where there may be elements that just don’t know if they will be in line of sight for some users.

Some websites choose to load all of the CSS last; that’s mostly useless to me, as seeing the text/html-only version is confusing. Thus, this article is a great and happy medium :-)

As I wrote the article I was aware that using the term “above-the-fold” would probably lead to contention, it was the main reason why I dismissed the suggestions off-hand when I first saw them. However, as this is the term PageSpeed Insights uses I felt that it was important to use it too.

I too am a Responsive Web Design advocate, and designing with “the fold” in mind is something that I would never advocate, because, as you put it, it simply does not exist as a unit of measurement.

However, when it comes to asset delivery, thinking about arbitrary “mobile” and “desktop” viewports and their arbitrary “folds”, determining the critical and non-critical elements across the pages in your site, and endeavouring to deliver the assets required to render those elements as quickly as possible is going to benefit all users, regardless of their exact device dimensions.

Critical CSS is all the stuff that you need right now to show the page. This could be stuff near the top of the page — say you have a really long page and a fancy footer, then you could delay the footer CSS. But it could also be CSS that is used for interactions such as clicking on a widget that creates an overlay.

Think of it as the CSS needed to display the initial state of the page. The user takes some time to interact with the page — either scrolling the page or activating things — and you can use that time to download the remaining CSS in the background.

Whether this is worth the hassle will depend on your site (and yourself / your team). If you have a simple design with a small amount of well-organised CSS, then maybe it’s not worth using these fancy tricks. But if you have a vast amount of widgety CSS bloat, it could make a big difference.

Of course, the most effective way to optimise your CSS performance is to write less CSS. ;-)

Totally agree that writing less CSS is the most effective way to optimise your CSS, and I’m not advocating that anyone should think their job is done by simply implementing the changes in this article without having done as much optimisation, minification and concatenation as possible as well.

To use my own website as an example, it has a small amount of CSS, and already loaded pretty quickly. But by inlining all the CSS, and asynchronously loading the web fonts, I’ve been able to further reduce load times.

Yep — I think it’s a good technique and this is the best article I’ve read on it. Inlining critical CSS is smart.

My comment was more directed at the “there is no fold” responses. It’s not just about the fold; as you said at the start, people get hung up on that term and start a (mostly) irrelevant discussion.

And of course you’re right — almost any site, even with lean CSS, could improve performance using this technique. Your site probably doesn’t “need” this optimisation very much, but since you put in the extra effort you made it even faster.

I suppose what I’m getting at is that you can always tweak the performance. How far you go depends on the nature of your site, and also how much time/skill you have to spend.

I’m not in any rush to implement this on my site, because I don’t think it’s worth it for me at this time. But it’s great that I have this excellent article to come back to in the future!

Hmmmm, there’s a lot of hate towards this. But I think it’s just because it’s hard.

Determining above-the-fold, which we’ve all long since dropped, then having to use a third-party tool to determine what CSS is used for that portion of the site, then having to split that CSS off into something inline – something that can’t automatically be done on a site that’s built in a CMS using something like Sass.

The term ‘knackered’ comes to mind if the time comes when you ever need to change the CSS in some meaningful way.

I just ran Scott Jehl’s script on one of my company’s latest sites, and it determined that literally 50% of the entire stylesheet (by character count) was required for above the fold content. That’s a whole lot to be splitting off and putting inline. Partly because of reset styles – which no-one needs to mention in their demos.

This article could be completely rewritten to remove the above the fold reference and any reference to page speed, but the suggested solution was specific to that problem and is not applicable to every site.

However, the technique is actually very insightful and has numerous uses. It could be implemented to only load the CSS relating to the initial view or first page in its entirety, or relating to critical functionality as Mike suggested. Imagine if you have a complex web-app that has a relatively simple launch page and doesn’t share too much styling, or even a dynamic site that contains lots of CSS animations not used on the home page – if you vendor prefix them they can get huge!
It is often recommended that all JavaScript is loaded at the end of the DOM, but often it isn’t practical and you still need some in the head. The same could be said about critical and non-critical CSS, even if it still is 50% of the overall code. This article enlightened me to a technique I had not heard of before and it would be great to see how it pans out.

That’s one thing the article didn’t really mention, from what I read. There’s a lot of hang-up about the ‘fold’ term, but it would have been easier to simplify it to the first page. That’s what mod_pagespeed does anyway.

There are many cases where it would be more useful than a business website, but it’s one of those techniques that takes enough extra time to actually do – especially if the site is built in a CMS where injecting code isn’t as straightforward as modifying an HTML file, and if any of the CSS changes in the future – that I think that’s why a lot of people don’t like it.

If it were somehow automatic like many other enhancements – or provided as tangible a benefit for such little extra effort (assuming not everyone has the access to use something like mod_pagespeed) – I don’t think half of the people who dislike it would.

I also would like to echo the negative feeling towards the term “fold,” as we all know it’s a moving target not worth chasing or referencing.

The average user understands that websites need to load, and because they are used to this on many different speeds (due to the varying devices used to consume the internet), are less likely to leave the page for an average page load time. I am having a hard time understanding what actual benefit there would be to using this tactic.

The average user understands that websites need to load, and because they are used to this on many different speeds, are less likely to leave the page for an average page load time.

I’ll echo that this is very dangerous thinking. It feels like defensive self-validation to me. It feels like “Don’t worry about performance, people understand sites take time to load, whatever I’m doing right now is fine.” And you’ll be validated by continuing to run a business.

There is an alternative route though: care about performance, learn new things to make sites faster, reap the rewards of speed.

I understand your concern. I don’t mean “Don’t worry about performance.” I mean, don’t sacrifice your code’s integrity for a minuscule bump in efficiency and score. The cost outweighs the benefit in most cases, unless you have an already inefficient site. If you’re gaining a significant amount of speed, there is probably something else wrong.

This is a wonderful explanation of critical CSS that addresses the “above the fold” performance bogeymen we’ve all come to ignore. Sometimes, all of this seems a bit like building superb bandaids and ignoring the causes of our wounds.

But I’m sold for now. In fact, I’m working on a website revamp at the moment with some of these techniques and will report back the results. We’ll be rendering inline CSS on content heavy pages that analytics tells us are also first stops. The page’s components will tell the compiler which CSS is needed. We’re also experimenting with asynchronously loading the site’s most-shared CSS into a stringified localStorage object which also might then allow individual components to place their CSS in there as well. It’s a weird way to leverage our five megabytes per origin localStorage. Have you experimented with similar techniques? Are post-processing optimization techniques as stable?

loadCSS seems like a needless duplication of information, i.e. the filename, and needs extension for every attribute you might like on your link. To a zeroth approximation (which I only briefly tested in FF and Chrome), how about putting class="stylesheet deferred" on your noscript and going along the lines of this:
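The idea, roughly (a sketch of the commenter’s suggestion; the path is illustrative): without JavaScript the noscript content applies as normal, and with it a few lines of script lift the links out so they load without blocking:

```html
<noscript class="stylesheet deferred">
  <link rel="stylesheet" href="/css/noncritical.css">
</noscript>
<script>
  // When scripting is enabled, <noscript> children are parsed as text,
  // so textContent hands back the raw <link> markup to re-parse.
  var wrappers = document.querySelectorAll('noscript.stylesheet.deferred');
  for (var i = 0; i < wrappers.length; i++) {
    var div = document.createElement('div');
    div.innerHTML = wrappers[i].textContent;
    document.head.appendChild(div.firstElementChild);
  }
</script>
```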

I don’t see this as “incoming css”. The point of separating style and content is in making the workflow clear for the dev. And in this case, if I understood correctly, the dev sees no inline css. The dev simply marks a few things critical and that is it, the automaton does the rest.

Good article, and interesting discussion we have here. I have a few points I would like to add (I wrote one of the tools mentioned in the article):

Yes, there is no “fold”, but don’t get hung up on that. We’re just trying to find an easily defined breakpoint that we can use to automatically sort out what to classify as critical css and what not. Even though I understand the thinking behind wanting to manually say what’s critical and what’s not, like Chris Coyier said – in my opinion it’s just not maintainable. Do you really want to have to go back and revise all your CSS and HTML as soon as either of them changes? Or do you think it’s okay to have your site accidentally flash unstyled content, before the rest of your CSS kicks in?

Every user and device shows a different number of pixels on initial load (i.e. “the fold”) – but we really don’t have to focus too much on that here. All we want to do is to ensure that we at least load everything visible on the screen during load, and ideally as little else as possible. If you really feel strongly against using a “fold” breakpoint – just set a ridiculously large breakpoint (9999px) in any of these tools, and you will still get the benefit of cutting off all CSS not being used on the current page. Your page will still start to render much faster.

Someone mentioned being able to shave off 50% of their CSS (as the remaining critical CSS) using these techniques. I have now generated critical path css for quite a large number of requests via the online version of my tool, and on average the saving I have seen is closer to 90%. Obviously it depends on how DRY your CSS is, and how large your site is, but for any large site there’s definitely more than 50% to save in terms of css file size. (On the same point: whether you save 50% or 90% – if the full CSS for your site is smaller than ~14kb minified, then I wouldn’t really bother with critical path CSS.. just inline your full CSS, and you will see the same performance improvement).
In terms of how much harder these kind of techniques make our jobs.. I think thanks to the community we do have on the web, it doesn’t need to be that hard. If you use a task runner like grunt or gulp, just set one of the tools mentioned in the post up in your build, have it automatically generate critical path css for any pages you want, set it up in your template language to check whether a critical path css file exists for the page – if so just output it in a style tag! I wrote my tool to completely automate this task, so you’d only have to set it up once and never look back. I’m going to write a tutorial on every step you need to go through to get there, perhaps people will find that helpful.
As for where we are right now with how to deliver CSS to our users while maintaining good performance.. I, too, look forward to HTTP2, and letting the browser/server handle some of this stuff for us. But when that happens, you can be sure that we will find new ways to crank just a little bit more performance out of our sites. :)

Someone mentioned including CSS needed for user interaction in the critical CSS.. I would strongly advise against that. The job of the critical path css is to allow the browser to start rendering the page as quickly as possible. Putting stuff in the critical path css that’s not part of this first render is.. counterproductive. Focus on general page load optimisation instead, so that your full CSS (and perhaps needed JS) can load as soon as possible.

I would even argue that something like icon fonts should ideally not be part of the critical path css.. If you want to take this further, strip any such rules out of your CSS before passing it to a tool like Penthouse, and get even better performance. I’m considering allowing you to pass in a “blacklist” of selectors you want to exclude from the critical path css…

Someone mentioned including CSS needed for user interaction in the critical CSS.. I would strongly advise against that.

I think you mean me. I intended the opposite idea: critical CSS should exclude user interaction CSS. This is probably the safest CSS to exclude, because it’s not dependent on an arbitrary viewport height, and instead depends on people not making lightning-fast interactions with the page.

This is very bad practice and an even worse recommendation from Google. We found a client who served his external sheets up differently and accomplished the same thing. This violates all principles of clean HTML and is a dev and accessibility nightmare.

I often wonder why, when I am subscribed to a post on this site, I often see replies which are meant to be in direct reply to someone else… posted as a brand new response. Then I get an email notifying me of something I want to reply to and I don’t see that comment on the site minutes after it was added. You might wanna look into that, Chris… because now I’m starting to understand why responses might not seem to be threaded properly.

Anyways I just wanted to say… it seems it might be helpful for some of the readers to learn the difference between embedded vs inline styles. There is a difference.

Why people insist on saying stuff like “principles of clean HTML” is beyond me. Sometimes rules must be broken to achieve other goals. If you follow life living strict rules then you may as well be a robot. Humans evaluate situations and can bend rules to best suit the circumstance.

I’m calling trolly bs on this too. If you wanna be a naysayer, either do it with some data and facts to back it up, or do it on your own blog.

What data do you want? The CSS spec and how it works? Inheritance is the basis of CSS, and that is broken with this method. There is no good use case for breaking it except some CMSes, and that is only because you do not have time to correct it.

And me – I have done CSS since it came out and hand coded HTML since before that circa 1998. I have done accessibility coding since 2004 and I have worked with SEO since 2005.

It is a fine idea if you accept the idea that inlining CSS is ok. I do not, so this type of posting will just create more code nightmares I will tell people to remove – especially because you can meet the page speed recommendations without doing this.

You only MacGyver a site when you don’t know better or have no other options. You have other options, so there is no need for the MacGyvering.

PS Nothing trolling about telling someone their idea is poor. But sadly it seems this is a blog for think alike minds only.

Good article. I’m surprised no-one here has mentioned that the arbitrary screen size for the critical CSS results in a FOUC (a flash of unstyled content!). When I tried loadCSS on my site, the layout was broken for the bottom fifth on a 24″ display. We know what’s going on, but the average user may be confused why the site doesn’t load properly, and tries refreshing etc.

Is nobody else encountering this? Do you all get a styled viewport at PC monitor sizes?

Nick, this is the “fold” problem with generating critical css – finding the sweet spot of how much css to include. I know even my tool would by default not include enough css to properly paint the height of a 24” monitor – but you can pass in your own (larger) dimensions to the tool to generate a more inclusive critical css.

Theoretically, if you know on the server what the user’s viewport dimensions are, then you could create and serve more dimension-specific critical css (small, medium, large).

Hi Nick. It’s true that there is a little trial-and-error involved to determine what works best across all the pages on your site, and as Jonas says, testing against a larger viewport as well as a small one will help with this.

NOTE to those who do not seem to know: you cause SEO issues with the head technique (code vs text ratio), and if someone is not careful you break CSS inheritance as well. Plus you create a dev nightmare (as you have to find the code in each page), and an accessible user cannot add their own stylesheets (as they typically do) because the browser cannot apply their styles over your inline ones. This creates way more issues than it solves, and those issues, again, can be solved without it.

* responsive / media-query-based webpages
* Web Components
* currently commonly used build tools
* asynchronous CSS over a network request, like Font-Face and/or the main logo, encouraging the usage of inline data:url when the font, as an example, is considered critical
* the fact that SPDY and newer network protocols should solve parallel imports, so there’s no reason to split into chunks since everything split out can be downloaded within a request
* the new HTML link import
* JS runtime-added styles for old-fashioned scripts

I think overall waiting a bit more for a complete experience instead of wasting time sub-splitting is a win for everyone, 100 extra ms are kinda nothing in the real world, don’t go too maniac with these techniques, there’s always some mobile phone in some country that will take 10 seconds to see anything anyway :P

Andrea,
While this particular task should get easier in the future, that’s no reason not to do our best in the meantime. The difference of doing this is not 100ms, it’s closer to 1s on average, depending on what you’re comparing it to, and how large your css is. And that’s 1s for desktops on a cable connection.

On mobile the benefits are even greater, as HTTP requests are even more expensive. If you can manage to remove all HTTP requests from your HEAD and fit your critical CSS inside the initial congestion window (full HTML size < ~14.6kb), then you’re most likely saving seconds on your render start time.

Why so little empathy for people on mobile phones in other countries? If their connections are bad, then bad performance hits them the worst. If your site took less time to load, perhaps they would consider starting to use it. ;)

–

Regarding your list:
* media queries, font-face, data-urls … – All of these are handled automatically, at least in my critical-path CSS generator. If it’s in the CSS and used "above the fold" on the page, it will be part of the critical CSS.
* as for being hard to implement with current build tools – what are you referring to here? Generating the critical CSS should be easy using Grunt, Node, Gulp, or your own task runner. Are you talking about automating the setup to get the critical CSS into your HTML?

It depends on your comparison. The HTTP request(s) make up the biggest portion of the saving you can get for your start-render time – if you go from having basically any blocking <link> stylesheet in your head to just inlining a sanely sized (<~20kb) critical CSS, you should see a saving of at least about 1 second (more if your HTTP request(s) are large or slow in any way). Please note that this requires that you don’t leave any other blocking assets in your HEAD, like JavaScript, otherwise they will still be there as a bottleneck (and you’ll see no saving at all).
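Once the critical CSS is inlined, the full stylesheet is typically loaded with a small script so it never blocks rendering. Here is a minimal sketch (the function name is mine; the document object is passed in only so the helper can be exercised outside a browser, where you would simply pass `document`):

```javascript
// Minimal sketch of async stylesheet loading: with the critical CSS
// already inlined in the <head>, the full stylesheet is appended to the
// end of <body> after first paint, keeping it off the critical path.
// `doc` stands in for the browser's `document`.
function loadStylesheet(doc, href) {
  var link = doc.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  doc.body.appendChild(link); // non-blocking: added after initial render
  return link;
}
```

In a page you would call loadStylesheet(document, '/css/site.css') from a deferred script (the path is illustrative); production helpers such as the Filament Group’s loadCSS cover more edge cases.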

The second major saving breakpoint is reducing the size of inlined CSS to the point where the full HTML request (including your inlined css) fits in the initial congestion window. Because of this, if you’re comparing inlining a larger (full) css to inlining a critical css small enough to make this happen (full HTML <~14.6kb minified) then you will again see a great saving, especially on mobile (as the article explains). I don't have any exact numbers on this comparison, but it's definitely worth going for.

Finally, if you're just comparing reductions in the size of the inlined CSS, without passing either of the two barriers mentioned above, then the saving only consists of the smaller download for the HTML request and the faster parsing and rendering of the CSS, as it will have fewer rules. I've tried getting some numbers on this, but I've found that it varies a lot, as it depends so much on response and download times – server-side stuff. I would just suggest not chasing bytes in your inlined CSS unless you can achieve one of the two points above. If not, focus on something else, like compressing images or optimizing your JavaScript.

I don’t think that thinking in critical and non-critical modules/parts is a practicable way to build large high-performance websites. In most cases the problem is that you have developed a lot of modules and components for a website, and all of those components can be used on any template. Therefore, even if you think in critical and non-critical, you often end up with 30kb+ of critical CSS.

It might be better to simply split your CSS and your JS into, on the one hand, a base file that includes the main layout (the grid system, header and navigation) and all template layouts, and on the other hand a lot of small additional stylesheets and JS files for each component (each component can have one or more stylesheets/JS files). Those components can then be loaded at the point where they are needed. Using this technique you still end up with less blocking CSS and less unused CSS, but it’s a lot more developer-friendly.
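That base-plus-components split can be sketched as a small mapping from the components a template actually uses to the stylesheets it should emit (the component names and file paths below are made up for illustration):

```javascript
// Sketch of the "base file + per-component stylesheets" idea: every page
// gets the shared base.css; component stylesheets are added only for the
// components the template actually uses, deduplicated. Names and paths
// are illustrative.
const COMPONENT_CSS = {
  carousel: '/css/components/carousel.css',
  comments: '/css/components/comments.css',
  gallery: '/css/components/gallery.css'
};

function stylesheetLinks(componentsOnPage) {
  const hrefs = ['/css/base.css'];
  for (const name of componentsOnPage) {
    const href = COMPONENT_CSS[name];
    if (href && !hrefs.includes(href)) hrefs.push(href);
  }
  return hrefs.map((h) => '<link rel="stylesheet" href="' + h + '">').join('\n');
}
```

A template that uses only the carousel and comments components then emits just those two links plus the base file, rather than one monolithic stylesheet.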

The downside is that you could end up with a lot of HTTP requests for CSS, which can easily outweigh the savings from downloading fewer bytes.

This can be mitigated by a clever packaging system (I think Facebook does this). Of course, that increases complexity again. At some point you need to decide whether the complexity is worth the benefit. For Facebook or Amazon, almost certainly; for a small business website, almost certainly not.

The most efficient method depends on the website. If the site uses similar design across all (or most) pages, then it’s usually better to keep the CSS in one file (or a small number). If the site has very different designs — say, a “portal” site such as Yahoo — then it may be better to split CSS into modules.

Much of this ground has been covered long ago (in web time) by Steve Souders et al. The only thing that’s new here is the automation (or semi-automation) from these tools that inspect your page.

Completely understand the desired benefit, but I'm not sure it is practical in most cases. If your core styles (normalize, base, typography, navigation, etc.) all set the foundation for your module and helper CSS (meaning the latter depend on the former for at least a portion of their presentation), would it really be worth it to pluck those portions out of the core CSS for modules/elements in the critical path? I'll give it a try, but most small-to-medium projects might not see enough savings to justify the time. In many ways it does feel like a step back away from web standards. What, if anything, have people like Zeldman said about the idea?

Excellent point. PageSpeed Insights just assumes a single page load without any caching. Over time those extra bytes can add up. On a big site, that means a bigger bandwidth bill too.

I wonder if using cookies would help here? Basically, after the initial page load, set a cookie. In your website template, if the template sees the cookie, then the visitor already has the CSS from a previous page (assuming the inlined styles are part of the larger CSS file as well). You’d need a way to invalidate the “cookie” too, in case your CSS changes (which is usually signalled via the filename).
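A minimal sketch of that cookie check, assuming server-side templating in Node (the cookie-naming scheme and filenames are made up; keying the cookie on a fingerprinted CSS filename means a CSS change invalidates it automatically):

```javascript
// Sketch of the cookie technique: inline the critical CSS for first-time
// visitors; for repeat visitors (cookie present) emit a plain <link>,
// since the full stylesheet should already be in the browser cache.
// Keying the cookie on the fingerprinted filename invalidates it when
// the CSS changes. All names here are illustrative.
function renderCssHead(cookies, criticalCss, cssHref) {
  const cookieName = 'css-cached:' + cssHref; // e.g. /css/site.3fa9.css
  if (cookies[cookieName]) {
    return '<link rel="stylesheet" href="' + cssHref + '">';
  }
  // First visit: inline the critical CSS; the server would also set the
  // cookie on this response and load the full CSS asynchronously.
  return '<style>' + criticalCss + '</style>';
}
```

The server would set the cookie alongside the first response; because the cookie name includes the hashed filename, shipping new CSS produces a new name and the visitor falls back to the inline path once.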

It’s true that the total combined bytes of HTML and CSS of every page in your site would be larger with this technique, but you could more than make up for this by optimising, or maybe even removing, one of those gorgeous 2x retina-ready images we all love so much ;)

👋
