
Delivering a fast experience on the web usually involves reducing server response time, minifying CSS/JS/HTML, and optimising images and above-the-fold content. We can further reduce the latency caused by stylesheet loading by removing unused CSS rules before they are delivered to the client.

In this write-up, we’ll take a look at build tasks you can use to remove unused CSS in your pages. Before we begin, I thought it would be useful to share some success stories just to demonstrate that the tools here can be used to make a difference.

Added grunt-uncss at lunch time to my sites GruntFile, CSS file went from 115kb to 3kb! That’s -97.4% smaller!

Unused CSS is a particular problem when working with a heavy modern UI framework such as Twitter Bootstrap, Zurb Foundation or Adobe Topcoat. In the Bootstrap test below, the results of running a project through the Chrome DevTools Audits panel indicate that ~90% of the CSS rules are unused.

This is a problem that’s been previously highlighted by the PageSpeed team, who include removing unused CSS as part of their speed recommendations for the web:

“Before a browser can begin to render a web page, it must download and parse any stylesheets that are required to lay out the page. Even if a stylesheet is in an external file that is cached, rendering is blocked until the browser loads the stylesheet from disk. Often, many web sites reuse the same external CSS file for all of their pages, even if many of the rules defined in it don’t apply to the current page.”

If, only after measuring, you find that you have a lot of CSS bloat, you may want to remove this CSS at build time to reduce your overall page-weight.

The PageSpeed team solve this problem via mod_pagespeed, an excellent Apache module that tries to auto-optimize a number of best practices for you; however, this may not be something you can feasibly add to your site or project.

UnCSS

Giacomo Martino’s excellent UnCSS provides a solution for removing unused CSS from our pages. The process by which UnCSS removes the unused rules is as follows:

The stylesheets are concatenated and the rules are parsed by css-parse.

document.querySelector filters out selectors that are not found in the HTML files.

The remaining rules are converted back to CSS.
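If you'd like to try UnCSS directly from Node before wiring it into a build tool, the steps above correspond to a single call. This is a minimal sketch assuming the uncss npm package is installed; the HTML file names are placeholders:

```javascript
// Sketch of calling UnCSS directly from Node.
// Assumes `npm install uncss` has been run first.
var uncss = require('uncss');

// The HTML files UnCSS will scan for used selectors (placeholders).
var files = ['index.html', 'about.html'];

uncss(files, {}, function (error, output) {
  if (error) { throw error; }
  // `output` is the cleaned CSS: only rules whose selectors
  // matched something in the HTML files survive.
  console.log(output);
});
```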

Using grunt-uncss

grunt-uncss, a Grunt task I wrote, builds on top of UnCSS and can be dropped into your build process in just a few minutes.

If you haven’t used Grunt before, be sure to check out the Getting Started guide, as it explains how to create a Gruntfile as well as install and use Grunt plugins. Once you’re familiar with that process, you may install this plugin with this command:

npm install grunt-uncss --save-dev

Once the plugin has been installed, it may be enabled inside your Gruntfile with this line of JavaScript:

grunt.loadNpmTasks('grunt-uncss');

Use the grunt-uncss task by specifying a target destination (file) for your cleaned CSS. Below this is dist/css/tidy.css.
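A minimal Gruntfile sketch along those lines — the HTML paths are placeholders for your own pages, and dist/css/tidy.css is the cleaned output:

```javascript
// Gruntfile.js sketch using grunt-uncss.
// 'app/index.html' and 'app/about.html' are example paths only.
module.exports = function (grunt) {
  grunt.initConfig({
    uncss: {
      dist: {
        files: {
          // destination file : HTML pages to scan for used selectors
          'dist/css/tidy.css': ['app/index.html', 'app/about.html']
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-uncss');
  grunt.registerTask('default', ['uncss']);
};
```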

Using broccoli-uncss
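A Brocfile sketch for the Broccoli plugin, assuming it follows the usual Broccoli filter pattern (tree in, tree out); the option names here are assumptions, so check the plugin README before relying on them:

```javascript
// Brocfile.js sketch using broccoli-uncss.
// The `html` option name is assumed from the common Broccoli
// filter pattern; verify against the plugin's README.
var uncss = require('broccoli-uncss');

var tree = 'app';

// Pass the HTML files UnCSS should scan for used selectors.
tree = uncss(tree, { html: ['index.html'] });

module.exports = tree;
```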

Using brunch-uncss

An UnCSS plugin for Brunch by Jakub Burkiewicz is similarly available. Options can be specified using config.plugins.uncss in your Brunch configuration file, and a multi-file setup per our earlier examples could be set up as follows:
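As a sketch, a brunch-config.js along those lines might look like the following. The exact keys under `uncss` are assumptions based on the plugin's description, so verify them against its README; the file names are placeholders:

```javascript
// brunch-config.js sketch for the Brunch UnCSS plugin.
// The `options`/`files` keys are assumed, not verified.
module.exports = {
  plugins: {
    uncss: {
      options: {},
      // HTML pages to scan for used selectors (placeholders).
      files: ['index.html', 'about.html']
    }
  }
};
```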

What about CSS pre-processors?

I’ve put together a small example of one way you can approach working with UnCSS (grunt-uncss) and a CSS pre-processor like Sass. In short, consider it a post-process build step which is run once all of the other core tasks in your setup have completed. This ensures that your Sass stylesheets are correctly built and we’re just cleaning what isn’t being used in there. This allows UnCSS to not require knowledge of the nuances of your pre-processor whilst still being useful.
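Concretely, that ordering can be expressed as a Grunt task alias. The task names besides `uncss` are placeholders for whatever your build already uses:

```javascript
// Sketch: run UnCSS as a post-process step, after Sass has compiled.
// 'sass', 'copy' and 'cssmin' stand in for your existing tasks.
grunt.registerTask('default', [
  'sass',    // compile .scss -> .css first
  'copy',    // move built assets into place
  'uncss',   // then strip rules unused by the built pages
  'cssmin'   // finally minify the cleaned stylesheet
]);
```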

Conclusions

Remember: if you think you might have a CSS bloat issue, don’t guess it, test it. Use the DevTools Audits panel to determine how much unused CSS you might have and don’t forget to check this across multiple pages.

There are still edge cases where UnCSS has yet to be improved, such as working with templates, depending on the setup of your project and when you’re pulling them in. If you find such an edge case, please feel free to tell us upstream.

If your project does contain a lot of unused CSS, tools like UnCSS can help you trim down your page-weight at build time and deliver faster pages to your users.

33 Comments

I’m wondering if UnCSS (or any other tool you may know) can remove overridden selectors. Let’s say you have the original bootstrap.css and also custom.css that overrides some of the css rules in bootstrap.css. Will these rules in bootstrap.css be removed?

If I’m understanding your question correctly: we will remove instances of selectors which are unused (even if overridden), as long as you aren’t explicitly excluding those selectors from being cleaned. If you find that this behaviour isn’t working as expected, could you file a bug over at https://github.com/giakki/uncss/?

I haven’t personally run into edge cases around this, however UnCSS does support excluding particular selectors from being stripped. This would allow you to craft a set of whitelisted items that you absolutely don’t want removed from your stylesheets.
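For reference, that exclusion is exposed through UnCSS's `ignore` option, which accepts literal selectors and regular expressions. A sketch, with made-up selector names:

```javascript
// Sketch: whitelisting selectors so UnCSS never strips them.
// '.modal-open' and the /^\.js-/ pattern are illustrative only.
var uncss = require('uncss');

uncss(['index.html'], {
  // Accepts literal selectors and regular expressions.
  ignore: ['.modal-open', /^\.js-/]
}, function (error, output) {
  if (error) { throw error; }
  console.log(output); // cleaned CSS, with the ignored selectors kept
});
```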

It’s designed primarily for static sites and single-page webapps however, as long as there’s a URL or set of URLs that you can pass to UnCSS for it to load through Phantom, theoretically this should be possible.

Good question. We were literally just discussing this on Twitter. Right now we do handle classes that are added to the page via JS, however we are unable to handle those added via user interaction. I can see two ways of solving this: either use WebDriver to control user interaction and tie that back to the UnCSS task, or alternatively dump any styled views/templates that are going to be displayed via interaction into a markup file (just for dev) and run that through UnCSS.

So, does uncss only support classes added by JS or markup added by JS as well? I have some markup that is added via JS (for a slider) and the CSS for that is not being included. It’s not user interaction that is creating the markup or classes, so I’m wondering why PhantomJS wouldn’t pick it up. I’m actually considering putting the ‘dynamic’ CSS stuff somewhere where it won’t get plucked out.

We created a similar transform for Assetgraph. It’s really dead simple when you already have the entire dependency graph populated. We solved the problem of templates, since they are already modeled. However, we never figured out how to safely detect when a class would be added to a DOM element through JavaScript, which means that such a transform is very unsafe to run on any code base where JavaScript is used to manipulate the DOM, because you can’t reason about what state the DOM might be in.

For entirely non-JS static pages this is fine, but we ended up not even putting this transform in the library because we were afraid people would end up losing CSS they needed without even knowing it.

This is indeed a valid concern and something that earlier versions of UnCSS didn’t handle very well. We generally work well now with pages which use JavaScript to manipulate the DOM or inject styles, but the level of accuracy varies based on whether custom timeouts are used for the capturing we do from PhantomJS. Capturing styles which are triggered by user interaction is a different problem and something which is still being explored. Some have suggested a setup using WebDriver to improve our ability to scrape what is added after page load, and I’m hopeful we’ll see at least experimental solutions to this problem surface in the next few months.

I ran into the exact same issues writing dr-css-inliner. Very tricky when people include js-classes based on feature detection (like modernizr).

I just added an option to force-include required selectors (matched by regexps). Selectors with pseudo-classes/elements (:before, :after, :focus, :hover, etc.) are automatically included if the selector is matched without them.

One of the benefits of using a framework like Bootstrap is you know that it has been tested on multiple browsers. For folks like me who are new to responsive design this removes a massive amount of work and allows you to share in the experience and expertise of a team who know much more than you do.

I do worry about the size of my css files for first time visitors who’ve not already got them in their cache so I tried a Firefox add on to clean out unused rules. This then resulted in all kinds of errors and the need to spend hours and hours trying to find and clean up the errors. In the end the need to then re-test on multiple browsers and lack of knowledge made me decide to go back to the stock stylesheets plus my design style sheet and leave well alone. I’ll just have to live with the issues this brings.

For my next project I’m using a much smaller framework, Skeleton, and I’m trying to remove what I don’t need from this as I go and add in features when I need them.

Great read, this is something everyone should at least try out and try to make part of their workflow.

One question though: how would this work for sites that run on (for example) WordPress? Where you don’t deal with (static) html files. Would you have to generate a complete list of all links and then run the task, or is there something that can do this dynamically?

I create WordPress websites on a daily basis and I’m always looking for ways to improve my workflow with it (especially because WP doesn’t care much about optimizations).

1) Go straight to the preprocessor files and remove things from there. 2) Be able to know all of the potential dynamic HTML that could be added and the context they would be added into. This can be difficult to know because going through templates doesn’t give any context for where they’d be placed, and Backbone views have their own element which could also have their own classes.

One way that might work for this is to attach unCSS to something like a Behat test, or just have it monitoring a browser session that a user is controlling, and pull results from that.

On the other hand, you could just inject a JS file that “runs” the app through all of the situations to generate all the potential dynamic HTML that you would need, but I think it would all need to be present at the same time, which could cause issues with things like nth-child selectors, though that’s sort of an edge case and in most cases wouldn’t actually mark a selector as a false positive for removal.

It should be able to handle dynamic views which are requested as a part of page load, but we currently don’t have a good way to capture styles injected on user interaction. I would say: try it out, your mileage may vary. At minimum the output should give you an indication about what classes in your stylesheets are likely not being used.

As others have mentioned, one potential pitfall is the stripping of styles that are used for dynamic content. Any content lazily loaded by XHR, for example, might find itself unstyled.

As such, I would advise that anyone using tools like this combine them with some kind of automated visual regression testing built on top of a functional test framework (like Selenium webdriver). It might even be possible at some point to have a tool like unCSS “latch on” to a running regression test, so that the logic for exploring styles and the logic for checking regression are one and the same. I would advise strongly against any solution that meant maintaining two separate but almost identical interaction scripts.

Another question that comes to mind is, can tools like unCSS be forced to ignore browser hacks? I imagine a naive fix would be to let authors specify regexes for excluded rules, such that both hacked selectors and the html.lt-ie-x .whatever pattern would still pass unfiltered.

My same concern. And I think we must do it manually on each page; i.e. I made a channel.min.css for the channel page and a homepage.min.css for only the homepage. In the Grunt task, I indicate which files each page needs to render correctly. About this plugin: how can we check the dynamic content that is generated by scripts?

At least I’ll try to use it in my new project although the regression test may become a big issue later.

Hope you will continue to update this article, Addy, it is really a great one!

I’m curious: How do you balance this against Bootstrap (and its ilk) being used via CDN? That is, the brunt of the style.css is already cached via a visit to another site (also using the same CDN and version), and then there’s naturally a bit of site-centric CSS separate from that.

Mind you, I understand that extra CSS also taxes the browser as it renders the page, but how much so? How impacted (read: compromised) or not is the actual user experience? On a large page? What about something smaller / typical?

That aside, it seems to me that the unCSS’ing is actually something that ideally should be the responsibility of the browser. The browser knows what it needs and when it needs it, so why doesn’t it have the intelligence – beyond brute-force (so to speak) caching – to fine-tune what it caches? And what if the browser pre-loaded (a la CDN) the top three to five frameworks?

My point being, this unCSS idea is interesting and helpful, but is it as practical and wonderful as it looks? As other comments have suggested? Or are we trying to squeeze blood from the wrong stone? In the wrong way? Why are we each individually working so hard to do things that feel browser-centric?

As I think you suspect, I imagine that in most projects unCSS probably wouldn’t provide much gain. It’s still not terribly common to build production sites on the back of large UI libraries, and if you’re using an OOCSS approach with atomic classes doing tightly defined things, it’s probably feasible to check manually whether a style is being used.

That being said, if it can be thrown into a deploy script, so that the minified CSS comes from a reliable unCSS output whenever you do a release, it might prove worthwhile over the lifespan of your project.

But can’t the browser be more intelligent? Why can’t it build its own “unCSS” from the main whole CSS and then cache that? And if that does not for some reason have what’s needed (i.e., there’s some Ajax-ian content update) then it can fall back to the whole CSS, get what it needs and then supplement its custom “unCSS” for a given site or page.

It seems to me the natural (i.e. it can be automated) and obvious point of attack is the browser, not the developer. In addition, browsers already have default styles, so why can’t we dictate what that is for a given site (e.g. Bootstrap 3.0.1)? While I wouldn’t expect a browser to have every version of every library, a “best of the best” of current industry standards hardly seems unreasonable in 2014.

Yes, of course, there’s a need for speed. As a result, the whole end-to-end system should be re-thought, rather than forcing devs to bend over backwards (or not).

The issue with large CSS is not the size of the file once it’s parsed by the browser. Most browsers are pretty quick at parsing style rules and creating a CSSOM. The issue is instead the number of bytes over the wire. Browsers obviously can’t ‘unCSS’ styles they haven’t yet seen, and they can’t request a subset of styles on subsequent requests (once the cache expires) because they don’t know whether the file has changed.

The notion of ‘bundling common UI frameworks and JS libraries into the browser’ has been mooted before, but seems rather a dead end to me. Once functionality is added, vendors will be unwilling to remove it, so you’re going to see lots of libraries accumulating in the browser’s “private stack” in even a short space of time. These could fall out of date very quickly indeed.