If there’s a request for an HTML page, fetch it from the network and store a copy in a cache (but if the network request fails, try looking in the cache instead).

For any other files, look for a copy in the cache first but meanwhile fetch a fresh version from the network to update the cache (and if there’s no existing version in the cache, fetch the file from the network and store a copy of it in the cache).

So HTML files are served network-first, while all other files are served cache-first, but in both cases a fresh copy is put in the cache whenever the network request succeeds. The idea is that HTML content will always be fresh (unless there’s a problem with the network), while all other content—images, style sheets, scripts—might be slightly stale, but gets refreshed with every request.
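
A rough sketch of a fetch handler following that strategy might look like the following. This is an illustration of the described approach, not the actual script; the cache name is a placeholder.

const CACHE_NAME = 'cache-v1'; // placeholder cache name, not from the original script

addEventListener('fetch', (event) => {
  const request = event.request;

  // Only deal with GET requests.
  if (request.method !== 'GET') {
    return;
  }

  // HTML: try the network first, and fall back to the cache if that fails.
  if ((request.headers.get('Accept') || '').includes('text/html')) {
    event.respondWith(
      fetch(request)
        .then((response) => {
          const copy = response.clone();
          caches.open(CACHE_NAME).then((cache) => cache.put(request, copy));
          return response;
        })
        .catch(() => caches.match(request))
    );
    return;
  }

  // Everything else: serve from the cache first, but refresh the cached copy
  // from the network in the background.
  event.respondWith(
    caches.match(request).then((cached) => {
      const fetched = fetch(request).then((response) => {
        const copy = response.clone();
        caches.open(CACHE_NAME).then((cache) => cache.put(request, copy));
        return response;
      });
      return cached || fetched;
    })
  );
});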

I got there in the end and the script seems solid enough. It’s a fairly simplistic strategy that could work for quite a few sites, but it has some issues…

Service workers don’t perform any automatic cleanup of caches—that’s up to you to do (usually during the activate event). This script doesn’t do any cleanup so the cache might grow and grow and grow. For that reason, I think the script is best suited for fairly small sites.
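
Cleanup in the activate event usually means versioning the cache name and deleting any caches that no longer match. A minimal sketch, assuming a hypothetical versioned name:

const CURRENT_CACHE = 'cache-v2'; // hypothetical versioned cache name

addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(
        keys
          .filter((key) => key !== CURRENT_CACHE) // any cache that isn't the current one
          .map((key) => caches.delete(key))
      )
    )
  );
});

Note that this only clears out old cache versions; trimming a single, ever-growing cache would need extra logic on top.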

The strategy also assumes that a file will either be fetched from the network or the cache. There’s no contingency for when both attempts fail. So there’s no fallback offline page, for example.
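
Purely as an illustration (this isn’t part of the script being discussed), a fallback could be added by pre-caching a dedicated offline page during install and serving it when both the network and the cache come up empty; the cache name and '/offline.html' URL here are hypothetical:

// Pre-cache a hypothetical offline page when the worker is installed.
addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('fallbacks').then((cache) => cache.add('/offline.html'))
  );
});

// Then, in the network-first branch for HTML, the chain could end with:
//   fetch(request)
//     .catch(() => caches.match(request))
//     .then((response) => response || caches.match('/offline.html'))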

In “Minimal Viable Service Worker”, @adactio writes that service workers have such clear benefits to users that all websites ought to have one, and offers a minimal script that anyone can copy and paste to add caching and offline capabilities: adactio.com/journal/13540#webdev

In the last post about this blog, I wrote about why I removed the service
worker that made it a progressive web application.

The way my blog handles CSS predates the wide availability of service workers.
Since CSS link tags are blocking, it’s good to give CSS a long cache time. In
order to do this, but still deliver fresh CSS without readers having to wait for
it, I give the CSS file a URI including a hash of its content. If the CSS
is updated, its URI changes, as does the href of the CSS link tag in each page
of this blog.
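
The post doesn’t show the build step, so purely as an illustration, deriving a content-hashed filename in Node might look something like this (the file names are hypothetical):

// Illustrative sketch: copy a stylesheet to a name derived from its content hash.
const { createHash } = require('crypto');
const { readFileSync, copyFileSync } = require('fs');

const css = readFileSync('styles.css'); // hypothetical source file
const hash = createHash('sha256').update(css).digest('hex').slice(0, 8);
const hashedName = `styles.${hash}.css`; // e.g. styles.3f2a9c1b.css

copyFileSync('styles.css', hashedName);
// This hashed name is what would go into the href of each page's CSS link tag.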

The server sends this header along with CSS it serves:

Cache-Control: max-age=315360000, public, immutable

The immutable directive tells the browser the file will never change. The long
max-age (ten years) is a fallback for browsers that don’t support immutable,
and public lets any proxies know that they can cache it too.

I instructed the browser to always revalidate the HTML rather than serve it
from its cache. Since the HTML was always fresh, updated CSS would be loaded
at most once per change. The server sends this header along with HTML:

Cache-Control: no-cache

This forces the browser to make a fresh request for the HTML each time. These headers are
still in place now (even since the move to Netlify).

Unfortunately this didn’t mesh well with the caching strategy of the service
worker I added. The worker would download the entire blog, HTML and CSS, the
first time a user navigated to it. The service worker was generated by
sw-precache, which I had set up to run every time the blog was updated. It
worked out what to add to and remove from its cache based on file hashes.
Since an update to the CSS† meant changing the href of the CSS link tag in
every HTML page, it removed not only the CSS but also all the HTML any time
the CSS changed. Worse, it proactively downloaded all the updated files.

The net effect was that minor CSS changes triggered a mass download of my blog. This
wasn’t a good use of server or browser resources, so I removed the worker.

I recently attended a Homebrew Website Club held by
Jeremy Keith. By chance he’d written a blog post on this
issue the day before, in which he provides a minimal viable service worker.

This service worker caches everything lazily (no precache). When an HTML page is
requested, it always tries the network first for a fresh copy, and falls back to
the cache when necessary. For everything else it hits the cache first, but gets
an update via the network in the background.

This resolves the CSS issue very nicely. Old CSS and HTML will be cached, so if
your Southern Rail train is stuck in a tunnel, you can still read
that blog entry about mixins you saw earlier and
are now bored enough to take another look at. If you’re stuck outside of a
tunnel and I’ve updated the CSS‡, the request for fresh HTML will
succeed, which will also bring in and cache the fresh CSS! For things like
images, which will probably never change, this also works well. If an urgent
change is needed to any file, it can still be given a fresh URL to cache-bust
it (though I doubt this’ll ever be necessary).

I’ll still need to do some work though. As mentioned in that post, cleanup
isn’t addressed. I’m happy for HTML to be cached indefinitely, but for the
CSS, one way to clean it up might be to remove it once it is older than every
HTML entry in the cache. For particularly large resources such as images, a
relatively short cache time and good alt text might be a sensible approach…
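
That CSS cleanup idea could be sketched against the Cache API using the date header of cached responses. The cache name and the .css / text/html checks below are assumptions for illustration, not code from this blog:

async function removeStaleCss() {
  const cache = await caches.open('my-cache'); // placeholder cache name
  const requests = await cache.keys();
  const entries = await Promise.all(
    requests.map(async (request) => {
      const response = await cache.match(request);
      const time = new Date(response.headers.get('date')).getTime();
      return { request, response, time };
    })
  );

  // Find the oldest cached HTML entry.
  const htmlTimes = entries
    .filter(({ response }) =>
      (response.headers.get('content-type') || '').includes('text/html'))
    .map(({ time }) => time);

  if (htmlTimes.length === 0) {
    return; // nothing to compare against
  }

  const oldestHtml = Math.min(...htmlTimes);

  // Remove any stylesheet cached before every HTML entry.
  await Promise.all(
    entries
      .filter(({ request, time }) => request.url.endsWith('.css') && time < oldestHtml)
      .map(({ request }) => cache.delete(request))
  );
}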

In the meantime, I’ve borrowed the service worker code from that post mostly
unchanged (most changes are just to align it with the style enforced by my
ESLint config). The one small addition is to allow server-sent events
(EventSource) connections. I use these in development to hot-reload my blog
when changes are made. The original script ignores all but GET requests.
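
Roughly, that guard, plus an exception for EventSource connections, might look like this (a sketch, not the exact code from either script; EventSource requests identify themselves with a text/event-stream Accept header):

addEventListener('fetch', (event) => {
  const request = event.request;

  // Ignore anything that isn't a GET request.
  if (request.method !== 'GET') {
    return;
  }

  // Let EventSource connections go straight to the network: text/event-stream
  // responses are long-lived streams and should not be answered from a cache.
  if ((request.headers.get('Accept') || '').includes('text/event-stream')) {
    return;
  }

  // ...caching strategy continues here...
});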

I’m in San Diego, where I’ll attend the W3C WAI Education and Outreach WG Face-to-Face meeting, and CSUN, the biggest accessibility conference. It’s always amazing to be able to work with my colleagues in one room and to meet so many accessibility experts in one place.

Beta: W3C/WAI Website – We managed to launch the beta for the new WAI site last week. There are still a few rough edges, but it is essential to get it in front of people. A lot of work from many people went into the site, from design and user testing to development. I made sure we can edit resources in their respective Jekyll projects on GitHub and then integrate them into one repository using git submodules. All repositories use one common theme, so changes to it are reflected in all resource previews, which are hosted on GitHub Pages.

Buttons: Designing Button States – Tyler Sticka on different aspects of button design. Sweating details like this can greatly improve the usability and accessibility of your website or application.

PWA: Minimal viable service worker – I don’t know enough about Progressive Web Apps to implement them correctly yet. However, Jeremy Keith’s article feels like a good starting point to learn more about them.
