A few months ago I blogged about a problem I was having with busting the http caching Rails does. That post had a pretty naive solution, and I wanted to provide an update to raise a bit more awareness of the problem and a better solution.

First, I’m a bit surprised that this isn’t a problem folks are talking more about.

The basic gist of the problem is: when you use Rails HTTP caching, via something like fresh_when in your controllers, simply deploying your application will break its styling for anyone who already has one of your pages cached. Let me show you what I mean with a super simple application.

All this application does is render a simple view, and makes sure to set an etag.

In your app the fresh_when likely wraps around an object and uses its updated_at timestamp. And here’s what my view looks like after it’s been rendered in my browser for a second time.
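The controller in question boils down to something like this. This is a sketch, not the exact code from the demo app: the model and key format are hypothetical, and ActionController's real fresh_when is stubbed out so the example runs standalone.

```ruby
require "digest/md5"

# Hypothetical stand-in for an ActiveRecord model. Rails embeds the
# record's updated_at timestamp in its cache key.
class FakePost
  def cache_key_with_version
    "posts/1-20240101120000"
  end
end

class PostsController
  attr_reader :etag

  def index
    fresh_when(FakePost.new)
  end

  private

  # Simplified stand-in for ActionController::ConditionalGet#fresh_when:
  # derive a weak ETag from the record's cache key, which embeds updated_at.
  def fresh_when(record)
    @etag = %(W/"#{Digest::MD5.hexdigest(record.cache_key_with_version)}")
  end
end

controller = PostsController.new
controller.index
controller.etag # a weak ETag like W/"<md5 hex digest>"
```

The point to notice: nothing in that digest changes on a deploy. The ETag is a function of the record alone.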

Notice the 304s that get sent back to my browser for the index action as well as the application.css file it depends on.

Next, let me simulate what a deploy would do. First I change one of my CSS files to make the body purple. Then I delete any old precompiled CSS files (many deployment setups, especially Heroku, don't keep the old files around), precompile the assets again, and restart my app.

Huh. All my styling is broken. The web inspector shows that Rails sends a 304 to the browser requesting the index action, so the browser uses its cached version of the page. However, that cached page references an application.css file that doesn't exist anymore. And even if that old application.css did exist, it would serve up the grey styles instead of the purple ones the page now requires.

In real life, this problem won’t resolve itself until the updated_at timestamp gets touched on the object my etag is based on.
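You can see why from the shape of the validator: as long as updated_at, and therefore the record's cache key, stays the same, the digest stays the same. Here's a simplified sketch (the key strings mimic Rails' cache_key_with_version format; the helper is mine, not Rails'):

```ruby
require "digest/md5"

# Simplified: Rails derives the validator from the record's cache key,
# which embeds updated_at, so a deploy alone never changes it.
def etag_for(cache_key)
  Digest::MD5.hexdigest(cache_key)
end

before_deploy = etag_for("posts/1-20240101120000")
after_deploy  = etag_for("posts/1-20240101120000") # deploy didn't touch updated_at
after_update  = etag_for("posts/1-20240301090000") # updated_at finally touched

before_deploy == after_deploy # true: the browser keeps getting 304s
before_deploy == after_update # false: only now does the cache bust
```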

This is a big pain.

In my previous post I ended up setting ENV["RAILS_CACHE_ID"] to a value that changes on deploys. I used Time.now, which is a bad idea in deployment environments with multiple servers/nodes: each server ends up with a different RAILS_CACHE_ID and generates different cache keys based purely on which server got the request.
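The multi-server problem falls straight out of that approach. The old initializer was essentially a Time.now one-liner, and since each server boots at a slightly different moment:

```ruby
# The naive initializer seeds the cache id at boot time.
server_a_cache_id = Time.now.to_f.to_s
sleep 0.01 # simulate a second server booting a moment later
server_b_cache_id = Time.now.to_f.to_s

# The two servers disagree, so cache keys differ per server:
server_a_cache_id == server_b_cache_id # false
```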

Another reason this wasn't a great solution: all your cache keys depend on ENV["RAILS_CACHE_ID"], so you'll also be invalidating things like your fragment caches, which is an awful performance compromise just to restart your application.

With this gem, an ETAG_VERSION_ID value gets appended to the input Rails uses to generate its etags. If you set that variable in an initializer and have it change on each deployment, you'll get busted HTTP caches.
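The effect is easy to demonstrate. Mix a deploy-scoped version id into the digest input and the etag changes on deploy even though updated_at didn't. This is a simplified model of the idea, not the gem's actual internals:

```ruby
require "digest/md5"

# Simplified model: digest the record's cache key *plus* a per-deploy
# version id, instead of the cache key alone.
def versioned_etag_for(cache_key, version_id)
  Digest::MD5.hexdigest("#{cache_key}-#{version_id}")
end

key = "posts/1-20240101120000" # updated_at unchanged across the deploy
old_etag = versioned_etag_for(key, "v41")
new_etag = versioned_etag_for(key, "v42") # version id bumped by the deploy

old_etag == new_etag # false: the deploy alone busts the HTTP cache
```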

For example, here’s what I could do if I deploy to Heroku. I’ll create a bust_cache.rb initializer:
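Something along these lines. This is a sketch, not the exact file: the releases endpoint and headers come from the Heroku Platform API, but the env var names (HEROKU_APP_NAME, HEROKU_API_TOKEN) and the lack of error handling are my assumptions.

```ruby
# config/initializers/bust_cache.rb
require "net/http"
require "json"

# Pull the newest release number out of the API's JSON response.
def latest_release_version(json_body)
  JSON.parse(json_body).map { |release| release["version"] }.max
end

# Ask the Heroku Platform API for this app's releases. The env var
# names here are assumptions; adjust for your setup.
def fetch_releases_json(app_name, api_token)
  uri = URI("https://api.heroku.com/apps/#{app_name}/releases")
  request = Net::HTTP::Get.new(uri)
  request["Accept"] = "application/vnd.heroku+json; version=3"
  request["Authorization"] = "Bearer #{api_token}"
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }.body
end

if ENV["HEROKU_APP_NAME"] && ENV["HEROKU_API_TOKEN"]
  json = fetch_releases_json(ENV["HEROKU_APP_NAME"], ENV["HEROKU_API_TOKEN"])
  ENV["ETAG_VERSION_ID"] = latest_release_version(json).to_s
end
```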

That little bit uses the Heroku API to sniff out the current release version and uses it as the ETAG_VERSION_ID. Now on every Heroku deploy, ETAG_VERSION_ID will change and your old etags will be invalidated.

Hope this helps some folks who might be having trouble with http caching. Of course, please let me know if you need any help using it or see any problems.