Devise is an excellent framework for strapping authentication features onto your Rails app. One very handy module, which provides session timeout features, is :timeoutable.

Being a responsible test-driven developer, you start writing tests to ensure your application behaves correctly when a user tries to perform an action that is not allowed after their session has timed out. But how do you simulate that 30 minutes have gone by? (The default is config.timeout_in = 30.minutes.)

A brief search of the nets offers a few pointers to overriding the Devise User#timedout? method, but that doesn’t really help a feature spec that needs to verify the user is redirected to the login page upon performing a session-protected action.

Here’s one solution:

Devise is built on top of Warden, so let’s see if we can’t leverage Warden’s test helpers to simulate our timed out user:
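The original snippet isn’t shown here, so here’s a minimal sketch of that approach. It assumes RSpec/Capybara feature specs, FactoryBot, and ActiveSupport’s time helpers; `dashboard_path` and the `:user` factory are hypothetical stand-ins for your app’s own routes and factories. We log in with Warden’s `login_as`, then travel past `config.timeout_in` so Devise’s timeoutable hook sees a stale `last_request_at` on the next request:

```ruby
# spec/features/session_timeout_spec.rb -- a sketch; adjust names to your app.
require 'rails_helper'

RSpec.feature 'Session timeout', type: :feature do
  # Make Warden's login_as/logout helpers and time travel available.
  include Warden::Test::Helpers
  include ActiveSupport::Testing::TimeHelpers

  after do
    Warden.test_reset!
    travel_back
  end

  scenario 'redirects to login once the session has timed out' do
    user = FactoryBot.create(:user)   # hypothetical factory
    login_as(user, scope: :user)

    visit dashboard_path              # hypothetical protected page
    expect(page).to have_current_path(dashboard_path)

    # Jump past config.timeout_in (30 minutes by default) so Devise's
    # timeoutable hook considers last_request_at stale.
    travel 31.minutes

    visit dashboard_path
    expect(page).to have_current_path(new_user_session_path)
  end
end
```

Because `login_as` goes through Warden, Devise still records `last_request_at` on the first visit, which is what makes the time jump trigger a real timeout rather than a stubbed one.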

By now you’ve heard all of the hype around Docker. No doubt you have begun forming your own opinion on whether that hype is deserved. (If you’re lacking in opinions, no doubt Reddit can help you find one.)

Regardless of where you land, I hope one thing all engineering teams can agree on is the need for continuous integration testing. If your team is not running CI by now, there really is no excuse other than bad engineering practice. With platforms like Jenkins available as open source and the widespread availability of cheap hosting, what again was your lead engineer’s reasoning for not maintaining a current CI?

When it came time to set up yet another Jenkins-based CI for a Rails web application, I just had to see if I could dockerize the server installation to take advantage of all that Docker has to offer. I am happy to say that the results were particularly fruitful, and I can now spin up a new Jenkins container via a service like Tutum (backed by DigitalOcean, AWS, etc.) in just minutes.

Are you getting those mysterious Airbrake notifications telling you a Timeout occurred trying to talk to Redis? Has it kept you awake at night, worrying about what your poor users see while your slave instance (hopefully) takes over for the master in your Redis cluster?

Worry no more! We’re going to proactively ping our current Redis server connection to see if it’s up, hoping to catch it napping before our users do. A ping command is available via the redis gem, but how do we get access to the current connection from our Rails app?

Here’s how we’ll schedule a ping every 30 minutes. I’m using Rufus, but you can use the scheduling gem of your choice:
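A sketch of what that scheduled job might look like, as a Rails initializer using rufus-scheduler. It assumes a redis-store-backed cache; the `:redis_store` arguments, `REDIS_URL`, and the error handling are placeholders for your own setup:

```ruby
# config/initializers/redis_ping.rb -- a sketch; adjust store arguments
# and error handling to your environment.
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '30m' do
  # Look up the configured cache store; redis-store keeps the raw Redis
  # client in the store's @data instance variable.
  store  = ActiveSupport::Cache.lookup_store(:redis_store, ENV['REDIS_URL'])
  client = store.instance_variable_get(:@data)

  begin
    client.ping  # raises (e.g. a connection/timeout error) when the server is down
  rescue => e
    Rails.logger.error("Redis ping failed: #{e.class}: #{e.message}")
    # Notify Airbrake, kick off failover checks, etc.
  end
end
```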

As long as your session and cache stores both use the same cache server (hopefully with different key namespaces, such as /sessions and /cache, respectively), you can retrieve the current Redis client connection from the @data instance variable of the ActiveSupport::Cache::RedisStore returned by ActiveSupport::Cache.lookup_store.

It offers the easiest configuration if you’re used to rvm as your Ruby version manager. Judging by some comments it handles rbenv well too, though I’m not so sure about chruby.

Anyway, there are several gems out there for this, but they all seem to offer configuration only for a virtual server that is added to /etc/nginx/sites-available (and then symlinked from /etc/nginx/sites-enabled).

What if you would like to customize Nginx global directives that live outside of an http or server block, and are therefore not inherited by your custom server directives?

Two such useful directives for tuning the performance of Nginx are `worker_processes` and `worker_connections`.

Here’s an example task to add to your config/deploy.rb file that you can customize as needed. In my case, my Ubuntu server installed a default /etc/nginx/nginx.conf that set `worker_processes` to 4 (too high, since the box has only one core) and `worker_connections` to 768 (too low for the box).
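A sketch of such a task, written for Capistrano 3. The sed patterns, target values, role names, and passwordless-sudo setup are all assumptions; tune them for your own box:

```ruby
# config/deploy.rb -- a sketch; values and paths are examples.
namespace :nginx do
  desc 'Tune global worker directives in /etc/nginx/nginx.conf'
  task :tune_workers do
    on roles(:web) do
      # One worker process per core on this single-core box.
      execute :sudo, "sed -i 's/^worker_processes .*/worker_processes 1;/' /etc/nginx/nginx.conf"
      # Raise the per-worker connection limit from the 768 default.
      execute :sudo, "sed -i 's/worker_connections .*/worker_connections 2048;/' /etc/nginx/nginx.conf"
      execute :sudo, 'service nginx reload'
    end
  end
end

after 'deploy:published', 'nginx:tune_workers'
```

Editing the global config in place like this keeps the gem-managed sites-enabled virtual server untouched while still letting a deploy own the global directives.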

We had two release branches; let’s call them v100 (current production) and v101 (next release candidate). A bug came up, and I squashed it on v101.

Someone then brought up that we should squash that same bug on v100 and release a patch v100.1 to production. Fine.

To squash the same bug on v100, I used `git cherry-pick` to grab the commit I made on v101 and apply it to v100. This worked as you would expect.

Here’s the bad part: when I next attempted to push v100 to the remote, I was prompted to merge changes. When I then pulled v100 from origin, I was presented with an entire set of commits from v101 performed after v100 had already been “frozen”!

I believe these additional commits (from v101) were pulled into v100 because of the way git uses the SHA not only to identify a commit, but also to find all of its preceding commits. Here’s a more in-depth discussion:

but the ActiveAdmin README’s recommendation to ensure that app/admin/dashboards.rb looked like the default turned out to be a red herring.

I noticed that a fresh ‘rails generate active_admin:install’ wanted to drop a new app/admin/dashboard.rb file. This new config file had all the jazzy new configuration syntax, so after copying my section configs over from dashboards.rb to dashboard.rb, renaming ‘section’ to ‘panel’, and removing the dashboards.rb file, I fired up my specs again. My newly styled dashboard looked great, but there was still a problem: my root ‘/’ path was no longer pointing at a valid controller. Say huh?

I could see in ‘rake routes’ that I had two routes for ‘/’: one from my manual route, and a mystery one that looked like the commented-out root_to configuration in config/initializers/active_admin.rb. Turns out some other folks had just encountered this: https://github.com/gregbell/active_admin/issues/2049

Following the advice there to move my manual root route above ActiveAdmin in routes.rb did indeed get me back in shape. Looking forward to AA 0.6.1…
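For reference, a minimal routes.rb sketch of that ordering. The app name and `home#index` controller are hypothetical stand-ins for your own:

```ruby
# config/routes.rb -- declare root before ActiveAdmin's routes so that
# your manual route wins when both try to claim '/'.
MyApp::Application.routes.draw do
  root to: 'home#index'    # hypothetical controller

  ActiveAdmin.routes(self)
  # ... the rest of your routes ...
end
```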