Ruby App Deployment with Nanobox

10 March 2017

Ruby is so fun. Seriously, of all the languages I’ve ever learned and used, none come close to matching the developer productivity of Ruby. I wish I could pretend to be a super responsible developer and say that was the primary reason I use it, but the truth is it’s just freaking fun. There is a weird satisfaction that I get with Ruby applications where I start to see them more as pieces of art to be admired than applications with functionality. Over the past decade, I can’t tell you how many times our team has sat down just to show off how clean or fancy some Ruby project is. It’s hard to describe the feeling; Ruby is just special.

So naturally, as early adopters of Ruby on Rails, we went all in and didn’t give much thought to how to run these applications in production. Ruby had us on a natural high, and Rails captivated our wildest imaginations. We just assumed running a Ruby app in production would be simple, sort of like our previous experience with PHP. We were, well… wrong. Needless to say, we spent the majority of the next few years studying Ruby web servers, forking vs. threaded models, and how to get these apps to scream.

Fast-forward about eight years and now our team has a completely new focus: Nanobox – A portable, micro platform for developing and deploying apps. Our mission is to help developers be more productive, and as a result, help organizations succeed. As Ruby is a language designed for developer productivity, it’s a natural fit. So much so, we’ve spent a significant amount of time developing a native Ruby development environment and deployment engine.

Ruby Deployment Options

So you have a Ruby app and you want to deploy it. At this point, most of the hard parts have already been standardized by projects like Rack. So really, the only decision left is which web server to use. I’ll recommend an option, but first let’s consider the available concurrency models:

Forking

A forking web server delegates the concurrency to the Unix/Linux kernel. Forking refers to a Unix process copying itself, and all of its memory contents, into a new process. In this model, the web server will “fork” the app into a new Unix process to handle each web request.

This is perhaps the most thread-safe model, as none of the processes share a memory space. Each web request does its own thing in an isolated process space, then exits after the request has been served. While this approach can be very fast, it generally requires more resource overhead.

If you know your app isn’t thread-safe, this is the recommended approach. If you’re not sure, but you’re using a new-ish Ruby framework, it’s probably fine. Unicorn is the most popular forking web server, and seriously, its implementation rocks. I’ve spent hours studying the Unicorn implementation and, to be honest, I learned most of what I know about Unix process management studying this project. It’s pretty boss.
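To make the model concrete, here’s a tiny Ruby sketch of the forking idea (not Unicorn’s actual code, and it won’t run on platforms without fork, like Windows or JRuby): the child gets a full copy of the parent’s memory, so nothing it changes is visible back in the parent.

```ruby
# Simulate a forking server handling one "request" in a child process.
counter = 0

pid = fork do
  # This code runs in a copied process with its own memory space.
  counter += 1            # only mutates the child's copy
  exit 0                  # the child exits after "serving" the request
end

Process.wait(pid)
puts counter              # => 0 -- the parent's memory is untouched
```

That isolation is why forking is the safest model for code that isn’t thread-safe, and also why it costs more memory: every worker carries its own copy of the app.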

Threaded

The threaded model leverages Ruby’s threads and fibers. Instead of forking a new process for each request, requests are handled in threads or fibers within a single process. The two are closely related; the key difference is scheduling: threads are paused and resumed by the Ruby VM, while fibers are paused and resumed explicitly by the application.
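A quick Ruby sketch of that distinction: the thread runs as soon as the VM schedules it, while the fiber only advances when the application explicitly resumes it.

```ruby
results = []

# A thread is scheduled by the Ruby VM; `join` waits for it to finish.
t = Thread.new { results << :thread_done }
t.join

# A fiber runs only when resumed, and pauses itself with Fiber.yield.
f = Fiber.new do
  results << :fiber_started
  Fiber.yield               # hand control back to the caller
  results << :fiber_finished
end

f.resume                    # runs the fiber until the Fiber.yield
results << :caller_turn
f.resume                    # runs the rest of the fiber

puts results.inspect
# => [:thread_done, :fiber_started, :caller_turn, :fiber_finished]
```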

For a long time, Rails apps were not thread-safe, so this model wasn’t common. Today it seems to be the most common. There are a few good products that implement this model, but I’ve had a lot of success with Puma.

Chances are, you’ll be just fine with Puma. I personally recommend it, and in this tutorial we’ll configure the production app to use Puma. If you’re interested in looking at alternatives, there is a great comparison of the available Ruby web servers here: A Comparison of (Rack) Web Servers for Ruby Web Applications

Configure the boxfile.yml

Nanobox apps are configured through a boxfile.yml at the root of the project. Mine will give me a Ruby runtime with a Postgres database in both local and production environments. It will also provide a web server running Nginx, which proxies down to Puma (more on that below).
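My full boxfile.yml isn’t reproduced here, but a minimal sketch along these lines could describe that setup (the engine name, start commands, and Postgres image are illustrative assumptions based on Nanobox conventions, so check the Nanobox docs for exact values):

```yaml
run.config:
  engine: ruby

web.main:
  start:
    nginx: nginx -c /app/config/nginx.conf
    rails: bundle exec puma -C /app/config/puma.rb

data.db:
  image: nanobox/postgresql
```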

Modify the Postgres Connection Config

Because my app needs to be portable between environments, I’m going to use environment variables to populate my Postgres connection. Nanobox will auto-generate these environment variables in each environment. In my config/database.yml:
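A sketch of what that production section might look like (the DATA_DB_* variable names assume Nanobox’s generated environment variables for a data.db component; the database name is illustrative):

```yaml
production:
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: <%= ENV['DATA_DB_HOST'] %>
  username: <%= ENV['DATA_DB_USER'] %>
  password: <%= ENV['DATA_DB_PASS'] %>
  database: myapp_production
```

Because the connection details come from the environment rather than being hard-coded, the same file works locally, in a dry-run, and in production.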

Set Up the Local Environment

To set up my local environment, I’ll first add a DNS alias for my local app, just as a matter of convenience, so I can access it easily from a browser. I’ll then use the nanobox run command to spin up my app locally and drop into a console inside my Ruby environment. Once there, I’ll seed my local database.

# Add a convenient way to access the app from the browser
nanobox dns add local rails.dev
# Create the dev environment and drop into a console
nanobox run
# Seed the database
rake db:setup

Run the App Locally

With the database seeded, I’m ready to fire up Rails. Inside the Nanobox console, I’ll just run:

rails s

Deploying Ruby

I’m going to go ahead and deploy my RoR app to live servers with Nanobox, but before I do, I’ll need to configure the nginx proxy. I also want to test the deploy process locally.

Configure Nginx & Puma

In production, I’m going to run an Nginx web server that proxies down to my Puma process. To do this, I need to provide an nginx.conf and a puma.rb in my project. I’ll store both in my config directory.

config/puma.rb:

# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and a maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum; this matches the default thread size of Active Record.
#
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
port ENV.fetch("PORT") { 3000 }

# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked web server processes. If using threads and workers together,
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (neither of which supports
# forked processes).
#
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }

# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory. If you use this option
# you need to make sure to reconnect any connections in the `on_worker_boot`
# block.
#
# preload_app!

# The code in the `on_worker_boot` block will be called if you are using
# clustered mode by specifying a number of `workers`. After each worker
# process is booted this block will be run. If you are using the `preload_app!`
# option, you will want to use this block to reconnect any connections
# that may have been created at application boot; Ruby
# cannot share connections between processes.
#
# on_worker_boot do
#   ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
# end

# Allow Puma to be restarted by the `rails restart` command.
plugin :tmp_restart

These are used by the start commands under web.main in my `boxfile.yml`.
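For reference, here’s a minimal config/nginx.conf sketch for this kind of setup. The listen port, and the assumption that Puma is listening on 127.0.0.1:3000, are illustrative; adjust them to match the ports Puma and Nanobox actually expose.

```nginx
worker_processes 1;
daemon off;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        # Pass all requests through to the local Puma process.
        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```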

Preview a Production Deploy

Nanobox allows you to stage a production deploy on your local machine through its “dry-run” functionality. I’ll first add a DNS alias for my dry-run app, just to make it easy to access from the browser once it’s running.
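That looks something like the following (the rails.preview hostname is just an example alias I’m choosing; the command forms follow the Nanobox CLI conventions used earlier):

```shell
# Add a convenient alias for the dry-run app
nanobox dns add dry-run rails.preview
# Stage a full production deploy locally
nanobox deploy dry-run
```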

Deploy to Production

Once everything checks out in the dry-run, I’m ready to deploy for real. Nanobox will provision a live server using my cloud provider account, deploy my local codebase to the server, create containers for each of my app’s components (web and database), seed my database, and start Rails.
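The deploy itself is a couple of commands (the app name here is hypothetical; it should match the app created in your Nanobox dashboard):

```shell
# Link the local codebase to the live app
nanobox remote add my-rails-app
# Deploy to production
nanobox deploy
```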

Get Started With Nanobox

Nanobox is free for personal use and open source projects. You can learn about our paid plans by visiting our pricing page.