Prerequisites

All you need in order to follow this tutorial is an AWS account. Almost everything we do happens in the AWS console or through a couple of command-line tools, so it shouldn’t matter much whether your computer runs Mac, Windows or Linux. (The install commands shown use Homebrew, so they assume a Mac, but the AWS and EB CLIs are available on every platform.)

Create a Rails App to Deploy

We’re going to create a very simple Rails app to deploy. If this is your first time deploying to Elastic Beanstalk, I wouldn’t recommend trying to deploy an existing app. The reason is that if you try to deploy an existing app and something goes wrong, there’s so much stuff you’d have to dig through to try to find the problem. If you deploy a freshly-created app and it doesn’t work, there are a much smaller number of things that could be the culprit.

Let’s create the Rails app now.

Shell

$ rails new hello_world -d postgresql -T

$ cd hello_world

I personally always use PostgreSQL so that’s why we have the -d postgresql flag. I also always use RSpec instead of Test::Unit so I always initialize my Rails apps with -T to exclude Test::Unit. This tutorial would probably work just fine for you without those flags, though.

Now that we’ve created our Rails app, let’s set it aside for a moment and go get Elastic Beanstalk ready.

Create an Application on Elastic Beanstalk

I assume you’re familiar with the concepts of a development, staging and production environment. Let me explain how those things relate to Elastic Beanstalk.

In Elastic Beanstalk, you can have any number of applications and each one of those applications can contain any number of environments.

So if you want to deploy a particular Rails application, it would make sense to create a single Elastic Beanstalk application which contains within it a production environment and a staging environment.

In our case, we might have an Elastic Beanstalk application called hello-world and inside it a hello-world-production-env environment and a hello-world-staging-env environment. (AWS seems to like environment names to be suffixed with -env and it seems to like kebab case for the names of most things.)

In this tutorial, we’ll only create the hello-world-staging-env environment. The steps for creating a production environment would be the exact same.

First, go to the Services menu and click Elastic Beanstalk. You’ll then want to click the Create New Application link in the upper right-hand corner.

When prompted for an application name, choose hello-world. (This is arbitrary and could be anything you want.)

You should see a screen that looks like this. You could create an environment through the GUI right now if you wanted to, but in my experience that path leads to problems. Instead we’ll create our environment using a certain command-line tool.

Install AWS CLI and EB CLI

These CLI tools will allow us to do the same stuff the AWS console lets us do (by “AWS console”, I mean the AWS website we’ve been using so far) except more quickly and efficiently.

We’re actually going to take another step before we install AWS CLI and EB CLI. When you set up these tools, you’re going to have to hook them up to your particular AWS account. Part of this setup process involves providing something called your AWS Access Key ID and something called your AWS Secret Access Key.

If those terms sound intimidating, don’t worry. For now you can think of those things as kind of a username and password that let you connect to a particular AWS account.

You might wonder where you find your AWS access keys. That’s what we’ll walk through in the next steps.

First, click on your name in the upper right-hand corner and click My Security Credentials.

You’ll get a message about AWS wanting you to get set up with IAM. This is probably a good idea for a real production account, but discussing this part is probably outside the scope of this tutorial, so for now just say Continue to Security Credentials.

Next you’ll want to click on the thing that says Access keys (access key ID and secret access key).

Then click Create New Access Key.

Lastly, click Show Access Key and store the values somewhere where you’ll be able to get at them in a minute when we need to use them.

Now we’ll install the AWS CLI (CLI = command line interface) using Homebrew.

Shell

$ brew install awscli

Next we have to configure AWS by running aws configure. This is where you’ll be prompted for the access keys I showed you how to get earlier. Important: make sure you’re inside your Rails project directory.

Shell

$ cd hello_world

$ aws configure
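If you’re curious where those values end up: aws configure writes them to a couple of plain-text files in your home directory. The keys and region below are placeholders, not real values.

```ini
; ~/.aws/credentials (created by `aws configure`)
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

; ~/.aws/config (also created by `aws configure`)
[default]
region = us-east-2
output = json
```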

Now we’ll install the Elastic Beanstalk-specific CLI, again using Homebrew.

Shell

$ brew install aws-elasticbeanstalk

The first step with the Elastic Beanstalk CLI is the init step. Let’s run eb init.

Create an Environment on Elastic Beanstalk

Shell

$ eb init

You’ll be asked about a region. I’m picking us-east-2.

For the application, select the one you just created. Select Ruby as the platform.

Select Ruby 2.3 (Puma) as the platform version. Why Puma? I just picked Puma since according to its documentation, it’s the default server for Rails.

This will create a file called .elasticbeanstalk/config.yml which will look something like the following:

.elasticbeanstalk/config.yml

YAML

branch-defaults:
  master:
    environment: hello-world-staging-env
environment-defaults:
  hello-world-staging-env:
    branch: null
    repository: null
global:
  application_name: hello_world
  default_ec2_keyname: jasons-key-pair
  default_platform: arn:aws:elasticbeanstalk:us-east-2::platform/Puma with Ruby 2.3 running on 64bit Amazon Linux/2.4.4
  default_region: us-east-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application

Now we’ll create the Elastic Beanstalk environment.

Shell

$ eb create hello-world-staging-env

If we visit the URL for our new Elastic Beanstalk environment, we’ll see that it doesn’t work.

If you want to, you can check the logs by going to Logs, then clicking Request Logs, then clicking Last 100 Lines. You’ll probably see something like this:
2017/09/25 11:45:57 [crit] 3023#0: *64427 connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory) while connecting to upstream, client: 172.31.3.52, server: _, request: "HEAD /phpMyAdmin/ HTTP/1.1", upstream: "http://unix:///var/run/puma/my_app.sock:/phpMyAdmin/", host: "13.59.32.218"

You might notice a clue there. The important part is connect() to unix:///var/run/puma/my_app.sock failed (2: No such file or directory): NGINX is trying to hand the request to Puma, but Puma never started, so the socket doesn’t exist. (The /phpMyAdmin/ part is just a random bot probing the server; it’s unrelated.) A big reason our app can’t boot yet is that it has no database to talk to. We need to set up and configure a PostgreSQL database.

Set Up an RDS Database

Go to the Configuration area and scroll to the bottom. You’ll want to click create a new RDS database. (RDS stands for Relational Database Service.)

For DB engine, choose postgres. I assigned mine a Master username of hwstaging. (AWS seems to be weird about certain names in certain ways. For some reason it seems to want your database name to be really short. I don’t really like that but whatever.) Put whatever you want for the password.
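Once Elastic Beanstalk finishes attaching the RDS instance to your environment, it exposes the connection details to your app as environment variables: RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME and RDS_PASSWORD. Here’s a sketch (not the only way to do it) of a config/database.yml production section that reads them:

```yaml
# config/database.yml -- production section, sketched for an Elastic
# Beanstalk-attached RDS instance. The RDS_* variables are set by Elastic
# Beanstalk; everything else is standard Rails.
production:
  adapter: postgresql
  encoding: unicode
  database: <%= ENV['RDS_DB_NAME'] %>
  username: <%= ENV['RDS_USERNAME'] %>
  password: <%= ENV['RDS_PASSWORD'] %>
  host: <%= ENV['RDS_HOSTNAME'] %>
  port: <%= ENV['RDS_PORT'] %>
```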

The RDS database will probably take forever to spin up. If you’re looking for ways to kill time while you’re waiting, I might suggest preparing a full Thanksgiving dinner from scratch or watching every episode of Roseanne.

In addition to creating the RDS database instance, there’s another housekeeping item we have to carry out that we might as well get out of the way now. We have to set up the Rails secret keys.

Go to the Configuration area again and click the gear icon next to Software Configuration.

There’s a value you’ll need to grab and paste somewhere. Run the following command:

Shell

$ rails secret

This will output a token. Copy that token to the clipboard. Under Environment Properties, create a SECRET_KEY_BASE property and paste in the token for the value. Click Apply.
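In case you’re wondering what rails secret actually does: in Rails 5 it’s essentially a thin wrapper around SecureRandom. A minimal sketch:

```ruby
require "securerandom"

# `rails secret` boils down to something like this: a 128-character
# hex string, suitable for use as SECRET_KEY_BASE.
def generate_secret
  SecureRandom.hex(64)
end

puts generate_secret
```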

I’ve found that at this point our application probably still won’t work. For some reason, a redeployment is necessary.

Shell

$ eb deploy

If you thought that after all this work things were finally going to work now, you’re in for a disappointment. If we pull up our environment’s URL, we’ll probably see this screen:

We haven’t created any scaffolds yet so there’s nothing to actually be seen. Let’s create a scaffold so our app has something to show us. We’ll create a Person scaffold.

Shell

$ rails g scaffold person name:string

Let’s make people#index the root route.

config/routes.rb

Ruby

Rails.application.routes.draw do
  resources :people
  root 'people#index'
end

Let’s commit what we did and redeploy.

Shell

$ git add -A

$ git commit -m "Add Person resource."

$ eb deploy

Yay! We should finally be able to see a working app.

Observe Our Working App

Just to be sure, let’s use the Person form to add a person.

Now we see our person show up on the list.

Note: It’s unlikely that I wrote this whole post without making a mistake or accidentally leaving something out. If my instructions don’t work for you, please do leave me a comment and let me know what kind of problem you’re having. I want to make sure this tutorial really works.

Who This Tutorial Is For

This tutorial is for developers who are comfortable with Ruby on Rails but have no particular expertise in system administration.

If you’ve never used AWS before and you don’t even know what EC2 is, that’s okay.

What We’re Going To Do

All we’re going to do is get a database-connected Rails application up and running on EC2. This isn’t meant to be the starting point for a production configuration. It’s only meant to be a Rails/EC2 “hello world” so you can see what needs to happen in order to get a Rails application running on EC2.

Why EC2?

Before I jump into how to deploy a Rails application to EC2, it might make sense to first talk a little bit about why a person would want to use EC2.

It seems to me that there are three prominent options for deploying a Rails app:

Heroku

VPS

AWS

Let’s briefly discuss some pros and cons of each.

Heroku

Heroku is what I personally have worked with the most. When I’ve been responsible for the deployment decision I’ve chosen Heroku in all but one case. The one and only time I chose not to use Heroku was when I deployed my very first Rails app. I think the decision in that case was more out of ignorance than anything else. When I’ve worked on Rails apps deployed by other people, those apps were usually running on Heroku as well.

The reason I personally have chosen Heroku so many times is because it’s so easy. My first-ever Rails deployment was to a “blank” VPS running Ubuntu. I recall that it was a tedious, time-consuming and frustrating experience. Heroku can be frustrating at times too but for the most part it just works.

There’s a downside to the convenience Heroku provides, though, and that’s the cost. Heroku can get pretty expensive pretty quickly.

By the way, if you don’t know, Heroku uses AWS under the hood. You can think of it as an abstraction layer on top of AWS to make AWS easier.

VPS

It seems to me that the main advantage to deploying Rails on a VPS is the cost. You have to do everything manually if you’re using a VPS but it can be pretty inexpensive.

By the way, let me explain what VPS means if you don’t know. VPS stands for Virtual Private Server. The idea, if I understand correctly, is that a hosting company has an actual physical server sitting somewhere, and on that physical server they have a number of virtual machines running.

To give an example, let’s say Bob buys a VPS account from a hosting company and he’s running Ubuntu Linux. Let’s also say that there’s another customer named Alice who buys a VPS account running Windows. Both Bob’s Ubuntu server and Alice’s Windows server are really just virtual machines running on the same physical server.

AWS

Before I talk about AWS specifically I want to make the distinction between cloud and non-cloud hosting options. A lot of people including my past self have perceived “cloud” just to be an empty and redundant marketing term meaning “internet” or “server”. For our purposes it will be helpful to understand the term a little differently.

Here’s Amazon’s definition of cloud which I think works well: “Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet with pay-as-you-go pricing.”

The significant thing about cloud computing is the on-demand delivery. You can go from having just one server to having ten of the same server. Or you can resize a server in terms of memory or disk space. Or you can spin up more database instances whenever you want.

So Heroku is a super easy but pretty expensive cloud provider, AWS is a less easy and less expensive cloud provider, and a VPS is an even less expensive but even less easy non-cloud option, meaning it doesn’t have the on-demand scaling benefits.

I know you could take issue with the accuracy and precision of the above statements. I’m speaking loosely on purpose in an effort to be understandable.

With that said, let’s start our work.

Set Up The EC2 Instance

AWS will allow us to spin up as many EC2 instances as we want. For our purposes we’ll only need one.

We’ll need to get to the EC2 dashboard. To get there, go to the Services menu in the upper left-hand corner and then click on EC2.

From the EC2 dashboard, click the blue Launch Instance button.

On the first page of the wizard that appears, find Ubuntu and click Select. Why are we using Ubuntu? No particular reason. I personally am pretty familiar with Ubuntu over other Linux distributions and I assume that’s true of a lot of other developers as well. If you’re not particularly familiar with Ubuntu, don’t worry. It probably won’t matter. But I am assuming you’re comfortable with basic Linux concepts and commands.

On the next screen of the wizard, leave the instance type at the default, t2.micro.

EC2 offers various instance types with different levels of memory, disk space and other attributes. Why are we using t2.micro as opposed to any other instance type? Mainly because we really don’t need anything special in order to carry out our “EC2/Rails hello world” exercise. A small server is fine.

The next step is to click Review and Launch.

On the next screen, just click Launch.

You’ll be prompted about something called a key pair. If you’re not exactly clear on what a key pair is, don’t worry. I wasn’t super clear on key pairs myself, even though I had been using them for a long time. Here’s how the AWS docs describe them:

“Amazon EC2 uses public–key cryptography to encrypt and decrypt login information. Public–key cryptography uses a public key to encrypt a piece of data, such as a password, then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair.”

There was a little bit of fancy language in there, like “public-key cryptography”. I think you can understand the gist of key pairs without necessarily understanding terms like that.

Just think of it this way. Let’s say you’re connecting to an EC2 instance from your Macbook. Your Macbook will have a file on it with a string of characters. That’s your private key. Your EC2 instance (that is, Linux server) will have a different file on it with a different string of characters, and that’s your public key. These two keys match up with each other, and that’s your key pair.

Anyway, you’ll need to create a key pair if you don’t have one already. You’ll be prompted for a name to give the key pair, and then you’ll be given a .pem file to download. The .pem file you download is your private key (AWS keeps the matching public key and installs it on your instances). I called mine aws-us-east-1.pem and put it in my ~/.ssh/ directory.
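If you want to see the two halves of a key pair with your own eyes, you can generate a throwaway pair locally with ssh-keygen. This is purely illustrative; EC2 generates its key pairs for you, and the filename here is made up.

```shell
# Generate a throwaway 2048-bit RSA key pair with no passphrase, quietly.
ssh-keygen -t rsa -b 2048 -f demo_key -N "" -q

# Two files result: demo_key is the private half, demo_key.pub the public half.
ls demo_key demo_key.pub
```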

Okay, so you’ve followed the wizard steps and created your key pair (or, if you had created a key pair some time in the past, you selected that key pair). What you should see now is a page like the one below. There’s a green box near the top that says, “Your instances are now launching”. Below that headline it says, “The following instance launches have been initiated” and there’s a link with your Instance ID. Click that link.

If you started this tutorial without any EC2 instances, you should now have just one single EC2 instance. Right-click the instance and click Connect. You’ll be presented with a screen like the following.

If you’ve freshly created your key pair, you’ll need to change the permissions on your private key. I put mine in ~/.ssh/aws-us-east-1.pem. Your path may be different.

Shell

$ chmod 400 ~/.ssh/aws-us-east-1.pem

If you forget to change the permissions on your private key to 400, you’ll get an error when you try to connect to your EC2 instance. The error will say your permissions are too open and “It is required that your private key files are NOT accessible by others.”

By default your private key’s permissions are probably 644, or -rw-r--r--. The first r means the file is readable by your user, and the second two rs mean the file is readable by other users as well. SSH doesn’t like your private key to be readable by other users. The permissions for your own user can be as open as you want, so SSH will also let you connect if your key’s permissions are 700 or 600. I assume 400 (-r--------) is recommended simply because there’s no advantage in making the permissions any more open than that, only risk.
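You can see the effect for yourself on any file. Here’s a quick demonstration with a throwaway file (the filename is made up):

```shell
# Create a throwaway file and lock it down the way SSH wants key files.
touch demo-key.pem
chmod 400 demo-key.pem

# The first 10 characters of ls -l output are the permission bits.
ls -l demo-key.pem | cut -c1-10   # -r--------
```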

Now connect to the instance over SSH using the command shown on the Connect screen. It will look something like this: ssh -i ~/.ssh/aws-us-east-1.pem ubuntu@ec2-34-202-231-182.compute-1.amazonaws.com. You’ll of course replace ~/.ssh/aws-us-east-1.pem with the path to your own private key and ec2-34-202-231-182.compute-1.amazonaws.com with the URL of your own EC2 instance.

Once you make contact with the EC2 instance via SSH, you’ll be asked if you want to continue connecting. Say yes, of course.

The next chunk of this tutorial will have us follow this incredibly helpful Digital Ocean tutorial. The author of that post deserves a lot of credit for making this one possible. We won’t be following the Digital Ocean tutorial verbatim, though. I’ve laid out a slightly different set of steps below. We’ll start by installing Ruby.

Install Ruby

First we’ll run sudo apt-get update. If you’re curious as to exactly what sudo apt-get update does, there’s a pretty good Stack Exchange question/answer here.

Install Passenger and NGINX

The next command we need to run is this one:

Shell

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7

You might find this command mysterious even if you’re fairly familiar with Ubuntu. I didn’t know what it was when I first saw it, so I had to do some research.

First, what’s apt-key? According to the man page, “apt-key is used to manage the list of keys used by apt to authenticate packages. Packages which have been authenticated using these keys will be considered trusted.”

If you want to be able to understand that definition, you have to understand what “apt” is and what a package is.

So what’s apt? This documentation page says, “The apt command is a powerful command-line tool, which works with Ubuntu’s Advanced Packaging Tool (APT) performing such functions as installation of new software packages, upgrade of existing software packages, updating of the package list index, and even upgrading the entire Ubuntu system.”

Now we can reexamine this sentence and perhaps understand it better: “apt-key is used to manage the list of keys used by apt to authenticate packages.” This kind of makes sense but it doesn’t totally demystify the command itself. For example, what’s the “adv” part?

Here’s what the docs say about adv: “Pass advanced options to gpg. With adv –recv-key you can e.g. download key from keyservers directly into the trusted set of keys. Note that there are no checks performed, so it is easy to completely undermine the apt-secure(8) infrastructure if used without care.”

Okay, so using adv with --recv-key downloads a key directly into a trusted set of keys. The value 561F9B9CAC40B2F7 must be an identifier for a key. If we paste 561F9B9CAC40B2F7 into Google, the results that come up are all about Passenger.

So the sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7 must say, “Add the key for Passenger, 561F9B9CAC40B2F7, to APT’s list of trusted keys.”

Once you’ve run that apt-key command, open up /etc/apt/sources.list.d/passenger.list and paste in the following content:

/etc/apt/sources.list.d/passenger.list

deb https://oss-binaries.phusionpassenger.com/apt/passenger xenial main

A quick note: when I first tried to follow Digital Ocean’s tutorial, the above line was problematic for me and I didn’t understand why. I later realized that it had to do with my Ubuntu version. Ubuntu 16.04, the version I was using on my EC2 instance, is named Xenial Xerus. The Digital Ocean tutorial used Ubuntu 14.04, Trusty Tahr. When I pasted in the value from the Digital Ocean tutorial, deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main, I had problems because trusty corresponds with Ubuntu 14.04, not the Ubuntu 16.04 that my EC2 server was running.

So if you’re using a version of Ubuntu other than 16.04, you’ll probably have to change xenial to something else.

Next we make the file we just edited, /etc/apt/sources.list.d/passenger.list, readable only to root.

Shell

$ sudo chown root: /etc/apt/sources.list.d/passenger.list

$ sudo chmod 600 /etc/apt/sources.list.d/passenger.list

Now we update the APT cache.

Shell

$ sudo apt-get update

Then we install NGINX and Passenger.

Shell

$ sudo apt-get install nginx-extras passenger

You might wonder how the nginx-extras package differs from the nginx package. According to the docs for this package, “This package provides a version of nginx with the standard modules, plus extra features and modules such as the Perl module, which allows the addition of Perl in configuration files.” That’s kind of vague (what other “extra features and modules” besides Perl?), and I don’t understand why the plain old nginx package wouldn’t have been sufficient for our purposes, but okay.

Now we have to add the passenger_root and passenger_ruby directives to the NGINX configuration file at /etc/nginx/nginx.conf.
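The exact values depend on how Passenger was installed. With the Ubuntu packages we installed above, the directives usually look like the lines below, placed inside the http block. Treat these paths as likely rather than guaranteed; you can confirm them on your own machine with passenger-config --root and passenger-config about ruby-command.

```nginx
# Inside the http { ... } block of /etc/nginx/nginx.conf.
# Paths are the usual ones for the Ubuntu passenger package; verify with
# `passenger-config --root` and `passenger-config about ruby-command`.
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/passenger_free_ruby;
```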

If we were to try to visit our EC2 instance’s URL in the browser right now, it wouldn’t work. That’s because our EC2 instance has a security group that describes what kind of access should be allowed to our instance, and right now that security group is only allowing SSH requests on port 22, not web requests on port 80.

In the AWS web console, go to the Services menu and pick EC2 if you’re not already there. Then, under the “NETWORK & SECURITY” group, click on Security Groups. Right-click your EC2 instance’s security group and click “Edit inbound rules”.

By the way, what if you have multiple security groups and you don’t know to which security group(s) your EC2 instance belongs? You can go to Instances, right-click your instance, hover over the Networking menu item, then click Change Security Groups. You’ll see a list of security groups. The security group(s) that are checked are the ones that correspond to your EC2 instance.

Once you have the “Edit inbound rules” modal open, click the Add Rule button. We’re going to select HTTP as the Type and allow traffic from anywhere. After you do that, click Save.

Now if you visit your EC2 instance’s URL in the browser, you should see something there.

Install Rails And Create Rails Project

Let’s install Rails.

Shell

$ sudo gem install rails

We’ll create a project called hello_world. I’m a PostgreSQL man so I always use PostgreSQL as my RDBMS as opposed to MySQL or SQLite.

Shell

$ rails new hello_world -d postgresql

We’ll need to uncomment the line in our Gemfile that contains therubyracer. If you’re wondering why therubyracer is necessary, this Stack Overflow question/answer might help. therubyracer is a tool that embeds a JavaScript interpreter into Ruby. This is evidently necessary in our case to perform JavaScript compression in the asset pipeline. (Don’t quote me on that, though.)

Gemfile

Ruby

gem 'therubyracer', platforms: :ruby

After modifying our Gemfile we’ll of course need to run bundle install again.

Shell

$ bundle install

Get Our EC2 Instance To Serve Our Rails Project

We need to go into /etc/nginx/sites-available/default and comment out the following two lines:

/etc/nginx/sites-available/default

Shell

# listen 80 default_server;

# listen [::]:80 default_server ipv6only=on;

Then we’ll go into /etc/nginx/sites-available/hello_world and paste in the following content:

/etc/nginx/sites-available/hello_world

server {
    listen 80 default_server;
    passenger_enabled on;
    passenger_app_env development;
    root /var/www/hello_world/public;
}

Notice the passenger_app_env development line. This will make it so that visiting the root path of our application gives us the “Yay! You’re on Rails!” page. I want to complete this relatively easy step before we move on to the more involved step of creating and using an actual scaffold with a database connection.

We’ll need to symlink this file to /etc/nginx/sites-enabled/hello_world.
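On the server that’s one command (the paths come from this tutorial): sudo ln -s /etc/nginx/sites-available/hello_world /etc/nginx/sites-enabled/hello_world. If symlinks are new to you, here’s the same idea in a throwaway sandbox directory you can try anywhere:

```shell
# Mimic NGINX's sites-available / sites-enabled layout in a sandbox.
mkdir -p sandbox/sites-available sandbox/sites-enabled
echo "server { }" > sandbox/sites-available/hello_world

# The "enabled" entry is just a pointer to the real file.
ln -s ../sites-available/hello_world sandbox/sites-enabled/hello_world

cat sandbox/sites-enabled/hello_world   # server { }
```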

Now, if you visit the URL for your EC2 instance in the browser, you should see this:
Having accomplished our intermediary goal of viewing the “Yay! You’re on Rails!” page, we can now change our passenger_app_env from development to production.

/etc/nginx/sites-available/hello_world

server {
    listen 80 default_server;
    passenger_enabled on;
    passenger_app_env production;
    root /var/www/hello_world/public;
}

There’s no point yet in restarting NGINX and trying to visit the instance’s URL. It won’t work. First we need to get a database going. We’ll also create a scaffold in our Rails application so we actually have something to look at and use.

Create An RDS Instance

To create an RDS instance, first go to Services and choose RDS. From there, click Launch a DB Instance.
Choose PostgreSQL as the RDBMS and then click Select.
We’ll use a production PostgreSQL instance since that’s what we’d do in real life. After that, click Next Step.
On the next screen we’ll need to give our database an Identifier, a Master Username and a Master Password. Let’s use hello-world as the Identifier and hello_world as the Master Username. For the password, use whatever you want. (By the way, why hello-world as the Identifier instead of hello_world? Because AWS doesn’t allow underscores in an Identifier.) Once you’ve entered those values, click Next Step.
The values on the next page can be left at their defaults. Just click Launch Instance.

After you launch your RDS instance you can monitor its progress under Instances in the RDS Dashboard. In my experience the RDS instance takes forever to launch.

Once the RDS instance is ready we can edit our config/database.yml to make it so our Rails app knows how to connect to our RDS database.

Modify the production section of config/database.yml to match the following. You’ll of course replace yourpassword with your actual password and replace my host URL with your own RDS URL.

To find your RDS’s host URL, go to Instances, click on your instance, pick See Details from the Instance Actions menu, and search the page for Endpoint. (You want the endpoint version that does not have the :5432 at the end.)

config/database.yml

YAML

production:

<<: *default

database: hello_world_production

username: hello_world

password: yourpassword

host: hello-world.c3yippmmocvu.us-east-1.rds.amazonaws.com

port: 5432

By the way, isn’t it bad to store sensitive values like passwords directly in files? No, it’s only bad to store sensitive values in version control. If a hacker gains access to your EC2 instance, they’ll be able to read your configuration values whether they’re stored in environment variables or directly in files. (Just make sure a config/database.yml with real credentials in it never gets committed.)

Now that our RDS credentials are in place, we can create the database itself.

Shell

$ RAILS_ENV=production rails db:create

Now we’ll create a scaffold so we have something to work with. Afterward, we’ll run a migration to create the table for our new scaffold.

Shell

$ rails g scaffold person name:string

$ RAILS_ENV=production rails db:migrate

It will also be helpful to set up a root route.

config/routes.rb

Ruby

Rails.application.routes.draw do
  resources :people
  root 'people#index'
end

Now we have to take care of a little plumbing work. We have to set up our Rails secrets.

Shell

$ rails secret

Copy the value output by rails secret and paste it into config/secrets.yml under the production section.

Now we need to precompile our assets.

Shell

$ RAILS_ENV=production rails assets:precompile

Finally, restart NGINX and we should be good to go.

Shell

$ sudo service nginx restart

Observe Our Complete Working Application

Now we can observe our complete working application. If we visit the root route, we should see an (empty) list of people.