Brewhouse Engineering Principles

We write code for our peers

We care about the person who will have to read, debug or extend our code. We want them to have a good time. We take extra time to turn "code that works" into code that’s clean, explicit, simple, robust, confident and tested… obviously!

We trust our peers

We trust our peers to make the best decisions and do the best work they can.

We experiment (bleeding edge)

We experiment with shiny new technologies, tools and languages. We challenge ourselves, get out of our comfort zone and expand our horizons. That’s what Friday times are for.

We ship good, old, stable

We ship software built on top of mainstream languages and tools that are stable and should be supported for years to come.

We collaborate

We know where engineering fits in the big picture and do our best to collaborate with product owners, designers and customers. We explain our constraints in words anyone can understand.

We make pragmatic decisions

When we evaluate decisions, we take into account the requirements, unknowns, experience, team, and deadlines. We know the tradeoffs implied by the decisions we make.

We reflect

We take time to reflect on our practice and improve.

We contribute

We acknowledge that we sit on the shoulders of giants. We give back by contributing to open source. We share knowledge on our blog and in the Brewhouse Software Engineering Guidelines document.

Brewhouse Engineering Practices

Git

Commit messages

The goal of a commit message is to help future developers find out why a particular change was made to the code.
We write commit messages that explain why we made a change. The diff already tells us what the change is about.

Not good: "Made the project container an inline-block"

Good: "Fix project cards layout"

A commit is structured like this:

Short (50 chars or less) summary of changes

More detailed explanatory text, if necessary.

* Bullet points are okay, too.

Code

We extract complex code into methods or local variables with meaningful names.

We order public methods with Queries first (available?, full_name, ...) then Commands (available!, publish!, ...).
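A minimal sketch of this public-method ordering, using a hypothetical Room class (the names are illustrative, not from our codebase):

```ruby
class Room
  def initialize(name)
    @name      = name
    @available = true
  end

  # -- Queries: answer a question, change nothing
  def available?
    @available
  end

  def full_name
    "Room #{@name}"
  end

  # -- Commands: change state
  def available!
    @available = true
  end

  def book!
    @available = false
  end
end
```

Grouping queries above commands makes it obvious at a glance which methods are safe to call without side effects.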

We order private methods so that each one only calls methods written below it. This makes methods easier to find and prevents infinite recursion.
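A sketch of that top-down ordering (ReportMailer and its methods are hypothetical): each private method only calls methods defined further down the file.

```ruby
class ReportMailer
  # Public entry point: reads top to bottom like an outline.
  def deliver
    send_email(body)
  end

  private

  # Calls nothing below it; a stand-in for actual delivery.
  def send_email(content)
    "sent: #{content}"
  end

  # Only calls header and footer, which are written below.
  def body
    "#{header}\n#{footer}"
  end

  def header
    "Weekly report"
  end

  def footer
    "-- Brewhouse"
  end
end
```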

Pull Requests

A couple of rules:

Any code change should go through a Pull Request.

A pull request gives a high level overview of what has been accomplished.

A picture is worth a thousand words, so we add a screenshot if we've added a new feature or changed the UI.
An animated GIF is worth a million words, so we showcase new UI interactions with one! (LiceCap does the job.)

We review our own changes before creating a pull request: Is there
code that's commented out? Are there tests? ...

When we are not done but we want some feedback, we prefix the PR title
with "[WIP]"

Most pull requests should be reviewed and merged by another team member.

When we make a risky change (new database migrations, a new integration with a third-party service, ...), the author should monitor the deploy to production.

Things to look at when reviewing a Pull Request:

Is the code clear and robust?

Is it well tested?

Are the commits well written? Should they be squashed?

We want honest and good enough code to be shipped. We don't want to
spend too many cycles making the code Perfect™.

CI

We use CircleCI for continuous integration. The master branch is set up to deploy to a staging server on Heroku.

Ruby on Rails

Services

Whenever an action becomes complex, we extract it from our controller or model and wrap it into a Service.

We namespace services to keep them well organized:

# Bad
CreateUser

# Good
User::Create

Most services use the base service mixin.
They favor raising exceptions on failure over returning false.

Whenever we can, we decouple services from ActiveRecord to make them more "functional". They take primitives in and return primitives back, following the "functional core & imperative shell" approach.
Your service won't deal with persistence and will be super easy to test!
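A minimal sketch of such a service: a hypothetical User::Create that takes primitives in, returns a plain Hash back, and raises on failure (the base service mixin is omitted here).

```ruby
module User
  class Create
    class InvalidEmail < StandardError; end

    def self.call(email:, name:)
      new(email: email, name: name).call
    end

    def initialize(email:, name:)
      @email = email
      @name  = name
    end

    # Functional core: primitives in, primitives out, no persistence.
    # Raises instead of returning false.
    def call
      raise InvalidEmail, @email unless @email.include?("@")

      { email: @email.downcase, name: @name.strip }
    end
  end
end
```

The imperative shell (a controller or job) is then the only place that touches ActiveRecord, and the service itself can be tested without a database.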

Use null: false on fields that should not be empty. This can save you from writing validates_presence_of constraints for models that are not edited directly by users. It also serves as documentation right in schema.rb.

Use index with unique: true on unique fields or combinations of fields.

Use index with unique: true on *_id attributes of has_one and has_many through: associations. Rails does not check that an association doesn't already exist when creating it, so you can easily end up with multiple associations between the same objects in a join table.
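The three guidelines above could look like this in a hypothetical migration (table and column names are made up):

```ruby
class AddDataConstraints < ActiveRecord::Migration
  def change
    # null: false documents and enforces presence at the database level.
    change_column_null :users, :email, false

    # Unique index on a single field.
    add_index :users, :email, unique: true

    # Unique index on the join table of a has_many :through association:
    # prevents duplicate associations between the same two records.
    add_index :memberships, [:user_id, :team_id], unique: true
  end
end
```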

RSpec

With RSpec, we mostly test models, services and API interactions. Cucumber scenarios are likely to cover the thin controller layer. Once in a while you might need to test a complex controller: go for it!

We use FactoryGirl to create models without trying to go too far with it. Each factory should be valid on its own. If you need some crazy custom setup, call Services from your specs to create stuff.

Instance methods are prefixed with a #, class methods are prefixed with a . (#disable! but .active).

context descriptions should start with the keyword "when". Ex:

context "when email is invalid"
context "when CSV has no content"
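Put together, a hypothetical spec following these naming conventions might look like this (User, .active and #disable! are illustrative):

```ruby
# "." prefix for class methods, "#" for instance methods,
# and every context starts with "when".
RSpec.describe User do
  describe ".active" do
    context "when no user has been activated" do
      it "returns an empty relation" do
        expect(User.active).to be_empty
      end
    end
  end

  describe "#disable!" do
    context "when the user is active" do
      it "deactivates the user" do
        user = FactoryGirl.create(:user, active: true)
        user.disable!
        expect(user).not_to be_active
      end
    end
  end
end
```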

Use rspec-set to speed up the RSpec test suite! Be well aware that rspec-set leaves orphan records in the database, so most of your tests should not assume that the database is empty. It’s sometimes tricky to test scopes with a non-empty database. Feel free to truncate the database in a before(:all) block if needed.
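A minimal sketch, assuming rspec-set's set helper (Project is illustrative):

```ruby
# The record is created once for the whole example group and is not
# rolled back between examples, which is where the speedup (and the
# orphan records) come from.
RSpec.describe Project do
  set(:project) { FactoryGirl.create(:project) }

  it "reuses the same persisted record across examples" do
    expect(project).to be_persisted
  end
end
```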

Cucumber

Cucumber scenarios should be high level and contain just a few (5-8) steps. They describe a workflow, not UI interactions. That way, we spend more time in step definitions (read: Capybara & Ruby) than in Cucumber pattern-matching hell.

Given steps should rely on factories and services to set up a context. Going through the UI or reusing other steps is likely to slow things down.
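For example, a Given step that sets up state through a factory and a service instead of driving the UI (Project::Publish is a hypothetical service):

```ruby
# features/step_definitions/project_steps.rb
Given(/^a published project named "([^"]*)"$/) do |name|
  project = FactoryGirl.create(:project, name: name)
  Project::Publish.call(project.id)  # hypothetical service
end
```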

In order of preference (because of speed), use Capybara's Rack::Test driver, then capybara-webkit... capybara-selenium is slow!

Rake tasks

We use rake tasks to trigger recurring jobs.

rake -T lists about 40 tasks by default. We namespace all the application tasks under the app namespace so that they are easy to find.

We use Heroku Scheduler to run tasks every 10 minutes, hourly and daily. In order to make it easy to update tasks, we create three generic tasks (app:cron:daily, app:cron:hourly, app:cron:every_10_minutes) and call other tasks within those.
We set up Heroku Scheduler to call the app:cron:... tasks.

To prevent a failing task from blocking subsequent ones, each rake task should only queue up a job that Sidekiq will happily take care of.
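A sketch of such a cron task (DailyDigestWorker and CleanupWorker are hypothetical Sidekiq workers):

```ruby
# lib/tasks/cron.rake: the task only enqueues jobs, so one failing job
# cannot prevent the others from running.
namespace :app do
  namespace :cron do
    desc "Recurring tasks triggered daily by Heroku Scheduler"
    task daily: :environment do
      DailyDigestWorker.perform_async
      CleanupWorker.perform_async
    end
  end
end
```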

APIs

We have had good experience with jsonapi-resources. It provides a good framework for building a consistent API. Pair it with rspec-api-documentation and apitome and you’ve got a pretty good setup that makes your mobile friends happy. :)

Authentication has been done so far using Basic Auth (an "Authorization" header with login:password Base64-encoded).
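Building that header by hand is a one-liner, using Ruby's standard Base64 module:

```ruby
require "base64"

# HTTP Basic Auth: Base64-encode "login:password" and send it
# in the Authorization header.
def basic_auth_header(login, password)
  "Basic " + Base64.strict_encode64("#{login}:#{password}")
end

basic_auth_header("aladdin", "opensesame")
# => "Basic YWxhZGRpbjpvcGVuc2VzYW1l"
```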