If you simply want to know how to negate an ActiveRecord scope, and you don’t care how it works, here’s the TL;DR:

scope :not, ->(scope) { where(scope.where_values.reduce(:and).not) }

Arel is the powerful library that powers the Rails ActiveRecord library. It’s also black magic (and largely undocumented black magic at that). It was created in isolation from Rails, and later retrofitted into ActiveRecord. Prepare yourself for a crash course in the dark arts of Arel.

Imagine you’re creating a blog, and you need a scope that shows you all of the posts that were recently tagged:

We’ve chained two `where` clauses together with ActiveRecord, which will join them together with `AND`. In other words, a post that has tags and has had its tags changed within the last 10 days is “recently tagged”. Freed of the syntactic trappings of ActiveRecord and Arel, it can be simply represented with the following boolean expression:

tagged_at > 10.days.ago && tags != nil

Now suppose you’d like to find the opposite: the list of all posts that were not recently tagged (either because their tags haven’t changed in the last 10 days, or because all of their tags were removed). In boolean logic, this is simply the logical negation of the “recently_tagged” scope:

!(tagged_at > 10.days.ago && tags != nil)

which, by De Morgan’s laws, simplifies to:

tagged_at <= 10.days.ago || tags == nil

The question is, how do we tell ActiveRecord / Arel to negate a scope? The answer is more difficult than you might expect.

Not surprisingly, it looks like ActiveRecord_Relation_Post is a type of ActiveRecord::Relation. If you peek inside ActiveRecord::Relation, you’ll see it’s a sort of wrapper around Arel (along with a few other ActiveRecord classes and modules). At the top of the ActiveRecord::Relation class file, there are several constants defined, including the following:
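The constant looked roughly like this around Rails 4.0 (the exact list varies by Rails version):

```ruby
# From ActiveRecord::Relation, circa Rails 4.0 (approximate):
MULTI_VALUE_METHODS = [:includes, :eager_load, :preload, :select, :group,
                       :order, :joins, :where, :having, :bind, :references,
                       :extending]
```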

This constant, along with the SINGLE_VALUE_METHODS constant, describes all of the possible constituents of an ActiveRecord scope. In our case, we want to peel off the where values of a scope so that we can logically negate them. Searching for usages of MULTI_VALUE_METHODS will lead you to this juicy bit of metaprogramming in ActiveRecord::QueryMethods:
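The gist of that metaprogramming, paraphrased as a runnable sketch (FakeRelation is an illustration, not the actual Rails source):

```ruby
# For each value name, QueryMethods generates a reader and writer pair
# (where_values, where_values=, order_values, ...) backed by a @values hash.
class FakeRelation
  MULTI_VALUE_METHODS = [:where, :order, :joins]  # abbreviated

  def initialize
    @values = {}
  end

  MULTI_VALUE_METHODS.each do |name|
    define_method("#{name}_values") { @values[name] ||= [] }
    define_method("#{name}_values=") { |values| @values[name] = values }
  end
end
```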

ActiveRecord creates accessor methods for each of the scope’s constituent parts; since we’re interested in the where values, let’s see what `where_values` returns (note that you could also call values[:where]):

Huzzah! We’ve now found the mysterious Arel. Although these class inspections don’t make it entirely obvious, these Arel nodes actually form the nodes of a tree representing boolean expressions in SQL.

The first node is:

          >
         / \
        /   \
:tagged_at  10.days.ago

And the second is simply:

     !=
    /  \
:tags   nil

Now that we have these where values, what can we do with them? Well, we’d like to AND them back together, and then negate them. Fortunately, there are methods for both of these things on any Arel::Nodes::Node:

The `not` at the end of this expression is what we’ve been searching for. However, this returns an Arel::Nodes::Node (or more specifically, an Arel::Nodes::Not), but we’re using ActiveRecord – we need an ActiveRecord::Relation. Luckily, that’s exactly what `where` returns, and it can take, as an argument, an Arel::Nodes::Node.

And this will, in fact, return what we’ve been searching for: all posts not recently tagged. It’s of course, incredibly ugly and not very reusable; luckily, we can create a generic negation scope with a little help from Ruby’s Enumerable#reduce method:

Someone just told you your code isn’t DRY, and you have no idea what they’re talking about. You’re fresh out of college, and you’re starting to fear that your Computer Science degree left you woefully unprepared for the challenge of real-world software engineering.

“DRY means ‘Don’t Repeat Yourself.’ Look, it’s basically the same code in all of these methods,” your coworker tells you. Your coworker has been out of college for 9 months, so you take another look at your code. You’re writing an application to create widgets, and you’ve just finished a feature that made it possible to create a new type of widget.

On closer inspection, you agree with your coworker that there’s a definite pattern, but you can’t imagine what you could do about it. “We could use a little bit of metaprogramming to DRY this up,” your coworker says, and then blows your mind with the following refactoring:
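The original listing isn’t shown, but the refactoring being described looks something like this (the widget classes and `WidgetFactory` are assumptions for illustration):

```ruby
Sprocket = Struct.new(:description)
Gizmo    = Struct.new(:description)
Doodad   = Struct.new(:description)

class WidgetFactory
  # One loop of define_method replaces three nearly identical
  # create_sprocket / create_gizmo / create_doodad methods:
  %w[sprocket gizmo doodad].each do |type|
    define_method("create_#{type}") do |description|
      Object.const_get(type.capitalize).new(description)
    end
  end
end

factory = WidgetFactory.new
factory.create_gizmo("it spins")  # => #<struct Gizmo description="it spins">
```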

You had never before imagined that Ruby had such power. Suddenly you can start to fathom how all of those magical methods in Rails must be implemented. And your WidgetFactory code is so clean! It will be so easy to create the factory method for the next widget type that comes along! Your coworker is so pleased to have shown you the secret, magic power of metaprogramming. You’re off and away. Oh, the places you’ll go!

__________________________________________________________

Three years have passed. You’re still working on the same application. You’ve spent the last year refactoring away all of the meta-programming madness of your first year. You shake your head at your former self every time you bump into a needless define_method, method_missing, or instance_eval.

Your product owner asks you to implement a new feature in your application: force your users to confirm every widget creation with a captcha. For the past several months, your application has been plagued by widget spam bots.

You’re pair programming with a new college grad on implementing the confirmation modal when you write the following cucumber step definition:

When /^you create a new widget and confirm with a captcha$/ do
  fill_in "Widget Description", with: "foo bar"
  click_button "Submit"
  within("#widget_confirmation_modal") do
    fill_in "#captcha", with: "HERP DERP"
    click_on "Confirm"
  end
end

After you and your pair implement the new feature and get your acceptance test passing, you run your entire build – only to discover that a hundred other features have broken. It turns out there were a lot of tests that created a widget, and when you dig into them, you realize they all do it slightly differently. Some of them `click_button "Submit"`. Others `click_link_or_button "Submit"`. Still others `find("#widget_submit").click`. And on and on.

The knowledge of how to create a widget in the UI was smeared throughout your test suite. It dawns on you that DRY isn’t about the repetition of structure, it’s about the duplication of knowledge. In this case, the more code that knew about how to create a widget, the more difficult it became to change the way widgets are created.

Your pair learns this lesson with you. You hope it’s better than the lesson you learned when you were fresh out of college. Oh, the programs you’ll DRY!

VCR is a great tool for recording HTTP requests during a test suite so that you can play them back later when the external server isn’t running or available. However, I’d like to show you how to abuse VCR into giving you the ability to spy on the network interactions during your test suite.

Here’s why: I wanted my integration test suite for AwesomeResource to kill two birds with one stone:

Prove that it integrates correctly with a rails server that serializes models to JSON the default Rails way

Create executable documentation for anyone that wants to look at the required JSON format

In other words, I needed to write a cucumber scenario like the following:

Scenario: Endpoint responds with 201
  When I call `create` on an Article model:
    """
    Article.create title: "foo"
    """
  Then the Article model should successfully POST the following JSON to "http://localhost:3001/articles":
    """
    {
      "article": {
        "title": "foo"
      }
    }
    """
  When I call the `all` method on the Article model
  Then the rails app should respond to a GET request to "http://localhost:3001/articles" with the following JSON:
    """
    {
      "articles": [
        {
          "title": "foo",
          "id": 1
        }
      ]
    }
    """
  And the `all` method should return the equivalent of:
    """
    [
      Article.new(
        id: 1,
        title: "foo"
      )
    ]
    """

If I were only interested in proving that AwesomeResource could integrate with the JSON format represented in this scenario, I could have written steps that faked out the Rails server:

When /^I call `create` on an Article model:$/ do |code|
  # no-op
end

Then /^the Article model should successfully POST the following JSON to "([^"]*)":$/ do |endpoint, json|
  Article.create_endpoint.should == endpoint
  Article.new("title" => "foo").to_json.should == json
end

When /^I call the `all` method on the Article model$/ do
  # no-op
end

Then /^the rails app should respond to a GET request to "([^"]*)" with the following JSON:$/ do |endpoint, json|
  Article.all_endpoint.should == endpoint
  @all_json_response = json
end

Then /^the `all` method should return the equivalent of:$/ do |code|
  Article.load_all_from_json(@all_json_response).should == eval(code)
end

But I wanted to take this a step further – I need to know when Rails changes the way it serializes and deserializes models to and from JSON. For years, the Rails ActiveResource library has sent broken JSON to Rails servers for nested associations when “ActiveResource.include_root_in_json” is on. Though the fix for this has now been released with Rails 4, I can’t help but wonder: why didn’t they have an integration test suite that immediately told them when it broke? Why did they have to wait for community members to submit GitHub issues? Because that’s ActiveResource. It’s not awesome. It’s just active.

The real AwesomeResource step definitions for the aforementioned scenario look like this:

When /^I call `create` on an Article model:$/ do |code|
  eval code
end

Then /^the Article model should successfully POST the following JSON to "([^"]*)":$/ do |endpoint, json|
  posts.should include_interaction(
    endpoint: endpoint,
    request_body: json,
    status: "201"
  )
end

When /^I call the `all` method on the Article model$/ do
  Article.all
end

Then /^the rails app should respond to a GET request to "([^"]*)" with the following JSON:$/ do |endpoint, json|
  gets.should include_interaction(
    endpoint: endpoint,
    response_body: json
  )
end

Then /^the `all` method should return the equivalent of:$/ do |code|
  Article.all.should == eval(code)
end

Before the test suite starts, it fires up a Rails server that can respond to CRUD requests for an “article” resource. Notice the `include_interaction` custom matcher? It’s actually iterating over all network requests that have been captured during the execution of the test thus far and finding one that matches the supplied criteria.

In order to capture all of the network interactions, I needed to spy on the network. After googling for a few minutes in vain for a gem that fit the bill, it occurred to me that I could simply use VCR in `record: :all` mode – forcing it to re-record (and thus not persist anything) during every request. With this in place, I can then create an `after_http_request` hook to snag and save each request/response cycle in memory for the test to access later:
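The configuration looks something like this (`default_cassette_options` and `after_http_request` are real VCR configuration APIs; the in-memory `$http_interactions` store is an assumption for illustration):

```ruby
$http_interactions = []

VCR.configure do |c|
  # Re-record everything so nothing is played back from disk:
  c.default_cassette_options = { record: :all }

  # Snag each request/response pair as it completes:
  c.after_http_request do |request, response|
    $http_interactions << { request: request, response: response }
  end
end
```

A custom matcher like `include_interaction` can then simply search `$http_interactions` for a matching request/response pair.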

On my current project, we’ve built two applications: a shrink-wrapped virtual appliance that a user installs into their virtualized datacenter, and a phone-home application that the virtual appliance sends data to over an API. If you’re worried that the rest of this post is irrelevant to you because you’ll never build apps like this, stop – the lessons I’ll present should be useful to anyone doing mobile app development with a web server backend. Just think of our virtual appliance as a mobile application, and our phone-home application as the web server backend.

In development, our acceptance suite for our virtual appliance integrates with a running instance of our phone-home application (with interactions recorded via VCR for reliable playback). For a time, the only CI builds we had were the builds for the two separate applications (with any integrations pre-recorded with VCR).

However, once our application launched to production and had real users, we had to consider the following scenarios:

Does the HEAD of the virtual appliance communicate correctly with the HEAD of the phone-home application?

Do all of the previous releases of the virtual appliance (that we still support) communicate correctly with the HEAD of the phone-home application?

If we released the HEAD of the virtual appliance, would it work with the current release of the phone-home application?

After the virtual appliance pushes all of its data to the phone-home application, do the users of the phone-home application see what we expect them to see?

We first expanded our CI build by one: a build that fires up an instance of our phone-home application on port 3001, then runs the test suite for our virtual appliance without VCR. Actually, we still turned on VCR for all requests other than requests made to our phone-home application (our virtual appliance integrates with lots of other APIs inside a customer’s datacenter that we can’t reasonably set up in a CI environment). However, it’s really simple to tell VCR to ignore only certain requests; for us, we wanted VCR to ignore any requests sent by the virtual appliance to localhost:3001 whenever the “INTEGRATE_WITH_PHONE_HOME” environment variable is present:
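Something along these lines (`ignore_request` is a real VCR API; the host/port check is our convention):

```ruby
VCR.configure do |c|
  c.ignore_request do |request|
    uri = URI(request.uri)
    ENV["INTEGRATE_WITH_PHONE_HOME"] &&
      uri.host == "localhost" && uri.port == 3001
  end
end
```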

Whenever this CI build runs the virtual appliance’s test suite, it simply has to set the “INTEGRATE_WITH_PHONE_HOME” environment variable before running rake. Without this build, it was too easy for us to change our phone-home application and forget to verify that we didn’t inadvertently break our virtual appliance’s ability to properly send data to it.

We’ve now got an automated answer to #1: “Does the HEAD of the virtual appliance communicate correctly with the HEAD of the phone-home application?” And even better, it was simple to extend this solution to answer questions #2 and #3. It’s simply a matter of checking out different revisions of the phone-home application and the virtual appliance and rerunning this same build to verify compatibility.

Everything I’ve presented up to now should apply to anyone developing a mobile application and an API backend. Question #4, however, is a bit more specific to our particular application. The reasons are complicated, but suffice it to say, it was possible for us to change our phone-home application in such a way that, although we didn’t affect the ability of any version of the virtual appliance to successfully send data to it, we affected the ability of a user to see the results inside the phone-home application. Thus, we had to create an all-new build that, after pushing data from a version of the virtual appliance, would then log into the phone-home application and verify that it could, in a very basic sense, see what we expected it to see.

CI builds, particularly ones like these that integrate different applications together, don’t come without a cost. You have to weigh the burden of building and maintaining both the systems and the code that runs the builds against the confidence they give you that you haven’t unwittingly broken anything. We were lucky enough to have a team member that had experience building an application that needed very similar builds and could help us cost the options.

CloudFoundry has launched a private beta, and if you’re lucky enough to have access to it, you can get free hosting and services for the next 2 months.

I recently gained access to it and deployed a Rails application to it. The CloudFoundry documentation is still a work in progress, so I thought a blog post on deploying a Rails application to CloudFoundry might prove helpful to anyone attempting this.

Unlike Heroku, CloudFoundry makes no assumptions about version control. You can push whatever you’ve got locally to CloudFoundry with `cf push`. This is great if you find yourself in the position where you’d like to push build artifacts to CloudFoundry that you’d rather not check into version control.

Since you’re deploying a Rails app, you’ll want to use an as-yet undocumented feature of CloudFoundry, the .cfignore file. Think of it like .gitignore for CloudFoundry. You’ll want to add tmp and log to it. At the moment, CloudFoundry doesn’t recognize that you’re attempting to push a Rails app and automatically ignore tmp and log, and if you’ve got an application you’ve been developing for more than a few days, you may have some sizable logs that you’d prefer to avoid uploading.

$ cat > .cfignore
tmp/
log/

You’re now ready to push with the “cf push” command. It will ask you all kinds of questions, like what you want your domain name to be, how many instances you want to run, how big you want those instances to be, etc. You’ll have the chance to select services like postgres (via elephantsql).

When you run the `cf push` command, you can also specify the command you want to use to run your application:

$ cf push --command 'bundle exec rails s -p $PORT'

At the moment, there’s no way to run one-off commands like rake tasks other than through the startup command. For example, if you wanted to migrate and seed your database on deploy, you’d need to set the start command as follows:
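For example, something like this (a sketch, assuming the standard db:migrate and db:seed rake tasks):

```shell
$ cf push --command 'bundle exec rake db:migrate db:seed && bundle exec rails s -p $PORT'
```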

When it’s done, it will ask you if you want to save your answers to a manifest.yml file. The CLI defaults to “no” for this answer, but you’ll want to say “yes” and check the manifest.yml into your version control. This way, it will remember all of your answers and automatically use your manifest.yml file next time you run “cf push”.

Note that if something goes wrong with your app, you can run the “cf logs” command to view your application logs.

If you want to run a worker like resque or sidekiq, you’ll need to create a new app instance on CloudFoundry, bind it to the same services as your current app, and tell it to start your workers on app boot. There’s a nice tutorial on setting up workers in the CloudFoundry docs here.

But basically, this is as simple as adding another application to your manifest.yml:
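A sketch of what that two-application manifest might look like (the app names and the service name are assumptions; note the blank url on the worker):

```yaml
---
applications:
- name: myapp
  memory: 256M
  instances: 1
  url: myapp.cfapps.io
  command: bundle exec rails s -p $PORT
  services:
  - myapp-postgres
- name: myapp-worker
  memory: 256M
  instances: 1
  url:
  command: bundle exec someworkercommand
  services:
  - myapp-postgres
```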

Now you have two applications in a single manifest, each bound to the same services. We’ve set the start command on our worker to “bundle exec someworkercommand”. Replace “someworkercommand” with sidekiq or resque or whatever worker system you’re using. Notice that we left the url for the worker blank (do this if there’s no endpoint in that instance that you want anyone accessing).

You’re a product owner, and you have an idea. In your mind, it’s pure, simple, and beautiful. You want to hold onto that idea, to nurture it, cherish it. Reality, however, has other plans. You’ve got investors. You’ve got a team of hungry developers and designers licking their lips. You have to feed them your idea. So you say it. And immediately the illusion is shattered. The designers and the developers start asking questions. They force you to come up with concrete examples. You start breaking the idea down into features. One developer comes up with an edge case you hadn’t considered, forcing you to rethink your idea entirely. BAP!

You’re a designer. You’ve worked hard for the past several months to create a holistic design and a sane, coherent user experience in what has turned out to be a very challenging problem domain. And then your product owner walks in one morning and explains that the product is going in a new direction. Your entire user experience is shattered. You find yourself picking apart the workflows, trying desperately to put them back together in a way that still makes sense. KAPOW!

You’re a developer. You are proud of the application you’ve built. Your classes are well-factored. Your domain is well-modeled. Your class names make sense to even the newest person on the team. Your patterns are so well established that you can build on an existing feature by simply throwing in a new subclass. Your test suite is fast. You’re ready for anything. Except what your product owner just said. You didn’t see this coming. Your code is woefully unprepared to accommodate this new feature. You’ll have to rethink at least a dozen objects in order to implement it. ZLONK!

Building software is hard and painful, but this is what we know:

Good stories make good code.

Writing stories is a collaborative process between the product owner, the developers, and the designer.

A story can’t implement half a feature; if you can’t put it in front of a user and get meaningful feedback, it’s not a story.

Stories start with business value. If you can’t identify what value your business will get out of the story, your business has no business with it.

“Rails is slow, but Rails tests are slower.” Rails may be slow, but I’m here to tell you that it’s likely you have only yourself to blame for your slow test suite.

I’ve seen some bad test suites in my day. I was once pulled onto a rescue project that had a total build time of over 24 hours. The health and future of your production code depends on a fast, reliable test suite. On my current project, we believe in fast feedback, and we’ve focused a lot of attention on our test suite. It’s paid dividends.

We’re six months into a project, and we have a one-minute test suite. That includes both rspec and cucumber (and the time to boot up rails for both). Here’s how we’ve accomplished it:

1. Pyramid

Your test suite is a pyramid. Skinny at the top (journey acceptance tests), fat at the bottom (unit tests). Conceptually, this sounds right, and it’s easy to think that it should just naturally fall out of the BDD outside-in process.

But in reality, it’s not a given. Tools like cucumber make it so easy to spin up new high-level acceptance tests for every edge case you can think of. Taken to an extreme, this can lead to unit-testing at the browser level. That doesn’t mean you were wrong to drive out exceptional paths at the acceptance level. But you should ask yourself this question before you check in: “Will I have any less confidence in my test suite if I don’t check in this acceptance test?” If you have already covered a happy path of a feature, and at least one exceptional path of a feature at the acceptance level, do you then need to cover every other exceptional case at the acceptance level? If you’ve added tests down at lower levels in your stack, then you might already have all the verification value you need.

2. Grooming

We groom our test suite. Not every day. But we watch it and keep it in order. We listen to pain in our test suite (“Why did that change cause so many other tests to break? And why did I have to go to every single test and fix each one of them individually?”). When we revisit old tests, we reconsider them in light of what we know about our application. We keep an eye out for duplicate tests.

3. Refactoring

Our fast test suite practically begs us to refactor. And it turns out, refactoring often leads to simpler designs, objects with fewer methods (and therefore fewer tests), as well as objects that are easier and more natural to test in isolation. Great tests will lead you to refactor your production code, which will lead to even better tests.

4. ActiveRecord Containment

How many tests require you to create data in the database? Ideally, that’s limited to just your acceptance tests and your activerecord model tests. But the reality is never that simple. Although it’s technically possible to accomplish this with any Rails application, it’s not always feasible, or even desired. Libraries you use may make this difficult in some circumstances (e.g., devise).

You might also find yourself in a situation where testing an object in isolation would lead to a very brittle test that knows the entire implementation via stubs and mocks. Worse, you may not see a way to untangle these dependencies. That’s OK. Test the object’s behavior, even if it means integrating with other objects. As your understanding of your application’s problem domain crystallizes, and as more patterns begin to emerge, you will eventually find a way to simplify it, to untangle the dependencies.

5. GC.disable

At one point on our project, our build time crept up to nearly two minutes. We saw ourselves slipping down a very slippery slope. We started running it less, refactoring less, and even finding less motivation to keep our test suite clean. So we threw a chore at the top of our backlog to bring the test suite time back down to a minute. Our PM was naturally wary, asking us to timebox the chore to an hour or two, but we made the case that this was of critical importance to the future of the project, and that the pair working on it would check with the rest of the team after a day.

Near the end of the day, the pair stopped and told the team that they thought they had done all they could, but it only got the test suite down to 1:20. Shaving forty seconds off a test suite in a day is a fantastic feat, and when we looked at what they’d done, we were really impressed. I jokingly suggested disabling GC. And then they actually did it. The test suite time dropped to 48 seconds. I’m not recommending you do this. This is a bit of a nuclear option. If this is the first thing you try to improve your test suite time, then you are missing the point of bringing your test suite time down. But if you feel like you’ve done everything you can to legitimately bring down the time of your test suite, then consider disabling garbage collection. Ruby GC is a beast, with the potential of turning a linear algorithm that creates N objects into a quadratic algorithm. Weigh the pros and cons, and decide for yourself if you can live with the dirty dirty feeling this will give you.
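If you do go nuclear, the minimal version is just a couple of lines in spec_helper.rb (a sketch; you may also want to trigger `GC.start` manually between batches of examples to keep memory in check):

```ruby
RSpec.configure do |config|
  config.before(:suite) { GC.disable }
  config.after(:suite)  { GC.enable }
end
```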

Uncle Bob Martin said that at some point, or something very close to it. I’d like to take that a step further: I like to make my (Rails) controllers so dumb, there’s no reason to test them (unless there’s no higher level acceptance test that would exercise the actions in them).

Complicated controllers are painful. Each controller action is like a mini main() function. That’s a lot of main()s. The more each action knows about your underlying application, the more brittle your application gets. Think of an action as a launching point into your business domain.

I have some rules (that I occasionally break). It goes something like this:

1) Controllers should be RESTful. No custom actions. new, create, update, destroy, edit, show. That’s it.
2) Controllers should manage a single resource. If you’re instantiating more than a single object in your controller, you’re probably going to regret it.
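To make that concrete, here’s a sketch of the kind of controller these rules produce (`Widget` and `widget_params` are illustrative names; `respond_with` comes from the responders gem mentioned below):

```ruby
class WidgetsController < ApplicationController
  respond_to :html

  # The action knows only: build the one resource, save it, respond.
  def create
    @widget = Widget.new(widget_params)
    @widget.save
    respond_with @widget
  end

  private

  def widget_params
    params.require(:widget).permit(:description)
  end
end
```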

There are some great gems out there that can support this restrictive approach to Rails development:

1) Responders (https://github.com/plataformatec/responders). Responders are built into Rails, but the FlashResponder in the responders gem is essential. Replace all of your tedious flash message management code with defaults that can be overridden in your localization files. You can create your own responders to replace any tedious bookkeeping in your controllers.

2) Informal::Model (https://github.com/joshsusser/informal). Since you’re limited to instantiating a single object in your controllers, it’s likely that you won’t get by with just ActiveRecord models. You’ll need to create higher level models that can coordinate the work of all your lower level database models (and action mailers, etc.). Heads up, Rails 4 will obsolete this gem with ActiveModel::Model.

3) ActiveModel::Serializer (https://github.com/rails-api/active_model_serializers). This only applies if you’re creating a JSON API. But if you are, consider this gem. It’s convention over configuration for your API. It works with responders and makes all of the JSON format choices for you so that you can focus on more important things.

I’m a YAGNI’ist. I’m vigilant against over-engineering. I will seek out and destroy over-engineered, anticipatory, predictive designs. This wasn’t always the case; early on in my career, I was quite the opposite. I realize now that I suffered from a lack of confidence, and that BDD and extreme programming give us the power to deal with any problem that arises, WHEN it arises.

The Pivots I’m currently working with on a project are very much of the same mindset. So how did we manage to build a crash-proof application that can persevere in the face of extreme catastrophe, when that was never a goal?

Let me start at the beginning. We were tasked with developing an application that an IT administrator could install into his or her datacenter. After some initial configuration, the application would spider the datacenter, gathering all kinds of data about it. This application would then phone home the data to another application on the Internet, where the customer could review it. It was basically your standard ETL application, with some very non-standard data-sources.

At the very beginning, everything about this application was synchronous. The user would fill out a form, click a button, and wait. While waiting, the application would hit various bits of infrastructure in their datacenter, massage the data, and then phone it back home. Instead of making this customer wait for the form to submit, we could have backgrounded this process right off the bat. But we weren’t collecting quite enough data at first to warrant it. And creating a more graceful user experience wasn’t as high on the priority list as other features.

The number of data points we collected started to grow, and at some point, we decided the customer had to wait too long. A minute or two was OK, but 5 minutes? 10 minutes? Unacceptable. We were risking losing customers. So we bit the bullet and backgrounded it. But we took no steps at that point to deal with transitory network failures. Remember, this process is collecting data from other pieces of infrastructure on their network, and phoning that data back home over the Internet. Before (when everything was synchronous), if something went wrong, the user could always resubmit the form. In the new user experience, this was no longer possible. They submitted the form and were instantly presented with a message informing them that data collection was proceeding and to come back later.

We could have written code right at that moment that would anticipate failures. But here’s the rub: we’d experienced no failures up to this point in any of our testing. What should we expect to fail? Spidering their infrastructure? All of it? Or were certain aspects of their infrastructure more likely to become unresponsive than others? Or should we expect the phone-home application to stop responding? Anything could fail at any point, but writing code to be resilient in the face of any type of failure is expensive.

More importantly, we had no story telling us to anticipate failures. And we knew that the cost of writing code that could prepare for any type of failure was prohibitive. We made the case to our product owner to wait.

As the application grew, so did the code, and so did the number of data points we were collecting. The collection phase took longer and longer, and eventually we started seeing occurrences of failed collections. Not every failure was alike. They happened for different reasons; some we could even control or at the very least curtail (e.g., failures due to rate limiting).

And now we had real stories, driven out by real-world experiences, that we could prioritize against new feature work. Dealing with failures in this way allowed our code to grow over time, to respond to likely failures, while ignoring unlikely ones. Had we attempted to engineer a crash-proof application at the beginning, the results would have been disastrous. But this way, not only did our code evolve in a much more organic and sustainable manner, our understanding of the different types of failures grew over time, giving everyone on the team a better understanding of the technologies our application interacted with.

Today, our application is incredibly resilient. Short of a nuclear bomb, you cannot stop this application from completing its ETL. The code is well-factored, readable, and maintainable. And we gradually built in that robustness while still delivering new features. WINNING

The year is 2005. I’m one year out of school, and a year into a job doing PHP web development at a small development firm in Dallas. A co-worker tells me jokingly about extreme programming. He laughs about the absurdity of pair programming and writing tests. Another developer goes rogue and develops an application in something called “Ruby on Rails.” He’s learning the framework at the same time he’s developing the application. I boast that I could have done it in half the time in PHP. That developer takes a job for a company in Seattle doing Ruby on Rails. I spend another frustrating year at the company in Dallas.

Fast forward four years to 2009. I’m now living in Manhattan. Burned out on PHP, I’ve spent the last year doing freelance Ruby on Rails development. I love it. It’s everything I wanted out of a language and framework. It solves all of the common, tedious problems that I faced developing in PHP, and lets me focus on developing my applications.

I land a job at a giant multi-national corporation. The team I’m working with is agile. They hold standups. They have a certified scrum master. They do two week iterations (on projects with fixed scope and hard deadlines). They estimate stories (in hours) (that they write themselves). They write tests (sometimes) (after they write their production code).

It was the best thing that had ever happened to me. And it was hard. And it was painful. We attempted to be agile within an organization of six-sigma blackbelts that would spend 18 months defining a process for defining processes. We didn’t know what we were doing half the time. We got a lot of stuff wrong. But we cared. And we learned. And we got better.

Fast forward three years. I’m still working for that giant company. I get an email from Pivotal Labs. A co-worker warns me, “You know they pair all the time.” I’m a little frightened. I go in for a day-long pairing interview. It’s amazing. I learn more in that one day than I had learned in the last year of work. I pair with developers smarter than me, better than me, and more experienced than me. I take the job.

Every day at Pivotal is like that first day, but even better. I learn something new every single day. I work with other engineers absolutely committed to developing great code. We give new meaning to the word “consultant”, giving our clients not just advice, but pairing with them to act on that advice (and course-correcting when things go wrong). We pair program. We test drive. We collaborate on story specification. We start each week looking at our priorities and estimating our stories. We end each week reflecting in a retrospective, talking about what’s working, what’s not, and what we should do about it. We take breaks throughout the day. We play ping-pong. We’re relentless about self-improvement, team-improvement, project-improvement, and company-improvement. And I’ve never been happier. I’ve never had the privilege of working with such a talented, passionate group of people.

We want to change the way the world develops software. You can too. Do the right thing. Do what works. Be kind.

I’ve had it. I’ve had the misfortune to need ActiveResource (an HTTP client library that gives you an ActiveRecord-like API for interacting with RESTful services) off and on for several years now. Even when it’s worked, it’s never worked well.

Let’s start with the way you configure it. You know how you can use a database.yml file to specify your ActiveRecord connections for different environments? Wouldn’t you expect ActiveResource to work similarly? I mean, it seems unlikely that you’d actually connect to the exact same server endpoint in test, development, and production, right? Too bad. ActiveResource gives you a single way to set a model’s “site”: an attr_writer on the model’s singleton:
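Here’s a minimal sketch of what that API shape looks like, and the kind of environment-switching you’re forced to hack in yourself. (The Person model and URLs are hypothetical, and the class is boiled down to just the relevant writer; it’s not ActiveResource’s actual source.)

```ruby
# Sketch of ActiveResource-style configuration: a single class-level
# accessor holds the endpoint as global state shared by everything.
class Person
  class << self
    attr_accessor :site # one site, for every environment and thread
  end
end

# The only knob you get -- typically set once, right next to the class:
Person.site = "https://api.example.com"

# Environment-specific endpoints have to be hacked in by hand, e.g.
# by switching on the Rails environment at boot time:
Person.site =
  case ENV.fetch("RAILS_ENV", "development")
  when "production" then "https://api.example.com"
  when "test"       then "https://staging.example.com"
  else                   "http://localhost:4567"
  end
```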

That global state isn’t a good sign. Even after you hack in your own environment-specific connection code, do you think your model will be thread-safe? Hell no it won’t. If you try to use your models in a threaded environment (e.g., in threaded background worker systems like Sidekiq), you’ll eventually run into a race condition on the model’s singleton “connection” attribute. And your code will raise an exception. Fun.
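Here’s a contrived sketch of the hazard (class name and URLs hypothetical): two threads that each need a different endpoint race on the same class-level attribute, and whichever assignment lands last wins for both.

```ruby
# Two workers race on a shared, mutable, class-level "site" -- the
# same shape of global state an ActiveResource singleton holds.
class ApiModel
  class << self
    attr_accessor :site
  end
end

observations = Thread::Queue.new
threads = 2.times.map do |i|
  Thread.new do
    ApiModel.site = "https://api-#{i}.example.com"
    sleep 0.01 # window in which the other thread can reassign site
    observations << [i, ApiModel.site]
  end
end
threads.each(&:join)

# Both threads now read whichever site was assigned last, so one of
# them will talk to the wrong endpoint (or blow up trying).
```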

Let’s talk about JSON. Everyone loves JSON, right? ActiveResource is an old library; when it was originally written, XML was in vogue. The ActiveResource XML support is very mature. Its JSON support? Broken. That’s right, it’s broken. Has been for years. Sending nested attributes over JSON does the wrong thing. There’s a fix that was merged in a year ago that will be released with Rails 4. In the meantime you can use my “activeresource_json_patch” gem.

Let’s look at the ActiveResource code. It’s a great example of Stunt Programming. Once, when attempting to determine how I might monkey-patch ActiveResource to allow me to set a lambda as the “site”, I stumbled into this method:
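The real method is longer, but a simplified sketch of the technique looks something like this. (This is paraphrased from memory, not ActiveResource’s exact source; the placeholder-substitution and error handling are omitted.)

```ruby
# Stunt programming: the writer metaprograms new definitions of
# prefix_source and prefix onto the class each time it's called,
# baking the new value directly into the method bodies.
class Resource
  # Default readers, replaced wholesale the first time prefix= runs.
  def self.prefix_source
    "/"
  end

  def self.prefix(_options = {})
    prefix_source
  end

  def self.prefix=(value = "/")
    # Redefine the reader methods with the value interpolated in.
    instance_eval <<-RUBY, __FILE__, __LINE__ + 1
      def prefix_source() "#{value}" end
      def prefix(_options = {}) "#{value}" end
    RUBY
  end
end

Resource.prefix = "/admin/"
Resource.prefix        # => "/admin/"
Resource.prefix_source # => "/admin/"
```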

And there you have it. The prefix= method redefines the prefix_source and prefix methods, thereby avoiding the infinite recursion. FACEPALM

All right, enough complaining. Taken at face value, ActiveResource isn’t actually all that bad. If your needs are incredibly simple, it will likely do the job. And I’ve actually tried to improve the ActiveResource ecosystem over the years. I released a gem that dealt with the nested-attributes-over-JSON bug in the interim until the fix ships. I created another gem that made environment-specific site configurations possible. But in the end, I’ve just had it. The code’s a mess. The library is half-forgotten. It’s time for a reboot.

An agile retrospective is a safe space. We can reflect on the week, think through our victories and defeats, and say what we feel. If your team is a ship, the retrospective is the captain. The captain looks at where you’ve been, thinks about where you’re going, and course-corrects as necessary. Without a retrospective, you’ll likely end up blindly sailing straight into a hurricane.

The retrospective is a critical departure from our day to day process of pair programming. When we pair, we’re focused on the problem directly in front of us. We have to put two minds together to solve that problem, and this requires a surprisingly intense amount of focus. It’s not a process that lends itself to introspection and reflection.

The departure from that day-to-day into a retrospective leads to an interesting communication challenge. Consider this: when we pair, we solve problems by thinking out loud and arriving at solutions collaboratively, organically. Minds meld. Buy-in is a given.

But in a retro, we might arrive at a conclusion without consulting anyone else. And when we present it, the team may disagree, but it’s not always obvious how to resolve that dispute.

Let me give you a real example. Once, on a project, a new client developer joined the team. He was a very experienced, highly competent developer, but he was new to pairing, TDD, XP. A couple weeks in, he stated in a retro, “I think we’re testing just for the sake of it.”

As you can imagine, this ruffled some feathers. We instinctively disagreed, but without knowing the context of his statement, the argument basically boiled down to this:

“No, we’re not.”

“Yes, we are.”

“No, we’re not.”

“Yes, we are.”

Clearly, we needed to know more. And then something happened. Another client developer on the project who had been on it for several months said, “Can you give us some concrete examples of tests that lead you to this conclusion?”

And then everything became clear. Once he described what he was reacting to in the test suite, we were able to home in on the real issue. The new client developer was seeing tests at the acceptance layer that provided very minimal, common path verification. He wanted to see more edge cases fleshed out, but didn’t yet understand the relationship between high-level integration verification and low-level unit testing.

Whenever you hear a conclusion in a retro that you don’t agree with, ask “Why?” Try to understand what led them to their conclusion. You can argue with someone until the cows come home, but without understanding the context of their statements, you’re not likely to reach any sort of resolution.

I hated Rails helpers. I saw them as dumping grounds; one-off procedural aberrations in a sea of objects. But I didn’t just complain. I acted. I created the “frill” gem, an implementation of the decorator pattern that I extracted from a project that actually needed a decorator pattern in its view layer.

I had another project that seemed ideal for “frill”. It was a data analytics application that required a good deal of manipulation of the raw numeric data points for presentation.

if the underlying datapoint exists, render the human sized number, semantically and visually distinguishing the number from the units

if the value is at this point not nil, render the number and its units red or green depending on whether or not it’s healthy

if the underlying datapoint is nil, render a “N/A” message

We implemented the logic using “frill”. Our views looked like a paragon of simplicity:

= environment.disk_space

We had cleverly hidden all of the complexity in a series of modules that frill would dynamically decorate onto our models:

module HumanSizer
  include Frill

  def disk_space
    HumanSizedNumber.new(super)
  end
end

module Renderer
  include Frill
  after HumanSizer

  def disk_space
    if super
      render "human_sized_number", number: super
    else
      render "not_available"
    end
  end
end

# etc...

The frill library took care of stacking all of the decorators together, extending the objects at runtime with the relevant modules.

So what went wrong?

Indirection. If these were the only two modules in the frills directory, then perhaps it wouldn’t have mattered. But with no obvious sign pointing from the view or controller to the relevant modules, you were often left scratching your head (or jumping into a debugger session) to determine the thread of execution when something went wrong.

Rigidity. The framework worked well for 90% of cases. But what about the other 10% of the time, when you need to alter the presentation stack in some small but subtle way? It turned out to be hard to remove existing decorations, or to reorder the stack.

The latter problem proved especially tenacious. And lacing the decorators with all kinds of conditionals to handle the random one-off cases where such-and-such decorator didn’t apply helped nothing.

HELPERS – STATELESS, COMPOSABLE, FLEXIBLE

When the frill library didn’t work out, we tried using helpers – and discovered that Rails helpers were the answer we’d been seeking all along. We created simple helper methods that we could stack together in pretty much any way our presentation demanded.

It dawned on me that there’s really just a simple rule to follow with helpers: keep them stateless. If they’re simple, stateless methods that always return the same output for a given input, then they’re easy to test and easy to stack in new and interesting ways. You can even create higher-order functions quite easily by taking advantage of the “method” method for turning a method into an object:
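Here’s a hedged sketch of the idea (the helper names are hypothetical, not from the actual project): stateless helpers plus Object#method, which wraps a named method in a first-class Method object.

```ruby
# Stateless helpers: pure functions that always return the same
# output for a given input, so they're trivial to test and stack.
module NumberHelpers
  module_function

  def humanize(number)
    number >= 1_000_000 ? "#{number / 1_000_000}M" : number.to_s
  end

  def with_units(text, units)
    "#{text} #{units}"
  end
end

# The "method" method turns a helper into an object you can pass
# around and compose like a lambda:
humanize = NumberHelpers.method(:humanize)

humanize.call(1_500_000)       # => "1M"
[1_500_000, 42].map(&humanize) # => ["1M", "42"]

# And because each helper is stateless, they stack freely:
[1_500_000].map(&humanize).map { |n| NumberHelpers.with_units(n, "GB") }
# => ["1M GB"]
```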