Pivotal Labs » John Barker
Agile Development
http://pivotallabs.com

Ahead of the curve
Sat, 30 Mar 2013
http://pivotallabs.com/ahead-of-the-curve/

This is the third and final part in a series I’m writing about lessons that can be learned from functional programming. Find the first part here and the slightly inflammatory second part here.

So, long story short. This series of posts was not intended to be a diatribe against all forms of code that are not functional. Nor a rallying call for Pivotal Labs to abandon its traditional platforms in favor of writing everything in Haskell and Yesod. Instead, it was meant to begin with a playful poke at things we consider sacred, some exploration of the benefits of functional programming and what we can learn from emerging technologies and ideas. I feel that Pivotal has always been ahead of the curve in its technology choices and that it would be wise for us to give this crazy old concept a bit of a try.

Unfortunately, I’m leaving Pivotal. So instead I’m going to link you to some interesting articles and blog posts on the subject that say everything I could hope to say, only much better.

Firstly, the paper “Why Functional Programming Matters” is a great introduction to FP in general. It speaks about FP in terms of the benefits of first-class functions and how they enable composition and modularity. Out of the several (somewhat new) FP languages available, Clojure is the one that interests me the most. This great article (by former Pivot David Jacobs) makes an excellent case for applying the language to production problems.

Letter to myself as a junior developer
Mon, 18 Mar 2013
http://pivotallabs.com/letter-to-myself-as-a-junior-developer/

Hi John, it’s me: future you. You think you know everything about writing software, and you’ve been told this same thing by countless others. But take it from me: you know so little you don’t even know what you don’t know.

You think you’re pretty clever. You always use a language’s cutting-edge features or get the most done with the fewest lines. There’s one problem with clever code – somebody else has to read it. That someone else is not somebody who just doesn’t ‘get’ your genius. It’s future you. You see: if it was hard to write, it’s even harder to read.

Exhaust all blame on yourself before you blame someone else. It’s far more likely that the code you just wrote has problems in it than that the industry-standard, heavily tested and heavily used code is at fault.

Your professors will consistently insist on excessive comments; you’ll get marks for every method that has an appropriate comment describing how it works, and you’ll spend time writing nauseatingly large comments at the beginning of every file. But they’re wrong: most comments are a waste of time. What’s the best way to communicate with other developers? Naming, and self-documenting, intention-revealing code. Naming is hard, but spending a little time thinking about appropriate and consistent names almost always pays off. Instead of comments, make sure each procedure, object or module has a single purpose, and name it to reveal what that purpose is. This doesn’t mean abandoning comments entirely, but treat them as a last resort.
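A contrived before-and-after sketch of what I mean (the names are hypothetical, not from any real codebase):

```ruby
# Comment-dependent: the name says nothing, so a comment has to.
# Checks whether the user can be billed (active with a card on file).
def check(user)
  user[:active] && !user[:card].nil?
end

# Intention-revealing: the name documents the purpose by itself.
def chargeable?(user)
  user[:active] && !user[:card].nil?
end
```

The second version needs no comment at all; the name carries the intent to every call site.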

Don’t swear in the code – one day your boss will find it. Or it’ll appear in a demo, in front of the customer. Make kitten references instead, everybody loves kittens.

Don’t be so judgmental when you read someone else’s code. You won’t always know what conditions it was written under, what the requirements were or what the state of code was when those changes were made. Maybe it was written under duress or maybe it solves a problem you just haven’t encountered yet. As they say in the movie business: where were you when the page was blank?

Read the source code. If it’s available it’s usually easier to understand than you think and it’s almost always more accurate than the documentation.

Don’t be afraid to remove code. You may have worked hard on the code, but you have to remember that all code requires maintenance and all code rots. More often than not if you need it back you’ll understand the problem better and write it better. Realizing this can be liberating.

Make interfaces that are easy to use correctly and hard to use incorrectly.

Understand every single line of code you write. If it’s there, it has a purpose and you need to be certain it’s doing the right thing. It can be tempting to sometimes copy code from examples or put things in until it just works. But if you make even the tiniest, trivial mistake – the computer has a license to do whatever it wants to.

Automate everything. If you have to do something more than a couple of times, take a step back and see if there is a script you could write or a tool that could make those steps reliable, repeatable and most satisfyingly of all: fast.

Research first. Someone has usually solved the problem before and they’ve probably done a better job.

Most of all, be aware of what you don’t know: seek knowledge, practice deliberately and expand your horizons beyond what you’re comfortable with. These are some of the lessons I’ve learnt over the years that weren’t taught to me in any computer science course; I had to learn them for myself. They seem obvious, almost self-evident at times, but they require work. Perhaps, once you appreciate them, you will no longer be a junior developer.

Start small and compose: A strategy for using FactoryGirl
Wed, 06 Mar 2013
http://pivotallabs.com/how-to-use-factorygirl-effectively/

While I’m still not entirely sold on FactoryGirl, I often see it being used in a particularly lazy way. Imagine your basic factory:

factory :project

Before long you’re adding relations to projects, and the first thing people do is this:

factory :project do
  association :user
end

This immediately means that every single time you instantiate a project, you’re getting a user as well. In most cases this is more than you want, and if you continue to follow down this path you end up with a huge slow test suite. I prefer a slightly different strategy: start small and compose.

factory :project do
  ...
end

trait :with_manager do
  association :user, factory: :manager
end

This defines a very simple factory :project which gives you only a project and allows you to build a project with an associated user like so:

FactoryGirl.create(:project, :with_manager)

The result is a couple more arguments when you use the factory, but the overall code is more intention-revealing.

If this is too much, you could always create a more descriptive factory:

factory :managed_project, parent: :project, traits: [:with_manager]

If you stick with this strategy, you’ll find that tests are more concise, factories are more useful and your test suite run time won’t grow as fast.

All evidence points to OOP being bullshit
Fri, 22 Feb 2013
http://pivotallabs.com/all-evidence-points-to-oop-being-bullshit/

This is the second part in a series I’m writing about lessons that can be learned from functional programming. Find the first part here.

Object Oriented Programming (OOP) as an idea has been oversold. The most commonly used languages today are designed around the idea of OOP. Extremist languages like Java force you to think of everything in terms of objects. But is Object Orientation (OO) a good idea? Does it have problems? Is it the right tool for everything? Let’s explore some of these questions in a slightly tongue-in-cheek and cathartic rant.

Imperative vs Declarative

The object-oriented model makes it easy to build up programs by accretion. What this often means, in practice, is that it provides a structured way to write spaghetti code. — Paul Graham

Procedural programming languages are designed around the idea of enumerating the steps required to complete a task. OOP languages are equally imperative: they are still about giving the computer a sequence of commands to execute. What OOP introduces are abstractions that attempt to improve code sharing and security, but in many ways the result is still essentially procedural code.

Declarative languages, on the other hand, are about describing computation. While a declarative language necessarily maps down to imperative code, the resulting code often reveals less incidental complexity and can sometimes be much more easily parallelized.
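Ruby itself can show the flavor of the difference (a toy example of my own):

```ruby
# Imperative: enumerate the steps and mutate an accumulator.
def doubled_evens_imperative(numbers)
  result = []
  numbers.each do |n|
    result << n * 2 if n.even?
  end
  result
end

# Declarative: describe the computation as a transformation pipeline.
def doubled_evens_declarative(numbers)
  numbers.select(&:even?).map { |n| n * 2 }
end
```

The declarative version carries less incidental detail: no accumulator and no loop bookkeeping, and each stage of the pipeline could in principle be parallelized.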

State

The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle. — Joe Armstrong

State is not your friend; state is your enemy. Changes to state make programs harder to reason about, harder to test and harder to debug. Stateful programs are harder to parallelize, and this is important in a world moving towards more units, more cores and more work. OOP languages encourage mutability, non-determinism and complexity.

I was initially hostile to the idea that state is the root of all problems and greeted it with skepticism. But mutating state is so easy and so fundamental in OOP that you often overlook how often it happens. If you’re invoking a method on an object that’s not a getter, you’re probably mutating state.
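A small Ruby sketch of my own to make the point:

```ruby
# Any method that isn't a getter is probably a mutation in disguise.
class Account
  attr_reader :balance   # getter: safe to call, no state change

  def initialize(balance)
    @balance = balance
  end

  def withdraw(amount)   # not a getter: silently mutates @balance
    @balance -= amount
  end
end

account = Account.new(100)
account.withdraw(30)
account.balance  # => 70; the object's history now matters when reasoning about it
```

To know what `balance` returns, you have to know every mutating call that came before it — exactly the kind of implicit environment Armstrong’s gorilla is carrying.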

Nouns and Verbs

Java is the most distressing thing to happen to computing since MS-DOS. — Alan Kay

The typical college introduction to OOP starts with a gentle introduction to objects as metaphors for real-world concepts. But few real-world OOP programs consist entirely of nouns; they’re filled with verbs masquerading as nouns: strategies, factories and commands. Software, as a mechanism for directing a computer to do work, is primarily concerned with verbs.

OOP programs that exhibit low coupling, high cohesion and good reusability sometimes feel like nebulous constellations: hundreds of tiny objects all interacting with each other, sacrificing readability for changeability. Many OOP best practices are, in fact, encouraged naturally by functional programming languages.

Inheritance vs composition

Object-oriented programming is an exceptionally bad idea which could only have originated in California — Edsger W. Dijkstra

Inheritance is one of the primary mechanisms for sharing code in an OO language. But this idea is so problematic that even the keenest advocates of OO discourage the pattern. Inheritance forces you to define the taxonomy and structure of your application in advance, with all its connections and intricacies. That structure is resistant to change, and coping with change is one of the primary problems software developers face every day. Inheritance also fails to model some pretty fundamental concepts.
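The classic penguin problem makes this concrete (a toy Ruby sketch, not from any real codebase):

```ruby
# Inheritance: the taxonomy is fixed up front. A Penguin "is-a" Bird,
# so it inherits fly whether we like it or not.
class Bird
  def fly
    'flap'
  end
end

class Penguin < Bird
end  # oops: Penguin.new.fly works

# Composition: declare capabilities piecemeal; rearranging them later
# doesn't mean re-plumbing a whole hierarchy.
module Flying
  def fly
    'flap'
  end
end

module Swimming
  def swim
    'paddle'
  end
end

class Duck
  include Flying
  include Swimming
end

class Rockhopper
  include Swimming  # no flying capability mixed in
end
```

With modules, a flightless bird simply doesn’t include `Flying`; with inheritance, you’re stuck overriding or restructuring the tree.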

Further reading

This is far from an exhaustive list of the criticisms leveled at OOP. While I believe the problems with OOP are extensive, I do think it is a valuable mechanism for developing software. But it is certainly not the only one. The biggest problem in my mind is this:

When people overcome the significant hurdle of fully appreciating OOP, they tend to apply it to every problem. OOP becomes the solution, and every problem looks like a nail.

Using ActiveRecord with multiple databases
Thu, 14 Feb 2013
http://pivotallabs.com/using-activerecord-with-multiple-databases/

At Pivotal I’ve been working on a project which uses two databases. Doing some quick searching, we came up with a rather naive solution, this quick mixin:

module SecondDatabaseMixin
  extend ActiveSupport::Concern
  included { establish_connection "db2_#{Rails.env}" }
end

It didn’t become obvious what was wrong with this until we added one ...

Why you should care about functional programming.
Wed, 30 Jan 2013
http://pivotallabs.com/why-you-should-care-about-functional-programming/

I’ve been experimenting with functional programming (FP) languages for a little while now, and their acceptance is generally increasing amongst the wider developer community. This is the first post in a series of articles exploring FP: what it is and what we can learn from this trend.

Why now?

Functional programming languages are by no means new. LISP, often thought of as one of the first programming languages, was developed in 1958. IPL, a sort of assembler-like symbolic language, appeared in 1954. The foundations of FP are embedded in mathematical constructs much older than the concept of an electronic computer.

But for all intents and purposes they disappeared into obscurity, replaced by procedural languages like C and Object Oriented (OO) languages like C++ and Java. This was partially because they were impractical: some of the constructs used by FP languages performed worse than their imperative counterparts, and in a time when CPU power was scarce that overhead was not an option.

A different age

Thanks to Moore’s law, this has changed somewhat. Computers are certainly more powerful now than they were when the first LISP interpreter appeared, but the ever-scaling MHz race has slowed. New CPUs come with more cores, not more cycles. To take advantage of these improvements we need to look to parallelization, and FP languages are inherently suited to this kind of work.

Facebook and GooglePlus Javascript SDK sign in with Devise + RoR
Fri, 18 Jan 2013
http://pivotallabs.com/facebook-and-googleplus-javascript-sdk-sign-in-with-devise-ror/

Recently I added a modal sign in and sign up dialog to a Rails application that allowed for sign in using Facebook or Google as well as via email. This dialog can appear any time a user attempts to perform a protected action, allowing them to sign in and continue without losing any data.

To make this work I had to implement Google and Facebook sign in using the new JavaScript SDKs provided for both platforms. The old-style authentication flow redirects on success, which means any in-memory session state is completely lost: forms are cleared, event handlers are rebound and any work in progress has to be done again.

Before I explain the difficulties we had getting this to work, I’ll briefly describe how OAuth works with respect to Devise.

Typical OAuth workflow

Most sites implement this kind of workflow by opening a popup window pointing at their OAuth request URL (e.g. /oauth/facebook/). This sets up the initial state of the session using a cookie and redirects to the provider’s login page, passing along a callback URL to which the user will be redirected once they’ve authenticated.

If authentication is successful, an OAuth token is generated, stored in a cookie, and the user is redirected to the callback URL. The callback URL hits the Devise stack, which confirms the token is genuine by asking Facebook to verify it. If everything checks out, execution dips into your application code and the user is created or looked up by some identifiable piece of information.

The JavaScript SDKs are different: both APIs provide a simple method which expects a callback. The callback is executed indicating whether authentication was successful, and it’s up to you what you do with that information. This allows for greater flexibility and smoother transitions into an authenticated step.

Getting it to work with Devise

There are a number of gotchas with using the client side approach, some of them related to possible bugs and my difficulty in interpreting just how the OAuth devise gems work.

OAuth Gem Version

OmniAuth OAuth2 version 1.1 recently introduced CSRF validation for the authentication workflow. Unfortunately, this breaks client-side validation since there is no request component.
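The lock in your Gemfile ends up looking something like this (the version numbers here are illustrative, not a recommendation):

```ruby
# OmniAuth OAuth2 1.1 adds CSRF validation to the auth workflow, which
# breaks JS SDK (client-side) sign in: there is no request phase to seed
# the CSRF state. Keep these below 1.1 until that changes.
gem 'omniauth', '~> 1.0.0'
gem 'omniauth-oauth2', '~> 1.0.0'
```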

The comments will give any future reader of your Gemfile an indication of what they need to do to lift the version restrictions on the omniauth dependencies.

Google OAuth2 Token Validation

The current omniauth-google-oauth2 gem will try to validate your access token with a different server and request format to the one required by the new JavaScript SDK. For the time being you can use our forked version here: https://github.com/pivotal-geostellar/omniauth-google-oauth2/tree/client_login

Both of these calls hit the OAuth callback endpoint to verify the access tokens obtained by the user. If authentication succeeds you’ll get the typical devise+OAuth workflow and a session omniauth.auth cookie with the appropriate details.

Rename all snake cased coffeescript files to CamelCase
Thu, 13 Dec 2012
http://pivotallabs.com/rename-all-snake-cased-coffeescript-files-to-camelcase/

While I don't necessarily think CamelCase is the best way to name your coffeescript files, I was unhappy that we weren't consistent on my project.

I was going to write a pure find/sed/mv script, but found that Mac OS X's sed doesn't support text transforms like \L, so I finally delved into ruby -n:
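The core of the rename is the case conversion: CamelCase only the basename, leaving the extensions intact. A sketch of that logic (the helper name, glob pattern and use of git mv below are my assumptions, not the original script):

```ruby
# Convert a snake_case basename to CamelCase, keeping extensions intact,
# e.g. "my_fancy_view.js.coffee" -> "MyFancyView.js.coffee".
def camel_case(filename)
  base, extensions = filename.split('.', 2)
  [base.split('_').map(&:capitalize).join, extensions].compact.join('.')
end

# Usage (hypothetical, run from the project root):
#   Dir.glob('**/*_*.coffee').each do |path|
#     renamed = File.join(File.dirname(path), camel_case(File.basename(path)))
#     system('git', 'mv', path, renamed)
#   end
```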

Cleaning old branches
Thu, 11 Oct 2012
http://pivotallabs.com/cleaning-old-branches/

We’re using Github pull requests on our project, which means whenever a pull request is accepted, a branch is left lying around.

So I wrote a quick script to remove all remote branches that have been merged into develop (our working branch; you’ll have to alter the first instance of ‘develop’ to ‘master’ if you use a more typical git branching model).
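A sketch of the approach in Ruby (the filter pattern and the use of git push --delete are my reconstruction, not the original script):

```ruby
# Turn the output of `git branch -r --merged origin/develop` into a list
# of remote branch names that are safe to delete. Mainline branches and
# the symbolic HEAD entry are filtered out.
def stale_branches(listing)
  listing.lines.map(&:strip)
         .reject { |name| name =~ /develop|master|HEAD/ } # never delete the mainline
         .map    { |name| name.sub('origin/', '') }
end

# Usage (hypothetical, run from the repository root):
#   stale_branches(`git branch -r --merged origin/develop`).each do |branch|
#     system('git', 'push', 'origin', '--delete', branch)
#   end
```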

The Healthy Gemfile
Thu, 06 Sep 2012
http://pivotallabs.com/the-healthy-gemfile/

Often when working on Ruby projects that use Bundler, I see Gemfiles that look like this:
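Something along these lines, with every gem pinned to an exact version (the gem names and versions are purely illustrative):

```ruby
source 'https://rubygems.org'

# Every gem frozen to whatever version happened to be current
# when it was added.
gem 'rails', '3.2.8'
gem 'devise', '2.1.2'
gem 'nokogiri', '1.5.5'
gem 'haml', '3.1.7'
```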

The string on the right-hand side of each gem entry is a fixed version specification. If you ask Bundler to update any of these gems, it will make a bit of noise, but the gems listed will essentially stay the same.

The typical reason for structuring a Gemfile like this is to prevent changes in dependent software from causing compatibility issues, or to reduce the chance of bugs or unexpected behaviour.

This strategy is problematic for several reasons: it keeps your project stale, makes it difficult to maintain overall project security and worse yet, can provide a false sense of security. There is a much better and simpler way of writing a Gemfile that will preserve the health and consistency of your dependencies.

Problems with this approach

The first and most obvious problem is that your application will quickly become out of date. The Ruby community moves very quickly and introduces changes quite frequently which means that if you freeze your gems, you may find it very difficult to upgrade later. This problem can be compounded when a security patch is made and the version of the gem you're using is no longer supported.

A more subtle problem is that the gems listed in your Gemfile have dependencies, and those dependencies may not necessarily be required with as strict a version specification. If you do a bundle update, some of those dependencies could change and break your application.

If you're the more conservative type, and you're developing an application that might be in use for some time, you may also be aware that the gems you depend on might not be available forever. They could be removed from the repository, or even altered in a way that breaks your app. If this concerns you there is a much better solution.

A better way of managing your gems

Use Gemfile.lock to document required versions

The true manifest of gem versions is the file Gemfile.lock, which Bundler updates any time your gem set changes. This file should be kept in source control, so that whenever you or your collaborators run bundle install, the exact versions of every gem are installed.

Document dependency problems

If a particular gem version breaks your project, by introducing a bug or a change to its API, lock it using the appropriate modifier.

Typically an API change is only introduced in a major version bump (e.g. 3.x.x becomes 4.x.x). You can make sure a gem stays reasonably up to date but doesn't cross into the next major revision using a pessimistic restriction like '~> 3.1' (note that '~> 3.1.1' is stricter still, allowing only 3.1.x patch releases).

If the next minor version introduces a bug which breaks your project, lock the gem to a specific revision (e.g. '3.1.1').

Whenever you restrict a gem version, document why! Sometimes the errors caused by a dependency change can be quite obscure and waste significant time. I like to leave a comment like so:

# TODO: Remove version lock when this bug http://github.com/project/issues/311
# is fixed. It breaks the transmogrification adaptor because of a missing method.
gem 'descartes', '3.1.1'

Use tests to drive out dependency issues

The best indication that a gem has broken your project, or needs to be managed more carefully, is a test suite with good coverage. With good coverage, particularly integration tests, you can be confident that whenever you do a bundle update everything still works.

If your gems are kept up to date most of the time and you use source control it will be quickly obvious which version changes introduced a bug.

Be hesitant to specify a version restriction

And finally, don't specify a version restriction in your Gemfile without a very specific and well-understood reason. It can often be tempting to simply list the version of the gem available at the time, or to lock versions out of habit if you come from a more conservative background. A healthy Gemfile has few version restrictions, explains clearly the ones it has, and comes with a lockfile for quick deployment and development.
