Hut 8 Labs

Earlier this year we at Hut 8 Labs were working onsite with a client
who didn’t have their own code review system. Since a life without
code reviews just isn’t worth living for us, we found ourselves
emailing diffs back and forth to each other, with messages like “about
halfway through the diff you do X, maybe you should do Y?” Eventually
we even started inserting comments right in the attached diffs
themselves—comments like “EWJ: RENAME THIS VARIABLE OR DIE IN A
FIRE!!!”—which worked surprisingly well, except that:

it was easy to miss comments and replies in large diffs, even when
the comments were all caps and followed by multiple exclamation points

it was a pain to co-ordinate reviews and replies from even two other people

it was a pain to track down the actual source lines a comment
referred to, which meant an unpleasantly high activation energy for
applying small fixes and suggestions

So we created diffscuss—a code review format based on unified diffs,
with editor support for threaded inline comments, basic review
management and git integration, and (best of all) support for jumping
right from a comment to the local source it addresses, without ever
leaving the comfort of Emacs (or, because Hut 8’s own Matt Papi is a
Vimmortal, Vim).
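To give a feel for the core trick—attaching a comment thread to a specific line of a diff, so an editor can jump from the comment to the source it addresses—here's a minimal sketch. The "#*" and "#-" comment markers below are assumptions made up for illustration, not necessarily diffscuss's actual syntax; see the GitHub repo for the real format.

```python
# Hypothetical sketch: attach each embedded comment line to the nearest
# preceding non-comment diff line, which is the source line it discusses.
# The "#*" (header) / "#-" (body) prefixes are illustrative assumptions.

def comment_targets(diff_lines, markers=("#*", "#-")):
    """Yield (comment_line, target_line) pairs, where target_line is the
    closest non-comment diff line above the comment."""
    target = None
    for line in diff_lines:
        if line.startswith(markers):  # str.startswith accepts a tuple
            yield line, target
        else:
            target = line

# A toy commented diff (file names and content are made up):
diff = [
    "--- a/billing.py",
    "+++ b/billing.py",
    "@@ -10,3 +10,3 @@",
    "-tmp = total()",
    "+tmp = total_with_tax()",
    "#* author: ewj",
    "#- Rename this variable or die in a fire!",
]

for comment, target in comment_targets(diff):
    print(target, "<-", comment)
```

Because the comments ride along inside the diff itself, an editor mode only needs this kind of nearest-preceding-line lookup (plus the hunk header's line numbers) to jump from a comment straight to the local source.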

We’ve been using diffscuss for about 6 months now, and we’ve been
happy enough with it that we figure it’s time to share it with the world.

Check it out at GitHub or
read on for an example of diffscuss in action.

I’m going to talk today about what goes on inside developers’ heads when
they make estimates, why that’s so hard to fix, and how I personally figured
out how to live and write software (for very happy business owners) even though
my estimates are just as brutally unreliable as ever.

But first, a story.

It was the <insert time period that will not make me seem absurdly old>,
and I was a young developer [1]. In college I had aced coding exercises; as
a junior dev I had cranked out code to solve whatever problems someone
specified for me, quicker than anyone expected. I could learn a new language
and get productive in it over a weekend (or so I believed).

And thus, in the natural course of things, I got to run my own project. The
account manager explained, in rough form, what the client was looking for, we
talked it out, and I said, “That should be about 3 weeks of work.” “Sounds
good,” he said. And so I got to coding.

Here’s a glitch in my thinking that I realized on a recent job: I am
too terrified of monkeys, and not sufficiently afraid of gorillas. As
a result, I’ve been missing opportunities for early, smart investments
to make my systems more resilient in the Amazon cloud.

By “monkey” and “gorilla” I mean “Chaos Monkey” and “Chaos Gorilla,”
veterans of Netflix’s Simian Army. You can browse the entire
list [1], but for easy reference:

Chaos Monkey is the personification (simianification?) of EC2
instance failure.

Chaos Gorilla represents major degradation of an EC2 availability
zone, henceforth “AZ” for short (or, as we sometimes referred to
them at my last job, “failability zones”).

I believe that startups should (mostly) worry less about EC2 instances
failing, and more about entire AZs degrading. This leads to a
different kind of initial tech/devops investment—one that I believe
represents a better return for most early-stage companies and products.