For a team that usually has a four-month turnover, this allows us to quickly get new team members up to speed with how to contribute to MarkUs. We review every change that they propose, and give them tips and guidance on how to make it fit in well with the application. They learn, and the application's code stays healthy.

We catch defects before they enter the code base. Simple as that.

We get a good sense of what other people are working on, and what is going on in the code. Review Board has become a central conversation and learning hub for the developers on the MarkUs team.

So, the long and the short of it: I like Review Board. Review Board helps us write better code. I want to make Review Board better.

So what am I proposing?

How to Avoid A Bloated Software Monster

You can never make some people happy.

No matter how decent your software is, someone will eventually come up to you and say:

Wow! Your software would be perfect if only it had feature XYZ! Sadly, because you don’t have feature XYZ, I can’t use it. Please implement, k thx!

And so you either have to politely say “no”, and lose that user, or say “yes”, and add feature XYZ to the application. And for users out there who don’t need, or don’t care about feature XYZ, that new feature just becomes a distraction and adds no value. Make this happen a bunch of times, and you’ve got yourself a bloated mutha for a piece of software.

And we don’t want a bloated piece of software. But we do want to make our users happy, and provide feature XYZ for them if they want it.

So what’s the solution? We provide an extension framework (which is also sometimes called a plug-in architecture).

An extension framework allows developers to easily expand a piece of software to do new things. So, if a user wants feature XYZ, we (or someone else) simply create and make available an extension that implements the feature. The user installs the extension, activates it, and bam – our user is happy as a clam with their new feature.
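
In code terms, the core mechanism is just a registry of hook points that extensions attach callbacks to. Here's a minimal sketch in Python (Review Board itself is a Django app) – the class and hook names below are my own invention for illustration, not Review Board's actual extension API:

```python
# A minimal, hypothetical sketch of an extension framework: the core
# application exposes named hook points, and extensions register
# callbacks against them. Nothing here is Review Board's real API.

class ExtensionRegistry:
    def __init__(self):
        self._hooks = {}

    def register(self, hook_name, callback):
        """Attach an extension's callback to a named hook point."""
        self._hooks.setdefault(hook_name, []).append(callback)

    def fire(self, hook_name, *args, **kwargs):
        """Invoke every callback registered for this hook point."""
        return [cb(*args, **kwargs) for cb in self._hooks.get(hook_name, [])]


# A toy "feature XYZ" extension: it contributes a banner whenever the
# core app renders a review page. Users who never install it pay no cost.
registry = ExtensionRegistry()
registry.register("review_page_rendered",
                  lambda review_id: f"XYZ banner for review {review_id}")

# Somewhere in the core application's rendering path:
extras = registry.fire("review_page_rendered", 42)
# extras → ["XYZ banner for review 42"]
```

The key design property: the core only knows about hook points, never about individual extensions, so feature XYZ lives entirely outside the core code base.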

And if we make it super-easy to develop them, third-party developers can write new, wonderful, interesting extensions to do things that…well, we wouldn’t have considered in the first place. It’s a new place for innovation. What’s that old cliché?

If you build it [the plug-in framework], they will come [the third-party developers who write awesome things]

And the developers do come. Just look at Firefox add-ons or WordPress plugins. Entire ecosystems of extensions, doing things that the original developers would probably have never dreamed of doing on their own. Hell, I’ve even written a Firefox add-on. And users love customizing their Firefox / WordPress with those extensions. It adds value.

So we get wins all over the place:

Our user gets their feature

The software gets more attractive because it’s flexible and customizable

The original software developers get to focus on the core piece of software, and let the third-party developers focus on the fringe features

Review Board Extensions

It would be nice if the review board had a “next comment” button that is always available to click, or had a collapse/expand button. This would make it easier to see other people’s comments in cases like this.

…

It will be nice to have post-commit support. Instead of every post-commit review being a separate URL, if we could setup default rules for post-commit reviews to update an existing review providing the diff-between-diff features, it would be very useful.

The Review Board developers could smell the threat of bloated feature-creep from a mile away. So, in a separate branch, they began working on integrating an extension framework into Review Board.

The extension branch, however, has been gathering dust, while the developers focus on more critical patches and releases.

My GSoC proposal is to finish off a draft of the extension framework, document it, and build a very simple extension for it. My simple extension will allow me to record basic statistics about Review Board reviewers – for example, how long they spend on a particular review, their inspection rate, etc.

Having been a project lead on MarkUs for so long, it’s going to be a good experience to be back on “the bottom” – to be the new developer who doesn’t entirely have a sense of the application code yet. It’s going to be good to go code spelunking again. I’ve done some preliminary explorations, and it’s reminding me of my first experiences with MarkUs. Like a submarine using its sonar, I’m slowly getting a sense of the code terrain.

I’ve been accepted into Google Summer of Code this year – I’ll be working on Review Board. Details about my project will be the subject of an upcoming post, which I will toss up shortly.

I may or may not be co-directing a radio play. I’ll let you know.

The MarkUs team is about to release version 0.7, and a fresh batch of Summer students will soon be here at UofT to work on it!

I have not forgotten about the UCDP trip to Poland. I still have to tell you what we saw and did at Auschwitz. Cripes – it’s almost a year since I returned, and I’m only half-way through the whole story. And there’s a ton more to tell. Coming soon.

If this is really going to be my research project, I’ll need to get my feet a bit more wet before I design my experiment. It’s all well and good to say that I’m studying author preparation…but I need to actually get a handle on what authors tend to say when they prepare their review requests.

So how am I going to find out the kinds of things that authors write during author preparation? The MarkUs Project and the Basie Project both use ReviewBoard, so it’ll be no problem to grab some review requests from there. But that’s a lot of digging if I do it by hand.

So I won’t do it by hand. I’ll write a script.

You see, I’ve become pretty good at manipulating the ReviewBoard API. So mining the MarkUs and Basie ReviewBoard instances should be a cinch.

But I’d like to go a little further. I want more data. I want data from some projects outside of UofT.

Luckily, ReviewBoard has been kind enough to list several open source projects that are also using their software. And some of those projects have their ReviewBoard instances open to the public. So I just programmed my little script to visit those ReviewBoard instances, and return all of the review requests where the author of the request was the first person to make a review. Easy.
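
The selection rule the script applies – keep a review request only if its own submitter was the first person to post a review on it – is simple to express once the data is in hand. Here's a sketch; the dict fields are my own simplification of what the ReviewBoard API returns, not its exact payload:

```python
# Hypothetical sketch of the author-preparation filter. The dicts are
# simplified stand-ins for ReviewBoard API payloads: each review request
# has a 'submitter', each review a 'user' and a 'timestamp'.

def is_author_prepared(review_request, reviews):
    """True if the request's own submitter posted the earliest review."""
    if not reviews:
        return False
    first_review = min(reviews, key=lambda r: r["timestamp"])
    return first_review["user"] == review_request["submitter"]


# Toy data: request 1 was "author-prepared" (alice reviewed her own
# request first); request 2 was not (dave reviewed carol's request).
requests = [
    ({"id": 1, "submitter": "alice"},
     [{"user": "alice", "timestamp": 10}, {"user": "bob", "timestamp": 20}]),
    ({"id": 2, "submitter": "carol"},
     [{"user": "dave", "timestamp": 5}]),
]

prepared = [rr["id"] for rr, revs in requests if is_author_prepared(rr, revs)]
# prepared → [1]
```

With the filter in hand, the rest of the script is just fetching each public instance's review requests and their reviews, then running every pair through it.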

Remember how I wrote a while back that I wanted to write a script to let me do some quick and easy pre-commit continuous integration with the MarkUs project?

Well, I think I just wrote one.

Introducing TestDrive…

TestDrive will fetch a review request, grab the latest diff (yes, I found an easy way past the lack of API there), check out a fresh copy of MarkUs, throw down the diff, set it up with some SQLite3 databases, run your tests, and voila – go to localhost:3000, and you’re running the review request diff.

I’ve been using it myself for about a week or so, and so far, it’s helped me catch a number of bugs that I wouldn’t have caught just by looking at the code in ReviewBoard. Nice.

It’s a lot harder than I thought it’d be. The screencasts are really only useful if I’m saying what I’m thinking, and I’m finding it difficult to maintain stream of consciousness and perform an effective/thorough review. The last few times I’ve tried it, I find myself blurting an expletive, stopping the recording in frustration, and then starting the review over so that I can do a good, proper job.