For too long, the state of the OpenStack Chef world had been one of duplicative effort, endless forks of Chef cookbooks, and little integration with how many of the OpenStack projects choose to control source code and integration testing. Recently, however, the Chef + OpenStack community has been getting its proverbial act together. Folks from lots of companies have come together and pushed to align efforts to produce a set of well-documented, flexible, but focused Chef cookbooks that install and configure OpenStack services.

My sincere thanks go out to the individuals who have helped make progress in the last couple of weeks, to the folks on the upstream openstack.org continuous integration team, and, of course, to the many authors of cookbooks whose code and work is being merged together.

OK, so what’s happened?

StackForge now hosting a set of Chef cookbooks for OpenStack

Individual cookbooks for each integrated OpenStack project have been created in the StackForge GitHub organization. Each cookbook name is prefixed with cookbook-openstack- followed by the OpenStack service name (not the project code name): cookbook-openstack-common, cookbook-openstack-compute, cookbook-openstack-identity, cookbook-openstack-image, cookbook-openstack-block-storage, cookbook-openstack-object-storage, cookbook-openstack-network, cookbook-openstack-metering, and cookbook-openstack-dashboard.

Note that we have not yet created the cookbook for Heat, but that will be coming in the Havana timeframe, for sure. Also note that the Ceilometer (metering) cookbook is empty right now. We’re in the process of pulling the ceilometer recipes out of the compute cookbook into a separate cookbook.

Finally, there will be another repository called openstack-chef-repo that will contain example Chef roles, data bags, and documentation showing how all the OpenStack and supporting cookbooks are tied together to create an OpenStack deployment.

Code in cookbooks gated by Gerrit like any other OpenStack project

The biggest advantage of hosting all these Chef cookbooks on the StackForge GitHub repository is the easy integration with the upstream continuous integration system. The upstream CI team has built a metric crap-ton (technical term) of automation code that enabled us to quickly have Gerrit managing the patch queues and code reviews for all these cookbook repositories as well as have each repository guarded by a set of gate jobs that run linter and unit tests against the cookbooks.

The rest of this blog post explains how to use the development and continuous integration systems when working on the OpenStack Chef cookbooks housed in StackForge.

Prepare to develop on a cookbook

OK, so you want to start working on one of the OpenStack Chef cookbooks? Great! You will need to install the git-review plugin. The easiest way to do so is to use pip:

sudo pip install git-review

The first thing you need to do is clone the appropriate Git repository containing the cookbook code and set up your Gerrit credentials. Here is the code to do that:
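A sketch of that code (the git:// URL layout is an assumption based on the StackForge organization described above; verify it before relying on it):

```shell
# Clone a cookbook and wire up the Gerrit remote.
SERVICE=common   # or compute, identity, image, block-storage, ...
git clone git://github.com/stackforge/cookbook-openstack-${SERVICE}.git
cd cookbook-openstack-${SERVICE}
git review -s    # creates a remote named "gerrit" for review.openstack.org
```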

Of course, replace $SERVICE above with one of common, compute, identity, image, block-storage, object-storage, network, metering, or dashboard. That will clone the upstream StackForge repository for the corresponding cookbook to your local machine, change into the cloned repository, and set up a git remote called “gerrit” pointing to the review.openstack.org Gerrit system.

Start to develop on a cookbook

Now that you have cloned the upstream cookbook repository and set up your Gerrit remote properly, you can begin coding on the cookbook. Remember, however, that you should never make changes in your local “master” branch. Always work in a local topic branch. This allows you to work on a branch of code separately from the local master branch you will use to pull in changes from other developers.
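The workflow can be sketched like so (shown in a throwaway repository for illustration; in practice you would run the checkout inside your cookbook clone):

```shell
# Stand-in for your cookbook clone: a throwaway repository.
cd "$(mktemp -d)" && git init -q .

# Create and switch to a topic branch; the branch name is just an example.
git checkout -b bug/1234-fix-readme

# Confirm you are no longer on master before making changes.
git symbolic-ref --short HEAD   # prints: bug/1234-fix-readme
```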

Once you have checked out your topic branch, you can add, edit, delete, and move files around as you wish. When you have made the changes you want, you then need to commit the changes in your working tree to source control.

IMPORTANT NOTE: If you created any new files while working in your branch, you will need to tell Git about those new files before you commit. An easy way to check whether you have added any new files that should be under Git source control is to always call git status before doing your commit. git status will tell you if there are any untracked files in your working tree that you may need to add to Git:
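Here is a sketch of that check, using a throwaway repository and a hypothetical new recipe file:

```shell
# Throwaway repository standing in for your cookbook clone.
cd "$(mktemp -d)" && git init -q .

# A hypothetical new file that Git knows nothing about yet.
touch new-recipe.rb

git status --porcelain   # "?? new-recipe.rb" -- untracked
git add new-recipe.rb    # tell Git about the new file
git status --porcelain   # "A  new-recipe.rb" -- staged for commit
```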

As you can see, I edited the README.md file, and the call to git status shows that file as modified. If you want to review the changes you made, use the git diff command:

jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git diff
diff --git a/README.md b/README.md
index 4fbbd57..e197d3b 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,8 @@ of all the settable attributes for this cookbook.
 Note that all attributes are in the `default["openstack"]` "namespace"
+TODO(jaypipes): Should we list all the attributes in the README?
+
 Libraries
 =========

If you are happy with the changes, you’re now ready to commit those changes to source control. Call git commit, like so:

git commit -a

This will open up your text editor and present you with an area to write a commit message describing the contents of your patch. Commit messages should be properly formatted and abide by the upstream conventions. Those conventions are worth reading in full, but here is a brief rundown of things to keep in mind:

Make the first line of the commit message 50 chars or less

Separate the first line from the rest of the commit message with a blank newline

Make the commit message descriptive of what the patch is and what the motivation for the patch was

Do NOT make the commit message into a list of the things in the patch you changed in each revision — we can already see what is contained in the patch
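Putting those rules together, a conforming commit message might look like this (the subject line, body, and bug number are all hypothetical):

```
Render ceilometer.conf rabbit settings correctly

The metering recipe was writing the rabbit_host setting into the
wrong section of ceilometer.conf, so the agent could never connect
to the message queue. Move the setting into the DEFAULT section.

Fixes bug 1234567.
```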

Save and close your editor to finalize the commit. Once successfully committed, you now need to push your changes to the Gerrit code review and patch management system on review.openstack.org. You do this using a call to git review.

When you issue a call to git review for any of the cookbooks, a patch review is created in Gerrit. Behind the scenes, the git review plugin is simply doing the call for you to git push to the Gerrit remote. You can always see what git review is doing by passing the -v flag, like so:

As you can see in the above output, a new patch review was created at the location https://review.openstack.org/29797. You can go to that URL and see the patch review, shown here before any comments or reviews have been made on the patch.

Reviewing patches with Gerrit

Gerrit has three separate levels of reviews: Verify, Code-Review, and Approve.

The Verify (V) review level

The Verify level is limited to the Jenkins Gerrit user, which runs the automated tests that protect each cookbook repository’s master branch. These automated tests are known as gate tests.

When you push code to Gerrit, a set of automatic tests is run against your code by Jenkins. Jenkins is a continuous integration job system that the upstream OpenStack CI team has integrated into Gerrit so that projects managed by Gerrit may have a series of automated check jobs run against proposed patches to the project. Below, you can see that the Jenkins user in Gerrit has already executed two jobs — gate-cookbook-openstack-common-chef-lint and gate-cookbook-openstack-common-chef-unit — against the proposed code changes. Both jobs pass, as expected, since I haven’t actually changed any code, only added a blank file and a line to the README file.

If the Jenkins jobs fail, you will see Jenkins issue a -1 in the V column in the patch review. Any -1 from Jenkins as a result of a failed gate test will prevent the patch from being merged into the target branch, regardless of the reviews of any human.

The Code-Review (R) review level

Anyone who is logged in to Gerrit can review any proposed patch in Gerrit. To log in to Gerrit, click the “Sign In” link in the top right corner and log in using the Launchpad Single-Signon service. Note: This requires you to have an account on Launchpad.

Once logged in, you will see a “Review” button on each patch in the patchset. You can see this Review button in the images above. If you were the one who pushed the commit, you will also see buttons for “Abandon”, “Work in Progress”, and “Rebase Change”. The “Abandon” button simply lets you mark the patchset as abandoned and gets the patch off the review radar. “Work in Progress” lets you mark the patchset as not ready for reviews, and “Rebase Change” is generally best left alone unless you know what you’re doing.

Each file (including the commit message itself) has a corresponding link from which you can view the diff of that file and add inline comments, similar to how GitHub pull requests allow inline commenting. Simply double-click directly below the line you wish to comment on, and a box for your comments will appear, as shown below:

IMPORTANT NOTE: Unlike GitHub inline commenting on pull requests, your inline comments on Gerrit reviews are NOT viewable by others until you finalize your review by clicking the “Review” button. Your comments will appear in red as “Draft” comments on the main page of the patch review, as shown below:

To put in a review for the patch, click the “Review” button. You will see options for:

+1 Looks good to me, but someone else must approve

+0 No score

-1 I would prefer you didn’t merge this

If you are a core reviewer, in addition to the above three options, you will also see:

+2 Looks good to me (core reviewer)

-2 Do Not Merge

There is also a comment area for you to put your review comments in, directly above an area that shows all the inline comments you have made:

After selecting the review +/- that matches your overall thoughts on the patch, and entering any comment, click the “Publish Comments” button, and your review will show in the comments of the patch, as shown below:

The Approve (A) review level

Members of the core review team also see a separate area in the review screen for the Approve (A) review level. This level tells Gerrit to either proceed with the merge of the patch into the target branch (+1 Approve) or to wait (0 No Score).

The general rule of thumb is that core reviewers should not hit +1 Approve until two or more core reviewers (which can include the individual doing the +1 Approve) have added a +2 (R) to the patch in reviews. This rule is subject to the discretion of the core reviewer for trivial changes like typo fixes, etc.

Summary

I hope this tutorial has been a help for those new to the Gerrit and Jenkins integration used by the OpenStack upstream projects. Contributing to the Chef OpenStack cookbooks should be no different than contributing to the upstream OpenStack projects now, and additional gate tests — including full integration test runs using Vagrant or even a multi-node deployment — are on our TODO radar. Please sign up on the OpenStack Chef mailing list if you haven’t already. We look forward to your contributions!

Seriously, Ubuntu, upgrading between versions has become just painful… I waited a few weeks before I upgraded from Xubuntu 11.10 to 12.04 because the upgrade last time completely hosed my system and left me with a borked X configuration. This time, I upgraded and now all my keyboard shortcuts don’t work. None of them. I mean, they appear in my Keyboard Settings -> Application Shortcuts, but none of them work anymore. Seriously, WTF.

UPDATE:

Turns out that the issue is that (for some stupid reason) XFCE changed the name of the <Ctrl> key to “Primary”, so you need to go to Accessories -> Settings Manager -> Keyboard -> Application Shortcuts, remove all your custom shortcuts that show <Control> in them, and re-add them. You’ll notice that when you press the Ctrl key, it will now show up as <Primary>. Completely maddening.

Some of that change is quite visible. For example, the dashboard project, code-named Horizon, was entirely overhauled and became a core OpenStack project in the Essex release cycle. The new Horizon is pretty stunning[2], if I may say so myself. Other visible awesomeness comes from Anne Gentle and the dozens of contributors who worked on the new API documentation site. It’s an excellent, and well-needed, resource for the community of developers who want to build applications on OpenStack clouds.

Other innovations weren’t so visible, but were just as impactful. The Swift development team added the ability for objects to expire, the ability to post objects via HTML forms with the “tempurl” functionality, and integration with the authentication mechanism in the OpenStack Identity API 2.0.

Under Vishvananda Ishaya’s continued leadership, contributors to the OpenStack Compute project, code-named Nova, focused on a number of things in the past six months. Notably, on the feature front, floating IP pools and role-based access control were added. A variety of internal subsystems were dramatically refactored, including de-coupling the network service from Nova itself — something critical to scaling the network service with the Quantum and Melange incubated projects — as well as separating the volume endpoint into its own service. In addition, the remote procedure call subsystem was streamlined (again) and the way API extensions are handled in source code was cleaned up substantially. On the performance front, there were numerous bug fixes, but one that stands out is the overhaul of the metadata service that Vish completed. This one patch dramatically improves performance of the metadata service used by things like cloud-init when setting up newly launched instances. You can see the entire list of 53 blueprints implemented and 765 bugs fixed in Nova in the Essex release here. Pretty impressive.

Over in the OpenStack Image Service project, code-named Glance, we focused on performance and stability in this cycle. With a fresh infusion of contributors like Reynolds Chin, Eoghan Glynn and Stuart McLaren, the Glance project made some dramatic improvements. Notably, Reynolds Chin added a visual progress bar to the Glance CLI tool when uploading images, and Stuart McLaren submitted patches that enabled a significant improvement in throughput by starting the Glance API and registry services on multiple operating system processes. Eoghan Glynn fixed a massive number of bugs and added new functionality revolving around external images and having Glance’s API server automatically copy an image from an external datastore. Brian Waldon, Glance’s new PTL (congrats, Sir Glancelot!), added RBAC support and did the heavy lifting of converting Glance’s image identifiers to a UUID format. Check here for the complete list of 11 blueprints implemented and 185 bugs fixed in Glance in the Essex cycle.

The Keystone codebase was entirely rewritten, causing some late cycle turmoil, but the team of contributors working on Keystone is dedicated to improving its stability and functionality in the Folsom release series. The new Keystone design should enable better extensibility and I’m confident the new PTL, Joe Heck, will work actively with contributing organizations to see Keystone make terrific improvements in coming months.

I’m sure there’s lots of names and stuff I’ve neglected to mention and I’ll apologize for that now! Here’s to a great design summit a week from now and a productive and cooperative Folsom release series. Thank you to all the OpenStack contributors. You are what makes OpenStack so special.

[1] In the OpenStack community — as in the Ubuntu community — we publish major releases every six months. We don’t hold up releases for a specific feature; if the feature isn’t completed, it simply goes into the next release when it is code-complete. In my opinion, this is one of the strengths of the OpenStack release model: it is predictable.

[2] What’s more, we can’t wait to introduce the goodness of the Horizon Essex dashboard into TryStack. We aim to get this done before the summit, but more on that in a later blog post.

This past week, the first release candidates of a number of OpenStack projects were released. From this point until the OpenStack Design Summit, we are pretty much focused on testing the release candidates. One way you can help out is to test the release candidate code, and this article will walk you through doing that with the Devstack and Tempest projects.

Setting up an OpenStack Essex RC1 Environment with Devstack

Before you test, you need an OpenStack environment. The easiest way to get an OpenStack environment up and running on a single machine [1] is to use Devstack. To get a version of Devstack that is designed to run against the release candidate branches of the OpenStack projects, simply clone the main repo of Devstack, like so:
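A sketch of that clone (the git:// URL is an assumption about where the devstack repository lived at the time; verify it first):

```shell
# Clone the main devstack repository; the release-candidate branch
# selection happens later, in stackrc.
git clone git://github.com/openstack-dev/devstack.git
cd devstack
```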

Devstack contains a file called stackrc that is sourced by the main stack.sh script to create your OpenStack environment. The stackrc file contains environment variables that tell stack.sh which repositories and branches of OpenStack projects to clone. We will need to change the branches in the stackrc file from master to milestone-proposed to grab the release candidate branches. You will want to change the target branches for Nova, Glance, Keystone, and their respective client libraries. Here is what the default stackrc will look like:

stackrc before...

And here is what it should look like after you change the master branches to milestone-proposed branches appropriately:

stackrc after...
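For reference, the changed portion of stackrc would end up looking something like this (the *_BRANCH variable names are assumptions about the essex-era stackrc, so check yours against the screenshots above):

```shell
# milestone-proposed branches for the core projects and their clients
NOVA_BRANCH=milestone-proposed
GLANCE_BRANCH=milestone-proposed
KEYSTONE_BRANCH=milestone-proposed
NOVACLIENT_BRANCH=milestone-proposed
KEYSTONECLIENT_BRANCH=milestone-proposed
```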

Setting Up Your Devstack localrc for Running Tempest

There are a couple things you will want to put into your Devstack’s localrc file before actually creating your OpenStack environment for testing with Tempest. So, open up your editor of choice and make sure that you have at least the following in your localrc file in Devstack’s root source directory. (If the file does not exist, simply create it)

The first line instructs devstack to disable the default ratelimit middleware in the Nova API server. We need to do this because Tempest issues hundreds of API requests in a short amount of time as it runs its tests. If we don’t do this, Tempest will take a much longer time to run and you will likely get test failures with a bunch of overLimitFault messages.

The other lines simply set the various passwords to an easy-to-remember password “pass” for testing. And the final line is needed currently to set up some services but should be deprecated fairly soon…
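Putting that together, a minimal localrc might look something like this (the option names are assumptions about the essex-era devstack; treat this as illustrative only):

```shell
API_RATE_LIMIT=False   # disable the Nova API ratelimit middleware
MYSQL_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
ADMIN_PASSWORD=pass
SERVICE_TOKEN=pass     # needed (for now) to set up some services
```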

Installing the OpenStack RC1 Environment

Now that you’ve got your devstack scripts cloned and your localrc installed, it’s time to run the main stack.sh script that will install OpenStack on your test machine. After running it — and be patient; on a first run the script can take ten or more minutes to complete — you will see a bunch of output and then something like this:

At this point, you have a fully functioning OpenStack RC1 environment. You can do the following to check into the logs (actually just the daemon output in a screen window):

$> screen -x

Switch screen windows using the <Ctrl>-a NUM key combination, where NUM is the number of the screen window you see at the bottom of your console. Type <Ctrl>-a d to detach from your screen session. The screenshot below shows what your screen session may look like. In the screenshot, I’ve hit <Ctrl>-a 4 to switch to the n-api screen window which is showing the Nova API server daemon output.

screen session showing the Nova API server window...

Testing the OpenStack Essex RC1 Environment with Tempest

The Tempest project is an integration test suite for the OpenStack projects. Personally, in my testing setup at home, I run Tempest from a different machine on my local network than the machine that I run devstack on. However, you are free to run Tempest on the same machine you just installed Devstack on.

Grab Tempest by cloning the canonical repo:

$> git clone git://github.com/openstack/tempest

Once cloned, change directory into tempest.

Creating Your Tempest Configuration File

Tempest needs some information about your OpenStack environment to run properly. Because Tempest executes a series of API commands against the OpenStack environment, it needs to know where to find the main Compute API endpoint or where it can find the Keystone server that can return a service catalog. In addition, Tempest needs to know the UUID of the base image(s) that Devstack downloaded and installed in the Glance server.

Create the tempest configuration file by copying the sample config file included in Tempest at $tempest_dir/etc/tempest.conf.sample:

$> cp etc/tempest.conf.sample etc/tempest.conf

Next, you will want to query the Glance API server to get the UUID of the base AMI image used in testing. To do this, issue a call like so:

Of course, you will want to replace the appropriate parts of the call above with your own environment. In my case above, my devstack environment is running on a host 192.168.1.98 and I’m accessing Glance with an “admin” user in an “admin” tenant with a password of “pass”. Copy the UUID identifier of the image that is returned from the command above (in my case, that UUID is 99a48bc4-d356-4b4d-95d4-650f707699c2).

Now go ahead and open up the configuration file you just created by copying the tempest.conf.sample file. You will see something like this:

You will want to replace the various configuration option values with ones that correspond to your environment. For the image_ref and image_ref_alt values in the [compute] section of the config file, use the UUID you copied from above.
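Using the UUID copied earlier, those lines would end up looking something like this (a sketch of just the relevant section):

```ini
[compute]
image_ref = 99a48bc4-d356-4b4d-95d4-650f707699c2
image_ref_alt = 99a48bc4-d356-4b4d-95d4-650f707699c2
```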

Here is what my fully-replaced config file looks like. Keep in mind, my Devstack environment is running on 192.168.1.98. I’ve highlighted the values different from the sample config…

After you’re done running Tempest — and hopefully everything runs OK — feel free to hit your Devstack Horizon dashboard and log in as your demo user. Unless you made some changes when installing Devstack above, your Horizon dashboard will be available at http://$DEVSTACK_HOST_IP.

If you encounter any test failures or issues, please be sure to log bugs for the appropriate project!

Known Issues

You can try running Tempest with the --processes=N option, which uses the nosetests multiprocessing plugin. You might get a successful test run … but probably not

Likely, you will hit two issues. The first is that you will hit the quota limits for your demo user, because multiple processes will be creating instances and volumes. You can remedy this by raising the quotas for the tenant you are running the compute tests with.

Secondly, you may run into error output that looks like this:

======================================================================
ERROR: An image for the provided server should be created
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/jpipes/repos/tempest/tempest/tests/test_images.py", line 38, in test_create_delete_image
self.servers_client.wait_for_server_status(server['id'], 'ACTIVE')
File "/home/jpipes/repos/tempest/tempest/services/nova/json/servers_client.py", line 147, in wait_for_server_status
raise exceptions.BuildErrorException(server_id=server_id)
BuildErrorException: Server 2e0b78fc-cc98-485b-8471-753778bee472 failed to build and is in ERROR status

I’ve not quite gotten to the bottom of this, but there seems to be a race condition that gets triggered in the Compute API when similar requests are received nearly simultaneously. I can reliably reproduce the above error by simply adding --processes=2 to my invocation of Tempest. I believe there is an issue with the seeding of identifiers, but it’s just a guess. I still have to figure it out. In the meantime, be aware of the issue.

1. You can certainly use devstack to install a multi-node OpenStack environment, but this tutorial sticks to a single-node environment for simplicity reasons.

Today, a project that has been a long time in the making is finally coming to fruition. Back last summer, when I was working at Rackspace, Nati Ueno from NTT PF Labs came up with the idea of establishing a “Free Cloud” — a site running OpenStack that developers using the OpenStack APIs could use to test their software.

The involvement of a number of companies — Dell, NTT PF Labs, Rackspace, Cisco, Equinix and HP Cloud Services — eventually drove the idea of “The Free Cloud” from concept to reality. Though there were many delays, I’m happy to announce that this new OpenStack testing sandbox has now been launched.

The new name of the artist formerly known as The Free Cloud is now called TryStack. The front page at http://trystack.org now houses some information about the effort and a few FAQs. If you’re interested in trying out OpenStack, TryStack.org is right up your alley, and we encourage you to sign up. Instructions for signup are on the front page.

Using TryStack

Once you’ve gone through the TryStack registration process, you will receive a username and password for logging in to the TryStack Dashboard and Nova API. After logging in, you’ll see the user dashboard for TryStack. This dashboard is essentially the stock upstream OpenStack Dashboard — only lightly modified with a couple extensions that Nati wrote to show consumption of Stack Dollars and the TryStack logo.

Stack Dollars are the “unit of currency” for TryStack. When you perform certain actions in TryStack — launching instances, creating volumes, etc — you consume Stack Dollars. Likewise, instances consume Stack Dollars as long as they are running. When you run out of Stack Dollars, you won’t be able to use TryStack services until your Stack Dollars are replenished. Stack Dollars will be replenished on a periodic basis (haven’t quite decided on the interval yet…)

To prevent people from gaming the system or using TryStack as a tool for evil, instances will remain alive for up to 24 hours or until your Stack Dollar balance is depleted. Also, always keep in mind that TryStack should only be used for testing. Under no circumstances should you use it for any production uses. There is no service level agreement with TryStack.

An Automation and Administration Sandbox Too!

In addition to being a useful sandbox for developers using the OpenStack APIs and others interested in seeing what OpenStack is all about, TryStack.org is also a very useful testbed for the work the OpenStack community is doing to automate the deployment and administration of OpenStack environments.

TryStack is deployed using the Chef Cookbooks from the upstream repository, and changes that are needed will be pushed back upstream immediately for consumption by the broad community. We have a limited HA setup for the initial TryStack availability zone and lessons learned from the deployment of these HA setups are being incorporated into an online TryStack Administrator’s Guide that will serve as input for the upstream documentation teams as well.

Roadmap for TryStack

In the next three to six months, we’re planning to bring on-line at least one more availability zone. The next availability zone will be running HP hardware and will be housed in a datacenter in Las Vegas. It is likely that this new zone will be deployed with the Essex release of OpenStack components, enabling users to test against both a Diablo-based OpenStack installation and an Essex-based installation.

This first availability zone does not contain an installation of Swift. Of course, we want to change that, so an installation of Swift is definitely on the roadmap for either the next availability zone or as a separate service itself. Note that, just like the instances launched in TryStack, objects stored in a TryStack Swift cluster would be temporary. After all, TryStack is for trying out OpenStack services, not for providing a free CDN or storage system!

We will also eventually move towards a different registration process to accommodate non-Facebook users. If you are interested in helping with this effort, please do let us know.

Finally, we’ll be adding things like an automated Twitter status feed for each zone, lots of documentation gathered from running TryStack, and hopefully a number of videos showing usage of TryStack as well as common administrative tasks — all with the goal of providing more and better information to the broad and growing OpenStack community. I fully expect numerous hiccups and growing pains in these first couple months of operation, but we promise to turn any pain points into lessons learned and document them for the benefit of the OpenStack community.

Please do check out the trystack.org service. We look forward to your feedback. You can find us on Freenode.net #trystack. Nati and I will be hosting a webinar February 23, and we’ll be speaking at a San Francisco meetup March 6 if you’re interested in learning more or getting involved.

Update: I totally goofed and left Cisco off the list of donor organizations. My apologies to Mark Voelker and the excellent folks at Cisco who provided two 4948-10GE switches that are in use in the TryStack cloud. I also got the link wrong to HP Cloud… which is pretty lame, considering I work for HP. That’s been corrected.

Do you know Python? Do you get a thrill breaking other people’s code? Do you have experience with Chef, Puppet, Cobbler, Orchestra, or Jenkins? Have you ever deployed or worked on highly distributed systems? Do you understand virtualization technologies like KVM, Xen, ESX or Hyper-V?

If you answered “Yes!” to any of the questions above and are interested in working in a distributed, high-energy engineering team on solving complex problems with cloud infrastructure software, I want to hear from you. Experience with OpenStack is a huge plus.

I’m looking for QA software engineers, software deployment and/or automation engineers and software developers that can hit the ground running and make a big impact from Day One. Feel free to email me at REVERSE('moc.liamg@sepipyaj').

I figured I’d write a quick post about how to deal with “pep8 issues” that come up during code reviews on OpenStack core projects. These issues come up often for new contributors, and it can be a source of frustration until the contributor understands how to diagnose and fix the issues that come up.

PEP8 is the Python PEP that deals with a recommended code style. All core (and periphery Python) OpenStack projects validate that new code pushed to the source tree is “pep8-compliant”. When a new patchset is pushed from code review to Jenkins for the set of automated pre-merge tests, the pep8 command-line tool is run against the new source tree to ensure it meets PEP8 code style standards.

If this PEP8 Jenkins job fails, the code submitter will see a notification that the job failed, and the contributor must fix up any pep8 issues and push those fixes up for review again. Typically, this notification looks something like this:

There are a couple ways you can diagnose what style points your code violated. Probably the easiest and fastest is to just follow the link in the notification email to the Jenkins job. Clicking the link above, I get to the Jenkins job page, which looks like this:

Clicking on the graph, I get to a details screen showing the source files and lines of code that violated pep8 rules:

I can then go to line 86 of tempest/openstack.py and investigate the code style violation.

Alternatively, I could run the pep8 CLI tool on my local branch, which will tell me the pep8 violations and what to fix, as shown here:

jpipes@uberbox:~/repos/tempest$ pep8 --repeat tempest
tempest/openstack.py:86:73: W292 no newline at end of file

There we are… the tempest/openstack.py file doesn’t end with a newline. An easy fixup.
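The fix itself can be sketched in one line (shown here against a stand-in file so the snippet is self-contained):

```shell
# Stand-in for tempest/openstack.py: a file missing its trailing newline.
cd "$(mktemp -d)" && mkdir tempest
printf 'import os' > tempest/openstack.py

# The actual fixup: append the missing newline at end of file.
printf '\n' >> tempest/openstack.py

# Re-run pep8 to confirm the W292 warning is gone (guarded in case
# the pep8 tool is not installed here).
command -v pep8 >/dev/null && pep8 --repeat tempest || true
```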

There are some things in the world of development that you appreciate much more when you do a lot of code reviews. One of those things is commit messages.

At first glance, commit messages seem to be a small, relatively innocuous thing for a developer. When you commit code, you type in some description about your code changes and then, typically, push your code somewhere for review by someone.

Regardless of whether the code you pushed is going to an open source project, an internal proprietary code repository, or just some code exchanged between friends working on a joint project, that simple little commit message tells the person reading your code a whole lot about you. It speaks volumes about the way you feel about the code you submit and the quality of the review you expect for your code.

As an example, suppose I was working on some code that fixed a bug. I got my code ready for initial review and I did the following:

$> git commit -a -m "Fixes some stuff"

And then I push my commit somewhere using git push …

Inevitably, what happens is that another developer will get some email or notification that I have pushed code up to some repository. It is likely that this notification will look something like this:

And in the notification will be some link to a place to go do a code review.

Now, what do you think is the first thought that goes through the reviewer’s mind? My guess would be: Really? Fixes what stuff? By not including any context about what the patch is attempting to solve, you leave the reviewer with a bad taste in their mouth. And a bad taste in the reviewer’s mouth generally means one thing: a reluctance to review your patch.

OK, so what could we do to make the commit message better, to provide the reviewer with more initial context about your patch? Well, the first thing that comes to mind is to reference a specific bug that you are fixing with this patch.

Alright, so we amend our commit message to include a bug identifier:

$> git commit --amend -m "Fixes Bug 123456"

And subsequently push our amended commit message. The reviewer now gets a new notification that you’ve amended a previous patch. Now the notification includes the bug identifier. What do you think the next thought a typical reviewer might have? My guess is this: What, does this developer think that I’ve memorized all the bug IDs for all open bugs? How should I know what Bug 123456 is about? And here comes that bad taste in the mouth again.

OK, so this time, we will forgo the use of the time-saving -m switch to git commit and actually type a proper, multi-line commit message in our editor of choice that describes the bug that our patch fixes, including a brief description of how we fixed the bug:

git commit --amend # This will open up your editor...

Now we’d enter a good commit message … something like this would work:

Fixes Bug 123456 - ImportError raised improperly in Storage Driver

Due to a circular dependency, an ImportError was improperly
being thrown when the storage driver was set to XYZ. Rearranged
code to remove circular dependency.

The commit message now will give the reviewer everything they need in the notification to understand what the patch is for and how you solved a bug, without needing to go to their bug tracker to figure out what the bug was about.

A detailed commit message shows you care about the time that reviewers spend on your patch and that you value the code you are submitting.

Last night I gave a short webinar to some folks about the basics of contributing to the Tempest project, which is the OpenStack integration test suite. It was the first time I’d used Google Docs to create and give a presentation and I must say I was really impressed with the ease-of-use of Google Docs Presentation. Well done, Google.

Anyway, I’ve uploaded a PDF of the presentation to this website and provided a link to the Google Docs presentation along with a brief overview of the topics covered in the slides below. As always, I love to get feedback on slides. Feel free to leave a comment here, email me or find me on IRC. Enjoy!