
For advanced users, Buildbot acts as a framework supporting a customized build application.
For the most part, such configurations consist of subclasses set up for use in a regular Buildbot configuration file.

This chapter describes some of the more common idioms in advanced Buildbot configurations.

Bearing in mind that master.cfg is a Python file, large configurations can be shortened considerably by judicious use of Python loops.
For example, the following will generate a builder for each of a range of supported versions of Python:
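A sketch of the idiom (the version list and naming scheme here are assumptions; a real master.cfg would construct a util.BuildFactory and append a util.BuilderConfig in each iteration):

```python
# Sketch only: the version list, naming scheme, and placeholder dicts are
# assumptions. In a real master.cfg each iteration would build a
# util.BuildFactory and append a util.BuilderConfig to c['builders'].
c = {'builders': []}
for python_version in ['3.9', '3.10', '3.11', '3.12']:
    name = 'python' + python_version.replace('.', '')
    # ... add steps that run the test suite under this interpreter ...
    c['builders'].append({'name': name, 'version': python_version})
```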

The number of invocations of the callable is proportional to the square of the request queue length, so a long-running callable may cause undesirable delays when the queue length grows.

It should return True if the requests can be merged, and False otherwise.
For example:

```python
@defer.inlineCallbacks
def collapseRequests(master, builder, req1, req2):
    "any requests with the same branch can be merged"

    # get the buildsets for each buildrequest
    selfBuildset, otherBuildset = yield defer.gatherResults([
        master.data.get(('buildsets', req1['buildsetid'])),
        master.data.get(('buildsets', req2['buildsetid'])),
    ])

    if len(selfBuildset['sourcestamps']) != len(otherBuildset['sourcestamps']):
        return False

    for i, selfSourcestamp in enumerate(selfBuildset['sourcestamps']):
        if selfSourcestamp['branch'] != otherBuildset['sourcestamps'][i]['branch']:
            return False

    return True

c['collapseRequests'] = collapseRequests
```

The prioritizeBuilders configuration key specifies a function which is called with two arguments: a BuildMaster and a list of Builder objects.
It should return a list of the same Builder objects, in the desired order.
It may also remove items from the list if builds should not be started on those builders.
If necessary, this function can return its results via a Deferred (it is called with maybeDeferred).

A simple prioritizeBuilders implementation might look like this:

```python
def prioritizeBuilders(buildmaster, builders):
    """Prioritize builders. 'finalRelease' builds have the highest priority,
    so they should be built before running tests, or creating builds."""
    builderPriorities = {
        "finalRelease": 0,
        "test": 1,
        "build": 2,
    }
    builders.sort(key=lambda b: builderPriorities.get(b.name, 0))
    return builders

c['prioritizeBuilders'] = prioritizeBuilders
```

When a builder has multiple pending build requests, it uses a nextBuild function to decide which build it should start first.
This function is given two parameters: the Builder, and a list of BuildRequest objects representing pending build requests.

A simple function to prioritize release builds over other builds might look like this:
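A minimal sketch, assuming each request exposes its SourceStamp as request.source and that release builds run on a branch literally named release (both assumptions for illustration):

```python
def nextBuild(builder, requests):
    """Start any build request on the 'release' branch before the rest;
    otherwise take the oldest pending request. The 'release' branch name
    and the request.source.branch attribute are assumptions here."""
    for request in requests:
        if request.source.branch == 'release':
            return request
    return requests[0]
```

The function would then be passed to the builder via the nextBuild= argument of its configuration.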

Each source file that is tracked by a Subversion repository has a fully-qualified SVN URL in the following form: (REPOURL)(PROJECT-plus-BRANCH)(FILEPATH).
When you create the SVNPoller, you give it a repourl value that includes all of the REPOURL and possibly some portion of the PROJECT-plus-BRANCH string.
The SVNPoller is responsible for producing Changes that contain a branch name and a FILEPATH (which is relative to the top of a checked-out tree).
The details of how these strings are split up depend upon how your repository names its branches.

In this case, every Change that our SVNPoller produces will have its branch attribute set to None, to indicate that the Change is on the trunk.
No other sub-projects or branches will be tracked.

If we want our ChangeSource to follow multiple branches, we have to do two things.
First we have to change our repourl= argument to watch more than just amanda/trunk.
We will set it to amanda so that we'll see both the trunk and all the branches.
Second, we have to tell SVNPoller how to split the (PROJECT-plus-BRANCH)(FILEPATH) strings it gets from the repository out into (BRANCH) and (FILEPATH).

We do the latter by providing a split_file function.
This function is responsible for splitting something like branches/3_3/common-src/amanda.h into branch='branches/3_3' and filepath='common-src/amanda.h'.
The function is always given a string that names a file relative to the subdirectory pointed to by the SVNPoller's repourl= argument.
It is expected to return a dictionary with at least the path key.
The splitter may optionally set branch, project and repository.
For backwards compatibility it may return a tuple of (branchname,path).
It may also return None to indicate that the file is of no interest.

Note

The function should return branches/3_3 rather than just 3_3 because the SVN checkout step will append the branch name to the baseURL, which requires that we keep the branches component in there.
Other VC schemes use a different approach towards branches and may not require this artifact.

If your repository uses this same {PROJECT}/{BRANCH}/{FILEPATH} naming scheme, the following function will work:
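A stand-alone sketch of such a splitter, using the backwards-compatible tuple form (Buildbot ships a similar helper; this version is written out for illustration):

```python
def split_file_branches(path):
    # 'trunk/src/main.c'          -> (None, 'src/main.c')
    # 'branches/1.5.x/src/main.c' -> ('branches/1.5.x', 'src/main.c')
    # anything else (e.g. tags/)  -> None, meaning "not interesting"
    pieces = path.split('/')
    if len(pieces) > 1 and pieces[0] == 'trunk':
        return (None, '/'.join(pieces[1:]))
    elif len(pieces) > 2 and pieces[0] == 'branches':
        # keep the 'branches/' prefix, per the note above
        return ('/'.join(pieces[0:2]), '/'.join(pieces[2:]))
    else:
        return None
```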

Changes for all sorts of branches (with names like "branches/1.5.x", and None to indicate the trunk) will be delivered to the Schedulers.
Each Scheduler is then free to use or ignore each branch as it sees fit.

If you have multiple projects in the same repository your split function can attach a project name to the Change to help the Scheduler filter out unwanted changes:
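For instance, a sketch returning the dictionary form described above, with a project key (the amanda-style PROJECT/BRANCH/FILEPATH layout is assumed):

```python
def split_file_projects_branches(path):
    # 'amanda/trunk/common-src/amanda.h'
    #     -> {'project': 'amanda', 'path': 'common-src/amanda.h'}
    # 'amanda/branches/3_3/common-src/amanda.h'
    #     -> {'project': 'amanda', 'branch': 'branches/3_3',
    #         'path': 'common-src/amanda.h'}
    if '/' not in path:
        return None
    project, rest = path.split('/', 1)
    pieces = rest.split('/')
    if pieces[0] == 'trunk':
        return {'project': project, 'path': '/'.join(pieces[1:])}
    elif pieces[0] == 'branches' and len(pieces) > 2:
        return {'project': project,
                'branch': '/'.join(pieces[0:2]),
                'path': '/'.join(pieces[2:])}
    return None
```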

Note here that we are monitoring at the root of the repository, and that within that repository is an amanda subdirectory which in turn has trunk and branches.
It is that amanda subdirectory whose name becomes the project field of the Change.

Another common way to organize a Subversion repository is to put the branch name at the top, and the projects underneath.
This is especially frequent when there are a number of related sub-projects that all get released in a group.

For example, Divmod.org hosts a project named Nevow as well as one named Quotient.
In a checked-out Nevow tree there is a directory named formless that contains a Python source file named webform.py.
This repository is accessible via webdav (and thus uses an http: scheme) through the divmod.org hostname.
There are many branches in this repository, and they use a ({BRANCHNAME})/({PROJECT}) naming policy.

The fully-qualified SVN URL for the trunk version of webform.py is http://divmod.org/svn/Divmod/trunk/Nevow/formless/webform.py.
The 1.5.x branch version of this file would have a URL of http://divmod.org/svn/Divmod/branches/1.5.x/Nevow/formless/webform.py.
The whole Nevow trunk would be checked out with http://divmod.org/svn/Divmod/trunk/Nevow, while the Quotient trunk would be checked out using http://divmod.org/svn/Divmod/trunk/Quotient.

Now suppose we want to have an SVNPoller that only cares about the Nevow trunk.
This case looks just like the PROJECT/BRANCH layout described earlier:
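A master.cfg fragment for this might look like the following sketch:

```python
from buildbot.plugins import changes

# point the poller directly at the Nevow trunk; no split_file is needed,
# since every file seen is on the (single) watched branch
c['change_source'] = changes.SVNPoller(
    repourl="http://divmod.org/svn/Divmod/trunk/Nevow")
```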

But what happens when we want to track multiple Nevow branches?
We have to point our repourl= high enough to see all those branches, but we also don't want to include Quotient changes (since we're only building Nevow).
To accomplish this, we must rely upon the split_file function to help us tell the difference between files that belong to Nevow and those that belong to Quotient, as well as figuring out which branch each one is on.
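A split_file sketch for this {BRANCHNAME}/{PROJECT} layout, in the tuple form; the Nevow project name is hard-coded, and files from any other project (such as Quotient) are ignored by returning None:

```python
def split_file_nevow(path):
    # repourl is assumed to point at .../Divmod, so paths look like
    # 'trunk/Nevow/formless/webform.py' or
    # 'branches/1.5.x/Nevow/formless/webform.py'
    pieces = path.split('/')
    if pieces[0] == 'trunk':
        branch = None
        pieces = pieces[1:]
    elif pieces[0] == 'branches' and len(pieces) > 1:
        branch = 'branches/' + pieces[1]
        pieces = pieces[2:]
    else:
        return None  # tags and other oddities are not interesting
    if not pieces or pieces[0] != 'Nevow':
        return None  # some other project, e.g. Quotient
    return (branch, '/'.join(pieces[1:]))
```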

For some version-control systems, making Buildbot aware of new changes can be a challenge.
If the pre-supplied classes in Change Sources are not sufficient, then you will need to write your own.

There are three approaches, one of which is not even a change source.
The first option is to write a change source that exposes some service to which the version control system can "push" changes.
This can be more complicated, since it requires implementing a new service, but delivers changes to Buildbot immediately on commit.

The second option is often preferable to the first: implement a notification service in an external process (perhaps one that is started directly by the version control system, or by an email server) that delivers changes to Buildbot via PBChangeSource.
This section does not describe this particular approach, since it requires no customization within the buildmaster process.

The third option is to write a change source which polls for changes - repeatedly connecting to an external service to check for new changes.
This works well in many cases, but can produce a high load on the version control system if polling is too frequent, and can take too long to notice changes if the polling is not frequent enough.

The easiest way to do this is to subclass buildbot.changes.base.ChangeSource, implementing the describe method to describe the instance.
ChangeSource is a Twisted service, so you will need to implement the startService and stopService methods to control the means by which your change source receives notifications.

When the class does receive a change, it should call self.master.addChange(..) to submit it to the buildmaster.
This method shares the same parameters as master.db.changes.addChange, so consult the API documentation for that function for details on the available arguments.

You will probably also want to set compare_attrs to the list of object attributes which Buildbot will use to compare one change source to another when reconfiguring.
During reconfiguration, if the new change source is different from the old, then the old will be stopped and the new started.

Polling is a very common means of seeking changes, so Buildbot supplies a utility parent class to make it easier.
A poller should subclass buildbot.changes.base.PollingChangeSource, which is a subclass of ChangeSource.
This subclass implements the Service methods, and calls the poll method according to the pollInterval and pollAtLaunch options.
The poll method should return a Deferred to signal its completion.
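A minimal polling sketch along these lines; fetch_new_commits and the commit dictionary keys are hypothetical stand-ins for whatever query your version control system supports:

```python
from twisted.internet import defer

from buildbot.changes.base import PollingChangeSource


class MyPoller(PollingChangeSource):
    """Sketch of a polling change source. fetch_new_commits is a
    hypothetical helper that asks the VCS for commits since the last poll."""

    def describe(self):
        return "MyPoller watching example-vcs"

    @defer.inlineCallbacks
    def poll(self):
        commits = yield fetch_new_commits()  # hypothetical helper
        for commit in commits:
            # submit each new commit to the buildmaster
            yield self.master.addChange(author=commit['author'],
                                        revision=commit['revision'],
                                        files=commit['files'],
                                        comments=commit['message'])
```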

Aside from the service methods, the other concerns in the previous section apply here, too.

Writing a new latent worker should only require subclassing buildbot.worker.AbstractLatentWorker and implementing start_instance and stop_instance.

```python
def start_instance(self):
    # responsible for starting instance that will try to connect with this
    # master. Should return deferred. Problems should use an errback. The
    # callback value can be None, or can be an iterable of short strings to
    # include in the "substantiate success" status message, such as
    # identifying the instance that started.
    raise NotImplementedError

def stop_instance(self, fast=False):
    # responsible for shutting down instance. Return a deferred. If `fast`,
    # we're trying to shut the master down, so callback as soon as is safe.
    # Callback value is ignored.
    raise NotImplementedError
```

The standard BuildFactory object creates Build objects by default.
These Builds will each execute a collection of BuildSteps in a fixed sequence.
Each step can affect the results of the build, but in general there is little intelligence to tie the different steps together.

By setting the factory's buildClass attribute to a different class, you can instantiate a different build class.
This might be useful, for example, to create a build class that dynamically determines which steps to run.
The skeleton of such a project would look like:
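A sketch of that skeleton, assuming the base Build class is importable from buildbot.process.build; the interesting overrides are project-specific and left as a stub:

```python
from buildbot.plugins import util
from buildbot.process.build import Build


class DynamicBuild(Build):
    # override Build methods here to pick the steps to run at runtime;
    # which methods to override depends on your project, so this sketch
    # leaves the body empty
    pass


f = util.BuildFactory()
f.buildClass = DynamicBuild
```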

It is sometimes helpful to have a build's workdir determined at runtime based on the parameters of the build.
To accomplish this, set the workdir attribute of the build factory to a callable.
That callable will be invoked with the SourceStamp for the build, and should return the appropriate workdir.
Note that the value must be returned immediately - Deferreds are not supported.

This can be useful, for example, in scenarios with multiple repositories submitting changes to Buildbot.
In this case you likely will want to have a dedicated workdir per repository, since otherwise a sourcing step with mode = "update" will fail as a workdir with a working copy of repository A can't be "updated" for changes from a repository B.
Here is an example of how you can achieve a workdir-per-repo setup:
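One sketch: hash the repository URL into a short directory name, giving each repository its own working directory (the build- prefix and the use of md5 are arbitrary choices for this illustration):

```python
import hashlib


def workdir(source_stamp):
    # Hash the repository URL into a short, filesystem-safe directory name,
    # so each repository gets its own working directory. The 'build-' prefix
    # and md5 are arbitrary choices for this sketch.
    digest = hashlib.md5(source_stamp.repository.encode('utf-8')).hexdigest()
    return 'build-' + digest[:8]
```

In master.cfg you would then assign this callable on your build factory, e.g. f.workdir = workdir.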

You could make the workdir function compute other paths, based on parts of the repo URL in the sourcestamp, or lookup in a lookup table based on repo URL.
As long as there is a permanent 1:1 mapping between repos and workdir, this will work.

Buildbot has transitioned to a new, simpler style for writing custom steps.
See New-Style Build Steps for details.
This section documents new-style steps.
Old-style steps are supported in Buildbot-0.9.0, but not in later releases.

While it is a good idea to keep your build process self-contained in the source code tree, sometimes it is convenient to put more intelligence into your Buildbot configuration.
One way to do this is to write a custom BuildStep.
Once written, this Step can be used in the master.cfg file.

The best reason for writing a custom BuildStep is to better parse the results of the command being run.
For example, a BuildStep that knows about JUnit could look at the logfiles to determine which tests had been run, how many passed and how many failed, and then report more detailed information than a simple rc==0 -based good/bad decision.

Buildbot has acquired a large fleet of build steps, and sports a number of knobs and hooks to make steps easier to write.
This section may seem a bit overwhelming, but most custom steps will only need to apply one or two of the techniques outlined here.

For complete documentation of the build step interfaces, see BuildSteps.

Build steps act as their own factories, so their constructors are a bit more complex than necessary.
The configuration file instantiates a BuildStep object, but the step configuration must be re-used for multiple builds, so Buildbot needs some way to create more steps.

Consider the use of a BuildStep in master.cfg:

```python
f.addStep(MyStep(someopt="stuff", anotheropt=1))
```

This creates a single instance of class MyStep.
However, Buildbot needs a new object each time the step is executed.
An instance of BuildStep remembers how it was constructed, and can create copies of itself.
When writing a new step class, then, keep in mind that you cannot do anything "interesting" in the constructor -- limit yourself to checking and storing arguments.

It is customary to call the parent class's constructor with all otherwise-unspecified keyword arguments.
Keep a **kwargs argument on the end of your options, and pass that up to the parent class's constructor.

A step's execution occurs in its run method.
When this method returns (more accurately, when the Deferred it returns fires), the step is complete.
The method's result must be an integer, giving the result of the step.
Any other output from the step (logfiles, status strings, URLs, etc.) is the responsibility of the run method.

The ShellCommand class implements this run method, and in most cases steps subclassing ShellCommand simply implement some of the subsidiary methods that its run method calls.

To spawn a command in the worker, create a RemoteCommand instance in your step's run method and run it with runCommand:

```python
cmd = RemoteCommand(args)
d = self.runCommand(cmd)
```

The CommandMixin class offers a simple interface to several common worker-side commands.

For the much more common task of running a shell command on the worker, use ShellMixin.
This class provides a method to handle the myriad constructor arguments related to shell commands, as well as a method to create new RemoteCommand instances.
This mixin is the recommended method of implementing custom shell-based steps.
The older pattern of subclassing ShellCommand is no longer recommended.

Each step can summarize its current status in a very short string.
For example, a compile step might display the file being compiled.
This information can be helpful to users eager to see their build finish.

Similarly, a build has a set of short strings collected from its steps summarizing the overall state of the build.
Useful information here might include the number of tests run, but probably not the results of a make clean step.

As a step runs, Buildbot calls its getCurrentSummary method as necessary to get the step's current status.
"As necessary" is determined by calls to buildbot.process.buildstep.BuildStep.updateSummary.
Your step should call this method every time the status summary may have changed.
Buildbot will take care of rate-limiting summary updates.

When the step is complete, Buildbot calls its getResultSummary method to get a final summary of the step along with a summary for the build.
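The shapes of the two return values can be sketched with a plain class; the class name, attribute values, and messages are invented, and in Buildbot these methods would live on a BuildStep subclass:

```python
# Stand-alone sketch of the two summary hooks. The class, attributes, and
# messages are invented; real steps define these methods on a BuildStep
# subclass and return dictionaries with 'step' (and optionally 'build') keys.
class SummarySketch:
    current_target = 'libfoo.so'
    warning_count = 3

    def getCurrentSummary(self):
        # called while the step runs, whenever updateSummary() was requested
        return {'step': 'compiling %s' % self.current_target}

    def getResultSummary(self):
        # called once, after the step completes
        return {'step': '%d warnings' % self.warning_count,
                'build': '%d warnings' % self.warning_count}
```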

Each BuildStep has a collection of log files.
Each one has a short name, like stdio or warnings.
Each log file contains an arbitrary amount of text, usually the contents of some output file generated during a build or test step, or a record of everything that was printed to stdout/stderr during the execution of some command.

Each can contain multiple channels, generally limited to three basic ones: stdout, stderr, and headers.
For example, when a shell command runs, it writes a few lines to the headers channel to indicate the exact argv strings being run, which directory the command is being executed in, and the contents of the current environment variables.
Then, as the command runs, it adds a lot of stdout and stderr messages.
When the command finishes, a final header line is added with the exit code of the process.

Status display plugins can format these different channels in different ways.
For example, the web page shows log files as text/html, with header lines in blue text, stdout in black, and stderr in red.
A different URL is available which provides a text/plain format, in which stdout and stderr are collapsed together, and header lines are stripped completely.
This latter option makes it easy to save the results to a file and run grep or whatever against the output.

Finally, addHTMLLog is similar to addCompleteLog, but the resulting log will be tagged as containing HTML.
The web UI will display the contents of the log using the browser.

The logfiles= argument to ShellCommand and its subclasses creates new log files and fills them in realtime by asking the worker to watch an actual file on disk.
The worker will look for additions in the target file and report them back to the BuildStep.
These additions will be added to the log file by calling addStdout.

All log files can be used as the source of a LogObserver just like the normal stdioLogFile.
In fact, it's possible for one LogObserver to observe a logfile created by another.

For the most part, Buildbot tries to avoid loading the contents of a log file into memory as a single string.
For large log files on a busy master, this behavior can quickly consume a great deal of memory.

Instead, steps should implement a LogObserver to examine log files one chunk or line at a time.

For commands which only produce a small quantity of output, RemoteCommand will collect the command's stdout into its stdout attribute if given the collectStdout=True constructor argument.

Most shell commands emit messages to stdout or stderr as they operate, especially if you ask them nicely with a --verbose flag of some sort.
They may also write text to a log file while they run.
Your BuildStep can watch this output as it arrives, to keep track of how much progress the command has made or to process log output for later summarization.

To accomplish this, you will need to attach a LogObserver to the log.
This observer is given all text as it is emitted from the command, and has the opportunity to parse that output incrementally.

There are a number of pre-built LogObserver classes that you can choose from (defined in buildbot.process.buildstep), and of course you can subclass them to add further customization.
The LogLineObserver class handles the grunt work of buffering and scanning for end-of-line delimiters, allowing your parser to operate on complete stdout/stderr lines.

For example, let's take a look at the TrialTestCaseCounter, which is used by the Trial step to count test cases as they are run.
As Trial executes, it emits lines like the following:
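The emitted lines look roughly like the following (illustrative; the exact format comes from trial):

```
twisted.test.test_process.ProcessTestCase.test_stdout ... [OK]
twisted.test.test_process.ProcessTestCase.test_stderr ... [OK]
twisted.test.test_process.ProcessTestCase.test_kill ... [FAILURE]
```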

When the tests are finished, trial emits a long line of ====== and then some lines which summarize the tests that failed.
We want to avoid parsing these trailing lines, because their format is less well-defined than the [OK] lines.

This parser only pays attention to stdout, since that's where trial writes the progress lines.
It has a mode flag named finished to ignore everything after the ==== marker, and a scary-looking regular expression to match each line while hopefully ignoring other messages that might get displayed as the test runs.
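The core of that parsing can be sketched stand-alone; the class and method names here are invented, and the real observer subclasses LogLineObserver and reports progress via self.step.setProgress:

```python
import re


class TrialLineCounter:
    """Stand-alone sketch of TrialTestCaseCounter's parsing logic. The real
    class subclasses LogLineObserver and reports progress to its step."""

    _line_re = re.compile(r'^([\w.]+) \.\.\. \[([^\]]+)\]$')

    def __init__(self):
        self.num_tests = 0
        self.finished = False

    def out_line_received(self, line):
        if self.finished:
            return
        if line.startswith('=' * 40):
            self.finished = True  # trial's summary separator: stop counting
            return
        if self._line_re.match(line.strip()):
            self.num_tests += 1
            # a real observer would now call:
            # self.step.setProgress('tests', self.num_tests)
```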

Each time it identifies a test has been completed, it increments its counter and delivers the new progress value to the step with self.step.setProgress.
This helps Buildbot to determine the ETA for the step.

To connect this parser into the Trial build step, Trial.__init__ ends with the following clause:

```python
# this counter will feed Progress along the 'test cases' metric
counter = TrialTestCaseCounter()
self.addLogObserver('stdio', counter)
self.progressMetrics += ('tests',)
```

This creates a TrialTestCaseCounter and tells the step that the counter wants to watch the stdio log.
The observer is automatically given a reference to the step in its step attribute.

In custom BuildSteps, you can get and set the build properties with the getProperty and setProperty methods.
Each takes a string for the name of the property, and returns or accepts an arbitrary JSON-able (lists, dicts, strings, and numbers) object.
For example:
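A sketch of a step that consults one property and sets another; the property names, values, and the step itself are invented for illustration:

```python
from buildbot.process import buildstep
from buildbot.process.results import SUCCESS


class ChooseDeployTarget(buildstep.BuildStep):
    """Sketch: property names and values here are assumptions."""

    def run(self):
        branch = self.getProperty('branch')
        target = 'production' if branch == 'release' else 'staging'
        # third argument names the source of the property value
        self.setProperty('deploy_target', target, 'ChooseDeployTarget')
        return SUCCESS
```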

Remember that properties set in a step may not be available until the next step begins.
In particular, any Property or Interpolate instances for the current step are interpolated before the step starts, so they cannot use the value of any properties determined in that step.

Statistics can be generated for each step, and then summarized across all steps in a build.
For example, a test step might set its warnings statistic to the number of warnings observed.
The build could then sum the warnings on all steps to get a total number of warnings.

Each BuildStep has a collection of links.
Each has a name and a target URL.
The web display displays clickable links for each link, making them a useful way to point to extra information about a step.
For example, a step that uploads a build result to an external service might include a link to the uploaded file.

To set one of these links, the BuildStep should call the addURL method with the name of the link and the target URL.
Multiple URLs can be set.
For example:
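A sketch; the upload helper and the link name are hypothetical:

```python
from twisted.internet import defer

from buildbot.process import buildstep
from buildbot.process.results import SUCCESS


class PublishResult(buildstep.BuildStep):
    """Sketch: upload_somewhere is a hypothetical helper returning a URL."""

    @defer.inlineCallbacks
    def run(self):
        url = yield upload_somewhere(self.build)  # hypothetical helper
        yield self.addURL('uploaded file', url)
        return SUCCESS
```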

When implementing a BuildStep it may be necessary to know about files that are created during the build.
There are a few worker commands that can be used to find files on the worker and test for the existence (and type) of files and directories.

The worker provides the following file-discovery related commands:

stat calls os.stat for a file in the worker's build directory.
This can be used to check if a known file exists and whether it is a regular file, directory or symbolic link.

listdir calls os.listdir for a directory on the worker.
It can be used to obtain a list of files that are present in a directory on the worker.

glob calls glob.glob on the worker, with a given shell-style pattern containing wildcards.

For example, we could use stat to check if a given path exists and contains *.pyc files.
If the path does not exist (or anything fails) we mark the step as failed; if the path exists but is not a directory, we mark the step as having "warnings".

```python
import stat

from buildbot.plugins import steps, util
from buildbot.process import buildstep
from buildbot.interfaces import WorkerTooOldError


class MyBuildStep(steps.BuildStep):

    def __init__(self, dirname, **kwargs):
        steps.BuildStep.__init__(self, **kwargs)
        self.dirname = dirname

    def start(self):
        # make sure the worker knows about stat and glob
        workerver = (self.workerVersion('stat'),
                     self.workerVersion('glob'))
        if not all(workerver):
            raise WorkerTooOldError('need stat and glob')

        cmd = buildstep.RemoteCommand('stat', {'file': self.dirname})
        d = self.runCommand(cmd)
        d.addCallback(lambda res: self.evaluateStat(cmd))
        d.addErrback(self.failed)
        return d

    def evaluateStat(self, cmd):
        if cmd.didFail():
            self.step_status.setText(["File not found."])
            self.finished(util.FAILURE)
            return
        s = cmd.updates["stat"][-1]
        if not stat.S_ISDIR(s[stat.ST_MODE]):
            self.step_status.setText(["'tis not a directory"])
            self.finished(util.WARNINGS)
            return
        cmd = buildstep.RemoteCommand('glob', {'path': self.dirname + '/*.pyc'})
        d = self.runCommand(cmd)
        d.addCallback(lambda res: self.evaluateGlob(cmd))
        d.addErrback(self.failed)
        return d

    def evaluateGlob(self, cmd):
        if cmd.didFail():
            self.step_status.setText(["Glob failed."])
            self.finished(util.FAILURE)
            return
        files = cmd.updates["files"][-1]
        if len(files):
            self.step_status.setText(["Found pycs"] + files)
        else:
            self.step_status.setText(["No pycs found"])
        self.finished(util.SUCCESS)
```

Each status plugin is an object which provides the twisted.application.service.IService interface, which creates a tree of Services with the buildmaster at the top [not strictly true].
The status plugins are all children of an object which implements buildbot.interfaces.IStatus, the main status object.
From this object, the plugin can retrieve anything it wants about current and past builds.
It can also subscribe to hear about new and upcoming builds.

Status plugins which only react to human queries (like the Waterfall display) never need to subscribe to anything: they are idle until someone asks a question, then wake up and extract the information they need to answer it, then they go back to sleep.
Plugins which need to act spontaneously when builds complete (like the MailNotifier plugin) need to subscribe to hear about new builds.

If the status plugin needs to run network services (like the HTTP server used by the Waterfall plugin), they can be attached as Service children of the plugin itself, using the IServiceCollection interface.

Let's say that we've got some snazzy new unit-test framework called Framboozle.
It's the hottest thing since sliced bread.
It slices, it dices, it runs unit tests like there's no tomorrow.
Plus if your unit tests fail, you can use its name for a Web 2.1 startup company, make millions of dollars, and hire engineers to fix the bugs for you, while you spend your afternoons lazily hang-gliding along a scenic pacific beach, blissfully unconcerned about the state of your tests.
[1]

To run a Framboozle-enabled test suite, you just run the 'framboozler' command from the top of your source code tree.
The 'framboozler' command emits a bunch of stuff to stdout, but the most interesting bit is that it emits the line "FNURRRGH!" every time it finishes running a test case. You'd like to have a test-case counting LogObserver that watches for these lines and counts them, because counting them will help the buildbot more accurately calculate how long the build will take, and this will let you know exactly how long you can sneak out of the office for your hang-gliding lessons without anyone noticing that you're gone.

This will involve writing a new BuildStep (probably named "Framboozle") which inherits from ShellCommand.
The BuildStep class definition itself will look something like this:
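Something like the following sketch, combining the observer and step ideas from earlier (the FNURRRGH!-counting observer feeds a tests progress metric):

```python
from buildbot.plugins import steps, util


class FNURRRGHCounter(util.LogLineObserver):
    numTests = 0

    def outLineReceived(self, line):
        # each FNURRRGH! line marks one completed test case
        if "FNURRRGH!" in line:
            self.numTests += 1
            self.step.setProgress('tests', self.numTests)


class Framboozle(steps.ShellCommand):
    command = ["framboozler"]

    def __init__(self, **kwargs):
        steps.ShellCommand.__init__(self, **kwargs)  # always upcall!
        counter = FNURRRGHCounter()
        self.addLogObserver('stdio', counter)
        self.progressMetrics += ('tests',)
```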

Remember that master.cfg is secretly just a Python program with one job: populating the BuildmasterConfig dictionary.
And Python programs are allowed to define as many classes as they like.
So you can define classes and use them in the same file, just as long as the class is defined before some other code tries to use it.

This is easy, and it keeps the point of definition very close to the point of use, and whoever replaces you after that unfortunate hang-gliding accident will appreciate being able to easily figure out what the heck this stupid "Framboozle" step is doing anyways.
The downside is that every time you reload the config file, the Framboozle class will get redefined, which means that the buildmaster will think that you've reconfigured all the Builders that use it, even though nothing changed.
Bleh.

(check out the Python docs for details about how import and from A import B work).

What we've done here is to tell Python that every time it handles an "import" statement for some named module, it should look in our ~/lib/python/ for that module before it looks anywhere else.
After our directories, it will try in a bunch of standard directories too (including the one where buildbot is installed).
By setting the PYTHONPATH environment variable, you can add directories to the front of this search list.

Python knows that once it "import"s a file, it doesn't need to re-import it again.
This means that reconfiguring the buildmaster (with buildbot reconfig, for example) won't make it think the Framboozle class has changed every time, so the Builders that use it will not be spuriously restarted.
On the other hand, you either have to start your buildmaster in a slightly weird way, or you have to modify your environment to set the PYTHONPATH variable.

In this case, putting the code into /usr/local/lib/python2.4/site-packages/framboozle.py would work just fine.
We can use the same import framboozle statement in master.cfg as in Option 2.
By putting it in a standard include directory (instead of the decidedly non-standard ~/lib/python), we don't even have to set PYTHONPATH to anything special.
The downside is that you probably have to be root to write to one of those standard include directories.

The entry point framboozle:Framboozle consists of two parts: framboozle is the name of the Python module in which to look for the Framboozle class, which implements the plugin.

Framboozle is the name of the plugin.
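A minimal setup.py sketch wiring up that entry point (the package metadata here is invented):

```python
from setuptools import setup

setup(
    name='framboozle',
    version='0.1',
    py_modules=['framboozle'],
    entry_points={
        # 'buildbot.steps' is the entry-point group Buildbot scans for steps
        'buildbot.steps': [
            'Framboozle = framboozle:Framboozle',
        ],
    },
)
```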

This will allow users of your plugin to use it just like any other Buildbot plugins:

```python
from buildbot.plugins import steps

...
steps.Framboozle
...
```

Now you can upload it to PyPI where other people can download it from and use in their build systems.
Once again, the information about how to prepare and upload a package to PyPI can be found in tutorials listed in How to package Buildbot plugins.

And then you don't even have to install framboozle.py anywhere on your system, since it will ship with Buildbot.
You don't have to be root, you don't have to set PYTHONPATH.
But you do have to make a good case for Framboozle being worth going into the main distribution, you'll probably have to provide docs and some unit test cases, you'll need to figure out what kind of beer the author likes (IPA's and Stouts for Dustin), and then you'll have to wait until the next release.
But in some environments, all this is easier than getting root on your buildmaster box, so the tradeoffs may actually be worth it.

Putting the code in master.cfg (1) makes it available to that buildmaster instance.
Putting it in a file in a personal library directory (2) makes it available for any buildmasters you might be running.
Putting it in a file in a system-wide shared library directory (3) makes it available for any buildmasters that anyone on that system might be running.
Getting it into the buildbot's upstream repository (4) makes it available for any buildmasters that anyone in the world might be running.
It's all a matter of how widely you want to deploy that new class.