Introduction

If you are reading this because you were referred here to bisect your kernel, or because you want to know more about kernel bisecting, then you are taking the best route to getting your bug resolved as soon as possible. This article is written so that anyone, at any skill level, can work through each section in turn and go from start to finish successfully.

What is a bisect?

A forward bisection, traditionally just called a bisect, is the fastest way to find where a bug was introduced: one tests the midpoint between a known good software release and a known bad one released afterwards, then repeatedly tests the midpoint of the narrowed range until one identifies the last good release, followed consecutively by the first bad one. Ideally, if one could determine additional known good and known bad versions within the larger range, one could reduce the number of bisect iterations required; a bisect is most useful when that is not possible.

Performing a bisect is faster than testing every version between the initial known good version and the known bad one. For example, if your known good release was 1.0 and the bad one was 1.10, the worst case for linear testing would be nine releases (1.1, 1.2, 1.3, ..., 1.9) before 1.9 was found to be the last known good version. Bisecting the same worst case requires testing only four releases (1.5, 1.7, 1.8, and 1.9).

What is a reverse bisect?

A reverse bisect is the same midpoint process applied to find a fix: one starts from a known bad software release and a known good one released afterwards, and continues finding successive midpoints until one identifies the last bad release, followed consecutively in version by the first good one.

How do I bisect a Ubuntu kernel bug?

For example, let us say you started with a fully updated 32-bit (also known as i386) Maverick install. Then, instead of upgrading, you did a clean install of Quantal and found what you think may be a Linux kernel bug, and hence a regression. If you are unsure which release this regression occurred in, then, in order to rule out a userspace issue and to identify the specific regression, the next step is bisecting Ubuntu releases.

Bisecting Ubuntu releases

Hence, one typically wants to narrow down the first Ubuntu release after Maverick in which this problem began. So, we have the following releases:

Ubuntu 10.10 Maverick Meerkat (known good)
Ubuntu 11.04 Natty Narwhal
Ubuntu 11.10 Oneiric Ocelot
Ubuntu 12.04 Precise Pangolin
Ubuntu 12.10 Quantal Quetzal (known bad)

The midpoint release between Maverick and Quantal is Ubuntu 11.10 Oneiric Ocelot. One may download releases from http://releases.ubuntu.com/. If the bug is reproducible in Oneiric, then one would want to test Ubuntu 11.04 Natty Narwhal. If this is reproducible in Natty, then one knows that the regression happened going from Maverick to Natty. The next step is bisecting Ubuntu kernel versions.

Bisecting Ubuntu kernel versions

Continuing the prior example, the next step would be to find the last good kernel version, followed consecutively in version by the first bad one. So, assuming the Maverick kernel was kept updated, as per https://launchpad.net/ubuntu/maverick/+source/linux, the last Maverick kernel version published for upgrade was 2.6.35-32.67. As one may notice under the Releases in Ubuntu section, a series of kernels are listed vertically:

Commit bisecting Ubuntu kernel versions

Required knowledge and tools

The rest of this page assumes that you know how to fetch a kernel from the Ubuntu git repository and build it, and that you have basic git skills. If you can't do that yet, try starting with this wiki page. One may also want to become familiar with git bisect via a terminal:

git bisect --help

This example

The commands in the example on this page use a real life example. In January of 2011, a kernel which was published to the -proposed pocket caused Radeon graphics to break for a number of users. Typing the commands as shown on this page will recreate the steps taken to find the bad commit in that release. The entire history of testing the bisected kernels for that regression appears in the bug.

Getting set up

You need to have a bug reproducer, or have a cooperative tester in the community. If you can't reliably determine whether the bug exists in a given kernel, bisection will not give meaningful results.

This process goes a lot faster if you can quickly build kernels and have them tested. Using a fast build machine and having good communications with the testers will speed things up.

Check out your tree and get ready

If you want to follow along with the example, use the commands exactly as shown:
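For the Maverick example, checking out the tree might look like the following (the repository path is an assumption based on the usual kernel.ubuntu.com layout, so verify it before use):

```shell
# Clone the Maverick Ubuntu kernel tree and enter it
# (repository path is an assumption, not verified):
git clone git://kernel.ubuntu.com/ubuntu/ubuntu-maverick.git
cd ubuntu-maverick
```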

Take a look first to see what you can learn

The version which works is tagged Ubuntu-2.6.35-24.42. The version which has the problem is tagged Ubuntu-2.6.35-25.43.

First, let's take a quick look at the changes between the two:

git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43

Now, how many commits are in there?

git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43 | wc -l

It says 325, but two of those are the startnewrelease and final changelog changes, so there are 323 commits, and the bad one is among them.

Sometimes you can find the problem easily if it's in a subsystem that only has changes from a few patches. In this example, Radeon hardware is affected, so try looking at the commits to the radeon driver:
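For instance, one can restrict the log to the radeon driver directory (the path shown is where the driver lived in 2.6.35-era kernels):

```shell
# List only the commits in the range that touch the radeon driver:
git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43 -- drivers/gpu/drm/radeon
```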

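The bisect itself is started by marking the bad and good tags, along these lines:

```shell
# Start the bisect: mark the first known bad tag and the last known good tag.
# git then checks out a commit roughly midway between the two.
git bisect start
git bisect bad Ubuntu-2.6.35-25.43
git bisect good Ubuntu-2.6.35-24.42
```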
This tells you that git has chosen the commit "olpc_battery: . . ." as the midpoint for the first bisection, and reset your tree so that it is the top commit. Git is also telling you that there are about seven bisection steps left.

Give this test a version number

Before you build this kernel for testing, you have to give it a version number. This is done by editing the debian.master/changelog file.

The top of that file now appears like this:

linux (2.6.35-25.43) UNRELEASED; urgency=low

  CHANGELOG: Do not edit directly. Autogenerated at release.
  CHANGELOG: Use the printchanges target to see the curent changes.
  CHANGELOG: Use the insertchanges target to create the final log.

 -- Tim Gardner <tim.gardner@canonical.com>  Mon, 06 Dec 2010 10:45:38 -0700

The top line of that file has the version in it. Choose a version that:

is clearly a test

will be superseded by later kernels

has meaning to you in your bisection testing

I use my initials, plus an incrementing number, plus an indicator of the Launchpad bug associated with the problem. Thus, my first test version is:

2.6.35-25.44~spc01LP703553

The '~' is a special versioning trick: it means that this kernel will be superseded and replaced by any version higher than 2.6.35-25.44, yet this version is still considered higher than .44. Using this versioning makes sure that if a user tests our kernel, they won't keep it around after the next update comes along.

You also need to change UNRELEASED to the target series, maverick, or it will not be accepted for your PPA build.

Edit the changelog and replace the entire text in the earlier box with this:
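The edited stanza might then look like the following sketch (the bug task, maintainer name, email, and date are placeholders you would replace with your own):

```
linux (2.6.35-25.44~spc01LP703553) maverick; urgency=low

  * Test build for LP: #703553 bisection

 -- Your Name <you@example.com>  Mon, 10 Jan 2011 10:00:00 -0700
```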

Bisecting: a merge base must be tested

git is advising that, in order to proceed with the bisect, one needs to tell git whether the commit 521cb40b0c44418a4fd36dc633f575813d59a43d is good or bad via:

git bisect good

or:

git bisect bad

Map Ubuntu kernel to mainline kernel for mainline bisection

If you hit the issues below, or some other issue preventing you from further commit bisecting Ubuntu kernel versions, and assuming the issue is not due to a downstream patch or configuration change, you will want to switch from commit bisecting the Ubuntu kernel to commit bisecting the mainline kernel. As the Ubuntu kernel and the mainline kernel have differing version schemes, use the Ubuntu to Mainline kernel version mapping page. With this in mind, the mapping may not produce an upstream tag one can use directly for bisection. For example, Ubuntu kernel 3.10.0-6.17 maps to mainline 3.10.3, but when one tries to bisect against this tag, one gets:

fatal: Needed a single revision
Bad rev input: v3.10.3

Hence, one could simply use an adjacent valid tag, such as v3.10-rc7.

Commit bisecting Ubuntu kernel versions across non-linear tags

The following will tell you whether or not two given tags are non-linear:
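One way to perform such a check, as a sketch (the tag names are placeholders, and using `git merge-base --is-ancestor` is one possible implementation of the test described here, not necessarily the exact command from the original page):

```shell
# Print TAG1's sha1 if TAG1 is an ancestor of TAG2 (the tags are linear);
# print nothing otherwise. Tag names passed in are placeholders.
tags_linear() {
    git merge-base --is-ancestor "$1" "$2" && git rev-parse "$1^{commit}"
}
```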

If that command outputs a sha1, the tags are linear; otherwise they are not. If they are not, the debian and debian.master folders mentioned below will disappear. Assuming the issue is not due to a downstream patch or configuration change, one would want to switch from commit bisecting the Ubuntu kernel to commit bisecting the mainline kernel, following the instructions below. You can use the Ubuntu to Mainline kernel version mapping page to ease this transition.

Why did the folders debian and debian.master disappear?

For example, while attempting to commit bisect the Ubuntu kernel for Precise, if one performed the following:

How do I bisect the upstream kernel?

Bisecting upstream kernel versions

All of the upstream kernels are published at http://kernel.ubuntu.com/~kernel-ppa/mainline/. The first step in the bisect process is to find the last "Good" kernel version, followed consecutively in version by the first "Bad" one. That is done by downloading, installing, and testing kernels from there. Once this is done, the next step is commit bisecting upstream kernel versions.

Commit bisecting upstream kernel versions

First, follow the KernelTeam/GitKernelBuild guide to build a new kernel from git. The step you will be doing the most is #9. It is very important to change "--append-to-version=-custom" to help differentiate your kernels.

As an example, let's say testing of the mainline kernel has shown the regression was introduced somewhere between v3.2-rc1 and v3.2-rc2.

Confirmation of mainline test results

It's not required, but if you are new to building a kernel I suggest confirming your results by building both of the mainline builds you narrowed it down to yourself. In our example this is v3.2-rc1 (good) and v3.2-rc2 (bad).
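With the range confirmed, the commit-level bisect in your clone of the kernel tree is started like this:

```shell
cd linux-stable
git bisect start
git bisect good v3.2-rc1   # last version tested without the bug
git bisect bad v3.2-rc2    # first version tested that shows the bug
```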

Basically what that is telling you is that commit fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b is approximately in the middle of v3.2-rc1 and v3.2-rc2 and is a good candidate for testing. You now want to build a kernel up through commit fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b.

To do this, you can use the mainline-build-one script, which can be found at kteam-tools/mainline-build/mainline-build-one.

Build upstream test kernel

The next step is to run the mainline-build-one script. This script will build an upstream kernel that will be able to install and run on a Ubuntu system. Run the mainline-build-one script as follows (assuming you've added kteam-tools/mainline-build to your PATH):

mainline-build-one fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b precise

This will generate a bunch of .debs one directory level above. One of the debs will be something like:

This is the deb you will want to test and see if the bug exists or not.

Update bisect with test results

Depending on your test results, you'll mark this commit as "good" or "bad":

cd linux-stable

If testing was good (i.e. no issues) do the following:

git bisect good fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b

Otherwise if the testing was bad, you would do the following:

git bisect bad fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b

That'll then spit out the next commit to test. Eventually you'll narrow it down and the bisect will tell you which was the first bad commit.

Once the first bad commit is identified, you can then try reverting that one commit and see if that fixes the bug.

How do I reverse bisect the upstream kernel?

A reverse bisect runs the same methodology described above in a slightly different way, to narrow down a potential fix identified upstream. For example, let us assume you had a bug in the Saucy kernel 3.11.0-15.23, and that the issue is not due to something in the downstream/Ubuntu kernel (configuration, out-of-tree patch, etc.). You subsequently tested upstream kernel v3.13-rc5 and found that the issue doesn't happen there. Mapping the Saucy kernel to upstream gives kernel 3.11.10. So, we now know the issue exists at least as early as mainline 3.11.10 and is fixed by v3.13-rc5. The next step is reverse bisecting upstream kernel versions.

Reverse bisecting upstream kernel versions

The first step is to find the last bad upstream kernel version, followed consecutively by the first good one. This is done by downloading, installing and testing mainline kernels from here. So, looking at the list we have:

The midpoint release is v3.12.1-trusty. One would continue to test the successive midpoints of each result, until we have the first bad version, followed consecutively by the first good version. Let us assume this was narrowed down to v3.13-rc4 as the bad, and v3.13-rc5 as the good. The next step is reverse commit bisecting upstream kernel versions.

Reverse commit bisecting upstream kernel versions

Now one will utilize the git skills learned above in a slightly different way, because git was designed with forward bisections in mind; nevertheless, it can be made to perform a reverse bisect. So, once Linus's development tree has been cloned:
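The trick is to swap the labels: tell git the last bad version is "good" and the first good version is "bad", so the bisect converges on the commit that introduced the fix. A sketch:

```shell
git bisect start
git bisect good v3.13-rc4   # actually the last BAD kernel we tested
git bisect bad v3.13-rc5    # actually the first GOOD kernel we tested
```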

Please notice how v3.13-rc4 was marked good (even though we tested it to be bad) and v3.13-rc5 was marked bad. This is intentional. If the commit one builds against next works, one will mark this bad. One will continue this process until the fix commit is identified.

Testing a newly released patch from upstream

Let's assume you have identified a newly released upstream patch that may address your issue, but that hasn't been committed to the upstream development tree yet. Let us take as an example the upstream patch noted here.

Start copying from the line where it notes:

diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c

to the last code line before the double dash:

static int register_count;

Your patch file should be exactly as shown, honoring all spaces, or lack thereof:
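Once saved, the patch can be checked and applied from the top of the kernel tree (the filename here is a placeholder):

```shell
# Dry run first: reports problems without touching any files.
git apply --check video.patch
# Then apply it for real:
git apply video.patch
```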

If for whatever reason the new kernel doesn't boot, it may not be that you did something wrong; it may simply be that the tree won't boot with this commit applied, or that the configuration file choices were not tested against it.

Bisecting via mainline-build-one (Advanced users only)

If you are new to bisecting, then feel free to ignore this section. However, if you will be doing upstream testing often (i.e. not just bisecting your one problem, and then may never bisect again) this is provided as a convenience. You need to set up your system first, which is not described here.

Previously, this article talked about bisecting a Ubuntu Linux kernel. Now you may be wondering how to use mainline-build-one to go about bisecting and building an upstream kernel. This is where you can make use of the mainline build scripts, which are available from the kteam-tools repository http://kernel.ubuntu.com/git/ubuntu/kteam-tools.git. As an example, let's say testing of the mainline kernel has shown the regression was introduced somewhere between v3.2-rc1 and v3.2-rc2. The next section will show you the steps to perform a bisect and build a test kernel.

Log in to a machine that you've configured to build kernels and set up the environment