The use of convert2bed for this file offers a 9.1x speed improvement. Other large BAM files show similar conversion speedups.

Further time reductions come from the bam2bedcluster and bam2starchcluster scripts (TBA), which use GNU Parallel or a Sun Grid Engine job scheduler to break the conversion task down by chromosome, reducing conversion time even further.
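
The per-chromosome split can be sketched with a plain shell loop standing in for the GNU Parallel or SGE scheduling. The commented line shows the shape of the real per-chromosome work; reads.bam is a hypothetical input, and the echo is a placeholder so the sketch runs standalone:

```shell
# Fan out one conversion job per chromosome, then wait for all of them.
for chr in chr1 chr2 chrX; do
  (
    # samtools view -b reads.bam "$chr" | convert2bed --input=bam > "$chr.bed"
    echo "converted $chr"   # placeholder for the real pipeline
  ) &
done
wait
```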


I wrote a data extraction utility which uses PolarSSL to export a Base64-encoded SHA-1 digest of some internal metadata (a string of JSON-formatted data), to help validate archive integrity:

$ unstarch --sha1-signature .foo
7HkOxDUBJd2rU/CQ/zigR84MPTc=

So far, so good.

But now I want to validate that the metadata are being digested correctly through some independent means, preferably via the command line, so that I can perform regression testing. I can use the openssl, xxd, and base64 tools together to check that I get the same answer:
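
For instance, a pipeline like the following — shown here on the literal string abc rather than the actual exported JSON metadata, and using printf so that no trailing newline sneaks into the digest:

```shell
# SHA-1 the input, pull out the hex digest, convert it back to binary
# with xxd, then Base64-encode it
printf 'abc' \
  | openssl sha1 \
  | awk '{ print $NF }' \
  | xxd -r -p \
  | base64
# -> qZk+NkcGgWq6PiVxeFDCbJzQ2J0=
```

If PolarSSL and this pipeline digest the same bytes, both should print the same Base64 string.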

As a note to myself: I end up stripping the trailing newline from the JSON output of unstarch because this is what the PolarSSL library ends up digesting. This very nearly had me doubting whether PolarSSL was working correctly, or whether my command-line test was correct!


PolarSSL is a C-based cryptography and SSL library released under the GPL, which makes it a good fit for BEDOPS, where I plan to use it to take quick SHA-1 hashes of metadata and so help validate the integrity of an archive.

I’ve been testing it out in Mac OS X 10.8 and it seems pretty straightforward. Here’s a simple project that hashes the string abc:
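
The heart of such a project comes down to two PolarSSL calls, the one-shot sha1() and base64_encode(). A minimal sketch (error handling kept to a minimum; compile and link against PolarSSL):

```c
#include <stdio.h>
#include <string.h>

#include "polarssl/sha1.h"
#include "polarssl/base64.h"

int main(void)
{
    const char *input = "abc";
    unsigned char digest[20];           /* SHA-1 digests are always 20 bytes */
    unsigned char encoded[64];
    size_t encoded_len = sizeof(encoded);

    /* one-shot SHA-1 of the input buffer */
    sha1((const unsigned char *) input, strlen(input), digest);

    /* Base64-encode the binary digest; encoded_len is updated in place */
    if (base64_encode(encoded, &encoded_len, digest, sizeof(digest)) != 0) {
        fprintf(stderr, "output buffer too small\n");
        return 1;
    }

    printf("%.*s\n", (int) encoded_len, encoded);  /* qZk+NkcGgWq6PiVxeFDCbJzQ2J0= */
    return 0;
}
```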


I plan to use a magic number in the second major release of our BEDOPS suite to uniquely identify starch-formatted archive files.

Looking at the first few bytes of the archive helps here because I plan to move the metadata to the back of the archive file, and it would be expensive to seek to the end of the file just to determine its type. I figure a 64-byte header is enough to hold extra data that identifies the file type and points the unstarch and starchcat tools to the byte in the file where the metadata is kept:

This image describes what I’m planning for Starch v2. Basically, we reserve a 64-byte header at the front of the archive that contains the following four items:

Magic number – The magic number ca5cade5 identifies this as a Starch v2-formatted file.

Offset – A zero-padded, 16-digit unsigned integer that marks the byte offset, measured from the start of the file (the 64-byte header included), at which the archive’s metadata begins.

Hash – Here I make a SHA-1 hash of a concatenated string made from the offset value and metadata string. This value helps validate the integrity of the archive metadata and the offset values. (At a later time, we will add hashes of the chromosome streams to the metadata, to allow full archive validation).

Reserved – We keep 20 bytes of space free, in case we need it.

After these 64 bytes, the compressed, per-chromosome streams start, and we then wrap up with the metadata at the end of the file.

Initially, the header is written as the magic number followed by zeros. Once all the streams are prepared and the metadata is ready to hash, at the conclusion of creating the archive, the calculated offset and hash values are written over the corresponding parts of the header.

To assist with picking the right magic number, Ned Batchelder put together a Python script to generate all the hex words possible from a small subset of the English dictionary — words like deadbeef, dec0ded and 0ddba11 which can be used as magic numbers to identify a file type. Thanks, Ned! So far, I’m liking the magic word ca5cade5 as it has a nice biological flavor to it.
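
This isn’t Ned’s actual script, but the substitution trick it relies on can be sketched as a one-liner — the candidate word list and the o/l/i/s substitution table here are my own illustration:

```shell
# Map leet-style letters onto digits, then keep only words whose
# translated form is purely hexadecimal
printf '%s\n' cascades deadbeef decoded oddball python \
  | tr 'olis' '0115' \
  | grep -E '^[0-9a-f]+$'
# -> ca5cade5, deadbeef, dec0ded, 0ddba11 (python drops out)
```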