Located here is a basic decoder for Apple Lossless Audio Codec (ALAC) files. ALAC is a proprietary lossless audio compression scheme; Apple has never released any documentation on the format. What I provide here is a C implementation of a decoder, written by reverse engineering the file format. It turns out that most of the algorithms in the codec are fairly well known. ALAC uses an adaptive FIR prediction algorithm and stores the error values using a modified Rice (Golomb) code. Further details are in alac.c.

Although an encoder is not provided, by using the decoder as a sort of specification it should be fairly straightforward to write one. Studying other lossless audio encoders, such as FLAC, will make the task much easier, although one wouldn't be able to copy the compression algorithms verbatim, as ALAC uses adaptive prediction and FLAC does not. There are, however, a number of academic papers on the subject.

The program located here cannot handle all ALAC files: it only handles mono or stereo, while ALAC allows up to 8 channels. It should be trivial to finish the implementation once I find files I can test it with. Likewise, the decoder only supports 16-bit sample sizes; again, that should be trivial to fix.

The decoder is fairly self-explanatory: it can read an ALAC stream from either a file or stdin, and write it as raw PCM data or as a WAV file to either stdout or a file. In theory one should be able to stream data to the decoder.

I uploaded a binary here. I'm not sure about the legality of this; I will remove it if necessary.

I should say that the binary I uploaded is for testing purposes *only*. I am not sure whether the output is indeed lossless; I really didn't test it much.

I've tested with one big file, encoded as ALAC in an m4a container, and decoded it with the 'hacked' decoder. The sound is great, but compared to the iTunes decoder the bass is not as powerful, some details are slightly less precise, and the treble is also a bit... no, I'm joking, of course.

- first stage: optional interchannel decorrelation
- second stage: FIR prediction
  - (both stages have optional verbatim coding)
  - (both stages have optional fixed low-order predictors)
- third stage: residue coding with rice codes
- also, before the second stage, FLAC has a 'wasted-bits' step which might have an ALAC analogue; it is hard to tell from the source

some differences:

1. interchannel decorrelation can use a linear combination of mid and side channels, whereas FLAC computes only mid and side channels
2. ALAC FIR decorrelation adapts based on the sign of some measurement (have to look into that more)
3. the rice parameter adapts, where FLAC uses precomputed parameters that are also transmitted

from those differences we can infer some things. first, the decode complexity is higher than FLAC's: there is an extra multiply per sample because of 1), 2) causes at least a few extra adds per sample times the predictor order, and 3) is also more complex by an amount I haven't really quantified yet. it is now clear also that the high decode speed on apple hardware is due to significant PPC optimization.

so it looks like apple did not make ALAC because FLAC's decode complexity was too high.

also, even with all these "improvements", the compression ratio of ALAC is similar to (but seems to average slightly lower than) FLAC's. so it wasn't made to blow FLAC out of the water on compression.

what's left? either they just wanted something proprietary (well, that didn't last long) or it has some unknown advantage for hi-res audio that isn't supported by the encoder yet.