Yes, and it does this by not doing the calculations for all the bands, and by only doing a 16- or 8-point transform in the final synthesis stage. Remember that MP3 is a cascade of 32 18-point iMDCTs followed by a 32-band synthesis filterbank (in effect a 32-point cosine transform plus windowing). This was done mainly to reuse hardware and software from Layer 2 decoders. From memory, I think there's a step in between (the alias-reduction butterflies) that would prevent an implementation from simply using a 576-point MDCT.
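
To make that structure concrete, here's a minimal numpy sketch of the per-subband iMDCT stage, assuming long blocks only; it skips the alias-reduction butterflies, the frequency inversion of odd subbands, and short/mixed blocks, and the granule data is made up:

```python
import numpy as np

def imdct(X):
    """Inverse MDCT: N/2 coefficients in, N time-domain samples out."""
    n_half = len(X)                       # N/2 = 18 for MP3 long blocks
    n = 2 * n_half                        # N = 36
    k = np.arange(n_half)[:, None]
    t = np.arange(n)[None, :]
    # x[t] = sum_k X[k] * cos(pi/(2N) * (2t + 1 + N/2) * (2k + 1))
    return X @ np.cos(np.pi / (2 * n) * (2 * t + 1 + n_half) * (2 * k + 1))

window = np.sin(np.pi / 36 * (np.arange(36) + 0.5))  # sine window, long blocks

granule = np.random.randn(32, 18)   # 32 subbands x 18 coefficients (fake data)
overlap = np.zeros((32, 18))        # tails of the previous granule's iMDCTs

subband_samples = np.empty((32, 18))
for sb in range(32):
    x = imdct(granule[sb]) * window
    subband_samples[sb] = x[:18] + overlap[sb]  # 50% overlap-add
    overlap[sb] = x[18:]
# subband_samples is what feeds the 32-band polyphase synthesis filterbank
```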

It's actually a subband decomposition followed by an MDCT on each subband. So when you decode, you do the iMDCT, which gives you a bunch of MP2-style subbands, and then you run a synthesis filterbank to merge the subbands back into a single signal. The downsample-by-2 trick works because of a symmetry in the filterbank's cosine matrix that lets you throw away half of the output samples in exchange for half as much work in the filterbank. Since the filterbank is much slower than the iMDCT, this was used on old systems to speed up decoding. It's not really useful anymore once you get past ~50-60 MHz CPUs.
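
For the symmetry in question: the matrixing step of the polyphase synthesis multiplies each batch of 32 subband samples by a 64x32 cosine matrix, and the output is (anti)symmetric about rows 16 and 48, which is what a half-rate decoder exploits to do half the work. A small numpy check of those identities (the subband samples are made-up data):

```python
import numpy as np

# Matrixing step of the synthesis filterbank:
# V[i] = sum_k cos((16 + i) * (2k + 1) * pi / 64) * S[k],  i = 0..63
i = np.arange(64)[:, None]
k = np.arange(32)[None, :]
M = np.cos((16 + i) * (2 * k + 1) * np.pi / 64)

S = np.random.randn(32)   # one time-slot of subband samples (fake data)
V = M @ S

# The symmetries a half-rate decoder exploits:
assert np.isclose(V[16], 0)
assert np.allclose(V[17:32], -V[15:0:-1])   # V[16+j] = -V[16-j]
assert np.allclose(V[49:64],  V[47:32:-1])  # V[48+j] =  V[48-j]
```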

QUOTE (pdq @ Oct 10 2012, 12:12)

So am I understanding correctly that 99.9% of decoding is independent of the original sample rate, and only at the very end where the wav header is filled in does it even matter?

No, I would say the opposite. In a transform codec, the entire lossy decoding process (everything after Huffman decoding) depends on the sample rate.

Edit: For example, a lot of codecs don't even use the same MDCT sizes at different sample rates (e.g. Vorbis, WMA).
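
As a purely illustrative sketch of that idea, a decoder might pick its transform length straight from the sample rate in the stream header; the mapping below is hypothetical and not any real codec's table:

```python
def pick_mdct_length(sample_rate_hz: int) -> int:
    # Hypothetical rate-to-size mapping, for illustration only;
    # real codecs (Vorbis, WMA) carry their own per-stream setup.
    return 2048 if sample_rate_hz >= 32000 else 1024
```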