I don't think it is implemented in any computer decoder or player. However, I also don't think it is worth much effort. Some players, such as fb2k, include ATH-shaped noise shaping profiles that can achieve an equivalent dynamic range of around 110 dB for 16-bit audio.

AFAIK fb2k uses SSRC's noise-shaping dither, with some improvements to avoid clipping. There are a few different noise shaping profiles to choose from, but for each one the spectral shape of the dither is fixed, not dynamic. As I said, I don't think dynamic noise shaping is implemented in any of the available computer decoders or players.
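A static noise-shaped ditherer of this kind can be sketched roughly as follows. This is only a toy first-order error-feedback shaper of my own construction, not fb2k's or SSRC's actual code; real profiles use higher-order FIR filters fitted to the absolute threshold of hearing (ATH):

```python
import random

def dither_quantize(samples, bits=16, shape=1.0):
    """Quantize float samples in [-1, 1) to `bits` bits with TPDF dither
    and first-order error-feedback noise shaping (shape=0 disables it).
    Toy static shaper; real ATH profiles use higher-order FIR filters."""
    step = 2.0 / (1 << bits)  # quantizer step size (1 LSB)
    err = 0.0                 # previous quantization error
    out = []
    for x in samples:
        # Feed back the previous error; with shape=1 the quantization
        # noise is filtered by (1 - z^-1), pushing it toward high
        # frequencies where hearing is less sensitive.
        v = x - shape * err
        # TPDF dither: sum of two uniform variables, +/-1 LSB peak
        d = (random.random() - random.random()) * step
        q = round((v + d) / step) * step
        err = q - v           # error after dithering and rounding
        out.append(q)
    return out
```

With a fixed `shape` (or a fixed filter in the higher-order case) the noise spectrum is the same regardless of the program material, which is exactly what "static" means here.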

I read at least one of those articles a couple of months ago (when I was working on noise shaping algorithms for WavPack's lossy mode) and you're right, they look very promising. I really think they're a much better way to go than adding all that energy above 15 kHz (which is what the static algorithms do), especially for coarse quantization (like 8-bit or WavPack lossy). Unfortunately, I was not able to find enough information in the papers to implement the algorithms, and they might be very computationally intensive.

I'm sure you know this better than I do, but a single dynamic-range measurement is no proof of the audible superiority of a noise shaping implementation.

As such, I'm a little surprised by your "it's not worth the effort" comment and would appreciate it if you could elaborate, in case I have misunderstood you.

I said "equivalent" dynamic range. SSRC's strong ATH noise shaping with 16-bit output has a measured dynamic range of only around 70 dB, but an audibly equivalent dynamic range of around 110 dB, purely due to the ATH-shaped dither noise. I can't hear this dither noise on my setup even with the amp volume knob at maximum, so...


I think listening tests of adaptive noise shaping have already indicated its possible audible superiority over implementations with a constant noise shaping curve (regardless of what the average dynamic-range measurements imply).

As bryant says, this is probably clearly noticeable and worth the effort for low-resolution audio, for example when dithering to 8-bit. But for 16-bit audio under real-world listening conditions, I think a good static noise shaping profile is already overkill.

It is significantly more computationally intensive than the current fb2k ditherer, but still fast enough for use in (for example) an audio player, as far as I can gather from the papers.

I haven't got the time to pursue this right now, but if you need implementation details, contact the paper authors and, for example, ask whether they're willing to release the source code used for the papers' tests.

I was under the impression that dither needs to be evaluated along with a signal, not by itself? Am I mistaken? Not that I'm trying to refute what you say, so please don't take this the wrong way.

The effectiveness of a dither algorithm (that is, how well it removes quantization distortion whenever requantization takes place) can only be tested in the presence of a signal that gets requantized.

But once you have a dither algorithm that does this properly, you can quickly evaluate the audibility of the dither noise by listening to the noise alone, without any signal present.

The less audible this noise is, the better it will be masked by the signal, and the better the system will resolve low-level signals, since those signals will more easily rise above the dither noise floor.

Those dynamic noise shaping algorithms are based on the idea of adjusting the spectrum of the dither as a function of the signal, in order to maximize the masking of the dither noise by the signal.


Adaptive noise shapers have a big disadvantage called noise modulation. That's why you also find a lot of papers that advise against using adaptive noise shapers: when the noise becomes audible, it is much more disturbing than a constant noise.

This constant noise can be adapted from title to title or album to album depending on the noise footprint, but it should be constant within a title/album.
