Bottom line, some MP3 players may use a combination of analog and digital gain control (see MAS3539F) .

Like I said above, modern devices do have digital mixers. So what? However, the MAS isn't a modern device. It's a 1990s hardware MP3 decoder with no CPU, dating back to a time when mobile CPUs were too slow to process audio. If you're really going to try and argue this, do you want to pick something from at least this millennium?

Like I said before, you're just wrong about "almost all" of these devices using software volume control. I've never seen a portable device that uses software volume control, and apparently you have not either. Or if you have, feel free to tell me about it . . .

I don't follow you. How does truncation imply digital volume control? And how do you know you were hearing it? You were apparently convinced it was happening on iPods, and that is not the case. How certain are you that you're not mistaken here?

Truncation is easy to spot on a spectrum analyzer. Truncation was occurring, but we never determined exactly why.

Truncation can also occur when 16-bit USB interfaces are connected to a computer (as you point out).

I did not say that truncation is happening on iPods, but it may happen on some portable players. There may be someone on the forum who has some data on this.

I don't follow you. How does truncation imply digital volume control? And how do you know you were hearing it? You were apparently convinced it was happening on iPods, and that is not the case. How certain are you that you're not mistaken here?

Truncation is easy to spot on a spectrum analyzer. Truncation was occurring, but we never determined exactly why.

Truncation is ubiquitous on low power devices. Rockbox is the only portable device I know of that bothers to dither (and even then it's off by default since it's basically pointless). But the fact that we (typically) truncate does not imply that we're using digital volume control. In fact, we use truncation and analog volume control.

So now you've edited the claim to read:

Quote

In the two above tests, truncation in the MacBook's digital volume control was a dead give-away and resulted in a perfect score in the ABX tests.

But this makes even less sense to me. Why does truncation have to happen in the digital volume control? How do you know there's even digital volume control at all? It sounds like you're just assuming . . .

Like I said before, you're just wrong about "almost all" of these devices using software volume control. I've never seen a portable device that uses software volume control, and apparently you have not either. Or if you have, feel free to tell me about it . . .

A digital volume control can be implemented in hardware or software. It is often implemented in hardware, and the software merely sends register commands to the hardware (as in Rockbox). Rockbox code exists for MAS3539F hardware, and this is why I used this chip as an example. I assumed your code was written to support this chip. Did you write code for a different chip? If so, which one(s)? What chip is used in the Clip?

Like I said before, you're just wrong about "almost all" of these devices using software volume control. I've never seen a portable device that uses software volume control, and apparently you have not either. Or if you have, feel free to tell me about it . . .

A digital volume control can be implemented in hardware or software. It is often implemented in hardware, and the software merely sends register commands to the hardware (as in Rockbox). Rockbox code exists for MAS3539F hardware, and this is why I used this chip as an example. I assumed your code was written to support this chip. Did you write code for a different chip? If so, which one(s)? What chip is used in the Clip?

Rockbox is an operating system that runs on many dozens of different devices, each with different DACs. The MAS is the first device it ever ran on, being the hardware decoder of one of the very first MP3 players ever made. But you misunderstand the meaning of those registers. The MAS does not provide software access to its PCM data. There are hardware registers for scaling data because it's impossible to do it any other way, and scaling data is often needed (for example, volume normalization). Volume control is supposed to be implemented using a variable gain amplifier (and in fact we do implement it this way).

There are many different clip players. The first used the AS3525 which had a DAC similar to the AS3514 integrated. The newer ones (that are being discussed here) use an unknown chip dubbed (by us) the AS3525v2, which uses a DAC that is related to the AS3543 (although the register maps are not quite identical).

But let me bring this thread back to the question of detecting D/A converter differences in an ABX test. I believe the SNR performance of many common audio products, combined with the ubiquitous use of generous amounts of digital volume control is sufficient to expose detectable differences in an ABX test. This does not mean that the general public is dissatisfied with these devices, and it does not imply that the converter noise is objectionable, but it does indicate that converter differences should be detectable in an ABX test. In fact, this has been my experience.

In the past, I posted the results of 2 ABX tests I conducted here at Benchmark:

Basically, they show the results of their technical test suite when run on 3 different system boards.

Two of the test results are similar and can be summarized as dynamic range = 92.2 dB.

The third is significantly different and shows a dynamic range = 88.8 dB.

This makes the point that the performance of the chip is as you suggested in an earlier post, dependent on the actual on-board implementation.

The MacBook is a laptop if memory serves, and conventional wisdom is that laptop sound is often poorer than desktop sound using the same basic components. Given the variation already in evidence, the MacBook implementation might even be considerably worse than the three examples noted above.

I guess what I'm saying is that I see a presumed diagnosis, but I don't see enough evidence to have an opinion of it, one way or the other.

But let me bring this thread back to the question of detecting D/A converter differences in an ABX test. I believe the SNR performance of many common audio products, combined with the ubiquitous use of generous amounts of digital volume control is sufficient to expose detectable differences in an ABX test. This does not mean that the general public is dissatisfied with these devices, and it does not imply that the converter noise is objectionable, but it does indicate that converter differences should be detectable in an ABX test. In fact, this has been my experience.

In the past, I posted the results of 2 ABX tests I conducted here at Benchmark:

Rockbox is an operating system that runs on many dozens of different devices, each with different DACs. The MAS is the first device it ever ran on, being the hardware decoder of one of the very first MP3 players ever made. But you misunderstand the meaning of those registers. The MAS does not provide software access to its PCM data. There are hardware registers for scaling data because it's impossible to do it any other way, and scaling data is often needed (for example, volume normalization). Volume control is supposed to be implemented using a variable gain amplifier (and in fact we do implement it this way).

There are many different clip players. The first used the AS3525 which had a DAC similar to the AS3514 integrated. The newer ones (that are being discussed here) use an unknown chip dubbed (by us) the AS3525v2, which uses a DAC that is related to the AS3543 (although the register maps are not quite identical).

I fully understand the purpose of the registers in the MAS, and I understand that the MAS does not give the CPU access to the PCM data. But, digital "scaling" is digital volume control. It may not be the primary volume control system, but it still performs part of the volume control function. 6 dB of gain reduction via "scaling" will move the audio 6 dB closer to the noise floor. This digital scaling can also cause truncation if it is not dithered.

A variable gain amplifier may also attenuate the audio without having much impact on the output noise.

In either case, if the signal is decreased without also decreasing the output noise, then the effective SNR is reduced.

An analog pot (not found on many modern devices) has the ability to reduce the signal and the noise simultaneously (thus preserving the SNR of the audio). Digital volume controls and variable gain amplifiers do not replicate the function of a true analog gain control.

2) Downward scaling in order to achieve equal loudness will not be done on the type of source material you would use to demonstrate audible problems arising from the use of a digital volume control. So AFAICT, the only potential issue at play here regarding scaling (in any amount or direction, either downward or upward) is noise caused by truncation. Furthermore, the end-user generally (always?) has the ability to elect not to use this functionality.

Last Edit: 22 February, 2013, 04:21:43 PM by greynol


Rockbox is an operating system that runs on many dozens of different devices, each with different DACs. The MAS is the first device it ever ran on, being the hardware decoder of one of the very first MP3 players ever made. But you misunderstand the meaning of those registers. The MAS does not provide software access to its PCM data. There are hardware registers for scaling data because it's impossible to do it any other way, and scaling data is often needed (for example, volume normalization). Volume control is supposed to be implemented using a variable gain amplifier (and in fact we do implement it this way).

There are many different clip players. The first used the AS3525 which had a DAC similar to the AS3514 integrated. The newer ones (that are being discussed here) use an unknown chip dubbed (by us) the AS3525v2, which uses a DAC that is related to the AS3543 (although the register maps are not quite identical).

I fully understand the purpose of the registers in the MAS, and I understand that the MAS does not give the CPU access to the PCM data. But, digital "scaling" is digital volume control. It may not be the primary volume control system, but it still performs part of the volume control function. 6 dB of gain reduction via "scaling" will move the audio 6 dB closer to the noise floor. This digital scaling can also cause truncation if it is not dithered.

You're mixing up truncation and scaling. Truncation (with or without dithering) happens no matter what. All of these devices operate at 32 bit internally (except truly ancient devices like the MAS). You always truncate your 32 bit PCM coming out of your MP3/FLAC/whatever decoder down to 16 bit (or 24 bit in the case of PCs). Note that this happens no matter how volume control is implemented. So it's no surprise you see the signs of that under a scope. A quality 16 bit device should basically be expected to do that since dithering is uncommon.

Now you can also scale the waveform. In Rockbox we do this too, although only by small amounts (a few dB for EQ precut, replaygain, etc). This doesn't change the situation with truncation, since that still has to happen. Finally, at the end you have volume control, which is done by changing gain to preserve SNR.
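The difference between plain truncation and dithered requantization described above can be illustrated with a minimal sketch (Python for illustration only, not Rockbox code; the sample values are made up):

```python
import random

def truncate_to_16(sample_32):
    """Drop the low 16 bits of a signed 32-bit sample (plain truncation)."""
    return sample_32 >> 16

def dither_to_16(sample_32):
    """Requantize with TPDF dither: add triangular noise spanning
    +/-1 LSB of the 16-bit target before truncating."""
    tpdf = random.randint(0, 0xFFFF) + random.randint(0, 0xFFFF) - 0xFFFF
    return (sample_32 + tpdf) >> 16

# A very quiet signal: plain truncation collapses a whole range of
# 32-bit values onto the same 16-bit code (correlated error), while
# dither decorrelates the error at the cost of a slightly higher
# noise floor.
quiet = [int(1000 * (i % 7 - 3)) for i in range(8)]  # small 32-bit values
print([truncate_to_16(s) for s in quiet])  # collapses to just 0 and -1
print([dither_to_16(s) for s in quiet])    # occasionally toggles by 1 LSB
```

This is why truncation on its own says nothing about how volume control is implemented: the requantization step happens regardless.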

A variable gain amplifier may also attenuate the audio without having much impact on the output noise.

In either case, if the signal is decreased without also decreasing the output noise, then the effective SNR is reduced.

An analog pot (not found on many modern devices) has the ability to reduce the signal and the noise simultaneously (thus preserving the SNR of the audio). Digital volume controls and variable gain amplifiers do not replicate the function of a true analog gain control.

Yes of course. Like I said before:

Quote

However, while they'll all give you nearly 16 bit limited performance into a line out, they tend to have fairly limited amplifiers (ignoring Apple, Sandisk which are very good). They also have essentially fixed noise floors that are independent of volume. So a more useful approach is to think about them in terms of the impedance and sensitivity that will give good performance. The noise floor puts a limit on sensitivity, since very high sensitivity headphones will produce more acoustic noise, while the finite output impedance limits how low of an impedance can be driven without distortion.

The fixed noise floor kills you. So while analog gain is better than digital, it doesn't save you from running into the noise floor at low volumes. So SNR does decrease, although slower than 6 dB/bit of digital. The flip side of this, though, is that going to more effective bits doesn't really help you (unless you do it by reducing the device's analog noise floor). My point, therefore, was that since we operate under these constraints at all but the very high end, worrying about DAC design is not very productive. You need better analog amplifiers after the DAC or it's all for nothing.

The fixed noise floor kills you. So while analog gain is better than digital, it doesn't save you from running into the noise floor at low volumes. So SNR does decrease, although slower than 6 dB/bit of digital. The flip side of this, though, is that going to more effective bits doesn't really help you (unless you do it by reducing the device's analog noise floor). My point, therefore, was that since we operate under these constraints at all but the very high end, worrying about DAC design is not very productive. You need better analog amplifiers after the DAC or it's all for nothing.

I agree, and yes amplifiers are sometimes a limiting factor.

If digital gain control is used, the quality of the converter must increase to maintain the same level of performance that could be achieved with an analog gain control following the converter. Every 6.02 dB of gain reduction that is used in normal listening will require 1 additional bit of effective resolution. If we start with 129 dB SNR (as in the DAC2), then we can use generous amounts of digital volume control without impacting the overall performance of our playback system (the power amplifier almost always becomes the limiting factor).

But if we start with a laptop that has a 95 dB SNR and use the digital volume control in iTunes or Windows Media Player or the OS, then the situation is very different. iTunes and WMP do not control the gain of an amplifier following the internal DAC; they control the gain in software (digital gain control). These players and most other computer audio applications place significant demands upon the performance of the built-in DACs.

In a typical media server system (line out to amplifier, or headphone jack to headphones), the overall performance will suffer if significant use of the software volume control is necessary to achieve a normal listening level. You can't start with a 95 dB DAC and apply 20 dB attenuation and expect 16-bit performance. In a practical system, few users want to operate at 100% volume to reach a normal listening level (even though this would give the best SNR). The sensitivity of the amplifier or headphones demands that adjustments are made.

Many users are perfectly satisfied with the resulting (95-20)=75 dB effective SNR. However, it is not unreasonable to expect that people will notice a difference when connecting a 129 dB converter to the laptop media server and amplifier. The differences should be detectable in ABX tests. I posted ABX test results showing the effects of truncation (different issue), but similar tests could and should be run on consumer products operating at typical volume settings.
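The arithmetic above can be sketched as a one-line helper (Python for illustration; the 95 dB, 129 dB, and 20 dB figures are the ones quoted in this thread):

```python
def effective_snr(dac_snr_db, digital_attenuation_db):
    """Effective SNR when attenuation is applied in the digital domain:
    the signal drops but the converter's analog noise floor does not,
    so every dB of digital attenuation costs a dB of SNR."""
    return dac_snr_db - digital_attenuation_db

# Laptop DAC with the iTunes/WMP volume at -20 dB:
print(effective_snr(95, 20))   # 75 dB -- well below 16-bit (~96 dB)

# High-SNR converter with the same 20 dB of digital attenuation:
print(effective_snr(129, 20))  # 109 dB -- still beyond 16-bit
```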

But if we start with a laptop that has a 95 dB SNR and use the digital volume control on iTunes or Windows Media Player or the OS then the situation is very different. iTunes and WMP do not control the gain of an amplifier following the internal DAC, they control the gain in software (digital gain control).

Is that actually true? Seems like such a dumb design choice I doubt it, but I don't know much about how the guts of Windows work.

But if we start with a laptop that has a 95 dB SNR and use the digital volume control on iTunes or Windows Media Player or the OS then the situation is very different. iTunes and WMP do not control the gain of an amplifier following the internal DAC, they control the gain in software (digital gain control).

Is that actually true? Seems like such a dumb design choice I doubt it, but I don't know much about how the guts of Windows work.

Yes, the gain control is software-based DSP.

Software-based digital gain control is easy to implement on a fast processor. A stereo gain-control function places almost no load on the CPU.

From our discussion, it looks like some MP3 players have attempted to do things a little differently, but I am not sure the results are much different (due to the noise limitations of the variable gain amplifiers). If noise does not change when the volume control is adjusted, then the results are nearly identical (in terms of effective SNR).
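The "almost no load on the CPU" claim is easy to see: a digital gain control is one multiply and one shift per sample. A minimal sketch (Python, assuming interleaved signed 16-bit stereo samples and a hypothetical Q15 fixed-point gain where 32768 is unity):

```python
def apply_gain_q15(samples, gain_q15):
    """Scale interleaved stereo samples by a Q15 fixed-point gain
    (32768 == unity). One multiply-and-shift per sample; the final
    shift is where undithered truncation sneaks in."""
    return [(s * gain_q15) >> 15 for s in samples]

half = 16384  # about -6.02 dB in Q15
print(apply_gain_q15([1000, -1000, 32767, -32768], half))
# -> [500, -500, 16383, -16384]
```

The trade-off is not CPU cost but the SNR loss discussed above, since the attenuated signal sits closer to the converter's fixed noise floor.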

An analog pot (not found on many modern devices) has the ability to reduce the signal and the noise simultaneously (thus preserving the SNR of the audio). Digital volume controls and variable gain amplifiers do not replicate the function of a true analog gain control.

Most measurements of the performance of an analog pot are made under unrealistic assumptions. They presume that everything following the pot is practically noiseless, which is most definitely not the case in the real world.

Slide 13 in http://www.esstech.com/PDF/digital-vs-anal...ume-control.pdf is an example of this unrealistic assumption. In fact the equipment following the pot has far more noise than is shown in the right hand FFT plot. The actual system noise in the equipment following the pot may be the same or even 20 dB higher than the left hand plot!

But let me bring this thread back to the question of detecting D/A converter differences in an ABX test. I believe the SNR performance of many common audio products, combined with the ubiquitous use of generous amounts of digital volume control is sufficient to expose detectable differences in an ABX test. This does not mean that the general public is dissatisfied with these devices, and it does not imply that the converter noise is objectionable, but it does indicate that converter differences should be detectable in an ABX test. In fact, this has been my experience.

In the past, I posted the results of 2 ABX tests I conducted here at Benchmark:

I read it as saying that only the analog inputs of the ALC 888 have an analog volume control. It appears to me that the path from the DAC to its output terminal has no analog volume control. Only muting (an off/on analog switch) seems to be shown.

Most measurements of the performance of an analog pot are made under unrealistic assumptions. They presume that everything following the pot is practically noiseless, which is most definitely not the case in the real world.

Slide 13 in http://www.esstech.com/PDF/digital-vs-anal...ume-control.pdf is an example of this unrealistic assumption. In fact the equipment following the pot has far more noise than is shown in the right hand FFT plot. The actual system noise in the equipment following the pot may be the same or even 20 dB higher than the left hand plot!

We fully agree on this.

Low impedances and/or high signal levels are required to achieve low noise. An active variable gain stage can be constructed from an opamp and a linear pot. This configuration can provide 20 dB of adjustment while maintaining a 110 to 120 dB SNR. It takes a low-noise opamp and low resistor values. We do this in many of our products. To your point, this isn't going to happen in a low-power device.

I read it as saying that only the analog inputs of the ALC 888 have an analog volume control. It appears to me that the path from the DAC to its output terminal has no analog volume control. Only muting (an off/on analog switch) seems to be shown.

But if we start with a laptop that has a 95 dB SNR and use the digital volume control on iTunes or Windows Media Player or the OS then the situation is very different. iTunes and WMP do not control the gain of an amplifier following the internal DAC, they control the gain in software (digital gain control).

Is that actually true? Seems like such a dumb design choice I doubt it, but I don't know much about how the guts of Windows work.

Yes, the gain control is software-based DSP.

Software-based digital gain control is easy to implement on a fast processor. A stereo gain-control function places almost no load on the CPU.

Yes but it puts a lot of strain on the DAC by reducing the SNR for no reason at all. It would be very simple to implement a more intelligent mixer that simply scaled the analog volume to maximize SNR. Do you have some documentation indicating that MS or Apple chose not to implement such an obvious feature?

I agree that the proper ALC 888 block diagram shows an analog volume control on its output.

Therefore it seems improper to unconditionally attribute any possible misbehavior of a device containing the ALC 888 to a digital volume control.

The ALC 888 block diagram may show an analog volume control, but this does not necessarily mean that audio applications have access to this feature. In our tests, the iTunes volume control was used to set the level at -20 dB when playing from the MacBook's headphone output.

iTunes has a digital volume control. Prior to iTunes 7.X the volume control was 16-bits. With iTunes 7.X and up, the volume control is 24-bits dithered. Starting with iTunes 7.X, iTunes establishes a 24-bit connection to core audio. Prior to 7.X this connection was 16-bits. More information is available here:

The Mac's audio path looks like this when playing audio with iTunes, and is very similar when using most other audio applications:

1) iTunes applies a digital volume control (16-bit truncating before version 7.X, 24-bit dithered starting with 7.X)
2) If the sample rate of the file being played does not match the sample rate set in the audio-midi control panel, then sample rate conversion is applied to the output of iTunes.
3) After sample rate conversion, the audio is mixed with other audio sources (midi-generated audio, microphone input, and other wave inputs).
4) The digitally mixed audio then reaches the Mac's master gain control (another digital gain control).
5) Audio is then sent to the output device.

In the case of digital interfaces (USB and optical are both available on the MacBook), there is no opportunity for an analog gain control. An analysis of the data shows that digital gain control is active when feeding digital outputs. In some cases, this digital gain control is undithered 16-bit (older versions of iTunes). In other cases, it is undithered 24-bit (many other audio applications), or dithered 24-bit (some newer audio applications, including iTunes 7.X and up). The bit depth delivered to the output is a function of the playback hardware, the bit-depth of the source, the version of iTunes, the OS version, and certain settings in the audio-midi control panel. It is often very hard to achieve a transparent data path. For this reason, specialized media players have been developed that bypass much of the audio processing, while taking control of the system (audio-midi) settings. The latest of these players is the JRiver Media Center for Mac (released this week). JRiver turns off mixing, SRC, and other processing, to deliver bit-accurate transmission of the audio to the digital outputs.

In the tests we ran (Macbook vs. DAC1), the gain control was 16-bit undithered due to the fact that iTunes 7.X was not available at the time. The iTunes volume control was used in the test. The existence of 16-bit truncation was verified with an AP System 2, and can be heard in the fade-out at the end of the samples I posted on this forum.

While some on this forum have pointed out the fact that MP3 players often use variable gain amplifiers as a primary gain control, the situation is different in computers. I had assumed that MP3 players followed the same topology used in computers, but there are some differences.

The need to run a variety of audio producing applications simultaneously demands that a computer use digital volume controls, digital mixing, and sample-rate conversion.

The Windows audio system is very similar to the topology of the Mac audio system, and I will not repeat the overview here.

In the tests we ran (Macbook vs. DAC1), the gain control was 16-bit undithered due to the fact that iTunes 7.X was not available at the time. The iTunes volume control was used in the test. The existence of 16-bit truncation was verified with an AP System 2, and can be heard in the fade-out at the end of the samples I posted on this forum.

Can this be summarized as saying that those tests wouldn't produce the same results if repeated today using up-to-date iTunes software?

While some on this forum have pointed out the fact that MP3 players often use variable gain amplifiers as a primary gain control, the situation is different in computers. I had assumed that MP3 players followed the same topology used in computers, but there are some differences.

The need to run a variety of audio producing applications simultaneously demands that a computer use digital volume controls, digital mixing, and sample-rate conversion.

Which happens to be the same working condition as "smart" devices. On an iPhone, for example, I verified that you can even listen to the audio from a music player application in the background of a phone call if both are routed to the headphone out, and in this case both are controlled by the same (digital or analog?) master volume.