None. I don't think KMD fully grasps the implications of the questions that they are asking.

QUOTE

We have established that low level signals can be severely distorted by quantization, and that a nominal application of dither removes much, but not all, of that distortion.

It removes all the audible distortion.

True, there's still a visible relationship between the original signal, and the dither+quantisation - i.e. if you subtract the 8-bit version from the 16-bit version, you can see the difference isn't purely uncorrelated noise...
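This is easy to check numerically. Here's a small sketch (my own illustration, not from anyone in the thread; all names and levels are my choices): quantise a low-level 50 Hz sine to 8 bits with and without TPDF dither, then measure what fraction of the error power sits at harmonics of the signal. Without dither the error is almost pure harmonic distortion; with dither it is spread out as broadband noise.

```python
# Illustrative sketch (assumptions mine): quantise a -40 dBFS 50 Hz sine
# to 8 bits with and without TPDF dither, then measure what fraction of
# the error power lands on exact harmonics of the 50 Hz signal.
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs                      # 1 second -> 1 Hz FFT bins
x = 0.01 * np.sin(2 * np.pi * 50 * t)       # low-level 50 Hz sine
step = 2 / 2**8                             # 8-bit quantisation step

quantise = lambda v: np.round(v / step) * step
# TPDF dither: sum of two uniform variables, triangular PDF, 1 LSB wide each
dither = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * step

err_plain = quantise(x) - x                 # undithered error
err_dith = quantise(x + dither) - x         # total added noise with dither

def harmonic_fraction(e):
    """Fraction of error power at exact multiples of 50 Hz."""
    p = np.abs(np.fft.rfft(e)) ** 2
    return p[::50].sum() / p.sum()

print(f"without dither: {harmonic_fraction(err_plain):.3f}")  # ~1.0, all distortion
print(f"with dither:    {harmonic_fraction(err_dith):.3f}")   # ~0.02, i.e. noise
```

The dithered error still isn't zero - it's just been turned into benign noise, which is exactly the point being argued above.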

The image in post 77 is way too square. A reconstructed dithered waveform should look like a fuzzy sine wave.

The image in post 77 isn't "reconstructed", or oversampled - it's the actual 44.1kHz data. The second one in post 49 was 8x oversampled to 352.8kHz to simulate reconstruction.

Examining 8-bit quantisation with and without dither using typical pop music won't show much. If you're lucky, you'll be able to see+hear the added noise, but it'll be hard to spot any distortion on a graph.

Suitable classical recordings, or anything with quiet passages, will reveal the same as you've seen with the sine wave.

When you say 8x oversampled to simulate reconstruction, can you confirm that that includes simulating the 22 kHz low pass filter?

I would not expect anything to be immediately audible in the pop music either. But to see that statistical analysis done on music would prove that digital audio is fundamentally constrained in amplitude variation, with a statistical variance around the quantization levels, which could affect the listening experience, and is a genuinely original revelation in digital audio engineering. I am a member of the Audio Engineering Society, by the way.

To anyone who has had sophomore-level college (or equivalent) exposure in digital signals and has some knowledge in audiology, this would not be a revelation by any stretch of the imagination.

Before we get all caught up in the constraint of digital audio in amplitude variation, you might want to consider whether the human ear and its listening environment isn't also constrained in amplitude variation.

That you are a member of AES does little to demonstrate any competence in this discussion, by the way. IEEE would have gotten you farther, but we'd then have reason to wonder why you seem to be so confused.

But to see that statistical analysis done on music would prove that digital audio is fundamentally constrained in amplitude variation, with a statistical variance around the quantization levels, which could affect the listening experience, and is a genuinely original revelation in digital audio engineering.

You are aware that the grid for 16 bits is composed of lines that are 0.00026 dB apart at FS, more or less, right? 60 dB below FS they are about 0.1 dB apart. None of those level differences will ever be heard! Below -60 dB even hearing the peak signal above the ambient noise, let alone the microscopic difference, will be a challenge.
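The grid spacing is easy to verify: the dB gap between adjacent levels k and k+1 is just 20·log10((k+1)/k). A quick check (my own arithmetic; I've taken full scale as 32768 counts, i.e. the peak of a signed 16-bit sample - the exact -60 dB figure depends on whether you count from the peak or the full peak-to-peak range, but either way it's a fraction of a dB):

```python
# Quick check of the 16-bit grid spacing in dB (my own arithmetic;
# full scale taken as 32768 counts, the peak of a signed 16-bit sample).
import math

def level_spacing_db(k):
    """dB gap between adjacent sample values k and k+1."""
    return 20 * math.log10((k + 1) / k)

print(f"at full scale: {level_spacing_db(32767):.6f} dB")  # ~0.00026 dB
k_60 = round(32768 * 10 ** (-60 / 20))                     # level nearest -60 dBFS
print(f"near -60 dBFS: {level_spacing_db(k_60):.3f} dB")   # a fraction of a dB
```

Both figures are far below anything audible at their respective levels, which is the point being made.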

But to see that statistical analysis done on music would prove that digital audio is fundamentally constrained in amplitude variation, with a statistical variance around the quantization levels, which could affect the listening experience, and is a genuinely original revelation in digital audio engineering.

The fact that the reconstructed output waveform of a dithered digital signal might tend to follow the quantization levels is irrelevant.

The quantization error is what makes the reconstructed signal fall closer to the quantization levels. When you have no dither, the quantization error is correlated to the signal, which if audible, is undesirable.

When you add dither (while you are still in the digital domain), you end up with a quantized version of your signal plus dither. Since the dither is random and of an appropriate level compared to the quantization step size, you end up with your original signal plus uncorrelated noise.

It's still quantization error that makes the reconstructed dithered signal tend to follow the quantization level, but now the quantization error is based on the signal plus dither instead of just the signal. And if it's audible, it can indeed affect the listening experience - it sounds like noise. But there's nothing new, original or revelatory about it.

I'm not sure why anyone thinks a filter at 22kHz is going to change a square-ish low frequency waveform that much. The example I posted was a low amplitude sine wave at 50Hz; at 8-bits it ends up with square-wave-like transitions at ~250Hz due to quantisation. In this example, you can comfortably fit the first 40 harmonics within the transition band. That's more than you need to make a square wave look something like a square wave.
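The arithmetic behind that is trivial to check (a back-of-envelope sketch of my own, using the figures from the post):

```python
# Back-of-envelope check (my own arithmetic): how many harmonics of the
# ~250 Hz square-wave-like transition pattern fit below Nyquist at 44.1 kHz?
fs = 44100
transition_rate = 250.0        # transition rate from the quantised 50 Hz example
nyquist = fs / 2
n_harmonics = int(nyquist // transition_rate)
print(n_harmonics)             # 88 -> comfortably more than the 40 mentioned
```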

Cheers, David.

Because that is how it is taught in textbooks. Typically a stream of spiky samples is shown going into a reconstruction filter and out the other side comes a smooth wave. As the CoolEdit images show, the reconstruction filter and dither do neither. As your plot shows, the output is a stream of jumps between quantization levels, which illustrates my original point in the listening test topic about a quantization grid, except rather than a grid it is more of a vertically spaced grating.

Because that is how it is taught in textbooks. Typically a stream of spiky samples is shown going into a reconstruction filter and out the other side comes a smooth wave. As the CoolEdit images show, the reconstruction filter and dither do neither. As your plot shows, the output is a stream of jumps between quantization levels, which illustrates my original point in the listening test topic about a quantization grid, except rather than a grid it is more of a vertically spaced grating.

KMD, I think that you may be guilty of a little knowledge being a dangerous thing. And maybe misunderstanding the graphs.

I posted these graphs in response to a claim that the quantisation levels were never visible after reconstruction. The graphs show that, where the original signal is low frequency, and contains only a few levels, it's easy to see that the quantisation levels are visible after reconstruction.

However, this doesn't show a "fault" in reconstruction or digital audio. It shows that I know how to choose a signal that is largely unaffected by reconstruction, i.e. effectively a very low frequency square-ish wave, which really is a square-ish wave, and so after reconstruction still looks like a square-ish wave.

When you say 8x oversampled to simulate reconstruction, can you confirm that that includes simulating the 22 kHz low pass filter?

Yes, of course.

QUOTE

But to see that statistical analysis done on music would prove that digital audio is fundamentally constrained in amplitude variation, with a statistical variance, around the quantization levels

Of course it is. At the sample points, without noise, with an ideal reconstruction filter, it's absolutely constrained. That's what quantisation does. Between the sample points, the reconstructed signal could go anywhere - though for a given input signal there's only one "correct" place for it to go (defined by the sinc function), and that may or more likely may not be on an original quantisation step.
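To make the "between the sample points" part concrete, here's a sketch of my own (treating FFT zero-padding as an ideal band-limited, i.e. sinc, interpolator): reconstruct one cycle of the quantised 50 Hz example at 8x and count how many reconstructed points land exactly on a quantisation level. Essentially only the original sample instants do - one point in eight.

```python
# Sketch (my own illustration): use FFT zero-padding as an ideal
# band-limited (sinc) interpolator, reconstruct the quantised 50 Hz
# example at 8x, and count points landing exactly on a quantisation level.
import numpy as np

fs, f0 = 44100, 50
n = fs // f0                              # 882 samples = one full cycle
t = np.arange(n) / fs
step = 2 / 2**8                           # 8-bit quantisation step
samples = np.round(0.01 * np.sin(2 * np.pi * f0 * t) / step) * step

# 8x band-limited interpolation: zero-pad the spectrum, inverse transform
spec = np.fft.fft(samples)
padded = np.zeros(8 * n, dtype=complex)
padded[:n // 2] = spec[:n // 2]
padded[-(n // 2):] = spec[-(n // 2):]
recon = np.fft.ifft(padded).real * 8      # rescale for the longer transform

rem = recon % step
on_grid = np.minimum(rem, step - rem) < 1e-9
print(f"{on_grid.mean():.1%} on a grid line")   # ~12.5%: the original sample instants
```

Between those instants the interpolator puts the waveform wherever the sinc sum says it must go, which is almost never exactly on a grid line.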

QUOTE

which could effect the listening experience

The ear has no conceivable mechanism to detect (i.e. hear) this, even in signals where the effect is visible.

Between the sample points, the reconstructed signal could go anywhere - though for a given input signal there's only one "correct" place for it to go (defined by the sinc function), and that may or more likely may not be on an original quantisation step.

Since you're talking about time being filled in between points that are sampled, you would expect the waveform to fall between quantization levels. The amplitude in between may still be in error, just as it may be at the sampled points - and likely will be if those sampled points are in error - but this is hardly groundbreaking.

My textbooks on the subject speak clearly about quantization error, so I reject KMD's claim to the contrary. Maybe the problem has to do with glancing at pictures instead of reading the text and equations?

My textbooks on the subject speak clearly about quantization error, so I reject KMD's claim to the contrary. Maybe the problem has to do with glancing at pictures instead of reading the text and equations?

I think the fact that people believe they can look at something for five minutes and make a genuine groundbreaking discovery in a field that's been well understood for decades tells you a lot about 21st century culture.

Or something. I'm probably trying to be philosophical, and I know nothing about that field myself.