My textbooks on the subject speak clearly about quantization error, so I reject KMD's claim to the contrary. Maybe the problem has to do with glancing at pictures instead of reading the text and equations?

KMD has posted this claim before. When I asked him which textbook exactly he was referring to, he declined to answer. I think he should either cite his source, or stop arguing about it.

The 8-bit signal (which includes quantisation error) was analysed at the end of this process...http://www.hydrogenaudio.org/forums/index....st&p=790120...which was meant to show that individual quantisation steps can kind-of survive reconstruction filtering. I guess it also shows that dither "works", though there is no need for those extra steps if that's what you want to prove.
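
As an aside, the "dither works" point can be sketched in a few lines of numpy (a toy example of my own, not the code from the linked post): a tone only a quarter of an LSB in amplitude rounds to pure silence under plain 8-bit quantisation, but once TPDF dither is added before the quantiser it survives in the average over many passes.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
n = np.arange(480)
step = 2.0 / 2**8                      # 8-bit step over a +/-1.0 full-scale range
x = 0.25 * step * np.sin(2 * np.pi * 1000 * n / fs)   # tone well below 1 LSB

def quantize(sig):
    """Round to the nearest 8-bit level."""
    return np.round(sig / step) * step

# Without dither the tone is below half a step everywhere, so it rounds to silence.
hard = quantize(x)

# TPDF dither (sum of two uniform +/-0.5 LSB sources) added before rounding
# makes the quantiser unbiased; averaging many dithered passes recovers the tone.
passes = [quantize(x + rng.uniform(-step/2, step/2, x.size)
                     + rng.uniform(-step/2, step/2, x.size))
          for _ in range(2000)]
soft = np.mean(passes, axis=0)

print(np.max(np.abs(hard)))            # 0.0: the undithered tone is gone
print(np.corrcoef(x, soft)[0, 1])      # close to 1: the dithered tone survives
```

The averaging is just a stand-in for what the ear/spectrum analyser does over time: the dithered error is noise, not a deterministic function of the signal.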

The quantization grid is not caused by sampling; it is caused by the interaction of quantization levels with sampling points. It may not be perceptible, but there is no doubt that a waveform created from a digital file must be formed from points selected from a finite number of pre-determined, regularly spaced co-ordinates. The digital file is formed from regularly spaced sampling points and regularly spaced quantization levels, therefore the waveform derived from it must have a corresponding regularity.

I work more in commercial TV, where video has been digitized since the mid '70s. If what you're describing existed, you would not be able to display diagonal lines, particularly nearly vertical ones. I assure you that is not the case. The time resolution is infinitely variable. OK, I can only measure reliably to a nanosecond, but for all practical purposes....

Actually, images and video resist the use of "ideal" filters, so aliasing is quite common. Those diagonal lines are often quite "steppy".
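
A toy sketch of why (my own illustration, not any real camera pipeline): point-sampling an ideal diagonal edge on a pixel grid can only produce 0s and 1s, i.e. a staircase, while even a crude box prefilter (area averaging per pixel) produces intermediate greys that soften the steps.

```python
import numpy as np

n = 8
ys, xs = np.mgrid[0:n, 0:n].astype(float)

def edge(x, y):
    # Ideal, infinitely sharp diagonal edge: 1 above the line y = 0.3*x.
    return (y > 0.3 * x).astype(float)

# Point sampling with no prefilter: every pixel is exactly 0 or 1,
# so a shallow diagonal edge renders as a staircase.
hard = edge(xs, ys)

# 4x4 area supersampling per pixel approximates a box prefilter:
# pixels the edge crosses take intermediate grey values.
k = 4
offsets = (np.arange(k) + 0.5) / k - 0.5
soft = np.mean([edge(xs + dx, ys + dy) for dy in offsets for dx in offsets], axis=0)

print(np.unique(hard))                   # only 0.0 and 1.0: the "steppy" case
print(np.any((soft > 0) & (soft < 1)))   # True: greys appear along the edge
```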

It's less of a problem these days for still images, because the lens itself often acts as a low-pass filter for the x-megapixel sensor.

You mean the lens on the sensor within the chip, or the main lens? I think there are filters within the sensor. I have a vague recollection, but I cannot remember what the filters and lenses on the sensor actually do.

There is usually a filter in front of the sensor. But you can't get the "flat to within a few percent of Nyquist / kill everything above Nyquist" response we're used to in audio.

A lens is a low-pass filter, so in practice it serves as the anti-aliasing filter if nothing else does. Likewise, the pixels on a camera tend to be quite wide relative to their pitch, so they'll integrate over a finite width and thus further low-pass the signal.

Often, though, some degree of aliasing is tolerated in imaging systems because it's difficult to build optical filters with steep roll-offs. It's quite common to design imaging systems with the spot size of the optics matched to 2x the pixel size, and then to just let the finite pixel width low-pass away much of the content at frequencies that would alias.
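
For what it's worth, the "finite pixel width" part is easy to quantify (toy numbers, not any particular sensor): a square pixel aperture of width w has an MTF of |sinc(f*w)|, and even with a 100% fill factor it still passes about 64% at the Nyquist frequency of the pixel pitch, nowhere near an audio-style brick wall.

```python
import numpy as np

def aperture_mtf(f, w):
    # MTF of an ideal square pixel aperture of width w at spatial frequency f.
    # np.sinc(x) is the normalized sinc sin(pi*x)/(pi*x).
    return np.abs(np.sinc(f * w))

pitch = 1.0                  # pixel pitch (arbitrary units)
width = 1.0                  # aperture width; width == pitch means 100% fill factor
f_nyquist = 1 / (2 * pitch)  # Nyquist spatial frequency for that pitch

print(aperture_mtf(f_nyquist, width))  # 2/pi, about 0.64: a very gentle roll-off
```

So the pixel aperture alone only tames, never removes, the frequencies above Nyquist, which is why some staircase aliasing gets through.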