Each of the additional waves only needs a couple of points to represent its frequency. As per usual, it appears that the CA article really doesn't get what the Nyquist theorem is saying... The whole point is that a perfect DAC will produce a perfect 8 kHz signal when sampled at 16 kHz.

This key mistake in citing the Nyquist theorem leads to no end of potential trouble. The sampling frequency has to be greater than 2x the maximum signal frequency you want to reproduce.
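A minimal sketch of why the strict inequality matters (the numbers are illustrative): a sine at exactly half the sampling rate lands on the same two phase points every cycle, so the captured amplitude depends entirely on the starting phase.

```python
import math

def sample(freq_hz, fs_hz, phase_rad, n_samples):
    """Sample a unit-amplitude sine at the given rate and phase."""
    return [math.sin(2 * math.pi * freq_hz * n / fs_hz + phase_rad)
            for n in range(n_samples)]

# An 8 kHz sine sampled at exactly 16 kHz (fs = 2f, not > 2f):
# every sample lands on the same two points per cycle, so what gets
# recorded depends entirely on the phase.
zeros = sample(8000, 16000, 0.0, 8)          # every sample is ~0
peaks = sample(8000, 16000, math.pi / 2, 8)  # samples alternate +1 / -1
```

With fs strictly greater than 2f the sample points walk through the cycle, and this ambiguity disappears.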

Agreed, but what people do not understand is that this leads to a perfect storage of the signal (within the quantum of the bits storing the amplitude). It is not an approximation that improves with higher sampling rates.
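A rough numeric sketch of that claim (the parameters are arbitrary): reconstructing a value *between* the stored samples with the Whittaker-Shannon interpolation formula recovers the original band-limited signal. The only error below comes from truncating the sum, not from the sampling itself.

```python
import math

def sinc(x):
    """Normalised sinc: sin(pi x) / (pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs = 8000.0      # sample rate, Hz
f = 1000.0       # tone well below fs/2
N = 4000         # number of stored samples
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]

# Rebuild the signal exactly halfway between two samples using the
# Whittaker-Shannon formula (truncated, hence a tiny residual error).
t0 = 2000.5 / fs
rebuilt = sum(samples[n] * sinc(fs * t0 - n) for n in range(N))
true_value = math.sin(2 * math.pi * f * t0)
```

Raising the sampling rate would not make `rebuilt` any closer to `true_value`; only the truncation of the (ideally infinite) sum limits it here.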

While I agree that this is a science-based forum and that it is appropriate to correct this small imprecision we're all used to repeating (saying 2x instead of >2x)...

... can we stop arguing about it at last? There have been what, five posts about it already?

Back on topic: the number was mentioned in relation to the fact that the signal can be properly reconstructed from its samples, which is something we all agree on. For this, the images posted by xnor are quite representative of that fact.

We can talk about the time needed in the case of frequencies nearing half the sampling rate, as in pdq's last post, but let's stop arguing about ">" versus ">=".

As you go down, the red trace is the sum of the first 2, the next 2, the next 2, and then (I think) the next 100 images (the plot was made with a different script many years ago), and the resulting waveform with the images, all of them above fs/2, is shown behind the red waveform.

Notice how only images cause it to square off.

I'm not going to figure out the amplitudes for a triangle right here and now today (it should be easy, of course), but you're welcome to take the script below, which is an enhanced version of the one that made the plot above, fix it, and post that here.

But I think this makes the point that drawing straight lines around plots of individual samples doesn't show what a lot of people think it does.
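Not the original script, but a minimal sketch of the same idea: band-limited partial sums for a square wave (odd harmonics, amplitudes 1/k) and for the triangle asked about above (odd harmonics, amplitudes 1/k² with alternating sign).

```python
import math

def square_partial(t, k_max):
    """Partial Fourier sum of a unit square wave: odd harmonics, 1/k."""
    return (4 / math.pi) * sum(math.sin(2 * math.pi * (2 * k + 1) * t)
                               / (2 * k + 1) for k in range(k_max))

def triangle_partial(t, k_max):
    """Partial Fourier sum of a unit triangle wave: odd harmonics,
    1/k^2 amplitudes with alternating sign."""
    return (8 / math.pi ** 2) * sum(
        (-1) ** k * math.sin(2 * math.pi * (2 * k + 1) * t)
        / (2 * k + 1) ** 2 for k in range(k_max))
```

Plotting `square_partial` over one period with increasing `k_max` shows the "squaring off" as more harmonics (or, in the sampled picture, images) are included.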

If you know that the waveform consists of a single unvarying sine wave whose frequency is less than half the sampling rate then a fairly small number of data points are required to determine the waveform's frequency and amplitude.

On the other hand, if there are multiple frequencies or the amplitude is not constant then you will need a longer observation period.
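As an illustration of just how few points suffice in the noise-free single-sine case: three consecutive samples pin down both frequency and amplitude, via the identity x[n-1] + x[n+1] = 2·cos(w)·x[n]. (This is a sketch for the idealised case only; any noise or a second component breaks it, which is exactly why real signals need the longer observation described above.)

```python
import math

def sine_from_three(x0, x1, x2):
    """Recover (frequency in rad/sample, amplitude) of a pure sine
    from three consecutive samples. Assumes x1 != 0 and 0 < w < pi."""
    w = math.acos((x0 + x2) / (2 * x1))
    a = math.sqrt(x1 ** 2 + ((x2 - x0) / (2 * math.sin(w))) ** 2)
    return w, a

# Example: a sine at 0.3 rad/sample, amplitude 1.5, arbitrary phase.
true_w, true_a, phase = 0.3, 1.5, 0.7
x = [true_a * math.sin(true_w * n + phase) for n in range(3)]
w_est, a_est = sine_from_three(*x)
```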

For any arbitrary waveform, correctly sampled, you can reconstruct all frequency components under fs/2 perfectly - but you need an infinite number of sample points.

If we consider quantisation, it's far worse.

If, however, we consider that we don't care about anything beyond ~120 dB down, it becomes easily realisable in 1990s-style DSP.

I know everyone here knows this. That computeraudiophile thread is just a parallel universe which I don't want to enter.

When you say an infinite number of sample points, over what period? An infinite number of sample points within 1 second IS a higher sampling rate! An infinite number of points over an infinite period is surely just handling the tails?

Whenever infinity is mentioned, the temptation to become pedantic can be overpowering. Take this for exactly what it is: a pedantic question.

Isn't infinity an indefinite number? That's what I was taught in first semester calculus 48 years ago, if memory serves. I'm under the impression from my work in calculus through grad school that equations involving infinity only make sense if you talk about infinity as the limit. IOW, a sampled wave approaches perfection as the number of sampled points approaches infinity.

In the real world nothing is perfect, and significance and relevance should be our greatest interest. As I think about it, in audio the number of sample points and the numerical precision of those samples are not serious limiting issues in our best or even mediocre currently implemented systems.
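A rough sketch of the precision side of that point (the test signal and lengths are arbitrary): quantising a full-scale sine to 16 bits leaves the error roughly 98 dB below the signal, in line with the textbook 6.02·bits + 1.76 dB figure.

```python
import math

BITS = 16
STEP = 2.0 / (2 ** BITS)   # quantiser step for a +-1.0 full-scale range

# Quantise one cycle of a full-scale sine and measure the error power.
N = 48000
signal = [math.sin(2 * math.pi * n / N) for n in range(N)]
quantised = [round(s / STEP) * STEP for s in signal]
noise = [s - q for s, q in zip(signal, quantised)]
snr_db = 10 * math.log10(sum(s * s for s in signal) /
                         sum(e * e for e in noise))
# snr_db lands near the textbook 6.02 * BITS + 1.76 ~= 98 dB
```

With dither and noise shaping real systems do better still, which supports the point that neither sample count nor precision is the limiting factor.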

Ok, if you have 'n' samples, you'll get n/2+1 distinct lines out of a DFT. Given positive and negative frequencies, you get n, exactly, as one would expect (depending on how you wish to count complex, of course).

You do need a time on the order of 1/df, where df is how close to fs/2 you are in Hertz, i.e. df = fs/2 - fmax, where fs is the sampling frequency and fmax is the highest frequency in the material.

It's order 1/df because that's the best you can do. But you won't do much worse than 10 times that at any reasonable SNR.
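A small sketch of the bin-counting part (sizes chosen arbitrarily): a naive DFT of n real samples yields n/2 + 1 distinct magnitudes, and the bin spacing fs/n is what forces the ~1/df observation time.

```python
import math, cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n)
                for k in range(n)) for j in range(n)]

n, fs = 16, 16000.0
tone = [math.cos(2 * math.pi * 3000.0 * k / fs) for k in range(n)]
spec = dft(tone)
mags = [abs(c) for c in spec]
# For real input, bin j and bin n-j mirror each other, so only
# n/2 + 1 = 9 magnitudes are distinct. The bin spacing is fs/n,
# so separating fmax from fs/2 needs fs/n < df, i.e. n/fs > 1/df.
```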

In the real world nothing is perfect, and significance and relevance should be our greatest interest. As I think about it, in audio the number of sample points and the numerical precision of those samples are not serious limiting issues in our best or even mediocre currently implemented systems.

I agree entirely.

I think brick wall filters are both vilified and lauded unnecessarily. Wrongly vilified, because you can't hear their action. Wrongly lauded, because any filter that you can't hear when applied twice, but is sufficiently down at fs/2 and above, is good enough for ADC and DAC.

When I first read this, I was a bit uncomfortable with the straight connection you made between a DFT and what frequencies you can store/reconstruct. I thought "what about windowing / glitches / non-periodic etc. issues?". I'm still a bit concerned that what the DFT tells you is what would be present if you looped the samples you have ad infinitum, but other than that I think I agree. As I'm not even worthy to polish your shoes I have no choice but to agree anyway, but I had to stop and think for a bit.

Cheers, David.

You're absolutely right that by just taking a DFT you're imposing circularity, and by doing so, you're incorporating any end-discontinuities, i.e. you're using a rectangular window (same thing).

But you still have the stated number of frequencies, it just includes information you probably didn't want.

If you window, you don't have quite the resolution you would otherwise have, but you get more "far frequency" resolution, of course, i.e. you don't have that end discontinuity splashing things all over the place.

It's fun to plot rectangular vs. Hann vs. Blackman vs. some Kaiser window. Plot two things: first the passband to 0.5 dB or so, and then the far-frequency response. Unsurprisingly you find lots of differences, but what they are can be surprising. The narrowest passband is the rectangular window; if you think about it, that's because it has the widest scope (don't forget df * dt > 1 for the problem we're talking about here). But, of course, you have that horrid far-frequency response. The Blackman is wider than most anything, but yessiree it falls off like a brick... Perhaps unsurprisingly, there is a conservation at work. Life is like that.
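A hedged sketch of that comparison (window length and zero-padding are arbitrary): zero-padded spectra of a rectangular and a Hann window show the trade described above, the rectangular window's narrower main lobe being bought at the price of ~-13 dB sidelobes versus the Hann's ~-31 dB.

```python
import math, cmath

def spectrum_db(window, pad=64):
    """Zero-padded DFT magnitude of a window, in dB relative to its peak."""
    n = len(window) * pad
    spec = [abs(sum(window[k] * cmath.exp(-2j * math.pi * j * k / n)
                    for k in range(len(window)))) for j in range(n // 2)]
    peak = max(spec)
    return [20 * math.log10(s / peak + 1e-300) for s in spec]

N = 32
rect = [1.0] * N
hann = [0.5 - 0.5 * math.cos(2 * math.pi * k / (N - 1)) for k in range(N)]
rect_db = spectrum_db(rect)
hann_db = spectrum_db(hann)
# Rectangular: main-lobe null already at padded bin 64, but the highest
# sidelobe is only ~13 dB down. Hann: main lobe twice as wide, but the
# sidelobes sit ~31 dB down.
```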

Well, something fun to try. Build the sharpest 20kHz passband filter you can with whatever FIR design program you have available.

Not one that goes 20kHz passband, 22kHz stop band, but as sharp as your software will design.

Then filter castanets with it and see what happens. Use ABX, of course. Just give it a try.
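Not a prescription for any particular design program, but a windowed-sinc sketch of such a sharp lowpass (parameters are illustrative; dedicated design software will give a steeper filter for the same length).

```python
import math

def lowpass_fir(cutoff_hz, fs_hz, n_taps):
    """Windowed-sinc lowpass (Blackman window). Steepness scales with
    n_taps: the transition band is roughly 5.5 * fs / n_taps wide."""
    fc = cutoff_hz / fs_hz
    m = n_taps - 1
    taps = []
    for k in range(n_taps):
        x = k - m / 2
        ideal = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        w = (0.42 - 0.5 * math.cos(2 * math.pi * k / m)
             + 0.08 * math.cos(4 * math.pi * k / m))
        taps.append(ideal * w)
    return taps

taps = lowpass_fir(20000.0, 44100.0, 511)
dc_gain = sum(taps)                                   # passband gain, ~1.0
nyq_gain = abs(sum(t * (-1) ** k for k, t in enumerate(taps)))  # gain at fs/2
```

Convolving test material (e.g. castanets) with `taps` and ABXing against the original is the experiment suggested above.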

Even an 8192-sample-long filter doesn't cause anything I could hear. Maybe because the passband has to be lower to make the filter audible, e.g. 18 kHz instead of 21 kHz? Anyway, the filter is quite steep: -6 dB at 21 kHz and -90 dB at 21.0157 kHz.

Isn't infinity an indefinite number? That's what I was taught in first semester calculus 48 years ago, if memory serves. I'm under the impression from my work in calculus through grad school that equations involving infinity only make sense if you talk about infinity as the limit. IOW, a sampled wave approaches perfection as the number of sampled points approaches infinity.

Well, in first-semester calculus, infinity is not a number at all; in higher courses it might even be treated as just a number. Now you are talking about 'the' limit, and that 'the' is not necessarily a unique concept (first-semester calculus may or may not cover the distinction between pointwise and uniform limits).

For each given frequency below half the sampling frequency, a sine wave on the time interval [-T, T] will be sampled better as T grows and, in the appropriate sense, tend to perfection. But fix the time interval [-T, T] and let the frequency increase toward half the sampling frequency: then the sampling quality becomes pretty bad. (And no, you don't have to do a sweep.)
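A small numeric illustration of that fixed-window problem (rates and offsets are arbitrary, and the starting phase is chosen to sit at a beat null): a unit-amplitude tone 10 Hz below fs/2 can look tiny over a window much shorter than 1/df.

```python
import math

fs = 44100.0
df = 10.0              # the tone sits just 10 Hz below fs/2
f = fs / 2 - df

def peak_sample(n_samples):
    """Largest |sample| of a unit-amplitude sine at f over a window."""
    return max(abs(math.sin(2 * math.pi * f * n / fs))
               for n in range(n_samples))

peak_1ms = peak_sample(44)      # ~1 ms window, far shorter than 1/df
peak_1s = peak_sample(44100)    # 1 s window, many times 1/df
# In the 1 ms window every sample of this unit tone is tiny, because the
# tone beats against its image at fs/2 + df; only after observing on the
# order of 1/df = 100 ms does the full amplitude show up.
```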

So transients near the Nyquist frequency may be badly sampled, in principle. But even if you can hear 22.05 kHz, it does not mean that you will hear that this is bad. This physiology issue beats me, but: do you really hear a tone before the hair cells in the inner ear have reached a steady resonance with the sound? That takes some oscillations, just as the sampler takes some oscillations to pick up the frequency, doesn't it?