If we do these tests digitally, and the real-world source is band-limited below the Nyquist frequency, then the Nyquist–Shannon sampling theorem holds and the sample values representing that signal can theoretically reproduce it exactly.

It's not clear which 'Nyquist frequency' you mean there: that of the recording, or that of the target sample rate. To be clear, I acknowledge that when they are the same, we have very great, potentially lossless accuracy of reproduction of each (to each other).

Quote

In terms of the peaks moving after a low-pass filter etc., I was taught that such things were due to the phase characteristic of the filter...

This issue of filters that change the phases of frequency components is a different one.

Peaks can move after bandwidth filtering because the whole employed spectrum contributes to every observable detail in a waveform, so if we remove a portion of the frequency spectrum, every peak, trough, slope and level in the waveform is susceptible to being affected. It's not a case of 'one' peak shifting like an isolated part of a sinusoid changing phase; it's 'one' peak (along with all the other instants) being rendered by a bundle of different superimposed sinusoids, and then a peak's rendering may change when sines are removed from the original bundle.

@ChiGung, the Nyquist frequency I was referring to was the one of half the target sample rate. And again, the source signal would have to be band-limited to below this frequency. I'm interested in what you are saying, but in the heat of the argument in previous posts you were offering some confusing comments.

In terms of the post's original question at the top, "what is 'time resolution'?", I don't believe that sample rate conversion would affect the continuous-time position of peaks and troughs of any arbitrary signal if, again, the signals you are up/down-sampling are contained below the Nyquist frequency of the lower sample rate, the interpolation is theoretically ideal, and suitable filtering introduces no magnitude/phase changes.

As a side interest that I might play with, I wonder whether upsampling a file by loading in all the samples and performing a sinc interpolation per sample across "all time" would yield an improved sample rate converter for pre-recorded files? And I wonder how long such a calculation would take?

It's all good stuff, just hard to read with the heated debate going on.

As a side interest that I might play with, I wonder whether upsampling a file by loading in all the samples and performing a sinc interpolation per sample across "all time" would yield an improved sample rate converter for pre-recorded files? And I wonder how long such a calculation would take?

O(n^2) time.

In practice you'd have problems with precision, since after enough samples the contribution from the next sample would be less than the smallest number your CPU could represent.
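For anyone curious, the naive "across all time" approach is easy to sketch. The following is an assumed, minimal Python/NumPy illustration of the Whittaker-Shannon reconstruction sum (not anyone's actual resampler from this thread), and it makes the quadratic cost obvious: every output point loops over every input sample.

```python
import numpy as np

def sinc_interpolate(samples, rate_in, t_out):
    """Naive Whittaker-Shannon reconstruction: every output point sums a
    contribution from EVERY input sample, hence O(n*m) work overall."""
    n = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc(t * rate_in - n)) for t in t_out])

# Tiny demo: a 1 kHz sine sampled at 8 kHz, evaluated on a 4x finer grid.
rate = 8000.0
x = np.sin(2 * np.pi * 1000.0 * np.arange(64) / rate)

# Only evaluate the middle region, away from the truncation at the edges.
t_out = np.arange(16 * 4, 48 * 4) / (4 * rate)
y = sinc_interpolate(x, rate, t_out)
err = np.max(np.abs(y - np.sin(2 * np.pi * 1000.0 * t_out)))
print("max mid-region error:", err)
```

Even in this tiny demo the edge truncation matters: the reconstruction formula assumes an infinite sample stream, so accuracy degrades near the ends of a finite file.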

but in the heat of the argument in previous posts you were offering some confusing comments.

I understand that, but I've lost my cool over various attacks here, and I don't speak fluent 'text' either. I do try, though.

Quote

In terms of the post's original question at the top, "what is 'time resolution'?", I don't believe that sample rate conversion would affect the continuous-time position of peaks and troughs of any arbitrary signal if, again, the signals you are up/down-sampling are contained below the Nyquist frequency of the lower sample rate, the interpolation is theoretically ideal, and suitable filtering introduces no magnitude/phase changes.

But that is a rare condition. For instance, I like lowpassed music; I prefer listening to music sampled at 24k over 44k, because it's more comfortable to my ears. But the time resolution of the 24k isn't the same as the 44k CD track. I don't care; I listen to what sounds nice. I don't need to tell myself the 'time resolution' is a magic quality beyond the sample rate to enjoy the results.

Each has its own limits on frequency band / time resolution. I'm feeling more comfortable with using those terms interchangeably now.

Quote

As a side interest that I might play with, I wonder whether upsampling a file by loading in all the samples and performing a sinc interpolation per sample across "all time" would yield an improved sample rate converter for pre-recorded files? And I wonder how long such a calculation would take?

Sounds like Seb's area of finesse.

Quote

It's all good stuff, just hard to read with the heated debate going on.

No one is doubting that the peaks will move and/or vanish with most signals.

I personally am doubting you can perform the experiment on arbitrary samples, because you won't be able to track the peaks.

That's not a problem. I will simply loop through all the nodes in the lower-sampled record and try to match each against any node in the higher-sampled record. If a matching node is not found within a small distance of a partner in the other record, it will be discounted. It represents a flattering measure of temporal consistency between samples.
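As a sketch of what such a matching loop might look like (this is my own hypothetical reading of the described method, not the poster's actual code), in Python:

```python
import numpy as np

def find_peaks_simple(x):
    """Indices of strict local maxima (one kind of 'node')."""
    return np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1

def match_nodes(peaks_low, peaks_high, ratio, tol):
    """Fraction of peaks in the lower-rate record that find a partner in
    the higher-rate record within `tol` (measured in low-rate samples)."""
    if len(peaks_low) == 0:
        return 0.0
    high_times = peaks_high / ratio          # convert to low-rate sample units
    matched = sum(np.any(np.abs(high_times - p) <= tol) for p in peaks_low)
    return matched / len(peaks_low)

# Sanity check: the same 3 Hz sine sampled at 100 Hz and at 200 Hz should match fully.
x_lo = np.sin(2 * np.pi * 3.0 * np.arange(200) / 100.0)
x_hi = np.sin(2 * np.pi * 3.0 * np.arange(400) / 200.0)
frac = match_nodes(find_peaks_simple(x_lo), find_peaks_simple(x_hi), ratio=2.0, tol=1.0)
print("matched fraction:", frac)
```

Note the choice of `tol` drives the result, which is exactly the "flattering measure" objection raised later in the thread.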

Why would I do this? Well, it started with this quote in an otherwise great article on Vinyl Myths in the HA wiki: "PCM can encode time delays to any arbitrarily small length. Time delays of 1us or less - a tiny fraction of the sample rate - are easily achievable. The theoretical minimum delay is 1ns or less."

or, as I would reword it:

PCM can encode "time delays" to any arbitrarily small length. "Time delays" of 1us or less - a tiny fraction of the sample rate - are easily achievable. However, actual locatable conditions such as zero crossings, gradients or levels are only located semi-reliably within 1/2 sample at a given sample rate...

As a side interest that I might play with, I wonder whether upsampling a file by loading in all the samples and performing a sinc interpolation per sample across "all time" would yield an improved sample rate converter for pre-recorded files? And I wonder how long such a calculation would take?

As Mike says, it's O(n^2) - every sample must be looped over for every other sample. Many resamplers (such as libsamplerate, SoX and others) use the windowed-sinc interpolation algorithm described by Julius O. Smith - you can see the details on his resampling page. Performing the entire bandlimited interpolation calculation would be extremely prohibitive for any decent number of samples (for example, 1 minute of 44100 Hz sound would require on the order of 10^13 multiplies). The solution they choose is to window the interpolation function to a certain number of zero crossings.

The length of the truncation and the window used affect the parameters of the resampler - the width of its passband and its SNR. For sound applications Smith chose the Kaiser window with a relatively high beta. I am using a similar calculation for other means (Doppler simulation) and am busy evaluating the Nuttall window to give the trade-off between passband and SNR I want.
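A minimal sketch of the truncation-plus-window idea in Python/NumPy, assuming an integer upsampling factor and a Kaiser window (an illustration of the principle, not libsamplerate's or SoX's actual implementation):

```python
import numpy as np

def windowed_sinc_upsample(x, factor, zeros=16, beta=8.6):
    """Upsample by an integer factor with a Kaiser-windowed sinc kernel:
    zero-stuff, then lowpass with a sinc truncated to `zeros` zero crossings."""
    half = zeros * factor                       # kernel half-length, output samples
    n = np.arange(-half, half + 1)
    kernel = np.sinc(n / factor) * np.kaiser(2 * half + 1, beta)
    up = np.zeros(len(x) * factor)
    up[::factor] = x                            # zero-stuffing keeps original samples
    return np.convolve(up, kernel, mode='same')

# Demo: 4x upsample a 1 kHz sine recorded at 8 kHz; check the middle region.
x = np.sin(2 * np.pi * 1000.0 * np.arange(128) / 8000.0)
y = windowed_sinc_upsample(x, 4)
ideal = np.sin(2 * np.pi * 1000.0 * np.arange(len(y)) / 32000.0)
mid_err = np.max(np.abs(y[128:384] - ideal[128:384]))
print("mid-region error:", mid_err)
```

The `zeros` and `beta` parameters are the truncation length and window shape discussed above: longer kernels and higher beta buy a flatter passband and a deeper stopband at the price of more multiplies per output sample.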

No one is doubting that the peaks will move and/or vanish with most signals.

I personally am doubting you can perform the experiment on arbitrary samples, because you won't be able to track the peaks.

That's not a problem. I will simply loop through all the nodes in the lower-sampled record and try to match each against any node in the higher-sampled record. If a matching node is not found within a small distance of a partner in the other record, it will be discounted. It represents a flattering measure of temporal consistency between samples.

It may be flattering, but it could simply be so wrong as to hide any "useful" (I use that word in a very loose sense!) data. A given peak could vanish as soon as you apply your first (highest) low pass. However, for a random signal, the chance of finding the wrong peak within a given "small distance" will be roughly proportional to the low-pass filter cut-off frequency, since the higher the cut-off, the more peaks will remain. (The lower the cut-off, the fewer peaks will remain.) This you can probably prove - but it tells you nothing about that specific original peak.

If we do these tests digitally, and the real-world source is band-limited below the Nyquist frequency, then the Nyquist–Shannon sampling theorem holds and the sample values representing that signal can theoretically reproduce it exactly.

It's not clear which 'Nyquist frequency' you mean there: that of the recording, or that of the target sample rate. To be clear, I acknowledge that when they are the same, we have very great, potentially lossless accuracy of reproduction of each (to each other).

That's good. It means you accept that the signal from any microphone (which is inherently band-limited by simple mechanics), or any analogue recording (which is inherently band-limited by mechanics, electronics, particle properties etc depending on the type of recording) can be (potentially) losslessly reproduced using PCM sampling, given a sufficiently high sampling rate.

That's handy, isn't it? Especially when you look at the bandwidths of some of these things, and realise that 192kHz sampling is enough, even for your definition of "enough"!

This is what I think: The peaks move, not because you've done anything to the "time resolution", but because you've removed spectral components that contributed to the exact peak position. This is not the same as time resolution.

This is what you think: What really changes is time resolution. Maybe the time resolution really is about 1/2 a sample. The "moving peaks" show this.

Let me show you why this "works", but is silly. (This is long - casual readers might like to skip to the conclusion!)

Arbitrary content could have peaks anywhere.

Here is statement 1 (feel free to disprove it): "For a peak to move (but not vanish) due to low pass filtering, that peak must have been "built" from at least two spectral components, at least one below the filter cut off, and at least one above."

The statement holds for any number of spectral components, even infinite, so don't complain we're not talking about real signals here! What it really says is (a) that if low pass filtering a signal changes the signal, there was something above the low pass filter, which was removed (obvious), and (b) that if we really are correctly considering the same peak, then that peak must have been formed by spectral components which overlapped in time. (A bad explanation, but I think you know what I mean.)

You can separate the signals above and below the cut off frequency using complementary low pass and high pass filters. Adding these two resulting signals together would give the original signal. If you hate the idea of complementary filters, just have a low pass filter, then subtract the result from the original to give the high pass version.

Let us consider the high pass output (or the part we're throwing away by low pass filtering, if you like). The lowest possible frequency component in the high pass section would have peaks spaced by just less than 1 over the filter cut off frequency. Higher frequencies would have more closely spaced peaks, but there can be no lower frequencies with more widely spaced peaks. Adding this signal back to the low pass version can, at most, move an existing peak by this inter-peak distance. It can't move it any further if the signal meets statement 1, since any further and you are looking at a new peak, not the original one.
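This split is easy to demonstrate numerically. The sketch below (my own illustration, assuming an idealised FFT brickwall and a two-tone test signal chosen to sit exactly on FFT bins) builds a peak from a 5 Hz and a 40 Hz component, verifies that the lowpass plus its subtracted complement reconstruct the original exactly, and shows the peak landing on a different sample once the 40 Hz part is removed:

```python
import numpy as np

# A waveform whose peak is 'built' from two spectral components:
# a 5 Hz base and a 40 Hz component riding on it.
fs, N = 1000.0, 1000
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 5.0 * t) + 0.7 * np.sin(2 * np.pi * 40.0 * t)

# 'Ideal' lowpass at 20 Hz via the FFT (exact here, as both tones sit on bins),
# with the complementary highpass obtained by subtraction.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(N, 1 / fs)
low = np.fft.irfft(np.where(freqs <= 20.0, X, 0), n=N)
high = x - low

print("split is complementary to within:", np.max(np.abs((low + high) - x)))
peak_full = int(np.argmax(x[:200]))   # peak within the first 5 Hz cycle
peak_low = int(np.argmax(low[:200]))
print("peak sample before/after lowpass:", peak_full, peak_low)
```

The 40 Hz component drags the composite peak a few samples away from the pure 5 Hz crest; removing it puts the peak back on the 5 Hz crest, exactly as statement 1 describes.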

Magic! Thus we "prove" (though it's more of a hand waving explanation!) your +/- 1/2 sample "time resolution", but we see it's really about frequency resolution. To move a peak, what you remove in the frequency domain must contrive to move the peak in the time domain. If it does not (e.g. it's co-timed, or there's nothing above the frequency cut-off anyway), the peak won't move.

So you come down to proving something rather trivial: "if I do something that I know will change the shape of the waveform, then the shape of the waveform will change".

Genius!

It tells you nothing about anything. Why? Because the peak could start anywhere, and could end up anywhere. You are only proving that it could move by up to that amount - you are not proving that it is "quantised" by that amount (which would prove a limit in time resolution), since a peak whose location was entirely due to a frequency component below the filter cut-off could have its position at the same point as the "original" signal.

Conclusion...

Let me give an example of something that is a limit in resolution: quantisation. Quantisation limits the amplitude resolution. The amplitude at that instant can be one quantisation step, or the next, but it cannot be any value in between. (If you dither before quantisation, this error becomes random-like noise, but it is still an error, and still a limit in instantaneous amplitude resolution).
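A small numerical illustration of this (an assumed 8-bit quantiser and TPDF dither, purely for concreteness): plain quantisation bounds the instantaneous error at half a step, and dithering randomises that error without eliminating it.

```python
import numpy as np

rng = np.random.default_rng(0)
step = 2.0 / 256                     # an 8-bit quantiser over [-1, 1)
x = 0.3 * np.sin(2 * np.pi * np.arange(4096) / 64)

# Plain quantisation: every output value is forced onto a grid point.
q = np.round(x / step) * step

# TPDF dither (sum of two uniforms) added before quantisation: the error
# becomes noise-like, but remains an instantaneous amplitude error.
dither = (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * step
qd = np.round((x + dither) / step) * step

print("max plain error:", np.max(np.abs(q - x)))      # bounded by step/2
print("max dithered error:", np.max(np.abs(qd - x)))  # bounded by 1.5 steps
```

The point of the demo is the grid: between one quantisation step and the next, no output value is possible, which is what a genuine resolution limit looks like.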

Similarly, low pass filtering introduces a limit in bandwidth. For an ideal brick wall filter, we can have any frequency up to the cut off frequency, but none of the ones above.

So with the +/- 1/2 sample example from CG, are we looking at a limit in time resolution?

Consider this: Signal A has a peak at position A, and frequency components above some arbitrary frequency F. Removing those frequency components moves the peak to position A2, which is slightly different from A. (Look, says CG, a limit in time resolution. Hold on! says DR...)

However a different signal, signal B, which has no frequency component above frequency F, can have a peak at exactly position A, and that peak will not be moved by removing frequency components above frequency F.

Thus we show that suggesting "the move from A to A2 demonstrates some limit of time resolution due to low pass filtering" is factually incorrect. A low pass filtered signal can have a peak at A, A2, or anywhere else.

All you have shown is that frequency components in signal A contributed to the position of peak A, and these have been removed, thus moving the peak. However, there is no limit to where peaks can occur. It is not like the quantisation amplitude resolution limit at all! It is not a time resolution limit, just a predictable effect of low pass filtering a signal.

I'm treading into an area I know nothing about, but I'll make a short comment anyway:

The location of a single mathematical sample peak should not be assumed to be the defined location of the attack of a particular instrument. Just because filtering leads to a peak slightly before or after the original peak does not mean the location of the attack has changed temporally.

I would think that what we audibly hear as an attack would be more precisely defined as a zone, or perhaps the middle of a series of PCM peaks and troughs of a certain nature. I suspect that using these sorts of definitions, you would find that there is no temporal movement of the instrument's attack due to quality low-pass filtering above the typical human hearing range, whether before digitization or in the digital domain.

Summary: PCM peak != attack

Meta-Summary: I might be completely wrong. Or I might be stating the completely obvious.

This is what I think: The peaks move, not because you've done anything to the "time resolution", but because you've removed spectral components that contributed to the exact peak position. This is not the same as time resolution.

That is like saying: "the tin can crumples, not because you are doing anything to its form, but because you are destroying its structural integrity".

The difficulty displayed here by those "in the know" in admitting anything is being done to "time resolution" as sample rate is reduced (!) is an odd phenomenon.

Explicitly: sample rate is the rate of provided instances through time.

Quote

Maybe the time resolution really is about 1/2 a sample. The "moving peaks" show this. Let me show you why this "works", but is silly. (This is long - casual readers might like to skip to the conclusion!)

Hmmm, "works" but is silly... this is getting a bit obscure; it feels like you are erecting a wall of denial...

Quote

Arbitrary content could have peaks anywhere. Here is statement 1 (feel free to disprove it): "For a peak to move (but not vanish) due to low pass filtering, that peak must have been "built" from at least two spectral components, at least one below the filter cut off, and at least one above."

This is not difficult for me to visualise; I've made such points all throughout this thread... I'll cut to the chase.

Quote

Magic! Thus we "prove" (though it's more of a hand waving explanation!) your +/- 1/2 sample "time resolution", but we see it's really about frequency resolution. To move a peak, what you remove in the frequency domain must contrive to move the peak in the time domain. If it does not (e.g. it's co-timed, or there's nothing above the frequency cut-off anyway), the peak won't move.

What I like about this is the honesty of your description, and it makes the logic quite clear. But your present feeling that you can simply declare what 'results' are 'about' is wrong. Saying 'in a way' 'it is like' time resolution, but really "it's about" frequency resolution, is insubstantial, a whimsical fig leaf. How you choose to approach a circumstance conceptually is your choice, but if you want to invalidate an approach such as dealing with the "time domain", you can't simply announce that you find it 'silly' or not the same as your approach, which now seems to be exclusively the "frequency domain". These things are two sides of a coin.

Quote

So you come down to proving something rather trivial: "if I do something that I know will change the shape of the waveform, then the shape of the waveform will change".

If I reduce the number of provided samples throughout time, the potential accuracy of placement of detail throughout time will decrease, and the potential resolution of detail through time will decrease > time resolution is decreased.

I have to snip, because although involved, the objections you are providing here can't be argued against, because they are ruleless.

Quote

All you have shown is that frequency components in signal A contributed to the position of peak A, and these have been removed, thus moving the peak. However, there is no limit to where peaks can occur. It is not like the quantisation amplitude resolution limit at all! It is not a time resolution limit, just a predictable effect of low pass filtering a signal.

If there is no limit to where the peaks can occur, how come they can't be arranged to occur in the correct place between records of differing sample rates? Why must they be susceptible to unknown distortions during downsamples, under your "no limitations" argument? Because their precise subsample positions are limited - by every other sample in the record of which they are a part. That's why you can't normally use the "unlimited" resolution which you can infer - without distorting all other samples to create the subsample details. (This was done to provide some limited demonstrations in this thread, but is not possible in practice, where all samples must be treated equally.)

Quote

I don't think I can put it any clearer than that! (I shall not be giving up my day job.)

There's no need to bring professional pride into this. These matters are similarly misreported by many professionals. Anyway, we are all professionals.

I should be able to post the data tonight on the actual accuracy of reproduction of event timings possible between well-utilised sample rates.

That's why you can't normally use the "unlimited" resolution which you can infer - without distorting all other samples to create the subsample details. (This was done to provide some limited demonstrations in this thread, but is not possible in practice, where all samples must be treated equally.)

You can't have unlimited resolution without unlimited bandwidth.

This does not mean that peaks move.

As I pointed out in the discussion of the sum of two gaussians, peaks will move if and only if you remove frequency components that contribute to the envelope in a way that moves it.

Again, look up "Hilbert Envelope". You're doing nobody any good by failing to read up on the field you're talking about before you work.
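For readers who do want to look it up: the Hilbert envelope is the magnitude of the analytic signal. Below is a minimal sketch using scipy.signal.hilbert, with a test signal chosen (by me, for the demo) to be exactly periodic in the analysis window so the FFT-based transform recovers the envelope cleanly:

```python
import numpy as np
from scipy.signal import hilbert

# Amplitude-modulated tone: a slow 10 Hz envelope on a 1 kHz carrier.
# N is chosen so both components complete whole cycles in the window.
fs, N = 8000.0, 1600
t = np.arange(N) / fs
envelope = 0.5 * (1 + np.cos(2 * np.pi * 10.0 * t))
x = envelope * np.sin(2 * np.pi * 1000.0 * t)

# Hilbert envelope = magnitude of the analytic signal.
est = np.abs(hilbert(x))
err = np.max(np.abs(est - envelope))
print("max envelope error:", err)
```

The envelope, not any individual sample peak of the carrier, is the natural object to track when asking whether filtering has "moved" an event in time.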

As to your experiments, they prove nothing until you specifically produce the ***exact*** equations and explain your reasoning.

Your criticisms are not consistent. When I quoted you as implying peaks would not move due to lowpassing, you said I misquoted you while crying 'abuse'. So you have me warned while berating my lack of contemporary study and making inconsistent demands to perform various pet exercises. I'm not your student.

Quote

You're doing nobody any good by failing to read up on the field you're talking about before you work.

Untrue. I limit my research to protect my originality. Education is to Innovation what Masturbation is to Procreation. I have learned adequate tools at home, school, university and beyond to suit my own designs. I've never been forced to research previous solutions to computational problems - except for a while when I began learning to program, quite intensively, around the age of 9.

Quote

As to your experiments, they prove nothing until you specifically produce the ***exact*** equations and explain your reasoning.

You demand exact equations and provide only irrelevant ones. To suggest I have not explained my reasoning in this thread is absurd.

@all, I have only been trying to defend sensible, intuitive, practical use of terminology here, by arguing against the idea forwarded - that the "time resolution" of PCM is practically finer than the sample rate. As explained previously, I understand that processes can maintain timing details in source material which is suitably limited to a particular sample rate's bandwidth. But processes cannot recover timing details once bandwidth is limited. This means downsampling can, and normally does, damage 'time resolution'.

Despite a heavy bias against examining the practical limitations of sample rate on time resolution, some expert contributors in this thread have acknowledged that measurable timing details will be routinely distorted by downsampling at normal audio rates. I am not concerned with the audibility of such distortion, only its existence and its limits.

It has been much work trying to have it fairly examined in such a heated and one-sided discussion. I do suspect others have better experience to estimate the situation than myself, but they seem ideologically unwilling to do so.

I have tried to rise to the group's challenge and do some work to illustrate the situation. Here's a scrappy program, not quite finished, but I hope some might appreciate my input so far...

// What was producing the central spike in
// an approximated but balanced (equally distributed error) measure of a peak's intensity/acuteness
// is a simple discernment of the second derivative (change of change) =
// the method used to select peak strength; peaks in the middle had to be less powerful to
// be selected than peaks at the sides (of the considered sample length);
// this then allowed nodes to be matched coincidentally close more often than not.
// The new expression used has no such bias.

This is a curious output that can't be trusted at this stage of the program's development. It compares a clip of CD audio upsampled x4 with one downsampled x2 and then upsampled x8 by ssrc_hp.exe. A 176400 (44100*4) Hz sample length in the chart is 256; a 44100 Hz sample length in the chart is 1024 (these relate to the distribution bands). More investigation/development may follow.

It's obviously troublesome to start interpreting insecurely tested/debugged programs, but some may recognise the effort I've put into it (the code ended up taking some valuable hours to get to this stage). Maybe they could give me the benefit of their prediction of the best possible distribution of detectable conditions in a waveform's pairing between different sample rates/bandwidths.

That is quite a ridiculous claim, especially in the context of computer science. Yes, you can probably come up with some silly sort algorithm that works, but in real life, experienced coders use the best sort algorithm for the expected data by researching the sort algorithms and selecting the best. These sort algorithms were not devised by some unschooled hack, these sort algorithms were designed by highly-educated and talented individuals, working in academic settings with other highly-educated and talented individuals.

Whether you believe it or not, education has the benefit of saving everyone time on meaningless missteps such as this thread. The concept of a "peak" holds little relevance to anything in electrical engineering; any form of frequency filtration changes all these peaks anyhow. Lowpass filter a Dirac pulse and instead of a single peak, you get a repeating set of peaks, with the highest peak potentially moving depending on the structure of the filter. A single peak, when lowpassed, can result in multiple peaks, all offset by some factor.
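This is straightforward to see numerically. A sketch of my own (using an idealised FFT brickwall at a quarter of the sample rate, purely for illustration): the single-sample impulse has exactly one local maximum before filtering and many afterwards.

```python
import numpy as np

# Lowpass a unit impulse with an ideal brickwall (via the FFT): the single
# peak becomes a sinc-like ripple with many local maxima.
N = 256
x = np.zeros(N)
x[N // 2] = 1.0

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(N)                      # in cycles/sample
y = np.fft.irfft(np.where(freqs <= 0.25, X, 0), n=N)

def count_peaks(s):
    """Number of strict local maxima."""
    return int(np.sum((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])))

print("peaks before:", count_peaks(x), "peaks after:", count_peaks(y))
```

One peak in, dozens of ripple peaks out, even with a zero-phase filter: exactly the point that "peaks" are a fragile thing to track across a bandwidth change.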

Your issue is trying to perceive frequency domain effects while visualizing the effects solely in the time domain. "Peaks" are time-domain phenomena that do not have any meaning in the frequency domain. All your down- and upsampling is affecting the frequency domain. Changes in the frequency domain imply changes in the time domain, but the way in which the two are related is less trivial to understand than you seem to think it is.

As you no doubt notice, your program does not simply show that changing spectral content shifts peaks, it shows that there is not a single peak that remains unaffected (if I am understanding the output correctly).

The time and frequency domains are mathematically identical, but are very different to conceptualize. If you wish to discover the ways in which a frequency-domain transform affects audio, you must look at it from a frequency-domain perspective to understand it well. What you are doing is looking at a frequency-domain transformation from a time-domain perspective, and you are getting complex results that you don't really understand. Because you don't understand them, you assume there's something weird happening there. There isn't. You're just not looking at it correctly.

As you no doubt notice, your program does not simply show that changing spectral content shifts peaks, it shows that there is not a single peak that remains unaffected (if I am understanding the output correctly).

If you had the capability to run the program, you could check such a conclusion on other tracks, trying different methods of lowpassing/downsampling. And you might have noticed that the tracks compared there were of slightly different lengths due to ssrc's particulars.

Quote

The time and frequency domains are mathematically identical......What you are doing is looking at a frequency domain transformation from a time domain perspective, and are getting complex results that you don't really understand. Because you don't understand them, you assume there's something weird happening there. There isn't. You're just not looking at it correctly.

That is your misunderstanding - that "looking at a frequency domain transformation from a time domain perspective" is problematic. As both domains are equally valid perspectives on the same data, they can be considered in parallel. To consider time resolution without reference to the time domain is folly, if not impossible.

In the frequency domain, a downsample manifests as a reduction in bandwidth. In the time domain, a downsample manifests as a reduction in the resolution of level through time. These statements are simply not sensibly refutable.

Your criticisms are not consistent. When I quoted you as implying peaks would not move due to lowpassing, you said I misquoted you while crying 'abuse'. So you have me warned while berating my lack of contemporary study and making inconsistent demands to perform various pet exercises. I'm not your student.

My statements are absolutely consistent.

I think that it is telling that you will not try a trivial, simple experiment, one that requires much less code than you already appear to have posted, that will show you some of the errors implicit in your complaint.

In the frequency domain, a downsample manifests as a reduction in bandwidth. In the time domain, a downsample manifests as a reduction in the resolution of level through time. These statements are simply not sensibly refutable.

And, that does not mean that peaks must move. The question of a peak moving or not, using a linear-phase (constant delay) filter, is strictly a question of what frequency content is removed. No more, no less.

As phase and frequency are absolutely equal to a time delay, I think you'll need to readjust your thinking here quite a bit, and simply accept the reality.

I'll say it again.

Generate two gaussian pulses: one of amplitude 0.1, centered at 10kHz with a bandwidth of 1kHz (just for simplicity's sake), centered at the third sample. Add to that sequence a gaussian pulse of amplitude 10, bandwidth 1kHz, at 30kHz, centered at the 5th sample.

Do this at a sampling rate of 96kHz.

Downsample this to 48 and back up. Use linear phase filters.

Wait. You don't even have to downsample. Just filter the 96kHz stream at 20kHz with a 192 tap FIR. Look at the results.

You didn't change the sampling rate. You DID move the peak. Why? Because you filtered out some frequency bands. That's all. No more, no less. Nothing special here.
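A hedged sketch of this experiment in Python/SciPy, with the pulse centres and widths adjusted by me so both bursts fit comfortably in the buffer (the original "third sample"/"fifth sample" placement is not taken literally here): a large 30 kHz burst dominates the unfiltered waveform, and a 193-tap linear-phase FIR at 20 kHz (odd length, so the group delay is an integer and can be compensated exactly) removes it, leaving the global peak at the position of the small 10 kHz burst instead.

```python
import numpy as np
from scipy.signal import firwin

fs = 96000.0
t = np.arange(2048) / fs

def burst(amp, f0, center, sigma):
    """A Gaussian-windowed tone; the window width sigma (seconds) sets the bandwidth."""
    return amp * np.exp(-0.5 * ((t - center) / sigma) ** 2) * np.cos(2 * np.pi * f0 * (t - center))

# Small 10 kHz burst at 8 ms, big 30 kHz burst at 12 ms.
x = burst(0.1, 10000.0, 0.008, 0.0002) + burst(10.0, 30000.0, 0.012, 0.0002)

# 193-tap linear-phase FIR lowpass at 20 kHz; compensate the 96-sample constant delay.
taps = firwin(193, 20000.0, fs=fs)
y = np.convolve(x, taps)[96:96 + len(x)]

peak_before = int(np.argmax(np.abs(x)))
peak_after = int(np.argmax(np.abs(y)))
print("global peak before:", peak_before, "after:", peak_after)
```

The sampling rate never changes, yet the global peak jumps from the 30 kHz burst to the 10 kHz burst: removing frequency content is what moved it, just as described above.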

Not quite, but 'abilities' to locate details at "arbitrarily small" times in PCM samples keep being presented to me, to counter my claims that PCM cannot reliably record the times of real events/conditions with subsample accuracy, nor maintain such accuracy through further downsampling/lowpassing.

Quote

This does not mean that peaks move.....

Quote

Your criticisms are not consistent....

My statements are absolutely consistent.

Quote

When I quoted you as implying peaks would not move due to lowpassing, you said I misquoted you

Well, I don't know what your position is; maybe that's the fault of your expression or my comprehension, or a combination of both. It happens.

Quote

I think that it is telling that you will not try a trivial, simple experiment, one that requires much less code than you already appear to have posted, that will show you some of the errors implicit in your complaint.

Maybe I shall some day, or you could just explain how it would turn out, as I've tried to do with my 'experiments'. But the thing is, I have already devised a valid grasp of comparing timeable details between sample rates/bandwidths, and I'm tired of your attempts to tutor me. I'm sure you are an excellent tutor, surely better than I am an attentive student.

I'm very interested to hear about other and better methods of locating any sort of conditions in time in a waveform, and we might be able to implement them here to improve our real-world study of PCM's time resolution.

And, that does not mean that peaks must move. The question of a peak moving or not, using a linear-phase (constant delay) filter, is strictly a question of what frequency content is removed.

I have also restated this relationship throughout this thread. But it follows that if frequency content is removed (and this is normally the case when transferring between common rates, such as 44kHz > 22kHz), then the timings of waveform conditions - such as peaks - all throughout the waveform will be distorted.

This claim wasn't accepted, so I have started a programming project to not only prove it, but to record approximately how much conditions can move when their bandwidth/sample rate is reduced.

Quote

I'll say it again. You didn't change the sampling rate. You DID move the peak. Why? Because you filtered out some frequency bands. That's all. No more, no less. Nothing special here.

Thanks for that description. You have shown that a waveform will distort in time and level when its bandwidth is reduced. This is the very same mechanism that distorts waveforms when their sampling rate is reduced. Observing the situation as you did there, you should acknowledge that the same occurs from 96>44 or 44>24.

That was basically an experiment which proves the existence of the effect which I wrote the program to measure.
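[Editor's note] The kind of measurement described can be sketched as follows (toy rates and a hypothetical `round_trip` helper, not the actual program from the thread): downsample by 2 (which lowpasses at the new Nyquist), return to the original rate, and compare. A signal already band-limited below the new Nyquist survives with its peak in place; one with content above it does not.

```python
# Round-trip sample-rate reduction sketch (illustrative parameters only).
import numpy as np
from scipy.signal import resample_poly

fs = 8000.0                                   # stand-in for 96/44/22 kHz cases
n = np.arange(4096)
pulse = np.exp(-((n - 2000.0) ** 2) / (2 * 10.0 ** 2))            # well in band
burst = pulse * (1 + 0.5 * np.sin(2 * np.pi * 2400.0 * n / fs))   # 2400 Hz part

def round_trip(x):
    """Halve the sample rate (lowpassing at the new Nyquist), then restore it."""
    return resample_poly(resample_poly(x, 1, 2), 2, 1)

for name, x in (("pulse", pulse), ("burst", burst)):
    y = round_trip(x)
    shift = int(np.argmax(y)) - int(np.argmax(x))
    err = np.linalg.norm(y - x) / np.linalg.norm(x)
    print(name, shift, err)   # pulse: no shift, small error; burst: distorted
```

The 2400 Hz component cannot be represented at the halved rate (new Nyquist 2000 Hz), so the burst is necessarily altered by the round trip, while the in-band pulse is not.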

Quote

That is your misunderstanding - that "looking at a frequency domain transformation from a time domain perspective" is problematic. As both domains are equaly(sic) valid perspectives on the same data, they can be considered in parallel. To consider time resolution without reference to the time domain is folly, if not impossible.

I don't disagree. There's a blend of phenomena going on here, and I don't blame you for getting lost and confused. Try to follow this: You have data. You understand the meaning of this data in time-domain format. You apply a frequency-domain transform. You re-analyze the data and notice that the representation has changed quite markedly. From here, you are coming to the invalid conclusion that because the representation has changed, there's some loss of "time resolution", a term that is nigh meaningless.

The problem is simple: after the frequency transform, the form of the time-domain data is going to be completely different looking and cannot be sensibly compared to the original time-domain data. In order for the initial and final set of data to be comparable, you must use a representation that allows for comparison. In particular, "peaks" or "nodes" are completely meaningless comparisons. Like myself and many others mentioned before, when a Dirac pulse is lowpassed, you get a time offset and multiple new peaks. The initial and final forms of the data are completely different, and comparisons using "peaks" are no longer valid. This is all due to the frequency-wise transformation. This transformation also can introduce a calculable, constant delay to the signal. These are known phenomena, but do not imply that "time resolution" as I understand it has changed at all.
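[Editor's note] The Dirac-pulse point above can be sketched directly (toy filter, not from the thread): a lowpassed unit impulse becomes the filter's own impulse response, a main lobe at a constant, calculable delay plus ringing on both sides, i.e. many local peaks rather than one displaced spike.

```python
# A lowpassed unit impulse: one delayed main lobe plus ringing
# (illustrative filter, not from the thread).
import numpy as np
from scipy.signal import firwin

taps = firwin(101, 0.25)            # linear-phase lowpass, cutoff = Nyquist/4
x = np.zeros(512)
x[200] = 1.0                        # unit ("Dirac") pulse at sample 200
y = np.convolve(x, taps)            # filtering by direct convolution

main = int(np.argmax(y))            # main lobe at 200 + (101 - 1) / 2 = 250
ring = int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))  # local maxima
print(main, ring)                   # one delayed main peak, many ripples
```

The constant (101 − 1)/2 = 50 sample offset is the calculable delay mentioned above, and the count of local maxima shows why "peak" comparisons break down for this kind of signal.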

Quote

In the time domain a downsample manifests as a reduction in resolution of level through time.

Could you restate this in other terms, please? I'm not understanding what you mean by this. I understand what downsampling does to time-domain data, but the way in which you state this is ambiguous. I suspect that by elaborating on this point, we may be able to get to the bottom of your misunderstanding.

Quote

That is your misunderstanding - that "looking at a frequency domain transformation from a time domain perspective" is problematic. As both domains are equaly(sic) valid perspectives on the same data, they can be considered in parallel. To consider time resolution without reference to the time domain is folly, if not impossible.

I don't disagree. There's a blend of phenomena going on here, and I don't blame you for getting lost and confused.

You are incredible... I do have to laugh, even if it might get me in more trouble here. I have to be honest. I'm not gonna be anyone's doormat.

Quote

Quote

In the time domain a downsample manifests as a reduction in resolution of level through time.

Could you restate this in other terms, please?

Afraid not; it should be easy to interpret if you are in a position to guide this thread.

Ignoring the objective elements of my comments and focusing on the subjective ones will not advance your position here, I am afraid. If my position is so flawed and I am so clueless, then my statements should be absolutely child's play to refute.

Quote

Afraid not; it should be easy to interpret if you are in a position to guide this thread.

All I was asking for was something simple. Your phrasing was ambiguous and non-rigorous. I was hoping to gain insight into your position by having you restate it in other terms. But go ahead, disregard whatever you like. Your character is showing.

Quote

Ignoring the objective elements of my comments and focusing on the subjective ones will not advance your position here, I am afraid.

I'm sorry, but I'll have to pass. I refer others to the rest of my multitudinous replies, amounting to thousands of words now, several illustrations, and a runnable statistics program. Whatever clarifications you would like to present to others, you are of course free to do so without my consideration.

Here is sample nofun examined at 96 kHz (bandlimited interpolation), with frequency response to 22 kHz, compared against itself just re-lowpassed at 22 kHz (for experimental control). sox.exe was used for the lowpassing (windowed sinc), and the lowpass was checked in a frequency spectrum to be good, though with a slightly gradual cutoff...


regards, cg

Then how come you won't accept them when they are offered? Try it. Learn.