So I've become involved in a rather colorful argument (I'm Publius in the thread) with somebody on stevehoffman.tv. The original thread revolved around shooting down an old audiophile canard, about how subsample delays cannot be represented in PCM. In the course of that debate, I've begun to question a couple things.

Is it ever accurate to use the term "time resolution" in any sort of technical context? To the best of my knowledge, it has no universally agreed upon technical definition. Most of the times I've seen it used are either for SACD/DVD-A marketing fluff, or to describe FFT window lengths. I'm tempted to just go quasi-logical-positivist on everybody and say that it is a completely meaningless phrase.

Is there any meaningful time-domain constraint on audio quality that is directly related to the sampling period? Subsample delays (as I've shown above) are not meaningfully related. Bandwidth is a frequency-domain attribute. Pre-echo potentially gets more audible at lower sampling rates, but this is not a concern with sigma-delta ADCs, and it is of debatable audibility at 44.1 to begin with. Some DSP operations may be harder to implement at lower sample rates, but most of the issues involved seem implementation-related. I suspect that there are no clear general limits as to what can and cannot be accomplished in PCM, except with respect to very domain-specific or system-specific situations; and so any claims of 44 kHz always being limited in ways other than bandwidth may be regarded with skepticism.

So I've become involved in a rather colorful argument (I'm Publius in the thread) with somebody on stevehoffman.tv.

Good luck with that. I got booted for persistently mentioning the DBTs, and for questioning the moderators. That place is the Bizarro-world Hydrogenaudio. There's a typically absurd discussion going on there now about solid-state audio gear burn-in, too (it's all down to capacitors, don't you know... they need hours and hours of burn-in to sound right, or at least to get the molecules lined up right... prior to that, they sound *terrible*... I'm thinking someone had best tell the telecommunications industry, the calibration gear industry, the computer industry, the aerospace industry, and heck, anyone who uses high-performance electronic devices).

Is it ever accurate to use the term "time resolution" in any sort of technical context? To the best of my knowledge, it has no universally agreed upon technical definition. Most of the times I've seen it used are either for SACD/DVD-A marketing fluff, or to describe FFT window lengths. I'm tempted to just go quasi-logical-positivist on everybody and say that it is a completely meaningless phrase.

I don't see why not. It just refers to how precisely you can localize energy in the time domain. Generally this is just the sample period, but not always. Looking at my DSP text, they don't actually say time resolution, but they frequently mention frequency resolution, and by duality whatever applies to frequency would apply to time if you sampled in the frequency domain.

Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713

Yo, that was me (felimid)

QUOTE

Is there any meaningful time-domain constraint on audio quality that is directly related to the sampling period?

Ask yourself: "Is there any meaningful space-domain constraint on visual quality that is directly related to the pixel width?"

(Note that regardless of the quality of anyone's particular video card system, bandlimited 'acoustic-type' interpolation can be applied to visual data as well as audio, and is a rather good way to treat it, if not the optimal one.)

I know I've put more than enough waffle into that thread to explain why I consider pixel width or sample width the explicit limit of record resolution... maybe just one more try if you have time: The subsample accuracy of frequency resolution (and of collections of frequencies) is about the... what do you call it... "potential losslessness" of manipulating records after downsampling, not about the accuracy of reproduction of the original (natural, pre-record, pre-processed) sound of the downsampled record.

I honestly haven't been arguing 'for the sake of it'. I'd like this to be understood. But everyone is free to draw their own conclusions.

So I've become involved in a rather colorful argument (I'm Publius in the thread) with somebody on stevehoffman.tv. The original thread revolved around shooting down an old audiophile canard, about how subsample delays cannot be represented in PCM. In the course of that debate, I've begun to question a couple things.

Is it ever accurate to use the term "time resolution" in any sort of technical context? To the best of my knowledge, it has no universally agreed upon technical definition. Most of the times I've seen it used are either for SACD/DVD-A marketing fluff, or to describe FFT window lengths. I'm tempted to just go quasi-logical-positivist on everybody and say that it is a completely meaningless phrase.

Is there any meaningful time-domain constraint on audio quality that is directly related to the sampling period? Subsample delays (as I've shown above) are not meaningfully related. Bandwidth is a frequency-domain attribute. Pre-echo potentially gets more audible at lower sampling rates, but this is not a concern with sigma-delta ADCs, and it is of debatable audibility at 44.1 to begin with. Some DSP operations may be harder to implement at lower sample rates, but most of the issues involved seem implementation-related. I suspect that there are no clear general limits as to what can and cannot be accomplished in PCM, except with respect to very domain-specific or system-specific situations; and so any claims of 44 kHz always being limited in ways other than bandwidth may be regarded with skepticism.

Having read that thread, or some of it, it's clear that there are several issues being conflated by people who do not want to let go of their pet belief system.

The first issue is that of pure delay. You've killed that. You can point out that if you store the data in 16 bit signed, that yes, there is a limit, it's directly related to the sampling rate times the number of levels available for quantization... i.e. directly related to the SNR and the sampling rate.
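To make the sub-sample delay point concrete, here is a minimal numpy sketch (the tone frequency, record length, and the 1 µs delay are all arbitrary illustrative choices): two PCM channels carry the same tone, one delayed by a small fraction of the 22.7 µs sample period, and the delay is recovered from the phase difference at the tone's FFT bin.

```python
import numpy as np

fs = 44100                # sample rate (Hz)
f = 1000.0                # test tone (Hz), chosen to land on an exact FFT bin
delay = 1e-6              # 1 microsecond: ~1/23 of the 22.7 us sample period
n = np.arange(4410)       # 0.1 s of samples -> 100 whole cycles of the tone

# Two PCM channels: the same tone, one delayed by a sub-sample amount.
a = np.sin(2 * np.pi * f * n / fs)
b = np.sin(2 * np.pi * f * (n / fs - delay))

# Recover the delay from the phase difference at the tone's FFT bin.
k = int(round(f * len(n) / fs))
dphi = np.angle(np.fft.rfft(b)[k]) - np.angle(np.fft.rfft(a)[k])
recovered = -dphi / (2 * np.pi * f)
print(recovered)          # ~1e-6 s, far below the sample period
```

The recovered value matches essentially to machine precision, because the delay lives in the phases of the sampled band, not in the sample grid.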

The second is that of jitter. You need to beat on these people to point out that jitter is different for each sample, and your time delay is not.

Time delay has been reported as audible down to 5 to 10 microseconds in binaural settings with a great deal of care and signal prep involved. No lower.

The cited nanoseconds, etc., are just stuff and nonsense.

What you need to do with the jitter crowd is to point out that the spectrum of the jitter is the big deal.

You can have 1 part in 1000 (relative to sample period) if the jitter has only frequencies below 1 Hz.

You can hear much more than that if you have high frequency jitter, and a high frequency signal.
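A rough simulation of that spectral point (all figures here are illustrative, not thresholds): sinusoidal sampling-clock jitter phase-modulates the signal, so the error energy lands in sidebands at the signal frequency plus or minus the jitter frequency. Low-frequency jitter keeps the sidebands tucked next to the carrier where they are masked; high-frequency jitter on a high-frequency signal throws them far away.

```python
import numpy as np

fs, N = 44100, 44100              # 1 s of audio -> 1 Hz FFT bin spacing
f_sig, f_jit = 10000, 1000        # signal and jitter frequencies (Hz)
jit_amp = 1e-9                    # 1 ns peak timing jitter

n = np.arange(N)
# Sampling instants perturbed by sinusoidal jitter.
t = n / fs + jit_amp * np.sin(2 * np.pi * f_jit * n / fs)
x = np.sin(2 * np.pi * f_sig * t)

spec = np.abs(np.fft.rfft(x)) / (N / 2)
side = spec[f_sig + f_jit]        # sideband at f_sig + f_jit
print(side)                       # ~3e-5; note it scales with f_sig
```

Note that no energy appears at the jitter frequency itself: the jitter only shows up as modulation products around the signal, which is why the jitter spectrum, not just its amplitude, decides audibility.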

Looking onward, we see the twerp who insists that things have to be periodic in order for subsample resolution to work.

Perhaps he needs to be acquainted with both the Nyquist theorem and the fact that, for all real signals, Fourier synthesis works.

Is there any meaningful time-domain constraint on audio quality that is directly related to the sampling period?

QUOTE

Ask yourself: "Is there any meaningful space-domain constraint on visual quality that is directly related to the pixel width?"

(Note that regardless of the quality of anyone's particular video card system, bandlimited 'acoustic-type' interpolation can be applied to visual data as well as audio, and is a rather good way to treat it, if not the optimal one.)

I know I've put more than enough waffle into that thread to explain why I consider pixel width or sample width the explicit limit of record resolution... maybe just one more try if you have time: The subsample accuracy of frequency resolution (and of collections of frequencies) is about the... what do you call it... "potential losslessness" of manipulating records after downsampling, not about the accuracy of reproduction of the original (natural, pre-record, pre-processed) sound of the downsampled record.

I honestly haven't been arguing 'for the sake of it'. I'd like this to be understood. But everyone is free to draw their own conclusions.

best regards

I mainly posted here to try to get a different take on what was spinning in my head about terminology. I didn't want to start a rantfest about the SH forums - although to be honest, the reason I posted here was simply because I perceive that the regulars here are just flat-out more technically competent in audio engineering than SH as a whole. (i.e., HA people tend to have one or more DSP textbooks.)

Mike may be right. "Frequency resolution" has a clear frequency-domain definition, so it makes sense that "time resolution" would have an analogous time-domain definition. In which case... you're right. However, it is surprisingly hard to actually find a definition of resolution itself. I don't recall such a definition in my textbook (Lathi).

And I still have trouble reconciling the notion of time resolution being directly coupled to the sampling rate, against so many things being resolvable at the subsample level - delays, peaks, etc. (Ignoring the fact that some of those things, like peaks, can change drastically after a resample.) Perhaps there are two classes of "resolution" - one is based on the sampling rate and represents what any given algorithm sees, while the other is based on the real-valued signal the signal reconstructs to, and represents values that are related to a large number of samples (due to FIR reconstruction), and so can exceed the sample-rate resolution limitation.

And for either class of resolution, the error in your measurement is also affected by your choice of algorithm - ie, a measurement may change significantly if downsampling 96->44.1 as compared to 88.2->44.1, because of the added ringing.

AFAIK, time resolution is most commonly used to refer to the time that a given block represents when converted to the frequency domain. "The short window has a time resolution of 128 samples, while the steady state block has a time resolution of 2048 samples".

As for other uses, I'm sure there are some, I just haven't read the term in other areas.

But what these guys are talking about is the relative timing of interchannel delays, which is a different question and one that is of order 1/(fs * number of quantization levels), which is a pretty small number.
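Plugging CD numbers into that order-of-magnitude claim (a back-of-envelope figure, nothing more):

```python
fs = 44100          # CD sample rate (Hz)
levels = 2 ** 16    # 16-bit quantization levels
dt_min = 1.0 / (fs * levels)
print(dt_min)       # ~3.5e-10 s: sub-nanosecond interchannel timing
```

So the smallest representable interchannel delay on this estimate is a few hundred picoseconds, far below any reported audibility threshold.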


QUOTE ("woodinville")

The first issue is that of pure delay. You've killed that. You can point out that if you store the data in 16 bit signed, that yes, there is a limit, it's directly related to the sampling rate times the number of levels available for quantization... i.e. directly related to the SNR and the sampling rate.

Just me doing the arguing in that thread, really. On the fly (though it was never a prime point) I hedged that what you refer to as 'delay' is dependent on the measure size, as you state. However, you seem unwilling to acknowledge that such 'delay' is no definite attribute of the undownsampled source; it is an attribute summed circumspectly from the phases of frequencies surviving the downsample. The frequencies which didn't survive the downsample contained the information required to resolve the true subsample detail of 'time-localised energy spikes'.

QUOTE ("woodinville")

The second is that of jitter. You need to beat on these people to point out that jitter is different for each sample, and your time delay is not.

Comments about jitter came right at the end... or maybe it's grown... eek. Anyway, there's enough on the plate already.

QUOTE ("woodinville")

Time delay has been reported as audible down to 5 to 10 microseconds in binaural settings with a great deal of care and signal prep involved. No lower.

Informative, although I have never argued against the subsample time resolution of frequencies surviving the implicit lowpass of the sample rate. The argument was about whether such precision can fairly be referred to as 'time resolution'.

QUOTE ("woodinville")

Looking onward, we see the twerp who insists that things have to be periodic in order for subsample resolution to work.

Ahem, that would be me, but you've unkindly misrepresented me. Again, my point is: "subsample resolution" of what, exactly? The subsample levels implied in PCM have to assume all energy above the Nyquist frequency is zero - so yes, 'things' do have to be periodic; specifically, the component periods of the frequencies which inform the spaces between samples all have to be less than 2 samples in length - because shorter frequencies are not informed by the record. Subsample detail of the waveform is, for standardisation purposes, assumed to be consistent with the remaining information, because the original information is lacking. So what can 'time resolution of PCM' refer to? Any synthetic measurement we can take? Is there not already a fair candidate for this term?

QUOTE ("woodinville")

Perhaps he needs to be acquainted with both the Nyquist theorem and the fact that, for all real signals, Fourier synthesis works.

I am acquainted with it, thanks, and understand some of its limitations. Just trying to pass that on.

QUOTE ("woodinville")

Looking onward, we see the twerp who insists that things have to be periodic in order for subsample resolution to work.

QUOTE (ChiGung)

Ahem, that would be me, but you've unkindly misrepresented me. Again, my point is: "subsample resolution" of what, exactly? The subsample levels implied in PCM have to assume all energy above the Nyquist frequency is zero - so yes, 'things' do have to be periodic; specifically, the component periods of the frequencies which inform the spaces between samples all have to be less than 2 samples in length - because shorter frequencies are not informed by the record. Subsample detail of the waveform is, for standardisation purposes, assumed to be consistent with the remaining information, because the original information is lacking.

You mention periods "less than 2 samples in length". This is a band limited signal; there are no such periods. I think you are trying to say that since you can't have frequencies higher than the Nyquist frequency, it must be true that you cannot have frequencies closer together than 2/fs. However, this is not true. The Nyquist theorem allows exact reconstruction of band limited signals, which means you can resolve infinitely many points between any two points within the reconstructed waveform just by upsampling and filtering.

Let me give an example. If you sample at 10 Hz and are bandlimited to 5 Hz, you can resolve the amplitudes at 1 Hz, 1.0000000000000000000001 Hz, etc. perfectly. It's just that since you're bandlimited, you have no new information at these interpolated points (i.e. they're exactly determined by the values of the adjacent points). But you can still retrieve their values using resampling, sinc interpolation or whatever.
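Here is a sketch of that retrieval with a truncated sinc interpolator (the numbers are arbitrary, and a production resampler would window the sinc, so the accuracy here is only approximate):

```python
import numpy as np

fs = 10.0                        # sample rate (Hz); signal bandlimited below 5 Hz
n = np.arange(-1000, 1000)       # the stored sample indices
x = np.sin(2 * np.pi * 1.0 * n / fs)   # a 1 Hz tone, sampled at 10 Hz

def sinc_interp(t):
    """Bandlimited reconstruction at an arbitrary instant t (seconds)."""
    return float(np.sum(x * np.sinc(t * fs - n)))

t = 0.123456789                  # an instant between the 0.1 s sample points
print(sinc_interp(t), np.sin(2 * np.pi * t))   # the two agree closely
```

The interpolated value is not new information - it is implied by the stored samples - but it can be read out at any instant, which is the point.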


QUOTE (Mike Giacomelli @ Oct 6 2006, 02:10)

QUOTE (ChiGung @ Oct 5 2006, 16:46)

The subsample levels implied in PCM have to assume all energy above the Nyquist frequency is zero - so yes, 'things' do have to be periodic; specifically, the component periods of the frequencies which inform the spaces between samples all have to be less than 2 samples in length - because shorter frequencies are not informed by the record. Subsample detail of the waveform is, for standardisation purposes, assumed to be consistent with the remaining information....

You mention periods "less than 2 samples in length". This is a band limited signal, there are no such periods.

Exactly - frequencies of that period are required to render time-localised energy between the samples, which we obviously don't have until we have access to non-downsampled records.

QUOTE

I think you are trying to say that since you can't have frequencies higher than the Nyquist frequency, then it must be true that you cannot have frequencies closer together than 2/fs.

I would never knowingly say anything to that effect.

QUOTE

Regarding your periodic remark, I'm afraid I don't follow.

I didn't exactly make the periodic remark; I think woodinville was referring to explanations in the other thread like this:

QUOTE

It was shown that the phase of a sinusoidal pattern which is assumed to be perfect and constant can be resolved to a fraction of the sampling interval. This subsample accuracy was possible because the pattern recorded is not a discrete event; its impression is recorded throughout many consecutive samples and its exact formation is inferable (ideally). For all discrete or unassumable events, PCM records can only specify time of occurrence to within a whole length of the sampling interval. Time resolution can only be improved when a known pattern can be observed throughout multiple samples - which is the case for computing the phase of synthetic frequency components, but not at all when trying to refine the temporal location of unassumed events.

For conversion and processing purposes etc., PCM is interpreted as a composite of exclusively periodic entities (frequencies). But localised energy need not fit into any periodic cycle, so we can't locate it precisely with the periodic tools (frequencies), even though we can locate all the individual tools precisely.

techie: "Captain, we have located a 'spike' event on the PCM sensor."
captain: "What is the position of the spike's peak, techie?"
techie: "324.37643 sampling intervals exactly, captain."
captain: "How can you be so precise?"
techie: "Because time delays are quite precisely encodable in PCM."

But a natural spike will have an unknown frequency spectrum; the tools to locate the true peak with certainty had to be removed before the downsample. We can make the best guess by assuming the 'subsample deviators' were all flat, but that's just a guess - the true peak could have been anywhere in the sample interval. If it was actually somewhere other than where the record suggests is most likely, that information was contained in the lowpassed higher frequencies, which now manifest as the unrecorded gaps between samples.

waffle, waffle, waffle.

QUOTE

It just refers to how precisely you can localize energy in the time domain. Generally this is just the sample period, but not always.

That is how I see it. I've just been clumsily trying to explain this, really, from a few different angles.

QUOTE (ChiGung)

The subsample levels implied in PCM have to assume all energy above the Nyquist frequency is zero - so yes, 'things' do have to be periodic; specifically, the component periods of the frequencies which inform the spaces between samples all have to be less than 2 samples in length - because shorter frequencies are not informed by the record. Subsample detail of the waveform is, for standardisation purposes, assumed to be consistent with the remaining information....

QUOTE (Mike Giacomelli)

You mention periods "less than 2 samples in length". This is a band limited signal; there are no such periods.

QUOTE (ChiGung)

Exactly - frequencies of that period are required to render time-localised energy between the samples, which we obviously don't have until we have access to non-downsampled records.

No. I'm saying that no such information ever existed. Not just in the sampled signal. Ever. The signal is band limited to no more than half the sampling rate; therefore there is NO information between samples, either in the information we have or in the original analog signal.

QUOTE (ChiGung @ Oct 5 2006, 20:30)

QUOTE

I think you are trying to say that since you can't have frequencies higher than the Nyquist frequency, then it must be true that you cannot have frequencies closer together than 2/fs.

I would never knowingly say anything to that effect.

You just did: "frequencies of that period are required to render time localised energy between the samples". Surely you realize these statements are equivalent?

QUOTE (ChiGung @ Oct 5 2006, 20:30)

QUOTE

Regarding your periodic remark, I'm afraid I don't follow.

I didn't exactly make the periodic remark; I think woodinville was referring to explanations in the other thread like this:

QUOTE

It was shown that the phase of a sinusoidal pattern which is assumed to be perfect and constant can be resolved to a fraction of the sampling interval. This subsample accuracy was possible because the pattern recorded is not a discrete event; its impression is recorded throughout many consecutive samples and its exact formation is inferable (ideally). For all discrete or unassumable events, PCM records can only specify time of occurrence to within a whole length of the sampling interval. Time resolution can only be improved when a known pattern can be observed throughout multiple samples - which is the case for computing the phase of synthetic frequency components, but not at all when trying to refine the temporal location of unassumed events.

I was referring to your statement that "things have to be periodic". That doesn't seem related to what you posted above, because you make no mention of periodicity - or if it is related, I cannot follow your logic.

QUOTE (ChiGung @ Oct 5 2006, 20:30)

For conversion and processing purposes etc., PCM is interpreted as a composite of exclusively periodic entities (frequencies). But localised energy need not fit into any periodic cycle, so we can't locate it precisely with the periodic tools (frequencies), even though we can locate all the individual tools precisely.

PCM does not assume anything is periodic. No frequency treatment is required to derive it either. The frequency/time trade-off you're referring to is a property of Fourier transforms, not of PCM, which could be implemented with nothing but time-domain tools.

QUOTE (ChiGung @ Oct 5 2006, 20:30)

techie: "Captain, we have located a 'spike' event on the PCM sensor."
captain: "What is the position of the spike's peak, techie?"
techie: "324.37643 sampling intervals exactly, captain."
captain: "How can you be so precise?"
techie: "Because time delays are quite precisely encodable in PCM."

But a natural spike will have an unknown frequency spectrum; the tools to locate the true peak with certainty had to be removed before the downsample. We can make the best guess by assuming the 'subsample deviators' were all flat, but that's just a guess - the true peak could have been anywhere in the sample interval. If it was actually somewhere other than where the record suggests is most likely, that information was contained in the lowpassed higher frequencies, which now manifest as the unrecorded gaps between samples.

We're assuming the signal is band limited, so we absolutely can exactly locate your spike. Think about how an ideal sinc interpolator works. You superimpose sinc functions to get the exact value of the function at that point. There is no approximation like you seem to be thinking. It really is exact.

What you're saying would be true of a signal that was not band limited, but if it was not bandlimited, we would not be able to reconstruct the signal anyway. Alternatively, it would be true if you assumed the signal was not bandlimited, but that you had an antialiasing filter in place to band limit it. In that case, you could in fact say that the antialiasing filter delocalizes your spike by discarding the extra frequency content needed to exactly localize it. However this is NOT what everyone else is discussing which is why they keep saying PCM and not antialiasing filter. We're assuming that input signal is already bandlimited, and thus it can be perfectly reconstructed.
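Here is a small numpy sketch of exactly that (the pulse position and the rates are made up for illustration): a bandlimited spike centred between sample points is sampled, then reconstructed on a much finer grid, and its peak comes back at the original off-grid position.

```python
import numpy as np

fs = 100.0                         # sample rate (Hz)
t0 = 3.2371                        # true spike position (s): between samples
n = np.arange(1024)

# A bandlimited "spike": samples of sinc(fs*(t - t0)), the narrowest pulse
# the 50 Hz band can hold, centred off the sample grid at t0.
x = np.sinc(n - t0 * fs)

# Reconstruct on a grid 100x finer than the sample period and find the peak.
t_fine = np.arange(3.0, 3.5, 1.0 / (100 * fs))
y = np.sinc(t_fine[:, None] * fs - n[None, :]) @ x
peak = t_fine[np.argmax(y)]
print(peak)                        # ~3.2371, despite the 0.01 s sample period
```

The spike is spread out by the band limit, but its peak location survives sampling exactly, which is the sense in which nothing is approximated.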

QUOTE (ChiGung)

However, you seem unwilling to acknowledge that such 'delay' is no definite attribute of the undownsampled source; it is an attribute summed circumspectly from the phases of frequencies surviving the downsample. The frequencies which didn't survive the downsample contained the information required to resolve the true subsample detail of 'time-localised energy spikes'.

Goodness me.

A properly done resampling contains all of the frequencies in the original. All of them.

If you started with a bandlimited signal (i.e. a properly sampled signal), say of DC to 20 kHz, down 96 dB at 22.05 kHz, and you used a good resampling filter, you have all the frequencies you started with.

If you didn't start with a bandlimited signal, you violated the Nyquist theorem.
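As a sketch of "a properly done resampling keeps everything" (tone, delay, and rates are illustrative): a 1 kHz tone carrying a 1 µs delay is recorded at 88.2 kHz, then decimated by two - lossless here, since the tone is already below the new Nyquist - and the sub-sample delay survives intact in the 44.1 kHz record.

```python
import numpy as np

fs_hi, fs_lo = 88200, 44100
f, delay = 1000.0, 1e-6           # tone far below the new 22.05 kHz Nyquist
N = 8820                          # 0.1 s at fs_hi -> exact FFT bins
n = np.arange(N)
x = np.sin(2 * np.pi * f * (n / fs_hi - delay))

# The signal is already bandlimited below fs_lo/2, so plain decimation
# by 2 is a "properly done" resample here: nothing is lost.
y = x[::2]

def delay_of(sig, fs):
    """Recover the tone's delay from its phase against an undelayed reference."""
    m = np.arange(len(sig))
    ref = np.sin(2 * np.pi * f * m / fs)
    k = int(round(f * len(sig) / fs))
    dphi = np.angle(np.fft.rfft(sig)[k]) - np.angle(np.fft.rfft(ref)[k])
    return -dphi / (2 * np.pi * f)

print(delay_of(x, fs_hi), delay_of(y, fs_lo))   # both ~1e-6 s
```

The delay is a phase attribute of the surviving band, so halving the sample rate does not coarsen it at all.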

QUOTE

QUOTE ("woodinville")

Time delay has been reported as audible down to 5 to 10 microseconds in binaural settings with a great deal of care and signal prep involved. No lower.

Informative, although I have never argued against the subsample time resolution of frequencies surviving the implicit lowpass of the sample rate. The argument was about whether such precision can fairly be referred to as 'time resolution'.

Why not? You can easily distinguish times, so you're resolving time.

QUOTE

Ahem, that would be me, but you've unkindly misrepresented me.

Sorry. I've been through this argument a few hundred times with people who have failed to consider the implications of Fourier analysis and/or proper sampling and reconstruction.

QUOTE

Again, my point is: "subsample resolution" of what, exactly? The subsample levels implied in PCM have to assume all energy above the Nyquist frequency is zero - so yes, 'things' do have to be periodic; specifically, the component periods of the frequencies which inform the spaces between samples all have to be less than 2 samples in length - because shorter frequencies are not informed by the record.

I have absolutely no idea what concept you are trying to convey. Did you perhaps swap a "longer" for a "shorter", or vice versa, here?

QUOTE

Subsample detail of the waveform is, for standardisation purposes, assumed to be consistent with the remaining information, because the original information is lacking. So what can 'time resolution of PCM' refer to? Any synthetic measurement we can take? Is there not already a fair candidate for this term?

Again, I have no idea what you mean. In the OP over there, the ability to resolve time is demonstrated, end of discussion.

What "original information" is missing? I see none.

Quite frankly I have no idea what you're trying to convey. The OP at the other board refutes what you seem to be somewhat unclearly claiming.

More I can not say until you can express yourself better.

Do not forget that there is no representation of frequencies outside of the Nyquist bandwidth in PCM at any time, so there was never anything there to lose.

Having been one of many people to try sampling (done properly) using a one-shot triggered with some delay around the sampling frequency, I simply can't see what you're worried about. You can easily capture sub-sample delays in input, you can create sub-sample delays digitally, etc.

You can prove sub-sample time domain accuracy using a simple single impulse (or conceptually, a Dirac Delta or Dirac pulse). In a correct system, you can show that the location of the inter-sample peak matches that of the original impulse, even though the pulse itself will be spread out (due to the band limiting).

The reference to periodic functions doesn't mean that sub-sample accuracy doesn't work for any other signal! The sub-sample inter-channel delay of sine waves is a nice example, but you can do just the same thing with a single impulse.

While we're avoiding mathematics, and getting by with examples and hand-waving concepts, consider this: since it works for theoretical infinitely long sine waves, and for theoretical infinitely short impulses, and for practical length "longish" sine waves and for practical length "shortish" impulses (i.e. both theoretical and practical extremes!), that should alert you to the fact that it probably works for any signal in between - including any real world signal.


QUOTE

We're assuming the signal is band limited, so we absolutely can exactly locate your spike.

You are all assuming that, and confidently browbeating with the resulting certainties. It is not a valid assumption for defining PCM's capabilities of accurately reproducing a source. The term 'source' is meaningless if it cannot differ in detail from 'record'. Of course, if the source is known to be suitably bandlimited, the PCM record of it is perfectly complete - there is nothing to discuss in this case. With the precondition that the source is already suitably bandlimited, any sample rate can precisely store any such compliant source - that is not news to me.

I can't believe the loosely reasoned, ignorance-assuming flak I'm taking here.... Corrections about when 'jitter' was first transiently referred to in the other thread?

Please gimme a break.

It's probably not worth it for me to explain anything again, because it will be read with such a presumption of error that any possible realisations will not make it from the reader's page to their predisposed mind.

QUOTE (me)

...such 'delay' is no definite attribute of the undownsampled source; it is an attribute summed circumspectly from the phases of frequencies surviving the downsample. The frequencies which didn't survive the downsample contained the information required to resolve the true subsample detail of 'time-localised energy spikes'.

How stupid do you guys suppose I am, if I was talking about a suitably bandlimited source there? Is it ambiguity about the term 'undownsampled'? If I meant 'upsampled record' I could have used that simpler term. Does 'undownsampled source' not translate to 'a source which was not downsampled', and therefore one capable of holding higher-frequency detail than the downsampled one? I don't think there is any real ambiguity there, or too much to follow to figure out what I am actually talking about.

I am talking about aspects of PCM's resolution of unassumable sources. Like how accurately a 16 kHz sample-rate record could render localised details that a 44 kHz record on a CD could render, or how accurately the CD format could render details which the mastering formats used in the production process could render. Why is that a strange interpretation of the term 'resolution' with regard to the capabilities of a digital format? When you guys ponder the resolution of a digital camera, do you say it perfectly records what it is pointed at if you take pictures out of focus? Yes, it might record the 'out of focus picture' perfectly precisely, but how in focus does the picture need to be before the digitised record of it necessarily loses information and is therefore an imprecise record? Most fundamentally, how accurately can the digitised record be used to render the actual 'scene', with its near-infinite complexity emergent from the natural universe?

You know I'm obviously talking about some things which many of you have not considered before. Then you might benefit from reading my points properly instead of giving them the hypercritical attention shown so far.

I can't believe the loosely reasoned, ignorance-assuming flak I'm taking here.... Corrections about when 'jitter' was first transiently referred to in the other thread?

Please gimme a break.

Perhaps it's bias, but I don't see what was either poorly reasoned, or ignorant, about pointing out your mistake re: when jitter entered the SHtv thread.

QUOTE

I am talking about aspects of PCM's resolution of unassumable sources. Like how accurately a 16 kHz sample-rate record could render localised details that a 44 kHz record on a CD could render, or how accurately the CD format could render details which the mastering formats used in the production process could render.

All of those sources are bandwidth-limited, of course.

QUOTE

Why is that a strange interpretation of the term 'resolution' with regards to the capabilities of a digital format? When you guys ponder the resolution of a digital camera, do you say it perfectly records what it is pointed at if you take pictures out of focus? Yes, it might record the 'out of focus picture' perfectly precisely, but how in focus does the picture need to be before the digitised record of it necessarily loses information and is therefore an imprecise record? Most fundamentally, how accurately can the digitised record be used to render the actual 'scene', with its near-infinite complexity emergent from the natural universe?

You know I'm obviously talking about some things which many of you have not considered before.

You flatter yourself.

QUOTE

Then you might benefit from reading my points properly instead of giving them the hypercritical attention shown so far.

Good luck with that.

edit: paranoid disambiguations

You don't appear to take correction well. I suggest you get used to it... particularly as you've implied, in the SHtv thread and here, that Nyquist/Shannon is in need of significant revision. That constitutes an extraordinary claim, and you are going to have to present an extraordinarily well-supported case to back it up. And your writing is going to have to be much, much more clear. Good luck with that.

But perhaps for starters, you can describe in much more detail the attributes of the 'unassumable sources' you are talking about. In what sense are they NOT bandlimited?

Group: Members
Posts: 439
Joined: 9-February 05
From: county down
Member No.: 19713

QUOTE

But perhaps for starters, you can describe in much more detail the attributes of the 'unassumable sources' you are talking about. In what sense are they NOT bandlimited?

I'm not taking criticism well at this point, because I've been at this for many posts now. Look: a 44 kHz record is bandlimited at 22 kHz, right? A 22 kHz record is bandlimited at 11 kHz. A downsample from 44 to 22 kHz loses the information in the band from 11 kHz to 22 kHz... r i g h t?

If we could all assume that whenever we downsample, all the information involved is already suitably bandlimited, that would be an easy world where we could make the claim that the 'time resolution of PCM' is as near as makes no difference perfect, and make it stick. But it is for the very reason that that is an unrealistic assumption that lowpassing (removing high-frequency energy) is required during good-quality downsampling conversion. Specifically, yes, you can say (and I have said it) that a record's information is implicitly bandlimited. But you cannot say that it is therefore bandlimited enough to losslessly survive any following downsamples.
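The lowpass-before-decimate point is easy to demonstrate numerically. A minimal numpy sketch (the 15 kHz test tone and the 44.1/22.05 kHz rates are my own choices for illustration, not anything from the thread): a tone above the new Nyquist limit folds to a spurious frequency under naive decimation, while a proper lowpass first simply removes it.

```python
import numpy as np

fs = 44100                          # original sample rate
t = np.arange(fs) / fs              # one second of samples
x = np.sin(2 * np.pi * 15000 * t)   # 15 kHz tone: legal at 44.1 kHz, illegal at 22.05 kHz

# Naive decimation: keep every other sample, no lowpass.
y = x[::2]
fs2 = fs // 2                       # 22050 Hz, Nyquist is now 11025 Hz
spec = np.abs(np.fft.rfft(y))
alias = np.argmax(spec) * fs2 / len(y)
print(alias)                        # 7050.0: the 15 kHz tone folds to 22050 - 15000 Hz

# Proper downsample: lowpass below the new Nyquist first, then decimate.
X = np.fft.rfft(x)
X[int(11025 * len(x) / fs):] = 0.0  # brick-wall lowpass at 11.025 kHz
x_lp = np.fft.irfft(X, n=len(x))
y_ok = x_lp[::2]
residue = np.max(np.abs(np.fft.rfft(y_ok))) / np.max(spec)
print(residue < 0.01)               # True: the tone is gone, not folded into the passband
```

The folded tone at 7.05 kHz is exactly the "lost information" being argued about: without the lowpass, out-of-band content does not vanish, it corrupts the in-band record.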

QUOTE

You don't appear to take correction well. I suggest you get used it...particularly as you've implied, in the SHtv thread, and here, that Nyquist/Shannon is in need of significant revision.

I said that common reports and opinions drawn from it were.

Trying to get back to the topic:

Ask yourself: "Is there any meaningful space-domain constraint on visual quality that is directly related to the pixel width?"

Sorry, what in particular are you miffed about? "Time resolution" has not stopped being discussed. There *are* lots of insightful comments about it here (I refer to Mike's and Woodinville's especially, as being of rather recent vintage).

We're assuming the signal is band limited, so we absolutely can exactly locate your spike.

You are all assuming that and confidently browbeating with resulting certainties.

We are assuming it because it is true.
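That claim can be checked numerically: a bandlimited "spike" placed between two sample instants is recoverable to far better than one sample period via Whittaker-Shannon (sinc) interpolation. A minimal numpy sketch, assuming a unit sample period and a spike position of 10.3 samples (both arbitrary choices of mine):

```python
import numpy as np

N = 64
true_pos = 10.3                        # spike sits between samples 10 and 11
n = np.arange(N)
x = np.sinc(n - true_pos)              # samples of a bandlimited impulse centred at t = 10.3

# Whittaker-Shannon reconstruction on a grid 100x finer than the sample period
t = np.arange(0, N, 0.01)
xt = np.sinc(t[:, None] - n[None, :]) @ x
found = t[np.argmax(xt)]
print(found)                           # ~10.3: located to hundredths of a sample period
```

The recovered position is limited only by the interpolation grid and the truncation of the sinc tails at the ends of the 64-sample window, not by the sampling period itself.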

QUOTE (ChiGung @ Oct 6 2006, 08:32)

It is not a valid assumption for defining PCM's capabilities of accurately reproducing a source.

PCM is always bandlimited. If it is not, then you do not have PCM. The source may not be bandlimited, but so what? That's why you have an antialiasing filter.

QUOTE (ChiGung @ Oct 6 2006, 08:32)

The term 'source' is meaningless if it cannot differ in detail from 'record'. Of course, if the source is known to be suitably bandlimited, the PCM record of it is perfectly complete; there is nothing to discuss in this case. With the precondition that the source is already suitably bandlimited, any sample rate can precisely store any such compliant source. That is not news to me.
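For what it's worth, the "nothing to discuss" case is just as easy to demonstrate as the aliasing case. A small numpy sketch (a 5 kHz tone and 44.1/22.05 kHz rates, chosen by me for illustration): a source that already fits under the new Nyquist limit survives even naive decimation with nothing folded or lost.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5000 * t)   # 5 kHz: already below the 11.025 kHz post-decimation Nyquist

y = x[::2]                          # decimate to 22050 Hz without any filtering
fs2 = fs // 2
spec = np.abs(np.fft.rfft(y))
peak = np.argmax(spec) * fs2 / len(y)
print(peak)                         # 5000.0: the tone is untouched, nothing has folded
```

Which is exactly the disputed precondition: the half-rate record is a complete record of this source only because the source was compliant to begin with.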