I've spent a good part of this past year fixing most of the plugins in my program* so that they don't perform differently at different sample rates.

If nonlinear processing is done in the digital domain, certain high harmonics can reflect back down into the audible range. Any signal that is generated in the digital domain above the Nyquist frequency aliases around the Nyquist frequency. For example, an 8 kHz tone in a 44.1 kHz sampled system, distorted by a fourth-order digitally implemented nonlinearity, would be expected to produce a fourth harmonic at 32 kHz. Since 32 kHz is about 10 kHz higher than the 22 kHz Nyquist frequency of a 44.1 kHz system, the 4th harmonic is aliased down to about 12 kHz, where it is far more audible than it would be in a digital system with a much higher sample rate or in an analog system. In a 96 kHz system, the same nonlinearity would produce a fourth harmonic at 32 kHz, where it would not be audible.
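The fold-back arithmetic above can be sketched in a few lines (a minimal illustration, not from the original post; the function name is mine):

```python
def aliased_freq(f_hz, fs_hz):
    """Frequency (Hz) at which a tone at f_hz appears after sampling at fs_hz.

    Anything above Nyquist (fs/2) folds back around it.
    """
    f = f_hz % fs_hz                          # sampling is periodic in fs
    return fs_hz - f if f > fs_hz / 2 else f  # fold the upper half down

# 4th harmonic of 8 kHz in a 44.1 kHz system: lands at ~12 kHz, squarely audible
print(aliased_freq(32_000, 44_100))   # -> 12100
# Same harmonic in a 96 kHz system: stays at 32 kHz, no aliasing
print(aliased_freq(32_000, 96_000))   # -> 32000
```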

Aliasing can also impact certain dynamic range compression or expansion algorithms that are based on nonlinearities implemented in the digital domain.

Most kinds of processing that are used in music synthesis and mixing are linear, so aliasing is not a common problem.


A simple non-linear transfer function (i.e. literally mapping sample values to new sample values using a look-up table which, if plotted input vs. output on a graph, would show a curve) can produce harmonics above the noise floor into the MHz range. You don't prevent audible aliasing just by using a "slightly" higher sample rate.


It helps a bit, but "just" 96kHz isn't really a solution.

Cheers, David.

This isn't completely true, David. If you have an arbitrary non-linear function, yes, you will have sky-high THD generation. However, if you have a function with a known polynomial order (such as a Taylor series approximation of a transcendental function), then you will also have a predictable limit on the number of harmonics produced. This is because a polynomial function is the equivalent of amplitude modulation - x^2 is the signal x modulating the amplitude of signal x - and AM has predictable sidebands (M-N and M+N; in the x^2 case, that would be 0 and 2x).
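That sideband arithmetic is easy to check numerically. A rough sketch (mine, not the poster's; the parameter values are arbitrary): square a pure tone and probe a few DFT bins - all the energy lands at DC and the second harmonic, exactly the M-N and M+N sidebands of x modulating x.

```python
import cmath
import math

FS, N, F0 = 48_000, 4_800, 480        # sample rate, block length (0.1 s), test tone

x = [math.sin(2 * math.pi * F0 * k / FS) for k in range(N)]
y = [v * v for v in x]                # x^2: a pure 2nd-order nonlinearity

def dft_mag(sig, freq_hz):
    """Normalised magnitude of the DFT of `sig` at one frequency (Hz)."""
    w = -2j * math.pi * freq_hz / FS
    return abs(sum(v * cmath.exp(w * k) for k, v in enumerate(sig))) / len(sig)

# sin^2 = 1/2 - (1/2)cos(2wt): components only at 0 Hz and 2*F0
print(round(dft_mag(y, 0), 3))        # -> 0.5
print(round(dft_mag(y, 2 * F0), 3))   # -> 0.25
print(round(dft_mag(y, 3 * F0), 3))   # -> 0.0  (no higher harmonics from x^2)
```

An arbitrary waveshaper, by contrast, behaves like a polynomial of unbounded order, which is why its harmonics run away to the top of the spectrum.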

Yes, I agree. Sorry, that's what I was implying (thought I'd said it earlier in the thread - but there have been a lot of threads like this) - the way you avoid aliasing is to design the processing properly/carefully, so you know what it's doing. You may then need some oversampling, but you'll know how much.

Just using oversampling to fix aliasing without understanding is what I was criticising (and maybe not what Arny was talking about).

I feel sure some of the commercial DSP/plug-ins that allegedly "work better" at higher sample rates haven't done their calculations properly. It could be they apply temporal parameters per sample rather than per second, but sometimes I think the issue is that any attempts to design out aliasing are inadequate. In this case, even jumping to 96kHz doesn't guarantee the DSP works properly - just less badly. If this is the case, we're not safe until we hit 10s of MHz.

One should sample at double the highest frequency. The reverse holds too: if you sample at 44.1 kHz, there shouldn't be any signal above 22 kHz in the source. Practical consequence: the input must be band limited. If you want to cover everything up to 20 kHz, the only thing you can do is use a pretty steep low-pass filter (brick wall). In general this type of filter produces artifacts like pre-ringing.

If you sample at 88.2 kHz, your problems remain the same: no frequency above Nyquist, please. This time our Nyquist is 44.1 kHz. We might decide to use a brick wall again, but this time it is much farther out of our audible range, so it probably has less impact.

As the late Julian Dunn phrased it: "A direct effect of the higher sampling rate is that for an identical filter design the time displacements will scale inversely with sample rate. Hence an improvement can be made just from raising the sample rate - even for those who cannot hear above 20 kHz."

There isn't much musical energy at this level. We also might decide to use a smoother low-pass filter, e.g. starting at 30 kHz.
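Dunn's scaling point is just arithmetic: the same filter design means the same number of taps, so its time smear is fixed in samples and shrinks in milliseconds as the rate rises. A toy calculation (the tap count is a made-up example, not from any actual converter):

```python
TAPS = 255                            # same hypothetical linear-phase FIR at both rates

def span_ms(fs_hz, taps=TAPS):
    """Duration in ms of a `taps`-long FIR impulse response at sample rate fs_hz."""
    return 1_000 * taps / fs_hz

for fs in (44_100, 96_000):
    print(f"{fs:>6} Hz: {span_ms(fs):.2f} ms of potential (pre-)ringing")
```

The same filter's ringing window is a bit less than half as long at 96 kHz, which is the improvement Dunn describes even for listeners who hear nothing above 20 kHz.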

This forum has clearly lost its edge; I really feel that the world has turned - but without you.

To save everyone having to read the same meaningless dross again, I'm selectively quoting just this line to make it clear that it's Cavaille's post I'm replying to. This post has effectively dragged me out of a self-imposed retirement from HA, due to being repeatedly hectored by a seemingly psychotic member who shall remain nameless, so you can guess how much this post has got my back up!

Are you suggesting that the generation of today are in some way responsible for preserving audible content above ~22kHz for the sake of future genetically superior human beings who populate the planet long after we're gone? 99.9% of the world's human population don't stand a bat's chance in hell of hearing anything above 22kHz ever, so it's totally pointless sampling accurately at anything above 44.1kHz for the final delivery format. 96 or even 192kHz has its place during the editing process for the blindingly obvious reasons covered by previous posters, but anything beyond 44.1kHz for the final delivery format makes absolutely no sense whatsoever no matter which angle you approach the argument from for the vast majority of the human population.

HA's standards have certainly slipped since I was a regular contributor if posts like Cavaille's are allowed to remain unedited, so I'm glad (if slightly saddened in some ways) to no longer be a regular HA contributor.


What's the problem with the "brick wall" filter? If done correctly, this evil pre-ringing happens above 21 kHz and shouldn't matter. You may also decide to allow some aliasing above 20 kHz and filter from there on. Not much pre-ringing left at all. Both approaches should be absolutely transparent. Good luck ABXing that!

Shouldn't the rest of the world's acoustics be considered as well? I mean, non-audible frequencies pass through materials and become audible. So they need to be there if you want natural sound reproduction. It's not just our eardrums in the room, is it?

To make a comparison with light: your day at the office would be pretty dark if you were to take the non-visible part of the spectrum out of the equation, as fluorescent lights fully depend on ultraviolet light, a higher-frequency light just outside the visible spectrum.

NOTE: I'm a musician and an audiophile, not a scientist. So I could be horribly wrong. And maybe this has been said a million times before. But to me it feels pretty obvious.

You should start to slow the soundwaves down right at the speaker covers.


/Levi

The answer to your question is: NO! This sounds like audiophile gibberish. The part of your speaker that produces the high frequencies above 20 kHz moves some mm² of light material with pretty low energy. This can't activate any other materials in your room.

To make a comparison with light: your day at the office would be pretty dark if you were to take the non-visible part of the spectrum out of the equation.

It wouldn't make any visible difference, because they're not visible.

QUOTE

As the fluorescent lights fully depend on ultraviolet light, a higher-frequency light just outside the visible spectrum.

But speakers don't depend on ultrasonic sound to create audible sound (well, there's a device that does, but it's not a normal speaker).

Any hypothetical intermodulation of ultrasonics from the original instruments, mixing in air to create audible components, will be strongest near the instruments themselves - and will be captured, in the audible range, by the microphones at the original event. So you don't need to create them in the listening room - they're already in the recording if they exist.

Far more real (i.e. measurable and sometimes audible) is unwanted intermodulation in the speakers themselves. That never existed in the original performance, and it can be measurably reduced by removing all ultrasonics cleanly.

Far more real (i.e. measurable and sometimes audible) is unwanted intermodulation in the speakers themselves. That never existed in the original performance, and it can be measurably reduced by removing all ultrasonics cleanly.

Once again a sentence from you I should keep and remember well! I lately looked over some frequency response plots of the latest B&W speakers. They introduce a resonance peaking at +10 dB around 40 kHz! I can imagine that while you feed them music with content at 40 kHz, the tweeter may introduce unwanted behaviour at audible frequencies. I doubt this is what anyone should want.

And just to tell you: personally, I'm only curious. I have no preference (except vinyl). I've never participated in this kind of discussion before, and I haven't done much more reading than the physics in school (15 years ago).

Though I did do some reading before writing this and all I can say is: This is waaay too complicated for me.

Turns out there are actually several ultrasonic speakers, where the air the sound passes through slows the soundwaves down so that they become audible at a certain point. You can find them at Disney World and in your local mall. Aimed at you.

But really, I can't understand how anyone without a degree in physics can say either YES or NO to my question. You really need to know your way around nonlinear acoustics. Here's a place to start: Nonlinear Acoustics Wiki

So unless anybody can explain this in a simple (and informed) way, I'm just gonna have to leave this alone. Really sorry for wasting your time.


/Levi

My understanding is ultrasonic speakers work similar to AM radio - you have an (inaudible) ultrasonic wave modulated by the audio you want to produce. You never hear the ultrasonic signal itself; it's just a carrier for the audible stuff.

So unless anybody can explain this in a simple (and informed) way, I'm just gonna have to leave this alone.

You already were offered an explanation in a simple and informed way:

QUOTE (2Bdecided @ Nov 8 2011, 13:00)

Any hypothetical intermodulation of ultrasonics from the original instruments, mixing in air to create audible components, will be strongest near the instruments themselves - and will be captured, in the audible range, by the microphones at the original event. So you don't need to create them in the listening room - they're already in the recording if they exist.

My understanding is ultrasonic speakers work similar to AM radio - you have an (inaudible) ultrasonic wave modulated by the audio you want to produce. You never hear the ultrasonic signal itself; it's just a carrier for the audible stuff.

Well, that's one kind, where you have a receiver demodulating the ultrasonic sound.

The kind I'm referring to needs no receiver. It's just air slowing the soundwaves down, and at a desired distance the sound becomes audible. As I understand it, they shape the ultrasonic sound so that when the air has slowed it down to certain wavelengths, interference amongst the short soundwaves produces longer wavelengths and thus a high-fidelity audible sound. The mechanism is called a parametric array.

Any hypothetical intermodulation of ultrasonics from the original instruments, mixing in air to create audible components, will be strongest near the instruments themselves - and will be captured, in the audible range, by the microphones at the original event. So you don't need to create them in the listening room - they're already in the recording if they exist.

@ David & Soap: sorry guys, I didn't fully understand this explanation when I first read it. Learning as I go along. Very true, this!

But still, it leaves out all the sounds that haven't been recorded through a mic - synths, or a bass recorded direct via line input. And more importantly (I'm guessing), it leaves out all the sounds together, in a mix.


In both of these examples, the ultrasonic wave is used as a carrier wave. The fact that it is demodulated "naturally" without a receiver is irrelevant - the ultrasonic part is still just a carrier wave, and the modulation signal contains the audible frequency information you hear after demodulation.
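The "demodulation without a receiver" can be mimicked numerically: pass an amplitude-modulated ultrasonic carrier through a square-law nonlinearity (a crude stand-in for air's behaviour at high intensity - my sketch, with arbitrary frequencies) and the audible modulation drops out at baseband:

```python
import cmath
import math

FS, N = 400_000, 4_000                # simulation rate and length (10 ms)
FC, FM = 40_000, 1_000                # ultrasonic carrier, audible modulation tone

t = [k / FS for k in range(N)]
am = [(1 + 0.5 * math.cos(2 * math.pi * FM * tk)) * math.cos(2 * math.pi * FC * tk)
      for tk in t]                    # AM signal: energy only at FC and FC +/- FM
demod = [v * v for v in am]           # square-law nonlinearity ("the air")

def dft_mag(sig, freq_hz):
    """Normalised magnitude of the DFT of `sig` at one frequency (Hz)."""
    w = -2j * math.pi * freq_hz / FS
    return abs(sum(v * cmath.exp(w * k) for k, v in enumerate(sig))) / len(sig)

print(round(dft_mag(am, FM), 3))      # -> 0.0   nothing audible in the AM signal itself
print(round(dft_mag(demod, FM), 3))   # -> 0.25  the 1 kHz tone appears after squaring
```

The transmitted signal contains no energy at 1 kHz at all; the audible tone only exists after the nonlinearity, which is exactly why the carrier being ultrasonic doesn't matter to the listener.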

"The AudioBeam directional loudspeaker works with ultra-sound, modulating the audible sound onto an ultrasonic carrier frequency, much like a radio station does, and then emitting this signal via 150 special piezoelectric pressure transducers. Audible sound is only generated at a distance from the AudioBeam, when the signal is demodulated because of the non-linearity of air."

[EDIT - Also note that these systems are explicitly taking advantage of the fact that the ultrasonic carrier wave is inaudible - if the ultrasonic carrier wave were audible on its own, you wouldn't want to be anywhere near these types of speakers.]

I bet most of these 16/44.1 fanatics rip/ripped their best tapes/vinyls using at least 24/96.

Juha

My vinyl and cassette rips are at 24/44.1 (the capture card is an M-Audio 24/192) -- the higher bit depth at transfer merely saves me the minor trouble of having Audition convert it upwards. The technical reason for the higher depth is twofold: 1) more headroom during transfer (I'm lazy and don't always want to seek out the absolute highest peak beforehand), and 2) it keeps rounding errors inaudible when you're passing the audio through a digital processing/production workflow.
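The rounding-error half of that argument is easy to see with a one-line quantiser (a toy model of integer rounding, not Audition's actual dither pipeline; the sample value is arbitrary):

```python
def quantise(x, bits):
    """Round x (in [-1, 1)) to the nearest step of a `bits`-bit integer grid."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

x = 0.123456789                       # an arbitrary intermediate sample value
for bits in (16, 24):
    err = abs(quantise(x, bits) - x)  # 16-bit error ~1e-5, 24-bit ~5e-8
    print(f"{bits}-bit round-off: {err:.1e}")
```

Each processing stage that rounds back to 16 bits adds error on the order of 1e-5 of full scale; working at 24 bits keeps each stage's error a few hundred times smaller, so accumulated round-off stays well below audibility.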

If the source audio is an SACD, I'll use an 88.2 kHz sample rate... just because I can. But that's for archiving. For listening I'm quite happy to use downsampled versions if need be.