Debatable. If you can, do so, but you'll find that it chews up resources far faster than working at 44.1. Bit depth (24-bit vs 16-bit) would be my preference in a pinch, as it gives more flexibility as well as (debatably) better sound quality. Actually, there are loads of articles that'll be far more useful than people's opinions. This: http://www.soundonsound.com/sos/sep07/a ... lmyths.htm is a good starting point; carry on from there.

44.1, unless you have a really specific reason for 96, your room acoustics and recording chain are top-rate, and you have the available resources to "waste" (double the CPU load, double the disk space, etc.).
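To put rough numbers on the "waste": a quick back-of-the-envelope calculation for uncompressed stereo PCM shows that moving from 16-bit/44.1kHz to 24-bit/96kHz more than triples the disk space per minute.

```python
# Rough disk-space arithmetic for one minute of uncompressed stereo audio.
def mb_per_minute(sample_rate, bit_depth, channels=2):
    """Uncompressed size in megabytes for 60 seconds of audio."""
    return sample_rate * (bit_depth // 8) * channels * 60 / 1e6

print(mb_per_minute(44100, 16))  # 10.584 MB/min at CD quality
print(mb_per_minute(96000, 24))  # 34.56 MB/min at 24/96
```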

You always want to maintain the highest sample rate for as long as possible. However, just because you have the capability to record at 96 does not mean that you should. There are a lot of factors to consider, and I won't go into detail as I am no expert, but disk space is one of them, processing power is another, and the internal components of your A/D converter are another. Even if you have the capability, that doesn't mean your gear operates best at 96. It really is quite confusing, and there are a million previous threads debating such questions, one or two of them mine.

Bottom line: after all the reading and research, I feel I know less than when I started. Or else I know more, which only makes it more confusing, so it feels like I know less.

I can hear a difference between 96 and 48; however, what that difference is, is debatable, and once I listen to 48 for a while it sounds just fine.

At the end of all of that, I would say that unless you are recording classical music, stick to 48/24 or 44.1/24 and forget about trying to understand any of it.

Some say certain plugins sound better at higher sample rates. And I have seen convincing claims that lower-end converters perform better at higher sample rates as well; this seems counterintuitive, I know. But space and computing power will most likely be the deciding factor for most home studios. Dedicated DSP cards like those in the UAD line and HD Pro Tools systems upsample to as high as 192kHz for their internal processing, I believe, which negates the need to record at higher sample rates.

I could be wrong, this is just what I have gathered from my untrained research!

Yes. There are technical reasons for this: especially for aliasing reasons, some virtual instruments do benefit from going up that high. But this is a completely different issue to audio recording.

passerrby3141 wrote:Dedicated DSP cards like those in the UAD line and HD Pro Tools systems up sample to as high as 192khz for their internal processing I believe, which negates the need to record at higher sample rates.

It's nothing to do with DSP cards per se; it's the DSP algorithms used. Some algorithms and processes benefit from upsampling, and some native plugins do this as well where it makes sense. However, upsampling does tend to make a plugin much heavier on the CPU, so plugin designers try to avoid high-cost algorithms unless it makes sense, or is necessary, to use them.
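A small sketch of why nonlinear processes benefit from upsampling (all numbers are illustrative): a soft clipper driving a 15 kHz sine generates harmonics at 45 kHz and above, which cannot exist at a 44.1kHz sample rate and so fold back into the audible band as aliases. An oversampled plugin generates those harmonics at a rate where they fit, then filters them off before converting back down.

```python
import numpy as np

fs, N = 44100, 44100                   # one second at 44.1 kHz; bin spacing = 1 Hz
t = np.arange(N) / fs
clean = np.sin(2 * np.pi * 15000.0 * t)

# A soft clipper (tanh) adds odd harmonics: 45 kHz, 75 kHz, ... None of these
# can be represented at fs = 44.1 kHz, so they fold back as aliases.
driven = np.tanh(3 * clean)

spectrum = np.abs(np.fft.rfft(driven))
fundamental = spectrum[15000]          # the 15 kHz tone itself
alias = spectrum[900]                  # 45 kHz folds to |44100 - 45000| = 900 Hz
print(round(alias / fundamental, 2))   # a clearly non-negligible aliased component
```

The aliased product at 900 Hz is harmonically unrelated to the 15 kHz input, which is why this kind of distortion sounds so unpleasant compared with ordinary harmonic distortion.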

Human hearing goes up to about 20 kHz.... but you need to double that to eliminate something called aliasing, which is something like a "rounding error" that causes mucho distortion and strange frequencies.

However, I regularly argue that 80 kHz or higher is necessary to get a good representation of the frequencies around 20 kHz.

If you imagine a sine wave at 20 kHz... and then you sample that twice per period (which is a 40 kHz sampling rate), you don't happen to get a great representation of the wave... For example, if you're unlucky and you happen to sample the sine wave at the two zeros each period, you'll get nothing.

in fact, you're guaranteed to get a "lower volume" reproduction of the wave. Thus, I suggest at least 4 samples per period of the highest frequency you're interested in reproducing accurately.

192 kHz means you're getting 8 samples per period of a 24 kHz wave... that ought to reproduce those high frequencies much better.

Anyhow - that'd be the difference! Much music is "low end heavy" anyway.
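For what it's worth, the "two samples per period" worry above is easy to reproduce numerically. This sketch (illustrative numbers only) samples a 20 kHz sine at exactly 40 kHz with different starting phases:

```python
import numpy as np

fs, f = 40000.0, 20000.0            # sampling at exactly twice the sine's frequency
t = np.arange(100) / fs
peaks = []
for phase in (0.0, np.pi / 4, np.pi / 2):
    samples = np.sin(2 * np.pi * f * t + phase)
    peaks.append(round(float(np.max(np.abs(samples))), 3))
print(peaks)  # [0.0, 0.707, 1.0] -- the captured level depends on where samples land
```

Note this degenerate behaviour only occurs at exactly twice the signal frequency, which is why the sampling theorem demands a rate strictly greater than twice the bandwidth.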

Like others have said, I'd stick with 44.1 unless you're doing something very delicate in a very nice room. I used to do everything at 96. It was a pain to mix large sessions because my computer really couldn't handle all of that data. Now my classical recordings are done at 96/24 while pretty much everything else is at 44.1/24. A friend of mine uses 48 to get away from a few of the side effects of 44.1. He even uses 48 for classical work and there's no problem in the sound. Unless you can do 192k or... SACD rates (2822k etc...) I'd stick with lower rates. 192 and SACD are definitely far superior, but is it worth the strain on your computer?

OK, I made a rough approximation, because I'm hitting the same problems at 48 versus 96...

Basically, I roughly computed that if you wanted 48 to sound more like 96, you'd put in an EQ that raised 20 kHz by 1 dB and 10 kHz by 0 dB, with a smooth line between the two (eh, I have a hand-drawn EQ)... it'd be pretty close.

So then I tested it with some songs in my collection... basically, if you've already boosted the high end on cymbals and the like, it doesn't really matter too much; it just puts a small punch on the cymbals.

Check out Paul Lehrman's article in Mix magazine (available on the web): The Emperor's New Sampling Rate.

The title says it all, really. Even Ethan Winer of RealTraps, whom I respect greatly when it comes to sound (his ears are much more golden than mine), suggests that differences heard (and I'm not saying that people who hear a difference are lying) can be due to minute changes in head position between changing the material, affecting the phase of the sound. Mr Lehrman's article refers to double-blind tests where there was no statistical difference, and apparently 'educated' ears were used. Of course, he has been flamed within an inch of his life in some places, but many people, in particular manufacturers and those who have been saying for years that they can hear a difference, have a vested interest in 88.2 and up.

Now, I'm not one to go against the word of many much more experienced and successful engineers on this, but in my (admittedly very limited and unscientific) tests I can't hear a difference in my little studio. And since I can run a few more plug-ins (not as many as you would think; I run Guitar Rig in HQ mode, which does internal upsampling, and use UAD plugs, which also upsample where required) and many more tracks, I keep using 44.1 or 48.

I stand corrected; I just re-read the article. The sample size was 'hundreds' of people. Perhaps not definitive, but pretty persuasive, certainly enough to stop and think about it and hope for more studies. In the meantime, maybe stick to 44.1?

I cannot find anything wrong with the study at all. I suppose the only thing you could do to improve it would be some kind of frame to lock the listener's head in the exact same position every time. Something like they had in A Clockwork Orange, maybe? lol

I know I cannot hear any differences in 96kHz audio as opposed to 44.1kHz, but then again my hearing is not great after a few years playing gigs, so I don't really think this proves much.

It reminds me of a test I saw on some audiophile forum once, where they did a blind test among themselves to prove they could hear a difference between 16-bit/44.1kHz and 320kbps MP3. They only guessed correctly about 40% of the time, and concluded that deaf people could probably do as good a job as they did, or better, since anyone has a 50% probability of guessing right by picking a random answer.
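For context, a score like that really is indistinguishable from coin-flipping. A quick binomial check makes the point (the trial count here is assumed; the post only gives the percentage):

```python
from math import comb

def p_at_most(k, n):
    """Probability of k or fewer correct answers out of n fair-coin guesses."""
    return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

# Suppose the test had 20 trials and 8 correct answers (40%) -- counts assumed,
# the post gives only the percentage.
print(round(p_at_most(8, 20), 3))      # 0.252: well within pure chance
```

With 20 trials, scoring 40% or worse happens about a quarter of the time by luck alone, so the result says nothing either way.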

Interweaved wrote:If you imagine a sine wave at 20 kHz... and then you sample that twice per period (which is a 40 kHz sampling rate), you don't happen to get a great representation of the wave... For example, if you're unlucky and you happen to sample the sine wave at the two zeros each period, you'll get nothing.

in fact, you're guaranteed to get a "lower volume" reproduction of the wave. Thus, I suggest at least 4 samples per period of the highest frequency you're interested in reproducing accurately.

This is why you need a sampling rate greater than, rather than equal to, twice the highest frequency you are sampling.

It's only when the frequency of the sampled signal is exactly half the sampling frequency that this problem occurs, as there are an infinite number of sine waves of frequency fs/2 with different phases and amplitudes that could all give rise to the same sampled data.

If, however, the sampling rate is even slightly greater than twice the frequency of the sampled sine wave, then there is only a single sine wave with a frequency less than fs/2 that will fit through all the sample points, and that's your original signal.
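That uniqueness claim can be checked numerically. In this sketch (numbers are illustrative), a 20 kHz sine with an arbitrary phase is sampled at 48 kHz, and the one sine that fits the samples is recovered by writing it as a*sin(wt) + b*cos(wt) and solving a linear least-squares problem:

```python
import numpy as np

fs, f = 48000.0, 20000.0            # sample rate comfortably above twice the frequency
t = np.arange(256) / fs
phase = 1.234                        # arbitrary, deliberately "unlucky" phase
x = np.sin(2 * np.pi * f * t + phase)

# Only one sine below fs/2 can pass through these samples. Writing it as
# a*sin(wt) + b*cos(wt) turns the recovery into linear least squares.
A = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
(a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
print(round(float(np.hypot(a, b)), 6))   # 1.0: the full amplitude is in the samples
```

The samples may individually look "quiet", but the full amplitude and phase are completely determined by them, which is exactly what an ideal reconstruction filter exploits.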

Mark has already tackled this, but it is a common fallacy and deserves the emphasis:

Interweaved wrote:Basically.....

Human hearing goes up to about 20 kHz.... but you need to double that to eliminate something called aliasing, which is something like a "rounding error" that causes mucho distortion and strange frequencies.

The Shannon/Nyquist sampling theorem states that you need to sample at a rate which is more than twice the bandwidth of the required signal. Sampling is a modulation process, and the 'more than twice' element is to ensure that the source and its modulated sidebands remain separate and separable.
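A concrete illustration of what happens when the sidebands are not kept separate (numbers are illustrative): sample a tone above Nyquist without an anti-alias filter, and its image lands back in band at fs - f.

```python
import numpy as np

fs, N = 48000, 4800                 # 0.1 s at 48 kHz; bin spacing = 10 Hz
f_in = 30000.0                      # a tone above the 24 kHz Nyquist limit
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f_in * t)

spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / N
print(peak_hz)                      # 18000.0 -- the 30 kHz tone folds to fs - f_in
```

This is why the anti-alias filter has to remove everything above fs/2 *before* sampling; once the fold has happened, the alias is indistinguishable from a genuine 18 kHz tone.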

However, I regularly argue that 80 kHz or higher is necessary to get a good representation of the frequencies around 20 kHz.

It is one way to do it, but it is not 'necessary.' Given a wanted bandwidth of 20kHz, how much higher than 40kHz the sample rate needs to be is determined by the efficacy of the anti-alias and reconstruction filtering.

To be honest, 44.1 was too low when the standard was set -- 60kHz would have been a far better choice for a host of reasons. Equally, 96kHz is arguably wastefully high. However, technology hasn't stood still for the last 25 years, and most decent converters can now provide very satisfactory performance sampling at 44.1kHz.

Twenty plus years ago, you needed the very best of converters to achieve adequate performance sampling at 44.1. Thankfully, that is (arguably) no longer the case, and there are some extremely competent mid-price converters out there now.

If you imagine a sine wave at 20 kHz... and then you sample that twice per period (which is a 40 kHz sampling rate), you don't happen to get a great representation of the wave... For example, if you're unlucky and you happen to sample the sine wave at the two zeros each period, you'll get nothing.

This is true, but hardly relevant since the theorem requires the sample rate to be more than twice the bandwidth of interest. In the case you cite, the lower modulation sideband of the source signal will lie precisely on top of it, and what you hear will be the summation of the two, which -- if they are perfectly aligned because the sample rate is exactly twice the source frequency -- could be nothing at all, as you say!

Thus, I suggest at least 4 samples per period of the highest frequency you're interested in reproducing accurately.

Yes, this will obviously work, but it is a bull in a china shop approach, which is wasteful of the information capacity provided by sampling at that high a rate.

The theorem is 100% accurate in its claims. The only issue is in the practicalities of implementing it -- specifically in designing the filters to do what they are required to do without damaging the wanted signal in the process.

There was a time when working at 96kHz (or higher) provided a clear sonic advantage over working at 44.1kHz, simply because the filter artefacts were removed above the human hearing range. However, as filter design and clever techniques like delta-sigma converters have evolved, the difference has become far less pronounced -- in some cases to the extent that the difference is barely audible at all.

Countering that, computer processing power and data storage capacity have increased (and continue to increase) enormously, and while working at 96kHz involved serious overheads for most systems a few years ago, that is less the case now.

Personally, I generally work at 24/96 unless specifically requested not to, because all of my equipment can handle that rate doing the kind of work I do without a significant performance reduction. But on the odd occasions that I am required to work at 44.1 or 48kHz, I don't hang my head in shame at the poor quality -- the difference is often undetectable.

But as a working principle, it makes sense to record source material at the highest possible resolution and quality, because the quality can only suffer after that.

Interweaved wrote:Basically, I roughly computed that if you wanted 48 to sound more like 96, you'd put in an EQ that raised 20 kHz by 1 dB and 10 kHz by 0 dB, with a smooth line between the two (eh, I have a hand-drawn EQ)... it'd be pretty close.

I think what you are suggesting here is compensation for the transition area of the 'brick wall' filter(s), since some (not all) exhibit a mild roll-off as they approach the turnover frequency. But of course, amplitude is not the only aspect of a filter to consider -- phase is as important (if not more so) -- and by introducing your HF lift, you will also introduce phase shifts that will change the character of the sound in some (indeterminate) way.
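The amplitude/phase coupling is easy to see even in a toy model. This sketch (the corner frequencies and the first-order shelf topology are assumptions for illustration, not taken from the post) evaluates an analog high shelf with a +1 dB plateau and prints its phase response in degrees:

```python
import numpy as np

g = 10 ** (1 / 20)                  # +1 dB as a linear gain factor
w1 = 2 * np.pi * 10000              # zero: the lift starts around 10 kHz (assumed)
w2 = w1 * g                         # pole placed so the high plateau sits at +1 dB

phase_deg = {}
for f_hz in (1000.0, 15000.0):
    w = 2 * np.pi * f_hz
    H = (1 + 1j * w / w1) / (1 + 1j * w / w2)   # first-order analog high shelf
    phase_deg[f_hz] = float(np.degrees(np.angle(H)))
print({k: round(v, 2) for k, v in phase_deg.items()})
```

Even this gentle 1 dB lift drags a few degrees of phase shift along with it at 15 kHz, and a fraction of a degree well below the boosted region: for a minimum-phase filter, you cannot change amplitude without changing phase.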

Interweaved wrote:Human hearing goes up to about 20 kHz.... but you need to double that to eliminate something called aliasing, which is something like a "rounding error" that causes mucho distortion and strange frequencies.

However, I regularly argue that 80 kHz or higher is necessary to get a good representation of the frequencies around 20 kHz.

If you imagine a sine wave at 20 kHz... and then you sample that twice per period (which is a 40 kHz sampling rate), you don't happen to get a great representation of the wave... For example, if you're unlucky and you happen to sample the sine wave at the two zeros each period, you'll get nothing.

in fact, you're guaranteed to get a "lower volume" reproduction of the wave. Thus, I suggest at least 4 samples per period of the highest frequency you're interested in reproducing accurately.

192 kHz means you're getting 8 samples per period of a 24 kHz wave... that ought to reproduce those high frequencies much better.

Anyhow - that'd be the difference! Much music is "low end heavy" anyway.

That's a nice, thought-provoking explanation.

I keep thinking about the maths and mechanics of how the highs (and lows) are dealt with, because this is where the audio begins to suffer in standard home recordings.

I believe software and components hold the key to how the sound is dealt with. I mean, it's easy for a soundcard to claim audio clarity, but add a dodgy plugin (which you don't know about) and it will mess your audio signal up. It's a minefield, but I love digital.

This is just as relevant to hardware and analog processing. At the end of the day, you are putting your signal into a "black box": a closed system that performs some undefined process on your signal, whose design choices you weren't part of, and then you get the output.

Without knowledge of what's happening inside the box, you have no idea what's going on, apart from what you can deduce from the output signal.

It doesn't matter whether the box is a software plugin designed by a 12-year-old with a poor understanding of audio signal processing, a plugin written by some DSP maths genius, an entry-level hardware box with cheap, poorly designed electronics, or a boutique piece of expensive analog gear.

At the end of the day, you have to understand and be familiar with your gear, educate yourself, and make informed choices about how to process, why, and what to use. That will give you better results than just inserting some random magical black box on a signal and hoping it's going to make your song sound better, somehow...