To be honest I'm not sure which side of the debate I fall on. For a long time I thought 44.1/16 was plenty; in recent years I'm not so sure.

Not because I think we can hear sounds above 22 kHz (I certainly can't; my hearing only goes to 17.5 kHz, and has done since my 20s), but because of imperfections in practical (read: low-cost, commonly available) digital-to-analog chips and reconstruction filter designs/algorithms.

Not all DAC chips are created equal, especially in the cheap consumer hardware that surrounds us everywhere, and if your useful signal range goes quite close to the Nyquist frequency, then the differences between chips and reconstruction techniques may be noticeable and start to diverge from a theoretically ideal response. (A perfect brick-wall filter is not realisable, even in the digital domain.)
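To put a rough number on the brick-wall problem, here's a pure-Python sketch (the filter length and cutoff are illustrative choices, not taken from any real DAC): a windowed-sinc lowpass needs on the order of a hundred taps just to stay flat at 20 kHz while being heavily attenuated by the 22.05 kHz Nyquist point of a 44.1 kHz system. Narrower transition bands cost proportionally more taps, which is why filters cut this fine diverge between implementations.

```python
import math

def windowed_sinc(num_taps, cutoff):
    """Hamming-windowed sinc lowpass taps; cutoff as a fraction of sample rate."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(h * w)
    return taps

def magnitude_at(taps, freq_frac):
    """|H(f)| of an FIR filter at freq_frac (fraction of sample rate)."""
    re = sum(t * math.cos(2 * math.pi * freq_frac * n) for n, t in enumerate(taps))
    im = sum(t * math.sin(2 * math.pi * freq_frac * n) for n, t in enumerate(taps))
    return math.hypot(re, im)

# 44.1 kHz: passband must reach 20 kHz, stopband by 22.05 kHz (0.5 Fs).
# 101 taps only just fits that ~2 kHz transition band.
taps = windowed_sinc(101, 0.4765)  # cutoff placed mid-transition
for f_khz in (20.0, 22.05, 24.0):
    mag = magnitude_at(taps, f_khz / 44.1)
    print(f"{f_khz:6.2f} kHz: {20 * math.log10(max(mag, 1e-12)):7.1f} dB")
```

At 96 kHz the same audio passband leaves a transition band more than ten times wider, so a far shorter (or sloppier) filter still stays well clear of the audio.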

Although it would seem a bit wasteful in terms of bits (one of the article author's main points), it does make some sense to me not to cut things too fine. In any other field it would be considered an engineering safety margin. 192 kHz does seem completely over the top to me, but if 96 kHz/24-bit is realisable, why not?

Is there a shortage of bits in the world, a shortage of hard drive space or internet bandwidth? In the early days of the internet, MP3s became popular because they could be downloaded over dialup and stored on the small hard drives of the day.

Today both hard drives and internet speeds have reached the point where downloading and storing losslessly compressed 44.1/16 audio is perfectly practical, yet very few sources exist. In a few more years, downloading and storing losslessly compressed 96 kHz/24-bit audio will be perfectly practical too.
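For a sense of scale, the raw numbers are easy to work out. A quick sketch (the ~50% lossless compression ratio is a rough rule-of-thumb assumption, not a measured figure — real FLAC ratios vary by material):

```python
def pcm_kbps(rate_hz, bits, channels=2):
    """Raw (uncompressed) stereo PCM bitrate in kbit/s."""
    return rate_hz * bits * channels / 1000

cd = pcm_kbps(44100, 16)     # CD audio
hires = pcm_kbps(96000, 24)  # "hi-res" audio

# Rough size per hour of music, assuming ~50% lossless compression
for name, kbps in (("44.1/16", cd), ("96/24", hires)):
    mb_per_hour = kbps * 1000 / 8 * 3600 / 1e6 * 0.5
    print(f"{name}: {kbps:.1f} kbit/s raw, ~{mb_per_hour:.0f} MB per hour compressed")
```

So hi-res is roughly 3.3x the data of CD audio — a lot in 1999, not much against today's drives and connections.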

If bits are cheap, why not ? Am I crazy ?

One claim in the article which I simply can't agree with is: "It's true enough that a properly encoded Ogg file (or MP3, or AAC file) will be indistinguishable from the original at a moderate bitrate."

(With "moderate bitrate" being weasel words to me...)

MP3/AAC encoding has improved in leaps and bounds over the last 10 years. It has gone from the point where I could pretty readily tell an MP3 version of something from the CD version regardless of bitrate (swishy-sounding treble being one obvious cue) to the point where I find well-encoded 256 kbit/s AAC difficult to distinguish from the original. Even so, I can still hear a difference on certain songs with critical listening on good speakers/headphones.

If I'm honest, a lot of my music is in 256 kbit/s AAC and it is "good enough" in most cases, but anything I've ripped from physical CDs has now been re-ripped in lossless formats, and I can notice the difference, even though it's still only 44.1/16. (Expectation bias? Who knows...) If I could download music in 44.1/16 lossless from iTunes instead of 256 kbit/s AAC, I would.

What are others' thoughts on the article, and on sample rates higher than 44.1/16 and/or lossy encoding? (Two separate issues really, kind of rolled together in the original article.)

Interesting article. He makes a good point about audible intermodulation from inaudible ultrasonic signals, although how real an issue this is in practice is unclear, as ultrasonic levels would be low even when present. I have a theory: some people prefer their sound to be accompanied by low-level intermodulation and interference. Evidence for this is the popularity of things like 'tube buffers' (often badly designed cathode followers, which add noise and distortion); cheap (in quality and design) but expensive (in price) interconnects, which can be poorly screened or even unscreened, letting in RFI; and NOS DACs with no attempt at reconstruction filtering, pushing lots of ultrasonics into the amplifier.
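The difference-tone mechanism he describes is easy to demonstrate numerically. A minimal sketch with a made-up, mildly nonlinear amplifier model (y = x + 0.1x² — the coefficient is arbitrary, purely for illustration): two inaudible ultrasonic tones at 23 and 25 kHz produce a measurable component at 2 kHz, right in the audible band.

```python
import math

def tone_level(signal, freq, rate):
    """Amplitude of one frequency component via correlation (single DFT bin)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

rate = 192000
t = [i / rate for i in range(19200)]  # 0.1 s, integer cycles of all tones used
f1, f2 = 23000.0, 25000.0             # both above the limit of hearing
clean = [0.5 * math.sin(2 * math.pi * f1 * x)
         + 0.5 * math.sin(2 * math.pi * f2 * x) for x in t]

# Hypothetical weakly nonlinear amplifier: second-order term generates f2 - f1
dirty = [x + 0.1 * x * x for x in clean]

for name, sig in (("clean", clean), ("nonlinear", dirty)):
    print(f"{name}: level at {f2 - f1:.0f} Hz = {tone_level(sig, f2 - f1, rate):.4f}")
```

The linear signal has essentially nothing at 2 kHz; after the nonlinearity, a distinct 2 kHz difference tone appears — which is exactly why ultrasonic content that the amplifier can't handle cleanly is a liability rather than "extra detail".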

Misunderstandings of the sampling theorem are widespread and persistent, as he says. Many people have little idea what effect the anti-aliasing and reconstruction filters have, and often get them mixed up with each other.

So is he right? The hard evidence says yes, but anecdotes to the contrary persist. How much of this is due to real issues, and how much is due to ignorance and postmodernism, I don't know. Let the war begin!!

PS many modern recordings are so poorly engineered that the issue is purely academic anyway!

For SPECIFIC records, I can hear the difference between 16/44 and 24/96 (or DSD, which is roughly at the 24/88 level). Past 96 kHz it's just a waste of bandwidth and degrades DAC performance.
But for other recordings, there is nothing to gain from going hi-res. If the original is garbage, garbage remains.

PS: Any DAC's unfiltered output will contain "garbage" past the 20 kHz mark, including DACs that are 16/44 only. Lots of DIYers eliminate the filters (because it is easy and takes no skill to demolish something) and claim that they hear "improvements". Personally I prefer to hear the music the way it was created, not with extra harmonics and aliasing.
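For illustration, the ultrasonic "garbage" from an unfiltered DAC is predictable: the images of a sampled tone fall at k·Fs ± f for every integer k. A quick sketch (the 7 kHz tone and the 100 kHz ceiling are arbitrary example values):

```python
def image_frequencies(f_signal, f_sample, max_hz):
    """Spectral images of a sampled tone that survive if the reconstruction
    filter is removed, up to max_hz. Images sit at k*Fs - f and k*Fs + f."""
    images = []
    k = 1
    while k * f_sample - f_signal <= max_hz:
        for f in (k * f_sample - f_signal, k * f_sample + f_signal):
            if f <= max_hz:
                images.append(f)
        k += 1
    return sorted(images)

# A 7 kHz tone from an unfiltered 44.1 kHz DAC: every one of these
# ultrasonic components lands in the downstream amplifier.
print(image_frequencies(7000, 44100, 100000))  # → [37100, 51100, 81200, 95200]
```

None of these images are "detail" from the recording — they are pure sampling artifacts that the reconstruction filter exists to remove.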

I have a couple of Denons that upsample to 192 kHz. The DAC chips upsample further still (the PCM1794 goes up to 8x). That is good for a CD recording because it relaxes the demands on the analog output filters, and the player's upsampling can be better than the one integrated in the DAC chips. 384 kHz is probably used so the signal is not oversampled again in the DAC chip (the AD1955, for example, allows bypassing of the internal OS filter).
But for pure reproduction, 24/96 is more than sufficient.
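The "relaxed analog filter" point is simple arithmetic. A rough sketch (taking the first image of a 20 kHz component as the point where the analog filter must have finished attenuating — a simplification, but it shows the trend):

```python
def analog_filter_transition(f_pass_hz, dac_rate_hz):
    """Transition band available to the analog output filter: from the top
    of the audio band to the first image left after digital oversampling."""
    first_image = dac_rate_hz - f_pass_hz
    return first_image - f_pass_hz

for factor in (1, 2, 4, 8):
    rate = 44100 * factor
    width = analog_filter_transition(20000, rate)
    print(f"{factor}x ({rate / 1000:g} kHz): analog filter gets "
          f"{width / 1000:.1f} kHz to roll off")
```

Without oversampling the analog filter has only a few kHz to go from passband to stopband (hence steep, phase-mangling designs); at 8x it has hundreds of kHz, so a gentle low-order filter suffices.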

Now, in studio work, where there is a lot of audio file manipulation in the digital domain... I can see the reason for higher sample rates. For the sake of complete compatibility with future delivery in PCM (at any consumer sample rate) or DSD, they use 24-bit/352.8 kHz.

What do you think of CD players, like Cambridge Audio's, that upsample to 24/384 kHz and higher?

Must be pure marketing, as I can't think of one engineering reason to do this. Bits not on the recording cannot be recovered, and no ADC gets close to true 24-bit performance. No DAC that I've seen maintains the same measured performance figures as the sampling rate goes up. I'm with Dan Lavry: slower is better. The optimum sample rate is somewhere between 48 and 96 kHz, I reckon.

__________________ The heart ... first dictates the conclusion, then commands the head to provide the reasoning that will defend it. Anthony de Mello