• audio·phile - a person with love for, affinity towards, or obsession with high-quality playback of sound and music.

/r/audiophile is a forum for discussion of the pursuit of quality audio reproduction of all forms, budgets, and sizes. Our primary goal is insightful discussion of equipment, sources, music, and audio concepts.

One final rule: All content in /r/audiophile should be related to audio quality in some form. Moderators reserve the right to delete posts that don't treat or discuss audio quality as a primary objective of a post.

Very true stuff. People tend to underestimate 16/44.1 because higher-resolution formats exist. And yet, in test after test, people can't discern 16/44.1 from 24/96. The people who can are not listening blind, or they are confused about the process: rather than taking an original 24/96 file, downsampling a copy to 16/44.1, and comparing those two, they compare two completely different masters of a track, one original at 24/96 and a totally different original at 16/44.1.

I don't know why, but a lot of people really, really don't like to hear that 24/96 doesn't sound better to human ears than 16/44.1. But there is room for science in audiophilia. I think everyone should stop by the Sound Science subforum of Head-Fi a little more often, or even the polarizing Hydrogenaudio site, too.

You don't have to use $700 cables and listen solely to 24/192 vinyl rips to enjoy your perfectly valid multi-thousand-dollar audio setup.

Consider me somewhere between objectivist and subjectivist, leaning closer to the objectivist tendencies. Some people on Hydrogenaudio would have you believe that a $30 DAC is the pinnacle of the art; I'd say some "objectivists" lean just ever so slightly too far into the "everything is placebo!" attitude. That's fine, though -- it balances the people on the other end of the see-saw, too.

Yeah, and us audiophiles really need to admit...there isn't much of a difference between V0 MP3s and FLACs. I mean, 90% of my library is FLAC, but I do it partially for novelty purposes (the thought of having an exact rip) and rip quality's sake (proper tagging, no mismatched files, etc).

I believe you can have the perfect headphone setup for $1000 (used LCD-2 Rev1s + HRT Streamer + O2 amp), and I honestly don't see much justification in paying more other than portability or general comfort.

EDIT: Just noticed this isn't /r/headphones. Oh well, we're all the same people, right?

I mostly agree with you. You get any source that measures flat with low distortion, looks nice, and has the combo of outputs/inputs that you desire. Then you get any amp that has enough power and the right output impedance to be within spec when driving your headphones. That's all you need. The headphones themselves are the biggest game-changers. I've heard tons of them, and they differ a lot. You get ones with huge bass and some with barely any bass (and you can't simply EQ a Sony MDR-V6 into a Stax SR-007), so you choose one that sounds the way you like and fits on your head well, too. It's the headphone that's important; the other stuff mostly just works or doesn't.

But it's more than that, and this is where the hobby aspect comes in. Some audiophiles are purely functional. They want great sound and a good listening experience, so they get comfortable good sounding stuff and they get a few shiny metal boxes to make them sing properly and that's that. So they get the setup you described.

Other audiophiles get a really nice amplifier, not necessarily because it is audibly better, but because it looks better, is fancier, etc. It's kind of like buying a nice watch even though your cell phone has a clock that works just fine. Some people buy a class A transformer-coupled valve amplifier, not because they think it necessarily sounds superior, but because the concept of the tech, the way it works, and the way it looks is going to be on their mind some of the time when they listen to the music, and they like that.

Like me, I just love using an optical cable someplace in my setup. I could just as easily use USB or coaxial, but the image in my mind of little pulses of light going down fiber, representing music, is just too cool not to include in my setup. That exact kind of feeling drives a lot of audiophiles.

You actually said it yourself when you brought up v0 vs flac -- "do it partially for novelty purposes." It's nice to sit down in your chair and flip on that ultra-classy amplifier and slide on the headphones and think, "that beautiful amp is awesome."

I also agree with you on V0 vs flac, btw. I've got a stupid-expensive setup and I cannot hear the difference in ABX comparator testing. Now of course, as long as I do it sighted and can see which is mp3 and which is flac, I'll totally swear the flac is crisper. ;)

That was fun to read. Thanks for reminding me that it's not always about facts and perfection. Sometimes we like things just because it feels right, and that's okay, as long as we're honest with ourselves about it.

This is also very true and is a nice feature of optical. Tell the wrong people about that, though, and they'll yell about jitter and sell you a power conditioner to "help" with the electrical noise issue.

I think that's what NwAvGuy is working on. When his ODAC is finished you'll be able to buy it, solder it to your O2 board, put it in a giant fancy brick-lined enclosure and tell people it cost as much as you want. The point is that for under $250 you'll have one of the best-performing DAC/amps at any price.

At that point, you can spend all your money on headphones you prefer, because between frequency response, detail, soundstage and comfort, headphone preference is extremely subjective, and the differences between two brands of $200 headphone are glaringly obvious.

Why do I use FLAC? Not (solely) for the listening quality, but because it can be transcoded into any other format without loss of quality -- e.g. that transcode will be as good as if it were encoded from the source. If I stream it to my phone at 128kb or 64kb it's still listenable; a VBR MP3 can't say that, nor can any other lossy format. Disk space is cheap, so the extra space needed for FLAC or ALAC isn't that big of a deal.

I use FLAC because while V0 MP3 may be high quality, it is not immune to lossy artifacts, particularly pre-echo.

Even with cheaper headphones I can successfully ABX 320k CBR against FLAC just by listening for pre-echo. There are obviously some tracks that don't show this artifact much or at all, but plenty of the music I listen to does show it, so I need lossless to avoid it.

Regardless of your opinions on the value of higher resolution, and basically regardless of the science behind it (we have tons of bandwidth, after all, so why not?), this is a fantastic article. It gets everything right and lays it all out.

My favorite part:

All signal content under the Nyquist frequency (half the sampling rate) is captured perfectly and completely by sampling; infinity is not required. The sampled signal contains all of the information in the original analog signal, and the analog signal can be reconstructed losslessly.
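A quick numerical sketch of that claim (the 10 kHz tone, 44.1 kHz rate, and window length here are arbitrary choices, not from the article): sample a sub-Nyquist sine, then reconstruct values *between* the samples with Whittaker-Shannon (sinc) interpolation and compare against the true waveform.

```python
import numpy as np

fs = 44100                                     # sampling rate (Hz)
f = 10000                                      # test tone, well under Nyquist (22050 Hz)
n = np.arange(4096)
samples = np.sin(2 * np.pi * f * n / fs)       # the sampled signal

# Reconstruct the waveform at half-sample instants via sinc interpolation,
# then compare against the true analog values at those instants.
t = np.arange(1000, 1100) + 0.5                # interior points, away from the edges
recon = np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])
truth = np.sin(2 * np.pi * f * t / fs)

err = np.max(np.abs(recon - truth))
print(err)                                     # tiny; shrinks further as the window grows
```

The residual error comes purely from truncating the (in principle infinite) sinc sum to a 4096-sample window -- the same idealization caveat raised elsewhere in this thread -- and it is already far below anything audible.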

What I'd rather have instead of 24/192 audio would be a revamp of DACs in audio players. That's a campaign I could get behind. While the article is correct about the theoretical maximum, it's not true that every DAC in every system does the job perfectly, and that's far more important than the rate to begin with.

Absolutely. NwAvGuy's recent ODAC design update lays out just how difficult it is to properly implement a DAC. If high-end companies (or low-end makers, for that matter) spent the time to test and revise their devices diligently, we'd all benefit.

What I'd rather have instead of 24/192 audio would be a revamp of DACs in audio players. That's a campaign I could get behind. While the article is correct about the theoretical maximum, it's not true that every DAC in every system does the job perfectly, and that's far more important than the rate to begin with.

Where do you get the idea that DACs in audio players aren't excellent? I'm sure there are some duds out there in super-cheap junk, but for the most part the available DACs (and ADCs) made for audio applications are absolutely incredible. Your random $10 moderately-hifi part has specs -- linearity/THD, SNR, etc. -- that would have been impossible with million-dollar supercooled unobtainium 30 years ago. Even the cheapest parts are limited by noise in the surrounding support electronics.

The inexpensive ADCs and DACs made for audio are finding their way into all sorts of non-audio applications because of their excellent specs and low prices. E.g. you see them in digital scales and servo control: though those applications need good linearity, they don't need >100dB SNR, but it makes the overall design easier. People doing weak-signal radio work are using them too.

Absolutely, the DAC chips can do great. But they're implemented like plastic toys on both sides. It's like having a fantastic powerful perfectly-specified water pump but attaching it with leaky plastic straws.

Do I really, truly know what I'm talking about here? No. I took one EE class in college and got a B minus. But it's pretty clear from my experience that the DAC and amp in my receiver sound different from the one in my iPod, and different still from my Android phone, and my Fubar II sounds better than all of them. Not mind-blowingly "omg it wasn't music before" different, but pretty clearly not the same -- and I'm sure it would show if you measured it. Shoot, they could all be using the same DAC chip for all I know or care, but there's obviously way more to it than that.

So what I really meant is "DAC and everything surrounding it," but surely the implementation of the DAC portion itself is a large factor in how we can enjoy digital audio better, especially at sample rates verging on our hearing limits and what is arguably a fairly delicate decoding problem to begin with.

Just sayin, it's more important to properly decode and amplify the audio we already have than increase the sample rate and bit depth needlessly.

The worst offenders when it comes to poorly implemented DACs tend to be the 'audiophile' manufacturers, rather than the giants churning out millions of units. The DAC in a Playstation or Samsung TV or indeed an iPod will generally be a very well designed piece of engineering, it's the manufacturers like Schiit and Nuforce that put out the really shoddy implementations. They get away with it because people think it's worth paying extra for some kind of luxury distortion (oh, but it's so warm), and don't believe an oscilloscope could possibly be better than their golden ears.

The sound isn't a function of the DAC itself; I think that was his point. Other stages (such as amplification, EQ, and filtering) account for any difference in sound far more than the digital-to-analogue converter (the integrated circuit itself, not the whole package).

I've used two iBasso products, a D-Zero and a D4. While I had no problems with the D-Zero, many users have experienced interference, static, and some other issues. As for the D4, when it's plugged into a power source or turned on, my headphones "pop." By this I literally mean they make a popping sound from the drivers. I'm no electrician, but I'm guessing there's some sort of power problem sending a lot of power through the headphone jack on power-on. I've never experienced that with any FiiO products.

I still use my D4 as a DAC, but I will be upgrading soon and will more than likely shy away from their products in the future.

A dac* is a dac is a dac for the most part, even the cheap Chinese ones; there is little to no real-world performance difference. The issue with different systems sounding different is almost always an issue with amplification. iPhones and Androids (or old-fashioned media players that are not phones) just can't supply enough current to drive anything other than in-ear monitors.

*Edit: DAC ICs -- the actual chips, not the complete products that are really DAC/amp/EQ I/O devices.

As nullc says, you can go on Digikey or another electronics component website and grab a DAC for $10 that will handily beat a top-range audiophile-branded super DAC from 10 years ago.

Again, to support what Phys1ics says: big companies buying millions of components know what they are doing and will put in solid, high-quality DACs that are just as good as any you'll find in audiophile equipment. But in the end they are devices for general use, often battery powered -- they're not designed to power some 600-ohm, current-hungry drivers.

I'd wager any day of the week that an iPhone and any audiophile DAC you care to pick would draw in a properly done double-blind test (both amplified to the same output level, using the same headphones).

I encourage you to read this blog post. While it's true that it doesn't cost much to get a great DAC chip, implementation is everything, and many engineers just jam them in any which way as long as sound comes out the hole.

Callinet6 seems to be referring to a DAC as the whole device, not just the chip itself, which is probably fine. The implementation of those good DACs, as well as the high-impedance headphone jack on many consumer devices (and many higher-end devices as well) still leaves something to be desired.

Yes, but that's nothing to do with the actual digital-to-analogue conversion itself, so saying an iPhone (for the sake of example) has a crappy DAC is wrong, and misleading for many people. The DAC itself isn't the issue; it's the current supply and amplification capabilities.
Line output on an iPhone is perfectly decent.

That causes some people to buy a more expensive bit of kit than they need, when they actually only want a headphone amplifier (granted, most of the amplifiers worth getting are a DAC and amp combined).
That said, I've heard the FiiOs are fine for the vast majority of people whose phone/player of choice just can't drive their preferred headphones.

I never intended to imply that mass-market products can't be improved to be as good as decent, well-engineered audio gear -- only that they are not designed to be. The problem isn't one of neglect or laziness on the part of the designers, but simple engineering limitations, be that cost, size, battery consumption, and so on. It's unfair to expect a general-purpose media device to be able to power high-impedance audiophile hardware.

I agree with cabinet6 on his overall point - I just happen to have a pet peeve about DACs, and the myths that get perpetuated around them (and the resultant snake oil products that take advantage of them).

If you read my comment history, you'll find that I'm often the first person to recommend great headphones first, an amp if needed, and a DAC only if you want the additional complexity or it's necessary. In fact, I'm a big fan of the UCA202 (or 222) because of its incredibly low price and how well it tests vs. much more expensive products. My setup is a UCA222 taped to a JDS Labs O2 and it has an amazing amount of power to spare when pushing my 600-ohm Beyerdynamic DT990s.

I think we agree on most points, but I'm taking it one step further because when you say "a DAC is a DAC" it may inspire people to buy, say, the NuForce uDac2 which is actually a thoroughly mediocre, overpriced product. The DAC chip itself might be fine, but the execution doesn't compete with the aforementioned $30 Behringer at 4x the price.

I usually tell people that every MP3 player and sound card already has a DAC, and most external DACs haven't proven themselves to be any better than the one your device has already. The higher the price, the more skeptical one should be.

I usually recommend the O2 and nothing else as an upgrade for driving high-impedance phones.

Oh, that I agree with you on. It might be badly configured, or have bad code on an MCU controlling the rest of the system, or various other issues or bad component choices -- things can and do often go wrong, especially with overpriced but shiny-looking boutique crap.

I think this is just a case of us agreeing, but approaching it from different directions, you from the outside in (the device overall to the DAC) me from the inside out.

I should have been clearer when I said a dac is a dac: I meant the integrated circuit itself, the $10 chip DIY audio buffs can buy if they want to build their own dac/amp, which matches the specs of the best chip in any stupidly expensive audiophile dac -- products that often pair audiophile-fluff, pseudo-scientific descriptions with technically inferior chips.

I also use what is basically an O2 (though I modified the design to use a rechargeable li-ion battery rather than a PP3).
For my desktop I have a M-audio fasttrack for output, for my phone/laptop I don't bother, and just use the line-out.

I think the iPhone actually has a pretty highly regarded DAC in it, if I recall correctly. The iPod Classic is one that many people complain about, the Cirrus chip, but many users have also claimed it's solid.

I for one enjoy the sound out of my iPod Classic with the Cirrus chip.

The theorem assumes an idealization of any real-world situation, as it only applies to signals that are sampled for infinite time; any time-limited x(t) cannot be perfectly bandlimited. Perfect reconstruction is mathematically possible for the idealized model but only an approximation for real-world signals and sampling techniques, albeit in practice often a very good one.

I have to ask: can anyone explain why there is such a great difference between, say, normal DTS and DTS-MA, or DD and Dolby TrueHD? Both are 24-bit, have a noticeably greater dynamic range, and sound better than their predecessors. I just can't figure out why this wouldn't have a similar effect when jumping from 16-bit to 24-bit in stereo, assuming the material was recorded at 24-bit and you had the capable equipment. Really, dynamic range is so important when listening in stereo, and if that can be improved, I would think the range from loud to quiet, the space between the instruments, and the separation would all be greater, making for both a better and more accurate stereo image.

It is also worth mentioning that increasing the bit depth of the audio representation from 16 to 24 bits does not increase the perceptible resolution or 'fineness' of the audio. It only increases the dynamic range, the range between the softest possible and the loudest possible sound, by lowering the noise floor.

The point he's making is that in recording and mastering, 24-bit audio can improve the amount of leeway you have when manipulating the individual tracks. If one track is recorded at a much lower level than is needed in the final mix, it gives you more dynamic range to work with when you boost its volume so it shows up in the mix. Basically, it gives you a lower noise floor during recording (the hiss/hum you hear when you turn it up VERY loud) so that when you boost the audio, the noise is still very low.

If you know anything about digital photography, it's like scanning an image at 40 megapixels with 48-bit color. Even if your final image is going to be much smaller and 24-bit, it gives you tons of extra data so that when you start stretching/compressing the tonal range and cropping the image down, you won't wind up with a visibly low-quality image.

It's about collecting far more data than is necessary, so that you have more options in editing and production.

Sure. But when you increase the bit depth of a scan, you increase the number of colors. 'Does not increase the perceptible resolution or 'fineness' of the audio' seems to imply you do not get more finely spaced volume levels. Is that what he is saying? Because that seems counterintuitive. Or is he saying it is like not getting smaller pixels, to continue the analogy?

Okay, well the analogy isn't complete, because whereas speakers are capable of reproducing a wider range of tones than our ears are capable of hearing, typical monitors are nowhere near being able to reproduce the dynamic range of what our eyes can see. I'm ignoring extremely expensive HDR monitors, which are irrelevant to my analogy.

Forget pixels, because that part of the analogy is unnecessary. I'm going to compare audio (the limits of what our ears can hear) vs. colors (the limits of what our monitors can display).

My understanding is that 24-bit recording is something like shooting in RAW. You're capturing more colors than you can physically display onscreen so that you have more information to work with. Displayed as-shot, a RAW and a high-quality JPG are indistinguishable from one another, but if you need to push the exposure or pull up the shadows, you can go MUCH further in RAW than you can in JPG before encountering artifacts and other processing-related garbage. More usable detail and fine shades hidden in what appears to be black shadow is comparable to a very low noise floor.

Once your RAW image is pushed or manipulated the way it's needed, you can convert it to a high-quality JPG without perceptibly altering the final image.

OK. My impression was that 24-bit gives more finely spaced volume levels, but the author's statement "does not increase the perceptible resolution or 'fineness' of the audio" seems to say I am wrong. I'm not sure if this is his intention.

The pictures of waveforms with stepped bars seem to suggest this, but one of his assertions is that those pictures are misleading.

This is what I was on about earlier: dynamic range and noise floor are very important for stereo playback, and how could one not notice this? Like you mentioned earlier, when I first got the right equipment to play back lossless audio, I had a few friends over and we did some A/B-ing of a few Blu-rays, jumping between DTS-MA and DD, and everyone noticed a difference. One scene in particular, in the Hulk movie just before he gets trapped at the school and in a big fight (which has amazing audio, btw, and is great for reference), has rain hitting the porch roof, and with lossless playback it almost felt like it was raining in the room. That just wasn't conveyed in normal DD; you didn't catch all the micro-dynamics, it just sounded like rain. That's my problem with this article and that highlighted paragraph: I feel he glossed right over what I consider a huge, worthwhile advantage. It's basically the reason you would want 24-bit tracks, for this alone. I'm not sure how much would be noticed on headphones, but in a dedicated stereo room this would be kind of a big deal.

Ah, but you are comparing a lossless and a lossy recording when comparing DDHD to DD. Also, there will be differences in the mastering processes and possibly volume levels.

Looking into it, there are 65k levels in a 16-bit recording. These span 96dB, so on average that's 96/65k = 0.0015dB of spacing. There is therefore very little benefit to 24-bit when it comes to playback -- certainly less benefit than could be gained if we simply used all 16 bits that 16/44.1 offers (rather than brickwalling masters to make them loud).
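That back-of-envelope math can be sanity-checked by measurement; a sketch (the 997 Hz tone and one-second length are arbitrary choices) that quantizes the same sine to 16 and 24 bits and measures the resulting noise floor:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 997 * t)          # a -6 dBFS test tone

def quantize(sig, bits):
    """Round to the nearest of 2**bits evenly spaced levels spanning [-1, 1)."""
    scale = 2.0 ** (bits - 1)
    return np.round(sig * scale) / scale

floors = {}
for bits in (16, 24):
    err = quantize(x, bits) - x                # the quantization error signal
    floors[bits] = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(bits, round(floors[bits], 1))        # noise floor in dB re full scale
```

The two floors come out roughly 48 dB apart -- 6 dB per extra bit -- and the 16-bit floor already sits around -100 dBFS, below the noise of any real playback chain, which supports the point about playback above.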

I have a basic understanding of the steps per octave, and agree with you. One thing I don't understand: this is at 96dB, which is the general limit for CDs/16-bit, but isn't 24-bit more like 120dB? Would the ratio of the octaves be the same but spread over a greater volume range, therefore having potentially more space and allowing a greater presentation? Will our ears even notice? Also, I know SACD is a different tech altogether and is 1-bit, but isn't there a discernible difference between SACD and CD? Why would this not be true with 24 vs 16? I'm really just trying to figure this out and would love some real-world trials to see if you can actually notice the difference. This article is good but lacks physical reference, and while I very much agree with enhancing the technology we have now, I'm young enough to like progression too ;).

My theory on how it will pan out is similar to when Blu-ray entered the market and HD DVD fell by the wayside: Toshiba came out with very good upscaling DVD players, claiming they were almost as good as full HD. That happened too little, too late, and no one really cared, because adoption was already taking place and people were either content with their regular old DVD players or their new Blu-ray player had basic upscaling built in anyway. I have a feeling this medium will succeed. I by no means think it will supersede 16/44 playback, but even with all the disproof, people in audio tend to like to cover their bases just in case -- better safe than sorry. Obviously this is where the snake oil happens and companies take advantage all the time, but benefits do happen. For instance, in the future, what if this extra bit rate allows for virtualized 3D audio? Maybe we won't need these multichannel systems; maybe with advanced/active room correction everything could come from one source... Hmmmm :).

I'm not an analogue signalling expert so I'm not even sure how feasible your experiment is. I do know that all analogue components introduce distortion (even long cables) so the effect might be hard to disentangle.

For me it's a twofold benefit. One, more headroom, mainly useful for classical music -- but who wouldn't like more of it? Second, switching to this format has the possibility of ending the loudness war. Those two reasons mainly have to do with mastering, but if we can show that making good-sounding music pays, it will be worth any potential perceived or imagined loss of fidelity.

The loudness war and higher-resolution distribution are orthogonal. Nothing about 44.1k/16bit distribution necessitates destroying the audio with dynamic-range-crushing loudness maximization.

People crush the dynamic range because it sounds good on initial impression, it's attention-getting, and it makes things play at a consistent volume level.

It's true that many of the HD recordings out there have better mastering -- but this is because they're being marketed to a different audience. It has nothing to do with the format.

The article points out that the extension to a 96kHz bandpass can be counterproductive from a quality standpoint (unless the speakers and amplifiers have well-controlled tolerance to ultrasonics) -- and it takes up space which could usefully be used to offer surround, remixable separated tracks, alternative versions, etc.
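The ultrasonic problem the article describes is intermodulation; a sketch with made-up numbers (two ultrasonic tones, and a hypothetical 5% second-order nonlinearity standing in for an imperfect amp or tweeter):

```python
import numpy as np

fs = 192000
t = np.arange(fs) / fs
# Two ultrasonic tones -- inaudible on their own.
x = 0.5 * np.sin(2 * np.pi * 30000 * t) + 0.5 * np.sin(2 * np.pi * 33000 * t)

# A mildly nonlinear playback chain (the 5% second-order term is invented).
y = x + 0.05 * x ** 2

# The nonlinearity creates a difference tone at 33k - 30k = 3 kHz,
# squarely in the audible band.
spectrum = np.abs(np.fft.rfft(y)) / len(t)
level_db = 20 * np.log10(2 * spectrum[3000])
print(round(level_db, 1))          # the 3 kHz product, in dB relative to full scale
```

Strip the ultrasonics before playback (i.e. distribute at 44.1 or 48 kHz) and this audible product never appears in the first place.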

It's also worth noting that a lot of the music that has this "crushed dynamic range" is frankly in genres where it's moot because it's not there to begin with. In electronica/pop/hip-hop etc, most everything is synthesized. Because of how incredibly complex it is to digitally make music that even begins to fill the DR of 16/44.1, it just doesn't get "filled."

Most of the music I listen to isn't crushed in any way; it's mixed to (presumably) normalize volume to around -10dB, and then gain is applied so the highest peaks sit at -0.2dB, which is pretty much the ceiling before you start getting distortion.

I don't do it myself (I'm not an audio engineer/tech/producer) I just notice that most of the bands I listen to have their stuff peaking at -0.2. Same goes for movies. I dunno if it's an industry standard or just common practice, but I see plenty of it.

Probably the result of true-peak normalization; it's not a standard. (The closest thing to a standard in that space would be something like EBU R128, which would leave signals much quieter than most of that stuff if people followed it.)

When a digital signal is reconstructed into analog form there may be peaks between the samples which are substantially higher than the samples. These peaks will cause clipping either in the DAC's digital reconstruction filters or in its analog output. You can prevent this by computing the normalization against a drastically up-sampled input (EBU recommends 192khz as a minimum)... or just by inserting some headroom manually.
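A sketch of that idea (not an implementation of the EBU spec): build a signal whose samples all sit about 3 dB below its real peaks, then estimate the true peak by 4x band-limited oversampling via FFT zero-padding. The fs/4 tone with a 45-degree phase offset is a classic worst-ish case, chosen here for illustration.

```python
import numpy as np

N = 4096
n = np.arange(N)
# An fs/4 sine whose phase puts every sample at ~0.707 of the true peak.
x = np.sin(2 * np.pi * n / 4 + np.pi / 4)
x /= np.max(np.abs(x))             # normalize by sample peak: reads "0 dBFS" on a meter

# 4x oversampling by zero-padding the spectrum (band-limited interpolation).
X = np.fft.rfft(x)
Xpad = np.concatenate([X, np.zeros(3 * N // 2)])
y = np.fft.irfft(Xpad) * 4         # rescale for the longer transform

true_peak_db = 20 * np.log10(np.max(np.abs(y)))
print(round(true_peak_db, 2))      # ~ +3.01 dB above the sample peak
```

A sample-peak meter sees this signal as exactly full scale, while the reconstructed waveform actually overshoots by about 3 dB -- the clipping risk described above.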

I was suggesting that the new format is focused on a change in the audience and delivery method. Not many people are listening to radio anymore, so why try to blast people's ears off with compressed music to get attention?

If popular Headphone makers that shall go unnamed are pushing for a new standard I hold out hope that the format will be used correctly no matter the genre.

Erm. If you're thinking old classical, microphones weren't sensitive to tones in the upper 2/3 or so of 192kHz, so there is no "headroom" there. Same goes for 24bits, unless the original quality is there you're imagining a difference or enjoying the noise/crackle introduced by upsampling.

Second is that switching to this format has the possibility for ending the loudness war.

No, you can push loudness just as far in 24/192 as you can in 16/44.1. Loudness is measured in decibels, which neither sample rate nor bit depth is.
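A sketch of that point with made-up signals: hard-clip a "quiet verse, loud chorus" waveform the loudness-war way, then store the result at 16 or 24 bits -- the measured loudness is identical either way.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
# A "dynamic" signal: a quiet verse followed by a loud chorus (made-up levels).
x = np.concatenate([0.1 * np.sin(2 * np.pi * 440 * t),
                    0.9 * np.sin(2 * np.pi * 440 * t)])

def rms_db(sig):
    return 20 * np.log10(np.sqrt(np.mean(sig ** 2)))

# Loudness-war mastering: crank the gain, hard-clip the peaks.
crushed = np.clip(4.0 * x, -1.0, 1.0)

loud = {}
for bits in (16, 24):
    q = 2.0 ** (bits - 1)
    loud[bits] = rms_db(np.round(crushed * q) / q)   # quantize, then measure
    print(bits, round(loud[bits], 1))
# Same crushed loudness at either bit depth; the damage happened upstream of both.
```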

The interesting thing I find about these types of articles is that many sections are written really just to push an agenda.

"All signal content under the Nyquist frequency (half the sampling rate) is captured perfectly and completely by sampling; infinity is not required"

That is an incorrect statement. Of course it's not perfect or complete, otherwise there would be no room to make a 48k or 96k sampling of the data. What he should write is that it is enough for you to not notice. But then it would be open to debate.

Also, with regard to 16-bit and 24-bit: headroom is really only half the issue. It's easy to argue that we don't need more headroom than we already have -- yes, it is loud enough. But the real issue is that 24-bit pushes the 16-bit dynamic audio spectrum from 65k to 16 million steps. Sort of like way back when 16-bit images were supposed to be good enough with 65k colors: when 24-bit images with 16 million colors came out, lots of people could see the difference even though you weren't supposed to.

Personally I think 24/48 audio is a good thing, but higher sampling is still suspect IMO.

"All signal content under the Nyquist frequency (half the sampling rate) is captured perfectly and completely by sampling; infinity is not required."

Your rebuttal to this statement is as follows:

That is an incorrect statement. Of course it's not perfect or complete, otherwise there would be no room to make a 48k or 96k sampling of the data. What he should write is that it is enough for you to not notice. But then it would be open to debate.

You are misinterpreting the quotation you are arguing. All signal content under the Nyquist frequency (which is 22.05khz for a 44.1khz sampling rate) is in fact captured perfectly by sampling. This is a separate fact from your statement, which asserts that if this were true there would be no reason to sample at 48 or 96khz. As the article correctly states, there is no reason to capture audio signal that may exist above approximately 20khz, because other than intermodulation distortion within our audible bands (well under 20khz), we truly don't hear it. You could sample at several MHz if you wanted to, but there's no reason to.

As for bit depth, you might have glossed over his dynamic range discussion. 65k steps spanning a dynamic range of 96dB (or the 120dB he points out is practically possible) results in individual steps of a small fraction of a dB. We are unable to discern amplitude differences that small in pure tones, much less in a specific band within a broadband signal.
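The step-size arithmetic, worked out at the top of the scale, where adjacent 16-bit levels are closest together in dB:

```python
import math

top = 2 ** 15 - 1                  # largest positive 16-bit sample value (32767)
step_db = 20 * math.log10(top / (top - 1))
print(round(step_db, 5))           # ~ 0.00027 dB between the top two levels
```

The spacing in dB grows toward the bottom of the scale, but down there the signal is heading into the dithered noise floor anyway.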

I do assert that 24 bit sampling is important for audio recording and production, as I agree with the article's logic here.

This article might be technically correct, but it is a bit doom and gloom. I'm looking forward to 24/192 for two reasons:

I'm hoping 24 bit tracks will be mastered like vinyl, with more dynamic range. Of course this could be done with 16 bit releases, but it isn't. Having regular and audiophile formats will hopefully lead to regular and audiophile mastering.

Some day we will have hifi equipment that sounds better with 24/192. I get that the difference is subtle, and modern amps and DACs are not designed for 16/44, but this will change quickly once the new formats become popular!

I'm hoping 24 bit tracks will be mastered like vinyl, with more dynamic range

A friendly FYI:

"The dynamic range of vinyl, when evaluated as the ratio of a peak sinusoidal amplitude to the peak noise density at that sine wave frequency, is somewhere around 80 dB. Under theoretically ideal conditions, this could perhaps improve to 120 dB. The dynamic range of CDs, when evaluated on a frequency-dependent basis and performed with proper dithering and oversampling, is somewhere around 150 dB. Under no legitimate circumstances will the dynamic range of vinyl ever exceed the dynamic range of CD, under any frequency, given the wide performance gap and the physical limitations of vinyl playback. -hydrogenaudio.org

I know that a CD can have more dynamic range, but it often doesn't. As others have said, this is due to it being mastered for the gym, the car, etc. My hope is that 24/192 will be mastered for hi-fis, as vinyl is at the moment.

24/192, as mentioned, is a bad thing though. Now 24/48 I could get behind as an "HD Audio" format for audiophiles, with proper dynamic-range-preserving mastering. 48 kHz doesn't really offer anything in audio quality, but it allows a noticeable branding difference without the massive downsides of 192 kHz.

Either way, 44.1 is more than enough; the problem is, and has always been, the loudness war.

Confirmation bias will play a big role in marketing, selling and profiting off of HD audio. As you have said.

Confirmation bias is hard to conquer. Just last night I was flipping between 16/44 and 24/96 versions of some Miles Davis. My conclusion was that the 24/96 did sound better, if you listened for it: it sounded sharper to me, while the 16/44 version sounded blurrier.

But can I honestly say all the double-blind, quality controlled tests are wrong because of my little experiment, or was it a case of clear confirmation bias? I think the latter.
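For anyone who wants to move past that trap, the usual tool is an ABX test: each trial plays a hidden X that is randomly A or B, and you score how often the guess matches. A toy Python sketch of the idea (the listener callback here is a hypothetical stand-in for actual blind playback):

```python
import random

# Minimal ABX harness sketch: each trial secretly assigns X to be A or B,
# asks the listener which one it was, and tallies correct identifications.
def run_abx(listener_guess, trials=16):
    correct = 0
    for _ in range(trials):
        x_is_a = random.random() < 0.5       # hidden, random assignment
        guess_is_a = listener_guess()        # True means "X sounded like A"
        correct += (guess_is_a == x_is_a)
    return correct

# A listener who genuinely can't tell the versions apart is just guessing,
# so over many trials the score converges toward 50%.
random.seed(0)
score = run_abx(lambda: random.random() < 0.5, trials=1000)
print(score / 1000)  # close to 0.5
```

In a real test you'd also check the score against a binomial significance threshold (e.g. 12+ correct out of 16) before claiming you can hear a difference.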

Your first point is valid, but I don't understand your second point at all. The article clearly explains that there is no perceptible gain, not even theoretically, from 192 kHz audio, and that if anything it will sound worse than 44.1 kHz due to intermodulation distortion. It also lays out quite clearly that while 24 bit won't sound worse, it won't sound better either, and not due to limitations in audio equipment, but due to limitations in human hearing.

Then how could better hifi equipment possibly make 24/192 sound better in the future?

Ultrasonic distortion feels to me like a systems processing problem. At 44 or 48 kHz it's a very minor problem that hasn't needed solving, whereas at 192 kHz it will be a much bigger problem. I'm confident that future DACs will have the ability to digitally remove ultrasonic distortion.
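Digitally stripping ultrasonic content is indeed routine DSP, not a future capability. Here's a rough numpy sketch (the tone frequencies, tap count, and window choice are my own arbitrary picks) of a windowed-sinc low-pass at 20 kHz knocking a 40 kHz component out of a 192 kHz signal:

```python
import numpy as np

fs = 192_000                          # hypothetical high-rate source
t = np.arange(int(0.1 * fs)) / fs     # 0.1 s of signal
audible = np.sin(2 * np.pi * 1_000 * t)             # 1 kHz tone to keep
ultrasonic = 0.5 * np.sin(2 * np.pi * 40_000 * t)   # content we can't hear
x = audible + ultrasonic

# Windowed-sinc FIR low-pass filter with a 20 kHz cutoff.
cutoff = 20_000 / fs                  # cutoff normalized to the sample rate
taps = 1001
n = np.arange(taps) - taps // 2
h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(taps)
h /= h.sum()                          # normalize to unity gain at DC
y = np.convolve(x, h, mode="same")

# Estimate how much of each tone survives by correlating against it.
keep = 2 * np.mean(y * np.sin(2 * np.pi * 1_000 * t))   # close to 1.0
cut = 2 * np.mean(y * np.sin(2 * np.pi * 40_000 * t))   # close to 0.0
print(keep, cut)
```

The 1 kHz tone passes essentially untouched while the 40 kHz tone is attenuated by orders of magnitude, which is why the article treats ultrasonic content as a liability rather than something worth preserving.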

I'd make the same argument for upsampling. Higher sample rates will allow DACs to do more sophisticated processing than current upsampling, and produce a higher quality output. I get that most of the extra data will be junk, but I still believe that we will figure out how to get a higher fidelity sound from higher frequency sources.

I wouldn't expect an instant leap in quality, but 24/192 will open up new possibilities to music technologists, and that should ultimately lead to better sound for us.

I still can't understand your argument. Sure, it's probably feasible to build a setup that can handle the intermodulation distortion from higher sample rates, but that will still just get you back to something that is indistinguishable from 44.1 kHz.

I also don't see what upsampling has to do with anything. You can upsample as much as you like in the DAC/ADC, you still don't need higher-resolution source material.

The argument the article is making is basically that 16/44 provides perfect, exact reproduction of the original analog signal, so how could you possibly improve on this?

No it won't - higher bit depth and sampling frequency won't help at all. There is more than enough science (which the linked article summarizes nicely) to leave little doubt that this is the case.

It's not a processing problem, or a distortion one - it's a human biology one: our ears are just not good enough.
So short of bionic ears, or an evolutionary leap in the fidelity of human hearing, it's really a waste.

Just like the article's light analogy - there is no point recording, or playing back, frequencies beyond human range.

The only one you could argue for is really low-frequency bass, but unless your playback system is made up of four or more 10-inch drivers and about 400 watts of amplification (such as those found in bass amp cabinets), you're not going to have enough power for that "feel" anyway.

That and even bass drivers are not designed to go that low, so you'd probably blow them.

The dynamic range of vinyl records often seems higher than that of the CD version because of the limitations of vinyl, not any technical superiority. It's just not possible to get the amount of compression and overwhelming bass common in modern digital releases onto a vinyl record, so instead the master tends to seem more restrained, largely immune from the effects of the loudness war. You also have to remember that they're mastered for different audiences - a digital release will go on people's iPods for playing in the gym or the car, where a huge dynamic range is a disadvantage if anything. Vinyl will generally only get played on a decent quality system, in a reasonably quiet environment.

It's worth noting that while you pretty much can't hear anything over 20 kHz, it can still affect the sound below that. Same goes for stuff at, say, <10 Hz - although with tones that deep, you feel them more than you hear them.

Serious question, would the "affected" sound below 20 kHz not have been affected during the recording, and thus be recorded as such, that is, affected by the presence of the above-20 kHz sounds? If the sounds you cannot hear cause an effect in the sounds you can hear, then they also caused that effect in the sounds as they were recorded. Weren't those effects, if they are indeed audible in the 20 Hz-20 kHz range, recorded and therefore present on the CD you listen to? If so, that's fine: the audible effects of their presence were captured when they happened, even if the inaudible causes of those effects were not.

Upvote for correct use of "effect" and "affect." But to anticipate Scythels's argument, maybe there would be new (though not necessarily desirable) effects when instruments recorded in isolation are reproduced together during playback? Just a thought.

Serious question, would the "affected" sound below 20 kHz not have been affected during the recording, and thus be recorded as such, that is, affected by the presence of the above-20 kHz sounds?

This depends. If the microphones used are sensitive past 20 kHz, then no - the sounds above 20 kHz would remain in their "original state" until playback. If the microphones are not sensitive to the extreme upper range, then the effects would already be present at the time of recording.