In my opinion, the problem we need to focus on is not 24/192 distribution per se, but rather the failure to use the best engineering practices and features available for 16/44.1 content. Tests that compare 24/192 content with directly derived 16/44.1 partly miss the point, because the 16/44.1 content found elsewhere is not always derived that way; sometimes that is deliberate, so the cheaper version sounds worse. It's no big secret that DVD-A and SACD are usually made from different masters that could technically be downsampled to 16/44.1 without loss, but aren't, because then there'd be no point in buying the more expensive DVD-A or SACD.

As long as the CD is the lowest common denominator with the worst mastering and channel content, while vinyl continues to get better headroom and SACD more channels, the initiative is rather pointless. There is still reason to buy these alternatives, because they may sound better for reasons completely unrelated to the physical limitations (or quirks) of the media. (Well, number of channels aside.)

But to be brutally honest, I don't believe in its success either way, because it's basically asking companies to make less money than they do, and I don't expect them to be interested.

What about recent CD versions that obviously don't sound as good as the HD release issued around the same time, e.g. on HDtracks.com? I think people who discover such a problem should send back their CD as broken! Maybe this will help in a way?

All it takes is one person coming into the discussion with some weird numbers that no one challenges, and this article is declared wrong.

...or bumping some thread based on a presentation that couldn't be posted here because it doesn't comply with our rules, by providing an "update" that is over two years old and doesn't make it any more compliant.

Just because the haystack is large does not mean there's a needle in there.

For the sake of sanity, for the spared minutes of the lives of everyone reading this thread, and for the love of humanity, do not read the discussion threads corresponding to Monty's article on the SACD sites linked. Please.

Please.

...and then of course, I had to. You ***!

Cheers, David.

As bad as it was, it wasn't as bad as much of the discussion on Slashdot. The ACs couldn't stop confusing 192kHz and 192kbps. That was truly disheartening.

It's one thing for someone who doesn't care to not understand. It's quite another to care deeply, have no functional grasp, and be uninterested in acquiring one.

Doesn't make me facepalm that much. Compared to those who buy the TV-preacher arguments for the top 96 kHz octave, mistaking 192 kHz for 192 kbps and thinking "so they want to sell 192 quality?" seems to me more like ignorance than stupidity.

Heh. And that's even though I screwed up the URLs in that other post. These should work better:

You've got to feel sorry for the guy or girl (Arnaldo) who puts their fingers in their ears and goes "la la la" at every rational post, repeating the word "debunked" in response to the study to convince themselves that this is the case. Then they admit they own over 800 SACDs. With that level of emotional and financial investment, they have to disbelieve M+M.

Though maybe there's no need to feel sorry for them, because M+M clearly say that most SACDs do sound great because they're often used to showcase the best recordings+mastering out there. CDs could sound just as good, but sometimes the same content+mastering isn't available on CD.

If a presidential candidate were to declare that the earth is flat, you would be sure to see a news analysis under the headline "Shape of the Planet: Both Sides Have a Point." After all, the earth isn't perfectly spherical. – Paul Krugman

We're talking about a forum that's called "sa-cd.net". A meaningful debate can only take place on neutral ground. And while HA relies on tangible evidence, it is not so neutral either…

so the debate on their site is clearly meaningless.

But on CA I got confused. Julf plots a graph that happens to have enough amplitude resolution (from, I assume, a sufficient number of bits) to show a shift (faster than NF) in a waveform at a particular sampling rate. However, if I had fewer bits (or a waveform of lower amplitude), the shifted wave might not be captured with the correct shift. If this shift is important to what we hear (I am not clear whether these differences can be heard), could there be a benefit from improvements to the Red Book?

Miska correctly points out that changing the volume changes the frequency while that volume change is happening; however, their chart should have just shown a sine wave of changing amplitude.

Modern work flows may involve literally thousands of effects and operations

How should this be interpreted? On the order of 1,000 effect invocations, each with on the order of 1,000 arithmetic operations, making on the order of a million arithmetic operations per sample?

I would not worry about the number of arithmetic operations, but only about the quantization step where the high-precision result of the computations is converted back to, for example, 24 bits. And in the 24-bit case there's enough headroom for 1000 effects, because the accumulated quantization noise would still be below the noticeable threshold. I don't know current practice, but I believe most "serious" processing chains run entirely in 32-bit floating point precision or even higher.

I would hardly call float32 a danger zone. Even when the signal is greater than 0.5 of full scale (which is a small percentage of the time) float32 has a full 24 bits of resolution. When the signal is down at 2^-24 you still have 24 bits, that is 2^-48 of full scale.
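That "24 bits of resolution at any level" claim is easy to sanity-check with NumPy; the values below are just IEEE-754 single-precision ulp spacing, nothing specific to the thread:

```python
import numpy as np

# ulp (spacing to the next representable float32) near full scale,
# and near a signal that is already 24 bits down: the *relative*
# step stays the same, about 2**-23 of the signal's own magnitude.
print(np.spacing(np.float32(1.0)))         # 2**-23, i.e. ~1.19e-07
print(np.spacing(np.float32(2.0 ** -24)))  # 2**-47, i.e. ~7.11e-15
```

So a float32 sample at 2^-24 of full scale still carries a further ~24 bits of resolution below itself.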

When you perform repeated arithmetic operations on the data, if the rounding is uncorrelated (which it should be) then the increase in digitization error only increases as the square root of the number of operations. One thousand operations increases the error by 5 bits.

Yes, but if "Modern work flows may involve literally thousands of effects" is true, and each effect might have 1000 operations, then, on a pure float32 system (and presumably those compiled for SSE), that's over 10 bits of noise to be subtracted from 25.

I doubt though that work flows do actually involve thousands of effects, at least not on an individual track within the multi-track.

How do you arrive at 10 bits instead of 5 bits? Don't you think the "sqrt rule" applies here? If you add N orthogonal noise signals (of equal colour), each with an RMS of X, the expected RMS of the result is X*sqrt(N).
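The sqrt rule is easy to sanity-check numerically; the N = 1000 Gaussian sources below are an arbitrary choice of mine for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, X = 1000, 1.0                     # N noise sources, each with RMS X
noises = rng.normal(0.0, X, size=(N, 10_000))
total = noises.sum(axis=0)           # add all N orthogonal noise signals
rms = float(np.sqrt(np.mean(total ** 2)))
print(rms)                           # ~ X * sqrt(N) = 31.6, i.e. about 5 bits
```

For 1000 sources that is log2(sqrt(1000)) ≈ 5 bits of growth, not 10; you only get ~10 bits if there are on the order of a million independent rounding steps.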

That sounds about right, and even then that is the worst case, as you could simplify the operations before performing them in the first place, thus limiting the places where rounding occurs.

Does the statement of the benefits of mixing/mastering at greater than 44/16 also extend to recording? Or should one record at 44/16 and then upconvert for mixing/mastering?

BTW, did we get an answer to this? I agree that mixing needs to be greater than 44/16, but you raise an interesting question about recording.

I would like to use image processing as an analogy here.

RAW does not only offer lossless storage over JPEG; it also offers the ability to store a wider range of levels (blacker than black / whiter than white / highlights / shadows / clipping / crushed blacks, and all the other terms you see in this space). When doing your mixing (or post-processing), this additional information is beneficial.

I would suggest the same applies here: to get the most out of your mixing, i.e. your post-processing, you would want as much information as possible.

However, the difference here may be that the dynamic range of a JPEG is not high enough, let alone the fact that it is lossy.

Yes, it was addressed in the original article, in the "When does 24 bit matter?" section.

tl;dr: Recording/tracking with 24-bit resolution allows you to set your reference level lower, leaving more headroom for unexpected peaks ("whiter than white"), while still retaining a signal to noise ratio greater than that of 16-bit.
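As a back-of-the-envelope check of that tl;dr (the 18 dB headroom figure below is an illustrative assumption of mine, not a number from the article):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Idealized dynamic range of a linear PCM word, ~6.02 dB per bit."""
    return 20.0 * math.log10(2.0 ** bits)

headroom_db = 18.0  # hypothetical reference-level reduction for unexpected peaks
print(dynamic_range_db(16))                # ~96.3 dB for 16-bit
print(dynamic_range_db(24) - headroom_db)  # ~126.5 dB, still well above 16-bit
```

Even after giving away 18 dB (3 bits) of headroom, a 24-bit recording keeps a noise floor far below what 16-bit offers at full scale.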

I would suggest that the same applied here that to get the most of your mixing, ie your post processing, you would want as much information as possible.

No. You would only want as much information as necessary. Any information that, after all mixing/processing/etc., doesn't make it to the (audible portion of the) output only wastes time/resources without adding or improving anything.

How about just a wide public test of 16/44.1, 22/44.1, 16/96, 24/96, or any other combination (with obligatory ABX)? It would be a good lesson for the two kinds of guys: "I'm an audio engineer" and "I'm an audiophhhh... what?"