DD, DTS, and Linear PCM on Concert DVD

Some concert DVDs offer the option in the audio setup menu of listening to the soundtrack in Dolby Digital, DTS, or Linear PCM. Dolby Digital and DTS both compress the soundtrack to a relatively low bit rate. For example, Dolby Digital encodes 5.1 discrete channels into a single datastream with a bit rate of 384 kbps (384,000 bits per second).

If you select the Linear PCM option available on some discs, the DVD player's output is a stereo digital audio signal with a bit rate of 768 kbps per channel--about ten times more data per channel than the Dolby Digital track. In my experience, the PCM track sounds significantly better than Dolby Digital or DTS. The treble is smoother and less grainy; midrange timbres sound more natural and lifelike rather than synthetic; and the soundstage has a more open quality.
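For what it's worth, the arithmetic behind those figures can be checked in a few lines. The even split of the Dolby Digital stream across six channels is a simplification I'm assuming here; real AC-3 allocates bits dynamically between channels:

```python
# Per-channel data rates for the formats discussed above.
# Assumption: the 384 kbps Dolby Digital stream is divided evenly
# across its 5.1 (six) channels -- a simplification, since AC-3
# actually shares bits between channels as needed.

dd_total_kbps = 384          # Dolby Digital 5.1 stream
dd_channels = 6              # 5 main channels + LFE
pcm_kbps_per_channel = 768   # Linear PCM stereo track, per channel

dd_kbps_per_channel = dd_total_kbps / dd_channels   # 64 kbps
ratio = pcm_kbps_per_channel / dd_kbps_per_channel  # 12x

print(dd_kbps_per_channel)  # 64.0
print(ratio)                # 12.0
```

So under this even-split assumption, "about ten times more data" works out to roughly a factor of twelve.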

The situation will, we hope, become moot when HD DVD and Blu-ray Disc become available. With up to eight discrete channels of true high-resolution audio and bit-rates as high as 4.6 million bits per second per channel, concert DVDs are going to sound a whole lot better.

My understanding is that most, if not all, DVD players will play two-channel Linear PCM, if set up properly, and from DVD-Video discs, too. It is a shame this is not more widely known; you do not need a DVD-A or SACD player to get better resolution than that from CD.

Arnold, by the same calculation CD is 705.6 kbps per channel (16-bit, 44.1 kHz). Correct?

What is your view, now, on the minimum resolution for digital to be indistinguishable from analogue - "as good as"? Does anyone claim 192 kHz sounds different from 96 kHz? 192 kHz DVD-A discs must be rare; I do not know of one.

Yes, the transferred audio bit rate is sample size × sample frequency. (To be exact, the physical bit rate is higher, due to some extra bits used by the transfer protocol.)
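That rule reproduces all the per-channel figures quoted in this thread (a quick check; the 16/48 pairing for DVD LPCM is my assumption, since 16-bit/48 kHz is what gives the 768 kbps mentioned earlier):

```python
def pcm_bitrate_bps(bits_per_sample, sample_rate_hz):
    """Payload bit rate per channel, ignoring transfer-protocol overhead."""
    return bits_per_sample * sample_rate_hz

cd = pcm_bitrate_bps(16, 44_100)       # 705,600 bps = 705.6 kbps
dvd_lpcm = pcm_bitrate_bps(16, 48_000) # 768,000 bps = 768 kbps
dvd_a = pcm_bitrate_bps(24, 192_000)   # 4,608,000 bps ~ 4.6 Mbps

print(cd, dvd_lpcm, dvd_a)
```

The last figure matches the "4.6 million bits per second per channel" quoted for the high-resolution formats above.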

I do claim the more bits the better, yet it's just my personal and limited experience. The first time I listened to the 192/24 stereo track of Eagles' Hotel California, I ended up having goosebumps. But IMHO not even this "state of the art" bit rate sounds 100% analogue.

It is difficult to believe someone, somewhere, has not done an experiment. It seems to me it would not be difficult to determine the digital sampling frequency threshold of human hearing.

For example, if people can distinguish 96 from 44 kHz, can they distinguish 192 from 96 kHz? If so, can they distinguish 384 from 192 kHz? 768 from 384 kHz...? ...and so on. There has to be a point where doubling the sampling frequency makes no audible difference at all.

I once made some 8-bit audio files, and the difference between 12 kHz and 24 kHz sampling was completely obvious, even on computer speakers. So 12 kHz sampling is clearly below the threshold.
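One reason the difference at such low rates is so obvious is aliasing: any content above half the sampling rate folds back down into the audible band. A small sketch of the folding arithmetic (the 7 kHz test tone is my example, not from the post):

```python
def alias_frequency(f_hz, fs_hz):
    """Frequency at which a tone f appears after sampling at fs,
    assuming no anti-alias filtering before the sampler."""
    folded = f_hz % fs_hz
    return min(folded, fs_hz - folded)

# A 7 kHz tone survives 24 kHz sampling intact (Nyquist = 12 kHz)...
print(alias_frequency(7_000, 24_000))  # 7000
# ...but at 12 kHz sampling (Nyquist = 6 kHz) it folds down to 5 kHz.
print(alias_frequency(7_000, 12_000))  # 5000
```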

So, where is the threshold?

The same approach could be taken to bit rate.

As you may recall, Arnold, I used to believe DVD-A was clearly superior to CD. I am no longer quite so sure about this. I was using better amp inputs for DVD-A (the "Ext. 5.1" on an NAD receiver, compared with "CD") and was not aware that this, in itself, made a difference.

Also, the problem with relying on commercially-produced discs is that comparisons based only on resolution are impossible - everything is re-mixed, remastered, or presented in surround.

There are quite a few actually, John - I have some. Donald Fagen's 'Kamakiriad' I think is one; Steely Dan's 'Going out of Business' I believe is another. It is late and I would wake Mrs R if I tried to go through the collection right now, but there are more of these in there. Of course 192/24 can only be found on the two-channel MLP tracks, and it IS difficult to tell the difference, as the surround 96/24 sound is so darn excellent and fulfilling - especially on such well-mixed titles as these. It would be easier to compare the two resolutions of the same recording, but that won't happen, of course.

Using the Ext. 5.1 inputs is a must for CD listening, but the hi-res recordings are still superior IMHO - how much depends on the qualities of both, naturally. A very good CD can maybe pip a poor hi-res recording - Rod Stewart's DVD-A comes to mind as a poor example.

Thanks John and Arnold for your input. I will bow to your greater knowledge on this subject:

"The first time I listened to the 192/24 stereo track of Eagles' Hotel California, I ended up having goosebumps. But IMHO not even this 'state of the art' bit rate sounds 100% analogue."

"So, where is the threshold?"

Exactly. Is there a bit rate/sampling frequency that is equal, with scientific certainty, to analogue? Or is it impossible to quantify analogue the same way as digital? Is the only way to evaluate the quality of any digital recording by sensory perception alone? Not that there's anything wrong with that. After all, serious listeners have been doing this since Edison. But it does seem that, in these technological times, the question of digital vs. analogue could easily be answered without relying on our, or someone else's, golden ears.

But maybe all this concern about digital modes is irrelevant. Most recordings begin with the microphone, an electro-mechanical non-digital device. And the audio delivery system ends with the loudspeaker, which is a microphone in reverse. It seems to me these two have far more to do with perceived sound quality (along with the rooms they're in) than whatever happens in between them, within reason.

Similar questions can be asked in the video world. How many megapixels are needed to equal the quality of a good 35mm film frame? What's the point of broadcasting a 40-year-old movie in high-definition video?

There's no doubt digital is here to stay. However, there is good reason to question how good digital signals have to be to equal the best of analogue. All this has been hashed and rehashed on this forum before, and the debate will continue, I think.

"Is the only way to evaluate the quality of any digital recording by sensory perception alone?"

No, there are other ways. But perception is what we are trying to achieve, and it overrides everything else. The other things one can measure may be useful in understanding why we perceive the sound as better or worse, or they may be irrelevant. As regards bit size and sampling frequency, the question is: above which values do they cease to make any difference to sound quality?
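For bit depth, at least, there is a standard theoretical yardstick one can measure against: an ideal N-bit converter driven by a full-scale sine has a signal-to-quantization-noise ratio of about 6.02 N + 1.76 dB. A quick calculation of what each common word length buys:

```python
def ideal_snr_db(bits):
    """Theoretical SNR of an ideal N-bit quantizer, full-scale sine input."""
    return 6.02 * bits + 1.76

for n in (8, 16, 24):
    print(n, "bits ->", round(ideal_snr_db(n), 1), "dB")
# 8 bits -> 49.9 dB, 16 bits -> 98.1 dB, 24 bits -> 146.2 dB
```

Whether the jump from ~98 dB to ~146 dB is audible over real playback chains is, of course, exactly the perceptual question being debated here.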

I cannot answer that.

The analogy with digital photography is very good, John. We could re-phrase the question and ask above which values for the number of pixels, and the number of bits in each pixel, we can no longer tell the difference between digital pictures. And, even above those values, does there remain any perceivable difference between a digital picture and a conventional one?

Maybe the road to "perfect" digital audio is not about resolution but a combination of technologies? Currently, sound waves are modelled in two ways:

1. PCM: 192,000 times per second, the curve's "height" is represented by a binary number using 24 bits. This number is very big and allows us to determine the height with a lot of precision. But 192,000 samples per second is not often enough to avoid little "sawtooths" on our curve image. The "sawtooths" are perceived as distortion, but are actually ultrasonic harmonics of the original tones.

2. DSD: 2.8 million times per second, a binary digit (1 or 0) indicates whether the curve is heading up or down. We sample so often that the digital "sawtooths" become very, very small. The drawback is that the 1-bit sample doesn't indicate whether the curve goes up a little or a lot, or even levels out. This becomes a true problem when trying to represent high amplitude at high frequency, and also when representing silence.
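The two models can be caricatured in a few lines of code. Note that this is only a sketch of the up/down tracker described above: real DSD is a noise-shaped delta-sigma stream, not a plain fixed-step tracker.

```python
def pcm_encode(signal, bits):
    """PCM caricature: quantize each sample to a 2**bits-level grid in [-1, 1]."""
    step = 2.0 / (2 ** bits)
    return [round(s / step) * step for s in signal]

def delta_encode(signal, step=0.05):
    """1-bit caricature of DSD, as described above: each output bit only
    says "curve goes up" (1) or "curve goes down" (0) by a fixed step."""
    bits, estimate, track = [], 0.0, []
    for s in signal:
        up = s > estimate
        bits.append(1 if up else 0)
        estimate += step if up else -step
        track.append(estimate)
    return bits, track
```

Feeding `delta_encode` a slowly rising curve works fine, but a fast, large swing exposes exactly the "up a little or a lot" problem: the estimate can only climb one fixed step per sample (slope overload), and "silence" comes out as an endless up-down-up-down chatter around zero.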

(0000 = read error to be interpolated) (unit dB is just an example to mark that we are measuring sound pressure)

This way we combine DSD's incremental/decremental approach with PCM's quantization approach, at a high sample frequency. And it would be possible to squeeze 5.1 channels onto a DVD disc, even without compression.

My birthday gift to Mrs. Layne is a new mobile, and hers to her sister an MP3 player. Spreading like a virus, really :-)

Yes, I'm thinking of a 4-bit DSD, representing sound-pressure increase/decrease more precisely, in a PCM way. Smooth instead of sawtooth, since the sample rate is very high. The entire scale could, for example, be:

Silence is already covered by the "1000": steady at ambient level -> no air waves -> no sound. So the number "0000" is not needed and can be used to detect scratches in the disc.
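The hypothetical 4-bit scheme sketched above could look something like this in code. The exact code assignments are my guess at the intended scale (0000 reserved as the read-error marker, 1000 as "steady", the rest carrying a signed step), not any real format:

```python
# Hypothetical 4-bit delta codec, after the scheme sketched above.
ERROR = 0b0000   # read error, to be interpolated by the player
STEADY = 0b1000  # curve levels out: silence if held at ambient level

def encode_step(delta):
    """Map an integer step in -7..+7 to a 4-bit code."""
    if delta == 0:
        return STEADY
    if delta > 0:
        return delta            # 0001..0111 -> +1..+7
    return 0b1000 | -delta      # 1001..1111 -> -1..-7

def decode_step(code):
    """Inverse mapping; returns None for the read-error code."""
    if code == ERROR:
        return None             # player interpolates this sample
    if code == STEADY:
        return 0
    if code & 0b1000:
        return -(code & 0b0111)
    return code
```

Every legal step from -7 to +7 round-trips through the two functions, and 0000 decodes to "missing, interpolate me", as described above.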

Now, what if our digital model of the waves is precise enough to avoid subsonic and ultrasonic distortion that could destroy analogue circuits and loudspeakers? Then it could just be translated into electrical current, sufficiently strong to drive mechanical elements. That is, it would allow for a very simple combined DAC and amplifier indeed.

Just two more things: 1. I think millibar is actually a better measure than dB, scientifically speaking. 2. There would have to be some safety mechanism that prevents a +7 +7 +7... sequence from making the equipment catch fire.