Earlier I was listening for the quiet chords near the end. For these last two reports, I used the quiet chords before the big sound at ~21 seconds.

Hate to muddy the waters, and I appreciate all the responses above, but this report here was generated on the "cheap" earphones with motherboard chipset DAC.

I have to think it's not about fidelity of the equipment, it's figuring out what to listen for. Listening for jitter is *unlike* other ABX comparisons I've done before. If it helps, I try to imagine the sharpest focus of sound in terms of how "narrow" I can hear the piano attack, as though it were a spatial measure. The narrower attack is 'n'. It is difficult because I'm continually tempted to chase mirages of differences in other details. If I stick to "focus" and "narrow" I get a result.

Can somebody else please attempt to DBT these tracks? I've tried and failed miserably (predictably). What UMS can do should not be possible, based on the differences between the tracks at different frequencies. I don't know if something else is at play here; in any case, we really need others to try this.

Is there some aspect of the jittering algorithm in the software used that could be introducing other artifacts? Another possibility (I think): is there some interaction of the jittered file with my particular system that could somehow be highlighting or amplifying the jitter artifacts in the file? The chain is foobar, of course, followed by RME Babyface, followed by Schiit Asgard 2 into Beyerdynamic 770 Pros.

Replication. This is just path30jr versus path30n again, to establish the reliability and consistency over more rounds.

Back to the Beyerdynamic 770 Pros for these. I'm listening in the mids, if that helps any. The notes are "shaped" differently not in the bass extension or treble extension, but in the core of the piano attack, where it seems that the 'n' is focused, while the jr has a slightly 'flattened out' aspect. This was listening for the quiet chords right near the end again.

Since the "noise" component was mentioned above, I'll mention that I'm not listening for noise, since that gives a null result--no discernible difference I can detect on that basis.

This is Voxengo's static plot of the few quiet seconds at the end, derived from the difference file after it has been boosted (so never mind the scale at the right, it overstates the difference).

The main thing I was wondering about for this analysis was the mids, and it looks like that is where this tool plots the greatest signal strength of the difference file. This is only for the passage ~27 to 29 seconds near the end.

The noise component of the jitter added has some amount of stereo content (it is not exactly the same on the two channels), but we are talking about ns-range differences, which should not be audible. Still, a pure mono jitter might be worth testing.

Did you try the other, higher-jitter samples again? If you get a 1% chance of guessing with path30jr, then 0.0% (with a 20/20 or similar result) should be possible with path30j.
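For reference, those "chance of guessing" figures come from the binomial distribution: under pure guessing, each ABX trial is a coin flip. A minimal sketch (function name is mine, not from any ABX tool):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting at least `correct` right out of `trials`
    ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 20/20 correct: 1 in 2^20, roughly one in a million
p = abx_p_value(20, 20)
```

So a 20/20 run corresponds to p = 2^-20 ≈ 0.0001%, which is what the "0.0%" above refers to.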

I used Audacity to examine the last 3 seconds of both files; at no point does the difference ever exceed 0.046 dB. Yes, there is more energy in the 0–3000 Hz range, but the differences are still absurdly low. In fact, the biggest differences are in the 5 kHz to 8 kHz range, but they seldom exceed 0.02 dB.
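A per-band comparison like this can also be done outside Audacity. A rough sketch, comparing RMS spectrum level per band of two aligned clips (the band edges and function name are mine; the clips are assumed to be the same length and sample-aligned):

```python
import numpy as np

def band_level_diff_db(a, b, fs, bands=((0, 3000), (5000, 8000))):
    """Level difference (dB) between two aligned clips, per frequency band,
    computed from RMS magnitude spectra."""
    n = min(len(a), len(b))
    A = np.abs(np.fft.rfft(a[:n]))
    B = np.abs(np.fft.rfft(b[:n]))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    out = {}
    for lo, hi in bands:
        m = (freqs >= lo) & (freqs < hi)
        rms_a = np.sqrt(np.mean(A[m] ** 2))
        rms_b = np.sqrt(np.mean(B[m] ** 2))
        out[(lo, hi)] = 20 * np.log10(rms_a / rms_b)
    return out
```

Feeding it, say, the last 3 seconds of the jittered and unjittered files would give the band-by-band dB deltas directly.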

The graph above cuts off before the last big chord, so it's missing some of the high-frequency energy from strings struck hard.

I'm sure this has already been discussed numerous times and I missed it, but what method is used to synthesize the jitter in these tests?

It is basically a variable delay at 768 kHz sample rate. The signal used to modulate the delay time is a mix of several sine waves (to create sidebands) and lowpass filtered white noise. The noise has uniform distribution before the filtering, and the filter is a -6 dB/octave lowpass with a -3 dB frequency of 4 Hz followed by a -12 dB/octave Butterworth lowpass with a -3 dB frequency of 60000 Hz; the latter one is there mainly for anti-aliasing purposes. The noise generator is seeded from the system time. Also, the noise has some stereo separation (it is a mix of a mono and stereo component with a ~85:15% ratio of power). This might not actually be a good idea or an accurate simulation of real hardware (especially for low frequency noise), and was basically left in the code from an earlier version. Although the resulting ITD should in theory still be well under the threshold of audibility, in a new test, the "stereo noise" should probably be removed, or limited to high frequency noise only.
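A minimal sketch of that scheme, not the actual code: the sine frequencies, amplitudes, and noise level below are placeholder values, and linear interpolation stands in for whatever fractional-delay method the original implementation uses. It builds the delay-modulation signal from sines plus uniform noise shaped by the two lowpass stages described, then reads the input at the fractionally delayed positions:

```python
import numpy as np
from scipy.signal import butter, lfilter

def make_jitter_signal(n, fs=768000, sine_freqs=(1000.0,), sine_amps=(5e-9,),
                       noise_amp=2e-9, seed=None):
    """Delay-modulation signal (seconds): a mix of sine waves (sidebands)
    plus lowpass-filtered uniform white noise, as described above."""
    rng = np.random.default_rng(seed)  # original is seeded from system time
    t = np.arange(n) / fs
    jit = np.zeros(n)
    for f, a in zip(sine_freqs, sine_amps):
        jit += a * np.sin(2 * np.pi * f * t)
    noise = rng.uniform(-1.0, 1.0, n)
    # -6 dB/octave lowpass, -3 dB at 4 Hz (first-order)
    b1, a1 = butter(1, 4.0, btype="low", fs=fs)
    # -12 dB/octave Butterworth lowpass, -3 dB at 60 kHz (anti-aliasing)
    b2, a2 = butter(2, 60000.0, btype="low", fs=fs)
    noise = lfilter(b2, a2, lfilter(b1, a1, noise))
    return jit + noise_amp * noise

def apply_jitter(x, jitter_s, fs=768000):
    """Variable delay at the 768 kHz rate: read each output sample from a
    fractionally delayed position (linear interpolation as a stand-in)."""
    idx = np.arange(len(x)) - jitter_s * fs  # fractional read positions
    return np.interp(idx, np.arange(len(x)), x)
```

With ns-scale modulation the delay moves by only thousandths of a sample, which is why the resulting waveform differences are so small.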

I doubt the seeding would be of too much significance in practice, except for very low frequency "wow/flutter" components.

I suppose it should not.

I presume the jitter/delay takes on one of, say, 16 discrete fractional values of the original sampling period, depending on the rate? Or is another method used to implement the delay?