There were several ways I could test the performance of this player. One was to use the analog outs to my pre/pro, which lets the player's DACs do the conversion and apply the analog filters. Another was to send the digital signal to my processor and let it handle the conversion. Using the analog outs, the sound was sweet, with good detail and a wide soundstage. When I toggled between the analog and the digital outs, I felt the analog signal had a slight edge in detail…but very slight. When I selected different filters, I could not distinguish much difference between them. That is not to say they did nothing, but any changes were subtle at best.

My pre/pro confirmed it was faithfully receiving the digital signal at 96 kHz (or 44.1 or 192). As for the upconversion, I could hear slightly better definition going from 44.1 kHz (off) to 96 kHz, but from 96 kHz to 192 kHz, the overall sonics were indistinguishable to me. I think there comes a point where the human ear simply cannot hear the benefits of high sample rates. The human ear is the ultimate "brick wall filter". So why does 96 kHz sound better than 44.1 kHz? Aren't 16-bit CDs still 16 bit even if the sample rate is pumped up to 96 kHz? Can you turn the proverbial pig's ear into a silk purse? Well, sort of. Going from 44.1 kHz to 96 kHz pushes the Nyquist frequency well above the audio band, which lets the player use a less steep analog filter at the DAC output stage. This gentler-sloping filter produces a smoother output, which sounds better to the ears. I know this is a point of contention among audio buffs, and some may argue that upsampling is better than upconversion, but in the end, both achieve a smoother, less digital sound.
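The filter-slope argument can be put in rough numbers. Here's a quick sketch of my own (assuming a 20 kHz audio passband; these figures are illustrative, not from the player's spec sheet): the reconstruction filter has to roll off between the top of the audio band and the Nyquist frequency (half the sample rate), and upconverting widens that transition band enormously.

```python
# Illustrative sketch: why a higher sample rate permits a gentler
# reconstruction filter. The analog filter must pass the audio band
# (assumed here to end at 20 kHz) and be well attenuated by the
# Nyquist frequency (fs / 2). The room it has to roll off in is the
# transition band: (fs / 2) minus the passband edge.

AUDIO_PASSBAND_HZ = 20_000  # assumed upper edge of the audible band

def transition_band_hz(sample_rate_hz: int) -> float:
    """Width of the reconstruction filter's transition band."""
    return sample_rate_hz / 2 - AUDIO_PASSBAND_HZ

for fs in (44_100, 96_000, 192_000):
    print(f"{fs:>7} Hz: Nyquist = {fs / 2:>8.1f} Hz, "
          f"transition band = {transition_band_hz(fs):>8.1f} Hz")
```

At 44.1 kHz the filter has only about 2 kHz to fall off in, which forces a steep "brick wall" design; at 96 kHz it gets roughly 28 kHz, and at 192 kHz about 76 kHz, so the slope can be far gentler.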