When the source current and source resistance are optimized for the given headphone load and similar maximum output power (~50 mW at 1% THD), the distortion pattern vs. output power is remarkably similar.

One plot below is simulated, the other measured. Because the J-Mo 2 simulation closely matched the actual measurements, it wasn't worth my while to generate a full simulated data set when I already had the measurements on hand. No reason to suspect that the Szekeres sim is inaccurate, either.

The take-home message is that the distortion characteristic of a MOSFET follower is what it is, and unavoidable. Take it or leave it, as it were. However - and this is key - if you don't optimize the stage for the headphone impedance, the distortion at a given output power will increase significantly.
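As a back-of-envelope illustration of why the bias point matters (a simplified sketch, assuming a class-A follower whose peak load current cannot exceed the bias current; the 56 mA and 32 ohm figures are illustrative, not taken from either amp's schematic):

```python
# Rough class-A follower sizing: peak output current is limited to the
# bias current, so maximum clean sine power into the load is I_pk^2 * R / 2.
def max_clean_power_mw(i_bias_ma, r_load_ohm):
    """Max sine power (mW) into r_load before current clipping."""
    i_pk = i_bias_ma / 1000.0                    # peak current = bias current
    return 0.5 * i_pk**2 * r_load_ohm * 1000.0   # P = I_pk^2 * R / 2, in mW

# e.g. ~56 mA of bias into 32-ohm headphones gives roughly 50 mW
print(round(max_clean_power_mw(56, 32), 1))      # → 50.2
```

Run the numbers for a mismatched load and the problem is obvious: the same bias into 300-ohm headphones limits the voltage swing instead, and you end up running the device harder (and less linearly) for the same milliwatts.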

I've often wondered, however, whether its distinctive sound comes from being unusually free of noise and artifacts, or from being unusually prone to heavy second-harmonic distortion.

It's not hard to set this up in LTSpice, but I haven't seen it done before. So, for your education and enlightenment, I present the harmonic distortion vs. output power data for the original "classic" circuit as uploaded to Headwize all those years ago. The LTSpice .asc file is also included if you want to play along. The harmonic data was generated by hand, reading the FFT peaks for 10 or so different input voltages.
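If reading FFT peaks by hand for every input level gets tedious, the same bookkeeping is easy to script. A minimal sketch below, using a synthesized test tone with known 2nd/3rd harmonic content in place of an exported LTSpice waveform (the sample rate, tone frequency, and harmonic levels are all made-up test values):

```python
import numpy as np

# Synthesize a 1 kHz tone with deliberate harmonic content; in practice you
# would export the .tran waveform from LTSpice and load it here instead.
fs, f0, n = 96000, 1000, 96000            # sample rate, fundamental, samples
t = np.arange(n) / fs
v = (np.sin(2 * np.pi * f0 * t)           # fundamental
     + 0.01  * np.sin(2 * np.pi * 2 * f0 * t)   # -40 dB 2nd harmonic
     + 0.002 * np.sin(2 * np.pi * 3 * f0 * t))  # -54 dB 3rd harmonic

spec = np.abs(np.fft.rfft(v)) * 2 / n     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(n, 1 / fs)

def peak(h):
    """Amplitude at the bin nearest the h-th harmonic."""
    return spec[np.argmin(np.abs(freqs - h * f0))]

fund = peak(1)
harmonics = [peak(h) for h in range(2, 6)]
thd = np.sqrt(sum(a * a for a in harmonics)) / fund
print(f"THD = {100 * thd:.3f} %")         # → THD = 1.020 %
```

Repeat per input voltage and you have the whole distortion-vs-power sweep without squinting at FFT plots. (LTSpice's own `.four` directive will also report harmonic magnitudes and THD directly from a `.tran` run, if you'd rather stay inside the simulator.)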