When doing the measurements @Klasse cited, I was aiming for a 90 dB peak area on the FR curve. Is that not loud enough?

I haven't been able to find any best practices (static.changstar.com's been spotty the last week or two) about this.

I'm rebuilding my measurement rig this weekend so that it stresses the headphone bands less. I'll try measuring at a louder volume then to see how the distortion changes and whether I can replicate others' results more closely.

You not only need to measure at a level loud enough to excite the headphone's distortion, but you also need a recording chain (mic-preamp-A/D) with low enough noise and distortion to give you enough dynamic range. Think of it as having enough "room" underneath the signal.
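To make the "room underneath" idea concrete, here's a small sketch (the signal and noise levels are made-up illustrations, not numbers from any real chain) that compares a test tone's RMS level against a simulated noise floor, both in dBFS:

```python
import numpy as np

def db_rms(x):
    """RMS level in dB relative to full scale (dBFS), for floats in [-1, 1]."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

# Hypothetical chain: a -10 dBFS test tone over a ~-90 dBFS noise floor.
fs = 48000
t = np.arange(fs) / fs
tone = 10 ** (-10 / 20) * np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)  # -10 dBFS RMS
noise = 10 ** (-90 / 20) * np.random.randn(fs)                        # ~-90 dBFS RMS

# The "room" underneath the signal: distortion products below this
# margin are buried in the chain's own noise.
headroom_below_signal = db_rms(tone) - db_rms(noise)
print(round(headroom_below_signal))  # ~80 dB of usable range
```

If the harmonics you're trying to measure sit more than ~80 dB below the fundamental here, they simply vanish into the noise floor, no matter how loud the headphone is playing.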

So you could very well have been playing at 90 dB SPL, but with your gain settings set wrong.

The loudness registered in ARTA (or whatever app you are using for measurements) can vary depending on the measurement method, mic calibration, weighting, etc. For my plots, the 0 dB point equates to 100 dB SPL. I generally try to align the 500 Hz-1 kHz region to 87-90 dB.
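That alignment is just a fixed offset between digital level and acoustic level. A minimal sketch, assuming the calibration described above (0 dBFS = 100 dB SPL; your own offset will differ):

```python
import numpy as np

# Assumed calibration point: 0 dBFS on the recording chain corresponds
# to 100 dB SPL, matching the alignment described above.
FULL_SCALE_SPL = 100.0

def dbfs_to_spl(level_dbfs, full_scale_spl=FULL_SCALE_SPL):
    """Map a digital level in dBFS to dB SPL via the chain's calibration offset."""
    return full_scale_spl + level_dbfs

def spl_of(samples, full_scale_spl=FULL_SCALE_SPL):
    """Estimated SPL of a block of float samples in [-1, 1]."""
    rms_dbfs = 20 * np.log10(np.sqrt(np.mean(np.square(samples))))
    return dbfs_to_spl(rms_dbfs, full_scale_spl)

# A full-scale sine has an RMS level of -3 dBFS, so it registers ~97 dB SPL:
t = np.arange(48000) / 48000
print(round(spl_of(np.sin(2 * np.pi * 1000 * t)), 1))  # ≈ 97.0
```

The only "calibration" stored anywhere is that single offset, which is why mic calibration, weighting, and gain staging all shift where a given FR curve sits on the SPL axis.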

There are two places to adjust input gain; it's a matter of playing with both to get the lowest-noise results:

Microphone preamp

Windows Control Panel - Sound - Recording Devices

Also be sure to check the sampling frequency. The MLS, sweep, or noise signal going OUT should use the same sampling frequency as the signal coming IN.
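If you save the stimulus and the recorded response as WAV files, a quick sanity check for this mismatch can be done with Python's standard `wave` module (the function name and file layout here are just illustrative):

```python
import wave

def assert_matching_rates(out_path, in_path):
    """Raise if the stimulus (OUT) and recording (IN) WAVs disagree on sample rate."""
    with wave.open(out_path) as w_out, wave.open(in_path) as w_in:
        rate_out = w_out.getframerate()
        rate_in = w_in.getframerate()
    if rate_out != rate_in:
        raise ValueError(f"rate mismatch: OUT {rate_out} Hz vs IN {rate_in} Hz")
    return rate_out
```

A 48 kHz sweep captured through a device pinned at 44.1 kHz will come back pitch-shifted, which smears the deconvolved impulse response and ruins the distortion separation.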

For obvious reasons, harmonic distortion measurements should always indicate the reference SPL. I know 90 dB SPL is very high for normal listening, but I like to bump the level up this high because it makes differences easier to see, and because even at this higher level there is correlation with what is actually heard at normal listening levels.
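For what it's worth, the distortion number itself boils down to comparing harmonic bin amplitudes against the fundamental in an FFT. A rough sketch (not how ARTA does it internally; it assumes the tone sits on an exact FFT bin, i.e. an integer number of cycles in the buffer):

```python
import numpy as np

def thd_percent(x, fs, f0, n_harmonics=5):
    """Rough THD: harmonic amplitudes relative to the fundamental, from an FFT.

    Assumes f0 and its harmonics land on exact FFT bins (integer number
    of cycles in x), so a single bin captures each component's peak.
    """
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    bin0 = int(round(f0 * len(x) / fs))
    fundamental = spectrum[bin0]
    harmonics = [spectrum[bin0 * k] for k in range(2, n_harmonics + 2)]
    return 100 * np.sqrt(sum(h * h for h in harmonics)) / fundamental

# Synthetic check: a 1 kHz tone with a 1% second harmonic should read ~1% THD.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)
print(round(thd_percent(x, fs, 1000), 2))
```

The point of always quoting the reference SPL is that this percentage is only meaningful relative to the drive level: run the same function on recordings taken at 80 and 90 dB SPL and you'll typically get quite different numbers from the same headphone.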