The Controversy of ABX Testing

For those of you who have been living under an audio rock, the ABX Test is either a) a complete waste of time or b) the only method of sensory testing capable of any degree of verisimilitude.

Interesting debate, in theory. In practice, this turns out to be a semi-literate slugfest fought only by zealots who are habitually guilty of staking out positions that are colorfully impossible to defend. Most damning of all? They’re painful to read.

The philosophical problem here is called “question begging”. I like calling it that, rather than simply noting that Harley is guilty of “assuming his conclusions”, but whatever you want to call it, it pretty much trashes any conclusion he’s attempting to reach. Worse, by the end his piece becomes indistinguishable from self-serving blather (“I am not irrelevant!”), as he does nothing for his cause other than dig himself into a large hole that his opponents can happily fill in for him at their leisure.

The opposite end of the spectrum, unfortunately, is camped out by a group of apparently congenitally grumpy “pseudophiles” who are too cheap, too bored, or too lazy to actually do what it takes to answer the matter definitively (at least for themselves), whatever their issue actually is. This last is important to note because the continual evolution of the complaint makes actual argument pointless and impossible. Witness the lunacy (and don’t forget your waders, as the shit gets pretty deep) in a 3-year-old revenant thread at Computer Audiophile: The Controversy of ABX Testing.

I’ve talked about ABX testing in the past. Frankly, the “controversy” is getting old, at least for me, but for many it continues to bloom ever-new. The latest edition was lobbed by Theresa Goodwin on PFO: Why ABX Testing Usually Produces Null Results with Human Subjects. Great title! But sadly, the claim isn’t actually argued for; rather, the whole piece is another question-begging apologia attempting to justify the job of the audio reviewer. As arguments go, it was even worse than Harley’s attempt (and that’s saying something). Theresa? Please tell me this was just a draft!

Here’s the scoop: it’s not voodoo, people. It’s science, and science is hard stuff. Sensory testing? Science! Therefore, it is hard stuff. Crafting scientific studies? Yes, more hard stuff. Analyzing statistical results? Hard stuff! Mounting a rigorous scientific study with adequate statistical sampling that can stand up to peer review? Expensive stuff. Sure, it can be done on the cheap: just find yourself a tenured professor with enough minions at his disposal and consider it job done. Odd that this hasn’t happened. Yet here we are.
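To put a number on “hard stuff”: the standard analysis for an ABX session is a one-sided binomial test against chance (a guesser is right half the time). A minimal sketch in Python; the function name and the 12-of-16 example are my own illustration, not drawn from any of the pieces discussed above:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value for an ABX session: the probability of scoring
    at least `correct` out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener who gets 12 of 16 trials right beats chance at p ≈ 0.038,
# under the conventional 5% significance threshold.
print(round(abx_p_value(12, 16), 3))  # prints 0.038
```

Note the flip side, too: with only a handful of trials, even a real audible difference will often fail to reach significance, which is one unglamorous statistical reason short, informal ABX sessions so frequently yield null results.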

What I find most objectionable is that this is an empirical question. That is: “does some given audio component have any impact on the overall sound quality of my audio system, and if so, what is it and to what degree?” Empirical! You can test this! Does it have to be an ABX test to be valid? Of course not. Check the thread on Computer Audiophile for comments by Alan Sircom, the editor of Hifi+. He’s got some excellent proposals that aren’t nearly as fraught with the epistemic difficulties many naysayers find in ABX protocols. The point? It’s empirical. Testable. There is a fact of the matter at hand. And no, I don’t have to rely upon the hyperbolic proclivities of a professional wordsmith to learn it. No sir! In fact, I don’t even have to wait for the journal article in next month’s Science or Nature. No? No! I can, in point of fact, simply buy, beg, borrow or steal said component and try it out my damn self.

This should have been Harley’s or Goodwin’s point: if you don’t believe me, no worries; try it out for yourself. Done! Attacking scientific methodology, and with faulty logic to boot, was an exercise in the excruciating. Right before my head exploded, my eyes bled. What were they thinking?!? Ugh. Editors! We need better editors!