Euphonic Distortion: Naughty but Nice?

In 1977, just as I was about to take my first faltering steps in hi-fi journalism, the UK's Hi-Fi News ran two articles, translated from French originals by Jean Hiraga, that seemed to me and many others to turn the audio world we knew upside down. The second of them, "Can We Hear Connecting Wires?" was published in the August issue and is the better remembered because it introduced many English-speaking audiophiles to the contention that cables can sound different. The earlier article, published in the March issue, was less earthshaking but still an eyebrow-raiser of considerable force. Simply titled "Amplifier Musicality," it was a response to the word musicality being increasingly used in subjectivist circles to describe the perceived performance of amplifiers and other audio components. It was implicit that musicality was a quality not captured by conventional measurement procedures—a lack of correlation that Hiraga's article sought to address.

To paraphrase, Hiraga contended that it wasn't the quantity of an amplifier's nonlinear distortion that determined its sonic footprint so much as its quality; not the absolute level of that distortion but its pattern. To anyone brought up on the notion, as I had been, that total harmonic distortion (THD) was a reliable measure of an amplifier's nonlinearity, this was a shocking proposal indeed. But the idea that the nature of an amplifier's nonlinear behavior is as important as its amplitude was itself nothing new, even if the hi-fi world at large had up till then largely succeeded in ignoring it. In fact, the notion had first emerged in the US as long ago as the late 1930s (footnote 1) and been refined by D.E.L. Shorter at the BBC in the early 1950s (footnote 2).

This, and later work at the BBC by E.R. Wigan (footnote 3), had shown that much better correlation between sound quality and harmonic distortion measurement was obtained if the amplitude of each harmonic were appropriately weighted before being summed into an overall distortion metric. The Radio Manufacturers Association in the US had originally proposed that each amplitude be multiplied by n/2, where n is the harmonic number. The amplitude of the second harmonic thereby remained unchanged, while that of the third was increased by a factor of 1.5, the fourth by a factor of 2, etc. Shorter's work, confirmed by Wigan, suggested that a more draconian weighting regime was needed in which each harmonic amplitude was multiplied by n²/4. Again this leaves the amplitude of the second harmonic unchanged, but that of the third is now multiplied by 2.25, the fourth by 4, etc.
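To see how such a weighted metric diverges from plain THD, the two weighting schemes can be compared numerically. A minimal sketch (the harmonic amplitudes below are invented purely for illustration):

```python
import math

def weighted_thd(harmonic_amps, weight):
    """Root-sum-square of harmonic amplitudes after weighting.

    harmonic_amps: amplitudes of the 2nd, 3rd, 4th... harmonics,
    relative to the fundamental.
    weight: function mapping harmonic number n to its multiplier.
    """
    total = 0.0
    for i, amp in enumerate(harmonic_amps):
        n = i + 2  # first entry is the 2nd harmonic
        total += (weight(n) * amp) ** 2
    return math.sqrt(total)

rma = lambda n: n / 2          # RMA proposal: n/2
shorter = lambda n: n * n / 4  # Shorter/Wigan: n squared over 4

# Hypothetical spectrum: 1% 2nd, 0.5% 3rd, 0.2% 4th harmonic
amps = [0.01, 0.005, 0.002]
print(weighted_thd(amps, lambda n: 1))  # unweighted THD
print(weighted_thd(amps, rma))          # RMA-weighted
print(weighted_thd(amps, shorter))      # Shorter/Wigan-weighted
```

Note that both weighting functions return 1 for n = 2, so the second harmonic passes through unchanged, exactly as described above; the higher harmonics are penalized progressively more heavily.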

Had the Hiraga article simply reiterated this work, it would have been useful but far from controversial. What made the difference was Hiraga's quite different contention: that a particular harmonic pattern is desirable, and that an amplifier that departs from this will sound less natural even though the total amount of distortion it introduces may be very much less. Correlating subjective assessments of different amplifiers' sound qualities with their distortion spectra, Hiraga concluded that the ideal harmonic pattern displays progressively decreasing harmonic amplitudes, with even-order (second, fourth, sixth, etc.) and odd-order (third, fifth, seventh, etc.) harmonics all present. According to this view, an amplifier that produces dominant odd-order harmonics—behavior typical of push-pull designs—can never sound as natural as one in which both even and odd harmonics are present with progressively declining amplitudes.

What Hiraga was claiming, in short, is that certain patterns of nonlinear distortion are euphonic—ie, pleasant to the ear—and others not. If these benign patterns are absent, then the resulting sound will be less believable. It was, and remains, a contentious extension of the Shorter-Wigan view because it embodies the idea that nonlinear distortion can sound nice, whereas the traditional view is that it can only ever sound nasty—to a reasonably educated ear, at any rate.

Surprisingly, the notion of euphonic distortion is widely held in the audio world, even among those who would normally be classified as staunch objectivists. For instance, the classic objectivist dismissal of a stated preference for tube amplifiers is that the listener likes the distortion they produce. In this case, the distortion is presumed to have a pleasant but still detrimental effect on fidelity, which isn't the same as Hiraga's contention that particular patterns of distortion actually enhance fidelity. But both arguments rest on the notion of euphonic distortion, even though the evidence for its existence is anecdotal and the arguments that have been proffered in support of it are weak. So: does euphonic distortion exist, or is it a myth?

Exceptions
Before you rush to demur, of course there are certain circumstances in which harmonic distortion can have a positive influence. The oft-quoted example is guitar amplifiers, where the higher distortion levels and softer clipping of tube designs result in an instrument sound that has often been preferred to that achieved with lower-distortion solid-state equivalents. But the concept of fidelity has no relevance in this case: the amplifier simply becomes a contributor to the instrument's timbre. Similar distortion in a domestic amplifier would be a quite different matter, because it would not only change the guitar sound a second time but would also impose its effect on everything that the amplifier reproduces. I suppose there may be a small proportion of listeners who like voices, violins, and vibraphones all to sound a bit like electric guitar—but I don't imagine there are many Stereophile readers of this description.

Harmonics also play a central role in the so-called missing-fundamental effect, a psychoacoustic phenomenon in which a fundamental tone is perceived even though it is not present. The fundamental is inferred from its harmonics, and heard even though it is missing or at least considerably attenuated within the signal. Processors like Waves' MaxxBass purport to exploit this effect to improve the perceived bass extension of small loudspeakers, but there is no obvious reason to suppose that an increased level of low-frequency harmonic distortion generally has a positive influence on bass performance. Such distortion is, after all, typical of transformer-coupled tube amplifiers, which are more often criticized than lauded for their bass.
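The effect itself is easy to demonstrate: a signal containing only the upper harmonics of a pitch, with the fundamental entirely absent, is still heard at that pitch. A minimal sketch (the 1/n amplitude rolloff is an arbitrary choice, not anything MaxxBass specifies):

```python
import numpy as np

def missing_fundamental_tone(f0, harmonics, sr=44100, dur=1.0):
    """Synthesize a tone from the listed harmonics of f0 only;
    the fundamental itself is never generated."""
    t = np.arange(int(sr * dur)) / sr
    return sum(np.sin(2 * np.pi * n * f0 * t) / n for n in harmonics)

# A 100Hz pitch implied solely by its 2nd-5th harmonics
tone = missing_fundamental_tone(100.0, [2, 3, 4, 5])
```

Play this back and the ear reports a pitch of 100Hz, although a spectrum analysis shows no energy there at all.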

Explanations
Theoretical justifications of the Hiraga contention have typically called on one of two different phenomena: either the consonance (or dissonance) of harmonic products, or the distortion inherent in human hearing. Of the two, the latter justification—that the harmonic pattern generated by an amplifier should match that of the human ear—is the simplest to refute because it is plainly illogical. As the ear introduces its own distortion imprint in any case, why should there be a need to imitate this within the reproduction equipment? It makes no sense unless, perhaps, the level of distortion is high enough to disrupt the ear's natural pattern, in which case the phase of each harmonic ought also to be significant.

The other justification, concerning the consonance or otherwise of different harmonics, is fine so far as it goes; it just doesn't go far enough. Any device that introduces harmonic distortion on a sinusoidal input will also produce amplitude intermodulation distortion—sum and difference frequencies—on a complex input like a music signal. These intermod components are generally dissonant, and, what's more, they very quickly become the dominant component of the distortion as signal complexity increases. A seminal paper written during World War II by two British Post Office researchers investigated this and came to some startling conclusions (footnote 4). For instance, if the signal comprises 30 or more component frequencies, then the distortion power contained within the harmonic distortion products will be at least two orders of magnitude lower than (ie, less than 1% of) that contained within the intermodulation products. In such circumstances it is difficult to credit that the consonance or dissonance of the harmonic components can have any significant effect on the perceived quality of the distortion.
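The arithmetic behind that conclusion can be sketched for the simplest possible case: a pure square-law nonlinearity acting on n equal-amplitude tones (a toy model, not the Post Office paper's full analysis). Squaring the sum of n cosines yields n second harmonics at half amplitude but n(n−1) sum and difference tones at full amplitude, so the harmonic power is only 1/(4(n−1)) of the intermodulation power:

```python
def harm_to_im_power_ratio(n_tones):
    """Ratio of harmonic-product power to intermodulation-product
    power for a square-law nonlinearity fed n equal-amplitude tones.

    Squaring sum_i cos(w_i t) yields n second harmonics cos(2 w_i t)
    at amplitude 1/2, plus n(n-1) sum/difference tones at amplitude 1.
    Power of a cosine of amplitude A is A**2 / 2.
    """
    harm_power = n_tones * (0.5 ** 2) / 2        # n tones, amplitude 1/2
    im_power = n_tones * (n_tones - 1) * 1.0 / 2  # n(n-1) tones, amplitude 1
    return harm_power / im_power

print(harm_to_im_power_ratio(2))   # two tones: harmonics still comparable
print(harm_to_im_power_ratio(30))  # 30 tones: harmonics below 1% of IM power
```

Even this crude model reproduces the paper's headline figure: at 30 component frequencies the harmonic products carry less than 1% of the intermodulation power.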

Synthesis
The problem with Hiraga's observations, and with any others that relate sound quality to a device's harmonic structure, is that they fail the primary scientific imperative to control other parameters. Without this, you risk finding false correlations—of assuming a causal link that doesn't exist. It may be that some other, unsuspected factor is responsible for the perceived differences. The way to avoid this—in this instance and many others—is to use digital signal processing to synthesize the distortion concerned. Everything within the audio system used to assess the subjective effect can then remain constant, so that any perceived differences can be reliably ascribed to the signal characteristic that has been manipulated.

The sidebar, "Crunching the Numbers," describes the math of nonlinear distortion synthesis for those who are interested in the detail. In fact, a simpler method than the one described can be used if the sole purpose is, as here, to model static nonlinearity: the lookup table that converts input sample values to output sample values is then constructed simply by summing cosine waves of the appropriate relative frequency and amplitude. But the described method, which calculates the parameters of the transfer characteristic, is more generally applicable in that it permits the distortion model to be elaborated to include frequency dependence. That isn't done here, but any attempt to model the nonlinear behavior of a real audio component would require it.
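The simpler, cosine-summing construction can be sketched as follows. It rests on the Chebyshev-polynomial identity Tn(cos θ) = cos(nθ): indexing a table by x = cos θ while summing cosines of nθ gives a waveshaper that produces exactly the requested harmonic amplitudes for a full-scale cosine input. (The harmonic pattern chosen below is an arbitrary "decaying even and odd" example, not Hiraga's measured spectra.)

```python
import numpy as np

def make_transfer_table(harmonic_amps, size=4096):
    """Build a static-nonlinearity lookup table whose output, for a
    full-scale cosine input, contains the requested harmonics.

    harmonic_amps: {harmonic number: amplitude}, e.g. {1: 1.0, 2: 0.05}.
    """
    theta = np.linspace(0.0, np.pi, size)
    x = np.cos(theta)              # input values, running from +1 to -1
    y = np.zeros(size)
    for n, a in harmonic_amps.items():
        y += a * np.cos(n * theta)  # equals a * T_n(x) on this grid
    return x, y

def shape(signal, x, y):
    """Apply the table by linear interpolation (xp must be increasing,
    so the cosine-ordered table is reversed)."""
    return np.interp(signal, x[::-1], y[::-1])

# Monotonically decaying even and odd harmonics, Hiraga-style
x, y = make_transfer_table({1: 1.0, 2: 0.03, 3: 0.01, 4: 0.003})
t = np.arange(4096) / 4096
distorted = shape(np.cos(2 * np.pi * 8 * t), x, y)
```

An FFT of `distorted` confirms the method: the second, third, and fourth harmonics appear at precisely the amplitudes written into the table, and nothing else.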

Footnote 1: "Specifications for Testing and Expressing Overall Performance of Radio Broadcast Receivers, Part Two: Acoustic Tests," Radio Manufacturers Association, December 1937.