October 16, 2011

INTRO: The Behringer UCA202 was the first USB DAC I reviewed on this blog back in February. It sells for around $30 and has been around for many years. I’ve revised several of my tests and I was curious to see how it compares, apples-to-apples, with more modern inexpensive USB DACs like the Turtle Beach Micro II. I only tested the line outputs in this review as the original review established the headphone output has some serious problems—most notably a very high output impedance.

BEHRINGER UCA202: The UCA202 is larger than most newer inexpensive USB DACs (many of which are about the size of a USB thumb drive). The UCA202 is about the size of a deck of cards and has a fairly long (1 meter) attached USB cable. Like the Micro II it has an optical digital TOSLINK output. Unlike most newer inexpensive USB DACs, it also has a volume control but it only affects the headphone output. It has RCA line outputs and inputs.

WINDOWS INSTALLATION: The UCA202 installed smoothly in both XP and Windows 7 without needing any drivers. Windows reported it as a “USB Audio CODEC”. Three sample rates are available (32, 44, and 48 Khz), all at 16 bits, as shown to the right in Windows 7.

SUBJECTIVE SOUND QUALITY: Running the UCA202’s line outputs into my O2 headphone amp, the sound quality was good with no obvious problems. There was some hiss audible at high gain settings on the amp, but at realistic volume/gain settings it was reasonably quiet. Someday I plan to do an ABX blind comparison between the UCA202 and other DACs using the O2 amp.

MEASUREMENT SUMMARY: The overall results are substantially better than the Micro II and mostly quite good for a $29 DAC. The UCA202’s weak areas are noise and low level linearity (and the poor headphone output). To save space and leave room for more columns in future reviews, I’ve replaced my previous “Excellent”, “Very Good”, etc. ratings with a letter grade from A to F where A is excellent and F is Fail (unacceptable).

BOTTOM LINE: So far the UCA202 is still the best low cost USB DAC I’ve tested if you don’t use it to drive headphones. I’ll be testing other low cost DACs in the coming weeks to see how they compare. The UCA202 isn’t as small as some newer USB DACs but it’s still relatively portable.

TECH SECTION

TECH INFO: The UCA202 uses the ubiquitous TI PCM2902 integrated USB DAC chip. It’s an old design, but as you’ll see, it easily outperforms the much newer C-Media chip in the similarly priced Turtle Beach Micro II when using the line output. In this case, newer isn't better.

LINE OUT ONLY: I only tested the line outputs as the previous UCA202 review established the 47 ohm headphone output impedance renders it a poor choice for most headphones. Unlike other headphone DACs I’ve tested, you won’t see headphone loads being used in the tests below. All tests were run with a 10K load unless otherwise specified. Please see the original UCA202 review for tests of the headphone output.

FREQUENCY RESPONSE: The frequency response with a 10K load (such as a headphone amp) via the line outputs was substantially better than the TB Micro II. It’s within 0.4 dB from 10 hz to 20 Khz. There are no significant issues here:

THD+N vs OUTPUT: This test starts at 10 mV where noise dominates the measurement. Despite having higher noise than the Micro II, the UCA202’s distortion drops much more quickly, implying it has much less quantization distortion at low levels. Because the UCA202 is likely to be used at line level with an external amp, the distortion will be around 0.01% or even less at typical levels. It has slightly less maximum output (about 1.1 Vrms) than the Micro II but much lower overall distortion. The Redbook standard for digital home gear is 2 Vrms, but USB powered DACs usually produce around 1 – 1.5 Vrms:

THD+N 100 hz 0 dBFS: With the PC volume at maximum, and a 0 dBFS input, the UCA202 produces about 1.1 Vrms at very low distortion. This is excellent performance with all harmonics below the magic –80 dB threshold. Way out of band you can see a spike at the 44 Khz sampling frequency which is fairly normal—especially in low cost DACs. The odd bump in the noise floor above 20 Khz is caused by intentional noise shaping in the PCM2902 DAC, a design technique that lowers the noise within the audio band:
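To illustrate what noise shaping does, here’s a minimal first-order error-feedback quantizer sketch in Python. It’s illustrative only; the PCM2902’s actual delta-sigma modulator is higher order:

```python
# Minimal first-order noise shaper: the quantization error from each sample
# is subtracted from the next one, pushing the error spectrum toward high
# frequencies (the out-of-band noise bump) and away from the audio band.
def noise_shape(samples, step=0.1):
    out, err = [], 0.0
    for s in samples:
        v = s - err                  # feed back the previous quantization error
        q = round(v / step) * step   # coarse quantizer
        err = q - v                  # error to be pushed into the next sample
        out.append(q)
    return out

# Averaged over time the coarse output still tracks the input closely,
# even though each individual sample is heavily quantized.
shaped = noise_shape([0.05] * 100)
```

The error isn’t eliminated, just moved to frequencies we can’t hear, which is why the spectrum looks clean below 20 Khz and rises above it.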

THD+N vs FREQUENCY: Here’s the THD+Noise plotted from 20 hz to 20 Khz into 10K (blue) at 775 mV (0 dBu). The Micro II is shown, for comparison, in yellow. The input is –3 dBFS to prevent any digital overload of the DAC. The UCA202 does much better here staying below the ideal 0.01% threshold until about 11 Khz. This is very good performance for a wideband test:

SMPTE IMD: This is an excellent result with essentially no IMD products and everything well below –80 dB. In comparison, the Micro II had 170 times more distortion on this test:

CCIF IMD 44 Khz: The UCA202 does extremely well here for an inexpensive DAC with more than 50 times less distortion than the Micro II at the same level into the same 10K load. More important, all distortion products in the audio band are well below –80 dB. The two spikes around 25 Khz are a little alarming but only your dog might hear them. See the Micro II review CCIF section for both its very poor performance on this test and the DAC1’s reference plot. This is excellent for any reasonably priced DAC running at 16/44:

CCIF IMD 48 Khz: The UCA202 also does well, although slightly worse, at 48 Khz:

NOISE & LINEARITY: The UCA202 is only average for noise and worse than average for linearity. A –90 dBFS signal is reproduced at –93.8 dB for an error of 3.8 dB. There’s less than 1 dB of error at –80 dBFS so the problem is limited to extremely low levels. Noise referenced to my old 400 mV dBr reference would be 83.1 dB, and against full output it’s 91.8 dB, both A-weighted. The good news is the noise will be upstream of the volume control if an external amp like the O2 is used and the PC volume is set to maximum. That means you’ll get around 90 dB of real world S/N ratio which is sufficiently quiet. But if you plan to use the PC’s volume control, the noise and/or linearity might be an issue in some circumstances. The spikes at 2, 3, 4 and 5 Khz are likely quantization distortion of the 1 khz low level signal and are relatively typical. I don’t know what’s responsible for the spike at 30 hz, but it’s well below audibility.
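A quick sketch of the arithmetic behind those numbers (Python; the values come from the measurements above, the variable names are mine):

```python
import math

# Low-level linearity: a -90 dBFS tone should come back at exactly -90 dB.
expected_db = -90.0
measured_db = -93.8
linearity_error_db = expected_db - measured_db   # 3.8 dB error

# The two noise references differ by a fixed ratio: full output (~1.1 Vrms)
# versus the old 400 mV reference.
offset_db = 20 * math.log10(1.1 / 0.4)           # ~8.7 dB
# Which is why 91.8 dB (re full output) and 83.1 dB (re 400 mV)
# describe the same measured noise floor.
```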

JITTER: Here’s the spectrum from the dScope’s J-Test for jitter. The two things to look for are the number and level of symmetrical sidebands and the “spread” at the base of the 11025 hz signal. Compare the result below to the Micro II’s Jitter to see how much better the UCA202 does. This is a very respectable result for an inexpensive USB DAC and the frequency accuracy (clock accuracy) is very good as shown by the frequency reading on the left:

TECH COMMENTS: Used to drive an amp, powered speakers, or other source with a line input, the TI PCM2902 based UCA202 blows away the Turtle Beach Micro II based on the C-Media CM102. The only weak areas are the noise and linearity below –80 dBFS. But neither of those is likely to be an issue if you leave the PC volume all the way up and use the volume control on the amp/speakers.

October 12, 2011

INTRO: This is the first in a series of inexpensive portable USB DAC reviews I’ll be publishing in the next week or two. The idea is to test the DACs with a high impedance load (such as the O2 Headphone Amp) and a typical headphone load. The DACs are all small, easily portable, and USB powered. The $25 Turtle Beach Audio Advantage Micro II is just such a DAC. According to Turtle Beach, it’s supposed to provide “higher quality sound” compared to internal PC audio. I also have updated how I test and present DAC results.

TURTLE BEACH MICRO II: The Micro II is a small “dongle” with an attached 2 inch “pigtail” USB cable. It has a single 3.5mm jack which serves as line out, headphone out, and an optical digital out (using a supplied 3.5mm-to-Toslink adapter). There’s no volume control, other controls, or inputs of any kind--just a blue LED.

WINDOWS INSTALLATION: The Micro II installed smoothly in both XP and Windows 7 without needing any drivers. Windows reported it as a “USB Sound Device”. The only sample rates and bit depths available are 16/44 and 16/48 as shown to the right in Windows 7.

SUBJECTIVE SOUND QUALITY: There was moderate hiss with my Ultimate Ears IEMs but the Micro II was fairly quiet with less sensitive headphones. The sound quality, however, was seriously odd. Playing familiar well recorded audiophile tracks the Micro II made them sound shrill, glaring, and harsh regardless of what headphones I used. I was really curious to measure the Micro II and find out why it sounded so obviously bad.

POOR DRIVER DESIGN (updated): I first checked the Micro II’s frequency response and it was reasonably flat out to 15 Khz so the poor sound was still a mystery. Then I checked the 1 Khz THD and while it wasn’t great it also wasn’t bad enough to explain the poor sound quality. When I dropped the level to see if the distortion would drop, I found the problem. The Micro II was displaying a horrible linearity problem. Dropping the input from 0 dBFS to –20 dBFS should drop the output by 20 dB as well. But it only drops 8 dB! That’s a massive 12 dB error. The net effect is the Micro II was heavily compressing music—making softer sounds much louder than they should be.
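The error above is easy to express numerically; here’s a small sketch using the levels measured above:

```python
# A -20 dBFS input should play back 20 dB below full scale, but the Micro II
# plays it at only -8 dB, i.e. the DRC silently adds 12 dB of gain to quiet
# passages while 0 dBFS stays at 0 dB.
input_dbfs = -20.0
measured_output_db = -8.0
gain_error_db = measured_output_db - input_dbfs   # 12 dB of unwanted gain

# As a voltage ratio, quiet sounds come out roughly 4x louder than they should:
voltage_ratio = 10 ** (gain_error_db / 20)        # ~3.98
```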

BAD SOUND EXPLAINED: It turns out, as described in the comments to this article, the C-Media CM102 integrated USB sound chip apparently used in the Micro II has a “feature” called Dynamic Range Control (DRC) that defaults to on. Confusingly, there’s an advanced option in the Windows 7 sound options for the Micro II simply labeled “Loudness”. And, worse, it’s enabled by default. You have to uncheck the box to stop the Micro II from heavily compressing anything you play through it.

DRC vs LOUDNESS: The Turtle Beach choice of calling the C-Media’s DRC option “Loudness” is very misleading. In audio, Loudness Compensation involves changing the frequency response at low listening levels to compensate for human hearing. It’s generally based on the Fletcher-Munson Equal Loudness Curves. In this case, however, it has nothing to do with changing the frequency response—only the overall dynamic range. I don’t know what Turtle Beach’s driver calls this feature as it’s an unsigned driver so I didn’t install it on my test bench PC that runs the dScope software. The whole idea is to not need proprietary drivers.

WHAT WERE THEY THINKING (updated)? The “loudness wars” are already out of control without any further help from Turtle Beach. A lot of pop music has a peak to average volume difference of only around 8 – 12 dB as the labels keep compressing music ever further in an effort to have it stand out as being louder. The last thing pop music needs is another 12 dB of compression, but that’s exactly what you get, by default, with the Micro II. For anyone unaware of the option, or who knowingly leaves it on, the Micro II is likely to sound significantly worse than the internal audio of just about any computer it’s plugged into.

MEASUREMENT SUMMARY: The overall results, even with DRC disabled, are not terribly impressive. The high frequency distortion, in particular, is poor. Here are the results compared to the more expensive FiiO E7 (some tests were run slightly differently but I’ve tried to adjust for that in the E7 numbers):

Measurement                             TB Micro II           FiiO E7
Frequency Response 20hz-15Khz 33 ohms   +/- 1.8 dB Fair       +/- 0.1 dB Excellent
THD 0 dBFS USB 10K                      0.20% Fair            0.05% Good
THD 1 Khz 10K Ohms -3 dBFS              0.022% Good           0.03% Good
THD 1 Khz 33 Ohms -3 dBFS               0.12% Fair            0.03% Good
IMD CCIF USB                            0.28% Poor            0.03% Good
IMD SMPTE                               0.03% Fair            0.008% Excellent
Noise A-Weighted                        -93.8 dBu Fair        -96.7 dBu Good
Max Output 33 Ohms Vrms/mW              1.26v 52mW Good       57 mW Good
Max Output 10K Ohms Vrms                1.34v Good            1.4v Good
Output Impedance 100hz                  0.95 Ohms             0.13 Ohms Excellent
Jitter USB 16/44 Jtest                  Fair                  Very Good

BOTTOM LINE: In my opinion the “Loudness” feature enabled by default is an epic fail. Someone either got sloppy or they have very odd priorities for a “high quality” USB DAC. Putting that aside, the rest of the performance of the Micro II still isn’t very impressive. The next several reviews of low priced USB DACs will help put the Micro II’s performance in perspective.

TECH SECTION

FREQUENCY RESPONSE: The frequency response with a 10K load (such as a headphone amp) at 16/44 is acceptable but not great. There’s a fraction of a dB of variation below about 30 hz and it’s down –1 dB at about 15 Khz. The steep roll off above 12 Khz is typical of a cheap DAC running at 44 Khz and is due to cost savings in the digital and analog filters. The slight peak around 8 Khz is also disturbing as it indicates poor DAC filtering and/or potential instability in the headphone amp. Into 33 ohms you can see a slight drop due to the output impedance and a low frequency roll off of about –1.7 dB at 20 hz. That’s borderline audible. Into 16 ohms it would be even worse and more likely to be audible. This indicates less than ideal capacitor coupling in the output:

THD+N vs OUTPUT: This test starts at 10 mV and the rise below 250 mV is more likely due to quantization error than noise. The unweighted noise should be below 0.004% but the distortion is more than ten times higher. The lower blue plot is into 10K and the distortion is around 0.025% which is under the worst case guideline of 0.05%. Into 33 ohms, however, it’s over 0.1% above about 700 mV which could be audible under some circumstances. In both cases, the maximum output level is around 1.3 Vrms. This works out to 52 mW into 32 ohms, 104 mW into 16 ohms and only 6 mW into 300 ohms:
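The power figures follow directly from P = V²/R. A quick sketch (assuming the roughly 1.3 Vrms maximum quoted above, so the results differ slightly from the rounded numbers in the text):

```python
def power_mw(vrms, load_ohms):
    """Power into a resistive load in milliwatts: P = V^2 / R."""
    return vrms ** 2 / load_ohms * 1000

max_vrms = 1.3
p32 = power_mw(max_vrms, 32)    # ~53 mW
p16 = power_mw(max_vrms, 16)    # ~106 mW
p300 = power_mw(max_vrms, 300)  # ~6 mW
```

Note how quickly the available power falls with high impedance headphones; this is why USB powered DACs often struggle with 300 ohm loads.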

THD+N 100 hz 0 dBFS & OUTPUT IMPEDANCE: I now run this test at 100 hz as that’s where output impedance is usually most critical due to the resonance frequency of many headphones. A low 100 hz output impedance keeps the frequency response accurate and provides electrical damping of the driver which can improve the quality of the bass performance. The test is run at 0 dBFS input to reveal any digital overload problems such as the NuForce uDAC-2 exhibits. The Micro II does not reach clipping even at full volume into 15 ohms. The distortion here is mostly in the DAC itself and remains similar into 100K even at lower volume settings. The resulting output voltages at 100K and 15 ohms are used to calculate the output impedance. The Micro II’s distortion is relatively poor at 0 dBFS. It hit 0.14% into 100K and 0.21% into 15 ohms. The output impedance was 0.95 ohms which is acceptably low and it’s slightly lower at 1 khz due to less impact by the output capacitors:
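For reference, the output impedance calculation is just a voltage divider solved for the source impedance. A sketch with hypothetical voltages (treating the 100K reading as effectively unloaded):

```python
def output_impedance(v_unloaded, v_loaded, r_load):
    """Solve the divider v_loaded = v_unloaded * r_load / (r_load + z_out)
    for the source impedance z_out."""
    return r_load * (v_unloaded - v_loaded) / v_loaded

# Hypothetical example: a 0.95 ohm source driving 15 ohms drops the
# output by about 6% relative to the unloaded voltage.
z = output_impedance(1.0, 15 / 15.95, 15)   # ~0.95 ohms
```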

THD+N vs FREQUENCY: Here’s the THD+Noise plotted from 20 hz to 20 Khz into 10K (yellow) and 33 ohms (blue) at 775 mV (0 dBu). The input is –3 dBFS to prevent any digital overload of the DAC. The increase in low frequency distortion into 33 Ohms is another sign of a capacitor coupled output. The rise from 0.05% to 0.15% is likely the output capacitor’s non-linearity. The drop above about 6 Khz is related to the bandwidth limit of 22 Khz as the harmonics move past the audible band. The sharp rise again above 10 Khz is due to very poor high frequency performance in the DAC (and/or filter) despite the fact the harmonics are cut off:

SMPTE IMD: The result here is marginal but acceptable at this price. Ideally all distortion products should be below –80 dB but that’s not the case. The spread (or “mountain”) at the base of the 7 Khz signal is another bad sign. This test is run below the DAC’s digital limit and also well below the maximum output (at 0 dBu) into 33 ohms. It’s somewhat better into 10K but not a lot:

CCIF IMD BENCHMARK DAC1 PRE: This is a more challenging test, and again, the goal is to have everything except the 19 and 20 Khz signals below –80 dB. To show how it can look, here’s the result with the Benchmark DAC1 Pre:

CCIF IMD MICRO II 44 Khz 33 Ohms: By comparison, here’s the same test as above from the Micro II. There’s an entire “forest” of distortion products above –80 dB with the 1 khz difference signal at –52 dB which is very likely audible. The digital/analog filters in the Micro II are in real trouble here as might be the DAC itself. The two spikes around 15 Khz exceed –40 dB and may also be audible. This is admittedly a tough test for a cheap DAC running at 16/44 but this is still a much worse than average result made worse by the headphone amp struggling with a 33 ohm load:

CCIF IMD MICRO II 44 Khz 10K: Removing the load results in the 1 Khz difference signal improving significantly from –52 dB to about –72 dB dropping the reading by a factor of ten. But note there are still a lot of spikes above –80 dB and even above –60 dB within the audio band. Worst of all, the spikes at 15 Khz are still crossing –40 dB:

CCIF IMD MICRO II 48 Khz 10K: DACs will typically do better on this test running at 48 Khz but it depends on their filtering and design. In this case, things get significantly better but are still not great with the 10/11 Khz signals still around –40 dB and several other spikes still above –80 dB. If your operating system lets you run a DAC at 48 Khz, 99% of digital music will be re-sampled from 44 Khz up to 48 Khz by the operating system with mixed results. In this case, it’s hard to say which would yield the better result. In XP it’s not an option as the DAC is forced into 44 or 48 Khz depending on the source sampling rate:

NOISE & LINEARITY: I’ve changed this test slightly to use units of dBu rather than my previous dBr referenced to 400mV. 0 dBu is 775 mV. The Micro II’s –93.8 dBu A-Weighted noise referenced to 400 mV would be 88 dBr (it’s always a difference of 5.7 dB). That’s decent noise performance for a USB powered DAC but falls well short of what’s required for reasonable silence with the most sensitive IEMs. The goal is –103 dBu. I’m also now showing the absolute (unweighted) noise in microvolts. The linearity was fairly good with an error of only 0.8 dB at –90 dBFS:
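The dBu conversion is straightforward; a short sketch:

```python
import math

def vrms_to_dbu(vrms):
    """dBu is dB relative to 0.775 Vrms (0 dBu)."""
    return 20 * math.log10(vrms / 0.775)

# The fixed offset between the old 400 mV reference and dBu:
offset = vrms_to_dbu(0.4)   # ~-5.7 dB, the "always a difference of 5.7 dB"
# Hence -93.8 dBu A-weighted works out to about 88 dB below 400 mV.
```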

-20 dBFS LINEARITY WITH “DRC”: With the “Loudness” option enabled (which is on by default) here’s what happens with a –20 dBFS 1 Khz signal. It’s played back at –8 dB instead of –20 dB. The Micro II is raising the level by a whopping 12 dB as part of its “Dynamic Range Control” feature. This is why, by default, it sounds relatively awful:

JITTER: Here’s the spectrum from the dScope’s J-Test for jitter. The side bands are average at –110 dB but the “spread” of the signal is relatively poor indicating significant low frequency jitter. The frequency accuracy (clock accuracy) is very good as shown by the frequency reading on the left:

RMAA RESULTS: Out of curiosity I tested the Micro II with RMAA with the DRC/Loudness option enabled (10K load). While the frequency response was very similar, the THD spectrum showed some significant differences although the 2nd and 3rd harmonics were similar. The IMD was even worse than the dScope measured and could have been a clipping/level problem. The noise measured a relatively poor –77.5 dB for which I have no explanation. It presumably limited the dynamic range to a similar value. Interestingly, there’s no solid indication of the DRC compression. RMAA missed the huge linearity problem. The results are shown below along with the sound hardware by itself (2nd column) in loopback. For more, see my RMAA article:

TECH COMMENTS: The good news is the output impedance is below 2 ohms, the noise/linearity is decent, and the frequency response and midrange distortion are semi-acceptable into a 10K load like a headphone amp. Into 32 ohm headphones, however, the distortion rises to unacceptable levels and the high frequency distortion into any load is even worse.

October 5, 2011

ARTICLE UPDATE: I’ve been publishing roughly one article a week but not always on the usual Wednesdays. Lately I’ve published and updated reference material for a guest article Tyll Hertsens asked me to write for InnerFidelity:

O2 UPDATE: There are a half dozen or so group buys around the world for the O2 in various flavors and forms. Much to my surprise, somewhere around 1000 circuit boards have been ordered, making the O2 one of the most popular DIY headphone projects anyone can remember. That number is even more amazing when you consider most made their purchasing decision when there were only a couple O2 prototype amps in circulation. The boards are now in production and there should be lots of people building the O2 in the next few weeks.

CHALLENGE UPDATE: I put out an open challenge to compare the O2 on a test bench or in blind listening to much more expensive headphone amps. I also threw out a challenge to hear the difference between op amps. It’s been many months and not a single person has come forward. Where are all those who swear they hear obvious differences between gear or have been critical of the O2?

FUTURE ARTICLE POSSIBILITIES: There are only 11 more weeks left in the year and here are some possible articles (no promises) to wrap up 2011:

Balanced Audio – The main advantages and disadvantages of balanced audio.

O2 vs DIY (Pimeta/JDS/AMB/Gilmore/?) – How does the O2/ODA stack up to other popular DIY amps besides the Mini3?

REVIEW DECISIONS: I get near daily requests to review some piece of gear someone already owns or is contemplating buying. Many are fairly obscure but there has been some consensus. Various inexpensive small USB DACs are the number one request. And the FiiO E6, E10, E11, and new Sansa Clip Zip are also frequently requested. The E10 isn’t even widely available yet so it might not make it this year, but hopefully I can get the rest reviewed.

TECHNICAL TOPICS: There’s been a clear consensus for an article on balanced headphone gear but I’m open to suggestions for other technical topics. Are there suggestions for other technical articles with broad appeal I’ve not covered yet?

October 1, 2011

MEASUREMENTS & AUDIOPHILES: One of the goals behind this blog is to explore some of the more popular audiophile beliefs. Which ones are true, partly true, or completely false? When it comes to measuring audio gear there are many different beliefs but I often run into variations of these three (photo: Leon Wilson):

Measurements use test signals, not music, so they’re of limited use

Measurements fail to account for real world usage and loads

Measurements matter little as you can only trust your ears

MUSIC vs TEST SIGNALS: Intuitively music is far more complex than test signals. So it’s not surprising many believe such measurements cannot accurately convey the performance of audio gear. But there’s lots of well documented research demonstrating the right measurements using test signals can help predict the sound quality of a lot of audio gear. Some things to consider (photo: Inha Leex Hale):

Sine Waves - Sine waves are not some abstract signal created in a lab. They’re the primary building block of all sounds we hear. Analogies would be a single color of light or a pure chemical element from the periodic table. All the colors we perceive are combinations of individual wavelengths of light. And everything we experience in the physical world is made up of elements from the periodic table. And, in much the same way, music is just a collection of sine waves. A perfect sine wave is a single pure tone and has no distortion of its own. It's the purest component of sound.
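That “collection of sine waves” idea is literal: summing a fundamental with a few weaker harmonics already starts to resemble an instrument’s tone. A minimal sketch (the harmonic amplitudes here are made up purely for illustration):

```python
import math

def tone(freq_hz, t, harmonics):
    """One sample of a 'musical' tone at time t: a fundamental sine plus
    overtones. harmonics is a list of (multiple, amplitude) pairs."""
    return sum(a * math.sin(2 * math.pi * freq_hz * n * t) for n, a in harmonics)

# A crude piano-like A440: fundamental plus a few decaying overtones.
sample = tone(440.0, 0.001, [(1, 1.0), (2, 0.5), (3, 0.25), (4, 0.125)])
```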

Steinways & Yamahas - The note "A" above "Middle C" on a piano strongly resembles a 440 hz sine wave. It's a relatively pure tone at a single frequency. The wood structure, nature of the strings, hammers, etc. all slightly alter that 440 hz sine wave. A Steinway grand might have a slightly faster attack, a longer decay and a different set of distortion products than a Yamaha grand. These subtle properties are well enough understood it’s possible to simulate the sound of different pianos using software (photo: Mrs Logic).

Amplifier Distortion – Just as the sound of a particular grand piano can be simulated by understanding its distortion, the same can be done with amplifiers. In the mid 80’s Bob Carver challenged high-end audio magazines and ended up duplicating the sound of a seriously expensive Conrad Johnson tube amp using one of his inexpensive mainstream solid state amps. He did this by simply measuring the tube amp using test signals. The golden-eared editors had a very difficult time telling which amplifier was which despite the massive price difference. There are many other examples that demonstrate the power of proper measurements. If Bob Carver can “describe” and essentially equal the sound of a high-end audiophile amp using measurements that says a lot.

YACA (Yet Another Car Analogy) – Much as music is more complex than test signals, public roads are more complex than a closed circuit on a racetrack. But when someone wants to figure out if Car A outperforms Car B they take them to a track where they can be evaluated under identical controlled conditions. There are too many uncontrolled variables in real world driving such as other traffic. Few dispute a racetrack is the best overall way to evaluate acceleration, braking and handling limits of cars. A test bench offers the same for audio gear. A fair and valid performance comparison is only possible under rigidly controlled conditions—not casual listening or driving.

Correlation – Lots of studies compare measurements made with sine waves to perceived distortion when listening to music. And, for decades, the research has supported a strong correlation between the measurements and what we hear. It’s not black and white when you’re dealing with human perceptions, but virtually all of the research has pointed towards various measurement thresholds that help define what people can perceive under various conditions.

Sufficiently Transparent – When there are audible differences between audio gear it’s sometimes difficult to definitively pick a clear winner as that’s subjective and personal preferences will bias the result. This is especially true with things like headphones, speakers, and phono cartridges. But with electronics like amplifiers, DACs, pre-amps, etc. measurements can help a lot. It’s been shown once all the right measurement thresholds are met, the equipment in question becomes essentially “transparent” in the signal chain—i.e. it doesn’t alter the audio signal in an audible way. For example, Meyer & Moran demonstrated you can insert an A/D and D/A operating at 16 bits and 44 Khz into a high resolution SACD signal path and even skilled listeners could not tell when the extra hardware was present. The A/D and D/A were sufficiently transparent and did not alter the sound enough for anyone to detect.

Myth Busted – Sine waves are not some artificial signal with no basis in reality. They’re a well proven method to reveal distortion in audio gear and they’re a building block of all sound we hear including music. And audio engineers have more tricks up their sleeves than just sine waves.

REAL WORLD USAGE & LOADS: Some claim measurements fail to account for real world conditions. But that’s all in how the tests are done. Some tests account for a much wider variety of conditions than typical listening tests:

Test Loads – Proper tests are done using proper loads. And it’s not uncommon to run at least some tests with reactive loads or even real loads. It’s not difficult to model a loudspeaker or headphone driver on a test bench. And it’s easy to compare the results with various kinds of loads including real ones. Hence there’s a pretty good understanding and body of evidence as to how simulated loads affect the performance of audio sources.

Worst Case Testing – It’s not that difficult to come up with worst case operating conditions that represent real world usage. I did just that in developing the criteria for the O2 headphone amp. Once you establish the worst case criteria, tests can be run to verify the performance under those conditions. If the gear measures well enough to be transparent even under worst case test conditions, it’s a safe bet it will also be transparent under realistic conditions in the real world.

Audio Differencing – It’s possible to test many kinds of audio electronics using a method known as analog or digital audio differencing. These tests can be done using real music and real loads (i.e. headphones). In essence, the input and output are matched in level and subtracted from each other. This method was originally put forward by Baxandall and Hafler as a method for evaluating power amplifiers under real world conditions. Differencing captures nearly all forms of distortion and can be quantified objectively by measuring its level and spectral properties. It can also be quantified subjectively by listening to the nature of the difference signal and how unpleasant it is.
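A minimal sketch of the differencing idea, assuming two equal-length lists of samples (a real null test would also need precise time alignment, which is omitted here):

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def difference_signal(inp, out):
    """Level-match the device output to the input, then subtract.
    A transparent device leaves (near) silence in the residual."""
    gain = rms(inp) / rms(out)
    return [i - gain * o for i, o in zip(inp, out)]

# A device that only attenuates the signal nulls completely:
inp = [math.sin(2 * math.pi * 10 * n / 1000) for n in range(1000)]
out = [0.5 * s for s in inp]                 # pretend 6 dB of flat attenuation
residual = difference_signal(inp, out)       # essentially all zeros
```

Anything left in the residual (distortion, noise, frequency response errors) is exactly what the device added or removed, and it can be measured or listened to directly.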

Myth Busted – If anything it’s more common to fully challenge audio gear on a test bench than in listening tests. Just like testing a car on a track, it’s generally best to find the ultimate limits on a test bench rather than in real world usage.

TRUSTING YOUR EARS: Audiophiles put a lot of trust in their ears and subjective impressions certainly matter. But there's a problem. Human senses, including hearing, are greatly influenced by other factors (photo: Travis Isaacs).

Primal Brain - A primal and involuntary part of our brain constantly filters our senses to avoid sensory overload. For example, your brain automatically filters out other conversations at a noisy party so you can better hear the person you’re talking to. Check out how the brain involuntarily filters what you hear with this brief BBC video demonstrating the fascinating McGurk Effect.

Seeing & Hearing – In the video above, if you close your eyes your hearing is accurate but with your eyes open, your brain deceives you. It turns out when listening to audio gear you have to go one step further and remove the knowledge of what gear you’re listening to. Otherwise much the same thing happens—the brain tries to help out and serves up an altered version of what you’re listening to. The objective geeks call this “sighted listening bias” and it’s been well proven in many studies.

Bed Sheets & Hearing – The simple act of throwing a bed sheet over an equipment rack can make allegedly obvious differences in sound quality disappear. The listener’s abilities, room, music, and the hardware remain unchanged, yet just removing the knowledge of which equipment is playing removes the previously audible differences. This has been proven again and again, even in the homes of audiophiles, by listening tests such as this one: Matrix Audio Test

Conditions Are Different – Just because a headphone amp sounds good with 300 ohm Sennheisers doesn’t mean it will sound equally good with 25 ohm Denon headphones. And gear that might sound great with classical music may fall on its face with the next guy’s hip hop. So listening tests are often only valid for a particular set of audio preferences, conditions, type of music, volume, etc. And all those things differ widely from one person to the next. I’m not saying these tests are useless, but they’re highly subjective and very difficult to compare between people and conditions.

Ears Are Different – If Michael Fremer at Stereophile listens to a piece of gear and declares it best-in-class what does that really mean? It’s much like a wine critic saying the same thing about a particular wine. But the next wine critic will often choose a different wine as he has different tastes. The same is true in audio and it’s a fundamental problem with subjective listening. Everyone’s tastes, preferences, priorities, hearing acuity, listening skills, etc. are different. One man’s “detailed” is another man’s “excessively bright”. So it’s difficult to trust someone else’s ears. And there are not many stores where you can walk in and audition high-end headphones to hear them with your own ears before purchasing. Measurements greatly supplement these subjective reviews and provide a much better means of comparison.

No Contest – With other controversial topics, say global warming, there’s typically conflicting research. But, in this case, there’s lots of research supporting the problems with sighted listening and essentially nothing credible opposing any of it. If there’s some problem with blind listening tests, why hasn’t the well funded high-end audio industry managed a single study supporting the supposed advantages of their products and/or sighted listening?

Myth Busted – The points above explain why it’s difficult to trust your own ears and trusting someone else’s ears is much like trusting a wine critic. Hence the classic “you can only trust your ears” belief is highly suspect. Objective measurements, however, generally can be trusted as they’re immune to all the issues above.

OTHER RESOURCES: If you’re still not convinced the three popular beliefs are more myth than reality, or if you want more information, the following links are worth a look:

Audio Myths Workshop Video – This is a fascinating video covering everything from human psychology to assorted listening trials. One of the presenters is Ethan Winer who has also made available audio files to allow your own comparisons.

Science and Subjectivism in Audio – Douglas Self shares his views on this debate in an older but still pertinent article. He’s the engineer behind some high-end gear such as the current flagship Cambridge Audio products. He’s published some of the most highly regarded books on audio design in the world.

Subjective vs Objective Debate – This is my own article covering more of the philosophical differences between the “trust your ears” and “measurements are best” camps. The comments are also enlightening.

Testing Methods – I share some of my thoughts on how I test audio gear and why. Tests should be done in standard ways to allow fair comparisons between gear and they should be verifiable by others.

BOTTOM LINE: If you want to know the ultimate performance limits of a car you take it to a test track or race course. And if you want to know the ultimate performance of audio gear, you put it on a test bench and use an audio analyzer to make appropriate measurements. Subjective impressions are still important in both cases. For example, the numbers tell you nothing about how easy the controls are to use. But when it comes to determining if the BMW or Mercedes is the higher performance car, measurements offer the best answer. The same is true when comparing audio gear.