Posted
by
CowboyNeal
on Thursday May 31, 2007 @07:22PM
from the perfect-pitches dept.

notthatwillsmith writes "Maximum PC did double-blind testing with ten listeners in order to determine whether or not normal people could discern the quality difference between the new 256kbps iTunes Plus files and the old, DRM-laden 128kbps tracks. But wait, there's more! To add an extra twist, they also tested Apple's default iPod earbuds vs. an expensive pair of Shure buds to see how much of an impact earbud quality had on the detection rate."

Apple's iTunes store--in partnership with EMI--is now hawking DRM-free music at twice the bit rate of its standard fare (256Kb/s vs. 128Kb/s) and charging a $0.30-per-track premium for it. We're all for DRM-free music, but 256Kb/s still seems like a pretty low bit rate--especially when you're using a lossy codec.

So we decided to test a random sample of our colleagues to see if they could detect any audible difference between a song ripped from a CD and encoded in Apple's lossy AAC format at 128Kb/s, and the same song ripped and encoded in lossy AAC at 256Kb/s.

Our 10 test subjects range in age from 23 to 56. Seven of the 10 are male. Eight are editors by trade; two are art directors. Four participants have musical backgrounds (defined as having played an instrument and/or sung in a band). We asked each participant to provide us with a CD containing a track they were intimately familiar with. We used iTunes to rip the tracks and copied them to a fifth-generation 30GB iPod. We were hoping participants would choose a diverse collection of music, and they did: Classical, jazz, electronica, alternative, straight-ahead rock, and pop were all represented; in fact country was the only style not in the mix. (See the chart at the end of the story for details.)

We hypothesized that no one would be able to discern the difference using the inexpensive earbuds (MSRP: $29) that Apple provides with its product, so we also acquired a set of high-end Shure SE420 earphones (MSRP: $400). We were confident that the better phones would make the task much easier, since they would reveal more flaws in the songs encoded at lower bit rates.

METHODOLOGY

We asked each participant to listen with the Apple buds first and to choose between Track A, Track B, or to express no preference. We then tested using the SE420's and asked the participant to choose between Track C, Track D, or to express no preference. The tests were administered double-blind, meaning that neither the test subject nor the person conducting the test knew which tracks were encoded at which bit rates.

The biggest surprise of the test actually disproved our hypothesis: Eight of the 10 participants expressed a preference for the higher-bit rate songs while listening with the Apple buds, compared to only six who picked the higher-quality track while listening to the Shures. Several of the test subjects went so far as to tell us they felt more confident expressing a preference while listening to the Apple buds. We theorize that the Apple buds were less capable of reproducing high frequencies and that this weakness amplified the listeners' perception of aliasing in the compressed audio signal. But that's just a theory.

LEAVE IT TO THE OLD FOGEYS

Age also factored differently than we expected. Our hearing tends to deteriorate as we get older, but all three of our subjects who are over 40 years old (and the oldest listener in the next-oldest bracket) correctly identified the higher bit-rate tracks using both the Apple and the Shure earphones. Three of the four subjects aged between 31 and 40 correctly identified the higher bit-rate tracks with the Apple earbuds, but only two were successful with the Shures. Two of three under-30 subjects picked the higher-quality tracks with the Apples, but only one of them made the right choice with the Shures. All four musicians picked the higher-quality track while listening to the Apples, and three of the four were correct with the Shures.

Despite being less able to detect the bit rate of the songs while listening to the Shure SE420 earphones, eight of 10 subjects expressed a preference for them over the Apple buds. Several people commented on the Shures' ability to block extraneous noise. While listening to the SE420s, one person remarked "Wow, I'd forgotten that wood-block sound was even in this song." Another said "The difference between the Shure earphones and the Apple earbuds was more significant than the difference between the song encoded at 128Kb/s and the one encoded at 256Kb/s."

It would be crazy to pay that premium if you're going to buy the entire album.

DRM'd and DRM-free albums cost the same. There is no reason to buy the DRM, if you are buying a whole album.

In the end, Apple's move doesn't change our opinion that the best way to acquire digital music remains buying the CD:

They tested music ripped from CD and encoded by iTunes. That makes this test irrelevant to the music from the iTunes Store, since that music comes from the original masters (higher quality than the CD) and is encoded using customised settings (per-album or per-song), while iTunes uses some fairly general settings.

On my own, completely unscientific, tests, the 256Kb/s tracks are noticeably better. I upgraded a couple of albums yesterday and discovered I could hear the lyrics clearly in a few places where they had been obscured by instrumentals in one of them. The difference is only noticeable if you are specifically listening for it though; I wouldn't be able to tell you the bitrate in a blind listening (hearing them one after the other I probably could).

Having the songs DRM-free is definitely worth it though. I stopped buying music from iTMS when I started owning multiple portable devices that could play back AAC, but not Apple DRM.

They tested music ripped from CD and encoded by iTunes. That makes this test irrelevant to the music from the iTunes Store, since that music comes from the original masters (higher quality than the CD) and is encoded using customised settings (per-album or per-song), while iTunes uses some fairly general settings.

So then, it seems the difference between 128Kb/s and 256Kb/s store tracks would be even more noticeable. But if the research showed that, even using this lower-quality ripped 128Kb/s track, the difference in quality isn't worth an extra 30 cents, then doesn't it still hold true that a higher-quality 128Kb/s track purchased from iTunes would be even closer in quality to the 256Kb/s track, and still not worth the extra 30 cents?

If ripping a CD to iTunes at 128Kb/s creates a lower quality track than purchasing a 128Kb/s track from the iTunes Store, then I think ripping from a CD to iTunes actually adds more weight to the argument that the 256Kb/s tracks are not worth an extra 30 cents.

That makes this test irrelevant to the music from the iTunes Store, since that music comes from the original masters (higher quality than the CD)

Do you have any actual evidence that iTunes tracks are encoded from master tracks that are higher quality than CD (i.e. greater than 44.1kHz/16bit)? I have a hunch they're encoded from the same 44.1kHz/16bit file that you'd get if you ripped the CD yourself... In fact, I know they've done exactly this in at least one case: my own album... but I'm not signed to a major label, so it's possible things are different, but I doubt it...

When Jobs introduced the music store, he stated that this is exactly the case. It's not universal, but for some (many? a few? most? I have no idea) they went back to the original masters and used those for the iTunes music store.

Nearly all music is recorded and processed at 48kHz. The Red Book standard unfortunately went with 44.1 (for some esoteric reason having to do with syncing with an analog video standard or something back in the 80s). So there's at least a down-conversion from 48 to 44.1, which isn't the end of the world, but you lose some fidelity in the process since it's really hard to do that "right" (and it's only been recently that people have stopped using Lagrange interpolation and started using truncated sinc functions or polyphase filters to do a decent job without it taking 50 forevers).
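The 48-to-44.1 conversion has a nice closed form: 44100/48000 reduces to 147/160, which is exactly the up/down factor pair a polyphase resampler would use (SciPy's `signal.resample_poly` takes exactly these two integers). A quick sketch of the arithmetic, in Python:

```python
from math import gcd

fs_in, fs_out = 48000, 44100
g = gcd(fs_in, fs_out)               # 300
up, down = fs_out // g, fs_in // g   # 147 and 160

# A polyphase resampler upsamples by `up`, low-pass filters at the
# tighter of the two Nyquist limits, then keeps every `down`-th sample:
# 48000 * 147 / 160 = 44100 exactly, with no drift.
print(up, down, fs_in * up // down)
```

With a real signal you'd hand those factors to `scipy.signal.resample_poly(x, 147, 160)`; the snippet above just shows where they come from.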

I always assumed that 44.1kHz was chosen because they took the necessary (Nyquist) sample rate to be able to record up to 20kHz (40kHz), and added a bit for good measure. There's always been that rumor that the time length of a CD was chosen to be able to fit Beethoven's Ninth Symphony, so I always figured they knew they wanted 16 bit, and a length of about 74 minutes, and just picked the >40kHz sampling rate that would get them there with that fancy new "CD" technology that was being developed. I'm happy to know that we're all using 44.1kHz for an even stupider reason;-).
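The back-of-the-envelope numbers in the parent are easy to check: 16-bit stereo at 44.1kHz for 74 minutes works out to roughly one CD's worth of data. A quick sanity check in Python:

```python
fs, bits, channels, minutes = 44_100, 16, 2, 74

# samples/sec * bytes/sample * channels * seconds
bytes_total = fs * (bits // 8) * channels * minutes * 60
print(bytes_total)                  # 783,216,000 bytes of audio
print(round(bytes_total / 2**20))   # ~747 MiB, i.e. CD-sized
```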

No, you're quite correct. The gp seems to think that they break out the 1/2" masters to re-encode for iTunes. They don't. Much music from the last 15 years was mastered at 48 or 44.1 digitally, and there isn't even a higher quality master to be had.

I wouldn't call 24bit 48KHz highly superior - just a bit better. In mastering, the 16/24bit question is largely irrelevant unless the source has wide dynamic range (like classical, not pop/rock/dance). 96/192KHz sampling is now common but has not been for the last 15 years. Cheaper recordings in the 90s would master to DAT (48/44.1 @ 16bits); more expensive would be 30ips 1/2" analog (possibly with Dolby SR NR or not). There's an informative piece about bit rates here:

While listening at 24bit 48kHz is certainly just a bit better, *recording* at 24bit is certainly "highly superior" to recording at 16 bit. The larger dynamic range means that one can record at a much lower level into the computer, and not have to worry about clipping on the high end or quantization error/noise on the low end...just as a matter of convenience, 24bit recording is vastly better than 16bit

We're all for DRM-free music, but 256Kb/s still seems like a pretty low bit rate--especially when you're using a lossy codec.

Are they on crack? 256 Kbps is quite a high bitrate for a lossy CODEC. Their wording is also really bizarre. A low bitrate would be worse for a lossless track, because an uncompressed or lossless track, by definition, should have a much higher bitrate than a track compressed with a lossy CODEC.

It seems obvious to me they do NOT know what they were doing. RTFA or not. (Guess which I chose?) 10 subjects is hardly enough to prove ANYTHING, other than that they have no idea how to perform a remotely rigorous scientific analysis.

You can expect 2 idiots, 3 to be biased, 4 to be honest, and 1 to lie.

I think 100 would begin to scratch the surface. I'm not trying to be a snarky science dick, this is self evident. This is epinion.com bullshit.

"When I get around to them, I might rip cassettes at a lower rate (128 or 160) because there's so much missing already compared to the other source formats...or maybe not."

I have The Best Of Charles Mingus on a Compatible Stereo Cassette from the early 80's (ATLANTIC CS 1555) and it sounds amazing. The folks responsible for mastering this cassette saturated every magnetic particle with information, producing one of the best sounding recordings I've ever heard. I recorded it as a 24-bit AIF for archive purposes.

My understanding of a lossless CODEC is that there is a limit to the amount of compression that can be accomplished, because going beyond that limit would require deleting elements of the original file, which would make it lossy.

Lossless codecs aren't lossy codecs that just haven't been cranked down enough. The fundamental difference is that lossy encoding is happy throwing away parts of the input that it thinks you won't miss. But take the example of a sine wave at a constant pitch: it's completely predictable, so a lossless codec can store it very compactly while still reconstructing every single bit.
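You can see this with any general-purpose lossless compressor: a predictable signal shrinks a lot, a noisy one barely at all, and both round-trip bit-exactly. A toy illustration using zlib (the 440 Hz / 8 kHz figures are arbitrary; 440/8000 reduces to 11/200, so the byte pattern repeats exactly every 200 samples):

```python
import math, random, zlib

# Highly predictable signal: one second of an 8-bit 440 Hz sine at 8 kHz.
sine = bytes(int(127 + 120 * math.sin(2 * math.pi * 440 * t / 8000))
             for t in range(8000))
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(8000))  # high entropy

sine_packed = zlib.compress(sine, 9)
noise_packed = zlib.compress(noise, 9)

# Lossless means a bit-exact round trip in both cases...
assert zlib.decompress(sine_packed) == sine
assert zlib.decompress(noise_packed) == noise
# ...but how far each one shrinks is bounded by its redundancy.
print(len(sine), "->", len(sine_packed))    # compresses well
print(len(noise), "->", len(noise_packed))  # barely, if at all
```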

Many similar tests have proven that most humans have trouble detecting any change in audio quality above 160-192 Kbps in mp3s. A quick web search will show that even "audiophiles" really can't discern the difference. 128 has a clear "tinny" quality that disappears as the bit rate goes up. Based on this, I believe that 256 tracks as compared with the original CDs would never be accurately identified. Clearly this should have been a part of this test. The idea that "lossy" means "audible" has not been proven in any real world tests.

True, the only time you will generally notice the difference is if the track has a crowd clapping or drumkit (hi-hat) cymbals. At 128k I think cymbals sound horrible and undefined. At 192k I start to be less annoyed.

Actually, I notice a huge difference between 128K and 192K when listening to classical music. For music that doesn't contain the brashness of percussion or brass instruments, the distortion at lower encoding levels is fairly tolerable; however, brass instruments (including brass cymbals) in particular are unbearably distorted when 128K is used but come across rather cleanly when >192K is used. I've finally accepted that a variable rate between 224K and 320K is where I need to encode my tracks in order to make them as close to the original CDs as I can tolerate without using the actual CDs.

Classical music usually has a wide dynamic range whilst most of the rest doesn't. The audio engineer working on a pop track runs everything through an audio volume-level compressor, bringing every sound to more or less the same volume level. In classical music it is quite normal to play certain things at the level of a whisper.

This means that most popular music never uses the digital bits representing these low-volume whispers but confines itself to loud shouts and blaring synths, so a lot of the 'bandwidth' on a CD is wasted. Classical music, on the other hand, uses most of the available bandwidth thanks to the sparing use of audio level compressors. When this wide-dynamic-range signal has its data compressed, it requires a lot more storage space than the popular music would.
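The level compression described above can be sketched with a toy peak compressor; the numbers (0.3 threshold, 4:1 ratio) are made up purely for illustration:

```python
def compress(sample: float, threshold: float = 0.3, ratio: float = 4.0) -> float:
    """Toy peak compressor: above the threshold, gain is reduced by `ratio`."""
    sign = 1.0 if sample >= 0 else -1.0
    mag = abs(sample)
    if mag <= threshold:
        return sample                              # quiet parts pass through
    return sign * (threshold + (mag - threshold) / ratio)

whisper, shout = 0.05, 1.0
print(shout / whisper)                      # ~20x dynamic range before
print(compress(shout) / compress(whisper))  # ~9.5x after compression
```

A pop mix run through something like this keeps everything near the top of the scale, which is exactly why it leaves so many of the quiet-level bits unused.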

The MPEG community uses a MUSHRA test* to judge the quality of new codecs and to decide on bitrates etc. If there are n codecs under test, then the subject can switch A-B style between n+2 different versions of the same piece of music. These are the n codecs and a reference or lossless version. He does not know which is which. He can also switch to one which he knows is the reference track (so the reference track is in there twice, labelled in one case and not labelled in the other). The task is to rate (0-100) each of the unknown tracks based on how similar it is to the reference track. One important thing to remember about the task is that the subject must rate similarity, rather than 'quality' or anything else. A certain codec could, for instance, add a load of warm bass to a piece of music making it more pleasurable (maybe) to listen to, but decreasing its similarity to the reference piece. The idea is that the subject should be able to pick the reference track from the unknowns (giving it a score of 100) and then rate all of the other unknowns in terms of similarity to the reference. The codec with the highest score wins. This type of test would be carried out for each of a number of pieces of music, with a lot of listeners.
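The scoring side of a MUSHRA-style trial is simple to aggregate; a minimal sketch with entirely hypothetical listener ratings (not from any real test):

```python
# Hypothetical MUSHRA-style similarity scores (0-100), one list entry
# per listener, for the hidden reference and two codecs under test.
scores = {
    "hidden_reference": [100, 98, 100, 95],
    "codec_256k":       [92, 88, 90, 85],
    "codec_128k":       [70, 65, 72, 60],
}

means = {name: sum(v) / len(v) for name, v in scores.items()}
ranked = sorted(means, key=means.get, reverse=True)
print(ranked)  # the hidden reference should land on top near 100
```

If the hidden reference doesn't come out near 100, that listener's results are usually considered unreliable and discarded.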

* sorry, I've no good link- it's in ITU-R BS.1534-1 "Method for the subjective assessment of intermediate quality level of coding systems".

We theorize that the Apple buds were less capable of reproducing high frequencies and that this weakness amplified the listeners' perception of aliasing in the compressed audio signal. But that's just a theory.

Can anyone explain this to me? I know what aliasing is; basically it's when your top frequencies hit the Nyquist limit and kind of bounce back downward (how's that for scientific?), and I know what it sounds like. However, the last time I checked, you'd remove aliasing by cutting high frequencies out of the final analog wave with a lowpass filter. Unless something's radically changed since then, wouldn't the presumably lower-response Apple buds actually show less aliasing than the expensive ones that can better reproduce the higher (and unwanted) frequencies?

Or have I been trolled into reasoning with audiophiles? If that's the case, let me know so I can pack up and go home.

Can anyone explain this to me? I know what aliasing is; basically it's when your top frequencies hit the Nyquist limit and kind of bounce back downward (how's that for scientific?), and I know what it sounds like. However, the last time I checked, you'd remove aliasing by cutting high frequencies out of the final analog wave with a lowpass filter. Unless something's radically changed since then, wouldn't the presumably lower-response Apple buds actually show less aliasing than the expensive ones that can better reproduce the higher (and unwanted) frequencies?

I can explain this to you, but it will probably be easier to use an analogy to get the point across.

We know that a listening device (in this case earphones) has a certain frequency response, and can introduce noise into the source. Some listening devices produce less noise, and have more accurate frequency responses. In terms of simple examples, think: (Speaker > Landline > Mobile > Tin-can phone) (I know, the phones have sound systems behind them that affect the sound, but you get my point.).

Well, you know what? This is also true of encoding audio in a lossy format. So, instead of thinking of the anti-aliasing, imagine that we are encoding into another format. In the case of the apple phones, think of the transitions as (Source -> 128k AAC -> 192k MP3 (The apple phones)) versus (Source -> 256k AAC -> 192k MP3 (The apple phones)). Since additional noise is being introduced into the system, it should be pretty obvious which comes from the higher quality source. If we imagine the Shure headphones as having a perfect response, it will be (Source -> 128k AAC -> FLAC) versus (Source -> 256k AAC -> FLAC). There is no additional noise added, so you have to discern entirely based on the difference between the two AAC files.

To get back to the issue of aliasing: aliasing is what happens when a signal of one frequency gets recorded in a medium without enough precision to record that frequency. The Nyquist limit says that to capture a given frequency, you need a sampling rate of at least twice that frequency (so a 5kHz sound can be captured in a 10kHz recording), but that assumes that the recording is in phase with the sound, and so it's a little more complicated than that. In any case, you can think of aliasing as the "beat" between two different frequencies. For example, if you listen to a sound at 3000 Hz and one at 3100 Hz at the same time, you will hear a 100 Hz "beat" that is the difference between the two. However, if you listen to the 3000 Hz frequency, and then the 3100 Hz frequency, you might not be able to tell the difference between the two. It's only when playing the two sounds together that you hear the beat (just like you won't notice aliasing unless you actually record it into another format).
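The fold-down itself is easy to demonstrate numerically: a tone between fs/2 and fs is sample-for-sample identical to its mirror image below Nyquist. A small check in Python, using a 6 kHz tone at an 8 kHz sample rate (which aliases to 8000 - 6000 = 2000 Hz):

```python
import math

fs = 8000               # sample rate
f_high = 6000           # above Nyquist (fs/2 = 4000)
f_alias = fs - f_high   # folds down to 2000 Hz

# Once sampled at fs, the two cosines are indistinguishable:
for n in range(16):
    a = math.cos(2 * math.pi * f_high * n / fs)
    b = math.cos(2 * math.pi * f_alias * n / fs)
    assert abs(a - b) < 1e-9

print("6 kHz sampled at 8 kHz aliases to", f_alias, "Hz")
```

This is why an anti-aliasing lowpass filter has to run *before* sampling: once the samples are taken, the 6 kHz tone and the 2 kHz tone are the same data.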

There are so many factors involved with these things that it is very hard to make a judgement. A well organised test would specifically select songs that do not compress well with lossy codecs. It is conceivable that music with a fairly even PSD (power spectral density) would not compress as well as music with a PSD that focusses more on certain areas, since the amount of information stored would have to be spread across a greater range. Hence the higher bitrate should sound better because more detail is preserved. Think speech quality (telephone, AM radio) vs CD quality: it sounds like the original, but the detail is all missing. That's what that extra bitrate adds back in.

128K is acceptable to the majority of people out there. Some people are more sensitive; I know people who work in professional audio and they can't stand 128K. Personally, the vast majority of the time I can't tell. I generally use OGG at around 160Kbit, and when an mp3 gets played I do get a sense that it is not quite the same, but it's not conclusive - it could just be the encoder used.

The headphones do make a difference. I used the stock headphones with my portable music player. Dropped them in/on/off something and broke them and got a set of Sennheiser ear buds. They do not cost $400. The interesting thing is I perceived the same effect as the people in the test: A reduction in bass 'kick' but a clearer response. There is definitely a lot to be said for good quality listening equipment, but in that arena, proper over the ear headphones are the only way to go. They aren't that practical though. The standard ear buds don't have the high frequency response and clarity you can get from slightly more expensive ones. Spending as much on your ear buds as on the player itself seems a little excessive though. You could probably get a larger size player, decent headphones, and use FLAC and get better quality than 256K mp3 through a set of very expensive ear buds. Also, you are going to be even more upset when they end up in your beer or something.

Finally, spotting mp3 artefacts is a strange thing. I'd never noticed any (at 128K) until someone pointed out the sound to me (usually it's cymbals). From then on, it became much clearer, and now I notice it a lot more (again it's mostly cymbals). Some songs are more susceptible than others, again I guess it is related to the make-up of the music.

Essentially I have come to the conclusion that: OGG sounds better than MP3 (although some of the audio professionals I know think the opposite); ear buds can only go so far and break - not worth spending a fortune, but worth spending a little; and if you _really_ want to hear stuff at the finest detail, you should invest in some good over the ear headphones. It's a different experience: the noise occlusion, crisp, clear sound, and defined and powerful bass. The main thing you notice is that strong bass does not corrupt the higher frequencies, giving a very different overall feel of the sound, one that is, in my opinion, quite unique.

The open reel tape used in the studio was recorded at either 15 or 30 ips.

And you had to be either pretty wealthy to use virgin tape or hope the previous recordings would be properly wiped. It's an analog medium whose main advantage is that overdriving the inputs gives a nice effect ("warmth") - compared to early digital boxes that just clipped and truncated instead of dithering. Every time you have to play or record tape, it degrades a little bit; surely you know of the multitracking in Bohemian Rhapsody that went on and on until the tape was nearly transparent.

Furthermore, vinyl is lowpass filtered at 16khz anyway. Gone are the harmonics. The higher fidelity is in the first few playings; after that, the medium degrades. What use is it to have something that'll play properly 10-20 times?

Good CrO2 tape and a quality recording and playback deck, and you really couldn't tell the difference between live and tape.

Live sound is always a compromise; always an unpredictable venue, crowd, and response (and in the worst case a clueless mixing engineer or band member who decides that eleven is just not enough for his guitar); soundchecks just can't fix this.

There is absolutely nothing wrong with digital. The whole 24/96 deal is a godsend because it means much more headroom. Having it in digital format means that you can play and record without ghosts from the past, without degradation. This caused some engineers to add noise afterwards to get rid of the sterility - but what they call sterility is simply unheard-of silence that couldn't be had previously. Engineers back in the day would've killed to have the possibilities we have now.

As for sounding plastic, I think you're confusing the medium with the mixing. Are you familiar with the term "loudness wars"?

You're preaching to the deaf. All the rational arguments in the world aren't going to convince the $400-volume-knob crowd that the godless computers aren't ripping the color, warmth, texture, flavour, and smell out of their wax-cylinder and vacuum tube audio.

After all, just look at this chart: you can clearly see how digital audio is ultimately a series of ugly, jagged, sharp steps, while analogue audio is infinitely variable...

That's it: "best." Not "like the original," which is a poor substitute for "best."

The problem is, "best" is subjective. One person's "best" is not the same as another's. When comparing against the original, we have a baseline to compare against.

An example of this would be that different codecs preserve certain frequencies differently, and different people are more sensitive to changes in different frequencies. If it just happens that a codec poorly preserves those particular frequencies that you are sensitive to, then of course you will feel that that codec is bad.

Noise canceling ear buds are never good and definitely not worth the money to begin with. The Shure product is actually noise isolating, therefore allowing you to play music at a lower volume, and be able to hear even more details. Also, noise isolating ear buds tend to also block out more noise than noise canceling ones do, at least in terms of the decibel rating.

Personally, I prefer a set of good earphones (without noise canceling, mind you, perhaps a good set of Grados) for those times at home, and in noisy environments, nothing beats a pair of decent in ear noise isolating ear buds. They are essentially ear plugs with embedded speakers, absolutely amazing products. Check out a pair of Shures or Etymotics, definitely won't disappoint.

It's a matter of personal taste, but I was given a pair of very expensive noise-blocking earbuds, and I *hate* them. Firstly, to block the noise, you have to jam them into your ears till it hurts. And then, the "sound-stage" is moved to directly between the earbuds, so the orchestra sounds like it is inside my head(*). Ugh. I tend to prefer lightweight in-ear headphones with a folding headband for travel (much more comfortable), and proper fullsize headphones (not necessarily especially expensive) for non mobile listening. On aircraft, I've given up on classical music completely.

(*)If interested in this effect, try playing with sox, and the "earwax" plugin. Some samples are on the web too.

Just to make this clear... the HD202s are closed-back headphones, not 'noise cancelling' in the strict sense, which would imply an active microphone/counter-noise system. Closed-back cans do block out a lot of noise, and can sound a lot better than the actual noise cancelling stuff.

The result isn't as useful without knowing how those that didn't pick the high bit rate were split up. Out of the 4 that didn't pick high bit rate with Shure headphones, how many picked low bit rate, and how many couldn't tell the difference?

As for ABX, it seems like the most demanding possible test, which I agree makes it attractive in theory. But in real life, the relevant question is "does this sound good" without a back-to-back reference sample for comparison. I also keep my photo collection in .jpg. Can I see the jpg distortion if I do a 1:1 blowup and carefully compare to a TIFF image? Sure. But at normal viewing size and distance, it just doesn't bother me, and that's my personal preference.

The sample-set should also include musicians and audiophiles into the mix. They are far more likely to give an objective opinion compared to people randomly pulled off the street. Both know what to listen for and are well tuned to finding the distortion which is inherent in lossy compression. In my personal experience, I have listened to mp3 as well as other competing formats for over 10 years and it is very easy for me to discern the difference in bitrates. I wasn't able to do this at first, but I tuned my ear over time.

The sample-set should also include musicians and audiophiles into the mix. They are far more likely to give an objective opinion compared to people randomly pulled off the street.

Bullshit. First of all, the testing procedure should be designed to eliminate subjectivity. That's the purpose of double-blind testing. Second, why would anyone but a musician or audiophile care what a musician or audiophile has to say on this issue? Are they experts on hearing? The latter group would be particularly useless.

100% certainty that a 10-person sample-set is too little for a Yes-No experiment.

Probability that a bunch of editors will fuck up the statistical design of an experiment: 96.2% Seriously, you're writing for a magazine with decent readership, and you can't spend a week finding 90 more people at a coffeeshop who are willing to listen to music for 15 minutes apiece? Possibly get some statistical validity?

100% certainty that a 10-person sample-set is too little for a Yes-No experiment

Really? We are testing the hypothesis that people can tell 128k and 256k apart. If the hypothesis is false, then it will be 50/50 whether they get one right or not. The chances of getting 8 or more right out of 10 when an individual trial has probability 1/2 is C(10,8)(1/2)^8(1/2)^2 + C(10,9)(1/2)^9(1/2) + C(10,10)(1/2)^10. That's 56/1024, or about 5.5%. That's pretty good grounds for rejecting the null hypothesis that they were just guessing.
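That tail probability is quick to verify in a couple of lines of Python:

```python
from math import comb

n, k_min = 10, 8     # 10 listeners; at least 8 chose correctly
# P(k >= 8 | pure guessing, p = 1/2):
p_value = sum(comb(n, k) for k in range(k_min, n + 1)) / 2**n
print(p_value)       # 0.0546875, i.e. 56/1024, about 5.5%
```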

Test confirms the generally known (but debatable) points:
1. Not many can detect the improvement of higher kbps
2. Expensive earbuds are way better than the default ones.
3. 128kbps AAC isn't all that bad.

The iPod revolution has caused a massive resurgence in big headphones. In fact, in many ways it's a whole new trend. The big headphones, in the past, were usually worn at home, plugged into a nice amplifier. Or used in the recording studio, or for DJing. In the Walkman era, the headphones used were the cheap, compact outer-ear headphones. During the portable CD player era, it was black earbuds.

The fact I couldn't play the music on my (Nokia) phone's built-in music player was the reason I stopped buying from iTMS. I'll probably start again now. 256Kb/s AAC is the same quality as the music I've ripped from CD, and the convenience is a huge incentive.

"Eight of the 10 participants expressed a preference for the higher-bit rate songs while listening with the Apple buds, compared to only six who picked the higher-quality track while listening to the Shure's."

I don't buy this. I have a friend who claims to be an audiophile - and he is - with sound equipment worth well over $40,000. He states that the more expensive and professional your gear is, the easier it is to spot low quality music.

So the article contradicts his statement, and I have to agree with him on this one. Logically speaking, professional speakers should produce results far closer to the source than the ones that aren't.

Yeah, well I used to have a gf who claimed to be a Scientologist, and she gave over $40k to the church. She states that some alien is responsible for blowing up volcanos that created humans, but you know what... the bitch is just wrong.

You're assuming A) a minimum level of quality, so that "better" means "better in all areas," not just "better on average," and B) that it's impossible for deficiencies in one area to complement improvements in another. In short, while what you say is generally true, there are a lot of variables. For example, the data loss in AAC encoding is most noticeable at higher frequencies; it's possible that A) the Apple headphones have better clarity at higher frequencies than the Shure headphones, or B) that the Apple headphones' weaknesses at high frequencies happen to make the encoding artifacts easier to notice.

Logically speaking, professional speakers should produce results far closer to the source than the ones that aren't.

Er, WTF? Audiophiles don't use 'professional' kit; they buy posh, shiny audiophile setups. If you want to listen to music as the recording engineer intended, buy a set of decent powered studio monitors for far less than supposed audiophile setups. You'll be far closer to the intended sound than any artificial response you get from consumer gear. And yes, audiophiles are consumers too, just consumers.

Kinda, sorta, but not really. Many recording engineers preview their mixes on the most atrocious speakers they can find, to check that the mix still sounds OK on that kind of equipment. It will sound much better on better gear and they know it, but they also know how 90% of people will listen to it and want to cater to that possibility. It's not about recapturing the way they intended it (that is in their head, not on a studio monitor or an audiophile rig). (Why else do you think pretty much all the CDs released have…

I agree with your statement that audiophiles don't use "professional" equipment, but I disagree with your statement that studio monitors will give you the sound that the recording engineer intended. This is because, as you imply, there is a distinct difference between accurate speakers and good-sounding speakers, and recording studios use accurate speakers, while consumers, even audiophiles, are better off with good-sounding speakers.

If you're working in a recording studio, you want accuracy at all costs. You must hear everything distinctly, because you need to make important decisions based on what you hear. If "it sounds great" is all you are getting from your speakers, you won't make those tough decisions (more cymbals, different reverb, more compression on the vocals, or whatever); you'll just leave it alone, and it won't be as good as it could be. However, those extremely accurate speakers that are perfect for recording-studio use are NOT pleasant for casual listening. Everything is too crisp and sharp, and they will tend to make you want a break from all that detail.

When I'm working on a mix in the studio, I want everything in very crisp detail so I can make judgments; when I'm listening to the final product, I want the music to "hang together" and present itself to me as a coherent whole. There are other differences between studio monitors and "normal" speakers (for example, consistency of frequency response) but this relatively subjective factor of detailed sound vs. coherent sound is one of the more important ones I have experienced.

The recording engineer did not intend for you to listen to the music on studio monitors. Studio monitors are a tool with a specific use, and that use is not everyday listening. The attributes of a good studio monitor just don't match up with the attributes of a good audiophile speaker. This is why audiophiles buy certain kinds of speakers, and recording engineers buy other kinds. I've been lucky enough to own both kinds of speakers, and I've tried using them for the wrong purpose with less-than-stellar results. Mixes made on good-sounding speakers are inconsistent on other speakers, and music played through accurate speakers isn't as pleasant to the ear.

I have NO accurate speakers. Instead, I cut even more costs and just have a few separate stereos with different speakers hooked up. I use my high-quality Shure studio headphones for recording; then, when I'm done, I play the mix back on all three systems and note how it sounds on each, so I know what to expect over a wide range of speaker/amplifier combinations (car amp with house speakers, car speakers with house amp, etc.). I listen to it as if I'm hearing it out of Joe Sixpack's home stereo rig…

I have a friend who claims to be an audiophile - and he is - with sound equipment worth well over $40,000.

I can't tell if you're being sarcastic or not. Assuming you're not... having $40,000 in sound equipment says about as much about your ability to judge sound quality as spending $300 on Celine Dion tickets says about your taste in music.

Clearly these tests are inadequate, or at least they haven't disclosed enough information about the testing conditions. As any true audiophile knows, headphone performance is strongly affected by atmospheric conditions; I'll bet that if they had bothered to maintain proper water vapor saturation levels in the test facility, the inadequacy of the earbuds would have been obvious to everyone involved, because the sensory receptors (hair cells) in the human ear only achieve full sensitivity under controlled conditions.

No doubt they also failed to account for magnetic field alignment; the flaws of low bit rate reproductions are much easier to perceive when the listener is not aligned with Earth's natural axial vectors. The solenoidal force lines ruin the high band pass attenuation of any digital audio and will make both low and high bit rate reproductions equally poor, so naturally there wasn't a strong correlation among the test subjects.

Hardly a conclusive or thorough study. Were it really rigorous, some subjects should have heard two 128Kb/s tracks while others heard two 256Kb/s tracks, and there should have been a "no difference" option. Also, some types of music, and some particular musicians, make it much easier to discern differences between bit rates, yet every subject listened to a different song.

Personally, I can tell the difference between 128 and 256 versions of most Radiohead songs, where there are frequently numerous l…
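On the numbers: with a forced two-way choice and only ten subjects, even the best result in the article is compatible with coin-flipping. A minimal sketch of the standard binomial calculation (the 8-of-10 figure is from the article; everything else is textbook math):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of seeing k or more successes in n trials of a fair guess."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If all ten subjects were guessing, how often would 8 or more still
# "prefer" the higher bit rate purely by chance?
print(round(p_at_least(8, 10), 4))  # 0.0547 -- not significant at p < 0.05
```

So even the headline 8-of-10 result with the Apple buds falls just short of the conventional significance threshold, which is why small listening panels need many repeated trials per subject.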

Despite what Apple charges for a set of its replacement buds, the earphones that come with 90 percent of the digital media players on the market are throw-away items--they're only in the box so you'll have something to listen to when you bring the player home.

I'm a musician. I've recorded and released an album [cdbaby.com] (sorry for the shameless plug but it's only to put my post in context - honest). I own expensive studio earphones, have experience mixing and mastering etc.

I don't own a 5th-generation iPod, but I do own an iPod Shuffle that has since stopped playing MP3s. It still works as a storage device, and I still have the headphones. I held on to the headphones because I prefer them over all the other earbuds I have. They don't beat the studio headphones, but I would not consider them "throwaways"; I found they're pretty good quality, and I began using them with all of my portable devices. I would generally agree that most earbuds that come with CD players, and probably many other MP3 players, are of relatively low quality, but I was very impressed with the ones that came with the iPod Shuffle. I will never throw them away.

Your shameless plug just sold at least one of your CDs. And ignore the AC's comment--they obviously didn't give your music a listen. You are most certainly a musician. I, however, am a drummer, which just means I hang out with musicians. *BADDA BOOM*

To our subjects' ears, there wasn't a tremendous distinction between the tracks encoded at 128Kb/s and those encoded at 256Kb/s. None of them were absolutely sure about their choices with either set of earphones, even after an average of five back-to-back A/B listening tests... We'd be more excited if Apple increased the bit rate even further, or--even better--if they used a lossless format.

OK, so by DOUBLING the bit rate, there was only a marginal increase in quality... to the point where, on a good set of he…

Me, personally, what I find unsatisfying about compressed music is that the treble is the first thing to go, and even at high bit rates AAC and MP3 each seem to just make all cymbals, brushes, triangles, and synthetic tones in the high registers sound equally like white noise.

I found a tonality frequency setting in LAME that seemed to cure this problem, but neither iTunes nor ITMS seems to let you adjust or purchase based on this issue.

Perhaps not everyone is sensitive to this, but maybe there are other settings or aspects of compression that other people are sensitive to which I am not...leading one to the possible conclusion that compressed music might be made better by personalizing each rip to the hearing response of the listener rather than compromising on an average human hearing model.
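The treble complaint above is measurable rather than a matter of taste: compare the fraction of spectral energy above a cutoff before and after filtering. In this toy sketch the 3-tap moving average is a crude stand-in for what a lossy encoder discards, not a model of AAC, and the signal, sample rate, and cutoff are all invented for illustration:

```python
import cmath
import math
import random

def spectrum(signal):
    """Squared DFT magnitudes of a real signal (naive O(n^2) DFT)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))) ** 2
            for k in range(n // 2 + 1)]

def hf_ratio(signal, rate, cutoff):
    """Fraction of total spectral energy at or above `cutoff` Hz."""
    mags = spectrum(signal)
    n = len(signal)
    return sum(m for k, m in enumerate(mags) if k * rate / n >= cutoff) / sum(mags)

rate, n = 8000, 256
rng = random.Random(1)
# Broadband noise plus a 3 kHz partial, standing in for cymbal shimmer.
original = [rng.gauss(0, 1) + math.cos(2 * math.pi * 3000 * i / rate)
            for i in range(n)]
# Crude "encoder": a 3-tap moving average, which attenuates high frequencies.
lossy = [(original[i - 1] + original[i] + original[i + 1]) / 3
         for i in range(1, n - 1)]

print(hf_ratio(original, rate, 2000), hf_ratio(lossy, rate, 2000))
```

The same measurement applied to a real rip and its decoded AAC copy would show where, and how much, energy went missing--a more concrete argument than "the cymbals sound like white noise to me."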

Most of my friends seem to have quite a bit of hearing loss (all under 25). I don't seem to have much, though, and I've worked in steam turbine and gas turbine power plants (exceedingly loud places). If these test subjects were anything like my friends they have to turn up music so loud that it is impossible to tell the difference between a cell phone speaker and an Imax theater.

Maximum PC did double-blind testing with ten listeners in order to determine whether or not normal people could discern the quality difference between the new 256kbps iTunes Plus files and the old, DRM-laden 128kbps tracks.

The unexpected age results (that older people were better at telling the bit rates apart) may well be a consequence of music choice. Each subject picked their own music, and it is very clear that these quality differences are more noticeable in some types of music than in others. The first time I played an iTunes-purchased classical piece on a cheap component stereo system, I thought something was broken. I hadn't noticed a problem with most popular music, but I find some jazz and most classical digitized at 128Kb/s un-listenable on my low-end component stereo.

One of their key ideas was having the participants submit music they were intimately familiar with. Unfortunately, they should have taken the idea to its logical conclusion: testing each participant only on the song they submitted. Also, they could at least have published statistics on how participants performed on their own songs.

I find it easy to tell the difference between, say, lossless or even 320 and 128/192 when listening to music I'm very familiar with. But give me a set of random s…

The big difference that the 256 Kb/s + DRM-free option makes for me is that now I'll buy albums from iTunes Store. Previously I would use iTunes to buy one to three tracks if there was some artist I liked but didn't want a whole album from. But usually I order the CDs online for $8 to $14, rip them to AAC at 192 Kb/s, and put the disc away to collect dust on my overflowing CD rack. Now I can get higher quality cheaper and faster.

Yes, ideally I would rip all my music to a lossless format. And ideally everything would be available on SACD at 2822 KHz rather than 44.1 KHz CDs. But that's just not practical with my 500+ album collection. It'd fill up my laptop's hard drive real quick and allow me to put only a fraction onto my iPod.
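The space argument above checks out with simple arithmetic. A sketch (the album count and length match the poster's collection, but the ~900Kb/s "typical lossless" figure is my assumption, not a measurement):

```python
def library_size_gb(albums, minutes_per_album, kbits_per_sec):
    """Approximate library size in (decimal) gigabytes at a given bit rate."""
    seconds = albums * minutes_per_album * 60
    return seconds * kbits_per_sec / 8 / 1e6  # kilobits -> kilobytes -> GB

for label, rate in [("AAC 128Kb/s", 128), ("AAC 256Kb/s", 256),
                    ("lossless, ~900Kb/s typical", 900),
                    ("raw CD PCM, 1411Kb/s", 1411)]:
    print(f"{label}: ~{library_size_gb(500, 50, rate):.0f} GB")
```

At 256Kb/s the 500-album collection fits in roughly 48GB; losslessly it balloons to a couple of hundred, which on a 2007-era laptop drive was indeed impractical.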

I'm also disappointed that the article only tested the tracks on iPods with earbuds. Most of my listening is on a decent stereo system fed from my laptop. Ripping is about convenience, not portability. I only use my iPod when riding the Metro or an airplane. With all the outside noise the bitrate doesn't matter.

And being DRM-free isn't just a matter of idealism. I get frustrated when I go to burn an MP3 CD for my car and discover that one of the tracks I selected is DRMed. Sure there are ways to get around it, but it's just not worth the bother.

It only matters what you hear with your music and your listening conditions.

I sometimes like to listen to classical on a cheapish low-end component stereo. At 128Kb/s, the quality is so noticeably bad for me as to make it pretty awful. But I don't have that problem with many other types of music under other listening conditions (car, iPod, computer speakers). So when I get a chance (I'm travelling now), I'll see what 256k does for me under the conditions that matter. The results may mean that I'll buy…

I figured the best thing to use for evaluation was some music I was already very familiar with: "Hunky Dory" by David Bowie (1971), a very well-recorded album featuring all styles from straightforward rock to lush orchestrations. (Good deal, too; the album was just $9.99, though the individual tracks were $1.29.)

Must admit I was not disappointed. Previously I've rarely bought any music from iTunes--just my own CDs ripped to MP3 at 192 or 256K. The 256K AAC sounded great on the big speakers. Very clean and…

This test, to a large extent, tells us about the output of the codecs rather than about the difference between 128k and 256k encoding. For a really meaningful test, we must ensure that each song was encoded using the exact same settings.

I can create 256k MP3s that sound worse than 128k MP3s, both from the same WAV. There are a large number of customizations you can apply in the encoding process that can really affect the output.

The Hydrogenaudio forums provide a lot of very good information, including well-designed double-blind comparisons between codecs and bit rates. See this page for details and links to other testing sites:
http://wiki.hydrogenaudio.org/index.php?title=Listening_Tests [hydrogenaudio.org]
All in all, an excellent resource for any serious listener.

First, to those who made comments about 128k encoding, you may be thinking of mp3. (Or maybe not, who knows...) From what I've heard, AAC, Vorbis, and AC3 all sound better than mp3 at similar bitrates.

Second, I remember a comment on Slashdot a while back, before these actually came out, and I want to confirm it... Apparently, CDs are recorded at a certain fixed sample rate and bit depth, and there are digital masters at a higher rate. It's late, so I'm not entirely coherent, but think of it as somewhat equivalent to the resolution of a DVD (video quality is proportional to resolution (HD vs. normal) and bitrate). The point was that the 256k files may actually sound better than a CD, since they come from a better source than the CD does.

If so, this whole test is BS, since they did not do a comparison of CD vs 128k (either iTunes-DRM'd or custom-ripped) vs 256k (un-DRM'd, from the iTunes store). Specifically, I'd want to hear 256k vs CD. But at the same time, I don't know if any iPod, or specifically the one they are using, would be able to handle the higher resolution. If not, you'd have to specifically check your soundcard, too.
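One part of this is easy to pin down with arithmetic: whatever the resolution of the studio master, Red Book CD audio has a fixed raw rate, and 256Kb/s is a small fraction of it (how much of that difference is audible is exactly what the codec is for):

```python
def pcm_bitrate_kbps(sample_rate_hz, bits_per_sample, channels):
    """Raw, uncompressed PCM bit rate in kilobits per second."""
    return sample_rate_hz * bits_per_sample * channels / 1000

cd = pcm_bitrate_kbps(44100, 16, 2)  # Red Book CD audio: 44.1 kHz, 16-bit stereo
print(cd)        # 1411.2
print(cd / 256)  # ~5.5 -- the CD carries about 5.5x the bits of iTunes Plus
```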

And finally, again vaguely remembering this from a Slashdot comment (so correct me if I'm wrong), but there was some comment about "The 30c may seem small, but imagine buying a whole album at these prices..." And I seem to remember that a full album is always $9.99. Still high compared to, for example, the minimum you'd pay for a FLAC-encoded album at MagnaTune, but if you're buying a whole album (and if that's accurate), you may as well just opt for un-DRM'd -- especially if it sounds better than a CD (which would probably cost more anyway.)

But then, of course, I'd like to hear a much bigger study, with more rigorous controls in place. 10 people is just not enough, no matter how you set it up.

And personally, if I had any money to spend on music, I'd be buying un-DRM'd stuff. But probably not from iTunes -- not till there's a web interface (or at least an API) that doesn't require me to download their client software. After all, if I'm buying a single file, the point of the client software is to implement the DRM, and if I buy the un-DRM'd version... Not that it shouldn't also work in the iTunes client, but it'd be nice for it to work natively in AmaroK, or just in a browser.

"And as much as we dislike DRM, we just don't think DRM-free tracks alone are worth paying an extra 30 cents a track for... It would be crazy to pay that premium if you're going to buy the entire album. We'd be more excited if Apple increased the bit rate even further, or--even better--if they used a lossless format."

First off, I've yet to see a lossless format that WORKS. And by "works" I mean easily convertible into MP3/AAC so I can use it on a portable player I already own. I've seen APE and FLAC; both are too much hassle, and the APE files I got were in Japanese. Here's a little fact: APE doesn't necessarily know how to correctly encode Japanese into ID3 tags--end result? Buffer overflow, bad data. Oh, and when they do work? They are larger than MP3s and AAC files. A lossless codec means all the data has to remain; trust me, that's not a good thing when coupled with all the other little hassles it has.

Second: it'd be crazy to spend 99 cents just to license your files so that you can only use them as Apple approves. Paying money to crack the music so I can use it as I want is illegal according to them, so why am I paying money to get locked into their plan? DRM-free music, however, is easily worth a dollar and 30 cents, because it's mine (it's AAC, but I can live with that). I don't have to ask permission to use it in another player, and I don't have to ask permission to convert it to a format I choose. Personally, I'm fine with 192 for most recordings; I'm not an audiophile, I'm just a listener. If you want the highest-grade data, or are an audiophile, you'll be buying CDs or fully lossless files--you're not going to fuck with iTunes anyway.

By the way, their other suggestion is to ditch the Apple iBuds and get quality earphones. Hint: isn't this exactly what produced the less distinguishable results? I don't see why a "higher quality" headset would be desirable if it creates less of a difference between the two bit rates instead of more. Higher quality means I should hear everything, and if you're asking people "can you hear the difference," they should already be listening as hard as they can; the theory they use to explain the result doesn't make much sense. They tell us 30 cents doesn't make a difference, but then try to sell us on dropping 400 bucks on isolating earphones you can get for around 100 if you're clever. Hell, they're EARBUDS!!! So far I've noticed two things about earbuds: they are uncomfortable, and they are worthless compared to my headphones. If you're talking about noise-isolating earbuds, just be smart and buy a good set of headphones.

Overall, a throwaway article. I'm still only going to buy DRM-less music (I expect you out there to do the same; I'm assuming 30 cents won't kill you, but that's your choice) and hope Apple soon puts out DRM-less versions of the music I actually listen to (so far, not really). I'm assuming you're all still buying music the way you were going to. The only minds this article changes belong to the cavemen hiding under rocks who still scream "ahhh, CDs bad"--and they're still trying to figure out our compooters, so showing them the Internet might not be smart just yet.

I've been very skeptical about these subjective tests ever since I read about one in 'New Scientist' many years ago.

Back when the great audiophile debate was between CD and vinyl, New Scientist magazine put a load of audiophiles to the test by playing them the same piece of music from CD and then from Vinyl and asked them to identify which version was from which media and describe the differences between them.

What they didn't tell them was that they simply played the same CD track twice so any differences they thought they heard were purely a result of their own perception fantasies; it didn't stop them from describing in some detail how the two tracks varied though.

But you misinterpreted the part about DRM being the only value. They're saying that their subjects aren't sure that 256 Kb/s sounds better than 128 Kb/s. Therefore if there's any advantage to the new tracks it's not a difference in sound quality. They're just easier to copy, burn, and transcode.

Outside of Apple, the biggest supporter of AAC seems to be Sony. Both the PS3 and PSP (and newer Sony phones and Walkmans) can play it, and it's the default codec for the PS3's CD-ripping feature. So the DRM-less iTunes songs will benefit PS3/PSP owners quite a bit, allowing them to buy songs from iTunes and use them on their machines.

AAC is supported by everything except the cheapo Taiwanese players whose makers have Microsoft sponsoring them (with Windows Media). All the files on my Nokia 9300 Symbian phone are AAC, even the ringtones embedded in the device ROM. My other, "real" phone--nothing close to smart (an SE K700i)--has everything in AAC too. In fact, thanks to AAC's better compression, I can use its 46MB of flash memory for music.

"(Advanced Audio Coding) An audio compression technology that is part of the MPEG-2 and MPEG-4 standards." http://www.answers.com/aac [answers.com]