My point is that if master A sounds better than master B after AAC encoding, A will also sound better than B when left lossless, unless you encounter AAC artifacts and are able to defeat them with your new master. You could also try to diminish that problem by choosing a higher encoding bitrate. I doubt that mastering engineers consistently find AAC artifacts at common bitrates.

Yes, audible degradation is necessary for the argument to hold water. I don't know what bitrates we are talking about, but let us just assume that they are low enough for this to really be an issue (I am sure that much of the talk from producers stems from a lack of blind testing).

My point was that if some piece of classical music sounds bad through a given AAC encoder/bitrate, the extreme artistic possibility would be to use an entirely different piece of music (e.g. Britney Spears) that might or might not sound equally degraded at that bitrate. A mastering technician might make other significant changes to the mix that divert our attention from problems (or avoid problems in the first place). These are artistic choices that an encoder will never have the freedom to make, but I think that the technically savvy people on this site tend to under-appreciate the craftsmanship (or lack thereof) of mastering a song or an album. It is an interesting challenge that requires both technical and artistic skill.

Sadly (for many of us), even highly acclaimed mastering engineers seem to lack technical skills that seem basic to us (or at least the ability to describe the technical causes in a meaningful way), and at the same time seem to have an artistic vision that is very far from what many of us prefer.

I believe that it is common in "hand crafts" such as music making and cooking, where knowledge is of a more intuitive, "pass-on-by-example" and training nature (as opposed to the engineering route, which is more theoretical and QED-oriented), to perhaps do the right thing (e.g. "keep hot signal levels, but avoid clipping", "fry your steak at a hot temperature before adding butter"), but for very wrong reasons (i.e. "the magic sound fairy will bite you in the nose otherwise"). I can see how experience and feedback will (best case) tend to direct such professionals into doing stuff the right way through some form of natural selection, but since theoretical analysis is seldom needed or celebrated, why would one expect it to accumulate in the most celebrated professionals?

Reference: "Molecular Gastronomy: Exploring the Science of Flavor", Hervé This

The notion that mastering engineers who deliberately compress their recordings to 10-12 bits and allow distortion through digital clipping are also among the few human beings able to consistently identify AAC encodes, and constantly run into problems with iTunes encodes, should not strike only me as odd, to say the least.

QUOTE (krabapple @ Apr 2 2012, 08:17)

It's possible that calling famous mastering engineers 'monkeys' in the subject might be considered aggressive.

Now that I've vented a bit: like skamp aptly said, I also would be in favour of changing that particular wording. Maybe change it to "Mastering engineers don't understand lossy formats" or similar, to be less ad hominem and more about what exactly bothered me.

QUOTE

I also would be in favour of changing that particular wording. Maybe change it to "Mastering engineers don't understand lossy formats" or similar, to be less ad hominem and more about what exactly bothered me.

How’s that? If you want any other changes, you can request them via the Report button.

Just found this article via Twitter, circulated among mastering "engineers" (in fact Heba Kadry reposted it, the girl who mastered the latest Mars Volta album, which reaches -12.79 dB on my RG scans, and is generally mastered in a horrible fashion).

This further backs my impression that most of them don't have a single clue of what they are doing. The section about the mastering practices of Rubin and Meller is especially eye-opening to me. Masterdisk "engineers" also apparently are now out to rape the Rush back catalogue. Further down they cite phase-reverse tests to prove AAC files are different from the original (wow, REALLY?).
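A phase-reverse (null) test is of course guaranteed to show a difference: any lossy encode leaves a non-zero residual when subtracted from the original, which says nothing about audibility. A minimal sketch of such a null test, pure Python with synthetic signals standing in for decoded audio (the 16-bit rounding here is just a stand-in for codec error, not an AAC model):

```python
import math

def null_test(original, decoded):
    """Subtract two sample-aligned signals and report the residual RMS in dBFS."""
    residual = [a - b for a, b in zip(original, decoded)]
    rms = math.sqrt(sum(x * x for x in residual) / len(residual))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# Synthetic stand-ins: a 440 Hz sine and a "decoded" copy with tiny rounding error.
original = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
decoded = [round(s * 32767) / 32767 for s in original]  # 16-bit quantization

print(null_test(original, original))  # -inf: identical signals null completely
print(null_test(original, decoded))   # finite residual: "different", yet inaudible
```

Any non-identical pair "fails" this test, so a non-zero null proves difference, not audible degradation; only a level-matched blind test can establish the latter.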

The good thing is, I can use this article to decide which releases to avoid in the future. But I'm really at a loss what we can do besides that. I'm really fed up with mastering "engineers" destroying music releases.

On QQ there is a topic where Steven Wilson (Porcupine Tree and other projects, surround mixing for others too, KC, Jethro Tull) answers questions, at one point he says this: "I prefer to provide the mastering engineer with the maximum possible dynamic range and let them judge how much to compress, if any."

Some lossy codecs can introduce audible temporal smearing - and the effect of such smearing (apart from the obvious!) in terms of perceived frequency response is to make the mix sound brighter. EQing the mix to reverse this subjective effect would be possible.

Some lossy codecs introduce clipping. If this wasn't being explicitly avoided, then it's easy to believe that certain amounts of clipping in certain mixes would bias the perceived frequency response, stereo effect, density of sound etc. in a certain way, and all these changes could be counteracted, to a certain degree, by fiddling with the mix.

So the concept of re-working a mix to counteract the effects of lossy coding seems perfectly reasonable to me.

The problem is, to my ears, it's perfectly reasonable for sub-64kbps encoding today, or circa-1997 128kbps MP3 - but it's absolute nonsense for 256kbps AAC encoding. I'd love to be proven wrong, but I suspect, like others, it's all down to a lack of rigorous blind testing. Which is hardly surprising. Mastering engineers, as a rule, don't blind test changes to their mix. They just make them.

If they do turn out to be the only people in the world who can routinely ABX 256kbps AAC (and I agree with Kohlrabi on the likelihood of that), then I guess they're "improving" AAC mixes for all the other mastering engineers out there - but if all the people who can hear a difference demand lossless, and all the people who can't hear a difference are buying AAC, then they're wasting their time!

Except it's great Emperor's New Clothes marketing. Like much of the audio industry.

Cheers, David.

P.S. I still buy CDs, so I don't care. On the rare (but getting less rare) occasions that I can't get a CD or lossless download, I still feel cheated.

... Because it seems that in the end it's the mastering engineer that gets to decide how a record sounds, not the artist, not the mixer, not the producer.

They wish.... From my limited contact with mastering engineers, the majority of them would rather produce a clean, unsmashed master. They don't often get the opportunity, though. The artist and producer normally specify the desired result and if they don't get what they asked for, it gets done again.

Though, frankly, the line between "clipped because they want that sound", and "clipped because they were trying to go louder", is a fine one which I doubt many in the industry have the tools, ears, or freedom to call properly.

Heba Kadry again: "While I'm all for setting dynamic standards for records, people fail to realize that these days mixes are already brickwalled to an extreme."

Ted Jensen, who mastered Death Magnetic, claimed the same thing. Then the Guitar Hero 3 version surfaced. As far as I know that version isn't mastered as "hot", and doesn't sport the bad-engineering trademark of digital clipping like the CD version (which, comically, clips below digital full scale).

QUOTE (skamp @ Apr 4 2012, 16:23)

Honest question: why do we automatically blame the mastering engineers?

Because, from my understanding, the recording mixes are not digitally clipping. Because they spread FUD about lossy formats.

QUOTE

... Ted Jensen, who mastered Death Magnetic, claimed the same thing. Then the Guitar Hero 3 version surfaced. As far as I know that version isn't mastered as "hot", and doesn't sport the bad engineering trademark of digital clipping like the CD version (it comically digitally clips below digital fullscale). ...

The GH3 version doesn't prove that Ted didn't receive a hot version to master. It does prove that the original recorded tracks of the individual instruments weren't overly compressed, and these were what were supplied to the GH3 developers. If Ted is telling the truth, it means they were compressed at the point where they were mixed down to the stereo mix supplied to him. This is a common practice at the mix stage, where the artists and producers try to get the sound as close as possible to their references (other smashed music). The problem is that the mix engineers rarely have the equipment or expertise to do the job properly. Anecdotally, there's also a certain amount of ego involved - to see who can pi** higher up the wall.

To temper that a little, he does seem to expect (or at least, unfairly imply it's possible for) automatic calculations of dynamic range to tell you things that they cannot. They cannot say anything about the sound quality, microphone placement, mixing technique, use of EQ, overall style etc. of a track. Modulated white noise or a sine wave can have a huge dynamic range. Picking completely different records, and suggesting that the DR values don't seem to correlate with sound quality, is a red herring - it's one factor out of many which make a good record. The fact that other factors can outweigh it doesn't mean it's unimportant. A nice smile does not a beautiful person make - but if they never smile, then I don't think I'd like to spend my life with them. Dynamic range does not a good record make, but if most of it has gone, then I probably don't want to listen to it on a decent stereo for long.

Also, AFAIK, that specific DR rating system doesn't actually flag up clipping, or intentional mix distortion. You can make a track with one or both, but still have a large "dynamic range". DR calculators can measure the range, but can't spot specific faults. It's the specific faults (especially audible track or mix clipping) that really annoy me. Even then, someone might want a square wave applied for an effect - but they probably wouldn't have chosen to turn the three loudest bass hits/peaks in the track into a square wave if it wasn't for the loudness wars. For now, it probably takes a human to judge the difference.

Still, if the world does slowly move to a mostly SoundChecked/ReplayGained future - and at the same time people hear more new music through media that don't use radio-style compression - then "that loudness wars sound" will be largely consigned to history, exactly as both Justin and Greg suggest. In such a future, you can switch the effect on or off for a verse or chorus or line as you wish.

^Agreed. A "decent stereo", btw, includes a DAP and some good headphones these days and probably is more affordable than ever.

He's arguing,

QUOTE

Somewhere along the way, “loud” has morphed to become more than a level — It’s now an aesthetic choice of its own, and has even transcended perceived volume.

...which I don't think anyone seriously has issues with. (Though one could also argue that younger people's hearing is seriously f'd up.)

But what if, for the same material, there's a CD release with DR6 and a vinyl release with DR10 (or even DR12)? This is not at all uncommon these days. Good for vinylphiles, but not exactly fitting my definition of "unified artistic vision". Plus, I like CDs and I know that they could live up to the promise of "perfect sound forever" nowadays. There certainly is no technical reason to make CDs hotter than LPs.

Normally you would expect the artist to be creating their vision in the studio, which then serves as a reference for whatever media are created. "The medium is not the message." Which gets us back to the start of this thread and the unfortunate attempts to compensate for differences in lossy formats that may not even be real. For CDs, this feedback loop seems to be broken in many cases.

Justin again:

QUOTE

In fact, I had trouble finding many pop albums that should be worth listening to according to the Dynamic Range Database. Any rating in their system under 9DR is marked “bad”, and even Dark Side of the Moon with its album average of 10DR is labeled “transitional”.

Actually "bad" is DR7 and under. (Ahem.) But as with any single-number indicator, keeping the salt shaker handy is not a bad idea. A metal album at DR6 can actually still be quite listenable, while a pop record would sound like Cyprinus carpio at that point. (Florence and the Machine's Ceremonials is only DR5. Ugh. Lana Del Rey's debut is no different, but the vinyl is DR11.) It's the guitars that skew the DR reading. Reverb tends to do the same - Enya's Watermark sure sounds a lot more dynamic to these ears than DR12, which would seem to be an average value for a production from the olden days (those are usually DR11-12).

Not sure why DSOTM (overrated album, btw) is only DR10; WYWH is DR12. Old versions of Fleetwood Mac's Rumours are DR14 or DR15. (Those are pretty "dry".) Most of Chris Rea's '80s albums are DR13-14. The Cars' Heartbeat City is DR14. Same for Peter Gabriel's eponymous 4th a.k.a. "Security" (pre-remastering) - now that, ladies and gentlemen, is how you masterfully use dynamics. The people involved presumably cut their teeth on prog-rock in the '70s and possibly classical before that.

Justin further argues

QUOTE

I’ll admit that Aja is good for what it is, but if you find an engineer who wants to make your record sound just like it, he probably arrives at the studio wearing a gray, thinning pony-tail and a polyester polo-shirt proudly embroidered with the words “Out-of-Touch”.

(That sort of argument really screams hipster, but that's another story.) He's got a point in that quality apparently became a matter of fashion. Being the old quality geek that I am, this strikes me as problematic. Sure enough, I've heard plenty of innovative, "bleeding edge" material with dynamics approaching the flatness of a postage stamp. If my ears are any sort of indication, nowadays it is easily possible to make recordings so unnaturally dense that they result in nausea on the part of the listener. (I might be more sensitive to this than most, I don't know. Putting on some classical or whatever, it sure feels nice to listen to music that's not screaming "LISTEN TO ME!11" at the top of its lungs all the time.)

Why don't people in the industry listen to listeners anyway? It's the end user who ultimately has to put up with whatever they're putting out after all. AFAIK, artists commonly are sick and tired of hearing their own stuff at the point it's mastered, so they usually aren't of much help.

QUOTE

Why don't people in the industry listen to listeners anyway? It's the end user who ultimately has to put up with whatever they're putting out after all. AFAIK, artists commonly are sick and tired of hearing their own stuff at the point it's mastered, so they usually aren't of much help.

They do listen to the listeners -- for example, convening listening panels to compare potential singles. The problem is that they don't level match, so whatever is louder tends to get rated better. THAT is why we have the loudness wars in the first place.
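Level matching before such a comparison is trivial to do: scale both candidates to the same RMS before playback, so the panel judges the sound rather than the volume. A minimal sketch (pure Python, synthetic sine waves standing in for the two masters; a real panel would match perceived loudness, e.g. ReplayGain-style, rather than plain RMS):

```python
import math

def rms(samples):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_match(samples, target_rms):
    """Scale a signal so its RMS equals target_rms."""
    gain = target_rms / rms(samples)
    return [s * gain for s in samples]

# Stand-ins: the "loud" master is just the quiet one with 6 dB more gain.
quiet = [0.25 * math.sin(2 * math.pi * 220 * n / 44100) for n in range(4410)]
loud = [2.0 * s for s in quiet]

matched = level_match(loud, rms(quiet))
# After matching, both versions play at the same RMS level,
# removing the louder-sounds-better bias from the comparison.
```

With the gain advantage removed, any remaining preference between the two masters has to come from the mastering itself.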

QUOTE

They do listen to the listeners -- for example, convening listening panels to compare potential singles. The problem is that they don't level match, so whatever is louder tends to get rated better. THAT is why we have the loudness wars in the first place.

Not level matching essentially boils down to neglecting that people have something called a volume control. One would think that they'd have noticed that at some point? Or did nobody ever take the time to think about things like these (which would be sad, but not unlikely)? Then again, even if they had, who likes to admit that they've been wrong...

QUOTE

Not level matching essentially boils down to neglecting that people have something called a volume control. One would think that they'd noticed that at some point? Or did nobody ever take the time to think about things like these (which would be sad, but not unlikely)? Then again, even if they had, who likes to admit that they've been wrong...

Interesting point, stephan_g. Maybe someone should conduct volume-matched blind tests with this in mind. Even better if this could happen somewhere the conclusions would reach lots of people, say on the Mythbusters show. The material should be prepared by the same people that work for the big labels, only that in the mastering stage there's one moderate master and one loud master made. For example, 9:30-11:20 in this Bob Katz video: http://youtu.be/u9Fb3rWNWDA?t=9m30s

BTW I listened to Noctourniquet and liked it; I'd even deem the production values great overall, but I'm hearing lots of pumping and some harshness in the loud parts. I'll go along with the harshness as an aesthetic choice, but the pumping just sounds like crap. As an amateur producer I recognize it as a secondary effect of heavy hard limiting; you could achieve similar results with a saturation effect, without the pumping. As for Rick Rubin's false statements, sadly it seems it takes a certain kind of assertive person, besides talent, to have the privilege of working with major-label bands like TMV, RHCP, The Strokes etc.

QUOTE

Not level matching essentially boils down to neglecting that people have something called a volume control. One would think that they'd noticed that at some point? Or did nobody ever take the time to think about things like these (which would be sad, but not unlikely)? Then again, even if they had, who likes to admit that they've been wrong...

Except psychoacoustically, small differences in 'volume' are not necessarily perceived as 'loudness' changes -- they are perceived as quality changes. That's why the biasing effect of level mismatch is so insidious.

(Yes, at some point as the difference increases it will be recognized as a difference in volume.)