I bought the CD Toque Dela by Marcelo Camelo from amazon.de directly (not from an Amazon Marketplace reseller). On Amazon it says "Import", but obviously it's not from Brazil, because the package states "Made in the EU". I looked at the spectrum and there is a complete cutoff at 16 kHz. Tau Analyzer says the source is MPEG. Now I'm curious: is there any Brazilian here who has the original Brazilian CD? If so, could you please check it with Tau Analyzer (http://true-audio.com)? I've got a strong feeling that the original Brazilian CD is not sourced from MPEG.

I've paid 30€ to get a CD that is sourced from mp3. This really annoys me. The mp3 download goes for under 10€. I mean, what kind of *#@* works at Universal Music? Are they not able to get the original lossless files from Brazil to Europe to decently master the CD there? Do they think their customers are fools? Instead of spending their money on lawyers and lobbyists, they should spend it on delivering decent quality to their customers. In future I will watch out for Universal Music releases and give them a wide berth.
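For anyone who wants to run a similar check on their own rip, a crude cutoff estimate can be read off an averaged FFT spectrum. Here's a minimal numpy sketch; synthetic lowpassed noise stands in for the rip, and the frame size and 24 dB threshold are arbitrary choices of mine, not what Tau Analyzer actually does:

```python
import numpy as np

FS = 44100          # CD sampling rate
CUTOFF = 16000      # simulate an MP3-style lowpass at 16 kHz

rng = np.random.default_rng(0)
x = rng.standard_normal(FS * 2)            # 2 s of white noise

# Brick-wall lowpass via FFT zeroing (stand-in for an encoder's lowpass)
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / FS)
X[freqs > CUTOFF] = 0
x_lp = np.fft.irfft(X, n=len(x))

def estimate_cutoff(signal, fs, frame=4096, drop_db=24.0):
    """Highest frequency whose frame-averaged level stays within
    drop_db of the median passband level below 10 kHz."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)).mean(axis=0)
    f = np.fft.rfftfreq(frame, 1 / fs)
    ref = np.median(spec[f < 10000])
    above = spec > ref * 10 ** (-drop_db / 20)
    return f[above].max()

est = estimate_cutoff(x_lp, FS)
print(f"estimated cutoff: {est:.0f} Hz")   # should land near 16000
assert abs(est - CUTOFF) < 200
```

On a real rip you'd load PCM from the CD instead of generating noise; a hard shelf well below Nyquist is suspicious, though as later posts note, not proof of a lossy source on its own.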

You don't *know* that it was from a lossy source - it *might* be. Programs such as Tau Analyzer are not 100% reliable...not even close IME.

A more likely explanation is that the digital source audio was at some point recorded onto a DAT at a 32 kHz sampling rate, which would give a sharp 16 kHz cutoff frequency.
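The arithmetic behind that guess is just the Nyquist limit: a sampling rate can carry nothing above half of itself, and resampling a 32 kHz recording up to 44.1 kHz doesn't bring the missing band back. A trivial sanity check:

```python
def nyquist(fs_hz: float) -> float:
    """Highest frequency a given sampling rate can represent (fs / 2)."""
    return fs_hz / 2.0

assert nyquist(32000) == 16000.0   # 32 kHz DAT -> hard 16 kHz ceiling
assert nyquist(44100) == 22050.0   # CD itself could go up to 22.05 kHz
print(nyquist(32000))
```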

OTOH, it has been shown that Universal applies audible watermarking to many of their online digital releases that are distributed by music sites other than UMG's own online store, indicating that they don't have a great deal of respect for their customers.

So, while I still think that a 32 kHz sampling rate is the most likely explanation with this particular CD, I, too, "watch out for Universal Music releases and give a wide berth to them," certainly their online releases.

Edit: Could you post a <30 sec. clip from the CD? It would be interesting to at least see and hear an example of it.

As long as the recording has no audible MP3 artifacts, I don't really see the problem. With every CD you only get what the mastering engineer decided to put onto the disc, i.e. you don't know anything about the treatment the recording got before being mastered to CD. IMHO there are far worse, highly audible mastering practices (digital clipping, overuse of DRC, that kind of thing) out in the wild today than a transcode from MP3, which is likely inaudible. A lowpass at 16 kHz is certainly not sufficient to show that this was transcoded from MP3, anyway.

There's an increased chance of encoding/transcoding artefacts when the OP encodes their $30 CD to mp3.

Hmmm... is this true with modern, transparent codecs? How does a nominally transparent codec suddenly lose its ability to determine what's audible and what's not, just because another codec was used somewhere back in the processing chain?

The wiki has some words on the subject but they weren't very enlightening I'm afraid, and the threads referred to are somewhat old, so perhaps out of date w.r.t. latest codecs.

How does a nominally transparent codec suddenly lose its ability to determine what's audible and what's not, just because another codec was used somewhere back in the processing chain?

'Transparent' is not akin to 'equal'; it is akin to 'approximately equal'. 2 × 0 = 0, but 2 × 'approximately zero' is not necessarily 'approximately zero'. If you cannot distinguish a given-size E from a same-size F at a certain distance, that does not mean you cannot tell an E from a Γ (here I just 'doubled the artifact', right?)
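The letters metaphor can be made numeric with two toy 'codecs' that each round to their own grid: each pass stays within its own error bound relative to its *input*, but the cascade can drift beyond either bound relative to the original. All the numbers below are just illustrative:

```python
def quantize(x: float, step: float) -> float:
    """Toy 'lossy codec': round x to the nearest multiple of step."""
    return round(x / step) * step

x = 0.16
pass1 = quantize(x, 0.3)        # -> 0.3, error 0.14 (within step/2 = 0.15)
pass2 = quantize(pass1, 0.4)    # -> 0.4, error vs the original now 0.24

assert abs(pass1 - x) <= 0.3 / 2        # pass 1 honoured its own bound
assert abs(pass2 - pass1) <= 0.4 / 2    # pass 2 honoured its own bound
assert abs(pass2 - x) > 0.4 / 2         # ...but the cascade exceeds either alone
print(pass1, pass2)
```

Each "codec" is individually well-behaved; it's the composition that isn't.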

Now a 256 kb/s mp3 file doesn't have more information content than what could be stored at 256 kb/s, so in principle it is possible to decode it to PCM and losslessly compress it back to 256 kb/s – proof: brute force, generating every single possible mp3 file, decoding and comparing (don't hold your breath). But no such compression algorithm exists. In reality, the encoder happily throws away something, oblivious to the origins of the file.

If you have access to MS Excel, try the following: (1) Generate a spreadsheet. (2) Copy the file. (3) Password-protect the copy (that is encryption). (4) Zip both files. (5) Make an encrypted .zip of the first one. (6) Compare the file sizes. (If they are approximately equal, then your Excel is newer than mine...) But the information content should be the same, right?
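The same experiment can be run without Excel. In this sketch, zlib stands in for the .zip compressor, and random bytes stand in for the ciphertext – which is exactly what good encryption looks like to a compressor, and why the protected copy stops compressing even though (given the key) it carries the same information:

```python
import os
import zlib

# ~112 kB of highly structured "spreadsheet-like" data
plain = b"row;col;value\n" * 8000

# Stand-in for the encrypted copy: same length, statistically random bytes.
cipher = os.urandom(len(plain))

zipped_plain = zlib.compress(plain, 9)
zipped_cipher = zlib.compress(cipher, 9)

print(len(plain), len(zipped_plain), len(zipped_cipher))
# The structured file shrinks dramatically; the "encrypted" one not at all.
assert len(zipped_plain) < len(plain) // 10
assert len(zipped_cipher) >= len(cipher)
```

The analogy to audio: like the compressor here, an mp3 encoder can only exploit structure it can actually see, not the information content that exists "in principle".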

then the codec has suddenly forgotten that F -> Γ is not transparent (which it knew in the 1st pass).

F -> Γ might be transparent in this hypothetical scenario. There's nothing for the encoder to magically forget.

E -> Γ might not be transparent, but the encoder doesn't know it was an E in the original. It just has an F, and it knows how to handle an F.

Still, this metaphor, though it is logically sound and attractive, isn't quite how lossy transcoding might exacerbate flaws, since it's not just about what the encoder throws away, but also about what the encoder creates.

In other words, is an artifact difficult to encode? (where "difficult" means that an input signal has a high chance of being distorted after encoding to a certain bitrate X). Will a barely-audible ringing artifact from pass 1 be doubled or tripled or quadrupled by a second pass, because it is essentially new and difficult information? The short answer is "Yes", but the linked thread doesn't discuss the inner workings of encoders; just the data.

Noise adds. All lossy codecs add noise. It has to be audible eventually. Temporal smearing smears ever further. Most lossy codecs add temporal smearing. It has to be audible eventually.

Whether it takes one, two, ten or 100 generations is the question - but if I'm paying for a CD, I shouldn't have to worry that it's already been through one generation!
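The "noise adds" half of that argument is easy to simulate: if every generation contributes independent noise of the same power, total noise power grows linearly with the generation count, so SNR falls by about 10·log10(N). A toy model, with additive white noise standing in for coding noise (the 0.01 noise level and 10 generations are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # 1 s of 440 Hz

def snr_db(clean, noisy):
    """Signal-to-noise ratio in dB, treating (noisy - clean) as the noise."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

x = signal.copy()
snrs = []
for gen in range(10):
    x = x + rng.normal(0, 0.01, x.shape)   # one 'generation' of coding noise
    snrs.append(snr_db(signal, x))

print([round(s, 1) for s in snrs])
# Ten generations of equal, independent noise cost ~10*log10(10) = 10 dB:
assert abs((snrs[0] - snrs[-1]) - 10) < 1.5
```

Real codec noise isn't white or independent of the signal, so real cascades can behave better or worse than this, but the direction of travel is the same.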

Cheers,
David.

P.S. Karaoke / centre-cut often fails miserably with otherwise "transparent" lossy files. Algorithms that "create" surround sound from stereo can face similar problems: they work fine with lossless audio, but they can render otherwise inaudible artefacts in lossy files very audible.
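The centre-cut failure mode is visible in a toy mid/side model: side = (L − R)/2 cancels everything common to both channels, but per-channel codec noise is uncorrelated between channels, so it survives in the side channel with the masking "vocal" gone. A numpy sketch, with white noise standing in for codec noise:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 44100
vocal = np.sin(2 * np.pi * 220 * np.arange(n) / n)   # centred "vocal"

# Lossy codecs can leave *independent* noise in each channel:
left = vocal + rng.normal(0, 0.01, n)
right = vocal + rng.normal(0, 0.01, n)

mid = (left + right) / 2    # vocal plus averaged noise
side = (left - right) / 2   # vocal cancels; only uncorrelated noise remains

vocal_in_side = np.abs(np.dot(side, vocal)) / np.dot(vocal, vocal)
print(f"vocal leakage into side channel: {vocal_in_side:.5f}")
print(f"side-channel RMS (pure codec noise): {np.sqrt(np.mean(side ** 2)):.4f}")

# The centred vocal is gone from the side channel, but the noise is not:
assert vocal_in_side < 0.01
assert np.sqrt(np.mean(side ** 2)) > 0.005
```

With lossless input the side channel would be exactly zero here, which is why these tricks only fall apart on lossy material.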

We are missing the point here. Imported albums should keep the original PCM; it's not a remaster or another edition, he says it's just an import. I understand he may not hear any difference, but again, that's not the point. What if he wants to rip it? Transcoding will degrade the quality even more, since it's lossy to begin with. 30€ for this album is a rip-off; even 1€ would be, IMO.

I still like to buy audio CDs for the following reasons:
* I rip the CDs to a lossless archive and then convert them to different formats (lossless/lossy) depending on the player (media center, car audio, cellphone...)
* With a pressed CD it's safe to assume the audio doesn't carry an individual watermark

With this CD I may run into problems (artifacts) when recompressing to lossy, and if I'd known beforehand that it was mastered from a lossy source, I could just have bought the mp3 version and saved 20€.

It's probably up to us observant customers to warn others about such rip-offs by posting warnings in Amazon reviews and the like.

Thanks for the sample, and now I'm much more inclined to believe that it is indeed from a lossy source, though not because of the 16 kHz cutoff.

In the first 11 seconds, eahm's spectrogram shows noise extending cleanly up to almost the Nyquist limit, although slowly decreasing in level.

In contrast, your sample shows a hard cut at approx. 13.5 kHz, with occasional spikes up to 16 kHz. While that's not the classic MP3 sfb21 issue, it's definitely not an artifact I've ever seen/heard from a resampler.

In other words, is an artifact difficult to encode? (where "difficult" means that an input signal has a high chance of being distorted after encoding to a certain bitrate X). Will a ringing artifact from pass 1 be doubled or tripled or quadrupled by a second pass, because it is essentially new and difficult information? The short answer is "Yes"

So whereas an ideal lossy codec would remove stuff that you can't hear, and replace it with other stuff that you can't hear, and that would be inherently repeatable, current codecs are unable to distinguish "other stuff that you can't hear" from stuff that you can hear, so something has to give on a second pass.
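That "inherently repeatable" property is idempotence: if the second encode sees input that is already valid output of the same codec at the same settings, there is nothing left to throw away. Real codec cascades rarely line up that neatly (different framing, different bit allocation), which is where the second-pass loss comes from. A toy quantizer shows both cases:

```python
def quantize(x: float, step: float) -> float:
    """Toy 'codec': round to the nearest multiple of step."""
    return round(x / step) * step

x = 0.37
once = quantize(x, 0.25)
twice = quantize(once, 0.25)     # same settings: input is already a codeword
assert once == twice             # idempotent -> no extra second-pass loss

# A second codec whose grid is shifted by 0.05 (a stand-in for misalignment):
misaligned = quantize(once + 0.05, 0.25) - 0.05
assert misaligned != once                    # the second pass changed the data
assert abs(misaligned - x) > abs(once - x)   # ...and moved it further from x
print(once, twice, misaligned)
```

This is only a sketch of the alignment idea; actual encoders also generate new content (ringing, pre-echo) that the next pass must then spend bits on, as discussed above.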

Was reading this thread and joined the board to add a different viewpoint on why these tracks were mastered like this.

Does *anyone* have tracks that are not altered by mpeg encoding?

I kinda felt like this was intentional. GLU has always had superb control over their high-frequency transients, and I feel like it was done that way to thwart additional MP3 encoding, to take advantage of these encoding artifacts as part of the music (something that will inevitably be short-lived in the grand scheme of things), and/or to give the finger to corporate music sales. As for whether or not the artifacts are subjective, I gotta say that on THIS album they are obvious, and seemingly on purpose.