USB-C and Lightning headphones aren’t great news for everyone

The 3.5mm port is dying – at least when it comes to smartphones. If the persistent Lightning headphone rumor wasn’t enough to persuade you, the fact that Motorola beat Apple to the punch should be. Motorola’s new Moto Z and Moto Z Force don’t have that familiar circular hole for your cans to plug into, and it now seems inevitable that almost every phone within a few years will forgo the port in favor of a single socket for both charging and using headphones.

It’s all about control. You can’t put DRM on a 3.5mm jack, but you can do so on a digital port or wireless connection. Imagine only Beats headphones being certified to pull the best quality audio out of an iPhone, protected through Apple DRM.

You know it’s going to happen.


125 Comments

Imagine only Beats headphones being certified to pull the best quality audio out of an iPhone

That’d be the biggest waste ever. Have you tried Beats headphones? Apple could pipe out the highest bitrate, best mastered PCM audio file ever and it’d still sound like veiled mud on a Beats headphone!

Meh, whatever. It’s about time I got myself a dedicated DAP anyway. You’ll have to pry my Oyaides from my cold dead hands.

Hope Apple are prepared for a f–kton more warranty claims than usual. Sure, 3.5mm jacks fail, but I’m certain they have had at least five Lightning port failures for every 3.5mm jack failure. The Lightning port is far less robust. People have had Lightning port snafus from just using their iPhones while charging.

[q]signal chain says you have to have the best possible source before rendering it, or why bother. just shining turds if you listen to AAC or MP3 through expensive headphones. [/q]

I’ve heard that a lot from audiophiles over the years. I’d find it more convincing if any of them could consistently tell the difference between MP3 and lossless in blind listening tests.

Even when it comes to particularly difficult-to-encode audio samples, the differences are slight, requiring trained ears and careful listening with high-fidelity equipment to identify. With portable audio, even with expensive headphones, I find it implausible that anyone would really notice a difference.

I use lossless files for archiving so that I won’t have to transcode between lossy formats in the future, but even 256Kb/s VBR MP3 is more than enough for any portable listening. With modern codecs at a decent bitrate the file format used is one of the least important things when it comes to sound quality.
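As a back-of-the-envelope illustration of that archiving trade-off, here is a rough size comparison for a single track (the track length and the FLAC compression ratio are assumptions for illustration, not measurements):

```python
# Rough file-size comparison for one 4-minute track.
# Assumed figures: CD audio is 2 channels x 16 bits x 44100 Hz;
# FLAC typically compresses that to roughly 60% (varies by material).
duration_s = 4 * 60

pcm_bps = 2 * 16 * 44100                      # 1,411,200 bits per second
pcm_mb = pcm_bps * duration_s / 8e6           # uncompressed PCM, in MB
flac_mb = pcm_mb * 0.6                        # assumed lossless archive size

mp3_kbps = 256                                # the VBR target mentioned above
mp3_mb = mp3_kbps * 1000 * duration_s / 8e6   # lossy portable copy, in MB

print(f"PCM ~{pcm_mb:.1f} MB, FLAC ~{flac_mb:.1f} MB, MP3 ~{mp3_mb:.1f} MB")
# → PCM ~42.3 MB, FLAC ~25.4 MB, MP3 ~7.7 MB
```

So under these assumptions a lossless archive costs roughly three times the space of the 256Kb/s files – that, plus avoiding future transcoding, is the trade being discussed.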

in the visual realm speakers would be akin to eyeglasses. you wouldn’t tell people who can see pixels in your low-resolution output to just get better glasses, would you?

Speakers are the things that actually output the music we hear – the equivalent in the visual world would be the display itself.

[q]use lossless audio, 24bit if available, then a good DAC and amp and almost any speakers will sound good. they will at least sound the best they are capable of, aka operating at peak efficiency. [/q]

Speakers/headphones are by far the most important component determining the quality of sound from an audio system. Obviously it’s better to have a good all round system, but it makes much more sense to connect expensive speakers to a cheap MP3 player & basic amp than connect cheap speakers to an expensive source playing 24bit lossless audio. Those cheap speakers would be incapable of delivering any of the subtle benefits of the higher quality source.

[q]signal chain says you have to have the best possible source before rendering it, or why bother. just shining turds if you listen to AAC or MP3 through expensive headphones. [/q] Correct. Crap in, crap out. No speaker of any kind is capable of miracles.

I’ve heard that a lot from audiophiles over the years. I’d find it more convincing if any of them could consistently tell the difference between MP3 and lossless in blind listening tests.

Even when it comes to particularly difficult-to-encode audio samples, the differences are slight, requiring trained ears and careful listening with high-fidelity equipment to identify. With portable audio, even with expensive headphones, I find it implausible that anyone would really notice a difference.

Most “audiophiles” I’ve encountered are either clueless or fixated on what may be true only on paper but not in real-world experience. Even highly experienced audio professionals don’t rely solely on their ears. Absolutely nobody has perfect hearing, and any pro worth his rate is always going to use visual aids as well.

I use lossless files for archiving so that I won’t have to transcode between lossy formats in the future, but even 256Kb/s VBR MP3 is more than enough for any portable listening. With modern codecs at a decent bitrate the file format used is one of the least important things when it comes to sound quality.

Even `modern` methods have deficiencies in certain areas. If you’re going to compress audio into a playable form, you should select the codec whose drawbacks you find most acceptable.

use lossless audio, 24bit if available, then a good DAC and amp and almost any speakers will sound good. they will at least sound the best they are capable of, aka operating at peak efficiency.

No, ..no, ..no. Nothing about that blanket comment is correct. The best bit-depth to use depends on the source itself and what kind of processing will be done. Every single piece of the signal chain is equally important, from the sources going into a recording to the speaker that finally presents it to your ears. If you introduce crap at any point you have damaged/degraded the signal. Period.

[q]Speakers/headphones are by far the most important component determining the quality of sound from an audio system. Obviously it’s better to have a good all round system, but it makes much more sense to connect expensive speakers to a cheap MP3 player & basic amp than connect cheap speakers to an expensive source playing 24bit lossless audio. Those cheap speakers would be incapable of delivering any of the subtle benefits of the higher quality source. [/q]

A cheap amp can ruin a great source and plugging expensive speakers into it won’t fix it. If you have to focus on just one piece of the puzzle then choose between the amp and the speaker, but do so with the understanding that both are equally as important. A sacrifice with one can’t be reversed or overcome by the other.

When it comes to portable use on devices like iPhones it seems like a non-issue, and is certainly unimportant compared with the quality of headphones/speakers used.

A cheap amp can ruin a great source and plugging expensive speakers into it won’t fix it.

I don’t think I’ve ever run into a properly functioning amp so bad that it can ruin a great source. I have run into speakers that sound terrible regardless of what they’re plugged into.

[q]If you have to focus on just one piece of the puzzle then choose between the amp and the speaker, but do so with the understanding that both are equally as important. A sacrifice with one can’t be reversed or overcome by the other. [/q]

Every component may be important, but that doesn’t mean that their impact on sound quality is equal. Looking online, there’s a lot of disagreement about how much things like amps and cables really matter in an audio system.

The blind testing of amplifiers that I’ve seen generally finds little evidence of difference in sound quality between different amps (if they aren’t overdriven and clipping). That includes testing of cheap amps against expensive ones.

In contrast, people can consistently tell the difference between different speakers in double-blind ABX tests.

[q]The sound quality of AAC and MP3 files isn’t inherently “crap”. The idea that listening to lossy files through expensive headphones amounts to “shining turds” isn’t correct at all. [/q] That’s not what I said. My statement had to do with expensive headphones not being able to perform miracles.

Any deficiencies are so small that I’m unable to find a single listening test where people could consistently tell the difference between high-bitrate lossy and lossless files.

You’re implying that the differences are meaningless, which would also imply that all “modern” codecs produce the same quality at the same bitrates, and that’s absolutely false.

This test is the nearest thing, as there was one interesting result where an individual listener successfully chose between 256Kb/s AAC and lossless:

Listener tests you find on the internet are worth about as much as a wet napkin.

When it comes to portable use on devices like iPhones it seems like a non-issue, and is certainly unimportant compared with the quality of headphones/speakers used.

First, the quality of the speakers will always be important. Secondly, you’re comparing apples & oranges. One of those things is responsible for constructing the signal itself while the other is responsible for reproducing that signal into an audible form.

I don’t think I’ve ever run into a properly functioning amp so bad that it can ruin a great source. I have run into speakers that sound terrible regardless of what they’re plugged into.

That doesn’t make what I’ve said any less true. In all my years of working in the field, I’ve yet to see a professional studio using mediocre amps for anything. If you’d like to learn more, read up on converters.

Every component may be important, but that doesn’t mean that their impact on sound quality is equal. Looking online, there’s a lot of disagreement about how much things like amps and cables really matter in an audio system.

I have zero care about what self-proclaimed “audiophiles” on the internet have to say so let’s go ahead and remove them altogether. Now, the impact a bad piece of equipment has on the audio entirely depends on what kind of impact it’s causing. Any piece of the audio chain is capable of introducing minute impact or drastic impact so it’s completely meaningless to argue in general what matters most. What matters most depends on each specific scenario – there is no one-size-fits-all answer.

The blind testing of amplifiers that I’ve seen generally finds little evidence of difference in sound quality between different amps (if they aren’t overdriven and clipping). That includes testing of cheap amps against expensive ones.

That only means they’re not actually using the correct equipment for that specific test.

[q]In contrast, people can consistently tell the difference between different speakers in double-blind ABX tests. [/q]

That says nothing about any other kind of test.

I’ll share this with you… The people who really know & understand this subject don’t debate or argue about it, because their real-world knowledge & experience always trumps someone’s flawed logic or opinion. A lot of the people trying to find answers aren’t going about it correctly. According to the internet, the world is filled with audio professionals, or “audiophiles” thinking they’re as good or better. In reality the only people who truly know their stuff are the guys who’ve got years & years of hands-on experience – not the people who read some technical paper or conduct some ill-thought-out test.

[q]That’s not what I said. My statement had to do with expensive headphones not being able to perform miracles. [/q]

You responded “Correct. Crap in, crap out.” to someone who described listening to AAC or MP3 through expensive headphones as “just shining turds”. Apologies if I somehow misinterpreted that…

Obviously decent headphones can’t perform miracles — they won’t help much if your source is an old 128Kb/s CBR file downloaded a decade ago — but they will make decent quality lossy files sound fantastic.

You’re implying that the differences are meaningless, which would also imply that all “modern” codecs produce the same quality at the same bitrates, and that’s absolutely false.

I made it clear that I was specifically talking about audible difference between high-bitrate files. Obviously if you want to encode files at the minimum possible size, while maintaining acceptable quality, your choice of encoder and settings become more important.

All I’m saying is that high-bitrate lossy files (AAC, MP3 or pretty much anything else), created with modern encoders, have very few audible deficiencies. They do benefit from decent speakers and headphones and are certainly all that’s needed for portable use.

Listener tests you find on the internet are worth about as much as a wet napkin.

Your individual opinion is worth a whole lot less.

Listening tests become meaningful when you have dozens of examples, featuring thousands of subjects, and they all show the same result. Even if some had faulty methodology, the different software used and the different people involved make it unlikely that they’d all be incorrect. I think it’s telling that the only response from naysayers is to hand-wave all those tests away, rather than providing a shred of counter-evidence.

If there was a significant difference between lossless audio and different high-bitrate lossy files then it wouldn’t be such a challenge to prove it. The idea that someone listening normally to music on a device like a smartphone will notice any subtle differences seems downright absurd.
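For what it’s worth, the reason ABX results are treated as evidence either way comes down to simple binomial statistics: each trial is a 50/50 choice if the listener is guessing. A minimal sketch (the trial counts below are made up for illustration):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` answers right out of
    `trials` ABX trials by pure guessing (each trial is a coin flip)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct: unlikely to happen by chance, so it suggests
# the listener really can hear a difference.
print(abx_p_value(12, 16))   # ≈ 0.038

# 9 of 16 correct: entirely consistent with guessing.
print(abx_p_value(9, 16))    # ≈ 0.40
```

This is why a single lucky run proves little, while many independent tests that all fail to beat chance add up to meaningful evidence.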

That only means they’re not actually using the correct equipment for that specific test.

In my opinion the most meaningful testing equipment in a test of audio gear is the listener’s ears, with double blind testing used to deal with placebo effect.

If people can’t tell the difference between two components in a blind test then I think that’s evidence that the differences are minor to the point of irrelevance. Even if some specialist equipment can detect a difference, it doesn’t really matter if people can’t actually hear it.

To me it makes sense to focus on the components that actually make a demonstrably audible difference to the sounds people hear.

[q]In reality the only people who truly know their stuff are the guys who’ve got years & years of hands-on experience – not the people who read some technical paper or conduct some ill-thought-out test. [/q]

I’ve run into professionals with years of experience who buy into some of the worst audiophile snake oil, e.g. ridiculously expensive cables. Common sense and conventional wisdom quite often end up debunked when put to the test. I’ll certainly take actual testing over anyone’s personal opinion.

[q]Obviously decent headphones can’t perform miracles — they won’t help much if your source is an old 128Kb/s CBR file downloaded a decade ago — but they will make decent quality lossy files sound fantastic. [/q] No they won’t. They’ll make decent quality audio sound decent. They will never make decent quality audio sound fantastic.

I made it clear that I was specifically talking about audible difference between high-bitrate files. Obviously if you want to encode files at the minimum possible size, while maintaining acceptable quality, your choice of encoder and settings become more important.

All I’m saying is that high-bitrate lossy files (AAC, MP3 or pretty much anything else), created with modern encoders, have very few audible deficiencies. They do benefit from decent speakers and headphones and are certainly all that’s needed for portable use.

Your choice of encoder is equally important no matter what target bitrate or quality measure you intend to use. It’s simply not true that encoders really only differ at lower bitrates.

Your individual opinion is worth a whole lot less.

You couldn’t be more wrong. My opinions and statements are based on nearly three decades of first-hand experience as a professional in the field. Either this is my area of expertise or I’ve been a master at fooling people and tricking them out of their money for a long long long time.

Listening tests become meaningful when you have dozens of examples, featuring thousands of subjects, and they all show the same result.

Absolutely wrong. Flawed tests produce flawed results regardless of how many people participate in the test itself. This happens all the time, so you should be more concerned with the experience and qualifications of the people designing and conducting the tests than with those who participate.

[q]If there was a significant difference between lossless audio and different high-bitrate lossy files then it wouldn’t be such a challenge to prove it. [/q] It’s not `such a challenge` to prove when you have a proper testing environment. If the lowest common denominator in your test is garbage, you’ve ruined the test. Unless the test was about ruining audio.

The idea that someone listening normally to music on a device like a smartphone will notice any subtle differences seems downright absurd.

I haven’t seen anyone make that claim so I don’t know what you’re replying to.

In my opinion the most meaningful testing equipment in a test of audio gear is the listener’s ears, with double blind testing used to deal with placebo effect.

Of course it is. The buck always has and always will stop with the listener, but that in no way discredits the importance of everything else used in the process of creating and reproducing audio.

If people can’t tell the difference between two components in a blind test then I think that’s evidence that the differences are minor to the point of irrelevance.

You simply can’t arrive at that reasonably without knowing the specifics of the test. Only when you’re absolutely sure those two components are isolated and nothing else is introducing a negative effect can they be properly tested. Too many times people try to conduct audio tests and neglect other very important aspects because that’s not what they’re actually testing.

To me it makes sense to focus on the components that actually make a demonstrably audible difference to the sounds people hear.

As I’ve already stated, every single component can have anywhere from negligible to tremendous impact on the audio. You cannot disregard any point in the signal path. That ignorance is exactly why so many people make mistakes.

[q]I’ve run into professionals with years of experience who buy into some of the worst audiophile snake oil, e.g. ridiculously expensive cables. Common sense and conventional wisdom quite often end up debunked when put to the test. I’ll certainly take actual testing over anyone’s personal opinion. [/q]

It’s true that not all professionals are equals, no matter what field you’re talking about. I’ve come across countless individuals working professionally in audio & video production who lacked knowledge or a greater understanding of how to do their job. Those people are everywhere and they aren’t highly regarded. I know from your descriptions that you’ve never run into anyone in the upper echelon – people who really know their sh.t..

As far as trusting tests over opinions.. You can absolutely do that if you wish, but you could easily be placing your trust in something that was flawed in its very design, and the test results you believe in so much may be meaningless rather than having any real value.

If audio is a truly interesting subject to you, I strongly recommend you don’t discard any sources. Read everything you can and take it all with a grain of salt. Some of it is subjective and some of it isn’t.

OK, now, I do believe that you may have long, meaningful and skilled experience with sound processing, but the thing is, if you buy or download music produced by good professionals, or even if you rip it with a good ripper and from a good source, the weakest point will be the mechanical reproducer most of the time. I have yet to see a bad DAC in the last 10 years, but I’ve seen a lot of bad amplifiers, and a lot more bad speakers and headphones than anything else. Sure, regular people may introduce a lot of distortion when adjusting the sound to their liking, but this is not what the other guy was talking about, I think.

And there is also the scam pushing people to buy cables with gold-plated contacts, super amplifiers whose distortion curve is only observable when the sound is almost ruining your cochlea (or impairing it for some time), and speakers with unbelievable price tags whose performance is only minimally better than a just-good set when analyzed with professional equipment, and indistinguishable to most human ears. Of course, this phenomenon is not exclusive to the sound industry; you have it also in cars that can reach more than 200 mph when the limit all over the globe rarely exceeds 75 mph, and in many other fields like screen density, camera resolution and so on, all of them reaching a point where they simply surpass the capabilities of human biological senses.

So, yes, I agree with the other guy: if you buy good music done right, or even bad music done right, don’t turn the volume to the max, and must decide between a top device with the original headphones or a merely good device with good headphones, then most of the time, almost invariably in my own sampling, your experience will be better if you buy good headphones.

[q]And there is also the scam pushing people to buy cables with gold-plated contacts, super amplifiers whose distortion curve is only observable when the sound is almost ruining your cochlea (or impairing it for some time), and speakers with unbelievable price tags whose performance is only minimally better than a just-good set when analyzed with professional equipment, and indistinguishable to most human ears. Of course, this phenomenon is not exclusive to the sound industry; you have it also in cars that can reach more than 200 mph when the limit all over the globe rarely exceeds 75 mph, and in many other fields like screen density, camera resolution and so on, all of them reaching a point where they simply surpass the capabilities of human biological senses. [/q] You’re right about that, but I hope you’re not mistaking anything I’ve said for any of that. I’m trying to stress that yes, everything matters, and yes, everything has the potential to degrade quality. That’s not meant to be a sales pitch of any kind, just a simple fact of reality.

[q]So, yes, I agree with the other guy: if you buy good music done right, or even bad music done right, don’t turn the volume to the max, and must decide between a top device with the original headphones or a merely good device with good headphones, then most of the time, almost invariably in my own sampling, your experience will be better if you buy good headphones. [/q]

Better headphones are a benefit when the source is of higher quality than the lesser headphones are able to produce. There’s no advantage to better headphones when the source is of lower quality than the lesser headphones. That’s such an easy concept to grasp I truly don’t get why people have it confused. There’s no such thing as miracle headphones that magically turn bad quality into good, or decent quality into fantastic. That is simply not how it works. But, sadly, that doesn’t stop people from believing they’ve got magical equipment performing miracles.

Better headphones are a benefit when the source is of higher quality than the lesser headphones are able to produce. There’s no advantage to better headphones when the source is of lower quality than the lesser headphones. That’s such an easy concept to grasp I truly don’t get why people have it confused. There’s no such thing as miracle headphones that magically turn bad quality into good, or decent quality into fantastic. That is simply not how it works. But, sadly, that doesn’t stop people from believing they’ve got magical equipment performing miracles.

You need to work on your dismal reading comprehension if you think that’s what I said.

The point I made is that high-bitrate lossy files are so close in quality to their lossless source that any difference is almost always imperceptible. With modern encoders, and the use of high-bitrates, the file format is unlikely to ever be a significant quality bottleneck, regardless of the hardware playing it. This is backed up by numerous double blind ABX listening tests comparing different formats.

There’d be little real benefit in replacing those lossy files with anything higher quality, not even the lossless original they were created from. In contrast, changing headphones can make a significant and easily heard difference to the sound quality.

I’m not sure why you’re finding that confusing, or what blathering on about “miracles” is meant to prove…

The above amounts to nothing more than an exercise in showing how little you actually know about encoders and how they work. These kinds of silly comments (not points) you’re making are a clear sign you’re not interested in learning anything.

The above amounts to nothing more than an exercise in showing how little you actually know about encoders and how they work. These kinds of silly comments (not points) you’re making are a clear sign you’re not interested in learning anything.

And this kind of smug non-response, failing to address the points I made or deal with the evidence supporting them, is a clear sign that you don’t have anything to teach me.

speakers and wires and amps can all do much better than play mp3 files.

lossy coding is “perceptual coding”, meaning they try to remove things they think you can’t hear. it works. get us to agree ‘scientifically’ on what is missing from a piece of lossy music. you can’t. it’s music, it’s the art form most tied into our emotional state.

i’m a broken record and you won’t change me. i’m not an audiophile, i’m poor and have worked in and around recording studios most of my life.

mainstream consumer and tech person is usually confused about audio. nothing new there. but more ignorance now than ever.

you need pure source file, not degraded, to start with.

i’d take a pure source file + good DAC + good amp playing through any set of speakers over a phone playing through expensive speakers. i love music, and i love real instruments and real reverb and real voices. lossy compression kills all of that.

[q]No they won’t. They’ll make decent quality audio sound decent. They will never make decent quality audio sound fantastic. [/q]

By decent quality, I mean files where the difference from the source is imperceptible (except in extremely rare and exceptional cases). My point is that, when that’s the case, it’s choice of headphones that can make a big difference to sound quality, not the file format of the source being played.

Your choice of encoder is equally important no matter what target bitrate or quality measure you intend to use. It’s simply not true that encoders really only differ at lower bitrates.

It is true when talking about audible differences, as evidenced by the various listening tests that consistently fail to detect any difference. If you have any evidence to the contrary you’re free to present it…

Absolutely wrong. Flawed tests produce flawed results regardless of how many people participate in the test itself. This happens all the time, so you should be more concerned with the experience and qualifications of the people designing and conducting the tests than with those who participate.

Not that I find this kind of appeal to authority very convincing, but some of those tests have been conducted by people who have extensive experience. There are professionals with relevant qualifications on sites that conduct tests (e.g. hydrogenaud.io), including people involved in developing audio hardware and codecs.

Do you actually have any criticism of the methodology used in the ABX double blind tests I’m talking about?

Do you have any evidence that each and every test has somehow been incompetently carried out?

It’s not `such a challenge` to prove when you have a proper testing environment. If the lowest common denominator in your test is garbage, you’ve ruined the test. Unless the test was about ruining audio.

Then where is the evidence to support your argument?

Have people with “proper testing environments” simply never bothered to carry out the relevant listening tests?

I haven’t seen anyone make that claim so I don’t know what you’re replying to.

These are comments under an article about the retirement of the 3.5mm jack on smartphones. This particular comment thread started when I responded to ezraz, who was talking about portable audio.

You responded to him too, agreeing with his assertion about the low quality of MP3 and AAC, after he claimed that listening to them through expensive headphones was akin to “shining turds”. That’s primarily what I’m disagreeing with here.

[q]Only when you’re absolutely sure those two components are isolated and nothing else is introducing a negative effect can they be properly tested. Too many times people try to conduct audio tests and neglect other very important aspects because that’s not what they’re actually testing. [/q]

I’ve often heard this kind of thing when testing disagrees with someone’s opinion. I personally find it implausible that all these tests, many conducted by experienced and qualified people, could invariably be so badly flawed.

But even if that was the case, to me it indicates what tiny differences we’re really talking about here. If there were significant differences (as there are with different speakers and headphones) they’d easily be identified even in a supposedly flawed test (as they are with different speakers and headphones).

If none of the tests devised are perfectly sensitive enough to allow people to hear any differences that do exist, I find it unlikely that those differences will ever be heard in real world listening by the vast majority of people.

Dave_K, rather than going in circles until the end of time, I’m going to opt not to pick apart your entire post. You lack the experience I have in this field, and I lack the ability to ignore what that experience has taught me, so we will always be at odds… That’s OK and not meant as a swipe. I will comment on the following though:

If none of the tests devised are perfectly sensitive enough to allow people to hear any differences that do exist, I find it unlikely that those differences will ever be heard in real world listening by the vast majority of people.

Most people aren’t using more than mediocre-to-decent equipment. They naturally use and are used to consumer grade stuff, and in some cases “pro-sumer”, which is often more marketing than anything else. For that reason alone most people are already listening to audio at a deficit. Additionally, many people don’t even pay attention to differences until you point them out. It’s not that they don’t hear them, it’s that they don’t pay enough attention as a listener to actually hear the audio in detail. I’m the opposite – I only hear audio in its detail. I wish I could revert to a state where, like you, I thought everything rested on the quality of your headphones or speakers. Everything is real simple that way.

[q]Additionally, many people don’t even pay attention to differences until you point them out. It’s not that they don’t hear them, it’s that they don’t pay enough attention as a listener to actually hear the audio in detail. [/q]

You really think that keen audio enthusiasts taking part in listening tests, e.g. ABX tests where they repeatedly listen to audio samples and attempt to differentiate between them, aren’t actually listening to the audio in detail?

I find that less plausible than the idea that those differences aren’t perceived because they simply aren’t perceptible.

[q]I’m the opposite – I only hear audio in it’s detail.

I’ve heard that before from “golden eared” audiophiles confident that they could hear the differences that thousands of others had missed. A few ABX test failures later and they were a lot less arrogant about their superior listening abilities…

[q]You really think that keen audio enthusiasts taking part in listening tests, e.g. ABX tests where they repeatedly listen to audio samples and attempt to differentiate between them, aren’t actually listening to the audio in detail? [/q] I neither said nor implied that. I referred to `most people` in a general sense, not `most people participating in an explicit test`.

[q]I’ve heard that before from “golden eared” audiophiles confident that they could hear the differences that thousands of others had missed. A few ABX test failures later and they were a lot less arrogant about their superior listening abilities…

It doesn’t matter what you’ve heard from these so-called “audiophiles”. People stupid enough to label themselves as such are very obviously not who you should be listening to. Now, ….

When I’m the audience, I know what I’m listening to and I know what to listen for. People like me who have an extensive background in this field will all tell you that they deconstruct audio as they listen to it. It’s a matter of knowledge & experience, not magic or gold. That happens to everyone who is a great student, with great teachers, and decades of hands-on training. It’s absolutely laughable, to put it mildly, that you think an `average Joe` is in any way comparable to what I just described.

This exchange is why most of us never bother responding to the silliness from “audiophiles”, couch experts, and average Joes. It almost always winds up being a total waste of time because people don’t actually want to learn anything. You guys just want to argue subjects you know little-to-nothing about.

I neither said nor implied that. I referred to `most people` in a general sense, not `most people participating in an explicit test`. [/q]

In that case, I’m not sure what the point of what you wrote actually was. I’ve been talking about listening tests, and the fact that tests consistently show that people can’t differentiate between high-bitrate lossy files and the lossless original.

My point was that if careful listening in ABX tests can’t detect any difference, it won’t be a factor in normal listening. Yes, that includes normal listening by experienced people with good equipment who are paying attention to the music.

What you haven’t addressed, if you want to dismiss those listening tests, is how they could all possibly go wrong.
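For what it’s worth, the pass/fail arithmetic behind those ABX tests is easy to check for yourself. Below is a minimal Python sketch (the function name and the 12-of-16 threshold are illustrative, not taken from any particular test protocol): a listener who genuinely can’t hear a difference is guessing at 50% per trial, so the significance of a score is just a binomial tail probability.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the probability of getting at least
    `correct` answers right out of `trials` ABX trials by pure
    guessing (chance = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct clears the conventional p < 0.05 bar...
print(round(abx_p_value(12, 16), 4))  # -> 0.0384
# ...while 10 of 16 does not
print(round(abx_p_value(10, 16), 4))  # -> 0.2272
```

This is why a single lucky run proves little, while repeated failures to beat thresholds like these make the null result hard to dismiss.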

[q]When I’m the audience, I know what I’m listening to and I know what to listen for. People like me who have an extensive background in this field will all tell you that they deconstruct audio as they listen to it. It’s a matter of knowledge & experience, not magic or gold. [/q]

You may be dismissive of audiophiles, but you sound just like them. Where are the listening tests showing that people like you can differentiate between high-bitrate lossy files and lossless?

[q]This exchange is why most of us never bother responding to the silliness from “audiophiles”, couch experts, and average Joes. It almost always winds up being a total waste of time because people don’t actually want to learn anything. You guys just want to argue subjects you know little-to-nothing about. [/q]

Here’s a tip: if you want to convince people of your claims, try providing evidence to back them up.

I’m happy to learn, but I’m not going to believe every assertion made by some random person on the internet. Simply claiming to be an expert, while spouting off things that are contradicted by years of testing, really isn’t very convincing.

Dave_K, I didn’t participate so I could school you. I’ve already stated several facts that anyone can verify for themselves; a few posts back that’s essentially what I advised. If you really want to learn, you will put in the energy and effort to do so, but I won’t waste my time, as in my experience people like you, who would rather debate everything than listen, aren’t all that interested to begin with.

At the end of the day I don’t care what you believe. It’s obvious this isn’t your area of expertise so we really have nothing in common and this becomes less interesting to me with each post. I don’t know what you do for a living but I’m sure the reverse would be true were we talking about your area of expertise.

Continue to believe in those “audiophiles” you hang around with, but just know there’s a plethora of more accurate information and facts available should you ever truly want to elevate your understanding of the subject. If/when you do, you’ll realize how backwards and wrong you’ve been.

[q]Continue to believe in those “audiophiles” you hang around with but just know there’s a plethora of more accurate information and fact available should you ever decide to truly want to elevate your understanding of the subject. If/when you do, you’ll realize how backwards and wrong you’ve been. [/q]

Here you are acting like the worst of those audiophiles – making claims without backing them up with evidence. Do you really expect people to just accept what you say because you call yourself an expert?

I’ve taken the time to look into your “facts”, examined listening tests designed specifically to test them, and found compelling evidence that you don’t know what you’re talking about.

If you had “accurate information” to challenge what I’d said then I’m sure you’d have presented it. Instead you’ve just evaded the points I’ve made, blathered on about your unproven expertise (as if that’s meant to impress people), and tried to twist what I’ve said into a strawman you can knock down.

Try putting aside your arrogant assumptions about your own expertise for a moment, actually examine the evidence with an open mind, and maybe you’d learn something yourself.

Things murdered by lossy compression:

Hi-hats.

Splash cymbals.

Electric bass string noise.

Analog synth squeals.

Air in the room.

Reverbs.

Vocalists’ breath and lip sounds.

Kick drums.

Bowed instruments’ timbre.

i could go on. no speaker will recover what’s already gone.

you can’t shine a turd with expensive headphones.

why listen to tech nerds about audio, when you know they are clueless. listen to musicians and producers and engineers in the field. listen to experts. 24bit audio has been the professional standard for 20+ years now.

the internet becoming the distro platform set audio quality back 3 decades.

Then it should be absolutely trivial for you, and other people who make similar claims, to identify lossy from lossless in ABX listening tests.

Why isn’t that the case?

it’s very easy, almost always, to tell the difference between lossy and the master recording. you are a strange creature – the internet guy who claims 72dpi = 1000dpi and that our ears suck, because of some listening test you’ve only heard of.

Inevitably you link me to xiph.org. I just don’t get this virus of low quality when it comes to audio. it’s an internet-era invention. making fun of audiophiles has been around for decades but this willful ignorance of our ears and hearing, hiding behind mp3-era convenience, is really infuriating.

imagine someone told you the best OS ever EVER was written in 1978, and will never be improved, and can’t be, because no one could possibly notice anyway. OS programmers screamed that there could be better but there are fellas like you to let them know that they are crazy, low quality is the new norm.

1978 refers to redbook, the standard you based your church on. it’s about 1000k effective bitrate for a stereo file. that was a lot of data to move in the 80’s when CDs came out. in the 90’s when the internet took over they couldn’t push 1000k around in real-time so data compression was invented. research perceptual coding and then google “ghost in the mp3” for more information about how they got it down to 200k and below and sold it to consumers.
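The Red Book arithmetic is easy to verify for yourself, and the raw stereo CD rate actually comes out just over 1400 kbps rather than 1000. A quick sketch:

```python
# Raw PCM bitrate for Red Book (CD) audio:
# 44.1 kHz sample rate, 16 bits per sample, 2 channels
sample_rate_hz = 44_100
bits_per_sample = 16
channels = 2

bitrate_kbps = sample_rate_hz * bits_per_sample * channels / 1000
print(bitrate_kbps)  # -> 1411.2

# A 256 kbps lossy file therefore keeps roughly 18% of the raw data rate
print(round(256 / bitrate_kbps * 100))  # -> 18
```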

stupid youtube clips stream at 4000k these days. we have plenty of bandwidth for lossless 24bit master-quality audio but your ignorance and mockery keeps it from the masses.

i suspect you also don’t mind working for 10% of your normal rate? who would notice?

Additionally many people don’t even pay attention to differences until you point it out to them. It’s not that they don’t hear them, it’s that they don’t pay enough attention as a listener to actually hear the audio in detail.

You really think that keen audio enthusiasts taking part in listening tests, e.g. ABX tests where they repeatedly listen to audio samples and attempt to differentiate between them, aren’t actually listening to the audio in detail?

I find that less plausible than the idea that those differences aren’t perceived because they simply aren’t perceptible.

I’m the opposite – I only hear audio in it’s detail.

I’ve heard that before from “golden eared” audiophiles confident that they could hear the differences that thousands of others had missed. A few ABX test failures later and they were a lot less arrogant about their superior listening abilities…

It’s because ABX tests don’t work for judging musical quality. They are highly flawed but no test formats have been accepted to replace them.

The test gives garbage data. Which is why you can find all kinds of proof based on that test. The test gives bad results so you have to throw it all out.

Go back to people who get paid for this. People who curate catalogs of valuable music. 24bit or bust, baby!

but < 320kbps fixed bitrate or < 220kbps VBR stuff – you certainly don’t need “magic ears” to hear the quality loss if you listen carefully. Of COURSE you still need a 20%-plus quality drop to really pick it out, mind.

Here’s the flaw: if the only result of an ABX test is that there is no such thing as high quality, does that make it the truth?

If higher and lower quality audio exists – why can’t an ABX prove it? Because the test is garbage in this use. [/q]

Actually, when testing low quality files, people generally can identify them in ABX tests. ABX tests have been used in the development of audio codecs, helping the developers improve transparency at lower bitrates.

It’s only when higher bitrates are used that the audible differences become more and more difficult to detect. Eventually they reach a point where pretty much every test indicates that the differences are inaudible.

You want some information on what’s wrong with ABX tests?

I had a look through the links you posted. Unsurprisingly those sources (a couple of them pushers of snake oil like overpriced “audiophile” power cables and magical CD “treatments”) make the same assertion you’ve made – that testing itself creates artificial conditions that prevent differences from being heard.

I’ve already explained why I find that unconvincing, especially when the difference is being portrayed as outright destruction of the sound quality, not a subtle difference that’s easily missed.

A couple of those links talk about experiencing “emotional” differences in the music over relaxed long term listening. Yes, that’s a really difficult thing to test, and wouldn’t be picked up by ABXing short samples. However, you’ve claimed that specific sounds are “murdered” by lossy compression – if that’s accurate then the difference should be detectable within the limits of any listening test.

[q]Or just think about it, try to do one yourself. You can fool yourself with an ABX test, you can get null results on your own self, which proves the math behind it is flawed. [/q]

I have done a number of ABX tests. In fact, that’s what convinced me to stop filling up my player with FLAC and use MP3 files instead.

What it proved to me was that my biases and assumptions had an impact on what I thought I was hearing. It made me much more sceptical about subjective claims about audio quality that aren’t backed up by testing.

and your decision, based on ABX tests, led you to purposely remove 90% of the audio data from your files.

that’s 10% of what the artist intended you to get.

that’s what the ABX test has done – it has convinced you that a xerox copy is good enough, even when you have the original. that’s sad.

256k < 1000k < 5000k

mp3 < CD < 24bit audio

these are truths. the master is the best version, any degradation you apply in the name of convenience is just degradation. enjoy your degraded music. know that you’d enjoy it far more if it was the real version.

If you haven’t argued with ezraz on this topic before, then to save you time, here is his defence:

There isn’t a scientifically reliable way of conducting listening tests. So any use of those to determine quality is flawed, therefore just listen to what audiophiles and companies that sell to them say and take it as gospel. No need for science in audio!

“Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space.”

Yeah at some point you’ve gotta conclude that storing higher frequencies is futile. Mind you, I personally don’t know where this limit is, and since I’m middle aged I wouldn’t be a good person to judge this anyways.

As far as the argument about poor fidelity at higher frequencies, that could be true, especially with the cheap equipment the majority of us are using. But in principle ~22kHz is a human limitation. For electronic equipment and even other species it’s merely an arbitrary cut off point.

Someone’s got to think about the cats and dogs, and what about those beluga whales damn it. If we up the rate from 192kHz to 256kHz, then our audio recordings will finally be worthy of all species on earth (assuming we team up and eat all the remaining porpoises, of course).
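Incidentally, the “6 times the space” figure from the quoted article checks out against the raw PCM data rates. A quick sketch (uncompressed rates; FLAC would shrink both files, but the ratio stays in the same ballpark):

```python
def pcm_kbps(sample_rate_hz: int, bits: int, channels: int = 2) -> float:
    """Raw PCM data rate in kilobits per second."""
    return sample_rate_hz * bits * channels / 1000

hires = pcm_kbps(192_000, 24)  # 24-bit/192kHz "master quality"
cd = pcm_kbps(44_100, 16)      # 16-bit/44.1kHz Red Book

print(round(hires / cd, 1))  # -> 6.5
```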

Yeah at some point you’ve gotta conclude that storing higher frequencies is futile. Mind you, I personally don’t know where this limit is, and since I’m middle aged I wouldn’t be a good person to judge this anyways.

You have to remember that 56KBps/44.1kHz assumes perfect hearing. I’ve seen claims that bitrates as low as 40Kbps/32kHz are enough for real world use.

To be fair, we haven’t held any hearing olympics so the absolute best is not readily known, it’s just an assumption.

I tested myself with an app on my phone and can hear up to 15.98kHz at the volume levels my phone is capable of outputting. It’s a fairly noisy environment though, with no headphones, so maybe I could hear more somewhere quiet. Generally though I’d agree a 32kHz sample rate is enough *for me*. Maybe a bit more to give the lowpass filter some headroom to perform a cutoff and prevent aliasing.
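That “bit more to give the lowpass filter some headroom” intuition can be made concrete with the Nyquist criterion. A minimal sketch, where the 10% headroom factor is an illustrative assumption rather than any standard:

```python
def min_sample_rate_hz(max_audible_hz: float, headroom: float = 1.1) -> float:
    """Nyquist: capturing frequencies up to max_audible_hz requires
    sampling at more than twice that frequency; the headroom factor
    leaves room for a realistic anti-aliasing filter roll-off."""
    return 2 * max_audible_hz * headroom

print(min_sample_rate_hz(16_000))  # -> 35200.0  (the poster's measured limit)
print(min_sample_rate_hz(20_000))  # -> 44000.0  (textbook limit of human hearing)
```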

I stopped collecting music after mp3s so I have no reference point with AAC bitrates. It obviously depends on codec properties and the number of channels and whether those channels are compressed jointly, etc. In general I was content with 128kbps but that’s not to say there wasn’t a clear improvement with 192kbps, I personally just didn’t care that much.

This may not be true of audiophiles, but I suspect the perception for most people would be that tuning the histogram for a richer mix would sound better than increasing the rates to increase fidelity. In other words, tuning can make it sound even better than the real thing.

If you are past 11 there is a very good chance that you will be incapable of hearing anything beyond 16 kHz; also, not a lot of people can hear near 20 Hz either.

Want a good sound experience? Don’t use cheap headphones, don’t turn the sound to the max (the amplifier distortion curve gets very noticeable close to its limits) and don’t get music ripped by random blokes.

Unfortunately, there have been “incentives” running for some time to insert some “enhancement” to the frequency response curve on records, and some speaker vendors do it too. I’m not sure, but it may have started when equipment manufacturers noted that even people who bought expensive equipment liked to boost the bass, middle and treble ranges.

it doesn’t matter if you “think” you can hear it or not – 256k does not move the speakers, and thus the air, nearly as much as 1400k or 5000k of bandwidth. [/q]

That’s a downright bizarre claim.

[q]music compression formats are a poison. it was necessary long ago, even i used lossy to fit crap on my early devices.

but i started to hate music, or at least find it grating and lacking in punch. then i went back to lossless and WOW there it is again! [/q]

That sounds more like a psychological issue than anything grounded in reality. The often massive impact of placebo effect is the reason why double blind tests are necessary. Are there any such tests that back up your claims?

The sound quality is exactly the same as my $20 Skull Candy cans. You’re paying $200+ for chunks of metal they hide in the ear pieces to make them feel heavy and “expensive”.

I have a $50 set of Sony studio monitor headphones, still nowhere near audiophile or really even studio quality, and they make Beats cans sound like $3 no-name earbuds. You’re literally paying for the brand and nothing more, like a $300 pair of designer jeans when a $50 pair of Levi’s (also overpriced) looks just as good and lasts longer.

This is an absolutely terrible idea, and Thom is bang on with respect to phone makers (Apple) using this as a way to lock customers in to their accessories, but I thought at least part of the problem with the old school headphone jacks is that they don’t fit into the new thinner phones all that well.

This is an absolutely terrible idea, and Thom is bang on with respect to phone makers (Apple) using this as a way to lock customers in to their accessories, but I thought at least part of the problem with the old school headphone jacks is that they don’t fit into the new thinner phones all that well.

exactly. size is an important factor. by modern mobile standards it is a big port.

it also can’t be extended any further with features. it’s maxed out at what, 4 rings already?

Honestly this is a tinfoil hat argument, especially as the bulk of Music sold today is DRM free and there are plenty of other ways to get digital audio out of your phone if you want to copy it.

Most device vendors have been reducing ports on devices for years, to facilitate smaller/thinner size. The modern USB-C and lightning ports are smaller, more robust, and have ample bandwidth to carry audio in addition to power.

Technology marches on Thom. The 3.5 audio port is giving way to something more modern. There is no conspiracy.

Honestly this is a tinfoil hat argument, especially as the bulk of Music sold today is DRM free and there are plenty of other ways to get digital audio out of your phone if you want to copy it.

Most device vendors have been reducing ports on devices for years, to facilitate smaller/thinner size. The modern USB-C and lightning ports are smaller, more robust, and have ample bandwidth to carry audio in addition to power.

Technology marches on Thom. The 3.5 audio port is giving way to something more modern. There is no conspiracy.

DRM will come back in the next compressed file format. Thom is right. also this new lossy format (based on Meridian’s MQA) will force-sell new DACs.

What’s the tension/torsion rating for USB Type-C vs 3.5 mm headphone jack? That’s the important metric to compare. Especially on the cable end of things.

I’ve had to replace a lot of MicroUSB cables over the past two years because they no longer make a solid connection to the post in the USB port unless you press on them in a certain direction with just enough force to *not* break the centre post.

And, I have a handful of devices where the centre post has been cracked/bent/broken and no longer makes a USB connection of any kind.

All because there’s just enough slip in the socket to make up/down pressure on the connector bend the centre post. With headphones, the jack is the post, so there’s nothing to worry about; there’s nothing to put pressure on!

USB Type-C is better than MicroUSB, but I don’t see how a mm-wide post can be stronger than a solid 3.5 mm plug.

Honestly this is a tinfoil hat argument, especially as the bulk of Music sold today is DRM free and there are plenty of other ways to get digital audio out of your phone if you want to copy it.

Music sales are declining rapidly in favour of subscription based models like Spotify, which do use DRM.

Also there is precedent. When Blu-Ray players first shipped, they were allowed an “analog hole” for HD output in component video, but today any players that still have component output can be forced by the content producer to output only in SD.

Honestly this is a tinfoil hat argument, especially as the bulk of Music sold today is DRM free and there are plenty of other ways to get digital audio out of your phone if you want to copy it.

Most device vendors have been reducing ports on devices for years, to facilitate smaller/thinner size. The modern USB-C and lightning ports are smaller, more robust, and have ample bandwidth to carry audio in addition to power.

Technology marches on Thom. The 3.5 audio port is giving way to something more modern. There is no conspiracy.

Agreed. We already have USB headphones for PCs and nothing has changed. No one is preventing a manufacturer from using a 3.5mm port either.

But every computer can take a USB headset, although driver support isn’t always created equal (*cough* *cough* Linux). How likely is it, do you think, that I’d be able to connect a USB-C headset to an iPhone, or a Lightning headset to a USB-C-equipped Motorola, and still enjoy 100% of my content? Something tells me it ain’t gonna be happening any time soon.

I agree. The thing is future tech is always at risk of being manipulated by the pro-DRM associations like RIAA/MPAA. In a way we’re extremely fortunate that they did not get their hands on peripheral buses like USB in the early years of development. DRM can’t be added now without breaking compatibility with USB peripherals already on the market, which would make any DRM extremely noticeable and unpopular. So I doubt DRM enabled USB peripherals could achieve critical mass when so many USB devices are already grandfathered in as DRM-free.

However with other tech like HDMI where they got involved from the start, they were successful. If they got another opportunity to add DRM to some new standard or markets from the get-go, it’s hard to think they wouldn’t try to take it.

If you can connect peripherals from an Android device to Android devices made by other manufacturers then they are interoperable. Forget about Apple; even in the PC world they tried to be as incompatible with the rest of the world as possible.

First, I have USB-C on my Nexus, (which I seriously like, especially when I can charge my phone from flat to 95% in 15 minutes). I also have a headphone jack, but it’s never been used.

Bluetooth has been a better option (for me) than a wired headset for years.

… and please don’t break out that audiophile angle. You’re listening to mp3’s on a freakin’ phone, not a vinyl disc (or even a good quality CD) on a professional amplifier system, and then you’re ramming it down a 1/4″ speaker.

As for the whole beats/drm thing, I’m pretty sure that UEFI secure boot meant the end of linux 10 years ago– If Apple wants to commit market suicide by forcing people to only use beats with their magic usb DRM, then that’s Apple’s problem, not mine.

Bluetooth makes the headset more expensive; you could of course use a separate receiver which has a 3.5mm audio jack.

Would you buy new headphones if the batteries are dead, or would you try to replace them? What if spare parts are not available?

To be honest, bluetooth audio receivers are great (if you disregard the latency). But I do prefer wired headphones, because of the choice I have in size and type of headsets (earbuds or headphones, over the ear or on the ear, open, closed, with noise cancelling or not).

BT specs or audio codecs are upgraded? Just replace the receiver and keep using the wired headphones that you prefer.

Headphones die? Get another pair and plug them into the receiver.

Want ear buds today, over-the-ear cans tomorrow, and a giant speaker on your backpack the next day? Just plug in the ones you want to use.

Battery dies in your dongle? Unplug the headphones from the dongle, plug them directly into the phone, and charge the dongle.

Battery in the dongle won’t keep a charge? Replace it.

I have a couple of pairs of wired headphones, yet they are never plugged directly into the phone (even though it has a nicer DAC/amp/equaliser for the headphone port); they’re always plugged into a Sony SBH20 BT receiver/dongle. Why? Convenience. It’s a royal pain to deal with cables between the phone and head. This way, the cable gets tied up and goes from the ears to the shoulders where the dongle is clipped. Doesn’t matter where the phone is, I get audio. No cables to get snagged on anything without smacking myself upside the head first.

BT specs or audio codecs are upgraded? Just replace the receiver and keep using the wired headphones that you prefer.

Headphones die? Get another pair and plug them into the receiver.

Want ear buds today, over-the-ear cans tomorrow, and a giant speaker on your backpack the next day? Just plug in the ones you want to use.

Battery dies in your dongle? Unplug the headphones from the dongle, plug them directly into the phone, and charge the dongle.

Battery in the dongle won’t keep a charge? Replace it.

I have a couple of pairs of wired headphones, yet they are never plugged directly into the phone (even though it has a nicer DAC/amp/equaliser for the headphone port); they’re always plugged into a Sony SBH20 BT receiver/dongle. Why? Convenience. It’s a royal pain to deal with cables between the phone and head. This way, the cable gets tied up and goes from the ears to the shoulders where the dongle is clipped. Doesn’t matter where the phone is, I get audio. No cables to get snagged on anything without smacking myself upside the head first.

nothing ‘optimal’ about bluetooth audio. yet another degradation in the name of quality.

lossy (bluetooth is another layer of lossy) is like a plastic violin – why bother?

I don’t mind a digital audio standard to eventually replace analog connectors. Something like bluetooth but for wired components.

Consumers are very fortunate that analog jacks just work. What I hate is when peripherals are made to be incompatible with one another. I don’t want to have a different pair of headphones for every device, it’s wasteful, it’s complicated, it’s stupid…but I don’t have much faith in manufacturers resisting the temptation to break compatibility just to sell more accessories.

If Apple were to limit headphone usage, they would sell fewer phones. It’s not a monopoly, so that will NOT happen. The thing that will happen is they will sell you an additional $29 lightning-to-3.5 adapter and that’s it.

If Apple were to limit headphone usage, they would sell fewer phones. It’s not a monopoly, so that will NOT happen. The thing that will happen is they will sell you an additional $29 lightning-to-3.5 adapter and that’s it.

Plus, the DRM era is gone. It’s all about subscriptions.

I think Apple is going to license MQA from Meridian, rename it AQA and make it their new format for their streaming store, while at the same time closing their download store (still stuck on 256k AAC).

This AQA will have DRM and vendor lock in and be backed by what’s left of the modern music industry.

The archive types — all of the 20th century’s music — will hate it and hopefully stick with hi-res PCM. 24/192 PCM FLAC sounds amazing and no new format is needed.

While USB headphones will enable DRM, Bluetooth does as well, and people like Bluetooth headphones and headsets. This is probably a reaction to Bluetooth being popular more than a move to implement DRM on headphones. They haven’t done it with Bluetooth devices, so I don’t see them doing it with USB devices.

Companies will make crappy, incompatible accessories all on their own without any help from Apple or Samsung.

Anyway, I really want this.

I have a USB headset, but I can’t use it with my phone. If you’re thinking, “Why not just get a regular 3.5mm headset?”: USB headsets show up as a second sound endpoint, and specific applications can use them without everything being piped into them or accessing the mic. This means my softphone will use the headset while other applications will send output to the speakers and/or use other mics.

Let’s complain about the VoIP phone manufacturers who haven’t put a USB port or anything else remotely standard on their handsets. Good god that’s a mess.

While USB headphones will enable DRM, Bluetooth does as well, and people like Bluetooth headphones and headsets. This is probably a reaction to Bluetooth being popular more than a move to implement DRM on headphones. They haven’t done it with Bluetooth devices, so I don’t see them doing it with USB devices. [/q]

Do you have more information about bluetooth DRM? I honestly don’t know much about it, but to my knowledge the bluetooth standard itself never supported DRM and most/all devices supporting the A2DP audio profile do not support DRM. Am I mistaken?

Obviously DRM could be added, but then it would be incompatible with all accessories on the market, and it might even be prohibited from using the bluetooth trademarks for that reason.

What I found was an OMA DRM standard to block file transfers between phones via bluetooth.

But to me that’s different because the DRM is part of the phone rather than in bluetooth itself. To understand my logic, consider an SFTP server could be modified to block the transfer of DRM files, but I would not use this fact to suggest that SFTP “enables DRM”.

I have a USB headset, but I can’t use it with my phone. … This means my softphone will use the headset while other applications will send output to the speakers and/or use other mics.

I agree with you there’s merit in going digital. I just don’t want to end up in a situation where we’re stuck with tons of incompatible proprietary audio peripherals and we end up worse off than when we had a standard analog jack…this could very easily happen in just a few years.

[q]Let’s complain about the VoIP phone manufacturers who haven’t put a USB port or anything else remotely standard on their handsets. Good god that’s a mess.

My house phone has a standard audio headset, but it’s an old-school analog phone. I can only imagine what accessory connections look like these days.

I agree with you there’s merit in going digital. I just don’t want to end up in a situation where we’re stuck with tons of incompatible proprietary audio peripherals and we end up worse off than when we had a standard analog jack…this could very easily happen in just a few years. …

OK, I am really curious: what would be the benefit of going digital if it is already digital until the last minute, when you really need to generate the analog signals our biological sensors were designed to deal with?

The only usefulness I could think of would be if you wanted to meddle with the stream quickly and at hand, like skipping songs or altering the volume level using a control on the cord, but that can already be done with 4-wire headphones (OK, there are incompatible wiring methods right now, thanks again to Apple, but it is easy to fix).

[q]OK, I am really curious: what would be the benefit of going digital if it is already digital until the last minute, when you really need to generate the analog signals our biological sensors were designed to deal with?[/q]

Ok, here are some reasons I can think of, but I’ll let you decide whether they matter or not.

A stereo plug isn’t ideal for multiple channels/surround sound.

A stereo plug isn’t ideal for bidirectional audio (such as might be desired when playing an MMORPG).

A boombox would typically need an extra analog amplifier, making the first one redundant. One practical problem that I’ve seen in real A/V setups is the ambiguity of which device gets used for volume control. I’ve seen times when the source is way too high or too low, outside of its linear gain range, and the external amp is adjusted to compensate (or vice versa), but it sounds muddy and terrible. While this is “user error”, it would be possible to have a digital protocol that always sets the volume correctly.
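The gain-staging problem above can be sketched as a toy model in a few lines. Nothing here is real audio code: `apply_gain` and the sample values are made up for illustration, and each stage simply hard-clips outside its linear range.

```python
def apply_gain(samples, gain, limit=1.0):
    """One analog gain stage: scale the signal, hard-clipping past its linear range."""
    return [max(-limit, min(limit, s * gain)) for s in samples]

signal = [0.1, 0.4, -0.45]

# Well staged: moderate source level, the external amp does the boosting.
good = apply_gain(apply_gain(signal, 1.0), 2.0)

# Badly staged: source driven past its linear range, amp turned down to
# compensate. The clipping happened at the first stage and can't be undone.
bad = apply_gain(apply_gain(signal, 4.0), 0.5)

print(good)  # [0.2, 0.8, -0.9]
print(bad)   # [0.2, 0.5, -0.5]
```

Both chains apply the same nominal overall gain, yet the badly staged one mangles the waveform, which is exactly the muddiness described above.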

Analog components should have matched impedance; otherwise the signal can form a standing wave and end up interfering with itself at certain frequencies. If the DAC is in the external unit, then all the circuits can be perfectly matched at the factory. With much frustration, I had to learn about some of this stuff when setting up an analog POTS (telephone) to VOIP bridge. It’s even affected by the capacitance of the cables.

Someone might have a stereo system with digital effects and/or equalizers, which could result in a silly triple conversion between analog and digital.

[q]The only usefulness I could think of would be if you wanted to meddle with the stream quickly and at hand, like skipping songs or altering the volume level using a control on the cord, but that can already be done with 4-wire headphones (OK, there are incompatible wiring methods right now, thanks again to Apple, but it is easy to fix).[/q]

They probably have ways to communicate this info over a stereo plug, but a digital standard (like CEC is to HDMI) is probably going to be better in all respects. Maybe a car stereo that prominently displays a title of what’s playing, etc. It’s small things of course, but they’re nice touches.

When it comes to mobile peripherals, there’s a lot of overlap with bluetooth. When I got my first phone, I strongly wanted a wired headset, but I gave the bluetooth headset a try because the wired headset was proprietary and I wanted to avoid that. I hated the bluetooth headset as much as I thought I would. This is the weird sort of balance I’m forced to play out as a consumer. Yea, I’m an unusual specimen.

OK, fair reply. I was kind of thinking about the headphones or speakers being the last device on the chain, because I already use the USB connection when transmitting sound to intermediate devices, but I can see how some people may be obligated to use the phone jack to connect to old or unhelpful equipment. Also, I did not think too much about the length of cables, but I have seen first hand how they affect the modern multimedia rooms people are building in their houses.

The 3.5mm jack can’t be extended with new features, and is actually very large compared to modern connectors. It’s had a good run of 120+ years.

But that’s for the nerds. The real reason it’s being killed is for DRM. That’s the money talking.

Previous attempts at DRM from ’96-’06 were all outside of the file, outside of the conversion, and put on as an added layer. They relied on OSes, third parties, and very slow internet to even work, and most didn’t work well. Paying customers hated it so badly it was killed.

MQA changes that. It’s a new encoding method to replace PCM (the raw form stored in WAV, losslessly compressed by FLAC, and the source material for MP3/OGG) and DSD. It’s tricky because it can be stored in existing file containers and should thoroughly and convincingly confuse all but the nerdiest digital audio type. Perfect for selling more lossiness. Already loaded with nonsense terms before Apple buys it.

Why another new encoding method? MQA has lossy compression concepts in the encoding itself along with DRM hooks.

It’s like taking MP3 compression ideas, rewriting them (in perhaps a more refined way), and putting them into the actual encoding itself, not as a 2nd pass after encoding. Pretty cool idea but also pretty unnecessary since we have enough bandwidth now to use lossless FLAC.

The hidden part of MQA is the DRM hooks in the encode itself, to only play a legal version of the song and only play on MQA-approved DACs. Meridian (creators of MQA) aren’t saying anything here.

Bottom line – spend a few hundred bucks now and get a dedicated DAP. Re-rip all your CDs lossless and start buying lossless music again. Support the artists you love in tangible ways, more than streaming 10% versions of their songs for $0.00000003.

Be pono – Father’s Day sale right now – $300 for a PonoPlayer, a 64GB card, and $25 of music. That’s a great deal.

Cheap and effective way to make your CD content available on various devices: use Exact Audio Copy (for Windows, but apparently works under Wine too) to rip CDs. Set it up to *not* throw away WAV files once it has created compressed files. Doing this, I can play WAV files on the computer, and MP3s on the phone.

As for buying download-only, thankfully many of my favourite artists use Bandcamp, CD Baby, or some other site allowing wav and/or flac as well as mp3. As far as I’m concerned, the mp3-only Google Play is the last resort.

Cheap and effective way to make your CD content available on various devices: use Exact Audio Copy (for Windows, but apparently works under Wine too) to rip CDs. Set it up to *not* throw away WAV files once it has created compressed files. Doing this, I can play WAV files on the computer, and MP3s on the phone.

I’d recommend EAC too – it does a good job even with scratched CDs.

One thing that’s possible with EAC is to have it simultaneously create both lossy files like MP3s and lossless compressed files like FLAC. No need to keep uncompressed WAV files for playing on the computer – you can save space without sacrificing any quality.
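For those ripping on other platforms, the same keep-both-formats workflow is easy to script around ffmpeg. A minimal sketch, assuming ffmpeg is on the PATH; the file paths are hypothetical and `transcode_commands` is just a name invented here:

```python
from pathlib import Path

def transcode_commands(wav_path, mp3_bitrate="256k"):
    """Build ffmpeg command lines producing a lossless FLAC archive copy
    and a lossy MP3 portable copy from one ripped WAV file."""
    wav = Path(wav_path)
    return [
        # Lossless archive copy for the computer/stereo.
        ["ffmpeg", "-i", str(wav), "-c:a", "flac", str(wav.with_suffix(".flac"))],
        # Lossy copy for the phone.
        ["ffmpeg", "-i", str(wav), "-c:a", "libmp3lame",
         "-b:a", mp3_bitrate, str(wav.with_suffix(".mp3"))],
    ]

for cmd in transcode_commands("rip/track01.wav"):
    print(" ".join(cmd))
```

Run the printed commands (or pass each list to `subprocess.run`) and you get the same FLAC-plus-MP3 pair EAC produces, without keeping the uncompressed WAV around afterwards.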

Classic false simplicity. You make the product photos look great, but you have to carry a bag of adapters just to make the thing work.

A couple of folks are trialing Surface Pros here at work. They never detach the keyboards, and have display adapters for screens, network adapters, and hubs attached. They are so much *simpler* than an unsexy business laptop that does it all in one thing.

A couple of folks are trialing Surface Pros here at work. They never detach the keyboards, and have display adapters for screens, network adapters, and hubs attached. They are so much *simpler* than an unsexy business laptop that does it all in one thing.

Personally I’d rather have the ports built in; these dongles are reminiscent of the days of PCMCIA!

I went on a business trip to an area that had no wifi; we needed to connect physically inside their LAN. My coworker had a MacBook Pro but forgot his ethernet dongle. Granted, he probably doesn’t need it often, but we did need it that time. Fortunately, my laptop had an ethernet port.

1. Why there are always people claiming that the removal of X or Y is purely bad is beyond me. Yes, we have seen the end of the floppy disk, the Centronics port, ADB, mechanical keyboards, etc., etc. Guess what: the world has not stopped turning, and the end of the notoriously unreliable floppy disk was a good thing (and in some cases, like mechanical keyboards, there is even a revival because there is a market for higher quality keyboards)!

2. Claiming that change is done purely for commercial reasons (or some other conspiracy theory about DRM or whatever) is silly. If you don’t like a product, don’t buy it. There is no way a single company can force a change. This is a buyer’s market and there is no monopoly at all. If the headphone jack on mobile phones is gone in 5 years, then it’s because the majority of people favor and buy those products. Deal with it.

3. Some people on this site (including Thom) claim that Apple is the sole responsible entity for the removal of the headphone jack on mobile phones. WTF?! There is not a single iPhone without a headphone jack today and there is none announced. What are you talking about? If you have visions, go see a psychiatrist.

Seriously Thom, most of the time you seem like a smart guy, then you come out with something like this.

Why the 3.5mm headphone jack is bad for phones.

A. Mechanical

A1. Size. It forces the phone to be at least 5mm thick.

A2. Mechanical vulnerability. Catch your headphone cable on something. If you’re lucky, it just pulls loose from your handset. Less lucky, the jack is damaged. Having a bad day? The socket is torqued enough to wreck the PCB in your handset.

USB connectors don’t do that.

B.Electrical.

B1.The sound sucks (part 1). The 3.5mm jack was never designed for good fidelity. That was what the big 6.35mm audio jacks were for. 3.5mm jacks were used initially because the sockets are cheap, and later, as electronics shrank, because they are small. At no time have they been chosen because they are best.

B2. The sound sucks (part 2). Transmitting undistorted analog along a cable is hard. Most of the solutions are expensive. Transmitting uncorrupted digital is easy and cheap. A well conceived audio system will therefore transmit digital along any cables and place the DAC right next to the transducer.

Now I’m sure that Apple would love to force customers to buy only approved headphones, and they would retain a decent proportion of their customer base. These are people who put their thin metal phone in their jeans back pocket, sit on it, complain that it’s now bent, convince themselves that Apple sold them an inferior product, and when the new model comes out they buy it anyway. The rest of us will take, or have already taken, our custom elsewhere.

I’m not a great believer in market forces as a rule, but I’m fairly comfortable predicting that the only people who get hurt by this are those fools who were in any case soon to be parted from their money.

Yea. The point I meant to make, but didn’t because I was, frankly, very drunk*, and also because I got distracted by the beauty and truth of my description of an apple customer**, was this: I’ve been wanting the 3.5mm jack to go away for a long time, for the reasons I gave. DRM never occurred to me. It probably isn’t the driving force behind these changes.

*As a Dutch guy, Thom shares some collective responsibility for this. It was the Heineken that did it. I don’t feel at all well.

** I should have pointed out that apple customers don’t just ‘buy it anyway’. They will queue all night to be the first to do so. Such people should not be allowed out unsupervised.

… I have. And I’ve personally trashed a handset this way. I was mildly vexed for a few moments. To be honest, it’s that possibility, not the destroyed headphone connector, that I care about. Yes, MicroUSB is a little fragile. I’ve wrecked more than two. But I’ve never damaged the socket or the PCB it’s mounted on. The point is that a safe digital connector is possible. Analog requires a low-impedance connection, which implies a large contact surface area, which in turn implies a connector with enough leverage to do real damage.

… If you’re spending that kind of money, you should certainly be happy with the sound you get. Don’t you wish you could get the same sound quality for less?

B2) Fine, you need to buy separate headphones for each kind of device.

… in practice, the industry will standardise on one digital I/O connector in the same way that it standardised on the 3.5mm jack.

[q]I need both audio output and charge for use in my car.[/q]

…So do I. So do many, perhaps most, cellphone users. Therefore expect good solutions to be made available by those manufacturers that want to remain in business. I’d expect wireless-charging car cradles to become common, but if there is a better solution, that’s what will be offered.

… in practice, the industry will standardise on one digital I/O connector in the same way that it standardised on the 3.5mm jack.

While we’re being optimistic, let’s standardize on lithium batteries and ink cartridges too!

The simple stereo jack lasted several decades and may have longer to go still. These days, we really tend to burn through “standards” much faster. USB is a logical choice, but even USB connectors themselves keep changing enough to keep things confusing. I already find the plethora of USB cables I have to keep around annoying, and now there’s another one.

For all the benefits of digital that I can appreciate, the one big pro that undeniably goes to analog right now is the ubiquitous nature of the analog jack. Want to use your headset across your laptop, phone, mp3 player? Unless they all use an analog audio jack, you’ll very likely need to carry adapters to do that. While it’s obviously possible to standardize all digital accessories down the line, that’s by no means a given. Manufacturers like to change things intentionally to make them less reusable and to sell more accessories. Whether you or I support this is irrelevant if consumers at large play along, and they often do.

This is exactly why, as I’m sure some have noticed, I’ve been playing both sides of the field in these comments. I like the improvements that digital can offer, but we don’t really know where this ends up. A decade down the line we’ll need to come back and revisit this.

Seriously Thom, most of the time you seem like a smart guy, then you come out with something like this.

Why the 3.5mm headphone jack is bad for phones.

A. Mechanical

A1. Size. It forces the phone to be at least 5mm thick.

A2. Mechanical vulnerability. Catch your headphone cable on something. If you’re lucky, it just pulls loose from your handset. Less lucky, the jack is damaged. Having a bad day? The socket is torqued enough to wreck the PCB in your handset.

USB connectors don’t do that.

A1: No, it doesn’t. There are plenty of phones out there that are sub-5 mm with headphone jacks. The absolute minimum size will be around 4 mm, would be my guess.

A2: You have a bigger chance of breaking a MicroUSB port than a headphone jack when snagging the cable on something. That centre post in the MicroUSB port isn’t that strong. Which is stronger: a millimetre-thick post, or a 3.5 mm jack? A USB Type-C port may be stronger than a MicroUSB port, but it won’t be as strong as a headphone jack.

Ripping the port off the PCB is an issue with both connectors. USB ports have an extra centre post to worry about. Multiple points of failure here.

In all this discussion it seems that almost everyone is assuming that these devices will only deliver digital audio.

I haven’t managed to find any evidence either way, but I would be surprised if they didn’t support analogue audio.

Intel have stated their intention to promote audio over USB-C. To this end they’ve been working on a new standard for this which supports both digital and analogue. Their reasoning is that existing headphones can be used with a simple adapter, and in the future a gradual transition may occur to analogue headphones using the new connector while allowing high end headphones to be developed that utilise the digital audio while adding “extra features”.

This could well be the first implementation of Intel’s plan.

As for motivation, I could believe that there may be evil DRM thoughts behind the managerial support, but as a hardware developer I think the overwhelming reason is simply to shift to a single port. Not only do they get rid of the physical port, they may well be able to replace two separate sets of electronic components for audio and USB with a single set (at least, if this does take off, I can imagine chip manufacturers making combined chips).

USB-C can carry the same three analog lines that go to a headphones jack, so the lack of a 3.5mm jack will only mean you will need a passive adapter.

Adapters are a pain in the ass, but these would be _standard_, unlike those for the HTC-only USB-mini analog audio output that some of us suffered a few years ago. Being standard means they will come with new analog headphones, and that you can expect them to be good quality if those headphones are any good.

So, no convenience really, and the thickness these phones shed would’ve been better kept and stuffed with battery; but I don’t think that this time it is a conspiracy to restrict our rights and funnel our money and our data into Their Systems.

1 – good speakers are most important – false, unless everything up the chain is high quality already. chances are your existing speakers can handle much more signal than they are being given.

2 – expensive cables are necessary – false, unless you have a very high end system with a perfectly tuned listening room. this is just a way to attack audio people, the expensive cable thing. i use $5 cables from monoprice or whoever.

3 – lossy is not really lossy, because no one can tell – false, everyone can tell, given the proper musical material and listening skills. it has nothing to do with the age or accuracy of your ears. it has nothing to do with the quality of speakers. it has everything to do with the accuracy of the music being rendered before it even gets to the speaker.

the only truth to audio playback is signal chain theory:

nothing can improve what came before it. each step can only degrade (analog) or reproduce exactly (digital).

this shows us that starting the chain with a lossy file is like starting your poster project with a 72dpi image. have fun printing that – maybe no one will notice! maybe xiph.org can prove to us that 72dpi = 1200dpi. that’s basically what they are claiming.

xiph.org claims that your ears -no wait – entire body – can only hear ~ 256k of bitrate. please. utter nonsense. those people are fools, no matter how long their formulas are. fools.

kill lossy now. it’s hurting us. people are going freaking crazy because there’s not even good music anymore. even if you have good music, you play it back as crap 10% versions pretending to be full versions. it’s placebo. we expect the full meal but we are given 10%. it makes us crazy.

play the drums (real, not machine). hit them and listen. hit each drum and listen. hit each cymbal and listen. listen to the decay as it fades out. listen to the attack – how fast it rises, what character it has. roll on the snare. roll on the hi-hat as you work the pedal – listen to thousands of variations.

if you can record it, do so. set your DAW to 24bit, put three mics around the drums Botnick-style (over the snare, in front of the kick, and off the side of the toms in an equilateral triangle).

play it back in that same room at 24bit. play it then hit the real drums. you will hear a degradation between the live and the 24bit recording. even though you are capturing 5000k/sec of signal, microphones aren’t as amazing as our ears and the rest of our body feeling that vibration.

then downsample and dither the file to 16/44. play that back and you will probably hear it as smaller, thinner, and slightly less accurate than the 24bit file.

then take the downsampled file and lossy compress it to 256k mp3. when you play that it will practically be a different drum set, in a different room, with all kinds of crispy sounds and artifacts that weren’t there originally.

honestly, if you don’t hear anything different after this test you are a mental patient. ears do get damaged and degraded but never in a digital lossy way. we can all detect the lossy. some of us just don’t care and like to call names.

some of us do care. i think it’s critical to understand what we as a society continue to do to our own music – the very thing that keeps us sane – and yet we attack it and degrade it for the sake of convenience. it’s not convenient to get 10% when you expect 100%.

Amazingly, not all of us have a real drum kit handy, or the gear to record it as you specify. It doesn’t seem like the most relevant test when talking about the formats used for playing back recorded music.

Of course the biggest flaw in the test you’ve described is that it isn’t blind. There’s nothing to stop the influence of the placebo effect, caused by the listener’s assumptions about how it should sound.

[q]ears do get damaged and degraded but never in a digital lossy way. we can all detect the lossy. most of us just don’t care.[/q]

Thousands of people care enough to have participated in numerous listening tests. Yet the participants in those tests consistently fail to detect the lossy files once they reach a decent bitrate.

How do you explain those results if there’s really such a huge difference between the lossy and lossless files?

Amazingly, not all of us have a real drum kit handy, or the gear to record it as you specify. It doesn’t seem like the most relevant test when talking about the formats used for playing back recorded music.

Of course the biggest flaw in the test you’ve described is that it isn’t blind. There’s nothing to stop the influence of the placebo effect, caused by the listener’s assumptions about how it should sound.

ears do get damaged and degraded but never in a digital lossy way. we can all detect the lossy. most of us just don’t care.

Thousands of people care enough to have participated in numerous listening tests. Yet the participants in those tests consistently fail to detect the lossy files once they reach a decent bitrate.

How do you explain those results if there’s really such a huge difference between the lossy and lossless files?

Very easy to explain. Hard for some to accept: ABX does not work for music quality. It has more than one fatal flaw, yet its results are still pointed to.

Note only those that deny high quality exists use ABX tests as proof. This is because an ABX test for music quality only returns NOISE. No real usable data, because the test is a disaster.

People who build sound circuits take hours if not days to do listening tests. No known ABX test you can point to lets listeners live with the sound and review it in multiple listening environments, blind or not.

People who build sound circuits take hours if not days to do listening tests. No known ABX test you can point to lets listeners live with the sound and review it in multiple listening environments, blind or not.

Actually, most of the online tests provide the samples for people to analyse at their leisure. They can listen to them and compare them as many times as they like, on the equipment of their choice, before actually completing the test.

Of course people can conduct their own ABX tests using their own music and equipment, things they’ve listened to a thousand times, and still get the same result.

What I find interesting is that you’ve claimed that lossy files are easy to detect – that anyone can do it. You’ve talked about various specific sounds being “murdered by lossy compression”. You’ve made it sound like there’s a night and day difference between lossy and lossless, not a difference so subtle that it requires people to “live with the sound and review it in multiple listening environments” before it’s detectable.

If the differences between high-bitrate lossy and lossless are so small that the conditions present in pretty much every listening test have been enough to erase them, it really can’t be that big of a deal.

People who build sound circuits take hours if not days to do listening tests. No known ABX test you can point to lets listeners live with the sound and review it in multiple listening environments, blind or not.

Actually, most of the online tests provide the samples for people to analyse at their leisure. They can listen to them and compare them as many times as they like, on the equipment of their choice, before actually completing the test.

Of course people can conduct their own ABX tests using their own music and equipment, things they’ve listened to a thousand times, and still get the same result.

What I find interesting is that you’ve claimed that lossy files are easy to detect – that anyone can do it. You’ve talked about various specific sounds being “murdered by lossy compression”. You’ve made it sound like there’s a night and day difference between lossy and lossless, not a difference so subtle that it requires people to “live with the sound and review it in multiple listening environments” before it’s detectable.

If the differences between high-bitrate lossy and lossless are so small that the conditions present in pretty much every listening test have been enough to erase them, it really can’t be that big of a deal.

the second you decide to care and feel the music it’s relatively easy to spot lossy. sometimes immediately, sometimes it takes a few minutes to let the fatigue kick in.

listen, i’ve been fooled too. a 256k MP3 or AAC, especially from a modern artist, is hard to tell.

but i don’t give a crap about the bad modern artists that are making fake music using fake instruments. i care about the beautiful stuff – the analog stuff – the real stuff – that humans have created over the last 100 years.

side note – part of the reason modern music sounds like shite is this restrictive, damaged, horrible distro format. why mix and master properly, why use real instruments, when it’s going to come out of the paper bag of lossy compression anyway?

“but i don’t give a crap about the bad modern artists that are making fake music using fake instruments. i care about the beautiful stuff – the analog stuff – the real stuff – that humans have created over the last 100 years. ”

Amazingly, not all of us have a real drum kit handy, or the gear to record it as you specify. It doesn’t seem like the most relevant test when talking about the formats used for playing back recorded music.

Of course the biggest flaw in the test you’ve described is that it isn’t blind. There’s nothing to stop the influence of the placebo effect, caused by the listener’s assumptions about how it should sound.

ears do get damaged and degraded but never in a digital lossy way. we can all detect the lossy. most of us just don’t care.

Thousands of people care enough to have participated in numerous listening tests. Yet the participants in those tests consistently fail to detect the lossy files once they reach a decent bitrate.

How do you explain those results if there’s really such a huge difference between the lossy and lossless files?

First off, you owe it to yourself to get on a drum or 5 and explore how your body picks up sound. It’s the most relevant thing imaginable for this discussion. Your headphones won’t sound the same after, that’s for sure.

ABX listening tests are highly flawed and the only people pushing them are those that claim there is no thing known as quality sound.

Show me the ABX test that says a Fender guitar sounds better than a generic guitar; the ABX test that tells me which orchestra is better; the ABX test that tells me which mix of a Beatles song is better?

You can’t because there are none. ABX cannot prove quality, it can only muddy the water, which makes it very useful for the forces against quality, such as xiph dot org.

So as best I understand what you just said… there’s such a thing as “quality audio” but there’s no way to measure it, so therefore we shouldn’t try and instead should just take the word of “audiophiles” who usually have an interest in selling either their equipment, music, recording services or… all three? Do I get that about right?

So as best I understand what you just said… there’s such a thing as “quality audio” but there’s no way to measure it, so therefore we shouldn’t try and instead should just take the word of “audiophiles” who usually have an interest in selling either their equipment, music, recording services or… all three? Do I get that about right?

no you aren’t right.

you should use your own ears. stop listening to people on the internet that aren’t in pro audio, or have other agendas.

sure that can include me. don’t take my word. follow your own ears. i’m trying to give you factual information to help you become a critical listener.

Show me the ABX test that says a Fender guitar sounds better than a generic guitar; the ABX test that tells me which orchestra is better; the ABX test that tells me which mix of a Beatles song is better?

This is utterly irrelevant as ABX tests aren’t designed to do any of those things.

ABX tests are used to blindly compare ‘A’ to ‘B’ and determine whether there’s any audible difference between them.

If someone can hear a difference then which they prefer is simply a matter of personal preference. However, if no difference can be identified, and that result is repeated by numerous people, it is evidence that audible differences don’t exist, or at least that they’re so subtle they’re unlikely to be heard.
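The arithmetic behind that "repeated by numerous people" point is plain binomial probability. A quick sketch of how ABX results are scored against chance (the function name and trial counts are mine, chosen for illustration):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting at least `correct` answers right out of
    `trials` ABX presentations by pure guessing (one-sided binomial, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct: unlikely to be chance, so evidence of a real audible difference.
print(round(abx_p_value(12, 16), 4))  # 0.0384
# 9 of 16 correct: entirely consistent with guessing.
print(round(abx_p_value(9, 16), 4))   # 0.4018
```

This is why a run of failed trials is informative: each listener who can't beat chance tightens the case that no audible difference exists under those conditions.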

When ABX tests consistently fail to detect a difference between two audio formats, wild claims about the sound of one being “murdered” and “crap” start to look implausible.