
If you're ripping a CD to your computer, I would recommend you either use an external CD player/recorder, or use a good program that can extract to either FLAC or WAV; you can try both of them so you can tell for yourself which sounds better to you.

Yes, while there are no inherent audible differences due to the format, there's a good chance that the same old music gets remastered every time it is marketed in a new format.

Quote:

If you're ripping a CD to your computer, I would recommend you either use an external CD player/recorder,

Speaking as someone who has personally tried all of the alternatives, this is the worst way to do it. It's as bad as transcribing vinyl. You have the problem of matching your recording to your playing, which means either hand-and-eye coordination, or post editing, or both. Furthermore, CD players don't do nearly as good a job with damaged CDs as a good ripping program does, not even close.

Ripping also solves the problem of getting clean starts and finishes. It is all handled automatically. Most ripping programs even automatically rename your rips to match the CDs they come from, and apply file tags where relevant.

Ripping DVD-As seems to be possible; with the right software and hardware, it works about the same as ripping CDs.

I've read of some attempts to rip SACDs, based on the fact that PS3s are programmable and their optical drives are capable of reading SACDs.

Quote:

Or use a good program that can extract to either FLAC or WAV, and you can try both of them so you can tell for yourself which sounds better to you.

The audible difference to expect is that ripping is far less likely to have skips or clicks than any transcription that involves a trip through the analog domain (e.g. a transcription from a regular CD player).

Ripping also completely bypasses your PC's audio interface, so the digital music stays neatly and cleanly wrapped up in the digital domain. Most on-board audio interfaces are pretty good, but older laptops and desktops can still be problematic. A lot of laptops don't even have line-level inputs to connect to a CD player.

That is not right, Arny. I was asked how clock accuracy could differ when playing FLAC vs. WAV. I showed that FLAC uses more CPU and hence can create a different load in the system producing the clock. The papers I listed, and many others I did not, attest that this phenomenon exists (or you could have asked anyone who has designed such a circuit).

Actually, you said it, not showed it.
On my i7 920, I can't see any appreciable increase in CPU utilization when I'm playing a flac file. Engineers at Linn Audio have mentioned in their forum that they have measured various cpu / memory issues in WAV vs FLAC play back and have found that cpu utilization necessary to convert the FLAC file is offset by having to move half as much data. They measured the voltage drop on the rails and found it to be in the micro-volt range, and they have measured clock jitter and found it to be the same.
Other observers have also calculated that decoding a FLAC file consumes less than 1% of available CPU resources with a modern chipset.
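For what it's worth, the relative cost is easy to sanity-check on any machine. The sketch below uses Python's zlib as a stand-in for FLAC (an assumption on my part: it is a different codec, but both are lossless, and the shape of the comparison, decode cost versus a plain copy of the uncompressed data, is the same):

```python
# Compare the CPU cost of lossless decompression against simply copying
# the same uncompressed data. zlib stands in for FLAC here.
import time
import zlib

raw = bytes(range(256)) * 32768          # ~8 MB of compressible dummy data
compressed = zlib.compress(raw, level=6)

t0 = time.process_time()
for _ in range(20):
    zlib.decompress(compressed)          # the "decode the codec" case
decode_cpu = time.process_time() - t0

t0 = time.process_time()
for _ in range(20):
    bytes(raw)                           # the "just move the bits" case
copy_cpu = time.process_time() - t0

print(f"decode: {decode_cpu:.3f} s CPU, plain copy: {copy_cpu:.3f} s CPU")
```

On most modern machines both figures come out to small fractions of a second for over 100 MB of data processed, which is at least consistent with the sub-1% claim.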

Furthermore, and this cannot be overstated, all of the conjecture about clock instability, jitter, and power supply instability is moot because the effect persisted even when the file was converted back to WAV.

All it takes is a bit of a stretch to upset the clock. I suggest reading this AES paper (and the many others that describe this issue).

I'm mostly staying out of this, but I can't let this one pass without comment. Using that article to back up your claims is no better than when "power conditioner" companies show a change at the output of their filter rather than the output of the connected audio device. That particular logical fallacy is called a red herring.

On my i7 920, I can't see any appreciable increase in CPU utilization when I'm playing a flac file.

Your tool is not accurate enough. It is like trying to measure 0.001 mph using your car's speedometer. Remember again how little it takes to impact the S/PDIF clock enough to make it less accurate than the spec requires (less than 0.5 trillionth of a second).

You need instruction level tools to know which is which. And at any rate, please remember that I have repeatedly said that there is no prediction here as to which is better as the overall load is random due to vagaries of the OS.

Quote:

Engineers at Linn Audio have mentioned in their forum that they have measured various cpu / memory issues in WAV vs FLAC play back and have found that cpu utilization necessary to convert the FLAC file is offset by having to move half as much data. They measured the voltage drop on the rails and found it to be in the micro-volt range, and they have measured clock jitter and found it to be the same.

I have read that before and assume you mean this post:

"We have done extensive measurements on power supply disturbance recently, and have compared results for both FLAC and WAV streaming. Our findings are as follows :

1. If we measure the power rail that feeds the main processor in the DS we can clearly see identifiable disturbance patterns due to audio decoding and network activity. These patterns do look different for WAV and FLAC - WAV shows more clearly defined peaks due to regular network activity and processing, while FLAC shows more broadband disturbance due to increased (but more random) processor activity.

2. If we measure the power rails that feed the audio clock and the DAC we see no evidence of any processor related disturbances. There is no measurable difference (down to a noise floor measured in micro-volts) between FLAC and WAV in any of the audio power rails.

3. Highly accurate measurements of clock jitter and audio distortion/noise also show no difference between WAV and FLAC.

The extensive filtering, multi-layered regulation, and careful circuit layout in the DS ensure that there is in excess of 60dB of attenuation across the audio band between the main digital supply, and the supplies that feed the DAC and the audio clock. Further, the audio components themselves add an additional degree of attenuation between their power supply and their output. Direct and indirect measurements confirm that there is no detectable interaction between processor load and audio performance."

As you see, they clearly say they found "increased (but more random) processor activity," and that there were *measurable* power supply fluctuations when decoding audio in each format, with different characteristics for each. So my point is clearly proven by their findings: CPU activity does impact the voltage rails feeding it.

Perhaps you meant that the voltage to the DAC doesn't change. Before I get into that, please keep in mind that their scenario is networked playback, which increases CPU usage when fetching more bits in the case of WAV due to the overhead of the TCP/IP code in the kernel. The authors of our test did not do that; they played files locally. So you should assume lower overhead for the WAV file in this instance.

Back to the point of DAC voltage: the Linn testing was done on their own box, with great attention to quality, as evidenced by the last paragraph. The authors of the test in TAS did not use a Linn DS, which is a dedicated music server/player built by a high-end audio company in an integrated manner with attention to audio quality.

The authors of our test used an off-the-shelf PC which lacks any measurement/quality assurance with regard to jitter and, at any rate, runs a different OS and works differently than the Linn. Additionally, the test in our situation in one case involved a 25-foot cable to an external DAC, so the quality of the driver matters far more here than in Linn's case, where the parts were internal.

All that said, I want to make sure it is not forgotten that I do not believe the audio fidelity necessarily changes for the worse with FLAC. I am only saying that the system load changes in character when running FLAC vs. WAV. The Linn report fully and clearly supports this.

Quote:

Other observers have also calculated that decoding a FLAC file consumes less than 1% of available CPU resources with a modern chipset.

You keep mentioning this point and I keep saying it is unrelated to the point being discussed. It matters not whether we max out the CPU. What matters is that the character of the load has changed (note how the Linn engineers were aware of the same and noted it in their post).

Let me try this. Take two situations: 1) your system is 100% idle, the CPU doing absolutely nothing, and 2) CPU usage is 0.1%. Your Perfmon will show essentially 0 in both cases, as you state.

Now let's further assume your CPU is single core, running at 2.5 GHz with a CPI (cycles per instruction) of 1. That means your CPU is executing 2.5 million more instructions per second in the second instance versus none in the first, so there is a big difference in the character of the system.

Keep in mind that there is no such thing as "1%" CPU usage. The moment the CPU executes anything, it is 100% busy during that time. What you see in Perfmon is the average load of a binary system: either 100% busy or 100% idle. For those 2.5 million instructions, then, your system peaked to its full working load, whereas in the idle case it did nothing. That difference is distinct and significant when we care about small power supply disturbances.
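The arithmetic above can be written out explicitly (the 2.5 GHz clock, single core, and CPI of 1 are the hypothetical values from the example, not measurements of any real system):

```python
# Instructions actually executed per second at a "0.1%" average load.
clock_hz = 2.5e9      # 2.5 GHz, single core (hypothetical)
cpi = 1.0             # assumed cycles per instruction
utilization = 0.001   # the 0.1% average load that Perfmon would round to ~0

instructions_per_sec = clock_hz / cpi * utilization
print(f"{instructions_per_sec:,.0f} instructions per second")  # 2,500,000
```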

Think of how little the tires on your car can be out of balance and still cause your steering wheel to vibrate. You surely cannot tell a tire is out of round just by looking at it.

We can therefore say that any CPU activity creates massive peaks in power consumption relative to idle. Taking an i7 processor with a TDP of 130 watts, Intel specs peak current consumption at a whopping *150* amps! If your cooktop is broken in the morning, you can cook your eggs over your CPU. In our 0.1% case, the average current is pretty low, so you would have to use the real stove. But the instantaneous power consumption is still well into many amps. And it is these current pulses that are the problem, not steady-state usage.
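The same duty-cycle arithmetic makes the average-versus-peak point explicit (130 W is the TDP figure cited above; the 0.1% duty cycle is carried over from the earlier hypothetical):

```python
# Average draw is tiny, but every burst of work still pulls peak power.
peak_watts = 130.0   # i7 TDP cited above
duty = 0.001         # CPU busy 0.1% of the time

avg_watts = peak_watts * duty
print(f"average: {avg_watts:.2f} W, instantaneous while busy: {peak_watts:.0f} W")
```

The supply sees 0.13 W on average, yet must still deliver the full peak current during every pulse; it is the pulses, not the average, that couple into the rails.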

Quote:

Furthermore, and this cannot be overstated, all of the conjecture about clock instability, jitter, and power supply instability is moot because the effect persisted even when the file was converted back to WAV.

I am afraid that is still an incorrect conclusion. As long as the two files are different, the system activity is different. If there is a difference, no matter how remote, then you can't say jitter did not change. It very well could have.

What points to the results being wrong is not that. It is the fact that with every conversion, they claim the fidelity got worse, and with multiple listeners. Therefore we get to multiply our small probabilities by each other and arrive at an astronomically small number, essentially equal to "can't happen," especially since they provided no measurements to show objective differences in system output. If they had not done these consecutive tests, you could not have dismissed what they found so easily. As it is, even my theory needs to be put to the test by having them repeat the tests.

I'm mostly staying out of this, but I can't let this one pass without comment. Using that article to back up your claims is no better than when "power conditioner" companies show a change at the output of their filter rather than the output of the connected audio device. That particular logical fallacy is called a red herring.

--Ethan

It is not a red herring in this instance. People here, I think all of us, are trying to find proof that the test is wrong. A proof cannot be a "maybe." If you cannot prove that there can never be a difference, then it remains probable. If it is probable, then we can't veto the results out of hand. We can guess that is the case. We can hope that is the case. But we can't prove that is the case.

Remember, taken at face value, blind tests across many testers have shown that they heard something. Is it your opinion that if someone hears something, measurements would not show an objective difference?
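"Taken at face value" can be made quantitative: the standard way to score a blind test is the binomial probability that the listeners' result arose from pure guessing. A minimal sketch; the 14-of-16 figure is an arbitrary illustration, not a number from the test under discussion:

```python
from math import comb

def p_value(correct: int, trials: int, p_chance: float = 0.5) -> float:
    """Probability of getting at least `correct` of `trials` right by guessing."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

# e.g. 14 correct out of 16 ABX trials: ~0.2% chance of that by luck alone
print(f"{p_value(14, 16):.4f}")  # 0.0021
```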

Straw man argument. At the very least, the failure to use generally accepted experimental controls for work like this makes the results suspect to the point of being irrelevant.

The lack of something is not the same as proof of something. You need to demonstrate how the lack of controls resulted in what they found. You have not even read the report, let alone substantiated that statement.

Quote:

They have made far-reaching claims that other findings contradict. Their claims, if applied broadly, would shake the digital world to its core.

That's right. Such was the case with Einstein's theory of relativity: that time and space were linked, unlike what Newtonian physics had taught us until that point. I am sure there were a lot of people like you who were upset at those findings but, thankfully, their being upset didn't amount to anything.

I am sure the whole notion of jitter, at trillionths of a second no less, causing the waveform of a DAC to change is an unbelievable thing to millions of people. Yet we can objectively show it with our measurement systems.

Put another way, the mere fact that a test result differs from our notions does not automatically make it wrong. And certainly not in this space, where there is an incredible paucity of test results. It is not as if I can go and find a dozen FLAC or media player tests at AES which included hundreds of people and dozens of expert listeners, following ITU advice. If we had that, we would not be discussing this point.

What we have is our opinion that this result is wrong. All of us are too damn lazy to go and run our own tests, and instead sit here wasting time arguing.

Quote:

But Amir, you seem to be unable to appreciate this.

No. I have already said I think the results are wrong. The difference between us is how we get there. You throw out suppositions and innuendo not backed by the science and engineering of the test fixture. And you do so without even the common courtesy of reading their test conditions and write-up. That is junk science and evaluation in my book, so I push back even though I agree with your conclusion regarding the validity of the results.

Quote:

Amir, you have been skating on very thin ice lately. Ice no thicker than that currently on Puget Sound... ;-)

Well, OK. Say, why do you hear the hard disk in your laptop as you listen to it? Maybe they could too, and that was the difference! They liked the rhythm of the hard disk better for WAV than for FLAC. The former is proportional to the music; the latter, not so much. Are you sure you did not lend your laptop to them?

Quote:

You have been contradicted by two witnesses with hands-on experience.

Well, OK.

Quote:

And then wonder of wonders Amir, you continue to assert that you are right!

I am afraid that is still an incorrect conclusion. As long as the two files are different, the system activity is different. If there is a difference, no matter how remote, then you can't say jitter did not change. It very well could have.

So if I rip "Misty" to WAV, compress it with a lossless compression algorithm, extract it back to WAV, and play it, is it somehow fundamentally different from playing the original file stored elsewhere on my computer?

So if I rip "Misty" to WAV, compress it with a lossless compression algorithm, extract it back to WAV, and play it, is it somehow fundamentally different from playing the original file stored elsewhere on my computer?

Yes. If you create an instruction trace for your computer, it will do things differently in each instance. It will even be different if you just copy the file from one place to another without conversion to and from lossless!

The above is a troubling observation relative to the rest of the tests the authors ran. If we accept that system activity changes sound in this manner, it means the rest of their results were invalid. For example, they tested the fidelity of files ripped at different ripping speeds. They concede, I believe, that the rips were all identical yet sounded different. If we agree that having a file in two different places makes the sound different, then you can't compare two rips and assume the only difference is rip speed! In that sense, we could use the conclusion in one part of the test to invalidate another part. And then maybe we can circle back and invalidate the original test.

Yes. If you create an instruction trace for your computer, it will do things differently in each instance. It will even be different if you just copy the file from one place to another without conversion to and from lossless!

The above is a troubling observation relative to the rest of the tests the authors ran. If we accept that system activity changes sound in this manner, it means the rest of their results were invalid. For example, they tested the fidelity of files ripped at different ripping speeds. They concede, I believe, that the rips were all identical yet sounded different. If we agree that having a file in two different places makes the sound different, then you can't compare two rips and assume the only difference is rip speed! In that sense, we could use the conclusion in one part of the test to invalidate another part. And then maybe we can circle back and invalidate the original test.

Personally, I reject pretty much everything.
20ns of jitter has been shown to be inaudible per multiple AES publications. 0.5 ns produces jitter below the theoretical limit of a 16 bit signal.
If minor fluctuations in CPU load produced audible distortion then the whole concept of digital audio would be impractical. My OS has random fluctuations of 10-20x greater than the workload of extracting a flac file and that pretty much happens continuously. The concept that any minor variation of a data fetch has audible consequences would render computer playback of any audio file chaotic regardless of its data structure.

That is not being objective. But then again, most of us are not. There are people rejecting medical care, God, etc. The fact that they do doesn't validate or invalidate anything. Now, if you are Einstein and you propose something that invalidates Newton, then folks listen. Us? Not so much.

Quote:

20ns of jitter has been shown to be inaudible per multiple AES publications.

Jitter is not just a number. It has spectrum and amplitude, and its spectrum is far more important than its absolute value. So is the frequency it is operating at. Random jitter, for example, is benign even at very high values, since its only manifestation is an increased noise floor. Change that to an impulse and all of a sudden it becomes an entirely different animal. One of the most often cited audibility tests for jitter (Ashihara et al.), for example, used random jitter. A study that wasn't even worth doing, based on that one fact.
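The point about spectrum can be made concrete with the textbook narrowband phase-modulation approximation: sinusoidal jitter of peak amplitude Δt on a tone at frequency f produces a pair of discrete sidebands at roughly 20·log10(π·f·Δt) dB relative to the carrier, whereas random jitter of the same magnitude smears that energy across the noise floor. A sketch (the 10 kHz tone and 1 ns jitter values are arbitrary illustrations):

```python
from math import pi, log10

def sideband_level_db(carrier_hz: float, jitter_peak_s: float) -> float:
    """Level of each discrete sideband relative to the carrier for small
    sinusoidal jitter (narrowband phase-modulation approximation)."""
    beta = 2 * pi * carrier_hz * jitter_peak_s  # peak phase deviation (radians)
    return 20 * log10(beta / 2)                 # sideband amplitude ~ beta/2

# 1 ns of sinusoidal jitter on a 10 kHz tone: a discrete tone pair
# roughly 90 dB below the carrier, rather than a raised noise floor
print(f"{sideband_level_db(10_000, 1e-9):.1f} dB")  # -90.1 dB
```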

Without knowledge of what I just said, and tons more related to the mathematics and architecture of digital audio systems, you cannot believe the one-liners people throw around. If you don't believe me, read this AES paper, which has an extremely thorough overview of the topic: http://www.scalatech.co.uk/papers/aes93.pdf. I am sure you will find the paper dense, but you need to be able to follow it all to have an informed opinion on the topic.

I encourage you to read the section named "Audibility of Jitter Errors." The paper is not text, so I can't quote it here. But read through it and see how it goes over previous findings and declares them well above the proper recommended levels for inaudibility.

Remember, your job here is not to show that so-and-so didn't hear a difference, but that the authors of the test in question did not. The fact that 10 different dudes, listening to some other music on some other gear, didn't hear jitter doesn't invalidate the results of the tests in question. You can encode an MP3 file and declare it the same as the original, and I can encode a different file and hear a difference. We would both be right in that case.

Indeed, if you read the forums, people keep saying they have done their own tests and didn't hear a difference, and the authors counter that they did not use their test setup, their content, or their training, and so the results cannot be interchanged.

A level which is routinely violated. For example, measurements of consumer AVRs show that they have 4 to 7 nanoseconds of jitter over their HDMI ports.

Also keep in mind that while I use the 0.5 ns number, others use a smaller one because we can hear through the system noise.

Now, if we had such measurements from the authors' PCs, like the Linn folks performed, then all would be well. But we don't have that. You like to leap to a conclusion, which I cannot do in good conscience.

Quote:

If minor fluctuations in CPU load produced audible distortion then the whole concept of digital audio would be impractical.

We don't know that it *will*. But we know that it can. Remember, while we keep talking about jitter, that is not the only factor. Electrically coupling the PC to the DAC, as the authors did with their S/PDIF connection, can also transmit noise from the PC to the DAC.

Quote:

My OS has random fluctuations of 10-20x greater than the workload of extracting a flac file and that pretty much happens continuously.

That's right. No dispute about that. So we know that we are impacting the load of the system. We lack measurements of your system and theirs to know the impact objectively.

If we are to trust your ears, then we had better trust theirs more, because they used blind testing across multiple listeners! Think about that. The whole reason we do blind testing is to confirm our understanding of what is audible and what is not. Unless you can find fault in their methodology, we have prima facie evidence that the distortions are audible despite our notions and previous tests to the contrary.

You have a theory of inaudibility; they have real results! We are heavily handicapped here. So we had better have pretty strong science to throw at them, and not just our opinion of what is audible and what is not.

Quote:

The concept that any minor variation of a data fetch has audible consequences would render computer playback of any audio file chaotic regardless of its data structure.

Not really. Things change. The question is, was it audible? The authors of this test say it was. They do not attempt to explain why. What I am trying to say is that our assertions that something "can't be" better be just that. Can't be. Your own evidence from Linn shows that the CPU power changed even in an audiophile device and importantly so did the spectrum. It matters not that this turns our world upside down. Science doesn't care if it is not fair.

Arny hears the hard disk in his laptop. My son's desktop does the same thing. Should we declare those observations false because we have an idealistic view of digital audio reproduction?

Without knowledge of what I just said, and tons more related to the mathematics and architecture of digital audio systems, you cannot believe the one-liners people throw around. If you don't believe me, read this AES paper, which has an extremely thorough overview of the topic: http://www.scalatech.co.uk/papers/aes93.pdf. I am sure you will find the paper dense, but you need to be able to follow it all to have an informed opinion on the topic.

The first problem we see here is that the referenced paper is more than 20 years old. Amir apparently wants people to believe that audio is a very static area, and that we face the identical problems in the audio systems before us today that Dunn and Hawksford were worrying about in 1992.

One can see a specific example of this problem in figure 25 on page 21 of the cited document. It shows a scheme for de-jittering a "Musical Fidelity" DAC with a "jitter rejection unit". Now, 20 years later, virtually every AVR contains the circuitry shown in the "jitter rejection unit" as a standard feature.

Reviewing the user manual for the current product from the same vendor, we find the following: "Our well tuned filtering circuit gives immeasurably small jitter, noise and distortion artefacts allowing astounding imaging, detail and transparency, to deliver all music types exactly as the artist originally intended."

IOW, Musical Fidelity has moved on and incorporated a jitter reduction circuit as a standard part of their product, and the 1992 experiments would have a completely different outcome if rerun with their 20-years-later updated product.

Note also that the paper Amir refers to references standard AES11-1991. The current standard is AES11-2009, which contains significant updates. The text of the referenced paper points out that versions of AES-11 prior to AES11-1991 were marginal for TOSLINK interfaces. That 1991 standard is now totally obsolete, and the current one no doubt includes additional updates that further refine industry-standard use of TOSLINK interfaces.

I hope that the above shows the folly of naively referencing obsolete technical information, which Amir habitually imposes on this forum.

We wouldn't know that from all the ca. 1991 references that you cite, Amir.

Quote:

The question is, was it audible? The authors of this test say it was.

Every internet troll on audio forums says that whatever hobby horse they are flogging today is audible. So do any number of high-end dealer and manufacturer web sites.

Quote:

They do not attempt to explain why.

Sure they do. They blame it all on FLAC and computers. The hidden subtext is that it is far less risky to rely on traditional high-end audio technology.

Quote:

What I am trying to say is that our assertions that something "can't be" better be just that. Can't be.

What you are ignoring, Amir, is the fact that some assertions of "can't be" are totally justified. The thermodynamic efficiency of heat engines can't be greater than 100%, all things considered. Heck, it mostly can't be more than 30-40 percent. You can take that to the bank!

Quote:

Your own evidence from Linn shows that the CPU power changed even in an audiophile device and importantly so did the spectrum.

Let's look at the actual quote:

"Engineers at Linn Audio have mentioned in their forum that they have measured various cpu / memory issues in WAV vs FLAC play back and have found that cpu utilization necessary to convert the FLAC file is offset by having to move half as much data. They measured the voltage drop on the rails and found it to be in the micro-volt range, and they have measured clock jitter and found it to be the same."

Amir, how could you miss the point more?

They say "...CPU utilization necessary to convert the FLAC file is offset by having to move half as much data"

Doesn't "offset" mean the same as nulling the difference out so that it stays the same?

Amir, what dialect of English leads to a more erroneous interpretation of clear text than you are now guilty of?

"Engineers at Linn Audio have mentioned in their forum that they have measured various cpu / memory issues in WAV vs FLAC play back and have found that cpu utilization necessary to convert the FLAC file is offset by having to move half as much data. They measured the voltage drop on the rails and found it to be in the micro-volt range, and they have measured clock jitter and found it to be the same."

Amir, how could you miss the point more?

They say "...CPU utilization necessary to convert the FLAC file is offset by having to move half as much data"

That is not the actual quote; I posted the actual quote. What is above is Swampfox's understanding of what *they* said, which was different. This is what they actually said, quoted word for word again. Note the use of "we" versus "they" above:

"1. If we measure the power rail that feeds the main processor in the DS we can clearly see identifiable disturbance patterns due to audio decoding and network activity. These patterns do look different for WAV and FLAC - WAV shows more clearly defined peaks due to regular network activity and processing, while FLAC shows more broadband disturbance due to increased (but more random) processor activity."

Quote:

Amir, what dialect of English leads to a more erroneous interpretation of clear text than you are now guilty of?

Before I answered, I went and dug up the original post, because I had read it before. Swampfox did not come back with a different statement, so I trust that what I quoted above is what they really said. Maybe you can answer your own question.

That is not the actual quote; I posted the actual quote. What is above is Swampfox's understanding of what *they* said, which was different. This is what they actually said, quoted word for word again. Note the use of "we" versus "they" above:

"1. If we measure the power rail that feeds the main processor in the DS we can clearly see identifiable disturbance patterns due to audio decoding and network activity. These patterns do look different for WAV and FLAC - WAV shows more clearly defined peaks due to regular network activity and processing, while FLAC shows more broadband disturbance due to increased (but more random) processor activity."

Before I answered, I went and dug up the original post, because I had read it before. Swampfox did not come back with a different statement, so I trust that what I quoted above is what they really said. Maybe you can answer your own question.

Here's the full quote:

"We have done extensive measurements on power supply disturbance recently, and have compared results for both FLAC and WAV streaming. Our findings are as follows :

1. If we measure the power rail that feeds the main processor in the DS we can clearly see identifiable disturbance patterns due to audio decoding and network activity. These patterns do look different for WAV and FLAC - WAV shows more clearly defined peaks due to regular network activity and processing, while FLAC shows more broadband disturbance due to increased (but more random) processor activity.

2. If we measure the power rails that feed the audio clock and the DAC we see no evidence of any processor related disturbances. There is no measurable difference (down to a noise floor measured in micro-volts) between FLAC and WAV in any of the audio power rails.

3. Highly accurate measurements of clock jitter and audio distortion/noise also show no difference between WAV and FLAC.

The extensive filtering, multi-layered regulation, and careful circuit layout in the DS ensure that there is in excess of 60dB of attenuation across the audio band between the main digital supply, and the supplies that feed the DAC and the audio clock. Further, the audio components themselves add an additional degree of attenuation between their power supply and their output. Direct and indirect measurements confirm that there is no detectable interaction between processor load and audio performance."

Let me summarize:
I am not disputing that jitter exists.
I am not attempting to dispute that they heard a difference.

What I am disputing, as a matter of logic and experimental design, is the conclusion that jitter may explain an audible difference in the experiment being discussed, because:

Facts not in dispute by anyone, including the author:
FLAC is a lossless process.
Neither WAV nor FLAC contains timing information.
The OS has no preference for location, retrieval, or priority of FLAC vs WAV.

Data presented:
The same song sounds different when it is a FLAC file.
The more the song is compressed the bigger the difference.
Converting the song back to WAV from FLAC retains the difference.
Repeatedly compressing and expanding a WAV file with FLAC progressively degrades the sound of the file.
It is reproducible with different files.

Logical inconsistencies proposed:
1) Processing time causes jitter in the FLAC file.
This can't be true, given the above data: the file that has been expanded back to WAV no longer has the CPU overhead associated with expanding a FLAC file, yet retains the change in sound.

2) The OS induces jitter when it fetches the FLAC file.
a) Inconsistent, because once the FLAC file was expanded it retained the audible difference. Thus a reconverted FLAC file would have the same jitter as the virgin file, unless the OS gave preference to the virgin file, which is indisputably not true.

3) Different locations on the HD have less jitter.
a) see 2
b) some of the original virgin files would sound worse than reconverted FLAC files and would have been ranked accordingly, rendering the "once converted back to WAV" finding null.

The only logical conclusion is that FLAC conversion damages the file,
which contradicts FLAC being lossless compression, an undisputed fact per the author.

I could go into the story of how Galileo came to the conclusion that gravity affects all objects the same, based on a logical inconsistency in the alternative, but I'll spare you all.

If you cannot prove that there can never be a difference, then it remains probable.

Probable or possible? This sounds like typical John Atkinson logic, making the most unlikely things that could affect fidelity seem likely: "Many people have heard LPs sound better after being demagnetized, and you can't prove them wrong." Ad nauseam.

Quote:

Remember, taken at face value, blind tests across many testers has shown that they heard something. Is it your opinion that if someone hears something, measurements would not show an objective difference?

In my (admittedly spotty) reading of this thread, I noticed several people comment that the tests were done poorly. In the grand scheme of things, it makes no sense that lossless compression will change the sound of a recording. Extraordinary claims demand extraordinary proof.

Quote:

This sounds like typical John Atkinson logic, making the most unlikely things that could affect fidelity seem likely: "Many people have heard LPs sound better after being demagnetized, and you can't prove them wrong." Ad nauseam.

But this is not the typical food fight. We are sitting here, not having been there when the tests were conducted, speculating on reasons we could invalidate them, without the authors present to dispute. If we can show that 1+1 = 2, then we are done. Our proof needs to be that exact. Otherwise, it invites arguments from the other side as they have already said: "you don't have our system or our ears so your observations don't count."

If we want to be dismissive out of hand, then sure, any standard suffices but then it doesn't disprove their findings. It only shows that we individually don't believe.

Quote:

In my (admittedly spotty) reading of this thread, I noticed several people comment that the tests were done poorly. In the grand scheme of things, it makes no sense that lossless compression will change the sound of a recording. Extraordinary claims demand extraordinary proof.

--Ethan

It being "poor" is a mischaracterization put forward by Arny, who has not even read the article that describes their methodology. I have repeatedly asked him what specifically makes it poor enough to generate these results, and he has not answered.

For my part, I have read it. And while the description doesn't rise to the standards I have for disclosure, at face value the work is anything but "poor." They say they used a mix of single- and double-blind tests. They say they only published results where both of them agreed in independent blind tests. They said testing was single blind many times, but that no talking was allowed and eyes were closed. They said they had other people do the listening tests when the results were controversial. They said they trained themselves on the particular musical pieces. They say they level-matched to 0.1 dB. On and on.
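Whether such blind trials actually show anything depends on the scores, which the post above doesn't give. As a hedged illustration (the 14-of-16 and 10-of-16 scores are invented for this sketch, not taken from the article), the chance of a pure guesser doing that well is a simple binomial tail:

```python
from math import comb

def guessing_p_value(correct: int, trials: int) -> float:
    """P(score >= correct) if the listener is purely guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical ABX-style result: 14 correct out of 16 trials.
print(round(guessing_p_value(14, 16), 5))   # 0.00209: very unlikely to be pure guessing

# A weaker score is easy to reach by chance:
print(round(guessing_p_value(10, 16), 3))   # 0.227: entirely consistent with guessing
```

This is why the raw trial counts matter more than the narrative description of the protocol.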

Believe me, as a guy who maintains his library in lossless audio, I want to prove them wrong on a technicality as much as the next guy. But once I read the article, there was not a lot of wiggle room there. So I am compelled, as much as I hate it, to consider the validity of the entire endeavor. I say that because I know that if I followed in their footsteps, flaws and all, I could not duplicate what they say. If you know how to generate the bad results they did despite the above measures, please explain it.

It is not a red herring in this instance. People here, I think all of us, are trying to find a proof for the test being wrong. A proof cannot be a "maybe."

Amir, your gross logical flaw here is cherry-picking the form of the hypothesis to be proven, whether positive for differences or negative for differences.

Amir, you've picked the negative form of the hypothesis, which is bad logic, since it is well known that negative hypotheses are difficult or impossible to prove. I suspect that you know this well-known fact and are taking advantage of other people's ignorance to lord it over them.

The correct form of hypothesis to choose to prove is almost always the positive form. Common sense supports this idea: do drug manufacturers, or medical science as a whole, test drugs to prove that they don't work? Definitely not.

The usual approach is to test the positive form of the hypothesis.

Seeing no reliable proof, whether theoretical or experimental, of the hypothesis that FLAC changes sound quality, it remains unproven.

Quote:

Seeing no reliable proof, whether theoretical or experimental, of the hypothesis that FLAC changes sound quality, it remains unproven.

In logic, and in philosophy in general, a conditional truth cannot negate the necessarily true (that is, 'a priori' knowledge can't be negated by 'a posteriori' knowledge).
The published results, from what the author has posted on other blogs, describe an experiment that does just that, and thus can be dismissed in their entirety.

Quote:

It being "poor" is a mischaracterization put forward by Arny who has not even read the article which describes their methodology.

I have read part 3 of the article, which calls them "blind", which in this context generally means single blind.

Quote:

I have repeatedly asked him what specifically makes it poor enough to generate these results and he has not answered.

Yet another example of Amir simply dismissing answers that he can't for some reason properly rebut - probably because he is incapable of understanding them.

There is almost no common ground between Amir and any reasonable person in this or many other regards since any reasonable person would have dismissed the stated conclusions based on even just a basic knowledge of science and technology.

Quote:

Probable or possible? This sounds like typical John Atkinson logic, making the most unlikely things that could affect fidelity seem likely: "Many people have heard LPs sound better after being demagnetized, and you can't prove them wrong." Ad nauseam.

This is basically the same kind of improperly stated hypothesis, used to frustrate reasonable efforts at proof, that I recently pointed out in this thread.

In these situations (which I have also observed), John Atkinson picks the negative form of the hypothesis. This is bad logic, since it is well known that negative hypotheses are difficult or impossible to prove. I suspect that he knows this well-known fact and uses it to take advantage of other people's ignorance in order to confuse them.

The correct form of hypothesis to choose to prove is almost always the positive form. Common sense supports this idea: do drug manufacturers, or medical science as a whole, test drugs to prove that they don't work? Definitely not.

I can imagine a universe in which the FDA approves all drugs that are submitted for approval until someone proves that they are totally ineffective. We can call this John Atkinson's and Amir's universe. ;-)

Quote:

The correct form of hypothesis to choose to prove is almost always the positive form. Common sense supports this idea: do drug manufacturers, or medical science as a whole, test drugs to prove that they don't work? Definitely not.

I can imagine a universe in which the FDA approves all drugs that are submitted for approval until someone proves that they are totally ineffective. We can call this John Atkinson's and Amir's universe. ;-)

Actually, the FDA allows eye drops that go generic to be automatically approved as long as the active ingredient is the same. The manufacturer does not have to prove that it's effective. There are many factors other than the active ingredient that play a role, from the formulation of the bottle itself, to inert ingredients and pH.
It's been too long since my formal training, but most studies are based on the null hypothesis. In other words, the hypothesis is that FLAC files are no different from WAV files. You can never completely prove this; you can only reach conclusions with a certain degree of probability. In other words, with scientific studies, you don't prove that something works; you show, within a degree of certainty, that it doesn't not work.
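That "degree of probability" has a concrete consequence for tests like the one being debated: run enough listeners (or enough comparisons) and somebody will "pass" by pure chance. A sketch with invented numbers (16 trials, a pass mark of 12 correct, 20 listeners; none of these figures come from the article):

```python
from math import comb

def tail(correct: int, trials: int) -> float:
    """Chance a pure guesser scores at least `correct` out of `trials` (p = 0.5 each)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

p_single = tail(12, 16)              # one guesser "passing" with 12/16
p_any = 1 - (1 - p_single) ** 20     # at least one of 20 independent guessers passing

print(round(p_single, 4))            # 0.0384: rare for any one guesser
print(round(p_any, 2))               # 0.54: across 20 guessers, a false positive is likely
```

This is why publishing only the comparisons where listeners agreed, without reporting how many comparisons were run in total, makes the results hard to evaluate.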

Quote:

...most studies are based on the null hypothesis. In other words, the hypothesis is that FLAC files are no different from WAV files. You can never completely prove this; you can only reach conclusions with a certain degree of probability. In other words, with scientific studies, you don't prove that something works; you show, within a degree of certainty, that it doesn't not work.

Right - the idea of absolute proof went out decades ago. The current take on science is that "All findings of science are provisional. We accept them as being provisional until we find something better."

So, by casting the problem as a negative hypothesis and demanding absolute proof, crafty individuals can put anybody at an apparent disadvantage when they argue with them.

I listen to NPR a lot, and Michael Feldman's "Whad'Ya Know?" call-in quiz is a fine example of single blind testing.

Quote:

Originally Posted by wikipedia

Prior to the playing of the first Quiz, an audience member is given a list of the four disclaimers to read, which state who can or cannot play the quiz. The disclaimers, read on the air every week, remain constant, with the exception of disclaimer 2, which is a short joke referencing a current event:
"All questions used on Whad'ya Know? have been painstakingly researched, although the answers have not. Ambiguous, misleading, or poorly worded questions are par for the course. Listeners who are sticklers for the truth should get their own shows."
Here the audience member reads a short statement making light of a current event. (Sometimes this quip takes the third position rather than the second.)
"Persons employed by the International House of Radio or its member stations are lucky to be working at all, let alone tying up the office phones trying to play the quiz. Listeners who have won recently should sit on their hands and let someone else have a chance for a change."
"All opinions expressed on Whad'ya Know? are well-reasoned and insightful. Needless to say, they are not those of the International House of Radio, its member stations, or lackeys. Anyone who says otherwise is itching for a fight."

If Amir would periodically remind us that he is following a similar set of rules when he posts on AVS, I wouldn't be so hard on him! ;-)