I see the merits of a blind test to remove visual bias, provided the test itself doesn't add its own bias. Having the listeners/operators of the test participate when they already know the sound character of one of the speakers in the test is a huge bias, and I don't think people here are realizing that.

It's come up before in discussion. I know sonicfox has made the point, and I'm sure it has been suggested by others. I, myself, am not convinced that direct double-blind ABX listening tests are the final word in speaker comparisons, as I think spending time with each speaker on its own DOES have value. We often talk of how speaker break-in is the brain breaking in, not the actual speaker, and we usually suggest a longer time to get used to the sound than anyone is going to dedicate to a listening test. So how are we supposed to fairly compare speakers when our brains don't have the proper time to acclimate to each speaker's sound? Unfortunately, we have no better way to compare the subjective performance of speakers. In any case, I'll defer to the people who have put in more hours of actual research.

. . . I didn't know you were running a pair of VP180s together, is there much more of an advantage using that configuration over a single VP180? How far apart are the top and bottom 180s?

My screen is 74" high, so they are about 76" apart, not more. Yes, there are certainly advantages:
- The dialogue is locked in the center of the screen.
- The sound is seamless with the main left and right speakers, whose tweeters are also at the middle of the screen.
- Because of the eye-brain relationship, when you see an action at the top or the bottom of the screen, the sound seems to follow it perfectly.
- All the benefits of having twice as many transducers/drivers to do the work.

I recently moved all my speakers to the v3 models, added a pair of heights, and added a second A1400-8. Each A1400-8 is powering one side plus one center speaker: five Axiom speakers each.

. . . A second A1400 would be great, and dual VP180s is where I'm leaning for center channel.

You should; once you have lived with duals, there is no going back to singles. I was forced to for a while and it was painful!

Originally Posted By: CV

. . . When are you going to grace us with new pictures of your system?

All the surfaces of my dedicated Axiom Home Cinema are black (ceiling, floor, side and rear walls). Previously, I had one side wall deep blue and the other one dark brown. Even though you had to look very carefully to confirm whether they were black or not, the fact that they are now pitch black has improved the sensation when watching the picture.

So far, even when I bring all the extra lights that I can into the room to take pictures, they don't look good and, unfortunately, they don't show much either. My Axiom speakers blend with the walls; you see only drivers and Axiom nameplate logos. Highly frustrating for an amateur photographer . . .

Originally Posted By: CV

. . . I keep wanting to move there and work at Axiom.

I completely understand; not only are the products outstanding, the Axiom family is exceptional!

Double-blind is a catchy term, but most people don't understand the strictness you have to adhere to in order to properly conduct a true, controlled double-blind study. Controlled audio listening tests are much more difficult and costly to set up, and have many other variables that can introduce bias from both the experimenter and the listening panel, which makes it almost impossible to adhere to the double-blind standard (compared to the food example).

For a true double-blind listening test, IMO you would need all of these things:

Expertise: Multiple third-party experts in the field choosing the individual speakers and setting up the listening test, plus another third party who is an expert in statistics and basic computer statistical analysis (e.g., linear regression/ANOVA) and who can take the data and interpret its significance.

Speaker shuffler and acoustically transparent curtain: Adhering to the double-blind standard means controlling every bias, big or small, which would mean playing all speakers in the exact same position in the room through a speaker shuffler and keeping the speakers out of the listeners' view.

A large random sample (100+) of people we know nothing about, except that their hearing has been checked to get on the panel. They use their own source material, one at a time in the listening room, taking all the time they want.
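As a sketch of the statistics step mentioned above, here is what a one-way ANOVA on panel preference ratings could look like, computed "by hand" in Python. All the ratings below are invented purely for illustration; the point is the mechanics of comparing between-speaker variance to within-speaker variance.

```python
# Hypothetical one-way ANOVA on invented preference ratings for three
# blinded speakers, stdlib only. Ratings are illustration-only data.
from statistics import mean

speaker_a = [7.0, 8.0, 7.5, 8.0, 6.5, 7.0]
speaker_b = [4.0, 5.0, 4.5, 5.0, 4.0, 4.5]
speaker_c = [4.5, 4.0, 5.0, 4.5, 5.0, 4.0]
groups = [speaker_a, speaker_b, speaker_c]

grand = mean(x for g in groups for x in g)   # grand mean of all ratings
n_total = sum(len(g) for g in groups)
k = len(groups)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)        # df1 = k - 1
ms_within = ss_within / (n_total - k)    # df2 = N - k
f_stat = ms_between / ms_within

# A large F means the speakers' mean ratings differ far more than
# listener-to-listener noise would explain.
print(f"F({k - 1}, {n_total - k}) = {f_stat:.2f}")
```

A real analysis would then look the F statistic up against the F distribution for a p-value and run post-hoc tests to identify which speakers differ, which is exactly the kind of work the third-party statistician would handle.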

The process would be expensive, time-consuming, and STILL subject to plenty of error. That said, a controlled single-blind test using random listening panels is more than appropriate for testing audio equipment. We are not talking about saving lives here.

_________________________
I’m armed and I’m drinking. You don’t want to listen to advice from me, amigo.

Do High School Students Prefer Neutral/Accurate Loudspeakers? Given that the high school students preferred the higher quality music format (CD over MP3), would their taste for accurate sound reproduction hold true when evaluating different loudspeakers? To test this question, the students participated in a double-blind loudspeaker test where they rated four different loudspeakers on an 11-point preference scale. The preference scale had semantic differentials at every second interval, defined as: 1 (really dislike), 3 (dislike), 5 (neutral), 7 (like) and 9 (really like). The relative distances in ratings between pairs of loudspeakers indicated the magnitude of preference: ≥ 2 points represents a strong preference, 1 point a moderate preference, and ≥ 0.5 point a slight preference. The four loudspeakers were the floor-standing models (slide 22): Infinity Primus 362 ($500 a pair), Polk Rti10 ($800), Klipsch RF35 ($600), and Martin Logan Vista ($3,800). Each loudspeaker was installed on the automated speaker shuffler in Harman International's Multichannel Listening Lab, which positions each loudspeaker in the same location when that loudspeaker is active. In this way, loudspeaker positional biases are removed from the test. Each loudspeaker was level-matched to within 0.1 dB at the primary listening location. Listeners completed a series of four trials in which they could compare each of the four loudspeakers a number of times before rating each one on the 11-point preference scale. Two different music programs were used, with two observations each. At the beginning of each trial, the computer randomly assigned four letters (A, B, C, D) to the loudspeakers. This meant that the loudspeaker ratings in consecutive trials were more or less independent (slide 23).
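The per-trial blinding step described above, where the computer randomly maps the letters A-D to the loudspeakers at the start of each trial, can be sketched like this. The speaker names here are placeholders, not the actual models from the study; the shuffling logic is the point.

```python
# Sketch of per-trial double-blind letter assignment: each trial gets
# a fresh random letter -> speaker mapping, so a listener's ratings in
# consecutive trials can't track speaker identity by letter.
# Speaker names are placeholders for illustration.
import random

SPEAKERS = ["speaker_1", "speaker_2", "speaker_3", "speaker_4"]
LETTERS = ["A", "B", "C", "D"]

def assign_letters(rng: random.Random) -> dict[str, str]:
    """Return a fresh random letter -> speaker mapping for one trial."""
    shuffled = SPEAKERS[:]          # copy so the master list is untouched
    rng.shuffle(shuffled)
    return dict(zip(LETTERS, shuffled))

# Four trials, each with its own independent random assignment.
rng = random.Random()
trials = [assign_letters(rng) for _ in range(4)]
for i, mapping in enumerate(trials, 1):
    print(f"Trial {i}: {mapping}")
```

In the actual test rig this mapping would live only on the control computer, hidden from both the listeners and anyone operating the session, which is what makes the trial double-blind rather than single-blind.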

Results: High School Students Prefer More Accurate, Neutral Loudspeakers. When averaged across all listeners and programs, there was a moderate-to-strong preference for the Infinity Primus 362 loudspeaker over the other three choices (slide 25). In the results shown in the accompanying slide, as an industry courtesy, the brands of the competitors' loudspeakers are simply identified as Loudspeakers B, C and D. As a group, the listeners were not able to formulate preferences among the three lower-rated loudspeakers B, C, and D, which were all imperfect in different ways. For an untrained listener, sorting out these different types of imperfections and assigning consistent ratings can be a difficult task without practice and training [5]. The individual listener preferences (slide 26) reveal that 13 of the 18 listeners (72%) preferred the Infinity loudspeaker based on their ratings averaged across all programs and trials. When comparing the students' rank ordering of the loudspeakers to that of the trained Harman listeners (slide 27), we see good agreement between the two groups. The one exception is Loudspeaker C, which the trained listeners strongly disliked. The general agreement between trained and untrained listeners' loudspeaker preferences illustrated in this test is consistent with previous studies where a different set of listeners and loudspeakers was used [5],[6]. As found in the previous study, the trained listeners on average rated each loudspeaker about 1.5 preference points lower than the untrained listeners, and the trained listeners were more discriminating and consistent in their ratings [5],[7]. The comprehensive set of anechoic measurements for each loudspeaker is compared to its preference rating (slide 28). There are clear visual correlations between the set of technical measurements and listeners' loudspeaker preference ratings.
The most preferred loudspeaker (Infinity Primus 362) had the flattest measured on-axis and listening window curves (the top two curves), and the smoothest first reflection, sound power, and first reflection/sound power directivity index curves (the third through sixth curves from the top). The other loudspeaker models tended to deviate from this ideal linear behavior, which resulted in lower preference ratings. Again, this relationship between loudspeaker preference and a linear frequency response is consistent with similar studies conducted by the author and Toole [9],[10]. Finally, as illustrated in these experiments, sound quality doesn't necessarily cost more money to obtain. The most accurate and preferred loudspeaker, the Infinity Primus 362, was also the least expensive loudspeaker in the group at $500 a pair. It doesn't cost any more money to make a loudspeaker sound good than it costs to make it sound bad. In fact, the least accurate loudspeaker (Loudspeaker C) cost almost 8x more ($3,800) than the most accurate and preferred model. Sound quality can be achieved by paying close attention to the variables that scientific research says matter, and then applying good engineering design to optimize those variables at every product price point.
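One very crude way to picture the "flatter measured response scores better" relationship described above is to summarize a speaker's on-axis response by its deviation in dB from flat. To be clear, this is a toy metric with invented response values; the published preference models (Olive's, for example) use considerably more elaborate measures of smoothness and directivity.

```python
# Toy illustration of "flatter on-axis response -> higher preference":
# score each speaker by the standard deviation of its on-axis magnitude
# response in dB. The response values below are invented; real
# preference models use far more sophisticated metrics.
from statistics import pstdev

# Hypothetical on-axis magnitudes (dB re: nominal) at a few frequencies.
flat_speaker = [0.0, 0.5, -0.3, 0.2, -0.4, 0.1]     # stays near 0 dB
colored_speaker = [0.0, 3.0, -4.0, 2.5, -3.5, 1.0]  # large swings

def flatness(response_db: list[float]) -> float:
    """Smaller value = flatter (more neutral) measured response."""
    return pstdev(response_db)

print(f"flat speaker:    {flatness(flat_speaker):.2f} dB deviation")
print(f"colored speaker: {flatness(colored_speaker):.2f} dB deviation")
```

Even this toy number ranks the two hypothetical speakers the way the study's trend would predict: the speaker with the smaller deviation from flat is the one the research says listeners tend to prefer.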


Dr. House, you are a voice of reason on this forum. Excellent points.

I read the Axiom M3 vs. B&W blog here on this site and found it quite disturbing that an Axiom employee (Alan), who is extremely familiar with the sound, participated in an Axiom-run speaker comparison, and guess what... he picked the Axiom speakers. Big surprise! Sadly, the blog doesn't acknowledge the bias that Alan is a trained Axiom listener and knows the sound of his own speakers, sighted or blind. Also, the blog refers to it as a double-blind test when, according to your post, it clearly is not.

I guess this is how you can declare that Axiom speakers cannot be beaten in a blind test, only tied. What a way to never lose a blind test!

Anyone who has a vehicle knows the sound of their vehicle when it's idling because we hear it so frequently. I would venture to bet that everyone would be able to pick out the sound of their vehicle when juxtaposed with another, regardless of the environment.

That comparison is quite pointless, similar to comparing a cello to a violin. The real question is would you be able to consistently pick out a particular speaker over other "similarly good" speakers playing a recording of your vehicle's idling sound?

Your car's engine is an instrument. It has its own tone. It's not trying to be neutral, transparent, colorless, etc, like good speakers are designed to be.

_________________________
"I wish I had documented more…" said nobody on their death bed, ever.

Anyone who has a vehicle knows the sound of their vehicle when it's idling because we hear it so frequently. I would venture to bet that everyone would be able to pick out the sound of their vehicle when juxtaposed with another, regardless of the environment.

That comparison is quite pointless, similar to comparing a cello to a violin.

Your car's engine is an instrument. It has its own tone. It's not trying to be neutral, transparent, colorless, etc, like good speakers are designed to be.

Peter, that was simply a hypothetical to illustrate my question of whether or not our brain can become trained to a specific sound, based on what I mentioned in the rest of my post.

Originally Posted By: pmbuko

The real question is would you be able to consistently pick out a particular speaker over other "similarly good" speakers playing a recording of your vehicle's idling sound?

Music
1. a. A sound of distinct pitch, quality, and duration; a note.
b. The interval of a major second in the diatonic scale; a whole step.
c. A recitational melody in a Gregorian chant.
2. a. The quality or character of sound.
b. The characteristic quality or timbre of a particular instrument or voice.