1. Are these tests being done blind? Any speaker evaluations must be done that way so that the look of the speaker doesn't affect the results. I have done tests at Harman and Paradigm and they are ALWAYS blind. A MartinLogan has a visual advantage, in my opinion, over say a Gallo or a Vandersteen, as the latter are some ugly-ass speakers in comparison if you are just looking at their exteriors.

You can't see the speakers. We give the test subjects dials, then they put on eye coverings. At that point the screen is removed and the test begins.

Quote:

2. How could you demo each track for 3 minutes? That is SO LONG. It's a demo. Normally, people tune out of a demo after the first verse, which is rarely past 1 min into the song.

We don't. That would make it entirely too long.

Quote:

3. Who gave you the speakers?

Either bought or donated.

Quote:

4. What order are the speakers played in?

Random. Including speaker designations being switched without the test subjects knowing.
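The randomization described here can be sketched in a few lines. This is a minimal illustration only, not the study's actual procedure; the speaker names and the letter-relabeling scheme are invented for the example:

```python
import random

def make_trial(speakers):
    """Shuffle playback order and hand out fresh anonymous labels,
    so subjects can't track a given speaker from trial to trial."""
    order = random.sample(speakers, len(speakers))          # random play order
    labels = [chr(ord('A') + i) for i in range(len(order))]  # A, B, C, ...
    random.shuffle(labels)                                   # designations switched each trial
    return list(zip(labels, order))

# hypothetical example: three speakers, new labels every trial
trial = make_trial(["Speaker 1", "Speaker 2", "Speaker 3"])
```

Each call produces a different order and a different label-to-speaker mapping, which is the point: subjects rate sound, not reputation.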

Quote:

5. With over 2000 demos in your test - who is paying for the hearing tests? Who paid for the speakers?

We did, through money gathered from the university, government grants, and donations. We have more than a few big donors.

Quote:

I am not trying to bust balls here. It just seems like there are some serious flaws to the science of this test. I know Harman does something similar at their test facilities here in California but a LOT of the variables are taken out of play and they are blind tests.

There are no flaws in the science; trust me, it's run better than the corporations I've worked for. The professor behind these tests made sure the science was done right. It's a controlled test. It started out testing cables; everything else was added when the donors showed interest. The cable tests should be done by the end of the year. They're just crunching the numbers now on the last 800 or so test subjects. I don't know if the final numbers are going to match the other numbers, but it should be interesting to read the report when finished. The cable testing hasn't been my gig; I'm with the speakers.

There are some "flaws" with the test, in that we can't flat-out say "this speaker is better than all others." Why? We didn't test all speakers. We didn't even test all speakers from the manufacturers we covered; we couldn't afford to. Basically, we tried to buy the 2nd-from-the-top-of-the-line speaker from each line.

We should, however, be able to say that certain types of cones and crossovers are "preferred." This data can be sold to speaker manufacturers.

You guys should have known after the 900+ posts Lotus has made, that he is pretty thorough about what he does. I am kind of surprised at the attacks before he has really even posted any in depth details/findings on the experiment.

Quote:

We did, through money gathered from the university, government grants, and donations. We have more than a few big donors.

No wonder our economy is in the dumps, the government is supporting audio research.........

Sorry Lotus, I believe this study less and less the more you go on about it. The fact that you now have 8,000 subjects (oh, and my time estimate is very close to what you said: I guessed 17 minutes, you said 15) makes me totally convinced the logistics of such a study make it essentially impossible......

Wow! The attacks on Lotus on this thread are almost shocking. Are you doubters working for speaker companies (like Bose) who feel threatened by the potential influence of the results of this test, or what?

Let's take a step back for a moment here and consider what Lotus is attempting. He and his colleagues are trying to evaluate a large number of speakers using what seems to be reasonably objective scientific measurements of subjective evaluations. Yes, there are always issues with bias and other contaminating variables, but hopefully, with a significant number of data points in a trial such as this, the results can provide a reasonably accurate portrayal of relative speaker performance.

By no means will the final rankings provide the ultimate reference in speaker performance. However, for a great number of us as consumers--those of us who have neither the time nor interest in personally comparing hundreds of products in a given price range--we can consider these results a valuable reference to help us select speakers for future purchases. I find a list of evaluative rankings like this especially valuable for identifying value--those models that compare favorably with others at a significantly higher price point. You so often hear individual reviewers hype speakers (or other products) with similar language, but I consider testing like this to have at least as much weight, if not more, than such limited and often biased tests done on an individual basis.
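The kind of aggregation described above, where many subjective ratings are condensed into relative rankings, can be shown with a toy example. The ratings, scale, and speaker names below are invented purely for illustration and have nothing to do with the actual study's data:

```python
from statistics import mean

# hypothetical dial ratings (0-10 scale) collected across several blind trials
ratings = {
    "Speaker A": [7, 8, 6, 7],
    "Speaker B": [5, 6, 5, 4],
    "Speaker C": [8, 7, 9, 8],
}

# rank speakers by mean preference score, best first
ranking = sorted(ratings, key=lambda s: mean(ratings[s]), reverse=True)
```

With enough data points, individual quirks of taste average out, which is why a large sample can yield a useful relative ordering even though every single rating is subjective.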

I, for one, applaud these efforts, at least on a conceptual level. I reserve final judgement of the results, however, until I see more details about the specific methodology, the organizations involved, the sources of funding, and the final list of tested products.

DC,

My problem with this study is multifactorial.

First off, attempting to obtain 'science' from something as subjective as preferences is not science; by definition it is quantification of taste, biased by the test subjects, the music selected, the room, etc. You even said as much in your post: "objective scientific measurements of subjective evaluations."

Second, I don't believe the study is being done, as the cost to perform such an assay of subjectivity is HUGE, and there is no reason to do such a 'study' as the data is inherently biased. EVEN if the study is being done, you could easily change any one variable slightly and get completely different results. No speaker maker would fund this, as they are all pretty tight; even the big-box dealers are feeling the economic crunch.

Third, and most important, the 'goal' of the study is irrelevant. There is no 'best' speaker. On a one-by-one basis there can be, but to say X speaker is best is total BS. My musical tastes, components, cables, and hearing are different from yours, Lotus's, and everyone else's in the world. What is best for me may not be any good for you at all, and vice versa.

Oh, and to be totally clear, I am not involved with sales of any speakers or audio gear for that matter. I am a reviewer and editor in this business as well as a hard core audiophile, nothing more. I have no financial gains or losses based on these supposed results, nor do I find any significance to them if they actually do come to fruition.