I'm not sure about the reason for this experiment. This experiment, along with a massive number like it (high/low gravity, yeast viability, generational mutations, etc.), has already been done, and not just done but done in a properly controlled lab setting. Kai and I have chatted about things like this quite a bit. It's a big pet peeve of mine for people to blindly do experiments without doing any basic research to see if anything like this has been done. Kai does a good job of finding original sources and then replicating the experiment to the best of his abilities. I'm not crapping on the effort here, boys; what I'm still wondering is how this data is supposed to hold water. The vast majority of studies are based not on subjective analysis but on objective analysis. Samples pulled are tested for numerous things, along with being run on an HPLC to determine exactly what the differences are, meaning levels of ethanol, isoamyl alcohol, ethyl hexanoate, etc. There are tables that correlate sensory analysis by the human palate with the amount of a given chemical, meaning that they can correlate the amount of X in a beer with the average human perception of it. My case in point: http://www.mbaa.com/TechQuarterly/Abstracts/1996/tq96ab09.htm Patino does some very good work.

To this specific experiment, in my professional opinion, here are the biggest problems I see. This is not meant to bring down the one doing the experiment but to help them, and everyone else, see where small changes can have very large effects:

- Vernacular - if one is going to do research, use the terms that everyone in the industry uses. Pitching rate is always discussed in millions of cells/mL, not billions/L.
- Yeast age - 52 days is very, very old for a slurry even under the best conditions. CO2 toxicity is a big deal.
- Yeast count - assuming the number of cells in a 'starter' is an absolute no-no. If one doesn't count the yeast, the experiment can't be done.
- Yeast starters - the starters need to be made exactly the same way: same stirring speed, etc. Regardless of anything else, they should at least have been made together and then split at the very end.
- Yeast viability - regardless of the actual number you pitch, you have no idea how viable the cells are (e.g. by methylene blue stain). Are you sending in old grannies or soldiers? Very important. Additionally, decanting starters is very hairy: how much is too much to decant, how much did you lose, etc.
- Experimental controls - three beers are needed: an underpitch, an overpitch and a 'correct' pitch. Two beers don't give enough variables.
- OG - it's just too high. What would be a yeast pitch rate experiment has changed, instantly, into a pitch rate experiment for high-gravity beers... unless you wanted to do a high-gravity experiment, but I didn't read that.
- Open fermentation and headspace - it wasn't clear to me whether this experiment was fermented 'open' in buckets or in buckets with a lid. If they were closed, the headspace was absolutely massive, which could skew the experiment. Books have been written on fermenter-headspace specifics.
- Yeast choice - the yeast strain makes a massive difference in the outcome of the experiment.
- Sensory evaluation - should have been done using a double-blind test and not a triangle test. The double blind takes all of the bias out.
- Format of sensory form - it's much easier to get good data by using a polar-type plot, also called a 'spider plot', for assessing people's subjective perceptions: http://www.appellationbeer.com/images/20091217-spider.jpg
- Data presentation - the data should be presented as a histogram with the average indicated. That way one can actually see where each individual lands. A simple SD and t-test would be very easy to do if you used the spider plot.
- Summary of the summary - "Using a starter makes better beer." This had absolutely nothing to do with the actual experiment.
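To make the data-presentation point concrete, here is a minimal sketch of the histogram-plus-statistics analysis described above, using only the Python standard library. The ratings are hypothetical example numbers, not data from this experiment; real values would come from the tasters' sensory forms.

```python
import statistics

# Hypothetical 1-10 ratings from a tasting panel (not real data).
underpitch = [6, 7, 5, 6, 8, 6, 7, 5, 6, 7]
starter = [7, 8, 7, 6, 8, 7, 9, 7, 8, 7]

def summarize(name, scores):
    """Print mean, SD, and a simple text histogram of the ratings."""
    mean = statistics.mean(scores)
    print(f"{name}: mean={mean:.2f} sd={statistics.stdev(scores):.2f}")
    for rating in range(1, 11):
        bar = "#" * scores.count(rating)
        print(f"  {rating:2d} | {bar}")

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

summarize("underpitch", underpitch)
summarize("starter", starter)
print(f"t = {welch_t(starter, underpitch):.2f}")
```

The histogram shows where each individual taster lands rather than burying everything in one average; the t statistic (compared against a t table for the relevant degrees of freedom) indicates whether the difference in means is likely to be real.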

Yes, this list is extensive, but the points I've listed are still not exhaustive. They all need to be addressed for all experiments, not just this one. That's why data is always peer reviewed. In short, there is nothing I can ascertain from the data presented. There are too many holes for even the smallest assumption to be made.

This is the world I work in. When data is presented, it's up to the researcher to be able to support it. If researchers don't show people what's needed to draw an actual conclusion, then we are all living 'blind' and will allow falsehoods and hearsay to continue.

While I agree with Kristen on the importance of peer review, we have to acknowledge that most of us don't have access to most of the good papers. Though our experiments oftentimes lack the control and precision of professional research, I support and encourage them. Maybe at some point one of us runs into something "odd" that warrants better experiments and research.

Kristen came up with quite a list of concerns, many of which I totally missed. Though it may reduce the validity of the conclusion, it's good that your data was detailed enough to allow for this critique.

That aside, I think this experiment was supposed to contribute some practical information about making a starter versus using average age yeast from an LHBS directly. And while this is far too complex of a topic for generalizations without knowing OG, amount of aeration, and a number of other variables, I think it actually reflects homebrewing conditions better than most laboratory experiments.

So, it may not be that useful to the brewing chemists at Coors. Likewise, most of what they do is not useful to me.

Dr. England's comments regarding the poor scientific methodology employed in most zymurgological "studies" are accurate. While detractors may argue that forums (fora, for you Latin purists) like this one and unscientific magazines like Zymurgy that primarily target the general public are not to be held to the same standards as professional, peer-reviewed articles and journals, both Kristen and Kai make good points about applying more rigorous scientific standards to any "studies" that do get published.

I applaud those who find the homebrewing hobby (or "obsession") so fascinating that their childlike curiosity compels them to conduct an experiment. But most of these studies are quite poorly designed with regard to their hypothesis, the materials and methods employed, and the use of objective and standardized metrics, and they often arrive at unsupported and biased claims and conclusions.

Well, that's slightly patronizing. I agree with "Dr. England's" last sentiment (and I'm not questioning his credentials, but the only professor I ever had who actually told us to call him "Dr." was completely clueless): allowing word-of-mouth myths to perpetuate only harms the hobby. But most of his objections are basically excuses to dismiss the experiment without even commenting on it. Not sure about the reason for this experiment? Don't be obtuse. If you think that testing different procedures or ingredients in a homebrew setting with only "subjective" sensory analysis is a waste of time, just come out and say it. But a useful contribution to making great beer is not necessarily the same as one that helps us understand the metabolic pathways of yeast in a laboratory.

Your quote about dogmatic belief can apply to scientists as well. I'm sure mathematicians smirk when they see biologists who are absolutely convinced they know what's going on with life.

Well said, in both posts, Narvin. I'm not a Ph.D. or a doctor, and thank God I don't wear flip-flops or a loud tropical shirt, but I thought the experiment was much more approachable for the average homebrewer than some of the others I have seen, and it was fairly well thought out as well.

+10

Most homebrewing experiments that people conduct are similar to this. Brew up a batch, split it up, and vary a single variable. Try as hard as possible to keep everything else the same and see what happens. After that, we rely on our perceptions to evaluate the results. As mashweasel points out, there are plenty of factors which can skew these kinds of results.

Sean went out of his way to keep the variables as stable as he could and then conducted a much larger tasting than most would. And now he’s presenting his data to us.

Am I going to take the results as absolute gospel and go forth with no questions? No, of course not. But this was a very interesting and informative experiment and gave me several things to think about. I think this was a great homebrewing experiment and I’m grateful for the information.

The points I list are not meant to be comprehensive, nor do they prove or disprove any data on their own. They are thought points.

Narvin, you have a very good point: I don't give in-depth reasons for my points. I will absolutely go over them point by point. Just let me know which points you think I should cover, and I will. No problem.

Every single publication offers free abstracts; the links I give show this. Most other publications can be found freely. If not, then a simple email to the person who did the study is all that's needed for more information. Nearly every person I know would email back freely with additional input that can't be gleaned from the plain text.

As for the 'doctor' business, I am also one who is quite laid back and has never been, nor will be, called doctor. My title is well earned, which is why I use it. Nothing more.

As for the 'good enough' theory of home brewing 'experimentation', that really needs to stop. Nearly all of the points I list are very simple things to cover; nothing that requires a ton of ability or research. Additionally, I haven't met a single home brewer who doesn't compare their wares to commercial products. We can't have it both ways: we can't say that our beers are better than the commercial thing but in the same breath say that we aren't held to any standard of experimentation.

Guys, again, the entire purpose of this post was not to show what I can see that you can't, but to offer talking points in the hope that all the home scientists will look for these things in the future studies they carry out. It's definitely not an attack on the people carrying these out. I applaud them.

Please continue the conversation, and enough with the + (plus) agreements. State your questions; that's how we all get better. My point rarely comes across the first time through, for which I apologize. Kai and I have had numerous conversations about this very thing. When one experiments in a vacuum, one learns nothing.

It's a big pet peeve of mine for people to blindly do experiments without doing any basic research to see if anything like this has been done.

Thanks for taking the time to respond in so much depth, Kristen. I did search for previous research in this area (I have access to a couple of the academic databases) and found some good papers, but they all dealt specifically with lagers, and none of them involved a large blind tasting. I'm sure those papers exist too and I simply don't have access to them, but I think it's important to keep in mind that most home brewers don't have access to any of the literature at all.

I'd like to respond to a few specific points. For the rest I'll defer to your superior knowledge and experience, and simply plead that I was working within the limitations of some very basic equipment.

- Experimental controls - three beers are needed: an underpitch, an overpitch and a 'correct' pitch. Two beers don't give enough variables.

I'm not sure why that would be the case. Is an experiment that has one control and one experimental group inherently invalid? I do have *some* training in conducting a controlled experiment and I've never come across a statement like that.

- OG - it's just too high. What would be a yeast pitch rate experiment has changed, instantly, into a pitch rate experiment for high-gravity beers... unless you wanted to do a high-gravity experiment, but I didn't read that.

The OG was targeted to be at the high end of what White Labs and Wyeast say is "pitchable" for their standard (home brewer) products. Both manufacturers say a starter is only needed for OGs above 1.060.

- Open fermentation and headspace - It wasn't clear to me if this experiment was done fermenting 'open' in buckets or in buckets with a lid. If they were closed the head space was absolutely massive which could skew the experiment. Books have been written on 'fermenter-head space' specifics.

The fermenters were sealed, and you're right, the headspace is massive. Perhaps not ideal, but at least it was consistent between the two fermentations.

- Summary of the summary - Using a starter makes better beer. This had absolutely nothing to do with the actual experiment.

That's snarky and unscientific and maybe even inappropriate, but it actually does speak directly to what the experiment was designed to assess: do home brewers prefer beers pitched at a standard rate, or at the pitching rate associated with using a smack pack directly?
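For what it's worth, a preference or triangle-style tasting can at least be checked against pure chance. Here's a minimal sketch using only the Python standard library; the panel counts are hypothetical examples, not the numbers from this tasting.

```python
from math import comb

def chance_p_value(n, k, p=1 / 3):
    """Probability of k or more correct identifications out of n
    panelists if everyone were simply guessing (triangle test: p = 1/3)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical panel: 30 tasters, 16 correctly picked the odd beer out.
print(f"p = {chance_p_value(30, 16):.4f}")
```

A small p value means the tasters could genuinely tell the beers apart; for a two-way forced preference instead of a triangle test, the same calculation applies with p = 1/2.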

In short, there is nothing I can ascertain from the data presented. There are too many holes for even the smallest assumption to be made.

I am disappointed to hear that you think that, since I respect and value your opinion. I guess all I can say is that I feel I learned some things as a result, and that those with a higher standard for proof will have to conduct their own experiments, or rely on the peer-reviewed literature.