mith wrote: It's one of those stat-professor semantics things: the null hypothesis is either correct or not; there is no probability associated with it.

no... it doesn't mean the same thing... if p > 0.05, the null hypothesis isn't 'correct'... a t-test can't tell you whether it is correct or not... you would need a p-value of 1 to state that (which is impossible)... all you can say is either that it is incorrect or that there is not enough evidence that it is incorrect...

it means that there is more than a 5% chance of seeing results like these by chance alone, and therefore the results are not statistically significant at the 95% confidence level... this is because there is not enough evidence that the null hypothesis is incorrect...

It is a subtle point and I don’t want to belabor it. Both ways do, in fact, say the same thing: rejecting the null at p=0.01 is the same thing as saying there is only a 1% chance that the null is correct (maybe I should have said, “true”). The significance level of a statistical test is the probability of committing a Type I error—falsely rejecting the null when it is true in favor of an alternative. That Pr[Mean1 = Mean2] = 0.01 means that it is unlikely the null hypothesis is true relative to an alternative. In making the decision to reject the null at the 0.01 level, you are saying you will accept the 1-in-100 chance of making a Type I error in favor of an alternative that is more likely to be true based on the observations. I’m not saying that the phrase “can happen by chance” is wrong—it is, in fact, the customary language of statistics. But our questioner seemed to be having trouble even with the idea of a null hypothesis; I figured formal language would be a wash here, so I tried to use something more akin to plain English. If that offends statistical purists, then I’m offensive, but not wrong.
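The Type I error idea being argued over can be seen directly in a simulation. The sketch below (a hypothetical illustration, not from the thread) runs many two-sample t-tests on data where the null is true by construction—both groups drawn from the same distribution—and checks that a test at the 0.05 level still rejects roughly 5% of the time. The sample size, trial count, and the approximate critical value of 2.0 (for df = 58) are all arbitrary choices for illustration.

```python
import random
import math

random.seed(0)

def t_statistic(a, b):
    """Pooled two-sample t statistic for equal-variance samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

trials, n, rejections = 2000, 30, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]  # same distribution: null is TRUE
    if abs(t_statistic(a, b)) > 2.0:            # ~ critical t at alpha = 0.05, df = 58
        rejections += 1

rate = rejections / trials
print(f"false rejection rate: {rate:.3f}")  # should hover near 0.05
```

The rejection rate landing near 0.05 is exactly the "1-in-100 chance" trade-off described above, just at the 0.05 level instead of 0.01: when the null really is true, the test still rejects it at roughly the chosen significance level.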