When people read, hear, or prepare research summaries,
they sometimes have misconceptions about what is or isn't "sound
practice" regarding the collection, analysis, and interpretation
of data. Here are some of these common (and dangerous) misconceptions
associated with the content of Chapter 7.

In hypothesis testing, it's logically impossible
to prove that the null hypothesis is true, but it is possible to prove
that the null hypothesis is false.

The null hypothesis is the opposite of the researcher's
hunch.

The level of significance always specifies the probability
of a Type I error.

A "p < .001" result is more meaningful
than a "p < .05" result.

All things considered, the best level of significance
is .05.

The p-value that becomes available after the data
are analyzed specifies the probability that the null hypothesis is true.
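
One way to see why this is a misconception: when the null hypothesis is true, p-values are (approximately) uniformly distributed across the whole 0-to-1 range, so a p-value cannot be the probability that the null is true. Here is a minimal Python sketch of that fact, using a two-sided z-test with known sigma; the test choice, sample size, and seed are illustrative assumptions, not anything prescribed by the chapter:

```python
import math
import random

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for H0: mu = mu0, sigma known (z-test)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # standard normal CDF via the error function
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf)

random.seed(1)
trials = 20_000
# H0 (mu = 0) is TRUE in every simulated study below,
# yet small p-values still turn up about 5 percent of the time.
rate = sum(
    z_test_p([random.gauss(0, 1) for _ in range(30)]) < .05
    for _ in range(trials)
) / trials
print(rate)  # close to .05
```

If the p-value really were "the probability that the null is true," it could not come out below .05 in about 5 percent of studies where the null is known to be true.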

If the level of significance is set equal to .05,
there will be a 95 percent chance of avoiding a Type II error.

The reported p-level is the level of significance.

If the null hypothesis is rejected when the level
of significance is set equal to .05, there will be a 95 percent chance
of rejecting the same null hypothesis if the study is replicated.
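
A quick way to see why this is a misconception: the chance that a replication rejects the null is the original study's statistical power, which depends on the effect size and sample size, not on 1 minus alpha. A hypothetical Python sketch (two-sided z-test with known sigma, a true mean of .5, and n = 30 are all illustrative choices):

```python
import math
import random

def rejects(sample, mu0=0.0, sigma=1.0, alpha=.05):
    """True if a two-sided z-test of H0: mu = mu0 rejects at alpha."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf) < alpha

random.seed(2)
trials = 10_000
# Here H0 (mu = 0) is FALSE: the true mean is .5. The fraction of
# "replications" that reject is the power of the test, not .95.
replication_rate = sum(
    rejects([random.gauss(0.5, 1) for _ in range(30)])
    for _ in range(trials)
) / trials
print(replication_rate)  # roughly .78 for these settings, not .95
```

With a smaller true effect or a smaller sample, this replication rate drops well below .78; nothing ties it to the .95 figure suggested by the misconception.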

A Type I error is the worst kind of inferential mistake
a researcher can make.