A student familiar with the definition of z-scores wonders why we use standard
deviations to calculate them. Illustrating the idea in two ways, Doctor Peterson
explains the concept of scaling that motivates this statistical measure.
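The scaling in question can be sketched numerically. This is a minimal illustration with made-up test scores, not Doctor Peterson's own example:

```python
from statistics import mean, pstdev

def z_score(x, data):
    """Standardize x against data: its distance from the mean,
    measured in units of the standard deviation."""
    return (x - mean(data)) / pstdev(data)

scores = [70, 75, 80, 85, 90]   # hypothetical test scores
top = z_score(90, scores)       # the top score, rescaled
print(top)                      # about 1.41: 1.41 SDs above the mean
```

Dividing by the standard deviation is what makes z-scores comparable across data sets with different spreads.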

If a random sample is taken from a normal distribution, the sample mean and
sample variance are independent. Moreover, this independence holds only for
normal distributions. I don't understand how they can be independent when
they are computed from the same sample values.

The critical values for chi-squared increase as the confidence level
increases. My question is: why does a distribution "pass" this test when
the calculated statistic is less than the critical value, and not vice versa?
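The mechanics being asked about look like this in a goodness-of-fit setting. This is a minimal sketch with made-up die-roll counts; the 11.070 critical value (5 degrees of freedom, 0.05 significance level) is a standard table entry:

```python
def chi_squared_stat(observed, expected):
    """Pearson's chi-squared statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [8, 9, 12, 11, 10, 10]   # hypothetical counts from 60 die rolls
expected = [10] * 6                 # fair-die expectation
critical = 11.070                   # chi-squared, df = 5, alpha = 0.05

stat = chi_squared_stat(observed, expected)
# A small statistic means the observed counts sit close to the expected
# ones, so the hypothesized distribution is NOT rejected -- which is why
# "passing" corresponds to stat < critical, not the other way around.
print(stat, stat < critical)
```

A large statistic would instead signal a big discrepancy between observed and expected counts, leading to rejection.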

A pollster wonders how to calculate the margin of error. Does the population size even
play a role, or is the sample size all you need? Lacking actual results from the questionnaire, Doctor Wilko still provides guidance with explanations and examples, before returning to address the question of what information suffices.
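The standard large-population approximation answers the pollster's second question directly: the margin of error for a proportion is about z * sqrt(p(1 - p)/n), so only the sample size n appears, not the population size (a finite population correction matters only when the sample is a sizable fraction of the population). A minimal sketch, with numbers chosen for illustration rather than taken from the exchange:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a sample proportion.
    z = 1.96 gives 95% confidence; only the sample size n
    appears -- the population size does not."""
    return z * sqrt(p * (1 - p) / n)

# Worst case p = 0.5 with 1000 respondents: about +/- 3.1 points.
moe_points = round(100 * margin_of_error(0.5, 1000), 1)
print(moe_points)
```

Using p = 0.5 maximizes p(1 - p), which is why pollsters quote it as the conservative, worst-case margin before any results are in.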