Contents

The Nature of Data & Variation

No matter how controlled the environment, the protocol, or the design, virtually any repeated measurement, observation, experiment, trial, study, or survey is bound to generate data that varies because of intrinsic effects (internal to the system) or extrinsic effects (due to the ambient environment).

For example, a UCLA study of Alzheimer’s disease* analyzed data from 31 mild cognitive impairment (MCI) patients and 34 probable Alzheimer’s disease patients. The investigators made every attempt to control for as many variables as possible, yet the demographic information they collected on the subjects still contained unavoidable variation. The same study found variation in MMSE cognitive scores even within the same subjects.

Approach

Once we accept that all natural phenomena are inherently variable and that no process is completely deterministic, we need to look for models and techniques that allow us to study the acquired data in the presence of variation, uncertainty, and chance.

Statistics is the data science that investigates natural processes and allows us to quantify variation and make population inferences from limited observations.
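As a hypothetical sketch of this idea (the subject's "true" score, noise level, and sample size below are illustrative assumptions, not values from the study), we can simulate repeated MMSE-like measurements on one subject, quantify the observed variation, and form a rough interval estimate for the underlying population value:

```python
import random
import statistics

random.seed(42)

# Assumed values for illustration only: a subject whose "true"
# cognitive score is 26, measured 30 times with noise of SD 2.
true_score = 26.0
scores = [random.gauss(true_score, 2.0) for _ in range(30)]

mean = statistics.mean(scores)      # point estimate of the true score
sd = statistics.stdev(scores)       # quantifies the observed variation
se = sd / len(scores) ** 0.5        # standard error of the mean

# Approximate 95% confidence interval for the population mean
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(f"mean={mean:.2f}, sd={sd:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

Even though every individual measurement differs, the interval summarizes what the limited sample tells us about the underlying process.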

Model Validation

Checking/affirming underlying assumptions.

Each model or technique for data exploration, analysis, and understanding relies on a set of assumptions, which always need to be validated before the model or analysis tool is employed to study real data (observations or measurements perceived or detected by the investigator).

Such a priori model conjectures or presumptions could take the form of mathematical constraints about the properties of the underlying process, restrictions on the study design or demands on the data acquisition protocol.

Common assumptions include (statistical) independence of the measurements, specific limitations on the shape of the distribution from which we sample or observe the data, restrictions on the parameters of the process under study, and so on.