Writing Intensive Course Designs: Mathematics

Modern Statistics with
Computer Analysis
Developed by Bob Tardiff

Modern Statistics with Computer
Analysis is an introduction to using widely accepted
statistical techniques to extract meaningful information
from data. There are three major concepts that
students must understand in order to extract meaningful
information from data. First, students develop
the ability to recognize how and why the data were
gathered. Second, they learn how to summarize or
describe the data accurately using standard techniques.
Finally, if the method used to acquire the data meets
certain standards, then students learn how a statistical
inference can be made and how to measure the reliability
of the method used to make that inference.
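As a small illustration of the kind of standard descriptive summaries students practice, here is a sketch in Python; the exam-score data are made up for illustration:

```python
import statistics

# Hypothetical sample: exam scores (made-up data for illustration)
scores = [72, 85, 90, 68, 77, 95, 83, 79, 88, 91]

mean = statistics.mean(scores)      # center of the data
median = statistics.median(scores)  # a robust measure of center
stdev = statistics.stdev(scores)    # spread (sample standard deviation)

print(f"mean={mean:.1f}, median={median:.1f}, stdev={stdev:.1f}")
```

Even this small summary conveys the data's center and spread far more usefully than the raw list of scores.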

Students completing Modern
Statistics with Computer Analysis are expected:
To understand how methods for
acquiring data affect the data’s usefulness;

To use standard descriptive
techniques to describe data; and

To use widely used inferential
techniques to infer characteristics of a larger
group.

Measuring the reliability of
inferential statistical techniques relies on probability
theory, and consequently inferential arguments are
delicate. The classical approach to measuring
reliability, which is the primary focus of this course,
is to measure the reliability of the method used for
making the inference as opposed to measuring the
reliability of the inference itself. The Bayesian
approach to inference addresses the reliability of the
inference itself but is computationally much more
difficult to implement, and more importantly, the
Bayesian approach must assign probabilities to
quantities that many view as non-random.
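The classical notion of "reliability of the method" can be sketched with a coverage simulation: a 95% confidence-interval procedure is judged by how often, over many repeated samples, the intervals it produces capture the true parameter. A minimal sketch in Python (the normal population, sample size, and seed are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(1)
TRUE_MEAN = 50.0      # population mean, known only because we simulate
N, TRIALS = 30, 2000  # sample size and number of repeated samples

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 10.0) for _ in range(N)]
    xbar = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    # 1.96 is the approximate 97.5th percentile of the standard normal
    lo, hi = xbar - 1.96 * se, xbar + 1.96 * se
    covered += lo <= TRUE_MEAN <= hi

print(f"coverage = {covered / TRIALS:.3f}")  # close to 0.95
```

Note that the 95% figure describes the procedure, not any one interval: a particular interval either contains the true mean or it does not, which is exactly the classical distinction the paragraph above draws.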

Cheap computing is giving rise to
some profound changes in statistics. Classical
statistics, which evolved in the early to mid-20th
century, focused on normal distribution theory because
the requisite computations were straightforward and
readily done on a mechanical calculator. However, many
of these classical techniques, though widely accepted,
lack robustness; i.e., they are sensitive to departures
from underlying mathematical/probabilistic
assumptions. Cheap computing has given statisticians
a tool for developing and continuing to develop robust
techniques. These techniques, which are
computationally intense, are increasingly becoming part
of mainstream statistics.
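The robustness point can be illustrated with a tiny example: the sample mean is badly distorted by a single outlier, while the median barely moves. A sketch in Python, with made-up measurement data:

```python
import statistics

# Hypothetical measurements clustered near 10 (made-up data)
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]

# One gross recording error contaminates the sample
contaminated = data + [1000.0]

print(statistics.mean(data), statistics.mean(contaminated))
# the mean jumps from 10.0 to 120.0 because of a single bad value

print(statistics.median(data), statistics.median(contaminated))
# the median stays at 10.0: it is robust to this departure
```

This sensitivity of classical, mean-based procedures to departures from their assumptions is what motivates the computationally intense robust alternatives mentioned above.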

Cheap computing is also giving rise
to massive data sets: data sets with millions of data
points, each of which has several dimensions.
Techniques for describing, summarizing, and extracting
meaningful information from these massive data sets,
a field now known as data mining, are at the
forefront of today’s statistical research.