precision vs. accuracy

Precision and accuracy are two terms used to describe groups of measurements, usually in scientific or mathematical contexts. Precision describes how close the measurements are to each other, whereas accuracy describes how close the measurements are to the accepted value.

Lord Brawl asked, "What about accurate, but imprecise?" This condition can't logically exist: for a set of data to be accurate, every measurement must be close to the accepted value, and measurements that are all close to the same value are necessarily close to each other, so the set would be precise as well.

Another example of how the terms precise and accurate are used, pointed out by rootbeer277, is target shooting. If, for instance, you throw three darts, the possible outcomes are as follows:

1. The darts are all clustered very closely around the bullseye. Your throws are both accurate and precise.

2. The darts are all clustered closely together, but not near the center of the target. Your throws are precise, but inaccurate.

3. The darts are scattered around the dartboard in no particular pattern. Your throws are neither precise nor accurate.

Keep in mind that these terms are both relative; there is no definition for how close a set of measurements must be to be precise, and no definition for how close the measurements must be to the accepted value to be accurate.

Precision and accuracy are two independently computable numbers used to summarize a data set.

Precision is the tightness of the data set. Visually, a high-precision data set forms a small cluster, while a low-precision one does not cluster at all. You may also know precision by its inverse, error: little error means high precision, and vice versa. (See also: significant digits, significant figures.) A simple precision calculation is the magnitude of the difference between the maximum and minimum values in the data set; another is the standard deviation. Example: precision of {1, 2, 1.5, 1, 1.75, 2, 1} = |2 - 1| = 1
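As a sketch, both precision measures above can be computed in a few lines of Python using the standard library (the data set is the one from the example; the variable names are my own):

```python
import statistics

data = [1, 2, 1.5, 1, 1.75, 2, 1]

# Range-based precision: magnitude of max minus min, the example's |2 - 1| = 1
precision_range = max(data) - min(data)

# Alternative: the population standard deviation as a precision measure
precision_stdev = statistics.pstdev(data)

print(precision_range)  # the range is 1
print(round(precision_stdev, 3))
```

The range is trivial to compute but sensitive to a single outlier; the standard deviation uses every point, which is why it is the more common summary of spread.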

Accuracy is the closeness of the data set to a true or theoretical value. Visually, a high-accuracy data set is centered on the target value, while a low-accuracy one is offset from it. One way to calculate it is to subtract the mean of the data set from the target value. Example: accuracy of {1, 2, 1.5, 1, 1.75, 2, 1} with respect to 4/3 (1.333...) = 4/3 - 10.25/7 ≅ -0.13
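This calculation can be checked in Python as well (the seven values sum to 10.25, so their mean is 10.25/7 ≈ 1.46):

```python
from statistics import mean

data = [1, 2, 1.5, 1, 1.75, 2, 1]
target = 4 / 3  # the accepted value from the example

# Accuracy: target value minus the mean of the data set
accuracy = target - mean(data)

print(round(accuracy, 2))  # prints -0.13
```

The sign carries information here: a negative result means the measurements tend to overshoot the target, a positive one means they undershoot it.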

Values closer to zero (within the context of the data) indicate higher precision or accuracy.