
Ratio requirements need to be re-examined

by Philip Stein

Early in the 20th century, establishment of laboratories devoted
to measurement science, such as the U.S. National Bureau of Standards,
now the National Institute of Standards and Technology (NIST),
focused formal attention on measurement as a subject worthy of
study in itself. Engineering applications of measurements began
during this period with the understanding and codification of
the need to stabilize commercial measurement applications over
long periods, leading to our modern practices of calibration and
traceability.

Anyone who has learned anything at all about calibration and
traceability in the last 100 years has been taught the 10:1 principle.
This ratio, 10 to 1, is usually referred to as the test accuracy
ratio (TAR) or test uncertainty ratio (TUR), and conventionally
describes the ratio of the accuracy of the unit being calibrated
to the accuracy of the standard used for the calibration.

When calibrating an instrument, we're told to always use standards,
masters or reference materials whose accuracy is 10 times as great
as that of the unit under test (TAR of 10:1 or greater). The reason,
presumably, is to ensure that any errors in the calibration are
negligible compared to the resolution of the calibrated instrument.
There are several flaws in this logic, and several problems with
the 10:1 principle.
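
The arithmetic behind the rule is simple enough to put in a few
lines. Here is a minimal sketch of the check, written in Python;
the figures are the ones from the caliper example discussed later
in this column.

    # Test accuracy ratio (TAR): accuracy of the unit under test
    # divided by the accuracy of the standard used to calibrate it.
    def accuracy_ratio(accuracy_uut, accuracy_std):
        return accuracy_uut / accuracy_std

    # A caliper calibrated to 0.002 in. against gage blocks good
    # to 0.0002 in. meets the traditional rule exactly.
    tar = accuracy_ratio(0.002, 0.0002)
    print(f"TAR = {tar:.0f}:1 ->",
          "meets 10:1" if tar >= 10 else "fails 10:1")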

Color variability

Foremost, at least in my mind, is that the accuracy of the standard
used is only one of a large host of issues that can affect the
quality of the calibration, and it's often one of the smallest
of those problems. For example, one of my consulting customers
measures the color of toner powders used in the printing industry.
Its customers were complaining about lot-to-lot variability of
product, and the metrology system was suspected as the culprit.

The company had seven integrating-sphere colorimeters whose
precision extended to 0.1 unit (in the international color system
known as CIELAB). Its reference standard system was a more precise
colorimeter that read to 0.01 unit. After traceably calibrating
the reference system, it would make up standard samples of its
own toners for distribution to working instruments around the
world as references.

What I found was variation in the preparation of samples presented
to the colorimeter, both during the calibration process and during
routine measurement of product. A series of measurement capability
studies (the analog to process capability studies) showed that
errors due to sample preparation were more than 100 times as large
as those due to calibration inaccuracies in the instrumentation.
The 10:1 principle was being faithfully followed, but the measurements
were no good. The capability studies also measured the measurement
error contributed by several other sources of variation.
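
For readers who want to see why one dominant source swamps the
rest, here is a minimal sketch; the standard deviations are
hypothetical, chosen only to preserve the roughly 100:1
relationship found in the study. Independent error sources
combine as the root sum of squares of their standard deviations.

    import math

    # Hypothetical standard deviations, in CIELAB units, keeping
    # the ~100:1 relationship between the two sources found above.
    s_prep = 1.0    # variation due to sample preparation
    s_cal = 0.01    # variation due to calibration inaccuracy

    # Independent sources add in quadrature (root sum of squares).
    s_total = math.sqrt(s_prep**2 + s_cal**2)
    print(f"total sigma = {s_total:.5f}")  # ~1.00005

    # Eliminating the calibration error entirely would change the
    # total by about 0.005%; the leverage is in sample preparation.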

Measurement system

The principle demonstrated here is that calibration, traceability,
sample preparation and dozens of other factors that affect accuracy
are all part of a measurement system. Accuracy ratios, by focusing
on one small aspect of the system, lead everyone to ignore (or
at least pay too little attention to) other parts of that system,
often with disastrous results.

The 10:1 rule can also wind up imposing absurd and costly requirements.
Another client makes plastic molded parts, and the dimensional
tolerances called for are plus or minus 0.020 inches. Given the
intended application for the parts, this is a reasonable specification.
The calipers used to measure the parts are, therefore, calibrated
to 0.002 of an inch, well within their capability (although the
measurement system, which includes distortion of the parts while
being measured, may not meet that capability). The calipers are,
in turn, calibrated against a set of gage blocks, which can easily
meet the required accuracy of 0.0002 inches. When we send those
blocks out for calibration, however, they will have to be measured
to within 0.00002 inch, and the service that does our lab work will
require 0.000002 (two millionths) of an inch from NIST.
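
The escalation is easy to follow as arithmetic. This short
sketch, again in Python, simply divides by 10 at each link of the
chain and reproduces the figures above.

    # Under a strict 10:1 rule, each link of the traceability
    # chain must be 10 times as accurate as the one above it.
    accuracy = 0.020  # part tolerance, +/- inches
    for link in ("part", "calipers", "gage blocks",
                 "calibration lab", "NIST"):
        print(f"{link:16s} {accuracy:.6f} in.")
        accuracy /= 10.0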

Well, we're not outside the realm of possible accuracies for
any of these processes, but surely something is wrong. A reasonable
accuracy for calibrating the calipers is one or two thousandths,
period. The rest of this process, including having a requirement
for traceability at all, is vast overkill.

I do realize that there are many applications where accuracy
and traceability of dimensional measurements are crucial, both
for the immediate customer and to support a more global interchangeability.
What's happened here, though, is that blind application of the
rule has resulted in unnecessary costs and trouble--and I believe
that a large majority of calibrations done in the United States
today fall into this overkill category.

Softer ratio requirements

We can reduce, but not eliminate, the problem by softening the
requirements for a ratio. ISO 10012-1, for example, supporting
the ISO 9000 series, recommends at least 3:1 and prefers 10:1
(but does not require any specific value). ANSI/NCSL Z540-1,
another important measurement standard, calls for 4:1.
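
The relief a softer ratio buys can be seen by rerunning the
caliper chain. This sketch compares what a 10:1 and a 4:1 policy
demand at the national-laboratory end, assuming the same three
links below the calipers as in the example above.

    # Accuracy demanded three links below the calipers (gage
    # blocks, calibration lab, then NIST) under two policies.
    caliper_accuracy = 0.002  # inches, from the earlier example
    for ratio in (10, 4):
        end_of_chain = caliper_accuracy / ratio**3
        print(f"{ratio}:1 policy -> {end_of_chain:.7f} in. at NIST")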

A lower ratio reduces the chance of requiring absurd calibrations,
but doesn't address the measurement system issue--that the ratio
of the standard to that of the instrument being calibrated is
often only a small part of the problem. The real problem is that
the 10:1 rule is used instead of thinking, and instead of applying
good measurement science and engineering.

In 1999, the metrology community is struggling to reduce the
use of the TAR and replace it with the uncertainty budget. Uncertainty
budgeting does respect the measurement system issue. In fact, it's
based solidly in the understanding that there are many sources
of variation in a measurement, and a specification of the quality
of a measurement must include them all. Making an uncertainty
budget can be a highly technical and time-consuming effort, but
it doesn't have to be. Subsequent columns will address this important
topic in detail.
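
As a preview, here is a minimal sketch of a budget in its
simplest form, assuming independent sources; the component values
are hypothetical. Standard uncertainties combine as a root sum of
squares, and a coverage factor of k = 2 gives the customary
expanded uncertainty.

    import math

    # A minimal, hypothetical uncertainty budget: each entry is a
    # source of variation and its standard uncertainty (one sigma).
    budget = {
        "reference standard": 0.0002,
        "repeatability":      0.0005,
        "temperature":        0.0003,
        "operator/fixturing": 0.0010,
    }

    # Independent components combine as a root sum of squares;
    # k = 2 gives the conventional expanded uncertainty.
    u = math.sqrt(sum(c**2 for c in budget.values()))
    print(f"combined u = {u:.5f}, expanded U (k=2) = {2*u:.5f}")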

PHILIP STEIN is a metrology and
quality consultant in private practice in Pennington, NJ. He holds
a master's degree in measurement science from George Washington
University in Washington, D.C., and is a Fellow of ASQ. For more information,
go to www.measurement.com.

Column To Focus on Measurement

"MEASURE FOR
MEASURE" is a new department that will appear every other
month in Quality Progress. It is being written
to provoke discussion of some controversial topics in measurement.

The highly technical details of precision measurements are often
boring if you're not a specialist, yet everyday measurements are
crucially important to us as quality professionals, as consumers
and as citizens. Consider the area of standards. The ISO 9000
and 14000 series of standards and their supporting documents include
ISO 10012--specialty standards that deal with measurements in
the 9000/14000 context. The QS-9000 series of automotive standards
has a separate measurement systems analysis volume, and the new
TL-9000 telecommunication standard refers to hardware, software
and service metrics that are being developed, although they have
not yet been published.

These and other examples indicate a strong, new worldwide interest
in measurements and measurement science that demands our attention
and understanding. Almost everyone in this country--and around
the world--is fanatically interested in measurements and statistics--as
long as we don't get too technical about them. We follow every
possible measure about our favorite sports teams and the performance
of their players. We watch our weight and the gas mileage of our
cars and trucks, and we measure, track and work hard to improve
every aspect of our business life--especially our own income and
the bottom line of our employer or enterprise.

Somewhere behind the scenes of all these measures is metrology--measurement
science. Measurement has been with us, in the form of weights
and measures for commerce, since before the Rosetta Stone. The
science of measurement began with Galileo and other Renaissance
philosophers who realized the need to study and understand the
measurements they were making.

Wherever there is a measurement, there is measurement science
to be done. Even financial measures, such as profit and productivity,
can benefit from some application of metrology, and subsequent
columns will address this topic. In addition, recent developments
and changes in national and international quality system standards
have made it important, even essential, for every quality professional
to understand and be able to work with the basics of measurement.

For those who'd like to know more about metrology, an excellent
resource is ASQ's Measurement Quality Division. It publishes a
newsletter, conducts a conference every year and has a wonderful
Web site at www.metrology.org.
--Philip Stein