
Measurement Errors

All measurements contain errors, either random or systematic. These errors must be represented properly when writing down the value of a quantity. We must also be aware of how errors propagate through the system.

Types of errors

Figure 1 indicates two major insertion points of noise. However, that does not mean these are the only points where measurement errors occur. We distinguish errors that are caused by the system and errors due to the environment.

Systematic errors have a source within the system. For example, a calibration error of one of the measurement devices may give a bias error, which is a systematic error. Another example is the drift of a sensor, resulting in an unexpected offset in the measurement. Systematic errors can be minimised by improving the measurement system. As a rule of thumb, calibration against a measurement method that is ten times more accurate is needed.

Random errors are also called noise. They cannot be minimized by measuring more accurately, because they do not reproduce from one measurement to the next. Random errors are caused by inherently unpredictable fluctuations in the readings of a measurement tool or in the experimenter's interpretation of the reading. Random errors are in many cases normally distributed, so the size of the error can be reduced by taking more measurements. Although most random errors have an external source, some specific random errors originate from within the system. An example is quantization noise in an analog-to-digital conversion, which gives uniformly distributed noise. Another example is the noise in the electronics of the measurement tool itself.

The two insertion points of noise in figure 1 both represent environmental noise. The first source is in the measurement domain; an example is a motion artefact. An example of noise entering after the transduction is electronic $50\ Hz$ mains interference due to poor shielding.

Besides the classification of errors into random and systematic errors, we can also speak about absolute and relative errors.

The absolute error is the difference between the measured value and the real value. For example, if we measure $1002 \Omega$ and we know the measured resistor is actually $989 \Omega$, then the absolute error is $13 \Omega$.

The quantization error mentioned before is also observed as a rounding error when reading a value from a display: the last digits are not represented. For example, $14.3476$ may be displayed as $14.3$, introducing an absolute error of $0.0476$.
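As a minimal sketch of this rounding error (the function name is ours, not from the text), the truncation of a reading to one decimal can be reproduced as follows:

```python
# Sketch: the absolute error introduced when a display truncates a
# reading to a fixed number of decimals.
def truncate(value, digits):
    """Truncate `value` to `digits` digits after the decimal point."""
    factor = 10 ** digits
    return int(value * factor) / factor

reading = 14.3476
displayed = truncate(reading, 1)       # the display shows 14.3
abs_error = abs(reading - displayed)   # absolute error of 0.0476
print(displayed, round(abs_error, 4))
```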

Errors can be reduced or compensated for in some situations. This is partially explained in the chapter about Sensor/Actuator systems, in the section Sensor/actuator network concepts. The most common methods are:

Feedback

Stimulus-response measurement

Differential measurement

Compensation (feed-forward)

Multivariate analysis

Averaging

Accuracy and precision

Consider a multimeter that has a reading of $1.000341 V$. This is a high precision reading, but we do not know whether it is accurate (correct). The words accuracy and precision are sometimes mixed up, but have completely different meanings. The most important mathematical tools we have are the average reading of a set of measurements and the standard deviation of the readings. The question is how they relate to accuracy and precision.

Accuracy is defined by how close our average is to the “real” value. So we first define the average of $N$ readings $x_{i}$ as
\begin{equation}
\mu = \frac{1}{N} \sum_{i=1}^{N} x_{i}.
\end{equation}

It can then be understood why we use a root-mean-square, the standard deviation
\begin{equation}
\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( x_{i} - \mu \right)^{2}},
\end{equation}
for determining the precision:

Noise, tolerances and variances result in both positive and negative deviations, which may cancel out in a plain average

Squaring relates to electrical power (remember that $P = U \cdot I = U^{2}/R$, so in fact we compare powers)
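The first point can be demonstrated numerically. Using the eight resistor readings from the example below (Tab. 1), a plain average of the deviations cancels to zero, while the root-mean-square captures the spread:

```python
# Sketch: why a plain average of deviations is useless while the
# root-mean-square is not (data taken from Tab. 1 below).
from math import sqrt

readings = [1002, 960, 1047, 1010, 913, 986, 1037, 955]  # ohms
mu = sum(readings) / len(readings)                       # average

deviations = [x - mu for x in readings]
mean_dev = sum(deviations) / len(deviations)             # cancels to ~0
rms_dev = sqrt(sum(d**2 for d in deviations) / len(deviations))

print(mu, mean_dev, rms_dev)   # mean deviation ~0, RMS deviation ~41.9 ohm
```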

As shown in figure 2, accuracy is the proximity of the measurement results to the true value (“trueness”). It relates to the systematic error, which can only be reduced if we determine the offset with a method of better accuracy and compensate for it.
Precision is the repeatability, or reproducibility, of the measurement. It is determined by the random errors in the measurement (which can be reduced by taking more measurements) and by the resolution of the measurement system.

Fig. 2: Accuracy and precision shown in the frequency of occurrence of measurements

In experimental research we distinguish:

Validity is whether an instrument actually measures what you think it measures (a failure of validity is what we would call a cross-sensitivity from an engineering perspective). We distinguish:

Criterion validity when you can compare it to a real objective value

Concurrent validity when data is recorded with respect to established criteria or a known dataset

Predictive validity when the data can be used to predict new values at a later stage

Content validity when the data covers the full range of the construct, so no influences are overlooked

Reliability is whether an instrument can be interpreted consistently across different situations: whether the result reproduces in a test-retest setting

So validity maps to accuracy (trueness) and reliability to precision.

Example

As an example, we take eight measurements of the resistance of a single resistor (Tab. 1). What can we say about the resistance $R$?

Measurement    Value found
$1$            $1002 \Omega$
$2$            $960 \Omega$
$3$            $1047 \Omega$
$4$            $1010 \Omega$
$5$            $913 \Omega$
$6$            $986 \Omega$
$7$            $1037 \Omega$
$8$            $955 \Omega$

Tab. 1: An example of eight measurements of a single resistor

First of all, the average, or mean, value is equal to $(1002 + \ldots + 955) \div 8 \approx 989\ \Omega$. So the best estimate for $R$ is about $989 \Omega$. But how accurate is this number? Both precision and accuracy determine the error (uncertainty) in the measurement.

This means that, with $95\%$ confidence, the true resistance lies within two standard errors of the average: $989\ \Omega \pm 2 \times 16.9\ \Omega$.
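A sketch of the computation behind these numbers, using only the standard library. Note that conventions differ on dividing by $N$ or $N-1$; the divisors below reproduce the chapter's values:

```python
# Reproducing the example's numbers (mean ~989 ohm, standard error ~16.9 ohm).
from statistics import mean, stdev

readings = [1002, 960, 1047, 1010, 913, 986, 1037, 955]  # ohms, Tab. 1

mu = mean(readings)                       # best estimate, ~988.75 ohm
sigma = stdev(readings)                   # sample standard deviation (N-1 divisor), ~44.8 ohm
sem = sigma / (len(readings) - 1) ** 0.5  # ~16.9 ohm, matching the text
print(f"R = {mu:.0f} ± {2 * sem:.1f} Ω (95 %)")
```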

Tolerance

With the previous example, we took eight measurements of the same resistor. The systematic error (limiting the accuracy) is the result of the measurement tool, which was the same for all eight measurements. There was also a random error (limiting the precision) due to noise in the measurement. A similar experiment could be done with eight different resistors from the same batch: these should have a similar resistance, but there will be variation in the resistor values due to the fabrication process.

This random variation is indicated by the tolerance: the permissible limit of variation in an object. Sometimes the $2\sigma$ or $3\sigma$ range is used to define a tolerance. The production process is optimized until all components are within specification (within the tolerance limits), or sometimes devices outside the specification range are discarded.
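Discarding out-of-specification parts can be sketched as a simple filter. The nominal value and tolerance below are our own illustrative choices (a hypothetical $1\ k\Omega$, $5\%$ part):

```python
# Sketch: accept only resistors within the tolerance band of a
# nominal 1 kOhm, 5 % part (values assumed for illustration).
nominal = 1000.0   # ohm
tolerance = 0.05   # 5 %

def within_spec(value):
    return abs(value - nominal) <= tolerance * nominal

batch = [1002, 960, 1047, 1010, 913, 986, 1037, 955]
accepted = [r for r in batch if within_spec(r)]
rejected = [r for r in batch if not within_spec(r)]
print(accepted, rejected)   # only 913 ohm falls outside the 950-1050 ohm band
```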

The effect of taking more measurements

Most errors have a normal distribution, meaning they follow the Gaussian probability density curve
\begin{equation}
f(x) = \frac{1}{\sigma \sqrt{2 \pi}} \, e^{-\frac{(x - \mu)^{2}}{2 \sigma^{2}}}
\end{equation}

with the standard deviation $\sigma$ and the average $\mu$. The Gauss curve was already visible in figure 2. By taking a sufficient number of measurements, say $N$, we can determine the shape of the Gauss curve: the location of its peak corresponds to the average $\mu$ and the width of the curve to the standard deviation $\sigma$. For a reasonable number of measurements ($N>15$), $95\%$ of the measurements lies between $\mu-2\sigma$ and $\mu+2\sigma$. The uncertainty in the average decreases with the square root of $N$, and so the precision increases with the square root of $N$. We can now see that with random errors, the precision can be increased by taking more measurements. For systematic errors, this averaging does not help: we still have the same offset in the value of $\mu$.

When the systematic error is zero, the random error is equal to $\pm 2\sigma$ and, with endlessly repeated measurements, the average $\mu$ converges to the real value $x_{0}$. When measuring $N$ times, the real value $x_{0}$ is, with a probability of $95\%$, given by
\begin{equation}
x_{0} = \mu \pm \frac{2 \sigma}{\sqrt{N}}.
\label{eq:NinetyFiveConfidence}
\end{equation}
The systematic error can be approximated by $\left | x_{0}-\mu \right |$ for sufficiently high $N$. However, because we do not know the real value $x_{0}$, we have to use an independent reference (calibration) measurement that has a ten times higher accuracy.
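The square-root-of-$N$ behaviour can be checked with a small simulation (the distribution parameters are made-up values, chosen to resemble the resistor example): the scatter of averages of $N=64$ readings is about four times smaller than that of averages of $N=4$ readings, as $\sqrt{64/4}=4$ predicts.

```python
# Sketch: the spread of the *average* shrinks with sqrt(N).
import random
from statistics import mean, pstdev

random.seed(1)

def spread_of_average(n_readings, trials=2000):
    """Scatter (std. dev.) of averages of n_readings simulated measurements."""
    averages = [mean(random.gauss(989, 45) for _ in range(n_readings))
                for _ in range(trials)]
    return pstdev(averages)

s4, s64 = spread_of_average(4), spread_of_average(64)
print(s4, s64, s4 / s64)   # ratio close to sqrt(64/4) = 4
```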

Significant digits

The accuracy of a measured value is represented by the number of significant digits (‘meaningful digits’). So from the number of digits we can recognise the accuracy of the number. The number of significant digits is the total number of digits, ignoring the position of the decimal point, where leading zeros do not count.

For example, $6.34$ has three significant digits, meaning that the real value lies between $6.335$ and $6.345$. The value $0.2$ has one significant digit. Note that $0.02$ also has only one significant digit, because leading zeros are not significant!

The value of $3000 m$ lies between $2999.5$ and $3000.5 m$.

The value of $3 km$ lies between $2.5$ and $3.5 km$.
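In Python, the `g` format type rounds to a given number of significant digits, which matches the convention described above (the example values are our own):

```python
# Sketch: formatting to three significant digits with the 'g' format type.
print(f"{6.34476:.3g}")    # rounds to three significant digits: 6.34
print(f"{0.0203476:.3g}")  # leading zeros are not significant: 0.0203
```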

When a value is measured with a certain instrument, the accuracy can be denoted explicitly, e.g. a force can be measured as $23.4 \pm 0.3\ N$.

Once the standard deviation $\sigma$ of a measurement is known, we can use it for the representation of the number.

Error propagation

In the measurement chain (or in our model), the reading may be the result of a mathematical operation on two input variables. For example, the length of a bar may be the sum of a first part and a second part. Or, as another example, the output of a sensor is the product of the quantity to be measured and the sensitivity of the sensor. The question is what happens to the error of the output if both values (length 1 and length 2, or sensitivity and quantity) carry noise and uncertainty. There are some basic rules to determine the error propagation under mathematical operations for a 'worst-case' estimation:

If two quantities are added or subtracted, the individual absolute uncertainties are added in the result

If two quantities are multiplied or divided, the percentages of uncertainty are added to get the percentage of uncertainty in the result

When finding the square root of a quantity, we divide the percentage of uncertainty by two. For squaring, the percentage uncertainty is multiplied by two.
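These worst-case rules can be sketched with values carried as (value, absolute error) pairs. The function names and the bar-length example are ours, not from the text:

```python
# Sketch of the worst-case propagation rules for (value, absolute error) pairs.
def add(a, da, b, db):
    """Sum: absolute uncertainties add."""
    return a + b, da + db

def mul(a, da, b, db):
    """Product: relative (percentage) uncertainties add."""
    v = a * b
    return v, abs(v) * (da / abs(a) + db / abs(b))

def sqrt_of(a, da):
    """Square root: relative uncertainty is halved."""
    v = a ** 0.5
    return v, v * (da / a) / 2

# Example: a bar made of two segments of 100 ± 1 mm and 50 ± 0.5 mm
total, dtotal = add(100, 1, 50, 0.5)
print(total, dtotal)   # 150 mm with a worst-case uncertainty of ±1.5 mm
```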

Note that when dealing with error propagation, one has to treat random errors and systematic errors strictly separately. In the case of a systematic error, one has to take the sign into account with a difference or quotient of quantities. And, also with systematic errors, one has to subtract the errors (absolute respectively relative) with a sum or product of quantities.

In the case of a calculation, for example on a calculator, we normally take a simple approach to determining the number of digits:

With a product or quotient, the number of significant digits of the result is equal to the smallest number of significant digits of the original numbers.
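This rule can be sketched in one line (the factors are our own illustrative values):

```python
# Sketch: the product keeps the smaller significant-digit count (here 2).
a = 2.5       # two significant digits
b = 3.14159   # six significant digits
print(f"{a * b:.2g}")   # result quoted to two significant digits: 7.9
```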