I. Overview

In engineering practice, instrument performance is usually described by accuracy (also called precision), variation, and sensitivity, and instrument calibration likewise checks and adjusts these same three items.

Variation refers to the maximum difference between the instrument's indicated values when the measured quantity (which can be understood as the input signal) reaches the same value repeatedly from different directions. In other words, with external conditions held constant, it is the degree of inconsistency between the characteristic obtained as the parameter changes from small to large (the forward characteristic) and the characteristic obtained as it changes from large to small (the reverse characteristic); the difference between the two is the variation, as shown in Figure 1-1-1. Variation is expressed as the ratio of the maximum absolute difference to the instrument's scale range, as a percentage:

    variation = (maximum absolute difference / scale range) × 100%

The main causes of variation are gaps in the instrument's transmission mechanism, friction of moving parts, and hysteresis of elastic components. With the continuous improvement of instrument manufacturing technology, and especially the introduction of microelectronics, many instruments are now fully electronic with no moving parts, and analog instruments have been replaced by digital ones, so for intelligent instruments the variation indicator is no longer as important or prominent.

Sensitivity refers to how sensitively the instrument responds to changes in the measured parameter, that is, its ability to respond to changes in the measured quantity. It is the ratio of the output change to the input change under steady state:

    S = Δy / Δx

Sensitivity is sometimes called the "magnification ratio"; it is also the slope of the tangent line at each point of the instrument's static characteristic. Increasing the magnification increases the sensitivity of the instrument.
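The two definitions above can be sketched in a few lines of code. This is a minimal illustration with made-up readings, not a calibration procedure; the function names and test values are hypothetical.

```python
# Illustrative sketch (hypothetical readings): variation and sensitivity
# as defined in the text above.

def variation_percent(forward, reverse, scale_range):
    """Variation = max |forward - reverse| at the same test points, as % of range."""
    max_diff = max(abs(f - r) for f, r in zip(forward, reverse))
    return max_diff / scale_range * 100.0

def sensitivity(delta_output, delta_input):
    """Static sensitivity S = delta_y / delta_x."""
    return delta_output / delta_input

# Indicated values at the same test points, approached from below and above.
forward = [10.0, 20.1, 30.2, 40.1]   # input increasing (forward characteristic)
reverse = [10.3, 20.4, 30.3, 40.2]   # input decreasing (reverse characteristic)
print(variation_percent(forward, reverse, scale_range=100.0))  # 0.3
print(sensitivity(delta_output=5.0, delta_input=2.0))          # 2.5
```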
Simply increasing the sensitivity, however, does not improve the basic performance of the instrument; its accuracy is not improved. On the contrary, excessive sensitivity can sometimes cause oscillation and make the output unstable, so instrument sensitivity should be kept at an appropriate level. For instrument users such as the chemical industry, accuracy is an important indicator, but in actual use the stability and reliability of the instrument are often emphasized more: chemical-industry detection and process-control instruments are rarely used for custody metering, and the great majority are used for detection, where stability and reliability matter more than accuracy.

II. Accuracy

Instrument accuracy, also called precision, is inseparable from error: it is because error exists that the concept of accuracy exists. In short, the accuracy of an instrument is the closeness of its measured value to the true value, which is usually expressed by the relative percentage error (also called the relative reduced error):

    relative reduced error = (maximum absolute error / scale range) × 100%

The so-called standard value is a value measured by a standard instrument whose accuracy is 3 to 5 times higher than that of the instrument under test. Instrument accuracy is therefore related not only to the absolute error but also to the instrument's measuring range. A large absolute error means a large relative percentage error and low accuracy. If two instruments have the same absolute error but different measuring ranges, the one with the larger range has the smaller relative percentage error and the higher accuracy. Accuracy is a very important quality indicator of an instrument; it is usually standardized and expressed as an accuracy class.
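The point about equal absolute errors on different ranges can be checked numerically. This is a sketch with made-up numbers, assuming both instruments mis-read by the same 0.5 units:

```python
# Sketch (made-up numbers): the same absolute error gives different
# relative reduced errors on instruments with different ranges.

def reduced_error_percent(abs_error, scale_range):
    """Relative reduced (percentage) error = absolute error / range * 100%."""
    return abs(abs_error) / scale_range * 100.0

print(reduced_error_percent(0.5, 100.0))  # 0.5  -> lower accuracy
print(reduced_error_percent(0.5, 500.0))  # 0.1  -> higher accuracy
```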
The accuracy class is the maximum relative percentage error with the sign and the % symbol removed. Under the unified national regulations the classes are 0.005, 0.02, 0.05, 0.1, 0.2, 0.25, 1.0, 1.5, 2.5, 4, and so on. The accuracy class of an instrument is generally marked on its scale or nameplate, such as 0.1, 0.2, or 0.5; the smaller the number, the higher the accuracy of the instrument.

To improve the accuracy of an instrument, error analysis is required. Errors can generally be divided into gross (negligence) errors, slowly varying errors, systematic errors, and random errors. Gross errors are caused by human mistakes during measurement; they can be overcome and have nothing to do with the instrument itself. Slowly varying errors are caused by the aging of the instrument's internal components; they can be overcome and eliminated by replacing components and parts or by periodic recalibration. Systematic errors are errors that, when the same parameter is measured repeatedly, keep the same value and sign or change according to a definite rule. Random errors are caused by accidental factors not yet understood; their size and sign are not fixed and are difficult to estimate, but their influence on the results can be estimated theoretically by statistical methods. Error sources mainly refer to systematic and random errors, and when error is used to express accuracy it refers to the sum of the random error and the systematic error.

III. Reproducibility

Measurement reproducibility is the degree of agreement between results when the same measured quantity is measured under different measurement conditions, such as different methods, different observers, and different measurement environments. Measurement reproducibility is becoming an important performance indicator of instruments.
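The relation between the maximum reduced error and the accuracy class can be sketched as picking the smallest standard class that still covers the measured error. The helper below is hypothetical; the class list follows the values quoted above.

```python
# Sketch (hypothetical helper): assign an instrument the smallest standard
# accuracy class that is >= its maximum relative reduced error.

STANDARD_CLASSES = [0.005, 0.02, 0.05, 0.1, 0.2, 0.25, 1.0, 1.5, 2.5, 4.0]

def accuracy_class(max_abs_error, scale_range):
    """Return the smallest standard class covering the reduced error (in %)."""
    reduced = abs(max_abs_error) / scale_range * 100.0
    for cls in STANDARD_CLASSES:
        if reduced <= cls:
            return cls
    return None  # worse than every standard class

print(accuracy_class(0.15, 100.0))  # 0.2  (reduced error is 0.15 %)
```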
The accuracy of a measurement depends not only on the accuracy of the instrument but also on the influence of various factors on the measured parameter; it is a comprehensive error. Measurement reproducibility is usually estimated with uncertainty. Uncertainty is the degree of doubt about a measured value that exists because of measurement error; it can be expressed by the variance or by the standard deviation (the positive square root of the variance). The components of uncertainty are divided into two classes:

Class A: components evaluated by statistical methods;
Class B: components evaluated by non-statistical methods.

IV. Stability

Stability is the ability of an instrument to keep its performance unchanged over time within specified operating conditions. Instrument stability is a performance indicator that chemical-industry instrumentation workers care about greatly, because the environments in which chemical companies use instruments are relatively harsh and the temperature and pressure of the measured medium vary widely. When an instrument is used in such an environment, the ability of certain of its components to remain unchanged over time declines, and its stability decreases. Because there is no quantitative value for characterizing instrument stability, chemical companies usually use zero drift to measure it: if the zero of an instrument does not drift during a year of operation, its stability is good; conversely, if the zero changes within three months, its stability is poor. The stability of an instrument is directly related to its usable service life and sometimes directly affects chemical production; poor instrument stability often has a greater impact on chemical production than a decline in instrument accuracy does.
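A Class A component, being statistical, can be illustrated with repeated readings. The sketch below uses made-up readings and evaluates the standard uncertainty of the mean as s/√n, one common statistical evaluation:

```python
import statistics

# Sketch (made-up repeated readings): a Class A uncertainty component is
# evaluated statistically, here as the experimental standard deviation
# of the mean of n repeated measurements.

def type_a_uncertainty(readings):
    """Standard uncertainty of the mean: sample std dev / sqrt(n)."""
    n = len(readings)
    s = statistics.stdev(readings)  # sample standard deviation
    return s / n ** 0.5

readings = [20.01, 20.03, 19.99, 20.02, 20.00]
print(round(statistics.mean(readings), 3))      # 20.01
print(round(type_a_uncertainty(readings), 4))   # 0.0071
```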
When an instrument's stability is poor, its maintenance workload is also large, which is the last thing instrument workers want.

V. Reliability

Instrument reliability is another important performance indicator pursued by instrumentation workers in chemical companies. Reliability and maintenance workload are inversely related: high reliability means little maintenance, while poor reliability means a large amount of maintenance. Most detection and process-control instruments in chemical companies are installed on process pipelines and on various towers, kettles, tanks, and vessels. The continuity of chemical production, and the fact that most of these environments are toxic, flammable, and explosive, add many difficulties to instrument maintenance: one must consider both the safety of chemical production and the personal safety of maintenance personnel. Chemical companies therefore require detection and process-control instruments with as little maintenance as possible, which means instruments with reliability as high as possible. With the upgrading of instruments, and especially the introduction of microelectronic technology into instrument manufacturing, instrument reliability has greatly improved. Instrument manufacturers pay more and more attention to this performance indicator and generally use the mean time between failures (MTBF) to describe the reliability of an instrument.
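As a closing illustration, MTBF can be estimated from maintenance records as total operating time divided by the number of failures observed in that time. The figures below are hypothetical:

```python
# Sketch (hypothetical maintenance records): estimating MTBF as total
# operating time divided by the number of failures in that period.

def mtbf(total_operating_hours, failure_count):
    """Mean time between failures, in hours."""
    if failure_count == 0:
        raise ValueError("no failures observed; cannot estimate MTBF this way")
    return total_operating_hours / failure_count

# A transmitter ran 8760 h (one year) and failed twice.
print(mtbf(8760, 2))  # 4380.0
```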