1.0 Foundational Principles of Electronic Measurement
1.1 Understanding Instrument Performance Characteristics
A comprehensive understanding of an instrument’s performance characteristics is of strategic importance for any technician or engineer. These characteristics define an instrument’s reliability, accuracy, and suitability for specific measurement tasks. They are the metrics by which we judge the quality and trustworthiness of measurement data, forming the basis for ensuring the integrity of any test, calibration, or troubleshooting procedure.
Performance characteristics are broadly classified into two primary categories:
- Static Characteristics: These are the characteristics exhibited by instruments when measuring quantities that are constant or vary only very slowly with respect to time.
- Dynamic Characteristics: These are the characteristics exhibited by instruments when measuring quantities that vary rapidly with respect to time.
Static Characteristics
Accuracy Accuracy signifies how close the instrument’s reading is to the actual value of the quantity being measured. It is assessed through the algebraic difference between the indicated value of an instrument (A_i) and the true value (A_t): the smaller the magnitude of this difference, the higher the accuracy.
- Formula: Percentage accuracy, a = (1 − |A_t − A_i| / A_t) × 100
Static Error Static error (e_s) is the difference between the true value (A_t) of a time-constant quantity and the value indicated by the instrument (A_i). It is a measure of the instrument’s inaccuracy.
- Formula for Static Error: e_s = A_t − A_i
- Formula for Percentage of Static Error: %e_s = (e_s / A_t) × 100
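The two error formulas above can be sketched in a few lines of Python; the voltmeter figures below are hypothetical values chosen purely for illustration.

```python
def static_error(true_value: float, indicated_value: float) -> float:
    """Static error e_s = A_t - A_i."""
    return true_value - indicated_value

def percent_static_error(true_value: float, indicated_value: float) -> float:
    """Percentage static error = (e_s / A_t) * 100."""
    return static_error(true_value, indicated_value) / true_value * 100.0

# Hypothetical example: a voltmeter indicates 9.8 V when the true
# voltage is 10.0 V.
e_s = static_error(10.0, 9.8)          # ≈ 0.2 V
pct = percent_static_error(10.0, 9.8)  # ≈ 2 %
```

A positive static error means the instrument reads low; a negative one means it reads high.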
Precision Precision refers to the ability of an instrument to indicate the same value repeatedly when used to measure the same quantity under identical circumstances multiple times. An instrument with high precision demonstrates excellent repeatability.
Sensitivity Sensitivity (S) is defined as the ratio of the change in an instrument’s output (ΔA_out) to the corresponding change in its input (ΔA_in). It indicates the smallest change in the input that the instrument can detect and respond to.
- Formula: S = ΔA_out / ΔA_in
- For an instrument with a linear calibration curve, the sensitivity is constant and is equal to the slope of the curve.
- For an instrument with a non-linear calibration curve, the sensitivity varies with respect to the input.
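For the linear case, the sensitivity can be estimated as the slope between two points on the calibration curve. The sketch below assumes a hypothetical temperature sensor producing a millivolt output:

```python
def sensitivity(in1: float, out1: float, in2: float, out2: float) -> float:
    """Sensitivity S = ΔA_out / ΔA_in, the slope of the calibration
    curve between two operating points."""
    return (out2 - out1) / (in2 - in1)

# Hypothetical linear sensor: 0 mV output at 0 °C, 50 mV at 100 °C.
s = sensitivity(0.0, 0.0, 100.0, 50.0)  # 0.5 mV/°C
```

For a non-linear instrument the same two-point slope would only approximate the local sensitivity, which is why the text notes that it varies with the input.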
Resolution Resolution is the smallest increment of input change that will cause a corresponding change in the instrument’s output. If an input change is smaller than the instrument’s resolution, it will not be detected or displayed.
Dynamic Characteristics
Speed of Response This characteristic describes how rapidly an instrument responds to a change in the quantity being measured, i.e., how quickly it can provide an updated reading.
Lag Lag, or measuring lag, is the amount of delay in the response of an instrument following a change in the measured quantity.
Dynamic Error Dynamic error (e_d) is the difference between the true value (A_t) of a quantity that varies with time and the value indicated by the instrument (A_i), both taken at the same instant: e_d = A_t − A_i.
Fidelity Fidelity is the degree to which an instrument can indicate changes in the measured quantity without introducing any dynamic error. It represents the instrument’s ability to faithfully reproduce the input signal’s variations at its output.
The inherent limitations described by these performance characteristics are a primary source of measurement errors, which must be systematically analyzed and mitigated.
1.2 Analysis and Mitigation of Measurement Errors
Measurement errors are unavoidable phenomena in all forms of instrumentation. The primary goal of a skilled technician is not to achieve impossible perfection but to identify, classify, and systematically minimize these errors to ensure the most reliable and accurate results possible. Errors can be categorized into three main types.
Gross Errors These errors are primarily caused by human mistakes, stemming from a lack of observer experience or the improper selection of an instrument for the task. Gross errors can be minimized by following two fundamental steps:
- Choose the best suitable instrument based on the range of values to be measured.
- Note down the readings carefully to avoid simple observational mistakes.
Random Errors These errors arise from unknown and unpredictable sources during the measurement process. While they cannot be completely eliminated, their effect can be minimized to achieve a more accurate final value by following a two-step method:
- Take multiple readings of the same measurement, preferably by different observers.
- Perform a statistical analysis on the collected readings to determine the most probable true value.
Statistical Analysis of Random Errors
To manage random errors, we employ statistical analysis. This allows us to determine the most probable true value from a set of imperfect measurements and to quantify the measurement’s consistency. The following are key statistical parameters used in this process.
- Mean (m) The mean, or average value, is calculated from a set of N readings (x_1, x_2, …, x_N) as m = (x_1 + x_2 + … + x_N) / N. When N is large, the mean value is approximately equal to the true value.
- Median (M) The median is the middle value of a set of readings that have been arranged in ascending order.
- For an odd number of readings (N): M = x_((N+1)/2)
- For an even number of readings (N): M = (x_(N/2) + x_((N/2)+1)) / 2
- Deviation from Mean (d_i) The deviation is the difference between an individual reading (x_i) and the calculated mean (m) of the dataset: d_i = x_i − m.
- Standard Deviation (σ) The standard deviation is the root mean square of the deviations, indicating the spread of the data around the mean.
- For N ≥ 20 readings: σ = √(Σ d_i² / N)
- For N < 20 readings: σ = √(Σ d_i² / (N − 1))
- Variance (V) The variance is the square of the standard deviation (V = σ²) and is also known as the mean square of the deviation.
- Alternatively, it can be calculated directly from the deviations:
- For N ≥ 20 readings: V = Σ d_i² / N
- For N < 20 readings: V = Σ d_i² / (N − 1)
Note: A small standard deviation indicates that the readings are tightly clustered around the mean, i.e., the measurement is more precise and the computed mean is a more trustworthy estimate of the true value.
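The statistical parameters above can be combined into one short Python sketch. The five voltmeter readings are hypothetical, and the divisor switches between N and N − 1 according to the rule given for the standard deviation and variance:

```python
import math

def analyze(readings):
    """Return (mean, median, standard deviation, variance) of a set of
    readings, using divisor N for N >= 20 and N - 1 for N < 20."""
    n = len(readings)
    mean = sum(readings) / n

    ordered = sorted(readings)
    if n % 2 == 1:
        # Odd N: the middle value of the ordered set.
        median = ordered[n // 2]
    else:
        # Even N: the average of the two middle values.
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2

    deviations = [x - mean for x in readings]     # d_i = x_i - m
    divisor = n if n >= 20 else n - 1
    variance = sum(d * d for d in deviations) / divisor
    sigma = math.sqrt(variance)
    return mean, median, sigma, variance

# Five hypothetical readings of the same voltage source:
m, med, sigma, v = analyze([10.1, 9.9, 10.0, 10.2, 9.8])
# mean ≈ 10.0 V, median = 10.0 V, variance ≈ 0.025 (divisor N - 1 = 4)
```

Because only five readings are taken here, the N − 1 divisor applies; with 20 or more readings the same function would divide by N instead.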
Systematic Errors A systematic error is a constant, uniform deviation that occurs during an instrument’s operation, often due to the inherent characteristics of the materials used in its construction. These errors can be further classified into three subtypes:
- Instrumental Errors: Caused by shortcomings of the instrument itself or by loading effects.
- Environmental Errors: Caused by changes in the ambient environment, such as variations in temperature or pressure.
- Observational Errors: Caused by the observer while taking readings. Parallax errors, which occur when the observer’s eye is not directly perpendicular to the measurement scale, are a common type of observational error.
Having established these foundational principles of instrument performance and error analysis, we can now examine the practical operation of specific electronic measuring instruments.