Accuracy vs Precision

The difference between precision and accuracy needs to be understood carefully. Precision refers to the repeatability of successive readings, but it does not guarantee accuracy: successive readings may be close to one another yet far from the true value. An accurate instrument, on the other hand, must also be precise, since its successive readings must all lie close to the true value (which is unique).

Accuracy indicates how close the measured value is to the true value, whereas precision indicates the quality of the measurement without giving any assurance that the measurement is correct. These concepts are directly related to systematic and random measurement errors, respectively.

Figure: (A) Not precise and not accurate; (B) precise but not accurate; (C) accurate but not precise; (D) precise and accurate.

It can clearly be seen from the figure that precision is associated not with a single measurement but with a process or set of measurements. Normally, in any set of measurements performed by the same instrument on the same component, the individual measurements are distributed around the mean value, and precision is the agreement of these values with one another.
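As an illustration, the short Python sketch below uses a hypothetical set of five readings on a component whose true size is assumed to be 25.000 mm; the scatter of the readings about their mean reflects precision, while the offset of the mean from the true value reflects accuracy (case B in the figure above).

```python
import statistics

# Hypothetical repeated readings (in mm) of a component whose true size is 25.000 mm.
true_value = 25.000
readings = [25.112, 25.115, 25.110, 25.113, 25.114]   # tightly grouped, but offset

mean_value = statistics.mean(readings)
scatter = statistics.stdev(readings)    # agreement of the readings with one another (precision)
offset = mean_value - true_value        # distance of the mean from the true value (accuracy)

print(f"mean = {mean_value:.3f} mm, scatter = {scatter:.3f} mm, offset = {offset:+.3f} mm")
```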

The difference between the true value and the mean value of a set of readings on the same component is termed the error. Error can also be defined as the difference between the indicated value and the true value of the quantity measured.

\[E=V_m-V_t\]

where \(E\) is the error, \(V_m\) the measured value, and \(V_t\) the true value. The value of \(E\) is also known as the absolute error. For example, when the weight being measured is of the order of 1 kg, an error of ±2 g can be neglected, but the same error of ±2 g becomes very significant while measuring a weight of 10 g. Thus, for the same magnitude of error, its significance increases as the quantity being measured becomes smaller.
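The ±2 g example can be made concrete with a few lines of Python (the weights are those quoted above):

```python
# The ±2 g example from the text, expressed as a fraction of the quantity measured.
for true_value_g in (1000, 10):        # 1 kg and 10 g, both in grams
    significance = 2 / true_value_g * 100
    print(f"±2 g on {true_value_g} g is ±{significance:.1f}% of the quantity")
# ±2 g is only ±0.2% of 1 kg, but ±20.0% of 10 g.
```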

Relative error is expressed as the ratio of the error to the true value of the quantity being measured; hence, % error is also known as relative error. The accuracy of an instrument can also be expressed as % error. If an instrument measures \(V_m\) instead of \(V_t\), then:

\[\%\,\textrm{error}=\dfrac{\textrm{error}}{\textrm{true value}}\times 100\]

\[\%\,\textrm{error}=\dfrac{V_m-V_t}{V_t}\times 100\]
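These two relations translate directly into code; in the following sketch the measured and true values are assumed purely for illustration.

```python
# A direct transcription of the formulas above; the numerical values are illustrative only.
def absolute_error(v_m, v_t):
    return v_m - v_t                    # E = Vm - Vt

def percent_error(v_m, v_t):
    return (v_m - v_t) / v_t * 100      # relative error expressed as a percentage

v_t, v_m = 50.000, 50.025               # assumed true and measured values, in mm
print(f"E = {absolute_error(v_m, v_t):+.3f} mm")
print(f"% error = {percent_error(v_m, v_t):+.3f} %")
```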

The accuracy of an instrument is always assessed in terms of error; an instrument is more accurate when the magnitude of its error is lower. Because the true value of the quantity being measured is seldom known, owing to the uncertainty associated with the measuring process, the magnitude of error has to be estimated by other means. To estimate the uncertainty of the measuring process, one needs to consider the systematic and constant errors along with the other factors that contribute to the uncertainty due to the scattering of results about the mean.
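As a rough sketch of what such an estimate involves, the Python fragment below (with hypothetical readings and an assumed constant bias) keeps the systematic and random contributions separate; the particular way of combining them is not prescribed by the text.

```python
import statistics

# Hypothetical repeated readings and an assumed constant (systematic) error,
# for example one determined by calibration against a reference standard.
readings = [9.98, 10.02, 10.01, 9.99, 10.00]
known_bias = 0.03

random_scatter = statistics.stdev(readings)    # contribution from scattering about the mean
corrected_mean = statistics.mean(readings) - known_bias

print(f"corrected mean = {corrected_mean:.3f}, scatter (1 s.d.) = {random_scatter:.3f}")
```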

Consequently, when precision is the important criterion, mating components can be manufactured in a single plant, where measurements are obtained with the same standards and the same internal measuring precision, to achieve interchangeability of manufacture. If mating components are manufactured at different plants and assembled elsewhere, the agreement of each plant's measurements with the true standard value, that is, their accuracy, becomes significant.

To maintain the quality of manufactured components, accuracy of measurement is an important characteristic; it is therefore essential to know the different factors that affect it. The sense factor, whether the sense of feel or of sight, affects the accuracy of measurement. In instruments having a scale and a pointer, the accuracy of measurement also depends on the threshold effect, that is, whether the pointer is just moving or just not moving. Since the accuracy of measurement is always associated with some error, the measuring equipment and the methods used for measurement must be designed so that the error of measurement is minimized.

Two terms are associated with accuracy, especially when one strives for higher accuracy in measuring equipment: sensitivity and consistency. The ratio of the change of instrument indication to the change of the quantity being measured is termed sensitivity; in other words, it is the ability of the measuring equipment to detect small variations in the quantity being measured. When efforts are made to incorporate higher accuracy in measuring equipment, its sensitivity increases. The permitted degree of sensitivity determines the accuracy of the instrument, and an instrument cannot be more accurate than its permitted degree of sensitivity. It is pertinent to mention here that using an instrument more sensitive than the measurement requires is a disadvantage.
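For instance, the ratio defining sensitivity can be sketched in a couple of lines of Python (the figures used are purely illustrative):

```python
# Sensitivity as defined above: change of indication per unit change of the
# quantity being measured (illustrative figures only).
def sensitivity(delta_indication, delta_quantity):
    return delta_indication / delta_quantity

# e.g. a comparator whose pointer moves 5 scale divisions when the measured
# dimension changes by 0.01 mm:
print(sensitivity(5, 0.01), "divisions per mm")   # 500.0 divisions per mm
```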

When successive readings of the measured quantity obtained from the measuring instrument are the same every time, the equipment is said to be consistent. A highly accurate instrument possesses both sensitivity and consistency, but a highly sensitive instrument need not be consistent, and it is the degree of consistency that determines the accuracy of the instrument. An instrument that is both consistent and sensitive need not be accurate, because its scale may have been calibrated against a wrong standard.

In such instruments, the errors of measurement are constant and can be taken care of by calibration. It is also important to note that as magnification increases, the range of measurement decreases while the sensitivity increases; temperature variations affect the instrument, and more skill is required to handle it. Range is defined as the difference between the lower and higher values that an instrument is able to measure. If an instrument has a scale reading of 0.01 to 100 mm, then the range of the instrument is 0.01 to 100 mm, that is, the difference between the maximum and the minimum measurable values.
