Random error is always present in a measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement instrument, in the operating and environmental conditions, or in the experimenter's interpretation of the instrumental reading.
Random errors can be analyzed statistically, since empirically they are generally found to follow simple distribution laws. In particular, it is often hypothesized that the causes of these errors act in a completely random manner, producing deviations from the average value that are both negative and positive. This allows us to expect the effects to cancel out on average; in other words, the average value of the accidental errors is zero.
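This zero-mean behavior can be illustrated with a small simulation. The snippet below is a hypothetical sketch, not taken from the text: it assumes the random errors follow a Gaussian distribution (a common but here unstated assumption), draws many of them, and checks that their average is close to zero.

```python
import random
import statistics

# Illustrative assumption: random errors drawn from a zero-mean Gaussian.
# The spread (0.5) is an arbitrary choice for demonstration purposes.
random.seed(0)  # fixed seed so the run is reproducible
errors = [random.gauss(0.0, 0.5) for _ in range(10_000)]

# Individual errors can be large, but their average is near zero.
print(statistics.mean(errors))
```

With many samples, the printed average is close to zero even though individual errors are not, which is exactly the cancellation the hypothesis predicts.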
The smaller the random errors, the more precise the measurement is said to be.
Random (or accidental) errors have less impact than systematic errors because, by repeating the measurement several times and calculating the average of the values found, their contribution is generally reduced for a probabilistic reason.
This observation has a fundamental consequence: if we can correct all the gross errors and the systematic ones, so that only accidental errors remain, we need only take repeated measurements and then average the results: the more measurements we consider, the less the final result (the average of the individual results) will be affected by accidental errors.