Metrologists will often say that these studies are not as reliable as uncertainty evaluation, but what does that mean in practical terms?
Manufacturing engineers often say that because a gauge study uses actual measurements of a calibrated reference it is ‘the truth’ while uncertainty evaluation is just theory. Such a view ignores the fact that gauge studies can fail to detect significant sources of uncertainty.
This article uses simple examples to show this. Measurement Systems Analysis (MSA) uses gauge studies while the framework for uncertainty evaluation is defined by the Guide to the Expression of Uncertainty in Measurement (GUM).
Error or uncertainty?
Before getting into specific examples, let’s get a few basic definitions clear. First, it’s important to realise that all measurements have uncertainty. Try to estimate the height of this text. You might say “it’s about 4mm,” implying that there is some uncertainty in your estimate. All measurements are just estimates with uncertainty.
It is not possible to know the exact true value of anything. Measurement error is the difference between a measurement result and the true value. Since we can’t know the true value, we also can’t know the error; these are unknowable quantities.
If you said the text is “about 4mm, give or take 1mm” this would assign limits to your uncertainty. You still wouldn’t be absolutely sure the true value is within these limits, but you could have some level of confidence, say 95%. If you increase the limits, perhaps to ±2mm, then your confidence would increase. Uncertainty gives bounds within which we have a stated level of confidence that the true value lies.
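As an illustration of the link between the width of the limits and the level of confidence, here is a short Python sketch. The numbers are assumed for illustration: the estimation error is taken to be normally distributed with a standard uncertainty of 1mm, which is not something the article specifies.

```python
from math import erf, sqrt

def confidence_within(limit_mm: float, sigma_mm: float) -> float:
    """Probability that a normally distributed error lies within +/-limit."""
    return erf(limit_mm / (sigma_mm * sqrt(2)))

# With a 1mm standard uncertainty, widening the limits raises confidence:
print(round(confidence_within(1.0, 1.0), 3))  # about 0.683 (68%)
print(round(confidence_within(2.0, 1.0), 3))  # about 0.954 (95%)
```

Doubling the limits here raises the confidence level from roughly 68% to roughly 95%, which is the trade-off the paragraph describes.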
Random effects
Errors, and the resulting uncertainty, are caused by both random and systematic effects. Random effects are unpredictable and cause errors which change every time a measurement is repeated. They cannot be compensated, but they may be reduced by averaging a number of measurements. Random effects can also be quantified by statistical analysis of repeated measurements, typically by calculating the standard deviation.
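A minimal Python sketch of this statistical analysis, using hypothetical readings: the sample standard deviation quantifies the random effect, and averaging n readings reduces the standard uncertainty of the mean by a factor of √n.

```python
from math import sqrt
from statistics import stdev

# Hypothetical repeated readings of the same feature, in mm
readings = [10.02, 9.98, 10.01, 9.99, 10.03, 9.97, 10.00, 10.01]

s = stdev(readings)               # sample standard deviation: repeatability
u_mean = s / sqrt(len(readings))  # standard uncertainty of the averaged result

print(f"repeatability s = {s:.4f} mm, u(mean) = {u_mean:.4f} mm")
```

Averaging does not remove the random effect, but the uncertainty of the mean is smaller than that of a single reading, which is why repeated measurements are worthwhile.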
MSA refers to this as precision, while the GUM refers to the standard uncertainty characterising a random effect. Both frameworks distinguish between repeatability and reproducibility: repeatability quantifies random effects under the same conditions, while reproducibility measures the greater variation seen when the conditions vary.
Systematic effects cause errors that are not random and may have a known cause. Systematic effects can often be quantified and compensated. In MSA the mean of many measurements is compared with a reference value to calculate the bias, or trueness. This assumes that the uncertainty of the reference is negligible compared to the instrument being tested.
Such an approach cannot, therefore, be used for the most accurate instruments. MSA defines accuracy as the combination of precision and trueness. The GUM assumes that any identifiable bias has been corrected, but where this is not practical the uncertainty and bias may be combined.
The GUM acknowledges the existence of random and systematic effects. However, it recommends that uncertainties are not categorised in this way. This is because a random effect present when calibrating a reference will become a systematic effect when that standard is used to calibrate another instrument. To avoid this confusion, the GUM instead classifies uncertainties by how they are evaluated, identifying Type A uncertainties as those evaluated by statistical analysis of repeated observations.
Both random and systematic effects may be considered as influence quantities. The measurement result can be represented as a function of the true value and the influence quantities. If the uncertainty of each influence quantity is known, then the uncertainty of the measurement can be calculated. This involves considering the sensitivity of the function to each influence, using an uncertainty budget or a Monte Carlo simulation.
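The Monte Carlo approach mentioned above can be sketched in a few lines of Python. The measurement model and all the numbers below are assumptions chosen for illustration: the result is taken to be an indicated length corrected for thermal expansion, with assumed standard uncertainties for the indication, the CTE and the temperature.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# Hypothetical measurement model: the result is the indicated length
# corrected for thermal expansion, L = L_ind * (1 - cte * dT).
L_IND_U = 0.002               # standard uncertainty of the indication, mm
CTE, CTE_U = 11.5e-6, 2.0e-6  # CTE per degC and its standard uncertainty
DT, DT_U = 2.0, 0.5           # deviation from 20 degC and its uncertainty

def simulate(n=100_000, l_ind=100.0):
    """Propagate the influence uncertainties through the model by sampling."""
    results = []
    for _ in range(n):
        l = random.gauss(l_ind, L_IND_U)    # random indication error
        cte = random.gauss(CTE, CTE_U)      # batch-to-batch CTE variation
        dt = random.gauss(DT, DT_U)         # temperature uncertainty
        results.append(l * (1 - cte * dt))  # apply the correction
    m = sum(results) / n
    u = (sum((r - m) ** 2 for r in results) / (n - 1)) ** 0.5
    return m, u

mean_val, u_combined = simulate()
print(f"result = {mean_val:.4f} mm, u = {u_combined * 1000:.2f} um")
```

The standard deviation of the simulated results is the combined standard uncertainty; an uncertainty budget reaches the same answer analytically by multiplying each influence uncertainty by the sensitivity of the function to that influence.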
Simpler approach
Gauge studies are much simpler. The mean of many measurements is compared with a reference value and the difference is the bias, or trueness. The precision is calculated from the measurement results either by calculating the standard deviation or using ANOVA to determine the variance components associated with known factors such as the part and the operator.
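A minimal sketch of the bias and precision calculations in Python, using hypothetical measurements of a reference artefact whose certified value is assumed to be 25mm (a full study would use ANOVA to split the variance between parts, operators and repetitions; only the simple standard-deviation form is shown here).

```python
from statistics import mean, stdev

REFERENCE_VALUE = 25.000  # certified value of the reference, mm (hypothetical)

# Repeated measurements of the reference artefact, in mm
measurements = [25.003, 24.998, 25.004, 25.001, 24.999, 25.002]

bias = mean(measurements) - REFERENCE_VALUE  # trueness
precision = stdev(measurements)              # repeatability

print(f"bias = {bias:+.4f} mm, precision = {precision:.4f} mm")
```

Note that the reference value enters the calculation as if it were exact; any error in it passes straight into the bias figure, which is the weakness discussed next.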
Gauge studies ignore any error in the reference standard, assuming this to be negligible. In effect, the reference value is treated as the true value for the reference artefact. To prevent this causing issues, it is specified that the uncertainty of the reference standard must be less than 10% of the accuracy of the measurement being evaluated. In practice, this often cannot be achieved, meaning the gauge study does not fully represent the accuracy of the measurement. In a properly performed gauge study, significant uncertainty in the reference standard should at least be understood. The reference uncertainty should be checked against the calibration certificate, and if it exceeds 10% the results of the study should be treated with caution.
Uncertainty evaluation accepts that some influences cannot be realistically varied in a study. Type A evaluation determines uncertainty by the statistical analysis of a series of observations, while a Type B evaluation uses any other means such as taking a value from a calibration certificate or a material specification. They can all be combined with the mathematical approach of the GUM; gauge studies don’t provide any way to do this.
Reference uncertainty is just one example of a systematic effect not being reflected in the numerical results of a gauge study. Other systematic effects may go unnoticed. The variation and bias seen in the results reflect only the influences present during those observations. In theory, every potential source of variation should be included in the study, but in practice this is rarely feasible. Most studies take the standard approach of considering only parts, operators and repetitions. The implicit assumption is that the part and the operator are the only significant reproducibility conditions.
Temperature often has a significant effect but gauge studies are often carried out at close to nominal temperature. During production significant variations may be caused by the time of day, doors opening or seasonal variation. Some influences may be difficult to vary in a realistic way and designing experiments that vary large numbers of influences would take far too long.
Environmental factors
Even if the measurement process involves a correction for temperature and the gauge study involves measurements at different temperatures, uncertainties remain that the gauge study can’t detect. The parts used in the study are likely to come from the same batch of material and therefore have a similar coefficient of thermal expansion (CTE). If this value is close to the nominal CTE then it will not have much effect on the study results.
However, the uncertainty of CTE for real materials is very high and can vary by up to 50% between batches. When the batch changes this will introduce a large bias in the corrections for thermal expansion.
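The size of this bias follows directly from the expansion correction: the residual error is L × (actual CTE − nominal CTE) × ΔT. A short Python sketch with assumed values — a 100mm part with a nominal steel-like CTE, measured 5°C from the reference temperature, and a batch whose CTE is 50% above nominal:

```python
def correction_bias(length_mm, cte_nominal, cte_actual, dt_degc):
    """Error left after correcting with the nominal CTE instead of the
    batch's actual CTE: L * (cte_actual - cte_nominal) * dT."""
    return length_mm * (cte_actual - cte_nominal) * dt_degc

NOMINAL_CTE = 11.5e-6            # per degC, assumed nominal value
ACTUAL_CTE = NOMINAL_CTE * 1.5   # a batch 50% above nominal

bias_mm = correction_bias(100.0, NOMINAL_CTE, ACTUAL_CTE, 5.0)
print(f"residual bias = {bias_mm * 1000:.1f} um")  # 2.9 um on a 100mm part
```

A bias of a few micrometres can be significant for precision parts, yet a gauge study run on a single batch of material at a stable temperature would never see it.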
Environmental effects and material properties are common examples of influences that are very hard to properly detect in a gauge study but which can be easily included using Type B uncertainty. The VDA-5 standard provides a practical approach in which gauge studies are used to evaluate Type A uncertainty and then combined with Type B uncertainties using an uncertainty budget.
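A sketch of such a combined budget in Python. The component values are hypothetical, and for simplicity every contributor is assumed to have a sensitivity coefficient of one; the components are combined in quadrature and expanded with a coverage factor of k = 2.

```python
from math import sqrt

# Hypothetical uncertainty budget in the spirit of VDA-5: one Type A
# component from a gauge study plus Type B components, all in mm.
budget_mm = {
    "repeatability (Type A, gauge study)":      0.0020,
    "reference calibration (Type B, certificate)": 0.0010,
    "thermal expansion (Type B, CTE tolerance)":   0.0015,
}

u_combined = sqrt(sum(u ** 2 for u in budget_mm.values()))
U_expanded = 2 * u_combined  # coverage factor k = 2, roughly 95% confidence

print(f"u = {u_combined:.4f} mm, U (k=2) = {U_expanded:.4f} mm")
```

Because the components add in quadrature, the largest contributor dominates; the budget makes it obvious which influence is worth attacking first.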
Similar issues can arise in statistical process control, where a systematic effect can act on both the output of a manufacturing process and measurements of that output. This can cause errors in the process output to be tracked by very similar errors in the measurement, effectively hiding the effect in the data. This only becomes a problem if the measurement uncertainty is significant with respect to the process variation; in other words, if the measurement is not capable. However, for the reasons above, if MSA is used to evaluate the measurement then the capability may not be properly understood.
Time spent gaining a deeper understanding of measurements by creating an uncertainty budget can give a much greater insight into variation in manufacturing processes.
Content published by Professional Engineering does not necessarily represent the views of the Institution of Mechanical Engineers.