Resolution, precision, accuracy, repeatability
A statistical description can be seen as an estimator, and you may get better precision (consistent measurements), but not necessarily better accuracy (closeness to a true value).
Accuracy is how close measurements are to their true value (which is sometimes a standard value).
- how you would know that true value itself to higher accuracy is a good question, though the answer mostly amounts to more work.
Precision is largely about how consistent repeated measurements are
- measurements can be very precise but not at all accurate - e.g. a precision instrument that is consistently off to one side due to bad calibration (sketched in code below)
- it may then be that calibration is the only thing keeping it from being a lot more accurate - or things may be a lot messier and more complex
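A quick way to feel the difference is to fake some measurements. The sketch below is just illustrative Python (the bias and noise figures are made up): a meter with tiny random noise but a constant calibration offset looks very precise yet stays inaccurate, and averaging its readings (a statistical estimator) improves the former but not the latter.

 import random
 import statistics
 
 random.seed(0)
 
 TRUE_VALUE = 1.000   # volts; pretend a reference standard tells us this
 
 def measure(bias, noise_sd):
     # one simulated reading: true value + calibration offset + random noise
     return TRUE_VALUE + bias + random.gauss(0, noise_sd)
 
 # a precise but miscalibrated instrument: tiny noise, constant offset
 readings = [measure(bias=0.050, noise_sd=0.002) for _ in range(100)]
 
 print("spread (precision): %.4f V" % statistics.stdev(readings))                # small
 print("offset (accuracy):  %.4f V" % (statistics.mean(readings) - TRUE_VALUE))  # stays near 0.05
 
 # averaging more readings narrows the spread of the estimate,
 # but does nothing about the calibration offset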
Resolution is (usually) the smallest step in which a value is reported (or controlled)
- resolution often hints at the order of accuracy and/or precision, but this can be misleading
- in fact, high resolution is an easy way to suggest more accuracy and/or precision than you really have
- e.g. does your 4-digit multimeter always show accurate digits? How would you tell?
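For example (made-up numbers): a meter with millivolt display resolution but a roughly 1% calibration error will happily show digits that carry no information.

 # illustrative only: 1 mV display resolution, but a gain error of about 1% (made-up figure)
 true_voltage = 1.141          # what a reference would say
 gain_error   = 1.012          # +1.2% miscalibration
 
 raw     = true_voltage * gain_error
 display = round(raw, 3)       # quantized to the 1 mV display resolution
 
 print("displayed: %.3f V" % display)                    # 1.155 - four confident-looking digits
 print("error:     %+.3f V" % (display - true_voltage))  # about +0.014 V, so the last two digits mean little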
Repeatability asks how stable your measurement is when you later return to the same true value
- this is much like precision, but focuses more on the tool or the machine than on the measurement.
- If repeatability is contrasted with reproducibility, then
- repeatability is often "can one person on one instrument get the same measurement again", and
- reproducibility is often "if you have different operators, and/or different instruments, do they get the same measurement?" (see the sketch after this list)
- Resolution and repeatability are also words used when asking how well you can actuate/control something - which also makes things more complex because both the control and the measurement of the result may each have their own precision/accuracy details.
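A sketch of the repeatability/reproducibility distinction (the readings are invented): each operator-plus-meter combination repeats very well on its own, yet the combinations disagree with each other.

 import statistics
 
 # invented readings of the same reference voltage, grouped per operator + instrument
 readings = {
     "alice, meter A": [1.152, 1.153, 1.153, 1.152],
     "bob, meter B":   [1.148, 1.147, 1.148, 1.149],
     "carol, meter C": [1.158, 1.157, 1.158, 1.158],
 }
 
 # within a group: same person, same meter -> repeatability
 for who, vals in readings.items():
     print("%-15s repeatability (stdev): %.4f V" % (who, statistics.stdev(vals)))
 
 # across groups: different people and/or meters -> reproducibility
 group_means = [statistics.mean(vals) for vals in readings.values()]
 print("reproducibility (spread of the group means): %.4f V" % statistics.stdev(group_means))
 # each setup repeats to well under a millivolt, yet the setups disagree by several millivolts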
If my multimeter shows me 1.153 volts, that makes it easy to assume its accuracy is the full three decimals' worth (around 0.1%).
But generally, multimeters shouldn't be assumed to be better than 1%, both because more extreme values (e.g. many megaohms) are often harder to measure, and because cheap ones don't care as much about calibration. The cheap + extreme-value combination may be more like 5%. And those figures are not very easy to find.
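To put rough numbers on that (the percentages are the guesses above, not datasheet figures):

 reading = 1.153  # volts, as displayed
 
 for label, pct in [("last-digit optimism",        0.1),
                    ("typical multimeter",         1.0),
                    ("cheap meter, awkward range", 5.0)]:
     err = reading * pct / 100
     print("%-27s +/- %.3f V  ->  %.3f .. %.3f V" % (label, err, reading - err, reading + err))
 # at 1% only "1.15-ish" is trustworthy; at 5% not even the second decimal is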