Measurement Error

Could someone please explain the difference between measurement error and measurement uncertainty? I have actually held conversations at conferences on this issue with statisticians from NIST, and no one has ever been able to give me a consistent answer. It seems to be more philosophical than anything. I was just wondering if anybody could share some thoughts on this subject.

See Uncertainty. My "educated guess" is that uncertainty is minimized when, say, all error terms (in the sense of residual = measured value - true value) are either -1 or +1. But error can be further minimized when all error terms are randomly distributed, say, between -0.01 and +0.01, in which case uncertainty may be greater than in the previous (binary) case because the error terms are "all over" (albeit within a tiny interval).
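For what it's worth, here is a small numerical sketch of the two cases I have in mind (my own illustration, using NumPy, and treating "error" loosely as the typical residual magnitude). In the binary case the error is large but can only take two values; in the uniform case the error is tiny but the residuals are "all over" within the interval:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Case 1: every residual (measured value - true value) is exactly -1 or +1
binary_residuals = rng.choice([-1.0, 1.0], size=n)

# Case 2: residuals fall anywhere in a tiny interval, uniform on (-0.01, 0.01)
uniform_residuals = rng.uniform(-0.01, 0.01, size=n)

for name, res in [("binary +/-1", binary_residuals),
                  ("uniform +/-0.01", uniform_residuals)]:
    print(f"{name:16s} mean |error| = {np.mean(np.abs(res)):.4f}, "
          f"distinct error values = {len(np.unique(res))}")
```

The binary case prints a mean absolute error of 1.0 with only 2 distinct residual values, while the uniform case prints a mean absolute error of about 0.005 with roughly 1000 distinct values; which of those counts as "more uncertain" is exactly the philosophical question.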

That helps matters. The statement made in the linked article ("Because of an unfortunate use of terminology in systems analysis discourse, the word "uncertainty" has both a precise technical meaning and its loose natural meaning of an event or situation that is not certain.") still leaves things open for discussion. I will study the information provided, and the linked article, further. On a second note, it seems uncertainty is basically another way of measuring dispersion.
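That reading of "another way of measuring dispersion" lines up with how the GUM's Type A evaluation works: the reported standard uncertainty is just a dispersion statistic, namely the sample standard deviation of repeated readings, scaled down for the mean. A minimal sketch with made-up readings:

```python
import numpy as np

# Hypothetical repeated readings of the same quantity (made-up numbers)
readings = np.array([9.98, 10.02, 10.01, 9.99, 10.00, 10.03])

dispersion = np.std(readings, ddof=1)          # sample standard deviation
u_mean = dispersion / np.sqrt(len(readings))   # standard uncertainty of the mean (GUM Type A)

print(f"mean reading                     = {readings.mean():.4f}")
print(f"dispersion (sample std dev)      = {dispersion:.4f}")
print(f"standard uncertainty of the mean = {u_mean:.4f}")
```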