Determinate Error Analysis

Uncertainty, Error, and Precision in Quantitative Measurements

Much of the work in any chemistry laboratory involves the measurement of numerical quantities. A quantitative measurement tells us three things:

  1. Numerical quantity,
  2. Appropriate units,
  3. Uncertainty of the measurement.

The first two of these are fairly easy to understand, but the last one, uncertainty, needs some explanation. There is always an uncertainty associated with physical measurements, arising not only from the care with which you take the measurement, but also from the care with which the measuring device is calibrated. If you have done all that you can to minimize error in taking a measurement, your recorded values should then reflect the uncertainty (precision) of the measuring tool. This is usually the smallest numerical value that can be estimated with the measuring device. For example, imagine trying to measure the length of the following line segment using a cheap metric ruler:

The ruler has divisions every 0.5 cm.

Is the length of the line between 4 and 5 cm? Yes, definitely.
Is the length between 4.0 and 4.5 cm? Yes, it looks that way.
But is the length 4.3 cm? Is it 4.4 cm?

Given the precision of the ruler and our ability to estimate where between a set of marked graduations (the tick marks on the ruler) a measurement falls, we are somewhat uncertain about what number to record after the decimal. So, what we can say is that the actual length is around 4.4 cm, but it might be closer to 4.3 cm, or it might be closer to 4.5 cm. In other words, we think the length is 4.4 cm but we might be off by 0.1 cm in either direction. We would record this measurement in this way:

4.4 ± 0.1 cm

Keeping track of the uncertainty would be cumbersome if it had to be reported this way each time the measurement itself was reported or used in a calculation. Therefore, we use significant figures to imply the precision of a measurement without having to state the uncertainty explicitly. In this course, we will assume an uncertainty of ±1 in the last recorded digit unless stated otherwise. The measurement above could then be recorded simply as:

4.4 cm

and the uncertainty of ±0.1 cm would be implied (but you still, always, have to include the units). Note that when using this method, it is very important that you record all significant digits. If you measured a mass and found it to be 2.0000 ± 0.0001 g, it would be wrong to record the mass as:

2 g   (Wrong!)

Instead you must include all significant figures, even if they happen to be trailing zeros:

2.0000 g   (Right!)
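
If you ever record measurements with software rather than by hand, the same rule applies. Here is a minimal Python sketch (the value and the ±0.0001 g uncertainty come from the example above; the formatting calls are just one way to keep the zeros):

    mass = 2.0  # measured mass in grams, uncertainty +/- 0.0001 g
    print(f"{mass} g")      # prints "2.0 g" -- trailing zeros lost (Wrong!)
    print(f"{mass:.4f} g")  # prints "2.0000 g" -- all significant digits kept (Right!)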

The uncertainty that we have been discussing so far is always associated with individual physical measurements. The quantity that you are trying to find when you make such a measurement has a true value, which is unknown and fundamentally unknowable. Because there is (unavoidable) uncertainty in your measurements, the values you get when taking a series of measurements will tend to scatter around the true value. For example, if the above line were measured with several different rulers, a series of measurements would be obtained, each of which might be slightly different.

The difference between the true value and any given measured value is called the error in the measurement.

Experimental error, when used in this context, has a very specific meaning and does not necessarily imply a mistake or blunder. If you know about a mistake or blunder, you can, at least in principle, fix the problem and eliminate the mistake. Some experimental error is intrinsic: while it can be minimized, it cannot be eliminated. A perfectly executed experiment, with no mistakes or blunders, still has experimental error. Experimental error falls into two categories: determinate and indeterminate.

Determinate errors
have a definite direction and magnitude and have an assignable cause (their cause can be determined). Determinate error is also called systematic error. Determinate error can (theoretically) be eliminated.
Indeterminate errors
arise from uncertainties in a measurement as discussed above. Indeterminate error is also called random error, or noise. Indeterminate error can be minimized but cannot be eliminated.

Let's imagine that you weigh a calibration weight several times. The calibration weight is supposed to weigh 10.0 g, but every time you weigh it, you get a value that is about 2.0 g too large (error has both magnitude and direction). You look more closely at the weight and discover there is a piece of tape stuck to it. That's a determinate error (you were able to determine the cause of the error). Now, you repeat your weighings of the calibration weight (after removing the tape) and collect a series of values. All of the values fall near 10.0 g, but some are a bit higher, some a bit lower; some differ from 10.0 g by 0.1 g, others differ by 0.2 g. This random fluctuation is a result of indeterminate error.

If the only errors affecting a measurement are random (i.e., you have managed to eliminate all sources of determinate error), then taking a large number of measurements will yield values that are symmetrically distributed to either side of the true value. The data are said to be distributed according to a Gaussian, or normal, distribution. (You may know this as a bell curve.) If you have many measurements, then for every individual measurement that is a bit too small, there will be another one that is a bit too large. If you take the mean, or average, of many such measurements, the random errors will tend to cancel out. Individual measurements can have fairly large errors, but the mean of many such measurements will tend to fall close to the true value of the quantity under investigation. The more measurements you take, the closer the mean will be to the true value.
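
Here is a hedged Python sketch of this idea (the true value, the amount of random noise, and the sample sizes are all made up for illustration):

    # Simulate purely random (indeterminate) error as Gaussian noise and
    # watch the mean converge toward the true value as N grows.
    import random

    true_value = 10.0   # the (normally unknowable) true value, in grams
    sigma = 0.2         # spread of the random error

    for n in (5, 50, 5000):
        measurements = [random.gauss(true_value, sigma) for _ in range(n)]
        mean = sum(measurements) / n
        print(f"N = {n:5d}: mean = {mean:.3f} g")

Each run differs because the errors are random, but the mean for N = 5000 will almost always sit much closer to 10.0 g than the mean for N = 5.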

It is important to note that, although taking the mean of many measurements will yield a good estimate of the true value, the true value remains fundamentally unknowable. There are two reasons for this. First, as the number of measurements increases, the mean approaches the true value asymptotically; for the mean to equal the true value, you would have to take an infinite number of measurements. Second, the mean approaches the true value if and only if the only sources of error are random. We can never be 100% certain whether the value toward which the mean is converging represents the true value itself or the true value modified by some small determinate error. In practice, we just assume that the mean equals the true value, and for most work, that's just fine.

The mean of a set of measurements indicates the center of the normal distribution. The width of the normal distribution is given by the standard deviation. Taking again the example involving the 10.0 g calibration weight, a series of measurements that ranges from 9.9 g to 10.1 g is obviously "better" in some sense than a series of measurements that ranges between 7.0 g and 13.0 g. Standard deviation is a measure of how widely a series of measurements is spread around the mean. Measurements that are closely clustered together (and around the mean) have a small standard deviation. Measurements that are widely spread apart have a large standard deviation. It might make intuitive sense to you that measurements that are clustered closely together are "better", but here is a statistical reason: the smaller the standard deviation, the faster the mean converges toward the true value. That is, if the standard deviation is very small, it might take only a handful of measurements before the mean gives a very good estimate of the true value. When the standard deviation is large, many more measurements must be taken before the mean gives as good an estimate of the true value.
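
The following sketch illustrates that claim (again with invented numbers): two sets of ten measurements are simulated, one tightly clustered and one widely spread, and the means of five repeated trials are compared.

    # With a small standard deviation, even ten measurements give a mean
    # close to the true value; with a large one, the mean still scatters.
    import random

    true_value, n = 10.0, 10

    for sigma in (0.05, 1.5):  # a "tight" and a "wide" spread
        means = [sum(random.gauss(true_value, sigma) for _ in range(n)) / n
                 for _ in range(5)]
        print(f"sigma = {sigma}:", [f"{m:.2f}" for m in means])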

Two related concepts, defined below, are precision and accuracy. In the figures which follow, imagine that the center of the target represents the true value of some observable that interests us, and that the crosses are individual measurements of that observable.

Precision
refers to how closely multiple measurements of the same quantity cluster to one another.
Accuracy
refers to how closely multiple measurements of the same quantity cluster around the true value.

The first target has hits scattered all over the place. The data may be characterized as:
  • Neither precise nor accurate.
  • Large standard deviation.

The second target has hits tightly clustered, but off to one side. The data may be characterized as:
  • Precise, but not accurate.
  • Small standard deviation.
  • Determinate error present.

The third target has hits tightly clustered in the very center of the target. The data may be characterized as:
  • Both precise and accurate.
  • Small standard deviation.
  • No determinate error present.

Sometimes measurements that are not precise will distribute around the expected value in such a way that the mean of the measurements closely matches the expected value. In such a case, it is tempting to say that the data are accurate but not precise. However, by definition, data that are not precise cannot be accurate.

As discussed previously, all individual measurements are subject to uncertainty, and the uncertainty should be reported (although it may be reported implicitly with significant figures). The uncertainty tells what you think the magnitude of the error is in each of your individual measurements. Whenever you make three or more quantitative measurements of some observable, you should report a standard deviation. When reporting your value, follow these guidelines (a short code sketch follows the list):

  1. Calculate the standard deviation (mean = 10.145, s = 0.467)
  2. Round standard deviation to one significant digit (mean = 10.145, s = 0.5)
  3. Round mean so that it has the same number of digits after the decimal point as does the standard deviation (mean = 10.1, s = 0.5)
  4. Report mean and standard deviation as mean ± one standard deviation (10.1 ± 0.5)
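
Here is a Python sketch of these four steps; the helper function round_to_one_sig_fig and the data values are ours, not part of the procedure above:

    import math
    import statistics

    def round_to_one_sig_fig(x):
        """Round a positive number to one significant digit."""
        return round(x, -math.floor(math.log10(abs(x))))

    data = [10.1, 9.7, 10.6, 10.3, 9.9, 10.4, 10.0, 10.6, 9.8, 10.1]
    mean = statistics.mean(data)               # step 1
    s = statistics.stdev(data)                 # step 1 (N - 1 denominator)
    s_rounded = round_to_one_sig_fig(s)        # step 2
    decimals = max(0, -math.floor(math.log10(s_rounded)))
    mean_rounded = round(mean, decimals)       # step 3
    print(f"{mean_rounded} ± {s_rounded}")     # step 4, e.g. "10.1 ± 0.3"
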
Standard deviation can be used to compare two values or to compare an experimentally determined value with a literature value. A property of the normal distribution is that 95.5% of the values in a series of measurements fall within two standard deviations of the mean. If you take another measurement, you can expect with 95.5% certainty that it will fall within two standard deviations of the mean. If, instead, it falls (say) seven standard deviations away, then the probability is very low that you were still measuring the same thing. Similarly, if you collect a series of data, compute the mean and standard deviation, and find that your mean falls within two standard deviations of a literature value, then you can say with some confidence that your results agree with the literature value. If, on the other hand, your mean value is (say) seven standard deviations away from the literature value, then you cannot claim to have reproduced the expected value, and you had better start searching for determinate errors (either that, or you have just discovered something wonderful and new, but search for the determinate errors first, okay?). Note that this last situation is analogous to the middle target above.
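
Using the numbers from the sketch above, the comparison itself is a single line of arithmetic:

    # How many standard deviations separate the mean from the literature value?
    mean, s, literature = 10.1, 0.3, 10.0
    z = abs(mean - literature) / s
    print(f"{z:.1f} standard deviations apart")
    # z <= 2: consistent with the literature value
    # z >> 2 (say, 7): look for determinate errors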

Calculation of the mean and standard deviation:

Calculation of the mean and standard deviation is a small part of the very large field of statistics. The mean of a series of measurements is equal to the sum of the individual measurements divided by the total number of measurements (N):

mean = (x1 + x2 + ... + xN) / N

Once the mean has been calculated, the standard deviation (s) is determined by the following equation:

s = sqrt( sum[(mean - xi)^2] / (N - 1) )
When in statistics mode, modern calculators can quickly calculate the average and standard deviation. Because you will frequently need to report these values, it is important that you learn to do the calculations using the calculator rather than working through these tedious calculations "by hand".
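
If you prefer to check your calculator (or do the work in software), the two formulas translate directly into Python; the measurement values here are made up for illustration:

    import math
    import statistics

    x = [9.9, 10.1, 10.0, 10.2, 9.8]   # five measurements, in grams
    N = len(x)

    mean = sum(x) / N                                        # (x1 + ... + xN) / N
    s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (N - 1))

    # The statistics module gives the same answers (stdev uses N - 1).
    assert math.isclose(mean, statistics.mean(x))
    assert math.isclose(s, statistics.stdev(x))
    print(f"mean = {mean} g, s = {s:.2f} g")   # mean = 10.0 g, s = 0.16 g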

Reading graduated measuring devices:

Many laboratory measuring devices are graduated. That is, they are marked with equally spaced lines corresponding to incremental values of the quantity measured. For example, a 100-mL graduated cylinder is marked with large lines every 10 mL and smaller lines every 1 mL; it is graduated in mL. Similarly, burets and 10-mL measuring pipets are graduated in 0.1-mL increments, and standard laboratory thermometers are likewise graduated (typically in 1 °C increments).

The standard procedure for reading a value from a graduated device is to estimate the value to one decimal place beyond the finest graduation. This means that you should record the volume measured with a 100-mL buret to the nearest hundredth of a mL (if the liquid level falls exactly on a major graduation, say at the 14-mL mark, write 14.00 mL, not 14 mL).

You will be expected in this course to adhere to this procedure. Failure to do so will imply more uncertainty in your measurements than actually exists.