Exp1 Error Analysis


II Error Analysis


In any quantitative experiment it is necessary to record the observations as sets of numbers. Associated with these numbers are various degrees of uncertainty, referred to as errors. They are not errors in the sense of mistakes, but rather uncertainties inherent in the process of making observations. We shall assume that mistakes have been eliminated by care in taking readings. The errors we wish to consider fall into two categories: systematic and random. Systematic errors arise from inaccuracies in the technique used or in the instrument itself. An instrument which is incorrectly calibrated, or which gives a non-zero reading when the quantity measured is actually zero, is a source of systematic error. Although this type of error is often difficult to detect, it can usually be accounted for with proper experimental technique. Random errors arise from unpredictable and unknown variations during the experiment. They may be due to estimation in reading a scale, or to fluctuations in mechanical vibration, temperature, or illumination. Truly random errors are treated using statistical methods.

Making a series of observations of a given quantity will give values which vary slightly because of these errors. The best value for that quantity is the average, or mean, value. The uncertainty in this mean value is given by a ± number following it. One estimate of the uncertainty is found by taking the average of the absolute values of the deviations of the individual readings from the mean value; this is sometimes referred to as the average deviation. If there are N values of a variable x, they are designated with a subscript i as xi, and the average of x is given by

(1)    \( \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i \)

The average deviation is then defined to be

(2)    \( \bar{d} = \frac{1}{N} \sum_{i=1}^{N} \left| x_i - \bar{x} \right| \)

Sometimes the standard deviation is a better measure of the uncertainty. This is defined as:

(3)    \( s = \sqrt{ \frac{1}{N-1} \sum_{i=1}^{N} \left( x_i - \bar{x} \right)^2 } \)

Here \( s^2 \) is called the variance of the set of observations.

If there is a very large number of measurements, the error in \( \bar{x} \) is not likely to be greater than the standard deviation of the mean, \( s/\sqrt{N} \). For most of our work, we will not take sufficiently large numbers of observations to warrant such calculations, and we can usually obtain a good estimate of the precision of our measurements from the average deviation. In general this overestimates the uncertainty.
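The three statistics above can be computed directly from their definitions. The sketch below implements equations (1)-(3) in Python; the sample readings are invented purely for illustration.

```python
import math

def mean(xs):
    """Mean value, equation (1): the sum of the readings divided by N."""
    return sum(xs) / len(xs)

def average_deviation(xs):
    """Average deviation, equation (2): mean of |x_i - xbar|."""
    xbar = mean(xs)
    return sum(abs(x - xbar) for x in xs) / len(xs)

def std_deviation(xs):
    """Sample standard deviation, equation (3), with N - 1 in the denominator."""
    xbar = mean(xs)
    return math.sqrt(sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1))

# Hypothetical repeated readings (e.g. five timings of the same event, in s)
readings = [9.78, 9.82, 9.80, 9.85, 9.79]
print(mean(readings))               # best value
print(average_deviation(readings))  # quick estimate of the uncertainty
print(std_deviation(readings))      # statistically preferred estimate
```

Note that the average deviation usually comes out somewhat smaller than the standard deviation, which is one reason it is only a rough estimate of the uncertainty.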

The next question is how the uncertainty in a given measurement propagates to quantities which are calculated using the measurement. For example, the volume of a cylinder is given by

\( V = \pi r^2 h . \)

The uncertainty in V is found by assuming the deviations in r and h are small enough that they may be treated as differentials; the uncertainty in V is then the differential

\( dV = 2\pi r h \, dr + \pi r^2 \, dh , \)

or, more usefully,

\( \frac{dV}{V} = 2\,\frac{dr}{r} + \frac{dh}{h} . \)
When multiplying quantities, the relative uncertainty in the result is the sum of the relative uncertainties of each quantity in the product, each counted as many times as its power. If, for example, dr/r = 0.02 and dh/h = 0.01, then dV/V = 2(0.02) + 0.01 = 0.05; the value of V is uncertain to about 5%. This process is referred to as the propagation of errors. For a quantity V which is a function of several variables xi with i = 1...n,
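The cylinder example can be checked numerically. This is a minimal sketch; the radius and height values (2% and 1% relative uncertainty) are made up to match the worked numbers above.

```python
import math

def cylinder_volume(r, h):
    # V = pi * r^2 * h
    return math.pi * r ** 2 * h

def volume_relative_uncertainty(r, dr, h, dh):
    # dV/V = 2*dr/r + dh/h -- r enters squared, so its relative error counts twice
    return 2 * dr / r + dh / h

r, dr = 2.50, 0.05   # radius and its uncertainty: dr/r = 0.02
h, dh = 10.0, 0.1    # height and its uncertainty: dh/h = 0.01
V = cylinder_volume(r, h)
rel = volume_relative_uncertainty(r, dr, h, dh)
print(f"V = {V:.1f} +/- {rel * V:.1f}  (relative uncertainty {rel:.0%})")
```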

(4)    \( dV = \sum_{i=1}^{n} \frac{\partial V}{\partial x_i} \, dx_i \)

so the fractional error is

(5)    \( \frac{dV}{V} = \frac{1}{V} \sum_{i=1}^{n} \frac{\partial V}{\partial x_i} \, dx_i \)

where \( \partial V/\partial x_i \) is the partial derivative of V with respect to xi. A more accurate relation may be derived using the standard deviations of the means of the variables xi. The variance of V is

(6)    \( s_V^2 = \sum_{i=1}^{n} \left( \frac{\partial V}{\partial x_i} \right)^2 s_{\bar{x}_i}^2 \)

where \( s_{\bar{x}_i}^2 \) is the variance of the mean of the xi variable.

In applying equation (4), since dxi may be positive or negative, each term of the sum is taken to be positive; the absolute value of each term \( (\partial V/\partial x_i)\, dx_i \) is used. There is no such ambiguity in equation (6), where the terms are added in quadrature.
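The two propagation rules can be compared in code. The sketch below is an illustration, not part of the handout's method: it estimates the partial derivatives numerically with a central difference and returns both the absolute-value sum of equation (4) and the quadrature sum of equation (6).

```python
import math

def propagate(f, values, sigmas, eps=1e-6):
    """Propagate uncertainties through f(x1, ..., xn).

    Returns (linear, quadrature): the absolute-value sum of equation (4)
    and the sum-in-quadrature of equation (6). Partial derivatives are
    taken numerically with a central difference of step eps * |x_i|.
    """
    linear = quad = 0.0
    for i, (x, s) in enumerate(zip(values, sigmas)):
        step = eps * (abs(x) or 1.0)
        hi = list(values); hi[i] = x + step
        lo = list(values); lo[i] = x - step
        dfdx = (f(*hi) - f(*lo)) / (2 * step)
        linear += abs(dfdx) * s
        quad += (dfdx * s) ** 2
    return linear, math.sqrt(quad)

# Cylinder volume again, V = pi r^2 h, with the same made-up readings
lin, qd = propagate(lambda r, h: math.pi * r ** 2 * h, [2.50, 10.0], [0.05, 0.1])
print(lin, qd)  # the quadrature estimate is the smaller of the two
```

The quadrature result of equation (6) is always less than or equal to the linear sum of equation (4), which is why the latter is said to overestimate the uncertainty.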

In the experiments we shall be performing, one quantity (the dependent variable) is measured as a function of another (the independent variable). There is usually some mathematical relation which should hold for these variables. Often this relation may be expressed in straight-line form, as in the examples above. Because of the errors discussed above, the measured points do not always fall precisely on the expected line. The problem then is to determine which line best fits the data. It turns out that the best fit is the line for which the sum of the squares of the deviations of the measured values from that line has the smallest (minimum) value. The process of finding that line is called least squares fitting or, in more sophisticated terminology, linear regression analysis. In applying this analysis in the simplest manner, it is assumed that the variables are related by an equation of the form y = mx + b. It is also assumed that there are no errors in the determination of the x values and that the distribution of errors for each value yi is random. For a given experimental point (xi, yi), the deviation is di = yi − y, where y = mxi + b is the point on the line corresponding to the value x = xi. As can be shown mathematically, setting up the condition such that

(7)    \( \sum_{i=1}^{N} d_i^2 = \sum_{i=1}^{N} \left( y_i - m x_i - b \right)^2 \)

is a minimum enables us to find the slope m and the intercept b for the line which best fits the data. The equations found for these conditions are:

(8a)    \( m = \frac{N \sum x_i y_i - \sum x_i \sum y_i}{N \sum x_i^2 - \left( \sum x_i \right)^2} \)

(8b)    \( b = \frac{\sum x_i^2 \sum y_i - \sum x_i \sum x_i y_i}{N \sum x_i^2 - \left( \sum x_i \right)^2} \)

where N is the number of pairs of data points, and \( \sum x_i \), \( \sum y_i \), and \( \sum x_i^2 \) are the sums of the values of xi, yi, and xi², respectively, etc. The equations (8) can be easily derived by requiring

\( \frac{\partial}{\partial m} \sum_{i=1}^{N} d_i^2 = 0 , \qquad \frac{\partial}{\partial b} \sum_{i=1}^{N} d_i^2 = 0 ; \)

these two conditions yield two equations which must be solved simultaneously. Some hand calculators have built-in programs for linear regression. A large number of computer programs carry out this minimization for a variety of functional forms.
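Equations (8a) and (8b) translate directly into a short program. A minimal sketch in Python, with a made-up data set lying near the line y = 2x + 1:

```python
def least_squares_line(xs, ys):
    """Slope m and intercept b from equations (8a) and (8b)."""
    N = len(xs)
    Sx = sum(xs)                              # sum of x_i
    Sy = sum(ys)                              # sum of y_i
    Sxx = sum(x * x for x in xs)              # sum of x_i^2
    Sxy = sum(x * y for x, y in zip(xs, ys))  # sum of x_i * y_i
    delta = N * Sxx - Sx ** 2
    m = (N * Sxy - Sx * Sy) / delta           # (8a)
    b = (Sxx * Sy - Sx * Sxy) / delta         # (8b)
    return m, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.9, 9.1]   # invented data, roughly y = 2x + 1
m, b = least_squares_line(xs, ys)
print(m, b)
```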

 

We will look at two examples. In the first case, all of the measurement errors are assumed to be the same, so \( \sigma_i = \) constant; every point then carries equal weight, and equations (8) apply as written. This will usually be the case for your experiments in this class.

In the second example, the errors (standard deviations) \( \sigma_i \) depend upon the measured values yi, and the points must be weighted accordingly: each term in the sums of equations (8) carries a factor \( 1/\sigma_i^2 \).

The uncertainties in the values of m and b thus determined can be calculated from

 

(9a)    \( \sigma_m^2 = \frac{N s_y^2}{\Delta} \)

(9b)    \( \sigma_b^2 = \frac{s_y^2 \sum x_i^2}{\Delta} \)

where

(9c)    \( s_y^2 = \frac{1}{N-2} \sum_{i=1}^{N} \left( y_i - m x_i - b \right)^2 , \qquad \Delta = N \sum x_i^2 - \left( \sum x_i \right)^2 . \)
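The complete fit, including the uncertainties of equations (9a)-(9c), can be sketched as follows (equal-error case; the example data are the same invented points as before):

```python
import math

def fit_with_uncertainties(xs, ys):
    """Least-squares m and b with their uncertainties, equations (8) and (9)."""
    N = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sxx = sum(x * x for x in xs)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    delta = N * Sxx - Sx ** 2                        # Delta of (9c)
    m = (N * Sxy - Sx * Sy) / delta                  # (8a)
    b = (Sxx * Sy - Sx * Sxy) / delta                # (8b)
    # s_y^2: scatter of the points about the fitted line, N - 2 degrees of freedom
    s2 = sum((y - m * x - b) ** 2 for x, y in zip(xs, ys)) / (N - 2)
    sigma_m = math.sqrt(N * s2 / delta)              # (9a)
    sigma_b = math.sqrt(s2 * Sxx / delta)            # (9b)
    return m, sigma_m, b, sigma_b

m, sm, b, sb = fit_with_uncertainties([0.0, 1.0, 2.0, 3.0, 4.0],
                                      [1.1, 2.9, 5.2, 6.9, 9.1])
print(f"m = {m:.3f} +/- {sm:.3f},  b = {b:.3f} +/- {sb:.3f}")
```

The N − 2 in the denominator of \( s_y^2 \) reflects the two parameters (m and b) already extracted from the data.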

 

 
