In Physics, like every other experimental science, one cannot make any measurement without having some degree of uncertainty. In reporting the results of an experiment, it is as essential to give the uncertainty, as it is to give the best-measured value. Thus it is necessary to learn the techniques for estimating this uncertainty. Although there are powerful formal tools for this, simple methods will suffice for us. To a large extent, we emphasize a “common sense” approach based on asking ourselves just how much any measured quantity in our experiments could be in error.

A frequent misconception is that the experimental error is the difference between our measurement and the accepted “official” value. What we mean by error is the estimate of the range of values within which the true value of a quantity is likely to lie. This range is determined from what we know about our lab instruments and methods. It is conventional to choose the error range as that which would comprise 68% of the results if we were to repeat the measurement a very large number of times.

In fact, we seldom make enough repeated measurements to calculate the error precisely, so the error is usually an estimate of this range.
Note, however, that the error range is established so as to include **most** of the likely outcomes,
but not all of them. You might think of the process as a wager: pick the range so that if you bet on
the outcome being within your error range, you will be right about 2/3 of the time. If you
underestimate the error, you will lose money in your betting; if you overestimate it, no
one will take your bet!

If we denote a quantity that is determined in an experiment as A, we can
call the error ΔA. Thus if A represents the length of a book measured with a meter stick
we might say the length A = 25.1 +/- 0.1 cm, where the central value for the length is 25.1
cm and the error, ΔA, is 0.1 cm. Both the central value and error of measurements **must** be
quoted when reporting your results. Note that in this example, the central value is given with
just three significant figures. Do not write significant figures beyond the first digit of
the error on the quantity. Giving more precision to a value than this is misleading and
irrelevant.

An error such as that quoted above for the book length is called the **absolute
error**; it has the same units as the quantity itself (cm in the example). Note that if the
quantity A is multiplied by a constant factor c, the absolute error of cA is:

Δ(cA) = |c| ΔA    (E.1)

We will also encounter the **relative error**, defined as the ratio of the error to the
central value of the quantity:

relative error of A = ΔA / A    (E.2)

Thus the relative error of the book length is ΔA / A = 0.1/25.1 ≈ 0.004. The relative error is dimensionless, and should be quoted with as many significant figures as are known for the absolute error. Note that if the quantity A is multiplied by a constant factor c, the relative error of cA is the same as the relative error of A,

Δ(cA) / (cA) = ΔA / A    (E.3)

since the constant factor c cancels in the relative error of cA. Note that quantities with assumed negligible errors are treated as constants.

You are probably used to the **percentage error** from everyday life. The percentage error is the relative
error multiplied by 100.
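As a quick sketch (the variable names are our own, not from the text), the conversions between absolute, relative, and percentage error for the book-length example look like this:

```python
# Converting between error forms for the book example, A = 25.1 +/- 0.1 cm.
central = 25.1   # cm, central value of the length
abs_err = 0.1    # cm, absolute error

rel_err = abs_err / central   # relative error (E.2), dimensionless
pct_err = rel_err * 100       # percentage error

print(f"relative error:   {rel_err:.4f}")   # ~0.0040
print(f"percentage error: {pct_err:.2f} %") # ~0.40 %
```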

**Changing from a relative to absolute error:**

Often in your experiments you have to change from a relative to an absolute error by multiplying the relative error by the central value,

ΔA = (ΔA / A) × A    (E.4)

Random error occurs because of small random variations in the measurement process. For example, measuring the time of a pendulum's period with a stopwatch will give different results in repeated trials due to small differences in your reaction time in hitting the stop button as the pendulum reaches the end point of its swing. If this error is random, the average period over the individual measurements would get closer to the correct value as the number of trials is increased. The correct reported result would be the average t̄ for our central value,

t̄ = (t_1 + t_2 + ... + t_N) / N    (E.5)

The error is usually taken as the standard deviation of the measurements. (In practice, we seldom take the trouble to make a very large number of measurements of a quantity in this lab.) An estimate of the random error of a single measurement is

Δt = √[ Σ (t_i − t̄)^{2} / (N − 1) ]    (E.5a)

and the error of the average t̄, Δt̄, is

Δt̄ = Δt / √N = √[ Σ (t_i − t̄)^{2} / (N(N − 1)) ]    (E.5b)

where the sum is over the N measurements t_i. Note in equation (E.5b) the “bar” over the letter t, indicating that the error refers to the average t̄.
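Equations (E.5), (E.5a) and (E.5b) can be sketched in a few lines of code; the function name and the timing data below are our own illustration, not from the text:

```python
import math

def mean_and_errors(ts):
    """Return (average, error of one measurement, error of the average),
    following equations (E.5), (E.5a) and (E.5b)."""
    n = len(ts)
    avg = sum(ts) / n                                          # (E.5)
    sd = math.sqrt(sum((t - avg) ** 2 for t in ts) / (n - 1))  # (E.5a)
    return avg, sd, sd / math.sqrt(n)                          # (E.5b)

# Made-up pendulum period timings, in seconds:
avg, sd, err = mean_and_errors([2.31, 2.35, 2.29, 2.33, 2.32])
print(f"t = {avg:.3f} +/- {err:.3f} s")   # t = 2.320 +/- 0.010 s
```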

In the case that we only have one measurement, but know (from a previous measurement) what the error of the average is, we can use this error of the average, Δt̄, multiplied by √N as the error of this single measurement (which you see when you divide equation (E.5a) by equation (E.5b)).

If we don’t have a value of the error of the average, Δt̄, we must guess the likely variation from the character of our measuring equipment. For example, in the book length measurement with a meter stick marked off in millimeters, you might guess that the error would be about the size of the smallest division on the meter stick (0.1 cm).

Some sources of uncertainty are not random. For example, if the meter stick that you used to measure the book was warped or stretched, you would never get a good value with that instrument. More subtly, the length of your meter stick might vary with temperature and thus be good at the temperature for which it was calibrated, but not at others. When using electronic instruments such as voltmeters and ammeters, you obviously rely on the proper calibration of these devices. But if the student before you dropped the meter, there could well be a systematic error. Estimating possible errors due to such systematic effects really depends on your understanding of your apparatus and the skill you have developed for thinking about possible problems. For example, if you suspect a meter might be mis-calibrated, you could compare your instrument with a 'standard' meter, but of course you have to think of this possibility yourself and take the trouble to do the comparison. In this course, you should at least consider such systematic effects, but for the most part you will simply make the assumption that the systematic errors are small. However, if you get a value for some quantity that seems rather far off what you expect, you should think about such possible sources more carefully.

Often in the lab, you need to combine two or more measured quantities, each of which has an error, to get a derived quantity. For example, if you wanted to know the perimeter of a rectangular field and measured the length L and width W with a tape measure, you would then have to calculate the perimeter, p = 2(L + W), and would need to get the error of p from the errors you estimated for L and W, ΔL and ΔW. Similarly, if you wanted to calculate the area of the field, A = L × W, you would need to know how to do this using ΔL and ΔW. There are simple rules for calculating errors of such combined, or derived, quantities. Suppose that you have made primary measurements of quantities A and B, and want to get the best value and error for some derived quantity S.

For **addition** or **subtraction** of measured quantities the absolute error of the
sum or difference is the ‘addition in quadrature’ of the absolute errors of the measured
quantities; if S = A + B or S = A − B,

ΔS = √[ (ΔA)^{2} + (ΔB)^{2} ]    (E.6)

This rule, rather than the simple linear addition of the individual absolute errors, incorporates the fact that random errors (equally likely to be positive or negative) partly cancel each other in the error ΔS.

For **multiplication** or **division** of measured quantities the **relative error** of the
product or quotient is the ‘addition in quadrature’ of the **relative errors** of the measured
quantities; if S = A × B **or** S = A / B,

ΔS / S = √[ (ΔA/A)^{2} + (ΔB/B)^{2} ]    (E.7)

Due to the quadratic addition in (E.6) and (E.7) one can often neglect the smaller of two errors. For example, if the error of A is 2 (in arbitrary units) and the error of B is 1, then the error of S = A + B is √(2^{2} + 1^{2}) = √5 ≈ 2.2.

Thus, if you don’t want to be more precise in your error estimate than ~12% (which in most cases is sufficient, since errors are an estimate and not a precise calculation), you can simply neglect the error in B, although it is 1/2 of the error of A.
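The numbers from this example can be checked directly (a one-off sketch of the quadrature sum in (E.6)):

```python
import math

# Errors of 2 and 1 (arbitrary units) added in quadrature per (E.6).
dA, dB = 2.0, 1.0
dS = math.sqrt(dA ** 2 + dB ** 2)
print(round(dS, 2))   # 2.24, only ~12% larger than dA alone
```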

For the **power** S = A^{n} of a measured quantity A the **relative error** of the power is
the relative error of A multiplied by the magnitude of the **exponent** n; if S = A^{n},

ΔS / S = |n| × ΔA / A    (E.8)
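The three combination rules can be collected into small helpers; this is a minimal sketch of (E.6)–(E.8), and the function names are our own invention:

```python
import math

def err_sum(dA, dB):
    """Absolute error of A + B or A - B, eq. (E.6)."""
    return math.sqrt(dA ** 2 + dB ** 2)

def rel_err_product(A, dA, B, dB):
    """Relative error of A * B or A / B, eq. (E.7)."""
    return math.sqrt((dA / A) ** 2 + (dB / B) ** 2)

def rel_err_power(A, dA, n):
    """Relative error of A ** n, eq. (E.8)."""
    return abs(n) * dA / A
```

For example, `rel_err_power(25.1, 0.1, 2)` gives the relative error of the squared book length, about 0.008 (twice the relative error of the length itself).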

Often you will be asked to graph results obtained in the lab and to find certain quantities from the slope of the graph. You will always plot the quantities against one another in such a way that you end up with a linear plot. It is important to have error bars on the graph which reflect the uncertainty in the quantities you are plotting and help you to estimate the error in the slope of the graph, and hence the error in the quantity you are trying to find.

To demonstrate this we are going to consider an example that you will study in detail later in the course, the simple pendulum. A simple pendulum consists of a weight suspended from a fixed point by a string of length L. The weight swings about a fixed point. At a given time, θ is the angle which this string makes relative to the vertical (direction of the force of gravity).

The **period** T of this motion is defined as the time necessary for the weight to swing back and forth once. You will learn later in Chapter 9 on oscillations that an approximate relation between the period and length of the pendulum is given by T = 2π √(L/g), where g is the constant acceleration of gravity, g = 9.8 m/s^{2}. In the derivation of this equation in Chapter 9 the assumption is made that the angle θ is small.

By measuring the period of oscillation of the pendulum as a function of the length of the string we can find a value for the acceleration due to gravity. The video shows you how we measure the different quantities that are important in the experiment: the length of the string, L, the angle, θ, and the period of oscillation, T. Note that we measure the length several times and the period for 10 oscillations to try to minimize **random** errors. Notice that we use the computer as a stopwatch: we will be using the computer frequently in this course to make measurements and record data.

Suppose that we had measured the string 5 times and found the following 5 values for the length of the string, L.

Measurement No. | L [cm] |
---|---|
1 | 56.4 |
2 | 56.6 |
3 | 56.7 |
4 | 56.6 |
5 | 56.5 |

Finding the average value is straightforward:

L̄ = (56.4 + 56.6 + 56.7 + 56.6 + 56.5)/5 cm = 56.56 cm

You find the error in the average using equation (E.5b):

ΔL̄ = √[ Σ (L_i − L̄)^{2} / (N(N − 1)) ] = 0.05 cm

So we can say that our measured value for L is 56.56 +/- 0.05 cm.
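This calculation is easy to reproduce; the sketch below applies equations (E.5)–(E.5b) to the string-length measurements from the table above:

```python
import math

# The five string-length measurements from the table, in cm.
L = [56.4, 56.6, 56.7, 56.6, 56.5]
n = len(L)
avg = sum(L) / n                                           # (E.5): the average
sd = math.sqrt(sum((x - avg) ** 2 for x in L) / (n - 1))   # (E.5a): error of one measurement
err_avg = sd / math.sqrt(n)                                # (E.5b): error of the average
print(f"L = {avg:.2f} +/- {err_avg:.2f} cm")               # L = 56.56 +/- 0.05 cm
```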

We ask you to do a similar calculation for a different set of numbers in the assignment to see that you've got it.

If we measure the time for 10 oscillations we can find the time for one oscillation simply by dividing by 10. Now we need to make an estimate of the error.

First you need to estimate the error in your measurement. How accurately do you think you can press the button to tell the computer when to start and stop the measurement? Let's say that you think you can press the button within 0.2 seconds of either the start or the stop of the measurement. You need to account for the errors both times, but as we discussed earlier, because these errors are random they add in quadrature so you can say that

Δ(10T) = √[ (0.2)^{2} + (0.2)^{2} ] s ≈ 0.28 s

Now we find the error in T by dividing by 10:

ΔT = Δ(10T) / 10 ≈ 0.028 s
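In code, the two steps above (quadrature sum of the start and stop errors, then division by the constant 10 per (E.1)) can be sketched as:

```python
import math

# Two button presses, each uncertain by 0.2 s, combined in quadrature (E.6);
# then divide by 10 since T = (10T) / 10.
d10T = math.sqrt(0.2 ** 2 + 0.2 ** 2)   # error of the 10-oscillation time
dT = d10T / 10                           # error of one period
print(round(d10T, 2), round(dT, 3))      # 0.28 0.028
```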

So you can see it was a good idea to measure several periods instead of one: we get a much more accurate result. Maybe you'd like to think about why we don't measure 100 oscillations (and because you'd get bored is only part of the answer!).

Again, in the assignment we'll ask you to do this with a different set of numbers.

Now that we have some idea of the uncertainty in our measurements, we can look at some data and try to see if it matches the formula we expect. What we would do is, for a fixed angle, change the length of the string and find the oscillation period. Take a look at the following data set, which was taken by one of our TAs:

L [cm] | ΔL [cm] | 10T [s] | T [s] | ΔT [s] | T^{2} [s^{2}] | ΔT^{2} [s^{2}] |
---|---|---|---|---|---|---|
10.6 | 0.1 | 6.2 | 0.62 | 0.028 | 0.38 | 0.03 |
21.9 | 0.1 | 9.1 | 0.91 | 0.028 | 0.82 | 0.05 |
33.2 | 0.1 | 11.6 | 1.16 | 0.028 | 1.34 | 0.06 |
40.5 | 0.1 | 12.8 | 1.28 | 0.028 | 1.65 | 0.07 |
48.4 | 0.1 | 14.0 | 1.40 | 0.028 | 1.95 | 0.08 |
61.6 | 0.1 | 15.8 | 1.58 | 0.028 | 2.48 | 0.09 |
73.1 | 0.1 | 17.4 | 1.74 | 0.028 | 3.01 | 0.10 |
81.4 | 0.1 | 18.1 | 1.81 | 0.028 | 3.27 | 0.11 |
89.6 | 0.1 | 19.4 | 1.94 | 0.082 | 3.75 | 0.08 |

You should understand from what we discussed above how we got the first 5 columns. The rest of the table shows the necessary transformation of the data into the quantities we need to plot. You might be wondering why we have calculated T^{2}. Recall that we said earlier that we expect that T = 2π √(L/g). We can rearrange this as L = (g/4π^{2}) T^{2}, which means that we should get a straight line if we plot L against T^{2}, and of course we need to know what the error in T^{2}, ΔT^{2}, is so that we can draw error bars on the graph. Finding ΔT^{2} is an exercise on your assignment. (Don't worry, we will just ask you to do it for one set of numbers and we'll guide you through the formulas.)
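As a sketch of that transformation, the last two columns of the table can be rebuilt from the raw 10T measurements, using the power rule (E.8): squaring doubles the relative error, so the absolute error of T^{2} is 2 × T × ΔT. The values agree with the table to within rounding (note the last row of the table quotes a slightly different ΔT):

```python
# Raw 10-oscillation times from the table, in seconds.
ten_T = [6.2, 9.1, 11.6, 12.8, 14.0, 15.8, 17.4, 18.1, 19.4]
dT = 0.028   # s, error of one period (see above)
for t10 in ten_T:
    T = t10 / 10                 # one period
    dT2 = 2 * T * dT             # absolute error of T^2, from (E.8)
    print(f"T = {T:.2f} s, T^2 = {T ** 2:.2f} s^2, dT^2 = {dT2:.2f} s^2")
```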

Let's try using the plotting tool we will be using in this course to plot this data. It's built right into the webpage, but when you enter your data and click “submit” it will make the graph in a new tab. This makes it easy to change something and get another graph if you made a mistake. You should enter the T^{2} values as your x values and your L values as your y values. According to the equation we are testing, L = 0 when T^{2} = 0, so you should check the box which asks you if the fit goes through (0,0). Enter the appropriate errors in the +/- boxes and choose “errors in x and y”. Click “submit” when you are done.

If you entered everything right then on your new tab you should see something that looks like this:

The data are clearly quite linear when plotted like this, so it gives us an indication that our formula at the least has the right form. (Maybe you would like to try plotting T directly against L and see what that looks like.) Notice that you can't see the y error bars because they are very small. The program has fitted the data using a least-squares fitting approach. This means that it has calculated, for each data point, the square of the difference between the data point and the line. It then adds up all these “squares” and uses this number to determine how good the fit is. The computer tries to find the line that gives the smallest sum of squares and calls this the line of best fit. It's drawn this on the graph and called it “y=a*x”. It's also given you the value of a and its estimate for the uncertainty in a. The value the program gives for the error in a is often fairly small: it relies mostly on the scatter of the data points and only uses the errors you enter to weight the points differently in its fit.

Another technique you can use to estimate the error in the slope is to draw “max-min lines”. Here we draw in two lines, one that has the maximum slope that seems reasonable, the “max” line, and another with the smallest slope that seems reasonable, the “min” line. Normally we do these exercises on paper, but you can probably do it simply by holding a clear plastic ruler up to the screen to decide where you think the max-min lines should be (please DON'T draw on the screen!!). A line is reasonable if it just passes within *most* of the error bars. You then just take two convenient points on the line, and find the change in y over the change in x to calculate the slope. You can then work out the slope of both lines to give yourself an estimate of the error in the slope. In the example below you can calculate that the “max” line has a slope of about 90/3.6 = 25 cm/s^{2}, and the “min” line has a slope of about 90/3.8 = 23.7 cm/s^{2}, therefore if you used this method you would conclude that the value of the slope is 24.4 +/- 0.7 cm/s^{2}, as compared to the computer's estimate of 24.41 +/- 0.16 cm/s^{2}. Note that I drew the lines through (0,0), which we can consider as an error-free point, i.e. the fits **must** go through this point.
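As a cross-check on the program's result, a line through the origin can also be fitted by hand with the unweighted least-squares formula a = Σxy / Σx^{2}. This is a simplified sketch: the plotting tool weights points by the errors you enter, so its slope (24.41) differs slightly from the unweighted value here.

```python
# Unweighted least-squares fit of y = a*x through the origin,
# using the data from the table: x = T^2, y = L.
T2 = [0.38, 0.82, 1.34, 1.65, 1.95, 2.48, 3.01, 3.27, 3.75]   # s^2
L = [10.6, 21.9, 33.2, 40.5, 48.4, 61.6, 73.1, 81.4, 89.6]    # cm
a = sum(x * y for x, y in zip(T2, L)) / sum(x ** 2 for x in T2)
print(f"slope = {a:.1f} cm/s^2")   # slope = 24.5 cm/s^2
```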

Now that we have a value for the slope, we can calculate a value for the acceleration due to gravity, g, from it. Remember that L = (g/4π^{2}) T^{2}. This means that the slope of our graph should be equal to g/4π^{2}. To get g we should multiply the slope by 4π^{2}, and we should also divide by 100 to convert from cm/s^{2} to m/s^{2}, which are the standard SI units. Using the computer's values we find:

**g=9.64+/-0.06 m/s ^{2}**

or using the max/min line estimate of the error we find:

**g=9.64+/-0.28 m/s ^{2}**

The accepted value for g is **9.81 m/s ^{2}**, which falls within the range we found using the max/min method, and so we can say, based on that estimate, that our experiment is consistent with the accepted value.
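The conversion from slope to g, using the computer's fitted values quoted above, can be sketched as:

```python
import math

# slope of L vs T^2 is g / (4*pi^2), so g = slope * 4*pi^2;
# divide by 100 to convert cm/s^2 -> m/s^2.
slope, d_slope = 24.41, 0.16            # cm/s^2, computer's fit
g = slope * 4 * math.pi ** 2 / 100      # m/s^2
dg = d_slope * 4 * math.pi ** 2 / 100   # error scales by the same constant (E.1)
print(f"g = {g:.2f} +/- {dg:.2f} m/s^2")   # g = 9.64 +/- 0.06 m/s^2
```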

Bearing these things in mind, an important point to make is that in general we should not necessarily be surprised if something we measure in the lab does not match exactly with what we might expect. When things don't seem to work we should think about why they don't, but, most importantly of all, we must **never** modify our data to make it match our expectations! This is not acceptable scientific practice, and indeed many famous discoveries would never have been made if scientists did this kind of thing. (Actually sometimes they do, and when it happens it can set science back a long way and ruin the careers of those who do it; a prominent recent example in physics is Jan Hendrik Schön.)