Error Manual


Uncertainty in measurements

In physics, as in every other experimental science, one cannot make any measurement without some degree of uncertainty. In reporting the results of an experiment, it is as essential to give the uncertainty as it is to give the best-measured value. Thus it is necessary to learn the techniques for estimating this uncertainty. Although there are powerful formal tools for this, simple methods will suffice for us. To a large extent, we emphasize a “common sense” approach based on asking ourselves just how much any measured quantity in our experiments could be in error.

A frequent misconception is that the experimental error is the difference between our measurement and the accepted “official” value. What we mean by error is the estimate of the range of values within which the true value of a quantity is likely to lie. This range is determined from what we know about our lab instruments and methods. It is conventional to choose the error range as that which would comprise 68% of the results if we were to repeat the measurement a very large number of times.

In fact, we seldom make enough repeated measurements to calculate the error precisely, so the error is usually an estimate of this range. Note, however, that the error range is established so as to include most of the likely outcomes, but not all of them. You might think of the process as a wager: pick the range so that if you bet on the outcome being within your error range, you will be right about 2/3 of the time. If you underestimate the error, you will lose money in your betting; if you overestimate it, no one will take your bet!

Error

If we denote a quantity that is determined in an experiment as $X$, we can call the error $\sigma_X$. Thus if $X$ represents the length of a book measured with a meter stick, we might say the length is $l=25.1\pm0.1$ cm, where the central value for the length is 25.1 cm and the error, $\sigma_l$, is 0.1 cm. Both the central value and the error of a measurement must be quoted when reporting your results. Note that in this example the central value is given with just three significant figures. Do not write significant figures beyond the first digit of the error on the quantity; giving more precision than this is misleading and irrelevant.
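As a concrete illustration of this rounding rule, here is a minimal Python sketch; the helper `report` is our own invention for this example, not part of any standard library:

```python
import math

def report(value, error):
    """Round a measurement so the central value shows no digits
    beyond the first significant figure of the error."""
    # Decimal position of the first significant digit of the error
    digits = -int(math.floor(math.log10(abs(error))))
    return round(value, digits), round(error, digits)

# Book-length example: raw central value 25.123 cm, error 0.1 cm
print(report(25.123, 0.1))   # -> (25.1, 0.1), i.e. l = 25.1 ± 0.1 cm
```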

Absolute Error

An error such as that quoted above for the book length is called the absolute error; it has the same units as the quantity itself (cm in the example). Note that if the quantity $X$ is multiplied by a constant factor $a$, the absolute error of $(aX)$ is:


$\sigma_{aX}=|a|\sigma_X$
(E.1)


Relative Error

We will also encounter the relative error, defined as the ratio of the error to the central value of the quantity, so that the


relative error of $X= \Large \frac{\sigma_X}{X}$
(E.2)


Thus the relative error of the book length is $\sigma_l/l = (0.1/25.1) = 0.004$. The relative error is dimensionless, and should be quoted with as many significant figures as are known for the absolute error. Note that if the quantity $X$ is multiplied by a constant factor $a$ the relative error of $(aX)$ is the same as the relative error of $X$,


$\Large \frac{\sigma_{aX}}{aX}=\frac{\sigma_X}{X}$
(E.3)



since the constant factor $a$ cancels in the relative error of $(aX)$. Note that quantities with assumed negligible errors are treated as constants.
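The following short Python sketch, with the book-length numbers from above and an arbitrary factor $a=2$ chosen purely for illustration, checks (E.1) and (E.3) numerically:

```python
X, sigma_X = 25.1, 0.1   # book length and its absolute error (cm)
a = 2.0                  # constant factor with negligible error

# (E.1): the absolute error scales with the factor |a|
sigma_aX = abs(a) * sigma_X
print(a * X, sigma_aX)                    # 50.2 ± 0.2 cm

# (E.3): the relative error is unchanged by the constant factor
print(sigma_X / X, sigma_aX / (a * X))    # both ≈ 0.004
```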

You are probably used to the percentage error from everyday life. The percentage error is the relative error multiplied by 100.

Changing from a relative to an absolute error

Often in your experiments you have to change from a relative to an absolute error by multiplying the relative error by the central value,


$ \sigma_X=\Large \frac{\sigma_X}{X}\normalsize \times X$
(E.4)
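A quick sketch of this conversion in Python, reusing the relative error 0.004 from the book example (any value read off a spec sheet would do), together with the percentage error:

```python
X = 25.1         # central value (cm), from the book-length example
rel_X = 0.004    # relative error of X

sigma_X = rel_X * X   # (E.4): absolute error = relative error × central value
print(sigma_X)        # ≈ 0.1 cm
print(rel_X * 100)    # percentage error: 0.4 %
```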


Random Error

Random error occurs because of small random variations in the measurement process. For example, timing a pendulum's period with a stopwatch will give different results in repeated trials due to small differences in your reaction time in hitting the stop button as the pendulum reaches the end point of its swing. If this error is random, the average period over the individual measurements will get closer to the correct value as the number of trials $N$ is increased. The correctly reported result takes this average as our central value,


$\Large \overline{t}=\frac {\sum t_{i}}{N} $
(E.5)



The error is usually taken as the standard deviation of the measurements. (In practice, we seldom take the trouble to make a very large number of measurements of a quantity in this lab.) An estimate of the random error of a single measurement $t_{i}$ is


$\Large \sigma_t=\sqrt{\frac {\sum (t_{i}-\overline{t})^2}{N}} $
(E.5a)



and the estimate of the error of the average, $\overline{t}$, is


$\Large \sigma_{\overline{t}}=\sqrt{\frac {\sum (t_{i}-\overline{t})^2}{N(N-1)}} $
(E.5b)



where the sum is over the $N$ measurements $t_{i}$. Note in equation (E.5b) the “bar” over the letter $t$, indicating that the error refers to the average, $\overline{t}$.

In the case that we have only one measurement but know (from a previous measurement) what the error of the average is, we can use this error of the average $\overline{t}$, $\sigma_{\overline{t}}$, multiplied by $\sqrt{N-1}$ as the error of the single measurement (as you can see by dividing equation (E.5a) by equation (E.5b)).
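To make (E.5), (E.5a) and (E.5b) concrete, here is a short Python sketch; the five stopwatch readings are invented purely for illustration:

```python
import math

# Five invented stopwatch readings of a pendulum period, in seconds
t = [2.05, 1.98, 2.01, 2.10, 1.95]
N = len(t)

t_bar = sum(t) / N                             # (E.5)  the average
dev2 = sum((ti - t_bar)**2 for ti in t)        # sum of squared deviations
sigma_t = math.sqrt(dev2 / N)                  # (E.5a) error of one measurement
sigma_t_bar = math.sqrt(dev2 / (N * (N - 1)))  # (E.5b) error of the average

print(t_bar, sigma_t, sigma_t_bar)
# Dividing (E.5a) by (E.5b) gives sqrt(N - 1), the factor quoted above:
print(sigma_t / sigma_t_bar, math.sqrt(N - 1))   # both equal 2.0 for N = 5
```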

If we don’t have a value of the error of the average, $\sigma_{\overline{t}}$, we must estimate the likely variation from the character of the measuring equipment. For example, in the book-length measurement with a meter stick marked off in millimeters, you might guess that the error is about the size of the smallest division on the meter stick (0.1 cm).

Systematic Error

Some sources of uncertainty are not random. For example, if the meter stick that you used to measure the book was warped or stretched, you would never get a good value with that instrument. More subtly, the length of your meter stick might vary with temperature, and thus be good at the temperature for which it was calibrated but not at others. When using electronic instruments such as voltmeters and ammeters, you obviously rely on the proper calibration of these devices. But if the student before you dropped the meter, there could well be a systematic error. Estimating possible errors due to such systematic effects really depends on your understanding of your apparatus and the skill you have developed for thinking about possible problems. For example, if you suspect a meter might be mis-calibrated, you could compare your instrument with a 'standard' meter, but of course you have to think of this possibility yourself and take the trouble to do the comparison. In this course, you should at least consider such systematic effects, but for the most part you will simply make the assumption that the systematic errors are small. However, if you get a value for some quantity that seems rather far off what you expect, you should think about such possible sources more carefully.

Propagation of Errors

Often in the lab, you need to combine two or more measured quantities, each of which has an error, to get a derived quantity. For example, if you wanted to know the perimeter of a rectangular field and measured the length $l$ and width $w$ with a tape measure, you would then have to calculate the perimeter, $p =2(l+w)$, and would need to get the error of $p$ from the errors you estimated for $l$ and $w$, $\sigma_l$ and $\sigma_w$. Similarly, if you wanted to calculate the area of the field, $A = lw$, you would need to know how to do this using $\sigma_l$ and $\sigma_w$. There are simple rules for calculating errors of such combined, or derived, quantities. Suppose that you have made primary measurements of quantities $A$ and $B$, and want to get the best value and error for some derived quantity $S$.

For addition or subtraction of measured quantities, the absolute error of the sum or difference is the ‘addition in quadrature’ of the absolute errors of the measured quantities: if $S=A\pm B$,


$\sigma_S=\sqrt{\sigma_A^2+\sigma_B^2}$
(E.6)



This rule, rather than the simple linear addition of the individual absolute errors, incorporates the fact that random errors (equally likely to be positive or negative) partly cancel each other in the error $\sigma_S$.
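Returning to the perimeter example above, here is a sketch of how (E.6) and (E.1) combine; the field dimensions and their errors are made up for illustration:

```python
import math

# Perimeter of the rectangular field, p = 2(l + w); values invented for illustration
l, sigma_l = 100.0, 0.5   # length and its error (m)
w, sigma_w = 60.0, 0.5    # width and its error (m)

sigma_sum = math.sqrt(sigma_l**2 + sigma_w**2)   # (E.6) applied to l + w
sigma_p = 2 * sigma_sum                          # (E.1) applied to the factor 2
print(2 * (l + w), sigma_p)                      # 320.0 and about 1.4 m
```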

For multiplication or division of measured quantities, the relative error of the product or quotient is the ‘addition in quadrature’ of the relative errors of the measured quantities: if $S=A\times B$ or $S=\Large \frac{A}{B}$,


$\Large \frac{\sigma_S}{S}=\sqrt{\left(\frac{\sigma_A}{A}\right)^2+\left(\frac{\sigma_B}{B}\right)^2}$
(E.7)



Due to the quadratic addition in (E.6) and (E.7), one can often neglect the smaller of two errors. For example, if the error of $A$ is 2 (in arbitrary units) and the error of $B$ is 1, then the error of $S=A+B$ is $\sigma_S=\sqrt{\sigma_A^2+\sigma_B^2}=\sqrt{2^2+1^2}=\sqrt{5}\approx 2.24$.

Thus, if you don’t need your error estimate to be more precise than about 12% (which in most cases is sufficient, since errors are an estimate and not a precise calculation), you can simply neglect the error in $B$, even though it is half the error of $A$.
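The same arithmetic in a few lines of Python, showing how small the effect of neglecting $\sigma_B$ is:

```python
import math

sigma_A, sigma_B = 2.0, 1.0                    # errors in arbitrary units

sigma_S = math.sqrt(sigma_A**2 + sigma_B**2)   # (E.6)
print(sigma_S)                                 # 2.236...

# The full error is only ~12% larger than sigma_A alone
print(sigma_S / sigma_A - 1)                   # ≈ 0.118
```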

For the power $A^n$ of the measured quantity $A$, the relative error of the power is the relative error of $A$ multiplied by the magnitude of the exponent $n$: if $S=A^n$,


$\Large \frac{\sigma_S}{S}=|n|\times \frac{\sigma_A}{A}$
(E.8)



Derivation of error formulas

The formulas above are useful relationships that follow from more fundamental equations derived using calculus in Lecture 1. There we found that, in the case of one variable, $f(x)$,

$\sigma_f = |\frac{df}{dx}|\sigma_x$

and in the case of two variables, $f(x,y)$,

$\sigma_f = \sqrt{(\frac{\partial f}{\partial x})^2\sigma_x^2 + (\frac{\partial f}{\partial y})^2\sigma_y^2}$
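If you want to check the propagation rules against this two-variable formula symbolically, the following sketch uses the sympy library (assumed to be available; it is not part of the lab software) to apply it directly:

```python
import sympy as sp

x, y, sx, sy = sp.symbols('x y sigma_x sigma_y', positive=True)

def propagate(f):
    """Two-variable propagation: sqrt((df/dx)^2 sx^2 + (df/dy)^2 sy^2)."""
    return sp.sqrt(sp.diff(f, x)**2 * sx**2 + sp.diff(f, y)**2 * sy**2)

print(propagate(x + y))   # sqrt(sigma_x**2 + sigma_y**2), i.e. (E.6)
print(propagate(x * y))   # sqrt(sigma_x**2*y**2 + sigma_y**2*x**2); divide by xy for (E.7)
```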

Equation (E.1) is thus trivially derived: if $f(x)=ax$, then $|\frac{df}{dx}|=|a|$.

Equation (E.6) considers the case $f(x,y)=x+y$; here both $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are equal to 1, and it is straightforward to see how (E.6) is arrived at.

Equation (E.7) considers the case $f(x,y)=xy$. Now $\frac{\partial f}{\partial x}=y$ and $\frac{\partial f}{\partial y}=x$, so

$\sigma_f=\sqrt{y^2\sigma_x^2+x^2\sigma_y^2}$

The equation is more useful to us if everything is expressed in terms of relative errors, so we divide both sides by $f=xy$, resulting in equation (E.7):

$\large\frac{\sigma_f}{f}=\sqrt{\frac{\sigma_x^2}{x^2}+\frac{\sigma_y^2}{y^2}}$

The same logic applies for division.
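As a quick numerical check of the division case, with made-up values for $A$ and $B$:

```python
import math

# Hypothetical measured values
A, sigma_A = 10.0, 0.2
B, sigma_B = 4.0, 0.1

S = A / B
rel_S = math.sqrt((sigma_A / A)**2 + (sigma_B / B)**2)   # (E.7)
print(S, rel_S * S)   # 2.5 and an absolute error of ≈ 0.08
```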

Equation (E.8) applies when $f(x)=x^n$; then $|\frac{df}{dx}|=|nx^{n-1}|$, so

$\sigma_f = |nx^{n-1}|\sigma_x$

Once again this formula is easier to use in terms of relative errors, so we divide both sides by $f=x^n$, giving us

$\large\frac{\sigma_f}{f}=|n|\frac{\sigma_x}{x}$
