The division between classical physics and modern physics is largely historical: classical physics, which was largely developed before the beginning of the 20th century, is used where quantum effects and relativity can be neglected. This course will cover all this knowledge in two semesters (so we'll be moving fast!).

| PHY131 | PHY132 |
|---|---|
| Motion | Electricity |
| Fluids | Magnetism |
| Oscillations | Light |
| Waves | |
| Heat | |

Measurements are made with respect to a standard which defines a unit.

- How far/big? [m]
- How much? [kg]
- How much time? [s]

These three quantities are the base SI units for mechanics; all other units in mechanics are derived from them. This makes dimensional analysis a useful test of an equation, especially when calculus is taken into account.
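Dimensional analysis can be carried out mechanically by tracking the exponents of the base units. Here is a minimal sketch (the helper names are illustrative, not standard): each quantity is a tuple of (m, kg, s) exponents, products add exponents, and quotients subtract them.

```python
# A minimal sketch of dimensional analysis: represent each quantity's
# dimensions as exponents of the base units (m, kg, s) and check that
# both sides of an equation agree.

def dims_mul(a, b):
    """Dimensions of a product: add the exponents."""
    return tuple(x + y for x, y in zip(a, b))

def dims_div(a, b):
    """Dimensions of a quotient: subtract the exponents."""
    return tuple(x - y for x, y in zip(a, b))

# (m, kg, s) exponents of the base quantities
LENGTH = (1, 0, 0)
MASS = (0, 1, 0)
TIME = (0, 0, 1)

velocity = dims_div(LENGTH, TIME)        # m/s      -> (1, 0, -1)
acceleration = dims_div(velocity, TIME)  # m/s^2    -> (1, 0, -2)
force = dims_mul(MASS, acceleration)     # kg m/s^2 -> (1, 1, -2)

print(force)  # (1, 1, -2), i.e. the newton
```

Any candidate equation whose two sides produce different exponent tuples cannot be correct, which is what makes this a useful sanity check.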

There are many non-SI units in common usage (*especially in the USA!*). You need to know how to convert from these units to SI units and vice versa.

More on Wikipedia and in the text.
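Conversion to SI is just multiplication by a conversion factor. A small sketch (the factors below are the exact definitions of these units):

```python
# Converting common non-SI units to SI by multiplying with a factor.
MILE_TO_M = 1609.344      # exact: 5280 ft x 0.3048 m/ft
POUND_TO_KG = 0.45359237  # exact by definition

def miles_to_meters(miles):
    return miles * MILE_TO_M

print(miles_to_meters(1.0))             # 1609.344
print(round(60 * MILE_TO_M / 3600, 2))  # 60 mph in m/s -> 26.82
```

Note that the second line also converts hours to seconds, so the chain of factors must be dimensionally consistent.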

| Prefix | Abbreviation | Value |
|---|---|---|
| femto | $f$ | $10^{-15}$ |
| pico | $p$ | $10^{-12}$ |
| nano | $n$ | $10^{-9}$ |
| micro | $\mu$ | $10^{-6}$ |
| milli | $m$ | $10^{-3}$ |
| centi | $c$ | $10^{-2}$ |
| deci | $d$ | $10^{-1}$ |
| kilo | $k$ | $10^{3}$ |
| mega | $M$ | $10^{6}$ |
| giga | $G$ | $10^{9}$ |
| tera | $T$ | $10^{12}$ |

Large and small numbers are best written in scientific notation.

Examples:

$0.000056$ m = $5.6 \times 10^{-5}$ m or $5.6 \times 10^{-2}$ mm.

$795,000$ g = $7.95 \times 10^{5}$ g or $7.95 \times 10^{2}$ kg or $795$ kg.
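Python's "e" format string writes numbers in scientific notation, which gives a quick way to check conversions like the two examples above:

```python
# Scientific notation via Python's "e" format specifier.
sci_small = f"{0.000056:.1e}"  # the first example above
sci_large = f"{795000:.2e}"    # the second example above

print(sci_small)      # 5.6e-05
print(sci_large)      # 7.95e+05
print(795000 / 1000)  # grams to kilograms: 795.0
```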

In general it is not correct to give more significant figures for a number than the precision to which you know it. However, you should not round off numbers too early in a calculation, as this can affect the accuracy of the final answer.

While the words accuracy and precision sound like they refer to similar things, their meanings in physics are slightly different.

Precision refers to how well a quantity can be determined. The determination of this quantity may be the result of multiple independent measurements, which presumably would improve the precision, but when we talk of precision we are **not** considering how the value of the quantity compares to a “known” or “established” value.

Accuracy, on the other hand, does make this comparison. The accuracy of a measurement refers to how well a measured value agrees with a “known” or “established” value.

Examples:

- Precision - I measure the length of an object with a ruler and I am confident that my measured value, in meters, is correct to 3 decimal places. I can now say that my measurement has a precision of 1 mm.
- Accuracy - The object I measured is actually a standard length whose length is known absolutely (because it's a standard!); the difference between my measured value and the known correct value determines the accuracy of my measurement.

The **precision** to which we can measure something is limited by experimental factors, leading to **uncertainty**.

The deviation of a measurement from the “correct” value is termed the **error**, so error is a measure of how inaccurate our results are. There are two general types of error.

- Systematic Errors - An error that is constant from one measurement to another. For example, an incorrectly marked ruler would always measure something as either bigger or smaller than it actually is, by the same amount every time. These errors can be quite difficult to eliminate!
- Random Errors - Random errors in your measurement occur statistically, i.e., they deviate from the correct value in both directions. These can be reduced by repeated measurement.

But here is where it gets confusing… When you estimate the uncertainty of your measurement, as you will do frequently in the lab component of this course, you should consider the possible sources of error that contribute to the uncertainty. That way, if there are large sources of error in your experiment, you will report a large uncertainty, which will not exclude the accurate value of the quantity you are trying to measure.

An easily accessible standard is the official US time kept by NIST. This can be compared to a watch to evaluate the systematic error we should consider if we were to use that watch to time an experiment. This would probably only concern us if we needed to know the actual time at which an event occurred, rather than the difference in time between two events measured with the same watch.

Suppose in the video below I wanted to know the number of cars in the ring, while sitting in the van at the bottom of the screen. One way I could try this is to measure the distance between two cars, and use this (presuming I know the circumference of the ring) to estimate the number of cars in the ring. Depending on when I made the measurement, I would either underestimate or overestimate the number of cars in the ring: the changing density of the cars, both with position and time, leads to a random error.

Think of the round object as an archery target. The archer shoots some number of arrows at it, and each dot shows where one landed. Now think of the “bull's eye” – the larger black dot in the center – as the “true” value of some quantity that's being measured, and think of each arrow-dot as a measurement of that quantity. The problem is that the one doing the measurements does not know the “true” value of the quantity; s/he's trying to determine it experimentally, and this means there must be uncertainty associated with the experimentally determined value. Note that each archery target – we'll call them 1, 2, 3, 4 from left to right – shows a different distribution of arrow-hits/measurements.

When a leading cause of uncertainty in our measurement is random error, we can lower the uncertainty by repeated measurement and averaging. If we can assume that our measurements are governed by a typical statistical distribution, then the standard deviation becomes a useful measure of the spread of our data.

Average of N measurements: $\overline{t}=\frac {\sum t_{i}}{N} $

Deviation, or, how much does an individual measurement differ from the mean value: $t_{i}-\overline{t}$

The standard deviation of a measurement is given by: $\Delta_t=\sqrt{\frac {\sum (t_{i}-\overline{t})^2}{(N-1)}}$

The standard deviation of the mean, or Standard Error is given by: $\Delta_{\overline{t}}=\sqrt{\frac {\sum (t_{i}-\overline{t})^2}{N(N-1)}}$

The standard deviation of the mean gives an estimate of how much the calculated mean value may differ from the true mean value.
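The three formulas above can be computed directly. A short sketch, using made-up timing data purely for illustration:

```python
import math

# Mean, standard deviation (with the N-1 divisor), and standard error
# of the mean, as defined above. The data values are invented.
t = [2.31, 2.35, 2.29, 2.32, 2.38]  # seconds, N = 5
N = len(t)

mean = sum(t) / N                            # t-bar
dev_sq = sum((ti - mean) ** 2 for ti in t)   # sum of squared deviations
std_dev = math.sqrt(dev_sq / (N - 1))        # Delta_t
std_err = math.sqrt(dev_sq / (N * (N - 1)))  # Delta_t-bar

print(round(mean, 3), round(std_dev, 4), round(std_err, 4))
# 2.33 0.0354 0.0158
```

Note that the standard error shrinks like $1/\sqrt{N}$, which is why averaging repeated measurements improves precision.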

Very often we need to make more than one measurement, or manipulate a measured quantity in an equation to find the quantity we really want. In these cases we need to **propagate** uncertainty. If the quantity we want to know is $f(x)$ and there is some variation in the measured quantity $x$ we can deduce the variation in $f(x)$ using calculus.

$\delta f = \frac{df}{dx}\delta x$

We should first square this (as the sign of the variation is not important to us).

$\delta f^{2} = (\frac{df}{dx})^2\delta x^{2}$

and then take the average of our multiple variations

$\langle \delta f^{2} \rangle = (\frac{df}{dx})^2\langle \delta x^{2} \rangle$

Taking the square root gives us the relationship we are looking for

$\sqrt{\langle \delta f^{2} \rangle} = |\frac{df}{dx}|\sqrt{\langle \delta x^{2} \rangle} \rightarrow \Delta_f = |\frac{df}{dx}|\Delta_x$
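The single-variable rule can be checked numerically: for a small uncertainty $\Delta_x$, the change in $f$ should match $|df/dx|\,\Delta_x$. A sketch with $f(x) = x^2$ (values chosen arbitrarily):

```python
# Numerically checking Delta_f = |df/dx| * Delta_x for f(x) = x^2,
# where df/dx = 2x. For small dx the direct change in f should agree.

def f(x):
    return x ** 2

x, dx = 3.0, 0.01           # measured value and its uncertainty
df_rule = abs(2 * x) * dx   # propagation rule: |df/dx| * Delta_x
df_direct = f(x + dx) - f(x)

print(round(df_rule, 4), round(df_direct, 4))  # 0.06 0.0601
```

The small discrepancy is the second-order term the linear (calculus) approximation drops; it vanishes as $\Delta_x \rightarrow 0$.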

If $f(x,y)$ then $\delta f = \frac{\partial f}{\partial x}\delta x + \frac{\partial f}{\partial y}\delta y$

Now $\delta f^{2} = (\frac{\partial f}{\partial x})^2\delta x^2 + (\frac{\partial f}{\partial y})^2\delta y^2 + 2\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\delta x \delta y$

And $\langle \delta f^{2} \rangle = (\frac{\partial f}{\partial x})^2\langle\delta x^2 \rangle + (\frac{\partial f}{\partial y})^2\langle\delta y^2 \rangle + 2\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\langle\delta x \delta y\rangle$

Does the variation in $x$ affect the variation in $y$? If not, we can say that these variables are uncorrelated and that $\langle\delta x \delta y\rangle = 0$, which leads us to

$\Delta_f = \sqrt{(\frac{\partial f}{\partial x})^2\Delta_x^2 + (\frac{\partial f}{\partial y})^2\Delta_y^2}$

From $\Delta_f = |\frac{df}{dx}|\Delta_x$ and $\Delta_f = \sqrt{(\frac{\partial f}{\partial x})^2\Delta_x^2 + (\frac{\partial f}{\partial y})^2\Delta_y^2}$

we can derive some useful rules for common operations you will carry out in the labs. You can find more details about these equations in the error manual.

if $S=aX$ then $\Delta_S=|a|\Delta_X$

if $S=A\pm B$ then $\Delta_S=\sqrt{(\Delta_A)^2+(\Delta_B)^2}$

if $S=A\times B$ **or** $\frac{A}{B}$ then $\large \frac{\Delta_S}{S}=\sqrt{(\frac{\Delta_A}{A})^2+(\frac{\Delta_B}{B})^2}$

if $S=A^n$ then $\large \frac{\Delta_S}{S}=|n|\times \frac{\Delta_A}{A}$
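These rules can be written as short helper functions and cross-checked with a simple Monte Carlo simulation: sample the measured quantities from Gaussians and look at the spread of the result. The function names here are illustrative, and the values are made up.

```python
import math
import random

def add_unc(dA, dB):
    """Uncertainty of S = A + B or S = A - B (addition in quadrature)."""
    return math.sqrt(dA**2 + dB**2)

def mul_rel_unc(A, dA, B, dB):
    """Relative uncertainty of S = A*B or S = A/B."""
    return math.sqrt((dA / A)**2 + (dB / B)**2)

A, dA = 10.0, 0.3
B, dB = 4.0, 0.4

print(round(add_unc(dA, dB), 3))  # 0.5
S = A * B
dS = S * mul_rel_unc(A, dA, B, dB)
print(round(dS, 3))               # 4.176

# Monte Carlo cross-check: sample A and B and look at the spread of the
# product; the sample standard deviation should land near dS.
random.seed(0)
prods = [random.gauss(A, dA) * random.gauss(B, dB) for _ in range(100_000)]
m = sum(prods) / len(prods)
mc = math.sqrt(sum((p - m)**2 for p in prods) / (len(prods) - 1))
print(round(mc, 2))  # typically close to 4.18
```

The Monte Carlo spread comes out very slightly larger than the rule predicts because the quadrature formula is a first-order approximation; for small relative uncertainties the agreement is excellent.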

An accurate estimate of the uncertainty in an experiment is the **only** way to determine whether an experiment is **consistent** or **inconsistent** with a theory.

If the theoretical prediction lies within the estimate of the uncertainty of the experiment then we can say the theory is consistent with the experiment. Another way of stating this would be that the measured value is consistent with the theoretical one if the error is less than the uncertainty. However, if the uncertainty is very large this may be a meaningless statement! If we estimate the uncertainty to be smaller than it really is we may discard a valid theory (and perhaps an important discovery).

Our aim is therefore always to accurately estimate the uncertainty of our results and strive to improve it!