# Lecture 1 - Basic concepts: Measurements, Uncertainty and Units

Physics is perhaps the most fundamental science. Built upon mathematical foundations, it produces models and theories that can explain and predict the way that matter interacts. Within a physics department like ours there are usually scientists studying effects from the very big to the very small, using both high-level theory and cutting-edge experiments.

Physics is at its core a science of measurement. In this lecture we will cover key concepts related to the measurement of physical quantities.

## Video of lecture

The video should play in any browser, but works best in anything that isn't Internet Explorer. If you are having trouble watching the video within the page, you can download the video and play it in QuickTime. Due to a technical failure, the first lecture of Fall 2013 was not recorded; here you can find the 2012 version. Don't use the course codes from this video, as they have changed; use the ones you find in the syllabus.

## Classical Physics

The division between classical physics and modern physics is largely historical: classical physics is used where quantum effects and relativity can be neglected, and was largely developed before the beginning of the 20th century. This course will cover all this knowledge in two semesters (so we'll be moving fast!).

| PHY141       | PHY142      |
|--------------|-------------|
| Motion       | Electricity |
| Fluids       | Magnetism   |
| Oscillations | Light       |
| Waves        |             |
| Heat         |             |

## Units

Measurements are made with respect to a standard which defines a unit.

• How far/big? [m]
• How much? [kg]
• How much time? [s]

These three quantities are the base SI units for mechanics; all other units in mechanics are derived. This makes dimensional analysis a useful test of an equation, especially when calculus is taken into account.
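As a sketch of how a dimensional-analysis check can work (all names and values here are illustrative, not part of the course material), each quantity's dimensions can be represented as a tuple of exponents of the three base units. Multiplying quantities adds exponents; dividing subtracts them. Here we check that work (force times distance) and $mv^2$ share the same dimensions:

```python
# Represent each quantity's dimensions as exponents of the base units (m, kg, s).
M = (1, 0, 0)   # length [m]
KG = (0, 1, 0)  # mass [kg]
S = (0, 0, 1)   # time [s]

def mul(a, b):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    """Dividing quantities subtracts their dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))

velocity = div(M, S)                       # [m/s] -> (1, 0, -1)
energy = mul(KG, mul(velocity, velocity))  # kg m^2 / s^2
force = mul(KG, div(velocity, S))          # kg m / s^2  (F = ma)
work = mul(force, M)                       # force times distance

print(energy == work)  # True: both sides have the same dimensions
```

If an equation's two sides end up with different exponent tuples, the equation cannot be right.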

There are many non-SI units in common usage (especially in the USA!). You need to know how to convert from these units to SI units and vice versa.
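A minimal sketch of such conversions (the helper function names are invented for illustration; the conversion factors are the standard exact definitions):

```python
# Exact conversion factors from common non-SI units to SI base units.
INCH_TO_M = 0.0254
MILE_TO_M = 1609.344
POUND_TO_KG = 0.45359237

def miles_to_meters(miles):
    return miles * MILE_TO_M

def meters_to_miles(meters):
    return meters / MILE_TO_M

print(miles_to_meters(1.0))   # 1609.344
print(meters_to_miles(5000))  # roughly 3.11 miles
```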

## Unit prefixes

More on Wikipedia and in text.

| Prefix | Abbreviation | Value      |
|--------|--------------|------------|
| femto  | $f$          | $10^{-15}$ |
| pico   | $p$          | $10^{-12}$ |
| nano   | $n$          | $10^{-9}$  |
| micro  | $\mu$        | $10^{-6}$  |
| milli  | $m$          | $10^{-3}$  |
| centi  | $c$          | $10^{-2}$  |
| deci   | $d$          | $10^{-1}$  |
| kilo   | $k$          | $10^{3}$   |
| mega   | $M$          | $10^{6}$   |
| giga   | $G$          | $10^{9}$   |
| tera   | $T$          | $10^{12}$  |

## Scientific Notation

Large and small numbers are best written in scientific notation.

Examples:

$0.000056$ m = $5.6 \times 10^{-5}$ m or $5.6 \times 10^{-2}$ mm.

$795,000$ g = $7.95 \times 10^{5}$ g or $7.95 \times 10^{2}$ kg or $795$ kg.

In general it is not correct to give more significant figures for a number than the precision to which you know it. However you should not round off numbers too early in a calculation, as this can affect the accuracy of the final answer.
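The effect of rounding too early is easy to demonstrate. In this invented example, $\frac{1}{3} \times 300$ gives $100$, but rounding $\frac{1}{3}$ to two decimal places first shifts the answer by a full unit:

```python
# Keep extra digits during a calculation and round only at the end:
# rounding too early can shift the final answer.
x = 1 / 3                             # 0.3333...
early = round(x, 2)                   # 0.33 -- rounded too soon
result_early = round(early * 300, 1)  # 99.0
result_late = round(x * 300, 1)       # 100.0
print(result_early, result_late)
```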

## Accuracy and precision

While the words accuracy and precision sound like they refer to similar things their meanings in physics are actually slightly different.

Precision refers to how well a quantity can be determined. The quantity may be determined from multiple independent measurements, which would presumably improve the precision, but when we talk of precision we are not considering how the value of the quantity compares to a “known” or “established” value.

Accuracy, on the other hand, does make this comparison. The accuracy of a measurement refers to how well a measured value agrees with a “known” or “established” value.

Examples:

• Precision - I measure the length of an object with a ruler and I am confident that my measured value, in meters, is correct to 3 decimal places. I can now say that my measurement has a precision of 1 mm.
• Accuracy - The object I measured is actually a standard whose length is known absolutely (because it's a standard!); the difference between my measured value and the known correct value tells me the accuracy of my measurement.

## Error and Uncertainty

The precision to which we can measure something is limited by experimental factors, leading to uncertainty.

The deviation of a measurement from the “correct” value is termed the error, so error is a measurement of how inaccurate our results are. There are two general types of errors.

• Systematic Errors - An error that is constant from one measurement to another. For example, an incorrectly marked ruler would measure something as bigger or smaller than it actually is in the same way every time. These errors can be quite difficult to eliminate!
• Random Errors - Random errors in your measurement occur statistically, i.e., they deviate from the correct value in both directions. These can be reduced by repeated measurement.

But here is where it gets confusing… When you estimate the uncertainty of your measurement, as you will do frequently in the lab component of this course, you should consider the possible sources of error that contribute to the uncertainty. This way, if there are large sources of error in your experiment, you will have a large uncertainty that does not exclude the accurate value of the quantity you are trying to measure.

## Example of systematic error

An easily accessible standard is the official US time kept by NIST. This can be compared to a watch to evaluate the systematic error we should consider if we were to use the watch to time an experiment. This would probably only concern us if we needed to know the actual time at which an event occurred, rather than the difference in time between two events measured with the same watch.

## Example of random error

Suppose in the video below I wanted to know the number of cars in the ring while sitting in the van at the bottom of the screen. One way I could try this is to measure the distance between two cars, and use this (presuming I know the circumference of the ring) to estimate the number of cars in the ring. Depending on when I made the measurement, I would either underestimate or overestimate the number of cars in the ring; the changing density of the cars, both with position and time, leads to a random error.

## Averaging and Standard Deviation

When a leading cause of uncertainty in our measurement is random error, we can lower the uncertainty by repeated measurement and averaging. If we can assume that our measurements are governed by a typical statistical distribution, then the standard deviation becomes a useful measure of the spread of our data.

Average of N measurements: $\overline{t}=\frac {\sum t_{i}}{N}$

Deviation, or, how much does an individual measurement differ from the mean value: $t_{i}-\overline{t}$

Standard deviation: $\sigma_t=\sqrt{\frac {\sum (t_{i}-\overline{t})^2}{(N-1)}}$

Standard deviation of the mean, or Standard Error: $\sigma_{\overline{t}}=\sqrt{\frac {\sum (t_{i}-\overline{t})^2}{N(N-1)}}$
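These formulas can be implemented directly. A minimal sketch in Python, using invented timing data (the `times` values are hypothetical, purely for illustration):

```python
import math

def mean(data):
    """Average of N measurements: sum(t_i) / N."""
    return sum(data) / len(data)

def std_dev(data):
    """Sample standard deviation: sqrt(sum (t_i - tbar)^2 / (N-1))."""
    tbar = mean(data)
    return math.sqrt(sum((t - tbar) ** 2 for t in data) / (len(data) - 1))

def std_error(data):
    """Standard deviation of the mean (standard error): sigma_t / sqrt(N)."""
    return std_dev(data) / math.sqrt(len(data))

# Hypothetical repeated timing measurements, in seconds
times = [2.31, 2.35, 2.29, 2.32, 2.34, 2.30]
print(mean(times), std_dev(times), std_error(times))
```

Note how the standard error shrinks as $1/\sqrt{N}$: taking more measurements improves our knowledge of the mean, which is why averaging helps against random error.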

## Propagation of uncertainty

Very often we need to make more than one measurement, or manipulate a measured quantity in an equation to find the quantity we really want. In these cases we need to propagate uncertainty. If the quantity we want to know is $f(x)$ and there is some variation in the measured quantity $x$ we can deduce the variation in $f(x)$ using calculus.

$\delta f = \frac{df}{dx}\delta x$

We should first square this (as the sign of the variation is not important to us).

$\delta f^{2} = (\frac{df}{dx})^2\delta x^{2}$

and then take the average of our multiple variations

$\langle \delta f^{2} \rangle = (\frac{df}{dx})^2\langle \delta x^{2} \rangle$

Taking the square root gives us the relationship we are looking for

$\sqrt{\langle \delta f^{2} \rangle} = |\frac{df}{dx}|\sqrt{\langle \delta x^{2} \rangle} \rightarrow \sigma_f = |\frac{df}{dx}|\sigma_x$
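As a hypothetical illustration of this result (the numbers are invented): the area of a square plate computed from a measured side length, $f(L)=L^2$, so $\frac{df}{dL}=2L$:

```python
def propagate(dfdx, sigma_x):
    """Single-variable propagation: sigma_f = |df/dx| * sigma_x."""
    return abs(dfdx) * sigma_x

# Invented measurement: side length of a square plate, in meters
L, sigma_L = 2.50, 0.01

area = L ** 2
sigma_area = propagate(2 * L, sigma_L)  # df/dL = 2L evaluated at L
print(area, "+/-", sigma_area)          # 6.25 +/- 0.05 m^2
```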

## Propagation with two variables

If $f(x,y)$ then $\delta f = \frac{\partial f}{\partial x}\delta x + \frac{\partial f}{\partial y}\delta y$

Now $\delta f^{2} = (\frac{\partial f}{\partial x})^2\delta x^2 + (\frac{\partial f}{\partial y})^2\delta y^2 + 2\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\delta x \delta y$

And $\langle \delta f^{2} \rangle = (\frac{\partial f}{\partial x})^2\langle\delta x^2 \rangle + (\frac{\partial f}{\partial y})^2\langle\delta y^2 \rangle + 2\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\langle\delta x \delta y\rangle$

Does the variation in $x$ affect the variation in $y$? If not we can say that these variables are uncorrelated and that $\langle\delta x \delta y\rangle = 0$, which leads us to

$\sigma_f = \sqrt{(\frac{\partial f}{\partial x})^2\sigma_x^2 + (\frac{\partial f}{\partial y})^2\sigma_y^2}$
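A hypothetical numerical example for the uncorrelated case (values invented): the area of a rectangle, $f(x,y)=xy$, with partial derivatives $\frac{\partial f}{\partial x}=y$ and $\frac{\partial f}{\partial y}=x$:

```python
import math

def propagate2(dfdx, dfdy, sigma_x, sigma_y):
    """Uncorrelated two-variable propagation:
    sigma_f = sqrt((df/dx)^2 sigma_x^2 + (df/dy)^2 sigma_y^2)."""
    return math.sqrt((dfdx * sigma_x) ** 2 + (dfdy * sigma_y) ** 2)

# Invented measurements of a rectangle's sides, in meters
x, sigma_x = 3.0, 0.1
y, sigma_y = 4.0, 0.2

area = x * y
sigma_area = propagate2(y, x, sigma_x, sigma_y)  # df/dx = y, df/dy = x
print(area, "+/-", sigma_area)
```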

## Some propagation rules

These are useful rules to remember that follow from the calculus above (derivations are in the error manual).

if $S=A\pm B$ then $\sigma_S=\sqrt{((\sigma_A)^2+(\sigma_B)^2)}$

if $S=A\times B$ or $\frac{A}{B}$ then $\large \frac{\sigma_S}{S}=\sqrt{((\frac{\sigma_A}{A})^2+(\frac{\sigma_B}{B})^2)}$

if $S=A^n$ then $\large \frac{\sigma_S}{S}=|n|\times \frac{\sigma_A}{A}$
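As a quick sanity check (with invented numbers), the power rule can be compared against the general single-variable result $\sigma_S=|\frac{dS}{dA}|\sigma_A$ for $S=A^n$, where $\frac{dS}{dA}=nA^{n-1}$:

```python
# Invented measurement: S = A^3 with A = 2.0 +/- 0.05
A, sigma_A = 2.0, 0.05
n = 3
S = A ** n

# Power rule: sigma_S / S = |n| * sigma_A / A
sigma_S_rule = S * abs(n) * sigma_A / A
# General single-variable propagation: sigma_S = |dS/dA| * sigma_A
sigma_S_calc = abs(n * A ** (n - 1)) * sigma_A

print(sigma_S_rule, sigma_S_calc)  # the two agree
```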

## Why are error and uncertainty so important?

An accurate estimate of the uncertainty in an experiment is the only way to determine whether an experiment is consistent or inconsistent with a theory.

If the theoretical prediction lies within the estimate of the uncertainty of the experiment then we can say the theory is consistent with the experiment. Another way of stating this would be that the measured value is consistent with the theoretical one if the error is less than the uncertainty. However, if the uncertainty is very large then this may be a meaningless statement! If we estimate the uncertainty to be smaller than it really is we may discard a valid theory (and perhaps an important discovery).

Our aim is therefore always to accurately estimate the uncertainty of our results and strive to improve it!