6/25/2018 Book | Kognity

Subtopic 1.2

Uncertainties and errors

1.2.0 The big picture
1.2.1 Uncertainties and errors
1.2.2 Quantifying uncertainties
1.2.3 Combining uncertainties
1.2.4 Displaying uncertainties
1.2.5 Checklist

The big picture


Essential idea: Scientists aim towards developing experiments that can
give a true value from their measurements. However, due to the limited
precision in measuring devices, they often quote their results with some
form of uncertainty.

What is uncertainty? How do we know what is uncertain? How can we know how
uncertain something is? Is there a difference between the uncertainty in the
measurement we make and the uncertainty between our measurement and the
true value?

All measurements and calculations are uncertain. The best instruments in the
world are imperfect; they have limitations. The way we use the instruments can
affect what we are measuring, and the quantities themselves often have a
natural variation, like a cork bobbing in a sea of noise.

Ultimately, uncertainty is our estimate of the range and bounds of our
measurement precision and how confident we are that what we measured lies
within those bounds. Accuracy is a measure of a measurement: it is the difference
between our own measured value and the generally accepted value of the same
thing. For example, we might measure the index of refraction for glass, or the
https://xsph.kognity.com/schoolstaff/app/physics-sl-fe-2016/book/measurement-uncertainties/uncertainties-errors/the-big-picture/?source=Table%20o… 1/12
gravitational field near the Earth, and compare our values to the published or
true values. This does not apply to the original measurement of a value because
we have no existing true value against which to compare it. Accuracy is most
commonly used in school laboratories as a method to judge the experimental
design or skill of students.

Nature of Science

All scientific knowledge is uncertain... And it is of paramount
importance, in order to make progress, that we recognize this
ignorance and this doubt. Because we have the doubt, we then propose
looking in new directions for new ideas.

Feynman, Richard P. 1998. The Meaning of It All: Thoughts of a
Citizen-Scientist. Reading, Massachusetts, USA. Perseus. p. 13

In this subtopic, you will learn how to mathematically and
diagrammatically represent uncertainty, which is not a measure of
your own ignorance, but rather a measure of your confidence in
the instrument and methodology used to collect data.

In Figure 1 the darts represent our measurement values and the target centre
represents the true value. Tight clusters of darts suggest that our measurement
precision is high and our uncertainty is low. The closer a dart is to the centre, the
more accurate our measurement is. In each of the four situations, try to explain
why the precision and accuracy are high or low.


Figure 1. Uncertainty (precision) and accuracy.

In this section we explore how we quantify uncertainty, how we process
uncertainties when manipulating data – in a sense, constructing a chain of
evidence – and how we represent them on graphs.

Uncertainties and errors


In a well-designed, carefully-executed experiment the measurements made will
have limits of precision that depend on the instruments used – the closeness of
the divisions of their scales (and our ability to interpolate between gradations) or
the digits available on digital displays – but will also depend on the nature of
random variations in values that often have nothing to do with the
instruments. For example, when we study the sum of the numbers appearing on
the faces of dice, or map the air temperature and currents in a room, all the
measurements will have some level of randomness.

Experience and mathematics tell us that each measurement is an attempt to
measure some true value, but that owing to the effects mentioned in the last
paragraph, the value that we measure may change by some amount each time we
measure it. Better apparatus and techniques may help, but there will always be
some random variations in the measured values. This is (generally) not the fault
of a careful experimenter, or the equipment. What is important to remember is
that what we intend to measure is close to what we actually measure.

As we make a large number of measurements of a value, we build up a
distribution of values; some appear much more frequently than others, and
some with quite different values may only be seen once or twice. A long history
of experimental work and mathematical probability theory predicts that the
distribution of measured values takes on a characteristic symmetrical shape
called a Gaussian (named after Carl Friedrich Gauss, a famous German
mathematician of the late 18th and early 19th century). The Gaussian is a
normal, or bell, curve that is clustered around a central maximum, as shown in
Figure 1. This curve provides a theoretical description of the shape of the
distribution of measured values. Values near the mean (or average) μ are the
most probable measured values, and the mean is calculated exactly as you would
expect: the sum of the measured values divided by the number of measured values.
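In symbols, the mean of N measured values x₁, x₂, …, x_N is:

```latex
\mu = \frac{1}{N}\sum_{i=1}^{N} x_i
```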

Figure 1. The normal (Gaussian, bell) curve showing the mean μ, and the
standard deviation σ.

The width of the curve is determined by the dispersion (spread) in measured
values, as calculated by the standard deviation σ, which is a measure of the
deviation of the measurements around the mean. σ sounds complicated, but
think of it as an average of how much each measurement differs from the mean
value. The mathematics shows that about 68% of the measured values lie within
one standard deviation of the mean and that about 95% of the measured values
lie within two standard deviations of the mean value. The mean is a measure of
the value around which the data centre (more related to accuracy). The standard
deviation is a measure of the spread (more related to uncertainty).

Definition

Standard deviation is a measure of the deviation of the
measurements around the mean.
However, in school, since we are often more interested in the experimental
approach to a measurement, it is rare that we would spend time collecting the
large number of values that would lead to a normal curve. Most experiments
require many fewer measurements, and we use what we have to estimate an
experimental value. When we make our first measurement of a quantity, we assume
that it is, after only one measurement, our best estimate. It turns out that if we
draw a best-fit line through graphical data, it helps us recognise the best estimate
of a relation between variables even if we have not made large numbers of
measurements for each data point.

If we were focused on seeing the variation in a value, then we would create a
histogram to represent the distribution of measured values. We divide the entire
range of measured values that we have seen into a series of equal intervals called
bins, and we count how many measured values fall into each interval. We place
each bin with its value marked on the horizontal axis, and we plot the number of
measurements that fell into the bin on the vertical axis. You might have seen
such a graph in a spreadsheet program. An example of a histogram is shown in
Figure 2. In the histogram shown, the values measured were whole numbers, but
they could have been bins of width 1. There can be no gaps between bins. The
shape of the histogram is somewhat bell-shaped, and we would not expect it to
be a perfect reproduction of the theoretical shape. As we continue to measure
values we would expect that the histogram would increasingly resemble the
normal curve.
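The binning procedure described above can be sketched in a few lines (the measured values here are invented whole numbers, as in the text's example):

```python
from collections import Counter

# Invented measured values (whole numbers, as in the text's example).
measurements = [5, 6, 6, 7, 7, 7, 7, 8, 8, 8, 9, 9, 10, 4, 7, 8, 6, 7, 9, 11]

bin_width = 1
# Assign each measurement to a bin: [4, 5) -> 4, [5, 6) -> 5, and so on.
bins = Counter((x // bin_width) * bin_width for x in measurements)

# Print every bin over the full range -- no gaps between bins.
for edge in range(min(bins), max(bins) + bin_width, bin_width):
    print(f"{edge:>3} | {'#' * bins.get(edge, 0)}")
```

The `#` bars are a text stand-in for the column heights a spreadsheet would draw; printing every edge in the range, even empty ones, enforces the no-gaps rule.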


Figure 2. A typical histogram with a bin width of 1 unit: mean = 7.3, standard
deviation = 6.3.

As increasingly more measurements are recorded, the distribution of measured
values begins to take on the characteristic symmetrical normal (or Gaussian, or
bell) shape, where the peak position represents the most likely true value that we
are trying to measure, and therefore our best estimate of it. The width of the
distribution is an indicator of the random variations in the measured values that
we have recorded. As noted for the normal curve, the peak position in a
histogram should correspond to the mean value of all the measurements; but,
unless the data set is large, it will not always do so, as shown here.

The standard deviation (calculated as noted above) is a value that relates to the
width of the distribution and is quite wide in our example. The appearance of
this histogram might then encourage us to collect more data to improve our
confidence that the mean and standard deviation are valid. There are other
statistical tests that can help us with that.

As we accumulate repeated measurements we begin to get not only our best
estimate of the true value that underlies the data, but also an indicator
of our uncertainty regarding the true value (it is 68% likely to be within one
standard deviation of the mean). This process does not eliminate random
uncertainty, but it does give us a set of statistical boundaries. In other words,
knowing the standard deviation tells us that 95% of the measurements lie
within two standard deviations of the mean.

You will not be examined on these details. Modern scientific calculators that
you use in mathematics contain functions that will calculate mean and standard
deviation, but the focus of this digression is to explain and demonstrate the
existence of uncertainty, and to show how we can determine the most likely value
of our measurements and judge our confidence in it.

However, there are situations where the experimental equipment or procedure
can affect measured values in a way that is not random.

Imagine an experiment to determine how long it takes a leaf to fall from rest (in
its tree) through certain distances (see Figure 3). We will focus on fall time and
fall height, ignoring surface area and mass.

Figure 3. Does the time it takes a leaf to fall depend upon the tree it falls from?

We draw up a table, arrange that a leaf will fall from rest (by some means),
measure the fall times with a stopwatch, enter them into the table, and take
averages of three trials, as shown in Table 1. Taking an average of three trial
measurements should improve our confidence that the average value removes
some of the random variation that we see in the results and takes us closer to the
actual value than any one measurement could. However, three trials is too few to
calculate a trustworthy distribution and standard deviation. Notice that we have
included units and a measurement uncertainty (for example, ± 0.8 s) for most
columns. We think that an individual time measurement is in error by no more
than 0.8 s. This value is based on our tests of using a stopwatch, where reaction
time and visual judgement combine. We believe that repeating three
measurement trials will improve our estimate of the fall time, but we do not yet
know how to determine its uncertainty.

In the case of the fall height we could only measure to the nearest 10 cm. A
measurement of 8.0 m may actually be anywhere between 7.95 m and 8.05 m.
We will set our uncertainty to 5 cm so that the measurement value and its
uncertainty have the same number of decimal digits. We will return to this issue
shortly.

Table 1. Time taken, in s, for a leaf to fall from rest from a given height, in m, to the
ground.

Height /    First Trial /   Second Trial /   Third Trial /   Average of Trials /
m ± 0.1 m   s ± 0.8 s       s ± 0.8 s        s ± 0.8 s       s ± ? s

18.00       26.5            26.9             25.0            26.1
15.00       22.5            21.8             21.6            22.0
10.00       15.3            15.3             15.9            15.5
 8.00       13.4            12.8             12.8            13.0
 5.00        8.9             9.2              9.1             9.1
 3.00        6.2             6.6              6.1             6.30
 1.00        3.6             3.5              3.7             3.6

This table is not yet in a proper format and we will improve this in the next
section.
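The trial averages in Table 1 can be reproduced with a short script (a sketch; the numbers are copied from the table above):

```python
# Fall-time trials from Table 1: height / m -> three measured times / s.
trials = {
    18.00: (26.5, 26.9, 25.0),
    15.00: (22.5, 21.8, 21.6),
    10.00: (15.3, 15.3, 15.9),
    8.00:  (13.4, 12.8, 12.8),
    5.00:  (8.9, 9.2, 9.1),
    3.00:  (6.2, 6.6, 6.1),
    1.00:  (3.6, 3.5, 3.7),
}

# Average of the three trials, rounded to the 0.1 s resolution of the data.
averages = {h: round(sum(t) / len(t), 1) for h, t in trials.items()}

for h, avg in averages.items():
    print(f"{h:>5} m : {avg} s")
```

Note that rounding each average to one decimal place matches the resolution of the raw times, which is one reason the table's 6.30 entry is flagged as improper formatting.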

When we graph the data we get results as shown in Figure 4. We will return to
this graph in the final section.


Figure 4. Average fall time in s for a leaf as a function of fall height in m.

We see that the trend of the data looks like a fairly straight line, with the points
scattered around in the shape of a line. The line, which we have not yet drawn, is
known as a best-fit line, which we will discuss later.

The scattering of data around the overall trend is called statistical variation
or random error. From our exploration above we know that it is not an error (not
a mistake we made), but rather the natural variation of measured values. Some
variation results from over-estimating a measurement and some from under-
estimating it. Taking an average produces a value that should be closer to the
true value, but with only three trials we do not expect to have eliminated
variation. The relation between fall time and release height is what we want to
find.

We do notice something curious about the pattern of the graph, though we have
not yet drawn a straight line through the data. A leaf lying on the ground should
take no time to fall to the ground. We would expect the pattern of the graphed
data to pass through (0, 0). This graph does not. All the data points seem to fall
along a line, but the line is shifted vertically and does not pass through (0, 0).
The usual name for such a spurious, but self-consistent, set of results is
systematic error. This is not random variation but an error in the apparatus or
procedure. Here, a constant amount seems to have been added to each
measurement. If the amount had been random it would not have been constant
and would likely have averaged out to zero, although more than three
measurements would likely be needed.
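The vertical shift can be made concrete with a simple least-squares fit to the averaged data from Table 1 (a sketch using only plain Python; the fitted values are approximate):

```python
# Averaged fall times from Table 1: (height / m, average time / s).
data = [(1.00, 3.6), (3.00, 6.3), (5.00, 9.1), (8.00, 13.0),
        (10.00, 15.5), (15.00, 22.0), (18.00, 26.1)]

n = len(data)
mean_h = sum(h for h, _ in data) / n
mean_t = sum(t for _, t in data) / n

# Ordinary least-squares fit of the line  t = slope * h + intercept.
slope = (sum((h - mean_h) * (t - mean_t) for h, t in data)
         / sum((h - mean_h) ** 2 for h, _ in data))
intercept = mean_t - slope * mean_h

print(f"slope = {slope:.2f} s/m, intercept = {intercept:.2f} s")
# A non-zero intercept (about +2.4 s for these data) is the signature of a
# constant offset added to every measurement: a systematic error.
```

A fit through data free of systematic error would give an intercept close to zero, so the size of the intercept is a direct estimate of the constant offset.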

A systematic error may have arisen from a stopwatch being started early for
some reason – the anticipation of reaction time, perhaps – so it showed too large
a reading. If that were the case we still might have expected some random
variation by the time recorder. Alternatively, what if the height measurements
were incorrect by a constant amount? Were they too large or too
small? Systematic errors in an experiment are detected by their consistent nature
across a data set.
1.2.4 ()
When measurements are recorded with a systematic error, a graph of the results
will often alert us to the existence of a problem. We must then consider how the
data were affected: were they made consistently greater or smaller? This can help
us identify the source of the error. We can then carefully check, zero, and
calibrate instruments, and refine the way measurements are taken.

For example, we may check the stopwatch over a long period of time against a
clock known to be reliable. Perhaps we need to locate ourselves on a ladder so
that we can better see both the start and end of the fall, and this would also
reduce parallax error (https://en.wikipedia.org/wiki/Parallax). Would a video
camera work better? Maybe we are simply tired and our reaction times are
slightly slower than normal. The above data highlight poor experimental design
or measurement technique.

Occasionally, measurements are made that seem completely unpredictable and
are not consistent with other data. These unusual values are called outliers, and
they are often inexplicable. There may have been a glitch with the apparatus or a
momentary distraction of the experimenter. If repeating a measurement under
the same conditions produces a more consistent value, then we may leave the
unexpected data point in place, annotate it as an outlier, and exclude it from
further analyses. It is a dangerous practice to eliminate data, but it can be
justified. If the oddball measurement is reproducible, then we have to redirect
additional effort to discover the cause of the problem, or accept it as real but
unexplained.
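One common (though not the only) way to flag a candidate outlier is to ask how many standard deviations it sits from the mean. This sketch uses an invented data set with one suspicious reading, and annotates rather than silently deletes:

```python
import statistics

# Invented repeated measurements with one suspicious reading (12.4).
values = [9.7, 9.8, 9.9, 9.8, 9.7, 9.8, 12.4, 9.9, 9.8, 9.7]

mu = statistics.mean(values)
sigma = statistics.stdev(values)

# Flag (do not delete) anything beyond 2 standard deviations of the mean.
outliers = [v for v in values if abs(v - mu) > 2 * sigma]
kept = [v for v in values if abs(v - mu) <= 2 * sigma]

print(f"flagged as outliers: {outliers}")
print(f"mean without outliers: {statistics.mean(kept):.2f}")
```

The 2-sigma threshold here is an illustrative choice, not a universal rule; with only a handful of measurements, any such test should be used cautiously.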
()
Statistical variations (random errors) can never be avoided, but only reduced by
careful experimental design and procedure. We reduce these random errors as
1.2 ()
much as possible to improve precision. Systematic errors are our fault; we must
1.2.0 () check for them and take corrective action if they are found. Systematic errors
affect accuracy and the best way to ensure that the experiment is going as
1.2.1 ()
planned is to analyse and plot data as it proceeds. Do not leave the laboratory –
1.2.2 () wherever that may be – until you are certain that the preliminary data analysis is
free of defects that can be checked and corrected then and there.
1.2.3 ()

Definition

Precision describes the variation in a measurement done repeatedly
with the same device. A small variation indicates high precision.

Accuracy describes the difference between a measurement and
the true value. A small difference indicates high accuracy.

Definition

Statistical variations, or random errors:

represent the natural variation of measured values
can sometimes be reduced but cannot be avoided
affect precision.

Systematic errors:

are the result of an error in experimental design or procedure
affect accuracy.

Example 1

In Figure 5 the centre of a target represents a true value that we are trying to
measure. The arrow locations represent the results of our repeated attempts at
measuring this value.
()

1.2 ()

1.2.0 ()

1.2.1 ()

1.2.2 ()

1.2.3 ()

1.2.4 () Figure 5. Accuracy and precision.

1.2.5 ()
Complete the blanks in the following sentences:

Target _ represents high accuracy with high precision in measurement.
Target _ represents high accuracy with low precision in measurement.
Target _ represents low accuracy with high precision in measurement.


In the next sections we explore how to estimate uncertainties and propagate
them through calculations.

2018 © Kognity

