
CHAPTER 1

1. Introduction
In surveying nothing is ever absolutely certain. The product of surveying may be thought of as having two parts: the derivation of the desired quantities, such as coordinates of, or distances between, points, and the assessment and management of the uncertainty of those quantities. In other words, not only must the survey results be produced, but there should also be numerical statements of the quality of the results for them to be meaningful.
Measurements are defined as observations made to determine unknown quantities. They
may be classified as either direct or indirect.
Direct measurements are made by applying an instrument directly to the unknown quantity
and observing its value, usually by reading it directly from graduated scales on the device.
Determining the distance between two points by making a direct measurement using a
graduated tape, or measuring an angle by making a direct observation from the graduated
circle of a theodolite or total station instrument, are examples of direct measurements.
Indirect measurements are obtained when it is not possible or practical to make direct
measurements. In such cases the quantity desired is determined from its mathematical
relationship to direct measurements. Surveyors may, for example, measure angles and lengths
of lines between points directly and use these measurements to compute station coordinates.
From these coordinate values, other distances and angles that were not measured directly
may be derived indirectly by computation. During this procedure, the errors that were present
in the original direct observations are propagated (distributed) by the computational process
into the indirect values. Thus, the indirect measurements (computed station coordinates,
distances, and angles) contain errors that are functions of the original errors. This distribution
of errors is known as error propagation.
It can be stated unconditionally that

No measurement is exact,
Every measurement contains errors,
The true value of a measurement is never known, and
The exact sizes of the errors present are always unknown.


These facts can be illustrated by the following. If an angle is measured with a scale divided into
degrees, its value can be read only to perhaps the nearest tenth of a degree. If a better scale
graduated in minutes were available and read under magnification, however, the same angle
might be estimated to tenths of a minute. With a scale graduated in seconds, a reading to the
nearest tenth of a second might be possible. From the foregoing it should be clear that no matter
how well the observation is taken, a better one may be possible.

Obviously, in this example, observational accuracy depends on the division size of the scale. But accuracy depends on many other factors, including the overall reliability and refinement of the equipment used, environmental conditions that exist when the observations are taken, and human limitations (e.g., the ability to estimate fractions of a scale division). As better equipment is developed, environmental conditions improve, and observer ability increases, observations will approach their true values more closely, but they can never be exact. By definition, an error is the difference between a measured value for any quantity and its true value, or

$$\varepsilon = y - \mu$$

where $\varepsilon$ is the error in an observation, $y$ is the measured value, and $\mu$ is its true value.


From the discussion thus far, it can be stated with absolute certainty that all measured values contain
errors, whether due to lack of refinement in readings, instabilities in environmental conditions,
instrumental imperfections, or human limitations. Some of these errors result from physical conditions
that cause them to occur in a systematic way, whereas others occur with apparent randomness.
Accordingly, errors are classified as either systematic or random. But before defining systematic and random errors, it is helpful to define mistakes.

1. Mistakes are caused by confusion or by an observer's carelessness. They are not classified as errors and must be removed from any observation or set of observations before the data are used. Examples of mistakes include (a) forgetting to set the proper parts per million (ppm) or prism (reflector) correction on an EDM instrument, or failing to read the correct air temperature, (b) mistakes in reading graduated scales, and (c) blunders in recording (i.e., writing down 27.55 for 25.75).

Mistakes are also known as blunders or gross errors.

2. Systematic errors. These errors follow some physical law, and thus they can be predicted. Some systematic errors are removed by following correct measurement procedures (e.g., balancing backsight and foresight distances in differential leveling to compensate for Earth curvature and refraction). Others are removed by deriving corrections based on the physical conditions that were responsible for their creation (e.g., applying a computed correction for Earth curvature and refraction to a trigonometric leveling observation). Additional examples of systematic errors are:

temperature not being standard while taping,
an index error of the vertical circle of a theodolite or total station instrument, and
use of a level rod that is not of standard length.

Corrections for systematic errors can be computed and applied to observations to eliminate their effects. Systematic errors are also known as biases.
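As an illustration of how a systematic error can be computed and removed, the sketch below applies the standard temperature correction for a steel tape, C_t = k(T − T_s)L, where k is the coefficient of thermal expansion, T the field temperature, T_s the standardization temperature, and L the measured length. This is only a minimal Python sketch; the numerical values are assumptions made for the illustration, not data from this text.

```python
# Minimal sketch: removing a systematic taping error caused by temperature.
# All numbers below are illustrative assumptions.

def tape_temperature_correction(length_m, field_temp_c, std_temp_c=20.0,
                                k_per_deg_c=1.16e-5):
    """Correction C_t = k * (T - T_s) * L for a steel tape."""
    return k_per_deg_c * (field_temp_c - std_temp_c) * length_m

measured = 99.994          # taped distance in meters (assumed)
field_temperature = 31.0   # degrees Celsius at the time of taping (assumed)

correction = tape_temperature_correction(measured, field_temperature)
corrected = measured + correction
print(f"correction = {correction:+.4f} m, corrected length = {corrected:.4f} m")
```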


3. Random errors are the errors that remain after all mistakes and systematic errors have been removed from the measured values. In general, they are the result of human and instrument imperfections. They are generally small and are as likely to be negative as positive. They usually do not follow any physical law and therefore must be dealt with according to the mathematical laws of probability. Examples of random errors are:

imperfect centering over a point during distance measurement with an EDM instrument,
the bubble not being centered at the instant a level rod is read, and
small errors in reading graduated scales.

It is impossible to avoid random errors in measurements entirely. Although they are often called accidental errors, their occurrence should not be considered an accident.
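Since random errors are treated with the laws of probability, a quick way to see their behavior is to simulate them. The sketch below is a hedged illustration, not part of the original text: it draws normally distributed reading errors with an assumed standard deviation and shows that their mean approaches zero as more observations are averaged.

```python
# Minimal sketch: random errors tend to cancel when many observations are averaged.
# The standard deviation of 1.5 seconds is an assumed value for illustration.
import random
import statistics

random.seed(1)
sigma = 1.5  # assumed standard deviation of a single angle reading, in seconds

for n in (5, 50, 500):
    errors = [random.gauss(0.0, sigma) for _ in range(n)]
    print(f"n = {n:3d}  mean error = {statistics.mean(errors):+.3f}  "
          f"std dev = {statistics.stdev(errors):.3f}")
```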
2. PRECISION and ACCURACY
Due to errors, repeated observation of the same quantity will often yield different values.
Discrepancy is defined as the algebraic difference between two observations of the same quantity.
When small discrepancies exist between repeated observations, it is generally believed that only small
errors exist. Thus, the tendency is to give higher credibility to such data and to call the observations
precise. However, precise values are not necessarily accurate values.
To help understand the difference between precision and accuracy, the following definitions are given:
Precision is the degree of consistency between observations, based on the sizes of the discrepancies in a data set. The degree of precision attainable depends on the stability of the environment during the time of measurement, the quality of the equipment used to make the observations, and the observer's skill with the equipment and observational procedures.
Accuracy is the measure of the absolute nearness of a measured quantity to its true value. Since the
true value of a quantity can never be determined, accuracy is always an unknown.
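To make the distinction concrete, the hedged sketch below compares two assumed sets of repeated distance observations against a reference value: one set is precise but not accurate (small spread, offset from the reference), the other is closer on average but less precise. The reference value and the readings are invented for the illustration only.

```python
# Minimal sketch: precision (spread of a data set) versus accuracy (nearness
# to a reference value). All numbers are assumed for illustration.
import statistics

reference = 100.000  # assumed reference distance in meters

set_a = [100.043, 100.045, 100.044, 100.046, 100.044]  # precise, not accurate
set_b = [100.005, 99.992, 100.011, 99.996, 100.002]    # accurate, less precise

for name, data in (("A", set_a), ("B", set_b)):
    mean = statistics.mean(data)
    spread = statistics.stdev(data)   # indicator of precision
    offset = mean - reference         # indicator of (in)accuracy
    print(f"set {name}: mean = {mean:.3f}  std dev = {spread:.4f}  "
          f"offset from reference = {offset:+.3f}")
```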


WHY IDENTIFYING ERRORS IS NEEDED


To know the degree of precision of a surveying measurement or data set,
To assess the accuracy (absolute nearness to the true value) of a data set,
To decide on the rejection of doubtful observations, and
To estimate the error in a set of observations.
PROPERTIES OF THE NORMAL DISTRIBUTION CURVE
The normal or Gaussian distribution is the most important of all types of distribution. The quantity $f(x)\,dx$ is the probability of occurrence of an error of size between $x$ and $x + dx$, where $dx$ is an infinitesimally small value. The error's probability is equivalent to the area under the curve between the limits $x$ and $x + dx$, which is shown crosshatched in Figure 3.3. As stated previously, the total area under the probability curve represents the total probability, which is 1. This is represented in equation form as

$$\int_{-\infty}^{\infty} f(x)\,dx = 1$$
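As a quick numerical check of this property, the hedged sketch below evaluates the normal density over a wide interval with a simple midpoint sum; the result is very close to 1. The choice of sigma = 1 and the integration limits are assumptions made only for the illustration.

```python
# Minimal sketch: the area under the normal probability curve is (very nearly) 1.
# sigma = 1 and the +/- 10 sigma limits are assumptions for the illustration.
import math

def normal_pdf(x, sigma=1.0):
    return math.exp(-x * x / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

n_steps = 200_000
lo, hi = -10.0, 10.0
dx = (hi - lo) / n_steps
area = sum(normal_pdf(lo + (i + 0.5) * dx) for i in range(n_steps)) * dx
print(f"total area under the curve = {area:.6f}")  # approximately 1.000000
```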

Since the true value of a measured quantity is never known, the true error is also never known. Instead, an estimate called the most probable value (MPV) is used; it is the value, derived from the observations, that lies closest to the true value.


The difference between a measured quantity and its most probable value is called the residual error (v), or variation.
Therefore, in the above equation, the curve can be expressed in terms of the residuals as

$$y = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-v^2/2\sigma^2}$$

where y is the frequency of occurrence of a residual of size v, v is the residual, e is the base of the natural logarithm, and σ is the standard deviation,

$$\sigma = \sqrt{\frac{\sum v^2}{n-1}} = \sqrt{\frac{\sum (v_i - \bar{v})^2}{n-1}}$$

where n is the number of observations.

The standard deviation (σ) is a numerical value that indicates the amount of variation about the central value, and it establishes the error bounds within which approximately 68.3% of the values of the set should fall:

$$\sigma = \sqrt{\frac{\sum (v_i - \bar{v})^2}{n-1}}$$

where $v_i$ is an individual measurement and $\bar{v}$ is the mean value of the observations.
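A hedged numerical sketch of these formulas is given below: it computes the most probable value (the mean), the residuals, and the standard deviation with the (n − 1) divisor for a small set of distance observations invented only for the illustration.

```python
# Minimal sketch: most probable value, residuals, and standard deviation.
# The observations are assumed values used only for illustration.
import math

observations = [52.421, 52.425, 52.419, 52.423, 52.422, 52.424]  # meters (assumed)

n = len(observations)
mpv = sum(observations) / n                  # most probable value (mean)
residuals = [y - mpv for y in observations]  # difference from the MPV
sigma = math.sqrt(sum(v * v for v in residuals) / (n - 1))

print(f"MPV = {mpv:.4f} m")
print("residuals:", [f"{v:+.4f}" for v in residuals])
print(f"standard deviation = {sigma:.4f} m")
```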

CHAPTER 2
2. SURVEYING DATA OBSERVATIONS AND THEIR ANALYSIS


Surveying data can also be presented in numerical form and be subjected to numerical analysis. As a simple example, instead of using a bar chart, the daily high temperatures could be tabulated and the mean computed. In surveying, observational data can also be represented and analyzed either graphically or numerically. In this chapter some rudimentary methods for doing so are explored.
2.1 SAMPLE and POPULATION
Due to time and financial constraints, generally only a small data sample is collected from a much larger,
possibly infinite population. For example, political parties may wish to know the percentage of voters
who support their candidate. It would be prohibitively expensive to query the entire voting population
to obtain the desired information. Instead, polling agencies select a sample subset of voters from the
voting population. This is an example of population sampling.
A population consists of all possible measurements that can be made on a particular item or procedure. Often, a population has an infinite number of data elements.
A sample is a subset of data selected from the population.
2.2 RANGE AND MEDIAN
Suppose that a 1-second (1") micrometer theodolite is used to read a direction 50 times. The
seconds portions of the readings are shown in Table 2.1. These readings constitute what is called a
data set. How can these data be organized to make them more meaningful? How can one answer
the question: Are the data representative of readings that should reasonably be expected with this
instrument and a competent operator? What statistical tools can be used to represent and analyze
this data set?
One quick numerical method used to analyze data is to compute its range, also called dispersion. A
range is the difference between the highest and lowest values. It provides an indication of the
precision of the data. From Table 2.1, the lowest value is 20.1 and the highest is 26.1. Thus, the
range is 26.1 − 20.1, or 6.0. The range for this data set can be compared with the ranges of other sets, but
this comparison has little value when the two sets differ in size.

For instance, would a set of 100 data points with a range of 8.5 be better than the set in Table 2.1?
Clearly, other methods of analyzing data sets statistically would be useful. To assist in analyzing data,
it is often helpful to list the values in order of increasing size. This was done with the data of Table
2.1 to produce the results shown in Table 2.2. By looking at this ordered set, it is possible to determine quickly the data's middle value, or midpoint. In this example it lies between the values of 23.4 and 23.5. The midpoint value is also known as the median.
Since there are an even number of values in this example, the median is given by the average of the
two values closest to (which straddle) the midpoint.


That is, the median is assigned the average of the 25th and 26th entries in the ordered set of 50 values, and thus for the data set of Table 2.2, the median is the average of 23.4 and 23.5, or 23.45.
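The hedged sketch below reproduces these two computations, range and median, for an ordered data set; a short invented set of readings stands in for the 50 seconds readings of Table 2.1.

```python
# Minimal sketch: range (dispersion) and median of a data set.
# The readings below are invented stand-ins for the seconds readings of Table 2.1.

readings = [23.4, 25.1, 20.1, 23.5, 26.1, 22.8, 24.0, 23.3, 22.9, 23.7]

ordered = sorted(readings)
data_range = ordered[-1] - ordered[0]   # highest minus lowest value

n = len(ordered)
if n % 2 == 1:
    median = ordered[n // 2]
else:
    # even count: average the two values that straddle the midpoint
    median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2.0

print(f"range = {data_range:.1f}, median = {median:.2f}")
```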
2.3 GRAPHICAL REPRESENTATION OF DATA
Although an ordered numerical tabulation of data allows for some data distribution analysis, it can
be improved with a frequency histogram, usually called simply a histogram. Histograms are bar
graphs that show the frequency distributions in data. To create a histogram, the data are divided into classes. These are subregions of the data that usually have a uniform range in values, or class width.
Although there are no universally applicable rules for the selection of class width, generally 5 to 20
classes are used. As a rule of thumb, a data set of 30 values may have only five or six classes,
whereas a data set of 100 values may have 10 or more classes.
In general, the smaller the data set, the lower the number of classes used.
The histogram class width (the range of data represented by each histogram bar) is determined by dividing the total range by the selected number of classes. Consider, for example, the data of Table 2.2. If they were divided into seven classes, the class width would be the range divided by the number of classes, or 6.0/7 = 0.857 ≈ 0.86. The first class interval is found by adding the class width to the lowest data value; each successive interval is formed by adding the class width to the upper bound of the preceding interval.
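As a hedged illustration of this procedure, the sketch below divides an assumed data set into seven classes of equal width and counts the frequency in each class as a text-only histogram; the readings are invented, and only the mechanics follow the description above.

```python
# Minimal sketch: dividing a data set into classes of uniform width and
# counting frequencies, as a text-only histogram. The readings are assumed.

readings = [20.1, 21.3, 22.0, 22.6, 23.1, 23.4, 23.4, 23.5, 23.7, 24.0,
            24.4, 24.9, 25.2, 25.6, 26.1]

num_classes = 7
lowest, highest = min(readings), max(readings)
class_width = (highest - lowest) / num_classes

for i in range(num_classes):
    lower = lowest + i * class_width
    upper = lower + class_width
    if i < num_classes - 1:
        count = sum(1 for r in readings if lower <= r < upper)
    else:
        # include the maximum value in the last class
        count = sum(1 for r in readings if lower <= r <= highest)
    print(f"{lower:5.2f} - {upper:5.2f} | {'*' * count}")
```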

In surveying, the varying histogram shapes just described result from variations in personnel,
physical conditions, and equipment: for example, repeated observations of a long distance made
with an EDM instrument and by taping.


NUMERICAL METHODS OF DESCRIBING DATA


Numerical descriptors are values computed from a data set that are used to interpret its precision or
quality. Numerical descriptors fall into three categories:
Measures of central tendency,
Measures of data variation, and
Measures of relative standing.
These categories are all called statistics. Simply described, a statistic is a numerical descriptor computed from sample data.
MEASURES OF CENTRAL TENDENCY
Measures of central tendency are computed statistical quantities that give an indication of the value
within a data set that tends to exist at the center. The arithmetic mean, the median, and the mode
are three such measures. They are described as follows:
Arithmetic mean. For a set of n observations, y1, y2, ..., yn, this is the average of the observations. Its value, $\bar{y}$ (Ymean), is computed from the equation

$$\bar{y} = \frac{\sum_{i=1}^{n} y_i}{n}$$
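A hedged sketch of the three measures of central tendency named above, computed for a small assumed data set, is given below; Python's statistics module provides the mean, median, and mode directly.

```python
# Minimal sketch: arithmetic mean, median, and mode of a sample.
# The observations are assumed values used only for illustration.
import statistics

observations = [23.4, 23.5, 23.4, 23.7, 23.1, 23.4, 23.6, 23.5]

print(f"mean   = {statistics.mean(observations):.3f}")
print(f"median = {statistics.median(observations):.3f}")
print(f"mode   = {statistics.mode(observations):.3f}")
```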


CHAPTER 3
ERROR PROPAGATION
As discussed in Chapter 1, unknown values are often determined indirectly by making direct measurements of other quantities which are functionally related to the desired unknowns. Examples in surveying include computing station coordinates from distance and angle observations, obtaining station elevations from rod readings in differential leveling, and determining the azimuth of a line from astronomical observations. Since all quantities that are measured directly contain errors, any values computed from them will also contain errors. This intrusion, or propagation, of errors that occurs in quantities computed from direct measurements is called error propagation. This topic is one of the most important discussed in this teaching material. In this chapter it is assumed that all systematic errors have been eliminated, so that only random errors remain in the direct observations. To derive the basic error propagation equation, consider a simple function of independently observed quantities $x_1, x_2, \ldots, x_n$.

The calculation of quantities such as areas, volumes, differences in height, horizontal distances, etc., using the measured distances and angles, is done through mathematical relationships between the computed quantities and the measured quantities. Since the measured quantities contain errors, it is inevitable that the quantities computed from them will also contain errors. Evaluation of the errors in the computed quantities as functions of the errors in the measurements is called error propagation.


For a computed quantity $z = f(x_1, x_2, \ldots, x_n)$, the total differential is

$$dz = \frac{\partial z}{\partial x_1}\,dx_1 + \frac{\partial z}{\partial x_2}\,dx_2 + \cdots + \frac{\partial z}{\partial x_n}\,dx_n$$

and the propagated standard deviation is

$$\sigma_z = \sqrt{\left(\frac{\partial z}{\partial x_1}\right)^2 \sigma_{x_1}^2 + \left(\frac{\partial z}{\partial x_2}\right)^2 \sigma_{x_2}^2 + \cdots + \left(\frac{\partial z}{\partial x_n}\right)^2 \sigma_{x_n}^2}$$

where $dx_1, dx_2, \ldots$ are the errors in $x_1, x_2, \ldots$ and $\sigma_{x_1}, \sigma_{x_2}, \ldots$ are their standard deviations. In the above relationships it is assumed that $x_1, x_2, \ldots, x_n$ are independent, implying that the probability of any single observation having a certain value does not depend on the values of the other observations.
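The hedged sketch below applies this propagation law to a simple indirect quantity, the area of a rectangular parcel computed from two taped sides; the measured lengths and their standard deviations are assumed values chosen only to illustrate the formula.

```python
# Minimal sketch: propagating standard deviations of two measured sides into
# the standard deviation of a computed rectangular area, A = a * b.
# sigma_A = sqrt((dA/da)^2 * sigma_a^2 + (dA/db)^2 * sigma_b^2)
#         = sqrt(b^2 * sigma_a^2 + a^2 * sigma_b^2)
# The measured values and standard deviations below are assumed.
import math

a, sigma_a = 120.15, 0.020   # side a in meters, with its standard deviation
b, sigma_b = 85.42, 0.015    # side b in meters, with its standard deviation

area = a * b
sigma_area = math.sqrt((b * sigma_a) ** 2 + (a * sigma_b) ** 2)

print(f"area = {area:.2f} m^2  +/- {sigma_area:.2f} m^2")
```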


CHAPTER 4


WEIGHTS OF OBSERVATIONS
4. INTRODUCTION
When surveying data are collected, they must usually conform to a given set of geometric
conditions, and when they do not, the measurements are adjusted to force that geometric
closure. For a set of uncorrelated observations, a measurement with high precision, as
indicated by a small variance, implies a good observation, and in the adjustment it should
receive a relatively small portion of the overall correction. Conversely, a measurement with
lower precision, as indicated by a larger variance, implies an observation with a larger error,
and should receive a larger portion of the correction.
The weight of an observation is a measure of its relative worth compared to other measurements. Weights are used to control the sizes of corrections applied to measurements in an adjustment. The more precise an observation, the higher its weight; in other words, the smaller the variance, the higher the weight. From this analysis it can be stated intuitively that weights are inversely proportional to variances. Thus, it also follows that correction sizes should be inversely proportional to weights.
In situations where measurements are correlated, weights are related to the inverse of the covariance matrix; the elements of this matrix are variances and covariances. Since weights are relative, variances and covariances are often replaced by cofactors. A cofactor is related to its covariance by the equation

$$q_{ij} = \frac{\sigma_{ij}}{\sigma_0^2}$$

where $q_{ij}$ is the cofactor, $\sigma_{ij}$ the covariance, and $\sigma_0^2$ the reference variance.
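To illustrate weights being inversely proportional to variances, the hedged sketch below computes weights w = 1/σ² for three uncorrelated observations of the same length and forms their weighted mean; the observed lengths and standard deviations are assumed for the illustration.

```python
# Minimal sketch: weights inversely proportional to variances, and the
# weighted mean of uncorrelated observations of the same quantity.
# The observed lengths and their standard deviations are assumed values.

observations = [
    (325.47, 0.010),   # (observed length in meters, standard deviation)
    (325.43, 0.020),
    (325.45, 0.015),
]

weights = [1.0 / (sigma ** 2) for _, sigma in observations]
weighted_mean = (sum(w * y for (y, _), w in zip(observations, weights))
                 / sum(weights))

for (y, sigma), w in zip(observations, weights):
    print(f"obs = {y:.3f}  sigma = {sigma:.3f}  weight = {w:8.1f}")
print(f"weighted mean = {weighted_mean:.3f} m")
```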


CHAPTER 5
Least Squares Adjustment Method for Surveying Measurement Data
In surveying, observations must often satisfy established numerical relationships known as geometric constraints. As examples, in a closed polygon traverse, horizontal angle and distance measurements should conform to the geometric constraints, and in a differential leveling loop, the elevation differences should sum to a given quantity. However, because the geometric constraints are rarely, if ever, met perfectly, the data are adjusted.


As discussed in earlier chapters, errors in observations conform to the laws of probability; that is,
they follow normal distribution theory. Thus, they should be adjusted in a manner that follows these
mathematical laws. Whereas the mean has been used extensively throughout history, the earliest
works on least squares started in the late eighteenth century. Its earliest application was primarily
for adjusting celestial observations. Laplace first investigated the subject and laid its foundation in
1774. The first published article on the subject, entitled "Method of Least Squares," was written in 1805 by Legendre. However, it is well known that although Gauss did not publish until 1809, he had developed and used the method extensively as a student at the University of Göttingen beginning in 1794, and thus he is given credit for the development of the subject. In this chapter, equations for performing least squares adjustments are developed and their use is illustrated with several examples.
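As a hedged preview of the kind of adjustment this chapter develops, the sketch below performs a least squares adjustment of a small differential leveling network by forming and solving the normal equations; the benchmark elevation and observed elevation differences are assumed values containing a small deliberate misclosure, and equal weights are used for simplicity.

```python
# Minimal sketch: least squares adjustment of a small leveling network with
# one fixed benchmark (A) and two unknown elevations (B, C).
# Observed elevation differences (meters) are assumed values.

elev_A = 100.000
d_AB, d_BC, d_AC = 2.345, 1.012, 3.362   # note the small misclosure

# Observation equations in the unknowns x = [B, C]:
#   B      = elev_A + d_AB
#  -B + C  = d_BC
#       C  = elev_A + d_AC
A = [[1.0, 0.0], [-1.0, 1.0], [0.0, 1.0]]
L = [elev_A + d_AB, d_BC, elev_A + d_AC]

# Form the normal equations N x = t, with N = A^T A and t = A^T L.
N = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
t = [sum(A[k][i] * L[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule.
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
B = (t[0] * N[1][1] - N[0][1] * t[1]) / det
C = (N[0][0] * t[1] - t[0] * N[1][0]) / det

print(f"adjusted elevation of B = {B:.4f} m")
print(f"adjusted elevation of C = {C:.4f} m")
```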

