
Spatial Interpolation

Amanuel Godefa
Department of Computer Science
University of Minnesota, Minneapolis, MN
gode0009@umn.edu
SYNONYMS
Spatial Interpolation, Spatial Estimation, Space Estimation
DEFINITION

Interpolation is a collective term for various techniques of determining an approximate value of a function at a point
in the domain between given points at which the function values are known, of inferring data values between points, or
of predicting values beyond the range of the available data [2]. Interpolation is one of the GIS functions that apply
mathematical estimation techniques to a collected set of data values in order to extend them to sites where no samples
are available, or to predict values for a whole surface based on the collected sample point values. This helps us
describe data in terms of relatively simple functions that are then easily manipulated. Similarly, spatial
interpolation is used to estimate the value of a variable at an unsampled location from measurements made at other
sites within a defined area. Spatial interpolation is based on the principle of spatial dependence, which measures the
degree of dependence between near and distant objects.
HISTORICAL BACKGROUND

As the names associated with interpolation methods, which include Newton, Lagrange, Hermite, and many others,
suggest, polynomial interpolation has long been an important part of applied mathematics. The term
interpolation is due to Wallis, an older contemporary of Newton [2]. Newton began his work on the subject in 1675,
which laid the foundation of classical interpolation theory. In 1795, Lagrange published the interpolation formula
now known under his name, despite the fact that Waring had already produced the same formula sixteen years
earlier [5].
Throughout history, interpolation has been used in one form or another for just about every purpose in our world.
Some of the first surviving evidence and earliest known contributions to interpolation theory come from ancient
Babylon and Greece. In antiquity, astronomy was all about time keeping and making predictions concerning
astronomical events. Not only linear but also more complex forms of interpolation served these important practical
needs, such as predicting the positions of the sun, the moon, and the planets then known. To this end, it was of
great importance to keep up lists, called ephemerides (singular: ephemeris), of the positions of the sun, moon,
and known planets at regular time intervals [5]. Farmers, who timed the planting of their crops according to these
predictions, were among their primary users. An early example of the use of interpolation methods in ancient Greece
dates to sometime around 150 BC, when Hipparchus of Rhodes used linear interpolation to construct a chord function,
which is similar to a sinusoidal function, to compute positions of celestial bodies [5].
Many similar land-based purposes were found for interpolation over the ages, but ocean navigation proved to be
one of the most important applications for centuries. Tables of special function values were constructed using
numerical methods, and seafarers used certain ones to determine latitude and longitude. An overview of the history
of interpolation theory through the ages, together with relatively recent research into the history of science that
traces the interpolation problem back to early antiquity, can be found in [5]. The authors give examples of
interpolation techniques originally conceived by ancient Babylonian as well as early-medieval Chinese, Indian, and
Arabic astronomers and mathematicians, and they briefly discuss the links with the classical interpolation
techniques developed in Western countries from the 17th until the 19th century.
Interpolation in spatial analysis, or spatial interpolation, is used to estimate the value of a variable at an
unsampled location from measurements made at other sites. Strictly speaking, interpolation results are valid only
within the convex hull described by the sample locations. Spatial interpolation is based on the notion that points
which are close together in space tend to have similar attribute values. This is known as positive spatial
autocorrelation. Interpolation can give us an appropriate value for phenomena (temperature, soil parameters, ground
water characteristics, vegetation data, and other geographical data) that we are not able to measure everywhere; for
example, we cannot measure the values of a particular phenomenon at every point of the globe, but we can pick sample
points and estimate the remaining points from them. The values of the collected data can thus be extended to sites
where no samples are available using interpolation methods.
SCIENTIFIC FUNDAMENTALS

Interpolation plays a big role in our daily life, and the importance of interpolation methods is continuously
growing, especially through the connection of GIS with various kinds of modeling software. Interpolation methods can
be applied to different kinds of measurements and are easily involved in spatial analysis. Interpolation can be
performed either in the phase of data production or when the data are used for spatial analysis. Among others we can
mention precipitation, temperature, soil parameters, ground water characteristics, pollution sources, vegetation
data, etc. Spatial and spatio-temporal distributions of physical and socio-economic phenomena can be approximated by
functions depending on location in a multi-dimensional space, as multi-variate scalar, vector, or tensor fields [6].
Typical examples of phenomena whose measurements can be approximated in this way are elevation, temperature, soil
parameters, population densities, etc. These phenomena can be measured using various methods, and many interpolation
and approximation methods have been developed to predict values of spatial phenomena at unsampled locations [6].
Spatial interpolation is a very important feature of many GISs: it can be used to provide contours for displaying
data graphically, to calculate values, to change the unit of comparison when using different data structures in
different layers, and frequently as an aid in the spatial decision making process, both in physical and human
geography and in related disciplines such as mineral prospecting and hydrocarbon exploration [7]. For a function to
be called an interpolant, it must not only capture the general trend of the data but actually pass through the data
points; the function must match the given values exactly. Interpolation simply means fitting some function to given
data so that the function has the same values as the given data [2].
The general formulation of the spatial interpolation problem can be stated as follows: given m measured values
y_i, i = 1, ..., m, of the studied phenomenon, observed at discrete points t_i = (x_i[1], x_i[2], ..., x_i[d]),
i = 1, ..., m, within a certain region of a d-dimensional space (for interpolating functions in more than one
dimension we need more than one variable), find a d-variate function F(t) that passes through the given points, that
is, one that fulfills the condition F(t_i) = y_i for i = 1, ..., m. In the one-dimensional case, with
t_1 < t_2 < ... < t_m, we seek a function f: R -> R such that f(t_i) = y_i for i = 1, ..., m [2][6].
In the case of complicated interpolation functions, we need to consider some additional constraints or requirements,
such as monotonicity, convexity, or the degree of smoothness.
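As a minimal one-dimensional sketch of this condition (the sample locations and values below are made up for
illustration), a Lagrange polynomial of degree m-1 passes exactly through all m given points:

    import numpy as np
    from scipy.interpolate import lagrange

    t = np.array([0.0, 1.0, 2.0, 3.0])   # sample locations t_1 < ... < t_m
    y = np.array([1.0, 3.0, 2.0, 5.0])   # measured values y_i

    f = lagrange(t, y)                   # degree m-1 polynomial through all points
    assert np.allclose(f(t), y)          # the interpolant fulfills f(t_i) = y_i
    print(f(1.5))                        # estimate at an unsampled location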
Spatial interpolation methods differ in their assumptions and, based on the number and location of the points used
for interpolation at a grid point, they can be classified into different kinds of interpolation procedures. Because
most methods combine several approaches, it is not easy to place each method in a single category. According to
[1, 4, 7, 8], some of the basic classifications of interpolation procedures are:
a. Point Interpolation or Areal Interpolation

Point-based interpolation: given a number of points whose locations and values are known, determine the values of
other points at predetermined locations. Point interpolation is used for data which can be collected at point
locations, e.g. weather station readings, spot heights, oil well readings, porosity measurements.
Areal interpolation is the problem of transferring data from one set of areas (source reporting zones) to another
(target reporting zones). This is easy if the target set is an aggregation of the source set, but more difficult if
the boundaries of the target set are independent of the source set, e.g. given population counts for census tracts,
estimate populations for electoral districts.
b. Global Interpolators or Local Interpolators

Global interpolators use every control or given point available, along with all grid points, to derive a single
function that is mapped across the whole region; a change in one input value affects the entire map.
Local interpolators use a sample of control points in estimating an unknown value. They apply an algorithm
repeatedly to a small portion of the total set of points, and a change in an input value only affects the result
within the window.
c. Exact Interpolators or Approximate Interpolators

Exact interpolators honor the data points upon which the interpolation is based: the surface passes through all
points whose values are known. Honoring data points is seen as an important feature in many applications, e.g. the
oil industry. Proximal interpolators, B-splines, and Kriging methods all honor the given data points.
Approximate interpolators are used when there is some uncertainty about the given surface values. They exploit the
belief that in many data sets there are global trends, which vary slowly, overlain by local fluctuations, which
vary rapidly and produce uncertainty (error) in the recorded values. The effect of smoothing is therefore to
reduce the effects of error on the resulting surface (see the sketch after this list).
d. Stochastic or Deterministic Interpolators

Stochastic methods incorporate the concept of randomness: the interpolated surface is conceptualized as one of
many that might have been observed, all of which could have produced the known data points. Stochastic
interpolators include trend surface analysis, Fourier analysis, and Kriging.
Deterministic methods do not use probability theory (e.g. proximal interpolation).
e. Gradual or Abrupt Interpolators

Gradual: an example of a gradual interpolator is the distance-weighted moving average.
Abrupt: geologic faults are an example of a feature requiring an abrupt interpolator, since they introduce
unexpected changes; impermeable barriers will likewise produce abrupt changes in the interpolated surface.
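To illustrate the exact versus approximate distinction of item c, the following sketch (with made-up sample values)
contrasts an exact cubic spline, which honors every data point, with a low-degree least-squares polynomial, which
only smooths through them:

    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    z = np.array([1.0, 2.5, 1.8, 3.2, 2.9])   # illustrative noisy samples

    exact = CubicSpline(x, z)                  # exact: honors every data point
    approx = np.poly1d(np.polyfit(x, z, 2))    # approximate: least-squares fit

    print(np.allclose(exact(x), z))            # True: surface passes through the data
    print(np.allclose(approx(x), z))           # False: local fluctuations smoothed away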
Interpolation can be characterized by the discrepancy between the interpolated value and the true value. Because the
true value is generally not known, we can set aside some measured points for testing the interpolation procedure. In
the stage of data production we can calculate the values of a particular phenomenon at predefined spots using
interpolation procedures. For example, if we want to deliver the data on a regular grid but the samples are measured
at scattered points, we have to calculate the values of the grid points from the samples using interpolation
procedures.
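A short sketch of this scattered-to-grid resampling, assuming NumPy and SciPy are available (the sample field below
is synthetic):

    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 10.0, size=(50, 2))      # scattered sample locations
    vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])    # values measured at those points

    # Regular grid on which the data are to be delivered.
    gx, gy = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))

    grid_vals = griddata(pts, vals, (gx, gy), method='linear')
    # Note: grid points outside the convex hull of the samples come back as NaN.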
A good GIS tool should include a range of spatial interpolation methods that users can apply to accomplish different
tasks appropriately, such as creating digital elevation models or surfaces of population density, rainfall,
temperature, etc. Ideally, these methods should provide a natural language interface, which would lead the user
through an appropriate series of questions about the interests, goals, and aims of the user and about the nature of
the data [7]. Ledoux [4] defined a good interpolation method as follows: given a set of samples to which an attribute
is attached, spatial interpolation is the procedure used to estimate the value of the attribute at an unsampled
location x; to achieve that, it creates a function f, called the interpolant, that tries to fit the samples as well
as possible. Interpolation is based on spatial autocorrelation, that is, the attribute values of two points close
together in space are more likely to be similar than those of two points far from each other. Some of the properties
mentioned by Ledoux are:
1. Exactness: the function must pass through the data points.
2. Continuity: the interpolant must be continuous, so that a value is defined at every location.
3. Smoothness: existence of the first or second derivative of the function.
4. Localness: the interpolation function uses only some neighboring samples to estimate the value at a given
location.
5. Adaptability: the function should give realistic results for anisotropic data distributions and/or for datasets
where the data density varies greatly from one location to another.
6. Efficiency: the method must be computationally efficient.
7. Automation: the method must require as little input as possible from the user; it should not rely on user-defined
parameters that require a priori knowledge of the dataset.
KEY APPLICATION

Spatial interpolation is a key component of digital terrain modeling, whose applications abound in civil
engineering, landscape planning, military planning, aircraft simulation, radio communications planning, visibility
analysis, hydrological modeling, and more traditional cartographic applications, such as the production of contour,
hill-shaded, slope, and aspect maps.
As listed above, we use interpolation for many purposes, among them plotting a smooth curve through discrete data
points, reading between the lines of a table, differentiating or integrating tabular data, evaluating a mathematical
function quickly and easily, and replacing a complicated function by a simple one. These also include image
enhancement, image compression, geo-registration, geo-rectification, etc.
With the growth of GIS tools, we need to pay attention to the accuracy and precision of spatial data by using the
appropriate application. These applications integrate different interpolation methods, and in order to get improved
performance in our work, with no error or minimum error, we have to pick the right application. GIS error begins
with data collection and continues through data input, storage, manipulation, output, and interpretation of the
results. Understanding the source, nature, and extent of error in GIS is the first step in a strategy for reducing
errors in GIS [1]. Further comparisons between spatial interpolation methods and detailed treatments of the methods
can be found in [1], [3], and [6]. Because it is impossible to cover all or even most of the existing interpolation
techniques, only methods which are often used in connection with GIS, or have the potential to be widely used for
GIS applications, are mentioned here, with references to the literature for more detailed descriptions. Some of the
common interpolation methods that are often used in GIS applications are as follows [1], [6]:
Inverse distance weighted interpolation (IDW) or IDW averaging (IDWA): one of the simplest and most readily
available methods. It is a deterministic estimation method based on the assumption that values at unsampled points
can be approximated as a weighted average, i.e. a linear combination, of values at known sampled points.
Distance-based weighting methods have been used to interpolate climatic data [6]. IDW makes the assumption that
values closer to the unsampled location are more representative of the value to be estimated than samples further
away. Weights change according to the linear distance of the samples from the unsampled point; the spatial
arrangement of the samples does not affect the weights. IDWA has seen extensive implementation in the mining
industry due to its ease of use, and IDW has also been shown to work well with noisy data. The choice of the power
parameter in IDW can significantly affect the interpolation results: as the power parameter increases, IDWA
approaches the nearest neighbor interpolation method, where the interpolated value simply takes on the value of the
closest sample point. Optimal inverse distance weighting is a form of IDWA where the power parameter is chosen on
the basis of minimum mean absolute error. For the estimator formula, please refer to [1].
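The standard IDW estimate at a point x0 is z(x0) = (sum_i z_i / d_i^p) / (sum_i 1 / d_i^p), where d_i is the distance
to sample i and p is the power parameter; a minimal sketch follows (the helper function idw and the sample values
are illustrative, see [1] for the formal estimator):

    import numpy as np

    def idw(sample_xy, sample_z, query_xy, power=2.0):
        # Distances from every query point to every sample point.
        d = np.linalg.norm(sample_xy[None, :, :] - query_xy[:, None, :], axis=2)
        hit = d < 1e-12                   # query coincides with a sample point
        w = 1.0 / np.where(hit, 1.0, d) ** power
        est = (w * sample_z).sum(axis=1) / w.sum(axis=1)
        # A coinciding query simply takes the sample's own value (exactness).
        return np.where(hit.any(axis=1), sample_z[hit.argmax(axis=1)], est)

    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    z = np.array([1.0, 2.0, 3.0, 4.0])
    print(idw(xy, z, np.array([[0.5, 0.5]])))   # weighted average of all four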
Natural neighbor interpolation: a weighted average technique that uses the geometrical relationships of local data,
based on the concept of natural neighborhood coordinates, in order to choose and weight nearby points. The value at
an unsampled location is computed as a weighted average of the values of the natural neighbors, with weights
dependent on areas/volumes rather than distances [6] (by contrast, in plain nearest neighbor interpolation each
point of estimation simply takes the value of its nearest neighbor, and no parameters need to be set up). Natural
neighbor interpolation can be used for both interpolation and extrapolation, and it generally works well with
clustered scatter points. Natural neighbor interpolation was first introduced by [9].
Triangulated irregular network (TIN): an interpolation based on a TIN, which uses a triangular representation of the
given sample points on the surface. Boris Nikolaevich Delone (Delaunay), professor of mathematics at Moscow
University, proved that a field of scattered points can be uniquely transformed into a triangulated network using
the links determined by the Voronoi cells of the points [8]. This guarantees that a unique triangulation is
established, instead of different triangulations of the same points.
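A brief sketch of TIN-based linear interpolation using SciPy's Delaunay triangulation (the coordinates and values
are made up):

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0], [2.0, 2.5]])
    z = np.array([1.0, 2.0, 3.0, 4.0, 2.2])

    tri = Delaunay(pts)               # the unique Delaunay triangulation of the points
    f = LinearNDInterpolator(tri, z)  # a linear facet over each triangle

    print(f(1.0, 1.0))                # value inside the triangulated network
    print(f(10.0, 10.0))              # nan: outside the convex hull of the samples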
Spline: a deterministic technique to represent a smooth, three-dimensional curve, defined piecewise by polynomials,
that passes through the given points. The interpolation is made flexible through the choice of a tension parameter,
which controls the properties of the interpolation function, and a smoothing parameter, which makes it possible to
filter out noise [8]. Spline functions were first formulated by Schoenberg in 1946. Splining may be thought of as
the mathematical equivalent of fitting a long flexible ruler to a series of data points; like its physical
counterpart, the mathematical spline function is constrained at defined points. Splines assume smoothness of
variation and have the advantage of creating curves and contour lines which are visually appealing. Some of
splining's disadvantages are that no estimates of error are given and that splining may mask uncertainty present in
the data. Splines are typically used for creating contour lines from dense, regularly spaced data; splining may,
however, also be used for interpolation of irregularly spaced data [1].
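A one-dimensional sketch of the smoothing-parameter idea, using SciPy's smoothing spline (the noise level and
parameter values are arbitrary):

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    x = np.linspace(0.0, 10.0, 40)
    y = np.sin(x) + np.random.default_rng(1).normal(0.0, 0.2, x.size)  # noisy samples

    exact = UnivariateSpline(x, y, s=0)     # s = 0: interpolating spline, honors the data
    smooth = UnivariateSpline(x, y, s=2.0)  # larger s filters out more of the noise

    xs = np.linspace(0.0, 10.0, 400)
    curve = smooth(xs)                      # smooth curve through the noisy data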
Polynomial regression: classified as a stochastic, global technique which fits the variable of interest to some
linear combination of regressor variables; for example, in the case of temperature in three dimensions, with X and Y
coordinates and Z elevation, polynomial regression obtains the best fit with the simplest model [1].
Trend surface analysis (TSA): can be thought of as a subset of polynomial regression. TSA is a stochastic technique
which separates the data into regional trends and local variations. The regional component of TSA can be thought of
as a regression surface fit to the data, while the local variations can be thought of as a map of residuals. Values
at unsampled locations may be estimated using the mathematical relationship between the locational variables X, Y
and the regionalized meteorological variable of interest. In the study reported in [1], temperature was fitted to a
third-order polynomial, which was assumed to be sufficient to capture regional temperature variations. TSA differs
from the polynomial regression above in that elevation is not used in estimating temperature and TSA uses all
regressor variables [1].
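A minimal trend-surface sketch with synthetic temperature data, first order for brevity (a third-order surface would
simply add columns such as X**2, X*Y, Y**2, ... to the design matrix):

    import numpy as np

    rng = np.random.default_rng(2)
    x, y = rng.uniform(0, 10, 30), rng.uniform(0, 10, 30)
    temp = 20.0 - 0.5 * x + 0.3 * y + rng.normal(0.0, 0.4, 30)  # synthetic data

    # First-order trend surface: T ~ b0 + b1*X + b2*Y, fit by least squares.
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, temp, rcond=None)

    trend = A @ coef               # regional trend at the sample locations
    residuals = temp - trend       # local variations: the map of residuals
    print(coef, residuals.std())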
Kriging: a stochastic technique similar to inverse distance weighted averaging that considers both the distance and
the degree of variation between known data points when estimating values in unknown areas. A kriged estimate is a
weighted linear combination of the known sample values around the point to be estimated. Kriging uses a
semivariogram, a measure of spatial correlation between two points, so the weights change according to the spatial
arrangement of the samples. Unlike the other estimation procedures investigated, kriging provides a measure of the
error or uncertainty of the estimated surface. In addition, kriging will not produce the edge effects that result
from trying to force a polynomial to fit the data, as with TSA [1].
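A simplified ordinary-kriging sketch, assuming an exponential variogram model with made-up parameters (in practice
the model and its sill, range, and nugget are fitted to the empirical semivariogram of the data):

    import numpy as np

    def ordinary_kriging(xy, z, x0, sill=1.0, vrange=5.0, nugget=0.0):
        # Assumed exponential variogram model gamma(h).
        def gamma(h):
            return nugget + sill * (1.0 - np.exp(-3.0 * h / vrange))

        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[:n, n] = A[n, :n] = 1.0     # unbiasedness constraint (Lagrange multiplier)
        b = np.append(gamma(np.linalg.norm(xy - x0, axis=1)), 1.0)

        w = np.linalg.solve(A, b)     # kriging weights plus the multiplier
        estimate = w[:n] @ z          # weighted linear combination of the samples
        variance = w @ b              # kriging variance: the error measure
        return estimate, variance

    xy = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
    z = np.array([1.2, 2.1, 1.8, 2.6])
    print(ordinary_kriging(xy, z, np.array([1.5, 1.5])))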
Cokriging: an extension of kriging, except that it uses additional covariates, usually more intensely sampled, to
assist in prediction. Cokriging is most effective when the covariates are highly correlated with the variable of
interest. Both kriging and cokriging assume homogeneity of first differences. While kriging is considered the best
linear unbiased spatial predictor (BLUP), there are problems of nonstationarity in real-world data sets [1].
Cokriging uses one or more secondary features which are usually spatially correlated with the primary feature (e.g.
heights as secondary, rainfall as primary). If the secondary features have denser sample sets than the less
intensively captured primary feature, then with cokriging the primary feature can be estimated with higher accuracy
without any surplus expenditure [8].
As a general example, consider an interpolation problem from my Analysis of Numerical Algorithms course, taught by
Prof. Bole D. at UMN.
In this problem, a function has been sampled at 49 points on a regular grid. The objective is to fill in the
intermediate values using splines and draw the result. The values could represent either altitudes of points on a
hill or the amount of light being reflected off a shiny object, as a function of position. We need to fit a 2D
natural cubic spline through the points on the following grid:
0 0 0 1 0 0 0
0 0 0 2 0 0 0
0 0 2 4 2 0 0
1 2 4 8 4 2 1
0 0 2 4 2 0 0
0 0 0 2 0 0 0
0 0 0 1 0 0 0
We wish to evaluate this spline at all the intermediate points at a spacing of 0.1. In order to do this, fit an
ordinary natural spline across each row, evaluating each at the points x = 0.0, 0.1, 0.2, ..., 5.9, 6.0. The result
will be a 7 x 61 matrix of values. Then fit a cubic spline down each column of these intermediate values, evaluating
each one at y = 0.0, 0.1, 0.2, ..., 5.9, 6.0. The result will be a 61 x 61 matrix of values. Calling this matrix of
values Z, plot it using two different methods.
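The original solution used Matlab; an equivalent Python sketch of the same row-then-column procedure, using SciPy's
natural cubic splines, is:

    import numpy as np
    from scipy.interpolate import CubicSpline

    Z0 = np.array([[0, 0, 0, 1, 0, 0, 0],
                   [0, 0, 0, 2, 0, 0, 0],
                   [0, 0, 2, 4, 2, 0, 0],
                   [1, 2, 4, 8, 4, 2, 1],
                   [0, 0, 2, 4, 2, 0, 0],
                   [0, 0, 0, 2, 0, 0, 0],
                   [0, 0, 0, 1, 0, 0, 0]], dtype=float)

    t = np.arange(7.0)             # original grid coordinates 0, 1, ..., 6
    s = np.linspace(0.0, 6.0, 61)  # evaluation points 0.0, 0.1, ..., 6.0

    # Natural cubic spline across each row: a 7 x 61 matrix of values.
    rows = np.array([CubicSpline(t, row, bc_type='natural')(s) for row in Z0])

    # Natural cubic spline down each column: the final 61 x 61 matrix Z.
    Z = np.array([CubicSpline(t, col, bc_type='natural')(s) for col in rows.T]).T
    print(Z.shape)                 # (61, 61)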
The following figure was generated by the Matlab code I used to solve the problem:

Figure 1
Another example, Figure 2, shows an area-stealing interpolation from [8], also called natural neighbor
interpolation; this method automatically selects from the sample points those which should take part in the
interpolation. These points are called natural neighbors [8].

Figure 2

Similarly, as presented in [6], the three figures below (Figure 3, Figure 4, and Figure 5) were generated using the
IDW, TIN, and Kriging methods, respectively.

Figure 3

Figure 4

Figure 5
FUTURE DIRECTION

As we have seen from the interpolation methods above, computers are playing an incredible role in approximation and
prediction in the GIS area. With the current growth of technology, especially the advances in hardware and software
development, the GIS capabilities for spatial interpolation have improved radically. The interpolation methods
within GIS have integrated advanced techniques, and the usage of GIS has increased in all kinds of spatial data
processing. Along with this, concern over error, accuracy, and precision in spatial data has declined from time to
time and will continue to decrease, and the probability of missing the exact prediction will be minimized.
CROSS REFERENCES

Geo-statistics
GIS functions
Extrapolation
Digital elevation model (DEM)
Least squares approximation
Some of the interpolation methods in the mathematical approach are as follows:
Polynomials (Monomial Basis, Lagrange Interpolation, Newton Interpolation, Orthogonal Polynomials)
Piecewise polynomials (Hermite Cubic Interpolation, Cubic Spline Interpolation, B-splines)
Trigonometric functions; Exponential functions; Rational functions

RECOMMENDED READING

1. Fred Collins Jr. (IBM, Boulder, Colorado) and Paul Bolstad (University of Minnesota): A Comparison of Spatial
Interpolation Techniques in Temperature Estimation.
2. Michael T. Heath: Scientific Computing: An Introductory Survey, 2nd edition, McGraw-Hill, 2002.
3. Kristian Kirk: Spatial Sampling and Interpolation Methods: Comparative Experiments Using Simulated Data, Aalborg
University, 2006.
4. H. Ledoux and C. Gold: Interpolation as a Tool for the Modelling of Three-Dimensional Geoscientific Datasets.
5. E. Meijering: A Chronology of Interpolation: From Ancient Astronomy to Modern Signal and Image Processing,
Netherlands Organization for Scientific Research, November 2001.
6. Lubos Mitas and Helena Mitasova: Spatial Interpolation, University of Illinois at Urbana-Champaign, Urbana, IL,
2002.
7. NCGIA: The National Center for Geographic Information and Analysis (NCGIA) Core Curriculum, 1990 version, 1990.
8. Ferenc Sárközy: GIS Functions - Interpolation, Technical University of Budapest, April 1998.
9. R. Sibson: A Brief Description of the Natural Neighbor Interpolant. In: V. Barnett, editor, Interpreting
Multivariate Data. John Wiley & Sons, Chichester, 1981.
10. D.F. Watson: Contouring: A Guide to the Analysis and Display of Spatial Data, 1992.
