
Unit 3
CE2024 - Remote Sensing and GIS
Two Marks Questions and Answers


15. List any four image enhancement operations.


Density slicing,
Contrast stretching,
Filtering,
Edge enhancement.
16. What is speckle?
Speckle is the grainy or salt-and-pepper pattern arising from the coherent nature of radar waves, causing random constructive and destructive interference.

17. What are the steps involved in image processing?

1. Initial data statistics
2. Image rectification and restoration
3. Image enhancement
4. Image transformation
5. Image classification

18. Define stereoscopes.
A stereoscope is a device that facilitates stereoscopic viewing of aerial photographs. The simplest and most common is the pocket stereoscope; it is compact, inexpensive, and among the most widely used instruments in remote sensing for visual interpretation.

19. What equipment is required for the image interpretation process?
Light tables, rulers, stereoscopes, magnifiers, densitometers, parallax bars.

20. What is meant by geometric rectification?
It is the process by which the geometry of an image is made planimetric. It requires the use of GCPs whose image coordinates (in terms of row and pixel number) are known along with the corresponding map or ground coordinates.

16 Marks Questions

1. What are the requirements and methods of image processing?
2. Explain the geometric correction and spatial filtering techniques.
3. Explain the elements of interpretation with an example for each one of them.
4. Explain briefly about image enhancement.
5. Explain briefly about image classification.
6. Discuss briefly about filtering techniques.
7. Write the applications of remote sensing.

, ilr"t he_?aceE\j
@ -[*1,, &./d le,"
fr@t o?srqt)on) 2 -7- l2-
?<b4\oo
L Q AAoov,a,z;t eoJVt&don
a arrWe zbrr:tA <al;o>-,

www.chennaiuniversity.net
Find study materials,Notes,Ebooks online !
Find study materials,Notes,Ebooks online @
www.chennaiuniversity.net

UNIT - III
IMAGE INTERPRETATION AND ANALYSIS

1. Give a brief account of image interpretation and analysis.


In order to take advantage of and make good use of remote sensing data, we must be able to extract meaningful information from the imagery. This brings us to the topic of discussion in this chapter: interpretation and analysis, the sixth element of the remote sensing process which we defined in Chapter 1. Interpretation and analysis of remote sensing imagery involves the identification and/or measurement of various targets in an image in order to extract useful information about them. Targets in remote sensing images may be any feature or object which can be observed in an image, and have the following characteristics:

* Targets may be a point, line, or area feature. This means that they can have any form, from a bus in a parking lot or a plane on a runway, to a bridge or roadway, to a large expanse of water or a field.
* The target must be distinguishable; it must contrast with other features around it in the image.

Much interpretation and identification of targets in remote sensing imagery is performed manually or visually, i.e. by a human interpreter. In many cases this is done using imagery displayed in a pictorial or photograph-type format, independent of what type of sensor was used to collect the data and how the data were collected. In this case we refer to the data as being in analog format. Remote sensing images can also be represented in a computer as arrays of pixels, with each pixel corresponding to a digital number representing the brightness level of that pixel in the image. In this case, the data are in a digital format. Visual interpretation may also be performed by examining digital imagery displayed on a computer screen. Both analog and digital imagery can be displayed as black and white (also called monochrome) images, or as colour images by combining different channels or bands representing different wavelengths.


When remote sensing data are available in digital format, digital processing and analysis may be performed using a computer. Digital processing may be used to enhance data as a prelude to visual interpretation. Digital processing and analysis may also be carried out to automatically identify targets and extract information completely without manual intervention by a human interpreter. However, rarely is digital processing and analysis carried out as a complete replacement for manual interpretation. Often, it is done to supplement and assist the human analyst.

Manual interpretation and analysis dates back to the early beginnings of remote sensing for air photo interpretation. Digital processing and analysis is more recent, with the advent of digital recording of remote sensing data and the development of computers. Both manual and digital techniques for interpretation of remote sensing data have their respective advantages and disadvantages. Generally, manual interpretation requires little, if any, specialized equipment, while digital analysis requires specialized, and often expensive, equipment. Manual interpretation is often limited to analyzing only a single channel of data or a single image at a time due to the difficulty in performing visual interpretation with multiple images. The computer environment is more amenable to handling complex images of several or many channels or from several dates. In this sense, digital analysis is useful for simultaneous analysis of many spectral bands and can process large data sets much faster than a human interpreter. Manual interpretation is a subjective process, meaning that the results will vary with different interpreters. Digital analysis is based on the manipulation of digital numbers in a computer and is thus more objective, generally resulting in more consistent results. However, determining the validity and accuracy of the results from digital processing can be difficult.

It is important to reiterate that visual and digital analyses of remote sensing imagery are not mutually exclusive. Both methods have their merits. In most cases, a mix of both methods is usually employed when analyzing imagery. In fact, the ultimate decision of the utility and relevance of the information extracted at the end of the analysis process still must be made by humans.

2. Explain the elements of image interpretation with an example for each one of them.

b). Shape refers to the general form, structure, or outline of individual objects. Shape can be a very distinctive clue for interpretation. Straight-edged shapes typically represent urban or agricultural (field) targets, while natural features, such as forest edges, are generally more irregular in shape, except where man has created a road or clear cuts. Farm or crop land irrigated by rotating sprinkler systems would appear as circular shapes.
c). Size of objects in an image is a function of scale. It is important to assess the size of a target relative to other objects in a scene, as well as the absolute size, to aid in the interpretation of that target. A quick approximation of target size can direct interpretation to an appropriate result more quickly. For example, if an interpreter had to distinguish zones of land use, and had identified an area with a number of buildings in it, large buildings such as factories or warehouses would suggest commercial property, whereas small buildings would indicate residential use.

d). Pattern refers to the spatial arrangement of visibly discernible objects. Typically, an orderly repetition of similar tones and textures will produce a distinctive and ultimately recognizable pattern. Orchards with evenly spaced trees, and urban streets with regularly spaced houses, are good examples of pattern.

e). Texture refers to the arrangement and frequency of tonal variation in particular areas of an image. Rough textures would consist of a mottled tone where the grey levels change abruptly in a small area, whereas smooth textures would have very little tonal variation. Smooth textures are most often the result of uniform, even surfaces, such as fields, asphalt, or grasslands. A target with a rough surface and irregular structure, such as a forest canopy, results in a rough textured appearance. Texture is one of the most important elements for distinguishing features in radar imagery.


f). Shadow is also helpful in interpretation as it may provide an idea of the profile and relative height of a target or targets, which may make identification easier. However, shadows can also reduce or eliminate interpretation in their area of influence, since targets within shadows are much less (or not at all) discernible from their surroundings. Shadow is also useful for enhancing or identifying topography and landforms, particularly in radar imagery.

g). Association takes into account the relationship between other recognizable objects or features in proximity to the target of interest. The identification of features that one would expect to associate with other features may provide information to facilitate identification. In the example given above, commercial properties may be associated with proximity to major transportation routes, whereas residential areas would be associated with schools, playgrounds, and sports fields. In our example, a lake is associated with boats, a marina, and adjacent recreational land.

**********

3. Describe in detail the digital image processing. (Nov/Dec-2010)
Digital image processing may involve numerous procedures, including formatting and correcting of the data, digital enhancement to facilitate better visual interpretation, or even automated classification of targets and features entirely by computer. In order to process remote sensing imagery digitally, the data must be recorded and available in a digital form suitable for storage on a computer tape or disk. Obviously, the other requirement for digital image processing is a computer system, sometimes referred to as an image analysis system, with the appropriate hardware and software to process the data. Several commercially available software systems have been developed specifically for remote sensing image processing and analysis.
Digital image analysis operations can be categorized into the following four categories:
* Preprocessing
* Image Enhancement
* Image Transformation
* Image Classification and Analysis

Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and are generally grouped as radiometric or geometric corrections. Radiometric corrections include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, and converting the data so they accurately represent the reflected or emitted radiation measured by the sensor. Geometric corrections include correcting for geometric distortions due to sensor-Earth geometry variations, and conversion of the data to real world coordinates (e.g. latitude and longitude) on the Earth's surface.

The objective of the second group of image processing functions, grouped under the term image enhancement, is solely to improve the appearance of the imagery to assist in visual interpretation and analysis. Examples of enhancement functions include contrast stretching to increase the tonal distinction between various features in a scene, and spatial filtering to enhance (or suppress) specific spatial patterns in an image.

Image transformations are operations similar in concept to those for image enhancement. However, unlike image enhancement operations, which are normally applied only to a single channel of data at a time, image transformations usually involve combined processing of data from multiple spectral bands. Arithmetic operations (i.e. subtraction, addition, multiplication, division) are performed to combine and transform the original bands into "new" images which better display or highlight certain features in the scene. We will look at some of these operations, including various methods of spectral or band ratioing, and a procedure called principal components analysis, which is used to more efficiently represent the information in multichannel imagery.
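As a rough illustration of the band-ratio idea described above (not part of the original notes), the following Python/NumPy sketch computes a simple near-infrared/red ratio and a normalised-difference index on made-up digital numbers; the band values and the small epsilon guard are assumptions chosen only for the example.

```python
import numpy as np

# Hypothetical 3x3 digital numbers for a red and a near-infrared band.
red = np.array([[30, 32, 35],
                [40, 45, 50],
                [60, 62, 65]], dtype=float)
nir = np.array([[90, 95, 92],
                [80, 70, 60],
                [55, 50, 48]], dtype=float)

# Simple band ratio (NIR / red); a tiny epsilon avoids division by zero.
ratio = nir / (red + 1e-6)

# Normalised-difference form of the same idea (an NDVI-style index).
ndvi = (nir - red) / (nir + red + 1e-6)

print(ratio)
print(ndvi)
```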

Image classification and analysis operations are used to digitally identify and classify pixels in the data. Classification is usually performed on multi-channel data sets (A), and this process assigns each pixel in an image to a particular class or theme (B) based on statistical characteristics of the pixel brightness values. There are a variety of approaches taken to perform digital classification. We will briefly describe the two generic approaches which are used most often, namely supervised and unsupervised classification. In the following sections we will describe each of these four categories of digital image processing functions in more detail.
**********

4. Explain in detail the pre-processing operations.

Pre-processing operations, sometimes referred to as image restoration and rectification, are intended to correct for sensor- and platform-specific radiometric and geometric distortions of data. Radiometric corrections may be necessary due to variations in scene illumination and viewing geometry, atmospheric conditions, and sensor noise and response. Each of these will vary depending on the specific sensor and platform used to acquire the data and the conditions during data acquisition. Also, it may be desirable to convert and/or calibrate the data to known (absolute) radiation or reflectance units to facilitate comparison between data.

Variations in scene illumination and viewing geometry (for optical sensors) can be corrected by modeling the geometric relationship and distance between the area of the Earth's surface imaged, the sun, and the sensor. This is often required so as to be able to more readily compare images collected by different sensors at different dates or times, or to mosaic multiple images from a single sensor while maintaining uniform illumination conditions from scene to scene.
Scattering of radiation occurs as it passes through and interacts with the atmosphere. This scattering may reduce, or attenuate, some of the energy illuminating the surface. In addition, the atmosphere will further attenuate the signal propagating from the target to the sensor. Various methods of atmospheric correction can be applied, ranging from detailed modeling of the atmospheric conditions during data acquisition to simple calculations based solely on the image data. An example of the latter method is to examine the observed brightness values (digital numbers) in an area of shadow or for a very dark object (such as a large clear lake - A) and determine the minimum value (B). The correction is applied by subtracting the minimum observed value, determined for each specific band, from all pixel values in each respective band. Since scattering is wavelength dependent (Chapter 1), the minimum values will vary from band to band. This method is based on the assumption that the reflectance from these features, if the atmosphere is clear, should be very small, if not zero. If we observe values much greater than zero, then they are considered to have resulted from atmospheric scattering.
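A minimal sketch of the dark-object (minimum-value) subtraction described above, assuming a single band of made-up digital numbers and a purely additive haze offset:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the band's minimum DN, on the assumption that the darkest
    pixel should be near zero if the atmosphere were clear."""
    band = band.astype(float)
    return np.clip(band - band.min(), 0, None)

# Hypothetical band with an additive haze offset of roughly 12 DN.
band = np.array([[12, 14, 40],
                 [55, 13, 90],
                 [70, 65, 12]])
print(dark_object_subtraction(band))
```

In a multi-band image the same subtraction would be repeated per band, since the minimum value differs from band to band.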

Noise in an image may be due to irregularities or errors that occur in the sensor response and/or data recording and transmission. Common forms of noise include systematic striping or banding and dropped lines. Both of these effects should be corrected before further enhancement or classification is performed. Striping was common in early Landsat MSS data due to variations and drift in the response over time of the six MSS detectors. The "drift" was different for each of the six detectors, causing the same brightness to be represented differently by each detector. The overall appearance was thus a "striped" effect. The corrective process made a relative correction among the six sensors to bring their apparent values in line with each other. Dropped lines occur when there are systems errors which result in missing or defective data along a scan line. Dropped lines are normally "corrected" by replacing the line with the pixel values in the line above or below, or with the average of the two.
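A rough illustration of the dropped-line correction just described; the image array and the defective row index are invented for the example:

```python
import numpy as np

def fill_dropped_line(image, row):
    """Replace a defective scan line with the average of the lines above
    and below it (or the single neighbour at an image edge)."""
    img = image.astype(float).copy()
    if row == 0:
        img[row] = img[row + 1]
    elif row == img.shape[0] - 1:
        img[row] = img[row - 1]
    else:
        img[row] = (img[row - 1] + img[row + 1]) / 2.0
    return img

# Hypothetical image in which scan line 2 was dropped (all zeros).
image = np.array([[10, 11, 12],
                  [20, 21, 22],
                  [ 0,  0,  0],
                  [40, 41, 42]])
print(fill_dropped_line(image, row=2))
```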
For many quantitative applications of remote sensing data, it is necessary to convert the digital numbers to measurements in units which represent the actual reflectance or emittance from the surface. This is done based on detailed knowledge of the sensor response and the way in which the analog signal (i.e. the reflected or emitted radiation) is converted to a digital number, called analog-to-digital (A-to-D) conversion. By solving this relationship in the reverse direction, the absolute radiance can be calculated for each pixel, so that comparisons can be accurately made over time and between different sensors.
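A hedged sketch of inverting the A-to-D relationship; the linear gain/offset form is a common convention, and the calibration constants used here are placeholders rather than real sensor values:

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Invert a linear A-to-D relationship: radiance = gain * DN + offset.
    Gain and offset would normally come from the sensor's calibration metadata."""
    return gain * dn.astype(float) + offset

# Placeholder calibration constants and digital numbers, for illustration only.
dn = np.array([[84, 120],
               [153, 200]])
print(dn_to_radiance(dn, gain=0.055, offset=1.2))
```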
All remote sensing imagery is inherently subject to geometric distortions. These distortions may be due to several factors, including: the perspective of the sensor optics; the motion of the scanning system; the motion of the platform; the platform altitude, attitude, and velocity; the terrain relief; and the curvature and rotation of the Earth. Geometric corrections are intended to compensate for these distortions so that the geometric representation of the imagery will be as close as possible to the real world. Many of these variations are systematic or predictable in nature and can be accounted for by accurate modeling of the sensor and platform motion and the geometric relationship of the platform with the Earth. Other unsystematic, or random, errors cannot be modeled and corrected in this way. Therefore, geometric registration of the imagery to a known ground coordinate system must be performed.

The geometric registration process involves identifying the image coordinates (i.e. row, column) of several clearly discernible points, called ground control points (or GCPs), in the distorted image (A - A1 to A4), and matching them to their true positions in ground coordinates (e.g. latitude, longitude). The true ground coordinates are typically measured from a map (B - B1 to B4), either in paper or digital format. This is image-to-map registration. Once several well-distributed GCP pairs have been identified, the coordinate information is processed by the computer to determine the proper transformation equations to apply to the original (row and column) image coordinates to map them into their new ground coordinates. Geometric registration may also be performed by registering one (or more) images to another image, instead of to geographic coordinates. This is called image-to-image registration and is often done prior to performing various image transformation procedures, which will be discussed in Section 4.6, or for multitemporal image comparison.
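The sketch below illustrates one possible way to derive transformation equations from GCP pairs: a first-order (affine) transform fitted by least squares. The GCP coordinates are invented, and operational software normally uses many more GCPs and may fit higher-order polynomials.

```python
import numpy as np

# Hypothetical GCPs: (row, col) in the distorted image and (x, y) on the map.
image_rc = np.array([[100, 120], [110, 400], [380, 140], [395, 410]], dtype=float)
map_xy = np.array([[5000.0, 8000.0], [5840.0, 7985.0],
                   [5030.0, 7160.0], [5855.0, 7140.0]])

# Fit x = a0 + a1*col + a2*row and y = b0 + b1*col + b2*row by least squares.
rows, cols = image_rc[:, 0], image_rc[:, 1]
A = np.column_stack([np.ones_like(rows), cols, rows])
coef_x, *_ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)

# Map an arbitrary image coordinate into ground coordinates.
r, c = 250.0, 260.0
x = coef_x @ np.array([1.0, c, r])
y = coef_y @ np.array([1.0, c, r])
print(x, y)
```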


In order to actually geometrically correct the original distorted image, a procedure called resampling is used to determine the digital values to place in the new pixel locations of the corrected output image. The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image. There are three common methods for resampling: nearest neighbour, bilinear interpolation, and cubic convolution. Nearest neighbour resampling uses the digital value from the pixel in the original image which is nearest to the new pixel location in the corrected image. This is the simplest method and does not alter the original values, but may result in some pixel values being duplicated while others are lost. This method also tends to result in a disjointed or blocky image appearance.

Bilinear interpolation resampling takes a weighted average of the four pixels in the original image nearest to the new pixel location. The averaging process alters the original pixel values and creates entirely new digital values in the output image. This may be undesirable if further processing and analysis, such as classification based on spectral response, is to be done. If this is the case, resampling may best be done after the classification process. Cubic convolution resampling goes even further to calculate a distance weighted average of a block of sixteen pixels from the original image which surround the new output pixel location. As with bilinear interpolation, this method results in completely new pixel values. However, these two methods both produce images which have a much sharper appearance and avoid the blocky appearance of the nearest neighbour method.
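An illustrative nearest-neighbour resampling sketch, assuming a user-supplied inverse mapping from output to source coordinates (here a simple 2x enlargement); it is only a toy implementation, not the notes' own method:

```python
import numpy as np

def nearest_neighbour_resample(src, inverse_map, out_shape):
    """For each output pixel, copy the nearest source pixel, given an
    inverse mapping from output (row, col) to source (row, col)."""
    out = np.zeros(out_shape, dtype=src.dtype)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            sr, sc = inverse_map(r, c)
            sr = int(round(np.clip(sr, 0, src.shape[0] - 1)))
            sc = int(round(np.clip(sc, 0, src.shape[1] - 1)))
            out[r, c] = src[sr, sc]
    return out

# Toy example: enlarge a 2x2 image to 4x4 (the inverse map halves the indices).
src = np.array([[10, 20], [30, 40]])
print(nearest_neighbour_resample(src, lambda r, c: (r / 2.0, c / 2.0), (4, 4)))
```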

5. Write a detailed note on the image enhancement techniques.
Enhancements are used to make it easier for visual interpretation and understanding of imagery. The advantage of digital imagery is that it allows us to manipulate the digital pixel values in an image. Although radiometric corrections for illumination, atmospheric influences, and sensor characteristics may be done prior to distribution of data to the user, the image may still not be optimized for visual interpretation. Remote sensing devices, particularly those operated from satellite platforms, must be designed to cope with levels of target/background energy which are typical of all conditions likely to be encountered in routine use. With large variations in spectral response from a diverse range of targets (e.g. forest, deserts, snowfields, water, etc.), no generic radiometric correction could optimally account for and display the optimum brightness range and contrast for all targets. Thus, for each application and each image, a custom adjustment of the range and distribution of brightness values is usually necessary.
In raw imagery, the useful data often populates only a small portion of the available range of digital values (commonly 8 bits or 256 levels). Contrast enhancement involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds. The key to understanding contrast enhancements is to understand the concept of an image histogram. A histogram is a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed along the x-axis of the graph. The frequency of occurrence of each of these values in the image is shown on the y-axis.

By manipulating the range of digital values in an image, graphically represented by its histogram, we can apply various enhancements to the data. There are many different techniques and methods of enhancing contrast and detail in an image; we will cover only a few common ones here.

The simplest type of enhancement is a linear contrast stretch. This involves identifying lower and upper bounds from the histogram (usually the minimum and maximum brightness values in the image) and applying a transformation to stretch this range to fill the full range. In our example, the minimum value (occupied by actual data) in the histogram is 84 and the maximum value is 153. These 70 levels occupy less than one-third of the full 256 levels available. A linear stretch uniformly expands this small range to cover the full range of values from 0 to 255. This enhances the contrast in the image, with light toned areas appearing lighter and dark areas appearing darker, making visual interpretation much easier. This graphic illustrates the increase in contrast in an image before (left) and after (right) a linear contrast stretch.
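A small sketch of the linear contrast stretch using the 84-153 range quoted above; the random input band is invented purely for demonstration:

```python
import numpy as np

def linear_stretch(band, low=None, high=None):
    """Stretch the range [low, high] linearly onto the full 0-255 display range."""
    band = band.astype(float)
    low = band.min() if low is None else low
    high = band.max() if high is None else high
    stretched = (band - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# The notes' example range: data occupying DNs 84-153 out of 0-255.
band = np.random.randint(84, 154, size=(4, 4))
print(linear_stretch(band, low=84, high=153))
```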

A uniform distribution of the input range of values across the full range may not always be an appropriate enhancement, particularly if the input range is not uniformly distributed. In this case, a histogram-equalized stretch may be better. This stretch assigns more display values (range) to the frequently occurring portions of the histogram. In this way, the detail in these areas will be better enhanced relative to those areas of the original histogram where values occur less frequently. In other cases, it may be desirable to enhance the contrast in only a specific portion of the histogram. For example, suppose we have an image of the mouth of a river, and the water portions of the image occupy the digital values from 40 to 76 out of the entire image histogram. If we wished to enhance the detail in the water, perhaps to see variations in sediment load, we could stretch only that small portion of the histogram represented by the water (40 to 76) to the full grey level range (0 to 255). All pixels below or above these values would be assigned to 0 and 255, respectively, and the detail in these areas would be lost. However, the detail in the water would be greatly enhanced.
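Sketches of the two stretches discussed above: a histogram-equalized stretch and a stretch of only the 40-76 "water" portion of the histogram. The input values are synthetic and both implementations are simplified illustrations rather than production routines.

```python
import numpy as np

def equalized_stretch(band, levels=256):
    """Histogram-equalized stretch: assign more display levels to
    frequently occurring DNs via the cumulative histogram."""
    band = band.astype(int)
    hist = np.bincount(band.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / band.size
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[band]

def stretch_portion(band, low=40, high=76):
    """Expand only the DN range [low, high] (the notes' water example)
    to 0-255; pixels outside the range saturate to 0 or 255."""
    out = (band.astype(float) - low) / (high - low) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

band = np.random.randint(0, 256, size=(5, 5))
print(equalized_stretch(band))
print(stretch_portion(band))
```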

Spatial filtering encompasses another set of digital processing functions which are used to enhance the appearance of an image. Spatial filters are designed to highlight or suppress specific features in an image based on their spatial frequency. Spatial frequency is related to the concept of image texture, which refers to the frequency of the variations in tone that appear in an image. "Rough" textured areas of an image, where the changes in tone are abrupt over a small area, have high spatial frequencies, while "smooth" areas with little variation in tone over several pixels have low spatial frequencies.

A common filtering procedure involves moving a "window" of a few pixels in dimension (e.g. 3x3, 5x5, etc.) over each pixel in the image, applying a mathematical calculation using the pixel values under that window, and replacing the central pixel with the new value. The window is moved along in both the row and column dimensions one pixel at a time and the calculation is repeated until the entire image has been filtered and a "new" image has been generated. By varying the calculation performed and the weightings of the individual pixels in the filter window, filters can be designed to enhance or suppress different types of features.

A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image. Thus, low-pass filters generally serve to smooth the appearance of an image. Average and median filters, often used for radar imagery, are examples of low-pass filters. High-pass filters do the opposite and serve to sharpen the appearance of fine detail in an image. One implementation of a high-pass filter first applies a low-pass filter to an image and then subtracts the result from the original, leaving behind only the high spatial frequency information. Directional, or edge detection, filters are designed to highlight linear features, such as roads or field boundaries. These filters can also be designed to enhance features which are oriented in specific directions. These filters are useful in applications such as geology, for the detection of linear geologic structures.
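A minimal sketch of a 3x3 averaging (low-pass) filter, with a high-pass result obtained by subtracting it from the original as described above; the edge-padding choice and the random test image are assumptions made for the example:

```python
import numpy as np

def low_pass_3x3(image):
    """Mean filter: replace each pixel with the average of its 3x3
    neighbourhood (edges handled by padding with the edge values)."""
    img = image.astype(float)
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + img.shape[0],
                          1 + dc:1 + dc + img.shape[1]]
    return out / 9.0

image = np.random.randint(0, 256, size=(6, 6)).astype(float)
smooth = low_pass_3x3(image)   # low-pass: smooths the image
detail = image - smooth        # high-pass: original minus low-pass
print(smooth)
print(detail)
```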

6. Explain elaborately about the multispectral image classification and analysis. (May/June-2011)
A human analyst attempting to classify features in an image uses the elements of visual interpretation (discussed in Section 4.2) to identify homogeneous groups of pixels which represent various features or land cover classes of interest. Digital image classification uses the spectral information represented by the digital numbers in one or more spectral bands, and attempts to classify each individual pixel based on this spectral information. This type of classification is termed spectral pattern recognition. In either case, the objective is to assign all pixels in the image to particular classes or themes (e.g. water, coniferous forest, deciduous forest, corn, wheat, etc.). The resulting classified image is comprised of a mosaic of pixels, each of which belongs to a particular theme, and is essentially a thematic "map" of the original image.

When talking about classes, we need to distinguish between information classes and spectral classes. Information classes are those categories of interest that the analyst is actually trying to identify in the imagery, such as different kinds of crops, different forest types or tree species, different geologic units or rock types, etc. Spectral classes are groups of pixels that are uniform (or near-similar) with respect to their brightness values in the different spectral channels of the data. The objective is to match the spectral classes in the data to the information classes of interest. Rarely is there a simple one-to-one match between these two types of classes. Rather, unique spectral classes may appear which do not necessarily correspond to any information classes of particular use or interest to the analyst. Alternatively, a broad information class (e.g. forest) may contain a number of spectral sub-classes with unique spectral variations. Using the forest example, spectral sub-classes may be due to variations in age, species, and density, or perhaps as a result of shadowing or variations in scene illumination. It is the analyst's job to decide on the utility of the different spectral classes and their correspondence to useful information classes.

Common classification procedures can be broken down into two broad subdivisions based on the method used: supervised classification and unsupervised classification. In a supervised classification, the analyst identifies in the imagery homogeneous representative samples of the different surface cover types (information classes) of interest. These samples are referred to as training areas. The selection of appropriate training areas is based on the analyst's familiarity with the geographical area and their knowledge of the actual surface cover types present in the image. Thus, the analyst is "supervising" the categorization of a set of specific classes. The numerical information in all spectral bands for the pixels comprising these areas is used to "train" the computer to recognize spectrally similar areas for each class. The computer uses a special program or algorithm (of which there are several variations) to determine the numerical "signatures" for each training class. Once the computer has determined the signatures for each class, each pixel in the image is compared to these signatures and labeled as the class it most closely "resembles" digitally. Thus, in a supervised classification we are first identifying the information classes, which are then used to determine the spectral classes which represent them.
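To illustrate the supervised workflow, the sketch below uses a simple minimum-distance-to-means rule: class mean "signatures" are computed from training pixels, and each new pixel is labelled with the nearest signature. The training values and class names are invented, and real systems offer other algorithms (e.g. maximum likelihood) as well.

```python
import numpy as np

def train_signatures(pixels, labels):
    """Mean spectral 'signature' per training class (minimum-distance style)."""
    return {cls: pixels[labels == cls].mean(axis=0) for cls in np.unique(labels)}

def classify(pixels, signatures):
    """Assign each pixel to the class whose mean signature is nearest."""
    classes = list(signatures)
    means = np.stack([signatures[c] for c in classes])          # (k, bands)
    dists = np.linalg.norm(pixels[:, None, :] - means, axis=2)  # (n, k)
    return np.array(classes)[np.argmin(dists, axis=1)]

# Hypothetical two-band training pixels for 'water' and 'forest'.
train = np.array([[20, 10], [22, 12], [80, 95], [82, 90]], dtype=float)
labels = np.array(["water", "water", "forest", "forest"])
sigs = train_signatures(train, labels)
print(classify(np.array([[25.0, 15.0], [78.0, 88.0]]), sigs))
```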

Unsupervised classification in essence reverses the supervised classification process. Spectral classes are grouped first, based solely on the numerical information in the data, and are then matched by the analyst to information classes (if possible). Programs, called clustering algorithms, are used to determine the natural (statistical) groupings or structures in the data. Usually, the analyst specifies how many groups or clusters are to be looked for in the data. In addition to specifying the desired number of classes, the analyst may also specify parameters related to the separation distance among the clusters and the variation within each cluster. The final result of this iterative clustering process may result in some clusters that the analyst will want to subsequently combine, or clusters that should be broken down further, each of these requiring a further application of the clustering algorithm. Thus, unsupervised classification is not completely without human intervention. However, it does not start with a pre-determined set of classes as in a supervised classification.
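A toy k-means clustering sketch in the spirit of the unsupervised approach described above; the pixel values, the number of clusters, and the fixed iteration count are all assumptions chosen for illustration:

```python
import numpy as np

def kmeans(pixels, k=3, iterations=10, seed=0):
    """Very small k-means sketch: group pixels into k spectral clusters
    that the analyst would later label with information classes."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each pixel to its nearest cluster centre.
        dists = np.linalg.norm(pixels[:, None, :] - centres, axis=2)
        assign = np.argmin(dists, axis=1)
        # Recompute each centre as the mean of its assigned pixels.
        for c in range(k):
            if np.any(assign == c):
                centres[c] = pixels[assign == c].mean(axis=0)
    return assign, centres

# Hypothetical two-band pixel values; k is chosen by the analyst.
pixels = np.array([[20, 10], [22, 12], [80, 95],
                   [82, 90], [50, 52], [48, 55]], dtype=float)
assignments, centres = kmeans(pixels, k=3)
print(assignments)
print(centres)
```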
**********

