
ABSTRACT

This thesis presents a comparative study of three hybrid image fusion techniques used for pan sharpening of panchromatic (PAN) and multispectral (MS) satellite images. The hybrid fusion techniques are: the combination of the Brovey transform and the guided filter (BTGF), the combination of the IHS transform and the guided filter (IHSGF), and the combination of the wavelet transform and the guided filter (WTGF). In all, six performance parameters are considered to evaluate the fusion methods under consideration: Spectral Angle Mapper (SAM), Cross Correlation (CC), Root Mean Square Error (RMSE), Peak Signal to Noise Ratio (PSNR), Standard Deviation (SD), and Structural Similarity Index (SSIM). The experimental study is conducted in the MATLAB R2013b environment.
CHAPTER 1
INTRODUCTION
1.1 INSPIRATION FOR IMAGE FUSION RESEARCH
In the present scenario, the inspiration for image fusion research comes from recent improvements in the remote sensing field. Because new image sensors offer high resolution at comparatively low cost, multiple sensors are now used in a wide range of imaging applications. These sensors provide high spatial and spectral resolution together with fast scan rates. The images taken by these sensors are more reliable and informative and give a complete picture of the scanned environment, so they improve the performance of dedicated imaging systems. Over the past decade, remote sensing, medical imaging, surveillance systems, and similar application areas have benefited from such multi-sensor setups. As the number of sensors in an application grows, a proportionally larger amount of image data is collected, and deploying additional sensors is justified only by a corresponding increase in the processing power of the system. A sensor captures multiple images of a location, and one of them is selected for analysis. However, the selected image may not have both good spatial and good spectral resolution. To overcome this and to generate a fused image with high spatial and spectral resolution, this work identifies the need for image fusion and develops methods to improve the performance of current fusion techniques.

Image fusion is the process of combining two or more images to form a new image that contains the salient information from the source images; that is, unique information must be preserved and artifacts must be minimized in the fused image. The goal of image fusion is to increase the spatial and spectral resolution obtainable from several low-resolution images. For this reason, image fusion has become a very interesting subject for many researchers [1], [2].

Today image fusion finds application in areas such as aerial and satellite imaging, avionics, clinical imaging, concealed weapon detection, multi-focus image fusion, digital camera applications, defense situational awareness, surveillance, target tracking, intelligence gathering, battlefield monitoring, personal authentication, and geo-informatics.

Satellites are objects in orbit around the Earth or around the Sun, and many satellites are present in space. They are of two types: natural and artificial. Natural satellites are objects that orbit another object in space, e.g., the Moon, the Earth (which orbits the Sun), and comets. Artificial satellites are man-made objects placed in orbit, and they are important to life on Earth.

There are six kinds of artificial satellites.

1. Communication satellites: They relay several varieties of radio waves to various spots around the world, helping us communicate globally.
2. Resource satellites: They help scientists monitor natural resources by taking snapshots, which the scientists later plot on maps. These maps show things such as underground oil, polluted air, precious natural resources, and so on.
3. Navigation satellites: They capture signals from ships and aircraft and relay them to emergency response stations. Pilots, ships' captains, and sailors use this information to determine where they are and where they are heading.
4. Military satellites: They assist the military in navigation, communication, and surveillance of other nations. They collect images and pictures, as well as radio signals sent by other nations.
5. Scientific satellites: They study the Sun, the planets, other solar systems, and deep space. They help scientists study the Earth and outer space and locate asteroids, comets, black holes, and so on.
6. Weather satellites: They help scientists study variations in climatic conditions; they can track storms and support weather prediction.

Remote sensing images offer a better way to understand the Earth's environment by providing huge quantities of data acquired from man-made satellites, aircraft, and Synthetic Aperture Radar (SAR). Remote sensing images differ from natural images in that they cover massive areas and a wide variety of natural, man-made, and military objects. Modern sensors have high resolution and can sense many objects with distinctive shapes, edges, and contours, so these images are more reliable, carrying information in both high-frequency and low-frequency bands.
In satellite imaging, two forms of images are available:

1. Panchromatic images (PAN): images captured over a broad visual wavelength range but rendered in black and white. A PAN image is acquired at high spatial resolution, which depends on the satellite: for example, 5.8 m per pixel (IRS), 10 m per pixel (SPOT), and 1 m per pixel (IKONOS).
2. Multispectral images (MS): images acquired optically in a couple of spectral wavelength intervals. An MS image is acquired at much lower spatial resolution, which again depends on the satellite: for example, 23.5 m per pixel (IRS), 20 m per pixel (SPOT), and 4 m per pixel (IKONOS).

1.3. LEVELS OF IMAGE FUSION:


1.3.1. Pixel level image fusion:
This is fusion at the lowest possible level of abstraction, where the data collected from two different sources are fused directly. In image fusion, the collected data are the pixels of the images from the individual sources. Fusion at this stage is beneficial because it uses original data that are closest to reality. The images are fused on a pixel-by-pixel basis after being co-registered at exactly the same resolution level. Usually the collected images are geo-coded before the fusion process, because pixel-level fusion requires accurate registration of the images to be merged. Accurate registration requires re-sampling and geometric correction, and there are numerous approaches to re-sampling and registration. Geometric correction requires knowledge of the sensor viewing parameters, using software that takes into account the image acquisition geometry and Ground Control Points (GCPs). GCPs are landscape features whose actual locations on the ground are known; they may occur naturally, e.g., road intersections and coastal features, or may be purposely introduced for the task of geometric correction. In some instances, where the surface is highly irregular, a Digital Elevation Model (DEM) is needed. This is particularly important for SAR data processing, whose sensors have a side-looking, i.e., oblique, viewing geometry: the oblique radar waves strike bumps on uneven terrain instead of the intended region of the surface. Image fusion at this level has the highest requirements on computer memory and processing power and takes the longest time.
1.3.2. Feature level image fusion:
This technique combines the datasets, i.e., the images, at an intermediate level of abstraction. Feature-level fusion is appropriate only if the features extracted from the different data sources can be related to one another; for instance, features such as edges and segments can be extracted from both optical and SAR images and then combined to work out joint features and classes. SAR images offer textural information that complements the spectral information from optical images. Accordingly, texture features extracted from SAR images and spectral features extracted from MS images may be fused before a classifier processes them. [3] fuses a hyperspectral image with a high-resolution image at the feature level. Some works suggest fusing different kinds of features extracted from the same image before classification: for instance, [4] fuses texture features for the classification of very high-resolution remote sensing images, and [5] fuses different texture features extracted from SAR images.

1.3.3. Decision level image fusion:
It is not always necessary to perform fusion at exactly one of the three levels. Fusion can happen at any or all of the three levels, and there are examples of systems that permit fusion of image and non-image data at a couple of levels of inference [4]. [5] applies multi-stage fusion to multispectral image sequences for target detection. [6] suggests a multi-level image fusion scheme that performs fusion at all three stages and reports substantially better results when the fusion is carried out at the first two stages (i.e., pixel and feature level) together than when it is executed at any single stage alone. However, multi-level fusion may also take the other forms mentioned in the succeeding discussion.

1.4 GENERIC REQUIREMENTS OF IMAGE FUSION

After an in-depth and critical analysis of the literature, the present study found that in designing an image fusion system one needs to take care of the following requirements:

1. The fused image should preserve all relevant information carried in the input images.
2. The fusion procedure must not introduce any artifacts that could confuse a human viewer or any of the subsequent image processing stages.
3. The fused image should suppress irrelevant features and noise to the maximum extent.
4. The fusion process should increase the amount of relevant information in the fused image, while diminishing the amount of irrelevant information, uncertainty, and redundancy in it.
CHAPTER 2

LITERATURE REVIEW

Claire Thomas, Thierry Ranchin et al. [1] introduced a framework for synthesizing a high-resolution multispectral image from lower-resolution multispectral and panchromatic images. They surveyed current fusion processes such as substitution-based methods, relative spectral contribution techniques, and ARSIS-based methods, together with the advantages and disadvantages of each technique.

Henrik Aanaes, Johannes R. Sveinsson et al. [2] introduced an approach to pixel-level satellite image fusion derived from the imaging sensor model. Pixel neighborhood regularization is offered for regularizing the proposed approach. The algorithm was tested on QuickBird, IKONOS, and Meteosat data sets. The performance evaluation metrics used are Root Mean Square Error (RMSE), Cross Correlation (CC), Structural Similarity Index (SSIM), and Q4. The authors showed that the proposed technique compares favorably with many existing techniques.

Faming Fang, Fang Li et al. [3] proposed a variational image fusion technique based on three assumptions, among them: (i) the gradient of the PAN image is a combination of those of the image bands used in the pan-sharpened image; (ii) the spectrum of the fused image should resemble that of the low-resolution MS image. The algorithm was tested on QuickBird and IKONOS data sets. The performance evaluation parameters are RMSE, CC, Spectral Angle Mapper (SAM), and Spatial Frequency (SF).

Xinghao Ding, Yiyong Jiang et al. [4] proposed a Bayesian nonparametric dictionary learning model for image fusion. The proposed strategy does not require the original MS image for dictionary learning; instead, it directly uses the reconstructed images. The algorithm was tested on IKONOS, Pleiades, and QuickBird data sets. The performance evaluation metrics used are RMSE, CC, ERGAS, and Q4.

S. Li and B. Yang [5] formulated the image fusion problem using compressed sensing theory. First, a degradation model is constructed that treats the low-resolution MS and high-resolution PAN images as the result of a sampling process, so that image fusion is converted into a restoration problem; a sparse reconstruction algorithm is then used to solve the restoration problem. QuickBird and IKONOS satellite images were used to analyze the image fusion algorithm. The performance evaluation parameters used are CC, SAM, RMSE, ERGAS, and Q4.

F. Palsson, J. R. Sveinsson et al. [6] proposed a model-based image fusion procedure. The model relies on the assumptions that a linear combination of the bands of the fused image gives the panchromatic image and that downsampling the fused image gives the multispectral image. The algorithm was tested using QuickBird data sets, and the performance evaluation metrics used are SAM, ERGAS, CC, and Q4.

S. Leprince, S. Barbot et al. [7] proposed a method to automatically co-register optical satellite images for ground deformation measurement. Using the proposed method, the images are co-registered with 1/50-pixel accuracy. The algorithm was tested on SPOT satellite images both in the absence of coseismic deformation and in the presence of large coseismic deformation.

M. L. Uss, B. Vozel et al. [8] proposed a new performance bound for objectively assessing image registration approaches. The proposed lower bound involves a geometric transformation assumed between the reference and template images. The experimental results showed that this lower bound characterizes the performance of traditional estimators more effectively than other bounds reported in the literature.

Y. Peng, A. Ganesh et al. [9] proposed an image registration technique called Robust Alignment by Sparse and Low-rank decomposition for linearly correlated images (RASL), which efficiently co-registers linearly correlated satellite images. The accuracy of the introduced approach is very high, and it efficiently co-registers data sets over a wide range of practical misalignments and distortions.

Miloud Chikr El-Mezouar, Nasreddine Taleb et al. [10] introduced another fusion approach that produces images with natural colors. Additionally, in this strategy, a high-resolution normalized difference vegetation index is proposed and used for outlining the vegetation. The method is performed in two stages: MS fusion using the IHS method, followed by vegetation enhancement. Vegetation enhancement is essentially a correction step, and it depends on the application. The new methodology gives good results in terms of objective quality metrics. Furthermore, visual examination demonstrates that the idea behind the proposed approach is promising and that it improves fusion quality by enhancing vegetated zones.
M. E. Nasr, S. M. Elkaffas et al. [11] proposed an image fusion method based on integrating the Intensity Hue Saturation (IHS) transform with the Discrete Wavelet Frame Transform (DWFT), designed to improve the quality of remote sensing images. PAN and MS images taken from the Landsat-7 (ETM+) satellite were fused using the proposed methodology. Experimental results show that the introduced method gives the best results for both the spatial and the spectral quality of the fused image. In addition, the procedure can be applied to noisy and de-noised remote sensing images while preserving the quality of the fused image. Comparisons with various fusion procedures also show that the proposed method beats the alternative methods.

Xia Chun-lin, Deng Jie et al. [12] presented another fusion technique, the PWI transform. First, the multispectral image is transformed by IHS, and the brightness component I thus obtained is transformed by PCA to extract the first principal component, PC1. Using the wavelet transform, PC1 and the panchromatic image are fused, and the result replaces the brightness component of the multispectral image. Finally, the new multispectral image is obtained by the inverse IHS transform. The new strategy is better than any single one of the three fusion strategies (IHS, wavelet transform, and PCA): it upgrades the rendering of spatial detail to a greater extent while retaining the spectral information of the multispectral image well.

Hamid Reza Shahdoosti and Hassan Ghassemian [13] introduced the design of an optimal filter that can extract relevant and non-redundant information from the PAN image. The optimal filter coefficients, extracted from the statistical properties of the images, are more consistent with the type and texture of remotely sensed images than other kernels such as wavelets. Visual and statistical assessments show that the proposed algorithm clearly improves fusion quality in terms of correlation coefficient, relative dimensionless global error in synthesis, and spectral angle mapper, compared with methods including improved intensity-hue-saturation, multiscale Kalman filter, Bayesian, improved nonsubsampled contourlet transform, and sparse image fusion.

Jianwen Hu and Shutao Li [14] provide a novel technique based on a multiscale dual bilateral filter to combine a high-spatial-resolution PAN image with a high-spectral-resolution MS image. Compared with multiresolution-based strategies, this detail extraction procedure treats the characteristics of the PAN and MS images on the same footing. The lower-resolution MS image is resampled to the size of the high-resolution PAN image and then sharpened by injecting the extracted detail information. The proposed fusion technique was tested on QuickBird and IKONOS images and compared with three mainstream strategies.

Qian Zhang, Zhiguo Cao et al. [15] proposed a joint iterative optimization method that considers both the registration and the fusion stages for panchromatic (PAN) and multispectral (MS) images. Given a registration strategy and a fusion technique, the joint optimization is posed as finding the optimal registration parameters that yield the optimal fusion performance. In this methodology, the downhill simplex algorithm is adopted to refine the registration parameters iteratively. Experiments on pairs of PAN and MS images from ZY-3 and GeoEye-1 demonstrate that the proposed method beats several competing ones in terms of registration accuracy and fusion quality.
CHAPTER 3
METHODOLOGY

3.1 Brovey Transform:

The Brovey transform (BT) is a ratio-based fusion method that preserves the relative spectral contributions of each pixel but replaces its overall brightness with that of the high-resolution panchromatic image. It is computed by the equation:


$$
\begin{bmatrix} R_{BT} \\ G_{BT} \\ B_{BT} \end{bmatrix}
= \frac{PAN}{I}\,
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
\tag{1}
$$

where $I$ is the intensity of the MS image (commonly the mean of the three bands, $I = (R+G+B)/3$).

From equation (1) it is obvious that BT is a very simple fusion strategy, requiring only arithmetic operations and no statistical analysis or filter design. Owing to this efficiency, it can achieve fast fusion of IKONOS/QuickBird imagery. However, color distortion is frequently produced in the fused images, and since the beginning of work on this fusion technique, color distortion has been a critical problem for practical applications [24].
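As a concrete illustration, the following is a minimal NumPy sketch of the Brovey transform of Eq. (1). The thesis experiments were carried out in MATLAB R2013b, so this Python version, its function name, and the epsilon guard against division by zero are illustrative assumptions rather than the thesis code.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey-transform pan sharpening per Eq. (1).

    ms  : float array of shape (H, W, 3), low-resolution MS image
          already resampled to the PAN grid, bands in R, G, B order.
    pan : float array of shape (H, W), high-resolution PAN image.
    """
    # Intensity I is taken as the per-pixel mean of the three MS bands.
    intensity = ms.mean(axis=2)
    # Scale every band by PAN / I; eps avoids division by zero.
    ratio = pan / (intensity + eps)
    fused = ms * ratio[:, :, np.newaxis]
    return np.clip(fused, 0.0, 1.0)  # assuming reflectances in [0, 1]
```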

3.2 IHS Transform:

The color framework with red, green, and blue channels (RGB) is generally used in computer screens to display color images. Another color framework in common use describes color in terms of Intensity, Hue, and Saturation (IHS): the intensity represents the amount of light in a color, the hue is the attribute of color determined by its wavelength, and the saturation is the purity of the color [10]. Whatever algorithm is used, the IHS transform is always applied to an RGB combination, which means the fusion is applied to three bands of the MS image. From this transform we obtain the Intensity, Hue, and Saturation components, and the PAN image then replaces the intensity image. Before doing this, to limit the change in spectral information of the fused MS image with respect to the original MS image, the histogram of the panchromatic image is matched to that of the intensity image. Applying the inverse transform, we get the fused RGB image, with the spatial detail of the PAN image fused into it [9], [10].

$$
\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix}
=
\begin{bmatrix}
\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\
\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{-2}{\sqrt{6}} \\
\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
\tag{2}
$$

$$
H = \tan^{-1}\!\left(\frac{V_2}{V_1}\right)
\tag{3}
$$

$$
S = \sqrt{V_1^2 + V_2^2}
\tag{4}
$$

Replacing I by the PAN image and taking the inverse transform, shown below in equation (5), gives the fused MS image.

$$
\begin{bmatrix} MS_1^H \\ MS_2^H \\ MS_3^H \end{bmatrix}
=
\begin{bmatrix}
1 & \frac{-1}{\sqrt{6}} & \frac{3}{\sqrt{6}} \\
1 & \frac{-1}{\sqrt{6}} & \frac{-3}{\sqrt{6}} \\
1 & \frac{2}{\sqrt{6}} & 0
\end{bmatrix}
\begin{bmatrix} PAN \\ V_1 \\ V_2 \end{bmatrix}
\tag{5}
$$
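The following NumPy sketch illustrates IHS component substitution (Eqs. (2)-(5)). It is an assumption-laden illustration, not the thesis code: the inverse transform is computed numerically from the forward matrix rather than hard-coding Eq. (5), and histogram matching of PAN to I is assumed to have been done beforehand.

```python
import numpy as np

# Forward transform of Eq. (2); rows follow the matrix used in the text.
FWD = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
                [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
                [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])

def ihs_fusion(ms, pan):
    """IHS pan sharpening: replace I with PAN and invert (Eqs. 2-5).

    ms  : (H, W, 3) float array, MS image resampled to the PAN grid.
    pan : (H, W) float array, PAN image (histogram-matched to I).
    """
    h, w, _ = ms.shape
    rgb = ms.reshape(-1, 3).T              # 3 x (H*W) band matrix
    i_v1_v2 = FWD @ rgb                    # forward transform
    i_v1_v2[0] = pan.reshape(-1)           # substitute intensity by PAN
    fused = np.linalg.inv(FWD) @ i_v1_v2   # numerical inverse transform
    return fused.T.reshape(h, w, 3)
```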

3.3 PCA method for image fusion:

PCA, also referred to as the Karhunen-Loeve (K-L) transform, is an extremely helpful and well-understood method for compressing the dimensionality of correlated MS data. Usually, the first principal component (PC1) gathers the information that is common to every band used as input to the PCA, which makes PCA very suitable for fusing MS and PAN images. In this case, every band of the original MS image forms the input data, and the transformation yields new, non-correlated bands. PC1 is replaced by the PAN image, whose histogram has previously been matched to that of PC1. Finally, the inverse transformation is applied to the whole dataset formed by the substituted PAN image and PC2, ..., PCn, so that the PAN detail is incorporated into the result [11].
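A minimal sketch of this PC1-substitution scheme is given below, assuming float inputs and that the PAN image has already been histogram-matched to PC1; function and variable names are illustrative.

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA pan sharpening: substitute PC1 by the (matched) PAN image.

    ms  : (H, W, B) float array, MS bands resampled to the PAN grid.
    pan : (H, W) float array, PAN image histogram-matched to PC1.
    """
    h, w, b = ms.shape
    x = ms.reshape(-1, b)
    mean = x.mean(axis=0)
    xc = x - mean
    # Eigendecomposition of the band covariance gives the PCA basis.
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, ::-1]                      # PCs by decreasing variance
    pcs = xc @ vecs                           # project bands onto PCs
    pcs[:, 0] = pan.reshape(-1) - pan.mean()  # replace PC1 with PAN
    fused = pcs @ vecs.T + mean               # inverse transform
    return fused.reshape(h, w, b)
```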

3.4 Wavelet transform based fusion:

The wavelet transform suits image fusion not only because it enables one to fuse image features separately at various scales, but also because it produces large coefficients near edges in the transformed image and thus reveals relevant spatial detail [12]. The wavelet transform decomposes the signal over a set of basis functions. Wavelets are defined by two families of functions:

 Wavelet function

 Scale function

The wavelet function is described as the "mother wavelet" and the scaling function as the "father wavelet"; translations and dilations of these parent wavelets yield the "daughter" and "son" wavelets. For the one-dimensional case, the continuous wavelet transform of a function can be stated as:

$$
WT(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt
\tag{6}
$$

Here $WT(a,b)$ denotes the wavelet coefficient of the function $f(t)$; $a$ ($a > 0$) and $b$ are the scaling and translation parameters, respectively. Every basis function is a scaled and translated version of the function $\psi(t)$, known as the mother wavelet.

Wavelet-based image fusion methods mostly rely on two algorithms:

 The Mallat algorithm [13] and

 The à trous algorithm [14].


The Mallat algorithm is based on the dyadic wavelet transform (WT), which uses decimation, is not shift invariant, and exhibits artifacts in the fused image due to aliasing [15]. The WT strategy permits the decomposition of an image into a set of wavelet and approximation planes, according to the theory of multiresolution wavelet transforms provided by Mallat. Every wavelet plane holds wavelet coefficients, and the coefficient amplitudes characterize the scale and information of local features. Wavelet coefficients are computed by means of the following equation:

$$
w_j(k,l) = P_{j-1}(k,l) - P_j(k,l)
\tag{7}
$$

for $j = 1, \ldots, N$, where $j$ is the scale index and $N$ is the number of decompositions; $P_0(k,l)$ refers to the original image $p(k,l)$, and $P_j(k,l)$ is the filtered image produced by:

$$
P_j(k,l) = \sum_m \sum_n h(m,n)\, P_{j-1}\!\left(n + 2^{j-1}k,\; m + 2^{j-1}l\right)
\tag{8}
$$

where $h(m,n)$ are the filter coefficients.

$$
WT_j(k,l) = P_{j-1}(k,l) - P_j(k,l)
\tag{9}
$$


for $j = 1, 2, \ldots, N$, where $j$ is the scale index and $N$ is the number of decompositions; $P_0(k,l)$ refers to the original ETM+ image $P(k,l)$, and $P_j(k,l)$ is the filtered image.

Figure 1. Three level decomposition using wavelet transform


3.4.1. Wavelet based fusion scheme:

Useful features in an image are generally larger than a single pixel, so fusion rules based on one pixel may not be the most accurate. Rules based on the neighborhood characteristics of pixels are more accurate: this type of rule uses local area features around each pixel to guide the choice of coefficients at every location. The neighborhood window is set to 3x3 in this work. Assume A and B are the two high-frequency images to be fused and F is the fusion result. Then:

$$
F(x,y) = A(x,y) \quad \text{if } \sigma_A(x,y) \ge \sigma_B(x,y)
\tag{10}
$$

$$
F(x,y) = B(x,y) \quad \text{if } \sigma_A(x,y) < \sigma_B(x,y)
\tag{11}
$$

where $\sigma_A$ and $\sigma_B$ are the local standard deviations over the 3x3 window.
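The following PyWavelets/NumPy sketch implements a 3-level version of this scheme: detail subbands take the coefficient with the larger 3x3 local deviation (Eqs. (10)-(11)), while approximation coefficients are averaged. The averaging rule for the approximation band, the `db2` wavelet, and all names are assumptions for illustration, not the thesis configuration.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_std(img, size=3):
    """Standard deviation over a size x size neighborhood."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def wavelet_fusion(a, b, wavelet="db2", level=3):
    """Fuse two registered same-size images with a 3-level DWT."""
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]        # approximation: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        bands = []
        for sa, sb in ((ha, hb), (va, vb), (da, db)):
            # Activity rule of Eqs. (10)-(11): keep the coefficient
            # whose local deviation is larger.
            mask = local_std(sa) >= local_std(sb)
            bands.append(np.where(mask, sa, sb))
        fused.append(tuple(bands))
    # Output may be a pixel larger than the input for odd sizes.
    return pywt.waverec2(fused, wavelet)
```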

3.5 Guided filter based fusion method [25]:

3.5.1 Two-scale decomposition:

As shown in Fig. 3, the source images are first decomposed into two-scale representations by average filtering. The base layer of each source image is obtained as follows:

$$
M_n = I_n * A
\tag{12}
$$

where $I_n$ is the nth source image and $A$ is an averaging filter of size 31x31. Once the base layer is obtained, the detail layer is simply obtained by subtracting the base layer from the source image:

$$
N_n = I_n - M_n
\tag{13}
$$

The two-scale decomposition thus separates every source image into a base layer holding large-scale intensity variations and a detail layer containing small-scale information.
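A minimal sketch of Eqs. (12)-(13), assuming floating-point grayscale inputs; `uniform_filter` plays the role of the 31x31 averaging filter $A$.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_decompose(img, size=31):
    """Split an image into base and detail layers (Eqs. 12-13)."""
    base = uniform_filter(img.astype(float), size)  # M_n = I_n * A
    detail = img - base                             # N_n = I_n - M_n
    return base, detail
```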

3.5.2 Construction of weight maps by guided filtering:

First, Laplacian filtering is applied to each source image to obtain the high-pass image $H_n$:

$$
H_n = I_n * L
\tag{14}
$$

where $L$ is a 3x3 Laplacian filter.

The local average of the absolute value of $H_n$ is used to build the saliency map $S_n$:

$$
S_n = |H_n| * F_{r_g, \sigma_g}
\tag{15}
$$

where $F_{r_g,\sigma_g}$ is a Gaussian low-pass filter of size $(2r_g + 1) \times (2r_g + 1)$; in the present setting $r_g$ and $\sigma_g$ are both set to 5. The saliency maps computed this way give a good characterization of the saliency level of the detail information. The saliency maps are then compared to determine the weight maps as follows:

$$
O_n^k =
\begin{cases}
1, & \text{if } S_n^k = \max\left(S_1^k, S_2^k, \ldots, S_N^k\right) \\
0, & \text{otherwise}
\end{cases}
\tag{16}
$$

where $N$ is the number of source images and $S_n^k$ is the saliency value of pixel $k$ in the nth image. However, the weight maps obtained this way are usually noisy and not aligned with object boundaries (see Fig. 3), which would introduce artifacts into the fused image. Enforcing spatial consistency is an accurate and powerful way to resolve these issues. Spatial consistency means that if two adjacent pixels have similar brightness or color, they should have similar weights. A popular spatial-consistency-based fusion approach formulates an energy function in which the pixel saliencies are encoded and aligned weights are encouraged by regularization terms, e.g., smoothness terms. This energy function can then be minimized to obtain the desired weight maps. However, such optimization-based approaches are usually somewhat inefficient.


Within this report, most fascinating approach such as optimization based approach is proposed.

Every single wait map Pn along with origin source image allows to perform guided image

filtering on it.

$$
W_n^M = G_{r_1,\epsilon_1}(O_n, I_n)
\tag{17}
$$

$$
W_n^N = G_{r_2,\epsilon_2}(O_n, I_n)
\tag{18}
$$

where $r_1$, $\epsilon_1$, $r_2$, and $\epsilon_2$ are the parameters of the guided filter, and $W_n^M$ and $W_n^N$ are the resulting weight maps for the base and detail layers, respectively. Finally, the values of the $N$ weight maps are normalized so that they sum to one at each pixel $k$.
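For reference, the grayscale guided filter $G_{r,\epsilon}$ used in Eqs. (17)-(18) can be sketched with box filters, following He et al.'s formulation. This is an illustrative NumPy version with our own function and parameter names, not the thesis implementation (the experiments were in MATLAB).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius, eps):
    """Grayscale guided filter: smooths src while following guide edges."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gg = uniform_filter(guide * guide, size)
    corr_gs = uniform_filter(guide * src, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)   # per-window linear coefficients
    b = mean_s - a * mean_g
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b
```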

3.5.3 Two-scale image reconstruction:

Two-scale image reconstruction consists of the following steps. First, the base and detail layers of the different source images are fused together by weighted averaging:

$$
\bar{B} = \sum_{n=1}^{N} W_n^M M_n
\tag{19}
$$

$$
\bar{D} = \sum_{n=1}^{N} W_n^N N_n
\tag{20}
$$

Then the fused image is obtained by combining the fused base and detail layers:

$$
R = \bar{B} + \bar{D}
\tag{21}
$$
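A short sketch of Eqs. (19)-(21) is given below; it assumes the layers and the guided-filtered weight maps have already been computed (e.g., with the earlier sketches) and performs only the per-pixel normalization and weighted sums.

```python
import numpy as np

def reconstruct(base_layers, detail_layers, w_base, w_detail, eps=1e-12):
    """Two-scale reconstruction of Eqs. (19)-(21).

    All arguments are lists of N same-shaped 2-D arrays: the base and
    detail layers of the N sources and their guided-filtered weights.
    """
    wb = np.stack(w_base)
    wd = np.stack(w_detail)
    wb = wb / (wb.sum(axis=0) + eps)   # normalize weights to sum to 1
    wd = wd / (wd.sum(axis=0) + eps)
    b_bar = (wb * np.stack(base_layers)).sum(axis=0)    # Eq. (19)
    d_bar = (wd * np.stack(detail_layers)).sum(axis=0)  # Eq. (20)
    return b_bar + d_bar                                # Eq. (21)
```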

3.6 Proposed hybrid image fusion methods: This thesis presents a comparative study of three hybrid image fusion methods, namely:

i) Brovey transform with guided filter hybrid image fusion (BTGF)

ii) IHS transform with guided filter image fusion (IHSGF)

iii) Wavelet transform with guided filter image fusion (WTGF)


3.6.1 Brovey Transform with Guided filter (BTGF): The detailed steps of the fusion procedure are given below (a code sketch follows the list):

I. Take the panchromatic and multispectral images.

II. Preprocess both images (ortho-rectification and geo-rectification) using the ERDAS tool.

III. Apply guided filter fusion to the PAN and MS images to generate a high-resolution MS image called GMS.

IV. Separate the R, G, B components from the MS image.

V. Use the GMS image in the Brovey transform to generate new R', G', B' components.

VI. Generate the fused MS image from the new R', G', B' components using the ERDAS tool.
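The pipeline below sketches one plausible reading of these steps, in which the GMS intensity plays the role of PAN in Eq. (1). The callables `guided_fuse` and `brovey` are hypothetical placeholders for implementations of Sec. 3.5 and Eq. (1) (e.g., the earlier sketches); the preprocessing of steps I-II is assumed done externally.

```python
import numpy as np

def btgf(pan, ms, guided_fuse, brovey):
    """Sketch of the BTGF pipeline (steps III-VI above).

    pan         : (H, W) PAN image, co-registered with ms.
    ms          : (H, W, 3) MS image resampled to the PAN grid.
    guided_fuse : callable fusing PAN with one MS band (Sec. 3.5).
    brovey      : callable implementing Eq. (1), e.g. brovey_fusion.
    """
    # Step III: guided filter fusion of PAN with each MS band -> GMS.
    gms = np.dstack([guided_fuse(pan, ms[..., b]) for b in range(3)])
    # Steps IV-VI: Brovey transform driven by the GMS intensity
    # (assumption: GMS intensity substitutes for PAN in Eq. (1)).
    return brovey(ms, gms.mean(axis=2))
```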

3.6.2. IHS Transform with Guided filter (IHSGF): The detailed steps of the fusion procedure are given below (a code sketch follows the list):

I. Take the panchromatic and multispectral images.

II. Preprocess both images (ortho-rectification and geo-rectification) using the ERDAS tool.

III. Apply guided filter fusion to the PAN and MS images to generate a high-resolution MS image called GMS.

IV. Apply the IHS transform to the MS and GMS images, extracting the intensity component I from the MS image and I' from the GMS image.

V. Replace the I component of the MS image with the I' component from the GMS image.

VI. Inverse transform the components I', H, S to get the fused image.
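A compact sketch of steps IV-VI follows; `ihs_forward` and `ihs_inverse` are hypothetical callables wrapping Eqs. (2)-(5), and the GMS image is assumed already computed per step III.

```python
def ihsgf(ms, gms, ihs_forward, ihs_inverse):
    """Sketch of the IHSGF pipeline (steps IV-VI above).

    ms, gms     : (H, W, 3) MS image and guided-filter-fused GMS image.
    ihs_forward : callable RGB -> (I, H, S) per Eqs. (2)-(4).
    ihs_inverse : callable (I, H, S) -> fused RGB image.
    """
    _, hue, sat = ihs_forward(ms)          # step IV: H, S of original MS
    i_prime, _, _ = ihs_forward(gms)       # step IV: intensity I' of GMS
    return ihs_inverse(i_prime, hue, sat)  # steps V-VI: substitute, invert
```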


3.6.3. Wavelet transform with Guided filter (WTGF): The detailed steps of the fusion procedure are given below (a code sketch follows the list):

I. Take the panchromatic and multispectral images.

II. Preprocess both images (ortho-rectification and geo-rectification) using the ERDAS tool.

III. Apply guided filter fusion to the PAN and MS images to generate a high-resolution MS image called GMS.

IV. Apply the three-level wavelet transform to the MS and GMS images and fuse the decomposed components using the wavelet fusion rules of Section 3.4.1.

V. Apply the inverse wavelet transform to the fused components to get the high-resolution MS image.
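Steps IV-V can be sketched per band as below; `wavelet_fuse` is a placeholder for a single-band implementation of the Sec. 3.4.1 rule (e.g., the `wavelet_fusion` sketch shown earlier), and the GMS image is assumed already computed per step III.

```python
import numpy as np

def wtgf(ms, gms, wavelet_fuse):
    """Sketch of the WTGF pipeline (steps IV-V above).

    ms, gms      : (H, W, 3) MS image and guided-filter-fused GMS image.
    wavelet_fuse : callable fusing two single-band images with the
                   3-level wavelet rule of Sec. 3.4.1.
    """
    # Fuse each MS band with the matching GMS band in the wavelet
    # domain, then stack the fused bands back together.
    return np.dstack([wavelet_fuse(ms[..., b], gms[..., b])
                      for b in range(3)])
```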

3.7 Performance Measurement Parameters:

The six parameters below compare the fused image against the reference image; $u$ and $v$ denote corresponding pixel spectra with $L$ bands, $x$ and $y$ (or $I_r$ and $I_f$) the reference and fused images, and $M \times N$ the image size. A code sketch of these parameters follows the equations.

1. Spectral Angle Mapper:

$$
\mathrm{SAM}(u,v) = \cos^{-1}\!\left[\frac{\sum_{i=1}^{L} u_i v_i}{\sqrt{\sum_{i=1}^{L} u_i^2}\,\sqrt{\sum_{i=1}^{L} v_i^2}}\right]
\tag{22}
$$

2. Cross Correlation:

$$
\mathrm{CC} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-\bar{x})(y_{i,j}-\bar{y})}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-\bar{x})^{2}\,\sum_{i=1}^{M}\sum_{j=1}^{N}(y_{i,j}-\bar{y})^{2}}}
\tag{23}
$$

3. Root Mean Square Error:

$$
\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-y_{i,j})^{2}}{M \times N}}
\tag{24}
$$

4. Peak Signal to Noise Ratio (here $L$ is the dynamic range of the image):

$$
\mathrm{PSNR} = 20\log_{10}\!\left[\frac{L^{2}}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j)-I_f(i,j)\right)^{2}}\right]
\tag{25}
$$

5. Standard Deviation:

$$
\mathrm{SD} = \sqrt{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(BR(i,j)-\mu\right)^{2}}{M \times N}}
\tag{26}
$$

6. Structural Similarity Index:

$$
\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
\tag{27}
$$
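The NumPy sketch below implements these parameters under stated assumptions: inputs are float arrays; SSIM is computed over a single global window rather than the usual sliding window; and PSNR uses the conventional $10\log_{10}(L^2/\mathrm{MSE})$ form rather than the factor 20 written in Eq. (25). Function names and constants are ours.

```python
import numpy as np

def sam(u, v):
    """Mean spectral angle (radians); band axis last, per Eq. (22)."""
    num = (u * v).sum(axis=-1)
    den = np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1)
    return np.mean(np.arccos(np.clip(num / (den + 1e-12), -1.0, 1.0)))

def cc(x, y):
    """Cross correlation, Eq. (23)."""
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())

def rmse(x, y):
    """Root mean square error, Eq. (24)."""
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, peak=1.0):
    """PSNR in dB; peak is the dynamic range L."""
    return 10.0 * np.log10(peak**2 / np.mean((x - y) ** 2))

def sd(x):
    """Standard deviation, Eq. (26)."""
    return np.sqrt(np.mean((x - x.mean()) ** 2))

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM, Eq. (27); constants assume data in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```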

CHAPTER 4

RESULTS AND DISCUSSION


Figure 1. Outcomes for data set 1: (a) original multispectral image, (b) original panchromatic image, (c) result of the BTGF method, (d) result of the IHSGF method, (e) result of the WTGF method.

Figure 2. Outcomes for data set 2: (a) original multispectral image, (b) original panchromatic image, (c) result of the BTGF method, (d) result of the IHSGF method, (e) result of the WTGF method.

Figure 3. Outcomes for data set 3: (a) original multispectral image, (b) original panchromatic image, (c) result of the BTGF method, (d) result of the IHSGF method, (e) result of the WTGF method.

Figure 4. Outcomes for data set 4: (a) original multispectral image, (b) original panchromatic image, (c) result of the BTGF method, (d) result of the IHSGF method, (e) result of the WTGF method.


Table 1. Performance measurement parameters for data set 1

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             5.8590     2.7589     1.3178
CC          1             0.8191     0.9582     0.9941
RMSE        0             0.0768     0.0234     0.0191
PSNR        maximum       140.8400   161.4846   165.0632
SD          0             0.0228     0.0634     0.0228
SSIM        1             0.8076     0.9234     0.9091

Table 2. Performance measurement parameters for data set 2

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             6.8660     3.8461     1.5164
CC          1             0.6099     0.9456     0.9930
RMSE        0             0.0725     0.0273     0.0188
PSNR        maximum       141.8393   158.8136   165.2613
SD          0             0.0284     0.0506     0.0284
SSIM        1             0.7939     0.9034     0.9188

Table 3. Performance measurement parameters for data set 3

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             5.0028     4.2428     1.1325
CC          1             0.7920     0.9426     0.9910
RMSE        0             0.0798     0.0293     0.0107
PSNR        maximum       140.1900   157.6005   175.1078
SD          0             0.0269     0.0277     0.0269
SSIM        1             0.8497     0.8819     0.9700

Table 4. Performance measurement parameters for data set 4

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             6.4060     4.3301     1.3675
CC          1             0.6989     0.9485     0.9960
RMSE        0             0.0597     0.0272     0.0194
PSNR        maximum       145.2268   158.8695   164.7638
SD          0             0.0368     0.0572     0.0368
SSIM        1             0.8397     0.8996     0.9310


Figure 5. Graphs of the results for the six performance parameters: (a) spectral angle mapper, (b) cross correlation, (c) root mean square error, (d) peak signal-to-noise ratio, (e) standard deviation, (f) structural similarity index.

4.1 Discussion: Figures 1, 2, 3, and 4 show the fusion results of the four data sets under the three hybrid image fusion methods under consideration. Tables 1, 2, 3, and 4 show the values of the performance measurement parameters for the different data sets and methods. Figures 5(a) to 5(f) plot the six performance measurement parameters and their values.

Objective evaluation: In this thesis, a total of six performance measurement parameters are considered to validate the results. From Tables 1, 2, 3, and 4, we can observe that, of the three hybrid fusion methods considered, the WTGF method performs best. From the graphs in Figures 5(a) to 5(f), it is clear that the WTGF method has the best values for all performance measurement parameters considered.

Subjective evaluation: Data set 1 contains both vegetation and non-vegetation areas. From Figures 1(c), 2(c), 3(c), 1(d), 2(d), 3(d), 1(e), 2(e), and 3(e), it is visually observed that the BTGF and IHSGF hybrid methods are unable to retain the color information of the original MS image, i.e., these approaches produce noticeable color distortion in the fused image. All three hybrid methods (BTGF, IHSGF, and WTGF) can produce a good pan-sharpened image, but while BTGF and IHSGF are unable to preserve the color information, WTGF retains the color information as well.
CHAPTER 5
CONCLUSION
In this research we considered three hybrid image fusion methods for a comparative study, and six performance measurement parameters to test the algorithms of the three hybrid image fusion methods. The experimental study shows that the two hybrid fusion methods BTGF and IHSGF are unable to preserve the color information of the original multispectral image, while the WTGF method is the best at retaining the color information of the original MS image. Thus, from the results obtained, we can confirm that the hybrid image fusion method combining the wavelet transform and the guided filter (WTGF) is good at preserving both the spatial and the spectral properties.
REFERENCES
[1]. Mouyan Zou and Yan Liu, "Multisensor image fusion: Difficulties and key techniques," IEEE Second International Congress on Image and Signal Processing, pp. 1-5, 2009.

[2]. Vaishali Asirwal, Himanshu Yadav, and Anurag Jain, "Hybrid model for preserving brightness over the digital image processing," 4th IEEE International Conference on Computer and Communication Technology (ICCCT), pp. 48-53, 2013.

[3]. A. Ardeshir Goshtasby and Stavri Nikolov, "Image fusion: Advances in the state of the art," Information Fusion, vol. 8, pp. 114-118, Elsevier, 2007.

[4]. Paul Mather and Brandt Tso, Classification Methods for Remotely Sensed Data, Second Edition, CRC Press, 2016, 376 pages.

[5]. A. Fanelli, A. Leo, and M. Ferri, "Remote sensing image data fusion: A wavelet transform approach for urban analysis," IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, pp. 112-116, 2001.

[6]. Jinaliang Wang, Chengana Wang, and Xiaohu Wang, "An experimental research on fusion algorithms of ETM+ image," IEEE 18th International Conference on Geoinformatics, pp. 1-6, 2010.

[7]. C. Thomas, T. Ranchin, L. Wald, and J. Chanussot, "Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1301-1312, 2008.

[8]. H. Aanæs, J. R. Sveinsson, A. A. Nielsen, T. Bovith, and J. A. Benediktsson, "Model-based satellite image fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1336-1346, 2008.

[9]. F. Fang, F. Li, C. Shen, and G. Zhang, "A variational approach for pan-sharpening," IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2822-2834, 2013.

[10]. S. Li and B. Yang, "A new pan-sharpening method using a compressed sensing technique," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 2, pp. 738-746, 2011.

[11]. F. Palsson, J. R. Sveinsson, and M. O. Ulfarsson, "A new pansharpening algorithm based on total variation," IEEE Geoscience and Remote Sensing Letters, vol. 11, pp. 318-322, 2014.

[12]. S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529-1558, 2007.
