
CiiT International Journal of Digital Image Processing, Vol 1, No 5, August 2009


Area Level Fusion of Multi-Focused Images Using Dual Tree Complex Wavelet Packet Transform

K. Kannan, S. Arumuga Perumal and K. Arulmozhi
Abstract: The fast development of digital image processing has led to the growth of feature extraction of images, which in turn has led to the development of image fusion. Image fusion is defined as the process of combining two or more different images into a new single image retaining important features from each image with extended information content. There are two approaches to image fusion, namely spatial fusion and multi scale transform fusion. In spatial fusion, the pixel values from the source images are directly summed up and averaged to form the pixel of the composite image at that location. Multi scale transform fusion uses a transform to represent the source image at multiple scales. The most widely used transform for image fusion at multiple scales is the Discrete Wavelet Transform (DWT), since it minimizes structural distortions. But the wavelet transform suffers from poor directionality and does not provide a geometrically oriented decomposition in multiple directions. One way to generalize the discrete wavelet transform so as to generate a structured dictionary of bases is given by the Discrete Wavelet Packet Transform (DWPT). This benefit comes from the ability of wavelet packets to better represent high frequency content, and high frequency oscillating signals in particular. However, it is well known that both the DWT and the DWPT are shift varying. The Dual Tree Complex Wavelet Transform (DTCWT), introduced by Kingsbury, is approximately shift-invariant and provides directional analysis. There are three levels for image fusion, namely pixel level, area level and region level. In this paper, it is proposed to implement area level fusion of multi focused images using the Dual Tree Complex Wavelet Packet Transform (DTCWPT), which extends the DTCWT as the DWPT extends the DWT, and the performance is measured in terms of various performance measures like root mean square error, peak signal to noise ratio, quality index and normalized weighted performance metric.

Keywords: Image fusion, Dual Tree Discrete Wavelet Packet Transform, Root Mean Square Error, Peak Signal to Noise Ratio, Quality Index and Normalized Weighted Performance Metric.

I. INTRODUCTION

IMAGE fusion is defined as the process of combining two or more different images into a new single image retaining important features from each image with extended information content.

Manuscript received on September 24, 2009, review completed on September 26, 2009 and revised on October 01, 2009.
K. Kannan is with the Department of Electronics and Communication Engineering, Kamaraj College of Engineering and Technology, SPGC Nagar, Post Box No.12, Virudhunagar 626 001, Tamilnadu, India. (Phone: 00-91-9789343899; e-mail: kannan_kcet@yahoo.co.in).
S. Arumugaperumal is working as Professor & Head, Department of Computer Science, St. Hindu College, Nagercoil 629 002, Tamilnadu, India. (e-mail: visvenk@yahoo.co.in).
K. Arulmozhi is working as Principal of Kamaraj College of Engineering and Technology, SPGC Nagar, Post Box No.12, Virudhunagar 626 001, Tamilnadu, India. (e-mail: kcet@md3.vsnl.net.in).
Digital Object Identifier No: DIP082009007


For example, IR and visible images may be fused as an aid to pilots landing in poor weather, CT and MRI images may be fused as an aid to medical diagnosis, millimeter wave and visual images may be fused for concealed weapon detection, or thermal and visual images may be fused for night vision applications [2]. The fusion process must satisfy the following requirements: it should preserve all relevant information in the fused image, it should suppress noise, and it should minimize any artifacts in the fused image. There are two approaches to image fusion, namely Spatial Fusion (SF) and Multi Scale Transform (MST) fusion. In spatial fusion, the pixel values from the source images are summed up and averaged to form the pixel of the composite image at that location [16]. Image fusion methods based on multiscale transforms are a popular choice in recent research [2]. MST fusion uses a pyramid or wavelet transform to represent the source image at multiple scales. Pyramid decomposition methods construct a fused pyramid representation from the pyramid representations of the original images. The fused image is then obtained by taking an inverse pyramid transform [15]. Due to the disadvantages of pyramid based techniques, which include blocking effects and lack of flexibility, approaches based on the wavelet transform have been developed [16], since the wavelet transform minimizes structural distortions. But the wavelet transform suffers from poor directionality and does not provide a geometrically oriented decomposition in multiple directions. One way to generalize the discrete wavelet transform so as to generate a structured dictionary of bases is given by the Discrete Wavelet Packet Transform (DWPT). This benefit comes from the ability of wavelet packets to better represent high frequency content, and high frequency oscillating signals in particular. However, it is well known that both the DWT and the DWPT are shift varying. The Dual Tree Complex Wavelet Transform (DTCWT), introduced by Kingsbury [7,8,9,13,18,20,21], is approximately shift-invariant and provides directional analysis. There are three levels in a multi resolution fusion scheme, namely pixel level fusion, feature level fusion and decision level fusion. Performance measures that can be computed independently of subsequent tasks express the success of an image fusion technique by the extent to which it creates a composite image that retains salient information from the source images while minimizing the number of artifacts or the amount of distortion that could interfere with interpretation. In this paper, it is proposed to implement area level fusion of multi focused images using the Dual Tree Complex Wavelet Packet
Transform (DTCWPT), extending the Dual Tree Complex Wavelet Transform as the Discrete Wavelet Packet Transform extends the Discrete Wavelet Transform. The performance is measured in terms of various performance measures like root mean square error, peak signal to noise ratio, quality index and normalized weighted performance metric.

II. DISCRETE WAVELET & PACKET TRANSFORM

The Discrete Wavelet Transform (DWT) of image signals produces a non-redundant image representation, which provides better spatial and spectral localization of image information compared with other multi scale representations such as the Gaussian and Laplacian pyramids. Recently, the Discrete Wavelet Transform has attracted more and more interest in image fusion. An image can be decomposed into a sequence of different spatial resolution images using the DWT. In the case of a 2D image, a single level decomposition results in four different frequency bands, namely the LL, LH, HL and HH sub bands, and an N level decomposition results in 3N+1 different frequency bands, as shown in figure 1.

Fig.1 2D-Discrete Wavelet Transform
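To make the subband bookkeeping concrete, the following minimal sketch counts the 3N+1 subbands of an N level 2D DWT. It assumes the open-source PyWavelets package rather than the authors' MATLAB routines; the image and wavelet choices are illustrative.

import numpy as np
import pywt

# An N-level 2D DWT yields 3N + 1 subbands: one LL approximation plus
# three detail subbands (horizontal, vertical, diagonal) per level.
image = np.random.rand(512, 512)   # stand-in for a 512x512 source image
N = 3
coeffs = pywt.wavedec2(image, 'db2', level=N)
# coeffs = [LL_N, (H_N, V_N, D_N), ..., (H_1, V_1, D_1)]
n_subbands = 1 + 3 * (len(coeffs) - 1)
print(n_subbands)                  # 10 when N = 3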

The DWT, obtained by iterating a perfect reconstruction filter bank on its low-pass output, decomposes a 2D image according to an octave-band frequency decomposition. The DWT is far from being shift invariant and does not provide a geometrically oriented decomposition in multiple directions. For an image, the frequency decomposition provided by the DWT might not be optimal. To find a more suitable decomposition, algorithms have been proposed to find the best basis from a structured dictionary of bases. For example, a best-basis algorithm that finds a sparse representation by minimizing the transform-domain entropy and an algorithm that finds the best basis in a rate-distortion sense have been proposed in the literature. One way to generalize the DWT so as to generate a structured dictionary of bases is given by the discrete wavelet packet transform. The Discrete Wavelet Packet Transform (DWPT) is obtained by iterating a perfect reconstruction filter bank on both its low-pass and high-pass outputs. However, like the DWT, the DWPT is also shift-variant and mixes perpendicular orientations in multiple dimensions. In the case of a 2D image, a single level decomposition results in four different frequency bands, namely the LL, LH, HL and HH sub bands, and an N level decomposition results in 4^N different frequency bands, as shown in figure 2.

Fig.2 2D-Discrete Wavelet Packet Transform
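The count changes from 3N+1 to 4^N when every subband is split. A hedged sketch, again using PyWavelets rather than the authors' code:

import numpy as np
import pywt

# The 2D DWPT iterates the filter bank on all four subbands, so an
# N-level decomposition has 4^N leaf subbands.
image = np.random.rand(512, 512)
N = 3
wp = pywt.WaveletPacket2D(data=image, wavelet='db2', maxlevel=N)
print(len(wp.get_level(N)))        # 64 = 4**3 leaf nodes at depth N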

III. DUAL TREE WAVELET TRANSFORM

The Dual Tree Wavelet Transform (DTWT) overcomes the limitations of the DWT, such as poor directionality and shift variance. It can be used to implement 2D wavelet transforms that are more selective with respect to orientation than the separable 2D DWT. For example, the 2D DTWT produces six subbands at each scale, each of which is strongly oriented at a distinct angle. There are two versions of the 2D DTWT, namely the Dual Tree Discrete Wavelet Transform (DTDWT), which is 2-times expansive, and the Dual Tree Complex Wavelet Transform (DTCWT), which is 4-times expansive.
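The six oriented subbands can be inspected with the open-source dtcwt Python package; this is an illustrative sketch of the transform's output layout, not the implementation used in this paper.

import numpy as np
import dtcwt

# The 2D DTCWT returns one complex array per level holding six strongly
# oriented subbands (approximately ±15°, ±45° and ±75°).
image = np.random.rand(256, 256)
pyramid = dtcwt.Transform2d().forward(image, nlevels=3)
for level, band in enumerate(pyramid.highpasses, start=1):
    print(level, band.shape)       # (rows, cols, 6), complex-valued
print(pyramid.lowpass.shape)       # real low-pass residual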



A. Dual Tree Discrete Wavelet Transform
The DTDWT of an image is implemented using two critically sampled separable DWTs in parallel. Then, for each pair of subbands, the sum and difference are taken. The six wavelets associated with the DTDWT are illustrated in figure 3 as gray scale images. Note that each of the six wavelets is oriented in a distinct direction. Unlike the critically sampled separable DWT, all of the wavelets are free of the checkerboard artifact. Each subband of the 2-D dual-tree transform corresponds to a specific orientation.

Fig 3 Directionality of DTDWT

B. Dual Tree Complex Wavelet Transform
The DTCWT also gives rise to wavelets in six distinct directions, with two wavelets in each direction. In each direction, one of the two wavelets can be interpreted as the real part of a complex valued wavelet, while the other can be interpreted as the imaginary part of a complex valued wavelet. Because the complex version has twice as many wavelets as the real version of the transform, the complex version is 4-times expansive. The DTCWT is implemented as four critically sampled separable DWTs operating in parallel. However, different filter sets are used along the rows and columns. As in the real case, the sum and difference of subband images are taken to obtain the oriented wavelets. The twelve wavelets associated with the DTCWT are illustrated in figure 4 as gray scale images. The wavelets are oriented in the same six directions as those of the DTDWT; however, there are two in each direction. If the six wavelets displayed on the first row are interpreted as the real part of complex wavelets, then the six wavelets displayed on the second row can be interpreted as the imaginary part of the complex wavelets.

Fig 4 Directionality of DTCWT

The filter bank structure of the DTCWT has CWT filters which have complex coefficients and generate complex output samples. This is shown in figure 5, in which each block is a complex filter that includes down sampling by 2 (not shown) at its outputs. Since the output sampling rates are unchanged from the DWT, but each sample contains a real and an imaginary part, a redundancy of 2:1 is introduced. The complex filters may be designed such that the magnitudes of their step responses vary slowly with input shift; only the phases vary rapidly. The real part is an odd function while the imaginary part is even. The level 1 filters, Lop and Hip in figure 5, include an additional pre-filter, which has a zero at z = -j, in order to simulate the effect of a filter tree extending further levels to the left of level 1.

Fig 5 Four levels of Complex Wavelet Tree for real 1-D input signal x
Extension of complex wavelets to 2-D is achieved by separable filtering along rows and then columns. However, if row and column filters both suppress negative frequencies, then only the first quadrant of the 2-D signal spectrum is retained. Two adjacent quadrants of the spectrum are required to represent a real 2-D signal fully, so we also need to filter with the complex conjugates of either the row or column filters. This gives 4:1 redundancy in the transformed 2-D signal. If the signal exists in m-D (m > 2), then further conjugate pairs of filters are needed for each dimension, leading to a redundancy of 2^m:1. The most computationally efficient way to achieve the pairs of conjugate filters is to maintain separate imaginary operators, j1 and j2, for the row and column processing, as shown in figure 6. This produces 4-element 'complex' vectors {r, j1, j2, j1j2} (where r means 'real'). Each 4-vector can be converted into a pair of conventional complex 2-vectors by letting j1 = j2 = j in one case and j1 = -j2 = -j in the other. This corresponds to sum and difference operations on the {r, j1j2} and {j1, j2} pairs in the summation blocks in figure 6 and produces two complex outputs, corresponding to first and second quadrant directional filters respectively. Complex filters in multiple dimensions provide true directional selectivity, despite being implemented separably, because they are still able to separate all parts of the m-D frequency space. For example, a 2D DTCWT produces six band pass sub-images of complex coefficients at each level, which are strongly oriented at angles of ±15°, ±45° and ±75°, shown by the double-headed arrows in figure 6.

Fig 6 Two levels of the Complex Wavelet tree for a real 2-D input image x, giving 6 directional bands at each level

IV. PROPERTIES OF DTCWT AND DTCWPT

The DTCWT consists of two wavelet transforms operating in parallel on an input signal, as illustrated in figure 7. We denote the wavelet associated with the first wavelet filter bank as $\psi(t)$ and the wavelet associated with the second filter bank as $\psi'(t)$. The wavelet $\psi(t)$ is defined by

$$\psi(t) = \sqrt{2}\sum_{n} h_1(n)\,\phi(2t-n), \quad \text{where} \quad \phi(t) = \sqrt{2}\sum_{n} h_0(n)\,\phi(2t-n). \tag{1}$$

Fig 7 Implementation of DTCWT using Two Wavelet Filters

The second wavelet, $\psi'(t)$, is defined similarly in terms of $\{h_0'(n), h_1'(n)\}$. For the ideal DT-CWT, the second wavelet $\psi'(t)$ is the Hilbert transform of the first wavelet $\psi(t)$:

$$\psi'(t) = \mathcal{H}\{\psi(t)\}. \tag{2}$$

If the low-pass filter $h_0'(n)$ is equal to the half-sample delayed version of $h_0(n)$, then the wavelets generated by the DTCWT satisfy (2) as desired. To construct the DTCWPT, the packet form of the DTCWT, it is also important to use the consequential relationship between the high-pass filters $h_1(n)$ and $h_1'(n)$. It is assumed that $\{h_0(n), h_1(n)\}$ form an FIR conjugate quadrature filter (CQF) pair, as does $\{h_0'(n), h_1'(n)\}$.

If the given wavelets $\psi(t)$ and $\psi'(t)$ are orthogonal to their integer translates, then the Hilbert relation (2) is satisfied if and only if [5,6,7]

$$H_0'(e^{j\omega}) = e^{-j0.5\omega}\,H_0(e^{j\omega}) \quad \text{for } |\omega| < \pi. \tag{3}$$

Recall that for an orthonormal wavelet basis, the low-pass and high-pass filters are related as

$$H_1(e^{j\omega}) = -e^{-jd\omega}\,\overline{H_0(e^{j(\omega-\pi)})}, \tag{4}$$

or equivalently $h_1(n) = (-1)^n h_0(d-n)$, where $d$ is an odd integer. Hence, it follows from (2) that for the ideal DT-CWT the high-pass filters satisfy

$$H_1'(e^{j\omega}) = -j\,\mathrm{sgn}(\omega)\,e^{j0.5\omega}\,H_1(e^{j\omega}) \quad \text{for } |\omega| < \pi. \tag{5}$$
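The CQF relation in (4) can be checked numerically. The sketch below is an illustration, not the authors' code: it builds h1 from the db4 low-pass filter shipped with PyWavelets and verifies unit energy and orthogonality to h0 at all even shifts.

import numpy as np
import pywt

h0 = np.array(pywt.Wavelet('db4').dec_lo)   # 8-tap orthonormal low-pass
d = len(h0) - 1                             # d = 7, an odd integer
n = np.arange(len(h0))
h1 = (-1.0) ** n * h0[d - n]                # h1(n) = (-1)^n h0(d - n), eq. (4)

def inner_even_shift(f, g, k):
    # Inner product <f(n), g(n - 2k)> with zero padding outside supports.
    return sum(f[i] * g[i - 2 * k] for i in range(len(f))
               if 0 <= i - 2 * k < len(g))

print(np.isclose(h1 @ h1, 1.0))             # True: unit energy
print(all(abs(inner_even_shift(h0, h1, k)) < 1e-10
          for k in range(-3, 4)))           # True: orthogonal at even shifts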

To construct a packet form of the DT-CWT, each of the subbands should be repeatedly decomposed using low-pass/high-pass perfect reconstruction filter banks. The perfect reconstruction filter banks should be chosen so that the response of each branch of the second wavelet packet filter bank is the discrete Hilbert transform of the corresponding branch of the first wavelet packet filter bank. Then each subband of the DTCWPT will be analytic.


This requirement can be fulfilled with a simple rule, which is derived below. First, note that if a given filter $g(n)$ is the discrete Hilbert transform of some other filter $h(n)$, that is,

$$G(e^{j\omega}) = -j\,\mathrm{sgn}(\omega)\,H(e^{j\omega}) \quad \text{for } |\omega| < \pi, \tag{6}$$

then when $g(n)$ is convolved with some sequence $c(n)$, we have

$$G(e^{j\omega})\,C(e^{j\omega}) = -j\,\mathrm{sgn}(\omega)\,H(e^{j\omega})\,C(e^{j\omega}) \quad \text{for } |\omega| < \pi. \tag{7}$$

Fig 8 Left: decomposition of the kth subband of the first filter bank of the DT-CWT using filters $f_i(n)$; right: decomposition of the kth stage of the second filter bank of the DT-CWT extended using filters $f_i'(n)$

As shown by this equation, if $h(n)$ and $g(n)$ are a discrete Hilbert transform pair, then $g(n) * c(n)$ and $h(n) * c(n)$ are also a discrete Hilbert transform pair. The discrete Hilbert transform may thus be regarded as a linear time-invariant (LTI) system. Now, turning back to the DT-CWT, suppose the kth stage high-pass subband of the DT-CWT is decomposed using some 2-channel perfect reconstruction filter banks, as shown in figure 8. To determine conditions on the filters $f_i(n)$, $f_i'(n)$ such that the resulting DT-CWPT is analytic, the following condition (8) is to be satisfied:

$$H'^{(k)}(e^{j\omega})\,F_i'(e^{j2^k\omega}) = \mathcal{H}\left\{ H^{(k)}(e^{j\omega})\,F_i(e^{j2^k\omega}) \right\} \tag{8}$$

for which it is necessary and sufficient that $f_i'(n) = f_i(n)$.


Therefore, we conclude that whatever perfect reconstruction filter bank is used to decompose the first filter bank of the DT-CWT should also be used to decompose the second (dual) filter bank, in order to preserve the Hilbert transform relationship already satisfied by those branches. The branches of the DT-CWT that do not already satisfy the Hilbert transform property are the low-pass branch of the final stage and the high-pass branch of the first stage. Note that the low-pass branch of the final stage is not further decomposed. The high-pass filters of the first stage, $h_1^{(1)}(n)$ and $h_1^{(1)\prime}(n)$, satisfy $h_1^{(1)\prime}(n) = h_1^{(1)}(n-1)$, which is exactly the same relationship satisfied by the low-pass filters of the first stage, $h_0^{(1)\prime}(n) = h_0^{(1)}(n-1)$.


Fig 7 First Wavelet Packet Filter Bank of Four Stages DTCWPT

Observing that the analysis carried out in the previous section for the DTCWT depends only on the relative delays, it is concluded that the filter bank structure following the low-pass filters of the first stage should also follow the high-pass filters of the first stage. This procedure produces a DTCWPT consisting of two wavelet packet filter banks operating in parallel, where some filters in the second wavelet packet filter bank are the same as those in the first wavelet packet filter bank. The first of these two wavelet packet filter banks is illustrated in figure 7 for a four-stage DT-CWPT. The second wavelet packet filter bank is obtained by replacing the first stage filters $h_i^{(1)}(n)$ by $h_i^{(1)}(n-1)$ and by replacing $h_i(n)$ by $h_i'(n)$ for $i \in \{0, 1\}$. The filters denoted by $F_i$ in figure 8 are unchanged in the second wavelet packet filter bank. An important point about the transform described above is the choice of the extension filters $f_i(n)$. Notice that preserving the Hilbert transform property constrains the transform only by forcing the use of the same filter pair in both filter banks. However, this places no further restrictions on $f_i(n)$. Hence, the same criteria used for the selection of a CQF pair to extend a regular DWT can be used for the selection of $f_i(n)$.


V. AREA LEVEL IMAGE FUSION

This section describes six methods of area level image fusion based on a multi scale representation of the source images using wavelets. Since the useful features in an image are usually larger than one pixel, the pixel-by-pixel selection rule of pixel level fusion may not be the most appropriate method. In a feature level fusion algorithm, an area based selection rule is used. The images are first decomposed into sub bands using the wavelet transform. Then a feature of each image patch over a 3x3 or 5x5 window is computed as an activity measure associated with the pixel centered in the window. To simplify the description of the different feature level image fusion methods, the source images are denoted A and B and the fused image F. All the methods described in this paper can be used with more than two source images.
Method 1:
In this method, the maximum value of the coefficients of the sub-bands of the wavelet transformed image over a 3x3 or 5x5 window is computed as the activity measure of the pixel centered in the window. The coefficient whose activity measure is larger is chosen to form the fused coefficient map. A binary decision map of the same size as the wavelet transform is then created to record the selection results. This binary map is subject to consistency verification. Specifically, in the wavelet domain, if the centre pixel value comes from image A while the majority of the surrounding pixel values come from image B, the centre pixel value is switched to that of image B. This is called the consistency verification method.
Method 2:
In this method, the maximum absolute value over a 3x3 or 5x5 window is computed as the activity measure of the pixel centered in the window. The coefficient whose activity measure is larger is chosen to form the binary decision map, and consistency verification is applied to form the fused coefficient map.
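A hedged sketch of Methods 1 and 2 for one pair of corresponding subbands follows; the 3x3 window and the median filter used for the majority vote in consistency verification are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.ndimage import maximum_filter, median_filter

def fuse_subband(cA, cB, window=3, use_abs=True):
    # Activity of the window's centre pixel: maximum (absolute) value over
    # the window (use_abs=True gives Method 2, False gives Method 1).
    a, b = (np.abs(cA), np.abs(cB)) if use_abs else (cA, cB)
    act_a = maximum_filter(a, size=window)
    act_b = maximum_filter(b, size=window)
    decision = act_a >= act_b                  # binary decision map: True -> A
    # Consistency verification: switch the centre pixel if the majority of
    # its neighbours came from the other image (3x3 majority vote).
    decision = median_filter(decision.astype(np.uint8), size=window).astype(bool)
    return np.where(decision, cA, cB)

# usage on one detail subband of the two decomposed source images:
# fused_band = fuse_subband(band_A, band_B)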
Method 3:
This fusion scheme is the weighted average scheme suggested by Burt and Kolczynski (1993) [15]. The salient features are first identified in each source image. The salience of a feature is computed as the local energy in the neighborhood of a coefficient:

$$E(A,p) = \sum_{q \in Q} w(q)\,C_j(A,q)^2 \tag{9}$$

where $w(q)$ is a weight with $\sum_{q \in Q} w(q) = 1$. In practice, the neighborhood Q is a small window (typically 3x3 or 5x5) centered at the current coefficient position p. The closer the point q is to the point p, the greater $w(q)$ is. $E(B,p)$ is obtained by the same rule. The selection mode is implemented as

$$C_j(F,p) = \begin{cases} C_j(A,p), & E(A,p) \ge E(B,p) \\ C_j(B,p), & E(B,p) > E(A,p) \end{cases} \tag{10}$$

This selection scheme helps to ensure that most of the dominant features are incorporated into the fused image.
Method 4:
In this fusion method, the salience measure of each source image is computed using equation (9). At a given resolution level j, this fusion scheme uses two distinct modes of combination, namely selection and averaging. In order to determine whether selection or averaging will be used, the match measure $M(p)$ is calculated as

$$M(p) = \frac{2\sum_{q \in Q} w(q)\,C_j(A,q)\,C_j(B,q)}{E(A,p) + E(B,p)} \tag{11}$$

If $M(p)$ is smaller than a threshold T, then the coefficient with the larger local energy is placed in the composite transform while the coefficient with the lower local energy is discarded. The selection mode is implemented as

$$C_j(F,p) = \begin{cases} C_j(A,p), & E(A,p) \ge E(B,p) \\ C_j(B,p), & E(B,p) > E(A,p) \end{cases} \tag{12}$$

Otherwise, if $M(p) \ge T$, the averaging mode is used and the combined transform coefficient is computed as

$$C_j(F,p) = \begin{cases} W_{\max}\,C_j(A,p) + W_{\min}\,C_j(B,p), & E(A,p) \ge E(B,p) \\ W_{\max}\,C_j(B,p) + W_{\min}\,C_j(A,p), & E(B,p) > E(A,p) \end{cases} \tag{13}$$

where

$$W_{\min} = 0.5 - 0.5\left(\frac{1 - M(p)}{1 - T}\right) \quad \text{and} \quad W_{\max} = 1 - W_{\min}.$$

In this study, the fusion methods are implemented using a window size of 3x3 and a threshold T = 0.75.
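The sketch below illustrates Method 4 (and, as a special case, Method 3) on one subband pair; the uniform window weights w(q) = 1/9 are an illustrative assumption, since the text only requires the weights to sum to one and to favour points near p.

import numpy as np
from scipy.ndimage import uniform_filter

def fuse_salience_match(cA, cB, window=3, T=0.75):
    EA = uniform_filter(cA * cA, size=window)       # local energy, eq. (9)
    EB = uniform_filter(cB * cB, size=window)
    # match measure, eq. (11); a small epsilon guards flat regions
    M = 2.0 * uniform_filter(cA * cB, size=window) / (EA + EB + 1e-12)
    Wmin = 0.5 - 0.5 * (1.0 - M) / (1.0 - T)        # weights for eq. (13)
    Wmax = 1.0 - Wmin
    select = np.where(EA >= EB, cA, cB)             # selection mode, eq. (12)
    average = np.where(EA >= EB,
                       Wmax * cA + Wmin * cB,
                       Wmax * cB + Wmin * cA)       # averaging mode, eq. (13)
    return np.where(M < T, select, average)

Method 3 corresponds to keeping only the selection branch, np.where(EA >= EB, cA, cB).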
Method 5:
For a function f(x, y), it is common practice to approximate the magnitude of the gradient by using absolute values instead of squares and square roots:

$$\nabla f = |G_x| + |G_y| = |f(x,y) - f(x+1,y)| + |f(x,y) - f(x,y+1)| \tag{14}$$

This equation is simpler to compute and still preserves relative changes in grey levels.


In image processing, the differences between a pixel and its neighbors reflect the edges of the image. First, compute the differences between the low frequency coefficient at the point p and each of its eight neighbors. The value E is acquired by summing the squares of all the differences. Finally, choose the low frequency coefficient with the greater value E as the corresponding coefficient of the fused image. This method maintains edge information and so can improve the quality of the fused image. The algorithm is as follows:

$$E(A,p) = \sum_{q \in Q} \left[C_j(A,p) - C_j(A,q)\right]^2, \qquad E(B,p) = \sum_{q \in Q} \left[C_j(B,p) - C_j(B,q)\right]^2$$

Finally, select the corresponding high frequency coefficient of the fused image as

$$C_j(F,p) = \begin{cases} C_j(A,p), & E(A,p) \ge E(B,p) \\ C_j(B,p), & E(B,p) > E(A,p) \end{cases} \tag{15}$$

Method 6:
In this method, the gradient over a 3x3 or 5x5 window is computed as the activity measure of the pixel centered in the window. The coefficient whose activity measure is larger is chosen to form the fused coefficient map.
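A sketch of Method 5's activity measure follows, under the assumption that the eight-neighbour differences are taken with edge padding at the borders.

import numpy as np

def neighbour_energy(c):
    # Sum of squared differences between each coefficient and its eight
    # neighbours (edge-padded), as in the E(A,p) and E(B,p) equations above.
    p = np.pad(np.asarray(c, dtype=float), 1, mode='edge')
    centre = p[1:-1, 1:-1]
    H, W = centre.shape
    E = np.zeros_like(centre)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                E += (centre - p[1 + di:1 + di + H, 1 + dj:1 + dj + W]) ** 2
    return E

def fuse_lowpass(cA, cB):
    # keep the low frequency coefficient with the greater energy, eq. (15)
    return np.where(neighbour_energy(cA) >= neighbour_energy(cB), cA, cB)

Method 6 replaces this energy with the windowed gradient of eq. (14) as the activity measure.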

VI. EVALUATION CRITERIA

Four evaluation measures are used in this paper, as follows. The Root Mean Square Error (RMSE) between the reference image R and the fused image F is given by

$$\mathrm{RMSE} = \sqrt{\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\left[R(i,j) - F(i,j)\right]^2} \tag{16}$$

where N x N is the image size. The Peak Signal to Noise Ratio (PSNR) between the reference image R and the fused image F is given by

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{255^2}{\mathrm{RMSE}^2}\right)\ \mathrm{dB} \tag{17}$$

The Quality Index (QI) of the reference image R and the fused image F is given by [5]

$$\mathrm{QI} = \frac{4\,\sigma_{ab}\,\mu_a\,\mu_b}{(\mu_a^2 + \mu_b^2)(\sigma_a^2 + \sigma_b^2)} \tag{18}$$

where $\mu_a$ and $\mu_b$ are the means of R and F, $\sigma_{ab}$ is the covariance of R and F, and $\sigma_a^2$, $\sigma_b^2$ are the variances of R and F. The maximum value QI = 1 is achieved when the two images are identical. The Normalized Weighted Performance Metric (NWPM) is given in equation (19) as [3]

$$\mathrm{NWPM} = \frac{\sum_{i,j}\left(Q_{ij}^{AF}\,W_{ij}^{A} + Q_{ij}^{BF}\,W_{ij}^{B}\right)}{\sum_{i,j}\left(W_{ij}^{A} + W_{ij}^{B}\right)} \tag{19}$$
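A hedged sketch of the first three measures, eqs. (16)-(18), follows; NWPM (eq. (19)) requires the Xydeas-Petrovic edge-preservation maps of [3] and is omitted here.

import numpy as np

def rmse(R, F):
    return np.sqrt(np.mean((R.astype(float) - F.astype(float)) ** 2))   # eq. (16)

def psnr(R, F):
    return 10.0 * np.log10(255.0 ** 2 / rmse(R, F) ** 2)                # eq. (17)

def quality_index(R, F):
    a, b = R.astype(float).ravel(), F.astype(float).ravel()
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = np.mean((a - mu_a) * (b - mu_b))
    # eq. (18); equals 1 only when the two images are identical
    return 4 * cov * mu_a * mu_b / ((mu_a**2 + mu_b**2) * (var_a + var_b))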

VII. EXPERIMENTAL WORK

A pair of source images, namely Pepsi, of size 512x512 is taken. The pair of source images to be fused is assumed to be registered spatially. The images are wavelet transformed using the same wavelet and decomposed to the same number of levels. For taking the wavelet transform of the two images, readily available MATLAB routines are used. In each sub-band, individual pixels of the two images are compared based on the fusion rule that serves as a measure of activity at that particular scale and space. A fused wavelet transform is created by taking the pixels from the wavelet transform that shows greater activity at that level. The inverse wavelet transform then yields the fused image with clear focus over the whole image. The block diagram representing wavelet transform based image fusion is shown in figure 8.

Fig 8 Wavelet Based Image Fusion
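The pipeline of figure 8 can be sketched end-to-end. The following illustrative example uses PyWavelets and the simple choose-max-absolute rule rather than the authors' MATLAB routines and the area level rules of Section V.

import numpy as np
import pywt

def fuse_images(A, B, wavelet='db2', levels=3):
    cA = pywt.wavedec2(A.astype(float), wavelet, level=levels)
    cB = pywt.wavedec2(B.astype(float), wavelet, level=levels)
    # averaging the coarsest approximation band is one common choice
    fused = [0.5 * (cA[0] + cB[0])]
    for tA, tB in zip(cA[1:], cB[1:]):      # detail bands, coarse to fine
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(tA, tB)))
    return pywt.waverec2(fused, wavelet)    # inverse transform = fused image

# fused = fuse_images(img_focus_left, img_focus_right)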

VIII. RESULTS
For the above mentioned methods, image fusion is performed using the DWT, DWPT, DTCWT and DTCWPT, and their performance is measured in terms of Root Mean Square Error, Peak Signal to Noise Ratio, Quality Index and Normalized Weighted Performance Metric; the results are shown in figure 6 and tabulated in table I. From the table, it is inferred that in all six methods of area level fusion, the performance of DTCWPT based image fusion is better than that of all the other wavelet transform methods in terms of RMSE, PSNR, NWPM and QI. Apart from that, the salience and match measure method is the better method for DWT, DWPT and DTCWT based image fusion, and the gradient based image fusion method performs better when implemented using the DTCWPT in terms of RMSE, PSNR and QI, since the DTCWPT improves directionality.


TABLE I
RESULTS OF AREA LEVEL IMAGE FUSION

Fig 6 Results of Area Level Image Fusion
(A) Input Image 1 (B) Input Image 2 (C) Reference Image (D) Fused Image using DWPT (E) Fused Image using DTCWT (F) Fused Image using DTCWPT

IX. CONCLUSION

Given two DWTs that together form a DTCWT, this paper shows how to extend each DWT so that the resulting DWPTs, forming the DTCWPT, possess the desirable features of the DTCWT, namely approximate shift-invariance and directional analysis in 2-D and higher dimensions. This paper also presents a comparison of area level fusion of multi focused images using the DWT, DWPT, DTCWT and DTCWPT in terms of various performance measures. The DTCWPT provides very good results, both quantitatively and qualitatively, for area level fusion due to its improved directionality, and it preserves geometric structures more faithfully. Hence, using these fusion methods, one can enhance the image with high geometric resolution and better visual quality.

REFERENCES

[1] S. Mallat, A Wavelet Tour of Signal Processing, New York, Academic Press, 1998.
[2] Rick S. Blum and Yang Jinzhong, Image Fusion Methods and Apparatus, US Patent, WO/2006/017233, 2006.
[3] C.S. Xydeas and V. Petrovic, Objective Image Fusion Performance Measure, Electronics Letters, Vol.36, No.4, pp. 308-309, 2000.
[4] I. W. Selesnick, Hilbert transform pairs of wavelet bases, IEEE Signal Processing Letters, 8(6):170-173, June 2001.
[5] Zhou Wang and Alan C. Bovik, A Universal Image Quality Index, IEEE Signal Processing Letters, Vol. 9, No.3, pp. 81-84, March 2002.
[6] S. Mallat, A theory for multiresolution signal decomposition: The wavelet representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, July 1989.
[7] N. G. Kingsbury, Image processing with complex wavelets, Philos. Trans. R. Soc. London A, Math. Phys. Sci., 357(1760):2543-2560, September 1999.
[8] N. G. Kingsbury, Complex wavelets for shift invariant analysis and filtering of signals, Journal of Applied and Computational Harmonic Analysis, 10(3):234-253, May 2001.
[9] N. G. Kingsbury, Complex wavelets for shift invariant analysis and filtering of signals, Applied Computational Harmonic Analysis, vol. 10, no. 3, pp. 234-253, May 2001.
[10] I. W. Selesnick, The design of approximate Hilbert transform pairs of wavelet bases, IEEE Trans. Signal Processing, 50(5):1144-1152, May 2002.
[11] I. W. Selesnick, R. G. Baraniuk, and N. G. Kingsbury, The dual-tree complex wavelet transform - A coherent framework for multiscale signal and image processing, IEEE Signal Processing Magazine, 22(6):123-151, November 2005.
[12] R. Yu and H. Ozkaramanli, Hilbert transform pairs of orthogonal wavelet bases: Necessary and sufficient conditions, IEEE Trans. Signal Processing, 53(12):4723-4725, December 2005.
[13] N. G. Kingsbury, The dual tree complex wavelet transform: A technique for shift invariance and directional filters, in Proc. of 8th IEEE DSP Workshop, Utah, paper no. 86, August 9-12, 1998.
[14] Zhu Shu-long, Image Fusion using Wavelet Transform, Symposium on Geospatial Theory, Processing and Applications, pp. 5-9, 2004.
[15] P. J. Burt and R. J. Kolczynski, Enhanced image capture through fusion, Proceedings of the 4th International Conference on Computer Vision, pp. 173-182, 1993.
[16] H. Li, B. S. Manjunath, and S. K. Mitra, Multi-sensor image fusion using the wavelet transform, Graphical Models and Image Processing, pp. 235-245, 1995.
[17] N. G. Kingsbury and J. F. A. Magarey, Wavelet transforms in image processing, in Proc. of First European Conference on Signal Analysis and Prediction, Prague, pp. 23-34, June 24-27, 1997.
[18] N. G. Kingsbury, The dual-tree complex wavelet transform: A new efficient tool for image restoration and enhancement, in Proceedings of European Signal Processing Conference (EUSIPCO), 1998.
[19] Z. Zhang and R. Blum, A categorization of multiscale decomposition based image fusion schemes with a performance study for a digital camera application, Proc. of the IEEE, pp. 1315-1328, August 1999.
[20] N. G. Kingsbury, A dual-tree complex wavelet transform with improved orthogonality and symmetry properties, in Proceedings of IEEE International Conference on Image Processing (ICIP), 2000.
[21] N. G. Kingsbury, A dual tree complex wavelet transform with improved orthogonality and symmetry properties, in Proc. of IEEE International Conference on Image Processing, Canada, vol. 2, pp. 375-378, September 10-13, 2000.
[22] http://taco.poly.edu/selesi/
