
A Seminar report on

MULTIFOCUS IMAGE FUSION

A Thesis Report Submitted To The Department Of Electronics


In Partial Fulfillment of The Requirements For The Degree of
MASTER OF TECHNOLOGY
(Communication System)
by

Prajapati Ankit
(P11 EC 025)

Guided by

Prof. K P Upla
Assistant Professor, ECED

DEPARTMENT OF ELECTRONICS
SARDAR VALLABHBHAI NATIONAL INSTITUTE OF TECHNOLOGY
2012-2013

Acknowledgements
I take this opportunity to express my sincere thanks and deep sense of gratitude to my guide Prof. K. P. Upla for imparting valuable guidance and priceless suggestions during the literature study of the dissertation work entitled Multifocus Image Fusion. I am also very thankful to our Head of Department, Prof. P. K. Shah, for giving me this opportunity under the Master of Technology programme. I would like to extend my sincere thanks to all the faculty members of the Electronics Department.
I would like to thank all my classmates, who motivated me throughout this work.
Finally, I would also like to thank my parents and my sister Hetal Prajapati, who were always supporting and encouraging me with their best wishes.

Last but not the least, humble thanks to the ALMIGHTY GOD.
Ankit.Prajapati (P11EC025)
SVNIT-Surat


Abstract
Due to imperfections of imaging devices (optical degradations, limited resolution of CCD sensors) and instability of the observed scene (object motion, media turbulence), acquired images are often blurred, noisy, and may exhibit insufficient spatial and/or temporal resolution. Such images are not suitable for object detection and recognition. Reliable detection requires recovering the original image. If multiple images of the scene are available, this can be achieved by multifocus image fusion, a branch of image fusion in which the same scene is captured under different focal settings; a number of images are captured and combined into a single image retaining the important features from each of the originals. Important applications of the fusion of images include medical imaging, microscopic imaging, remote sensing, computer vision, and robotics. Fusion techniques range from the simplest method of pixel averaging to more complicated methods such as principal component analysis and wavelet transform fusion. Several approaches to image fusion can be distinguished, depending on whether the images are fused in the spatial domain or transformed into another domain and their transforms fused.


Contents

Acknowledgements
Abstract
Table of Contents
List of Figures
List of Tables

Chapter
1 Introduction
  1.1 Introduction to Image Fusion of multi focus image
    1.1.1 Introduction to Multi focus image fusion
  1.2 Motivation
  1.3 Outline of the thesis
2 Literature Survey
  2.1 Essence and Discussion of/on Related Work
    2.1.1 Pixel level Multifocus image fusion
    2.1.2 Wavelet based multi focus image fusion method [3]
    2.1.3 Multi focus image fusion using blurring measurement [9]
3 Multifocus image fusion using spatial frequency
  3.1 Background
  3.2 Spatial frequency [11] [6]
  3.3 Implementation steps
  3.4 RMSE (Root Mean Square Error) [6] [13]
  3.5 Majority filter [13]
4 Experimental results
  4.1 Pixel based image fusion
  4.2 Multifocus image fusion based on spatial frequency
5 Conclusion and Future work
  5.1 Conclusion
  5.2 Future work
References

List of Figures

1.1 Basic image fusion system [1]
1.2 Block diagram of basic image fusion system
1.3 Multi focus concept
2.1 Two level DWT decomposition
2.2 Image fusion scheme using wavelet scheme
3.1 Original and blurred versions of an image block extracted from the Lena image: (a) original image; (b) radius=0.5; (c) radius=0.8; (d) radius=1.0; (e) radius=1.5 [6]
3.2 Schematic diagram of multi focus image fusion
3.3 Majority filtering on image using closest four
3.4 Majority filtering on image using closest eight
4.1 Pixel based image fusion
4.2 Pixel based image fusion, Example 2
4.3 Pixel based image fusion, Example 3
4.4 Multifocus image fusion based on spatial frequency, Example 1 (16×16)
4.5 Multifocus image fusion based on spatial frequency, Example 2 (16×16)
4.6 Multifocus image fusion based on spatial frequency, Example 1 (32×32)
4.7 Multifocus image fusion based on spatial frequency, Example 2 (32×32)
4.8 Multifocus image fusion based on spatial frequency, Example 1 (64×64)
4.9 Multifocus image fusion based on spatial frequency, Example 2 (64×64)

List of Tables

3.1 Spatial frequencies of the image blocks in Figure 3.1
4.1 Effects of fusing the images in Figures 4.4-4.9 with different block sizes

Chapter 1
Introduction
1.1 Introduction to Image Fusion of multi focus image

Imaging devices have limited achievable resolution due to many theoretical and practical restrictions. An original scene with a continuous intensity function o[x, y] warps at the camera lens
because of the scene motion and/or change of the camera position. In addition, several external
effects blur images: atmospheric turbulence, camera lens imperfections, relative camera-scene motion, etc. We
will call these effects volatile blurs to emphasize their unpredictable and transitory behavior, yet
we will assume that we can model them as convolution with an unknown point spread function.
Image fusion is the process by which two or more images are combined into a single image retaining the important features from each of the original images. The fusion of images is often
required for images acquired from different instrument modalities or capture techniques of the
same scene or objects. Important applications of the fusion of images include medical imaging, microscopic imaging, remote sensing, computer vision, and robotics. Fusion techniques
include the simplest method of pixel averaging to more complicated methods such as principal
component analysis and wavelet transform fusion. [1] Several approaches to image fusion can
be distinguished, depending on whether the images are fused in the spatial domain or they are
transformed into another domain, and their transforms fused.
With the development of new imaging sensors arises the need for a meaningful combination of
all employed imaging sources. The actual fusion process can take place at different levels of
information representation; a generic categorization is to consider the different levels as, sorted
in ascending order of abstraction: signal, pixel, feature and symbolic level.
The goal of image fusion is to integrate complementary information from all frames into one new image containing information whose quality cannot be achieved otherwise. Here, the term better quality means less blur and geometric distortion, less noise, and higher spatial resolution. We may expect that object detection and recognition will be easier and more reliable when performed on the fused image. Regardless of the particular fusion algorithm, it is unrealistic to assume that the fused image can recover the original scene o[x, y] exactly. Fusion of acquired images is a three-stage process: it consists of image registration (spatial alignment), which should



compensate for geometric deformation, followed by multichannel (or multiframe) blind deconvolution (MBD) and super resolution (SR) fusion. The goal of MBD is to remove the impact of volatile blurs and the aim of SR is to increase the spatial resolution of the fused image by a user-defined factor. While image registration is actually a separate procedure, we integrate both MBD
and SR into a single step as shown in Figure 1.1, which we call blind super resolution (BSR).
The approach presented in this chapter is one of the first attempts to solve BSR under realistic
assumptions with only little a priori knowledge. [1]

Figure 1.1: Basic image fusion system. [1]


In multi focus image fusion we generally take registered images as input and perform the fusion operation using either spatial domain or transform domain operations. The difference between multi focus image fusion and simple fusion is that simple fusion fuses any two images of the same size, whereas multi focus image fusion fuses registered images of the same scene. The two-dimensional discrete wavelet transform is becoming one of the standard tools for image fusion. The DWT is computed by successive lowpass and highpass filtering of the digital image or images. Its significance is in the manner it connects the continuous-time multiresolution to discrete-time filters. The principle of image fusion using wavelets is to merge the wavelet decompositions of the two original images using fusion methods applied to approximation coefficients and detail coefficients.



The basic idea of the generic multiresolution fusion scheme is motivated by the fact that the hu-

Figure 1.2: Block diagram of Basic image fusion system.


man visual system is primarily sensitive to local contrast changes, i.e. edges. Motivated by this insight, and keeping in mind that both image pyramids and the wavelet transform result in a multiresolution edge representation, it is straightforward to build the fused image as a fused multiscale edge representation. The fusion process is summarized in the following: in the first step the input images are decomposed into their multiscale edge representation, using either any image pyramid or any wavelet transform. The actual fusion process takes place in the difference (resp. wavelet) domain, where the fused multiscale representation is built by a pixel-by-pixel selection of the coefficients with maximum magnitude. Finally the fused image is computed by an application of the appropriate reconstruction scheme.

1.1.1 Introduction to Multi focus image fusion

Multi focus image fusion is a technique in which two or more images of the same scene, captured by multiple cameras or by the same camera using different focal settings (or different focal lengths), are fused by applying some fusion rule.
We can perform the fusion operation in the spatial domain as well as in the transform domain. The main purpose is to enhance the quality of the fused image by applying different fusion rules. Figure 1.3 shows the multi focus concept, in which different focal settings bring different parts of the scene into focus.

Figure 1.3: Multi focus concept.

1.2 Motivation

Motivation for image fusion research is mainly the result of recent technological advances in the
fields of sensing methods and sensor design. Improved robustness and increased resolution of
modern imaging sensors and, more significantly, availability at a lower cost, have made the use
of multiple sensors common in a range of imaging applications. In the past decade medical imaging, night vision, military and civilian avionics, autonomous vehicle navigation, remote sensing,
concealed weapons detection and various security and surveillance systems are only some of the
applications that have benefited from such multisensory arrays. Increased spatial, spectral resolution and faster scan rates offered by such modern sensor suites provide a more reliable and
complete picture of the scanned environment. This in turn can lead to improved performance of
dedicated imaging systems. However, potential performance gains come at the cost of a large increase in the raw amount of sensor data that has to be processed. [2]
Thus, an increase in the number of sensors used in a particular application leads to a proportional increase in the amount of image data. Furthermore, a linear increase in the size of the imaging arrays has an even more dramatic effect, resulting in an exponential (specifically quadratic) increase in the amount of data appearing at the sensor array output. This means that, if system



performance improvements are to be realized, the deployment of additional sensors must be accompanied by a corresponding increase in the processing power of the system. In automated task
applications, such as autonomous vehicle navigation, this demand can be met by a corresponding increase in the number of processing units, using faster Digital Signal Processing (DSP) and
larger memory devices. This solution however, can be quite expensive. In addition, when imaging systems are operated by humans, presenting multiple images to an individual operator is both
cumbersome and places an unreasonable demand on the operator, resulting in diminished performance. Furthermore, in systems which use a group of operators, integrating visual information across the group is almost impossible.
Pixel-level (PL) image fusion algorithms represent an efficient solution to this problem of operator-related information overload. Through fusion, input signals containing redundant information are condensed into a single representation by choosing the most important features at each image
spatial position. Thus PL fusion effectively reduces the amount of data that needs to be processed without any significant loss of useful information. Additionally, image fusion provides an
effective way of integrating visual information across different sensors. To accurately determine
spatial alignment of features in different input images is a near impossible task for human observers either by viewing the images simultaneously or in a sequence. By preserving the spatial
characteristics of image features during fusion, the integration of spatial information is achieved
by displaying the features in a single fused image. Having to consider only one displayed image
at any one time significantly reduces the workload of the operator.

1.3 Outline of the thesis

In this chapter I have covered the fundamentals of basic image fusion methods and how we can fuse two images, with a focus on image registration. I have also mentioned the two basic domains for image fusion, i.e., the spatial and transform domains.
Chapter 2 summarises the literature survey: how much work has been done in this area and a few methodologies for multifocus image fusion.



Chapter 3 describes a method for the implementation of multifocus image fusion using spatial frequency.
Chapter 4 presents the experimental results.
Chapter 5 is the last chapter; it summarizes all the work done throughout this thesis, gives some concluding remarks, and outlines future work for my next semester.


Chapter 2
Literature Survey
2.1 Essence and Discussion of/on Related Work

Many methods have been developed for multi focus image fusion. Generally, two types of approaches are used, based on the spatial domain and the transform domain. In the spatial domain we work with pixel based methods; in transform domain methods we take the DWT, DCT or contourlet transform of a number of input images, find their coefficients, fuse the images based on some fusion rule, take the inverse transform to get the fused image, and optionally apply some filtering operation to enhance its quality.

Simple image fusion by averaging two or more images or by taking the maximum intensity value. [3]
Multi focus image fusion by establishing focal connectivity. [5]
Image fusion using wavelet based methods. [3]
Pixel based image fusion using spatial frequency. [6]
A novel two step region based multifocus image fusion method. [4]
After referring to the above technical papers, I came to the following techniques for the fusion of multiple images. Most of these are based on pixel level and image segmentation (region based) approaches and transform domain techniques like DWT based image fusion. The most promising approaches for image fusion are:

Pixel level Multifocus image fusion.


Wavelet based Multifocus image fusion.


2.1.1 Pixel level Multifocus image fusion

With the development of new imaging sensors arises the need for a meaningful combination of
all employed imaging sources. The actual fusion process can take place at different levels of
information representation; a generic categorization is to consider the different levels as, sorted
in ascending order of abstraction: signal, pixel, feature and symbolic level. This section focuses on the
so-called pixel level fusion process [1], where a composite image has to be built of several input
images. To date, the result of pixel level image fusion is considered primarily to be presented to
the human observer, especially in image sequence fusion (where the input data consists of image
sequences). In pixel-level image fusion, some generic requirements can be imposed on the fusion result. The fusion process should preserve all relevant information of the input imagery in the composite image (pattern conservation); the fusion scheme should not introduce any artifacts or inconsistencies which would distract the human observer or following processing stages. The fusion process should be shift and rotational invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery. In the case of image sequence fusion the additional problem of temporal stability and consistency of the fused image sequence arises. The human visual system is primarily sensitive to moving light stimuli, so moving artifacts or time dependent contrast changes introduced by the fusion process are highly distracting to the human observer. So, in the case of image sequence fusion, two additional requirements apply.

Temporal stability: The fused image sequence should be temporally stable, i.e. gray level changes in the fused sequence must only be caused by gray level changes in the input sequences; they must not be introduced by the fusion scheme itself.

Temporal consistency: Gray level changes occurring in the input sequences must be present in the fused sequence without any delay or contrast change.

The goal of pixel-level image fusion can broadly be defined as: to represent the visual information present in any number of input images in a single fused image, without the introduction of distortion or loss of information.
Although we can safely say that multiresolution and multiscale methods dominate the field



of pixel-level fusion, arithmetic fusion algorithms are the simplest, and sometimes effective
fusion methods. Arithmetic fusion algorithms produce the fused image pixel by pixel, as an
arithmetic combination of the corresponding pixels in the input images. Arithmetic fusion can
be summarized by the expression given below:
F(m, n) = k_1 A(m, n) + k_2 B(m, n) + C    (2.1)

where A, B and F represent the input and fused images respectively at location (m, n), and k_1, k_2 and C are constants defining the fusion method, with k_1 and k_2 defining the relative influence of the individual inputs on the fused image and C the mean offset. Image averaging is the most commonly used example of such fusion methods. In this case, the fused signal is evaluated as the average value of the inputs, i.e. k_1 = k_2 = 0.5 and C = 0. However, despite being significantly more computationally efficient than most other fusion systems, image averaging, like all other arithmetic fusion methods, does not achieve enviable performance. The main reason for this is the loss of contrast, a result of destructive superposition when input signals are added. Further contrast reduction is also introduced when a normalized sum is used, such as in image averaging. In general, averaging produces reasonable image quality in areas where the input images are similar, but the quality rapidly decreases in regions where the inputs differ.
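To make the arithmetic rule concrete, the following is a minimal Python/NumPy sketch of Eq. (2.1); the function name and the clipping to the 8-bit gray-level range are my own choices, with pixel averaging (k_1 = k_2 = 0.5, C = 0) as the default special case.

```python
import numpy as np

def arithmetic_fusion(a: np.ndarray, b: np.ndarray,
                      k1: float = 0.5, k2: float = 0.5, c: float = 0.0) -> np.ndarray:
    """Fuse two registered grayscale images as F = k1*A + k2*B + C (Eq. 2.1).

    The defaults reduce to plain pixel averaging."""
    f = k1 * a.astype(np.float64) + k2 * b.astype(np.float64) + c
    # Clip back to the 8-bit gray-level range before returning.
    return np.clip(f, 0, 255).astype(np.uint8)
```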

2.1.2 Wavelet based multi focus image fusion method [3]

It is also known as multiresolution approach for image fusion. The Discrete Wavelet Transform
(DWT) of image signals produces a non-redundant image representation, which provides
better spatial and spectral localization of image information, compared with other multiscale
representations such as Gaussian and Laplacian pyramid. Recently, Discrete Wavelet Transform
has attracted more and more interest in image de-noising. The DWT can be interpreted as signal
decomposition in a set of independent, spatially oriented frequency channels. The signal S is passed through two complementary filters and emerges as two signals, approximation and details. This is called decomposition or analysis. The components can be assembled back
into the original signal without loss of information. This process is called reconstruction or
synthesis. The mathematical manipulation, which implies analysis and synthesis, is called
discrete wavelet transform and inverse discrete wavelet transform. An image can be decomposed




into a sequence of different spatial resolution images using the DWT. In the case of a 2D image, an N level decomposition can be performed, resulting in 3N+1 different frequency bands, namely LL, LH, HL and HH, as shown in Figure 2.1. These are also known by other names; the sub-bands may
be respectively called a1 or the first average image, H1 called horizontal fluctuation, V1 called
vertical fluctuation and d1 called the first diagonal fluctuation. The sub-image a1 is formed by
computing the trends along rows of the image followed by computing trends along its columns.
In the same manner, fluctuations are also created by computing trends along rows followed by
trends along columns. The next level of wavelet transforms is applied to the low frequency sub
band image LL only. The Gaussian noise will nearly be averaged out in low frequency wavelet
coefficients. Therefore, only the wavelet coefficients in the high frequency levels need to be
thresholded. [7], [8]
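As a small illustration of this analysis/synthesis round trip, here is a sketch using the PyWavelets library (an assumption; any DWT implementation would serve): one level of decomposition into the LL approximation and the LH/HL/HH details, followed by lossless reconstruction.

```python
import numpy as np
import pywt  # PyWavelets

# One analysis/synthesis round trip on a toy 64x64 image.
img = np.random.rand(64, 64)
ll, (lh, hl, hh) = pywt.dwt2(img, "db2")        # decomposition (analysis)
rec = pywt.idwt2((ll, (lh, hl, hh)), "db2")     # reconstruction (synthesis)
assert np.allclose(img, rec[:64, :64])          # perfect reconstruction
```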
The two dimensional discrete wavelet transform is becoming one of the standard tools for image

Figure 2.1: Two level DWT Decomposition


fusion. The DWT is computed by successive lowpass and highpass filtering of the digital image or images. Its significance is in the manner it connects the continuous time multiresolution to discrete time filters. The principle of image fusion using wavelets is to merge the wavelet decompositions of the two original images using fusion methods applied to approximation coefficients and detail coefficients.
Wavelet based techniques for the fusion of 2-D images are described here. In all wavelet based image fusion techniques the wavelet transforms W of the two registered input images I_1(x, y) and I_2(x, y) are computed, and these transforms are combined using some fusion rule as


Figure 2.2: Image Fusion Scheme Using Wavelet Scheme


shown in the equation below:

I(x, y) = W^{-1}(\phi(W(I_1(x, y)), W(I_2(x, y))))    (2.2)

where W(I_1(x, y)) and W(I_2(x, y)) are the wavelet transforms of the first and second input images, \phi denotes the fusion rule, and W^{-1} is the inverse wavelet transform that produces the fused image.
In the case of wavelet transform based fusion method, all respective wavelet coefficients from
the input images are combined using the fusion rule. Since wavelet coefficients having large
absolute values contain the information about the salient features of the images such as edges
and lines, a good fusion rule is to take the maximum of the corresponding wavelet coefficients.
The steps of our proposed wavelet based image fusion method are explained below (a code sketch follows the list):

1. Read the set of multifocus images; here in our proposed algorithm we consider two images of the same size (registered images).
2. Apply wavelet decomposition to both images using a Daubechies filter.
3. Extract from the wavelet decomposition structure [C, S] the horizontal, vertical and diagonal details.
4. Average the approximation coefficients of both decomposed images.
5. Compare the horizontal, vertical and diagonal coefficients of both images and apply a maximum selection scheme, choosing the coefficient with the larger value at each position. Perform this for all the pixel values of the image, i.e. m × n.
6. Apply the inverse wavelet transform (reconstruction) and display the final fused image.
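As an illustration of these steps, the sketch below again uses the PyWavelets library (an assumption), with the "db2" wavelet standing in for the Daubechies filter of step 2 and the function name chosen here for illustration:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fusion(a: np.ndarray, b: np.ndarray,
                   wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two registered, same-size grayscale images in the wavelet domain."""
    ca = pywt.wavedec2(a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(np.float64), wavelet, level=level)

    # Step 4: average the approximation coefficients.
    fused = [(ca[0] + cb[0]) / 2.0]
    # Step 5: maximum-magnitude selection in every detail sub-band.
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(bands_a, bands_b)))
    # Step 6: reconstruct the fused image by the inverse DWT.
    return pywt.waverec2(fused, wavelet)
```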

2.1.3 Multi focus image fusion using blurring measurement [9]

This paper is based on finding the focused and non-focused regions in the input images and then combining the focused regions of both images. This method can significantly reduce the amount of distortion artifacts and the loss of contrast information.


Chapter 3
Multifocus image fusion using spatial frequency
3.1 Background

Image fusion attempts to combine complementary information from multiple images of the same scene, so that the resultant image is more suitable for human visual perception and computer processing tasks such as segmentation, feature extraction and object recognition. In this implementation I fuse images with diverse focus by first decomposing the source images into a number of blocks and then using the spatial frequency. The algorithm is computationally simple and can be implemented in real time applications.
Recently, image fusion has become an important topic in image analysis and computer vision. Image fusion refers to image processing techniques that produce a new, enhanced image by combining images from two or more sensors; the fused image is more suitable for human/machine perception and object recognition. The simplest method is plain averaging, but its drawback is that it reduces the contrast of the resultant image.
In this chapter I consider the situation where two or more objects in the scene are at different distances from the camera. Since the camera cannot keep the whole scene in focus at once, the image obtained from it will not be in focus everywhere, i.e., if one object is in focus another will be out of focus. By fusing the different images into a single image we get a final image that is in focus everywhere. [8]
An important preprocessing step in image fusion is image registration, i.e. corresponding pixel positions in the source images must refer to the same location. Here I have taken registered images; if the images are not registered, this algorithm is not applicable.



3.2 Spatial frequency [11] [6]

Spatial frequency is a key parameter for assessing the quality of an image: if an image has high spatial frequency, this generally means the image is of good quality. Spatial frequency measures the overall activity level in an image [7]. For an M×N image block F with gray value F(m, n), the spatial frequency is defined as
SF = \sqrt{(RF)^2 + (CF)^2}    (3.1)

where RF and CF are the row and column frequencies respectively:

RF = \sqrt{\frac{1}{MN} \sum_{m=1}^{M} \sum_{n=2}^{N} \left[ F(m, n) - F(m, n-1) \right]^2}    (3.2)

CF = \sqrt{\frac{1}{MN} \sum_{n=1}^{N} \sum_{m=2}^{M} \left[ F(m, n) - F(m-1, n) \right]^2}    (3.3)

Figure 3.1 shows a 64×64 image block extracted from the Lena image; parts (b)-(e) show degraded versions after blurring with a Gaussian of radius 0.5, 0.8, 1.0 and 1.5, respectively. From Figure 3.1 it is clear that as the amount of blurring increases, the spatial frequency of the image decreases.

Table 3.1: Spatial frequencies of the image blocks in Figure 3.1

SF       Figure (a)   Figure (b)   Figure (c)   Figure (d)   Figure (e)
Clock    16.10        12.09        9.67         8.04         6.97

The table shows that as the blur radius increases, the amount of blur also increases and the spatial frequency decreases.



Figure 3.1: Original and blurred versions of an image block extracted from the Lena image: (a) original image; (b) radius=0.5; (c) radius=0.8; (d) radius=1.0; (e) radius=1.5. [6]

From the figure and the table above it is clear that as blurring increases, the spatial frequency of the image decreases; spatial frequency is directly related to image quality and can be used to reflect the clarity of an image.
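These definitions translate directly into a few lines of NumPy. The following is a sketch assuming a 2-D gray-level block, with the function name chosen here for illustration:

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Spatial frequency of an M x N gray-level block (Eqs. 3.1-3.3)."""
    f = block.astype(np.float64)
    m, n = f.shape
    rf = np.sqrt(np.sum((f[:, 1:] - f[:, :-1]) ** 2) / (m * n))  # row frequency, Eq. 3.2
    cf = np.sqrt(np.sum((f[1:, :] - f[:-1, :]) ** 2) / (m * n))  # column frequency, Eq. 3.3
    return float(np.sqrt(rf ** 2 + cf ** 2))                     # Eq. 3.1
```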

3.3 Implementation steps

Figure 3.2 shows a schematic diagram of the multi focus image fusion using spatial frequency method. For simplicity, only two source images are considered here, though the approach extends straightforwardly to more than two images. The algorithm consists of the following steps (a code sketch is given after the list):
1. Decompose the source images A and B into blocks of size M×N. Denote the ith blocks of A and B by A_i and B_i respectively.



Figure 3.2: Schematic diagram of multi focus image fusion


2. Compute the spatial frequency of each block, and denote the spatial frequencies of A_i and B_i by SF_i^A and SF_i^B respectively.

3. Compare the spatial frequencies of the two corresponding blocks A_i and B_i, and construct the ith block F_i of the fused image:

F_i = \begin{cases} A_i & \text{if } SF_i^A > SF_i^B + TH \\ B_i & \text{if } SF_i^A < SF_i^B - TH \\ (A_i + B_i)/2 & \text{otherwise} \end{cases}    (3.4)

Here TH is a user defined threshold, and (A_i + B_i)/2 means taking the pixel-by-pixel gray level average of A_i and B_i.

4. Verify the fusion result of step 3: specifically, if the center block comes from A but the majority of its surrounding blocks come from B, then this center block is changed to be from B, and vice versa. In the implementation, we use a majority filter together with a 3×3 window.

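A minimal sketch of steps 1-3 follows (step 4, the majority-filter verification, is treated in Section 3.5). It reuses the spatial_frequency helper sketched in Section 3.2 and assumes the image dimensions are exact multiples of the block size:

```python
import numpy as np
# Assumes spatial_frequency() from the Section 3.2 sketch is in scope.

def sf_block_fusion(a: np.ndarray, b: np.ndarray,
                    block: int = 16, th: float = 0.0) -> np.ndarray:
    """Block-wise multifocus fusion by spatial frequency comparison (Eq. 3.4)."""
    fused = np.empty(a.shape, dtype=np.float64)
    for i in range(0, a.shape[0], block):
        for j in range(0, a.shape[1], block):
            ai = a[i:i + block, j:j + block].astype(np.float64)
            bi = b[i:i + block, j:j + block].astype(np.float64)
            sfa, sfb = spatial_frequency(ai), spatial_frequency(bi)
            if sfa > sfb + th:               # block from A is clearer
                fused[i:i + block, j:j + block] = ai
            elif sfa < sfb - th:             # block from B is clearer
                fused[i:i + block, j:j + block] = bi
            else:                            # ambiguous: average the blocks
                fused[i:i + block, j:j + block] = (ai + bi) / 2.0
    return fused
```

For example, calling sf_block_fusion(img_a, img_b, block=16) corresponds to the 16×16 block configuration used in the experiments of Chapter 4.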


3.4 RMSE (Root Mean Square Error) [6] [13]

The objective is to measure the difference between two images as a measure of image quality; in other words, it is a way of measuring the difference between an estimated value and the original value. In our case we compare our fused image with the original image and observe the difference between them. Given a reference image R and a fused image F, both of size I×J, the RMSE is defined as

RMSE = \sqrt{\frac{1}{IJ} \sum_{i=1}^{I} \sum_{j=1}^{J} \left[ R(i, j) - F(i, j) \right]^2}    (3.5)

where R(i, j) and F(i, j) are the pixel values at position (i, j) of R and F respectively.
Mutual information has also been used, but the results are similar to RMSE. Other parameters, such as the variance, are also used to measure the variation between the fused image and the reference image. We can also apply edge detection to both images to compare their edges.
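Equation (3.5) reduces to a single NumPy expression; the sketch below assumes two same-size gray-level arrays, and the function name is mine:

```python
import numpy as np

def rmse(reference: np.ndarray, fused: np.ndarray) -> float:
    """Root mean square error between reference R and fused image F (Eq. 3.5)."""
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```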

3.5 Majority filter [13]

A majority filter is used to avoid artifacts in the fused image caused by the fusion rules. If the center block comes from I_1 and five or more of the blocks in its 3×3 neighborhood are from I_2, then the center block is replaced by the corresponding block from I_2, and vice versa. [6] [13]
The Majority Filter tool replaces cells based on the majority value in their contiguous neighborhoods. Majority Filter has two criteria to satisfy before a replacement can occur. First, the
number of neighboring cells with the same value must be large enough to be the majority value,
or at least half the cells must have the same value (depending on the parameter specified). That is,
three out of four or five out of eight connected cells must have the same value with the majority
parameter and two out of four or four out of eight are needed for the half parameter. Second,
those cells must be contiguous to the center of the specified filter (for example, three out of four




cells must be the same). The second criterion, concerning the spatial connectivity of the cells, minimizes the corruption of cellular spatial patterns. If these criteria are not met, no replacement occurs, and the cell retains its value.
In the image below, Majority Filter is applied to the input raster using a filter of the closest four
cells, which are the four orthogonal neighboring cells, requiring the majority (three out of four
cells) to be the same before a cell will change its value. Only those cells surrounded by three or
more (orthogonal) cells with the same value are changed.
In the image below, the Majority Filter is applied using the closest eight cells as a filter and

Figure 3.3: Majority filtering on image using closest four


requiring at least half the values (four out of eight cells) to have the same value before changing the cell's value. Notice there is a greater smoothing effect.

Figure 3.4: Majority filtering on image using closest eight
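To sketch how this consistency verification could be coded, the function below operates on a block-label map (0 = block taken from the first image, 1 = from the second) and flips a label when enough of its 3×3 neighbours carry the other label; the names, the threshold default of five, and the border handling are my assumptions:

```python
import numpy as np

def majority_filter(labels: np.ndarray, threshold: int = 5) -> np.ndarray:
    """Flip a block label when `threshold` or more of its (up to eight)
    3x3 neighbours carry the opposite label; border blocks simply have
    fewer neighbours, so they flip less readily."""
    out = labels.copy()
    rows, cols = labels.shape
    for r in range(rows):
        for c in range(cols):
            # 3x3 window clipped at the borders of the label map.
            win = labels[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            ones = int(win.sum()) - int(labels[r, c])   # neighbours labelled 1
            zeros = win.size - 1 - ones                 # neighbours labelled 0
            if labels[r, c] == 0 and ones >= threshold:
                out[r, c] = 1
            elif labels[r, c] == 1 and zeros >= threshold:
                out[r, c] = 0
    return out
```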


Chapter 4
Experimental results
4.1 Pixel based image fusion

The following are the results from the pixel level image fusion methods applied to sets of images. Here we assume that the input images are either color or gray-scale and are registered. The experimental results are shown in Figure 4.1, Figure 4.2 and Figure 4.3. I have shown two results: one from pixel averaging and the second from taking the maximum pixel value among the given images. From the results we can conclude that through fusion the fused image contains information that is missing from the individual input images. The results also show that with simple averaging the contrast of the fused image is decreased, so I have used the offset value C = 1 in Eq. (2.1); this value can be increased for better results. We can also conclude that the maximum pixel value method does not give an image of good quality.

Figure 4.1: Pixel based image fusion




Experimental result 2

Figure 4.2: Pixel based image fusion, Example 2


Experimental result 3

Figure 4.3: Pixel based image fusion, Example 3



4.2 Multifocus image fusion based on spatial frequency

In this method, as mentioned before, the input images are partitioned into image blocks. Figure 4.4 and Figure 4.5 show fused images obtained by partitioning the input images into blocks of 16×16; here the threshold was TH = 0. Figure 4.6 and Figure 4.7 show fused images obtained with blocks of 32×32, again with TH = 0. Figure 4.8 and Figure 4.9 show fused images obtained with blocks of 64×64, again with TH = 0.
From Table 4.1 it is clear that as the number of blocks per image increases (i.e. the block size decreases), the spatial frequency (SF) of the fused image increases and the value of the RMSE decreases.
Table 4.1: Effects of fusing the images in Figures 4.4-4.9 with different block sizes

Figure name   Block size   SF of first image   SF of second image   SF of fused image   RMSE
Clock         16×16        5.9719              4.913                6.410               1.92
Clock         32×32        5.9719              4.913                6.382               2.03
Clock         64×64        5.9719              4.913                6.33                2.25
Leopard       16×16        8.521               7.671                8.94                1.53
Leopard       32×32        8.521               7.671                8.912               1.63
Leopard       64×64        8.521               7.671                8.874               1.91



Figure 4.4: Multifocus image fusion based on spatial frequency, Example 1 (16×16)

Figure 4.5: Multifocus image fusion based on spatial frequency, Example 2 (16×16)



Figure 4.6: Multifocus image fusion based on spatial frequency, Example 1 (32×32)

Figure 4.7: Multifocus image fusion based on spatial frequency, Example 2 (32×32)



Figure 4.8: Multifocus image fusion based on spatial frequency, Example 1 (64×64)

Figure 4.9: Multifocus image fusion based on spatial frequency, Example 2 (64×64)


Chapter 5
Conclusion and Future work
5.1 Conclusion

From the experiments and from Table 4.1, I have concluded that the multifocus image fusion method gives a better fused image, with the spatial frequency of the fused image greater than that of the input images, and that it gives better results than the wavelet based method. I have also seen that as the number of blocks into which the image is divided increases, the quality of the fused image increases and the RMSE decreases.

5.2 Future work

As future work, I am going to develop a multifocus image fusion method based on blur measurement, which can find the blur in any image. I also want to add the majority filter to my current implementation; I think it will increase the quality of my results. I am also going to compare the results with the blur measuring method.


References
[1] F. Sroubek, J. Flusser, B. Zitova, "Image Fusion: A Powerful Tool for Object Identification", 2007.
[2] V. S. Petrovic, "Multisensor Pixel-level Image Fusion", Feb. 2001.
[3] C. K. Solanki, N. M. Patel, "Pixel Based and Wavelet Based Image Fusion Methods with Their Comparative Study", National Conference NCERET, 2011.
[4] T. Zaveri, M. Zaveri, "A Novel Two Step Region Based Multifocus Image Fusion Method", International Journal of Computer and Electrical Engineering, vol. 2, no. 1, February 2010.
[5] H. Hariharan, A. Koschan, M. Abidi, "Multifocus Image Fusion by Establishing Focal Connectivity", IEEE, 2007.
[6] S. Li, J. T. Kwok, Y. Wang, "Combination of Images with Diverse Focuses Using the Spatial Frequency", Information Fusion, vol. 2, pp. 169-176, 2001.
[7] D. L. Hall, J. Llinas, "An Introduction to Multisensor Data Fusion", Proc. IEEE, vol. 85, no. 1, pp. 6-23, 1997.
[8] W. B. Seales, S. Dutta, "Everywhere-in-focus Image Fusion Using Controllable Cameras", Proc. SPIE 256, pp. 227-236, 1996.
[9] C. Xiong, J. Hou, J. Tian, J. Liu, "Efficient Fusion Scheme for Multi-focus Images by Blurring Measurement", Digital Signal Processing 87, pp. 186-193, 2009.
[10] R. Murli, K. R. Sankarsubramanian, "Multi Focus Image Fusion Based on the Information Level in the Regions of the Images", Journal of Theoretical and Applied Information Technology, pp. 80-85, 2005-2007.
[11] S. Li, Y. Wang, "Multifocus Image Fusion Using Spatial Features and Support Vector Machine", Springer, pp. 753-758, 2005.
[12] V. P. S. Naidu, "Multi Focus Image Fusion Using Measure of Focus", J. Opt., May 2012.
[13] V. P. S. Naidu, J. R. Raol, "Fusion of Out-of-focus Images Using Principal Component Analysis and Spatial Frequency", J. Aerosp. Sci. Technol., vol. 3, pp. 216-225, 2008.

