
Green Technologies: Smart and Efficient Management-2012, SLIET, Longowal

Pixel Level Image Fusion Techniques


Sweeti, Manpreet Kaur, Birmohan Singh
Abstract: Image fusion is the process of combining useful input images, taken by single or multiple sensors at the same or different times and from single or multiple views, into a single composite image that carries the redundant and complementary information of the sources for better visualization. Fusion can be performed in various domains and at various levels; of the three levels of fusion, this paper concentrates on pixel-level methods. Image fusion methods have a wide range of applications. In this paper we discuss the various domains, levels, applications and techniques of fusion, with some results of the present thesis work.

Keywords: Pixel-level fusion methods, multispectral images, multifocus images

I. INTRODUCTION
Image fusion is the process of combining useful input images, taken by single or multiple sensors at the same or different times and from single or multiple views, into a single composite image that carries the redundant and complementary information of the sources for better visualization. The fusion process should preserve all relevant information of the input images in the fused image, and should not introduce any artifacts or inconsistencies into the following processing stages. The fusion scheme should also be shift and rotation invariant, i.e. the fusion result should not depend on the location or orientation of an object in the input imagery (Rockinger, 1996).

Fusion domains: There are several domains in which image fusion can be performed:
1. Multiview fusion: Images of the same modality are taken from different views, supplying complementary information about the scene from those views.
2. Multimodal fusion: Images of the same scene are taken with different modalities, such as CT, MRI, PET and SPECT. It provides a compressed fused image with the band-specific information.
3. Multitemporal fusion: Images of the same scene taken at different times with the same modality are fused to detect changes, for example in tumour growth detection or landmarking.
4. Multifocus fusion: Images of a scene with different focused regions are fused to obtain a single image with complete information over all regions [6], [9].

Fusion levels: There are three levels at which image fusion may take place:
1. Pixel level: Fusion is performed on a pixel-by-pixel basis. All input images must be spatially registered/aligned exactly to one another so that all pixel positions correspond (Rockinger, 1996). In the fused image, the information associated with each pixel is determined from a set of pixels of the source images.
2. Feature level: Objects/features are first extracted and recognized from the source images; similar features from the input images are then fused together to obtain the fused image.
3. Decision level: The input images are processed individually for information extraction, and the obtained image descriptions are then combined into the fused result by applying decision rules [4], [5], [9].

II. APPLICATIONS
Image fusion covers a wide range of applications; some of them are:
Intelligent robots: stereo camera fusion, intelligent viewing control, automatic target recognition and tracking.
Manufacturing: electronic circuit and component inspection, product surface measurement and inspection, non-destructive material inspection, complex machine/device diagnostics.
Military and law enforcement: detection, tracking and identification of ocean, air and ground targets/events; concealed weapon detection; battlefield monitoring; night pilot guidance.
Remote sensing: satellite imaging.
Medical imaging: computer-assisted surgery; fusing X-ray, computed tomography (CT) and magnetic resonance (MR) images [9].
On the basis of the different fields of application, image fusion must fulfil objectives such as reducing noise, improving spatial resolution, and visualizing high-dimensional images and images from different physical principles, such as infrared, thermal, sonar and X-ray images.

III. IMAGE FUSION METHODS
From the literature we found that various pixel-level fusion methods have been developed. Some well-known image fusion methods are listed below [5-9]:
(1) Intensity-hue-saturation (IHS) transform based fusion
(2) Principal component analysis (PCA) based fusion
(3) Arithmetic combinations: (a) Brovey transform, (b) synthetic variable ratio technique, (c) ratio enhancement technique
(4) Multiscale transform based fusion: (a) high-pass filtering method, (b) pyramid method, (c) wavelet transform, (d) curvelet transform

Sweeti is an M.Tech student in the Department of Instrumentation and Control Engineering, SLIET, Longowal (Sangrur) (sweeti724bairagi@gmail.com). Manpreet Kaur is an Assistant Professor in the Department of Instrumentation and Control Engineering, SLIET, Longowal (Sangrur) (aneja_mpk@yahoo.co.in). Birmohan Singh is an Associate Professor in the CSE Department, SLIET, Longowal.

GT10-1

(5) Total probability density fusion
(6) Biologically inspired information fusion [11]

Some basic methods are discussed below:

1. Averaging technique: It works by taking the average intensity value of the input images pixel by pixel. The values of pixel P(i, j) from the N input images are added, the sum is divided by N, and the resulting average is assigned to the corresponding pixel of the output image. This is repeated for all pixel positions [10].

2. Standard IHS fusion: The intensity-hue-saturation (IHS) fusion converts a colour multispectral (MS) image from RGB space into the IHS colour space. Because the intensity (I) band resembles a panchromatic (PAN) image, it is replaced by a high-resolution PAN image in the fusion, and a reverse IHS transform is then performed [4], [9].

3. PCA transform: PCA finds patterns in high-dimensional data and compresses the data into a more manageable form by reducing the number of dimensions without much loss of information. It transforms the data so that it can be expressed in terms of its patterns, decomposing it into vectors that describe the greatest contribution to variation within the data set. It converts the correlated MS bands into a set of uncorrelated components; the first principal component resembles the PAN image, so the PCA fusion scheme is similar to the IHS fusion scheme [5], [9].

4. Pyramid transform: The pyramid transform is very useful for performing fusion in the transform domain. An image pyramid is a set of low-pass or band-pass copies of an image, each copy representing pattern information at a different scale. In fusion, the pyramid transform of the fused image is constructed from the pyramid transforms of the source images, and the fused image is then obtained by taking the inverse pyramid transform.
At every level, the pyramid reduces to half the size of the pyramid at the preceding level, and the higher levels concentrate on the lower spatial frequencies [1], [5], [10].

5. Wavelet transform: The original concept of wavelet-based analysis came from Mallat. The wavelet transform is a mathematical tool that can detect local features. A 1-D DWT is first performed on the rows and then on the columns of the data by separate filtering and downsampling. This results in one set of approximation coefficients and three sets of detail coefficients, representing the horizontal, vertical and diagonal directions of the image, respectively. By recursively applying the same decomposition to the LL subband, a multiresolution decomposition of the desired level can be achieved [4], [10].
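As a minimal illustration of this separable row-then-column scheme, a single-level 2-D transform with the simplest (Haar) filter pair can be sketched as follows. This is for illustration only, not the filter bank used in the cited works, and it assumes the image has even dimensions:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar DWT: filter and downsample rows, then columns."""
    # Row step: low-pass (average) and high-pass (difference) of pixel pairs.
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Column step on each band gives one approximation and three detail subbands.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation coefficients
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0   # detail
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0   # detail
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0   # detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: undo the column step, then the row step."""
    h, w = LL.shape
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    lo[0::2, :] = LL + LH
    lo[1::2, :] = LL - LH
    hi[0::2, :] = HL + HH
    hi[1::2, :] = HL - HH
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2] = lo + hi
    x[:, 1::2] = lo - hi
    return x
```

Recursively applying haar_dwt2 to the LL subband yields the multiresolution decomposition described above.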

In wavelet fusion, the images to be fused are first decomposed; the coefficients of the low-frequency band and the high-frequency bands are then combined with a certain fusion rule, and the IDWT is performed on the combined coefficients to obtain the fused image.

IV. SOME RESULTS
At this stage of the thesis we are working on the basic methods, which we have applied to standard images [2], [3]. Some results are discussed here.

Qualitative evaluation
1. Multifocus image fusion

[Figure 1: six image panels (a)-(f)]

Fig. 1: Fusion of multifocus clock images. (a), (b) original images with different focus (imagefusion.org); (c) minimum; (d) maximum; (e) averaging; (f) PCA fusion
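For two registered grayscale input images, the minimum, maximum, averaging and PCA rules applied above can be sketched as below. This is a minimal NumPy sketch with our own function names; the PCA rule derives the two weights from the principal eigenvector of the 2x2 covariance matrix of the input intensities:

```python
import numpy as np

def fuse_min(a, b):
    return np.minimum(a, b)      # pick the darker pixel at each position

def fuse_max(a, b):
    return np.maximum(a, b)      # pick the brighter pixel at each position

def fuse_average(a, b):
    return (a + b) / 2.0         # pixel-by-pixel mean

def fuse_pca(a, b):
    # Treat the two images as two variables observed at every pixel.
    data = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(data)                        # 2 x 2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])      # principal eigenvector
    w = v / v.sum()                           # normalize so the weights sum to 1
    return w[0] * a + w[1] * b
```

All four rules return an image of the same size as the inputs, so they assume the sources are already registered pixel-to-pixel.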


2. Multispectral image fusion

V. DISCUSSION AND FUTURE ASPECTS
From the study of the various methods we can draw some conclusions for and against each fusion method. The simple average method leads to undesirable side effects such as reduced contrast; the weighted-average approach, in which the fused pixel is estimated as the weighted average of the corresponding input pixels, avoids this, but the weight estimation requires a user-specified threshold. The fused images obtained by the intensity-hue-saturation (IHS), principal component analysis (PCA) and Brovey transform methods have high spatial quality but suffer from spectral degradation. The pyramid method fails to introduce any spatial orientation selectivity into the decomposition process and causes blocking effects in the fused image [6]. The DWT of image signals produces a non-redundant image representation and provides better spatial and spectral localization than other multiresolution representations; it also gives better signal-to-noise ratios, increased directional information and reduced blocking artifacts. As the next step in our work we will focus on applying the wavelet transform to multispectral medical images for better diagnosis and visualization. Our future work includes the development of a hybrid algorithm that gives good results according to appropriate performance evaluation parameters.

REFERENCES
[1] F. Sadjadi, "Comparative Image Fusion Analysis," Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), IEEE, 2005.
[2] R. Maruthi and K. Sankarasubramanian, "Multi Focus Image Fusion Based on the Information Level in the Regions of the Images," Journal of Theoretical and Applied Information Technology.
[3] T. Zaveri and M. Zaveri, "A Novel Region Based Multimodality Image Fusion Method," Journal of Pattern Recognition Research, vol. 2, pp. 140-153, 2011.
[4] D. Jiang, D. Zhuang, Y. Huang and J. Fu, "Survey of Multispectral Image Fusion Techniques in Remote Sensing Applications," Institute of Geographical Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, China.
[5] H. Angel, C. Ste-Croix and E. Kittel, "Review of Fusion Systems and Contributing Technologies for SIHS," Defence Research and Development Canada-Toronto.
[6] Y. Yang, D. S. Park, S. Huang and N. Rao, "Medical Image Fusion via an Effective Wavelet-Based Approach," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 579341, 13 pages, 2010, doi:10.1155/2010/579341.
[7] P. Shah, S. N. Merchant and U. B. Desai, "Multifocus and multispectral image fusion based on pixel significance using multiresolution decomposition," Springer-Verlag London, 2011.
[8] F. A. Al-Wassai, N. V. Kalyankar and A. A. Al-Zuky, "Arithmetic and Frequency Filtering Methods of Pixel-Based Image Fusion Techniques," arXiv:1107.3348 [cs.CV], 2011.
[9] S. Zheng, "Pixel-level Image Fusion Algorithms for Multi-camera Imaging System," Master's Thesis, University of Tennessee, 2010. http://trace.tennessee.edu/utk_gradthes/848

[Figure 2: six image panels (a)-(f)]

Fig. 2: Fusion of multispectral medical images. (a), (b) original CT and MRI (imagefusion.org); (c) minimum; (d) maximum; (e) averaging; (f) PCA fusion

Quantitative evaluation
1. Multifocus images

                     Minimum selection   Maximum selection   Averaging   PCA
Mean                 95.7282             104.2056            100.2009    99.9147
Standard deviation   50.3800             51.3465             50.4537     50.4258

2. Multispectral image fusion

                     Minimum selection   Maximum selection   Averaging   PCA
Mean                 4.4789              59.6852             32.3337     51.7376
Standard deviation   17.4368             61.6822             34.8820     54.2457

Other performance evaluation parameters include average gradient, entropy, mutual information, fusion symmetry, normalized correlation and mean square error [7]. Evaluation parameters can be reference-based or non-reference-based; the choice of parameter depends on the application area of the fusion method.
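Several of these parameters are straightforward to compute. The following is a minimal sketch of mean, standard deviation, entropy and mean square error for 8-bit grayscale images; the function names are our own:

```python
import numpy as np

def mean_std(img):
    """Mean and standard deviation of the image intensities."""
    return float(img.mean()), float(img.std())

def entropy(img, levels=256):
    """Shannon entropy (in bits) of the intensity histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())

def mse(reference, fused):
    """Mean square error against a reference image (a reference-based metric)."""
    diff = reference.astype(float) - fused.astype(float)
    return float((diff ** 2).mean())
```

Entropy is a non-reference measure of information content, whereas MSE needs a reference image, matching the reference-based versus non-reference-based distinction made above.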




[10] M. Chandana, S. Amutha and N. Kumar, "A Hybrid Multi-focus Medical Image Fusion Based on Wavelet Transform," International Journal of Research and Reviews in Computer Science (IJRRCS), vol. 2, no. 4, August 2011, ISSN 2079-2557.
[11] S.-G. Huang, "Wavelet for Image Fusion," Graduate Institute of Communication Engineering and Department of Electrical Engineering, National Taiwan University.


