
International Journal of Computer Trends and Technology - Volume 3, Issue 5 - 2012

PERFORMANCE COMPARISON OF DIFFERENT MULTI-RESOLUTION TRANSFORMS FOR IMAGE FUSION


Bapujee Uppada#1, G. Shankara Bhaskara Rao#2
#1 M.Tech (VLSI & ES), Sri Vasavi Engineering College, Tadepalligudem
#2 Associate Professor (ECE Department), Sri Vasavi Engineering College, Tadepalligudem

ABSTRACT: Image fusion combines information from multiple images of the same scene to obtain a composite image that is more suitable for human visual perception or for further image-processing tasks. In this project, we compare various multi-resolution decomposition algorithms for image fusion, especially recently developed image decomposition methods such as the curvelet and contourlet transforms. The investigation includes the effect of decomposition levels and filters on fusion performance. By comparing fusion results, we identify the best candidates for multi-focus images, infrared-visible images, and medical images. The experimental results show that the shift-invariant property is of great importance for image fusion. In addition, we conclude that short filters usually provide better fusion results than long filters, and that four decomposition levels is an appropriate setting.

I INTRODUCTION
Today, imaging sensors of various types are widely used in military and civilian applications, such as battlefield surveillance, health care, and traffic control. The information provided by different imaging sensors may be complementary as well as redundant. For example, a visible image provides the outline of a scene, while an infrared image can show the existence of special objects, such as concealed guns or people. To obtain an image that simultaneously contains the outline of the scene and such special objects, for the convenience of human visual perception or for further image-processing tasks, image fusion can be used to integrate the information provided by the individual sensors. In this project, we are concerned with the fusion of three types of source images: multi-focus images, infrared-visible images, and medical images.

During the past two decades, many image fusion methods have been developed. According to the stage at which image information is integrated, image fusion algorithms can be categorized into pixel, feature, and decision levels.

Pixel-level image fusion: Pixel-level fusion integrates the visual information contained in the source images into a single fused image based on the original pixel information. Pixel-level image fusion has attracted a great deal of research attention, and these algorithms can generally be categorized into spatial-domain fusion and transform-domain fusion.

Spatial domain fusion: The spatial-domain techniques fuse source images using local spatial features, such as gradient, spatial frequency, and local standard deviation.

Transform domain fusion: For the transform-domain methods, source images are projected onto localized bases which are usually designed to represent the sharpness and edges of an image. The transformed coefficients (each corresponding to a transform basis) of an image are therefore meaningful in detecting salient features. Consequently, according to the information provided by the transformed coefficients, one can select the required information from the source images to construct the fused image.

Zhang and Blum established a categorization of multiscale decomposition-based image fusion to achieve a high-quality digital camera image. They focused mainly on fusing the multiscale decomposition coefficients, so only a few basic transform types were considered, i.e. the Laplacian pyramid transform, the DWT, and the discrete wavelet frame (DWF), and only visible images were considered in performance comparisons for the digital camera application. Pajares and Cruz gave a tutorial on the wavelet-based image fusion methods. They presented a comprehensive comparison of different pyramid merging methods, different resolution levels, and different wavelet families, with three fusion examples: multi-focus images, multispectral-panchromatic remote sensing images, and functional-anatomical medical images.

Wavelets and related classical multiscale transforms conduct decomposition over a limited dictionary in which the two-dimensional bases simply consist of all possible tensor products of one-dimensional basis functions. To address this limitation, new multiscale transforms such as the curvelet and contourlet have been introduced. The main motivation of these transforms is to pursue a true two-dimensional transform that can capture the intrinsic geometrical structure of an image.

Various transforms have been used to explore the image fusion problem; however, there is a lack of comprehensive comparison of these methods on different types of source images. In addition, the general image fusion framework with pyramid decomposition and wavelets has already been well studied. In this project, we investigate some recently developed multiscale image decomposition methods, including the DWT, CVT, and NSCT, with different decomposition levels and filters, using a general fusion rule.

Image fusion is also a technique used to integrate a high-resolution panchromatic image with a low-resolution multispectral image to produce a high-resolution multispectral image, which contains both the high-resolution spatial information of the panchromatic image and the colour information of the multispectral image.
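To make the transform-domain scheme concrete, here is a minimal sketch using the PyWavelets package, applying the general rule used later in this paper (average the low-frequency subband, choose the maximum-magnitude coefficient in the high-frequency subbands). The function and parameter names are ours, not from the paper.

```python
# Minimal transform-domain fusion sketch with the DWT (PyWavelets).
# Inputs: two registered grayscale images of equal size, as float arrays.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", levels=4):
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=levels)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=levels)

    # Low-frequency (approximation) subband: simple average.
    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]

    # High-frequency (detail) subbands: keep the coefficient with the
    # larger absolute value, i.e. the higher activity level.
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))

    return pywt.waverec2(fused, wavelet)
```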



Although an increasing number of high-resolution images have become available with the development of sensor technology, image fusion remains a popular and important method for interpreting image data and obtaining an image more suitable for a variety of applications, such as visual interpretation and digital classification. From studying existing image fusion techniques and applications, Pohl and Van Genderen (1998) concluded that image fusion can provide the following functions:
1. Sharpen images;
2. Improve geometric corrections;
3. Provide stereo-viewing capabilities for stereo photogrammetry;
4. Enhance certain features not visible in either of the single data sets alone;
5. Complement data sets for improved classification;
6. Detect changes using multitemporal data;
7. Substitute missing information (e.g. clouds in VIR images, shadows in SAR images) in one image with signals from another sensor image;
8. Replace defective data.

II BACKGROUND AND RELATED WORK
The curvelet transform was developed initially in the continuous domain [4] via multiscale filtering followed by a block ridgelet transform [3] on each bandpass image. Later, the authors proposed the second-generation curvelet transform [5], which is defined directly via frequency partitioning without using the ridgelet transform. Both curvelet constructions require a rotation operation and correspond to a 2-D frequency partition based on polar coordinates. This makes the curvelet construction simple in the continuous domain but makes the implementation for discrete images sampled on a rectangular grid very challenging. In particular, approaching critical sampling seems difficult in such discretized constructions.

Several other well-known systems that provide multiscale and directional image representations include 2-D Gabor wavelets [5], the cortex transform [16], the steerable pyramid [1], 2-D directional wavelets [18], brushlets [19], and complex wavelets [20]. The main difference between these systems and the contourlet construction is that the previous methods do not allow for a different number of directions at each scale while achieving nearly critical sampling. In addition, the contourlet construction employs iterated filter banks, which makes it computationally efficient, and there is a precise connection with continuous-domain expansions.

Fig. 3 demonstrates the fusion of two images using the complex wavelet transform. The areas of the images that are more in focus give rise to larger-magnitude coefficients within that region. A simple choose-maximum scheme is used to produce the combined coefficient map, and the fused image is then produced by applying the inverse complex wavelet transform to this map. The wavelet coefficient images show the orientated nature of the complex wavelet subbands: each of the clock hands, pointing in a different direction, is picked out by a differently orientated subband. All of the coefficient fusion rules implemented with the discrete wavelet transform can also be implemented with the complex wavelet transform; in this case, however, they must be applied to the magnitudes of the DT-CWT coefficients, as the coefficients are complex [1]. Although extremely computationally efficient, the discrete wavelet transform is not shift-invariant.
Shift invariance within wavelet-transform image fusion is essential for the effective comparison of the coefficient magnitudes used by the fusion rule, because the magnitude of a coefficient within a shift-variant transform will often not reflect the true transform content at that point. The shift variance of the DWT is a result of the subsampling necessary for critical decimation. The shift-invariant discrete wavelet transform (SIDWT) was an initial attempt to integrate shift invariance into the DWT by discarding all subsampling [5]; the SIDWT is therefore considerably overcomplete. It has been used for image fusion, with much improved results reported over the standard DWT techniques.
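A sketch of the magnitude-based DT-CWT fusion just described, assuming the Python dtcwt package (Transform2d and Pyramid are from that package; the rest of the naming is ours):

```python
# DT-CWT fusion sketch: the choose-max rule is applied to coefficient
# magnitudes, since the DT-CWT subband coefficients are complex.
import numpy as np
import dtcwt

def dtcwt_fuse(img_a, img_b, levels=4):
    t = dtcwt.Transform2d()
    pa = t.forward(img_a, nlevels=levels)
    pb = t.forward(img_b, nlevels=levels)

    # Average the real lowpass subbands.
    lowpass = (pa.lowpass + pb.lowpass) / 2.0

    # For each complex highpass subband, keep the coefficient whose
    # magnitude (not raw value) is larger.
    highpasses = tuple(
        np.where(np.abs(ha) >= np.abs(hb), ha, hb)
        for ha, hb in zip(pa.highpasses, pb.highpasses))

    return t.inverse(dtcwt.Pyramid(lowpass, highpasses))
```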

Fig 3: Image fusion process using DTCWT

Contourlets are implemented using a filter bank that decouples the multiscale and the directional decompositions. In the figure below, Do and Vetterli show a conceptual filter-bank setup that illustrates this decoupling: a multiscale decomposition is performed by a Laplacian pyramid, and a directional decomposition is then performed by a directional filter bank. This transform is suitable for applications involving edge detection with high curve content [4].

Filter bank for contourlet transform
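The decoupling can be expressed structurally as follows. This is a conceptual sketch only: laplacian_split and dfb_split are hypothetical stand-ins for the Laplacian-pyramid and directional-filter-bank stages, named here purely to show the structure.

```python
def contourlet_decompose(img, laplacian_split, dfb_split, dirs_per_level):
    """Conceptual contourlet structure: at each scale, a multiscale
    split (Laplacian pyramid stage) is followed by a directional split
    (directional filter bank stage) of the bandpass image."""
    subbands = []
    current = img
    for n_dirs in dirs_per_level:
        coarse, bandpass = laplacian_split(current)   # multiscale stage
        subbands.append(dfb_split(bandpass, n_dirs))  # directional stage
        current = coarse
    return current, subbands  # lowpass residual + directional subbands
```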

3. PROPOSED FRAMEWORK



Curvelet transform: Curvelets are a non-adaptive technique for multiscale object representation. Being an extension of the wavelet concept, they have become popular in similar fields, namely image processing and scientific computing. Wavelets generalize the Fourier transform by using a basis that represents both location and spatial frequency. For 2-D or 3-D signals, directional wavelet transforms go further by using basis functions that are also localized in orientation. A curvelet transform differs from other directional wavelet transforms in that the degree of localization in orientation varies with scale: the basis functions at scale $j$ have shape $2^{-j}$ by $2^{-j/2}$, so the fine-scale bases are skinny ridges with a precisely determined orientation. Curvelets are an appropriate basis for representing images (or other functions) which are smooth apart from singularities along smooth curves, where the curves have bounded curvature, i.e. where objects in the image have a minimum length scale. This property holds for cartoons, geometrical diagrams, and text. As one zooms in on such images, the edges they contain appear increasingly straight. Curvelets take advantage of this by defining the higher-resolution curvelets to be skinnier than the lower-resolution curvelets. However, natural images (photographs) do not have this property; they have detail at every scale. Therefore, for natural images, it is preferable to use some sort of directional wavelet transform whose wavelets have the same aspect ratio at every scale.

The DWT, SWT, and DTCWT cannot capture the curves and edges of images well; more reasonable bases should contain geometrical structure information when they are used to represent images. Candes and Donoho proposed the curvelet transform (CVT) with the idea of representing a curve as a superposition of bases of various lengths and widths obeying the scaling law width ≈ length^2. Two examples of the CVT bases are shown in Fig. 2a, and Fig. 2b presents two examples of wavelet bases. From Fig. 2, it can be seen that the CVT is more suitable than the wavelet for the analysis of image edges, such as curve and line characteristics; the CVT is referred to as a true 2-D transform. The discrete version implemented in this work uses a wrapping transform. The flowchart of the second-generation curvelet transform is presented in Fig. 3. Firstly, the 2-D FFT is applied to the source image to obtain Fourier samples. Next, a discrete localizing window smoothly localizes the Fourier transform near the sheared wedges obeying the parabolic scaling. Then, the wrapping transformation is applied to re-index the data. Finally, the inverse 2-D FFT is used to obtain the discrete CVT coefficients. More details can be found in the literature.

The dual-tree complex wavelet transform (DTCWT) calculates the complex transform of a signal using two separate DWT decompositions (tree a and tree b). If the filters used in one tree are specifically designed to differ from those in the other, it is possible for one DWT to produce the real coefficients and the other the imaginary coefficients. This redundancy of two provides extra information for analysis, at the expense of extra computation. It also provides approximate shift invariance (unlike the DWT) yet still allows perfect reconstruction of the signal. The design of the filters is particularly important for the transform to work correctly, and the necessary characteristics are:
- the low-pass filters in the two trees must differ by half a sample period;
- the reconstruction filters are the reverse of the analysis filters;
- all filters come from the same orthonormal set;
- the tree-a filters are the reverse of the tree-b filters;
- both trees have the same frequency response.
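The approximate shift invariance mentioned above can be checked numerically. Below is a small illustrative experiment of our own (not from the paper), assuming the pywt and dtcwt packages: shift a step-edge image by one pixel and compare the relative change in level-1 detail energy; the change for the decimated DWT is typically much larger than for the DT-CWT.

```python
# Shift-invariance check: compare how level-1 detail energy changes
# under a one-pixel shift for the decimated DWT vs. the DT-CWT.
import numpy as np
import pywt
import dtcwt

img = np.zeros((128, 128))
img[:, 64:] = 1.0                     # vertical step edge
shifted = np.roll(img, 1, axis=1)     # one-pixel horizontal shift

def dwt_detail_energy(x):
    _, details = pywt.dwt2(x, "db2")  # (cH, cV, cD) level-1 subbands
    return sum(np.abs(d).sum() for d in details)

def dtcwt_detail_energy(x):
    p = dtcwt.Transform2d().forward(x, nlevels=1)
    return np.abs(p.highpasses[0]).sum()   # complex magnitudes

for name, energy in [("DWT", dwt_detail_energy),
                     ("DT-CWT", dtcwt_detail_energy)]:
    e0, e1 = energy(img), energy(shifted)
    print(f"{name}: relative detail-energy change {abs(e0 - e1) / e0:.3%}")
```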
In this project, we make the assumption that there are just two source images, A and B; the multiscale methods can easily be extended to more source images. Fig. 5 illustrates the generic image fusion framework based on multiscale image decomposition. The source images are first decomposed into a low-frequency subband and a sequence of high-frequency subbands at different scales and orientations. Then, at each position in the transformed subbands, the value with the highest saliency is selected to construct the fused subbands. Finally, the fused image is obtained by applying the inverse transform to the fused subbands. There are two key issues in pixel-level image fusion algorithms, namely identifying the most important information in the source images and transferring this salient information into the fused image. These are referred to as activity-level measurement and coefficient combination, respectively, in the literature. The activity-level measurement expresses the salience of each coefficient in fusion methods based on multiscale decomposition. Generally, it is described by the absolute value of the corresponding transform coefficient. Other activity-level measures include the square of the corresponding coefficient, the rank-filter method, and the spatial-frequency method; these have been well discussed in the literature. In this project, we choose the absolute value of the corresponding coefficient at each position as the activity level:

$$A_i(p) = |D_i(p)|$$
where $D_i$ is the multiscale coefficient of source image $i$ and $p = (x, y, l, k)$ is the index of a particular coefficient: $x$ and $y$ indicate the spatial position in a given subband, and $l$ and $k$ indicate the scale and orientation of $D$, respectively. The coefficient combination should integrate the visual information contained in all source images into the fused image without introducing distortion or losing information. However, this goal is almost impossible to achieve; a more practical aim is to integrate a faithful representation of the most important input information into the fused image. A general and effective image fusion rule for the multi-resolution-based methods is adopted in this project.
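In symbols (our restatement; $F$ denotes the fused image), the general rule described below selects high-frequency coefficients by maximum activity and averages the low-frequency subband:

$$
D_F(p) =
\begin{cases}
D_A(p), & A_A(p) \ge A_B(p) \\
D_B(p), & \text{otherwise}
\end{cases}
\qquad
C_F = \tfrac{1}{2}\left(C_A + C_B\right)
$$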




The low-frequency coefficients are fused by the averaging method, meaning the fused coefficient is the average of the corresponding coefficients of the source images. The high-frequency coefficients are fused by choosing the coefficient with the maximum absolute value.

The steps of using the curvelet transform to fuse two images are as follows. First, resample and register the original images, correcting for distortion so that both have a similar probability distribution; the wavelet coefficients of similar components will then stay in the same magnitude. Next, use the wavelet transform to decompose the original images into proper levels; one low-frequency approximate component and three high-frequency detail components are acquired at each level. Then apply the curvelet transform to the acquired low-frequency approximate component and high-frequency detail components of both images; nearest-neighborhood interpolation is used so that the gray-level details are not changed. Finally, fuse the images according to a definite criterion. The local area variance is chosen to measure the definition of the low-frequency component $C_{j_0}(k_1, k_2)$: the low-frequency subband is divided into square sub-blocks, and the local area variance of the current sub-block is calculated as

$$\mathrm{STD} = \sqrt{\frac{1}{N_1 M_1} \sum_{i=-(N_1-1)/2}^{(N_1-1)/2} \; \sum_{j=-(M_1-1)/2}^{(M_1-1)/2} \left[ C_{j_0}(k_1+i,\, k_2+j) - \bar{C}_{j_0}(k_1, k_2) \right]^2 }$$

Here $\bar{C}_{j_0}(k_1, k_2)$ stands for the local mean of the low-frequency coefficients of the original images. A bigger variance indicates higher local contrast in the original image, and hence clearer definition.

Based on the general rule, many new fusion rules, such as the selection method, entropy method, and linear-dependency method, have been developed. As indicated in the references, these new methods can improve fusion performance in some respects. For example, Zhang and Guo proposed a fusion rule to obtain contrast improvements for multi-focus image fusion. However, if all combinations of the multi-resolution transforms and the extended fusion rules were investigated, it would be hard to obtain useful conclusions. We believe that the conclusions drawn for the general rule also hold for more powerful rules, which will be verified by an experiment in Section 4.

There are two other issues in the fusion process: the grouping method and consistency verification. The grouping method means that, when determining the fused multiscale coefficients, a set of coefficients in other orientations and frequency subbands corresponding to the same pixel should be considered jointly. Consistency verification is based on the idea that a composite multiscale coefficient is unlikely to be generated in a completely different manner from all its neighbors. These two issues have been investigated in the literature, so they are not considered in this paper. In general, the grouping method and consistency verification can improve the fusion result.

Nonsubsampled Contourlet Transform: The contourlet transform was proposed to address the lack of geometrical structure in the separable two-dimensional wavelet transform. Because of its filter-bank structure, the contourlet transform is not shift-invariant. In this paper we use the nonsubsampled contourlet transform (NSCT), which is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the scheme is the nonseparable two-channel nonsubsampled filter bank. The filter-design problem is addressed by a design framework based on the mapping approach, exploiting the less stringent design conditions of the nonsubsampled filter bank to obtain filters that give the NSCT better frequency selectivity and regularity than the contourlet transform.

NSCT algorithm steps:
1. Compute the NSCT of the input image for N levels.
2. Estimate the noise standard deviation of the input image.
3. For each level of the directional filter bank (DFB):
(a) Estimate the noise variance.
(b) Compute the threshold and the amplifying ratio.
(c) At each pixel, compute the mean and the maximum magnitude of all directional subbands at this level, and classify it by (1) into strong edges, weak edges, or noise.
(d) For each directional subband, use the nonlinear mapping function given in (2) to modify the NSCT coefficients according to the classification.
4. Reconstruct the enhanced image from the modified NSCT coefficients.
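The following is a minimal numpy/scipy sketch of the local-variance definition measure described above, under our own naming (the window sizes N1, M1 are assumed odd); taking the square root of the returned map gives the STD of the formula.

```python
# Local-area variance of the low-frequency subband C_j0, computed over
# an (n1 x m1) window centred at each position (k1, k2). A sketch, not
# the authors' code.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(c_low, n1=5, m1=5):
    mean = uniform_filter(c_low, size=(n1, m1))          # local mean
    mean_sq = uniform_filter(c_low ** 2, size=(n1, m1))  # local E[x^2]
    # var = E[x^2] - E[x]^2; clip tiny negatives from rounding.
    return np.maximum(mean_sq - mean ** 2, 0.0)

# The position with the larger variance is judged to have the clearer
# definition, and its coefficients are selected for the fused image.
```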

5. EXPERIMENTAL RESULTS



The following are the experimental results obtained using the wavelet (DWT), curvelet, and nonsubsampled contourlet (NSCT) transforms.
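For reference, here are hedged sketches of our own implementations of the objective metrics reported in the tables below; the paper's exact definitions, e.g. of the MF measure, may differ in detail or normalisation.

```python
import numpy as np

def rms_error(ref, fused):
    diff = ref.astype(float) - fused.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(ref, fused, peak=255.0):
    mse = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)   # assumes mse > 0

def correlation_coefficient(ref, fused):
    return np.corrcoef(ref.ravel(), fused.ravel())[0, 1]

def entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))            # bits per pixel

def spatial_frequency(img):
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```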

Fig: Multi-focus image fusion results using wavelets, curvelets, and NSCT.

Fusion Method | Entropy | RMS error | Correlation Coefficient | PSNR | Standard deviation | Spatial frequency | MF
DWT | 6.7957 | 0.4827 | 0.99341 | 54.4573 | 46.4689 | 15.2735 | 3.0052e+005
Curvelets | 7.1208 | 0.67937 | 0.99788 | 51.4886 | 0.3561 | 0.0789 | 0.3264
NSCT | 7.1249 | 0.04394 | 0.99637 | 75.2725 | 0.1779 | 0.0391 | 0.1376

Table: Evaluation of the multi-focus image fusion results (first image set).

Fusion Method | Entropy | RMS error | Correlation Coefficient | PSNR | Standard deviation | Spatial frequency | MF
DWT | 6.1108 | 0.29725 | 0.98285 | 58.6683 | 46.7026 | 28.5238 | 2.9867e+003
Curvelets | 6.4991 | 0.58747 | 0.99484 | 52.7511 | 0.3431 | 0.1383 | 0.1873
NSCT | 6.5110 | 0.02656 | 0.99194 | 79.6462 | 0.1714 | 0.0691 | 0.1003

Table: Evaluation of the multi-focus image fusion results (second image set).

MEDICAL IMAGE FUSION:

Fig: Medical image fusion results using wavelets, curvelets, and NSCT.



Fusion Method | Entropy | RMS error | Correlation Coefficient | PSNR | Standard deviation | Spatial frequency | MF
DWT | 7.2726 | 0.43996 | 0.98757 | 55.2625 | 66.4859 | 31.8018 | 420.1323
Curvelets | 7.5153 | 0.63718 | 0.99215 | 52.0456 | 0.5015 | 0.1523 | 0.4364
NSCT | 7.5011 | 0.05248 | 0.9889 | 73.7296 | 0.2501 | 0.0740 | 0.1866

Table: Evaluation of the medical image fusion results.

Fig: Bar chart comparing PSNR, RMS error, and entropy of the fused results for the curvelet (PSNR 51.489, RMS 0.6794, entropy 7.1208) and wavelet (PSNR 54.457, RMS 0.4827, entropy 6.7957) methods.

6. CONCLUSION AND FUTURE WORK
This paper puts forward an image fusion algorithm based upon the wavelet transform, exploiting the multiresolution analysis ability of the wavelet transform. Image fusion seeks to blend information from different images: it integrates complementary information to provide a much better visual picture of a scene, suited to further processing. Image fusion produces a single image from a multitude of input images and is widely recognized as an efficient tool for improving overall performance in image-based applications. Wavelet transforms provide a framework by which an image is decomposed, with each level equivalent to a coarser-resolution band. The wavelet-sharpened images have excellent spectral quality. However, the wavelet transform suffers from noise and artifacts and has low accuracy for curved edges. In imaging applications, images exhibit edges and discontinuities across curves, and in image fusion, edge preservation is valuable for obtaining the complementary details of the input images. The proposed curvelet-based image fusion is therefore particularly well suited to medical images.

REFERENCES:
[1] R. J. Sapkal and S. M. Kulkarni, "Image fusion based on wavelet transform for medical application", International Journal of Engineering Research and Applications.
[2] J. Antoine, P. Carrette, R. Murenzi, and B. Piette, "Image analysis with two-dimensional continuous wavelet transform", Signal Processing, vol. 31, pp. 241-272, 1993.
[3] F. E. Ali, I. M. El-Dokany, A. A. Saad, and F. E. Abd El-Samie, "Curvelet fusion of MR and CT images", Progress In Electromagnetics Research C, vol. 3, pp. 215-224, 2008.
[4] M. J. Fadili and J.-L. Starck, "Curvelets and Ridgelets", GREYC CNRS UMR 6072, Image Processing Group, ENSICAEN, Caen, France, October 24, 2007.
[5] Yan Sun, Chunhui Zhao, and Ling Jiang, "A new image fusion algorithm based on wavelet transform and the second generation curvelet transform", IEEE Trans. on Medical Imaging, vol. 18, no. 3, 2010.
[6] Yong Chai, You He, and Chalong Ying, "CT and MRI image fusion based on contourlet using a novel rule", IEEE Trans. on Medical Imaging, vol. 08, no. 3, 2008.
[7] J. Antoine, P. Carrette, R. Murenzi, and B. Piette, "Image analysis with two-dimensional continuous wavelet transform", Signal Processing, vol. 31, pp. 241-272, 1993.
[8] G. Pajares and J. Cruz, "A wavelet-based image fusion tutorial", Pattern Recognition, vol. 37, no. 9, pp. 1855-1872, 2004.
[9] G. Piella, "A general framework for multiresolution image fusion: from pixels to regions", Information Fusion, vol. 4, no. 4, pp. 259-280, 2003.
[10] R. Redondo, F. Sroubek, S. Fischer, and G. Cristobal, "Multifocus image fusion using the log-Gabor transform and a multisize windows technique", Information Fusion, vol. 10, no. 2, pp. 163-171, 2009.
