
WAVELETS AND IMAGE FUSION

Laure J. Chipman and Timothy M. Orr, Intergraph Corporation, Huntsville, Alabama; Lewis N. Graham, Metric Vision, Madison, Alabama

ABSTRACT This paper describes an approach to image fusion using the wavelet transform. When images are merged in wavelet space, we can process different frequency ranges differently. For example, high-frequency information from one image can be combined with lower-frequency information from another to perform edge enhancement.

We have built a prototype system that allows experimentation with various wavelet array combination and manipulation methods for image fusion, using a set of basic operations on wavelet frequency blocks. Problems caused by image misregistration and processing artifacts are described. Examples of wavelet fusion results are shown which merge a pair of images from different sensors.

1. RELATED WORK
There has not been extensive work on image fusion using wavelets. One application which uses a similar approach of Gaussian pyramid decomposition for fusion of multiple coincident images is presented by Akerman [1].

Toet, van Ruyven, and Valeton [2] describe the fusion of CCD and FLIR images using a ratio of low-pass pyramid, which is similar to the Laplacian pyramid. Ranchin, Wald, and Mangolini [3] present a technique for enhancing the spatial resolution of a 20-meter multispectral SPOT image with the 10-meter panchromatic band from the same satellite. Their technique is related to our work in that they apply multiresolution image decomposition and reconstruction using the wavelet transform. The key to Ranchin's algorithm is to synthesize the missing frequency band for the multispectral image using the corresponding layer from the wavelet pyramid for the panchromatic image, then inverse transform.

2. OVERVIEW OF THE SYSTEM
Our approach was to build a system that allows manipulation of the separate wavelet frequency blocks independently of each other. This allows us to emphasize different frequency ranges from different inputs in the output product.

A wavelet transform array is synthesized for the product image and populated from the source images based on a set of predefined rules. After population, this synthetic array is inverse wavelet transformed to create the product image. Graham [4] provides a more detailed discussion of the application of wavelet theory to image fusion. Our prototype system handles images which are coregistered and the same size, with dimensions which are powers of two. A wavelet transform using the Daubechies [5] basis functions with filter lengths of 4, 12, or 20 is performed on all input images to be fused.

A significant part of our work centered on determining rules to use in combining wavelet transform array information. The system needed to allow operations to be performed on individual wavelet array blocks, so that low- and high-frequency components could be treated differently, and to enable the use of many different combination rules. The approach taken to achieve this was to identify several primitive operations that would be required to implement a variety of combination rules. These primitive operations act upon individual frequency blocks in wavelet arrays or on whole wavelet arrays at once.
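To make the frequency-block structure concrete, the following minimal sketch computes one decomposition level with a 2-D Haar transform (for brevity; the prototype uses Daubechies filters of length 4, 12, or 20). The function and block names here are ours, not the paper's, and images are plain lists of lists rather than the system's wavelet arrays.

```python
def haar2d_level(img):
    """One level of a 2-D Haar wavelet transform.

    Splits a 2^n x 2^n image into four frequency blocks: LL (the
    coarse, low-frequency approximation) and three detail blocks.
    Block naming conventions for the detail blocks vary between
    texts; the labels below are one common choice.
    """
    n = len(img)
    h = n // 2
    # Transform rows: each pair (a, b) -> average, then difference.
    rows = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(h)] +
            [(r[2 * j] - r[2 * j + 1]) / 2 for j in range(h)]
            for r in img]
    # Transform columns of the row-transformed image the same way.
    out = [[0.0] * n for _ in range(n)]
    for j in range(n):
        col = [rows[i][j] for i in range(n)]
        for i in range(h):
            out[i][j] = (col[2 * i] + col[2 * i + 1]) / 2
            out[i + h][j] = (col[2 * i] - col[2 * i + 1]) / 2
    return {
        "LL": [row[:h] for row in out[:h]],  # low/low: coarse image
        "HL": [row[h:] for row in out[:h]],  # horizontal detail
        "LH": [row[:h] for row in out[h:]],  # vertical detail
        "HH": [row[h:] for row in out[h:]],  # diagonal detail
    }
```

Applying this recursively to the LL block yields the multilevel pyramid of frequency blocks on which the combination rules below operate; a constant image, for instance, produces an LL block equal to the constant and all-zero detail blocks.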

The most useful primitive operation is to simply take the coefficient with the maximum amplitude from any input image at each location in the wavelet transform array. Another operation is to average the values in all input wavelet transform arrays at each location. This operation, if performed on the entire wavelet array and inverse-transformed, produces a result indistinguishable from the result of simply averaging the input images. We found this operator useful when performed on selected frequency blocks in combination with other operators. In order to implement more complicated combination rules, we use additional primitive operations which can produce temporary output wavelet arrays. For example, consider the following rule: use the coefficient from Image 1 unless the coefficient from Image 2 is greater than five times the coefficient from Image 1. To perform this rule, we can use the primitive operations of multiplication by a constant, greater-than comparison, and masking in.

0-8186-7310-9/95 $4.00 © 1995 IEEE
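A sketch of these primitives, assuming wavelet arrays represented as lists of lists of coefficients (the function names and representation are ours, not the prototype's):

```python
def fuse_max_amplitude(w1, w2):
    """Primitive: at each location keep the coefficient with the
    larger absolute value (amplitude)."""
    return [[a if abs(a) >= abs(b) else b for a, b in zip(r1, r2)]
            for r1, r2 in zip(w1, w2)]

def fuse_average(w1, w2):
    """Primitive: average the input arrays coefficient-wise."""
    return [[(a + b) / 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(w1, w2)]

def fuse_conditional(w1, w2, factor=5.0):
    """Composite rule: use the Image 1 coefficient unless the Image 2
    coefficient is greater than `factor` times it.  Built from the
    multiply-by-constant, greater-than, and mask-in primitives."""
    scaled = [[factor * a for a in row] for row in w1]   # multiply by constant
    mask = [[b > s for b, s in zip(r2, rs)]              # greater-than compare
            for r2, rs in zip(w2, scaled)]
    return [[b if m else a                               # mask in Image 2
             for a, b, m in zip(r1, r2, rm)]
            for r1, r2, rm in zip(w1, w2, mask)]

# fuse_conditional([[1.0, 2.0]], [[6.0, 4.0]]) -> [[6.0, 2.0]]
```

In the actual system these operations would be applied per frequency block rather than to whole arrays, so that, for example, averaging can be used on low-frequency blocks while maximum-amplitude selection is used on high-frequency blocks.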

3. RESULTS
In this section, we describe some interesting findings from our experimentation with wavelet-based fusion. We contrast the tensor and isotropic wavelet representations, describe some image artifacts introduced by the wavelet processing, and show examples of sensor fusion.

3.1 Tensor vs. Isotropic
Our prototype system allows tensor or isotropic representations of the wavelet array. Our naming of these representations is from Wickerhauser [6]. The tensor representation contains frequency blocks which are a mix of different frequency levels in the x and y directions. The isotropic representation contains only blocks in which the frequency ranges in x and y are equal within each block.

In our experimentation, we found that using the isotropic representation reduced some of the blocky artifacts in the output images. Figure 1 shows products made by fusing an image having the test pattern on its left side only with an image having the test pattern on its right side only, using the maximum-amplitude rule. Figure 1(a) was produced using the tensor wavelet representation, while Figure 1(b) used the isotropic representation. (The irregular appearance of the test pattern near the tops of the images is caused by resampling, and was not present in the full-scale images.) Determining the cause of this phenomenon is an area for further work.

3.2 Artifacts
We have identified two causes for blocky or wavy effects in wavelet-fused images: 1) When low-frequency components are zeroed out or otherwise altered, the resultant image has a checkerboarding effect. In effect, the image histogram is being balanced so that each square has a similar average intensity. As higher-frequency components are zeroed, the squares become smaller. Figure 2 shows a simple example of this effect, contrast-enhanced to accentuate the artifacts. The original image contained only the white crosshatch pattern. The blocky artifacts were caused by zeroing out low-frequency blocks. 2) When features in two scenes are misregistered, there will often be blockiness along the feature, or a sort of jagged or wavy edge. This occurs when copying the higher-frequency components from just one image to the output array, and also when taking the maximum to obtain the output array. Figure 3 shows a fusion of two misregistered rectangles using the maximum-amplitude rule. The contrast has been enhanced to make the artifacts more easily visible. In Figure 3, there are wavy artifacts at the edges as well as faint blotches farther away from the edges. Figure 4 demonstrates the different effect of having only horizontally or vertically oriented features.

3.3 Sensor Fusion
Another application for wavelet-based image fusion is in combining information from different sensors. The example presented here consists of portions of a blue-band and an IR-band aerial photo. The two source images are shown in Figure 5. Figure 6 shows variations on a rule for taking the maximum of coefficients in high-frequency blocks. In Figure 6(a), the maximum is taken at every frequency level. In Figure 6(b), we copied the coefficients from the blue band for frequency blocks 0,0 through 4,4, and in Figure 6(c), we copied the coefficients from the blue band for frequency blocks 0,0 through 6,6. (Block 0,0 is the DC component. Frequencies increase in higher-numbered blocks.) As we limit the influence of the IR band, the output images look more like the blue band, with the contributions from the IR band appearing as edge outlines. This idea can be useful in enhancing one sensor with edge detail from another.
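The Figure 6 family of rules can be sketched as follows, under the simplifying assumption that each image's coefficients are held as a flat list per frequency level, ordered from the DC block upward (the real system works on the full 2-D block layout; names are ours):

```python
def fuse_copy_low(levels1, levels2, cutoff):
    """Copy coefficients from the first image (e.g. the blue band)
    for all frequency levels up to and including `cutoff`; at higher
    levels take the maximum-amplitude coefficient from either image.

    `levels1` and `levels2` are lists of coefficient lists, indexed
    by frequency level starting at the DC component (level 0).
    """
    fused = []
    for k, (b1, b2) in enumerate(zip(levels1, levels2)):
        if k <= cutoff:
            fused.append(list(b1))                       # copy band 1 as-is
        else:
            fused.append([a if abs(a) >= abs(b) else b   # max-amplitude rule
                          for a, b in zip(b1, b2)])
    return fused
```

Raising `cutoff` corresponds to moving from Figure 6(a) toward 6(c): more of the blue band's low-frequency content is kept, and the IR band contributes only high-frequency edge detail.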

4. CONCLUSION
We have described an approach to image fusion in which the product image is created by inverse transforming a synthetic wavelet transform array which combines information from the input images. The flexibility of the approach in allowing combinations of operations on different ranges of frequency blocks allows the creation of products with widely varying characteristics. This could provide a tool for image analysts with specific needs in mind. For example, low-resolution images from one sensor can be edge enhanced by fusing in high-frequency features from a higher-resolution image. There may also be applications in cloud removal, change detection, and for a quick-look capability, where the analyst could look at a single image and see all important details in his area of interest.


Further work is needed in determining what types of products would be useful to analysts, so that the fusion rules for obtaining those products can be automated.
5. ACKNOWLEDGMENTS
The authors wish to thank TRIFID Corporation and Positive Systems, Inc. for providing many of the image samples used in this paper.

6. REFERENCES
[1] A. Akerman III, Pyramidal Techniques for Multisensor Fusion, SPIE Vol. 1828 Sensor Fusion V (1992), pp. 124-131.

[2] A. Toet, L.J. van Ruyven, and J.M. Valeton, Merging Thermal and Visual Images by a Contrast Pyramid, Optical Engineering, July 1989, Vol. 28, No. 7, pp. 789-792.

[3] T. Ranchin, L. Wald, and M. Mangolini, Efficient Data Fusion using Wavelet Transform: The Case of SPOT Satellite Images, SPIE Vol. 2034 Mathematical Imaging (1993), pp. 171-178.

[4] L. Graham, Jr., Coincident Image Fusion using the Discrete Wavelet Transform, Thesis, University of Alabama in Huntsville, 1994.

[5] I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Series in Applied Mathematics, SIAM Publications, Philadelphia, 1992.

[6] V. Wickerhauser, Adapted Wavelet Analysis from Theory to Software, A.K. Peters, Wellesley, MA, 1993.

Figure 1: (a) Fused image, maximum-amplitude rule, tensor representation, (b) Fused image, maximum-amplitude rule, isotropic representation.

Figure 2: Checkering caused by different processing in different frequency blocks.

Figure 3: Effect of misregistration on fused result.

Figure 4: Effect of misregistration on features oriented horizontally or vertically.


Figure 5: (a) A blue-band aerial photo. (b) An IR-band aerial photo.

Figure 6: Results of fusing the blue and IR bands using the maximum rule, copying progressively more of the low-frequency components from band 1. (a) Maximum rule, all frequencies. (b) Blocks 0,0 through 4,4 copied from band 1. (c) Blocks 0,0 through 6,6 copied from band 1.

