THE EFFECTS OF DIFFERENT TYPES OF WAVELETS ON IMAGE FUSION 
Gang Hong, Yun Zhang 
Department of Geodesy and Geomatics Engineering 
University of New Brunswick, Fredericton, New Brunswick, Canada E3B 5A3 
v5z78@unb.ca 
YunZhang@UNB.ca 
KEY WORDS: fusion, integration, multiresolution, multisensor, multispectral, resolution 
ABSTRACT: 
Image fusion is a tool for integrating a high-resolution panchromatic image with a multispectral image, in which the 
resulting fused image contains both the high-resolution spatial information of the panchromatic image and the color 
information of the multispectral image. Wavelet transformation, originally a mathematical tool for signal processing, is 
now popular in the field of image fusion. Recently, many image fusion methods based on wavelet transformation have 
been published. The wavelets used in image fusion can be categorized into three general classes: Orthogonal, 
Biorthogonal and Nonorthogonal. Although these wavelets share some common properties, each leads to a unique 
image decomposition and reconstruction method, which in turn produces differences among the wavelet-based fusion methods. 
This paper focuses on comparing the image fusion methods that utilize wavelets of the above three 
general classes. Representative wavelets from these three classes, Daubechies (Orthogonal), spline 
biorthogonal (Biorthogonal), and à trous (Nonorthogonal), are selected as the mathematical models to implement the 
image fusion algorithms. 
When the wavelet transform alone is used for image fusion, the fusion result is often not good. However, if the wavelet 
transform and the IHS transform are integrated, better fusion results may be achieved. Because the substitution in the IHS 
transform is limited to the intensity component, using the wavelet transform to improve or modify the 
intensity and the IHS transform to fuse the images makes the fusion process simpler and faster. This integration can 
also better preserve the color information. The fusion method based on this IHS and wavelet integration concept is 
employed in this paper. IKONOS image data are used to evaluate the three different kinds of wavelet fusion methods 
mentioned above. The fusion results are compared graphically, visually, and statistically. 
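To make the integration concept concrete, the following Python sketch modifies the intensity component with the panchromatic image's wavelet detail and then restores the color bands. It is only a minimal sketch under stated assumptions (intensity taken as the band mean, an additive color restoration step, and PyWavelets' 'db4' wavelet); it is not the exact implementation evaluated in this paper.

```python
# Minimal sketch of IHS + wavelet fusion (assumptions noted above).
import numpy as np
import pywt

def ihs_wavelet_fusion(ms, pan, wavelet="db4", level=2):
    """ms: (rows, cols, 3) multispectral image resampled to the pan grid;
    pan: (rows, cols) high-resolution panchromatic image."""
    ms = ms.astype(float)
    pan = pan.astype(float)

    # Simplified "IHS" step: separate intensity from the colour information.
    intensity = ms.mean(axis=2)

    # Wavelet step: decompose intensity and pan to the same number of levels.
    ci = pywt.wavedec2(intensity, wavelet, level=level)
    cp = pywt.wavedec2(pan, wavelet, level=level)

    # Keep the intensity's approximation (low frequencies) and inject the
    # detail (high-frequency spatial) coefficients of the pan image.
    fused_coeffs = [ci[0]] + cp[1:]
    new_intensity = pywt.waverec2(fused_coeffs, wavelet)
    new_intensity = new_intensity[: ms.shape[0], : ms.shape[1]]

    # Reverse "IHS" step (additive form): apply the intensity change to each band.
    fused = ms + (new_intensity - intensity)[..., None]
    return np.clip(fused, 0, None)
```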
1. INTRODUCTION 
Image fusion is a tool for integrating a high-resolution 
panchromatic image with a multispectral image, in 
which the resulting fused image contains both the 
high-resolution spatial information of the 
panchromatic image and the color information of the 
multispectral image. More and more high-resolution sensors appear as the technology develops, and 
correspondingly, a variety of high-resolution images are available; however, because of its benefits, 
image fusion remains a popular method for interpreting image data. By studying the literature, 
Pohl and van Genderen (1998) concluded that image fusion serves the following functions: sharpen images; improve 
geometric corrections; provide stereo-viewing capabilities for stereophotogrammetry; enhance 
certain features not visible in either of the single data sets alone; complement data sets for improved 
classification; detect changes using multitemporal data; substitute missing information (e.g., clouds-VIR, 
shadows-SAR) in one image with signals from another sensor image; and replace defective data. 

The wavelet transform is a relatively new fusion method, a mathematical tool initially designed for signal 
processing. Because it provides multiresolution and multiscale analysis functions, image fusion can be 
implemented in the wavelet transform domain, a capability that traditional fusion methods cannot offer. 
Many papers about image fusion based on the wavelet transform have been published in recent years 
(Yocky, 1995; Li et al., 1995; Yocky, 1996; Zhou et al., 1998; Núñez et al., 1999; Ranchin et al., 2000; 
Aiazzi et al., 2002). The wavelets used so far in the image fusion domain can generally be categorized 
into three typical types: Daubechies (Orthogonal), spline biorthogonal (Biorthogonal) and à trous 
(Nonorthogonal); an illustrative sketch of the à trous decomposition is given at the end of this section. 
This paper focuses on these three different wavelets and compares their fusion results. 

The rest of this paper is organized as follows: a general description of the wavelet theory used in image fusion 
is given in Section 2; Section 3 presents the experimental results and comparison; the conclusion is provided in 
Section 4. 
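For concreteness, the sketch below shows the à trous (undecimated, nonorthogonal) decomposition referred to above, using the common B3-spline kernel. The kernel choice and the 'reflect' border handling are assumptions for illustration, not details taken from this paper.

```python
# Minimal sketch of the "à trous" wavelet decomposition (assumptions noted above).
import numpy as np
from scipy.ndimage import convolve1d

def atrous_decompose(image, levels=3):
    """Return (wavelet_planes, residual) so that image == sum(planes) + residual."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline kernel
    c = image.astype(float)
    planes = []
    for j in range(levels):
        # Dilate the kernel: insert 2**j - 1 zeros between taps (the "holes").
        dilated = np.zeros(len(kernel) + (len(kernel) - 1) * (2**j - 1))
        dilated[:: 2**j] = kernel
        # Separable 2-D smoothing with the dilated kernel.
        smooth = convolve1d(c, dilated, axis=0, mode="reflect")
        smooth = convolve1d(smooth, dilated, axis=1, mode="reflect")
        planes.append(c - smooth)   # wavelet plane: detail lost at this scale
        c = smooth                  # pass the smoothed image to the next level
    return planes, c                # c is the coarse residual

# One common fusion scheme adds the pan image's wavelet planes to each
# resampled multispectral band, while each band keeps its own coarse residual.
```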