designate the approximations at the resolution 2^m and the
coarser resolution 2^{m+1}, while c_{m,n} denotes the difference between
one approximation and the other. To calculate the a_{m,n} and c_{m,n}
coefficients, a scaling function is necessary. The
convolution of the scaling function and the signal is then implemented
at every scale using a low-pass FIR (Finite Impulse Response)
filter h to calculate the a_{m,n} coefficients (Nikolov et al., 2001). This
process can be expressed with the following equation (Nikolov
et al., 2001).
a_{m+1,n} = \sum_{k} h_{2n-k} a_{m,k}          (3)
Similarly, by using a related high-pass FIR filter g, the c_{m,n}
coefficients are calculated using the following equation
(Nikolov et al., 2001).
c_{m+1,n} = \sum_{k} g_{2n-k} a_{m,k}          (4)
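As an illustration, the following minimal NumPy sketch implements equations (3) and (4) as "convolve with the filter, then keep every second sample"; the Haar analysis filters used here are only an assumed example, since the equations hold for any suitable low-pass/high-pass pair h and g.

import numpy as np

# Haar analysis filters (assumed example; any suitable FIR pair h, g may be used)
h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass filter h
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass filter g

def analysis_step(a_m, filt):
    # out[n] = sum_k filt[2n - k] * a_m[k]: convolve, then downsample by two
    return np.convolve(a_m, filt)[::2]

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a_next = analysis_step(signal, h)   # approximation coefficients, equation (3)
c_next = analysis_step(signal, g)   # detail (wavelet) coefficients, equation (4)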
For the 2-D DWT, it is only necessary to filter and
downsample the image separately in the horizontal and vertical directions
(Nikolov et al., 2001). By doing this, the spatial resolution is
halved at each level by subsampling the image by a factor of two.
Each image provides four sub-images at each resolution level,
corresponding to one approximation image (low spatial
resolution) and three detail images (horizontal, vertical and
diagonal) (Chibani and Houacine, 2002). The same input image
can be recovered by the inverse DWT using the calculated wavelet
coefficients.
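The following sketch, assuming the PyWavelets package, illustrates one level of this separable 2-D decomposition and its inverse; the Haar wavelet is chosen only for illustration.

import numpy as np
import pywt

image = np.random.rand(256, 256)              # stand-in for one input band
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')   # approximation + 3 detail sub-images
print(cA.shape)                               # (128, 128): spatial resolution halved

restored = pywt.idwt2((cA, (cH, cV, cD)), 'haar')  # inverse DWT recovers the input
print(np.allclose(restored, image))                # True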
3. IMAGE FUSION ALGORITHM 
3.1. Preprocessing of input images 
In image fusion, the first step is to prepare the input images for 
the fusion process. This includes registration and resampling of 
the input images (Zhou, 1998). Registration aligns
corresponding pixels in the input images. This is usually done
by geo-referencing the images to a map projection such as 
UTM (Universal Transverse Mercator). If the images are from 
the same sensors and taken at the same time, they are usually 
already co-registered and can be directly used for fusion 
processing. However, if the images are from different sensors, 
and even if they are georeferenced by the image vendors, a 
registration process is likely still necessary to ensure that pixels 
in the input images exactly represent the same location on the 
ground. 
Image registration can be performed with or without ground 
control. The most accurate way is to rectify the images using 
ground control points. However, in most cases, it is not 
possible to find ground control points in the input images. In 
such situations, taking the panchromatic image, which has a 
better spatial resolution, as the reference image and registering 
the multispectral images with respect to the panchromatic one 
can be a good solution to refine the rectified multispectral 
images. 
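As a rough illustration of this idea (and not the exact procedure used in this study), the sketch below assumes scikit-image and SciPy and estimates only a translational offset of a multispectral band with respect to the panchromatic reference; operational co-registration must also handle rotation, scale and terrain effects.

import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def register_to_pan(pan, ms_band):
    # pan and ms_band are assumed to be on the same pixel grid (same array shape)
    shift, error, _ = phase_cross_correlation(pan, ms_band, upsample_factor=10)
    # apply the estimated (row, col) shift with bilinear resampling
    return ndimage.shift(ms_band, shift, order=1)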
Image fusion essentially occurs when the involved images or
their transformations have the same spatial resolution. In the
selected wavelet decomposition, the dimension of the newly
decomposed image becomes half the size of the image at the
previous level (Chibani and Houacine, 2002). Therefore,
another important task in the preparation phase is to make the
ratio between the pixel sizes of the panchromatic and
multispectral images a power of two. The panchromatic
and multispectral images of the same sensor (i.e. QuickBird,
SPOT and Ikonos panchromatic and multispectral images)
may inherently meet this requirement. For
example, the ratio between the pixel sizes of the
panchromatic and multispectral images is 2^2 for QuickBird
images (0.7 m versus 2.8 m for the panchromatic and multispectral
bands respectively). For this reason, no resampling is needed
for these images. Their pixel sizes will be the same if a one-level
and a three-level discrete wavelet decomposition are applied to the
multispectral and panchromatic images respectively (both yielding an
approximation with a 5.6 m pixel size). If the pixel sizes of the input
images do not have this 2^n relationship, resampling
is needed.
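A small worked check of this condition (an illustrative sketch, not part of the fusion algorithm itself) is given below; for the QuickBird example, 2.8 m / 0.7 m = 4 = 2^2, so the test passes.

import math

def is_power_of_two_ratio(p_fine, p_coarse):
    ratio = p_coarse / p_fine
    return math.isclose(ratio, 2 ** round(math.log2(ratio)))

print(is_power_of_two_ratio(0.7, 2.8))    # True:  ratio 4 = 2^2, no resampling needed
print(is_power_of_two_ratio(10.0, 30.0))  # False: ratio 3, resampling would be needed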
However, resampling deteriorates the quality and structure
of the image involved. For this reason, the
resampling should be kept to a minimum. Du et al.
(2003) propose an algorithm to find the minimum resampling
needed. According to this algorithm, a coefficient S that makes
the pixel sizes (P_A, P_B) of the two images (A and B) equal is
determined from the equation P_A = S·P_B. Then another number
S', which is the power of two nearest to S, is
found. Finally, image A, which has the larger pixel size, is
resampled to a pixel size of S'·P_B. This approach ensures
that the ratio between P_B and the pixel size of the resampled image
is a power of two, and that |S - S'| is the minimum change that
meets this requirement. This resampling approach is used in
our study.
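The short sketch below reflects our reading of this resampling rule; the function name and the example pixel sizes are purely illustrative.

import math

def target_pixel_size(p_a, p_b):
    # p_a is the larger pixel size; image A is resampled to the returned size
    s = p_a / p_b                       # exact ratio S between the pixel sizes
    s_pow2 = 2 ** round(math.log2(s))   # power of two S' nearest to S
    return s_pow2 * p_b                 # new pixel size S' * P_B for image A

# Example: a 30 m band paired with a 4 m band gives S = 7.5, S' = 8,
# so the 30 m band would be resampled to 32 m.
print(target_pixel_size(30.0, 4.0))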
3.2 Implementing wavelet transform 
Wavelet transform based image fusion involves three steps:
forward transform, coefficient combination and backward
transform. In the forward transform, two or more registered
input images are decomposed to obtain their wavelet coefficients.
These coefficients respectively represent the approximation,
horizontal, vertical and diagonal components of the input
images (Hill et al., 2002). Figure 1 below illustrates a 2-D
forward DWT process (Misiti, 2002).
[Figure 1 (diagram): the input image is filtered along its rows with the low-pass filter h and the high-pass filter g and downsampled by 2; each result is then filtered along its columns and downsampled again, yielding the approximation coefficients cA_{j+1} and the vertical, horizontal and diagonal detail coefficients cD_{j+1}.]
Figure 1. 2-D forward DWT to get approximation, vertical, 
horizontal and diagonal wavelet coefficients 
The same process needs to be applied to all input images one 
by one. Then, these wavelet coefficients from the different 
input images are combined according to certain fusion rules to 
get fused wavelet coefficients. 
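The following skeleton, assuming the PyWavelets package, sketches these three steps for two registered input images of equal size; the maximum-absolute-value rule used for the detail coefficients is only one common choice, and the rules applied in this study are described in Section 3.3 below.

import numpy as np
import pywt

def fuse_pair(img_a, img_b, wavelet='haar', level=2):
    # forward transform of each registered input image
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)
    # combine: average the approximations, keep the larger-magnitude details
    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    # backward transform produces the fused image
    return pywt.waverec2(fused, wavelet)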
3.3 Fusion Rules 
This is where the fusion essentially occurs. The wavelet 
transform coefficients obtained from the input images need to 
be combined to form a new set of coefficients to be used for 
backward transform. There are various fusion rules to form the 
fused wavelet coefficients matrix using the coefficients of the 
input images. In this study, taking the largest absolute values of 
 
	        