Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999 
is restricted to bands that lie within the spectral range of the 
panchromatic image. An interesting approach, based on the 
retrieval of spectral signatures that correspond to constant grey 
levels in the panchromatic image, has been presented by Zhukov 
et al. (1995). The result reveals sub-pixel variations in the 
multispectral bands that are associated with grey level 
variations in the panchromatic image. 
Most promising are methods that use wavelet transforms for 
fusion of multiresolution images, as they preserve the spectral 
characteristics of the fused image to a high extent (Ranchin and 
Wald, 1993; Garguet-Duport et al., 1996; Yocky, 1996; Wald et 
al., 1997). In this paper, we will present an alternative method 
that is based on adaptive filters. 
2. METHODOLOGY 
Fusion of multiresolution optical image data aims at the 
derivation of multispectral images providing the high spatial 
resolution of the panchromatic image. The perfect result of such 
a process would be an image that is identical to the image the 
multispectral sensor would have observed if it had the high 
resolution of the panchromatic sensor (Wald et al., 1997). Such 
an image would allow differentiating at least all objects that are 
detectable in the panchromatic image. An approximation of the 
desired result could be obtained by first extracting the objects 
from the panchromatic image and "filling" them with the 
corresponding average multispectral information. Similar 
techniques have been successfully used for integrating spectral 
image data with vector layers in a GIS (Janssen et al., 1990). 
The drawback of this method for multi-image fusion lies in the 
requirement of a consistent set of object borders. Although the 
requirement can be fulfilled through segmentation of the 
panchromatic image, it would need computationally intensive 
and therefore time consuming preprocessing. As an alternative, 
we propose a filtering approach, called Adaptive Image Fusion 
(AIF), that uses local object edges instead of image segments. 
2.1. Sigma filter 
We assume that an image object Z is represented by a set of 
neighbouring pixels z_i, whose values are Gaussian distributed, 
i.e. z ~ N(μ_z, σ_z). This assumption of normality is generally 
reasonable for common spectral response distributions 
(Lillesand and Kiefer, 1994). In the local neighbourhood of an 
object edge we will therefore find two distributions, each 
representing one of the neighbouring objects. To separate these 
objects, i.e. to assign each pixel to one of these objects, adaptive 
filter techniques can be used. These filters have been 
successfully applied to image data for noise reduction, in 
particular for suppression of speckle in SAR imagery. We have 
chosen a modified sigma filter as it matches the assumptions 
given above. 
The sigma filter averages only those pixels in a local window 
that lie within a two-sigma range of the central pixel value (Lee, 
1983). All other pixels are assumed to belong to another 
distribution, i.e. they represent a neighbouring object. As this 
filter is based on the assumption that the central pixel is in fact 
the mean of its Gaussian distribution, it might not include all 
relevant pixels in the averaging process. Therefore, a more 
general approach, namely the modified sigma filter, was 
presented by Smith (1996). It averages all pixels that could 
belong to the same distribution as the central pixel, without 
knowing the actual mean of this distribution. 
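The selection rule can be sketched as follows. This is a minimal illustration of the basic two-sigma acceptance test (Lee, 1983), not the exact modified formulation of Smith (1996); the local sigma is assumed proportional to the central grey value via a normalised standard deviation `k_sigma`, a parameter whose estimation is described below.

```python
import numpy as np

def sigma_filter(img, window=5, k_sigma=0.1, factor=2.0):
    """Sigma-type filter: average only those window pixels whose grey
    values lie within `factor` * sigma of the central pixel (the basic
    two-sigma rule of Lee, 1983). The local sigma is estimated as
    k_sigma * central value, where k_sigma is the normalised standard
    deviation (sigma / mean) of the image."""
    h = window // 2
    pad = np.pad(img.astype(float), h, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = pad[r:r + window, c:c + window]
            centre = pad[r + h, c + h]
            # keep only pixels that could belong to the same distribution
            mask = np.abs(win - centre) <= factor * k_sigma * centre
            out[r, c] = win[mask].mean()  # centre pixel is always included
    return out
```

Within a homogeneous area the mask covers the whole window and the filter smooths; near an edge, pixels of the neighbouring distribution fall outside the tolerance and are excluded, so the edge is preserved.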
Application of the modified sigma filter requires the estimation 
of the normalised standard deviation, which can be based on 
empirical analysis of the panchromatic image (Smith 1996). 
First, a local average and a local standard deviation image are 
computed from the original image, using the window size 
chosen for the filter process. Next, the standard deviation image 
is divided by the average image on a pixel basis, resulting in a 
normalised standard deviation image. The mode of the 
histogram of this image is an adequate estimate to start with, 
although test runs have shown that in most cases it has to be 
reduced to get the desired results. 
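The estimation procedure above can be sketched as follows; this is a hedged illustration in which window size, bin count and the local-statistics implementation are our own choices, not prescribed by Smith (1996).

```python
import numpy as np

def estimate_norm_sigma(pan, window=5, bins=100):
    """Estimate the normalised standard deviation of a panchromatic
    image: compute local mean and local standard deviation images with
    the chosen filter window, divide them pixel by pixel, and take the
    mode of the histogram of the resulting image."""
    pan = pan.astype(float)
    # all overlapping window x window patches as a view
    wins = np.lib.stride_tricks.sliding_window_view(pan, (window, window))
    local_mean = wins.mean(axis=(-2, -1))
    local_std = wins.std(axis=(-2, -1))
    norm = local_std / np.maximum(local_mean, 1e-12)  # avoid division by zero
    hist, edges = np.histogram(norm, bins=bins)
    i = int(hist.argmax())
    return 0.5 * (edges[i] + edges[i + 1])  # centre of the modal bin
```

As noted in the text, this modal value is only a starting point and in most cases has to be reduced to obtain the desired result.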
When applying the modified sigma filter, areas with a low 
standard deviation, i.e. areas containing one object, will be 
smoothed. Within areas of high standard deviation, i.e. areas 
containing object edges, only those pixels will be averaged that 
belong to the same distribution as the central pixel. 
2.2. Adaptive image fusion 
For the fusion approach, the multispectral bands are included in 
the filtering process. Like most fusion techniques, AIF requires 
co-registration of the panchromatic and the multispectral images 
and nearest neighbour resampling of the multispectral bands to 
the higher spatial resolution. There is no need to apply a 
higher-level resampling process, such as cubic interpolation, as 
the blockiness of the lower resolution images will be eliminated 
during the fusion process. In the following, the new high-resolution 
pixels of the multispectral bands are referred to as sub-pixels. 
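The nearest-neighbour resampling step can be illustrated with a toy example; the resolution ratio of 3 is an assumption for illustration, as the actual ratio depends on the sensor pair.

```python
import numpy as np

# Nearest-neighbour resampling of a low-resolution multispectral band
# to the panchromatic grid: each original pixel is replicated into a
# block of identical sub-pixels, producing the "blocky" image that the
# fusion process later refines.
ms = np.array([[10, 20],
               [30, 40]])
scale = 3  # assumed resolution ratio between the two sensors
ms_hi = np.repeat(np.repeat(ms, scale, axis=0), scale, axis=1)
print(ms_hi.shape)  # (6, 6)
```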
The AIF algorithm starts with applying a modified sigma filter 
to the panchromatic image. At each position of the moving 
window, the two-sigma range related to the central pixel is 
calculated, and all pixels within the window that fall into that 
range are selected. The position of the selected pixels is then 
transferred to the multispectral band, where the averaging of the 
respective sub-pixels is performed. The process could be 
described as sigma filtering of the multispectral band, where the 
filter behaviour is controlled by the panchromatic image. It is 
important to note that no spectral information is transferred from 
the panchromatic image to the multispectral band during the 
whole procedure. This leads to a better delineation of objects in 
the multispectral band without significantly changing the 
spectral information. 
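One pass of the procedure described above could be sketched as follows. This is our own minimal reading of the algorithm, not the authors' implementation; window size and the normalised standard deviation `k_sigma` are free parameters that would have to be tuned as described in section 2.1.

```python
import numpy as np

def aif_pass(pan, ms, window=5, k_sigma=0.1, factor=2.0):
    """One pass of pan-guided sigma filtering: at each window position,
    select the panchromatic pixels within the two-sigma range of the
    central pan pixel, then average the multispectral sub-pixels at
    those same positions. Only positions, never grey values, are taken
    from the panchromatic image."""
    h = window // 2
    pan_p = np.pad(pan.astype(float), h, mode='reflect')
    ms_p = np.pad(ms.astype(float), h, mode='reflect')
    out = np.empty(ms.shape, dtype=float)
    for r in range(ms.shape[0]):
        for c in range(ms.shape[1]):
            pwin = pan_p[r:r + window, c:c + window]
            centre = pan_p[r + h, c + h]
            # selection is decided entirely in the panchromatic image
            mask = np.abs(pwin - centre) <= factor * k_sigma * centre
            # averaging is performed entirely in the multispectral band
            out[r, c] = ms_p[r:r + window, c:c + window][mask].mean()
    return out
```

Because all averaged values come from the multispectral band itself, the spectral information is not significantly changed, while object edges found in the panchromatic image control which sub-pixels are combined.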
The effect of the AIF is shown in Figure 1. The figure on the left 
represents an (idealised) panchromatic image, the figure in the 
centre the respective (idealised) multispectral band. It is 
obvious that the spectral correlation between the two images is 
relatively low (as occurs e.g. with a panchromatic image and a 
near infrared channel). By applying the AIF iteratively, the 
mixed pixels of the multispectral band will be separated step by 
step into the single objects they are composed of. The right
Thank you.