International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999
ADAPTIVE FUSION OF MULTISOURCE RASTER DATA APPLYING FILTER TECHNIQUES
K. Steinnocher
Austrian Research Centers, Seibersdorf, Environmental Planning, A-2444 Seibersdorf, klaus.steinnocher@arcs.ac.at
KEYWORDS: Adaptive Filters, Image Fusion, Image Sharpening, Multiresolution Data, Pre-Segmentation.
ABSTRACT
Current remote sensors offer a wide variety of image data with different characteristics in terms of temporal, geometric, radiometric
and spectral resolution. Although the information content of these images may partially overlap, their complementary aspects
offer a valuable basis for information extraction. To exploit the entire content of multisensor image data, appropriate
techniques for image fusion are indispensable. The objective of this paper is to analyse the benefits that can be gained from the
adaptive image fusion (AIF) method. This method allows the fusion of geometric (spatial) and thematic (spectral) features from
multisource raster data using adaptive filter algorithms. If applied to multiresolution image data, it will sharpen the low spatial
resolution image according to object edges found in the higher spatial resolution image. In contrast to substitution methods, such as
Intensity-Hue-Saturation or Principal-Component Merging, AIF preserves the spectral characteristics of the original low resolution
image. Thus, it supports applications that rely on subsequent numerical processing of the fused image, such as multispectral
classification. However, it may also be used in combination with substitution merging methods, leading to an improved product for
visual interpretation. In this case, AIF is applied to sharpen the original low spatial resolution image before
performing the substitution. From the various applications that could benefit from AIF, three examples are presented: improving the
delineation of forest areas, sharpening agricultural fields, and monitoring urban structures. In all cases, the fusion leads to
improved segmentation and a more precise estimation of the area of individual land cover objects.
1. INTRODUCTION
Image fusion in a general sense can be defined as “the
combination of two or more different images to form a new
image by using a certain algorithm” (Van Genderen and Pohl,
1994). It aims at the integration of all relevant information from
a number of single images into one new image. From an
information science point of view, image fusion can be divided
into three categories depending on the abstraction level of the
images: pixel, feature and decision based fusion. On the pixel
level, the fusion is performed on a per-pixel basis. This category
encompasses the most commonly used techniques (Vrabel,
1996). The second level requires the derivation of image
features, which are then subject to the fusion process. Decision
based fusion combines either pre-classified data derived
separately from each input image or data from multiple sources
in one classification scheme (Benediktsson and Swain, 1992;
Schistad Solberg et al., 1994).
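To make the distinction concrete, the following minimal sketch (Python/NumPy; the function name, the weighted average and the assumption of two co-registered, equally sized rasters are illustrative and not taken from the cited literature) shows pixel based fusion as a cell-by-cell combination, whereas feature and decision based fusion would first derive segments or class labels from each input:

    import numpy as np

    def pixel_level_fusion(img_a, img_b, weight=0.5):
        # Combine two co-registered rasters cell by cell with a weighted
        # average; feature or decision based fusion would instead operate
        # on derived segments or classification results.
        a = img_a.astype(np.float64)
        b = img_b.astype(np.float64)
        return weight * a + (1.0 - weight) * b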
An alternative grouping of image fusion techniques refers to the
different temporal and sensor characteristics of the input
imagery. The combination of multitemporal - single sensor
images represents a valuable basis for detecting changes over
time (Singh, 1989; Weydahl, 1993; Kressler and Steinnocher,
1996). Multisensor image fusion combines the information
acquired by different sensor systems to benefit from the
complementary information inherent in the single image data. A
representative selection of studies on multisensor fusion,
comprising a wide range of sensors, is given by Pohl and van
Genderen (1998). Within this group, a focus can be found on the
fusion of optical and SAR data (Harris et al., 1990; Schistad
Solberg et al., 1994) and of optical image data with different
spectral and spatial resolutions (Chavez et al., 1991; Pellemans
et al., 1993; Shettigara, 1992; Zhukov et al., 1995; Garguet-
Duport et al., 1996; Yocky, 1996; Vrabel, 1996; Wald et al.,
1997).
In the remainder of this paper, we will concentrate on the fusion
of multisensor optical image data with different spatial and
spectral resolutions. High resolution data sets of this kind are
typically acquired from single platforms carrying two sensors in
the optical domain - one providing panchromatic images with a
high spatial resolution, the other providing multispectral bands
(in the visible and near infrared spectrum) with a lower spatial
resolution. Current examples of these platforms are SPOT 3/4,
IRS-1C/D, and the recently launched Landsat 7. A number of
satellites with similar characteristics have been announced for
the near future (Carlson and Patel, 1997).
The motivation for merging a panchromatic image with multispectral
images is to increase spatial detail while preserving the
multispectral information. The result is an artificial
multispectral image stack with the spatial resolution of the
panchromatic image. Common methods to perform this task are
arithmetic merging procedures or component substitution
techniques such as the Intensity-Hue-Saturation (IHS) or the
Principal Component Substitution procedures (Carper et al.,
1990; Chavez et al., 1991; Shettigara, 1992). These techniques
are valuable for producing improved image maps for visual
interpretation tasks, as they strongly enhance textural features.
On the other hand, they can lead to a significant distortion of the
radiometric properties of the merged images (Vrabel, 1996).
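As an illustration of the component substitution principle, the following sketch (Python/NumPy; the function name and the moment-matching step are assumptions for illustration, not the exact procedures of the cited studies) replaces the intensity component of a linear IHS transform of three multispectral bands, resampled beforehand to the panchromatic grid, with the panchromatic band:

    import numpy as np

    def ihs_substitution(ms, pan):
        # ms: array of shape (3, rows, cols), multispectral bands resampled
        # to the panchromatic grid; pan: array of shape (rows, cols).
        ms = ms.astype(np.float64)
        pan = pan.astype(np.float64)
        # Intensity component of the linear IHS transform (band mean).
        intensity = ms.mean(axis=0)
        # Match the panchromatic band to the intensity in mean and standard
        # deviation to limit radiometric distortion before substitution.
        pan_matched = ((pan - pan.mean()) / (pan.std() + 1e-12)
                       * intensity.std() + intensity.mean())
        # Substituting the intensity and inverting the transform adds the
        # difference (pan_matched - intensity) to every band.
        return ms + (pan_matched - intensity)

Substituting the intensity in this way injects the panchromatic detail into all bands at once, which accounts both for the visual sharpening and for the radiometric distortion noted above.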
Pellemans et al. (1993) introduced the radiometric method,
where the new multispectral bands are derived from a linear
combination of multispectral and panchromatic radiances. While
this method keeps the radiometry of the spectral information, it