data, the images have to be geometrically and radiometrically
corrected before they are suitable for the fusion process, using
collateral data such as atmospheric conditions, sensor viewing
geometry, ground control points (GCPs), etc. (Pohl, 1996). An
elementary pre-processing step is the accurate co-registration of
the dataset, so that corresponding features coincide. A general
description of the necessary pre-processing steps can be found
in Cheng et al. (1995), Toutin (1994) and Richter (1997). An
overview of the concepts of fusion is given in Wald (1998a).
Fig. 1. Overall data fusion process in remote sensing.
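As an illustration of the co-registration step, the following minimal sketch estimates a 2-D affine mapping from a handful of hypothetical GCP pairs by least squares (the coordinate values and numpy are assumptions for illustration); in practice this would be followed by resampling the input image into the reference geometry.

```python
import numpy as np

# Hypothetical GCP pairs: pixel (col, row) in the input image and the
# corresponding (col, row) in the reference image (illustrative values).
input_pts = np.array([[10.0, 12.0], [200.5, 15.2], [30.1, 180.7], [210.0, 190.3]])
ref_pts   = np.array([[11.2, 10.5], [201.9, 14.0], [29.5, 182.1], [211.3, 189.0]])

# Design matrix for a 2-D affine transform:
#   x' = a*x + b*y + c,   y' = d*x + e*y + f
A = np.hstack([input_pts, np.ones((len(input_pts), 1))])

# One least-squares fit per output coordinate.
coef_x, *_ = np.linalg.lstsq(A, ref_pts[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, ref_pts[:, 1], rcond=None)

def to_reference(x, y):
    """Map an input-image coordinate into the reference geometry."""
    v = np.array([x, y, 1.0])
    return coef_x @ v, coef_y @ v

print(to_reference(100.0, 100.0))
```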
Depending on the processing stage at which data fusion takes
place, three different fusion levels are distinguished:
• pixel,
• feature, and
• decision level.
Image fusion mostly refers to pixel-based data fusion, where the
input data are merged by applying a mathematical algorithm to the
coinciding pixel values of the various input channels to form a
new output image. The concept of the different fusion levels is
described in more detail in Pohl and van Genderen (1998) and
Pohl (1998). An overview of the different definitions of data
and image fusion is available in Wald (1998b).
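As a minimal sketch of the pixel-level concept, the following fuses two co-registered single-band arrays with a weighted average; the operator, the weight and the function name are illustrative assumptions, since the choice of algorithm depends on the application.

```python
import numpy as np

def fuse_pixelwise(band_a, band_b, w=0.5):
    """Combine two co-registered channels pixel by pixel.

    A weighted average is used purely for illustration; any pixel-wise
    operator (sum, product, ratio, ...) fits the same scheme.
    """
    if band_a.shape != band_b.shape:
        raise ValueError("inputs must be co-registered to the same grid")
    return w * band_a.astype(float) + (1.0 - w) * band_b.astype(float)

# Random stand-ins for two co-registered image channels.
a = np.random.randint(0, 256, size=(4, 4))
b = np.random.randint(0, 256, size=(4, 4))
fused = fuse_pixelwise(a, b)
```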
Once the alignment of the dataset is established, it is possible to
apply certain fusion techniques. The manifold fusion techniques
can be grouped into
1. Colour-related techniques and
2. Statistical/numerical approaches (Pohl, 1996).
The first group comprises methods that refer to the different
possibilities of representing pixel values in colour spaces. An
example is the Intensity (I) - Hue (H) - Saturation (S) colour
transformation. The IHS technique intends to separate different
characteristics of colour perception by the human interpreter.
The intensity relates to the brightness, hue represents the
dominant wavelength, whilst the saturation is defined by the
purity of the colour (Gillespie et al., 1986). If a multispectral
image is transformed from RGB space into IHS, it is
possible to integrate a fourth channel by exchanging it with one
of the components obtained (I, H or S). There are many other
techniques that follow the substitution principle. A description
can be found in Shettigara (1992). Of course, there are other
colour transformations which suit the fusion concept (e.g. RGB
or Luminance/Chrominance - YIQ).
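The substitution principle can be sketched as follows, assuming numpy and one common linear variant of the IHS transform (several variants exist in the literature): the multispectral image is transformed to IHS, the intensity component is exchanged for a co-registered higher-resolution panchromatic channel, and the result is transformed back to RGB.

```python
import numpy as np

# One common linear variant of the IHS transform; the first row
# yields the intensity I = (R + G + B) / 3.
M = np.array([[1/3,           1/3,           1/3],
              [-np.sqrt(2)/6, -np.sqrt(2)/6, np.sqrt(2)/3],
              [1/np.sqrt(2),  -1/np.sqrt(2), 0.0]])
M_inv = np.linalg.inv(M)

def ihs_substitution(rgb, pan):
    """Replace the intensity of an RGB image with a pan channel.

    rgb : float array of shape (rows, cols, 3), pan : (rows, cols).
    """
    iv = rgb @ M.T        # per-pixel (I, v1, v2) components
    iv[..., 0] = pan      # substitute intensity with the pan channel
    return iv @ M_inv.T   # transform back to RGB

# Random stand-ins for a co-registered multispectral/pan pair.
rgb = np.random.rand(8, 8, 3)
pan = np.random.rand(8, 8)
fused = ihs_substitution(rgb, pan)
```

In practice, the panchromatic channel is usually contrast-matched (e.g. histogram-matched) to the intensity component before substitution, in order to limit spectral distortion in the fused result.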
The second group of fusion techniques deals, amongst others,
with arithmetic combinations of image channels, Principal
Component Analysis (PCA) (Singh and Harrison, 1985) and
Regression Variable Substitution (RVS) (Shettigara, 1992;
Singh, 1989). Fusion by band combinations using arithmetic
operators opens a wide range of possibilities to the remote
sensing data user. Image addition or multiplication contributes
to the enhancement of features, whilst channel subtraction and
ratios allow the identification of changes (Mouat et al., 1993).
The Brovey transformation forms a particular method of
ratioing, preserving the relative spectral values while increasing
the spatial resolution. PCA and similar methods serve the
reduction of data volume, change detection or image enhancement. RVS
is used to replace bands by linearly combining additional image
channels with the dataset. An overview of existing techniques,
as well as a comprehensive description of their use is given in
the review by Pohl and van Genderen (1998).
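A minimal sketch of the Brovey ratioing, assuming numpy and a co-registered multispectral/panchromatic pair (the arrays below are random stand-ins): each band is divided by the sum of all bands and multiplied by the panchromatic channel, so the relative band contributions are kept while the panchromatic spatial detail is injected.

```python
import numpy as np

def brovey(ms, pan, eps=1e-6):
    """Brovey transform: scale each multispectral band by the ratio
    of the pan channel to the sum of all bands.

    ms : float array (rows, cols, bands), pan : (rows, cols);
    eps avoids division by zero over dark pixels.
    """
    total = ms.sum(axis=-1, keepdims=True)
    return ms / (total + eps) * pan[..., np.newaxis]

# Random stand-ins for a co-registered 3-band image and a pan channel.
ms = np.random.rand(8, 8, 3)
pan = np.random.rand(8, 8)
fused = brovey(ms, pan)
```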
3. APPLICATIONS OF IMAGE FUSION
Image fusion is used in a broad variety of applications: geology,
land use / agriculture / forestry, change detection, map updating
and hazard monitoring, to name just a few. However, in many cases
it has not reached operational status, owing to the difficulty of
generalizing image combinations and fusion processes. Due to
the scarcity of simultaneously acquired satellite imagery, most
image fusion applications carry a multi-temporal component. In
some cases, it is used in the framework of monitoring (change
detection); in others it is an unavoidable constraint and has to
be considered in the evaluation of the resulting fused product.
A very important aspect of applying image fusion is the
integration of complementary data. The complementarity of
visible and infrared (VIR) with synthetic aperture radar (SAR)
images is a well known example, where the objects contained in
the images are 'seen' from very different perspectives
(wavelength and viewing geometry). The integration of high
resolution and multispectral information forms another type of
complementarity.
The following sections provide an overview of issues in
operationally used image fusion relating to the processing
involved, and discuss the benefits and limitations of the
approaches, illustrated by applications. All results have to be
viewed in the context of visual image exploitation.
3.1. Resolution Merge
One of the more established approaches is the so-called
resolution merge. It aims at the integration of lower resolution
multispectral with higher resolution panchromatic data in order
to benefit from high spectral and spatial resolution in one
image. It is relatively straightforward when using data from the
same satellite, e.g. SPOT PAN & XS, IRS-1C PAN & LISS,
etc., but it is equally applicable to imagery originating from
different satellites carrying similar sensors (e.g. SPOT XS &