International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999
figure shows an (idealized) output of the fusion process, where
the shape of the objects is identical to that in the panchromatic
image, while the spectral behavior of the objects corresponds to
the multispectral image.
Fig. 1. Effect of Adaptive Image Fusion: (a) panchromatic
image; (b) multispectral image; (c) fused multispectral image.
Although AIF sharpens the multispectral image according to
object edges found in the panchromatic image, this effect is
limited to object edges that occur both in the panchromatic and
the multispectral image. Edges appearing only in the
panchromatic image will cause no significant effect in the
multispectral band, while edges in the multispectral band that do
not show up in the panchromatic image will be slightly blurred.
Objects that are smaller than the original multispectral pixel size
will only be sharpened if their spectral reflectance differs significantly
from their local environment. At the same time as object edges
are sharpened, the area within an object will be smoothed. This
effect can be seen as a pre-segmentation of the multispectral
image. It reduces the occurrence of mixed pixels, thus
improving the quality of a subsequent classification.
3. APPLICATION
3.1. Agricultural case study
The AIF algorithm was first applied to a multisensor image
acquired by the Indian remote sensing satellite IRS-1C. This
system provides panchromatic images with a spatial resolution
of 5.6m (PAN), and three multispectral bands (green, red and
near infrared) with a resolution of 23.5m (LISS-III, bands 2/3/4). The
commercially available scenes are radiometrically preprocessed,
panorama corrected and resampled to a pixel size of 5m and
25m respectively. The acquired scene (path 30, row 35, quadrant
A) was sensed on August 9, 1996 and covers an area of
approximately 70 × 70 km² located in Upper Austria (centered on
13°35'E / 47°58'N). The images were geocoded to the Austrian
reference system (Gauß-Krüger M31). The multispectral bands
were resampled to 5m pixel size to match the resolution of the
panchromatic image.
Application of AIF requires the selection of two parameters, the
normalised standard deviation and the size of the local window.
The former can be estimated from the panchromatic image as
described in section 2.1. The minimum size of the local window
is the ratio of original multispectral to panchromatic pixel size
(here 25m / 5m = 5, i.e. a 5x5 window). This minimum size offers the advantage of short
computation time. However, test runs of AIF have shown that
larger window sizes, such as 9x9 or 11x11, lead to significantly
better results. This will increase the computation time of the
program, but at the same time will reduce the number of
iterations. This is due to the stronger effect of averaging, when
larger window sizes are used.
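The adaptive filtering described above can be sketched as follows. The exact AIF algorithm is defined in section 2 and is not reproduced here, so this is only an illustrative approximation under stated assumptions: each multispectral pixel is replaced by the mean of those window neighbours whose panchromatic value is similar to the centre pixel's, with the similarity tolerance derived from the normalised standard deviation. All function and parameter names (`aif_sketch`, `norm_std`, etc.) are hypothetical.

```python
import numpy as np

def aif_sketch(ms, pan, window=11, norm_std=0.01, iterations=3):
    """Illustrative approximation of adaptive image fusion for one
    multispectral band (2-D array `ms`) guided by a co-registered
    panchromatic band `pan`. A pixel is averaged only over window
    neighbours whose pan value lies within a tolerance, so edges
    present in the pan image are preserved while object interiors
    are smoothed. Hypothetical names; not the paper's exact method."""
    half = window // 2
    # Assumption: the normalised standard deviation scales the pan
    # grey-level range into an absolute similarity tolerance.
    tol = norm_std * pan.max()
    pan_pad = np.pad(pan.astype(float), half, mode='edge')
    fused = ms.astype(float).copy()
    for _ in range(iterations):
        padded = np.pad(fused, half, mode='edge')
        out = np.empty_like(fused)
        rows, cols = fused.shape
        for i in range(rows):
            for j in range(cols):
                win = padded[i:i + window, j:j + window]
                pwin = pan_pad[i:i + window, j:j + window]
                mask = np.abs(pwin - pan_pad[i + half, j + half]) <= tol
                out[i, j] = win[mask].mean()
        fused = out
    return fused
```

With a larger window, each averaging step draws on more neighbours, which is consistent with the observation above that larger windows cost more per iteration but require fewer iterations.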
For the demonstration study a normalized standard deviation of
0.01 and a window size of 11x11 were chosen. Each
multispectral band was fused in turn with the
panchromatic image, yielding a synthetic multispectral
image stack. For comparison, an IHS merge, the most
commonly used fusion technique, was also performed.
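The paper does not detail its IHS procedure, so the following is a generic sketch of the common fast linear IHS substitution (not necessarily the exact variant used here): the intensity component I = (R+G+B)/3 is replaced by the panchromatic band after matching its mean and variance to I, which injects pan texture into all three bands. Names are hypothetical.

```python
import numpy as np

def ihs_merge(r, g, b, pan):
    """Fast linear IHS substitution sketch: replace the intensity
    component with a statistically matched panchromatic band.
    Adding the same difference image to each band is algebraically
    equivalent to the IHS forward/backward transform in this
    linear approximation."""
    i = (r + g + b) / 3.0
    # Match the pan band's mean and variance to the intensity component
    pan_m = (pan - pan.mean()) / pan.std() * i.std() + i.mean()
    d = pan_m - i
    return r + d, g + d, b + d
```

Because the full pan difference is added to every band, high-frequency pan texture (and any spectral distortion it carries) appears in all output bands, consistent with the distortions noted below.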
Visual evaluation of the fusion results shows the differences
between AIF and IHS merging (Fig. 2). The impression of the
AIF image is that of a pre-segmented image, where the variation
of grey levels within the objects is very low, while the objects
are clearly separated from each other. Areas with a high
variance in the panchromatic image, such as the village left of
the center, appear as one homogeneous object in the fused
image. The IHS image has more textural information that
supports visual interpretation, but also distorts the spectral
information significantly, as can be seen in some of the
agricultural fields.
Multispectral classification of the fused images confirms the
different effects (Fig. 3). A comparison of the classification of
the original image and the AIF image, both classified with the
same set of spectral signatures, shows a similar pattern of
assigned classes. The delineation of objects is significantly
better in the AIF image, and it contains less single pixel objects.
Classification of the IHS image was performed with a different
set of spectral signatures, as the spectral distortions did not
allow using the signatures from the original multispectral image.
The appearance of the classification is noisier, and in some areas
differs significantly from the original classification.
Quality assessment. The assessment focuses on how much the
radiometry of the multispectral images is distorted by the fusion
procedure. It is based on the idea that a synthetic image, once
degraded to the original multispectral resolution, should be as
similar as possible to the original image. This property is estimated
by comparing the mean values and standard deviations of, and the
correlation coefficients between, the degraded and the original
images (Wald et al., 1997). In order to produce the degraded
images, the fusion results were averaged applying a 5x5 filter
kernel and resampled to 25m pixel size. Table 1 presents the
results of the comparison between the degraded images and the
original data. The first column shows the global mean (μ) and
standard deviation (σ) of the original multispectral bands. The
columns entitled AIF and IHS give the respective values of the
merged products.
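The degradation-based check can be reproduced roughly as follows: average the fused band over non-overlapping 5x5 blocks (a 5x5 mean filter followed by resampling from 5m back to 25m pixels), then compare means, standard deviations and the correlation coefficient against the original band. This is a sketch of the protocol of Wald et al. (1997); function names are hypothetical.

```python
import numpy as np

def degrade(img, factor=5):
    """Average non-overlapping factor x factor blocks, i.e. a mean
    filter followed by resampling back to the original pixel size."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def wald_stats(fused, original, factor=5):
    """Mean, standard deviation and correlation coefficient between
    the degraded fused band and the original band."""
    deg = degrade(fused, factor)
    corr = np.corrcoef(deg.ravel(), original.ravel())[0, 1]
    return {'mean_orig': original.mean(), 'mean_deg': deg.mean(),
            'std_orig': original.std(), 'std_deg': deg.std(),
            'corr': corr}
```

A fusion method that perfectly preserves radiometry would return identical means and standard deviations and a correlation of 1; deviations from these values quantify the distortions discussed below.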
The results of AIF show no significant differences in the mean
values, but the standard deviation is slightly reduced. This is
due to the averaging performed during the merging process. IHS
merged images show a higher deviation in the mean values and
an increase of the standard deviation resulting from the
inclusion of textural information from the panchromatic image.
It is interesting to note that for channel 4 (NIR) the standard
deviation decreased during the IHS merge.