the input images to link related parameters of the observed
Earth surface.
Fig. 1. Overall data fusion process in remote sensing.
After system-induced and geometric errors in the dataset have been corrected as indicated in Fig. 1, the images are fused to produce a dataset of higher spatial resolution using one of the following techniques:
- RGB colour composites;
- Intensity Hue Saturation (IHS) transformation;
- Arithmetic combinations (e.g. Brovey transform);
- Principal Component Analysis;
- Wavelets (e.g. ARSIS method);
- Regression Variable Substitution;
- Combinations of techniques.
The following sections describe the context and process of
various techniques in more detail.
3.1. Red-Green-Blue Colour Composites
The so-called additive primary colours allow the assignment of three different types of information (e.g. image channels) to the three primary RGB colours. Together they form a colour composite that can be displayed on conventional media, e.g. a cathode ray tube (CRT), with the parallel use of a look-up table (LUT). The colour composite facilitates the interpretation of multi-channel image data through variations in colour based on the values in the individual channels. Operations on the LUT and the histogram of the image data can enhance the colour composite for visual interpretation.
The possibilities for varying the composite are manifold. Depending on the selection of the input image channels, the fused data will show different features. The distribution of the available 0-255 grey values over the range of the data is very important for the colour composite. Depending on the objects of interest to be highlighted, it may be advantageous to invert input channels before combining them with other data in the RGB display (Wang et al., 1995).
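To make the channel assignment and LUT-style enhancement concrete, the following Python/numpy sketch (not from the paper) builds such a composite; a percentile-based contrast stretch stands in for the LUT and histogram operations, the function names are illustrative, and the three input channels are assumed to be co-registered arrays of equal size.

```python
import numpy as np

def stretch_to_8bit(band, low_pct=2, high_pct=98):
    """Linear contrast stretch of one channel into the 0-255 range,
    clipping the histogram tails (a simple LUT-style operation)."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    scaled = (band.astype(float) - lo) / max(hi - lo, 1e-12)
    return np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

def rgb_composite(ch_r, ch_g, ch_b, invert=(False, False, False)):
    """Assign three co-registered image channels to R, G and B; individual
    channels may be inverted to highlight particular objects of interest."""
    bands = []
    for band, inv in zip((ch_r, ch_g, ch_b), invert):
        band8 = stretch_to_8bit(band)
        bands.append(255 - band8 if inv else band8)
    return np.dstack(bands)  # (rows, cols, 3) array ready for display

# Synthetic stand-ins for three co-registered image channels
rng = np.random.default_rng(0)
composite = rgb_composite(rng.normal(size=(64, 64)),
                          rng.normal(size=(64, 64)),
                          rng.normal(size=(64, 64)),
                          invert=(False, False, True))
```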
3.2. Intensity-Hue-Saturation Colour Transform
The IHS transformation (Eq. 1a-c) separates the spatial (I) and spectral (H, S) information contained in a standard RGB image. It relates to the parameters of human colour perception, separating the colour aspects into average brightness representing the surface intensity, dominant wavelength (hue) and purity (saturation) (Gillespie et al., 1986; Carper et al., 1990). The IHS values, commonly expressed in cylindrical or spherical coordinates, can be mapped to Cartesian coordinates through the intermediate values v_1 and v_2 using a linear transformation (Harrison and Jupp, 1990):
$$
\begin{pmatrix} I \\ v_1 \\ v_2 \end{pmatrix}
=
\begin{pmatrix}
\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\
\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{-2}{\sqrt{6}} \\
\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
\qquad \text{(1a)}
$$

$$
H = \tan^{-1}\!\left(\frac{v_2}{v_1}\right) \qquad \text{(1b)}
$$

$$
S = \sqrt{v_1^2 + v_2^2} \qquad \text{(1c)}
$$

$$
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
=
\begin{pmatrix}
\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{-1}{\sqrt{2}} \\
\frac{1}{\sqrt{3}} & \frac{-2}{\sqrt{6}} & 0
\end{pmatrix}
\begin{pmatrix} I \\ v_1 \\ v_2 \end{pmatrix}
\qquad \text{(2)}
$$
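As a worked illustration of Eqs. 1a-c and 2, a minimal numpy sketch is given below; the function names and the (rows, cols, 3) array layout are assumptions, and arctan2 is used so the hue angle is defined in all quadrants.

```python
import numpy as np

# Rotation matrix of Eq. 1a; since it is orthogonal, its transpose is the
# inverse rotation that appears in Eq. 2.
M = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
              [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
              [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])

def rgb_to_ihs(rgb):
    """rgb: (..., 3) float array. Returns (I, H, S) per Eq. 1a-c."""
    i, v1, v2 = np.moveaxis(rgb @ M.T, -1, 0)   # Eq. 1a
    h = np.arctan2(v2, v1)                      # Eq. 1b: H = tan^-1(v2 / v1)
    s = np.hypot(v1, v2)                        # Eq. 1c: S = sqrt(v1^2 + v2^2)
    return i, h, s

def ihs_to_rgb(i, h, s):
    """Reverse transformation (Eq. 2) back to the RGB image space."""
    v1, v2 = s * np.cos(h), s * np.sin(h)
    return np.stack([i, v1, v2], axis=-1) @ M   # row-vector form of M^T (I, v1, v2)

# Round-trip check on random data
rgb = np.random.default_rng(0).random((32, 32, 3))
assert np.allclose(ihs_to_rgb(*rgb_to_ihs(rgb)), rgb)
```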
In order to apply this technique for the enhancement of spatial
resolution, a panchromatic higher resolution channel replaces
the intensity component of a lower resolution multispectral
dataset.
There are two ways of applying the IHS technique in image
fusion: direct and substitutional. The first refers to the
transformation of three image channels assigned to I, H and S.
The second transforms three channels of the dataset representing RGB into the IHS colour space, which separates the colour aspects (hue, saturation) from the average brightness (intensity). Then, one of the components (usually the intensity) is replaced by a fourth, higher spatial resolution image channel that is to be
integrated. In many published studies the channel that replaces
one of the IHS components is contrast stretched to match the
latter. A reverse transformation from IHS to RGB as presented
in Eq. 2 (Harrison and Jupp, 1990) converts the data into its
original image space to obtain the fused image.
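A compact sketch of this substitutional procedure is shown below (not from the paper): the multispectral channels, assumed to be already resampled to the pan pixel size, are rotated into (I, v_1, v_2) space, the intensity is replaced by a pan channel stretched to match it, and the reverse rotation of Eq. 2 returns the result to RGB. The mean/standard-deviation matching and all names are illustrative stand-ins for the contrast stretch mentioned above.

```python
import numpy as np

def ihs_pan_sharpen(ms_rgb, pan):
    """Substitutional IHS fusion of a 3-channel multispectral composite
    (already resampled to the pan geometry) with a panchromatic band."""
    M = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
                  [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
                  [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])
    i, v1, v2 = np.moveaxis(ms_rgb @ M.T, -1, 0)         # forward rotation (Eq. 1a)
    # Stretch the pan band so its mean and spread match the intensity component
    # it replaces (stand-in for the contrast stretch mentioned in the text).
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12) * i.std() + i.mean()
    return np.stack([pan_matched, v1, v2], axis=-1) @ M  # reverse rotation (Eq. 2)

# Synthetic stand-ins for a resampled multispectral composite and a pan band
rng = np.random.default_rng(1)
fused = ihs_pan_sharpen(rng.random((128, 128, 3)), rng.random((128, 128)))
```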
3.3. Arithmetic Combinations
The possibilities for combining the data using multiplication, ratios, summation or subtraction are manifold. The choice of weighting and scaling factors may improve the resulting images. Eq. 3 gives an example of a summation technique and Eq. 4 of a multiplication technique used to combine Landsat TM with SPOT PAN for resolution merging (Yesou et al., 1993).
$$
DN_f = A\,(w_1 \cdot DN_a + w_2 \cdot DN_b) + B \qquad \text{(3)}
$$

$$
DN_f = A \cdot DN_a \cdot DN_b + B \qquad \text{(4)}
$$
A and B are scaling and additive factors, respectively, and w_1 and w_2 are weighting parameters. DN_f, DN_a and DN_b refer to the digital numbers of the final fused image and the input images a and b, respectively.
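For completeness, a small sketch of Eqs. 3 and 4 follows; the default weights, scaling factor and offset are illustrative placeholders, since suitable values depend on the sensor combination and the data ranges.

```python
import numpy as np

def fuse_sum(dn_a, dn_b, w1=0.5, w2=0.5, A=1.0, B=0.0):
    """Summation fusion (Eq. 3): DN_f = A * (w1*DN_a + w2*DN_b) + B."""
    return A * (w1 * np.asarray(dn_a, float) + w2 * np.asarray(dn_b, float)) + B

def fuse_mult(dn_a, dn_b, A=1.0, B=0.0):
    """Multiplicative fusion (Eq. 4): DN_f = A * DN_a * DN_b + B."""
    return A * np.asarray(dn_a, float) * np.asarray(dn_b, float) + B

# e.g. a resampled Landsat TM band combined with a SPOT PAN band of equal size
rng = np.random.default_rng(2)
tm_band, spot_pan = rng.random((64, 64)), rng.random((64, 64))
fused = fuse_sum(tm_band, spot_pan, w1=0.7, w2=0.3)
```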