refined surface in Figure 1. However, a more abstract
representation is needed, called the segmented surface. Here, the
topographic surface is separated from objects that have a
certain vertical dimension, which, in turn, are approximated
by planar surface patches or higher-order polynomials.
Object recognition and identity fusion are performed in the 3-D
object space. This last step is basically a spatial reasoning
process, taking into account the grouped features, including
the visible surface, and based on knowledge of how the
features are related to each other. Abductive inference
(Josephson and Josephson, 1994) provides a suitable
framework by generating and evaluating hypotheses formed
by the features.
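As a minimal sketch of such abductive reasoning (the feature names, hypotheses, and coverage score below are illustrative assumptions, not rules taken from the paper), candidate hypotheses can be ranked by how much of the observed evidence they explain:

```python
# Minimal sketch of abductive hypothesis evaluation over grouped
# features; hypothesis names and scores are illustrative only.

observations = {"planar_roof_patch", "high_elevation", "low_NIR_reflectance"}

# Each hypothesis lists the features it would explain and a prior plausibility.
hypotheses = {
    "building": ({"planar_roof_patch", "high_elevation"}, 0.6),
    "tree":     ({"high_elevation"}, 0.3),
}

def score(explains, prior):
    # Favor hypotheses that explain more of the observed evidence.
    return prior * len(explains & observations) / len(observations)

best = max(hypotheses, key=lambda h: score(*hypotheses[h]))
print(best)  # -> 'building'
```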
Performing fusion in object space requires that sensor data or
extracted features are registered to the object space.
Preprocessed laser scanning points are already in object
space. To establish a relationship between aerial imagery and
object space, an aerial triangulation must be performed that
will provide the exterior orientation parameters. Instead of
generating an orthophoto for the multispectral imagery, we
register it to the aerial imagery. This offers the advantage of
performing the classification and feature extraction on the
original image, thus preserving the radiometric fidelity. The
results are then transformed to object space through the aerial
images.
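A minimal sketch of this transfer, assuming the exterior orientation (rotation matrix R, projection center X0) and the camera constant c are known from the aerial triangulation; all numeric values below are invented for illustration:

```python
import numpy as np

# Sketch: map an object-space point into an aerial image via the
# collinearity equations. Exterior orientation (R, X0) and the camera
# constant c are assumed known from aerial triangulation.
c = 0.153                                  # camera constant [m] (example value)
X0 = np.array([5000.0, 4000.0, 1500.0])    # projection center [m]
R = np.eye(3)                              # rotation world -> camera (near-vertical case)

def object_to_image(X):
    """Project a 3-D object point X to image coordinates (x, y)."""
    u, v, w = R @ (np.asarray(X) - X0)
    return -c * u / w, -c * v / w

# Classification results from the registered multispectral image can be
# carried to object space through the same aerial-image geometry.
x, y = object_to_image([5100.0, 4050.0, 200.0])
```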
We have not implemented the design as a complete system,
but have performed experiments to test the fusion stages.
The following sections report some of these experimental
results.
4. MAJOR FUSION PROCESSES
4.1. Multispectral imagery
Multi- and hyperspectral systems capture images in a
number of spectral bands in the visible and infrared regions. In
the visible-NIR part of the spectrum, the dominant energy
source is solar radiation, and features in the images are
mostly related to changes in surface reflectance, in the
orientation of the surface elements, or in both. Owing to the
complex relationship between the spectral curves of the
different materials, objects may look quite different in
different spectral domains. For example, note the differences
between the gray level images of the same area in visible and
NIR frequencies (Figure 2a and b, right parts of images).
Different combinations of these bands, such as the false color
composites in Figure 3a, can facilitate visual interpretation.
The non-turbid, deep, clear water of the channels almost
completely absorbs the energy, resulting in a black color. The
different man-made materials have more or less uniform
reflectance throughout the visible-NIR domain, creating a
characteristic gray hue with an intensity that depends on the
total brightness of the material. The bright red areas are
associated with live green vegetation, which scatters most
of the solar radiation in the NIR. There is almost no energy
reflected back from areas in deep shadow along the northern
side of the houses.
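A minimal sketch of building such a false color (color-infrared) composite, assuming a four-band blue/green/red/NIR array; the band ordering and the simple min-max stretch are our assumptions:

```python
import numpy as np

# Sketch: standard false color composite mapping NIR -> red,
# red -> green, green -> blue, so vigorous vegetation appears bright red.
def false_color(bands):
    """bands: float array (rows, cols, 4) ordered blue, green, red, NIR."""
    nir, red, green = bands[..., 3], bands[..., 2], bands[..., 1]
    composite = np.stack([nir, red, green], axis=-1)
    # Stretch each channel to [0, 1] for display.
    lo = composite.min(axis=(0, 1), keepdims=True)
    hi = composite.max(axis=(0, 1), keepdims=True)
    return (composite - lo) / np.maximum(hi - lo, 1e-9)
```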
In thermal infrared sensing, the emitted EM radiation is
imaged (Figure 2c). The measured radiant temperature of the
objects depends on their kinetic or ‘true’ temperature and
their emissivity. The temperature of the different objects
changes differently throughout the day. For example, trees
and water bodies are generally cooler than their surroundings
during the day and warmer during the night. Fortunately, not
all the objects exhibit this complex temporal behavior. For
example, paved roads and parking lots are relatively warm
both during day and night. As in the visible images,
daytime thermal imagery contains shadows in areas shaded
from direct sunlight. The energy captured by thermal IR
sensing is also a function of the emissivity of the objects. In
contrast to most natural surfaces, which have very similar
emissivities, some man-made materials possess very distinct
emissivities. For example, unpainted metal roofs have a very
low emissivity (0.1-0.2), causing extremely low gray values in
the thermal images. Hence, they provide excellent clues for
locating metal surfaces.
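The effect of emissivity can be made concrete with the standard relation between radiant and kinetic temperature, T_rad = epsilon^(1/4) * T_kin; the sketch below uses illustrative emissivity values:

```python
# Sketch of why low-emissivity metal roofs appear dark in thermal imagery:
# radiant temperature relates to kinetic temperature as T_rad = eps**0.25 * T_kin.
def radiant_temperature(t_kinetic_K, emissivity):
    return emissivity ** 0.25 * t_kinetic_K

# An unpainted metal roof (eps ~ 0.1) at 300 K appears far colder than
# vegetation (eps ~ 0.97) at the same kinetic temperature.
print(radiant_temperature(300.0, 0.10))  # ~168.7 K
print(radiant_temperature(300.0, 0.97))  # ~297.7 K
```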
Two different approaches were selected and tested for
automatic interpretation of multispectral data. In the
‘multispectral edges’ method, edges extracted from selected
individual spectral images were fused in the image space. In
the more traditional approach, first the visible-NIR bands
were segmented in image space by using unsupervised
classification. Since visible-NIR and thermal images are
based on different physical principles, the thermal imagery
was not included in this step. Then, the boundaries between
the different classes were extracted. Finally, these boundaries
were fused with the ones extracted from the thermal imagery.
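A minimal sketch of this classify-then-trace pipeline, assuming a visible-NIR band stack and using k-means (via scikit-learn) as the unsupervised classifier; the number of classes and the simple neighbor-difference boundary test are our choices, not the paper's:

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: unsupervised classification of the visible-NIR bands,
# then extraction of the boundaries between the resulting classes.
def classify_and_trace(bands, n_classes=6):
    rows, cols, nb = bands.shape
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        bands.reshape(-1, nb)).reshape(rows, cols)
    # A pixel lies on a class boundary if its right or lower neighbor differs.
    boundary = np.zeros((rows, cols), dtype=bool)
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return labels, boundary
```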
Multispectral-edges method. Edges obtained from different
portions of the spectrum form a family - not unlike the scale
space family - that adds a new dimension to the grouping,
segmentation, and object recognition processes. For example,
Githuku (1998) analyzed the relationship of 'colored' edges
and exploited their uniqueness for matching overlapping
images.
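A minimal sketch of such band-wise edge extraction and color-composite fusion, using Sobel gradient magnitudes with a global threshold as a stand-in for the paper's (unspecified) edge operator:

```python
import numpy as np
from scipy import ndimage

# Sketch: extract edges per band, then fuse them into one color
# composite (visible -> blue, NIR -> green, thermal -> red, as in
# Figure 4), so a pixel's color shows which band had the edge.
def edge_map(band, thresh=0.2):
    gx = ndimage.sobel(band, axis=1)
    gy = ndimage.sobel(band, axis=0)
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def fuse_edges(visible, nir, thermal):
    return np.stack([edge_map(thermal), edge_map(nir), edge_map(visible)],
                    axis=-1).astype(float)
```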
Edges extracted from visible, NIR, and thermal images can be
strikingly different (Figure 2a-c, left parts of images). By extracting
the edges from the individual bands and then analyzing and
merging them, composite features can be created. Edges
extracted from a visible (blue), a NIR (green) and a thermal
band (red), are combined in a color composite image in
Figure 4. The color of an edge on this image tells us which
band had the strongest discontinuity at that location. All man-
made objects are bounded by edges. Fortuitously, no edges
were extracted along the fuzzy boundaries of some natural
surfaces, such as the transition between bare soil, sparse and
vigorous vegetation. Note that the edges of man-made