Fusion of sensor data, knowledge sources and algorithms for extraction and classification of topographic objects
Baltsavias, Emmanuel P.

International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, Valladolid, Spain, 3-4 June, 1999
Fig. 5. This stereo pair shows a small image patch from the low-altitude flight. Laser points from the same region are projected back
to the two images with their exterior orientation parameters. Viewed under a stereoscope, a vivid 3-D scene appears, with the
laser points on top of the surface obtained by fusing the stereopair. The colored dots indicate different elevations,
with red the lowest and blue the highest points. Note that blue points are on top of buildings.
employed in aerial stereopairs, we may obtain surface
discontinuities directly. This is because edges in aerial images
may have been caused by breaklines in object space. Not all
edges correspond to breaklines, but there is hardly a breakline
that is not manifest as an edge in the image. Fusing surface
features is actually a two-step process. First, the images
covering the same scene are processed using multiple image
matching (Krupnik, 1996). Next, the surface obtained during
image matching is fused with the laser surface. Obviously, the
aerial imagery must be registered to the same object space the
laser points are represented in. In turn, this requires an aerial
triangulation. To achieve the best fit between the visible and
laser surface, the aerial triangulation should be performed by
incorporating the laser data (Jaw, 1999).
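Back-projecting laser points into the imagery (as done for Fig. 5) rests on the collinearity equations. The following is a minimal sketch, not the authors' implementation, assuming a level camera, a world-to-camera rotation matrix, and hypothetical orientation parameters:

```python
import numpy as np

def project_to_image(points, X0, R, f):
    """Project 3-D object points into an image via the collinearity
    equations.  X0: projection centre, R: rotation matrix (world to
    camera), f: focal length.  Returns image coordinates (x, y)."""
    # Transform the object points into the camera frame.
    p_cam = (points - X0) @ R.T
    # Collinearity: scale by -f / Z (the camera looks down the -Z axis).
    x = -f * p_cam[:, 0] / p_cam[:, 2]
    y = -f * p_cam[:, 1] / p_cam[:, 2]
    return np.column_stack([x, y])

# Hypothetical example: vertical photo 1000 m above flat terrain.
pts = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
X0 = np.array([0.0, 0.0, 1000.0])   # projection centre
R = np.eye(3)                        # level camera, axes aligned
f = 0.15                             # 150 mm focal length, in metres
xy = project_to_image(pts, X0, R, f)
```

A point 100 m from the nadir maps to 15 mm in the image plane at this flying height, which is why a good exterior orientation (from aerial triangulation) is critical for the registration.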
Figure 5 illustrates the registration of the visible and the laser
surface. Here, a small image patch from two overlapping
photographs is shown, together with laser points that have
been projected back from object space to the images based on
their exterior orientation. Viewed under a stereoscope, one
gets a vivid 3-D impression of the surface. The figure also
demonstrates the distribution of laser points, which is rather
random with respect to surface features. For example, features
smaller than the (irregular) sampling of laser points may not be
captured. Moreover, Figure 6 clearly supports the claim that
breaklines, such as roof outlines, should be determined from
the aerial imagery.
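The claim that features smaller than the irregular laser sampling may not be captured can be made quantitative through the mean nearest-neighbour point spacing. A minimal sketch, using synthetic points and an illustrative density rather than data from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_point_spacing(xy):
    """Mean nearest-neighbour distance of irregularly spaced laser
    points; surface features smaller than this spacing are unlikely
    to be captured by the laser data alone."""
    tree = cKDTree(xy)
    # k=2: the nearest neighbour other than the query point itself.
    d, _ = tree.query(xy, k=2)
    return d[:, 1].mean()

# Hypothetical point cloud: ~0.2 points/m^2 over a 100 m x 100 m tile.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(2000, 2))
spacing = mean_point_spacing(xy)   # roughly 0.5 / sqrt(density), ~1.1 m
```

At such a spacing, sub-metre breaklines such as roof edges fall between samples, which supports determining them from the aerial imagery instead.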
The refined surface, obtained in the two-step fusion process, is
now analyzed for humps in an attempt to separate out the
topographic surface. The difference between the refined and the
topographic surface then yields what we call hump-objects of a
certain vertical dimension. The prime
motivation is to partition the object space, such that the
subsequent surface segmentation is only performed in areas
identified as humps. The segmentation is a multi-stage
grouping process aimed at determining a hierarchy of edges
and surface patches. As an example, breaklines are segmented
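The paper does not specify how the topographic surface is separated; one common way to do so on a gridded refined surface is morphological opening, sketched below with hypothetical window-size and minimum-height parameters:

```python
import numpy as np
from scipy.ndimage import grey_opening

def extract_humps(dsm, window=15, min_height=2.0):
    """Separate the topographic surface from a gridded refined surface
    and return a boolean hump mask plus the height above terrain.
    Opening with a window larger than the expected hump footprint
    approximates the bare topographic surface; the difference yields
    hump-objects of at least `min_height` vertical dimension."""
    topo = grey_opening(dsm, size=(window, window))
    ndsm = dsm - topo
    return ndsm >= min_height, ndsm

# Hypothetical example: flat terrain with one 10 m high 8x8-cell block.
dsm = np.zeros((50, 50))
dsm[20:28, 20:28] = 10.0
mask, ndsm = extract_humps(dsm)
```

Restricting the subsequent segmentation to the masked cells realizes the partitioning of object space described above: only areas identified as humps are analyzed further.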
Fig. 6. Superimposed on the aerial image are the results from
classifying the multispectral imagery and from
segmenting the laser surface. Pink areas indicate dark,
non-vegetated regions, and yellow areas indicate bright,
non-vegetated regions. Green refers to woody vegetation.
Finally, blue indicates shaded areas. The red contours are
derived from the laser data; they indicate humps. The
combination green (from multispectral) and hump
(from laser) triggers the hypothesis for a tree, for
example. A hump with planar surface patches and a
non-vegetated region is used to hypothesize a building.
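The hypothesis rules in the caption of Fig. 6 amount to a small decision procedure over the fused cues. A sketch with illustrative field names that are not from the paper:

```python
def hypothesize(region):
    """Combine a multispectral class label with laser-derived surface
    cues into an object hypothesis, following the rules sketched in
    the caption of Fig. 6.  `region` is a dict of cues per segmented
    region; the keys and class names here are assumptions."""
    if region["hump"] and region["class"] == "woody_vegetation":
        return "tree"          # green (multispectral) + hump (laser)
    if (region["hump"] and region["planar_patches"]
            and region["class"] in ("bright_nonveg", "dark_nonveg")):
        return "building"      # hump + planar patches + non-vegetated
    return "unknown"

# Hypothetical regions:
tree_like = {"hump": True, "class": "woody_vegetation",
             "planar_patches": False}
bldg_like = {"hump": True, "class": "bright_nonveg",
             "planar_patches": True}
```

Each rule uses cues from both sensors, so a failure of either source (e.g. shadow in the multispectral classification) degrades the hypothesis gracefully to "unknown" rather than to a wrong class.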
The fusion of surface information from aerial imagery and
laser scanning systems ought to take into account the
strengths and weaknesses of the two sensors. The major
advantage of laser measurements is their high density and high
quality. However, breaklines and formlines must be extracted
from irregularly spaced samples. If feature-based matching is