ISPRS Commission III, Vol. 34, Part 3A, "Photogrammetric Computer Vision", Graz, 2002
request.
Surface patches are obtained by segmenting the laser
point cloud. The segmentation process will leave gaps, that
is, patches do not contiguously cover the visible surface. A
variety of reasons contribute to this situation. For one, occlusions and low reflectance (e.g. water bodies) result in regions that are only sparsely populated with laser points. Moreover, certain surfaces, such as the tops of canopies or single trees
and shrubs do not lend themselves to a simple analytical
surface description. It is conceivable to augment the set of
surface patches obtained from LIDAR data by surfaces ob-
tained from aerial imagery. An interesting example is vertical
walls, such as building facades. The number of laser points
reflected from vertical surfaces is usually below the thresh-
old criterion for segmentation. It is therefore very unlikely
that vertical surface patches are extracted. During the anal-
ysis of spatial relationships among patches it is possible to
deduce the existence of vertical patches. These hypotheses
can then be confirmed or rejected by evidence gained from
aerial images.
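The reasoning from LIDAR evidence to a vertical-patch hypothesis can be sketched as follows. This is a minimal illustration, not the authors' procedure: the patch representation as point arrays, the use of median elevations, and the 2 m jump threshold are all assumptions.

```python
import numpy as np

def hypothesize_vertical_wall(upper_pts, lower_pts, min_jump=2.0):
    """If two neighbouring patches show a large elevation jump along their
    common border, hypothesize a vertical wall (e.g. a building facade)
    connecting them. The hypothesis would then be confirmed or rejected
    using evidence from the aerial images.
    upper_pts, lower_pts: (N, 3) arrays of laser points of the two patches.
    min_jump: assumed threshold in metres, chosen for illustration only."""
    jump = np.median(upper_pts[:, 2]) - np.median(lower_pts[:, 2])
    return bool(jump >= min_jump)

# Illustrative data: a roof patch at ~8 m next to a ground patch at ~0 m.
roof = np.array([[0.0, 0.0, 8.1], [1.0, 0.0, 7.9], [0.0, 1.0, 8.0]])
ground = np.array([[0.0, -1.0, 0.1], [1.0, -1.0, 0.0], [2.0, -1.0, -0.1]])
```

The elevation jump between the two patches exceeds the threshold, so a vertical wall between them would be hypothesized and passed to the image-based verification step.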
Boundaries: it is assumed that surface patches correspond
to physical surfaces in object space. As such, they are only
relevant within their boundaries. Thus, the complete bound-
ary description, B, is important. The simplest way to rep-
resent the boundary is by a closed sequence of 3D vec-
tors. The convex hull of the laser points of a surface patch
serves as a first, crude estimate of the patch's boundary. It is refined during the perceptual organization of the
surface. However, boundaries inferred from LIDAR data re-
main fuzzy because laser points carry no direct information
about boundaries. A much improved boundary estimate can
be expected from aerial imagery. Matching extracted edges
in two or more overlapping images is greatly facilitated by
the LIDAR surface and by knowledge of where boundaries
are to be expected. Thus it stands to reason to replace the
somewhat fuzzy boundaries obtained from LIDAR by 3D
edges derived from aerial imagery.
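As a minimal sketch of the crude initial boundary estimate, the convex hull of a patch's laser points can be computed with Andrew's monotone-chain algorithm. In practice the points would first be projected into the patch plane; here they are simply taken in the XY plane for illustration.

```python
def convex_hull_2d(points):
    """Crude first boundary estimate of a surface patch: the convex hull
    of its laser points, here in the XY plane. Returns the hull vertices
    as a closed counter-clockwise sequence (the first vertex is not
    repeated at the end). points: iterable of (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints
```

Interior points are discarded, which is exactly why the convex hull can only serve as a first approximation: concave boundary details of the physical surface are lost and must be recovered later, ideally from the image-derived 3D edges.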
Discontinuities are linear features in object space that sig-
nal either an abrupt change in the surface normal or an
abrupt change in the elevation. Discontinuities constitute
very valuable information, not only for automatic scene inter-
pretation but also for mundane tasks such as the generation
of orthophotos. Like boundaries, discontinuities are repre-
sented as 3D polylines. With a few exceptions, boundaries
are, in fact, discontinuities. Whenever patches are adjacent, their common boundary must be a discontinuity. Take a saddle roof, for example. If the adjacency of the two roof planes
is confirmed then their common boundary (e.g. intersec-
tion of roof planes) is a discontinuity. Since discontinuities
are richer in information than boundaries, it is desirable to
replace boundaries whenever possible by discontinuities.
Discontinuities are derived from aerial images in the same
fashion as boundaries. Moreover, some of them can be ob-
tained from LIDAR by intersecting adjacent surface patches.
As noted earlier, corresponding 3D edges from images and
3D edges from LIDAR are used for establishing a common
reference frame between images and LIDAR.
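Deriving a discontinuity from LIDAR by intersecting two adjacent planar patches reduces to a small linear-algebra exercise. A sketch, with each plane written as n·x = d (the function name is illustrative; non-parallel planes are assumed):

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersect two planes n1.x = d1 and n2.x = d2 (n1 not parallel n2).
    Returns a point on the intersection line and its unit direction,
    i.e. the 3D ridge line (discontinuity) of two adjacent patches."""
    direction = np.cross(n1, n2)          # line direction: normal to both
    # A point on the line: solve the two plane equations plus a third
    # constraint fixing the component along the line direction to zero.
    A = np.array([n1, n2, direction], dtype=float)
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction / np.linalg.norm(direction)

# Saddle roof example: two planes z = 10 - 0.5*y and z = 10 + 0.5*y,
# rewritten as (0, 0.5, 1).x = 10 and (0, -0.5, 1).x = 10.
ridge_pt, ridge_dir = plane_intersection(
    np.array([0.0, 0.5, 1.0]), 10.0,
    np.array([0.0, -0.5, 1.0]), 10.0)
```

For the saddle roof the ridge comes out at y = 0, z = 10 and runs along the x-axis, i.e. exactly the common boundary of the two roof planes described above.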
Roughness is a surface patch attribute that may be useful
in certain applications. It can be defined as the fitting error
of the surface patch with respect to the laser points. The
waveform analysis of the returning laser pulses yields additional
information about the roughness of the laser footprint and
hence the surface patch.
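The roughness attribute defined above can be sketched as the fitting error of the patch plane. This assumes planar patches written as z = ax + by + c; the RMS residual is one plausible choice of error measure, not necessarily the one used by the authors.

```python
import numpy as np

def patch_roughness(points):
    """Roughness of a surface patch: RMS residual of a least-squares
    plane z = a*x + b*y + c fitted to the patch's laser points.
    points: (N, 3) array of laser points, N >= 3, not collinear."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coef
    return float(np.sqrt(np.mean(residuals ** 2)))
```

A perfectly planar patch has roughness zero; vegetation, with laser points scattered above and below the fitted plane, would score high, which is what makes the attribute useful for discriminating roofs from trees.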
5 EXPERIMENTAL RESULTS
In this section we briefly demonstrate the feasibility of the
proposed approach to reconstruct surfaces in an urban
scene. We use data from the Ocean City test site. As de-
scribed in Csathó et al. (1998) the data set comprises
aerial photography, laser scanning data, and multispectral
and hyperspectral data. Fig. 3(a) depicts the southern part
of Ocean City, covered by a stereomodel. In the interest
of brevity we concentrate on a small sub-area containing
a large building with a complex roof structure, surrounded
by parking lots, gardens, trees, and foundation plants that
are in close proximity to the building, see Fig. 3(b). The
aerial photographs, scale 1 : 4,200, have been digitized with a pixel size of 15 µm. The laser point density is 1.2 points/m².
First we oriented the stereopair with respect to the laser
point cloud by using sensor invariant features, including
straight lines and surface patches. The intersections of ad-
jacent roof planes are examples of straight-line features ex-
tracted from the LIDAR data (Fig. 3(d)). In the aerial im-
ages, some of these roof lines are detected as edges, see
e.g. Fig. 3(e,f), and are consequently used in the orientation
process. In order to avoid image matching we oriented the
two images first individually by single photo resectioning.
For checking the internal model accuracy we performed a
relative orientation with the exterior orientation parameters
and the laser points as approximations. The average parallax error, obtained from matching several thousand back-projected laser points, was 2.6 µm. The error analysis revealed a horizontal accuracy of the sensor invariant features of 2.6 m, confirming that the LIDAR data sets (NASA's Airborne Topographic Mapper, ATM) are indeed well calibrated.
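The back-projection underlying this parallax check follows the standard collinearity equations. A minimal sketch (the sign convention, rotation parameterization, and the 153 mm focal length are assumptions for illustration, not the parameters of the actual camera):

```python
import numpy as np

def backproject(X, X0, R, f):
    """Collinearity equations: project object point X (metres) into the
    image of a camera with projection centre X0, rotation matrix R
    (object to camera frame), and focal length f (metres).
    Returns image coordinates (x, y) in metres."""
    u = R @ (X - X0)                       # point in the camera frame
    return np.array([-f * u[0] / u[2], -f * u[1] / u[2]])

# Illustration: a nadir-looking camera (R = identity) at 1000 m flying
# height with an assumed 153 mm focal length projects a ground point.
img_xy = backproject(np.array([10.0, 20.0, 0.0]),
                     np.array([0.0, 0.0, 1000.0]),
                     np.eye(3), 0.153)
```

In the check described above, many laser points are back-projected into both images in this fashion; the residual parallaxes of the matched positions then summarize the internal accuracy of the model.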
We now move on to the surface reconstruction of the sub-
area, beginning with the LIDAR data. As described in detail
in Lee (2002a), the laser point cloud is subjected to a three-
stage perceptual organization process. After having identi-
fied suitable seed patches, a region-growing segmentation
process starts with the aim to find planar surface patches.
In a second step, the spatial relationship and the surface
parameters of patches are examined to decide if they can
be merged. At the same time, boundaries are determined.
Fig. 3(c) shows the result after the first two steps. A to-
tal of 19 planar surface patches have been identified. The
white areas between some of the patches indicate small
gaps that did not satisfy the planar surface patch conditions. This confirms that the boundaries of physical surfaces, e.g. roofs, are ill-defined by laser points. The
third step of the perceptual organization process involves
the intersection of planar surface patches that satisfy the adjacency condition. The result of intersecting adjacent planes with distinctly different surface normals is depicted in Fig. 3(d).
Although extremely useful for spatial reasoning processes,
the segmentation results from LIDAR lack well-defined boundaries. Moreover, it is desirable to increase the
discrimination between surfaces that may belong to differ-
ent objects. An interesting example is patches 3 (roof), 19 (tree), and 11 (foundation plant). From the LIDAR data alone, one cannot determine whether these three patches belong to the same object.