It was not easy to separate trees from buildings in the residential
areas. A few residential buildings were erroneously classified as
trees, especially if the roof consisted of many small faces.
Problems also occurred with bridges, chimneys or other objects
on top of large buildings, with parked cars, and with power
lines. Shadows in the colour orthophoto were an error source.
There are no shadows in the LIDAR intensity data, so that the
“pseudo-NDVI” was systematically wrong in these areas.
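To illustrate why shadows bias this index, the following is a minimal NumPy sketch of a pseudo-NDVI, assuming it combines the LIDAR intensity (the laser operates in the near infrared) with the red band of the orthophoto via the usual NDVI ratio; the function name, array names and the choice of the red band are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def pseudo_ndvi(lidar_intensity, red, eps=1e-6):
    """Pseudo-NDVI from LIDAR intensity (used as a near-infrared channel)
    and the red band of the orthophoto, both resampled to the same grid.
    Assumed formulation for illustration; high values indicate vegetation.
    """
    nir = lidar_intensity.astype(np.float64)
    r = red.astype(np.float64)
    return (nir - r) / (nir + r + eps)  # eps guards against division by zero
```

Under this assumed formulation, a shadow darkens the red band but leaves the LIDAR intensity unchanged, which pushes the ratio upwards and makes shadowed, sealed surfaces look like vegetation.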
Figure 1. Left: results of the initial classification. White: grassland. Light grey: bare soil. Dark grey: trees. Black:
buildings. Right: the final building label image.
In order to evaluate our method, the completeness and the
correctness (Heipke, 1997) of the results were determined both
on a per-pixel and on a per-building level. The evaluation on a
per-pixel level shows that 94% of the building pixels were
actually detected. The missed buildings were small residential
buildings, some having roofs with high reflectance in the
wavelength of the laser scanner (thus, a high pseudo-NDVI),
others having roofs consisting of many small planar faces, and
others being too small to be detected given the resolution of the
LIDAR data. For a few larger industrial buildings, some
building parts could not be detected due to errors in DTM
generation. 85% of the pixels classified as building pixels
actually correspond to a building. This number is affected by
errors at the building boundaries, and there are a few larger
false positives at bridges, at small terrain structures not covered
by vegetation, and at container parks.
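For reference, the per-pixel completeness and correctness can be computed from a detected building mask and a reference mask as in the short sketch below, using the standard definitions (completeness = TP/(TP+FN), correctness = TP/(TP+FP)); the function and array names are hypothetical.

```python
import numpy as np

def per_pixel_quality(detected, reference):
    """Per-pixel completeness and correctness (cf. Heipke, 1997).

    detected, reference: boolean arrays of equal shape; True marks a
    pixel labelled 'building'.
    """
    tp = np.count_nonzero(detected & reference)   # building pixels found
    fn = np.count_nonzero(~detected & reference)  # building pixels missed
    fp = np.count_nonzero(detected & ~reference)  # false building pixels
    completeness = tp / (tp + fn)  # 94% in the per-pixel evaluation above
    correctness = tp / (tp + fp)   # 85% in the per-pixel evaluation above
    return completeness, correctness
```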
[Graph: completeness and correctness in percent plotted against building area in m², for area classes from <10 m² to >230 m².]
Figure 2. Completeness and correctness of the detection results as a function of building size.
The results of the evaluation on a per-building basis are presented in figure 2. It shows the cumulative completeness and correctness for buildings larger than the area shown on the abscissa. Our algorithm detected 95% of all buildings larger than 50 m² and 90% of the buildings larger than 30 m². Buildings smaller than 30 m² (mostly garden sheds or garages) usually could not be detected. The correctness was 96% for buildings larger than 120 m² and 89% for all detected regions.
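A cumulative curve such as the one in figure 2 can be obtained by re-evaluating both measures while discarding all buildings below a minimum area; the sketch below assumes hypothetical per-building lists carrying an area and a matched/unmatched flag.

```python
def cumulative_quality(ref_buildings, det_regions, thresholds):
    """Cumulative completeness/correctness versus minimum building area.

    ref_buildings: list of (area_m2, was_detected) for reference buildings.
    det_regions:   list of (area_m2, is_correct) for detected regions.
    Returns one (completeness, correctness) pair per area threshold.
    """
    curve = []
    for t in thresholds:
        found = [hit for area, hit in ref_buildings if area >= t]
        valid = [ok for area, ok in det_regions if area >= t]
        completeness = sum(found) / len(found) if found else float('nan')
        correctness = sum(valid) / len(valid) if valid else float('nan')
        curve.append((completeness, correctness))
    return curve
```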
3. FUSION OF LIDAR DATA AND IMAGES FOR
ROOF PLANE DETECTION AND DELINEATION
The work flow for the geometric reconstruction of the buildings
consists of four steps (Rottensteiner and Briese, 2003):
1. Detection of roof planes based on a segmentation of the DSM and/or the image data to find planar segments which are expanded by region growing algorithms.
2. Grouping of roof planes and model generation: Co-planar roof segments are merged, and hypotheses for intersection lines and/or step edges are created based on an analysis of the neighbourhood relations. This results in a model consisting of a conglomerate of roof planes, complemented by walls.
3. Consistent estimation of the model parameters: The
parameters of the building models are improved by a
consistent estimation procedure using all the available data.
4. Model regularisation: The models are improved by
introducing hypotheses about geometric constraints between
planes, and parameter estimation is repeated.
In this section we want to show how the fusion of a LIDAR
DSM and digital aerial images contributes to an improved
detection of planar segments and an improved delineation of the
roof boundary polygons. Examples will be presented for the
building in figure 3.
Figure 3. Left: DSM of a building (grid width: 0.5 m). Right:
aerial image (ground resolution: 0.17 m). Length of
the larger wing of the building: 30 m.
3.1 Data Fusion for Roof Plane Detection
The left part of figure 4 shows the planar segments that were
extracted from the DSM in figure 3 using the iterative
segmentation scheme by Rottensteiner and Briese (2003). The
basic structure of the building has been captured, but the
segment outlines are very irregular. A proper determination of
the roof plane boundaries from these results is difficult for two
reasons. First, the segmentation errors cause errors in the
neighbourhood relations between the segments, the latter being
important prerequisites for checking whether the intersection
line between two neighbouring planes is a part of the boundary
polygons. Second, the geometric quality of step edges is poor in
LIDAR data, and in order to improve it, better approximations
are required. In cases other than the one depicted in figure 3,
some of the roof planes might actually be missing
(Rottensteiner and Briese, 2003). The results of roof plane
segmentation in figure 4 can be improved by matching the
planar segments detected in the DSM with image segments.
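To make the notion of planar segments "expanded by region growing" more concrete, the following is a generic sketch of growing a single planar segment from a seed cell of a DSM grid; it is only an illustration under a fixed residual threshold, not the iterative scheme of Rottensteiner and Briese (2003).

```python
import numpy as np
from collections import deque

def grow_planar_segment(dsm, seed, max_residual=0.15):
    """Grow one planar segment of a DSM grid by region growing.

    Neighbouring cells are accepted while their height fits the plane
    z = a*row + b*col + d estimated from the cells accepted so far,
    within max_residual (metres). Illustrative only.
    """
    rows, cols = dsm.shape
    in_segment = np.zeros(dsm.shape, dtype=bool)
    queue = deque([seed])
    cells = []     # accepted cells as (row, col, height)
    plane = None   # current plane coefficients (a, b, d)

    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols) or in_segment[r, c]:
            continue
        z = dsm[r, c]
        if plane is not None:
            a, b, d = plane
            if abs(a * r + b * c + d - z) > max_residual:
                continue              # cell does not fit the current plane
        in_segment[r, c] = True
        cells.append((r, c, z))
        if len(cells) >= 3:           # re-fit the plane by least squares
            A = np.array([[rr, cc, 1.0] for rr, cc, _ in cells])
            zz = np.array([h for _, _, h in cells])
            plane, *_ = np.linalg.lstsq(A, zz, rcond=None)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            queue.append((r + dr, c + dc))
    return in_segment
```

Repeating this from new seed cells in still-unlabelled parts of the DSM yields a complete segmentation of the kind shown on the left of figure 4.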
We extract homogeneous segments from the aerial images using
polymorphic feature extraction (Foerstner, 1994). In order to
mitigate the problem of erroneously merged regions, this is
done iteratively, in a similar way to the DSM segmentation (Rottensteiner
and Briese, 2003). We use the DSM for geo-coding the results
of image segmentation, yielding a label image in object space
for each of the aerial images involved (figure 4). The resolution
of these label images is chosen in accordance with the image