International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B3, 2012
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia
1.4 Related work
The building reconstruction problem has been studied for many
years (Weidner and Foerstner, 1995; Rottensteiner and Briese,
2002; Dorninger and Pfeifer, 2008; Haala and Kada, 2010),
yet in current research it remains an open issue. Most of the
building detection techniques proposed in recent years have
relied on aerial images. The dominance of image-based
techniques can be explained by the insufficient accuracy
and the large point spacing of past ALS systems.
Improvements in laser scanning technology have enabled the
acquisition of very dense 3D point clouds and thus triggered the
development of numerous methods that use LIDAR data.
Reconstruction of building outlines comprises three parts:
building detection, followed by contour tracing and
regularization. Numerous approaches to building detection
transform ALS data into a planar grid structure
(Alharty and Bethel, 2002; Rottensteiner and Briese, 2002). This
facilitates computation, since extraction of 2D features is more
accurate from 2D inputs than from 3D data (Kaartinen and Hyyppa,
2006). In such methods buildings are usually identified from a
normalized digital surface model (nDSM), computed as the
difference between the digital surface model (DSM) and the
digital terrain model (DTM) (Weidner and Foerstner, 1995).
Although these methods provide good results, the building
outlines are strongly influenced by the poor resolution of the
interpolated DSM. Determining the outline directly from ALS
data has the potential to deliver better accuracy of the
reconstructed objects. An example of such an approach is
presented in Sampath and Shan (2007), where the data is
separated into building and non-building points by a
slope-based algorithm, and the detected building points are then
segmented into individual clusters. In Matikainen et al. (2010)
the building detection method is based on region-based
segmentation and classification of laser points.
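The nDSM-based detection described above can be illustrated with a minimal sketch: subtract the terrain model from the surface model and threshold the result to flag elevated pixels. The function name and the 2.5 m threshold are illustrative assumptions, not taken from any of the cited methods.

```python
import numpy as np

def detect_building_mask(dsm, dtm, height_threshold=2.5):
    """Threshold a normalized DSM (nDSM = DSM - DTM) to flag elevated pixels.

    The 2.5 m threshold is an assumed value; real pipelines tune it and
    further separate buildings from vegetation."""
    ndsm = dsm - dtm
    return ndsm > height_threshold

# toy 4x4 grids (metres): flat terrain at 100 m, one 8 m-high block
dtm = np.full((4, 4), 100.0)
dsm = dtm.copy()
dsm[1:3, 1:3] += 8.0
mask = detect_building_mask(dsm, dtm)
print(mask.sum())  # 4 elevated pixels
```

In practice the binary mask would still be cleaned up (e.g. by area filtering and vegetation removal) before outline tracing, as the main text notes.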
The final part of outline reconstruction, building boundary
tracing and regularization, can also be approached in various
ways. Vosselman (1999) reconstructs buildings by applying the
Hough transform to dense height data. Regularization
of the building outline is performed using the main orientation of the
building. The orientation is determined by the direction of the
ridge line, computed as the horizontal intersection between roof
faces. Sampath and Shan (2007) propose a procedure that
utilizes the Jarvis (gift-wrapping) algorithm; the contour is regularized in a
hierarchical least squares adjustment. In order to delineate
building footprints, Neidhart and Sester (2008) perform a
Delaunay triangulation. They propose three versions of outline
simplification: a modified Douglas-Peucker algorithm, a graph-
based approach, and a RANSAC algorithm.
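The Douglas-Peucker simplification mentioned above can be sketched in a few lines: recursively keep the interior point farthest from the chord between the current endpoints whenever it exceeds a tolerance. This is a generic textbook version, not the modified variant of Neidhart and Sester (2008).

```python
import math

def douglas_peucker(points, eps):
    """Recursively simplify a polyline, keeping interior points farther
    than eps from the chord between the current endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # perpendicular distance of each interior point to the chord
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right  # drop the duplicated split point

outline = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.03),
           (4, 2.0), (5, 2.02), (6, 2.0)]
print(douglas_peucker(outline, 0.1))
# → [(0, 0), (3, 0.03), (4, 2.0), (6, 2.0)]
```

With a 0.1 tolerance the near-collinear jitter is removed while the step between the two wall segments is preserved.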
A review of existing approaches for building boundary
reconstruction is given in Vosselman and Maas (2010). An
evaluation of different algorithms for the detection of building
footprints and their changes is given in Champion (2009).
2. OUTLINE RECONSTRUCTION METHOD
The workflow of building outline extraction is presented in
Fig. 1. The method consists of three main steps: building
detection based on address points, identification of the initial
boundary, and regularization of the contour. The input for the
reconstruction algorithm consists of a data set acquired by
airborne LIDAR sensors and a list of building address points.
Figure 1. The workflow of outline extraction: LIDAR data and address points → height image → buildings detection → buildings outlining (pixels) → set of LIDAR points that make up the outlines → straight line detection with RANSAC → initial boundaries → contour refinement → building outlines.
2.1 Derivation of building footprints
In the pre-processing step a raster height image is interpolated
from the original data; the image resolution depends on the
density of points. This step simplifies neighbourhood
relations within the data and thus improves the runtime of the
algorithm. Building detection is carried out by region
growing, using as seeds the pixels associated with
consecutive address points. During this process we obtain not
only a building mask, as is often done in other
approaches, but also the groups of pixels constituting individual
objects. The use of the initial information about building
positions significantly improves the time performance of
building detection. It also prevents classification errors in
which compact groups of trees are assigned to buildings. As
output, the method provides a set of separated building clusters
composed of adjacent pixels. At this stage, a building cluster
may contain pixels that belong to trees adjacent to the building
or overhanging it. In order to remove such outliers, the detected
pixels are mapped onto the original point cloud, which is then
segmented according to local normal vectors and
connectivity.
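The seeded region growing described above can be sketched as a breadth-first flood fill over the height image: starting from the pixel at an address point, accept 4-neighbours that are sufficiently elevated and whose height changes smoothly. The function name and the threshold values (2 m minimum height, 0.5 m height step) are assumptions for illustration, not parameters from the paper.

```python
from collections import deque
import numpy as np

def grow_building(height_img, seed, ground_level, min_height=2.0, max_step=0.5):
    """Grow a building cluster from an address-point seed pixel: accept
    4-neighbours that are elevated above ground and whose height differs
    only slightly from the current pixel (assumed thresholds)."""
    h, w = height_img.shape
    cluster, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in cluster
                    and height_img[nr, nc] - ground_level >= min_height
                    and abs(height_img[nr, nc] - height_img[r, c]) <= max_step):
                cluster.add((nr, nc))
                frontier.append((nr, nc))
    return cluster

img = np.zeros((5, 5))
img[1:4, 1:4] = 6.0          # flat-roofed building, 6 m above ground
print(len(grow_building(img, (2, 2), ground_level=0.0)))  # 9
```

Because each address point seeds its own growth, the result is a cluster per building rather than a single undifferentiated mask, matching the behaviour described in the text.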
2.2 Initial boundary extraction
Once building regions are extracted from the image, they can be
used to determine the bands of pixels that constitute building
boundaries (cf. Fig. 2b). The boundary pixels are detected by
connected components analysis. Since the computation is executed
on the resampled image, the precision of the extracted
building boundaries is deteriorated by the interpolation.
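One simple way to realize the boundary-band extraction is to keep every cluster pixel that has at least one 4-neighbour outside the cluster; this is a minimal sketch of the idea, not necessarily the exact connected-components procedure used by the authors.

```python
def boundary_pixels(cluster):
    """Keep cluster pixels that touch at least one non-cluster 4-neighbour."""
    boundary = set()
    for r, c in cluster:
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nb not in cluster:
                boundary.add((r, c))
                break
    return boundary

cluster = {(r, c) for r in range(3) for c in range(4)}  # 3x4 pixel block
print(len(boundary_pixels(cluster)))  # 10 (all but the 1x2 interior)
```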
In order to maintain the level of detail provided by laser
scanning, the detected pixels are mapped back onto the original
LIDAR point cloud. Because one pixel may contain more than one
point, the mapping process delivers a set of 2D points projected
onto the horizontal plane.
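Figure 1 names straight line detection with RANSAC as the step that turns the boundary points into initial line segments. A minimal sketch of RANSAC line fitting on 2D boundary points is given below; the function name, iteration count, and inlier tolerance are assumptions for illustration, not the paper's parameters.

```python
import random
import math

def ransac_line(points, iters=200, tol=0.15, seed=0):
    """Fit one dominant line a*x + b*y + c = 0 (unit normal) to 2D points
    by repeatedly sampling point pairs and keeping the largest inlier set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y1 - y2, x2 - x1          # normal of the line through the pair
        norm = math.hypot(a, b)
        if norm == 0:
            continue                      # degenerate sample: identical points
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# noisy horizontal wall segment plus two off-line points
pts = [(float(x), 0.01 * (x % 3)) for x in range(10)] + [(3.0, 4.0), (7.0, 5.0)]
print(len(ransac_line(pts)))  # 10
```

In a full pipeline this step would be applied repeatedly, removing each detected segment's inliers and refitting, so that every wall of the boundary yields one line for the subsequent contour refinement.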