Proceedings of the XXI International Congress for Photogrammetry and Remote Sensing (Part B6b), 2008
BUILDING ROOF RECONSTRUCTION FROM LIDAR DATA AND AERIAL IMAGES 
THROUGH PLANE EXTRACTION AND COLOUR EDGE DETECTION 
Angelina Novacheva
Institute of Photogrammetry and Remote Sensing, Technical University of Dresden, 01062 Dresden, Germany
angelina.novacheva@mailbox.tu-dresden.de, http://www.tu-dresden.de/ipf/photo
KEY WORDS: Laser Scanning (LiDAR), Building Reconstruction, Aerial Photogrammetry, Data Integration, Urban Planning, Image Processing
ABSTRACT: 
In this paper a strategy for 3D reconstruction of building roofs from airborne laser scanning and aerial images is discussed. In order 
to keep it as general as possible, no predefined primitives or ground plans are required. The processing is done directly on the raw 
LiDAR point cloud, so as to avoid any loss of information due to interpolation. Computations involving local surface normals, 
which are usually rather noisy in dense datasets, are avoided. Only roofs composed of planar patches are considered. The guiding 
principle is to select thresholds that can be derived from the data itself and to make the algorithms largely independent of their exact 
values. The main purpose of image integration is the refinement of the building outline. In this context, the importance of utilising the available chromatic information is demonstrated.
1. INTRODUCTION 
3D city modelling has recently been a lively research area 
within the photogrammetric community. Buildings, as the most 
prominent features of the urban landscape, receive special 
attention. The new developments in sensor technology allow for 
increased automation in their reconstruction. The improved 
accuracy and density of airborne laser scanning (LiDAR), as 
well as the availability of simultaneously recorded height data 
and colour frame imagery have directed the attention of many 
researchers towards the extensive application of LiDAR and the 
integration of aerial images. 
Most authors, e.g. [Rottensteiner & Briese, 2003], concentrate
on the usage of raster DSMs, obtained through the interpolation 
of the laser scanning data to a regular grid. That allows for the 
application of available image processing software and fast 
segmentation methods, but has the disadvantage of decreasing 
the information content. 
Another work, focused on building reconstruction through the 
combination of image and height data [Haala, 1996], uses a 
raster DSM, obtained through matching from aerial images 
along with the image data itself. 3D intensity or DSM edges are 
compared to a building model. The data integration is mainly 
limited to the detection of regions corresponding to buildings
in the height data. 
In the following, LiDAR data with a density of about 5.3 points/m² is considered, along with colour aerial images of 10 cm ground resolution.
2. ROOF RECONSTRUCTION FROM LIDAR 
2.1 Prerequisites 
There are two important assumptions related to the proposed
algorithm. First, a rough segmentation of the point cloud should 
be available. Second, only roofs consisting of planar faces can 
be reliably reconstructed. An overview of the current 
processing pipeline is given in Figure 1. 
The segmentation could be done by pre-processing the data as described in [P. Axelsson, 1999], which does not require additional information such as multiple returns or intensity values.
In the following processing steps, this segmentation is expected to be neither error-free nor complete. However, in order to accurately extract
the separate buildings, at least three neighbouring vertices from 
each roof plane should be present in a connected component of 
segments labelled as belonging to class “buildings”. 
Further, each single building with its immediate surroundings is handled separately. The standard deviation (RMS) of the plane fit is also determined a priori and considered uniform within the data set.
2.2 Roof Segment Identification 
First, the 2.5D Delaunay triangulation of a building region is computed, which becomes the basis for the definition of the neighbourhood relations within the point cloud. For this, as well as to support further development, the data structures and the functionality of the Computational Geometry Algorithms Library [CGAL, 2006] are employed.
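As an illustration of this step, the following sketch computes a 2.5D Delaunay triangulation of a building region and derives vertex neighbourhood relations from it. It uses scipy as a stand-in for the CGAL data structures used in the actual implementation; the function names are illustrative only.

```python
# Illustrative sketch (not the paper's CGAL implementation): a 2.5D Delaunay
# triangulation of one building region and the neighbourhood relations
# derived from its edges.
import numpy as np
from scipy.spatial import Delaunay

def triangulate_building(points_xyz):
    """points_xyz: (N, 3) array of LiDAR points of one building region.
    The triangulation is computed on the planimetric (x, y) coordinates
    only, i.e. a 2.5D triangulation; z is kept as an attribute."""
    return Delaunay(points_xyz[:, :2])

def neighbourhood(tri):
    """For every vertex, collect the vertices connected to it by a
    triangulation edge - the neighbourhood relation used in later steps."""
    neighbours = [set() for _ in range(tri.points.shape[0])]
    for simplex in tri.simplices:        # each simplex is a triangle (i, j, k)
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbours[a].add(b)
    return neighbours
```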
A procedure based on the Hough transform is responsible for 
the successive detection, verification and refinement of planar 
segments. First a modified version of the Hough transform is 
performed, generating a 3D parameter space for plane detection, 
based on the perpendicular distance to the origin and the polar 
coordinates of the plane’s normal vector [A. Novacheva, 2007]. 
Special care is taken to ensure uniform sampling of the Gaussian sphere. As the parameters acquired in this way are only approximate, they are refined in the next step.
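A minimal sketch of such a Hough accumulator is given below. The (theta, phi, d) parameterisation follows the description above; the concrete bin sizes and the sin(theta)-scaled azimuth sampling used to approximate uniform coverage of the Gaussian sphere are assumptions made for illustration, not details taken from the implementation.

```python
# Hedged sketch of a Hough-style accumulator for plane detection: the
# parameter space is (theta, phi, d), with (theta, phi) the polar
# coordinates of the plane normal and d the perpendicular distance to
# the origin.  Bin sizes below are illustrative assumptions.
import numpy as np

def build_directions(n_theta=18):
    """Quasi-uniform sampling of the unit hemisphere: the number of azimuth
    steps per polar ring is scaled by sin(theta), so directions near the
    pole are not over-represented."""
    directions = []
    for theta in np.linspace(0.0, np.pi / 2.0, n_theta):
        n_phi = max(1, int(round(4 * n_theta * np.sin(theta))))
        for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
            directions.append((np.sin(theta) * np.cos(phi),
                               np.sin(theta) * np.sin(phi),
                               np.cos(theta)))
    return np.asarray(directions)

def hough_planes(points, d_step=0.2):
    """Every point votes, for every candidate normal, for the plane at
    distance d = n . p; the best-supported (normal, d) cell is returned."""
    normals = build_directions()
    d_all = points @ normals.T                   # (N, M) signed distances
    best = None
    for j, n in enumerate(normals):
        bins = np.round(d_all[:, j] / d_step).astype(int)
        values, counts = np.unique(bins, return_counts=True)
        k = counts.argmax()
        if best is None or counts[k] > best[0]:
            best = (counts[k], n, values[k] * d_step)
    votes, normal, d = best
    return normal, d        # approximate plane parameters, refined later
```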
Outliers are removed using a predefined threshold on the orthogonal distances of the measurements to the plane, derived from the standard deviation of the plane fit. Empirically, the value of 1.2 × RMS was found appropriate.
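The refinement and outlier rejection could, for instance, be organised as in the following sketch: a least-squares plane is re-fitted and points farther than 1.2 × RMS from it are discarded, iterating until the inlier set stabilises. The iteration scheme is an assumption made for illustration; only the threshold itself is specified above.

```python
# Sketch of the refinement step: least-squares plane fit followed by
# rejection of points whose orthogonal distance exceeds 1.2 x RMS.
# The alternating iteration shown here is assumed for illustration.
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: the normal is the singular
    vector associated with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = normal @ centroid
    return normal, d

def refine_plane(points, rms, factor=1.2, max_iter=10):
    """Alternate plane fitting and outlier rejection until the inlier set
    is stable.  rms is the a-priori standard deviation of the plane fit."""
    inliers = np.ones(len(points), dtype=bool)
    for _ in range(max_iter):
        normal, d = fit_plane(points[inliers])
        dist = np.abs(points @ normal - d)       # orthogonal distances
        new_inliers = dist <= factor * rms
        if np.array_equal(new_inliers, inliers):
            break
        inliers = new_inliers
    return normal, d, inliers
```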
	        