URBAN MODELING BASED ON SEGMENTATION AND REGULARIZATION
OF AIRBORNE LIDAR POINT CLOUDS
Aparajithan Sampath, Jie Shan
Geomatics Engineering, School of Civil Engineering, Purdue University
550 Stadium Mall Drive, West Lafayette, IN 47907-2051, USA, (asampath, jshan)@ecn.purdue.edu
Commission III, WG III/3
Key words: Lidar, Building reconstruction, Urban area modelling, 3D GIS, Feature extraction
ABSTRACT:
This paper presents an approach to process raw lidar 3-D point clouds over urban areas and extract terrain, buildings and other urban
features. In the initial step, non-ground points are separated from ground points using a one-dimensional filtering process based on
the slope between two consecutive points in the point cloud and the terrain elevation in the vicinity of the points. In the next step,
the non-ground point dataset is processed to segment individual buildings. This is accomplished by using a 3-D region growing
approach. At the end of this step, each lidar point is attributed to a building. The first step towards building reconstruction is to
obtain an approximate footprint of the building, which is accomplished by extracting the points on the building boundary with a
modified convex hull algorithm. Once the footprint boundary points are found, their edges are regularized by using a least squares
model to form the final building shape. The mathematical formulation of 3-D region growing and boundary regularization is presented.
Test results of reconstructed buildings over complex urban areas are reported.
1. INTRODUCTION
Lidar (Light Detection And Ranging) records dense 3-D point clouds over the reflective terrain surface. Combined with
GPS (Global Positioning System) and IMU (Inertial Measurement Unit) data, these point clouds are georeferenced. Therefore,
they can be used to model urban environments, create city models, and obtain 3-D topographic surface information
(Ackermann, 1999; Baltsavias, 1999). The first step in generating city models is to remove all the points that do not
represent buildings from the dataset. Since it is easier to mathematically define the ground than other features, lidar returns
from the ground are first separated from non-ground features. In this process, we can also generate bald ground DEMs (Axelsson,
1999; Sampath and Shan, 2003; Schickler and Thorpe, 2001; Sithole, 2001; Vosselman, 2000; Vosselman and Maas, 2001).
The remaining work is then to segment each individual building from the building class and reconstruct it, one by one,
based on certain building models.
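As a rough illustration of the slope-based separation summarized in the abstract, a minimal one-dimensional filter is sketched below. The thresholds, the running terrain estimate and the assumption that the points are ordered along a scan line are made only for this example; they do not reproduce the formulation of Sampath and Shan (2003).

```python
import numpy as np

def label_nonground(points, slope_thresh=0.3, elev_margin=1.0):
    """Flag non-ground points along one profile with a 1-D slope test.

    points: (N, 3) array ordered along a scan line (x, y, z).
    slope_thresh, elev_margin: assumed, illustrative thresholds.
    Returns a boolean array, True where a point is labelled non-ground.
    """
    points = np.asarray(points, dtype=float)
    nonground = np.zeros(len(points), dtype=bool)
    # crude running estimate of the local terrain elevation
    terrain = np.minimum.accumulate(points[:, 2])
    for i in range(1, len(points)):
        dxy = np.linalg.norm(points[i, :2] - points[i - 1, :2])
        dz = points[i, 2] - points[i - 1, 2]
        slope = dz / dxy if dxy > 0 else (np.inf if dz > 0 else 0.0)
        # A point is flagged when it rises steeply from its predecessor
        # or sits well above the terrain seen so far along the profile.
        if slope > slope_thresh or points[i, 2] - terrain[i] > elev_margin:
            nonground[i] = True
    return nonground
```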
In this study, we present an approach to segment and extract buildings from raw lidar point data. Most approaches to feature
extraction from lidar data make use of the difference between a DEM generated from the raw lidar dataset and one generated
after the non-ground points have been filtered out; this difference yields the footprints of buildings. The problem of
converting these footprints to vectors is addressed by assuming two orthogonal dominant directions for each building
and then constraining the building edges to lie along those directions (Al-Harthy and Bethel, 2002). Rottensteiner and
Briese (2002) apply a morphological filter over the building footprints to get a binary image of planar regions. A
connected component analysis then reduces these regions to smaller buildings. Another approach to this problem is
to use the detected building points and the surrounding ground points to interpolate the building boundaries, which is
achieved after determining the internal 3-D breaklines of the buildings (Morgan and Habib, 2002).
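To make the DEM-differencing idea concrete, a minimal sketch is given below. The gridded inputs, the 2.5 m height threshold and the use of SciPy's connected-component labelling are illustrative assumptions, not details of any of the cited methods.

```python
import numpy as np
from scipy import ndimage

def building_footprints(dsm, dtm, height_thresh=2.5):
    """Detect candidate building footprints from two gridded surfaces.

    dsm: raster interpolated from the raw lidar points.
    dtm: raster interpolated after the non-ground points are filtered out.
    height_thresh: assumed minimum building height in metres (illustrative).
    Returns a label image with one integer per connected footprint region.
    """
    ndsm = np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float)
    mask = ndsm > height_thresh                 # candidate building cells
    labels, num_regions = ndimage.label(mask)   # connected component analysis
    return labels, num_regions
```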
In this paper, we discuss the process by which we detect, segment, regularize and reconstruct building shapes from raw
lidar data. After briefly discussing the initial processing stage, where the raw point dataset is classified into two classes,
buildings and non-buildings (mainly bald ground), we present a region growing algorithm to segment individual buildings
from the building point dataset. Next, we propose a method to select the building boundary points based on a modified
convex hull approach. These points are then used to determine parametric equations for the lines that represent the
building edges.
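For illustration, a minimal sketch of point-based region growing is given below. The neighbourhood radius, the minimum cluster size and the k-d tree search are assumptions made for this example and do not reproduce the exact formulation presented later in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def segment_buildings(points, radius=2.0, min_points=20):
    """Group non-ground points into building clusters by region growing.

    points: (N, 3) array of non-ground lidar returns.
    radius, min_points: assumed, illustrative parameters.
    Returns integer labels, one per point; -1 marks discarded small clusters.
    """
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    visited = np.zeros(len(points), dtype=bool)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if visited[seed]:
            continue
        # grow a region from the seed through all points within `radius`
        stack, members = [seed], []
        visited[seed] = True
        while stack:
            idx = stack.pop()
            members.append(idx)
            for nbr in tree.query_ball_point(points[idx], r=radius):
                if not visited[nbr]:
                    visited[nbr] = True
                    stack.append(nbr)
        if len(members) >= min_points:      # keep only sizeable clusters
            labels[members] = current
            current += 1
    return labels
```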
Section 5 discusses the steps taken to reconstruct the buildings in 3-D. In the first step, the regularized boundary of
the selected building is obtained using a least squares model. This gives us the final building footprint. For flat roof
structures, the third dimension is captured by slicing the buildings at various elevation levels. Each sliced segment
can, in turn, be regularized to obtain the roof model. Buildings with sloping roofs are more complicated; such
cases are handled by segmenting out the points that belong to each face of the roof and then regularizing them.
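As an illustration of the elevation-slicing step for flat roofs, the following minimal sketch partitions a segmented building's points into horizontal slices; the slice thickness is an assumed parameter, not a value used in this study, and each resulting group could then be regularized into its own footprint.

```python
import numpy as np

def slice_by_elevation(points, slice_height=0.5):
    """Partition one building's points into horizontal elevation slices.

    points: (N, 3) array for a single segmented building.
    slice_height: assumed slice thickness in metres (illustrative).
    Returns a dict mapping slice index to the points falling in that slice.
    """
    points = np.asarray(points, dtype=float)
    z = points[:, 2]
    bins = ((z - z.min()) / slice_height).astype(int)
    return {b: points[bins == b] for b in np.unique(bins)}
```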
Buildings over Baltimore, MD, USA and Toronto, Canada are used in our study. The quality of the reconstructed buildings is
evaluated by comparing the lidar-generated results with ortho-images.
2. SEPARATION OF BUILDING FROM GROUND
A few airborne lidar datasets are used in this study. They
include downtown Baltimore, Maryland and Toronto. Their
average point density is one point per 5.5 square meters and
2.25 square meters, respectively. Details about the datasets can
be found in (Shan and Sampath, 2004).
An algorithm to separate ground and building points from raw
dataset was suggested in our previous work (Sampath and
Shan, 2003). The proposed labeling approach is based on slope