AUTOMATIC CLASSIFICATION OF LIDAR DATA INTO GROUND AND NON-GROUND POINTS
Yu-Chuan Chang a, Ayman F. Habib a, Dong Cheon Lee b, and Jae-Hong Yom b
a Department of Geomatics Engineering, University of Calgary, 2500 University Drive NW, Calgary, Alberta, Canada T2N 1N4 - (ycchang, habib)@geomatics.ucalgary.ca
b Department of Geo-Informatics, Sejong University, Seoul, South Korea - (dclee, jhyom)@sejong.ac.kr
Commission: WG IV/3
KEY WORDS: LiDAR, DEM/DTM Extraction, Photogrammetry, Classification, Laser Scanning, Point Cloud
ABSTRACT:
Recently, automatic object extraction from Light Detection And Ranging (LiDAR) data has attracted great attention. The level of
detail and the quality of the collected point cloud motivated the research community to investigate the possibility of automatic object
extraction from such data. Accurate prior knowledge of the terrain is usually essential for the data to be usable in further processing, such as feature extraction, and for obtaining better object detection results. In this paper, a new strategy for automatic terrain
extraction from LiDAR data is presented. The proposed strategy is based on the fact that sudden elevation changes, which usually
correspond to non-ground objects, will cause relief displacements in perspective views. The introduced relief displacements will occlude
neighboring ground points. A Digital Surface Model (DSM) is first generated by resampling the irregular LiDAR point clouds to a
regular grid. By using synthesized projection centers located above the DSM and analyzing the visibility maps in perspective images, we
can classify the DSM into non-ground and ground hypotheses. Surface roughness and inherent noise in the point cloud will lead to some
false hypotheses. Using a novel algorithm that combines plane fitting and statistical filtering to remove these false hypotheses, non-ground and ground points can be separated. The algorithm has been tested using both simulated and real datasets. The results have
demonstrated that our approach can perform well with highly complex data from an urban area. In a comparison with the results obtained
with TerraScan software, our algorithm showed the capability of producing better results while being less sensitive to the parameters used.
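As a rough illustration of the visibility analysis outlined above, the following sketch (not the implementation described in this paper) marks the cells of a gridded DSM that are hidden from a synthetic projection center by the surface itself; the function name, grid handling, and brute-force ray marching are illustrative assumptions.

```python
# Minimal occlusion (viewshed-style) sketch, assuming a DSM stored as a 2-D
# NumPy array and a single synthetic projection center above the surface.
import numpy as np

def occluded_cells(dsm, cell_size, center_xy, center_z, step=1.0):
    """Flag DSM cells whose line of sight to the projection center is blocked."""
    rows, cols = dsm.shape
    occluded = np.zeros(dsm.shape, dtype=bool)
    cx, cy = center_xy
    for r in range(rows):
        for c in range(cols):
            x, y, z = c * cell_size, r * cell_size, dsm[r, c]
            dx, dy = cx - x, cy - y
            dist = np.hypot(dx, dy)
            if dist < cell_size:              # cell (almost) under the center
                continue
            # elevation angle of the sight line from the cell to the center
            sight_angle = np.arctan2(center_z - z, dist)
            # march toward the center and test intervening surface samples
            for k in range(1, int(dist / step)):
                sx, sy = x + k * step * dx / dist, y + k * step * dy / dist
                sr, sc = int(round(sy / cell_size)), int(round(sx / cell_size))
                if not (0 <= sr < rows and 0 <= sc < cols):
                    break
                # does the surface rise above the sight line at this sample?
                if np.arctan2(dsm[sr, sc] - z, k * step) > sight_angle:
                    occluded[r, c] = True
                    break
    return occluded
```

Occlusion masks obtained from several such projection centers could then be combined into the non-ground and ground hypotheses mentioned above.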
1. INTRODUCTION
LiDAR technology has been demonstrated in recent years to be
a prominent technique for the acquisition of highly dense and
accurate information about physical surfaces. As LiDAR is a non-selective mapping method, the acquired data consists of a point cloud that includes both bare-ground points and non-ground objects such as
trees and buildings. Methods of removing non-ground points,
also referred to as filtering techniques, have been the focus of
many researchers. Many applications, for example, the
generation of contour lines for topographic maps, road
engineering projects, and the delineation of flooding zones,
among others, require the generation of a DTM from the ground
points. A DTM can be produced by resampling those extracted
ground points from LiDAR data. The filtering step is also
essential for the data to be usable in further processing, such as
in feature extraction. Building detection and reconstruction
procedures for the generation of 3D city models can be
facilitated by first detecting the non-ground points. The feature
extraction and modeling procedures are also beneficial to
applications such as change detection and database updating.
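To make the resampling step mentioned above concrete, the following is a minimal sketch, assuming SciPy is available, of how classified ground points could be interpolated onto a regular grid to produce a DTM; the grid spacing, linear interpolation, and function name are illustrative choices rather than part of the method presented here.

```python
# Hedged sketch: turn classified ground points into a gridded DTM.
import numpy as np
from scipy.interpolate import griddata

def ground_points_to_dtm(ground_xyz, cell_size=1.0):
    """Interpolate an (N, 3) array of ground points to a regular grid."""
    x, y, z = ground_xyz[:, 0], ground_xyz[:, 1], ground_xyz[:, 2]
    xi = np.arange(x.min(), x.max(), cell_size)   # grid columns (easting)
    yi = np.arange(y.min(), y.max(), cell_size)   # grid rows (northing)
    grid_x, grid_y = np.meshgrid(xi, yi)
    dtm = griddata((x, y), z, (grid_x, grid_y), method='linear')
    return grid_x, grid_y, dtm
```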
To satisfy the needs of these applications, the research
community has been developing several techniques for the
classification of LiDAR data. The first group of methods that
can be identified in the literature is based on mathematical
morphology. A method related to the erosion operator was
proposed by Vosselman (2000). In this method, the acceptable
height difference between two points is explicitly defined as a
function of the distance between the points. Morphological
filters have some drawbacks when certain features, such as large
buildings and dense forest canopy, are involved. In such cases, a window size that is too small might contain only building points, causing them to be classified as ground. Conversely, a window size that is too large can chop off hills that have a significant slope. Strategies such as the use of multiple window sizes, as
proposed by Kilian et al. (1996), and the one developed by
Zhang et al. (2003), which gradually increases the window size,
might help in overcoming these problems. However, the success
of these types of filters is strongly dependent on the selection of
the discriminant function parameters. The second group of
filters is based on the progressive densification of a TIN
(Triangulated Irregular Network). In Axelsson (2000), ground
points are classified by iteratively building a triangulated
surface model. The third group of methods is based on linear
prediction and hierarchic robust interpolation (Kraus and Pfeifer,
2001). The approach is based on a surface model defined for the
entire point set that iteratively approaches the ground surface.
However, these two groups of methodologies cannot handle surfaces containing low and complex objects very well, as reported by
Sithole and Vosselman (2004).
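As a concrete reading of the distance-dependent height threshold used by the first group of filters, the sketch below marks a point as non-ground whenever it rises above a neighbour by more than a linear function of their horizontal distance. The parameter values, the linear discriminant function, and the brute-force neighbour search are illustrative simplifications, not Vosselman's (2000) actual formulation.

```python
# Simplified slope-based filter: dh_max(d) = slope * d within a search radius.
import numpy as np

def slope_filter(points, slope=0.3, radius=5.0):
    """Return a boolean ground mask for an (N, 3) array of LiDAR points."""
    xy, z = points[:, :2], points[:, 2]
    ground = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        # horizontal distances from point i to all other points
        d = np.hypot(xy[:, 0] - xy[i, 0], xy[:, 1] - xy[i, 1])
        nearby = (d > 0) & (d <= radius)
        # reject the point if it exceeds the allowed height above any neighbour
        if np.any(z[i] - z[nearby] > slope * d[nearby]):
            ground[i] = False
    return ground
```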
Approaches that rely on segmentation are also found in the
literature. Jacobsen and Lohmann (2003) developed a method
that first segments the data and then classifies the segments as
either ground segments or off-terrain segments, based on
neighborhood height differences. When dealing with large areas,
segmentation techniques require computationally expensive processing. Other filtering algorithms are also described by
Elmqvist et al. (2001), and Brovelli et al. (2002), among others.
A detailed comparison of some filters is provided in Sithole and
Vosselman (2004). The experimental study conducted shows
that in flat and uncomplicated landscapes, all the algorithms
give satisfactory results. However, significant differences in the
accuracy of these methods appear when landscapes containing