ISPRS Commission III, Vol. 34, Part 3A "Photogrammetric Computer Vision", Graz, 2002
FILTERING STRATEGY: WORKING TOWARDS RELIABILITY
George Sithole
Department of Geodesy, Faculty of Civil Engineering and Geosciences
Delft University of Technology
The Netherlands
g.sithole@citg.tudelft.nl
Commission III, Working Group 3
KEY WORDS: laser scanning, LIDAR, DEM/DTM, classification, filtering
ABSTRACT
The filtering of a laser scanner point-cloud to abstract the bald earth has been an ongoing research topic in laser altimetry. To date a
number of filters have been devised for extracting DEMs from laser point-clouds. The measure of the performance of these filters is
often based on tests against some reference data (rms, ratio of misclassifications vs. correct classifications, etc.) obtained by
photogrammetric measurement or other means. However, measures based on such tests are only global indicators of how a filter
may perform. Therefore, when a filter is applied in a real-life application, such measures do not make it possible to say with
certainty how well the filter has performed. This uncertainty suggests that a method be devised to identify in a point-cloud those
regions where a filter may have difficulty classifying points. Once this is done, other sources of information can be gathered to
clarify the status of points in these (difficult) regions. This fits in with the thinking that external sources of data, such as imagery
and maps, have to be used in the filtering of laser scanner point-clouds. However, devising such a method requires that the reasons
for the misclassification of
points be first identified. When filtering a point-cloud based on spatial information alone, misclassification arises from three sources,
(1) the nature and arrangement of objects and the terrain in a landscape (e.g., terrain, buildings, vegetation, etc.), (2) the
characteristics of the data (resolution, outliers, data gaps, etc.) and (3) the implementation of filters. In this paper, the first two
reasons for misclassification are outlined because they are common to all filtering problems, and an initial attempt at developing a
method for identifying regions in a point-cloud where a filter may have trouble in classifying points is described.
1 INTRODUCTION
The filtering of laser scanner point-clouds to abstract the bald
earth has been an ongoing research topic in laser altimetry.
Filtering in the context of this paper is understood to mean the
removal from a laser scanner point-cloud of those points that do
not belong to the terrain. To date various filters have been
developed, e.g., (Kraus & Pfeifer 1998), (Axelsson 1999),
(Petzold et al. 1999), (Elmqvist 2001), (Vosselman & Maas
2001), (Haugerud et al. 2001). In addition to these there are
proprietary filters (used in practice) whose algorithms are
not known. The performance of filters is currently measured by
comparison of filter results with some reference data (rms, ratio
of misclassifications vs. correct classifications, etc.; see the
sketch after the list below). However,
there are three problems with performance measures based on
reference data:
* They are global indicators. The performance of a filter is
reduced to a single measure. Unfortunately, experience
has shown that this masks localized filtering errors (which
can sometimes be large, although few in number).
* The performance measures are indicative of filter
performance only in areas that have characteristics similar
to those in the reference data (used for deriving
performance measures). For example, if performance
measures are derived from reference data set in a rural
landscape, then in practice those measures cannot be
applied to gauge filter performance in urban areas.
* Reference data is usually only available for areas that can
be measured photogrammetrically or by conventional
survey. For example, in reference data, areas covered by
dense vegetation are usually not sampled.
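As an illustration of how global such measures are, the following is a minimal sketch (not from the paper) of how they can be computed when per-point reference labels are available; the function name and the boolean-array representation are illustrative assumptions.

```python
import numpy as np

def global_measures(pred_is_terrain, ref_is_terrain):
    """Global performance measures of a filter against reference labels.

    pred_is_terrain : boolean array, filter output (True = terrain).
    ref_is_terrain  : boolean array, reference classification.
    Returns the Type I rate (terrain rejected as object) and the
    Type II rate (object accepted as terrain).
    """
    pred = np.asarray(pred_is_terrain, dtype=bool)
    ref = np.asarray(ref_is_terrain, dtype=bool)
    type_i = (ref & ~pred).sum() / max(ref.sum(), 1)      # terrain -> object
    type_ii = (~ref & pred).sum() / max((~ref).sum(), 1)  # object -> terrain
    return type_i, type_ii
```

Because each rate is a single number over the whole dataset, a handful of large, localized errors can be hidden by a majority of correctly classified points, which is precisely the first problem listed above.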
Nonetheless, filters are developed with the expectation that they
will succeed. This expectation is founded on the knowledge that
in most cases the characteristics of the terrain (e.g., slope,
roughness, form, curvature, etc.) are bounded and that surfaces
that fall outside these bounds are not terrain. Figure 1 depicts the
current approach to filtering. From a monotonic function (based
on a test point and its neighborhood) a decision measure is
derived. Based on a pre-selected threshold for the decision
measure the test point is classified as either terrain or object. As
already stated this strategy works in most cases. However, it
also leads to some misclassifications. Therefore, improving
filtering strategy requires that the reason for misclassification be
known. Two types of misclassification are possible: Type I
errors and Type II errors, Type I errors being the
misclassification of terrain points as object points and Type II
errors being the misclassification of object points as terrain
points. This is also shown in Figure 1. Within a certain band
either side of the threshold, a point has a likelihood of having
been misclassified (hence, the question marks in Figure 1).
Where and what type of misclassification has occurred can be
determined using reference data. However, in normal
practice this reference data will not exist. Therefore, the
challenge is to detect misclassification (or the likelihood of it)
where no reference data exists, or to be more precise, to devise
an alternative means of checking the possibility of
misclassification.
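The decision rule just described, together with the uncertainty band around the threshold, can be sketched as follows. This is a minimal illustration, not any particular published filter; the names, and the assumption that terrain points yield decision measures below the threshold, are illustrative (some filters use the opposite sense).

```python
import numpy as np

def classify(decision_measure, threshold, band):
    """Threshold a monotonic decision measure (cf. Figure 1).

    decision_measure : array of per-point measures derived from a test
                       point and its neighborhood.
    threshold        : pre-selected cut-off between terrain and object.
    band             : half-width of the zone, either side of the
                       threshold, in which misclassification is likely.
    Returns (is_terrain, is_uncertain) boolean arrays.
    """
    m = np.asarray(decision_measure, dtype=float)
    is_terrain = m <= threshold                    # below threshold: terrain
    is_uncertain = np.abs(m - threshold) <= band   # near threshold: check further
    return is_terrain, is_uncertain
```

Points flagged as uncertain are exactly the candidates for Type I errors (terrain just above the threshold) and Type II errors (objects just below it), and mark the regions where external sources of information could be brought in.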
When reference data does not exist, an operator has to manually
check the results of the filtering. However, an operator using
just the point-cloud cannot achieve a 100% correct
classification. There