1.1.2 Progressive densification filtering. 
The filter methods included in this group follow a progressive approach that starts from a small set of points which generates a first surface approximation. In successive iterations, new points are added to those that have previously been classified as belonging to the ground. This approach was proposed by Axelsson (2000). The procedure begins with a triangulation built from the lowest points present in the area, selected using a grid with large spacing. The rest of the ground points are progressively included through an iterative process. This iterative process is based on the analysis of each point according to the triangle in which the point is located, considering the distance from the point to the triangle and the angles formed between the point and the triangle vertices. Within this filter group, other authors such as Hansen and Vogtle (1999) use the point height with respect to its position in the corresponding triangle instead of the distance used in other methods. Sohn and Dowman (2002) add an initial descending densification before the final ascending densification.
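The acceptance test of the densification can be summarised in a few lines of code. The following Python sketch (using NumPy and SciPy's Delaunay triangulation) is only an illustration of the idea, not Axelsson's original implementation; the seed-cell size and the distance and angle thresholds are hypothetical values.

    import numpy as np
    from scipy.spatial import Delaunay

    def densify_ground(points, cell=50.0, max_distance=1.0, max_angle_deg=6.0):
        """points: (N, 3) array of x, y, z. Returns a boolean ground mask."""
        xy, z = points[:, :2], points[:, 2]

        # Seed surface: lowest point inside each coarse grid cell.
        keys = np.floor(xy / cell).astype(int)
        seeds = {}
        for i, k in enumerate(map(tuple, keys)):
            if k not in seeds or z[i] < z[seeds[k]]:
                seeds[k] = i
        ground = np.zeros(len(points), dtype=bool)
        ground[list(seeds.values())] = True

        changed = True
        while changed:
            changed = False
            g_idx = np.flatnonzero(ground)
            cand = np.flatnonzero(~ground)
            tin = Delaunay(xy[g_idx])
            simplex = tin.find_simplex(xy[cand])
            for i, t in zip(cand, simplex):
                if t < 0:
                    continue                     # candidate lies outside the TIN
                v = g_idx[tin.simplices[t]]      # indices of the triangle vertices
                p1, p2, p3 = points[v]
                n = np.cross(p2 - p1, p3 - p1)   # triangle plane normal
                if abs(n[2]) < 1e-9:
                    continue                     # degenerate (vertical) triangle
                # Vertical distance from the candidate to the triangle plane.
                z_plane = p1[2] - (n[0] * (points[i, 0] - p1[0]) +
                                   n[1] * (points[i, 1] - p1[1])) / n[2]
                dist = abs(points[i, 2] - z_plane)
                # Angles between the candidate and the three triangle vertices.
                horiz = np.maximum(np.linalg.norm(xy[i] - xy[v], axis=1), 1e-6)
                angles = np.degrees(np.arctan2(np.abs(points[i, 2] - z[v]), horiz))
                if dist <= max_distance and angles.max() <= max_angle_deg:
                    ground[i] = True
                    changed = True
        return ground

The loop rebuilds the TIN whenever new ground points are accepted, which is enough to show the principle even though production implementations densify far more efficiently.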
1.1.3 Surface based filtering. 
As with the algorithms based on progressive densification, all methods classified into this group use an initial surface reconstructed from the point cloud for the subsequent filtering of the whole dataset. In progressive densification methods, the points assigned to the ground class are increased step by step, whereas surface-based methods typically start from the hypothesis that all points belong to the ground, and the influence of the non-ground points is iteratively reduced.
One of the most popular methods of this group is the one proposed by Kraus and Pfeifer (1998), known as the robust interpolation method. This filter integrates data filtering and DTM interpolation in one single process. The purpose of the algorithm is to determine the individual weight of each point in the modeled surface that represents the ground. Finally, the points are classified as ground or non-ground points, depending on whether or not their height difference with respect to the final DTM surface exceeds a threshold value.
This method is improved in Pfeifer et al. (2001) and Briese et al. (2002). Moreover, there are proposals such as Elmqvist et al. (2001), which introduces the concepts of internal and external forces to locate the ground surface. Brovelli et al. (2004) proposed a method based on spline surface calculation and edge extraction techniques for ground point classification.
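To illustrate how the iterative reweighting of robust interpolation works, the following Python sketch replaces the linear prediction of the original authors with a simple weighted moving-average surface; the weight-function parameters (g, w, the quadratic decay and the final threshold) are placeholder values, not those of Kraus and Pfeifer (1998).

    import numpy as np
    from scipy.spatial import cKDTree

    def robust_filter(points, radius=5.0, iterations=5, g=-0.2, w=1.0, threshold=0.3):
        """points: (N, 3) array of x, y, z. Returns a boolean ground mask."""
        xy, z = points[:, :2], points[:, 2]
        weights = np.ones(len(points))
        tree = cKDTree(xy)
        neighbours = tree.query_ball_point(xy, r=radius)

        for _ in range(iterations):
            # Weighted moving-average surface (a stand-in for linear prediction).
            surface = np.array([np.average(z[idx], weights=weights[idx])
                                for idx in neighbours])
            residuals = z - surface
            # Asymmetric weight function: points at or below the surface keep
            # full weight, points well above it lose their influence.
            weights = np.where(residuals < g, 1.0,
                               np.where(residuals > g + w, 0.0,
                                        1.0 / (1.0 + (residuals - g) ** 2)))
            weights = np.maximum(weights, 1e-6)   # keep the averages defined

        # Final classification: distance to the last surface against a threshold.
        return np.abs(z - surface) <= threshold

The asymmetry of the weight function is the key design choice: vegetation and building points lie above the ground surface, so only positive residuals are penalised.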
1.1.4 Clustering and segmentation based filtering. 
This group deals with the localization of homogeneous classes (clustering), e.g. Nardinocchi et al. (2003), Jacobsen and Lohmann (2003), Vosselman and Sithole (2005) or Filin and Pfeifer (2006). This clustering can be done directly in object space, using region growing techniques. Usually the homogeneity criterion is the normal vector or its variation, resulting in flat surfaces in the first case, or smoothly varying surfaces in the second.
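As a minimal illustration of region growing with the normal vector as homogeneity criterion, the sketch below estimates normals by a local plane fit and grows segments over the k-nearest-neighbour graph. It is one possible realisation of the idea, not the implementation of any of the cited authors; the neighbourhood size k and the angular threshold are hypothetical parameters.

    import numpy as np
    from collections import deque
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=10):
        """Unit normals from a PCA plane fit over the k nearest neighbours."""
        tree = cKDTree(points)
        _, knn = tree.query(points, k=k)
        normals = np.empty_like(points)
        for i, idx in enumerate(knn):
            centred = points[idx] - points[idx].mean(axis=0)
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            n = vt[-1]                        # direction of least variance
            normals[i] = n if n[2] >= 0 else -n   # orient upwards
        return normals, knn

    def region_growing(points, max_angle_deg=10.0, k=10):
        """Group points whose normals deviate by less than max_angle_deg."""
        normals, knn = estimate_normals(points, k)
        labels = np.full(len(points), -1, dtype=int)
        cos_thr = np.cos(np.radians(max_angle_deg))
        current = 0
        for seed in range(len(points)):
            if labels[seed] != -1:
                continue
            labels[seed] = current
            queue = deque([seed])
            while queue:
                p = queue.popleft()
                for q in knn[p]:
                    if labels[q] == -1 and np.dot(normals[p], normals[q]) >= cos_thr:
                        labels[q] = current
                        queue.append(q)
            current += 1
        return labels

Using the variation of the normal (e.g. a curvature estimate) instead of the normal itself as the homogeneity measure would yield smooth rather than planar segments, as noted above.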
1.1.5 Others.
This group covers all those methodologies that do not fit in the above groups, for example the repetitive interpolation filter proposed by Kobler et al. (2007).
Last-generation aerial full-waveform laser systems are capable of storing the full waveform of each received pulse. The received waveforms represent the sum of the reflections from all the surfaces intercepted within the laser footprint on the terrain. The shape of the received waveforms has been the object of different studies aimed at improving object detection capability (Chauve et al., 2007; Lin et al., 2008).
The use of waveform parameters (pulse width and amplitude, etc.) has also been established. Such scanning systems provide additional information of interest, although Lin and Mills (2009) show that they still require a significant research effort to demonstrate their potential in different applications.
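One common way to derive echo amplitude and pulse width from a recorded waveform is to fit a sum of Gaussians, one per echo. The sketch below shows this on a synthetic two-echo waveform; the waveform, the initial guesses and the sampling are fabricated for the example and do not come from the cited studies.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_sum(t, *params):
        """Sum of Gaussians; params = (amplitude, centre, sigma) per echo."""
        out = np.zeros_like(t, dtype=float)
        for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
            out += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
        return out

    # Synthetic two-echo waveform (e.g. canopy + ground return) with noise.
    t = np.arange(0, 60, 1.0)                          # time samples (ns)
    wave = gaussian_sum(t, 80, 20, 2.5, 40, 35, 3.5)
    wave += np.random.normal(0, 2, t.size)

    # Initial guesses would normally come from local maxima of the waveform.
    guess = [70, 19, 3, 35, 34, 3]
    params, _ = curve_fit(gaussian_sum, t, wave, p0=guess)

    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        # Width reported as FWHM, a usual descriptor of the returned pulse.
        print(f"echo at {mu:.1f} ns: amplitude {a:.1f}, width {2.355 * s:.1f} ns")

The fitted amplitude and width per echo are exactly the kind of waveform parameters referred to above.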
Doneus et al. (2008) and Wagner et al. (2008) use a modified robust interpolation filter in order to include the information about the pulse width. However, tests on different terrain types, particularly those that cause problems for traditional (non-full-waveform) systems, are still under research.
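A simple way to picture how pulse width can enter the robust interpolation is to derive a prior per-point weight from the echo broadening, so that wide echoes (typically vegetation) start with a reduced influence. The snippet below is our own illustration of that idea, not the scheme of Doneus et al. (2008) or Wagner et al. (2008); the emitted pulse width and the decay constant are hypothetical values.

    import numpy as np

    def prior_weights(echo_width_ns, emitted_width_ns=4.0, decay=0.5):
        """Initial per-point weights from full-waveform echo width."""
        broadening = np.maximum(echo_width_ns - emitted_width_ns, 0.0)
        return np.exp(-decay * broadening)   # 1.0 for clean, narrow echoes

Such weights could replace the uniform initial weights of the robust interpolation sketch shown earlier.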
Certainly, LiDAR data filtering is a challenging issue. This interest has led to the emergence of a large number of different methods, which show (and prove) the great effort made in this matter by the most important geomatics research centers worldwide. However, most authors conclude that with current methods it is not possible to establish a fully automated filtering procedure for every point configuration and scene. For this reason, the integration of different methodologies in a single filtering process is of interest.
2. PROPOSED METHODOLOGY 
We present a filtering method for LiDAR data classification based on an approach that combines the use of different filtering methods in one process. The main objective of this classification is to separate the points located on the terrain (ground points) from the points located on other objects (buildings, trees, etc.). The proposed methodology is based on the combined use of a segmentation process and a progressive triangulation densification. It is implemented in four stages: a preparation phase; a segmentation process and retrieval of the existing segments using region growing techniques; a progressive triangulation densification oriented to the extraction of low-height areas; and, finally, a fusion of the above results, followed by the ground point classification and DTM generation.
2.1 Data preparation 
The proposed methodology does not work directly on the original data. First, a process oriented to the detection and elimination of the outliers (low-height points) present in the area must be carried out; next, a regular grid is generated. This grid must be a correct representation of the original point cloud data.
For the regular grid generation, two considerations must be taken into account: the grid spacing and the value assigned to each grid position. The cell size (spacing) must be chosen according to the average data density, in order to minimize both the loss of spatial resolution and the presence of no-data cells. Once the grid spacing is defined, there will be cells that contain several points and other cells that do not contain any point. Since the objective of this classification is to obtain the ground data, the lowest point of each cell is selected, using only single pulses and the last echoes of the multiple pulses.
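The grid generation just described can be sketched in a few lines. The following Python example keeps, for every cell, the lowest elevation among single returns and last echoes of multiple returns; the field layout of the input arrays and the cell size are assumptions made for the illustration.

    import numpy as np

    def minimum_grid(points, return_num, num_returns, cell=1.0):
        """points: (N, 3) x, y, z. Returns a dict {(col, row): lowest z}."""
        # Keep single returns (1 of 1) and last echoes (n of n) only.
        keep = return_num == num_returns
        xyz = points[keep]

        cols = np.floor(xyz[:, 0] / cell).astype(int)
        rows = np.floor(xyz[:, 1] / cell).astype(int)

        grid = {}
        for c, r, z in zip(cols, rows, xyz[:, 2]):
            key = (c, r)
            if key not in grid or z < grid[key]:
                grid[key] = z     # lowest elevation wins; empty cells stay absent
        return grid

Cells without any accepted point remain absent from the dictionary, which corresponds to the no-data cells mentioned above.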