SEGMENTATION OF TERRESTRIAL LASER SCANNING DATA 
BY INTEGRATING RANGE AND IMAGE CONTENT 
Shahar Barnea, Sagi Filin 
Transportation and Geo-Information, Civil and Environmental Engineering Faculty, Technion - Israel Institute of 
Technology, Haifa, 32000, Israel - {barneas, filin}@technion.ac.il 
Commission V, WG V/5 
KEY WORDS: Segmentation, Terrestrial Laser Scanner, Point Cloud, Algorithms, Object Recognition 
ABSTRACT: 
Terrestrial laser scanning is becoming a standard technology for 3D modeling of complex scenes. Laser scans contain detailed 
geometric information, but the data still require interpretation to make them usable for mapping purposes. A fundamental step in 
the transformation of the data into objects involves their segmentation into consistent units. These units should follow some 
predefined rules and result in salient regions, guided by the desire that the individual segments represent objects or object parts within 
the scene. Nonetheless, due to the scene complexity and the variety of objects in it, it is clear that segmentation using only a single 
cue will not suffice. Considering the availability of additional data sources such as the color channels, more information can be 
integrated into the data partitioning process and ultimately into the reconstruction scheme. We propose in this paper the segmentation 
of terrestrial laser scanning data by integrating range and color content and by using multiple cues. This concept raises 
questions regarding the mode of their integration and the definition of the expected outcome. We show that, while individual 
segmentations based on given cues have their own limitations, their integration provides a more coherent partitioning that has better 
potential for further processing. 
1. INTRODUCTION 
Terrestrial laser scanners have emerged in recent years as a standard 
measuring technology for detailed 3D modeling of scenes. From 
a geometrical perspective, scanners provide rich and accurate 
information of the acquired scene. Additionally, with cameras 
becoming an integral part of modern scanners, the resulting 
radiometric information provides supplementary color content. 
The combination of direct geometric details and radiometric 
content offers excellent foundations for the extraction of objects 
in an autonomous manner. 
Raw data (3D points and 2D RGB pixels) resulting from a 
single scan can reach tens of millions of elemental units. 
However, for common laser scanning applications, e.g., 
mapping, modeling, and object extraction, which require a high level 
of abstraction, this huge amount of data is hard to use. A 
fundamental step in the extraction of objects is the application of a 
mid-level processing phase involving the grouping of pixels 
containing redundant information into segments. Essentially, 
each segment should form a collection of 3D points that meets 
two conditions: first, the segment maintains geometric 
connectivity among all the points constituting it; second, the 
feature values of the connected points are similar under some 
measure. Similarity can be geometry based, radiometry based, or 
both. In addition, the basic units of each segment have to form 
a spatially continuous region in 3D. While segmentation of 
image content, and to 
some degree, of terrestrial point clouds, has been studied in the 
past, segmentation of the combined set has not been addressed 
by many so far. The motivation for pursuing this avenue is 
however clear and relates to the desire to benefit from the 
descriptive power of the rich radiometric content while being 
subject to the objects' geometry and spatial connectivity in 3D 
space. 
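To make these two conditions concrete, the following Python sketch grows segments over a scan stored as a 2D scanner grid of 3D points with one feature value per point; the array layout, names, and thresholds are illustrative assumptions rather than the procedure proposed in this paper. A neighbouring point joins a segment only if it is both spatially close in 3D and similar in feature value.

import numpy as np
from collections import deque

def grow_segments(xyz, feat, d_max=0.1, f_max=0.2):
    """Region growing on a scanner-grid point cloud (illustrative sketch).

    xyz   : (H, W, 3) array of 3D points in scanner-grid order (assumed layout)
    feat  : (H, W) array of per-point feature values (e.g., intensity)
    d_max : maximum 3D distance between neighbours (spatial connectivity)
    f_max : maximum feature difference between neighbours (similarity)
    """
    H, W = feat.shape
    labels = np.full((H, W), -1, dtype=int)
    next_label = 0
    for seed in zip(*np.nonzero(labels < 0)):      # visit every grid cell once
        if labels[seed] >= 0:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < H and 0 <= nc < W) or labels[nr, nc] >= 0:
                    continue
                # condition 1: 3D connectivity; condition 2: feature similarity
                if (np.linalg.norm(xyz[nr, nc] - xyz[r, c]) < d_max
                        and abs(feat[nr, nc] - feat[r, c]) < f_max):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
        next_label += 1
    return labels

A complete pipeline would use richer features (e.g., surface normals or color) and merge or discard very small segments; the sketch only shows how the connectivity and similarity constraints enter each grouping decision.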
In general, segmentation concerns partitioning the data into 
disjoint salient regions usually under the assumption that 
individual segments tend to represent individual objects within 
the scene. Due to its important role, segmentation has been 
studied for years, beginning with thresholding techniques (Otsu, 
1979; Huang et al., 2005) and classic "region growing" based 
methods (e.g., Pal and Pal, 1993). Other methods propose 
converting the image into a feature space, and by doing so 
transforming the segmentation problem into a classification task. 
Carson et al. (2002) propose modeling the distribution of feature 
vectors as a mixture of Gaussians, with the model parameters 
being estimated using the expectation-maximization algorithm. 
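As an illustration of this feature-space view, the short sketch below clusters per-pixel feature vectors with a Gaussian mixture whose parameters are estimated by expectation-maximization, here via scikit-learn's GaussianMixture; the choice of channels and number of components is an assumption for the example and not the setup used by Carson et al. (2002).

import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(features_img, n_components=4, random_state=0):
    """Cluster per-pixel feature vectors with an EM-fitted Gaussian mixture.

    features_img : (H, W, C) array; channels may combine color with, e.g.,
                   range or pixel position (illustrative choice).
    Returns an (H, W) map of mixture-component labels.
    """
    H, W, C = features_img.shape
    features = features_img.reshape(-1, C).astype(float)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='full',
                          random_state=random_state)
    labels = gmm.fit_predict(features)   # EM parameter estimation + assignment
    return labels.reshape(H, W)

Each mixture component then plays the role of a class in feature space, and connected groups of identically labelled pixels form candidate segments.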
Graph-based approaches have been receiving growing attention. 
Under this scheme, images are viewed as graphs in which each 
vertex represents a pixel (Shi and Malik, 2000; Felzenszwalb and 
Huttenlocher, 2004). The graph view enables an intuitive 
representation of the segmentation problem, as the similarity 
between pixels can be assigned to the edges linking them. The 
challenge is then to find sets of vertices such that each has high 
connectivity among its own vertices and low connectivity to the 
rest of the graph. As a computational model for such a 
segmentation, the normalized-cuts algorithm has been proposed 
(Shi and Malik, 2000). Sharon et al. (2000) make use of 
multigrid theory (Brandt, 1986) to solve the normalized-cut 
problem efficiently; a small spectral-clustering sketch 
illustrating this graph view is given at the end of this 
section. A comprehensive review and test of 
some of the leading segmentation algorithms is provided in 
Estrada and Jepson (2005). Recent works, e.g., Russell et al. 
(2006), Roth and Ommer (2006), Mian et al. (2006), and Alpert 
et al. (2007) demonstrated the application of segmentation 
processes to recognition tasks, showing promising results both 
in relation to object-class recognition and to correct 
segmentation of the searched objects. Applications making use 
of segmentation as part of other tasks have been reported for 
stereovision and image registration purposes (Bleyer and 
Gelautz, 2004; Klaus et al., 2006; Coiras et al., 2000).
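As a concrete illustration of the graph view discussed above, the sketch below builds a pixel-adjacency graph over a single channel (e.g., range or intensity), converts local gradients into edge similarities, and partitions the graph with spectral clustering, a relaxation in the spirit of the normalized-cut criterion; the weighting function and cluster count are illustrative assumptions, not the formulation of the cited works.

import numpy as np
from sklearn.feature_extraction import image as image_graph
from sklearn.cluster import spectral_clustering

def graph_segment(channel, n_clusters=8, beta=5.0, eps=1e-6):
    """Normalized-cut style partition of a single-channel image (sketch).

    channel : (H, W) array, e.g., a range or intensity channel.
    Neighbouring pixels are linked by edges whose weights decay with the
    local value difference, so homogeneous regions stay strongly connected.
    """
    # Sparse graph whose edge values are the local gradients
    graph = image_graph.img_to_graph(channel)
    # Turn gradients into similarities: small difference -> weight near 1
    graph.data = np.exp(-beta * graph.data / graph.data.std()) + eps
    labels = spectral_clustering(graph, n_clusters=n_clusters,
                                 eigen_solver='arpack', random_state=0)
    return labels.reshape(channel.shape)

For combined range and color data, the edge weights could instead mix geometric and radiometric differences between neighbouring pixels, in line with the edge-similarity formulation above.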