Proceedings of the XXI International Congress for Photogrammetry and Remote Sensing (Part B1-3)

INDOOR NAVIGATION BY USING SEGMENTATION OF RANGE 
IMAGES OBTAINED BY LASER SCANNERS 
B. Gorte a, K. Khoshelham a, E. Verbree b
a Optical and Laser Remote Sensing, Delft University of Technology, The Netherlands 
b GIS Technology, Delft University of Technology, The Netherlands 
{b.g.h.gorte, k.khoshelham, e.verbree}@tudelft.nl 
Commission I, ICWG-I/V 
KEY WORDS: laser scanning, localization, mapping, navigation, robotics, segmentation, tracking 
ABSTRACT: 
This paper proposes a method to reconstruct the trajectory of a moving laser scanner on the basis of indoor scans made at different 
positions. The basic idea is that a position with respect to a measured scene is completely determined by the distances to a minimum 
of three intersecting planes within that scene, and therefore displacements are defined by changes in those distances between 
successive scans. A segmentation algorithm based on the gradients of a range image is used to extract planar features from the 
measured scene. Establishing correspondences between the planes of two successive scans is formulated as a combinatorial problem, and a heuristic search method is employed to find combinations that indicate corresponding sets of planes between the two scans. The position of the scanner at each scan is computed by intersecting sets of four planes. An experiment with multiple scans
obtained by a Faro laser scanner in an indoor environment is described, and the results show the reliability and accuracy of the 
proposed indoor navigation technique. 
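To make the central idea of the abstract concrete, the following minimal sketch (an illustration under assumptions, not the authors' implementation) shows how a scanner displacement follows from changes in plane distances: for a plane with unit normal n pointing towards the scanner, a translation t changes the scanner-to-plane distance by n · t, so three or more corresponding planes with independent normals determine t. Rotation between scans is ignored here for simplicity, and all names and values are hypothetical.

import numpy as np

def displacement_from_distance_changes(normals, d_prev, d_curr):
    """Estimate the scanner translation t from n_i . t = d_curr_i - d_prev_i.

    normals : (k, 3) unit normals of corresponding planes (pointing towards
              the scanner), k >= 3 and not all coplanar
    d_prev  : (k,) scanner-to-plane distances in the previous scan
    d_curr  : (k,) scanner-to-plane distances in the current scan
    """
    A = np.asarray(normals, dtype=float)
    b = np.asarray(d_curr, dtype=float) - np.asarray(d_prev, dtype=float)
    # Least-squares solution of the stacked linear constraints; exact for
    # three independent normals, over-determined for four or more planes.
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

# Hypothetical example: two walls, the floor and the ceiling seen in both scans.
normals = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, -1)]
d_prev = [4.0, 3.0, 1.6, 1.2]
d_curr = [4.5, 4.0, 1.6, 1.2]
print(displacement_from_distance_changes(normals, d_prev, d_curr))
# -> approximately [0.5, 1.0, 0.0]

With four planes, as in the abstract's intersection of sets of four planes, the system is over-determined and is resolved here in a least-squares sense.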
1. INTRODUCTION 
Terrestrial laser scanning is a popular method for acquiring 3D 
data of the interior of buildings and closed environments such 
as tunnels. Laser range data can also be used for positioning 
and navigation through sensing features that have known 
coordinates. The combination of the above tasks, mapping and 
navigation, is known in the field of robotics as simultaneous 
localization and mapping (SLAM). The problem of simultaneous localization and mapping concerns measuring an environment with a sensor such as a laser scanner, while these measurements are processed to create a (3D) map of the environment and, at the same time, to determine the position of the sensor with respect to the resulting map.
Example applications of simultaneous 3D modelling and 
localization include autonomous vehicles, intelligent
wheelchairs and automated data acquisition within indoor 
environments. 
In recent years, simultaneous localization and mapping in both 
outdoor and indoor environments has been well researched by 
researchers in the field of robotics and automation. In indoor
environments, localization methods based on a 2D laser range 
finder have been more popular (Zhang and Ghosh, 2000; Xu et 
al., 2003; Victorino and Rives, 2004), but these methods 
provide only a 2D map of the environment. Other sensors that 
have been used for localization and mapping include sonar 
(Leonard and Durrant-Whyte, 1991) and cameras with 
structured light (Kim and Cho, 2006) as well as unstructured 
light (Diebel et al., 2004). Some of the existing approaches 
utilize multiple sensors and adopt a sensor fusion strategy. 
Dedieu et al. (2000) combine a camera and a 2D laser scanner. Newman et al. (2002) and Araneda et al. (2007) opt for the combination of a laser scanner and an odometer. Diosi and Kleeman (2004) fuse laser range data with data acquired from a sonar. Guivant et al. (2000) utilize a laser scanner with a number of beacons for localization, although natural landmarks are also taken into account to improve the localization accuracy.
In this paper, we describe a method for simultaneous 
localization and 3D modelling of an indoor environment using 
a laser scanner. The proposed method is based on extracting 
planar features from the range images, and positioning the laser 
scanner with respect to corresponding planes in successive 
scans. The localization is, therefore, entirely based on the 
measured scene, and no beacons or known landmarks are 
required. In addition, the mapping is carried out in 3D since the 
laser scanner provides a 3D point cloud of the scanned scene. 
The paper is structured in five sections. Section 2 describes the 
algorithm for the segmentation of the range images into planar 
features. In Section 3, the method for sensor localization using 
the extracted planar features is discussed. Experimental results 
of a test with multiple scans in an indoor environment are 
presented in Section 4. Conclusions appear in Section 5. 
2. RANGE IMAGE SEGMENTATION AND 
EXTRACTION OF PLANAR FEATURES 
The fundamental data structure of terrestrial laser scans is the range image (rather than the point cloud), since it is most closely linked to the scanner's operation: the measurement of the range (the distance between the scanner and a point) as a function of an azimuth (horizontal) and a zenith (vertical) angle. The range image is a discrete representation of a point cloud in a spherical coordinate system. With the scanner at the centre of the sphere, the azimuth and zenith angles determine the column and row positions in the image.
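As an illustration of this representation (a hedged sketch with assumed angular resolutions and parameter names, not the geometry of any particular scanner), the following Python fragment maps points given in the scanner's own coordinate frame to a range image in which the azimuth angle selects the column, the zenith angle selects the row, and the range is stored as the pixel value.

import numpy as np

def points_to_range_image(points, az_res_deg=0.1, zen_res_deg=0.1):
    """Map (n, 3) points in the scanner frame to a 2D range image.

    az_res_deg and zen_res_deg are assumed angular resolutions in degrees.
    """
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    rng = np.sqrt(x**2 + y**2 + z**2)                 # scanner-to-point distance
    azimuth = np.degrees(np.arctan2(y, x)) % 360.0    # horizontal angle, 0..360
    zenith = np.degrees(np.arccos(z / rng))           # vertical angle, 0..180

    n_rows = int(round(180.0 / zen_res_deg))
    n_cols = int(round(360.0 / az_res_deg))
    rows = np.minimum((zenith / zen_res_deg).astype(int), n_rows - 1)
    cols = np.minimum((azimuth / az_res_deg).astype(int), n_cols - 1)

    image = np.full((n_rows, n_cols), np.nan)         # NaN marks "no return"
    image[rows, cols] = rng                           # range as the pixel value
    return image

# Hypothetical usage: three points in front of, beside and above the scanner.
print(points_to_range_image([(5, 0, 0), (0, 3, 0), (0, 0, 2)]).shape)  # (1800, 3600)

The regular row and column grid of this representation is what makes the gradient-based segmentation mentioned in the abstract straightforward to apply.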