accomplish this task, we proposed a hybrid strategy that
integrates a context-guided progressive method with 3D
segmentation-based classification. Experiments demonstrated
that combining these two approaches (Fig. 1) makes our vehicle
extraction from LiDAR data of urban areas more capable and
robust, even in complex scenes.
Figure 1. Integrated scheme for vehicle extraction.
2.1 Context-guided extraction
This extraction strategy comprises knowledge about how and
when certain parts of the vehicle model and the context model of
traffic-related objects in urban areas are optimally exploited,
thereby forming the basic control mechanism of the extraction
process. In contrast to other common approaches to LiDAR data
analysis, it neither uses the reflected intensity for extraction nor
combines multiple data sources acquired simultaneously. The
philosophy is to exploit the geometric information of the ALS data
as much as possible, based primarily on the context relation that
vehicles are generally located on the ground surface. Moreover,
the approach can be viewed as a processing strategy that
progressively reduces the “region of interest”. It is subdivided
into four steps: ground-level separation, geo-tiling and filling,
vehicle-top detection and selection, and segmentation, which are
elaborated in detail in Yao et al. (2008a). An exemplary result on
one co-registered dataset is shown in Fig. 2.
Figure 2. Vehicle extraction result, shown as white outlined
contours, for test data I using the context-guided method.
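To illustrate this progressive narrowing of the region of interest, the following minimal Python sketch mimics the first three steps on an (N, 3) point array. The tile size and the height band are hypothetical values, not the parameters used in Yao et al. (2008a), and the final single-vehicle segmentation step is omitted.

```python
import numpy as np

def context_guided_roi(points, tile_size=30.0, min_h=0.3, max_h=3.0):
    """Progressive ROI reduction on an (N, 3) ALS point array.

    Illustrative thresholds only; the actual parameters of the four
    steps are given in Yao et al. (2008a).
    """
    xy, z = points[:, :2], points[:, 2]
    candidates = []
    # Geo-tiling: process the scene in square tiles so that the local
    # ground height can be approximated per tile.
    ij = np.floor(xy / tile_size).astype(int)
    for key in {tuple(k) for k in ij}:
        mask = np.all(ij == key, axis=1)
        # Ground-level separation: crude local ground estimate as the
        # lowest point in the tile.
        ground = z[mask].min()
        # Vehicle-top detection/selection: keep points within a
        # vehicle-like height band above the local ground.
        band = mask & (z > ground + min_h) & (z < ground + max_h)
        candidates.append(points[band])
    # Segmentation into single-vehicle clusters would follow here.
    return np.vstack(candidates) if candidates else np.empty((0, 3))
```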
2.2 3D segmentation-based classification
Since many vehicles in modern cities travel on elevated roads
such as flyovers or bridges, the context relation assumed by the
method in Section 2.1 does not always hold. Therefore, we
introduced a 3D object-based classification strategy for extracting
semantic objects directly from the LiDAR point cloud of urban
areas. It can either extract the two object classes vehicle and
elevated road simultaneously, or extract only the elevated road,
on which vehicles can then be detected by treating the elevated
road as ground. The ALS data are first subjected to a segmentation
process using the nonparametric clustering tool mean shift (MS).
The obtained results are usually not able to give a meaningful
description of the distinct natural and man-made objects in
complex scenes, even though MS performs a genuine clustering
directly on the 3D point cloud to discover its various geometric
modes. Hence, the initial point segments have to be regrouped
under global optimization criteria to generate more consistent
subsets of the laser data. To this end, a modified normalized-cuts
(Ncuts) algorithm is applied in the sense of perceptual grouping.
Finally, based on features derived from the spatially separated
point clusters that potentially correspond to semantic object
entities, a classification is performed to extract the flyover and the
vehicles (Yao et al., 2009). Applying this approach to a one-path
dataset yielded the result shown in Fig. 3.
Figure 3. Vehicle (green) and flyover extraction results for test
data II using 3D segmentation-based classification.
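The initial segmentation step can be illustrated with an off-the-shelf mean-shift implementation. The sketch below uses scikit-learn with a guessed kernel bandwidth; the Ncuts regrouping and the feature-based classification described above are not shown.

```python
import numpy as np
from sklearn.cluster import MeanShift

def initial_segments(points, bandwidth=2.0):
    """Mean-shift segmentation of an (N, 3) ALS point cloud.

    Illustrative only: the bandwidth is a guessed kernel size in
    metres, and the subsequent Ncuts-based regrouping and the
    classification of segments (Yao et al., 2009) are omitted.
    """
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    labels = ms.fit_predict(points)
    # One point subset per geometric mode found by mean shift.
    return [points[labels == k] for k in np.unique(labels)]
```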
3. VEHICLE MOTION INDICATION
For the vehicles extracted in the previous step, a parameterized
model of each single vehicle's point set can be produced by shape
analysis. From the parameterized features of the vehicle shape,
the across-track component of vehicle motion can be indicated
unambiguously based on the moving-vehicle model in ALS data,
whereas the along-track motion cannot be inferred without prior
knowledge about individual vehicle sizes. In this section, the
vehicle motion status is therefore inferred only with respect to the
across-track direction, without deriving the velocity.
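As a rough illustration of this idea only, the sketch below flags across-track motion from the tilt of a vehicle footprint's principal axis relative to the flight direction. It assumes, purely for simplicity, that a stationary vehicle would be aligned with the flight line; the angular threshold is a hypothetical value and the sketch is not the shape analysis used in the paper.

```python
import numpy as np

def across_track_motion(footprint_xy, flight_dir, max_tilt_deg=5.0):
    """Flag likely across-track motion of one vehicle footprint (M x 2).

    Simplified illustration: across-track motion shears the scanned
    outline, so (under the alignment assumption above) the footprint's
    main axis tilts away from the along-track direction.
    """
    centered = footprint_xy - footprint_xy.mean(axis=0)
    # Principal axis of the footprint via SVD (PCA).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    principal = vt[0]
    cosang = abs(np.dot(principal, flight_dir) /
                 (np.linalg.norm(principal) * np.linalg.norm(flight_dir)))
    tilt_deg = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
    return tilt_deg > max_tilt_deg
```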
3.1 Vehicle Parametrization
Generally, the laser data provide us with a straightforward 3D
parameterization, as vehicle shapes vary more vertically than
horizontally. Refining the 3D vehicle envelope model (Yao et al.,
2008b), however, is difficult, because the laser point density
acquired under common configurations is usually not sufficient to
model the vertical profile of a vehicle. The situation is further
degraded by motion artifacts, because a large relative velocity
between sensor and object results in fewer laser points, making
the vehicle appear as a blob. Consequently, it is not easy to
analytically model vertical vehicle profiles from ALS data, which
would be a simple task with the much denser terrestrial laser data.
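For illustration only, a minimal 2.5D parameterization of a single vehicle cluster might look as follows. This is a simplified stand-in, not the 3D envelope model of Yao et al. (2008b): the footprint is described by a PCA-aligned bounding box and the vertical extent is reduced to a single height value.

```python
import numpy as np

def parametrize_vehicle(points, ground_height=0.0):
    """Minimal 2.5D parameterization of one vehicle cluster (N x 3)."""
    xy = points[:, :2]
    centroid = xy.mean(axis=0)
    centered = xy - centroid
    # Principal axes of the horizontal footprint.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    local = centered @ vt.T              # coordinates in the PCA frame
    extent = local.max(axis=0) - local.min(axis=0)
    return {
        "length": float(extent[0]),      # along the main axis
        "width": float(extent[1]),       # across the main axis
        "orientation": float(np.arctan2(vt[0, 1], vt[0, 0])),
        "height": float(points[:, 2].max() - ground_height),
        "centroid": centroid,
    }
```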