
In: Stilla U, Rottensteiner F, Paparoditis N (Eds) CMRT09. IAPRS, Vol. XXXVIII, Part 3/W4 — Paris, France, 3-4 September, 2009 
VEHICLE ACTIVITY INDICATION FROM AIRBORNE LIDAR DATA OF URBAN AREAS BY BINARY SHAPE CLASSIFICATION OF POINT SETS
W. Yao a,*, S. Hinz b, U. Stilla a
a Photogrammetry and Remote Sensing, Technische Universitaet Muenchen, Arcisstr. 21, 80290 Munich, Germany
b Institute of Photogrammetry and Remote Sensing, Universität Karlsruhe (TH), 76128 Karlsruhe, Germany
* Corresponding author.
KEY WORDS: Airborne LiDAR, Urban areas, Vehicle extraction, Motion indication, Shape analysis 
ABSTRACT: 
This paper presents a generic scheme to analyze urban traffic via vehicle motion indication from airborne laser scanning (ALS) data.
The scheme comprises two main steps performed progressively: vehicle extraction and motion status classification. The vehicle
extraction step is intended to detect and delineate single vehicle instances as accurately and completely as possible, while the
motion status classification step exploits shape artefacts defined for a moving vehicle model to classify the extracted vehicle
point sets based on parameterized boundary features, which are sufficient to describe the vehicle shape. To accomplish these
tasks, a hybrid strategy integrating a context-guided method with a 3-d segmentation based approach is applied for vehicle extraction.
Then, a binary classification method using a Lie group based distance is adopted to determine the vehicle motion status. However, the
vehicle velocity cannot be derived at this stage because the true size of the vehicle is unknown. We illustrate the vehicle motion
indication scheme with two examples of real data and summarize the performance by assessing the results against manually acquired
reference data, through which the feasibility and high potential of airborne LiDAR for urban traffic analysis are verified.
1. INTRODUCTION 
Transportation represents a major segment of the economic
activities of modern societies and has been increasing
worldwide, which leads to adverse impacts on our environment
and society. Increasing transport safety and efficiency, as
well as reducing air and noise pollution, are therefore among
the main tasks to be solved in the future (Rosenbaum et al., 2008).
The automatic extraction, characterization and monitoring of 
traffic using remote sensing platforms is an emerging field of 
research. Approaches for vehicle detection and monitoring rely 
not only on airborne video but on nearly the whole range of 
available sensors; for instance, optical aerial and satellite 
sensors, infrared cameras, SAR systems and airborne LiDAR 
(Hinz et al., 2008). The principal argument for the utilization of 
such sensors is that they complement stationary data collectors 
such as induction loops and video cameras mounted on bridges 
or traffic lights, in the sense that they deliver not only local data 
but also observe the traffic situation over a larger region of the 
road network. Finally, the measurements derived from the 
various sensors could be fused through the assimilation of 
traffic flow models. A broad variety of approaches can be
found, for instance, in the compilations by Stilla et al. (2005) and
Hinz et al. (2006).
Nowadays, airborne optical cameras are widely in use for these
tasks (Reinartz et al., 2006). Yet satellite sensors have also
entered the resolution range (0.5-2 m) required for vehicle
extraction. Sub-metric resolution is even available for SAR data
since the successful launch of TerraSAR-X. The big advantage 
of these sensors is the spatial coverage. Thanks to their 
relatively short acquisition time and long revisit period, satellite 
systems can mainly contribute to the collection of statistical 
traffic data for validating specific traffic models. Typical 
approaches for vehicle detection in optical satellite images are 
described by Jin and Davis (2007) and Sharma et al. (2006),
and in spaceborne SAR images by Meyer et al. (2006) and
Runge et al. (2007). For monitoring major public events,
mobile and flexible systems which are able to gather data about 
traffic density and average speed are desirable. Systems based 
on medium or large format cameras mounted on airborne 
platforms meet the demands of flexibility and mobility. With 
them, large areas can be covered (up to several km² per frame)
while keeping the spatial resolution high enough to image 
sufficient detail. A variety of approaches for automatic tracking 
and velocity calculation from airborne cameras have been 
developed over the last few decades. These approaches make
use of vehicle substructures such as the roof and windscreen
for matching a wire-frame model to the image data (Zhao and
Nevatia, 2003).
Although LiDAR has a clear edge over optical imagery in
terms of operational conditions, few works have so far been
conducted on traffic analysis from laser scanners. On the one
hand, it is an active sensor that can work day and night; on the
other hand, it is a range sensor that captures an explicit 3-d
description of the scene and can penetrate volumetric occlusions
to some extent. Toth and Grejner-Brzezinska (2006) presented
an integrated airborne system of a digital camera and LiDAR
for road corridor mapping and the acquisition of dynamic
information. They addressed a comprehensive working chain
for near real-time extraction of vehicle motion based on fusing
the images with LiDAR data. Another example of applying
ALS data to traffic-related analysis can be found in
Yarlagadda et al. (2008), where the vehicle category is
determined by 3-d shape-based classification.
In this paper, a generic scheme to discover vehicle motion
solely from airborne LiDAR data is presented. It is based on a
two-step strategy, which first extracts single vehicles using a
contextual model of traffic objects and a 3-d segmentation based
classification (3-d object-based classification), and then
classifies the vehicle entities with respect to motion status based
on shape analysis. A minimal sketch of this pipeline is given below.
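
The following Python sketch illustrates one possible realization of the two-step strategy. It is not the authors' implementation: DBSCAN clustering stands in for the context-guided extraction and 3-d segmentation, principal-axis extents of the point set stand in for the parameterized boundary features, and a k-nearest-neighbour classifier stands in for the Lie group based distance classification; all function names, thresholds and parameter values are assumptions for illustration only.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import KNeighborsClassifier

def extract_vehicle_candidates(points_xyz, ground_height=0.0,
                               min_h=0.3, max_h=3.0, eps=0.7, min_pts=10):
    # Step 1 (simplified): keep points in a vehicle-plausible height band
    # above the (assumed known) ground level and group them into candidate
    # vehicle point sets by 3-d clustering.
    h = points_xyz[:, 2] - ground_height
    cand = points_xyz[(h > min_h) & (h < max_h)]
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(cand)
    return [cand[labels == k] for k in set(labels) if k != -1]

def shape_features(vehicle_points):
    # Step 2 (simplified): describe the point set by its extents along the
    # principal axes in the ground plane, a crude stand-in for the paper's
    # parameterized boundary features of moving vs. stationary vehicles.
    xy = vehicle_points[:, :2] - vehicle_points[:, :2].mean(axis=0)
    _, evecs = np.linalg.eigh(np.cov(xy.T))
    proj = xy @ evecs
    extents = proj.max(axis=0) - proj.min(axis=0)
    return np.array([extents.max(), extents.min()])  # "length", "width"

# Usage, assuming labelled training vehicles are available:
# X_train = [shape_features(v) for v in training_vehicles]
# clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, motion_labels)
# candidates = extract_vehicle_candidates(scan_points)
# status = clf.predict([shape_features(v) for v in candidates])
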
2. VEHICLE EXTRACTION 
In this step, we first need to extract the various vehicle categories
as completely and accurately as possible, without considering the
differences among them in terms of dynamic status. To