International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B7, 2012 
XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia
AUTOMATIC MOVING VEHICLE'S INFORMATION EXTRACTION FROM ONE-PASS 
WORLDVIEW-2 SATELLITE IMAGERY 
Rakesh Kumar Mishra 
Department of Geodesy and Geomatics Engineering, University of New Brunswick, NB, CANADA 
rakesh.mishra@unb.ca 
Commission VII, WG VII/5 
KEY WORDS: Satellite images, WorldView-2, vehicle detection, vehicle information, traffic, AdaBoost.
ABSTRACT: 
Moving-vehicle information (position, speed, and direction) has several applications. The WorldView-2 satellite has three sensors: one
Pan and two MS (MS-1: BGRN1, Pan, and MS-2: CYREN2). Because of a slight time gap in acquiring images from these sensors,
the WorldView-2 images capture three different positions of the moving vehicles. This paper proposes a new technique to extract the 
vehicle information automatically by utilizing the small time gap in WorldView-2 sensors. A PCA-based technique has been 
developed to automatically detect moving vehicles from MS-1 and MS-2 images. The detected vehicles are used to limit the search 
space of the adaptive boosting (AdaBoost) algorithm in accurately determining the positions of vehicles in the images. Then, the RPC
sensor model of WorldView-2 has been used to determine vehicles’ ground positions from their image positions to calculate speed 
and direction. The technique has been tested on a WorldView-2 image. A vehicle detection rate of over 95% has been achieved. The
results of vehicles’ speed calculations are reliable. This technique makes it feasible to use satellite images for traffic applications on 
an operational basis. 
1. INTRODUCTION 
The growing volume of already-heavy traffic creates new
challenges for traffic management and planning. Moving 
vehicle information (position, speed, and direction) is crucial for 
traffic planning, security surveillance, and military applications. 
Today’s road systems are equipped with a suite of sensors for 
monitoring traffic status, such as induction loops, overhead 
radar sensors and video sensors. While they all deliver reliable 
measurements, the results are merely point-based in nature. On 
the other hand, information provided by remote sensing 
techniques covers a larger area and thus could often be useful 
for better understanding the dynamics of the traffic. The launch 
of high resolution satellites such as QuickBird and WorldView- 
2 has made it feasible to use satellite images for traffic 
applications. These satellites capture images with a spatial 
resolution better than 1 m and hence can be used to extract road
traffic information. Furthermore, the high resolution satellite 
images give a synoptic view of complex traffic situations and 
the associated context. 
In the past, several efforts (Gerhardinger et al., 2005; Sharma et 
al., 2006; Jin and Davis, 2007; Zheng et al., 2006; Zheng and 
Li, 2007) have been made to detect vehicles from HR satellite 
imagery. A few attempts (Xiong and Zhang, 2008; Leitloff and 
Hinz, 2010; Liu et al., 2010) have been made to determine
vehicle speeds using QuickBird imagery. These methods utilize 
the small time interval between the acquisition of Pan and MS 
images by QuickBird sensors. Xiong and Zhang (2008) 
developed a methodology to determine vehicle's ground 
position, speed and direction using QuickBird Pan and MS 
images. However, the major limitation of the Xiong and Zhang 
(2008) approach is that the vehicles' central positions must be selected manually from the Pan and MS images.
Leitloff and Hinz (2010) used the adaptive boosting (AdaBoost) classification technique to detect single vehicles from Pan images and
then detected the corresponding vehicles in the MS images using a similarity matching approach. Liu et al. (2010), in contrast, used an
object-based method to detect single vehicles from Pan images and then detected the corresponding vehicles in the MS images using an
area correlation method. Both approaches achieved a fair level of accuracy in vehicle detection from Pan images. However, the accuracy
of vehicle detection from MS images is quite low, which leads to large errors in the vehicles' positions determined from the MS images.
Because the time interval between the acquisition of Pan and MS images is very short, even a very small error in a vehicle's position
leads to a very large error in the computed speed.
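To make this sensitivity concrete, the following minimal Python sketch (illustrative only; the time gap, displacement, and error values are assumptions, not figures from the paper) shows how a one-metre slip in a vehicle's position translates into a double-digit error in km/h when the time gap is a fraction of a second.

# Illustrative sketch of speed-error propagation; all numbers are assumed.
def speed_kmh(displacement_m, time_gap_s):
    """Speed from the displacement between two acquisitions and their time gap."""
    return displacement_m / time_gap_s * 3.6

time_gap_s = 0.25          # assumed acquisition time gap (illustrative)
true_displacement_m = 5.0  # assumed true displacement of the vehicle
position_error_m = 1.0     # assumed positioning error, roughly half an MS pixel

true_speed = speed_kmh(true_displacement_m, time_gap_s)
biased_speed = speed_kmh(true_displacement_m + position_error_m, time_gap_s)
print(true_speed, biased_speed)  # 72.0 vs 86.4 km/h: a 1 m slip adds 14.4 km/h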
The recently launched high resolution satellite WorldView-2 has three sensors: one Pan and two MS (MS-1: BGRN1, Pan, and
MS-2: CYREN2). Because of a slight time gap in acquiring images from these sensors, WorldView-2 images capture three different
positions of moving objects (vehicles), while static objects remain at the same position. It is therefore theoretically possible to detect
moving vehicles from WorldView-2 imagery. In practice, however, these calculations pose many image processing challenges. The
spatial resolution of the MS images is low (2 m), which makes vehicle extraction difficult. Furthermore, the MS-1 and MS-2 images
cover different spectral wavelengths, so existing change detection methods cannot detect moving vehicles in them. In addition, the
accurate determination of the ground positions of a moving vehicle in each image is essential for accurate speed computation.
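For orientation, the following sketch shows one generic way PCA can be used to highlight change between two co-registered band stacks such as MS-1 and MS-2: the leading principal components absorb the stable scene content (including the systematic spectral offset between the two sensors), and pixels with large residual energy outside those components are flagged as candidate movers. This is only an illustration of the general idea; it does not reproduce the paper's own PCA-based algorithm, and the function name and the number of background components are assumptions.

import numpy as np

def pca_change_map(ms1, ms2, n_background=3):
    """ms1, ms2: co-registered arrays of shape (rows, cols, bands).
    Returns a per-pixel score in which moving objects stand out."""
    rows, cols = ms1.shape[:2]
    # One feature vector per pixel, built from both acquisitions.
    x = np.concatenate([ms1, ms2], axis=2).reshape(rows * cols, -1).astype(float)
    x -= x.mean(axis=0)
    # Principal components of the joint pixel distribution.
    eigvals, eigvecs = np.linalg.eigh(np.cov(x, rowvar=False))
    # The leading components absorb the stable scene and the systematic
    # spectral offset between MS-1 and MS-2; the count is an assumption.
    background = eigvecs[:, -n_background:]  # eigh sorts eigenvalues ascending
    residual = x - x @ background @ background.T
    return np.linalg.norm(residual, axis=1).reshape(rows, cols)

Thresholding such a score and filtering the resulting blobs by size would yield candidate moving vehicles, which, as described in the abstract, can then be used to limit the search space of the AdaBoost detector.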
This paper proposes a completely new methodology to automatically and accurately extract moving vehicle information (position,
speed, and direction) from the MS-1 and MS-2 images captured by the WorldView-2 satellite in one pass. A motion detection algorithm
has been developed that compares the MS-1 and MS-2 images and detects the objects that are in motion. The novelty of this algorithm
is that it is completely automatic and requires no road extraction prior to vehicle detection, whereas earlier vehicle detection methods
require roads to be extracted, either manually or from GIS data, before vehicles can be detected. A vehicle detection rate of over 95%
has been achieved.
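As a complement, the sketch below illustrates how speed and direction follow from the ground positions of the same vehicle in MS-1 and MS-2 once those positions have been obtained (for example, via the RPC sensor model). The coordinate convention, the time-gap value, and the function name are illustrative assumptions rather than details taken from the paper.

import math

def speed_and_heading(e1, n1, e2, n2, time_gap_s):
    """Ground positions in metres (easting, northing) in MS-1 and MS-2;
    returns speed in km/h and heading in degrees clockwise from north."""
    de, dn = e2 - e1, n2 - n1
    speed = math.hypot(de, dn) / time_gap_s * 3.6
    heading = math.degrees(math.atan2(de, dn)) % 360.0
    return speed, heading

# Assumed example: a vehicle displaced 4 m east and 3 m north between two
# acquisitions 0.25 s apart moves at 72.0 km/h on a heading of about 53 degrees.
print(speed_and_heading(0.0, 0.0, 4.0, 3.0, 0.25))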