AUTOMATIC ROAD EXTRACTION FROM DENSE URBAN AREA BY INTEGRATED 
PROCESSING OF HIGH RESOLUTION IMAGERY AND LIDAR DATA 
Xiangyun Hu 
C. Vincent Tao 
Yong Hu 
Geospatial Information and Communication Technology Lab (GeoICT), 
Department of Earth and Space Science and Engineering, 
York University, 4700 Keele Street, Toronto, ON, Canada, M3J 1P3 
xyhu@yorku.ca, tao@yorku.ca, yhu@yorku.ca 
KEY WORDS: Extraction, Road, Multisensor, High Resolution, LIDAR, Urban, Contextual 
ABSTRACT: 
Automated and reliable acquisition of 3D city models is increasingly in demand. Automatic road extraction from dense urban areas is a 
challenging task due to the high complexity of the image scene. In imagery, the obstacles to extraction stem mainly from the difficulty 
of finding clues of the roads and from the complexity of the contextual environment. One promising way to deal with this is to use 
data from multiple sensors, from which multiple clues and constraints can be obtained so that the uncertainty can be 
reduced significantly. This paper focuses on the integrated processing of high resolution imagery and LIDAR (LIght Detection 
And Ranging) data for the automatic extraction of grid-structured urban road networks. Guided by an explicit model of 
urban roads in a grid structure, the method first detects the primitives, or clues, of the roads and of contextual targets (e.g., parking 
lots, grasslands) from both the color image and the lidar data by segmentation and image analysis. Evidence of road existence is 
contained in these primitives. Candidate road stripes are then detected by an iterative Hough transform algorithm. This is followed by a 
procedure of evidence finding and validation that takes advantage of the high resolution imagery and of the direct height information of the 
scene derived from the lidar data. Finally, the road network is formed by topology analysis. In this paper, the strategy and the corresponding 
algorithms are described. The test data set consists of color ortho-imagery with 0.5 m resolution and lidar data of the Toronto downtown area. The 
experimental results for this typical dense urban scene indicate that the roads can be extracted much more reliably and accurately by the 
integrated processing than by using the imagery or the lidar data separately. This clearly demonstrates the advantages of integrated processing of 
multiple data sources for road extraction from complicated scenes. 
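As a rough illustration of the iterative Hough transform idea referred to above (a generic sketch under assumed parameters, not the authors' actual algorithm), the following Python fragment repeatedly votes for straight lines over a binary mask of road clues, keeps the strongest line as a candidate road stripe, erases the pixels supporting it within an assumed stripe half-width, and stops once no line collects enough votes. The function names and all thresholds (min_votes, stripe_half_width, max_stripes) are illustrative assumptions.

import numpy as np

def hough_accumulator(mask, n_theta=180):
    # Standard (rho, theta) voting over the foreground pixels of a binary mask.
    ys, xs = np.nonzero(mask)
    thetas = np.deg2rad(np.arange(n_theta))            # angles 0..179 degrees
    diag = int(np.ceil(np.hypot(*mask.shape)))         # maximum possible |rho|
    rhos = xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)
    rho_idx = np.round(rhos).astype(int) + diag        # shift rho into non-negative bins
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    np.add.at(acc, (rho_idx, np.arange(n_theta)), 1)   # accumulate one vote per pixel/angle
    return acc, thetas, diag

def iterative_hough_stripes(mask, min_votes=200, stripe_half_width=6, max_stripes=50):
    # Repeatedly keep the strongest line as a candidate road stripe, erase its
    # supporting pixels, and stop when no line gathers enough votes.
    mask = mask.astype(bool).copy()
    ys_all, xs_all = np.indices(mask.shape)
    stripes = []
    for _ in range(max_stripes):
        acc, thetas, diag = hough_accumulator(mask)
        peak = np.unravel_index(np.argmax(acc), acc.shape)
        if acc[peak] < min_votes:
            break                                      # no salient stripe left
        rho, theta = peak[0] - diag, thetas[peak[1]]
        stripes.append((rho, theta, int(acc[peak])))
        # Remove the pixels lying within this stripe so that the next
        # iteration finds the next-strongest candidate.
        dist = np.abs(xs_all * np.cos(theta) + ys_all * np.sin(theta) - rho)
        mask[dist <= stripe_half_width] = False
    return stripes                                     # list of (rho, theta, votes)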
1. INTRODUCTION 
Automatic road extraction from remotely sensed imagery has 
attracted much attention over the last few decades, and a great 
number of research papers have been published on the topic in 
both the geospatial and the computer vision communities. In general, road 
extraction consists of four steps (Gruen and Li, 1995): road 
sharpening, road finding, road tracking, and road linking. In 
early research (Bajcsy and Tavakoli, 1976; Nevatia and Babu, 
1980), line detection algorithms were presented for 
extracting roads from remotely sensed imagery; little 
high-level knowledge was involved in these road-finding methods. 
To bridge gaps, trace roads and handle 
complicated image scenes, more sophisticated strategies are 
needed for reliable extraction. Knowledge- or rule-based 
methods, and similar methods based on hypothesis verification 
(McKeown and Denlinger, 1988; Tönjes and Growe, 1998; 
Trinder and Wang, 1998), have been used to handle the problems 
of linear feature alignment and fragmentation. Optimal route 
search algorithms have frequently been employed for semiautomatic 
road extraction; the optimization can be realized by dynamic 
programming (Gruen and Li, 1995; Barzohar and Cooper, 1998), 
snakes (Trinder and Li, 1995; Gruen and Li, 1997; Tao et al., 
1998; Agouris et al., 2001) and Kalman filtering (Vosselman 
and de Knecht, 1995). Furthermore, methods supported by contextual 
information (Stilla, 1995; Baumgartner et al., 1999) have been 
applied to extract roads more reliably. Many other strategies 
(Barzohar and Cooper, 1998; Couloigner and 
Ranchin, 2000; Laptev et al., 2000; Katartzis et al., 2001; Hinz 
and Baumgartner; Hu and Tao, 2003; Hu and Tao, 2004) 
attempt to combine these methods or use specific techniques 
in order to deal with different scenarios of image scale, 
scene complexity, road type, etc. However, automating road 
extraction remains challenging, as the underlying problems of 
intelligent image understanding are too complicated to be 
solved straightforwardly. Most of the methods applied to extract 
roads from open or rural areas have been successful to some extent 
thanks to the relatively simple image scene and road model. For the 
extraction of roads in dense urban areas, especially from high 
resolution imagery, the primary obstacles leading to 
unreliable extraction results are the complicated image scene and road 
model, together with the occlusion caused by tall buildings and 
their shadows. In other words, the lack of information, 
especially three-dimensional information, is the principal 
difficulty in obtaining road information with high reliability 
and accuracy in urban scenes. 
Airborne lidar (LIght Detection And Ranging) is a relatively 
new data acquisition system that is complementary to traditional 
remote sensing technologies. Lidar data contains rich scene 
information, from which most ground features such as roads and 
buildings are discernible. Roads have homogeneous reflectivity 
in the lidar intensity and the same height as the bare surface in 
elevation. Lidar range data can improve the analysis of 
optical images for detecting roads in urban areas (Hofmann, 
2001), although the use of range data requires that the urban area be 
relatively flat. Some researchers (Zhang et al., 2001; Alharthy 
and Bethel, 2003; Hu, 2003) used the height information 
derived by subtracting the DTM from the DSM to reason whether a 
region is on the ground and to compensate for missing 
information in the classification of aerial images. In cases where 
shadows or buildings occlude road segments, their shape can still be 
detected well thanks to the height information. Lidar intensity data 
has good separability if the wavelength of the laser is suitable. 
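To make the ground reasoning just mentioned concrete (a minimal sketch under assumed parameter values, not the procedure of the cited works), the Python fragment below derives per-pixel height above ground as the difference between the DSM and the DTM and thresholds it to flag regions that lie on the bare surface, where roads are expected; the threshold value, array names, and the helper spectral_road_mask are hypothetical.

import numpy as np

def ground_mask(dsm, dtm, height_threshold=2.0):
    # dsm, dtm: co-registered elevation grids in metres.
    # height_threshold: assumed maximum height above terrain for a pixel to
    # count as "on the ground"; buildings and trees exceed it, roads do not.
    ndsm = dsm - dtm                   # normalized DSM: height above the bare Earth
    return ndsm < height_threshold     # True where the surface hugs the terrain

# A road-clue mask from the imagery (hypothetical helper) can then be
# intersected with the ground mask so that roofs with road-like radiometry
# are rejected:
#   road_clues = spectral_road_mask(ortho_image)
#   road_clues &= ground_mask(dsm, dtm)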
[The right-hand column of this page is truncated in the source scan; it contained the end of the introduction and the beginning of Section 2, with Subsection 2.1 describing the test lidar data and imagery, and cannot be recovered here.] 