[Only the right-hand edge of the first page (title, abstract and the opening of the introduction) survives in the source. The recoverable fragments concern the difficulty of extracting road networks in dense urban (downtown) areas, the use of lidar (LIght Detection And Ranging) height and intensity data together with high resolution imagery to segment open areas such as roads and parking lots, and earlier lidar-based analyses of urban scenes (Hofmann, 2001; Alharthy ...).]
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B3. Istanbul 2004 
The intensity data of the laser is suitable for discriminating ground materials. The relative separations between ground features (i.e., asphalt road, grass, building and tree) have been compared using intensity data, and the separabilities were found to be very high for road vs. grass and for road vs. tree (Song et al., 2002). In many cities the road network in urban areas is arranged in a grid structure. These grid roads are mainly composed of straight roads that are parallel or orthogonal to the main orientation of the network. The existence of streets can be detected much more easily from this arrangement than from imagery, where the highly complicated image content and lack of information make direct extraction of the street network very difficult. The simple geometric and topological relations among grid streets can therefore be used to improve the reliability of the road extraction results significantly.
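To make the separability claim concrete, the short sketch below computes the Jeffries-Matusita separability (a standard measure in the range 0 to 2) between two sets of lidar intensity samples; the sample values and the function name are hypothetical and only illustrate the kind of comparison reported by Song et al. (2002), not their actual procedure.

import numpy as np

def jeffries_matusita(a, b):
    # Jeffries-Matusita separability (0..2) between two 1-D samples,
    # assuming each class is roughly Gaussian in lidar intensity.
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(ddof=1), b.var(ddof=1)
    # Bhattacharyya distance for two univariate Gaussians
    bhatt = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2.0) \
            + 0.5 * np.log(((v1 + v2) / 2.0) / np.sqrt(v1 * v2))
    return 2.0 * (1.0 - np.exp(-bhatt))

# Hypothetical 8-bit intensity samples drawn from labelled training pixels.
rng = np.random.default_rng(0)
road = rng.normal(20, 6, size=500)     # asphalt: low return intensity
grass = rng.normal(90, 15, size=500)   # grass: much brighter laser return
print(jeffries_matusita(road, grass))  # close to 2.0, i.e. highly separable

Values near 2 indicate nearly complete separation, which is consistent with the road vs. grass and road vs. tree results cited above.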
As mentioned above, lidar data make it easier than imagery to extract road primitives in built-up areas, while the imagery can provide additional information for verification and accurate extraction; many clues of road existence can be obtained from high resolution imagery. The motivation of this paper is to explore a strategy and methodology for the integrated processing of lidar data and high resolution imagery in order to obtain reliable road network information from the dense urban environment. In the following section, the case study data are introduced and the overall processing strategy is given. The third section describes the road extraction methods using these two sources of information, including road area segmentation, road clue detection and verification, and fusion of the clues from the two data sources. The case study results are then presented and concluding remarks are given.
2. OVERVIEW OF INTEGRATED PROCESSING OF LIDAR AND HIGH RESOLUTION IMAGERY FOR ROAD EXTRACTION
2.1 Data of the Case Study Area 
In early 2002, Optech International, Toronto completed a flight mission acquiring lidar data over the Toronto urban area using its ALTM 3200. The lidar dataset provided covers the downtown region, where the roads are surfaced with asphalt with pebbles or with concrete. First- and last-return lidar range and intensity data were collected. The dataset contains about 10.6 million points at a density of about 1.1 points/m². We generate the DTM from the last-return lidar range data and obtain the height data by subtracting the DTM from the range data (Hu, 2003). The height data thus contain the height information with the terrain relief relative to the bare Earth removed, placing all ground features on a flat reference plane. Figure 1 (a) and (b) show the first-return intensity data and the height data. The high resolution imagery is an ortho-rectified aerial image of the same area with a resolution of 0.5 m; for integrated processing it is resampled to 1 m resolution and manually registered to the lidar data. Figure 1 illustrates the lidar data of the area, and Figure 1 (c) shows a 1024 by 1024 pixel window of the high resolution imagery. Considering the computational cost, we carry out the extraction in this selected area, which represents a typical dense urban scene: it contains tall buildings, roads (streets) and many kinds of typical ground objects (parking lots, grass land, trees, vehicles, etc.), and is therefore suitable for testing our method.
  
(a) First-return intensity data (b) Height data (c) High resolution aerial imagery
Figure 1. Lidar data and imagery used for road extraction
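The height-data and resampling steps of Section 2.1 can be written down compactly. The sketch below assumes the lidar surface and the DTM are already gridded rasters; the function names and the 2x2 block-averaging resampling are illustrative assumptions, since the paper does not state the exact implementation.

import numpy as np

def normalized_height(dsm, dtm):
    # Subtract the bare-earth DTM from the lidar surface so that buildings,
    # trees, etc. are measured above a flat reference plane.
    ndsm = dsm - dtm
    return np.clip(ndsm, 0.0, None)  # clip small negatives from interpolation noise

def resample_to_1m(image_05m):
    # Resample 0.5 m ortho-imagery to 1 m by 2x2 block averaging so that it
    # matches the lidar grid before integrated processing.
    h = image_05m.shape[0] // 2 * 2
    w = image_05m.shape[1] // 2 * 2
    blocks = image_05m[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))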
2.2 Processing Workflow
Figure 2 illustrates the workflow of the integrated processing for road network extraction from the dense urban environment. The strategy is based on the observation that the major clues of road existence should come from the lidar data, whose height component removes the principal difficulty caused by occlusion of the roads. Firstly, therefore, the lidar data are used to obtain candidate road stripes: in the built-up area the dense building arrangement exhibits a grid structure, and the grid road network can be perceived far more easily from this structure than from optical imagery, where it is occluded. The segmented road and open areas are then further divided using the classification results of the optical imagery; grass and tree areas are extracted from the image by pixel-based classification. The open areas extracted from the lidar data contain both road stripes and parking lots, and possible parking areas are extracted by morphological operations on the segmented lidar data. To verify and differentiate the road stripes and the parking areas, clues from shape analysis and vehicle detection are used; the vehicle detection is performed on the high resolution optical imagery. The information of the verified roads and parking areas is finally used to form the road grid, as sketched below.
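A highly simplified sketch of the lidar part of this workflow is given below. All names and thresholds are assumptions for illustration, morphological opening with a large structuring element stands in for the parking-lot extraction, and the vehicle detection, shape verification and road-grid formation steps are omitted.

import numpy as np
from scipy import ndimage

def candidate_roads_and_parking(height, vegetation_mask,
                                height_thresh=2.5, parking_size_px=15,
                                min_road_pixels=200):
    # Open areas: low lidar height and not classified as grass/trees.
    open_area = (height < height_thresh) & ~vegetation_mask
    # Wide, compact open regions survive opening with a large structuring
    # element and are treated as parking-lot candidates.
    selem = np.ones((parking_size_px, parking_size_px), dtype=bool)
    parking = ndimage.binary_opening(open_area, structure=selem)
    # The remaining elongated open areas are candidate road stripes; keep
    # only reasonably large connected components.
    stripes = open_area & ~parking
    labels, n = ndimage.label(stripes)
    sizes = ndimage.sum(stripes, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_road_pixels))
    return keep, parking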
In the next section, the methods of the integrated processing are briefly described.
Figure 2. Workflow of the integrated processing for road network extraction