4 EXTRACTION AND FUSION OF ROAD OBJECTS 
4.1 Extraction of Road Objects — Overview 
The extraction strategy incorporates knowledge about how and when certain parts of the road and context model are optimally exploited and thereby serves as the basic control mechanism of the extraction process. It is subdivided into three levels (see Fig. 3):
Context-based data analysis comprises the segmentation of the scene into urban, rural, and forest areas and the analysis of context relations. While road extraction in forest areas seems hardly possible without additional sensors, e.g., infrared or LIDAR sensors, the extraction in rural areas may be performed with the system of (Baumgartner et al., 1999). In urban areas, the extraction of salient roads includes the detection of homogeneous ribbons at coarse scale, the collinear grouping of thin bright lines, i.e., road markings, and the construction of lane segments from groups of road markings, road sides, and detected vehicles. The lane segments are further grouped into lanes, road segments, and roads.
Finally, during road network completion, gaps in the extraction are iteratively closed by hypothesizing and verifying connections between previously extracted roads. Similar to (Wiedemann and Ebner, 2000), local as well as global criteria exploiting the network characteristics are used. Figures 4 and 5 illustrate intermediate steps of the extraction and Figs. 6 and 7 show typical results.
For details regarding the extraction we refer the reader to (Hinz 
et al., 2001, Hinz and Baumgartner, 2002). The system described 
there extracts roads from a single image and uses a DSM and 
views from other images to circumvent occlusions. In contrast, 
the new version extracts roads from all available images and fuses 
them in object space. The next section focuses on this particular 
issue. 
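To make the road network completion step more tangible, the following is a minimal sketch of iterative gap closing between previously extracted road segments. The geometric criteria and thresholds are illustrative assumptions and deliberately simplified; they are neither the local and global network criteria of (Wiedemann and Ebner, 2000) nor those of our system.

import math

def close_gaps(road_segments, max_gap=30.0, max_angle_deg=20.0):
    """Illustrative gap closing (assumed criteria, not the system's actual ones).
    road_segments: list of ((x1, y1), (x2, y2)) end points in object space (metres).
    A connection between the end of one segment and the start of another is
    hypothesized if the gap is short, and verified if the connection roughly
    continues the direction of both segments (local criterion only)."""
    def direction(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])

    def angle_diff(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    links = []
    for i, (a_start, a_end) in enumerate(road_segments):
        for b_start, b_end in road_segments[i + 1:]:
            gap = math.dist(a_end, b_start)
            if gap > max_gap:
                continue  # hypothesis rejected: gap too long
            link_dir = direction(a_end, b_start)
            if (angle_diff(link_dir, direction(a_start, a_end)) < math.radians(max_angle_deg)
                    and angle_diff(link_dir, direction(b_start, b_end)) < math.radians(max_angle_deg)):
                links.append((a_end, b_start))  # hypothesis verified
    return links

# Two nearly collinear segments separated by a roughly 10 m gap are connected.
print(close_gaps([((0, 0), (50, 0)), ((60, 1), (120, 2))]))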
      
  
Figure 3: Extraction strategy. (Diagram: context-based data analysis with segmentation of context regions into urban, forest, and rural areas and analysis of context relations; road extraction with extraction of salient roads, extraction using the approach for rural areas, construction of lane segments, and fusion; road network completion.)
  
Figure 4: Examples of intermediate steps during road extraction: detected shadow areas; markings outside of shadow areas; markings inside of shadow areas; verified lanes (top) and detected car (bottom).
4.2 Fusion of Road Objects 
To exploit information from multiple views, an appropriate fu- 
sion strategy has been developed, which is especially suitable for 
complex environments like urban areas. It can be characterized by the following features: 1) it is based on objects, i.e., parts of the road network such as lane segments and road segments; 2) it is carried out in object space; 3) it is embedded in the system's concept of self-diagnostic extraction algorithms. From a methodological viewpoint, the novelty of this approach mainly lies in the incorporation and use of self-diagnosis algorithms for fusion. The first two points, however, accommodate the special properties of urban scenes and are therefore no less important. In the following, comments on each point are given:
Ad 1) Fusion is based on objects because, as mentioned above, aerial images of urban areas exhibit very high complexity. If fusion were based on low-level image primitives like raw gray values or edge structures, either an extremely accurate DSM (effectively a 3D city model) would have to be given, or the fusion algorithm would have to cope with the many ambiguities and conflicting hypotheses that occur when matching primitives over different images. Hence, our philosophy is to stay in 2D as long as possible and to extract objects of large extent and high semantics in each image separately. Matching such objects over images is much easier, and the requirements on a DSM can be relaxed significantly. In the case of our road extraction system, the objects subject to fusion are the lane segments extracted in each available image. These are constructed in previous processing steps from groups of markings (i.e., thin bright lines) and (anti-)parallel road sides (i.e., gray-value edges) while constraining them to enclose a homogeneous region or, alternatively, a vehicle (Hinz et al., 2001, Schlosser et al., 2003).
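As an illustration of object-based matching, the sketch below pairs lane segments that have been extracted independently in two images and already transferred to object space. The LaneSegment structure and the distance and direction thresholds are assumptions made for this example; the system's actual data structures and matching criteria are not shown here.

import math
from dataclasses import dataclass

@dataclass
class LaneSegment:
    """Illustrative lane segment in object space: straight center line plus attributes."""
    start: tuple        # (x, y) in metres
    end: tuple          # (x, y) in metres
    width: float        # lane width in metres
    confidence: float   # self-diagnosis value in [0, 1]

    def midpoint(self):
        return ((self.start[0] + self.end[0]) / 2.0, (self.start[1] + self.end[1]) / 2.0)

    def direction(self):
        return math.atan2(self.end[1] - self.start[1], self.end[0] - self.start[0])

def match_lane_segments(segments_a, segments_b, max_dist=2.0, max_angle_deg=10.0):
    """Pair lane segments from two images whose midpoints are close and whose
    directions agree (simplified, assumed criteria)."""
    matches = []
    for a in segments_a:
        for b in segments_b:
            dist = math.dist(a.midpoint(), b.midpoint())
            ang = abs(a.direction() - b.direction()) % math.pi
            ang = min(ang, math.pi - ang)  # direction difference regardless of orientation
            if dist < max_dist and ang < math.radians(max_angle_deg):
                matches.append((a, b))
    return matches

Because only a few compact, semantically rich objects per image have to be compared, the many ambiguous correspondences that arise when matching raw primitives across views are largely avoided.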
Ad 2) The main reason for performing fusion in object space is that it naturally treats each image with equal importance and does not prefer any image a priori. Thus, a dependence of the final results on the processing order of the images is avoided. As a side effect, objects extracted from images of different resolutions can be combined easily, and all necessary parameters can be specified in real-world units.
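The sketch below illustrates why this works independently of image resolution: once an image point is mapped to object space, thresholds and other parameters can be given in metres. The simple north-aligned orthoimage geometry assumed here (origin plus ground sampling distance) is an illustrative simplification; the system itself relies on the full image orientation and a DSM.

def image_to_object(point_px, origin_m, gsd_m):
    """Map an image point (column, row) to object-space coordinates (x, y) in metres,
    assuming a north-aligned orthoimage with upper-left corner 'origin_m' and
    ground sampling distance 'gsd_m' (illustrative simplification)."""
    col, row = point_px
    x0, y0 = origin_m
    return (x0 + col * gsd_m, y0 - row * gsd_m)

# The same road marking observed in a 0.1 m and in a 0.5 m resolution image
# maps to the same object-space position, so one threshold in metres serves both.
print(image_to_object((1200, 800), origin_m=(5000.0, 8000.0), gsd_m=0.1))  # (5120.0, 7920.0)
print(image_to_object((240, 160), origin_m=(5000.0, 8000.0), gsd_m=0.5))   # (5120.0, 7920.0)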
Ad 3) The fusion algorithm is embedded in the system's concept of self-diagnostic extraction algorithms. The idea behind this approach is that each module used during extraction should attach a confidence value to its result indicating how well the job has been done. Our approach to defining evaluation criteria, from which the confidence values can be calculated, is to split the components of the underlying object models into two different types: model components of the first type are used for extracting an instance of an object, while components of the second type serve as criteria for evaluating the quality of the extracted instance. To guarantee an unbiased evaluation, model components belonging to different types should be independent of each other. In order to evaluate a certain object, pre-defined fuzzy functions are used. Since the road model underlying our
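A minimal sketch of such a self-diagnosis step is given below: evaluation-type model components are scored by predefined fuzzy membership functions, and the individual scores are combined by a fuzzy AND (minimum). The trapezoidal function shape, the chosen criteria, and all parameter values are assumptions made for illustration; they are not the actual functions of the system described in (Hinz and Baumgartner, 2002).

def fuzzy_trapezoid(x, a, b, c, d):
    """Membership value in [0, 1]: 0 outside [a, d], 1 on [b, c],
    linear ramps in between (assumed function shape)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def evaluate_lane_segment(width_m, marking_contrast, vehicle_count):
    """Confidence of an extracted lane segment from evaluation-type model components
    (criteria and parameters are illustrative and given in real-world units)."""
    scores = [
        fuzzy_trapezoid(width_m, 2.0, 2.5, 4.0, 6.0),         # plausible lane width in metres
        fuzzy_trapezoid(marking_contrast, 5, 20, 255, 256),   # sufficiently contrasted markings
        fuzzy_trapezoid(vehicle_count, -1, 0, 3, 8),          # lane not congested with vehicles
    ]
    return min(scores)  # fuzzy AND: aggregation over all involved criteria

print(evaluate_lane_segment(width_m=3.2, marking_contrast=40, vehicle_count=1))  # 1.0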
  