ISPRS Commission III, Vol. 34, Part 3A, "Photogrammetric Computer Vision", Graz, 2002
  
  
Figure 11: Extracted road network of Scene I 
evidence was given to accept connections between the individual 
branches of the junction. Another obvious failure can be seen 
at the right branch of the junction in the central part of Scene 
II (Fig. 12). The tram and trucks in the center of the road have 
been missed because our vehicle detection module can only 
extract vehicles similar to passenger cars. As a result, this particular 
road axis has been shifted to the lower part of the road, where the 
implemented parts of the model fit much better. 
In summary, the results indicate that the presented system ex- 
tracts roads even in complex environments. This robustness is, 
not least, a result of the detailed modelling of both the extraction 
and evaluation components, which accommodates the flexibility 
the extraction requires. An obvious deficiency is the missing 
detection capability for vehicle types such as buses and 
trucks, as well as the (still) weak model for complex junctions. The next 
extension of our system, however, is the incorporation of multi- 
ple overlapping images in order to accumulate more evidence for 
lanes and roads in such difficult cases. The internal evaluation 
will contribute greatly here, because different, possibly com- 
peting, extraction results have to be combined. For multiple 
images, too, we plan to treat the processing steps up to the generation 
of lanes purely as a 2D problem. The results for each image are 
then projected onto the DSM and fused there to achieve a con- 
sistent dataset. Finally, new connections are hypothesized and, 
again, verified in each image separately. 
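The multi-image strategy described above can be sketched as follows. This is a hypothetical illustration only, not the authors' implementation: all class and function names are assumptions, and the projection step is reduced to a coordinate shift standing in for a real sensor model and DSM.

```python
from dataclasses import dataclass

# Sketch of the planned pipeline:
# 1) extract lanes per image as a pure 2D problem,
# 2) project each image's results onto the DSM,
# 3) fuse overlapping results into one consistent dataset,
#    keeping the best-scoring lane among competing duplicates.

@dataclass(frozen=True)
class Lane:
    start: tuple   # (x, y) in image or DSM coordinates
    end: tuple
    score: float   # internal evaluation score from the extraction

def project_to_dsm(lanes, offset):
    """Stand-in for projecting 2D image results into DSM coordinates."""
    shift = lambda p: (p[0] + offset[0], p[1] + offset[1])
    return [Lane(shift(l.start), shift(l.end), l.score) for l in lanes]

def fuse(projected_sets, tol=1.0):
    """Merge lanes from several images; near-duplicates (competing
    extraction results) are resolved by their evaluation score."""
    fused = []
    for lanes in projected_sets:
        for lane in lanes:
            dup = next((f for f in fused
                        if abs(f.start[0] - lane.start[0]) <= tol
                        and abs(f.start[1] - lane.start[1]) <= tol), None)
            if dup is None:
                fused.append(lane)
            elif lane.score > dup.score:
                fused[fused.index(dup)] = lane
    return fused
```

A new-connection hypothesis step would then propose links between fused lane endpoints and verify each one against every image separately; that step is omitted here, since the paper leaves its details to future work.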
REFERENCES 
Baltsavias, E., Gruen, A. and van Gool, L. (eds), 2001. Automatic Extrac- 
tion of Man-Made Objects from Aerial and Space Images (III). Balkema 
Publishers, Lisse, The Netherlands. 
Baumgartner, A., Steger, C., Mayer, H., Eckstein, W. and Ebner, H., 1999. 
Automatic Road Extraction Based on Multi-Scale, Grouping, and Con- 
text. Photogrammetric Engineering and Remote Sensing 65(7), pp. 777- 
785. 
Clément, V., Giraudon, G., Houzelle, S. and Sandakly, F., 1993. Inter- 
pretation of Remotely Sensed Images in a Context of Multisensor Fusion 
Using a Multispecialist Architecture. IEEE Transactions on Geoscience 
and Remote Sensing 31(4), pp. 779-791. 
Faber, A. and Förstner, W., 2000. Detection of Dominant Orthogonal 
Structures in Small Scale Imagery. In: International Archives of Pho- 
togrammetry and Remote Sensing, Vol. 33, part B. 
Förstner, W., 1996. 10 Pros and Cons Against Performance Characteriza- 
tion of Vision Algorithms. In: H. I. Christensen, W. Förstner and C. B. 
Madsen (eds), Workshop on Performance Characteristics of Vision Algo- 
rithms, pp. 13-29. 
Fuchs, C., Gülch, E. and Förstner, W., 1998. OEEPE Survey on 3D-City 
Models. OEEPE Publication, 35. 
Figure 12: Extracted road network of Scene II 
Gruen, A., Baltsavias, E. and Henricsson, O. (eds), 1997. Automatic 
Extraction of Man-Made Objects from Aerial and Space Images (II). 
Birkhäuser Verlag, Basel. 
Gruen, A., Kuebler, O. and Agouris, P. (eds), 1995. Automatic Extraction 
of Man-Made Objects from Aerial and Space Images. Birkhäuser Verlag, 
Basel. 
Heller, A., Fischler, M., Bolles, R. and Connolly, C., 1998. An Integrated 
Feasibility Demonstration for Automatic Population of Spatial Databases. 
In: Image Understanding Workshop '98. 
Hinz, S. and Baumgartner, A., 2001. Vehicle Detection in Aerial Images 
Using Generic Features, Grouping, and Context. In: Pattern Recogni- 
tion (DAGM 2001), Lecture Notes on Computer Science 2191, Springer- 
Verlag, pp. 45-52. 
Hinz, S., Baumgartner, A. and Ebner, H., 2001a. Modelling Contex- 
tual Knowledge for Controlling Road Extraction in Urban Areas. In: 
IEEE/ISPRS joint Workshop on Remote Sensing and Data Fusion over 
Urban Areas. 
Hinz, S., Baumgartner, A., Mayer, H., Wiedemann, C. and Ebner, H., 
2001b. Road Extraction Focussing on Urban Areas. In: (Baltsavias et al., 
2001), pp. 255-265. 
Laptev, I., Mayer, H., Lindeberg, T., Eckstein, W., Steger, C. and Baum- 
gartner, A., 2000. Automatic Extraction of Roads from Aerial Images 
Based on Scale Space and Snakes. Machine Vision and Applications 
12(1), pp. 22-31. 
Mayer, H. and Steger, C., 1998. Scale-Space Events and Their Link to 
Abstraction for Road Extraction. ISPRS Journal of Photogrammetry and 
Remote Sensing 53(2), pp. 62-75. 
Price, K., 2000. Urban Street Grid Description and Verification. In: 5th 
IEEE Workshop on Applications of Computer Vision, pp. 148-154. 
Ruskoné, R., 1996. Road Network Automatic Extraction by Local Con- 
text Interpretation: application to the production of cartographic data. 
PhD thesis, Université de Marne-la-Vallée. 
Tönjes, R., Growe, S., Bückner, J. and Liedtke, C.-E., 1999. Knowledge- 
Based Interpretation of Remote Sensing Images Using Semantic Nets. 
Photogrammetric Engineering and Remote Sensing 65(7), pp. 811-821. 
Tupin, F., Bloch, I. and Maître, H., 1999. A First Step Toward Automatic 
Interpretation of SAR Images Using Evidential Fusion of Several Struc- 
ture Detectors. IEEE Transactions on Geoscience and Remote Sensing 
37(3), pp. 1327-1343. 
Wang, Y. and Trinder, J., 2000. Road Network Extraction by Hierarchical 
Grouping. In: International Archives of Photogrammetry and Remote 
Sensing, Vol. 33, part B. 
Wiedemann, C. and Ebner, H., 2000. Automatic Completion and Evalua- 
tion of Road Networks. In: International Archives of Photogrammetry and 
Remote Sensing, Vol. 33, part B. 
Zhang, C. and Baltsavias, E., 1999. Road Network Detection by Mathe- 
matical Morphology. In: International Workshop on 3D Geospatial Data 
Production: Meeting Application Requirements, Paris. 