
CMRT09

Access restriction

There is no access restriction for this record.

Copyright

CC BY: Attribution 4.0 International (CC BY 4.0).

Bibliographic data


Monograph

Persistent identifier:
856955019
Author:
Stilla, Uwe
Title:
CMRT09
Sub title:
object extraction for 3D city models, road databases, and traffic monitoring ; concepts, algorithms and evaluation ; Paris, France, September 3 - 4, 2009 ; [joint conference of ISPRS working groups III/4 and III/5]
Scope:
X, 234 pages
Year of publication:
2009
Place of publication:
Lemmer
Publisher of the original:
GITC
Identifier (digital):
856955019
Illustration:
Illustrations, diagrams, maps
Language:
English
Usage licence:
Attribution 4.0 International (CC BY 4.0)
Publisher of the digital copy:
Technische Informationsbibliothek Hannover
Place of publication of the digital copy:
Hannover
Year of publication of the digital copy:
2016
Document type:
Monograph
Collection:
Earth sciences

Chapter

Title:
RAY TRACING AND SAR-TOMOGRAPHY FOR 3D ANALYSIS OF MICROWAVE SCATTERING AT MAN-MADE OBJECTS S. Auer, X. Zhu, S. Hinz, R. Bamler
Document type:
Monograph
Structure type:
Chapter

Table of contents

  • CMRT09
  • Cover
  • ColorChart
  • Title page
  • Workshop Committees
  • Program Committee:
  • Preface
  • Contents
  • EFFICIENT ROAD MAPPING VIA INTERACTIVE IMAGE SEGMENTATION O. Barinova, R. Shapovalov, S. Sudakov, A. Velizhev, A. Konushin
  • SURFACE MODELLING FOR ROAD NETWORKS USING MULTI-SOURCE GEODATA Chao-Yuan Lo, Liang-Chien Chen, Chieh-Tsung Chen, and Jia-Xun Chen
  • AUTOMATIC EXTRACTION OF URBAN OBJECTS FROM MULTI-SOURCE AERIAL DATA Adriano Mancini, Emanuele Frontoni and Primo Zingaretti
  • ROAD ROUNDABOUT EXTRACTION FROM VERY HIGH RESOLUTION AERIAL IMAGERY M. Ravanbakhsh, C. S. Fraser
  • ASSESSING THE IMPACT OF DIGITAL SURFACE MODELS ON ROAD EXTRACTION IN SUBURBAN AREAS BY REGION-BASED ROAD SUBGRAPH EXTRACTION Anne Grote, Franz Rottensteiner
  • VEHICLE ACTIVITY INDICATION FROM AIRBORNE LIDAR DATA OF URBAN AREAS BY BINARY SHAPE CLASSIFICATION OF POINT SETS W. Yao, S. Hinz, U. Stilla
  • TRAJECTORY-BASED SCENE DESCRIPTION AND CLASSIFICATION BY ANALYTICAL FUNCTIONS D. Pfeiffer, R. Reulke
  • 3D BUILDING RECONSTRUCTION FROM LIDAR BASED ON A CELL DECOMPOSITION APPROACH Martin Kada, Laurence McKinley
  • A SEMI-AUTOMATIC APPROACH TO OBJECT EXTRACTION FROM A COMBINATION OF IMAGE AND LASER DATA S. A. Mumtaz, K. Mooney
  • COMPLEX SCENE ANALYSIS IN URBAN AREAS BASED ON AN ENSEMBLE CLUSTERING METHOD APPLIED ON LIDAR DATA P. Ramzi, F. Samadzadegan
  • EXTRACTING BUILDING FOOTPRINTS FROM 3D POINT CLOUDS USING TERRESTRIAL LASER SCANNING AT STREET LEVEL Karim Hammoudi, Fadi Dornaika and Nicolas Paparoditis
  • DETECTION OF BUILDINGS AT AIRPORT SITES USING IMAGES & LIDAR DATA AND A COMBINATION OF VARIOUS METHODS Demir, N., Poli, D., Baltsavias, E.
  • DENSE MATCHING IN HIGH RESOLUTION OBLIQUE AIRBORNE IMAGES M. Gerke
  • COMPARISON OF METHODS FOR AUTOMATED BUILDING EXTRACTION FROM HIGH RESOLUTION IMAGE DATA G. Vozikis
  • SEMI-AUTOMATIC CITY MODEL EXTRACTION FROM TRI-STEREOSCOPIC VHR SATELLITE IMAGERY F. Tack, R. Goossens, G. Buyuksalih
  • AUTOMATED SELECTION OF TERRESTRIAL IMAGES FROM SEQUENCES FOR THE TEXTURE MAPPING OF 3D CITY MODELS Sébastien Bénitez and Caroline Baillard
  • CLASSIFICATION SYSTEM OF GIS-OBJECTS USING MULTI-SENSORIAL IMAGERY FOR NEAR-REALTIME DISASTER MANAGEMENT Daniel Frey and Matthias Butenuth
  • AN APPROACH FOR NAVIGATION IN 3D MODELS ON MOBILE DEVICES Wen Jiang, Wu Yuguo, Wang Fan
  • GRAPH-BASED URBAN OBJECT MODEL PROCESSING Kerstin Falkowski and Jürgen Ebert
  • A PROOF OF CONCEPT OF ITERATIVE DSM IMPROVEMENT THROUGH SAR SCENE SIMULATION D. Derauw
  • COMPETING 3D PRIORS FOR OBJECT EXTRACTION IN REMOTE SENSING DATA Konstantinos Karantzalos and Nikos Paragios
  • OBJECT EXTRACTION FROM LIDAR DATA USING AN ARTIFICIAL SWARM BEE COLONY CLUSTERING ALGORITHM S. Saeedi, F. Samadzadegan, N. El-Sheimy
  • BUILDING FOOTPRINT DATABASE IMPROVEMENT FOR 3D RECONSTRUCTION: A DIRECTION AWARE SPLIT AND MERGE APPROACH Bruno Vallet and Marc Pierrot-Deseilligny and Didier Boldo
  • A TEST OF AUTOMATIC BUILDING CHANGE DETECTION APPROACHES Nicolas Champion, Franz Rottensteiner, Leena Matikainen, Xinlian Liang, Juha Hyyppä and Brian P. Olsen
  • CURVELET APPROACH FOR SAR IMAGE DENOISING, STRUCTURE ENHANCEMENT, AND CHANGE DETECTION Andreas Schmitt, Birgit Wessel, Achim Roth
  • RAY TRACING AND SAR-TOMOGRAPHY FOR 3D ANALYSIS OF MICROWAVE SCATTERING AT MAN-MADE OBJECTS S. Auer, X. Zhu, S. Hinz, R. Bamler
  • THEORETICAL ANALYSIS OF BUILDING HEIGHT ESTIMATION USING SPACEBORNE SAR-INTERFEROMETRY FOR RAPID MAPPING APPLICATIONS Stefan Hinz, Sarah Abelen
  • FUSION OF OPTICAL AND INSAR FEATURES FOR BUILDING RECOGNITION IN URBAN AREAS J. D. Wegner, A. Thiele, U. Soergel
  • FAST VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE BURSTS Karsten Kozempel and Ralf Reulke
  • REFINING CORRECTNESS OF VEHICLE DETECTION AND TRACKING IN AERIAL IMAGE SEQUENCES BY MEANS OF VELOCITY AND TRAJECTORY EVALUATION D. Lenhart, S. Hinz
  • UTILIZATION OF 3D CITY MODELS AND AIRBORNE LASER SCANNING FOR TERRAIN-BASED NAVIGATION OF HELICOPTERS AND UAVs M. Hebel, M. Arens, U. Stilla
  • STUDY OF SIFT DESCRIPTORS FOR IMAGE MATCHING BASED LOCALIZATION IN URBAN STREET VIEW CONTEXT David Picard, Matthieu Cord and Eduardo Valle
  • TEXT EXTRACTION FROM STREET LEVEL IMAGES J. Fabrizio, M. Cord, B. Marcotegui
  • CIRCULAR ROAD SIGN EXTRACTION FROM STREET LEVEL IMAGES USING COLOUR, SHAPE AND TEXTURE DATABASE MAPS A. Arlicot, B. Soheilian and N. Paparoditis
  • IMPROVING IMAGE SEGMENTATION USING MULTIPLE VIEW ANALYSIS Martin Drauschke, Ribana Roscher, Thomas Läbe, Wolfgang Förstner
  • REFINING BUILDING FACADE MODELS WITH IMAGES Shi Pu and George Vosselman
  • AN UNSUPERVISED HIERARCHICAL SEGMENTATION OF A FAÇADE BUILDING IMAGE IN ELEMENTARY 2D - MODELS Jean-Pascal Burochin, Olivier Tournaire and Nicolas Paparoditis
  • GRAMMAR SUPPORTED FACADE RECONSTRUCTION FROM MOBILE LIDAR MAPPING Susanne Becker, Norbert Haala
  • Author Index
  • Cover

Full text

Eventually, the simulation process provides the following output data for each reflection contribution detected in the 3D object scene (see the sketch after this list):
  • coordinates in azimuth, slant range, and elevation [units: meter]
  • intensity data [dimensionless value between 0 and 1]
  • bounce level information for every reflection contribution [1 for single bounce, 2 for double bounce, etc.]
  • flags marking specular reflection effects [value 0 or 1]
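
As a concrete illustration of this output record, the following minimal Python sketch defines one such contribution as a small data structure. The class and field names are editorial assumptions, not taken from the authors' simulator.

from dataclasses import dataclass

@dataclass
class ReflectionContribution:
    """One reflection contribution reported by the simulator (illustrative sketch)."""
    azimuth: float       # coordinate in azimuth [m]
    slant_range: float   # coordinate in slant range [m]
    elevation: float     # coordinate in elevation [m]
    intensity: float     # dimensionless value between 0 and 1
    bounce_level: int    # 1 = single bounce, 2 = double bounce, ...
    specular: bool       # flag marking a specular reflection effect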
Figure 2: Left: simulation using a box model with a size of 20 m x 20 m x 20 m, line of sight indicated by arrow; right: simulated reflectivity map (slant range indicated by arrow).
Figure 3: Simulation using a step model (left), line of sight indicated by arrow; simulated reflectivity map (right), slant range indicated by arrow.
2.3 Reflectivity maps in azimuth and slant range 
Firstly, all reflection contributions are mapped into the azimuth-slant-range plane. Afterwards, a regular grid is imposed onto the plane and the intensity contributions are summed up for each image pixel. Figure 2 shows the resulting reflectivity map for a cube (dimensions: 20 m x 20 m x 20 m) illuminated by the virtual SAR sensor at an incidence angle of 45 degrees. The size of one resolution cell has been fixed to cover 0.5 m x 0.5 m in azimuth and slant range. The surface parameters are chosen such that the box surfaces can be clearly distinguished from the ground, i.e. in the current example the box surfaces show stronger diffuse backscattering than the surrounding ground. Following the ground range direction from top to bottom, diffuse single-bounce contributions of the ground are visible, followed by a layover area of the ground, the wall of the box, and the top of the box. At the end of the layover area, a strong double-bounce line is visible, caused by the interaction between the front wall and the ground in front of the box.
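
The mapping step described above can be illustrated by a short sketch that bins contributions (e.g. records like the ReflectionContribution sketch given earlier) into a regular azimuth / slant-range grid and sums the intensities per cell, here with 0.5 m x 0.5 m cells as in the box example. The function name, the grid extents, and the use of NumPy are assumptions made for illustration only, not the simulator's actual interface.

import numpy as np

def reflectivity_map(contributions, cell=0.5, az_max=60.0, rg_max=60.0):
    """Sum intensity contributions per azimuth / slant-range cell (illustrative sketch)."""
    n_rg = int(rg_max / cell)
    n_az = int(az_max / cell)
    image = np.zeros((n_rg, n_az))
    for c in contributions:
        row = int(c.slant_range / cell)       # slant-range bin
        col = int(c.azimuth / cell)           # azimuth bin
        if 0 <= row < n_rg and 0 <= col < n_az:
            image[row, col] += c.intensity    # incoherent sum per resolution cell
    return image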
For this type of scene geometry, a 2D simulation and analysis is usually sufficient. The next section, however, illustrates examples that underline the necessity of including the elevation direction as a third dimension in the simulation.
Figure 4: Selection of a pixel for elevation analysis (left); definition of three slices (right) in slant-range (1), azimuth (2), and elevation (3) direction.
2.4 3D analysis of scattering effects 
Figure 3 shows a reflectivity map simulated by illuminating a step model (width: 10 m, length: 20 m, height: 20 m). For providing the map, the same imaging geometry has been chosen as for the box example, i.e. the step was oriented towards the sensor and the incidence angle was fixed to 45 degrees in order to obtain specific overlay effects for single- and double-bounce contributions, which are explained in the following.

Compared to the reflectivity map containing the box model (Figure 2), the reflectivity map of the step shows similar characteristics. Both the layover area of single-bounce contributions and the location of the focused double-bounce contributions are identical. Only the size of the shadow zone indicates a height difference between the illuminated objects. In the case of the step model, separation of the dihedrals - the two right angles at the steps - is impossible in the reflectivity map, since all double-bounce effects are condensed into one single line.

Hence, separating scattering effects in the elevation direction may be helpful, since it makes it possible to resolve layover effects and thus to distinguish several scatterers within one resolution cell. To this end, an interactive click-tool has been included in the simulator for defining two-dimensional slices to be analyzed. In the case of the given reflectivity map for the step model, one pixel is selected, e.g. located in the double-bounce area as shown in Figure 4. Based on the coordinates of the pixel center, three slices are defined (see the sketch after this list):
  • slice no. 1 for displaying elevation data in slant-range direction
  • slice no. 2 for displaying elevation data in azimuth direction
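
The interactive slice selection described above can likewise be illustrated with a short sketch: for a selected pixel center, it gathers all contributions that fall into that azimuth / slant-range cell so that their elevation coordinates, intensities, and bounce levels can be inspected separately, which is what allows scatterers stacked in layover to be resolved. Function and parameter names are hypothetical and not part of the authors' click-tool.

def elevation_slice(contributions, az_center, rg_center, cell=0.5):
    """Collect contributions inside one azimuth / slant-range cell (illustrative sketch)."""
    half = cell / 2.0
    hits = [c for c in contributions
            if abs(c.azimuth - az_center) <= half
            and abs(c.slant_range - rg_center) <= half]
    # Sort along elevation so that scatterers stacked in layover appear in order.
    return sorted((c.elevation, c.intensity, c.bounce_level) for c in hits)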



Citation recommendation

Stilla, Uwe. CMRT09. GITC, 2009.
Please check the citation before using it.
