2. SIMULATION CONCEPT 
The simulation approach presented in this paper is based on ray tracing algorithms provided by POV Ray (Persistence of Vision Ray Tracer), a freeware ray tracing software. The main advantages of POV Ray are free access to its source code, optimized processing time, separability of multiple reflections, and existing interfaces to common 3D model formats. In order to provide the necessary output data for two-dimensional analysis of reflection phenomena, additional parts have been added to POV Ray's source code. The simulation concept consists of four major parts:
• Modeling of scene objects (Section 2.1) 
• Sampling of the 3D model scene in POV Ray 
(Section 2.2) 
• Creation of reflectivity maps (Section 2.3) 
• 3D analysis of reflection effects by means of output 
data provided by POV Ray (Section 2.4) 
In the following subsections, the processing chain will be 
explained in more detail. 
Figure 1: Approximation of the SAR system by a cylindrical light source and an orthographic camera; 3D sampling by coordinates in azimuth, slant-range, and elevation
2.1 Modeling of scene objects 
First, the 3D scene to be illuminated by the virtual SAR sensor 
has to be described in the modeling step. 3D models can be 
designed in POV Ray or can be imported into the POV Ray 
environment. Then, parameters are adapted for describing the 
reflection behavior at object surfaces. To this end, POV Ray 
offers parametric models for specular reflection and diffuse 
reflection. A reflectivity factor for each surface defines the loss 
of intensity affecting rays specularly reflected at object 
surfaces. 
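As an illustration of this surface description, the following Python sketch models the intensity loss along a chain of specular reflections by a per-surface reflectivity factor. The names (Surface, intensity_after_bounces) and the simple multiplicative model are illustrative assumptions and do not reproduce POV Ray's internal implementation.

```python
# Minimal sketch (assumption): the loss of intensity at each specular bounce
# is modeled by multiplying the ray intensity with the reflectivity factor
# (in [0, 1]) of the surface that was hit. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Surface:
    reflectivity: float  # specular reflectivity factor in [0, 1]

def intensity_after_bounces(initial_intensity: float, surfaces_hit: list) -> float:
    """Intensity remaining after a chain of specular reflections."""
    intensity = initial_intensity
    for surface in surfaces_hit:
        intensity *= surface.reflectivity  # loss at each specular bounce
    return intensity

# Example: a double-bounce path over two surfaces with reflectivity 0.8 and 0.6
print(intensity_after_bounces(1.0, [Surface(0.8), Surface(0.6)]))  # 0.48
```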
In the case of a modeled SAR system, both the light source and the camera are located at the same position in space. The
concept for approximating the imaging geometry of the SAR 
system is shown in Figure 1. Focusing effects due to SAR 
processing in azimuth and range are considered by using a 
cylindrical light source and an orthographic camera whose 
image plane is hit perpendicularly by incoming signals. 
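To make the orthographic imaging geometry of Figure 1 more tangible, the sketch below projects a 3D scene point into (azimuth, slant-range) image coordinates. The choice of reference point, azimuth axis, and look direction is an illustrative assumption and not part of the simulator's published interface.

```python
# Minimal sketch (assumption): azimuth is measured along the unit vector
# azimuth_axis, slant range along the orthogonal look direction look_dir;
# both are referred to a reference point plane_origin on the image plane.
import numpy as np

def to_azimuth_range(point, plane_origin, azimuth_axis, look_dir):
    """Project a 3D scene point into (azimuth, slant-range) image coordinates."""
    d = np.asarray(point, dtype=float) - np.asarray(plane_origin, dtype=float)
    azimuth = float(np.dot(d, azimuth_axis))   # along-track coordinate
    slant_range = float(np.dot(d, look_dir))   # distance along the look direction
    return azimuth, slant_range

# Example: side-looking geometry with azimuth along y and a 45 degree look angle
look = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
print(to_azimuth_range([10.0, 5.0, 0.0], [0.0, 0.0, 100.0], np.array([0.0, 1.0, 0.0]), look))
```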
2.2 Sampling of the 3D model scene 
For analyzing backscattered signals within the modeled 3D 
scene, rays are followed in reverse direction starting at the 
center of an image pixel and ending at the ray’s origin at the 
light source (Whitted, 1980). This concept is commonly 
referred to as Backwards Ray Tracing (Glassner, 2002). Since 
ray tracing is performed for each pixel of the image plane, 
output data for creating reflectivity maps is derived by discrete 
sampling of the three-dimensional object scene (Auer et al., 
2008). 
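Purely for illustration, the per-pixel sampling loop could be organized as sketched below; trace_ray and the fields it returns (bounce level, intensity, path depths) are hypothetical stand-ins for the data recorded by the modified POV Ray code, not its actual interface.

```python
# Minimal sketch (assumption): for each image pixel, a ray is traced backwards
# from the pixel center into the scene; every detected contribution is stored
# with its bounce level, intensity, and the depth values along the ray path.
# trace_ray() is a hypothetical stand-in for the modified POV Ray routine.
def sample_scene(image_width, image_height, trace_ray):
    contributions = []
    for row in range(image_height):
        for col in range(image_width):
            # Backwards ray tracing: start at the pixel center and follow
            # the ray through the scene towards the light source.
            for hit in trace_ray(row, col):
                contributions.append({
                    "pixel": (row, col),
                    "bounce_level": hit["bounce_level"],  # 1 = single, 2 = double, ...
                    "intensity": hit["intensity"],
                    "depths": hit["depths"],              # r_1, r_2, ... along the path
                })
    return contributions
```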
Coordinates in azimuth and range are derived by using depth 
information in slant-range provided during the sampling step. 
For instance, according to Figure 1, the focused azimuth coordinate a_f and the slant-range coordinate r_f of a double-bounce contribution are calculated by:

a_f = (a_0 + a_p) / 2    (1)

r_f = (r_1 + r_2 + r_3) / 2    (2)

where a_0, a_p = azimuth coordinates of the ray's origin and the ray's destination at the image plane
r_1, r_2, r_3 = depth values derived while tracing the ray through the 3D model scene
So far, only two axes of the three-dimensional imaging system - 
azimuth and range - have been used for reflection analysis 
(Auer et al., 2008). However, the third dimension, elevation, may enhance the simulator's capabilities for 3D analysis of reflection effects. To this end, extraction of
elevation data has been added to the sampling step. According 
to the imaging concept shown in Figure 1, the elevation 
coordinate for a double bounce contribution is derived by 
means of the following equation: 
e_f = (e_0 + e_p) / 2    (3)

where e_0, e_p = elevation coordinates of the ray's origin and the ray's destination at the image plane
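A small numeric sketch of Eqs. (1) to (3) for a double-bounce contribution is given below; the variable names follow the notation above, and the example values are arbitrary.

```python
# Sketch of Eqs. (1)-(3) for a double-bounce contribution.
def focused_coordinates(a_0, a_p, e_0, e_p, r_1, r_2, r_3):
    """Return (azimuth, slant-range, elevation) of a double-bounce signal."""
    a_f = 0.5 * (a_0 + a_p)        # Eq. (1): mean azimuth of ray origin and destination
    r_f = 0.5 * (r_1 + r_2 + r_3)  # Eq. (2): half of the total signal path
    e_f = 0.5 * (e_0 + e_p)        # Eq. (3): mean elevation of ray origin and destination
    return a_f, r_f, e_f

# Example with arbitrary values (units consistent with the scene model):
print(focused_coordinates(a_0=2.0, a_p=4.0, e_0=10.0, e_p=30.0, r_1=50.0, r_2=7.0, r_3=55.0))
# -> (3.0, 56.0, 20.0)
```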
At this point, elevation data derived during the sampling step 
shall be discussed in more detail. Due to Eq. (3) and the discrete 
sampling of the scene, all backscattering objects are assumed to 
behave as point scatterers. Resolution in elevation is not 
affected by limits occurring due to the size of sampling 
intervals along the elevation direction or the length of the 
elevation aperture (Nannini et al., 2008). From a physical point 
of view, deriving discrete points directly in elevation direction 
may be a disadvantage since comparison of the processed 
reflectivity function with a simulated one could be a desirable 
task. For instance, in the case of single bounce, the discrete 
concept will not be able to represent a planar surface 
continuously but only by discrete points. 
For layover caused by multiple reflections along the elevation direction, the discrete simulation concept is nonetheless reasonable, since approaches for tomographic analysis also search for scatterers whose backscattered intensity is concentrated in individual points along the elevation direction. Moreover, concentrating on scene and SAR geometry while neglecting the physical characteristics offers advantages for overcoming well-known limitations of tomographic analysis (Zhu et al., 2008). For instance, it leads to a better understanding of
the SAR geometry in the elevation direction by means of 
simulating the reflectivity slice which is helpful for 3D 
reconstruction. Additionally, it has the potential to provide the number of scatterers in a cell as a priori knowledge for parametric tomographic estimators if the scene geometry is available at a
very detailed level, e.g. based on airborne LIDAR surface 
models.
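To make the idea of a simulated reflectivity slice concrete, the sketch below bins simulated point contributions of a single azimuth-range cell into discrete elevation intervals; the binning scheme and names are illustrative assumptions rather than part of the published simulator.

```python
# Minimal sketch (assumption): accumulate simulated point scatterer
# contributions of one azimuth-range cell into a discrete reflectivity
# profile along the elevation direction.
import numpy as np

def reflectivity_slice(elevations, intensities, e_min, e_max, n_bins):
    """Histogram of backscattered intensity along elevation for one cell."""
    profile, edges = np.histogram(
        elevations, bins=n_bins, range=(e_min, e_max), weights=intensities
    )
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, profile

# Example: two discrete scatterers inside the same azimuth-range cell
centers, profile = reflectivity_slice(
    elevations=[12.0, 31.5], intensities=[0.7, 0.4], e_min=0.0, e_max=40.0, n_bins=8
)
print(list(zip(centers, profile)))
```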
	        