Mapping without the sun

Figure 2. A GPU uses more transistors as arithmetic logic units (NVIDIA, 2007)
In rasterization, each geometry primitive is calculated separately from the others, which allows for a highly parallel design. The visualization is controlled by the so-called graphics pipeline (see Figure 3). After the transformation from world to screen space, calculated by the so-called vertex shader, the data is rasterized by the hardware rasterizer of the graphics card. Each resulting pixel is piped through the pixel shader, another specialized and programmable part of today's graphics hardware. The pixel shader computes the color of each displayed pixel according to the lighting and material or texture information of that pixel. Due to the flexible and programmable shaders of modern graphics cards, different methods for calculating the reflections can be implemented. Finally, the so-called z-buffering is done before the image is displayed on the screen or saved in the texture memory of the graphics hardware.
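The final z-buffer stage can be illustrated with a minimal software model. This is a hypothetical Python sketch of the per-pixel depth test, not how the fixed-function hardware is implemented:

```python
# Minimal software model of z-buffering: for every pixel, keep only the
# fragment that is closest to the viewer. Illustrative only; real GPUs
# perform this test in fixed-function hardware after the pixel shader.

def z_buffer(fragments, width, height, far=float("inf")):
    """fragments: iterable of (x, y, depth, color) tuples."""
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:      # new fragment is closer: overwrite
            depth[y][x] = z
            color[y][x] = c
    return color

# Two fragments land on the same pixel; the nearer one (z = 1.0) wins.
frags = [(0, 0, 5.0, "far"), (0, 0, 1.0, "near")]
image = z_buffer(frags, 1, 1)
print(image[0][0])  # -> near
```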
Figure 3. Programmable graphics pipeline of modern graphics cards (vertex shader, texture, pixel shader)
SAR simulations are visualization applications, and GPUs are therefore well suited for them. But radar images differ in many ways from images acquired by passive sensor systems. Using the flexible, programmable GPUs, the different imaging geometry and radiometry of radar images can be implemented, as described in the following section.
3. REAL-TIME SAR SIMULATION USING SARVIZ 1.0 
The real-time SAR simulation tool SARViz (Balz, 2006) has been constantly improved since it was first presented in 2006. The newest version supports squint angles, real multi-look processing, the visualization of moving objects, and simple bi-static configurations. SARViz uses methods developed in computer graphics to simulate SAR images. The GPU processes triangles using local illumination; each triangle is visualized independently of the other triangles. Each triangle vertex is processed by the vertex shader, which handles the geometry. After rasterization, the radiometry of each pixel is calculated by the pixel (or fragment) shader.
3.1 SAR geometry 
The vertex shader transforms each point from the model coordinate system to world coordinates and subsequently to image coordinates. The so-called camera transformation matrix (Microsoft, 2005) has to be adapted to achieve the desired parallel projection.
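An orthographic projection matrix of the kind such a camera transform is replaced with can be sketched as follows. The conventions here are illustrative (an OpenGL-style mapping of the view volume to a unit cube), not the exact Direct3D matrix referenced above:

```python
import numpy as np

# Sketch of an orthographic (parallel) projection matrix: the view
# volume [l, r] x [b, t] x [n, f] is mapped linearly to a unit cube,
# so projection rays stay parallel instead of converging in a point.

def ortho(l, r, b, t, n, f):
    return np.array([
        [2.0 / (r - l), 0.0, 0.0, -(r + l) / (r - l)],
        [0.0, 2.0 / (t - b), 0.0, -(t + b) / (t - b)],
        [0.0, 0.0, -2.0 / (f - n), -(f + n) / (f - n)],
        [0.0, 0.0, 0.0, 1.0],
    ])

M = ortho(-10.0, 10.0, -10.0, 10.0, 1.0, 100.0)
corner = M @ np.array([10.0, 0.0, -1.0, 1.0])  # right edge, near plane
print(corner[:3])  # x -> 1.0, z -> -1.0
```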
The range position of each object in a SAR image depends on the distance between the object and the sensor. Higher points, i.e. points with larger z-values, are closer to the sensor and are therefore mapped closer to near-range. The resulting shift in range direction Δx depends on the height above ground level z and the off-nadir angle θ_off:

Δx = z · tan(θ_off)
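As a small numeric check of this formula (the helper name is illustrative):

```python
import math

# Range shift of an elevated point, following the formula above:
# delta_x = z * tan(theta_off).
def range_shift(z, theta_off_deg):
    return z * math.tan(math.radians(theta_off_deg))

# A point 10 m above ground at an off-nadir angle of 45 degrees is
# shifted 10 m towards near-range.
print(round(range_shift(10.0, 45.0), 3))  # -> 10.0
```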
3.2 SAR radiometry 
The pixel shader processes every pixel to compute the corresponding radiometry. For each pixel the corresponding face normal is determined using the 3D model. Taking material properties, like the dielectric constant, and sensor properties into account, the reflection strength can be calculated. SARViz offers three different methods of backscattering computation: a statistical method based on measurements by Ulaby & Dobson (1989), a direct calculation based on the roughness and dielectric constant of the material developed by Zribi (2006), and an adaptation of computer graphics methods. The adaptation of the computer graphics methods is most commonly used, due to its computing-time efficiency.
According to the Phong reflection model (Phong, 1975), three illumination elements (diffuse, specular and ambient) are combined. In computer graphics, the diffuse element is calculated using the material properties and the light strength as well as the light position and face normal n (Gray, 2003). In the SAR case, the diffuse element α_d is determined by the reflection strength r and the sensor position vector s:

α_d = r(n, s)
The specular part of the overall reflection value can be derived from the visualization of optical specular reflections based on Blinn's (1977) work, with p ≈ 32. Because in the mono-static SAR case the "light" and "camera" positions are identical, the calculation can be simplified:

α_s = r(n, s)^p
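The simplified mono-static combination of both terms can be sketched as follows. The scalar reflectance `r` here is a plain stand-in; the real model derives it from material properties such as the dielectric constant, and the helper names are hypothetical:

```python
import numpy as np

# Because the "light" (radar) and "camera" positions coincide in the
# mono-static case, both the diffuse and the specular component depend
# only on the face normal n and the sensor direction s.

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def reflection(n, s, r=1.0, p=32):
    cos_ns = max(float(np.dot(unit(n), unit(s))), 0.0)
    alpha_d = r * cos_ns          # diffuse component, alpha_d = r(n, s)
    alpha_s = (r * cos_ns) ** p   # specular component, alpha_s = r(n, s)^p
    return alpha_d, alpha_s

# A facet facing the sensor head-on returns both components in full ...
print(reflection([0, 0, 1], [0, 0, 1]))   # -> (1.0, 1.0)

# ... while a facet tilted by 45 degrees keeps a moderate diffuse term
# but loses its specular highlight almost completely (p = 32).
alpha_d, alpha_s = reflection([0, 0, 1], [0, 1, 1])
print(round(alpha_d, 3), alpha_s < 1e-4)  # -> 0.707 True
```

The high exponent p is what produces the narrow, bright specular returns typical of facets oriented perpendicular to the radar beam.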
Comparing the calculated results with the statistical analysis of Ulaby & Dobson, it is possible to retrieve realistic values for the reflection and roughness, which are needed to calculate the overall reflection strength.
The reflection is calculated locally; therefore, multi-reflections as well as shadows are not supported. In the rasterization approach, the paths of the rays are not traced and every vertex and pixel is processed separately, so occlusions are not modeled. By using shadow maps (Williams, 1978), both shadows and occluded areas can be modeled. A shadow map is generated in two steps. First, the scene is rendered from the position of the light source, which in the mono-static case is equivalent to the SAR sensor position. Instead of reflection values, the distance of every rendered pixel to the sensor is written to the so-called shadow map, as depicted in Figure 4.
In the second step the scene is rendered from the position of the virtual camera. SARViz directly simulates ground-range images to avoid the computationally intense transformation from slant-range to ground-range. Because of this, the scene is rendered looking from above. The distance of each pixel to the sensor is calculated after the transformation of the object. If this distance is larger than the distance stored in the shadow map, the pixel is occluded and is not rendered.
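The two passes can be sketched in one dimension. `build_shadow_map` and `in_shadow` are hypothetical helpers illustrating the technique, not SARViz code:

```python
# Minimal 1-D sketch of the two shadow-map passes described above.
# Pass 1: for every shadow-map texel, store the distance of the nearest
# surface to the sensor. Pass 2: while rendering from the virtual
# camera, a pixel is occluded if its own distance to the sensor exceeds
# the stored one (a small bias guards against self-shadowing).

def build_shadow_map(surfaces, n_texels, far=float("inf")):
    """surfaces: iterable of (texel_index, distance_to_sensor)."""
    smap = [far] * n_texels
    for texel, dist in surfaces:
        smap[texel] = min(smap[texel], dist)
    return smap

def in_shadow(smap, texel, dist, bias=1e-3):
    return dist > smap[texel] + bias

smap = build_shadow_map([(0, 5.0), (1, 7.0)], 2)
print(in_shadow(smap, 0, 9.0))  # behind the occluder at 5.0 -> True
print(in_shadow(smap, 1, 7.0))  # the visible surface itself -> False
```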
Figure 4. Shadow map from the sensor view

[…]
3.3 Soft shadows

[…]
[…]

Figure 5. […]
3.4 Spotlight

[…]