
Figure 8. Simple (left) and separately simulated (right) multi-look image of the New Palace (Neues Schloss) in Stuttgart
3.6 Side-lobe visualization 
Strong reflecting objects like corner reflectors can cause typical blooming effects in SAR images. The blooming is caused by the large amount of energy reflected back to the sensor. The simulation result, which is first rendered to a texture, is analyzed and strong reflections are detected. Reflections exceeding a certain threshold are considered to cast side-lobes. These side-lobes are then additively rendered onto the simulation result, as depicted in Figure 9.
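The thresholding-and-additive-rendering step described above can be sketched as follows. This is only a minimal CPU-side illustration, assuming a NumPy intensity array instead of the GPU texture used by SARViz; the threshold value, the cross-shaped lobe pattern and the sinc²-shaped attenuation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def add_side_lobes(sim_image, threshold=0.9, lobe_length=31, attenuation=0.05):
    """Additively render synthetic side-lobes at strong reflections.

    sim_image   : 2D array, simulated SAR intensity (range x azimuth)
    threshold   : relative intensity above which a pixel is treated as a
                  strong (corner-reflector-like) response
    lobe_length : half-length of the side-lobe arms in pixels
    attenuation : scales the side-lobe energy relative to the peak
    """
    out = sim_image.copy()
    peak = sim_image.max()
    # Pixels considered strong enough to cast side-lobes
    ys, xs = np.where(sim_image >= threshold * peak)

    # sinc^2-shaped arm as a rough stand-in for the impulse response
    offsets = np.arange(1, lobe_length + 1)
    arm = attenuation * np.sinc(offsets / 4.0) ** 2

    for y, x in zip(ys, xs):
        amp = sim_image[y, x]
        for k, w in zip(offsets, arm):
            # Cross-shaped lobes along range and azimuth, clipped at borders
            for dy, dx in ((k, 0), (-k, 0), (0, k), (0, -k)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < out.shape[0] and 0 <= xx < out.shape[1]:
                    out[yy, xx] += amp * w
    return out
```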
Figure 9. SARViz simulation of the 3D model of the „Stiftskirche“ in Stuttgart
4. IMAGING GEOMETRY AWARE DATA FUSION 
Fusing SAR data and optical imagery can provide a variety of new information that is not available when analyzing each data set separately. For low or medium resolution data, the fusion can be done with simple methods; in flat terrain, even straightforward geo-coding approaches are suitable. When analyzing vegetation in images with about 25 m ground resolution, the different imaging geometries are not crucial, because the layover of most vegetation-related objects affects less than one pixel.
For today’s high-resolution images this is no longer true. The spatial resolution of SAR sensors has improved tremendously during the last decade. The new TerraSAR-X provides images with a spatial resolution of about one meter (Werninghaus, 2006), and modern airborne systems achieve spatial resolutions in the decimeter range (Ender & Brenner, 2003). In these images, the geometrical effects caused by the distance geometry of a SAR image are significant, especially in urban areas. The appearance of buildings in high-resolution images differs from their low-resolution appearance, and even small structures are visible inside the layover.
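A small worked example may help to quantify this. Assuming the common approximation that a point at height h above the ground is displaced towards the sensor by roughly h / tan(θ) in ground range (θ being the incidence angle), and purely illustrative values for object height, incidence angle and pixel size, the layover stays below one pixel at 25 m resolution but spans many pixels at 1 m resolution:

```python
import math

def layover_ground_range(height_m, incidence_deg):
    """Approximate ground-range layover displacement of a point at
    height_m above the ground for the given incidence angle."""
    return height_m / math.tan(math.radians(incidence_deg))

for height, resolution in [(10.0, 25.0),   # shrub/tree, medium resolution
                           (20.0, 1.0)]:   # building, TerraSAR-X-like resolution
    shift = layover_ground_range(height, incidence_deg=45.0)
    print(f"h = {height:4.1f} m -> layover of about {shift:5.1f} m, "
          f"i.e. {shift / resolution:5.1f} pixels at {resolution} m resolution")
```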
The position, shape and radiometric appearance of any object depend on the sensor position, the sensor properties and the environment of the object. For a successful fusion of high-resolution images from different sensor types, the sensor properties as well as the 3D shape of the objects of interest should be known. The spatial accuracy of these data has to be high, because for any data fusion approach the geo-coding accuracy should match the data resolution (Soergel et al., 2006). If the shape is available, additional information about the object properties can be derived. For bridges, for example, the outlines can easily be determined from aerial photos, whereas deriving them from SAR is difficult. Using the width measured in the aerial image and the true position of the bridge from the double-bounce reflection, the height of the bridge can be determined easily (Soergel et al., 2007). For many applications, however, such approaches are not feasible. Remote sensing is often used precisely because the terrain and the objects are unknown, so any remote sensing approach that requires 3D terrain or shape information as a prerequisite is problematic.
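The bridge example can be reduced to a simple geometric computation. In the approach of Soergel et al. (2007) the width from the aerial image mainly helps to disentangle the overlapping bridge signatures in the SAR image; the sketch below only covers the final height estimation and assumes that the ground-range offset between the layover signature and the double-bounce line can be measured. The formula, variable names and numeric values are illustrative assumptions, not reproduced from that paper.

```python
import math

def bridge_height(layover_offset_m, incidence_deg):
    """Derive bridge height from the ground-range offset between the
    layover signature (displaced towards the sensor) and the
    double-bounce line marking the true bridge position."""
    return layover_offset_m * math.tan(math.radians(incidence_deg))

# Hypothetical measurements: offset taken from the SAR image, incidence
# angle from the acquisition metadata.
offset_m = 12.0   # ground-range distance layover front -> double-bounce line
theta = 55.0      # incidence angle of the SAR acquisition in degrees
print(f"Estimated bridge height: {bridge_height(offset_m, theta):.1f} m")
```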
Multi-sensor data fusion can be implemented as a multi-step strategy. The chosen strategy depends on the application and the available data. Assuming no change in the terrain or the 3D shape between the different images, e.g. because the time difference between the acquisitions is small, the 3D shape of the area can be determined by one sensor and the data fusion is based on the generated model. The 3D model can be generated by standard remote sensing methods like interferometric SAR, LIDAR or photogrammetry (Brenner, 2005). Terrestrial data acquisition by terrestrial laser scanning, photogrammetry or video-based reconstruction (Mordohai et al., 2007) is also possible. Further research could reveal a path to generating 3D models directly from optical and SAR images, but this has not yet been presented. In another approach, changes occurring between the image acquisition times are assumed. In this case, the 3D shape should be generated from the data acquired earlier, and changes occurring between the acquisition dates can be detected based on the fusion of the data.
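The second branch of this strategy, where the 3D shape is taken from the earlier acquisition, could be organized roughly as in the following sketch. The function names and their interfaces are assumptions made for illustration; the paper only outlines the strategy.

```python
def geometry_aware_fusion(earlier_data, later_data, reconstruct_3d,
                          simulate_into, detect_changes):
    """Rough sketch of the two-step fusion strategy outlined above.

    earlier_data / later_data : the two acquisitions, ordered by time
    reconstruct_3d            : builds a 3D model from one data set
                                (InSAR, LIDAR, photogrammetry, ...)
    simulate_into             : projects/simulates the model into the
                                imaging geometry of the other data set
    detect_changes            : compares simulation and measurement
    """
    # The 3D shape is taken from the earlier acquisition, so that changes
    # happening afterwards show up as discrepancies.
    model = reconstruct_3d(earlier_data)

    # Render the model into the imaging geometry of the later acquisition
    # (for SAR data this is what a simulator such as SARViz provides).
    prediction = simulate_into(model, later_data)

    # Discrepancies between prediction and measurement indicate changes.
    return detect_changes(prediction, later_data)
```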
5. SAR SIMULATION ASSISTED CHANGE DETECTION
Assuming stereo images, LIDAR or terrestrial data are available, 3D building models can be generated. The automated generation of building models using LIDAR and GIS footprint information is a well-known approach (Haala & Brenner, 1999). The building models in Figure 11 were reconstructed using this automated method. As with any automated approach, some models are not reconstructed correctly.
Figure 10. Erroneously reconstructed building models
If these erroneously reconstructed models are used for change detection applications, various false alarms will occur.
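A minimal sketch of such a simulation-assisted comparison is given below. It assumes that a SARViz-like simulation of the building models and the real SAR image are co-registered and that building footprint masks are available in the SAR image geometry; the normalized cross-correlation score and the threshold are illustrative choices, not the method reported here.

```python
import numpy as np

def change_score(real_sar, simulated_sar, footprint_mask):
    """Compare real and simulated SAR intensities inside one building
    footprint; a low correlation hints at a change (or at a wrong model)."""
    r = real_sar[footprint_mask].astype(float)
    s = simulated_sar[footprint_mask].astype(float)
    r = (r - r.mean()) / (r.std() + 1e-9)
    s = (s - s.mean()) / (s.std() + 1e-9)
    return float(np.mean(r * s))  # normalized cross-correlation

def detect_changed_buildings(real_sar, simulated_sar, footprints, threshold=0.3):
    """Flag buildings whose simulated signature does not match the image.
    Note: erroneously reconstructed models (cf. Figure 10) fail this test
    as well and therefore produce false alarms."""
    return [bid for bid, mask in footprints.items()
            if change_score(real_sar, simulated_sar, mask) < threshold]
```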
Figure 11. Subset of a DOSAR image of Karlsruhe (left) and SARViz simulation of the area (right)
In Figure 11 tl 
of the erroneo 
is the multi-fr 
EADS Domie 
dir angle is 70 
ing purposes ; 
12, has been 
model to test t 
The detected 
changes from 
tionally some 
visible and ev 
comer of the 1 
to the incomp 
10. 
Figure 13. De 
Fij 
As already pr 
outweighing t 
tion based on 
ated 3D data, 
More reliable 
automatic rec 
published by < 
Using the sen 
on wrongly re 
ralizations in 
not included i 
parking cars, i 
Semi-automat 
ses are anothe 
can be preser 
clarifying the 
building modi 
object enviror 
interpretation.