
Figure 3. Upper row: steps of building corner segmentation in slant range geometry with illumination direction from left to right; lower row: steps of the InSAR height filtering and slant range to ground range projection of the building corner lines
The particularities of Synthetic Aperture Radar (SAR) and 
optical cameras in terms of sensor principle and viewing 
geometry result in very different properties of the observed 
objects in the acquired imagery. In Fig. 4a an elevated object P 
of height h above ground is imaged by both a SAR sensor and 
an optical sensor (OPT). SAR is an active technique measuring 
slant ranges to ground objects with a rather poor angular 
resolution in elevation direction. Layover, foreshortening, and 
shadowing effects consequently occur and complicate the 
interpretation of urban scenes. Buildings are therefore displaced towards the sensor. Point P in Fig. 4a is thus mapped to point PS in the image. The degree of displacement depends on the object height h and the off-nadir angle θ1 of the SAR sensor.
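The magnitude of this effect can be sketched with the standard flat-terrain layover relation (the following equation illustrates the geometry described above and is not quoted from the text): a point at height h appears shifted towards the sensor in ground range by approximately

\[ d_{\mathrm{SAR}} \approx \frac{h}{\tan\theta_1} \]

so the displacement grows with building height and with steeper incidence (smaller θ1).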
By contrast, optical sensors are passive sensors acquiring images with small off-nadir angles. They measure angles to ground objects rather than distances. Elevated objects like P in Fig. 4a that are not located directly in the nadir view of the sensor are displaced away from the sensor. Instead of being mapped to P′, P is mapped to PO in the image. The degree of displacement
depends on the distance between a building and the sensor’s nadir point as well as on the building’s height. The further away an elevated object P is located from the nadir axis of the optical sensor (increasing θ2) and the higher it is, the more the building roof is displaced. The higher P is, the further away it lies from the optical nadir axis, and the greater the off-nadir angle θ2 becomes, the longer the distance between PO and PS will be.
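The optical counterpart is the classical relief-displacement relation of photogrammetry; as a sketch under the same flat-terrain assumption (again not quoted from the text),

\[ d_{\mathrm{OPT}} \approx h\,\tan\theta_2 = \frac{r\,h}{H} \]

where r is the radial distance of the object from the nadir point and H the flying height; the two forms are equivalent since tan θ2 = r/H.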
The optical data was ortho-rectified by means of a DTM in 
order to reduce image distortions due to terrain undulations. 
Building façades stay visible and roofs are displaced away from 
the sensor nadir point since buildings are not included in the 
DTM. This displacement effect can be seen in Figs. 4b to 4d. In
Fig. 4b the building in the optical image is overlaid with its 
cadastral boundaries. The building roof is displaced to the right 
since the sensor nadir point is located on the left. The upper 
right part of the building is more shifted to the right than the 
lower left part because it is higher (see Fig. 4d for building 
height). Fig. 4c shows the same cut-out overlaid with the corner 
line extracted from the corresponding InSAR cut-out. This corner line represents the location where the building wall meets the ground, which can clearly be seen in Fig. 4d. Due to the previously outlined perspective effect, the building roof falls to the right over the corner line. This effect is of high interest and can be exploited for three-dimensional modelling of the scene (Inglada and Giros, 2004; Wegner and Soergel, 2008) because the distance between the corner line and the building edge comprises height information.
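To illustrate why this distance carries height information, consider the relief-displacement sketch above: the corner line marks the true wall footprint, while the roof edge in the ortho-rectified optical image is shifted away from the nadir point by roughly h tan θ2. Under that simplifying assumption, the measured ground distance d between corner line and roof edge yields

\[ h \approx \frac{d}{\tan\theta_2} \]

This derivation is only an illustrative reading of the geometry described here, not a formula taken from the cited works.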
4.2 Joint classification framework 
A joint classification is carried out after having projected the 
optical and the InSAR primitive objects to the same ground 
geometry. In order to combine the building hints from optical 
and InSAR data, a fusion step is required. One possibility is data fusion in a Bayesian framework; another is Dempster-Shafer evidential theory (Klein, 2004). Both approaches usually require an object to be represented identically in the different sensor outputs, i.e., exactly the same region is found in both datasets, albeit with slightly different classification results. This requirement is not met in the case of
the combination of line features from InSAR data with roof 
regions from optical imagery. 
Hence, combined analysis is based on the linear regression 
classifier already used for building extraction from optical data 
in (Mueller and Zaum, 2005). All potential building objects 
from the optical image are evaluated based on a set of optical 
features described in section 2.2 and on the InSAR corner line 
objects. The evaluation process is split up into two parts, an 
optical part and an InSAR part. Optical primitive objects are believed to contribute more information to building detection and hence their weight is set to two thirds. InSAR data is assumed to contribute less information to overall building recognition, and thus the weight of the primitive objects derived from it is set to the remaining one third.