Technical Commission VII (B7)

    
    
  
Firstly, an edge detection filter is applied to the remotely sensed
image in order to extract the contours of buildings. In our study
the Canny edge detector is used.
Additionally, we applied the first derivative operator (Sobel
operator) to calculate the derivatives in the horizontal and vertical
directions (Gx, Gy). The main point of this analysis is the
calculation of the building contour orientations:
α = arctan(Gy / Gx) - 90°   (1)
Thus, the resulting image presents the objects contours, where 
pixels have values corresponding to their direction. The pixels 
that do not belong to the contour are given a “no data” value. 
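This first step can be sketched in plain NumPy (a minimal illustration, not the exact implementation used in the study; the Canny edge mask is assumed to be computed beforehand, e.g. with OpenCV or scikit-image, and `np.arctan2` is used instead of tan⁻¹ so that Gx = 0 is handled gracefully):

```python
import numpy as np

def contour_orientation(image, edges, no_data=-999.0):
    """Orientation (degrees) at edge pixels; non-edge pixels get no_data.

    image: 2-D grey-value array; edges: boolean edge mask (e.g. Canny output).
    Implements alpha = arctan(Gy / Gx) - 90 deg, per Eq. (1).
    """
    # Sobel kernels for the horizontal (Gx) and vertical (Gy) derivatives
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(image.astype(float), 1, mode="edge")
    gx = np.zeros(image.shape, dtype=float)
    gy = np.zeros(image.shape, dtype=float)
    # Cross-correlate the padded image with both kernels
    for di in range(3):
        for dj in range(3):
            win = pad[di:di + image.shape[0], dj:dj + image.shape[1]]
            gx += kx[di, dj] * win
            gy += ky[di, dj] * win
    alpha = np.degrees(np.arctan2(gy, gx)) - 90.0
    # Pixels outside the contour receive a "no data" value
    return np.where(edges, alpha, no_data)
```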
Secondly, we extract control points from the vector objects. In
the vector map the buildings are symbolized by polygons. The
distance between two polygon vertices is partitioned into
segments. At the centre of each segment a control point is
defined. Thirdly, the contour direction at the position of each
control point is calculated.
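The control point extraction can be illustrated as follows (a hypothetical sketch: `control_points` and `seg_len` are illustrative names, and the exact partitioning rule used in the study may differ):

```python
import numpy as np

def control_points(polygon, seg_len):
    """Control points along polygon edges.

    polygon: list of (x, y) vertices of a closed building polygon.
    Each edge between two vertices is partitioned into segments of
    roughly seg_len; the centre of every segment becomes a control point.
    """
    pts = []
    # Walk over all edges, closing the ring back to the first vertex
    for a, b in zip(polygon, polygon[1:] + polygon[:1]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        length = np.linalg.norm(b - a)
        n_seg = max(1, int(round(length / seg_len)))
        for k in range(n_seg):
            # centre of the k-th segment on edge a -> b
            t = (k + 0.5) / n_seg
            pts.append(tuple(map(float, a + t * (b - a))))
    return pts
```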
Finally, the algorithm examines the contour image (i.e. the
result of the edge detection process) for pixels with the appropriate
contour direction at the position of each control point within the
specified study window. It is assumed that in the case of an
intact contour the window has to contain a number of contour
pixels equal to its size. Consequently, the DPC is defined as the
ratio of the number of pixels found on the raster contour to the
number expected for the intact building.
DPC = (1 / ND) Σi=1..ND [min(P, Ni) / P] · 100%   (2)
Here Ni is the number of pixels found in the i-th study window,
P is the size of the study window in pixels, and ND is the number
of study windows (or control points).
More detailed information on the DPC calculation can be found 
in Sofina et al. (2011). 
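Equation (2) translates directly into code (a sketch under the assumption that the per-window counts Ni have already been obtained):

```python
def dpc(counts, window_size):
    """DPC per Eq. (2).

    counts: the values Ni, i.e. the number of contour pixels with
    matching orientation found in each study window.
    window_size: P, the size of the study window in pixels.
    min(P, Ni) caps every window at a fully intact contour, so an
    intact building yields 100%.
    """
    n_d = len(counts)
    total = sum(min(window_size, n) for n in counts)
    return total / (n_d * window_size) * 100.0
```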
3.2 Calculation of textural features 
Since building roofs are mostly visible in the remotely sensed 
images, we focus our analysis on them. If the building is 
damaged or destroyed, the texture of its roof has changed. The
satellite image captures these changes, which can be identified by
texture analysis. One of the most effective approaches to texture
analysis is the grey value co-occurrence matrix method, which
describes the grey value relationships in the neighbourhood of
the current pixel (Haralick et al., 1973).
Conventional techniques of textural feature calculation exploit a 
fixed rectangular sliding window for the calculation of a grey- 
tone spatial-dependence matrix. The object-oriented GIS 
approach enables an image analysis that is restricted to only the 
area of the investigated object. In order to analyse the image 
area corresponding to the building, a small fragment containing 
the building is cut out from the image under investigation. For 
the obtained picture a binary mask is created, which allows the 
selection of the pixels belonging to the study area. Assuming 
that only pixels from the building area have to be used for 
calculation of textural features, the equations for grey-tone 
spatial-dependence matrices can be modified as follows: 
P(i, j, d, 0°) = #{((k, l), (m, n)) ∈ B × B | k - m = 0, |l - n| = d,
I(k, l) = i, I(m, n) = j}
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B7, 2012 
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia 
P(i, j, d, 45°) = #{((k, l), (m, n)) ∈ B × B | (k - m = d, l - n =
-d) or (k - m = -d, l - n = d), I(k, l) = i, I(m, n) = j}
P(i, j, d, 90°) = #{((k, l), (m, n)) ∈ B × B | |k - m| = d, l - n = 0,
I(k, l) = i, I(m, n) = j}
P(i, j, d, 135°) = #{((k, l), (m, n)) ∈ B × B | (k - m = d, l - n =
d) or (k - m = -d, l - n = -d), I(k, l) = i, I(m, n) = j}
(3)
# denotes the number of elements in the set and B is the set of
pixels from the building area selected by the mask.
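The masked counting of Equation (3) can be sketched for the horizontal direction (θ = 0°); the other three directions follow the same pattern with different neighbour offsets:

```python
import numpy as np

def masked_glcm(image, mask, d=1, levels=256):
    """Grey-tone spatial-dependence matrix for theta = 0 deg, Eq. (3),
    counting only pixel pairs whose members both lie inside the mask B.

    image: 2-D integer grey-value array; mask: boolean array of the
    building pixels; d: pixel distance; levels: number of grey levels.
    """
    P = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = image.shape
    for k in range(rows):
        for l in range(cols - d):
            m, n = k, l + d  # horizontal neighbour: k - m = 0, |l - n| = d
            if mask[k, l] and mask[m, n]:
                # |l - n| = d counts both orderings, so the matrix
                # is incremented symmetrically
                P[image[k, l], image[m, n]] += 1
                P[image[m, n], image[k, l]] += 1
    return P
```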
The matrix has to be normalized to remove a dependency on the 
building size. The following normalization can be used: 
R = Σi Σj P(i, j)   (4)
p(i, j) = P(i, j) / R   (5)
R is a normalization constant and Ng is the number of grey
levels in the input image (in our study Ng = 256).
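The normalization of Equations (4) and (5) is then a single division by the total pair count (a sketch, assuming the matrix from the previous step):

```python
import numpy as np

def normalize(P):
    """Normalized matrix p(i, j) = P(i, j) / R, Eqs. (4)-(5).

    R is the total number of counted pixel pairs, which removes
    the dependency on the building size.
    """
    R = P.sum()
    # Guard against an empty mask (R = 0)
    return P / R if R else P.astype(float)
```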
Among different textural characteristics proposed by Haralick et 
al. (1973) we concentrated on the features that describe image 
homogeneity (Table 1). 
  
  
  
  
Angular Second Moment (ASM): ASM = Σi Σj p(i, j)². Measure of
uniformity; high values correspond to very similar image texture.
Inertia: Inertia = Σi Σj (i - j)² p(i, j). Characterizes the
availability of sharp borders and contours.
Inverse Difference Moment (IDM): IDM = Σi Σj p(i, j) / (1 + (i - j)²).
Measure of local homogeneity; high values indicate a highly
homogeneous image texture.
Table 1. Calculated textural features.
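The three features of Table 1 can be computed from a normalized matrix p as follows (a sketch; the study additionally aggregates these values over the four directions):

```python
import numpy as np

def haralick_features(p):
    """ASM, Inertia and IDM from a normalized co-occurrence matrix p."""
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                       # uniformity
    inertia = np.sum((i - j) ** 2 * p)         # sharp borders / contours
    idm = np.sum(p / (1.0 + (i - j) ** 2))     # local homogeneity
    return asm, inertia, idm
```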
Besides the commonly used average value of the angular
features, we also include the minimum and maximum values as
inputs into the classifier.
3.3 Feature selection 
A known problem of data classification is the reduction of the
dimensionality of the feature space and of redundant information.
In our study the potential to separate the objects into two classes
is the decision criterion of the feature selection. As can be
observed in Figure 4, the application of the maximum of the angular
texture features enables a better object separation than the
application of their average. The calculation of average values
loses information about the texture orientation and, as a
consequence, results in a worse performance. At the same time,
the maximum values of the angular features are significant if
the buildings are intact.
	        