3.2.2 Buffer Zone Generation: 
A three-pixel wide buffer zone was generated around the shadow producing edges of each building (figure 6). It was divided into two sub-zones: (i) inside the building, and (ii) outside the building. The inside part of the buffer zone (zone B) was used for building analysis, while the outside part (zone S) was used for shadow analysis. The purpose of the buffer zone generation was to deal with the shadow and building areas around the shadow producing edges of the buildings. These areas can also be called 'the most significant parts' of a building for the damage assessment.
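As a rough illustration (not the authors' implementation), the buffer zone construction can be approximated by a morphological dilation of the shadow producing edge pixels, split by the building footprint. In the Python sketch below, the mask names edge_mask and building_mask and the use of SciPy are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def buffer_zones(edge_mask, building_mask, width=3):
    """Split a buffer around the shadow producing edges into zones B and S.

    edge_mask     : bool image, True on the shadow producing edge pixels
    building_mask : bool image, True inside the building footprint
    width         : buffer width in pixels (three in the paper)
    """
    # Dilate the edge pixels with a square structuring element so that the
    # buffer extends 'width' pixels to either side of the edge.
    structure = np.ones((2 * width + 1, 2 * width + 1), dtype=bool)
    buffer_mask = ndimage.binary_dilation(edge_mask, structure=structure)

    zone_b = buffer_mask & building_mask    # inside-building sub-zone (B)
    zone_s = buffer_mask & ~building_mask   # outside-building sub-zone (S)
    return zone_b, zone_s
```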
  
    
Figure 6. Buffer zone generation along the shadow producing 
edges 
3.2.3 Watershed Segmentation: 
The watershed segmentation was performed based on the idea of flooding from selected sources (Beucher et al., 1992). These sources represent the markers. Two sets of markers were needed, one for the shadow areas and the other for the building regions. These markers were utilized to avoid over-segmentation. After the gradient image was computed, the shadow and the building markers were selected within the outside building buffer zone (S) and the inside building buffer zone (B), respectively. The locations of the markers were seeded randomly. Figure 7a shows an example of the marker placement on a gradient image.
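A minimal sketch of this random marker seeding, assuming the zone masks from the previous sketch and illustrative label values (1 for shadow, 2 for building), could look as follows; it is not the authors' code.

```python
import numpy as np

def seed_markers(zone_b, zone_s, n_seeds=20, seed=None):
    """Randomly place watershed markers inside the two buffer sub-zones.

    Returns a label image: 0 = unlabeled, 1 = shadow marker (zone S),
    2 = building marker (zone B). Both label values and n_seeds are
    illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    markers = np.zeros(zone_b.shape, dtype=np.int32)
    for label, zone in ((1, zone_s), (2, zone_b)):
        rows, cols = np.nonzero(zone)
        if rows.size:
            pick = rng.choice(rows.size, size=min(n_seeds, rows.size),
                              replace=False)
            markers[rows[pick], cols[pick]] = label
    return markers
```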
The watershed segmentation algorithm was then run on the gradient image, yielding a two-region output image. One of these regions refers to the shadow areas while the other corresponds to the building areas. In figure 7b, the shadow and the building areas are shown in blue and yellow, respectively.
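The marker-controlled flooding can be approximated with scikit-image's watershed transform on a Sobel gradient image, as sketched below; these library calls are stand-ins for the Beucher-style flooding used in the paper.

```python
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_shadow_building(image, markers):
    """Marker-controlled watershed on the gradient of a grayscale image.

    markers : label image from seed_markers (1 = shadow, 2 = building)
    Returns a label image containing only the two marker regions.
    """
    gradient = sobel(image)                 # gradient magnitude image
    labels = watershed(gradient, markers)   # flood from the marker pixels only
    return labels
```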
  
   
  
Figure 7. (a) The starting pixels (markers) for watershed 
transform, and (b) the segmented output after the watershed 
transform. 
3.2.4 Assessing the Conditions of the Buildings: 
After detecting the shadow and building areas, for each building the agreement between the pixels labeled as building and the pixels labeled as shadow was measured within the buffer zone of the shadow producing edges (figure 8). To do that, the pixels inside the shadow buffer (S) and the building buffer (B) were counted and categorized as shadow or building pixels. Then, a ratio was computed between the pixels labeled as building and the total number of pixels falling inside the building part of the buffer zone. Similarly, a ratio was also computed between the pixels labeled as shadow and the total number of pixels falling inside the shadow part of the buffer zone. This can be illustrated with an example. The pixel distribution of building #175 is shown in table 1. For this building, the shadow detection algorithm detected two shadow edges, labeled 1 and 2. The total number of pixels inside the buffer zone along the shadow edges was calculated and labeled as "Total Assessed Pixels" (table 1). In total, 99 pixels were generated for the shadow buffer and 99 pixels for the building buffer. After the watershed transform, 91 shadow pixels (blue pixels) fell into the shadow buffer and 66 building pixels (yellow pixels) fell into the building buffer. The building and the shadow percentages were therefore calculated as 66/99 = 66.67% and 91/99 = 91.92%, respectively. A user-defined threshold was used to make a decision about the building. If the building ratio or the shadow ratio is below the threshold value, the building is labeled as collapsed. If, on the other hand, both the building and the shadow ratios are above the threshold value, the building is labeled as un-collapsed. The building and the shadow ratios were used together in deciding the building condition in order to reduce the misdetection of buildings.
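The ratio computation and the threshold test could be summarised as in the following sketch, which assumes the label conventions of the earlier sketches and uses the 50% threshold found optimal in section 3.3; it is illustrative only.

```python
def assess_building(labels, zone_b, zone_s, threshold=0.5):
    """Decide collapsed / un-collapsed from the watershed labels.

    labels : watershed output (1 = shadow, 2 = building)
    zone_b : inside-building buffer mask
    zone_s : outside-building (shadow) buffer mask
    """
    building_ratio = (labels[zone_b] == 2).sum() / zone_b.sum()
    shadow_ratio = (labels[zone_s] == 1).sum() / zone_s.sum()

    # The building is kept as un-collapsed only if BOTH ratios exceed the
    # user-defined threshold; otherwise it is labeled as collapsed.
    collapsed = (building_ratio < threshold) or (shadow_ratio < threshold)
    return building_ratio, shadow_ratio, collapsed
```

Applied to building #175, with 99 pixels in each buffer, 66 building pixels and 91 shadow pixels give ratios of 0.667 and 0.919, so the building would be labeled as un-collapsed at the 50% threshold.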
  
   
  
Figure 8. The regions used in the building analysis. 
  
Total Assessed Pixels: 99
Detected Shadow Pixels: 91
Detected Building Pixels: 66
Shadow Ratio: 0.9192
Building Ratio: 0.6667

Table 1. Calculation of (building / shadow) pixel percentages.
3.3 Results 
Table 2 provides the accuracy indices computed for threshold values between 20% and 80%. These are the overall accuracy, overall kappa, average user's accuracy, average producer's accuracy, combined user's accuracy and combined producer's accuracy. Of the six indices, four gave their highest percentage at the 50% threshold level. The remaining two indices did not reach their maximum value at 50%, but their percentages were not very different from the maximum. For this reason, the 50% level was chosen as the optimum threshold. The trend of the overall accuracies versus varying threshold values is also shown graphically in figure 9. It can be clearly seen in the