
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B3b. Beijing 2008
Figure 1: Rectified facade image and a detail from the eTRIMS database with manually annotated regions, e.g. window panes, vegetation.
We did our first experiments on manually annotated regions from rectified facade images, see fig. 1. These tests can show us the relevance of the features with respect to an optimal image segmentation.
In the second set of experiments, we used automatically segmented image regions. We obtain these image regions from the analysis of the image's scale-space with S discrete layers, which we have already used in (Drauschke et al., 2006). We adapted some parameters, i.e. we only consider the original image and 41 additional layers in scale-space with scales between σ = 0.5 and σ = 8. Then, we automatically trace the regions through the scale-space structure to derive a region hierarchy, similar to the approach of Bangham et al. (1999). For complexity reasons, we reduce the number of regions by selecting only stable regions in the scale-space structure. Distinct region borders are often good evidence for stable regions. Thus, most stable regions correspond to man-made structures or are caused by shadows. The process of determining stable regions is explicitly described in (Drauschke, 2008). Fig. 2 shows all detected stable regions at scale σ = 2.
Figure 2: Segmented stable regions at scale σ = 2.
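The scale-space construction described above can be sketched as follows. This is a minimal illustration only: Gaussian smoothing and logarithmic scale spacing are our assumptions, since the text only states the range σ = 0.5 to σ = 8 and the number of 41 additional layers.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# 41 additional scale-space layers with scales between sigma = 0.5 and
# sigma = 8; logarithmic spacing is an assumption of this sketch.
sigmas = np.geomspace(0.5, 8.0, num=41)

def build_scale_space(image):
    """Return a stack: the original image plus its Gaussian-smoothed layers."""
    layers = [image] + [gaussian_filter(image, sigma=s) for s in sigmas]
    return np.stack(layers)

image = np.random.rand(64, 64)   # placeholder grayscale facade image
stack = build_scale_space(image)
print(stack.shape)               # (42, 64, 64): original + 41 layers
```

Regions would then be traced through this stack from fine to coarse scales to build the region hierarchy.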
Table 1: List of features derived from image regions.

    f1           area
    f2           circumference
    f3           form factor
    f4           vertical elongation of bounding box
    f5           horizontal elongation of bounding box
    f6           ratio f1 : (f4 · f5)
    f7 - f12     mean color value in original image regarding the six channels
    f13 - f18    variance of color values in original image regarding the six channels
    f19 - f108   normalized histogram entries of gradient magnitude, 15 bins per channel
    f109 - f156  normalized histogram entries of gradient orientation, 8 bins per channel
    f157         portion of lengths of parallel edges in the vectorized region boundary
    f158         portion of number of parallel edges
    f159         portion of lengths of boundary edges parallel to the region's major axis
    f160         portion of number of boundary edges parallel to the region's major axis
    f161         portion of lengths of boundary edges parallel to the region's minor axis
    f162         portion of number of boundary edges parallel to the region's minor axis
    f163         portion of number of orthogonal angles in the vectorized region boundary
    f164         portion of lengths of boundary edges adjacent to orthogonal angles
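As an illustration, the simplest geometric features of Table 1 can be computed from a binary region mask roughly as below. This is our own sketch: the exact definitions of circumference, form factor, and the elongation features are not spelled out in the text, so the formulas for f2 to f6 are plausible stand-ins.

```python
import numpy as np

def basic_region_features(mask):
    """f1-f6 from a boolean region mask (illustrative definitions)."""
    area = int(mask.sum())                              # f1
    # f2: boundary length, approximated by counting region pixels
    # that have at least one 4-neighbor outside the region.
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    circumference = int((mask & ~interior).sum())       # f2
    form_factor = circumference ** 2 / area             # f3 (assumed c^2 / a)
    rows, cols = np.nonzero(mask)
    f4 = int(rows.max() - rows.min() + 1)               # vertical bbox extent
    f5 = int(cols.max() - cols.min() + 1)               # horizontal bbox extent
    f6 = area / (f4 * f5)                               # ratio f1 : (f4 * f5)
    return area, circumference, form_factor, f4, f5, f6

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:9] = True                 # a 4 x 6 rectangular region
print(basic_region_features(mask))    # (24, 16, ..., 4, 6, 1.0)
```

For a rectangle, f6 = 1.0 by construction, since the region fills its bounding box exactly.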
The targets y_n are obtained differently. For manually annotated regions, we additionally select the appropriate class. Otherwise, the automatically segmented regions inherit the class target from the best fitting manually annotated region. The best fitting annotated region A_i* is determined by

    i* = argmax_i |R ∩ A_i| / |R ∪ A_i|,    (2)

where R is an automatically segmented region and the A_i are all manually annotated regions. Furthermore, the best fitting annotated region must fulfill the condition

    |R ∩ A_i*| / |R| > 0.5,    (3)

otherwise the class target of the segmented region will be set to none, and the segmented region will always be treated as background.
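The assignment rule of equations (2) and (3) can be sketched as follows, using boolean masks for regions. Function and variable names are ours, and the masks are toy examples.

```python
import numpy as np

def assign_target(region, annotations, labels):
    """Inherit the class of the best fitting annotated region.
    Eq. (2): pick the A_i maximizing |R ∩ A_i| / |R ∪ A_i|.
    Eq. (3): keep its label only if |R ∩ A_i*| / |R| > 0.5, else 'none'."""
    ious = [(region & a).sum() / (region | a).sum() for a in annotations]
    i_star = int(np.argmax(ious))
    overlap = (region & annotations[i_star]).sum() / region.sum()
    return labels[i_star] if overlap > 0.5 else "none"

R = np.zeros((8, 8), dtype=bool); R[0:4, 0:4] = True
A1 = np.zeros((8, 8), dtype=bool); A1[0:4, 0:2] = True   # covers half of R
A2 = np.zeros((8, 8), dtype=bool); A2[0:3, 0:4] = True   # covers 3/4 of R
print(assign_target(R, [A1, A2], ["door", "window"]))    # window
```

With only A1 as a candidate, the overlap is exactly 0.5, so condition (3) fails and the region is labeled "none", i.e. treated as background.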
5 EXPERIMENTS 
We derive 164 features from each manually annotated and each automatically segmented image region. Thus, our samples x_n are 164-dimensional feature vectors. These features are roughly described in tab. 1. Color features are determined with respect to two color spaces, RGB and HSV, i.e. six channels in total. Last, we derive the features f157 to f164 from the vectorized region border, which is a simplification of the region's boundary obtained using the algorithm of Douglas and Peucker (1973).
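The Douglas-Peucker simplification used for the boundary features can be sketched as the classic recursive procedure below. This is an illustrative implementation, not the authors' code; the tolerance eps is a free parameter.

```python
import math

def douglas_peucker(points, eps):
    """Simplify a polyline: keep a point only if it deviates from the
    chord between the endpoints by more than eps, then recurse."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # perpendicular distance of point p to the endpoint chord
        px, py = p
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1)
        return num / den if den else math.hypot(px - x1, py - y1)

    dmax, idx = max((dist(p), i) for i, p in enumerate(points[1:-1], 1))
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right   # drop duplicated split point

print(douglas_peucker([(0, 0), (1, 0.1), (2, -0.1), (3, 0)], 0.5))
# [(0, 0), (3, 0)]
```

Small wiggles below the tolerance are removed, which is what makes the parallelism and angle features f157 to f164 stable on the simplified boundary.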
The goal of our experiments was to find appropriate features from the set of image features which can be used for classifying our automatically segmented stable image regions. Therefore, we designed the AdaBoost algorithm as follows. First, we only use weak classifiers which perform threshold classifications on a single feature. And secondly,
	        