[Figure 4: two scatter plots in the (a*, b*) plane; the plotted classes include houses, red roofs, blue roofs, achromatic roofs, roads, roads and terrasses, fields, vegetation, and trees.]
Figure 4: Chromatic clusters of object classes in their (a*, b*) color components. (A) using a color image, and (B) using a false color infrared image. The separation between man-made and natural objects is easier with infrared images.
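As an illustrative aside, the following minimal sketch shows how the (a*, b*) chromatic components underlying Figure 4 can be extracted for the pixels of one labeled class. The use of scikit-image and the function name class_chromaticity are assumptions made for illustration, not part of the original implementation.

```python
# Hypothetical sketch: collect the (a*, b*) values of the pixels belonging to
# one object class (e.g. red roofs) so that clusters like those in Figure 4
# can be inspected. Library choice (scikit-image) is an assumption.
import numpy as np
from skimage.color import rgb2lab

def class_chromaticity(rgb_image, class_mask):
    """Return an N x 2 array of (a*, b*) samples where class_mask is True.

    rgb_image  : H x W x 3 float array with values in [0, 1]
    class_mask : H x W boolean array marking one object class
    """
    lab = rgb2lab(rgb_image)                 # channel 0: L*, 1: a*, 2: b*
    a_star = lab[..., 1][class_mask]
    b_star = lab[..., 2][class_mask]
    return np.stack([a_star, b_star], axis=1)
```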
In the third step, a classification is performed where each of the images for buildings and non-buildings is classified using k = 3 as in the second step. Finally,
the two building images from the third step are combined, 
some regions are deleted based on their area, shape and min- 
imum dimensions, and small holes in the remaining regions 
are filled in. 
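The cleanup at the end of this step can be sketched roughly as follows. The thresholds on area, minimum dimension and elongation, as well as the use of scikit-image, are illustrative assumptions and not the values used in the project.

```python
# Hedged sketch: combine the two building masks, drop regions that are too
# small or too elongated, and fill small holes in the survivors. Thresholds
# are placeholders, not the values used in the original work.
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import remove_small_holes

def clean_building_mask(mask_a, mask_b,
                        min_area=200, min_dim=8, max_elongation=6.0):
    combined = np.logical_or(mask_a, mask_b)
    labeled = label(combined)
    keep = np.zeros_like(combined, dtype=bool)
    for region in regionprops(labeled):
        height = region.bbox[2] - region.bbox[0]
        width = region.bbox[3] - region.bbox[1]
        elongation = max(height, width) / max(min(height, width), 1)
        if (region.area >= min_area
                and min(height, width) >= min_dim
                and elongation <= max_elongation):
            keep[labeled == region.label] = True
    return remove_small_holes(keep, area_threshold=64)   # fill small holes
```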
The MMO class in the first classification step, or the building class in the classifications of the second and third steps, can be identified by a procedure based on projecting the DSM blobs into the images using the known interior and exterior image orientation. Since the projected blobs might have holes, these are filled in by morphological operations. The MMO or building class is the one that includes the majority of the projected blob pixels. Other procedures that also work when no DSM is available are possible, such as classification based on color or infrared images and on the characteristics of the edges included in the class regions, e.g. straightness, length, and orientation. When these additional cues are used, the classification can actually stop after the first step.
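A minimal sketch of this majority rule is given below, assuming the DSM blob has already been projected into image geometry as a binary mask; the photogrammetric projection with the interior and exterior orientation is omitted.

```python
# Minimal sketch of the majority rule: the class containing most of the
# projected blob pixels is taken as the MMO (or building) class. The blob is
# assumed to be already projected into image geometry.
import numpy as np
from scipy.ndimage import binary_fill_holes

def class_of_blob(projected_blob, class_map):
    """projected_blob : H x W boolean mask of the projected DSM blob
    class_map      : H x W integer image of class labels from the k-means step
    """
    blob = binary_fill_holes(projected_blob)    # close holes in the projection
    labels, counts = np.unique(class_map[blob], return_counts=True)
    return labels[np.argmax(counts)]            # majority class
```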
As an optional step, a refinement of the detected building out- 
line can be performed. DSM blobs usually do not perfectly 
outline the building. Therefore, a refinement procedure is ap- 
plied by using the classification results of the classes MMO 
and buildings together with the DSM and edges. This op- 
tional step is described in [Sibiryakov 1996]. 
Figure 5A shows the residential scene of the Avenches data 
set. Figure 5D shows the results of color classification for the 
building class. Figure 5E shows the result of color classifica- 
tion for the MMO class and the projected DSM blobs, with 
NOs shown in black (the upper right house has no blob be- 
cause it was outside the DSM). Figure 5F shows the result of 
building detection after combining the spectral classification 
and the DSM blobs and refining the outline of the blobs by 
the use of edges. It can be noted that edges may introduce 
some small spurious house elements. With our test images, 
buildings were always included in the MMO class. Almost all 
buildings were included in the building class. 
The above results demonstrate that an approximate detection 
of isolated buildings can be performed with practically no 
human interaction. However, when buildings are connected, 
human interaction is often required to indicate the outline of 
the buildings. 
5 FEATURE EXTRACTION AND RELATIONS 
All intermediate- and high-level processing in our project relies on low-level features, in particular straight contours. In this section, we present methods to generate an attributed contour graph
graph and we show how to relate pairs of straight contours 
based on similarity in position, orientation, and in photomet- 
ric and chromatic attributes. The attributed contour graph 
and the similarity relations form an excellent collection of 
symbolic data for further processing. 
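To make this concrete, the following sketch shows one possible representation of an attributed straight contour and a pairwise similarity relation over position, orientation, and photometric/chromatic attributes. The attribute names and thresholds are assumptions for illustration, not the definitions used in the project.

```python
# Illustrative data structure and similarity relation only; attribute names
# and thresholds are assumptions, not the authors' exact definitions.
from dataclasses import dataclass
import numpy as np

@dataclass
class StraightContour:
    points: np.ndarray        # N x 2 image coordinates along the contour
    orientation: float        # dominant direction in degrees
    mean_luminance: float     # photometric attribute of a flanking region
    mean_a: float             # chromatic attributes (a*, b*) of that region
    mean_b: float

def similar(c1, c2, max_dist=20.0, max_angle=10.0, max_color=8.0, max_lum=15.0):
    """Relate two straight contours when they agree in position, orientation,
    and photometric/chromatic attributes (thresholds are placeholders)."""
    dist = np.linalg.norm(c1.points.mean(axis=0) - c2.points.mean(axis=0))
    angle = abs(c1.orientation - c2.orientation) % 180.0
    angle = min(angle, 180.0 - angle)
    color = np.hypot(c1.mean_a - c2.mean_a, c1.mean_b - c2.mean_b)
    lum = abs(c1.mean_luminance - c2.mean_luminance)
    return (dist < max_dist and angle < max_angle
            and color < max_color and lum < max_lum)
```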
5.1 Edge Detection and Aggregation 
Based on the assumption that object boundaries are generally smooth and mostly contrast defined, much effort has been devoted to designing suitable edge detectors that reliably detect these 1-D features. The presented work does not require a particular edge detector; however, we believe it is wise to use the best operator available to obtain the best possible results. For this reason, we use the SE energy operator
recently presented in [Heitger 1995]. The operator produces 
a more accurate representation of edges and lines in images 
of outdoor scenes than traditional edge detectors due to its 
superior handling of interferences between edges and lines, 
for example at sharp corners. The edge and line pixels are 
then linked to produce a contour graph by using the algorithm 
in [Henricsson and Heitger 1994]. The result in Fig. 6B is a 
high quality representation of the contours connected to each 
other at junctions, corners, and other important 2-D points.
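Since the SE energy operator of [Heitger 1995] is not available in common libraries, the following stand-in sketch uses a Canny detector and simple connected-component grouping to obtain pixel chains; it omits the junction handling of [Henricsson and Heitger 1994] and only illustrates the aggregation idea.

```python
# Stand-in sketch: Canny replaces the SE energy operator, and 8-connected
# components of edge pixels approximate the linked contour chains (no
# junction or corner handling).
import numpy as np
from skimage.feature import canny
from skimage.measure import label, regionprops

def edge_chains(gray_image, sigma=1.5, min_length=10):
    edges = canny(gray_image, sigma=sigma)       # binary edge map
    labeled = label(edges, connectivity=2)       # 8-connected grouping
    chains = []
    for region in regionprops(labeled):
        if region.coords.shape[0] >= min_length: # drop very short fragments
            chains.append(region.coords)         # N x 2 (row, col) pixels
    return chains
```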
5.2 Contour and Region Attributes 
The contour graph contains only basic information about ge- 
ometry and connectivity. To increase its usefulness, attributes 
are assigned to each contour and end-point. The attributes 
assigned to contours reflect either properties of the contour 
or region properties on either side. The latter are denoted re- 
gion attributes and are attached to the generating contour. A 
region is constructed on both sides of each contour by a trans- 
lation of the original contour in the direction of its normal. 
When neighboring contours interfere with the constructed re- 
gion, a truncation mechanism is applied. For details on the 
construction of the regions we refer to [Henricsson 1995]. 
Since each flanking region is assumed to be fairly homogeneous (due to the way it is constructed), the data points contained in each region tend to concentrate in a small region of the color space; however, outliers must also be accounted for.
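The construction of one flanking region and a simple outlier-robust color statistic can be sketched as follows, assuming a straight contour shifted by a fixed offset along its normal; the truncation of interfering neighboring contours described in [Henricsson 1995] is omitted.

```python
# Sketch under simplifying assumptions: the flanking region is obtained by
# shifting a straight contour along its unit normal, and the median is used
# as an outlier-robust color statistic.
import numpy as np

def flanking_color(lab_image, contour_points, side=+1, offset=3):
    """Median (L*, a*, b*) of the region flanking a straight contour.

    lab_image      : H x W x 3 image in CIELAB
    contour_points : N x 2 array of (row, col) contour pixels
    side           : +1 or -1, selecting one side of the contour
    offset         : translation along the contour normal, in pixels
    """
    p0, p1 = contour_points[0].astype(float), contour_points[-1].astype(float)
    tangent = (p1 - p0) / np.linalg.norm(p1 - p0)
    normal = np.array([-tangent[1], tangent[0]])          # 90 degree rotation
    shifted = np.round(contour_points + side * offset * normal).astype(int)
    shifted[:, 0] = np.clip(shifted[:, 0], 0, lab_image.shape[0] - 1)
    shifted[:, 1] = np.clip(shifted[:, 1], 0, lab_image.shape[1] - 1)
    samples = lab_image[shifted[:, 0], shifted[:, 1]]     # N x 3 color samples
    return np.median(samples, axis=0)                     # robust to outliers
```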