Papers accepted on the basis of peer-reviewed full manuscripts (Part A)

  
ISPRS Commission III, Vol. 34, Part 3A "Photogrammetric Computer Vision", Graz, 2002 
  
presented. The RMS error between manually extracted coordinates and the produced coordinates for six buildings is 0.25 meter, and only two vertices were missing. These results indicate the completeness and accuracy that this method provides for extracting complex urban buildings.
Section 2 explains the split and merge image segmentation technique. Section 3 discusses the region classification process. Section 4 presents the region-to-polygon conversion. Section 5 explains the multi-image 3D polygon extraction algorithm. Section 6 gives the results, and Section 7 presents the conclusions.
2. IMAGE REGION EXTRACTION 
In this section the process of extracting image regions is presented. Image segmentation can be performed using a wide range of techniques; the best we have found for segmenting aerial images is the split and merge technique. It has three main steps. First, the image is recursively split into smaller regions until a homogeneity condition is satisfied. Then, adjacent regions are merged to form larger regions based on a similar criterion. In the last step, small regions are either eliminated or merged with larger regions. The homogeneity criterion used is that the difference between the minimum and maximum intensities in any region is less than a certain threshold. More details can be found in (Horowitz and Pavlidis, 1974) and (Samet, 1982). The results of the split and merge segmentation for five sample buildings are shown in Figure 1-a, b, c, and d.
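The three steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it applies the paper's max-minus-min homogeneity criterion in a quadtree split, then greedily merges edge-adjacent blocks with a union-find structure. The threshold value and the merge strategy are assumptions for the sake of the example.

```python
# Sketch of split-and-merge segmentation with the max-min < threshold
# homogeneity criterion. Quadtree split, then union-find merge of
# edge-adjacent blocks whose combined intensity range stays homogeneous.
import numpy as np

def split(img, r0, r1, c0, c1, thresh, out):
    """Recursively quadtree-split until each block is homogeneous."""
    block = img[r0:r1, c0:c1]
    if int(block.max()) - int(block.min()) < thresh or \
       (r1 - r0 <= 1 and c1 - c0 <= 1):
        out.append((r0, r1, c0, c1))
        return
    rm, cm = (r0 + r1 + 1) // 2, (c0 + c1 + 1) // 2  # ceil midpoints
    for a0, a1, b0, b1 in ((r0, rm, c0, cm), (r0, rm, cm, c1),
                           (rm, r1, c0, cm), (rm, r1, cm, c1)):
        if a1 > a0 and b1 > b0:
            split(img, a0, a1, b0, b1, thresh, out)

def merge(img, regions, thresh):
    """Merge edge-adjacent blocks while the union stays homogeneous."""
    parent = list(range(len(regions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    lo = [int(img[r0:r1, c0:c1].min()) for r0, r1, c0, c1 in regions]
    hi = [int(img[r0:r1, c0:c1].max()) for r0, r1, c0, c1 in regions]
    def adjacent(a, b):
        (r0, r1, c0, c1), (s0, s1, d0, d1) = regions[a], regions[b]
        rows_touch = r1 == s0 or s1 == r0
        cols_touch = c1 == d0 or d1 == c0
        rows_overlap = r0 < s1 and s0 < r1
        cols_overlap = c0 < d1 and d0 < c1
        return (rows_touch and cols_overlap) or (cols_touch and rows_overlap)
    changed = True
    while changed:
        changed = False
        for a in range(len(regions)):
            for b in range(a + 1, len(regions)):
                ra, rb = find(a), find(b)
                if ra == rb or not adjacent(a, b):
                    continue
                if max(hi[ra], hi[rb]) - min(lo[ra], lo[rb]) < thresh:
                    parent[rb] = ra
                    lo[ra] = min(lo[ra], lo[rb])
                    hi[ra] = max(hi[ra], hi[rb])
                    changed = True
    return [find(i) for i in range(len(regions))]

# Toy image: dark left half, bright right half.
img = np.zeros((8, 8), dtype=int)
img[:, 4:] = 100
regions = []
split(img, 0, 8, 0, 8, thresh=10, out=regions)
labels = merge(img, regions, thresh=10)  # two final segments
```

The small-region elimination step of the paper is omitted here; in practice it would reassign any block below a minimum size to the most similar neighbouring segment.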
  
Figure 1-a and b. Split and Merge Image Segmentation Results for 2 Buildings
  
  
Figure 1-c and d. Split and Merge Image Segmentation Results for 3 Buildings
3. REGION CLASSIFICATION USING NEURAL 
NETWORKS 
A neural network is implemented to distinguish roof regions from non-roof regions. Each region is assigned two attributes for the classification process. The first attribute measures the linearity of the region boundary, while the second measures the percentage of points in the region that lie above a certain height.
3.1. Region Border Linearity Measurement 
After segmenting the building images, a modified version of the Hough transformation is employed to measure border linearity. The approach includes the following steps: extracting region border points, linking border points, finding local lines that fit groups of successive points, and filling a parameter space similar to the Hough parameter space for line extraction. The parameter space is then searched and analyzed to determine a measure of border linearity (BL), Equation 1. The border linearity is measured as the ratio of the number of points in the four largest cells of the parameter space to the total number of border points. Figure 2-a shows the parameter space for a roof region, while Figure 2-b shows the parameter space for a non-roof region.
Figure 2-a. The Modified Hough Parameter Space for the Border of a Roof Region

BL = (Number of Points in the Largest 4 Cells) / (Total Number of Border Points)    (1)

Figure 2-b. The Modified Hough Parameter Space for the Border of a Non-Roof Region
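The BL measure can be sketched as follows. Note one simplification: the paper fills the accumulator from local line fits over groups of successive border points, whereas this sketch lets every border point vote directly at each quantized angle, which is an assumption made to keep the example short. The bin counts (n_theta, rho_step) are likewise illustrative.

```python
# Sketch of the border-linearity measure of Equation (1): border points
# vote in a quantized (rho, theta) accumulator; BL is the fraction of
# border points captured by the four fullest cells.
import math
from collections import Counter

def border_linearity(points, n_theta=18, rho_step=2.0):
    """points: list of (x, y) border pixels. Returns BL in [0, 1]."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):           # quantized line orientations
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    top4 = sum(count for _, count in acc.most_common(4))
    return min(1.0, top4 / len(points))    # Equation (1)

# Perimeter of a 10x10 axis-aligned square: a highly linear border, so
# its four sides dominate four accumulator cells.
square = ([(x, 0) for x in range(11)] + [(x, 10) for x in range(11)] +
          [(0, y) for y in range(1, 10)] + [(10, y) for y in range(1, 10)])
bl = border_linearity(square)              # close to 1.0 for this border
```

A roof region with rectilinear borders concentrates its votes in few cells and scores near 1, while a ragged non-roof border spreads votes across many cells and scores much lower, which is exactly the separation visible between Figures 2-a and 2-b.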