4. BUILDING DESCRIPTION

This section presents a building description process which reconstructs building outlines from each building "blob". A generic building shape is represented as a mosaic of convex polygons. A set of linear cues is extracted by both data-driven and model-driven approaches. The building "blobs" are recursively intersected by those linear cues, which produces a set of polygon cues. Finally, building outlines are reconstructed by merging only the "building" polygons into building objects.
4.1 Data-driven linear cue extraction

The first stage of the building description is to extract boundary lines from the Ikonos imagery with the support of the RTF filtering result. Straight lines extracted by the Burns algorithm (Burns et al., 1986) are filtered by a length criterion, by which only lines longer than a pre-specified length threshold of 5 m remain for further processing. Then, two rectangular boxes of a fixed width of 5 m are generated along the two directions orthogonal to each length-filtered line vector. A line is determined to be a boundary line if building and non-building points are found simultaneously across the two boxes, or if only building-labelled points are found in one of the boxes and no lidar point can be found in the other. The latter condition is applied when a low-density lidar dataset is used. Figure 4 illustrates this.
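The side-box test described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: lidar points are assumed to carry a binary building/non-building label, the geometry is handled with plain numpy, and the first condition is read as building points falling in one box while non-building points fall in the other.

import numpy as np

BOX_WIDTH = 5.0  # side-box width (5 m, as stated in the text)

def points_in_side_box(p0, p1, points, side, width=BOX_WIDTH):
    """Boolean mask of lidar points inside the rectangular box attached to
    one side (+1 or -1) of the image line segment p0 -> p1."""
    d = p1 - p0
    length = np.linalg.norm(d)
    u = d / length                        # unit vector along the line
    n = side * np.array([-u[1], u[0]])    # outward normal of this side
    rel = points[:, :2] - p0
    along = rel @ u                       # coordinate along the segment
    across = rel @ n                      # signed offset towards this side
    return (along >= 0) & (along <= length) & (across >= 0) & (across <= width)

def is_boundary_line(p0, p1, points, labels):
    """Apply the two boundary-line conditions of Section 4.1.
    labels: 1 for building-labelled lidar points, 0 for non-building points."""
    left = points_in_side_box(p0, p1, points, +1)
    right = points_in_side_box(p0, p1, points, -1)
    for a, b in ((left, right), (right, left)):
        has_building_a = np.any(labels[a] == 1)
        has_nonbuilding_b = np.any(labels[b] == 0)
        b_is_empty = not np.any(b)
        # condition 1: building points on one side, non-building on the other;
        # condition 2 (low point density): building points on one side,
        # no lidar return at all on the other
        if has_building_a and (has_nonbuilding_b or b_is_empty):
            return True
    return False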
   
Figure 4. Illustration of boundary line detection (legend: ○ non-building point, ● building point, — line cue)

Figure 5. Result of data-driven cue extraction: (a) extracted straight lines; (b) filtered boundary lines
As a final line-filtering process, geometric disturbances caused by noise are regularized over the boundary lines. A set of dominant line angles of the boundary lines is analyzed from a gradient-weighted histogram which is quantized into 255 discrete angular units. In order to separate a weak but significant peak from other nearby dominant angles, a hierarchical histogram-clustering method is applied. Once the dominant angle, θd, is obtained, lines whose angle discrepancies from θd are less than a certain angle threshold, θt = 30°, are found. Then their line geometries are modified by replacing their angles with θd. These modified lines do not contribute to the succeeding dominant-angle analysis, and the next dominant angle is obtained. In this way, a set of dominant angles is obtained, by which the geometric properties of the boundary lines are regularized (see figure 5).
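A sketch of this iterative regularization is given below, assuming line orientations in degrees and gradient-magnitude weights as inputs. It replaces the hierarchical histogram clustering of the text with a simple peak search over the 255-bin gradient-weighted histogram, so it only illustrates the voting-and-snapping loop.

import numpy as np

N_BINS = 255        # angular quantization used in the text
ANGLE_TOL = 30.0    # angle threshold θt (degrees)

def regularize_line_angles(angles_deg, grad_weights):
    """Iteratively snap boundary-line angles to dominant directions.
    Returns the regularized angles and the list of dominant angles found."""
    angles = np.asarray(angles_deg, dtype=float).copy()
    weights = np.asarray(grad_weights, dtype=float)
    remaining = np.ones(len(angles), dtype=bool)   # lines still voting
    dominant = []
    while remaining.any():
        # gradient-weighted histogram over the lines that still vote
        hist, edges = np.histogram(angles[remaining], bins=N_BINS,
                                   range=(0.0, 180.0),
                                   weights=weights[remaining])
        peak = int(np.argmax(hist))
        theta_d = 0.5 * (edges[peak] + edges[peak + 1])
        dominant.append(theta_d)
        # snap lines within the angular tolerance to the dominant angle
        diff = np.abs(angles - theta_d)
        diff = np.minimum(diff, 180.0 - diff)      # wrap-around distance
        snap = remaining & (diff < ANGLE_TOL)
        if not snap.any():
            break
        angles[snap] = theta_d
        remaining &= ~snap    # snapped lines are excluded from later votes
    return angles, dominant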
4.2 Model-driven linear cue extraction

New line cues are "virtually" extracted from the lidar space in order to compensate for the low density of intensity line cues, by employing specific building models. For each intensity line cue, parallel lines and "U"-structured lines are inferred from the lidar space. First, a box growing direction, pointing towards the location of the parallel boundary line, is determined. To this end, a small virtual box with a width of 5 m is generated from the selected intensity line in the same way as the boundary line detection presented in §4.1. The virtual box grows in that direction until it comes across any on-terrain point (see figure 6(a)). Then it de-grows so that it contains the maximum number of building points at its minimum size (see figure 6(b)). The virtual box is then expanded again, but this time towards the two directions orthogonal to the detected parallel boundary line (see figure 6(c)), so that the "U"-structured boundary lines formed with the parallel boundary line can be detected. Finally, the three detected virtual lines are back-projected onto image space and their line geometry is adjusted by a gradient-weighted least-squares method. Figure 6(d) shows the model-driven cues extracted from figure 5(b).
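The grow/de-grow search for the parallel boundary line can be sketched as follows. The growth increment and the maximum search distance are assumptions not stated in the text, the lidar points are again assumed to carry building/non-building labels, and only the offset of the inferred parallel line from the intensity line is returned; the subsequent orthogonal expansion for the "U" structure would reuse the same routine with the roles of the axes swapped.

import numpy as np

GROW_STEP = 0.5     # growth increment per iteration in metres (assumed)
INIT_WIDTH = 5.0    # initial virtual-box width (5 m, as in the text)

def find_parallel_offset(p0, p1, points, labels, direction, max_width=100.0):
    """Grow a virtual box from the intensity line p0 -> p1 towards `direction`
    (+1 or -1, normal to the line) until an on-terrain point is hit, then
    de-grow to the farthest building point. Returns the offset of the inferred
    parallel boundary line, or None if no building point is covered."""
    d = p1 - p0
    length = np.linalg.norm(d)
    u = d / length
    n = direction * np.array([-u[1], u[0]])
    rel = points[:, :2] - p0
    along = rel @ u
    across = rel @ n
    in_strip = (along >= 0) & (along <= length)   # between the two box ends

    width = INIT_WIDTH
    while width < max_width:
        in_box = in_strip & (across >= 0) & (across <= width)
        if np.any(labels[in_box] == 0):           # hit an on-terrain point
            break
        width += GROW_STEP
    # de-grow: minimum box that still covers all building points found so far
    building = in_strip & (across >= 0) & (across <= width) & (labels == 1)
    if not np.any(building):
        return None
    return float(np.max(across[building]))        # offset of the parallel line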
  
Figure 6. Result of model-driven cue extraction, panels (a)-(d) (legend: ○ non-building point, ● building point, — intensity line cue, --- virtual line cue)
4.3 Polygonal cue generation

The initial polygons resulting from the building detection result of figure 3(d) are decomposed into a set of convex polygons by a recursive intersection with the linear cues, called hyperlines. This polygonal segmentation is implemented by the BSP (Binary Space Partitioning) tree algorithm introduced by Fuchs et al. (1980). Figure 7 illustrates the overall partitioning scheme used to generate polygons. Suppose that we have an initial polygon with rectangular geometry, P0, wherein lidar points with building and non-building labels are distributed. All vertices comprising P0 are stored as the root node of the BSP tree for further recursive partitioning (see figure 7(a)).
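The recursive partitioning can be sketched as follows, assuming the hyperlines are supplied as pairs of points and applied in a fixed order; the per-node vertex bookkeeping of the BSP tree and the selection of which hyperline splits which node are simplified away, so only the leaf convex polygons are returned.

import numpy as np

def split_convex_polygon(poly, a, b):
    """Split a convex polygon (list of 2-D vertices, counter-clockwise) by the
    infinite hyperline through points a and b into a front and a back piece."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = np.array([-(b - a)[1], (b - a)[0]])      # left normal of a -> b
    verts = [np.asarray(p, float) for p in poly]
    dist = [float(np.dot(p - a, normal)) for p in verts]
    front, back = [], []
    n = len(verts)
    for i in range(n):
        p, q = verts[i], verts[(i + 1) % n]
        dp, dq = dist[i], dist[(i + 1) % n]
        if dp >= 0:
            front.append(p)
        if dp <= 0:
            back.append(p)
        if dp * dq < 0:                               # edge crosses the line
            t = dp / (dp - dq)
            cut = p + t * (q - p)
            front.append(cut)
            back.append(cut)
    return front, back

def bsp_partition(poly, hyperlines):
    """Recursively intersect the polygon with the hyperlines and return the
    leaf convex polygons of the resulting BSP tree."""
    if not hyperlines:
        return [poly]
    (a, b), rest = hyperlines[0], hyperlines[1:]
    front, back = split_convex_polygon(poly, a, b)
    leaves = []
    for piece in (front, back):
        if len(piece) >= 3:                           # ignore degenerate cuts
            leaves.extend(bsp_partition(piece, rest))
    return leaves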