rooftops which have the same directions as the boundaries. To remove the non-boundary segments, some solutions based on Lidar data were proposed (Schenk & Csatho, 2002; Ma, 2004). The common idea of these solutions is to get approximate building boundaries from Lidar data and then remove the line segments far from the approximate boundaries.

Figure 2. Accurate boundary segments selected by our algorithm: (a) a building image; (b) the extracted segments; (c) two boxes for each segment; (d) the selected segment

The limitations of these solutions are
mainly in two points. Firstly, the quality of the approximate boundaries derived from Lidar data is uncertain, as it depends largely on the quality of the Lidar data filtering. Secondly, it is difficult to dynamically select the optimal boundaries within a local region. Sohn and Sampath (2003) proposed a different boundary filtering solution combining IKONOS imagery with Lidar data. However, compared to IKONOS imagery, far more candidate object segments are extracted within a local region from very high resolution imagery, so a more rigorous selection rule is needed to obtain an accurate boundary from a very high resolution image. In this study, an algorithm based on Lidar point density analysis and K-means clustering is proposed to ensure the accuracy of the boundary segments selected from a very high resolution image. Figure 2(a) shows a building image, and the line segments extracted in a local region are shown in Figure 2(b). Based on the extracted line segments, the boundary segment selection algorithm consists of the following four steps.
Step 1: For each segment, two rectangular boxes with a certain width (3-5 times the Lidar point spacing) are generated on its two sides, along the two directions orthogonal to the segment, as shown in Figure 2(c).
Step 2: If no Lidar points are found in either box, the segment is removed because it is far from any building. If Lidar points are found in both boxes and the point densities of the two boxes are nearly equal, the segment is also removed, because a segment surrounded by Lidar points on both sides must lie on a rooftop rather than on a boundary. The remaining line segments are considered possible boundary segments, and the following steps extract the accurate boundary segments from them.
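The following is a minimal sketch of Steps 1 and 2, assuming the Lidar points are given as planar (x, y) coordinates in a numpy array and each segment by its two endpoints; the function names (count_in_side_box, keep_segment) and the density tolerance are illustrative assumptions, not values from the paper.

import numpy as np

def count_in_side_box(points, p0, p1, width, sign):
    """Count Lidar points inside the rectangle of the given width lying on
    one side (sign = +1 or -1) of the segment p0-p1 (Step 1)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    length = np.linalg.norm(d)
    u = d / length                       # unit vector along the segment
    n = np.array([-u[1], u[0]])          # unit normal (orthogonal direction)
    rel = np.asarray(points, float) - p0
    along = rel @ u                      # coordinate along the segment
    across = rel @ n                     # signed offset from the segment
    inside = (along >= 0) & (along <= length) & \
             (sign * across >= 0) & (sign * across <= width)
    return int(np.count_nonzero(inside))

def keep_segment(points, p0, p1, width, density_tol=0.2):
    """Step 2 decision: drop segments far from buildings (no points in either
    box) and segments lying on rooftops (both boxes similarly dense)."""
    n_left = count_in_side_box(points, p0, p1, width, +1)
    n_right = count_in_side_box(points, p0, p1, width, -1)
    if n_left == 0 and n_right == 0:
        return False                     # far from any building
    if n_left > 0 and n_right > 0 and \
       abs(n_left - n_right) <= density_tol * max(n_left, n_right):
        return False                     # surrounded by points: on the rooftop
    return True                          # possible boundary segment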
Step 3: The remaining line segments are grouped. Because the segments are extracted under the principal orientation constraint, they have two orientations and are grouped according to their angles and mutual distances. Three parallel object segments belonging to one group can be seen in Figure 2(c).
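A possible sketch of the grouping in Step 3 is given below, assuming each segment is an endpoint pair; the angle and distance thresholds are illustrative assumptions, since the paper does not specify them.

import numpy as np

def group_segments(segments, angle_tol_deg=5.0, dist_tol=2.0):
    """Group parallel, nearby segments: similar orientation (within
    angle_tol_deg) and small perpendicular offset (within dist_tol)."""
    def angle(seg):
        (x0, y0), (x1, y1) = seg
        return np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 180.0

    def perp_dist(seg_a, seg_b):
        # distance from the midpoint of seg_b to the infinite line of seg_a
        p0, p1 = np.asarray(seg_a[0], float), np.asarray(seg_a[1], float)
        m = (np.asarray(seg_b[0], float) + np.asarray(seg_b[1], float)) / 2.0
        u = (p1 - p0) / np.linalg.norm(p1 - p0)
        n = np.array([-u[1], u[0]])
        return abs((m - p0) @ n)

    groups = []
    for seg in segments:
        for g in groups:
            da = abs(angle(seg) - angle(g[0]))
            da = min(da, 180.0 - da)          # orientation difference
            if da <= angle_tol_deg and perp_dist(g[0], seg) <= dist_tol:
                g.append(seg)
                break
        else:
            groups.append([seg])              # start a new group
    return groups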
Step 4: Two rectangular boxes are again generated for each segment in Figure 2(c), and the difference in Lidar point density between the two boxes is calculated for each segment. The basic principle is that the density difference of an accurate boundary is larger than that of an inaccurate boundary. The data set of density differences within a group is defined as formula (1):

L = { |d_k| | k = 0, ..., m }    (1)

where d_k denotes the difference in Lidar point density of segment k. The K-means clustering algorithm with K = 2 is applied to divide the data set into two subsets, one with large difference values and one with small difference values. The segments belonging to the subset with small difference values are eliminated, and the remaining line segments are identified as the boundary segments. The selected boundary segment is shown in Figure 2(d).
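The following is a minimal sketch of Step 4, assuming diffs holds the density differences d_k of the segments in one group (e.g. computed with count_in_side_box above) and using scikit-learn's KMeans as a stand-in for the K-means step; the helper name select_boundary_segments is illustrative.

import numpy as np
from sklearn.cluster import KMeans

def select_boundary_segments(segments, diffs):
    """Split a group into a large-difference and a small-difference cluster
    (K = 2) and keep only the segments in the large-difference cluster."""
    L = np.abs(np.asarray(diffs, float)).reshape(-1, 1)   # data set of eq. (1)
    km = KMeans(n_clusters=2, n_init=10).fit(L)
    big_cluster = int(np.argmax(km.cluster_centers_.ravel()))
    return [seg for seg, lab in zip(segments, km.labels_) if lab == big_cluster]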
3. EVALUATION 
3.1 Data set 
In this study, both aerial stereo pairs and orthoimages can be used to test our approach. Compared with an aerial image, an orthoimage covers a much larger area and contains more buildings. Therefore, a true orthoimage, shown in Figure 3(a), is used to test the effectiveness and applicability of our approach. The image has a size of 7300 × 8300 pixels and a spatial resolution of 5 cm, and the Lidar data over the same area have an average point spacing of 1.1 m. The image covers a large area and contains many buildings with different orientations, structures, and texture conditions. As shown in Figure 3(a), the buildings have different orientations, and most of them have complex geometric shapes. The image texture conditions also vary, including simple texture, highly repetitive texture, and complex texture; the complex texture conditions arise where trees are close to the buildings.
3.2 Experimental results and discussion 
The line segment extraction algorithm proposed in this study is an accurate and robust method for peak detection in the accumulator space of the Hough transform. It is compared with a classical peak detection method based on the maximum value (the max-value method). Figure 3(b) and (c) show the line segment extraction results of the max-value method and of our algorithm, respectively. The results show that the orientations of all segments in Figure 3(c) almost coincide with the principal orientations of the buildings, while the segments in Figure 3(b) do not. They also show that almost all important boundaries extracted by the max-value method are also extracted by our algorithm, whereas a few important boundaries successfully extracted by our algorithm are not obtained by the max-value method; this is shown in detail by labels A and B in the rectangular box in Figure 3(c). Compared to the max-value method, our algorithm is better at avoiding missing boundary details. The reason that more detailed boundary segments can be detected by our algorithm is that peak detection in the accumulator space of the Hough transform with the principal orientation constraint