main orientation statistics. 
Figure 5 Main orientations for districts in Fig. 4: (a) District 1, (b) District 2
4.2 Building outline modelling 
As shown in Fig. 1, the building outline modelling includes several
main steps: corner detection, main orientation estimation, line
modelling, etc. The main orientation is derived from the lines connecting
neighbouring corners by nearest-neighbour grouping of their angles. The building
model is defined as a polygon, which is described by its corners but
actually generated from the line model.
Corner detection: 
Edges are detected directly as the pixels on the contour of the
buildings. For a better description of the shape and better
feature extraction, the contour edges of a building are
converted into a curve. Corners are then defined as the points of
local maximal curvature along this curve (Mokhtarian, 1998).
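As a rough sketch of this corner criterion (not the authors' implementation), assuming the building contour is available as an ordered, closed list of (x, y) pixels; the smoothing scale and curvature threshold below are illustrative assumptions:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_corners(contour, sigma=3.0, min_curvature=0.05):
    # contour: (N, 2) array of ordered boundary pixels forming a closed curve.
    x = gaussian_filter1d(contour[:, 0].astype(float), sigma, mode="wrap")
    y = gaussian_filter1d(contour[:, 1].astype(float), sigma, mode="wrap")

    # First and second derivatives along the smoothed closed curve.
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)

    # Signed curvature of a planar parametric curve.
    kappa = (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

    # Corners: points of locally maximal |curvature| above a threshold.
    k = np.abs(kappa)
    is_max = (k > np.roll(k, 1)) & (k >= np.roll(k, -1)) & (k > min_curvature)
    return np.flatnonzero(is_max)  # indices of corner points on the contour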
Line representation: 
A line segment is represented by a line equation 
y = kx + b    (1)
where
k = tan(θ), θ ∈ {α, β}    (2)
The slope k is initialized as that of the connecting line
between two sequential corners c(i) and c(i+1). The midpoint
p0(i) separates this line into two segments with respective
midpoints p1(i) and p2(i). The three midpoints are employed
to refine the location and the model of the line. The intercept b is
calculated from the coordinates of the midpoint p0(i) and the slope k.
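A small sketch of this initialization, using the notation above (the helper name is illustrative):

import numpy as np

def init_line_model(c_i, c_next):
    # Initialize y = k*x + b between two sequential corners c(i) and c(i+1),
    # returning the slope k, intercept b and the midpoints p0, p1, p2.
    c_i, c_next = np.asarray(c_i, float), np.asarray(c_next, float)

    p0 = (c_i + c_next) / 2.0   # midpoint of the corner-to-corner line
    p1 = (c_i + p0) / 2.0       # midpoint of its first half
    p2 = (p0 + c_next) / 2.0    # midpoint of its second half

    dx = c_next[0] - c_i[0]
    k = (c_next[1] - c_i[1]) / dx if abs(dx) > 1e-9 else np.inf  # slope = tan(theta)
    b = p0[1] - k * p0[0] if np.isfinite(k) else None            # intercept from p0
    return k, b, (p0, p1, p2)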
Line refining algorithm: 
When the main orientation angle is determined, the lines must
be rearranged to conform to that orientation, either parallel or
perpendicular to it, and the model parameters are recalculated.
The locations of the lines are adjusted in this process to fit
the edges in the image better. For the RGB image, the edges
under the DSM building mask are easily detected from the
gradient. The best edge is searched along the direction
perpendicular to the line, such that the three midpoints attain the
maximal gross gradient. After that, the refined midpoints are
used to compute the modelled line parameter b, while k is
determined by which of the two main orientation angles α and β
is closer.
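The refinement step can be sketched as follows, assuming a gradient-magnitude image computed from the RGB data; the search radius and step are hypothetical parameters, not values given in the paper:

import numpy as np

def refine_line_offset(grad_mag, midpoints, k, search_radius=4.0):
    # Shift a line of slope k perpendicular to itself so that the summed
    # (gross) gradient magnitude at its three midpoints p0, p1, p2 is maximal.
    n = np.array([-k, 1.0]) / np.hypot(1.0, k)  # unit normal of the line
    pts = np.asarray(midpoints, float)

    def gross_gradient(points):
        total = 0.0
        for x, y in points:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < grad_mag.shape[0] and 0 <= xi < grad_mag.shape[1]:
                total += grad_mag[yi, xi]
        return total

    offsets = np.arange(-search_radius, search_radius + 0.5, 0.5)
    best_t = max(offsets, key=lambda t: gross_gradient(pts + t * n))

    refined = pts + best_t * n
    b = refined[0][1] - k * refined[0][0]  # new intercept from the refined p0
    return refined, b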
Line merging and generating: 
Neighbouring parallel line segments l(i): y = k(i)x + b(i) and
l(i+1): y = k(i+1)x + b(i+1) within a certain distance (5
pixels here) are reunited into a single segment l(i), whose
intercept b(i) is re-calculated according to the updated midpoint
p0(i), which is re-located at the average of the original two
midpoints p0(i) and p0(i+1) and re-refined according to the
refining algorithm. The indexes of the lines following i+1 are
reduced by 1.
For two neighbouring parallel line segments l(i) and l(i+1)
beyond this distance, a new line segment l(i+1) is inserted
which is perpendicular to them; the indexes of the other segments are
increased by 1. The inserted segment equation is
y = k(i+1)x + b(i+1), where equation (3) should be
satisfied:
k(i+1)·k(i) = -1    (3)
b(i+1) is decided by the midpoint p0(i+1), which is initialized
as the average of the two midpoints of the original parallel
segments and refined according to the refining algorithm.
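The merge/insert rule might be sketched like this, with each segment kept as a slope, an intercept and a midpoint; the dictionary layout and the distance measure (between the two p0 midpoints) are assumptions made for illustration:

import numpy as np

def merge_or_insert(lines, i, max_gap=5.0):
    # lines: ordered list of segments {'k': slope, 'b': intercept, 'p0': midpoint}.
    a, c = lines[i], lines[i + 1]
    p_a, p_c = np.asarray(a['p0'], float), np.asarray(c['p0'], float)
    p_mid = (p_a + p_c) / 2.0

    if np.linalg.norm(p_a - p_c) <= max_gap:
        # Close enough: reunite l(i) and l(i+1) into one segment through p_mid
        # (to be re-refined afterwards); following indexes shift down by one.
        merged = {'k': a['k'], 'b': p_mid[1] - a['k'] * p_mid[0], 'p0': p_mid}
        return lines[:i] + [merged] + lines[i + 2:]

    # Too far apart: insert a perpendicular segment so that k(i+1)*k(i) = -1;
    # following indexes shift up by one.
    k_perp = -1.0 / a['k'] if abs(a['k']) > 1e-9 else np.inf
    b_perp = p_mid[1] - k_perp * p_mid[0] if np.isfinite(k_perp) else None
    return lines[:i + 1] + [{'k': k_perp, 'b': b_perp, 'p0': p_mid}] + lines[i + 1:]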
Polygon representation:
A building is then represented by a polygon with the refined
corners as its vertices, which are calculated as the intersections
of the neighbouring modelled lines.
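Computing the vertices then reduces to intersecting neighbouring lines y = k·x + b; a minimal sketch (vertical lines are left out for brevity):

def intersect(k1, b1, k2, b2):
    # Intersection of y = k1*x + b1 and y = k2*x + b2 (assumed non-parallel).
    x = (b2 - b1) / (k1 - k2)
    return x, k1 * x + b1

def polygon_from_lines(lines):
    # lines: list of (k, b) pairs ordered around the building outline;
    # each refined corner is the intersection of two neighbouring lines.
    n = len(lines)
    return [intersect(*lines[j], *lines[(j + 1) % n]) for j in range(n)]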
5. ANALYSIS OF THE RESULTS
Besides data1, a large dataset of Tokyo, named data2, was
processed, and some of the results are displayed in Fig. 6 and Fig.
7. Fig. 6 (a)-(c) shows the results at several stages of the
algorithm for a dense residential area. One class is
detected and one main orientation is estimated. For comparison,
Fig. 6 (d) shows the polygons modelled one at a time. Due to
the lack of samples for orientation estimation, these building polygons
hardly match the real houses and display random errors. The
numbers of obviously wrong orientations (over 10 degrees
deviation) for the singly modelled result and the proposed one are
listed in Tab. 1 for data1 and data2.
  
  
  
          Data1   Data2
Group         9       1
Single       55      16

Table 1 Orientation errors
We also compared the models derived automatically
using our algorithms with those made manually, and display
them superimposed in Fig. 6 (e). Some buildings at the
four edges of the image sample are not extracted or not
entirely modelled, because the modelling algorithm
does not consider this situation. For the houses wholly
displayed in the image, the locations are highly accurate and
the outline shapes fit well. In total 186
houses can be recognized in this area, and only one
was not extracted. Among the 169 buildings with whole or
close-to-whole shapes, 6 models have an
area error lower than 80%, of which 4 are located at the
edges and have close-to-whole shapes, and 4 models
have an area error lower than 90%. Most edges of the houses
deviate by 1 to 3 pixels. For better visualization, the
corners and models of a patch of this area are illustrated
in Fig. 7.
For evaluation, the numbers of correctly extracted
houses and correctly modelled houses were counted, as in
Tab. 2. A house showing more than half of its shape is taken as a
whole house. Some false merging occurs in
the left part of data1, where large buildings and small
houses are mixed. This is caused by the NDSM generation