classification schemes, where training sets can be found for building and tree height points, can be introduced.
To use 3D range points as training vectors, correct co-registration with the optical images is indispensable, especially correction for "building lean" effects. We applied the method proposed by Baltsavias et al. (2001):
ΔX = -ΔZ · sin(α) / tan(ε)
ΔY = -ΔZ · cos(α) / tan(ε)          (4)

where ΔZ : normalised height (3D point height minus bare-earth height),
α : sensor azimuth angle,
ε : sensor elevation angle
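As a rough illustration, the correction can be coded directly from equation (4); the following minimal Python sketch assumes the azimuth and elevation angles are given in degrees, and all names and values are illustrative:

import math

def building_lean_shift(dz, azimuth_deg, elevation_deg):
    """Planimetric shift of a 3D range point (equation 4).

    dz            : normalised height of the point above the bare earth (m)
    azimuth_deg   : sensor azimuth angle (degrees)
    elevation_deg : sensor elevation angle (degrees)
    Returns (dX, dY) in metres.
    """
    a = math.radians(azimuth_deg)
    e = math.radians(elevation_deg)
    dx = -dz * math.sin(a) / math.tan(e)
    dy = -dz * math.cos(a) / math.tan(e)
    return dx, dy

# Example: a roof point 15 m above the bare earth, sensor azimuth 130 deg,
# elevation 65 deg (values are illustrative only)
dx, dy = building_lean_shift(15.0, 130.0, 65.0)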
The relationship in (4) is not exact, but it can be used with some margin considering the intrinsic planimetric error of the IKONOS Pro image (3-4 metres here). By combining NDVI and n-DEM heights, training vectors for tree, building and bare earth can be defined. A Bayesian supervised classification can then be applied to the individual ROIs using these training vectors. The a priori probabilities of the tree and building areas in the Bayesian classifier are calculated through equation (5).
Figure 3. ROI refinement scheme (flowchart: registration of height points onto the optical image; definition of training vectors from NDVI and normalised heights; FCFM edge detection and segmentation; combining by overlapping-ratio check)
P(ω_i) = m · w / M          (5)
  
where M : total number of 3D range points in the ROI,
m : number of 3D points within the height and NDVI range assumed from the iteration process,
w : weight value derived from the normalised heights
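A minimal sketch of how the a priori probability of equation (5) could be evaluated for one ROI, assuming the co-registered 3D range points already carry normalised-height and NDVI attributes, and reading w as the mean per-point weight of the selected points (the thresholds and names below are illustrative, not the paper's values):

import numpy as np

def a_priori_probability(heights, ndvi, weights, h_range, ndvi_range):
    """A priori probability P(w_i) = m * w / M for one class in an ROI (eq. 5).

    heights, ndvi, weights : per-point normalised height, NDVI and
                             height-derived weight of the M points in the ROI
    h_range, ndvi_range    : (min, max) intervals assumed from the iteration step
    """
    heights, ndvi, weights = map(np.asarray, (heights, ndvi, weights))
    in_class = ((heights >= h_range[0]) & (heights < h_range[1]) &
                (ndvi >= ndvi_range[0]) & (ndvi < ndvi_range[1]))
    M = heights.size                         # total 3D range points in the ROI
    m = int(in_class.sum())                  # points inside the height/NDVI range
    w = weights[in_class].mean() if m else 0.0
    return m * w / M

# Example: building prior from points 3-30 m high with low NDVI (illustrative)
# p_building = a_priori_probability(h, ndvi, w, (3.0, 30.0), (-1.0, 0.2))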
The results of this supervised classification stage are then combined with segments that preserve edges, using the FCFM (fuzzy clustering and fuzzy merging) method (Looney, 2002). First, the edge lines within a building ROI are extracted and the remaining parts are pre-segmented. The separated edge portion is then merged by a distance measurement in colour space along its path: for each edge point, the segments within its 8-connected neighbourhood are checked and the colour distance to each surrounding segment centre is measured, the nearest being selected.
d_i = min_k || u_i - C_k ||,   u_i ∈ D          (6)

where C_k : the centre of cluster (segment) k,
u_i : colour vector of the edge point
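A short sketch of this edge-point assignment, assuming each edge pixel is compared with the colour centres of the pre-segments found in its 8-connected neighbourhood (all names are illustrative):

import numpy as np

def nearest_segment(edge_colour, neighbour_ids, segment_centres):
    """Assign an edge pixel to the neighbouring segment whose colour centre
    C_k is closest to the pixel's colour vector u_i (equation 6).

    edge_colour     : colour vector u_i of the edge pixel, e.g. (R, G, B)
    neighbour_ids   : segment labels found in the 8-connected neighbourhood
    segment_centres : mapping {label: mean colour vector of that segment}
    """
    u = np.asarray(edge_colour, dtype=float)
    distances = {k: np.linalg.norm(u - np.asarray(segment_centres[k], dtype=float))
                 for k in set(neighbour_ids)}
    best = min(distances, key=distances.get)
    return best, distances[best]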
With this, edge preservation is carried through all the other processing chains. Secondly, FCFM, whose number of prototype classes depends on the pixels in the region, is applied to a pre-defined region P. The relationship between the number of prototype classes and the pixel count can be expressed as follows.
n_c = |S| / 500          (7)

where |S| : the number of pixels of the predefined segment S, and n_c : the number of prototype classes
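Read this way, the rule amounts to roughly one initial prototype class per 500 pixels of the segment; a one-line sketch (the rounding and the lower bound of 1 are assumptions):

def initial_prototype_count(segment_pixels):
    """Initial FCFM prototype (cluster) count for a segment of |S| pixels
    (equation 7); the rounding and the lower bound of 1 are assumptions."""
    return max(1, round(segment_pixels / 500))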
The cluster number is then adjusted once more by the internal logic of FCFM, so that an optimal number of segments is produced while the edge parts are kept, since the relatively noisy portions of the segments have already been removed as detected edges. The next step is the data fusion between the FCFM colour segmentation and the results of the Bayes classifier. This is performed by measuring an overlap ratio and then reassembling the segments. The overlap measurement between an FCFM segment and the building part from the Bayes classifier is given by (8):
C_x = 1 + log( (|R_x| - |R_min|) / (|R_max| - |R_min|) )
p(x) = N · (|R_t ∩ R_x| / |R_x|) · C_x          (8)

where |R_t| : number of vectors (pixels) in region t,
|R_x| : number of vectors in cluster x,
|R_max| : number of vectors in the maximum-size region,
|R_min| : number of vectors in the minimum-size region,
N : constant per cluster type (N = 1 building, N = 0.7 tree, N = 0.5 bare field)
The results are shown in Figure 4. On the side opposite the Sun direction (i.e. the shadow side), the edges are clearer, and consequently the boundaries of those parts show good agreement with an estimated straight line. One problem for this scheme, however, is the part of the building hidden in shadow, where the colour-space distances are all similar in spite of differences in the multi-spectral signature; because the supervised classification scheme does not work correctly there, the correct building boundaries are not detected.
      
Figure 4. The first refinement result of the building ROI by the clustering scheme (missing LiDAR points over some "hidden" buildings result in no identification)
To compensate for this weakness, the SRG (seeded region growing) algorithm developed by Adams and Bischof (1994) was introduced in the last part of the refinement. One difference from the original SRG is the use of multiple seed points, which are matched onto the roof structures by the earlier registration work. From the clusters of seed points, the building area grows, with the statistics of each cluster updated as pixels are absorbed, until the pixel count converges. The result of the SRG stage is shown in Figure 5.
  
  
Figure 5. Final building outline refinement examples by SRG: (a) cookie-cut image by building ROI, (b) LiDAR seed points on building roofs, (d) newly defined edge by SRG
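A compact sketch of seeded region growing with multiple LiDAR-derived seeds, in the spirit of Adams and Bischof (1994): each cluster grows from its seed pixels, repeatedly absorbing the boundary pixel closest to the cluster's running mean and updating that mean, until no remaining pixel is close enough (the greyscale input and the stopping threshold are assumptions):

import heapq
import numpy as np

def seeded_region_growing(image, seeds, max_delta=20.0):
    """Multi-seed SRG on a 2-D greyscale array (after Adams & Bischof, 1994).

    image     : 2-D float array, e.g. intensity of the cookie-cut ROI image
    seeds     : {label: [(row, col), ...]} LiDAR seed points per building;
                labels must be positive integers (0 = unlabelled)
    max_delta : stop absorbing pixels whose grey-value difference from the
                cluster mean exceeds this value (assumed stopping criterion)
    """
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    sums, counts = {}, {}
    heap = []                    # (difference to cluster mean, row, col, label)

    def push_neighbours(r, c, lab):
        mean = sums[lab] / counts[lab]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0:
                heapq.heappush(heap, (abs(float(image[rr, cc]) - mean), rr, cc, lab))

    # Initialise each cluster with its seed pixels and their statistics
    for lab, points in seeds.items():
        sums[lab], counts[lab] = 0.0, 0
        for r, c in points:
            labels[r, c] = lab
            sums[lab] += float(image[r, c])
            counts[lab] += 1
        for r, c in points:
            push_neighbours(r, c, lab)

    # Grow: always absorb the candidate pixel closest to its cluster's mean
    while heap:
        delta, r, c, lab = heapq.heappop(heap)
        if labels[r, c] != 0 or delta > max_delta:
            continue
        labels[r, c] = lab
        sums[lab] += float(image[r, c])
        counts[lab] += 1
        push_neighbours(r, c, lab)

    return labels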
        
  
  
  
  
  
  
	        