Technical Commission III (B3)

Object Based Image Analysis (OBIA) has recently gained considerable 
attention in geographical mapping applications as an 
alternative analysis framework that avoids the drawbacks 
associated with pixel based analysis. Despite its advantages, 
pixel based image analysis suffers from problems such as 
sensitivity to variations within objects significantly larger than 
the pixel size (Aplin et al., 2008). The spatial extent of the objects 
to be classified is of more importance to the classification task 
than the spatial scale of the image pixels (Platt et al., 2008). Object 
based classification can remarkably improve classification 
accuracy by alleviating the problem of misclassifying individual 
pixels (Aplin et al., 1999). 
The approach proposed in this paper uses single-return 
LIDAR data along with aerial images to extract the 
buildings and trees of urban areas. Object based analysis is 
adopted to segment the entire DSM into objects based on 
height variation. Classification proceeds in two stages, 
where the preliminarily classified objects of the first stage 
are used to derive a new feature, the height to ground, for the 
second stage. Among the many features provided by the aerial 
imagery, a normalized difference vegetation index based on the 
R and IR bands has been used due to its high significance for 
vegetation extraction. The second classification stage uses the 
object size, the average height to ground, and the vegetation 
index to fine-tune the classification of the objects. 
The following section describes the steps of the proposed 
approach. Then, experimental results of the proposed approach 
for different urban areas are presented. Finally, conclusions 
are provided. 
2. METHODOLOGY 
The proposed approach adopts object based analysis, where 
objects are the targets for classification. The first step is to 
perform image segmentation on the DSM height image to divide 
the whole scene into objects. A region growing algorithm is 
conducted over the entire DSM height image, starting from the 
upper left corner, based on neighbourhood height similarity. The 
same traversal of the data during object extraction is exploited to 
calculate the area of each object, to be used in the classification 
step. 
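The region growing step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the height tolerance value, the 4-neighbourhood, and all function names are assumptions.

```python
import numpy as np
from collections import deque

def segment_dsm(height, height_tol=0.3):
    """Label connected regions of similar height in a DSM grid.

    height: 2-D array of DSM heights; height_tol: maximum allowed
    height difference between neighbouring pixels (assumed value).
    Returns (labels, areas), where areas[k] is the pixel count of object k.
    """
    rows, cols = height.shape
    labels = np.full((rows, cols), -1, dtype=int)
    areas = []
    next_label = 0
    # Traverse from the upper-left corner, as in the paper.
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != -1:
                continue
            # Grow a new object from this seed pixel.
            queue = deque([(r, c)])
            labels[r, c] = next_label
            area = 0
            while queue:
                y, x = queue.popleft()
                area += 1  # count pixels in the same pass, as in the text
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny, nx] == -1
                            and abs(height[ny, nx] - height[y, x]) <= height_tol):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            areas.append(area)
            next_label += 1
    return labels, np.array(areas)
```

Counting the area during the same traversal avoids a second pass over the data, as noted above.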
Owing to the neighbourhood height similarity used in the 
segmentation step, the points of each extracted object tend to 
belong to the same object plane. Planar objects such as ground 
and building surfaces will form large patches, as they maintain 
smooth height changes. Trees, on the other hand, typically 
exhibit high variation of height due to the frequent LIDAR 
penetrations of their crowns. Consequently, tree objects are 
fragmented into small areas. 
As a preliminary classification, objects below a minimum area 
threshold are classified as vegetation; this threshold represents 
the smallest expected area of a building object and was set 
to 10 m² in our tests. The remaining objects are classified as 
buildings, except for the largest object, which is classified as 
ground. The largest object is used as a height reference, and the 
height to ground of each pixel of the remaining area is calculated 
as the difference between the pixel's height and the height of 
the nearest ground pixel. 
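The preliminary rules and the height-to-ground feature can be sketched as below. The class codes, the assumed pixel footprint (`pixel_area`), and the brute-force nearest-ground search are illustrative choices, not the paper's implementation.

```python
import numpy as np

GROUND, BUILDING, VEGETATION = 0, 1, 2  # illustrative class codes

def preliminary_classify(areas, pixel_area=0.25, min_building_area=10.0):
    """First-stage rules: small objects -> vegetation, largest -> ground,
    the rest -> buildings. areas: per-object pixel counts."""
    classes = np.full(len(areas), BUILDING, dtype=int)
    classes[areas * pixel_area < min_building_area] = VEGETATION
    classes[np.argmax(areas)] = GROUND  # largest object is the ground
    return classes

def height_to_ground(height, labels, classes):
    """Per-pixel height above the nearest ground pixel (brute force)."""
    ground_ys, ground_xs = np.nonzero(classes[labels] == GROUND)
    htg = np.zeros_like(height, dtype=float)
    for y in range(height.shape[0]):
        for x in range(height.shape[1]):
            d2 = (ground_ys - y) ** 2 + (ground_xs - x) ** 2
            nearest = np.argmin(d2)  # index of the nearest ground pixel
            htg[y, x] = height[y, x] - height[ground_ys[nearest],
                                              ground_xs[nearest]]
    return htg
```

For large scenes, the nearest-ground lookup would normally use a distance transform rather than this quadratic search; the sketch keeps the definition explicit.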
Due to the interpolation applied to the LIDAR data, some building 
walls exhibit misleadingly high height variation, which 
results in small patches misclassified as vegetation. The same 
misclassification is encountered for the architectural details of 
buildings, as they also show abrupt height changes over small 
areas. These misclassifications are revised during the second 
classification stage. 
To find the corresponding spectral data of the extracted 
objects, an ortho-photo of the scene is constructed using all the 
overlapping images over the scene. All the ortho-rectified 
images that intersect with the scene boundary are merged 
to obtain a true ortho-photo of the scene, where areas 
occluded or invisible in one ortho-photo are complemented 
by the ortho-photos from the other images. Figure 1 
illustrates sample ortho-photos of an area along with the merged 
true ortho-photo obtained. 
  
Figure 1. True ortho-photo generation: 1.a an ortho-photo with 
partial scene coverage; 1.b an ortho-photo with partial coverage 
for the same scene; 1.c an ortho-photo with partial coverage for 
the same scene; 1.d the overall true ortho-photo of the scene. 
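The merging step can be sketched as below. This is a simplified illustration of the idea: the ortho-photos are assumed to be co-registered arrays with occluded pixels marked as NaN, which is an assumption, not the paper's representation.

```python
import numpy as np

def merge_orthophotos(orthos):
    """Combine a list of aligned ortho-photos into one true ortho-photo.

    orthos: list of 2-D float arrays of equal shape, NaN where occluded
    or invisible. For each pixel, the first valid value across the
    overlapping ortho-photos is kept.
    """
    merged = np.full_like(orthos[0], np.nan)
    for ortho in orthos:
        # Fill only pixels still missing in the merged result.
        fill = np.isnan(merged) & ~np.isnan(ortho)
        merged[fill] = ortho[fill]
    return merged
```

A production implementation would also weigh candidate pixels by viewing angle or image quality; taking the first valid value keeps the occlusion-filling idea visible.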
Normalized Difference Vegetation Index (NDVI) is computed 
for all objects in the scene using IR and R bands of the 
generated true ortho-photo as in (1) 
NDVI = (IR - R) / (IR + R)                (1) 
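Equation (1) and its per-object averaging can be sketched as follows. The band arrays, the small epsilon guard against division by zero, and the function names are assumptions for illustration.

```python
import numpy as np

def ndvi(ir, r, eps=1e-9):
    """NDVI = (IR - R) / (IR + R), guarded against division by zero."""
    ir = ir.astype(float)
    r = r.astype(float)
    return (ir - r) / (ir + r + eps)

def object_mean_ndvi(ndvi_img, labels, n_objects):
    """Average the NDVI image over the pixels of each segmented object."""
    return np.array([ndvi_img[labels == k].mean() for k in range(n_objects)])
```

Averaging the index per object, rather than thresholding individual pixels, is what makes it usable as an object feature in the second stage.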
The second stage of classification is conducted to tune the 
preliminary classification of the first stage according to the 
following rules: 
• Objects of high height-to-ground (>0.2) and high 
NDVI (>0.18) are classified as trees. 
• Objects of high height-to-ground (>0.2) and low 
NDVI (<0.18) are classified as buildings. 
• Objects that do not satisfy the previous two conditions 
maintain their preliminary classification. 
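The three rules above can be sketched as follows, using the thresholds stated in the text (0.2 for height to ground, 0.18 for NDVI); the class codes and function name are illustrative.

```python
import numpy as np

GROUND, BUILDING, VEGETATION = 0, 1, 2  # illustrative class codes

def tune_classification(classes, mean_htg, mean_ndvi,
                        htg_thresh=0.2, ndvi_thresh=0.18):
    """Apply the second-stage rules to per-object features.

    classes: preliminary per-object labels; mean_htg and mean_ndvi:
    per-object average height to ground and NDVI.
    """
    tuned = classes.copy()
    high = mean_htg > htg_thresh
    tuned[high & (mean_ndvi > ndvi_thresh)] = VEGETATION   # trees
    tuned[high & (mean_ndvi <= ndvi_thresh)] = BUILDING    # buildings
    # Objects failing both conditions keep their preliminary class.
    return tuned
```

This is how the small wall and roof-detail patches misclassified in the first stage are reassigned to buildings: they are high above ground but have low NDVI.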
3. RESULTS 
To evaluate the proposed approach, both the aerial images and 
the LIDAR data of three urban areas in the centre of the city of 
Vaihingen are used for testing. These data sets are provided by 
the ISPRS test project on urban classification and 3D building 
reconstruction. The areas comprise historic buildings with rather 
complex shapes, a few high-rise residential buildings 
surrounded by trees, and a purely residential area with small 
detached houses. The digital aerial images are part of a high- 
resolution image block, and the LIDAR data were acquired with 
an airborne laser scanner (ALS). [The remainder of this section, 
covering the acquisition parameters and the quantitative 
per-object evaluation of the classification results, is truncated 
in the source.] 