Technical Commission III (B3)

Such fusion techniques are often based on the spectral and spatial characteristics derived from the datasets, and the segmented objects are combined for further object recognition using fuzzy clustering, hierarchical decision trees and other pattern recognition algorithms (Geneletti and Gorte, 2003). Nowadays LiDAR data are often derived from single or multiple returns of laser pulses, and the digital imagery usually contains multispectral bands. With the availability of full-waveform LiDAR data and hyperspectral imagery, the problems of data fusion and pattern classification become more complicated. The opportunity is that higher classification accuracy can be achieved thanks to the richer spectral and spatial features, but challenges remain in the data processing, waveform modelling and measurement interpretation of full-waveform LiDAR (Wagner et al., 2004).
2. METHODOLOGY 
The workflow and the software we use are illustrated in Figure 1. The main tasks are described in the following subsections, with emphasis on the method for ground object extraction.
2.1 Data Preparation 
Orientation and registration procedures should be carried out first to guarantee that the multisource data are handled within the same spatial framework (Habib et al., 2006). The provided DSM file, with a resolution of 25 cm, is used as the reference to orthorectify and mosaic the images using the given orientation parameters; this task is completed in Leica Photogrammetry Suite.
We combine the mosaicked image with the DSM data and extract the Areas of Interest (AOI) using ERDAS IMAGINE. Area 1, Area 2 and Area 3 are extracted as required, and the image of each test area has four 'bands' (namely IR-R-G-H). All the airborne images are contrast-enhanced before classification.
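Purely for illustration, the following NumPy sketch shows how the four-'band' IR-R-G-H stack and a contrast enhancement could be prepared, assuming the co-registered bands are already available as arrays. The paper performs these steps in ERDAS IMAGINE, and all function names and the percentile-stretch choice below are assumptions, not the authors' exact procedure.

```python
import numpy as np

def stack_aoi_bands(ir, r, g, dsm_height):
    """Stack IR, R, G and height (H) into a 4-'band' array for one test area.

    All inputs are assumed to be co-registered 2-D numpy arrays of the same
    shape (hypothetical names; the paper builds this stack in ERDAS IMAGINE).
    """
    return np.dstack([ir, r, g, dsm_height]).astype(np.float64)

def linear_stretch(band, lower_pct=2, upper_pct=98):
    """Percentile-based linear contrast stretch to [0, 255].

    The paper only states that the images are contrast-enhanced; a linear
    percentile stretch is one common choice, used here for illustration.
    """
    lo, hi = np.percentile(band, [lower_pct, upper_pct])
    stretched = np.clip((band - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)
```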
2.2 Ground Object Extraction 
Buildings, trees and vegetation (natural ground covered by vegetation) are extracted in Area 1, Area 2 and Area 3. Before the extraction, we enhance the contrast of the images to improve the distinctiveness of different ground objects (Figure 2a). The ground object extraction procedure consists of two steps: coarse classification and refinement. First, we use spectral information to coarsely classify the images; then a refinement is carried out using elevation information. The method we use to extract ground objects is Sparse Representation; the seminal works to refer to are (Chen, Donoho and Saunders, 1999; Candès and Tao, 2005; Donoho, 2006a, b; Bruckstein, Donoho and Elad, 2009; Wright, Yang, Ganesh, Sastry and Ma, 2009). The key idea is to represent the spectral vector (vector of IR-R-G values) of a pixel using the spectral vectors of pixels of typical ground objects. The classification problem is formulated as a Basis Pursuit problem (Equation 1) and solved using convex programming methods in MATLAB.
  
min ‖x‖₁   subject to   y = Ax                                    (1)
where y is the spectral vector of a pixel and the column vectors of the observation matrix A are the spectral vectors of pixels of typical ground objects. These pixels are interactively selected on the images of the test areas. In our implementation, we select five pixels for each typical ground object (that is, trees/vegetation, buildings and roads). A test procedure is then carried out to examine the distinctiveness of the spectral vectors selected as observations, and vectors that lead to misclassification are updated. Lastly, each pixel of the test-area images is classified using the given observation matrix A. The procedure works as follows: for each pixel we extract its spectral vector as y in Equation 1; we then solve Equation 1 using an ℓ1-minimization solver; the pixel is assigned to the class of the column of A corresponding to the largest positive component of the solution vector x (Figures 2b, 4b, 6b). The methodology we use therefore falls under the framework of supervised classification, and it is in essence a pixel-oriented classification method.
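The authors solve Equation 1 in MATLAB; as a minimal sketch of the same idea, the Python/SciPy code below casts the Basis Pursuit problem as a linear program and classifies a single pixel. The function and variable names (classify_pixel, class_of_column) are hypothetical and the solver choice is an assumption, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def classify_pixel(y, A, class_of_column):
    """Classify one pixel's spectral vector y by Basis Pursuit (Equation 1).

    A               : (3, n) matrix whose columns are IR-R-G vectors of the
                      interactively selected training pixels.
    class_of_column : length-n list giving the ground-object class of each
                      column of A.
    Solves  min ||x||_1  subject to  A x = y  as a linear program by
    splitting x into non-negative parts x = u - v.
    """
    m, n = A.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])             # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    x = res.x[:n] - res.x[n:]             # recover signed coefficients
    return class_of_column[int(np.argmax(x))]   # largest positive component wins
```

Looping such a routine over every pixel of a test area reproduces the pixel-oriented, supervised classification described above.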
Often we have to refine the coarse classification because of misclassification between trees/vegetation and buildings/roads. The refinement is mainly based on the elevation histogram: we select values that separate trees/vegetation from buildings/roads as thresholds to refine the coarse classification results.
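A minimal sketch of this height-based refinement is given below, assuming hypothetical class codes and scene-dependent thresholds read off the elevation histogram (the paper selects these values interactively, per test area).

```python
import numpy as np

def refine_with_height(labels, height, tree_veg_threshold, bldg_road_threshold):
    """Refine the coarse spectral classification with elevation.

    labels : 2-D array of coarse class codes (hypothetically 0=vegetation,
             1=trees, 2=road, 3=buildings); height : height values from the DSM.
    The two thresholds come from the elevation histogram of the test area.
    """
    refined = labels.copy()
    # pixels labelled 'trees' but lying near the ground become vegetation
    refined[(labels == 1) & (height < tree_veg_threshold)] = 0
    # pixels labelled 'buildings' but lying near the ground become road
    refined[(labels == 3) & (height < bldg_road_threshold)] = 2
    return refined
```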
The outputs of the ground object extraction have to be georeferenced, because the georeference information is lost during processing in MATLAB. The classified objects are output to separate files, and the georeference information is added using ERDAS IMAGINE.
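The paper re-attaches the georeference in ERDAS IMAGINE; purely as an illustration of the same step, a rasterio-based sketch could look like the following. The grid origin, pixel size and EPSG code are assumptions that would come from the DSM, and this is not the authors' workflow.

```python
import numpy as np
import rasterio
from rasterio.transform import from_origin

def write_georeferenced(labels, out_path, x_min, y_max, pixel_size, epsg):
    """Write a classified raster back out with georeferencing attached."""
    transform = from_origin(x_min, y_max, pixel_size, pixel_size)
    with rasterio.open(
        out_path, "w", driver="GTiff",
        height=labels.shape[0], width=labels.shape[1],
        count=1, dtype=rasterio.uint8,
        crs=f"EPSG:{epsg}", transform=transform,
    ) as dst:
        dst.write(labels.astype(np.uint8), 1)
```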
Figure 1: Workflow and the software tools
3. RESULTS 
The whole research area is illustrated in Figure 2, with the three test areas for building/tree/vegetation extraction outlined in yellow. Test Area 1 consists of houses with complex roof structures. The ground objects in Area 2 are mainly trees and buildings. The classification results are shown separately in Figures 3a-3c, 4a-4c and 5a-5c. The extracted objects are color-coded as vegetation (green), trees (yellow), road (blue) and buildings (red).
 
	        