Technical Commission III (B3)
Defined on a regular lattice, pixel-based MRF methods can conveniently model spatial contextual information. However, a pixel-level MRF cannot exploit information such as the shape, texture, and spatial relations of land cover classes. Hence, many researchers have extended the MRF model from the pixel level to the region level (Zhang and Ji, 2010). Region-based MRF methods usually first divide an image into over-segmented regions (Katartzis et al., 2005); the region-based MRF model is then defined on these initial regions to obtain the final classification result. Although a region-level MRF overcomes some shortcomings of the pixel-based MRF, it still suffers from the inaccuracy of the over-segmented regions and their irregular spatial contextual relationships. To improve the ability to describe large-scale behavior, both pixel- and region-based MRF models can be extended to a multi-scale MRF (MSRF). MSRF structures capture the inter-scale dependencies of class labels across scales, and a non-iterative algorithm can be developed to speed up classification.
In this paper, we propose a new classification method that unifies the pixel-level and region-level MRF in multi-scale space (UMSRF). The method improves the MSRF model (Bouman and Shapiro, 1994) by taking advantage of both pixel- and region-based MRFs to obtain better classification results. Classification is carried out on a multi-scale region adjacency graph (RAG), which can exploit information about region shape and texture. Specifically, we focus on how to introduce shape information into the MSRF model to mitigate the appearance ambiguity between different land cover classes. This is motivated by the fact that most man-made objects in remote sensing images can be modeled by simple mathematical functions, and can thus be easily integrated into an MRF model. To take both pixel and region features into account, the likelihood function of the UMSRF is decomposed into the product of a pixel likelihood function and a region likelihood function. The region likelihood function is based on the introduced region feature, which captures both the interaction between regions and the characteristics within a region.
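This decomposition can be sketched in log space, where the product of the pixel and region likelihoods becomes a sum. The function names and all class parameters below are illustrative assumptions, not the paper's notation:

```python
import numpy as np

# Hedged sketch of the UMSRF likelihood decomposition: for a region with a
# candidate class label, the likelihood is the product of per-pixel Gaussian
# likelihoods and a region-level likelihood of a scalar shape feature.
# All names and parameter values here are hypothetical illustrations.

def pixel_log_likelihood(pixels, mean, var):
    """Gaussian log-likelihood of the pixel values in a region."""
    return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                        - (pixels - mean) ** 2 / (2 * var)))

def region_log_likelihood(shape_feature, expected, sigma):
    """Gaussian log-likelihood of a scalar region shape feature."""
    return float(-0.5 * np.log(2 * np.pi * sigma ** 2)
                 - (shape_feature - expected) ** 2 / (2 * sigma ** 2))

def combined_log_likelihood(pixels, shape_feature, params):
    """log P(y|x) = pixel log-likelihood + region log-likelihood
    (i.e. a product in probability space)."""
    return (pixel_log_likelihood(pixels, params["mean"], params["var"])
            + region_log_likelihood(shape_feature,
                                    params["shape_mean"],
                                    params["shape_sigma"]))

# Toy usage: score two candidate classes for one region.
pixels = np.array([0.9, 1.1, 1.0])
classes = {
    "building": {"mean": 1.0, "var": 0.05, "shape_mean": 0.8, "shape_sigma": 0.1},
    "water":    {"mean": 0.2, "var": 0.05, "shape_mean": 0.3, "shape_sigma": 0.2},
}
scores = {c: combined_log_likelihood(pixels, 0.75, p) for c, p in classes.items()}
best = max(scores, key=scores.get)
```

Working in log space avoids numerical underflow when many per-pixel terms are multiplied.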
The UMSRF-based method consists of two modules: multi-scale image segmentation, and inference of the land cover class label of each pixel. The first module builds an image pyramid and partitions the input image at each scale; region features are then extracted to describe region shape and contextual information. The hierarchical segmentation is carried out in the wavelet domain, taking wavelet coefficients as image features, and the watershed transform is used to partition the image at each scale. The second module assigns a class label to each pixel. The standard two-sweep forward-backward algorithm is extended to integrate the pixel and region information. The upward sweep starts at the finest scale and computes the likelihood, taking into account the interaction across scales; this procedure is repeated until it reaches the coarsest scale. The downward sweep then starts at the coarsest scale to determine the label of each pixel. In this sweep, the label of each pixel is obtained by maximizing the posterior probability; when computing the posterior probability, the likelihood is decomposed into a pixel likelihood and a region likelihood. The process is repeated until it reaches the finest scale. The UMSRF model parameters are estimated by the EM algorithm.
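The two-sweep idea can be illustrated on a toy quadtree pyramid. This is a minimal sketch under strong simplifying assumptions (log-likelihoods aggregated by summation upward, and a single additive bonus toward the parent's label downward); the data structures and the `parent_bonus` weight are illustrative, not the paper's exact formulation:

```python
import numpy as np

# Upward sweep: aggregate per-node log-likelihoods from the finest scale to
# the coarsest, so each parent summarizes its 2x2 children.
def upward_sweep(leaf_loglik, num_scales):
    """leaf_loglik: (H, W, K) array of class log-likelihoods at the finest scale."""
    pyramid = [leaf_loglik]
    for _ in range(num_scales - 1):
        cur = pyramid[-1]
        h, w, k = cur.shape
        # Each parent node sums the log-likelihoods of its 2x2 children.
        parent = cur.reshape(h // 2, 2, w // 2, 2, k).sum(axis=(1, 3))
        pyramid.append(parent)
    return pyramid  # pyramid[0] = finest scale ... pyramid[-1] = coarsest

# Downward sweep: label the coarsest scale by MAP, then bias each child's
# decision toward its parent's label (a crude stand-in for the inter-scale prior).
def downward_sweep(pyramid, parent_bonus=1.0):
    labels = [np.argmax(pyramid[-1], axis=2)]  # coarsest scale: plain MAP
    for cur in reversed(pyramid[:-1]):
        parent_lab = np.repeat(np.repeat(labels[-1], 2, axis=0), 2, axis=1)
        score = cur.copy()
        idx = np.indices(parent_lab.shape)
        score[idx[0], idx[1], parent_lab] += parent_bonus  # favor parent's class
        labels.append(np.argmax(score, axis=2))
    return labels[-1]  # labels at the finest scale

# Toy run: a 4x4 image, 2 classes, 3 scales.
rng = np.random.default_rng(0)
loglik = rng.normal(size=(4, 4, 2))
pyramid = upward_sweep(loglik, num_scales=3)
fine_labels = downward_sweep(pyramid)
```

Because each node is visited once per sweep, the procedure is non-iterative, which is the speed advantage the MSRF structure provides over iterative optimization on a flat lattice.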
The rest of the paper is organized as follows. After a brief review of MRF-based image classification in Section 2, the multi-scale image segmentation, region shape feature extraction, and UMSRF model are discussed in detail in Section 3. In Section 4, we illustrate classification results on high-resolution images and compare our method with pixel-based classification approaches that follow Bayesian inference. Finally, conclusions and directions for future research are given in Section 5.
2. FRAMEWORK OF MRF MODEL BASED 
CLASSIFICATION 
This section briefly presents the framework of MRF based 
image classification. 
Let S denote a set of sites. Y = {Y_s, s ∈ S} is the observed random field defined on S, which represents the spectral statistical property at each site, and y = {y_s, s ∈ S} denotes a realization of Y. The MRF model assumes that the behavior of Y depends on a corresponding unobserved label field, denoted X = {X_s, s ∈ S}, whose variables take their values in a discrete set L = {1, ..., M}, where M is the total number of classes. Let x = {x_s, s ∈ S} denote a realization of X. Image classification then amounts to estimating the x that maximizes the posterior probability P(X|y), given the observed image y.
By Bayes' rule, the x that maximizes P(x|y) is the x that maximizes P(y|X)P(X).
The joint probability P(X) models the spatial context of the different land cover objects. The label random field X is assumed to possess the Markovianity property, and therefore follows a Gibbs distribution. The multilevel logistic (MLL) model is often used to model the spatial contextual relationship. The MLL model favors smooth classification results, which makes the MRF model resistant to noise and reduces the impact of intra-class variation.
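A minimal sketch of the MLL pairwise prior on a lattice: the energy penalizes 4-neighbor pairs with differing labels, so lower energy corresponds to a smoother labeling. The smoothness weight `beta` and the toy labelings are illustrative:

```python
import numpy as np

def mll_energy(labels, beta=1.0):
    """Sum of beta over all 4-neighbor pixel pairs whose labels differ."""
    diff_h = labels[:, 1:] != labels[:, :-1]   # horizontal neighbor pairs
    diff_v = labels[1:, :] != labels[:-1, :]   # vertical neighbor pairs
    return beta * (np.count_nonzero(diff_h) + np.count_nonzero(diff_v))

# A uniform labeling has zero energy; flipping one interior pixel adds a
# penalty for each of its 4 disagreeing neighbor pairs.
smooth = np.zeros((4, 4), dtype=int)
noisy = smooth.copy()
noisy[1, 2] = 1
```

Since P(X) ∝ exp(-energy), the smooth labeling is a priori more probable, which is exactly how the MLL prior resists isolated, noise-induced label flips.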
The likelihood function P(y|X) models the statistical characteristics of the observed image given the label field. A Gaussian distribution is usually employed to model P(y|X) for simplicity.
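For a single spectral value this amounts to one Gaussian per class; picking the class with the highest likelihood is the prior-free special case of the MAP rule above. The per-class means and variances here are made-up illustrations, not estimates from any real image:

```python
import math

def gaussian_log_pdf(y, mean, var):
    """Log of the univariate Gaussian density N(y; mean, var)."""
    return -0.5 * math.log(2 * math.pi * var) - (y - mean) ** 2 / (2 * var)

# Hypothetical per-class spectral statistics (mean, variance).
params = {"vegetation": (0.3, 0.01), "road": (0.7, 0.02)}

def ml_label(y):
    """Maximum-likelihood class label for a single observed value y."""
    return max(params, key=lambda c: gaussian_log_pdf(y, *params[c]))
```

In practice the class parameters would be estimated from training samples (or, as in this paper, by the EM algorithm), and the log-likelihood would be combined with the MLL prior rather than used alone.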
MRF models can be defined at both the pixel level and the region level (after an initial segmentation). For the pixel-based MRF model, each element s = (i, j) in S denotes a pixel, and S = {s = (i, j) | 1 ≤ i ≤ M, 1 ≤ j ≤ N} is an M×N discrete rectangular lattice. Hence, y_s ∈ y and x_s ∈ x are the observed image data and the label of each pixel, respectively. Owing to the regular spatial layout, one can conveniently define the neighborhood system for the MRF, such as the 4-neighborhood and 8-neighborhood systems. However, the local pixel-based neighborhood relationship is limited in describing long-range interactions in the image data, which limits the classification accuracy.
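The two standard neighborhood systems on a rectangular lattice can be written down directly as (row, column) displacement sets; the helper below clips them at the image border. This is a generic illustration, not code from the paper:

```python
# 4-neighborhood: the horizontal and vertical neighbors.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
# 8-neighborhood: the 4-neighborhood plus the four diagonals.
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbours(i, j, h, w, system):
    """Neighbors of pixel (i, j) inside an h x w lattice, border-clipped."""
    return [(i + di, j + dj) for di, dj in system
            if 0 <= i + di < h and 0 <= j + dj < w]
```

Interior pixels get 4 or 8 neighbors, while corner and edge pixels get fewer, which is why MRF energy sums are usually written over valid neighbor pairs only.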
For the region-based MRF model, each element s in S represents a region obtained from the initial over-segmentation of the image, and y_s ∈ y and x_s ∈ x are the feature and the label of region s, respectively. The region-level observation field enhances the ability of the MRF to describe the geometric information of regions, which can improve classification accuracy. However, it also brings two main disadvantages. First, the initial over-segmentation may be imprecise: as noted in (Kuo and Sun, 2010), the approaches used for the initial segmentation, such as the watershed transform, produce some imprecise segments that cannot be redressed in the subsequent processing. Second, the spatial contextual relationship is irregular. Both the pixel- and region-level MRF can be defined in multi-scale space to model long-range interactions; the MSRF is discussed in Section 3.