International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B7, 2012 
XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia
HYPERSPECTRAL DATA CLASSIFICATION USING FACTOR GRAPHS 
Aliaksei Makarau, Rupert Müller, Gintautas Palubinskas, and Peter Reinartz 
German Aerospace Center (DLR) 
German Remote Sensing Data Center (DFD) and Remote Sensing Technology Institute (IMF)
82234 Oberpfaffenhofen, Germany 
{aliaksei.makarau, rupert.mueller, gintautas.palubinskas, peter.reinartz}@dlr.de
Commission VII/3 
KEY WORDS: Hyperspectral, Classification, Training, Reference Data
ABSTRACT: 
Accurate classification of hyperspectral data is still a challenging task, and new classification methods are being developed to meet the demands of hyperspectral data applications. The objective of this paper is to develop a new method for hyperspectral data classification that ensures classification model properties such as transferability, generalization, and probabilistic interpretation. While factor graphs (undirected graphical models) are unfortunately not widely employed in remote sensing tasks, these models possess important properties, such as the representation of complex systems, for modeling estimation and decision-making tasks.
In this paper we present a new method for hyperspectral data classification using factor graphs. A factor graph (a bipartite graph consisting of variable and factor vertices) allows the factorization of a complex function, leading to the definition of variables (employed to store the input data), latent variables (which bridge an abstract class to the data), and factors (which define prior probabilities for spectral features and abstract classes, map the input data to a mixture of spectral features, and further bridge the mixture to an abstract class). The latent variables play an important role by defining a two-level mapping of the input spectral features to a class. Configuration (learning) of the model on training data allows a parameter set to be calculated that bridges the input data to a class.
The classification algorithm is as follows. Each spectral band is separately pre-processed (by unsupervised clustering) so that it is defined on a finite domain (alphabet), leading to a representation of the data by a multinomial distribution. The represented hyperspectral data is used as input evidence (the evidence vector is selected pixelwise) in a configured factor graph, and inference is run, resulting in a posterior probability. Variational inference (mean field) allows plausible results to be obtained with a low calculation time. Calculating the posterior probability for each class and comparing the probabilities yields the classification. Since the factor graph operates on input data represented on an alphabet (i.e., transferred into a multinomial distribution), the number of training samples can be relatively low.
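As an illustration only (not the authors' implementation), the two steps just described, per-band discretization to a finite alphabet and pixelwise comparison of class posteriors, might look as follows. The array shapes, the alphabet size k = 16, and the posterior(symbols, c) callable standing in for inference in the configured factor graph are all assumptions of this sketch; scikit-learn's KMeans provides the unsupervised clustering.

```python
import numpy as np
from sklearn.cluster import KMeans

def discretize_bands(cube, k=16):
    """Cluster each spectral band separately so that every pixel value
    is replaced by a symbol from the finite alphabet {0, ..., k-1}."""
    rows, cols, bands = cube.shape
    coded = np.empty((rows, cols, bands), dtype=np.int32)
    for b in range(bands):
        band = cube[:, :, b].reshape(-1, 1)           # all pixels of one band
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(band)
        coded[:, :, b] = labels.reshape(rows, cols)   # alphabet-coded band
    return coded

def classify_pixel(symbols, classes, posterior):
    """Evaluate the posterior probability of each class given the pixel's
    alphabet-coded evidence vector and return the most probable class.
    `posterior` is a hypothetical stand-in for factor graph inference."""
    probs = [posterior(symbols, c) for c in classes]
    return classes[int(np.argmax(probs))]
```

The evidence vector for a pixel is then coded[r, c, :]; because it contains symbols rather than raw reflectances, the factors reduce to multinomial tables, which is why relatively few training samples suffice.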
A classification assessment on the Salinas hyperspectral benchmark allowed a competitive classification accuracy to be obtained. Employing training data consisting of 20 randomly selected points per class gave an overall classification accuracy of 85.3294% and a Kappa coefficient of 0.8358. The representation of the input data on a finite domain avoids the curse-of-dimensionality problem, allowing large hyperspectral data sets with a moderately high number of bands to be used.
1 INTRODUCTION 
Development of new methods for single- and multisensor data classification leads to an improvement of data classification and a more precise identification of land-cover classes. Nevertheless, requirements on the methods such as transferability, integration into complex systems, or the ability to be augmented motivate the employment of probabilistic graphical models [Bishop, 2006]. Probabilistic graphical models are becoming an increasingly popular and efficient solution for image annotation and classification, and for defining a semantic link between data and a high-level label [Lienou et al., 2010], [Bratasanu et al., 2011], [Wang et al., 2009].
Factor graphs (FGs) were proposed in 1997 [Kschischang et al., 2001], and since then the application of FGs to signal/image processing and recognition has been gradually emerging. Frey et al. [Frey and Jojic, 2005] compared learning and inference methods for probabilistic graphical models (Bayesian networks, Markov random fields, factor graphs). A factor graph is a convenient tool to define complex systems for data processing and interpretation, to expand such systems, to model complex interactions among a system's parts (e.g., to map features/properties from a low to a high level), to perform approximate inference on data, or to make plausible decisions from incomplete data. Nevertheless, the application of factor graphs, as a more general graphical model type, is not yet widespread in remotely sensed data interpretation.
In this paper a new approach for supervised classification of hyperspectral imagery using a factor graph is proposed. The structure of the factor graph is defined so as to specify prior probabilities for the input data, to map the input data to a latent variable (a mixture of the input features), and to bridge the mixture to a semantic class. Configuring the factor graph on training data allows the parameter set of the graph (the probabilistic functions in the factors) to be estimated, and the employment of a fast inference method (mean field [Frey and Jojic, 2005]) yields a competitive accuracy for hyperspectral data classification.
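For context, mean field variational inference approximates the posterior by a fully factorized distribution and updates one factor at a time. The following is the standard coordinate update in its general form (as given, e.g., in [Bishop, 2006]), not this paper's specific derivation; z denotes the hidden variables and x the evidence:

$$
q(\mathbf{z}) = \prod_i q_i(z_i), \qquad
\ln q_i^{*}(z_i) = \mathbb{E}_{j \neq i}\big[\ln p(\mathbf{x}, \mathbf{z})\big] + \text{const}.
$$

Since each update involves only a single variable's distribution, per-pixel inference remains cheap, which is what makes mean field a fast inference method here.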
2 FACTOR GRAPH MODEL FOR CLASSIFICATION 
A factor graph (an undirected probabilistic model) is a more general graphical model than a Bayesian network or a Markov random field. An FG possesses properties of both Bayesian networks and Markov random fields and allows complex relationships among the parts of a modeled system to be described. A factor graph is a bipartite graph containing two types of nodes: variable nodes (x_i, i = 1..n) and function nodes, or factors, (f_j(x_1, x_2, ..., x_n), j = 1..m), where a variable node x_i takes values on a finite domain (alphabet A_i) [Kschischang et al., 2001]. Figure 1 presents an example of a factor graph.
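To make the definition concrete: a factor graph expresses the factorization of a global function into local functions, which in the notation of [Kschischang et al., 2001] reads

$$
g(x_1, \dots, x_n) = \prod_{j=1}^{m} f_j(X_j), \qquad X_j \subseteq \{x_1, \dots, x_n\},
$$

where X_j is the subset of variables on which factor f_j depends; each factor node is connected exactly to the variable nodes appearing in its argument, which is why the graph is bipartite.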