XVIIIth Congress (Part B3)

IMAGE CLASSIFICATION USING NON-PARAMETRIC 
CLASSIFIERS AND CONTEXTUAL INFORMATION * 
F.J. Cortijo and N. Perez de la Blanca 
Depto. Ciencias de la Computación e I. A. (DECSAI) 
E.T.S. Ingeniería Informática 
Universidad de Granada 
18071 Granada, Spain 
cb@robinson.ugr.es 
Commission III, Working Group 2
KEY WORDS: Classification, Learning, Algorithms, Combination, Accuracy, Pattern Recognition, Contextual Classification 
ABSTRACT 
This paper presents some combinations of classifiers that achieve high-accuracy classifications. Traditionally, the
maximum likelihood classification is used as the initial classification for the contextual correction. We show that using
non-parametric spectral classifiers to obtain the initial classification improves the accuracy of the classification significantly
at a reasonable computational cost. More specifically, we propose applying the contextual correction performed by the ICM
algorithm to several non-parametric spectral classifications.
1 INTRODUCTION 
Supervised classifiers assume the existence of a training set
T composed of n labelled training samples, where the labels
represent informational classes. This information is used for
learning (construction of the classifier) and usually for testing
too. We denote by Ω = {ω1, ω2, ..., ωs} the set of informational
classes and by X the samples used for learning and classification;
we assume they are d-dimensional random variables.
Spectral classifiers use only the spectral information associated
with the pixel to be classified. The thematic map they give as
output has the overall impression of a "noisy" classification.
This effect is more evident when the training sets overlap in
the spectral space [Cortijo et al., 1995]. In this case a
post-processing step over the initial classification is necessary,
because homogeneous regions are expected in the map, just as
they are found in nature. The straightforward solution is to
incorporate into the classifier additional information related
to the spatial neighbourhood (the context) of the pixel to be
classified. That information may be the spectral values of the
spatially neighbouring pixels, their labels, or both kinds of
information combined in some way. When this kind of information
is used for classification, the classifier is known as a
contextual classifier.
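As one simple illustration of feeding neighbourhood information into a classifier, the sketch below (a hypothetical helper, not the paper's method) augments each pixel's spectral vector with the mean spectral vector of its spatial neighbours:

```python
import numpy as np

def augment_with_context(image, window=3):
    """Stack each pixel's spectral vector with the mean spectral
    vector of its window x window spatial neighbourhood (edges are
    handled by replicating the border pixels)."""
    h, w, d = image.shape
    pad = window // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty((h, w, 2 * d))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window, :]
            out[i, j, :d] = image[i, j]                          # spectral part
            out[i, j, d:] = patch.reshape(-1, d).mean(axis=0)    # contextual part
    return out
```

A spectral classifier trained on the augmented 2d-dimensional vectors then sees both kinds of information at once; using neighbour labels instead of neighbour spectra leads to the label-smoothing view described next.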
From a general point of view, a contextual classifier can be
seen as a smoothing process over an initial image of labels,
usually obtained by a spectral classifier. It is well known
that some contextual classifiers reach a local optimum
[Besag, 1986] determined by the initial classification.
Traditionally, the maximum likelihood (ML) classification is
used as the starting point for the smoothing process. We have
shown [Cortijo & Pérez de la Blanca, 1996a] that the ML
classifier is not the best choice when the training sets are
highly overlapped. In this work we propose the use of different
spectral classifications as initial classifications for a
contextual classifier, in order to obtain high-accuracy
classifications at a reasonable computational cost.
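The label-smoothing process can be sketched as a minimal Iterated Conditional Modes loop in the spirit of [Besag, 1986], here with a Potts-style neighbourhood term; the function names and the `beta` weighting are illustrative assumptions, not the paper's exact formulation. Note how the result depends on the initial labels, which is precisely why the choice of spectral classifier matters:

```python
import numpy as np

def icm(labels0, log_likelihood, beta=1.5, n_iter=5):
    """Iterated Conditional Modes over an initial label image.
    labels0[i, j] is the initial (spectral) class of pixel (i, j);
    log_likelihood[i, j, c] is its log-likelihood under class c.
    ICM converges to a local optimum determined by labels0."""
    labels = labels0.copy()
    h, w, n_classes = log_likelihood.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                # count 4-neighbour agreement with each class
                agree = np.zeros(n_classes)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        agree[labels[ni, nj]] += 1
                # maximise spectral evidence plus contextual agreement
                labels[i, j] = np.argmax(log_likelihood[i, j] + beta * agree)
    return labels
```

An isolated pixel whose spectral evidence only weakly favours a deviant class gets relabelled to agree with its neighbours, which is the "noise removal" effect described above.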
In order to achieve a higher accuracy, it seems reasonable to
  
*This work has been supported by the Spanish "Dirección General de
Ciencia y Tecnología" (DGCYT) under grant PB-92-0925-C02-01
International Archives of Photogrammetry and Remote Sensing. Vol. XXXI, Part B3. Vienna 1996 
adopt a high-accuracy spectral classification as the starting
point for the contextual classifier, given that contextual
classifiers only guarantee convergence to a local maximum. Our
proposal consists in adopting different spectral classifications
as starting points, with the aim of improving on the conventional
methodology of contextually correcting the ML classification.
Many other spectral classifiers significantly improve on the
results obtained by the ML classifier, and the classifications
they produce are good candidates as initial classifications for
contextual classifiers. Finally, we must consider the
computational effort required to perform the global process:
spectral classification followed by contextual classification.
For a particular contextual classifier, the contextual
classification effort is obviously the same for any initial
classification, so the global computational effort is determined
by the computing demands of the spectral classification.
This paper is organized as follows. In section 2 we describe
the methodology adopted in this work, together with a brief
description of the classifiers used. In section 3 we describe
the datasets used in this paper, and in section 4 we show the
results obtained. Finally, the main conclusions are summarized
in section 5.
2 METHODOLOGY 
Our objective in this work is to show some combinations of
classifiers that achieve high-accuracy classifications. In order
to determine some interesting combinations of classifiers for
Remote Sensing image classification, we have tested a wide range
of families of spectral and contextual classifiers.
2.1 Spectral Classifiers 
Spectral classifiers are partitioned into two main categories:
a) parametric classifiers, which assume the existence of an
underlying probability distribution of the data, and b) non-
parametric classifiers, which assume nothing about the
probability distribution.
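As a minimal sketch of the two families on synthetic data (all names hypothetical, not the paper's implementation): a parametric Gaussian maximum-likelihood rule, which estimates a mean and covariance per class, versus a non-parametric 1-nearest-neighbour rule, which makes no distributional assumption at all:

```python
import numpy as np

def gaussian_ml_fit(X, y):
    """Parametric: estimate a mean vector and covariance per class."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def gaussian_ml_predict(params, X):
    """Assign each sample to the class with the highest Gaussian
    log-likelihood (quadratic decision boundaries)."""
    scores = []
    for c, (mu, cov) in sorted(params.items()):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = X - mu
        quad = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(-0.5 * (quad + logdet))
    return np.array(sorted(params))[np.argmax(scores, axis=0)]

def nn_predict(Xtrain, ytrain, X):
    """Non-parametric: label of the nearest training sample."""
    d = ((X[:, None, :] - Xtrain[None, :, :]) ** 2).sum(axis=2)
    return ytrain[np.argmin(d, axis=1)]
```

The parametric rule is cheap at prediction time but stands or falls with the Gaussian assumption; the nearest-neighbour rule adapts to any class shape at the cost of storing and scanning the training set.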
The structure of the Bayes classifier is determined, basically,
by the class-conditional probability density functions (pdf's)
p(X|ωi). The objective in the construction of a supervised
parametric classification rule is to characterize the pattern
of each class in
terms of its probability density function. [The remainder of
this section is truncated in the scanned source; the recoverable
fragments discuss the maximum likelihood rule as a quadratic
classifier, its sensitivity to estimation error when the training
set is small relative to the dimensionality of the data,
regularized discriminant analysis (RDA) proposed by Friedman as
an adjustable compromise for the decision boundaries, the
observation that assumed parametric forms of the pdf's are often
unrealistic in practice, and the two non-parametric classifiers
adopted in this work: nearest-neighbour classification and
kernel density estimation [Devijver & Kittler].]
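Of the non-parametric options named in the fragments above, the kernel (Parzen) density estimate can be sketched as follows; this is a minimal illustration under assumed choices (Gaussian kernel, fixed bandwidth h), not the paper's implementation:

```python
import numpy as np

def parzen_class_density(x, Xc, h=1.0):
    """Parzen-window estimate of p(x | class) from the class's
    training samples Xc, using a Gaussian kernel of bandwidth h.
    Bandwidth selection is data-dependent and assumed given here."""
    d = Xc.shape[1]
    norm = (2 * np.pi * h ** 2) ** (d / 2) * len(Xc)
    sq = ((Xc - x) ** 2).sum(axis=1)
    return np.exp(-sq / (2 * h ** 2)).sum() / norm

def parzen_predict(x, class_samples, h=1.0):
    """Assign x to the class with the largest estimated density
    (uniform priors assumed for simplicity)."""
    classes = sorted(class_samples)
    dens = [parzen_class_density(x, class_samples[c], h) for c in classes]
    return classes[int(np.argmax(dens))]
```

Plugging these density estimates into the Bayes rule gives a spectral classifier that needs no distributional assumption, which is exactly the property that makes such classifiers attractive as starting points for the contextual correction.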