Actes du Symposium International de la Commission VII de la Société Internationale de Photogrammétrie et Télédétection (Volume 1)

MULTISPECTRAL IMAGE CLASSIFICATION 
BY THE SEPARATING HYPERPLANES METHOD: A COMPUTER PROGRAM 
S. L. EKENOBI
Faculty of Engineering 
University of Lagos, Lagos, Nigeria 
ABSTRACT 
Whereas the well-known Maximum Likelihood multispectral image classification algorithm assigns a pixel (picture element) to the class whose middle "point" (mean values vector) in the feature space the pixel lies closest to, the Separating Hyperplanes algorithm breaks the whole feature space up into "boxes", each of which encloses a class. Also, unlike the former, the latter algorithm is not influenced by the statistical properties of the classification data.
Land-use classifications by both methods with Landsat data of an area near the city of Hannover, West Germany, and of an area near the city of Jos, Nigeria, are used to demonstrate the advantages of the Separating Hyperplanes algorithm over the Maximum Likelihood algorithm.
  
1. INTRODUCTION 
The goodness of a digital multispectral image classification depends not only on the quality and quantity of the "ground truth" (or training samples) but also on the suitability of the algorithm used. The Maximum Likelihood algorithm is a powerful classifier, but it has certain weaknesses and should not be viewed as universal. Its weaknesses have nothing to do with the soundness of the algorithm itself, but with the classification data: the Gaussian distribution, on which the algorithm is based, is violated by multispectral scanner (MSS) data.
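For reference, the standard Gaussian form of the Maximum Likelihood decision rule (not reproduced in the original text) assigns a pixel with feature vector $\mathbf{x}$ to the class $\omega_i$ that maximizes the discriminant

$$ g_i(\mathbf{x}) \;=\; \ln p(\omega_i) \;-\; \tfrac{1}{2}\ln\lvert\boldsymbol{\Sigma}_i\rvert \;-\; \tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu}_i)^{\mathsf T}\boldsymbol{\Sigma}_i^{-1}(\mathbf{x}-\boldsymbol{\mu}_i), $$

where $\boldsymbol{\mu}_i$ and $\boldsymbol{\Sigma}_i$ are the mean vector and covariance matrix of class $\omega_i$ estimated from the training samples, and $p(\omega_i)$ is the prior probability of the class. The dependence on $\boldsymbol{\Sigma}_i^{-1}$ and $\ln\lvert\boldsymbol{\Sigma}_i\rvert$ is what ties the rule to the statistical properties of the data discussed below.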
Data may be fully normally (Gaussian) distributed or poorly distributed. The distribution may be so poor that some of the object class covariance matrices are singular and therefore uninvertible (which means a dead-end for the whole classification). It is convenient to use the term "level of variance" to refer to the level of distribution of the data of an object class, where the highest level of variance corresponds to fully normally distributed data. Relative levels of variance of object classes behave as, and are directly proportional to, "relative weights" of the classes in the classification process. This is definitely not a desirable situation in any classification job, since a rare class could, by virtue of the high level of its variance, be shown as important on the classification map.
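As a minimal illustration of the dead-end mentioned above (the data and names are hypothetical and not taken from the paper), a class whose training samples show no variation in one band yields a singular covariance matrix, so the inverse required by the Maximum Likelihood discriminant cannot be computed:

    import numpy as np

    # Hypothetical training samples for one object class in four MSS bands;
    # the fourth band is constant, so the 4x4 covariance matrix is rank-deficient.
    samples = np.array([[30.0, 25.0, 60.0, 55.0],
                        [32.0, 26.0, 62.0, 55.0],
                        [29.0, 24.0, 59.0, 55.0],
                        [31.0, 27.0, 61.0, 55.0]])

    cov = np.cov(samples, rowvar=False)
    print(np.linalg.det(cov))       # the determinant is zero (to machine precision)
    try:
        np.linalg.inv(cov)          # the inversion needed by the ML rule fails
    except np.linalg.LinAlgError:
        print("singular covariance matrix: Maximum Likelihood cannot be evaluated")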
Algorithms which assume the Gaussian distribution prefer certain combinations of object classes to others. For example, classifications of vegetation types will always be successful, and this fact is clearly documented in the literature: vegetative covers yield very high variance levels. When variance levels vary considerably, that is, when classes with poorly distributed data are included, the problems which arise display themselves in the misclassification of the impure (or mixed) pixels, which always exist in large numbers (Ekenobi 1981).
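To make the contrast between the two decision rules summarized in the abstract concrete, the following sketch compares a nearest-mean assignment with an assignment by axis-aligned "boxes" bounded by hyperplanes; the bounds, data, and function names are illustrative assumptions, not the program described in this paper.

    import numpy as np

    def nearest_mean_class(pixel, class_means):
        # Assign the pixel to the class whose mean vector it lies closest to
        # (the simplified view of the Maximum Likelihood rule given in the abstract).
        distances = [np.linalg.norm(pixel - mu) for mu in class_means]
        return int(np.argmin(distances))

    def box_class(pixel, class_boxes):
        # Assign the pixel to the first class whose "box" (a lower and an upper
        # bound per spectral band, i.e. a region enclosed by hyperplanes) contains it.
        for label, (lower, upper) in enumerate(class_boxes):
            if np.all(pixel >= lower) and np.all(pixel <= upper):
                return label
        return None  # the pixel falls outside every box

    # Illustrative 4-band pixel and two hypothetical classes
    pixel = np.array([30.0, 25.0, 60.0, 55.0])
    class_means = [np.array([28.0, 24.0, 58.0, 52.0]),
                   np.array([50.0, 45.0, 20.0, 15.0])]
    class_boxes = [(np.array([20.0, 18.0, 50.0, 45.0]), np.array([40.0, 32.0, 70.0, 62.0])),
                   (np.array([42.0, 38.0, 10.0, 8.0]),  np.array([60.0, 52.0, 30.0, 25.0]))]

    print(nearest_mean_class(pixel, class_means))   # prints 0
    print(box_class(pixel, class_boxes))            # prints 0

Unlike the nearest-mean rule, the box rule involves no covariance matrices, which is one way to read the abstract's remark that the Separating Hyperplanes algorithm is not influenced by the statistical properties of the classification data.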