The three extracted independent component features are recorded as T3 here.
By means of the feature extraction methods described above, this paper obtains three major features of the fused images, i.e. the spectral feature (T1), the TICA texture feature (T2), and the ICA linear transform feature (T3).
4.3 Multi-Classifier Construction 
4.3.1 Principle of Multi-Classifier System: Classification is the process of assigning input patterns to classes or categories of the same type. Image classification requires the estimation of the posterior probability of each class. Such estimates can be obtained with supervised and unsupervised classification algorithms.
The output of a classifier can take three forms: the abstract level, the rank level, and the measurement level. In the past few years, significant effort has been devoted to developing effective algorithms for combining different types of classifiers in order to exploit the complementary information that they provide (Bruzzone, 2001; Ranawana, 2006). For a multi-classifier system to be successful, the individual classifiers should therefore have good individual performance and be sufficiently different from each other. A multi-classifier can be constructed in a parallel, stacked, or combined manner. Once the individual classifiers have been designed and implemented, the next important task is the combination of the results obtained from each individual classifier. Combination strategies include linear combination methods, non-linear combination methods, statistical methods, and computational intelligence methods.
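The linear combination strategy mentioned above can be sketched as follows. This is a minimal Python illustration (the paper names no implementation); the function name and the equal default weights are assumptions, not the paper's method.

```python
# Minimal sketch of linearly combining classifier posteriors.
# Each classifier contributes an (n_samples, n_classes) probability array.
import numpy as np

def combine_linear(posteriors, weights=None):
    """posteriors: list of (n_samples, n_classes) arrays, one per classifier."""
    stacked = np.stack(posteriors)                  # (n_classifiers, n_samples, n_classes)
    if weights is None:                             # assumed: equal weights by default
        weights = np.full(len(posteriors), 1.0 / len(posteriors))
    fused = np.tensordot(weights, stacked, axes=1)  # weighted sum over classifiers
    return fused.argmax(axis=1)                     # fused class label per sample
```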
The success of a multi-classifier system depends on three key factors: the proper selection of diverse classifiers, the topology, and the combination methodology. The main purpose of multi-classifier combination is to exploit the differences among classifiers so as to enhance the generalization ability of the individual classifiers and obtain better classification results. This paper makes a useful attempt at a multi-classifier system and proposes a multi-classifier fusion method based on extensions of ICA.
4.3.2 Classifier Selection: Corresponding to the three different features of the fused images, this paper makes a targeted choice of the following classifiers: the K-NN classifier, the BP neural network classifier, the decision tree classifier, and multi-category SVMs.
1. K-nearest neighbor classifier, K-NN. The K-NN follows a very simple but effective strategy as a learner: it keeps all training instances. A classification is made by measuring the distances from the test instance to all training instances, most commonly with the Euclidean distance. From these distances a distance matrix between all pairs of points is constructed, and the k closest neighbors of the test instance are found by analyzing this matrix. These k closest data points are then examined to determine which class label is most common among them, and finally the majority class among the k nearest instances is assigned to the test instance. The K-NN classifier is denoted by C1 here.
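A minimal sketch of such a K-NN classifier follows, using scikit-learn (an assumption; the paper names no implementation). The data shapes, class count, and k=5 are illustrative.

```python
# Minimal K-NN sketch: store the training set, vote among the k nearest.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.random.rand(100, 3)          # e.g. 3-D spectral feature vectors (illustrative)
y_train = np.random.randint(0, 4, 100)    # 4 hypothetical land-cover classes
X_test = np.random.rand(10, 3)

c1 = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
c1.fit(X_train, y_train)                  # K-NN simply stores the training instances
labels = c1.predict(X_test)               # majority class among the 5 nearest neighbors
posterior = c1.predict_proba(X_test)      # neighbor fraction per class, usable as posterior
```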
2. BP neural network classifier. The back-propagation network (BP network) is a type of neural network. In the forward pass, the input pattern is processed layer by layer through the hidden units, from the input layer to the output layer, with the neural state of each layer affecting only the state of the next layer. If the expected output is not obtained at the output layer, the network switches to back-propagation: the error signal is sent back along the original connection paths, and the error is minimized by amending the weights of each neuron. This paper chooses a BP neural network with one hidden layer and denotes it by C2.
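A minimal sketch of a one-hidden-layer BP network is given below, realized with scikit-learn's MLPClassifier as an assumed stand-in; the hidden-layer size, solver, learning rate, and iteration count are illustrative, not from the paper.

```python
# Minimal one-hidden-layer BP network sketch: forward pass plus
# error back-propagation via stochastic gradient descent.
import numpy as np
from sklearn.neural_network import MLPClassifier

X_train = np.random.rand(200, 8)          # e.g. TICA texture feature vectors (illustrative)
y_train = np.random.randint(0, 4, 200)    # 4 hypothetical classes

c2 = MLPClassifier(hidden_layer_sizes=(16,),  # single hidden layer (size assumed)
                   activation="logistic",     # sigmoid units, as in the classic BP network
                   solver="sgd",              # weights amended by back-propagated error
                   learning_rate_init=0.1,
                   max_iter=2000)
c2.fit(X_train, y_train)                      # forward pass + back-propagation training
posterior = c2.predict_proba(X_train[:5])     # class posteriors for the fusion stage
```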
3. Decision tree classifier. The decision tree classifier is a set of hierarchical rules that are successively applied to the input data. These rules are thresholds used to split the data into two groups, and each node is chosen so that its descendant nodes are purer in terms of classes. Decision tree rules are explicit and allow the identification of the features that are relevant for distinguishing specific classes, so the analysis is reduced to the most useful layers. The structure of the decision tree can also reveal hierarchical and nonlinear relationships among the input layers; these relationships often result in a given class being described by several terminal nodes. Terminal nodes are the final decision, assigning a sample to a certain class. Here the decision tree classifier is denoted by C3.
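The following minimal sketch illustrates such a tree of binary threshold rules, again using scikit-learn as an assumed implementation; the depth limit and data are illustrative.

```python
# Minimal decision tree sketch: hierarchical binary threshold splits
# whose rules can be printed and inspected explicitly.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X_train = np.random.rand(200, 3)          # illustrative feature vectors
y_train = np.random.randint(0, 4, 200)    # 4 hypothetical classes

c3 = DecisionTreeClassifier(max_depth=4)  # each split thresholds one feature
c3.fit(X_train, y_train)
print(export_text(c3))                    # the rules are explicit and human-readable
posterior = c3.predict_proba(X_train[:5]) # class posteriors at the reached leaf
```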
4. Support vector machines, SVMs. Support vector machines (SVMs) are a kind of machine learning method based on statistical learning theory (Vapnik, 2000). The basic idea of applying SVMs to pattern classification can be stated briefly as follows: first, map the input vectors into a feature space, either linearly or non-linearly, depending on the selection of the kernel function; then, in that feature space, construct a hyperplane that separates two classes. This can be extended to the multi-class case.
The four commonly used kernel functions in SVMs are the linear, polynomial, radial basis, and sigmoid functions. SVMs have the important computational advantage that no non-convex optimization is involved. Moreover, their performance is related to the margin with which they separate the data. As a comparatively new classification technique, SVMs outperform many conventional approaches in various applications. Here the SVM classifier is denoted by C4.
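A minimal sketch with the four kernels listed above follows, using scikit-learn's SVC as an assumed implementation; SVC handles the multi-class extension internally (one-vs-one), and all data and parameters are illustrative.

```python
# Minimal multi-class SVM sketch, cycling through the four common kernels.
import numpy as np
from sklearn.svm import SVC

X_train = np.random.rand(200, 3)          # illustrative feature vectors
y_train = np.random.randint(0, 4, 200)    # 4 hypothetical classes

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    c4 = SVC(kernel=kernel, probability=True)  # probability=True enables posterior estimates
    c4.fit(X_train, y_train)                   # multi-class handled one-vs-one internally
    posterior = c4.predict_proba(X_train[:5])  # posteriors needed for the fusion stage
```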
4.3.3 Strategy of Multi-Classifier Fusion: Corresponding to the three different features extracted from the fused images and the four selected classifiers, this paper first constructs a parallel topology of multi-classifiers; detailed descriptions follow.
For the spectral features T1 in the Ohta color space, the K-NN C1 and the decision tree C3 are chosen and combined in a parallel topology: all the feature vectors are fed into the two classifiers, and the respective classification results are obtained in parallel.
For the texture features of the TICA basis, the paper chooses the K-NN C1 and the BP neural network classifier C2 and combines them in the same parallel style, yielding two respective classification results.
For the independent component features T3, the K-NN C1 is chosen to obtain the corresponding classification results; a sketch of this parallel routing is given below.
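The following minimal sketch shows this parallel topology. The feature-classifier pairing (T1 to C1 and C3, T2 to C1 and C2, T3 to C1) is from the paper; the function, variable names, and usage are illustrative assumptions, and each pair needs its own classifier instance.

```python
# Minimal sketch of the parallel topology: each (feature set, classifier)
# pair is trained on the same labeled training areas, and every pair
# contributes one posterior-probability array for the later fusion step.
def run_parallel(pairs, y_train):
    """pairs: list of (feature_array, classifier) tuples sharing labels y_train.
    Returns one (n_samples, n_classes) posterior array per pair."""
    posteriors = []
    for X, clf in pairs:
        clf.fit(X, y_train)                    # train on the chosen training areas
        posteriors.append(clf.predict_proba(X))
    return posteriors

# Hypothetical usage, reusing the classifier sketches above:
# posteriors = run_parallel([(T1, knn_a), (T1, tree), (T2, knn_b),
#                            (T2, mlp), (T3, knn_c)], y_train)
```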
A number of training areas for the different classes are chosen in the study images; the spectral and ICA/TICA image features are extracted with the methods above, all the chosen classifiers are trained, and the trained classifiers are then applied to classify every pixel of the whole fused images. Through the different classifiers, the corresponding posterior probabilities of the different