International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B7. Istanbul 2004 
The objective function that should be minimized, when the Euclidean distance is selected as the dissimilarity measure, can be described as:

$$J = \sum_{i=1}^{c} J_i = \sum_{i=1}^{c} \left( \sum_{k,\, x_k \in G_i} \left\| x_k - c_i \right\|^2 \right) \qquad (1)$$

where $J_i = \sum_{k,\, x_k \in G_i} \left\| x_k - c_i \right\|^2$ is the objective function within group $i$, and $\left\| x_k - c_i \right\|^2$ is a chosen distance measure between a data point $x_k$ and the cluster centre $c_i$.
The partitioned groups are typically defined by a $c \times n$ binary membership matrix $U$, where the element $u_{ij}$ is 1 if the $j$th data point $x_j$ belongs to group $i$, and 0 otherwise. That means:

$$u_{ij} = \begin{cases} 1 & \text{if } \left\| x_j - c_i \right\|^2 \le \left\| x_j - c_k \right\|^2 \text{ for each } k \ne i, \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$
The K-means clustering algorithm consists of the following steps:
Step 1: Initialize the cluster centres $c_i$, $i = 1, \dots, c$. This is typically achieved by randomly selecting $c$ points from among all of the data.
Step 2: Determine the membership matrix U according to 
Equation (2). 
Step 3: Compute the objective function according to Equation 
(1). Stop if either it is below a certain tolerance value or its 
improvement over the previous iteration is below a certain
threshold. 
Step 4: Update the cluster centres. Go to step 2. 
The algorithm is highly sensitive to the randomly selected
initial cluster centres. The K-means algorithm can be
employed multiple times to reduce this effect. More details can 
be found in (Jang 1997). 
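As an illustration of steps 1–4 above, the loop can be sketched as follows. This is not the paper's own code; the `init` argument for supplying fixed starting centres is an addition for reproducibility.

```python
import numpy as np

def kmeans(X, c, init=None, tol=1e-6, max_iter=100, seed=0):
    """Hard K-means per Equations (1)-(2); `init` optionally fixes the start."""
    rng = np.random.default_rng(seed)
    if init is None:
        # Step 1: randomly select c points from among the data as centres
        centres = X[rng.choice(len(X), size=c, replace=False)].copy()
    else:
        centres = np.asarray(init, dtype=float).copy()
    prev_J = np.inf
    for _ in range(max_iter):
        # Step 2: membership (Equation 2) -- each point joins its nearest centre
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)  # (n, c)
        labels = d2.argmin(axis=1)
        # Step 3: objective function (Equation 1)
        J = d2[np.arange(len(X)), labels].sum()
        if J < tol or prev_J - J < tol:
            break
        prev_J = J
        # Step 4: move each centre to the mean of its group
        for i in range(c):
            if np.any(labels == i):
                centres[i] = X[labels == i].mean(axis=0)
    return centres, labels, J
```

Because of the initialization sensitivity noted above, in practice the function would be run several times with different seeds and the result with the lowest objective value kept.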
2.2 Fuzzy C-means clustering algorithm
Fuzzy C-means clustering (FCM), also known as fuzzy 
ISODATA, is a data clustering algorithm in which each data 
point belongs to a cluster to a degree specified by its 
membership grade. Bezdek (1981, 1987) proposed this 
algorithm as an alternative to earlier (hard) K-means clustering. 
FCM partitions a collection of $n$ vectors $x_j$, $j = 1, \dots, n$, into $c$
fuzzy groups, and finds a cluster centre in each group such that 
an objective function of a dissimilarity measure is minimized. 
The major difference between FCM and K-means is that FCM 
employs fuzzy partitioning such that a given data point can 
belong to several groups with the degree of belongingness 
specified by membership grades between 0 and 1. In FCM the membership matrix $U$ is allowed to contain not only 0s and 1s but elements with any value between 0 and 1, subject to the normalization constraint:

$$\sum_{i=1}^{c} u_{ij} = 1, \qquad \forall j = 1, \dots, n \qquad (3)$$
The objective function for FCM is then a generalization of Equation (1), as follows:

$$J(U, c_1, \dots, c_c) = \sum_{i=1}^{c} J_i = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left\| x_j - c_i \right\|^2 \qquad (4)$$
where $u_{ij}$ is between 0 and 1, $c_i$ is the cluster centre of fuzzy group $i$, and $m \in [1, \infty)$ is a weighting exponent. Fuzzy partitioning is carried out through an iterative optimization of the objective function shown above, with the membership $u_{ij}$ and the cluster centres $c_i$ updated by:

$$c_i = \frac{\sum_{j=1}^{n} u_{ij}^{m} x_j}{\sum_{j=1}^{n} u_{ij}^{m}} \qquad (5)$$

$$u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \dfrac{\left\| x_j - c_i \right\|}{\left\| x_j - c_k \right\|} \right)^{2/(m-1)}} \qquad (6)$$
The FCM clustering algorithm is composed of the following steps:
Step 1: Initialize the membership matrix $U$ with random values between 0 and 1 such that the constraints in Equation (3) are satisfied.
Step 2: Calculate the $c$ fuzzy cluster centres $c_i$, $i = 1, \dots, c$, using Equation (5).
Step 3: Compute the cost function (objective function) according to Equation (4). Stop if either it is below a certain tolerance value or its improvement over the previous iteration is below a certain threshold.
Step 4: Compute a new $U$ using Equation (6). Go to step 2.
Alternatively, the cluster centres can be initialized first and the iterative process then carried out. Jang (1997) provides a detailed treatment of fuzzy C-means clustering, including its variants and convergence properties.
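The FCM iteration in steps 1–4 can be sketched as follows. This is an illustrative implementation, not the authors' code; the small epsilon guarding against division by zero when a point coincides with a centre is an added safeguard.

```python
import numpy as np

def fcm(X, c, m=2.0, tol=1e-6, max_iter=100, seed=0):
    """Fuzzy C-means per Equations (3)-(6)."""
    n = len(X)
    rng = np.random.default_rng(seed)
    # Step 1: random membership matrix whose columns satisfy Equation (3)
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)
    prev_J = np.inf
    for _ in range(max_iter):
        Um = U ** m
        # Step 2: fuzzy cluster centres, Equation (5)
        centres = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Step 3: objective function, Equation (4)
        d2 = ((X[None, :, :] - centres[:, None, :]) ** 2).sum(axis=2)  # (c, n)
        J = (Um * d2).sum()
        if J < tol or prev_J - J < tol:
            break
        prev_J = J
        # Step 4: membership update, Equation (6), rewritten as a
        # normalized inverse-distance weighting; eps avoids division by zero
        d = np.sqrt(d2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0, keepdims=True)
    return centres, U, J
```

Note that the update of $U$ in Equation (6) is algebraically equivalent to normalizing the inverse distances $d_{ij}^{-2/(m-1)}$ over the clusters, which is what the last two lines compute.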
2.3 Competitive Learning Networks 
Competitive learning networks are an unsupervised learning method based on the neural network concept. Unlike other neural learning algorithms, in unsupervised classification no external teacher is available, so only the input vectors can be used for learning. In the following, a competitive learning algorithm is described. Competitive learning is a self-organizing property of the neural network.
The method updates the weights only on the basis of the input patterns. Figure 1 presents an example. All input units $i$ are connected to all output units $j$ with weights $w_{ij}$. The number of inputs equals the input dimension, while the number of outputs equals the number of clusters. A cluster's centre position is specified by the weight vector connected to the corresponding output unit.
  
  
  
  
  
  
  
  
  
[Figure: input units $x_1$, $x_2$, $x_3$ fully connected through weights $w_{ij}$ to the output units.]
Figure 1: Competitive learning network 
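For illustration, the network described above can be trained with the common winner-take-all weight update, sketched below. The specific learning rule is an assumption here, since the paper's update equations do not appear in this excerpt.

```python
import numpy as np

def competitive_learning(X, n_clusters, lr=0.1, epochs=20, seed=0):
    """Winner-take-all competitive learning; weight vectors act as centres."""
    rng = np.random.default_rng(seed)
    # each output unit's weight vector is its current cluster-centre estimate,
    # initialized from randomly chosen input patterns
    W = X[rng.choice(len(X), size=n_clusters, replace=False)].copy()
    for _ in range(epochs):
        for x in X:
            # competition: the output unit whose weights are closest wins
            winner = ((W - x) ** 2).sum(axis=1).argmin()
            # only the winner's weight vector moves toward the input pattern
            W[winner] += lr * (x - W[winner])
    return W
```

Since each update is a convex step toward an input pattern, the learned weight vectors remain inside the range of the data and converge toward the cluster centres.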
 
	        
Thank you.