Full text: XVIIIth Congress (Part B2)

2. FEATURE EXTRACTION BY KOHONEN MAP 
The objective of the feature extraction phase is to identify the spectral classes present in the image and to define the set of corresponding samples to be used later in the classification phase.
There is no well-developed theory for feature extraction; most features are application-oriented and are often found by heuristic methods and interactive data analysis.
An important basic principle is that the features must be 
independent of class membership because, by definition, at the 
feature extraction phase the membership to the classes is not 
yet known. This implies that any learning methods used for 
feature extraction should be unsupervised in the sense that the 
target class for each object is unknown (Oja et al. 1994). 
One approach is the use of competitive learning, which results in data clustering. An example is Kohonen's Self-Organizing Map (SOM) (Kohonen 1988).
It is well known that the SOM divides the input space into convex regions, in which a set of reference vectors associates code vectors with regions of the input space. The classification of an image may then be based on the cluster codes that the SOM assigns to the image.
In our approach we generated an auxiliary visual tool from the SOM, named the Kohonen Clusters Map (KCM), which makes it possible to identify the spectral classes present in the image through visualization of the clusters generated by the SOM.
2.1. SOM Description 
The SOM belongs to the class of unsupervised neural networks based on competitive learning, in which only one output neuron, or one local group of neurons, gives the active response to the current input signal at a time. The level of activity indicates the similarity between the input signal vector and the neuron's weight vector. A standard way of expressing similarity is the Euclidean distance between these vectors.
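As an illustration (a minimal NumPy sketch; the function name and array shapes are our own, not from the paper), the most active neuron is the one whose weight vector minimizes the Euclidean distance to the input:

```python
import numpy as np

def best_matching_unit(weights, x):
    """Return the index of the neuron whose weight vector is closest
    (in Euclidean distance) to the input vector x.
    weights: (n_neurons, n_inputs) array; x: (n_inputs,) array."""
    # Squared distances suffice: argmin is the same as for the true distance.
    distances = np.sum((weights - x) ** 2, axis=1)
    return int(np.argmin(distances))

# Example: 3 neurons with 2-dimensional weight vectors
w = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.2]])
x = np.array([0.4, 0.3])
print(best_matching_unit(w, x))  # neuron 2 is the closest
```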
Figure 2: Geometrical representations of neurons for SOM (rectangular and hexagonal grid arrangements of the neurons over the input vectors).
The neuron whose weight vector lies at minimal distance from the input data vector, together with a predefined set of neighbouring neurons, has its weights updated by the learning algorithm. The neighbourhood of each neuron may be defined according to the geometrical form over which the neurons are arranged. Figure 2 depicts two representations proposed by Kohonen (1989): a rectangular grid and a hexagonal grid.
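On a rectangular grid, one simple choice of neighbourhood is the set of all neurons within a given number of grid steps of a chosen neuron (Chebyshev distance); a hypothetical sketch:

```python
def rect_neighbourhood(rows, cols, winner, radius):
    """Indices (row, col) of all neurons within `radius` grid steps of
    `winner` on a rows x cols rectangular grid, clipped at the borders."""
    wr, wc = winner
    return [(r, c)
            for r in range(max(0, wr - radius), min(rows, wr + radius + 1))
            for c in range(max(0, wc - radius), min(cols, wc + radius + 1))]

# Corner neuron on a 4x4 grid: the neighbourhood is clipped to 4 neurons
print(rect_neighbourhood(4, 4, (0, 0), 1))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```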
A short description of the learning algorithm of the SOM is given below:
Step 1: Select a training pattern X = (x_1, x_2, ..., x_N) and present it as an input to the network.
Step 2: Compute the distance d_i between the input vector and the weight vector of each output neuron i, according to:

    d_i = Σ_{j=1}^{N} [x_j(t) - w_{i,j}(t)]²        (1)

where x_j(t) is the j-th input in a given iteration and w_{i,j}(t) is the weight of the connection from input j to neuron i of the output layer.
Step 3: Select the neuron i* with the smallest distance among all neurons, and update the weight vector of i* and its neighbours using the following expression:

    w_{i,j}(t+1) = w_{i,j}(t) + α(t) · [x_j(t) - w_{i,j}(t)],
    for i ∈ N_{i*}, j = 1, 2, ..., N        (2)

where N_{i*} is the set containing i* and its neighbours, and α(t) is the learning rate, usually smaller than 1. This procedure repeats until the weight updates are no longer significant.
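The three steps above can be sketched as follows. This is a minimal illustrative implementation for a one-dimensional array of neurons; the linearly decaying learning rate and shrinking neighbourhood radius are common schedule choices, assumed here rather than taken from the paper:

```python
import numpy as np

def train_som(patterns, n_neurons, epochs=20, alpha0=0.5, radius0=2, seed=0):
    """Train a 1-D SOM on `patterns` (n_samples, N).
    Returns the (n_neurons, N) weight matrix after learning."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_neurons, patterns.shape[1]))
    for epoch in range(epochs):
        # Assumed schedules: learning rate and radius decay linearly to 0.
        alpha = alpha0 * (1 - epoch / epochs)
        radius = max(0, int(round(radius0 * (1 - epoch / epochs))))
        for x in rng.permutation(patterns):          # Step 1: present a pattern
            d = np.sum((weights - x) ** 2, axis=1)   # Step 2: distances (Eq. 1)
            i_star = int(np.argmin(d))               # Step 3: winning neuron
            lo = max(0, i_star - radius)
            hi = min(n_neurons, i_star + radius + 1)
            # Eq. 2 update, applied to the winner and its neighbours
            weights[lo:hi] += alpha * (x - weights[lo:hi])
    return weights

# Toy usage: two artificial clusters of 3-dimensional patterns
patterns = np.vstack([np.full((10, 3), 0.1), np.full((10, 3), 0.9)])
weights = train_som(patterns, n_neurons=4)
print(weights.shape)  # (4, 3)
```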
By the end of the learning process, each neuron or group of neighbouring neurons will represent a distinct pattern among the set of patterns presented as input to the network.
2.2. Kohonen Clusters Map (KCM) 
In this approach, 3x3 pixel windows taken from the original image were used as training patterns for the SOM. These patterns were sampled randomly and uniformly from the whole image and presented as input vectors to the SOM. Since the SOM arranges its weight vectors in rectangular or hexagonal grids, and since input data vectors and weight vectors have the same dimension, an image of the SOM's weight grid can be generated. The resulting grid image, obtained after unsupervised learning by the SOM, was named the Kohonen Clusters Map (KCM). Figure 5 shows an example of a rectangular KCM generated from the test image (Fig. 8).
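As an illustration of how such a grid image could be produced (a sketch under assumptions: a single-band image, a 4x4 rectangular grid, and untrained random weights standing in for the result of SOM learning), each neuron's 9-dimensional weight vector is reshaped back into a 3x3 tile and the tiles are laid out on the grid:

```python
import numpy as np

def sample_windows(image, n_samples, seed=0):
    """Randomly sample flattened 3x3 pixel windows from a 2-D image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rows = rng.integers(0, h - 2, n_samples)  # top-left corners of windows
    cols = rng.integers(0, w - 2, n_samples)
    return np.stack([image[r:r+3, c:c+3].ravel() for r, c in zip(rows, cols)])

def kcm_image(weights, grid_rows, grid_cols):
    """Lay out each neuron's 9-dim weight vector as a 3x3 tile on a
    rectangular grid, producing a (3*grid_rows, 3*grid_cols) KCM image."""
    tiles = weights.reshape(grid_rows, grid_cols, 3, 3)
    return tiles.transpose(0, 2, 1, 3).reshape(grid_rows * 3, grid_cols * 3)

# Example with an untrained 4x4 grid of neurons (in practice the weights
# would come from SOM training on the sampled windows):
img = np.arange(100.0).reshape(10, 10)
patterns = sample_windows(img, 50)
weights = np.random.default_rng(1).random((16, 9))
print(kcm_image(weights, 4, 4).shape)  # (12, 12)
```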
The KCM provides a visual auxiliary tool for the task of identifying and selecting the spectral classes present in the image and their corresponding training samples, which will be used afterwards in the neural classification module.
International Archives of Photogrammetry and Remote Sensing. Vol. XXXI, Part B2. Vienna 1996 
The KCM supports the following tasks: [the remainder of this passage is truncated in the source scan].

2.3. Parallel Implementation

[This subsection is truncated in the source scan; the surviving fragments refer to a parallel implementation of the SOM using PVM on a heterogeneous network of workstations to reduce processing time.]